id,category,text,title,published,author,link,primary_category
29425,em,"This paper provides an introduction to structural estimation methods for
matching markets with transferable utility.",Structural Estimation of Matching Markets with Transferable Utility,2021-09-16 15:29:14,"Alfred Galichon, Bernard Salanié","http://arxiv.org/abs/2109.07932v1, http://arxiv.org/pdf/2109.07932v1",econ.EM
29133,em,"In this paper, we propose a model which simulates odds distributions of
pari-mutuel betting system under two hypotheses on the behavior of bettors: 1.
The amount of bets increases very rapidly as the deadline for betting comes
near. 2. Each bettor bets on a horse which gives the largest expectation value
of the benefit. The results can be interpreted as showing that such efficient behaviors do not serve to extinguish the FL bias but even produce a stronger FL bias.",Efficiency in Micro-Behaviors and FL Bias,2018-05-11 05:17:42,"Kurihara Kazutaka, Yohei Tutiya","http://arxiv.org/abs/1805.04225v1, http://arxiv.org/pdf/1805.04225v1",econ.EM
29931,em,"This paper derives the efficiency bound for estimating the parameters of
dynamic panel data models in the presence of an increasing number of incidental
parameters. We study the efficiency problem by formulating the dynamic panel as
a simultaneous equations system, and show that the quasi-maximum likelihood
estimator (QMLE) applied to the system achieves the efficiency bound.
Comparison of QMLE with fixed effects estimators is made.",Efficiency of QMLE for dynamic panel data models with interactive effects,2023-12-13 06:56:34,Jushan Bai,"http://arxiv.org/abs/2312.07881v1, http://arxiv.org/pdf/2312.07881v1",econ.EM
28940,em,"Endogeneity and missing data are common issues in empirical research. We
investigate how both jointly affect inference on causal parameters.
Conventional methods to estimate the variance, which treat the imputed data as
if it was observed in the first place, are not reliable. We derive the
asymptotic variance and propose a heteroskedasticity robust variance estimator
for two-stage least squares which accounts for the imputation. Monte Carlo
simulations support our theoretical findings.",On the Effect of Imputation on the 2SLS Variance,2019-03-26 19:42:59,"Helmut Farbmacher, Alexander Kann","http://arxiv.org/abs/1903.11004v1, http://arxiv.org/pdf/1903.11004v1",econ.EM
28947,em,"We propose a model selection criterion to detect purely causal from purely
noncausal models in the framework of quantile autoregressions (QAR). We also
present asymptotics for the i.i.d. case with regularly varying distributed
innovations in QAR. This new modelling perspective is appealing for
investigating the presence of bubbles in economic and financial time series,
and is an alternative to approximate maximum likelihood methods. We illustrate
our analysis using hyperinflation episodes in Latin American countries.",Identification of Noncausal Models by Quantile Autoregressions,2019-04-11 23:49:57,"Alain Hecq, Li Sun","http://arxiv.org/abs/1904.05952v1, http://arxiv.org/pdf/1904.05952v1",econ.EM
28965,em,"Complex functions have multiple uses in various fields of study, so analyze
their characteristics it is of extensive interest to other sciences. This work
begins with a particular class of rational functions of a complex variable;
over this is deduced two elementals properties concerning the residues and is
proposed one results which establishes one lower bound for the p-norm of the
residues vector. Applications to the autoregressive processes are presented and
the exemplifications are indicated in historical data of electric generation
and econometric series.",On the residues vectors of a rational class of complex functions. Application to autoregressive processes,2019-07-12 23:46:56,"Guillermo Daniel Scheidereiter, Omar Roberto Faure","http://arxiv.org/abs/1907.05949v1, http://arxiv.org/pdf/1907.05949v1",econ.EM
29080,em,"Many econometric models can be analyzed as finite mixtures. We focus on
two-component mixtures and we show that they are nonparametrically point
identified by a combination of an exclusion restriction and tail restrictions.
Our identification analysis suggests simple closed-form estimators of the
component distributions and mixing proportions, as well as a specification
test. We derive their asymptotic properties using results on tail empirical
processes and we present a simulation study that documents their finite-sample
performance.",Inference on two component mixtures under tail restrictions,2021-02-11 22:27:47,"Marc Henry, Koen Jochmans, Bernard Salanié","http://arxiv.org/abs/2102.06232v1, http://arxiv.org/pdf/2102.06232v1",econ.EM
29093,em,"This paper establishes an extended representation theorem for unit-root VARs.
A specific algebraic technique is devised to recover stationarity from the
solution of the model in the form of a cointegrating transformation. Closed
forms of the results of interest are derived for integrated processes up to the
4-th order. An extension to higher-order processes turns out to be within reach of an induction argument.",Cointegrated Solutions of Unit-Root VARs: An Extended Representation Theorem,2021-02-21 18:28:20,"Mario Faliva, Maria Grazia Zoia","http://arxiv.org/abs/2102.10626v1, http://arxiv.org/pdf/2102.10626v1",econ.EM
29015,em,"The randomization inference literature studying randomized controlled trials
(RCTs) assumes that units' potential outcomes are deterministic. This
assumption is unlikely to hold, as stochastic shocks may take place during the
experiment. In this paper, we consider the case of an RCT with individual-level
treatment assignment, and we allow for individual-level and cluster-level (e.g.
village-level) shocks. We show that one can draw inference on the ATE
conditional on the realizations of the cluster-level shocks, using
heteroskedasticity-robust standard errors, or on the ATE netted out of those
shocks, using cluster-robust standard errors.",Clustering and External Validity in Randomized Controlled Trials,2019-12-02 22:30:25,"Antoine Deeb, Clément de Chaisemartin","http://arxiv.org/abs/1912.01052v7, http://arxiv.org/pdf/1912.01052v7",econ.EM
28777,em,"This article reviews recent advances in fixed effect estimation of panel data
models for long panels, where the number of time periods is relatively large.
We focus on semiparametric models with unobserved individual and time effects,
where the distribution of the outcome variable conditional on covariates and
unobserved effects is specified parametrically, while the distribution of the
unobserved effects is left unrestricted. Compared to existing reviews on long
panels (Arellano and Hahn 2007; a section in Arellano and Bonhomme 2011) we
discuss models with both individual and time effects, split-panel Jackknife
bias corrections, unbalanced panels, distribution and quantile effects, and
other extensions. Understanding and correcting the incidental parameter bias
caused by the estimation of many fixed effects is our main focus, and the
unifying theme is that the order of this bias is given by the simple formula
p/n for all models discussed, with p the number of estimated parameters and n
the total sample size.",Fixed Effect Estimation of Large T Panel Data Models,2017-09-26 15:46:13,"Iván Fernández-Val, Martin Weidner","http://arxiv.org/abs/1709.08980v2, http://arxiv.org/pdf/1709.08980v2",econ.EM
28778,em,"This paper considers the identification of treatment effects on conditional
transition probabilities. We show that even under random assignment only the
instantaneous average treatment effect is point identified. Since treated and
control units drop out at different rates, randomization only ensures the
comparability of treatment and controls at the time of randomization, so that
long-run average treatment effects are not point identified. Instead we derive
informative bounds on these average treatment effects. Our bounds do not impose
(semi)parametric restrictions, for example, proportional hazards. We also
explore various assumptions such as monotone treatment response, common shocks
and positively correlated outcomes that tighten the bounds.",Bounds On Treatment Effects On Transitions,2017-09-26 15:46:40,"Johan Vikström, Geert Ridder, Martin Weidner","http://arxiv.org/abs/1709.08981v1, http://arxiv.org/pdf/1709.08981v1",econ.EM
28779,em,"We propose an inference procedure for estimators defined by mathematical
programming problems, focusing on the important special cases of linear
programming (LP) and quadratic programming (QP). In these settings, the
coefficients in both the objective function and the constraints of the
mathematical programming problem may be estimated from data and hence involve
sampling error. Our inference approach exploits the characterization of the
solutions to these programming problems by complementarity conditions; by doing
so, we can transform the problem of doing inference on the solution of a
constrained optimization problem (a non-standard inference problem) into one
involving inference based on a set of inequalities with pre-estimated
coefficients, which is much better understood. We evaluate the performance of
our procedure in several Monte Carlo simulations and an empirical application
to the classic portfolio selection problem in finance.",Inference on Estimators defined by Mathematical Programming,2017-09-26 19:24:52,"Yu-Wei Hsieh, Xiaoxia Shi, Matthew Shum","http://arxiv.org/abs/1709.09115v1, http://arxiv.org/pdf/1709.09115v1",econ.EM
28780,em,"We analyze the empirical content of the Roy model, stripped down to its
essential features, namely sector specific unobserved heterogeneity and
self-selection on the basis of potential outcomes. We characterize sharp bounds
on the joint distribution of potential outcomes and testable implications of
the Roy self-selection model under an instrumental constraint on the joint
distribution of potential outcomes we call stochastically monotone instrumental
variable (SMIV). We show that testing the Roy model selection is equivalent to
testing stochastic monotonicity of observed outcomes relative to the
instrument. We apply our sharp bounds to the derivation of a measure of
departure from Roy self-selection to identify values of observable
characteristics that induce the most costly misallocation of talent and sector
and are therefore prime targets for intervention. Special emphasis is put on
the case of binary outcomes, which has received little attention in the
literature to date. For richer sets of outcomes, we emphasize the distinction
between pointwise sharp bounds and functional sharp bounds, and its importance,
when constructing sharp bounds on functional features, such as inequality
measures. We analyze a Roy model of college major choice in Canada and Germany
within this framework, and we take a new look at the under-representation of
women in STEM.",Sharp bounds and testability of a Roy model of STEM major choices,2017-09-27 02:25:35,"Ismael Mourifie, Marc Henry, Romuald Meango","http://arxiv.org/abs/1709.09284v2, http://arxiv.org/pdf/1709.09284v2",econ.EM
28781,em,"The ongoing net neutrality debate has generated a lot of heated discussions
on whether or not monetary interactions should be regulated between content and
access providers. Among the several topics discussed, `differential pricing'
has recently received attention due to `zero-rating' platforms proposed by some
service providers. In the differential pricing scheme, Internet Service
Providers (ISPs) can exempt data access charges for content from certain CPs (zero-rated), while content from other CPs receives no exemption. This allows the
possibility for Content Providers (CPs) to make `sponsorship' agreements to
zero-rate their content and attract more user traffic. In this paper, we study
the effect of differential pricing on various players in the Internet. We first
consider a model with a monopolistic ISP and multiple CPs where users select
CPs based on the quality of service (QoS) and data access charges. We show that
in a differential pricing regime 1) a CP offering low QoS can have higher surplus than a CP offering better QoS through sponsorships, and 2) overall QoS (mean delay) for end users can degrade under differential pricing schemes. In
the oligopolistic market with multiple ISPs, users tend to select the ISP with the lowest charges, resulting in the same type of conclusions as in the monopolistic market. We then study how differential pricing affects the revenue of ISPs.",Zero-rating of Content and its Effect on the Quality of Service in the Internet,2017-09-27 07:51:32,"Manjesh K. Hanawal, Fehmina Malik, Yezekael Hayel","http://arxiv.org/abs/1709.09334v2, http://arxiv.org/pdf/1709.09334v2",econ.EM
28782,em,"The uncertainty and robustness of Computable General Equilibrium models can
be assessed by conducting a Systematic Sensitivity Analysis. Different methods
have been used in the literature for SSA of CGE models such as Gaussian
Quadrature and Monte Carlo methods. This paper explores the use of Quasi-random
Monte Carlo methods based on the Halton and Sobol' sequences as means to
improve the efficiency over regular Monte Carlo SSA, thus reducing the
computational requirements of the SSA. The findings suggest that by using
low-discrepancy sequences, the number of simulations required by the regular MC
SSA methods can be notably reduced, hence lowering the computational time
required for SSA of CGE models.",Quasi-random Monte Carlo application in CGE systematic sensitivity analysis,2017-09-28 01:54:30,Theodoros Chatzivasileiadis,"http://arxiv.org/abs/1709.09755v1, http://arxiv.org/pdf/1709.09755v1",econ.EM
28783,em,"We propose a method of estimating the linear-in-means model of peer effects
in which the peer group, defined by a social network, is endogenous in the
outcome equation for peer effects. Endogeneity is due to unobservable
individual characteristics that influence both link formation in the network
and the outcome of interest. We propose two estimators of the peer effect
equation that control for the endogeneity of the social connections using a
control function approach. We leave the functional form of the control function
unspecified and treat it as unknown. To estimate the model, we use a sieve
semiparametric approach, and we establish asymptotics of the semiparametric
estimator.",Estimation of Peer Effects in Endogenous Social Networks: Control Function Approach,2017-09-28 18:41:48,"Ida Johnsson, Hyungsik Roger Moon","http://arxiv.org/abs/1709.10024v3, http://arxiv.org/pdf/1709.10024v3",econ.EM
28784,em,"This paper considers the problem of forecasting a collection of short time
series using cross sectional information in panel data. We construct point
predictors using Tweedie's formula for the posterior mean of heterogeneous
coefficients under a correlated random effects distribution. This formula
utilizes cross-sectional information to transform the unit-specific (quasi)
maximum likelihood estimator into an approximation of the posterior mean under
a prior distribution that equals the population distribution of the random
coefficients. We show that the risk of a predictor based on a non-parametric
estimate of the Tweedie correction is asymptotically equivalent to the risk of
a predictor that treats the correlated-random-effects distribution as known
(ratio-optimality). Our empirical Bayes predictor performs well compared to
various competitors in a Monte Carlo study. In an empirical application we use
the predictor to forecast revenues for a large panel of bank holding companies
and compare forecasts that condition on actual and severely adverse
macroeconomic conditions.",Forecasting with Dynamic Panel Data Models,2017-09-29 01:46:48,"Laura Liu, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/1709.10193v1, http://arxiv.org/pdf/1709.10193v1",econ.EM
28785,em,"There is a fast growing literature that set-identifies structural vector
autoregressions (SVARs) by imposing sign restrictions on the responses of a
subset of the endogenous variables to a particular structural shock
(sign-restricted SVARs). Most methods that have been used to construct
pointwise coverage bands for impulse responses of sign-restricted SVARs are
justified only from a Bayesian perspective. This paper demonstrates how to
formulate the inference problem for sign-restricted SVARs within a
moment-inequality framework. In particular, it develops methods of constructing
confidence bands for impulse response functions of sign-restricted SVARs that
are valid from a frequentist perspective. The paper also provides a comparison
of frequentist and Bayesian coverage bands in the context of an empirical
application - the former can be substantially wider than the latter.",Inference for VARs Identified with Sign Restrictions,2017-09-29 02:25:13,"Eleonora Granziera, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/1709.10196v2, http://arxiv.org/pdf/1709.10196v2",econ.EM
28786,em,"We systematically investigate the effect heterogeneity of job search
programmes for unemployed workers. To investigate possibly heterogeneous
employment effects, we combine non-experimental causal empirical models with
Lasso-type estimators. The empirical analyses are based on rich administrative
data from Swiss social security records. We find considerable heterogeneities
only during the first six months after the start of training. Consistent with
previous results of the literature, unemployed persons with fewer employment
opportunities profit more from participating in these programmes. Furthermore,
we also document heterogeneous employment effects by residence status. Finally,
we show the potential of easy-to-implement programme participation rules for
improving average employment effects of these active labour market programmes.",Heterogeneous Employment Effects of Job Search Programmes: A Machine Learning Approach,2017-09-29 11:21:08,"Michael Knaus, Michael Lechner, Anthony Strittmatter","http://dx.doi.org/10.3368/jhr.57.2.0718-9615R1, http://arxiv.org/abs/1709.10279v2, http://arxiv.org/pdf/1709.10279v2",econ.EM
28787,em,"Dynamic contracts with multiple agents is a classical decentralized
decision-making problem with asymmetric information. In this paper, we extend
the single-agent dynamic incentive contract model in continuous-time to a
multi-agent scheme in finite horizon and allow the terminal reward to be
dependent on the history of actions and incentives. We first derive a set of
sufficient conditions for the existence of optimal contracts in the most
general setting and conditions under which they form a Nash equilibrium. Then
we show that the principal's problem can be converted to solving a Hamilton-Jacobi-Bellman (HJB) equation requiring a static Nash equilibrium.
Finally, we provide a framework to solve this problem by solving partial
differential equations (PDE) derived from backward stochastic differential
equations (BSDE).",A Note on the Multi-Agent Contracts in Continuous Time,2017-10-01 20:07:08,"Qi Luo, Romesh Saigal","http://arxiv.org/abs/1710.00377v2, http://arxiv.org/pdf/1710.00377v2",econ.EM
28788,em,"This paper presents a new estimator of the intercept of a linear regression
model in cases where the outcome variable is observed subject to a selection rule. In this context the intercept is often of inherent interest; for example,
in a program evaluation context, the difference between the intercepts in
outcome equations for participants and non-participants can be interpreted as
the difference in average outcomes of participants and their counterfactual
average outcomes if they had chosen not to participate. The new estimator can
under mild conditions exhibit a rate of convergence in probability equal to
$n^{-p/(2p+1)}$, where $p\ge 2$ is an integer that indexes the strength of
certain smoothness assumptions. This rate of convergence is shown in this
context to be the optimal rate of convergence for estimation of the intercept
parameter in terms of a minimax criterion. The new estimator, unlike other
proposals in the literature, is under mild conditions consistent and
asymptotically normal with a rate of convergence that is the same regardless of
the degree to which selection depends on unobservables in the outcome equation.
Simulation evidence and an empirical example are included.",Rate-Optimal Estimation of the Intercept in a Semiparametric Sample-Selection Model,2017-10-04 03:02:22,Chuan Goh,"http://arxiv.org/abs/1710.01423v3, http://arxiv.org/pdf/1710.01423v3",econ.EM
28789,em,"Gale, Kuhn and Tucker (1950) introduced two ways to reduce a zero-sum game by
packaging some strategies with respect to a probability distribution on them.
In terms of value, they gave conditions for a desirable reduction. We show that
a probability distribution for a desirable reduction relies on optimal
strategies in the original game. Also, we correct an improper example given by
them to show that the reverse of a theorem does not hold.","A Note on Gale, Kuhn, and Tucker's Reductions of Zero-Sum Games",2017-10-06 12:45:42,Shuige Liu,"http://arxiv.org/abs/1710.02326v1, http://arxiv.org/pdf/1710.02326v1",econ.EM
28790,em,"This study proposes a simple technique for propensity score matching for
multiple treatment levels under the strong unconfoundedness assumption with the
help of the Aitchison distance proposed in the field of compositional data
analysis (CODA).",Propensity score matching for multiple treatment levels: A CODA-based contribution,2017-10-24 03:27:47,"Hajime Seya, Takahiro Yoshida","http://arxiv.org/abs/1710.08558v1, http://arxiv.org/pdf/1710.08558v1",econ.EM
28791,em,"We consider an index model of dyadic link formation with a homophily effect
index and a degree heterogeneity index. We provide nonparametric identification
results in a single large network setting for the potentially nonparametric
homophily effect function, the realizations of unobserved individual fixed
effects and the unknown distribution of idiosyncratic pairwise shocks, up to
normalization, for each possible true value of the unknown parameters. We
propose a novel form of scale normalization on an arbitrary interquantile
range, which is not only theoretically robust but also proves particularly
convenient for the identification analysis, as quantiles provide direct
linkages between the observable conditional probabilities and the unknown index
values. We then use an inductive ""in-fill and out-expansion"" algorithm to
establish our main results, and consider extensions to more general settings
that allow nonseparable dependence between homophily and degree heterogeneity,
as well as certain extents of network sparsity and weaker assumptions on the
support of unobserved heterogeneity. As a byproduct, we also propose a concept
called ""modeling equivalence"" as a refinement of ""observational equivalence"",
and use it to provide a formal discussion about normalization, identification
and their interplay with counterfactuals.",Nonparametric Identification in Index Models of Link Formation,2017-10-30 23:32:12,Wayne Yuan Gao,"http://arxiv.org/abs/1710.11230v5, http://arxiv.org/pdf/1710.11230v5",econ.EM
28792,em,"Web search data are a valuable source of business and economic information.
Previous studies have utilized Google Trends web search data for economic
forecasting. We expand this work by providing algorithms to combine and
aggregate search volume data, so that the resulting data is both consistent
over time and consistent between data series. We give a brand equity example,
where Google Trends is used to analyze shopping data for 100 top ranked brands
and these data are used to nowcast economic variables. We describe the
importance of out of sample prediction and show how principal component
analysis (PCA) can be used to improve the signal to noise ratio and prevent
overfitting in nowcasting models. We give a finance example, where exploratory
data analysis and classification is used to analyze the relationship between
Google Trends searches and stock prices.",Aggregating Google Trends: Multivariate Testing and Analysis,2017-12-08 19:18:10,"Stephen L. France, Yuying Shi","http://arxiv.org/abs/1712.03152v2, http://arxiv.org/pdf/1712.03152v2",econ.EM
28793,em,"We propose a new inferential methodology for dynamic economies that is robust
to misspecification of the mechanism generating frictions. Economies with
frictions are treated as perturbations of a frictionless economy that are
consistent with a variety of mechanisms. We derive a representation for the law
of motion for such economies and we characterize parameter set identification.
We derive a link from model aggregate predictions to distributional information
contained in qualitative survey data and specify conditions under which the
identified set is refined. The latter is used to semi-parametrically estimate
distortions due to frictions in macroeconomic variables. Based on these
estimates, we propose a novel test for complete models. Using consumer and
business survey data collected by the European Commission, we apply our method
to estimate distortions due to financial frictions in the Spanish economy. We
investigate the implications of these estimates for the adequacy of the
standard model of financial frictions SW-BGG (Smets and Wouters (2007),
Bernanke, Gertler, and Gilchrist (1999)).",Set Identified Dynamic Economies and Robustness to Misspecification,2017-12-11 11:41:11,Andreas Tryphonides,"http://arxiv.org/abs/1712.03675v2, http://arxiv.org/pdf/1712.03675v2",econ.EM
28794,em,"This paper defines the class of $\mathcal{H}$-valued autoregressive (AR)
processes with a unit root of finite type, where $\mathcal{H}$ is an infinite
dimensional separable Hilbert space, and derives a generalization of the
Granger-Johansen Representation Theorem valid for any integration order
$d=1,2,\dots$. An existence theorem shows that the solution of an AR with a
unit root of finite type is necessarily integrated of some finite integer $d$
and displays a common trends representation with a finite number of common
stochastic trends of the type of (cumulated) bilateral random walks and an
infinite dimensional cointegrating space. A characterization theorem clarifies
the connections between the structure of the AR operators and $(i)$ the order
of integration, $(ii)$ the structure of the attractor space and the
cointegrating space, $(iii)$ the expression of the cointegrating relations, and
$(iv)$ the Triangular representation of the process. Except for the fact that
the number of cointegrating relations that are integrated of order 0 is
infinite, the representation of $\mathcal{H}$-valued ARs with a unit root of
finite type coincides with that of usual finite dimensional VARs, which
corresponds to the special case $\mathcal{H}=\mathbb{R}^p$.",Cointegration in functional autoregressive processes,2017-12-20 18:23:20,"Massimo Franchi, Paolo Paruolo","http://dx.doi.org/10.1017/S0266466619000306, http://arxiv.org/abs/1712.07522v2, http://arxiv.org/pdf/1712.07522v2",econ.EM
28795,em,"High-dimensional linear models with endogenous variables play an increasingly
important role in recent econometric literature. In this work we allow for
models with many endogenous variables and many instrumental variables to achieve
identification. Because of the high-dimensionality in the second stage,
constructing honest confidence regions with asymptotically correct coverage is
non-trivial. Our main contribution is to propose estimators and confidence
regions that would achieve that. The approach relies on moment conditions that
have an additional orthogonal property with respect to nuisance parameters.
Moreover, estimation of high-dimensional nuisance parameters is carried out via
new pivotal procedures. In order to achieve simultaneously valid confidence
regions we use a multiplier bootstrap procedure to compute critical values and
establish its validity.",Simultaneous Confidence Intervals for High-dimensional Linear Models with Many Endogenous Variables,2017-12-21 20:33:40,"Alexandre Belloni, Christian Hansen, Whitney Newey","http://arxiv.org/abs/1712.08102v4, http://arxiv.org/pdf/1712.08102v4",econ.EM
28796,em,"This paper investigates the impacts of major natural resource discoveries
since 1960 on life expectancy in nations that were resource poor prior to the discoveries. Previous literature explains the relation between nations' wealth and life expectancy, but it has been silent about the impacts of resource discoveries on life expectancy. We attempt to fill this gap in this study. An important advantage of this study is that, as previous researchers have argued, resource discovery can be treated as an exogenous variable. We use longitudinal data from 1960 to 2014 and apply three modern empirical methods, namely Difference-in-Differences, Event studies, and the Synthetic Control approach, to investigate the main question of the research: how do resource discoveries affect life expectancy? The findings show that resource
discoveries in Ecuador, Yemen, Oman, and Equatorial Guinea have positive and
significant impacts on life expectancy, but the effects for the European
countries are mostly negative.",Resource Abundance and Life Expectancy,2018-01-01 01:43:39,Bahram Sanginabadi,"http://arxiv.org/abs/1801.00369v1, http://arxiv.org/pdf/1801.00369v1",econ.EM
28797,em,"In this paper we estimate a Bayesian vector autoregressive model with factor
stochastic volatility in the error term to assess the effects of an uncertainty
shock in the Euro area. This allows us to treat macroeconomic uncertainty as a
latent quantity during estimation. Only a limited number of contributions to
the literature estimate uncertainty and its macroeconomic consequences jointly,
and most are based on single country models. We analyze the special case of a
shock restricted to the Euro area, where member states are highly related by
construction. We find significant results of a decrease in real activity for
all countries over a period of roughly a year following an uncertainty shock.
Moreover, equity prices, short-term interest rates and exports tend to decline,
while unemployment levels increase. Dynamic responses across countries differ
slightly in magnitude and duration, with Ireland, Slovakia and Greece
exhibiting different reactions for some macroeconomic fundamentals.",Implications of macroeconomic volatility in the Euro area,2018-01-09 16:20:42,"Niko Hauzenberger, Maximilian Böck, Michael Pfarrhofer, Anna Stelzer, Gregor Zens","http://arxiv.org/abs/1801.02925v2, http://arxiv.org/pdf/1801.02925v2",econ.EM
28798,em,"We report a new result on lotteries --- that a well-funded syndicate has a
purely mechanical strategy to achieve expected returns of 10\% to 25\% in an
equiprobable lottery with no take and no carryover pool. We prove that an
optimal strategy (Nash equilibrium) in a game between the syndicate and other
players consists of betting one of each ticket (the ""trump ticket""), and extend
that result to proportional ticket selection in non-equiprobable lotteries. The
strategy can be adjusted to accommodate lottery taxes and carryover pools. No
""irrationality"" need be involved for the strategy to succeed --- it requires
only that a large group of non-syndicate bettors each choose a few tickets
independently.",A Method for Winning at Lotteries,2018-01-05 22:35:17,"Steven D. Moffitt, William T. Ziemba","http://arxiv.org/abs/1801.02958v1, http://arxiv.org/pdf/1801.02958v1",econ.EM
28799,em,"Despite its unusual payout structure, the Canadian 6/49 Lotto is one of the
few government sponsored lotteries that has the potential for a favorable
strategy we call ""buying the pot."" By buying the pot we mean that a syndicate
buys each ticket in the lottery, ensuring that it holds a jackpot winner. We
assume that the other bettors independently buy small numbers of tickets. This
paper presents (1) a formula for the syndicate's expected return, (2)
conditions under which buying the pot produces a significant positive expected
return, and (3) the implications of these findings for lottery design.",Does it Pay to Buy the Pot in the Canadian 6/49 Lotto? Implications for Lottery Design,2018-01-06 00:58:18,"Steven D. Moffitt, William T. Ziemba","http://arxiv.org/abs/1801.02959v1, http://arxiv.org/pdf/1801.02959v1",econ.EM
28800,em,"Dynamic Discrete Choice Models (DDCMs) are important in the structural
estimation literature. Since the structural errors are practically always
continuous and unbounded in nature, researchers often use the expected value
function. The idea to solve for the expected value function made solution more
practical and estimation feasible. However, as we show in this paper, the
expected value function is impractical compared to an alternative: the
integrated (ex ante) value function. We provide brief descriptions of the
inefficacy of the former, and benchmarks on actual problems with varying
cardinality of the state space and number of decisions. Though the two
approaches solve the same problem in theory, the benchmarks support the claim
that the integrated value function is preferred in practice.",Solving Dynamic Discrete Choice Models: Integrated or Expected Value Function?,2018-01-11 23:26:00,Patrick Kofod Mogensen,"http://arxiv.org/abs/1801.03978v1, http://arxiv.org/pdf/1801.03978v1",econ.EM
28801,em,"This paper develops a new model and estimation procedure for panel data that
allows us to identify heterogeneous structural breaks. We model individual
heterogeneity using a grouped pattern. For each group, we allow common
structural breaks in the coefficients. However, the number, timing, and size of
these breaks can differ across groups. We develop a hybrid estimation procedure
of the grouped fixed effects approach and adaptive group fused Lasso. We show
that our method can consistently identify the latent group structure, detect
structural breaks, and estimate the regression parameters. Monte Carlo results
demonstrate the good performance of the proposed method in finite samples. An
empirical application to the relationship between income and democracy
illustrates the importance of considering heterogeneous structural breaks.",Heterogeneous structural breaks in panel data models,2018-01-15 09:19:28,"Ryo Okui, Wendun Wang","http://arxiv.org/abs/1801.04672v2, http://arxiv.org/pdf/1801.04672v2",econ.EM
28802,em,"We characterize common assumption of rationality of 2-person games within an
incomplete information framework. We use the lexicographic model with
incomplete information and show that a belief hierarchy expresses common
assumption of rationality within a complete information framework if and only
if there is a belief hierarchy within the corresponding incomplete information
framework that expresses common full belief in caution, rationality, every good
choice is supported, and prior belief in the original utility functions.",Characterizing Assumption of Rationality by Incomplete Information,2018-01-15 12:48:20,Shuige Liu,"http://arxiv.org/abs/1801.04714v1, http://arxiv.org/pdf/1801.04714v1",econ.EM
28803,em,"We first show (1) the importance of investigating health expenditure process
using the order two Markov chain model, rather than the standard order one
model, which is widely used in the literature. Markov chain of order two is the
minimal framework that is capable of distinguishing those who experience a
certain health expenditure level for the first time from those who have been
experiencing that or other levels for some time. In addition, using the model
we show (2) that the probability of encountering a health shock first decreases until around age 10, and then increases with age, particularly after
age 40, (3) that health shock distributions among different age groups do not
differ until their percentiles reach the median range, but that above the
median the health shock distributions of older age groups gradually start to
first-order dominate those of younger groups, and (4) that the persistency of
health shocks also shows a U-shape in relation to age.",Quantifying Health Shocks Over the Life Cycle,2018-01-26 13:35:38,"Taiyo Fukai, Hidehiko Ichimura, Kyogo Kanazawa","http://arxiv.org/abs/1801.08746v1, http://arxiv.org/pdf/1801.08746v1",econ.EM
28804,em,"We define a modification of the standard Kripke model, called the ordered
Kripke model, by introducing a linear order on the set of accessible states of
each state. We first show this model can be used to describe the lexicographic
belief hierarchy in epistemic game theory, and perfect rationalizability can be
characterized within this model. Then we show that each ordered Kripke model is
the limit of a sequence of standard probabilistic Kripke models with a modified
(common) belief operator, in the senses of structure and the
(epsilon-)permissibilities characterized within them.","Ordered Kripke Model, Permissibility, and Convergence of Probabilistic Kripke Model",2018-01-26 14:46:28,Shuige Liu,"http://arxiv.org/abs/1801.08767v1, http://arxiv.org/pdf/1801.08767v1",econ.EM
28805,em,"Why women avoid participating in a competition and how can we encourage them
to participate in it? In this paper, we investigate how social image concerns
affect women's decision to compete. We first construct a theoretical model and
show that participating in a competition, even under affirmative action
policies favoring women, is costly for women under public observability since
it deviates from traditional female gender norms, resulting in women's low participation in competitive environments. We propose and theoretically show that
introducing prosocial incentives in the competitive environment is effective
and robust to public observability since (i) it induces women who are
intrinsically motivated by prosocial incentives to the competitive environment
and (ii) it makes participating in a competition not costly for women from
social image point of view. We conduct a laboratory experiment where we
randomly manipulate the public observability of decisions to compete and test
our theoretical predictions. The results of the experiment are fairly
consistent with our theoretical predictions. We suggest that when designing
policies to promote gender equality in competitive environments, using
prosocial incentives through company philanthropy or other social
responsibility policies, either as substitutes or as complements to traditional
affirmative action policies, could be promising.",How Can We Induce More Women to Competitions?,2018-01-27 11:51:44,"Masayuki Yagasaki, Mitsunosuke Morishita","http://arxiv.org/abs/1801.10518v1, http://arxiv.org/pdf/1801.10518v1",econ.EM
28806,em,"The rational choice theory is based on this idea that people rationally
pursue goals for increasing their personal interests. In most conditions, the
behavior of an actor is not independent of the person and others' behavior.
Here, we present a new concept of rational choice as a hyper-rational choice
which in this concept, the actor thinks about profit or loss of other actors in
addition to his personal profit or loss and then will choose an action which is
desirable to him. We implement the hyper-rational choice to generalize and
expand the game theory. Results of this study will help to model the behavior
of people considering environmental conditions, the kind of behavior
interactive, valuation system of itself and others and system of beliefs and
internal values of societies. Hyper-rationality helps us understand how human
decision makers behave in interactive decisions.",Hyper-rational choice theory,2018-01-12 02:16:09,"Madjid Eshaghi Gordji, Gholamreza Askari","http://arxiv.org/abs/1801.10520v2, http://arxiv.org/pdf/1801.10520v2",econ.EM
28807,em,"We develop a new VAR model for structural analysis with mixed-frequency data.
The MIDAS-SVAR model allows one to identify structural dynamic links by exploiting the
information contained in variables sampled at different frequencies. It also
provides a general framework to test homogeneous frequency-based
representations versus mixed-frequency data models. A set of Monte Carlo
experiments suggests that the test performs well both in terms of size and
power. The MIDAS-SVAR is then used to study how monetary policy and financial
market volatility impact on the dynamics of gross capital inflows to the US.
While no relation is found when using standard quarterly data, exploiting the
variability present in the series within the quarter shows that the effect of
an interest rate shock is greater the longer the time lag between the month of
the shock and the end of the quarter.",Structural analysis with mixed-frequency data: A MIDAS-SVAR model of US capital flows,2018-02-02 21:12:12,"Emanuele Bacchiocchi, Andrea Bastianin, Alessandro Missale, Eduardo Rossi","http://arxiv.org/abs/1802.00793v1, http://arxiv.org/pdf/1802.00793v1",econ.EM
28808,em,"The development and deployment of matching procedures that incentivize
truthful preference reporting is considered one of the major successes of
market design research. In this study, we test the degree to which these
procedures succeed in eliminating preference misrepresentation. We administered
an online experiment to 1,714 medical students immediately after their
participation in the medical residency match--a leading field application of
strategy-proof market design. When placed in an analogous, incentivized
matching task, we find that 23% of participants misrepresent their preferences.
We explore the factors that predict preference misrepresentation, including
cognitive ability, strategic positioning, overconfidence, expectations, advice,
and trust. We discuss the implications of this behavior for the design of
allocation mechanisms and the social welfare in markets that use them.",An Experimental Investigation of Preference Misrepresentation in the Residency Match,2018-02-05 20:51:55,"Alex Rees-Jones, Samuel Skowronek","http://dx.doi.org/10.1073/pnas.1803212115, http://arxiv.org/abs/1802.01990v2, http://arxiv.org/pdf/1802.01990v2",econ.EM
28809,em,"Consumers are creatures of habit, often periodic, tied to work, shopping and
other schedules. We analyzed one month of data from the world's largest
bike-sharing company to elicit demand behavioral cycles, initially using models
from animal tracking that showed large customers fit an Ornstein-Uhlenbeck
model with demand peaks at periodicities of 7, 12, and 24 hours and 7 days. Lorenz
curves of bicycle demand showed that the majority of customer usage was
infrequent, and demand cycles from time-series models would strongly overfit
the data yielding unreliable models. Analysis of thresholded wavelets for the
space-time tensor of bike-sharing contracts was able to compress the data into
a 56-coefficient model with little loss of information, suggesting that
bike-sharing demand behavior is exceptionally strong and regular. Improvements
to predicted demand could be made by adjusting for 'noise' filtered by our
model from air quality and weather information and demand from infrequent
riders.",Prediction of Shared Bicycle Demand with Wavelet Thresholding,2018-02-08 04:17:27,"J. Christopher Westland, Jian Mou, Dafei Yin","http://arxiv.org/abs/1802.02683v1, http://arxiv.org/pdf/1802.02683v1",econ.EM
28810,em,"This paper describes a numerical method to solve for mean product qualities
which equates the real market share to the market share predicted by a discrete
choice model. The method covers a general class of discrete choice models, including the pure characteristics model in Berry and Pakes (2007) and the random coefficient logit model in Berry et al. (1995) (hereafter BLP). The
method transforms the original market share inversion problem to an
unconstrained convex minimization problem, so that any convex programming
algorithm can be used to solve the inversion. Moreover, such results also imply
that the computational complexity of inverting a demand model should be no more
than that of a convex programming problem. In simulation examples, I show the
method outperforms the contraction mapping algorithm in BLP. I also find the
method remains robust in pure characteristics models with near-zero market
shares.",A General Method for Demand Inversion,2018-02-13 05:50:46,Lixiong Li,"http://arxiv.org/abs/1802.04444v3, http://arxiv.org/pdf/1802.04444v3",econ.EM
29016,em,"This paper develops a set of test statistics based on bilinear forms in the
context of the extremum estimation framework with particular interest in
nonlinear hypothesis. We show that the proposed statistic converges to a
conventional chi-square limit. A Monte Carlo experiment suggests that the test
statistic works well in finite samples.",Bilinear form test statistics for extremum estimation,2019-12-03 17:32:49,"Federico Crudu, Felipe Osorio","http://dx.doi.org/10.1016/j.econlet.2019.108885, http://arxiv.org/abs/1912.01410v1, http://arxiv.org/pdf/1912.01410v1",econ.EM
28811,em,"We provide an epistemic foundation for cooperative games by proof theory via
studying the knowledge for players unanimously accepting only core payoffs. We
first transform each cooperative game into a decision problem where a player
can accept or reject any payoff vector offered to her based on her knowledge
about available cooperation. Then we use a modified KD-system in epistemic
logic, which can be regarded as a counterpart of the model for non-cooperative
games in Bonanno (2008), (2015), to describe a player's knowledge,
decision-making criterion, and reasoning process; especially, a formula called
C-acceptability is defined to capture the criterion for accepting a core payoff
vector. Within this syntactical framework, we characterize the core of a
cooperative game in terms of players' knowledge. Based on that result, we
discuss an epistemic inconsistency behind the Debreu-Scarf Theorem: from the aspect of the competitive market, increasing the number of replicas places an invariant requirement on each participant's knowledge, while from the aspect of the cooperative game it requires players with unbounded epistemic ability.",Knowledge and Unanimous Acceptance of Core Payoffs: An Epistemic Foundation for Cooperative Game Theory,2018-02-13 15:49:12,Shuige Liu,"http://arxiv.org/abs/1802.04595v4, http://arxiv.org/pdf/1802.04595v4",econ.EM
28812,em,"In this study interest centers on regional differences in the response of
housing prices to monetary policy shocks in the US. We address this issue by
analyzing monthly home price data for metropolitan regions using a
factor-augmented vector autoregression (FAVAR) model. Bayesian model estimation
is based on Gibbs sampling with Normal-Gamma shrinkage priors for the
autoregressive coefficients and factor loadings, while monetary policy shocks
are identified using high-frequency surprises around policy announcements as
external instruments. The empirical results indicate that monetary policy
actions typically have sizeable and significant positive effects on regional
housing prices, revealing differences in magnitude and duration. The largest
effects are observed in regions located in states on both the East and West
Coasts, notably California, Arizona and Florida.",The dynamic impact of monetary policy on regional housing prices in the US: Evidence based on factor-augmented vector autoregressions,2018-02-16 12:08:34,"Manfred M. Fischer, Florian Huber, Michael Pfarrhofer, Petra Staufer-Steinnocher","http://arxiv.org/abs/1802.05870v1, http://arxiv.org/pdf/1802.05870v1",econ.EM
28813,em,"We study the asymptotic properties of a class of estimators of the structural
parameters in dynamic discrete choice games. We consider K-stage policy
iteration (PI) estimators, where K denotes the number of policy iterations
employed in the estimation. This class nests several estimators proposed in the
literature such as those in Aguirregabiria and Mira (2002, 2007), Pesendorfer
and Schmidt-Dengler (2008), and Pakes et al. (2007). First, we establish that
the K-PML estimator is consistent and asymptotically normal for all K. This
complements findings in Aguirregabiria and Mira (2007), who focus on K=1 and K
large enough to induce convergence of the estimator. Furthermore, we show under
certain conditions that the asymptotic variance of the K-PML estimator can
exhibit arbitrary patterns as a function of K. Second, we establish that the
K-MD estimator is consistent and asymptotically normal for all K. For a
specific weight matrix, the K-MD estimator has the same asymptotic distribution
as the K-PML estimator. Our main result provides an optimal sequence of weight
matrices for the K-MD estimator and shows that the optimally weighted K-MD
estimator has an asymptotic distribution that is invariant to K. The invariance
result is especially unexpected given the findings in Aguirregabiria and Mira
(2007) for K-PML estimators. Our main result implies two new corollaries about
the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)).
First, the optimal 1-MD estimator is optimal in the class of K-MD estimators.
In other words, additional policy iterations do not provide asymptotic
efficiency gains relative to the optimal 1-MD estimator. Second, the optimal
1-MD estimator is more or equally asymptotically efficient than any K-PML
estimator for all K. Finally, the appendix provides appropriate conditions
under which the optimal 1-MD estimator is asymptotically efficient.",On the iterated estimation of dynamic discrete choice games,2018-02-19 18:19:35,"Federico A. Bugni, Jackson Bunting","http://arxiv.org/abs/1802.06665v4, http://arxiv.org/pdf/1802.06665v4",econ.EM
28814,em,"This paper proposes nonparametric kernel-smoothing estimation for panel data
to examine the degree of heterogeneity across cross-sectional units. We first
estimate the sample mean, autocovariances, and autocorrelations for each unit
and then apply kernel smoothing to compute their density functions. The
dependence of the kernel estimator on bandwidth makes asymptotic bias of very
high order affect the required condition on the relative magnitudes of the
cross-sectional sample size (N) and the time-series length (T). In particular,
it makes the condition on N and T stronger and more complicated than those
typically observed in the long-panel literature without kernel smoothing. We
also consider a split-panel jackknife method to correct bias and construction
of confidence intervals. An empirical application and Monte Carlo simulations
illustrate our procedure in finite samples.",Kernel Estimation for Panel Data with Heterogeneous Dynamics,2018-02-24 12:45:50,"Ryo Okui, Takahide Yanagi","http://arxiv.org/abs/1802.08825v4, http://arxiv.org/pdf/1802.08825v4",econ.EM
28815,em,"People reason heuristically in situations resembling inferential puzzles such
as Bertrand's box paradox and the Monty Hall problem. The practical
significance of that fact for economic decision making is uncertain because a
departure from sound reasoning may, but does not necessarily, result in a
""cognitively biased"" outcome different from what sound reasoning would have
produced. Criteria are derived here, applicable to both experimental and
non-experimental situations, for heuristic reasoning in inferential-puzzle situations to result, or not to result, in cognitive bias. In some
situations, neither of these criteria is satisfied, and whether or not agents'
posterior probability assessments or choices are cognitively biased cannot be
determined.",Identifying the occurrence or non occurrence of cognitive bias in situations resembling the Monty Hall problem,2018-02-25 03:28:11,"Fatemeh Borhani, Edward J. Green","http://arxiv.org/abs/1802.08935v1, http://arxiv.org/pdf/1802.08935v1",econ.EM
28816,em,"I analyse the solution method for the variational optimisation problem in the
rational inattention framework proposed by Christopher A. Sims. The solution,
in general, does not exist, although it may exist in exceptional cases. I show
that the solution does not exist for the quadratic and the logarithmic
objective functions analysed by Sims (2003, 2006). For a linear-quadratic
objective function a solution can be constructed under restrictions on all but
one of its parameters. This approach is, therefore, unlikely to be applicable
to a wider set of economic models.",On the solution of the variational optimisation in the rational inattention framework,2018-02-27 16:21:46,Nigar Hashimzade,"http://arxiv.org/abs/1802.09869v2, http://arxiv.org/pdf/1802.09869v2",econ.EM
28830,em,"This paper proposes a model-free approach to analyze panel data with
heterogeneous dynamic structures across observational units. We first compute
the sample mean, autocovariances, and autocorrelations for each unit, and then
estimate the parameters of interest based on their empirical distributions. We
then investigate the asymptotic properties of our estimators using double
asymptotics and propose split-panel jackknife bias correction and inference
based on the cross-sectional bootstrap. We illustrate the usefulness of our
procedures by studying the deviation dynamics of the law of one price. Monte
Carlo simulations confirm that the proposed bias correction is effective and
yields valid inference in small samples.",Panel Data Analysis with Heterogeneous Dynamics,2018-03-26 10:53:47,"Ryo Okui, Takahide Yanagi","http://arxiv.org/abs/1803.09452v2, http://arxiv.org/pdf/1803.09452v2",econ.EM
28817,em,"Many macroeconomic policy questions may be assessed in a case study
framework, where the time series of a treated unit is compared to a
counterfactual constructed from a large pool of control units. I provide a
general framework for this setting, tailored to predict the counterfactual by
minimizing a tradeoff between underfitting (bias) and overfitting (variance).
The framework nests recently proposed structural and reduced form machine
learning approaches as special cases. Furthermore, difference-in-differences
with matching and the original synthetic control are restrictive cases of the
framework, in general not minimizing the bias-variance objective. Using
simulation studies I find that machine learning methods outperform traditional
methods when the number of potential controls is large or the treated unit is
substantially different from the controls. Equipped with a toolbox of
approaches, I revisit a study on the effect of economic liberalisation on
economic growth. I find effects for several countries where no effect was found
in the original study. Furthermore, I inspect how a systemically important bank responds to increasing capital requirements by using a large pool of banks
to estimate the counterfactual. Finally, I assess the effect of a changing
product price on product sales using a novel scanner dataset.",Synthetic Control Methods and Big Data,2018-03-01 00:32:09,Daniel Kinn,"http://arxiv.org/abs/1803.00096v1, http://arxiv.org/pdf/1803.00096v1",econ.EM
28818,em,"It is widely known that geographically weighted regression(GWR) is
essentially same as varying-coefficient model. In the former research about
varying-coefficient model, scholars tend to use multidimensional-kernel-based
locally weighted estimation(MLWE) so that information of both distance and
direction is considered. However, when we construct the local weight matrix of
geographically weighted estimation, distance among the locations in the
neighbor is the only factor controlling the value of entries of weight matrix.
In other word, estimation of GWR is distance-kernel-based. Thus, in this paper,
under stationary and limited dependent data with multidimensional subscripts,
we analyze the local mean squared properties of without any assumption of the
form of coefficient functions and compare it with MLWE. According to the
theoretical and simulation results, geographically-weighted locally linear
estimation(GWLE) is asymptotically more efficient than MLWE. Furthermore, a
relationship between optimal bandwith selection and design of scale parameters
is also obtained.",An Note on Why Geographically Weighted Regression Overcomes Multidimensional-Kernel-Based Varying-Coefficient Model,2018-03-04 21:50:17,Zihao Yuan,"http://arxiv.org/abs/1803.01402v2, http://arxiv.org/pdf/1803.01402v2",econ.EM
28819,em,"We study three pricing mechanisms' performance and their effects on the
participants in the data industry from the data supply chain perspective. A
win-win pricing strategy for the players in the data supply chain is proposed.
We obtain analytical solutions in each pricing mechanism, including the
decentralized and centralized pricing, Nash Bargaining pricing, and revenue
sharing mechanism.",Pricing Mechanism in Information Goods,2018-03-05 10:37:06,"Xinming Li, Huaqing Wang","http://arxiv.org/abs/1803.01530v1, http://arxiv.org/pdf/1803.01530v1",econ.EM
28820,em,"Spatial association and heterogeneity are two critical areas in the research
about spatial analysis, geography, statistics and so on. Though large amounts
of outstanding methods has been proposed and studied, there are few of them
tend to study spatial association under heterogeneous environment.
Additionally, most of the traditional methods are based on distance statistic
and spatial weighted matrix. However, in some abstract spatial situations,
distance statistic can not be applied since we can not even observe the
geographical locations directly. Meanwhile, under these circumstances, due to
invisibility of spatial positions, designing of weight matrix can not
absolutely avoid subjectivity. In this paper, a new entropy-based method, which
is data-driven and distribution-free, has been proposed to help us investigate
spatial association while fully taking the fact that heterogeneity widely
exist. Specifically, this method is not bounded with distance statistic or
weight matrix. Asymmetrical dependence is adopted to reflect the heterogeneity
in spatial association for each individual and the whole discussion in this
paper is performed on spatio-temporal data with only assuming stationary
m-dependent over time.",A Nonparametric Approach to Measure the Heterogeneous Spatial Association: Under Spatial Temporal Data,2018-03-06 21:46:49,Zihao Yuan,"http://arxiv.org/abs/1803.02334v2, http://arxiv.org/pdf/1803.02334v2",econ.EM
28821,em,"The last technical barriers to trade(TBT) between countries are Non-Tariff
Barriers(NTBs), meaning all trade barriers are possible other than Tariff
Barriers. And the most typical examples are (TBT), which refer to measure
Technical Regulation, Standards, Procedure for Conformity Assessment, Test &
Certification etc. Therefore, in order to eliminate TBT, WTO has made all
membership countries automatically enter into an agreement on TBT",A study of strategy to the remove and ease TBT for increasing export in GCC6 countries,2018-03-09 09:39:31,YongJae Kim,"http://arxiv.org/abs/1803.03394v3, http://arxiv.org/pdf/1803.03394v3",econ.EM
28822,em,"Understanding the effectiveness of alternative approaches to water
conservation is crucially important for ensuring the security and reliability
of water services for urban residents. We analyze data from one of the
longest-running ""cash for grass"" policies - the Southern Nevada Water
Authority's Water Smart Landscapes program, where homeowners are paid to
replace grass with xeric landscaping. We use a twelve year long panel dataset
of monthly water consumption records for 300,000 households in Las Vegas,
Nevada. Utilizing a panel difference-in-differences approach, we estimate the
average water savings per square meter of turf removed. We find that
participation in this program reduced the average treated household's
consumption by 18 percent. We find no evidence that water savings degrade as
the landscape ages, or that water savings per unit area are influenced by the
value of the rebate. Depending on the assumed time horizon of benefits from
turf removal, we find that the WSL program cost the water authority about $1.62
per thousand gallons of water saved, which compares favorably to alternative
means of water conservation or supply augmentation.",How Smart Are `Water Smart Landscapes'?,2018-03-13 05:00:07,"Christa Brelsford, Joshua K. Abbott","http://arxiv.org/abs/1803.04593v1, http://arxiv.org/pdf/1803.04593v1",econ.EM
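A minimal sketch, under assumptions, of the panel difference-in-differences design described above: household and month fixed effects with a post-participation indicator. The data, coefficients, and variable names are simulated stand-ins, not the WSL program records.

```python
# Minimal two-way fixed effects difference-in-differences sketch (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_hh, n_months = 200, 48
df = pd.DataFrame([(h, t) for h in range(n_hh) for t in range(n_months)],
                  columns=["household", "month"])
treated = rng.random(n_hh) < 0.5                    # half the households participate
adopt_month = rng.integers(12, 36, size=n_hh)       # staggered participation dates
df["post"] = (treated[df.household] & (df.month >= adopt_month[df.household])).astype(int)
hh_fe = rng.normal(size=n_hh)[df.household]
month_fe = 0.1 * np.sin(2 * np.pi * df.month / 12)  # seasonality in water use
df["log_water"] = 5 + hh_fe + month_fe - 0.18 * df["post"] + rng.normal(0.0, 0.1, len(df))

# Coefficient on `post` approximates the average percent reduction in consumption.
fit = smf.ols("log_water ~ post + C(household) + C(month)", data=df).fit()
print("estimated effect of participation:", round(fit.params["post"], 3))
```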
28823,em,"The business cycles are generated by the oscillating macro-/micro-/nano-
economic output variables in the economy of the scale and the scope in the
amplitude/frequency/phase/time domains in economics. Accurate forward-looking
assumptions on the business cycles oscillation dynamics can optimize
the financial capital investing and/or borrowing by the economic agents in the
capital markets. The book's main objective is to study the business cycles in
the economy of the scale and the scope, formulating the Ledenyov unified
business cycles theory in the Ledenyov classic and quantum econodynamics.",Business Cycles in Economics,2018-03-16 11:24:05,"Viktor O. Ledenyov, Dimitri O. Ledenyov","http://dx.doi.org/10.2139/ssrn.3134655, http://arxiv.org/abs/1803.06108v1, http://arxiv.org/pdf/1803.06108v1",econ.EM
28824,em,"Unobserved heterogeneous treatment effects have been emphasized in the recent
policy evaluation literature (see e.g., Heckman and Vytlacil, 2005). This paper
proposes a nonparametric test for unobserved heterogeneous treatment effects in
a treatment effect model with a binary treatment assignment, allowing for
individuals' self-selection to the treatment. Under the standard local average
treatment effects assumptions, i.e., the no defiers condition, we derive
testable model restrictions for the hypothesis of unobserved heterogeneous
treatment effects. Also, we show that if the treatment outcomes satisfy a
monotonicity assumption, these model restrictions are also sufficient. Then, we
propose a modified Kolmogorov-Smirnov-type test which is consistent and simple
to implement. Monte Carlo simulations show that our test performs well in
finite samples. For illustration, we apply our test to study heterogeneous
treatment effects of the Job Training Partnership Act on earnings and the
impacts of fertility on family income, where the null hypothesis of homogeneous
treatment effects gets rejected in the second case but fails to be rejected in
the first application.",Testing for Unobserved Heterogeneous Treatment Effects with Observational Data,2018-03-20 19:30:07,"Yu-Chin Hsu, Ta-Cheng Huang, Haiqing Xu","http://arxiv.org/abs/1803.07514v2, http://arxiv.org/pdf/1803.07514v2",econ.EM
28825,em,"In the regression discontinuity design (RDD), it is common practice to assess
the credibility of the design by testing the continuity of the density of the
running variable at the cut-off, e.g., McCrary (2008). In this paper we propose
an approximate sign test for continuity of a density at a point based on the
so-called g-order statistics, and study its properties under two complementary
asymptotic frameworks. In the first asymptotic framework, the number q of
observations local to the cut-off is fixed as the sample size n diverges to
infinity, while in the second framework q diverges to infinity slowly as n
diverges to infinity. Under both of these frameworks, we show that the test we
propose is asymptotically valid in the sense that it has limiting rejection
probability under the null hypothesis not exceeding the nominal level. More
importantly, the test is easy to implement, asymptotically valid under weaker
conditions than those used by competing methods, and exhibits finite sample
validity under stronger conditions than those needed for its asymptotic
validity. In a simulation study, we find that the approximate sign test
provides good control of the rejection probability under the null hypothesis
while remaining competitive under the alternative hypothesis. We finally apply
our test to the design in Lee (2008), a well-known application of the RDD to
study incumbency advantage.",Testing Continuity of a Density via g-order statistics in the Regression Discontinuity Design,2018-03-21 17:52:59,"Federico A. Bugni, Ivan A. Canay","http://arxiv.org/abs/1803.07951v6, http://arxiv.org/pdf/1803.07951v6",econ.EM
28826,em,"In this paper, we propose the use of causal inference techniques for survival
function estimation and prediction for subgroups of the data, up to individual
units. Tree ensemble methods, specifically random forests were modified for
this purpose. A real world healthcare dataset was used with about 1800 patients
with breast cancer, which has multiple patient covariates as well as disease
free survival days (DFS) and a death event binary indicator (y). We use the
type of cancer curative intervention as the treatment variable (T=0 or 1,
binary treatment case in our example). The algorithm is a 2 step approach. In
step 1, we estimate heterogeneous treatment effects using a causalTree with the
DFS as the dependent variable. Next, in step 2, for each selected leaf of the
causalTree with distinctly different average treatment effect (with respect to
survival), we fit a survival forest to all the patients in that leaf, one
forest each for treatment T=0 as well as T=1 to get estimated patient level
survival curves for each treatment (more generally, any model can be used at
this step). Then, we subtract the patient level survival curves to get the
differential survival curve for a given patient, to compare the survival
function as a result of the 2 treatments. The path to a selected leaf also
gives us the combination of patient features and their values which are
causally important for the treatment effect difference at the leaf.",Causal Inference for Survival Analysis,2018-03-22 06:22:19,Vikas Ramachandra,"http://arxiv.org/abs/1803.08218v1, http://arxiv.org/pdf/1803.08218v1",econ.EM
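A hedged sketch of the two-step idea in the abstract above, not the authors' code: causalTree is an R package, so a regression-tree T-learner stands in for it to group patients by an estimated treatment effect on disease-free survival, and Kaplan-Meier curves per treatment arm are then fit within each group and subtracted. All data, covariates, and cut-offs are simulated and purely illustrative; the sketch assumes the `lifelines` package is available.

```python
# Two-step sketch: (1) group patients by estimated treatment effect on DFS,
# (2) per-group, per-arm Kaplan-Meier survival curves, then take the difference.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 1800
X = pd.DataFrame({"age": rng.normal(55, 10, n), "stage": rng.integers(1, 4, n)})
T = rng.integers(0, 2, n)                                        # curative intervention yes/no
benefit = 0.5 * (X["stage"] == 1)                                # heterogeneous benefit
dfs = rng.exponential(scale=np.exp(1.5 + benefit * T), size=n)   # disease-free survival days
event = rng.random(n) < 0.7                                      # event indicator

# Step 1 (stand-in for causalTree): estimate tau(x) = E[DFS|T=1,x] - E[DFS|T=0,x].
m1 = DecisionTreeRegressor(max_depth=2).fit(X[T == 1], dfs[T == 1])
m0 = DecisionTreeRegressor(max_depth=2).fit(X[T == 0], dfs[T == 0])
tau = m1.predict(X) - m0.predict(X)
group = tau > np.median(tau)                                     # two "leaves" by estimated effect

# Step 2: within each group, Kaplan-Meier curve per treatment arm, then subtract.
times = np.linspace(0, 20, 5)
for g in (False, True):
    curves = {}
    for arm in (0, 1):
        mask = (group == g) & (T == arm)
        kmf = KaplanMeierFitter().fit(dfs[mask], event_observed=event[mask])
        curves[arm] = kmf.predict(times)
    print(f"high-benefit group={g}: S1 - S0 at {times[-1]:.0f} days =",
          round(curves[1].iloc[-1] - curves[0].iloc[-1], 3))
```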
28827,em,"Linear regressions with period and group fixed effects are widely used to
estimate treatment effects. We show that they estimate weighted sums of the
average treatment effects (ATE) in each group and period, with weights that may
be negative. Due to the negative weights, the linear regression coefficient may
for instance be negative while all the ATEs are positive. We propose another
estimator that solves this issue. In the two applications we revisit, it is
significantly different from the linear regression estimator.",Two-way fixed effects estimators with heterogeneous treatment effects,2018-03-22 01:56:07,"Clément de Chaisemartin, Xavier D'Haultfœuille","http://dx.doi.org/10.1257/aer.20181169, http://arxiv.org/abs/1803.08807v7, http://arxiv.org/pdf/1803.08807v7",econ.EM
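A sketch, under assumptions, of how the weights behind a two-way fixed-effects coefficient can be inspected for negativity. It follows the residual-based characterization discussed in this literature for balanced cells (the weight of a treated group-period cell is proportional to the residual from regressing the treatment dummy on group and period fixed effects); it is illustrative only and not the authors' alternative estimator.

```python
# Inspect TWFE weights on treated group-period cells in a simulated staggered design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

G, T = 6, 8
df = pd.DataFrame([(g, t) for g in range(G) for t in range(T)], columns=["g", "t"])
adopt = np.linspace(2, 7, G).astype(int)          # staggered adoption periods
df["D"] = (df["t"] >= adopt[df["g"]]).astype(int)

# Residualize the treatment dummy on group and period fixed effects.
eps = smf.ols("D ~ C(g) + C(t)", data=df).fit().resid
treated = df["D"] == 1
weights = eps[treated] / eps[treated].sum()       # weights sum to one over treated cells
print("number of negative weights:", int((weights < 0).sum()))
print("most negative weight:", round(weights.min(), 3))
```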
28828,em,"We examine the effects of monetary policy on income inequality in Japan using
a novel econometric approach that jointly estimates the Gini coefficient based
on micro-level grouped data of households and the dynamics of macroeconomic
quantities. Our results indicate different effects on income inequality for
different types of households: A monetary tightening increases inequality when
income data is based on households whose head is employed (workers'
households), while the effect reverses over the medium term when considering a
broader definition of households. Differences in the relative strength of the
transmission channels can account for this finding. Finally we demonstrate that
the proposed joint estimation strategy leads to more informative inference
while the frequently used two-step estimation approach yields
inconclusive results.",How does monetary policy affect income inequality in Japan? Evidence from grouped data,2018-03-23 19:28:23,"Martin Feldkircher, Kazuhiko Kakamu","http://dx.doi.org/10.1007/s00181-021-02102-7, http://arxiv.org/abs/1803.08868v2, http://arxiv.org/pdf/1803.08868v2",econ.EM
28829,em,"We develop inference for a two-sided matching model where the characteristics
of agents on one side of the market are endogenous due to pre-matching
investments. The model can be used to measure the impact of frictions in labour
markets using a single cross-section of matched employer-employee data. The
observed matching of workers to firms is the outcome of a discrete, two-sided
matching process where firms with heterogeneous preferences over education
sequentially choose workers according to an index correlated with worker
preferences over firms. The distribution of education arises in equilibrium
from a Bayesian game: workers, knowing the distribution of worker and firm
types, invest in education prior to the matching process. Although the observed
matching exhibits strong cross-sectional dependence due to the matching
process, we propose an asymptotically valid inference procedure that combines
discrete choice methods with simulation.","Schooling Choice, Labour Market Matching, and Wages",2018-03-24 03:41:09,Jacob Schwartz,"http://arxiv.org/abs/1803.09020v6, http://arxiv.org/pdf/1803.09020v6",econ.EM
28831,em,"In this paper, we assess the impact of climate shocks on futures markets for
agricultural commodities and a set of macroeconomic quantities for multiple
high-income economies. To capture relations among countries, markets, and
climate shocks, this paper proposes parsimonious methods to estimate
high-dimensional panel VARs. We assume that coefficients associated with
domestic lagged endogenous variables arise from a Gaussian mixture model while
further parsimony is achieved using suitable global-local shrinkage priors on
several regions of the parameter space. Our results point towards pronounced
global reactions of key macroeconomic quantities to climate shocks. Moreover,
the empirical findings highlight substantial linkages between regionally
located climate shifts and global commodity markets.",A Bayesian panel VAR model to analyze the impact of climate change on high-income economies,2018-04-04 21:23:10,"Florian Huber, Tamás Krisztin, Michael Pfarrhofer","http://arxiv.org/abs/1804.01554v3, http://arxiv.org/pdf/1804.01554v3",econ.EM
28832,em,"This paper provides a new methodology to analyze unobserved heterogeneity
when observed characteristics are modeled nonlinearly. The proposed model
builds on varying random coefficients (VRC) that are determined by nonlinear
functions of observed regressors and additively separable unobservables. This
paper proposes a novel estimator of the VRC density based on weighted sieve
minimum distance. The main example of sieve bases are Hermite functions which
yield a numerically stable estimation procedure. This paper shows inference
results that go beyond what has been shown in ordinary RC models. We provide in
each case rates of convergence and also establish pointwise limit theory of
linear functionals, where a prominent example is the density of potential
outcomes. In addition, a multiplier bootstrap procedure is proposed to
construct uniform confidence bands. A Monte Carlo study examines finite sample
properties of the estimator and shows that it performs well even when the
regressors associated with the RC are far from being heavy-tailed. Finally, the
methodology is applied to analyze heterogeneity in income elasticity of demand
for housing.",Varying Random Coefficient Models,2018-04-09 20:16:52,Christoph Breunig,"http://arxiv.org/abs/1804.03110v4, http://arxiv.org/pdf/1804.03110v4",econ.EM
28833,em,"We develop point-identification for the local average treatment effect when
the binary treatment contains a measurement error. The standard instrumental
variable estimator is inconsistent for the parameter since the measurement
error is non-classical by construction. We correct the problem by identifying
the distribution of the measurement error based on the use of an exogenous
variable that can even be a binary covariate. The moment conditions derived
from the identification lead to generalized method of moments estimation with
asymptotically valid inferences. Monte Carlo simulations and an empirical
illustration demonstrate the usefulness of the proposed procedure.",Inference on Local Average Treatment Effects for Misclassified Treatment,2018-04-10 08:57:30,Takahide Yanagi,"http://arxiv.org/abs/1804.03349v1, http://arxiv.org/pdf/1804.03349v1",econ.EM
28834,em,"This paper re-examines the Shapley value methods for attribution analysis in
the area of online advertising. As a credit allocation solution in cooperative
game theory, Shapley value method directly quantifies the contribution of
online advertising inputs to the advertising key performance indicator (KPI)
across multiple channels. We simplify its calculation by developing an
alternative mathematical formulation. The new formula significantly improves
the computational efficiency and therefore extends the scope of applicability.
Based on the simplified formula, we further develop the ordered Shapley value
method. The proposed method is able to take into account the order of channels
visited by users. We claim that it provides a more comprehensive insight by
evaluating the attribution of channels at different stages of user conversion
journeys. The proposed approaches are illustrated using a real-world online
advertising campaign dataset.",Shapley Value Methods for Attribution Modeling in Online Advertising,2018-04-15 12:19:25,"Kaifeng Zhao, Seyed Hanif Mahboobi, Saeed R. Bagheri","http://arxiv.org/abs/1804.05327v1, http://arxiv.org/pdf/1804.05327v1",econ.EM
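For illustration of the attribution idea above, the sketch below computes exact Shapley values for a small set of advertising channels using the generic cooperative-game formula; it does not reproduce the paper's simplified formula or its ordered variant, and the coalition worths are invented.

```python
# Exact Shapley-value attribution over channels (hypothetical coalition worths).
from itertools import combinations
from math import factorial

channels = ["search", "display", "email"]

def v(subset):
    # Hypothetical worth of a coalition of channels (e.g., expected conversions).
    worth = {frozenset(): 0, frozenset({"search"}): 10, frozenset({"display"}): 6,
             frozenset({"email"}): 4, frozenset({"search", "display"}): 14,
             frozenset({"search", "email"}): 13, frozenset({"display", "email"}): 9,
             frozenset(channels): 18}
    return worth[frozenset(subset)]

def shapley(player):
    n = len(channels)
    others = [c for c in channels if c != player]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v(set(S) | {player}) - v(S))
    return total

for c in channels:
    print(c, round(shapley(c), 2))
# By the efficiency property, the attributions sum to v(all channels) = 18.
```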
28835,em,"To estimate the dynamic effects of an absorbing treatment, researchers often
use two-way fixed effects regressions that include leads and lags of the
treatment. We show that in settings with variation in treatment timing across
units, the coefficient on a given lead or lag can be contaminated by effects
from other periods, and apparent pretrends can arise solely from treatment
effects heterogeneity. We propose an alternative estimator that is free of
contamination, and illustrate the relative shortcomings of two-way fixed
effects regressions with leads and lags through an empirical application.",Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects,2018-04-16 19:54:46,"Liyang Sun, Sarah Abraham","http://arxiv.org/abs/1804.05785v2, http://arxiv.org/pdf/1804.05785v2",econ.EM
28836,em,"This paper offers a two-pronged critique of the empirical investigation of
the income distribution performed by physicists over the past decade. Their
findings rely on the graphical analysis of the observed distribution of
normalized incomes. Two central observations lead to the conclusion that the
majority of incomes are exponentially distributed, but neither each individual
piece of evidence nor their concurrent observation robustly proves that the
thermal and superthermal mixture fits the observed distribution of incomes
better than reasonable alternatives. A formal analysis using popular measures
of fit shows that while an exponential distribution with a power-law tail
provides a better fit of the IRS income data than the log-normal distribution
(often assumed by economists), the thermal and superthermal mixture's fit can
be improved upon further by adding a log-normal component. The economic
implications of the thermal and superthermal distribution of incomes, and the
expanded mixture are explored in the paper.",Revisiting the thermal and superthermal two-class distribution of incomes: A critical perspective,2018-04-17 19:09:59,Markus P. A. Schneider,"http://dx.doi.org/10.1140/epjb/e2014-50501-x, http://arxiv.org/abs/1804.06341v1, http://arxiv.org/pdf/1804.06341v1",econ.EM
28837,em,"Researchers increasingly leverage movement across multiple treatments to
estimate causal effects. While these ""mover regressions"" are often motivated by
a linear constant-effects model, it is not clear what they capture under weaker
quasi-experimental assumptions. I show that binary treatment mover regressions
recover a convex average of four difference-in-difference comparisons and are
thus causally interpretable under a standard parallel trends assumption.
Estimates from multiple-treatment models, however, need not be causal without
stronger restrictions on the heterogeneity of treatment effects and
time-varying shocks. I propose a class of two-step estimators to isolate and
combine the large set of difference-in-difference quasi-experiments generated
by a mover design, identifying mover average treatment effects under
conditional-on-covariate parallel trends and effect homogeneity restrictions. I
characterize the efficient estimators in this class and derive specification
tests based on the model's overidentifying restrictions. Future drafts will
apply the theory to the Finkelstein et al. (2016) movers design, analyzing the
causal effects of geography on healthcare utilization.",Estimating Treatment Effects in Mover Designs,2018-04-18 16:42:55,Peter Hull,"http://arxiv.org/abs/1804.06721v1, http://arxiv.org/pdf/1804.06721v1",econ.EM
28838,em,"The study aims to identify the institutional flaws of the current EU waste
management model by analysing the economic model of extended producer
responsibility and collective waste management systems and to create a model
for measuring the transaction costs borne by waste recovery organizations. The
model was validated by analysing the Bulgarian collective waste management
systems that have been complying with the EU legislation for the last 10 years.
The analysis focuses on waste oils because of their economic importance and the
limited number of studies and analyses in this field as the predominant body of
research to date has mainly addressed packaging waste, mixed household waste or
discarded electrical and electronic equipment. The study aims to support the
process of establishing a circular economy in the EU, which was initiated in
2015.",Transaction Costs in Collective Waste Recovery Systems in the EU,2018-04-18 18:40:15,Shteryo Nozharov,"http://arxiv.org/abs/1804.06792v1, http://arxiv.org/pdf/1804.06792v1",econ.EM
28839,em,"We study the foundations of empirical equilibrium, a refinement of Nash
equilibrium that is based on a non-parametric characterization of empirical
distributions of behavior in games (Velez and Brown,2020b arXiv:1907.12408).
The refinement can be alternatively defined as those Nash equilibria that do
not refute the regular QRE theory of Goeree, Holt, and Palfrey (2005). By
contrast, some empirical equilibria may refute monotone additive randomly
disturbed payoff models. As a by product, we show that empirical equilibrium
does not coincide with refinements based on approximation by monotone additive
randomly disturbed payoff models, and further our understanding of the
empirical content of these models.",Empirical Equilibrium,2018-04-21 18:38:24,"Rodrigo A. Velez, Alexander L. Brown","http://arxiv.org/abs/1804.07986v3, http://arxiv.org/pdf/1804.07986v3",econ.EM
28840,em,"We analyze an operational policy for a multinational manufacturer to hedge
against exchange rate uncertainties and competition. We consider a single
product and single period. Because of long lead times, the capacity investment
must be made before the selling season begins, when the exchange rate between the
two countries is uncertain. We consider duopoly competition in the foreign
country. We model the exchange rate as a random variable. We investigate the
impact of competition and exchange rate on optimal capacities and optimal
prices. We show how competition can impact the decision of the home
manufacturer to enter the foreign market.",Price Competition with Geometric Brownian motion in Exchange Rate Uncertainty,2018-04-22 21:33:53,"Murat Erkoc, Huaqing Wang, Anas Ahmed","http://arxiv.org/abs/1804.08153v1, http://arxiv.org/pdf/1804.08153v1",econ.EM
28841,em,"Call centers' managers are interested in obtaining accurate point and
distributional forecasts of call arrivals in order to achieve an optimal
balance between service quality and operating costs. We present a strategy for
selecting forecast models of call arrivals which is based on three pillars: (i)
flexibility of the loss function; (ii) statistical evaluation of forecast
accuracy; (iii) economic evaluation of forecast performance using money
metrics. We implement fourteen time series models and seven forecast
combination schemes on three series of daily call arrivals. Although we focus
mainly on point forecasts, we also analyze density forecast evaluation. We show
that second moments modeling is important both for point and density
forecasting and that the simple Seasonal Random Walk model is always
outperformed by more general specifications. Our results suggest that call
center managers should invest in the use of forecast models which describe both
first and second moments of call arrivals.",Statistical and Economic Evaluation of Time Series Models for Forecasting Arrivals at Call Centers,2018-04-23 12:57:42,"Andrea Bastianin, Marzio Galeotti, Matteo Manera","http://dx.doi.org/10.1007/s00181-018-1475-y, http://arxiv.org/abs/1804.08315v1, http://arxiv.org/pdf/1804.08315v1",econ.EM
28842,em,"Economic inequality is one of the pivotal issues for most of economic and
social policy makers across the world to insure the sustainable economic growth
and justice. In the mainstream school of economics, namely neoclassical
theories, economic issues are dealt with in a mechanistic manner. Such a
mainstream framework is majorly focused on investigating a socio-economic
system based on an axiomatic scheme where reductionism approach plays a vital
role. The major limitations of such theories include unbounded rationality of
economic agents, reducing the economic aggregates to a set of predictable
factors and lack of attention to adaptability and the evolutionary nature of
economic agents. In tackling deficiencies of conventional economic models, in
the past two decades, some new approaches have been adopted. One of these
novel approaches is the complex adaptive systems (CAS) framework, which has
shown very promising performance in practice. In contrast to the mainstream school,
under this framework, the economic phenomena are studied in an organic manner
where the economic agents are supposed to be both boundedly rational and
adaptive. According to it, the economic aggregates emerge out of the ways
agents of a system decide and interact. As a powerful way of modeling CASs,
agent-based models (ABMs) have found growing application among academics
and practitioners. ABMs show how simple behavioral rules of agents and
local interactions among them at micro-scale can generate surprisingly complex
patterns at macro-scale. In this paper, ABMs have been used to show (1) how
economic inequality emerges in a system and to explain (2) how sadaqah, as an
Islamic charity rule, can substantially help alleviate inequality and how
resource allocation strategies adopted by charity entities can accelerate this
alleviation.",Economic inequality and Islamic Charity: An exploratory agent-based modeling approach,2018-04-25 01:43:11,"Hossein Sabzian, Alireza Aliahmadi, Adel Azar, Madjid Mirzaee","http://arxiv.org/abs/1804.09284v1, http://arxiv.org/pdf/1804.09284v1",econ.EM
28843,em,"This paper is concerned with inference about low-dimensional components of a
high-dimensional parameter vector $\beta^0$ which is identified through
instrumental variables. We allow for eigenvalues of the expected outer product
of included and excluded covariates, denoted by $M$, to shrink to zero as the
sample size increases. We propose a novel estimator based on desparsification
of an instrumental variable Lasso estimator, which is a regularized version of
2SLS with an additional correction term. This estimator converges to $\beta^0$
at a rate depending on the mapping properties of $M$ captured by a sparse link
condition. Linear combinations of our estimator of $\beta^0$ are shown to be
asymptotically normally distributed. Based on consistent covariance estimation,
our method allows for constructing confidence intervals and statistical tests
for single or low-dimensional components of $\beta^0$. In Monte-Carlo
simulations we analyze the finite sample behavior of our estimator.",Ill-posed Estimation in High-Dimensional Models with Instrumental Variables,2018-06-02 19:41:24,"Christoph Breunig, Enno Mammen, Anna Simoni","http://arxiv.org/abs/1806.00666v2, http://arxiv.org/pdf/1806.00666v2",econ.EM
28863,em,"By recasting indirect inference estimation as a prediction rather than a
minimization and by using regularized regressions, we can bypass the three
major problems of estimation: selecting the summary statistics, defining the
distance function and minimizing it numerically. By substituting regression
with classification we can extend this approach to model selection as well. We
present three examples: a statistical fit, the parametrization of a simple real
business cycle model and heuristics selection in a fishery agent-based model.
The outcome is a method that automatically chooses summary statistics, weighs
them, and uses them to parametrize models without running any direct
minimization.",Indirect inference through prediction,2018-07-04 16:52:24,"Ernesto Carrella, Richard M. Bailey, Jens Koed Madsen","http://dx.doi.org/10.18564/jasss.4150, http://arxiv.org/abs/1807.01579v1, http://arxiv.org/pdf/1807.01579v1",econ.EM
28844,em,"I propose a nonparametric iid bootstrap procedure for the empirical
likelihood, the exponential tilting, and the exponentially tilted empirical
likelihood estimators that achieves asymptotic refinements for t tests and
confidence intervals, and Wald tests and confidence regions based on such
estimators. Furthermore, the proposed bootstrap is robust to model
misspecification, i.e., it achieves asymptotic refinements regardless of
whether the assumed moment condition model is correctly specified or not. This
result is new, because asymptotic refinements of the bootstrap based on these
estimators have not been established in the literature even under correct model
specification. Monte Carlo experiments are conducted in dynamic panel data
setting to support the theoretical finding. As an application, bootstrap
confidence intervals for the returns to schooling of Hellerstein and Imbens
(1999) are calculated. The result suggests that the returns to schooling may be
higher.",Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Empirical Likelihood Estimators,2018-06-04 07:54:48,Seojeong Lee,"http://dx.doi.org/10.1016/j.jeconom.2015.11.003, http://arxiv.org/abs/1806.00953v2, http://arxiv.org/pdf/1806.00953v2",econ.EM
28845,em,"Many studies use shift-share (or ``Bartik'') instruments, which average a set
of shocks with exposure share weights. We provide a new econometric framework
for shift-share instrumental variable (SSIV) regressions in which
identification follows from the quasi-random assignment of shocks, while
exposure shares are allowed to be endogenous. The framework is motivated by an
equivalence result: the orthogonality between a shift-share instrument and an
unobserved residual can be represented as the orthogonality between the
underlying shocks and a shock-level unobservable. SSIV regression coefficients
can similarly be obtained from an equivalent shock-level regression, motivating
shock-level conditions for their consistency. We discuss and illustrate several
practical insights of this framework in the setting of Autor et al. (2013),
estimating the effect of Chinese import competition on manufacturing employment
across U.S. commuting zones.",Quasi-Experimental Shift-Share Research Designs,2018-06-04 20:03:07,"Kirill Borusyak, Peter Hull, Xavier Jaravel","http://arxiv.org/abs/1806.01221v9, http://arxiv.org/pdf/1806.01221v9",econ.EM
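A minimal sketch, under assumptions, of the construction in the abstract above: regional exposure shares average quasi-randomly assigned sector-level shocks to form a shift-share (Bartik) instrument, which then instruments an endogenous regional regressor in 2SLS. The data and names are simulated, and the shock-level inference machinery of the paper is not reproduced.

```python
# Shift-share instrument and two-stage least squares on simulated regional data.
import numpy as np

rng = np.random.default_rng(3)
R, K = 500, 20                                   # regions, sectors
shares = rng.dirichlet(np.ones(K), size=R)       # regional sector exposure shares
shocks = rng.normal(size=K)                      # quasi-randomly assigned sector shocks
z = shares @ shocks                              # shift-share instrument

u = rng.normal(size=R)                           # unobserved regional residual
x = z + 0.5 * u + rng.normal(size=R)             # endogenous regressor (e.g., import exposure)
y = 1.0 * x + u + rng.normal(size=R)             # outcome; true coefficient is 1.0

# 2SLS by hand: first stage, then regress y on the fitted regressor (constant included).
Z = np.column_stack([np.ones(R), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
Xhat = np.column_stack([np.ones(R), x_hat])
beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]
print("2SLS estimate of the effect:", round(beta[1], 3))
```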
28846,em,"The implementation of a supervision and incentive process for identical
workers may lead to wage variance that stems from employer and employee
optimization. The harder it is to assess the nature of the labor output, the
more important such a process becomes, and the greater its influence on wage
growth. The dynamic model presented in this paper shows that
an employer will choose to pay a worker a starting wage that is less than what
he deserves, resulting in a wage profile that fits the classic profile in the
human-capital literature. The wage profile and wage variance rise at times of
technological advancements, which leads to increased turnover as older workers
are replaced by younger workers due to a rise in the relative marginal cost of
the former.",The Impact of Supervision and Incentive Process in Explaining Wage Profile and Variance,2018-06-04 22:05:37,"Nitsa Kasir, Idit Sohlberg","http://arxiv.org/abs/1806.01332v1, http://arxiv.org/pdf/1806.01332v1",econ.EM
28847,em,"I propose a nonparametric iid bootstrap that achieves asymptotic refinements
for t tests and confidence intervals based on GMM estimators even when the
model is misspecified. In addition, my bootstrap does not require recentering
the moment function, which has been considered as critical for GMM. Regardless
of model misspecification, the proposed bootstrap achieves the same sharp
magnitude of refinements as the conventional bootstrap methods which establish
asymptotic refinements by recentering in the absence of misspecification. The
key idea is to link the misspecified bootstrap moment condition to the large
sample theory of GMM under misspecification of Hall and Inoue (2003). Two
examples are provided: Combining data sets and invalid instrumental variables.",Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Method of Moments Estimators,2018-06-05 04:13:06,Seojeong Lee,"http://dx.doi.org/10.1016/j.jeconom.2013.05.008, http://arxiv.org/abs/1806.01450v1, http://arxiv.org/pdf/1806.01450v1",econ.EM
28848,em,"Under treatment effect heterogeneity, an instrument identifies the
instrument-specific local average treatment effect (LATE). With multiple
instruments, two-stage least squares (2SLS) estimand is a weighted average of
different LATEs. What is often overlooked in the literature is that the
postulated moment condition evaluated at the 2SLS estimand does not hold unless
those LATEs are the same. If so, the conventional heteroskedasticity-robust
variance estimator would be inconsistent, and 2SLS standard errors based on
such estimators would be incorrect. I derive the correct asymptotic
distribution, and propose a consistent asymptotic variance estimator by using
the result of Hall and Inoue (2003, Journal of Econometrics) on misspecified
moment condition models. This can be used to correctly calculate the standard
errors regardless of whether there is more than one LATE or not.",A Consistent Variance Estimator for 2SLS When Instruments Identify Different LATEs,2018-06-05 04:36:49,Seojeong Lee,"http://dx.doi.org/10.1080/07350015.2016.1186555, http://arxiv.org/abs/1806.01457v1, http://arxiv.org/pdf/1806.01457v1",econ.EM
28849,em,"We propose leave-out estimators of quadratic forms designed for the study of
linear models with unrestricted heteroscedasticity. Applications include
analysis of variance and tests of linear restrictions in models with many
regressors. An approximation algorithm is provided that enables accurate
computation of the estimator in very large datasets. We study the large sample
properties of our estimator allowing the number of regressors to grow in
proportion to the number of observations. Consistency is established in a
variety of settings where plug-in methods and estimators predicated on
homoscedasticity exhibit first-order biases. For quadratic forms of increasing
rank, the limiting distribution can be represented by a linear combination of
normal and non-central $\chi^2$ random variables, with normality ensuing under
strong identification. Standard error estimators are proposed that enable tests
of linear restrictions and the construction of uniformly valid confidence
intervals for quadratic forms of interest. We find in Italian social security
records that leave-out estimates of a variance decomposition in a two-way fixed
effects model of wage determination yield substantially different conclusions
regarding the relative contribution of workers, firms, and worker-firm sorting
to wage inequality than conventional methods. Monte Carlo exercises corroborate
the accuracy of our asymptotic approximations, with clear evidence of
non-normality emerging when worker mobility between blocks of firms is limited.",Leave-out estimation of variance components,2018-06-05 07:59:27,"Patrick Kline, Raffaele Saggio, Mikkel Sølvsten","http://arxiv.org/abs/1806.01494v2, http://arxiv.org/pdf/1806.01494v2",econ.EM
28850,em,"Autonomous ships (AS) used for cargo transport have gained a considerable
amount of attention in recent years. They promise benefits such as reduced crew
costs, increased safety and increased flexibility. This paper explores the
effects of a faster increase in technological performance in maritime shipping
achieved by leveraging fast-improving technological domains such as computer
processors, and advanced energy storage. Based on historical improvement rates
of several modes of transport (Cargo Ships, Air, Rail, Trucking) a simplified
Markov-chain Monte-Carlo (MCMC) simulation of an intermodal transport model
(IMTM) is used to explore the effects of differing technological improvement
rates for AS. The results show that the annual improvement rates of traditional
shipping (Ocean Cargo Ships = 2.6%, Air Cargo = 5.5%, Trucking = 0.6%, Rail =
1.9%, Inland Water Transport = 0.4%) are lower than those of technologies
associated with automation such as Computer Processors (35.6%), Fuel Cells
(14.7%) and Automotive Autonomous Hardware (27.9%). The IMTM simulations up to
the year 2050 show that the introduction of any mode of autonomous transport
will increase competition in lower cost shipping options, but is unlikely to
significantly alter the overall distribution of transport mode costs. Secondly,
if all forms of transport end up converting to autonomous systems, then the
uncertainty surrounding the improvement rates yields a complex intermodal
transport solution involving several options, all at a much lower cost over
time. Ultimately, the research shows a need for more accurate measurement of
current autonomous transport costs and how they are changing over time.",A Quantitative Analysis of Possible Futures of Autonomous Transport,2018-06-05 17:00:58,"Christopher L. Benson, Pranav D Sumanth, Alina P Colling","http://arxiv.org/abs/1806.01696v1, http://arxiv.org/pdf/1806.01696v1",econ.EM
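A back-of-the-envelope Monte Carlo in the spirit of the simulation described above: relative costs are projected to 2050 under uncertain annual improvement rates. The central rates for traditional modes are taken from the abstract, while the uncertainty spread, the cost normalization, and the autonomous-ship rate are assumptions.

```python
# Project relative costs to 2050 under sampled annual improvement rates (illustrative).
import numpy as np

rng = np.random.default_rng(4)
years = 2050 - 2018
central_rates = {"ocean cargo": 0.026, "air cargo": 0.055, "trucking": 0.006,
                 "rail": 0.019, "inland water": 0.004, "autonomous ship (assumed)": 0.15}

n_draws = 10_000
for mode, r in central_rates.items():
    # Spread around each central improvement rate is an assumption.
    draws = rng.normal(loc=r, scale=0.3 * r, size=n_draws).clip(min=0)
    cost_2050 = (1 - draws) ** years          # cost relative to today; improvement compounds
    lo, hi = np.percentile(cost_2050, [10, 90])
    print(f"{mode:28s} median relative cost in 2050: {np.median(cost_2050):.2f} "
          f"(10-90%: {lo:.2f}-{hi:.2f})")
```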
28851,em,"A standard growth model is modified in a straightforward way to incorporate
what Keynes (1936) suggests in the ""essence"" of his general theory. The
theoretical essence is the idea that exogenous changes in investment cause
changes in employment and unemployment. We implement this idea by assuming the
path for capital growth rate is exogenous in the growth model. The result is a
growth model that can explain both long term trends and fluctuations around the
trend. The modified growth model was tested using the U.S. economic data from
1947 to 2014. The hypothesized inverse relationship between the capital growth
and changes in unemployment was confirmed, and the structurally estimated model
fits fluctuations in unemployment reasonably well.",A Growth Model with Unemployment,2018-06-11 23:29:04,"Mina Mahmoudi, Mark Pingle","http://arxiv.org/abs/1806.04228v1, http://arxiv.org/pdf/1806.04228v1",econ.EM
28852,em,"This study provides the theoretical framework and empirical model for
productivity growth evaluations in the agricultural sector, one of the most
important sectors in Iran's economic development plan. We use the Solow
residual model to measure the productivity growth share in the value-added
growth of the agricultural sector. Our time series data includes value-added
per worker, employment, and capital in this sector. The results show that the
average total factor productivity growth rate in the agricultural sector is
-0.72% during 1991-2010. Also, during this period, the share of total factor
productivity growth in the value-added growth is -19.6%, while it has been
forecasted to be 33.8% in the fourth development plan. Considering the
important role of capital in the sector's low productivity, we suggest
applying productivity management plans (especially with regard to capital
productivity) to achieve future growth goals.",The Role of Agricultural Sector Productivity in Economic Growth: The Case of Iran's Economic Development Plan,2018-06-11 23:43:32,"Morteza Tahamipour, Mina Mahmoudi","http://arxiv.org/abs/1806.04235v1, http://arxiv.org/pdf/1806.04235v1",econ.EM
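A worked example of the Solow-residual decomposition referenced above: total factor productivity growth is value-added growth minus the share-weighted growth of capital and labor. The numbers and the capital share are illustrative assumptions, not the study's Iranian data.

```python
# Solow residual: g_TFP = g_Y - alpha * g_K - (1 - alpha) * g_L (illustrative numbers).
alpha = 0.4                       # assumed capital share
g_y = 0.030                       # value-added growth
g_k = 0.045                       # capital growth
g_l = 0.020                       # employment growth

g_tfp = g_y - alpha * g_k - (1 - alpha) * g_l
share_of_growth = g_tfp / g_y
print(f"TFP growth: {g_tfp:.3%}, share of value-added growth: {share_of_growth:.1%}")
```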
28853,em,"Tariff liberalization and its impact on tax revenue is an important
consideration for developing countries, because they are increasingly facing
the difficult task of implementing and harmonizing regional and international
trade commitments. The tariff reform and its costs for the Iranian government are
among the issues examined in this study. Another goal of this paper
is to estimate the cost of trade liberalization. In this regard, the import value
of Iran's agricultural sector in 2010 is analyzed under two scenarios.
To reform nuisance tariffs, a VAT policy is used in both scenarios. In this
study, the TRIST method is used. In the first scenario, the import value decreases to
a level equal to that of the second scenario, and higher tariff revenue is generated.
The results show that reducing the average tariff rate does not always result
in a loss of tariff revenue. This paper shows that different forms of
tariffs can generate different amounts of revenue when they imply the same level of
liberalization and an equal effect on producers. Therefore, a well-designed tariff
regime can help a government generate revenue while increasing social welfare
through liberalization.",Estimating Trade-Related Adjustment Costs in the Agricultural Sector in Iran,2018-06-11 23:44:02,"Omid Karami, Mina Mahmoudi","http://arxiv.org/abs/1806.04238v1, http://arxiv.org/pdf/1806.04238v1",econ.EM
28854,em,"We consider the relation between Sion's minimax theorem for a continuous
function and a Nash equilibrium in an asymmetric multi-players zero-sum game in
which only one player is different from other players, and the game is
symmetric for the other players. Then,
  1. The existence of a Nash equilibrium, which is symmetric for players other
than one player, implies Sion's minimax theorem for pairs of this player and
one of other players with symmetry for the other players.
  2. Sion's minimax theorem for pairs of one player and one of other players
with symmetry for the other players implies the existence of a Nash equilibrium
which is symmetric for the other players.
  Thus, they are equivalent.",On the relation between Sion's minimax theorem and existence of Nash equilibrium in asymmetric multi-players zero-sum game with only one alien,2018-06-17 04:11:55,"Atsuhiro Satoh, Yasuhito Tanaka","http://arxiv.org/abs/1806.07253v1, http://arxiv.org/pdf/1806.07253v1",econ.EM
28855,em,"It is common practice in empirical work to employ cluster-robust standard
errors when using the linear regression model to estimate some
structural/causal effect of interest. Researchers also often include a large
set of regressors in their model specification in order to control for observed
and unobserved confounders. In this paper we develop inference methods for
linear regression models with many controls and clustering. We show that
inference based on the usual cluster-robust standard errors by Liang and Zeger
(1986) is invalid in general when the number of controls is a non-vanishing
fraction of the sample size. We then propose a new clustered standard errors
formula that is robust to the inclusion of many controls and allows one to carry
out valid inference in a variety of high-dimensional linear regression models,
including fixed effects panel data models and the semiparametric partially
linear model. Monte Carlo evidence supports our theoretical results and shows
that our proposed variance estimator performs well in finite samples. The
proposed method is also illustrated with an empirical application that
re-visits Donohue III and Levitt's (2001) study of the impact of abortion on
crime.",Cluster-Robust Standard Errors for Linear Regression Models with Many Controls,2018-06-19 18:48:50,Riccardo D'Adamo,"http://arxiv.org/abs/1806.07314v3, http://arxiv.org/pdf/1806.07314v3",econ.EM
28856,em,"We study inference in shift-share regression designs, such as when a regional
outcome is regressed on a weighted average of sectoral shocks, using regional
sector shares as weights. We conduct a placebo exercise in which we estimate
the effect of a shift-share regressor constructed with randomly generated
sectoral shocks on actual labor market outcomes across U.S. Commuting Zones.
Tests based on commonly used standard errors with 5\% nominal significance
level reject the null of no effect in up to 55\% of the placebo samples. We use
a stylized economic model to show that this overrejection problem arises
because regression residuals are correlated across regions with similar
sectoral shares, independently of their geographic location. We derive novel
inference methods that are valid under arbitrary cross-regional correlation in
the regression residuals. We show using popular applications of shift-share
designs that our methods may lead to substantially wider confidence intervals
in practice.",Shift-Share Designs: Theory and Inference,2018-06-20 21:57:10,"Rodrigo Adão, Michal Kolesár, Eduardo Morales","http://dx.doi.org/10.1093/qje/qjz025, http://arxiv.org/abs/1806.07928v5, http://arxiv.org/pdf/1806.07928v5",econ.EM
28857,em,"In this paper, we explore the relationship between state-level household
income inequality and macroeconomic uncertainty in the United States. Using a
novel large-scale macroeconometric model, we shed light on regional disparities
of inequality responses to a national uncertainty shock. The results suggest
that income inequality decreases in most states, with a pronounced degree of
heterogeneity in terms of shapes and magnitudes of the dynamic responses. By
contrast, a few states, mostly located in the West and South census regions,
display increasing levels of income inequality over time. We find that this
directional pattern in responses is mainly driven by the income composition and
labor market fundamentals. In addition, forecast error variance decompositions
allow for a quantitative assessment of the importance of uncertainty shocks in
explaining income inequality. The findings highlight that volatility shocks
account for a considerable fraction of forecast error variance for most states
considered. Finally, a regression-based analysis sheds light on the driving
forces behind differences in state-specific inequality responses.",The transmission of uncertainty shocks on income inequality: State-level evidence from the United States,2018-06-21 17:57:45,"Manfred M. Fischer, Florian Huber, Michael Pfarrhofer","http://arxiv.org/abs/1806.08278v1, http://arxiv.org/pdf/1806.08278v1",econ.EM
28858,em,"We propose a new class of unit root tests that exploits invariance properties
in the Locally Asymptotically Brownian Functional limit experiment associated
to the unit root model. The invariance structures naturally suggest tests that
are based on the ranks of the increments of the observations, their average,
and an assumed reference density for the innovations. The tests are
semiparametric in the sense that they are valid, i.e., have the correct
(asymptotic) size, irrespective of the true innovation density. For a correctly
specified reference density, our test is point-optimal and nearly efficient.
For arbitrary reference densities, we establish a Chernoff-Savage type result,
i.e., our test performs as well as commonly used tests under Gaussian
innovations but has improved power under other, e.g., fat-tailed or skewed,
innovation distributions. To avoid nonparametric estimation, we propose a
simplified version of our test that exhibits the same asymptotic properties,
except for the Chernoff-Savage result that we are only able to demonstrate by
means of simulations.",Semiparametrically Point-Optimal Hybrid Rank Tests for Unit Roots,2018-06-25 10:03:48,"Bo Zhou, Ramon van den Akker, Bas J. M. Werker","http://dx.doi.org/10.1214/18-AOS1758, http://arxiv.org/abs/1806.09304v1, http://arxiv.org/pdf/1806.09304v1",econ.EM
28859,em,"In this article we introduce a general nonparametric point-identification
result for nonseparable triangular models with a multivariate first- and second
stage. Based on this we prove point-identification of Hedonic models with
multivariate heterogeneity and endogenous observable characteristics, extending
and complementing identification results from the literature which all require
exogeneity. As an additional application of our theoretical result, we show
that the BLP model (Berry et al. 1995) can also be identified without index
restrictions.",Point-identification in multivariate nonseparable triangular models,2018-06-25 22:36:39,Florian Gunsilius,"http://arxiv.org/abs/1806.09680v1, http://arxiv.org/pdf/1806.09680v1",econ.EM
28860,em,"Historical examination of the Bretton Woods system allows comparisons to be
made with the current evolution of the EMS.",The Bretton Woods Experience and ERM,2018-07-02 03:00:20,Chris Kirrane,"http://arxiv.org/abs/1807.00418v1, http://arxiv.org/pdf/1807.00418v1",econ.EM
28861,em,"This paper describes the opportunities and also the difficulties of EMU with
regard to international monetary cooperation. Even though the institutional and
intellectual assistance to the coordination of monetary policy in the EU will
probably be strengthened with the EMU, one of the shortcomings of the Maastricht
Treaty concerns the relationship between the founder members and those
countries who wish to remain outside monetary union.",Maastricht and Monetary Cooperation,2018-07-02 03:01:08,Chris Kirrane,"http://arxiv.org/abs/1807.00419v1, http://arxiv.org/pdf/1807.00419v1",econ.EM
28862,em,"This paper proposes a hierarchical modeling approach to perform stochastic
model specification in Markov switching vector error correction models. We
assume that a common distribution gives rise to the regime-specific regression
coefficients. The mean as well as the variances of this distribution are
treated as fully stochastic and suitable shrinkage priors are used. These
shrinkage priors enable us to assess which coefficients differ across regimes in a
flexible manner. In the case of similar coefficients, our model pushes the
respective regions of the parameter space towards the common distribution. This
allows for selecting a parsimonious model while still maintaining sufficient
flexibility to control for sudden shifts in the parameters, if necessary. We
apply our modeling approach to real-time Euro area data and assume transition
probabilities between expansionary and recessionary regimes to be driven by the
cointegration errors. The results suggest that the regime allocation is
governed by a subset of short-run adjustment coefficients and regime-specific
variance-covariance matrices. These findings are complemented by an
out-of-sample forecast exercise, illustrating the advantages of the model for
predicting Euro area inflation in real time.",Stochastic model specification in Markov switching vector error correction models,2018-07-02 11:36:11,"Niko Hauzenberger, Florian Huber, Michael Pfarrhofer, Thomas O. Zörner","http://arxiv.org/abs/1807.00529v2, http://arxiv.org/pdf/1807.00529v2",econ.EM
28864,em,"This paper studies the identifying content of the instrument monotonicity
assumption of Imbens and Angrist (1994) on the distribution of potential
outcomes in a model with a binary outcome, a binary treatment and an exogenous
binary instrument. Specifically, I derive necessary and sufficient conditions
on the distribution of the data under which the identified set for the
distribution of potential outcomes when the instrument monotonicity assumption
is imposed can be a strict subset of that when it is not imposed.",On the Identifying Content of Instrument Monotonicity,2018-07-04 19:25:35,Vishal Kamat,"http://arxiv.org/abs/1807.01661v2, http://arxiv.org/pdf/1807.01661v2",econ.EM
28865,em,"This paper develops an inferential theory for state-varying factor models of
large dimensions. Unlike constant factor models, loadings are general functions
of some recurrent state process. We develop an estimator for the latent factors
and state-varying loadings under a large cross-section and time dimension. Our
estimator combines nonparametric methods with principal component analysis. We
derive the rate of convergence and limiting normal distribution for the
factors, loadings and common components. In addition, we develop a statistical
test for a change in the factor structure in different states. We apply the
estimator to U.S. Treasury yields and S&P500 stock returns. The systematic
factor structure in treasury yields differs in times of booms and recessions as
well as in periods of high market volatility. State-varying factors based on
the VIX capture significantly more variation and pricing information in
individual stocks than constant factor models.",State-Varying Factor Models of Large Dimensions,2018-07-06 07:05:40,"Markus Pelger, Ruoxuan Xiong","http://arxiv.org/abs/1807.02248v4, http://arxiv.org/pdf/1807.02248v4",econ.EM
28866,em,"The methods of new institutional economics for identifying the transaction
costs of trade litigations in Bulgaria are used in the current paper. For the
needs of the research, an indicative model measuring this type of cost at the
microeconomic level is applied in the study. The main purpose of the model is
to forecast the rational behavior of trade litigation parties in accordance
with the transaction costs in the process of enforcing the execution of the
signed commercial contract. The application of the model is related to the more
accurate measurement of the transaction costs at the microeconomic level, which
could lead to better prediction and management of these costs so that
market efficiency and economic growth can be achieved. In addition, an attempt
is made to analyse the efficiency of the institutional change of the
commercial justice system and the impact of the reform of the judicial system
on economic turnover. An increase in, or failure to reduce, the
transaction costs in trade litigations would mean inefficiency of the reform of
the judicial system. JEL Codes: O43, P48, D23, K12",Transaction costs and institutional change of trade litigations in Bulgaria,2018-07-09 13:34:56,"Shteryo Nozharov, Petya Koralova-Nozharova","http://arxiv.org/abs/1807.03034v1, http://arxiv.org/pdf/1807.03034v1",econ.EM
28867,em,"The meaning of public messages such as ""One in x people gets cancer"" or ""One
in y people gets cancer by age z"" can be improved. One assumption commonly
invoked is that there is no other cause of death, a confusing assumption. We
develop a light bulb model to clarify cumulative risk and we use Markov chain
modeling, incorporating the assumption widely in place, to evaluate transition
probabilities. Age-progression in the cancer risk is then reported on
Australian data. Future modelling can elicit realistic assumptions.",Cancer Risk Messages: A Light Bulb Model,2018-07-09 13:58:20,"Ka C. Chan, Ruth F. G. Williams, Christopher T. Lenard, Terence M. Mills","http://arxiv.org/abs/1807.03040v2, http://arxiv.org/pdf/1807.03040v2",econ.EM
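A toy illustration of the cumulative-risk idea discussed above ("1 in x gets cancer by age z") as a two-state chain with an absorbing "ever diagnosed" state, under the commonly invoked assumption of no other cause of death. The annual transition probabilities are invented for illustration, not Australian incidence data.

```python
# Cumulative risk from a two-state absorbing chain, assuming no other cause of death.
import numpy as np

ages = range(0, 85)
# Hypothetical annual probability of a first cancer diagnosis, rising with age.
annual_p = np.array([0.0005 + 0.00025 * a for a in ages])

p_never = 1.0
for p in annual_p:
    p_never *= (1 - p)           # probability of never being diagnosed so far
cumulative_risk = 1 - p_never
print(f"cumulative risk by age 85: {cumulative_risk:.1%}  (~1 in {1 / cumulative_risk:.0f})")
```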
28868,em,"Statements for public health purposes such as ""1 in 2 will get cancer by age
85"" have appeared in public spaces. The meaning drawn from such statements
affects economic welfare, not just public health. Both markets and government
use risk information on all kinds of risks; useful information can, in turn,
improve economic welfare, whereas inaccuracy can lower it. We adapt the
contingency table approach so that a quoted risk is cross-classified with the
states of nature. We show that bureaucratic objective functions regarding the
accuracy of a reported cancer risk can then be stated.",Cancer Risk Messages: Public Health and Economic Welfare,2018-07-09 14:18:01,"Ruth F. G. Williams, Ka C. Chan, Christopher T. Lenard, Terence M. Mills","http://arxiv.org/abs/1807.03045v2, http://arxiv.org/pdf/1807.03045v2",econ.EM
28869,em,"This paper applies economic concepts from measuring income inequality to an
exercise in assessing spatial inequality in cancer service access in regional
areas. We propose a mathematical model for accessing chemotherapy among local
government areas (LGAs). Our model incorporates a distance factor. With a
simulation we report results for a single inequality measure: the Lorenz curve
is depicted for our illustrative data. We develop this approach in order to
move incrementally towards its application to actual data and real-world health
service regions. We seek to develop exercises that can lead policy makers
to relevant policy information on the most useful data to collect and the
modelling needed for cancer service access in regional areas.",Simulation Modelling of Inequality in Cancer Service Access,2018-07-09 14:25:38,"Ka C. Chan, Ruth F. G. Williams, Christopher T. Lenard, Terence M. Mills","http://dx.doi.org/10.1080/27707571.2022.2127188, http://arxiv.org/abs/1807.03048v1, http://arxiv.org/pdf/1807.03048v1",econ.EM
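A minimal sketch of the inequality-measurement step described above: a Lorenz curve and Gini coefficient for chemotherapy access across local government areas, with an access score that discounts service volume by distance. All numbers, including the distance-decay rule, are invented for illustration.

```python
# Lorenz curve and Gini coefficient for a hypothetical access score across LGAs.
import numpy as np

rng = np.random.default_rng(6)
n_lga = 25
population = rng.integers(5_000, 100_000, size=n_lga)
services = rng.integers(0, 40, size=n_lga)           # chemotherapy services in/near each LGA
distance_km = rng.uniform(5, 300, size=n_lga)        # distance to nearest treatment centre
access = services / (1 + distance_km / 50)           # assumed distance-decay access score

# Lorenz curve: cumulative access share against cumulative population share,
# with LGAs ordered from lowest to highest per-capita access.
order = np.argsort(access / population)
cum_pop = np.insert(np.cumsum(population[order]) / population.sum(), 0, 0.0)
cum_access = np.insert(np.cumsum(access[order]) / access.sum(), 0, 0.0)
gini = 1 - np.sum((cum_access[1:] + cum_access[:-1]) * np.diff(cum_pop))
print("Gini coefficient of access across LGAs:", round(gini, 3))
```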
28870,em,"The data mining technique of time series clustering is well established in
many fields. However, as an unsupervised learning method, it requires making
choices that are nontrivially influenced by the nature of the data involved.
The aim of this paper is to verify the usefulness of the time series clustering
method for macroeconomic research, and to develop the most suitable
methodology.
  By extensively testing various possibilities, we arrive at a choice of a
dissimilarity measure (compression-based dissimilarity measure, or CDM) which
is particularly suitable for clustering macroeconomic variables. We check that
the results are stable in time and reflect large-scale phenomena such as
crises. We also successfully apply our findings to analysis of national
economies, specifically to identifying their structural relations.",Clustering Macroeconomic Time Series,2018-07-11 11:51:41,"Iwo Augustyński, Paweł Laskoś-Grabowski","http://dx.doi.org/10.15611/eada.2018.2.06, http://arxiv.org/abs/1807.04004v2, http://arxiv.org/pdf/1807.04004v2",econ.EM
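The compression-based dissimilarity measure (CDM) named in the abstract has a standard form, CDM(x, y) = C(xy) / (C(x) + C(y)), where C(.) is the compressed size of a string. Below is a minimal sketch using zlib and SciPy's hierarchical clustering; the simulated series and the quantile discretisation are illustrative assumptions, not the paper's setup.

```python
import zlib
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def to_symbols(series, bins=8):
    """Discretise a numeric series into byte symbols so it can be compressed."""
    edges = np.quantile(series, np.linspace(0, 1, bins + 1)[1:-1])
    return bytes(np.digitize(series, edges).astype(np.uint8))

def cdm(x, y):
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    return len(zlib.compress(x + y)) / (cx + cy)

# Illustrative random-walk series standing in for macroeconomic variables.
rng = np.random.default_rng(0)
series = [to_symbols(np.cumsum(rng.normal(size=200))) for _ in range(6)]

n = len(series)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = cdm(series[i], series[j])

# Average-linkage hierarchical clustering on the CDM dissimilarity matrix.
labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(labels)
```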
28871,em,"This paper re-examines the problem of estimating risk premia in linear factor
pricing models. Typically, the data used in the empirical literature are
characterized by weakness of some pricing factors, strong cross-sectional
dependence in the errors, and (moderately) high cross-sectional dimensionality.
Using an asymptotic framework where the number of assets/portfolios grows with
the time span of the data while the risk exposures of weak factors are
local-to-zero, we show that the conventional two-pass estimation procedure
delivers inconsistent estimates of the risk premia. We propose a new estimation
procedure based on sample-splitting instrumental variables regression. The
proposed estimator of risk premia is robust to weak included factors and to the
presence of strong unaccounted cross-sectional error dependence. We derive the
many-asset weak factor asymptotic distribution of the proposed estimator, show
how to construct its standard errors, verify its performance in simulations,
and revisit some empirical studies.","Factor models with many assets: strong factors, weak factors, and the two-pass procedure",2018-07-11 14:53:19,"Stanislav Anatolyev, Anna Mikusheva","http://arxiv.org/abs/1807.04094v2, http://arxiv.org/pdf/1807.04094v2",econ.EM
28872,em,"This paper analyzes the bank lending channel and the heterogeneous effects on
the euro area, providing evidence that the channel is indeed working. The
analysis of the transmission mechanism is based on structural impulse responses
to an unconventional monetary policy shock on bank loans. The Bank Lending
Survey (BLS) is exploited in order to get insights on developments of loan
demand and supply. The contribution of this paper is to use country-specific
data to analyze the consequences of unconventional monetary policy, instead of
taking an aggregate stance by using euro area data. This approach provides a
deeper understanding of the bank lending channel and its effects. That is, an
expansionary monetary policy shock leads to an increase in loan demand, supply
and output growth. A small north-south disparity between the countries can be
observed.",Heterogeneous Effects of Unconventional Monetary Policy on Loan Demand and Supply. Insights from the Bank Lending Survey,2018-07-11 17:36:21,Martin Guth,"http://arxiv.org/abs/1807.04161v1, http://arxiv.org/pdf/1807.04161v1",econ.EM
28873,em,"I present a dynamic, voluntary contribution mechanism, public good game and
derive its potential outcomes. In each period, players endogenously determine
contribution productivity by engaging in costly investment. The level of
contribution productivity carries from period to period, creating a dynamic
link between periods. The investment mimics investing in the stock of
technology for producing public goods such as national defense or a clean
environment. After investing, players decide how much of their remaining money
to contribute to provision of the public good, as in traditional public good
games. I analyze three kinds of outcomes of the game: the lowest payoff
outcome, the Nash Equilibria, and socially optimal behavior. In the lowest
payoff outcome, all players receive payoffs of zero. Nash Equilibrium occurs
when players invest any amount and contribute all or nothing depending on the
contribution productivity. Therefore, there are infinitely many Nash Equilibria
strategies. Finally, the socially optimal result occurs when players invest
everything in early periods, then at some point switch to contributing
everything. My goal is to discover and explain this point. I use mathematical
analysis and computer simulation to derive the results.",Analysis of a Dynamic Voluntary Contribution Mechanism Public Good Game,2018-07-12 17:13:41,Dmytro Bogatov,"http://arxiv.org/abs/1807.04621v2, http://arxiv.org/pdf/1807.04621v2",econ.EM
28874,em,"Wang and Tchetgen Tchetgen (2017) studied identification and estimation of
the average treatment effect when some confounders are unmeasured. Under their
identification condition, they showed that the semiparametric efficient
influence function depends on five unknown functionals. They proposed to
parameterize all functionals and estimate the average treatment effect from the
efficient influence function by replacing the unknown functionals with
estimated functionals. They established that their estimator is consistent when
certain functionals are correctly specified and attains the semiparametric
efficiency bound when all functionals are correctly specified. In applications,
it is likely that those functionals could all be misspecified. Consequently
their estimator could be inconsistent or consistent but not efficient. This
paper presents an alternative estimator that does not require parameterization
of any of the functionals. We establish that the proposed estimator is always
consistent and always attains the semiparametric efficiency bound. A simple and
intuitive estimator of the asymptotic variance is presented, and a small scale
simulation study reveals that the proposed estimator outperforms the existing
alternatives in finite samples.",A Simple and Efficient Estimation of the Average Treatment Effect in the Presence of Unmeasured Confounders,2018-07-16 07:42:01,"Chunrong Ai, Lukang Huang, Zheng Zhang","http://arxiv.org/abs/1807.05678v1, http://arxiv.org/pdf/1807.05678v1",econ.EM
28875,em,"This paper analyzes how the legalization of same-sex marriage in the U.S.
affected gay and lesbian couples in the labor market. Results from a
difference-in-difference model show that both partners in same-sex couples were
more likely to be employed, to have a full-time contract, and to work longer
hours in states that legalized same-sex marriage. In line with a theoretical
search model of discrimination, suggestive empirical evidence supports the
hypothesis that marriage equality led to an improvement in employment outcomes
among gays and lesbians and lower occupational segregation thanks to a decrease
in discrimination towards sexual minorities.","Pink Work: Same-Sex Marriage, Employment and Discrimination",2018-07-18 01:57:39,Dario Sansone,"http://arxiv.org/abs/1807.06698v1, http://arxiv.org/pdf/1807.06698v1",econ.EM
28876,em,"Regression quantiles have asymptotic variances that depend on the conditional
densities of the response variable given regressors. This paper develops a new
estimate of the asymptotic variance of regression quantiles that leads any
resulting Wald-type test or confidence region to behave as well in large
samples as its infeasible counterpart in which the true conditional response
densities are embedded. We give explicit guidance on implementing the new
variance estimator to control adaptively the size of any resulting Wald-type
test. Monte Carlo evidence indicates the potential of our approach to deliver
powerful tests of heterogeneity of quantile treatment effects in covariates
with good size performance over different quantile levels, data-generating
processes and sample sizes. We also include an empirical example. Supplementary
material is available online.",Quantile-Regression Inference With Adaptive Control of Size,2018-07-18 17:40:36,"Juan Carlos Escanciano, Chuan Goh","http://dx.doi.org/10.1080/01621459.2018.1505624, http://arxiv.org/abs/1807.06977v2, http://arxiv.org/pdf/1807.06977v2",econ.EM
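For context, the textbook large-sample result that makes the dependence on the conditional density explicit (standard sandwich form, not the paper's new variance estimator): for quantile level $\tau$,

```latex
\sqrt{n}\bigl(\hat\beta(\tau)-\beta(\tau)\bigr)\ \xrightarrow{d}\
N\!\bigl(0,\ \tau(1-\tau)\,D_1^{-1} D_0\, D_1^{-1}\bigr),\qquad
D_0=\mathbb{E}[XX'],\quad
D_1=\mathbb{E}\bigl[f_{Y\mid X}\bigl(X'\beta(\tau)\mid X\bigr)\,XX'\bigr],
```

so any feasible Wald-type inference requires an estimate of the conditional density entering $D_1$.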
28877,em,"The accumulation of knowledge required to produce economic value is a process
that often relates to nations' economic growth. Such a relationship, however, is
misleading when the proxy of such accumulation is the average years of
education. In this paper, we show that the predictive power of this proxy
started to dwindle in 1990 when nations' schooling began to homogenize. We
propose a metric of human capital that is less sensitive than average years of
education and remains a significant predictor of economic growth when tested
with both cross-section data and panel data. We argue that future research on
economic growth will discard quantity-based educational variables as predictors,
given the thresholds that these variables are reaching.",A New Index of Human Capital to Predict Economic Growth,2018-07-18 20:34:27,"Henry Laverde, Juan C. Correa, Klaus Jaffe","http://arxiv.org/abs/1807.07051v1, http://arxiv.org/pdf/1807.07051v1",econ.EM
28878,em,"The public debt and deficit ceilings of the Maastricht Treaty are the subject
of recurring controversy. First, there is debate about the role and impact of
these criteria in the initial phase of the introduction of the single currency.
Secondly, it must be specified how these will then be applied, in a permanent
regime, when the single currency is well established.",Stability in EMU,2018-07-20 10:53:14,Theo Peeters,"http://arxiv.org/abs/1807.07730v1, http://arxiv.org/pdf/1807.07730v1",econ.EM
28879,em,"If multiway cluster-robust standard errors are used routinely in applied
economics, surprisingly few theoretical results justify this practice. This
paper aims to fill this gap. We first prove, under nearly the same conditions
as with i.i.d. data, the weak convergence of empirical processes under multiway
clustering. This result implies central limit theorems for sample averages but
is also key for showing the asymptotic normality of nonlinear estimators such
as GMM estimators. We then establish consistency of various asymptotic variance
estimators, including that of Cameron et al. (2011) but also a new estimator
that is positive by construction. Next, we show the general consistency, for
linear and nonlinear estimators, of the pigeonhole bootstrap, a resampling
scheme adapted to multiway clustering. Monte Carlo simulations suggest that
inference based on our two preferred methods may be accurate even with very few
clusters, and significantly improve upon inference based on Cameron et al.
(2011).",Asymptotic results under multiway clustering,2018-07-20 19:33:13,"Laurent Davezies, Xavier D'Haultfoeuille, Yannick Guyonvarch","http://arxiv.org/abs/1807.07925v2, http://arxiv.org/pdf/1807.07925v2",econ.EM
28880,em,"In dynamical framework the conflict between government and the central bank
according to the exchange Rate of payment of fixed rates and fixed rates of
fixed income (EMU) convergence criteria such that the public debt / GDP ratio
The method consists of calculating private public debt management in a public
debt management system purpose there is no mechanism to allow naturally for
this adjustment.",EMU and ECB Conflicts,2018-07-21 09:57:15,William Mackenzie,"http://arxiv.org/abs/1807.08097v1, http://arxiv.org/pdf/1807.08097v1",econ.EM
28881,em,"Dynamic discrete choice models often discretize the state vector and restrict
its dimension in order to achieve valid inference. I propose a novel two-stage
estimator for the set-identified structural parameter that incorporates a
high-dimensional state space into the dynamic model of imperfect competition.
In the first stage, I estimate the state variable's law of motion and the
equilibrium policy function using machine learning tools. In the second stage,
I plug the first-stage estimates into a moment inequality and solve for the
structural parameter. The moment function is presented as the sum of two
components, where the first one expresses the equilibrium assumption and the
second one is a bias correction term that makes the sum insensitive (i.e.,
orthogonal) to first-stage bias. The proposed estimator uniformly converges at
the root-N rate and I use it to construct confidence regions. The results
developed here can be used to incorporate high-dimensional state space into
classic dynamic discrete choice models, for example, those considered in Rust
(1987), Bajari et al. (2007), and Scott (2013).",Machine Learning for Dynamic Discrete Choice,2018-08-08 01:23:50,Vira Semenova,"http://arxiv.org/abs/1808.02569v2, http://arxiv.org/pdf/1808.02569v2",econ.EM
28882,em,"This paper presents a weighted optimization framework that unifies the
binary, multi-valued, continuous, as well as mixtures of discrete and continuous
treatment, under the unconfounded treatment assignment. With a general loss
function, the framework includes the average, quantile and asymmetric least
squares causal effect of treatment as special cases. For this general
framework, we first derive the semiparametric efficiency bound for the causal
effect of treatment, extending the existing bound results to a wider class of
models. We then propose a generalized optimization estimation for the causal
effect with weights estimated by solving an expanding set of equations. Under
some sufficient conditions, we establish consistency and asymptotic normality
of the proposed estimator of the causal effect and show that the estimator
attains our semiparametric efficiency bound, thereby extending the existing
literature on efficient estimation of causal effect to a wider class of
applications. Finally, we discuss estimation of some causal effect functionals
such as the treatment effect curve and the average outcome. To evaluate the
finite sample performance of the proposed procedure, we conduct a small scale
simulation study and find that the proposed estimation has practical value. To
illustrate the applicability of the procedure, we revisit the literature on
campaign advertising and campaign contributions. Unlike the existing procedures,
which produce mixed results, we find no evidence of an effect of campaign
advertising on campaign contributions.",A Unified Framework for Efficient Estimation of General Treatment Models,2018-08-15 04:32:29,"Chunrong Ai, Oliver Linton, Kaiji Motegi, Zheng Zhang","http://arxiv.org/abs/1808.04936v2, http://arxiv.org/pdf/1808.04936v2",econ.EM
28883,em,"Recent years have seen many attempts to combine expenditure-side estimates of
U.S. real output (GDE) growth with income-side estimates (GDI) to improve
estimates of real GDP growth. We show how to incorporate information from
multiple releases of noisy data to provide more precise estimates while
avoiding some of the identifying assumptions required in earlier work. This
relies on a new insight: using multiple data releases allows us to distinguish
news and noise measurement errors in situations where a single vintage does
not.
  Our new measure, GDP++, fits the data better than GDP+, the GDP growth
measure of Aruoba et al. (2016) published by the Federal Reserve Bank of
Philadelphia. Historical decompositions show that GDE releases are more
informative than GDI, while the use of multiple data releases is particularly
important in the quarters leading up to the Great Recession.",Can GDP measurement be further improved? Data revision and reconciliation,2018-08-15 07:48:26,"Jan P. A. M. Jacobs, Samad Sarferaz, Jan-Egbert Sturm, Simon van Norden","http://arxiv.org/abs/1808.04970v1, http://arxiv.org/pdf/1808.04970v1",econ.EM
28884,em,"While investments in renewable energy sources (RES) are incentivized around
the world, the policy tools that do so are still poorly understood, leading to
costly misadjustments in many cases. As a case study, the deployment dynamics
of residential solar photovoltaics (PV) invoked by the German feed-in tariff
legislation are investigated. Here we report a model showing that the question
of when people invest in residential PV systems is found to be not only
determined by profitability, but also by profitability's change compared to the
status quo. This finding is interpreted in the light of loss aversion, a
concept developed in Kahneman and Tversky's Prospect Theory. The model is able
to reproduce most of the dynamics of the uptake with only a few financial and
behavioral assumptions",When Do Households Invest in Solar Photovoltaics? An Application of Prospect Theory,2018-08-16 19:29:55,"Martin Klein, Marc Deissenroth","http://dx.doi.org/10.1016/j.enpol.2017.06.067, http://arxiv.org/abs/1808.05572v1, http://arxiv.org/pdf/1808.05572v1",econ.EM
28910,em,"Kitamura and Stoye (2014) develop a nonparametric test for linear inequality
constraints, when these are represented as vertices of a polyhedron instead
of its faces. They implement this test for an application to nonparametric
tests of Random Utility Models. As they note in their paper, testing such
models is computationally challenging. In this paper, we develop and implement
more efficient algorithms, based on column generation, to carry out the test.
These improved algorithms allow us to tackle larger datasets.",Column Generation Algorithms for Nonparametric Analysis of Random Utility Models,2018-12-04 16:28:33,Bart Smeulders,"http://arxiv.org/abs/1812.01400v1, http://arxiv.org/pdf/1812.01400v1",econ.EM
28885,em,"The purpose of this paper is to provide guidelines for empirical researchers
who use a class of bivariate threshold crossing models with dummy endogenous
variables. A common practice employed by the researchers is the specification
of the joint distribution of the unobservables as a bivariate normal
distribution, which results in a bivariate probit model. To address the problem
of misspecification in this practice, we propose an easy-to-implement
semiparametric estimation framework with parametric copula and nonparametric
marginal distributions. We establish asymptotic theory, including root-n
normality, for the sieve maximum likelihood estimators that can be used to
conduct inference on the individual structural parameters and the average
treatment effect (ATE). In order to show the practical relevance of the
proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo
simulation exercises. The results suggest that the estimates of the parameters,
especially the ATE, are sensitive to parametric specification, while
semiparametric estimation exhibits robustness to underlying data generating
processes. We then provide an empirical illustration where we estimate the
effect of health insurance on doctor visits. In this paper, we also show that
the absence of excluded instruments may result in identification failure, in
contrast to what some practitioners believe.",Estimation in a Generalization of Bivariate Probit Models with Dummy Endogenous Regressors,2018-08-17 11:34:04,"Sukjin Han, Sungwon Lee","http://arxiv.org/abs/1808.05792v2, http://arxiv.org/pdf/1808.05792v2",econ.EM
28886,em,"Under suitable conditions, one-step generalized method of moments (GMM) based
on the first-difference (FD) transformation is numerically equal to one-step
GMM based on the forward orthogonal deviations (FOD) transformation. However,
when the number of time periods ($T$) is not small, the FOD transformation
requires less computational work. This paper shows that the computational
complexity of the FD and FOD transformations increases with the number of
individuals ($N$) linearly, but the computational complexity of the FOD
transformation increases with $T$ at rate $T^{4}$, while the computational
complexity of the FD transformation increases at rate $T^{6}$. Simulations
illustrate that calculations exploiting the FOD
transformation are performed orders of magnitude faster than those using the FD
transformation. The results in the paper indicate that, when one-step GMM based
on the FD and FOD transformations are the same, Monte Carlo experiments can be
conducted much faster if the FOD version of the estimator is used.",Quantifying the Computational Advantage of Forward Orthogonal Deviations,2018-08-17 23:57:31,Robert F. Phillips,"http://arxiv.org/abs/1808.05995v1, http://arxiv.org/pdf/1808.05995v1",econ.EM
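A minimal sketch of the two transformations for a single individual's time series, using their standard definitions (this is not the paper's GMM code): first differencing and Arellano-Bover forward orthogonal deviations.

```python
import numpy as np

def first_differences(y):
    """FD transform: y_t - y_{t-1}, for t = 2, ..., T."""
    return y[1:] - y[:-1]

def forward_orthogonal_deviations(y):
    """FOD transform: sqrt((T-t)/(T-t+1)) * (y_t - mean(y_{t+1}, ..., y_T)), for t = 1, ..., T-1."""
    T = len(y)
    out = np.empty(T - 1)
    for t in range(T - 1):                 # 0-based index; T - t - 1 future observations remain
        remaining = y[t + 1:]
        scale = np.sqrt((T - t - 1) / (T - t))
        out[t] = scale * (y[t] - remaining.mean())
    return out

y = np.array([1.0, 2.0, 1.5, 3.0, 2.5])
print(first_differences(y))
print(forward_orthogonal_deviations(y))
```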
28887,em,"There is generally a need to deal with quality change and new goods in the
consumer price index due to the underlying dynamic item universe. Traditionally
axiomatic tests are defined for a fixed universe. We propose five tests
explicitly formulated for a dynamic item universe, and motivate them both from
the perspectives of a cost-of-goods index and a cost-of-living index. None of
the indices currently available for making use of scanner data that comprise
the whole item universe satisfies all the tests at the same time. The set of
tests provides a rigorous diagnostic for whether an index
is completely appropriate in a dynamic item universe, as well as pointing
towards the directions of possible remedies. We thus outline a large index
family that potentially can satisfy all the tests.",Tests for price indices in a dynamic item universe,2018-08-27 22:01:08,"Li-Chun Zhang, Ingvild Johansen, Ragnhild Nygaard","http://arxiv.org/abs/1808.08995v2, http://arxiv.org/pdf/1808.08995v2",econ.EM
28888,em,"A fixed-design residual bootstrap method is proposed for the two-step
estimator of Francq and Zako\""ian (2015) associated with the conditional
Value-at-Risk. The bootstrap's consistency is proven for a general class of
volatility models and intervals are constructed for the conditional
Value-at-Risk. A simulation study reveals that the equal-tailed percentile
bootstrap interval tends to fall short of its nominal value. In contrast, the
reversed-tails bootstrap interval yields accurate coverage. We also compare the
theoretically analyzed fixed-design bootstrap with the recursive-design
bootstrap. It turns out that the fixed-design bootstrap performs equally well
in terms of average coverage, yet leads on average to shorter intervals in
smaller samples. An empirical application illustrates the interval estimation.",A Residual Bootstrap for Conditional Value-at-Risk,2018-08-28 08:34:36,"Eric Beutner, Alexander Heinemann, Stephan Smeekes","http://arxiv.org/abs/1808.09125v4, http://arxiv.org/pdf/1808.09125v4",econ.EM
28889,em,"Kotlarski's identity has been widely used in applied economic research.
However, how to conduct inference based on this popular identification approach
has been an open question for two decades. This paper addresses this open
problem by constructing a novel confidence band for the density function of a
latent variable in a repeated measurement error model. The confidence band builds
on our finding that we can rewrite Kotlarski's identity as a system of linear
moment restrictions. The confidence band controls the asymptotic size uniformly
over a class of data generating processes, and it is consistent against all
fixed alternatives. Simulation studies support our theoretical results.",Inference based on Kotlarski's Identity,2018-08-28 18:54:59,"Kengo Kato, Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/1808.09375v3, http://arxiv.org/pdf/1808.09375v3",econ.EM
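For reference, Kotlarski's identity in its usual form (the standard statement, not the paper's reformulation as linear moment restrictions): with two repeated measurements $Y_1 = X + \epsilon_1$ and $Y_2 = X + \epsilon_2$, where $X$, $\epsilon_1$, $\epsilon_2$ are mutually independent, $E[\epsilon_1]=0$, and the relevant characteristic functions are non-vanishing,

```latex
\varphi_X(t) \;=\; \exp\!\left(\int_0^t
\frac{\mathbb{E}\bigl[\mathrm{i}\,Y_1\, e^{\mathrm{i} s Y_2}\bigr]}
     {\mathbb{E}\bigl[e^{\mathrm{i} s Y_2}\bigr]}\,\mathrm{d}s\right),
```

so the density of the latent variable $X$ follows by Fourier inversion of $\varphi_X$.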
28890,em,"This study considers various semiparametric difference-in-differences models
under different assumptions on the relation between the treatment group
identifier, time and covariates for cross-sectional and panel data. The
variance lower bound is shown to be sensitive to the model assumptions imposed
implying a robustness-efficiency trade-off. The obtained efficient influence
functions lead to estimators that are rate double robust and have desirable
asymptotic properties under weak first stage convergence conditions. This
enables the use of sophisticated machine-learning algorithms that can cope with
settings where common trend confounding is high-dimensional. The usefulness of
the proposed estimators is assessed in an empirical example. It is shown that
the efficiency-robustness trade-offs and the choice of first stage predictors
can lead to divergent empirical results in practice.",Efficient Difference-in-Differences Estimation with High-Dimensional Common Trend Confounding,2018-09-05 20:41:34,Michael Zimmert,"http://arxiv.org/abs/1809.01643v5, http://arxiv.org/pdf/1809.01643v5",econ.EM
28918,em,"We study estimation, pointwise and simultaneous inference, and confidence
intervals for many average partial effects of lasso Logit. Focusing on
high-dimensional, cluster-sampling environments, we propose a new average
partial effect estimator and explore its asymptotic properties. Practical
penalty choices compatible with our asymptotic theory are also provided. The
proposed estimator allows for valid inference without requiring the oracle property.
We provide easy-to-implement algorithms for cluster-robust high-dimensional
hypothesis testing and construction of simultaneously valid confidence
intervals using a multiplier cluster bootstrap. We apply the proposed
algorithms to the text regression model of Wu (2018) to examine the presence of
gendered language on the internet.",Many Average Partial Effects: with An Application to Text Regression,2018-12-22 01:35:51,Harold D. Chiang,"http://arxiv.org/abs/1812.09397v5, http://arxiv.org/pdf/1812.09397v5",econ.EM
28891,em,"The bootstrap is a method for estimating the distribution of an estimator or
test statistic by re-sampling the data or a model estimated from the data.
Under conditions that hold in a wide variety of econometric applications, the
bootstrap provides approximations to distributions of statistics, coverage
probabilities of confidence intervals, and rejection probabilities of
hypothesis tests that are more accurate than the approximations of first-order
asymptotic distribution theory. The reductions in the differences between true
and nominal coverage or rejection probabilities can be very large. In addition,
the bootstrap provides a way to carry out inference in certain settings where
obtaining analytic distributional approximations is difficult or impossible.
This article explains the usefulness and limitations of the bootstrap in
contexts of interest in econometrics. The presentation is informal and
expository. It provides an intuitive understanding of how the bootstrap works.
Mathematical details are available in references that are cited.",Bootstrap Methods in Econometrics,2018-09-11 19:39:03,Joel L. Horowitz,"http://arxiv.org/abs/1809.04016v1, http://arxiv.org/pdf/1809.04016v1",econ.EM
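A minimal illustration of the basic nonparametric bootstrap the article surveys: re-sample the data with replacement, recompute the statistic, and read off a percentile confidence interval (a generic sketch with simulated data, not tied to any particular application in the article).

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=200)   # observed sample (illustrative)
stat = np.median                               # statistic of interest

B = 2000
boot_stats = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=len(data), replace=True)  # re-sample with replacement
    boot_stats[b] = stat(resample)

ci_lower, ci_upper = np.percentile(boot_stats, [2.5, 97.5])    # percentile interval
print(f"median = {stat(data):.3f}, 95% bootstrap CI = [{ci_lower:.3f}, {ci_upper:.3f}]")
```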
28892,em,"A method for implicit variable selection in mixture of experts frameworks is
proposed. We introduce a prior structure where information is taken from a set
of independent covariates. Robust class membership predictors are identified
using a normal gamma prior. The resulting model setup is used in a finite
mixture of Bernoulli distributions to find homogenous clusters of women in
Mozambique based on their information sources on HIV. Fully Bayesian inference
is carried out via the implementation of a Gibbs sampler.",Bayesian shrinkage in mixture of experts models: Identifying robust determinants of class membership,2018-09-13 12:30:21,Gregor Zens,"http://arxiv.org/abs/1809.04853v2, http://arxiv.org/pdf/1809.04853v2",econ.EM
28893,em,"Time averaging has been the traditional approach to handle mixed sampling
frequencies. However, it ignores information possibly embedded in high
frequency. Mixed data sampling (MIDAS) regression models provide a concise way
to utilize the additional information in high-frequency variables. In this
paper, we propose a specification test to choose between time averaging and
MIDAS models, based on a Durbin-Wu-Hausman test. In particular, a set of
instrumental variables is proposed and theoretically validated when the
frequency ratio is large. As a result, our method tends to be more powerful
than existing methods, as reconfirmed through the simulations.",On the Choice of Instruments in Mixed Frequency Specification Tests,2018-09-14 19:59:44,"Yun Liu, Yeonwoo Rho","http://arxiv.org/abs/1809.05503v1, http://arxiv.org/pdf/1809.05503v1",econ.EM
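For concreteness, a common MIDAS specification against which simple time averaging is nested (a standard form with exponential Almon lag weights; the particular weighting scheme is an assumption for illustration, not taken from the paper): with frequency ratio $m$,

```latex
y_t \;=\; \beta_0 + \beta_1 \sum_{j=0}^{m-1} w_j(\theta)\, x^{(m)}_{t-j/m} + u_t,
\qquad
w_j(\theta) \;=\; \frac{\exp(\theta_1 j + \theta_2 j^2)}{\sum_{k=0}^{m-1}\exp(\theta_1 k + \theta_2 k^2)},
```

and time averaging corresponds to the equal-weight restriction $w_j = 1/m$ for all $j$, which is roughly the restriction such a specification test is meant to assess against the MIDAS alternative.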
28894,em,"This article deals with asimple issue: if we have grouped data with a binary
dependent variable and want to include fixed effects (group specific
intercepts) in the specification, is Ordinary Least Squares (OLS) in any way
superior to a (conditional) logit form? In particular, what are the
consequences of using OLS instead of a fixed effects logit model with respect
to the latter dropping all units which show no variability in the dependent
variable while the former allows for estimation using all units. First, we show
that the discussion of the incidental parameters problem is based on an
assumption about the kinds of data being studied; for what appears to be the
common use of fixed effect models in political science the incidental
parameters issue is illusory. Turning to linear models, we see that OLS yields
a linear combination of the estimates for the units with and without variation
in the dependent variable, and so the coefficient estimates must be carefully
interpreted. The article then compares two methods of estimating logit models
with fixed effects, and shows that the Chamberlain conditional logit is as good
as or better than a logit analysis which simply includes group specific
intercepts (even though the conditional logit technique was designed to deal
with the incidental parameters problem!). Related to this, the article
discusses the estimation of marginal effects using both OLS and logit. While it
appears that a form of logit with fixed effects can be used to estimate
marginal effects, this method can be improved by starting with conditional
logit and then using those parameter estimates to constrain the logit with
fixed effects model. This method produces estimates of sample average marginal
effects that are at least as good as OLS, and much better when group size is
small or the number of groups is large.",Estimating grouped data models with a binary dependent variable and fixed effects: What are the issues,2018-09-18 05:25:25,Nathaniel Beck,"http://arxiv.org/abs/1809.06505v1, http://arxiv.org/pdf/1809.06505v1",econ.EM
28895,em,"We provide new results for nonparametric identification, estimation, and
inference of causal effects using `proxy controls': observables that are noisy
but informative proxies for unobserved confounding factors. Our analysis
applies to cross-sectional settings but is particularly well-suited to panel
models. Our identification results motivate a simple and `well-posed'
nonparametric estimator. We derive convergence rates for the estimator and
construct uniform confidence bands with asymptotically correct size. In panel
settings, our methods provide a novel approach to the difficult problem of
identification with non-separable, general heterogeneity and fixed $T$. In
panels, observations from different periods serve as proxies for unobserved
heterogeneity and our key identifying assumptions follow from restrictions on
the serial dependence structure. We apply our methods to two empirical
settings. We estimate consumer demand counterfactuals using panel data and we
estimate causal effects of grade retention on cognitive performance.",Proxy Controls and Panel Data,2018-09-30 03:38:11,Ben Deaner,"http://arxiv.org/abs/1810.00283v8, http://arxiv.org/pdf/1810.00283v8",econ.EM
28896,em,"We consider the problem of regression with selectively observed covariates in
a nonparametric framework. Our approach relies on instrumental variables that
explain variation in the latent covariates but have no direct effect on
selection. The regression function of interest is shown to be a weighted
version of observed conditional expectation where the weighting function is a
fraction of selection probabilities. Nonparametric identification of the
fractional probability weight (FPW) function is achieved via a partial
completeness assumption. We provide primitive functional form assumptions for
partial completeness to hold. The identification result is constructive for the
FPW series estimator. We derive the rate of convergence and also the pointwise
asymptotic distribution. In both cases, the asymptotic performance of the FPW
series estimator does not suffer from the inverse problem which derives from
the nonparametric instrumental variable approach. In a Monte Carlo study, we
analyze the finite sample properties of our estimator and we compare our
approach to inverse probability weighting, which can be used alternatively for
unconditional moment estimation. We consider two empirical applications. First,
we estimate the association between income and health
using linked data from the SHARE survey and administrative pension information
and use pension entitlements as an instrument. In the second application we
revisit the question of how income affects the demand for housing based on data
from the German Socio-Economic Panel Study (SOEP). In this application we use
regional income information on the residential block level as an instrument. In
both applications we show that income is selectively missing and we demonstrate
that standard methods that do not account for the nonrandom selection process
lead to significantly biased estimates for individuals with low income.",Nonparametric Regression with Selectively Missing Covariates,2018-09-30 18:52:54,"Christoph Breunig, Peter Haan","http://arxiv.org/abs/1810.00411v4, http://arxiv.org/pdf/1810.00411v4",econ.EM
28897,em,"The intention of this paper is to discuss the mathematical model of causality
introduced by C.W.J. Granger in 1969. Granger's model of causality has
become well-known and often used in various econometric models describing
causal systems, e.g., between commodity prices and exchange rates.
  Our paper presents a new mathematical model of causality between two measured
objects. We have slightly modified the well-known Kolmogorovian probability
model. In particular, we use the horizontal sum of set $\sigma$-algebras
instead of their direct product.",Granger causality on horizontal sum of Boolean algebras,2018-10-03 12:27:43,"M. Bohdalová, M. Kalina, O. Nánásiová","http://arxiv.org/abs/1810.01654v1, http://arxiv.org/pdf/1810.01654v1",econ.EM
28898,em,"Explanatory variables in a predictive regression typically exhibit low signal
strength and various degrees of persistence. Variable selection in such a
context is of great importance. In this paper, we explore the pitfalls and
possibilities of the LASSO methods in this predictive regression framework. In
the presence of stationary, local unit root, and cointegrated predictors, we
show that the adaptive LASSO cannot asymptotically eliminate all cointegrating
variables with zero regression coefficients. This new finding motivates a novel
post-selection adaptive LASSO, which we call the twin adaptive LASSO (TAlasso),
to restore variable selection consistency. Accommodating the system of
heterogeneous regressors, TAlasso achieves the well-known oracle property. In
contrast, conventional LASSO fails to attain coefficient estimation consistency
and variable screening in all components simultaneously. We apply these LASSO
methods to evaluate the short- and long-horizon predictability of S\&P 500
excess returns.",On LASSO for Predictive Regression,2018-10-07 16:19:07,"Ji Hyung Lee, Zhentao Shi, Zhan Gao","http://arxiv.org/abs/1810.03140v4, http://arxiv.org/pdf/1810.03140v4",econ.EM
28899,em,"This paper proposes a new approach to obtain uniformly valid inference for
linear functionals or scalar subvectors of a partially identified parameter
defined by linear moment inequalities. The procedure amounts to bootstrapping
the value functions of randomly perturbed linear programming problems, and does
not require the researcher to grid over the parameter space. The low-level
conditions for uniform validity rely on genericity results for linear programs.
The unconventional perturbation approach produces a confidence set with a
coverage probability of 1 over the identified set, but obtains exact coverage
on an outer set, is valid under weak assumptions, and is computationally simple
to implement.",Simple Inference on Functionals of Set-Identified Parameters Defined by Linear Moments,2018-10-07 20:03:14,"JoonHwan Cho, Thomas M. Russell","http://arxiv.org/abs/1810.03180v10, http://arxiv.org/pdf/1810.03180v10",econ.EM
28900,em,"In this paper we consider the properties of the Pesaran (2004, 2015a) CD test
for cross-section correlation when applied to residuals obtained from panel
data models with many estimated parameters. We show that the presence of
period-specific parameters leads the CD test statistic to diverge as the length of
the time dimension of the sample grows. This result holds even if cross-section
dependence is correctly accounted for and hence constitutes an example of the
Incidental Parameters Problem. The relevance of this problem is investigated
both for the classical Time Fixed Effects estimator as well as the Common
Correlated Effects estimator of Pesaran (2006). We suggest a weighted CD test
statistic which re-establishes standard normal inference under the null
hypothesis. Given the widespread use of the CD test statistic to test for
remaining cross-section correlation, our results have far reaching implications
for empirical researchers.",The Incidental Parameters Problem in Testing for Remaining Cross-section Correlation,2018-10-09 00:48:52,"Arturas Juodis, Simon Reese","http://arxiv.org/abs/1810.03715v4, http://arxiv.org/pdf/1810.03715v4",econ.EM
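For reference, the CD statistic in question has the familiar form (standard definition from Pesaran 2004, stated here for context): with $\hat\rho_{ij}$ the pairwise correlation of residuals for units $i$ and $j$,

```latex
CD \;=\; \sqrt{\frac{2T}{N(N-1)}}\;\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\hat\rho_{ij}
\;\xrightarrow{d}\; N(0,1)
\quad\text{under the null of no cross-section correlation,}
```

and the abstract's point is that this standard normal limit breaks down once many period-specific parameters are estimated, which motivates the weighted version of the statistic.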
28901,em,"This paper studies nonparametric identification and counterfactual bounds for
heterogeneous firms that can be ranked in terms of productivity. Our approach
works when quantities and prices are latent, rendering standard approaches
inapplicable. Instead, we require observation of profits or other
optimizing-values such as costs or revenues, and either prices or price proxies
of flexibly chosen variables. We extend classical duality results for
price-taking firms to a setup with discrete heterogeneity, endogeneity, and
limited variation in possibly latent prices. Finally, we show that convergence
results for nonparametric estimators may be directly converted to convergence
results for production sets.","Prices, Profits, Proxies, and Production",2018-10-10 21:15:29,"Victor H. Aguiar, Nail Kashaev, Roy Allen","http://arxiv.org/abs/1810.04697v4, http://arxiv.org/pdf/1810.04697v4",econ.EM
28902,em,"A long-standing question about consumer behavior is whether individuals'
observed purchase decisions satisfy the revealed preference (RP) axioms of the
utility maximization theory (UMT). Researchers using survey or experimental
panel data sets on prices and consumption to answer this question face the
well-known problem of measurement error. We show that ignoring measurement
error in the RP approach may lead to overrejection of the UMT. To solve this
problem, we propose a new statistical RP framework for consumption panel data
sets that allows for testing the UMT in the presence of measurement error. Our
test is applicable to all consumer models that can be characterized by their
first-order conditions. Our approach is nonparametric, allows for unrestricted
heterogeneity in preferences, and requires only a centering condition on
measurement error. We develop two applications that provide new evidence about
the UMT. First, we find support in a survey data set for the dynamic and
time-consistent UMT in single-individual households, in the presence of
\emph{nonclassical} measurement error in consumption. In the second
application, we cannot reject the static UMT in a widely used experimental data
set in which measurement error in prices is assumed to be the result of price
misperception due to the experimental design. The first finding stands in
contrast to the conclusions drawn from the deterministic RP test of Browning
(1989). The second finding reverses the conclusions drawn from the
deterministic RP test of Afriat (1967) and Varian (1982).",Stochastic Revealed Preferences with Measurement Error,2018-10-12 02:25:24,"Victor H. Aguiar, Nail Kashaev","http://arxiv.org/abs/1810.05287v2, http://arxiv.org/pdf/1810.05287v2",econ.EM
28903,em,"In this paper, we study estimation of nonlinear models with cross sectional
data using two-step generalized estimating equations (GEE) in the quasi-maximum
likelihood estimation (QMLE) framework. In the interest of improving
efficiency, we propose a grouping estimator to account for the potential
spatial correlation in the underlying innovations. We use a Poisson model and a
Negative Binomial II model for count data and a Probit model for binary
response data to demonstrate the GEE procedure. Under mild weak dependency
assumptions, results on estimation consistency and asymptotic normality are
provided. Monte Carlo simulations show the efficiency gain of our approach in
comparison with different estimation methods for count data and binary response
data. Finally, we apply the GEE approach to study the determinants of the
inflow of foreign direct investment (FDI) to China.",Using generalized estimating equations to estimate nonlinear models with spatial data,2018-10-13 15:58:41,"Cuicui Lu, Weining Wang, Jeffrey M. Wooldridge","http://arxiv.org/abs/1810.05855v1, http://arxiv.org/pdf/1810.05855v1",econ.EM
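A minimal sketch of a GEE fit for count data with group-level correlation, using statsmodels' GEE interface; the simulated data, the exchangeable working correlation, and the use of groups as a stand-in for spatial clusters are illustrative assumptions, not the paper's grouping estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_groups, group_size = 50, 10
group = np.repeat(np.arange(n_groups), group_size)            # groups proxy for spatial clusters
x = rng.normal(size=n_groups * group_size)
u = np.repeat(rng.normal(scale=0.3, size=n_groups), group_size)  # within-group correlated shock
y = rng.poisson(np.exp(0.5 + 0.8 * x + u))                     # count outcome

X = sm.add_constant(pd.DataFrame({"x": x}))
model = sm.GEE(y, X, groups=group,
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```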
28925,em,"Nonparametric Instrumental Variables (NPIV) analysis is based on a
conditional moment restriction. We show that if this moment condition is even
slightly misspecified, say because instruments are not quite valid, then NPIV
estimates can be subject to substantial asymptotic error and the identified set
under a relaxed moment condition may be large. Imposing strong a priori
smoothness restrictions mitigates the problem but induces bias if the
restrictions are too strong. In order to manage this trade-off we develop
methods for empirical sensitivity analysis and apply them to the consumer
demand data previously analyzed in Blundell (2007) and Horowitz (2011).",Nonparametric Instrumental Variables Estimation Under Misspecification,2019-01-04 21:52:59,Ben Deaner,"http://arxiv.org/abs/1901.01241v7, http://arxiv.org/pdf/1901.01241v7",econ.EM
28904,em,"This paper develops a consistent heteroskedasticity robust Lagrange
Multiplier (LM) type specification test for semiparametric conditional mean
models. Consistency is achieved by turning a conditional moment restriction
into a growing number of unconditional moment restrictions using series
methods. The proposed test statistic is straightforward to compute and is
asymptotically standard normal under the null. Compared with the earlier
literature on series-based specification tests in parametric models, I rely on
the projection property of series estimators and derive a different
normalization of the test statistic. Compared with the recent test in Gupta
(2018), I use a different way of accounting for heteroskedasticity. I
demonstrate using Monte Carlo studies that my test has superior finite sample
performance compared with the existing tests. I apply the test to one of the
semiparametric gasoline demand specifications from Yatchew and No (2001) and
find no evidence against it.",A Consistent Heteroskedasticity Robust LM Type Specification Test for Semiparametric Models,2018-10-17 18:37:02,Ivan Korolev,"http://arxiv.org/abs/1810.07620v3, http://arxiv.org/pdf/1810.07620v3",econ.EM
28905,em,"This study considers treatment effect models in which others' treatment
decisions can affect both one's own treatment and outcome. Focusing on the case
of two-player interactions, we formulate treatment decision behavior as a
complete information game with multiple equilibria. Using a latent index
framework and assuming a stochastic equilibrium selection, we prove that the
marginal treatment effect from one's own treatment and that from the partner
are identifiable on the conditional supports of certain threshold variables
determined through the game model. Based on our constructive identification
results, we propose a two-step semiparametric procedure for estimating the
marginal treatment effects using series approximation. We show that the
proposed estimator is uniformly consistent and asymptotically normally
distributed. As an empirical illustration, we investigate the impacts of risky
behaviors on adolescents' academic performance.",Treatment Effect Models with Strategic Interaction in Treatment Decisions,2018-10-19 06:51:42,"Tadao Hoshino, Takahide Yanagi","http://arxiv.org/abs/1810.08350v11, http://arxiv.org/pdf/1810.08350v11",econ.EM
28906,em,"In this paper we include dependency structures for electricity price
forecasting and forecasting evaluation. We work with off-peak and peak time
series from the German-Austrian day-ahead price, hence we analyze bivariate
data. We first estimate the mean of the two time series, and then in a second
step we estimate the residuals. The mean equation is estimated by OLS and
elastic net and the residuals are estimated by maximum likelihood. Our
contribution is to include a bivariate jump component in a mean reverting jump
diffusion model for the residuals. The models' forecasts are evaluated using
four different criteria, including the energy score to measure whether the
correlation structure between the time series is properly included or not. In
the results it is observed that the models with bivariate jumps provide better
results with the energy score, which means that it is important to consider
this structure in order to properly forecast correlated time series.",Probabilistic Forecasting in Day-Ahead Electricity Markets: Simulating Peak and Off-Peak Prices,2018-10-19 12:27:16,"Peru Muniain, Florian Ziel","http://dx.doi.org/10.1016/j.ijforecast.2019.11.006, http://arxiv.org/abs/1810.08418v2, http://arxiv.org/pdf/1810.08418v2",econ.EM
28907,em,"We propose a novel two-regime regression model where regime switching is
driven by a vector of possibly unobservable factors. When the factors are
latent, we estimate them by the principal component analysis of a panel data
set. We show that the optimization problem can be reformulated as mixed integer
optimization, and we present two alternative computational algorithms. We
derive the asymptotic distribution of the resulting estimator under the scheme
that the threshold effect shrinks to zero. In particular, we establish a phase
transition that describes the effect of first-stage factor estimation as the
cross-sectional dimension of panel data increases relative to the time-series
dimension. Moreover, we develop bootstrap inference and illustrate our methods
via numerical studies.",Factor-Driven Two-Regime Regression,2018-10-26 00:12:52,"Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin","http://dx.doi.org/10.1214/20-AOS2017, http://arxiv.org/abs/1810.11109v4, http://arxiv.org/pdf/1810.11109v4",econ.EM
28908,em,"Let Y be an outcome of interest, X a vector of treatment measures, and W a
vector of pre-treatment control variables. Here X may include (combinations of)
continuous, discrete, and/or non-mutually exclusive ""treatments"". Consider the
linear regression of Y onto X in a subpopulation homogenous in W = w (formally
a conditional linear predictor). Let b0(w) be the coefficient vector on X in
this regression. We introduce a semiparametrically efficient estimate of the
average beta0 = E[b0(W)]. When X is binary-valued (multi-valued) our procedure
recovers the (a vector of) average treatment effect(s). When X is
continuously-valued, or consists of multiple non-exclusive treatments, our
estimand coincides with the average partial effect (APE) of X on Y when the
underlying potential response function is linear in X, but otherwise
heterogenous across agents. When the potential response function takes a
general nonlinear/heterogenous form, and X is continuously-valued, our
procedure recovers a weighted average of the gradient of this response across
individuals and values of X. We provide a simple, and semiparametrically
efficient, method of covariate adjustment for settings with complicated
treatment regimes. Our method generalizes familiar methods of covariate
adjustment used for program evaluation as well as methods of semiparametric
regression (e.g., the partially linear regression model).",Semiparametrically efficient estimation of the average linear regression function,2018-10-30 06:26:33,"Bryan S. Graham, Cristine Campos de Xavier Pinto","http://arxiv.org/abs/1810.12511v1, http://arxiv.org/pdf/1810.12511v1",econ.EM
28909,em,"We investigate the finite sample performance of causal machine learning
estimators for heterogeneous causal effects at different aggregation levels. We
employ an Empirical Monte Carlo Study that relies on arguably realistic data
generation processes (DGPs) based on actual data. We consider 24 different
DGPs, eleven different causal machine learning estimators, and three
aggregation levels of the estimated effects. In the main DGPs, we allow for
selection into treatment based on a rich set of observable covariates. We
provide evidence that the estimators can be categorized into three groups. The
first group performs consistently well across all DGPs and aggregation levels.
These estimators have multiple steps to account for the selection into the
treatment and the outcome process. The second group shows competitive
performance only for particular DGPs. The third group is clearly outperformed
by the other estimators.",Machine Learning Estimation of Heterogeneous Causal Effects: Empirical Monte Carlo Evidence,2018-10-31 15:10:25,"Michael C. Knaus, Michael Lechner, Anthony Strittmatter","http://dx.doi.org/10.1093/ectj/utaa014, http://arxiv.org/abs/1810.13237v2, http://arxiv.org/pdf/1810.13237v2",econ.EM
28911,em,"This article proposes doubly robust estimators for the average treatment
effect on the treated (ATT) in difference-in-differences (DID) research
designs. In contrast to alternative DID estimators, the proposed estimators are
consistent if either (but not necessarily both) a propensity score or outcome
regression working models are correctly specified. We also derive the
semiparametric efficiency bound for the ATT in DID designs when either panel or
repeated cross-section data are available, and show that our proposed
estimators attain the semiparametric efficiency bound when the working models
are correctly specified. Furthermore, we quantify the potential efficiency
gains of having access to panel data instead of repeated cross-section data.
Finally, by paying particular attention to the estimation method used to
estimate the nuisance parameters, we show that one can sometimes construct
doubly robust DID estimators for the ATT that are also doubly robust for
inference. Simulation studies and an empirical application illustrate the
desirable finite-sample performance of the proposed estimators. Open-source
software for implementing the proposed policy evaluation tools is available.",Doubly Robust Difference-in-Differences Estimators,2018-11-30 00:18:26,"Pedro H. C. Sant'Anna, Jun B. Zhao","http://arxiv.org/abs/1812.01723v3, http://arxiv.org/pdf/1812.01723v3",econ.EM
28912,em,"This paper examines a commonly used measure of persuasion whose precise
interpretation has been obscure in the literature. By using the potential
outcome framework, we define the causal persuasion rate by a proper conditional
probability of taking the action of interest with a persuasive message
conditional on not taking the action without the message. We then formally
study identification under empirically relevant data scenarios and show that
the commonly adopted measure generally does not estimate, but often overstates,
the causal rate of persuasion. We discuss several new parameters of interest
and provide practical methods for causal inference.",Identifying the Effect of Persuasion,2018-12-06 03:20:35,"Sung Jae Jun, Sokbae Lee","http://arxiv.org/abs/1812.02276v6, http://arxiv.org/pdf/1812.02276v6",econ.EM
28913,em,"We develop a uniform test for detecting and dating explosive behavior of a
strictly stationary GARCH$(r,s)$ (generalized autoregressive conditional
heteroskedasticity) process. Namely, we test the null hypothesis of a globally
stable GARCH process with constant parameters against an alternative where
there is an 'abnormal' period with changed parameter values. During this
period, the change may lead to an explosive behavior of the volatility process.
It is assumed that both the magnitude and the timing of the breaks are unknown.
We develop a double supreme test for the existence of a break, and then provide
an algorithm to identify the period of change. Our theoretical results hold
under mild moment assumptions on the innovations of the GARCH process.
Technically, the existing properties for the QMLE in the GARCH model need to be
reinvestigated to hold uniformly over all possible periods of change. The key
results involve a uniform weak Bahadur representation for the estimated
parameters, which leads to weak convergence of the test statistic to the
supremum of a Gaussian process. In simulations we show that the test has good
size and power for reasonably large time series lengths. We apply the test to
Apple asset returns and Bitcoin returns.",A supreme test for periodic explosive GARCH,2018-12-09 15:51:14,"Stefan Richter, Weining Wang, Wei Biao Wu","http://arxiv.org/abs/1812.03475v1, http://arxiv.org/pdf/1812.03475v1",econ.EM
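A minimal simulation sketch of the kind of data-generating process being tested: a GARCH(1,1) path whose parameters change during an "abnormal" window so that alpha + beta exceeds one there. The parameter values and the location of the window are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
omega, alpha, beta = 0.05, 0.05, 0.90     # stable regime: alpha + beta < 1
alpha_b, beta_b = 0.25, 0.85              # break regime: alpha + beta > 1 (explosive volatility)
break_window = range(600, 700)

eps = np.empty(n)
sigma2 = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)    # unconditional variance of the stable regime
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    a, b = (alpha_b, beta_b) if t in break_window else (alpha, beta)
    sigma2[t] = omega + a * eps[t - 1] ** 2 + b * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Volatility builds up inside the break window and decays afterwards.
print(sigma2[break_window.start - 1], sigma2.max())
```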
28914,em,"Recent studies have proposed causal machine learning (CML) methods to
estimate conditional average treatment effects (CATEs). In this study, I
investigate whether CML methods add value compared to conventional CATE
estimators by re-evaluating Connecticut's Jobs First welfare experiment. This
experiment entails a mix of positive and negative work incentives. Previous
studies show that it is hard to tackle the effect heterogeneity of Jobs First
by means of CATEs. I report evidence that CML methods can provide support for
the theoretical labor supply predictions. Furthermore, I document reasons why
some conventional CATE estimators fail and discuss the limitations of CML
methods.",What Is the Value Added by Using Causal Machine Learning Methods in a Welfare Experiment Evaluation?,2018-12-16 23:24:02,Anthony Strittmatter,"http://arxiv.org/abs/1812.06533v3, http://arxiv.org/pdf/1812.06533v3",econ.EM
28915,em,"This paper explores the use of a fuzzy regression discontinuity design where
multiple treatments are applied at the threshold. The identification results
show that, under the very strong assumption that the change in the probability
of treatment at the cutoff is equal across treatments, a
difference-in-discontinuities estimator identifies the treatment effect of
interest. The point estimates of the treatment effect using a simple fuzzy
difference-in-discontinuities design are biased if the change in the
probability of a treatment applying at the cutoff differs across treatments.
Modifications of the fuzzy difference-in-discontinuities approach that rely on
milder assumptions are also proposed. Our results suggest caution is needed
when applying before-and-after methods in the presence of fuzzy
discontinuities. Using data from the National Health Interview Survey, we apply
this new identification strategy to evaluate the causal effect of the
Affordable Care Act (ACA) on older Americans' health care access and
utilization.",Fuzzy Difference-in-Discontinuities: Identification Theory and Application to the Affordable Care Act,2018-12-17 00:27:54,"Hector Galindo-Silva, Nibene Habib Some, Guy Tchuente","http://arxiv.org/abs/1812.06537v3, http://arxiv.org/pdf/1812.06537v3",econ.EM
28916,em,"We propose convenient inferential methods for potentially nonstationary
multivariate unobserved components models with fractional integration and
cointegration. Based on finite-order ARMA approximations in the state space
representation, maximum likelihood estimation can make use of the EM algorithm
and related techniques. The approximation outperforms the frequently used
autoregressive or moving average truncation, both in terms of computational
costs and with respect to approximation quality. Monte Carlo simulations reveal
good estimation properties of the proposed methods for processes of different
complexity and dimension.",Approximate State Space Modelling of Unobserved Fractional Components,2018-12-21 17:25:45,"Tobias Hartl, Roland Weigand","http://dx.doi.org/10.1080/07474938.2020.1841444, http://arxiv.org/abs/1812.09142v3, http://arxiv.org/pdf/1812.09142v3",econ.EM
28917,em,"We propose a setup for fractionally cointegrated time series which is
formulated in terms of latent integrated and short-memory components. It
accommodates nonstationary processes with different fractional orders and
cointegration of different strengths and is applicable in high-dimensional
settings. In an application to realized covariance matrices, we find that
orthogonal short- and long-memory components provide a reasonable fit and
competitive out-of-sample performance compared to several competing methods.",Multivariate Fractional Components Analysis,2018-12-21 17:33:27,"Tobias Hartl, Roland Weigand","http://arxiv.org/abs/1812.09149v2, http://arxiv.org/pdf/1812.09149v2",econ.EM
28919,em,"Consider a setting in which a policy maker assigns subjects to treatments,
observing each outcome before the next subject arrives. Initially, it is
unknown which treatment is best, but the sequential nature of the problem
permits learning about the effectiveness of the treatments. While the
multi-armed-bandit literature has shed much light on the situation when the
policy maker compares the effectiveness of the treatments through their mean,
much less is known about other targets. This is restrictive, because a cautious
decision maker may prefer to target a robust location measure such as a
quantile or a trimmed mean. Furthermore, socio-economic decision making often
requires targeting purpose-specific characteristics of the outcome
distribution, such as its inherent degree of inequality, welfare or poverty. In
the present paper we introduce and study sequential learning algorithms when
the distributional characteristic of interest is a general functional of the
outcome distribution. Minimax expected regret optimality results are obtained
within the subclass of explore-then-commit policies, and for the unrestricted
class of all policies.",Functional Sequential Treatment Allocation,2018-12-22 02:18:13,"Anders Bredahl Kock, David Preinerstorfer, Bezirgen Veliyev","http://arxiv.org/abs/1812.09408v8, http://arxiv.org/pdf/1812.09408v8",econ.EM
28920,em,"In many applications common in testing for convergence the number of
cross-sectional units is large and the number of time periods are few. In these
situations asymptotic tests based on an omnibus null hypothesis are
characterised by a number of problems. In this paper we propose a multiple
pairwise comparisons method based on a recursive bootstrap to test for
convergence with no prior information on the composition of convergence clubs.
Monte Carlo simulations suggest that our bootstrap-based test performs well to
correctly identify convergence clubs when compared with other similar tests
that rely on asymptotic arguments. Across a potentially large number of
regions, using both cross-country and regional data for the European Union, we
find that the size distortion which afflicts standard tests, and which biases
them towards finding less convergence, is ameliorated when we utilise our
bootstrap test.",Robust Tests for Convergence Clubs,2018-12-22 15:11:04,"Luisa Corrado, Melvyn Weeks, Thanasis Stengos, M. Ege Yazgan","http://arxiv.org/abs/1812.09518v1, http://arxiv.org/pdf/1812.09518v1",econ.EM
28921,em,"We propose a practical and robust method for making inferences on average
treatment effects estimated by synthetic controls. We develop a $K$-fold
cross-fitting procedure for bias-correction. To avoid the difficult estimation
of the long-run variance, inference is based on a self-normalized
$t$-statistic, which has an asymptotically pivotal $t$-distribution. Our
$t$-test is easy to implement, provably robust against misspecification, valid
with non-stationary data, and demonstrates an excellent small sample
performance. Compared to difference-in-differences, our method often yields
more than 50% shorter confidence intervals and is robust to violations of
parallel trends assumptions. An $\texttt{R}$-package for implementing our
methods is available.",A $t$-test for synthetic controls,2018-12-27 23:40:13,"Victor Chernozhukov, Kaspar Wuthrich, Yinchu Zhu","http://arxiv.org/abs/1812.10820v7, http://arxiv.org/pdf/1812.10820v7",econ.EM
28922,em,"The instrumental variable quantile regression (IVQR) model (Chernozhukov and
Hansen, 2005) is a popular tool for estimating causal quantile effects with
endogenous covariates. However, estimation is complicated by the non-smoothness
and non-convexity of the IVQR GMM objective function. This paper shows that the
IVQR estimation problem can be decomposed into a set of conventional quantile
regression sub-problems which are convex and can be solved efficiently. This
reformulation leads to new identification results and to fast, easy to
implement, and tuning-free estimators that do not require the availability of
high-level ""black box"" optimization routines.",Decentralization Estimators for Instrumental Variable Quantile Regression Models,2018-12-28 11:50:33,"Hiroaki Kaido, Kaspar Wuthrich","http://arxiv.org/abs/1812.10925v4, http://arxiv.org/pdf/1812.10925v4",econ.EM
28923,em,"Predicting future successful designs and corresponding market opportunity is
a fundamental goal of product design firms. There is accordingly a long history
of quantitative approaches that aim to capture diverse consumer preferences,
and then translate those preferences to corresponding ""design gaps"" in the
market. We extend this work by developing a deep learning approach to predict
design gaps in the market. These design gaps represent clusters of designs that
do not yet exist, but are predicted to be both (1) highly preferred by
consumers, and (2) feasible to build under engineering and manufacturing
constraints. This approach is tested on the entire U.S. automotive market using
millions of real purchase records. We retroactively predict design gaps in the
market, and compare predicted design gaps with actual known successful designs.
Our preliminary results give evidence it may be possible to predict design
gaps, suggesting this approach has promise for early identification of market
opportunity.","Predicting ""Design Gaps"" in the Market: Deep Consumer Choice Models under Probabilistic Design Constraints",2018-12-28 18:56:46,"Alex Burnap, John Hauser","http://arxiv.org/abs/1812.11067v1, http://arxiv.org/pdf/1812.11067v1",econ.EM
28924,em,"This paper studies identification and estimation of a class of dynamic models
in which the decision maker (DM) is uncertain about the data-generating
process. The DM surrounds a benchmark model that he or she fears is
misspecified by a set of models. Decisions are evaluated under a worst-case
model delivering the lowest utility among all models in this set. The DM's
benchmark model and preference parameters are jointly underidentified. With the
benchmark model held fixed, primitive conditions are established for
identification of the DM's worst-case model and preference parameters. The key
step in the identification analysis is to establish existence and uniqueness of
the DM's continuation value function, allowing for an unbounded state space and
unbounded utilities. To do so, fixed-point results are derived for monotone,
convex operators that act on a Banach space of thin-tailed functions arising
naturally from the structure of the continuation value recursion. The
fixed-point results are quite general; applications to models with learning and
Rust-type dynamic discrete choice models are also discussed. For estimation, a
perturbation result is derived which provides a necessary and sufficient
condition for consistent estimation of continuation values and the worst-case
model. The result also allows convergence rates of estimators to be
characterized. An empirical application studies an endowment economy where the
DM's benchmark model may be interpreted as an aggregate of experts' forecasting
models. The application reveals time-variation in the way the DM
pessimistically distorts benchmark probabilities. Consequences for asset
pricing are explored and connections are drawn with the literature on
macroeconomic uncertainty.",Dynamic Models with Robust Decision Makers: Identification and Estimation,2018-12-29 02:36:41,Timothy M. Christensen,"http://arxiv.org/abs/1812.11246v3, http://arxiv.org/pdf/1812.11246v3",econ.EM
28926,em,"This paper introduces a flexible regularization approach that reduces point
estimation risk of group means stemming from e.g. categorical regressors,
(quasi-)experimental data or panel data models. The loss function is penalized
by adding weighted squared l2-norm differences between group location
parameters and informative first-stage estimates. Under quadratic loss, the
penalized estimation problem has a simple interpretable closed-form solution
that nests methods established in the literature on ridge regression,
discretized support smoothing kernels and model averaging methods. We derive
risk-optimal penalty parameters and propose a plug-in approach for estimation.
The large sample properties are analyzed in an asymptotic local to zero
framework by introducing a class of sequences for close and distant systems of
locations that is sufficient for describing a large range of data generating
processes. We provide the asymptotic distributions of the shrinkage estimators
under different penalization schemes. The proposed plug-in estimator uniformly
dominates the ordinary least squares in terms of asymptotic risk if the number
of groups is larger than three. Monte Carlo simulations reveal robust
improvements over standard methods in finite samples. Real data examples of
estimating time trends in a panel and a difference-in-differences study
illustrate potential applications.",Shrinkage for Categorical Regressors,2019-01-07 19:17:23,"Phillip Heiler, Jana Mareckova","http://arxiv.org/abs/1901.01898v1, http://arxiv.org/pdf/1901.01898v1",econ.EM
28927,em,"This article introduces lassopack, a suite of programs for regularized
regression in Stata. lassopack implements lasso, square-root lasso, elastic
net, ridge regression, adaptive lasso and post-estimation OLS. The methods are
suitable for the high-dimensional setting where the number of predictors $p$
may be large and possibly greater than the number of observations, $n$. We
offer three different approaches for selecting the penalization (`tuning')
parameters: information criteria (implemented in lasso2), $K$-fold
cross-validation and $h$-step ahead rolling cross-validation for cross-section,
panel and time-series data (cvlasso), and theory-driven (`rigorous')
penalization for the lasso and square-root lasso for cross-section and panel
data (rlasso). We discuss the theoretical framework and practical
considerations for each approach. We also present Monte Carlo results to
compare the performance of the penalization approaches.",lassopack: Model selection and prediction with regularized regression in Stata,2019-01-16 20:30:27,"Achim Ahrens, Christian B. Hansen, Mark E. Schaffer","http://arxiv.org/abs/1901.05397v1, http://arxiv.org/pdf/1901.05397v1",econ.EM
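lassopack itself is a Stata suite; as a rough conceptual analogue of two of the data-driven tuning approaches it implements (information criteria and K-fold cross-validation), the following Python sketch uses scikit-learn on simulated data. It is only illustrative and does not reproduce the rigorous penalization of rlasso.

```python
# Conceptual Python analogue (scikit-learn) of two tuning approaches named
# above: information-criterion and K-fold cross-validation choice of the
# lasso penalty. Simulated data; not the lassopack implementation.
import numpy as np
from sklearn.linear_model import LassoCV, LassoLarsIC

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0                                   # sparse truth: 5 active predictors
y = X @ beta + rng.normal(size=n)

ic = LassoLarsIC(criterion="bic").fit(X, y)      # BIC-based penalty choice
cv = LassoCV(cv=10).fit(X, y)                    # 10-fold cross-validated penalty
print("BIC-selected penalty:", ic.alpha_)
print("CV-selected penalty: ", cv.alpha_)
print("selected predictors (CV):", np.flatnonzero(cv.coef_))
```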
28928,em,"The maximum utility estimation proposed by Elliott and Lieli (2013) can be
viewed as cost-sensitive binary classification; thus, its in-sample overfitting
issue is similar to that of perceptron learning. A utility-maximizing
prediction rule (UMPR) is constructed to alleviate the in-sample overfitting of
the maximum utility estimation. We establish non-asymptotic upper bounds on the
difference between the maximal expected utility and the generalized expected
utility of the UMPR. Simulation results show that the UMPR with an appropriate
data-dependent penalty achieves larger generalized expected utility than common
estimators in the binary classification if the conditional probability of the
binary outcome is misspecified.",Model Selection in Utility-Maximizing Binary Prediction,2019-03-02 18:02:50,Jiun-Hua Su,"http://dx.doi.org/10.1016/j.jeconom.2020.07.052, http://arxiv.org/abs/1903.00716v3, http://arxiv.org/pdf/1903.00716v3",econ.EM
28929,em,"We provide a finite sample inference method for the structural parameters of
a semiparametric binary response model under a conditional median restriction
originally studied by Manski (1975, 1985). Our inference method is valid for
any sample size and irrespective of whether the structural parameters are point
identified or partially identified, for example due to the lack of a
continuously distributed covariate with large support. Our inference approach
exploits distributional properties of observable outcomes conditional on the
observed sequence of exogenous variables. Moment inequalities conditional on
this size n sequence of exogenous covariates are constructed, and the test
statistic is a monotone function of violations of sample moment inequalities.
The critical value used for inference is provided by the appropriate quantile
of a known function of n independent Rademacher random variables. We
investigate power properties of the underlying test and provide simulation
studies to support the theoretical findings.",Finite Sample Inference for the Maximum Score Estimand,2019-03-04 22:53:00,"Adam M. Rosen, Takuya Ura","http://arxiv.org/abs/1903.01511v2, http://arxiv.org/pdf/1903.01511v2",econ.EM
28930,em,"A fundamental problem with nonlinear models is that maximum likelihood
estimates are not guaranteed to exist. Though nonexistence is a well known
problem in the binary choice literature, it presents significant challenges for
other models as well and is not as well understood in more general settings.
These challenges are only magnified for models that feature many fixed effects
and other high-dimensional parameters. We address the current ambiguity
surrounding this topic by studying the conditions that govern the existence of
estimates for (pseudo-)maximum likelihood estimators used to estimate a wide
class of generalized linear models (GLMs). We show that some, but not all, of
these GLM estimators can still deliver consistent estimates of at least some of
the linear parameters when these conditions fail to hold. We also demonstrate
how to verify these conditions in models with high-dimensional parameters, such
as panel data models with multiple levels of fixed effects.",Verifying the existence of maximum likelihood estimates for generalized linear models,2019-03-05 05:18:49,"Sergio Correia, Paulo Guimarães, Thomas Zylkin","http://arxiv.org/abs/1903.01633v6, http://arxiv.org/pdf/1903.01633v6",econ.EM
28931,em,"Bojinov & Shephard (2019) defined potential outcome time series to
nonparametrically measure dynamic causal effects in time series experiments.
Four innovations are developed in this paper: ""instrumental paths,"" treatments
which are ""shocks,"" ""linear potential outcomes"" and the ""causal response
function."" Potential outcome time series are then used to provide a
nonparametric causal interpretation of impulse response functions, generalized
impulse response functions, local projections and LP-IV.","Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function",2019-03-05 05:53:08,"Ashesh Rambachan, Neil Shephard","http://arxiv.org/abs/1903.01637v3, http://arxiv.org/pdf/1903.01637v3",econ.EM
28932,em,"In this paper we present ppmlhdfe, a new Stata command for estimation of
(pseudo) Poisson regression models with multiple high-dimensional fixed effects
(HDFE). Estimation is implemented using a modified version of the iteratively
reweighted least-squares (IRLS) algorithm that allows for fast estimation in
the presence of HDFE. Because the code is built around the reghdfe package, it
has similar syntax, supports many of the same functionalities, and benefits
from reghdfe's fast convergence properties for computing high-dimensional least
squares problems.
  Performance is further enhanced by some new techniques we introduce for
accelerating HDFE-IRLS estimation specifically. ppmlhdfe also implements a
novel and more robust approach to check for the existence of (pseudo) maximum
likelihood estimates.",ppmlhdfe: Fast Poisson Estimation with High-Dimensional Fixed Effects,2019-03-05 09:11:26,"Sergio Correia, Paulo Guimarães, Thomas Zylkin","http://dx.doi.org/10.1177/1536867X20909691, http://arxiv.org/abs/1903.01690v3, http://arxiv.org/pdf/1903.01690v3",econ.EM
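ppmlhdfe is a Stata command; the following Python sketch is only a small-scale conceptual analogue that fits a Poisson pseudo-maximum-likelihood model with unit and time fixed effects entered as explicit dummies. The HDFE-IRLS acceleration that ppmlhdfe implements is not reproduced, and the data are simulated.

```python
# Small-scale conceptual analogue: Poisson pseudo-maximum likelihood with unit
# and time fixed effects as explicit dummies. ppmlhdfe absorbs such fixed
# effects inside IRLS instead of forming dummies; this sketch only shows the
# model being estimated. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_i, n_t = 50, 10
df = pd.DataFrame({
    "i": np.repeat(np.arange(n_i), n_t),         # unit identifier
    "t": np.tile(np.arange(n_t), n_i),           # time identifier
})
df["x"] = rng.normal(size=len(df))
mu = np.exp(0.5 * df["x"] + 0.2 * (df["i"] % 3) + 0.1 * (df["t"] % 2))
df["y"] = rng.poisson(mu)

res = smf.poisson("y ~ x + C(i) + C(t)", data=df).fit(disp=0)
print("coefficient on x:", res.params["x"])
```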
28933,em,"A fixed effects regression estimator is introduced that can directly identify
and estimate the Africa-Dummy in one regression step so that its correct
standard errors as well as correlations to other coefficients can easily be
estimated. We estimate the Nickell bias and find it to be negligibly small.
Semiparametric extensions check whether the Africa-Dummy is simply a result of
misspecification of the functional form. In particular, we show that the
returns to growth factors are different for Sub-Saharan African countries
compared to the rest of the world. For example, returns to population growth
are positive and beta-convergence is faster. When extending the model to
identify the development of the Africa-Dummy over time we see that it has been
changing dramatically over time and that the punishment for Sub-Saharan African
countries has been decreasing incrementally to reach insignificance around the
turn of the millennium.",The Africa-Dummy: Gone with the Millennium?,2019-03-06 16:18:13,"Max Köhler, Stefan Sperlich","http://arxiv.org/abs/1903.02357v1, http://arxiv.org/pdf/1903.02357v1",econ.EM
28934,em,"Various papers demonstrate the importance of inequality, poverty and the size
of the middle class for economic growth. When explaining why these measures of
the income distribution are added to the growth regression, it is often
mentioned that poor people behave differently, which may translate to the economy
as a whole. However, simply adding explanatory variables does not reflect this
behavior. By a varying coefficient model we show that the returns to growth
differ a lot depending on poverty and inequality. Furthermore, we investigate
how these returns differ for the poorer and for the richer part of the
societies. We argue that these differences in the coefficients, on the one
hand, render mean coefficients uninformative and, on the other hand,
challenge the credibility of the economic interpretation. In short, we show
that, when estimating mean coefficients without accounting for poverty and
inequality, the estimation is likely to suffer from a serious endogeneity bias.",A Varying Coefficient Model for Assessing the Returns to Growth to Account for Poverty and Inequality,2019-03-06 17:07:05,"Max Köhler, Stefan Sperlich, Jisu Yoon","http://arxiv.org/abs/1903.02390v1, http://arxiv.org/pdf/1903.02390v1",econ.EM
28935,em,"We consider inference on the probability density of valuations in the
first-price sealed-bid auctions model within the independent private value
paradigm. We show the asymptotic normality of the two-step nonparametric
estimator of Guerre, Perrigne, and Vuong (2000) (GPV), and propose an easily
implementable and consistent estimator of the asymptotic variance. We prove the
validity of the pointwise percentile bootstrap confidence intervals based on
the GPV estimator. Lastly, we use the intermediate Gaussian approximation
approach to construct bootstrap-based asymptotically valid uniform confidence
bands for the density of the valuations.","Inference for First-Price Auctions with Guerre, Perrigne, and Vuong's Estimator",2019-03-15 11:09:33,"Jun Ma, Vadim Marmer, Artyom Shneyerov","http://dx.doi.org/10.1016/j.jeconom.2019.02.006, http://arxiv.org/abs/1903.06401v1, http://arxiv.org/pdf/1903.06401v1",econ.EM
28936,em,"Empirical growth analysis has three major problems --- variable selection,
parameter heterogeneity and cross-sectional dependence --- which are addressed
independently from each other in most studies. The purpose of this study is to
propose an integrated framework that extends the conventional linear growth
regression model to allow for parameter heterogeneity and cross-sectional error
dependence, while simultaneously performing variable selection. We also derive
the asymptotic properties of the estimator under both low and high dimensions,
and further investigate the finite sample performance of the estimator through
Monte Carlo simulations. We apply the framework to a dataset of 89 countries
over the period from 1960 to 2014. Our results reveal some cross-country
patterns not found in previous studies (e.g., ""middle income trap hypothesis"",
""natural resources curse hypothesis"", ""religion works via belief, not
practice"", etc.).",An Integrated Panel Data Approach to Modelling Economic Growth,2019-03-19 14:38:09,"Guohua Feng, Jiti Gao, Bin Peng","http://arxiv.org/abs/1903.07948v1, http://arxiv.org/pdf/1903.07948v1",econ.EM
28937,em,"We propose a new approach to mixed-frequency regressions in a
high-dimensional environment that resorts to Group Lasso penalization and
Bayesian techniques for estimation and inference. In particular, to improve the
prediction properties of the model and its sparse recovery ability, we consider
a Group Lasso with a spike-and-slab prior. Penalty hyper-parameters governing
the model shrinkage are automatically tuned via an adaptive MCMC algorithm. We
establish good frequentist asymptotic properties of the posterior of the
in-sample and out-of-sample prediction error, we recover the optimal posterior
contraction rate, and we show optimality of the posterior predictive density.
Simulations show that the proposed models have good selection and forecasting
performance in small samples, even when the design matrix presents
cross-correlation. When applied to forecasting U.S. GDP, our penalized
regressions can outperform many strong competitors. Results suggest that
financial variables may have some, although very limited, short-term predictive
content.","Bayesian MIDAS Penalized Regressions: Estimation, Selection, and Prediction",2019-03-19 17:42:37,"Matteo Mogliani, Anna Simoni","http://arxiv.org/abs/1903.08025v3, http://arxiv.org/pdf/1903.08025v3",econ.EM
28938,em,"I study a regression model in which one covariate is an unknown function of a
latent driver of link formation in a network. Rather than specify and fit a
parametric network formation model, I introduce a new method based on matching
pairs of agents with similar columns of the squared adjacency matrix, the ijth
entry of which contains the number of other agents linked to both agents i and
j. The intuition behind this approach is that for a large class of network
formation models the columns of the squared adjacency matrix characterize all
of the identifiable information about individual linking behavior. In this
paper, I describe the model, formalize this intuition, and provide consistent
estimators for the parameters of the regression model. Auerbach (2021)
considers inference and an application to network peer effects.",Identification and Estimation of a Partially Linear Regression Model using Network Data,2019-03-22 21:59:22,Eric Auerbach,"http://arxiv.org/abs/1903.09679v3, http://arxiv.org/pdf/1903.09679v3",econ.EM
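The following is a minimal numpy sketch of the central object in this matching approach, computed on a simulated random graph: the squared adjacency matrix, whose (i, j) entry counts the agents linked to both i and j, together with a simple distance between two of its columns. The graph model and the distance measure here are illustrative assumptions, not the paper's.

```python
# The squared adjacency matrix and a simple distance between its columns,
# computed for a simulated undirected random graph. The graph model and the
# distance normalization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 100
A = (rng.random((n, n)) < 0.1).astype(int)
A = np.triu(A, 1)
A = A + A.T                                      # symmetric adjacency, no self-links

A2 = A @ A                                       # (i, j): number of agents linked to both i and j

def column_distance(A2, i, j):
    """Root-mean-square difference between columns i and j of A squared."""
    return np.sqrt(np.mean((A2[:, i] - A2[:, j]) ** 2))

print(column_distance(A2, 0, 1))
```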
28939,em,"This paper studies a panel data setting where the goal is to estimate causal
effects of an intervention by predicting the counterfactual values of outcomes
for treated units, had they not received the treatment. Several approaches have
been proposed for this problem, including regression methods, synthetic control
methods and matrix completion methods. This paper considers an ensemble
approach, and shows that it performs better than any of the individual methods
in several economic datasets. Matrix completion methods are often given the
most weight by the ensemble, but this clearly depends on the setting. We argue
that ensemble methods present a fruitful direction for further research in the
causal panel data setting.",Ensemble Methods for Causal Effects in Panel Data Settings,2019-03-25 02:21:52,"Susan Athey, Mohsen Bayati, Guido Imbens, Zhaonan Qu","http://arxiv.org/abs/1903.10079v1, http://arxiv.org/pdf/1903.10079v1",econ.EM
28941,em,"How can one determine whether a community-level treatment, such as the
introduction of a social program or trade shock, alters agents' incentives to
form links in a network? This paper proposes analogues of a two-sample
Kolmogorov-Smirnov test, widely used in the literature to test the null
hypothesis of ""no treatment effects"", for network data. It first specifies a
testing problem in which the null hypothesis is that two networks are drawn
from the same random graph model. It then describes two randomization tests
based on the magnitude of the difference between the networks' adjacency
matrices as measured by the $2\to2$ and $\infty\to1$ operator norms. Power
properties of the tests are examined analytically, in simulation, and through
two real-world applications. A key finding is that the test based on the
$\infty\to1$ norm can be substantially more powerful than that based on the
$2\to2$ norm for the kinds of sparse and degree-heterogeneous networks common
in economics.",Testing for Differences in Stochastic Network Structure,2019-03-26 22:00:45,Eric Auerbach,"http://arxiv.org/abs/1903.11117v5, http://arxiv.org/pdf/1903.11117v5",econ.EM
28942,em,"This paper studies a regularized support function estimator for bounds on
components of the parameter vector in the case in which the identified set is a
polygon. The proposed regularized estimator has three important properties: (i)
it has a uniform asymptotic Gaussian limit in the presence of flat faces in the
absence of redundant (or overidentifying) constraints (or vice versa); (ii) the
bias from regularization does not enter the first-order limiting
distribution; (iii) the estimator remains consistent for the sharp identified set
for the individual components even in the non-regular case. These properties
are used to construct uniformly valid confidence sets for an element
$\theta_{1}$ of a parameter vector $\theta\in\mathbb{R}^{d}$ that is partially
identified by affine moment equality and inequality conditions. The proposed
confidence sets can be computed as a solution to a small number of linear and
convex quadratic programs, which leads to a substantial decrease in computation
time and guarantees a global optimum. As a result, the method provides
uniformly valid inference in applications in which the dimension of the
parameter space, $d$, and the number of inequalities, $k$, were previously
computationally unfeasible ($d,k=100$). The proposed approach can be extended
to construct confidence sets for intersection bounds, to construct joint
polygon-shaped confidence sets for multiple components of $\theta$, and to find
the set of solutions to a linear program. Inference for coefficients in the
linear IV regression model with an interval outcome is used as an illustrative
example.",Simple subvector inference on sharp identified set in affine models,2019-03-30 01:49:40,Bulat Gafarov,"http://arxiv.org/abs/1904.00111v2, http://arxiv.org/pdf/1904.00111v2",econ.EM
28943,em,"Three-dimensional panel models are widely used in empirical analysis.
Researchers use various combinations of fixed effects for three-dimensional
panels. When one imposes a parsimonious model and the true model is rich, then
it incurs mis-specification biases. When one employs a rich model and the true
model is parsimonious, then it incurs larger standard errors than necessary. It
is therefore useful for researchers to know correct models. In this light, Lu,
Miao, and Su (2018) propose methods of model selection. We advance this
literature by proposing a method of post-selection inference for regression
parameters. Despite our use of the lasso technique as means of model selection,
our assumptions allow for many and even all fixed effects to be nonzero.
Simulation studies demonstrate that the proposed method is more precise than
under-fitting fixed effect estimators, is more efficient than over-fitting
fixed effect estimators, and allows for as accurate inference as the oracle
estimator.",Post-Selection Inference in Three-Dimensional Panel Data,2019-03-30 15:51:35,"Harold D. Chiang, Joel Rodrigue, Yuya Sasaki","http://arxiv.org/abs/1904.00211v2, http://arxiv.org/pdf/1904.00211v2",econ.EM
28944,em,"We propose a framework for analyzing the sensitivity of counterfactuals to
parametric assumptions about the distribution of latent variables in structural
models. In particular, we derive bounds on counterfactuals as the distribution
of latent variables spans nonparametric neighborhoods of a given parametric
specification while other ""structural"" features of the model are maintained.
Our approach recasts the infinite-dimensional problem of optimizing the
counterfactual with respect to the distribution of latent variables (subject to
model constraints) as a finite-dimensional convex program. We also develop an
MPEC version of our method to further simplify computation in models with
endogenous parameters (e.g., value functions) defined by equilibrium
constraints. We propose plug-in estimators of the bounds and two methods for
inference. We also show that our bounds converge to the sharp nonparametric
bounds on counterfactuals as the neighborhood size becomes large. To illustrate
the broad applicability of our procedure, we present empirical applications to
matching models with transferable utility and dynamic discrete choice models.",Counterfactual Sensitivity and Robustness,2019-04-01 20:53:20,"Timothy Christensen, Benjamin Connault","http://arxiv.org/abs/1904.00989v4, http://arxiv.org/pdf/1904.00989v4",econ.EM
28945,em,"Models with a discrete endogenous variable are typically underidentified when
the instrument takes on too few values. This paper presents a new method that
matches pairs of covariates and instruments to restore point identification in
this scenario in a triangular model. The model consists of a structural
function for a continuous outcome and a selection model for the discrete
endogenous variable. The structural outcome function must be continuous and
monotonic in a scalar disturbance, but it can be nonseparable. The selection
model allows for unrestricted heterogeneity. Global identification is obtained
under weak conditions. The paper also provides estimators of the structural
outcome function. Two empirical examples of the return to education and
selection into Head Start illustrate the value and limitations of the method.",Matching Points: Supplementing Instruments with Covariates in Triangular Models,2019-04-02 04:12:10,Junlong Feng,"http://arxiv.org/abs/1904.01159v3, http://arxiv.org/pdf/1904.01159v3",econ.EM
28946,em,"Empirical economists are often deterred from the application of fixed effects
binary choice models mainly for two reasons: the incidental parameter problem
and the computational challenge even in moderately large panels. Using the
example of binary choice models with individual and time fixed effects, we show
how both issues can be alleviated by combining asymptotic bias corrections with
computational advances. Because unbalancedness is often encountered in applied
work, we investigate its consequences on the finite sample properties of
various (bias corrected) estimators. In simulation experiments we find that
analytical bias corrections perform particularly well, whereas split-panel
jackknife estimators can be severely biased in unbalanced panels.",Fixed Effects Binary Choice Models: Estimation and Inference with Long Panels,2019-04-08 20:38:31,"Daniel Czarnowske, Amrei Stammann","http://arxiv.org/abs/1904.04217v3, http://arxiv.org/pdf/1904.04217v3",econ.EM
28948,em,"This article proposes inference procedures for distribution regression models
in duration analysis using randomly right-censored data. This generalizes
classical duration models by allowing situations where explanatory variables'
marginal effects freely vary with duration time. The article discusses
applications to testing uniform restrictions on the varying coefficients,
inferences on average marginal effects, and others involving conditional
distribution estimates. Finite sample properties of the proposed method are
studied by means of Monte Carlo experiments. Finally, we apply our proposal to
study the effects of unemployment benefits on unemployment duration.",Distribution Regression in Duration Analysis: an Application to Unemployment Spells,2019-04-12 15:22:27,"Miguel A. Delgado, Andrés García-Suaza, Pedro H. C. Sant'Anna","http://arxiv.org/abs/1904.06185v2, http://arxiv.org/pdf/1904.06185v2",econ.EM
28949,em,"Internet finance is a new financial model that applies Internet technology to
payment, capital borrowing and lending and transaction processing. In order to
study the internal risks, this paper uses the Internet financial risk elements
as the network node to construct the complex network of Internet financial risk
system. Different from the study of macroeconomic shocks and financial
institution data, this paper mainly adopts the perspective of complex system to
analyze the systematic risk of Internet finance. By dividing the entire
financial system into Internet financial subnet, regulatory subnet and
traditional financial subnet, the paper discusses the contagion relationships
among different risk factors, and concludes that risks
are transmitted externally through the internal circulation of Internet
finance, thus discovering potential hidden dangers of systemic risks. The
results show that the nodes around the center of the whole system are the main
objects of financial risk contagion in the Internet financial network. In
addition, macro-prudential regulation plays a decisive role in the control of
the Internet financial system, and points out the reasons why the current
regulatory measures are still limited. This paper summarizes a research model
which is still in its infancy, hoping to open up new prospects and directions
for us to understand the cascading behaviors of Internet financial risks.",Complex Network Construction of Internet Financial risk,2019-04-14 09:55:11,"Runjie Xu, Chuanmin Mi, Rafal Mierzwiak, Runyu Meng","http://dx.doi.org/10.1016/j.physa.2019.122930, http://arxiv.org/abs/1904.06640v3, http://arxiv.org/pdf/1904.06640v3",econ.EM
28950,em,"We develop a dynamic model of discrete choice that incorporates peer effects
into random consideration sets. We characterize the equilibrium behavior and
study the empirical content of the model. In our setup, changes in the choices
of friends affect the distribution of the consideration sets. We exploit this
variation to recover the ranking of preferences, attention mechanisms, and
network connections. These nonparametric identification results allow
unrestricted heterogeneity across people and do not rely on the variation of
either covariates or the set of available options. Our methodology leads to a
maximum-likelihood estimator that performs well in simulations. We apply our
results to an experimental dataset that has been designed to study the visual
focus of attention.",Peer Effects in Random Consideration Sets,2019-04-14 22:15:07,"Nail Kashaev, Natalia Lazzati","http://arxiv.org/abs/1904.06742v3, http://arxiv.org/pdf/1904.06742v3",econ.EM
28951,em,"In multinomial response models, idiosyncratic variations in the indirect
utility are generally modeled using Gumbel or normal distributions. This study
makes a strong case to substitute these thin-tailed distributions with a
t-distribution. First, we demonstrate that a model with a t-distributed error
kernel better estimates and predicts preferences, especially in
class-imbalanced datasets. Our proposed specification also implicitly accounts
for decision-uncertainty behavior, i.e. the degree of certainty that
decision-makers hold in their choices relative to the variation in the indirect
utility of any alternative. Second, after applying a t-distributed error kernel
in a multinomial response model for the first time, we extend this
specification to a generalized continuous-multinomial (GCM) model and derive
its full-information maximum likelihood estimator. The likelihood involves an
open-form expression of the cumulative density function of the multivariate
t-distribution, which we propose to compute using a combination of the
composite marginal likelihood method and the separation-of-variables approach.
Third, we establish finite sample properties of the GCM model with a
t-distributed error kernel (GCM-t) and highlight its superiority over the GCM
model with a normally-distributed error kernel (GCM-N) in a Monte Carlo study.
Finally, we compare GCM-t and GCM-N in an empirical setting related to
preferences for electric vehicles (EVs). We observe that accounting for
decision-uncertainty behavior in GCM-t results in lower elasticity estimates
and a higher willingness to pay for improving the EV attributes than those of
the GCM-N model. These differences are relevant in making policies to expedite
the adoption of EVs.",A Generalized Continuous-Multinomial Response Model with a t-distributed Error Kernel,2019-04-17 18:54:04,"Subodh Dubey, Prateek Bansal, Ricardo A. Daziano, Erick Guerra","http://arxiv.org/abs/1904.08332v3, http://arxiv.org/pdf/1904.08332v3",econ.EM
28952,em,"Currently all countries including developing countries are expected to
utilize their own tax revenues and carry out their own development for solving
poverty in their countries. However, developing countries cannot earn tax
revenues like developed countries partly because they do not have effective
countermeasures against international tax avoidance. Our analysis focuses on
treaty shopping among various ways to conduct international tax avoidance
because tax revenues of developing countries have been heavily damaged through
treaty shopping. To analyze the location and sector of conduit firms likely to
be used for treaty shopping, we constructed a multilayer ownership-tax network
and proposed multilayer centrality. Because multilayer centrality can consider
not only the value owned in the ownership network but also the withholding tax
rate, it is expected to identify precisely the locations and sectors of conduit
firms established for the purpose of treaty shopping. Our analysis shows that
firms in the sectors of Finance & Insurance and Wholesale & Retail trade etc.
are involved with treaty shopping. We suggest that developing countries include a
clause focusing on these sectors in the tax treaties they conclude.",Location-Sector Analysis of International Profit Shifting on a Multilayer Ownership-Tax Network,2019-04-19 15:30:34,"Tembo Nakamoto, Odile Rouhban, Yuichi Ikeda","http://arxiv.org/abs/1904.09165v1, http://arxiv.org/pdf/1904.09165v1",econ.EM
29010,em,"This paper considers generalized least squares (GLS) estimation for linear
panel data models. By estimating the large error covariance matrix
consistently, the proposed feasible GLS (FGLS) estimator is more efficient than
the ordinary least squares (OLS) in the presence of heteroskedasticity, serial,
and cross-sectional correlations. To take into account the serial correlations,
we employ the banding method. To take into account the cross-sectional
correlations, we suggest to use the thresholding method. We establish the
limiting distribution of the proposed estimator. A Monte Carlo study is
considered. The proposed method is applied to an empirical application.",Feasible Generalized Least Squares for Panel Data with Cross-sectional and Serial Correlations,2019-10-20 18:37:51,"Jushan Bai, Sung Hoon Choi, Yuan Liao","http://arxiv.org/abs/1910.09004v3, http://arxiv.org/pdf/1910.09004v3",econ.EM
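As a small illustration of the thresholding step mentioned above, the sketch below hard-thresholds a sample covariance matrix of simulated residuals at a conventional rate-based cutoff. The cutoff and the pure-noise residuals are assumptions for illustration; the banding step in the time dimension and the FGLS stage are not shown.

```python
# Hard thresholding of a large sample covariance matrix of residuals:
# off-diagonal entries below a rate-based cutoff are set to zero. Cutoff and
# pure-noise residuals are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(4)
T, N = 200, 50
E = rng.normal(size=(T, N))                      # residuals (here: pure noise)
S = np.cov(E, rowvar=False)                      # N x N sample covariance

tau = 2.0 * np.sqrt(np.log(N) / T)               # common rate-based threshold
S_thr = np.where(np.abs(S) >= tau, S, 0.0)
np.fill_diagonal(S_thr, np.diag(S))              # always keep the variances
kept = (np.count_nonzero(S_thr) - N) / (N * (N - 1))
print("share of off-diagonal entries kept:", kept)
```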
28953,em,"We study identification in nonparametric regression models with a
misclassified and endogenous binary regressor when an instrument is correlated
with misclassification error. We show that the regression function is
nonparametrically identified if one binary instrument variable and one binary
covariate satisfy the following conditions. The instrumental variable corrects
endogeneity; the instrumental variable must be correlated with the unobserved
true underlying binary variable, must be uncorrelated with the error term in
the outcome equation, but is allowed to be correlated with the
misclassification error. The covariate corrects misclassification; this
variable can be one of the regressors in the outcome equation, must be
correlated with the unobserved true underlying binary variable, and must be
uncorrelated with the misclassification error. We also propose a mixture-based
framework for modeling unobserved heterogeneous treatment effects with a
misclassified and endogenous binary regressor and show that treatment effects
can be identified if the true treatment effect is related to an observed
regressor and another observable variable.",Identification of Regression Models with a Misclassified and Endogenous Binary Regressor,2019-04-25 06:41:37,"Hiroyuki Kasahara, Katsumi Shimotsu","http://arxiv.org/abs/1904.11143v3, http://arxiv.org/pdf/1904.11143v3",econ.EM
28954,em,"In matched-pairs experiments in which one cluster per pair of clusters is
assigned to treatment, to estimate treatment effects, researchers often regress
their outcome on a treatment indicator and pair fixed effects, clustering
standard errors at the unit-ofrandomization level. We show that even if the
treatment has no effect, a 5%-level t-test based on this regression will
wrongly conclude that the treatment has an effect up to 16.5% of the time. To
fix this problem, researchers should instead cluster standard errors at the
pair level. Using simulations, we show that similar results apply to clustered
experiments with small strata.",At What Level Should One Cluster Standard Errors in Paired and Small-Strata Experiments?,2019-06-01 23:47:18,"Clément de Chaisemartin, Jaime Ramirez-Cuellar","http://arxiv.org/abs/1906.00288v10, http://arxiv.org/pdf/1906.00288v10",econ.EM
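The sketch below illustrates the two clustering choices discussed above in a simulated matched-pairs design with no treatment effect: the same regression of the outcome on a treatment dummy and pair fixed effects, with standard errors clustered either at the unit-of-randomization level or at the pair level. The data-generating process and sample sizes are illustrative assumptions.

```python
# Simulated matched-pairs design with no treatment effect: one treated cluster
# per pair, outcome regressed on a treatment dummy and pair fixed effects.
# Standard errors are clustered at the unit-of-randomization (cluster) level
# and, alternatively, at the pair level. DGP and sizes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_pairs, m = 100, 20                             # pairs of clusters, units per cluster
rows = []
for p in range(n_pairs):
    treated = rng.permutation([0, 1])            # one treated cluster per pair
    for c in range(2):
        shock = rng.normal()                     # cluster-level shock, no treatment effect
        y = shock + rng.normal(size=m)
        rows.append(pd.DataFrame({"y": y, "d": treated[c],
                                  "pair": p, "cluster": 2 * p + c}))
df = pd.concat(rows, ignore_index=True)

fml = "y ~ d + C(pair)"
res_cluster = smf.ols(fml, df).fit(cov_type="cluster",
                                   cov_kwds={"groups": df["cluster"]})
res_pair = smf.ols(fml, df).fit(cov_type="cluster",
                                cov_kwds={"groups": df["pair"]})
print("SE, clustered at unit-of-randomization level:", res_cluster.bse["d"])
print("SE, clustered at pair level:                 ", res_pair.bse["d"])
```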
28955,em,"We propose the use of indirect inference estimation to conduct inference in
complex locally stationary models. We develop a local indirect inference
algorithm and establish the asymptotic properties of the proposed estimator.
Due to the nonparametric nature of locally stationary models, the resulting
indirect inference estimator exhibits nonparametric rates of convergence. We
validate our methodology with simulation studies in the confines of a locally
stationary moving average model and a new locally stationary multiplicative
stochastic volatility model. Using this indirect inference methodology and the
new locally stationary volatility model, we obtain evidence of non-linear,
time-varying volatility trends for monthly returns on several Fama-French
portfolios.",Indirect Inference for Locally Stationary Models,2019-06-05 03:41:13,"David Frazier, Bonsoo Koo","http://dx.doi.org/10.1016/S0304-4076/20/30303-1, http://arxiv.org/abs/1906.01768v2, http://arxiv.org/pdf/1906.01768v2",econ.EM
28956,em,"In a nonparametric instrumental regression model, we strengthen the
conventional moment independence assumption towards full statistical
independence between instrument and error term. This allows us to prove
identification results and develop estimators for a structural function of
interest when the instrument is discrete, and in particular binary. When the
regressor of interest is also discrete with more mass points than the
instrument, we state straightforward conditions under which the structural
function is partially identified, and give modified assumptions which imply
point identification. These stronger assumptions are shown to hold outside of a
small set of conditional moments of the error term. Estimators for the
identified set are given when the structural function is either partially or
point identified. When the regressor is continuously distributed, we prove that
if the instrument induces a sufficiently rich variation in the joint
distribution of the regressor and error term then point identification of the
structural function is still possible. This approach is relatively tractable,
and under some standard conditions we demonstrate that our point identifying
assumption holds on a topologically generic set of density functions for the
joint distribution of regressor, error, and instrument. Our method also applies
to a well-known nonparametric quantile regression framework, and we are able to
state analogous point identification results in that context.","Nonparametric Identification and Estimation with Independent, Discrete Instruments",2019-06-12 19:05:52,Isaac Loh,"http://arxiv.org/abs/1906.05231v1, http://arxiv.org/pdf/1906.05231v1",econ.EM
28957,em,"We consider the asymptotic properties of the Synthetic Control (SC) estimator
when both the number of pre-treatment periods and control units are large. If
potential outcomes follow a linear factor model, we provide conditions under
which the factor loadings of the SC unit converge in probability to the factor
loadings of the treated unit. This happens when there are weights diluted among
an increasing number of control units such that a weighted average of the
factor loadings of the control units asymptotically reconstructs the factor
loadings of the treated unit. In this case, the SC estimator is asymptotically
unbiased even when treatment assignment is correlated with time-varying
unobservables. This result can be valid even when the number of control units
is larger than the number of pre-treatment periods.",On the Properties of the Synthetic Control Estimator with Many Periods and Many Controls,2019-06-16 15:26:28,Bruno Ferman,"http://arxiv.org/abs/1906.06665v5, http://arxiv.org/pdf/1906.06665v5",econ.EM
28958,em,"We study the association between physical appearance and family income using
a novel dataset of 3-dimensional body scans, which mitigates the reporting and
measurement errors observed in most previous studies. We apply machine learning
to obtain intrinsic features of the human body and take into account the
possible endogeneity of body shapes. The estimation results show a significant
relationship between physical appearance and family income, and the
associations differ across gender. This supports the hypothesis of a physical
attractiveness premium and its heterogeneity across gender.",Shape Matters: Evidence from Machine Learning on Body Shape-Income Relationship,2019-06-16 21:42:22,"Suyong Song, Stephen S. Baek","http://dx.doi.org/10.1371/journal.pone.0254785, http://arxiv.org/abs/1906.06747v1, http://arxiv.org/pdf/1906.06747v1",econ.EM
29011,em,"This paper considers estimation of large dynamic factor models with common
and idiosyncratic trends by means of the Expectation Maximization algorithm,
implemented jointly with the Kalman smoother. We show that, as the
cross-sectional dimension $n$ and the sample size $T$ diverge to infinity, the
common component for a given unit estimated at a given point in time is
$\min(\sqrt n,\sqrt T)$-consistent. The case of local levels and/or local
linear trends is also considered. By means of a Monte Carlo simulation
exercise, we compare our approach with estimators based on principal component
analysis.",Quasi Maximum Likelihood Estimation of Non-Stationary Large Approximate Dynamic Factor Models,2019-10-22 12:00:06,"Matteo Barigozzi, Matteo Luciani","http://arxiv.org/abs/1910.09841v1, http://arxiv.org/pdf/1910.09841v1",econ.EM
28959,em,"This paper aims to examine the use of sparse methods to forecast the real, in
the chain-linked volume sense, expenditure components of the US and EU GDP in
the short-run sooner than the national institutions of statistics officially
release the data. We estimate current quarter nowcasts along with 1- and
2-quarter forecasts by bridging quarterly data with available monthly
information announced with a much smaller delay. We solve the
high-dimensionality problem of the monthly dataset by assuming sparse
structures of leading indicators, capable of adequately explaining the dynamics
of analyzed data. For variable selection and estimation of the forecasts, we
use the sparse methods - LASSO together with its recent modifications. We
propose an adjustment that combines the LASSO variants with principal component
analysis, which is deemed to improve the forecasting performance. We evaluate
forecasting performance conducting pseudo-real-time experiments for gross fixed
capital formation, private consumption, imports and exports over the sample of
2005-2019, compared with benchmark ARMA and factor models. The main results
suggest that sparse methods can outperform the benchmarks and identify
reasonable subsets of explanatory variables. The proposed LASSO-PC modification
shows a further improvement in forecast accuracy.",Sparse structures with LASSO through Principal Components: forecasting GDP components in the short-run,2019-06-19 12:30:36,"Saulius Jokubaitis, Dmitrij Celov, Remigijus Leipus","http://dx.doi.org/10.1016/j.ijforecast.2020.09.005, http://arxiv.org/abs/1906.07992v2, http://arxiv.org/pdf/1906.07992v2",econ.EM
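In the spirit of the LASSO-PC adjustment described above, the following sketch extracts a few principal components from a simulated high-dimensional indicator panel and lets a cross-validated LASSO select among the original indicators and the components. The construction differs from the paper's and is meant only to illustrate the combination of the two techniques.

```python
# Extract a few principal components from a simulated indicator panel and let
# a cross-validated LASSO select among the original indicators and the
# components. Purely illustrative; the construction differs from the paper's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
T, p = 120, 80                                   # time periods, monthly indicators
X = rng.normal(size=(T, p))
y = X[:, :3] @ np.array([0.6, -0.4, 0.3]) + rng.normal(scale=0.5, size=T)

Xs = StandardScaler().fit_transform(X)
pcs = PCA(n_components=5).fit_transform(Xs)      # summary factors of the panel
Z = np.column_stack([Xs, pcs])                   # indicators plus components
fit = LassoCV(cv=5).fit(Z, y)
print("selected columns:", np.flatnonzero(fit.coef_))
```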
28960,em,"In 2018, allowance prices in the EU Emission Trading Scheme (EU ETS)
experienced a run-up from persistently low levels in previous years. Regulators
attribute this to a comprehensive reform in the same year, and are confident
the new price level reflects an anticipated tighter supply of allowances. We
ask if this is indeed the case, or if it is an overreaction of the market
driven by speculation. We combine several econometric methods - time-varying
coefficient regression, formal bubble detection as well as time stamping and
crash odds prediction - to juxtapose the regulators' claim versus the
concurrent explanation. We find evidence of a long period of explosive
behaviour in allowance prices, starting in March 2018 when the reform was
adopted. Our results suggest that the reform triggered market participants into
speculation, and question regulators' confidence in its long-term outcome. This
has implications both for the further development of the EU ETS and for the
long-lasting debate about taxes versus emission trading schemes.",Understanding the explosive trend in EU ETS prices -- fundamentals or speculation?,2019-06-25 17:43:50,"Marina Friedrich, Sébastien Fries, Michael Pahle, Ottmar Edenhofer","http://arxiv.org/abs/1906.10572v5, http://arxiv.org/pdf/1906.10572v5",econ.EM
28961,em,"Many economic studies use shift-share instruments to estimate causal effects.
Often, all shares need to fulfil an exclusion restriction, making the
identifying assumption strict. This paper proposes to use methods that relax
the exclusion restriction by selecting invalid shares. I apply the methods in
two empirical examples: the effect of immigration on wages and of Chinese
import exposure on employment. In the first application, the coefficient
becomes lower and often changes sign, but this is reconcilable with arguments
made in the literature. In the second application, the findings are mostly
robust to the use of the new methods.",Relaxing the Exclusion Restriction in Shift-Share Instrumental Variable Estimation,2019-06-29 18:27:49,Nicolas Apfel,"http://arxiv.org/abs/1907.00222v4, http://arxiv.org/pdf/1907.00222v4",econ.EM
28962,em,"There is currently an increasing interest in large vector autoregressive
(VAR) models. VARs are popular tools for macroeconomic forecasting and use of
larger models has been demonstrated to often improve the forecasting ability
compared to more traditional small-scale models. Mixed-frequency VARs deal with
data sampled at different frequencies while remaining within the realms of
VARs. Estimation of mixed-frequency VARs makes use of simulation smoothing, but
using the standard procedure these models quickly become prohibitive in
nowcasting situations as the size of the model grows. We propose two algorithms
that improve the computational efficiency of the simulation smoothing
algorithm. Our preferred choice is an adaptive algorithm, which augments the
state vector as necessary to sample also monthly variables that are missing at
the end of the sample. For large VARs, we find considerable improvements in
speed using our adaptive algorithm. The algorithm therefore provides a crucial
building block for bringing the mixed-frequency VARs to the high-dimensional
regime.",Simulation smoothing for nowcasting with large mixed-frequency VARs,2019-07-02 00:08:21,"Sebastian Ankargren, Paulina Jonéus","http://arxiv.org/abs/1907.01075v1, http://arxiv.org/pdf/1907.01075v1",econ.EM
28963,em,"We propose a robust method of discrete choice analysis when agents' choice
sets are unobserved. Our core model assumes nothing about agents' choice sets
apart from their minimum size. Importantly, it leaves unrestricted the
dependence, conditional on observables, between choice sets and preferences. We
first characterize the sharp identification region of the model's parameters by
a finite set of conditional moment inequalities. We then apply our theoretical
findings to learn about households' risk preferences and choice sets from data
on their deductible choices in auto collision insurance. We find that the data
can be explained by expected utility theory with low levels of risk aversion
and heterogeneous non-singleton choice sets, and that more than three in four
households require limited choice sets to explain their deductible choices. We
also provide simulation evidence on the computational tractability of our
method in applications with larger feasible sets or higher-dimensional
unobserved heterogeneity.",Heterogeneous Choice Sets and Preferences,2019-07-04 14:47:26,"Levon Barseghyan, Maura Coughlin, Francesca Molinari, Joshua C. Teitelbaum","http://arxiv.org/abs/1907.02337v2, http://arxiv.org/pdf/1907.02337v2",econ.EM
28964,em,"In this paper we develop a new machine learning estimator for ordered choice
models based on the random forest. The proposed Ordered Forest flexibly
estimates the conditional choice probabilities while taking the ordering
information explicitly into account. In addition to common machine learning
estimators, it enables the estimation of marginal effects as well as conducting
inference and thus provides the same output as classical econometric
estimators. An extensive simulation study reveals a good predictive
performance, particularly in settings with non-linearities and
near-multicollinearity. An empirical application contrasts the estimation of
marginal effects and their standard errors with an ordered logit model. A
software implementation of the Ordered Forest is provided both in R and Python
in the package orf available on CRAN and PyPI, respectively.",Random Forest Estimation of the Ordered Choice Model,2019-07-04 17:54:58,"Michael Lechner, Gabriel Okasa","http://arxiv.org/abs/1907.02436v3, http://arxiv.org/pdf/1907.02436v3",econ.EM
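Below is a stripped-down illustration of the core Ordered Forest idea: estimate the cumulative probabilities P(Y <= k) with binary regression forests and difference them to obtain ordered class probabilities. The orf packages on CRAN and PyPI implement the full estimator (including honest splitting, marginal effects, and inference); the sketch on simulated data is not that implementation.

```python
# Stripped-down Ordered Forest idea: estimate cumulative probabilities
# P(Y <= k) with binary regression forests and difference them to obtain
# ordered class probabilities. Not the orf implementation; simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n, p = 1000, 5
X = rng.normal(size=(n, p))
latent = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
y = np.digitize(latent, [-1.0, 0.0, 1.0])        # ordered classes 0, 1, 2, 3

classes = np.unique(y)
cum = np.ones((n, len(classes)))                 # last column stays at 1
for j, k in enumerate(classes[:-1]):
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X, (y <= k).astype(float))
    cum[:, j] = rf.predict(X)                    # estimated P(Y <= k | X)

probs = np.diff(np.column_stack([np.zeros(n), cum]), axis=1)
probs = np.clip(probs, 0.0, None)                # crude fix for non-monotonicity
probs /= probs.sum(axis=1, keepdims=True)
print(probs[:3].round(3))
```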
28966,em,"This paper provides tests for detecting sample selection in nonparametric
conditional quantile functions. The first test is an omitted predictor test
with the propensity score as the omitted variable. As with any omnibus test, in
the case of rejection we cannot distinguish between rejection due to genuine
selection or to misspecification. Thus, we suggest a second test to provide
supporting evidence whether the cause for rejection at the first stage was
solely due to selection or not. Using only individuals with propensity score
close to one, this second test relies on an `identification at infinity'
argument, but accommodates cases of irregular identification. Importantly,
neither of the two tests requires parametric assumptions on the selection
equation nor a continuous exclusion restriction. Data-driven bandwidth
procedures are proposed, and Monte Carlo evidence suggests a good finite sample
performance in particular of the first test. Finally, we also derive an
extension of the first test to nonparametric conditional mean functions, and
apply our procedure to test for selection in log hourly wages using UK Family
Expenditure Survey data as in Arellano and Bonhomme (2017).",Testing for Quantile Sample Selection,2019-07-17 12:39:39,"Valentina Corradi, Daniel Gutknecht","http://arxiv.org/abs/1907.07412v5, http://arxiv.org/pdf/1907.07412v5",econ.EM
28967,em,"Clustering methods such as k-means have found widespread use in a variety of
applications. This paper proposes a formal testing procedure to determine
whether a null hypothesis of a single cluster, indicating homogeneity of the
data, can be rejected in favor of multiple clusters. The test is simple to
implement, valid under relatively mild conditions (including non-normality, and
heterogeneity of the data in aspects beyond those in the clustering analysis),
and applicable in a range of contexts (including clustering when the time
series dimension is small, or clustering on parameters other than the mean). We
verify that the test has good size control in finite samples, and we illustrate
the test in applications to clustering vehicle manufacturers and U.S. mutual
funds.",Testing for Unobserved Heterogeneity via k-means Clustering,2019-07-17 18:28:24,"Andrew J. Patton, Brian M. Weller","http://arxiv.org/abs/1907.07582v1, http://arxiv.org/pdf/1907.07582v1",econ.EM
28968,em,"Despite its critical importance, the famous X-model elaborated by Ziel and
Steinert (2016) has neither been widely studied nor further developed. And
yet, the possibilities to improve the model are as numerous as the fields it
can be applied to. The present paper takes advantage of a technique proposed by
Coulon et al. (2014) to enhance the X-model. Instead of using the wholesale
supply and demand curves as inputs for the model, we rely on the transformed
versions of these curves with a perfectly inelastic demand. As a result,
the computational requirements of our X-model decrease and its forecasting power
increases substantially. Moreover, our X-model becomes more robust to
outliers present in the initial auction curves data.",X-model: further development and possible modifications,2019-07-22 12:59:08,Sergei Kulakov,"http://arxiv.org/abs/1907.09206v1, http://arxiv.org/pdf/1907.09206v1",econ.EM
28969,em,"In their IZA Discussion Paper 10247, Johansson and Lee claim that the main
result (Proposition 3) in Abbring and Van den Berg (2003b) does not hold. We
show that their claim is incorrect. At a certain point within their line of
reasoning, they make a rather basic error while transforming one random
variable into another random variable, and this leads them to draw incorrect
conclusions. As a result, their paper can be discarded.","Rebuttal of ""On Nonparametric Identification of Treatment Effects in Duration Models""",2019-07-20 12:18:44,"Jaap H. Abbring, Gerard J. van den Berg","http://arxiv.org/abs/1907.09886v1, http://arxiv.org/pdf/1907.09886v1",econ.EM
28970,em,"This study examines statistical performance of tests for time-varying
properties under misspecified conditional mean and variance. When we test for
time-varying properties of the conditional mean in the case in which data have
no time-varying mean but have time-varying variance, asymptotic tests have size
distortions. This is improved by the use of a bootstrap method. Similarly, when
we test for time-varying properties of the conditional variance in the case in
which data have time-varying mean but no time-varying variance, asymptotic
tests have large size distortions. This is not improved even by the use of
bootstrap methods. We show that tests for time-varying properties of the
conditional mean by the bootstrap are robust regardless of the time-varying
variance model, whereas tests for time-varying properties of the conditional
variance do not perform well in the presence of misspecified time-varying mean.",Testing for time-varying properties under misspecified conditional mean and variance,2019-07-28 19:47:10,"Daiki Maki, Yasushi Ota","http://arxiv.org/abs/1907.12107v2, http://arxiv.org/pdf/1907.12107v2",econ.EM
28971,em,"This study compares statistical properties of ARCH tests that are robust to
the presence of the misspecified conditional mean. The approaches employed in
this study are based on two nonparametric regressions for the conditional mean.
The first is the ARCH test using Nadaraya-Watson kernel regression. The second is the
ARCH test using the polynomial approximation regression. The two approaches do
not require specification of the conditional mean and can adapt to various
nonlinear models, which are unknown a priori. Accordingly, they are robust to
misspecified conditional mean models. Simulation results show that ARCH tests
based on the polynomial approximation regression approach have better
statistical properties than ARCH tests using the Nadaraya-Watson kernel regression
approach for various nonlinear models.",Robust tests for ARCH in the presence of the misspecified conditional mean: A comparison of nonparametric approaches,2019-07-30 09:19:18,"Daiki Maki, Yasushi Ota","http://arxiv.org/abs/1907.12752v2, http://arxiv.org/pdf/1907.12752v2",econ.EM
28972,em,"This paper provides a necessary and sufficient instruments condition assuring
two-step generalized method of moments (GMM) based on the forward orthogonal
deviations transformation is numerically equivalent to two-step GMM based on
the first-difference transformation. The condition also tells us when system
GMM, based on differencing, can be computed using forward orthogonal
deviations. Additionally, it tells us when forward orthogonal deviations and
differencing do not lead to the same GMM estimator. When estimators based on
these two transformations differ, Monte Carlo simulations indicate that
estimators based on forward orthogonal deviations have better finite sample
properties than estimators based on differencing.",A Comparison of First-Difference and Forward Orthogonal Deviations GMM,2019-07-30 16:19:35,Robert F. Phillips,"http://arxiv.org/abs/1907.12880v1, http://arxiv.org/pdf/1907.12880v1",econ.EM
29012,em,"This paper introduces a version of the interdependent value model of Milgrom
and Weber (1982), where the signals are given by an index gathering signal
shifters observed by the econometrician and private ones specific to each
bidder. The model primitives are shown to be nonparametrically identified from
first-price auction bids under a testable mild rank condition. Identification
holds for all possible signal values. This makes it possible to consider a wide range of
counterfactuals where this is important, such as expected revenue in a second-price
auction. An estimation procedure is briefly discussed.",Nonparametric identification of an interdependent value model with buyer covariates from first-price auction bids,2019-10-23 19:12:17,"Nathalie Gimenes, Emmanuel Guerre","http://arxiv.org/abs/1910.10646v1, http://arxiv.org/pdf/1910.10646v1",econ.EM
28973,em,"Given the unconfoundedness assumption, we propose new nonparametric
estimators for the reduced dimensional conditional average treatment effect
(CATE) function. In the first stage, the nuisance functions necessary for
identifying CATE are estimated by machine learning methods, allowing the number
of covariates to be comparable to or larger than the sample size. The second
stage consists of a low-dimensional local linear regression, reducing CATE to a
function of the covariate(s) of interest. We consider two variants of the
estimator depending on whether the nuisance functions are estimated over the
full sample or over a hold-out sample. Building on Belloni et al. (2017) and
Chernozhukov et al. (2018), we derive functional limit theory for the
estimators and provide an easy-to-implement procedure for uniform inference
based on the multiplier bootstrap. The empirical application revisits the
effect of maternal smoking on a baby's birth weight as a function of the
mother's age.",Estimation of Conditional Average Treatment Effects with High-Dimensional Data,2019-08-07 02:40:47,"Qingliang Fan, Yu-Chin Hsu, Robert P. Lieli, Yichong Zhang","http://arxiv.org/abs/1908.02399v5, http://arxiv.org/pdf/1908.02399v5",econ.EM
28974,em,"We consider nonparametric identification of independent private value
first-price auction models, in which the analyst only observes winning bids.
Our benchmark model assumes an exogenous number of bidders N. We show that, if
the bidders observe N, the resulting discontinuities in the winning bid density
can be used to identify the distribution of N. The private value distribution
can be nonparametrically identified in a second step. This extends, under
testable identification conditions, to the case where N is a number of
potential buyers, who bid with some unknown probability. Identification also
holds in the presence of additive unobserved heterogeneity drawn from some
parametric distributions. A last class of extensions deals with cartels which
can change size across auctions due to varying bidder cartel membership.
Identification still holds if the econometrician observes winner identities and
winning bids, provided an (unknown) bidder is always a cartel member. The cartel
participation probabilities of other bidders can also be identified. An
application to USFS timber auction data illustrates the usefulness of
discontinuities to analyze bidder participation.",Nonparametric Identification of First-Price Auction with Unobserved Competition: A Density Discontinuity Framework,2019-08-15 13:06:05,"Emmanuel Guerre, Yao Luo","http://arxiv.org/abs/1908.05476v2, http://arxiv.org/pdf/1908.05476v2",econ.EM
28975,em,"Establishing that a demand mapping is injective is core first step for a
variety of methodologies. When a version of the law of demand holds, global
injectivity can be checked by seeing whether the demand mapping is constant
over any line segments. When we add the assumption of differentiability, we
obtain necessary and sufficient conditions for injectivity that generalize
classical \cite{gale1965jacobian} conditions for quasi-definite Jacobians.",Injectivity and the Law of Demand,2019-08-15 22:13:43,Roy Allen,"http://arxiv.org/abs/1908.05714v1, http://arxiv.org/pdf/1908.05714v1",econ.EM
28976,em,"Policy evaluation is central to economic data analysis, but economists mostly
work with observational data in view of limited opportunities to carry out
controlled experiments. In the potential outcome framework, the panel data
approach (Hsiao, Ching and Wan, 2012) constructs the counterfactual by
exploiting the correlation between cross-sectional units in panel data. The
choice of cross-sectional control units, a key step in its implementation, is
nevertheless unresolved in data-rich environments where many possible controls
are at the researcher's disposal. We propose the forward selection method to
choose control units, and establish validity of the post-selection inference.
Our asymptotic framework allows the number of possible controls to grow much
faster than the time dimension. The easy-to-implement algorithms and their
theoretical guarantee extend the panel data approach to big data settings.",Forward-Selected Panel Data Approach for Program Evaluation,2019-08-16 12:00:57,"Zhentao Shi, Jingyi Huang","http://arxiv.org/abs/1908.05894v3, http://arxiv.org/pdf/1908.05894v3",econ.EM
28977,em,"A family of models of individual discrete choice are constructed by means of
statistical averaging of choices made by a subject in a reinforcement learning
process, where the subject has short, k-term memory span. The choice
probabilities in these models combine in a non-trivial, non-linear way the
initial learning bias and the experience gained through learning. The
properties of such models are discussed and, in particular, it is shown that
probabilities deviate from Luce's Choice Axiom, even if the initial bias
adheres to it. Moreover, we show that the latter property is recovered as the
memory span becomes large.
  Two applications in utility theory are considered. In the first, we use the
discrete choice model to generate a binary preference relation on simple
lotteries. We show that the preferences violate transitivity and independence
axioms of expected utility theory. Furthermore, we establish the dependence of
the preferences on frames, with risk aversion for gains, and risk seeking for
losses. Based on these findings, we next propose a parametric model of choice
based on the probability maximization principle, as a model for deviations from
the expected utility principle. To illustrate the approach, we apply it to the
classical problem of demand for insurance.",A model of discrete choice based on reinforcement learning under short-term memory,2019-08-16 22:15:33,Misha Perepelitsa,"http://arxiv.org/abs/1908.06133v1, http://arxiv.org/pdf/1908.06133v1",econ.EM
28978,em,"We propose a new finite sample corrected variance estimator for the linear
generalized method of moments (GMM) including the one-step, two-step, and
iterated estimators. Our formula additionally corrects for the
over-identification bias in variance estimation on top of the commonly used
finite sample correction of Windmeijer (2005), which corrects for the bias from
estimating the efficient weight matrix, and is thus doubly corrected. An important
feature of the proposed double correction is that it automatically provides
robustness to misspecification of the moment condition. In contrast, the
conventional variance estimator and the Windmeijer correction are inconsistent
under misspecification. That is, the proposed double correction formula
provides a convenient way to obtain improved inference under correct
specification and robustness against misspecification at the same time.",A Doubly Corrected Robust Variance Estimator for Linear GMM,2019-08-21 15:41:08,"Jungbin Hwang, Byunghoon Kang, Seojeong Lee","http://arxiv.org/abs/1908.07821v2, http://arxiv.org/pdf/1908.07821v2",econ.EM
29013,em,"This paper deals with the time-varying high dimensional covariance matrix
estimation. We propose two covariance matrix estimators corresponding with a
time-varying approximate factor model and a time-varying approximate
characteristic-based factor model, respectively. The models allow the factor
loadings, factor covariance matrix, and error covariance matrix to change
smoothly over time. We study the rate of convergence of each estimator. Our
simulation and empirical study indicate that time-varying covariance matrix
estimators generally perform better than time-invariant covariance matrix
estimators. Also, if characteristics are available that genuinely explain true
loadings, the characteristics can be used to estimate loadings more precisely
in finite samples; their helpfulness increases when loadings rapidly change.",Estimating a Large Covariance Matrix in Time-varying Factor Models,2019-10-26 03:08:24,Jaeheon Jung,"http://arxiv.org/abs/1910.11965v1, http://arxiv.org/pdf/1910.11965v1",econ.EM
28979,em,"This paper considers the practically important case of nonparametrically
estimating heterogeneous average treatment effects that vary with a limited
number of discrete and continuous covariates in a selection-on-observables
framework where the number of possible confounders is very large. We propose a
two-step estimator for which the first step is estimated by machine learning.
We show that this estimator has desirable statistical properties like
consistency, asymptotic normality and rate double robustness. In particular, we
derive the coupled convergence conditions between the nonparametric and the
machine learning steps. We also show that estimating population average
treatment effects by averaging the estimated heterogeneous effects is
semi-parametrically efficient. The new estimator is illustrated in an empirical example of the
effects of mothers' smoking during pregnancy on the resulting birth weight.",Nonparametric estimation of causal heterogeneity under high-dimensional confounding,2019-08-23 15:18:37,"Michael Zimmert, Michael Lechner","http://arxiv.org/abs/1908.08779v1, http://arxiv.org/pdf/1908.08779v1",econ.EM
28980,em,"The literature on stochastic programming typically restricts attention to
problems that fulfill constraint qualifications. The literature on estimation
and inference under partial identification frequently restricts the geometry of
identified sets with diverse high-level assumptions. These superficially appear
to be different approaches to closely related problems. We extensively analyze
their relation. Among other things, we show that for partial identification
through pure moment inequalities, numerous assumptions from the literature
essentially coincide with the Mangasarian-Fromowitz constraint qualification.
This clarifies the relation between well-known contributions, including within
econometrics, and elucidates stringency, as well as ease of verification, of
some high-level assumptions in seminal papers.",Constraint Qualifications in Partial Identification,2019-08-24 10:34:43,"Hiroaki Kaido, Francesca Molinari, Jörg Stoye","http://dx.doi.org/10.1017/S0266466621000207, http://arxiv.org/abs/1908.09103v4, http://arxiv.org/pdf/1908.09103v4",econ.EM
28981,em,"We develop a new extreme value theory for repeated cross-sectional and panel
data to construct asymptotically valid confidence intervals (CIs) for
conditional extremal quantiles from a fixed number $k$ of nearest-neighbor tail
observations. As a by-product, we also construct CIs for extremal quantiles of
coefficients in linear random coefficient models. For any fixed $k$, the CIs
are uniformly valid without parametric assumptions over a set of nonparametric
data generating processes associated with various tail indices. Simulation
studies show that our CIs exhibit superior small-sample coverage and length
properties relative to alternative nonparametric methods based on asymptotic
normality. Applying the proposed method to Natality Vital Statistics, we study
factors of extremely low birth weights. We find that signs of major effects are
the same as those found in preceding studies based on parametric models, but
with different magnitudes.",Fixed-k Inference for Conditional Extremal Quantiles,2019-09-01 01:39:33,"Yuya Sasaki, Yulong Wang","http://arxiv.org/abs/1909.00294v3, http://arxiv.org/pdf/1909.00294v3",econ.EM
28982,em,"We study the incidental parameter problem for the ``three-way'' Poisson
Pseudo-Maximum Likelihood (``PPML'') estimator recently recommended for
identifying the effects of trade policies and in other panel data gravity
settings. Despite the number and variety of fixed effects involved, we confirm
PPML is consistent for fixed $T$ and we show it is in fact the only estimator
among a wide range of PML gravity estimators that is generally consistent in
this context when $T$ is fixed. At the same time, asymptotic confidence
intervals in fixed-$T$ panels are not correctly centered at the true point
estimates, and cluster-robust variance estimates used to construct standard
errors are generally biased as well. We characterize each of these biases
analytically and show both numerically and empirically that they are salient
even for real-data settings with a large number of countries. We also offer
practical remedies that can be used to obtain more reliable inferences of the
effects of trade policies and other time-varying gravity variables, which we
make available via an accompanying Stata package called ppml_fe_bias.",Bias and Consistency in Three-way Gravity Models,2019-09-03 20:54:06,"Martin Weidner, Thomas Zylkin","http://arxiv.org/abs/1909.01327v6, http://arxiv.org/pdf/1909.01327v6",econ.EM
28983,em,"We analyze the challenges for inference in difference-in-differences (DID)
when there is spatial correlation. We present novel theoretical insights and
empirical evidence on the settings in which ignoring spatial correlation should
lead to more or less distortions in DID applications. We show that details such
as the time frame used in the estimation, the choice of the treated and control
groups, and the choice of the estimator, are key determinants of distortions
due to spatial correlation. We also analyze the feasibility and trade-offs
involved in a series of alternatives to take spatial correlation into account.
Given that, we provide relevant recommendations for applied researchers on how
to mitigate and assess the possibility of inference distortions due to spatial
correlation.",Inference in Difference-in-Differences: How Much Should We Trust in Independent Clusters?,2019-09-04 16:19:25,Bruno Ferman,"http://arxiv.org/abs/1909.01782v7, http://arxiv.org/pdf/1909.01782v7",econ.EM
28984,em,"This paper explores the estimation of a panel data model with cross-sectional
interaction that is flexible both in its approach to specifying the network of
connections between cross-sectional units, and in controlling for unobserved
heterogeneity. It is assumed that there are different sources of information
available on a network, which can be represented in the form of multiple
weights matrices. These matrices may reflect observed links, different measures
of connectivity, groupings or other network structures, and the number of
matrices may be increasing with sample size. A penalised quasi-maximum
likelihood estimator is proposed which aims to alleviate the risk of network
misspecification by shrinking the coefficients of irrelevant weights matrices
to exactly zero. Moreover, controlling for unobserved factors in estimation
provides a safeguard against the misspecification that might arise from
unobserved heterogeneity. The asymptotic properties of the estimator are
derived in a framework where the true value of each parameter remains fixed as
the total number of parameters increases. A Monte Carlo simulation is used to
assess finite sample performance, and in an empirical application the method is
applied to study the prevalence of network spillovers in determining growth
rates across countries.",Shrinkage Estimation of Network Spillovers with Factor Structured Errors,2019-09-06 14:28:41,"Ayden Higgins, Federico Martellosio","http://arxiv.org/abs/1909.02823v4, http://arxiv.org/pdf/1909.02823v4",econ.EM
29254,em,"We consider the problem of inference in Difference-in-Differences (DID) when
there are few treated units and errors are spatially correlated. We first show
that, when there is a single treated unit, some existing inference methods
designed for settings with few treated and many control units remain
asymptotically valid when errors are weakly dependent. However, these methods
may be invalid with more than one treated unit. We propose alternatives that
are asymptotically valid in this setting, even when the relevant distance
metric across units is unavailable.",Inference in Difference-in-Differences with Few Treated Units and Spatial Correlation,2020-06-30 20:58:43,"Luis Alvarez, Bruno Ferman","http://arxiv.org/abs/2006.16997v7, http://arxiv.org/pdf/2006.16997v7",econ.EM
28985,em,"The Economy Watcher Survey, which is a market survey published by the
Japanese government, contains \emph{assessments of current and future economic
conditions} by people from various fields. Although this survey provides
insights regarding economic policy for policymakers, a clear definition of the
word ""future"" in future economic conditions is not provided. Hence, the
assessments respondents provide in the survey are simply based on their
interpretations of the meaning of ""future."" This motivated us to reveal the
different interpretations of the future in their judgments of future economic
conditions by applying weakly supervised learning and text mining. In our
research, we separate the assessments of future economic conditions into
economic conditions of the near and distant future using learning from positive
and unlabeled data (PU learning). Because the dataset includes data from
several periods, we devised a new architecture to enable neural networks to
conduct PU learning based on the idea of multi-task learning to efficiently
learn a classifier. Our empirical analysis confirmed that the proposed method
could separate the future economic conditions, and we interpreted the
classification results to obtain intuitions for policymaking.",Identifying Different Definitions of Future in the Assessment of Future Economic Conditions: Application of PU Learning and Text Mining,2019-09-08 02:13:46,Masahiro Kato,"http://arxiv.org/abs/1909.03348v3, http://arxiv.org/pdf/1909.03348v3",econ.EM
28986,em,"This paper investigates double/debiased machine learning (DML) under multiway
clustered sampling environments. We propose a novel multiway cross fitting
algorithm and a multiway DML estimator based on this algorithm. We also develop
a multiway cluster robust standard error formula. Simulations indicate that the
proposed procedure has favorable finite sample performance. Applying the
proposed method to market share data for demand analysis, we obtain larger
two-way cluster robust standard errors than non-robust ones.",Multiway Cluster Robust Double/Debiased Machine Learning,2019-09-08 19:03:37,"Harold D. Chiang, Kengo Kato, Yukun Ma, Yuya Sasaki","http://arxiv.org/abs/1909.03489v3, http://arxiv.org/pdf/1909.03489v3",econ.EM
28987,em,"A desire to understand the decision of the UK to leave the European Union,
Brexit, in the referendum of June 2016 has continued to occupy academics, the
media and politicians. Using the topological data analysis ball mapper, we extract
information from multi-dimensional datasets gathered on Brexit voting and
regional socio-economic characteristics. While we find broad patterns
consistent with extant empirical work, we also evidence that support for Leave
drew from a far more homogeneous demographic than Remain. Obtaining votes from
this concise set was more straightforward for Leave campaigners than was
Remain's task of mobilising a diverse group to oppose Brexit.",An Economic Topology of the Brexit vote,2019-09-08 19:05:40,"Pawel Dlotko, Lucy Minford, Simon Rudkin, Wanling Qiu","http://arxiv.org/abs/1909.03490v2, http://arxiv.org/pdf/1909.03490v2",econ.EM
28988,em,"We recast the synthetic controls for evaluating policies as a counterfactual
prediction problem and replace its linear regression with a nonparametric model
inspired by machine learning. The proposed method enables us to achieve
accurate counterfactual predictions and we provide theoretical guarantees. We
apply our method to a highly debated policy: the relocation of the US embassy
to Jerusalem. In Israel and Palestine, we find that the average number of
weekly conflicts has increased by roughly 103\% over 48 weeks since the
relocation was announced on December 6, 2017. By using conformal inference and
placebo tests, we justify our model and find the increase to be statistically
significant.",Tree-based Synthetic Control Methods: Consequences of moving the US Embassy,2019-09-09 19:15:03,"Nicolaj Søndergaard Mühlbach, Mikkel Slot Nielsen","http://arxiv.org/abs/1909.03968v3, http://arxiv.org/pdf/1909.03968v3",econ.EM
28989,em,"We analyze the properties of matching estimators when there are few treated,
but many control observations. We show that, under standard assumptions, the
nearest neighbor matching estimator for the average treatment effect on the
treated is asymptotically unbiased in this framework. However, when the number
of treated observations is fixed, the estimator is not consistent, and it is
generally not asymptotically normal. Since standard inference methods are
inadequate, we propose alternative inference methods, based on the theory of
randomization tests under approximate symmetry, that are asymptotically valid
in this framework. We show that these tests are valid under relatively strong
assumptions when the number of treated observations is fixed, and under weaker
assumptions when the number of treated observations increases, but at a lower
rate relative to the number of control observations.",Matching Estimators with Few Treated and Many Control Observations,2019-09-11 17:49:03,Bruno Ferman,"http://arxiv.org/abs/1909.05093v4, http://arxiv.org/pdf/1909.05093v4",econ.EM
28990,em,"The paper proposes a quantile-regression inference framework for first-price
auctions with symmetric risk-neutral bidders under the independent
private-value paradigm. It is first shown that a private-value quantile
regression generates a quantile regression for the bids. The private-value
quantile regression can be easily estimated from the bid quantile regression
and its derivative with respect to the quantile level. This also makes it possible to test
various specification or exogeneity null hypotheses using the observed bids
in a simple way. A new local polynomial technique is proposed to estimate the
latter over the whole quantile level interval. Plug-in estimation of
functionals is also considered, as needed for the expected revenue or the case
of CRRA risk-averse bidders, which is amenable to our framework. A
quantile-regression analysis of USFS timber auctions is found to be more appropriate than the
homogenized-bid methodology and illustrates the contribution of each
explanatory variable to the private-value distribution. Linear interactive
sieve extensions are proposed and studied in the Appendices.",Quantile regression methods for first-price auctions,2019-09-12 13:05:37,"Nathalie Gimenes, Emmanuel Guerre","http://arxiv.org/abs/1909.05542v2, http://arxiv.org/pdf/1909.05542v2",econ.EM
28991,em,"This paper develops a consistent series-based specification test for
semiparametric panel data models with fixed effects. The test statistic
resembles the Lagrange Multiplier (LM) test statistic in parametric models and
is based on a quadratic form in the restricted model residuals. The use of
series methods facilitates both estimation of the null model and computation of
the test statistic. The asymptotic distribution of the test statistic is
standard normal, so that appropriate critical values can easily be computed.
The projection property of series estimators allows me to develop a degrees of
freedom correction. This correction makes it possible to account for the
estimation variance and obtain refined asymptotic results. It also
substantially improves the finite sample performance of the test.",A Consistent LM Type Specification Test for Semiparametric Panel Data Models,2019-09-12 16:42:16,Ivan Korolev,"http://arxiv.org/abs/1909.05649v1, http://arxiv.org/pdf/1909.05649v1",econ.EM
28992,em,"One simple, and often very effective, way to attenuate the impact of nuisance
parameters on maximum likelihood estimation of a parameter of interest is to
recenter the profile score for that parameter. We apply this general principle
to the quasi-maximum likelihood estimator (QMLE) of the autoregressive
parameter $\lambda$ in a spatial autoregression. The resulting estimator for
$\lambda$ has better finite sample properties compared to the QMLE for
$\lambda$, especially in the presence of a large number of covariates. It can
also solve the incidental parameter problem that arises, for example, in social
interaction models with network fixed effects, or in spatial panel models with
individual or time fixed effects. However, spatial autoregressions present
specific challenges for this type of adjustment, because recentering the
profile score may cause the adjusted estimate to be outside the usual parameter
space for $\lambda$. Conditions for this to happen are given, and implications
are discussed. For inference, we propose confidence intervals based on a
Lugannani--Rice approximation to the distribution of the adjusted QMLE of
$\lambda$. Based on our simulations, the coverage properties of these intervals
are excellent even in models with a large number of covariates.",Adjusted QMLE for the spatial autoregressive parameter,2019-09-18 02:23:50,"Federico Martellosio, Grant Hillier","http://arxiv.org/abs/1909.08141v1, http://arxiv.org/pdf/1909.08141v1",econ.EM
28993,em,"This paper investigates and extends the computationally attractive
nonparametric random coefficients estimator of Fox, Kim, Ryan, and Bajari
(2011). We show that their estimator is a special case of the nonnegative
LASSO, explaining its sparse nature observed in many applications. Recognizing
this link, we extend the estimator, transforming it to a special case of the
nonnegative elastic net. The extension improves the estimator's recovery of the
true support and allows for more accurate estimates of the random coefficients'
distribution. Our estimator is a generalization of the original estimator and
is therefore guaranteed to have a model fit at least as good as the original
one. A theoretical analysis of both estimators' properties shows that, under
conditions, our generalized estimator approximates the true distribution more
accurately. Two Monte Carlo experiments and an application to a travel mode
data set illustrate the improved performance of the generalized estimator.",Nonparametric Estimation of the Random Coefficients Model: An Elastic Net Approach,2019-09-18 16:22:28,"Florian Heiss, Stephan Hetzenecker, Maximilian Osterhaus","http://arxiv.org/abs/1909.08434v2, http://arxiv.org/pdf/1909.08434v2",econ.EM
28994,em,"In this paper, a statistical model for panel data with unobservable grouped
factor structures which are correlated with the regressors and the group
membership can be unknown. The factor loadings are assumed to be in different
subspaces and the subspace clustering for factor loadings are considered. A
method called least squares subspace clustering estimate (LSSC) is proposed to
estimate the model parameters by minimizing the least-square criterion and to
perform the subspace clustering simultaneously. The consistency of the proposed
subspace clustering is proved and the asymptotic properties of the estimation
procedure are studied under certain conditions. A Monte Carlo simulation study
is used to illustrate the advantages of the proposed method. Further
considerations for situations in which the number of subspaces for factors, the
dimension of factors, and the dimension of subspaces are unknown are also
discussed. For illustrative purposes, the proposed method is applied to study
the linkage between income and democracy across countries while subspace
patterns of unobserved factors and factor loadings are allowed.",Subspace Clustering for Panel Data with Interactive Effects,2019-09-22 04:51:11,"Jiangtao Duan, Wei Gao, Hao Qu, Hon Keung Tony","http://arxiv.org/abs/1909.09928v2, http://arxiv.org/pdf/1909.09928v2",econ.EM
28995,em,"We show that moment inequalities in a wide variety of economic applications
have a particular linear conditional structure. We use this structure to
construct uniformly valid confidence sets that remain computationally tractable
even in settings with nuisance parameters. We first introduce least favorable
critical values which deliver non-conservative tests if all moments are
binding. Next, we introduce a novel conditional inference approach which
ensures a strong form of insensitivity to slack moments. Our recommended
approach is a hybrid technique which combines desirable aspects of the least
favorable and conditional methods. The hybrid approach performs well in
simulations calibrated to Wollmann (2018), with favorable power and
computational time comparisons relative to existing alternatives.",Inference for Linear Conditional Moment Inequalities,2019-09-22 21:24:09,"Isaiah Andrews, Jonathan Roth, Ariel Pakes","http://arxiv.org/abs/1909.10062v5, http://arxiv.org/pdf/1909.10062v5",econ.EM
28996,em,"There are many environments in econometrics which require nonseparable
modeling of a structural disturbance. In a nonseparable model with endogenous
regressors, key conditions are validity of instrumental variables and
monotonicity of the model in a scalar unobservable variable. Under these
conditions the nonseparable model is equivalent to an instrumental quantile
regression model. A failure of the key conditions, however, makes instrumental
quantile regression potentially inconsistent. This paper develops a methodology
for testing whether the instrumental quantile regression model
is correctly specified. Our test statistic is asymptotically normally
distributed under correct specification and consistent against any alternative
model. In addition, test statistics to justify the model simplification are
established. Finite sample properties are examined in a Monte Carlo study and
an empirical illustration is provided.",Specification Testing in Nonparametric Instrumental Quantile Regression,2019-09-23 05:41:14,Christoph Breunig,"http://dx.doi.org/10.1017/S0266466619000288, http://arxiv.org/abs/1909.10129v1, http://arxiv.org/pdf/1909.10129v1",econ.EM
28997,em,"This paper proposes several tests of restricted specification in
nonparametric instrumental regression. Based on series estimators, test
statistics are established that allow for tests of the general model against a
parametric or nonparametric specification as well as a test of exogeneity of
the vector of regressors. The tests' asymptotic distributions under correct
specification are derived and their consistency against any alternative model
is shown. Under a sequence of local alternative hypotheses, the asymptotic
distributions of the tests is derived. Moreover, uniform consistency is
established over a class of alternatives whose distance to the null hypothesis
shrinks appropriately as the sample size increases. A Monte Carlo study
examines finite sample performance of the test statistics.",Goodness-of-Fit Tests based on Series Estimators in Nonparametric Instrumental Regression,2019-09-23 05:55:22,Christoph Breunig,"http://dx.doi.org/10.1016/j.jeconom.2014.09.006, http://arxiv.org/abs/1909.10133v1, http://arxiv.org/pdf/1909.10133v1",econ.EM
28998,em,"Nonparametric series regression often involves specification search over the
tuning parameter, i.e., evaluating estimates and confidence intervals with a
different number of series terms. This paper develops pointwise and uniform
inferences for conditional mean functions in nonparametric series estimations
that are uniform in the number of series terms. As a result, this paper
constructs confidence intervals and confidence bands with possibly
data-dependent series terms that have valid asymptotic coverage probabilities.
This paper also considers a partially linear model setup and develops inference
methods for the parametric part uniform in the number of series terms. The
finite sample performance of the proposed methods is investigated in various
simulation setups as well as in an illustrative example, i.e., the
nonparametric estimation of the wage elasticity of the expected labor supply
from Blomquist and Newey (2002).",Inference in Nonparametric Series Estimation with Specification Searches for the Number of Series Terms,2019-09-26 17:45:13,Byunghoon Kang,"http://arxiv.org/abs/1909.12162v2, http://arxiv.org/pdf/1909.12162v2",econ.EM
28999,em,"In this study, we investigate estimation and inference on a low-dimensional
causal parameter in the presence of high-dimensional controls in an
instrumental variable quantile regression. Our proposed econometric procedure
builds on the Neyman-type orthogonal moment conditions of a previous study,
Chernozhukov, Hansen and Wuthrich (2018), and is thus relatively insensitive to
the estimation of the nuisance parameters. The Monte Carlo experiments show
that the estimator copes well with high-dimensional controls. We also apply the
procedure to empirically reinvestigate the quantile treatment effect of 401(k)
participation on accumulated wealth.",Debiased/Double Machine Learning for Instrumental Variable Quantile Regressions,2019-09-27 13:11:18,"Jau-er Chen, Chien-Hsun Huang, Jia-Jyun Tien","http://arxiv.org/abs/1909.12592v3, http://arxiv.org/pdf/1909.12592v3",econ.EM
29000,em,"Price indexes in time and space is a most relevant topic in statistical
analysis from both the methodological and the application side. In this paper a
price index providing a novel and effective solution to price indexes over
several periods and among several countries, that is in both a multi-period and
a multilateral framework, is devised. The reference basket of the devised index
is the union of the intersections of the baskets of all periods/countries in
pairs. As such, it provides a broader coverage than usual indexes. Index
closed-form expressions and updating formulas are provided and properties
investigated. Last, applications with real and simulated data provide evidence
of the performance of the index at stake.",An econometric analysis of the Italian cultural supply,2019-09-30 22:58:41,"Consuelo Nava, Maria Grazia Zoia","http://arxiv.org/abs/1910.00073v3, http://arxiv.org/pdf/1910.00073v3",econ.EM
29001,em,"We study the informational content of factor structures in discrete
triangular systems. Factor structures have been employed in a variety of
settings in cross sectional and panel data models, and in this paper we
formally quantify their identifying power in a bivariate system often employed
in the treatment effects literature. Our main findings are that imposing a
factor structure yields point identification of parameters of interest, such as
the coefficient associated with the endogenous regressor in the outcome
equation, under weaker assumptions than usually required in these models. In
particular, we show that a ""non-standard"" exclusion restriction that requires
an explanatory variable in the outcome equation to be excluded from the
treatment equation is no longer necessary for identification, even in cases
where all of the regressors from the outcome equation are discrete. We also
establish identification of the coefficient of the endogenous regressor in
models with more general factor structures, in situations where one has access
to at least two continuous measurements of the common factor.",Informational Content of Factor Structures in Simultaneous Binary Response Models,2019-10-03 09:29:40,"Shakeeb Khan, Arnaud Maurel, Yichong Zhang","http://arxiv.org/abs/1910.01318v3, http://arxiv.org/pdf/1910.01318v3",econ.EM
29002,em,"This paper analyzes identifiability properties of structural vector
autoregressive moving average (SVARMA) models driven by independent and
non-Gaussian shocks. It is well known, that SVARMA models driven by Gaussian
errors are not identified without imposing further identifying restrictions on
the parameters. Even in reduced form and assuming stability and invertibility,
vector autoregressive moving average models are in general not identified
without requiring certain parameter matrices to be non-singular. Independence
and non-Gaussianity of the shocks are used to show that they are identified up
to permutations and scalings. In this way, typically imposed identifying
restrictions are made testable. Furthermore, we introduce a maximum-likelihood
estimator of the non-Gaussian SVARMA model which is consistent and
asymptotically normally distributed.",Identification and Estimation of SVARMA models with Independent and Non-Gaussian Inputs,2019-10-09 19:06:46,Bernd Funovits,"http://arxiv.org/abs/1910.04087v1, http://arxiv.org/pdf/1910.04087v1",econ.EM
29003,em,"We generalize well-known results on structural identifiability of vector
autoregressive models (VAR) to the case where the innovation covariance matrix
has reduced rank. Structural singular VAR models appear, for example, as
solutions of rational expectation models where the number of shocks is usually
smaller than the number of endogenous variables, and as an essential building
block in dynamic factor models. We show that order conditions for
identifiability are misleading in the singular case and provide a rank
condition for identifiability of the noise parameters. Since the Yule-Walker
equations may have multiple solutions, we analyze the effect of restrictions on
the system parameters on over- and underidentification in detail and provide
easily verifiable conditions.",Identifiability of Structural Singular Vector Autoregressive Models,2019-10-09 19:18:57,"Bernd Funovits, Alexander Braumann","http://dx.doi.org/10.1111/jtsa.12576, http://arxiv.org/abs/1910.04096v2, http://arxiv.org/pdf/1910.04096v2",econ.EM
29014,em,"This paper studies inter-trade durations in the NASDAQ limit order market and
finds that inter-trade durations in ultra-high frequency have two modes. One
mode is on the order of approximately 10^{-4} seconds, and the other is on the
order of 1 second. This phenomenon and other empirical evidence suggest that
there are two regimes associated with the dynamics of inter-trade durations,
and the regime switchings are driven by the changes of high-frequency traders
(HFTs) between providing and taking liquidity. To find how the two modes depend
on information in the limit order book (LOB), we propose a two-state
multifactor regime-switching (MF-RSD) model for inter-trade durations, in which
the transition probability matrices are time-varying and depend on some
lagged LOB factors. The MF-RSD model has good in-sample fit and
superior out-of-sample performance compared with some benchmark duration
models. Our findings of the effects of LOB factors on the inter-trade durations
help to understand more about the high-frequency market microstructure.",A multifactor regime-switching model for inter-trade durations in the limit order market,2019-12-02 16:30:42,"Zhicheng Li, Haipeng Xing, Xinyun Chen","http://arxiv.org/abs/1912.00764v1, http://arxiv.org/pdf/1912.00764v1",econ.EM
29004,em,"This paper proposes averaging estimation methods to improve the finite-sample
efficiency of the instrumental variables quantile regression (IVQR) estimation.
First, I apply Cheng, Liao, and Shi's (2019) averaging GMM framework to the IVQR
model. I propose using the usual quantile regression moments for averaging to
take advantage of cases when endogeneity is not too strong. I also propose
using two-stage least squares slope moments to take advantage of cases when
heterogeneity is not too strong. The empirical optimal weight formula of Cheng
et al. (2019) helps optimize the bias-variance tradeoff, ensuring uniformly
better (asymptotic) risk of the averaging estimator over the standard IVQR
estimator under certain conditions. My implementation involves many
computational considerations and builds on recent developments in the quantile
literature. Second, I propose a bootstrap method that directly averages among
IVQR, quantile regression, and two-stage least squares estimators. More
specifically, I find the optimal weights in the bootstrap world and then apply
the bootstrap-optimal weights to the original sample. The bootstrap method is
simpler to compute and generally performs better in simulations, but it lacks
the formal uniform dominance results of Cheng et al. (2019). Simulation results
demonstrate that in the multiple-regressors/instruments case, both the GMM
averaging and bootstrap estimators have uniformly smaller risk than the IVQR
estimator across data-generating processes (DGPs) with all kinds of
combinations of different endogeneity levels and heterogeneity levels. In DGPs
with a single endogenous regressor and instrument, where averaging estimation
is known to have the least opportunity for improvement, the proposed averaging
estimators outperform the IVQR estimator in some cases but not others.",Averaging estimation for instrumental variables quantile regression,2019-10-09 23:48:58,Xin Liu,"http://arxiv.org/abs/1910.04245v1, http://arxiv.org/pdf/1910.04245v1",econ.EM
29005,em,"This paper proposes an imputation procedure that uses the factors estimated
from a tall block along with the re-rotated loadings estimated from a wide
block to impute missing values in a panel of data. Assuming that a strong
factor structure holds for the full panel of data and its sub-blocks, it is
shown that the common component can be consistently estimated at four different
rates of convergence without requiring regularization or iteration. An
asymptotic analysis of the estimation error is obtained. An application of our
analysis is estimation of counterfactuals when potential outcomes have a factor
structure. We study the estimation of average and individual treatment effects
on the treated and establish a normal distribution theory that can be useful
for hypothesis testing.","Matrix Completion, Counterfactuals, and Factor Analysis of Missing Data",2019-10-15 15:18:35,"Jushan Bai, Serena Ng","http://arxiv.org/abs/1910.06677v5, http://arxiv.org/pdf/1910.06677v5",econ.EM
29006,em,"This paper develops a new standard-error estimator for linear panel data
models. The proposed estimator is robust to heteroskedasticity, serial
correlation, and cross-sectional correlation of unknown forms. The serial
correlation is controlled by the Newey-West method. To control for
cross-sectional correlations, we propose to use the thresholding method,
without assuming the clusters to be known. We establish the consistency of the
proposed estimator. Monte Carlo simulations show the method works well. An
empirical application is considered.",Standard Errors for Panel Data Models with Unknown Clusters,2019-10-16 18:21:36,"Jushan Bai, Sung Hoon Choi, Yuan Liao","http://arxiv.org/abs/1910.07406v2, http://arxiv.org/pdf/1910.07406v2",econ.EM
29007,em,"This article provides a selective review on the recent literature on
econometric models of network formation. The survey starts with a brief
exposition on basic concepts and tools for the statistical description of
networks. I then offer a review of dyadic models, focussing on statistical
models on pairs of nodes and describe several developments of interest to the
econometrics literature. The article also presents a discussion of non-dyadic
models where link formation might be influenced by the presence or absence of
additional links, which themselves are subject to similar influences. This is
related to the statistical literature on conditionally specified models and the
econometrics of game theoretical models. I close with a (non-exhaustive)
discussion of potential areas for further development.",Econometric Models of Network Formation,2019-10-17 12:18:59,Aureo de Paula,"http://arxiv.org/abs/1910.07781v2, http://arxiv.org/pdf/1910.07781v2",econ.EM
29008,em,"Long memory in the sense of slowly decaying autocorrelations is a stylized
fact in many time series from economics and finance. The fractionally
integrated process is the workhorse model for the analysis of these time
series. Nevertheless, there is mixed evidence in the literature concerning its
usefulness for forecasting and how forecasting based on it should be
implemented.
  Employing pseudo-out-of-sample forecasting on inflation and realized
volatility time series and simulations we show that methods based on fractional
integration clearly are superior to alternative methods not accounting for long
memory, including autoregressions and exponential smoothing. Our proposal of
choosing a fixed fractional integration parameter of $d=0.5$ a priori yields
the best results overall, capturing long memory behavior, but overcoming the
deficiencies of methods using an estimated parameter.
  Regarding the implementation of forecasting methods based on fractional
integration, we use simulations to compare local and global semiparametric and
parametric estimators of the long memory parameter from the Whittle family and
provide asymptotic theory backed up by simulations to compare different mean
estimators. Both of these analyses lead to new results, which are also of
interest outside the realm of forecasting.",Forecasting under Long Memory and Nonstationarity,2019-10-18 02:57:34,"Uwe Hassler, Marc-Oliver Pohle","http://dx.doi.org/10.1093/jjfinec/nbab017, http://arxiv.org/abs/1910.08202v1, http://arxiv.org/pdf/1910.08202v1",econ.EM
29009,em,"This paper develops the inferential theory for latent factor models estimated
from large dimensional panel data with missing observations. We propose an
easy-to-use all-purpose estimator for a latent factor model by applying
principal component analysis to an adjusted covariance matrix estimated from
partially observed panel data. We derive the asymptotic distribution for the
estimated factors, loadings and the imputed values under an approximate factor
model and general missing patterns. The key application is to estimate
counterfactual outcomes in causal inference from panel data. The unobserved
control group is modeled as missing values, which are inferred from the latent
factor model. The inferential theory for the imputed values allows us to test
for individual treatment effects at any time under general adoption patterns
where the units can be affected by unobserved factors.",Large Dimensional Latent Factor Modeling with Missing Observations and Applications to Causal Inference,2019-10-18 08:38:04,"Ruoxuan Xiong, Markus Pelger","http://arxiv.org/abs/1910.08273v6, http://arxiv.org/pdf/1910.08273v6",econ.EM
29017,em,"We discuss the issue of estimating large-scale vector autoregressive (VAR)
models with stochastic volatility in real-time situations where data are
sampled at different frequencies. In the case of a large VAR with stochastic
volatility, the mixed-frequency data warrant an additional step in the already
computationally challenging Markov Chain Monte Carlo algorithm used to sample
from the posterior distribution of the parameters. We suggest the use of a
factor stochastic volatility model to capture a time-varying error covariance
structure. Because the factor stochastic volatility model renders the equations
of the VAR conditionally independent, settling for this particular stochastic
volatility model comes with major computational benefits. First, we are able to
improve upon the mixed-frequency simulation smoothing step by leveraging a
univariate and adaptive filtering algorithm. Second, the regression parameters
can be sampled equation-by-equation in parallel. These computational features
of the model alleviate the computational burden and make it possible to move
the mixed-frequency VAR to the high-dimensional regime. We illustrate the model
by an application to US data using our mixed-frequency VAR with 20, 34 and 119
variables.",Estimating Large Mixed-Frequency Bayesian VAR Models,2019-12-04 22:59:03,"Sebastian Ankargren, Paulina Jonéus","http://arxiv.org/abs/1912.02231v1, http://arxiv.org/pdf/1912.02231v1",econ.EM
29018,em,"We introduce a synthetic control methodology to study policies with staggered
adoption. Many policies, such as the board gender quota, are replicated by
other policy setters at different time frames. Our method estimates the dynamic
average treatment effects on the treated using variation introduced by the
staggered adoption of policies. Our method gives asymptotically unbiased
estimators of many interesting quantities and delivers asymptotically valid
inference. By using the proposed method and national labor data in Europe, we
find evidence that quota regulation on board diversity leads to a decrease in
part-time employment, and an increase in full-time employment for female
professionals.",Synthetic Control Inference for Staggered Adoption: Estimating the Dynamic Effects of Board Gender Diversity Policies,2019-12-13 07:29:19,"Jianfei Cao, Shirley Lu","http://arxiv.org/abs/1912.06320v1, http://arxiv.org/pdf/1912.06320v1",econ.EM
29019,em,"Haavelmo (1944) proposed a probabilistic structure for econometric modeling,
aiming to make econometrics useful for decision making. His fundamental
contribution has become thoroughly embedded in subsequent econometric research,
yet it could not answer all the deep issues that the author raised. Notably,
Haavelmo struggled to formalize the implications for decision making of the
fact that models can at most approximate actuality. In the same period, Wald
(1939, 1945) initiated his own seminal development of statistical decision
theory. Haavelmo favorably cited Wald, but econometrics did not embrace
statistical decision theory. Instead, it focused on study of identification,
estimation, and statistical inference. This paper proposes statistical decision
theory as a framework for evaluation of the performance of models in decision
making. I particularly consider the common practice of as-if optimization:
specification of a model, point estimation of its parameters, and use of the
point estimate to make a decision that would be optimal if the estimate were
accurate. A central theme is that one should evaluate as-if optimization or any
other model-based decision rule by its performance across the state space,
listing all states of nature that one believes feasible, not across the model
space. I apply the theme to prediction and treatment choice. Statistical
decision theory is conceptually simple, but application is often challenging.
Advancement of computation is the primary task to continue building the
foundations sketched by Haavelmo and Wald.",Econometrics For Decision Making: Building Foundations Sketched By Haavelmo And Wald,2019-12-17 21:47:30,Charles F. Manski,"http://arxiv.org/abs/1912.08726v4, http://arxiv.org/pdf/1912.08726v4",econ.EM
29020,em,"We analyze different types of simulations that applied researchers may use to
assess their inference methods. We show that different types of simulations
vary in many dimensions when considered as inference assessments. Moreover, we
show that natural ways of running simulations may lead to misleading
conclusions, and we propose alternatives. We then provide evidence that even
some simple assessments can detect problems in many different settings.
Alternative assessments that potentially better approximate the true data
generating process may detect problems that simpler assessments would not
detect. However, they are not uniformly dominant in this dimension, and may
imply some costs.",Assessing Inference Methods,2019-12-18 21:09:57,Bruno Ferman,"http://arxiv.org/abs/1912.08772v13, http://arxiv.org/pdf/1912.08772v13",econ.EM
29021,em,"Learning about cause and effect is arguably the main goal in applied
econometrics. In practice, the validity of these causal inferences is
contingent on a number of critical assumptions regarding the type of data that
has been collected and the substantive knowledge that is available. For
instance, unobserved confounding factors threaten the internal validity of
estimates, data availability is often limited to non-random, selection-biased
samples, causal effects need to be learned from surrogate experiments with
imperfect compliance, and causal knowledge has to be extrapolated across
structurally heterogeneous populations. A powerful causal inference framework
is required to tackle these challenges, which plague most data analysis to
varying degrees. Building on the structural approach to causality introduced by
Haavelmo (1943) and the graph-theoretic framework proposed by Pearl (1995), the
artificial intelligence (AI) literature has developed a wide array of
techniques for causal learning that allow researchers to leverage information from various
imperfect, heterogeneous, and biased data sources (Bareinboim and Pearl, 2016).
In this paper, we discuss recent advances in this literature that have the
potential to contribute to econometric methodology along three dimensions.
First, they provide a unified and comprehensive framework for causal inference,
in which the aforementioned problems can be addressed in full generality.
Second, due to their origin in AI, they come together with sound, efficient,
and complete algorithmic criteria for automation of the corresponding
identification task. And third, because of the nonparametric description of
structural models that graph-theoretic approaches build on, they combine the
strengths of both structural econometrics and the potential outcomes
framework, and thus offer an effective middle ground between these two
literature streams.",Causal Inference and Data Fusion in Econometrics,2019-12-19 13:24:04,"Paul Hünermund, Elias Bareinboim","http://arxiv.org/abs/1912.09104v4, http://arxiv.org/pdf/1912.09104v4",econ.EM
29022,em,"We study the use of Temporal-Difference learning for estimating the
structural parameters in dynamic discrete choice models. Our algorithms are
based on the conditional choice probability approach but use functional
approximations to estimate various terms in the pseudo-likelihood function. We
suggest two approaches: The first - linear semi-gradient - provides
approximations to the recursive terms using basis functions. The second -
Approximate Value Iteration - builds a sequence of approximations to the
recursive terms by solving non-parametric estimation problems. Our approaches
are fast and naturally allow for continuous and/or high-dimensional state
spaces. Furthermore, they do not require specification of transition densities.
In dynamic games, they avoid integrating over other players' actions, further
heightening the computational advantage. Our proposals can be paired with
popular existing methods such as pseudo-maximum-likelihood, and we propose
locally robust corrections for the latter to achieve parametric rates of
convergence. Monte Carlo simulations confirm the properties of our algorithms
in practice.",Temporal-Difference estimation of dynamic discrete choice models,2019-12-19 22:21:49,"Karun Adusumilli, Dita Eckardt","http://arxiv.org/abs/1912.09509v2, http://arxiv.org/pdf/1912.09509v2",econ.EM
29034,em,"Researchers increasingly wish to estimate time-varying parameter (TVP)
regressions which involve a large number of explanatory variables. Including
prior information to mitigate over-parameterization concerns has led to many
using Bayesian methods. However, Bayesian Markov Chain Monte Carlo (MCMC)
methods can be very computationally demanding. In this paper, we develop
computationally efficient Bayesian methods for estimating TVP models using an
integrated rotated Gaussian approximation (IRGA). This exploits the fact that
whereas constant coefficients on regressors are often important, most of the
TVPs are often unimportant. Since Gaussian distributions are invariant to
rotations we can split the posterior into two parts: one involving the
constant coefficients, the other involving the TVPs. Approximate methods are
used on the latter and, conditional on these, the former are estimated with
precision using MCMC methods. In empirical exercises involving artificial data
and a large macroeconomic data set, we show the accuracy and computational
benefits of IRGA methods.",Bayesian Inference in High-Dimensional Time-varying Parameter Models using Integrated Rotated Gaussian Approximations,2020-02-24 17:07:50,"Florian Huber, Gary Koop, Michael Pfarrhofer","http://arxiv.org/abs/2002.10274v1, http://arxiv.org/pdf/2002.10274v1",econ.EM
29023,em,"Dynamic treatment regimes are treatment allocations tailored to heterogeneous
individuals. The optimal dynamic treatment regime is a regime that maximizes
counterfactual welfare. We introduce a framework in which we can partially
learn the optimal dynamic regime from observational data, relaxing the
sequential randomization assumption commonly employed in the literature but
instead using (binary) instrumental variables. We propose the notion of sharp
partial ordering of counterfactual welfares with respect to dynamic regimes and
establish a mapping from data to partial ordering via a set of linear programs.
We then characterize the identified set of the optimal regime as the set of
maximal elements associated with the partial ordering. We relate the notion of
partial ordering with a more conventional notion of partial identification
using topological sorts. In practice, topological sorts can serve as a
policy benchmark for a policymaker. We apply our method to understand returns
to schooling and post-school training as a sequence of treatments by combining
data from multiple sources. The framework of this paper can be used beyond the
current context, e.g., in establishing rankings of multiple treatments or
policies across different counterfactual scenarios.",Optimal Dynamic Treatment Regimes and Partial Welfare Ordering,2019-12-20 21:43:01,Sukjin Han,"http://arxiv.org/abs/1912.10014v4, http://arxiv.org/pdf/1912.10014v4",econ.EM
29024,em,"This paper presents a novel deep learning-based travel behaviour choice
model. Our proposed Residual Logit (ResLogit) model formulation seamlessly
integrates a Deep Neural Network (DNN) architecture into a multinomial logit
model. Recently, DNN models such as the Multi-layer Perceptron (MLP) and the
Recurrent Neural Network (RNN) have shown remarkable success in modelling
complex and noisy behavioural data. However, econometric studies have argued
that machine learning techniques are a `black-box' and difficult to interpret
for use in choice analysis. We develop a data-driven choice model that
extends the systematic utility function to incorporate non-linear cross-effects
using a series of residual layers and using skipped connections to handle model
identifiability in estimating a large number of parameters. The model structure
accounts for cross-effects and choice heterogeneity arising from substitution,
interactions with non-chosen alternatives and other effects in a non-linear
manner. We describe the formulation, model estimation, interpretability and
examine the relative performance and econometric implications of our proposed
model. We present an illustrative example of the model on a classic red/blue bus
choice scenario example. For a real-world application, we use a travel mode
choice dataset to analyze the model characteristics compared to traditional
neural networks and Logit formulations. Our findings show that our ResLogit
approach significantly outperforms MLP models while providing similar
interpretability as a Multinomial Logit model.",ResLogit: A residual neural network logit model for data-driven choice modelling,2019-12-20 22:02:58,"Melvin Wong, Bilal Farooq","http://arxiv.org/abs/1912.10058v2, http://arxiv.org/pdf/1912.10058v2",econ.EM
29025,em,"We propose a new sequential Efficient Pseudo-Likelihood (k-EPL) estimator for
dynamic discrete choice games of incomplete information. k-EPL considers the
joint behavior of multiple players simultaneously, as opposed to individual
responses to other agents' equilibrium play. This, in addition to reframing the
problem from conditional choice probability (CCP) space to value function
space, yields a computationally tractable, stable, and efficient estimator. We
show that each iteration in the k-EPL sequence is consistent and asymptotically
efficient, so the first-order asymptotic properties do not vary across
iterations. Furthermore, we show the sequence achieves higher-order equivalence
to the finite-sample maximum likelihood estimator with iteration and that the
sequence of estimators converges almost surely to the maximum likelihood
estimator at a nearly-superlinear rate when the data are generated by any
regular Markov perfect equilibrium, including equilibria that lead to
inconsistency of other sequential estimators. When utility is linear in
parameters, k-EPL iterations are computationally simple, only requiring that
the researcher solve linear systems of equations to generate pseudo-regressors
which are used in a static logit/probit regression. Monte Carlo simulations
demonstrate the theoretical results and show k-EPL's good performance in finite
samples in both small- and large-scale games, even when the game admits
spurious equilibria in addition to one that generated the data. We apply the
estimator to study the role of competition in the U.S. wholesale club industry.",Efficient and Convergent Sequential Pseudo-Likelihood Estimation of Dynamic Discrete Games,2019-12-22 20:34:23,"Adam Dearing, Jason R. Blevins","http://arxiv.org/abs/1912.10488v5, http://arxiv.org/pdf/1912.10488v5",econ.EM
29026,em,"We propose an optimal-transport-based matching method to nonparametrically
estimate linear models with independent latent variables. The method consists
in generating pseudo-observations from the latent variables, so that the
Euclidean distance between the model's predictions and their matched
counterparts in the data is minimized. We show that our nonparametric estimator
is consistent, and we document that it performs well in simulated data. We
apply this method to study the cyclicality of permanent and transitory income
shocks in the Panel Study of Income Dynamics. We find that the dispersion of
income shocks is approximately acyclical, whereas the skewness of permanent
shocks is procyclical. By comparison, we find that the dispersion and skewness
of shocks to hourly wages vary little with the business cycle.",Recovering Latent Variables by Matching,2019-12-30 23:49:27,"Manuel Arellano, Stephane Bonhomme","http://arxiv.org/abs/1912.13081v1, http://arxiv.org/pdf/1912.13081v1",econ.EM
29027,em,"Markov switching models are a popular family of models that introduces
time-variation in the parameters in the form of their state- or regime-specific
values. Importantly, this time-variation is governed by a discrete-valued
latent stochastic process with limited memory. More specifically, the current
value of the state indicator is determined only by the value of the state
indicator from the previous period, thus the Markov property, and the
transition matrix. The latter characterizes the properties of the Markov
process by determining with what probability each of the states can be visited
next period, given the state in the current period. This setup gives rise to the
two main advantages of Markov switching models: the estimation of the probability
of state occurrences in each of the sample periods using filtering and smoothing
methods, and the estimation of the state-specific parameters. These two features
open the possibility for improved
interpretations of the parameters associated with specific regimes combined
with the corresponding regime probabilities, as well as for improved
forecasting performance based on persistent regimes and parameters
characterizing them.",Markov Switching,2020-02-10 11:29:23,"Yong Song, Tomasz Woźniak","http://dx.doi.org/10.1093/acrefore/9780190625979.013.174, http://arxiv.org/abs/2002.03598v1, http://arxiv.org/pdf/2002.03598v1",econ.EM
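The filtering step described in the Markov-switching record above can be illustrated with a minimal Hamilton-filter sketch. This is not the authors' code; it assumes a two-state Gaussian switching-mean model with known parameters, and all names (`hamilton_filter`, `mu`, `sigma`, `P`) are illustrative.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu, sigma, P):
    """Filtered state probabilities Pr(s_t = j | y_1, ..., y_t) for a
    two-state Gaussian Markov-switching mean model.
    P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
    T = len(y)
    filt = np.zeros((T, 2))
    # initialize at the stationary distribution of the two-state chain
    prev = np.array([1 - P[1, 1], 1 - P[0, 0]]) / (2 - P[0, 0] - P[1, 1])
    loglik = 0.0
    for t in range(T):
        pred = prev @ P                              # one-step-ahead state probabilities
        dens = norm.pdf(y[t], loc=mu, scale=sigma)   # state-specific likelihoods
        joint = pred * dens
        loglik += np.log(joint.sum())
        filt[t] = joint / joint.sum()                # Bayes update
        prev = filt[t]
    return filt, loglik

# toy usage with two regimes of different means (the state path here is i.i.d.
# rather than a true Markov chain, purely to generate data for the filter)
rng = np.random.default_rng(0)
s = (rng.random(200) < 0.5).astype(int)
y = np.where(s == 0, rng.normal(-1.0, 1.0, 200), rng.normal(2.0, 1.0, 200))
filt, ll = hamilton_filter(y, mu=np.array([-1.0, 2.0]),
                           sigma=np.array([1.0, 1.0]),
                           P=np.array([[0.95, 0.05], [0.10, 0.90]]))
```

Smoothed probabilities and parameter estimation (for example by EM or MCMC) build on this same recursion.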
29028,em,"Given the extreme dependence of agriculture on weather conditions, this paper
analyses the effect of climatic variations on this economic sector, by
considering both a huge dataset and a flexible spatio-temporal model
specification. In particular, we study the response of N-fertilizer application
to abnormal weather conditions, while accounting for other relevant control
variables. The dataset consists of gridded data spanning over 21 years
(1993-2013), while the methodological strategy makes use of a spatial dynamic
panel data (SDPD) model that accounts for both space and time fixed effects,
besides dealing with both space and time dependence. Time-invariant short- and
long-term effects, as well as time-varying marginal effects, are also properly
defined, revealing interesting results on the impact of both GDP and weather
conditions on fertilizer utilization. The analysis considers four
macro-regions -- Europe, South America, South-East Asia and Africa -- to allow
for comparisons among different socio-economic societies. In addition to
finding both spatial (in the form of knowledge spillover effects) and temporal
dependences as well as a good support for the existence of an environmental
Kuznets curve for fertilizer application, the paper shows peculiar responses of
N-fertilization to deviations from normal weather conditions of moisture for
each selected region, calling for ad hoc policy interventions.",The Effect of Weather Conditions on Fertilizer Applications: A Spatial Dynamic Panel Data Analysis,2020-02-10 19:31:15,"Anna Gloria Billè, Marco Rogna","http://arxiv.org/abs/2002.03922v2, http://arxiv.org/pdf/2002.03922v2",econ.EM
29029,em,"This article deals with parameterisation, identifiability, and maximum
likelihood (ML) estimation of possibly non-invertible structural vector
autoregressive moving average (SVARMA) models driven by independent and
non-Gaussian shocks. In contrast to previous literature, the novel
representation of the MA polynomial matrix using the Wiener-Hopf factorisation
(WHF) focuses on the multivariate nature of the model, generates insights into
its structure, and uses this structure for devising optimisation algorithms. In
particular, it allows one to parameterise the location of determinantal zeros
inside and outside the unit circle, and it allows for MA zeros at zero, which
can be interpreted as informational delays. This is highly relevant for
data-driven evaluation of Dynamic Stochastic General Equilibrium (DSGE) models.
Typically imposed identifying restrictions on the shock transmission matrix as
well as on the determinantal root location are made testable. Furthermore, we
provide low level conditions for asymptotic normality of the ML estimator and
analytic expressions for the score and the information matrix. As application,
we estimate the Blanchard and Quah model and show that our method provides
further insights regarding non-invertibility using a standard macroeconometric
model. These and further analyses are implemented in a well documented
R-package.",Identifiability and Estimation of Possibly Non-Invertible SVARMA Models: A New Parametrisation,2020-02-11 15:35:14,Bernd Funovits,"http://arxiv.org/abs/2002.04346v2, http://arxiv.org/pdf/2002.04346v2",econ.EM
29030,em,"This paper analyses the number of free parameters and solutions of the
structural difference equation obtained from a linear multivariate rational
expectations model. First, it is shown that the number of free parameters
depends on the structure of the zeros at zero of a certain matrix polynomial of
the structural difference equation and the number of inputs of the rational
expectations model. Second, the implications of requiring that some components
of the endogenous variables be predetermined are analysed. Third, a condition
for existence and uniqueness of a causal stationary solution is given.",The Dimension of the Set of Causal Solutions of Linear Multivariate Rational Expectations Models,2020-02-11 16:33:04,Bernd Funovits,"http://arxiv.org/abs/2002.04369v1, http://arxiv.org/pdf/2002.04369v1",econ.EM
29031,em,"We construct long-term prediction intervals for time-aggregated future values
of univariate economic time series. We propose computational adjustments of the
existing methods to improve coverage probability under a small sample
constraint. A pseudo-out-of-sample evaluation shows that our methods perform at
least as well as selected alternative methods based on model-implied Bayesian
approaches and bootstrapping. Our most successful method yields prediction
intervals for eight macroeconomic indicators over a horizon spanning several
decades.",Long-term prediction intervals of economic time series,2020-02-13 11:11:18,"Marek Chudy, Sayar Karmakar, Wei Biao Wu","http://arxiv.org/abs/2002.05384v1, http://arxiv.org/pdf/2002.05384v1",econ.EM
29032,em,"Conjugate priors allow for fast inference in large dimensional vector
autoregressive (VAR) models but, at the same time, introduce the restriction
that each equation features the same set of explanatory variables. This paper
proposes a straightforward means of post-processing posterior estimates of a
conjugate Bayesian VAR to effectively perform equation-specific covariate
selection. Compared to existing techniques using shrinkage alone, our approach
combines shrinkage and sparsity in both the VAR coefficients and the error
variance-covariance matrices, greatly reducing estimation uncertainty in large
dimensions while maintaining computational tractability. We illustrate our
approach by means of two applications. The first application uses synthetic
data to investigate the properties of the model across different
data-generating processes, the second application analyzes the predictive gains
from sparsification in a forecasting exercise for US data.",Combining Shrinkage and Sparsity in Conjugate Vector Autoregressive Models,2020-02-20 17:45:38,"Niko Hauzenberger, Florian Huber, Luca Onorante","http://arxiv.org/abs/2002.08760v2, http://arxiv.org/pdf/2002.08760v2",econ.EM
29033,em,"This paper considers estimation and inference about tail features when the
observations beyond some threshold are censored. We first show that ignoring
such tail censoring could lead to substantial bias and size distortion, even if
the censoring probability is tiny. Second, we propose a new maximum likelihood
estimator (MLE) based on the Pareto tail approximation and derive its
asymptotic properties. Third, we provide a small sample modification to the MLE
by resorting to Extreme Value theory. The MLE with this modification delivers
excellent small sample performance, as shown by Monte Carlo simulations. We
illustrate its empirical relevance by estimating (i) the tail index and the
extreme quantiles of the US individual earnings with the Current Population
Survey dataset and (ii) the tail index of the distribution of macroeconomic
disasters and the coefficient of risk aversion using the dataset collected by
Barro and Ursúa (2008). Our new empirical findings are substantially
different from the existing literature.",Estimation and Inference about Tail Features with Tail Censored Data,2020-02-23 23:43:24,"Yulong Wang, Zhijie Xiao","http://arxiv.org/abs/2002.09982v1, http://arxiv.org/pdf/2002.09982v1",econ.EM
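To make the Pareto-tail idea in the preceding record concrete, the following is a minimal sketch of a censored Pareto likelihood: observations above a lower threshold `u` are modelled as Pareto, and values at or beyond the censoring point `c` contribute only through the survival probability. It illustrates the general approach, not the authors' estimator or their small-sample modification; all names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pareto_censored_tail_index(x, u, c):
    """MLE of the tail index alpha when exceedances over u follow a Pareto law
    f(x) = alpha * u**alpha / x**(alpha + 1) and observations >= c are censored."""
    x = np.asarray(x, dtype=float)
    uncens = x[(x >= u) & (x < c)]
    n_cens = np.sum(x >= c)

    def negloglik(alpha):
        ll_unc = np.sum(np.log(alpha) + alpha * np.log(u) - (alpha + 1) * np.log(uncens))
        ll_cen = n_cens * alpha * np.log(u / c)      # log Pr(X >= c) = alpha * log(u / c)
        return -(ll_unc + ll_cen)

    res = minimize_scalar(negloglik, bounds=(1e-4, 50.0), method="bounded")
    return res.x

# toy usage: Pareto(alpha = 3) data with scale u = 1, censored at c = 5
rng = np.random.default_rng(1)
x = 1.0 * (rng.pareto(3.0, 5000) + 1)
alpha_hat = pareto_censored_tail_index(x, u=1.0, c=5.0)
```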
29290,em,"Discrete Choice Experiments (DCE) have been widely used in health economics,
environmental valuation, and other disciplines. However, there is a lack of
resources disclosing the whole procedure of carrying out a DCE. This document
aims to assist anyone wishing to use the power of DCEs to understand people's
behavior by providing a comprehensive guide to the procedure. This guide
contains all the code needed to design, implement, and analyze a DCE using only
free software.","A step-by-step guide to design, implement, and analyze a discrete choice experiment",2020-09-23 19:13:10,Daniel Pérez-Troncoso,"http://arxiv.org/abs/2009.11235v1, http://arxiv.org/pdf/2009.11235v1",econ.EM
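A standard workhorse for the analysis stage of a DCE is the conditional (McFadden) logit. The sketch below, with illustrative names and simulated data, fits it by maximum likelihood; it is a generic baseline, not the code distributed with the guide.

```python
import numpy as np
from scipy.optimize import minimize

def conditional_logit(X, chosen):
    """Conditional logit for choice-set data.
    X      : (n_sets, n_alts, k) array of alternative attributes
    chosen : (n_sets,) index of the alternative chosen in each set."""
    n_sets, n_alts, k = X.shape

    def negll(beta):
        v = X @ beta                                   # deterministic utilities
        v = v - v.max(axis=1, keepdims=True)           # numerical stability
        logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
        return -logp[np.arange(n_sets), chosen].sum()

    return minimize(negll, np.zeros(k), method="BFGS").x

# toy usage: 300 choice sets, 3 alternatives, 2 attributes, true beta = (1.0, -0.5)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3, 2))
utility = X @ np.array([1.0, -0.5]) + rng.gumbel(size=(300, 3))
beta_hat = conditional_logit(X, utility.argmax(axis=1))
```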
29035,em,"This paper studies the identification, estimation, and hypothesis testing
problem in complete and incomplete economic models with testable assumptions.
Testable assumptions ($A$) give strong and interpretable empirical content to
the models, but they also carry the possibility that some distribution of
observed outcomes may reject these assumptions. A natural way to avoid this is
to find a set of relaxed assumptions ($\tilde{A}$) that cannot be rejected by
any distribution of observed outcomes and under which the identified set of the
parameter of interest is unchanged when the original assumption is not rejected. The main
contribution of this paper is to characterize the properties of such a relaxed
assumption $\tilde{A}$ using a generalized definition of refutability and
confirmability. I also propose a general method to construct such $\tilde{A}$.
A general estimation and inference procedure is proposed and can be applied to
most incomplete economic models. I apply my methodology to the instrument
monotonicity assumption in Local Average Treatment Effect (LATE) estimation and
to the sector selection assumption in a binary outcome Roy model of employment
sector choice. In the LATE application, I use my general method to construct a
set of relaxed assumptions $\tilde{A}$ that can never be rejected, and the
identified set of LATE is the same as imposing $A$ when $A$ is not rejected.
LATE is point identified under my extension $\tilde{A}$. In the binary outcome
Roy model, I use my method of incomplete
models to relax Roy's sector selection assumption and characterize the
identified set of the binary potential outcome as a polyhedron.",Estimating Economic Models with Testable Assumptions: Theory and Applications,2020-02-24 20:58:41,Moyu Liao,"http://arxiv.org/abs/2002.10415v3, http://arxiv.org/pdf/2002.10415v3",econ.EM
29036,em,"We examine the impact of annual hours worked on annual earnings by
decomposing changes in the real annual earnings distribution into composition,
structural and hours effects. We do so via a nonseparable simultaneous model of
hours, wages and earnings. Using the Current Population Survey for the survey
years 1976--2019, we find that changes in the female distribution of annual
hours of work are important in explaining movements in inequality in female
annual earnings. This captures the substantial changes in their employment
behavior over this period. Movements in the male hours distribution only affect
the lower part of their earnings distribution and reflect the sensitivity of
these workers' annual hours of work to cyclical factors.",Hours Worked and the U.S. Distribution of Real Annual Earnings 1976-2019,2020-02-26 01:55:07,"Iván Fernández-Val, Franco Peracchi, Aico van Vuuren, Francis Vella","http://arxiv.org/abs/2002.11211v3, http://arxiv.org/pdf/2002.11211v3",econ.EM
29037,em,"This paper combines causal mediation analysis with double machine learning to
control for observed confounders in a data-driven way under a
selection-on-observables assumption in a high-dimensional setting. We consider
the average indirect effect of a binary treatment operating through an
intermediate variable (or mediator) on the causal path between the treatment
and the outcome, as well as the unmediated direct effect. Estimation is based
on efficient score functions, which possess a multiple robustness property
w.r.t. misspecifications of the outcome, mediator, and treatment models. This
property is key for selecting these models by double machine learning, which is
combined with data splitting to prevent overfitting in the estimation of the
effects of interest. We demonstrate that the direct and indirect effect
estimators are asymptotically normal and root-n consistent under specific
regularity conditions and investigate the finite sample properties of the
suggested methods in a simulation study when considering lasso as machine
learner. We also provide an empirical application to the U.S. National
Longitudinal Survey of Youth, assessing the indirect effect of health insurance
coverage on general health operating via routine checkups as mediator, as well
as the direct effect. We find a moderate short term effect of health insurance
coverage on general health which is, however, not mediated by routine checkups.",Causal mediation analysis with double machine learning,2020-02-28 16:39:49,"Helmut Farbmacher, Martin Huber, Lukáš Lafférs, Henrika Langen, Martin Spindler","http://arxiv.org/abs/2002.12710v6, http://arxiv.org/pdf/2002.12710v6",econ.EM
29038,em,"Alternative data sets are widely used for macroeconomic nowcasting together
with machine learning-based tools. The latter are often applied without a
complete picture of their theoretical nowcasting properties. Against this
background, this paper proposes a theoretically grounded nowcasting methodology
that allows researchers to incorporate alternative Google Search Data (GSD)
among the predictors and that combines targeted preselection, Ridge
regularization, and Generalized Cross Validation. Breaking with most existing
literature, which focuses on asymptotic in-sample theoretical properties, we
establish the theoretical out-of-sample properties of our methodology and
support them by Monte-Carlo simulations. We apply our methodology to GSD to
nowcast the GDP growth rate of several countries during various economic periods.
Our empirical findings support the idea that GSD tend to increase nowcasting
accuracy, even after controlling for official variables, but that the gain
differs between periods of recessions and of macroeconomic stability.",When are Google data useful to nowcast GDP? An approach via pre-selection and shrinkage,2020-07-01 09:58:00,"Laurent Ferrara, Anna Simoni","http://dx.doi.org/10.1080/07350015.2022.2116025, http://arxiv.org/abs/2007.00273v3, http://arxiv.org/pdf/2007.00273v3",econ.EM
29039,em,"In this paper, we estimate and leverage latent constant group structure to
generate point, set, and density forecasts for short dynamic panel data. We
implement a nonparametric Bayesian approach to simultaneously identify
coefficients and group membership in the random effects which are heterogeneous
across groups but fixed within a group. This method allows us to flexibly
incorporate subjective prior knowledge on the group structure that potentially
improves the predictive accuracy. In Monte Carlo experiments, we demonstrate
that our Bayesian grouped random effects (BGRE) estimators produce accurate
estimates and score predictive gains over standard panel data estimators. With
a data-driven group structure, the BGRE estimators exhibit comparable accuracy
of clustering with the Kmeans algorithm and outperform a two-step Bayesian
grouped estimator whose group structure relies on Kmeans. In the empirical
analysis, we apply our method to forecast the investment rate across a broad
range of firms and illustrate that the estimated latent group structure
improves forecasts relative to standard panel data estimators.",Forecasting with Bayesian Grouped Random Effects in Panel Data,2020-07-05 22:48:27,Boyuan Zhang,"http://arxiv.org/abs/2007.02435v8, http://arxiv.org/pdf/2007.02435v8",econ.EM
29170,em,"We develop a Stata command xthenreg to implement the first-differenced GMM
estimation of the dynamic panel threshold model, which Seo and Shin (2016,
Journal of Econometrics 195: 169-186) have proposed. Furthermore, we derive the
asymptotic variance formula for a kink constrained GMM estimator of the dynamic
threshold model and include an estimation algorithm. We also propose a fast
bootstrap algorithm to implement the bootstrap for the linearity test. The use
of the command is illustrated through a Monte Carlo simulation and an economic
application.",Estimation of Dynamic Panel Threshold Model using Stata,2019-02-27 06:19:33,"Myung Hwan Seo, Sueyoul Kim, Young-Joo Kim","http://dx.doi.org/10.1177/1536867X19874243, http://arxiv.org/abs/1902.10318v1, http://arxiv.org/pdf/1902.10318v1",econ.EM
29040,em,"This paper presents a novel estimator of orthogonal GARCH models, which
combines (eigenvalue and -vector) targeting estimation with stepwise
(univariate) estimation. We denote this the spectral targeting estimator. This
two-step estimator is consistent under finite second order moments, while
asymptotic normality holds under finite fourth order moments. The estimator is
especially well suited for modelling larger portfolios: we compare the
empirical performance of the spectral targeting estimator to that of the quasi
maximum likelihood estimator for five portfolios of 25 assets. The spectral
targeting estimator dominates in terms of computational complexity, being up to
57 times faster in estimation, while both estimators produce similar
out-of-sample forecasts, indicating that the spectral targeting estimator is
well suited for high-dimensional empirical applications.",Spectral Targeting Estimation of $λ$-GARCH models,2020-07-06 11:53:59,Simon Hetland,"http://arxiv.org/abs/2007.02588v1, http://arxiv.org/pdf/2007.02588v1",econ.EM
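The two-step logic of the spectral targeting estimator (eigendecompose the unconditional covariance, then fit univariate GARCH(1,1) models to the rotated series) can be sketched as follows. This is a simplified illustration under Gaussian QMLE, not the paper's exact estimator; `garch11_qmle` and `spectral_targeting_garch` are assumed names.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_qmle(x):
    """Gaussian QMLE of GARCH(1,1): sigma2_t = w + a * x_{t-1}**2 + b * sigma2_{t-1}."""
    def negll(theta):
        w, a, b = theta
        if w <= 0 or a < 0 or b < 0 or a + b >= 1:
            return 1e10                               # crude positivity/stationarity penalty
        s2 = np.empty_like(x)
        s2[0] = x.var()
        for t in range(1, len(x)):
            s2[t] = w + a * x[t - 1] ** 2 + b * s2[t - 1]
        return 0.5 * np.sum(np.log(s2) + x ** 2 / s2)
    return minimize(negll, np.array([0.05 * x.var(), 0.05, 0.90]),
                    method="Nelder-Mead").x

def spectral_targeting_garch(R):
    """Step 1: target eigenvalues/eigenvectors of the sample covariance of returns R.
    Step 2: univariate GARCH(1,1) QMLE on each rotated (orthogonalised) series."""
    R = R - R.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(R, rowvar=False))
    PC = R @ eigvec
    params = np.array([garch11_qmle(PC[:, k]) for k in range(R.shape[1])])
    return eigval, eigvec, params
```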
29041,em,"We study the effects of counterfactual teacher-to-classroom assignments on
average student achievement in elementary and middle schools in the US. We use
the Measures of Effective Teaching (MET) experiment to semiparametrically
identify the average reallocation effects (AREs) of such assignments. Our
findings suggest that changes in within-district teacher assignments could have
appreciable effects on student achievement. Unlike policies which require
hiring additional teachers (e.g., class-size reduction measures), or those
aimed at changing the stock of teachers (e.g., VAM-guided teacher tenure
policies), alternative teacher-to-classroom assignments are resource neutral;
they raise student achievement through a more efficient deployment of existing
teachers.",Teacher-to-classroom assignment and student achievement,2020-07-06 14:20:59,"Bryan S. Graham, Geert Ridder, Petra Thiemann, Gema Zamarro","http://arxiv.org/abs/2007.02653v2, http://arxiv.org/pdf/2007.02653v2",econ.EM
29042,em,"This paper studies optimal decision rules, including estimators and tests,
for weakly identified GMM models. We derive the limit experiment for weakly
identified GMM, and propose a theoretically-motivated class of priors which
give rise to quasi-Bayes decision rules as a limiting case. Together with
results in the previous literature, this establishes desirable properties for
the quasi-Bayes approach regardless of model identification status, and we
recommend quasi-Bayes for settings where identification is a concern. We
further propose weighted average power-optimal identification-robust
frequentist tests and confidence sets, and prove a Bernstein-von Mises-type
result for the quasi-Bayes posterior under weak identification.",Optimal Decision Rules for Weak GMM,2020-07-08 14:48:10,"Isaiah Andrews, Anna Mikusheva","http://arxiv.org/abs/2007.04050v7, http://arxiv.org/pdf/2007.04050v7",econ.EM
29043,em,"In this paper, we test the contribution of foreign management on firms'
competitiveness. We use a novel dataset on the careers of 165,084 managers
employed by 13,106 companies in the United Kingdom in the period 2009-2017. We
find that domestic manufacturing firms become, on average, between 7% and 12%
more productive after hiring the first foreign managers, whereas foreign-owned
firms register no significant improvement. In particular, we find that previous
industry-specific experience is the primary driver of productivity gains in
domestic firms (15.6%), in a way that allows the latter to catch up with
foreign-owned firms. Managers from the European Union are highly valuable, as
they represent about half of the recruits in our data. Our identification
strategy combines matching techniques, difference-in-difference, and
pre-recruitment trends to challenge reverse causality. Results are robust to
placebo tests and to different estimators of Total Factor Productivity.
Finally, we argue that upcoming limits to the mobility of foreign talent after
Brexit can hamper the allocation of productive managerial
resources.",Talents from Abroad. Foreign Managers and Productivity in the United Kingdom,2020-07-08 15:07:13,"Dimitrios Exadaktylos, Massimo Riccaboni, Armando Rungi","http://arxiv.org/abs/2007.04055v1, http://arxiv.org/pdf/2007.04055v1",econ.EM
29044,em,"We study treatment-effect estimation using panel data. The treatment may be
non-binary, non-absorbing, and the outcome may be affected by treatment lags.
We make a parallel-trends assumption, and propose event-study estimators of the
effect of being exposed to a weakly higher treatment dose for $\ell$ periods.
We also propose normalized estimators, which estimate a weighted average of the
effects of the current treatment and its lags. We also analyze commonly-used
two-way-fixed-effects regressions. Unlike our estimators, they can be biased in
the presence of heterogeneous treatment effects. A local-projection version of
those regressions is biased even with homogeneous effects.",Difference-in-Differences Estimators of Intertemporal Treatment Effects,2020-07-08 20:01:22,"Clément de Chaisemartin, Xavier D'Haultfoeuille","http://arxiv.org/abs/2007.04267v12, http://arxiv.org/pdf/2007.04267v12",econ.EM
29045,em,"This paper develops an empirical balancing approach for the estimation of
treatment effects under two-sided noncompliance using a binary conditionally
independent instrumental variable. The method weighs both treatment and outcome
information with inverse probabilities to produce exact finite sample balance
across instrument level groups. It is free of functional form assumptions on
the outcome or the treatment selection step. By tailoring the loss function for
the instrument propensity scores, the resulting treatment effect estimates
exhibit both low bias and a reduced variance in finite samples compared to
conventional inverse probability weighting methods. The estimator is
automatically weight normalized and has similar bias properties compared to
conventional two-stage least squares estimation under constant causal effects
for the compliers. We provide conditions for asymptotic normality and
semiparametric efficiency and demonstrate how to utilize additional information
about the treatment selection step for bias reduction in finite samples. The
method can be easily combined with regularization or other statistical learning
approaches to deal with a high-dimensional number of observed confounding
variables. Monte Carlo simulations suggest that the theoretical advantages
translate well to finite samples. The method is illustrated in an empirical
example.",Efficient Covariate Balancing for the Local Average Treatment Effect,2020-07-08 21:04:46,Phillip Heiler,"http://arxiv.org/abs/2007.04346v1, http://arxiv.org/pdf/2007.04346v1",econ.EM
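As a point of reference for the record above, the conventional inverse-probability-weighted LATE estimator that the paper improves upon can be written in a few lines. The sketch assumes a binary instrument `z`, treatment `d`, outcome `y`, covariates `X`, and a logistic instrument propensity score; it is the benchmark, not the proposed balancing estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_late(y, d, z, X):
    """Hajek-normalised IPW (Wald-type) estimator of the local average treatment effect."""
    pz = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    w1, w0 = z / pz, (1 - z) / (1 - pz)
    w1, w0 = w1 / w1.mean(), w0 / w0.mean()           # normalise weights in each arm
    reduced_form = np.mean(w1 * y) - np.mean(w0 * y)  # effect of the instrument on y
    first_stage = np.mean(w1 * d) - np.mean(w0 * d)   # effect of the instrument on take-up
    return reduced_form / first_stage
```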
29053,em,"This paper considers estimation and inference for heterogeneous
counterfactual effects with high-dimensional data. We propose a novel robust
score for debiased estimation of the unconditional quantile regression (Firpo,
Fortin, and Lemieux, 2009) as a measure of heterogeneous counterfactual
marginal effects. We propose a multiplier bootstrap inference and develop
asymptotic theories to guarantee the size control in large sample. Simulation
studies support our theories. Applying the proposed method to Job Corps survey
data, we find that a policy which counterfactually extends the duration of
exposures to the Job Corps training program will be effective especially for
the targeted subpopulations of lower potential wage earners.",Unconditional Quantile Regression with High Dimensional Data,2020-07-27 19:13:41,"Yuya Sasaki, Takuya Ura, Yichong Zhang","http://arxiv.org/abs/2007.13659v4, http://arxiv.org/pdf/2007.13659v4",econ.EM
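The unconditional quantile regression of Firpo, Fortin, and Lemieux (2009), which the record above takes as its target, regresses the recentered influence function (RIF) of a quantile on covariates. Below is a minimal low-dimensional baseline, not the paper's debiased high-dimensional procedure; names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def rif_quantile_regression(y, X, tau):
    """OLS of the recentered influence function of the tau-quantile on covariates."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]                  # kernel density of y at the quantile
    rif = q + (tau - (y <= q)) / f_q             # recentered influence function
    Xc = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(Xc, rif, rcond=None)
    return beta
```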
29046,em,"This paper analyzes a semiparametric model of network formation in the
presence of unobserved agent-specific heterogeneity. The objective is to
identify and estimate the preference parameters associated with homophily on
observed attributes when the distributions of the unobserved factors are not
parametrically specified. This paper offers two main contributions to the
literature on network formation. First, it establishes a new point
identification result for the vector of parameters that relies on the existence
of a special regressor. The identification proof is constructive and
characterizes a closed-form for the parameter of interest. Second, it
introduces a simple two-step semiparametric estimator for the vector of
parameters with a first-step kernel estimator. The estimator is computationally
tractable and can be applied to both dense and sparse networks. Moreover, I
show that the estimator is consistent and has a limiting normal distribution as
the number of individuals in the network increases. Monte Carlo experiments
demonstrate that the estimator performs well in finite samples and in networks
with different levels of sparsity.",A Semiparametric Network Formation Model with Unobserved Linear Heterogeneity,2020-07-10 17:09:41,Luis E. Candelaria,"http://arxiv.org/abs/2007.05403v2, http://arxiv.org/pdf/2007.05403v2",econ.EM
29047,em,"This paper characterises dynamic linkages arising from shocks with
heterogeneous degrees of persistence. Using frequency domain techniques, we
introduce measures that identify smoothly varying links of a transitory and
persistent nature. Our approach allows us to test for statistical differences
in such dynamic links. We document substantial differences in transitory and
persistent linkages among US financial industry volatilities, argue that they
track heterogeneously persistent sources of systemic risk, and thus may serve
as a useful tool for market participants.",Persistence in Financial Connectedness and Systemic Risk,2020-07-14 18:45:33,"Jozef Barunik, Michael Ellington","http://arxiv.org/abs/2007.07842v4, http://arxiv.org/pdf/2007.07842v4",econ.EM
29048,em,"This paper studies the latent index representation of the conditional LATE
model, making explicit the role of covariates in treatment selection. We find
that if the directions of the monotonicity condition are the same across all
values of the conditioning covariate, which is often assumed in the literature,
then the treatment choice equation has to satisfy a separability condition
between the instrument and the covariate. This global representation result
establishes testable restrictions imposed on the way covariates enter the
treatment choice equation. We later extend the representation theorem to
incorporate multiple ordered levels of treatment.",Global Representation of the Conditional LATE Model: A Separability Result,2020-07-16 07:30:59,"Yu-Chang Chen, Haitian Xie","http://dx.doi.org/10.1111/obes.12476, http://arxiv.org/abs/2007.08106v3, http://arxiv.org/pdf/2007.08106v3",econ.EM
29049,em,"I devise a novel approach to evaluate the effectiveness of fiscal policy in
the short run with multi-category treatment effects and inverse probability
weighting based on the potential outcome framework. This study's main
contribution to the literature is the proposed modified conditional
independence assumption to improve the evaluation of fiscal policy. Using this
approach, I analyze the effects of government spending on the US economy from
1992 to 2019. The empirical study indicates that a large fiscal contraction
generates a negative effect on the economic growth rate, while small and large
fiscal expansions generate a positive effect. However, these effects are not
significant in the traditional multiple regression approach. I conclude that
this new approach significantly improves the evaluation of fiscal policy.",Government spending and multi-category treatment effects: The modified conditional independence assumption,2020-07-16 18:16:35,Koiti Yano,"http://arxiv.org/abs/2007.08396v3, http://arxiv.org/pdf/2007.08396v3",econ.EM
29050,em,"We propose using a permutation test to detect discontinuities in an
underlying economic model at a known cutoff point. Relative to the existing
literature, we show that this test is well suited for event studies based on
time-series data. The test statistic measures the distance between the
empirical distribution functions of observed data in two local subsamples on
the two sides of the cutoff. Critical values are computed via a standard
permutation algorithm. Under a high-level condition that the observed data can
be coupled by a collection of conditionally independent variables, we establish
the asymptotic validity of the permutation test, allowing the sizes of the
local subsamples either to be fixed or to grow to infinity. In the latter case,
we also establish that the permutation test is consistent. We demonstrate that
our high-level condition can be verified in a broad range of problems in the
infill asymptotic time-series setting, which justifies using the permutation
test to detect jumps in economic variables such as volatility, trading
activity, and liquidity. These potential applications are illustrated in an
empirical case study for selected FOMC announcements during the ongoing
COVID-19 pandemic.",Permutation-based tests for discontinuities in event studies,2020-07-20 05:12:52,"Federico A. Bugni, Jia Li, Qiyuan Li","http://arxiv.org/abs/2007.09837v4, http://arxiv.org/pdf/2007.09837v4",econ.EM
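The test statistic and permutation critical values described above can be sketched as follows, using a Cramér-von Mises-type distance between the empirical distribution functions of the two local subsamples. The paper's exact statistic and coupling conditions are richer; this is only an illustrative implementation with assumed names.

```python
import numpy as np

def permutation_discontinuity_test(left, right, n_perm=999, seed=0):
    """Permutation test for a distributional discontinuity at a known cutoff,
    comparing local subsamples to the left and right of the cutoff."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([left, right])
    grid = np.sort(pooled)
    n_left = len(left)

    def ecdf_distance(a, b):
        Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
        Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
        return np.mean((Fa - Fb) ** 2)

    stat = ecdf_distance(left, right)
    perm_stats = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(pooled)               # reshuffle observations across sides
        perm_stats[b] = ecdf_distance(perm[:n_left], perm[n_left:])
    p_value = (1 + np.sum(perm_stats >= stat)) / (n_perm + 1)
    return stat, p_value
```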
29051,em,"Mean, median, and mode are three essential measures of the centrality of
probability distributions. In program evaluation, the average treatment effect
(mean) and the quantile treatment effect (median) have been intensively studied
in the past decades. The mode treatment effect, however, has long been
neglected in program evaluation. This paper fills the gap by discussing both
the estimation and inference of the mode treatment effect. I propose both
traditional kernel and machine learning methods to estimate the mode treatment
effect. I also derive the asymptotic properties of the proposed estimators and
find that both estimators follow the asymptotic normality but with the rate of
convergence slower than the regular rate $\sqrt{N}$, which is different from
the rates of the classical average and quantile treatment effect estimators.",The Mode Treatment Effect,2020-07-22 21:05:56,Neng-Chieh Chang,"http://arxiv.org/abs/2007.11606v1, http://arxiv.org/pdf/2007.11606v1",econ.EM
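A kernel-based version of the mode contrast discussed above can be sketched in a few lines. It ignores covariates and propensity-score weighting, so it corresponds only to the experimental (unconditional) case; `kde_mode` and `mode_effect` are assumed names.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mode(y, grid_size=512):
    """Mode of a univariate distribution estimated by a Gaussian kernel density."""
    grid = np.linspace(y.min(), y.max(), grid_size)
    return grid[np.argmax(gaussian_kde(y)(grid))]

def mode_effect(y_treated, y_control):
    """Difference in estimated modes of the treated and control outcome distributions."""
    return kde_mode(y_treated) - kde_mode(y_control)
```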
29052,em,"The multinomial probit model is a popular tool for analyzing choice behaviour
as it allows for correlation between choice alternatives. Because current model
specifications employ a full covariance matrix of the latent utilities for the
choice alternatives, they are not scalable to a large number of choice
alternatives. This paper proposes a factor structure on the covariance matrix,
which makes the model scalable to large choice sets. The main challenge in
estimating this structure is that the model parameters require identifying
restrictions. We identify the parameters by a trace-restriction on the
covariance matrix, which is imposed through a reparametrization of the factor
structure. We specify interpretable prior distributions on the model parameters
and develop an MCMC sampler for parameter estimation. The proposed approach
significantly improves performance in large choice sets relative to existing
multinomial probit specifications. Applications to purchase data show the
economic importance of including a large number of choice alternatives in
consumer choice analysis.",Scalable Bayesian estimation in the multinomial probit model,2020-07-27 02:38:14,"Ruben Loaiza-Maya, Didier Nibbering","http://arxiv.org/abs/2007.13247v2, http://arxiv.org/pdf/2007.13247v2",econ.EM
29054,em,"Applied macroeconomists often compute confidence intervals for impulse
responses using local projections, i.e., direct linear regressions of future
outcomes on current covariates. This paper proves that local projection
inference robustly handles two issues that commonly arise in applications:
highly persistent data and the estimation of impulse responses at long
horizons. We consider local projections that control for lags of the variables
in the regression. We show that lag-augmented local projections with normal
critical values are asymptotically valid uniformly over (i) both stationary and
non-stationary data, and also over (ii) a wide range of response horizons.
Moreover, lag augmentation obviates the need to correct standard errors for
serial correlation in the regression residuals. Hence, local projection
inference is arguably both simpler than previously thought and more robust than
standard autoregressive inference, whose validity is known to depend
sensitively on the persistence of the data and on the length of the horizon.",Local Projection Inference is Simpler and More Robust Than You Think,2020-07-28 01:03:23,"José Luis Montiel Olea, Mikkel Plagborg-Møller","http://dx.doi.org/10.3982/ECTA18756, http://arxiv.org/abs/2007.13888v3, http://arxiv.org/pdf/2007.13888v3",econ.EM
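A lag-augmented local projection of the kind discussed above amounts to an OLS regression of the future outcome on the current impulse variable and lags of all variables, with plain heteroskedasticity-robust standard errors. The sketch below is one simple way to implement that idea and is not the authors' code; `y`, `x`, and the lag length `p` are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def lag_augmented_lp(y, x, hmax, p=4):
    """Impulse responses of y to x from lag-augmented local projections.
    y, x : 1-D numpy arrays; hmax : maximum horizon; p : number of augmenting lags."""
    T = len(y)
    irf, se = [], []
    for h in range(hmax + 1):
        rows = np.arange(p, T - h)
        Y = y[rows + h]
        lags = [y[rows - j] for j in range(1, p + 1)] + \
               [x[rows - j] for j in range(1, p + 1)]
        X = sm.add_constant(np.column_stack([x[rows]] + lags))
        fit = sm.OLS(Y, X).fit(cov_type="HC1")       # robust, no HAC correction needed
        irf.append(fit.params[1])                    # coefficient on x_t at horizon h
        se.append(fit.bse[1])
    return np.array(irf), np.array(se)
```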
29055,em,"Commonly used methods of production function and markup estimation assume
that a firm's output quantity can be observed as data, but typical datasets
contain only revenue, not output quantity. We examine the nonparametric
identification of production function and markup from revenue data when a firm
faces a general nonparametric demand function under imperfect competition. Under
standard assumptions, we provide the constructive nonparametric identification
of various firm-level objects: gross production function, total factor
productivity, price markups over marginal costs, output prices, output
quantities, a demand system, and a representative consumer's utility function.","Nonparametric Identification of Production Function, Total Factor Productivity, and Markup from Revenue Data",2020-10-31 02:34:40,"Hiroyuki Kasahara, Yoichi Sugita","http://arxiv.org/abs/2011.00143v1, http://arxiv.org/pdf/2011.00143v1",econ.EM
29056,em,"Macroeconomists increasingly use external sources of exogenous variation for
causal inference. However, unless such external instruments (proxies) capture
the underlying shock without measurement error, existing methods are silent on
the importance of that shock for macroeconomic fluctuations. We show that, in a
general moving average model with external instruments, variance decompositions
for the instrumented shock are interval-identified, with informative bounds.
Various additional restrictions guarantee point identification of both variance
and historical decompositions. Unlike SVAR analysis, our methods do not require
invertibility. Applied to U.S. data, they give a tight upper bound on the
importance of monetary shocks for inflation dynamics.",Instrumental Variable Identification of Dynamic Variance Decompositions,2020-11-03 02:32:44,"Mikkel Plagborg-Møller, Christian K. Wolf","http://arxiv.org/abs/2011.01380v2, http://arxiv.org/pdf/2011.01380v2",econ.EM
29057,em,"Forecasters often use common information and hence make common mistakes. We
propose a new approach, Factor Graphical Model (FGM), to forecast combinations
that separates idiosyncratic forecast errors from the common errors. FGM
exploits the factor structure of forecast errors and the sparsity of the
precision matrix of the idiosyncratic errors. We prove the consistency of
forecast combination weights and mean squared forecast error estimated using
FGM, supporting the results with extensive simulations. Empirical applications
to forecasting macroeconomic series show that forecast combination using FGM
outperforms combined forecasts using equal weights and graphical models without
incorporating factor structure of forecast errors.",Learning from Forecast Errors: A New Approach to Forecast Combinations,2020-11-04 03:16:16,"Tae-Hwy Lee, Ekaterina Seregina","http://arxiv.org/abs/2011.02077v2, http://arxiv.org/pdf/2011.02077v2",econ.EM
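A heavily simplified relative of the combination scheme described above is to estimate a sparse precision matrix of past forecast errors and use the standard variance-minimising weights. The sketch omits the factor step that separates common from idiosyncratic errors, so it is only a precision-weighted benchmark; the regularisation level `alpha` is an arbitrary assumption.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def precision_weighted_combination(errors):
    """Forecast-combination weights w = (Theta 1) / (1' Theta 1), where Theta is a
    sparse (graphical-lasso) precision matrix of historical forecast errors (T x n)."""
    theta = GraphicalLasso(alpha=0.1).fit(errors).precision_
    ones = np.ones(theta.shape[0])
    return theta @ ones / (ones @ theta @ ones)

# usage: combined forecast = current individual forecasts @ weights,
# with weights = precision_weighted_combination(past_errors)
```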
29058,em,"We use a decision-theoretic framework to study the problem of forecasting
discrete outcomes when the forecaster is unable to discriminate among a set of
plausible forecast distributions because of partial identification or concerns
about model misspecification or structural breaks. We derive ""robust"" forecasts
which minimize maximum risk or regret over the set of forecast distributions.
We show that for a large class of models including semiparametric panel data
models for dynamic discrete choice, the robust forecasts depend in a natural
way on a small number of convex optimization problems which can be simplified
using duality methods. Finally, we derive ""efficient robust"" forecasts to deal
with the problem of first having to estimate the set of forecast distributions
and develop a suitable asymptotic efficiency theory. Forecasts obtained by
replacing nuisance parameters that characterize the set of forecast
distributions with efficient first-stage estimators can be strictly dominated
by our efficient robust forecasts.",Robust Forecasting,2020-11-06 04:17:22,"Timothy Christensen, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/2011.03153v4, http://arxiv.org/pdf/2011.03153v4",econ.EM
29059,em,"Following in the footsteps of the literature on empirical welfare
maximization, this paper contributes by stressing the policymaker's
perspective via a practical illustration of an optimal policy assignment
problem. More specifically, by focusing on the class of threshold-based
policies, we first set up the theoretical underpinnings of the policymaker
selection problem, to then offer a practical solution to this problem via an
empirical illustration using the popular LaLonde (1986) training program
dataset. The paper proposes an implementation protocol for the optimal solution
that is straightforward to apply and easy to program with standard statistical
software.",Optimal Policy Learning: From Theory to Practice,2020-11-10 12:25:33,Giovanni Cerulli,"http://arxiv.org/abs/2011.04993v1, http://arxiv.org/pdf/2011.04993v1",econ.EM
29060,em,"This paper studies identification of the effect of a mis-classified, binary,
endogenous regressor when a discrete-valued instrumental variable is available.
We begin by showing that the only existing point identification result for this
model is incorrect. We go on to derive the sharp identified set under mean
independence assumptions for the instrument and measurement error. The
resulting bounds are novel and informative, but fail to point identify the
effect of interest. This motivates us to consider alternative and slightly
stronger assumptions: we show that adding second and third moment independence
assumptions suffices to identify the model.","Identifying the effect of a mis-classified, binary, endogenous regressor",2020-11-14 14:35:13,"Francis J. DiTraglia, Camilo Garcia-Jimeno","http://dx.doi.org/10.1016/j.jeconom.2019.01.007, http://arxiv.org/abs/2011.07272v1, http://arxiv.org/pdf/2011.07272v1",econ.EM
29848,em,"This study considers the treatment choice problem when outcome variables are
binary. We focus on statistical treatment rules that plug in fitted values
based on nonparametric kernel regression and show that optimizing two
parameters enables the calculation of the maximum regret. Using this result, we
propose a novel bandwidth selection method based on the minimax regret
criterion. Finally, we perform a numerical analysis to compare the optimal
bandwidth choices for the binary and normally distributed outcomes.",Bandwidth Selection for Treatment Choice with Binary Outcomes,2023-08-28 10:46:05,Takuya Ishihara,"http://arxiv.org/abs/2308.14375v2, http://arxiv.org/pdf/2308.14375v2",econ.EM
29061,em,"To estimate causal effects from observational data, an applied researcher
must impose beliefs. The instrumental variables exclusion restriction, for
example, represents the belief that the instrument has no direct effect on the
outcome of interest. Yet beliefs about instrument validity do not exist in
isolation. Applied researchers often discuss the likely direction of selection
and the potential for measurement error in their articles but lack formal tools
for incorporating this information into their analyses. Failing to use all
relevant information not only leaves money on the table; it runs the risk of
leading to a contradiction in which one holds mutually incompatible beliefs
about the problem at hand. To address these issues, we first characterize the
joint restrictions relating instrument invalidity, treatment endogeneity, and
non-differential measurement error in a workhorse linear model, showing how
beliefs over these three dimensions are mutually constrained by each other and
the data. Using this information, we propose a Bayesian framework to help
researchers elicit their beliefs, incorporate them into estimation, and ensure
their mutual coherence. We conclude by illustrating our framework in a number
of examples drawn from the empirical microeconomics literature.","A Framework for Eliciting, Incorporating, and Disciplining Identification Beliefs in Linear Models",2020-11-14 14:43:44,"Francis J. DiTraglia, Camilo Garcia-Jimeno","http://dx.doi.org/10.1080/07350015.2020.1753528, http://arxiv.org/abs/2011.07276v1, http://arxiv.org/pdf/2011.07276v1",econ.EM
29062,em,"In this paper we propose a semi-parametric Bayesian Generalized Least Squares
estimator. In a generic setting where each error is a vector, the parametric
Generalized Least Square estimator maintains the assumption that each error
vector has the same distributional parameters. In reality, however, errors are
likely to be heterogeneous regarding their distributions. To cope with such
heterogeneity, a Dirichlet process prior is introduced for the distributional
parameters of the errors, leading to the error distribution being a mixture of
a variable number of normal distributions. Our method lets the number of normal
components be data-driven. Semi-parametric Bayesian estimators for two specific
cases are then presented: the Seemingly Unrelated Regression for equation
systems and the Random Effects Model for panel data. We design a series of
simulation experiments to explore the performance of our estimators. The
results demonstrate that our estimators obtain smaller posterior standard
deviations and mean squared errors than the Bayesian estimators using a
parametric mixture of normal distributions or a normal distribution. We then
apply our semi-parametric Bayesian estimators for equation systems and panel
data models to empirical data.",A Semi-Parametric Bayesian Generalized Least Squares Estimator,2020-11-20 10:50:15,"Ruochen Wu, Melvyn Weeks","http://arxiv.org/abs/2011.10252v2, http://arxiv.org/pdf/2011.10252v2",econ.EM
29063,em,"This paper proposes a new class of M-estimators that double weight for the
twin problems of nonrandom treatment assignment and missing outcomes, both of
which are common issues in the treatment effects literature. The proposed class
is characterized by a `robustness' property, which makes it resilient to
parametric misspecification in either a conditional model of interest (for
example, mean or quantile function) or the two weighting functions. As leading
applications, the paper discusses estimation of two specific causal parameters;
average and quantile treatment effects (ATE, QTEs), which can be expressed as
functions of the doubly weighted estimator, under misspecification of the
framework's parametric components. With respect to the ATE, this paper shows
that the proposed estimator is doubly robust even in the presence of missing
outcomes. Finally, to demonstrate the estimator's viability in empirical
settings, it is applied to Calonico and Smith (2017)'s reconstructed sample
from the National Supported Work training program.",Doubly weighted M-estimation for nonrandom assignment and missing outcomes,2020-11-23 18:48:39,Akanksha Negi,"http://arxiv.org/abs/2011.11485v1, http://arxiv.org/pdf/2011.11485v1",econ.EM
29064,em,"This paper develops a first-stage linear regression representation for the
instrumental variables (IV) quantile regression (QR) model. The quantile
first-stage is analogous to the least squares case, i.e., a linear projection
of the endogenous variables on the instruments and other exogenous covariates,
with the difference that the QR case is a weighted projection. The weights are
given by the conditional density function of the innovation term in the QR
structural model, conditional on the endogenous and exogenous covariates, and
the instruments as well, at a given quantile. We also show that the required
Jacobian identification conditions for IVQR models are embedded in the quantile
first-stage. We then suggest inference procedures to evaluate the adequacy of
instruments by evaluating their statistical significance using the first-stage
result. The test is developed in an over-identification context, since
consistent estimation of the weights for implementation of the first-stage
requires at least one valid instrument to be available. Monte Carlo experiments
provide numerical evidence that the proposed tests work as expected in terms of
empirical size and power in finite samples. An empirical application
illustrates that checking for the statistical significance of the instruments
at different quantiles is important. The proposed procedures may be especially
useful in QR since the instruments may be relevant at some quantiles but not at
others.",A first-stage representation for instrumental variables quantile regression,2021-02-02 01:26:54,"Javier Alejo, Antonio F. Galvao, Gabriel Montes-Rojas","http://arxiv.org/abs/2102.01212v4, http://arxiv.org/pdf/2102.01212v4",econ.EM
29065,em,"How much do individuals contribute to team output? I propose an econometric
framework to quantify individual contributions when only the output of their
teams is observed. The identification strategy relies on following individuals
who work in different teams over time. I consider two production technologies.
For a production function that is additive in worker inputs, I propose a
regression estimator and show how to obtain unbiased estimates of variance
components that measure the contributions of heterogeneity and sorting. To
estimate nonlinear models with complementarity, I propose a mixture approach
under the assumption that individual types are discrete, and rely on a
mean-field variational approximation for estimation. To illustrate the methods,
I estimate the impact of economists on their research output, and the
contributions of inventors to the quality of their patents.","Teams: Heterogeneity, Sorting, and Complementarity",2021-02-03 02:52:12,Stephane Bonhomme,"http://arxiv.org/abs/2102.01802v1, http://arxiv.org/pdf/2102.01802v1",econ.EM
29160,em,"This paper studies the joint inference on conditional volatility parameters
and the innovation moments by means of bootstrap to test for the existence of
moments for GARCH(p,q) processes. We propose a residual bootstrap to mimic the
joint distribution of the quasi-maximum likelihood estimators and the empirical
moments of the residuals and also prove its validity. A bootstrap-based test
for the existence of moments is proposed, which provides asymptotically
correctly-sized tests without losing its consistency property. It is simple to
implement and extends to other GARCH-type settings. A simulation study
demonstrates the test's size and power properties in finite samples and an
empirical application illustrates the testing approach.",A Bootstrap Test for the Existence of Moments for GARCH Processes,2019-02-05 20:32:20,Alexander Heinemann,"http://arxiv.org/abs/1902.01808v3, http://arxiv.org/pdf/1902.01808v3",econ.EM
29066,em,"We study discrete panel data methods where unobserved heterogeneity is
revealed in a first step, in environments where population heterogeneity is not
discrete. We focus on two-step grouped fixed-effects (GFE) estimators, where
individuals are first classified into groups using kmeans clustering, and the
model is then estimated allowing for group-specific heterogeneity. Our
framework relies on two key properties: heterogeneity is a function - possibly
nonlinear and time-varying - of a low-dimensional continuous latent type, and
informative moments are available for classification. We illustrate the method
in a model of wages and labor market participation, and in a probit model with
time-varying heterogeneity. We derive asymptotic expansions of two-step GFE
estimators as the number of groups grows with the two dimensions of the panel.
We propose a data-driven rule for the number of groups, and discuss bias
reduction and inference.",Discretizing Unobserved Heterogeneity,2021-02-03 19:03:19,"Stéphane Bonhomme, Thibaut Lamadon, Elena Manresa","http://arxiv.org/abs/2102.02124v1, http://arxiv.org/pdf/2102.02124v1",econ.EM
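The two-step grouped fixed-effects procedure described in the abstract above (kmeans classification on informative moments, followed by estimation with group-specific heterogeneity) can be illustrated with a minimal sketch. The panel dimensions, data generating process, and choice of classification moments below are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a two-step grouped fixed-effects (GFE) estimator on a
# simulated balanced panel; dimensions and moments are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N, T, G = 500, 8, 5                       # individuals, periods, assumed number of groups
alpha = rng.normal(size=N)                # continuous latent heterogeneity
x = rng.normal(size=(N, T))
y = alpha[:, None] + 0.5 * x + rng.normal(scale=0.5, size=(N, T))

# Step 1: classify individuals into groups by kmeans on informative moments
# (here, the individual time-series means of y and x).
moments = np.column_stack([y.mean(axis=1), x.mean(axis=1)])
groups = KMeans(n_clusters=G, n_init=10, random_state=0).fit_predict(moments)

# Step 2: pooled OLS with group-specific intercepts (the "grouped fixed effects").
D = np.eye(G)[groups]                     # N x G group indicator matrix
X = np.column_stack([np.repeat(D, T, axis=0), x.reshape(-1, 1)])
beta = np.linalg.lstsq(X, y.reshape(-1), rcond=None)[0]
print("estimated slope:", beta[-1])       # should be close to the true 0.5
```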
29067,em,"We present a class of one-to-one matching models with perfectly transferable
utility. We discuss identification and inference in these separable models, and
we show how their comparative statics are readily analyzed.",The Econometrics and Some Properties of Separable Matching Models,2021-02-04 14:55:10,"Alfred Galichon, Bernard Salanié","http://dx.doi.org/10.1257/aer.p20171113, http://arxiv.org/abs/2102.02564v1, http://arxiv.org/pdf/2102.02564v1",econ.EM
29068,em,"The notion of hypothetical bias (HB) constitutes, arguably, the most
fundamental issue in relation to the use of hypothetical survey methods.
Whether or to what extent choices of survey participants and subsequent
inferred estimates translate to real-world settings continues to be debated.
While HB has been extensively studied in the broader context of contingent
valuation, it is much less understood in relation to choice experiments (CE).
This paper reviews the empirical evidence for HB in CE in various fields of
applied economics and presents an integrative framework for how HB relates to
external validity. Results suggest mixed evidence on the prevalence, extent and
direction of HB as well as considerable context and measurement dependency.
While HB is found to be an undeniable issue when conducting CEs, the empirical
evidence on HB does not render CEs unable to represent real-world preferences.
While health-related choice experiments often find negligible degrees of HB,
experiments in consumer behaviour and transport domains suggest that
significant degrees of HB are ubiquitous. Assessments of bias in environmental
valuation studies provide mixed evidence. Also, across these disciplines many
studies display HB in their total willingness to pay estimates and opt-in rates
but not in their hypothetical marginal rates of substitution (subject to scale
correction). Further, recent findings in psychology and brain imaging studies
suggest neurocognitive mechanisms underlying HB that may explain some of the
discrepancies and unexpected findings in the mainstream CE literature. The
review also observes how the variety of operational definitions of HB prohibits
consistent measurement of HB in CE. The paper further identifies major sources
of HB and possible moderating factors. Finally, it explains how HB represents
one component of the wider concept of external validity.",Hypothetical bias in stated choice experiments: Part I. Integrative synthesis of empirical evidence and conceptualisation of external validity,2021-02-05 03:45:50,"Milad Haghani, Michiel C. J. Bliemer, John M. Rose, Harmen Oppewal, Emily Lancsar","http://dx.doi.org/10.1016/j.jocm.2021.100309, http://arxiv.org/abs/2102.02940v1, http://arxiv.org/pdf/2102.02940v1",econ.EM
29069,em,"This paper reviews methods of hypothetical bias (HB) mitigation in choice
experiments (CEs). It presents a bibliometric analysis and summary of empirical
evidence of their effectiveness. The paper follows the review of empirical
evidence on the existence of HB presented in Part I of this study. While the
number of CE studies has rapidly increased since 2010, the critical issue of HB
has been studied in only a small fraction of CE studies. The present review
includes both ex-ante and ex-post bias mitigation methods. Ex-ante bias
mitigation methods include cheap talk, real talk, consequentiality scripts,
solemn oath scripts, opt-out reminders, budget reminders, honesty priming,
induced truth telling, indirect questioning, time to think and pivot designs.
Ex-post methods include follow-up certainty calibration scales, respondent
perceived consequentiality scales, and revealed-preference-assisted estimation.
It is observed that the use of mitigation methods markedly varies across
different sectors of applied economics. The existing empirical evidence points
to their overall effectiveness in reducing HB, although there is some variation.
The paper further discusses how each mitigation method can counter a certain
subset of HB sources. Considering the prevalence of HB in CEs and the
effectiveness of bias mitigation methods, it is recommended that implementation
of at least one bias mitigation method (or a suitable combination where
possible) becomes standard practice in conducting CEs. Mitigation method(s)
suited to the particular application should be implemented to ensure that
inferences and subsequent policy decisions are as much as possible free of HB.",Hypothetical bias in stated choice experiments: Part II. Macro-scale analysis of literature and effectiveness of bias mitigation methods,2021-02-05 03:53:21,"Milad Haghani, Michiel C. J. Bliemer, John M. Rose, Harmen Oppewal, Emily Lancsar","http://dx.doi.org/10.1016/j.jocm.2021.100322, http://arxiv.org/abs/2102.02945v1, http://arxiv.org/pdf/2102.02945v1",econ.EM
29070,em,"We provide a geometric formulation of the problem of identification of the
matching surplus function and we show how the estimation problem can be solved
by the introduction of a generalized entropy function over the set of
matchings.",Identification of Matching Complementarities: A Geometric Viewpoint,2021-02-07 21:31:54,Alfred Galichon,"http://dx.doi.org/10.1108/S0731-9053(2013)0000032005, http://arxiv.org/abs/2102.03875v1, http://arxiv.org/pdf/2102.03875v1",econ.EM
29071,em,"This paper studies inference in a randomized controlled trial (RCT) with
covariate-adaptive randomization (CAR) and imperfect compliance of a binary
treatment. In this context, we study inference on the LATE. As in Bugni et al.
(2018,2019), CAR refers to randomization schemes that first stratify according
to baseline covariates and then assign treatment status so as to achieve
""balance"" within each stratum. In contrast to these papers, however, we allow
participants of the RCT to endogenously decide to comply or not with the
assigned treatment status.
  We study the properties of an estimator of the LATE derived from a ""fully
saturated"" IV linear regression, i.e., a linear regression of the outcome on
all indicators for all strata and their interaction with the treatment
decision, with the latter instrumented with the treatment assignment. We show
that the proposed LATE estimator is asymptotically normal, and we characterize
its asymptotic variance in terms of primitives of the problem. We provide
consistent estimators of the standard errors and asymptotically exact
hypothesis tests. In the special case when the target proportion of units
assigned to each treatment does not vary across strata, we can also consider
two other estimators of the LATE, including the one based on the ""strata fixed
effects"" IV linear regression, i.e., a linear regression of the outcome on
indicators for all strata and the treatment decision, with the latter
instrumented with the treatment assignment.
  Our characterization of the asymptotic variance of the LATE estimators allows
us to understand the influence of the parameters of the RCT. We use this to
propose strategies to minimize their asymptotic variance in a hypothetical RCT
based on data from a pilot study. We illustrate the practical relevance of
these results using a simulation study and an empirical application based on
Dupas et al. (2018).",Inference under Covariate-Adaptive Randomization with Imperfect Compliance,2021-02-08 01:36:26,"Federico A. Bugni, Mengsi Gao","http://arxiv.org/abs/2102.03937v3, http://arxiv.org/pdf/2102.03937v3",econ.EM
29079,em,"Using results from convex analysis, we investigate a novel approach to
identification and estimation of discrete choice models which we call the Mass
Transport Approach (MTA). We show that the conditional choice probabilities and
the choice-specific payoffs in these models are related in the sense of
conjugate duality, and that the identification problem is a mass transport
problem. Based on this, we propose a new two-step estimator for these models;
interestingly, the first step of our estimator involves solving a linear
program which is identical to the classic assignment (two-sided matching) game
of Shapley and Shubik (1971). The application of convex-analytic tools to
dynamic discrete choice models, and the connection with two-sided matching
models, is new in the literature.",Duality in dynamic discrete-choice models,2021-02-08 18:50:03,"Khai Xiang Chiong, Alfred Galichon, Matt Shum","http://dx.doi.org/10.3982/QE436, http://arxiv.org/abs/2102.06076v2, http://arxiv.org/pdf/2102.06076v2",econ.EM
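The first-stage linear program referenced in the abstract above coincides with the classic Shapley and Shubik (1971) assignment game. A minimal sketch of that assignment problem follows; the surplus matrix here is synthetic, whereas in the Mass Transport Approach first stage it would be constructed from conditional choice probabilities.

```python
# Minimal sketch of the Shapley-Shubik assignment (two-sided matching) problem;
# the surplus matrix is a synthetic placeholder.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
surplus = rng.random((6, 6))                      # surplus of matching i with j
rows, cols = linear_sum_assignment(surplus, maximize=True)
print("optimal matching:", list(zip(rows.tolist(), cols.tolist())))
print("total surplus:", surplus[rows, cols].sum())
```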
29072,em,"In a landmark contribution to the structural vector autoregression (SVAR)
literature, Rubio-Ramirez, Waggoner, and Zha (2010, `Structural Vector
Autoregressions: Theory of Identification and Algorithms for Inference,' Review
of Economic Studies) shows a necessary and sufficient condition for equality
restrictions to globally identify the structural parameters of a SVAR. The
simplest form of the necessary and sufficient condition shown in Theorem 7 of
Rubio-Ramirez et al (2010) checks the number of zero restrictions and the ranks
of particular matrices without requiring knowledge of the true value of the
structural or reduced-form parameters. However, this note shows by
counterexample that this condition is not sufficient for global identification.
Analytical investigation of the counterexample clarifies why their sufficiency
claim breaks down. The problem with the rank condition is that it allows for
the possibility that restrictions are redundant, in the sense that one or more
restrictions may be implied by other restrictions, in which case the implied
restriction contains no identifying information. We derive a modified necessary
and sufficient condition for SVAR global identification and clarify how it can
be assessed in practice.",A note on global identification in structural vector autoregressions,2021-02-08 11:14:27,"Emanuele Bacchiocchi, Toru Kitagawa","http://arxiv.org/abs/2102.04048v2, http://arxiv.org/pdf/2102.04048v2",econ.EM
29073,em,"We propose an easily implementable test of the validity of a set of
theoretical restrictions on the relationship between economic variables, which
do not necessarily identify the data generating process. The restrictions can
be derived from any model of interactions, allowing censoring and multiple
equilibria. When the restrictions are parameterized, the test can be inverted
to yield confidence regions for partially identified parameters, thereby
complementing other proposals, primarily Chernozhukov et al. [Chernozhukov, V.,
Hong, H., Tamer, E., 2007. Estimation and confidence regions for parameter sets
in econometric models. Econometrica 75, 1243-1285].",A test of non-identifying restrictions and confidence regions for partially identified parameters,2021-02-08 15:01:13,"Alfred Galichon, Marc Henry","http://dx.doi.org/10.1016/j.jeconom.2009.01.010, http://arxiv.org/abs/2102.04151v1, http://arxiv.org/pdf/2102.04151v1",econ.EM
29074,em,"A general framework is given to analyze the falsifiability of economic models
based on a sample of their observable components. It is shown that, when the
restrictions implied by the economic theory are insufficient to identify the
unknown quantities of the structure, the duality of optimal transportation with
zero-one cost function delivers interpretable and operational formulations of
the hypothesis of specification correctness from which tests can be constructed
to falsify the model.",Optimal transportation and the falsifiability of incompletely specified economic models,2021-02-08 15:25:46,"Ivar Ekeland, Alfred Galichon, Marc Henry","http://dx.doi.org/10.1007/s00199-008-0432-y, http://arxiv.org/abs/2102.04162v2, http://arxiv.org/pdf/2102.04162v2",econ.EM
29075,em,"Despite their popularity, machine learning predictions are sensitive to
potential unobserved predictors. This paper proposes a general algorithm that
assesses how the omission of an unobserved variable with high explanatory power
could affect the predictions of the model. Moreover, the algorithm extends the
usage of machine learning from pointwise predictions to inference and
sensitivity analysis. In the application, we show how the framework can be
applied to data with inherent uncertainty, such as students' scores in a
standardized assessment on financial literacy. First, using Bayesian Additive
Regression Trees (BART), we predict students' financial literacy scores (FLS)
for a subgroup of students with missing FLS. Then, we assess the sensitivity of
predictions by comparing the predictions and performance of models with and
without a highly explanatory synthetic predictor. We find no significant
difference in the predictions and performances of the augmented (i.e., the
model with the synthetic predictor) and the original model. This evidence sheds
light on the stability of the predictive model used in the application. The
proposed methodology can be used, above and beyond our motivating empirical
example, in a wide range of machine learning applications in social and health
sciences.",Assessing Sensitivity of Machine Learning Predictions. A Novel Toolbox with an Application to Financial Literacy,2021-02-08 20:42:10,"Falco J. Bargagli Stoffi, Kenneth De Beckker, Joana E. Maldonado, Kristof De Witte","http://arxiv.org/abs/2102.04382v1, http://arxiv.org/pdf/2102.04382v1",econ.EM
29076,em,"We propose a methodology for constructing confidence regions with partially
identified models of general form. The region is obtained by inverting a test
of internal consistency of the econometric structure. We develop a dilation
bootstrap methodology to deal with sampling uncertainty without reference to
the hypothesized economic structure. It requires bootstrapping the quantile
process for univariate data and a novel generalization of the latter to higher
dimensions. Once the dilation is chosen to control the confidence level, the
unknown true distribution of the observed data can be replaced by the known
empirical distribution and confidence regions can then be obtained as in
Galichon and Henry (2011) and Beresteanu, Molchanov and Molinari (2011).",Dilation bootstrap,2021-02-08 17:13:37,"Alfred Galichon, Marc Henry","http://dx.doi.org/10.1016/j.jeconom.2013.07.001, http://arxiv.org/abs/2102.04457v1, http://arxiv.org/pdf/2102.04457v1",econ.EM
29077,em,"This article proposes a generalized notion of extreme multivariate dependence
between two random vectors which relies on the extremality of the
cross-covariance matrix between these two vectors. Using a partial ordering on
the cross-covariance matrices, we also generalize the notion of positive upper
dependence. We then propose a means to quantify the strength of the dependence
between two given multivariate series and to increase this strength while
preserving the marginal distributions. This allows for the design of
stress-tests of the dependence between two sets of financial variables, that
can be useful in portfolio management or derivatives pricing.",Extreme dependence for multivariate data,2021-02-08 17:57:13,"Damien Bosc, Alfred Galichon","http://dx.doi.org/10.1080/14697688.2014.886777, http://arxiv.org/abs/2102.04461v1, http://arxiv.org/pdf/2102.04461v1",econ.EM
29078,em,"Responding to the U.S. opioid crisis requires a holistic approach supported
by evidence from linking and analyzing multiple data sources. This paper
discusses how 20 available resources can be combined to answer pressing public
health questions related to the crisis. It presents a network view based on
U.S. geographical units and other standard concepts, crosswalked to communicate
the coverage and interlinkage of these resources. These opioid-related datasets
can be grouped by four themes: (1) drug prescriptions, (2) opioid related
harms, (3) opioid treatment workforce, jobs, and training, and (4) drug policy.
An interactive network visualization was created and is freely available
online; it lets users explore key metadata, relevant scholarly works, and data
interlinkages in support of informed decision making through data analysis.","Interactive Network Visualization of Opioid Crisis Related Data- Policy, Pharmaceutical, Training, and More",2021-02-10 20:51:48,"Olga Scrivner, Elizabeth McAvoy, Thuy Nguyen, Tenzin Choeden, Kosali Simon, Katy Börner","http://arxiv.org/abs/2102.05596v1, http://arxiv.org/pdf/2102.05596v1",econ.EM
29081,em,"We consider structural vector autoregressions subject to 'narrative
restrictions', which are inequality restrictions on functions of the structural
shocks in specific periods. These restrictions raise novel problems related to
identification and inference, and there is currently no frequentist procedure
for conducting inference in these models. We propose a solution that is valid
from both Bayesian and frequentist perspectives by: 1) formalizing the
identification problem under narrative restrictions; 2) correcting a feature of
the existing (single-prior) Bayesian approach that can distort inference; 3)
proposing a robust (multiple-prior) Bayesian approach that is useful for
assessing and eliminating the posterior sensitivity that arises in these models
due to the likelihood having flat regions; and 4) showing that the robust
Bayesian approach has asymptotic frequentist validity. We illustrate our
methods by estimating the effects of US monetary policy under a variety of
narrative restrictions.",Identification and Inference Under Narrative Restrictions,2021-02-12 14:38:55,"Raffaella Giacomini, Toru Kitagawa, Matthew Read","http://arxiv.org/abs/2102.06456v1, http://arxiv.org/pdf/2102.06456v1",econ.EM
29082,em,"Weak instruments present a major setback to empirical work. This paper
introduces an estimator that admits weak, uncorrelated, or mean-independent
instruments that are non-independent of endogenous covariates. Relative to
conventional instrumental variable methods, the proposed estimator weakens the
relevance condition considerably without imposing a stronger exclusion
restriction. Identification mainly rests on (1) a weak conditional median
exclusion restriction imposed on pairwise differences in disturbances and (2)
non-independence between covariates and instruments. Under mild conditions, the
estimator is consistent and asymptotically normal. Monte Carlo experiments
showcase an excellent performance of the estimator, and two empirical examples
illustrate its practical utility.",A Distance Covariance-based Estimator,2021-02-14 00:55:09,"Emmanuel Selorm Tsyawo, Abdul-Nasah Soale","http://arxiv.org/abs/2102.07008v1, http://arxiv.org/pdf/2102.07008v1",econ.EM
29083,em,"This paper contributes to the literature on hedonic models in two ways.
First, it makes use of Queyranne's reformulation of a hedonic model in the
discrete case as a network flow problem in order to provide a proof of
existence and integrality of a hedonic equilibrium and efficient computation of
hedonic prices. Second, elaborating on entropic methods developed in Galichon
and Salani\'{e} (2014), this paper proposes a new identification strategy for
hedonic models in a single market. This methodology allows one to introduce
heterogeneities in both consumers' and producers' attributes and to recover
producers' profits and consumers' utilities based on the observation of
production and consumption patterns and the set of hedonic prices.",Entropy methods for identifying hedonic models,2021-02-15 14:49:21,"Arnaud Dupuy, Alfred Galichon, Marc Henry","http://dx.doi.org/10.1007/s11579-014-0125-1, http://arxiv.org/abs/2102.07491v1, http://arxiv.org/pdf/2102.07491v1",econ.EM
29084,em,"Unlike other techniques of causality inference, the use of valid instrumental
variables can deal with unobserved sources of variable errors, variable
omissions, and sampling bias, and still arrive at consistent estimates of
average treatment effects. The only problem is to find the valid instruments.
Using the definition of Pearl (2009) of valid instrumental variables, a formal
condition for validity can be stated for variables in generalized linear causal
models. The condition can be applied in two different ways: As a tool for
constructing valid instruments, or as a foundation for testing whether an
instrument is valid. When perfectly valid instruments are not found, the
squared bias of the IV-estimator induced by an imperfectly valid instrument --
estimated with bootstrapping -- can be added to its empirical variance in a
mean-square-error-like reliability measure.",Constructing valid instrumental variables in generalized linear causal models from directed acyclic graphs,2021-02-16 13:09:15,Øyvind Hoveid,"http://arxiv.org/abs/2102.08056v1, http://arxiv.org/pdf/2102.08056v1",econ.EM
29085,em,"We propose a general framework for the specification testing of continuous
treatment effect models. We assume a general residual function, which includes
the average and quantile treatment effect models as special cases. The null
models are identified under the unconfoundedness condition and contain a
nonparametric weighting function. We propose a test statistic for the null
model in which the weighting function is estimated by solving an expanding set
of moment equations. We establish the asymptotic distributions of our test
statistic under the null hypothesis and under fixed and local alternatives. The
proposed test statistic is shown to be more efficient than that constructed
from the true weighting function and can detect local alternatives that deviate
from the null models at the rate of $O(N^{-1/2})$. A simulation method is
provided to approximate the null distribution of the test statistic.
Monte-Carlo simulations show that our test exhibits a satisfactory
finite-sample performance, and an application shows its practical value.",A Unified Framework for Specification Tests of Continuous Treatment Effect Models,2021-02-16 13:18:52,"Wei Huang, Oliver Linton, Zheng Zhang","http://arxiv.org/abs/2102.08063v2, http://arxiv.org/pdf/2102.08063v2",econ.EM
29086,em,"In light of the increasing interest to transform the fixed-route public
transit (FRT) services into on-demand transit (ODT) services, there exists a
strong need for a comprehensive evaluation of the effects of this shift on the
users. Such an analysis can help the municipalities and service providers to
design and operate more convenient, attractive, and sustainable transit
solutions. To understand the user preferences, we developed three hybrid choice
models: integrated choice and latent variable (ICLV), latent class (LC), and
latent class integrated choice and latent variable (LC-ICLV) models. We used
these models to analyze the public transit user's preferences in Belleville,
Ontario, Canada. Hybrid choice models were estimated using a rich dataset that
combined the actual level of service attributes obtained from Belleville's ODT
service and self-reported usage behaviour obtained from a revealed preference
survey of the ODT users. The latent class models divided the users into two
groups with different travel behaviour and preferences. The results showed that
the captive user's preference for ODT service was significantly affected by the
number of unassigned trips, in-vehicle time, and main travel mode before the
ODT service started. On the other hand, the non-captive user's service
preference was significantly affected by the Time Sensitivity and the Online
Service Satisfaction latent variables, as well as the performance of the ODT
service and trip purpose. This study attaches importance to improving the
reliability and performance of the ODT service and outlines directions for
reducing operational costs by updating the required fleet size and assigning
more vehicles for work-related trips.",On-Demand Transit User Preference Analysis using Hybrid Choice Models,2021-02-16 19:27:50,"Nael Alsaleh, Bilal Farooq, Yixue Zhang, Steven Farber","http://arxiv.org/abs/2102.08256v2, http://arxiv.org/pdf/2102.08256v2",econ.EM
29087,em,"This article discusses tests for nonlinear cointegration in the presence of
variance breaks. We build on cointegration test approaches under
heteroskedasticity (Cavaliere and Taylor, 2006, Journal of Time Series
Analysis) and for nonlinearity (Choi and Saikkonen, 2010, Econometric Theory)
to propose a bootstrap test and prove its consistency. A Monte Carlo study
shows the approach to have good finite sample properties. We provide an
empirical application to the environmental Kuznets curves (EKC), finding that
the cointegration test provides little evidence for the EKC hypothesis.
Additionally, we examine the nonlinear relation between the US money and the
interest rate, finding that our test does not reject the null of a smooth
transition cointegrating relation.",Testing for Nonlinear Cointegration under Heteroskedasticity,2021-02-17 18:14:19,"Christoph Hanck, Till Massing","http://arxiv.org/abs/2102.08809v2, http://arxiv.org/pdf/2102.08809v2",econ.EM
29088,em,"This paper provides a user's guide to the general theory of approximate
randomization tests developed in Canay, Romano, and Shaikh (2017) when
specialized to linear regressions with clustered data. An important feature of
the methodology is that it applies to settings in which the number of clusters
is small -- even as small as five. We provide a step-by-step algorithmic
description of how to implement the test and construct confidence intervals for
the parameter of interest. In doing so, we additionally present three novel
results concerning the methodology: we show that the method admits an
equivalent implementation based on weighted scores; we show the test and
confidence intervals are invariant to whether the test statistic is studentized
or not; and we prove convexity of the confidence intervals for scalar
parameters. We also articulate the main requirements underlying the test,
emphasizing in particular common pitfalls that researchers may encounter.
Finally, we illustrate the use of the methodology with two applications that
further illuminate these points. The companion {\tt R} and {\tt Stata} packages
facilitate the implementation of the methodology and the replication of the
empirical exercises.",On the implementation of Approximate Randomization Tests in Linear Models with a Small Number of Clusters,2021-02-18 01:32:52,"Yong Cai, Ivan A. Canay, Deborah Kim, Azeem M. Shaikh","http://arxiv.org/abs/2102.09058v4, http://arxiv.org/pdf/2102.09058v4",econ.EM
29089,em,"We propose a novel structural estimation framework in which we train a
surrogate of an economic model with deep neural networks. Our methodology
alleviates the curse of dimensionality and speeds up the evaluation and
parameter estimation by orders of magnitudes, which significantly enhances
one's ability to conduct analyses that require frequent parameter
re-estimation. As an empirical application, we compare two popular option
pricing models (the Heston and the Bates model with double-exponential jumps)
against a non-parametric random forest model. We document that: a) the Bates
model produces better out-of-sample pricing on average, but both structural
models fail to outperform random forest for large areas of the volatility
surface; b) random forest is more competitive at short horizons (e.g., 1-day),
for short-dated options (with less than 7 days to maturity), and on days with
poor liquidity; c) both structural models outperform random forest in
out-of-sample delta hedging; d) the Heston model's relative performance has
deteriorated significantly after the 2008 financial crisis.",Deep Structural Estimation: With an Application to Option Pricing,2021-02-18 11:15:47,"Hui Chen, Antoine Didisheim, Simon Scheidegger","http://arxiv.org/abs/2102.09209v1, http://arxiv.org/pdf/2102.09209v1",econ.EM
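A minimal sketch of the surrogate idea described in the abstract above: fit a flexible approximator to input-output pairs of an expensive structural model, then evaluate the cheap surrogate in its place. The toy model, the parameter ranges, and the use of scikit-learn's MLPRegressor (in place of the paper's deep neural networks) are illustrative assumptions.

```python
# Minimal sketch of a surrogate for an expensive structural model; the toy model,
# parameter ranges, and MLPRegressor are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)

def structural_model(theta):
    """Toy stand-in for an expensive structural model mapping parameters to output."""
    a, b = theta[..., 0], theta[..., 1]
    return np.exp(-a) * np.sin(3 * b) + a * b

theta_draws = rng.uniform(0, 1, size=(5000, 2))   # sampled parameter vectors
outputs = structural_model(theta_draws)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=0).fit(theta_draws, outputs)

test = rng.uniform(0, 1, size=(5, 2))
print(np.column_stack([structural_model(test), surrogate.predict(test)]))
```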
29090,em,"We propose a method for constructing confidence intervals that account for
many forms of spatial correlation. The interval has the familiar `estimator
plus and minus a standard error times a critical value' form, but we propose
new methods for constructing the standard error and the critical value. The
standard error is constructed using population principal components from a
given `worst-case' spatial covariance model. The critical value is chosen to
ensure coverage in a benchmark parametric model for the spatial correlations.
The method is shown to control coverage in large samples whenever the spatial
correlation is weak, i.e., with average pairwise correlations that vanish as
the sample size gets large. We also provide results on correct coverage in a
restricted but nonparametric class of strong spatial correlations, as well as
on the efficiency of the method. In a design calibrated to match economic
activity in U.S. states the method outperforms previous suggestions for
spatially robust inference about the population mean.",Spatial Correlation Robust Inference,2021-02-18 17:04:43,"Ulrich K. Müller, Mark W. Watson","http://arxiv.org/abs/2102.09353v1, http://arxiv.org/pdf/2102.09353v1",econ.EM
29091,em,"This paper aims to provide reliable estimates for the COVID-19 contact rate
of a Susceptible-Infected-Recovered (SIR) model. From observable data on
confirmed, recovered, and deceased cases, a noisy measurement for the contact
rate can be constructed. To filter out measurement errors and seasonality, a
novel unobserved components (UC) model is set up. It specifies the log contact
rate as a latent, fractionally integrated process of unknown integration order.
The fractional specification reflects key characteristics of aggregate social
behavior such as strong persistence and gradual adjustments to new information.
A computationally simple modification of the Kalman filter is introduced and is
termed the fractional filter. It allows one to estimate UC models with richer
long-run dynamics, and provides a closed-form expression for the prediction
error of UC models. Based on the latter, a conditional-sum-of-squares (CSS)
estimator for the model parameters is set up that is shown to be consistent and
asymptotically normally distributed. The resulting contact rate estimates for
several countries are well in line with the chronology of the pandemic, and
allow one to identify different contact regimes generated by policy interventions.
As the fractional filter is shown to provide precise contact rate estimates at
the end of the sample, it bears great potential for monitoring the pandemic in
real time.",Monitoring the pandemic: A fractional filter for the COVID-19 contact rate,2021-02-19 20:55:45,Tobias Hartl,"http://arxiv.org/abs/2102.10067v1, http://arxiv.org/pdf/2102.10067v1",econ.EM
29092,em,"A novel approach to price indices, leading to an innovative solution in both
a multi-period or a multilateral framework, is presented. The index turns out
to be the generalized least squares solution of a regression model linking
values and quantities of the commodities. The index reference basket, which is
the union of the intersections of the baskets of all country/period taken in
pair, has a coverage broader than extant indices. The properties of the index
are investigated and updating formulas established. Applications to both real
and simulated data provide evidence of the better index performance in
comparison with extant alternatives.",A Novel Multi-Period and Multilateral Price Index,2021-02-21 09:44:18,"Consuelo Rubina Nava, Maria Grazia Zoia","http://arxiv.org/abs/2102.10528v1, http://arxiv.org/pdf/2102.10528v1",econ.EM
29094,em,"Here, we have analysed a GARCH(1,1) model with the aim of fitting higher order
moments for different companies' stock prices. When we assume a gaussian
conditional distribution, we fail to capture any empirical data when fitting
the first three even moments of financial time series. We show instead that a
double gaussian conditional probability distribution better captures the higher
order moments of the data. To demonstrate this point, we construct regions
(phase diagrams), in the fourth and sixth order standardised moment space,
where a GARCH(1,1) model can be used to fit these moments and compare them with
the corresponding moments from empirical data for different sectors of the
economy. We found that the ability of the GARCH model with a double gaussian
conditional distribution to fit higher order moments is dictated by the time
window our data spans. We can only fit data collected within specific time
window lengths and only with certain parameters of the conditional double
gaussian distribution. In order to incorporate the non-stationarity of
financial series, we assume that the parameters of the GARCH model have time
dependence.",Non-stationary GARCH modelling for fitting higher order moments of financial series within moving time windows,2021-02-23 14:05:23,"Luke De Clerk, Sergey Savel'ev","http://arxiv.org/abs/2102.11627v4, http://arxiv.org/pdf/2102.11627v4",econ.EM
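A minimal sketch of the mechanics discussed in the abstract above: simulate a GARCH(1,1) process whose innovations follow a two-component ("double") gaussian mixture and inspect the standardised fourth and sixth moments. All parameter values are illustrative assumptions, not estimates from the paper.

```python
# Minimal sketch: GARCH(1,1) with a two-component gaussian mixture for the
# conditional distribution; all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
T = 100_000
omega, a, b = 0.05, 0.10, 0.85            # GARCH(1,1) parameters (assumed)
p, s1, s2 = 0.7, 0.6, 1.6                 # mixture weight and component std devs

# innovations from a zero-mean mixture of two gaussians, rescaled to unit variance
comp = rng.random(T) < p
z = np.where(comp, rng.normal(0.0, s1, T), rng.normal(0.0, s2, T))
z /= np.sqrt(p * s1**2 + (1 - p) * s2**2)

h = np.empty(T)
r = np.empty(T)
h[0] = omega / (1 - a - b)                # unconditional variance as starting value
for t in range(T):
    if t > 0:
        h[t] = omega + a * r[t - 1] ** 2 + b * h[t - 1]
    r[t] = np.sqrt(h[t]) * z[t]

std = (r - r.mean()) / r.std()
print("4th standardised moment:", np.mean(std**4))
print("6th standardised moment:", np.mean(std**6))
```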
29095,em,"We propose a computationally feasible way of deriving the identified features
of models with multiple equilibria in pure or mixed strategies. It is shown
that in the case of Shapley regular normal form games, the identified set is
characterized by the inclusion of the true data distribution within the core of
a Choquet capacity, which is interpreted as the generalized likelihood of the
model. In turn, this inclusion is characterized by a finite set of inequalities
and efficient and easily implementable combinatorial methods are described to
check them. In all normal form games, the identified set is characterized in
terms of the value of a submodular or convex optimization program. Efficient
algorithms are then given and compared to check inclusion of a parameter in
this identified set. The latter are illustrated with family bargaining games
and oligopoly entry games.",Set Identification in Models with Multiple Equilibria,2021-02-24 15:20:11,"Alfred Galichon, Marc Henry","http://dx.doi.org/10.1093/restud/rdr008, http://arxiv.org/abs/2102.12249v1, http://arxiv.org/pdf/2102.12249v1",econ.EM
29096,em,"We provide a test for the specification of a structural model without
identifying assumptions. We show the equivalence of several natural
formulations of correct specification, which we take as our null hypothesis.
From a natural empirical version of the latter, we derive a Kolmogorov-Smirnov
statistic for Choquet capacity functionals, which we use to construct our test.
We derive the limiting distribution of our test statistic under the null, and
show that our test is consistent against certain classes of alternatives. When
the model is given in parametric form, the test can be inverted to yield
confidence regions for the identified parameter set. The approach can be
applied to the estimation of models with sample selection, censored observables
and to games with multiple equilibria.",Inference in Incomplete Models,2021-02-24 15:39:52,"Alfred Galichon, Marc Henry","http://arxiv.org/abs/2102.12257v1, http://arxiv.org/pdf/2102.12257v1",econ.EM
29097,em,"This paper estimates the break point for large-dimensional factor models with
a single structural break in factor loadings at a common unknown date. First,
we propose a quasi-maximum likelihood (QML) estimator of the change point based
on the second moments of factors, which are estimated by principal component
analysis. We show that the QML estimator performs consistently when the
covariance matrix of the pre- or post-break factor loading, or both, is
singular. When the loading matrix undergoes a rotational type of change while
the number of factors remains constant over time, the QML estimator incurs a
stochastically bounded estimation error. In this case, we establish an
asymptotic distribution of the QML estimator. The simulation results validate
the feasibility of this estimator when used in finite samples. In addition, we
demonstrate empirical applications of the proposed method by applying it to
estimate the break points in a U.S. macroeconomic dataset and a stock return
dataset.",Quasi-maximum likelihood estimation of break point in high-dimensional factor models,2021-02-25 06:43:18,"Jiangtao Duan, Jushan Bai, Xu Han","http://arxiv.org/abs/2102.12666v3, http://arxiv.org/pdf/2102.12666v3",econ.EM
29098,em,"We propose a new control function (CF) method to estimate a binary response
model in a triangular system with multiple unobserved heterogeneities. The CFs
are the expected values of the heterogeneity terms in the reduced form
equations conditional on the histories of the endogenous and the exogenous
variables. The method requires weaker restrictions compared to CF methods with
similar imposed structures. If the support of endogenous regressors is large,
average partial effects are point-identified even when instruments are
discrete. Bounds are provided when the support assumption is violated. An
application and Monte Carlo experiments compare several alternative methods
with ours.",A Control Function Approach to Estimate Panel Data Binary Response Model,2021-02-25 18:26:41,Amaresh K Tiwari,"http://dx.doi.org/10.1080/07474938.2021.1983328, http://arxiv.org/abs/2102.12927v2, http://arxiv.org/pdf/2102.12927v2",econ.EM
29099,em,"This paper proposes an empirical method to implement the recentered influence
function (RIF) regression of Firpo, Fortin and Lemieux (2009), a relevant
method to study the effect of covariates on many statistics beyond the mean. In
empirically relevant situations where the influence function is not available
or difficult to compute, we suggest to use the \emph{sensitivity curve} (Tukey,
1977) as a feasible alternative. This may be computationally cumbersome when
the sample size is large. The relevance of the proposed strategy derives from
the fact that, under general conditions, the sensitivity curve converges in
probability to the influence function. To save computational time, we propose
using a cubic-spline non-parametric method on a random subsample and then
interpolating to the remaining cases where it was not computed. Monte
Carlo simulations show good finite sample properties. We illustrate the
proposed estimator with an application to the polarization index of Duclos,
Esteban and Ray (2004).",RIF Regression via Sensitivity Curves,2021-12-02 20:24:43,"Javier Alejo, Gabriel Montes-Rojas, Walter Sosa-Escudero","http://arxiv.org/abs/2112.01435v1, http://arxiv.org/pdf/2112.01435v1",econ.EM
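A minimal sketch of the sensitivity curve (Tukey, 1977) as a feasible stand-in for the influence function, computed here for the sample median of a synthetic sample; the statistic and the data are illustrative choices, not the polarization index used in the paper's application.

```python
# Minimal sketch of the sensitivity curve as a stand-in for the influence
# function, here for the sample median of synthetic data.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
n = x.size

def stat(sample):
    return np.median(sample)

base = stat(x[:-1])                       # statistic on the first n - 1 observations
# SC_n(x_i) = n * ( T(x_1, ..., x_{n-1}, x_i) - T(x_1, ..., x_{n-1}) )
sc = np.array([n * (stat(np.append(x[:-1], xi)) - base) for xi in x])

# sc can now replace the recentered influence function as the dependent variable
# in a RIF-style regression on covariates.
print("mean of the sensitivity curve:", sc.mean())
```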
29119,em,"In less than 50 years, startups have become a major component of innovation
and economic growth. An important feature of the startup phenomenon has been
the wealth created through equity in startups for all stakeholders. These
include the startup founders, the investors, and also the employees through the
stock-option mechanism and universities through licenses of intellectual
property. In the employee group, the allocation to important managers like the
chief executive, vice-presidents and other officers, and independent board
members is also analyzed. This report analyzes how equity was allocated in more
than 400 startups, most of which had filed for an initial public offering. The
author has the ambition of informing a general audience about best practice in
equity split, in particular in Silicon Valley, the central place for startup
innovation.",Equity in Startups,2017-11-02 12:33:44,Hervé Lebret,"http://arxiv.org/abs/1711.00661v1, http://arxiv.org/pdf/1711.00661v1",econ.EM
29100,em,"We study estimation of factor models in a fixed-T panel data setting and
significantly relax the common correlated effects (CCE) assumptions pioneered
by Pesaran (2006) and used in dozens of papers since. In the simplest case, we
model the unobserved factors as functions of the cross-sectional averages of
the explanatory variables and show that this is implied by Pesaran's
assumptions when the number of factors does not exceed the number of
explanatory variables. Our approach allows discrete explanatory variables and
flexible functional forms in the covariates. Plus, it extends to a framework
that easily incorporates general functions of cross-sectional moments, in
addition to heterogeneous intercepts and time trends. Our proposed estimators
include Pesaran's pooled correlated common effects (CCEP) estimator as a
special case. We also show that in the presence of heterogeneous slopes our
estimator is consistent under assumptions much weaker than those previously
used. We derive the fixed-T asymptotic normality of a general estimator and
show how to adjust for estimation of the population moments in the factor
loading equation.",Simple Alternatives to the Common Correlated Effects Model,2021-12-02 21:37:52,"Nicholas L. Brown, Peter Schmidt, Jeffrey M. Wooldridge","http://dx.doi.org/10.13140/RG.2.2.12655.76969/1, http://arxiv.org/abs/2112.01486v1, http://arxiv.org/pdf/2112.01486v1",econ.EM
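A minimal sketch of the core idea in the abstract above, namely controlling for cross-sectional averages of the regressors in a pooled panel regression. The data generating process is synthetic, and using a single common coefficient on the averages (rather than unit-specific coefficients, as in CCEP) is a simplifying assumption made to keep the sketch short; it is not the paper's estimator.

```python
# Minimal sketch: pooled regression augmented with cross-sectional averages of
# the regressor; DGP and common-coefficient augmentation are assumptions.
import numpy as np

rng = np.random.default_rng(4)
N, T = 300, 6
f = rng.normal(size=T)                    # common factor
lam = rng.normal(size=N)                  # heterogeneous factor loadings
x = 0.8 * f[None, :] + rng.normal(size=(N, T))
y = 1.0 * x + lam[:, None] * f[None, :] + rng.normal(size=(N, T))

xbar = x.mean(axis=0)                     # cross-sectional averages, one per period
Z = np.column_stack([np.ones(N * T), x.reshape(-1), np.tile(xbar, N)])
beta = np.linalg.lstsq(Z, y.reshape(-1), rcond=None)[0]
print("estimated slope on x:", beta[1])   # should be near the true value of 1.0
```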
29101,em,"Until recently, there has been a consensus that clinicians should condition
patient risk assessments on all observed patient covariates with predictive
power. The broad idea is that knowing more about patients enables more accurate
predictions of their health risks and, hence, better clinical decisions. This
consensus has recently unraveled with respect to a specific covariate, namely
race. There have been increasing calls for race-free risk assessment, arguing
that using race to predict patient outcomes contributes to racial disparities
and inequities in health care. Writers calling for race-free risk assessment
have not studied how it would affect the quality of clinical decisions.
Considering the matter from the patient-centered perspective of medical
economics yields a disturbing conclusion: Race-free risk assessment would harm
patients of all races.",Patient-Centered Appraisal of Race-Free Clinical Risk Assessment,2021-12-03 02:37:07,Charles F. Manski,"http://arxiv.org/abs/2112.01639v2, http://arxiv.org/pdf/2112.01639v2",econ.EM
29102,em,"We develop a non-parametric multivariate time series model that remains
agnostic on the precise relationship between a (possibly) large set of
macroeconomic time series and their lagged values. The main building block of
our model is a Gaussian process prior on the functional relationship that
determines the conditional mean of the model, hence the name of Gaussian
process vector autoregression (GP-VAR). A flexible stochastic volatility
specification is used to provide additional flexibility and control for
heteroskedasticity. Markov chain Monte Carlo (MCMC) estimation is carried out
through an efficient and scalable algorithm which can handle large models. The
GP-VAR is illustrated by means of simulated data and in a forecasting exercise
with US data. Moreover, we use the GP-VAR to analyze the effects of
macroeconomic uncertainty, with a particular emphasis on time variation and
asymmetries in the transmission mechanisms.",Gaussian Process Vector Autoregressions and Macroeconomic Uncertainty,2021-12-03 19:16:10,"Niko Hauzenberger, Florian Huber, Massimiliano Marcellino, Nico Petz","http://arxiv.org/abs/2112.01995v3, http://arxiv.org/pdf/2112.01995v3",econ.EM
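A minimal single-equation sketch of the GP-VAR building block described above: place a Gaussian process prior on the unknown function mapping lagged values to the conditional mean. The scikit-learn estimator below is a maximum-marginal-likelihood stand-in for the paper's MCMC-based, stochastic-volatility model, and the nonlinear AR(1) data are synthetic.

```python
# Minimal sketch: Gaussian process regression of a series on its own lag;
# kernel, sample size, and the nonlinear AR(1) DGP are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
T = 300
y = np.zeros(T)
for t in range(1, T):                     # nonlinear AR(1) data generating process
    y[t] = 0.9 * np.tanh(y[t - 1]) + 0.3 * rng.normal()

X, target = y[:-1].reshape(-1, 1), y[1:]  # lagged value -> current value
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:250], target[:250])

pred, sd = gp.predict(X[250:], return_std=True)
print("out-of-sample RMSE:", np.sqrt(np.mean((pred - target[250:]) ** 2)))
print("average predictive std:", sd.mean())
```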
29103,em,"Despite the widespread use of graphs in empirical research, little is known
about readers' ability to process the statistical information they are meant to
convey (""visual inference""). We study visual inference within the context of
regression discontinuity (RD) designs by measuring how accurately readers
identify discontinuities in graphs produced from data generating processes
calibrated on 11 published papers from leading economics journals. First, we
assess the effects of different graphical representation methods on visual
inference using randomized experiments. We find that bin widths and fit lines
have the largest impacts on whether participants correctly perceive the
presence or absence of a discontinuity. Our experimental results allow us to
make evidence-based recommendations to practitioners, and we suggest using
small bins with no fit lines as a starting point to construct RD graphs.
Second, we compare visual inference on graphs constructed using our preferred
method with widely used econometric inference procedures. We find that visual
inference achieves similar or lower type I error (false positive) rates and
complements econometric inference.",Visual Inference and Graphical Representation in Regression Discontinuity Designs,2021-12-06 18:02:14,"Christina Korting, Carl Lieberman, Jordan Matsudaira, Zhuan Pei, Yi Shen","http://arxiv.org/abs/2112.03096v2, http://arxiv.org/pdf/2112.03096v2",econ.EM
29104,em,"The `paradox of progress' is an empirical regularity that associates more
education with larger income inequality. Two driving and competing factors
behind this phenomenon are the convexity of the `Mincer equation' (that links
wages and education) and the heterogeneity in its returns, as captured by
quantile regressions. We propose a joint least-squares and quantile regression
statistical framework to derive a decomposition in order to evaluate the
relative contribution of each explanation. The estimators are based on the
`functional derivative' approach. We apply the proposed decomposition strategy
to the case of Argentina 1992 to 2015.",A decomposition method to evaluate the `paradox of progress' with evidence for Argentina,2021-12-07 20:20:26,"Javier Alejo, Leonardo Gasparini, Gabriel Montes-Rojas, Walter Sosa-Escudero","http://arxiv.org/abs/2112.03836v1, http://arxiv.org/pdf/2112.03836v1",econ.EM
29105,em,"Linear regressions with period and group fixed effects are widely used to
estimate policies' effects: 26 of the 100 most cited papers published by the
American Economic Review from 2015 to 2019 estimate such regressions. It has
recently been shown that those regressions may produce misleading estimates, if
the policy's effect is heterogeneous between groups or over time, as is often
the case. This survey reviews a fast-growing literature that documents this
issue, and that proposes alternative estimators robust to heterogeneous
effects. We use those alternative estimators to revisit Wolfers (2006).",Two-Way Fixed Effects and Differences-in-Differences with Heterogeneous Treatment Effects: A Survey,2021-12-08 23:14:26,"Clément de Chaisemartin, Xavier D'Haultfœuille","http://arxiv.org/abs/2112.04565v6, http://arxiv.org/pdf/2112.04565v6",econ.EM
29126,em,"Some empirical results are more likely to be published than others. Such
selective publication leads to biased estimates and distorted inference. This
paper proposes two approaches for identifying the conditional probability of
publication as a function of a study's results, the first based on systematic
replication studies and the second based on meta-studies. For known conditional
publication probabilities, we propose median-unbiased estimators and associated
confidence sets that correct for selective publication. We apply our methods to
recent large-scale replication studies in experimental economics and
psychology, and to meta-studies of the effects of minimum wages and de-worming
programs.",Identification of and correction for publication bias,2017-11-28 22:45:36,"Isaiah Andrews, Maximilian Kasy","http://arxiv.org/abs/1711.10527v1, http://arxiv.org/pdf/1711.10527v1",econ.EM
29106,em,"I suggest an enhancement of the procedure of Chiong, Hsieh, and Shum (2017)
for calculating bounds on counterfactual demand in semiparametric discrete
choice models. Their algorithm relies on a system of inequalities indexed by
cycles of a large number $M$ of observed markets and hence seems to require
computationally infeasible enumeration of all such cycles. I show that such
enumeration is unnecessary because solving the ""fully efficient"" inequality
system exploiting cycles of all possible lengths $K=1,\dots,M$ can be reduced
to finding the length of the shortest path between every pair of vertices in a
complete bidirected weighted graph on $M$ vertices. The latter problem can be
solved using the Floyd--Warshall algorithm with computational complexity
$O\left(M^3\right)$, which takes only seconds to run even for thousands of
markets. Monte Carlo simulations illustrate the efficiency gain from using
cycles of all lengths, which turns out to be positive, but small.","Efficient counterfactual estimation in semiparametric discrete choice models: a note on Chiong, Hsieh, and Shum (2017)",2021-12-09 03:49:56,Grigory Franguridi,"http://arxiv.org/abs/2112.04637v1, http://arxiv.org/pdf/2112.04637v1",econ.EM
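A minimal sketch of the computational step described in the abstract above: run Floyd-Warshall on a complete weighted digraph over M markets and flag negative cycles, which correspond to violated cyclic inequalities. The weight matrix here is synthetic rather than derived from observed market shares.

```python
# Minimal sketch of the all-pairs shortest path step: Floyd-Warshall on a
# complete weighted digraph over M markets; the weight matrix is synthetic.
import numpy as np

rng = np.random.default_rng(6)
M = 50
w = rng.normal(size=(M, M))               # edge weights (synthetic)
np.fill_diagonal(w, 0.0)

dist = w.copy()
for k in range(M):                        # Floyd-Warshall, O(M^3)
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])

# a negative diagonal entry signals a negative cycle, i.e. a violated cyclic inequality
print("negative cycle detected:", bool((np.diag(dist) < -1e-12).any()))
```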
29107,em,"This study addresses house price prediction model selection in Tehran
City based on the area between the Lorenz curve (LC) and the concentration
curve (CC) of the predicted price, using 206,556 observed transactions over the
period from March 21, 2018, to February 19, 2021. Several methods, such as
generalized linear models (GLM), recursive partitioning and regression trees
(RPART), random forest (RF) regression models, and neural network (NN) models,
were examined for house price prediction. A randomly chosen 90% of the data
samples was used to estimate the parameters of the pricing models, and the
remaining 10% was used to test prediction accuracy. Results showed that the
area between the LC and CC curves (known as the ABC criterion) of real and
predicted prices in the test sample was smallest for the random forest
regression model. The
comparison of the calculated ABC criteria leads us to conclude that the
nonlinear regression models such as RF regression models give an accurate
prediction of house prices in Tehran City.",Housing Price Prediction Model Selection Based on Lorenz and Concentration Curves: Empirical Evidence from Tehran Housing Market,2021-12-12 12:44:28,Mohammad Mirbagherijam,"http://arxiv.org/abs/2112.06192v1, http://arxiv.org/pdf/2112.06192v1",econ.EM
29108,em,"A new Stata command, ldvqreg, is developed to estimate quantile regression
models for the cases of censored (with lower and/or upper censoring) and binary
dependent variables. The estimators are implemented using a smoothed version of
the quantile regression objective function. Simulation exercises show that it
correctly estimates the parameters and it should be implemented instead of the
available quantile regression methods when censoring is present. An empirical
application to women's labor supply in Uruguay is considered.",Quantile Regression under Limited Dependent Variable,2021-12-13 20:33:54,"Javier Alejo, Gabriel Montes-Rojas","http://arxiv.org/abs/2112.06822v1, http://arxiv.org/pdf/2112.06822v1",econ.EM
29109,em,"This article presents identification results for the marginal treatment
effect (MTE) when there is sample selection. We show that the MTE is partially
identified for individuals who are always observed regardless of treatment, and
derive uniformly sharp bounds on this parameter under three increasingly
restrictive sets of assumptions. The first result imposes standard MTE
assumptions with an unrestricted sample selection mechanism. The second set of
conditions imposes monotonicity of the sample selection variable with respect
to treatment, considerably shrinking the identified set. Finally, we
incorporate a stochastic dominance assumption which tightens the lower bound
for the MTE. Our analysis extends to discrete instruments. The results rely on
a mixture reformulation of the problem where the mixture weights are
identified, extending Lee's (2009) trimming procedure to the MTE context. We
propose estimators for the bounds derived and use data made available by Deb,
Munking and Trivedi (2006) to empirically illustrate the usefulness of our
approach.",Identifying Marginal Treatment Effects in the Presence of Sample Selection,2021-12-14 00:08:49,"Otávio Bartalotti, Désiré Kédagni, Vitor Possebom","http://arxiv.org/abs/2112.07014v1, http://arxiv.org/pdf/2112.07014v1",econ.EM
29110,em,"We develop a novel test of the instrumental variable identifying assumptions
for heterogeneous treatment effect models with conditioning covariates. We
assume semiparametric dependence between potential outcomes and conditioning
covariates. This allows us to obtain testable equality and inequality
restrictions among the subdensities of estimable partial residuals. We propose
jointly testing these restrictions. To improve power, we introduce
distillation, where a trimmed sample is used to test the inequality
restrictions. In Monte Carlo exercises we find gains in finite sample power
from testing restrictions jointly and distillation. We apply our test procedure
to three instruments and reject the null for one.",Testing Instrument Validity with Covariates,2021-12-15 16:06:22,"Thomas Carr, Toru Kitagawa","http://arxiv.org/abs/2112.08092v2, http://arxiv.org/pdf/2112.08092v2",econ.EM
29111,em,"This paper examines the local linear regression (LLR) estimate of the
conditional distribution function $F(y|x)$. We derive three uniform convergence
results: the uniform bias expansion, the uniform convergence rate, and the
uniform asymptotic linear representation. The uniformity in the above results
is with respect to both $x$ and $y$ and therefore has not previously been
addressed in the literature on local polynomial regression. Such uniform
convergence results are especially useful when the conditional distribution
estimator is the first stage of a semiparametric estimator. We demonstrate the
usefulness of these uniform results with two examples: the stochastic
equicontinuity condition in $y$, and the estimation of the integrated
conditional distribution function.",Uniform Convergence Results for the Local Linear Regression Estimation of the Conditional Distribution,2021-12-16 04:04:23,Haitian Xie,"http://arxiv.org/abs/2112.08546v2, http://arxiv.org/pdf/2112.08546v2",econ.EM
29112,em,"We consider a two-stage estimation method for linear regression that uses the
lasso in Tibshirani (1996) to screen variables and re-estimate the coefficients
using the least-squares boosting method in Friedman (2001) on every set of
selected variables. Based on the large-scale simulation experiment in Hastie et
al. (2020), the performance of lassoed boosting is found to be as competitive
as the relaxed lasso in Meinshausen (2007) and can yield a sparser model under
certain scenarios. An application to predict equity returns also shows that
lassoed boosting can give the smallest mean square prediction error among all
methods under consideration.",Lassoed Boosting and Linear Prediction in Equities Market,2021-12-16 18:00:37,Xiao Huang,"http://arxiv.org/abs/2112.08934v2, http://arxiv.org/pdf/2112.08934v2",econ.EM
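As a concrete illustration of the two-stage idea in the abstract above, lasso screening followed by least-squares boosting on the selected variables, here is a minimal sketch. The componentwise least-squares boosting variant, the synthetic data, and the tuning values (steps, nu, cv) are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LassoCV

def ls_boost(X, y, steps=200, nu=0.1):
    # Componentwise least-squares boosting: at each step, regress the current
    # residual on the single best-fitting column and move a small step (nu)
    # toward that univariate fit.
    n, p = X.shape
    coef = np.zeros(p)
    intercept = y.mean()
    resid = y - intercept
    for _ in range(steps):
        b = X.T @ resid / (X ** 2).sum(axis=0)           # univariate LS fits
        sse = ((resid[:, None] - X * b) ** 2).sum(axis=0)
        j = int(np.argmin(sse))                          # best column this step
        coef[j] += nu * b[j]
        resid -= nu * b[j] * X[:, j]
    return intercept, coef

rng = np.random.default_rng(0)
n, p = 500, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 1.0                       # sparse toy truth
y = X @ beta + rng.standard_normal(n)

# Stage 1: lasso screening (Tibshirani, 1996)
selected = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
# Stage 2: least-squares boosting on the screened columns only
a, b_sel = ls_boost(X[:, selected], y)
print("selected columns:", selected)
print("in-sample MSE:", np.mean((y - a - X[:, selected] @ b_sel) ** 2))

A tree-based least-squares booster, for example scikit-learn's GradientBoostingRegressor with squared-error loss, could be swapped in for the componentwise learner without changing the two-stage structure.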
29113,em,"This paper studies the robustness of estimated policy effects to changes in
the distribution of covariates. Robustness to covariate shifts is important,
for example, when evaluating the external validity of quasi-experimental
results, which are often used as a benchmark for evidence-based policy-making.
I propose a novel scalar robustness metric. This metric measures the magnitude
of the smallest covariate shift needed to invalidate a claim on the policy
effect (for example, $ATE \geq 0$) supported by the quasi-experimental
evidence. My metric links the heterogeneity of policy effects and robustness in
a flexible, nonparametric way and does not require functional form assumptions.
I cast the estimation of the robustness metric as a de-biased GMM problem. This
approach guarantees a parametric convergence rate for the robustness metric
while allowing for machine learning-based estimators of policy effect
heterogeneity (for example, lasso, random forest, boosting, neural nets). I
apply my procedure to the Oregon Health Insurance experiment. I study the
robustness of policy effect estimates for health-care utilization and financial
strain outcomes, relative to a shift in the distribution of context-specific
covariates. Such covariates are likely to differ across US states, making
quantification of robustness an important exercise for adoption of the
insurance policy in states other than Oregon. I find that the effect on
outpatient visits is the most robust among the metrics of health-care
utilization considered.","Robustness, Heterogeneous Treatment Effects and Covariate Shifts",2021-12-17 02:53:42,Pietro Emilio Spini,"http://arxiv.org/abs/2112.09259v1, http://arxiv.org/pdf/2112.09259v1",econ.EM
29114,em,"Aims: To re-introduce the Heckman model as a valid empirical technique in
alcohol studies. Design: To estimate the determinants of problem drinking using
a Heckman and a two-part estimation model. Psychological and neuro-scientific
studies justify my underlying estimation assumptions and covariate exclusion
restrictions. Higher order tests checking for multicollinearity validate the
use of Heckman over the use of two-part estimation models. I discuss the
generalizability of the two models in applied research. Settings and
Participants: Two pooled national population surveys from 2016 and 2017 were
used: the Behavioral Risk Factor Surveillance Survey (BRFS), and the National
Survey of Drug Use and Health (NSDUH). Measurements: Participation in problem
drinking and meeting the criteria for problem drinking. Findings: Both U.S.
national surveys perform well with the Heckman model and pass all higher order
tests. The Heckman model corrects for selection bias and reveals the direction
of bias, whereas the two-part model does not. For example, in the two-part
model the coefficients on age are biased upward and those on unemployment are
biased downward, whereas the Heckman model shows no selection bias. Covariate
exclusion restrictions are sensitive to survey conditions and are contextually
generalizable. Conclusions: The Heckman model can be used for alcohol studies
(and smoking studies as well) if the underlying estimation specification passes
higher order tests for multicollinearity and the exclusion restrictions are
justified with integrity for the data used. Its use is merit-worthy because it
corrects for and reveals the direction and the magnitude of selection bias,
where the two-part model does not.",Heckman-Selection or Two-Part models for alcohol studies? Depends,2021-12-20 17:08:35,Reka Sundaram-Stukel,"http://arxiv.org/abs/2112.10542v2, http://arxiv.org/pdf/2112.10542v2",econ.EM
29115,em,"We study the Stigler model of citation flows among journals adapting the
pairwise comparison model of Bradley and Terry to do ranking and selection of
journal influence based on nonparametric empirical Bayes procedures.
Comparisons with several other rankings are made.",Ranking and Selection from Pairwise Comparisons: Empirical Bayes Methods for Citation Analysis,2021-12-21 12:46:29,"Jiaying Gu, Roger Koenker","http://arxiv.org/abs/2112.11064v1, http://arxiv.org/pdf/2112.11064v1",econ.EM
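For orientation, a standard statement of the Bradley-Terry building block for citation flows is sketched below; the convention that a citation from journal j to journal i counts as a win for i, and the notation c_ij and theta, are illustrative assumptions and do not reproduce the paper's empirical Bayes implementation.

\[
  \Pr(i \text{ beats } j) = \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j}},
  \qquad
  \ell(\theta) = \sum_{i \neq j} c_{ij}\,\Bigl[\theta_i - \log\bigl(e^{\theta_i} + e^{\theta_j}\bigr)\Bigr],
\]
where $c_{ij}$ counts citations flowing from journal $j$ to journal $i$; an empirical Bayes step then places an estimated prior on the merit parameters $\theta_i$ before ranking and selection.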
29116,em,"We ask if there are alternative contest models that minimize error or
information loss from misspecification and outperform the Pythagorean model.
This article aims to use simulated data to select the optimal expected win
percentage model among the choice of relevant alternatives. The choices include
the traditional Pythagorean model and the difference-form contest success
function (CSF). Method. We simulate 1,000 iterations of the 2014 MLB season for
the purpose of estimating and analyzing alternative models of expected win
percentage (team quality). We use the open-source, Strategic Baseball Simulator
and develop an AutoHotKey script that programmatically executes the SBS
application, chooses the correct settings for the 2014 season, enters a unique
ID for the simulation data file, and iterates these steps 1,000 times. We
estimate expected win percentage using the traditional Pythagorean model, as
well as the difference-form CSF model that is used in game theory and public
choice economics. Each model is estimated while accounting for fixed (team)
effects. We find that the difference-form CSF model outperforms the traditional
Pythagorean model in terms of explanatory power and in terms of
misspecification-based information loss as estimated by the Akaike Information
Criterion. Through parametric estimation, we further confirm that the simulator
yields realistic statistical outcomes. The simulation methodology offers the
advantage of greatly improved sample size. As the season is held constant, our
simulation-based statistical inference also allows for estimation and model
comparison without the (time series) issue of non-stationarity. The results
suggest that improved win (productivity) estimation can be achieved through
alternative CSF specifications.",An Analysis of an Alternative Pythagorean Expected Win Percentage Model: Applications Using Major League Baseball Team Quality Simulations,2021-12-30 01:08:24,"Justin Ehrlich, Christopher Boudreaux, James Boudreau, Shane Sanders","http://arxiv.org/abs/2112.14846v1, http://arxiv.org/pdf/2112.14846v1",econ.EM
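To make the two competing specifications in the abstract above concrete, a sketch of the functional forms is given below; RS and RA denote runs scored and runs allowed, the exponent gamma and the scale beta are estimated, and the notation (including the team fixed effects added in estimation) is an illustrative assumption rather than the authors' exact parameterization.

\[
  \widehat{WP}^{\text{Pyth}}_{it} = \frac{RS_{it}^{\gamma}}{RS_{it}^{\gamma} + RA_{it}^{\gamma}},
  \qquad
  \widehat{WP}^{\text{CSF}}_{it} = \frac{\exp\{\beta\,(RS_{it} - RA_{it})\}}{1 + \exp\{\beta\,(RS_{it} - RA_{it})\}}.
\]
The Pythagorean form depends on the ratio of runs scored to runs allowed, whereas the difference-form contest success function is logistic in their difference; the AIC comparison in the abstract asks which of these functional forms loses less information.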
29117,em,"In this paper we examine the relation between market returns and volatility
measures through machine learning methods in a high-frequency environment. We
implement a minute-by-minute rolling window intraday estimation method using
two nonlinear models: Long-Short-Term Memory (LSTM) neural networks and Random
Forests (RF). Our estimations show that the CBOE Volatility Index (VIX) is the
strongest candidate predictor for intraday market returns in our analysis,
specially when implemented through the LSTM model. This model also improves
significantly the performance of the lagged market return as predictive
variable. Finally, intraday RF estimation outputs indicate that there is no
performance improvement with this method, and it may even worsen the results in
some cases.",Modeling and Forecasting Intraday Market Returns: a Machine Learning Approach,2021-12-30 19:05:17,"Iuri H. Ferreira, Marcelo C. Medeiros","http://arxiv.org/abs/2112.15108v1, http://arxiv.org/pdf/2112.15108v1",econ.EM
29118,em,"Startups have become in less than 50 years a major component of innovation
and economic growth. Silicon Valley has been the place where the startup
phenomenon was the most obvious and Stanford University was a major component
of that success. Companies such as Google, Yahoo, Sun Microsystems, Cisco,
Hewlett Packard had very strong links with Stanford, but even these very famous
success stories cannot fully describe the richness and diversity of the
Stanford entrepreneurial activity. This report explores the dynamics of more
than 5000 companies founded by Stanford University alumni and staff, through
their value creation, their field of activities, their growth patterns and
more. The report also explores some features of the founders of these companies
such as their academic background or the number of years between their Stanford
experience and their company creation.",Startups and Stanford University,2017-11-02 11:14:26,Hervé Lebret,"http://arxiv.org/abs/1711.00644v1, http://arxiv.org/pdf/1711.00644v1",econ.EM
29120,em,"I propose a treatment selection model that introduces unobserved
heterogeneity in both choice sets and preferences to evaluate the average
effects of a program offer. I show how to exploit the model structure to define
parameters capturing these effects and then computationally characterize their
identified sets under instrumental variable variation in choice sets. I
illustrate these tools by analyzing the effects of providing an offer to the
Head Start preschool program using data from the Head Start Impact Study. I
find that such a policy affects a large number of children who take up the
offer, and that these children subsequently experience positive effects on test scores. These
effects arise from children who do not have any preschool as an outside option.
A cost-benefit analysis reveals that the earning benefits associated with the
test score gains can be large and outweigh the net costs associated with offer
take up.",Identifying the Effects of a Program Offer with an Application to Head Start,2017-11-06 20:55:59,Vishal Kamat,"http://arxiv.org/abs/1711.02048v6, http://arxiv.org/pdf/1711.02048v6",econ.EM
29121,em,"I study identification, estimation and inference for spillover effects in
experiments where units' outcomes may depend on the treatment assignments of
other units within a group. I show that the commonly-used reduced-form
linear-in-means regression identifies a weighted sum of spillover effects with
some negative weights, and that the difference in means between treated and
controls identifies a combination of direct and spillover effects entering with
different signs. I propose nonparametric estimators for average direct and
spillover effects that overcome these issues and are consistent and
asymptotically normal under a precise relationship between the number of
parameters of interest, the total sample size and the treatment assignment
mechanism. These findings are illustrated using data from a conditional cash
transfer program and with simulations. The empirical results reveal the
potential pitfalls of failing to flexibly account for spillover effects in
policy evaluation: the estimated difference in means and the reduced-form
linear-in-means coefficients are all close to zero and statistically
insignificant, whereas the nonparametric estimators I propose reveal large,
nonlinear and significant spillover effects.",Identification and Estimation of Spillover Effects in Randomized Experiments,2017-11-08 01:04:44,Gonzalo Vazquez-Bare,"http://arxiv.org/abs/1711.02745v8, http://arxiv.org/pdf/1711.02745v8",econ.EM
29122,em,"Futures market contracts with varying maturities are traded concurrently and
the speed at which they process information is of value in understanding the
pricing discovery process. Using price discovery measures, including Putnins
(2013) information leadership share and intraday data, we quantify the
proportional contribution of price discovery between nearby and deferred
contracts in the corn and live cattle futures markets. Price discovery is more
systematic in the corn than in the live cattle market. On average, nearby
contracts lead all deferred contracts in price discovery in the corn market,
but have a relatively less dominant role in the live cattle market. In both
markets, the nearby contract loses dominance when its relative volume share
dips below 50%, which occurs about 2-3 weeks before expiration in corn and 5-6
weeks before expiration in live cattle. Regression results indicate that the
share of price discovery is most closely linked to trading volume but is also
affected, to a far lesser degree, by time to expiration, backwardation, USDA
announcements and market crashes. The effects of these other factors vary
between the markets, which likely reflects differences in storability as well
as other market-related characteristics.",Measuring Price Discovery between Nearby and Deferred Contracts in Storable and Non-Storable Commodity Futures Markets,2017-11-09 21:12:05,"Zhepeng Hu, Mindy Mallory, Teresa Serra, Philip Garcia","http://arxiv.org/abs/1711.03506v1, http://arxiv.org/pdf/1711.03506v1",econ.EM
29123,em,"Economic complexity reflects the amount of knowledge that is embedded in the
productive structure of an economy. It resides on the premise of hidden
capabilities - fundamental endowments underlying the productive structure. In
general, measuring the capabilities behind economic complexity directly is
difficult, and indirect measures have been suggested which exploit the fact
that the presence of the capabilities is expressed in a country's mix of
products. We complement these studies by introducing a probabilistic framework
which leverages Bayesian non-parametric techniques to extract the dominant
features behind the comparative advantage in exported products. Based on
economic evidence and trade data, we place a restricted Indian Buffet Process
on the distribution of countries' capability endowment, appealing to a culinary
metaphor to model the process of capability acquisition. The approach comes
with a unique level of interpretability, as it produces a concise and
economically plausible description of the instantiated capabilities.",Economic Complexity Unfolded: Interpretable Model for the Productive Structure of Economies,2017-11-17 17:09:19,"Zoran Utkovski, Melanie F. Pradier, Viktor Stojkoski, Fernando Perez-Cruz, Ljupco Kocarev","http://dx.doi.org/10.1371/journal.pone.0200822, http://arxiv.org/abs/1711.07327v2, http://arxiv.org/pdf/1711.07327v2",econ.EM
29124,em,"This study briefly introduces the development of Shantou Special Economic
Zone under Reform and Opening-Up Policy from 1980 through 2016 with a focus on
policy making issues and its influences on local economy. This paper is divided
into two parts, 1980 to 1991, 1992 to 2016 in accordance with the separation of
the original Shantou District into three cities: Shantou, Chaozhou and Jieyang
in the end of 1991. This study analyzes the policy making issues in the
separation of the original Shantou District, the influences of the policy on
Shantou's economy after separation, the possibility of merging the three cities
into one big new economic district in the future and reasons that lead to the
stagnant development of Shantou in recent 20 years. This paper uses statistical
longitudinal analysis in analyzing economic problems with applications of
non-parametric statistics through generalized additive model and time series
forecasting methods. The paper is authored by Bowen Cai solely, who is the
graduate student in the PhD program of Applied and Computational Mathematics
and Statistics at the University of Notre Dame with concentration in big data
analysis.",The Research on the Stagnant Development of Shantou Special Economic Zone Under Reform and Opening-Up Policy,2017-11-24 09:34:15,Bowen Cai,"http://arxiv.org/abs/1711.08877v1, http://arxiv.org/pdf/1711.08877v1",econ.EM
29125,em,"This paper presents the identification of heterogeneous elasticities in the
Cobb-Douglas production function. The identification is constructive with
closed-form formulas for the elasticity with respect to each input for each
firm. We propose that the flexible input cost ratio plays the role of a control
function under ""non-collinear heterogeneity"" between elasticities with respect
to two flexible inputs. The ex ante flexible input cost share can be used to
identify the elasticities with respect to flexible inputs for each firm. The
elasticities with respect to labor and capital can be subsequently identified
for each firm under the timing assumption admitting the functional
independence.",Constructive Identification of Heterogeneous Elasticities in the Cobb-Douglas Production Function,2017-11-28 01:51:57,"Tong Li, Yuya Sasaki","http://arxiv.org/abs/1711.10031v1, http://arxiv.org/pdf/1711.10031v1",econ.EM
29127,em,"Research on growing American political polarization and antipathy primarily
studies public institutions and political processes, ignoring private effects
including strained family ties. Using anonymized smartphone-location data and
precinct-level voting, we show that Thanksgiving dinners attended by
opposing-party precinct residents were 30-50 minutes shorter than same-party
dinners. This decline from a mean of 257 minutes survives extensive spatial and
demographic controls. Dinner reductions in 2016 tripled for travelers from
media markets with heavy political advertising --- an effect not observed in
2015 --- implying a relationship to election-related behavior. Effects appear
asymmetric: while fewer Democratic-precinct residents traveled in 2016 than
2015, political differences shortened Thanksgiving dinners more among
Republican-precinct residents. Nationwide, 34 million person-hours of
cross-partisan Thanksgiving discourse were lost in 2016 to partisan effects.",The Effect of Partisanship and Political Advertising on Close Family Ties,2017-11-29 01:58:02,"M. Keith Chen, Ryne Rohla","http://dx.doi.org/10.1126/science.aaq1433, http://arxiv.org/abs/1711.10602v2, http://arxiv.org/pdf/1711.10602v2",econ.EM
29128,em,"The main purpose of this paper is to analyze threshold effects of official
development assistance (ODA) on economic growth in WAEMU zone countries. To
achieve this, the study is based on OECD and WDI data covering the period
1980-2015 and used Hansen's Panel Threshold Regression (PTR) model to
""bootstrap"" aid threshold above which its effectiveness is effective. The
evidence strongly supports the view that the relationship between aid and
economic growth is non-linear with a unique threshold which is 12.74% GDP.
Above this value, the marginal effect of aid is 0.69 points, ""all things being
equal to otherwise"". One of the main contribution of this paper is to show that
WAEMU countries need investments that could be covered by the foreign aid. This
later one should be considered just as a complementary resource. Thus, WEAMU
countries should continue to strengthen their efforts in internal resource
mobilization in order to fulfil this need.",Aide et Croissance dans les pays de l'Union Economique et Mon{é}taire Ouest Africaine (UEMOA) : retour sur une relation controvers{é}e,2018-04-13 16:07:11,Nimonka Bayale,"http://arxiv.org/abs/1805.00435v1, http://arxiv.org/pdf/1805.00435v1",econ.EM
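As a rough sketch of the Hansen-type panel threshold regression referenced above (the variable names, the control vector x and the error structure are illustrative assumptions, not the paper's exact specification), the growth equation with a single aid threshold gamma reads

\[
  growth_{it} = \mu_i + \beta_1\, aid_{it}\,\mathbf{1}\{aid_{it} \le \gamma\}
              + \beta_2\, aid_{it}\,\mathbf{1}\{aid_{it} > \gamma\}
              + \delta' x_{it} + \varepsilon_{it},
\]
where the threshold $\gamma$ is estimated by concentrated least squares and its presence is assessed with Hansen's bootstrap; the abstract's estimates correspond to $\hat\gamma \approx 12.74\%$ of GDP and a marginal effect of roughly $0.69$ above the threshold.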
29129,em,"In this paper, I endeavour to construct a new model, by extending the classic
exogenous economic growth model by including a measurement which tries to
explain and quantify the size of technological innovation ( A ) endogenously. I
do not agree technology is a ""constant"" exogenous variable, because it is
humans who create all technological innovations, and it depends on how much
human and physical capital is allocated for its research. I inspect several
possible approaches to do this, and then I test my model both against sample
and real world evidence data. I call this method ""dynamic"" because it tries to
model the details in resource allocations between research, labor and capital,
by affecting each other interactively. In the end, I point out which is the new
residual and the parts of the economic growth model which can be further
improved.",Endogenous growth - A dynamic technology augmentation of the Solow model,2018-05-02 11:23:18,Murad Kasim,"http://arxiv.org/abs/1805.00668v1, http://arxiv.org/pdf/1805.00668v1",econ.EM
29130,em,"This paper studies the identification and estimation of the optimal linear
approximation of a structural regression function. The parameter in the linear
approximation is called the Optimal Linear Instrumental Variables Approximation
(OLIVA). This paper shows that a necessary condition for standard inference on
the OLIVA is also sufficient for the existence of an IV estimand in a linear
model. The instrument in the IV estimand is unknown and may not be identified.
A Two-Step IV (TSIV) estimator based on Tikhonov regularization is proposed,
which can be implemented by standard regression routines. We establish the
asymptotic normality of the TSIV estimator assuming neither completeness nor
identification of the instrument. As an important application of our analysis,
we robustify the classical Hausman test for exogeneity against misspecification
of the linear structural model. We also discuss extensions to weighted least
squares criteria. Monte Carlo simulations suggest an excellent finite sample
performance for the proposed inferences. Finally, in an empirical application
estimating the elasticity of intertemporal substitution (EIS) with US data, we
obtain TSIV estimates that are much larger than their standard IV counterparts,
with our robust Hausman test failing to reject the null hypothesis of
exogeneity of real interest rates.",Optimal Linear Instrumental Variables Approximations,2018-05-08 23:44:27,"Juan Carlos Escanciano, Wei Li","http://arxiv.org/abs/1805.03275v3, http://arxiv.org/pdf/1805.03275v3",econ.EM
29131,em,"We study the identification and estimation of structural parameters in
dynamic panel data logit models where decisions are forward-looking and the
joint distribution of unobserved heterogeneity and observable state variables
is nonparametric, i.e., fixed-effects model. We consider models with two
endogenous state variables: the lagged decision variable, and the time duration
in the last choice. This class of models includes as particular cases important
economic applications such as models of market entry-exit, occupational choice,
machine replacement, inventory and investment decisions, or dynamic demand of
differentiated products. The identification of structural parameters requires a
sufficient statistic that controls for unobserved heterogeneity not only in
current utility but also in the continuation value of the forward-looking
decision problem. We obtain the minimal sufficient statistic and prove
identification of some structural parameters using a conditional likelihood
approach. We apply this estimator to a machine replacement model.",Sufficient Statistics for Unobserved Heterogeneity in Structural Dynamic Logit Models,2018-05-10 19:27:33,"Victor Aguirregabiria, Jiaying Gu, Yao Luo","http://arxiv.org/abs/1805.04048v1, http://arxiv.org/pdf/1805.04048v1",econ.EM
29132,em,"This paper constructs individual-specific density forecasts for a panel of
firms or households using a dynamic linear model with common and heterogeneous
coefficients as well as cross-sectional heteroskedasticity. The panel
considered in this paper features a large cross-sectional dimension N but short
time series T. Due to the short T, traditional methods have difficulty in
disentangling the heterogeneous parameters from the shocks, which contaminates
the estimates of the heterogeneous parameters. To tackle this problem, I assume
that there is an underlying distribution of heterogeneous parameters, model
this distribution nonparametrically allowing for correlation between
heterogeneous parameters and initial conditions as well as individual-specific
regressors, and then estimate this distribution by combining information from
the whole panel. Theoretically, I prove that in cross-sectional homoskedastic
cases, both the estimated common parameters and the estimated distribution of
the heterogeneous parameters achieve posterior consistency, and that the
density forecasts asymptotically converge to the oracle forecast.
Methodologically, I develop a simulation-based posterior sampling algorithm
specifically addressing the nonparametric density estimation of unobserved
heterogeneous parameters. Monte Carlo simulations and an empirical application
to young firm dynamics demonstrate improvements in density forecasts relative
to alternative approaches.",Density Forecasts in Panel Data Models: A Semiparametric Bayesian Perspective,2018-05-10 23:51:01,Laura Liu,"http://arxiv.org/abs/1805.04178v3, http://arxiv.org/pdf/1805.04178v3",econ.EM
29134,em,"This paper contributes to the literature on treatment effects estimation with
machine learning inspired methods by studying the performance of different
estimators based on the Lasso. Building on recent work in the field of
high-dimensional statistics, we use the semiparametric efficient score
estimation structure to compare different estimators. Alternative weighting
schemes are considered and their suitability for the incorporation of machine
learning estimators is assessed using theoretical arguments and various Monte
Carlo experiments. Additionally, we propose our own estimator based on doubly
robust kernel matching, which is argued to be more robust to nuisance parameter
misspecification. In the simulation study we verify theory-based intuition and
find good finite-sample properties of alternative weighting-scheme estimators
like the one we propose.",The Finite Sample Performance of Treatment Effects Estimators based on the Lasso,2018-05-14 11:50:54,Michael Zimmert,"http://arxiv.org/abs/1805.05067v1, http://arxiv.org/pdf/1805.05067v1",econ.EM
29135,em,"This paper introduces a method for linking technological improvement rates
(i.e. Moore's Law) and technology adoption curves (i.e. S-Curves). There has
been considerable research surrounding Moore's Law and the generalized versions
applied to the time dependence of performance for other technologies. The prior
work has culminated with methodology for quantitative estimation of
technological improvement rates for nearly any technology. This paper examines
the implications of such regular time dependence for performance upon the
timing of key events in the technological adoption process. We propose a simple
crossover point in performance which is based upon the technological
improvement rates and current level differences for target and replacement
technologies. The timing for the cross-over is hypothesized as corresponding to
the first 'knee' in the technology adoption ""S-curve"" and signals when the
market for a given technology will start to be rewarding for innovators. This
is also when potential entrants are likely to intensely experiment with
product-market fit and when the competition to achieve a dominant design
begins. This conceptual framework is then back-tested by examining two
technological changes brought about by the internet, namely music and video
transmission. The uncertainty analysis around the cases highlights opportunities
for organizations to reduce future technological uncertainty. Overall, the
results from the case studies support the reliability and utility of the
conceptual framework in strategic business decision-making with the caveat that
while technical uncertainty is reduced, it is not eliminated.",Data-Driven Investment Decision-Making: Applying Moore's Law and S-Curves to Business Strategies,2018-05-16 17:09:04,"Christopher L. Benson, Christopher L. Magee","http://arxiv.org/abs/1805.06339v1, http://arxiv.org/pdf/1805.06339v1",econ.EM
29136,em,"Some aspects of the problem of stable marriage are discussed. There are two
distinguished marriage plans: the fully transferable case, where money can be
transferred between the participants, and the fully non-transferable case, where
each participant has their own rigid preference list regarding the other gender.
We then discuss intermediate, partially transferable cases. Partially
transferable plans can be approached either as special cases of cooperative
games using the notion of a core, or as a generalization of the cyclical
monotonicity property of the fully transferable case (fake promises). We
introduce these two approaches and prove the existence of stable marriages for
the fully transferable and non-transferable plans.",Happy family of stable marriages,2018-05-17 13:33:04,Gershon Wolansky,"http://arxiv.org/abs/1805.06687v1, http://arxiv.org/pdf/1805.06687v1",econ.EM
29137,em,"This study back-tests a marginal cost of production model proposed to value
the digital currency bitcoin. Results from both conventional regression and
vector autoregression (VAR) models show that the marginal cost of production
plays an important role in explaining bitcoin prices, challenging recent
allegations that bitcoins are essentially worthless. Even with markets pricing
bitcoin in the thousands of dollars each, the valuation model seems robust. The
data show that a price bubble that began in the Fall of 2017 resolved itself in
early 2018, converging with the marginal cost model. This suggests that while
bubbles may appear in the bitcoin market, prices will tend to this bound and
not collapse to zero.",Bitcoin price and its marginal cost of production: support for a fundamental value,2018-05-19 18:30:29,Adam Hayes,"http://arxiv.org/abs/1805.07610v1, http://arxiv.org/pdf/1805.07610v1",econ.EM
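A minimal sketch of the type of back-test described in the abstract above, regressing log bitcoin prices on a log marginal-cost-of-production series and then examining joint dynamics with a VAR; the file name, column names and lag settings are hypothetical placeholders, not the paper's data or specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

# Hypothetical input: a daily series of observed market price and an estimated
# marginal cost of production per bitcoin (e.g. built from electricity prices
# and network hash rate). The file and column names are placeholders.
df = pd.read_csv("btc_price_and_cost.csv", parse_dates=["date"], index_col="date")

# Conventional regression: log price on log marginal cost
ols = sm.OLS(np.log(df["price"]), sm.add_constant(np.log(df["marginal_cost"]))).fit()
print(ols.summary())

# Vector autoregression on log differences to study the joint price/cost dynamics
growth = np.log(df[["price", "marginal_cost"]]).diff().dropna()
var = VAR(growth).fit(maxlags=7, ic="aic")
print(var.summary())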
29138,em,"The issue of model selection in applied research is of vital importance.
Since the true model in such research is not known, which model should be used
from among various potential ones is an empirical question. There might exist
several competitive models. A typical approach to dealing with this is classic
hypothesis testing using an arbitrarily chosen significance level based on the
underlying assumption that a true null hypothesis exists. In this paper we
investigate how successful this approach is in determining the correct model
for different data generating processes using time series data. An alternative
approach based on more formal model selection techniques using an information
criterion or cross-validation is suggested and evaluated in the time series
environment via Monte Carlo experiments. This paper also explores the
effectiveness of deciding what type of general relation exists between two
variables (e.g. relation in levels or relation in first differences) using
various strategies based on hypothesis testing and on information criteria with
the presence or absence of unit roots.",Model Selection in Time Series Analysis: Using Information Criteria as an Alternative to Hypothesis Testing,2018-05-23 10:40:53,"R. Scott Hacker, Abdulnasser Hatemi-J","http://arxiv.org/abs/1805.08991v1, http://arxiv.org/pdf/1805.08991v1",econ.EM
29139,em,"This study investigates the dose-response effects of making music on youth
development. Identification is based on the conditional independence assumption
and estimation is implemented using a recent double machine learning estimator.
The study proposes solutions to two highly practically relevant questions that
arise for these new methods: (i) How to investigate sensitivity of estimates to
tuning parameter choices in the machine learning part? (ii) How to assess
covariate balancing in high-dimensional settings? The results show that
improvements in objectively measured cognitive skills require at least medium
intensity, while improvements in school grades are already observed for low
intensity of practice.",A Double Machine Learning Approach to Estimate the Effects of Musical Practice on Student's Skills,2018-05-23 10:58:08,Michael C. Knaus,"http://arxiv.org/abs/1805.10300v2, http://arxiv.org/pdf/1805.10300v2",econ.EM
29932,em,"We propose logit-based IV and augmented logit-based IV estimators that serve
as alternatives to the traditionally used 2SLS estimator in the model where
both the endogenous treatment variable and the corresponding instrument are
binary. Our novel estimators are as easy to compute as the 2SLS estimator but
have an advantage over the 2SLS estimator in terms of causal interpretability.
In particular, in certain cases where the probability limits of both our
estimators and the 2SLS estimator take the form of weighted-average treatment
effects, our estimators are guaranteed to yield non-negative weights whereas
the 2SLS estimator is not.",Logit-based alternatives to two-stage least squares,2023-12-16 08:47:43,"Denis Chetverikov, Jinyong Hahn, Zhipeng Liao, Shuyang Sheng","http://arxiv.org/abs/2312.10333v1, http://arxiv.org/pdf/2312.10333v1",econ.EM
29140,em,"This article introduces two absolutely continuous global-local shrinkage
priors to enable stochastic variable selection in the context of
high-dimensional matrix exponential spatial specifications. Existing approaches
as a means to dealing with overparameterization problems in spatial
autoregressive specifications typically rely on computationally demanding
Bayesian model-averaging techniques. The proposed shrinkage priors can be
implemented using Markov chain Monte Carlo methods in a flexible and efficient
way. A simulation study is conducted to evaluate the performance of each of the
shrinkage priors. Results suggest that they perform particularly well in
high-dimensional environments, especially when the number of parameters to
estimate exceeds the number of observations. For an empirical illustration we
use pan-European regional economic growth data.",Flexible shrinkage in high-dimensional Bayesian spatial autoregressive models,2018-05-28 12:01:55,"Michael Pfarrhofer, Philipp Piribauer","http://dx.doi.org/10.1016/j.spasta.2018.10.004, http://arxiv.org/abs/1805.10822v1, http://arxiv.org/pdf/1805.10822v1",econ.EM
29141,em,"We propose a method that reconciles two popular approaches to structural
estimation and inference: Using a complete - yet approximate model versus
imposing a set of credible behavioral conditions. This is done by distorting
the approximate model to satisfy these conditions. We provide the asymptotic
theory and Monte Carlo evidence, and illustrate that counterfactual experiments
are possible. We apply the methodology to the model of long run risks in
aggregate consumption (Bansal and Yaron, 2004), where the complete model is
generated using the Campbell and Shiller (1988) approximation. Using US data,
we investigate the empirical importance of the neglected non-linearity. We find
that distorting the model to satisfy the non-linear equilibrium condition is
strongly preferred by the data while the quality of the approximation is yet
another reason for the downward bias to estimates of the intertemporal
elasticity of substitution and the upward bias in risk aversion.",Equilibrium Restrictions and Approximate Models -- With an application to Pricing Macroeconomic Risk,2018-05-28 14:27:20,Andreas Tryphonides,"http://arxiv.org/abs/1805.10869v3, http://arxiv.org/pdf/1805.10869v3",econ.EM
29142,em,"The United States' power market is featured by the lack of judicial power at
the federal level. The market thus provides a unique testing environment for
the market organization structure. At the same time, the econometric modeling
and forecasting of electricity market consumption become more challenging.
Import and export, which generally follow simple rules in European countries,
can be a result of direct market behaviors. This paper seeks to build a general
model for power consumption and to use the model to test several hypotheses.",Modeling the residential electricity consumption within a restructured power market,2018-05-28 22:19:00,Chelsea Sun,"http://arxiv.org/abs/1805.11138v2, http://arxiv.org/pdf/1805.11138v2",econ.EM
29143,em,"The policy relevant treatment effect (PRTE) measures the average effect of
switching from a status-quo policy to a counterfactual policy. Estimation of
the PRTE involves estimation of multiple preliminary parameters, including
propensity scores, conditional expectation functions of the outcome and
covariates given the propensity score, and marginal treatment effects. These
preliminary estimators can affect the asymptotic distribution of the PRTE
estimator in complicated and intractable manners. In this light, we propose an
orthogonal score for double debiased estimation of the PRTE, whereby the
asymptotic distribution of the PRTE estimator is obtained without any influence
of preliminary parameter estimators as far as they satisfy mild requirements of
convergence rates. To our knowledge, this paper is the first to develop limit
distribution theories for inference about the PRTE.",Estimation and Inference for Policy Relevant Treatment Effects,2018-05-29 17:34:35,"Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/1805.11503v4, http://arxiv.org/pdf/1805.11503v4",econ.EM
29144,em,"Partial mean with generated regressors arises in several econometric
problems, such as the distribution of potential outcomes with continuous
treatments and the quantile structural function in a nonseparable triangular
model. This paper proposes a nonparametric estimator for the partial mean
process, where the second step consists of a kernel regression on regressors
that are estimated in the first step. The main contribution is a uniform
expansion that characterizes in detail how the estimation error associated with
the generated regressor affects the limiting distribution of the marginal
integration estimator. The general results are illustrated with two examples:
the generalized propensity score for a continuous treatment (Hirano and Imbens,
2004) and control variables in triangular models (Newey, Powell, and Vella,
1999; Imbens and Newey, 2009). An empirical application to the Job Corps
program evaluation demonstrates the usefulness of the method.",Partial Mean Processes with Generated Regressors: Continuous Treatment Effects and Nonseparable Models,2018-11-01 02:37:25,Ying-Ying Lee,"http://arxiv.org/abs/1811.00157v1, http://arxiv.org/pdf/1811.00157v1",econ.EM
29145,em,"I develop a new identification strategy for treatment effects when noisy
measurements of unobserved confounding factors are available. I use proxy
variables to construct a random variable conditional on which treatment
variables become exogenous. The key idea is that, under appropriate conditions,
there exists a one-to-one mapping between the distribution of unobserved
confounding factors and the distribution of proxies. To ensure sufficient
variation in the constructed control variable, I use an additional variable,
termed excluded variable, which satisfies certain exclusion restrictions and
relevance conditions. I establish asymptotic distributional results for
semiparametric and flexible parametric estimators of causal parameters. I
illustrate empirical relevance and usefulness of my results by estimating
causal effects of attending selective college on earnings.",Treatment Effect Estimation with Noisy Conditioning Variables,2018-11-02 01:53:48,Kenichi Nagasawa,"http://arxiv.org/abs/1811.00667v4, http://arxiv.org/pdf/1811.00667v4",econ.EM
29146,em,"We develop a new statistical procedure to test whether the dependence
structure is identical between two groups. Rather than relying on a single
index such as Pearson's correlation coefficient or Kendall's Tau, we consider
the entire dependence structure by investigating the dependence functions
(copulas). The critical values are obtained by a modified randomization
procedure designed to exploit asymptotic group invariance conditions.
Implementation of the test is intuitive and simple, and does not require any
specification of a tuning parameter or weight function. At the same time, the
test exhibits excellent finite sample performance, with the null rejection
rates almost equal to the nominal level even when the sample size is extremely
small. Two empirical applications concerning the dependence between income and
consumption, and the Brexit effect on European financial market integration are
provided.",Randomization Tests for Equality in Dependence Structure,2018-11-06 03:59:00,Juwon Seo,"http://arxiv.org/abs/1811.02105v1, http://arxiv.org/pdf/1811.02105v1",econ.EM
29147,em,"Finite mixture models are useful in applied econometrics. They can be used to
model unobserved heterogeneity, which plays major roles in labor economics,
industrial organization and other fields. Mixtures are also convenient in
dealing with contaminated sampling models and models with multiple equilibria.
This paper shows that finite mixture models are nonparametrically identified
under weak assumptions that are plausible in economic applications. The key is
to utilize the identification power implied by information in covariates
variation. First, three identification approaches are presented, under distinct
and non-nested sets of sufficient conditions. Observable features of data
inform us which of the three approaches is valid. These results apply to
general nonparametric switching regressions, as well as to structural
econometric models, such as auction models with unobserved heterogeneity.
Second, some extensions of the identification results are developed. In
particular, a mixture regression where the mixing weights depend on the value
of the regressors in a fully unrestricted manner is shown to be
nonparametrically identifiable. This means a finite mixture model with
function-valued unobserved heterogeneity can be identified in a cross-section
setting, without restricting the dependence pattern between the regressor and
the unobserved heterogeneity. In this aspect it is akin to fixed effects panel
data models which permit unrestricted correlation between unobserved
heterogeneity and covariates. Third, the paper shows that fully nonparametric
estimation of the entire mixture model is possible, by forming a sample
analogue of one of the new identification strategies. The estimator is shown to
possess a desirable polynomial rate of convergence as in a standard
nonparametric estimation problem, despite nonregular features of the model.",Nonparametric Analysis of Finite Mixtures,2018-11-07 05:16:14,"Yuichi Kitamura, Louise Laage","http://arxiv.org/abs/1811.02727v1, http://arxiv.org/pdf/1811.02727v1",econ.EM
29148,em,"Single index linear models for binary response with random coefficients have
been extensively employed in many econometric settings under various parametric
specifications of the distribution of the random coefficients. Nonparametric
maximum likelihood estimation (NPMLE) as proposed by Cosslett (1983) and
Ichimura and Thompson (1998), in contrast, has received less attention in
applied work due primarily to computational difficulties. We propose a new
approach to computation of NPMLEs for binary response models that significantly
increases their computational tractability, thereby facilitating greater
flexibility in applications. Our approach, which relies on recent developments
involving the geometry of hyperplane arrangements, is contrasted with the
recently proposed deconvolution method of Gautier and Kitamura (2013). An
application to modal choice for the journey to work in the Washington DC area
illustrates the methods.",Nonparametric maximum likelihood methods for binary response models with random coefficients,2018-11-08 12:33:02,"Jiaying Gu, Roger Koenker","http://arxiv.org/abs/1811.03329v3, http://arxiv.org/pdf/1811.03329v3",econ.EM
29149,em,"This study proposes a point estimator of the break location for a one-time
structural break in linear regression models. If the break magnitude is small,
the least-squares estimator of the break date has two modes at the ends of the
finite sample period, regardless of the true break location. To solve this
problem, I suggest an alternative estimator based on a modification of the
least-squares objective function. The modified objective function incorporates
estimation uncertainty that varies across potential break dates. The new break
point estimator is consistent and has a unimodal finite sample distribution
under small break magnitudes. A limit distribution is provided under an in-fill
asymptotic framework. Monte Carlo simulation results suggest that the new
estimator outperforms the least-squares estimator. I apply the method to
estimate the break date in U.S. real GDP growth and U.S. and UK stock return
prediction models.",Estimation of a Structural Break Point in Linear Regression Models,2018-11-09 03:10:11,Yaein Baek,"http://arxiv.org/abs/1811.03720v3, http://arxiv.org/pdf/1811.03720v3",econ.EM
29150,em,"This paper analyses the use of bootstrap methods to test for parameter change
in linear models estimated via Two Stage Least Squares (2SLS). Two types of
test are considered: one where the null hypothesis is of no change and the
alternative hypothesis involves discrete change at k unknown break-points in
the sample; and a second test where the null hypothesis is that there is
discrete parameter change at l break-points in the sample against an
alternative in which the parameters change at l + 1 break-points. In both
cases, we consider inferences based on a sup-Wald-type statistic using either
the wild recursive bootstrap or the wild fixed bootstrap. We establish the
asymptotic validity of these bootstrap tests under a set of general conditions
that allow the errors to exhibit conditional and/or unconditional
heteroskedasticity, and report results from a simulation study that indicate
the tests yield reliable inferences in the sample sizes often encountered in
macroeconomics. The analysis covers the cases where the first-stage estimation
of 2SLS involves a model whose parameters are either constant or themselves
subject to discrete parameter change. If the errors exhibit unconditional
heteroskedasticity and/or the reduced form is unstable then the bootstrap
methods are particularly attractive because the limiting distributions of the
test statistics are not pivotal.",Bootstrapping Structural Change Tests,2018-11-09 23:15:33,"Otilia Boldea, Adriana Cornea-Madeira, Alastair R. Hall","http://dx.doi.org/10.1016/j.jeconom.2019.05.019, http://arxiv.org/abs/1811.04125v1, http://arxiv.org/pdf/1811.04125v1",econ.EM
29151,em,"Identification of multinomial choice models is often established by using
special covariates that have full support. This paper shows how these
identification results can be extended to a large class of multinomial choice
models when all covariates are bounded. I also provide a new
$\sqrt{n}$-consistent asymptotically normal estimator of the finite-dimensional
parameters of the model.",Identification and estimation of multinomial choice models with latent special covariates,2018-11-14 01:48:40,Nail Kashaev,"http://arxiv.org/abs/1811.05555v3, http://arxiv.org/pdf/1811.05555v3",econ.EM
29152,em,"In this paper, we investigate seemingly unrelated regression (SUR) models
that allow the number of equations (N) to be large, and to be comparable to the
number of the observations in each equation (T). It is well known in the
literature that the conventional SUR estimator, for example, the generalized
least squares (GLS) estimator of Zellner (1962) does not perform well. As the
main contribution of the paper, we propose a new feasible GLS estimator called
the feasible graphical lasso (FGLasso) estimator. For a feasible implementation
of the GLS estimator, we use the graphical lasso estimation of the precision
matrix (the inverse of the covariance matrix of the equation system errors)
assuming that the underlying unknown precision matrix is sparse. We derive
asymptotic theories of the new estimator and investigate its finite sample
properties via Monte-Carlo simulations.",Estimation of High-Dimensional Seemingly Unrelated Regression Models,2018-11-14 02:19:46,"Lidan Tan, Khai X. Chiong, Hyungsik Roger Moon","http://arxiv.org/abs/1811.05567v1, http://arxiv.org/pdf/1811.05567v1",econ.EM
29161,em,"We study partial identification of the preference parameters in the
one-to-one matching model with perfectly transferable utilities. We do so
without imposing parametric distributional assumptions on the unobserved
heterogeneity and with data on one large market. We provide a tractable
characterisation of the identified set under various classes of nonparametric
distributional assumptions on the unobserved heterogeneity. Using our
methodology, we re-examine some of the relevant questions in the empirical
literature on the marriage market, which have been previously studied under the
Logit assumption. Our results reveal that many findings in the aforementioned
literature are primarily driven by such parametric restrictions.",Partial Identification in Matching Models for the Marriage Market,2019-02-15 00:37:28,"Cristina Gualdani, Shruti Sinha","http://arxiv.org/abs/1902.05610v6, http://arxiv.org/pdf/1902.05610v6",econ.EM
29153,em,"In this study, Bayesian inference is developed for structural vector
autoregressive models in which the structural parameters are identified via
Markov-switching heteroskedasticity. In such a model, restrictions that are
just-identifying in the homoskedastic case, become over-identifying and can be
tested. A set of parametric restrictions is derived under which the structural
matrix is globally or partially identified and a Savage-Dickey density ratio is
used to assess the validity of the identification conditions. The latter is
facilitated by analytical derivations that make the computations fast and
numerical standard errors small. As an empirical example, monetary models are
compared using heteroskedasticity as an additional device for identification.
The empirical results support models with money in the interest rate reaction
function.",Bayesian Inference for Structural Vector Autoregressions Identified by Markov-Switching Heteroskedasticity,2018-11-20 13:29:18,"Helmut Lütkepohl, Tomasz Woźniak","http://dx.doi.org/10.1016/j.jedc.2020.103862, http://arxiv.org/abs/1811.08167v1, http://arxiv.org/pdf/1811.08167v1",econ.EM
29154,em,"In this paper we aim to improve existing empirical exchange rate models by
accounting for uncertainty with respect to the underlying structural
representation. Within a flexible Bayesian non-linear time series framework,
our modeling approach assumes that different regimes are characterized by
commonly used structural exchange rate models, with their evolution being
driven by a Markov process. We assume a time-varying transition probability
matrix with transition probabilities depending on a measure of the monetary
policy stance of the central bank at the home and foreign country. We apply
this model to a set of eight exchange rates against the US dollar. In a
forecasting exercise, we show that model evidence varies over time and a model
approach that takes this empirical evidence seriously yields improvements in
accuracy of density forecasts for most currency pairs considered.",Model instability in predictive exchange rate regressions,2018-11-21 19:40:00,"Niko Hauzenberger, Florian Huber","http://arxiv.org/abs/1811.08818v2, http://arxiv.org/pdf/1811.08818v2",econ.EM
29155,em,"Volatilities, in high-dimensional panels of economic time series with a
dynamic factor structure on the levels or returns, typically also admit a
dynamic factor decomposition. We consider a two-stage dynamic factor model
method recovering the common and idiosyncratic components of both levels and
log-volatilities. Specifically, in a first estimation step, we extract the
common and idiosyncratic shocks for the levels, from which a log-volatility
proxy is computed. In a second step, we estimate a dynamic factor model, which
is equivalent to a multiplicative factor structure for volatilities, for the
log-volatility panel. By exploiting this two-stage factor approach, we build
one-step-ahead conditional prediction intervals for large $n \times T$ panels
of returns. Those intervals are based on empirical quantiles, not on
conditional variances; they can be either equal- or unequal- tailed. We provide
uniform consistency and consistency rates results for the proposed estimators
as both $n$ and $T$ tend to infinity. We study the finite-sample properties of
our estimators by means of Monte Carlo simulations. Finally, we apply our
methodology to a panel of asset returns belonging to the S&P100 index in order
to compute one-step-ahead conditional prediction intervals for the period
2006-2013. A comparison with the componentwise GARCH benchmark (which does not
take advantage of cross-sectional information) demonstrates the superiority of
our approach, which is genuinely multivariate (and high-dimensional),
nonparametric, and model-free.","Generalized Dynamic Factor Models and Volatilities: Consistency, rates, and prediction intervals",2018-11-25 19:06:08,"Matteo Barigozzi, Marc Hallin","http://dx.doi.org/10.1016/j.jeconom.2020.01.003, http://arxiv.org/abs/1811.10045v2, http://arxiv.org/pdf/1811.10045v2",econ.EM
29156,em,"This paper studies model selection in semiparametric econometric models. It
develops a consistent series-based model selection procedure based on a
Bayesian Information Criterion (BIC) type criterion to select between several
classes of models. The procedure selects a model by minimizing the
semiparametric Lagrange Multiplier (LM) type test statistic from Korolev (2018)
but additionally rewards simpler models. The paper also develops consistent
upward testing (UT) and downward testing (DT) procedures based on the
semiparametric LM type specification test. The proposed semiparametric LM-BIC
and UT procedures demonstrate good performance in simulations. To illustrate
the use of these semiparametric model selection procedures, I apply them to the
parametric and semiparametric gasoline demand specifications from Yatchew and
No (2001). The LM-BIC procedure selects the semiparametric specification that
is nonparametric in age but parametric in all other variables, which is in line
with the conclusions in Yatchew and No (2001). The results of the UT and DT
procedures heavily depend on the choice of tuning parameters and assumptions
about the model errors.",LM-BIC Model Selection in Semiparametric Models,2018-11-26 23:29:18,Ivan Korolev,"http://arxiv.org/abs/1811.10676v1, http://arxiv.org/pdf/1811.10676v1",econ.EM
29157,em,"This paper studies a fixed-design residual bootstrap method for the two-step
estimator of Francq and Zakoïan (2015) associated with the conditional
Expected Shortfall. For a general class of volatility models the bootstrap is
shown to be asymptotically valid under the conditions imposed by Beutner et al.
(2018). A simulation study is conducted revealing that the average coverage
rates are satisfactory for most settings considered. There is no clear evidence
to have a preference for any of the three proposed bootstrap intervals. This
contrasts results in Beutner et al. (2018) for the VaR, for which the
reversed-tails interval has a superior performance.",A Residual Bootstrap for Conditional Expected Shortfall,2018-11-27 01:03:46,"Alexander Heinemann, Sean Telg","http://arxiv.org/abs/1811.11557v1, http://arxiv.org/pdf/1811.11557v1",econ.EM
29158,em,"We provide a complete asymptotic distribution theory for clustered data with
a large number of independent groups, generalizing the classic laws of large
numbers, uniform laws, central limit theory, and clustered covariance matrix
estimation. Our theory allows for clustered observations with heterogeneous and
unbounded cluster sizes. Our conditions cleanly nest the classical results for
i.n.i.d. observations, in the sense that our conditions specialize to the
classical conditions under independent sampling. We use this theory to develop
a full asymptotic distribution theory for estimation based on linear
least-squares, 2SLS, nonlinear MLE, and nonlinear GMM.",Asymptotic Theory for Clustered Samples,2019-02-05 02:46:04,"Bruce E. Hansen, Seojeong Lee","http://arxiv.org/abs/1902.01497v1, http://arxiv.org/pdf/1902.01497v1",econ.EM
29159,em,"In this paper we propose a general framework to analyze prediction in time
series models and show how a wide class of popular time series models satisfies
this framework. We postulate a set of high-level assumptions, and formally
verify these assumptions for the aforementioned time series models. Our
framework coincides with that of Beutner et al. (2019, arXiv:1710.00643) who
establish the validity of conditional confidence intervals for predictions made
in this framework. The current paper therefore complements the results in
Beutner et al. (2019, arXiv:1710.00643) by providing practically relevant
applications of their theory.",A General Framework for Prediction in Time Series Models,2019-02-05 13:06:04,"Eric Beutner, Alexander Heinemann, Stephan Smeekes","http://arxiv.org/abs/1902.01622v1, http://arxiv.org/pdf/1902.01622v1",econ.EM
29162,em,"The identification of the network effect is based on either group size
variation, the structure of the network or the relative position in the
network. I provide easy-to-verify necessary conditions for identification of
undirected network models based on the number of distinct eigenvalues of the
adjacency matrix. Identification of network effects is possible, although in
many empirical situations existing identification strategies may require the
use of many instruments, or instruments that are strongly correlated with
each other. The use of many or highly correlated instruments may
lead to weak identification or a many-instruments bias. This paper proposes
regularized versions of the two-stage least squares (2SLS) estimators as a
solution to these problems. The proposed estimators are consistent and
asymptotically normal. A Monte Carlo study illustrates the properties of the
regularized estimators. An empirical application, assessing a local government
tax competition model, shows the empirical relevance of using regularization
methods.",Weak Identification and Estimation of Social Interaction Models,2019-02-16 22:36:11,Guy Tchuente,"http://arxiv.org/abs/1902.06143v1, http://arxiv.org/pdf/1902.06143v1",econ.EM
29163,em,"This paper is concerned with learning decision makers' preferences using data
on observed choices from a finite set of risky alternatives. We propose a
discrete choice model with unobserved heterogeneity in consideration sets and
in standard risk aversion. We obtain sufficient conditions for the model's
semi-nonparametric point identification, including in cases where consideration
depends on preferences and on some of the exogenous variables. Our method
yields an estimator that is easy to compute and is applicable in markets with
large choice sets. We illustrate its properties using a dataset on property
insurance purchases.",Discrete Choice under Risk with Limited Consideration,2019-02-18 19:05:32,"Levon Barseghyan, Francesca Molinari, Matthew Thirkettle","http://arxiv.org/abs/1902.06629v3, http://arxiv.org/pdf/1902.06629v3",econ.EM
29164,em,"The synthetic control method is often used in treatment effect estimation
with panel data where only a few units are treated and a small number of
post-treatment periods are available. Current estimation and inference
procedures for synthetic control methods do not allow for the existence of
spillover effects, which are plausible in many applications. In this paper, we
consider estimation and inference for synthetic control methods, allowing for
spillover effects. We propose estimators for both direct treatment effects and
spillover effects and show they are asymptotically unbiased. In addition, we
propose an inferential procedure and show it is asymptotically unbiased. Our
estimation and inference procedure applies to cases with multiple treated units
or periods, and where the underlying factor model is either stationary or
cointegrated. In simulations, we confirm that the presence of spillovers
biases current methods and distorts their size, whereas our methods
yield properly sized tests and retain reasonable power. We apply our method to
a classic empirical example that investigates the effect of California's
tobacco control program as in Abadie et al. (2010) and find evidence of
spillovers.",Estimation and Inference for Synthetic Control Methods with Spillover Effects,2019-02-20 02:19:26,"Jianfei Cao, Connor Dowd","http://arxiv.org/abs/1902.07343v2, http://arxiv.org/pdf/1902.07343v2",econ.EM
29165,em,"I show how to reveal ambiguity-sensitive preferences over a single natural
event. In the proposed elicitation mechanism, agents mix binarized bets on the
uncertain event and its complement under varying betting odds. The mechanism
identifies the interval of relevant probabilities for maxmin and maxmax
preferences. For variational preferences and smooth second-order preferences,
the mechanism reveals inner bounds that are sharp under high stakes. For small
stakes, mixing under second-order preferences is dominated by the variance of
the second-order distribution. Additionally, the mechanism can distinguish
extreme ambiguity aversion as in maxmin preferences and moderate ambiguity
aversion as in variational or smooth second-order preferences. An experimental
implementation suggests that participants perceive almost as much ambiguity for
the stock index and actions of other participants as for the Ellsberg urn,
indicating the importance of ambiguity in real-world decision-making.",Eliciting ambiguity with mixing bets,2019-02-20 11:19:21,Patrick Schmidt,"http://arxiv.org/abs/1902.07447v4, http://arxiv.org/pdf/1902.07447v4",econ.EM
29166,em,"Ordered probit and logit models have been frequently used to estimate the
mean ranking of happiness outcomes (and other ordinal data) across groups.
However, it has been recently highlighted that such ranking may not be
identified in most happiness applications. We suggest researchers focus on
median comparison instead of the mean. This is because the median rank can be
identified even if the mean rank is not. Furthermore, median ranks in probit
and logit models can be readily estimated using standard statistical software.
The median ranking, as well as ranking for other quantiles, can also be
estimated semiparametrically and we provide a new constrained mixed integer
optimization procedure for implementation. We apply it to estimate a happiness
equation using General Social Survey data of the US.",Robust Ranking of Happiness Outcomes: A Median Regression Perspective,2019-02-20 21:50:07,"Le-Yu Chen, Ekaterina Oparina, Nattavudh Powdthavee, Sorawoot Srisuma","http://arxiv.org/abs/1902.07696v3, http://arxiv.org/pdf/1902.07696v3",econ.EM
29167,em,"We bound features of counterfactual choices in the nonparametric random
utility model of demand, i.e. if observable choices are repeated cross-sections
and one allows for unrestricted, unobserved heterogeneity. In this setting,
tight bounds are developed on counterfactual discrete choice probabilities and
on the expectation and c.d.f. of (functionals of) counterfactual stochastic
demand.",Nonparametric Counterfactuals in Random Utility Models,2019-02-22 06:07:40,"Yuichi Kitamura, Jörg Stoye","http://arxiv.org/abs/1902.08350v2, http://arxiv.org/pdf/1902.08350v2",econ.EM
29168,em,"We propose a counterfactual Kaplan-Meier estimator that incorporates
exogenous covariates and unobserved heterogeneity of unrestricted
dimensionality in duration models with random censoring. Under some regularity
conditions, we establish the joint weak convergence of the proposed
counterfactual estimator and the unconditional Kaplan-Meier (1958) estimator.
Applying the functional delta method, we make inference on the cumulative
hazard policy effect, that is, the change of duration dependence in response to
a counterfactual policy. We also evaluate the finite sample performance of the
proposed counterfactual estimation method in a Monte Carlo study.",Counterfactual Inference in Duration Models with Random Censoring,2019-02-22 17:17:05,Jiun-Hua Su,"http://arxiv.org/abs/1902.08502v1, http://arxiv.org/pdf/1902.08502v1",econ.EM
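For reference, a minimal sketch of the unconditional Kaplan-Meier (1958) product-limit estimator under random right censoring, which is the baseline object the counterfactual estimator in the abstract extends with covariates and unobserved heterogeneity. The exponential toy data are an assumption for illustration.

# Unconditional Kaplan-Meier estimator (sketch).
import numpy as np

def kaplan_meier(durations, observed):
    """Return distinct event times and the estimated survival function S(t)."""
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=bool)        # True = event, False = censored
    times = np.unique(durations[observed])             # distinct event times
    survival, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)                # subjects at risk just before t
        events = np.sum((durations == t) & observed)    # events occurring at t
        s *= 1.0 - events / at_risk                     # product-limit update
        survival.append(s)
    return times, np.array(survival)

# Toy example: exponential durations with independent exponential censoring.
rng = np.random.default_rng(2)
latent = rng.exponential(scale=2.0, size=200)
censor = rng.exponential(scale=3.0, size=200)
durations = np.minimum(latent, censor)
observed = latent <= censor
times, surv = kaplan_meier(durations, observed)
print("estimated S(t) at the first five event times:", np.round(surv[:5], 3))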
29169,em,"We show that when a high-dimensional data matrix is the sum of a low-rank
matrix and a random error matrix with independent entries, the low-rank
component can be consistently estimated by solving a convex minimization
problem. We develop a new theoretical argument to establish consistency without
assuming sparsity or the existence of any moments of the error matrix, so that
fat-tailed continuous random errors such as Cauchy are allowed. The results are
illustrated by simulations.",Robust Principal Component Analysis with Non-Sparse Errors,2019-02-23 07:55:29,"Jushan Bai, Junlong Feng","http://arxiv.org/abs/1902.08735v2, http://arxiv.org/pdf/1902.08735v2",econ.EM
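A minimal sketch of nuclear-norm shrinkage for low-rank recovery: singular value soft-thresholding solves the quadratic-loss problem min_M 0.5*||X - M||_F^2 + lam*||M||_*. The abstract's estimator is designed to be robust to heavy-tailed, non-sparse errors and does not reduce to this quadratic-loss case; the threshold level and simulated data below are illustrative assumptions.

# Low-rank recovery via singular value soft-thresholding (sketch).
import numpy as np

def svt(X, lam):
    """Singular value soft-thresholding of matrix X at level lam."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(3)
n, p, r = 200, 50, 3
L = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))   # rank-3 signal
noise_sd = 0.5
X = L + rng.normal(scale=noise_sd, size=(n, p))          # noisy observation

lam = 1.2 * noise_sd * (np.sqrt(n) + np.sqrt(p))         # threshold just above the noise singular values
L_hat = svt(X, lam)
print("estimated rank:", np.linalg.matrix_rank(L_hat, tol=1e-8))
print("relative error:", round(np.linalg.norm(L_hat - L) / np.linalg.norm(L), 3))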
29171,em,"An important goal of empirical demand analysis is choice and welfare
prediction on counterfactual budget sets arising from potential
policy-interventions. Such predictions are more credible when made without
arbitrary functional-form/distributional assumptions, and instead based solely
on economic rationality, i.e. that choice is consistent with utility
maximization by a heterogeneous population. This paper investigates
nonparametric economic rationality in the empirically important context of
binary choice. We show that under general unobserved heterogeneity, economic
rationality is equivalent to a pair of Slutsky-like shape-restrictions on
choice-probability functions. The forms of these restrictions differ from
Slutsky-inequalities for continuous goods. Unlike McFadden-Richter's stochastic
revealed preference, our shape-restrictions (a) are global, i.e. their forms do
not depend on which and how many budget-sets are observed, (b) are closed-form,
hence easy to impose on parametric/semi/non-parametric models in practical
applications, and (c) provide computationally simple, theory-consistent bounds
on demand and welfare predictions on counterfactual budget-sets.",The Empirical Content of Binary Choice Models,2019-02-28 13:57:42,Debopam Bhattacharya,"http://arxiv.org/abs/1902.11012v4, http://arxiv.org/pdf/1902.11012v4",econ.EM
29172,em,"McFadden's random-utility model of multinomial choice has long been the
workhorse of applied research. We establish shape-restrictions under which
multinomial choice-probability functions can be rationalized via random-utility
models with nonparametric unobserved heterogeneity and general income-effects.
When combined with an additional restriction, the above conditions are
equivalent to the canonical Additive Random Utility Model. The
sufficiency-proof is constructive, and facilitates nonparametric identification
of preference-distributions without requiring identification-at-infinity type
arguments. A corollary shows that Slutsky-symmetry, a key condition for
previous rationalizability results, is equivalent to absence of income-effects.
Our results imply theory-consistent nonparametric bounds for
choice-probabilities on counterfactual budget-sets. They also apply to widely
used random-coefficient models, upon conditioning on observable choice
characteristics. The theory of partial differential equations plays a key role
in our analysis.",Integrability and Identification in Multinomial Choice Models,2019-02-28 14:12:30,Debopam Bhattacharya,"http://arxiv.org/abs/1902.11017v4, http://arxiv.org/pdf/1902.11017v4",econ.EM
29173,em,"This paper studies estimation of linear panel regression models with
heterogeneous coefficients, when both the regressors and the residual contain a
possibly common, latent, factor structure. Our theory is (nearly) efficient,
because it is based on the GLS principle, and also robust to the specification of
such factor structure, because it requires neither information on the number
of factors nor estimation of the factor structure itself. We first show how the
unfeasible GLS estimator not only affords an efficiency improvement but, more
importantly, provides a bias-adjusted estimator with the conventional limiting
distribution, for situations where the OLS is affected by a first-order bias.
The technical challenge resolved in the paper is to show how these properties
are preserved for a class of feasible GLS estimators in a double-asymptotics
setting. Our theory is illustrated by means of Monte Carlo exercises and, then,
with an empirical application using individual asset returns and firms'
characteristics data.",Robust Nearly-Efficient Estimation of Large Panels with Factor Structures,2019-02-28 19:01:13,"Marco Avarucci, Paolo Zaffaroni","http://arxiv.org/abs/1902.11181v1, http://arxiv.org/pdf/1902.11181v1",econ.EM
29174,em,"This paper studies high-dimensional regression models with lasso when data is
sampled under multi-way clustering. First, we establish convergence rates for
the lasso and post-lasso estimators. Second, we propose a novel inference
method based on a post-double-selection procedure and show its asymptotic
validity. Our procedure can be easily implemented with existing statistical
packages. Simulation results demonstrate that the proposed procedure works well
in finite samples. We illustrate the proposed method with a couple of empirical
applications to development and growth economics.",Lasso under Multi-way Clustering: Estimation and Post-selection Inference,2019-05-06 18:45:57,"Harold D. Chiang, Yuya Sasaki","http://arxiv.org/abs/1905.02107v3, http://arxiv.org/pdf/1905.02107v3",econ.EM
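A minimal sketch of the post-double-selection step underlying this kind of procedure: lasso-select controls that predict the outcome, lasso-select controls that predict the treatment, then run OLS of the outcome on the treatment and the union of selected controls. The paper's contribution is the theory and inference under multi-way cluster sampling, which this i.i.d. sketch does not implement; the simulated design is an assumption.

# Post-double-selection sketch (i.i.d. data, illustrative only).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 400, 100
X = rng.normal(size=(n, p))                        # high-dimensional controls
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # treatment depends on a few controls
y = 1.0 * d + X[:, 0] - X[:, 2] + rng.normal(size=n)   # true treatment effect = 1.0

sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)   # controls predicting the outcome
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)   # controls predicting the treatment
selected = np.union1d(sel_y, sel_d)

# Post-selection OLS of y on (constant, treatment, union of selected controls).
W = np.column_stack([np.ones(n), d, X[:, selected]])
coef, *_ = np.linalg.lstsq(W, y, rcond=None)
print(f"post-double-selection estimate of the treatment effect: {coef[1]:.3f}")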
29175,em,"We propose a new estimation method for heterogeneous causal effects which
utilizes a regression discontinuity (RD) design for multiple datasets with
different thresholds. The standard RD design is frequently used in applied
research, but its results are limited in that the average treatment
effect is estimable only at the threshold on the running variable. In
applied studies it is often the case that thresholds differ across
datasets from different regions or firms; for example, scholarship
thresholds differ across states. The proposed estimator, based on an augmented
inverse probability weighted local linear estimator, can estimate the average
effects at an arbitrary point on the running variable between the thresholds
under mild conditions, while adjusting for differences in the
distributions of covariates across datasets. We perform simulations to
investigate the performance of the proposed estimator in finite samples.
29176,em,"We use novel nonparametric techniques to test for the presence of
non-classical measurement error in reported life satisfaction (LS) and study
the potential effects of ignoring it. Our dataset comes from Wave 3 of the UK
Understanding Society survey, which covers 35,000 British households. Our test
finds evidence of measurement error in reported LS for the entire dataset as
well as for 26 out of 32 socioeconomic subgroups in the sample. We estimate the
joint distribution of reported and latent LS nonparametrically in order to
understand the mis-reporting behavior. We show this distribution can then be
used to estimate parametric models of latent LS. We find measurement error bias
is not severe enough to distort the main drivers of LS. But there is an
important difference that is policy relevant. We find women tend to over-report
their latent LS relative to men. This may help explain the gender puzzle that
questions why women are reportedly happier than men despite being worse off on
objective outcomes such as income and employment.",Analyzing Subjective Well-Being Data with Misclassification,2019-05-15 12:05:11,"Ekaterina Oparina, Sorawoot Srisuma","http://arxiv.org/abs/1905.06037v1, http://arxiv.org/pdf/1905.06037v1",econ.EM
29933,em,"This paper shows that the endogeneity test using the control function
approach in linear instrumental variable models is a variant of the Hausman
test. Moreover, we find that the test statistics used in these tests can be
numerically ordered, indicating their relative power properties in finite
samples.",Some Finite-Sample Results on the Hausman Test,2023-12-17 02:14:02,"Jinyong Hahn, Zhipeng Liao, Nan Liu, Shuyang Sheng","http://arxiv.org/abs/2312.10558v1, http://arxiv.org/pdf/2312.10558v1",econ.EM
29177,em,"In this research paper, I have performed time series analysis and forecasted
the monthly value of housing starts for the year 2019 using several econometric
methods - ARIMA(X), VARX, (G)ARCH and machine learning algorithms - artificial
neural networks, ridge regression, K-Nearest Neighbors, and support vector
regression, and created an ensemble model. The ensemble model stacks the
predictions from various individual models, and gives a weighted average of all
predictions. The analyses suggest that the ensemble model has performed the
best among all the models as the prediction errors are the lowest, while the
econometric models have higher error rates.",Time Series Analysis and Forecasting of the US Housing Starts using Econometric and Machine Learning Model,2019-05-20 05:17:28,Sudiksha Joshi,"http://arxiv.org/abs/1905.07848v1, http://arxiv.org/pdf/1905.07848v1",econ.EM
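A minimal sketch of the stacking idea described in the abstract: fit several individual forecasting models on lagged values of a series, then combine their predictions with weights proportional to inverse validation error. The model set, the toy seasonal series, and the weighting rule are illustrative assumptions, not the paper's exact data or specification.

# Weighted-average forecast ensemble sketch.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(5)
t = np.arange(240)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=t.size)

def make_lags(series, n_lags=12):
    """Turn a series into a supervised-learning design of lagged values."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X, target = make_lags(y)
split = len(target) - 24                              # last 24 months held out for validation
X_tr, X_val, y_tr, y_val = X[:split], X[split:], target[:split], target[split:]

models = {"ridge": Ridge(alpha=1.0),
          "knn": KNeighborsRegressor(n_neighbors=5),
          "svr": SVR(C=10.0)}
preds, errors = {}, {}
for name, model in models.items():
    preds[name] = model.fit(X_tr, y_tr).predict(X_val)
    errors[name] = np.mean((preds[name] - y_val) ** 2)

# Ensemble: weighted average with weights proportional to 1 / validation MSE.
weights = {name: 1.0 / mse for name, mse in errors.items()}
total = sum(weights.values())
ensemble = sum(weights[name] / total * preds[name] for name in models)
print("individual MSEs:", {k: round(v, 3) for k, v in errors.items()})
print("ensemble MSE:   ", round(np.mean((ensemble - y_val) ** 2), 3))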
29178,em,"Time-varying parameter (TVP) models have the potential to be
over-parameterized, particularly when the number of variables in the model is
large. Global-local priors are increasingly used to induce shrinkage in such
models. But the estimates produced by these priors can still have appreciable
uncertainty. Sparsification has the potential to reduce this uncertainty and
improve forecasts. In this paper, we develop computationally simple methods
which both shrink and sparsify TVP models. In a simulated data exercise we show
the benefits of our shrink-then-sparsify approach in a variety of sparse and
dense TVP regressions. In a macroeconomic forecasting exercise, we find our
approach to substantially improve forecast performance relative to shrinkage
alone.",Inducing Sparsity and Shrinkage in Time-Varying Parameter Models,2019-05-26 14:13:09,"Florian Huber, Gary Koop, Luca Onorante","http://arxiv.org/abs/1905.10787v2, http://arxiv.org/pdf/1905.10787v2",econ.EM
29179,em,"This paper considers unit-root tests in large n and large T heterogeneous
panels with cross-sectional dependence generated by unobserved factors. We
reconsider the two prevalent approaches in the literature, that of Moon and
Perron (2004) and the PANIC setup proposed in Bai and Ng (2004). While these
have been considered as completely different setups, we show that, in the case of
Gaussian innovations, the frameworks are asymptotically equivalent in the sense
that both experiments are locally asymptotically normal (LAN) with the same
central sequence. Using Le Cam's theory of statistical experiments we determine
the local asymptotic power envelope and derive an optimal test jointly in both
setups. We show that the popular Moon and Perron (2004) and Bai and Ng (2010)
tests only attain the power envelope when there is no heterogeneity in the
long-run variance of the idiosyncratic components. The new test is
asymptotically uniformly most powerful irrespective of possible heterogeneity.
Moreover, it turns out that for any test, satisfying a mild regularity
condition, the size and local asymptotic power are the same under both data
generating processes. Thus, applied researchers do not need to decide on one of
the two frameworks to conduct unit root tests. Monte-Carlo simulations
corroborate our asymptotic results and document significant gains in
finite-sample power if the variances of the idiosyncratic shocks differ
substantially among the cross sectional units.",Local Asymptotic Equivalence of the Bai and Ng (2004) and Moon and Perron (2004) Frameworks for Panel Unit Root Testing,2019-05-27 16:09:49,"Oliver Wichert, I. Gaia Becheri, Feike C. Drost, Ramon van den Akker","http://arxiv.org/abs/1905.11184v1, http://arxiv.org/pdf/1905.11184v1",econ.EM
29180,em,"This paper develops a threshold regression model where an unknown
relationship between two variables nonparametrically determines the threshold.
We allow the observations to be cross-sectionally dependent so that the model
can be applied to determine an unknown spatial border for sample splitting over
a random field. We derive the uniform rate of convergence and the nonstandard
limiting distribution of the nonparametric threshold estimator. We also obtain
the root-n consistency and the asymptotic normality of the regression
coefficient estimator. Our model has broad empirical relevance as illustrated
by estimating the tipping point in social segregation problems as a function of
demographic characteristics; and determining metropolitan area boundaries using
nighttime light intensity collected from satellite imagery. We find that the
new empirical results are substantially different from those in the existing
studies.",Threshold Regression with Nonparametric Sample Splitting,2019-05-30 19:07:46,"Yoonseok Lee, Yulong Wang","http://arxiv.org/abs/1905.13140v3, http://arxiv.org/pdf/1905.13140v3",econ.EM
29181,em,"This paper studies large $N$ and large $T$ conditional quantile panel data
models with interactive fixed effects. We propose a nuclear norm penalized
estimator of the coefficients on the covariates and the low-rank matrix formed
by the fixed effects. The estimator solves a convex minimization problem, not
requiring pre-estimation of the (number of the) fixed effects. It also allows
the number of covariates to grow slowly with $N$ and $T$. We derive an error
bound on the estimator that holds uniformly in quantile level. The order of the
bound implies uniform consistency of the estimator and is nearly optimal for
the low-rank component. Given the error bound, we also propose a consistent
estimator of the number of fixed effects at any quantile level. To derive the
error bound, we develop new theoretical arguments under primitive assumptions
and new results on random matrices that may be of independent interest. We
demonstrate the performance of the estimator via Monte Carlo simulations.",Regularized Quantile Regression with Interactive Fixed Effects,2019-11-01 03:44:14,Junlong Feng,"http://arxiv.org/abs/1911.00166v4, http://arxiv.org/pdf/1911.00166v4",econ.EM
29182,em,"This paper considers panel data models where the conditional quantiles of the
dependent variables are additively separable as unknown functions of the
regressors and the individual effects. We propose two estimators of the
quantile partial effects while controlling for the individual heterogeneity.
The first estimator is based on local linear quantile regressions, and the
second is based on local linear smoothed quantile regressions, both of which
are easy to compute in practice. Within the large T framework, we provide
sufficient conditions under which the two estimators are shown to be
asymptotically normally distributed. In particular, for the first estimator, it
is shown that $N \ll T^{2/(d+4)}$ is needed to ignore the incidental parameter
biases, where $d$ is the dimension of the regressors. For the second estimator,
we are able to derive the analytical expression of the asymptotic biases under
the assumption that $N\approx Th^{d}$, where $h$ is the bandwidth parameter in
local linear approximations. Our theoretical results provide the basis of using
split-panel jackknife for bias corrections. A Monte Carlo simulation shows that
the proposed estimators and the bias-correction method perform well in finite
samples.",Nonparametric Quantile Regressions for Panel Data Models with Large T,2019-11-05 17:47:18,Liang Chen,"http://arxiv.org/abs/1911.01824v3, http://arxiv.org/pdf/1911.01824v3",econ.EM
29183,em,"Quantile Factor Models (QFM) represent a new class of factor models for
high-dimensional panel data. Unlike Approximate Factor Models (AFM), where only
location-shifting factors can be extracted, QFM also allow one to recover
unobserved factors shifting other relevant parts of the distributions of
observed variables. A quantile regression approach, labeled Quantile Factor
Analysis (QFA), is proposed to consistently estimate all the quantile-dependent
factors and loadings. Their asymptotic distribution is then derived using a
kernel-smoothed version of the QFA estimators. Two consistent model selection
criteria, based on information criteria and rank minimization, are developed to
determine the number of factors at each quantile. Moreover, in contrast to the
conditions required for the use of Principal Components Analysis in AFM, QFA
estimation remains valid even when the idiosyncratic errors have heavy-tailed
distributions. Three empirical applications (regarding macroeconomic, climate
and finance panel data) provide evidence that extra factors shifting the
quantiles other than the means could be relevant in practice.",Quantile Factor Models,2019-11-06 05:51:19,"Liang Chen, Juan Jose Dolado, Jesus Gonzalo","http://arxiv.org/abs/1911.02173v2, http://arxiv.org/pdf/1911.02173v2",econ.EM
29184,em,"This study proposes a simple, trustworthy Chow test in the presence of
heteroscedasticity and autocorrelation. The test is based on a series
heteroscedasticity and autocorrelation robust variance estimator with
judiciously crafted basis functions. Like the Chow test in a classical normal
linear regression, the proposed test employs the standard F distribution as the
reference distribution, which is justified under fixed-smoothing asymptotics.
Monte Carlo simulations show that the null rejection probability of the
asymptotic F test is closer to the nominal level than that of the chi-square
test.",An Asymptotically F-Distributed Chow Test in the Presence of Heteroscedasticity and Autocorrelation,2019-11-09 23:26:55,"Yixiao Sun, Xuexin Wang","http://arxiv.org/abs/1911.03771v1, http://arxiv.org/pdf/1911.03771v1",econ.EM
29185,em,"We study identification of preferences in static single-agent discrete choice
models where decision makers may be imperfectly informed about the state of the
world. We leverage the notion of one-player Bayes Correlated Equilibrium by
Bergemann and Morris (2016) to provide a tractable characterization of the
sharp identified set. We develop a procedure to practically construct the sharp
identified set following a sieve approach, and provide sharp bounds on
counterfactual outcomes of interest. We use our methodology and data on the
2017 UK general election to estimate a spatial voting model under weak
assumptions on agents' information about the returns to voting. Counterfactual
exercises quantify the consequences of imperfect information on the well-being
of voters and parties.",Identification in discrete choice models with imperfect information,2019-11-11 22:23:02,"Cristina Gualdani, Shruti Sinha","http://arxiv.org/abs/1911.04529v5, http://arxiv.org/pdf/1911.04529v5",econ.EM
29186,em,"Much empirical research in economics and finance involves simultaneously
testing multiple hypotheses. This paper proposes extended MinP (EMinP) tests by
expanding the minimand set of the MinP test statistic to include the
$p$-value of a global test such as a likelihood ratio test. We show that, compared
with MinP tests, EMinP tests may considerably improve the global power in
rejecting the intersection of all individual hypotheses. Compared with closed
tests, EMinP tests have a computational advantage, since they share the benefit of
the stepdown procedure of MinP tests, and can have better global power than
the tests used to construct closed tests. Furthermore, we argue that EMinP
tests may be viewed as a tool to prevent data snooping when two competing tests
that have distinct global powers are exploited. Finally, the proposed tests are
applied to an empirical application on testing the effects of exercise.",Extended MinP Tests of Multiple Hypotheses,2019-11-12 09:12:56,Zeng-Hua Lu,"http://arxiv.org/abs/1911.04696v1, http://arxiv.org/pdf/1911.04696v1",econ.EM
29187,em,"Canay (2011)'s two-step estimator of quantile panel data models, due to its
simple intuition and low computational cost, has been widely used in empirical
studies in recent years. In this paper, we revisit the estimator of Canay
(2011) and point out that in his asymptotic analysis the bias of his estimator
due to the estimation of the fixed effects is mistakenly omitted, and that such
omission will lead to invalid inference on the coefficients. To solve this
problem, we propose a similar easy-to-implement estimator based on smoothed
quantile regressions. The asymptotic distribution of the new estimator is
established and the analytical expression of its asymptotic bias is derived.
Based on these results, we show how to make asymptotically valid inference
based on both analytical and split-panel jackknife bias corrections. Finally,
finite sample simulations are used to support our theoretical analysis and to
illustrate the importance of bias correction in quantile regressions for panel
data.",A Simple Estimator for Quantile Panel Data Models Using Smoothed Quantile Regressions,2019-11-12 11:11:48,"Liang Chen, Yulong Huo","http://arxiv.org/abs/1911.04729v1, http://arxiv.org/pdf/1911.04729v1",econ.EM
29188,em,"We analyze the properties of the Synthetic Control (SC) and related
estimators when the pre-treatment fit is imperfect. In this framework, we show
that these estimators are generally biased if treatment assignment is
correlated with unobserved confounders, even when the number of pre-treatment
periods goes to infinity. Still, we show that a demeaned version of the SC
method can substantially improve in terms of bias and variance relative to the
difference-in-difference estimator. We also derive a specification test for the
demeaned SC estimator in this setting with imperfect pre-treatment fit. Given
our theoretical results, we provide practical guidance for applied researchers
on how to justify the use of such estimators in empirical applications.",Synthetic Controls with Imperfect Pre-Treatment Fit,2019-11-19 22:32:10,"Bruno Ferman, Cristine Pinto","http://arxiv.org/abs/1911.08521v2, http://arxiv.org/pdf/1911.08521v2",econ.EM
29189,em,"We develop a class of tests for time series models such as multiple
regression with growing dimension, infinite-order autoregression and
nonparametric sieve regression. Examples include the Chow test and general
linear restriction tests of growing rank $p$. Employing such increasing $p$
asymptotics, we introduce a new scale correction to conventional test
statistics which accounts for a high-order long-run variance (HLV) that emerges
as $ p $ grows with sample size. We also propose a bias correction via a
null-imposed bootstrap to alleviate finite sample bias without sacrificing
power unduly. A simulation study shows the importance of robustifying testing
procedures against the HLV even when $ p $ is moderate. The tests are
illustrated with an application to the oil regressions in Hamilton (2003).",Robust Inference on Infinite and Growing Dimensional Time Series Regression,2019-11-20 03:25:42,"Abhimanyu Gupta, Myung Hwan Seo","http://arxiv.org/abs/1911.08637v4, http://arxiv.org/pdf/1911.08637v4",econ.EM
29190,em,"We propose a Bayesian vector autoregressive (VAR) model for mixed-frequency
data. Our model is based on the mean-adjusted parametrization of the VAR and
allows for an explicit prior on the 'steady states' (unconditional means) of
the included variables. Based on recent developments in the literature, we
discuss extensions of the model that improve the flexibility of the modeling
approach. These extensions include a hierarchical shrinkage prior for the
steady-state parameters, and the use of stochastic volatility to model
heteroskedasticity. We put the proposed model to use in a forecast evaluation
using US data consisting of 10 monthly and 3 quarterly variables. The results
show that the predictive ability typically benefits from using mixed-frequency
data, and that improvements can be obtained for both monthly and quarterly
variables. We also find that the steady-state prior generally enhances the
accuracy of the forecasts, and that accounting for heteroskedasticity by means
of stochastic volatility usually provides additional improvements, although not
for all variables.",A Flexible Mixed-Frequency Vector Autoregression with a Steady-State Prior,2019-11-20 23:06:12,"Sebastian Ankargren, Måns Unosson, Yukai Yang","http://arxiv.org/abs/1911.09151v1, http://arxiv.org/pdf/1911.09151v1",econ.EM
29191,em,"We propose a method to conduct uniform inference for the (optimal) value
function, that is, the function that results from optimizing an objective
function marginally over one of its arguments. Marginal optimization is not
Hadamard differentiable (that is, compactly differentiable) as a map between
the spaces of objective and value functions, which is problematic because
standard inference methods for nonlinear maps usually rely on Hadamard
differentiability. However, we show that the map from objective function to an
$L_p$ functional of a value function, for $1 \leq p \leq \infty$, is Hadamard
directionally differentiable. As a result, we establish consistency and weak
convergence of nonparametric plug-in estimates of Cram\'er-von Mises and
Kolmogorov-Smirnov test statistics applied to value functions. For practical
inference, we develop detailed resampling techniques that combine a bootstrap
procedure with estimates of the directional derivatives. In addition, we
establish local size control of tests which use the resampling procedure. Monte
Carlo simulations assess the finite-sample properties of the proposed methods
and show accurate empirical size and nontrivial power of the procedures.
Finally, we apply our methods to the evaluation of a job training program using
bounds for the distribution function of treatment effects.",Uniform inference for value functions,2019-11-22 22:00:35,"Sergio Firpo, Antonio F. Galvao, Thomas Parker","http://arxiv.org/abs/1911.10215v7, http://arxiv.org/pdf/1911.10215v7",econ.EM
29192,em,"An understanding of the economic landscape in a world of ever increasing data
necessitates representations of data that can inform policy, deepen
understanding and guide future research. Topological Data Analysis offers a set
of tools which deliver on all three calls. Abstract two-dimensional snapshots
of multi-dimensional space readily capture non-monotonic relationships, show
the similarity between points of interest in parameter space, and map these to
outcomes. Specific examples show how some, but not all, countries have returned
to Great Depression levels, and reappraise the links between real private
capital growth and the performance of the economy. Theoretical and empirical
expositions alike warn of the dangers of assuming monotonic relationships and
of discounting combinations of factors as determinants of outcomes; Topological
Data Analysis addresses both dangers. Policy-makers can look at outcomes and
target areas of the input space where such are not satisfactory, academics may
additionally find evidence to motivate theoretical development, and
practitioners can gain a rapid and robust base for decision making.",Topologically Mapping the Macroeconomy,2019-11-24 11:58:56,"Pawel Dlotko, Simon Rudkin, Wanling Qiu","http://arxiv.org/abs/1911.10476v1, http://arxiv.org/pdf/1911.10476v1",econ.EM
29193,em,"We investigate how the possible presence of unit roots and cointegration
affects forecasting with Big Data. As most macroeconomic time series are very
persistent and may contain unit roots, a proper handling of unit roots and
cointegration is of paramount importance for macroeconomic forecasting. The
high-dimensional nature of Big Data complicates the analysis of unit roots and
cointegration in two ways. First, transformations to stationarity require
performing many unit root tests, increasing room for errors in the
classification. Second, modelling unit roots and cointegration directly is more
difficult, as standard high-dimensional techniques such as factor models and
penalized regression are not directly applicable to (co)integrated data and
need to be adapted. We provide an overview of both issues and review methods
proposed to address these issues. These methods are also illustrated with two
empirical applications.",High-Dimensional Forecasting in the Presence of Unit Roots and Cointegration,2019-11-24 18:24:38,"Stephan Smeekes, Etienne Wijler","http://arxiv.org/abs/1911.10552v1, http://arxiv.org/pdf/1911.10552v1",econ.EM
29194,em,"This paper aims at shedding light upon how transforming or detrending a
series can substantially impact predictions of mixed causal-noncausal (MAR)
models, namely dynamic processes that depend not only on their lags but also on
their leads. MAR models have been successfully implemented on commodity prices
as they allow to generate nonlinear features such as locally explosive episodes
(denoted here as bubbles) in a strictly stationary setting. We consider
multiple detrending methods and investigate, using Monte Carlo simulations, to
what extent they preserve the bubble patterns observed in the raw data. MAR
models rely on the dynamics observed in the series alone and do not require
economic theory to construct a structural model, which can sometimes be
intricate to specify or may lack parsimony. We investigate oil prices and
estimate probabilities of crashes before and during the first 2020 wave of the
COVID-19 pandemic. We consider three different mechanical detrending methods
and compare them to a detrending performed using the level of strategic
petroleum reserves.",Predicting crashes in oil prices during the COVID-19 pandemic with mixed causal-noncausal models,2019-11-25 16:48:16,"Alain Hecq, Elisa Voisin","http://arxiv.org/abs/1911.10916v3, http://arxiv.org/pdf/1911.10916v3",econ.EM
29201,em,"The Internet plays a key role in society and is vital to economic
development. Due to the pressure of competition, most technology companies,
including Internet finance companies, continue to explore new markets and new
business. Funding subsidies and resource inputs have led to significant
business income tendencies in financial statements. This tendency of business
income is often manifested as part of the business loss or long-term
unprofitability. We propose a risk change indicator (RFR) and compare the risk
indicator of fourteen representative companies. This model combines extreme
risk value with slope, and the combination method is simple and effective. The
results of experiment show the potential of this model. The risk volatility of
technology enterprises including Internet finance enterprises is highly
cyclical, and the risk volatility of emerging Internet fintech companies is
much higher than that of other technology companies.",Risk Fluctuation Characteristics of Internet Finance: Combining Industry Characteristics with Ecological Value,2020-01-27 17:03:19,"Runjie Xu, Chuanmin Mi, Nan Ye, Tom Marshall, Yadong Xiao, Hefan Shuai","http://arxiv.org/abs/2001.09798v1, http://arxiv.org/pdf/2001.09798v1",econ.EM
29195,em,"Asymptotic bootstrap validity is usually understood as consistency of the
distribution of a bootstrap statistic, conditional on the data, for the
unconditional limit distribution of a statistic of interest. From this
perspective, randomness of the limit bootstrap measure is regarded as a failure
of the bootstrap. We show that such limiting randomness does not necessarily
invalidate bootstrap inference if validity is understood as control over the
frequency of correct inferences in large samples. We first establish sufficient
conditions for asymptotic bootstrap validity in cases where the unconditional
limit distribution of a statistic can be obtained by averaging a (random)
limiting bootstrap distribution. Further, we provide results ensuring the
asymptotic validity of the bootstrap as a tool for conditional inference, the
leading case being that where a bootstrap distribution estimates consistently a
conditional (and thus, random) limit distribution of a statistic. We apply our
framework to several inference problems in econometrics, including linear
models with possibly non-stationary regressors, functional CUSUM statistics,
conditional Kolmogorov-Smirnov specification tests, the `parameter on the
boundary' problem and tests for constancy of parameters in dynamic econometric
models.",Inference under random limit bootstrap measures,2019-11-28 19:39:33,"Giuseppe Cavaliere, Iliyan Georgiev","http://arxiv.org/abs/1911.12779v2, http://arxiv.org/pdf/1911.12779v2",econ.EM
29196,em,"The paper proposes a parsimonious and flexible semiparametric quantile
regression specification for asymmetric bidders within the independent private
value framework. Asymmetry is parameterized using powers of a parent private
value distribution, which is generated by a quantile regression specification.
As noted in Cantillon (2008) , this covers and extends models used for
efficient collusion, joint bidding and mergers among homogeneous bidders. The
specification can be estimated for ascending auctions using the winning bids
and the winner's identity. The estimation is in two stages. The asymmetry
parameters are estimated from the winner's identity using a simple maximum
likelihood procedure. The parent quantile regression specification can be
estimated using simple modifications of Gimenes (2017). Specification testing
procedures are also considered. A timber application reveals that weaker
bidders have a $30\%$ lower chance of winning the auction than stronger ones. It is
also found that increasing participation in an asymmetric ascending auction may
not be as beneficial as using an optimal reserve price, as would have been
expected from a result of Bulow and Klemperer (1996) valid under symmetry.",Semiparametric Quantile Models for Ascending Auctions with Asymmetric Bidders,2019-11-29 14:24:48,"Jayeeta Bhattacharya, Nathalie Gimenes, Emmanuel Guerre","http://arxiv.org/abs/1911.13063v2, http://arxiv.org/pdf/1911.13063v2",econ.EM
29197,em,"This paper considers a semiparametric model of dyadic network formation under
nontransferable utilities (NTU). NTU arises frequently in real-world social
interactions that require bilateral consent, but by its nature induces additive
non-separability. We show how unobserved individual heterogeneity in our model
can be canceled out without additive separability, using a novel method we call
logical differencing. The key idea is to construct events involving the
intersection of two mutually exclusive restrictions on the unobserved
heterogeneity, based on multivariate monotonicity. We provide a consistent
estimator and analyze its performance via simulation, and apply our method to
the Nyakatoke risk-sharing networks.",Logical Differencing in Dyadic Network Formation Models with Nontransferable Utilities,2020-01-03 05:11:36,"Wayne Yuan Gao, Ming Li, Sheng Xu","http://arxiv.org/abs/2001.00691v4, http://arxiv.org/pdf/2001.00691v4",econ.EM
29198,em,"This paper explores strategic network formation under incomplete information
using data from a single large network. We allow the utility function to be
nonseparable in an individual's link choices to capture the spillover effects
from friends in common. In a network with n individuals, the nonseparable
utility drives an individual to choose between 2^{n-1} overlapping portfolios
of links. We develop a novel approach that applies the Legendre transform to
the utility function so that the optimal decision of an individual can be
represented as a sequence of correlated binary choices. The link dependence
that results from the preference for friends in common is captured by an
auxiliary variable introduced by the Legendre transform. We propose a two-step
estimator that is consistent and asymptotically normal. We also derive a
limiting approximation of the game as n grows large that can help simplify the
computation in very large networks. We apply these methods to favor exchange
networks in rural India and find that the direction of support from a mutual
link matters in facilitating favor provision.",Two-Step Estimation of a Strategic Network Formation Model with Clustering,2020-01-12 06:35:37,"Geert Ridder, Shuyang Sheng","http://arxiv.org/abs/2001.03838v3, http://arxiv.org/pdf/2001.03838v3",econ.EM
29199,em,"This paper develops a dynamic factor model that uses euro area (EA)
country-specific information on output and inflation to estimate an area-wide
measure of the output gap. Our model assumes that output and inflation can be
decomposed into country-specific stochastic trends and a common cyclical
component. Comovement in the trends is introduced by imposing a factor
structure on the shocks to the latent states. We moreover introduce flexible
stochastic volatility specifications to control for heteroscedasticity in the
measurement errors and innovations to the latent states. Carefully specified
shrinkage priors allow for pushing the model towards a homoscedastic
specification, if supported by the data. Our measure of the output gap closely
tracks other commonly adopted measures, with small differences in magnitudes
and timing. To assess whether the model-based output gap helps in forecasting
inflation, we perform an out-of-sample forecasting exercise. The findings
indicate that our approach yields superior inflation forecasts, both in terms
of point and density predictions.",A multi-country dynamic factor model with stochastic volatility for euro area business cycle analysis,2020-01-12 17:07:53,"Florian Huber, Michael Pfarrhofer, Philipp Piribauer","http://arxiv.org/abs/2001.03935v1, http://arxiv.org/pdf/2001.03935v1",econ.EM
29200,em,"This paper studies the estimation of causal parameters in the generalized
local average treatment effect (GLATE) model, a generalization of the classical
LATE model encompassing multi-valued treatment and instrument. We derive the
efficient influence function (EIF) and the semiparametric efficiency bound
(SPEB) for two types of parameters: local average structural function (LASF)
and local average structural function for the treated (LASF-T). The moment
condition generated by the EIF satisfies two robustness properties: double
robustness and Neyman orthogonality. Based on the robust moment condition, we
propose the double/debiased machine learning (DML) estimators for LASF and
LASF-T. The DML estimator is semiparametric efficient and suitable for high
dimensional settings. We also propose null-restricted inference methods that
are robust against weak identification issues. As an empirical application, we
study the effects across different sources of health insurance by applying the
developed methods to the Oregon Health Insurance Experiment.",Efficient and Robust Estimation of the Generalized LATE Model,2020-01-19 04:25:27,Haitian Xie,"http://arxiv.org/abs/2001.06746v2, http://arxiv.org/pdf/2001.06746v2",econ.EM
29202,em,"This paper shows how to shrink extremum estimators towards inequality
constraints motivated by economic theory. We propose an Inequality Constrained
Shrinkage Estimator (ICSE) which takes the form of a weighted average between
the unconstrained and inequality constrained estimators with the data dependent
weight. The weight drives both the direction and degree of shrinkage. We use a
local asymptotic framework to derive the asymptotic distribution and risk of
the ICSE. We provide conditions under which the asymptotic risk of the ICSE is
strictly less than that of the unrestricted extremum estimator. The degree of
shrinkage cannot be consistently estimated under the local asymptotic
framework. To address this issue, we propose a feasible plug-in estimator and
investigate its finite sample behavior. We also apply our framework to gasoline
demand estimation under the Slutsky restriction.",Frequentist Shrinkage under Inequality Constraints,2020-01-28 23:59:38,Edvard Bakhitov,"http://arxiv.org/abs/2001.10586v1, http://arxiv.org/pdf/2001.10586v1",econ.EM
29203,em,"This paper provides nonparametric identification results for random
coefficient distributions in perturbed utility models. We cover discrete and
continuous choice models. We establish identification using variation in mean
quantities, and the results apply when an analyst observes aggregate demands
but not whether goods are chosen together. We require exclusion restrictions
and independence between random slope coefficients and random intercepts. We do
not require regressors to have large supports or parametric assumptions.",Identification of Random Coefficient Latent Utility Models,2020-02-29 18:23:50,"Roy Allen, John Rehbeck","http://arxiv.org/abs/2003.00276v1, http://arxiv.org/pdf/2003.00276v1",econ.EM
29204,em,"We propose two types of equal predictive ability (EPA) tests with panels to
compare the predictions made by two forecasters. The first type, namely
$S$-statistics, focuses on the overall EPA hypothesis which states that the EPA
holds on average over all panel units and over time. The second, called
$C$-statistics, focuses on the clustered EPA hypothesis where the EPA holds
jointly for a fixed number of clusters of panel units. The asymptotic
properties of the proposed tests are evaluated under weak and strong
cross-sectional dependence. An extensive Monte Carlo simulation shows that the
proposed tests have very good finite sample properties even with little
information about the cross-sectional dependence in the data. The proposed
framework is applied to compare the economic growth forecasts of the OECD and
the IMF, and to evaluate the performance of the consumer price inflation
forecasts of the IMF.",Equal Predictive Ability Tests Based on Panel Data with Applications to OECD and IMF Forecasts,2020-03-05 21:07:07,"Oguzhan Akgun, Alain Pirotte, Giovanni Urga, Zhenlin Yang","http://arxiv.org/abs/2003.02803v3, http://arxiv.org/pdf/2003.02803v3",econ.EM
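A minimal benchmark version of an equal predictive ability test with panel forecast errors: compute squared-error loss differentials, average over units and time, and form a t-statistic with standard errors clustered by unit. The S- and C-statistics in the abstract handle cross-sectional dependence more carefully; the simulated errors and the clustering-by-unit variance are illustrative assumptions.

# Pooled panel loss-differential t-test sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N, T = 50, 40
errors_A = rng.normal(scale=1.0, size=(N, T))    # forecast errors, forecaster A
errors_B = rng.normal(scale=1.1, size=(N, T))    # forecaster B is slightly worse

d = errors_A ** 2 - errors_B ** 2                # squared-error loss differentials
unit_means = d.mean(axis=1)                      # average differential per panel unit
dbar = unit_means.mean()
se = unit_means.std(ddof=1) / np.sqrt(N)         # SE treating units as independent clusters
t_stat = dbar / se
p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))
print(f"mean loss differential = {dbar:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")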
29205,em,"This paper reviews, applies and extends recently proposed methods based on
Double Machine Learning (DML) with a focus on program evaluation under
unconfoundedness. DML based methods leverage flexible prediction models to
adjust for confounding variables in the estimation of (i) standard average
effects, (ii) different forms of heterogeneous effects, and (iii) optimal
treatment assignment rules. An evaluation of multiple programs of the Swiss
Active Labour Market Policy illustrates how DML based methods enable a
comprehensive program evaluation. Motivated by extreme individualised treatment
effect estimates of the DR-learner, we propose the normalised DR-learner
(NDR-learner) to address this issue. The NDR-learner acknowledges that
individualised effect estimates can be stabilised by an individualised
normalisation of inverse probability weights.",Double Machine Learning based Program Evaluation under Unconfoundedness,2020-03-06 16:31:31,Michael C. Knaus,"http://dx.doi.org/10.1093/ectj/utac015, http://arxiv.org/abs/2003.03191v5, http://arxiv.org/pdf/2003.03191v5",econ.EM
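A minimal sketch of a cross-fitted doubly robust (AIPW) estimate of an average treatment effect under unconfoundedness, with inverse probability weights normalized within each fold as a simplified nod to the normalisation idea mentioned in the abstract. The random-forest nuisance models, fold count, and simulated design are illustrative assumptions, not the paper's exact DML or NDR-learner implementation.

# Cross-fitted AIPW estimate of the ATE with normalized weights (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(8)
n = 2000
X = rng.normal(size=(n, 5))
pscore = 1 / (1 + np.exp(-X[:, 0]))              # true propensity score
D = rng.binomial(1, pscore)
Y = 1.0 * D + X[:, 0] + rng.normal(size=n)       # true ATE = 1.0

psi = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Nuisance functions fitted on the training folds only (cross-fitting).
    clf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
    p_hat = np.clip(clf.fit(X[train], D[train]).predict_proba(X[test])[:, 1], 0.01, 0.99)
    reg1 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
    mu1 = reg1.fit(X[train][D[train] == 1], Y[train][D[train] == 1]).predict(X[test])
    reg0 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
    mu0 = reg0.fit(X[train][D[train] == 0], Y[train][D[train] == 0]).predict(X[test])

    d, y = D[test], Y[test]
    w1 = d / p_hat
    w1 /= w1.mean()                               # normalized IPW weights, treated
    w0 = (1 - d) / (1 - p_hat)
    w0 /= w0.mean()                               # normalized IPW weights, controls
    psi[test] = mu1 - mu0 + w1 * (y - mu1) - w0 * (y - mu0)

ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate = {ate:.3f} (SE {se:.3f}), true ATE = 1.0")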
29206,em,"A unit root test is proposed for time series with a general nonlinear
deterministic trend component. It is shown that asymptotically the pooled OLS
estimator of overlapping blocks filters out any trend component that satisfies
some Lipschitz condition. Under both fixed-$b$ and small-$b$ block asymptotics,
the limiting distribution of the t-statistic for the unit root hypothesis is
derived. Nuisance parameter corrections provide heteroskedasticity-robust
tests, and serial correlation is accounted for by pre-whitening. A Monte Carlo
study that considers slowly varying trends yields both good size and improved
power results for the proposed tests when compared to conventional unit root
tests.",Unit Root Testing with Slowly Varying Trends,2020-03-09 15:30:31,Sven Otto,"http://dx.doi.org/10.1111/jtsa.12557, http://arxiv.org/abs/2003.04066v3, http://arxiv.org/pdf/2003.04066v3",econ.EM
29207,em,"We study the identification and estimation of treatment effect parameters in
weakly separable models. In their seminal work, Vytlacil and Yildiz (2007)
showed how to identify and estimate the average treatment effect of a dummy
endogenous variable when the outcome is weakly separable in a single index.
Their identification result builds on a monotonicity condition with respect to
this single index. In comparison, we consider similar weakly separable models
with multiple indices, and relax the monotonicity condition for identification.
Unlike Vytlacil and Yildiz (2007), we exploit the full information in the
distribution of the outcome variable, instead of just its mean. Indeed, when
the outcome distribution function is more informative than the mean, our method
is applicable to more general settings than theirs; in particular we do not
rely on their monotonicity assumption and at the same time we also allow for
multiple indices. To illustrate the advantage of our approach, we provide
examples of models where our approach can identify parameters of interest
whereas existing methods would fail. These examples include models with
multiple unobserved disturbance terms such as the Roy model and multinomial
choice models with dummy endogenous variables, as well as potential outcome
models with endogenous random coefficients. Our method is easy to implement and
can be applied to a wide class of models. We establish standard asymptotic
properties such as consistency and asymptotic normality.",Identification and Estimation of Weakly Separable Models Without Monotonicity,2020-03-09 21:12:03,"Songnian Chen, Shakeeb Khan, Xun Tang","http://arxiv.org/abs/2003.04337v2, http://arxiv.org/pdf/2003.04337v2",econ.EM
29214,em,"We derive a feasible criterion for the bias-optimal selection of the tuning
parameters involved in estimating the integrated volatility of the spot
volatility via the simple realized estimator by Barndorff-Nielsen and Veraart
(2009). Our analytic results are obtained assuming that the spot volatility is
a continuous mean-reverting process and that consecutive local windows for
estimating the spot volatility are allowed to overlap in a finite sample
setting. Moreover, our analytic results support some optimal selections of
tuning parameters prescribed in the literature, based on numerical evidence.
Interestingly, it emerges that window-overlapping is crucial for optimizing the
finite-sample bias of volatility-of-volatility estimates.",Bias optimal vol-of-vol estimation: the role of window overlapping,2020-04-08 17:34:23,"Giacomo Toscano, Maria Cristina Recchioni","http://arxiv.org/abs/2004.04013v2, http://arxiv.org/pdf/2004.04013v2",econ.EM
29208,em,"I set up a potential outcomes framework to analyze spillover effects using
instrumental variables. I characterize the population compliance types in a
setting in which spillovers can occur on both treatment take-up and outcomes,
and provide conditions for identification of the marginal distribution of
compliance types. I show that intention-to-treat (ITT) parameters aggregate
multiple direct and spillover effects for different compliance types, and hence
do not have a clear link to causally interpretable parameters. Moreover,
rescaling ITT parameters by first-stage estimands generally recovers a weighted
combination of average effects where the sum of weights is larger than one. I
then analyze identification of causal direct and spillover effects under
one-sided noncompliance, and show that causal effects can be estimated by 2SLS
in this case. I illustrate the proposed methods using data from an experiment
on social interactions and voting behavior. I also introduce an alternative
assumption, independence of peers' types, that identifies parameters of
interest under two-sided noncompliance by restricting the amount of
heterogeneity in average potential outcomes.",Causal Spillover Effects Using Instrumental Variables,2020-03-13 00:17:21,Gonzalo Vazquez-Bare,"http://arxiv.org/abs/2003.06023v5, http://arxiv.org/pdf/2003.06023v5",econ.EM
29209,em,"This paper studies a class of linear panel models with random coefficients.
We do not restrict the joint distribution of the time-invariant unobserved
heterogeneity and the covariates. We investigate identification of the average
partial effect (APE) when fixed-effect techniques cannot be used to control for
the correlation between the regressors and the time-varying disturbances.
Relying on control variables, we develop a constructive two-step identification
argument. The first step identifies nonparametrically the conditional
expectation of the disturbances given the regressors and the control variables,
and the second step uses ``between-group'' variations, correcting for
endogeneity, to identify the APE. We propose a natural semiparametric estimator
of the APE, show its $\sqrt{n}$ asymptotic normality and compute its asymptotic
variance. The estimator is computationally easy to implement, and Monte Carlo
simulations show favorable finite sample properties. Control variables arise in
various economic and econometric models, and we propose applications of our
argument in several models. As an empirical illustration, we estimate the
average elasticity of intertemporal substitution in a labor supply model with
random coefficients.",A Correlated Random Coefficient Panel Model with Time-Varying Endogeneity,2020-03-20 19:43:05,Louise Laage,"http://arxiv.org/abs/2003.09367v2, http://arxiv.org/pdf/2003.09367v2",econ.EM
29210,em,"In this paper, we build a new test of rational expectations based on the
marginal distributions of realizations and subjective beliefs. This test is
widely applicable, including in the common situation where realizations and
beliefs are observed in two different datasets that cannot be matched. We show
that whether one can rationalize rational expectations is equivalent to the
distribution of realizations being a mean-preserving spread of the distribution
of beliefs. The null hypothesis can then be rewritten as a system of many
moment inequality and equality constraints, for which tests have been recently
developed in the literature. The test is robust to measurement errors under
some restrictions and can be extended to account for aggregate shocks. Finally,
we apply our methodology to test for rational expectations about future
earnings. While individuals tend to be right on average about their future
earnings, our test strongly rejects rational expectations.",Rationalizing Rational Expectations: Characterization and Tests,2020-03-25 20:57:43,"Xavier D'Haultfoeuille, Christophe Gaillac, Arnaud Maurel","http://dx.doi.org/10.3982/QE1724, http://arxiv.org/abs/2003.11537v3, http://arxiv.org/pdf/2003.11537v3",econ.EM
29211,em,"Random forest regression (RF) is an extremely popular tool for the analysis
of high-dimensional data. Nonetheless, its benefits may be lessened in sparse
settings due to weak predictors, and a pre-estimation dimension reduction
(targeting) step is required. We show that proper targeting controls the
probability of placing splits along strong predictors, thus providing an
important complement to RF's feature sampling. This is supported by simulations
using representative finite samples. Moreover, we quantify the immediate gain
from targeting in terms of increased strength of individual trees.
Macroeconomic and financial applications show that the bias-variance trade-off
implied by targeting, due to increased correlation among trees in the forest,
is balanced at a medium degree of targeting, selecting the best 10--30\% of
commonly applied predictors. Improvements in predictive accuracy of targeted RF
relative to ordinary RF are considerable, up to 12-13\%, occurring both in
recessions and expansions, particularly at long horizons.",Targeting predictors in random forest regression,2020-04-03 10:42:11,"Daniel Borup, Bent Jesper Christensen, Nicolaj Nørgaard Mühlbach, Mikkel Slot Nielsen","http://arxiv.org/abs/2004.01411v4, http://arxiv.org/pdf/2004.01411v4",econ.EM
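A minimal sketch of the targeting idea described above: pre-screen predictors by a simple marginal-strength rule before growing the forest. The screening rule (absolute correlation with the outcome) and the 20% cutoff are stand-in assumptions for illustration, not necessarily the authors' exact procedure.

```python
# Targeted vs. plain random forest regression in a sparse setting.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n, p = 600, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 1.0                 # sparse signal: five strong predictors
y = X @ beta + rng.normal(size=n)
Xtr, Xte, ytr, yte = X[:400], X[400:], y[:400], y[400:]

# Targeting step: keep the best 20% of predictors by |corr(x_j, y)| in the training sample
strength = np.abs([np.corrcoef(Xtr[:, j], ytr)[0, 1] for j in range(p)])
keep = np.argsort(strength)[-int(0.2 * p):]

rf_targeted = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr[:, keep], ytr)
rf_plain = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
print("Out-of-sample R^2, targeted RF:", rf_targeted.score(Xte[:, keep], yte))
print("Out-of-sample R^2, plain RF:   ", rf_plain.score(Xte, yte))
```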
29212,em,"We propose a doubly robust inference method for causal effects of continuous
treatment variables, under unconfoundedness and with nonparametric or
high-dimensional nuisance functions. Our double debiased machine learning (DML)
estimators for the average dose-response function (or the average structural
function) and the partial effects are asymptotically normal with non-parametric
convergence rates. The first-step estimators for the nuisance conditional
expectation function and the conditional density can be nonparametric or ML
methods. Utilizing a kernel-based doubly robust moment function and
cross-fitting, we give high-level conditions under which the nuisance function
estimators do not affect the first-order large sample distribution of the DML
estimators. We provide sufficient low-level conditions for kernel, series, and
deep neural networks. We justify the use of kernel to localize the continuous
treatment at a given value by the Gateaux derivative. We implement various ML
methods in Monte Carlo simulations and an empirical application on a job
training program evaluation.",Double Debiased Machine Learning Nonparametric Inference with Continuous Treatments,2020-04-07 02:01:49,"Kyle Colangelo, Ying-Ying Lee","http://arxiv.org/abs/2004.03036v8, http://arxiv.org/pdf/2004.03036v8",econ.EM
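A simplified sketch of a kernel-based doubly robust estimate of the average dose-response at a single dose, with two-fold cross-fitting. The nuisance estimators below (a random forest for the outcome regression and a homoskedastic normal model for the conditional treatment density), the bandwidth, and the data-generating process are illustrative assumptions, not the paper's implementation.

```python
# Kernel-localized doubly robust moment for the average dose-response at t0.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))
T = 0.5 * X[:, 0] + rng.normal(size=n)                 # continuous treatment
Y = np.sin(T) + X[:, 1] + rng.normal(scale=0.5, size=n)

t0, h = 0.0, 0.5                                       # evaluation dose and bandwidth
folds = np.array_split(rng.permutation(n), 2)
psi = np.empty(n)

for k in range(2):
    test, train = folds[k], folds[1 - k]
    # Outcome regression mu(t, x), fitted on the other fold
    mu = RandomForestRegressor(n_estimators=200, random_state=0)
    mu.fit(np.column_stack([T[train], X[train]]), Y[train])
    # Conditional density of T given X: normal with regression mean, constant variance
    m = LinearRegression().fit(X[train], T[train])
    sigma = np.std(T[train] - m.predict(X[train]))
    f_t0 = norm.pdf(t0, loc=m.predict(X[test]), scale=sigma)
    mu_t0 = mu.predict(np.column_stack([np.full(len(test), t0), X[test]]))
    K = norm.pdf((T[test] - t0) / h) / h               # Gaussian kernel localizing T near t0
    psi[test] = mu_t0 + K / f_t0 * (Y[test] - mu_t0)   # doubly robust score at t0

print("Estimated average dose-response at t0 = 0:", psi.mean())
```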
29213,em,"In this article, we study the limiting behavior of Bai (2009)'s interactive
fixed effects estimator in the presence of randomly missing data. In extensive
simulation experiments, we show that the inferential theory derived by Bai
(2009) and Moon and Weidner (2017) approximates the behavior of the estimator
fairly well. However, we find that the fraction and pattern of randomly missing
data affect the performance of the estimator. Additionally, we use the
interactive fixed effects estimator to reassess the baseline analysis of
Acemoglu et al. (2019). Allowing for a more general form of unobserved
heterogeneity than the authors, we confirm significant effects of democratization
on growth.",Inference in Unbalanced Panel Data Models with Interactive Fixed Effects,2020-04-07 17:06:25,"Daniel Czarnowske, Amrei Stammann","http://arxiv.org/abs/2004.03414v1, http://arxiv.org/pdf/2004.03414v1",econ.EM
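A minimal sketch of the iterative interactive fixed effects estimation idea (alternating between the slope coefficient and a principal-components factor step) on a balanced simulated panel with a single regressor. The paper's focus is the unbalanced, randomly-missing-data case, which this sketch does not handle; dimensions and parameter values are illustrative assumptions.

```python
# Iterative least squares + principal components for an interactive fixed effects panel.
import numpy as np

rng = np.random.default_rng(3)
N, T, r, beta_true = 100, 50, 2, 1.5
F = rng.normal(size=(T, r))                  # common factors
L = rng.normal(size=(N, r))                  # factor loadings
X = rng.normal(size=(N, T)) + L @ F.T        # regressor correlated with the factor structure
Y = beta_true * X + L @ F.T + rng.normal(size=(N, T))

beta = 0.0
for _ in range(100):
    W = Y - beta * X                         # residual panel given the current beta
    # Principal components step: factors from the T x T second-moment matrix of W
    eigval, eigvec = np.linalg.eigh(W.T @ W / (N * T))
    Fhat = np.sqrt(T) * eigvec[:, -r:]       # top-r eigenvectors, normalized so F'F/T = I
    Lhat = W @ Fhat / T
    common = Lhat @ Fhat.T                   # estimated interactive fixed effects
    beta_new = np.sum(X * (Y - common)) / np.sum(X * X)
    if abs(beta_new - beta) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("Interactive fixed effects estimate of beta:", beta)
```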
29215,em,"The paper presents an empirical investigation of telecommuting frequency
choices by post-secondary students in Toronto. It uses a dataset collected
through a large-scale travel survey conducted on post-secondary students of
four major universities in Toronto and it employs multiple alternative
econometric modelling techniques for the empirical investigation. The results
contribute on two fronts. Firstly, the paper presents empirical investigations of
factors affecting the telecommuting frequency choices of post-secondary students,
which are rare in the literature. Secondly, it identifies a better-performing
econometric modelling technique for modelling telecommuting frequency choices.
The empirical investigation clearly reveals that telecommuting for school-related
activities is prevalent among post-secondary students in Toronto. Around 80
percent of the region's 0.18 million post-secondary students, who make
roughly 36,000 trips per day, telecommute at least once a week.
Considering that large numbers of students need to spend a long time travelling
from home to campus with around 33 percent spending more than two hours a day
on travelling, telecommuting has potential to enhance their quality of life.
Empirical investigations reveal that car ownership and living farther from the
campus have similar positive effects on the choice of higher frequency of
telecommuting. Students who use a bicycle for regular travel are least likely
to telecommute, compared to those using transit or a private car.",On the Factors Influencing the Choices of Weekly Telecommuting Frequencies of Post-secondary Students in Toronto,2020-04-09 20:08:22,"Khandker Nurul Habib, Ph. D., PEng","http://arxiv.org/abs/2004.04683v1, http://arxiv.org/pdf/2004.04683v1",econ.EM
29216,em,"The existing theory of penalized quantile regression for longitudinal data
has focused primarily on point estimation. In this work, we investigate
statistical inference. We propose a wild residual bootstrap procedure and show
that it is asymptotically valid for approximating the distribution of the
penalized estimator. The model puts no restrictions on individual effects, and
the estimator achieves consistency by letting the shrinkage decay in importance
asymptotically. The new method is easy to implement and simulation studies show
that it has accurate small sample behavior in comparison with existing
procedures. Finally, we illustrate the new approach using U.S. Census data to
estimate a model that includes more than eighty thousand parameters.",Wild Bootstrap Inference for Penalized Quantile Regression for Longitudinal Data,2020-04-10 20:14:03,"Carlos Lamarche, Thomas Parker","http://arxiv.org/abs/2004.05127v3, http://arxiv.org/pdf/2004.05127v3",econ.EM
29217,em,"Social and emotional learning (SEL) programs teach disruptive students to
improve their classroom behavior. Small-scale programs in high-income countries
have been shown to improve treated students' behavior and academic outcomes.
Using a randomized experiment, we show that a nationwide SEL program in Chile
has no effect on eligible students. We find evidence that very disruptive
students may hamper the program's effectiveness. ADHD, a disorder correlated
with disruptiveness, is much more prevalent in Chile than in high-income
countries, so very disruptive students may be more present in Chile than in the
contexts where SEL programs have been shown to work.",The direct and spillover effects of a nationwide socio-emotional learning program for disruptive students,2020-04-17 12:06:38,"Clément de Chaisemartin, Nicolás Navarrete H.","http://arxiv.org/abs/2004.08126v1, http://arxiv.org/pdf/2004.08126v1",econ.EM
29218,em,"This paper develops theoretical criteria and econometric methods to rank
policy interventions in terms of welfare when individuals are loss-averse. Our
new criterion for ""loss aversion-sensitive dominance"" defines a weak partial
ordering of the distributions of policy-induced gains and losses. It applies to
the class of welfare functions which model individual preferences with
non-decreasing and loss-averse attitudes towards changes in outcomes. We also
develop new statistical methods to test loss aversion-sensitive dominance in
practice, using nonparametric plug-in estimates; these allow inference to be
conducted through a special resampling procedure. Since point-identification of
the distribution of policy-induced gains and losses may require strong
assumptions, we extend our comparison criteria, test statistics, and resampling
procedures to the partially-identified case. We illustrate our methods with a
simple empirical application to the welfare comparison of alternative income
support programs in the US.",Loss aversion and the welfare ranking of policy interventions,2020-04-18 01:01:22,"Sergio Firpo, Antonio F. Galvao, Martyna Kobus, Thomas Parker, Pedro Rosa-Dias","http://arxiv.org/abs/2004.08468v4, http://arxiv.org/pdf/2004.08468v4",econ.EM
29219,em,"We propose an estimation procedure for discrete choice models of
differentiated products with possibly high-dimensional product attributes. In
our model, high-dimensional attributes can be determinants of both mean and
variance of the indirect utility of a product. The key restriction in our model
is that the high-dimensional attributes affect the variance of indirect
utilities only through finitely many indices. In a framework of the
random-coefficients logit model, we show a bound on the error rate of an
$l_1$-regularized minimum distance estimator and prove the asymptotic linearity
of the de-biased estimator.",Estimating High-Dimensional Discrete Choice Model of Differentiated Products with Random Coefficients,2020-04-19 11:09:57,"Masayuki Sawada, Kohei Kawaguchi","http://arxiv.org/abs/2004.08791v1, http://arxiv.org/pdf/2004.08791v1",econ.EM
29220,em,"We simulate a simplified version of the price process including bubbles and
crashes proposed in Kreuser and Sornette (2018). The price process is defined
as a geometric random walk combined with jumps modelled by separate, discrete
distributions associated with positive (and negative) bubbles. The key
ingredient of the model is to assume that the sizes of the jumps are
proportional to the bubble size. Thus, the jumps tend to efficiently bring back
excess bubble prices close to a normal or fundamental value (efficient
crashes). This is different from existing processes studied that assume jumps
that are independent of the mispricing. The present model is simplified
compared to Kreuser and Sornette (2018) in that we ignore the possibility of a
change of the probability of a crash as the price accelerates above the normal
price. We study the behaviour of investment strategies that maximize the
expected log of wealth (Kelly criterion) for the risky asset and a risk-free
asset. We show that the method behaves similarly to Kelly on Geometric Brownian
Motion in that it outperforms other methods in the long-term and it beats
classical Kelly. We identify knowledge of the presence of crashes as a primary
source of outperformance, but interestingly find that knowledge of only their
size, and not their time of occurrence, already provides a significant and
robust edge. We then perform an error analysis to show that the method is
robust with respect to variations in the parameters. The method is most
sensitive to errors in the expected return.",Awareness of crash risk improves Kelly strategies in simulated financial time series,2020-04-20 18:14:43,"Jan-Christian Gerlach, Jerome Kreuser, Didier Sornette","http://arxiv.org/abs/2004.09368v1, http://arxiv.org/pdf/2004.09368v1",econ.EM
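A highly simplified simulation in the spirit of the price process described above: a geometric random walk with occasional downward jumps proportional to the current bubble (log-price above a fundamental trend), and the wealth of a constant-fraction investor. All parameter values and the fixed allocation are illustrative assumptions, not those of Kreuser and Sornette (2018).

```python
# Random walk with bubble-proportional crashes and a constant-fraction strategy.
import numpy as np

rng = np.random.default_rng(4)
T, mu, sigma, crash_prob = 2500, 0.0004, 0.01, 0.01
log_fund = mu * np.arange(T)                 # fundamental log-price trend
log_p = np.zeros(T)
ret = np.zeros(T)
for t in range(1, T):
    r = mu + sigma * rng.normal()
    if rng.random() < crash_prob:            # crash: remove a share of the current bubble
        bubble = max(log_p[t - 1] - log_fund[t - 1], 0.0)
        r -= 0.8 * bubble
    log_p[t] = log_p[t - 1] + r
    ret[t] = np.expm1(r)                     # simple return implied by the log return

fraction = 0.5                               # illustrative constant allocation to the risky asset
wealth = np.cumprod(1.0 + fraction * ret)
print("Terminal wealth of the constant-fraction strategy:", wealth[-1])
```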
29221,em,"The article reviews the history of well-being to gauge how subjective
question surveys can improve our understanding of well-being in Mexico. The
research uses data at the level of the 32 federal entities or States, taking
advantage of the heterogeneity in development indicator readings between and
within geographical areas, the product of socioeconomic inequality. The data
come principally from two innovative subjective questionnaires, BIARE and
ENVIPE, which intersect in their fully representative state-wide applications
in 2014, but also from conventional objective indicator sources such as the HDI
and conventional surveys. This study uses two approaches, a descriptive
analysis of a state-by-state landscape of indicators, both subjective and
objective, in an initial search for stand-out well-being patterns, and an
econometric study of a large selection of mainly subjective indicators inspired
by theory and the findings of previous Mexican research. Descriptive analysis
confirms that subjective well-being correlates strongly with and complements
objective data, providing interesting directions for analysis. The econometrics
literature indicates that happiness increases with income and the satisfaction of
material needs, as theory suggests, but also that Mexicans are relatively happy
considering their mediocre incomes and high levels of insecurity, the last of
which, by categorizing according to satisfaction with life, can be shown to
impact poorer people disproportionately. The article suggests that well-being
is a complex, multidimensional construct which can be revealed by using
exploratory multi-regression and partial correlations models which juxtapose
subjective and objective indicators.",Does Subjective Well-being Contribute to Our Understanding of Mexican Well-being?,2020-04-23 21:46:05,"Jeremy Heald, Erick Treviño Aguilar","http://arxiv.org/abs/2004.11420v1, http://arxiv.org/pdf/2004.11420v1",econ.EM
29222,em,"This chapter reviews the microeconometrics literature on partial
identification, focusing on the developments of the last thirty years. The
topics presented illustrate that the available data combined with credible
maintained assumptions may yield much information about a parameter of
interest, even if they do not reveal it exactly. Special attention is devoted
to discussing the challenges associated with, and some of the solutions put
forward to, (1) obtain a tractable characterization of the values for the
parameters of interest which are observationally equivalent, given the
available data and maintained assumptions; (2) estimate this set of values; (3)
conduct tests of hypotheses and make confidence statements. The chapter reviews
advances in partial identification analysis both as applied to learning
(functionals of) probability distributions that are well-defined in the absence
of models, as well as to learning parameters that are well-defined only in the
context of particular models. A simple organizing principle is highlighted: the
source of the identification problem can often be traced to a collection of
random variables that are consistent with the available data and maintained
assumptions. This collection may be part of the observed data or be a model
implication. In either case, it can be formalized as a random set. Random set
theory is then used as a mathematical framework to unify a number of special
results and produce a general methodology to carry out partial identification
analysis.",Microeconometrics with Partial Identification,2020-04-24 16:54:53,Francesca Molinari,"http://arxiv.org/abs/2004.11751v1, http://arxiv.org/pdf/2004.11751v1",econ.EM
29223,em,"A common approach to estimation of economic models is to calibrate a sub-set
of model parameters and keep them fixed when estimating the remaining
parameters. Calibrated parameters likely affect conclusions based on the model
but estimation time often makes a systematic investigation of the sensitivity
to calibrated parameters infeasible. I propose a simple and computationally
low-cost measure of the sensitivity of parameters and other objects of interest
to the calibrated parameters. In the main empirical application, I revisit the
analysis of life-cycle savings motives in Gourinchas and Parker (2002) and show
that some estimates are sensitive to calibrations.",Sensitivity to Calibrated Parameters,2020-04-25 12:40:29,Thomas H. Jørgensen,"http://arxiv.org/abs/2004.12100v2, http://arxiv.org/pdf/2004.12100v2",econ.EM
29224,em,"We develop a concept of weak identification in linear IV models in which the
number of instruments can grow at the same rate or slower than the sample size.
We propose a jackknifed version of the classical weak identification-robust
Anderson-Rubin (AR) test statistic. Large-sample inference based on the
jackknifed AR is valid under heteroscedasticity and weak identification. The
feasible version of this statistic uses a novel variance estimator. The test
has uniformly correct size and good power properties. We also develop a
pre-test for weak identification that is related to the size property of a Wald
test based on the Jackknife Instrumental Variable Estimator (JIVE). This new
pre-test is valid under heteroscedasticity and with many instruments.",Inference with Many Weak Instruments,2020-04-26 20:55:44,"Anna Mikusheva, Liyang Sun","http://arxiv.org/abs/2004.12445v3, http://arxiv.org/pdf/2004.12445v3",econ.EM
29225,em,"We study the role and drivers of persistence in the extensive margin of
bilateral trade. Motivated by a stylized heterogeneous firms model of
international trade with market entry costs, we consider dynamic three-way
fixed effects binary choice models and study the corresponding incidental
parameter problem. The standard maximum likelihood estimator is consistent
under asymptotics where all panel dimensions grow at a constant rate, but it
has an asymptotic bias in its limiting distribution, invalidating inference
even in situations where the bias appears to be small. Thus, we propose two
different bias-corrected estimators. Monte Carlo simulations confirm their
desirable statistical properties. We apply these estimators in a reassessment
of the most commonly studied determinants of the extensive margin of trade.
Both true state dependence and unobserved heterogeneity contribute considerably
to trade persistence and taking this persistence into account matters
significantly in identifying the effects of trade policies on the extensive
margin.",State Dependence and Unobserved Heterogeneity in the Extensive Margin of Trade,2020-04-27 12:07:26,"Julian Hinz, Amrei Stammann, Joschka Wanner","http://arxiv.org/abs/2004.12655v2, http://arxiv.org/pdf/2004.12655v2",econ.EM
29240,em,"This paper examines methods of inference concerning quantile treatment
effects (QTEs) in randomized experiments with matched-pairs designs (MPDs).
Standard multiplier bootstrap inference fails to capture the negative
dependence of observations within each pair and is therefore conservative.
Analytical inference involves estimating multiple functional quantities that
require several tuning parameters. Instead, this paper proposes two bootstrap
methods that can consistently approximate the limit distribution of the
original QTE estimator and lessen the burden of tuning parameter choice. Most
especially, the inverse propensity score weighted multiplier bootstrap can be
implemented without knowledge of pair identities.",Bootstrap Inference for Quantile Treatment Effects in Randomized Experiments with Matched Pairs,2020-05-25 11:21:40,"Liang Jiang, Xiaobin Liu, Peter C. B. Phillips, Yichong Zhang","http://arxiv.org/abs/2005.11967v4, http://arxiv.org/pdf/2005.11967v4",econ.EM
29226,em,"In this paper we investigate potential changes which may have occurred over
the last two decades in the probability mass of the right tail of the wage
distribution, through the analysis of the corresponding tail index.
Specifically, a conditional tail index estimator is introduced which explicitly
allows for right-tail censoring (top-coding), a feature of the widely
used Current Population Survey (CPS), as well as of other surveys. Ignoring the
top-coding may lead to inconsistent estimates of the tail index and to under- or
overstatements of inequality and of its evolution over time. Thus, having a
tail index estimator that explicitly accounts for this sample characteristic is
of importance to better understand and compute the tail index dynamics in the
censored right tail of the wage distribution. The contribution of this paper is
threefold: i) we introduce a conditional tail index estimator that explicitly
handles the top-coding problem, and evaluate its finite sample performance and
compare it with competing methods; ii) we highlight that the factor values used
to adjust the top-coded wage have changed over time and depend on the
characteristics of individuals, occupations and industries, and propose
suitable values; and iii) we provide an in-depth empirical analysis of the
dynamics of the US wage distribution's right tail using the public-use CPS
database from 1992 to 2017.",Measuring wage inequality under right censoring,2020-04-27 18:11:06,"João Nicolau, Pedro Raposo, Paulo M. M. Rodrigues","http://arxiv.org/abs/2004.12856v1, http://arxiv.org/pdf/2004.12856v1",econ.EM
29227,em,"Can uncertainty about credit availability trigger a slowdown in real
activity? This question is answered by using a novel method to identify shocks
to uncertainty in access to credit. Time-variation in uncertainty about credit
availability is estimated using particle Markov Chain Monte Carlo. We extract
shocks to time-varying credit uncertainty and decompose it into two parts: the
first captures the ""pure"" effect of a shock to the second moment; the second
captures total effects of uncertainty including effects on the first moment.
Using state-dependent local projections, we find that the ""pure"" effect by
itself generates a sharp slowdown in real activity and the effects are largely
countercyclical. We feed the estimated shocks into a flexible price real
business cycle model with a collateral constraint and show that when the
collateral constraint binds, an uncertainty shock about credit access is
recessionary leading to a simultaneous decline in consumption, investment, and
output.",The Interaction Between Credit Constraints and Uncertainty Shocks,2020-04-30 15:16:33,"Pratiti Chatterjee, David Gunawan, Robert Kohn","http://arxiv.org/abs/2004.14719v1, http://arxiv.org/pdf/2004.14719v1",econ.EM
29228,em,"This paper shows that utilizing information on the extensive margin of
financially constrained households can narrow down the set of admissible
preferences in a large class of macroeconomic models. Estimates based on
Spanish aggregate data provide further empirical support for this result and
suggest that accounting for this margin can bring estimates closer to
microeconometric evidence. Accounting for financial constraints and the
extensive margin is shown to matter for empirical asset pricing and quantifying
distortions in financial markets.",Identifying Preferences when Households are Financially Constrained,2020-05-05 11:59:22,Andreas Tryphonides,"http://arxiv.org/abs/2005.02010v6, http://arxiv.org/pdf/2005.02010v6",econ.EM
29229,em,"Assessing the trend of the COVID-19 pandemic and policy effectiveness is
essential for both policymakers and stock investors, but challenging because
the crisis has unfolded with extreme speed and the previous index was not
suitable for measuring policy effectiveness for COVID-19. This paper builds an
index of policy effectiveness on fighting COVID-19 pandemic, whose building
method is similar to the index of Policy Uncertainty, based on province-level
paper documents released in China from Jan.1st to Apr.16th of 2020. This paper
also studies the relationships among COVID-19 daily confirmed cases, stock
market volatility, and document-based policy effectiveness in China. This paper
uses the DCC-GARCH model to fit conditional covariance's change rule of
multi-series. This paper finally tests four hypotheses, about the time-space
difference of policy effectiveness and its overflow effect both on the COVID-19
pandemic and stock market. Through the inner interaction of this triad
structure, we can bring forward more specific and scientific suggestions to
maintain stability in the stock market at such exceptional times.",Stocks Vote with Their Feet: Can a Piece of Paper Document Fights the COVID-19 Pandemic?,2020-05-05 12:56:40,"J. Su, Q. Zhong","http://arxiv.org/abs/2005.02034v1, http://arxiv.org/pdf/2005.02034v1",econ.EM
29230,em,"We develop a generalization of unobserved components models that allows for a
wide range of long-run dynamics by modelling the permanent component as a
fractionally integrated process. The model does not require stationarity and
can be cast in state space form. In a multivariate setup, fractional trends may
yield a cointegrated system. We derive the Kalman filter estimator for the
common fractionally integrated component and establish consistency and
asymptotic (mixed) normality of the maximum likelihood estimator. We apply the
model to extract a common long-run component of three US inflation measures,
where we show that the $I(1)$ assumption is likely to be violated for the
common trend.",Fractional trends in unobserved components models,2020-05-08 15:42:38,"Tobias Hartl, Rolf Tschernig, Enzo Weber","http://arxiv.org/abs/2005.03988v2, http://arxiv.org/pdf/2005.03988v2",econ.EM
29231,em,"Using HILDA data for the years 2001, 2006, 2010, 2014 and 2017, we compute
posterior probabilities for dominance for all pairwise comparisons of income
distributions in these years. The dominance criteria considered are Lorenz
dominance and first and second order stochastic dominance. The income
distributions are estimated using an infinite mixture of gamma density
functions, with posterior probabilities computed as the proportion of Markov
chain Monte Carlo draws that satisfy the inequalities that define the dominance
criteria. We find welfare improvements from 2001 to 2006 and qualified
improvements from 2006 to the later three years. Evidence of an ordering
between 2010, 2014 and 2017 cannot be established.",Posterior Probabilities for Lorenz and Stochastic Dominance of Australian Income Distributions,2020-05-11 08:37:31,"David Gunawan, William E. Griffiths, Duangkamon Chotikapanich","http://arxiv.org/abs/2005.04870v2, http://arxiv.org/pdf/2005.04870v2",econ.EM
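A minimal sketch of how a posterior probability of (first-order) stochastic dominance can be computed from MCMC output: the proportion of posterior draws for which one CDF lies weakly below the other on a grid. The "posterior draws" below are fabricated single-gamma parameter draws for illustration only; the paper models incomes as infinite mixtures of gammas and also considers Lorenz and second-order dominance.

```python
# Posterior probability of first-order stochastic dominance from parameter draws.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(5)
n_draws = 1000
grid = np.linspace(1, 200, 400)               # income grid (illustrative units)

# Hypothetical posterior draws of (shape, scale) for two years, A and B
shape_a = rng.normal(2.2, 0.05, n_draws); scale_a = rng.normal(22.0, 0.5, n_draws)
shape_b = rng.normal(2.0, 0.05, n_draws); scale_b = rng.normal(20.0, 0.5, n_draws)

dominates = 0
for s_a, sc_a, s_b, sc_b in zip(shape_a, scale_a, shape_b, scale_b):
    cdf_a = gamma.cdf(grid, a=s_a, scale=sc_a)
    cdf_b = gamma.cdf(grid, a=s_b, scale=sc_b)
    dominates += np.all(cdf_a <= cdf_b)       # year A first-order dominates year B on the grid
print("Posterior probability of first-order dominance:", dominates / n_draws)
```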
29232,em,"We combine high-dimensional factor models with fractional integration methods
and derive models where nonstationary, potentially cointegrated data of
different persistence is modelled as a function of common fractionally
integrated factors. A two-stage estimator, that combines principal components
and the Kalman filter, is proposed. The forecast performance is studied for a
high-dimensional US macroeconomic data set, where we find that benefits from
the fractional factor models can be substantial, as they outperform univariate
autoregressions, principal components, and the factor-augmented
error-correction model.",Macroeconomic Forecasting with Fractional Factor Models,2020-05-11 10:40:10,Tobias Hartl,"http://arxiv.org/abs/2005.04897v1, http://arxiv.org/pdf/2005.04897v1",econ.EM
29233,em,"We develop a generalization of correlated trend-cycle decompositions that
avoids prior assumptions about the long-run dynamic characteristics by
modelling the permanent component as a fractionally integrated process and
incorporating a fractional lag operator into the autoregressive polynomial of
the cyclical component. The model allows for an endogenous estimation of the
integration order jointly with the other model parameters and, therefore, no
prior specification tests with respect to persistence are required. We relate
the model to the Beveridge-Nelson decomposition and derive a modified Kalman
filter estimator for the fractional components. Identification, consistency,
and asymptotic normality of the maximum likelihood estimator are shown. For US
macroeconomic data we demonstrate that, unlike $I(1)$ correlated unobserved
components models, the new model estimates a smooth trend together with a cycle
hitting all NBER recessions. While $I(1)$ unobserved components models yield an
upward-biased signal-to-noise ratio whenever the integration order of the
data-generating mechanism is greater than one, the fractionally integrated
model attributes less variation to the long-run shocks due to the fractional
trend specification and a higher variation to the cycle shocks due to the
fractional lag operator, leading to more persistent cycles and smooth trend
estimates that reflect macroeconomic common sense.",Fractional trends and cycles in macroeconomic time series,2020-05-11 20:08:03,"Tobias Hartl, Rolf Tschernig, Enzo Weber","http://arxiv.org/abs/2005.05266v2, http://arxiv.org/pdf/2005.05266v2",econ.EM
29234,em,"This paper investigates the construction of moment conditions in discrete
choice panel data with individual specific fixed effects. We describe how to
systematically explore the existence of moment conditions that do not depend on
the fixed effects, and we demonstrate how to construct them when they exist.
Our approach is closely related to the numerical ""functional differencing""
construction in Bonhomme (2012), but our emphasis is to find explicit analytic
expressions for the moment functions. We first explain the construction and
give examples of such moment conditions in various models. Then, we focus on
the dynamic binary choice logit model and explore the implications of the
moment conditions for identification and estimation of the model parameters
that are common to all individuals.",Moment Conditions for Dynamic Panel Logit Models with Fixed Effects,2020-05-12 20:41:43,"Bo E. Honoré, Martin Weidner","http://arxiv.org/abs/2005.05942v7, http://arxiv.org/pdf/2005.05942v7",econ.EM
29235,em,"One of the most important empirical findings in microeconometrics is the
pervasiveness of heterogeneity in economic behaviour (cf. Heckman 2001). This
paper shows that cumulative distribution functions and quantiles of the
nonparametric unobserved heterogeneity have an infinite efficiency bound in
many structural economic models of interest. The paper presents a relatively
simple check of this fact. The usefulness of the theory is demonstrated with
several relevant examples in economics, including, among others, the proportion
of individuals with severe long term unemployment duration, the average
marginal effect and the proportion of individuals with a positive marginal
effect in a correlated random coefficient model with heterogeneous first-stage
effects, and the distribution and quantiles of random coefficients in linear,
binary and the Mixed Logit models. Monte Carlo simulations illustrate the
finite sample implications of our findings for the distribution and quantiles
of the random coefficients in the Mixed Logit model.",Irregular Identification of Structural Models with Nonparametric Unobserved Heterogeneity,2020-05-18 14:49:01,Juan Carlos Escanciano,"http://arxiv.org/abs/2005.08611v1, http://arxiv.org/pdf/2005.08611v1",econ.EM
29236,em,"We derive sharp bounds on the non consumption utility component in an
extended Roy model of sector selection. We interpret this non consumption
utility component as a compensating wage differential. The bounds are derived
under the assumption that potential utilities in each sector are (jointly)
stochastically monotone with respect to an observed selection shifter. The
research is motivated by the analysis of women's choice of university major,
their underrepresentation in mathematics-intensive fields, and the impact of
role models on choices and outcomes. To illustrate our methodology, we
investigate the cost of STEM fields with data from a German graduate survey,
and using the mother's education level and the proportion of women on the STEM
faculty at the time of major choice as selection shifters.",Role models and revealed gender-specific costs of STEM in an extended Roy model of major choice,2020-05-19 00:17:33,"Marc Henry, Romuald Meango, Ismael Mourifie","http://arxiv.org/abs/2005.09095v4, http://arxiv.org/pdf/2005.09095v4",econ.EM
29237,em,"This paper provides new uniform rate results for kernel estimators of
absolutely regular stationary processes that are uniform in the bandwidth and
in infinite-dimensional classes of dependent variables and regressors. Our
results are useful for establishing asymptotic theory for two-step
semiparametric estimators in time series models. We apply our results to obtain
nonparametric estimates and their rates for Expected Shortfall processes.",Uniform Rates for Kernel Estimators of Weakly Dependent Data,2020-05-20 13:36:44,Juan Carlos Escanciano,"http://arxiv.org/abs/2005.09951v1, http://arxiv.org/pdf/2005.09951v1",econ.EM
29238,em,"Control variables are included in regression analyses to estimate the causal
effect of a treatment on an outcome. In this article, we argue that the
estimated effect sizes of control variables are nevertheless unlikely to have a
causal interpretation themselves. This is because even valid controls are
possibly endogenous and therefore represent a combination of several different
causal mechanisms operating jointly on the outcome, which is hard to interpret
theoretically. We recommend refraining from reporting marginal effects of
controls in regression tables and focusing exclusively on the variables of
interest in the results sections of quantitative research papers. Moreover, we
advise against using control variable estimates for subsequent theory building
and meta-analyses.",On the Nuisance of Control Variables in Regression Analysis,2020-05-20 21:53:51,"Paul Hünermund, Beyers Louw","http://arxiv.org/abs/2005.10314v4, http://arxiv.org/pdf/2005.10314v4",econ.EM
29239,em,"The aim of this paper is to investigate the use of the Factor Analysis in
order to identify the role of the relevant macroeconomic variables in driving
the inflation. The Macroeconomic predictors that usually affect the inflation
are summarized using a small number of factors constructed by the principal
components. This allows us to identify the crucial role of money growth,
inflation expectation and exchange rate in driving the inflation. Then we use
this factors to build econometric models to forecast inflation. Specifically,
we use univariate and multivariate models such as classical autoregressive,
Factor models and FAVAR models. Results of forecasting suggest that models
which incorporate more economic information outperform the benchmark.
Furthermore, causality test and impulse response are performed in order to
examine the short-run dynamics of inflation to shocks in the principal factors.",Macroeconomic factors for inflation in Argentine 2013-2019,2020-05-23 06:15:01,Manuel Lopez Galvan,"http://arxiv.org/abs/2005.11455v1, http://arxiv.org/pdf/2005.11455v1",econ.EM
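A minimal sketch of the factor-augmented forecasting idea: summarize a panel of macroeconomic predictors with principal components and use them, together with lagged inflation, in a forecasting regression. The data below are simulated placeholders and the dimensions are illustrative assumptions, not the Argentine dataset.

```python
# Principal-component factors plus an autoregressive forecasting regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.api as sm

rng = np.random.default_rng(6)
T, p, r = 120, 30, 3
common = rng.normal(size=(T, r))                     # latent common factors
Xmacro = common @ rng.normal(size=(r, p)) + rng.normal(size=(T, p))
infl = 0.5 * np.roll(common[:, 0], 1) + rng.normal(scale=0.5, size=T)

factors = PCA(n_components=r).fit_transform(StandardScaler().fit_transform(Xmacro))

# One-step-ahead forecasting regression: inflation on its own lag and lagged factors
y = infl[1:]
X = sm.add_constant(np.column_stack([infl[:-1], factors[:-1]]))
fit = sm.OLS(y, X).fit()
print(fit.params)                                    # constant, AR(1) term, factor coefficients
```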
29241,em,"Precipitated by rapid globalization, rising inequality, population growth,
and longevity gains, social protection programs have been on the rise in low-
and middle-income countries (LMICs) in the last three decades. However, the
introduction of public benefits could displace informal mechanisms for
risk-protection, which are especially prevalent in LMICs. If the displacement
of private transfers is considerably large, the expansion of social protection
programs could even lead to social welfare loss. In this paper, we critically
survey the recent empirical literature on crowd-out effects in response to
public policies, specifically in the context of LMICs. We review and synthesize
patterns from the behavioral response to various types of social protection
programs. Furthermore, we specifically examine heterogeneous treatment
effects by important socioeconomic characteristics. We conclude by drawing on
lessons from our synthesis of studies. If poverty reduction objectives are
considered, along with careful program targeting that accounts for potential
crowd-out effects, there may well be a net social gain.",Do Public Program Benefits Crowd Out Private Transfers in Developing Countries? A Critical Review of Recent Evidence,2020-06-01 09:18:41,"Plamen Nikolov, Matthew Bonci","http://dx.doi.org/10.1016/j.worlddev.2020.1049679, http://arxiv.org/abs/2006.00737v2, http://arxiv.org/pdf/2006.00737v2",econ.EM
29242,em,"The estimation of the causal effect of an endogenous treatment based on an
instrumental variable (IV) is often complicated by attrition, sample selection,
or non-response in the outcome of interest. To tackle the latter problem, the
latent ignorability (LI) assumption imposes that attrition/sample selection is
independent of the outcome conditional on the treatment compliance type (i.e.
how the treatment behaves as a function of the instrument), the instrument, and
possibly further observed covariates. As a word of caution, this note formally
discusses the strong behavioral implications of LI in rather standard IV
models. We also provide an empirical illustration based on the Job Corps
experimental study, in which the sensitivity of the estimated program effect to
LI and alternative assumptions about outcome attrition is investigated.",On the plausibility of the latent ignorability assumption,2020-06-02 18:17:24,Martin Huber,"http://arxiv.org/abs/2006.01703v2, http://arxiv.org/pdf/2006.01703v2",econ.EM
29243,em,"Common approaches to inference for structural and reduced-form parameters in
empirical economic analysis are based on the consistency and the root-n
asymptotic normality of the GMM and M estimators. The canonical consistency
(respectively, root-n asymptotic normality) for these classes of estimators
requires at least the first (respectively, second) moment of the score to be
finite. In this article, we present a method of testing these conditions for
the consistency and the root-n asymptotic normality of the GMM and M
estimators. The proposed test controls size nearly uniformly over the set of
data generating processes that are compatible with the null hypothesis.
Simulation studies support this theoretical result. Applying the proposed test
to the market share data from the Dominick's Finer Foods retail chain, we find
that a common \textit{ad hoc} procedure to deal with zero market shares in
analysis of differentiated products markets results in a failure to satisfy the
conditions for both the consistency and the root-n asymptotic normality.",Testing Finite Moment Conditions for the Consistency and the Root-N Asymptotic Normality of the GMM and M Estimators,2020-06-04 00:32:12,"Yuya Sasaki, Yulong Wang","http://arxiv.org/abs/2006.02541v3, http://arxiv.org/pdf/2006.02541v3",econ.EM
29244,em,"The paper analyses the efficiency of extension programs in the adoption of
chemical fertilisers in Ethiopia between 1994 and 2004. Fertiliser adoption
provides a suitable strategy to ensure and stabilize food production in remote
vulnerable areas. Extension services programs have a long history in supporting
the application of fertiliser. How-ever, their efficiency is questioned. In our
analysis, we focus on seven villages with a considerable time lag in fertiliser
diffusion. Using matching techniques avoids sample selection bias in the
comparison of treated (households received extension service) and controlled
households. Additionally to common factors, measures of culture, proxied by
ethnicity and religion, aim to control for potential tensions between extension
agents and peasants that hamper the efficiency of the program. We find a
considerable impact of extension service on the first fertiliser adoption. The
impact is consistent for five of seven villages.",The pain of a new idea: Do Late Bloomers response to Extension Service in Rural Ethiopia?,2020-06-04 16:31:29,"Alexander Jordan, Marco Guerzoni","http://arxiv.org/abs/2006.02846v1, http://arxiv.org/pdf/2006.02846v1",econ.EM
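A generic propensity-score matching sketch on simulated data: estimate the propensity score by logistic regression, match each treated unit to its nearest untreated neighbour on the score, and average the outcome differences. This illustrates the matching mechanics only, not the paper's exact protocol or covariates; the data-generating process and effect size are assumptions.

```python
# Propensity-score nearest-neighbour matching for the effect on the treated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 4))                          # household covariates
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))     # selection into the program
D = rng.binomial(1, p)
Y = 0.4 * D + X[:, 0] + rng.normal(size=n)           # outcome, true effect on the treated = 0.4

pscore = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]
treated, control = np.where(D == 1)[0], np.where(D == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[control].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated].reshape(-1, 1))   # matching with replacement
att = np.mean(Y[treated] - Y[control[idx.ravel()]])
print("Matching estimate of the effect on the treated:", att)
```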
29245,em,"The Economic Policy Uncertainty index had gained considerable traction with
both academics and policy practitioners. Here, we analyse news feed data to
construct a simple, general measure of uncertainty in the United States using a
highly cited machine learning methodology. Over the period January 1996 through
May 2020, we show that the series unequivocally Granger-causes the EPU and
there is no Granger-causality in the reverse direction",Text as data: a machine learning-based approach to measuring uncertainty,2020-06-11 17:10:17,"Rickard Nyman, Paul Ormerod","http://arxiv.org/abs/2006.06457v1, http://arxiv.org/pdf/2006.06457v1",econ.EM
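A minimal sketch of the Granger-causality check described above, using two simulated monthly series in place of the news-based uncertainty measure and the EPU index; statsmodels' grangercausalitytests runs the standard F-tests of whether lags of the second column help predict the first. The simulated dynamics are an illustrative assumption.

```python
# Granger-causality test on two simulated series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(8)
T = 300
uncertainty = np.zeros(T)
epu = np.zeros(T)
for t in range(1, T):
    uncertainty[t] = 0.6 * uncertainty[t - 1] + rng.normal()
    epu[t] = 0.5 * epu[t - 1] + 0.4 * uncertainty[t - 1] + rng.normal()

# Does "uncertainty" Granger-cause "epu"? (lags of column 2 predicting column 1)
data = np.column_stack([epu, uncertainty])
grangercausalitytests(data, maxlag=3)
```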
29246,em,"This article studies tail behavior for the error components in the stochastic
frontier model, where one component has bounded support on one side, and the
other has unbounded support on both sides. Under weak assumptions on the error
components, we derive nonparametric tests that the unbounded component
distribution has thin tails and that the component tails are equivalent. The
tests are useful diagnostic tools for stochastic frontier analysis. A
simulation study and an application to a stochastic cost frontier for 6,100 US
banks from 1998 to 2005 are provided. The new tests reject the normal or
Laplace distributional assumptions, which are commonly imposed in the existing
literature.",Nonparametric Tests of Tail Behavior in Stochastic Frontier Models,2020-06-14 06:21:04,"William, C. Horrace, Yulong Wang","http://arxiv.org/abs/2006.07780v1, http://arxiv.org/pdf/2006.07780v1",econ.EM
29247,em,"This paper constructs internationally consistent measures of macroeconomic
uncertainty. Our econometric framework extracts uncertainty from revisions in
data obtained from standardized national accounts. Applying our model to
post-WWII real-time data, we estimate macroeconomic uncertainty for 39
countries. The cross-country dimension of our uncertainty data allows us to
study the impact of uncertainty shocks under different employment protection
legislation. Our empirical findings suggest that the effects of uncertainty
shocks are stronger and more persistent in countries with low employment
protection compared to countries with high employment protection. These
empirical findings are in line with a theoretical model with varying firing
costs.",Measuring Macroeconomic Uncertainty: The Labor Channel of Uncertainty from a Cross-Country Perspective,2020-06-16 12:09:46,"Andreas Dibiasi, Samad Sarferaz","http://arxiv.org/abs/2006.09007v2, http://arxiv.org/pdf/2006.09007v2",econ.EM
29248,em,"Time-varying parameter (TVP) models often assume that the TVPs evolve
according to a random walk. This assumption, however, might be questionable
since it implies that coefficients change smoothly and in an unbounded manner.
In this paper, we relax this assumption by proposing a flexible law of motion
for the TVPs in large-scale vector autoregressions (VARs). Instead of imposing
a restrictive random walk evolution of the latent states, we carefully design
hierarchical mixture priors on the coefficients in the state equation. These
priors effectively allow for discriminating between periods where coefficients
evolve according to a random walk and times where the TVPs are better
characterized by a stationary stochastic process. Moreover, this approach is
capable of introducing dynamic sparsity by pushing small parameter changes
towards zero if necessary. The merits of the model are illustrated by means of
two applications. Using synthetic data we show that our approach yields precise
parameter estimates. When applied to US data, the model reveals interesting
patterns of low-frequency dynamics in coefficients and forecasts well relative
to a wide range of competing models.",Flexible Mixture Priors for Large Time-varying Parameter Models,2020-06-17 21:07:11,Niko Hauzenberger,"http://arxiv.org/abs/2006.10088v2, http://arxiv.org/pdf/2006.10088v2",econ.EM
29249,em,"The ongoing COVID-19 pandemic risks wiping out years of progress made in
reducing global poverty. In this paper, we explore to what extent financial
inclusion could help mitigate the increase in poverty using cross-country data
across 78 low- and lower-middle-income countries. Unlike other recent
cross-country studies, we show that financial inclusion is a key driver of
poverty reduction in these countries. This effect is not direct, but indirect,
by mitigating the detrimental effect that inequality has on poverty. Our
findings are consistent across all the different measures of poverty used. Our
forecasts suggest that the world's population living on less than $1.90 per day
could increase from 8% to 14% by 2021, pushing nearly 400 million people into
poverty. However, urgent improvements in financial inclusion could
substantially reduce the impact on poverty.",COVID-19 response needs to broaden financial inclusion to curb the rise in poverty,2020-06-18 20:39:05,"Mostak Ahamed, Roxana Gutiérrez-Romero","http://arxiv.org/abs/2006.10706v1, http://arxiv.org/pdf/2006.10706v1",econ.EM
29250,em,"In this paper, we study the trending behaviour of COVID-19 data at country
level, and draw attention to some existing econometric tools which are
potentially helpful to understand the trend better in future studies. In our
empirical study, we find that European countries overall flatten the curves
more effectively compared to the other regions, while Asia & Oceania also
achieve some success, but the situations are not as optimistic elsewhere.
Africa and America are still facing serious challenges in terms of managing the
spread of the virus, and reducing the death rate, although in Africa the virus
spreads slower and has a lower death rate than the other regions. By comparing
the performances of different countries, our results incidentally agree with Gu
et al. (2020), though different approaches and models are considered. For
example, both works agree that countries such as USA, UK and Italy perform
relatively poorly; on the other hand, Australia, China, Japan, Korea, and
Singapore perform relatively better.",On the Time Trend of COVID-19: A Panel Data Study,2020-06-19 13:34:51,"Chaohua Dong, Jiti Gao, Oliver Linton, Bin Peng","http://arxiv.org/abs/2006.11060v2, http://arxiv.org/pdf/2006.11060v2",econ.EM
29251,em,"We introduce tools for controlled variable selection to economists. In
particular, we apply a recently introduced aggregation scheme for false
discovery rate (FDR) control to German administrative data to determine the
parts of the individual employment histories that are relevant for the career
outcomes of women. Our results suggest that career outcomes can be predicted
based on a small set of variables, such as daily earnings, wage increases in
combination with a high level of education, employment status, and working
experience.",A Pipeline for Variable Selection and False Discovery Rate Control With an Application in Labor Economics,2020-06-22 17:29:20,"Sophie-Charlotte Klose, Johannes Lederer","http://arxiv.org/abs/2006.12296v2, http://arxiv.org/pdf/2006.12296v2",econ.EM
29252,em,"A novel IV estimation method, that we term Locally Trimmed LS (LTLS), is
developed which yields estimators with (mixed) Gaussian limit distributions in
situations where the data may be weakly or strongly persistent. In particular,
we allow for nonlinear predictive type of regressions where the regressor can
be stationary short/long memory as well as nonstationary long memory process or
a nearly integrated array. The resultant t-tests have conventional limit
distributions (i.e. N(0; 1)) free of (near to unity and long memory) nuisance
parameters. In the case where the regressor is a fractional process, no
preliminary estimator for the memory parameter is required. Therefore, the
practitioner can conduct inference while being agnostic about the exact
dependence structure in the data. The LTLS estimator is obtained by applying
certain chronological trimming to the OLS instrument via the utilisation of
appropriate kernel functions of time trend variables. The finite sample
performance of LTLS based t-tests is investigated with the aid of a simulation
experiment. An empirical application to the predictability of stock returns is
also provided.",Locally trimmed least squares: conventional inference in possibly nonstationary models,2020-06-22 23:11:57,"Zhishui Hu, Ioannis Kasparis, Qiying Wang","http://arxiv.org/abs/2006.12595v1, http://arxiv.org/pdf/2006.12595v1",econ.EM
29253,em,"This paper considers identifying and estimating the Average Treatment Effect
on the Treated (ATT) when untreated potential outcomes are generated by an
interactive fixed effects model. That is, in addition to time-period and
individual fixed effects, we consider the case where there is an unobserved
time invariant variable whose effect on untreated potential outcomes may change
over time and which can therefore cause outcomes (in the absence of
participating in the treatment) to follow different paths for the treated group
relative to the untreated group. The models that we consider in this paper
generalize many commonly used models in the treatment effects literature
including difference in differences and individual-specific linear trend
models. Unlike the majority of the literature on interactive fixed effects
models, we do not require the number of time periods to go to infinity to
consistently estimate the ATT. Our main identification result relies on having
the effect of some time invariant covariate (e.g., race or sex) not vary over
time. Using our approach, we show that the ATT can be identified with as few as
three time periods and with panel or repeated cross sections data.",Treatment Effects in Interactive Fixed Effects Models with a Small Number of Time Periods,2020-06-29 05:34:53,"Brantly Callaway, Sonia Karami","http://arxiv.org/abs/2006.15780v3, http://arxiv.org/pdf/2006.15780v3",econ.EM
29255,em,"The paper proposes a Bayesian multinomial logit model to analyse spatial
patterns of urban expansion. The specification assumes that the log-odds of
each class follow a spatial autoregressive process. Using recent advances in
Bayesian computing, our model allows for a computationally efficient treatment
of the spatial multinomial logit model. This allows us to assess spillovers
between regions and across land use classes. In a series of Monte Carlo
studies, we benchmark our model against other competing specifications. The
paper also showcases the performance of the proposed specification using
European regional data. Our results indicate that spatial dependence plays a
key role in the land sealing process of cropland and grassland. Moreover, we
uncover land sealing spillovers across multiple classes of arable land.",A spatial multinomial logit model for analysing urban expansion,2020-08-03 09:55:10,"Tamás Krisztin, Philipp Piribauer, Michael Wögerer","http://arxiv.org/abs/2008.00673v1, http://arxiv.org/pdf/2008.00673v1",econ.EM
29256,em,"Data on businesses collected by statistical agencies are challenging to
protect. Many businesses have unique characteristics, and distributions of
employment, sales, and profits are highly skewed. Attackers wishing to conduct
identification attacks often have access to much more information than for any
individual. As a consequence, most disclosure avoidance mechanisms fail to
strike an acceptable balance between usefulness and confidentiality protection.
Detailed aggregate statistics by geography or detailed industry classes are
rare, public-use microdata on businesses are virtually nonexistent, and access
to confidential microdata can be burdensome. Synthetic microdata have been
proposed as a secure mechanism to publish microdata, as part of a broader
discussion of how to provide broader access to such data sets to researchers.
In this article, we document an experiment to create analytically valid
synthetic data, using the exact same model and methods previously employed for
the United States, for data from two different countries: Canada (LEAP) and
Germany (BHP). We assess utility and protection, and provide an assessment of
the feasibility of extending such an approach in a cost-effective way to other
data.",Applying Data Synthesis for Longitudinal Business Data across Three Countries,2020-07-25 04:16:08,"M. Jahangir Alam, Benoit Dostie, Jörg Drechsler, Lars Vilhuber","http://dx.doi.org/10.21307/stattrans-2020-039, http://arxiv.org/abs/2008.02246v1, http://arxiv.org/pdf/2008.02246v1",econ.EM
29257,em,"We analyze theoretical properties of the hybrid test for superior
predictability. We demonstrate with a simple example that the test may not be
pointwise asymptotically of level $\alpha$ at commonly used significance levels
and may lead to rejection rates over $11\%$ when the significance level
$\alpha$ is $5\%$. Generalizing this observation, we provide a formal result
that pointwise asymptotic invalidity of the hybrid test persists in a setting
under reasonable conditions. As an easy alternative, we propose a modified
hybrid test based on the generalized moment selection method and show that the
modified test enjoys pointwise asymptotic validity. Monte Carlo simulations
support the theoretical findings.",On the Size Control of the Hybrid Test for Predictive Ability,2020-08-05 21:52:44,Deborah Kim,"http://arxiv.org/abs/2008.02318v2, http://arxiv.org/pdf/2008.02318v2",econ.EM
29258,em,"We provide an upper bound as a random variable for the functions of
estimators in high dimensions. This upper bound may help establish the rate of
convergence of functions in high dimensions. The upper bound random variable
may converge faster, slower, or at the same rate as estimators depending on the
behavior of the partial derivative of the function. We illustrate this via
three examples. The first two examples use the upper bound for testing in high
dimensions, and the third example derives the estimated out-of-sample variance of
large portfolios. All our results allow for a larger number of parameters, p,
than the sample size, n.",An Upper Bound for Functions of Estimators in High Dimensions,2020-08-06 16:17:17,"Mehmet Caner, Xu Han","http://arxiv.org/abs/2008.02636v1, http://arxiv.org/pdf/2008.02636v1",econ.EM
29259,em,"We provide new results showing identification of a large class of fixed-T
panel models, where the response variable is an unknown, weakly monotone,
time-varying transformation of a latent linear index of fixed effects,
regressors, and an error term drawn from an unknown stationary distribution.
Our results identify the transformation, the coefficient on regressors, and
features of the distribution of the fixed effects. We then develop a
full-commitment intertemporal collective household model, where the implied
quantity demand equations are time-varying functions of a linear index. The
fixed effects in this index equal logged resource shares, defined as the
fractions of household expenditure enjoyed by each household member. Using
Bangladeshi data, we show that women's resource shares decline with household
budgets and that half of the variation in women's resource shares is due to
unobserved household-level heterogeneity.","Identification of Time-Varying Transformation Models with Fixed Effects, with an Application to Unobserved Heterogeneity in Resource Shares",2020-08-12 21:12:38,"Irene Botosaru, Chris Muris, Krishna Pendakur","http://arxiv.org/abs/2008.05507v2, http://arxiv.org/pdf/2008.05507v2",econ.EM
29260,em,"We study a fixed-$T$ panel data logit model for ordered outcomes that
accommodates fixed effects and state dependence. We provide identification
results for the autoregressive parameter, regression coefficients, and the
threshold parameters in this model. Our results require only four observations
on the outcome variable. We provide conditions under which a composite
conditional maximum likelihood estimator is consistent and asymptotically
normal. We use our estimator to explore the determinants of self-reported
health in a panel of European countries over the period 2003-2016. We find
that: (i) the autoregressive parameter is positive and analogous to a linear
AR(1) coefficient of about 0.25, indicating persistence in health status; (ii)
the association between income and health becomes insignificant once we control
for unobserved heterogeneity and persistence.",A dynamic ordered logit model with fixed effects,2020-08-12 21:23:10,"Chris Muris, Pedro Raposo, Sotiris Vandoros","http://arxiv.org/abs/2008.05517v1, http://arxiv.org/pdf/2008.05517v1",econ.EM
29261,em,"We propose a simple approach to optimally select the number of control units
in k nearest neighbors (kNN) algorithm focusing in minimizing the mean squared
error for the average treatment effects. Our approach is non-parametric where
confidence intervals for the treatment effects were calculated using asymptotic
results with bias correction. Simulation exercises show that our approach gets
relative small mean squared errors, and a balance between confidence intervals
length and type I error. We analyzed the average treatment effects on treated
(ATET) of participation in 401(k) plans on accumulated net financial assets
confirming significant effects on amount and positive probability of net asset.
Our optimal k selection produces significant narrower ATET confidence intervals
compared with common practice of using k=1.",Optimal selection of the number of control units in kNN algorithm to estimate average treatment effects,2020-08-14 23:18:16,"Andrés Ramírez-Hassan, Raquel Vargas-Correa, Gustavo García, Daniel Londoño","http://arxiv.org/abs/2008.06564v1, http://arxiv.org/pdf/2008.06564v1",econ.EM
29262,em,"We analyse a sequential contest with two players in darts where one of the
contestants enjoys a technical advantage. Using methods from the causal machine
learning literature, we analyse the built-in advantage, which is the
first-mover having potentially more but never fewer moves. Our empirical
findings suggest that the first-mover has an 8.6 percentage point higher
probability of winning the match, induced by the technical advantage. Contestants
with low performance measures and little experience have the highest built-in
advantage. With regard to the fairness principle that contestants with equal
abilities should have equal winning probabilities, this contest is ex-ante fair
in the case of equal built-in advantages for both competitors and a randomized
starting right. Nevertheless, the contest design produces unequal probabilities
of winning for equally skilled contestants because of asymmetries in the
built-in advantage associated with social pressure for contestants competing at
home and away.",Analysing a built-in advantage in asymmetric darts contests using causal machine learning,2020-08-17 12:07:14,Daniel Goller,"http://dx.doi.org/10.1007/s10479-022-04563-0, http://arxiv.org/abs/2008.07165v1, http://arxiv.org/pdf/2008.07165v1",econ.EM
29263,em,"We introduce an approach to deal with self-selection of peers in the
linear-in-means model. Contrary to existing proposals, we do not require
specifying a model for how the selection of peers comes about. Rather, we exploit
two restrictions that are inherent to many such specifications to construct
intuitive instrumental variables. These restrictions are that link decisions
that involve a given individual are not all independent of one another, but
that they are independent of the link behavior between other pairs of
individuals. A two-stage least-squares estimator of the linear-in-means model
is then readily obtained.",Peer effects and endogenous social interactions,2020-08-18 15:20:55,Koen Jochmans,"http://arxiv.org/abs/2008.07886v1, http://arxiv.org/pdf/2008.07886v1",econ.EM
29264,em,"This paper develops new techniques to bound distributional treatment effect
parameters that depend on the joint distribution of potential outcomes -- an
object not identified by standard identifying assumptions such as selection on
observables or even when treatment is randomly assigned. I show that panel data
and an additional assumption on the dependence between untreated potential
outcomes for the treated group over time (i) provide more identifying power for
distributional treatment effect parameters than existing bounds and (ii)
provide a more plausible set of conditions than existing methods that obtain
point identification. I apply these bounds to study heterogeneity in the effect
of job displacement during the Great Recession. Using standard techniques, I
find that workers who were displaced during the Great Recession lost on average
34\% of their earnings relative to their counterfactual earnings had they not
been displaced. Using the methods developed in the current paper, I also show
that the average effect masks substantial heterogeneity across workers.",Bounds on Distributional Treatment Effect Parameters using Panel Data with an Application on Job Displacement,2020-08-18 21:45:23,Brantly Callaway,"http://arxiv.org/abs/2008.08117v1, http://arxiv.org/pdf/2008.08117v1",econ.EM
29265,em,"We introduce a new approach for comparing the predictive accuracy of two
nested models that bypasses the difficulties caused by the degeneracy of the
asymptotic variance of forecast error loss differentials used in the
construction of commonly used predictive comparison statistics. Our approach
continues to rely on the out-of-sample MSE loss differentials between the two
competing models, leads to nuisance parameter free Gaussian asymptotics and is
shown to remain valid under flexible assumptions that can accommodate
heteroskedasticity and the presence of mixed predictors (e.g. stationary and
local to unit root). A local power analysis also establishes its ability to
detect departures from the null in both stationary and persistent settings.
Simulations calibrated to common economic and financial applications indicate
that our methods have strong power with good size control across commonly
encountered sample sizes.",A Novel Approach to Predictive Accuracy Testing in Nested Environments,2020-08-19 14:45:46,Jean-Yves Pitarakis,"http://arxiv.org/abs/2008.08387v3, http://arxiv.org/pdf/2008.08387v3",econ.EM
29266,em,"Inference in models where the parameter is defined by moment inequalities is
of interest in many areas of economics. This paper develops a new method for
improving the performance of generalized moment selection (GMS) testing
procedures in finite-samples. The method modifies GMS tests by tilting the
empirical distribution in its moment selection step by an amount that maximizes
the empirical likelihood subject to the restrictions of the null hypothesis. We
characterize sets of population distributions on which a modified GMS test is
(i) asymptotically equivalent to its non-modified version to first-order, and
(ii) superior to its non-modified version according to local power when the
sample size is large enough. An important feature of the proposed modification
is that it remains computationally feasible even when the number of moment
inequalities is large. We report simulation results that show the modified
tests control size well, and have markedly improved local power over their
non-modified counterparts.",Inference for Moment Inequalities: A Constrained Moment Selection Procedure,2020-08-20 18:18:38,"Rami V. Tabri, Christopher D. Walker","http://arxiv.org/abs/2008.09021v2, http://arxiv.org/pdf/2008.09021v2",econ.EM
29267,em,"This paper proposes a novel approach to incorporate covariates in regression
discontinuity (RD) designs. We represent the covariate balance condition as
overidentifying moment restrictions. The empirical likelihood (EL) RD estimator
efficiently incorporates the information from covariate balance and thus has an
asymptotic variance no larger than that of the standard estimator without
covariates. It achieves efficiency gain under weak conditions. We resolve the
indeterminacy raised by Calonico, Cattaneo, Farrell, and Titiunik (2019, Page
448) regarding the asymptotic efficiency gain from incorporating covariates into
the RD estimator, as their estimator has the same asymptotic variance as ours. We
then propose a robust corrected EL (RCEL) confidence set which achieves the
fast n^(-1) coverage error decay rate even though the point estimator converges
at a nonparametric rate. In addition, the coverage accuracy of the RCEL
confidence set is automatically robust against slight perturbation to the
covariate balance condition, which may happen in cases such as data
contamination and misspecified ""unaffected"" outcomes used as covariates. We
also show a uniform-in-bandwidth Wilks theorem, which is useful in sensitivity
analysis for the proposed RCEL confidence set in the sense of Armstrong and
Kolesar (2018). We conduct Monte Carlo simulations to assess the finite-sample
performance of our method and also apply it to a real dataset.",Empirical Likelihood Covariate Adjustment for Regression Discontinuity Designs,2020-08-21 04:49:04,"Jun Ma, Zhengfei Yu","http://arxiv.org/abs/2008.09263v2, http://arxiv.org/pdf/2008.09263v2",econ.EM
29268,em,"Lee (2009) is a common approach to bound the average causal effect in the
presence of selection bias, assuming the treatment effect on selection has the
same sign for all subjects. This paper generalizes Lee bounds to allow the sign
of this effect to be identified by pretreatment covariates, relaxing the
standard (unconditional) monotonicity to its conditional analog. Asymptotic
theory for generalized Lee bounds is proposed in low-dimensional smooth and
high-dimensional sparse designs. The paper also generalizes Lee bounds to
accommodate multiple outcomes. It characterizes the sharp identified set for
the causal parameter and proposes uniform Gaussian inference on the support
function. The estimated bounds achieve near point-identification in the JobCorps
job training program (Lee (2009)), where unconditional monotonicity is unlikely
to hold.",Generalized Lee Bounds,2020-08-28 19:03:57,Vira Semenova,"http://arxiv.org/abs/2008.12720v3, http://arxiv.org/pdf/2008.12720v3",econ.EM
29269,em,"This paper proposes a robust method for semiparametric identification and
estimation in panel multinomial choice models, where we allow for
infinite-dimensional fixed effects that enter into consumer utilities in an
additively nonseparable way, thus incorporating rich forms of unobserved
heterogeneity. Our identification strategy exploits multivariate monotonicity
in parametric indexes, and uses the logical contraposition of an intertemporal
inequality on choice probabilities to obtain identifying restrictions. We
provide a consistent estimation procedure, and demonstrate the practical
advantages of our method with simulations and an empirical illustration with
the Nielsen data.",Robust Semiparametric Estimation in Panel Multinomial Choice Models,2020-08-31 23:00:32,"Wayne Yuan Gao, Ming Li","http://arxiv.org/abs/2009.00085v1, http://arxiv.org/pdf/2009.00085v1",econ.EM
29270,em,"Consider a setting where $N$ players, partitioned into $K$ observable types,
form a directed network. Agents' preferences over the form of the network
consist of an arbitrary network benefit function (e.g., agents may have
preferences over their network centrality) and a private component which is
additively separable in own links. This latter component allows for unobserved
heterogeneity in the costs of sending and receiving links across agents
(respectively out- and in-degree heterogeneity) as well as
homophily/heterophily across the $K$ types of agents. In contrast, the network
benefit function allows agents' preferences over links to vary with the
presence or absence of links elsewhere in the network (and hence with the link
formation behavior of their peers). In the null model which excludes the
network benefit function, links form independently across dyads in the manner
described by \cite{Charbonneau_EJ17}. Under the alternative there is
interdependence across linking decisions (i.e., strategic interaction). We show
how to test the null with power optimized in specific directions. These
alternative directions include many common models of strategic network
formation (e.g., ""connections"" models, ""structural hole"" models etc.). Our
random utility specification induces an exponential family structure under the
null which we exploit to construct a similar test which exactly controls size
(despite the null being a composite one with many nuisance parameters). We
further show how to construct locally best tests for specific alternatives
without making any assumptions about equilibrium selection. To make our tests
feasible we introduce a new MCMC algorithm for simulating the null
distributions of our test statistics.",An optimal test for strategic interaction in social and economic network formation between heterogeneous agents,2020-09-01 06:40:11,"Andrin Pelican, Bryan S. Graham","http://arxiv.org/abs/2009.00212v2, http://arxiv.org/pdf/2009.00212v2",econ.EM
29271,em,"This chapter reviews the instrumental variable quantile regression model of
Chernozhukov and Hansen (2005). We discuss the key conditions used for
identification of structural quantile effects within this model which include
the availability of instruments and a restriction on the ranks of structural
disturbances. We outline several approaches to obtaining point estimates and
performing statistical inference for model parameters. Finally, we point to
possible directions for future research.",Instrumental Variable Quantile Regression,2020-08-28 22:50:43,"Victor Chernozhukov, Christian Hansen, Kaspar Wuthrich","http://arxiv.org/abs/2009.00436v1, http://arxiv.org/pdf/2009.00436v1",econ.EM
29272,em,"When a researcher combines multiple instrumental variables for a single
binary treatment, the monotonicity assumption of the local average treatment
effects (LATE) framework can become restrictive: it requires that all units
share a common direction of response even when separate instruments are shifted
in opposing directions. What I call vector monotonicity, by contrast, simply
assumes treatment uptake to be monotonic in all instruments. I characterize the
class of causal parameters that are point identified under vector monotonicity,
when the instruments are binary. This class includes, for example, the average
treatment effect among units that are in any way responsive to the collection
of instruments, or those that are responsive to a given subset of them. The
identification results are constructive and yield a simple estimator for the
identified treatment effect parameters. An empirical application revisits the
labor market returns to college.",A Vector Monotonicity Assumption for Multiple Instruments,2020-09-01 19:38:54,Leonard Goff,"http://arxiv.org/abs/2009.00553v5, http://arxiv.org/pdf/2009.00553v5",econ.EM
29273,em,"This article investigates retirement decumulation behaviours using the
Grouped Fixed-Effects (GFE) estimator applied to Australian panel data on
drawdowns from phased withdrawal retirement income products. Behaviours
exhibited by the distinct latent groups identified suggest that retirees may
adopt simple heuristics determining how they draw down their accumulated
wealth. Two extensions to the original GFE methodology are proposed: a latent
group label-matching procedure which broadens bootstrap inference to include
the time profile estimates, and a modified estimation procedure for models with
time-invariant additive fixed effects estimated using unbalanced data.",Hidden Group Time Profiles: Heterogeneous Drawdown Behaviours in Retirement,2020-09-03 11:19:32,"Igor Balnozan, Denzil G. Fiebig, Anthony Asher, Robert Kohn, Scott A. Sisson","http://arxiv.org/abs/2009.01505v2, http://arxiv.org/pdf/2009.01505v2",econ.EM
29274,em,"Difference-in-Differences (DID) research designs usually rely on variation of
treatment timing such that, after making an appropriate parallel trends
assumption, one can identify, estimate, and make inference about causal
effects. In practice, however, different DID procedures rely on different
parallel trends assumptions (PTA), and recover different causal parameters. In
this paper, we focus on staggered DID (also referred to as event studies) and
discuss the role played by the PTA in terms of identification and estimation of
causal parameters. We document a ``robustness'' vs. ``efficiency'' trade-off in
terms of the strength of the underlying PTA, and argue that practitioners
should be explicit about these trade-offs whenever using DID procedures. We
propose new DID estimators that reflect these trade-offs and derive their
large sample properties. We illustrate the practical relevance of these results
by assessing whether the transition from federal to state management of the
Clean Water Act affects compliance rates.",The role of parallel trends in event study settings: An application to environmental economics,2020-09-04 02:56:38,"Michelle Marcus, Pedro H. C. Sant'Anna","http://arxiv.org/abs/2009.01963v1, http://arxiv.org/pdf/2009.01963v1",econ.EM
29275,em,"The paper focuses on econometrically justified robust analysis of the effects
of the COVID-19 pandemic on financial markets in different countries across the
World. It provides the results of robust estimation and inference on predictive
regressions for returns on major stock indexes in 23 countries in North and
South America, Europe, and Asia incorporating the time series of reported
infections and deaths from COVID-19. We also present a detailed study of
persistence, heavy-tailedness and tail risk properties of the time series of
the COVID-19 infections and death rates that motivate the need for robust
inference methods in the analysis. Econometrically
justified analysis is based on heteroskedasticity and autocorrelation
consistent (HAC) inference methods, recently developed robust $t$-statistic
inference approaches and robust tail index estimation.",COVID-19: Tail Risk and Predictive Regressions,2020-09-05 10:46:53,"Walter Distaso, Rustam Ibragimov, Alexander Semenov, Anton Skrobotov","http://arxiv.org/abs/2009.02486v3, http://arxiv.org/pdf/2009.02486v3",econ.EM
29276,em,"This paper examines the identification power of instrumental variables (IVs)
for average treatment effect (ATE) in partially identified models. We decompose
the ATE identification gains into components of contributions driven by IV
relevancy, IV strength, direction and degree of treatment endogeneity, and
matching via exogenous covariates. Our decomposition is demonstrated with
graphical illustrations, simulation studies and an empirical example of
childbearing and women's labour supply. Our analysis offers insights for
understanding the complex role of IVs in ATE identification and for selecting
IVs in practical policy designs. Simulations also suggest potential uses of our
analysis for detecting irrelevant instruments.",Decomposing Identification Gains and Evaluating Instrument Identification Power for Partially Identified Average Treatment Effects,2020-09-06 06:57:19,"Lina Zhang, David T. Frazier, D. S. Poskitt, Xueyan Zhao","http://arxiv.org/abs/2009.02642v3, http://arxiv.org/pdf/2009.02642v3",econ.EM
29277,em,"This paper considers the asymptotic theory of a semiparametric M-estimator
that is generally applicable to models that satisfy a monotonicity condition in
one or several parametric indexes. We call the estimator the two-stage maximum
score (TSMS) estimator since it involves a first-stage nonparametric
regression when applied to the binary choice model of Manski (1975, 1985). We
characterize the asymptotic distribution of the TSMS estimator, which features
phase transitions depending on the dimension and thus the convergence rate of
the first-stage estimation. Effectively, the first-stage nonparametric
estimator serves as an imperfect smoothing function on a non-smooth criterion
function, leading to the pivotality of the first-stage estimation error with
respect to the second-stage convergence rate and asymptotic distribution.",Two-Stage Maximum Score Estimator,2020-09-07 05:05:09,"Wayne Yuan Gao, Sheng Xu, Kan Xu","http://arxiv.org/abs/2009.02854v4, http://arxiv.org/pdf/2009.02854v4",econ.EM
29278,em,"This paper aims to decompose a large dimensional vector autoregessive (VAR)
model into two components, the first one being generated by a small-scale VAR
and the second one being a white noise sequence. Hence, a reduced number of
common components generates the entire dynamics of the large system through a
VAR structure. This modelling, which we label as the dimension-reducible VAR,
extends the common feature approach to high dimensional systems, and it differs
from the dynamic factor model in which the idiosyncratic component can also
embed a dynamic pattern. We show the conditions under which this decomposition
exists. We provide statistical tools to detect its presence in the data and to
estimate the parameters of the underlying small-scale VAR model. Based on our
methodology, we propose a novel approach to identify the shock that is
responsible for most of the common variability at the business cycle
frequencies. We evaluate the practical value of the proposed methods by
simulations as well as by an empirical application to a large set of US
economic variables.",Dimension Reduction for High Dimensional Vector Autoregressive Models,2020-09-07 21:30:28,"Gianluca Cubadda, Alain Hecq","http://arxiv.org/abs/2009.03361v3, http://arxiv.org/pdf/2009.03361v3",econ.EM
29279,em,"We introduce the local composite quantile regression (LCQR) to causal
inference in regression discontinuity (RD) designs. Kai et al. (2010) study the
efficiency property of LCQR, while we show that its nice boundary performance
translates to accurate estimation of treatment effects in RD under a variety of
data generating processes. Moreover, we propose a bias-corrected and standard
error-adjusted t-test for inference, which leads to confidence intervals with
good coverage probabilities. A bandwidth selector is also discussed. For
illustration, we conduct a simulation study and revisit a classic example from
Lee (2008). A companion R package rdcqr is developed.",Local Composite Quantile Regression for Regression Discontinuity,2020-09-08 16:10:46,"Xiao Huang, Zhaoguo Zhan","http://arxiv.org/abs/2009.03716v3, http://arxiv.org/pdf/2009.03716v3",econ.EM
29280,em,"In this paper we provide a computation algorithm to get a global solution for
the maximum rank correlation estimator using the mixed integer programming
(MIP) approach. We construct a new constrained optimization problem by
transforming all indicator functions into binary parameters to be estimated and
show that it is equivalent to the original problem. We also consider an
application of the best subset rank prediction and show that the original
optimization problem can be reformulated as MIP. We derive the non-asymptotic
bound for the tail probability of the predictive performance measure. We
investigate the performance of the MIP algorithm by an empirical example and
Monte Carlo simulations.",Exact Computation of Maximum Rank Correlation Estimator,2020-09-08 19:21:58,"Youngki Shin, Zvezdomir Todorov","http://dx.doi.org/10.1093/ectj/utab013, http://arxiv.org/abs/2009.03844v2, http://arxiv.org/pdf/2009.03844v2",econ.EM
29281,em,"This paper proposes a Bayesian approach to perform inference regarding the
size of hidden populations in analytical regions using reported statistics. To
do so, we propose a specification taking into account one-sided error
components and spatial effects within a panel data structure. Our simulation
exercises suggest good finite sample performance. We analyze rates of crime
suspects living per neighborhood in Medell\'in (Colombia) associated with four
crime activities. Our proposal seems to identify hot spots or ""crime
communities"", potential neighborhoods where under-reporting is more severe, and
also drivers of crime schools. Statistical evidence suggests a high level of
interaction between homicides and drug dealing on the one hand, and motorcycle and
car thefts on the other.",Inferring hidden potentials in analytical regions: uncovering crime suspect communities in Medellín,2020-09-11 14:57:38,"Alejandro Puerta, Andrés Ramírez-Hassan","http://arxiv.org/abs/2009.05360v1, http://arxiv.org/pdf/2009.05360v1",econ.EM
29282,em,"This paper proposes an algorithm for computing regularized solutions to
linear rational expectations models. The algorithm allows for regularization
cross-sectionally as well as across frequencies. A variety of numerical
examples illustrate the advantage of regularization.",Regularized Solutions to Linear Rational Expectations Models,2020-09-13 01:51:06,Majid M. Al-Sadoon,"http://arxiv.org/abs/2009.05875v3, http://arxiv.org/pdf/2009.05875v3",econ.EM
29283,em,"In line with the recent policy discussion on the use of macroprudential
measures to respond to cross-border risks arising from capital flows, this
paper tries to quantify to what extent macroprudential policies (MPPs) have
been able to stabilize capital flows in Central, Eastern and Southeastern
Europe (CESEE) -- a region that experienced a substantial boom-bust cycle in
capital flows amid the global financial crisis and where policymakers had been
quite active in adopting MPPs already before that crisis. To study the dynamic
responses of capital flows to MPP shocks, we propose a novel regime-switching
factor-augmented vector autoregressive (FAVAR) model. It allows us to capture
potential structural breaks in the policy regime and to control -- besides
domestic macroeconomic quantities -- for the impact of global factors such as
the global financial cycle. Feeding into this model a novel intensity-adjusted
macroprudential policy index, we find that tighter MPPs may be effective in
containing domestic private sector credit growth and the volumes of gross
capital inflows in a majority of the countries analyzed. However, they do not
seem to generally shield CESEE countries from capital flow volatility.",Capital Flows and the Stabilizing Role of Macroprudential Policies in CESEE,2020-09-10 21:17:11,"Markus Eller, Niko Hauzenberger, Florian Huber, Helene Schuberth, Lukas Vashold","http://arxiv.org/abs/2009.06391v1, http://arxiv.org/pdf/2009.06391v1",econ.EM
29284,em,"This paper studies the semi-parametric identification and estimation of a
rational inattention model with Bayesian persuasion. The identification
requires the observation of a cross-section of market-level outcomes. The
empirical content of the model can be characterized by three moment conditions.
A two-step estimation procedure is proposed to avoid computation complexity in
the structural model. In the empirical application, I study the persuasion
effect of Fox News in the 2000 presidential election. Welfare analysis shows
that persuasion will not influence voters with high school education but will
generate higher dispersion in the welfare of voters with a partial college
education and decrease the dispersion in the welfare of voters with a bachelor's
degree.",Identification and Estimation of A Rational Inattention Discrete Choice Model with Bayesian Persuasion,2020-09-17 06:41:44,Moyu Liao,"http://arxiv.org/abs/2009.08045v1, http://arxiv.org/pdf/2009.08045v1",econ.EM
29285,em,"We consider fixed effects binary choice models with a fixed number of periods
$T$ and regressors without a large support. If the time-varying unobserved
terms are i.i.d. with known distribution $F$, \cite{chamberlain2010} shows that
the common slope parameter is point identified if and only if $F$ is logistic.
However, his proof only considers $T=2$. We show that the result does not
generalize to $T\geq 3$: the common slope parameter can be identified when $F$
belongs to a family including the logit distribution. Identification is based
on a conditional moment restriction. Under restrictions on the covariates,
these moment conditions lead to point identification of relative effects. If
$T=3$ and mild conditions hold, GMM estimators based on these conditional
moment restrictions reach the semiparametric efficiency bound. Finally, we
illustrate our method by revisiting Brender and Drazen (2008).",Fixed Effects Binary Choice Models with Three or More Periods,2020-09-17 10:04:46,"Laurent Davezies, Xavier D'Haultfoeuille, Martin Mugnier","http://arxiv.org/abs/2009.08108v4, http://arxiv.org/pdf/2009.08108v4",econ.EM
29286,em,"We address the issue of semiparametric efficiency in the bivariate regression
problem with a highly persistent predictor, where the joint distribution of the
innovations is regarded as an infinite-dimensional nuisance parameter. Using a
structural representation of the limit experiment and exploiting invariance
relationships therein, we construct invariant point-optimal tests for the
regression coefficient of interest. This approach naturally leads to a family
of feasible tests based on the component-wise ranks of the innovations that can
gain considerable power relative to existing tests under non-Gaussian
innovation distributions, while behaving equivalently under Gaussianity. When
an i.i.d. assumption on the innovations is appropriate for the data at hand,
our tests exploit the possible efficiency gains. Moreover, we show by
simulation that our test remains well behaved under some forms of conditional
heteroskedasticity.",Semiparametric Testing with Highly Persistent Predictors,2020-09-17 16:36:56,"Bas Werker, Bo Zhou","http://arxiv.org/abs/2009.08291v1, http://arxiv.org/pdf/2009.08291v1",econ.EM
29287,em,"This paper considers the problem of testing whether there exists a
non-negative solution to a possibly under-determined system of linear equations
with known coefficients. This hypothesis testing problem arises naturally in a
number of settings, including random coefficient, treatment effect, and
discrete choice models, as well as a class of linear programming problems. As a
first contribution, we obtain a novel geometric characterization of the null
hypothesis in terms of identified parameters satisfying an infinite set of
inequality restrictions. Using this characterization, we devise a test that
requires solving only linear programs for its implementation, and thus remains
computationally feasible in the high-dimensional applications that motivate our
analysis. The asymptotic size of the proposed test is shown to equal at most
the nominal level uniformly over a large class of distributions that permits
the number of linear equations to grow with the sample size.",Inference for Large-Scale Linear Systems with Known Coefficients,2020-09-18 03:24:45,"Zheng Fang, Andres Santos, Azeem M. Shaikh, Alexander Torgovitsky","http://arxiv.org/abs/2009.08568v2, http://arxiv.org/pdf/2009.08568v2",econ.EM
29288,em,"The issue of missing network links in partially observed networks is
frequently neglected in empirical studies. This paper aims to address this
issue when investigating spillovers of program benefits in the presence of
network interactions. We focus on a nonparametric outcome model that allows for
flexible forms of heterogeneity, and we propose two methods to point identify
the treatment and spillover effects in the case of bounded network degree. When
the network degree is unbounded, these methods can be used to mitigate the bias
caused by missing network links. The first method utilizes two
easily-constructed network measures based on the incoming and outgoing links.
The second method relies on a single network measure with a known rate of
missing links. We demonstrate the effective bias reduction provided by our
methods using Monte Carlo experiments and a naturalistic simulation on
real-world school network data. Finally, we re-examine the spillovers of home
computer use on children's self-empowered learning.",Spillovers of Program Benefits with Missing Network Links,2020-09-21 08:04:57,Lina Zhang,"http://arxiv.org/abs/2009.09614v2, http://arxiv.org/pdf/2009.09614v2",econ.EM
29289,em,"By exploiting McFadden (1974)'s results on conditional logit estimation, we
show that there exists a one-to-one mapping between existence and uniqueness of
conditional maximum likelihood estimates of the binary logit model with fixed
effects and the configuration of data points. Our results extend those in
Albert and Anderson (1984) for the cross-sectional case and can be used to
build a simple algorithm that detects spurious estimates in finite samples. As
an illustration, we exhibit an artificial dataset for which Stata's command
\texttt{clogit} returns spurious estimates.",On the Existence of Conditional Maximum Likelihood Estimates of the Binary Logit Model with Fixed Effects,2020-09-21 19:24:44,Martin Mugnier,"http://arxiv.org/abs/2009.09998v3, http://arxiv.org/pdf/2009.09998v3",econ.EM
29291,em,"For counterfactual policy evaluation, it is important to ensure that
treatment parameters are relevant to policies in question. This is especially
challenging under unobserved heterogeneity, as is well featured in the
definition of the local average treatment effect (LATE). Being intrinsically
local, the LATE is known to lack external validity in counterfactual
environments. This paper investigates the possibility of extrapolating local
treatment effects to different counterfactual settings when instrumental
variables are only binary. We propose a novel framework to systematically
calculate sharp nonparametric bounds on various policy-relevant treatment
parameters that are defined as weighted averages of the marginal treatment
effect (MTE). Our framework is flexible enough to fully incorporate statistical
independence (rather than mean independence) of instruments and a large menu of
identifying assumptions beyond the shape restrictions on the MTE that have been
considered in prior studies. We apply our method to understand the effects of
medical insurance policies on the use of medical services.",A Computational Approach to Identification of Treatment Effects for Policy Evaluation,2020-09-29 11:35:24,"Sukjin Han, Shenshen Yang","http://arxiv.org/abs/2009.13861v4, http://arxiv.org/pdf/2009.13861v4",econ.EM
29292,em,"We study efficiency improvements in estimating a vector of potential outcome
means using linear regression adjustment when there are more than two treatment
levels. We show that using separate regression adjustments for each assignment
level is never worse, asymptotically, than using the subsample averages. We
also show that separate regression adjustment improves over pooled regression
adjustment except in the obvious case where slope parameters in the linear
projections are identical across the different assignment levels. We also
characterize the class of nonlinear regression adjustment methods that preserve
consistency of the potential outcome means despite arbitrary misspecification
of the conditional mean functions. Finally, we apply this general potential
outcomes framework to a contingent valuation study for estimating lower bound
mean willingness to pay for an oil spill prevention program in California.",Robust and Efficient Estimation of Potential Outcome Means under Random Assignment,2020-10-05 09:21:02,"Akanksha Negi, Jeffrey M. Wooldridge","http://arxiv.org/abs/2010.01800v1, http://arxiv.org/pdf/2010.01800v1",econ.EM
29293,em,"The literature on dynamic discrete games often assumes that the conditional
choice probabilities and the state transition probabilities are homogeneous
across markets and over time. We refer to this as the ""homogeneity assumption""
in dynamic discrete games. This assumption enables empirical studies to
estimate the game's structural parameters by pooling data from multiple markets
and from many time periods. In this paper, we propose a hypothesis test to
evaluate whether the homogeneity assumption holds in the data. Our hypothesis
test is the result of an approximate randomization test, implemented via a
Markov chain Monte Carlo (MCMC) algorithm. We show that our hypothesis test
becomes valid as the (user-defined) number of MCMC draws diverges, for any
fixed number of markets, time periods, and players. We apply our test to the
empirical study of the U.S.\ Portland cement industry in Ryan (2012).",Testing homogeneity in dynamic discrete games in finite samples,2020-10-05 22:32:32,"Federico A. Bugni, Jackson Bunting, Takuya Ura","http://arxiv.org/abs/2010.02297v2, http://arxiv.org/pdf/2010.02297v2",econ.EM
29294,em,"This comment points out a serious flaw in the article ""Gouri\'eroux, Monfort,
Renne (2019): Identification and Estimation in Non-Fundamental Structural VARMA
Models"" with regard to mirroring complex-valued roots with Blaschke polynomial
matrices. Moreover, the (non-) feasibility of the proposed method (if the
handling of Blaschke transformation were not prohibitive) for cross-sectional
dimensions greater than two and vector moving average (VMA) polynomial matrices
of degree greater than one is discussed.","Comment on Gouriéroux, Monfort, Renne (2019): Identification and Estimation in Non-Fundamental Structural VARMA Models",2020-10-06 16:33:46,Bernd Funovits,"http://arxiv.org/abs/2010.02711v1, http://arxiv.org/pdf/2010.02711v1",econ.EM
29295,em,"This note provides additional interpretation for the counterfactual outcome
distribution and corresponding unconditional quantile ""effects"" defined and
estimated by Firpo, Fortin, and Lemieux (2009) and Chernozhukov,
Fern\'andez-Val, and Melly (2013). With conditional independence of the policy
variable of interest, these methods estimate the policy effect for certain
types of policies, but not others. In particular, they estimate the effect of a
policy change that itself satisfies conditional independence.",Interpreting Unconditional Quantile Regression with Conditional Independence,2020-10-07 22:04:41,David M. Kaplan,"http://arxiv.org/abs/2010.03606v2, http://arxiv.org/pdf/2010.03606v2",econ.EM
29296,em,"This paper proposes a test for the joint hypothesis of correct dynamic
specification and no omitted latent factors for the Quantile Autoregression. If
the composite null is rejected, we proceed to disentangle the cause of
rejection, i.e., dynamic misspecification or an omitted variable. We establish
the asymptotic distribution of the test statistics under fairly weak conditions
and show that factor estimation error is negligible. A Monte Carlo study shows
that the suggested tests have good finite sample properties. Finally, we
undertake an empirical illustration of modelling GDP growth and CPI inflation
in the United Kingdom, where we find evidence that factor augmented models are
correctly specified in contrast with their non-augmented counterparts when it
comes to GDP growth, while also exploring the asymmetric behaviour of the
growth and inflation distributions.",Consistent Specification Test of the Quantile Autoregression,2020-10-08 13:50:28,Anthoulla Phella,"http://arxiv.org/abs/2010.03898v1, http://arxiv.org/pdf/2010.03898v1",econ.EM
29297,em,"In this paper, we establish sufficient conditions for identifying treatment
effects on continuous outcomes in endogenous and multi-valued discrete
treatment settings with unobserved heterogeneity. We employ the monotonicity
assumption for multi-valued discrete treatments and instruments, and our
identification condition has a clear economic interpretation. In addition, we
identify the local treatment effects in multi-valued treatment settings and
derive closed-form expressions of the identified treatment effects. We provide
examples to illustrate the usefulness of our result.",Identification of multi-valued treatment effects with unobserved heterogeneity,2020-10-09 09:17:56,Koki Fusejima,"http://arxiv.org/abs/2010.04385v5, http://arxiv.org/pdf/2010.04385v5",econ.EM
29324,em,"Consider a causal structure with endogeneity (i.e., unobserved
confoundedness) in empirical data, where an instrumental variable is available.
In this setting, we show that the mean social welfare function can be
identified and represented via the marginal treatment effect (MTE, Bjorklund
and Moffitt, 1987) as the operator kernel. This representation result can be
applied to a variety of statistical decision rules for treatment choice,
including plug-in rules, Bayes rules, and empirical welfare maximization (EWM)
rules as in Hirano and Porter (2020, Section 2.3). Focusing on the application
to the EWM framework of Kitagawa and Tetenov (2018), we provide convergence
rates of the worst case average welfare loss (regret) in the spirit of Manski
(2004).",Welfare Analysis via Marginal Treatment Effects,2020-12-14 18:16:03,"Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/2012.07624v1, http://arxiv.org/pdf/2012.07624v1",econ.EM
29298,em,"We prove the asymptotic properties of the maximum likelihood estimator (MLE)
in time-varying transition probability (TVTP) regime-switching models. This
class of models extends the constant regime transition probability in
Markov-switching models to a time-varying probability by including information
from observations. An important feature in this proof is the mixing rate of the
regime process conditional on the observations, which is time varying owing to
the time-varying transition probabilities. Consistency and asymptotic normality
follow from the almost deterministic geometrically decaying bound of the mixing
rate. The assumptions are verified in regime-switching autoregressive models
with widely-applied TVTP specifications. A simulation study examines the
finite-sample distributions of the MLE and compares the estimates of the
asymptotic variance constructed from the Hessian matrix and the outer product
of the score. The simulation results favour the latter. As an empirical
example, we compare three leading economic indicators in terms of describing
U.S. industrial production.",Asymptotic Properties of the Maximum Likelihood Estimator in Regime-Switching Models with Time-Varying Transition Probabilities,2020-10-10 10:33:47,"Chaojun Li, Yan Liu","http://arxiv.org/abs/2010.04930v3, http://arxiv.org/pdf/2010.04930v3",econ.EM
29299,em,"In this paper, we are interested in testing if the volatility process is
constant or not during a given time span by using high-frequency data with the
presence of jumps and microstructure noise. Based on estimators of integrated
volatility and spot volatility, we propose a nonparametric way to depict the
discrepancy between local variation and global variation. We show that our
proposed test estimator converges to a standard normal distribution if the
volatility is constant, otherwise it diverges to infinity. Simulation studies
verify the theoretical results and show a good finite sample performance of the
test procedure. We also apply our test procedure to do the heteroscedasticity
test for some real high-frequency financial data. We observe that in almost
half of the days tested, the assumption of constant volatility within a day is
violated. And this is due to that the stock prices during opening and closing
periods are highly volatile and account for a relative large proportion of
intraday variation.",Heteroscedasticity test of high-frequency data with jumps and microstructure noise,2020-10-15 13:51:42,"Qiang Liu, Zhi Liu, Chuanhai Zhang","http://arxiv.org/abs/2010.07659v1, http://arxiv.org/pdf/2010.07659v1",econ.EM
29300,em,"Measuring model risk is required by regulators on financial and insurance
markets. We separate model risk into parameter estimation risk and model
specification risk, and we propose expected shortfall type model risk measures
applied to Levy jump models and affine jump-diffusion models. We investigate
the impact of parameter estimation risk and model specification risk on the
models' ability to capture the joint dynamics of stock and option prices. We
estimate the parameters using Markov chain Monte Carlo techniques, under the
risk-neutral probability measure and the real-world probability measure
jointly. We find strong evidence supporting modeling of price jumps.",Measures of Model Risk in Continuous-time Finance Models,2020-10-16 05:30:19,"Emese Lazar, Shuyuan Qi, Radu Tunaru","http://arxiv.org/abs/2010.08113v2, http://arxiv.org/pdf/2010.08113v2",econ.EM
29301,em,"Synchronization is a phenomenon in which a pair of fluctuations adjust their
rhythms when interacting with each other. We measure the degree of
synchronization between the U.S. dollar (USD) and euro exchange rates and
between the USD and Japanese yen exchange rates on the basis of purchasing
power parity (PPP) over time. We employ a method of synchronization analysis
using the Hilbert transform, which is common in the field of nonlinear science.
We find that the degree of synchronization is high most of the time, suggesting
the establishment of PPP. The degree of synchronization does not remain high
across periods with economic events with asymmetric effects, such as the U.S.
real estate bubble.",Synchronization analysis between exchange rates on the basis of purchasing power parity using the Hilbert transform,2020-10-17 19:53:23,"Makoto Muto, Yoshitaka Saiki","http://arxiv.org/abs/2010.08825v2, http://arxiv.org/pdf/2010.08825v2",econ.EM
29302,em,"Decomposition methods are often used for producing counterfactual predictions
in non-strategic settings. When the outcome of interest arises from a
game-theoretic setting where agents are better off by deviating from their
strategies after a new policy, such predictions, despite their practical
simplicity, are hard to justify. We present conditions in Bayesian games under
which the decomposition-based predictions coincide with the equilibrium-based
ones. In many games, such coincidence follows from an invariance condition for
equilibrium selection rules. To illustrate our message, we revisit an empirical
analysis in Ciliberto and Tamer (2009) on firms' entry decisions in the airline
industry.",A Decomposition Approach to Counterfactual Analysis in Game-Theoretic Models,2020-10-18 00:17:10,"Nathan Canen, Kyungchul Song","http://arxiv.org/abs/2010.08868v6, http://arxiv.org/pdf/2010.08868v6",econ.EM
29303,em,"This paper tackles forecast combination with many forecasts or minimum
variance portfolio selection with many assets. A novel convex problem called
L2-relaxation is proposed. In contrast to standard formulations, L2-relaxation
minimizes the squared Euclidean norm of the weight vector subject to a set of
relaxed linear inequality constraints. The magnitude of relaxation, controlled
by a tuning parameter, balances the bias and variance. When the
variance-covariance (VC) matrix of the individual forecast errors or financial
assets exhibits latent group structures -- a block equicorrelation matrix plus
a VC for idiosyncratic noises, the solution to L2-relaxation delivers roughly
equal within-group weights. Optimality of the new method is established under
the asymptotic framework when the number of the cross-sectional units $N$
potentially grows much faster than the time dimension $T$. Excellent finite
sample performance of our method is demonstrated in Monte Carlo simulations.
Its wide applicability is highlighted in three real data examples concerning
empirical applications of microeconomics, macroeconomics, and finance.",L2-Relaxation: With Applications to Forecast Combination and Portfolio Analysis,2020-10-19 16:20:46,"Zhentao Shi, Liangjun Su, Tian Xie","http://arxiv.org/abs/2010.09477v2, http://arxiv.org/pdf/2010.09477v2",econ.EM
29304,em,"In this paper, we propose a new nonparametric estimator of time-varying
forecast combination weights. When the number of individual forecasts is small,
we study the asymptotic properties of the local linear estimator. When the
number of candidate forecasts exceeds or diverges with the sample size, we
consider penalized local linear estimation with the group SCAD penalty. We show
that the estimator exhibits the oracle property and correctly selects relevant
forecasts with probability approaching one. Simulations indicate that the
proposed estimators outperform existing combination schemes when structural
changes exist. Two empirical studies on inflation forecasting and equity
premium prediction highlight the merits of our approach relative to other
popular methods.",Time-varying Forecast Combination for High-Dimensional Data,2020-10-20 19:45:41,"Bin Chen, Kenwin Maung","http://arxiv.org/abs/2010.10435v1, http://arxiv.org/pdf/2010.10435v1",econ.EM
29305,em,"This paper revisits the simple, but empirically salient, problem of inference
on a real-valued parameter that is partially identified through upper and lower
bounds with asymptotically normal estimators. A simple confidence interval is
proposed and is shown to have the following properties:
  - It is never empty or awkwardly short, including when the sample analog of
the identified set is empty.
  - It is valid for a well-defined pseudotrue parameter whether or not the
model is well-specified.
  - It involves no tuning parameters and minimal computation.
  Computing the interval requires concentrating out one scalar nuisance
parameter. In most cases, the practical result will be simple: To achieve 95%
coverage, report the union of a simple 90% (!) confidence interval for the
identified set and a standard 95% confidence interval for the pseudotrue
parameter.
  For uncorrelated estimators -- notably if bounds are estimated from distinct
subsamples -- and conventional coverage levels, validity of this simple
procedure can be shown analytically. The case obtains in the motivating
empirical application (de Quidt, Haushofer, and Roth, 2018), in which
improvement over existing inference methods is demonstrated. More generally,
simulations suggest that the novel confidence interval has excellent length and
size control. This is partly because, in anticipation of never being empty, the
interval can be made shorter than conventional ones in relevant regions of
sample space.","A Simple, Short, but Never-Empty Confidence Interval for Partially Identified Parameters",2020-10-20 20:40:08,Jörg Stoye,"http://arxiv.org/abs/2010.10484v3, http://arxiv.org/pdf/2010.10484v3",econ.EM
29306,em,"We propose a test for a covariance matrix to have Kronecker Product Structure
(KPS). KPS implies a reduced rank restriction on a certain transformation of
the covariance matrix and the new procedure is an adaptation of the Kleibergen
and Paap (2006) reduced rank test. Deriving the limiting distribution of the
Wald-type test statistic proves challenging, partly because of the singularity
of the covariance matrix estimator that appears in the weighting matrix. We
show that the test statistic has a chi square limiting null distribution with
degrees of freedom equal to the number of restrictions tested. Local asymptotic
power results are derived. Monte Carlo simulations reveal good size and power
properties of the test. Re-examining fifteen highly cited papers conducting
instrumental variable regressions, we find that KPS is not rejected in 56 out
of 118 specifications at the 5% nominal size.",A Test for Kronecker Product Structure Covariance Matrix,2020-10-21 15:56:21,"Patrik Guggenberger, Frank Kleibergen, Sophocles Mavroeidis","http://arxiv.org/abs/2010.10961v4, http://arxiv.org/pdf/2010.10961v4",econ.EM
29307,em,"Estimation and inference in dynamic discrete choice models often relies on
approximation to lower the computational burden of dynamic programming.
Unfortunately, the use of approximation can impart substantial bias in
estimation and results in invalid confidence sets. We present a method for set
estimation and inference that explicitly accounts for the use of approximation
and is thus valid regardless of the approximation error. We show how one can
account for the error from approximation at low computational cost. Our
methodology allows researchers to assess the estimation error due to the use of
approximation and thus more effectively manage the trade-off between bias and
computational expedience. We provide simulation evidence to demonstrate the
practicality of our approach.",Approximation-Robust Inference in Dynamic Discrete Choice,2020-10-22 10:07:18,Ben Deaner,"http://arxiv.org/abs/2010.11482v1, http://arxiv.org/pdf/2010.11482v1",econ.EM
29308,em,"This paper considers forecasts of the growth and inflation distributions of
the United Kingdom with factor-augmented quantile autoregressions under a model
averaging framework. We investigate model combinations across models using
weights that minimise the Akaike Information Criterion (AIC), the Bayesian
Information Criterion (BIC), the Quantile Regression Information Criterion
(QRIC) as well as the leave-one-out cross validation criterion. The unobserved
factors are estimated by principal components of a large panel with N
predictors over T periods under a recursive estimation scheme. We apply the
aforementioned methods to the UK GDP growth and CPI inflation rate. We find
that, on average, for GDP growth, in terms of coverage and final prediction
error, the equal weights or the weights obtained by the AIC and BIC perform
equally well but are outperformed by the QRIC and the Jackknife approach on the
majority of the quantiles of interest. In contrast, the naive QAR(1) model of
inflation outperforms all model averaging methodologies.",Forecasting With Factor-Augmented Quantile Autoregressions: A Model Averaging Approach,2020-10-23 12:47:00,Anthoulla Phella,"http://arxiv.org/abs/2010.12263v1, http://arxiv.org/pdf/2010.12263v1",econ.EM
29309,em,"We provide estimation methods for nonseparable panel models based on low-rank
factor structure approximations. The factor structures are estimated by
matrix-completion methods to deal with the computational challenges of
principal component analysis in the presence of missing data. We show that the
resulting estimators are consistent in large panels, but suffer from
approximation and shrinkage biases. We correct these biases using matching and
difference-in-differences approaches. Numerical examples and an empirical
application to the effect of election day registration on voter turnout in the
U.S. illustrate the properties and usefulness of our methods.",Low-Rank Approximations of Nonseparable Panel Models,2020-10-23 17:31:41,"Iván Fernández-Val, Hugo Freeman, Martin Weidner","http://arxiv.org/abs/2010.12439v2, http://arxiv.org/pdf/2010.12439v2",econ.EM
29310,em,"We introduce two models of non-parametric random utility for demand systems:
the stochastic absolute risk aversion (SARA) model, and the stochastic
safety-first (SSF) model. In each model, individual-level heterogeneity is
characterized by a distribution $\pi\in\Pi$ of taste parameters, and
heterogeneity across consumers is introduced using a distribution $F$ over the
distributions in $\Pi$. Demand is non-separable and heterogeneity is
infinite-dimensional. Both models admit corner solutions. We consider two
frameworks for estimation: a Bayesian framework in which $F$ is known, and a
hyperparametric (or empirical Bayesian) framework in which $F$ is a member of a
known parametric family. Our methods are illustrated by an application to a
large U.S. panel of scanner data on alcohol consumption.",Consumer Theory with Non-Parametric Taste Uncertainty and Individual Heterogeneity,2020-10-27 01:48:55,"Christopher Dobronyi, Christian Gouriéroux","http://arxiv.org/abs/2010.13937v4, http://arxiv.org/pdf/2010.13937v4",econ.EM
29379,em,"Suppose a researcher observes individuals within a county within a state.
Given concerns about correlation across individuals, it is common to group
observations into clusters and conduct inference treating observations across
clusters as roughly independent. However, a researcher that has chosen to
cluster at the county level may be unsure of their decision, given knowledge
that observations are independent across states. This paper proposes a modified
randomization test as a robustness check for the chosen level of clustering in
a linear regression setting. Existing tests require either the number of states
or number of counties to be large. Our method is designed for settings with few
states and few counties. While the method is conservative, it has competitive
power in settings that may be relevant to empirical work.",A Modified Randomization Test for the Level of Clustering,2021-05-03 19:54:25,Yong Cai,"http://arxiv.org/abs/2105.01008v2, http://arxiv.org/pdf/2105.01008v2",econ.EM
29311,em,"The e-commerce delivery demand has grown rapidly in the past two decades and
such trend has accelerated tremendously due to the ongoing coronavirus
pandemic. Given the situation, the need for predicting e-commerce delivery
demand and evaluating relevant logistics solutions is increasing. However, the
existing simulation models for e-commerce delivery demand are still limited and
do not consider the delivery options and their attributes that shoppers face
when placing e-commerce orders. We propose a novel modeling framework which
jointly predicts the average total value of e-commerce purchase, the purchase
amount per transaction, and delivery option choices. The proposed framework can
simulate the changes in e-commerce delivery demand attributable to the changes
in delivery options. We assume the model parameters based on various sources of
relevant information and conduct a demonstrative sensitivity analysis.
Furthermore, we have applied the model to the simulation for the
Auto-Innovative Prototype city. While the calibration of the model using
real-world survey data is required, the result of the analysis highlights the
applicability of the proposed framework.",E-Commerce Delivery Demand Modeling Framework for An Agent-Based Simulation Platform,2020-10-27 18:39:36,"Takanori Sakai, Yusuke Hara, Ravi Seshadri, André Alho, Md Sami Hasnine, Peiyu Jing, ZhiYuan Chua, Moshe Ben-Akiva","http://arxiv.org/abs/2010.14375v1, http://arxiv.org/pdf/2010.14375v1",econ.EM
29312,em,"This paper presents an empirical study of spatial origin and destination
effects of European regional FDI dyads. Recent regional studies primarily focus
on locational determinants, but ignore bilateral origin- and intervening
factors, as well as associated spatial dependence. This paper fills this gap by
using observations on interregional FDI flows within a spatially augmented
Poisson interaction model. We explicitly distinguish FDI activities between
three different stages of the value chain. Our results provide important
insights on drivers of regional FDI activities, both from origin and
destination perspectives. We moreover show that spatial dependence plays a key
role in both dimensions.",Modeling European regional FDI flows using a Bayesian spatial Poisson interaction model,2020-10-28 13:06:40,"Tamás Krisztin, Philipp Piribauer","http://arxiv.org/abs/2010.14856v1, http://arxiv.org/pdf/2010.14856v1",econ.EM
29313,em,"This paper studies the identification and estimation of policy effects when
treatment status is binary and endogenous. We introduce a new class of
unconditional marginal treatment effects (MTE) based on the influence function
of the functional underlying the policy target. We show that an unconditional
policy effect can be represented as a weighted average of the newly defined
unconditional MTEs over the individuals who are indifferent about their
treatment status. We provide conditions for point identification of the
unconditional policy effects. When a quantile is the functional of interest, we
introduce the UNconditional Instrumental Quantile Estimator (UNIQUE) and
establish its consistency and asymptotic distribution. In the empirical
application, we estimate the effect of changing college enrollment status,
induced by higher tuition subsidy, on the quantiles of the wage distribution.",Identification and Estimation of Unconditional Policy Effects of an Endogenous Binary Treatment: an Unconditional MTE Approach,2020-10-29 21:09:36,"Julian Martinez-Iriarte, Yixiao Sun","http://arxiv.org/abs/2010.15864v4, http://arxiv.org/pdf/2010.15864v4",econ.EM
29314,em,"Restricting randomization in the design of experiments (e.g., using
blocking/stratification, pair-wise matching, or rerandomization) can improve
the treatment-control balance on important covariates and therefore improve the
estimation of the treatment effect, particularly for small- and medium-sized
experiments. Existing guidance on how to identify these variables and implement
the restrictions is incomplete and conflicting. We show that these differences
arise mainly because what is important in the pre-treatment data may not carry
over to the post-treatment data. We highlight settings where there is
sufficient data to provide clear guidance and outline improved methods to
mostly automate the process using modern machine learning (ML) techniques. We
show in simulations using real-world data, that these methods reduce both the
mean squared error of the estimate (14%-34%) and the size of the standard error
(6%-16%).",Machine Learning for Experimental Design: Methods for Improved Blocking,2020-10-30 01:02:18,"Brian Quistorff, Gentry Johnson","http://arxiv.org/abs/2010.15966v1, http://arxiv.org/pdf/2010.15966v1",econ.EM
29315,em,"We study testable implications of multiple equilibria in discrete games with
incomplete information. Unlike de Paula and Tang (2012), we allow the players'
private signals to be correlated. In static games, we leverage independence of
private types across games whose equilibrium selection is correlated. In
dynamic games with serially correlated discrete unobserved heterogeneity, our
testable implication builds on the fact that the distribution of a sequence of
choices and states is a mixture over equilibria and unobserved heterogeneity.
The number of mixture components is a known function of the length of the
sequence as well as the cardinality of equilibria and unobserved heterogeneity
support. In both static and dynamic cases, these testable implications are
implementable using existing statistical tools.",Testable Implications of Multiple Equilibria in Discrete Games with Correlated Types,2020-12-01 22:29:42,"Aureo de Paula, Xun Tang","http://arxiv.org/abs/2012.00787v1, http://arxiv.org/pdf/2012.00787v1",econ.EM
29316,em,"The COVID-19 pandemic has caused severe disruption to economic and financial
activity worldwide. We assess what happened to the aggregate U.S. stock market
during this period, including implications for both short and long-horizon
investors. Using the model of Maheu, McCurdy and Song (2012), we provide
smoothed estimates and out-of-sample forecasts associated with stock market
dynamics during the pandemic. We identify bull and bear market regimes
including their bull correction and bear rally components, demonstrate the
model's performance in capturing periods of significant regime change, and
provide forecasts that improve risk management and investment decisions. The
paper concludes with out-of-sample forecasts of market states one year ahead.",Bull and Bear Markets During the COVID-19 Pandemic,2020-12-03 04:15:36,"John M. Maheu, Thomas H. McCurdy, Yong Song","http://arxiv.org/abs/2012.01623v1, http://arxiv.org/pdf/2012.01623v1",econ.EM
29317,em,"The properties of Maximum Likelihood estimator in mixed causal and noncausal
models with a generalized Student's t error process are reviewed. Several known
existing methods are typically not applicable in the heavy-tailed framework. To
this end, a new approach to make inference on causal and noncausal parameters
in finite sample sizes is proposed. It exploits the empirical variance of the
generalized Student's-t, without the existence of population variance. Monte
Carlo simulations show a good performance of the new variance construction for
fat tail series. Finally, different existing approaches are compared using
three empirical applications: the variation of daily COVID-19 deaths in
Belgium, the monthly wheat prices, and the monthly inflation rate in Brazil.",Inference in mixed causal and noncausal models with generalized Student's t-distributions,2020-12-03 16:10:16,"Francesco Giancaterini, Alain Hecq","http://arxiv.org/abs/2012.01888v2, http://arxiv.org/pdf/2012.01888v2",econ.EM
29318,em,"A fundamental question underlying the literature on partial identification
is: what can we learn about parameters that are relevant for policy but not
necessarily point-identified by the exogenous variation we observe? This paper
provides an answer in terms of sharp, analytic characterizations and bounds for
an important class of policy-relevant treatment effects, consisting of marginal
treatment effects and linear functionals thereof, in the latent index selection
model as formalized in Vytlacil (2002). The sharp bounds use the full content
of identified marginal distributions, and analytic derivations rely on the
theory of stochastic orders. The proposed methods also make it possible to
sharply incorporate new auxiliary assumptions on distributions into the latent
index selection framework. Empirically, I apply the methods to study the
effects of Medicaid on emergency room utilization in the Oregon Health
Insurance Experiment, showing that the predictions from extrapolations based on
a distribution assumption (rank similarity) differ substantively and
consistently from existing extrapolations based on a parametric mean assumption
(linearity). This underscores the value of utilizing the model's full empirical
content in combination with auxiliary assumptions.",Sharp Bounds in the Latent Index Selection Model,2020-12-04 07:05:42,Philip Marx,"http://arxiv.org/abs/2012.02390v2, http://arxiv.org/pdf/2012.02390v2",econ.EM
29319,em,"This paper shows that modelling comovement in the asymmetry of the predictive
distributions of GDP growth and a timely related series improves nowcasting
uncertainty when it matters most: in times of severe economic downturn. Rather
than using many predictors to nowcast GDP, I show that it is possible to
extract more information than we currently do from series closely related to
economic growth such as employment data. The proposed methodology relies on
score driven techniques and provides an alternative approach for nowcasting
besides dynamic factor models and MIDAS regression where dynamic asymmetry (or
skewness) parameters have not yet been explored.",Capturing GDP nowcast uncertainty in real time,2020-12-04 16:57:48,Paul Labonne,"http://arxiv.org/abs/2012.02601v3, http://arxiv.org/pdf/2012.02601v3",econ.EM
29320,em,"We propose a novel class of multivariate GARCH models that utilize realized
measures of volatilities and correlations. The central component is an
unconstrained vector parametrization of the correlation matrix that facilitates
modeling of the correlation structure. The parametrization is based on the
matrix logarithmic transformation that retains the positive definiteness as an
innate property. A factor approach offers a way to impose a parsimonious
structure in a high-dimensional system, and we show that a factor framework arises
naturally in some existing models. We apply the model to returns of nine assets
and employ the factor structure that emerges from a block correlation
specification. An auxiliary empirical finding is that the empirical
distribution of parametrized realized correlations is approximately Gaussian.
This observation is analogous to the well-known result for logarithmically
transformed realized variances.",A Multivariate Realized GARCH Model,2020-12-04 19:32:21,"Ilya Archakov, Peter Reinhard Hansen, Asger Lunde","http://arxiv.org/abs/2012.02708v1, http://arxiv.org/pdf/2012.02708v1",econ.EM
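The following Python sketch illustrates the matrix-logarithmic parametrization of a correlation matrix mentioned above; the inverse map uses a simple fixed-point adjustment of the diagonal so that the matrix exponential has a unit diagonal. The iteration count and example matrix are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import logm, expm

def corr_to_gamma(C):
    """Forward map: the lower off-diagonal elements of log(C), an
    unconstrained real vector parametrizing the correlation matrix."""
    L = np.real(logm(C))
    return L[np.tril_indices_from(C, k=-1)]

def gamma_to_corr(gamma, n, n_iter=50):
    """Inverse map (sketch): choose the diagonal of log(C) by fixed-point
    iteration so that expm(.) has a unit diagonal."""
    A = np.zeros((n, n))
    A[np.tril_indices(n, k=-1)] = gamma
    A = A + A.T
    d = np.zeros(n)
    for _ in range(n_iter):
        np.fill_diagonal(A, d)
        d = d - np.log(np.diag(expm(A)))   # adjust until diag(expm(A)) = 1
    np.fill_diagonal(A, d)
    return expm(A)

C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
print(np.round(gamma_to_corr(corr_to_gamma(C), 3), 3))  # recovers C numerically
```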
29321,em,"In this paper, we investigate binary response models for heterogeneous panel
data with interactive fixed effects by allowing both the cross-sectional
dimension and the temporal dimension to diverge. From a practical point of
view, the proposed framework can be applied to predict the probability of
corporate failure, conduct credit rating analysis, etc. Theoretically and
methodologically, we establish a link between maximum likelihood estimation
and a least squares approach, provide a simple information criterion to detect
the number of factors, and achieve the asymptotic distributions accordingly. In
addition, we conduct intensive simulations to examine the theoretical findings.
In the empirical study, we focus on the sign prediction of stock returns, and
then use the results of sign forecast to conduct portfolio analysis.",Binary Response Models for Heterogeneous Panel Data with Interactive Fixed Effects,2020-12-06 07:46:00,"Jiti Gao, Fei Liu, Bin Peng, Yayi Yan","http://arxiv.org/abs/2012.03182v2, http://arxiv.org/pdf/2012.03182v2",econ.EM
29322,em,"Regression trees and random forests are popular and effective non-parametric
estimators in practical applications. A recent paper by Athey and Wager shows
that the random forest estimate at any point is asymptotically Gaussian; in
this paper, we extend this result to the multivariate case and show that the
vector of estimates at multiple points is jointly normal. Specifically, the
covariance matrix of the limiting normal distribution is diagonal, so that the
estimates at any two points are independent in sufficiently deep trees.
Moreover, the off-diagonal term is bounded by quantities capturing how likely
two points belong to the same partition of the resulting tree. Our results
rely on a certain stability property when constructing splits, and we
give examples of splitting rules for which this assumption is and is not
satisfied. We test our proposed covariance bound and the associated coverage
rates of confidence intervals in numerical simulations.",Asymptotic Normality for Multivariate Random Forest Estimators,2020-12-07 10:27:21,Kevin Li,"http://arxiv.org/abs/2012.03486v3, http://arxiv.org/pdf/2012.03486v3",econ.EM
29323,em,"How to allocate vaccines over heterogeneous individuals is one of the
important policy decisions in pandemic times. This paper develops a procedure
to estimate an individualized vaccine allocation policy under limited supply,
exploiting social network data containing individual demographic
characteristics and health status. We model spillover effects of the vaccines
based on a Heterogeneous-Interacted-SIR network model and estimate an
individualized vaccine allocation policy by maximizing an estimated social
welfare (public health) criterion incorporating the spillovers. While this
optimization problem is generally an NP-hard integer optimization problem, we
show that the SIR structure leads to a submodular objective function, and
provide a computationally attractive greedy algorithm for approximating a
solution that has theoretical performance guarantee. Moreover, we characterise
a finite sample welfare regret bound and examine how its uniform convergence
rate depends on the complexity and riskiness of social network. In the
simulation, we illustrate the importance of considering spillovers by comparing
our method with targeting without network information.",Who Should Get Vaccinated? Individualized Allocation of Vaccines Over SIR Network,2020-12-08 00:00:25,"Toru Kitagawa, Guanyi Wang","http://dx.doi.org/10.1016/j.jeconom.2021.09.009, http://arxiv.org/abs/2012.04055v4, http://arxiv.org/pdf/2012.04055v4",econ.EM
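A minimal sketch of a greedy algorithm of the type described above, maximizing a generic monotone submodular welfare function under a supply constraint; the network-coverage welfare used here is a hypothetical stand-in for the estimated public health criterion.

```python
import numpy as np

def greedy_allocation(welfare, candidates, budget):
    """Generic greedy maximization of a (monotone submodular) welfare
    criterion under a vaccine-supply constraint: repeatedly add the
    individual with the largest marginal welfare gain."""
    chosen = []
    for _ in range(budget):
        gains = {i: welfare(chosen + [i]) - welfare(chosen)
                 for i in candidates if i not in chosen}
        best = max(gains, key=gains.get)
        chosen.append(best)
    return chosen

# hypothetical welfare: coverage of a random contact network (a submodular proxy)
rng = np.random.default_rng(1)
n = 30
adj = rng.random((n, n)) < 0.1
neighbors = {i: set(np.flatnonzero(adj[i])) | {i} for i in range(n)}
welfare = lambda S: len(set().union(*[neighbors[i] for i in S])) if S else 0
print(greedy_allocation(welfare, range(n), budget=5))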
29432,em,"The forecast combination puzzle is often found in literature: The
equal-weight scheme tends to outperform sophisticated methods of combining
individual forecasts. Exploiting this finding, we propose a hedge egalitarian
committees algorithm (HECA), which can be implemented via mixed integer
quadratic programming. Specifically, egalitarian committees are formed by the
ridge regression with shrinkage toward equal weights; subsequently, the
forecasts provided by these committees are averaged by the hedge algorithm. We
establish the no-regret property of HECA. Using data collected from the ECB
Survey of Professional Forecasters, we find the superiority of HECA relative to
the equal-weight scheme during the COVID-19 recession.",No-Regret Forecasting with Egalitarian Committees,2021-09-28 18:31:33,Jiun-Hua Su,"http://arxiv.org/abs/2109.13801v1, http://arxiv.org/pdf/2109.13801v1",econ.EM
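A toy sketch of the two HECA ingredients named above, under simplifying assumptions: ridge combination weights shrunk toward equal weights form a committee, and a hedge (exponential-weights) step averages across committees. The penalty values, sample split, and data are illustrative, and the paper's mixed integer quadratic programming step is omitted.

```python
import numpy as np

def egalitarian_ridge(F, y, lam):
    """Ridge combination weights shrunk toward the equal-weight vector:
    minimize ||y - F w||^2 + lam * ||w - 1/K||^2 (one committee, as a sketch)."""
    T, K = F.shape
    w0 = np.full(K, 1.0 / K)
    return np.linalg.solve(F.T @ F + lam * np.eye(K), F.T @ y + lam * w0)

def hedge_weights(losses, eta=1.0):
    """Hedge (multiplicative weights) over committees given cumulative losses."""
    w = np.exp(-eta * (losses - losses.min()))
    return w / w.sum()

# toy illustration with simulated forecasts and two committees (small vs large lam)
rng = np.random.default_rng(2)
T, K = 60, 5
F = rng.normal(size=(T, K))
y = F @ np.array([0.4, 0.3, 0.1, 0.1, 0.1]) + 0.2 * rng.normal(size=T)
committees = [egalitarian_ridge(F[:40], y[:40], lam) for lam in (0.1, 100.0)]
losses = np.array([np.mean((y[40:] - F[40:] @ w) ** 2) for w in committees])
print(hedge_weights(losses))   # hedge weights over the two committees
```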
29325,em,"The conditional logit model is a standard workhorse approach to estimating
customers' product feature preferences using choice data. Using these models at
scale, however, can result in numerical imprecision and optimization failure
due to a combination of large-valued covariates and the softmax probability
function. Standard machine learning approaches alleviate these concerns by
applying a normalization scheme to the matrix of covariates, scaling all values
to sit within some interval (such as the unit simplex). While this type of
normalization is innocuous when using models for prediction, it has the side
effect of perturbing the estimated coefficients, which are necessary for
researchers interested in inference. This paper shows that, for two common
classes of normalizers, designated scaling and centered scaling, the
data-generating non-scaled model parameters can be analytically recovered along
with their asymptotic distributions. The paper also shows the numerical
performance of the analytical results using an example of a scaling normalizer.",Identification of inferential parameters in the covariate-normalized linear conditional logit model,2020-12-15 03:37:52,Philip Erickson,"http://arxiv.org/abs/2012.08022v1, http://arxiv.org/pdf/2012.08022v1",econ.EM
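A minimal illustration of the recovery idea for scaling-type normalizers (not the paper's derivation or asymptotic results): if covariates are normalized as x_scaled = (x - c)/s before estimation, the original-scale slope is beta_scaled/s and any constant absorbs the shift c. Variable names below are hypothetical.

```python
import numpy as np

def unscale_coefficients(beta_scaled, s, c=None, intercept_scaled=None):
    """Recover original-scale coefficients after fitting on normalized
    covariates x_scaled = (x - c) / s: beta_orig = beta_scaled / s, and the
    intercept (if any) is shifted by c @ beta_orig."""
    beta_orig = np.asarray(beta_scaled, dtype=float) / np.asarray(s, dtype=float)
    if intercept_scaled is None:
        return beta_orig
    shift = 0.0 if c is None else np.asarray(c, dtype=float) @ beta_orig
    return beta_orig, intercept_scaled - shift

# e.g. coefficients estimated on covariates divided by 100 and 0.5 respectively
print(unscale_coefficients([1.2, -0.4], s=[100.0, 0.5]))
```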
29326,em,"In this paper, we assess whether using non-linear dimension reduction
techniques pays off for forecasting inflation in real-time. Several recent
methods from the machine learning literature are adopted to map a large
dimensional dataset into a lower dimensional set of latent factors. We model
the relationship between inflation and the latent factors using constant and
time-varying parameter (TVP) regressions with shrinkage priors. Our models are
then used to forecast monthly US inflation in real-time. The results suggest
that sophisticated dimension reduction methods yield inflation forecasts that
are highly competitive to linear approaches based on principal components.
Among the techniques considered, the Autoencoder and squared principal
components yield factors that have high predictive power for one-month- and
one-quarter-ahead inflation. Zooming into model performance over time reveals
that controlling for non-linear relations in the data is of particular
importance during recessionary episodes of the business cycle or the current
COVID-19 pandemic.",Real-time Inflation Forecasting Using Non-linear Dimension Reduction Techniques,2020-12-15 11:55:18,"Niko Hauzenberger, Florian Huber, Karin Klieber","http://arxiv.org/abs/2012.08155v3, http://arxiv.org/pdf/2012.08155v3",econ.EM
29327,em,"It is challenging to elucidate the effects of changes in external influences
(such as economic or policy) on the rate of US drug approvals. Here, a novel
approach, termed the Chronological Hurst Exponent (CHE), is proposed, which
hypothesizes that changes in the long-range memory latent within the dynamics
of time series data may be temporally associated with changes in such
influences. Using the monthly number of FDA Center for Drug Evaluation and
Research (CDER) approvals from 1939 to 2019 as the data source, it is
demonstrated that the CHE has a distinct S-shaped structure demarcated by an
8-year (1939-1947) Stagnation Period, a 27-year (1947-1974) Emergent
(time-varying) Period, and a 45-year (1974-2019) Saturation Period. Further,
dominant periodicities (resolved via wavelet analyses) are identified during
the most recent 45-year CHE Saturation Period at 17, 8 and 4 years; thus, US
drug approvals have been following a Juglar-Kuznets mid-term cycle with
Kitchin-like bursts. As discussed, this work suggests that (1) changes in
extrinsic factors (e.g., of economic and/or policy origin) during the Emergent
Period may have led to persistent growth in US drug approvals enjoyed since
1974, (2) the CHE may be a valued method to explore influences on time series
data, and (3) innovation-related economic cycles exist (as viewed via the proxy
metric of US drug approvals).","United States FDA drug approvals are persistent and polycyclic: Insights into economic cycles, innovation dynamics, and national policy",2020-12-16 16:30:17,Iraj Daizadeh,"http://dx.doi.org/10.1007/s43441-021-00279-8, http://arxiv.org/abs/2012.09627v3, http://arxiv.org/pdf/2012.09627v3",econ.EM
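A rough sketch of how a rolling Hurst-exponent profile could be computed from a monthly count series using rescaled-range (R/S) analysis; the windowing, chunk sizes, and placeholder series are assumptions, not the paper's exact CHE construction.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.floor(
        np.logspace(np.log10(min_chunk), np.log10(n // 2), 10)).astype(int))
    log_size, log_rs = [], []
    for m in sizes:
        rs_vals = []
        for start in range(0, n - m + 1, m):
            seg = x[start:start + m]
            dev = np.cumsum(seg - seg.mean())
            r, s = dev.max() - dev.min(), seg.std(ddof=1)
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_size.append(np.log(m))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_size, log_rs, 1)[0]   # slope = Hurst exponent

def rolling_hurst(x, window=120, step=12):
    """Hurst exponent on successive rolling windows (hypothetical tuning)."""
    return [hurst_rs(x[t:t + window]) for t in range(0, len(x) - window + 1, step)]

counts = np.cumsum(np.random.default_rng(3).normal(size=600))  # placeholder series
print(np.round(rolling_hurst(counts)[:5], 2))
```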
29328,em,"We study two-way-fixed-effects regressions (TWFE) with several treatment
variables. Under a parallel trends assumption, we show that the coefficient on
each treatment identifies a weighted sum of that treatment's effect, with
possibly negative weights, plus a weighted sum of the effects of the other
treatments. Thus, those estimators are not robust to heterogeneous effects and
may be contaminated by other treatments' effects. We further show that omitting
a treatment from the regression can actually reduce the estimator's bias,
unlike what would happen under constant treatment effects. We propose an
alternative difference-in-differences estimator, robust to heterogeneous
effects and immune to the contamination problem. In the application we
consider, the TWFE regression identifies a highly non-convex combination of
effects, with large contamination weights, and one of its coefficients
significantly differs from our heterogeneity-robust estimator.",Two-way Fixed Effects and Differences-in-Differences Estimators with Several Treatments,2020-12-18 10:11:36,"Clément de Chaisemartin, Xavier D'Haultfœuille","http://arxiv.org/abs/2012.10077v8, http://arxiv.org/pdf/2012.10077v8",econ.EM
29329,em,"Through time series analysis, this paper empirically explores, confirms and
extends the trademark/patent inter-relationship as proposed in the normative
intellectual-property (IP)-oriented Innovation Agenda view of the science and
technology (S&T) firm. Beyond simple correlation, it is shown that
trademark-filing (Trademarks) and patent-application counts (Patents) have
similar (if not, identical) structural attributes (including similar
distribution characteristics and seasonal variation, cross-wavelet
synchronicity/coherency (short-term cross-periodicity) and structural breaks)
and are cointegrated (integration order of 1) over a period of approximately 40
years (given the monthly observations). The existence of cointegration strongly
suggests a ""long-run"" equilibrium between the two indices; that is, there is
(are) exogenous force(s) restraining the two indices from diverging from one
another. Structural breakpoints in the chrono-dynamics of the indices support
the existence of potentially similar exogenous force(s), as the break dates
are simultaneous/near-simultaneous (Trademarks: 1987, 1993, 1999, 2005, 2011;
Patents: 1988, 1994, 2000, and 2011). A discussion of potential triggers
(affecting both time series) causing these breaks, and the concept of
equilibrium in the context of these proxy measures are presented. The
cointegration order and structural co-movements resemble other macro-economic
variables, stoking the opportunity of using econometrics approaches to further
analyze these data. As a corollary, this work further supports the inclusion of
trademark analysis in innovation studies. Lastly, the data and corresponding
analysis tools (R program) are presented as Supplementary Materials for
reproducibility and convenience to conduct future work for interested readers.",Trademark filings and patent application count time series are structurally near-identical and cointegrated: Implications for studies in innovation,2020-12-14 19:09:15,Iraj Daizadeh,"http://dx.doi.org/10.47909/ijsmc.33, http://arxiv.org/abs/2012.10400v1, http://arxiv.org/pdf/2012.10400v1",econ.EM
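A small Python illustration of the cointegration check described above, using the Engle-Granger test from statsmodels on simulated stand-ins for the two count series (the paper itself uses trademark filings and patent applications, and supplies R code).

```python
import numpy as np
from statsmodels.tsa.stattools import coint, adfuller

# Illustrative check of cointegration between two monthly count series
# (simulated here as stand-ins for trademark and patent application counts).
rng = np.random.default_rng(4)
common = np.cumsum(rng.normal(size=480))               # shared stochastic trend
trademarks = 2.0 * common + rng.normal(size=480)
patents = 1.5 * common + rng.normal(size=480)

t_stat, p_value, _ = coint(trademarks, patents)        # Engle-Granger test
print(f"Engle-Granger p-value: {p_value:.3f}")         # small => cointegrated
print("ADF on trademarks (levels) p-value:",
      round(adfuller(trademarks)[1], 3))               # large => unit root, I(1)
```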
29330,em,"We study the problem of choosing optimal policy rules in uncertain
environments using models that may be incomplete and/or partially identified.
We consider a policymaker who wishes to choose a policy to maximize a
particular counterfactual quantity called a policy transform. We characterize
learnability of a set of policy options by the existence of a decision rule
that closely approximates the maximin optimal value of the policy transform
with high probability. Sufficient conditions are provided for the existence of
such a rule. However, learnability of an optimal policy is an ex-ante notion
(i.e. before observing a sample), and so ex-post (i.e. after observing a
sample) theoretical guarantees for certain policy rules are also provided. Our
entire approach is applicable when the distribution of unobservables is not
parametrically specified, although we discuss how semiparametric restrictions
can be used. Finally, we show possible applications of the procedure to a
simultaneous discrete choice example and a program evaluation example.",Policy Transforms and Learning Optimal Policies,2020-12-21 02:09:27,Thomas M. Russell,"http://arxiv.org/abs/2012.11046v1, http://arxiv.org/pdf/2012.11046v1",econ.EM
29451,em,"We use a dynamic panel Tobit model with heteroskedasticity to generate
forecasts for a large cross-section of short time series of censored
observations. Our fully Bayesian approach allows us to flexibly estimate the
cross-sectional distribution of heterogeneous coefficients and then implicitly
use this distribution as prior to construct Bayes forecasts for the individual
time series. In addition to density forecasts, we construct set forecasts that
explicitly target the average coverage probability for the cross-section. We
present a novel application in which we forecast bank-level loan charge-off
rates for small banks.",Forecasting with a Panel Tobit Model,2021-10-27 04:40:09,"Laura Liu, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/2110.14117v2, http://arxiv.org/pdf/2110.14117v2",econ.EM
29331,em,"New binary classification tests are often evaluated relative to a
pre-established test. For example, rapid Antigen tests for the detection of
SARS-CoV-2 are assessed relative to more established PCR tests. In this paper,
I argue that the new test can be described as producing ambiguous information
when the pre-established test is imperfect. This allows for a phenomenon called
dilation -- an extreme form of non-informativeness. As an example, I present
hypothetical test data satisfying the WHO's minimum quality requirement for
rapid Antigen tests which leads to dilation. The ambiguity in the information
arises from a missing data problem due to imperfection of the established test:
the joint distribution of true infection and test results is not observed.
Using results from Copula theory, I construct the (usually non-singleton) set
of all these possible joint distributions, which allows me to assess the new
test's informativeness. This analysis leads to a simple sufficient condition to
make sure that a new test is not a dilation. I illustrate my approach with
applications to data from three COVID-19 related tests. Two rapid Antigen tests
satisfy my sufficient condition easily and are therefore informative. However,
less accurate procedures, like chest CT scans, may exhibit dilation.","Binary Classification Tests, Imperfect Standards, and Ambiguous Information",2020-12-21 12:51:30,Gabriel Ziegler,"http://arxiv.org/abs/2012.11215v3, http://arxiv.org/pdf/2012.11215v3",econ.EM
29332,em,"This paper derives a new powerful test for mediation that is easy to use.
Testing for mediation is empirically very important in psychology, sociology,
medicine, economics and business, generating over 100,000 citations to a single
key paper. The no-mediation hypothesis $H_{0}:\theta_{1}\theta _{2}=0$ also
poses a theoretically interesting statistical problem since it defines a
manifold that is non-regular in the origin where rejection probabilities of
standard tests are extremely low. We prove that a similar test for mediation
only exists if the size is the reciprocal of an integer. It is unique, but has
objectionable properties. We propose a new test that is nearly similar with
power close to the envelope without these objectionable properties and is easy to use
in practice. Construction uses the general varying $g$-method that we propose.
We illustrate the results in an educational setting with gender role beliefs
and in a trade union sentiment application.",A Nearly Similar Powerful Test for Mediation,2020-12-21 16:46:09,"Kees Jan van Garderen, Noud van Giersbergen","http://arxiv.org/abs/2012.11342v2, http://arxiv.org/pdf/2012.11342v2",econ.EM
29333,em,"In many set identified models, it is difficult to obtain a tractable
characterization of the identified set. Therefore, empirical works often
construct confidence region based on an outer set of the identified set.
Because an outer set is always a superset of the identified set, this practice
is often viewed as conservative yet valid. However, this paper shows that, when
the model is refuted by the data, a nonempty outer set could deliver
conflicting results with another outer set derived from the same underlying
model structure, so that the results of outer sets could be misleading in the
presence of misspecification. We provide a sufficient condition for the
existence of discordant outer sets which covers models characterized by
intersection bounds and the Artstein (1983) inequalities. We also derive
sufficient conditions for the non-existence of discordant submodels, therefore
providing a class of models for which constructing outer sets cannot lead to
misleading interpretations. In the case of discordancy, we follow Masten and
Poirier (2021) by developing a method to salvage misspecified models, but
unlike them we focus on discrete relaxations. We consider all minimum
relaxations of a refuted model which restores data-consistency. We find that
the union of the identified sets of these minimum relaxations is robust to
detectable misspecifications and has an intuitive empirical interpretation.",Discordant Relaxations of Misspecified Models,2020-12-22 00:04:14,"Lixiong Li, Désiré Kédagni, Ismaël Mourifié","http://arxiv.org/abs/2012.11679v4, http://arxiv.org/pdf/2012.11679v4",econ.EM
29334,em,"We study linear quantile regression models when regressors and/or dependent
variable are not directly observed but estimated in an initial first step and
used in the second step quantile regression for estimating the quantile
parameters. This general class of generated quantile regression (GQR) covers
various statistical applications, for instance, estimation of endogenous
quantile regression models and triangular structural equation models, and some
new relevant applications are discussed. We study the asymptotic distribution
of the two-step estimator, which is challenging because of the presence of
generated covariates and/or dependent variable in the non-smooth quantile
regression estimator. We employ techniques from empirical process theory to
find uniform Bahadur expansion for the two step estimator, which is used to
establish the asymptotic results. We illustrate the performance of the GQR
estimator through simulations and an empirical application based on auctions.",Quantile regression with generated dependent variable and covariates,2020-12-25 22:12:18,Jayeeta Bhattacharya,"http://arxiv.org/abs/2012.13614v1, http://arxiv.org/pdf/2012.13614v1",econ.EM
29335,em,"Randomized experiments have become a standard tool in economics. In analyzing
randomized experiments, the traditional approach has been based on the Stable
Unit Treatment Value (SUTVA: \cite{rubin}) assumption which dictates that there
is no interference between individuals. However, the SUTVA assumption fails to
hold in many applications due to social interaction, general equilibrium,
and/or externality effects. While much progress has been made in relaxing the
SUTVA assumption, most of this literature has only considered a setting with
perfect compliance to treatment assignment. In practice, however, noncompliance
occurs frequently where the actual treatment receipt is different from the
assignment to the treatment. In this paper, we study causal effects in
randomized experiments with network interference and noncompliance. Spillovers
are allowed to occur at both treatment choice stage and outcome realization
stage. In particular, we explicitly model treatment choices of agents as a
binary game of incomplete information where resulting equilibrium treatment
choice probabilities affect outcomes of interest. Outcomes are further
characterized by a random coefficient model to allow for general unobserved
heterogeneity in the causal effects. After defining our causal parameters of
interest, we propose a simple control function estimator and derive its
asymptotic properties under large-network asymptotics. We apply our methods to
the randomized subsidy program of \cite{dupas} where we find evidence of
spillover effects on both short-run and long-run adoption of
insecticide-treated bed nets. Finally, we illustrate the usefulness of our
methods by analyzing the impact of counterfactual subsidy policies.",Analysis of Randomized Experiments with Network Interference and Noncompliance,2020-12-26 13:11:20,Bora Kim,"http://arxiv.org/abs/2012.13710v1, http://arxiv.org/pdf/2012.13710v1",econ.EM
29507,em,"Many panel data sets used for pseudo-poisson estimation of three-way gravity
models are implicitly unbalanced because uninformative observations are
redundant for the estimation. We show with real data as well as simulations
that this phenomenon, which we call latent unbalancedness, amplifies the
inference problem recently studied by Weidner and Zylkin (2021).",Latent Unbalancedness in Three-Way Gravity Models,2022-03-04 13:47:53,"Daniel Czarnowske, Amrei Stammann","http://arxiv.org/abs/2203.02235v1, http://arxiv.org/pdf/2203.02235v1",econ.EM
29336,em,"This paper is devoted to testing for the explosive bubble under time-varying
non-stationary volatility. Because the limiting distribution of the seminal
Phillips et al. (2011) test depends on the variance function and usually
requires a bootstrap implementation under heteroskedasticity, we construct the
test based on a deformation of the time domain. The proposed test is
asymptotically pivotal under the null hypothesis and its limiting distribution
coincides with that of the standard test under homoskedasticity, so that the
test does not require computationally extensive methods for inference.
Appealing finite sample properties are demonstrated through Monte-Carlo
simulations. An empirical application demonstrates that the upsurge behavior of
cryptocurrency time series in the middle of the sample is partially explained
by the volatility change.",Time-Transformed Test for the Explosive Bubbles under Non-stationary Volatility,2020-12-27 16:20:48,"Eiji Kurozumi, Anton Skrobotov, Alexey Tsarev","http://arxiv.org/abs/2012.13937v2, http://arxiv.org/pdf/2012.13937v2",econ.EM
29337,em,"This paper examines the impact of climate shocks on 13 European economies
analysing jointly business and financial cycles, in different phases and
disentangling the effects for different sector channels. A Bayesian Panel
Markov-switching framework is proposed to jointly estimate the impact of
extreme weather events on the economies as well as the interaction between
business and financial cycles. Results from the empirical analysis suggest that
extreme weather events impact asymmetrically across the different phases of the
economy and heterogeneously across the EU countries. Moreover, we highlight how
the manufacturing output, a component of the industrial production index,
constitutes the main channel through which climate shocks impact the EU
economies.",The impact of Climate on Economic and Financial Cycles: A Markov-switching Panel Approach,2020-12-29 13:09:46,"Monica Billio, Roberto Casarin, Enrica De Cian, Malcolm Mistry, Anthony Osuntuyi","http://arxiv.org/abs/2012.14693v1, http://arxiv.org/pdf/2012.14693v1",econ.EM
29338,em,"In this study, we consider a pairwise network formation model in which each
dyad of agents strategically determines the link status between them. Our model
allows the agents to have unobserved group heterogeneity in the propensity of
link formation. For the model estimation, we propose a three-step maximum
likelihood (ML) method. First, we obtain consistent estimates for the
heterogeneity parameters at individual level using the ML estimator. Second, we
estimate the latent group structure using the binary segmentation algorithm
based on the results obtained from the first step. Finally, based on the
estimated group membership, we re-execute the ML estimation. Under certain
regularity conditions, we show that the proposed estimator is asymptotically
unbiased and distributed as normal at the parametric rate. As an empirical
illustration, we focus on the network data of international visa-free travels.
The results indicate the presence of significant strategic complementarity and
a certain level of degree heterogeneity in the network formation behavior.",A Pairwise Strategic Network Formation Model with Group Heterogeneity: With an Application to International Travel,2020-12-29 21:24:35,Tadao Hoshino,"http://arxiv.org/abs/2012.14886v2, http://arxiv.org/pdf/2012.14886v2",econ.EM
29339,em,"In many empirical studies of a large two-sided matching market (such as in a
college admissions problem), the researcher performs statistical inference
under the assumption that they observe a random sample from a large matching
market. In this paper, we consider a setting in which the researcher observes
either all or a nontrivial fraction of outcomes from a stable matching. We
establish a concentration inequality for empirical matching probabilities
assuming strong correlation among the colleges' preferences while allowing
students' preferences to be fully heterogeneous. Our concentration inequality
yields laws of large numbers for the empirical matching probabilities and other
statistics commonly used in empirical analyses of a large matching market. To
illustrate the usefulness of our concentration inequality, we prove consistency
for estimators of conditional matching probabilities and measures of positive
assortative matching.",The Law of Large Numbers for Large Stable Matchings,2021-01-02 11:05:05,"Jacob Schwartz, Kyungchul Song","http://arxiv.org/abs/2101.00399v6, http://arxiv.org/pdf/2101.00399v6",econ.EM
29340,em,"We consider situations where a user feeds her attributes to a machine
learning method that tries to predict her best option based on a random sample
of other users. The predictor is incentive-compatible if the user has no
incentive to misreport her covariates. Focusing on the popular Lasso estimation
technique, we borrow tools from high-dimensional statistics to characterize
sufficient conditions that ensure that Lasso is incentive compatible in large
samples. We extend our results to the Conservative Lasso estimator and provide
new moment bounds for this generalized weighted version of Lasso. Our results
show that incentive compatibility is achieved if the tuning parameter is kept
above some threshold. We present simulations that illustrate how this can be
done in practice.",Should Humans Lie to Machines: The Incentive Compatibility of Lasso and General Weighted Lasso,2021-01-04 21:21:17,"Mehmet Caner, Kfir Eliaz","http://arxiv.org/abs/2101.01144v2, http://arxiv.org/pdf/2101.01144v2",econ.EM
29341,em,"This paper considers (partial) identification of a variety of counterfactual
parameters in binary response models with possibly endogenous regressors. Our
framework allows for nonseparable index functions with multi-dimensional latent
variables, and does not require parametric distributional assumptions. We
leverage results on hyperplane arrangements and cell enumeration from the
literature on computational geometry in order to provide a tractable means of
computing the identified set. We demonstrate how various functional form,
independence, and monotonicity assumptions can be imposed as constraints in our
optimization procedure to tighten the identified set. Finally, we apply our
method to study the effects of health insurance on the decision to seek medical
treatment.",Partial Identification in Nonseparable Binary Response Models with Endogenous Regressors,2021-01-05 01:10:31,"Jiaying Gu, Thomas M. Russell","http://arxiv.org/abs/2101.01254v5, http://arxiv.org/pdf/2101.01254v5",econ.EM
29354,em,"Factor modeling is a powerful statistical technique that permits to capture
the common dynamics in a large panel of data with a few latent variables, or
factors, thus alleviating the curse of dimensionality. Despite its popularity
and widespread use for various applications ranging from genomics to finance,
this methodology has predominantly remained linear. This study estimates
factors nonlinearly through the kernel method, which allows flexible
nonlinearities while still avoiding the curse of dimensionality. We focus on
factor-augmented forecasting of a single time series in a high-dimensional
setting, known as diffusion index forecasting in the macroeconomics literature. Our
main contribution is twofold. First, we show that the proposed estimator is
consistent and nests the linear PCA estimator as well as some nonlinear
estimators introduced in the literature as specific examples. Second, our
empirical application to a classical macroeconomic dataset demonstrates that
this approach can offer substantial advantages over mainstream methods.",The Kernel Trick for Nonlinear Factor Modeling,2021-03-01 22:27:00,Varlam Kutateladze,"http://arxiv.org/abs/2103.01266v1, http://arxiv.org/pdf/2103.01266v1",econ.EM
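A compact sketch of kernel-based diffusion index forecasting in the spirit of the abstract: nonlinear factors are extracted with kernel PCA and fed into a factor-augmented regression. The kernel, bandwidth, and simulated data are hypothetical choices, not the authors' estimator.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LinearRegression

# Extract nonlinear factors from a large predictor panel, then run a
# one-step-ahead factor-augmented regression for the target series.
rng = np.random.default_rng(8)
T, N = 200, 80
X = rng.normal(size=(T, N))                            # large predictor panel
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * rng.normal(size=T)

factors = KernelPCA(n_components=4, kernel="rbf", gamma=0.01).fit_transform(X)
model = LinearRegression().fit(factors[:-1], y[1:])    # predict y_{t+1} from F_t
print("In-sample R^2:", round(model.score(factors[:-1], y[1:]), 3))
```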
29342,em,"In this paper we investigate how the bootstrap can be applied to time series
regressions when the volatility of the innovations is random and
non-stationary. The volatility of many economic and financial time series
displays persistent changes and possible non-stationarity. However, the theory
of the bootstrap for such models has focused on deterministic changes of the
unconditional variance and little is known about the performance and the
validity of the bootstrap when the volatility is driven by a non-stationary
stochastic process. This includes near-integrated volatility processes as well
as near-integrated GARCH processes. This paper develops conditions for
bootstrap validity in time series regressions with non-stationary, stochastic
volatility. We show that in such cases the distribution of bootstrap statistics
(conditional on the data) is random in the limit. Consequently, the
conventional approaches to proving bootstrap validity, involving weak
convergence in probability of the bootstrap statistic, fail to deliver the
required results. Instead, we use the concept of `weak convergence in
distribution' to develop and establish novel conditions for validity of the
wild bootstrap, conditional on the volatility process. We apply our results to
several testing problems in the presence of non-stationary stochastic
volatility, including testing in a location model, testing for structural
change and testing for an autoregressive unit root. Sufficient conditions for
bootstrap validity include the absence of statistical leverage effects, i.e.,
correlation between the error process and its future conditional variance. The
results are illustrated using Monte Carlo simulations, which indicate that the
wild bootstrap leads to size control even in small samples.",Bootstrapping Non-Stationary Stochastic Volatility,2021-01-10 18:04:28,"H. Peter Boswijk, Giuseppe Cavaliere, Anders Rahbek, Iliyan Georgiev","http://arxiv.org/abs/2101.03562v1, http://arxiv.org/pdf/2101.03562v1",econ.EM
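For concreteness, the following sketch shows the wild bootstrap resampling scheme in a simple location model with Rademacher multipliers; it illustrates the mechanics only, while the abstract's contribution concerns when such schemes remain valid under non-stationary stochastic volatility.

```python
import numpy as np

def wild_bootstrap_tstat(y, n_boot=999, seed=0):
    """Wild bootstrap of the t-statistic for the mean in a location model,
    using Rademacher multipliers on the demeaned data (a sketch of the
    resampling scheme only)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    resid = y - y.mean()
    t_obs = y.mean() / (y.std(ddof=1) / np.sqrt(n))
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        ystar = resid * rng.choice([-1.0, 1.0], size=n)   # imposes the null mean 0
        t_boot[b] = ystar.mean() / (ystar.std(ddof=1) / np.sqrt(n))
    return t_obs, np.mean(np.abs(t_boot) >= abs(t_obs))   # bootstrap p-value

# toy series whose volatility follows a persistent (near-integrated) process
rng = np.random.default_rng(9)
vol = np.exp(np.cumsum(0.05 * rng.normal(size=300)))
y = vol * rng.normal(size=300)
print(wild_bootstrap_tstat(y))
```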
29343,em,"In many fields where the main goal is to produce sequential forecasts for
decision making problems, the good understanding of the contemporaneous
relations among different series is crucial for the estimation of the
covariance matrix. In recent years, the modified Cholesky decomposition
appeared as a popular approach to covariance matrix estimation. However, its
main drawback lies in the imposition of the series ordering structure. In
this work, we propose a highly flexible and fast method to deal with the
problem of ordering uncertainty in a dynamic fashion with the use of Dynamic
Order Probabilities. We apply the proposed method in two different forecasting
contexts. The first is a dynamic portfolio allocation problem, where the
investor is able to learn the contemporaneous relationships among different
currencies improving final decisions and economic performance. The second is a
macroeconomic application, where the econometrician can adapt sequentially to
new economic environments, switching the contemporaneous relations among
macroeconomic variables over time.",Dynamic Ordering Learning in Multivariate Forecasting,2021-01-11 22:45:01,"Bruno P. C. Levy, Hedibert F. Lopes","http://arxiv.org/abs/2101.04164v3, http://arxiv.org/pdf/2101.04164v3",econ.EM
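A minimal numerical sketch of the ordering issue: each variable ordering yields a modified-Cholesky covariance estimate, and the ordering-specific estimates are averaged with order probabilities. The paper's Dynamic Order Probabilities update these weights sequentially; the uniform weights below are a placeholder.

```python
import numpy as np
from itertools import permutations

def cholesky_cov(X, order):
    """Covariance estimate from a modified Cholesky decomposition under a
    given variable ordering: regress each series on those preceding it."""
    Xo = X[:, order]
    Xo = Xo - Xo.mean(axis=0)
    T, k = Xo.shape
    B, D = np.eye(k), np.zeros(k)
    D[0] = Xo[:, 0].var()
    for j in range(1, k):
        coefs, *_ = np.linalg.lstsq(Xo[:, :j], Xo[:, j], rcond=None)
        B[j, :j] = -coefs
        D[j] = np.var(Xo[:, j] - Xo[:, :j] @ coefs)
    Binv = np.linalg.inv(B)
    S = Binv @ np.diag(D) @ Binv.T
    inv_order = np.argsort(order)
    return S[np.ix_(inv_order, inv_order)]        # back to the original ordering

def order_averaged_cov(X, order_probs):
    """Average ordering-specific estimates with (here, uniform) order
    probabilities, mimicking the idea behind Dynamic Order Probabilities."""
    return sum(p * cholesky_cov(X, list(o)) for o, p in order_probs.items())

rng = np.random.default_rng(5)
Sigma = [[1, .6, .3], [.6, 1, .4], [.3, .4, 1]]
X = rng.multivariate_normal([0, 0, 0], Sigma, size=500)
probs = {o: 1 / 6 for o in permutations(range(3))}   # uniform over all orderings
print(np.round(order_averaged_cov(X, probs), 2))
```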
29344,em,"This study proposes an econometric framework to interpret and empirically
decompose the difference between IV and OLS estimates given by a linear
regression model when the true causal effects of the treatment are nonlinear in
treatment levels and heterogeneous across covariates. I show that the IV-OLS
coefficient gap consists of three estimable components: the difference in
weights on the covariates, the difference in weights on the treatment levels,
and the difference in identified marginal effects that arises from endogeneity
bias. Applications of this framework to return-to-schooling estimates
demonstrate the empirical relevance of this distinction in properly
interpreting the IV-OLS gap.",Empirical Decomposition of the IV-OLS Gap with Heterogeneous and Nonlinear Effects,2021-01-12 11:35:17,Shoya Ishimaru,"http://arxiv.org/abs/2101.04346v5, http://arxiv.org/pdf/2101.04346v5",econ.EM
29345,em,"We develop a generally applicable full-information inference method for
heterogeneous agent models, combining aggregate time series data and repeated
cross sections of micro data. To handle unobserved aggregate state variables
that affect cross-sectional distributions, we compute a numerically unbiased
estimate of the model-implied likelihood function. Employing the likelihood
estimate in a Markov Chain Monte Carlo algorithm, we obtain fully efficient and
valid Bayesian inference. Evaluation of the micro part of the likelihood lends
itself naturally to parallel computing. Numerical illustrations in models with
heterogeneous households or firms demonstrate that the proposed
full-information method substantially sharpens inference relative to using only
macro data, and for some parameters micro data is essential for identification.",Full-Information Estimation of Heterogeneous Agent Models Using Macro and Micro Data,2021-01-13 00:56:12,"Laura Liu, Mikkel Plagborg-Møller","http://arxiv.org/abs/2101.04771v2, http://arxiv.org/pdf/2101.04771v2",econ.EM
29346,em,"UK GDP data is published with a lag time of more than a month and it is often
adjusted for prior periods. This paper contemplates breaking away from the
historic GDP measure to a more dynamic method using Bank Account, Cheque and
Credit Card payment transactions as possible predictors for faster and real
time measure of GDP value. Historic timeseries data available from various
public domain for various payment types, values, volume and nominal UK GDP was
used for this analysis. Low Value Payments was selected for simple Ordinary
Least Square Simple Linear Regression with mixed results around explanatory
power of the model and reliability measured through residuals distribution and
variance. Future research could potentially expand this work using datasets
split by period of economic shocks to further test the OLS method or explore
one of General Least Square method or an autoregression on GDP timeseries
itself.",GDP Forecasting using Payments Transaction Data,2021-01-16 20:01:13,Arunav Das,"http://arxiv.org/abs/2101.06478v1, http://arxiv.org/pdf/2101.06478v1",econ.EM
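A bare-bones version of the regression described above, using simulated stand-ins for GDP growth and a payments aggregate (variable names and numbers are hypothetical), with a basic residual-normality check.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera

# Illustrative OLS of GDP growth on the growth of a payments aggregate
# (simulated stand-ins; variable names are hypothetical).
rng = np.random.default_rng(6)
payments_growth = rng.normal(0.5, 1.0, size=80)
gdp_growth = 0.3 + 0.6 * payments_growth + rng.normal(0, 0.5, size=80)

X = sm.add_constant(payments_growth)
ols = sm.OLS(gdp_growth, X).fit()
print(ols.summary().tables[1])                       # intercept and slope estimates
print("Jarque-Bera p-value:", round(jarque_bera(ols.resid)[1], 3))  # residual normality
```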
29347,em,"This study decomposes the bilateral trade flows using a three-dimensional
panel data model. Under the scenario that all three dimensions diverge to
infinity, we propose an estimation approach to identify the number of global
shocks and country-specific shocks sequentially, and establish the asymptotic
theories accordingly. From the practical point of view, being able to separate
the pervasive and nonpervasive shocks in a multi-dimensional panel data is
crucial for a range of applications, such as, international financial linkages,
migration flows, etc. In the numerical studies, we first conduct intensive
simulations to examine the theoretical findings, and then use the proposed
approach to investigate the international trade flows from two major trading
groups (APEC and EU) over 1982-2019, and quantify the network of bilateral
trade.",Decomposition of Bilateral Trade Flows Using a Three-Dimensional Panel Data Model,2021-01-18 02:45:43,"Yufeng Mao, Bin Peng, Mervyn Silvapulle, Param Silvapulle, Yanrong Yang","http://arxiv.org/abs/2101.06805v1, http://arxiv.org/pdf/2101.06805v1",econ.EM
29348,em,"In the international oil trade network (iOTN), trade shocks triggered by
extreme events may spread over the entire network along the trade links of the
central economies and even lead to the collapse of the whole system. In this
study, we focus on the concept of ""too central to fail"" and use traditional
centrality indicators as strategic indicators for simulating attacks on
economic nodes, and simulate various situations in which the structure and
function of the global oil trade network are lost when the economies suffer
extreme trade shocks. The simulation results show that the global oil trade
system has become more vulnerable in recent years. The regional aggregation of
oil trade is an essential source of iOTN's vulnerability. Maintaining global
oil trade stability and security requires a focus on economies with greater
influence within the network module of the iOTN. International organizations
such as OPEC and OECD established more trade links around the world, but their
influence on the iOTN is declining. We improve the framework of oil security
and trade risk assessment based on the topological index of iOTN, and provide a
reference for finding methods to maintain network robustness and trade
stability.",Robustness of the international oil trade network under targeted attacks to economies,2021-01-26 13:12:10,"N. Wei, W. -J. Xie, W. -X. Zhou","http://dx.doi.org/10.1016/j.energy.2022.123939, http://arxiv.org/abs/2101.10679v2, http://arxiv.org/pdf/2101.10679v2",econ.EM
29349,em,"This paper estimates the effects of non-pharmaceutical interventions -
mainly, the lockdown - on the COVID-19 mortality rate for the case of Italy,
the first Western country to impose a national shelter-in-place order. We use a
new estimator, the Augmented Synthetic Control Method (ASCM), that overcomes
some limits of the standard Synthetic Control Method (SCM). The results are
twofold. From a methodological point of view, the ASCM outperforms the SCM in
that the latter cannot select a valid donor set, assigning all the weights to
only one country (Spain) while placing zero weight on all the others. From
an empirical point of view, we find strong evidence of the effectiveness of
non-pharmaceutical interventions in avoiding losses of human lives in Italy:
conservative estimates indicate that, for each life actually lost, an average of
1.15 additional lives would have been lost in the absence of the lockdown; in
total, the policy saved 20,400 human lives.",The sooner the better: lives saved by the lockdown during the COVID-19 outbreak. The case of Italy,2021-01-28 13:02:00,"Roy Cerqueti, Raffaella Coppier, Alessandro Girardi, Marco Ventura","http://arxiv.org/abs/2101.11901v1, http://arxiv.org/pdf/2101.11901v1",econ.EM
29350,em,"We develop a Bayesian approach to estimate weight matrices in spatial
autoregressive (or spatial lag) models. Datasets in regional economic
literature are typically characterized by a limited number of time periods T
relative to spatial units N. When the spatial weight matrix is subject to
estimation, severe problems of over-parametrization are likely. To make
estimation feasible, our approach focusses on spatial weight matrices which are
binary prior to row-standardization. We discuss the use of hierarchical priors
which impose sparsity in the spatial weight matrix. Monte Carlo simulations
show that these priors perform very well where the number of unknown parameters
is large relative to the observations. The virtues of our approach are
demonstrated using global data from the early phase of the COVID-19 pandemic.",A Bayesian approach for estimation of weight matrices in spatial autoregressive models,2021-01-28 14:32:29,"Tamás Krisztin, Philipp Piribauer","http://dx.doi.org/10.1080/17421772.2022.2095426, http://arxiv.org/abs/2101.11938v2, http://arxiv.org/pdf/2101.11938v2",econ.EM
29351,em,"This paper focuses on the bootstrap for network dependent processes under the
conditional $\psi$-weak dependence. Such processes are distinct from other
forms of random fields studied in the statistics and econometrics literature so
that the existing bootstrap methods cannot be applied directly. We propose a
block-based approach and a modification of the dependent wild bootstrap for
constructing confidence sets for the mean of a network dependent process. In
addition, we establish the consistency of these methods for the smooth function
model and provide the bootstrap alternatives to the network
heteroskedasticity-autocorrelation consistent (HAC) variance estimator. We find
that the modified dependent wild bootstrap and the corresponding variance
estimator are consistent under weaker conditions relative to the block-based
method, which makes the former approach preferable for practical
implementation.",The Bootstrap for Network Dependent Processes,2021-01-29 01:53:54,Denis Kojevnikov,"http://arxiv.org/abs/2101.12312v1, http://arxiv.org/pdf/2101.12312v1",econ.EM
29352,em,"This paper proposes a novel method of algorithmic subsampling (data
sketching) for multiway cluster dependent data. We establish a new uniform weak
law of large numbers and a new central limit theorem for the multiway
algorithmic subsample means. Consequently, we discover an additional advantage
of the algorithmic subsampling that it allows for robustness against potential
degeneracy, and even non-Gaussian degeneracy, of the asymptotic distribution
under multiway clustering. Simulation studies support this novel result, and
demonstrate that inference with the algorithmic subsampling entails more
accuracy than that without the algorithmic subsampling. Applying these basic
asymptotic theories, we derive the consistency and the asymptotic normality for
the multiway algorithmic subsampling generalized method of moments estimator
and for the multiway algorithmic subsampling M-estimator. We illustrate an
application to scanner data.",Algorithmic subsampling under multiway clustering,2021-02-28 19:35:23,"Harold D. Chiang, Jiatong Li, Yuya Sasaki","http://arxiv.org/abs/2103.00557v4, http://arxiv.org/pdf/2103.00557v4",econ.EM
29353,em,"The ex-ante evaluation of policies using structural econometric models is
based on estimated parameters as a stand-in for the true parameters. This
practice ignores uncertainty in the counterfactual policy predictions of the
model. We develop a generic approach that deals with parametric uncertainty
using uncertainty sets and frames model-informed policy-making as a decision
problem under uncertainty. The seminal human capital investment model by Keane
and Wolpin (1997) provides a well-known, influential, and empirically-grounded
test case. We document considerable uncertainty in the model's policy
predictions and highlight the resulting policy recommendations obtained from
using different formal rules of decision-making under uncertainty.",Structural models for policy-making: Coping with parametric uncertainty,2021-03-01 19:34:48,"Philipp Eisenhauer, Janoś Gabler, Lena Janys, Christopher Walsh","http://arxiv.org/abs/2103.01115v4, http://arxiv.org/pdf/2103.01115v4",econ.EM
29537,em,"This paper uses predictive densities obtained via mixed causal-noncausal
autoregressive models to evaluate the statistical sustainability of the Brazilian
inflation targeting system with its tolerance bounds. The probabilities give an
indication of the short-term credibility of the targeting system without
requiring modelling people's beliefs. We employ receiver operating
characteristic curves to determine the optimal probability threshold from which
the bank is predicted to be credible. We also investigate the added value of
including experts' predictions of key macroeconomic variables.",A short term credibility index for central banks under inflation targeting: an application to Brazil,2022-05-02 17:08:35,"Alain Hecq, Joao Issler, Elisa Voisin","http://arxiv.org/abs/2205.00924v2, http://arxiv.org/pdf/2205.00924v2",econ.EM
29355,em,"In this paper we have updated the hypothesis testing framework by drawing
upon modern computational power and classification models from machine
learning. We show that a simple classification algorithm such as a boosted
decision stump can be used to fully recover the full size-power trade-off for
any single test statistic. This recovery implies an equivalence, under certain
conditions, between the basic building block of modern machine learning and
hypothesis testing. Second, we show that more complex algorithms such as the
random forest and gradient boosted machine can serve as mapping functions in
place of the traditional null distribution. This allows for multiple test
statistics and other information to be evaluated simultaneously and thus form a
pseudo-composite hypothesis test. Moreover, we show how practitioners can make
explicit the relative costs of Type I and Type II errors to contextualize the
test into a specific decision framework. To illustrate this approach we revisit
the case of testing for unit roots, a difficult problem in time series
econometrics for which existing tests are known to exhibit low power. Using a
simulation framework common to the literature we show that this approach can
improve the overall accuracy of the traditional unit root test(s) by seventeen
percentage points, and the sensitivity by thirty-six percentage points.",Standing on the Shoulders of Machine Learning: Can We Improve Hypothesis Testing?,2021-03-02 03:07:30,"Gary Cornwall, Jeff Chen, Beau Sauley","http://arxiv.org/abs/2103.01368v1, http://arxiv.org/pdf/2103.01368v1",econ.EM
29356,em,"This paper contains two finite-sample results about the sign test. First, we
show that the sign test is unbiased against two-sided alternatives even when
observations are not identically distributed. Second, we provide simple
theoretical counterexamples to show that correlation that is unaccounted for
leads to size distortion and over-rejection. Our results have implications for
practitioners, who are increasingly employing randomization tests for
inference.",Some Finite Sample Properties of the Sign Test,2021-03-02 04:51:20,Yong Cai,"http://arxiv.org/abs/2103.01412v1, http://arxiv.org/pdf/2103.01412v1",econ.EM
29357,em,"The coronavirus is a global event of historical proportions and just a few
months changed the time series properties of the data in ways that make many
pre-covid forecasting models inadequate. It also creates a new problem for
estimation of economic factors and dynamic causal effects because the
variations around the outbreak can be interpreted as outliers, as shifts to the
distribution of existing shocks, or as addition of new shocks. I take the
latter view and use covid indicators as controls to 'de-covid' the data prior
to estimation. I find that economic uncertainty remains high at the end of 2020
even though real economic activity has recovered and covid uncertainty has
receded. Dynamic responses of variables to shocks in a VAR that are similar in magnitude
and shape to the ones identified before 2020 can be recovered by directly or
indirectly modeling covid and treating it as exogenous. These responses to
economic shocks are distinctly different from those to a covid shock which are
much larger but shorter lived. Disentangling the two types of shocks can be
important in macroeconomic modeling post-covid.",Modeling Macroeconomic Variations After COVID-19,2021-03-04 01:45:38,Serena Ng,"http://arxiv.org/abs/2103.02732v4, http://arxiv.org/pdf/2103.02732v4",econ.EM
29358,em,"Economists are blessed with a wealth of data for analysis, but more often
than not, values in some entries of the data matrix are missing. Various
methods have been proposed to handle missing observations in a few variables.
We exploit the factor structure in panel data of large dimensions. Our
\textsc{tall-project} algorithm first estimates the factors from a
\textsc{tall} block in which data for all rows are observed, and projections of
variable-specific length are then used to estimate the factor loadings. A
missing value is imputed as the estimated common component which we show is
consistent and asymptotically normal without further iteration. Implications
for using imputed data in factor augmented regressions are then discussed.
  To compensate for the downward bias in covariance matrices created by the
noise omitted when a data point is not observed, we overlay the imputed data
with re-sampled idiosyncratic residuals many times and use the average of the
covariances to estimate the parameters of interest. Simulations show that the
procedures have desirable finite sample properties.",Factor-Based Imputation of Missing Values and Covariances in Panel Data of Large Dimensions,2021-03-04 17:07:47,"Ercument Cahan, Jushan Bai, Serena Ng","http://arxiv.org/abs/2103.03045v3, http://arxiv.org/pdf/2103.03045v3",econ.EM
29359,em,"In this paper we first propose a root-n-consistent Conditional Maximum
Likelihood (CML) estimator for all the common parameters in the panel logit
AR(p) model with strictly exogenous covariates and fixed effects. Our CML
estimator (CMLE) converges in probability faster and is more easily computed
than the kernel-weighted CMLE of Honor\'e and Kyriazidou (2000). Next, we
propose a root-n-consistent CMLE for the coefficients of the exogenous
covariates only. We also discuss new CMLEs for the panel logit AR(p) model
without covariates. Finally, we propose CMLEs for multinomial dynamic panel
logit models with and without covariates. All CMLEs are asymptotically normally
distributed.",Root-n-consistent Conditional ML estimation of dynamic panel logit models with fixed effects,2021-03-08 21:51:31,Hugo Kruiniger,"http://arxiv.org/abs/2103.04973v5, http://arxiv.org/pdf/2103.04973v5",econ.EM
29360,em,"We show that first-difference two-stages-least-squares regressions identify
non-convex combinations of location-and-period-specific treatment effects.
Thus, those regressions could be biased if effects are heterogeneous. We
propose an alternative instrumental-variable correlated-random-coefficient
(IV-CRC) estimator, that is more robust to heterogeneous effects. We revisit
Autor et al. (2013), who use a first-difference two-stages-least-squares
regression to estimate the effect of imports from China on US manufacturing
employment. Their regression estimates a highly non-convex combination of
effects. Our more robust IV-CRC estimate is small and insignificant. Though
its confidence interval is wide, it significantly differs from the
first-difference two-stages-least-squares estimator.","More Robust Estimators for Instrumental-Variable Panel Designs, With An Application to the Effect of Imports from China on US Employment",2021-03-11 06:47:03,"Clément de Chaisemartin, Ziteng Lei","http://arxiv.org/abs/2103.06437v10, http://arxiv.org/pdf/2103.06437v10",econ.EM
29367,em,"This paper introduces a new specification for the nonparametric
production-frontier based on Data Envelopment Analysis (DEA) when dealing with
decision-making units whose economic performances are correlated with those of
the neighbors (spatial dependence). To illustrate the bias reduction that the
SpDEA provides with respect to standard DEA methods, an analysis of the
regional production frontiers for the NUTS-2 European regions during the period
2000-2014 was carried out. The estimated SpDEA scores show a bimodal
distribution that is not detected by the standard DEA estimates. The results confirm
the crucial role of space, offering important new insights on both the causes
of regional disparities in labour productivity and the observed polarization of
the European distribution of per capita income.",Addressing spatial dependence in technical efficiency estimation: A Spatial DEA frontier approach,2021-03-25 21:27:06,"Julian Ramajo, Miguel A. Marquez, Geoffrey J. D. Hewings","http://arxiv.org/abs/2103.14063v1, http://arxiv.org/pdf/2103.14063v1",econ.EM
29361,em,"The Rubin Causal Model (RCM) is a framework that allows to define the causal
effect of an intervention as a contrast of potential outcomes. In recent years,
several methods have been developed under the RCM to estimate causal effects in
time series settings. None of these makes use of ARIMA models, which are
instead very common in the econometrics literature. In this paper, we propose a
novel approach, C-ARIMA, to define and estimate the causal effect of an
intervention in a time series setting under the RCM. We first formalize the
assumptions enabling the definition, the estimation and the attribution of the
effect to the intervention; we then check the validity of the proposed method
with an extensive simulation study, comparing its performance against a
standard intervention analysis approach. In the empirical application, we use
C-ARIMA to assess the causal effect of a permanent price reduction on
supermarket sales. The CausalArima R package provides an implementation of our
proposed approach.",Estimating the causal effect of an intervention in a time series setting: the C-ARIMA approach,2021-03-11 18:39:08,"Fiammetta Menchetti, Fabrizio Cipollini, Fabrizia Mealli","http://arxiv.org/abs/2103.06740v3, http://arxiv.org/pdf/2103.06740v3",econ.EM
29362,em,"The relevance condition of Integrated Conditional Moment (ICM) estimators is
significantly weaker than the conventional IV's in at least two respects: (1)
consistent estimation without excluded instruments is possible, provided
endogenous covariates are non-linearly mean-dependent on exogenous covariates,
and (2) endogenous covariates may be uncorrelated with but mean-dependent on
instruments. These remarkable properties notwithstanding, multiplicative-kernel
ICM estimators suffer diminished identification strength, large bias, and
severe size distortions even for a moderately sized instrument vector. This
paper proposes a computationally fast linear ICM estimator that better
preserves identification strength in the presence of multiple instruments and a
test of the ICM relevance condition. Monte Carlo simulations demonstrate a
considerably better size control in the presence of multiple instruments and a
favourably competitive performance in general. An empirical example illustrates
the practical usefulness of the estimator, where estimates remain plausible
when no excluded instrument is used.",Feasible IV Regression without Excluded Instruments,2021-03-17 16:11:52,Emmanuel Selorm Tsyawo,"http://arxiv.org/abs/2103.09621v4, http://arxiv.org/pdf/2103.09621v4",econ.EM
29363,em,"We introduce a new test for a two-sided hypothesis involving a subset of the
structural parameter vector in the linear instrumental variables (IVs) model.
Guggenberger et al. (2019), GKM19 from now on, introduce a subvector
Anderson-Rubin (AR) test with data-dependent critical values that has
asymptotic size equal to nominal size for a parameter space that allows for
arbitrary strength or weakness of the IVs and has uniformly nonsmaller power
than the projected AR test studied in Guggenberger et al. (2012). However,
GKM19 imposes the restrictive assumption of conditional homoskedasticity. The
main contribution here is to robustify the procedure in GKM19 to arbitrary
forms of conditional heteroskedasticity. We first adapt the method in GKM19 to
a setup where a certain covariance matrix has an approximate Kronecker product
(AKP) structure which nests conditional homoskedasticity. The new test equals
this adaption when the data is consistent with AKP structure as decided by a
model selection procedure. Otherwise the test equals the AR/AR test in Andrews
(2017) that is fully robust to conditional heteroskedasticity but less powerful
than the adapted method. We show theoretically that the new test has asymptotic
size bounded by the nominal size and document improved power relative to the
AR/AR test in a wide array of Monte Carlo simulations when the covariance
matrix is not too far from AKP.",A Powerful Subvector Anderson Rubin Test in Linear Instrumental Variables Regression with Conditional Heteroskedasticity,2021-03-21 14:46:04,"Patrik Guggenberger, Frank Kleibergen, Sophocles Mavroeidis","http://arxiv.org/abs/2103.11371v4, http://arxiv.org/pdf/2103.11371v4",econ.EM
29364,em,"In any multiperiod panel, a two-way fixed effects (TWFE) regression is
numerically equivalent to a first-difference (FD) regression that pools all
possible between-period gaps. Building on this observation, this paper develops
numerical and causal interpretations of the TWFE coefficient. At the sample
level, the TWFE coefficient is a weighted average of FD coefficients with
different between-period gaps. This decomposition is useful for assessing the
source of identifying variation for the TWFE coefficient. At the population
level, a causal interpretation of the TWFE coefficient requires a common trends
assumption for any between-period gap, and the assumption has to be conditional
on changes in time-varying covariates. I show that these requirements can be
naturally relaxed by modifying the estimator using a pooled FD regression.",What Do We Get from Two-Way Fixed Effects Regressions? Implications from Numerical Equivalence,2021-03-23 11:16:58,Shoya Ishimaru,"http://arxiv.org/abs/2103.12374v4, http://arxiv.org/pdf/2103.12374v4",econ.EM
29365,em,"Here, we analyse the behaviour of the higher order standardised moments of
financial time series when we truncate a large data set into smaller and
smaller subsets, referred to below as time windows. We look at the effect of
the economic environment on the behaviour of higher order moments in these time
windows. We observe two different scaling relations of higher order moments
when the data sub sets' length decreases; one for longer time windows and
another for the shorter time windows. These scaling relations drastically
change when the time window encompasses a financial crisis. We also observe a
qualitative change of higher order standardised moments compared to the
Gaussian values in response to a shrinking time window. We extend this analysis
to incorporate the effects these scaling relations have upon risk. We decompose
the return series within these time windows and carry out a Value-at-Risk
calculation. In doing so, we observe the manifestation of the scaling relations
through the change in the Value-at-Risk level. Moreover, we model the observed
scaling laws by analysing the hierarchy of rare events on higher order moments.",An investigation of higher order moments of empirical financial data and the implications to risk,2021-03-24 16:54:08,"Luke De Clerk, Sergey Savel'ev","http://arxiv.org/abs/2103.13199v3, http://arxiv.org/pdf/2103.13199v3",econ.EM
29366,em,"We propose a route choice model in which traveler behavior is represented as
a utility maximizing assignment of flow across an entire network under a flow
conservation constraint. Substitution between routes depends on how much they
overlap. The model is estimated considering the full set of route
alternatives, and no choice set generation is required. Nevertheless,
estimation requires only linear regression and is very fast. Predictions from
the model can be computed using convex optimization, and computation is
straightforward even for large networks. We estimate and validate the model
using a large dataset comprising 1,337,096 GPS traces of trips in the Greater
Copenhagen road network.",A perturbed utility route choice model,2021-03-25 15:21:39,"Mogens Fosgerau, Mads Paulsen, Thomas Kjær Rasmussen","http://arxiv.org/abs/2103.13784v3, http://arxiv.org/pdf/2103.13784v3",econ.EM
29368,em,"When designing eligibility criteria for welfare programs, policymakers
naturally want to target the individuals who will benefit the most. This paper
proposes two new econometric approaches to selecting an optimal eligibility
criterion when individuals' costs to the program are unknown and need to be
estimated. One is designed to achieve the highest benefit possible while
satisfying a budget constraint with high probability. The other is designed to
optimally trade off the benefit and the cost from violating the budget
constraint. The setting I consider extends the previous literature on Empirical
Welfare Maximization by allowing for uncertainty in estimating the budget
needed to implement the criterion, in addition to its benefit. Consequently, my
approaches improve the existing approach as they can be applied to settings
with imperfect take-up or varying program needs. I illustrate my approaches
empirically by deriving an optimal budget-constrained Medicaid expansion in the
US.",Empirical Welfare Maximization with Constraints,2021-03-29 06:08:12,Liyang Sun,"http://arxiv.org/abs/2103.15298v1, http://arxiv.org/pdf/2103.15298v1",econ.EM
29369,em,"I show that Holston, Laubach and Williams' (2017) implementation of Median
Unbiased Estimation (MUE) cannot recover the signal-to-noise ratio of interest
from their Stage 2 model. Moreover, their implementation of the structural
break regressions which are used as an auxiliary model in MUE deviates from
Stock and Watson's (1998) formulation. This leads to spuriously large estimates
of the signal-to-noise parameter $\lambda _{z}$ and thereby an excessive
downward trend in the other factor $z_{t}$ and the natural rate. I provide a
correction to the Stage 2 model specification and the implementation of the
structural break regressions in MUE. This correction is quantitatively
important. It results in substantially smaller point estimates of $\lambda
_{z}$, which affects the severity of the downward trend in the other factor $z_{t}$.
For the US, the estimate of $\lambda _{z}$ shrinks from $0.040$ to $0.013$ and
is statistically highly insignificant. For the Euro Area, the UK and Canada,
the MUE point estimates of $\lambda _{z}$ are \emph{exactly} zero. Natural rate
estimates from HLW's model using the correct Stage 2 MUE implementation are up
to 100 basis points larger than originally computed.",On a Standard Method for Measuring the Natural Rate of Interest,2021-03-30 18:57:09,Daniel Buncic,"http://arxiv.org/abs/2103.16452v2, http://arxiv.org/pdf/2103.16452v2",econ.EM
29370,em,"We conduct a simulation study of Local Projection (LP) and Vector
Autoregression (VAR) estimators of structural impulse responses across
thousands of data generating processes, designed to mimic the properties of the
universe of U.S. macroeconomic data. Our analysis considers various
identification schemes and several variants of LP and VAR estimators, employing
bias correction, shrinkage, or model averaging. A clear bias-variance trade-off
emerges: LP estimators have lower bias than VAR estimators, but they also have
substantially higher variance at intermediate and long horizons. Bias-corrected
LP is the preferred method if and only if the researcher overwhelmingly
prioritizes bias. For researchers who also care about precision, VAR methods
are the most attractive -- Bayesian VARs at short and long horizons, and
least-squares VARs at intermediate and long horizons.",Local Projections vs. VARs: Lessons From Thousands of DGPs,2021-04-01 20:50:25,"Dake Li, Mikkel Plagborg-Møller, Christian K. Wolf","http://arxiv.org/abs/2104.00655v3, http://arxiv.org/pdf/2104.00655v3",econ.EM
29371,em,"The fundamental relationship of traffic flow is empirically estimated by
fitting a regression curve to a cloud of observations of traffic variables.
Such estimates, however, may suffer from the confounding/endogeneity bias due
to omitted variables such as driving behaviour and weather. To this end, this
paper adopts a causal approach to obtain an unbiased estimate of the
fundamental flow-density relationship using traffic detector data. In
particular, we apply a Bayesian non-parametric spline-based regression approach
with instrumental variables to adjust for the aforementioned confounding bias.
The proposed approach is benchmarked against standard curve-fitting methods in
estimating the flow-density relationship for three highway bottlenecks in the
United States. Our empirical results suggest that the saturated (or
hypercongested) regime of the estimated flow-density relationship using
correlational curve fitting methods may be severely biased, which in turn leads
to biased estimates of important traffic control inputs such as capacity and
capacity-drop. We emphasise that our causal approach is based on the physical
laws of vehicle movement in a traffic stream as opposed to a demand-supply
framework adopted in the economics literature. By doing so, we also aim to
conciliate the engineering and economics approaches to this empirical problem.
Our results, thus, have important implications both for traffic engineers and
transport economists.",Revisiting the empirical fundamental relationship of traffic flow for highways using a causal econometric approach,2021-04-06 13:05:42,"Anupriya, Daniel J. Graham, Daniel Hörcher, Prateek Bansal","http://arxiv.org/abs/2104.02399v1, http://arxiv.org/pdf/2104.02399v1",econ.EM
29372,em,"Inference and testing in general point process models such as the Hawkes
model are predominantly based on asymptotic approximations for likelihood-based
estimators and tests. As an alternative, and to improve finite sample
performance, this paper considers bootstrap-based inference for interval
estimation and testing. Specifically, for a wide class of point process models
we consider a novel bootstrap scheme labeled 'fixed intensity bootstrap' (FIB),
where the conditional intensity is kept fixed across bootstrap repetitions. The
FIB, which is very simple to implement and fast in practice, extends previous
ideas from the bootstrap literature on time series in discrete time, where the
so-called 'fixed design' and 'fixed volatility' bootstrap schemes have shown to
be particularly useful and effective. We compare the FIB with the classic
recursive bootstrap, which is here labeled 'recursive intensity bootstrap'
(RIB). In RIB algorithms, the intensity is stochastic in the bootstrap world
and implementation of the bootstrap is more involved, due to its sequential
structure. For both bootstrap schemes, we provide new bootstrap (asymptotic)
theory which allows us to assess bootstrap validity, and propose a
'non-parametric' approach based on resampling time-changed transformations of
the original waiting times. We also establish the link between the proposed
bootstraps for point process models and the related autoregressive conditional
duration (ACD) models. Lastly, we show effectiveness of the different bootstrap
schemes in finite samples through a set of detailed Monte Carlo experiments,
and provide applications to both financial data and social media data to
illustrate the proposed methodology.",Bootstrap Inference for Hawkes and General Point Processes,2021-04-07 16:52:30,"Giuseppe Cavaliere, Ye Lu, Anders Rahbek, Jacob Stærk-Østergaard","http://arxiv.org/abs/2104.03122v2, http://arxiv.org/pdf/2104.03122v2",econ.EM
29538,em,"This study proposes an efficient algorithm for score computation for
regime-switching models, and derived from which, an efficient
expectation-maximization (EM) algorithm. Different from existing algorithms,
this algorithm does not rely on the forward-backward filtering for smoothed
regime probabilities, and only involves forward computation. Moreover, the
algorithm to compute score is readily extended to compute the Hessian matrix.",Efficient Score Computation and Expectation-Maximization Algorithm in Regime-Switching Models,2022-05-03 18:42:08,"Chaojun Li, Shi Qiu","http://arxiv.org/abs/2205.01565v1, http://arxiv.org/pdf/2205.01565v1",econ.EM
29373,em,"We propose a novel text-analytic approach for incorporating textual
information into structural economic models and apply this to study the effects
of tax news. We first develop a novel semi-supervised two-step topic model that
automatically extracts specific information regarding future tax policy changes
from text. We also propose an approach for transforming such textual
information into an economically meaningful time series to be included in a
structural econometric model as a variable of interest or an instrument. We apply
our method to study the effects of fiscal foresight, in particular the
informational content in speeches of the U.S. president about future tax
reforms, and find that our semi-supervised topic model can successfully extract
information about the direction of tax changes. The extracted information
predicts (exogenous) future tax changes and contains signals that are not
present in previously considered (narrative) measures of (exogenous) tax
changes. We find that tax news triggers a significant yet delayed response in
output.",Min(d)ing the President: A text analytic approach to measuring tax news,2021-04-07 20:08:16,"Adam Jassem, Lenard Lieb, Rui Jorge Almeida, Nalan Baştürk, Stephan Smeekes","http://arxiv.org/abs/2104.03261v2, http://arxiv.org/pdf/2104.03261v2",econ.EM
29374,em,"For treatment effects - one of the core issues in modern econometric analysis
- prediction and estimation are two sides of the same coin. As it turns out,
machine learning methods are the tool for generalized prediction models.
Combined with econometric theory, they allow us to estimate not only the
average but a personalized treatment effect - the conditional average treatment
effect (CATE). In this tutorial, we give an overview of novel methods, explain
them in detail, and apply them via Quantlets in real data applications. We
study the effect that microcredit availability has on the amount of money
borrowed and if 401(k) pension plan eligibility has an impact on net financial
assets, as two empirical examples. The presented toolbox of methods contains
meta-learners, like the Doubly-Robust, R-, T- and X-learner, and methods that
are specially designed to estimate the CATE like the causal BART and the
generalized random forest. In both the microcredit and 401(k) examples, we find
a positive treatment effect for all observations but conflicting evidence of
treatment effect heterogeneity. An additional simulation study, where the true
treatment effect is known, allows us to compare the different methods and to
observe patterns and similarities.",CATE meets ML -- The Conditional Average Treatment Effect and Machine Learning,2021-04-20 15:46:55,Daniel Jacob,"http://arxiv.org/abs/2104.09935v2, http://arxiv.org/pdf/2104.09935v2",econ.EM
29375,em,"In this paper, we develop a two-stage analytical framework to investigate
farming efficiency. In the first stage, data envelopment analysis is employed
to estimate the efficiency of the farms and conduct slack and scale economies
analyses. In the second stage, we propose a stochastic model to identify
potential sources of inefficiency. The latter model integrates within a unified
structure all variables, including inputs, outputs and contextual factors. As
an application, we use a sample of 60 farms from the Batinah coastal
region, an agricultural area representing more than 53 per cent of the total
cropped area of Oman. The findings of the study emphasize the
interdependence of groundwater salinity, irrigation technology and the operational
efficiency of a farm; the key recommendations are more tightly
regulated water consumption and a readjustment of governmental subsidy
policies.",Investigating farming efficiency through a two stage analytical approach: Application to the agricultural sector in Northern Oman,2021-04-22 12:14:56,"Amar Oukil, Slim Zekri","http://arxiv.org/abs/2104.10943v1, http://arxiv.org/pdf/2104.10943v1",econ.EM
29376,em,"Instrumental variables estimation has gained considerable traction in recent
decades as a tool for causal inference, particularly amongst empirical
researchers. This paper makes three contributions. First, we provide a detailed
theoretical discussion on the properties of the standard two-stage least
squares estimator in the presence of weak instruments and introduce and derive
two alternative estimators. Second, we conduct Monte-Carlo simulations to
compare the finite-sample behavior of the different estimators, particularly in
the weak-instruments case. Third, we apply the estimators to a real-world
context; we employ the different estimators to calculate returns to schooling.",Weak Instrumental Variables: Limitations of Traditional 2SLS and Exploring Alternative Instrumental Variable Estimators,2021-04-26 10:01:29,"Aiwei Huang, Madhurima Chandra, Laura Malkhasyan","http://arxiv.org/abs/2104.12370v1, http://arxiv.org/pdf/2104.12370v1",econ.EM
29377,em,"This paper studies the identification of causal effects of a continuous
treatment using a new difference-in-differences strategy. Our approach allows
for endogeneity of the treatment, and employs repeated cross-sections. It
requires an exogenous change over time which affects the treatment in a
heterogeneous way, stationarity of the distribution of unobservables and a rank
invariance condition on the time trend. On the other hand, we do not impose any
functional form restrictions or an additive time trend, and our approach is invariant to
the scaling of the dependent variable. Under our conditions, the time trend can
be identified using a control group, as in the binary difference-in-differences
literature. In our scenario, however, this control group is defined by the
data. We then identify average and quantile treatment effect parameters. We
develop corresponding nonparametric estimators and study their asymptotic
properties. Finally, we apply our results to the effect of disposable income on
consumption.",Nonparametric Difference-in-Differences in Repeated Cross-Sections with Continuous Treatments,2021-04-29 19:23:53,"Xavier D'Haultfoeuille, Stefan Hoderlein, Yuya Sasaki","http://dx.doi.org/10.1016/j.jeconom.2022.07.003, http://arxiv.org/abs/2104.14458v2, http://arxiv.org/pdf/2104.14458v2",econ.EM
29378,em,"This paper studies identification of the marginal treatment effect (MTE) when
a binary treatment variable is misclassified. We show under standard
assumptions that the MTE is identified as the derivative of the conditional
expectation of the observed outcome given the true propensity score, which is
partially identified. We characterize the identified set for this propensity
score, and then for the MTE. We show under some mild regularity conditions that
the sign of the MTE is locally identified. We use our MTE bounds to derive
bounds on other commonly used parameters in the literature. We show that our
bounds are tighter than the existing bounds for the local average treatment
effect. We illustrate the practical relevance of our derived bounds through
some numerical and empirical results.",Marginal Treatment Effects with a Misclassified Treatment,2021-05-02 02:51:15,"Santiago Acerenza, Kyunghoon Ban, Désiré Kédagni","http://arxiv.org/abs/2105.00358v7, http://arxiv.org/pdf/2105.00358v7",econ.EM
29380,em,"Empirical work often uses treatment assigned following geographic boundaries.
When the effects of treatment cross over borders, classical
difference-in-differences estimation produces biased estimates for the average
treatment effect. In this paper, I introduce a potential outcomes framework to
model spillover effects and decompose the estimate's bias into two parts: (1) the
control group no longer identifies the counterfactual trend because their
outcomes are affected by treatment and (2) changes in treated units' outcomes
reflect the effect of their own treatment status and the effect from the
treatment status of 'close' units. I propose conditions for non-parametric
identification that can remove both sources of bias and semi-parametrically
estimate the spillover effects themselves including in settings with staggered
treatment timing. To highlight the importance of spillover effects, I revisit
analyses of three place-based interventions.",Difference-in-Differences Estimation with Spatial Spillovers,2021-05-08 19:43:02,Kyle Butts,"http://arxiv.org/abs/2105.03737v3, http://arxiv.org/pdf/2105.03737v3",econ.EM
29381,em,"Empirical analyses on income and wealth inequality and those in other fields
in economics and finance often face the difficulty that the data is
heterogeneous, heavy-tailed or correlated in some unknown fashion. The paper
focuses on applications of the recently developed \textit{t}-statistic based
robust inference approaches in the analysis of inequality measures and their
comparisons under the above problems. Following the approaches, in particular,
a robust large sample test on equality of two parameters of interest (e.g., a
test of equality of inequality measures in two regions or countries considered)
is conducted as follows: The data in the two samples are partitioned
into fixed numbers $q_1, q_2\ge 2$ (e.g., $q_1=q_2=2, 4, 8$) of groups, the
parameters (the inequality measures of interest) are estimated for each group, and
inference is based on a standard two-sample $t-$test with the resulting $q_1,
q_2$ group estimators. Robust $t-$statistic approaches result in valid
inference under general conditions that group estimators of parameters (e.g.,
inequality measures) considered are asymptotically independent, unbiased and
Gaussian of possibly different variances, or weakly converge, at an arbitrary
rate, to independent scale mixtures of normal random variables. These
conditions are typically satisfied in empirical applications even under
pronounced heavy-tailedness and heterogeneity and possible dependence in
observations. The methods dealt with in the paper complement and compare
favorably with other inference approaches available in the literature. The use
of robust inference approaches is illustrated by an empirical analysis of
income inequality measures and their comparisons across different regions in
Russia.",Robust Inference on Income Inequality: $t$-Statistic Based Approaches,2021-05-11 23:29:05,"Rustam Ibragimov, Paul Kattuman, Anton Skrobotov","http://arxiv.org/abs/2105.05335v2, http://arxiv.org/pdf/2105.05335v2",econ.EM
29382,em,"National and local governments have implemented a large number of policies in
response to the Covid-19 pandemic. Evaluating the effects of these policies,
both on the number of Covid-19 cases as well as on other economic outcomes is a
key ingredient for policymakers to be able to determine which policies are most
effective as well as the relative costs and benefits of particular policies. In
this paper, we consider the relative merits of common identification strategies
that exploit variation in the timing of policies across different locations by
checking whether the identification strategies are compatible with leading
epidemic models in the epidemiology literature. We argue that unconfoundedness
type approaches, which condition on the pre-treatment ""state"" of the pandemic,
are likely to be more useful for evaluating policies than
difference-in-differences type approaches due to the highly nonlinear spread of
cases during a pandemic. For difference-in-differences, we further show that a
version of this problem continues to exist even when one is interested in
understanding the effect of a policy on other economic outcomes when those
outcomes also depend on the number of Covid-19 cases. We propose alternative
approaches that are able to circumvent these issues. We apply our proposed
approach to study the effect of state level shelter-in-place orders early in
the pandemic.",Policy Evaluation during a Pandemic,2021-05-14 19:18:58,"Brantly Callaway, Tong Li","http://arxiv.org/abs/2105.06927v2, http://arxiv.org/pdf/2105.06927v2",econ.EM
29383,em,"We propose the double robust Lagrange multiplier (DRLM) statistic for testing
hypotheses specified on the pseudo-true value of the structural parameters in
the generalized method of moments. The pseudo-true value is defined as the
minimizer of the population continuous updating objective function and equals
the true value of the structural parameter in the absence of
misspecification. The (bounding) chi-squared limiting
distribution of the DRLM statistic is robust to both misspecification and weak
identification of the structural parameters, hence its name. To emphasize its
importance for applied work, we use the DRLM test to analyze the return on
education, which is often perceived to be weakly identified, using data from
Card (1995) where misspecification occurs in case of treatment heterogeneity;
and to analyze the risk premia associated with risk factors proposed in Adrian
et al. (2014) and He et al. (2017), where both misspecification and weak
identification need to be addressed.",Double robust inference for continuous updating GMM,2021-05-18 11:12:13,"Frank Kleibergen, Zhaoguo Zhan","http://arxiv.org/abs/2105.08345v1, http://arxiv.org/pdf/2105.08345v1",econ.EM
29384,em,"We use identification robust tests to show that difference, level and
non-linear moment conditions, as proposed by Arellano and Bond (1991), Arellano
and Bover (1995), Blundell and Bond (1998) and Ahn and Schmidt (1995) for the
linear dynamic panel data model, do not separately identify the autoregressive
parameter when its true value is close to one and the variance of the initial
observations is large. We prove that combinations of these moment conditions,
however, do so when there are more than three time series observations. This
identification then results solely from a set of so-called robust moment
conditions. These robust moments are spanned by the combined difference, level
and non-linear moment conditions and only depend on differenced data. We show
that, when only the robust moments contain identifying information on the
autoregressive parameter, the discriminatory power of the Kleibergen (2005) LM
test using the combined moments is identical to the largest rejection
frequencies that can be obtained from solely using the robust moments. This
shows that the KLM test implicitly uses the robust moments when only they
contain information on the autoregressive parameter.",Identification robust inference for moments based analysis of linear dynamic panel data models,2021-05-18 11:12:22,"Maurice J. G. Bun, Frank Kleibergen","http://arxiv.org/abs/2105.08346v1, http://arxiv.org/pdf/2105.08346v1",econ.EM
29385,em,"The econometric literature on treatment-effects typically takes functionals
of outcome-distributions as `social welfare' and ignores program-impacts on
unobserved utilities. We show how to incorporate aggregate utility within
econometric program-evaluation and optimal treatment-targeting for a
heterogeneous population. In the practically important setting of
discrete-choice, under unrestricted preference-heterogeneity and
income-effects, the indirect-utility distribution becomes a closed-form
functional of average demand. This enables nonparametric cost-benefit analysis
of policy-interventions and their optimal targeting based on planners'
redistributional preferences. For ordered/continuous choice,
utility-distributions can be bounded. Our methods are illustrated with Indian
survey-data on private-tuition, where income-paths of usage-maximizing
subsidies differ significantly from welfare-maximizing ones.",Incorporating Social Welfare in Program-Evaluation and Treatment Choice,2021-05-18 20:17:59,"Debopam Bhattacharya, Tatiana Komarova","http://arxiv.org/abs/2105.08689v2, http://arxiv.org/pdf/2105.08689v2",econ.EM
29386,em,"This paper proposes a new framework to evaluate unconditional quantile
effects (UQE) in a data combination model. The UQE measures the effect of a
marginal counterfactual change in the unconditional distribution of a covariate
on quantiles of the unconditional distribution of a target outcome. Under rank
similarity and conditional independence assumptions, we provide a set of
identification results for UQEs when the target covariate is continuously
distributed and when it is discrete, respectively. Based on these
identification results, we propose semiparametric estimators and establish
their large sample properties under primitive conditions. Applying our method
to a variant of Mincer's earnings function, we study the counterfactual
quantile effect of actual work experience on income.",Two Sample Unconditional Quantile Effect,2021-05-20 04:02:09,"Atsushi Inoue, Tong Li, Qi Xu","http://arxiv.org/abs/2105.09445v1, http://arxiv.org/pdf/2105.09445v1",econ.EM
29387,em,"This paper provides additional results relevant to the setting, model, and
estimators of Auerbach (2019a). Section 1 contains results about the large
sample properties of the estimators from Section 2 of Auerbach (2019a). Section
2 considers some extensions to the model. Section 3 provides an application to
estimating network peer effects. Section 4 shows the results from some
simulations.",Identification and Estimation of a Partially Linear Regression Model using Network Data: Inference and an Application to Network Peer Effects,2021-05-20 22:35:16,Eric Auerbach,"http://arxiv.org/abs/2105.10002v1, http://arxiv.org/pdf/2105.10002v1",econ.EM
29388,em,"Average partial effects (APEs) are often not point identified in panel models
with unrestricted unobserved heterogeneity, such as the binary response panel model
with fixed effects and logistic errors. This lack of point-identification
occurs despite the identification of these models' common coefficients. We
provide a unified framework to establish the point identification of various
partial effects in a wide class of nonlinear semiparametric models under an
index sufficiency assumption on the unobserved heterogeneity, even when the
error distribution is unspecified and non-stationary. This assumption does not
impose parametric restrictions on the unobserved heterogeneity and
idiosyncratic errors. We also present partial identification results when the
support condition fails. We then propose three-step semiparametric estimators
for the APE, the average structural function, and average marginal effects, and
show their consistency and asymptotic normality. Finally, we illustrate our
approach in a study of determinants of married women's labor supply.",Identification and Estimation of Partial Effects in Nonlinear Semiparametric Panel Models,2021-05-27 03:52:26,"Laura Liu, Alexandre Poirier, Ji-Liang Shiu","http://arxiv.org/abs/2105.12891v4, http://arxiv.org/pdf/2105.12891v4",econ.EM
29389,em,"The exponentially weighted moving average (EMWA) could be labeled as a
competitive volatility estimator, where its main strength relies on computation
simplicity, especially in a multi-asset scenario, due to dependency only on the
decay parameter, $\lambda$. But, what is the best election for $\lambda$ in the
EMWA volatility model? Through a large time-series data set of historical
returns of the top US large-cap companies; we test empirically the forecasting
performance of the EWMA approach, under different time horizons and varying the
decay parameter. Using a rolling window scheme, the out-of-sample performance
of the variance-covariance matrix is computed following two approaches. First,
if we look for a fixed decay parameter for the full sample, the results are in
agreement with the RiskMetrics suggestion for 1-month forecasting. In addition,
we provide the full-sample optimal decay parameter for the weekly and bi-weekly
forecasting horizon cases, confirming two facts: i) the optimal value is a
function of the forecasting horizon, and ii) for shorter forecasting horizons
short-term memory gains importance. Second, we also evaluate the
forecasting performance of EWMA, but this time using the optimal time-varying
decay parameter which minimizes the in-sample variance-covariance estimator,
arriving at better accuracy than the use of a fixed-full-sample optimal
parameter.",Asset volatility forecasting:The optimal decay parameter in the EWMA model,2021-05-30 01:18:52,Axel A. Araneda,"http://arxiv.org/abs/2105.14382v1, http://arxiv.org/pdf/2105.14382v1",econ.EM
29390,em,"I partially identify the marginal treatment effect (MTE) when the treatment
is misclassified. I explore two restrictions, allowing for dependence between
the instrument and the misclassification decision. If the signs of the
derivatives of the propensity scores are equal, I identify the MTE sign. If
those derivatives are similar, I bound the MTE. To illustrate, I analyze the
impact of alternative sentences (fines and community service v. no punishment)
on recidivism in Brazil, where appeals processes generate misclassification.
The estimated misclassification bias may be as large as 10% of the largest
possible MTE, and the bounds contain the correctly estimated MTE.",Crime and Mismeasured Punishment: Marginal Treatment Effect with Misclassification,2021-05-29 21:33:08,Vitor Possebom,"http://dx.doi.org/10.1162/rest_a_01372, http://arxiv.org/abs/2106.00536v7, http://arxiv.org/pdf/2106.00536v7",econ.EM
29391,em,"The standard approximation of a natural logarithm in statistical analysis
interprets a linear change of \(p\) in \(\ln(X)\) as a \((1+p)\) proportional
change in \(X\), which is only accurate for small values of \(p\). I suggest
base-\((1+p)\) logarithms, where \(p\) is chosen ahead of time. A one-unit
change in \(\log_{1+p}(X)\) is exactly equivalent to a \((1+p)\) proportional
change in \(X\). This avoids an approximation applied too broadly, makes exact
interpretation easier and less error-prone, improves approximation quality when
approximations are used, makes the change of interest a one-log-unit change
like other regression variables, and reduces error from the use of
\(\log(1+X)\).",Linear Rescaling to Accurately Interpret Logarithms,2021-06-06 12:11:36,Nick Huntington-Klein,"http://arxiv.org/abs/2106.03070v3, http://arxiv.org/pdf/2106.03070v3",econ.EM
29392,em,"Clustered standard errors and approximate randomization tests are popular
inference methods that allow for dependence within observations. However, they
require researchers to know the cluster structure ex ante. We propose a
procedure to help researchers discover clusters in panel data. Our method is
based on thresholding an estimated long-run variance-covariance matrix and
requires the panel to be large in the time dimension, but imposes no lower
bound on the number of units. We show that our procedure recovers the true
clusters with high probability with no assumptions on the cluster structure.
The estimated clusters are of independent interest, but they can also be used
in the approximate randomization tests or with conventional cluster-robust
covariance estimators. The resulting procedures control size and have good
power.",Panel Data with Unknown Clusters,2021-06-10 08:06:13,Yong Cai,"http://arxiv.org/abs/2106.05503v4, http://arxiv.org/pdf/2106.05503v4",econ.EM
29393,em,"In this paper, we develop a method to assess the sensitivity of local average
treatment effect estimates to potential violations of the monotonicity
assumption of Imbens and Angrist (1994). We parameterize the degree to which
monotonicity is violated using two sensitivity parameters: the first one
determines the share of defiers in the population, and the second one measures
differences in the distributions of outcomes between compliers and defiers. For
each pair of values of these sensitivity parameters, we derive sharp bounds on
the outcome distributions of compliers in the first-order stochastic dominance
sense. We identify the robust region that is the set of all values of
sensitivity parameters for which a given empirical conclusion, e.g. that the
local average treatment effect is positive, is valid. Researchers can assess
the credibility of their conclusion by evaluating whether all the plausible
sensitivity parameters lie in the robust region. We obtain confidence sets for
the robust region through a bootstrap procedure and illustrate the sensitivity
analysis in an empirical application. We also extend this framework to analyze
treatment effects of the entire population.",Sensitivity of LATE Estimates to Violations of the Monotonicity Assumption,2021-06-11 17:26:16,Claudia Noack,"http://arxiv.org/abs/2106.06421v1, http://arxiv.org/pdf/2106.06421v1",econ.EM
29394,em,"Testing for causation, defined as the preceding impact of the past values of
one variable on the current value of another one when all other pertinent
information is accounted for, is increasingly utilized in empirical research of
the time-series data in different scientific disciplines. A relatively recent
extension of this approach has been allowing for potential asymmetric impacts
since it is harmonious with the way reality operates in many cases according to
Hatemi-J (2012). The current paper maintains that it is also important to
account for potential changes in the parameters when asymmetric causation
tests are conducted, as there are a number of reasons why the
causal connection between variables may change across time. The current paper
therefore extends static asymmetric causality tests by making them dynamic
through the use of subsamples. An application is also provided, consistent with
measurable definitions of good and bad economic or financial news and
their potential interaction across time.",Dynamic Asymmetric Causality Tests with an Application,2021-06-14 20:17:30,Abdulnasser Hatemi-J,"http://arxiv.org/abs/2106.07612v2, http://arxiv.org/pdf/2106.07612v2",econ.EM
29395,em,"Bayesian nonparametric estimates of Australian mental health distributions
are obtained to assess how the mental health status of the population has
changed over time and to compare the mental health status of female/male and
indigenous/non-indigenous population subgroups. First- and second-order
stochastic dominance are used to compare distributions, with results presented
in terms of the posterior probability of dominance and the posterior
probability of no dominance. Our results suggest that mental health has deteriorated
in recent years, that males' mental health status is better than that of
females, and that non-indigenous health status is better than that of the indigenous
population.",Comparisons of Australian Mental Health Distributions,2021-06-15 14:07:51,"David Gunawan, William Griffiths, Duangkamon Chotikapanich","http://arxiv.org/abs/2106.08047v1, http://arxiv.org/pdf/2106.08047v1",econ.EM
29396,em,"When conducting inference on partially identified parameters, confidence
regions may cover the whole identified set with a prescribed probability, to
which we will refer as set coverage, or they may cover each of its points with a
prescribed probability, to which we will refer as point coverage. Since set
coverage implies point coverage, confidence regions satisfying point coverage
are generally preferred on the grounds that they may be more informative. The
object of this note is to describe a decision problem in which, contrary to
received wisdom, point coverage is clearly undesirable.",Set coverage and robust policy,2021-06-17 22:55:46,"Marc Henry, Alexei Onatski","http://arxiv.org/abs/2106.09784v1, http://arxiv.org/pdf/2106.09784v1",econ.EM
29397,em,"In this paper, a semiparametric partially linear model in the spirit of
Robinson (1988) with a Box-Cox transformed dependent variable is studied.
Transformation regression models are widely used in applied econometrics to
avoid misspecification. In addition, a partially linear semiparametric model is
an intermediate strategy that tries to balance advantages and disadvantages of
a fully parametric model and nonparametric models. A combination of
transformation and partially linear semiparametric model is, thus, a natural
strategy. The model parameters are estimated by a semiparametric extension of
the so-called smooth minimum distance (SmoothMD) approach proposed by Lavergne
and Patilea (2013). SmoothMD is suitable for models defined by conditional
moment conditions and allows the variance of the error terms to depend on the
covariates. In addition, here we allow for infinite-dimensional nuisance
parameters. The asymptotic behavior of the new SmoothMD estimator is studied
under general conditions and new inference methods are proposed. A simulation
experiment illustrates the performance of the methods for finite samples.",Semiparametric inference for partially linear regressions with Box-Cox transformation,2021-06-20 19:31:33,"Daniel Becker, Alois Kneip, Valentin Patilea","http://arxiv.org/abs/2106.10723v1, http://arxiv.org/pdf/2106.10723v1",econ.EM
29405,em,"This paper considers nonlinear measures of intergenerational income mobility
such as (i) the effect of parents' permanent income on the entire distribution
of child's permanent income, (ii) transition matrices, and (iii) rank-rank
correlations when observed annual incomes are treated as measured-with-error
versions of permanent incomes. We develop a new approach to identifying joint
distributions in the presence of ""two-sided"" measurement error, and, hence,
identify essentially all parameters of interest in the intergenerational income
mobility literature. Using recent data from the 1997 National Longitudinal
Study of Youth, we find that accounting for measurement error notably reduces
various estimates of intergenerational mobility.",Nonlinear Approaches to Intergenerational Income Mobility allowing for Measurement Error,2021-07-20 05:49:51,"Brantly Callaway, Tong Li, Irina Murtazashvili","http://arxiv.org/abs/2107.09235v2, http://arxiv.org/pdf/2107.09235v2",econ.EM
29398,em,"This paper proposes a flexible and analytically tractable class of
frequency-severity models based on neural networks to parsimoniously capture
important empirical observations. In the proposed two-part model, mean
functions of frequency and severity distributions are characterized by neural
networks to incorporate the non-linearity of input variables. Furthermore, it
is assumed that the mean function of the severity distribution is an affine
function of the frequency variable to account for a potential linkage between
frequency and severity. We provide explicit closed-form formulas for the mean
and variance of the aggregate loss within our modelling framework. Components
of the proposed model including parameters of neural networks and distribution
parameters can be estimated by minimizing the associated negative
log-likelihood functionals with neural network architectures. Furthermore, we
leverage the Shapley value and recent developments in machine learning to
interpret the outputs of the model. Applications to a synthetic dataset and
insurance claims data illustrate that our method outperforms the existing
methods in terms of interpretability and predictive accuracy.",A Neural Frequency-Severity Model and Its Application to Insurance Claims,2021-06-21 01:42:47,Dong-Young Lim,"http://arxiv.org/abs/2106.10770v1, http://arxiv.org/pdf/2106.10770v1",econ.EM
29399,em,"In the context of the Covid-19 pandemic, multiple studies rely on two-way
fixed effects (FE) models to assess the impact of mitigation policies on health
outcomes. Building on the SIRD model of disease transmission, I show that FE
models tend to be misspecified for three reasons. First, despite misleading
common trends in the pre-treatment period, the parallel trends assumption
generally does not hold. Second, heterogeneity in infection rates and infected
populations across regions cannot be accounted for by region-specific fixed
effects, nor by conditioning on observable time-varying confounders. Third,
epidemiological theory predicts heterogeneous treatment effects across regions
and over time. Via simulations, I find that the bias resulting from model
misspecification can be substantial, in magnitude and sometimes in sign.
Overall, my results caution against the use of FE models for mitigation policy
evaluation.",On the Use of Two-Way Fixed Effects Models for Policy Evaluation During Pandemics,2021-06-21 12:44:57,Germain Gauthier,"http://arxiv.org/abs/2106.10949v1, http://arxiv.org/pdf/2106.10949v1",econ.EM
29400,em,"In a classical model of the first-price sealed-bid auction with independent
private values, we develop nonparametric estimation and inference procedures
for a class of policy-relevant metrics, such as total expected surplus and
expected revenue under counterfactual reserve prices. Motivated by the
linearity of these metrics in the quantile function of bidders' values, we
propose a bid spacings-based estimator of the latter and derive its
Bahadur-Kiefer expansion. This makes it possible to construct exact uniform
confidence bands and assess the optimality of a given auction rule. Using the
data on U.S. Forest Service timber auctions, we test whether setting zero
reserve prices in these auctions was revenue maximizing.",Nonparametric inference on counterfactuals in first-price auctions,2021-06-25 22:30:10,"Pasha Andreyanov, Grigory Franguridi","http://arxiv.org/abs/2106.13856v2, http://arxiv.org/pdf/2106.13856v2",econ.EM
29401,em,"This paper analyzes difference-in-differences setups with a continuous
treatment. We show that treatment effect on the treated-type parameters can be
identified under a generalized parallel trends assumption that is similar to
the binary treatment setup. However, interpreting differences in these
parameters across different values of the treatment can be particularly
challenging due to treatment effect heterogeneity. We discuss alternative,
typically stronger, assumptions that alleviate these challenges. We also
provide a variety of treatment effect decomposition results, highlighting that
parameters associated with popular linear two-way fixed-effect (TWFE)
specifications can be hard to interpret, \emph{even} when there are only two
time periods. We introduce alternative estimation procedures that do not suffer
from these TWFE drawbacks, and show in an application that they can lead to
different conclusions.",Difference-in-Differences with a Continuous Treatment,2021-07-06 17:23:15,"Brantly Callaway, Andrew Goodman-Bacon, Pedro H. C. Sant'Anna","http://arxiv.org/abs/2107.02637v3, http://arxiv.org/pdf/2107.02637v3",econ.EM
29402,em,"This paper studies a dynamic ordered logit model for panel data with fixed
effects. The main contribution of the paper is to construct a set of valid
moment conditions that are free of the fixed effects. The moment functions can
be computed using four or more periods of data, and the paper presents
sufficient conditions for the moment conditions to identify the common
parameters of the model, namely the regression coefficients, the autoregressive
parameters, and the threshold parameters. The availability of moment conditions
suggests that these common parameters can be estimated using the generalized
method of moments, and the paper documents the performance of this estimator
using Monte Carlo simulations and an empirical illustration to self-reported
health status using the British Household Panel Survey.",Dynamic Ordered Panel Logit Models,2021-07-07 17:33:49,"Bo E. Honoré, Chris Muris, Martin Weidner","http://arxiv.org/abs/2107.03253v3, http://arxiv.org/pdf/2107.03253v3",econ.EM
29403,em,"This paper shows that testability of reverse causality is possible even in
the absence of exogenous variation, such as in the form of instrumental
variables. Instead of relying on exogenous variation, we achieve testability by
imposing relatively weak model restrictions. Our main assumption is that the
true functional relationship is nonlinear and error terms are additively
separable. In contrast to existing literature, we allow the error to be
heteroskedastic, which is the case in most economic applications. Our procedure
builds on reproducing kernel Hilbert space (RKHS) embeddings of probability
distributions to test conditional independence. We show that the procedure
provides a powerful tool to detect the causal direction in both Monte Carlo
simulations and an application to German survey data. We can infer the causal
direction between income and work experience (proxied by age) without relying
on exogenous variation.",Testability of Reverse Causality without Exogeneous Variation,2021-07-13 12:19:12,"Christoph Breunig, Patrick Burauel","http://arxiv.org/abs/2107.05936v1, http://arxiv.org/pdf/2107.05936v1",econ.EM
29404,em,"In nonlinear panel data models, fixed effects methods are often criticized
because they cannot identify average marginal effects (AMEs) in short panels.
The common argument is that the identification of AMEs requires knowledge of
the distribution of unobserved heterogeneity, but this distribution is not
identified in a fixed effects model with a short panel. In this paper, we
derive identification results that contradict this argument. In a panel data
dynamic logit model, and for T as small as four, we prove the point
identification of different AMEs, including causal effects of changes in the
lagged dependent variable or in the duration in last choice. Our proofs are
constructive and provide simple closed-form expressions for the AMEs in terms
of probabilities of choice histories. We illustrate our results using Monte
Carlo experiments and with an empirical application of a dynamic structural
model of consumer brand choice with state dependence.",Identification of Average Marginal Effects in Fixed Effects Dynamic Discrete Choice Models,2021-07-12 19:37:35,"Victor Aguirregabiria, Jesus M. Carro","http://arxiv.org/abs/2107.06141v1, http://arxiv.org/pdf/2107.06141v1",econ.EM
29406,em,"We provide a review of recent developments in the calculation of standard
errors and test statistics for statistical inference. While much of the focus
of the last two decades in economics has been on generating unbiased
coefficients, recent years have seen a variety of advancements in correcting for
non-standard standard errors. We synthesize these recent advances in addressing
challenges to conventional inference, like heteroskedasticity, clustering,
serial correlation, and testing multiple hypotheses. We also discuss recent
advancements in numerical methods, such as the bootstrap, wild bootstrap, and
randomization inference. We make three specific recommendations. First, applied
economists need to clearly articulate the challenges to statistical inference
that are present in data as well as the source of those challenges. Second,
modern computing power and statistical software means that applied economists
have no excuse for not correctly calculating their standard errors and test
statistics. Third, because complicated sampling strategies and research designs
make it difficult to work out the correct formula for standard errors and test
statistics, we believe that in the applied economics profession it should
become standard practice to rely on asymptotic refinements to the distribution
of an estimator or test statistic via bootstrapping. Throughout, we reference
built-in and user-written Stata commands that allow one to quickly calculate
accurate standard errors and relevant test statistics.",Recent Developments in Inference: Practicalities for Applied Economics,2021-07-20 22:13:44,"Jeffrey D. Michler, Anna Josephson","http://arxiv.org/abs/2107.09736v1, http://arxiv.org/pdf/2107.09736v1",econ.EM
29407,em,"Maximum likelihood estimation of large Markov-switching vector
autoregressions (MS-VARs) can be challenging or infeasible due to parameter
proliferation. To accommodate situations where dimensionality may be of
comparable order to or exceeds the sample size, we adopt a sparse framework and
propose two penalized maximum likelihood estimators with either the Lasso or
the smoothly clipped absolute deviation (SCAD) penalty. We show that both
estimators are estimation consistent, while the SCAD estimator also selects
relevant parameters with probability approaching one. A modified EM-algorithm
is developed for the case of Gaussian errors and simulations show that the
algorithm exhibits desirable finite sample performance. In an application to
short-horizon return predictability in the US, we estimate a 15 variable
2-state MS-VAR(1) and obtain the often reported counter-cyclicality in
predictability. The variable selection property of our estimators helps to
identify predictors that contribute strongly to predictability during economic
contractions but are otherwise irrelevant in expansions. Furthermore,
out-of-sample analyses indicate that large MS-VARs can significantly outperform
""hard-to-beat"" predictors like the historical average.",Estimating high-dimensional Markov-switching VARs,2021-07-27 05:02:40,Kenwin Maung,"http://arxiv.org/abs/2107.12552v1, http://arxiv.org/pdf/2107.12552v1",econ.EM
29408,em,"This paper considers identification and inference for the distribution of
treatment effects conditional on observable covariates. Since the conditional
distribution of treatment effects is not point identified without strong
assumptions, we obtain bounds on the conditional distribution of treatment
effects by using the Makarov bounds. We also consider the case where the
treatment is endogenous and propose two stochastic dominance assumptions to
tighten the bounds. We develop a nonparametric framework to estimate the bounds
and establish the asymptotic theory that is uniformly valid over the support of
treatment effects. An empirical example illustrates the usefulness of the
methods.",Partial Identification and Inference for Conditional Distributions of Treatment Effects,2021-08-02 11:48:47,Sungwon Lee,"http://arxiv.org/abs/2108.00723v6, http://arxiv.org/pdf/2108.00723v6",econ.EM
29409,em,"We introduce a sequential estimator for continuous time dynamic discrete
choice models (single-agent models and games) by adapting the nested pseudo
likelihood (NPL) estimator of Aguirregabiria and Mira (2002, 2007), developed
for discrete time models with discrete time data, to the continuous time case
with data sampled either discretely (i.e., uniformly-spaced snapshot data) or
continuously. We establish conditions for consistency and asymptotic normality
of the estimator, a local convergence condition, and, for single agent models,
a zero Jacobian property assuring local convergence. We carry out a series of
Monte Carlo experiments using an entry-exit game with five heterogeneous firms
to confirm the large-sample properties and demonstrate finite-sample bias
reduction via iteration. In our simulations we show that the convergence issues
documented for the NPL estimator in discrete time models are less likely to
affect comparable continuous-time models. We also show that there can be large
bias in economically-relevant parameters, such as the competitive effect and
entry cost, from estimating a misspecified discrete time model when in fact the
data generating process is a continuous time model.",Nested Pseudo Likelihood Estimation of Continuous-Time Dynamic Discrete Games,2021-08-04 20:16:08,"Jason R. Blevins, Minhae Kim","http://arxiv.org/abs/2108.02182v2, http://arxiv.org/pdf/2108.02182v2",econ.EM
29410,em,"The fixed-effects model estimates the regressor effects on the mean of the
response, which is inadequate to summarize the variable relationships in the
presence of heteroscedasticity. In this paper, we adapt the asymmetric least
squares (expectile) regression to the fixed-effects model and propose a new
model: expectile regression with fixed-effects (ERFE). The ERFE model
applies the within transformation strategy to concentrate out the incidental
parameter and estimates the regressor effects on the expectiles of the response
distribution. The ERFE model captures the data heteroscedasticity and
eliminates any bias resulting from the correlation between the regressors and
the omitted factors. We derive the asymptotic properties of the ERFE
estimators and suggest robust estimators of their covariance matrix. Our
simulations show that the ERFE estimator is unbiased and outperforms its
competitors. Our real data analysis shows its ability to capture data
heteroscedasticity (see our R package, \url{github.com/AmBarry/erfe}).",Weighted asymmetric least squares regression with fixed-effects,2021-08-10 17:59:04,"Amadou Barry, Karim Oualkacha, Arthur Charpentier","http://arxiv.org/abs/2108.04737v1, http://arxiv.org/pdf/2108.04737v1",econ.EM
29411,em,"This paper proposes a new method of inference in high-dimensional regression
models and high-dimensional IV regression models. Estimation is based on a
combined use of the orthogonal greedy algorithm, high-dimensional Akaike
information criterion, and double/debiased machine learning. The method of
inference for any low-dimensional subvector of high-dimensional parameters is
based on a root-$N$ asymptotic normality, which is shown to hold without
requiring the exact sparsity condition or the $L^p$ sparsity condition.
Simulation studies demonstrate superior finite-sample performance of this
proposed method over those based on the LASSO or the random forest, especially
under less sparse models. We illustrate an application to production analysis
with a panel of Chilean firms.",Inference in high-dimensional regression models without the exact or $L^p$ sparsity,2021-08-21 17:45:38,"Jooyoung Cha, Harold D. Chiang, Yuya Sasaki","http://arxiv.org/abs/2108.09520v2, http://arxiv.org/pdf/2108.09520v2",econ.EM
29412,em,"I develop a feasible weighted projected principal component (FPPC) analysis
for factor models in which observable characteristics partially explain the
latent factors. This novel method provides more efficient and accurate
estimators than existing methods. To increase estimation efficiency, I take
into account both cross-sectional dependence and heteroskedasticity by using a
consistent estimator of the inverse error covariance matrix as the weight
matrix. To improve accuracy, I employ a projection approach using
characteristics because it removes noise components in high-dimensional factor
analysis. By using the FPPC method, estimators of the factors and loadings have
faster rates of convergence than those of the conventional factor analysis.
Moreover, I propose an FPPC-based diffusion index forecasting model. The
limiting distribution of the parameter estimates and the rate of convergence
for forecast errors are obtained. Using U.S. bond market and macroeconomic
data, I demonstrate that the proposed model outperforms models based on
conventional principal component estimators. I also show that the proposed
model performs well among a large group of machine learning techniques in
forecasting excess bond returns.",Feasible Weighted Projected Principal Component Analysis for Factor Models with an Application to Bond Risk Premia,2021-08-23 18:42:19,Sung Hoon Choi,"http://arxiv.org/abs/2108.10250v3, http://arxiv.org/pdf/2108.10250v3",econ.EM
29413,em,"Double machine learning (DML) has become an increasingly popular tool for
automated variable selection in high-dimensional settings. Even though the
ability to deal with a large number of potential covariates can render
selection-on-observables assumptions more plausible, there is at the same time
a growing risk that endogenous variables are included, which would lead to the
violation of conditional independence. This paper demonstrates that DML is very
sensitive to the inclusion of only a few ""bad controls"" in the covariate space.
The resulting bias varies with the nature of the theoretical causal model,
which raises concerns about the feasibility of selecting control variables in a
data-driven way.",Double Machine Learning and Automated Confounder Selection -- A Cautionary Tale,2021-08-25 18:36:00,"Paul Hünermund, Beyers Louw, Itamar Caspi","http://dx.doi.org/10.1515/jci-2022-0078, http://arxiv.org/abs/2108.11294v4, http://arxiv.org/pdf/2108.11294v4",econ.EM
29414,em,"We develop a framework for difference-in-differences designs with staggered
treatment adoption and heterogeneous causal effects. We show that conventional
regression-based estimators fail to provide unbiased estimates of relevant
estimands absent strong restrictions on treatment-effect homogeneity. We then
derive the efficient estimator addressing this challenge, which takes an
intuitive ""imputation"" form when treatment-effect heterogeneity is
unrestricted. We characterize the asymptotic behavior of the estimator, propose
tools for inference, and develop tests for identifying assumptions. Our method
applies with time-varying controls, in triple-difference designs, and with
certain non-binary treatments. We show the practical relevance of our results
in a simulation study and an application. Studying the consumption response to
tax rebates in the United States, we find that the notional marginal propensity
to consume is between 8 and 11 percent in the first quarter - about half as
large as benchmark estimates used to calibrate macroeconomic models - and
predominantly occurs in the first month after the rebate.",Revisiting Event Study Designs: Robust and Efficient Estimation,2021-08-27 20:56:31,"Kirill Borusyak, Xavier Jaravel, Jann Spiess","http://arxiv.org/abs/2108.12419v4, http://arxiv.org/pdf/2108.12419v4",econ.EM
29415,em,"We study the wild bootstrap inference for instrumental variable regressions
with a small number of large clusters. We first show that the wild bootstrap
Wald test controls size asymptotically up to a small error as long as the
parameters of endogenous variables are strongly identified in at least one of
the clusters. Second, we establish the conditions for the bootstrap tests to
have power against local alternatives. We further develop a wild bootstrap
Anderson-Rubin test for the full-vector inference and show that it controls
size asymptotically even under weak identification in all clusters. We
illustrate their good performance using simulations and provide an empirical
application to a well-known dataset about US local labor markets.",Wild Bootstrap for Instrumental Variables Regressions with Weak and Few Clusters,2021-08-31 12:43:13,"Wenjie Wang, Yichong Zhang","http://arxiv.org/abs/2108.13707v4, http://arxiv.org/pdf/2108.13707v4",econ.EM
29416,em,"In a recent paper Juodis and Reese (2021) (JR) show that the application of
the CD test proposed by Pesaran (2004) to residuals from panels with latent
factors results in over-rejection and propose a randomized test statistic to
correct for over-rejection, and add a screening component to achieve power.
This paper considers the same problem but from a different perspective and
shows that the standard CD test remains valid if the latent factors are weak,
and proposes a simple bias-corrected CD test, labelled CD*, which is shown to
be asymptotically normal, irrespective of whether the latent factors are weak
or strong. This result is shown to hold for pure latent factor models as well
as for panel regressions with latent factors. Small sample properties of the
CD* test are investigated by Monte Carlo experiments, which show that it has
the correct size and satisfactory power for both Gaussian and non-Gaussian errors.
In contrast, it is found that JR's test tends to over-reject in the case of
panels with non-Gaussian errors and has low power against spatial network
alternatives. The use of the CD* test is illustrated with two empirical
applications from the literature.",A Bias-Corrected CD Test for Error Cross-Sectional Dependence in Panel Data Models with Latent Factors,2021-09-01 17:36:30,"M. Hashem Pesaran, Yimeng Xie","http://arxiv.org/abs/2109.00408v3, http://arxiv.org/pdf/2109.00408v3",econ.EM
29417,em,"This survey is organized around three main topics: models, econometrics, and
empirical applications. Section 2 presents the theoretical framework,
introduces the concept of Markov Perfect Nash Equilibrium, discusses existence
and multiplicity, and describes the representation of this equilibrium in terms
of conditional choice probabilities. We also discuss extensions of the basic
framework, including models in continuous time, the concepts of oblivious
equilibrium and experience-based equilibrium, and dynamic games where firms
have non-equilibrium beliefs. In section 3, we first provide an overview of the
types of data used in this literature, before turning to a discussion of
identification issues and results, and estimation methods. We review different
methods to deal with multiple equilibria and large state spaces. We also
describe recent developments for estimating games in continuous time and
incorporating serially correlated unobservables, and discuss the use of machine
learning methods to solving and estimating dynamic games. Section 4 discusses
empirical applications of dynamic games in IO. We start describing the first
empirical applications in this literature during the early 2000s. Then, we
review recent applications dealing with innovation, antitrust and mergers,
dynamic pricing, regulation, product repositioning, advertising, uncertainty
and investment, airline network competition, dynamic matching, and natural
resources. We conclude with our view of the progress made in this literature
and the remaining challenges.",Dynamic Games in Empirical Industrial Organization,2021-09-03 23:45:43,"Victor Aguirregabiria, Allan Collard-Wexler, Stephen P. Ryan","http://arxiv.org/abs/2109.01725v2, http://arxiv.org/pdf/2109.01725v2",econ.EM
29418,em,"As increasingly popular metrics of worker and institutional quality,
estimated value-added (VA) measures are now widely used as dependent or
explanatory variables in regressions. For example, VA is used as an explanatory
variable when examining the relationship between teacher VA and students'
long-run outcomes. Due to the multi-step nature of VA estimation, the standard
errors (SEs) researchers routinely use when including VA measures in OLS
regressions are incorrect. In this paper, I show that the assumptions
underpinning VA models naturally lead to a generalized method of moments (GMM)
framework. Using this insight, I construct correct SEs for regressions that
use VA as an explanatory variable and for regressions where VA is the outcome.
In addition, I identify the causes of incorrect SEs when using OLS, discuss the
need to adjust SEs under different sets of assumptions, and propose a more
efficient estimator for using VA as an explanatory variable. Finally, I
illustrate my results using data from North Carolina, and show that correcting
SEs results in an increase that is larger than the impact of clustering SEs.",A Framework for Using Value-Added in Regressions,2021-09-04 01:01:05,Antoine Deeb,"http://arxiv.org/abs/2109.01741v3, http://arxiv.org/pdf/2109.01741v3",econ.EM
29419,em,"We analyze the multilayer architecture of the global input-output network
using sectoral trade data (WIOD, 2016 release). With a focus on the mesoscale
structure and related properties, our multilayer analysis takes into
consideration the splitting into industry-based layers in order to catch more
peculiar relationships between countries that cannot be detected from the
analysis of the single-layer aggregated network. We can identify several large
international communities in which some countries trade more intensively in
some specific layers. However, interestingly, our results show that these
clusters can restructure and evolve over time. In general, not only their
internal composition changes, but the centrality rankings of the members inside
are also reordered, with industries from some countries diminishing in importance
and industries from other countries gaining it. These changes in the large
international clusters may reflect the outcomes and the dynamics of
cooperation, partner selection and competition among industries and among
countries in the global input-output network.",The multilayer architecture of the global input-output network and its properties,2021-09-07 12:01:25,"Rosanna Grassi, Paolo Bartesaghi, Gian Paolo Clemente, Duc Thi Luu","http://dx.doi.org/10.1016/j.jebo.2022.10.029, http://arxiv.org/abs/2109.02946v2, http://arxiv.org/pdf/2109.02946v2",econ.EM
29420,em,"This paper focuses on a setting with observations having a cluster dependence
structure and presents two main impossibility results. First, we show that when
there is only one large cluster, i.e., the researcher does not have any
knowledge on the dependence structure of the observations, it is not possible
to consistently discriminate the mean. When within-cluster observations satisfy
the uniform central limit theorem, we also show that a sufficient condition for
consistent $\sqrt{n}$-discrimination of the mean is that we have at least two
large clusters. This result shows some limitations for inference when we lack
information on the dependence structure of observations. Our second result
provides a necessary and sufficient condition for the cluster structure that
the long run variance is consistently estimable. Our result implies that when
there is at least one large cluster, the long run variance is not consistently
estimable.",Some Impossibility Results for Inference With Cluster Dependence with Large Clusters,2021-09-09 02:23:30,"Denis Kojevnikov, Kyungchul Song","http://arxiv.org/abs/2109.03971v4, http://arxiv.org/pdf/2109.03971v4",econ.EM
29421,em,"Estimating a causal effect from observational data can be biased if we do not
control for self-selection. This selection is based on confounding variables
that affect the treatment assignment and the outcome. Propensity score methods
aim to correct for confounding. However, not all covariates are confounders. We
propose the outcome-adaptive random forest (OARF) that only includes desirable
variables for estimating the propensity score to decrease bias and variance.
Our approach works in high-dimensional datasets and if the outcome and
propensity score model are non-linear and potentially complicated. The OARF
excludes covariates that are not associated with the outcome, even in the
presence of a large number of spurious variables. Simulation results suggest
that the OARF produces unbiased estimates, has a smaller variance and is
superior in variable selection compared to other approaches. The results from
two empirical examples, the effect of right heart catheterization on mortality
and the effect of maternal smoking during pregnancy on birth weight, show
comparable treatment effects to previous findings but tighter confidence
intervals and more plausible selected variables.",Variable Selection for Causal Inference via Outcome-Adaptive Random Forest,2021-09-09 13:29:26,Daniel Jacob,"http://arxiv.org/abs/2109.04154v1, http://arxiv.org/pdf/2109.04154v1",econ.EM
29422,em,"Recent work has highlighted the difficulties of estimating
difference-in-differences models when treatment timing occurs at different
times for different units. This article introduces the R package did2s which
implements the estimator introduced in Gardner (2021). The article provides an
approachable review of the underlying econometric theory and introduces the
syntax for the function did2s. Further, the package introduces a function,
event_study, that provides a common syntax for all the modern event-study
estimators and plot_event_study to plot the results of each estimator.",{did2s}: Two-Stage Difference-in-Differences,2021-09-10 19:15:41,"Kyle Butts, John Gardner","http://dx.doi.org/10.32614/RJ-2022-048, http://arxiv.org/abs/2109.05913v2, http://arxiv.org/pdf/2109.05913v2",econ.EM
29423,em,"The evaluation of a multifaceted program against extreme poverty in different
developing countries gave encouraging results, but with important heterogeneity
between countries. This master thesis proposes to study this heterogeneity with
a Bayesian hierarchical analysis. The analysis we carry out with two different
hierarchical models leads to a very low amount of pooling of information
between countries, indicating that this observed heterogeneity should be
interpreted mostly as true heterogeneity, and not as sampling error. We analyze
the first order behavior of our hierarchical models, in order to understand
what leads to this very low amount of pooling. We try to give this work a
didactic approach, with an introduction of Bayesian analysis and an explanation
of the different modeling and computational choices of our analysis.",Bayesian hierarchical analysis of a multifaceted program against extreme poverty,2021-09-14 18:25:36,Louis Charlot,"http://arxiv.org/abs/2109.06759v1, http://arxiv.org/pdf/2109.06759v1",econ.EM
29424,em,"A recent econometric literature has critiqued the use of regression
discontinuities where administrative borders serve as the 'cutoff'.
Identification in this context is difficult since multiple treatments can
change at the cutoff and individuals can easily sort on either side of the
border. This note extends the difference-in-discontinuities framework discussed
in Grembi et al. (2016) to a geographic setting. The paper formalizes the
identifying assumptions in this context which will allow for the removal of
time-invariant sorting and compound-treatments similar to the
difference-in-differences methodology.",Geographic Difference-in-Discontinuities,2021-09-15 19:23:27,Kyle Butts,"http://dx.doi.org/10.1080/13504851.2021.2005236, http://arxiv.org/abs/2109.07406v2, http://arxiv.org/pdf/2109.07406v2",econ.EM
29426,em,"Calibration, the practice of choosing the parameters of a structural model to
match certain empirical moments, can be viewed as minimum distance estimation.
Existing standard error formulas for such estimators require a consistent
estimate of the correlation structure of the empirical moments, which is often
unavailable in practice. Instead, the variances of the individual empirical
moments are usually readily estimable. Using only these variances, we derive
conservative standard errors and confidence intervals for the structural
parameters that are valid even under the worst-case correlation structure. In
the over-identified case, we show that the moment weighting scheme that
minimizes the worst-case estimator variance amounts to a moment selection
problem with a simple solution. Finally, we develop tests of over-identifying
or parameter restrictions. We apply our methods empirically to a model of menu
cost pricing for multi-product firms and to a heterogeneous agent New Keynesian
model.",Standard Errors for Calibrated Parameters,2021-09-16 19:57:46,"Matthew D. Cocci, Mikkel Plagborg-Møller","http://arxiv.org/abs/2109.08109v2, http://arxiv.org/pdf/2109.08109v2",econ.EM
29427,em,"We provide adaptive confidence intervals on a parameter of interest in the
presence of nuisance parameters when some of the nuisance parameters have known
signs. The confidence intervals are adaptive in the sense that they tend to be
short at and near the points where the nuisance parameters are equal to zero.
We focus our results primarily on the practical problem of inference on a
coefficient of interest in the linear regression model when it is unclear
whether or not it is necessary to include a subset of control variables whose
partial effects on the dependent variable have known directions (signs). Our
confidence intervals are trivial to compute and can provide significant length
reductions relative to standard confidence intervals in cases for which the
control variables do not have large effects. At the same time, they entail
minimal length increases at any parameter values. We prove that our confidence
intervals are asymptotically valid uniformly over the parameter space and
illustrate their length properties in an empirical application to a factorial
design field experiment and a Monte Carlo study calibrated to the empirical
application.",Short and Simple Confidence Intervals when the Directions of Some Effects are Known,2021-09-17 00:09:56,"Philipp Ketz, Adam McCloskey","http://arxiv.org/abs/2109.08222v1, http://arxiv.org/pdf/2109.08222v1",econ.EM
29428,em,"We introduce the conditional Maximum Composite Likelihood (MCL) estimation
method for the stochastic factor ordered Probit model of credit rating
transitions of firms. This model is recommended for internal credit risk
assessment procedures in banks and financial institutions under the Basel III
regulations. Its exact likelihood function involves a high-dimensional
integral, which can be approximated numerically before maximization. However,
the estimated migration risk and required capital tend to be sensitive to the
quality of this approximation, potentially leading to statistical regulatory
arbitrage. The proposed conditional MCL estimator circumvents this problem and
maximizes the composite log-likelihood of the factor ordered Probit model. We
present three conditional MCL estimators of different complexity and examine
their consistency and asymptotic normality when n and T tend to infinity. The
performance of these estimators at finite T is examined and compared with a
granularity-based approach in a simulation study. The use of the MCL estimator
is also illustrated in an empirical application.",Composite Likelihood for Stochastic Migration Model with Unobserved Factor,2021-09-19 05:33:20,"Antoine Djogbenou, Christian Gouriéroux, Joann Jasiak, Maygol Bandehali","http://arxiv.org/abs/2109.09043v2, http://arxiv.org/pdf/2109.09043v2",econ.EM
29429,em,"Standard high-dimensional factor models assume that the comovements in a
large set of variables could be modeled using a small number of latent factors
that affect all variables. In many relevant applications in economics and
finance, heterogeneous comovements specific to some known groups of variables
naturally arise, and reflect distinct cyclical movements within those groups.
This paper develops two new statistical tests that can be used to investigate
whether there is evidence supporting group-specific heterogeneity in the data.
The first test statistic is designed for the alternative hypothesis of
group-specific heterogeneity appearing in at least one pair of groups; the
second is for the alternative of group-specific heterogeneity appearing in all
pairs of groups. We show that the second moment of factor loadings changes
across groups when heterogeneity is present, and use this feature to establish
the theoretical validity of the tests. We also propose and prove the validity
of a permutation approach for approximating the asymptotic distributions of the
two test statistics. The simulations and the empirical financial application
indicate that the proposed tests are useful for detecting group-specific
heterogeneity.",Tests for Group-Specific Heterogeneity in High-Dimensional Factor Models,2021-09-19 05:50:57,"Antoine Djogbenou, Razvan Sufana","http://arxiv.org/abs/2109.09049v2, http://arxiv.org/pdf/2109.09049v2",econ.EM
29430,em,"I develop algorithms to facilitate Bayesian inference in structural vector
autoregressions that are set-identified with sign and zero restrictions by
showing that the system of restrictions is equivalent to a system of sign
restrictions in a lower-dimensional space. Consequently, algorithms applicable
under sign restrictions can be extended to allow for zero restrictions.
Specifically, I extend algorithms proposed in Amir-Ahmadi and Drautzburg (2021)
to check whether the identified set is nonempty and to sample from the
identified set without rejection sampling. I compare the new algorithms to
alternatives by applying them to variations of the model considered by Arias et
al. (2019), who estimate the effects of US monetary policy using sign and zero
restrictions on the monetary policy reaction function. The new algorithms are
particularly useful when a rich set of sign restrictions substantially
truncates the identified set given the zero restrictions.",Algorithms for Inference in SVARs Identified with Sign and Zero Restrictions,2021-09-22 15:08:24,Matthew Read,"http://arxiv.org/abs/2109.10676v2, http://arxiv.org/pdf/2109.10676v2",econ.EM
29431,em,"We study linear panel regression models in which the unobserved error term is
an unknown smooth function of two-way unobserved fixed effects. In standard
additive or interactive fixed effect models the individual specific and time
specific effects are assumed to enter with a known functional form (additive or
multiplicative). In this paper, we allow for this functional form to be more
general and unknown. We discuss two different estimation approaches that allow
consistent estimation of the regression parameters in this setting as the
number of individuals and the number of time periods grow to infinity. The
first approach uses the interactive fixed effect estimator in Bai (2009), which
is still applicable here, as long as the number of factors in the estimation
grows asymptotically. The second approach first discretizes the two-way
unobserved heterogeneity (similar to what Bonhomme, Lamadon and Manresa 2021
are doing for one-way heterogeneity) and then estimates a simple linear fixed
effect model with additive two-way grouped fixed effects. For both estimation
methods we obtain asymptotic convergence results, perform Monte Carlo
simulations, and employ the estimators in an empirical application to UK house
price data.",Linear Panel Regressions with Two-Way Unobserved Heterogeneity,2021-09-24 15:02:21,"Hugo Freeman, Martin Weidner","http://arxiv.org/abs/2109.11911v3, http://arxiv.org/pdf/2109.11911v3",econ.EM
29433,em,"It is widely accepted that women are underrepresented in academia in general
and economics in particular. This paper introduces a test to detect an
under-researched form of hiring bias: implicit quotas. I derive a test under
the Null of random hiring that requires no information about individual hires
under some assumptions. I derive the asymptotic distribution of this test
statistic and, as an alternative, propose a parametric bootstrap procedure that
samples from the exact distribution. This test can be used to analyze a variety
of other hiring settings. I analyze the distribution of female professors at
German universities across 50 different disciplines. I show that the
distribution of women, given the average number of women in the respective
field, is highly unlikely to result from a random allocation of women across
departments and more likely to stem from an implicit quota of one or two women
on the department level. I also show that a large part of the variation in the
share of women across STEM and non-STEM disciplines could be explained by a
two-women quota on the department level. These findings have important
implications for the potential effectiveness of policies aimed at reducing
underrepresentation and providing evidence of how stakeholders perceive and
evaluate diversity.",Testing the Presence of Implicit Hiring Quotas with Application to German Universities,2021-09-29 13:55:46,Lena Janys,"http://arxiv.org/abs/2109.14343v2, http://arxiv.org/pdf/2109.14343v2",econ.EM
29434,em,"This paper extends the identification results in Nevo and Rosen (2012) to
nonparametric models. We derive nonparametric bounds on the average treatment
effect when an imperfect instrument is available. As in Nevo and Rosen (2012),
we assume that the correlation between the imperfect instrument and the
unobserved latent variables has the same sign as the correlation between the
endogenous variable and the latent variables. We show that the monotone
treatment selection and monotone instrumental variable restrictions, introduced
by Manski and Pepper (2000, 2009), jointly imply this assumption. Moreover, we
show how the monotone treatment response assumption can help tighten the
bounds. The identified set can be written in the form of intersection bounds,
which is more conducive to inference. We illustrate our methodology using the
National Longitudinal Survey of Young Men data to estimate returns to
schooling.",Nonparametric Bounds on Treatment Effects with Imperfect Instruments,2021-09-30 04:18:05,"Kyunghoon Ban, Désiré Kédagni","http://arxiv.org/abs/2109.14785v1, http://arxiv.org/pdf/2109.14785v1",econ.EM
29435,em,"This paper proposes a random coefficient panel model where the regressors are
correlated with the time-varying random coefficients in each period, a critical
feature in many economic applications. We model the random coefficients as
unknown functions of a fixed effect of arbitrary dimensions, a time-varying
random shock that affects the choice of regressors, and an exogenous
idiosyncratic shock. A sufficiency argument is used to control for the fixed
effect, which enables one to construct a feasible control function for the
random shock and subsequently identify the moments of the random coefficients.
We propose a three-step series estimator and prove an asymptotic normality
result. Simulation results show that the method can accurately estimate both
the mean and the dispersion of the random coefficients. As an application, we
estimate the average output elasticities for a sample of Chinese manufacturing
firms.",A Time-Varying Endogenous Random Coefficient Model with an Application to Production Functions,2021-10-03 14:13:57,Ming Li,"http://arxiv.org/abs/2110.00982v1, http://arxiv.org/pdf/2110.00982v1",econ.EM
29436,em,"Binary treatments are often ex-post aggregates of multiple treatments or can
be disaggregated into multiple treatment versions. Thus, effects can be
heterogeneous due to either effect or treatment heterogeneity. We propose a
decomposition method that uncovers masked heterogeneity, avoids spurious
discoveries, and evaluates treatment assignment quality. The estimation and
inference procedure based on double/debiased machine learning allows for
high-dimensional confounding, many treatments and extreme propensity scores.
Our applications suggest that heterogeneous effects of smoking on birthweight
are partially due to different smoking intensities and that gender gaps in Job
Corps effectiveness are largely explained by differential selection into
vocational training.",Effect or Treatment Heterogeneity? Policy Evaluation with Aggregated and Disaggregated Treatments,2021-10-04 16:09:49,"Phillip Heiler, Michael C. Knaus","http://arxiv.org/abs/2110.01427v4, http://arxiv.org/pdf/2110.01427v4",econ.EM
29437,em,"This paper investigates the cointegration between possible determinants of
crude oil futures prices during the COVID-19 pandemic period. We perform
comparative analysis of WTI and newly-launched Shanghai crude oil futures (SC)
via the Autoregressive Distributed Lag (ARDL) model and Quantile Autoregressive
Distributed Lag (QARDL) model. The empirical results confirm that economic
policy uncertainty, stock markets, interest rates and coronavirus panic are
important drivers of WTI futures prices. Our findings also suggest that the US
and China's stock markets play vital roles in movements of SC futures prices.
Meanwhile, CSI300 stock index has a significant positive short-run impact on SC
futures prices while S\&P500 prices possess a positive nexus with SC futures
prices both in the long run and the short run. Overall, these empirical findings
provide practical implications for investors and policymakers.",New insights into price drivers of crude oil futures markets: Evidence from quantile ARDL approach,2021-10-06 15:31:03,"Hao-Lin Shao, Ying-Hui Shao, Yan-Hong Yang","http://arxiv.org/abs/2110.02693v1, http://arxiv.org/pdf/2110.02693v1",econ.EM
29438,em,"This paper presents novel methods and theories for estimation and inference
about parameters in econometric models using machine learning for nuisance
parameters estimation when data are dyadic. We propose a dyadic cross fitting
method to remove over-fitting biases under arbitrary dyadic dependence.
Together with the use of Neyman orthogonal scores, this novel cross fitting
method enables root-$n$ consistent estimation and inference robustly against
dyadic dependence. We illustrate an application of our general framework to
high-dimensional network link formation models. With this method applied to
empirical data of international economic networks, we reexamine determinants of
free trade agreements (FTA) viewed as links formed in the dyad composed of
world economies. We document that standard methods may lead to misleading
conclusions for numerous classic determinants of FTA formation due to biased
point estimates or standard errors which are too small.",Dyadic double/debiased machine learning for analyzing determinants of free trade agreements,2021-10-08 23:16:17,"Harold D Chiang, Yukun Ma, Joel Rodrigue, Yuya Sasaki","http://arxiv.org/abs/2110.04365v3, http://arxiv.org/pdf/2110.04365v3",econ.EM
29551,em,"We study what two-stage least squares (2SLS) identifies in models with
multiple treatments under treatment effect heterogeneity. Two conditions are
shown to be necessary and sufficient for the 2SLS to identify positively
weighted sums of agent-specific effects of each treatment: average conditional
monotonicity and no cross effects. Our identification analysis allows for any
number of treatments, any number of continuous or discrete instruments, and the
inclusion of covariates. We provide testable implications and present
characterizations of choice behavior implied by our identification conditions.",2SLS with Multiple Treatments,2022-05-16 20:45:46,"Manudeep Bhuller, Henrik Sigstad","http://arxiv.org/abs/2205.07836v9, http://arxiv.org/pdf/2205.07836v9",econ.EM
29439,em,"In this paper we propose new approaches to estimating large dimensional
monotone index models. This class of models has been popular in the applied and
theoretical econometrics literatures as it includes discrete choice,
nonparametric transformation, and duration models. A main advantage of our
approach is computational. For instance, rank estimation procedures such as
those proposed in Han (1987) and Cavanagh and Sherman (1998) that optimize a
nonsmooth, nonconvex objective function are difficult to use with more than a
few regressors, which limits their use with economic data sets. For such
monotone index models with increasing dimension, we propose to use a new class
of estimators based on batched gradient descent (BGD) involving nonparametric
methods such as kernel estimation or sieve estimation, and study their
asymptotic properties. The BGD algorithm uses an iterative procedure where the
key step exploits a strictly convex objective function, resulting in
computational advantages. A contribution of our approach is that our model is
large dimensional and semiparametric and so does not require the use of
parametric distributional assumptions.",Estimating High Dimensional Monotone Index Models by Iterative Convex Optimization,2021-10-09 00:43:00,"Shakeeb Khan, Xiaoying Lan, Elie Tamer, Qingsong Yao","http://arxiv.org/abs/2110.04388v2, http://arxiv.org/pdf/2110.04388v2",econ.EM
29440,em,"We propose consistent nonparametric tests of conditional independence for
time series data. Our methods are motivated by the difference between joint
conditional cumulative distribution function (CDF) and the product of
conditional CDFs. The difference is transformed into a proper conditional
moment restriction (CMR), which forms the basis for our testing procedure. Our
test statistics are then constructed using the integrated moment restrictions
that are equivalent to the CMR. We establish the asymptotic behavior of the
test statistics under the null, the alternative, and the sequence of local
alternatives converging to conditional independence at the parametric rate. Our
tests are implemented with the assistance of a multiplier bootstrap. Monte
Carlo simulations are conducted to evaluate the finite sample performance of
the proposed tests. We apply our tests to examine the predictability of equity
risk premium using variance risk premium for different horizons and find that
there exist various degrees of nonlinear predictability at mid-run and long-run
horizons.",Nonparametric Tests of Conditional Independence for Time Series,2021-10-10 19:27:58,"Xiaojun Song, Haoyu Wei","http://arxiv.org/abs/2110.04847v1, http://arxiv.org/pdf/2110.04847v1",econ.EM
29441,em,"The normality assumption for errors in the Analysis of Variance (ANOVA) is
common when using ANOVA models, but this assumption is rarely tested before such
models are applied, and the existing literature also rarely mentions this
problem. In this article, we propose an easy-to-use method for testing the
normality assumption in ANOVA models by using smooth tests. The test statistic
we propose has an asymptotic chi-square distribution, and our tests are always
consistent across various types of ANOVA models. A discussion of how to choose
the dimension of the smooth model (the number of basis functions) is also
included in this article. Several simulation experiments
show the superiority of our method.",Smooth Tests for Normality in ANOVA,2021-10-10 19:37:55,"Haoyu Wei, Xiaojun Song","http://arxiv.org/abs/2110.04849v1, http://arxiv.org/pdf/2110.04849v1",econ.EM
29442,em,"This paper studies the estimation of linear panel data models with
interactive fixed effects, where one dimension of the panel, typically time,
may be fixed. To this end, a novel transformation is introduced that reduces
the model to a lower dimension, and, in doing so, relieves the model of
incidental parameters in the cross-section. The central result of this paper
demonstrates that transforming the model and then applying the principal
component (PC) estimator of Bai (2009) delivers $\sqrt{n}$
consistent estimates of regression slope coefficients with $T$ fixed. Moreover,
these estimates are shown to be asymptotically unbiased in the presence of
cross-sectional dependence, serial dependence, and with the inclusion of
dynamic regressors, in stark contrast to the usual case. The large $n$, large
$T$ properties of this approach are also studied, where many of these results
carry over to the case in which $n$ is growing sufficiently fast relative to
$T$. Transforming the model also proves to be useful beyond estimation, a point
illustrated by showing that with $T$ fixed, the eigenvalue ratio test of
Ahn and Horenstein (2013) provides a consistent test for the number of factors when
applied to the transformed model.",Fixed $T$ Estimation of Linear Panel Data Models with Interactive Fixed Effects,2021-10-11 22:42:14,Ayden Higgins,"http://arxiv.org/abs/2110.05579v1, http://arxiv.org/pdf/2110.05579v1",econ.EM
29443,em,"This paper provides partial identification results for the marginal treatment
effect ($MTE$) when the binary treatment variable is potentially misreported
and the instrumental variable is discrete. Identification results are derived
under different sets of nonparametric assumptions. The identification results
are illustrated in identifying the marginal treatment effects of food stamps on
health.",Partial Identification of Marginal Treatment Effects with discrete instruments and misreported treatment,2021-10-12 22:16:59,Santiago Acerenza,"http://arxiv.org/abs/2110.06285v3, http://arxiv.org/pdf/2110.06285v3",econ.EM
29444,em,"In recent years several complaints about racial discrimination in appraising
home values have been accumulating. For several decades, to estimate the sale
price of the residential properties, appraisers have been walking through the
properties, observing the property, collecting data, and making use of the
hedonic pricing models. However, this method bears some costs and by nature is
subjective and biased. To minimize human involvement and the biases in the real
estate appraisals and boost the accuracy of the real estate market price
prediction models, in this research we design data-efficient learning machines
capable of learning and extracting the relation or patterns between the inputs
(features for the house) and output (value of the houses). We compare the
performance of some machine learning and deep learning algorithms, specifically
artificial neural networks, random forest, and k nearest neighbor approaches to
that of the hedonic method for house price prediction in the city of Boulder,
Colorado. Even though this study focuses on houses in the city of Boulder,
it can be generalized to housing markets in other cities. The
results indicate a non-linear association between the dwelling features and
dwelling prices. In light of these findings, this study demonstrates that
random forest and artificial neural network algorithms can be better
alternatives to hedonic regression analysis for prediction of house
Boulder, Colorado.","Machine Learning, Deep Learning, and Hedonic Methods for Real Estate Price Prediction",2021-10-14 07:43:04,Mahdieh Yazdani,"http://arxiv.org/abs/2110.07151v1, http://arxiv.org/pdf/2110.07151v1",econ.EM
29445,em,"This paper investigates the performance, in terms of choice probabilities and
correlations, of existing and new specifications of closed-form route choice
models with flexible correlation patterns, namely the Link Nested Logit (LNL),
the Paired Combinatorial Logit (PCL) and the more recent Combination of Nested
Logit (CoNL) models. Following a consolidated track in the literature, choice
probabilities and correlations of the Multinomial Probit (MNP) model by
(Daganzo and Sheffi, 1977) are taken as target. Laboratory experiments on
small/medium-size networks are illustrated, also leveraging a procedure for
practical calculation of correlations of any GEV models, proposed by (Marzano
2014). Results show that models with inherent limitations in the coverage of
the domain of feasible correlations yield unsatisfactory performance, whilst
the specifications of the CoNL proposed in the paper appear the best in fitting
both MNP correlations and probabilities. Performance of the models is
appreciably ameliorated by introducing lower bounds on the nesting parameters.
Overall, the paper provides guidance for the practical application of tested
models.",Choice probabilities and correlations in closed-form route choice models: specifications and drawbacks,2021-10-14 11:41:34,"Fiore Tinessa, Vittorio Marzano, Andrea Papola","http://arxiv.org/abs/2110.07224v1, http://arxiv.org/pdf/2110.07224v1",econ.EM
29446,em,"This paper studies the role played by identification in the Bayesian analysis
of statistical and econometric models. First, for unidentified models we
demonstrate that there are situations where the introduction of a
non-degenerate prior distribution can make a parameter that is nonidentified in
frequentist theory identified in Bayesian theory. In other situations, it is
preferable to work with the unidentified model and construct a Markov Chain
Monte Carlo (MCMC) algorithm for it instead of introducing identifying
assumptions. Second, for partially identified models we demonstrate how to
construct the prior and posterior distributions for the identified set
parameter and how to conduct Bayesian analysis. Finally, for models that
contain some parameters that are identified and others that are not we show
that marginalizing out the identified parameter from the likelihood with
respect to its conditional prior, given the nonidentified parameter, allows the
data to be informative about the nonidentified and partially identified
parameter. The paper provides examples and simulations that illustrate how to
implement our techniques.",Revisiting identification concepts in Bayesian analysis,2021-10-19 16:15:23,"Jean-Pierre Florens, Anna Simoni","http://arxiv.org/abs/2110.09954v1, http://arxiv.org/pdf/2110.09954v1",econ.EM
29447,em,"This paper formalizes a common approach for estimating effects of treatment
at a specific location using geocoded microdata. This estimator compares units
immediately next to treatment (an inner-ring) to units just slightly further
away (an outer-ring). I introduce intuitive assumptions needed to identify the
average treatment effect among the affected units and illustrates pitfalls that
occur when these assumptions fail. Since one of these assumptions requires
knowledge of exactly how far treatment effects are experienced, I propose a new
method that relaxes this assumption and allows for nonparametric estimation
using partitioning-based least squares developed in Cattaneo et al. (2019).
Since treatment effects typically decay/change over distance, this estimator
improves analysis by estimating a treatment effect curve as a function of
distance from treatment. This is in contrast to the traditional method which, at
best, identifies the average effect of treatment. To illustrate the advantages
of this method, I show that Linden and Rockoff (2008) underestimate the
effects of increased crime risk on home values closest to the treatment and
overestimate how far the effects extend by selecting a treatment ring that is
too wide.",Difference-in-Differences with Geocoded Microdata,2021-10-19 21:20:22,Kyle Butts,"http://arxiv.org/abs/2110.10192v1, http://arxiv.org/pdf/2110.10192v1",econ.EM
29448,em,"Heterogeneous panel data models that allow the coefficients to vary across
individuals and/or change over time have received increasingly more attention
in statistics and econometrics. This paper proposes a two-dimensional
heterogeneous panel regression model that incorporates a group structure of
individual heterogeneous effects with cohort formation for their
time-variations, which allows common coefficients between nonadjacent time
points. A bi-integrative procedure that detects the information regarding group
and cohort patterns simultaneously via a doubly penalized least square with
concave fused penalties is introduced. We use an alternating direction method
of multipliers (ADMM) algorithm that automatically bi-integrates the
two-dimensional heterogeneous panel data model pertaining to a common one.
Consistency and asymptotic normality for the proposed estimators are developed.
We show that the resulting estimators exhibit oracle properties, i.e., the
proposed estimator is asymptotically equivalent to the oracle estimator
obtained using the known group and cohort structures. Furthermore, the
simulation studies provide supportive evidence that the proposed method has
good finite sample performance. A real data empirical application has been
provided to highlight the proposed method.",Bi-integrative analysis of two-dimensional heterogeneous panel data model,2021-10-20 13:40:54,"Wei Wang, Xiaodong Yan, Yanyan Ren, Zhijie Xiao","http://arxiv.org/abs/2110.10480v1, http://arxiv.org/pdf/2110.10480v1",econ.EM
29449,em,"Panel data often contain stayers (units with no within-variations) and slow
movers (units with little within-variations). In the presence of many slow
movers, conventional econometric methods can fail to work. We propose a novel
method of robust inference for the average partial effects in correlated random
coefficient models robustly across various distributions of within-variations,
including the cases with many stayers and/or many slow movers in a unified
manner. In addition to this robustness property, our proposed method entails
smaller biases and hence improves accuracy in inference compared to existing
alternatives. Simulation studies demonstrate our theoretical claims about these
properties: the conventional 95% confidence interval covers the true parameter
value with 37-93% frequencies, whereas our proposed one achieves 93-96%
coverage frequencies.",Slow Movers in Panel Data,2021-10-22 23:06:31,"Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/2110.12041v1, http://arxiv.org/pdf/2110.12041v1",econ.EM
29450,em,"This paper proposes an approach for enhancing density forecasts of non-normal
macroeconomic variables using Bayesian Markov-switching models. Alternative
views about economic regimes are combined to produce flexible forecasts, which
are optimized with respect to standard objective functions of density
forecasting. The optimization procedure explores both forecast combinations and
Bayesian model averaging. In an application to U.S. GDP growth, the approach is
shown to achieve good accuracy in terms of average predictive densities and to
produce well-calibrated forecast distributions. The proposed framework can be
used to evaluate the contribution of economists' views to density forecast
performance. In the empirical application, we consider views derived from the
Fed macroeconomic scenarios used for bank stress tests.",Optimal Regime-Switching Density Forecasts,2021-10-26 18:10:53,Graziano Moramarco,"http://arxiv.org/abs/2110.13761v1, http://arxiv.org/pdf/2110.13761v1",econ.EM
29452,em,"Identifying structural change is a crucial step in analysis of time series
and panel data. The longer the time span, the higher the likelihood that the
model parameters have changed as a result of major disruptive events, such as
the 2007-2008 financial crisis and the 2020 COVID-19 outbreak. Detecting the
existence of breaks, and dating them is therefore necessary not only for
estimation purposes but also for understanding drivers of change and their
effect on relationships. This article introduces a new community-contributed
command called xtbreak, which provides researchers with a complete toolbox for
analysing multiple structural breaks in time series and panel data. xtbreak can
detect the existence of breaks, determine their number and location, and
provide break date confidence intervals. The new command is used to explore
changes in the relationship between COVID-19 cases and deaths in the US using
both country-level time series data and state-level panel data.",Testing and Estimating Structural Breaks in Time Series and Panel Data in Stata,2021-10-27 19:17:31,"Jan Ditzen, Yiannis Karavias, Joakim Westerlund","http://arxiv.org/abs/2110.14550v2, http://arxiv.org/pdf/2110.14550v2",econ.EM
29453,em,"Despite its paramount importance in the empirical growth literature,
productivity convergence analysis has three problems that have yet to be
resolved: (1) little attempt has been made to explore the hierarchical
structure of industry-level datasets; (2) industry-level technology
heterogeneity has largely been ignored; and (3) cross-sectional dependence has
rarely been allowed for. This paper aims to address these three problems within
a hierarchical panel data framework. We propose an estimation procedure and
then derive the corresponding asymptotic theory. Finally, we apply the
framework to a dataset of 23 manufacturing industries from a wide range of
countries over the period 1963-2018. Our results show that both the
manufacturing industry as a whole and individual manufacturing industries at
the ISIC two-digit level exhibit strong conditional convergence in labour
productivity, but not unconditional convergence. In addition, our results show
that both global and industry-specific shocks are important in explaining the
convergence behaviours of the manufacturing industries.",Productivity Convergence in Manufacturing: A Hierarchical Panel Data Approach,2021-10-31 12:54:19,"Guohua Feng, Jiti Gao, Bin Peng","http://arxiv.org/abs/2111.00449v1, http://arxiv.org/pdf/2111.00449v1",econ.EM
29454,em,"Vector autoregressive (VAR) models are widely used in practical studies,
e.g., forecasting, modelling policy transmission mechanism, and measuring
connection of economic agents. To better capture the dynamics, this paper
introduces a new class of time-varying VAR models in which the coefficients and
covariance matrix of the error innovations are allowed to change smoothly over
time. Accordingly, we establish a set of theories, including impulse response
analyses subject to both short-run timing and long-run restrictions, an
information criterion to select the optimal lag, and a
Wald-type test to determine the constant coefficients. Simulation studies are
conducted to evaluate the theoretical findings. Finally, we demonstrate the
empirical relevance and usefulness of the proposed methods through an
application to the transmission mechanism of U.S. monetary policy.","On Time-Varying VAR Models: Estimation, Testing and Impulse Response Analysis",2021-10-31 13:00:19,"Yayi Yan, Jiti Gao, Bin Peng","http://arxiv.org/abs/2111.00450v1, http://arxiv.org/pdf/2111.00450v1",econ.EM
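A hedged toy illustration of the smoothly time-varying coefficient idea described above: a local-constant (kernel-weighted) least squares estimate of a VAR(1) coefficient matrix at each rescaled time point. It is illustrative only and is not the estimator or inference theory developed in the paper.

```python
# Local-constant kernel-weighted least squares for a VAR(1) with coefficients
# that vary smoothly in rescaled time t/T. Illustrative sketch only.
import numpy as np

def tv_var1(y, h=0.1):
    """y: (T, k) array. Returns a (T-1, k, k) array of A_t estimates in
    y_t = A_t y_{t-1} + e_t, where A_t varies smoothly with t/T."""
    T, k = y.shape
    x, z = y[:-1], y[1:]                       # lagged regressors and responses
    grid = np.arange(1, T) / T
    out = np.empty((T - 1, k, k))
    for i, tau in enumerate(grid):
        w = np.exp(-0.5 * ((grid - tau) / h) ** 2)   # Gaussian kernel weights
        sw = np.sqrt(w)[:, None]
        A, *_ = np.linalg.lstsq(sw * x, sw * z, rcond=None)  # weighted LS
        out[i] = A.T
    return out

# Simulated example: the AR coefficient drifts from 0.2 to 0.8 over the sample.
rng = np.random.default_rng(1)
T = 400
a_true = np.linspace(0.2, 0.8, T)
y = np.zeros((T, 1))
for t in range(1, T):
    y[t] = a_true[t] * y[t - 1] + rng.normal()
A_hat = tv_var1(y, h=0.1)
print(A_hat[50, 0, 0], A_hat[-50, 0, 0])   # early vs. late estimates
```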
29455,em,"Using a large quarterly macroeconomic dataset over the period 1960Q1-2017Q4,
this paper documents the usefulness of selected financial ratios from the
housing market and firms' aggregate balance sheets for predicting GDP in the
United States over multi-year horizons. A house price-to-rent ratio adjusted
for the business cycle and the liabilities-to-income ratio of the nonfinancial
noncorporate business sector provide the best in-sample fit and out-of-sample
forecasts of cumulative GDP growth over horizons of 1-5 years, outperforming
all other predictors as well as popular high-dimensional forecasting models and
forecast combinations.",Financial-cycle ratios and multi-year predictions of GDP: Evidence from the United States,2021-11-01 13:52:51,Graziano Moramarco,"http://arxiv.org/abs/2111.00822v2, http://arxiv.org/pdf/2111.00822v2",econ.EM
29456,em,"This article develops nonparametric cointegrating regression models with
endogeneity and semi-long memory. We assume semi-long memory is produced in the
regressor process by tempering of random shock coefficients. The fundamental
properties of long memory processes are thus retained in the regressor process.
Nonparametric nonlinear cointegrating regressions with serially dependent
errors and endogenous regressors that are driven by long memory innovations
have been considered in Wang and Phillips (2016). That work also implemented a
statistical specification test for testing whether the regression function
follows a parametric form. The convergence rate of the proposed test is
parameter dependent, and its limit theory involves the local time of fractional
Brownian motion. The present paper modifies the test statistic proposed for the
long memory case by Wang and Phillips (2016) to be suitable for the semi-long
memory case. With this modification, the limit theory for the test involves the
local time of standard Brownian motion. Through simulation studies, we
investigate properties of nonparametric regression function estimation with
semi-long memory regressors as well as long memory regressors.",Nonparametric Cointegrating Regression Functions with Endogeneity and Semi-Long Memory,2021-11-01 17:31:56,"Sepideh Mosaferi, Mark S. Kaiser","http://arxiv.org/abs/2111.00972v2, http://arxiv.org/pdf/2111.00972v2",econ.EM
29457,em,"This paper investigates the transmission of funding liquidity shocks, credit
risk shocks and unconventional monetary policy within the Euro area. To this
aim, we estimate a financial GVAR model for Germany, France, Italy and Spain on
monthly data over the period 2006-2017. The interactions between repo markets,
sovereign bonds and banks' CDS spreads are analyzed, explicitly accounting for
the country-specific effects of the ECB's asset purchase programmes. Impulse
response analysis signals marginally significant core-periphery heterogeneity,
flight-to-quality effects and spillovers between liquidity conditions and
credit risk. Simulated reductions in ECB programmes tend to result in higher
government bond yields and bank CDS spreads, especially for Italy and Spain, as
well as in falling repo trade volumes and rising repo rates across the Euro
area. However, only a few responses to shocks achieve statistical significance.","Funding liquidity, credit risk and unconventional monetary policy in the Euro area: A GVAR approach",2021-11-01 19:41:59,Graziano Moramarco,"http://arxiv.org/abs/2111.01078v2, http://arxiv.org/pdf/2111.01078v2",econ.EM
29604,em,"Uncovering the heterogeneity of causal effects of policies and business
decisions at various levels of granularity provides substantial value to
decision makers. This paper develops estimation and inference procedures for
multiple treatment models in a selection-on-observed-variables framework by
modifying the Causal Forest approach (Wager and Athey, 2018) in several
dimensions. The new estimators have desirable theoretical, computational, and
practical properties for various aggregation levels of the causal effects.
While an Empirical Monte Carlo study suggests that they outperform previously
suggested estimators, an application to the evaluation of an active labour
market programme shows their value for applied research.",Modified Causal Forest,2022-09-08 15:03:46,"Michael Lechner, Jana Mareckova","http://arxiv.org/abs/2209.03744v1, http://arxiv.org/pdf/2209.03744v1",econ.EM
29458,em,"This paper proposes a class of parametric multiple-index time series models
that involve linear combinations of time trends, stationary variables and unit
root processes as regressors. The inclusion of the three different types of
time series, along with the use of a multiple-index structure for these
variables to circumvent the curse of dimensionality, is due to both theoretical
and practical considerations. The M-type estimators (including OLS, LAD,
Huber's estimator, quantile and expectile estimators, etc.) for the index
vectors are proposed, and their asymptotic properties are established, with the
aid of the generalized function approach to accommodate a wide class of loss
functions that may not be necessarily differentiable at every point. The
proposed multiple-index model is then applied to study the stock return
predictability, which reveals strong nonlinear predictability under various
loss measures. Monte Carlo simulations are also included to evaluate the
finite-sample performance of the proposed estimators.",Multiple-index Nonstationary Time Series Models: Robust Estimation Theory and Practice,2021-11-03 08:12:54,"Chaohua Dong, Jiti Gao, Bin Peng, Yundong Tu","http://arxiv.org/abs/2111.02023v1, http://arxiv.org/pdf/2111.02023v1",econ.EM
29459,em,"This paper proposes a multiplicative component intraday volatility model. The
intraday conditional volatility is expressed as the product of intraday
periodic component, intraday stochastic volatility component and daily
conditional volatility component. I extend the multiplicative component
intraday volatility model of Engle (2012) and Andersen and Bollerslev (1998) by
incorporating the durations between consecutive transactions. The model can be
applied to both regularly and irregularly spaced returns. I also provide a
nonparametric estimation technique of the intraday volatility periodicity. The
empirical results suggest the model can successfully capture the
interdependency of intraday returns.",Multiplicative Component GARCH Model of Intraday Volatility,2021-11-03 20:44:23,Xiufeng Yan,"http://arxiv.org/abs/2111.02376v1, http://arxiv.org/pdf/2111.02376v1",econ.EM
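A hedged sketch of the familiar multiplicative decomposition of intraday volatility into a diurnal (periodic) factor and the remaining level, with the periodic component estimated nonparametrically by time-of-day averaging of squared returns. Illustrative only; the paper's model additionally conditions on durations and includes a stochastic intraday component.

```python
# Nonparametric estimate of the intraday (diurnal) volatility periodicity by
# time-of-day averaging; a simple sketch, not the paper's estimator.
import numpy as np
import pandas as pd

def diurnal_pattern(returns):
    """returns: pd.Series of intraday returns indexed by a DatetimeIndex.
    Returns the periodic volatility factor by time of day, normalised so that
    its squared values average to one."""
    r2 = returns.pow(2)
    tod = returns.index.time
    var_by_tod = r2.groupby(tod).mean()
    return np.sqrt(var_by_tod / var_by_tod.mean())

def deseasonalise(returns):
    """Divide each return by its time-of-day volatility factor."""
    factor = diurnal_pattern(returns)
    tod = pd.Series(returns.index.time, index=returns.index)
    return returns / tod.map(factor).astype(float)

# Example usage (assuming `ret` is a Series of 5-minute returns):
# ret_adj = deseasonalise(ret)
```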
29460,em,"We propose \textbf{occ2vec}, a principal approach to representing
occupations, which can be used in matching, predictive and causal modeling, and
other economic areas. In particular, we use it to score occupations on any
definable characteristic of interest, say the degree of \textquote{greenness}.
Using more than 17,000 occupation-specific text descriptors, we transform each
occupation into a high-dimensional vector using natural language processing.
Similarly, we assign a vector to the target characteristic and estimate the
occupational degree of this characteristic as the cosine similarity between the
vectors. The main advantages of this approach are its universal applicability
and verifiability, in contrast to existing ad-hoc approaches. We extensively
validate our approach on several exercises and then use it to estimate the
occupational degree of charisma and emotional intelligence (EQ). We find that
occupations that score high on these tend to have higher educational
requirements. Turning to wages, highly charismatic occupations are found in
either the lower or the upper tail of the wage distribution. This is not found for EQ,
where higher levels of EQ are generally correlated with higher wages.",occ2vec: A principal approach to representing occupations using natural language processing,2021-11-04 00:27:23,Nicolaj Søndergaard Mühlbach,"http://arxiv.org/abs/2111.02528v2, http://arxiv.org/pdf/2111.02528v2",econ.EM
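A hedged sketch of the scoring step described above: embed occupation descriptors and a target characteristic in the same vector space and score each occupation by cosine similarity. The TF-IDF embedding here is a simple stand-in for the NLP model used in the paper, and all names and descriptors are illustrative.

```python
# Score occupations on a target characteristic via cosine similarity of text
# embeddings (TF-IDF stand-in for the paper's NLP representation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

occupations = {
    "solar photovoltaic installer": "installs and maintains rooftop solar panel systems",
    "petroleum engineer": "designs methods for extracting oil and gas from deposits",
    "kindergarten teacher": "teaches and cares for young children in a classroom",
}
target = "greenness environmental sustainability renewable energy"

vec = TfidfVectorizer().fit(list(occupations.values()) + [target])
occ_vectors = vec.transform(occupations.values())
target_vector = vec.transform([target])

scores = cosine_similarity(occ_vectors, target_vector).ravel()
for (name, _), s in zip(occupations.items(), scores):
    print(f"{name:35s} {s:.3f}")
```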
29461,em,"Dealing with structural breaks is an important step in most, if not all,
empirical economic research. This is particularly true in panel data comprised
of many cross-sectional units, such as individuals, firms or countries, which
are all affected by major events. The COVID-19 pandemic has affected most
sectors of the global economy, and there is by now plenty of evidence to
support this. The impact on stock markets is, however, still unclear. The fact
that most markets seem to have partly recovered while the pandemic is still
ongoing suggests that the relationship between stock returns and COVID-19 has
been subject to structural change. It is therefore important to know if a
structural break has occurred and, if it has, to infer the date of the break.
In the present paper we take this last observation as a source of motivation to
develop a new break detection toolbox that is applicable to different sized
panels, easy to implement and robust to general forms of unobserved
heterogeneity. The toolbox, which is the first of its kind, includes a test for
structural change, a break date estimator, and a break date confidence
interval. Application to a panel covering 61 countries from January 3 to
September 25, 2020, leads to the detection of a structural break that is dated
to the first week of April. The effect of COVID-19 is negative before the break
and zero thereafter, implying that while markets did react, the reaction was
short-lived. A possible explanation for this is the quantitative easing
programs announced by central banks all over the world in the second half of
March.",Structural Breaks in Interactive Effects Panels and the Stock Market Reaction to COVID-19,2021-11-04 20:37:29,"Yiannis Karavias, Paresh Narayan, Joakim Westerlund","http://arxiv.org/abs/2111.03035v1, http://arxiv.org/pdf/2111.03035v1",econ.EM
29462,em,"This paper develops bootstrap methods for practical statistical inference in
panel data quantile regression models with fixed effects. We consider
random-weighted bootstrap resampling and formally establish its validity for
asymptotic inference. The bootstrap algorithm is simple to implement in
practice by using a weighted quantile regression estimation for fixed effects
panel data. We provide results under conditions that allow for temporal
dependence of observations within individuals, thus encompassing a large class
of possible empirical applications. Monte Carlo simulations provide numerical
evidence that the proposed bootstrap methods have correct finite-sample properties.
Finally, we provide an empirical illustration using the environmental Kuznets
curve.",Bootstrap inference for panel data quantile regression,2021-11-05 20:23:10,"Antonio F. Galvao, Thomas Parker, Zhijie Xiao","http://arxiv.org/abs/2111.03626v1, http://arxiv.org/pdf/2111.03626v1",econ.EM
29463,em,"We propose pair copula constructed point-optimal sign tests in the context of
linear and nonlinear predictive regressions with endogenous, persistent
regressors, and disturbances exhibiting serial (nonlinear) dependence. The
proposed approach entails considering the entire dependence structure of the
signs to capture the serial dependence, and building feasible test statistics
based on pair copula constructions of the sign process. The tests are exact and
valid in the presence of heavy tailed and nonstandard errors, as well as
heterogeneous and persistent volatility. Furthermore, they may be inverted to
build confidence regions for the parameters of the regression function.
Finally, we adopt an adaptive approach based on the split-sample technique to
maximize the power of the test by finding an appropriate alternative
hypothesis. In a Monte Carlo study, we assess the performance of the proposed
""quasi""-point-optimal sign tests based on pair copula constructions by
comparing their size and power to those of certain existing tests that are
intended to be robust against heteroskedasticity. The simulation results
confirm the superiority of our procedures over existing popular tests.",Pair copula constructions of point-optimal sign-based tests for predictive linear and nonlinear regressions,2021-11-09 05:54:44,Kaveh Salehzadeh Nobari,"http://arxiv.org/abs/2111.04919v1, http://arxiv.org/pdf/2111.04919v1",econ.EM
29464,em,"In program evaluations, units can often anticipate the implementation of a
new policy before it occurs. Such anticipatory behavior can lead to units'
outcomes becoming dependent on their future treatment assignments. In this
paper, I employ a potential-outcomes framework to analyze the treatment effect
with anticipation. I start with a classical difference-in-differences model
with two time periods and provide identified sets with easy-to-implement
estimation and inference strategies for causal parameters. Empirical
applications and generalizations are provided. I illustrate my results by
analyzing the effect of an early retirement incentive program for teachers,
which the target units were likely to anticipate, on student achievement. The
empirical results show that the effect can be overestimated by up to 30\% in the
worst case and demonstrate the potential pitfalls of failing to consider
anticipation in policy evaluation.",Bounds for Treatment Effects in the Presence of Anticipatory Behavior,2021-11-12 09:09:59,Aibo Gong,"http://arxiv.org/abs/2111.06573v2, http://arxiv.org/pdf/2111.06573v2",econ.EM
29465,em,"In many applications of regression discontinuity designs, the running
variable used by the administrator to assign treatment is only observed with
error. We show that, provided the observed running variable (i) correctly
classifies the treatment assignment, and (ii) affects the conditional means of
the potential outcomes smoothly, ignoring the measurement error nonetheless
yields an estimate with a causal interpretation: the average treatment effect
for units whose observed running variable equals the cutoff. We show that,
possibly after doughnut trimming, these assumptions accommodate a variety of
settings where support of the measurement error is not too wide. We propose to
conduct inference using bias-aware methods, which remain valid even when
discreteness or irregular support in the observed running variable may lead to
partial identification. We illustrate the results for both sharp and fuzzy
designs in an empirical application.",When Can We Ignore Measurement Error in the Running Variable?,2021-11-14 19:58:17,"Yingying Dong, Michal Kolesár","http://dx.doi.org/10.1002/JAE.2974, http://arxiv.org/abs/2111.07388v4, http://arxiv.org/pdf/2111.07388v4",econ.EM
29466,em,"We propose a dynamic network quantile regression model to investigate the
quantile connectedness using predetermined network information. We extend the
existing network quantile autoregression model of Zhu et al. (2019b) by
explicitly allowing the contemporaneous network effects and controlling for the
common factors across quantiles. To cope with the endogeneity issue due to
simultaneous network spillovers, we adopt the instrumental variable quantile
regression (IVQR) estimation and derive the consistency and asymptotic
normality of the IVQR estimator using the near epoch dependence property of the
network process. Via Monte Carlo simulations, we confirm the satisfactory
performance of the IVQR estimator across different quantiles under the
different network structures. Finally, we demonstrate the usefulness of our
proposed approach with an application to the dataset on the stocks traded in
NYSE and NASDAQ in 2016.",Dynamic Network Quantile Regression Model,2021-11-15 12:40:00,"Xiu Xu, Weining Wang, Yongcheol Shin, Chaowen Zheng","http://arxiv.org/abs/2111.07633v1, http://arxiv.org/pdf/2111.07633v1",econ.EM
29467,em,"We propose a method to remedy finite sample coverage problems and improve
upon the efficiency of commonly employed procedures for the construction of
nonparametric confidence intervals in regression kink designs. The proposed
interval is centered at the half-length optimal, numerically obtained linear
minimax estimator over distributions with Lipschitz constrained conditional
mean function. Its construction ensures excellent finite sample coverage and
length properties which are demonstrated in a simulation study and an empirical
illustration. Given the Lipschitz constant that governs how much curvature one
plausibly allows for, the procedure is fully data driven, computationally
inexpensive, incorporates shape constraints and is valid irrespective of the
distribution of the assignment variable.",Optimized Inference in Regression Kink Designs,2021-11-21 05:13:08,Majed Dodin,"http://arxiv.org/abs/2111.10713v1, http://arxiv.org/pdf/2111.10713v1",econ.EM
29468,em,"We study identification of dynamic discrete choice models with hyperbolic
discounting. We show that the standard discount factor, present bias factor,
and instantaneous utility functions for the sophisticated agent are
point-identified from observed conditional choice probabilities and transition
probabilities in a finite horizon model. The main idea to achieve
identification is to exploit variation in the observed conditional choice
probabilities over time. We present the estimation method and demonstrate the
good performance of the estimator by simulation.",Identifying Dynamic Discrete Choice Models with Hyperbolic Discounting,2021-11-21 06:05:46,Taiga Tsubota,"http://arxiv.org/abs/2111.10721v3, http://arxiv.org/pdf/2111.10721v3",econ.EM
29469,em,"This paper studies the problem of estimating individualized treatment rules
when treatment effects are partially identified, as it is often the case with
observational data. By drawing connections between the treatment assignment
problem and classical decision theory, we characterize several notions of
optimal treatment policies in the presence of partial identification. Our
unified framework allows us to incorporate user-defined constraints on the set of
allowable policies, such as restrictions for transparency or interpretability,
while also ensuring computational feasibility. We show how partial
identification leads to a new policy learning problem where the objective
function is directionally -- but not fully -- differentiable with respect to
the nuisance first-stage. We then propose an estimation procedure that ensures
Neyman-orthogonality with respect to the nuisance components and we provide
statistical guarantees that depend on the amount of concentration around the
points of non-differentiability in the data-generating-process. The proposed
methods are illustrated using data from the Job Training Partnership Act study.",Orthogonal Policy Learning Under Ambiguity,2021-11-22 00:50:42,Riccardo D'Adamo,"http://arxiv.org/abs/2111.10904v3, http://arxiv.org/pdf/2111.10904v3",econ.EM
29470,em,"This paper considers a model with general regressors and unobservable
factors. An estimator based on iterated principal components is proposed, which
is shown to be not only asymptotically normal and oracle efficient, but under
certain conditions also free of the otherwise so common asymptotic incidental
parameters bias. Interestingly, the conditions required to achieve unbiasedness
become weaker the stronger the trends in the factors, and if the trending is
strong enough unbiasedness comes at no cost at all. In particular, the approach
does not require any knowledge of how many factors there are, or whether they
are deterministic or stochastic. The order of integration of the factors is
also treated as unknown, as is the order of integration of the regressors,
which means that there is no need to pre-test for unit roots, or to decide on
which deterministic terms to include in the model.",Interactive Effects Panel Data Models with General Factors and Regressors,2021-11-22 23:06:26,"Bin Peng, Liangjun Su, Joakim Westerlund, Yanrong Yang","http://arxiv.org/abs/2111.11506v1, http://arxiv.org/pdf/2111.11506v1",econ.EM
29471,em,"We discuss estimation of the differentiated products demand system of Berry
et al (1995) (BLP) by maximum likelihood estimation (MLE). We derive the
maximum likelihood estimator in the case where prices are endogenously
generated by firms that set prices in Bertrand-Nash equilibrium. In Monte Carlo
simulations the MLE estimator outperforms the best-practice GMM estimator on
both bias and mean squared error when the model is correctly specified. This
remains true under some forms of misspecification. In our simulations, the
coverage of the ML estimator is close to its nominal level, whereas the GMM
estimator tends to under-cover. We conclude the paper by estimating BLP on the
car data used in the original Berry et al (1995) paper, obtaining similar
estimates with considerably tighter standard errors.",Maximum Likelihood Estimation of Differentiated Products Demand Systems,2021-11-24 13:31:09,"Greg Lewis, Bora Ozaltun, Georgios Zervas","http://arxiv.org/abs/2111.12397v1, http://arxiv.org/pdf/2111.12397v1",econ.EM
29472,em,"Difference in differences (DD) is widely used to find policy/treatment
effects with observational data, but applying DD to limited dependent variables
(LDV's) Y has been problematic. This paper addresses how to apply DD and
related approaches (such as ""ratio in ratios"" or ""ratio in odds ratios"") to
binary, count, fractional, multinomial or zero-censored Y under the unifying
framework of `generalized linear models with link functions'. We evaluate DD
and the related approaches with simulation and empirical studies, and recommend
'Poisson Quasi-MLE' for non-negative (such as count or zero-censored) Y and
(multinomial) logit MLE for binary, fractional or multinomial Y.",Difference in Differences and Ratio in Ratios for Limited Dependent Variables,2021-11-25 10:12:22,"Myoung-jae Lee, Sanghyeok Lee","http://arxiv.org/abs/2111.12948v2, http://arxiv.org/pdf/2111.12948v2",econ.EM
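A hedged illustration of the 'Poisson Quasi-MLE' recommendation above for a non-negative outcome in a simple two-group, two-period DiD, using a Poisson GLM (log link) with robust standard errors; exp(coefficient on treat:post) - 1 gives the proportional ("ratio in ratios") effect. Simulated data, illustrative only.

```python
# Poisson quasi-MLE DiD for a count outcome with robust (quasi-likelihood) SEs.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2_000
treat = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
mu = np.exp(1.0 + 0.3 * treat + 0.2 * post + np.log(1.2) * treat * post)
y = rng.poisson(mu)                     # count outcome, true DiD effect = +20%
df = pd.DataFrame({"y": y, "treat": treat, "post": post})

res = smf.glm("y ~ treat * post", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(res.params)
print("Proportional DiD effect:", np.exp(res.params["treat:post"]) - 1)  # ~0.2
```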
29473,em,"The problem of demand inversion - a crucial step in the estimation of random
utility discrete-choice models - is equivalent to the determination of stable
outcomes in two-sided matching models. This equivalence applies to random
utility models that are not necessarily additive, smooth, nor even invertible.
Based on this equivalence, algorithms for the determination of stable matchings
provide effective computational methods for estimating these models. For
non-invertible models, the identified set of utility vectors is a lattice, and
the matching algorithms recover sharp upper and lower bounds on the utilities.
Our matching approach facilitates estimation of models that were previously
difficult to estimate, such as the pure characteristics model. An empirical
application to voting data from the 1999 European Parliament elections
illustrates the good performance of our matching-based demand inversion
algorithms in practice.",Yogurts Choose Consumers? Estimation of Random-Utility Models via Two-Sided Matching,2021-11-26 23:44:55,"Odran Bonnet, Alfred Galichon, Yu-Wei Hsieh, Keith O'Hara, Matt Shum","http://arxiv.org/abs/2111.13744v1, http://arxiv.org/pdf/2111.13744v1",econ.EM
29474,em,"This paper develops permutation versions of identification-robust tests in
linear instrumental variables (IV) regression. Unlike the existing
randomization and rank-based tests in which independence between the
instruments and the error terms is assumed, the permutation Anderson-Rubin
(AR), Lagrange Multiplier (LM) and Conditional Likelihood Ratio (CLR) tests are
asymptotically similar and robust to conditional heteroskedasticity under
the standard exclusion restriction, i.e., the orthogonality between the instruments
and the error terms. Moreover, when the instruments are independent of the
structural error term, the permutation AR tests are exact, hence robust to
heavy tails. As such, these tests share the strengths of the rank-based tests
and the wild bootstrap AR tests. Numerical illustrations corroborate the
theoretical results.",Robust Permutation Tests in Linear Instrumental Variables Regression,2021-11-27 02:46:22,Purevdorj Tuvaandorj,"http://arxiv.org/abs/2111.13774v3, http://arxiv.org/pdf/2111.13774v3",econ.EM
29475,em,"I examine the common problem of multiple missingness on both the endogenous
treatment and outcome variables. Two types of dependence assumptions for
missing mechanisms are proposed for identification, based on which a two-step
AIPW GMM estimator is proposed. This estimator is unbiased and more efficient
than the previously used estimation methods. Statistical properties are
discussed case by case. This method is applied to the Oregon Health Insurance
Experiment and shows the significant effects of enrolling in the Oregon Health
Plan on improving health-related outcomes and reducing out-of-pocket costs for
medical care. There is evidence that simply dropping the incomplete data
creates downward biases for some of the chosen outcome variables. Moreover, the
estimator proposed in this paper reduced standard errors by 6-24% of the
estimated effects of the Oregon Health Plan.",A GMM Approach for Non-monotone Missingness on Both Treatment and Outcome Variables,2022-01-04 10:07:53,Shenshen Yang,"http://arxiv.org/abs/2201.01010v1, http://arxiv.org/pdf/2201.01010v1",econ.EM
29476,em,"This paper studies the unconditional effects of a general policy
intervention, which includes location-scale shifts and simultaneous shifts as
special cases. The location-scale shift is intended to study a counterfactual
policy aimed at changing not only the mean or location of a covariate but also
its dispersion or scale. The simultaneous shift refers to the situation where
shifts in two or more covariates take place simultaneously. For example, a
shift in one covariate is compensated at a certain rate by a shift in another
covariate. Not accounting for these possible scale or simultaneous shifts will
result in an incorrect assessment of the potential policy effects on an outcome
variable of interest. The unconditional policy parameters are estimated with
simple semiparametric estimators, for which asymptotic properties are studied.
Monte Carlo simulations are implemented to study their finite sample
performances. The proposed approach is applied to a Mincer equation to study
the effects of changing years of education on wages and to study the effect of
smoking during pregnancy on birth weight.",Unconditional Effects of General Policy Interventions,2022-01-07 04:56:49,"Julian Martinez-Iriarte, Gabriel Montes-Rojas, Yixiao Sun","http://arxiv.org/abs/2201.02292v3, http://arxiv.org/pdf/2201.02292v3",econ.EM
29477,em,"Here, we use Machine Learning (ML) algorithms to update and improve the
efficiency of fitting GARCH model parameters to empirical data. We employ an
Artificial Neural Network (ANN) to predict the parameters of these models. We
present a fitting algorithm for GARCH-normal(1,1) models to predict one of the
model's parameters, $\alpha_1$, and then use the analytical expressions for the
fourth-order standardised moment, $\Gamma_4$, and the unconditional second-order
moment, $\sigma^2$, to fit the other two parameters, $\beta_1$ and $\alpha_0$,
respectively. The speed of fitting of the parameters and quick implementation
of this approach allow for real-time tracking of GARCH parameters. We further
show that different inputs to the ANN, namely higher-order standardised moments
and the autocovariance of the time series, can be used for fitting model parameters
using the ANN, but not always with the same level of accuracy.",A machine learning search for optimal GARCH parameters,2022-01-10 14:07:27,"Luke De Clerk, Sergey Savl'ev","http://arxiv.org/abs/2201.03286v1, http://arxiv.org/pdf/2201.03286v1",econ.EM
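A hedged sketch of the moment-inversion step described above: given alpha_1 (which the paper predicts with an ANN), the sample unconditional variance sigma2 and the sample kurtosis Gamma4, recover beta_1 and alpha_0 from the standard GARCH(1,1)-normal moment formulas sigma2 = alpha_0 / (1 - alpha_1 - beta_1) and Gamma4 = 3(1 - s^2) / (1 - s^2 - 2 alpha_1^2) with s = alpha_1 + beta_1. Illustrative only; it omits the ANN and any validity checks used in the paper.

```python
# Recover beta_1 and alpha_0 from alpha_1 and the unconditional moments of a
# GARCH(1,1) with normal innovations (moment-inversion sketch).
import numpy as np

def invert_garch_moments(alpha1, sigma2, gamma4):
    if gamma4 <= 3:
        raise ValueError("GARCH(1,1) with normal errors implies excess kurtosis.")
    s_sq = 1.0 - 2.0 * gamma4 * alpha1**2 / (gamma4 - 3.0)
    if s_sq <= 0:
        raise ValueError("No admissible persistence for these moments.")
    s = np.sqrt(s_sq)                  # persistence alpha_1 + beta_1
    beta1 = s - alpha1
    alpha0 = sigma2 * (1.0 - s)
    return alpha0, beta1

# Round-trip check with alpha0=0.05, alpha1=0.10, beta1=0.85:
a0, a1, b1 = 0.05, 0.10, 0.85
s = a1 + b1
sigma2 = a0 / (1 - s)
gamma4 = 3 * (1 - s**2) / (1 - s**2 - 2 * a1**2)
print(invert_garch_moments(a1, sigma2, gamma4))   # approximately (0.05, 0.85)
```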
29478,em,"In this paper, we propose a two-step procedure based on the group LASSO
estimator in combination with a backward elimination algorithm to detect
multiple structural breaks in linear regressions with multivariate responses.
Applying the two-step estimator, we jointly detect the number and location of
change points, and provide consistent estimates of the coefficients. Our
framework is flexible enough to allow for a mix of integrated and stationary
regressors, as well as deterministic terms. Using simulation experiments, we
show that the proposed two-step estimator performs competitively against the
likelihood-based approach (Qu and Perron, 2007; Li and Perron, 2017; Oka and
Perron, 2018) in finite samples. However, the two-step estimator is
computationally much more efficient. An economic application to the
identification of structural breaks in the term structure of interest rates
illustrates this methodology.",Detecting Multiple Structural Breaks in Systems of Linear Regression Equations with Integrated and Stationary Regressors,2022-01-14 16:01:58,Karsten Schweikert,"http://arxiv.org/abs/2201.05430v3, http://arxiv.org/pdf/2201.05430v3",econ.EM
29479,em,"This paper studies nonparametric identification in market level demand models
for differentiated products with heterogeneous consumers. We consider a general
class of models that allows for the individual specific coefficients to vary
continuously across the population and give conditions under which the density
of these coefficients, and hence also functionals such as welfare measures, is
identified. A key finding is that two leading models, the BLP-model (Berry,
Levinsohn, and Pakes, 1995) and the pure characteristics model (Berry and
Pakes, 2007), require considerably different conditions on the support of the
product characteristics.",Nonparametric Identification of Random Coefficients in Endogenous and Heterogeneous Aggregate Demand Models,2022-01-17 00:40:44,"Fabian Dunker, Stefan Hoderlein, Hiroaki Kaido","http://arxiv.org/abs/2201.06140v1, http://arxiv.org/pdf/2201.06140v1",econ.EM
29480,em,"In this paper, we introduce a flexible and widely applicable nonparametric
entropy-based testing procedure that can be used to assess the validity of
simple hypotheses about a specific parametric population distribution. The
testing methodology relies on the characteristic function of the population
probability distribution being tested and is attractive in that, regardless of
the null hypothesis being tested, it provides a unified framework for
conducting such tests. The testing procedure is also computationally tractable
and relatively straightforward to implement. In contrast to some alternative
test statistics, the proposed entropy test is free from user-specified kernel
and bandwidth choices, idiosyncratic and complex regularity conditions, and/or
choices of evaluation grids. Several simulation exercises were performed to
document the empirical performance of our proposed test, including a regression
example that is illustrative of how, in some contexts, the approach can be
applied to composite hypothesis-testing situations via data transformations.
Overall, the testing procedure shows notable promise, exhibiting appreciable
increasing power as sample size increases for a number of alternative
distributions when contrasted with hypothesized null distributions. Possible
general extensions of the approach to composite hypothesis-testing contexts,
and directions for future work are also discussed.",An Entropy-Based Approach for Nonparametrically Testing Simple Probability Distribution Hypotheses,2022-01-18 01:29:04,"Ron Mittelhammer, George Judge, Miguel Henry","http://dx.doi.org/10.3390/econometrics10010005, http://arxiv.org/abs/2201.06647v1, http://arxiv.org/pdf/2201.06647v1",econ.EM
29481,em,"Is homophily in social and economic networks driven by a taste for
homogeneity (preferences) or by a higher probability of meeting individuals
with similar attributes (opportunity)? This paper studies identification and
estimation of an iterative network game that distinguishes between these two
mechanisms. Our approach enables us to assess the counterfactual effects of
changing the meeting protocol between agents. As an application, we study the
role of preferences and meetings in shaping classroom friendship networks in
Brazil. In a network structure in which homophily due to preferences is
stronger than homophily due to meeting opportunities, tracking students may
improve welfare. Still, the relative benefit of this policy diminishes over the
school year.",Homophily in preferences or meetings? Identifying and estimating an iterative network formation model,2022-01-18 04:40:53,"Luis Alvarez, Cristine Pinto, Vladimir Ponczek","http://arxiv.org/abs/2201.06694v3, http://arxiv.org/pdf/2201.06694v3",econ.EM
29482,em,"We propose difference-in-differences estimators for continuous treatments
with heterogeneous effects. We assume that between consecutive periods, the
treatment of some units, the switchers, changes, while the treatment of other
units does not change. We show that under a parallel trends assumption, an
unweighted and a weighted average of the slopes of switchers' potential
outcomes can be estimated. While the former parameter may be more intuitive,
the latter can be used for cost-benefit analysis, and it can often be estimated
more precisely. We generalize our estimators to the instrumental-variable case.
We use our results to estimate the price-elasticity of gasoline consumption.",Difference-in-Differences Estimators for Treatments Continuously Distributed at Every Period,2022-01-18 15:03:10,"Clément de Chaisemartin, Xavier D'Haultfoeuille, Félix Pasquier, Gonzalo Vazquez-Bare","http://arxiv.org/abs/2201.06898v3, http://arxiv.org/pdf/2201.06898v3",econ.EM
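A stylized two-period numerical sketch in the spirit of the switcher-based estimands described above: the average outcome change of non-switchers proxies the counterfactual trend, and switchers' outcome changes net of that trend are scaled by their treatment changes (unweighted, or weighted by the size of the treatment change). Illustrative only; it is not the paper's general estimator or its instrumental-variable extension.

```python
# Stylized switcher-slope calculations for a continuous treatment over two periods.
import numpy as np

def switcher_slopes(y1, y2, d1, d2):
    dy, dd = y2 - y1, d2 - d1
    switch = dd != 0
    trend = dy[~switch].mean()                         # stayers' average change
    unweighted = np.mean((dy[switch] - trend) / dd[switch])
    weighted = (dy[switch] - trend).sum() / dd[switch].sum()
    return unweighted, weighted

# Simulated example with heterogeneous slopes (true average slope = 2.0).
rng = np.random.default_rng(0)
n = 10_000
d1 = rng.uniform(0, 1, n)
d2 = np.where(rng.random(n) < 0.5, d1, d1 + rng.uniform(0.1, 0.5, n))  # half switch
slope = rng.normal(2.0, 0.5, n)
y1 = slope * d1 + rng.normal(size=n)
y2 = slope * d2 + 0.3 + rng.normal(size=n)             # common trend of 0.3
print(switcher_slopes(y1, y2, d1, d2))                  # both roughly 2.0
```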
29483,em,"Despite their popularity, randomized controlled trials (RCTs) are not always
available for the purposes of advertising measurement. Non-experimental data is
thus required. However, Facebook and other ad platforms use complex and
evolving processes to select ads for users. Therefore, successful
non-experimental approaches need to ""undo"" this selection. We analyze 663
large-scale experiments at Facebook to investigate whether this is possible
with the data typically logged at large ad platforms. With access to over 5,000
user-level features, these data are richer than what most advertisers or their
measurement partners can access. We investigate how accurately two
non-experimental methods -- double/debiased machine learning (DML) and
stratified propensity score matching (SPSM) -- can recover the experimental
effects. Although DML performs better than SPSM, neither method performs well,
even using flexible deep learning models to implement the propensity and
outcome models. The median RCT lifts are 29%, 18%, and 5% for the upper,
middle, and lower funnel outcomes, respectively. Using DML (SPSM), the median
lift by funnel is 83% (173%), 58% (176%), and 24% (64%), respectively,
indicating significant relative measurement errors. We further characterize the
circumstances under which each method performs comparatively better. Overall,
despite having access to large-scale experiments and rich user-level data, we
are unable to reliably estimate an ad campaign's causal effect.",Close Enough? A Large-Scale Exploration of Non-Experimental Approaches to Advertising Measurement,2022-01-18 18:31:18,"Brett R. Gordon, Robert Moakler, Florian Zettelmeyer","http://arxiv.org/abs/2201.07055v2, http://arxiv.org/pdf/2201.07055v2",econ.EM
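A hedged sketch of the generic double/debiased machine learning (DML) partialling-out estimator referred to above, with two-fold cross-fitting and random forests as nuisance learners. It illustrates the method in a partially linear model only and is far simpler than the deep-learning implementation the paper evaluates; all names are illustrative.

```python
# Cross-fitted residual-on-residual (partialling-out) DML sketch.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import KFold

def dml_plr(X, d, y, n_splits=2, seed=0):
    """Partially linear model y = theta*d + g(X) + e.
    Residualise y on E[y|X] and d on E[d|X] with cross-fitting, then regress
    the outcome residual on the treatment residual."""
    d_res = np.zeros_like(y, dtype=float)
    y_res = np.zeros_like(y, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        m = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X[train], d[train])
        ell = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X[train], y[train])
        d_res[test] = d[test] - m.predict_proba(X[test])[:, 1]   # treatment residual
        y_res[test] = y[test] - ell.predict(X[test])              # outcome residual
    return np.sum(d_res * y_res) / np.sum(d_res * d_res)

# Example with simulated confounding (true effect 0.5).
rng = np.random.default_rng(0)
n, p = 4_000, 10
X = rng.normal(size=(n, p))
d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 0.5 * d + X[:, 0] + rng.normal(size=n)
print(dml_plr(X, d, y))    # roughly 0.5
```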
29484,em,"Many economic variables feature changes in their conditional mean and
volatility, and Time Varying Vector Autoregressive Models are often used to
handle such complexity in the data. Unfortunately, when the number of series
grows, they present increasing estimation and interpretation problems. This
paper tries to address this issue by proposing a new Multivariate Autoregressive
Index model that features time-varying means and volatility. Technically, we
develop a new estimation methodology that mixes switching algorithms with the
forgetting-factor strategy of Koop and Korobilis (2012). This substantially
reduces the computational burden and allows one to select or weight, in real time,
the number of common components and other features of the data using Dynamic
Model Selection or Dynamic Model Averaging without further computational cost.
Using USA macroeconomic data, we provide a structural analysis and a
forecasting exercise that demonstrates the feasibility and usefulness of this
new model.
  Keywords: Large datasets, Multivariate Autoregressive Index models,
Stochastic volatility, Bayesian VARs.",The Time-Varying Multivariate Autoregressive Index Model,2022-01-18 18:49:47,"G. Cubadda, S. Grassi, B. Guardabascio","http://arxiv.org/abs/2201.07069v1, http://arxiv.org/pdf/2201.07069v1",econ.EM
29485,em,"Price discrimination is a practice where firms utilize varying sensitivities
to prices among consumers to increase profits. The welfare effects of price
discrimination are not agreed on among economists, but identification of such
actions may contribute to our standing of firms' pricing behaviors. In this
letter, I use econometric tools to analyze whether Apple Inc., one of the
largest companies in the world, is practicing price discrimination on the basis
of socio-economic and geographical factors. My results indicate that iPhones
are significantly (p $<$ 0.01) more expensive in markets where competition is
weak or where Apple has a strong market presence. Furthermore, iPhone prices
are likely to increase (p $<$ 0.01) in developing countries/regions or markets
with high income inequality.",Identification of Direct Socio-Geographical Price Discrimination: An Empirical Study on iPhones,2022-01-20 02:01:17,Davidson Cheng,"http://arxiv.org/abs/2201.07903v1, http://arxiv.org/pdf/2201.07903v1",econ.EM
29486,em,"Nonparametric random coefficient (RC)-density estimation has mostly been
considered in the marginal density case under strict independence of RCs and
covariates. This paper deals with the estimation of RC-densities conditional on
a (large-dimensional) set of control variables using machine learning
techniques. The conditional RC-density allows to disentangle observable from
unobservable heterogeneity in partial effects of continuous treatments adding
to a growing literature on heterogeneous effect estimation using machine
learning. It is also informative of the conditional potential outcome
distribution. This paper proposes a two-stage sieve estimation procedure. First,
a closed-form sieve approximation of the conditional RC density is derived
where each sieve coefficient can be expressed as conditional expectation
function varying with controls. Second, sieve coefficients are estimated with
generic machine learning procedures and under appropriate sample splitting
rules. The $L_2$-convergence rate of the conditional RC-density estimator is
derived. The rate is slower by a factor than typical rates of mean-regression
machine learning estimators, which is due to the ill-posedness of the RC density
estimation problem. The performance and applicability of the estimator is
illustrated using random forest algorithms over a range of Monte Carlo
simulations and with real data from the SOEP-IS. Here behavioral heterogeneity
in an economic experiment on portfolio choice is studied. The method reveals
two types of behavior in the population, one type complying with economic
theory and one not. The assignment to types appears largely based on
unobservables not available in the data.",Estimation of Conditional Random Coefficient Models using Machine Learning Techniques,2022-01-20 21:50:52,Stephan Martin,"http://arxiv.org/abs/2201.08366v1, http://arxiv.org/pdf/2201.08366v1",econ.EM
29487,em,"Although multivariate stochastic volatility models usually produce more
accurate forecasts compared to the MGARCH models, their estimation techniques
such as Bayesian MCMC typically suffer from the curse of dimensionality. We
propose a fast and efficient estimation approach for MSV based on a penalized
OLS framework. Specifying the MSV model as a multivariate state space model, we
carry out a two-step penalized procedure. We provide the asymptotic properties
of the two-step estimator and the oracle property of the first-step estimator
when the number of parameters diverges. The performances of our method are
illustrated through simulations and financial data.",High-Dimensional Sparse Multivariate Stochastic Volatility Models,2022-01-21 11:06:10,"Benjamin Poignard, Manabu Asai","http://arxiv.org/abs/2201.08584v2, http://arxiv.org/pdf/2201.08584v2",econ.EM
29488,em,"The maximum-likelihood estimator of nonlinear panel data models with fixed
effects is consistent but asymptotically biased under rectangular-array
asymptotics. The literature has thus far concentrated its effort on devising
methods to correct the maximum-likelihood estimator for its bias as a means to
salvage standard inferential procedures. Instead, we show that the parametric
bootstrap replicates the distribution of the (uncorrected) maximum-likelihood
estimator in large samples. This justifies the use of confidence sets
constructed via standard bootstrap percentile methods. No adjustment for the
presence of bias needs to be made.",Bootstrap inference for fixed-effect models,2022-01-26 22:40:16,"Ayden Higgins, Koen Jochmans","http://arxiv.org/abs/2201.11156v1, http://arxiv.org/pdf/2201.11156v1",econ.EM
29489,em,"We propose improved standard errors and an asymptotic distribution theory for
two-way clustered panels. Our proposed estimator and theory allow for arbitrary
serial dependence in the common time effects, which is excluded by existing
two-way methods, including the popular two-way cluster standard errors of
Cameron, Gelbach, and Miller (2011) and the cluster bootstrap of Menzel (2021).
Our asymptotic distribution theory is the first which allows for this level of
inter-dependence among the observations. Under weak regularity conditions, we
demonstrate that the least squares estimator is asymptotically normal, our
proposed variance estimator is consistent, and t-ratios are asymptotically
standard normal, permitting conventional inference. We present simulation
evidence that confidence intervals constructed with our proposed standard
errors obtain superior coverage performance relative to existing methods. We
illustrate the relevance of the proposed method in an empirical application to
a standard Fama-French three-factor regression.",Standard errors for two-way clustering with serially correlated time effects,2022-01-27 06:49:07,"Harold D Chiang, Bruce E Hansen, Yuya Sasaki","http://arxiv.org/abs/2201.11304v4, http://arxiv.org/pdf/2201.11304v4",econ.EM
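For context, a hedged sketch of the existing two-way cluster-robust variance of Cameron, Gelbach, and Miller (2011) that the abstract above improves upon: it is formed as V_firm + V_time - V_firm,time and rules out serially correlated time effects. This is not the paper's proposed estimator; the OLS setup and names are illustrative.

```python
# Standard (CGM 2011) two-way cluster-robust covariance for OLS, as background.
import numpy as np

def cluster_cov(X, u, groups):
    """One-way cluster-robust sandwich covariance of the OLS coefficients."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg, ug = X[groups == g], u[groups == g]
        s = Xg.T @ ug
        meat += np.outer(s, s)
    return bread @ meat @ bread

def two_way_cluster_se(X, y, firm, time):
    """OLS with the familiar two-way clustered standard errors."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    inter = np.array([f"{f}_{t}" for f, t in zip(firm, time)])  # intersection clusters
    V = (cluster_cov(X, u, firm) + cluster_cov(X, u, time)
         - cluster_cov(X, u, inter))
    return beta, np.sqrt(np.diag(V))

# Usage (assuming arrays X, y and cluster labels firm, time of equal length):
# beta_hat, se_hat = two_way_cluster_se(X, y, firm, time)
```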
29490,em,"Empirical researchers are often interested in not only whether a treatment
affects an outcome of interest, but also how the treatment effect arises.
Causal mediation analysis provides a formal framework to identify causal
mechanisms through which a treatment affects an outcome. The most popular
identification strategy relies on so-called sequential ignorability (SI)
assumption which requires that there is no unobserved confounder that lies in
the causal paths between the treatment and the outcome. Despite its popularity,
this assumption is deemed too strong in many settings as it excludes the
existence of unobserved confounders. This limitation has inspired recent
literature to consider an alternative identification strategy based on an
instrumental variable (IV). This paper discusses the identification of causal
mediation effects in a setting with a binary treatment and a binary
instrumental variable, both assumed to be random. We show that while IV
methods allow for the possible existence of unobserved confounders, additional
monotonicity assumptions are required unless the strong constant effect is
assumed. Furthermore, even when such monotonicity assumptions are satisfied, IV
estimands are not necessarily equivalent to target parameters.",On the Use of Instrumental Variables in Mediation Analysis,2022-01-30 11:45:42,Bora Kim,"http://arxiv.org/abs/2201.12752v1, http://arxiv.org/pdf/2201.12752v1",econ.EM
29491,em,"We revisit classical asymptotics when testing for a structural break in
linear regression models by obtaining the limit theory of residual-based and
Wald-type processes. First, we establish the Brownian bridge limiting
distribution of these test statistics. Second, we study the asymptotic
behaviour of the partial-sum processes in nonstationary (linear) time series
regression models. Although the comparison of these two different modelling
environments is made from the perspective of the partial-sum processes, it
emphasizes that the presence of nuisance parameters can change
the asymptotic behaviour of the functionals under consideration. Simulation
experiments verify size distortions when testing for a break in nonstationary
time series regressions, which indicates that the Brownian bridge limit cannot
provide a suitable asymptotic approximation in this case. Further research is
required to establish the cause of size distortions under the null hypothesis
of parameter stability.",Partial Sum Processes of Residual-Based and Wald-type Break-Point Statistics in Time Series Regression Models,2022-02-01 02:10:30,Christis Katsouris,"http://arxiv.org/abs/2202.00141v2, http://arxiv.org/pdf/2202.00141v2",econ.EM
29492,em,"We propose a new parametrization for the estimation and identification of the
impulse-response functions (IRFs) of dynamic factor models (DFMs). The
theoretical contribution of this paper concerns the problem of observational
equivalence between different IRFs, which implies non-identification of the IRF
parameters without further restrictions. We show how the previously proposed
minimal identification conditions are nested in the new framework and can be
further augmented with overidentifying restrictions leading to efficiency
gains. The current standard practice for the IRF estimation of DFMs is based on
principal components, compared to which the new parametrization is less
restrictive and allows for modelling richer dynamics. As the empirical
contribution of the paper, we develop an estimation method based on the EM
algorithm, which incorporates the proposed identification restrictions. In the
empirical application, we use a standard high-dimensional macroeconomic dataset
to estimate the effects of a monetary policy shock. We estimate a strong
reaction of the macroeconomic variables, while the benchmark models appear to
give qualitatively counterintuitive results. The estimation methods are
implemented in the accompanying R package.",Estimation of Impulse-Response Functions with Dynamic Factor Models: A New Parametrization,2022-02-01 13:16:59,"Juho Koistinen, Bernd Funovits","http://arxiv.org/abs/2202.00310v2, http://arxiv.org/pdf/2202.00310v2",econ.EM
29493,em,"Standard methods, such as sequential procedures based on Johansen's
(pseudo-)likelihood ratio (PLR) test, for determining the co-integration rank
of a vector autoregressive (VAR) system of variables integrated of order one
can be significantly affected, even asymptotically, by unconditional
heteroskedasticity (non-stationary volatility) in the data. Known solutions to
this problem include wild bootstrap implementations of the PLR test or the use
of an information criterion, such as the BIC, to select the co-integration
rank. Although asymptotically valid in the presence of heteroskedasticity,
these methods can display very low finite sample power under some patterns of
non-stationary volatility. In particular, they do not exploit potential
efficiency gains that could be realised in the presence of non-stationary
volatility by using adaptive inference methods. Under the assumption of a known
autoregressive lag length, Boswijk and Zu (2022) develop adaptive PLR test
based methods using a non-parametric estimate of the covariance matrix
process. It is well-known, however, that selecting an incorrect lag length can
significantly impact the efficacy of both information criteria and bootstrap
PLR tests to determine co-integration rank in finite samples. We show that
adaptive information criteria-based approaches can be used to estimate the
autoregressive lag order to use in connection with bootstrap adaptive PLR
tests, or to jointly determine the co-integration rank and the VAR lag length
and that in both cases they are weakly consistent for these parameters in the
presence of non-stationary volatility provided standard conditions hold on the
penalty term. Monte Carlo simulations are used to demonstrate the potential
gains from using adaptive methods and an empirical application to the U.S. term
structure is provided.",Adaptive information-based methods for determining the co-integration rank in heteroskedastic VAR models,2022-02-05 14:00:47,"H. Peter Boswijk, Giuseppe Cavaliere, Luca De Angelis, A. M. Robert Taylor","http://arxiv.org/abs/2202.02532v1, http://arxiv.org/pdf/2202.02532v1",econ.EM
29494,em,"In this paper, we study difference-in-differences identification and
estimation strategies where the parallel trends assumption holds after
conditioning on time-varying covariates and/or time-invariant covariates. Our
first main contribution is to point out a number of weaknesses of commonly used
two-way fixed effects (TWFE) regressions in this context. In addition to issues
related to multiple periods and variation in treatment timing that have been
emphasized in the literature, we show that, even in the case with only two time
periods, TWFE regressions are not generally robust to (i) paths of untreated
potential outcomes depending on the level of time-varying covariates (as
opposed to only the change in the covariates over time), (ii) paths of
untreated potential outcomes depending on time-invariant covariates, and (iii)
violations of linearity conditions for outcomes over time and/or the propensity
score. Even in cases where none of the previous three issues hold, we show that
TWFE regressions can suffer from negative weighting and weight-reversal issues.
Thus, TWFE regressions can deliver misleading estimates of causal effect
parameters in a number of empirically relevant cases. Second, we extend these
arguments to the case of multiple periods and variation in treatment timing.
Third, we provide simple diagnostics for assessing the extent of
misspecification bias arising due to TWFE regressions. Finally, we propose
alternative (and simple) estimation strategies that can circumvent these issues
with two-way fixed effects regressions.",Difference-in-Differences with Time-Varying Covariates in the Parallel Trends Assumption,2022-02-07 04:50:18,"Carolina Caetano, Brantly Callaway","http://arxiv.org/abs/2202.02903v2, http://arxiv.org/pdf/2202.02903v2",econ.EM
29495,em,"Since the Great Financial Crisis (GFC), the use of stress tests as a tool for
assessing the resilience of financial institutions to adverse financial and
economic developments has increased significantly. One key part in such
exercises is the translation of macroeconomic variables into default
probabilities for credit risk by using macrofinancial linkage models. A key
requirement for such models is that they should be able to properly detect
signals from a wide array of macroeconomic variables in combination with a
mostly short data sample. The aim of this paper is to compare a great number of
different regression models to find the best performing credit risk model. We
set up an estimation framework that allows us to systematically estimate and
evaluate a large set of models within the same environment. Our results
indicate that there are indeed better performing models than the current
state-of-the-art model. Moreover, our comparison sheds light on other potential
credit risk models, specifically highlighting the advantages of machine
learning models and forecast combinations.",Predicting Default Probabilities for Stress Tests: A Comparison of Models,2022-02-07 15:45:03,Martin Guth,"http://arxiv.org/abs/2202.03110v1, http://arxiv.org/pdf/2202.03110v1",econ.EM
29496,em,"In dynamic discrete choice (DDC) analysis, it is common to use mixture models
to control for unobserved heterogeneity. However, consistent estimation
typically requires both restrictions on the support of unobserved heterogeneity
and a high-level injectivity condition that is difficult to verify. This paper
provides primitive conditions for point identification of a broad class of DDC
models with multivariate continuous permanent unobserved heterogeneity. The
results apply to both finite- and infinite-horizon DDC models, do not require a
full support assumption or a long panel, and place no parametric restriction
on the distribution of unobserved heterogeneity. In addition, I propose a
seminonparametric estimator that is computationally attractive and can be
implemented using familiar parametric methods.",Continuous permanent unobserved heterogeneity in dynamic discrete choice models,2022-02-08 19:12:19,Jackson Bunting,"http://arxiv.org/abs/2202.03960v2, http://arxiv.org/pdf/2202.03960v2",econ.EM
29497,em,"We consider the estimation of a dynamic distribution regression panel data
model with heterogeneous coefficients across units. The objects of primary
interest are specific functionals of these coefficients. These include
predicted actual and stationary distributions of the outcome variable and
quantile treatment effects. Coefficients and their functionals are estimated
via fixed effect methods. We investigate how these functionals vary in response
to changes in initial conditions or covariate values. We also identify a
uniformity issue related to the robustness of inference to the unknown degree
of heterogeneity, and propose a cross-sectional bootstrap method for uniformly
valid inference on function-valued objects. Employing PSID annual labor income
data we illustrate some important empirical issues we can address. We first
quantify the impact of a negative labor income shock on the distribution of
future labor income. We also examine the impact on the distribution of labor
income from increasing the education level of a chosen group of workers.
Finally, we demonstrate the existence of heterogeneity in income mobility, and
how this leads to substantial variation in individuals' chances of being
trapped in poverty. We also provide simulation evidence confirming that our
procedures work well.","Dynamic Heterogeneous Distribution Regression Panel Models, with an Application to Labor Income Processes",2022-02-09 00:30:54,"Ivan Fernandez-Val, Wayne Yuan Gao, Yuan Liao, Francis Vella","http://arxiv.org/abs/2202.04154v3, http://arxiv.org/pdf/2202.04154v3",econ.EM
29498,em,"The von Mises-Fisher family is a parametric family of distributions on the
surface of the unit ball, summarised by a concentration parameter and a mean
direction. As a quasi-Bayesian prior, the von Mises-Fisher distribution is a
convenient and parsimonious choice when parameter spaces are isomorphic to the
hypersphere (e.g., maximum score estimation in semi-parametric discrete choice,
estimation of single-index treatment assignment rules via empirical welfare
maximisation, under-identifying linear simultaneous equation models). Despite a
long history of application, measures of statistical divergence have not been
analytically characterised for von Mises-Fisher distributions. This paper
provides analytical expressions for the $f$-divergence of a von Mises-Fisher
distribution from another, distinct, von Mises-Fisher distribution in
$\mathbb{R}^p$ and the uniform distribution over the hypersphere. This paper
also collects several other results pertaining to the von Mises-Fisher family of
distributions, and characterises the limiting behaviour of the measures of
divergence that we consider.",von Mises-Fisher distributions and their statistical divergence,2022-02-10 20:50:21,"Toru Kitagawa, Jeff Rowley","http://arxiv.org/abs/2202.05192v2, http://arxiv.org/pdf/2202.05192v2",econ.EM
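For readers unfamiliar with the family, the standard von Mises-Fisher density on the unit sphere in $\mathbb{R}^p$ is reproduced below as general background (it is a textbook expression, not a result specific to the paper above); $I_{\nu}$ denotes the modified Bessel function of the first kind.

```latex
% Standard von Mises-Fisher density on the unit sphere in R^p,
% with mean direction mu (||mu|| = 1) and concentration kappa >= 0.
f(x;\mu,\kappa) = C_p(\kappa)\,\exp\!\left(\kappa\,\mu^{\top}x\right),
\qquad
C_p(\kappa) = \frac{\kappa^{p/2-1}}{(2\pi)^{p/2}\, I_{p/2-1}(\kappa)} .
```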
29499,em,"Modern macroeconometrics often relies on time series models for which it is
time-consuming to evaluate the likelihood function. We demonstrate how Bayesian
computations for such models can be drastically accelerated by reweighting and
mutating posterior draws from an approximating model that allows for fast
likelihood evaluations, into posterior draws from the model of interest, using
a sequential Monte Carlo (SMC) algorithm. We apply the technique to the
estimation of a vector autoregression with stochastic volatility and a
nonlinear dynamic stochastic general equilibrium model. The runtime reductions
we obtain range from 27% to 88%.",Sequential Monte Carlo With Model Tempering,2022-02-15 01:28:51,"Marko Mlikota, Frank Schorfheide","http://arxiv.org/abs/2202.07070v1, http://arxiv.org/pdf/2202.07070v1",econ.EM
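As a rough illustration of the reweighting step described above, the sketch below moves draws from an approximating posterior toward the target posterior via incremental importance weights and multinomial resampling. The functions `loglik_approx` and `loglik_target`, the tempering schedule, and the omission of the mutation step are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of one reweighting step in SMC model tempering (illustrative only).
# `loglik_approx` and `loglik_target` are hypothetical log-likelihood functions for
# the approximating model and the model of interest; `draws` are posterior draws
# obtained under the approximating model.
def reweight(draws, loglik_approx, loglik_target, phi_prev, phi_next, data):
    # Incremental importance weights: (L_target / L_approx)^(phi_next - phi_prev)
    log_w = (phi_next - phi_prev) * np.array(
        [loglik_target(th, data) - loglik_approx(th, data) for th in draws]
    )
    w = np.exp(log_w - log_w.max())          # stabilise before normalising
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)               # effective sample size diagnostic
    idx = np.random.choice(len(draws), size=len(draws), p=w)  # resample
    return [draws[i] for i in idx], ess
```

In a full SMC algorithm this step would be followed by a mutation (e.g., Metropolis-Hastings) move to rejuvenate the particle set.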
29500,em,"We propose a new approach to the semiparametric analysis of panel data binary
choice models with fixed effects and dynamics (lagged dependent variables). The
model we consider has the same random utility framework as in Honore and
Kyriazidou (2000). We demonstrate that, with additional serial dependence
conditions on the process of deterministic utility and tail restrictions on the
error distribution, the (point) identification of the model can proceed in two
steps, and only requires matching the value of an index function of explanatory
variables over time, as opposed to that of each explanatory variable. Our
identification approach motivates an easily implementable, two-step maximum
score (2SMS) procedure -- producing estimators whose rates of convergence, in
contrast to Honore and Kyriazidou's (2000) methods, are independent of the
model dimension. We then derive the asymptotic properties of the 2SMS procedure
and propose bootstrap-based distributional approximations for inference. Monte
Carlo evidence indicates that our procedure performs adequately in finite
samples.",Semiparametric Estimation of Dynamic Binary Choice Panel Data Models,2022-02-24 15:39:15,"Fu Ouyang, Thomas Tao Yang","http://arxiv.org/abs/2202.12062v3, http://arxiv.org/pdf/2202.12062v3",econ.EM
29501,em,"We consider the construction of confidence intervals for treatment effects
estimated using panel models with interactive fixed effects. We first use the
factor-based matrix completion technique proposed by Bai and Ng (2021) to
estimate the treatment effects, and then use a bootstrap method to construct
confidence intervals of the treatment effects for treated units at each
post-treatment period. Our construction of confidence intervals requires
neither specific distributional assumptions on the error terms nor a large
number of post-treatment periods. We also establish the validity of the
proposed bootstrap procedure, showing that these confidence intervals have
asymptotically correct
coverage probabilities. Simulation studies show that these confidence intervals
have satisfactory finite sample performances, and empirical applications using
classical datasets yield treatment effect estimates of similar magnitudes and
reliable confidence intervals.",Confidence Intervals of Treatment Effects in Panel Data Models with Interactive Fixed Effects,2022-02-24 16:02:59,"Xingyu Li, Yan Shen, Qiankun Zhou","http://arxiv.org/abs/2202.12078v1, http://arxiv.org/pdf/2202.12078v1",econ.EM
29502,em,"The multinomial probit model is often used to analyze choice behaviour.
However, estimation with existing Markov chain Monte Carlo (MCMC) methods is
computationally costly, which limits its applicability to large choice data
sets. This paper proposes a variational Bayes method that is accurate and fast,
even when a large number of choice alternatives and observations are
considered. Variational methods usually require an analytical expression for
the unnormalized posterior density and an adequate choice of variational
family. Both are challenging to specify in a multinomial probit, which has a
posterior that requires identifying restrictions and is augmented with a large
set of latent utilities. We employ a spherical transformation on the covariance
matrix of the latent utilities to construct an unnormalized augmented posterior
that identifies the parameters, and use the conditional posterior of the latent
utilities as part of the variational family. The proposed method is faster than
MCMC, and can be made scalable to both a large number of choice alternatives
and a large number of observations. The accuracy and scalability of our method
are illustrated in numerical experiments and real purchase data with one million
observations.",Fast variational Bayes methods for multinomial probit models,2022-02-25 07:45:42,"Rubén Loaiza-Maya, Didier Nibbering","http://arxiv.org/abs/2202.12495v2, http://arxiv.org/pdf/2202.12495v2",econ.EM
29503,em,"We propose a novel variational Bayes approach to estimate high-dimensional
vector autoregression (VAR) models with hierarchical shrinkage priors. Our
approach does not rely on a conventional structural VAR representation of the
parameter space for posterior inference. Instead, we elicit hierarchical
shrinkage priors directly on the matrix of regression coefficients so that (1)
the prior structure directly maps into posterior inference on the reduced-form
transition matrix, and (2) posterior estimates are more robust to variable
permutation. An extensive simulation study provides evidence that our approach
compares favourably against existing linear and non-linear Markov Chain Monte
Carlo and variational Bayes methods. We investigate both the statistical and
economic value of the forecasts from our variational inference approach within
the context of a mean-variance investor allocating her wealth in a large set of
different industry portfolios. The results show that more accurate estimates
translate into substantial statistical and economic out-of-sample gains. The
results hold across different hierarchical shrinkage priors and model
dimensions.",Variational inference for large Bayesian vector autoregressions,2022-02-25 15:09:43,"Mauro Bernardi, Daniele Bianchi, Nicolas Bianco","http://arxiv.org/abs/2202.12644v3, http://arxiv.org/pdf/2202.12644v3",econ.EM
29504,em,"Subsidies are commonly used to encourage behaviors that can lead to short- or
long-term benefits. Typical examples include subsidized job training programs
and provisions of preventive health products, in which both behavioral
responses and associated gains can exhibit heterogeneity. This study uses the
marginal treatment effect (MTE) framework to study personalized assignments of
subsidies based on individual characteristics. First, we derive the optimality
condition for a welfare-maximizing subsidy rule by showing that the welfare can
be represented as a function of the MTE. Next, we show that subsidies generally
result in better welfare than directly mandating the encouraged behavior
because subsidy rules implicitly target individuals through unobserved
heterogeneity in the behavioral response. When there is positive selection,
that is, when individuals with higher returns are more likely to select the
encouraged behavior, the optimal subsidy rule achieves the first-best welfare,
which is the optimal welfare if a policy-maker can observe individuals' private
information. We then provide methods to (partially) identify the optimal
subsidy rule when the MTE is identified and unidentified. Particularly,
positive selection allows for the point identification of the optimal subsidy
rule even when the MTE curve itself is not point identified. As an empirical application, we study the
optimal wage subsidy using the experimental data from the Jordan New
Opportunities for Women pilot study.",Personalized Subsidy Rules,2022-02-28 08:01:36,"Yu-Chang Chen, Haitian Xie","http://arxiv.org/abs/2202.13545v2, http://arxiv.org/pdf/2202.13545v2",econ.EM
29505,em,"The relationship between inflation and predictors such as unemployment is
potentially nonlinear with a strength that varies over time, and prediction
errors may be subject to large, asymmetric shocks. Inspired by these
concerns, we develop a model for inflation forecasting that is nonparametric
both in the conditional mean and in the error using Gaussian and Dirichlet
processes, respectively. We discuss how both these features may be important in
producing accurate forecasts of inflation. In a forecasting exercise involving
CPI inflation, we find that our approach has substantial benefits, both overall
and in the left tail, with nonparametric modeling of the conditional mean being
of particular importance.",Forecasting US Inflation Using Bayesian Nonparametric Models,2022-02-28 16:39:04,"Todd E. Clark, Florian Huber, Gary Koop, Massimiliano Marcellino","http://arxiv.org/abs/2202.13793v1, http://arxiv.org/pdf/2202.13793v1",econ.EM
29506,em,"I present a new estimation procedure for production functions with latent
group structures. I consider production functions that are heterogeneous across
groups but time-homogeneous within groups, and where the group membership of
the firms is unknown. My estimation procedure is fully data-driven and embeds
recent identification strategies from the production function literature into
the classifier-Lasso. Simulation experiments demonstrate that firms are
assigned to their correct latent group with probability close to one. I apply
my estimation procedure to a panel of Chilean firms and find sizable
differences in the estimates compared to the standard approach of
classification by industry.",A Classifier-Lasso Approach for Estimating Production Functions with Latent Group Structures,2022-03-04 13:01:58,Daniel Czarnowske,"http://arxiv.org/abs/2203.02220v1, http://arxiv.org/pdf/2203.02220v1",econ.EM
29508,em,"When using dyadic data (i.e., data indexed by pairs of units), researchers
typically assume a linear model, estimate it using Ordinary Least Squares and
conduct inference using ``dyadic-robust"" variance estimators. The latter
assumes that dyads are uncorrelated if they do not share a common unit (e.g.,
if the same individual is not present in both pairs of data). We show that this
assumption does not hold in many empirical applications because indirect links
may exist due to network connections, generating correlated outcomes. Hence,
``dyadic-robust'' estimators can be biased in such situations. We develop a
consistent variance estimator for such contexts by leveraging results in
network statistics. Our estimator has good finite sample properties in
simulations, while allowing for decay in spillover effects. We illustrate our
message with an application to politicians' voting behavior when they are
seated next to each other in the European Parliament.",Inference in Linear Dyadic Data Models with Network Spillovers,2022-03-07 19:18:15,"Nathan Canen, Ko Sugiura","http://arxiv.org/abs/2203.03497v5, http://arxiv.org/pdf/2203.03497v5",econ.EM
29509,em,"The literature often relies on moment-based measures of earnings risk, such
as the variance, skewness, and kurtosis (e.g., Guvenen, Karahan, Ozkan, and
Song, 2019, Econometrica). However, such moments may not exist in the
population under heavy-tailed distributions. We empirically show that the
population kurtosis, skewness, and even variance often fail to exist for the
conditional distribution of earnings growths. This evidence may invalidate the
moment-based analyses in the literature. In this light, we propose conditional
Pareto exponents as novel measures of earnings risk that are robust against
non-existence of moments, and develop estimation and inference methods.
  Using these measures with an administrative data set for the UK, the New
Earnings Survey Panel Dataset (NESPD), and the US Panel Study of Income
Dynamics (PSID), we quantify the tail heaviness of the conditional
distributions of earnings changes given age, gender, and past earnings. Our
main findings are that: 1) the aforementioned moments fail to exist; 2)
earnings risk is increasing over the life cycle; 3) job stayers are more
vulnerable to earnings risk, and 4) these patterns appear in both the period
2007-2008 of the Great Recession and the period 2015-2016 of positive growth
among others.",Non-Existent Moments of Earnings Growth,2022-03-15 18:50:49,"Silvia Sarpietro, Yuya Sasaki, Yulong Wang","http://arxiv.org/abs/2203.08014v2, http://arxiv.org/pdf/2203.08014v2",econ.EM
29510,em,"Finding valid instruments is difficult. We propose Validity Set Instrumental
Variable (VSIV) estimation, a method for estimating treatment effects when the
instruments are partially invalid. VSIV estimation exploits testable
implications for instrument validity to remove invalid variation in the
instruments. We show that the proposed VSIV estimators are asymptotically
normal under weak conditions and always remove or reduce the asymptotic bias
relative to standard IV estimators. We apply VSIV estimation to estimate the
returns to schooling using the quarter of birth instrument.",Pairwise Valid Instruments,2022-03-15 19:44:20,"Zhenting Sun, Kaspar Wüthrich","http://arxiv.org/abs/2203.08050v3, http://arxiv.org/pdf/2203.08050v3",econ.EM
29511,em,"This paper introduces a new fixed effects estimator for linear panel data
models with clustered time patterns of unobserved heterogeneity. The method
avoids non-convex and combinatorial optimization by combining a preliminary
consistent estimator of the slope coefficient, an agglomerative
pairwise-differencing clustering of cross-sectional units, and a pooled
ordinary least squares regression. Asymptotic guarantees are established in a
framework where $T$ can grow at any power of $N$, as both $N$ and $T$ approach
infinity. Unlike most existing approaches, the proposed estimator is
computationally straightforward and does not require a known upper bound on the
number of groups. Like existing approaches, this method leads to consistent
estimation of well-separated groups and an estimator of common parameters
asymptotically equivalent to the infeasible regression controlling for the true
groups. An application revisits the statistical association between income and
democracy.",A Simple and Computationally Trivial Estimator for Grouped Fixed Effects Models,2022-03-16 21:50:22,Martin Mugnier,"http://arxiv.org/abs/2203.08879v3, http://arxiv.org/pdf/2203.08879v3",econ.EM
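A stylized sketch of the three ingredients mentioned in the abstract (preliminary slope estimate, agglomerative clustering of units, pooled OLS) is given below. It assumes balanced panel arrays, a known number of groups, and scikit-learn's generic agglomerative clustering rather than the paper's pairwise-differencing criterion, so it should be read as an illustration only.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Illustrative sketch (not the paper's exact algorithm): cluster units on the
# time path of residuals from a preliminary slope estimate, then re-estimate
# the slope by pooled OLS with group-specific time effects.
def grouped_fe_sketch(Y, X, beta_prelim, n_groups):
    # Y: (N, T) outcomes; X: (N, T, K) regressors; beta_prelim: (K,) preliminary slope
    N, T = Y.shape
    resid = Y - X @ beta_prelim                     # unit-by-time residual paths
    labels = AgglomerativeClustering(n_clusters=n_groups).fit_predict(resid)
    # Pooled OLS controlling for estimated group-by-time effects via dummies
    G = np.zeros((N * T, n_groups * T))
    for i in range(N):
        for t in range(T):
            G[i * T + t, labels[i] * T + t] = 1.0
    Z = np.hstack([X.reshape(N * T, -1), G])
    beta_full = np.linalg.lstsq(Z, Y.reshape(-1), rcond=None)[0]
    return beta_full[: X.shape[2]], labels
```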
29512,em,"We study the role of selection into treatment in difference-in-differences
(DiD) designs. We derive necessary and sufficient conditions for parallel
trends assumptions under general classes of selection mechanisms. These
conditions characterize the empirical content of parallel trends. For settings
where the necessary conditions are questionable, we propose tools for
selection-based sensitivity analysis. We also provide templates for justifying
DiD in applications with and without covariates. A reanalysis of the causal
effect of NSW training programs demonstrates the usefulness of our
selection-based approach to sensitivity analysis.",Selection and parallel trends,2022-03-17 03:44:30,"Dalia Ghanem, Pedro H. C. Sant'Anna, Kaspar Wüthrich","http://arxiv.org/abs/2203.09001v9, http://arxiv.org/pdf/2203.09001v9",econ.EM
29513,em,"Fixed effect estimators of nonlinear panel data models suffer from the
incidental parameter problem. This leads to two undesirable consequences in
applied research: (1) point estimates are subject to large biases, and (2)
confidence intervals have incorrect coverages. This paper proposes a
simulation-based method for bias reduction. The method simulates data using the
model with estimated individual effects, and finds values of parameters by
equating fixed effect estimates obtained from observed and simulated data. The
asymptotic framework provides consistency, bias correction, and asymptotic
normality results. An application to female labor force participation and
accompanying simulations illustrate the finite-sample performance of the method.",Indirect Inference for Nonlinear Panel Models with Fixed Effects,2022-03-21 03:27:49,Shuowen Chen,"http://arxiv.org/abs/2203.10683v2, http://arxiv.org/pdf/2203.10683v2",econ.EM
29514,em,"In linear econometric models with proportional selection on unobservables,
the omitted variable bias in estimated treatment effects is a real root of a cubic
equation involving estimated parameters from a short and intermediate
regression. The roots of the cubic are functions of $\delta$, the degree of
selection on unobservables, and $R_{max}$, the R-squared in a hypothetical long
regression that includes the unobservable confounder and all observable
controls. In this paper I propose and implement a novel algorithm to compute
roots of the cubic equation over relevant regions of the $\delta$-$R_{max}$
plane and use the roots to construct bounding sets for the true treatment
effect. The algorithm is based on two well-known mathematical results: (a) the
discriminant of the cubic equation can be used to demarcate regions of unique
real roots from regions of three real roots, and (b) a small change in the
coefficients of a polynomial equation will lead to a small change in its roots
because the latter are continuous functions of the former. I illustrate my
method by applying it to the analysis of maternal behavior on child outcomes.",Bounds for Bias-Adjusted Treatment Effect in Linear Econometric Models,2022-03-23 17:11:44,Deepankar Basu,"http://arxiv.org/abs/2203.12431v1, http://arxiv.org/pdf/2203.12431v1",econ.EM
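A minimal sketch of the grid-based root computation described above is shown below. The function `cubic_coefs`, which maps a point $(\delta, R_{max})$ to the coefficients of the bias cubic implied by the short and intermediate regressions, is hypothetical; the discriminant rule for counting real roots is the standard one for cubics.

```python
import numpy as np

# Illustrative sketch of the grid-based computation described in the abstract.
# `cubic_coefs(delta, r_max)` is a hypothetical function returning the
# coefficients (a, b, c, d) of the bias cubic a*x**3 + b*x**2 + c*x + d = 0
# implied by the short and intermediate regression estimates.
def real_roots_on_grid(cubic_coefs, deltas, r_maxes):
    out = {}
    for delta in deltas:
        for r_max in r_maxes:
            a, b, c, d = cubic_coefs(delta, r_max)
            roots = np.roots([a, b, c, d])
            real_roots = roots[np.abs(roots.imag) < 1e-8].real
            # Standard cubic discriminant: > 0 means three distinct real roots,
            # < 0 means a single real root (plus a complex-conjugate pair).
            disc = 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2
            out[(delta, r_max)] = (real_roots, disc)
    return out
```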
29515,em,"Attrition is a common and potentially important threat to internal validity
in treatment effect studies. We extend the changes-in-changes approach to
identify the average treatment effect for respondents and the entire study
population in the presence of attrition. Our method, which exploits baseline
outcome data, can be applied to randomized experiments as well as
quasi-experimental difference-in-difference designs. A formal comparison
highlights that while widely used corrections typically impose restrictions on
whether or how response depends on treatment, our proposed attrition correction
exploits restrictions on the outcome model. We further show that the conditions
required for our correction can accommodate a broad class of response models
that depend on treatment in an arbitrary way. We illustrate the implementation
of the proposed corrections in an application to a large-scale randomized
experiment.",Correcting Attrition Bias using Changes-in-Changes,2022-03-24 00:55:32,"Dalia Ghanem, Sarojini Hirshleifer, Désiré Kédagni, Karen Ortiz-Becerra","http://arxiv.org/abs/2203.12740v3, http://arxiv.org/pdf/2203.12740v3",econ.EM
29516,em,"I introduce a new method for bias correction of dyadic models with
agent-specific fixed-effects, including the dyadic link formation model with
homophily and degree heterogeneity. The proposed approach uses a jackknife
procedure to deal with the incidental parameters problem. The method can be
applied to both directed and undirected networks, allows for non-binary outcome
variables, and can be used to bias correct estimates of average effects and
counterfactual outcomes. I also show how the jackknife can be used to
bias-correct fixed effect averages over functions that depend on multiple
nodes, e.g. triads or tetrads in the network. As an example, I implement
specification tests for dependence across dyads, such as reciprocity or
transitivity. Finally, I demonstrate the usefulness of the estimator in an
application to a gravity model for import/export relationships across
countries.",Estimating Nonlinear Network Data Models with Fixed Effects,2022-03-29 17:12:13,David W. Hughes,"http://arxiv.org/abs/2203.15603v2, http://arxiv.org/pdf/2203.15603v2",econ.EM
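To fix ideas, a generic leave-one-node-out jackknife bias correction is sketched below; `estimate` and `drop_node` are hypothetical stand-ins, and the paper's specific procedure (for example, how averages over triads or tetrads are handled) may differ in detail.

```python
import numpy as np

# Generic leave-one-node-out jackknife bias correction (illustrative sketch).
# `estimate(data)` is a hypothetical function returning the parameter estimate
# from a network data set; `drop_node(data, i)` removes all dyads involving node i.
def jackknife_correct(data, nodes, estimate, drop_node):
    theta_full = estimate(data)
    leave_one_out = np.array([estimate(drop_node(data, i)) for i in nodes])
    n = len(nodes)
    # Standard jackknife bias-corrected estimate
    return n * theta_full - (n - 1) * leave_one_out.mean(axis=0)
```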
29517,em,"Difference-in-differences is one of the most used identification strategies
in empirical work in economics. This chapter reviews a number of important,
recent developments related to difference-in-differences. First, this chapter
reviews recent work pointing out limitations of two-way fixed effects
regressions (these are panel data regressions that have been the dominant
approach to implementing difference-in-differences identification strategies)
that arise in empirically relevant settings where there are more than two time
periods, variation in treatment timing across units, and treatment effect
heterogeneity. Second, this chapter reviews recently proposed alternative
approaches that are able to circumvent these issues without being substantially
more complicated to implement. Third, this chapter covers a number of
extensions to these results, paying particular attention to (i) parallel trends
assumptions that hold only after conditioning on observed covariates and (ii)
strategies to partially identify causal effect parameters in
difference-in-differences applications in cases where the parallel trends
assumption may be violated.",Difference-in-Differences for Policy Evaluation,2022-03-29 18:06:20,Brantly Callaway,"http://arxiv.org/abs/2203.15646v1, http://arxiv.org/pdf/2203.15646v1",econ.EM
29518,em,"In this paper we propose two simple methods to estimate models of matching
with transferable and separable utility introduced in Galichon and Salani\'e
(2022). The first method is a minimum distance estimator that relies on the
generalized entropy of matching. The second relies on a reformulation of the
more special but popular Choo and Siow (2006) model; it uses generalized linear
models (GLMs) with two-way fixed effects.",Estimating Separable Matching Models,2022-04-01 14:26:18,"Alfred Galichon, Bernard Salanié","http://arxiv.org/abs/2204.00362v1, http://arxiv.org/pdf/2204.00362v1",econ.EM
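As a rough illustration of the second method, one might fit a Poisson GLM with two-way fixed effects to a table of match counts, as in the hedged sketch below. The data frame and its column names are hypothetical, and the mapping from estimated fixed effects to structural parameters follows the paper rather than this sketch.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical table of match counts, one row per (x, y) pair of types.
df = pd.DataFrame({
    "x": ["a", "a", "b", "b"],        # hypothetical types on one side of the market
    "y": ["c", "d", "c", "d"],        # hypothetical types on the other side
    "n_xy": [40, 10, 15, 35],         # hypothetical match counts
})

# Poisson GLM with two-way fixed effects in the types, as an illustration of
# the GLM reformulation mentioned in the abstract.
model = smf.glm("n_xy ~ C(x) + C(y)", data=df, family=sm.families.Poisson())
print(model.fit().params)
```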
29519,em,"We propose confidence regions for the parameters of incomplete models with
exact coverage of the true parameter in finite samples. Our confidence region
inverts a test, which generalizes Monte Carlo tests to incomplete models. The
test statistic is a discrete analogue of a new optimal transport
characterization of the sharp identified region. Both test statistic and
critical values rely on simulation drawn from the distribution of latent
variables and are computed using solutions to discrete optimal transport, hence
linear programming problems. We also propose a fast preliminary search in the
parameter space with an alternative, more conservative yet consistent test,
based on a parameter free critical value.",Finite Sample Inference in Incomplete Models,2022-04-01 17:30:34,"Lixiong Li, Marc Henry","http://arxiv.org/abs/2204.00473v1, http://arxiv.org/pdf/2204.00473v1",econ.EM
29520,em,"I address the decomposition of the differences between the distribution of
outcomes of two groups when individuals self-select themselves into
participation. I differentiate between the decomposition for participants and
the entire population, highlighting how the primitive components of the model
affect each of the distributions of outcomes. Additionally, I introduce two
ancillary decompositions that help uncover the sources of differences in the
distribution of unobservables and participation between the two groups. The
estimation is done using existing quantile regression methods, for which I show
how to perform uniformly valid inference. I illustrate these methods by
revisiting the gender wage gap, finding that changes in female participation
and self-selection have been the main drivers for reducing the gap.",Decomposition of Differences in Distribution under Sample Selection and the Gender Wage Gap,2022-04-01 19:20:57,Santiago Pereda-Fernández,"http://arxiv.org/abs/2204.00551v2, http://arxiv.org/pdf/2204.00551v2",econ.EM
29521,em,"We establish the asymptotic theory in quantile autoregression when the model
parameter is specified with respect to moderate deviations from the unit
boundary of the form (1 + c / k) with a convergence sequence that diverges at a
rate slower than the sample size n. Then, extending the framework proposed by
Phillips and Magdalinos (2007), we consider the limit theory for the
near-stationary and the near-explosive cases when the model is estimated with a
conditional quantile specification function and model parameters are
quantile-dependent. Additionally, a Bahadur-type representation and limiting
distributions based on the M-estimators of the model parameters are derived.
Specifically, we show that the serial correlation coefficient converges in
distribution to a ratio of two independent random variables. Monte Carlo
simulations illustrate the finite-sample performance of the estimation
procedure under investigation.",Asymptotic Theory for Unit Root Moderate Deviations in Quantile Autoregressions and Predictive Regressions,2022-04-05 12:15:51,Christis Katsouris,"http://arxiv.org/abs/2204.02073v2, http://arxiv.org/pdf/2204.02073v2",econ.EM
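Abstracting from the quantile-specific details, the moderate-deviations parameterisation described above can be written compactly as follows (the symbols $k_n$, $\rho_n$, and $u_t$ are generic notation introduced here, not taken from the paper):

```latex
% Autoregressive root local to unity at rate k_n, with k_n divergent but
% slower than the sample size n (notation illustrative).
y_t = \rho_n\, y_{t-1} + u_t, \qquad
\rho_n = 1 + \frac{c}{k_n}, \qquad
k_n \to \infty, \quad \frac{k_n}{n} \to 0,
```

where $c < 0$ corresponds to the near-stationary case and $c > 0$ to the near-explosive case.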
29522,em,"Treatment effect estimation strategies in the event-study setup, namely panel
data with variation in treatment timing, often use the parallel trend
assumption that assumes mean independence of potential outcomes across
different treatment timings. In this paper, we relax the parallel trend
assumption by assuming a latent type variable and develop a type-specific
parallel trend assumption. With a finite support assumption on the latent type
variable, we show that an extremum classifier consistently estimates the type
assignment. Based on the classification result, we propose a type-specific
diff-in-diff estimator for the type-specific CATT. By estimating the CATT with
regard to the latent type, we study heterogeneity in treatment effect, in
addition to heterogeneity in baseline outcomes.",Finitely Heterogeneous Treatment Effect in Event-study,2022-04-05 19:58:53,Myungkou Shin,"http://arxiv.org/abs/2204.02346v3, http://arxiv.org/pdf/2204.02346v3",econ.EM
29523,em,"The paper proposes a new bootstrap approach to the Pesaran, Shin and Smith's
bound tests in a conditional equilibrium correction model with the aim to
overcome some typical drawbacks of the latter, such as inconclusive inference
and distortion in size. The bootstrap tests are worked out under several data
generating processes, including degenerate cases. Monte Carlo simulations
confirm the better performance of the bootstrap tests relative to the bound
tests and to the asymptotic F test on the independent variables of the ARDL
model. It is also proved that any inference carried out in misspecified models,
such as unconditional ARDLs, may be misleading. Empirical applications
highlight the importance of employing the appropriate specification and provide
definitive answers to the inconclusive inference of the bound tests when
exploring the long-term equilibrium relationship between economic variables.",Bootstrap Cointegration Tests in ARDL Models,2022-04-11 11:31:19,"Stefano Bertelli, Gianmarco Vacca, Maria Grazia Zoia","http://arxiv.org/abs/2204.04939v1, http://arxiv.org/pdf/2204.04939v1",econ.EM
29524,em,"We study partially linear models when the outcome of interest and some of the
covariates are observed in two different datasets that cannot be linked. This
type of data combination problem arises very frequently in empirical
microeconomics. Using recent tools from optimal transport theory, we derive a
constructive characterization of the sharp identified set. We then build on
this result and develop a novel inference method that exploits the specific
geometric properties of the identified set. Our method exhibits good
performances in finite samples, while remaining very tractable. We apply our
approach to study intergenerational income mobility over the period 1850-1930
in the United States. Our method allows us to relax the exclusion restrictions
used in earlier work, while delivering confidence regions that are informative.",Partially Linear Models under Data Combination,2022-04-11 18:08:05,"Xavier D'Haultfœuille, Christophe Gaillac, Arnaud Maurel","http://arxiv.org/abs/2204.05175v3, http://arxiv.org/pdf/2204.05175v3",econ.EM
29525,em,"Administrative data are often easier to access as tabulated summaries than in
the original format due to confidentiality concerns. Motivated by this
practical feature, we propose a novel nonparametric density estimation method
from tabulated summary data based on maximum entropy and prove its strong
uniform consistency. Unlike existing kernel-based estimators, our estimator is
free from tuning parameters and admits a closed-form density that is convenient
for post-estimation analysis. We apply the proposed method to the tabulated
summary data of the U.S. tax returns to estimate the income distribution.",Tuning Parameter-Free Nonparametric Density Estimation from Tabulated Summary Data,2022-04-12 05:11:41,"Ji Hyung Lee, Yuya Sasaki, Alexis Akira Toda, Yulong Wang","http://arxiv.org/abs/2204.05480v3, http://arxiv.org/pdf/2204.05480v3",econ.EM
29526,em,"This note describes the optimal policy rule, according to the local
asymptotic minimax regret criterion, for best arm identification when there are
only two treatments. It is shown that the optimal sampling rule is the Neyman
allocation, which allocates a constant fraction of units to each treatment in a
manner that is proportional to the standard deviation of the treatment
outcomes. When the variances are equal, the optimal ratio is one-half. This
policy is independent of the data, so there is no adaptation to previous
outcomes. At the end of the experiment, the policy maker adopts the treatment
with higher average outcomes.",Neyman allocation is minimax optimal for best arm identification with two arms,2022-04-12 07:59:13,Karun Adusumilli,"http://arxiv.org/abs/2204.05527v7, http://arxiv.org/pdf/2204.05527v7",econ.EM
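A tiny numerical illustration of the allocation rule described above (the numbers are made up): with two arms, the share of units assigned to each arm is proportional to that arm's outcome standard deviation.

```python
# Neyman allocation with two arms: sample shares proportional to the outcome
# standard deviations (values below are made-up for illustration).
sigma1, sigma2 = 2.0, 1.0
share1 = sigma1 / (sigma1 + sigma2)   # fraction of units assigned to arm 1
share2 = 1.0 - share1
print(share1, share2)                 # 0.667, 0.333; equal sigmas give 1/2 each
```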
29527,em,"We examine identification of differentiated products demand when one has
""micro data"" linking individual consumers' characteristics and choices. Our
model nests standard specifications featuring rich observed and unobserved
consumer heterogeneity as well as product/market-level unobservables that
introduce the problem of econometric endogeneity. Previous work establishes
identification of such models using market-level data and instruments for all
prices and quantities. Micro data provides a panel structure that facilitates
richer demand specifications and reduces requirements on both the number and
types of instrumental variables. We address identification of demand in the
standard case in which non-price product characteristics are assumed exogenous,
but also cover identification of demand elasticities and other key features
when product characteristics are endogenous. We discuss implications of these
results for applied work.",Nonparametric Identification of Differentiated Products Demand Using Micro Data,2022-04-14 00:00:57,"Steven T. Berry, Philip A. Haile","http://arxiv.org/abs/2204.06637v2, http://arxiv.org/pdf/2204.06637v2",econ.EM
29528,em,"Many empirical examples of regression discontinuity (RD) designs concern a
continuous treatment variable, but the theoretical aspects of such models are
less studied. This study examines the identification and estimation of the
structural function in fuzzy RD designs with a continuous treatment variable.
The structural function fully describes the causal impact of the treatment on
the outcome. We show that the nonlinear and nonseparable structural function
can be nonparametrically identified at the RD cutoff under shape restrictions,
including monotonicity and smoothness conditions. Based on the nonparametric
identification equation, we propose a three-step semiparametric estimation
procedure and establish the asymptotic normality of the estimator. The
semiparametric estimator achieves the same convergence rate as in the case of a
binary treatment variable. As an application of the method, we estimate the
causal effect of sleep time on health status by using the discontinuity in
natural light timing at time zone boundaries.",Nonlinear and Nonseparable Structural Functions in Fuzzy Regression Discontinuity Designs,2022-04-18 08:41:57,Haitian Xie,"http://arxiv.org/abs/2204.08168v2, http://arxiv.org/pdf/2204.08168v2",econ.EM
29529,em,"This paper studies the implication of a fraction of the population not
responding to the instrument when selecting into treatment. We show that, in
general, the presence of non-responders biases the Marginal Treatment Effect
(MTE) curve and many of its functionals. Yet, we show that, when the propensity
score is fully supported on the unit interval, it is still possible to restore
identification of the MTE curve and its functionals with an appropriate
re-weighting.",MTE with Misspecification,2022-04-22 03:19:25,"Julián Martínez-Iriarte, Pietro Emilio Spini","http://arxiv.org/abs/2204.10445v1, http://arxiv.org/pdf/2204.10445v1",econ.EM
29530,em,"This paper proposes a one-covariate-at-a-time multiple testing (OCMT)
approach to choose significant variables in high-dimensional nonparametric
additive regression models. Similarly to Chudik, Kapetanios and Pesaran (2018),
we consider the statistical significance of individual nonparametric additive
components one at a time and take into account the multiple testing nature of
the problem. One-stage and multiple-stage procedures are both considered. The
former works well in terms of the true positive rate only if the marginal
effects of all signals are strong enough; the latter helps to pick up hidden
signals that have weak marginal effects. Simulations demonstrate the good
finite sample performance of the proposed procedures. As an empirical
application, we use the OCMT procedure on a dataset we extracted from the
Longitudinal Survey on Rural Urban Migration in China. We find that our
procedure works well in terms of the out-of-sample forecast root mean square
errors, compared with competing methods.",A One-Covariate-at-a-Time Method for Nonparametric Additive Models,2022-04-26 04:37:22,"Liangjun Su, Thomas Tao Yang, Yonghui Zhang, Qiankun Zhou","http://arxiv.org/abs/2204.12023v2, http://arxiv.org/pdf/2204.12023v2",econ.EM
29531,em,"We consider estimation in moment condition models and show that under any
bound on identification strength, asymptotically admissible (i.e. undominated)
estimators in a wide class of estimation problems must be uniformly continuous
in the sample moment function. GMM estimators are in general discontinuous in
the sample moments, and are thus inadmissible. We show, by contrast, that
bagged, or bootstrap aggregated, GMM estimators as well as quasi-Bayes
posterior means have superior continuity properties, while results in the
literature imply that they are equivalent to GMM when identification is strong.
In simulations calibrated to published instrumental variables specifications,
we find that these alternatives often outperform GMM.",GMM is Inadmissible Under Weak Identification,2022-04-26 20:33:42,"Isaiah Andrews, Anna Mikusheva","http://arxiv.org/abs/2204.12462v3, http://arxiv.org/pdf/2204.12462v3",econ.EM
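For intuition, a generic bootstrap-aggregated estimator simply averages an estimator across nonparametric bootstrap resamples, as in the sketch below; `gmm_estimate` is a hypothetical GMM routine, and the paper's exact bagging construction may differ.

```python
import numpy as np

# Generic bootstrap-aggregated ("bagged") estimator (illustrative sketch).
# `data` is assumed to be a NumPy array of observations; `gmm_estimate` is a
# hypothetical function returning the GMM estimate for a given data set.
def bagged_gmm(data, gmm_estimate, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # nonparametric bootstrap resample
        estimates.append(gmm_estimate(data[idx]))
    return np.mean(estimates, axis=0)         # average across resamples
```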
29532,em,"This paper introduces a flexible local projection that generalizes the model
by Jord\'a (2005) to a non-parametric setting using Bayesian Additive
Regression Trees. Monte Carlo experiments show that our BART-LP model is able
to capture non-linearities in the impulse responses. Our first application
shows that the fiscal multiplier is stronger in recession than in expansion
only in response to contractionary fiscal shocks, but not in response to
expansionary fiscal shocks. We then show that financial shocks generate effects
on the economy that increase more than proportionately in the size of the shock
when the shock is negative, but not when the shock is positive.",Impulse response estimation via flexible local projections,2022-04-27 22:14:13,"Haroon Mumtaz, Michele Piffer","http://arxiv.org/abs/2204.13150v1, http://arxiv.org/pdf/2204.13150v1",econ.EM
29533,em,"Estimating structural models is an essential tool for economists. However,
existing methods are often inefficient either computationally or statistically,
depending on how equilibrium conditions are imposed. We propose a class of
penalized sieve estimators that are consistent, asymptotically normal, and
asymptotically efficient. Instead of solving the model repeatedly, we
approximate the solution with a linear combination of basis functions and
impose equilibrium conditions as a penalty in searching for the best fitting
coefficients. We apply our method to an entry game between Walmart and Kmart.",Penalized Sieve Estimation of Structural Models,2022-04-28 16:31:31,"Yao Luo, Peijun Sang","http://arxiv.org/abs/2204.13488v1, http://arxiv.org/pdf/2204.13488v1",econ.EM
29534,em,"This paper assesses the effects of greenhouse gas emissions drivers in EU-27
over the period 2010-2019, using a Panel EGLS model with period fixed effects.
In particular, we focused our research on studying the effects of GDP,
renewable energy, households' energy consumption, and waste on greenhouse gas
emissions. In this regard, we found a positive relationship between three
independent variables (real GDP per capita, households' final consumption per
capita and waste generation per capita) and greenhouse gas emissions per
capita, while the effect of the share of renewable energy in gross final energy
consumption on the dependent variable proved to be negative, but quite low. In
addition, we demonstrate that the main challenge that affects greenhouse gas
emissions is related to the structure of households' energy consumption, which
is generally composed of environmentally harmful fuels. This suggests the need
to make greater efforts to support the shift to a green economy based on a
higher energy efficiency.",Greenhouse Gas Emissions and its Main Drivers: a Panel Assessment for EU-27 Member States,2022-04-30 18:43:46,"I. Jianu, S. M. Jeloaica, M. D. Tudorache","http://arxiv.org/abs/2205.00295v1, http://arxiv.org/pdf/2205.00295v1",econ.EM
29535,em,"In this paper, we propose a simple inferential method for a wide class of
panel data models with a focus on such cases that have both serial correlation
and cross-sectional dependence. In order to establish an asymptotic theory to
support the inferential method, we develop some new and useful higher-order
expansions, such as a Berry-Esseen bound and an Edgeworth expansion, under a set of
simple and general conditions. We further demonstrate the usefulness of these
theoretical results by explicitly investigating a panel data model with
interactive effects which nests many traditional panel data models as special
cases. Finally, we show the superiority of our approach over several natural
competitors using extensive numerical studies.",Higher-order Expansions and Inference for Panel Data Models,2022-05-02 02:04:40,"Jiti Gao, Bin Peng, Yayi Yan","http://arxiv.org/abs/2205.00577v2, http://arxiv.org/pdf/2205.00577v2",econ.EM
29536,em,"Crawford's et al. (2021) article on estimation of discrete choice models with
unobserved or latent consideration sets, presents a unified framework to
address the problem in practice by using ""sufficient sets"", defined as a
combination of past observed choices. The proposed approach rests on a
re-interpretation of a consistency result by McFadden (1978) for the problem of
sampling of alternatives, but the usage of that result in Crawford et al.
(2021) is imprecise in an important respect. It is stated that consistency would
be attained if any subset of the true consideration set is used for estimation,
but McFadden (1978) shows that, in general, one needs to do a sampling
correction that depends on the protocol used to draw the choice set. This note
derives the sampling correction that is required when the choice set for
estimation is built from past choices. Then, it formalizes the conditions under
which such a correction would fulfill the uniform condition property and can
therefore be ignored when building practical estimators, such as the ones
analyzed by Crawford et al. (2021).","A Note on ""A survey of preference estimation with unobserved choice set heterogeneity"" by Gregory S. Crawford, Rachel Griffith, and Alessandro Iaria",2022-05-02 15:33:21,C. Angelo Guevara,"http://arxiv.org/abs/2205.00852v1, http://arxiv.org/pdf/2205.00852v1",econ.EM
29539,em,"Methods for cluster-robust inference are routinely used in economics and many
other disciplines. However, it is only recently that theoretical foundations
for the use of these methods in many empirically relevant situations have been
developed. In this paper, we use these theoretical results to provide a guide
to empirical practice. We do not attempt to present a comprehensive survey of
the (very large) literature. Instead, we bridge theory and practice by
providing a thorough guide on what to do and why, based on recently available
econometric theory and simulation evidence. To practice what we preach, we
include an empirical analysis of the effects of the minimum wage on labor
supply of teenagers using individual data.",Cluster-Robust Inference: A Guide to Empirical Practice,2022-05-06 18:13:28,"James G. MacKinnon, Morten Ørregaard Nielsen, Matthew D. Webb","http://arxiv.org/abs/2205.03285v1, http://arxiv.org/pdf/2205.03285v1",econ.EM
29540,em,"We introduce a new Stata package called summclust that summarizes the cluster
structure of the dataset for linear regression models with clustered
disturbances. The key unit of observation for such a model is the cluster. We
therefore propose cluster-level measures of leverage, partial leverage, and
influence and show how to compute them quickly in most cases. The measures of
leverage and partial leverage can be used as diagnostic tools to identify
datasets and regression designs in which cluster-robust inference is likely to
be challenging. The measures of influence can provide valuable information
about how the results depend on the data in the various clusters. We also show
how to calculate two jackknife variance matrix estimators efficiently as a
byproduct of our other computations. These estimators, which are already
available in Stata, are generally more conservative than conventional variance
matrix estimators. The summclust package computes all the quantities that we
discuss.","Leverage, Influence, and the Jackknife in Clustered Regression Models: Reliable Inference Using summclust",2022-05-06 18:14:29,"James G. MacKinnon, Morten Ørregaard Nielsen, Matthew D. Webb","http://arxiv.org/abs/2205.03288v3, http://arxiv.org/pdf/2205.03288v3",econ.EM
29541,em,"This paper studies identification and estimation of dynamic games when the
underlying information structure is unknown to the researcher. To tractably
characterize the set of model predictions while maintaining weak assumptions on
players' information, we introduce Markov correlated equilibrium, a dynamic
analog of Bayes correlated equilibrium. The set of Markov correlated
equilibrium predictions coincides with the set of Markov perfect equilibrium
predictions that can arise when the players might observe more signals than
assumed by the analyst. We characterize the sharp identified sets under varying
assumptions on what the players minimally observe. We also propose
computational strategies for dealing with the non-convexities that arise in
dynamic environments.",Estimating Dynamic Games with Unknown Information Structure,2022-05-07 22:19:44,Paul S. Koh,"http://arxiv.org/abs/2205.03706v2, http://arxiv.org/pdf/2205.03706v2",econ.EM
29542,em,"This paper studies identification and estimation of a dynamic discrete choice
model of demand for differentiated product using consumer-level panel data with
few purchase events per consumer (i.e., short panel). Consumers are
forward-looking and their preferences incorporate two sources of dynamics: last
choice dependence due to habits and switching costs, and duration dependence
due to inventory, depreciation, or learning. A key distinguishing feature of
the model is that consumer unobserved heterogeneity has a Fixed Effects (FE)
structure -- that is, its probability distribution conditional on the initial
values of endogenous state variables is unrestricted. I apply and extend recent
results to establish the identification of all the structural parameters as
long as the dataset includes four or more purchase events per household. The
parameters can be estimated using a sufficient statistic - conditional maximum
likelihood (CML) method. An attractive feature of CML in this model is that the
sufficient statistic controls for the forward-looking value of the consumer's
decision problem such that the method does not require solving dynamic
programming problems or calculating expected present values.",Dynamic demand for differentiated products with fixed-effects unobserved heterogeneity,2022-05-08 23:35:57,Victor Aguirregabiria,"http://arxiv.org/abs/2205.03948v2, http://arxiv.org/pdf/2205.03948v2",econ.EM
29543,em,"This paper develops a novel method for policy choice in a dynamic setting
where the available data is a multivariate time series. Building on the
statistical treatment choice framework, we propose Time-series Empirical
Welfare Maximization (T-EWM) methods to estimate an optimal policy rule for the
current period or over multiple periods by maximizing an empirical welfare
criterion constructed using nonparametric potential outcome time-series. We
characterize conditions under which T-EWM consistently learns a policy choice
that is optimal in terms of conditional welfare given the time-series history.
We then derive a nonasymptotic upper bound for conditional welfare regret and
its minimax lower bound. To illustrate the implementation and uses of T-EWM, we
perform simulation studies and apply the method to estimate optimal monetary
policy rules from macroeconomic time-series data.",Policy Choice in Time Series by Empirical Welfare Maximization,2022-05-09 02:22:35,"Toru Kitagawa, Weining Wang, Mengshan Xu","http://arxiv.org/abs/2205.03970v3, http://arxiv.org/pdf/2205.03970v3",econ.EM
29544,em,"Current diagnostic tests for regression discontinuity (RD) design face a
multiple testing problem. We find a massive over-rejection of the identifying
restriction among empirical RD studies published in top-five economics
journals. Each test achieves a nominal size of 5%; however, the median number
of tests per study is 12. Consequently, more than one-third of studies reject
at least one of these tests and their diagnostic procedures are invalid for
justifying the identifying assumption. We offer a joint testing procedure to
resolve the multiple testing problem. Our procedure is based on a new joint
asymptotic normality of local linear estimates and local polynomial density
estimates. In simulation studies, our joint testing procedures outperform the
Bonferroni correction. We implement the procedure as an R package, rdtest, with
two empirical examples in its vignettes.",Joint diagnostic test of regression discontinuity designs: multiple testing problem,2022-05-09 17:46:29,"Koki Fusejima, Takuya Ishihara, Masayuki Sawada","http://arxiv.org/abs/2205.04345v3, http://arxiv.org/pdf/2205.04345v3",econ.EM
29591,em,"We propose a factor network autoregressive (FNAR) model for time series with
complex network structures. The coefficients of the model reflect many
different types of connections between economic agents (multilayer network),
which are summarized into a smaller number of network matrices (network
factors) through a novel tensor-based principal component approach. We provide
consistency and asymptotic normality results for the estimation of the factors
and the coefficients of the FNAR. Our approach combines two different
dimension-reduction techniques and can be applied to ultra-high-dimensional
datasets. In an empirical application, we use the FNAR to investigate the
cross-country interdependence of GDP growth rates based on a variety of
international trade and financial linkages. The model provides a rich
characterization of macroeconomic network effects.",Factor Network Autoregressions,2022-08-05 02:01:37,"Matteo Barigozzi, Giuseppe Cavaliere, Graziano Moramarco","http://arxiv.org/abs/2208.02925v3, http://arxiv.org/pdf/2208.02925v3",econ.EM
29545,em,"The effects of treatments are often heterogeneous, depending on the
observable characteristics, and it is necessary to exploit such heterogeneity
to devise individualized treatment rules (ITRs). Existing estimation methods of
such ITRs assume that the available experimental or observational data are
derived from the target population in which the estimated policy is
implemented. However, this assumption often fails in practice because of
limited useful data. In this case, policymakers must rely on the data generated
in the source population, which differs from the target population.
Unfortunately, existing estimation methods do not necessarily work as expected
in the new setting, and strategies that can achieve a reasonable goal in such a
situation are required. This study examines the application of distributionally
robust optimization (DRO), which formalizes an ambiguity about the target
population and adapts to the worst-case scenario in the set. It is shown that
DRO with Wasserstein distance-based characterization of ambiguity provides
simple intuitions and a simple estimation method. I then develop an estimator
for the distributionally robust ITR and evaluate its theoretical performance.
An empirical application shows that the proposed approach outperforms the naive
approach in the target population.",Distributionally Robust Policy Learning with Wasserstein Distance,2022-05-10 05:51:46,Daido Kido,"http://arxiv.org/abs/2205.04637v2, http://arxiv.org/pdf/2205.04637v2",econ.EM
29546,em,"Empirically, many strategic settings are characterized by stable outcomes in
which players' decisions are publicly observed, yet no player takes the
opportunity to deviate. To analyze such situations in the presence of
incomplete information, we build an empirical framework by introducing a novel
solution concept that we call Bayes stable equilibrium. Our framework allows
the researcher to be agnostic about players' information and the equilibrium
selection rule. The Bayes stable equilibrium identified set collapses to the
complete information pure strategy Nash equilibrium identified set under strong
assumptions on players' information. Furthermore, all else equal, it is weakly
tighter than the Bayes correlated equilibrium identified set. We also propose
computationally tractable approaches for estimation and inference. In an
application, we study the strategic entry decisions of McDonald's and Burger
King in the US. Our results highlight the identifying power of informational
assumptions and show that the Bayes stable equilibrium identified set can be
substantially tighter than the Bayes correlated equilibrium identified set. In
a counterfactual experiment, we examine the impact of increasing access to
healthy food on the market structures in Mississippi food deserts.",Stable Outcomes and Information in Games: An Empirical Framework,2022-05-10 18:56:50,Paul S. Koh,"http://arxiv.org/abs/2205.04990v2, http://arxiv.org/pdf/2205.04990v2",econ.EM
29547,em,"This paper considers the estimation of discrete games of complete information
without assumptions on the equilibrium selection rule, which is often viewed as
computationally difficult. We propose computationally attractive approaches
that avoid simulation and grid search. We show that the moment inequalities
proposed by Andrews, Berry, and Jia (2004) can be expressed in terms of
multinomial logit probabilities, and the corresponding identified set is
convex. When actions are binary, we can characterize the sharp identified set
using closed-form inequalities. We also propose a simple approach to inference.
Two real-data experiments illustrate that our methodology can be several orders
of magnitude faster than the existing approaches.",Estimating Discrete Games of Complete Information: Bringing Logit Back in the Game,2022-05-10 19:17:55,Paul S. Koh,"http://arxiv.org/abs/2205.05002v2, http://arxiv.org/pdf/2205.05002v2",econ.EM
29548,em,"Information retrieval systems, such as online marketplaces, news feeds, and
search engines, are ubiquitous in today's digital society. They facilitate
information discovery by ranking retrieved items on predicted relevance, i.e.
likelihood of interaction (click, share) between users and items. Typically
modeled using past interactions, such rankings have a major drawback:
interaction depends on the attention items receive. A highly-relevant item
placed outside a user's attention could receive little interaction. This
discrepancy between observed interaction and true relevance is termed the
position bias. Position bias degrades relevance estimation and when it
compounds over time, it can silo users into falsely relevant items, causing
marketplace inefficiencies. Position bias may be identified with randomized
experiments, but such an approach can be prohibitive in cost and feasibility.
Past research has also suggested propensity score methods, which do not
adequately address unobserved confounding; and regression discontinuity
designs, which have poor external validity. In this work, we address these
concerns by leveraging the abundance of A/B tests in ranking evaluations as
instrumental variables. Historical A/B tests allow us to access exogenous
variation in rankings without introducing it manually, which would harm user
experience and platform revenue. We demonstrate our methodology in two distinct
applications at LinkedIn - feed ads and the People-You-May-Know (PYMK)
recommender. The marketplaces comprise users and campaigns on the ads side, and
invite senders and recipients on PYMK. By leveraging prior experimentation, we
obtain quasi-experimental variation in item rankings that is orthogonal to user
relevance. Our method provides robust position effect estimates that handle
unobserved confounding well, greater generalizability, and easily extends to
other information retrieval systems.",Causal Estimation of Position Bias in Recommender Systems Using Marketplace Instruments,2022-05-12 23:58:25,"Rina Friedberg, Karthik Rajkumar, Jialiang Mao, Qian Yao, YinYin Yu, Min Liu","http://arxiv.org/abs/2205.06363v1, http://arxiv.org/pdf/2205.06363v1",econ.EM
29549,em,"Bounce Rate of different E-commerce websites depends on the different factors
based upon the different devices through which traffic share is observed. This
research paper focuses on how the type of products sold by different E-commerce
websites affects the bounce rate obtained through Mobile/Desktop. It tries to
explain the observations which counter the general trend of positive relation
between Mobile traffic share and bounce rate and how this is different for the
Desktop. To estimate the differences created by the types of products sold by
E-commerce websites on the bounce rate according to the data observed for
different time, fixed effect model (within group method) is used to determine
the difference created by the factors. Along with the effect of the type of
products sold by the E-commerce website on bounce rate, the effect of
individual website is also compared to verify the results obtained for type of
products.",How do Bounce Rates vary according to product sold?,2022-05-13 22:52:51,Himanshu Sharma,"http://arxiv.org/abs/2205.06866v1, http://arxiv.org/pdf/2205.06866v1",econ.EM
29550,em,"This paper proposes strategies to detect time reversibility in stationary
stochastic processes by using the properties of mixed causal and noncausal
models. It shows that they can also be used for non-stationary processes when
the trend component is computed with the Hodrick-Prescott filter rendering a
time-reversible closed-form solution. This paper also links the concept of an
environmental tipping point to the statistical property of time irreversibility
and assesses fourteen climate indicators. We find evidence of time
irreversibility in $GHG$ emissions, global temperature, global sea levels, sea
ice area, and some natural oscillation indices. While not conclusive, our
findings urge the implementation of correction policies to avoid the worst
consequences of climate change and not miss the opportunity window, which might
still be available, despite closing quickly.",Is climate change time reversible?,2022-05-16 14:29:23,"Francesco Giancaterini, Alain Hecq, Claudio Morana","http://arxiv.org/abs/2205.07579v3, http://arxiv.org/pdf/2205.07579v3",econ.EM
29552,em,"$p$-Hacking can undermine the validity of empirical studies. A flourishing
empirical literature investigates the prevalence of $p$-hacking based on the
empirical distribution of reported $p$-values across studies. Interpreting
results in this literature requires a careful understanding of the power of
methods used to detect different types of $p$-hacking. We theoretically study
the implications of likely forms of $p$-hacking on the distribution of reported
$p$-values and the power of existing methods for detecting it. Power can be
quite low, depending crucially on the particular $p$-hacking strategy and the
distribution of actual effects tested by the studies. Publication bias can
enhance the power for testing the joint null hypothesis of no $p$-hacking and
no publication bias. We relate the power of the tests to the costs of
$p$-hacking and show that power tends to be larger when $p$-hacking is very
costly. Monte Carlo simulations support our theoretical results.",The Power of Tests for Detecting $p$-Hacking,2022-05-16 22:18:55,"Graham Elliott, Nikolay Kudrin, Kaspar Wüthrich","http://arxiv.org/abs/2205.07950v2, http://arxiv.org/pdf/2205.07950v2",econ.EM
29553,em,"We introduce the closed-form formulas of nonlinear forecasts and nonlinear
impulse response functions (IRF) for the mixed causal-noncausal (Structural)
Vector Autoregressive (S)VAR models. We also discuss the identification of
nonlinear causal innovations of the model to which the shocks are applied. Our
approach is illustrated by a simulation study and an application to a bivariate
process of Bitcoin/USD and Ethereum/USD exchange rates.",Nonlinear Forecasts and Impulse Responses for Causal-Noncausal (S)VAR Models,2022-05-20 04:32:07,"Christian Gourieroux, Joann Jasiak","http://arxiv.org/abs/2205.09922v2, http://arxiv.org/pdf/2205.09922v2",econ.EM
29554,em,"This paper analyses the forecasting performance of a new class of factor
models with martingale difference errors (FMMDE) recently introduced by Lee and
Shao (2018). The FMMDE makes it possible to retrieve a transformation of the
original series so that the resulting variables can be partitioned according to
whether they are conditionally mean-independent with respect to past
information. We contribute to the literature in two respects. First, we propose
a novel methodology for selecting the number of factors in FMMDE. Through
simulation experiments, we show the good performance of our approach for finite
samples for various panel data specifications. Second, we compare the
forecasting performance of FMMDE with alternative factor model specifications
by conducting an extensive forecasting exercise using FRED-MD, a comprehensive
monthly macroeconomic database for the US economy. Our empirical findings
indicate that FMMDE provides an advantage in predicting the evolution of the
real sector of the economy when the novel methodology for factor selection is
adopted. These results are confirmed for key aggregates such as Production and
Income, the Labor Market, and Consumption.",The Forecasting performance of the Factor model with Martingale Difference errors,2022-05-20 18:38:23,"Luca Mattia Rolla, Alessandro Giovannelli","http://arxiv.org/abs/2205.10256v2, http://arxiv.org/pdf/2205.10256v2",econ.EM
29555,em,"The 1938 Fair Labor Standards Act mandates overtime premium pay for most U.S.
workers, but it has proven difficult to assess the policy's impact on the labor
market because the rule applies nationally and has varied little over time. I
use the extent to which firms bunch workers at the overtime threshold of 40
hours in a week to estimate the rule's effect on hours, drawing on data from
individual workers' weekly paychecks. To do so I generalize a popular
identification strategy that exploits bunching at kink points in a
decision-maker's choice set. Making only nonparametric assumptions about
preferences and heterogeneity, I show that the average causal response among
bunchers to the policy switch at the kink is partially identified. The bounds
indicate a relatively small elasticity of demand for weekly hours, suggesting
that the overtime mandate has a discernible but limited impact on hours and
employment.",Treatment Effects in Bunching Designs: The Impact of Mandatory Overtime Pay on Hours,2022-05-20 20:22:57,Leonard Goff,"http://arxiv.org/abs/2205.10310v3, http://arxiv.org/pdf/2205.10310v3",econ.EM
29556,em,"This paper proposes maximum (quasi)likelihood estimation for high dimensional
factor models with regime switching in the loadings. The model parameters are
estimated jointly by the EM (expectation maximization) algorithm, which in the
current context only requires iteratively calculating regime probabilities and
principal components of the weighted sample covariance matrix. When regime
dynamics are taken into account, smoothed regime probabilities are calculated
using a recursive algorithm. Consistency, convergence rates and limit
distributions of the estimated loadings and the estimated factors are
established under weak cross-sectional and temporal dependence as well as
heteroscedasticity. It is worth noting that, due to the high dimension, regime
switching can be identified consistently after the switching point with only
one observation. Simulation results show good performance of the proposed
method. An application to the FRED-MD dataset illustrates the potential of the
proposed method for detection of business cycle turning points.",Estimation and Inference for High Dimensional Factor Model with Regime Switching,2022-05-24 17:57:58,"Giovanni Urga, Fa Wang","http://arxiv.org/abs/2205.12126v2, http://arxiv.org/pdf/2205.12126v2",econ.EM
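As a rough illustration of the EM iteration described above (regime probabilities alternated with principal components of a regime-weighted covariance), the following toy sketch uses a static two-regime, one-factor model; it is not the paper's estimator and omits regime dynamics and the smoothing recursion.

```python
# Toy sketch: alternate between (i) regime posteriors from Gaussian
# quasi-likelihoods and (ii) loadings from PCA of regime-weighted covariances.
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 300, 30, 1
f = rng.normal(size=(T, r))
regime = (rng.uniform(size=T) < 0.4).astype(int)              # latent regime (unknown)
lam = [rng.normal(size=(N, r)), 3.0 * rng.normal(size=(N, r))]
X = np.vstack([f[t] @ lam[regime[t]].T + rng.normal(size=N) for t in range(T)])

w = rng.uniform(0.2, 0.8, size=T)                              # posterior regime-1 probs
p = w.mean()                                                   # P(regime = 1)
for _ in range(50):
    loglik = []
    for wk in (1.0 - w, w):
        S = (X * wk[:, None]).T @ X / wk.sum()                 # regime-weighted covariance
        vals, vecs = np.linalg.eigh(S)
        L = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0))   # PCA loadings
        proj = X @ L @ np.linalg.pinv(L.T @ L) @ L.T           # common component
        sig2 = np.mean((X - proj) ** 2)
        loglik.append(-0.5 * N * np.log(sig2) - 0.5 * np.sum((X - proj) ** 2, axis=1) / sig2)
    ll = np.vstack(loglik)                                     # shape (2, T)
    post = np.array([1.0 - p, p])[:, None] * np.exp(ll - ll.max(axis=0))
    w = post[1] / post.sum(axis=0)
    p = w.mean()
print("estimated regime shares:", round(p, 2), round(1 - p, 2),
      "| true:", round(regime.mean(), 2), round(1 - regime.mean(), 2))
```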
29557,em,"Auction data often contain information on only the most competitive bids as
opposed to all bids. The usual measurement error approaches to unobserved
heterogeneity are inapplicable due to dependence among order statistics. We
bridge this gap by providing a set of positive identification results. First,
we show that symmetric auctions with discrete unobserved heterogeneity are
identifiable using two consecutive order statistics and an instrument. Second,
we extend the results to ascending auctions with unknown competition and
unobserved heterogeneity.",Identification of Auction Models Using Order Statistics,2022-05-25 20:11:05,"Yao Luo, Ruli Xiao","http://arxiv.org/abs/2205.12917v2, http://arxiv.org/pdf/2205.12917v2",econ.EM
29558,em,"This paper proposes a fast two-stage variational Bayesian (VB) algorithm to
estimate unrestricted panel spatial autoregressive models. Using
Dirichlet-Laplace priors, we are able to uncover the spatial relationships
between cross-sectional units without imposing any a priori restrictions. Monte
Carlo experiments show that our approach works well for both long and short
panels. We are also the first in the literature to develop VB methods to
estimate large covariance matrices with unrestricted sparsity patterns, which
are useful for popular large data models such as Bayesian vector
autoregressions. In empirical applications, we examine the spatial
interdependence between euro area sovereign bond ratings and spreads. We find
marked differences between the spillover behaviours of the northern euro area
countries and those of the south.",Fast Two-Stage Variational Bayesian Approach to Estimating Panel Spatial Autoregressive Models with Unrestricted Spatial Weights Matrices,2022-05-30 23:34:44,"Deborah Gefang, Stephen G. Hall, George S. Tavlas","http://arxiv.org/abs/2205.15420v3, http://arxiv.org/pdf/2205.15420v3",econ.EM
29559,em,"Jumps and market microstructure noise are stylized features of high-frequency
financial data. It is well known that they introduce bias in the estimation of
volatility (including integrated and spot volatilities) of assets, and many
methods have been proposed to deal with this problem. When the jumps are
intensive with infinite variation, the efficient estimation of spot volatility
under serially dependent noise is not yet available and is thus needed. For this
purpose, we propose a novel estimator of spot volatility with a hybrid use of
the pre-averaging technique and the empirical characteristic function. Under
mild assumptions, the results of consistency and asymptotic normality of our
estimator are established. Furthermore, we show that our estimator achieves an
almost efficient convergence rate with optimal variance when the jumps are
either less active or active with symmetric structure. Simulation studies
verify our theoretical conclusions. We apply our proposed estimator to
empirical analyses, such as estimating the weekly volatility curve using
second-by-second transaction price data.",Estimating spot volatility under infinite variation jumps with dependent market microstructure noise,2022-05-31 15:26:03,"Qiang Liu, Zhi Liu","http://arxiv.org/abs/2205.15738v2, http://arxiv.org/pdf/2205.15738v2",econ.EM
29560,em,"In this paper, we consider a wide class of time-varying multivariate causal
processes which nests many classic and new examples as special cases. We first
prove the existence of a weakly dependent stationary approximation for our
model which is the foundation to initiate the theoretical development.
Afterwards, we consider the QMLE estimation approach, and provide both
point-wise and simultaneous inferences on the coefficient functions. In
addition, we demonstrate the theoretical findings through both simulated and
real data examples. In particular, we show the empirical relevance of our study
using an application to evaluate the conditional correlations between the stock
markets of China and the U.S. We find that the interdependence between the two
stock markets is increasing over time.",Time-Varying Multivariate Causal Processes,2022-06-01 14:21:01,"Jiti Gao, Bin Peng, Wei Biao Wu, Yayi Yan","http://arxiv.org/abs/2206.00409v1, http://arxiv.org/pdf/2206.00409v1",econ.EM
29561,em,"There is a vast literature on the determinants of subjective wellbeing.
International organisations and statistical offices are now collecting such
survey data at scale. However, standard regression models explain surprisingly
little of the variation in wellbeing, limiting our ability to predict it. In
response, we here assess the potential of Machine Learning (ML) to help us
better understand wellbeing. We analyse wellbeing data on over a million
respondents from Germany, the UK, and the United States. In terms of predictive
power, our ML approaches do perform better than traditional models. Although
the size of the improvement is small in absolute terms, it turns out to be
substantial when compared to that of key variables like health. We moreover
find that drastically expanding the set of explanatory variables doubles the
predictive power of both OLS and the ML approaches on unseen data. The
variables identified as important by our ML algorithms - $i.e.$ material
conditions, health, and meaningful social relations - are similar to those that
have already been identified in the literature. In that sense, our data-driven
ML results validate the findings from conventional approaches.",Human Wellbeing and Machine Learning,2022-06-01 18:35:50,"Ekaterina Oparina, Caspar Kaiser, Niccolò Gentile, Alexandre Tkatchenko, Andrew E. Clark, Jan-Emmanuel De Neve, Conchita D'Ambrosio","http://arxiv.org/abs/2206.00574v1, http://arxiv.org/pdf/2206.00574v1",econ.EM
29562,em,"We consider the problem of inference in shift-share research designs. The
choice between existing approaches that allow for unrestricted spatial
correlation involves tradeoffs, varying in terms of their validity when there
are relatively few or concentrated shocks, and in terms of the assumptions on
the shock assignment process and treatment effects heterogeneity. We propose
alternative randomization inference methods that combine the advantages of
different approaches. These methods are valid in finite samples under
relatively stronger assumptions, while asymptotically valid under weaker
assumptions.",Randomization Inference Tests for Shift-Share Designs,2022-06-02 14:52:03,"Luis Alvarez, Bruno Ferman, Raoni Oliveira","http://arxiv.org/abs/2206.00999v1, http://arxiv.org/pdf/2206.00999v1",econ.EM
29563,em,"As of May 2022, the coronavirus disease 2019 (COVID-19) still has a severe
global impact on people's lives. Previous studies have reported that COVID-19
decreased the electricity demand in early 2020. However, our study found that
the electricity demand increased in summer and winter even when the infection
was widespread. The fact that the event has continued over two years suggests
that it is essential to introduce a method that can estimate the impact of the
event over a long period while accounting for seasonal fluctuations. We employed the
Bayesian structural time-series model to estimate the causal impact of COVID-19
on electricity demand in Japan. The results indicate that behavioral
restrictions due to COVID-19 decreased daily electricity demand (-5.1% on
weekdays, -6.1% on holidays) in April and May 2020, consistent with previous
studies. However, even in 2020, the results show that demand increased in
the hot summer and cold winter (by +14% in the period from
1st August to 15th September 2020, and +7.6% from 16th December 2020 to 15th
January 2021). This study shows that the significant decrease in electricity
demand for the business sector exceeded the increase in demand for the
household sector in April and May 2020; however, the increase in demand for the
households exceeded the decrease in demand for the business in hot summer and
cold winter periods. Our result also implies that it is possible to run out of
electricity when people's behavior changes even if they are less active.",Causal impact of severe events on electricity demand: The case of COVID-19 in Japan,2022-06-05 11:31:01,Yasunobu Wakashiro,"http://arxiv.org/abs/2206.02122v1, http://arxiv.org/pdf/2206.02122v1",econ.EM
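A hedged sketch of the general counterfactual-forecast idea (fit a structural time-series model on pre-event data only, forecast the "no event" path, compare with the observed post-event series), using statsmodels' UnobservedComponents on synthetic daily demand with a weekly cycle for illustration; the paper's actual Bayesian specification and covariates are not reproduced.

```python
# Fit a structural time-series model on pre-event data and compare its
# counterfactual forecast with observed post-event demand (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(2)
n_pre, n_post = 365, 60
t = np.arange(n_pre + n_post)
demand = 100 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, t.size)
demand[n_pre:] -= 5                                          # true post-event drop

pre = pd.Series(demand[:n_pre])
model = UnobservedComponents(pre, level="local linear trend",
                             freq_seasonal=[{"period": 7, "harmonics": 2}])
res = model.fit(disp=False)

counterfactual = res.forecast(steps=n_post)                  # predicted "no event" path
impact = demand[n_pre:] - counterfactual.to_numpy()
print(f"average post-event impact: {impact.mean():.2f} (true: -5.00)")
```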
29564,em,"The Neyman Allocation and its conditional counterpart are used in many papers
on experiment design, which typically assume that researchers have access to
large pilot studies. This may not be realistic. To understand the properties of
the Neyman Allocation with small pilots, we study its behavior in a novel
asymptotic framework for two-wave experiments in which the pilot size is
assumed to be fixed while the main wave sample size grows. Our analysis shows
that the Neyman Allocation can lead to estimates of the ATE with higher
asymptotic variance than with (non-adaptive) balanced randomization. In
particular, this happens when the outcome variable is relatively homoskedastic
with respect to treatment status or when it exhibits high kurtosis. We also
provide a series of empirical examples showing that these situations arise
frequently in practice. Our results suggest that researchers should not use the
Neyman Allocation with small pilots, especially in such instances.",On the Performance of the Neyman Allocation with Small Pilots,2022-06-09 20:38:41,"Yong Cai, Ahnaf Rafi","http://arxiv.org/abs/2206.04643v2, http://arxiv.org/pdf/2206.04643v2",econ.EM
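For reference, the feasible Neyman Allocation computed from a small pilot is simply the ratio of pilot outcome standard deviations; the toy sketch below shows how noisy pilot estimates can push the allocation far from the balanced 1/2, which underlies the variance inflation discussed above.

```python
# Feasible Neyman Allocation from a tiny synthetic pilot.
import numpy as np

rng = np.random.default_rng(3)
pilot_treated = rng.normal(0.0, 1.0, size=10)        # pilot outcomes, treated arm
pilot_control = rng.normal(0.0, 1.0, size=10)        # pilot outcomes, control arm

s1, s0 = pilot_treated.std(ddof=1), pilot_control.std(ddof=1)
neyman_share_treated = s1 / (s1 + s0)                 # main-wave share assigned to treatment
print(f"pilot s.d.s: {s1:.2f}, {s0:.2f} -> treated share {neyman_share_treated:.2f}")
# With equal true variances the optimal share is 0.5; a small pilot can push the
# estimated share well away from 0.5.
```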
29565,em,"Equality of opportunity has emerged as an important ideal of distributive
justice. Empirically, Inequality of Opportunity (IOp) is measured in two steps:
first, an outcome (e.g., income) is predicted given individual circumstances;
and second, an inequality index (e.g., Gini) of the predictions is computed.
Machine Learning (ML) methods are tremendously useful in the first step.
However, they can cause sizable biases in IOp since the bias-variance trade-off
allows the bias to creep into the second step. We propose a simple debiased IOp
estimator robust to such ML biases and provide the first valid inferential
theory for IOp. We demonstrate improved performance in simulations and report
the first unbiased measures of income IOp in Europe. Mother's education and
father's occupation are the circumstances that explain the most. Plug-in
estimators are very sensitive to the ML algorithm, while debiased IOp
estimators are robust. These results are extended to a general U-statistics
setting.",Machine Learning Inference on Inequality of Opportunity,2022-06-10 20:16:36,"Juan Carlos Escanciano, Joël Robert Terschuur","http://arxiv.org/abs/2206.05235v3, http://arxiv.org/pdf/2206.05235v3",econ.EM
29566,em,"Matching has become the mainstream in counterfactual inference, with which
selection bias between sample groups can be largely eliminated. In practice,
however, when estimating the average treatment effect on the treated (ATT) via
matching, whichever method is used, a trade-off between estimation accuracy and
information loss persists. Attempting to completely replace the
matching process, this paper proposes the GAN-ATT estimator that integrates
generative adversarial network (GAN) into counterfactual inference framework.
Through GAN machine learning, the probability density functions (PDFs) of
samples in both treatment group and control group can be approximated. By
differentiating conditional PDFs of the two groups with identical input
condition, the conditional average treatment effect (CATE) can be estimated,
and the ensemble average of corresponding CATEs over all treatment group
samples is the estimate of ATT. Utilizing GAN-based infinite sample
augmentations, problems in the case of insufficient samples or lack of common
support domains can be easily solved. Theoretically, if the GAN learns the PDFs
perfectly, our estimator provides an exact estimate of the ATT.
  To check the performance of the GAN-ATT estimator, three sets of data are
used for ATT estimations: Two toy data sets with 1/2 dimensional covariate
inputs and constant/covariate-dependent treatment effect are tested. The
estimates of GAN-ATT are shown to be close to the ground truth and better than
traditional matching approaches; a real firm-level data set with
high-dimensional inputs is tested, and the applicability to real data sets
is evaluated by comparison with matching approaches. Based on the evidence
from the three tests, we believe that the GAN-ATT estimator has significant
advantages over traditional matching methods in estimating ATT.",A Constructive GAN-based Approach to Exact Estimate Treatment Effect without Matching,2022-06-13 15:54:29,"Boyang You, Kerry Papps","http://arxiv.org/abs/2206.06116v1, http://arxiv.org/pdf/2206.06116v1",econ.EM
29567,em,"In this paper, we provide novel definitions of clustering coefficient for
weighted and directed multilayer networks. We extend in the multilayer
theoretical context the clustering coefficients proposed in the literature for
weighted directed monoplex networks. We quantify how deeply a node is involved
in a cohesive structure, focusing on a single node, on a single layer or on the
entire system. The coefficients convey several characteristics inherent to the
complex topology of the multilayer network. We test their effectiveness
by applying them to a particularly complex structure such as the international
trade network. The trade data integrate different aspects and they can be
described by a directed and weighted multilayer network, where each layer
represents import and export relationships between countries for a given
sector. The proposed coefficients find successful application in describing the
interrelations of the trade network, allowing us to disentangle the effects of
countries and sectors and jointly consider the interactions between them.",Clustering coefficients as measures of the complex interactions in a directed weighted multilayer network,2022-06-13 19:48:03,"Paolo Bartesaghi, Gian Paolo Clemente, Rosanna Grassi","http://dx.doi.org/10.1016/j.physa.2022.128413, http://arxiv.org/abs/2206.06309v2, http://arxiv.org/pdf/2206.06309v2",econ.EM
29568,em,"In this article, we present a method to forecast the Portuguese gross
domestic product (GDP) in each current quarter (nowcasting). It combines bridge
equations of the real GDP on readily available monthly data like the Economic
Sentiment Indicator (ESI), industrial production index, cement sales or exports
and imports, with forecasts for the jagged missing values computed with the
well-known Hodrick and Prescott (HP) filter. As shown, this simple multivariate
approach can perform as well as a Targeted Diffusion Index (TDI) model and
slightly better than the univariate Theta method in terms of out-of-sample mean
errors.",Nowcasting the Portuguese GDP with Monthly Data,2022-06-14 16:15:57,"João B. Assunção, Pedro Afonso Fernandes","http://arxiv.org/abs/2206.06823v1, http://arxiv.org/pdf/2206.06823v1",econ.EM
29569,em,"A factor model with a break in its factor loadings is observationally
equivalent to a model without changes in the loadings but a change in the
variance of its factors. This effectively transforms a structural change
problem of high dimension into a problem of low dimension. This paper considers
the likelihood ratio (LR) test for a variance change in the estimated factors.
The LR test implicitly explores a special feature of the estimated factors: the
pre-break and post-break variances can be a singular matrix under the
alternative hypothesis, making the LR test diverging faster and thus more
powerful than Wald-type tests. The better power property of the LR test is also
confirmed by simulations. We also consider mean changes and multiple breaks. We
apply the procedure to the factor modelling and structural change of the US
employment using monthly industry-level data.",Likelihood ratio test for structural changes in factor models,2022-06-16 13:08:39,"Jushan Bai, Jiangtao Duan, Xu Han","http://arxiv.org/abs/2206.08052v2, http://arxiv.org/pdf/2206.08052v2",econ.EM
29570,em,"We propose a new variational approximation of the joint posterior
distribution of the log-volatility in the context of large Bayesian VARs. In
contrast to existing approaches that are based on local approximations, the new
proposal provides a global approximation that takes into account the entire
support of the joint distribution. In a Monte Carlo study we show that the new
global approximation is over an order of magnitude more accurate than existing
alternatives. We illustrate the proposed methodology with an application of a
96-variable VAR with stochastic volatility to measure global bank network
connectedness.",Fast and Accurate Variational Inference for Large Bayesian VARs with Stochastic Volatility,2022-06-16 23:42:27,"Joshua C. C. Chan, Xuewen Yu","http://arxiv.org/abs/2206.08438v1, http://arxiv.org/pdf/2206.08438v1",econ.EM
29571,em,"We propose a semiparametric method to estimate the average treatment effect
under the assumption of unconfoundedness given observational data. Our
estimation method alleviates misspecification issues of the propensity score
function by estimating the single-index link function involved through Hermite
polynomials. Our approach is computationally tractable and allows for
moderately large dimension covariates. We provide the large sample properties
of the estimator and show its validity. Also, the average treatment effect
estimator achieves the parametric rate and asymptotic normality. Our extensive
Monte Carlo study shows that the proposed estimator is valid in finite samples.
We also provide an empirical analysis on the effect of maternal smoking on
babies' birth weight and the effect of a job training program on future earnings.",Semiparametric Single-Index Estimation for Average Treatment Effects,2022-06-17 04:44:53,"Difang Huang, Jiti Gao, Tatsushi Oka","http://arxiv.org/abs/2206.08503v2, http://arxiv.org/pdf/2206.08503v2",econ.EM
29572,em,"To help systematically lower anthropogenic Greenhouse gas (GHG) emissions,
accurate and precise GHG emission prediction models have become a key focus of
the climate research. The appeal is that the predictive models will inform
policymakers, and hopefully, in turn, they will bring about systematic changes.
Since the transportation sector is constantly among the top GHG emission
contributors, especially in populated urban areas, substantial effort has been
going into building more accurate and informative GHG prediction models to help
create more sustainable urban environments. In this work, we seek to establish
a predictive framework of GHG emissions at the urban road segment or link level
of transportation networks. The key theme of the framework centers around model
interpretability and actionability for high-level decision-makers using
econometric Discrete Choice Modelling (DCM). We illustrate that DCM is capable
of predicting link-level GHG emission levels on urban road networks in a
parsimonious and effective manner. Our results show up to 85.4% prediction
accuracy in the DCM models' performance. We also argue that since the goal of
most GHG emission prediction models focuses on involving high-level
decision-makers to make changes and curb emissions, the DCM-based GHG emission
prediction framework is the most suitable framework.",Interpretable and Actionable Vehicular Greenhouse Gas Emission Prediction at Road link-level,2022-06-18 03:49:51,"S. Roderick Zhang, Bilal Farooq","http://arxiv.org/abs/2206.09073v1, http://arxiv.org/pdf/2206.09073v1",econ.EM
29573,em,"When data are clustered, common practice has become to do OLS and use an
estimator of the covariance matrix of the OLS estimator that comes close to
unbiasedness. In this paper we derive an estimator that is unbiased when the
random-effects model holds. We do the same for two more general structures. We
study the usefulness of these estimators against others by simulation, the size
of the $t$-test being the criterion. Our findings suggest that the choice of
estimator hardly matters when the regressor has the same distribution over the
clusters. But when the regressor is a cluster-specific treatment variable, the
choice does matter and the unbiased estimator we propose for the random-effects
model shows excellent performance, even when the clusters are highly
unbalanced.",Unbiased estimation of the OLS covariance matrix when the errors are clustered,2022-06-20 11:51:35,"Tom Boot, Gianmaria Niccodemi, Tom Wansbeek","http://arxiv.org/abs/2206.09644v1, http://arxiv.org/pdf/2206.09644v1",econ.EM
29574,em,"This paper studies the statistical decision problem of learning an
individualized intervention policy when data are obtained from observational
studies or randomized experiments with imperfect compliance. Leveraging a
continuous instrumental variable, we provide a social welfare criterion that
accounts for endogenous treatment selection. To this end, we incorporate the
marginal treatment effects (MTE) when identifying treatment effect parameters
and consider encouragement rules that affect social welfare through treatment
take-up when designing policies. We apply the representation of the social
welfare criterion of encouragement rules via the MTE to the Empirical Welfare
Maximization (EWM) method and derive convergence rates of the worst-case regret
(welfare loss). Using data from the Indonesian Family Life Survey, we apply the
EWM encouragement rule to advise on how to encourage upper secondary schooling.
Our framework offers interpretability of the optimal policy by explaining why a
certain subpopulation is targeted.",Policy Learning under Endogeneity Using Instrumental Variables,2022-06-20 19:37:30,Yan Liu,"http://arxiv.org/abs/2206.09883v2, http://arxiv.org/pdf/2206.09883v2",econ.EM
29575,em,"Interactive fixed effects are a popular means to model unobserved
heterogeneity in panel data. Models with interactive fixed effects are well
studied in the low-dimensional case where the number of parameters to be
estimated is small. However, they are largely unexplored in the
high-dimensional case where the number of parameters is large, potentially much
larger than the sample size itself. In this paper, we develop new econometric
methods for the estimation of high-dimensional panel data models with
interactive fixed effects. Our estimator is based on similar ideas as the very
popular common correlated effects (CCE) estimator which is frequently used in
the low-dimensional case. We thus call our estimator a high-dimensional CCE
estimator. We derive theory for the estimator both in the large-T-case, where
the time series length T tends to infinity, and in the small-T-case, where T is
a fixed natural number. The theoretical analysis of the paper is complemented
by a simulation study which evaluates the finite sample performance of the
estimator.",CCE Estimation of High-Dimensional Panel Data Models with Interactive Fixed Effects,2022-06-24 11:30:39,"Michael Vogt, Christopher Walsh, Oliver Linton","http://arxiv.org/abs/2206.12152v1, http://arxiv.org/pdf/2206.12152v1",econ.EM
29576,em,"The widespread co-existence of misspecification and weak identification in
asset pricing has led to an overstated performance of risk factors. Because the
conventional Fama and MacBeth (1973) methodology is jeopardized by
misspecification and weak identification, we infer risk premia by using a
double robust Lagrange multiplier test that remains reliable in the presence of
these two empirically relevant issues. Moreover, we show how the
identification, and the resulting appropriate interpretation, of the risk
premia is governed by the relative magnitudes of the misspecification
J-statistic and the identification IS-statistic. We revisit several prominent
empirical applications and all specifications with one to six factors from the
factor zoo of Feng, Giglio, and Xiu (2020) to emphasize the widespread
occurrence of misspecification and weak identification.",Misspecification and Weak Identification in Asset Pricing,2022-06-27 22:26:32,"Frank Kleibergen, Zhaoguo Zhan","http://arxiv.org/abs/2206.13600v1, http://arxiv.org/pdf/2206.13600v1",econ.EM
29577,em,"Ferrous metal futures have become unique commodity futures with Chinese
characteristics. Due to their late listing, they have received less attention
from scholars. Our research focuses on volatility spillover effects, i.e., the
transmission of price volatility across financial instruments. We use
DCC-GARCH, BEKK-GARCH, and DY(2012) index methods to conduct empirical tests on
the volatility spillover effects of the Chinese ferrous metal futures market
and other parts of the Chinese commodity futures market, as well as industries
related to the steel industry chain in stock markets. It can be seen that there
is a close volatility spillover relationship between ferrous metal futures and
nonferrous metal futures. Energy futures and chemical futures have a
significant transmission effect on the fluctuations of ferrous metals. In
addition, ferrous metal futures have a significant spillover effect on the
stock index of the steel industry, real estate industry, building materials
industry, machinery equipment industry, and household appliance industry.
Studying the volatility spillover effects of the ferrous metal futures market
reveals how this market operates and provides guidance for investors seeking to
hedge their risks. The results show that the ferrous metal futures market plays
an essential role as a ""barometer"" for the Chinese commodity
futures market and the stock market.",Unique futures in China: studys on volatility spillover effects of ferrous metal futures,2022-06-30 09:09:06,"Tingting Cao, Weiqing Sun, Cuiping Sun, Lin Hao","http://arxiv.org/abs/2206.15039v1, http://arxiv.org/pdf/2206.15039v1",econ.EM
29578,em,"Ad platforms require reliable measurement of advertising returns: what
increase in performance (such as clicks or conversions) can an advertiser
expect in return for additional budget on the platform? Even from the
perspective of the platform, accurately measuring advertising returns is hard.
Selection and omitted variable biases make estimates from observational methods
unreliable, and straightforward experimentation is often costly or infeasible.
We introduce Asymmetric Budget Split, a novel methodology for valid measurement
of ad returns from the perspective of the platform. Asymmetric budget split
creates small asymmetries in ad budget allocation across comparable partitions
of the platform's userbase. By observing performance of the same ad at
different budget levels while holding all other factors constant, the platform
can obtain a valid measure of ad returns. The methodology is unobtrusive and
cost-effective in that it does not require holdout groups or sacrifices in ad
or marketplace performance. We discuss a successful deployment of asymmetric
budget split to LinkedIn's Jobs Marketplace, an ad marketplace where it is used
to measure returns from promotion budgets in terms of incremental job
applicants. We outline operational considerations for practitioners and discuss
further use cases such as budget-aware performance forecasting.",Valid and Unobtrusive Measurement of Returns to Advertising through Asymmetric Budget Split,2022-07-01 08:13:05,"Johannes Hermle, Giorgio Martini","http://arxiv.org/abs/2207.00206v1, http://arxiv.org/pdf/2207.00206v1",econ.EM
29579,em,"We develop a Stata command $\texttt{csa2sls}$ that implements the complete
subset averaging two-stage least squares (CSA2SLS) estimator in Lee and Shin
(2021). The CSA2SLS estimator is an alternative to the two-stage least squares
estimator that remedies the bias issue caused by many correlated instruments.
We conduct Monte Carlo simulations and confirm that the CSA2SLS estimator
reduces both the mean squared error and the estimation bias substantially when
instruments are correlated. We illustrate the usage of $\texttt{csa2sls}$ in
Stata by an empirical application.",csa2sls: A complete subset approach for many instruments using Stata,2022-07-04 18:54:15,"Seojeong Lee, Siha Lee, Julius Owusu, Youngki Shin","http://arxiv.org/abs/2207.01533v2, http://arxiv.org/pdf/2207.01533v2",econ.EM
29580,em,"This paper studies identification and inference of the welfare gain that
results from switching from one policy (such as the status quo policy) to
another policy. The welfare gain is not point identified in general when data
are obtained from an observational study or a randomized experiment with
imperfect compliance. I characterize the sharp identified region of the welfare
gain and obtain bounds under various assumptions on the unobservables with and
without instrumental variables. Estimation and inference of the lower and upper
bounds are conducted using orthogonalized moment conditions to deal with the
presence of infinite-dimensional nuisance parameters. I illustrate the analysis
by considering hypothetical policies of assigning individuals to job training
programs using experimental data from the National Job Training Partnership Act
Study. Monte Carlo simulations are conducted to assess the finite sample
performance of the estimators.",Identification and Inference for Welfare Gains without Unconfoundedness,2022-07-09 21:06:08,Undral Byambadalai,"http://arxiv.org/abs/2207.04314v1, http://arxiv.org/pdf/2207.04314v1",econ.EM
29581,em,"A recent literature has shown that when adoption of a treatment is staggered
and average treatment effects vary across groups and over time,
difference-in-differences regression does not identify an easily interpretable
measure of the typical effect of the treatment. In this paper, I extend this
literature in two ways. First, I provide some simple underlying intuition for
why difference-in-differences regression does not identify a
group$\times$period average treatment effect. Second, I propose an alternative
two-stage estimation framework, motivated by this intuition. In this framework,
group and period effects are identified in a first stage from the sample of
untreated observations, and average treatment effects are identified in a
second stage by comparing treated and untreated outcomes, after removing these
group and period effects. The two-stage approach is robust to treatment-effect
heterogeneity under staggered adoption, and can be used to identify a host of
different average treatment effect measures. It is also simple, intuitive, and
easy to implement. I establish the theoretical properties of the two-stage
approach and demonstrate its effectiveness and applicability using Monte-Carlo
evidence and an example from the literature.",Two-stage differences in differences,2022-07-13 06:44:58,John Gardner,"http://arxiv.org/abs/2207.05943v1, http://arxiv.org/pdf/2207.05943v1",econ.EM
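A minimal sketch of the two-stage logic described above, on synthetic staggered-adoption data: group and period effects are estimated from untreated observations only, then the adjusted outcome is regressed on treatment status. Variable names and the data-generating process are hypothetical.

```python
# Two-stage approach on synthetic staggered-adoption panel data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
G, T = 40, 10
df = pd.DataFrame({"g": np.repeat(np.arange(G), T), "t": np.tile(np.arange(T), G)})
adopt = rng.choice([5, 7, 99], size=G)                      # 99 = never treated
df["D"] = (df["t"].to_numpy() >= adopt[df["g"]]).astype(int)
df["y"] = (rng.normal(size=G)[df["g"]] + 0.3 * df["t"]      # group and period effects
           + 2.0 * df["D"] + rng.normal(0, 0.5, len(df)))   # true effect = 2

# Stage 1: group and period effects from untreated observations only.
stage1 = smf.ols("y ~ C(g) + C(t)", data=df[df["D"] == 0]).fit()
df["y_adj"] = df["y"] - stage1.predict(df)

# Stage 2: treatment effect from the adjusted outcome.
stage2 = smf.ols("y_adj ~ D", data=df).fit()
print(f"two-stage estimate: {stage2.params['D']:.2f} (true: 2.0)")
```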
29582,em,"Difference-in-differences is a common method for estimating treatment
effects, and the parallel trends condition is its main identifying assumption:
the trend in mean untreated outcomes is independent of the observed treatment
status. In observational settings, treatment is often a dynamic choice made or
influenced by rational actors, such as policy-makers, firms, or individual
agents. This paper relates parallel trends to economic models of dynamic
choice. We clarify the implications of parallel trends on agent behavior and
study when dynamic selection motives lead to violations of parallel trends.
Finally, we consider identification under alternative assumptions that
accommodate features of dynamic choice.",Parallel Trends and Dynamic Choices,2022-07-14 03:02:41,"Philip Marx, Elie Tamer, Xun Tang","http://arxiv.org/abs/2207.06564v3, http://arxiv.org/pdf/2207.06564v3",econ.EM
29583,em,"Forecast combination -- the aggregation of individual forecasts from multiple
experts or models -- is a proven approach to economic forecasting. To date,
research on economic forecasting has concentrated on local combination methods,
which handle separate but related forecasting tasks in isolation. Yet, it has
been known for over two decades in the machine learning community that global
methods, which exploit task-relatedness, can improve on local methods that
ignore it. Motivated by the possibility for improvement, this paper introduces
a framework for globally combining forecasts while being flexible to the level
of task-relatedness. Through our framework, we develop global versions of
several existing forecast combinations. To evaluate the efficacy of these new
global forecast combinations, we conduct extensive comparisons using synthetic
and real data. Our real data comparisons, which involve forecasts of core
economic indicators in the Eurozone, provide empirical evidence that the
accuracy of global combinations of economic forecasts can surpass local
combinations.",Flexible global forecast combinations,2022-07-15 10:23:06,"Ryan Thompson, Yilin Qian, Andrey L. Vasnev","http://arxiv.org/abs/2207.07318v2, http://arxiv.org/pdf/2207.07318v2",econ.EM
29584,em,"Two of Peter Schmidt's many contributions to econometrics have been to
introduce a simultaneous logit model for bivariate binary outcomes and to study
estimation of dynamic linear fixed effects panel data models using short
panels. In this paper, we study a dynamic panel data version of the bivariate
model introduced in Schmidt and Strauss (1975) that allows for lagged dependent
variables and fixed effects as in Ahn and Schmidt (1995). We combine a
conditional likelihood approach with a method of moments approach to obtain an
estimation strategy for the resulting model. We apply this estimation strategy
to a simple model for the intra-household relationship in employment. Our main
conclusion is that the within-household dependence in employment differs
significantly by the ethnicity composition of the couple even after one allows
for unobserved household specific heterogeneity.",Simultaneity in Binary Outcome Models with an Application to Employment for Couples,2022-07-15 11:39:29,"Bo E. Honoré, Luojia Hu, Ekaterini Kyriazidou, Martin Weidner","http://arxiv.org/abs/2207.07343v2, http://arxiv.org/pdf/2207.07343v2",econ.EM
29585,em,"For the kernel estimator of the quantile density function (the derivative of
the quantile function), I show how to perform the boundary bias correction,
establish the rate of strong uniform consistency of the bias-corrected
estimator, and construct the confidence bands that are asymptotically exact
uniformly over the entire domain $[0,1]$. The proposed procedures rely on the
pivotality of the studentized bias-corrected estimator and known
anti-concentration properties of the Gaussian approximation for its supremum.",Bias correction and uniform inference for the quantile density function,2022-07-19 04:09:02,Grigory Franguridi,"http://arxiv.org/abs/2207.09004v1, http://arxiv.org/pdf/2207.09004v1",econ.EM
29586,em,"This paper considers a linear regression model with an endogenous regressor
which arises from a nonlinear transformation of a latent variable. It is shown
that the corresponding coefficient can be consistently estimated without
external instruments by adding a rank-based transformation of the regressor to
the model and performing standard OLS estimation. In contrast to other
approaches, our nonparametric control function approach does not rely on a
conformably specified copula. Furthermore, the approach allows for the presence
of additional exogenous regressors which may be (linearly) correlated with the
endogenous regressor(s). Consistency and asymptotic normality of the estimator
are proved and the estimator is compared with copula based approaches by means
of Monte Carlo simulations. An empirical application on wage data of the US
current population survey demonstrates the usefulness of our method.",Asymptotic Properties of Endogeneity Corrections Using Nonlinear Transformations,2022-07-19 15:59:26,"Jörg Breitung, Alexander Mayer, Dominik Wied","http://arxiv.org/abs/2207.09246v3, http://arxiv.org/pdf/2207.09246v3",econ.EM
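To illustrate the mechanics, the sketch below adds a rank-based (normal-score) transformation of the endogenous regressor as a control and runs OLS on synthetic data; it conveys the flavor of the approach rather than the paper's exact construction or asymptotic theory.

```python
# Rank-based control function: OLS of y on x and the normal score of x's ranks.
import numpy as np
import statsmodels.api as sm
from scipy.stats import rankdata, norm

rng = np.random.default_rng(8)
n = 2000
latent = rng.normal(size=n)
u = 0.7 * latent + rng.normal(0, 0.5, n)             # error correlated with latent variable
x = np.exp(latent)                                    # endogenous regressor (nonlinear transform)
y = 1.0 + 2.0 * x + u

ranks = (rankdata(x) - 0.5) / n                       # empirical CDF of x
control = norm.ppf(ranks)                             # rank-based transformation
X = sm.add_constant(np.column_stack([x, control]))
fit = sm.OLS(y, X).fit()
naive = sm.OLS(y, sm.add_constant(x)).fit()
print(f"naive OLS slope: {naive.params[1]:.2f}")
print(f"rank-augmented OLS slope: {fit.params[1]:.2f} (true: 2.0)")
```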
29587,em,"We show by simulation that the test for an unknown threshold in models with
endogenous regressors - proposed in Caner and Hansen (2004) - can exhibit
severe size distortions both in small and in moderately large samples,
pertinent to empirical applications. We propose three new tests that rectify
these size distortions. The first test is based on GMM estimators. The other
two are based on unconventional 2SLS estimators, that use additional
information about the linearity (or lack of linearity) of the first stage. Just
like the test in Caner and Hansen (2004), our tests are non-pivotal, and we
prove their bootstrap validity. The empirical application revisits the question
in Ramey and Zubairy (2018) whether government spending multipliers are larger
in recessions, but using tests for an unknown threshold. Consistent with Ramey
and Zubairy (2018), we do not find strong evidence that these multipliers are
larger in recessions.",Testing for a Threshold in Models with Endogenous Regressors,2022-07-20 20:59:47,"Mario P. Rothfelder, Otilia Boldea","http://arxiv.org/abs/2207.10076v1, http://arxiv.org/pdf/2207.10076v1",econ.EM
29588,em,"We consider a linear combination of jackknife Anderson-Rubin (AR), jackknife
Lagrangian multiplier (LM), and orthogonalized jackknife LM tests for inference
in IV regressions with many weak instruments and heteroskedasticity. Following
I.Andrews (2016), we choose the weights in the linear combination based on a
decision-theoretic rule that is adaptive to the identification strength. Under
both weak and strong identifications, the proposed test controls asymptotic
size and is admissible among certain class of tests. Under strong
identification, our linear combination test has optimal power against local
alternatives among the class of invariant or unbiased tests which are
constructed based on jackknife AR and LM tests. Simulations and an empirical
application to Angrist and Krueger's (1991) dataset confirm the good power
properties of our test.",A Conditional Linear Combination Test with Many Weak Instruments,2022-07-22 18:19:00,"Dennis Lim, Wenjie Wang, Yichong Zhang","http://arxiv.org/abs/2207.11137v3, http://arxiv.org/pdf/2207.11137v3",econ.EM
29589,em,"This paper proposes methods to investigate whether the bubble patterns
observed in individual series are common to various series. We detect the
non-linear dynamics using the recent mixed causal and noncausal models. Both a
likelihood ratio test and information criteria are investigated, the former
having better performances in our Monte Carlo simulations. Implementing our
approach on three commodity prices we do not find evidence of commonalities
although some series look very similar.",Detecting common bubbles in multivariate mixed causal-noncausal models,2022-07-23 20:27:35,"Gianluca Cubadda, Alain Hecq, Elisa Voisin","http://arxiv.org/abs/2207.11557v1, http://arxiv.org/pdf/2207.11557v1",econ.EM
29590,em,"This paper studies identification and estimation of the average treatment
effect on the treated (ATT) in difference-in-difference (DID) designs when the
variable that classifies individuals into treatment and control groups
(treatment status, D) is endogenously misclassified. We show that
misclassification in D hampers consistent estimation of ATT because 1) it
restricts us from identifying the truly treated from those misclassified as
being treated and 2) differential misclassification in counterfactual trends
may result in parallel trends being violated with D even when they hold with
the true but unobserved D*. We propose a solution to correct for endogenous
one-sided misclassification in the context of a parametric DID regression which
allows for considerable heterogeneity in treatment effects and establish its
asymptotic properties in panel and repeated cross section settings.
Furthermore, we illustrate the method by using it to estimate the insurance
impact of a large-scale in-kind food transfer program in India which is known
to suffer from large targeting errors.",Difference-in-Differences with a Misclassified Treatment,2022-08-04 05:42:49,"Akanksha Negi, Digvijay Singh Negi","http://arxiv.org/abs/2208.02412v1, http://arxiv.org/pdf/2208.02412v1",econ.EM
29592,em,"Medical journals have adhered to a reporting practice that seriously limits
the usefulness of published trial findings. Medical decision makers commonly
observe many patient covariates and seek to use this information to personalize
treatment choices. Yet standard summaries of trial findings only partition
subjects into broad subgroups, typically into binary categories. Given this
reporting practice, we study the problem of inference on long mean treatment
outcomes E[y(t)|x], where t is a treatment, y(t) is a treatment outcome, and
the covariate vector x has length K, each component being a binary variable.
The available data are estimates of {E[y(t)|xk = 0], E[y(t)|xk = 1], P(xk)}, k
= 1, . . . , K reported in journal articles. We show that reported trial
findings partially identify {E[y(t)|x], P(x)}. Illustrative computations
demonstrate that the summaries of trial findings in journal articles may imply
only wide bounds on long mean outcomes. One can realistically tighten
inferences if one can combine reported trial findings with credible assumptions
having identifying power, such as bounded-variation assumptions.",Partial Identification of Personalized Treatment Response with Trial-reported Analyses of Binary Subgroups,2022-08-05 23:39:27,"Sheyu Li, Valentyn Litvin, Charles F. Manski","http://arxiv.org/abs/2208.03381v2, http://arxiv.org/pdf/2208.03381v2",econ.EM
29593,em,"We study the estimation of heterogeneous effects of group-level policies,
using quantile regression with interactive fixed effects. Our approach can
identify distributional policy effects, particularly effects on inequality,
under a type of difference-in-differences assumption. We provide asymptotic
properties of our estimators and an inferential method. We apply the model to
evaluate the effect of the minimum wage policy on earnings between 1967 and
1980 in the United States. Our results suggest that the minimum wage policy has
a significant negative impact on the between-inequality but little effect on
the within-inequality.",Estimation of Heterogeneous Treatment Effects Using Quantile Regression with Interactive Fixed Effects,2022-08-07 06:56:12,"Ruofan Xu, Jiti Gao, Tatsushi Oka, Yoon-Jae Whang","http://arxiv.org/abs/2208.03632v2, http://arxiv.org/pdf/2208.03632v2",econ.EM
29594,em,"We identify and estimate treatment effects when potential outcomes are weakly
separable with a binary endogenous treatment. Vytlacil and Yildiz (2007)
proposed an identification strategy that exploits the mean of observed
outcomes, but their approach requires a monotonicity condition. In comparison,
we exploit full information in the entire outcome distribution, instead of just
its mean. As a result, our method does not require monotonicity and is also
applicable to general settings with multiple indices. We provide examples where
our approach can identify treatment effect parameters of interest whereas
existing methods would fail. These include models where potential outcomes
depend on multiple unobserved disturbance terms, such as a Roy model, a
multinomial choice model, as well as a model with endogenous random
coefficients. We establish consistency and asymptotic normality of our
estimators.",Endogeneity in Weakly Separable Models without Monotonicity,2022-08-10 00:34:09,"Songnian Chen, Shakeeb Khan, Xun Tang","http://arxiv.org/abs/2208.05047v1, http://arxiv.org/pdf/2208.05047v1",econ.EM
29595,em,"Purpose: In the context of a COVID pandemic in 2020-21, this paper attempts
to capture the interconnectedness and volatility transmission dynamics. The
nature of change in volatility spillover effects and time-varying conditional
correlation among the G7 countries and India is investigated. Methodology: To
assess the volatility spillover effects, the bivariate BEKK and t-DCC(1,1)
GARCH(1,1) models have been used. Our research shows how the dynamics of
volatility spillover between India and the G7 countries shift before and during
COVID-19. Findings: The findings reveal that the extent of volatility spillover
has altered during COVID compared to the pre-COVID environment. During this
pandemic, a sharp increase in conditional correlation indicates an increase in
systematic risk between countries. Originality: The study contributes to a
better understanding of the dynamics of volatility spillover between G7
countries and India. Asset managers and foreign corporations can use the
changing spillover dynamics to improve investment decisions and implement
effective hedging measures to protect their interests. Furthermore, this
research will assist financial regulators in assessing market risk in the
future owing to crises such as COVID-19.",Understanding Volatility Spillover Relationship Among G7 Nations And India During Covid-19,2022-08-19 07:51:06,"Avik Das, Dr. Devanjali Nandi Das","http://arxiv.org/abs/2208.09148v1, http://arxiv.org/pdf/2208.09148v1",econ.EM
29596,em,"Beta-sorted portfolios -- portfolios comprised of assets with similar
covariation to selected risk factors -- are a popular tool in empirical finance
to analyze models of (conditional) expected returns. Despite their widespread
use, little is known of their statistical properties in contrast to comparable
procedures such as two-pass regressions. We formally investigate the properties
of beta-sorted portfolio returns by casting the procedure as a two-step
nonparametric estimator with a nonparametric first step and a beta-adaptive
portfolio construction. Our framework rationalizes the well-known estimation
algorithm with precise economic and statistical assumptions on the general data
generating process and characterize its key features. We study beta-sorted
portfolios for both a single cross-section as well as for aggregation over time
(e.g., the grand mean), offering conditions that ensure consistency and
asymptotic normality along with new uniform inference procedures allowing for
uncertainty quantification and testing of various relevant hypotheses in
financial applications. We also highlight some limitations of current empirical
practices and discuss what inferences can and cannot be drawn from returns to
beta-sorted portfolios for either a single cross-section or across the whole
sample. Finally, we illustrate the functionality of our new procedures in an
empirical application.",Beta-Sorted Portfolios,2022-08-23 16:45:20,"Matias D. Cattaneo, Richard K. Crump, Weining Wang","http://arxiv.org/abs/2208.10974v2, http://arxiv.org/pdf/2208.10974v2",econ.EM
29597,em,"Macro shocks are often composites, yet overlooked in the impulse response
analysis. When an instrumental variable (IV) is used to identify a composite
shock, it violates the common IV exclusion restriction. We show that the Local
Projection-IV estimand is represented as a weighted average of component-wise
impulse responses but with possibly negative weights, which occur when the IV
and shock components have opposite correlations. We further develop alternative
(set-) identification strategies for the LP-IV based on sign restrictions or
additional granular information. Our applications confirm the composite nature
of monetary policy shocks and reveal a non-defense spending multiplier
exceeding one.",What Impulse Response Do Instrumental Variables Identify?,2022-08-25 05:18:56,"Bonsoo Koo, Seojeong Lee, Myung Hwan Seo","http://arxiv.org/abs/2208.11828v2, http://arxiv.org/pdf/2208.11828v2",econ.EM
29598,em,"Several large volatility matrix inference procedures have been developed,
based on the latent factor model. They often assume that there are a few
common factors that can account for volatility dynamics. However, several
studies have demonstrated the presence of local factors. In particular, when
analyzing the global stock market, we often observe that nation-specific
factors explain their own country's volatility dynamics. To account for this,
we propose the Double Principal Orthogonal complEment Thresholding
(Double-POET) method, based on multi-level factor models, and also establish
its asymptotic properties. Furthermore, we demonstrate the drawback of using
the regular principal orthogonal complement thresholding (POET) when the local
factor structure exists. We also describe the blessing of dimensionality using
Double-POET for local covariance matrix estimation. Finally, we investigate the
performance of the Double-POET estimator in an out-of-sample portfolio
allocation study using international stocks from 20 financial markets.",Large Volatility Matrix Analysis Using Global and National Factor Models,2022-08-25 22:46:56,"Sung Hoon Choi, Donggyu Kim","http://arxiv.org/abs/2208.12323v2, http://arxiv.org/pdf/2208.12323v2",econ.EM
29599,em,"In this paper, we develop a restricted eigenvalue condition for unit-root
non-stationary data and derive its validity under the assumption of independent
Gaussian innovations that may be contemporaneously correlated. The method of
proof relies on matrix concentration inequalities and offers sufficient
flexibility to enable extensions of our results to alternative time series
settings. As an application of this result, we show the consistency of the
lasso estimator on ultra high-dimensional cointegrated data in which the number
of integrated regressors may grow exponentially in relation to the sample size.",A restricted eigenvalue condition for unit-root non-stationary data,2022-08-27 14:43:26,Etienne Wijler,"http://arxiv.org/abs/2208.12990v1, http://arxiv.org/pdf/2208.12990v1",econ.EM
29600,em,"Social employment, which is mostly provided by firms of different types,
determines the prosperity and stability of a country. Over time, fluctuations
in firm employment reflect the process of creating or destroying jobs, so it
is instructive to investigate firm employment (size) dynamics. Drawing on
firm-level panel data extracted from the Chinese Industrial Enterprises
Database 1998-2013, this paper proposes a Markov-chain-based descriptive
approach that clearly demonstrates firm size transition dynamics between
different size categories. With this method, any firm size transition path
over a short time period can be intuitively demonstrated. Furthermore, by
utilizing the properties of Markov transfer matrices, the notions of
transition trend and transition entropy are introduced and estimated. As a
result, the tendency of firms to move between the small, medium, and large
size categories can be revealed precisely, and the uncertainty of size change
can be quantified. Overall, the evidence in this paper suggests that small and
medium manufacturing firms in China had greater job creation potential than
large firms over this time period.",A Descriptive Method of Firm Size Transition Dynamics Using Markov Chain,2022-08-27 16:30:33,"Boyang You, Kerry Papps","http://arxiv.org/abs/2208.13012v1, http://arxiv.org/pdf/2208.13012v1",econ.EM
29601,em,"We revisit the identification argument of Kirkeboen et al. (2016) who showed
how one may combine instruments for multiple unordered treatments with
information about individuals' ranking of these treatments to achieve
identification while allowing for both observed and unobserved heterogeneity in
treatment effects. We show that the key assumptions underlying their
identification argument have testable implications. We also provide a new
characterization of the bias that may arise if these assumptions are violated.
Taken together, these results allow researchers not only to test the underlying
assumptions, but also to argue whether the bias from violation of these
assumptions is likely to be economically meaningful. Guided and motivated by
these results, we estimate and compare the earnings payoffs to post-secondary
fields of study in Norway and Denmark. In each country, we apply the
identification argument of Kirkeboen et al. (2016) to data on individuals'
ranking of fields of study and field-specific instruments from discontinuities
in the admission systems. We empirically examine whether and why the payoffs to
fields of study differ across the two countries. We find strong cross-country
correlation in the payoffs to fields of study, especially after removing fields
with violations of the assumptions underlying the identification argument.",Instrumental variables with unordered treatments: Theory and evidence from returns to fields of study,2022-09-01 15:47:16,"Eskil Heinesen, Christian Hvid, Lars Kirkebøen, Edwin Leuven, Magne Mogstad","http://arxiv.org/abs/2209.00417v3, http://arxiv.org/pdf/2209.00417v3",econ.EM
29602,em,"In this paper we develop a novel method of combining many forecasts based on
a machine learning algorithm called Graphical LASSO (GL). We visualize forecast
errors from different forecasters as a network of interacting entities and
generalize network inference in the presence of common factor structure and
structural breaks. First, we note that forecasters often use common information
and hence make common mistakes, which makes the forecast errors exhibit common
factor structures. We use the Factor Graphical LASSO (FGL, Lee and Seregina
(2023)) to separate common forecast errors from the idiosyncratic errors and
exploit sparsity of the precision matrix of the latter. Second, since the
network of experts changes over time as a response to unstable environments
such as recessions, it is unreasonable to assume constant forecast combination
weights. Hence, we propose Regime-Dependent Factor Graphical LASSO (RD-FGL)
that allows the factor loadings and the idiosyncratic precision matrix to be
regime-dependent. We develop its scalable implementation using the Alternating
Direction Method of Multipliers (ADMM) to estimate regime-dependent forecast
combination weights. The empirical application to forecasting macroeconomic
series using the data of the European Central Bank's Survey of Professional
Forecasters (ECB SPF) demonstrates the superior performance of a combined forecast
using FGL and RD-FGL.",Combining Forecasts under Structural Breaks Using Graphical LASSO,2022-09-05 00:22:32,"Tae-Hwy Lee, Ekaterina Seregina","http://arxiv.org/abs/2209.01697v2, http://arxiv.org/pdf/2209.01697v2",econ.EM
29603,em,"We consider hypothesis testing in instrumental variable regression models
with few included exogenous covariates but many instruments -- possibly more
than the number of observations. We show that a ridge-regularised version of
the jackknifed Anderson Rubin (1949, henceforth AR) test controls asymptotic
size in the presence of heteroskedasticity, and when the instruments may be
arbitrarily weak. Asymptotic size control is established under weaker
assumptions than those imposed for recently proposed jackknifed AR tests in the
literature. Furthermore, ridge-regularisation extends the scope of jackknifed
AR tests to situations in which there are more instruments than observations.
Monte-Carlo simulations indicate that our method has favourable finite-sample
size and power properties compared to recently proposed alternative approaches
in the literature. An empirical application on the elasticity of substitution
between immigrants and natives in the US illustrates the usefulness of the
proposed method for practitioners.",A Ridge-Regularised Jackknifed Anderson-Rubin Test,2022-09-07 19:08:51,"Max-Sebastian Dovì, Anders Bredahl Kock, Sophocles Mavroeidis","http://arxiv.org/abs/2209.03259v2, http://arxiv.org/pdf/2209.03259v2",econ.EM
29605,em,"Economists' interest in growth theory has a very long history (Harrod,
1939; Domar, 1946; Solow, 1956; Swan 1956; Mankiw, Romer, and Weil, 1992).
Recently, starting from the neoclassical growth model, Ertur and Koch (2007)
developed the spatially augmented Solow-Swan growth model with exogenous
spatial weights matrices ($W$). While the exogeneity assumption on $W$ may hold
for geographical/physical distance, it may fail when economic/social distances
play a role. Using Penn World Table version 7.1, which covers the years
1960-2010, I conduct the robust Rao's score test (Bera, Dogan, and Taspinar,
2018) to determine whether $W$ is endogenous and use maximum likelihood
estimation (Qu and Lee, 2015). The key finding is that the significant and
positive effects of physical capital externalities and spatial externalities
(technological interdependence) in Ertur and Koch (2007) are no longer found
under the exogenous $W$ models, but remain under the endogenous $W$ models. I
also propose an empirical strategy for choosing which economic distance to use
when the data have recently been subject to heavy shocks, as with the worldwide
financial crises during 1996-2010.",Evidence and Strategy on Economic Distance in Spatially Augmented Solow-Swan Growth Model,2022-09-12 22:35:07,Jieun Lee,"http://dx.doi.org/10.13140/RG.2.2.20252.77443, http://arxiv.org/abs/2209.05562v1, http://arxiv.org/pdf/2209.05562v1",econ.EM
29606,em,"I propose a Robust Rao's Score (RS) test statistic to determine the endogeneity of
spatial weights matrices in a spatial dynamic panel data (SDPD) model (Qu, Lee,
and Yu, 2017). I first introduce the bias-corrected score function, since the
score function is not centered around zero due to the two-way fixed effects. I
further adjust the score functions to rectify over-rejection of the null
hypothesis in the presence of local misspecification in contemporaneous
dependence over space, dependence over time, or spatial time dependence. I then
derive the explicit forms of the test statistic. A Monte Carlo simulation
supports the analytics and shows good finite sample properties. Finally, an
empirical illustration is provided using data from Penn World Table version
6.1.",Testing Endogeneity of Spatial Weights Matrices in Spatial Dynamic Panel Data Models,2022-09-12 22:35:56,Jieun Lee,"http://dx.doi.org/10.13140/RG.2.2.33923.58409, http://arxiv.org/abs/2209.05563v1, http://arxiv.org/pdf/2209.05563v1",econ.EM
29607,em,"This paper proposes a density-weighted average derivative estimator based on
two noisy measures of a latent regressor. Both measures have classical errors
with possibly asymmetric distributions. We show that the proposed estimator
achieves the root-n rate of convergence, and derive its asymptotic normal
distribution for statistical inference. Simulation studies demonstrate
excellent small-sample performance supporting the root-n asymptotic normality.
Based on the proposed estimator, we construct a formal test on the sub-unity of
the marginal propensity to consume out of permanent income (MPCP) under a
nonparametric consumption model and a permanent-transitory model of income
dynamics with nonparametric distribution. Applying the test to four recent
waves of U.S. Panel Study of Income Dynamics (PSID), we reject the null
hypothesis of the unit MPCP in favor of a sub-unit MPCP, supporting the
buffer-stock model of saving.",Estimation of Average Derivatives of Latent Regressors: With an Application to Inference on Buffer-Stock Saving,2022-09-13 14:57:37,"Hao Dong, Yuya Sasaki","http://arxiv.org/abs/2209.05914v1, http://arxiv.org/pdf/2209.05914v1",econ.EM
29608,em,"Researchers frequently test and improve model fit by holding a sample
constant and varying the model. We propose methods to test and improve sample
fit by holding a model constant and varying the sample. Much as the bootstrap
is a well-known method to re-sample data and estimate the uncertainty of the
fit of parameters in a model, we develop Sample Fit Reliability (SFR) as a set
of computational methods to re-sample data and estimate the reliability of the
fit of observations in a sample. SFR uses Scoring to assess the reliability of
each observation in a sample, Annealing to check the sensitivity of results to
removing unreliable data, and Fitting to re-weight observations for more robust
analysis. We provide simulation evidence to demonstrate the advantages of using
SFR, and we replicate three empirical studies with treatment effects to
illustrate how SFR reveals new insights about each study.",Sample Fit Reliability,2022-09-14 16:31:23,"Gabriel Okasa, Kenneth A. Younge","http://arxiv.org/abs/2209.06631v1, http://arxiv.org/pdf/2209.06631v1",econ.EM
29609,em,"In this paper we study treatment assignment rules in the presence of social
interaction. We construct an analytical framework under the anonymous
interaction assumption, where the decision problem becomes choosing a treatment
fraction. We propose a multinomial empirical success (MES) rule that includes
the empirical success rule of Manski (2004) as a special case. We investigate
the non-asymptotic bounds of the expected utility based on the MES rule.
Finally, we prove that the MES rule achieves asymptotic optimality under the
minimax regret criterion.",Statistical Treatment Rules under Social Interaction,2022-09-19 18:07:25,"Seungjin Han, Julius Owusu, Youngki Shin","http://arxiv.org/abs/2209.09077v2, http://arxiv.org/pdf/2209.09077v2",econ.EM
29610,em,"We develop new econometric methods for the comparison of nonparametric time
trends. In many applications, practitioners are interested in whether the
observed time series all have the same time trend. Moreover, they would often
like to know which trends are different and in which time intervals they
differ. We design a multiscale test to formally approach these questions.
Specifically, we develop a test which allows us to make rigorous confidence
statements about which time trends are different and where (that is, in which
time intervals) they differ. Based on our multiscale test, we further develop a
clustering algorithm which allows us to cluster the observed time series into
groups with the same trend. We derive asymptotic theory for our test and
clustering methods. The theory is complemented by a simulation study and two
applications to GDP growth data and house pricing data.",Multiscale Comparison of Nonparametric Trend Curves,2022-09-22 11:05:16,"Marina Khismatullina, Michael Vogt","http://arxiv.org/abs/2209.10841v1, http://arxiv.org/pdf/2209.10841v1",econ.EM
29611,em,"The multinomial choice model based on utility maximization has been widely
used to select treatments. In this paper, we establish sufficient conditions
for the identification of the marginal treatment effects with multivalued
treatments. Our result reveals treatment effects conditional on the willingness
to participate in each treatment relative to a specific baseline treatment. Further, our results
can identify other parameters such as the marginal distribution of potential
outcomes.",Identification of the Marginal Treatment Effect with Multivalued Treatments,2022-09-23 10:06:43,Toshiki Tsuda,"http://arxiv.org/abs/2209.11444v3, http://arxiv.org/pdf/2209.11444v3",econ.EM
29612,em,"In light of widespread evidence of parameter instability in macroeconomic
models, many time-varying parameter (TVP) models have been proposed. This paper
proposes a nonparametric TVP-VAR model using Bayesian additive regression trees
(BART) that models the TVPs as an unknown function of effect modifiers. The
novelty of this model arises from the fact that the law of motion driving the
parameters is treated nonparametrically. This leads to great flexibility in the
nature and extent of parameter change, both in the conditional mean and in the
conditional variance. Parsimony is achieved by adopting nonparametric
factor structures and using shrinkage priors. In an application to US
macroeconomic data, we illustrate the use of our model in tracking both the
evolving nature of the Phillips curve and how the effects of business cycle
shocks on inflation measures vary nonlinearly with changes in the effect
modifiers.",Bayesian Modeling of TVP-VARs Using Regression Trees,2022-09-24 12:40:41,"Niko Hauzenberger, Florian Huber, Gary Koop, James Mitchell","http://arxiv.org/abs/2209.11970v3, http://arxiv.org/pdf/2209.11970v3",econ.EM
29613,em,"We study the problem of estimating the average causal effect of treating
every member of a population, as opposed to none, using an experiment that
treats only some. This is the policy-relevant estimand when deciding whether to
scale up an intervention based on the results of an RCT, for example, but
differs from the usual average treatment effect in the presence of spillovers.
We consider both estimation and experimental design given a bound (parametrized
by $\eta > 0$) on the rate at which spillovers decay with the ``distance''
between units, defined in a generalized way to encompass spatial and
quasi-spatial settings, e.g. where the economically relevant concept of
distance is a gravity equation. Over all estimators linear in the outcomes and
all cluster-randomized designs the optimal geometric rate of convergence is
$n^{-\frac{1}{2+\frac{1}{\eta}}}$, and this rate can be achieved using a
generalized ``Scaling Clusters'' design that we provide. We then introduce the
additional assumption, implicit in the OLS estimators used in recent applied
studies, that potential outcomes are linear in population treatment
assignments. These estimators are inconsistent for our estimand, but a refined
OLS estimator is consistent and rate optimal, and performs better than IPW
estimators when clusters must be small. Its finite-sample performance can be
improved by incorporating prior information about the structure of spillovers.
As a robust alternative to the linear approach we also provide a method to
select estimator-design pairs that minimize a notion of worst-case risk when
the data generating process is unknown. Finally, we provide asymptotically
valid inference methods.",Linear estimation of average global effects,2022-09-28 18:54:20,"Stefan Faridani, Paul Niehaus","http://arxiv.org/abs/2209.14181v4, http://arxiv.org/pdf/2209.14181v4",econ.EM
29614,em,"I establish primitive conditions for unconfoundedness in a coherent model
that features heterogeneous treatment effects, spillovers,
selection-on-observables, and network formation. I identify average partial
effects under minimal exchangeability conditions. If social interactions are
also anonymous, I derive a three-dimensional network propensity score,
characterize its support conditions, relate it to recent work on network
pseudo-metrics, and study extensions. I propose a two-step semiparametric
estimator for a random coefficients model which is consistent and
asymptotically normal as the number and size of the networks grows. I apply my
estimator to a political participation intervention in Uganda and a microfinance
application in India.","The Network Propensity Score: Spillovers, Homophily, and Selection into Treatment",2022-09-28 22:31:26,Alejandro Sanchez-Becerra,"http://arxiv.org/abs/2209.14391v1, http://arxiv.org/pdf/2209.14391v1",econ.EM
29615,em,"New satellite sensors will soon make it possible to estimate field-level crop
yields, showing great potential for agricultural index insurance. This paper
identifies an important threat to better insurance from these new technologies:
data with many fields and few years can yield downward-biased estimates of
basis risk, a fundamental metric in index insurance. To demonstrate this bias,
we use state-of-the-art satellite-based data on agricultural yields in the US
and in Kenya to estimate and simulate basis risk. We find a substantial
downward bias leading to a systematic overestimation of insurance quality.
  In this paper, we argue that big data in crop insurance can lead to a new
situation where the number of variables $N$ largely exceeds the number of
observations $T$. In such a situation where $T\ll N$, conventional asymptotics
break down, as evidenced by the large bias we find in simulations. We show how the
high-dimension, low-sample-size (HDLSS) asymptotics, together with the spiked
covariance model, provide a more relevant framework for the $T\ll N$ case
encountered in index insurance. More precisely, we derive the asymptotic
distribution of the relative share of the first eigenvalue of the covariance
matrix, a measure of systematic risk in index insurance. Our formula accurately
approximates the empirical bias simulated from the satellite data, and provides
a useful tool for practitioners to quantify bias in insurance quality.",With big data come big problems: pitfalls in measuring basis risk for crop index insurance,2022-09-29 11:08:04,"Matthieu Stigler, Apratim Dey, Andrew Hobbs, David Lobell","http://arxiv.org/abs/2209.14611v1, http://arxiv.org/pdf/2209.14611v1",econ.EM
29616,em,"We apply traditional machine learning and deep learning methods to
global tweets from 2017-2022 to build a high-frequency measure of the public's
sentiment on inflation and analyze its correlation with other online data
sources such as Google Trends and market-based inflation indexes. We use
manually labeled trigrams to test the prediction performance of several machine
learning models (logistic regression, random forest, etc.) and choose the BERT
model for the final demonstration. We then sum the daily tweet sentiment scores
obtained from the BERT model to construct the predicted inflation sentiment
index, and we further analyze the regional and pre-/post-COVID patterns of these
inflation indexes. Lastly, taking other empirical inflation-related data as
references, we show that the Twitter-based inflation sentiment analysis method
has a strong capability to predict inflation. The results suggest that Twitter
combined with deep learning methods can be a novel and timely way to utilize
existing abundant data sources on inflation expectations and provide daily
indicators of consumers' perceptions of inflation.",Sentiment Analysis on Inflation after Covid-19,2022-09-25 23:37:54,"Xinyu Li, Zihan Tang","http://dx.doi.org/10.11114/aef.v10i1.5821, http://arxiv.org/abs/2209.14737v2, http://arxiv.org/pdf/2209.14737v2",econ.EM
29617,em,"In this article, we show the benefits derived from the Chile-USA (in force
Jan 2004) and Chile-China (in force Oct 2006) FTAs for GDP, consumers, and
producers, and conclude that Chile's welfare improved after it subscribed to
these agreements. From that point, we extrapolate to show the direct and indirect
benefits of CTPP accession.",Economic effects of Chile FTAs and an eventual CTPP accession,2022-09-28 20:34:13,"Vargas Sepulveda, Mauricio ""Pacha""","http://arxiv.org/abs/2209.14748v1, http://arxiv.org/pdf/2209.14748v1",econ.EM
29618,em,"We consider a regulator willing to drive individual choices towards
increasing social welfare by providing incentives to a large population of
individuals.
  For that purpose, we formalize and solve the problem of finding an optimal
personalized-incentive policy: optimal in the sense that it maximizes social
welfare under an incentive budget constraint, personalized in the sense that
the incentives proposed depend on the alternatives available to each
individual, as well as her preferences.
We propose a polynomial-time approximation algorithm that computes a policy
within a few seconds, and we analytically prove that it is boundedly close to the
optimum.
  We then extend the problem to efficiently calculate the Maximum Social
Welfare Curve, which gives the maximum social welfare achievable for a range of
incentive budgets (not just one value).
  This curve is a valuable practical tool for the regulator to determine the
right incentive budget to invest.
  Finally, we simulate a large-scale application to mode choice in a French
department (about 200 thousand individuals) and illustrate the effectiveness
of the proposed personalized-incentive policy in reducing CO2 emissions.",Large-Scale Allocation of Personalized Incentives,2022-10-02 11:45:14,"Lucas Javaudin, Andrea Araldo, André de Palma","http://arxiv.org/abs/2210.00463v1, http://arxiv.org/pdf/2210.00463v1",econ.EM
29619,em,"To what extent is the volume of urban bicycle traffic affected by the
provision of bicycle infrastructure? In this study, we exploit a large dataset
of observed bicycle trajectories in combination with a fine-grained
representation of the Copenhagen bicycle-relevant network. We apply a novel
model for bicyclists' choice of route from origin to destination that takes the
complete network into account. This enables us to determine bicyclists'
preferences for a range of infrastructure and land-use types. We use the
estimated preferences to compute a subjective cost of bicycle travel, which we
correlate with the number of bicycle trips across a large number of
origin-destination pairs. Simulations suggest that the extensive Copenhagen
bicycle lane network has caused the number of bicycle trips and the bicycle
kilometers traveled to increase by 60% and 90%, respectively, compared with a
counterfactual without the bicycle lane network. This translates into an annual
benefit of EUR 0.4M per km of bicycle lane owing to changes in subjective
travel cost, health, and accidents. Our results thus strongly support the
provision of bicycle infrastructure.",Bikeability and the induced demand for cycling,2022-10-05 21:42:21,"Mogens Fosgerau, Miroslawa Lukawska, Mads Paulsen, Thomas Kjær Rasmussen","http://dx.doi.org/10.1073/pnas.2220515120, http://arxiv.org/abs/2210.02504v2, http://arxiv.org/pdf/2210.02504v2",econ.EM
29620,em,"This paper develops the likelihood ratio-based test of the null hypothesis of
an M0-component model against the alternative of an (M0 + 1)-component model in the
normal mixture panel regression by extending the Expectation-Maximization (EM)
test of Chen and Li (2009a) and Kasahara and Shimotsu (2015) to the case of
panel data. We show that, unlike the cross-sectional normal mixture, the
first-order derivative of the density function for the variance parameter in
the panel normal mixture is linearly independent of its second-order
derivatives for the mean parameter. On the other hand, like the cross-sectional
normal mixture, the likelihood ratio test statistic of the panel normal mixture
is unbounded. We consider the Penalized Maximum Likelihood Estimator to deal
with the unboundedness, where we obtain the data-driven penalty function via
computational experiments. We derive the asymptotic distribution of the
Penalized Likelihood Ratio Test (PLRT) and EM test statistics by expanding the
log-likelihood function up to the fifth order in the reparameterized parameters.
The simulation experiment indicates good finite sample performance of the
proposed EM test. We apply our EM test to estimate the number of production
technology types for the finite mixture Cobb-Douglas production function model
studied by Kasahara et al. (2022), using panel data on Japanese and
Chilean manufacturing firms. We find evidence of heterogeneity in the
elasticities of output for intermediate goods, suggesting that the production
function is heterogeneous across firms beyond their Hicks-neutral productivity
terms.",Testing the Number of Components in Finite Mixture Normal Regression Model with Panel Data,2022-10-06 14:25:35,"Yu Hao, Hiroyuki Kasahara","http://arxiv.org/abs/2210.02824v2, http://arxiv.org/pdf/2210.02824v2",econ.EM
29621,em,"We establish nonparametric identification of auction models with continuous
and nonseparable unobserved heterogeneity using three consecutive order
statistics of bids. We then propose sieve maximum likelihood estimators for the
joint distribution of unobserved heterogeneity and the private value, as well
as their conditional and marginal distributions. Lastly, we apply our
methodology to a novel dataset from judicial auctions in China. Our estimates
suggest substantial gains from accounting for unobserved heterogeneity when
setting reserve prices. We propose a simple scheme that achieves nearly optimal
revenue by using the appraisal value as the reserve price.",Order Statistics Approaches to Unobserved Heterogeneity in Auctions,2022-10-07 16:31:14,"Yao Luo, Peijun Sang, Ruli Xiao","http://arxiv.org/abs/2210.03547v1, http://arxiv.org/pdf/2210.03547v1",econ.EM
29622,em,"When proxies (external instruments) used to identify target structural shocks
are weak, inference in proxy-SVARs (SVAR-IVs) is nonstandard and the
construction of asymptotically valid confidence sets for the impulse responses
of interest requires weak-instrument robust methods. In the presence of
multiple target shocks, test inversion techniques require extra restrictions on
the proxy-SVAR parameters, other than those implied by the proxies, that may be
difficult to interpret and test. We show that frequentist asymptotic inference
in these situations can be conducted through Minimum Distance estimation and
standard asymptotic methods if the proxy-SVAR can be identified by using
`strong' instruments for the non-target shocks; i.e. the shocks which are not
of primary interest in the analysis. The suggested identification strategy
hinges on a novel pre-test for the null of instrument relevance based on
bootstrap resampling which is not subject to pre-testing issues, in the sense
that the validity of post-test asymptotic inferences is not affected by the
outcomes of the test. The test is robust to conditional heteroskedasticity
and/or zero-censored proxies, is computationally straightforward and applicable
regardless of the number of shocks being instrumented. Some illustrative
examples show the empirical usefulness of the suggested identification and
testing strategy.",An identification and testing strategy for proxy-SVARs with weak proxies,2022-10-10 12:45:35,"Giovanni Angelini, Giuseppe Cavaliere, Luca Fanelli","http://arxiv.org/abs/2210.04523v4, http://arxiv.org/pdf/2210.04523v4",econ.EM
29623,em,"I study the problem of a decision maker choosing a policy which allocates
treatment to a heterogeneous population on the basis of experimental data that
includes only a subset of possible treatment values. The effects of new
treatments are partially identified by shape restrictions on treatment
response. Policies are compared according to the minimax regret criterion, and
I show that the empirical analog of the population decision problem has a
tractable linear- and integer-programming formulation. I prove that the maximum
regret of the estimated policy converges to the lowest possible maximum regret
at a rate which is the maximum of N^-1/2 and the rate at which conditional
average treatment effects are estimated in the experimental data. I apply my
results to design targeted subsidies for electrical grid connections in rural
Kenya, and estimate that 97% of the population should be given a treatment not
implemented in the experiment.",Policy Learning with New Treatments,2022-10-10 17:00:51,Samuel Higbee,"http://arxiv.org/abs/2210.04703v2, http://arxiv.org/pdf/2210.04703v2",econ.EM
29624,em,"This study proposes a reversible jump Markov chain Monte Carlo method for
estimating parameters of lognormal distribution mixtures for income. Using
simulated data examples, we examined the proposed algorithm's performance and
the accuracy of posterior distributions of the Gini coefficients. Results
suggest that the parameters are estimated accurately and that the posterior
distributions are close to the true distributions even when a different data
generating process is accounted for. Moreover, promising results for the Gini
coefficients encouraged us to apply our method to real data from Japan. The
empirical examples indicate two subgroups in Japan (2020) and the Gini
coefficients' integrity.",Bayesian analysis of mixtures of lognormal distribution with an unknown number of components from grouped data,2022-10-11 06:10:29,Kazuhiko Kakamu,"http://arxiv.org/abs/2210.05115v3, http://arxiv.org/pdf/2210.05115v3",econ.EM
29625,em,"By fully accounting for the distinct tariff regimes levied on imported meat,
we estimate substitution elasticities of Japan's two-stage import aggregation
functions for beef, chicken and pork. While the regression analysis crucially
depends on the price that consumers face, the post-tariff price of imported
meat depends not only on ad valorem duties but also on tariff rate quotas and
gate price system regimes. The effective tariff rate is consequently evaluated
by utilizing monthly transaction data. To address potential endogeneity
problems, we apply exchange rates that we believe to be independent of the
demand shocks for imported meat. The panel nature of the data allows us to
retrieve the first-stage aggregates via time dummy variables, free of demand
shocks, to be used as part of the explanatory variable and as an instrument in
the second-stage regression.",On estimating Armington elasticities for Japan's meat imports,2022-10-06 18:03:23,"Satoshi Nakano, Kazuhiko Nishimura","http://dx.doi.org/10.1111/1477-9552.12539, http://arxiv.org/abs/2210.05358v2, http://arxiv.org/pdf/2210.05358v2",econ.EM
29626,em,"We develop a novel filtering and estimation procedure for parametric option
pricing models driven by general affine jump-diffusions. Our procedure is based
on the comparison between an option-implied, model-free representation of the
conditional log-characteristic function and the model-implied conditional
log-characteristic function, which is functionally affine in the model's state
vector. We formally derive an associated linear state space representation and
establish the asymptotic properties of the corresponding measurement errors.
The state space representation allows us to use a suitably modified Kalman
filtering technique to learn about the latent state vector and a quasi-maximum
likelihood estimator of the model parameters, which brings important
computational advantages. We analyze the finite-sample behavior of our
procedure in Monte Carlo simulations. The applicability of our procedure is
illustrated in two case studies that analyze S&P 500 option prices and the
impact of exogenous state variables capturing Covid-19 reproduction and
economic policy uncertainty.",Estimating Option Pricing Models Using a Characteristic Function-Based Linear State Space Representation,2022-10-12 17:02:59,"H. Peter Boswijk, Roger J. A. Laeven, Evgenii Vladimirov","http://arxiv.org/abs/2210.06217v1, http://arxiv.org/pdf/2210.06217v1",econ.EM
29627,em,"Given independent samples from two univariate distributions, one-sided
Wilcoxon-Mann-Whitney statistics may be used to conduct rank-based tests of
stochastic dominance. We broaden the scope of applicability of such tests by
showing that the bootstrap may be used to conduct valid inference in a matched
pairs sampling framework permitting dependence between the two samples.
Further, we show that a modified bootstrap incorporating an implicit estimate
of a contact set may be used to improve power. Numerical simulations indicate
that our test using the modified bootstrap effectively controls the null
rejection rates and can deliver more or less power than that of the Donald-Hsu
test. In the course of establishing our results we obtain a weak approximation
to the empirical ordinal dominance curve permitting its population density to
diverge to infinity at zero or one at arbitrary rates.",Modified Wilcoxon-Mann-Whitney tests of stochastic dominance,2022-10-17 12:41:09,"Brendan K. Beare, Jackson D. Clarke","http://arxiv.org/abs/2210.08892v1, http://arxiv.org/pdf/2210.08892v1",econ.EM
29628,em,"We investigate the returns to adolescent friendships on earnings in
adulthood. Using data from the National Longitudinal Study of Adolescent to
Adult Health, we document that individuals make investments to accumulate
friends in addition to educational investments. Because both education and
friendships are jointly determined in adolescence, OLS estimates of their
returns are biased. To estimate the causal returns to friendships, we implement
a novel procedure that assumes the returns to schooling range from 5 to 15% (as
the literature has documented), and instrument for friendships using similarity
in age among peers to obtain bounds on the returns to friendships. We find that
having one more friend in adolescence increases earnings between 7 and 14%,
which is substantially larger than the OLS estimates: measurement error and
omitted variables lead to significant downward bias.",Party On: The Labor Market Returns to Social Networks in Adolescence,2022-10-17 23:43:21,"Adriana Lleras-Muney, Matthew Miller, Shuyang Sheng, Veronica Sovero","http://arxiv.org/abs/2210.09426v4, http://arxiv.org/pdf/2210.09426v4",econ.EM
29629,em,"We study a novel large dimensional approximate factor model with regime
changes in the loadings driven by a latent first order Markov process. By
exploiting the equivalent linear representation of the model, we first recover
the latent factors by means of Principal Component Analysis. We then cast the
model in state-space form, and we estimate loadings and transition
probabilities through an EM algorithm based on a modified version of the
Baum-Lindgren-Hamilton-Kim filter and smoother that makes use of the factors
previously estimated. Our approach is appealing as it provides closed form
expressions for all estimators. More importantly, it does not require knowledge
of the true number of factors. We derive the theoretical properties of the
proposed estimation procedure, and we show their good finite sample performance
through a comprehensive set of Monte Carlo experiments. The empirical
usefulness of our approach is illustrated through an application to a large
portfolio of stocks.",Modelling Large Dimensional Datasets with Markov Switching Factor Models,2022-10-18 16:17:07,"Matteo Barigozzi, Daniele Massacci","http://arxiv.org/abs/2210.09828v3, http://arxiv.org/pdf/2210.09828v3",econ.EM
29630,em,"This paper studies the properties of linear regression on centrality measures
when network data is sparse -- that is, when there are many more agents than
links per agent -- and when they are measured with error. We make three
contributions in this setting: (1) We show that OLS estimators can become
inconsistent under sparsity and characterize the threshold at which this
occurs, with and without measurement error. This threshold depends on the
centrality measure used. Specifically, regression on eigenvector centrality is less robust
to sparsity than regression on degree or diffusion centrality. (2) We develop distributional theory
for OLS estimators under measurement error and sparsity, finding that OLS
estimators are subject to asymptotic bias even when they are consistent.
Moreover, bias can be large relative to their variances, so that bias
correction is necessary for inference. (3) We propose novel bias correction and
inference methods for OLS with sparse noisy networks. Simulation evidence
suggests that our theory and methods perform well, particularly in settings
where the usual OLS estimators and heteroskedasticity-consistent/robust t-tests
are deficient. Finally, we demonstrate the utility of our results in an
application inspired by De Weerdt and Dercon (2006), in which we consider
consumption smoothing and social insurance in Nyakatoke, Tanzania.",Linear Regression with Centrality Measures,2022-10-18 20:47:43,Yong Cai,"http://arxiv.org/abs/2210.10024v1, http://arxiv.org/pdf/2210.10024v1",econ.EM
29631,em,"In this paper, we use the results in Andrews and Cheng (2012), extended to
allow for parameters to be near or at the boundary of the parameter space, to
derive the asymptotic distributions of the two test statistics that are used in
the two-step (testing) procedure proposed by Pedersen and Rahbek (2019). The
latter aims at testing the null hypothesis that a GARCH-X type model, with
exogenous covariates (X), reduces to a standard GARCH type model, while
allowing the ""GARCH parameter"" to be unidentified. We then provide a
characterization result for the asymptotic size of any test for testing this
null hypothesis before numerically establishing a lower bound on the asymptotic
size of the two-step procedure at the 5% nominal level. This lower bound
exceeds the nominal level, revealing that the two-step procedure does not
control asymptotic size. In a simulation study, we show that this finding is
relevant for finite samples, in that the two-step procedure can suffer from
overrejection in finite samples. We also propose a new test that, by
construction, controls asymptotic size and is found to be more powerful than
the two-step procedure when the ""ARCH parameter"" is ""very small"" (in which case
the two-step procedure underrejects).",Allowing for weak identification when testing GARCH-X type models,2022-10-20 19:46:20,Philipp Ketz,"http://arxiv.org/abs/2210.11398v1, http://arxiv.org/pdf/2210.11398v1",econ.EM
29632,em,"Many experiments elicit subjects' prior and posterior beliefs about a random
variable to assess how information affects one's own actions. However, beliefs
are multi-dimensional objects, and experimenters often only elicit a single
response from subjects. In this paper, we discuss how the incentives offered by
experimenters map subjects' true belief distributions to what profit-maximizing
subjects report in the elicitation task. In particular, we show how slightly
different incentives may induce subjects to report the mean, mode, or median of
their belief distribution. If beliefs are not symmetric and unimodal, then
using an elicitation scheme that is mismatched with the research question may
affect both the magnitude and the sign of identified effects, or may even make
identification impossible. As an example, we revisit Cantoni et al.'s (2019)
study of whether political protests are strategic complements or substitutes.
We show that they elicit modal beliefs, while modal and mean beliefs may be
updated in opposite directions following their experiment. Hence, the sign of
their effects may change, allowing an alternative interpretation of their
results.",Choosing The Best Incentives for Belief Elicitation with an Application to Political Protests,2022-10-22 23:52:10,"Nathan Canen, Anujit Chakraborty","http://arxiv.org/abs/2210.12549v1, http://arxiv.org/pdf/2210.12549v1",econ.EM
29633,em,"The fixed-event forecasting setup is common in economic policy. It involves a
sequence of forecasts of the same (`fixed') predictand, so that the difficulty
of the forecasting problem decreases over time. Fixed-event point forecasts are
typically published without a quantitative measure of uncertainty. To construct
such a measure, we consider forecast postprocessing techniques tailored to the
fixed-event case. We develop regression methods that impose constraints
motivated by the problem at hand, and use these methods to construct prediction
intervals for gross domestic product (GDP) growth in Germany and the US.",Prediction intervals for economic fixed-event forecasts,2022-10-24 22:35:24,"Fabian Krüger, Hendrik Plett","http://arxiv.org/abs/2210.13562v2, http://arxiv.org/pdf/2210.13562v2",econ.EM
29634,em,"In this work we introduce a unit averaging procedure to efficiently recover
unit-specific parameters in a heterogeneous panel model. The procedure consists
in estimating the parameter of a given unit using a weighted average of all the
unit-specific parameter estimators in the panel. The weights of the average are
determined by minimizing an MSE criterion. We analyze the properties of the
minimum MSE unit averaging estimator in a local heterogeneity framework
inspired by the literature on frequentist model averaging. The analysis of the
estimator covers both the case in which the cross-sectional dimension of the
panel is fixed and the case in which it is large. In both cases, we obtain the local asymptotic
distribution of the minimum MSE unit averaging estimators and of the associated
weights. A GDP nowcasting application for a panel of European countries
showcases the benefits of the procedure.",Unit Averaging for Heterogeneous Panels,2022-10-25 20:50:30,"Christian Brownlees, Vladislav Morozov","http://arxiv.org/abs/2210.14205v2, http://arxiv.org/pdf/2210.14205v2",econ.EM
29640,em,"We exploit the heterogeneous impact of the Roe v. Wade ruling by the US
Supreme Court, which ruled most abortion restrictions unconstitutional. Our
identifying assumption is that states which had not liberalized their abortion
laws prior to Roe would experience a negative birth shock of greater proportion
than states which had undergone pre-Roe reforms. We estimate the
difference-in-differences in births and use estimated births as an exogenous
treatment variable to predict patents per capita. Our results show that one
standard deviation increase in cohort starting population increases per capita
patents by 0.24 standard deviation. These results suggest that at the margins,
increasing fertility can increase patent production. Insofar as patent
production is a sufficient proxy for technological growth, increasing births
has a positive impact on technological growth. This paper and its results do
not pertain to the issue of abortion itself.",Population and Technological Growth: Evidence from Roe v. Wade,2022-11-01 15:08:19,"John T. H. Wong, Matthias Hei Man, Alex Li Cheuk Hung","http://arxiv.org/abs/2211.00410v1, http://arxiv.org/pdf/2211.00410v1",econ.EM
29635,em,"We propose a new estimator for heterogeneous treatment effects in a partially
linear model (PLM) with many exogenous covariates and a possibly endogenous
treatment variable. The PLM has a parametric part that includes the treatment
and the interactions between the treatment and exogenous characteristics, and a
nonparametric part that contains those characteristics and many other
covariates. The new estimator is a combination of a Robinson transformation to
partial out the nonparametric part of the model, the Smooth Minimum Distance
(SMD) approach to exploit all the information of the conditional mean
independence restriction, and a Neyman-Orthogonalized first-order condition
(FOC). With the SMD method, our estimator using only one valid binary
instrument identifies both parameters. With the sparsity assumption, using
regularized machine learning methods (i.e., the Lasso method) allows us to
choose a relatively small number of polynomials of covariates. The
Neyman-Orthogonalized FOC reduces the effect of the bias associated with the
regularization method on estimates of the parameters of interest. Our new
estimator allows for many covariates and is less biased, consistent, and
$\sqrt{n}$-asymptotically normal under standard regularity conditions. Our
simulations show that our estimator behaves well with different sets of
instruments, but the GMM type estimators do not. We estimate the heterogeneous
treatment effects of Medicaid on individual outcome variables from the Oregon
Health Insurance Experiment. We find that using our new method with only one valid
instrument produces more significant and more reliable results for
heterogeneous treatment effects of health insurance programs on economic
outcomes than using GMM type estimators.",Estimation of Heterogeneous Treatment Effects Using a Conditional Moment Based Approach,2022-10-28 04:48:26,Xiaolin Sun,"http://arxiv.org/abs/2210.15829v2, http://arxiv.org/pdf/2210.15829v2",econ.EM
29636,em,"Acquiring information is expensive. Experimenters need to carefully choose
how many units of each treatment to sample and when to stop sampling. The aim
of this paper is to develop techniques for incorporating the cost of
information into experimental design. In particular, we study sequential
experiments where sampling is costly and a decision-maker aims to determine the
best treatment for full scale implementation by (1) adaptively allocating units
to two possible treatments, and (2) stopping the experiment when the expected
welfare (inclusive of sampling costs) from implementing the chosen treatment is
maximized. Working under the diffusion limit, we describe the optimal policies
under the minimax regret criterion. Under small cost asymptotics, the same
policies are also optimal under parametric and non-parametric distributions of
outcomes. The minimax optimal sampling rule is just the Neyman allocation; it
is independent of sampling costs and does not adapt to previous outcomes. The
decision-maker stops sampling when the average difference between the treatment
outcomes, multiplied by the number of observations collected until that point,
exceeds a specific threshold. We also suggest methods for inference on the
treatment effects using stopping times and discuss their optimality.",How to sample and when to stop sampling: The generalized Wald problem and minimax policies,2022-10-28 05:23:43,Karun Adusumilli,"http://arxiv.org/abs/2210.15841v5, http://arxiv.org/pdf/2210.15841v5",econ.EM
29637,em,"The conventional cluster-robust (CR) standard errors may not be robust. They
are vulnerable to data that contain a small number of large clusters. When a
researcher uses the 51 states in the U.S. as clusters, the largest cluster
(California) consists of about 10% of the total sample. Such a case in fact
violates the assumptions under which the widely used CR methods are guaranteed
to work. We formally show that the conventional CR methods fail if the
distribution of cluster sizes follows a power law with exponent less than two.
Besides the example of 51 state clusters, some examples are drawn from a list
of recent original research articles published in a top journal. In light of
these negative results about the existing CR methods, we propose a weighted CR
(WCR) method as a simple fix. Simulation studies support our arguments that the
WCR method is robust while the conventional CR methods are not.",Non-Robustness of the Cluster-Robust Inference: with a Proposal of a New Robust Method,2022-10-31 03:16:55,"Yuya Sasaki, Yulong Wang","http://arxiv.org/abs/2210.16991v2, http://arxiv.org/pdf/2210.16991v2",econ.EM
29638,em,"This paper describes how to reparameterize low-dimensional factor models to
fit the weak identification theory developed for generalized method of moments
(GMM) models. Identification conditions in low-dimensional factor models can be
close to failing in a similar way to identification conditions in instrumental
variables or GMM models. Weak identification estimation theory requires a
reparameterization to separate the weakly identified parameters from the
strongly identified parameters. Furthermore, identification-robust hypothesis
tests benefit from a reparameterization that makes the nuisance parameters
strongly identified. We describe such a reparameterization in low-dimensional
factor models with one or two factors. Simulations show that
identification-robust hypothesis tests that require the reparameterization are
less conservative than identification-robust hypothesis tests that use the
original parameterization. The simulations also show that estimates of the
number of factors frequently include weakly identified factors. An empirical
application to a factor model of parental investments in children is included.",Weak Identification in Low-Dimensional Factor Models with One or Two Factors,2022-11-01 11:33:54,Gregory Cox,"http://arxiv.org/abs/2211.00329v1, http://arxiv.org/pdf/2211.00329v1",econ.EM
29639,em,"Macroeconomic forecasting has recently started embracing techniques that can
deal with large-scale datasets and series with unequal release periods.
MIxed-DAta Sampling (MIDAS) and Dynamic Factor Models (DFM) are the two main
state-of-the-art approaches that allow modeling series with non-homogeneous
frequencies. We introduce a new framework called the Multi-Frequency Echo State
Network (MFESN) based on a relatively novel machine learning paradigm called
reservoir computing. Echo State Networks (ESN) are recurrent neural networks
formulated as nonlinear state-space systems with random state coefficients
where only the observation map is subject to estimation. MFESNs are
considerably more efficient than DFMs and allow for incorporating many series,
as opposed to MIDAS models, which are prone to the curse of dimensionality. All
methods are compared in extensive multistep forecasting exercises targeting US
GDP growth. We find that our MFESN models achieve superior or comparable
performance over MIDAS and DFMs at a much lower computational cost.",Reservoir Computing for Macroeconomic Forecasting with Mixed Frequency Data,2022-11-01 13:25:54,"Giovanni Ballarin, Petros Dellaportas, Lyudmila Grigoryeva, Marcel Hirt, Sophie van Huellen, Juan-Pablo Ortega","http://arxiv.org/abs/2211.00363v2, http://arxiv.org/pdf/2211.00363v2",econ.EM
29641,em,"We suggest a new single-equation test for Uncovered Interest Parity (UIP)
based on a dynamic regression approach. The method provides consistent and
asymptotically efficient parameter estimates, and is not dependent on
assumptions of strict exogeneity. This new approach is asymptotically more
efficient than the common approach of using OLS with HAC robust standard errors
in the static forward premium regression. The coefficient estimates when spot
return changes are regressed on the forward premium are all positive and
remarkably stable across currencies. These estimates are considerably larger
than those of previous studies, which frequently find negative coefficients.
The method also has the advantage of showing dynamic effects of risk premia, or
other events that may lead to rejection of UIP or the efficient markets
hypothesis.",A New Test for Market Efficiency and Uncovered Interest Parity,2022-11-02 20:52:40,"Richard T. Baillie, Francis X. Diebold, George Kapetanios, Kun Ho Kim","http://arxiv.org/abs/2211.01344v1, http://arxiv.org/pdf/2211.01344v1",econ.EM
29642,em,"This paper proposes a novel method to estimate individualised treatment
assignment rules. The method is designed to find rules that are stochastic,
reflecting uncertainty in estimation of an assignment rule and about its
welfare performance. Our approach is to form a prior distribution over
assignment rules, not over data generating processes, and to update this prior
based upon an empirical welfare criterion, not likelihood. The social planner
then assigns treatment by drawing a policy from the resulting posterior. We
show analytically a welfare-optimal way of updating the prior using empirical
welfare; this posterior is not feasible to compute, so we propose a variational
Bayes approximation for the optimal posterior. We characterise the welfare
regret convergence of the assignment rule based upon this variational Bayes
approximation, showing that it converges to zero at a rate of ln(n)/sqrt(n). We
apply our methods to experimental data from the Job Training Partnership Act
Study to illustrate the implementation of our methods.",Stochastic Treatment Choice with Empirical Welfare Updating,2022-11-03 04:03:04,"Toru Kitagawa, Hugo Lopez, Jeff Rowley","http://arxiv.org/abs/2211.01537v3, http://arxiv.org/pdf/2211.01537v3",econ.EM
29643,em,"A common task in empirical economics is to estimate \emph{interaction
effects} that measure how the effect of one variable $X$ on another variable
$Y$ depends on a third variable $H$. This paper considers the estimation of
interaction effects in linear panel models with a fixed number of time periods.
There are at least two ways to estimate interaction effects in this setting,
both common in applied work. Our theoretical results show that these two
approaches are distinct, and only coincide under strong conditions on
unobserved effect heterogeneity. Our empirical results show that the difference
between the two approaches is large, leading to conflicting conclusions about
the sign of the interaction effect. Taken together, our findings may guide the
choice between the two approaches in empirical work.",Estimating interaction effects with panel data,2022-11-03 05:32:34,"Chris Muris, Konstantin Wacker","http://arxiv.org/abs/2211.01557v1, http://arxiv.org/pdf/2211.01557v1",econ.EM
29644,em,"We provide an alternative derivation of the asymptotic results for the
Principal Components estimator of a large approximate factor model. Results are
derived under a minimal set of assumptions and, in particular, we require only
the existence of 4th order moments. A special focus is given to the time series
setting, a case considered in almost all recent econometric applications of
factor models. Hence, estimation is based on the classical $n\times n$ sample
covariance matrix and not on a $T\times T$ covariance matrix often considered
in the literature. Indeed, despite the two approaches being asymptotically
equivalent, the former is more coherent with a time series setting and it
immediately allows us to write more intuitive asymptotic expansions for the
Principal Component estimators showing that they are equivalent to OLS as long
as $\sqrt n/T\to 0$ and $\sqrt T/n\to 0$, that is the loadings are estimated in
a time series regression as if the factors were known, while the factors are
estimated in a cross-sectional regression as if the loadings were known.
Finally, we give some alternative sets of primitive sufficient conditions for
mean-squared consistency of the sample covariance matrix of the factors, of the
idiosyncratic components, and of the observed time series, which is the
starting point for Principal Component Analysis.",On Estimation and Inference of Large Approximate Dynamic Factor Models via the Principal Component Analysis,2022-11-03 19:01:49,Matteo Barigozzi,"http://arxiv.org/abs/2211.01921v3, http://arxiv.org/pdf/2211.01921v3",econ.EM
29645,em,"Assessing the statistical significance of parameter estimates is an important
step in high-dimensional vector autoregression modeling. Using the
least-squares boosting method, we compute the p-value for each selected
parameter at every boosting step in a linear model. The p-values are
asymptotically valid and also adapt to the iterative nature of the boosting
procedure. Our simulation experiment shows that the p-values can keep the false
positive rate under control in high-dimensional vector autoregressions. In an
application with more than 100 macroeconomic time series, we further show that
the p-values can not only select a sparser model with good prediction
performance but also help control model stability. A companion R package
boostvar is developed.",Boosted p-Values for High-Dimensional Vector Autoregression,2022-11-04 04:52:55,Xiao Huang,"http://arxiv.org/abs/2211.02215v2, http://arxiv.org/pdf/2211.02215v2",econ.EM
29646,em,"We propose and implement an approach to inference in linear instrumental
variables models which is simultaneously robust and computationally tractable.
Inference is based on self-normalization of sample moment conditions, and
allows for (but does not require) many (relative to the sample size), weak,
potentially invalid or potentially endogenous instruments, as well as for many
regressors and conditional heteroskedasticity. Our coverage results are uniform
and can deliver a small sample guarantee. We develop a new computational
approach based on semidefinite programming, which we show can equally be
applied to rapidly invert existing tests (e.g., AR, LM, CLR).","Fast, Robust Inference for Linear Instrumental Variables Models using Self-Normalized Moments",2022-11-04 06:48:55,"Eric Gautier, Christiern Rose","http://arxiv.org/abs/2211.02249v3, http://arxiv.org/pdf/2211.02249v3",econ.EM
29665,em,"Educational attainment generates labor market returns, societal gains and has
intrinsic value for individuals. We study Inequality of Opportunity (IOp) and
intergenerational mobility in the distribution of educational attainment. We
propose to use debiased IOp estimators based on the Gini coefficient and the
Mean Logarithmic Deviation (MLD) which are robust to machine learning biases.
We also measure the effect of each circumstance on IOp and provide tests to
compare IOp across two populations and to test the joint significance of a group of
circumstances. We find that circumstances explain between 38\% and 74\% of
total educational inequality in European countries. Mother's education is the
most important circumstance in most countries. There is high intergenerational
persistence and there is evidence of an educational Great Gatsby curve. We also
construct IOp-aware educational Great Gatsby curves and find that countries with
high income IOp also exhibit high educational IOp and lower mobility.",Educational Inequality of Opportunity and Mobility in Europe,2022-12-05 19:47:08,Joël Terschuur,"http://arxiv.org/abs/2212.02407v3, http://arxiv.org/pdf/2212.02407v3",econ.EM
29647,em,"This paper develops valid bootstrap inference methods for the dynamic panel
threshold regression. For the first-differenced generalized method of moments
(GMM) estimation for the dynamic short panel, we show that the standard
nonparametric bootstrap is inconsistent. The inconsistency is due to an
$n^{1/4}$-consistent non-normal asymptotic distribution for the threshold
estimate when the parameter resides within the continuity region of the
parameter space, which stems from the rank deficiency of the approximate
Jacobian of the sample moment conditions on the continuity region. We propose a
grid bootstrap to construct confidence sets for the threshold, a residual
bootstrap to construct confidence intervals for the coefficients, and a
bootstrap for testing continuity. They are shown to be valid under uncertain
continuity. A set of Monte Carlo experiments demonstrates that the proposed
bootstraps perform well in finite samples and improve upon the asymptotic
normal approximation even under a large jump at the threshold. An empirical
application to a firm investment model illustrates our methods.",Bootstraps for Dynamic Panel Threshold Models,2022-11-08 09:06:27,"Woosik Gong, Myung Hwan Seo","http://arxiv.org/abs/2211.04027v2, http://arxiv.org/pdf/2211.04027v2",econ.EM
29648,em,"It is commonly believed that financial crises ""lead to"" lower growth of a
country during the two-year recession period, which can be reflected by their
post-crisis GDP growth. However, by contrasting a causal model with a standard
prediction model, this paper argues that such a belief is non-causal. To make
causal inferences, we design a two-stage staggered difference-in-differences
model to estimate the average treatment effects. Interpreting the residuals as
the contribution of each crisis to the treatment effects, we reach the
surprising conclusion that cross-sectional crises are often of limited use in
providing relevant causal information to policymakers.",Crises Do Not Cause Lower Short-Term Growth,2022-11-09 00:07:42,"Kaiwen Hou, David Hou, Yang Ouyang, Lulu Zhang, Aster Liu","http://arxiv.org/abs/2211.04558v3, http://arxiv.org/pdf/2211.04558v3",econ.EM
29649,em,"This paper develops a new toolbox for multiple structural break detection in
panel data models with interactive effects. The toolbox includes tests for the
presence of structural breaks, a break date estimator, and a break date
confidence interval. The new toolbox is applied to a large panel of US banks
for a period characterized by massive quantitative easing programs aimed at
lessening the impact of the global financial crisis and the COVID-19 pandemic.
The question we ask is: Have these programs been successful in spurring bank
lending in the US economy? The short answer turns out to be: ``No''.",Multiple Structural Breaks in Interactive Effects Panel Data and the Impact of Quantitative Easing on Bank Lending,2022-11-12 19:59:47,"Jan Ditzen, Yiannis Karavias, Joakim Westerlund","http://arxiv.org/abs/2211.06707v2, http://arxiv.org/pdf/2211.06707v2",econ.EM
29650,em,"The difference-in-differences (DID) method identifies the average treatment
effects on the treated (ATT) mainly under the so-called parallel trends (PT)
assumption. The most common and widely used approach to justify the PT
assumption is the pre-treatment period examination. If a null hypothesis of the
same trend in the outcome means for both treatment and control groups in the
pre-treatment periods is rejected, researchers believe less in PT and the DID
results. This paper develops a robust generalized DID method that utilizes all
the information available not only from the pre-treatment periods but also from
multiple data sources. Our approach interprets PT in a different way using a
notion of selection bias, which enables us to generalize the standard DID
estimand by defining an information set that may contain multiple pre-treatment
periods or other baseline covariates. Our main assumption states that the
selection bias in the post-treatment period lies within the convex hull of all
selection biases in the pre-treatment periods. We provide a sufficient
condition for this assumption to hold. Based on the baseline information set we
construct, we provide an identified set for the ATT that always contains the
true ATT under our identifying assumption, and also the standard DID estimand.
We extend our proposed approach to multiple treatment periods DID settings. We
propose a flexible and easy way to implement the method. Finally, we illustrate
our methodology through some numerical and empirical examples.",Robust Difference-in-differences Models,2022-11-12 20:22:22,"Kyunghoon Ban, Désiré Kédagni","http://arxiv.org/abs/2211.06710v5, http://arxiv.org/pdf/2211.06710v5",econ.EM
29651,em,"Censoring occurs when an outcome is unobserved beyond some threshold value.
Methods that do not account for censoring produce biased predictions of the
unobserved outcome. This paper introduces Type I Tobit Bayesian Additive
Regression Tree (TOBART-1) models for censored outcomes. Simulation results and
real data applications demonstrate that TOBART-1 produces accurate predictions
of censored outcomes. TOBART-1 provides posterior intervals for the conditional
expectation and other quantities of interest. The error term distribution can
have a large impact on the expectation of the censored outcome. Therefore, the
error is flexibly modeled as a Dirichlet process mixture of normal
distributions.",Type I Tobit Bayesian Additive Regression Trees for Censored Outcome Regression,2022-11-14 19:40:08,Eoghan O'Neill,"http://arxiv.org/abs/2211.07506v3, http://arxiv.org/pdf/2211.07506v3",econ.EM
29652,em,"This paper proposes a new class of heterogeneous causal quantities, named
\textit{outcome conditioned} average structural derivatives (OASD) in a general
nonseparable model. OASD is the average partial effect of a marginal change in
a continuous treatment on the individuals located at different parts of the
outcome distribution, irrespective of individuals' characteristics. OASD
combines both features of ATE and QTE: it is interpreted as straightforwardly
as ATE while at the same time being more granular than ATE by breaking the entire
population up according to the rank of the outcome distribution.
  One contribution of this paper is that we establish some close relationships
between the \textit{outcome conditioned average partial effects} and a class of
parameters measuring the effect of counterfactually changing the distribution
of a single covariate on the unconditional outcome quantiles. By exploiting
this relationship, we can obtain a root-$n$ consistent estimator and calculate
the semi-parametric efficiency bound for these counterfactual effect
parameters. We illustrate this point by two examples: equivalence between OASD
and the unconditional partial quantile effect (Firpo et al. (2009)), and
equivalence between the marginal partial distribution policy effect (Rothe
(2012)) and a corresponding outcome conditioned parameter.
  Because identification of OASD is attained under a conditional exogeneity
assumption, by controlling for rich information about covariates, a
researcher may ideally use high-dimensional controls. We propose for
OASD a novel automatic debiased machine learning estimator, and present
asymptotic statistical guarantees for it. We prove our estimator is root-$n$
consistent, asymptotically normal, and semiparametrically efficient. We also
prove the validity of the bootstrap procedure for uniform inference on the OASD
process.",Identification and Auto-debiased Machine Learning for Outcome Conditioned Average Structural Derivatives,2022-11-15 08:19:07,"Zequn Jin, Lihua Lin, Zhengyu Zhang","http://arxiv.org/abs/2211.07903v1, http://arxiv.org/pdf/2211.07903v1",econ.EM
29653,em,"The deployment of Multi-Armed Bandits (MAB) has become commonplace in many
economic applications. However, regret guarantees for even state-of-the-art
linear bandit algorithms (such as Optimism in the Face of Uncertainty Linear
bandit (OFUL)) make strong exogeneity assumptions w.r.t. arm covariates. These
assumptions are often violated in many economic contexts, and using such
algorithms can lead to sub-optimal decisions. Further, in social science
analysis, it is also important to understand the asymptotic distribution of
estimated parameters. To this end, in this paper, we consider the problem of
online learning in linear stochastic contextual bandit problems with endogenous
covariates. We propose an algorithm, termed $\epsilon$-BanditIV, that uses
instrumental variables to correct for this bias, and prove an
$\tilde{\mathcal{O}}(k\sqrt{T})$ upper bound for the expected regret of the
algorithm. Further, we demonstrate the asymptotic consistency and normality of
the $\epsilon$-BanditIV estimator. We carry out extensive Monte Carlo
simulations to demonstrate the performance of our algorithms compared to other
methods. We show that $\epsilon$-BanditIV significantly outperforms other
existing methods in endogenous settings. Finally, we use data from a real-time
bidding (RTB) system to demonstrate how $\epsilon$-BanditIV can be used to
estimate the causal impact of advertising in such settings and compare its
performance with other existing methods.",Causal Bandits: Online Decision-Making in Endogenous Settings,2022-11-16 06:51:14,"Jingwen Zhang, Yifang Chen, Amandeep Singh","http://arxiv.org/abs/2211.08649v2, http://arxiv.org/pdf/2211.08649v2",econ.EM
29654,em,"Spillover of economic outcomes often arises over multiple networks, and
distinguishing their separate roles is important in empirical research. For
example, the direction of spillover between two groups (such as banks and
industrial sectors linked in a bipartite graph) has important economic
implications, and a researcher may want to learn which direction is supported
in the data. For this, we need to have an empirical methodology that allows for
both directions of spillover simultaneously. In this paper, we develop a
dynamic linear panel model and asymptotic inference with large $n$ and small
$T$, where both directions of spillover are accommodated through multiple
networks. Using the methodology developed here, we perform an empirical study
of spillovers between bank weakness and zombie-firm congestion in industrial
sectors, using firm-bank matched data from Spain between 2005 and 2012.
Overall, we find that there is positive spillover in both directions between
banks and sectors.",Estimating Dynamic Spillover Effects along Multiple Networks in a Linear Panel Model,2022-11-16 18:49:44,"Clemens Possnig, Andreea Rotărescu, Kyungchul Song","http://arxiv.org/abs/2211.08995v1, http://arxiv.org/pdf/2211.08995v1",econ.EM
29655,em,"Many econometrics textbooks imply that under mean independence of the
regressors and the error term, the OLS parameters have a causal interpretation.
We show that even when this assumption is satisfied, OLS might identify a
pseudo-parameter that does not have a causal interpretation. Even assuming that
the linear model is ""structural"" creates some ambiguity in what the regression
error represents and whether the OLS estimand is causal. This issue applies
equally to linear IV and panel data models. To give these estimands a causal
interpretation, one needs to impose assumptions on a ""causal"" model, e.g.,
using the potential outcome framework. This highlights that causal inference
requires causal, and not just stochastic, assumptions.",On the Role of the Zero Conditional Mean Assumption for Causal Inference in Linear Models,2022-11-17 15:52:02,"Federico Crudu, Michael C. Knaus, Giovanni Mellace, Joeri Smits","http://arxiv.org/abs/2211.09502v1, http://arxiv.org/pdf/2211.09502v1",econ.EM
29656,em,"Empirical researchers often perform model specification tests, such as the
Hausman test and the overidentifying restrictions test, to confirm the validity
of estimators rather than the validity of models. This paper examines the
effectiveness of specification pretests in finding invalid estimators. We study
the local asymptotic properties of test statistics and estimators and show that
locally unbiased specification tests cannot determine whether asymptotically
efficient estimators are asymptotically biased. The main message of the paper
is that correct specification and valid estimation are different issues.
Correct specification is neither necessary nor sufficient for asymptotically
unbiased estimation under local overidentification.",A Misuse of Specification Tests,2022-11-22 02:57:20,Naoya Sueishi,"http://arxiv.org/abs/2211.11915v1, http://arxiv.org/pdf/2211.11915v1",econ.EM
29657,em,"This paper shows that the group composition matters for the effectiveness of
labor market training programs for jobseekers. Using rich administrative data
from Germany, I document that greater average exposure to highly employable
peers leads to increased employment stability after program participation. Peer
effects on earnings are positive and long-lasting in classic vocational
training and negative but of short duration in retraining, pointing to
different mechanisms. Finally, I also find evidence for non-linearities in
effects and show that more heterogeneity in the peer group is detrimental.",Peer Effects in Labor Market Training,2022-11-22 19:00:32,Ulrike Unterhofer,"http://arxiv.org/abs/2211.12366v2, http://arxiv.org/pdf/2211.12366v2",econ.EM
29658,em,"This study evaluates the macroeconomic effects of active labour market
policies (ALMP) in Germany over the period 2005 to 2018. We propose a novel
identification strategy to overcome the simultaneity of ALMP and labour market
outcomes at the regional level. It exploits the imperfect overlap of local
labour markets and local employment agencies that decide on the local
implementation of policies. Specifically, we instrument for the use of ALMP in
a local labour market with the mix of ALMP implemented outside this market but
in local employment agencies that partially overlap with this market. We find
no effects of short-term activation measures or further vocational training on
aggregate labour market outcomes. In contrast, wage subsidies substantially
increase the share of workers in unsubsidised employment while lowering
long-term unemployment and welfare dependency. Our results suggest that
negative externalities of ALMP partially offset the effects for program
participants and that some segments of the labour market benefit more than
others.",Macroeconomic Effects of Active Labour Market Policies: A Novel Instrumental Variables Approach,2022-11-22 20:45:34,"Ulrike Unterhofer, Conny Wunsch","http://arxiv.org/abs/2211.12437v1, http://arxiv.org/pdf/2211.12437v1",econ.EM
29678,em,"In a binary-treatment instrumental variable framework, we define
supercompliers as the subpopulation whose treatment take-up positively responds
to eligibility and whose outcome positively responds to take-up. Supercompliers
are the only subpopulation to benefit from treatment eligibility and, hence,
are of great policy interest. Given a set of jointly testable assumptions and a
binary outcome, we can completely identify the characteristics of
supercompliers. Specifically, we require the standard assumptions from the
local average treatment effect literature along with an outcome monotonicity
assumption (i.e., treatment is weakly beneficial). We can estimate and conduct
inference on supercomplier characteristics using standard instrumental variable
regression.",Supercompliers,2022-12-29 00:53:57,"Matthew L. Comey, Amanda R. Eng, Zhuan Pei","http://arxiv.org/abs/2212.14105v2, http://arxiv.org/pdf/2212.14105v2",econ.EM
29659,em,"Many environments in economics feature a cross-section of units linked by
bilateral ties. I develop a framework for studying dynamics of cross-sectional
variables exploiting this network structure. It is a vector autoregression in
which innovations transmit cross-sectionally only via bilateral links and which
can accommodate rich patterns of how network effects of higher order accumulate
over time. The model can be used to estimate dynamic network effects, with the
network given or inferred from dynamic cross-correlations in the data. It also
offers a dimensionality-reduction technique for modeling high-dimensional
(cross-sectional) processes, owing to networks' ability to summarize complex
relations among variables (units) by relatively few non-zero bilateral links.
In a first application, I estimate how sectoral productivity shocks transmit
along supply chain linkages and affect dynamics of sectoral prices in the US
economy. The analysis suggests that network positions can rationalize not only
the strength of a sector's impact on aggregates, but also its timing. In a
second application, I model industrial production growth across 44 countries by
assuming global business cycles are driven by bilateral links which I estimate.
This reduces out-of-sample mean squared errors by up to 23% relative to a
principal components factor model.",Cross-Sectional Dynamics Under Network Structure: Theory and Macroeconomic Applications,2022-11-24 16:53:15,Marko Mlikota,"http://arxiv.org/abs/2211.13610v3, http://arxiv.org/pdf/2211.13610v3",econ.EM
29660,em,"This paper investigates new ways of estimating and identifying causal,
noncausal, and mixed causal-noncausal autoregressive models driven by a
non-Gaussian error sequence. We do not assume any parametric distribution
function for the innovations. Instead, we use the information of higher-order
cumulants, combining the spectrum and the bispectrum in a minimum distance
estimation. We show how to circumvent the nonlinearity of the parameters and
the multimodality in the noncausal and mixed models by selecting the
appropriate initial values in the estimation. In addition, we propose a method
of identification using a simple comparison criterion based on the global
minimum of the estimation function. By means of a Monte Carlo study, we find
unbiased parameter estimates and correct identification as the data depart
from normality. We propose an empirical application on eight monthly commodity
prices, finding noncausal and mixed causal-noncausal dynamics.",Spectral estimation for mixed causal-noncausal autoregressive models,2022-11-25 02:51:46,"Alain Hecq, Daniel Velasquez-Gaviria","http://arxiv.org/abs/2211.13830v1, http://arxiv.org/pdf/2211.13830v1",econ.EM
29661,em,"When observing spatial data, what standard errors should we report? With the
finite population framework, we identify three channels of spatial correlation:
sampling scheme, assignment design, and model specification. The
Eicker-Huber-White standard error, the cluster-robust standard error, and the
spatial heteroskedasticity and autocorrelation consistent standard error are
compared under different combinations of the three channels. Then, we provide
guidelines for whether standard errors should be adjusted for spatial
correlation for both linear and nonlinear estimators. As it turns out, the
answer to this question also depends on the magnitude of the sampling
probability.",A Design-Based Approach to Spatial Correlation,2022-11-25 22:21:14,"Ruonan Xu, Jeffrey M. Wooldridge","http://arxiv.org/abs/2211.14354v1, http://arxiv.org/pdf/2211.14354v1",econ.EM
29662,em,"Policy analysts are often interested in treating the units with extreme
outcomes, such as infants with extremely low birth weights. Existing
changes-in-changes (CIC) estimators are tailored to middle quantiles and do not
work well for such subpopulations. This paper proposes a new CIC estimator to
accurately estimate treatment effects at extreme quantiles. With its asymptotic
normality, we also propose a method of statistical inference, which is simple
to implement. Based on simulation studies, we propose to use our extreme CIC
estimator for extreme quantiles, such as those below 5% or above 95%, while the
conventional CIC estimator should be used for intermediate quantiles. Applying
the proposed method, we study the effects of income gains from the 1993 EITC
reform on infant birth weights for those in the most critical conditions. This
paper is accompanied by a Stata command.",Extreme Changes in Changes,2022-11-27 18:59:30,"Yuya Sasaki, Yulong Wang","http://arxiv.org/abs/2211.14870v2, http://arxiv.org/pdf/2211.14870v2",econ.EM
29663,em,"The assumption of group heterogeneity has become popular in panel data
models. We develop a constrained Bayesian grouped estimator that exploits
researchers' prior beliefs on groups in the form of pairwise constraints,
indicating whether a pair of units is likely to belong to the same group or
different groups. We propose a prior to incorporate the pairwise constraints
with varying degrees of confidence. The whole framework is built on the
nonparametric Bayesian method, which implicitly specifies a distribution over
the group partitions, and so the posterior analysis takes the uncertainty of
the latent group structure into account. Monte Carlo experiments reveal that
adding prior knowledge yields more accurate coefficient estimates and scores
predictive gains over alternative estimators. We apply our method to two
empirical applications. In a first application to forecasting U.S. CPI
inflation, we illustrate that prior knowledge of groups improves density
forecasts when the data is not entirely informative. A second application
revisits the relationship between a country's income and its democratic
transition; we identify heterogeneous income effects on democracy with five
distinct groups over ninety countries.",Incorporating Prior Knowledge of Latent Group Structure in Panel Data Models,2022-11-30 06:32:58,Boyuan Zhang,"http://arxiv.org/abs/2211.16714v3, http://arxiv.org/pdf/2211.16714v3",econ.EM
29664,em,"Ride-sourcing services offered by companies like Uber and Didi have grown
rapidly in the last decade. Understanding the demand for these services is
essential for planning and managing modern transportation systems. Existing
studies develop statistical models for ride-sourcing demand estimation at an
aggregate level due to limited data availability. These models lack foundations
in microeconomic theory, ignore competition of ride-sourcing with other travel
modes, and cannot be seamlessly integrated into existing individual-level
(disaggregate) activity-based models to evaluate system-level impacts of
ride-sourcing services. In this paper, we present and apply an approach for
estimating ride-sourcing demand at a disaggregate level using discrete choice
models and multiple data sources. We first construct a sample of trip-based
mode choices in Chicago, USA by enriching household travel survey with publicly
available ride-sourcing and taxi trip records. We then formulate a multivariate
extreme value-based discrete choice model with sampling and endogeneity corrections
to account for the construction of the estimation sample from multiple data
sources and endogeneity biases arising from supply-side constraints and surge
pricing mechanisms in ride-sourcing systems. Our analysis of the constructed
dataset reveals insights into the influence of various socio-economic, land use
and built environment features on ride-sourcing demand. We also derive
elasticities of ride-sourcing demand relative to travel cost and time. Finally,
we illustrate how the developed model can be employed to quantify the welfare
implications of ride-sourcing policies and regulations such as terminating
certain types of services and introducing ride-sourcing taxes.",A Data Fusion Approach for Ride-sourcing Demand Estimation: A Discrete Choice Model with Sampling and Endogeneity Corrections,2022-12-05 14:21:48,"Rico Krueger, Michel Bierlaire, Prateek Bansal","http://arxiv.org/abs/2212.02178v1, http://arxiv.org/pdf/2212.02178v1",econ.EM
29666,em,"We study how childhood exposure to technology at ages 5-15 via the occupation
of the parents affects the ability to climb the social ladder in terms of
income at ages 45-49 using the Danish micro data from years 1961-2019. Our
measure of technology exposure covers the degree to which using computers
(hardware and software) is required to perform an occupation, and it is created
by merging occupational codes with detailed data from O*NET. The challenge in
estimating this effect is that the long-term outcome is observed over a different
time horizon than our treatment of interest. We therefore adapt the surrogate
index methodology, linking the effect of our childhood treatment on
intermediate surrogates, such as income and education at ages 25-29, to the
effect on adulthood income. We estimate that a one standard error increase in
exposure to technology increases the income rank by 2\%-points, which is
economically and statistically significant and robust to cluster-correlation
within families. The derived policy recommendation is to update the educational
curriculum to expose children to computers to a higher degree, which may then
act as a social leveler.",The long-term effect of childhood exposure to technology using surrogates,2022-12-07 01:19:41,"Sylvia Klosin, Nicolaj Søndergaard Mühlbach","http://arxiv.org/abs/2212.03351v2, http://arxiv.org/pdf/2212.03351v2",econ.EM
29667,em,"This paper proposes IV-based estimators for the semiparametric distribution
regression model in the presence of an endogenous regressor, which are based on
an extension of IV probit estimators. We discuss the causal interpretation of
the estimators and two methods (monotone rearrangement and isotonic regression)
to ensure a monotonically increasing distribution function. Asymptotic
properties and simulation evidence are provided. An application to wage
equations reveals statistically significant and heterogeneous differences to
the inconsistent OLS-based estimator.",Semiparametric Distribution Regression with Instruments and Monotonicity,2022-12-07 18:23:47,Dominik Wied,"http://arxiv.org/abs/2212.03704v1, http://arxiv.org/pdf/2212.03704v1",econ.EM
29668,em,"We propose a new model-selection algorithm for Regression Discontinuity
Design, Regression Kink Design, and related IV estimators. Candidate models are
assessed within a 'placebo zone' of the running variable, where the true
effects are known to be zero. The approach yields an optimal combination of
bandwidth, polynomial, and any other choice parameters. It can also inform
choices between classes of models (e.g. RDD versus cohort-IV) and any other
choices, such as covariates, kernel, or other weights. We outline sufficient
conditions under which the approach is asymptotically optimal. The approach
also performs favorably under more general conditions in a series of Monte
Carlo simulations. We demonstrate the approach in an evaluation of changes to
Minimum Supervised Driving Hours in the Australian state of New South Wales. We
also re-evaluate evidence on the effects of Head Start and Minimum Legal
Drinking Age. Our Stata commands implement the procedure and compare its
performance to other approaches.",Optimal Model Selection in RDD and Related Settings Using Placebo Zones,2022-12-08 05:44:59,"Nathan Kettlewell, Peter Siminski","http://arxiv.org/abs/2212.04043v1, http://arxiv.org/pdf/2212.04043v1",econ.EM
29669,em,"Production functions are potentially misspecified when revenue is used as a
proxy for output. I formalize and strengthen this common knowledge by showing
that neither the production function nor Hicks-neutral productivity can be
identified with such a revenue proxy. This result holds under the standard
assumptions used in the literature for a large class of production functions,
including all commonly used parametric forms. Among the prevalent approaches to
address this issue, only those that impose assumptions on the underlying demand
system can possibly identify the production function.",On the Non-Identification of Revenue Production Functions,2022-12-09 04:23:11,David Van Dijcke,"http://arxiv.org/abs/2212.04620v2, http://arxiv.org/pdf/2212.04620v2",econ.EM
29670,em,"For western economies a long-forgotten phenomenon is on the horizon: rising
inflation rates. We propose a novel approach christened D2ML to identify
drivers of national inflation. D2ML combines machine learning for model
selection with time-dependent data and graphical models to estimate the inverse
of the covariance matrix, which is then used to identify dominant drivers.
Using a dataset of 33 countries, we find that the US inflation rate and oil
prices are dominant drivers of national inflation rates. For a more general
framework, we carry out Monte Carlo simulations to show that our estimator
correctly identifies dominant drivers.",Dominant Drivers of National Inflation,2022-12-12 14:58:21,"Jan Ditzen, Francesco Ravazzolo","http://arxiv.org/abs/2212.05841v1, http://arxiv.org/pdf/2212.05841v1",econ.EM
29671,em,"Statistical identification of possibly non-fundamental SVARMA models requires
structural errors: (i) to be an i.i.d process, (ii) to be mutually independent
across components, and (iii) each of them must be non-Gaussian distributed.
Hence, provided the first two requisites, it is crucial to evaluate the
non-Gaussian identification condition. We address this problem by relating the
non-Gaussian dimension of structural errors vector to the rank of a matrix
built from the higher-order spectrum of reduced-form errors. This makes our
proposal robust to the location of the roots of the lag polynomials, and generalizes
the current procedures designed for the restricted case of a causal structural
VAR model. Simulation exercises show that our procedure satisfactorily
estimates the number of non-Gaussian components.",Robust Estimation of the non-Gaussian Dimension in Structural Linear Models,2022-12-14 17:57:22,Miguel Cabello,"http://arxiv.org/abs/2212.07263v2, http://arxiv.org/pdf/2212.07263v2",econ.EM
29672,em,"We propose an alternative approach towards cost mitigation in
volatility-managed portfolios based on smoothing the predictive density of an
otherwise standard stochastic volatility model. Specifically, we develop a
novel variational Bayes estimation method that flexibly encompasses different
smoothness assumptions irrespective of the persistence of the underlying latent
state. Using a large set of equity trading strategies, we show that smoothing
volatility targeting helps to regularise the extreme leverage/turnover that
results from commonly used realised variance estimates. This has important
implications for both the risk-adjusted returns and the mean-variance
efficiency of volatility-managed portfolios, once transaction costs are
factored in. An extensive simulation study shows that our variational inference
scheme compares favourably against existing state-of-the-art Bayesian
estimation methods for stochastic volatility models.",Smoothing volatility targeting,2022-12-14 18:43:22,"Mauro Bernardi, Daniele Bianchi, Nicolas Bianco","http://arxiv.org/abs/2212.07288v1, http://arxiv.org/pdf/2212.07288v1",econ.EM
29673,em,"This paper investigates the finite sample performance of a range of
parametric, semi-parametric, and non-parametric instrumental variable
estimators when controlling for a fixed set of covariates to evaluate the local
average treatment effect. Our simulation designs are based on empirical labor
market data from the US and vary in several dimensions, including effect
heterogeneity, instrument selectivity, instrument strength, outcome
distribution, and sample size. Among the estimators and simulations considered,
non-parametric estimation based on the random forest (a machine learner
controlling for covariates in a data-driven way) performs competitively in terms
of the average coverage rates of the (bootstrap-based) 95% confidence
intervals, while also being relatively precise. Non-parametric kernel
regression as well as certain versions of semi-parametric radius matching on
the propensity score, pair matching on the covariates, and inverse probability
weighting also have decent coverage, but are less precise than the random
forest-based method. In terms of the average root mean squared error of LATE
estimation, kernel regression performs best, closely followed by the random
forest method, which has the lowest average absolute bias.",The finite sample performance of instrumental variable-based estimators of the Local Average Treatment Effect when controlling for covariates,2022-12-14 21:02:46,"Hugo Bodory, Martin Huber, Michael Lechner","http://arxiv.org/abs/2212.07379v1, http://arxiv.org/pdf/2212.07379v1",econ.EM
29674,em,"This paper considers the estimation of treatment assignment rules when the
policy maker faces a general budget or resource constraint. Utilizing the
PAC-Bayesian framework, we propose new treatment assignment rules that allow
for flexible notions of treatment outcome, treatment cost, and a budget
constraint. For example, the constraint setting allows for cost-savings, when
the costs of non-treatment exceed those of treatment for a subpopulation, to be
factored into the budget. It also accommodates simpler settings, such as
quantity constraints, and does not require outcome responses and costs to have
the same unit of measurement. Importantly, the approach accounts for settings
where budget or resource limitations may preclude treating all who can
benefit, where costs may vary with individual characteristics, and where there
may be uncertainty regarding the cost of treatment rules of interest. Despite
the nomenclature, our theoretical analysis examines frequentist properties of
the proposed rules. For stochastic rules that typically approach
budget-penalized empirical welfare maximizing policies in larger samples, we
derive non-asymptotic generalization bounds for the target population costs and
sharp oracle-type inequalities that compare the rules' welfare regret to that
of optimal policies in relevant budget categories. A closely related,
non-stochastic, model aggregation treatment assignment rule is shown to inherit
desirable attributes.",PAC-Bayesian Treatment Allocation Under Budget Constraints,2022-12-18 07:22:16,Daniel F. Pellatt,"http://arxiv.org/abs/2212.09007v2, http://arxiv.org/pdf/2212.09007v2",econ.EM
29675,em,"We develop a general framework for the identification of counterfactual
parameters in a class of nonlinear semiparametric panel models with fixed
effects and time effects. Our method applies to models for discrete outcomes
(e.g., two-way fixed effects binary choice) or continuous outcomes (e.g.,
censored regression), with discrete or continuous regressors. Our results do
not require parametric assumptions on the error terms or time-homogeneity on
the outcome equation. Our main results focus on static models, with a set of
results applying to models without any exogeneity conditions. We show that the
survival distribution of counterfactual outcomes is identified (point or
partial) in this class of models. This parameter is a building block for most
partial and marginal effects of interest in applied practice that are based on
the average structural function as defined by Blundell and Powell (2003, 2004).
To the best of our knowledge, ours are the first results on average partial and
marginal effects for binary choice and ordered choice models with two-way fixed
effects and non-logistic errors.",Identification of time-varying counterfactual parameters in nonlinear panel models,2022-12-19 02:36:59,"Irene Botosaru, Chris Muris","http://arxiv.org/abs/2212.09193v2, http://arxiv.org/pdf/2212.09193v2",econ.EM
29676,em,"The analysis of discrimination has long interested economists and lawyers. In
recent years, the literature in computer science and machine learning has
become interested in the subject, offering an interesting re-reading of the
topic. These questions are the consequences of numerous criticisms of
algorithms used to translate texts or to identify people in images. With the
arrival of massive data and the use of increasingly opaque algorithms, it is
not surprising to find discriminatory algorithms, because it has become easy to
obtain a proxy for a sensitive variable by enriching the data indefinitely.
According to Kranzberg (1986), ""technology is neither good nor bad, nor is it
neutral"", and therefore, ""machine learning won't give you anything like gender
neutrality `for free' that you didn't explicitly ask for"", as claimed by
Kearns et al. (2019). In this article, we will come back to the general context
for predictive models in classification. We will present the main concepts of
fairness, called group fairness, based on independence between the sensitive
variable and the prediction, possibly conditioned on this or that information.
We then go further by presenting the concepts of individual
fairness. Finally, we will see how to correct potential discrimination in
order to guarantee that a model is more ethical.",Quantifying fairness and discrimination in predictive models,2022-12-20 00:38:13,Arthur Charpentier,"http://arxiv.org/abs/2212.09868v1, http://arxiv.org/pdf/2212.09868v1",econ.EM
29677,em,"It is well-known that generalized method of moments (GMM) estimators of
dynamic panel data models can have asymptotic bias if the number of time
periods (T) and the number of cross-sectional units (n) are both large (Alvarez
and Arellano, 2003). This conclusion, however, follows when all available
instruments are used. This paper provides results supporting a more optimistic
conclusion when less than all available instrumental variables are used. If the
number of instruments used per period increases with T sufficiently slowly, the
bias of GMM estimators based on the forward orthogonal deviations
transformation (FOD-GMM) disappears as T and n increase, regardless of the
relative rate of increase in T and n. Monte Carlo evidence is provided that
corroborates this claim. Moreover, a large-n, large-T distribution result is
provided for FOD-GMM.",Forward Orthogonal Deviations GMM and the Absence of Large Sample Bias,2022-12-28 22:38:03,Robert F. Phillips,"http://arxiv.org/abs/2212.14075v1, http://arxiv.org/pdf/2212.14075v1",econ.EM
29849,em,"We reassess the use of linear models to approximate response probabilities of
binary outcomes, focusing on average partial effects (APE). We confirm that
linear projection parameters coincide with APEs in certain scenarios. Through
simulations, we identify other cases where OLS does or does not approximate
APEs and find that having a large fraction of fitted values in [0, 1] is neither
necessary nor sufficient. We also show nonlinear least squares estimation of
the ramp model is consistent and asymptotically normal and is equivalent to
using OLS on an iteratively trimmed sample to reduce bias. Our findings offer
practical guidance for empirical research.",Another Look at the Linear Probability Model and Nonlinear Index Models,2023-08-29 17:32:22,"Kaicheng Chen, Robert S. Martin, Jeffrey M. Wooldridge","http://arxiv.org/abs/2308.15338v3, http://arxiv.org/pdf/2308.15338v3",econ.EM
29679,em,"Survey questions often elicit responses on ordered scales for which the
definitions of the categories are subjective, possibly varying by individual.
This paper clarifies what is learned when these subjective reports are used as
an outcome in regression-based causal inference. When a continuous treatment
variable is statistically independent of both i) potential outcomes; and ii)
heterogeneity in reporting styles, a nonparametric regression of integer
category numbers on that variable uncovers a positively-weighted linear
combination of causal responses among individuals who are on the margin between
adjacent response categories. Though the weights do not integrate to one, the
ratio of local regression derivatives with respect to two such explanatory
variables identifies the relative magnitudes of convex averages of their
effects. When results are extended to discrete treatment variables, different
weighting schemes apply to different regressors, making comparisons of
magnitude less informative. I obtain a partial identification result for
comparing the effects of a discrete treatment variable to those of another
treatment variable when there are many categories and individual reporting
functions are linear. I also provide results for identification using
instrumental variables.",Causal identification with subjective outcomes,2022-12-30 13:17:09,Leonard Goff,"http://arxiv.org/abs/2212.14622v2, http://arxiv.org/pdf/2212.14622v2",econ.EM
29680,em,"In this paper, we propose Forest-PLS, a feature selection method for
analyzing policy effect heterogeneity in a more flexible and comprehensive
manner than is typically available with conventional methods. In particular,
our method is able to capture policy effect heterogeneity both within and
across subgroups of the population defined by observable characteristics. To
achieve this, we employ partial least squares to identify target components of
the population and causal forests to estimate personalized policy effects
across these components. We show that the method is consistent and leads to
asymptotically normally distributed policy effects. To demonstrate the efficacy
of our approach, we apply it to the data from the Pennsylvania Reemployment
Bonus Experiments, which were conducted in 1988-1989. The analysis reveals that
financial incentives can motivate some young non-white individuals to enter the
labor market. However, these incentives may also provide a temporary financial
cushion for others, dissuading them from actively seeking employment. Our
findings highlight the need for targeted, personalized measures for young
non-white male participants.",Feature Selection for Personalized Policy Analysis,2022-12-31 19:52:46,"Maria Nareklishvili, Nicholas Polson, Vadim Sokolov","http://arxiv.org/abs/2301.00251v3, http://arxiv.org/pdf/2301.00251v3",econ.EM
29681,em,"This paper proposes a novel testing procedure for selecting a sparse set of
covariates that explains a large dimensional panel. Our selection method
provides correct false detection control while having higher power than
existing approaches. We develop the inferential theory for large panels with
many covariates by combining post-selection inference with a novel multiple
testing adjustment. Our data-driven hypotheses are conditional on the sparse
covariate selection. We control for family-wise error rates for covariate
discovery for large cross-sections. As an easy-to-use and practically relevant
procedure, we propose Panel-PoSI, which combines the data-driven adjustment for
panel multiple testing with valid post-selection p-values of a generalized
LASSO, which allows us to incorporate priors. In an empirical study, we select a
small number of asset pricing factors that explain a large cross-section of
investment strategies. Our method dominates the benchmarks out-of-sample due to
its better size and power.",Inference for Large Panel Data with Many Covariates,2023-01-01 00:07:24,"Markus Pelger, Jiacheng Zou","http://arxiv.org/abs/2301.00292v6, http://arxiv.org/pdf/2301.00292v6",econ.EM
29682,em,"This paper examines the dynamics of Tether, the stablecoin with the largest
market capitalization. We show that the distributional and dynamic properties
of Tether/USD rates have been evolving from 2017 to 2021. We use local analysis
methods to detect and describe the local patterns, such as short-lived trends,
time-varying volatility and persistence. To accommodate these patterns, we
consider a time-varying parameter Double Autoregressive tvDAR(1) model under
the assumption of local stationarity of Tether/USD rates. We estimate the tvDAR
model non-parametrically and test hypotheses on the functional parameters. In
the application to Tether, the model provides a good fit and reliable
out-of-sample forecasts at short horizons, while being robust to time-varying
persistence and volatility. In addition, the model yields a simple plug-in
measure of stability for Tether and other stablecoins for assessing and
comparing their stability.",Time-Varying Coefficient DAR Model and Stability Measures for Stablecoin Prices: An Application to Tether,2023-01-02 06:07:09,"Antoine Djobenou, Emre Inan, Joann Jasiak","http://arxiv.org/abs/2301.00509v1, http://arxiv.org/pdf/2301.00509v1",econ.EM
29683,em,"Non-causal processes have been drawing attention recently in Macroeconomics
and Finance for their ability to display nonlinear behaviors such as asymmetric
dynamics, clustering volatility, and local explosiveness. In this paper, we
investigate the statistical properties of empirical conditional quantiles of
non-causal processes. Specifically, we show that the quantile autoregression
(QAR) estimates for non-causal processes do not remain constant across
different quantiles in contrast to their causal counterparts. Furthermore, we
demonstrate that non-causal autoregressive processes admit nonlinear
representations for conditional quantiles given past observations. Exploiting
these properties, we propose three novel testing strategies for non-causality
for non-Gaussian processes within the QAR framework. The tests are constructed
either by verifying the constancy of the slope coefficients or by applying a
misspecification test of the linear QAR model over different quantiles of the
process. Some numerical experiments are included to examine the finite sample
performance of the testing strategies, where we compare different specification
tests for dynamic quantiles with the Kolmogorov-Smirnov constancy test. The new
methodology is applied to some time series from financial markets to
investigate the presence of speculative bubbles. The extension of the approach
based on the specification tests to AR processes driven by innovations with
heteroskedasticity is studied through simulations. The performance of QAR
estimates of non-causal processes at extreme quantiles is also explored.",Quantile Autoregression-based Non-causality Testing,2023-01-08 00:15:17,Weifeng Jin,"http://arxiv.org/abs/2301.02937v1, http://arxiv.org/pdf/2301.02937v1",econ.EM
29697,em,"The present paper proposes a new treatment effects estimator that is valid
when the number of time periods is small, and the parallel trends condition
holds conditional on covariates and unobserved heterogeneity in the form of
interactive fixed effects. The estimator also allows the control variables to be
affected by treatment and it enables estimation of the resulting indirect
effect on the outcome variable. The asymptotic properties of the estimator are
established and their accuracy in small samples is investigated using Monte
Carlo simulations. The empirical usefulness of the estimator is illustrated
using as an example the effect of increased trade competition on firm markups
in China.",Simple Difference-in-Differences Estimation in Fixed-T Panels,2023-01-26 22:11:32,"Nicholas Brown, Kyle Butts, Joakim Westerlund","http://arxiv.org/abs/2301.11358v2, http://arxiv.org/pdf/2301.11358v2",econ.EM
29684,em,"This paper proves a new central limit theorem for a sample that exhibits
multi-way dependence and heterogeneity across clusters. Statistical inference
for situations where there is both multi-way dependence and cluster
heterogeneity has thus far been an open issue. Existing theory for multi-way
clustering inference requires identical distributions across clusters (implied
by the so-called separate exchangeability assumption). Yet no such homogeneity
requirement is needed in the existing theory for one-way clustering. The new
result therefore theoretically justifies the view that multi-way clustering is
a more robust version of one-way clustering, consistent with applied practice.
The result is applied to linear regression, where it is shown that a standard
plug-in variance estimator is valid for inference.",General Conditions for Valid Inference in Multi-Way Clustering,2023-01-10 09:14:16,Luther Yap,"http://arxiv.org/abs/2301.03805v1, http://arxiv.org/pdf/2301.03805v1",econ.EM
29685,em,"It is customary to estimate error-in-variables models using higher-order
moments of observables. This moments-based estimator is consistent only when
the coefficient of the latent regressor is assumed to be non-zero. We develop a
new estimator based on the divide-and-conquer principle that is consistent for
any value of the coefficient of the latent regressor. In an application on the
relation between investment, (mismeasured) Tobin's $q$ and cash flow, we find
time periods in which the effect of Tobin's $q$ is not statistically different
from zero. The implausibly large higher-order moment estimates in these periods
disappear when using the proposed estimator.",Uniform Inference in Linear Error-in-Variables Models: Divide-and-Conquer,2023-01-11 15:46:09,"Tom Boot, Artūras Juodis","http://arxiv.org/abs/2301.04439v1, http://arxiv.org/pdf/2301.04439v1",econ.EM
29686,em,"The overwhelming majority of empirical research that uses cluster-robust
inference assumes that the clustering structure is known, even though there are
often several possible ways in which a dataset could be clustered. We propose
two tests for the correct level of clustering in regression models. One test
focuses on inference about a single coefficient, and the other on inference
about two or more coefficients. We provide both asymptotic and wild bootstrap
implementations. The proposed tests work for a null hypothesis of either no
clustering or ``fine'' clustering against alternatives of ``coarser''
clustering. We also propose a sequential testing procedure to determine the
appropriate level of clustering. Simulations suggest that the bootstrap tests
perform very well under the null hypothesis and can have excellent power. An
empirical example suggests that using the tests leads to sensible inferences.",Testing for the appropriate level of clustering in linear regression models,2023-01-11 18:39:00,"James G. MacKinnon, Morten Ørregaard Nielsen, Matthew D. Webb","http://arxiv.org/abs/2301.04522v2, http://arxiv.org/pdf/2301.04522v2",econ.EM
29687,em,"We provide computationally attractive methods to obtain jackknife-based
cluster-robust variance matrix estimators (CRVEs) for linear regression models
estimated by least squares. We also propose several new variants of the wild
cluster bootstrap, which involve these CRVEs, jackknife-based bootstrap
data-generating processes, or both. Extensive simulation experiments suggest
that the new methods can provide much more reliable inferences than existing
ones in cases where the latter are not trustworthy, such as when the number of
clusters is small and/or cluster sizes vary substantially. Three empirical
examples illustrate the new methods.",Fast and Reliable Jackknife and Bootstrap Methods for Cluster-Robust Inference,2023-01-11 18:46:18,"James G. MacKinnon, Morten Ørregaard Nielsen, Matthew D. Webb","http://arxiv.org/abs/2301.04527v2, http://arxiv.org/pdf/2301.04527v2",econ.EM
29688,em,"I introduce a generic method for inference on entire quantile and regression
quantile processes in the presence of a finite number of large and arbitrarily
heterogeneous clusters. The method asymptotically controls size by generating
statistics that exhibit enough distributional symmetry such that randomization
tests can be applied. The randomization test does not require ex-ante matching
of clusters, is free of user-chosen parameters, and performs well at
conventional significance levels with as few as five clusters. The method tests
standard (non-sharp) hypotheses and can even be asymptotically similar in
empirically relevant situations. The main focus of the paper is inference on
quantile treatment effects but the method applies more broadly. Numerical and
empirical examples are provided.",Inference on quantile processes with a finite number of clusters,2023-01-11 22:28:34,Andreas Hagemann,"http://arxiv.org/abs/2301.04687v2, http://arxiv.org/pdf/2301.04687v2",econ.EM
29689,em,"We study causal inference in a setting in which units consisting of pairs of
individuals (such as married couples) are assigned randomly to one of four
categories: a treatment targeted at pair member A, a potentially different
treatment targeted at pair member B, joint treatment, or no treatment. The
setup includes the important special case in which the pair members are the
same individual targeted by two different treatments A and B. Allowing for
endogenous non-compliance, including coordinated treatment takeup, as well as
interference across treatments, we derive the causal interpretation of various
instrumental variable estimands using weaker monotonicity conditions than in
the literature. In general, coordinated treatment takeup makes it difficult to
separate treatment interaction from treatment effect heterogeneity. We provide
auxiliary conditions and various bounding strategies that may help zero in on
causally interesting parameters. As an empirical illustration, we apply our
results to a program randomly offering two different treatments, namely
tutoring and financial incentives, to first year college students, in order to
assess the treatments' effects on academic performance.",Treatment Effect Analysis for Pairs with Endogenous Treatment Takeup,2023-01-12 11:47:59,"Mate Kormos, Robert P. Lieli, Martin Huber","http://arxiv.org/abs/2301.04876v1, http://arxiv.org/pdf/2301.04876v1",econ.EM
29696,em,"Many economic and causal parameters of interest depend on generated
regressors. Examples include structural parameters in models with endogenous
variables estimated by control functions and in models with sample selection,
treatment effect estimation with propensity score matching, and marginal
treatment effects. Inference with generated regressors is complicated by the
very complex expressions for influence functions and asymptotic variances. To
address this problem, we propose Automatic Locally Robust/debiased GMM
estimators in a general setting with generated regressors. Importantly, we
allow for the generated regressors to be generated from machine learners, such
as Random Forest, Neural Nets, Boosting, and many others. We use our results to
construct novel Doubly Robust and Locally Robust estimators for the
Counterfactual Average Structural Function and Average Partial Effects in
models with endogeneity and sample selection, respectively. We provide
sufficient conditions for the asymptotic normality of our debiased GMM
estimators and investigate their finite sample performance through Monte Carlo
simulations.",Automatic Locally Robust Estimation with Generated Regressors,2023-01-25 18:26:18,"Juan Carlos Escanciano, Telmo Pérez-Izquierdo","http://arxiv.org/abs/2301.10643v2, http://arxiv.org/pdf/2301.10643v2",econ.EM
29690,em,"Robust M-estimation uses loss functions, such as least absolute deviation
(LAD), quantile loss and Huber's loss, to construct its objective function, in
order to for example eschew the impact of outliers, whereas the difficulty in
analysing the resultant estimators rests on the nonsmoothness of these losses.
Generalized functions have advantages over ordinary functions in several
aspects; in particular, they possess derivatives of any order.
Generalized functions incorporate locally integrable functions, the so-called
regular generalized functions, while the so-called singular generalized
functions (e.g. the Dirac delta function) can be obtained as the limits of a
sequence of sufficiently smooth functions, a so-called regular sequence in the
generalized function context. This makes it possible to use these singular
generalized functions through approximation. Nevertheless, a significant
contribution of this paper is to establish the convergence rate of the regular
sequence to the nonsmooth loss, which answers a call from the relevant literature.
For parameter estimation where the objective function may be nonsmooth, this
paper first shows in Section two, as a general paradigm, how the generalized
function approach can be used to tackle nonsmooth loss functions using a very
simple model. This approach is of general interest and applicability. We further use
the approach in robust M-estimation for additive single-index cointegrating
time series models; the asymptotic theory is established for the proposed
estimators. We evaluate the finite-sample performance of the proposed
estimation method and theory by both simulated data and an empirical analysis
of predictive regression of stock returns.",Robust M-Estimation for Additive Single-Index Cointegrating Time Series Models,2023-01-17 02:04:50,"Chaohua Dong, Jiti Gao, Yundong Tu, Bin Peng","http://arxiv.org/abs/2301.06631v1, http://arxiv.org/pdf/2301.06631v1",econ.EM
29691,em,"We revisit conduct parameter estimation in homogeneous goods markets to
resolve the conflict between Bresnahan (1982) and Perloff and Shen (2012)
regarding the identification and the estimation of conduct parameters. We point
out that Perloff and Shen's (2012) proof is incorrect and its simulation
setting is invalid. Our simulation shows that estimation becomes accurate when
demand shifters are properly added in supply estimation and sample sizes are
increased, supporting Bresnahan (1982).",Resolving the Conflict on Conduct Parameter Estimation in Homogeneous Goods Markets between Bresnahan (1982) and Perloff and Shen (2012),2023-01-17 05:25:43,"Yuri Matsumura, Suguru Otani","http://arxiv.org/abs/2301.06665v5, http://arxiv.org/pdf/2301.06665v5",econ.EM
29692,em,"Evaluating policy in imperfectly competitive markets requires understanding
firm behavior. While researchers test conduct via model selection and
assessment, we present advantages of Rivers and Vuong (2002) (RV) model
selection under misspecification. However, degeneracy of RV invalidates
inference. With a novel definition of weak instruments for testing, we connect
degeneracy to instrument strength, derive weak instrument properties of RV, and
provide a diagnostic for weak instruments by extending the framework of Stock
and Yogo (2005) to model selection. We test vertical conduct (Villas-Boas,
2007) using common instrument sets. Some are weak, providing no power. Strong
instruments support manufacturers setting retail prices.",Testing Firm Conduct,2023-01-17 09:35:03,"Marco Duarte, Lorenzo Magnolfi, Mikkel Sølvsten, Christopher Sullivan","http://arxiv.org/abs/2301.06720v1, http://arxiv.org/pdf/2301.06720v1",econ.EM
29693,em,"This paper develops a semi-parametric procedure for estimation of
unconditional quantile partial effects using quantile regression coefficients.
The estimator is based on an identification result showing that, for continuous
covariates, unconditional quantile effects are a weighted average of
conditional ones at particular quantile levels that depend on the covariates.
We propose a two-step estimator for the unconditional effects where in the
first step one estimates a structural quantile regression model, and in the
second step a nonparametric regression is applied to the first step
coefficients. We establish the asymptotic properties of the estimator, namely
consistency and asymptotic normality. Monte Carlo simulations show numerical
evidence that the estimator has very good finite sample performance and is
robust to the selection of bandwidth and kernel. To illustrate the proposed
method, we study the canonical application of Engel's curve, i.e. food
expenditures as a share of income.",Unconditional Quantile Partial Effects via Conditional Quantile Regression,2023-01-18 03:48:11,"Javier Alejo, Antonio F. Galvao, Julian Martinez-Iriarte, Gabriel Montes-Rojas","http://arxiv.org/abs/2301.07241v4, http://arxiv.org/pdf/2301.07241v4",econ.EM
29694,em,"Many problems ask a question that can be formulated as a causal question:
""what would have happened if...?"" For example, ""would the person have had
surgery if he or she had been Black?"" To address this kind of question,
calculating an average treatment effect (ATE) is often uninformative, because
one would like to know how much impact a variable (such as skin color) has on a
specific individual, characterized by certain covariates. Trying to calculate a
conditional ATE (CATE) seems more appropriate. In causal inference, the
propensity score approach assumes that the treatment is influenced by x, a
collection of covariates. Here, we will have the dual view: doing an
intervention, or changing the treatment (even just hypothetically, in a thought
experiment, for example by asking what would have happened if a person had been
Black) can have an impact on the values of x. We will see here that optimal
transport allows us to change certain characteristics that are influenced by
the variable we are trying to quantify the effect of. We propose here a mutatis
mutandis version of the CATE, which will be done simply in dimension one by
saying that the CATE must be computed relative to a level of probability,
associated with the proportion of x (a single covariate) in the control
population, and by looking for the equivalent quantile in the test population.
In higher dimensions, it will be necessary to go through transport, and an
application will be proposed on the impact of some variables on the probability
of having an unnatural birth (the fact that the mother smokes, or that the
mother is Black).",Optimal Transport for Counterfactual Estimation: A Method for Causal Inference,2023-01-18 22:42:38,"Arthur Charpentier, Emmanuel Flachaire, Ewen Gallic","http://arxiv.org/abs/2301.07755v1, http://arxiv.org/pdf/2301.07755v1",econ.EM
29695,em,"This paper revisits the identification and estimation of a class of
semiparametric (distribution-free) panel data binary choice models with lagged
dependent variables, exogenous covariates, and entity fixed effects. We provide
a novel identification strategy, using an ""identification at infinity""
argument. In contrast with the celebrated Honore and Kyriazidou (2000), our
method permits time trends of any form and does not suffer from the ""curse of
dimensionality"". We propose an easily implementable conditional maximum score
estimator. The asymptotic properties of the proposed estimator are fully
characterized. A small-scale Monte Carlo study demonstrates that our approach
performs satisfactorily in finite samples. We illustrate the usefulness of our
method by presenting an empirical application to enrollment in private hospital
insurance using the Household, Income and Labour Dynamics in Australia (HILDA)
Survey data.",Revisiting Panel Data Discrete Choice Models with Lagged Dependent Variables,2023-01-23 14:47:52,"Christopher R. Dobronyi, Fu Ouyang, Thomas Tao Yang","http://arxiv.org/abs/2301.09379v3, http://arxiv.org/pdf/2301.09379v3",econ.EM
29698,em,"In this paper, we describe a computational implementation of the Synthetic
difference-in-differences (SDID) estimator of Arkhangelsky et al. (2021) for
Stata. Synthetic difference-in-differences can be used in a wide class of
circumstances where treatment effects on some particular policy or event are
desired, and repeated observations on treated and untreated units are available
over time. We lay out the theory underlying SDID, both when there is a single
treatment adoption date and when adoption is staggered over time, and discuss
estimation and inference in each of these cases. We introduce the sdid command
which implements these methods in Stata, and provide a number of examples of
use, discussing estimation, inference, and visualization of results.",Synthetic Difference In Differences Estimation,2023-01-27 20:05:42,"Damian Clarke, Daniel Pailañir, Susan Athey, Guido Imbens","http://arxiv.org/abs/2301.11859v3, http://arxiv.org/pdf/2301.11859v3",econ.EM
29699,em,"This paper introduces a maximum likelihood estimator of the value of job
amenities and labor productivity in a single matching market based on the
observation of equilibrium matches and wages. The estimation procedure
simultaneously fits both the matching patterns and the wage curve. While our
estimator is suited for a wide range of assignment problems, we provide an
application to the estimation of the Value of a Statistical Life using
compensating wage differentials for the risk of fatal injury on the job. Using
US data for 2017, we estimate the Value of a Statistical Life at \$6.3 million
(\$2017).",A Note on the Estimation of Job Amenities and Labor Productivity,2023-01-30 00:08:44,"Arnaud Dupuy, Alfred Galichon","http://arxiv.org/abs/2301.12542v1, http://arxiv.org/pdf/2301.12542v1",econ.EM
29700,em,"Modeling and predicting extreme movements in GDP is notoriously difficult and
the selection of appropriate covariates and/or possible forms of nonlinearities
are key in obtaining precise forecasts. In this paper, our focus is on using
large datasets in quantile regression models to forecast the conditional
distribution of US GDP growth. To capture possible non-linearities, we include
several nonlinear specifications. The resulting models are very high dimensional,
and we thus rely on a set of shrinkage priors. Since Markov Chain Monte Carlo
estimation becomes slow in these dimensions, we rely on fast variational Bayes
approximations to the posterior distribution of the coefficients and the latent
states. We find that our proposed set of models produces precise forecasts.
These gains are especially pronounced in the tails. Using Gaussian processes to
approximate the nonlinear component of the model further improves the good
performance, in particular in the right tail.",Nonlinearities in Macroeconomic Tail Risk through the Lens of Big Data Quantile Regressions,2023-01-31 16:02:59,"Jan Prüser, Florian Huber","http://arxiv.org/abs/2301.13604v2, http://arxiv.org/pdf/2301.13604v2",econ.EM
29701,em,"Inference on common parameters in panel data models with individual-specific
fixed effects is a classic example of Neyman and Scott's (1948) incidental
parameter problem (IPP). One solution to this IPP is functional differencing
(Bonhomme 2012), which works when the number of time periods T is fixed (and
may be small), but this solution is not applicable to all panel data models of
interest. Another solution, which applies to a larger class of models, is
""large-T"" bias correction (pioneered by Hahn and Kuersteiner 2002 and Hahn and
Newey 2004), but this is only guaranteed to work well when T is sufficiently
large. This paper provides a unified approach that connects those two seemingly
disparate solutions to the IPP. In doing so, we provide an approximate version
of functional differencing, that is, an approximate solution to the IPP that is
applicable to a large class of panel data models even when T is relatively
small.",Approximate Functional Differencing,2023-01-31 19:16:30,"Geert Dhaene, Martin Weidner","http://arxiv.org/abs/2301.13736v2, http://arxiv.org/pdf/2301.13736v2",econ.EM
29702,em,"Thousands of papers have reported two-way cluster-robust (TWCR) standard
errors. However, the recent econometrics literature points out the potential
non-gaussianity of two-way cluster sample means, and thus invalidity of the
inference based on the TWCR standard errors. Fortunately, simulation studies
nonetheless show that gaussianity is more common than exceptional. This
paper provides theoretical support for this encouraging observation.
Specifically, we derive a novel central limit theorem for two-way clustered
triangular arrays that justifies the use of TWCR standard errors under very mild and
interpretable conditions. We, therefore, hope that this paper will provide a
theoretical justification for the legitimacy of most, if not all, of the
thousands of those empirical papers that have used the TWCR standard errors. We
also provide practical guidance as to when a researcher can employ the TWCR
standard errors.",On Using The Two-Way Cluster-Robust Standard Errors,2023-01-31 20:24:23,"Harold D Chiang, Yuya Sasaki","http://arxiv.org/abs/2301.13775v1, http://arxiv.org/pdf/2301.13775v1",econ.EM
29703,em,"This paper is concerned with estimation and inference on average treatment
effects in randomized controlled trials when researchers observe potentially
many covariates. By employing Neyman's (1923) finite population perspective, we
propose a bias-corrected regression adjustment estimator using cross-fitting,
and show that the proposed estimator has favorable properties over existing
alternatives. For inference, we derive the first and second order terms in the
stochastic component of the regression adjustment estimators, study higher
order properties of the existing inference methods, and propose a
bias-corrected version of the HC3 standard error. The proposed methods readily
extend to stratified experiments with large strata. Simulation studies show our
cross-fitted estimator, combined with the bias-corrected HC3, delivers precise
point estimates and robust size control over a wide range of DGPs. To
illustrate, the proposed methods are applied to a real dataset on randomized
experiments of incentives and services for college achievement following
Angrist, Lang, and Oreopoulos (2009).",Regression adjustment in randomized controlled trials with many covariates,2023-02-01 17:29:30,"Harold D Chiang, Yukitoshi Matsushita, Taisuke Otsu","http://arxiv.org/abs/2302.00469v3, http://arxiv.org/pdf/2302.00469v3",econ.EM
29704,em,"In this paper we construct an inferential procedure for Granger causality in
high-dimensional non-stationary vector autoregressive (VAR) models. Our method
does not require knowledge of the order of integration of the time series under
consideration. We augment the VAR with at least as many lags as the suspected
maximum order of integration, an approach which has been proven to be robust
against the presence of unit roots in low dimensions. We prove that we can
restrict the augmentation to only the variables of interest for the testing,
thereby making the approach suitable for high dimensions. We combine this lag
augmentation with a post-double-selection procedure in which a set of initial
penalized regressions is performed to select the relevant variables for both
the Granger causing and caused variables. We then establish uniform asymptotic
normality of a second-stage regression involving only the selected variables.
Finite sample simulations show good performance, and an application to investigate
the (predictive) causes and effects of economic uncertainty illustrates the
need to allow for unknown orders of integration.",Inference in Non-stationary High-Dimensional VARs,2023-02-03 00:56:36,"Alain Hecq, Luca Margaritella, Stephan Smeekes","http://arxiv.org/abs/2302.01434v2, http://arxiv.org/pdf/2302.01434v2",econ.EM
29705,em,"When agents' information is imperfect and dispersed, existing measures of
macroeconomic uncertainty based on the forecast error variance have two
distinct drivers: the variance of the economic shock and the variance of the
information dispersion. The former driver increases uncertainty and reduces
agents' disagreement (agreed uncertainty). The latter increases both
uncertainty and disagreement (disagreed uncertainty). We use these implications
to identify empirically the effects of agreed and disagreed uncertainty shocks,
based on a novel measure of consumer disagreement derived from survey
expectations. Disagreed uncertainty has no discernible economic effects and is
benign for economic activity, but agreed uncertainty exerts significant
depressing effects on a broad spectrum of macroeconomic indicators.",Agreed and Disagreed Uncertainty,2023-02-03 12:47:41,"Luca Gambetti, Dimitris Korobilis, John Tsoukalas, Francesco Zanetti","http://arxiv.org/abs/2302.01621v1, http://arxiv.org/pdf/2302.01621v1",econ.EM
29706,em,"This document presents an overview of the bayesmixedlogit and
bayesmixedlogitwtp Stata packages. It mirrors closely the helpfile obtainable
in Stata (i.e., through help bayesmixedlogit or help bayesmixedlogitwtp).
Further background for the packages can be found in Baker (2014).",Using bayesmixedlogit and bayesmixedlogitwtp in Stata,2023-02-03 17:33:25,Matthew J. Baker,"http://arxiv.org/abs/2302.01775v1, http://arxiv.org/pdf/2302.01775v1",econ.EM
29707,em,"This Appendix (dated: July 2021) includes supplementary derivations related
to the main limit results of the econometric framework for structural break
testing in predictive regression models based on the OLS-Wald and IVX-Wald test
statistics, developed by Katsouris C (2021). In particular, we derive the
asymptotic distributions of the test statistics when the predictive regression
model includes either mildly integrated or persistent regressors. Moreover, we
consider the case in which a model intercept is included in the model vis-a-vis
the case that the predictive regression model has no model intercept. In a
subsequent version of this study we reexamine these particular aspects in more
depth with respect to the demeaned versions of the variables of the predictive
regression.",Testing for Structural Change under Nonstationarity,2023-02-05 15:28:36,Christis Katsouris,"http://arxiv.org/abs/2302.02370v1, http://arxiv.org/pdf/2302.02370v1",econ.EM
29708,em,"This paper is concerned with detecting the presence of out of sample
predictability in linear predictive regressions with a potentially large set of
candidate predictors. We propose a procedure based on out of sample MSE
comparisons that is implemented in a pairwise manner using one predictor at a
time and resulting in an aggregate test statistic that is standard normally
distributed under the global null hypothesis of no linear predictability.
Predictors can be highly persistent, purely stationary or a combination of
both. Upon rejection of the null hypothesis we subsequently introduce a
predictor screening procedure designed to identify the most active predictors.
An empirical application to key predictors of US economic activity illustrates
the usefulness of our methods and highlights the important forward looking role
played by the series of manufacturing new orders.",Out of Sample Predictability in Predictive Regressions with Many Predictor Candidates,2023-02-06 18:32:48,"Jesus Gonzalo, Jean-Yves Pitarakis","http://arxiv.org/abs/2302.02866v2, http://arxiv.org/pdf/2302.02866v2",econ.EM
29709,em,"We extend the theory from Fan and Li (2001) on penalized likelihood-based
estimation and model-selection to statistical and econometric models which
allow for non-negativity constraints on some or all of the parameters, as well
as time-series dependence. It differs from classic non-penalized likelihood
estimation, where limiting distributions of likelihood-based estimators and
test-statistics are non-standard, and depend on the unknown number of
parameters on the boundary of the parameter space. Specifically, we establish
that joint model selection and estimation results in standard asymptotically
Gaussian distributed estimators. The results are applied to the rich class of
autoregressive conditional heteroskedastic (ARCH) models for the modelling of
time-varying volatility. We find from simulations that the penalized estimation
and model-selection works surprisingly well even for a large number of
parameters. A simple empirical illustration for stock-market returns data
confirms the ability of the penalized estimation to select ARCH models which
fit the autocorrelation function nicely, and confirms the stylized fact
of long-memory in financial time series data.",Penalized Quasi-likelihood Estimation and Model Selection in Time Series Models with Parameters on the Boundary,2023-02-06 18:36:11,"Heino Bohn Nielsen, Anders Rahbek","http://arxiv.org/abs/2302.02867v1, http://arxiv.org/pdf/2302.02867v1",econ.EM
29710,em,"We develop asymptotic approximation results that can be applied to sequential
estimation and inference problems, adaptive randomized controlled trials, and
other statistical decision problems that involve multiple decision nodes with
structured and possibly endogenous information sets. Our results extend the
classic asymptotic representation theorem used extensively in efficiency bound
theory and local power analysis. In adaptive settings where the decision at one
stage can affect the observation of variables in later stages, we show that a
limiting data environment characterizes all limit distributions attainable
through a joint choice of an adaptive design rule and statistics applied to the
adaptively generated data, under local alternatives. We illustrate how the
theory can be applied to study the choice of adaptive rules and end-of-sample
statistical inference in batched (groupwise) sequential adaptive experiments.","Asymptotic Representations for Sequential Decisions, Adaptive Experiments, and Batched Bandits",2023-02-06 23:54:08,"Keisuke Hirano, Jack R. Porter","http://arxiv.org/abs/2302.03117v1, http://arxiv.org/pdf/2302.03117v1",econ.EM
29711,em,"In settings with few treated units, Difference-in-Differences (DID)
estimators are not consistent, and are not generally asymptotically normal.
This poses relevant challenges for inference. While there are inference methods
that are valid in these settings, some of these alternatives are not readily
available when there is variation in treatment timing and heterogeneous
treatment effects; or for deriving uniform confidence bands for event-study
plots. We present alternatives in settings with few treated units that are
valid with variation in treatment timing and/or that allow for uniform
confidence bands.",Extensions for Inference in Difference-in-Differences with Few Treated Clusters,2023-02-07 00:37:35,"Luis Alvarez, Bruno Ferman","http://arxiv.org/abs/2302.03131v1, http://arxiv.org/pdf/2302.03131v1",econ.EM
29850,em,"We study mixed-effects methods for estimating equations containing person and
firm effects. In economics such models are usually estimated using
fixed-effects methods. Recent enhancements to those fixed-effects methods
include corrections to the bias in estimating the covariance matrix of the
person and firm effects, which we also consider.",Mixed-Effects Methods for Search and Matching Research,2023-08-29 20:13:59,"John M. Abowd, Kevin L. McKinney","http://arxiv.org/abs/2308.15445v1, http://arxiv.org/pdf/2308.15445v1",econ.EM
29712,em,"In this paper we test for Granger causality in high-dimensional vector
autoregressive models (VARs) to disentangle and interpret the complex causal
chains linking radiative forcings and global temperatures. By allowing for high
dimensionality in the model we can enrich the information set with all relevant
natural and anthropogenic forcing variables to obtain reliable causal
relations. These variables have mostly been investigated in an aggregated form
or in separate models in the previous literature. Additionally, our framework
allows us to ignore the order of integration of the variables and to directly
estimate the VAR in levels, thus avoiding accumulating biases coming from
unit-root and cointegration tests. This is of particular appeal for climate
time series, which are well known to contain stochastic trends as well as to
exhibit long memory. We are thus able to display the causal networks linking
radiative forcings to global temperatures but also to causally connect
radiative forcings among themselves, therefore allowing for a careful
reconstruction of a timeline of causal effects among forcings. The robustness
of our proposed procedure makes it an important tool for policy evaluation in
tackling global climate change.",High-Dimensional Causality for Climatic Attribution,2023-02-08 14:12:12,"Marina Friedrich, Luca Margaritella, Stephan Smeekes","http://arxiv.org/abs/2302.03996v1, http://arxiv.org/pdf/2302.03996v1",econ.EM
29713,em,"This paper studies inference on the average treatment effect in experiments
in which treatment status is determined according to ""matched pairs"" and it is
additionally desired to adjust for observed, baseline covariates to gain
further precision. By a ""matched pairs"" design, we mean that units are sampled
i.i.d. from the population of interest, paired according to observed, baseline
covariates and finally, within each pair, one unit is selected at random for
treatment. Importantly, we presume that not all observed, baseline covariates
are used in determining treatment assignment. We study a broad class of
estimators based on a ""doubly robust"" moment condition that permits us to study
estimators with both finite-dimensional and high-dimensional forms of covariate
adjustment. We find that estimators with finite-dimensional, linear adjustments
need not lead to improvements in precision relative to the unadjusted
difference-in-means estimator. This phenomenon persists even if the adjustments
are interacted with treatment; in fact, doing so leads to no changes in
precision. However, gains in precision can be ensured by including fixed
effects for each of the pairs. Indeed, we show that this adjustment is the
""optimal"" finite-dimensional, linear adjustment. We additionally study two
estimators with high-dimensional forms of covariate adjustment based on the
LASSO. For each such estimator, we show that it leads to improvements in
precision relative to the unadjusted difference-in-means estimator and also
provide conditions under which it leads to the ""optimal"" nonparametric,
covariate adjustment. A simulation study confirms the practical relevance of
our theoretical analysis, and the methods are employed to reanalyze data from
an experiment using a ""matched pairs"" design to study the effect of
macroinsurance on microenterprise.",Covariate Adjustment in Experiments with Matched Pairs,2023-02-09 03:12:32,"Yuehao Bai, Liang Jiang, Joseph P. Romano, Azeem M. Shaikh, Yichong Zhang","http://arxiv.org/abs/2302.04380v3, http://arxiv.org/pdf/2302.04380v3",econ.EM
29714,em,"This paper presents a new perspective on the identification at infinity for
the intercept of the sample selection model as identification at the boundary
via a transformation of the selection index. This perspective suggests
generalizations of estimation at infinity to kernel regression estimation at
the boundary and further to local linear estimation at the boundary. The
proposed kernel-type estimators with an estimated transformation are proven to
be nonparametric-rate consistent and asymptotically normal under mild
regularity conditions. A fully data-driven method of selecting the optimal
bandwidths for the estimators is developed. The Monte Carlo simulation shows
the desirable finite sample properties of the proposed estimators and bandwidth
selection procedures.",On semiparametric estimation of the intercept of the sample selection model: a kernel approach,2023-02-10 10:16:49,Zhewen Pan,"http://arxiv.org/abs/2302.05089v1, http://arxiv.org/pdf/2302.05089v1",econ.EM
29715,em,"We propose an econometric environment for structural break detection in
nonstationary quantile predictive regressions. We establish the limit
distributions for a class of Wald and fluctuation type statistics based on both
the ordinary least squares estimator and the endogenous instrumental regression
estimator proposed by Phillips and Magdalinos (2009a, Econometric Inference in
the Vicinity of Unity. Working paper, Singapore Management University).
Although the asymptotic distribution of these test statistics appears to depend
on the chosen estimator, the IVX based tests are shown to be asymptotically
nuisance parameter-free regardless of the degree of persistence and consistent
under local alternatives. The finite-sample performance of both tests is
evaluated via simulation experiments. An empirical application to house pricing
index returns demonstrates the practicality of the proposed break tests for
regression quantiles of nonstationary time series data.",Structural Break Detection in Quantile Predictive Regression Models with Persistent Covariates,2023-02-10 14:45:10,Christis Katsouris,"http://arxiv.org/abs/2302.05193v1, http://arxiv.org/pdf/2302.05193v1",econ.EM
29716,em,"Machine learning (ML) estimates of conditional average treatment effects
(CATE) can guide policy decisions, either by allowing targeting of individuals
with beneficial CATE estimates, or as inputs to decision trees that optimise
overall outcomes. There is limited information available regarding how well
these algorithms perform in real-world policy evaluation scenarios. Using
synthetic data, we compare the finite sample performance of different policy
learning algorithms, machine learning techniques employed during their learning
phases, and methods for presenting estimated policy values. For each algorithm,
we assess the resulting treatment allocation by measuring deviation from the
ideal (""oracle"") policy. Our main finding is that policy trees based on
estimated CATEs outperform trees learned from doubly-robust scores. Across
settings, Causal Forests and the Normalised Double-Robust Learner perform
consistently well, while Bayesian Additive Regression Trees perform poorly.
These methods are then applied to a case study targeting optimal allocation of
subsidised health insurance, with the goal of reducing infant mortality in
Indonesia.",Policy Learning with Rare Outcomes,2023-02-10 17:16:55,"Julia Hatamyar, Noemi Kreif","http://arxiv.org/abs/2302.05260v2, http://arxiv.org/pdf/2302.05260v2",econ.EM
29717,em,"Designing individualized allocation of treatments so as to maximize the
equilibrium welfare of interacting agents has many policy-relevant
applications. Focusing on sequential decision games of interacting agents, this
paper develops a method to obtain optimal treatment assignment rules that
maximize a social welfare criterion by evaluating stationary distributions of
outcomes. Stationary distributions in sequential decision games are given by
Gibbs distributions, which are difficult to optimize with respect to a
treatment allocation due to analytical and computational complexity. We apply a
variational approximation to the stationary distribution and optimize the
approximated equilibrium welfare with respect to treatment allocation using a
greedy optimization algorithm. We characterize the performance of the
variational approximation, deriving a performance guarantee for the greedy
optimization algorithm via a welfare regret bound. We implement our proposed
method in simulation exercises and an empirical application using the Indian
microfinance data (Banerjee et al., 2013), and show it delivers significant
welfare gains.",Individualized Treatment Allocation in Sequential Network Games,2023-02-11 20:19:32,"Toru Kitagawa, Guanyi Wang","http://arxiv.org/abs/2302.05747v3, http://arxiv.org/pdf/2302.05747v3",econ.EM
29718,em,"We provide a simple method to estimate the parameters of multivariate
stochastic volatility models with latent factor structures. These models are
very useful as they alleviate the standard curse of dimensionality, allowing
the number of parameters to increase only linearly with the number of the
return series. Although theoretically very appealing, these models have only
found limited practical application due to huge computational burdens. Our
estimation method is simple in implementation as it consists of two steps:
first, we estimate the loadings and the unconditional variances by maximum
likelihood, and then we use the efficient method of moments to estimate the
parameters of the stochastic volatility structure with GARCH as an auxiliary
model. In a comprehensive Monte Carlo study we show that our method estimates
the parameters of interest accurately. The simulation study
and an application to real vectors of daily returns of dimensions up to 148
show the method's computational advantage over the existing estimation
procedures.",Sequential Estimation of Multivariate Factor Stochastic Volatility Models,2023-02-14 17:06:16,"Giorgio Calzolari, Roxana Halbleib, Christian Mücher","http://arxiv.org/abs/2302.07052v1, http://arxiv.org/pdf/2302.07052v1",econ.EM
29719,em,"This paper presents an inference method for the local average treatment
effect (LATE) in the presence of high-dimensional covariates, irrespective of
the strength of identification. We propose a novel high-dimensional conditional
test statistic with uniformly correct asymptotic size. We provide an
easy-to-implement algorithm to infer the high-dimensional LATE by inverting our
test statistic and employing the double/debiased machine learning method.
Simulations indicate that our test is robust against both weak identification
and high dimensionality concerning size control and power performance,
outperforming other conventional tests. Applying the proposed method to
railroad and population data to study the effect of railroad access on urban
population growth, we observe that our methodology yields confidence intervals
that are 49% to 92% shorter than conventional results, depending on
specifications.",Identification-robust inference for the LATE with high-dimensional covariates,2023-02-20 07:28:50,Yukun Ma,"http://arxiv.org/abs/2302.09756v4, http://arxiv.org/pdf/2302.09756v4",econ.EM
29720,em,"This paper studies settings where the analyst is interested in identifying
and estimating the average causal effect of a binary treatment on an outcome.
We consider a setup in which the outcome realization does not get immediately
realized after the treatment assignment, a feature that is ubiquitous in
empirical settings. The period between the treatment and the realization of the
outcome allows other observed actions to occur and affect the outcome. In this
context, we study several regression-based estimands routinely used in
empirical work to capture the average treatment effect and shed light on
interpreting them in terms of ceteris paribus effects, indirect causal effects,
and selection terms. We obtain three main and related takeaways. First, the
three most popular estimands do not generally satisfy what we call \emph{strong
sign preservation}, in the sense that these estimands may be negative even when
the treatment positively affects the outcome conditional on any possible
combination of other actions. Second, the most popular regression that includes
the other actions as controls satisfies strong sign preservation \emph{if and
only if} these actions are mutually exclusive binary variables. Finally, we
show that a linear regression that fully stratifies the other actions leads to
estimands that satisfy strong sign preservation.",Decomposition and Interpretation of Treatment Effects in Settings with Delayed Outcomes,2023-02-22 20:22:58,"Federico A. Bugni, Ivan A. Canay, Steve McBride","http://arxiv.org/abs/2302.11505v3, http://arxiv.org/pdf/2302.11505v3",econ.EM
29721,em,"Different proxy variables used in fiscal policy SVARs lead to contradicting
conclusions regarding the size of fiscal multipliers. In this paper, we show
that the conflicting results are due to violations of the exogeneity
assumptions, i.e. the commonly used proxies are endogenously related to the
structural shocks. We propose a novel approach to include proxy variables into
a Bayesian non-Gaussian SVAR, tailored to accommodate potentially endogenous
proxy variables. Using our model, we show that increasing government spending
is a more effective tool to stimulate the economy than reducing taxes. We
construct new exogenous proxies that can be used in the traditional proxy VAR
approach resulting in similar estimates compared to our proposed hybrid SVAR
model.",Estimating Fiscal Multipliers by Combining Statistical Identification with Potentially Endogenous Proxies,2023-02-25 14:51:05,"Sascha A. Keweloh, Mathias Klein, Jan Prüser","http://arxiv.org/abs/2302.13066v3, http://arxiv.org/pdf/2302.13066v3",econ.EM
29722,em,"Local Projection is widely used for impulse response estimation, with the
Fixed Effect (FE) estimator being the default for panel data. This paper
highlights the presence of Nickell bias for all regressors in the FE estimator,
even if lagged dependent variables are absent in the regression. This bias is
the consequence of the inherent panel predictive specification. We recommend
using the split-panel jackknife estimator to eliminate the asymptotic bias and
restore the standard statistical inference. Revisiting three macro-finance
studies on the linkage between financial crises and economic contraction, we
find that the FE estimator substantially underestimates the post-crisis
economic losses.",Nickell Bias in Panel Local Projection: Financial Crises Are Worse Than You Think,2023-02-27 03:54:33,"Ziwei Mei, Liugang Sheng, Zhentao Shi","http://arxiv.org/abs/2302.13455v3, http://arxiv.org/pdf/2302.13455v3",econ.EM
29723,em,"Experiments are an important tool to measure the impacts of interventions.
However, in experimental settings with one-sided noncompliance, extant
empirical approaches may not produce the estimands a decision-maker needs to
solve their problem. For example, these experimental designs are common in
digital advertising settings, but typical methods do not yield effects that
inform the intensive margin -- how much should be spent or how many consumers
should be reached with a campaign. We propose a solution that combines a novel
multi-cell experimental design with modern estimation techniques that enables
decision-makers to recover enough information to solve problems with an
intensive margin. Our design is straightforward to implement. Using data from
advertising experiments at Facebook, we demonstrate our approach outperforms
standard techniques in recovering treatment effect parameters. Through a simple
advertising reach decision problem, we show that our approach generates better
decisions relative to standard techniques.","A multi-cell experimental design to recover policy relevant treatment effects, with an application to online advertising",2023-02-27 17:56:15,"Caio Waisman, Brett R. Gordon","http://arxiv.org/abs/2302.13857v2, http://arxiv.org/pdf/2302.13857v2",econ.EM
29724,em,"We examine the incremental value of news-based data relative to the FRED-MD
economic indicators for quantile predictions (now- and forecasts) of
employment, output, inflation and consumer sentiment. Our results suggest that
news data contain valuable information not captured by economic indicators,
particularly for left-tail forecasts. Methods that capture quantile-specific
non-linearities produce superior forecasts relative to methods that feature
linear predictive relationships. However, adding news-based data substantially
increases the performance of quantile-specific linear models, especially in the
left tail. Variable importance analyses reveal that left tail predictions are
determined by both economic and textual indicators, with the latter having the
most pronounced impact on consumer sentiment.",Forecasting Macroeconomic Tail Risk in Real Time: Do Textual Data Add Value?,2023-02-27 20:44:34,"Philipp Adämmer, Jan Prüser, Rainer Schüssler","http://arxiv.org/abs/2302.13999v1, http://arxiv.org/pdf/2302.13999v1",econ.EM
29725,em,"This article discusses the use of dynamic factor models in macroeconomic
forecasting, with a focus on the Factor-Augmented Error Correction Model
(FECM). The FECM combines the advantages of cointegration and dynamic factor
models, providing a flexible and reliable approach to macroeconomic
forecasting, especially for non-stationary variables. We evaluate the
forecasting performance of the FECM model on a large dataset of 117 Moroccan
economic series with quarterly frequency. Our study shows that FECM outperforms
traditional econometric models in terms of forecasting accuracy and robustness.
The inclusion of long-term information and common factors in FECM enhances its
ability to capture economic dynamics and leads to better forecasting
performance than other competing models. Our results suggest that FECM can be a
valuable tool for macroeconomic forecasting in Morocco and other similar
economies.",Macroeconomic Forecasting using Dynamic Factor Models: The Case of Morocco,2023-02-28 01:37:35,Daoui Marouane,"http://dx.doi.org/10.48374/IMIST.PRSM/ame-v5i2.39806, http://arxiv.org/abs/2302.14180v3, http://arxiv.org/pdf/2302.14180v3",econ.EM
29726,em,"This paper proposes a linear categorical random coefficient model, in which
the random coefficients follow parametric categorical distributions. The
distributional parameters are identified based on a linear recurrence structure
of moments of the random coefficients. A Generalized Method of Moments
estimation procedure is proposed, an approach also employed by Peter Schmidt and his
coauthors to address heterogeneity in time effects in panel data models. Using
Monte Carlo simulations, we find that moments of the random coefficients can be
estimated reasonably accurately, but large samples are required for estimation
of the parameters of the underlying categorical distribution. The utility of
the proposed estimator is illustrated by estimating the distribution of returns
to education in the U.S. by gender and educational levels. We find that rising
heterogeneity between educational groups is mainly due to the increasing
returns to education for those with postsecondary education, whereas within
group heterogeneity has been rising mostly in the case of individuals with high
school or less education.",Identification and Estimation of Categorical Random Coefficient Models,2023-02-28 11:06:26,"Zhan Gao, M. Hashem Pesaran","http://arxiv.org/abs/2302.14380v1, http://arxiv.org/pdf/2302.14380v1",econ.EM
29727,em,"This paper revisits the Lagrange multiplier type test for the null hypothesis
of no cross-sectional dependence in large panel data models. We propose a
unified test procedure and its power enhancement version, which show robustness
for a wide class of panel model contexts. Specifically, the two procedures are
applicable to both heterogeneous and fixed effects panel data models in the
presence of weakly exogenous as well as lagged dependent regressors, allowing
for a general form of nonnormal error distribution. With the tools from Random
Matrix Theory, the asymptotic validity of the test procedures is established
under the simultaneous limit scheme where the number of time periods and the
number of cross-sectional units go to infinity proportionally. The derived
theories are accompanied by detailed Monte Carlo experiments, which confirm the
robustness of the two tests and also suggest the validity of the power
enhancement technique.",Unified and robust Lagrange multiplier type tests for cross-sectional independence in large panel data models,2023-02-28 11:15:52,"Zhenhong Huang, Zhaoyuan Li, Jianfeng Yao","http://arxiv.org/abs/2302.14387v1, http://arxiv.org/pdf/2302.14387v1",econ.EM
29728,em,"This paper develops a new specification test for the instrument weakness when
the number of instruments $K_n$ is large with a magnitude comparable to the
sample size $n$. The test relies on the fact that the difference between the
two-stage least squares (2SLS) estimator and the ordinary least squares (OLS)
estimator asymptotically disappears when there are many weak instruments, but
otherwise converges to a non-zero limit. We establish the limiting distribution
of the difference within the above two specifications, and introduce a
delete-$d$ Jackknife procedure to consistently estimate the asymptotic
variance/covariance of the difference. Monte Carlo experiments demonstrate the
good performance of the test procedure for both cases of single and multiple
endogenous variables. Additionally, we re-examine the analysis of returns to
education data in Angrist and Krueger (1991) using our proposed test. Both the
simulation results and empirical analysis indicate the reliability of the test.",A specification test for the strength of instrumental variables,2023-02-28 11:23:51,"Zhenhong Huang, Chen Wang, Jianfeng Yao","http://arxiv.org/abs/2302.14396v1, http://arxiv.org/pdf/2302.14396v1",econ.EM
29729,em,"This paper investigates the behavior of Stock and Yogo (2005)'s first-stage F
statistic and the Cragg-Donald statistic (Cragg and Donald, 1993) when the
number of instruments and the sample size go to infinity in a comparable
magnitude. Our theory shows that the first-stage F test is oversized for
detecting many weak instruments. We next propose an asymptotically valid
correction of the F statistic for testing weakness of instruments. The theory
is also used to construct confidence intervals for the strength of instruments.
As for the Cragg-Donald statistic, we obtain an asymptotically valid correction
in the case of two endogenous variables. Monte Carlo experiments demonstrate
the satisfactory performance of the proposed methods in both the single and the
multiple endogenous variable settings. The usefulness of the proposed tests
is illustrated by an analysis of the returns to education data in Angrist and
Krueger (1991).",Assessing the strength of many instruments with the first-stage F and Cragg-Donald statistics,2023-02-28 12:03:02,"Zhenhong Huang, Chen Wang, Jianfeng Yao","http://arxiv.org/abs/2302.14423v1, http://arxiv.org/pdf/2302.14423v1",econ.EM
31608,gn,"This study examines the influence of grandchildren's gender on grandparents'
voting behavior using independently collected individual-level data. The survey
was conducted immediately after the House of Councilors election in Japan. I
observed that individuals with a granddaughter were around 10% more likely to
vote for female candidates than those without. However, having a daughter
did not affect the parents' voting behavior. Furthermore, having a son or a
grandson did not influence grandparents' voting behavior. This implies that
grandparents voted for their granddaughter's future benefit because
granddaughters may be too young to vote in a male-dominated and aging society.",Granddaughter and voting for a female candidate,2021-02-25 02:22:12,Eiji Yamamura,"http://arxiv.org/abs/2102.13464v1, http://arxiv.org/pdf/2102.13464v1",econ.GN
29730,em,"Dynamic logit models are popular tools in economics to measure state
dependence. This paper introduces a new method to derive moment restrictions in
a large class of such models with strictly exogenous regressors and fixed
effects. We exploit the common structure of logit-type transition probabilities
and elementary properties of rational fractions, to formulate a systematic
procedure that scales naturally with model complexity (e.g. the lag order or the
number of observed time periods). We detail the construction of moment
restrictions in binary response models of arbitrary lag order as well as
first-order panel vector autoregressions and dynamic multinomial logit models.
Identification of common parameters and average marginal effects is also
discussed for the binary response case. Finally, we illustrate our results by
studying the dynamics of drug consumption amongst young people inspired by Deza
(2015).",Transition Probabilities and Moment Restrictions in Dynamic Fixed Effects Logit Models,2023-03-01 00:00:46,Kevin Dano,"http://arxiv.org/abs/2303.00083v2, http://arxiv.org/pdf/2303.00083v2",econ.EM
29731,em,"This paper studies averages of intersection bounds -- the bounds defined by
the infimum of a collection of regression functions -- and other similar
functionals of these bounds, such as averages of saddle values. Examples of
such parameters are Frechet-Hoeffding bounds and Makarov (1981) bounds on
distributional effects. The proposed estimator classifies covariate values into
the regions corresponding to the identity of the binding regression function
and takes the sample average. The paper shows that the proposed moment function
is insensitive to first-order classification mistakes, enabling various
nonparametric and regularized/machine learning classifiers in the first
(classification) step. The result is generalized to cover bounds on the values
of linear programming problems and best linear predictor of intersection
bounds.",Adaptive Estimation of Intersection Bounds: a Classification Approach,2023-03-02 08:24:37,Vira Semenova,"http://arxiv.org/abs/2303.00982v1, http://arxiv.org/pdf/2303.00982v1",econ.EM
29732,em,"Monthly and weekly economic indicators are often taken to be the largest
common factor estimated from high and low frequency data, either separately or
jointly. To incorporate mixed frequency information without directly modeling
them, we target a low frequency diffusion index that is already available, and
treat high frequency values as missing. We impute these values using multiple
factors estimated from the high frequency data. In the empirical examples
considered, static matrix completion that does not account for serial
correlation in the idiosyncratic errors yields imprecise estimates of the
missing values irrespective of how the factors are estimated. Single equation
and systems-based dynamic procedures that account for serial correlation yield
imputed values that are closer to the observed low frequency ones. This is the
case in the counterfactual exercise that imputes the monthly values of consumer
sentiment series before 1978 when the data was released only on a quarterly
basis. This is also the case for a weekly version of the CFNAI index of
economic activity that is imputed using seasonally unadjusted data. The imputed
series reveals episodes of increased variability of weekly economic information
that are masked by the monthly data, notably around the 2014-15 collapse in oil
prices.",Constructing High Frequency Economic Indicators by Imputation,2023-03-03 14:38:23,"Serena Ng, Susannah Scanlan","http://arxiv.org/abs/2303.01863v3, http://arxiv.org/pdf/2303.01863v3",econ.EM
29733,em,"This paper develops estimation and inference methods for censored quantile
regression models with high-dimensional controls. The methods are based on the
application of double/debiased machine learning (DML) framework to the censored
quantile regression estimator of Buchinsky and Hahn (1998). I provide valid
inference for low-dimensional parameters of interest in the presence of
high-dimensional nuisance parameters when implementing machine learning
estimators. The proposed estimator is shown to be consistent and asymptotically
normal. The performance of the estimator with high-dimensional controls is
illustrated with numerical simulation and an empirical application that
examines the effect of 401(k) eligibility on savings.",Censored Quantile Regression with Many Controls,2023-03-06 00:52:23,Seoyun Hong,"http://arxiv.org/abs/2303.02784v1, http://arxiv.org/pdf/2303.02784v1",econ.EM
29734,em,"Despite increasing popularity in empirical studies, the integration of
machine learning generated variables into regression models for statistical
inference suffers from the measurement error problem, which can bias estimation
and threaten the validity of inferences. In this paper, we develop a novel
approach to alleviate associated estimation biases. Our proposed approach,
EnsembleIV, creates valid and strong instrumental variables from weak learners
in an ensemble model, and uses them to obtain consistent estimates that are
robust against the measurement error problem. Our empirical evaluations, using
both synthetic and real-world datasets, show that EnsembleIV can effectively
reduce estimation biases across several common regression specifications, and
can be combined with modern deep learning techniques when dealing with
unstructured data.",EnsembleIV: Creating Instrumental Variables from Ensemble Learners for Robust Statistical Inference,2023-03-06 04:23:49,"Gordon Burtch, Edward McFowland III, Mochen Yang, Gediminas Adomavicius","http://arxiv.org/abs/2303.02820v1, http://arxiv.org/pdf/2303.02820v1",econ.EM
29735,em,"This paper studies the identification of perceived ex ante returns in the
context of binary human capital investment decisions. The environment is
characterised by uncertainty about future outcomes, with some uncertainty being
resolved over time. In this context, each individual holds a probability
distribution over different levels of returns. The paper uses the hypothetical
choice methodology to identify nonparametrically the population distribution of
several individual-specific distribution parameters, which are crucial for
counterfactual policy analyses. The empirical application estimates perceived
returns on overstaying for Afghan asylum seekers in Germany and evaluates the
effect of assisted voluntary return policies.",Identification of Ex Ante Returns Using Elicited Choice Probabilities,2023-03-06 13:25:49,Romuald Meango,"http://arxiv.org/abs/2303.03009v1, http://arxiv.org/pdf/2303.03009v1",econ.EM
29736,em,"This paper proposes a nonparametric estimator of the counterfactual copula of
two outcome variables that would be affected by a policy intervention. The
proposed estimator allows policymakers to conduct ex-ante evaluations by
comparing the estimated counterfactual and actual copulas as well as their
corresponding measures of association. Asymptotic properties of the
counterfactual copula estimator are established under regularity conditions.
These conditions are also used to validate the nonparametric bootstrap for
inference on counterfactual quantities. Simulation results indicate that our
estimation and inference procedures perform well in moderately sized samples.
Applying the proposed method to studying the effects of college education on
intergenerational income mobility under two counterfactual scenarios, we find
that while providing some college education to all children is unlikely to
promote mobility, offering a college degree to children from less educated
families can significantly reduce income persistence across generations.",Counterfactual Copula and Its Application to the Effects of College Education on Intergenerational Mobility,2023-03-12 16:35:44,"Tsung-Chih Lai, Jiun-Hua Su","http://arxiv.org/abs/2303.06658v1, http://arxiv.org/pdf/2303.06658v1",econ.EM
29737,em,"Identification-robust hypothesis tests are commonly based on the continuous
updating objective function or its score. When the number of moment conditions
grows proportionally with the sample size, the large-dimensional weighting
matrix prohibits the use of conventional asymptotic approximations and the
behavior of these tests remains unknown. We show that the structure of the
weighting matrix opens up an alternative route to asymptotic results when,
under the null hypothesis, the distribution of the moment conditions is
reflection invariant. In a heteroskedastic linear instrumental variables model,
we then establish asymptotic normality of conventional test statistics under
many instrument sequences. A key result is that the additional terms that
appear in the variance are negative. Revisiting a study on the elasticity of
substitution between immigrant and native workers where the number of
instruments is over a quarter of the sample size, the many instrument-robust
approximation indeed leads to substantially narrower confidence intervals.",Identification- and many instrument-robust inference via invariant moment conditions,2023-03-14 14:54:59,"Tom Boot, Johannes W. Ligtenberg","http://arxiv.org/abs/2303.07822v2, http://arxiv.org/pdf/2303.07822v2",econ.EM
29738,em,"This paper proposes a novel approach for identifying coefficients in an
earnings dynamics model with arbitrarily dependent contemporaneous income
shocks. Traditional methods relying on second moments fail to identify these
coefficients, emphasizing the need for non-Gaussianity assumptions that capture
information from higher moments. Our results contribute to the literature on
earnings dynamics by allowing models of earnings to have, for example, the
permanent income shock of a job change to be linked to the contemporaneous
transitory income shock of a relocation bonus.",Identifying an Earnings Process With Dependent Contemporaneous Income Shocks,2023-03-15 12:04:10,Dan Ben-Moshe,"http://arxiv.org/abs/2303.08460v2, http://arxiv.org/pdf/2303.08460v2",econ.EM
29739,em,"We consider penalized extremum estimation of a high-dimensional, possibly
nonlinear model that is sparse in the sense that most of its parameters are
zero but some are not. We use the SCAD penalty function, which provides model
selection consistent and oracle efficient estimates under suitable conditions.
However, asymptotic approximations based on the oracle model can be inaccurate
with the sample sizes found in many applications. This paper gives conditions
under which the bootstrap, based on estimates obtained through SCAD
penalization with thresholding, provides asymptotic refinements of size \(O
\left( n^{- 2} \right)\) for the error in the rejection (coverage) probability
of a symmetric hypothesis test (confidence interval) and \(O \left( n^{- 1}
\right)\) for the error in rejection (coverage) probability of a one-sided or
equal tailed test (confidence interval). The results of Monte Carlo experiments
show that the bootstrap can provide large reductions in errors in coverage
probabilities. The bootstrap is consistent, though it does not necessarily
provide asymptotic refinements, even if some parameters are close but not equal
to zero. Random-coefficients logit and probit models and nonlinear moment
models are examples of models to which the procedure applies.",Bootstrap based asymptotic refinements for high-dimensional nonlinear models,2023-03-17 01:52:03,"Joel L. Horowitz, Ahnaf Rafi","http://arxiv.org/abs/2303.09680v1, http://arxiv.org/pdf/2303.09680v1",econ.EM
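For readers unfamiliar with the penalty, two ingredients mentioned above — the SCAD penalty of Fan and Li (2001) and a hard-thresholding step — can be written in a few lines. The tuning values below are illustrative, and the bootstrap refinement procedure itself is not shown.

```python
# Sketch of the SCAD penalty (Fan and Li, 2001) and a hard-thresholding step.
# Tuning values (lam, a, tau) are illustrative; the bootstrap refinement of the
# paper is not shown here.
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty evaluated at |t|; a = 3.7 is the conventional choice."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.empty_like(t)
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    large = t > a * lam
    out[small] = lam * t[small]
    out[mid] = (2 * a * lam * t[mid] - t[mid] ** 2 - lam ** 2) / (2 * (a - 1))
    out[large] = lam ** 2 * (a + 1) / 2
    return out

def hard_threshold(beta_hat, tau):
    """Set coefficient estimates below tau (in absolute value) to exactly zero."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    return np.where(np.abs(beta_hat) > tau, beta_hat, 0.0)

beta_hat = np.array([1.8, 0.03, -0.9, 0.001])
print("SCAD penalties:", scad_penalty(beta_hat, lam=0.5))
print("thresholded estimates:", hard_threshold(beta_hat, tau=0.05))
```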
29740,em,"We examine asymptotic properties of the OLS estimator when the values of the
regressor of interest are assigned randomly and independently of other
regressors. We find that the OLS variance formula in this case is often
simplified, sometimes substantially. In particular, when the regressor of
interest is independent not only of other regressors but also of the error
term, the textbook homoskedastic variance formula is valid even if the error
term and auxiliary regressors exhibit a general dependence structure. In the
context of randomized controlled trials, this conclusion holds in completely
randomized experiments with constant treatment effects. When the error term is
heteroskedastic with respect to the regressor of interest, the variance formula
has to be adjusted not only for heteroskedasticity but also for the correlation
structure of the error term. However, even in the latter case, some
simplifications are possible, as only part of the correlation structure of the
error term needs to be taken into account. In the context of randomized
controlled trials, this implies that the textbook homoskedastic variance
formula is typically not valid if treatment effects are heterogeneous, but
heteroskedasticity-robust variance formulas are valid if treatment effects are
independent across units, even if the error term exhibits a general dependence
structure. In addition, we extend the results to the case when the regressor of
interest is assigned randomly at a group level, such as in randomized controlled
trials with treatment assignment determined at the group (e.g., school/village)
level.",Standard errors when a regressor is randomly assigned,2023-03-18 05:00:40,"Denis Chetverikov, Jinyong Hahn, Zhipeng Liao, Andres Santos","http://arxiv.org/abs/2303.10306v1, http://arxiv.org/pdf/2303.10306v1",econ.EM
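A small Monte Carlo, illustrative only, is consistent with the first claim above: when the regressor of interest is randomly assigned, independent of the error term, and the treatment effect is constant, the textbook homoskedastic standard error tracks the true sampling variability even though the error depends on an auxiliary regressor.

```python
# Illustrative Monte Carlo: d is randomly assigned and independent of the error,
# the effect of d is constant, and the error depends on an auxiliary regressor w.
# The textbook homoskedastic SE for the coefficient on d should then track the
# true sampling variability.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 500, 2000
slopes, textbook_se = [], []
for _ in range(reps):
    d = rng.binomial(1, 0.5, size=n)              # randomly assigned regressor
    w = rng.normal(size=n)                         # auxiliary regressor
    eps = np.exp(w / 2) * rng.normal(size=n)       # error depends on w, not on d
    y = 1.0 + 0.5 * d + 0.3 * w + eps              # constant effect of d
    X = np.column_stack([np.ones(n), d, w])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)          # textbook homoskedastic formula
    slopes.append(beta[1])
    textbook_se.append(np.sqrt(cov[1, 1]))

print("Monte Carlo sd of the coefficient on d:", round(float(np.std(slopes)), 4))
print("average textbook (homoskedastic) SE:", round(float(np.mean(textbook_se)), 4))
```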
29741,em,"Locally Robust (LR)/Orthogonal/Debiased moments have proven useful with
machine learning first steps, but their existence has not been investigated for
general parameters. In this paper, we provide a necessary and sufficient
condition, referred to as Restricted Local Non-surjectivity (RLN), for the
existence of such orthogonal moments to conduct robust inference on general
parameters of interest in regular semiparametric models. Importantly, RLN does
not require either identification of the parameters of interest or the nuisance
parameters. However, for orthogonal moments to be informative, the efficient
Fisher Information matrix for the parameter must be non-zero (though possibly
singular). Thus, orthogonal moments exist and are informative under more
general conditions than previously recognized. We demonstrate the utility of
our general results by characterizing orthogonal moments in a class of models
with Unobserved Heterogeneity (UH). For this class of models our method
delivers functional differencing as a special case. Orthogonality for general
smooth functionals of the distribution of UH is also characterized. As a second
major application, we investigate the existence of orthogonal moments and their
relevance for models defined by moment restrictions with possibly different
conditioning variables. We find orthogonal moments for the fully saturated two
stage least squares, for heterogeneous parameters in treatment effects, for
sample selection models, and for popular models of demand for differentiated
products. We apply our results to the Oregon Health Experiment to study
heterogeneous treatment effects of Medicaid on different health outcomes.",On the Existence and Information of Orthogonal Moments,2023-03-20 22:51:43,"Facundo Argañaraz, Juan Carlos Escanciano","http://arxiv.org/abs/2303.11418v2, http://arxiv.org/pdf/2303.11418v2",econ.EM
29742,em,"We discuss estimating conditional treatment effects in regression
discontinuity designs with multiple scores. While local linear regressions have
been popular in settings where the treatment status is completely described by
one running variable, they do not easily generalize to empirical applications
involving multiple treatment assignment rules. In practice, the multivariate
problem is usually reduced to a univariate one where using local linear
regressions is suitable. Instead, we propose a forest-based estimator that can
flexibly model multivariate scores, where we build two honest forests in the
sense of Wager and Athey (2018) on both sides of the treatment boundary. This
estimator is asymptotically normal and sidesteps the pitfalls of running local
linear regressions in higher dimensions. In simulations, we find our proposed
estimator outperforms local linear regressions in multivariate designs and is
competitive against the minimax-optimal estimator of Imbens and Wager (2019).
The implementation of this estimator is simple, can readily accommodate any
(fixed) number of running variables, and does not require estimating any
nuisance parameters of the data generating process.",Using Forests in Multivariate Regression Discontinuity Designs,2023-03-21 13:14:40,"Yiqi Liu, Yuan Qi","http://arxiv.org/abs/2303.11721v1, http://arxiv.org/pdf/2303.11721v1",econ.EM
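A stylized sketch of the fit-two-forests idea with two running variables follows. Note that scikit-learn forests are not honest in the Wager and Athey (2018) sense, so this only illustrates the mechanics (one forest per side of the treatment boundary, differenced at boundary points), not the proposed estimator or its guarantees; the data-generating process and tuning are illustrative.

```python
# Stylized sketch of the fit-two-forests construction with two running
# variables. scikit-learn forests are not "honest" in the Wager-Athey sense, so
# this only illustrates the mechanics; DGP and tuning are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 4000
scores = rng.uniform(-1, 1, size=(n, 2))                  # two running variables
treated = (scores[:, 0] >= 0) & (scores[:, 1] >= 0)        # treatment assignment rule
tau = 1.0                                                   # true effect at the boundary
y = scores @ np.array([0.5, -0.8]) + tau * treated + rng.normal(0.0, 0.5, n)

f1 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
f0 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
f1.fit(scores[treated], y[treated])
f0.fit(scores[~treated], y[~treated])

# Conditional effects along one segment of the boundary (score_1 = 0, score_2 >= 0)
boundary = np.column_stack([np.zeros(50), np.linspace(0, 1, 50)])
cate = f1.predict(boundary) - f0.predict(boundary)
print("average estimated boundary effect:", round(float(cate.mean()), 2), "(true effect 1.0)")
```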
29743,em,"This paper proposes a novel tool to nonparametrically identify models with a
discrete endogenous variable or treatment: semi-instrumental variables
(semi-IVs). A semi-IV is a variable that is relevant but only partially
excluded from the potential outcomes, i.e., excluded from at least one, but not
necessarily all, potential outcome equations. It follows that standard
instrumental variables (IVs), which are fully excluded from all the potential
outcomes, are a special (extreme) case of semi-IVs. I show that full exclusion
is stronger than necessary because the same objects that are usually identified
with an IV (Imbens and Angrist, 1994; Heckman and Vytlacil, 2005; Chernozhukov
and Hansen, 2005) can be identified with several semi-IVs instead, provided
there is (at least) one semi-IV excluded from each potential outcome. For
applied work, tackling endogeneity with semi-IVs instead of IVs should be an
attractive alternative, since semi-IVs are easier to find: most
selection-specific costs or benefits can be valid semi-IVs, for example. The
paper also provides a simple semi-IV GMM estimator for models with homogeneous
treatment effects and uses it to estimate the returns to education.","Don't (fully) exclude me, it's not necessary! Identification with semi-IVs",2023-03-22 18:44:59,Christophe Bruneel-Zupanc,"http://arxiv.org/abs/2303.12667v2, http://arxiv.org/pdf/2303.12667v2",econ.EM
29744,em,"This study proposes an estimator that combines statistical identification
with economically motivated restrictions on the interactions. The estimator is
identified by (mean) independent non-Gaussian shocks and allows for
incorporation of uncertain prior economic knowledge through an adaptive ridge
penalty. The estimator shrinks towards economically motivated restrictions when
the data is consistent with them and stops shrinkage when the data provides
evidence against the restriction. The estimator is applied to analyze the
interaction between the stock and oil market. The results suggest that what is
usually identified as oil-specific demand shocks can actually be attributed to
information shocks extracted from the stock market, which explain about 30-40%
of the oil price variation.",Uncertain Prior Economic Knowledge and Statistically Identified Structural Vector Autoregressions,2023-03-23 17:02:54,Sascha A. Keweloh,"http://arxiv.org/abs/2303.13281v1, http://arxiv.org/pdf/2303.13281v1",econ.EM
29745,em,"We introduce a simple tool to control for false discoveries and identify
individual signals in scenarios involving many tests, dependent test
statistics, and potentially sparse signals. The tool applies the Cauchy
combination test recursively on a sequence of expanding subsets of $p$-values
and is referred to as the sequential Cauchy combination test. While the
original Cauchy combination test aims to make a global statement about a set of
null hypotheses by summing transformed $p$-values, our sequential version
determines which $p$-values trigger the rejection of the global null. The
sequential test achieves strong familywise error rate control, exhibits less
conservatism compared to existing controlling procedures when dealing with
dependent test statistics, and provides a power boost. As illustrations, we
revisit two well-known large-scale multiple testing problems in finance for
which the test statistics have either serial dependence or cross-sectional
dependence, namely monitoring drift bursts in asset prices and searching for
assets with a nonzero alpha. In both applications, the sequential Cauchy
combination test proves to be a preferable alternative. It overcomes many of
the drawbacks inherent to inequality-based controlling procedures, extreme
value approaches, resampling and screening methods, and it improves the power
in simulations, leading to distinct empirical outcomes.",Sequential Cauchy Combination Test for Multiple Testing Problems with Financial Applications,2023-03-23 19:28:16,"Nabil Bouamara, Sébastien Laurent, Shuping Shi","http://arxiv.org/abs/2303.13406v2, http://arxiv.org/pdf/2303.13406v2",econ.EM
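The Cauchy combination p-value and a naive sequential pass over expanding subsets of p-values can be sketched as follows. The restart rule and the fixed level used below are simplifications for illustration; the paper's stopping rule is calibrated to control the familywise error rate.

```python
# Sketch: the Cauchy combination p-value and a naive sequential pass over
# expanding subsets of p-values. The restart rule and the fixed level alpha are
# simplifications; the paper calibrates critical values to control the FWER.
import numpy as np
from scipy import stats

def cauchy_combination_pvalue(pvals):
    """Global p-value from the Cauchy combination statistic."""
    stat = np.mean(np.tan((0.5 - np.asarray(pvals)) * np.pi))
    return stats.cauchy.sf(stat)

def sequential_cauchy(pvals, alpha=0.05):
    """Flag the indices at which the expanding-subset combination rejects."""
    flagged, active = [], []
    for i, p in enumerate(pvals):
        active.append(p)
        if cauchy_combination_pvalue(active) < alpha:
            flagged.append(i)        # the i-th p-value triggers the rejection
            active = []              # restart the expanding subset (simplification)
    return flagged

pvals = np.array([0.40, 0.35, 0.0004, 0.22, 0.51, 0.001, 0.63])
print("global Cauchy combination p-value:", cauchy_combination_pvalue(pvals))
print("flagged indices:", sequential_cauchy(pvals))
```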
29746,em,"This paper characterizes point identification results of the local average
treatment effect (LATE) using two imperfect instruments. The classical approach
(Imbens and Angrist (1994)) establishes the identification of LATE via an
instrument that satisfies exclusion, monotonicity, and independence. However,
it may be challenging to find a single instrument that satisfies all these
assumptions simultaneously. My paper uses two instruments but imposes weaker
assumptions on both instruments. The first instrument is allowed to violate the
exclusion restriction and the second instrument does not need to satisfy
monotonicity. Therefore, the first instrument can affect the outcome via both
direct effects and a shift in the treatment status. The direct effects can be
identified via exogenous variation in the second instrument and therefore the
local average treatment effect is identified. An estimator is proposed, and
using Monte Carlo simulations, it is shown to perform more robustly than the
instrumental variable estimand.",Point Identification of LATE with Two Imperfect Instruments,2023-03-24 07:22:26,Rui Wang,"http://arxiv.org/abs/2303.13795v1, http://arxiv.org/pdf/2303.13795v1",econ.EM
29747,em,"This paper proposes a framework to analyze the effects of counterfactual
policies on the unconditional quantiles of an outcome variable. For a given
counterfactual policy, we obtain identified sets for the effect of both
marginal and global changes in the proportion of treated individuals. To
conduct a sensitivity analysis, we introduce the quantile breakdown frontier, a
curve that (i) indicates whether a sensitivity analysis is possible or not, and
(ii) when a sensitivity analysis is possible, quantifies the amount of
selection bias consistent with a given conclusion of interest across different
quantiles. To illustrate our method, we perform a sensitivity analysis on the
effect of unionizing low income workers on the quantiles of the distribution of
(log) wages.",Sensitivity Analysis in Unconditional Quantile Effects,2023-03-25 02:18:20,Julian Martinez-Iriarte,"http://arxiv.org/abs/2303.14298v2, http://arxiv.org/pdf/2303.14298v2",econ.EM
29748,em,"We revisit identification based on timing and information set assumptions in
structural models, which have been used in the context of production functions,
demand equations, and hedonic pricing models (e.g. Olley and Pakes (1996),
Blundell and Bond (2000)). First, we demonstrate a general under-identification
problem using these assumptions in a simple version of the Blundell-Bond
dynamic panel model. In particular, the basic moment conditions can yield
multiple discrete solutions: one at the persistence parameter in the main
equation and another at the persistence parameter governing the regressor. We
then show that the problem can persist in a broader set of models but
disappears in models under stronger timing assumptions. We then propose
possible solutions in the simple setting by enforcing an assumed sign
restriction and conclude by using lessons from our basic identification
approach to propose more general practical advice for empirical researchers.",Under-Identification of Structural Models Based on Timing and Information Set Assumptions,2023-03-27 16:02:07,"Daniel Ackerberg, Garth Frazer, Kyoo il Kim, Yao Luo, Yingjun Su","http://arxiv.org/abs/2303.15170v1, http://arxiv.org/pdf/2303.15170v1",econ.EM
29749,em,"We study identification and estimation of endogenous linear and nonlinear
regression models without excluded instrumental variables, based on the
standard mean independence condition and a nonlinear relevance condition. Based
on the identification results, we propose two semiparametric estimators as well
as a discretization-based estimator that does not require any nonparametric
regressions. We establish their asymptotic normality and demonstrate via
simulations their robust finite-sample performance with respect to exclusion
restriction violations and endogeneity. Our approach is applied to study the
returns to education, and to test the direct effects of college proximity
indicators as well as family background variables on the outcome.",IV Regressions without Exclusion Restrictions,2023-04-02 23:54:19,"Wayne Yuan Gao, Rui Wang","http://arxiv.org/abs/2304.00626v3, http://arxiv.org/pdf/2304.00626v3",econ.EM
29750,em,"This paper studies semiparametric identification of substitution and
complementarity patterns between two goods using a panel multinomial choice
model with bundles. The model allows the two goods to be either substitutes or
complements and admits heterogeneous complementarity through observed
characteristics. I first provide testable implications for the complementarity
relationship between goods. I then characterize the sharp identified set for
the model parameters and provide sufficient conditions for point
identification. The identification analysis accommodates endogenous covariates
through flexible dependence structures between observed characteristics and
fixed effects while placing no distributional assumptions on unobserved
preference shocks. My method is shown to perform more robustly than the
parametric method through Monte Carlo simulations. As an extension, I allow for
unobserved heterogeneity in the complementarity, investigate scenarios
involving more than two goods, and study a class of nonseparable utility
functions.",Testing and Identifying Substitution and Complementarity Patterns,2023-04-03 04:45:09,Rui Wang,"http://arxiv.org/abs/2304.00678v1, http://arxiv.org/pdf/2304.00678v1",econ.EM
29751,em,"Granular instrumental variables (GIV) has experienced sharp growth in
empirical macro-finance. The methodology's rise showcases granularity's
potential for identification in a wide set of economic environments, like the
estimation of spillovers and demand systems. I propose a new estimator--called
robust granular instrumental variables (RGIV)--that allows researchers to study
unit-level heterogeneity in spillovers within GIV's framework. In contrast to
GIV, RGIV also allows for unknown shock variances and does not require skewness
of the size distribution of units. I also develop a test of overidentifying
restrictions that evaluates RGIV's compatibility with the data, a parameter
restriction test that evaluates the appropriateness of the homogeneous
spillovers assumption, and extend the framework to allow for observable
explanatory variables. Applied to the Euro area, I find strong evidence of
country-level heterogeneity in sovereign yield spillovers. In simulations, I
show that RGIV produces reliable and informative confidence intervals.",Heterogeneity-robust granular instruments,2023-04-03 21:07:56,Eric Qian,"http://arxiv.org/abs/2304.01273v2, http://arxiv.org/pdf/2304.01273v2",econ.EM
29752,em,"We introduce a novel framework for individual-level welfare analysis. It
builds on a parametric model for continuous demand with a quasilinear utility
function, allowing for heterogeneous coefficients and unobserved
individual-product-level preference shocks. We obtain bounds on the
individual-level consumer welfare loss at any confidence level due to a
hypothetical price increase, solving a scalable optimization problem
constrained by a new confidence set under an independence restriction. This
confidence set is computationally simple, robust to weak instruments,
nonlinearity, and partial identification. In addition, it may have applications
beyond welfare analysis. Monte Carlo simulations and two empirical applications
on gasoline and food demand demonstrate the effectiveness of our method.","Individual Welfare Analysis: Random Quasilinear Utility, Independence, and Confidence Bounds",2023-04-04 19:15:38,"Junlong Feng, Sokbae Lee","http://arxiv.org/abs/2304.01921v2, http://arxiv.org/pdf/2304.01921v2",econ.EM
29753,em,"Many estimators of dynamic discrete choice models with persistent unobserved
heterogeneity have desirable statistical properties but are computationally
intensive. In this paper we propose a method to quicken estimation for a broad
class of dynamic discrete choice problems by exploiting semiparametric index
restrictions. Specifically, we propose an estimator for models whose reduced
form parameters are injective functions of one or more linear indices (Ahn,
Ichimura, Powell and Ruud 2018), a property we term index invertibility. We
establish that index invertibility implies a set of equality constraints on the
model parameters. Our proposed estimator uses the equality constraints to
decrease the dimension of the optimization problem, thereby generating
computational gains. Our main result shows that the proposed estimator is
asymptotically equivalent to the unconstrained, computationally heavy
estimator. In addition, we provide a series of results on the number of
independent index restrictions on the model parameters, providing theoretical
guidance on the extent of computational gains. Finally, we demonstrate the
advantages of our approach via Monte Carlo simulations.",Faster estimation of dynamic discrete choice models using index sufficiency,2023-04-05 03:06:28,"Jackson Bunting, Takuya Ura","http://arxiv.org/abs/2304.02171v2, http://arxiv.org/pdf/2304.02171v2",econ.EM
30218,em,"This paper illustrates two algorithms designed in Forneron & Ng (2020): the
resampled Newton-Raphson (rNR) and resampled quasi-Newton (rqN) algorithms
which speed-up estimation and bootstrap inference for structural models. An
empirical application to BLP shows that computation time decreases from nearly
5 hours with the standard bootstrap to just over 1 hour with rNR, and only 15
minutes using rqN. A first Monte-Carlo exercise illustrates the accuracy of the
method for estimation and inference in a probit IV regression. A second
exercise additionally illustrates statistical efficiency gains relative to
standard estimation for simulation-based estimation using a dynamic panel
regression example.",Estimation and Inference by Stochastic Optimization: Three Examples,2021-02-20 23:57:45,"Jean-Jacques Forneron, Serena Ng","http://arxiv.org/abs/2102.10443v1, http://arxiv.org/pdf/2102.10443v1",econ.EM
29754,em,"We present a novel approach to causal measurement for advertising, namely to
use exogenous variation in advertising exposure (RCTs) for a subset of ad
campaigns to build a model that can predict the causal effect of ad campaigns
that were run without RCTs. This approach -- Predictive Incrementality by
Experimentation (PIE) -- frames the task of estimating the causal effect of an
ad campaign as a prediction problem, with the unit of observation being an RCT
itself. In contrast, traditional causal inference approaches with observational
data seek to adjust covariate imbalance at the user level. A key insight is to
use post-campaign features, such as last-click conversion counts, that do not
require an RCT, as features in our predictive model. We find that our PIE model
recovers RCT-derived incremental conversions per dollar (ICPD) much better than
the program evaluation approaches analyzed in Gordon et al. (forthcoming). The
prediction errors from the best PIE model are 48%, 42%, and 62% of the
RCT-based average ICPD for upper-, mid-, and lower-funnel conversion outcomes,
respectively. In contrast, across the same data, the average prediction error
of stratified propensity score matching exceeds 491%, and that of
double/debiased machine learning exceeds 2,904%. Using a decision-making
framework inspired by industry, we show that PIE leads to different decisions
compared to RCTs for only 6% of upper-funnel, 7% of mid-funnel, and 13% of
lower-funnel outcomes. We conclude that PIE could enable advertising platforms
to scale causal ad measurement by extrapolating from a limited number of RCTs
to a large set of non-experimental ad campaigns.",Predictive Incrementality by Experimentation (PIE) for Ad Measurement,2023-04-14 00:37:04,"Brett R. Gordon, Robert Moakler, Florian Zettelmeyer","http://arxiv.org/abs/2304.06828v1, http://arxiv.org/pdf/2304.06828v1",econ.EM
29755,em,"Model mis-specification in multivariate econometric models can strongly
influence quantities of interest such as structural parameters, forecast
distributions or responses to structural shocks, even more so if higher-order
forecasts or responses are considered, due to parameter convolution. We propose
a simple method for addressing these specification issues in the context of
Bayesian VARs. Our method, called coarsened Bayesian VARs (cBVARs), replaces
the exact likelihood with a coarsened likelihood that takes into account that
the model might be mis-specified along important but unknown dimensions.
Coupled with a conjugate prior, this results in a computationally simple model.
As opposed to more flexible specifications, our approach avoids overfitting, is
simple to implement and estimation is fast. The resulting cBVAR performs well
in simulations for several types of mis-specification. Applied to US data,
cBVARs improve point and density forecasts compared to standard BVARs, and lead
to milder but more persistent negative effects of uncertainty shocks on output.",Coarsened Bayesian VARs -- Correcting BVARs for Incorrect Specification,2023-04-16 21:40:52,"Florian Huber, Massimiliano Marcellino","http://arxiv.org/abs/2304.07856v2, http://arxiv.org/pdf/2304.07856v2",econ.EM
29756,em,"The accurate prediction of short-term electricity prices is vital for
effective trading strategies, power plant scheduling, profit maximisation and
efficient system operation. However, uncertainties in supply and demand make
such predictions challenging. We propose a hybrid model that combines a
techno-economic energy system model with stochastic models to address this
challenge. The techno-economic model in our hybrid approach provides a deep
understanding of the market. It captures the underlying factors and their
impacts on electricity prices, which is impossible with statistical models
alone. The statistical models incorporate non-techno-economic aspects, such as
the expectations and speculative behaviour of market participants, through the
interpretation of prices. The hybrid model generates both conventional point
predictions and probabilistic forecasts, providing a comprehensive
understanding of the market landscape. Probabilistic forecasts are particularly
valuable because they account for market uncertainty, facilitating informed
decision-making and risk management. Our model delivers state-of-the-art
results, helping market participants to make informed decisions and operate
their systems more efficiently.",A hybrid model for day-ahead electricity price forecasting: Combining fundamental and stochastic modelling,2023-04-19 01:53:47,"Mira Watermeyer, Thomas Möbius, Oliver Grothe, Felix Müsgens","http://arxiv.org/abs/2304.09336v1, http://arxiv.org/pdf/2304.09336v1",econ.EM
29757,em,"Based on the statistical yearbook data and related patent data of 287 cities
in China from 2000 to 2020, this study regards the policy of establishing the
national high-tech zones as a quasi-natural experiment. Using this experiment,
this study first estimates the treatment effect of the policy and checks the
robustness of the estimates. The study then examines heterogeneity across
different geographic regions of China and across city tiers, and explores the
possible mechanisms behind the policy's effect. The evidence suggests that the
policy operates through financial support, agglomeration of the secondary
industry, and spillovers. Finally, the study examines the spillovers in more
detail and documents the distribution of the spillover effect.",The Impact of Industrial Zone: Evidence from China's National High-tech Zone Policy,2023-04-19 18:58:15,Li Han,"http://arxiv.org/abs/2304.09775v1, http://arxiv.org/pdf/2304.09775v1",econ.EM
29758,em,"We propose a rate optimal estimator for the linear regression model on
network data with interacted (unobservable) individual effects. The estimator
achieves a faster rate of convergence $N$ compared to the standard estimators'
$\sqrt{N}$ rate and is efficient in cases that we discuss. We observe that the
individual effects alter the eigenvalue distribution of the data's matrix
representation in significant and distinctive ways. We subsequently offer a
correction for the \textit{ordinary least squares}' objective function to
attenuate the statistical noise that arises due to the individual effects, and
in some cases, completely eliminate it. The new estimator is asymptotically
normal and we provide a valid estimator for its asymptotic covariance matrix.
While this paper only considers models accounting for first-order interactions
between individual effects, our estimation procedure is naturally extendable to
higher-order interactions and more general specifications of the error terms.",The Ordinary Least Eigenvalues Estimator,2023-04-25 06:42:27,Yassine Sbai Sassi,"http://arxiv.org/abs/2304.12554v1, http://arxiv.org/pdf/2304.12554v1",econ.EM
29813,em,"This paper focuses on the task of detecting local episodes involving
violation of the standard It\^o semimartingale assumption for financial asset
prices in real time that might induce arbitrage opportunities. Our proposed
detectors, defined as stopping rules, are applied sequentially to continually
incoming high-frequency data. We show that they are asymptotically
exponentially distributed in the absence of It\^o semimartingale violations. On
the other hand, when a violation occurs, we can achieve immediate detection
under infill asymptotics. A Monte Carlo study demonstrates that the asymptotic
results provide a good approximation to the finite-sample behavior of the
sequential detectors. An empirical application to S&P 500 index futures data
corroborates the effectiveness of our detectors in swiftly identifying the
emergence of an extreme return persistence episode in real time.",Real-Time Detection of Local No-Arbitrage Violations,2023-07-20 16:42:52,"Torben G. Andersen, Viktor Todorov, Bo Zhou","http://arxiv.org/abs/2307.10872v1, http://arxiv.org/pdf/2307.10872v1",econ.EM
29759,em,"Accurate and reliable prediction of individual travel mode choices is crucial
for developing multi-mode urban transportation systems, conducting
transportation planning and formulating traffic demand management strategies.
Traditional discrete choice models have dominated the modelling methods for
decades yet suffer from strict model assumptions and low prediction accuracy.
In recent years, machine learning (ML) models, such as neural networks and
boosting models, are widely used by researchers for travel mode choice
prediction and have yielded promising results. However, despite the superior
prediction performance, a large body of ML methods, especially the branch of
neural network models, is also limited by overfitting and tedious model
structure determination process. To bridge this gap, this study proposes an
enhanced multilayer perceptron (MLP; a neural network) with two hidden layers
for travel mode choice prediction; this MLP is enhanced by XGBoost (a boosting
method) for feature selection and by a grid search method that determines the
optimal number of hidden neurons in each hidden layer. The proposed method was trained and
tested on a real resident travel diary dataset collected in Chengdu, China.",Enhanced multilayer perceptron with feature selection and grid search for travel mode choice prediction,2023-04-25 13:05:30,"Li Tang, Chuanli Tang, Qi Fu","http://arxiv.org/abs/2304.12698v2, http://arxiv.org/pdf/2304.12698v2",econ.EM
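A minimal sketch of the described pipeline on synthetic data, assuming xgboost and scikit-learn are available: importance-based feature selection with XGBoost followed by a grid search over the two hidden-layer sizes of an MLP. The Chengdu travel-diary data and the authors' exact settings are not used.

```python
# Minimal sketch of the pipeline on synthetic data: XGBoost importance-based
# feature selection, then a grid search over the sizes of the two hidden
# layers of an MLP classifier. Assumes xgboost and scikit-learn are installed.
import numpy as np
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=25, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: keep the ten features XGBoost finds most important
xgb = XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
keep = np.argsort(xgb.feature_importances_)[-10:]

# Step 2: grid search over the number of neurons in each hidden layer
grid = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid={"hidden_layer_sizes": [(h1, h2) for h1 in (16, 32, 64)
                                       for h2 in (8, 16, 32)]},
    cv=3,
)
grid.fit(X_tr[:, keep], y_tr)
print("selected hidden layer sizes:", grid.best_params_["hidden_layer_sizes"])
print("held-out accuracy:", round(grid.score(X_te[:, keep], y_te), 3))
```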
29760,em,"This paper focuses on estimating the coefficients and average partial effects
of observed regressors in nonlinear panel data models with interactive fixed
effects, using the common correlated effects (CCE) framework. The proposed
two-step estimation method involves applying principal component analysis to
estimate latent factors based on cross-sectional averages of the regressors in
the first step, and jointly estimating the coefficients of the regressors and
factor loadings in the second step. The asymptotic distributions of the
proposed estimators are derived under general conditions, assuming that the
number of time-series observations is comparable to the number of
cross-sectional observations. To correct for asymptotic biases of the
estimators, we introduce both analytical and split-panel jackknife methods, and
confirm their good performance in finite samples using Monte Carlo simulations.
An empirical application utilizes the proposed method to study the arbitrage
behaviour of nonfinancial firms across different security markets.",Common Correlated Effects Estimation of Nonlinear Panel Data Models,2023-04-26 02:55:25,"Liang Chen, Minyuan Zhang","http://arxiv.org/abs/2304.13199v1, http://arxiv.org/pdf/2304.13199v1",econ.EM
29761,em,"This paper studies the estimation of characteristic-based quantile factor
models where the factor loadings are unknown functions of observed individual
characteristics while the idiosyncratic error terms are subject to conditional
quantile restrictions. We propose a three-stage estimation procedure that is
easily implementable in practice and has nice properties. The convergence
rates, the limiting distributions of the estimated factors and loading
functions, and a consistent selection criterion for the number of factors at
each quantile are derived under general conditions. The proposed estimation
methodology is shown to work satisfactorily when: (i) the idiosyncratic errors
have heavy tails, (ii) the time dimension of the panel dataset is not large,
and (iii) the number of factors exceeds the number of characteristics. Finite
sample simulations and an empirical application aimed at estimating the loading
functions of the daily returns of a large panel of S\&P500 index securities
help illustrate these properties.",Estimation of Characteristics-based Quantile Factor Models,2023-04-26 03:16:58,"Liang Chen, Juan Jose Dolado, Jesus Gonzalo, Haozi Pan","http://arxiv.org/abs/2304.13206v1, http://arxiv.org/pdf/2304.13206v1",econ.EM
29762,em,"This paper studies difference-in-differences (DiD) setups with repeated
cross-sectional data and potential compositional changes across time periods.
We begin our analysis by deriving the efficient influence function and the
semiparametric efficiency bound for the average treatment effect on the treated
(ATT). We introduce nonparametric estimators that attain the semiparametric
efficiency bound under mild rate conditions on the estimators of the nuisance
functions, exhibiting a type of rate doubly-robust (DR) property. Additionally,
we document a trade-off related to compositional changes: We derive the
asymptotic bias of DR DiD estimators that erroneously exclude compositional
changes and the efficiency loss when one fails to correctly rule out
compositional changes. We propose a nonparametric Hausman-type test for
compositional changes based on these trade-offs. The finite sample performance
of the proposed DiD tools is evaluated through Monte Carlo experiments and an
empirical application. As a by-product of our analysis, we present a new
uniform stochastic expansion of the local polynomial multinomial logit
estimator, which may be of independent interest.",Difference-in-Differences with Compositional Changes,2023-04-27 05:30:24,"Pedro H. C. Sant'Anna, Qi Xu","http://arxiv.org/abs/2304.13925v1, http://arxiv.org/pdf/2304.13925v1",econ.EM
29763,em,"Forecasting financial time series (FTS) is an essential field in finance and
economics that anticipates market movements in financial markets. This paper
investigates the accuracy of text mining and technical analyses in forecasting
financial time series. It focuses on the S&P500 stock market index during the
pandemic, which tracks the performance of the largest publicly traded companies
in the US. The study compares two methods of forecasting the future price of
the S&P500: text mining, which uses NLP techniques to extract meaningful
insights from financial news, and technical analysis, which uses historical
price and volume data to make predictions. The study examines the advantages
and limitations of both methods and analyzes their performance in predicting the
S&P500. The FinBERT model outperforms other models in terms of S&P500 price
prediction, as evidenced by its lower RMSE value, and has the potential to
revolutionize financial analysis and prediction using financial news data.
Keywords: ARIMA, BERT, FinBERT, Forecasting Financial Time Series, GARCH, LSTM,
Technical Analysis, Text Mining JEL classifications: G4, C8",Assessing Text Mining and Technical Analyses on Forecasting Financial Time Series,2023-04-28 00:52:36,Ali Lashgari,"http://arxiv.org/abs/2304.14544v1, http://arxiv.org/pdf/2304.14544v1",econ.EM
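The text-mining leg can be sketched with an off-the-shelf FinBERT checkpoint, here assumed to be the publicly available ProsusAI/finbert model served through the transformers library; the forecasting models compared in the paper (ARIMA, GARCH, LSTM) are not shown, and the headlines below are made up.

```python
# Sketch of the text-mining step only: score headlines with a FinBERT sentiment
# model and aggregate to a daily signal. Assumes the transformers library and
# the public ProsusAI/finbert checkpoint; headlines below are made up.
import pandas as pd
from transformers import pipeline

finbert = pipeline("text-classification", model="ProsusAI/finbert")

headlines = pd.DataFrame({
    "date": ["2020-03-16", "2020-03-16", "2020-03-17"],
    "text": ["Stocks plunge as pandemic fears grip markets",
             "Fed announces emergency rate cut",
             "Markets rebound on stimulus hopes"],
})
scores = finbert(headlines["text"].tolist())
sign = {"positive": 1, "neutral": 0, "negative": -1}
headlines["sentiment"] = [sign[s["label"].lower()] * s["score"] for s in scores]
print(headlines.groupby("date")["sentiment"].mean())
```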
29764,em,"In this paper, we study the estimation of the threshold predictive regression
model with hybrid stochastic local unit root predictors. We demonstrate the
estimation procedure and derive the asymptotic distribution of the least squares
estimator and the IV-based estimator proposed by Magdalinos and Phillips
(2009), under the null hypothesis of a diminishing threshold effect. Simulation
experiments focus on the finite sample performance of our proposed estimators
and the corresponding predictability tests as in Gonzalo and Pitarakis (2012),
in the presence of threshold effects with stochastic local unit roots. An
empirical application to stock return equity indices illustrates the usefulness
of our framework in uncovering regimes of predictability during certain
periods. In particular, we focus on an aspect not previously examined in the
predictability literature, that is, the effect of economic policy uncertainty.",Estimation and Inference in Threshold Predictive Regression Models with Locally Explosive Regressors,2023-05-01 18:00:02,Christis Katsouris,"http://arxiv.org/abs/2305.00860v3, http://arxiv.org/pdf/2305.00860v3",econ.EM
29765,em,"An input-output table is an important data for analyzing the economic
situation of a region. Generally, the input-output table for each region
(regional input-output table) in Japan is not always publicly available, so it
is necessary to estimate the table. In particular, various methods have been
developed for estimating input coefficients, which are an important part of the
input-output table. Currently, non-survey methods are often used to estimate
input coefficients because they require less data and computation, but these
methods have some problems, such as discarding information and requiring
additional data for estimation.
  In this study, the input coefficients are estimated by approximating the
generation process with an artificial neural network (ANN) to mitigate the
problems of the non-survey methods and to estimate the input coefficients with
higher precision. To avoid over-fitting due to the limited data available, data
augmentation, called mixup, is introduced to increase the data size by
generating virtual regions through region composition and scaling.
  By comparing the estimates of the input coefficients with those of Japan as a
whole, it is shown that the proposed method is more accurate and more stable
than the conventional non-survey methods. In addition,
the estimated input coefficients for the three cities in Japan are generally
close to the published values for each city.",Estimating Input Coefficients for Regional Input-Output Tables Using Deep Learning with Mixup,2023-05-02 07:34:09,Shogo Fukui,"http://arxiv.org/abs/2305.01201v2, http://arxiv.org/pdf/2305.01201v2",econ.EM
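The mixup-style augmentation described above amounts to forming convex combinations of pairs of regions with a Beta-distributed weight, optionally rescaled. A minimal sketch with placeholder region-level arrays, not the study's actual data:

```python
# Sketch of mixup-style augmentation at the region level: virtual regions are
# convex combinations of pairs of real regions with a Beta(alpha, alpha) weight,
# then randomly rescaled. Arrays below are placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(4)
regions_X = rng.gamma(2.0, 1.0, size=(30, 5))      # e.g., sectoral inputs by region
regions_Y = rng.gamma(2.0, 1.0, size=(30, 5))      # e.g., sectoral outputs by region

def mixup_regions(X, Y, n_virtual=200, alpha=0.4, rng=rng):
    i = rng.integers(0, len(X), size=n_virtual)
    j = rng.integers(0, len(X), size=n_virtual)
    lam = rng.beta(alpha, alpha, size=(n_virtual, 1))
    scale = rng.uniform(0.5, 2.0, size=(n_virtual, 1))    # random rescaling
    X_new = scale * (lam * X[i] + (1 - lam) * X[j])
    Y_new = scale * (lam * Y[i] + (1 - lam) * Y[j])
    return X_new, Y_new

Xv, Yv = mixup_regions(regions_X, regions_Y)
X_train = np.vstack([regions_X, Xv])
Y_train = np.vstack([regions_Y, Yv])
print("training sample size after augmentation:", len(X_train))
```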
29766,em,"We consider the problem of extrapolating treatment effects across
heterogeneous populations (``sites""/``contexts""). We consider an idealized
scenario in which the researcher observes cross-sectional data for a large
number of units across several ``experimental"" sites in which an intervention
has already been implemented, and wishes to extrapolate to a new ``target""
site for which a baseline
survey of unit-specific, pre-treatment outcomes and relevant attributes is
available. We propose a transfer estimator that exploits cross-sectional
variation between individuals and sites to predict treatment outcomes using
baseline outcome data for the target location. We consider the problem of
obtaining a predictor of conditional average treatment effects at the target
site that is MSE optimal within a certain class and subject to data
constraints. Our approach is design-based in the sense that the performance of
the predictor is evaluated given the specific, finite selection of experimental
and target sites. Our approach is nonparametric, and our formal results concern
the construction of an optimal basis of predictors as well as convergence rates
for the estimated conditional average treatment effect relative to the
constrained-optimal population predictor for the target site. We illustrate our
approach using a combined data set of five multi-site randomized controlled
trials (RCTs) to evaluate the effect of conditional cash transfers on school
attendance.",Transfer Estimates for Causal Effects across Heterogeneous Sites,2023-05-02 17:02:15,Konrad Menzel,"http://arxiv.org/abs/2305.01435v2, http://arxiv.org/pdf/2305.01435v2",econ.EM
29767,em,"In this paper, we develop a novel large volatility matrix estimation
procedure for analyzing global financial markets. Practitioners often use
lower-frequency data, such as weekly or monthly returns, to address the issue
of different trading hours in the international financial market. However, this
approach can lead to inefficiency due to information loss. To mitigate this
problem, our proposed method, called Structured Principal Orthogonal complEment
Thresholding (Structured-POET), incorporates observation structural information
for both global and national factor models. We establish the asymptotic
properties of the Structured-POET estimator, and also demonstrate the drawbacks
of conventional covariance matrix estimation procedures when using
lower-frequency data. Finally, we apply the Structured-POET estimator to an
out-of-sample portfolio allocation study using international stock market data.",Large Global Volatility Matrix Analysis Based on Observation Structural Information,2023-05-02 17:46:02,"Sung Hoon Choi, Donggyu Kim","http://arxiv.org/abs/2305.01464v2, http://arxiv.org/pdf/2305.01464v2",econ.EM
29768,em,"Panel data models often use fixed effects to account for unobserved
heterogeneities. These fixed effects are typically incidental parameters and
their estimators converge slowly relative to the square root of the sample
size. In the maximum likelihood context, this induces an asymptotic bias of the
likelihood function. Test statistics derived from the asymptotically biased
likelihood, therefore, no longer follow their standard limiting distributions.
This causes severe distortions in test sizes. We consider a generic class of
dynamic nonlinear models with two-way fixed effects and propose an analytical
bias correction method for the likelihood function. We formally show that the
likelihood ratio, the Lagrange-multiplier, and the Wald test statistics derived
from the corrected likelihood follow their standard asymptotic distributions. A
bias-corrected estimator of the structural parameters can also be derived from
the corrected likelihood function. We evaluate the performance of our bias
correction procedure through simulations and an empirical example.",Debiased inference for dynamic nonlinear models with two-way fixed effects,2023-05-04 23:29:48,"Xuan Leng, Jiaming Mao, Yutao Sun","http://arxiv.org/abs/2305.03134v2, http://arxiv.org/pdf/2305.03134v2",econ.EM
29769,em,"This paper studies the principal component (PC) method-based estimation of
weak factor models with sparse loadings. We uncover an intrinsic near-sparsity
preservation property for the PC estimators of loadings, which comes from the
approximately upper triangular (block) structure of the rotation matrix. It
implies an asymmetric relationship among factors: the rotated loadings for a
stronger factor can be contaminated by those from a weaker one, but the
loadings for a weaker factor are almost free of the impact of those from a
stronger one. More importantly, the finding implies that there is no need to
use complicated penalties to sparsify the loading estimators. Instead, we adopt
a simple screening method to recover the sparsity and construct estimators for
various factor strengths. In addition, for sparse weak factor models, we
provide a singular value thresholding-based approach to determine the number of
factors and establish uniform convergence rates for PC estimators, which
complement Bai and Ng (2023). The accuracy and efficiency of the proposed
estimators are investigated via Monte Carlo simulations. The application to the
FRED-QD dataset reveals the underlying factor strengths and loading sparsity as
well as their dynamic features.",Does Principal Component Analysis Preserve the Sparsity in Sparse Weak Factor Models?,2023-05-10 10:10:24,"Jie Wei, Yonghui Zhang","http://dx.doi.org/10.13140/RG.2.2.23601.04965, http://arxiv.org/abs/2305.05934v1, http://arxiv.org/pdf/2305.05934v1",econ.EM
29820,em,"We establish the asymptotic validity of the bootstrap-based IVX estimator
proposed by Phillips and Magdalinos (2009) for the predictive regression model
parameter based on a local-to-unity specification of the autoregressive
coefficient which covers both nearly nonstationary and nearly stationary
processes. A mixed Gaussian limit distribution is obtained for the
bootstrap-based IVX estimator. The statistical validity of the theoretical
results is illustrated by Monte Carlo experiments for various statistical
inference problems.",Bootstrapping Nonstationary Autoregressive Processes with Predictive Regression Models,2023-07-26 22:10:12,Christis Katsouris,"http://arxiv.org/abs/2307.14463v1, http://arxiv.org/pdf/2307.14463v1",econ.EM
29770,em,"Experiments that use covariate adaptive randomization (CAR) are commonplace
in applied economics and other fields. In such experiments, the experimenter
first stratifies the sample according to observed baseline covariates and then
assigns treatment randomly within these strata so as to achieve balance
according to pre-specified stratum-specific target assignment proportions. In
this paper, we compute the semiparametric efficiency bound for estimating the
average treatment effect (ATE) in such experiments with binary treatments
allowing for the class of CAR procedures considered in Bugni, Canay, and Shaikh
(2018, 2019). This is a broad class of procedures and is motivated by those
used in practice. The stratum-specific target proportions play the role of the
propensity score conditional on all baseline covariates (and not just the
strata) in these experiments. Thus, the efficiency bound is a special case of
the bound in Hahn (1998), but conditional on all baseline covariates.
Additionally, this efficiency bound is shown to be achievable under the same
conditions as those used to derive the bound by using a cross-fitted
Nadaraya-Watson kernel estimator to form nonparametric regression adjustments.",Efficient Semiparametric Estimation of Average Treatment Effects Under Covariate Adaptive Randomization,2023-05-15 07:23:28,Ahnaf Rafi,"http://arxiv.org/abs/2305.08340v1, http://arxiv.org/pdf/2305.08340v1",econ.EM
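A stylized sketch of such an estimator: cross-fitted Nadaraya-Watson regression adjustments combined in an AIPW form, with the known stratum-specific target proportion playing the role of the propensity score. Bandwidth, kernel, and the data-generating process are illustrative choices, not those of the paper.

```python
# Stylized sketch: cross-fitted Nadaraya-Watson regression adjustments combined
# in an AIPW form, with the known stratum target proportion acting as the
# propensity score. Bandwidth, kernel and the DGP are illustrative choices.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
x = rng.uniform(-2, 2, size=n)                      # baseline covariate
strata = (x > 0).astype(int)                        # strata built from x
pi = np.where(strata == 1, 0.5, 0.3)                # stratum target proportions
d = rng.binomial(1, pi)                             # treatment assignment
y = np.sin(x) + d * (1.0 + 0.5 * x) + rng.normal(0, 0.5, n)   # true ATE = 1.0

def nw(x_train, y_train, x_eval, h=0.3):
    """Nadaraya-Watson regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

folds = rng.integers(0, 2, size=n)                  # two-fold cross-fitting
m1, m0 = np.empty(n), np.empty(n)
for k in (0, 1):
    tr, ev = folds != k, folds == k
    m1[ev] = nw(x[tr & (d == 1)], y[tr & (d == 1)], x[ev])
    m0[ev] = nw(x[tr & (d == 0)], y[tr & (d == 0)], x[ev])

ate = np.mean(m1 - m0 + d * (y - m1) / pi - (1 - d) * (y - m0) / (1 - pi))
print("regression-adjusted ATE estimate:", round(float(ate), 3), "(true ATE 1.0)")
```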
29771,em,"We introduce a new HD DCC-HEAVY class of hierarchical-type factor models for
conditional covariance matrices of high-dimensional returns, employing the
corresponding realized measures built from higher-frequency data. The modelling
approach features sophisticated asymmetric dynamics in covariances coupled with
straightforward estimation and forecasting schemes, independent of the
cross-sectional dimension of the assets under consideration. Empirical analyses
suggest the HD DCC-HEAVY models have a better in-sample fit, and deliver
statistically and economically significant out-of-sample gains relative to the
standard benchmarks and existing hierarchical factor models. The results are
robust under different market conditions.",Hierarchical DCC-HEAVY Model for High-Dimensional Covariance Matrices,2023-05-15 12:44:24,"Emilija Dzuverovic, Matteo Barigozzi","http://arxiv.org/abs/2305.08488v1, http://arxiv.org/pdf/2305.08488v1",econ.EM
29772,em,"This paper aims to address the issue of semiparametric efficiency for
cointegration rank testing in finite-order vector autoregressive models, where
the innovation distribution is considered an infinite-dimensional nuisance
parameter. Our asymptotic analysis relies on Le Cam's theory of limit
experiment, which in this context takes the form of Locally Asymptotically
Brownian Functional (LABF). By leveraging the structural version of LABF, an
Ornstein-Uhlenbeck experiment, we develop the asymptotic power envelopes of
asymptotically invariant tests for both cases with and without a time trend. We
propose feasible tests based on a nonparametrically estimated density and
demonstrate that their power can achieve the semiparametric power envelopes,
making them semiparametrically optimal. We validate the theoretical results
through large-sample simulations and illustrate satisfactory size control and
excellent power performance of our tests under small samples. In both cases
with and without time trend, we show that a remarkable amount of additional
power can be obtained from non-Gaussian distributions.",Semiparametrically Optimal Cointegration Test,2023-05-13 18:44:09,Bo Zhou,"http://arxiv.org/abs/2305.08880v1, http://arxiv.org/pdf/2305.08880v1",econ.EM
29773,em,"This paper examines the nonparametric identifiability of production
functions, considering firm heterogeneity beyond Hicks-neutral technology
terms. We propose a finite mixture model to account for unobserved
heterogeneity in production technology and productivity growth processes. Our
analysis demonstrates that the production function for each latent type can be
nonparametrically identified using four periods of panel data, relying on
assumptions similar to those employed in existing literature on production
function and panel data identification. By analyzing Japanese plant-level panel
data, we uncover significant disparities in estimated input elasticities and
productivity growth processes among latent types within narrowly defined
industries. We further show that neglecting unobserved heterogeneity in input
elasticities may lead to substantial and systematic bias in the estimation of
productivity growth.",Identification and Estimation of Production Function with Unobserved Heterogeneity,2023-05-20 06:08:06,"Hiroyuki Kasahara, Paul Schrimpf, Michio Suzuki","http://arxiv.org/abs/2305.12067v1, http://arxiv.org/pdf/2305.12067v1",econ.EM
29774,em,"In medical treatment and elsewhere, it has become standard to base treatment
intensity (dosage) on evidence in randomized trials. Yet it has been rare to
study how outcomes vary with dosage. In trials to obtain drug approval, the
norm has been to specify some dose of a new drug and compare it with an
established therapy or placebo. Design-based trial analysis views each trial
arm as qualitatively different, but it may be highly credible to assume that
efficacy and adverse effects (AEs) weakly increase with dosage. Optimization of
patient care requires joint attention to both, as well as to treatment cost.
This paper develops methodology to credibly use limited trial evidence to
choose dosage when efficacy and AEs weakly increase with dose. I suppose that
dosage is an integer choice t in (0, 1, ..., T), T being a specified maximum
dose. I study dosage choice when trial evidence on outcomes is available for
only K dose levels, where K < T + 1. Then the population distribution of dose
response is partially rather than point identified. The identification region
is a convex polygon determined by linear equalities and inequalities. I
characterize clinical and public-health decision making using the
minimax-regret criterion. A simple analytical solution exists when T = 2 and
computation is tractable when T is larger.",Using Limited Trial Evidence to Credibly Choose Treatment Dosage when Efficacy and Adverse Effects Weakly Increase with Dose,2023-05-26 21:58:44,Charles F. Manski,"http://arxiv.org/abs/2305.17206v1, http://arxiv.org/pdf/2305.17206v1",econ.EM
29781,em,"In this paper we study neural networks and their approximating power in panel
data models. We provide asymptotic guarantees on deep feed-forward neural
network estimation of the conditional mean, building on the work of Farrell et
al. (2021), and explore latent patterns in the cross-section. We use the
proposed estimators to forecast the progression of new COVID-19 cases across
the G7 countries during the pandemic. We find significant forecasting gains
over both linear panel and nonlinear time series models. Containment or
lockdown policies, as instigated at the national level by governments, are
found to have out-of-sample predictive power for new COVID-19 cases. We
illustrate how the use of partial derivatives can help open the ""black-box"" of
neural networks and facilitate semi-structural analysis: school and workplace
closures are found to have been effective policies at restricting the
progression of the pandemic across the G7 countries. But our methods illustrate
significant heterogeneity and time-variation in the effectiveness of specific
containment policies.",Deep Neural Network Estimation in Panel Data Models,2023-05-31 17:58:31,"Ilias Chronopoulos, Katerina Chrysikou, George Kapetanios, James Mitchell, Aristeidis Raftapostolos","http://arxiv.org/abs/2305.19921v1, http://arxiv.org/pdf/2305.19921v1",econ.EM
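The partial-derivative idea can be illustrated, very loosely, by fitting a small feed-forward network and differentiating its predictions numerically with respect to a policy input. The data and variable names below are synthetic placeholders, not the G7 COVID-19 panel.

```python
# Loose illustration of the partial-derivative idea: fit a small feed-forward
# network and take numerical derivatives of its predictions with respect to a
# policy input. Data and variable names are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n = 3000
lagged_cases = rng.normal(size=n)
school_closure = rng.binomial(1, 0.5, size=n).astype(float)
X = np.column_stack([lagged_cases, school_closure])
y = 0.8 * lagged_cases - 0.6 * school_closure * (lagged_cases > 0) + rng.normal(0, 0.1, n)

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000,
                   random_state=0).fit(X, y)

def partial_derivative(model, X, j, eps=1e-2):
    """Central finite-difference derivative of predictions w.r.t. column j."""
    Xp, Xm = X.copy(), X.copy()
    Xp[:, j] += eps
    Xm[:, j] -= eps
    return (model.predict(Xp) - model.predict(Xm)) / (2 * eps)

d_policy = partial_derivative(net, X, j=1)
print("average marginal effect of the policy input:", round(float(d_policy.mean()), 3))
print("effect when lagged cases are high:",
      round(float(d_policy[lagged_cases > 1].mean()), 3))
```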
29775,em,"Overidentified two-stage least square (TSLS) is commonly adopted by applied
economists to address endogeneity. Though it potentially gives more efficient
or informative estimate, overidentification comes with a cost. The bias of TSLS
is severe when the number of instruments is large. Hence, Jackknife
Instrumental Variable Estimator (JIVE) has been proposed to reduce bias of
overidentified TSLS. A conventional heuristic rule to assess the performance of
TSLS and JIVE is approximate bias. This paper formalizes this concept and
applies the new definition of approximate bias to three classes of estimators
that bridge between OLS, TSLS and a variant of JIVE, namely, JIVE1. Three new
approximately unbiased estimators are proposed. They are called AUK, TSJI1 and
UOJIVE. Interestingly, a previously proposed approximately unbiased estimator
UIJIVE can be viewed as a special case of UOJIVE. While UIJIVE is approximately
unbiased asymptotically, UOJIVE is approximately unbiased even in finite
sample. Moreover, UOJIVE estimates parameters for both endogenous and control
variables whereas UIJIVE only estimates the parameter of the endogenous
variables. TSJI1 and UOJIVE are consistent and asymptotically normal under
fixed number of instruments. They are also consistent under many-instrument
asymptotics. This paper characterizes a series of moment existence conditions
to establish all asymptotic results. In addition, the new estimators
demonstrate good performances with simulated and empirical datasets.","Bridging OLS, TSLS and JIVE1",2023-05-28 06:04:57,"Lei, Wang","http://arxiv.org/abs/2305.17615v2, http://arxiv.org/pdf/2305.17615v2",econ.EM
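Two of the benchmark estimators discussed above, TSLS and JIVE1 (IV with leave-one-out first-stage fitted values), can be written compactly; the paper's new estimators (AUK, TSJI1, UOJIVE) build on these but are not reproduced in this sketch, and the data-generating process is illustrative only.

```python
# Sketch of TSLS and JIVE1 (leave-one-out first-stage fitted values) in a
# many-instrument design. The paper's proposed estimators (AUK, TSJI1, UOJIVE)
# are not reproduced; the DGP is illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n, k = 500, 30                                      # many instruments
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)                              # source of endogeneity
x = Z @ np.full(k, 0.15) + u + rng.normal(size=n)   # endogenous regressor
y = 1.0 * x + u + rng.normal(size=n)                # true coefficient = 1

X = x[:, None]
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)               # projection onto instruments
h = np.diag(P)                                       # leverages P_ii

beta_tsls = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)

# JIVE1: leave-one-out first-stage fitted values, then IV with those fits
X_loo = (P @ X - h[:, None] * X) / (1 - h)[:, None]
beta_jive1 = np.linalg.solve(X_loo.T @ X, X_loo.T @ y)

print("TSLS estimate:", round(float(beta_tsls[0]), 3))
print("JIVE1 estimate:", round(float(beta_jive1[0]), 3))
```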
29776,em,"This paper considers a time-varying vector error-correction model that allows
for different time series behaviours (e.g., unit-root and locally stationary
processes) to interact with each other and co-exist. From practical
perspectives, this framework can be used to estimate shifts in the
predictability of non-stationary variables, test whether economic theories hold
periodically, etc. We first develop a time-varying Granger Representation
Theorem, which facilitates the establishment of asymptotic properties for the
model, and then propose estimation and inferential methods and theory for both
short-run and long-run coefficients. We also propose an information criterion
to estimate the lag length, a singular-value ratio test to determine the
cointegration rank, and a hypothesis test to examine the parameter stability.
To validate the theoretical findings, we conduct extensive simulations.
Finally, we demonstrate the empirical relevance by applying the framework to
investigate the rational expectations hypothesis of the U.S. term structure.",Time-Varying Vector Error-Correction Models: Estimation and Inference,2023-05-29 02:52:09,"Jiti Gao, Bin Peng, Yayi Yan","http://arxiv.org/abs/2305.17829v1, http://arxiv.org/pdf/2305.17829v1",econ.EM
29777,em,"In many situations, researchers are interested in identifying dynamic effects
of an irreversible treatment with a static binary instrumental variable (IV).
For example, in evaluations of the dynamic effects of training programs, a
single lottery may determine eligibility. A common approach in these situations
is to report per-period IV estimates. Under a dynamic extension of standard IV
assumptions, we show that such IV estimators identify a weighted sum of
treatment effects for different latent groups and treatment exposures. However,
some of these weights may be negative. We consider point and partial
identification of dynamic treatment effects in this setting under different
sets of assumptions.",Identifying Dynamic LATEs with a Static Instrument,2023-05-29 17:27:41,"Bruno Ferman, Otávio Tecchio","http://arxiv.org/abs/2305.18114v2, http://arxiv.org/pdf/2305.18114v2",econ.EM
29778,em,"The goal of this paper is to extend the method of estimating Impluse Response
Functions (IRFs) by means of Local Projection (LP) in a nonlinear dynamic
framework. We discuss the existence of a nonlinear autoregressive
representation for a Markov process, and explain how their Impulse Response
Functions are directly linked to the nonlinear Local Projection, as in the case
for the linear setting. We then present a nonparametric LP estimator, and
compare its asymptotic properties to that of IRFs obtained through direct
estimation. We also explore issues of identification for the nonlinear IRF in
the multivariate framework, which remarkably differs in comparison to the
Gaussian linear case. In particular, we show that identification is conditional
on the uniqueness of deconvolution. Then, we consider IRF and LP in augmented
Markov models.",Nonlinear Impulse Response Functions and Local Projections,2023-05-29 18:16:49,"Christian Gourieroux, Quinlan Lee","http://arxiv.org/abs/2305.18145v1, http://arxiv.org/pdf/2305.18145v1",econ.EM
29779,em,"Linear time series models are the workhorse of structural macroeconometric
analysis. However, economic theory as well as data suggest that nonlinear and
asymmetric effects might be key to understanding the potential effects of sudden
economic changes. Taking a dynamical system view, this paper proposes a new
semi-nonparametric approach to construct impulse responses of nonlinear time
series. Estimation of autoregressive models with sieve methods is discussed
under natural physical dependence assumptions, and uniform consistency results
for structural impulse responses are derived. Simulations and two empirical
exercises show that the proposed method performs well and yields new insights
into the dynamic effects of macroeconomic shocks.",Impulse Response Analysis of Structural Nonlinear Time Series Models,2023-05-30 17:52:29,Giovanni Ballarin,"http://arxiv.org/abs/2305.19089v3, http://arxiv.org/pdf/2305.19089v3",econ.EM
29780,em,"We consider the well-studied problem of predicting the time-varying
covariance matrix of a vector of financial returns. Popular methods range from
simple predictors like rolling window or exponentially weighted moving average
(EWMA) to more sophisticated predictors such as generalized autoregressive
conditional heteroscedastic (GARCH) type methods. Building on a specific
covariance estimator suggested by Engle in 2002, we propose a relatively simple
extension that requires little or no tuning or fitting, is interpretable, and
produces results at least as good as MGARCH, a popular extension of GARCH that
handles multiple assets. To evaluate predictors we introduce a novel approach,
evaluating the regret of the log-likelihood over a time period such as a
quarter. This metric allows us to see not only how well a covariance predictor
does overall, but also how quickly it reacts to changes in market conditions.
Our simple predictor outperforms MGARCH in terms of regret. We also test
covariance predictors on downstream applications such as portfolio optimization
methods that depend on the covariance matrix. For these applications our simple
covariance predictor and MGARCH perform similarly.",A Simple Method for Predicting Covariance Matrices of Financial Returns,2023-05-31 04:41:24,"Kasper Johansson, Mehmet Giray Ogut, Markus Pelger, Thomas Schmelzer, Stephen Boyd","http://arxiv.org/abs/2305.19484v2, http://arxiv.org/pdf/2305.19484v2",econ.EM
29800,em,"This paper proposes a nonparametric test for $m$th-degree inverse stochastic
dominance which is a powerful tool for ranking distribution functions according
to social welfare. We construct the test based on empirical process theory. The
test is shown to be asymptotically size controlled and consistent. The good
finite sample properties of the test are illustrated via Monte Carlo
simulations. We apply our test to the inequality growth in the United Kingdom
from 1995 to 2010.",A Nonparametric Test of $m$th-degree Inverse Stochastic Dominance,2023-06-21 16:49:20,"Hongyi Jiang, Zhenting Sun, Shiyun Hu","http://arxiv.org/abs/2306.12271v3, http://arxiv.org/pdf/2306.12271v3",econ.EM
29782,em,"This paper studies inference in predictive quantile regressions when the
predictive regressor has a near-unit root. We derive asymptotic distributions
for the quantile regression estimator and its heteroskedasticity and
autocorrelation consistent (HAC) t-statistic in terms of functionals of
Ornstein-Uhlenbeck processes. We then propose a switching-fully modified (FM)
predictive test for quantile predictability with persistent regressors. The
proposed test employs an FM style correction with a Bonferroni bound for the
local-to-unity parameter when the predictor has a near-unit root. It switches
to a standard predictive quantile regression test with a slightly conservative
critical value when the largest root of the predictor lies in the stationary
range. Simulations indicate that the test has reliable size in small samples
and particularly good power when the predictor is persistent and endogenous,
i.e., when the predictive regression problem is most acute. We employ this new
methodology to test the ability of three commonly employed, highly persistent
and endogenous lagged valuation regressors - the dividend price ratio, earnings
price ratio, and book to market ratio - to predict the median, shoulders, and
tails of the stock return distribution.",Inference in Predictive Quantile Regressions,2023-06-01 05:32:47,"Alex Maynard, Katsumi Shimotsu, Nina Kuriyama","http://arxiv.org/abs/2306.00296v1, http://arxiv.org/pdf/2306.00296v1",econ.EM
29783,em,"This paper explores the identification and estimation of social interaction
models with endogenous group formation. We characterize group formation using a
two-sided many-to-one matching model, where individuals select groups based on
their preferences, while groups rank individuals according to their
qualifications, accepting the most qualified until reaching capacities. The
selection into groups leads to a bias in standard estimates of peer effects,
which is difficult to correct for due to equilibrium effects. We employ the
limiting approximation of a market as the market size grows large to simplify
the selection bias. Assuming exchangeable unobservables, we can express the
selection bias of an individual as a group-invariant nonparametric function of
her preference and qualification indices. In addition to the selection
correction, we show that the excluded variables in group formation can serve as
instruments to tackle the reflection problem. We propose semiparametric
distribution-free estimators that are root-n consistent and asymptotically
normal.",Social Interactions with Endogenous Group Formation,2023-06-02 16:47:53,"Shuyang Sheng, Xiaoting Sun","http://arxiv.org/abs/2306.01544v1, http://arxiv.org/pdf/2306.01544v1",econ.EM
29784,em,"The synthetic control estimator (Abadie et al., 2010) is asymptotically
unbiased assuming that the outcome is a linear function of the underlying
predictors and that the treated unit can be well approximated by the synthetic
control before the treatment. When the outcome is nonlinear, the bias of the
synthetic control estimator can be severe. In this paper, we provide conditions
for the synthetic control estimator to be asymptotically unbiased when the
outcome is nonlinear, and propose a flexible and data-driven method to choose
the synthetic control weights. Monte Carlo simulations show that, compared with
competing methods, the nonlinear synthetic control method has similar or
better performance when the outcome is linear and better performance when the
outcome is nonlinear, and that the confidence intervals have good coverage
probabilities across settings. In the empirical application, we illustrate the
method by estimating the impact of the 2019 anti-extradition law amendments
bill protests on Hong Kong's economy, and find that the year-long protests
reduced real GDP per capita in Hong Kong by 11.27% in the first quarter of
2020, which was larger in magnitude than the economic decline during the 1997
Asian financial crisis or the 2008 global financial crisis.",The Synthetic Control Method with Nonlinear Outcomes: Estimating the Impact of the 2019 Anti-Extradition Law Amendments Bill Protests on Hong Kong's Economy,2023-06-03 03:25:53,Wei Tian,"http://arxiv.org/abs/2306.01967v1, http://arxiv.org/pdf/2306.01967v1",econ.EM
29785,em,"Policy evaluation in empirical microeconomics has been focusing on estimating
the average treatment effect and more recently the heterogeneous treatment
effects, often relying on the unconfoundedness assumption. We propose a method
based on the interactive fixed effects model to estimate treatment effects at
the individual level, which allows both the treatment assignment and the
potential outcomes to be correlated with the unobserved individual
characteristics. This method is suitable for panel datasets where multiple
related outcomes are observed for a large number of individuals over a small
number of time periods. Monte Carlo simulations show that our method
outperforms related methods. To illustrate our method, we provide an example of
estimating the effect of health insurance coverage on individual usage of
hospital emergency departments using the Oregon Health Insurance Experiment
data.",Individual Causal Inference Using Panel Data With Multiple Outcomes,2023-06-03 03:33:44,Wei Tian,"http://arxiv.org/abs/2306.01969v1, http://arxiv.org/pdf/2306.01969v1",econ.EM
29786,em,"In this study, we consider a four-regime bubble model under the assumption of
time-varying volatility and propose the algorithm of estimating the break dates
with volatility correction: First, we estimate the emerging date of the
explosive bubble, its collapsing date, and the recovering date to the normal
market under assumption of homoskedasticity; second, we collect the residuals
and then employ the WLS-based estimation of the bubble dates. We demonstrate by
Monte Carlo simulations that the accuracy of the break dates estimators improve
significantly by this two-step procedure in some cases compared to those based
on the OLS method.",Improving the accuracy of bubble date estimators under time-varying volatility,2023-06-05 18:49:32,"Eiji Kurozumi, Anton Skrobotov","http://arxiv.org/abs/2306.02977v1, http://arxiv.org/pdf/2306.02977v1",econ.EM
29787,em,"Experimenters often collect baseline data to study heterogeneity. I propose
the first valid confidence intervals for the VCATE, the treatment effect
variance explained by observables. Conventional approaches yield incorrect
coverage when the VCATE is zero. As a result, practitioners could be prone to
detect heterogeneity even when none exists. The reason why coverage worsens at
the boundary is that all efficient estimators have a locally-degenerate
influence function and may not be asymptotically normal. I solve the problem
for a broad class of multistep estimators with a predictive first stage. My
confidence intervals account for higher-order terms in the limiting
distribution and are fast to compute. I also find new connections between the
VCATE and the problem of deciding whom to treat. The gains of targeting
treatment are (sharply) bounded by half the square root of the VCATE. Finally,
I document excellent performance in simulation and reanalyze an experiment from
Malawi.",Robust inference for the treatment effect variance in experiments using machine learning,2023-06-06 05:30:10,Alejandro Sanchez-Becerra,"http://arxiv.org/abs/2306.03363v1, http://arxiv.org/pdf/2306.03363v1",econ.EM
29788,em,"We propose two approaches to estimate semiparametric discrete choice models
for bundles. Our first approach is a kernel-weighted rank estimator based on a
matching-based identification strategy. We establish its complete asymptotic
properties and prove the validity of the nonparametric bootstrap for inference.
We then introduce a new multi-index least absolute deviations (LAD) estimator
as an alternative, of which the main advantage is its capacity to estimate
preference parameters on both alternative- and agent-specific regressors. Both
methods can account for arbitrary correlation in disturbances across choices,
with the former also allowing for interpersonal heteroskedasticity. We also
demonstrate that the identification strategy underlying these procedures can be
extended naturally to panel data settings, producing an analogous localized
maximum score estimator and a LAD estimator for estimating bundle choice models
with fixed effects. We derive the limiting distribution of the former and
verify the validity of the numerical bootstrap as an inference tool. All our
proposed methods can be applied to general multi-index models. Monte Carlo
experiments show that they perform well in finite samples.",Semiparametric Discrete Choice Models for Bundles,2023-06-07 07:12:02,"Fu Ouyang, Thomas T. Yang","http://arxiv.org/abs/2306.04135v3, http://arxiv.org/pdf/2306.04135v3",econ.EM
29789,em,"Quantifying the impact of regulatory policies on social welfare generally
requires the identification of counterfactual distributions. Many of these
policies (e.g. minimum wages or minimum working time) generate mass points
and/or discontinuities in the outcome distribution. Existing approaches in the
difference-in-difference literature cannot accommodate these discontinuities
while accounting for selection on unobservables and non-stationary outcome
distributions. We provide a unifying partial identification result that can
account for these features. Our main identifying assumption is the stability of
the dependence (copula) between the distribution of the untreated potential
outcome and group membership (treatment assignment) across time. Exploiting
this copula stability assumption allows us to provide an identification result
that is invariant to monotonic transformations. We provide sharp bounds on the
counterfactual distribution of the treatment group suitable for any outcome,
whether discrete, continuous, or mixed. Our bounds collapse to the
point-identification result in Athey and Imbens (2006) for continuous outcomes
with strictly increasing distribution functions. We illustrate our approach and
the informativeness of our bounds by analyzing the impact of an increase in the
legal minimum wage using data from a recent minimum wage study (Cengiz, Dube,
Lindner, and Zipperer, 2019).",Evaluating the Impact of Regulatory Policies on Social Welfare in Difference-in-Difference Settings,2023-06-07 18:04:50,"Dalia Ghanem, Désiré Kédagni, Ismael Mourifié","http://arxiv.org/abs/2306.04494v2, http://arxiv.org/pdf/2306.04494v2",econ.EM
29790,em,"This paper considers a first-order autoregressive panel data model with
individual-specific effects and a heterogeneous autoregressive coefficient. It
proposes estimators for the moments of the cross-sectional distribution of the
autoregressive coefficients, with a focus on the first two moments, assuming a
random coefficient model for the autoregressive coefficients without imposing
any restrictions on the fixed effects. It is shown that the standard
generalized method of moments estimators obtained under homogeneous slopes are
biased. The paper also investigates conditions under which the probability
distribution of the autoregressive coefficients is identified assuming a
categorical distribution with a finite number of categories. Small sample
properties of the proposed estimators are investigated by Monte Carlo
experiments and compared with alternatives both under homogeneous and
heterogeneous slopes. The utility of the heterogeneous approach is illustrated
in the case of earnings dynamics, where a clear upward pattern is obtained in
the mean persistence of earnings by level of educational attainment.",Heterogeneous Autoregressions in Short T Panel Data Models,2023-06-08 18:44:35,"M. Hashem Pesaran, Liying Yang","http://arxiv.org/abs/2306.05299v1, http://arxiv.org/pdf/2306.05299v1",econ.EM
29791,em,"In this paper, we propose a localized neural network (LNN) model and then
develop the LNN based estimation and inferential procedures for dependent data
in both cases with quantitative/qualitative outcomes. We explore the use of
identification restrictions from a nonparametric regression perspective, and
establish an estimation theory for the LNN setting under a set of mild
conditions. The asymptotic distributions are derived accordingly, and we show
that LNN automatically eliminates the dependence of data when calculating the
asymptotic variances. The finding is important, as one can easily use different
types of wild bootstrap methods to obtain valid inference practically. In
particular, for quantitative outcomes, the proposed LNN approach yields
closed-form expressions for the estimates of some key estimators of interest.
Last but not least, we examine our theoretical findings through extensive
numerical studies.",A Localized Neural Network with Dependent Data: Estimation and Inference,2023-06-09 02:41:06,"Jiti Gao, Bin Peng, Yanrong Yang","http://arxiv.org/abs/2306.05593v1, http://arxiv.org/pdf/2306.05593v1",econ.EM
29792,em,"The effect of the full treatment is a primary parameter of interest in policy
evaluation, while often only the effect of a subset of treatment is estimated.
We partially identify the local average treatment effect of receiving full
treatment (LAFTE) using an instrumental variable that may induce individuals
into only a subset of treatment (movers). We show that movers violate the
standard exclusion restriction, necessary conditions on the presence of movers
are testable, and partial identification holds under a double exclusion
restriction. We identify movers in four empirical applications and estimate
informative bounds on the LAFTE in three of them.",Instrument-based estimation of full treatment effects with movers,2023-06-12 13:44:34,"Didier Nibbering, Matthijs Oosterveen","http://arxiv.org/abs/2306.07018v1, http://arxiv.org/pdf/2306.07018v1",econ.EM
29793,em,"Local polynomial density (LPD) estimation has become an essential tool for
boundary inference, including manipulation tests for regression discontinuity.
It is conventional wisdom that kernel choice is not critical for LPD estimation, by
analogy to standard kernel smoothing estimation. This paper, however, points
out that kernel choice has a severe impact on the performance of LPD, based on
both asymptotic and non-asymptotic theoretical investigations. In particular,
we show that the estimation accuracy can be extremely poor with commonly used
kernels with compact support, such as the triangular and uniform kernels.
Importantly, this negative result implies that the LPD-based manipulation test
loses its power if a compactly supported kernel is used. As a simple but
powerful solution to this problem, we propose using a specific kernel function
with unbounded support. We illustrate the practical relevance of our results
with numerous empirical applications and simulations, which show large
improvements.",Kernel Choice Matters for Boundary Inference using Local Polynomial Density: With Application to Manipulation Testing,2023-06-13 11:29:00,"Shunsuke Imai, Yuta Okamoto","http://arxiv.org/abs/2306.07619v1, http://arxiv.org/pdf/2306.07619v1",econ.EM
29794,em,"In this contribution, we propose machine learning techniques to predict
zombie firms. First, we derive the risk of failure by training and testing our
algorithms on disclosed financial information and non-random missing values of
304,906 firms active in Italy from 2008 to 2017. Then, we spot the highest
financial distress conditional on predictions that lie above a threshold for
which a combination of false positive rate (false prediction of firm failure)
and false negative rate (false prediction of active firms) is minimized.
Therefore, we identify zombies as firms that persist in a state of financial
distress, i.e., their forecasts fall into the risk category above the threshold
for at least three consecutive years. For our purpose, we implement a gradient
boosting algorithm (XGBoost) that exploits information about missing values.
The inclusion of missing values in our predictive model is crucial because
patterns of undisclosed accounts are correlated with firm failure. Finally, we
show that our preferred machine learning algorithm outperforms (i) proxy models
such as Z-scores and the Distance-to-Default, (ii) traditional econometric
methods, and (iii) other widely used machine learning techniques. We provide
evidence that zombies are on average less productive and smaller, and that they
tend to increase in times of crisis. Finally, we argue that our application can
help financial institutions and public authorities design evidence-based
policies, e.g., optimal bankruptcy laws and information disclosure policies.",Machine Learning for Zombie Hunting: Predicting Distress from Firms' Accounts and Missing Values,2023-06-14 01:33:44,"Falco J. Bargagli-Stoffi, Fabio Incerti, Massimo Riccaboni, Armando Rungi","http://arxiv.org/abs/2306.08165v1, http://arxiv.org/pdf/2306.08165v1",econ.EM
29795,em,"Data clustering reduces the effective sample size down from the number of
observations towards the number of clusters. For instrumental variable models
this implies more restrictive requirements on the strength of the instruments
and makes the number of instruments more quickly non-negligible compared to the
effective sample size. Clustered data therefore increases the need for many and
weak instrument robust tests. However, none of the previously developed many
and weak instrument robust tests can be applied to this type of data as they
all require independent observations. I therefore adapt two of such tests to
clustered data. First, I derive a cluster jackknife Anderson-Rubin test by
removing clusters rather than individual observations from the Anderson-Rubin
statistic. Second, I propose a cluster many instrument Anderson-Rubin test
which improves on the first test by using a more optimal, but more complex,
weighting matrix. I show that if the clusters satisfy an invariance assumption
the higher complexity poses no problems. By revisiting a study on the effect of
queenly reign on war, I show the empirical relevance of the new tests.","Inference in IV models with clustered dependence, many instruments and weak identification",2023-06-14 18:05:48,Johannes W. Ligtenberg,"http://arxiv.org/abs/2306.08559v1, http://arxiv.org/pdf/2306.08559v1",econ.EM
29796,em,"Monitoring downside risk and upside risk to the key macroeconomic indicators
is critical for effective policymaking aimed at maintaining economic stability.
In this paper I propose a parametric framework for modelling and forecasting
macroeconomic risk based on stochastic volatility models with Skew-Normal and
Skew-t shocks featuring time varying skewness. Exploiting a mixture stochastic
representation of the Skew-Normal and Skew-t random variables, in the paper I
develop efficient posterior simulation samplers for Bayesian estimation of both
univariate and VAR models of this type. In an application, I use the models to
predict downside risk to GDP growth in the US and I show that these models
represent a competitive alternative to semi-parametric approaches such as
quantile regression. Finally, estimating a medium scale VAR on US data I show
that time varying skewness is a relevant feature of macroeconomic and financial
shocks.",Modelling and Forecasting Macroeconomic Risk with Time Varying Skewness Stochastic Volatility Models,2023-06-15 20:15:03,Andrea Renzetti,"http://arxiv.org/abs/2306.09287v2, http://arxiv.org/pdf/2306.09287v2",econ.EM
29797,em,"This paper proposes an Anderson-Rubin (AR) test for the presence of peer
effects in panel data without the need to specify the network structure. The
unrestricted model of our test is a linear panel data model of social
interactions with dyad-specific peer effects. The proposed AR test evaluates if
the peer effect coefficients are all zero. As the number of peer effect
coefficients increases with the sample size, so does the number of instrumental
variables (IVs) employed to estimate the unrestricted model, rendering Bekker's
many-IV environment. By extending existing many-IV asymptotic results to panel
data, we show that the proposed AR test is asymptotically valid under the
presence of both individual and time fixed effects. We conduct Monte Carlo
simulations to investigate the finite sample performance of the AR test and
provide two applications to demonstrate its empirical relevance.",Testing for Peer Effects without Specifying the Network Structure,2023-06-16 15:43:48,"Hyunseok Jung, Xiaodong Liu","http://arxiv.org/abs/2306.09806v1, http://arxiv.org/pdf/2306.09806v1",econ.EM
29798,em,"Covariate benchmarking is an important part of sensitivity analysis about
omitted variable bias and can be used to bound the strength of the unobserved
confounder using information and judgments about observed covariates. It is
common to carry out formal covariate benchmarking after residualizing the
unobserved confounder on the set of observed covariates. In this paper, I
explain the rationale and details of this procedure. I clarify some important
details of the process of formal covariate benchmarking and highlight some of
the difficulties of interpretation that researchers face in reasoning about the
residualized part of unobserved confounders. I explain all the points with
several empirical examples.",Formal Covariate Benchmarking to Bound Omitted Variable Bias,2023-06-18 16:52:53,Deepankar Basu,"http://arxiv.org/abs/2306.10562v1, http://arxiv.org/pdf/2306.10562v1",econ.EM
29799,em,"In many scenarios, such as the evaluation of place-based policies, potential
outcomes are not only dependent upon the unit's own treatment but also its
neighbors' treatment. Despite this, ""difference-in-differences"" (DID) type
estimators typically ignore such interference among neighbors. I show in this
paper that the canonical DID estimators generally fail to identify interesting
causal effects in the presence of neighborhood interference. To incorporate
interference structure into DID estimation, I propose doubly robust estimators
for the direct average treatment effect on the treated as well as the average
spillover effects under a modified parallel trends assumption. When spillover
effects are of interest, we often sample the entire population. Thus, I adopt a
finite population perspective in the sense that the estimands are defined as
population averages and inference is conditional on the attributes of all
population units. The approach in this paper relaxes common restrictions in the
literature, such as partial interference and correctly specified spillover
functions. Moreover, robust inference is discussed based on the asymptotic
distribution of the proposed estimators.",Difference-in-Differences with Interference: A Finite Population Perspective,2023-06-21 06:46:14,Ruonan Xu,"http://arxiv.org/abs/2306.12003v3, http://arxiv.org/pdf/2306.12003v3",econ.EM
29801,em,"This paper examines empirical methods for estimating the response of
aggregated electricity demand to high-frequency price signals, the short-term
elasticity of electricity demand. We investigate how the endogeneity of prices
and the autocorrelation of the time series, which are particularly pronounced
at hourly granularity, affect and distort common estimators. After developing a
controlled test environment with synthetic data that replicate key statistical
properties of electricity demand, we show that not only is the ordinary least
squares (OLS) estimator inconsistent (due to simultaneity), but so is a
regular instrumental variable (IV) regression (due to autocorrelation). Using
wind as an instrument, as it is commonly done, may result in an estimate of the
demand elasticity that is inflated by an order of magnitude. We visualize the
reason for this bias using causal graphs and show that its magnitude
depends on the autocorrelation of both the instrument and the dependent
variable. We further incorporate and adapt two extensions of the IV estimation,
conditional IV and nuisance IV, which have recently been proposed by Thams et
al. (2022). We show that these extensions can identify the true short-term
elasticity in a synthetic setting and are thus particularly promising for
future empirical research in this field.",Price elasticity of electricity demand: Using instrumental variable regressions to address endogeneity and autocorrelation of high-frequency time series,2023-06-22 16:19:43,"Silvana Tiedemann, Raffaele Sgarlato, Lion Hirth","http://arxiv.org/abs/2306.12863v1, http://arxiv.org/pdf/2306.12863v1",econ.EM
29802,em,"The common practice for GDP nowcasting in a data-rich environment is to
employ either sparse regression using LASSO-type regularization or a dense
approach based on factor models or ridge regression, which differ in the way
they extract information from high-dimensional datasets. This paper aims to
investigate whether sparse plus dense mixed frequency regression methods can
improve nowcasts of US GDP growth. We propose two novel MIDAS
regressions and show that these novel sparse plus dense methods greatly improve
the accuracy of nowcasts during the COVID pandemic compared to either only
sparse or only dense approaches. Using monthly macro and weekly financial
series, we further show that the improvement is particularly sharp when the
dense component is restricted to be macro, while the sparse signal stems from
both macro and financial series.",Sparse plus dense MIDAS regressions and nowcasting during the COVID pandemic,2023-06-23 11:28:39,"Jad Beyhum, Jonas Striaukas","http://arxiv.org/abs/2306.13362v2, http://arxiv.org/pdf/2306.13362v2",econ.EM
29803,em,"The exact estimation of latent variable models with big data is known to be
challenging. The latent variables have to be integrated out numerically, and their
dimension increases with the sample size. This paper
develops a novel approximate Bayesian method based on the Langevin diffusion
process. The method employs the Fisher identity to integrate out the latent
variables, which makes it accurate and computationally feasible when applied to
big data. In contrast to other approximate estimation methods, it does not
require the choice of a parametric distribution for the unknowns, which often
leads to inaccuracies. In an empirical discrete choice example with a million
observations, the proposed method accurately estimates the posterior choice
probabilities using only 2% of the computation time of exact MCMC.",Hybrid unadjusted Langevin methods for high-dimensional latent variable models,2023-06-26 09:21:21,"Ruben Loaiza-Maya, Didier Nibbering, Dan Zhu","http://arxiv.org/abs/2306.14445v1, http://arxiv.org/pdf/2306.14445v1",econ.EM
29804,em,"This paper investigates the performance of the Generalized Covariance
estimator (GCov) in estimating mixed causal and noncausal Vector Autoregressive
(VAR) models. The GCov estimator is a semi-parametric method that minimizes an
objective function without making any assumptions about the error distribution
and is based on nonlinear autocovariances to identify the causal and noncausal
orders of the mixed VAR. When the number and type of nonlinear autocovariances
included in the objective function of a GCov estimator are
insufficient or inadequate, or the error density is too close to the Gaussian,
identification issues can arise, resulting in local minima in the objective
function of the estimator at parameter values associated with incorrect causal
and noncausal orders. Then, depending on the starting point, the optimization
algorithm may converge to a local minimum, leading to inaccurate estimates. To
circumvent this issue, the paper proposes the use of the Simulated Annealing
(SA) optimization algorithm as an alternative to conventional numerical
optimization methods. The results demonstrate that the SA optimization
algorithm performs effectively when applied to multivariate mixed VAR models,
successfully eliminating the effects of local minima. The approach is
illustrated by simulations and an empirical application of a bivariate mixed
VAR model with commodity price series.",Optimization of the Generalized Covariance Estimator in Noncausal Processes,2023-06-26 15:46:24,"Gianluca Cubadda, Francesco Giancaterini, Alain Hecq, Joann Jasiak","http://arxiv.org/abs/2306.14653v2, http://arxiv.org/pdf/2306.14653v2",econ.EM
29805,em,"This paper traces the historical and analytical development of what is known
in the econometrics literature as the Frisch-Waugh-Lovell theorem. This theorem
demonstrates that the coefficients on any subset of covariates in a multiple
regression are equal to the coefficients in a regression of the residualized
outcome variable on the residualized subset of covariates, where
residualization uses the complement of the subset of covariates of interest. In
this paper, I suggest that the theorem should be renamed as the
Yule-Frisch-Waugh-Lovell (YFWL) theorem to recognize the pioneering
contribution of the statistician G. Udny Yule in its development. Second, I
highlight recent work by the statistician, P. Ding, which has extended the YFWL
theorem to a comparison of estimated covariance matrices of coefficients from
multiple and partial (i.e., residualized) regressions. Third, I show that, in
cases where Ding's results do not apply, one can still resort to a
computational method to conduct statistical inference about coefficients in
multiple regressions using information from partial regressions.",The Yule-Frisch-Waugh-Lovell Theorem,2023-07-01 18:44:33,Deepankar Basu,"http://arxiv.org/abs/2307.00369v1, http://arxiv.org/pdf/2307.00369v1",econ.EM
29806,em,"This paper examines the identification assumptions underlying two types of
estimators of the causal effects of minimum wages based on regional variation
in wage levels: the ""effective minimum wage"" and the ""fraction affected/gap""
designs. For the effective minimum wage design, I show that the identification
assumptions emphasized by Lee (1999) are crucial for unbiased estimation but
difficult to satisfy in empirical applications for reasons arising from
economic theory. For the fraction affected design at the region level, I show
that economic factors such as a common trend in the dispersion of worker
productivity or regional convergence in GDP per capita may lead to violations
of the ""parallel trends"" identifying assumption. The paper suggests ways to
increase the likelihood of detecting those issues when implementing checks for
parallel pre-trends. I also show that this design may be subject to biases
arising from the misspecification of the treatment intensity variable,
especially when the minimum wage strongly affects employment and wages.",Does regional variation in wage levels identify the effects of a national minimum wage?,2023-07-03 21:18:24,Daniel Haanwinckel,"http://arxiv.org/abs/2307.01284v2, http://arxiv.org/pdf/2307.01284v2",econ.EM
29807,em,"Counterfactual predictions are challenging when the policy variable goes
beyond its pre-policy support. However, in many cases, information about the
policy of interest is available from different (""source"") regions where a
similar policy has already been implemented. In this paper, we propose a novel
method of using such data from source regions to predict a new policy in a
target region. Instead of relying on extrapolation of a structural relationship
using a parametric specification, we formulate a transferability condition and
construct a synthetic outcome-policy relationship such that it is as close as
possible to meeting the condition. The synthetic relationship weighs both the
similarity in distributions of observables and in structural relationships. We
develop a general procedure to construct asymptotic confidence intervals for
counterfactual predictions and prove its asymptotic validity. We then apply our
proposal to predict average teenage employment in Texas following a
counterfactual increase in the minimum wage.",Synthetic Decomposition for Counterfactual Predictions,2023-07-11 12:00:19,"Nathan Canen, Kyungchul Song","http://arxiv.org/abs/2307.05122v1, http://arxiv.org/pdf/2307.05122v1",econ.EM
29808,em,"This paper investigates the impact of decentralizing inventory
decision-making in multi-establishment firms using data from a large retail
chain. Analyzing two years of daily data, we find significant heterogeneity
among the inventory decisions made by 634 store managers. By estimating a
dynamic structural model, we reveal substantial heterogeneity in managers'
perceived costs. Moreover, we observe a correlation between the variance of
these perceptions and managers' education and experience. Counterfactual
experiments show that centralized inventory management reduces costs by
eliminating the impact of managers' skill heterogeneity. However, these
benefits are offset by the negative impact of delayed demand information.",Decentralized Decision-Making in Retail Chains: Evidence from Inventory Management,2023-07-10 01:06:37,"Victor Aguirregabiria, Francis Guiton","http://arxiv.org/abs/2307.05562v1, http://arxiv.org/pdf/2307.05562v1",econ.EM
29809,em,"This paper tests the feasibility and estimates the cost of climate control
through economic policies. It provides a toolbox for a statistical historical
assessment of a Stochastic Integrated Model of Climate and the Economy, and its
use in (possibly counterfactual) policy analysis. Recognizing that
stabilization requires suppressing a trend, we use an integrated-cointegrated
Vector Autoregressive Model estimated using a newly compiled dataset ranging
between years A.D. 1000-2008, extending previous results on Control Theory in
nonstationary systems. We test statistically whether, and quantify to what
extent, carbon abatement policies can effectively stabilize or reduce global
temperatures. Our formal test of policy feasibility shows that carbon abatement
can have a significant long run impact and policies can render temperatures
stationary around a chosen long run mean. In a counterfactual empirical
illustration of the possibilities of our modeling strategy, we show that the
cost of carbon abatement for a retrospective policy aiming to keep global
temperatures close to their 1900 historical level is about 75% of the observed
2008 level of world GDP, a cost equivalent to reverting to levels of output
historically observed in the mid 1960s. This constitutes a measure of the
putative opportunity cost of the lack of investment in carbon abatement
technologies.",What Does it Take to Control Global Temperatures? A toolbox for estimating the impact of economic policies on climate,2023-07-12 00:54:08,"Guillaume Chevillon, Takamitsu Kurita","http://arxiv.org/abs/2307.05818v1, http://arxiv.org/pdf/2307.05818v1",econ.EM
29810,em,"External-instrument identification leads to biased responses when the shock
is not invertible and the measurement error is present. We propose to use this
identification strategy in a structural Dynamic Factor Model, which we call
Proxy DFM. In a simulation analysis, we show that the Proxy DFM always
successfully retrieves the true impulse responses, while the Proxy SVAR
systematically fails to do so when the model is misspecified, does not
include all relevant information, or measurement error is present. In an
application to US monetary policy, the Proxy DFM shows that a tightening shock
is unequivocally contractionary, with deteriorations in domestic demand, labor,
credit, housing, exchange, and financial markets. This holds true for all raw
instruments available in the literature. The variance decomposition analysis
highlights the importance of monetary policy shocks in explaining economic
fluctuations, albeit at different horizons.",Robust Impulse Responses using External Instruments: the Role of Information,2023-07-12 16:00:00,"Davide Brignone, Alessandro Franconi, Marco Mazzali","http://arxiv.org/abs/2307.06145v1, http://arxiv.org/pdf/2307.06145v1",econ.EM
29811,em,"We develop a method to learn about treatment effects in multiple treatment
models with discrete-valued instruments. We allow selection into treatment to
be governed by a general class of threshold crossing models that permits
multidimensional unobserved heterogeneity. Under a semi-parametric restriction
on the distribution of unobserved heterogeneity, we show how a sequence of
linear programs can be used to compute sharp bounds for a number of treatment
effect parameters when the marginal treatment response functions underlying
them remain nonparametric or are additionally parameterized.",Identification in Multiple Treatment Models under Discrete Variation,2023-07-12 16:59:35,"Vishal Kamat, Samuel Norris, Matthew Pecenco","http://arxiv.org/abs/2307.06174v1, http://arxiv.org/pdf/2307.06174v1",econ.EM
29812,em,"We provide sufficient conditions for semi-nonparametric point identification
of a mixture model of decision making under risk, when agents make choices in
multiple lines of insurance coverage (contexts) by purchasing a bundle. As a
first departure from the related literature, the model allows for two
preference types. In the first one, agents behave according to standard
expected utility theory with CARA Bernoulli utility function, with an
agent-specific coefficient of absolute risk aversion whose distribution is left
completely unspecified. In the other, agents behave according to the dual
theory of choice under risk (Yaari, 1987) combined with a one-parameter family of
distortion functions, where the parameter is agent-specific and is drawn from a
distribution that is left completely unspecified. Within each preference type,
the model allows for unobserved heterogeneity in consideration sets, where the
latter form at the bundle level -- a second departure from the related
literature. Our point identification result rests on observing sufficient
variation in covariates across contexts, without requiring any independent
variation across alternatives within a single context. We estimate the model on
data on households' deductible choices in two lines of property insurance, and
use the results to assess the welfare implications of a hypothetical market
intervention where the two lines of insurance are combined into a single one.
We study the role of limited consideration in mediating the welfare effects of
such intervention.","Risk Preference Types, Limited Consideration, and Welfare",2023-07-18 19:31:13,"Levon Barseghyan, Francesca Molinari","http://arxiv.org/abs/2307.09411v1, http://arxiv.org/pdf/2307.09411v1",econ.EM
29814,em,"Economic interactions often occur in networks where heterogeneous agents
(such as workers or firms) sort and produce. However, most existing estimation
approaches either require the network to be dense, which is at odds with many
empirical networks, or they require restricting the form of heterogeneity and
the network formation process. We show how the functional differencing approach
introduced by Bonhomme (2012) in the context of panel data, can be applied in
network settings to derive moment restrictions on model parameters and average
effects. Those restrictions are valid irrespective of the form of
heterogeneity, and they hold in both dense and sparse networks. We illustrate
the analysis with linear and nonlinear models of matched employer-employee
data, in the spirit of the model introduced by Abowd, Kramarz, and Margolis
(1999).",Functional Differencing in Networks,2023-07-21 13:37:49,"Stéphane Bonhomme, Kevin Dano","http://arxiv.org/abs/2307.11484v1, http://arxiv.org/pdf/2307.11484v1",econ.EM
29815,em,"We propose identification robust statistics for testing hypotheses on the
risk premia in dynamic affine term structure models. We do so using the moment
equation specification proposed for these models in Adrian et al. (2013). We
extend the subset (factor) Anderson-Rubin test from Guggenberger et al. (2012)
to models with multiple dynamic factors and time-varying risk prices. Unlike
projection-based tests, it provides a computationally tractable manner to
conduct identification robust tests on a larger number of parameters. We
analyze the potential identification issues arising in empirical studies.
Statistical inference based on the three-stage estimator from Adrian et al.
(2013) requires knowledge of the factors' quality and is misleading without
full-rank betas or with sampling errors of comparable size to the loadings.
Empirical applications show that some factors, though potentially weak, may
drive the time variation of risk prices, and weak identification issues are
more prominent in multi-factor models.",Identification Robust Inference for the Risk Premium in Term Structure Models,2023-07-24 12:00:21,"Frank Kleibergen, Lingwei Kong","http://arxiv.org/abs/2307.12628v1, http://arxiv.org/pdf/2307.12628v1",econ.EM
29816,em,"In this paper, I discuss three aspects of the Frisch-Waugh-Lovell theorem.
First, I show that the theorem holds for linear instrumental variables
estimation of a multiple regression model that is either exactly or
overidentified. I show that with linear instrumental variables estimation: (a)
coefficients on endogenous variables are identical in full and partial (or
residualized) regressions; (b) residual vectors are identical for full and
partial regressions; and (c) estimated covariance matrices of the coefficient
vectors from full and partial regressions are equal (up to a degree of freedom
correction) if the estimator of the error vector is a function only of the
residual vectors and does not use any information about the covariate matrix
other than its dimensions. While estimation of the full model uses the full set
of instrumental variables, estimation of the partial model uses the
residualized version of the same set of instrumental variables, with
residualization carried out with respect to the set of exogenous variables.
Second, I show that: (a) the theorem applies in large samples to the K-class of
estimators, including the limited information maximum likelihood (LIML)
estimator, and (b) the theorem does not apply in general to linear GMM
estimators, but it does apply to the two-step optimal linear GMM estimator.
Third, I trace the historical and analytical development of the theorem and
suggest that it be renamed as the Yule-Frisch-Waugh-Lovell (YFWL) theorem to
recognize the pioneering contribution of the statistician G. Udny Yule in its
development.",The Yule-Frisch-Waugh-Lovell Theorem for Linear Instrumental Variables Estimation,2023-07-12 22:18:22,Deepankar Basu,"http://arxiv.org/abs/2307.12731v2, http://arxiv.org/pdf/2307.12731v2",econ.EM
29817,em,"Can stated preferences help in counterfactual analyses of actual choice? This
research proposes a novel approach to researchers who have access to both
stated choices in hypothetical scenarios and actual choices. The key idea is to
use probabilistic stated choices to identify the distribution of individual
unobserved heterogeneity, even in the presence of measurement error. If this
unobserved heterogeneity is the source of endogeneity, the researcher can
correct for its influence in a demand function estimation using actual choices,
and recover causal effects. Estimation is possible with an off-the-shelf Group
Fixed Effects estimator.",Using Probabilistic Stated Preference Analyses to Understand Actual Choices,2023-07-26 08:56:54,Romuald Meango,"http://arxiv.org/abs/2307.13966v1, http://arxiv.org/pdf/2307.13966v1",econ.EM
29818,em,"I propose a novel argument to point identify economically interpretable
intertemporal treatment effects in dynamic regression discontinuity designs
(RDDs). Specifically, I develop a dynamic potential outcomes model and
specialize two assumptions of the difference-in-differences literature, the no
anticipation and common trends restrictions, to point identify cutoff-specific
impulse responses. The estimand associated with each target parameter can be
expressed as the sum of two static RDD outcome contrasts, thereby allowing for
estimation via standard local polynomial tools. I leverage a limited path
independence assumption to reduce the dimensionality of the problem.",Dynamic Regression Discontinuity: A Within-Design Approach,2023-07-26 17:02:24,Francesco Ruggieri,"http://arxiv.org/abs/2307.14203v1, http://arxiv.org/pdf/2307.14203v1",econ.EM
29819,em,"In practice , quite often there is a need to describe the values set by means
of a table in the form of some functional dependence . The observed values ,
due to certain circumstances , have an error . For approximation, it is
advisable to use a functional dependence that would allow smoothing out the
errors of the observation results. Approximation allows you to determine
intermediate values of functions that are not listed among the data in the
observation table. The use of exponential series for data approximation allows
you to get a result no worse than from approximation by polynomials In the
economic scientific literature, approximation in the form of power functions,
for example, the Cobb-Douglas function, has become widespread. The advantage of
this type of approximation can be called a simple type of approximating
function , and the disadvantage is that in nature not all processes can be
described by power functions with a given accuracy. An example is the GDP
indicator for several decades . For this case , it is difficult to find a power
function approximating a numerical series . But in this case, as shown in this
article, you can use exponential series to approximate the data. In this paper,
the time series of Hungary's GDP in the period from 1992 to 2022 was
approximated by a series of thirty exponents of a complex variable. The use of
data smoothing by the method of triangles allows you to average the data and
increase the accuracy of approximation . This is of practical importance if the
observed random variable contains outliers that need to be smoothed out.",Smoothing of numerical series by the triangle method on the example of hungarian gdp data 1992-2022 based on approximation by series of exponents,2023-07-25 20:35:21,Yekimov Sergey,"http://arxiv.org/abs/2307.14378v1, http://arxiv.org/pdf/2307.14378v1",econ.EM
29821,em,"The Hansen-Jagannathan (HJ) distance statistic is one of the most dominant
measures of model misspecification. However, the conventional HJ specification
test procedure has poor finite sample performance, and we show that it can be
size distorted even in large samples when (proxy) factors exhibit small
correlations with asset returns. In other words, applied researchers are likely
to falsely reject a model even when it is correctly specified. We provide two
alternatives for the HJ statistic and two corresponding novel procedures for
model specification tests, which are robust against the presence of weak
(proxy) factors, and we also offer a novel robust risk premia estimator.
Simulation exercises support our theory. Our empirical application documents
the unreliability of the traditional HJ test, since it may produce
counter-intuitive results when comparing nested models by rejecting a
four-factor model but not the reduced three-factor model. At the same time, our
proposed methods are practically more appealing and show support for a
four-factor model for Fama French portfolios.",Weak (Proxy) Factors Robust Hansen-Jagannathan Distance For Linear Asset Pricing Models,2023-07-26 23:58:12,Lingwei Kong,"http://arxiv.org/abs/2307.14499v1, http://arxiv.org/pdf/2307.14499v1",econ.EM
29822,em,"We consider Wald type statistics designed for joint predictability and
structural break testing based on the instrumentation method of Phillips and
Magdalinos (2009). We show that under the assumption of nonstationary
predictors: (i) the tests based on the OLS estimators converge to a nonstandard
limiting distribution which depends on the nuisance coefficient of persistence;
and (ii) the tests based on the IVX estimators can filter out the persistence
under certain parameter restrictions due to the supremum functional. These
results contribute to the literature of joint predictability and parameter
instability testing by providing analytically tractable asymptotic theory when
taking into account nonstationary regressors. We compare the finite-sample size
and power performance of the Wald tests under both estimators via extensive
Monte Carlo experiments. Critical values are computed using standard bootstrap
inference methodologies. We illustrate the usefulness of the proposed framework
to test for predictability under the presence of parameter instability by
examining the stock market predictability puzzle for the US equity premium.",Predictability Tests Robust against Parameter Instability,2023-07-27 21:56:53,Christis Katsouris,"http://arxiv.org/abs/2307.15151v1, http://arxiv.org/pdf/2307.15151v1",econ.EM
29823,em,"We develop new methods for changes-in-changes and distributional synthetic
controls when there exists group level heterogeneity. For changes-in-changes,
we allow individuals to belong to a large number of heterogeneous groups. The
new method extends the changes-in-changes method in Athey and Imbens (2006) by
finding appropriate subgroups within the control groups which share similar
group level unobserved characteristics to the treatment groups. For
distributional synthetic control, we show that the appropriate synthetic
control needs to be constructed using units in potentially different time
periods in which they have comparable group level heterogeneity to the
treatment group, instead of units that are only in the same time period as in
Gunsilius (2023). Implementation and data requirements for these new methods
are briefly discussed.",Group-Heterogeneous Changes-in-Changes and Distributional Synthetic Controls,2023-07-28 08:23:43,"Songnian Chen, Junlong Feng","http://arxiv.org/abs/2307.15313v1, http://arxiv.org/pdf/2307.15313v1",econ.EM
29824,em,"This paper considers a linear panel model with interactive fixed effects and
unobserved individual and time heterogeneities that are captured by some latent
group structures and an unknown structural break, respectively. To enhance
realism the model may have different numbers of groups and/or different group
memberships before and after the break. With the preliminary
nuclear-norm-regularized estimation followed by row- and column-wise linear
regressions, we estimate the break point based on the idea of binary
segmentation and the latent group structures together with the number of groups
before and after the break by sequential testing K-means algorithm
simultaneously. It is shown that the break point, the number of groups and the
group memberships can each be estimated correctly with probability approaching
one. Asymptotic distributions of the estimators of the slope coefficients are
established. Monte Carlo simulations demonstrate excellent finite sample
performance for the proposed estimation algorithm. An empirical application to
real house price data across 377 Metropolitan Statistical Areas in the US from
1975 to 2014 suggests the presence both of structural breaks and of changes in
group membership.",Panel Data Models with Time-Varying Latent Group Structures,2023-07-29 04:58:59,"Yiren Wang, Peter C B Phillips, Liangjun Su","http://arxiv.org/abs/2307.15863v1, http://arxiv.org/pdf/2307.15863v1",econ.EM
29825,em,"The log transformation of the dependent variable is not innocuous when using
a difference-in-differences (DD) model. With a dependent variable in logs, the
DD term captures an approximation of the proportional difference in growth
rates across groups. As I show with both simulations and two empirical
examples, if the baseline outcome distributions are sufficiently different
across groups, the DD parameter for a log specification can differ in
sign from that of a levels specification. I provide a condition, based on (i) the
aggregate time effect, and (ii) the difference in relative baseline outcome
means, for when the sign-switch will occur.",What's Logs Got to do With it: On the Perils of log Dependent Variables and Difference-in-Differences,2023-08-01 00:51:04,Brendon McConnell,"http://arxiv.org/abs/2308.00167v3, http://arxiv.org/pdf/2308.00167v3",econ.EM
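
A minimal numeric sketch of the sign switch described above, with made-up group means (the paper's formal condition involves the aggregate time effect and relative baseline outcome means, which are not reproduced here):

```python
import numpy as np

# Hypothetical pre/post means; the treated group has a much smaller baseline.
treat_pre, treat_post = 1.0, 1.3    # treated group grows by 30%
ctrl_pre, ctrl_post = 10.0, 12.0    # control group grows by 20%

dd_levels = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
dd_logs = (np.log(treat_post) - np.log(treat_pre)) - (np.log(ctrl_post) - np.log(ctrl_pre))

print(dd_levels)  # -1.7: the levels DD is negative
print(dd_logs)    # ~0.08: the log DD is positive (30% vs 20% growth)
```
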
29826,em,"We design randomization tests of heterogeneous treatment effects when units
interact on a network. Our modeling strategy allows network interference into
the potential outcomes framework using the concept of network exposure mapping.
We consider three null hypotheses that represent different notions of
homogeneous treatment effects, but due to nuisance parameters and the
multiplicity of potential outcomes, the hypotheses are not sharp. To address
the issue of multiple potential outcomes, we propose a conditional
randomization inference method that expands on existing methods. Additionally,
we propose two techniques that overcome the nuisance parameter issue. We show
that our conditional randomization inference method, combined with either of
the proposed techniques for handling nuisance parameters, produces
asymptotically valid p-values. We illustrate the testing procedures on a
network data set and the results of a Monte Carlo study are also presented.",Randomization Inference of Heterogeneous Treatment Effects under Network Interference,2023-08-01 02:48:36,Julius Owusu,"http://arxiv.org/abs/2308.00202v1, http://arxiv.org/pdf/2308.00202v1",econ.EM
29846,em,"In this article, we study the statistical and asymptotic properties of
break-point estimators in nonstationary autoregressive and predictive
regression models for testing the presence of a single structural break at an
unknown location in the full sample. Moreover, we investigate aspects such as
how the persistence properties of covariates and the location of the
break-point affect the limiting distribution of the proposed break-point
estimators.",Break-Point Date Estimation for Nonstationary Autoregressive and Predictive Regression Models,2023-08-26 19:31:37,Christis Katsouris,"http://arxiv.org/abs/2308.13915v1, http://arxiv.org/pdf/2308.13915v1",econ.EM
29827,em,"Many macroeconomic time series are characterised by nonlinearity both in the
conditional mean and in the conditional variance and, in practice, it is
important to investigate separately these two aspects. Here we address the
issue of testing for threshold nonlinearity in the conditional mean, in the
presence of conditional heteroskedasticity. We propose a supremum Lagrange
Multiplier approach to test a linear ARMA-GARCH model against the alternative
of a TARMA-GARCH model. We derive the asymptotic null distribution of the test
statistic and this requires novel results since the difficulties of working
with nuisance parameters, absent under the null hypothesis, are amplified by
the non-linear moving average, combined with GARCH-type innovations. We show
that tests that do not account for heteroskedasticity fail to achieve the
correct size even for large sample sizes. Moreover, we show that the TARMA
specification naturally accounts for the ubiquitous presence of measurement
error that affects macroeconomic data. We apply the results to analyse the time
series of Italian strikes and we show that the TARMA-GARCH specification is
consistent with the relevant macroeconomic theory while capturing the main
features of the Italian strikes dynamics, such as asymmetric cycles and
regime-switching.",Testing for Threshold Effects in Presence of Heteroskedasticity and Measurement Error with an application to Italian Strikes,2023-08-01 13:43:28,"Francesco Angelini, Massimiliano Castellani, Simone Giannerini, Greta Goracci","http://arxiv.org/abs/2308.00444v1, http://arxiv.org/pdf/2308.00444v1",econ.EM
29828,em,"These lecture notes represent supplementary material for a short course on
time series econometrics and network econometrics. We place emphasis on limit
theory for time series regression models as well as the use of the
local-to-unity parametrization when modeling time series nonstationarity.
Moreover, we present various non-asymptotic theory results for moderate
deviation principles when considering the eigenvalues of covariance matrices as
well as asymptotics for unit root moderate deviations in nonstationary
autoregressive processes. Although not all applications from the literature are
covered, we also discuss some open problems in the time series and network
econometrics literature.",Limit Theory under Network Dependence and Nonstationarity,2023-08-02 23:32:19,Christis Katsouris,"http://arxiv.org/abs/2308.01418v4, http://arxiv.org/pdf/2308.01418v4",econ.EM
29829,em,"We propose a method for forecasting individual outcomes and estimating random
effects in linear panel data models and value-added models when the panel has a
short time dimension. The method is robust, trivial to implement and requires
minimal assumptions. The idea is to take a weighted average of time series- and
pooled forecasts/estimators, with individual weights that are based on time
series information. We show the forecast optimality of individual weights, both
in terms of minimax-regret and of mean squared forecast error. We then provide
feasible weights that ensure good performance under weaker assumptions than
those required by existing approaches. Unlike existing shrinkage methods, our
approach borrows the strength - but avoids the tyranny - of the majority, by
targeting individual (instead of group) accuracy and letting the data decide
how much strength each individual should borrow. Unlike existing empirical
Bayesian methods, our frequentist approach requires no distributional
assumptions, and, in fact, it is particularly advantageous in the presence of
features such as heavy tails that would make a fully nonparametric procedure
problematic.",A Robust Method for Microforecasting and Estimation of Random Effects,2023-08-03 11:01:07,"Raffaella Giacomini, Sokbae Lee, Silvia Sarpietro","http://arxiv.org/abs/2308.01596v1, http://arxiv.org/pdf/2308.01596v1",econ.EM
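
A minimal sketch of the combination idea, assuming the individual weights w and the component forecasts are given as placeholders; the paper's contribution, the construction of feasible weights, is not reproduced here:

```python
import numpy as np

def combined_forecast(y, w):
    """Combine an individual time-series forecast with a pooled forecast using
    individual weights w in [0, 1]. The choice of w and of the component
    forecasts is the substance of the paper; the ones below are placeholders."""
    ts_forecast = y[:, -1]              # individual forecast: last own observation
    pooled_forecast = y[:, -1].mean()   # pooled forecast: cross-sectional mean
    return w * ts_forecast + (1.0 - w) * pooled_forecast

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=0.5, size=(50, 4))     # toy short panel, N=50, T=4
print(combined_forecast(y, w=np.full(50, 0.5))[:3])  # equal-weight example
```
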
29830,em,"This paper introduces the method of composite quantile factor model for
factor analysis in high-dimensional panel data. We propose to estimate the
factors and factor loadings across different quantiles of the data, allowing
the estimates to better adapt to features of the data at different quantiles
while still modeling the mean of the data. We develop the limiting distribution
of the estimated factors and factor loadings, and an information criterion for
consistent factor number selection is also discussed. Simulations show that the
proposed estimator and the information criterion have good finite sample
properties for several non-normal distributions under consideration. We also
consider an empirical study on the factor analysis for 246 quarterly
macroeconomic variables. A companion R package cqrfactor is developed.",Composite Quantile Factor Models,2023-08-04 19:32:11,Xiao Huang,"http://arxiv.org/abs/2308.02450v1, http://arxiv.org/pdf/2308.02450v1",econ.EM
29831,em,"This paper considers identifying and estimating causal effect parameters in a
staggered treatment adoption setting -- that is, where a researcher has access
to panel data and treatment timing varies across units. We consider the case
where untreated potential outcomes may follow non-parallel trends over time
across groups. This implies that the identifying assumptions of leading
approaches such as difference-in-differences do not hold. We mainly focus on
the case where untreated potential outcomes are generated by an interactive
fixed effects model and show that variation in treatment timing provides
additional moment conditions that can be used to recover a large class of
target causal effect parameters. Our approach exploits the variation in
treatment timing without requiring either (i) a large number of time periods or
(ii) any extra exclusion restrictions. This is in contrast to
essentially all of the literature on interactive fixed effects models which
requires at least one of these extra conditions. Rather, our approach directly
applies in settings where there is variation in treatment timing. Although our
main focus is on a model with interactive fixed effects, our idea of using
variation in treatment timing to recover causal effect parameters is quite
general and could be adapted to other settings with non-parallel trends across
groups such as dynamic panel data models.",Treatment Effects in Staggered Adoption Designs with Non-Parallel Trends,2023-08-05 18:17:39,"Brantly Callaway, Emmanuel Selorm Tsyawo","http://arxiv.org/abs/2308.02899v1, http://arxiv.org/pdf/2308.02899v1",econ.EM
29832,em,"This paper introduces unit-specific heterogeneity in panel data threshold
regression. Both slope coefficients and threshold parameters are allowed to
vary by unit. The heterogeneous threshold parameters manifest via a
unit-specific empirical quantile transformation of a common underlying
threshold parameter which is estimated efficiently from the whole panel. In the
errors, the unobserved heterogeneity of the panel takes the general form of
interactive fixed effects. The newly introduced parameter heterogeneity has
implications for model identification, estimation, interpretation, and
asymptotic inference. The assumption of a shrinking threshold magnitude now
implies shrinking heterogeneity and leads to faster estimator rates of
convergence than previously encountered. The asymptotic theory for the proposed
estimators is derived and Monte Carlo simulations demonstrate its usefulness in
small samples. The new model is employed to examine the Feldstein-Horioka
puzzle and it is found that the trade liberalization policies of the 80's
significantly impacted cross-country capital mobility.",Threshold Regression in Heterogeneous Panel Data with Interactive Fixed Effects,2023-08-08 08:37:14,"Marco Barassi, Yiannis Karavias, Chongxian Zhu","http://arxiv.org/abs/2308.04057v1, http://arxiv.org/pdf/2308.04057v1",econ.EM
29833,em,"This study investigates the causal interpretation of linear social
interaction models in the presence of endogeneity in network formation under a
heterogeneous treatment effects framework. We consider an experimental setting
in which individuals are randomly assigned to treatments while no interventions
are made for the network structure. We show that running a linear regression
ignoring network endogeneity is not problematic for estimating the average
direct treatment effect. However, it leads to sample selection bias and
negative-weights problem for the estimation of the average spillover effect. To
overcome these problems, we propose using potential peer treatment as an
instrumental variable (IV), which is automatically a valid IV for actual
spillover exposure. Using this IV, we examine two IV-based estimands and
demonstrate that they have a local average treatment-effect-type causal
interpretation for the spillover effect.",Causal Interpretation of Linear Social Interaction Models with Endogenous Networks,2023-08-08 17:18:11,Tadao Hoshino,"http://arxiv.org/abs/2308.04276v2, http://arxiv.org/pdf/2308.04276v2",econ.EM
29834,em,"This paper is concerned with identification, estimation, and specification
testing in causal evaluation problems when data is selective and/or missing. We
leverage recent advances in the literature on graphical methods to provide a
unifying framework for guiding empirical practice. The approach integrates and
connects to prominent identification and testing strategies in the literature
on missing data, causal machine learning, panel data analysis, and more. We
demonstrate its utility in the context of identification and specification
testing in sample selection models and field experiments with attrition. We
provide a novel analysis of a large-scale cluster-randomized controlled
teacher's aide trial in Danish schools at grade 6. Even with detailed
administrative data, the handling of missing data crucially affects broader
conclusions about effects on mental health. Results suggest that teaching
assistants provide an effective way of improving internalizing behavior for
large parts of the student population.",A Guide to Impact Evaluation under Sample Selection and Missing Data: Teacher's Aides and Adolescent Mental Health,2023-08-09 16:53:41,"Simon Calmar Andersen, Louise Beuchert, Phillip Heiler, Helena Skyt Nielsen","http://arxiv.org/abs/2308.04963v1, http://arxiv.org/pdf/2308.04963v1",econ.EM
29835,em,"The use of regression analysis for processing experimental data is fraught
with certain difficulties, which, when models are constructed, are associated
with assumptions, and there is a normal law of error distribution and variables
are statistically independent. In practice , these conditions do not always
take place . This may cause the constructed economic and mathematical model to
have no practical value. As an alternative approach to the study of numerical
series, according to the author, smoothing of numerical series using
Fermat-Torricelli points with subsequent interpolation of these points by
series of exponents could be used. The use of exponential series for
interpolating numerical series makes it possible to achieve the accuracy of
model construction no worse than regression analysis . At the same time, the
interpolation by series of exponents does not require the statistical material
that the errors of the numerical series obey the normal distribution law, and
statistical independence of variables is also not required. Interpolation of
numerical series by exponential series represents a ""black box"" type model,
that is, only input parameters and output parameters matter.",Interpolation of numerical series by the Fermat-Torricelli point construction method on the example of the numerical series of inflation in the Czech Republic in 2011-2021,2023-08-09 21:40:04,Yekimov Sergey,"http://arxiv.org/abs/2308.05183v1, http://arxiv.org/pdf/2308.05183v1",econ.EM
29836,em,"An innovative method is proposed to construct a quantile dependence system
for inflation and money growth. By considering all quantiles and leveraging a
novel notion of quantile sensitivity, the method allows the assessment of
changes in the entire distribution of a variable of interest in response to a
perturbation in another variable's quantile. The construction of this
relationship is demonstrated through a system of linear quantile regressions.
Then, the proposed framework is exploited to examine the distributional effects
of money growth on the distributions of inflation and its disaggregate measures
in the United States and the Euro area. The empirical analysis uncovers
significant impacts of the upper quantile of the money growth distribution on
the distribution of inflation and its disaggregate measures. Conversely, the
lower and median quantiles of the money growth distribution are found to have a
negligible influence. Finally, this distributional impact exhibits variation
over time in both the United States and the Euro area.",Money Growth and Inflation: A Quantile Sensitivity Approach,2023-08-10 13:26:10,"Matteo Iacopini, Aubrey Poon, Luca Rossini, Dan Zhu","http://arxiv.org/abs/2308.05486v3, http://arxiv.org/pdf/2308.05486v3",econ.EM
29837,em,"In this paper, we propose a new procedure for unconditional and conditional
forecasting in agent-based models. The proposed algorithm is based on the
application of amortized neural networks and consists of two steps. The first
step simulates artificial datasets from the model. In the second step, a neural
network is trained to predict the future values of the variables using the
history of observations. The main advantage of the proposed algorithm is its
speed. This is due to the fact that, after the training procedure, it can be
used to yield predictions for almost any data without additional simulations or
the re-estimation of the neural network.",Amortized neural networks for agent-based model forecasting,2023-08-03 14:24:34,"Denis Koshelev, Alexey Ponomarenko, Sergei Seleznev","http://arxiv.org/abs/2308.05753v1, http://arxiv.org/pdf/2308.05753v1",econ.EM
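
A minimal sketch of the two-step amortization idea, assuming a stand-in AR(1) simulator in place of an actual agent-based model and a generic scikit-learn network; the function names and settings below are illustrative only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Step 1: simulate artificial histories from the model.
# simulate_abm is a placeholder for the user's own agent-based simulator.
def simulate_abm(T, rng):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.8 * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(1)
histories = np.array([simulate_abm(21, rng) for _ in range(2000)])
X, y = histories[:, :20], histories[:, 20]   # predict the next value from a 20-period history

# Step 2: amortize: train the network once on simulated data, then reuse it for
# prediction on any observed history without re-simulating or re-estimating.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
print(net.predict(X[:1]))
```
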
29838,em,"This article discusses recent developments in the literature of quantile time
series models in the cases of stationary and nonstationary underlying stochastic
processes.",Quantile Time Series Regression Models Revisited,2023-08-12 20:21:54,Christis Katsouris,"http://arxiv.org/abs/2308.06617v3, http://arxiv.org/pdf/2308.06617v3",econ.EM
29847,em,"This paper deals with the endogeneity of firms' entry and exit decisions in
demand estimation. Product entry decisions lack a single crossing property in
terms of demand unobservables, which causes the inconsistency of conventional
methods dealing with selection. We present a novel and straightforward two-step
approach to estimate demand while addressing endogenous product entry. In the
first step, our method estimates a finite mixture model of product entry
accommodating latent market types. In the second step, it estimates demand
controlling for the propensity scores of all latent market types. We apply this
approach to data from the airline industry.",Identification and Estimation of Demand Models with Endogenous Product Entry and Exit,2023-08-27 23:13:44,"Victor Aguirregabiria, Alessandro Iaria, Senay Sokullu","http://arxiv.org/abs/2308.14196v1, http://arxiv.org/pdf/2308.14196v1",econ.EM
29839,em,"In B2B markets, value-based pricing and selling has become an important
alternative to discounting. This study outlines a modeling method that uses
customer data (product offers made to each current or potential customer,
features, discounts, and customer purchase decisions) to estimate a mixed logit
choice model. The model is estimated via hierarchical Bayes and machine
learning, delivering customer-level parameter estimates. Customer-level
estimates are input into a nonlinear programming next-offer maximization
problem to select optimal features and discount level for customer segments,
where segments are based on loyalty and discount elasticity. The mixed logit
model is integrated with economic theory (the random utility model), and it
predicts both customer perceived value for and response to alternative future
sales offers. The methodology can be implemented to support value-based pricing
and selling efforts.
  Contributions to the literature include: (a) the use of customer-level
parameter estimates from a mixed logit model, delivered via a hierarchical
Bayes estimation procedure, to support value-based pricing decisions; (b)
validation that mixed logit customer-level modeling can deliver strong
predictive accuracy, not as high as random forest but comparing favorably; and
(c) a nonlinear programming problem that uses customer-level mixed logit
estimates to select optimal features and discounts.","Optimizing B2B Product Offers with Machine Learning, Mixed Logit, and Nonlinear Programming",2023-08-15 18:17:09,"John V. Colias, Stella Park, Elizabeth Horn","http://arxiv.org/abs/2308.07830v1, http://arxiv.org/pdf/2308.07830v1",econ.EM
29840,em,"This study evaluated three Artificial Intelligence (AI) large language model
(LLM) enabled platforms - ChatGPT, BARD, and Bing AI - to answer an
undergraduate finance exam with 20 quantitative questions across various
difficulty levels. ChatGPT scored 30 percent, outperforming Bing AI, which
scored 20 percent, while Bard lagged behind with a score of 15 percent. These
models faced common challenges, such as inaccurate computations and formula
selection. While they are currently insufficient for helping students pass the
finance exam, they serve as valuable tools for dedicated learners. Future
advancements are expected to overcome these limitations, allowing for improved
formula selection and accurate computations and potentially enabling students
to score 90 percent or higher.",Emerging Frontiers: Exploring the Impact of Generative AI Platforms on University Quantitative Finance Examinations,2023-08-15 21:27:39,Rama K. Malladi,"http://arxiv.org/abs/2308.07979v2, http://arxiv.org/pdf/2308.07979v2",econ.EM
29841,em,"When multi-dimensional instruments are used to identify and estimate causal
effects, the monotonicity condition may not hold due to heterogeneity in the
population. Under a partial monotonicity condition, which only requires the
monotonicity to hold for each instrument separately holding all the other
instruments fixed, the 2SLS estimand can still be a positively weighted average
of LATEs. In this paper, we provide a simple nonparametric test for partial
instrument monotonicity. We demonstrate the good finite sample properties of
the test through Monte Carlo simulations. We then apply the test to monetary
incentives and distance from results centers as instruments for the knowledge
of HIV status.",Testing Partial Instrument Monotonicity,2023-08-16 17:22:51,"Hongyi Jiang, Zhenting Sun","http://arxiv.org/abs/2308.08390v2, http://arxiv.org/pdf/2308.08390v2",econ.EM
29842,em,"Linear instrumental variable regressions are widely used to estimate causal
effects. Many instruments arise from the use of ""technical"" instruments and
more recently from the empirical strategy of ""judge design"". This paper surveys
and summarizes ideas from recent literature on estimation and statistical
inferences with many instruments. We discuss how to assess the strength of the
instruments and how to conduct weak identification-robust inference under
heteroscedasticity. We establish new results for a jack-knifed version of the
Lagrange Multiplier (LM) test statistic. Many exogenous regressors often arise
in practice to ensure the validity of the instruments. We extend the
weak-identification-robust tests to settings with both many exogenous
regressors and many instruments. We propose a test that properly partials out
many exogenous regressors while preserving the re-centering property of the
jack-knife. The proposed tests have uniformly correct size and good power
properties.",Weak Identification with Many Instruments,2023-08-18 16:13:41,"Anna Mikusheva, Liyang Sun","http://arxiv.org/abs/2308.09535v1, http://arxiv.org/pdf/2308.09535v1",econ.EM
29843,em,"Conventional methods of cluster-robust inference are inconsistent in the
presence of unignorably large clusters. We formalize this claim by establishing
a necessary and sufficient condition for the consistency of the conventional
methods. We find that this condition for the consistency is rejected for a
majority of empirical research papers. In this light, we propose a novel score
subsampling method which remains robust even under conditions in which the
conventional methods fail. Simulation studies support these claims. With real data
used by an empirical paper, we showcase that the conventional methods conclude
significance while our proposed method concludes insignificance.",On the Inconsistency of Cluster-Robust Inference and How Subsampling Can Fix It,2023-08-20 05:35:52,"Harold D. Chiang, Yuya Sasaki, Yulong Wang","http://arxiv.org/abs/2308.10138v3, http://arxiv.org/pdf/2308.10138v3",econ.EM
29844,em,"In this work, we further explore the forecasting ability of a recently
proposed normalizing and variance-stabilizing (NoVaS) transformation after
wrapping exogenous variables. In practice, especially in the area of financial
econometrics, extra knowledge such as fundamentals- and sentiments-based
information could be beneficial to improve the prediction accuracy of market
volatility if they are incorporated into the forecasting process. In a
classical approach, people usually apply GARCHX-type methods to include the
exogenous variables. Being a model-free prediction method, NoVaS has been shown
to be more accurate and stable than classical GARCH-type methods. We are
interested in whether the novel NoVaS method can also sustain its superiority
after exogenous covariates are taken into account. We provide the NoVaS
transformation based on the GARCHX model and then develop the corresponding
prediction procedure in the presence of exogenous variables. Also, simulation
studies verify that the NoVaS method still outperforms traditional methods,
especially for long-term time aggregated predictions.",GARHCX-NoVaS: A Model-free Approach to Incorporate Exogenous Variables,2023-08-25 15:35:34,"Kejin Wu, Sayar Karmakar","http://arxiv.org/abs/2308.13346v1, http://arxiv.org/pdf/2308.13346v1",econ.EM
29845,em,"Policy researchers using synthetic control methods typically choose a donor
pool in part by using policy domain expertise so the untreated units are most
like the treated unit in the pre-intervention period. This potentially leaves
estimation open to biases, especially when researchers have many potential
donors. We compare how functional principal component analysis synthetic
control, forward-selection, and the original synthetic control method select
donors. To do this, we use Gaussian Process simulations as well as policy case
studies from West German Reunification, a hotel moratorium in Barcelona, and a
sugar-sweetened beverage tax in San Francisco. We then summarize the
implications for policy research and provide avenues for future work.",Splash! Robustifying Donor Pools for Policy Studies,2023-08-26 01:04:20,"Jared Amani Greathouse, Mani Bayani, Jason Coupet","http://arxiv.org/abs/2308.13688v1, http://arxiv.org/pdf/2308.13688v1",econ.EM
29851,em,"Quantitative models are an important decision-making factor for policy makers
and investors. Predicting an economic recession with high accuracy and
reliability would be very beneficial for society. This paper assesses machine
learning techniques for predicting economic recessions in the United States
using market sentiment and economic indicators (seventy-five explanatory
variables) from January 1986 to June 2022 at a monthly frequency. To address
missing time-series data points, the Autoregressive Integrated Moving Average
(ARIMA) method is used to backcast explanatory variables. The analysis starts
by reducing the high-dimensional dataset to its most important features using
the Boruta algorithm and the correlation matrix, and by addressing
multicollinearity. Afterwards, various cross-validated models, both probability
regression methods and machine learning techniques, are built to predict the
binary recession outcome. The methods considered are Probit, Logit, Elastic
Net, Random Forest, Gradient Boosting, and Neural Network. Lastly, the
performance of the different models is discussed based on the confusion
matrix, accuracy, and F1 score, together with potential reasons for
their weakness and robustness.",Can Machine Learning Catch Economic Recessions Using Economic and Market Sentiments?,2023-08-28 04:35:40,Kian Tehranian,"http://arxiv.org/abs/2308.16200v1, http://arxiv.org/pdf/2308.16200v1",econ.EM
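
A minimal sketch of the cross-validated modelling step on made-up data, using two of the listed methods (Logit and Random Forest) with F1 scoring; the ARIMA backcasting and Boruta selection steps are omitted, and the data below are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the monthly indicator panel described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(438, 75))                            # ~Jan 1986 - Jun 2022, 75 indicators
y = (X[:, 0] + rng.normal(size=438) < -1).astype(int)     # made-up recession flag

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200))]:
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(name, round(f1, 3))
```
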
29852,em,"Network diffusion models are applicable to many socioeconomic interactions,
yet network interaction is hard to observe or measure. Whenever the diffusion
process is unobserved, the number of possible realizations of the latent matrix
that captures agents' diffusion statuses grows exponentially with the size of
the network. Due to interdependencies, the log likelihood function cannot be
factorized into individual components. As a consequence, exact estimation of
latent diffusion models with more than one round of interaction is
computationally infeasible. In the present paper, I propose a trimming
estimator that enables me to establish and maximize an approximate log
likelihood function that almost exactly identifies the peak of the true log
likelihood function whenever no more than one third of eligible agents are
subject to trimming.",A Trimming Estimator for the Latent-Diffusion-Observed-Adoption Model,2023-09-04 12:29:25,L. S. Sanna Stephan,"http://arxiv.org/abs/2309.01471v1, http://arxiv.org/pdf/2309.01471v1",econ.EM
29853,em,"According to standard econometric theory, Maximum Likelihood estimation (MLE)
is the efficient estimation choice; however, it is not always a feasible one.
In network diffusion models with unobserved signal propagation, MLE requires
integrating out a large number of latent variables, which quickly becomes
computationally infeasible even for moderate network sizes and time horizons.
Limiting the model time horizon, on the other hand, entails a loss of important
information, while approximation techniques entail a (small) error.
Searching for a viable alternative is thus potentially highly beneficial. This
paper proposes two estimators specifically tailored to the network diffusion
model of partially observed adoption and unobserved network diffusion.",Moment-Based Estimation of Diffusion and Adoption Parameters in Networks,2023-09-04 12:51:55,L. S. Sanna Stephan,"http://arxiv.org/abs/2309.01489v1, http://arxiv.org/pdf/2309.01489v1",econ.EM
29854,em,"Montiel Olea and Pflueger (2013) proposed the effective F-statistic as a test
for weak instruments in terms of the Nagar bias of the two-stage least squares
(2SLS) estimator relative to a benchmark worst-case bias. We show that their
methodology applies to a class of linear generalized method of moments (GMM)
estimators with an associated class of generalized effective F-statistics. The
standard nonhomoskedasticity robust F-statistic is a member of this class. The
associated GMMf estimator, with the extension f for first-stage, is a novel and
unusual estimator as the weight matrix is based on the first-stage residuals.
As the robust F-statistic can also be used as a test for underidentification,
expressions for the calculation of the weak-instruments critical values in
terms of the Nagar bias of the GMMf estimator relative to the benchmark
simplify and no simulation methods or Patnaik (1949) distributional
approximations are needed. In the grouped-data IV designs of Andrews (2018),
where the robust F-statistic is large but the effective F-statistic is small,
the GMMf estimator is shown to behave much better in terms of bias than the
2SLS estimator, as expected by the weak-instruments test results.",The Robust F-Statistic as a Test for Weak Instruments,2023-09-04 17:42:21,Frank Windmeijer,"http://arxiv.org/abs/2309.01637v1, http://arxiv.org/pdf/2309.01637v1",econ.EM
29855,em,"This paper extends the design-based framework to settings with multi-way
cluster dependence, and shows how multi-way clustering can be justified when
clustered assignment and clustered sampling occurs on different dimensions, or
when either sampling or assignment is multi-way clustered. Unlike one-way
clustering, the plug-in variance estimator in multi-way clustering is no longer
conservative, so valid inference either requires an assumption on the
correlation of treatment effects or a more conservative variance estimator.
Simulations suggest that the plug-in variance estimator is usually robust, and
the conservative variance estimator is often too conservative.",Design-Based Multi-Way Clustering,2023-09-04 18:21:20,Luther Yap,"http://arxiv.org/abs/2309.01658v1, http://arxiv.org/pdf/2309.01658v1",econ.EM
29856,em,"This paper proposes a local projection residual bootstrap method to construct
confidence intervals for impulse response coefficients of AR(1) models. Our
bootstrap method is based on the local projection (LP) approach and a residual
bootstrap procedure. We present theoretical results for our bootstrap method
and proposed confidence intervals. First, we prove the uniform consistency of
the LP-residual bootstrap over a large class of AR(1) models that allow for a
unit root. Then, we prove the asymptotic validity of our confidence intervals
over the same class of AR(1) models. Finally, we show that the LP-residual
bootstrap provides asymptotic refinements for confidence intervals on a
restricted class of AR(1) models relative to those required for the uniform
consistency of our bootstrap.",The Local Projection Residual Bootstrap for AR(1) Models,2023-09-05 04:45:32,Amilcar Velez,"http://arxiv.org/abs/2309.01889v2, http://arxiv.org/pdf/2309.01889v2",econ.EM
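
A rough sketch of the LP-residual bootstrap recipe on a simulated AR(1) series, assuming a no-intercept local projection and percentile intervals; the paper's exact construction is not reproduced here:

```python
import numpy as np

def lp_irf(y, h):
    # local projection at horizon h: regress y_{t+h} on y_t (no intercept, for brevity)
    x, z = y[:-h], y[h:]
    return (x @ z) / (x @ x)

def lp_residual_bootstrap_ci(y, h, B=999, rng=None):
    rng = rng or np.random.default_rng()
    rho = lp_irf(y, 1)                          # AR(1) coefficient via OLS
    resid = y[1:] - rho * y[:-1]
    resid = resid - resid.mean()
    stats = []
    for _ in range(B):
        e = rng.choice(resid, size=len(y), replace=True)   # resample residuals
        yb = np.empty(len(y)); yb[0] = y[0]
        for t in range(1, len(y)):                         # rebuild an AR(1) path
            yb[t] = rho * yb[t - 1] + e[t]
        stats.append(lp_irf(yb, h))                        # re-estimate the LP coefficient
    return np.percentile(stats, [2.5, 97.5])

rng = np.random.default_rng(0)
y = np.empty(200); y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.9 * y[t - 1] + rng.normal()
print(lp_irf(y, 4), lp_residual_bootstrap_ci(y, 4, rng=rng))
```
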
29867,em,"In discrete choice panel data, the estimation of average effects is crucial
for quantifying the effect of covariates, and for policy evaluation and
counterfactual analysis. This task is challenging in short panels with
individual-specific effects due to partial identification and the incidental
parameter problem. While consistent estimation of the identified set is
possible, it generally requires very large sample sizes, especially when the
number of support points of the observed covariates is large, such as when the
covariates are continuous. In this paper, we propose estimating outer bounds on
the identified set of average effects. Our bounds are easy to construct,
converge at the parametric rate, and are computationally simple to obtain even
in moderately large samples, independent of whether the covariates are discrete
or continuous. We also provide asymptotically valid confidence intervals on the
identified set. Simulation studies confirm that our approach works well and is
informative in finite samples. We also consider an application to labor force
participation.",Bounds on Average Effects in Discrete Choice Panel Data Models,2023-09-17 18:25:14,"Cavit Pakel, Martin Weidner","http://arxiv.org/abs/2309.09299v2, http://arxiv.org/pdf/2309.09299v2",econ.EM
29857,em,"Even though dyadic regressions are widely used in empirical applications, the
(asymptotic) properties of estimation methods only began to be studied recently
in the literature. This paper aims to show, in a step-by-step manner, how
U-statistics tools can be applied to obtain the asymptotic properties of
pairwise differences estimators for a two-way fixed effects model of dyadic
interactions. More specifically, we first propose an estimator for the model
that relies on pairwise differencing such that the fixed effects are
differenced out. As a result, the summands of the influence function will not
be independent anymore, showing dependence on the individual level and
translating to the fact that the usual law of large numbers and central limit
theorems do not straightforwardly apply. To overcome such obstacles, we show
how to generalize tools of U-statistics for single-index variables to the
double-indices context of dyadic datasets. A key result is that there can be
different ways of defining the Hajek projection for a directed dyadic
structure, which will lead to distinct, but equivalent, consistent estimators
for the asymptotic variances. The results presented in this paper are easily
extended to non-linear models.",On the use of U-statistics for linear dyadic interaction models,2023-09-05 12:51:45,G. M. Szini,"http://arxiv.org/abs/2309.02089v1, http://arxiv.org/pdf/2309.02089v1",econ.EM
29858,em,"We consider instrumental variable estimation of the proportional hazards
model of Cox (1972). The instrument and the endogenous variable are discrete
but there can be (possibly continuous) exogenous covariables. By making a rank
invariance assumption, we can reformulate the proportional hazards model into a
semiparametric version of the instrumental variable quantile regression model
of Chernozhukov and Hansen (2005). A naïve estimation approach based on
conditional moment conditions generated by the model would lead to a highly
nonconvex and nonsmooth objective function. To overcome this problem, we
propose a new presmoothing methodology. First, we estimate the model
nonparametrically - and show that this nonparametric estimator has a
closed-form solution in the leading case of interest of randomized experiments
with one-sided noncompliance. Second, we use the nonparametric estimator to
generate ``proxy'' observations for which exogeneity holds. Third, we apply the
usual partial likelihood estimator to the ``proxy'' data. While the paper
focuses on the proportional hazards model, our presmoothing approach could be
applied to estimate other semiparametric formulations of the instrumental
variable quantile regression model. Our estimation procedure allows for random
right-censoring. We show asymptotic normality of the resulting estimator. The
approach is illustrated via simulation studies and an empirical application to
the Illinois",Instrumental variable estimation of the proportional hazards model by presmoothing,2023-09-05 15:39:59,"Lorenzo Tedesco, Jad Beyhum, Ingrid Van Keilegom","http://arxiv.org/abs/2309.02183v1, http://arxiv.org/pdf/2309.02183v1",econ.EM
29859,em,"This paper develops a simple two-stage variational Bayesian algorithm to
estimate panel spatial autoregressive models, where N, the number of
cross-sectional units, is much larger than T, the number of time periods,
without restricting the spatial effects using a predetermined weighting matrix.
We use Dirichlet-Laplace priors for variable selection and parameter shrinkage.
Without imposing any a priori structures on the spatial linkages between
variables, we let the data speak for themselves. Extensive Monte Carlo studies
show that our method is super-fast and our estimated spatial weights matrices
strongly resemble the true spatial weights matrices. As an illustration, we
investigate the spatial interdependence of European Union regional gross value
added growth rates. In addition to a clear pattern of predominant country
clusters, we have uncovered a number of important between-country spatial
linkages which are yet to be documented in the literature. This new procedure
for estimating spatial effects is of particular relevance for researchers and
policy makers alike.",Identifying spatial interdependence in panel data with large N and small T,2023-09-07 17:29:28,"Deborah Gefang, Stephen G. Hall, George S. Tavlas","http://arxiv.org/abs/2309.03740v1, http://arxiv.org/pdf/2309.03740v1",econ.EM
29860,em,"A growing literature measures ""belief effects"" -- that is, the causal effect
of a change in beliefs on actions -- using information provision experiments,
where the provision of information is used as an instrument for beliefs. In
experimental designs with a passive control group, and under heterogeneous
belief effects, we show that the use of information provision as an instrument
may not produce a positive weighted average of belief effects. We develop an
""information provision instrumental variables"" (IPIV) framework that infers the
direction of belief updating using information about prior beliefs. In this
framework, we propose a class of IPIV estimators that recover positive weighted
averages of belief effects. Relative to our preferred IPIV, commonly used
specifications in the literature require additional assumptions to generate
positive weights. And in the cases where these additional assumptions are
satisfied, the identified parameters often up-weight individuals with priors
that are further from the provided information, which may not be desirable.",Interpreting IV Estimators in Information Provision Experiments,2023-09-09 16:36:44,"Vod Vilfort, Whitney Zhang","http://arxiv.org/abs/2309.04793v2, http://arxiv.org/pdf/2309.04793v2",econ.EM
29861,em,"We consider estimation and inference of the effects of a policy in the
absence of a control group. We obtain unbiased estimators of individual
(heterogeneous) treatment effects and a consistent and asymptotically normal
estimator of the average treatment effects, based on forecasting
counterfactuals using a short time series of pre-treatment data. We show that
the focus should be on forecast unbiasedness rather than accuracy. Correct
specification of the forecasting model is not necessary to obtain unbiased
estimates of individual treatment effects. Instead, simple basis function
(e.g., polynomial time trends) regressions deliver unbiasedness under a broad
class of data-generating processes for the individual counterfactuals. Basing
the forecasts on a model can introduce misspecification bias and does not
necessarily improve performance even under correct specification. Consistency
and asymptotic normality of our Forecasted Average Treatment effects (FAT)
estimator are attained under an additional assumption that rules out common and
unforecastable shocks occurring between the treatment date and the date at
which the effect is calculated.",Forecasted Treatment Effects,2023-09-11 20:30:34,"Irene Botosaru, Raffaella Giacomini, Martin Weidner","http://arxiv.org/abs/2309.05639v2, http://arxiv.org/pdf/2309.05639v2",econ.EM
29868,em,"The ability to conduct reproducible research in Stata is often limited by the
lack of version control for user-submitted packages. This article introduces
the require command, a tool designed to ensure Stata package dependencies are
compatible across users and computer systems. Given a list of Stata packages,
require verifies that each package is installed, checks for a minimum or exact
version or package release date, and optionally installs the package if
prompted by the researcher.",require: Package dependencies for reproducible research,2023-09-20 07:35:07,"Sergio Correia, Matthew P. Seay","http://arxiv.org/abs/2309.11058v1, http://arxiv.org/pdf/2309.11058v1",econ.EM
29862,em,"I study the estimation of semiparametric monotone index models in the
scenario where the number of observation points $n$ is extremely large and
conventional approaches fail to work due to heavy computational burdens.
Motivated by the mini-batch gradient descent algorithm (MBGD) that is widely
used as a stochastic optimization tool in the machine learning field, I
propose a novel subsample- and iteration-based estimation procedure. In
particular, starting from any initial guess of the true parameter, I
progressively update the parameter using a sequence of subsamples randomly
drawn from the data set whose sample size is much smaller than $n$. The update
is based on the gradient of some well-chosen loss function, where the
nonparametric component is replaced with its Nadaraya-Watson kernel estimator
based on subsamples. My proposed algorithm essentially generalizes MBGD
algorithm to the semiparametric setup. Compared with the full-sample-based method,
the new method reduces the computational time by roughly $n$ times if the
subsample size and the kernel function are chosen properly, so can be easily
applied when the sample size $n$ is large. Moreover, I show that if I further
average across the estimators produced during the iterations, the
difference between the average estimator and full-sample-based estimator will
be $1/\sqrt{n}$-trivial. Consequently, the average estimator is
$1/\sqrt{n}$-consistent and asymptotically normally distributed. In other
words, the new estimator substantially improves the computational speed, while
at the same time maintains the estimation accuracy.",Stochastic Learning of Semiparametric Monotone Index Models with Large Sample Size,2023-09-13 06:28:31,Qingsong Yao,"http://arxiv.org/abs/2309.06693v2, http://arxiv.org/pdf/2309.06693v2",econ.EM
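
A rough sketch of the subsample-and-iterate skeleton (mini-batch updates with a Nadaraya-Watson plug-in, followed by averaging across iterations), assuming an illustrative least-squares criterion, a numerical gradient, and no scale normalization; none of these specific choices are taken from the paper:

```python
import numpy as np

def nw_fit(index, y, h):
    """Nadaraya-Watson estimate of E[y | index] at each observed index value."""
    k = np.exp(-0.5 * ((index[:, None] - index[None, :]) / h) ** 2)
    return (k @ y) / k.sum(axis=1)

def batch_loss(beta, xb, yb, h):
    # illustrative least-squares criterion with the kernel plug-in; the paper's
    # "well-chosen loss function" is not reproduced here
    return np.mean((yb - nw_fit(xb @ beta, yb, h)) ** 2)

def mbgd_estimate(x, y, batch=200, steps=200, lr=0.5, h=0.3, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(x.shape[1]); beta[0] = 1.0      # arbitrary initial guess
    path = []
    for _ in range(steps):
        pick = rng.choice(len(y), size=batch, replace=False)   # random subsample
        xb, yb = x[pick], y[pick]
        grad = np.zeros_like(beta)                   # numerical gradient, for brevity
        for j in range(len(beta)):
            e = np.zeros_like(beta); e[j] = 1e-4
            grad[j] = (batch_loss(beta + e, xb, yb, h) - batch_loss(beta - e, xb, yb, h)) / 2e-4
        beta = beta - lr * grad                      # mini-batch gradient step
        path.append(beta.copy())
    return np.mean(path, axis=0)                     # average across iterations

# toy usage on simulated monotone-index data
rng = np.random.default_rng(1)
x = rng.normal(size=(10000, 2))
y = (x @ np.array([1.0, 0.8]) + rng.normal(size=10000) > 0).astype(float)
print(mbgd_estimate(x, y))
```
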
29863,em,"Investigating interference or spillover effects among units is a central task
in many social science problems. Network experiments are powerful tools for
this task, which avoids endogeneity by randomly assigning treatments to units
over networks. However, it is non-trivial to analyze network experiments
properly without imposing strong modeling assumptions. Previously, many
researchers have proposed sophisticated point estimators and standard errors
for causal effects under network experiments. We further show that
regression-based point estimators and standard errors can have strong
theoretical guarantees if the regression functions and robust standard errors
are carefully specified to accommodate the interference patterns under network
experiments. We first recall a well-known result that the Hajek estimator is
numerically identical to the coefficient from the weighted-least-squares fit
based on the inverse probability of the exposure mapping. Moreover, we
demonstrate that the regression-based approach offers three notable advantages:
its ease of implementation, the ability to derive standard errors through the
same weighted-least-squares fit, and the capacity to integrate covariates into
the analysis, thereby enhancing estimation efficiency. Furthermore, we analyze
the asymptotic bias of the regression-based network-robust standard errors.
Recognizing that the covariance estimator can be anti-conservative, we propose
an adjusted covariance estimator to improve the empirical coverage rates.
Although we focus on regression-based point estimators and standard errors, our
theory holds under the design-based framework, which assumes that the
randomness comes solely from the design of network experiments and allows for
arbitrary misspecification of the regression models.",Causal inference in network experiments: regression-based analysis and design-based properties,2023-09-14 10:29:49,"Mengsi Gao, Peng Ding","http://arxiv.org/abs/2309.07476v2, http://arxiv.org/pdf/2309.07476v2",econ.EM
29864,em,"This paper proposes a structural econometric approach to estimating the basic
reproduction number ($\mathcal{R}_{0}$) of Covid-19. This approach identifies
$\mathcal{R}_{0}$ in a panel regression model by filtering out the effects of
mitigating factors on disease diffusion and is easy to implement. We apply the
method to data from 48 contiguous U.S. states and a diverse set of countries.
Our results reveal a notable concentration of $\mathcal{R}_{0}$ estimates with
an average value of 4.5. Through a counterfactual analysis, we highlight a
significant underestimation of the $\mathcal{R}_{0}$ when mitigating factors
are not appropriately accounted for.",Structural Econometric Estimation of the Basic Reproduction Number for Covid-19 Across U.S. States and Selected Countries,2023-09-09 20:05:51,"Ida Johnsson, M. Hashem Pesaran, Cynthia Fan Yang","http://arxiv.org/abs/2309.08619v1, http://arxiv.org/pdf/2309.08619v1",econ.EM
29865,em,"This paper studies a cluster robust variance estimator proposed by Chiang,
Hansen and Sasaki (2022) for linear panels. First, we show algebraically that
this variance estimator (CHS estimator, hereafter) is a linear combination of
three common variance estimators: the one-way individual cluster estimator, the
""HAC of averages"" estimator, and the ""average of HACs"" estimator. Based on this
finding, we obtain a fixed-b asymptotic result for the CHS estimator and
corresponding test statistics as the cross-section and time sample sizes
jointly go to infinity. Furthermore, we propose two simple bias-corrected
versions of the variance estimator and derive the fixed-b limits. In a
simulation study, we find that the two bias-corrected variance estimators along
with fixed-b critical values provide improvements in finite sample coverage
probabilities. We illustrate the impact of bias-correction and use of the
fixed-b critical values on inference in an empirical example from Thompson
(2011) on the relationship between industry profitability and market
concentration.",Fixed-b Asymptotics for Panel Models with Two-Way Clustering,2023-09-15 21:58:08,"Kaicheng Chen, Timothy J. Vogelsang","http://arxiv.org/abs/2309.08707v2, http://arxiv.org/pdf/2309.08707v2",econ.EM
29866,em,"Empirical studies in various social sciences often involve categorical
outcomes with inherent ordering, such as self-evaluations of subjective
well-being and self-assessments in health domains. While ordered choice models,
such as the ordered logit and ordered probit, are popular tools for analyzing
these outcomes, they may impose restrictive parametric and distributional
assumptions. This paper introduces a novel estimator, the ordered correlation
forest, that can naturally handle non-linearities in the data and does not
assume a specific error term distribution. The proposed estimator modifies a
standard random forest splitting criterion to build a collection of forests,
each estimating the conditional probability of a single class. Under an
""honesty"" condition, predictions are consistent and asymptotically normal. The
weights induced by each forest are used to obtain standard errors for the
predicted probabilities and the covariates' marginal effects. Evidence from
synthetic data shows that the proposed estimator features a superior prediction
performance compared to alternative forest-based estimators and demonstrates its
ability to construct valid confidence intervals for the covariates' marginal
effects.",Ordered Correlation Forest,2023-09-15 23:43:55,Riccardo Di Francesco,"http://arxiv.org/abs/2309.08755v1, http://arxiv.org/pdf/2309.08755v1",econ.EM
29869,em,"Information provision experiments are a popular way to study causal effects
of beliefs on behavior. Researchers estimate these effects using TSLS. I show
that existing TSLS specifications do not estimate the average partial effect;
they have weights proportional to belief updating in the first-stage. If people
whose decisions depend on their beliefs gather information before the
experiment, the information treatment may shift beliefs more for people with
weak belief effects. This attenuates TSLS estimates. I propose researchers use
a local-least-squares (LLS) estimator that I show consistently estimates the
average partial effect (APE) under Bayesian updating, and apply it to Settele
(2022).",Identifying Causal Effects in Information Provision Experiments,2023-09-20 18:14:19,Dylan Balla-Elliott,"http://arxiv.org/abs/2309.11387v2, http://arxiv.org/pdf/2309.11387v2",econ.EM
29870,em,"This paper introduces a novel methodology that utilizes latency to unveil
time-series dependence patterns. A customized statistical test detects memory
dependence in event sequences by analyzing their inter-event time
distributions. Synthetic experiments based on the renewal-aging property assess
the impact of observer latency on the renewal property. Our test uncovers
memory patterns across diverse time scales, emphasizing the event sequence's
probability structure beyond correlations. The time series analysis produces a
statistical test and graphical plots which help to detect dependence patterns
among events at different time scales, if any. Furthermore, the test evaluates
the renewal assumption through aging experiments, offering valuable
applications in time-series analysis within economics.",A detection analysis for temporal memory patterns at different time-scales,2023-09-21 15:56:16,"Fabio Vanni, David Lambert","http://arxiv.org/abs/2309.12034v1, http://arxiv.org/pdf/2309.12034v1",econ.EM
29871,em,"Estimating agent-specific taste heterogeneity with a large information and
communication technology (ICT) dataset requires both model flexibility and
computational efficiency. We propose a group-level agent-based mixed (GLAM)
logit approach that is estimated with inverse optimization (IO) and group-level
market share. The model is theoretically consistent with the RUM model
framework, while the estimation method is a nonparametric approach that fits to
market-level datasets, which overcomes the limitations of existing approaches.
A case study of New York statewide travel mode choice is conducted with a
synthetic population dataset provided by Replica Inc., which contains mode
choices of 19.53 million residents on two typical weekdays, one in Fall 2019
and another in Fall 2021. Individual mode choices are grouped into market-level
market shares per census block-group OD pair and four population segments,
resulting in 120,740 group-level agents. We calibrate the GLAM logit model with
the 2019 dataset and compare to several benchmark models: mixed logit (MXL),
conditional mixed logit (CMXL), and individual parameter logit (IPL). The
results show that empirical taste distribution estimated by GLAM logit can be
either unimodal or multimodal, which is infeasible for MXL/CMXL and hard to
fulfill in IPL. The GLAM logit model outperforms benchmark models on the 2021
dataset, improving the overall accuracy from 82.35% to 89.04% and improving the
pseudo R-square from 0.4165 to 0.5788. Moreover, the value-of-time (VOT) and
mode preferences retrieved from GLAM logit align with our empirical knowledge
(e.g., VOT of NotLowIncome population in NYC is $28.05/hour; public transit and
walking is preferred in NYC). The agent-specific taste parameters are essential
for the policymaking of statewide transportation projects.",Nonparametric estimation of k-modal taste heterogeneity for group level agent-based mixed logit,2023-09-22 22:50:55,"Xiyuan Ren, Joseph Y. J. Chow","http://arxiv.org/abs/2309.13159v1, http://arxiv.org/pdf/2309.13159v1",econ.EM
29872,em,"Considering a continuous random variable Y together with a continuous random
vector X, I propose a nonparametric estimator f^(.|x) for the conditional
density of Y given X=x. This estimator takes the form of an exponential series
whose coefficients T = (T1,...,TJ) are the solution of a system of nonlinear
equations that depends on an estimator of the conditional expectation
E[p(Y)|X=x], where p(.) is a J-dimensional vector of basis functions. A key
feature is that E[p(Y)|X=x] is estimated by generalized random forest (Athey,
Tibshirani, and Wager, 2019), targeting the heterogeneity of T across x. I show
that f^(.|x) is uniformly consistent and asymptotically normal, while allowing
J to grow to infinity. I also provide a standard error formula to construct
asymptotically valid confidence intervals. Results from Monte Carlo experiments
and an empirical illustration are provided.",Nonparametric estimation of conditional densities by generalized random forests,2023-09-23 07:20:13,Federico Zincenko,"http://arxiv.org/abs/2309.13251v1, http://arxiv.org/pdf/2309.13251v1",econ.EM
29873,em,"This paper develops unified asymptotic distribution theory for dynamic
quantile predictive regressions which is useful when examining quantile
predictability in stock returns under possible presence of nonstationarity.",Unified Inference for Dynamic Quantile Predictive Regression,2023-09-25 17:10:33,Christis Katsouris,"http://arxiv.org/abs/2309.14160v2, http://arxiv.org/pdf/2309.14160v2",econ.EM
29874,em,"Stock prices often react sluggishly to news, producing gradual jumps and jump
delays. Econometricians typically treat these sluggish reactions as
microstructure effects and settle for a coarse sampling grid to guard against
them. Synchronizing mistimed stock returns on a fine sampling grid allows us to
automatically detect noisy jumps and better approximate the true common jumps
in related stock prices.",Sluggish news reactions: A combinatorial approach for synchronizing stock jumps,2023-09-27 17:47:10,"Nabil Bouamara, Kris Boudt, Sébastien Laurent, Christopher J. Neely","http://arxiv.org/abs/2309.15705v1, http://arxiv.org/pdf/2309.15705v1",econ.EM
29875,em,"To tackle difficulties for theoretical studies in situations involving
nonsmooth functions, we propose a sequence of infinitely differentiable
functions to approximate the nonsmooth function under consideration. A rate of
approximation is established and an illustration of its application is then
provided.",Smoothing the Nonsmoothness,2023-09-28 14:14:55,"Chaohua Dong, Jiti Gao, Bin Peng, Yundong Tu","http://arxiv.org/abs/2309.16348v1, http://arxiv.org/pdf/2309.16348v1",econ.EM
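
A generic textbook example of such an approximating sequence, not necessarily the one constructed in the paper: smooth approximants to the nonsmooth function |x| with an explicit uniform rate.

```latex
% Smooth approximants to |x| with a uniform approximation rate of 1/n.
\[
  f_n(x) = \sqrt{x^2 + n^{-2}}, \qquad
  \sup_{x \in \mathbb{R}} \left| f_n(x) - |x| \right| = f_n(0) = \frac{1}{n},
\]
% each f_n is infinitely differentiable, and f_n converges to |x| uniformly as n grows.
```
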
29876,em,"We propose a bootstrap generalized Hausman test for the correct specification
of unobserved heterogeneity in both linear and nonlinear fixed-effects panel
data models. We consider as null hypotheses two scenarios in which the
unobserved heterogeneity is either time-invariant or specified as additive
individual and time effects. We contrast the standard fixed-effects estimators
with the recently developed two-way grouped fixed-effects estimator, that is
consistent in the presence of time-varying heterogeneity under minimal
specification and distributional assumptions for the unobserved effects. The
Hausman test exploits the general formulation for the variance of the vector of
contrasts and critical values are computed via parametric percentile bootstrap,
so as to account for the non-centrality of the asymptotic chi-square
distribution arising from the incidental parameters and approximation biases.
Monte Carlo evidence shows that the test has correct size and good power
properties. We provide two empirical applications to illustrate the proposed
test: the first one is based on a linear model for the determinants of the wage
of working women and the second analyzes the trade extensive margin.",Specification testing with grouped fixed effects,2023-10-03 13:52:07,"Claudia Pigini, Alessandro Pionati, Francesco Valentini","http://arxiv.org/abs/2310.01950v1, http://arxiv.org/pdf/2310.01950v1",econ.EM
29890,em,"Dynamic factor models have been developed out of the need of analyzing and
forecasting time series in increasingly high dimensions. While mathematical
statisticians faced with inference problems in high-dimensional observation
spaces were focusing on the so-called spiked-model-asymptotics, econometricians
adopted an entirely different and considerably more effective asymptotic approach, rooted in the factor models originally considered in psychometrics. The so-called dynamic factor model methods have, in two decades, grown into a broad and successful body of techniques that are widely used in central banks, financial institutions, and economic and statistical institutes. The objective of this
chapter is not an extensive survey of the topic but a sketch of its historical
growth, with emphasis on the various assumptions and interpretations, and a
family tree of its main variants.",Dynamic Factor Models: a Genealogy,2023-10-26 12:59:27,"Matteo Barigozzi, Marc Hallin","http://arxiv.org/abs/2310.17278v2, http://arxiv.org/pdf/2310.17278v2",econ.EM
29877,em,"In this paper we reconsider the notion of optimality in estimation of
partially identified models. We illustrate the general problem in the context
of a semiparametric binary choice model with discrete covariates as an example
of a model which is partially identified as shown in, e.g., Bierens and Hartog
(1988). A set estimator for the regression coefficients in the model can be
constructed by implementing the Maximum Score procedure proposed by Manski
(1975). For many designs this procedure converges to the identified set for
these parameters, and so in one sense is optimal. But as shown in Komarova
(2013) for other cases the Maximum Score objective function gives an outer
region of the identified set. This motivates alternative methods that are optimal in the sense that they converge to the identified region in all
designs, and we propose and compare such procedures. One is a Hodges type
estimator combining the Maximum Score estimator with existing procedures. A
second is a two step estimator using a Maximum Score type objective function in
the second step. Lastly, we propose a new random set quantile estimator, motivated by definitions introduced in Molchanov (2006). Extensions of these ideas from the cross-sectional model to static and dynamic discrete panel data
models are also provided.",On Optimal Set Estimation for Partially Identified Binary Choice Models,2023-10-03 23:15:44,"Shakeeb Khan, Tatiana Komarova, Denis Nekipelov","http://arxiv.org/abs/2310.02414v2, http://arxiv.org/pdf/2310.02414v2",econ.EM
29878,em,"This paper proposes a Lasso-based estimator which uses information embedded
in the Moran statistic to develop a selection procedure called Moran's I Lasso
(Mi-Lasso) to solve the Eigenvector Spatial Filtering (ESF) eigenvector
selection problem. ESF uses a subset of eigenvectors from a spatial weights
matrix to efficiently account for any omitted cross-sectional correlation terms
in a classical linear regression framework, and thus does not require the
researcher to explicitly specify the spatial part of the underlying structural
model. We derive performance bounds and show the necessary conditions for
consistent eigenvector selection. The key advantages of the proposed estimator
are that it is intuitive, theoretically grounded, and substantially faster than
Lasso based on cross-validation or any proposed forward stepwise procedure. Our
main simulation results show the proposed selection procedure performs well in
finite samples. Compared to existing selection procedures, we find Mi-Lasso has
one of the smallest biases and mean squared errors across a range of sample
sizes and levels of spatial correlation. An application on house prices further
demonstrates Mi-Lasso performs well compared to existing procedures.",Moran's I Lasso for models with spatially correlated data,2023-10-04 15:43:04,"Sylvain Barde, Rowan Cherodian, Guy Tchuente","http://arxiv.org/abs/2310.02773v1, http://arxiv.org/pdf/2310.02773v1",econ.EM
29879,em,"We assess the finite sample performance of the conduct parameter test in
homogeneous goods markets. Statistical power rises with an increase in the
number of markets, a larger conduct parameter, and a stronger demand rotation
instrument. However, even with a moderate number of markets and five firms,
regardless of instrument strength and the utilization of optimal instruments,
rejecting the null hypothesis of perfect competition remains challenging. Our
findings indicate that empirical results that fail to reject perfect
competition are a consequence of the limited number of markets rather than
methodological deficiencies.",Finite Sample Performance of a Conduct Parameter Test in Homogenous Goods Markets,2023-10-06 23:38:32,"Yuri Matsumura, Suguru Otani","http://arxiv.org/abs/2310.04576v3, http://arxiv.org/pdf/2310.04576v3",econ.EM
29880,em,"This paper proposes minimum distance inference for a structural parameter of
interest, which is robust to the lack of identification of other structural
nuisance parameters. Some choices of the weighting matrix lead to asymptotic
chi-squared distributions with degrees of freedom that can be consistently
estimated from the data, even under partial identification. In any case,
knowledge of the level of under-identification is not required. We study the
power of our robust test. Several examples show the wide applicability of the
procedure, and a Monte Carlo study investigates its finite sample performance. Our
identification-robust inference method can be applied to make inferences on
both calibrated (fixed) parameters and any other structural parameter of
interest. We illustrate the method's usefulness by applying it to a structural
model on the non-neutrality of monetary policy, as in \cite{nakamura2018high},
where we empirically evaluate the validity of the calibrated parameters and we
carry out robust inference on the slope of the Phillips curve and the
information effect.",Robust Minimum Distance Inference in Structural Models,2023-10-09 17:41:53,"Joan Alegre, Juan Carlos Escanciano","http://arxiv.org/abs/2310.05761v1, http://arxiv.org/pdf/2310.05761v1",econ.EM
29881,em,"This paper studies the identification and estimation of a semiparametric
binary network model in which the unobserved social characteristic is
endogenous, that is, the unobserved individual characteristic influences both
the binary outcome of interest and how links are formed within the network. The
exact functional form of the latent social characteristic is not known. The
proposed estimators are obtained based on matching pairs of agents whose
network formation distributions are the same. The consistency and the asymptotic distribution of the estimators are established. The finite sample properties of the proposed estimators are assessed in a Monte Carlo simulation.
We conclude this study with an empirical application.",Identification and Estimation of a Semiparametric Logit Model using Network Data,2023-10-11 05:54:31,Brice Romuald Gueyap Kounga,"http://arxiv.org/abs/2310.07151v1, http://arxiv.org/pdf/2310.07151v1",econ.EM
29882,em,"Nonlinearity and endogeneity are common in causal effect studies with
observational data. In this paper, we propose new estimation and inference
procedures for nonparametric treatment effect functions with endogeneity and
potentially high-dimensional covariates. The main innovation of this paper is
the double bias correction procedure for the nonparametric instrumental
variable (NPIV) model under high dimensions. We provide a useful uniform
confidence band of the marginal effect function, defined as the derivative of
the nonparametric treatment function. The asymptotic honesty of the confidence
band is verified in theory. Simulations and an empirical study of air pollution
and migration demonstrate the validity of our procedures.",Uniform Inference for Nonlinear Endogenous Treatment Effects with High-Dimensional Covariates,2023-10-12 09:24:52,"Qingliang Fan, Zijian Guo, Ziwei Mei, Cun-Hui Zhang","http://arxiv.org/abs/2310.08063v2, http://arxiv.org/pdf/2310.08063v2",econ.EM
30272,em,"To what extent can agents with misspecified subjective models predict false
correlations? We study an ""analyst"" who utilizes models that take the form of a
recursive system of linear regression equations. The analyst fits each equation
to minimize the sum of squared errors against an arbitrarily large sample. We
characterize the maximal pairwise correlation that the analyst can predict
given a generic objective covariance matrix, subject to the constraint that the
estimated model does not distort the mean and variance of individual variables.
We show that as the number of variables in the model grows, the false pairwise
correlation can become arbitrarily close to one, regardless of the true
correlation.",Cheating with (Recursive) Models,2019-11-04 17:43:05,"Kfir Eliaz, Ran Spiegler, Yair Weiss","http://arxiv.org/abs/1911.01251v1, http://arxiv.org/pdf/1911.01251v1",econ.TH
29883,em,"Generalized method of moments estimators based on higher-order moment
conditions derived from independent shocks can be used to identify and estimate
the simultaneous interaction in structural vector autoregressions. This study
highlights two problems that arise when using these estimators in small
samples. First, imprecise estimates of the asymptotically efficient weighting
matrix and the asymptotic variance lead to volatile estimates and inaccurate
inference. Second, many moment conditions lead to a small sample scaling bias
towards innovations with a variance smaller than the normalizing unit variance
assumption. To address the first problem, I propose utilizing the assumption of
independent structural shocks to estimate the efficient weighting matrix and
the variance of the estimator. For the second issue, I propose incorporating a
continuously updated scaling term into the weighting matrix, eliminating the
scaling bias. To demonstrate the effectiveness of these measures, I conduct a Monte Carlo simulation, which shows a significant improvement in the performance
of the estimator.",Structural Vector Autoregressions and Higher Moments: Challenges and Solutions in Small Samples,2023-10-12 12:57:03,Sascha A. Keweloh,"http://arxiv.org/abs/2310.08173v1, http://arxiv.org/pdf/2310.08173v1",econ.EM
29884,em,"A series of standard and penalized logistic regression models is employed to
model and forecast the Great Recession and the Covid-19 recession in the US.
These two recessions are scrutinized by closely examining the movement of five
chosen predictors, their regression coefficients, and the predicted
probabilities of recession. The empirical analysis explores the predictive
content of numerous macroeconomic and financial indicators with respect to the NBER recession indicator. The predictive ability of the underlying models is
evaluated using a set of statistical evaluation metrics. The results strongly
support the application of penalized logistic regression models in the area of
recession prediction. Specifically, the analysis indicates that a mixed usage of different penalized logistic regression models over different forecast horizons largely outperforms standard logistic regression models in the prediction of the Great Recession in the US, as it achieves higher predictive accuracy across five different forecast horizons. The Great Recession is largely predictable, whereas the Covid-19 recession remains unpredictable, given that the Covid-19 pandemic is a real exogenous event. The results are validated by constructing, via principal component analysis (PCA) on a set of selected variables, a recession indicator that suffers less from publication lags and
exhibits a very high correlation with the NBER recession indicator.",Real-time Prediction of the Great Recession and the Covid-19 Recession,2023-10-12 20:24:18,Seulki Chung,"http://arxiv.org/abs/2310.08536v3, http://arxiv.org/pdf/2310.08536v3",econ.EM
29885,em,"We propose a regression-based approach to estimate how individuals'
expectations influence their responses to a counterfactual change. We provide
conditions under which average partial effects based on regression estimates
recover structural effects. We propose a practical three-step estimation method
that relies on subjective beliefs data. We illustrate our approach in a model
of consumption and saving, focusing on the impact of an income tax that not
only changes current income but also affects beliefs about future income.
Applying our approach to Italian survey data, we find that individuals' beliefs
matter for evaluating the impact of tax policies on consumption decisions.",Estimating Individual Responses when Tomorrow Matters,2023-10-13 16:51:51,"Stephane Bonhomme, Angela Denis","http://arxiv.org/abs/2310.09105v2, http://arxiv.org/pdf/2310.09105v2",econ.EM
29886,em,"Under correlated heterogeneity, the commonly used two-way fixed effects
estimator is biased and can lead to misleading inference. This paper proposes a
new trimmed mean group (TMG) estimator which is consistent at the irregular
rate of n^{1/3} even if the time dimension of the panel is as small as the
number of its regressors. Extensions to panels with time effects are provided,
and a Hausman-type test of correlated heterogeneity is proposed. Small sample
properties of the TMG estimator (with and without time effects) are
investigated by Monte Carlo experiments and shown to be satisfactory and to perform better than other trimmed estimators proposed in the literature. The
proposed test of correlated heterogeneity is also shown to have the correct
size and satisfactory power. The utility of the TMG approach is illustrated
with an empirical application.",Trimmed Mean Group Estimation of Average Treatment Effects in Ultra Short T Panels under Correlated Heterogeneity,2023-10-18 06:04:59,"M. Hashem Pesaran, Liying Yang","http://arxiv.org/abs/2310.11680v1, http://arxiv.org/pdf/2310.11680v1",econ.EM
29887,em,"We combine two recently proposed nonparametric difference-in-differences
methods, extending them to enable the examination of treatment effect
heterogeneity in the staggered adoption setting using machine learning. The
proposed method, machine learning difference-in-differences (MLDID), allows for
estimation of time-varying conditional average treatment effects on the
treated, which can be used to conduct detailed inference on drivers of
treatment effect heterogeneity. We perform simulations to evaluate the
performance of MLDID and find that it accurately identifies the true predictors
of treatment effect heterogeneity. We then use MLDID to evaluate the
heterogeneous impacts of Brazil's Family Health Program on infant mortality,
and find that those in poverty and in urban locations experienced the impact of the
policy more quickly than other subgroups.",Machine Learning for Staggered Difference-in-Differences and Dynamic Treatment Effect Heterogeneity,2023-10-18 16:41:41,"Julia Hatamyar, Noemi Kreif, Rudi Rocha, Martin Huber","http://arxiv.org/abs/2310.11962v1, http://arxiv.org/pdf/2310.11962v1",econ.EM
29888,em,"This paper studies the identification and estimation of a nonparametric
nonseparable dyadic model where the structural function and the distribution of
the unobservable random terms are assumed to be unknown. The identification and
the estimation of the distribution of the unobservable random term are also
proposed. I assume that the structural function is continuous and strictly
increasing in the unobservable heterogeneity. I propose a suitable normalization for identification by allowing the structural function to have some desirable properties such as homogeneity of degree one in the unobservable random term and some of its observables. The consistency and the asymptotic distribution of the estimators are established. The finite sample properties of the proposed estimators are assessed in a Monte Carlo simulation.",Nonparametric Regression with Dyadic Data,2023-10-19 18:22:12,Brice Romuald Gueyap Kounga,"http://arxiv.org/abs/2310.12825v1, http://arxiv.org/pdf/2310.12825v1",econ.EM
29889,em,"We incorporate a version of a spike and slab prior, comprising a pointmass at
zero (""spike"") and a Normal distribution around zero (""slab"") into a dynamic
panel data framework to model coefficient heterogeneity. In addition to
homogeneity and full heterogeneity, our specification can also capture sparse
heterogeneity, that is, there is a core group of units that share common
parameters and a set of deviators with idiosyncratic parameters. We fit a model
with unobserved components to income data from the Panel Study of Income
Dynamics. We find evidence for sparse heterogeneity for balanced panels
composed of individuals with long employment histories.",Bayesian Estimation of Panel Models under Potentially Sparse Heterogeneity,2023-10-20 22:31:47,"Hyungsik Roger Moon, Frank Schorfheide, Boyuan Zhang","http://arxiv.org/abs/2310.13785v1, http://arxiv.org/pdf/2310.13785v1",econ.EM
29891,em,"Many empirical applications estimate causal effects of a continuous
endogenous variable (treatment) using a binary instrument. Estimation is
typically done through linear 2SLS. This approach requires a mean treatment
change, and causal interpretation requires LATE-type monotonicity in the
first stage. An alternative approach is to explore distributional changes in
the treatment, where the first-stage restriction is treatment rank similarity.
We propose causal estimands that are doubly robust in that they are valid under
either of these two restrictions. We apply the doubly robust estimation to
estimate the impacts of sleep on well-being. Our results corroborate the usual
2SLS estimates.",Nonparametric Doubly Robust Identification of Causal Effects of a Continuous Treatment using Discrete Instruments,2023-10-28 00:43:11,"Yingying Dong, Ying-Ying Lee","http://arxiv.org/abs/2310.18504v2, http://arxiv.org/pdf/2310.18504v2",econ.EM
29892,em,"Livestreaming commerce, a hybrid of e-commerce and self-media, has expanded
the broad spectrum of traditional sales performance determinants. To
investigate the factors that contribute to the success of livestreaming
commerce, we construct a longitudinal firm-level database with 19,175
observations, covering an entire livestreaming subsector. By comparing the
forecasting accuracy of eight machine learning models, we identify a random
forest model that provides the best prediction of gross merchandise volume
(GMV). Furthermore, we utilize explainable artificial intelligence to open the
black box of the machine learning model, discovering four new facts: 1) variables representing the popularity of livestreaming events are crucial features in predicting GMV, and voice attributes are more important than appearance; 2)
popularity is a major determinant of sales for female hosts, while vocal
aesthetics is more decisive for their male counterparts; 3) merits and
drawbacks of the voice are not equally valued in the livestreaming market; 4)
based on changes in comments, page views, and likes, sales growth can be divided
into three stages. Finally, we innovatively propose a 3D-SHAP diagram that
demonstrates the relationship between feature importance, the target variable, and its predictors. This diagram identifies bottlenecks for both
beginner and top livestreamers, providing insights into ways to optimize their
sales performance.","Popularity, face and voice: Predicting and interpreting livestreamers' retail performance using machine learning techniques",2023-10-30 02:48:34,"Xiong Xiong, Fan Yang, Li Su","http://arxiv.org/abs/2310.19200v1, http://arxiv.org/pdf/2310.19200v1",econ.EM
29893,em,"This paper introduces new techniques for estimating, identifying and
simulating mixed causal-noncausal invertible-noninvertible models. We propose a
framework that integrates high-order cumulants, merging both the spectrum and
bispectrum into a single estimation function. The model that most adequately
represents the data under the assumption that the error term is i.i.d. is
selected. Our Monte Carlo study reveals unbiased parameter estimates and a high
frequency with which correct models are identified. We illustrate our strategy
through an empirical analysis of returns from 24 Fama-French emerging market
stock portfolios. The findings suggest that each portfolio displays noncausal
dynamics, producing white noise residuals devoid of conditional heteroscedastic
effects.",Spectral identification and estimation of mixed causal-noncausal invertible-noninvertible models,2023-10-30 16:49:10,"Alain Hecq, Daniel Velasquez-Gaviria","http://arxiv.org/abs/2310.19543v1, http://arxiv.org/pdf/2310.19543v1",econ.EM
29895,em,"This paper discusses the partial identification of treatment effects in
sample selection models when the exclusion restriction fails and the
monotonicity assumption in the selection effect does not hold exactly, both of
which are key challenges in applying the existing methodologies. Our approach
builds on the procedure of Lee (2009), who considers partial identification under
the monotonicity assumption, but we assume only a stochastic (and weaker)
version of monotonicity, which depends on a prespecified parameter $\vartheta$
that represents researchers' belief in the plausibility of the monotonicity.
Under this assumption, we show that we can still obtain useful bounds even when
the monotonic behavioral model does not strictly hold. Our procedure is useful
when empirical researchers anticipate that a small fraction of the population
will not behave monotonically in selection; it can also be an effective tool
for performing sensitivity analysis or examining the identification power of
the monotonicity assumption. Our procedure is easily extendable to other
related settings; we also provide the identification result of the marginal
treatment effects setting as an important application. Moreover, we show that
the bounds can still be obtained even in the absence of the knowledge of
$\vartheta$ under the semiparametric models that nest the classical probit and
logit selection models.",Bounds on Treatment Effects under Stochastic Monotonicity Assumption in Sample Selection Models,2023-11-01 14:04:08,Yuta Okamoto,"http://arxiv.org/abs/2311.00439v1, http://arxiv.org/pdf/2311.00439v1",econ.EM
29896,em,"This paper studies quasi Bayesian estimation and uncertainty quantification
for an unknown function that is identified by a nonparametric conditional
moment restriction. We derive contraction rates for a class of Gaussian process
priors. Furthermore, we provide conditions under which a Bernstein-von Mises
theorem holds for the quasi-posterior distribution. As a consequence, we show
that optimally weighted quasi-Bayes credible sets have exact asymptotic
frequentist coverage.",On Gaussian Process Priors in Conditional Moment Restriction Models,2023-11-01 20:09:54,Sid Kankanala,"http://arxiv.org/abs/2311.00662v2, http://arxiv.org/pdf/2311.00662v2",econ.EM
29897,em,"Can temporary subsidies to bundles induce long-run changes in demand due to
learning about the relative quality of one of their constituent goods? This paper
provides theoretical and experimental evidence on the role of this mechanism.
Theoretically, we introduce a model where an agent learns about the quality of
an innovation on an essential good through consumption. Our results show that
the contemporaneous effect of a one-off subsidy to a bundle that contains the
innovation may be decomposed into a direct price effect, and an indirect
learning motive, whereby an agent leverages the discount to increase the
informational bequest left to her future selves. We then assess the predictions
of our theory in a randomised experiment on a ridesharing platform. The
experiment provided two-week discounts for car trips integrating with a train
or metro station (a bundle). Given the heavy-tailed nature of our data, we
follow \cite{Athey2023} and, motivated by our theory, propose a semiparametric
model for treatment effects that enables the construction of more efficient
estimators. We introduce a statistically efficient estimator for our model by
relying on L-moments, a robust alternative to standard moments. Our estimator
immediately yields a specification test for the semiparametric model; moreover,
in our adopted parametrisation, it can be easily computed through generalized
least squares. Our empirical results indicate that a two-week 50\% discount on
car trips integrating with train/metro leads to a contemporaneous increase in
the demand for integrated rides, and, consistent with our learning model,
persistent changes in the mean and dispersion of nonintegrated rides. These
effects persist for over four months after the discount. A simple calibration
of our model shows that around 40\% to 50\% of the estimated contemporaneous
increase in integrated rides may be attributed to a learning motive.",The learning effects of subsidies to bundled goods: a semiparametric approach,2023-11-02 16:18:57,"Luis Alvarez, Ciro Biderman","http://arxiv.org/abs/2311.01217v1, http://arxiv.org/pdf/2311.01217v1",econ.EM
29898,em,"Using a transformation of the autoregressive distributed lag model due to
Bewley, a novel pooled Bewley (PB) estimator of long-run coefficients for
dynamic panels with heterogeneous short-run dynamics is proposed. The PB
estimator is directly comparable to the widely used Pooled Mean Group (PMG)
estimator, and is shown to be consistent and asymptotically normal. Monte Carlo
simulations show good small sample performance of PB compared to the existing
estimators in the literature, namely PMG, panel dynamic OLS (PDOLS), and panel
fully-modified OLS (FMOLS). Application of two bias-correction methods and a
bootstrapping of critical values to conduct inference robust to cross-sectional
dependence of errors are also considered. The utility of the PB estimator is
illustrated in an empirical application to the aggregate consumption function.",Pooled Bewley Estimator of Long Run Relationships in Dynamic Heterogenous Panels,2023-11-03 21:57:56,"Alexander Chudik, M. Hashem Pesaran, Ron P. Smith","http://arxiv.org/abs/2311.02196v1, http://arxiv.org/pdf/2311.02196v1",econ.EM
29899,em,"In this paper, we consider estimation and inference for both the multi-index
parameters and the link function involved in a class of semiparametric
multi-index models via deep neural networks (DNNs). We contribute to the design
of DNNs by i) providing more transparency for practical implementation, ii)
defining different types of sparsity, iii) showing the differentiability, iv)
pointing out the set of effective parameters, and v) offering a new variant of the rectified linear unit (ReLU) activation function, etc. Asymptotic properties for the
joint estimates of both the index parameters and the link functions are
established, and a feasible procedure for the purpose of inference is also
proposed. We conduct extensive numerical studies to examine the finite-sample
performance of the estimation methods, and we also evaluate the empirical
relevance and applicability of the proposed models and estimation methods to
real data.",Estimation of Semiparametric Multi-Index Models Using Deep Neural Networks,2023-11-06 02:07:17,"Chaohua Dong, Jiti Gao, Bin Peng, Yayi Yan","http://arxiv.org/abs/2311.02789v3, http://arxiv.org/pdf/2311.02789v3",econ.EM
29900,em,"This survey study discusses main aspects to optimal estimation methodologies
for panel data regression models. In particular, we present current
methodological developments for modeling stationary panel data as well as
robust methods for estimation and inference in nonstationary panel data
regression models. Some applications from the network econometrics and high-dimensional statistics literature are also discussed within a stationary time
series environment.",Optimal Estimation Methodologies for Panel Data Regression Models,2023-11-06 22:15:11,Christis Katsouris,"http://arxiv.org/abs/2311.03471v3, http://arxiv.org/pdf/2311.03471v3",econ.EM
29901,em,"Naive maximum likelihood estimation of binary logit models with fixed effects
leads to unreliable inference due to the incidental parameter problem. We study
the case of three-dimensional panel data, where the model includes three sets
of additive and overlapping unobserved effects. This encompasses models for
network panel data, where senders and receivers maintain bilateral
relationships over time, and fixed effects account for unobserved heterogeneity
at the sender-time, receiver-time, and sender-receiver levels. In an asymptotic
framework, where all three panel dimensions grow large at constant relative
rates, we characterize the leading bias of the naive estimator. The inference
problem we identify is particularly severe, as it is not possible to balance
the order of the bias and the standard deviation. As a consequence, the naive
estimator has a degenerating asymptotic distribution, which exacerbates the
inference problem relative to other fixed effects estimators studied in the
literature. To resolve the inference problem, we derive explicit expressions to
debias the fixed effects estimator.",Debiased Fixed Effects Estimation of Binary Logit Models with Three-Dimensional Panel Data,2023-11-07 18:38:08,Amrei Stammann,"http://arxiv.org/abs/2311.04073v1, http://arxiv.org/pdf/2311.04073v1",econ.EM
29902,em,"This paper considers the estimation of treatment effects in randomized
experiments with complex experimental designs, including cases with
interference between units. We develop a design-based estimation theory for
arbitrary experimental designs. Our theory facilitates the analysis of many
design-estimator pairs that researchers commonly employ in practice and provides
procedures to consistently estimate asymptotic variance bounds. We propose new
classes of estimators with favorable asymptotic properties from a design-based
point of view. In addition, we propose a scalar measure of experimental
complexity which can be linked to the design-based variance of the estimators.
We demonstrate the performance of our estimators using simulated datasets based
on an actual network experiment studying the effect of social networks on
insurance adoptions.",Design-based Estimation Theory for Complex Experiments,2023-11-12 19:30:56,Haoge Chang,"http://arxiv.org/abs/2311.06891v1, http://arxiv.org/pdf/2311.06891v1",econ.EM
29915,em,"We demonstrate that regression models can be estimated by working
independently in a row-wise fashion. We document a simple procedure which
allows for a wide class of econometric estimators to be implemented
cumulatively, where, in the limit, estimators can be produced without ever
storing more than a single line of data in a computer's memory. This result is
useful in understanding the mechanics of many common regression models. These
procedures can be used to speed up the computation of estimates computed via
OLS, IV, Ridge regression, LASSO, Elastic Net, and Non-linear models including
probit and logit, with all common modes of inference. This has implications for
estimation and inference with `big data', where memory constraints may imply
that working with all data at once is particularly costly. We additionally show
that even with moderately sized datasets, this method can reduce computation
time compared with traditional estimation routines.",(Frisch-Waugh-Lovell)': On the Estimation of Regression Models by Row,2023-11-27 16:53:19,"Damian Clarke, Nicolás Paris, Benjamín Villena-Roldán","http://arxiv.org/abs/2311.15829v1, http://arxiv.org/pdf/2311.15829v1",econ.EM
29903,em,"This paper proposes a new method for estimating high-dimensional binary
choice models. The model we consider is semiparametric, placing no
distributional assumptions on the error term, allowing for heteroskedastic
errors, and permitting endogenous regressors. Our proposed approaches extend
the special regressor estimator originally proposed by Lewbel (2000). This
estimator becomes impractical in high-dimensional settings due to the curse of
dimensionality associated with high-dimensional conditional density estimation.
To overcome this challenge, we introduce an innovative data-driven dimension
reduction method for nonparametric kernel estimators, which constitutes the
main innovation of this work. The method combines distance covariance-based
screening with cross-validation (CV) procedures, rendering the special
regressor estimation feasible in high dimensions. Using the new feasible
conditional density estimator, we address the variable and moment (instrumental
variable) selection problems for these models. We apply penalized least squares
(LS) and Generalized Method of Moments (GMM) estimators with a smoothly clipped
absolute deviation (SCAD) penalty. A comprehensive analysis of the oracle and
asymptotic properties of these estimators is provided. Monte Carlo simulations
are employed to demonstrate the effectiveness of our proposed procedures in
finite sample scenarios.",High Dimensional Binary Choice Model with Unknown Heteroskedasticity or Instrumental Variables,2023-11-13 07:15:59,"Fu Ouyang, Thomas Tao Yang","http://arxiv.org/abs/2311.07067v1, http://arxiv.org/pdf/2311.07067v1",econ.EM
29904,em,"This paper develops an asymptotic distribution theory for a two-stage
instrumentation estimation approach in quantile predictive regressions when
both generated covariates and persistent predictors are used. The generated
covariates are obtained from an auxiliary quantile regression model and our
main interest is the robust estimation and inference of the primary quantile
predictive regression in which this generated covariate is added to the set of
nonstationary regressors. We find that the proposed doubly IVX estimator is
robust to the abstract degree of persistence, regardless of the presence of the generated regressor obtained from the first-stage procedure. The asymptotic
properties of the two-stage IVX estimator such as mixed Gaussianity are
established while the asymptotic covariance matrix is adjusted to account for
the first-step estimation error.",Estimating Conditional Value-at-Risk with Nonstationary Quantile Predictive Regression Models,2023-11-14 17:55:44,Christis Katsouris,"http://arxiv.org/abs/2311.08218v5, http://arxiv.org/pdf/2311.08218v5",econ.EM
29905,em,"Policymakers often desire a statistical treatment rule (STR) that determines
a treatment assignment rule deployed in a future population from available
data. With true knowledge of the data-generating process, the average
treatment effect (ATE) is the key quantity characterizing the optimal treatment
rule. Unfortunately, the ATE is often not point identified but partially
identified. Presuming the partial identification of the ATE, this study
conducts a local asymptotic analysis and develops the locally asymptotically
minimax (LAM) STR. The analysis does not assume full differentiability but only directional differentiability of the boundary functions of the
identification region of the ATE. Accordingly, the study shows that the LAM STR
differs from the plug-in STR. A simulation study also demonstrates that the LAM
STR outperforms the plug-in STR.",Locally Asymptotically Minimax Statistical Treatment Rules Under Partial Identification,2023-11-15 16:47:24,Daido Kido,"http://arxiv.org/abs/2311.08958v1, http://arxiv.org/pdf/2311.08958v1",econ.EM
29906,em,"This study investigates the problem of individualizing treatment allocations
using stated preferences for treatments. If individuals know in advance how the
assignment will be individualized based on their stated preferences, they may
state false preferences. We derive an individualized treatment rule (ITR) that
maximizes welfare when individuals strategically state their preferences. We
also show that the optimal ITR is strategy-proof, that is, individuals do not
have a strong incentive to lie even if they know the optimal ITR a priori.
Constructing the optimal ITR requires information on the distribution of true
preferences and the average treatment effect conditioned on true preferences.
In practice, the information must be identified and estimated from the data. As
true preferences are hidden information, the identification is not
straightforward. We discuss two experimental designs that allow the
identification: strictly strategy-proof randomized controlled trials and doubly
randomized preference trials. Under the presumption that data comes from one of
these experiments, we develop data-dependent procedures for determining the ITR,
that is, statistical treatment rules (STRs). The maximum regret of the proposed
STRs converges to zero at a rate of the square root of the sample size. An
empirical application demonstrates our proposed STRs.",Incorporating Preferences Into Treatment Assignment Problems,2023-11-15 16:51:42,Daido Kido,"http://arxiv.org/abs/2311.08963v1, http://arxiv.org/pdf/2311.08963v1",econ.EM
29907,em,"Many causal parameters depend on a moment of the joint distribution of
potential outcomes. Such parameters are especially relevant in policy
evaluation settings, where noncompliance is common and accommodated through the
model of Imbens & Angrist (1994). This paper shows that the sharp identified
set for these parameters is an interval with endpoints characterized by the
value of optimal transport problems. Sample analogue estimators are proposed
based on the dual problem of optimal transport. These estimators are root-n
consistent and converge in distribution under mild assumptions. Inference
procedures based on the bootstrap are straightforward and computationally
convenient. The ideas and estimators are demonstrated in an application
revisiting the National Supported Work Demonstration job training program. I
find suggestive evidence that workers who would see below average earnings
without treatment tend to see above average benefits from treatment.",Estimating Functionals of the Joint Distribution of Potential Outcomes with Optimal Transport,2023-11-16 02:12:14,Daniel Ober-Reynolds,"http://arxiv.org/abs/2311.09435v1, http://arxiv.org/pdf/2311.09435v1",econ.EM
29908,em,"This paper considers inference in first-price and second-price sealed-bid
auctions with a large number of symmetric bidders having independent private
values. Given the abundance of bidders in each auction, we propose an
asymptotic framework in which the number of bidders diverges while the number
of auctions remains fixed. This framework allows us to perform asymptotically
exact inference on key model features using only transaction price data.
Specifically, we examine inference on the expected utility of the auction
winner, the expected revenue of the seller, and the tail properties of the
valuation distribution. Simulations confirm the accuracy of our inference
methods in finite samples. Finally, we also apply them to Hong Kong car license
auction data.",Inference in Auctions with Many Bidders Using Transaction Prices,2023-11-16 18:47:30,"Federico A. Bugni, Yulong Wang","http://arxiv.org/abs/2311.09972v1, http://arxiv.org/pdf/2311.09972v1",econ.EM
29909,em,"Time-Varying Parameters Vector Autoregressive (TVP-VAR) models are frequently
used in economics to capture evolving relationships among macroeconomic variables. However, TVP-VARs tend to overfit the data, resulting in inaccurate forecasts and imprecise estimates of typical objects of interest such as the impulse response functions. This paper introduces a Theory Coherent Time-Varying Parameters Vector Autoregressive Model (TC-TVP-VAR), which leverages an arbitrary theoretical framework derived from an underlying economic theory to form a prior for the time-varying parameters.
This ""theory coherent"" shrinkage prior significantly improves inference
precision and forecast accuracy over the standard TVP-VAR. Furthermore, the
TC-TVP-VAR can be used to perform indirect posterior inference on the deep
parameters of the underlying economic theory. The paper reveals that using the
classical 3-equation New Keynesian block to form a prior for the TVP-VAR
substantially enhances forecast accuracy of output growth and of the inflation
rate in a standard model of monetary policy. Additionally, the paper shows that
the TC-TVP-VAR can be used to address the inferential challenges during the
Zero Lower Bound period.",Theory coherent shrinkage of Time-Varying Parameters in VARs,2023-11-20 18:52:22,Andrea Renzetti,"http://arxiv.org/abs/2311.11858v1, http://arxiv.org/pdf/2311.11858v1",econ.EM
29910,em,"We analyze the synthetic control (SC) method in panel data settings with many
units. We assume the treatment assignment is based on unobserved heterogeneity
and pre-treatment information, allowing for both strictly and sequentially
exogenous assignment processes. We show that the critical property that
determines the behavior of the SC method is the ability of input features to
approximate the unobserved heterogeneity. Our results imply that the SC method
delivers asymptotically normal estimators for a large class of linear panel
data models as long as the number of pre-treatment periods is sufficiently
large, making it a natural alternative to the Difference-in-Differences.",Large-Sample Properties of the Synthetic Control Method under Selection on Unobservables,2023-11-22 21:30:43,"Dmitry Arkhangelsky, David Hirshberg","http://arxiv.org/abs/2311.13575v2, http://arxiv.org/pdf/2311.13575v2",econ.EM
29911,em,"This paper presents new econometric tools to unpack the treatment effect
heterogeneity of punishing misdemeanor offenses on time-to-recidivism. We show
how one can identify, estimate, and make inferences on the distributional,
quantile, and average marginal treatment effects in setups where the treatment
selection is endogenous and the outcome of interest, usually a duration
variable, is potentially right-censored. We apply our proposed econometric
methodology to evaluate the effect of fines and community service sentences as
a form of punishment on time-to-recidivism in the State of S\~ao Paulo, Brazil,
between 2010 and 2019, leveraging the as-if random assignment of judges to
cases. Our results highlight substantial treatment effect heterogeneity that
other tools are not meant to capture. For instance, we find that people whom
most judges would punish take longer to recidivate as a consequence of the
punishment, while people who would be punished only by strict judges recidivate
at an earlier date than if they were not punished. This result suggests that
designing sentencing guidelines that encourage strict judges to become more
lenient could reduce recidivism.",Was Javert right to be suspicious? Unpacking treatment effect heterogeneity of alternative sentences on time-to-recidivism in Brazil,2023-11-23 15:32:50,"Santiago Acerenza, Vitor Possebom, Pedro H. C. Sant'Anna","http://arxiv.org/abs/2311.13969v1, http://arxiv.org/pdf/2311.13969v1",econ.EM
29912,em,"Counterfactuals in equilibrium models are functions of the current state of
the world, the exogenous change variables and the model parameters. Current
practice treats the current state of the world, the observed data, as perfectly
measured, but there is good reason to believe that they are measured with
error. The main aim of this paper is to provide tools for quantifying
uncertainty about counterfactuals, when the current state of the world is
measured with error. I propose two methods, a Bayesian approach and an
adversarial approach. Both methods are practical and theoretically justified. I
apply the two methods to the application in Adao et al. (2017) and find
non-trivial uncertainty about counterfactuals.",Counterfactual Sensitivity in Equilibrium Models,2023-11-23 17:36:45,Bas Sanders,"http://arxiv.org/abs/2311.14032v1, http://arxiv.org/pdf/2311.14032v1",econ.EM
29913,em,"The matrix exponential spatial models exhibit similarities to the
conventional spatial autoregressive model in spatial econometrics but offer
analytical, computational, and interpretive advantages. This paper provides a
comprehensive review of the literature on the estimation, inference, and model
selection approaches for the cross-sectional matrix exponential spatial models.
We discuss summary measures for the marginal effects of regressors and detail
the matrix-vector product method for efficient estimation. Our aim is not only
to summarize the main findings from the spatial econometric literature but also
to make them more accessible to applied researchers. Additionally, we
contribute to the literature by introducing some new results. We propose an
M-estimation approach for models with heteroskedastic error terms and
demonstrate that the resulting M-estimator is consistent and has an asymptotic
normal distribution. We also consider some new results for model selection
exercises. In a Monte Carlo study, we examine the finite sample properties of
various estimators from the literature alongside the M-estimator.",A Review of Cross-Sectional Matrix Exponential Spatial Models,2023-11-24 22:10:13,"Ye Yang, Osman Dogan, Suleyman Taspinar, Fei Jin","http://arxiv.org/abs/2311.14813v1, http://arxiv.org/pdf/2311.14813v1",econ.EM
29914,em,"This survey discusses the recent causal panel data literature. This recent
literature has focused on credibly estimating causal effects of binary
interventions in settings with longitudinal data, with an emphasis on practical
advice for empirical researchers. It pays particular attention to heterogeneity
in the causal effects, often in situations where few units are treated. The
literature has extended earlier work on difference-in-differences or
two-way-fixed-effect estimators and more generally incorporated factor models
or interactive fixed effects. It has also developed novel methods using
synthetic control approaches.",Causal Models for Longitudinal and Panel Data: A Survey,2023-11-27 02:31:41,"Dmitry Arkhangelsky, Guido Imbens","http://arxiv.org/abs/2311.15458v1, http://arxiv.org/pdf/2311.15458v1",econ.EM
29929,em,"Determining whether Global Average Temperature (GAT) is an integrated process
of order 1, I(1), or is a stationary process around a trend function is crucial
for detection, attribution, impact and forecasting studies of climate change.
In this paper, we investigate the nature of trends in GAT building on the
analysis of individual temperature grids. Our 'micro-founded' evidence suggests
that GAT is stationary around a non-linear deterministic trend in the form of a
linear function with a one-period structural break. This break can be
attributed to a combination of individual grid breaks and the standard
aggregation method under acceleration in global warming. We illustrate our
findings using simulations.",Trends in Temperature Data: Micro-foundations of Their Nature,2023-12-11 16:37:48,"Maria Dolores Gadea, Jesus Gonzalo, Andrey Ramos","http://arxiv.org/abs/2312.06379v1, http://arxiv.org/pdf/2312.06379v1",econ.EM
29916,em,"This paper investigates how certain relationship between observed and
counterfactual distributions serves as an identifying condition for treatment
effects when the treatment is endogenous, and shows that this condition holds
in a range of nonparametric models for treatment effects. To this end, we first
provide a novel characterization of the prevalent assumption restricting
treatment heterogeneity in the literature, namely rank similarity. Our
characterization demonstrates the stringency of this assumption and allows us
to relax it in an economically meaningful way, resulting in our identifying
condition. It also justifies the quest for richer exogenous variation in the
data (e.g., multi-valued or multiple instrumental variables) in exchange for
weaker identifying conditions. The primary goal of this investigation is to
provide empirical researchers with tools that are robust and easy to implement
but still yield tight policy evaluations.","On Quantile Treatment Effects, Rank Similarity, and Variation of Instrumental Variables",2023-11-27 17:43:24,"Sukjin Han, Haiqing Xu","http://arxiv.org/abs/2311.15871v1, http://arxiv.org/pdf/2311.15871v1",econ.EM
29917,em,"This paper proposes three novel test procedures that yield valid inference in
an environment with many weak instrumental variables (MWIV). It is observed
that the t statistic of the jackknife instrumental variable estimator (JIVE)
has an asymptotic distribution that is identical to that of the two-stage least squares (TSLS) t statistic in the just-identified environment. Consequently, test procedures that were valid for the TSLS t statistic are also valid for the JIVE t statistic. Two such
procedures, i.e., VtF and conditional Wald, are adapted directly. By exploiting
a feature of MWIV environments, a third, more powerful, one-sided VtF-based
test procedure can be obtained.",Valid Wald Inference with Many Weak Instruments,2023-11-27 18:39:32,Luther Yap,"http://arxiv.org/abs/2311.15932v1, http://arxiv.org/pdf/2311.15932v1",econ.EM
29918,em,"For the over-identified linear instrumental variables model, researchers
commonly report the 2SLS estimate along with the robust standard error and seek
to conduct inference with these quantities. If errors are homoskedastic, one
can control the degree of inferential distortion using the first-stage F
critical values from Stock and Yogo (2005), or use the robust-to-weak
instruments Conditional Wald critical values of Moreira (2003). If errors are
non-homoskedastic, these methods do not apply. We derive the generalization of
Conditional Wald critical values that is robust to non-homoskedastic errors
(e.g., heteroskedasticity or clustered variance structures), which can also be
applied to nonlinear weakly-identified models (e.g. weakly-identified GMM).",Robust Conditional Wald Inference for Over-Identified IV,2023-11-27 18:57:58,"David S. Lee, Justin McCrary, Marcelo J. Moreira, Jack Porter, Luther Yap","http://arxiv.org/abs/2311.15952v1, http://arxiv.org/pdf/2311.15952v1",econ.EM
29919,em,"This paper addresses the challenge of identifying causal effects of
nonbinary, ordered treatments with multiple binary instruments. Besides presenting novel insights into the widely applied two-stage least squares
estimand, I show that a weighted average of local average treatment effects for
combined complier populations is identified under the limited monotonicity
assumption. This novel causal parameter has an intuitive interpretation,
offering an appealing alternative to two-stage least squares. I employ recent
advances in causal machine learning for estimation. I further demonstrate how
causal forests can be used to detect local violations of the underlying limited
monotonicity assumption. The methodology is applied to study the impact of
community nurseries on child health outcomes.","Identifying Causal Effects of Nonbinary, Ordered Treatments using Multiple Instrumental Variables",2023-11-29 15:11:44,Nadja van 't Hoff,"http://arxiv.org/abs/2311.17575v1, http://arxiv.org/pdf/2311.17575v1",econ.EM
29920,em,"Canonical RD designs yield credible local estimates of the treatment effect
at the cutoff under mild continuity assumptions, but they fail to identify
treatment effects away from the cutoff without additional assumptions. The
fundamental challenge of identifying treatment effects away from the cutoff is
that the counterfactual outcome under the alternative treatment status is never
observed. This paper aims to provide a methodological blueprint to identify
treatment effects away from the cutoff in various empirical settings by
offering a non-exhaustive list of assumptions on the counterfactual outcome.
Instead of assuming the exact evolution of the counterfactual outcome, this
paper bounds its variation using the data and sensitivity parameters. The
proposed assumptions are weaker than those introduced previously in the
literature, resulting in partially identified treatment effects that are less
susceptible to assumption violations. This approach accommodates both single
cutoff and multi-cutoff designs. The specific choice of the extrapolation
assumption depends on the institutional background of each empirical
application. Additionally, researchers are recommended to conduct sensitivity
analysis on the chosen parameter and assess resulting shifts in conclusions.
The paper compares the proposed identification results with results using
previous methods via an empirical application and simulated data. It
demonstrates that set identification yields a more credible conclusion about
the sign of the treatment effect.",Extrapolating Away from the Cutoff in Regression Discontinuity Designs,2023-11-30 01:56:33,Yiwei Sun,"http://arxiv.org/abs/2311.18136v1, http://arxiv.org/pdf/2311.18136v1",econ.EM
29921,em,"The partially linear binary choice model can be used for estimating
structural equations where nonlinearity may appear due to diminishing marginal
returns, different life cycle regimes, or hectic physical phenomena. The
inference procedure for this model based on the analytic asymptotic approximation can be unreliable if the sample size is not sufficiently large. This paper proposes a bootstrap inference approach for the
model. Monte Carlo simulations show that the proposed inference method performs
well in finite samples compared to the procedure based on the asymptotic
approximation.",Bootstrap Inference on Partially Linear Binary Choice Model,2023-11-30 21:01:11,"Wenzheng Gao, Zhenting Sun","http://arxiv.org/abs/2311.18759v1, http://arxiv.org/pdf/2311.18759v1",econ.EM
29922,em,"This paper expands traditional stochastic volatility models by allowing for
time-varying skewness without imposing it. While dynamic asymmetry may capture
the likely direction of future asset returns, it comes at the risk of leading
to overparameterization. Our proposed approach mitigates this concern by
leveraging sparsity-inducing priors to automatically select the skewness parameter as being dynamic, static, or zero in a data-driven framework. We consider two empirical applications. First, in a bond yield application, dynamic skewness captures interest rate cycles of monetary easing and tightening that are partially explained by central banks' mandates. In a currency modeling framework, our model indicates no skewness in the carry factor after accounting for stochastic volatility, which supports the idea of carry crashes
being the result of volatility surges instead of dynamic skewness.",Stochastic volatility models with skewness selection,2023-12-01 04:35:41,"Igor Ferreira Batista Martins, Hedibert Freitas Lopes","http://arxiv.org/abs/2312.00282v1, http://arxiv.org/pdf/2312.00282v1",econ.EM
29923,em,"We introduce a new estimator, CRE-GMM, which exploits the correlated random
effects (CRE) approach within the generalised method of moments (GMM),
specifically applied to level equations, GMM-lev. It has the advantage of
estimating the effect of measurable time-invariant covariates using all
available information. This is not possible with GMM-dif, applied to the
equations of each period transformed into first differences, while GMM-sys uses
little information as it adds the equation in levels for only one period. The
GMM-lev, by implying a two-component error term containing individual
heterogeneity and a shock, exposes the explanatory variables to possible double
endogeneity. For example, the estimation of actual persistence could suffer
from bias if instruments were correlated with the unit-specific error
component. The CRE-GMM deals with double endogeneity, captures initial
conditions, and enhances inference. Monte Carlo simulations for different panel
types and under different double endogeneity assumptions show the advantage of
our approach. The empirical applications on production and R&D help to
clarify the advantages of using CRE-GMM.",GMM-lev estimation and individual heterogeneity: Monte Carlo evidence and empirical applications,2023-12-01 10:46:16,"Maria Elena Bontempi, Jan Ditzen","http://arxiv.org/abs/2312.00399v2, http://arxiv.org/pdf/2312.00399v2",econ.EM
29924,em,"A common approach to constructing a Synthetic Control unit is to fit on the
outcome variable and covariates in pre-treatment time periods, but it has been
shown by Ferman and Pinto (2021) that this approach does not provide asymptotic
unbiasedness when the fit is imperfect and the number of controls is fixed.
Many related panel methods have a similar limitation when the number of units
is fixed. I introduce and evaluate a new method in which the Synthetic Control
is constructed using a Generalized Method of Moments approach in which, if the Synthetic Control satisfies the moment conditions, it must have the same
loadings on latent factors as the treated unit. I show that a Synthetic Control
Estimator of this form will be asymptotically unbiased as the number of
pre-treatment time periods goes to infinity, even when pre-treatment fit is
imperfect and the set of controls is fixed. Furthermore, if both the number of
pre-treatment and post-treatment time periods go to infinity, then averages of
treatment effects can be consistently estimated and asymptotically valid
inference can be conducted using a subsampling method. I conduct simulations
and an empirical application to compare the performance of this method with
existing approaches in the literature.",A Method of Moments Approach to Asymptotically Unbiased Synthetic Controls,2023-12-02 22:35:50,Joseph Fry,"http://arxiv.org/abs/2312.01209v1, http://arxiv.org/pdf/2312.01209v1",econ.EM
29925,em,"This paper proposes a general framework for inference on three types of
almost dominances: Almost Lorenz dominance, almost inverse stochastic
dominance, and almost stochastic dominance. We first generalize almost Lorenz
dominance to almost upward and downward Lorenz dominances. We then provide a
bootstrap inference procedure for the Lorenz dominance coefficients, which
measure the degrees of almost Lorenz dominances. Furthermore, we propose almost
upward and downward inverse stochastic dominances and provide inference on the
inverse stochastic dominance coefficients. We also show that our results can
easily be extended to almost stochastic dominance. Simulation studies
demonstrate the finite sample properties of the proposed estimators and the
bootstrap confidence intervals. We apply our methods to the inequality growth
in the United Kingdom and find evidence for almost upward inverse stochastic
dominance.",Almost Dominance: Inference and Application,2023-12-04 22:09:14,"Xiaojun Song, Zhenting Sun","http://arxiv.org/abs/2312.02288v1, http://arxiv.org/pdf/2312.02288v1",econ.EM
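For readers who want to see the mechanics, here is a minimal Python sketch of bootstrap inference for a violation-ratio coefficient of the kind used in the almost-dominance literature: the ratio of the area where one empirical Lorenz curve lies above the other to the total area between the curves. The specific coefficient, the grid, and the bootstrap scheme are illustrative assumptions that may differ from the paper's definitions; `lorenz`, `violation_ratio`, and `bootstrap_ci` are hypothetical helper names.

```python
# Illustrative sketch only: a violation-ratio coefficient for Lorenz dominance
# and a simple nonparametric bootstrap confidence interval for it.
import numpy as np

def lorenz(x, grid):
    """Empirical Lorenz curve of a nonnegative sample, evaluated on a grid in [0, 1]."""
    xs = np.sort(x)
    cum = np.concatenate([[0.0], np.cumsum(xs)]) / xs.sum()
    p = np.linspace(0.0, 1.0, xs.size + 1)
    return np.interp(grid, p, cum)

def violation_ratio(x_a, x_b, grid):
    """Share of the area between the two Lorenz curves where A lies above B."""
    diff = lorenz(x_a, grid) - lorenz(x_b, grid)
    return np.maximum(diff, 0.0).sum() / (np.abs(diff).sum() + 1e-12)

def bootstrap_ci(x_a, x_b, B=500, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the violation-ratio coefficient."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)
    draws = [violation_ratio(rng.choice(x_a, x_a.size), rng.choice(x_b, x_b.size), grid)
             for _ in range(B)]
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
inc_a = rng.lognormal(0.0, 0.9, 1500)   # synthetic income samples for illustration
inc_b = rng.lognormal(0.1, 0.8, 1500)
grid = np.linspace(0.0, 1.0, 200)
print(violation_ratio(inc_a, inc_b, grid), bootstrap_ci(inc_a, inc_b))
```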
29926,em,"I develop the theory around using control functions to instrument hazard
models, allowing the inclusion of endogenous (e.g., mismeasured) regressors.
Simple discrete-data hazard models can be expressed as binary choice panel data
models, and the widespread Prentice and Gloeckler (1978) discrete-data
proportional hazards model can specifically be expressed as a complementary
log-log model with time fixed effects. This allows me to recast it as GMM
estimation and its instrumented version as sequential GMM estimation in a
Z-estimation (non-classical GMM) framework; this framework can then be
leveraged to establish asymptotic properties and sufficient conditions. Whilst
this paper focuses on the Prentice and Gloeckler (1978) model, the methods and
discussion developed here can be applied more generally to other hazard models
and binary choice models. I also introduce my Stata command for estimating a
complementary log-log model instrumented via control functions (available as
ivcloglog on SSC), which allows practitioners to easily instrument the Prentice
and Gloeckler (1978) model.",A Theory Guide to Using Control Functions to Instrument Hazard Models,2023-12-06 01:14:14,William Liu,"http://arxiv.org/abs/2312.03165v1, http://arxiv.org/pdf/2312.03165v1",econ.EM
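As a concrete illustration of the link between discrete-time hazards and binary panels described above, the following Python sketch expands duration data into person-period rows and fits a complementary log-log model with period dummies, a Prentice and Gloeckler (1978)-type specification. The control-function step that is the paper's contribution (and the Stata implementation in ivcloglog) is not reproduced here; the data are synthetic and the `CLogLog` link name assumes a recent statsmodels version.

```python
# Minimal sketch: discrete-time proportional hazards as a cloglog binary panel.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def person_period(durations, events, X):
    """Expand (duration, event, covariates) into one row per person-period."""
    rows = []
    for i, (t, d) in enumerate(zip(durations, events)):
        for s in range(1, int(t) + 1):
            rows.append({"id": i, "period": s,
                         "fail": int(bool(d) and s == t),   # 1 only in the failure period
                         "x0": X[i, 0], "x1": X[i, 1]})
    return pd.DataFrame(rows)

rng = np.random.default_rng(5)
n = 300
X = rng.normal(size=(n, 2))
durations = rng.integers(1, 8, size=n)          # synthetic discrete durations
events = rng.binomial(1, 0.8, size=n)           # 1 = failure observed, 0 = censored
pp = person_period(durations, events, X)

design = pd.get_dummies(pp["period"], prefix="t").astype(float)   # time fixed effects
design[["x0", "x1"]] = pp[["x0", "x1"]]
fit = sm.GLM(pp["fail"], design,
             family=sm.families.Binomial(link=sm.families.links.CLogLog())).fit()
print(fit.params)
```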
29927,em,"When fitting a particular economic model to a sample of data, the model may
turn out to be heavily misspecified for some observations. This can happen
because of unmodelled idiosyncratic events, such as an abrupt but short-lived
change in policy. These outliers can significantly alter estimates and
inferences. Robust estimation is desirable to limit their influence. For
skewed data, however, robust estimation induces another bias, which can also invalidate the estimation
and inferences. This paper proposes a robust GMM estimator with a simple bias
correction that does not degrade robustness significantly. The paper provides
finite-sample robustness bounds, and asymptotic uniform equivalence with an
oracle that discards all outliers. Consistency and asymptotic normality ensue
from that result. An application to the ""Price-Puzzle,"" which finds inflation
increases when monetary policy tightens, illustrates the concerns and the
method. The proposed estimator finds the intuitive result: tighter monetary
policy leads to a decline in inflation.",Occasionally Misspecified,2023-12-08 23:07:45,Jean-Jacques Forneron,"http://arxiv.org/abs/2312.05342v1, http://arxiv.org/pdf/2312.05342v1",econ.EM
29928,em,"We examine finite sample performance of the Generalized Covariance (GCov)
residual-based specification test for semiparametric models with i.i.d. errors.
The residual-based multivariate portmanteau test statistic follows
asymptotically a $\chi^2$ distribution when the model is estimated by the GCov
estimator. The test is shown to perform well in applications to the univariate
mixed causal-noncausal MAR, double autoregressive (DAR) and multivariate Vector
Autoregressive (VAR) models. We also introduce a bootstrap procedure that
provides the limiting distribution of the test statistic when the specification
test is applied to a model estimated by the maximum likelihood, or the
approximate or quasi-maximum likelihood under a parametric assumption on the
error distribution.",GCov-Based Portmanteau Test,2023-12-09 00:18:14,"Joann Jasiak, Aryan Manafi Neyazi","http://arxiv.org/abs/2312.05373v1, http://arxiv.org/pdf/2312.05373v1",econ.EM
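A minimal sketch of a residual-based multivariate portmanteau statistic of this general type is given below, using the residual series together with a nonlinear transformation (its square) in the GCov spirit. The exact statistic, degrees of freedom, and transformations in the paper may differ, and the chi-square reference assumed here ignores any estimation effect.

```python
# Illustrative multivariate portmanteau statistic on (transformed) residuals.
import numpy as np
from scipy import stats

def portmanteau(resid, H=10):
    """resid: (n, K) array of residual series; returns the statistic and a p-value
    from a chi-square with H*K^2 degrees of freedom (illustrative choice)."""
    e = resid - resid.mean(axis=0)
    n, K = e.shape
    G0_inv = np.linalg.inv(e.T @ e / n)
    Q = 0.0
    for h in range(1, H + 1):
        Gh = e[h:].T @ e[:-h] / n                 # lag-h autocovariance matrix
        Q += np.trace(Gh.T @ G0_inv @ Gh @ G0_inv)
    Q *= n
    df = H * K * K
    return Q, 1.0 - stats.chi2.cdf(Q, df)

rng = np.random.default_rng(1)
u = rng.standard_t(df=5, size=(500, 1))           # i.i.d. "residuals"
resid = np.column_stack([u, u**2])                # include a nonlinear transformation
print(portmanteau(resid, H=10))
```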
29930,em,"This set of lecture notes discusses key concepts in the structural analysis of
Vector Autoregressive models, prepared for the teaching of a course on Applied
Macroeconometrics with Advanced Topics.",Structural Analysis of Vector Autoregressive Models,2023-12-11 17:21:34,Christis Katsouris,"http://arxiv.org/abs/2312.06402v5, http://arxiv.org/pdf/2312.06402v5",econ.EM
29934,em,"This paper is concerned with the problem of variable selection in the
presence of parameter instability when both the marginal effects of signals on
the target variable and the correlations of the covariates in the active set
could vary over time. We pose the issue of whether one should use weighted or
unweighted observations at the variable selection stage in the presence of
parameter instability, particularly when the number of potential covariates is
large. We allow parameter instability to be continuous or discrete, subject to
certain regularity conditions. We discuss the pros and cons of Lasso and the
One Covariate at a time Multiple Testing (OCMT) method for variable selection
and argue that OCMT has important advantages under parameter instability. We
establish three main theorems on selection, estimation post selection, and
in-sample fit. These theorems provide justification for using unweighted
observations at the selection stage of OCMT and down-weighting of observations
only at the forecasting stage. It is shown that OCMT delivers better forecasts,
in the mean squared error sense, than Lasso, Adaptive Lasso and boosting
in both Monte Carlo experiments and three sets of empirical applications:
forecasting monthly returns on 28 stocks from the Dow Jones, forecasting quarterly
output growths across 33 countries, and forecasting euro area output growth
using surveys of professional forecasters.",Variable Selection in High Dimensional Linear Regressions with Parameter Instability,2023-12-24 17:52:23,"Alexander Chudik, M. Hashem Pesaran, Mahrad Sharifvaghefi","http://arxiv.org/abs/2312.15494v1, http://arxiv.org/pdf/2312.15494v1",econ.EM
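To fix ideas, here is a minimal Python sketch of the one-covariate-at-a-time selection step on unweighted observations: each candidate regressor is tested in a univariate regression against a critical value that grows with the number of candidates. The Bonferroni-style critical value, the absence of preselected conditioning variables, and the function name are illustrative assumptions, not the exact choices analyzed in the paper.

```python
# Illustrative one-covariate-at-a-time (OCMT-style) selection step.
import numpy as np
from scipy import stats

def ocmt_select(y, X, p_value=0.05, delta=1.0):
    """Return indices of covariates whose single-regressor t-statistic exceeds a
    multiple-testing-adjusted critical value (Bonferroni-style, illustrative)."""
    n, k = X.shape
    cv = stats.norm.ppf(1 - p_value / (2 * k**delta))
    selected = []
    for j in range(k):
        xj = np.column_stack([np.ones(n), X[:, j]])
        beta, *_ = np.linalg.lstsq(xj, y, rcond=None)
        resid = y - xj @ beta
        sigma2 = resid @ resid / (n - 2)
        var_b = sigma2 * np.linalg.inv(xj.T @ xj)[1, 1]
        if abs(beta[1]) / np.sqrt(var_b) > cv:
            selected.append(j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))
y = 0.8 * X[:, 3] - 0.5 * X[:, 7] + rng.normal(size=500)
print(ocmt_select(y, X))   # should recover columns 3 and 7
```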
29935,em,"We introduce a novel approach for comparing out-of-sample multi-step
forecasts obtained from a pair of nested models that is based on the forecast
encompassing principle. Our proposed approach relies on an alternative way of
testing the population moment restriction implied by the forecast encompassing
principle and that links the forecast errors from the two competing models in a
particular way. Its key advantage is that it is able to bypass the variance
degeneracy problem afflicting model-based forecast comparisons across nested
models. It results in a test statistic whose limiting distribution is standard
normal and which is particularly simple to construct and can accommodate both
single-period and longer-horizon prediction comparisons. Inferences are also
shown to be robust to different predictor types, including stationary,
highly-persistent and purely deterministic processes. Finally, we illustrate
the use of our proposed approach through an empirical application that explores
the role of global inflation in enhancing individual country specific inflation
forecasts.",Direct Multi-Step Forecast based Comparison of Nested Models via an Encompassing Test,2023-12-26 18:55:48,Jean-Yves Pitarakis,"http://arxiv.org/abs/2312.16099v1, http://arxiv.org/pdf/2312.16099v1",econ.EM
29936,em,"Consumer choice modeling examines how the personal preferences of decision
makers (customers) for products influence demand at the level of the
individual. Contemporary choice theory is built upon the characteristics of
the decision maker, the alternatives available to the decision maker, the
attributes of those alternatives, and the decision rules the decision maker
uses to make a choice. The choice set in our research is represented by six
major brands (products) of laundry detergents in the Japanese market. We use
panel data on the purchases of 98 households, to which we apply a hierarchical
probit model estimated by Markov chain Monte Carlo (MCMC) simulation in order
to evaluate the brand values of the six brands. The applied model also allows us to evaluate the tangible
and intangible brand values. These evaluated metrics help us to assess the
brands based on their tangible and intangible characteristics. Moreover,
consumer choice modeling also provides a framework for assessing the
environmental performance of laundry detergent brands as the model uses the
information on components (physical attributes) of laundry detergents.",Development of Choice Model for Brand Evaluation,2023-12-28 12:51:46,"Marina Kholod, Nikita Mokrenko","http://arxiv.org/abs/2312.16927v1, http://arxiv.org/pdf/2312.16927v1",econ.EM
29937,em,"We apply classical statistical decision theory to a large class of treatment
choice problems with partial identification, revealing important theoretical
and practical challenges but also interesting research opportunities. The
challenges are: In a general class of problems with Gaussian likelihood, all
decision rules are admissible; it is maximin-welfare optimal to ignore all
data; and, for severe enough partial identification, there are infinitely many
minimax-regret optimal decision rules, all of which sometimes randomize the
policy recommendation. The opportunities are: We introduce a profiled regret
criterion that can reveal important differences between rules and render some
of them inadmissible; and we uniquely characterize the minimax-regret optimal
rule that least frequently randomizes. We apply our results to aggregation of
experimental estimates for policy adoption, to extrapolation of Local Average
Treatment Effects, and to policy making in the presence of omitted variable
bias.",Decision Theory for Treatment Choice Problems with Partial Identification,2023-12-29 17:27:52,"José Luis Montiel Olea, Chen Qiu, Jörg Stoye","http://arxiv.org/abs/2312.17623v1, http://arxiv.org/pdf/2312.17623v1",econ.EM
29938,em,"Forecasting a key macroeconomic variable, consumer price index (CPI)
inflation, for BRIC countries using economic policy uncertainty and
geopolitical risk is a difficult proposition for policymakers at the central
banks. This study proposes a novel filtered ensemble wavelet neural network
(FEWNet) that can produce reliable long-term forecasts for CPI inflation. The
proposal applies a maximal overlap discrete wavelet transform (MODWT) to the CPI
inflation series to obtain high-frequency and low-frequency signals. All the
wavelet-transformed series and filtered exogenous variables are fed into
downstream autoregressive neural networks to make the final ensemble forecast.
Theoretically, we show that FEWNet reduces the empirical risk compared to
single, fully connected neural networks. We also demonstrate that the
rolling-window real-time forecasts obtained from the proposed algorithm are
significantly more accurate than benchmark forecasting methods. Additionally,
we use conformal prediction intervals to quantify the uncertainty associated
with the forecasts generated by the proposed approach. The excellent
performance of FEWNet can be attributed to its capacity to effectively capture
non-linearities and long-range dependencies in the data through its adaptable
architecture.",Forecasting CPI inflation under economic policy and geo-political uncertainties,2023-12-30 17:34:22,"Shovon Sengupta, Tanujit Chakraborty, Sunny Kumar Singh","http://arxiv.org/abs/2401.00249v1, http://arxiv.org/pdf/2401.00249v1",econ.EM
30370,em,"Questionable research practices like HARKing or p-hacking have generated
considerable recent interest throughout and beyond the scientific community. We
subsume such practices involving secret data snooping that influences
subsequent statistical inference under the term MESSing (manipulating evidence
subject to snooping) and discuss, illustrate and quantify the possibly dramatic
effects of several forms of MESSing using an empirical and a simple theoretical
example. The empirical example uses numbers from the most popular German
lottery, which seem to suggest that 13 is an unlucky number.",Unlucky Number 13? Manipulating Evidence Subject to Snooping,2020-09-04 16:55:37,"Uwe Hassler, Marc-Oliver Pohle","http://dx.doi.org/10.1111/insr.12488, http://arxiv.org/abs/2009.02198v1, http://arxiv.org/pdf/2009.02198v1",stat.AP
29939,em,"This paper studies identification for a wide range of nonlinear panel data
models, including binary choice, ordered response, and other types of limited
dependent variable models. Our approach accommodates dynamic models with any
number of lagged dependent variables as well as other types of (potentially
contemporary) endogeneity. Our identification strategy relies on a partial
stationarity condition, which not only allows for an unknown distribution of
errors but also for temporal dependencies in errors. We derive partial
identification results under flexible model specifications and provide
additional support conditions for point identification. We demonstrate the
robust finite-sample performance of our approach using Monte Carlo simulations,
with static and dynamic ordered choice models as illustrative examples.",Identification of Dynamic Nonlinear Panel Models under Partial Stationarity,2023-12-30 18:33:10,"Wayne Yuan Gao, Rui Wang","http://arxiv.org/abs/2401.00264v1, http://arxiv.org/pdf/2401.00264v1",econ.EM
29940,em,"In this paper, we develop a generalized Difference-in-Differences model for
discrete, ordered outcomes, building upon elements from a continuous
Changes-in-Changes model. We focus on outcomes derived from self-reported
survey data eliciting socially undesirable, illegal, or stigmatized behaviors
like tax evasion, substance abuse, or domestic violence, where too many ""false
zeros"" or, more broadly, underreporting is likely. We provide
characterizations for distributional parallel trends, a concept central to our
approach, within a general threshold-crossing model framework. In cases where
outcomes are assumed to be reported correctly, we propose a framework for
identifying and estimating treatment effects across the entire distribution.
This framework is then extended to modeling underreported outcomes, allowing
the reporting decision to depend on treatment status. A simulation study
documents the finite sample performance of the estimators. Applying our
methodology, we investigate the impact of recreational marijuana legalization
for adults in several U.S. states on the short-term consumption behavior of
8th-grade high-school students. The results indicate small, but significant
increases in consumption probabilities at each level. These effects are further
amplified upon accounting for misreporting.","Generalized Difference-in-Differences for Ordered Choice Models: Too Many ""False Zeros""?",2024-01-01 03:12:56,"Daniel Gutknecht, Cenchen Liu","http://arxiv.org/abs/2401.00618v1, http://arxiv.org/pdf/2401.00618v1",econ.EM
29941,em,"Inspired by the activity signature introduced by Todorov and Tauchen (2010),
which was used to measure the activity of a semimartingale, this paper
introduces the roughness signature function. The paper illustrates how it can
be used to determine whether a discretely observed process is generated by a
continuous process that is rougher than a Brownian motion, a pure-jump process,
or a combination of the two. Further, if a continuous rough process is present,
the function gives an estimate of the roughness index. We illustrate this through an
extensive simulation study, in which we find that the roughness signature function
works as expected on rough processes. We further derive some asymptotic
properties of this new signature function. The function is applied empirically
to three different volatility measures for the S&P500 index. The three measures
are realized volatility, the VIX, and the option-extracted volatility estimator
of Todorov (2019). The realized volatility and option-extracted volatility show
signs of roughness, with the option-extracted volatility appearing smoother
than the realized volatility, while the VIX appears to be driven by a
continuous martingale with jumps.",Roughness Signature Functions,2024-01-05 17:07:01,Peter Christensen,"http://arxiv.org/abs/2401.02819v1, http://arxiv.org/pdf/2401.02819v1",econ.EM
29942,em,"This paper applies a regularization procedure called increasing rearrangement
to monotonize Edgeworth and Cornish-Fisher expansions and any other related
approximations of distribution and quantile functions of sample statistics.
Besides satisfying the logical monotonicity, required of distribution and
quantile functions, the procedure often delivers strikingly better
approximations to the distribution and quantile functions of the sample mean
than the original Edgeworth-Cornish-Fisher expansions.",Rearranging Edgeworth-Cornish-Fisher Expansions,2007-08-13 01:11:35,"Victor Chernozhukov, Ivan Fernandez-Val, Alfred Galichon","http://dx.doi.org/10.1007/s00199-008-0431-z, http://arxiv.org/abs/0708.1627v2, http://arxiv.org/pdf/0708.1627v2",stat.ME
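The mechanics are simple enough to show in a few lines: evaluate a (possibly non-monotone) Edgeworth-type approximation to a distribution function on a grid and sort the values, which yields the increasing rearrangement on that grid. The one-term Edgeworth formula and the parameter values below are illustrative choices, not the paper's examples.

```python
# Illustrative increasing rearrangement of a one-term Edgeworth CDF approximation.
import numpy as np
from scipy import stats

def edgeworth_cdf(x, n, skew):
    """One-term Edgeworth approximation to the CDF of a standardized sample mean."""
    return stats.norm.cdf(x) - stats.norm.pdf(x) * skew * (x**2 - 1) / (6 * np.sqrt(n))

grid = np.linspace(-4, 4, 401)
raw = edgeworth_cdf(grid, n=10, skew=3.0)        # non-monotone for small n / large skew
rearranged = np.clip(np.sort(raw), 0.0, 1.0)     # monotone, and kept inside [0, 1]
print(np.all(np.diff(raw) >= 0), np.all(np.diff(rearranged) >= 0))
```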
29943,em,"In this paper, we develop a new censored quantile instrumental variable
(CQIV) estimator and describe its properties and computation. The CQIV
estimator combines Powell's (1986) censored quantile regression (CQR) to deal
with censoring, with a control variable approach to incorporate endogenous
regressors. The CQIV estimator is obtained in two stages that are non-additive
in the unobservables. The first stage estimates a non-additive model with
infinite dimensional parameters for the control variable, such as a quantile or
distribution regression model. The second stage estimates a non-additive
censored quantile regression model for the response variable of interest,
including the estimated control variable to deal with endogeneity. For
computation, we extend the algorithm for CQR developed by Chernozhukov and Hong
(2002) to incorporate the estimation of the control variable. We give generic
regularity conditions for asymptotic normality of the CQIV estimator and for
the validity of resampling methods to approximate its asymptotic distribution.
We verify these conditions for quantile and distribution regression estimation
of the control variable. Our analysis covers two-stage (uncensored) quantile
regression with non-additive first stage as an important special case. We
illustrate the computation and applicability of the CQIV estimator with a
Monte-Carlo numerical example and an empirical application on estimation of
Engel curves for alcohol.",Quantile Regression with Censoring and Endogeneity,2011-04-23 20:43:04,"Victor Chernozhukov, Ivan Fernandez-Val, Amanda Kowalski","http://arxiv.org/abs/1104.4580v3, http://arxiv.org/pdf/1104.4580v3",stat.ME
29950,em,"The frequentist method of simulated minimum distance (SMD) is widely used in
economics to estimate complex models with an intractable likelihood. In other
disciplines, a Bayesian approach known as Approximate Bayesian Computation
(ABC) is far more popular. This paper connects these two seemingly related
approaches to likelihood-free estimation by means of a Reverse Sampler that
uses both optimization and importance weighting to target the posterior
distribution. Its hybrid features enable us to analyze an ABC estimate from the
perspective of SMD. We show that an ideal ABC estimate can be obtained as a
weighted average of a sequence of SMD modes, each being the minimizer of the
deviations between the data and the model. This contrasts with the SMD, which
is the mode of the average deviations. Using stochastic expansions, we provide
a general characterization of frequentist estimators and those based on
Bayesian computations including Laplace-type estimators. Their differences are
illustrated using analytical examples and a simulation study of the dynamic
panel model.",The ABC of Simulation Estimation with Auxiliary Statistics,2015-01-06 21:14:19,"Jean-Jacques Forneron, Serena Ng","http://arxiv.org/abs/1501.01265v4, http://arxiv.org/pdf/1501.01265v4",stat.ME
29944,em,"In applications, it is common that the exact form of a conditional expectation
is unknown, and having flexible functional forms can lead to improvements.
Series methods offer this flexibility by approximating the unknown function with $k$
basis functions, where $k$ is allowed to grow with the sample size $n$. We
consider series estimators for the conditional mean in light of: (i) sharp LLNs
for matrices derived from the noncommutative Khinchin inequalities, (ii) bounds
on the Lebesgue factor that controls the ratio between the $L^\infty$ and
$L_2$-norms of approximation errors, (iii) maximal inequalities for processes
whose entropy integrals diverge, and (iv) strong approximations to series-type
processes.
  These technical tools allow us to contribute to the series literature,
specifically the seminal work of Newey (1997), as follows. First, we weaken the
condition on the number $k$ of approximating functions used in series
estimation from the typical $k^2/n \to 0$ to $k/n \to 0$, up to log factors,
which was available only for spline series before. Second, we derive $L_2$
rates and pointwise central limit theorem results when the approximation error
vanishes. Under an incorrectly specified model, i.e. when the approximation
error does not vanish, analogous results are also shown. Third, under stronger
conditions we derive uniform rates and functional central limit theorems that
hold if the approximation error vanishes or not. That is, we derive the strong
approximation for the entire estimate of the nonparametric function.
  We derive uniform rates, Gaussian approximations, and uniform confidence
bands for a wide collection of linear functionals of the conditional
expectation function.",Some New Asymptotic Theory for Least Squares Series: Pointwise and Uniform Results,2012-12-03 18:43:08,"Alexandre Belloni, Victor Chernozhukov, Denis Chetverikov, Kengo Kato","http://arxiv.org/abs/1212.0442v4, http://arxiv.org/pdf/1212.0442v4",stat.ME
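A minimal sketch of the least-squares series estimator itself (here with a simple polynomial basis in one regressor) may help fix ideas; the basis, the choice of $k$, and the function name are illustrative and do not reflect the rate conditions studied in the paper.

```python
# Illustrative least-squares series estimator of a conditional mean.
import numpy as np

def series_fit(x, y, k):
    """Regress y on the first k polynomial basis terms 1, x, ..., x^{k-1}."""
    P = np.vander(x, N=k, increasing=True)
    b, *_ = np.linalg.lstsq(P, y, rcond=None)
    return lambda x0: np.vander(np.atleast_1d(x0), N=k, increasing=True) @ b

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 1000)
y = np.sin(3 * x) + 0.3 * rng.normal(size=1000)
ghat = series_fit(x, y, k=8)
print(ghat(np.array([-0.5, 0.0, 0.5])))          # approximates sin(3x) at these points
```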
29945,em,"In this article, we review quantile models with endogeneity. We focus on
models that achieve identification through the use of instrumental variables
and discuss conditions under which partial and point identification are
obtained. We discuss key conditions, which include monotonicity and
full-rank-type conditions, in detail. In providing this review, we update the
identification results of Chernozhukov and Hansen (2005, Econometrica). We
illustrate the modeling assumptions through economically motivated examples. We
also briefly review the literature on estimation and inference.
  Key Words: identification, treatment effects, structural models, instrumental
variables",Quantile Models with Endogeneity,2013-03-28 08:32:37,"Victor Chernozhukov, Christian Hansen","http://dx.doi.org/10.1146/annurev-economics-080511-110952, http://arxiv.org/abs/1303.7050v1, http://arxiv.org/pdf/1303.7050v1",stat.AP
29946,em,"We derive fixed effects estimators of parameters and average partial effects
in (possibly dynamic) nonlinear panel data models with individual and time
effects. They cover logit, probit, ordered probit, Poisson and Tobit models
that are important for many empirical applications in micro and macroeconomics.
Our estimators use analytical and jackknife bias corrections to deal with the
incidental parameter problem, and are asymptotically unbiased under asymptotic
sequences where $N/T$ converges to a constant. We develop inference methods and
show that they perform well in numerical examples.","Individual and Time Effects in Nonlinear Panel Models with Large N, T",2013-11-27 20:37:37,"Ivan Fernandez-Val, Martin Weidner","http://arxiv.org/abs/1311.7065v5, http://arxiv.org/pdf/1311.7065v5",stat.ME
29947,em,"This paper considers identification and estimation of ceteris paribus effects
of continuous regressors in nonseparable panel models with time homogeneity.
The effects of interest are derivatives of the average and quantile structural
functions of the model. We find that these derivatives are identified with two
time periods for ""stayers"", i.e. for individuals with the same regressor values
in two time periods. We show that the identification results carry over to
models that allow location and scale time effects. We propose nonparametric
series methods and a weighted bootstrap scheme to estimate and make inference
on the identified effects. The bootstrap proposed allows uniform inference for
function-valued parameters such as quantile effects uniformly over a region of
quantile indices and/or regressor values. An empirical application to Engel
curve estimation with panel data illustrates the results.",Nonparametric Identification in Panels using Quantiles,2013-12-15 02:31:29,"Victor Chernozhukov, Ivan Fernandez-Val, Stefan Hoderlein, Hajo Holzmann, Whitney Newey","http://arxiv.org/abs/1312.4094v3, http://arxiv.org/pdf/1312.4094v3",stat.ME
29948,em,"We consider estimation and inference in panel data models with additive
unobserved individual specific heterogeneity in a high dimensional setting. The
setting allows the number of time varying regressors to be larger than the
sample size. To make informative estimation and inference feasible, we require
that the overall contribution of the time varying variables after eliminating
the individual specific heterogeneity can be captured by a relatively small
number of the available variables whose identities are unknown. This
restriction allows the problem of estimation to proceed as a variable selection
problem. Importantly, we treat the individual specific heterogeneity as fixed
effects which allows this heterogeneity to be related to the observed time
varying variables in an unspecified way and allows that this heterogeneity may
be non-zero for all individuals. Within this framework, we provide procedures
that give uniformly valid inference over a fixed subset of parameters in the
canonical linear fixed effects model and over coefficients on a fixed vector of
endogenous variables in panel data instrumental variables models with fixed
effects and many instruments. An input to developing the properties of our
proposed procedures is the use of a variant of the Lasso estimator that allows
for a grouped data structure where data across groups are independent and
dependence within groups is unrestricted. We provide formal conditions within
this structure under which the proposed Lasso variant selects a sparse model
with good approximation properties. We present simulation results in support of
the theoretical developments and illustrate the use of the methods in an
application aimed at estimating the effect of gun prevalence on crime rates.",Inference in High Dimensional Panel Models with an Application to Gun Control,2014-11-24 18:10:40,"Alexandre Belloni, Victor Chernozhukov, Christian Hansen, Damian Kozbur","http://arxiv.org/abs/1411.6507v1, http://arxiv.org/pdf/1411.6507v1",stat.ME
29949,em,"Factor structures or interactive effects are convenient devices to
incorporate latent variables in panel data models. We consider fixed effect
estimation of nonlinear panel single-index models with factor structures in the
unobservables, which include logit, probit, ordered probit and Poisson
specifications. We establish that fixed effect estimators of model parameters
and average partial effects have normal distributions when the two dimensions
of the panel grow large, but might suffer from incidental parameter bias. We show
how models with factor structures can also be applied to capture important
features of network data such as reciprocity, degree heterogeneity, homophily
in latent variables and clustering. We illustrate this applicability with an
empirical example to the estimation of a gravity equation of international
trade between countries using a Poisson model with multiple factors.",Nonlinear Factor Models for Network and Panel Data,2014-12-18 00:10:01,"Mingli Chen, Iván Fernández-Val, Martin Weidner","http://arxiv.org/abs/1412.5647v4, http://arxiv.org/pdf/1412.5647v4",stat.ME
29951,em,"In this note, we offer an approach to estimating causal/structural parameters
in the presence of many instruments and controls based on methods for
estimating sparse high-dimensional models. We use these high-dimensional
methods to select both which instruments and which control variables to use.
The approach we take extends BCCH2012, which covers selection of instruments
for IV models with a small number of controls, and extends BCH2014, which
covers selection of controls in models where the variable of interest is
exogenous conditional on observables, to accommodate both a large number of
controls and a large number of instruments. We illustrate the approach with a
simulation and an empirical example. Technical supporting material is available
in a supplementary online appendix.",Post-Selection and Post-Regularization Inference in Linear Models with Many Controls and Instruments,2015-01-13 23:48:46,"Victor Chernozhukov, Christian Hansen, Martin Spindler","http://arxiv.org/abs/1501.03185v1, http://arxiv.org/pdf/1501.03185v1",stat.AP
29952,em,"We study Markov decision problems where the agent does not know the
transition probability function mapping current states and actions to future
states. The agent has a prior belief over a set of possible transition
functions and updates beliefs using Bayes' rule. We allow her to be
misspecified in the sense that the true transition probability function is not
in the support of her prior. This problem is relevant in many economic settings
but is usually not amenable to analysis by the researcher. We make the problem
tractable by studying asymptotic behavior. We propose an equilibrium notion and
provide conditions under which it characterizes steady state behavior. In the
special case where the problem is static, equilibrium coincides with the
single-agent version of Berk-Nash equilibrium (Esponda and Pouzo (2016)). We
also discuss subtle issues that arise exclusively in dynamic settings due to
the possibility of a negative value of experimentation.",Equilibrium in Misspecified Markov Decision Processes,2015-02-24 20:30:05,"Ignacio Esponda, Demian Pouzo","http://arxiv.org/abs/1502.06901v2, http://arxiv.org/pdf/1502.06901v2",q-fin.EC
29953,em,"In this paper we study the problems of estimating heterogeneity in causal
effects in experimental or observational studies and conducting inference about
the magnitude of the differences in treatment effects across subsets of the
population. In applications, our method provides a data-driven approach to
determine which subpopulations have large or small treatment effects and to
test hypotheses about the differences in these effects. For experiments, our
method allows researchers to identify heterogeneity in treatment effects that
was not specified in a pre-analysis plan, without concern about invalidating
inference due to multiple testing. In most of the literature on supervised
machine learning (e.g. regression trees, random forests, LASSO, etc.), the goal
is to build a model of the relationship between a unit's attributes and an
observed outcome. A prominent role in these methods is played by
cross-validation which compares predictions to actual outcomes in test samples,
in order to select the level of complexity of the model that provides the best
predictive power. Our method is closely related, but it differs in that it is
tailored for predicting causal effects of a treatment rather than a unit's
outcome. The challenge is that the ""ground truth"" for a causal effect is not
observed for any individual unit: we observe the unit with the treatment, or
without the treatment, but not both at the same time. Thus, it is not obvious
how to use cross-validation to determine whether a causal effect has been
accurately predicted. We propose several novel cross-validation criteria for
this problem and demonstrate through simulations the conditions under which
they perform better than standard methods for the problem of causal effects. We
then apply the method to a large-scale field experiment re-ranking results on a
search engine.",Recursive Partitioning for Heterogeneous Causal Effects,2015-04-05 19:01:44,"Susan Athey, Guido Imbens","http://dx.doi.org/10.1073/pnas.1510489113, http://arxiv.org/abs/1504.01132v3, http://arxiv.org/pdf/1504.01132v3",stat.ML
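One simple criterion in this spirit can be sketched as follows: with a known assignment probability p, the transformed outcome Y* = WY/p - (1-W)Y/(1-p) has conditional mean equal to the treatment effect, so held-out squared error against Y* can be used to compare candidate models of effect heterogeneity. This is an illustration of the general idea rather than the specific criteria proposed in the paper; the data and model comparison below are synthetic.

```python
# Illustrative transformed-outcome cross-validation for heterogeneous effects.
import numpy as np

def transformed_outcome(y, w, p):
    """Pseudo-outcome with conditional mean equal to the treatment effect."""
    return w * y / p - (1 - w) * y / (1 - p)

def cv_score(tau_hat_test, y_test, w_test, p):
    """Held-out squared error of predicted effects against the pseudo-outcome."""
    return np.mean((transformed_outcome(y_test, w_test, p) - tau_hat_test) ** 2)

rng = np.random.default_rng(6)
n = 4000
x = rng.normal(size=n)
w = rng.binomial(1, 0.5, size=n)
y = x + w * (1 + x) + rng.normal(size=n)          # true effect is 1 + x
train, test = np.arange(n // 2), np.arange(n // 2, n)
ystar_tr = transformed_outcome(y[train], w[train], 0.5)
tau_const = np.full(test.size, ystar_tr.mean())                   # constant-effect model
slope, intercept = np.polyfit(x[train], ystar_tr, 1)
tau_lin = intercept + slope * x[test]                             # heterogeneous model
print(cv_score(tau_const, y[test], w[test], 0.5),
      cv_score(tau_lin, y[test], w[test], 0.5))
```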
29954,em,"This paper makes several important contributions to the literature about
nonparametric instrumental variables (NPIV) estimation and inference on a
structural function $h_0$ and its functionals. First, we derive sup-norm
convergence rates for computationally simple sieve NPIV (series 2SLS)
estimators of $h_0$ and its derivatives. Second, we derive a lower bound that
describes the best possible (minimax) sup-norm rates of estimating $h_0$ and
its derivatives, and show that the sieve NPIV estimator can attain the minimax
rates when $h_0$ is approximated via a spline or wavelet sieve. Our optimal
sup-norm rates surprisingly coincide with the optimal root-mean-squared rates
for severely ill-posed problems, and are only a logarithmic factor slower than
the optimal root-mean-squared rates for mildly ill-posed problems. Third, we
use our sup-norm rates to establish the uniform Gaussian process strong
approximations and the score bootstrap uniform confidence bands (UCBs) for
collections of nonlinear functionals of $h_0$ under primitive conditions,
allowing for mildly and severely ill-posed problems. Fourth, as applications,
we obtain the first asymptotic pointwise and uniform inference results for
plug-in sieve t-statistics of exact consumer surplus (CS) and deadweight loss
(DL) welfare functionals under low-level conditions when demand is estimated
via sieve NPIV. Our real-data application of UCBs to exact CS and DL
functionals of gasoline demand reveals interesting
patterns and is applicable to other markets.",Optimal Sup-norm Rates and Uniform Inference on Nonlinear Functionals of Nonparametric IV Regression,2015-08-14 00:39:11,"Xiaohong Chen, Timothy M. Christensen","http://dx.doi.org/10.3982/QE722, http://arxiv.org/abs/1508.03365v3, http://arxiv.org/pdf/1508.03365v3",stat.ME
29955,em,"Oversubscribed treatments are often allocated using randomized waiting lists.
Applicants are ranked randomly, and treatment offers are made following that
ranking until all seats are filled. To estimate causal effects, researchers
often compare applicants getting and not getting an offer. We show that those
two groups are not statistically comparable. Therefore, the estimator arising
from that comparison is inconsistent. We propose a new estimator, and show that
it is consistent. Finally, we revisit an application, and we show that using
our estimator can lead to sizably different results from those obtained using
the commonly used estimator.",Estimating the effect of treatments allocated by randomized waiting lists,2015-11-03 20:46:49,"Clement de Chaisemartin, Luc Behaghel","http://arxiv.org/abs/1511.01453v6, http://arxiv.org/pdf/1511.01453v6",stat.ME
29967,em,"The R package quantreg.nonpar implements nonparametric quantile regression
methods to estimate and make inference on partially linear quantile models.
quantreg.nonpar obtains point estimates of the conditional quantile function
and its derivatives based on series approximations to the nonparametric part of
the model. It also provides pointwise and uniform confidence intervals over a
region of covariate values and/or quantile indices for the same functions using
analytical and resampling methods. This paper serves as an introduction to the
package and displays basic functionality of the functions contained within.",quantreg.nonpar: An R Package for Performing Nonparametric Series Quantile Regression,2016-10-26 16:48:39,"Michael Lipsitz, Alexandre Belloni, Victor Chernozhukov, Iván Fernández-Val","http://arxiv.org/abs/1610.08329v1, http://arxiv.org/pdf/1610.08329v1",stat.CO
29956,em,"The partial (ceteris paribus) effects of interest in nonlinear and
interactive linear models are heterogeneous as they can vary dramatically with
the underlying observed or unobserved covariates. Despite the apparent
importance of heterogeneity, a common practice in modern empirical work is to
largely ignore it by reporting average partial effects (or, at best, average
effects for some groups). While average effects provide very convenient scalar
summaries of typical effects, by definition they fail to reflect the entire
variety of the heterogeneous effects. In order to discover these effects much
more fully, we propose to estimate and report sorted effects -- a collection of
estimated partial effects sorted in increasing order and indexed by
percentiles. By construction the sorted effect curves completely represent and
help visualize the range of the heterogeneous effects in one plot. They are as
convenient and easy to report in practice as the conventional average partial
effects. They also serve as a basis for classification analysis, where we
divide the observational units into most or least affected groups and summarize
their characteristics. We provide a quantification of uncertainty (standard
errors and confidence bands) for the estimated sorted effects and related
classification analysis, and provide confidence sets for the most and least
affected groups. The derived statistical results rely on establishing key, new
mathematical results on Hadamard differentiability of a multivariate sorting
operator and a related classification operator, which are of independent
interest. We apply the sorted effects method and classification analysis to
demonstrate several striking patterns in the gender wage gap.",The Sorted Effects Method: Discovering Heterogeneous Effects Beyond Their Averages,2015-12-17 17:41:01,"Victor Chernozhukov, Ivan Fernandez-Val, Ye Luo","http://arxiv.org/abs/1512.05635v4, http://arxiv.org/pdf/1512.05635v4",stat.ME
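A minimal sketch of the point-estimation step is shown below for a logit with an interaction: compute each unit's partial effect of a covariate and sort the effects, indexing them by percentile. The bootstrap bands and classification analysis developed in the paper are not reproduced, and the data-generating process is synthetic.

```python
# Illustrative sorted partial effects in a logit model with an interaction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
p_true = 1 / (1 + np.exp(-(0.5 * x1 - x2 + 0.8 * x1 * x2)))
y = rng.binomial(1, p_true)

b = sm.Logit(y, X).fit(disp=0).params
index = X @ b
dens = np.exp(-index) / (1 + np.exp(-index)) ** 2    # logistic density at the index
pe_x1 = dens * (b[1] + b[3] * x2)                    # unit-level partial effect of x1
sorted_pe = np.sort(pe_x1)                           # the sorted-effects curve
percentiles = np.arange(1, n + 1) / n
print(np.quantile(pe_x1, [0.1, 0.5, 0.9]))           # spread of heterogeneous effects
```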
29957,em,"In this paper, we propose a doubly robust method to present the heterogeneity
of the average treatment effect with respect to observed covariates of
interest. We consider a situation where a large number of covariates are needed
for identifying the average treatment effect but the covariates of interest for
analyzing heterogeneity are of much lower dimension. Our proposed estimator is
doubly robust and avoids the curse of dimensionality. We propose a uniform
confidence band that is easy to compute, and we illustrate its usefulness via
Monte Carlo experiments and an application to the effects of smoking on birth
weights.",Doubly Robust Uniform Confidence Band for the Conditional Average Treatment Effect Function,2016-01-12 12:54:38,"Sokbae Lee, Ryo Okui, Yoon-Jae Whang","http://arxiv.org/abs/1601.02801v2, http://arxiv.org/pdf/1601.02801v2",stat.ME
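The doubly robust construction can be sketched as follows: form the AIPW pseudo-outcome from estimated propensity scores and outcome regressions, then regress it on the low-dimensional covariate of interest to trace out heterogeneity. The simple logistic/linear nuisance fits and the final linear projection below are illustrative stand-ins; the paper's uniform confidence band is not reproduced.

```python
# Illustrative doubly robust (AIPW) pseudo-outcome for treatment-effect heterogeneity.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_pseudo_outcome(y, d, X):
    """AIPW pseudo-outcome with conditional mean equal to the CATE."""
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    mu1 = LinearRegression().fit(X[d == 1], y[d == 1]).predict(X)
    mu0 = LinearRegression().fit(X[d == 0], y[d == 0]).predict(X)
    return mu1 - mu0 + d * (y - mu1) / ps - (1 - d) * (y - mu0) / (1 - ps)

rng = np.random.default_rng(3)
n = 3000
X = rng.normal(size=(n, 5))
d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = X @ np.ones(5) + d * (1 + X[:, 1]) + rng.normal(size=n)   # true CATE is 1 + X[:, 1]
psi = dr_pseudo_outcome(y, d, X)
print(np.polyfit(X[:, 1], psi, deg=1))   # roughly [1, 1]: slope and level of the CATE
```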
29958,em,"In a unified framework, we provide estimators and confidence bands for a
variety of treatment effects when the outcome of interest, typically a
duration, is subject to right censoring. Our methodology accommodates
average, distributional, and quantile treatment effects under different
identifying assumptions including unconfoundedness, local treatment effects,
and nonlinear differences-in-differences. The proposed estimators are easy to
implement, have closed-form representations, are fully data-driven upon
estimation of nuisance parameters, and do not rely on parametric distributional
assumptions, shape restrictions, or on restricting the potential treatment
effect heterogeneity across different subpopulations. These treatment effects
results are obtained as a consequence of more general results on two-step
Kaplan-Meier estimators that are of independent interest: we provide conditions
for applying (i) uniform law of large numbers, (ii) functional central limit
theorems, and (iii) we prove the validity of the ordinary nonparametric
bootstrap in a two-step estimation procedure where the outcome of interest may
be randomly censored.",Program Evaluation with Right-Censored Data,2016-04-10 09:14:33,Pedro H. C. Sant'Anna,"http://arxiv.org/abs/1604.02642v1, http://arxiv.org/pdf/1604.02642v1",stat.ME
29959,em,"In this review, we present econometric and statistical methods for analyzing
randomized experiments. For basic experiments we stress randomization-based
inference as opposed to sampling-based inference. In randomization-based
inference, uncertainty in estimates arises naturally from the random assignment
of the treatments, rather than from hypothesized sampling from a large
population. We show how this perspective relates to regression analyses for
randomized experiments. We discuss the analyses of stratified, paired, and
clustered randomized experiments, and we stress the general efficiency gains
from stratification. We also discuss complications in randomized experiments
such as non-compliance. In the presence of non-compliance we contrast
intention-to-treat analyses with instrumental variables analyses allowing for
general treatment effect heterogeneity. We consider in detail estimation and
inference for heterogeneous treatment effects in settings with (possibly many)
covariates. These methods allow researchers to explore heterogeneity by
identifying subpopulations with different treatment effects while maintaining
the ability to construct valid confidence intervals. We also discuss optimal
assignment to treatment based on covariates in such settings. Finally, we
discuss estimation and inference in experiments in settings with interactions
between units, both in general network settings and in settings where the
population is partitioned into groups with all interactions contained within
these groups.",The Econometrics of Randomized Experiments,2016-07-04 01:57:14,"Susan Athey, Guido Imbens","http://arxiv.org/abs/1607.00698v1, http://arxiv.org/pdf/1607.00698v1",stat.ME
29960,em,"In this paper we discuss recent developments in econometrics that we view as
important for empirical researchers working on policy evaluation questions. We
focus on three main areas, where in each case we highlight recommendations for
applied work. First, we discuss new research on identification strategies in
program evaluation, with particular focus on synthetic control methods,
regression discontinuity, external validity, and the causal interpretation of
regression methods. Second, we discuss various forms of supplementary analyses
to make the identification strategies more credible. These include placebo
analyses as well as sensitivity and robustness analyses. Third, we discuss
recent advances in machine learning methods for causal effects. These advances
include methods to adjust for differences between treated and control units in
high-dimensional settings, and methods for identifying and estimating
heterogeneous treatment effects.",The State of Applied Econometrics - Causality and Policy Evaluation,2016-07-04 02:08:26,"Susan Athey, Guido Imbens","http://arxiv.org/abs/1607.00699v1, http://arxiv.org/pdf/1607.00699v1",stat.ME
29974,em,"In treatment allocation problems the individuals to be treated often arrive
sequentially. We study a problem in which the policy maker is not only
interested in the expected cumulative welfare but is also concerned about the
uncertainty/risk of the treatment outcomes. At the outset, the total number of
treatment assignments to be made may even be unknown. A sequential treatment
policy which attains the minimax optimal regret is proposed. We also
demonstrate that the expected number of suboptimal treatments only grows slowly
in the number of treatments. Finally, we study a setting where outcomes are
only observed with delay.",Optimal sequential treatment allocation,2017-05-28 18:41:12,"Anders Bredahl Kock, Martin Thyrsgaard","http://arxiv.org/abs/1705.09952v4, http://arxiv.org/pdf/1705.09952v4",stat.ML
29961,em,"Most modern supervised statistical/machine learning (ML) methods are
explicitly designed to solve prediction problems very well. Achieving this goal
does not imply that these methods automatically deliver good estimators of
causal parameters. Examples of such parameters include individual regression
coefficients, average treatment effects, average lifts, and demand or supply
elasticities. In fact, estimates of such causal parameters obtained via naively
plugging ML estimators into estimating equations for such parameters can behave
very poorly due to the regularization bias. Fortunately, this regularization
bias can be removed by solving auxiliary prediction problems via ML tools.
Specifically, we can form an orthogonal score for the target low-dimensional
parameter by combining auxiliary and main ML predictions. The score is then
used to build a de-biased estimator of the target parameter which typically
will converge at the fastest possible $1/\sqrt{n}$ rate and be approximately
unbiased and normal, and from which valid confidence intervals for these
parameters of interest may be constructed. The resulting method thus could be
called a ""double ML"" method because it relies on estimating primary and
auxiliary predictive models. In order to avoid overfitting, our construction
also makes use of the K-fold sample splitting, which we call cross-fitting.
This allows us to use a very broad set of ML predictive methods in solving the
auxiliary and main prediction problems, such as random forest, lasso, ridge,
deep neural nets, boosted trees, as well as various hybrids and aggregators of
these methods.",Double/Debiased Machine Learning for Treatment and Causal Parameters,2016-07-30 04:58:04,"Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, James Robins","http://arxiv.org/abs/1608.00060v6, http://arxiv.org/pdf/1608.00060v6",stat.ML
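A minimal sketch of the cross-fitted, orthogonalized estimator in a partially linear model is shown below: nuisance predictions of the outcome and the treatment are formed on held-out folds, and the target coefficient comes from the residual-on-residual moment. The random-forest learners, fold count, and standard-error formula follow the standard partialling-out recipe and are illustrative choices rather than the paper's preferred specification.

```python
# Illustrative cross-fitted double/debiased ML for a partially linear model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def double_ml_plr(y, d, X, n_folds=5, seed=0):
    y_res, d_res = np.zeros(len(y)), np.zeros(len(y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        m_y = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        m_d = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
        y_res[test] = y[test] - m_y.predict(X[test])    # out-of-fold residuals
        d_res[test] = d[test] - m_d.predict(X[test])
    theta = (d_res @ y_res) / (d_res @ d_res)           # residual-on-residual slope
    psi = (y_res - theta * d_res) * d_res
    se = np.sqrt(np.mean(psi ** 2) / len(y)) / np.mean(d_res ** 2)
    return theta, se

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 20))
d = X[:, 0] + rng.normal(size=n)
y = 0.5 * d + X[:, 0] ** 2 + rng.normal(size=n)          # true effect is 0.5
print(double_ml_plr(y, d, X))
```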
29962,em,"This paper considers inference on fixed effects in a linear regression model
estimated from network data. An important special case of our setup is the
two-way regression model. This is a workhorse technique in the analysis of
matched data sets, such as employer-employee or student-teacher panel data. We
formalize how the structure of the network affects the accuracy with which the
fixed effects can be estimated. This allows us to derive sufficient conditions
on the network for consistent estimation and asymptotically-valid inference to
be possible. Estimation of moments is also considered. We allow for general
networks and our setup covers both the dense and sparse case. We provide
numerical results for the estimation of teacher value-added models and
regressions with occupational dummies.",Fixed-Effect Regressions on Network Data,2016-08-04 16:44:45,"Koen Jochmans, Martin Weidner","http://arxiv.org/abs/1608.01532v4, http://arxiv.org/pdf/1608.01532v4",stat.ME
29963,em,"Quantile and quantile effect functions are important tools for descriptive
and causal analyses due to their natural and intuitive interpretation. Existing
inference methods for these functions do not apply to discrete random
variables. This paper offers a simple, practical construction of simultaneous
confidence bands for quantile and quantile effect functions of possibly
discrete random variables. It is based on a natural transformation of
simultaneous confidence bands for distribution functions, which are readily
available for many problems. The construction is generic and does not depend on
the nature of the underlying problem. It works in conjunction with parametric,
semiparametric, and nonparametric modeling methods for observed and
counterfactual distributions, and does not depend on the sampling scheme. We
apply our method to characterize the distributional impact of insurance
coverage on health care utilization and obtain the distributional decomposition
of the racial test score gap. We find that universal insurance coverage
increases the number of doctor visits across the entire distribution, and that
the racial test score gap is small at early ages but grows with age due to
socioeconomic factors affecting child development, especially at the top of the
distribution. These are new, interesting empirical findings that complement
previous analyses that focused on mean effects only. In both applications, the
outcomes of interest are discrete rendering existing inference methods invalid
for obtaining uniform confidence bands for observed and counterfactual quantile
functions and for their difference -- the quantile effects functions.",Generic Inference on Quantile and Quantile Effect Functions for Discrete Outcomes,2016-08-18 03:46:37,"Victor Chernozhukov, Iván Fernández-Val, Blaise Melly, Kaspar Wüthrich","http://arxiv.org/abs/1608.05142v5, http://arxiv.org/pdf/1608.05142v5",stat.ME
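The transformation itself is easy to sketch: given lower and upper simultaneous bands for the distribution function on a grid of outcome values, invert them, with the upper CDF band producing the lower quantile band and vice versa. The fixed-width CDF bands used below are a crude illustration only, not the bands delivered by the underlying inference methods the paper builds on.

```python
# Illustrative inversion of CDF confidence bands into quantile confidence bands.
import numpy as np
from scipy import stats

def quantile_band(y_grid, F_low, F_up, taus):
    """Invert nondecreasing CDF bands on y_grid into bands for the quantile function."""
    def invert(F, t):
        idx = np.searchsorted(F, t, side="left")         # left-inverse: inf{y: F(y) >= t}
        return y_grid[min(idx, len(y_grid) - 1)]
    q_low = np.array([invert(F_up, t) for t in taus])    # larger CDF -> smaller quantile
    q_up = np.array([invert(F_low, t) for t in taus])
    return q_low, q_up

y_grid = np.arange(0, 30)                                # a discrete outcome
F = stats.poisson.cdf(y_grid, mu=8)
F_low, F_up = np.clip(F - 0.05, 0, 1), np.clip(F + 0.05, 0, 1)   # crude illustrative bands
print(quantile_band(y_grid, F_low, F_up, taus=[0.25, 0.5, 0.75]))
```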
29964,em,"We consider a variable selection problem for the prediction of binary
outcomes. We study the best subset selection procedure by which the covariates
are chosen by maximizing Manski (1975, 1985)'s maximum score objective function
subject to a constraint on the maximal number of selected variables. We show
that this procedure can be equivalently reformulated as solving a mixed integer
optimization problem, which enables computation of the exact or an approximate
solution with a definite approximation error bound. In terms of theoretical
results, we obtain non-asymptotic upper and lower risk bounds when the
dimension of potential covariates is possibly much larger than the sample size.
Our upper and lower risk bounds are minimax rate-optimal when the maximal
number of selected variables is fixed and does not increase with the sample
size. We illustrate the usefulness of the best subset binary prediction approach
via Monte Carlo simulations and an empirical application of the work-trip
transportation mode choice.",Best Subset Binary Prediction,2016-10-10 02:01:46,"Le-Yu Chen, Sokbae Lee","http://arxiv.org/abs/1610.02738v7, http://arxiv.org/pdf/1610.02738v7",stat.ME
29965,em,"We present the Stata commands probitfe and logitfe, which estimate probit and
logit panel data models with individual and/or time unobserved effects. Fixed
effect panel data methods that estimate the unobserved effects can be severely
biased because of the incidental parameter problem (Neyman and Scott, 1948). We
tackle this problem by using the analytical and jackknife bias corrections
derived in Fernandez-Val and Weidner (2016) for panels where the two dimensions
($N$ and $T$) are moderately large. We illustrate the commands with an
empirical application to international trade and a Monte Carlo simulation
calibrated to this application.",probitfe and logitfe: Bias corrections for probit and logit models with two-way fixed effects,2016-10-25 06:04:35,"Mario Cruz-Gonzalez, Ivan Fernandez-Val, Martin Weidner","http://arxiv.org/abs/1610.07714v2, http://arxiv.org/pdf/1610.07714v2",stat.ME
29966,em,"The Counterfactual package implements the estimation and inference methods of
Chernozhukov, Fern\'andez-Val and Melly (2013) for counterfactual analysis. The
counterfactual distributions considered are the result of changing either the
marginal distribution of covariates related to the outcome variable of
interest, or the conditional distribution of the outcome given the covariates.
They can be applied to estimate quantile treatment effects and wage
decompositions. This paper serves as an introduction to the package and
displays basic functionality of the commands contained within.",Counterfactual: An R Package for Counterfactual Analysis,2016-10-25 17:32:40,"Mingli Chen, Victor Chernozhukov, Iván Fernández-Val, Blaise Melly","http://arxiv.org/abs/1610.07894v1, http://arxiv.org/pdf/1610.07894v1",stat.CO
29968,em,"This paper proposes new nonparametric diagnostic tools to assess the
asymptotic validity of different treatment effects estimators that rely on the
correct specification of the propensity score. We derive a particular
restriction relating the propensity score distribution of treated and control
groups, and develop specification tests based upon it. The resulting tests do
not suffer from the ""curse of dimensionality"" when the vector of covariates is
high-dimensional, are fully data-driven, do not require tuning parameters such
as bandwidths, and are able to detect a broad class of local alternatives
converging to the null at the parametric rate $n^{-1/2}$, with $n$ the sample
size. We show that the use of an orthogonal projection on the tangent space of
nuisance parameters facilitates the simulation of critical values by means of a
multiplier bootstrap procedure, and can lead to power gains. The finite sample
performance of the tests is examined by means of a Monte Carlo experiment and
an empirical application. Open-source software is available for implementing
the proposed tests.",Specification Tests for the Propensity Score,2016-11-18 23:21:16,"Pedro H. C. Sant'Anna, Xiaojun Song","http://arxiv.org/abs/1611.06217v2, http://arxiv.org/pdf/1611.06217v2",stat.ME
29969,em,"We consider inference about coefficients on a small number of variables of
interest in a linear panel data model with additive unobserved individual and
time specific effects and a large number of additional time-varying confounding
variables. We allow the number of these additional confounding variables to be
larger than the sample size, and suppose that, in addition to unrestricted time
and individual specific effects, these confounding variables are generated by a
small number of common factors and high-dimensional weakly-dependent
disturbances. We allow that both the factors and the disturbances are related
to the outcome variable and other variables of interest. To make informative
inference feasible, we impose that the contribution of the part of the
confounding variables not captured by time specific effects, individual
specific effects, or the common factors can be captured by a relatively small
number of terms whose identities are unknown. Within this framework, we provide
a convenient computational algorithm based on factor extraction followed by
lasso regression for inference about parameters of interest and show that the
resulting procedure has good asymptotic properties. We also provide a simple
k-step bootstrap procedure that may be used to construct inferential statements
about parameters of interest and prove its asymptotic validity. The proposed
bootstrap may be of substantive independent interest outside of the present
context as the proposed bootstrap may readily be adapted to other contexts
involving inference after lasso variable selection and the proof of its
validity requires some new technical arguments. We also provide simulation
evidence about performance of our procedure and illustrate its use in two
empirical applications.",The Factor-Lasso and K-Step Bootstrap Approach for Inference in High-Dimensional Economic Applications,2016-11-29 02:04:36,"Christian Hansen, Yuan Liao","http://arxiv.org/abs/1611.09420v2, http://arxiv.org/pdf/1611.09420v2",stat.ME
29970,em,"This article proposes different tests for treatment effect heterogeneity when
the outcome of interest, typically a duration variable, may be right-censored.
The proposed tests study whether a policy 1) has zero distributional (average)
effect for all subpopulations defined by covariate values, and 2) has
homogeneous average effect across different subpopulations. The proposed tests
are based on two-step Kaplan-Meier integrals and do not rely on parametric
distributional assumptions, shape restrictions, or on restricting the potential
treatment effect heterogeneity across different subpopulations. Our framework
is suitable not only to exogenous treatment allocation but can also account for
treatment noncompliance - an important feature in many applications. The
proposed tests are consistent against fixed alternatives, and can detect
nonparametric alternatives converging to the null at the parametric
$n^{-1/2}$-rate, $n$ being the sample size. Critical values are computed with
the assistance of a multiplier bootstrap. The finite sample properties of the
proposed tests are examined by means of a Monte Carlo study and an application
about the effect of labor market programs on unemployment duration. Open-source
software is available for implementing all proposed tests.",Nonparametric Tests for Treatment Effect Heterogeneity with Duration Outcomes,2016-12-07 04:35:01,Pedro H. C. Sant'Anna,"http://arxiv.org/abs/1612.02090v4, http://arxiv.org/pdf/1612.02090v4",stat.ME
29971,em,"Extremal quantile regression, i.e. quantile regression applied to the tails
of the conditional distribution, counts with an increasing number of economic
and financial applications such as value-at-risk, production frontiers,
determinants of low infant birth weights, and auction models. This chapter
provides an overview of recent developments in the theory and empirics of
extremal quantile regression. The advances in the theory have relied on the use
of extreme value approximations to the law of the Koenker and Bassett (1978)
quantile regression estimator. Extreme value laws not only have been shown to
provide more accurate approximations than Gaussian laws at the tails, but also
have served as the basis to develop bias corrected estimators and inference
methods using simulation and suitable variations of bootstrap and subsampling.
The applicability of these methods is illustrated with two empirical examples
on conditional value-at-risk and financial contagion.",Extremal Quantile Regression: An Overview,2016-12-20 23:55:07,"Victor Chernozhukov, Iván Fernández-Val, Tetsuya Kaji","http://dx.doi.org/10.1201/9781315120256, http://arxiv.org/abs/1612.06850v2, http://arxiv.org/pdf/1612.06850v2",stat.ME
29972,em,"There is a large literature on semiparametric estimation of average treatment
effects under unconfounded treatment assignment in settings with a fixed number
of covariates. More recently attention has focused on settings with a large
number of covariates. In this paper we extend lessons from the earlier
literature to this new setting. We propose that in addition to reporting point
estimates and standard errors, researchers report results from a number of
supplementary analyses to assist in assessing the credibility of their
estimates.",Estimating Average Treatment Effects: Supplementary Analyses and Remaining Challenges,2017-02-04 10:43:07,"Susan Athey, Guido Imbens, Thai Pham, Stefan Wager","http://arxiv.org/abs/1702.01250v1, http://arxiv.org/pdf/1702.01250v1",stat.ME
29973,em,"Given a set of baseline assumptions, a breakdown frontier is the boundary
between the set of assumptions which lead to a specific conclusion and those
which do not. In a potential outcomes model with a binary treatment, we
consider two conclusions: First, that ATE is at least a specific value (e.g.,
nonnegative) and second that the proportion of units who benefit from treatment
is at least a specific value (e.g., at least 50\%). For these conclusions, we
derive the breakdown frontier for two kinds of assumptions: one which indexes
relaxations of the baseline random assignment of treatment assumption, and one
which indexes relaxations of the baseline rank invariance assumption. These
classes of assumptions nest both the point identifying assumptions of random
assignment and rank invariance and the opposite end of no constraints on
treatment selection or the dependence structure between potential outcomes.
This frontier provides a quantitative measure of robustness of conclusions to
relaxations of the baseline point identifying assumptions. We derive
$\sqrt{N}$-consistent sample analog estimators for these frontiers. We then
provide two asymptotically valid bootstrap procedures for constructing lower
uniform confidence bands for the breakdown frontier. As a measure of
robustness, estimated breakdown frontiers and their corresponding confidence
bands can be presented alongside traditional point estimates and confidence
intervals obtained under point identifying assumptions. We illustrate this
approach in an empirical application to the effect of child soldiering on
wages. We find that sufficiently weak conclusions are robust to simultaneous
failures of rank invariance and random assignment, while some stronger
conclusions are fairly robust to failures of rank invariance but not
necessarily to relaxations of random assignment.",Inference on Breakdown Frontiers,2017-05-13 01:49:27,"Matthew A. Masten, Alexandre Poirier","http://arxiv.org/abs/1705.04765v3, http://arxiv.org/pdf/1705.04765v3",stat.ME
29975,em,"Structural econometric methods are often criticized for being sensitive to
functional form assumptions. We study parametric estimators of the local
average treatment effect (LATE) derived from a widely used class of latent
threshold crossing models and show they yield LATE estimates algebraically
equivalent to the instrumental variables (IV) estimator. Our leading example is
Heckman's (1979) two-step (""Heckit"") control function estimator which, with
two-sided non-compliance, can be used to compute estimates of a variety of
causal parameters. Equivalence with IV is established for a semi-parametric
family of control function estimators and shown to hold at interior solutions
for a class of maximum likelihood estimators. Our results suggest differences
between structural and IV estimates often stem from disagreements about the
target parameter rather than from functional form assumptions per se. In cases
where equivalence fails, reporting structural estimates of LATE alongside IV
provides a simple means of assessing the credibility of structural
extrapolation exercises.","On Heckits, LATE, and Numerical Equivalence",2017-06-19 17:21:38,"Patrick Kline, Christopher R. Walters","http://arxiv.org/abs/1706.05982v4, http://arxiv.org/pdf/1706.05982v4",stat.ME
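
A rough numerical illustration of the comparison discussed above, not the authors' construction: on simulated data from a threshold-crossing treatment model, it contrasts the IV (Wald) estimate with a probit-based two-step control-function slope. The exact algebraic equivalence in the paper concerns LATE estimates built from the fitted parametric model, which this sketch does not reproduce; all names and parameter values are made up for illustration.

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
z = rng.integers(0, 2, size=n)                 # binary instrument
v = rng.normal(size=n)
d = (0.2 + 0.8 * z - v > 0).astype(float)      # threshold-crossing treatment
u = 0.5 * v + rng.normal(size=n)
y = 1.0 + 1.5 * d + u                          # outcome with endogenous treatment

# IV / Wald estimate.
iv = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())

# Two-step control function: probit first stage, generalized residual in the second stage.
probit = sm.Probit(d, sm.add_constant(z)).fit(disp=0)
xb = sm.add_constant(z) @ probit.params
gres = d * norm.pdf(xb) / norm.cdf(xb) - (1 - d) * norm.pdf(xb) / (1 - norm.cdf(xb))
cf = sm.OLS(y, sm.add_constant(np.column_stack([d, gres]))).fit()

print("IV (Wald) estimate:       ", iv)
print("control-function estimate:", cf.params[1])
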
29976,em,"Conditional independence of treatment assignment from potential outcomes is a
commonly used but nonrefutable assumption. We derive identified sets for
various treatment effect parameters under nonparametric deviations from this
conditional independence assumption. These deviations are defined via a
conditional treatment assignment probability, which makes them straightforward to
interpret. Our results can be used to assess the robustness of empirical
conclusions obtained under the baseline conditional independence assumption.",Identification of Treatment Effects under Conditional Partial Independence,2017-07-30 01:26:56,"Matthew A. Masten, Alexandre Poirier","http://arxiv.org/abs/1707.09563v1, http://arxiv.org/pdf/1707.09563v1",stat.ME
29977,em,"Econometrics and machine learning seem to have one common goal: to construct
a predictive model, for a variable of interest, using explanatory variables (or
features). However, these two fields developed in parallel, thus creating two
different cultures, to paraphrase Breiman (2001). The first builds
probabilistic models to describe economic phenomena; the second uses algorithms
that learn from their mistakes, most often with the aim of classifying (sounds,
images, etc.). Recently, however, learning models have proven to be more
effective than traditional econometric techniques (at the price of less
explanatory power) and, above all, they can handle much larger datasets. In
this context, it becomes necessary for econometricians to understand what these
two cultures are, what separates them and, especially, what brings them closer
together, in order to appropriate the tools developed by the statistical
learning community and integrate them into econometric models.",Econométrie et Machine Learning,2017-07-27 00:12:42,"Arthur Charpentier, Emmanuel Flachaire, Antoine Ly","http://arxiv.org/abs/1708.06992v2, http://arxiv.org/pdf/1708.06992v2",stat.OT
29978,em,"It is known that the common factors in a large panel of data can be
consistently estimated by the method of principal components, and principal
components can be constructed by iterative least squares regressions. Replacing
least squares with ridge regressions turns out to have the effect of shrinking
the singular values of the common component and possibly reducing its rank. The
method is used in the machine learning literature to recover low-rank matrices.
We study the procedure from the perspective of estimating a minimum-rank
approximate factor model. We show that the constrained factor estimates are
biased but can be more efficient in terms of mean-squared errors. Rank
consideration suggests a data-dependent penalty for selecting the number of
factors. The new criterion is more conservative in cases when the nominal
number of factors is inflated by the presence of weak factors or large
measurement noise. The framework is extended to incorporate a priori linear
constraints on the loadings. We provide asymptotic results that can be used to
test economic hypotheses.",Principal Components and Regularized Estimation of Factor Models,2017-08-27 23:58:02,"Jushan Bai, Serena Ng","http://arxiv.org/abs/1708.08137v2, http://arxiv.org/pdf/1708.08137v2",stat.ME
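
The shrinkage effect described above can be checked numerically with the standard variational identity for the nuclear-norm penalty: alternating ridge regressions that minimize ||X - F L'||^2 + lambda(||F||^2 + ||L||^2) converge to a common component whose singular values are the soft-thresholded singular values of X. The sketch below only illustrates that identity on simulated data; it is not the paper's estimator or its rank-selection criterion.

import numpy as np

rng = np.random.default_rng(2)
n, p, r, lam = 100, 50, 5, 3.0
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, p)) + 0.5 * rng.normal(size=(n, p))

# Alternating ridge regressions (softImpute-ALS style updates).
F = rng.normal(size=(n, r))
for _ in range(500):
    L = X.T @ F @ np.linalg.inv(F.T @ F + lam * np.eye(r))
    F = X @ L @ np.linalg.inv(L.T @ L + lam * np.eye(r))

sv_ridge = np.linalg.svd(F @ L.T, compute_uv=False)[:r]
sv_soft = np.maximum(np.linalg.svd(X, compute_uv=False)[:r] - lam, 0.0)
print("singular values of the ridge-based common component:", np.round(sv_ridge, 3))
print("soft-thresholded singular values of X:              ", np.round(sv_soft, 3))
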
29979,em,"This paper derives conditions under which preferences and technology are
nonparametrically identified in hedonic equilibrium models, where products are
differentiated along more than one dimension and agents are characterized by
several dimensions of unobserved heterogeneity. With products differentiated
along a quality index and agents characterized by scalar unobserved
heterogeneity, single crossing conditions on preferences and technology provide
identifying restrictions in Ekeland, Heckman and Nesheim (2004) and Heckman,
Matzkin and Nesheim (2010). We develop similar shape restrictions in the
multi-attribute case. These shape restrictions, which are based on optimal
transport theory and generalized convexity, allow us to identify preferences
for goods differentiated along multiple dimensions, from the observation of a
single market. We thereby derive nonparametric identification results for
nonseparable simultaneous equations and multi-attribute hedonic equilibrium
models with (possibly) multiple dimensions of unobserved heterogeneity. One of
our results is a proof of absolute continuity of the distribution of
endogenously traded qualities, which is of independent interest.",Identification of hedonic equilibrium and nonseparable simultaneous equations,2017-09-27 18:07:36,"Victor Chernozhukov, Alfred Galichon, Marc Henry, Brendan Pass","http://arxiv.org/abs/1709.09570v6, http://arxiv.org/pdf/1709.09570v6",econ.EM
29980,em,"Gaussian graphical models are recently used in economics to obtain networks
of dependence among agents. A widely-used estimator is the Graphical Lasso
(GLASSO), which amounts to a maximum likelihood estimation regularized using
the $L_{1,1}$ matrix norm on the precision matrix $\Omega$. The $L_{1,1}$ norm
is a lasso penalty that controls for sparsity, or the number of zeros in
$\Omega$. We propose a new estimator called Structured Graphical Lasso
(SGLASSO) that uses the $L_{1,2}$ mixed norm. The use of the $L_{1,2}$ penalty
controls for the structure of the sparsity in $\Omega$. We show that when the
network size is fixed, SGLASSO is asymptotically equivalent to an infeasible
GLASSO problem which prioritizes the sparsity-recovery of high-degree nodes.
Monte Carlo simulation shows that SGLASSO outperforms GLASSO in terms of
estimating the overall precision matrix and in terms of estimating the
structure of the graphical model. In an empirical illustration using a classic
firms' investment dataset, we obtain a network of firms' dependence that
exhibits the core-periphery structure, with General Motors, General Electric
and U.S. Steel forming the core group of firms.","Estimation of Graphical Models using the $L_{1,2}$ Norm",2017-09-28 19:15:30,"Khai X. Chiong, Hyungsik Roger Moon","http://arxiv.org/abs/1709.10038v2, http://arxiv.org/pdf/1709.10038v2",econ.EM
29987,em,"We present the calibrated-projection MATLAB package implementing the method
to construct confidence intervals proposed by Kaido, Molinari and Stoye (2017).
This manual provides details on how to use the package for inference on
projections of partially identified parameters. It also explains how to use the
MATLAB functions we developed to compute confidence intervals on solutions of
nonlinear optimization problems with estimated constraints.",Calibrated Projection in MATLAB: Users' Manual,2017-10-25 00:20:19,"Hiroaki Kaido, Francesca Molinari, Jörg Stoye, Matthew Thirkettle","http://arxiv.org/abs/1710.09707v1, http://arxiv.org/pdf/1710.09707v1",econ.EM
29981,em,"This paper examines and proposes several attribution modeling methods that
quantify how revenue should be attributed to online advertising inputs. We
adopt and further develop the relative importance method, which is based on
regression models that have been extensively studied and utilized to
investigate the relationship between advertising efforts and market reaction
(revenue). The relative importance method aims at decomposing and allocating
marginal contributions to the coefficient of determination (R^2) of regression
models as attribution values. In particular, we adopt two alternative
submethods to perform this decomposition: dominance analysis and relative
weight analysis. Moreover, we demonstrate an extension of the decomposition
methods from standard linear model to additive model. We claim that our new
approaches are more flexible and accurate in modeling the underlying
relationship and calculating the attribution values. We use simulation examples
to demonstrate the superior performance of our new approaches over traditional
methods. We further illustrate the value of our proposed approaches using a
real advertising campaign dataset.",Revenue-based Attribution Modeling for Online Advertising,2017-10-18 05:32:54,"Kaifeng Zhao, Seyed Hanif Mahboobi, Saeed Bagheri","http://arxiv.org/abs/1710.06561v1, http://arxiv.org/pdf/1710.06561v1",econ.EM
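
As a concrete illustration of decomposing R^2 into per-input attribution values, the sketch below implements relative weight analysis in the spirit of Johnson (2000): the weights of the standardized predictors sum to the regression R^2. The simulated advertising channels and all variable names are hypothetical, and the paper's dominance analysis and additive-model extension are not reproduced here.

import numpy as np
from scipy.linalg import sqrtm

def relative_weights(X, y):
    """Johnson-style relative weights: shares of R^2 attributed to each standardized predictor."""
    Xs = (X - X.mean(0)) / X.std(0, ddof=1)
    ys = (y - y.mean()) / y.std(ddof=1)
    n = len(ys)
    Rxx = Xs.T @ Xs / (n - 1)              # predictor correlation matrix
    rxy = Xs.T @ ys / (n - 1)              # predictor-outcome correlations
    Lam = np.real(sqrtm(Rxx))              # symmetric square root of Rxx
    beta = np.linalg.solve(Lam, rxy)       # coefficients on the orthogonalized predictors
    return (Lam ** 2) @ (beta ** 2)        # per-predictor shares of R^2 (they sum to R^2)

rng = np.random.default_rng(3)
n = 1000
tv = rng.normal(size=n)                               # hypothetical TV ad spend
search = 0.6 * tv + 0.8 * rng.normal(size=n)          # correlated search spend
display = rng.normal(size=n)
revenue = 2.0 * tv + 1.0 * search + 0.2 * display + rng.normal(size=n)

w = relative_weights(np.column_stack([tv, search, display]), revenue)
print("attribution shares of R^2:", np.round(w, 3), " total R^2:", round(w.sum(), 3))
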
29982,em,"We review recent advances in modal regression studies using kernel density
estimation. Modal regression is an alternative approach for investigating
relationship between a response variable and its covariates. Specifically,
modal regression summarizes the interactions between the response variable and
covariates using the conditional mode or local modes. We first describe the
underlying model of modal regression and its estimators based on kernel density
estimation. We then review the asymptotic properties of the estimators and
strategies for choosing the smoothing bandwidth. We also discuss useful
algorithms and similar alternative approaches for modal regression, and propose
future directions in this field.",Modal Regression using Kernel Density Estimation: a Review,2017-10-19 08:25:07,Yen-Chi Chen,"http://arxiv.org/abs/1710.07004v2, http://arxiv.org/pdf/1710.07004v2",stat.ME
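
A minimal sketch of kernel-based conditional mode estimation as described above: for each evaluation point x0, weight the observations by a kernel in x and take the maximizer of the resulting kernel density estimate in y. Bandwidths and the grid search are chosen for illustration only; the review's mean-shift-type algorithms and bandwidth selectors are not reproduced.

import numpy as np

rng = np.random.default_rng(10)
n = 2000
x = rng.uniform(-2, 2, size=n)
# Skewed conditional distribution, so the conditional mode differs from the conditional mean.
y = np.sin(x) + rng.gamma(shape=2.0, scale=0.5, size=n) - 1.0

hx, hy = 0.25, 0.15                               # illustrative smoothing bandwidths
x_grid = np.linspace(-1.8, 1.8, 40)
y_grid = np.linspace(y.min(), y.max(), 400)

def conditional_mode(x0):
    """Mode of the kernel estimate of f(y | x = x0), found by grid search in y."""
    wx = np.exp(-0.5 * ((x - x0) / hx) ** 2)      # kernel weights in the x direction
    dens = (wx[None, :] * np.exp(-0.5 * ((y_grid[:, None] - y[None, :]) / hy) ** 2)).sum(axis=1)
    return y_grid[np.argmax(dens)]

modes = np.array([conditional_mode(x0) for x0 in x_grid])
for x0, m_ in zip(x_grid[::10], modes[::10]):
    print(f"x = {x0:+.2f}  estimated mode(y|x) = {m_:+.3f}")
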
29983,em,"The recent research report of U.S. Department of Energy prompts us to
re-examine the pricing theories applied in electricity market design. The
theory of spot pricing is the basis of electricity market design in many
countries, but it has two major drawbacks: one is that it is still based on the
traditional hourly scheduling/dispatch model, ignores the crucial time
continuity in electric power production and consumption and does not treat the
inter-temporal constraints seriously; the second is that it assumes that the
electricity products are homogeneous in the same dispatch period and cannot
distinguish the base, intermediate and peak power with obviously different
technical and economic characteristics. To overcome the shortcomings, this
paper presents a continuous time commodity model of electricity, including spot
pricing model and load duration model. The market optimization models under the
two pricing mechanisms are established with the Riemann and Lebesgue integrals
respectively, and the functional optimization problems are solved via the
Euler-Lagrange equation to obtain the market equilibria. The feasibility of
pricing according to load duration is proved by strict mathematical derivation.
Simulation results show that load duration pricing can correctly identify and
value different attributes of generators, reduce the total electricity
purchasing cost, and distribute profits among the power plants more equitably.
The theory and methods proposed in this paper will provide new ideas and
theoretical foundation for the development of electric power markets.",Electricity Market Theory Based on Continuous Time Commodity Model,2017-10-22 13:11:18,"Haoyong Chen, Lijia Han","http://arxiv.org/abs/1710.07918v1, http://arxiv.org/pdf/1710.07918v1",econ.EM
29984,em,"We generalize the approach of Carlier (2001) and provide an existence proof
for the multidimensional screening problem with general nonlinear preferences.
We first formulate the principal's problem as a maximization problem with
$G$-convexity constraints and then use $G$-convex analysis to prove existence.",Existence in Multidimensional Screening with General Nonlinear Preferences,2017-10-24 02:39:18,Kelvin Shuangjian Zhang,"http://arxiv.org/abs/1710.08549v2, http://arxiv.org/pdf/1710.08549v2",econ.EM
29985,em,"Binary classification is highly used in credit scoring in the estimation of
probability of default. The validation of such predictive models is based both
on rank ability, and also on calibration (i.e. how accurately the probabilities
output by the model map to the observed probabilities). In this study we cover
the current best practices regarding calibration for binary classification, and
explore how different approaches yield different results on real world credit
scoring data. The limitations of evaluating credit scoring models using only
rank ability metrics are explored. A benchmark is run on 18 real world
datasets, and results compared. The calibration techniques used are Platt
Scaling and Isotonic Regression. Also, different machine learning models are
used: Logistic Regression, Random Forest Classifiers, and Gradient Boosting
Classifiers. Results show that when the dataset is treated as a time series,
the use of re-calibration with Isotonic Regression is able to improve the long
term calibration better than the alternative methods. Using re-calibration, the
non-parametric models are able to outperform the Logistic Regression on Brier
Score Loss.",Calibration of Machine Learning Classifiers for Probability of Default Modelling,2017-10-24 20:36:51,"Pedro G. Fonseca, Hugo D. Lopes","http://arxiv.org/abs/1710.08901v1, http://arxiv.org/pdf/1710.08901v1",econ.EM
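
A small scikit-learn sketch of the comparison described above: fit a classifier, re-calibrate its scores with Platt scaling (sigmoid) and with isotonic regression, and compare Brier scores on a held-out set. The synthetic data stand in for a credit-scoring dataset; the paper's benchmark uses 18 real datasets and a time-series treatment, which this sketch does not attempt.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=10000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

raw = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
platt = CalibratedClassifierCV(GradientBoostingClassifier(random_state=0),
                               method="sigmoid", cv=5).fit(X_tr, y_tr)
iso = CalibratedClassifierCV(GradientBoostingClassifier(random_state=0),
                             method="isotonic", cv=5).fit(X_tr, y_tr)

for name, model in [("uncalibrated", raw), ("Platt scaling", platt), ("isotonic", iso)]:
    p = model.predict_proba(X_te)[:, 1]
    print(f"{name:>14s}  Brier score: {brier_score_loss(y_te, p):.4f}")
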
29986,em,"Constraining the maximum likelihood density estimator to satisfy a
sufficiently strong constraint, $\log-$concavity being a common example, has
the effect of restoring consistency without requiring additional parameters.
Since many results in economics require densities to satisfy a regularity
condition, these estimators are also attractive for the structural estimation
of economic models. In all of the examples of regularity conditions provided by
Bagnoli and Bergstrom (2005) and Ewerhart (2013), $\log-$concavity is
sufficient to ensure that the density satisfies the required conditions.
However, in many cases $\log-$concavity is far from necessary, and it has the
unfortunate side effect of ruling out sub-exponential tail behavior.
  In this paper, we use optimal transport to formulate a shape constrained
density estimator. We initially describe the estimator using a $\rho-$concavity
constraint. In this setting we provide results on consistency, asymptotic
distribution, convexity of the optimization problem defining the estimator, and
formulate a test for the null hypothesis that the population density satisfies
a shape constraint. Afterward, we provide sufficient conditions for these
results to hold using an arbitrary shape constraint. This generalization is
used to explore whether the California Department of Transportation's decision
to award construction contracts with the use of a first price auction is cost
minimizing. We estimate the marginal costs of construction firms subject to
Myerson's (1981) regularity condition, which is a requirement for the first
price reverse auction to be cost minimizing. The proposed test fails to reject
that the regularity condition is satisfied.",Shape-Constrained Density Estimation via Optimal Transport,2017-10-25 07:08:29,Ryan Cumings-Menon,"http://arxiv.org/abs/1710.09069v2, http://arxiv.org/pdf/1710.09069v2",econ.EM
29988,em,"Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech)
trend that is displacing traditional retail banking. Studies on P2P lending
have focused on predicting individual interest rates or default probabilities.
However, the relationship between aggregated P2P interest rates and the general
economy will be of interest to investors and borrowers as the P2P credit market
matures. We show that the variation in P2P interest rates across grade types
are determined by three macroeconomic latent factors formed by Canonical
Correlation Analysis (CCA) - macro default, investor uncertainty, and the
fundamental value of the market. However, the variation in P2P interest rates
across term types cannot be explained by the general economy.",Macroeconomics and FinTech: Uncovering Latent Macroeconomic Effects on Peer-to-Peer Lending,2017-10-31 03:49:04,"Jessica Foo, Lek-Heng Lim, Ken Sze-Wai Wong","http://arxiv.org/abs/1710.11283v1, http://arxiv.org/pdf/1710.11283v1",econ.EM
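
A minimal sketch of the kind of canonical correlation analysis mentioned above, extracting shared latent factors between a block of P2P interest-rate series and a block of macroeconomic series. Both data blocks here are simulated placeholders; the factor labels in the paper (macro default, investor uncertainty, fundamental value of the market) come from the authors' interpretation, not from this code.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
T = 240                                          # months of data (illustrative)
latent = rng.normal(size=(T, 3))                 # shared latent macro factors
rates = latent @ rng.normal(size=(3, 7)) + 0.3 * rng.normal(size=(T, 7))    # P2P rates by grade
macro = latent @ rng.normal(size=(3, 10)) + 0.3 * rng.normal(size=(T, 10))  # macro indicators

cca = CCA(n_components=3).fit(macro, rates)
U, V = cca.transform(macro, rates)
corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(3)]
print("canonical correlations:", np.round(corrs, 3))
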
29989,em,"We develop a machine learning based tool for accurate prediction of
socio-economic indicators from daytime satellite imagery. The diverse set of
indicators is often not intuitively related to observable features in
satellite images, and the indicators are not even always well correlated with each other. Our
predictive tool is more accurate than using night light as a proxy, and can be
used to predict missing data, smooth out noise in surveys, monitor development
progress of a region, and flag potential anomalies. Finally, we use predicted
variables to do robustness analysis of a regression study of high rate of
stunting in India.",On monitoring development indicators using high resolution satellite images,2017-12-06 19:53:14,"Potnuru Kishen Suraj, Ankesh Gupta, Makkunda Sharma, Sourabh Bikas Paul, Subhashis Banerjee","http://arxiv.org/abs/1712.02282v3, http://arxiv.org/pdf/1712.02282v3",econ.EM
29990,em,"We consider the scaling laws, second-order statistics and entropy of the
consumed energy of metropolis cities which are hybrid complex systems
comprising social networks, engineering systems, agricultural output, economic
activity and energy components. We abstract a city in terms of two fundamental
variables: $s$ resource cells (of unit area) that represent energy-consuming
geographic or spatial zones (e.g. land, housing or infrastructure etc.) and a
population comprising $n$ mobile units that can migrate between these cells. We
show that with a constant metropolis area (fixed $s$), the variance and entropy
of consumed energy initially increase with $n$, reach a maximum and then
eventually diminish to zero as saturation is reached. These metrics are
indicators of the spatial mobility of the population. Under certain situations,
the variance is bounded as a quadratic function of the mean consumed energy of
the metropolis. However, when population and metropolis area are endogenous,
growth in the latter is arrested when $n\leq\frac{s}{2}\log(s)$ due to
diminished population density. Conversely, the population growth reaches
equilibrium when $n\geq {s}\log{n}$ or equivalently when the aggregate of both
over-populated and under-populated areas is large. Moreover, we also draw the
relationship between our approach and multi-scalar information, when economic
dependency between a metropolis's sub-regions is based on the entropy of
consumed energy. Finally, if the city's economic size (domestic product etc.)
is proportional to the consumed energy, then for a constant population density,
we show that the economy scales linearly with the surface area (or $s$).",On Metropolis Growth,2017-12-08 07:46:30,Syed Amaar Ahmad,"http://arxiv.org/abs/1712.02937v2, http://arxiv.org/pdf/1712.02937v2",physics.soc-ph
29991,em,"In economics we often face a system, which intrinsically imposes a structure
of hierarchy of its components, i.e., in modelling trade accounts related to
foreign exchange or in optimization of regional air protection policy.
  A problem of reconciliation of forecasts obtained on different levels of
hierarchy has been addressed in the statistical and econometric literature for
many times and concerns bringing together forecasts obtained independently at
different levels of hierarchy.
  This paper deals with this issue in case of a hierarchical functional time
series. We present and critically discuss a state of art and indicate
opportunities of an application of these methods to a certain environment
protection problem. We critically compare the best predictor known from the
literature with our own original proposal. Within the paper we study a
macromodel describing day and night air pollution in the Silesia region divided
into five subregions.",Forecasting of a Hierarchical Functional Time Series on Example of Macromodel for Day and Night Air Pollution in Silesia Region: A Critical Overview,2017-11-25 21:36:37,"Daniel Kosiorowski, Dominik Mielczarek, Jerzy. P. Rydlewski","http://arxiv.org/abs/1712.03797v1, http://arxiv.org/pdf/1712.03797v1",econ.EM
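
For readers unfamiliar with forecast reconciliation, the sketch below shows the standard OLS (projection) approach for a two-level hierarchy such as a regional total and its sub-regions: incoherent base forecasts are projected onto the set of forecasts that add up. This is a textbook benchmark, not the authors' own proposal, and the numbers are invented.

import numpy as np

# Two-level hierarchy: a regional total that is the sum of five sub-regions.
S = np.vstack([np.ones((1, 5)), np.eye(5)])               # summing matrix (total on top)

# Base forecasts produced independently at each level (hence not coherent).
base = np.array([104.0, 18.0, 22.0, 25.0, 19.0, 17.0])    # total, then sub-regions
print("sum of sub-region forecasts:", base[1:].sum(), "vs total forecast:", base[0])

# OLS (projection) reconciliation: the closest coherent forecast to the base forecasts.
P = S @ np.linalg.inv(S.T @ S) @ S.T
reconciled = P @ base
print("reconciled total:      ", round(reconciled[0], 2))
print("reconciled sub-regions:", np.round(reconciled[1:], 2),
      "-> they now add up to", round(reconciled[1:].sum(), 2))
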
29992,em,"In accordance with ""Democracy's Effect on Development: More Questions than
Answers"", we seek to carry out a study in following the description in the
'Questions for Further Study.' To that end, we studied 33 countries in the
Sub-Saharan Africa region, who all went through an election which should signal
a ""step-up"" for their democracy, one in which previously homogenous regimes
transfer power to an opposition party that fairly won the election. After doing
so, liberal-democracy indicators and democracy indicators were evaluated in the
five years prior to and after the election took place, and over that ten-year
period, we examine the data for trends. If we see positive or negative trends
over this time horizon, we are able to conclude that it was the recent increase
in the quality of their democracy which led to them. Having investigated examples
of this in depth, there seem to be three main archetypes which drive the
results. Countries with positive results to their democracy from the election
have generally positive effects on their development, countries with more
""plateau"" like results also did well, but countries for whom the descent to
authoritarianism was continued by this election found more negative results.",The Calculus of Democratization and Development,2017-12-12 07:08:32,Jacob Ferguson,"http://arxiv.org/abs/1712.04117v1, http://arxiv.org/pdf/1712.04117v1",econ.EM
29993,em,"We analyze Assessment Voting, a new two-round voting procedure that can be
applied to binary decisions in democratic societies. In the first round, a
randomly-selected number of citizens cast their vote on one of the two
alternatives at hand, thereby irrevocably exercising their right to vote. In
the second round, after the results of the first round have been published, the
remaining citizens decide whether to vote for one alternative or to abstain.
The votes from both rounds are aggregated, and the final outcome is obtained by
applying the majority rule, with ties being broken by fair randomization.
Within a costly voting framework, we show that large electorates will choose
the preferred alternative of the majority with high probability, and that
average costs will be low. This result is in contrast with the literature on
one-round voting, which predicts either higher voting costs (when voting is
compulsory) or decisions that often do not represent the preferences of the
majority (when voting is voluntary).",Assessment Voting in Large Electorates,2017-12-15 01:47:38,"Hans Gersbach, Akaki Mamageishvili, Oriol Tejada","http://arxiv.org/abs/1712.05470v2, http://arxiv.org/pdf/1712.05470v2",econ.EM
29994,em,"We introduce new inference procedures for counterfactual and synthetic
control methods for policy evaluation. We recast the causal inference problem
as a counterfactual prediction and a structural breaks testing problem. This
allows us to exploit insights from conformal prediction and structural breaks
testing to develop permutation inference procedures that accommodate modern
high-dimensional estimators, are valid under weak and easy-to-verify
conditions, and are provably robust against misspecification. Our methods work
in conjunction with many different approaches for predicting counterfactual
mean outcomes in the absence of the policy intervention. Examples include
synthetic controls, difference-in-differences, factor and matrix completion
models, and (fused) time series panel data models. Our approach demonstrates an
excellent small-sample performance in simulations and is taken to a data
application where we re-evaluate the consequences of decriminalizing indoor
prostitution. Open-source software for implementing our conformal inference
methods is available.",An Exact and Robust Conformal Inference Method for Counterfactual and Synthetic Controls,2017-12-25 18:29:10,"Victor Chernozhukov, Kaspar Wüthrich, Yinchu Zhu","http://arxiv.org/abs/1712.09089v10, http://arxiv.org/pdf/1712.09089v10",econ.EM
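
A stripped-down sketch of the permutation idea behind the conformal procedure described above: fit the counterfactual model under the null of no effect, and judge how unusual the block of post-treatment residuals is under cyclic (moving-block) permutations. It uses a plain regression on donor units and a simple absolute-residual statistic; the paper's estimators, test statistics, and theory are richer, so treat this only as an illustration of the mechanics.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
T0, T1, J = 80, 10, 15                        # pre periods, post periods, donor units
f = rng.normal(size=(T0 + T1, 2))             # common factors
donors = f @ rng.normal(size=(2, J)) + 0.5 * rng.normal(size=(T0 + T1, J))
treated = donors @ (np.ones(J) / J) + 0.5 * rng.normal(size=T0 + T1)
treated[T0:] += 1.5                           # true post-treatment effect

# Under the null of no effect, fit the counterfactual model on ALL periods,
# then ask how unusual the block of post-treatment residuals looks.
fit = LinearRegression().fit(donors, treated)
u = treated - fit.predict(donors)
stat = lambda r: np.abs(r[T0:]).mean()

S_obs = stat(u)
S_perm = np.array([stat(np.roll(u, shift)) for shift in range(len(u))])  # cyclic shifts
p_value = (S_perm >= S_obs).mean()
print(f"observed statistic {S_obs:.3f}, permutation p-value {p_value:.3f}")
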
29995,em,"Our confidence set quantifies the statistical uncertainty from data-driven
group assignments in grouped panel models. It covers the true group memberships
jointly for all units with pre-specified probability and is constructed by
inverting many simultaneous unit-specific one-sided tests for group membership.
We justify our approach under $N, T \to \infty$ asymptotics using tools from
high-dimensional statistics, some of which we extend in this paper. We provide
Monte Carlo evidence that the confidence set has adequate coverage in finite
samples. An empirical application illustrates the use of our confidence set.",Confidence set for group membership,2017-12-31 21:18:55,"Andreas Dzemski, Ryo Okui","http://arxiv.org/abs/1801.00332v6, http://arxiv.org/pdf/1801.00332v6",econ.EM
29996,em,"In this paper, a new and convenient $\chi^2$ wald test based on MCMC outputs
is proposed for hypothesis testing. The new statistic can be explained as MCMC
version of Wald test and has several important advantages that make it very
convenient in practical applications. First, it is well-defined under improper
prior distributions and avoids Jeffrey-Lindley's paradox. Second, it's
asymptotic distribution can be proved to follow the $\chi^2$ distribution so
that the threshold values can be easily calibrated from this distribution.
Third, it's statistical error can be derived using the Markov chain Monte Carlo
(MCMC) approach. Fourth, most importantly, it is only based on the posterior
MCMC random samples drawn from the posterior distribution. Hence, it is only
the by-product of the posterior outputs and very easy to compute. In addition,
when the prior information is available, the finite sample theory is derived
for the proposed test statistic. At last, the usefulness of the test is
illustrated with several applications to latent variable models widely used in
economics and finance.",A New Wald Test for Hypothesis Testing Based on MCMC outputs,2018-01-03 15:25:25,"Yong Li, Xiaobin Liu, Jun Yu, Tao Zeng","http://arxiv.org/abs/1801.00973v1, http://arxiv.org/pdf/1801.00973v1",econ.EM
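
A rough sketch of the type of computation involved: a Wald-style chi-square statistic formed purely from posterior MCMC output, using the posterior mean and posterior covariance of the restricted parameters. The paper's exact statistic and its calibration may differ; here the "MCMC draws" are simulated normal samples and the restriction is the hypothetical H0: theta = 0.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
q = 3                                               # number of restrictions in H0: theta = 0
post_cov = np.array([[0.04, 0.01, 0.00],
                     [0.01, 0.09, 0.02],
                     [0.00, 0.02, 0.16]])
center = rng.multivariate_normal(np.zeros(q), post_cov)       # posterior location for one dataset under H0
draws = rng.multivariate_normal(center, post_cov, size=10000)  # stand-in for MCMC output

theta_bar = draws.mean(axis=0)                      # posterior mean from the draws
V_bar = np.cov(draws, rowvar=False)                 # posterior covariance from the draws
W = theta_bar @ np.linalg.solve(V_bar, theta_bar)   # Wald-type statistic, approx. chi2(q) under H0
print(f"Wald statistic {W:.2f}, chi2({q}) p-value {chi2.sf(W, df=q):.3f}")
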
29997,em,"This paper compares alternative univariate versus multivariate models,
frequentist versus Bayesian autoregressive and vector autoregressive
specifications, for hourly day-ahead electricity prices, both with and without
renewable energy sources. The accuracy of point and density forecasts are
inspected in four main European markets (Germany, Denmark, Italy and Spain)
characterized by different levels of renewable energy power generation. Our
results show that the Bayesian VAR specifications with exogenous variables
dominate other multivariate and univariate specifications, in terms of both
point and density forecasting.",Comparing the Forecasting Performances of Linear Models for Electricity Prices with High RES Penetration,2018-01-03 20:48:08,"Angelica Gianfreda, Francesco Ravazzolo, Luca Rossini","http://arxiv.org/abs/1801.01093v3, http://arxiv.org/pdf/1801.01093v3",econ.EM
29998,em,"An intensive research sprang up for stochastic methods in insurance during
the past years. To meet all future claims rising from policies, it is requisite
to quantify the outstanding loss liabilities. Loss reserving methods based on
aggregated data from run-off triangles are predominantly used to calculate the
claims reserves. Conventional reserving techniques have some disadvantages:
loss of information from the policy and the claim's development due to the
aggregation, zero or negative cells in the triangle; usually small number of
observations in the triangle; only few observations for recent accident years;
and sensitivity to the most recent paid claims.
  To overcome these dilemmas, granular loss reserving methods for individual
claim-by-claim data will be derived. Reserves' estimation is a crucial part of
the risk valuation process, which is now a front burner in economics. Since
there is a growing demand for prediction of total reserves for different types
of claims or even multiple lines of business, a time-varying copula framework
for granular reserving will be established.",Dynamic and granular loss reserving with copulae,2018-01-05 18:25:42,"Matúš Maciak, Ostap Okhrin, Michal Pešta","http://arxiv.org/abs/1801.01792v1, http://arxiv.org/pdf/1801.01792v1",econ.EM
29999,em,"The purpose of this article is to propose a new ""theory,"" the Strategic
Analysis of Financial Markets (SAFM) theory, that explains the operation of
financial markets using the analytical perspective of an enlightened gambler.
The gambler understands that all opportunities for superior performance arise
from suboptimal decisions by humans, but understands also that knowledge of
human decision making alone is not enough to understand market behavior --- one
must still model how those decisions lead to market prices. Thus are there
three parts to the model: gambling theory, human decision making, and strategic
problem solving. A new theory is necessary because at this writing in 2017,
there is no theory of financial markets acceptable to both practitioners and
theorists. Theorists' efficient market theory, for example, cannot explain
bubbles and crashes nor the exceptional returns of famous investors and
speculators such as Warren Buffett and George Soros. At the same time, a new
theory must be sufficiently quantitative, explain market ""anomalies"" and
provide predictions in order to satisfy theorists. It is hoped that the SAFM
framework will meet these requirements.","Why Markets are Inefficient: A Gambling ""Theory"" of Financial Markets For Practitioners and Theorists",2018-01-06 03:37:52,Steven D. Moffitt,"http://arxiv.org/abs/1801.01948v1, http://arxiv.org/pdf/1801.01948v1",econ.EM
30014,em,"This note proves that the representation of the Allen elasticity of
substitution obtained by Uzawa for linear homogeneous functions holds true for
nonhomogeneous functions. It is shown that the criticism of the Allen-Uzawa
elasticity of substitution in the works of Blackorby, Primont, Russell is based
on an incorrect example.",The Allen--Uzawa elasticity of substitution for nonhomogeneous production functions,2018-02-09 19:08:46,"Elena Burmistrova, Sergey Lobanov","http://arxiv.org/abs/1802.06885v1, http://arxiv.org/pdf/1802.06885v1",econ.EM
30000,em,"To determine the welfare implications of price changes in demand data, we
introduce a revealed preference relation over prices. We show that the absence
of cycles in this relation characterizes a consumer who trades off the utility
of consumption against the disutility of expenditure. Our model can be applied
whenever a consumer's demand over a strict subset of all available goods is
being analyzed; it can also be extended to settings with discrete goods and
nonlinear prices. To illustrate its use, we apply our model to a single-agent
data set and to a data set with repeated cross-sections. We develop a novel
test of linear hypotheses on partially identified parameters to estimate the
proportion of the population who are revealed better off due to a price change
in the latter application. This new technique can be used for nonparametric
counterfactual analysis more broadly.",Revealed Price Preference: Theory and Empirical Analysis,2018-01-09 01:22:30,"Rahul Deb, Yuichi Kitamura, John K. -H. Quah, Jörg Stoye","http://arxiv.org/abs/1801.02702v3, http://arxiv.org/pdf/1801.02702v3",econ.EM
30001,em,"This article is a prologue to the article ""Why Markets are Inefficient: A
Gambling 'Theory' of Financial Markets for Practitioners and Theorists."" It
presents important background for that article: why gambling is important,
even necessary, for real-world traders; the reason for the superiority of
the strategic/gambling approach over the competing market ideologies of market
fundamentalism and the scientific approach; and its potential to uncover
profitable trading systems. Much of this article was drawn from Chapter 1 of
the book ""The Strategic Analysis of Financial Markets (in 2 volumes)"" World
Scientific, 2017.",On a Constructive Theory of Markets,2018-01-08 01:17:52,Steven D. Moffitt,"http://arxiv.org/abs/1801.02994v1, http://arxiv.org/pdf/1801.02994v1",econ.EM
30002,em,"This paper introduces estimation methods for grouped latent heterogeneity in
panel data quantile regression. We assume that the observed individuals come
from a heterogeneous population with a finite number of types. The number of
types and the group membership structure are not assumed to be known in advance
and are estimated by means of a convex optimization problem. We provide conditions
under which group membership is estimated consistently and establish asymptotic
normality of the resulting estimators. Simulations show that the method works
well in finite samples when T is reasonably large. To illustrate the proposed
methodology we study the effects of the adoption of Right-to-Carry concealed
weapon laws on violent crime rates using panel data of 51 U.S. states from 1977
- 2010.",Panel Data Quantile Regression with Grouped Fixed Effects,2018-01-16 00:49:20,"Jiaying Gu, Stanislav Volgushev","http://arxiv.org/abs/1801.05041v2, http://arxiv.org/pdf/1801.05041v2",econ.EM
30003,em,"Many applications involve a censored dependent variable and an endogenous
independent variable. Chernozhukov et al. (2015) introduced a censored quantile
instrumental variable estimator (CQIV) for use in those applications, which has
been applied by Kowalski (2016), among others. In this article, we introduce a
Stata command, cqiv, that simplifies application of the CQIV estimator in Stata.
We summarize the CQIV estimator and algorithm, we describe the use of the cqiv
command, and we provide empirical examples.",Censored Quantile Instrumental Variable Estimation with Stata,2018-01-13 17:47:37,"Victor Chernozhukov, Iván Fernández-Val, Sukjin Han, Amanda Kowalski","http://arxiv.org/abs/1801.05305v3, http://arxiv.org/pdf/1801.05305v3",econ.EM
30004,em,"In this paper we forecast daily returns of crypto-currencies using a wide
variety of different econometric models. To capture salient features commonly
observed in financial time series like rapid changes in the conditional
variance, non-normality of the measurement errors and sharply increasing
trends, we develop a time-varying parameter VAR with t-distributed measurement
errors and stochastic volatility. To control for overparameterization, we rely
on the Bayesian literature on shrinkage priors that enables us to shrink
coefficients associated with irrelevant predictors and/or perform model
specification in a flexible manner. Using around one year of daily data we
perform a real-time forecasting exercise and investigate whether any of the
proposed models is able to outperform the naive random walk benchmark. To
assess the economic relevance of the forecasting gains produced by the proposed
models we moreover run a simple trading exercise.",Predicting crypto-currencies using sparse non-Gaussian state space models,2018-01-19 14:51:27,"Christian Hotz-Behofsits, Florian Huber, Thomas O. Zörner","http://arxiv.org/abs/1801.06373v2, http://arxiv.org/pdf/1801.06373v2",econ.EM
30005,em,"The primary goal of this study is doing a meta-analysis research on two
groups of published studies. First, the ones that focus on the evaluation of
the United States Department of Agriculture (USDA) forecasts and second, the
ones that evaluate the market reactions to the USDA forecasts. We investigate
four questions. 1) How the studies evaluate the accuracy of the USDA forecasts?
2) How they evaluate the market reactions to the USDA forecasts? 3) Is there
any heterogeneity in the results of the mentioned studies? 4) Is there any
publication bias? About the first question, while some researchers argue that
the forecasts are unbiased, most of them maintain that they are biased,
inefficient, not optimal, or not rational. About the second question, while a
few studies claim that the forecasts are not newsworthy, most of them maintain
that they are newsworthy, provide useful information, and cause market
reactions. About the third and the fourth questions, based on our findings,
there are some clues that the results of the studies are heterogeneous, but we
did not find enough evidence of publication bias.",USDA Forecasts: A meta-analysis study,2018-01-20 00:12:37,Bahram Sanginabadi,"http://arxiv.org/abs/1801.06575v1, http://arxiv.org/pdf/1801.06575v1",econ.EM
30006,em,"This paper extends endogenous economic growth models to incorporate knowledge
externalities. We explore whether spatial knowledge spillovers among regions
exist, whether spatial knowledge spillovers promote regional innovative
activities, and whether external knowledge spillovers affect the evolution of
regional innovations in the long run. We empirically verify the theoretical
results through applying spatial statistics and econometric model in the
analysis of panel data of 31 regions in China. An accurate estimate of the
range of knowledge spillovers is achieved and the convergence of regional
knowledge growth rate is found, with clear evidence that developing regions
benefit more from external knowledge spillovers than developed regions.",Evolution of Regional Innovation with Spatial Knowledge Spillovers: Convergence or Divergence?,2018-01-22 05:14:53,"Jinwen Qiu, Wenjian Liu, Ning Ning","http://arxiv.org/abs/1801.06936v3, http://arxiv.org/pdf/1801.06936v3",econ.EM
34346,th,"We consider collective decision making when the society consists of groups
endowed with voting weights. Each group chooses an internal rule that specifies
the allocation of its weight to the alternatives as a function of its members'
preferences. Under fairly general conditions, we show that the winner-take-all
rule is a dominant strategy, while the equilibrium is Pareto dominated,
highlighting the dilemma structure between optimality for each group and for
the whole society. We also develop a technique for asymptotic analysis and show
Pareto dominance of the proportional rule. Our numerical computation for the US
Electoral College verifies its sensibility.",The Winner-Take-All Dilemma,2022-06-20 08:12:10,"Kazuya Kikuchi, Yukio Koriyama","http://dx.doi.org/10.3982/TE5248, http://arxiv.org/abs/2206.09574v1, http://arxiv.org/pdf/2206.09574v1",econ.TH
30007,em,"Since exchange economy considerably varies in the market assets, asset prices
have become an attractive research area for investigating and modeling
ambiguous and uncertain information in today markets. This paper proposes a new
generative uncertainty mechanism based on the Bayesian Inference and
Correntropy (BIC) technique for accurately evaluating asset pricing in markets.
This technique examines the potential processes of risk, ambiguity, and
variations of market information in a controllable manner. We apply the new BIC
technique to a consumption asset-pricing model in which the consumption
variations are modeled using the Bayesian network model while observing the
dynamics of asset pricing phenomena in the data. These dynamics include the
procyclical deviations of price, the countercyclical deviations of equity
premia and equity volatility, the leverage impact and the mean reversion of
excess returns. The key findings reveal that the precise modeling of asset
information can estimate price changes in the market effectively.",Accurate Evaluation of Asset Pricing Under Uncertainty and Ambiguity of Information,2018-01-22 09:38:32,Farouq Abdulaziz Masoudy,"http://arxiv.org/abs/1801.06966v2, http://arxiv.org/pdf/1801.06966v2",q-fin.GN
30008,em,"We consider identification and estimation of nonseparable sample selection
models with censored selection rules. We employ a control function approach and
discuss different objects of interest based on (1) local effects conditional on
the control function, and (2) global effects obtained from integration over
ranges of values of the control function. We derive the conditions for the
identification of these different objects and suggest strategies for
estimation. Moreover, we provide the associated asymptotic theory. These
strategies are illustrated in an empirical investigation of the determinants of
female wages in the United Kingdom.",Nonseparable Sample Selection Models with Censored Selection Rules,2018-01-26 22:54:44,"Iván Fernández-Val, Aico van Vuuren, Francis Vella","http://arxiv.org/abs/1801.08961v2, http://arxiv.org/pdf/1801.08961v2",econ.EM
30009,em,"We analyzed 2012 and 2016 YouGov pre-election polls in order to understand
how different population groups voted in the 2012 and 2016 elections. We broke
the data down by demographics and state. We display our findings with a series
of graphs and maps. The R code associated with this project is available at
https://github.com/rtrangucci/mrp_2016_election/.",Voting patterns in 2016: Exploration using multilevel regression and poststratification (MRP) on pre-election polls,2018-02-02 23:49:44,"Rob Trangucci, Imad Ali, Andrew Gelman, Doug Rivers","http://arxiv.org/abs/1802.00842v3, http://arxiv.org/pdf/1802.00842v3",stat.AP
30010,em,"This study proposes a mixed logit model with multivariate nonparametric
finite mixture distributions. The support of the distribution is specified as a
high-dimensional grid over the coefficient space, with equal or unequal
intervals between successive points along the same dimension; the location of
each point on the grid and the probability mass at that point are model
parameters that need to be estimated. The framework does not require the
analyst to specify the shape of the distribution prior to model estimation, but
can approximate any multivariate probability distribution function to any
arbitrary degree of accuracy. The grid with unequal intervals, in particular,
offers greater flexibility than existing multivariate nonparametric
specifications, while requiring the estimation of a small number of additional
parameters. An expectation maximization algorithm is developed for the
estimation of these models. Multiple synthetic datasets and a case study on
travel mode choice behavior are used to demonstrate the value of the model
framework and estimation algorithm. Compared to extant models that incorporate
random taste heterogeneity through continuous mixture distributions, the
proposed model provides better out-of-sample predictive ability. Findings
reveal significant differences in willingness to pay measures between the
proposed model and extant specifications. The case study further demonstrates
the ability of the proposed model to endogenously recover patterns of attribute
non-attendance and choice set formation.",Random taste heterogeneity in discrete choice models: Flexible nonparametric finite mixture distributions,2018-02-07 06:54:04,"Akshay Vij, Rico Krueger","http://arxiv.org/abs/1802.02299v1, http://arxiv.org/pdf/1802.02299v1",econ.EM
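
A compact sketch of the EM idea for a nonparametric finite mixture on a fixed grid, here for a binary logit with one random coefficient: the E-step computes each individual's posterior weight over grid points, and the M-step updates the probability mass on the grid. Unlike the model above, the grid locations are held fixed and equally spaced, and the multinomial-choice and unequal-interval features are omitted; all data are simulated.

import numpy as np

rng = np.random.default_rng(7)
N, T = 500, 10
beta_true = rng.choice([-1.0, 0.5, 2.0], size=N, p=[0.3, 0.5, 0.2])   # discrete taste mixture
x = rng.normal(size=(N, T))
y = (rng.random((N, T)) < 1.0 / (1.0 + np.exp(-beta_true[:, None] * x))).astype(float)

grid = np.linspace(-3, 3, 25)          # fixed support points for the random coefficient
pi = np.full(len(grid), 1.0 / len(grid))   # mixing probabilities to be estimated

def loglik_by_point(x, y, grid):
    """Log-likelihood of each individual's choice sequence at each grid point."""
    u = x[:, :, None] * grid[None, None, :]            # N x T x K utilities
    p1 = 1.0 / (1.0 + np.exp(-u))
    return np.sum(y[:, :, None] * np.log(p1) + (1 - y[:, :, None]) * np.log(1 - p1), axis=1)

ll = loglik_by_point(x, y, grid)       # N x K
for _ in range(200):                   # EM iterations
    w = pi * np.exp(ll - ll.max(axis=1, keepdims=True))   # E-step (numerically stabilized)
    w /= w.sum(axis=1, keepdims=True)
    pi = w.mean(axis=0)                # M-step: update the probability mass on the grid

top = np.argsort(pi)[-3:]
print("largest masses at grid points:", np.round(grid[top], 2), np.round(pi[top], 3))
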
30011,em,"To what extent, hiring incentives targeting a specific group of vulnerable
unemployed (i.e. long term unemployed) are more effective, with respect to
generalised incentives (without a definite target), to increase hirings of the
targeted group? Are generalized incentives able to influence hirings of the
vulnerable group? Do targeted policies have negative side effects too important
to accept them? Even though there is a huge literature on hiring subsidies,
these questions remained unresolved. We tried to answer them, comparing the
impact of two similar hiring policies, one oriented towards a target group and
one generalised, implemented on the italian labour market. We used
administrative data on job contracts, and counterfactual analysis methods. The
targeted policy had a positive and significant impact, while the generalized
policy didn't have a significant impact on the vulnerable group. Moreover, we
concluded the targeted policy didn't have any indirect negative side effect.",Long-Term Unemployed hirings: Should targeted or untargeted policies be preferred?,2018-02-09 19:48:15,"Alessandra Pasquini, Marco Centra, Guido Pellegrini","http://arxiv.org/abs/1802.03343v2, http://arxiv.org/pdf/1802.03343v2",econ.EM
30012,em,"We develop a behavioral asset pricing model in which agents trade in a market
with information friction. Profit-maximizing agents switch between trading
strategies in response to dynamic market conditions. Due to noisy private
information about the fundamental value, the agents form different evaluations
about heterogeneous strategies. We exploit a thin set, a small sub-population,
to point-identify this nonlinear model, and estimate the structural parameters
using the extended method of moments. Based on the estimated
parameters, the model produces return time series that emulate the moments of
the real data. These results are robust across different sample periods and
estimation methods.",Structural Estimation of Behavioral Heterogeneity,2018-02-11 15:48:20,"Zhentao Shi, Huanhuan Zheng","http://dx.doi.org/10.1002/jae.2640, http://arxiv.org/abs/1802.03735v2, http://arxiv.org/pdf/1802.03735v2",q-fin.TR
30013,em,"We discuss the strategy that rational agents can use to maximize their
expected long-term payoff in the co-action minority game. We argue that the
agents will try to get into a cyclic state, where each of the $(2N +1)$ agent
wins exactly $N$ times in any continuous stretch of $(2N+1)$ days. We propose
and analyse a strategy for reaching such a cyclic state quickly, when any
direct communication between agents is not allowed, and only the publicly
available common information is the record of total number of people choosing
the first restaurant in the past. We determine exactly the average time
required to reach the periodic state for this strategy. We show that it varies
as $(N/\ln 2)\,[1 + \alpha \cos (2 \pi \log_2 N)]$, for large $N$, where the
amplitude $\alpha$ of the leading term in the log-periodic oscillations is
found to be $\frac{8 \pi^2}{(\ln 2)^2} \exp{(- 2 \pi^2/\ln 2)} \approx
7 \times 10^{-11}$.",Achieving perfect coordination amongst agents in the co-action minority game,2018-02-17 15:35:12,"Hardik Rajpal, Deepak Dhar","http://dx.doi.org/10.3390/g9020027, http://arxiv.org/abs/1802.06770v2, http://arxiv.org/pdf/1802.06770v2",econ.EM
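
The quoted amplitude of the log-periodic term can be checked directly with a two-line computation of alpha = 8 pi^2/(ln 2)^2 * exp(-2 pi^2/ln 2):

import math
alpha = 8 * math.pi**2 / math.log(2)**2 * math.exp(-2 * math.pi**2 / math.log(2))
print(f"alpha = {alpha:.2e}")   # about 7e-11, matching the value quoted above
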
30015,em,"Energy policy in Europe has been driven by the three goals of security of
supply, economic competitiveness and environmental sustainability, referred to
as the energy trilemma. Although there are clear conflicts within the trilemma,
member countries have acted to facilitate a fully integrated European
electricity market. Interconnection and cross-border electricity trade has been
a fundamental part of such market liberalisation. However, it has been
suggested that consumers are exposed to a higher price volatility as a
consequence of interconnection. Furthermore, during times of energy shortages
and high demand, issues of national sovereignty take precedence over
cooperation. In this article, the unique and somewhat peculiar conditions of
early 2017 within France, Germany and the United Kingdom have been studied to
understand how the existing integration arrangements address the energy
trilemma. It is concluded that the dominant interests are economic and national
security; issues of environmental sustainability are neglected or overridden.
Although the optimisation of European electricity generation to achieve a lower
overall carbon emission is possible, such a goal is far from being realised.
Furthermore, it is apparent that the United Kingdom, and other countries,
cannot rely upon imports from other countries during periods of high demand
and/or limited supply.",The Security of the United Kingdom Electricity Imports under Conditions of High European Demand,2018-02-21 10:55:54,"Anthony D Stephens, David R Walwyn","http://arxiv.org/abs/1802.07457v1, http://arxiv.org/pdf/1802.07457v1",econ.EM
30016,em,"I present evidence that communication between marketplace participants is an
important influence on market demand. I find that consumer demand is
approximately equally influenced by communication on both formal and informal
networks, namely product reviews and community forums. In addition, I find
empirical evidence of a vendor's ability to commit to disclosure dampening the
effect of communication on demand. I also find that product demand is more
responsive to average customer sentiment as the number of messages grows, as
may be expected in a Bayesian updating framework.",Measuring the Demand Effects of Formal and Informal Communication : Evidence from Online Markets for Illicit Drugs,2018-02-24 04:33:27,Luis Armona,"http://arxiv.org/abs/1802.08778v1, http://arxiv.org/pdf/1802.08778v1",stat.AP
30017,em,"We employ stochastic dynamic microsimulations to analyse and forecast the
pension cost dependency ratio for England and Wales from 1991 to 2061,
evaluating the impact of the ongoing state pension reforms and changes in
international migration patterns under different Brexit scenarios. To fully
account for the recently observed volatility in life expectancies, we propose
a mortality rate model based on deep learning techniques, which discovers complex
patterns in the data and extrapolates trends. Our results show that the recent
reforms can effectively stave off the ""pension crisis"" and put the
system back on a sounder fiscal footing. At the same time, increasingly more workers
can expect to spend a greater share of their lifespan in retirement, despite the
eligibility age rises. The population ageing due to the observed postponement
of death until senectitude often occurs with the compression of morbidity, and
thus will not, perforce, intrinsically strain healthcare costs. To a lesser
degree, the future pension cost dependency ratio will depend on the post-Brexit
relations between the UK and the EU, with ""soft"" alignment on the free movement
lowering the relative cost of the pension system compared to the ""hard"" one. In
the long term, however, the ratio has a rising tendency.",Forecasting the impact of state pension reforms in post-Brexit England and Wales using microsimulation and deep learning,2018-02-06 18:30:17,Agnieszka Werpachowska,"http://arxiv.org/abs/1802.09427v2, http://arxiv.org/pdf/1802.09427v2",econ.EM
30018,em,"Economic data are often generated by stochastic processes that take place in
continuous time, though observations may occur only at discrete times. For
example, electricity and gas consumption take place in continuous time. Data
generated by a continuous time stochastic process are called functional data.
This paper is concerned with comparing two or more stochastic processes that
generate functional data. The data may be produced by a randomized experiment
in which there are multiple treatments. The paper presents a method for testing
the hypothesis that the same stochastic process generates all the functional
data. The test described here applies to both functional data and multiple
treatments. It is implemented as a combination of two permutation tests. This
ensures that in finite samples, the true and nominal probabilities that each
test rejects a correct null hypothesis are equal. The paper presents upper and
lower bounds on the asymptotic power of the test under alternative hypotheses.
The results of Monte Carlo experiments and an application to an experiment on
billing and pricing of natural gas illustrate the usefulness of the test.",Permutation Tests for Equality of Distributions of Functional Data,2018-03-02 13:39:58,"Federico A. Bugni, Joel L. Horowitz","http://arxiv.org/abs/1803.00798v4, http://arxiv.org/pdf/1803.00798v4",econ.EM
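
A simplified single permutation test in the spirit of the procedure above (the paper combines two permutation tests; this sketch uses only one), comparing two groups of simulated curves with an integrated two-sample Cramer-von Mises statistic. Exchangeability of the curves under the null hypothesis that both groups come from the same stochastic process is what justifies permuting the group labels; all data and tuning choices are illustrative.

import numpy as np

rng = np.random.default_rng(8)
m = 50                                            # grid points per curve
t = np.linspace(0, 1, m)

def sample_curves(n, scale):
    """i.i.d. random curves observed on a common grid."""
    return np.array([scale * np.sin(2 * np.pi * t + rng.normal()) + 0.3 * rng.normal(size=m)
                     for _ in range(n)])

A = sample_curves(60, scale=1.0)                  # treatment 1
B = sample_curves(60, scale=1.3)                  # treatment 2 (a different process)

def cvm_stat(A, B):
    """Two-sample Cramer-von Mises distance averaged over the observation grid."""
    total = 0.0
    for j in range(A.shape[1]):
        pooled = np.concatenate([A[:, j], B[:, j]])
        Fa = (A[:, j][None, :] <= pooled[:, None]).mean(axis=1)
        Fb = (B[:, j][None, :] <= pooled[:, None]).mean(axis=1)
        total += np.mean((Fa - Fb) ** 2)
    return total / A.shape[1]

obs = cvm_stat(A, B)
pooled = np.vstack([A, B])
perm = []
for _ in range(199):                              # label permutations
    idx = rng.permutation(len(pooled))
    perm.append(cvm_stat(pooled[idx[:len(A)]], pooled[idx[len(A):]]))
p_value = (1 + np.sum(np.array(perm) >= obs)) / (1 + len(perm))
print(f"observed statistic {obs:.4f}, permutation p-value {p_value:.3f}")
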
30019,em,"We revisit the results of Harvie (2000) and show how correcting for a
reporting mistake in some of the estimated parameter values leads to
significantly different conclusions, including realistic parameter values for
the Phillips curve and estimated equilibrium employment rates exhibiting on
average one tenth of the relative error of those obtained in Harvie (2000).",A comment on 'Testing Goodwin: growth cycles in ten OECD countries',2018-03-05 10:28:55,"Matheus R. Grasselli, Aditya Maheshwari","http://dx.doi.org/10.1093/cje/bex018, http://arxiv.org/abs/1803.01527v1, http://arxiv.org/pdf/1803.01527v1",econ.EM
30020,em,"We perform econometric tests on a modified Goodwin model where the capital
accumulation rate is constant but not necessarily equal to one as in the
original model (Goodwin, 1967). In addition to this modification, we find that
addressing the methodological and reporting issues in Harvie (2000) leads to
remarkably better results, with near perfect agreement between the estimates of
equilibrium employment rates and the corresponding empirical averages, as well
as significantly improved estimates of equilibrium wage shares. Despite its
simplicity and obvious limitations, the performance of the modified Goodwin
model implied by our results shows that it can be used as a starting point for
more sophisticated models for endogenous growth cycles.",Testing a Goodwin model with general capital accumulation rate,2018-03-05 10:46:47,"Matheus R. Grasselli, Aditya Maheshwari","http://arxiv.org/abs/1803.01536v1, http://arxiv.org/pdf/1803.01536v1",econ.EM
30032,em,"The EU Solvency II directive recommends insurance companies to pay more
attention to the risk management methods. The sense of risk management is the
ability to quantify risk and apply methods that reduce uncertainty. In life
insurance, the risk is a consequence of the random variable describing the life
expectancy. The article will present a proposal for stochastic mortality
modeling based on the Lee and Carter methodology. The maximum likelihood method
is often used to estimate parameters in mortality models. This method assumes
that the population is homogeneous and the number of deaths has the Poisson
distribution. The aim of this article is to change assumptions about the
distribution of the number of deaths. The results indicate that the model can
get a better match to historical data, when the number of deaths has a negative
binomial distribution.",Mortality in a heterogeneous population - Lee-Carter's methodology,2018-03-29 22:36:26,Kamil Jodź,"http://arxiv.org/abs/1803.11233v1, http://arxiv.org/pdf/1803.11233v1",econ.EM
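As background for the abstract above, the baseline Lee-Carter structure log m_{x,t} = a_x + b_x k_t is commonly estimated by an SVD of centred log mortality rates; a minimal sketch on simulated data follows. The article's modification, replacing the Poisson likelihood for death counts with a negative binomial one, is only flagged in a comment and not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)
ages, years = 40, 30
a_true = np.linspace(-6.0, -2.0, ages)               # age profile of log rates
k_true = np.linspace(1.5, -1.5, years)                # declining period index
b_true = np.full(ages, 1.0 / ages)
log_m = a_true[:, None] + np.outer(b_true, k_true) \
        + rng.normal(scale=0.02, size=(ages, years))  # simulated log mortality

a_hat = log_m.mean(axis=1)                            # a_x: average log rate by age
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()                       # normalise so sum(b_x) = 1
k_hat = s[0] * Vt[0, :] * U[:, 0].sum()               # period index k_t, sums to 0

# The article instead fits the death counts by maximum likelihood with a
# negative binomial (rather than Poisson) distribution; that step is omitted.
print("fitted k_t (first 5):", np.round(k_hat[:5], 3))
```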
30021,em,"Behavioral theories posit that investor sentiment exhibits predictive power
for stock returns, whereas there is little study have investigated the
relationship between the time horizon of the predictive effect of investor
sentiment and the firm characteristics. To this end, by using a Granger
causality analysis in the frequency domain proposed by Lemmens et al. (2008),
this paper examine whether the time horizon of the predictive effect of
investor sentiment on the U.S. returns of stocks vary with different firm
characteristics (e.g., firm size (Size), book-to-market equity (B/M) rate,
operating profitability (OP) and investment (Inv)). The empirical results
indicate that investor sentiment has a long-term (more than 12 months) or
short-term (less than 12 months) predictive effect on stock returns with
different firm characteristics. Specifically, the investor sentiment has strong
predictability in the stock returns for smaller Size stocks, lower B/M stocks
and lower OP stocks, both in the short term and long term, but only has a
short-term predictability for higher quantile ones. The investor sentiment
merely has predictability for the returns of smaller Inv stocks in the short
term, but has a strong short-term and long-term predictability for larger Inv
stocks. These results have important implications for the investors for the
planning of the short and the long run stock investment strategy.",Does the time horizon of the return predictive effect of investor sentiment vary with stock characteristics? A Granger causality analysis in the frequency domain,2018-03-08 07:19:54,"Yong Jiang, Zhongbao Zhou","http://arxiv.org/abs/1803.02962v1, http://arxiv.org/pdf/1803.02962v1",econ.EM
30022,em,"We consider a situation where the distribution of a random variable is being
estimated by the empirical distribution of noisy measurements of that variable.
This is common practice in, for example, teacher value-added models and other
fixed-effect models for panel data. We use an asymptotic embedding where the
noise shrinks with the sample size to calculate the leading bias in the
empirical distribution arising from the presence of noise. The leading bias in
the empirical quantile function is also derived. These calculations are new
in the literature, where only results on smooth functionals such as the mean
and variance have been derived. We provide both analytical and jackknife
corrections that recenter the limit distribution and yield confidence intervals
with correct coverage in large samples. Our approach can be connected to
corrections for selection bias and shrinkage estimation and is to be contrasted
with deconvolution. Simulation results confirm the much-improved sampling
behavior of the corrected estimators. An empirical illustration on
heterogeneity in deviations from the law of one price is also provided.",Inference on a Distribution from Noisy Draws,2018-03-13 21:09:12,"Koen Jochmans, Martin Weidner","http://arxiv.org/abs/1803.04991v5, http://arxiv.org/pdf/1803.04991v5",econ.EM
30023,em,"Stepwise regression building procedures are commonly used applied statistical
tools, despite their well-known drawbacks. While many of their limitations have
been widely discussed in the literature, other aspects of the use of individual
statistical fit measures, especially in high-dimensional stepwise regression
settings, have not. Giving primacy to individual fit, as is done with p-values
and $R^2$, when group fit may be the larger concern, can lead to misguided
decision making. One of the most consequential uses of stepwise regression is
in health care, where these tools allocate hundreds of billions of dollars to
health plans enrolling individuals with different predicted health care costs.
The main goal of this ""risk adjustment"" system is to convey incentives to
health plans such that they provide health care services fairly, a component of
which is not to discriminate in access or care for persons or groups likely to
be expensive. We address some specific limitations of p-values and $R^2$ for
high-dimensional stepwise regression in this policy problem through an
illustrated example by additionally considering a group-level fairness metric.",Limitations of P-Values and $R^2$ for Stepwise Regression Building: A Fairness Demonstration in Health Policy Risk Adjustment,2018-03-15 00:08:12,"Sherri Rose, Thomas G. McGuire","http://dx.doi.org/10.1080/00031305.2018.1518269, http://arxiv.org/abs/1803.05513v2, http://arxiv.org/pdf/1803.05513v2",econ.EM
30024,em,"During the last decades, public policies become a central pillar in
supporting and stabilising agricultural sector. In 1962, EU policy-makers
developed the so-called Common Agricultural Policy (CAP) to ensure
competitiveness and a common market organisation for agricultural products,
while 2003 reform decouple the CAP from the production to focus only on income
stabilization and the sustainability of agricultural sector. Notwithstanding
farmers are highly dependent to public support, literature on the role played
by the CAP in fostering agricultural performances is still scarce and
fragmented. Actual CAP policies increases performance differentials between
Northern Central EU countries and peripheral regions. This paper aims to
evaluate the effectiveness of CAP in stimulate performances by focusing on
Italian lagged Regions. Moreover, agricultural sector is deeply rooted in
place-based production processes. In this sense, economic analysis which omit
the presence of spatial dependence produce biased estimates of the
performances. Therefore, this paper, using data on subsidies and economic
results of farms from the RICA dataset which is part of the Farm Accountancy
Data Network (FADN), proposes a spatial Augmented Cobb-Douglas Production
Function to evaluate the effects of subsidies on farm's performances. The major
innovation in this paper is the implementation of a micro-founded quantile
version of a spatial lag model to examine how the impact of the subsidies may
vary across the conditional distribution of agricultural performances. Results
show an increasing shape which switch from negative to positive at the median
and becomes statistical significant for higher quantiles. Additionally, spatial
autocorrelation parameter is positive and significant across all the
conditional distribution, suggesting the presence of significant spatial
spillovers in agricultural performances.",Does agricultural subsidies foster Italian southern farms? A Spatial Quantile Regression Approach,2018-03-15 12:32:50,"Marusca De Castris, Daniele Di Gennaro","http://arxiv.org/abs/1803.05659v1, http://arxiv.org/pdf/1803.05659v1",econ.EM
30025,em,"We develop a strong diagnostic for bubbles and crashes in bitcoin, by
analyzing the coincidence (and its absence) of fundamental and technical
indicators. Using a generalized Metcalfe's law based on network properties, a
fundamental value is quantified and shown to be heavily exceeded, on at least
four occasions, by bubbles that grow and burst. In these bubbles, we detect a
universal super-exponential unsustainable growth. We model this universal
pattern with the Log-Periodic Power Law Singularity (LPPLS) model, which
parsimoniously captures diverse positive feedback phenomena, such as herding
and imitation. The LPPLS model is shown to provide an ex-ante warning of market
instabilities, quantifying a high crash hazard and probabilistic bracket of the
crash time consistent with the actual corrections; although, as always, the
precise time and trigger (which straw breaks the camel's back) remain exogenous
and unpredictable. Looking forward, our analysis identifies a substantial but
not unprecedented overvaluation in the price of bitcoin, suggesting many months
of volatile sideways bitcoin prices ahead (from the time of writing, March
2018).",Are Bitcoin Bubbles Predictable? Combining a Generalized Metcalfe's Law and the LPPLS Model,2018-03-15 12:47:25,"Spencer Wheatley, Didier Sornette, Tobias Huber, Max Reppen, Robert N. Gantner","http://arxiv.org/abs/1803.05663v1, http://arxiv.org/pdf/1803.05663v1",econ.EM
30026,em,"This paper presents an out-of-sample prediction comparison between major
machine learning models and the structural econometric model. Over the past
decade, machine learning has established itself as a powerful tool in many
prediction applications, but this approach is still not widely adopted in
empirical economic studies. To evaluate the benefits of this approach, I use
the most common machine learning algorithms, CART, C4.5, LASSO, random forest,
and adaboost, to construct prediction models for a cash transfer experiment
conducted by the Progresa program in Mexico, and I compare the prediction
results with those of a previous structural econometric study. Two prediction
tasks are performed in this paper: the out-of-sample forecast and the long-term
within-sample simulation. For the out-of-sample forecast, both the mean
absolute error and the root mean square error of the school attendance rates
found by all machine learning models are smaller than those found by the
structural model. Random forest and adaboost have the highest accuracy for the
individual outcomes of all subgroups. For the long-term within-sample
simulation, the structural model has better performance than do all of the
machine learning models. The poor within-sample fit of the machine learning
model results from the inaccuracy of the income and pregnancy prediction
models. The result shows that the machine learning model performs better than
does the structural model when there are ample data to learn from; however, when the
data are limited, the structural model offers a more sensible prediction. The
findings of this paper show promise for adopting machine learning in economic
policy analyses in the era of big data.",Evaluating Conditional Cash Transfer Policies with Machine Learning Methods,2018-03-17 00:14:02,Tzai-Shuen Chen,"http://arxiv.org/abs/1803.06401v1, http://arxiv.org/pdf/1803.06401v1",econ.EM
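The out-of-sample exercise described above can be mimicked in a few lines: train a machine learning model on one split, predict the outcome on a held-out split, and report MAE and RMSE. The simulated covariates and attendance outcome below are placeholders, not the Progresa evaluation data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=(n, 6))                       # household characteristics (placeholder)
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
attend = rng.binomial(1, p)                       # school attendance indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, attend, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict_proba(X_te)[:, 1]               # out-of-sample forecast

mae = mean_absolute_error(y_te, pred)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"out-of-sample MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```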
30027,em,"This paper provides a method to construct simultaneous confidence bands for
quantile functions and quantile effects in nonlinear network and panel models
with unobserved two-way effects, strictly exogenous covariates, and possibly
discrete outcome variables. The method is based upon projection of simultaneous
confidence bands for distribution functions constructed from fixed effects
distribution regression estimators. These fixed effects estimators are debiased
to deal with the incidental parameter problem. Under asymptotic sequences where
both dimensions of the data set grow at the same rate, the confidence bands for
the quantile functions and effects have correct joint coverage in large
samples. An empirical application to gravity models of trade illustrates the
applicability of the methods to network data.",Network and Panel Quantile Effects Via Distribution Regression,2018-03-22 01:26:15,"Victor Chernozhukov, Iván Fernández-Val, Martin Weidner","http://arxiv.org/abs/1803.08154v3, http://arxiv.org/pdf/1803.08154v3",econ.EM
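For readers unfamiliar with distribution regression, a stripped-down sketch (without the fixed effects, debiasing, or simultaneous bands of the paper) estimates F(y|x) by a sequence of binary regressions of 1{Y <= y} on X over a threshold grid and then inverts the fitted cdf to obtain conditional quantiles. Data, grid, and learner choices below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([0.5, -0.3]) + rng.normal(scale=0.8, size=n)

grid = np.quantile(y, np.linspace(0.05, 0.95, 30))   # threshold grid
models = [LogisticRegression().fit(X, (y <= t).astype(int)) for t in grid]

def conditional_cdf(x):
    """Fitted F(y_k | x) on the threshold grid for one covariate vector."""
    x = np.asarray(x).reshape(1, -1)
    F = np.array([m.predict_proba(x)[0, 1] for m in models])
    return np.maximum.accumulate(F)                  # rearrange to enforce monotonicity

def conditional_quantile(x, tau):
    """Invert the fitted cdf: smallest grid point with F(y|x) >= tau."""
    F = conditional_cdf(x)
    idx = min(np.searchsorted(F, tau), len(grid) - 1)
    return grid[idx]

print(conditional_quantile([0.0, 0.0], 0.5))         # roughly the conditional median
```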
30028,em,"We define a class of pure exchange Edgeworth trading processes that under
minimal assumptions converge to a stable set in the space of allocations, and
characterise the Pareto set of these processes. Choosing a specific process
belonging to this class, that we define fair trading, we analyse the trade
dynamics between agents located on a weighted network. We determine the
conditions under which there always exists a one-to-one map between the set of
networks and the set of limit points of the dynamics. This result is used to
understand what is the effect of the network topology on the trade dynamics and
on the final allocation. We find that the positions in the network affect the
distribution of the utility gains, given the initial allocations.",Decentralized Pure Exchange Processes on Networks,2018-03-23 18:24:34,"Daniele Cassese, Paolo Pin","http://arxiv.org/abs/1803.08836v7, http://arxiv.org/pdf/1803.08836v7",econ.EM
30029,em,"Although initially originated as a totally empirical relationship to explain
the volume of trade between two partners, gravity equation has been the focus
of several theoretic models that try to explain it. Specialization models are
of great importance in providing a solid theoretic ground for gravity equation
in bilateral trade. Some research papers try to improve specialization models
by adding imperfect specialization to model, but we believe it is unnecessary
complication. We provide a perfect specialization model based on the phenomenon
we call tradability, which overcomes the problems with simpler initial. We
provide empirical evidence using estimates on panel data of bilateral trade of
40 countries over 10 years that support the theoretical model. The empirical
results have implied that tradability is the only reason for deviations of data
from basic perfect specialization models.",A Perfect Specialization Model for Gravity Equation in Bilateral Trade based on Production Structure,2018-03-27 10:31:49,"Majid Einian, Farshad Ranjbar Ravasan","http://dx.doi.org/10.22108/IES.2017.79036, http://arxiv.org/abs/1803.09935v1, http://arxiv.org/pdf/1803.09935v1",econ.EM
30030,em,"We develop a novel continuous-time asymptotic framework for inference on
whether the predictive ability of a given forecast model remains stable over
time. We formally define forecast instability from the economic forecaster's
perspective and highlight that the time duration of the instability bears no
relationship with the stable period. Our approach is applicable in forecasting
environments involving low-frequency as well as high-frequency macroeconomic and
financial variables. As the sampling interval between observations shrinks to
zero, the sequence of forecast losses is approximated by a continuous-time
stochastic process (i.e., an Ito semimartingale) possessing certain pathwise
properties. We build a hypothesis testing problem based on the local
properties of the continuous-time limit counterpart of the sequence of losses.
The null distribution follows an extreme value distribution. While controlling
the statistical size well, our class of test statistics features uniform power
over the location of the forecast failure in the sample. The test statistics
are designed to have power against general forms of instability and are robust
to common forms of non-stationarity such as heteroskedasticity and serial
correlation. The gains in power are substantial relative to extant methods,
especially when the instability is short-lasting and when it occurs toward the
tail of the sample.",Tests for Forecast Instability and Forecast Failure under a Continuous Record Asymptotic Framework,2018-03-29 03:11:07,Alessandro Casini,"http://arxiv.org/abs/1803.10883v2, http://arxiv.org/pdf/1803.10883v2",econ.EM
30031,em,"The paper aims to explore the impacts of bi-demographic structure on the
current account and growth. Using SVAR modeling, we track the dynamic impacts
between these underlying variables. New insights are developed about the
dynamic interrelation between population growth, the current account and economic
growth. The long-run net impact on economic growth of domestic working-population
growth and of labor demand for emigrants is positive, due to the
predominant contribution of skilled emigrant workers. Moreover, the positive
long-run contribution of emigrant workers to current account growth largely
compensates for the negative contribution from the native population, because of
the predominance of skilled over unskilled workers. We find that a
positive shock to labor demand for emigrant workers increases the
native active-age ratio. Thus, the emigrants appear to be more
complements than substitutes for native workers.",Bi-Demographic Changes and Current Account using SVAR Modeling,2018-03-29 20:07:29,"Hassan B. Ghassan, Hassan R. Al-Hajhoj, Faruk Balli","http://arxiv.org/abs/1803.11161v4, http://arxiv.org/pdf/1803.11161v4",econ.EM
30033,em,"This paper provides an entire inference procedure for the autoregressive
model under (conditional) heteroscedasticity of unknown form with a finite
variance. We first establish the asymptotic normality of the weighted least
absolute deviations estimator (LADE) for the model. Second, we develop the
random weighting (RW) method to estimate its asymptotic covariance matrix,
leading to the implementation of the Wald test. Third, we construct a
portmanteau test for model checking, and use the RW method to obtain its
critical values. As a special weighted LADE, the feasible adaptive LADE (ALADE)
is proposed and proved to have the same efficiency as its infeasible
counterpart. The importance of our entire methodology based on the feasible
ALADE is illustrated by simulation results and the real data analysis on three
U.S. economic data sets.",Statistical inference for autoregressive models under heteroscedasticity of unknown form,2018-04-06 19:33:08,Ke Zhu,"http://arxiv.org/abs/1804.02348v2, http://arxiv.org/pdf/1804.02348v2",stat.ME
30034,em,"This paper explores the effects of simulated moments on the performance of
inference methods based on moment inequalities. Commonly used confidence sets
for parameters are level sets of criterion functions whose boundary points may
depend on sample moments in an irregular manner. Due to this feature,
simulation errors can affect the performance of inference in non-standard ways.
In particular, a (first-order) bias due to the simulation errors may remain in
the estimated boundary of the confidence set. We demonstrate, through Monte
Carlo experiments, that simulation errors can significantly reduce the coverage
probabilities of confidence sets in small samples. The size distortion is
particularly severe when the number of inequality restrictions is large. These
results highlight the danger of ignoring the sampling variations due to the
simulation errors in moment inequality models. Similar issues arise when using
predicted variables in moment inequality models. We propose a method for
properly correcting for these variations based on regularizing the intersection
of moments in parameter space, and we show that our proposed method performs
well theoretically and in practice.",Moment Inequalities in the Context of Simulated and Predicted Variables,2018-04-10 21:15:33,"Hiroaki Kaido, Jiaxuan Li, Marc Rysman","http://arxiv.org/abs/1804.03674v1, http://arxiv.org/pdf/1804.03674v1",econ.EM
30035,em,"There has been considerable interest in the electrification of freight
transport, particularly heavy-duty trucks, to scale down the greenhouse-gas (GHG)
emissions from the transportation sector. However, the economic competitiveness
of electric semi-trucks is uncertain as there are substantial additional
initial costs associated with the large battery packs required. In this work,
we analyze the trade-off between the initial investment and the operating cost
for realistic usage scenarios to compare a fleet of electric semi-trucks with a
range of 500 miles with a fleet of diesel trucks. For the baseline case with
30% of fleet requiring battery pack replacements and a price differential of
US\$50,000, we find a payback period of about 3 years. Based on sensitivity
analysis, we find that the fraction of the fleet that requires battery pack
replacements is a major factor. For the case with 100% replacement fraction,
the payback period could be as high as 5-6 years. We identify the price of
electricity as the second most important variable: at a price of
US$0.14/kWh, the payback period could go up to 5 years. Electric semi-trucks
are expected to lead to savings due to reduced repairs, and the magnitude of these
savings could play a crucial role in the payback period as well. With increased
penetration of autonomous vehicles, the annual mileage of semi-trucks could
substantially increase and this heavily sways in favor of electric semi-trucks,
bringing down the payback period to around 2 years at an annual mileage of
120,000 miles. There is an undeniable economic case for electric semi-trucks
and developing battery packs with longer cycle life and higher specific energy
would make this case even stronger.",Quantifying the Economic Case for Electric Semi-Trucks,2018-04-17 02:14:57,"Shashank Sripad, Venkatasubramanian Viswanathan","http://dx.doi.org/10.1021/acsenergylett.8b02146, http://arxiv.org/abs/1804.05974v1, http://arxiv.org/pdf/1804.05974v1",econ.EM
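A back-of-the-envelope version of the trade-off discussed above divides the upfront price differential by the annual operating savings. Apart from the US$50,000 differential quoted in the abstract, all inputs below are illustrative assumptions rather than the paper's values.

```python
# Simple payback calculation (all figures except the price differential are assumptions).
price_differential = 50_000          # extra upfront cost of the electric truck (US$)
annual_miles = 60_000                # assumed annual mileage
diesel_cost_per_mile = 0.55          # assumed fuel + maintenance cost, diesel (US$/mile)
electric_cost_per_mile = 0.28        # assumed electricity + maintenance cost (US$/mile)

annual_savings = annual_miles * (diesel_cost_per_mile - electric_cost_per_mile)
payback_years = price_differential / annual_savings
print(f"annual operating savings: ${annual_savings:,.0f}")
print(f"simple payback period:    {payback_years:.1f} years")
```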
30036,em,"We present a detailed bubble analysis of the Bitcoin to US Dollar price
dynamics from January 2012 to February 2018. We introduce a robust automatic
peak detection method that classifies price time series into periods of
uninterrupted market growth (drawups) and regimes of uninterrupted market
decrease (drawdowns). In combination with the Lagrange Regularisation Method
for detecting the beginning of a new market regime, we identify 3 major peaks
and 10 additional smaller peaks, that have punctuated the dynamics of Bitcoin
price during the analyzed time period. We explain this classification of long
and short bubbles by a number of quantitative metrics and graphs to understand
the main socio-economic drivers behind the ascent of Bitcoin over this period.
Then, a detailed analysis of the growing risks associated with the three long
bubbles is performed using the Log-Periodic Power Law Singularity (LPPLS) model, based on
the LPPLS Confidence Indicators, defined as the fraction of qualified fits of
the LPPLS model over multiple time windows. Furthermore, for various fictitious
'present' times $t_2$ before the crashes, we employ a clustering method to
group the predicted critical times $t_c$ of the LPPLS fits over different time
scales, where $t_c$ is the most probable time for the ending of the bubble.
Each cluster is proposed as a plausible scenario for the subsequent Bitcoin
price evolution. We present these predictions for the three long bubbles and
the four short bubbles that our time scale of analysis was able to resolve.
Overall, our predictive scheme provides useful information to warn of an
imminent crash risk.",Dissection of Bitcoin's Multiscale Bubble History from January 2012 to February 2018,2018-04-17 16:56:02,"Jan-Christian Gerlach, Guilherme Demos, Didier Sornette","http://arxiv.org/abs/1804.06261v4, http://arxiv.org/pdf/1804.06261v4",econ.EM
30037,em,"Wholesale electricity markets are increasingly integrated via high voltage
interconnectors, and inter-regional trade in electricity is growing. To model
this, we consider a spatial equilibrium model of price formation, where
constraints on inter-regional flows result in three distinct equilibria in
prices. We use this to motivate an econometric model for the distribution of
observed electricity spot prices that captures many of their unique empirical
characteristics. The econometric model features supply and inter-regional trade
cost functions, which are estimated using Bayesian monotonic regression
smoothing methodology. A copula multivariate time series model is employed to
capture additional dependence -- both cross-sectional and serial -- in regional
prices. The marginal distributions are nonparametric, with means given by the
regression means. The model has the advantage of preserving the heavy
right-hand tail in the predictive densities of price. We fit the model to
half-hourly spot price data in the five interconnected regions of the
Australian national electricity market. The fitted model is then used to
measure how both supply and price shocks in one region are transmitted to the
distribution of prices in all regions in subsequent periods. Finally, to
validate our econometric model, we show that prices forecast using the proposed
model compare favorably with those from some benchmark alternatives.",Econometric Modeling of Regional Electricity Spot Prices in the Australian Market,2018-04-23 04:52:35,"Michael Stanley Smith, Thomas S. Shively","http://arxiv.org/abs/1804.08218v1, http://arxiv.org/pdf/1804.08218v1",econ.EM
30038,em,"This paper proposes some novel one-sided omnibus tests for independence
between two multivariate stationary time series. These new tests apply the
Hilbert-Schmidt independence criterion (HSIC) to test the independence between
the innovations of both time series. Under regular conditions, the limiting
null distributions of our HSIC-based tests are established. Next, our
HSIC-based tests are shown to be consistent. Moreover, a residual bootstrap
method is used to obtain the critical values for our HSIC-based tests, and its
validity is justified. Compared with the existing cross-correlation-based tests
for linear dependence, our tests examine the general (including both linear and
non-linear) dependence to give investigators more complete information on the
causal relationship between two multivariate time series. The merits of our
tests are illustrated by some simulation results and a real example.",New HSIC-based tests for independence between two stationary multivariate time series,2018-04-26 05:57:00,"Guochang Wang, Wai Keung Li, Ke Zhu","http://arxiv.org/abs/1804.09866v1, http://arxiv.org/pdf/1804.09866v1",stat.ME
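For reference, the plug-in HSIC statistic with Gaussian kernels is trace(KHLH)/(n-1)^2, where K and L are kernel Gram matrices and H is the centring matrix. The sketch below computes it on simulated data; the bandwidths are arbitrary choices, and the residual-bootstrap calibration used in the paper is omitted.

```python
import numpy as np

def gaussian_gram(Z, sigma):
    # Gaussian kernel Gram matrix
    sq = np.sum(Z**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    n = X.shape[0]
    K = gaussian_gram(X, sigma_x)
    L = gaussian_gram(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n               # centring matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(5)
x = rng.normal(size=(300, 2))
y_indep = rng.normal(size=(300, 2))
y_dep = x + 0.3 * rng.normal(size=(300, 2))           # dependent series
print(f"HSIC under independence: {hsic(x, y_indep):.5f}")
print(f"HSIC under dependence:   {hsic(x, y_dep):.5f}")
```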
30039,em,"How should one assess the credibility of assumptions weaker than statistical
independence, like quantile independence? In the context of identifying causal
effects of a treatment variable, we argue that such deviations should be chosen
based on the form of selection on unobservables they allow. For quantile
independence, we characterize this form of treatment selection. Specifically,
we show that quantile independence is equivalent to a constraint on the average
value of either a latent propensity score (for a binary treatment) or the cdf
of treatment given the unobservables (for a continuous treatment). In both
cases, this average value constraint requires a kind of non-monotonic treatment
selection. Using these results, we show that several common treatment selection
models are incompatible with quantile independence. We introduce a class of
assumptions which weakens quantile independence by removing the average value
constraint, and therefore allows for monotonic treatment selection. In a
potential outcomes model with a binary treatment, we derive identified sets for
the ATT and QTT under both classes of assumptions. In a numerical example we
show that the average value constraint inherent in quantile independence has
substantial identifying power. Our results suggest that researchers should
carefully consider the credibility of this non-monotonicity property when using
quantile independence to weaken full independence.",Interpreting Quantile Independence,2018-04-29 19:09:40,"Matthew A. Masten, Alexandre Poirier","http://arxiv.org/abs/1804.10957v1, http://arxiv.org/pdf/1804.10957v1",econ.EM
30040,em,"Due to economic globalization, each country's economic law, including tax
laws and tax treaties, has been forced to work as a single network. However,
each jurisdiction (country or region) has not made its economic law under the
assumption that its law functions as an element of one network, so it has
brought unexpected results. We thought that the results are exactly
international tax avoidance. To contribute to the solution of international tax
avoidance, we tried to investigate which part of the network is vulnerable.
Specifically, focusing on treaty shopping, which is one of international tax
avoidance methods, we attempt to identified which jurisdiction are likely to be
used for treaty shopping from tax liabilities and the relationship between
jurisdictions which are likely to be used for treaty shopping and others. For
that purpose, based on withholding tax rates imposed on dividends, interest,
and royalties by jurisdictions, we produced weighted multiple directed graphs,
computed the centralities and detected the communities. As a result, we
clarified the jurisdictions that are likely to be used for treaty shopping and
pointed out that there are community structures. The results of this study
suggested that fewer jurisdictions need to introduce more regulations for
prevention of treaty abuse worldwide.",Identification of Conduit Countries and Community Structures in the Withholding Tax Networks,2018-06-03 17:13:16,"Tembo Nakamoto, Yuichi Ikeda","http://arxiv.org/abs/1806.00799v1, http://arxiv.org/pdf/1806.00799v1",econ.EM
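The network step described above, building a weighted directed graph from withholding-tax-based weights, computing centralities, and detecting communities, can be sketched with networkx. The jurisdiction labels, the weights, and the specific centrality (PageRank) and community algorithm below are illustrative choices, not the paper's data or exact measures.

```python
import networkx as nx

edges = [  # (source, target, weight) -- invented placeholders, not the paper's data
    ("A", "B", 0.9), ("B", "C", 0.8), ("C", "A", 0.4),
    ("D", "E", 0.7), ("E", "F", 0.9), ("F", "D", 0.6), ("C", "D", 0.1),
]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

centrality = nx.pagerank(G, weight="weight")      # one possible centrality measure
communities = nx.algorithms.community.greedy_modularity_communities(
    G.to_undirected(), weight="weight"
)
print("centrality:", {k: round(v, 3) for k, v in centrality.items()})
print("communities:", [sorted(c) for c in communities])
```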
30041,em,"In the past twenty years the number of elderly drivers has increased for two
reasons. One is the higher proportion of elderly in the population, and the
other is the rise in the share of the elderly who drive. This paper examines
the features of their driving and the level of their awareness of problems
relating to it, by analysing a preference survey that included interviews with 205
drivers aged between 70 and 80. The interviewees exhibited a level of optimism
and self-confidence in their driving that is out of line with the real
situation. There is also a discrepancy between how their driving is viewed by
others and their own assessment, and between their self assessment and their
assessment of the driving of other elderly drivers, which they rate lower than
their own. they attributed great importance to safety feature in cars, although
they did not think that they themselves needed them, and most elderly drivers
did not think there was any reason that they should stop driving, despite
suggestions from family members and others that they should do so. A declared
preference survey was undertaken to assess the degree of difficulty elderly
drivers attribute to driving conditions. It was found that they are concerned
mainly about weather conditions, driving at night, and long journeys. Worry
about night driving was most marked among women, the oldest drivers, and those
who drove less frequently. In light of the findings, imposing greater
responsibility on the health system should be considered. Consideration should
also be given to issuing partial licenses to the elderly for daytime driving
only, or restricted to certain weather conditions, dependent on their medical
condition. Such flexibility will enable the elderly to maintain their life
style and independence for a longer period on the one hand, and on the other,
will minimize the risks to themselves and others.",Driving by the Elderly and their Awareness of their Driving Difficulties (Hebrew),2018-06-04 22:32:26,Idit Sohlberg,"http://arxiv.org/abs/1806.03254v1, http://arxiv.org/pdf/1806.03254v1",cs.CY
30042,em,"We introduce the Pricing Engine package to enable the use of Double ML
estimation techniques in general panel data settings. Customization allows the
user to specify first-stage models, first-stage featurization, second stage
treatment selection and second stage causal-modeling. We also introduce a
DynamicDML class that allows the user to generate dynamic treatment-aware
forecasts at a range of leads and to understand how the forecasts will vary as
a function of causally estimated treatment parameters. The Pricing Engine is
built on Python 3.5 and can be run on an Azure ML Workbench environment with
the addition of only a few Python packages. This note provides high-level
discussion of the Double ML method, describes the package's intended use and
includes an example Jupyter notebook demonstrating application to some publicly
available data. Installation of the package and additional technical
documentation is available at
$\href{https://github.com/bquistorff/pricingengine}{github.com/bquistorff/pricingengine}$.",Pricing Engine: Estimating Causal Impacts in Real World Business Settings,2018-06-08 20:31:25,"Matt Goldman, Brian Quistorff","http://arxiv.org/abs/1806.03285v2, http://arxiv.org/pdf/1806.03285v2",econ.EM
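Independently of the package's own API (documented at the link above), the underlying double/debiased ML idea can be sketched generically: residualise the outcome and the treatment on controls with ML learners using cross-fitting, then regress the outcome residuals on the treatment residuals. Everything below, the learners, the simulated data, and the variable names, is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 5))                       # controls
d = 0.5 * X[:, 0] + rng.normal(size=n)            # treatment (e.g., price)
y = 1.5 * d + np.sin(X[:, 0]) + rng.normal(size=n)

res_y, res_d = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    my = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], y[train])
    md = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], d[train])
    res_y[test] = y[test] - my.predict(X[test])   # cross-fitted outcome residuals
    res_d[test] = d[test] - md.predict(X[test])   # cross-fitted treatment residuals

theta = (res_d @ res_y) / (res_d @ res_d)         # final-stage regression without intercept
print(f"estimated treatment effect: {theta:.3f}  (true value 1.5)")
```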
34495,th,"We design two mechanisms that ensure that the majority preferred option wins
in all equilibria. The first one is a simultaneous game where agents choose
other agents to cooperate with on top of the vote for an alternative, thus
overcoming recent impossibility results concerning the implementation of
majority rule. The second one adds sequential ratification to the standard
majority voting procedure allowing to reach the (correct) outcome in
significantly fewer steps than the widely used roll call voting. Both
mechanisms use off-equilibrium lotteries to incentivize truthful voting. We
discuss different extensions, including the possibility for agents to abstain.",Legitimacy of collective decisions: a mechanism design approach,2023-02-19 14:58:11,"Kirneva Margarita, Núñez Matías","http://arxiv.org/abs/2302.09548v3, http://arxiv.org/pdf/2302.09548v3",econ.TH
30043,em,"We propose a procedure to determine the dimension of the common factor space
in a large, possibly non-stationary, dataset. Our procedure is designed to
determine whether there are (and how many) common factors (i) with linear
trends, (ii) with stochastic trends, (iii) with no trends, i.e. stationary. Our
analysis is based on the fact that the largest eigenvalues of a suitably scaled
covariance matrix of the data (corresponding to the common factor part)
diverge, as the dimension $N$ of the dataset diverges, whilst the others stay
bounded. Therefore, we propose a class of randomised test statistics for the
null that the $p$-th eigenvalue diverges, based directly on the estimated
eigenvalue. The tests only require minimal assumptions on the data, and no
restrictions on the relative rates of divergence of $N$ and $T$ are imposed.
Monte Carlo evidence shows that our procedure has very good finite sample
properties, clearly dominating competing approaches when no common factors are
present. We illustrate our methodology through an application to US bond yields
with different maturities observed over the last 30 years. A common linear
trend and two common stochastic trends are found and identified as the
classical level, slope and curvature factors.",Determining the dimension of factor structures in non-stationary large datasets,2018-06-10 15:23:15,"Matteo Barigozzi, Lorenzo Trapani","http://arxiv.org/abs/1806.03647v1, http://arxiv.org/pdf/1806.03647v1",stat.ME
30044,em,"This paper studies inference in randomized controlled trials with
covariate-adaptive randomization when there are multiple treatments. More
specifically, we study inference about the average effect of one or more
treatments relative to other treatments or a control. As in Bugni et al.
(2018), covariate-adaptive randomization refers to randomization schemes that
first stratify according to baseline covariates and then assign treatment
status so as to achieve balance within each stratum. In contrast to Bugni et
al. (2018), we not only allow for multiple treatments, but further allow for
the proportion of units being assigned to each of the treatments to vary across
strata. We first study the properties of estimators derived from a fully
saturated linear regression, i.e., a linear regression of the outcome on all
interactions between indicators for each of the treatments and indicators for
each of the strata. We show that tests based on these estimators using the
usual heteroskedasticity-consistent estimator of the asymptotic variance are
invalid; on the other hand, tests based on these estimators and suitable
estimators of the asymptotic variance that we provide are exact. For the
special case in which the target proportion of units being assigned to each of
the treatments does not vary across strata, we additionally consider tests
based on estimators derived from a linear regression with strata fixed effects,
i.e., a linear regression of the outcome on indicators for each of the
treatments and indicators for each of the strata. We show that tests based on
these estimators using the usual heteroskedasticity-consistent estimator of the
asymptotic variance are conservative, but tests based on these estimators and
suitable estimators of the asymptotic variance that we provide are exact. A
simulation study illustrates the practical relevance of our theoretical
results.",Inference under Covariate-Adaptive Randomization with Multiple Treatments,2018-06-11 22:22:10,"Federico A. Bugni, Ivan A. Canay, Azeem M. Shaikh","http://arxiv.org/abs/1806.04206v3, http://arxiv.org/pdf/1806.04206v3",econ.EM
30045,em,"Considered an important macroeconomic indicator, the Purchasing Managers'
Index (PMI) on Manufacturing generally assumes that PMI announcements will
produce an impact on stock markets. International experience suggests that
stock markets react to negative PMI news. In this research, we empirically
investigate the stock market reaction towards PMI in China. The asymmetric
effects of PMI announcements on the stock market are observed: no market
reaction is generated towards negative PMI announcements, while a positive
reaction is generally generated for positive PMI news. We further find that the
positive reaction towards the positive PMI news occurs 1 day before the
announcement and lasts for nearly 3 days, and the positive reaction is observed
in the context of expanding economic conditions. By contrast, the negative
reaction towards negative PMI news is prevalent during downward economic
conditions for stocks with low market value, low institutional shareholding
ratios or high price earnings. Our study implies that China's stock market
favors risk to a certain extent given the vast number of individual investors
in the country, and there may exist information leakage in the market.",Asymmetric response to PMI announcements in China's stock returns,2018-06-12 09:15:35,"Yingli Wang, Xiaoguang Yang","http://arxiv.org/abs/1806.04347v1, http://arxiv.org/pdf/1806.04347v1",q-fin.ST
30046,em,"A measure of relative importance of variables is often desired by researchers
when the explanatory aspects of econometric methods are of interest. To this
end, the author briefly reviews the limitations of conventional econometrics in
constructing a reliable measure of variable importance. The author highlights
the relative stature of explanatory and predictive analysis in economics and
the emergence of fruitful collaborations between econometrics and computer
science. Learning lessons from both, the author proposes a hybrid approach
based on conventional econometrics and advanced machine learning (ML)
algorithms, which are otherwise used in predictive analytics. The purpose of
this article is two-fold: to propose a hybrid approach to assess relative
importance and demonstrate its applicability in addressing policy priority
issues with an example of food inflation in India, followed by a broader aim to
introduce the possibility of conflation of ML and conventional econometrics to
an audience of researchers in economics and social sciences, in general.",A hybrid econometric-machine learning approach for relative importance analysis: Prioritizing food policy,2018-06-09 13:17:58,Akash Malhotra,"http://arxiv.org/abs/1806.04517v3, http://arxiv.org/pdf/1806.04517v3",econ.EM
30047,em,"We consider the estimation and inference in a system of high-dimensional
regression equations allowing for temporal and cross-sectional dependency in
covariates and error processes, covering rather general forms of weak temporal
dependence. A sequence of regressions with many regressors using LASSO (Least
Absolute Shrinkage and Selection Operator) is applied for variable selection
purpose, and an overall penalty level is carefully chosen by a block multiplier
bootstrap procedure to account for multiplicity of the equations and
dependencies in the data. Correspondingly, oracle properties with a jointly
selected tuning parameter are derived. We further provide high-quality
de-biased simultaneous inference on the many target parameters of the system.
We provide bootstrap consistency results of the test procedure, which are based
on a general Bahadur representation for the $Z$-estimators with dependent data.
Simulations demonstrate good performance of the proposed inference procedure.
Finally, we apply the method to quantify spillover effects of textual sentiment
indices in a financial market and to test the connectedness among sectors.",LASSO-Driven Inference in Time and Space,2018-06-13 17:24:06,"Victor Chernozhukov, Wolfgang K. Härdle, Chen Huang, Weining Wang","http://arxiv.org/abs/1806.05081v4, http://arxiv.org/pdf/1806.05081v4",econ.EM
30059,em,"Structural estimation is an important methodology in empirical economics, and
a large class of structural models are estimated through the generalized method
of moments (GMM). Traditionally, selection of structural models has been
performed based on model fit upon estimation, which takes the entire observed
sample. In this paper, we propose a model selection procedure based on
cross-validation (CV), which utilizes a sample-splitting technique to avoid
issues such as over-fitting. While CV is widely used in machine learning
communities, we are the first to prove its consistency for model selection in
the GMM framework. Its empirical properties are compared to existing methods by
simulations of IV regressions and an oligopoly market model. In addition, we
propose a way to apply our method to the Mathematical Programming with Equilibrium
Constraints (MPEC) approach. Finally, we apply our method to online-retail
sales data to compare a dynamic market model with a static model.",Cross Validation Based Model Selection via Generalized Method of Moments,2018-07-18 18:27:00,"Junpei Komiyama, Hajime Shimao","http://arxiv.org/abs/1807.06993v1, http://arxiv.org/pdf/1807.06993v1",econ.EM
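A stylised version of cross-validated model selection for GMM in a linear IV setting: estimate each candidate specification by 2SLS on the training folds and score it by a quadratic form in the held-out sample moments. The data generating process, the candidate models, and the unweighted moment criterion are illustrative simplifications, not the procedure whose consistency is proved in the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
n = 3000
z = rng.normal(size=(n, 2))                          # excluded instruments
u = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)
y = 2.0 * x + u                                      # true model: y = 2 x + u
Z = np.column_stack([np.ones(n), z])                 # instrument matrix incl. constant

def two_sls(y, X, Z):
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)           # projection on instruments
    return np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)

def oos_moment_score(y, X, Z, beta):
    g = Z.T @ (y - X @ beta) / len(y)                # held-out sample moments
    return float(g @ g)

candidates = {
    "y on [1, x]": np.column_stack([np.ones(n), x]),
    "y on [1] only": np.ones((n, 1)),
}
for name, X in candidates.items():
    scores = []
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        beta = two_sls(y[tr], X[tr], Z[tr])
        scores.append(oos_moment_score(y[te], X[te], Z[te], beta))
    print(f"{name}: mean out-of-fold moment criterion = {np.mean(scores):.4f}")
```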
30048,em,"This paper proposes an adaptive randomization procedure for two-stage
randomized controlled trials. The method uses data from a first-wave experiment
in order to determine how to stratify in a second wave of the experiment, where
the objective is to minimize the variance of an estimator for the average
treatment effect (ATE). We consider selection from a class of stratified
randomization procedures which we call stratification trees: these are
procedures whose strata can be represented as decision trees, with differing
treatment assignment probabilities across strata. By using the first wave to
estimate a stratification tree, we simultaneously select which covariates to
use for stratification, how to stratify over these covariates, as well as the
assignment probabilities within these strata. Our main result shows that using
this randomization procedure with an appropriate estimator results in an
asymptotic variance which is minimal in the class of stratification trees.
Moreover, the results we present are able to accommodate a large class of
assignment mechanisms within strata, including stratified block randomization.
In a simulation study, we find that our method, paired with an appropriate
cross-validation procedure, can improve on ad-hoc choices of stratification. We
conclude by applying our method to the study in Karlan and Wood (2017), where
we estimate stratification trees using the first wave of their experiment.",Stratification Trees for Adaptive Randomization in Randomized Controlled Trials,2018-06-13 19:03:00,Max Tabord-Meehan,"http://arxiv.org/abs/1806.05127v7, http://arxiv.org/pdf/1806.05127v7",econ.EM
30049,em,"In this study, an optimization problem is proposed in order to obtain the
maximum economic benefit from wind farms with variable and intermittent energy
generation in the day-ahead and balancing electricity markets. This method,
which is based on the use of pumped-hydro energy storage unit and wind farm
together, increases the profit from the power plant by taking advantage of the
price changes in the markets and at the same time supports the power system by
supplying a portion of the peak load demand in the system to which the plant is
connected. With the objective of examining the effectiveness of the proposed
method, detailed simulation studies are carried out by making use of actual
wind and price data, and the results are compared to those obtained for the
various cases in which the storage unit is not available and/or the proposed
price-based energy management method is not applied. As a consequence, it is
demonstrated that the pumped-hydro energy storage units are the storage systems
capable of being used effectively for high-power levels and that the proposed
optimization problem is quite successful in the cost-effective implementation
of these systems.",A Profit Optimization Approach Based on the Use of Pumped-Hydro Energy Storage Unit and Dynamic Pricing,2018-06-08 00:10:26,"Akın Taşcikaraoğlu, Ozan Erdinç","http://arxiv.org/abs/1806.05211v1, http://arxiv.org/pdf/1806.05211v1",econ.EM
30050,em,"We propose an asymptotic theory for distribution forecasting from the log
normal chain-ladder model. The theory overcomes the difficulty of convoluting
log normal variables and takes estimation error into account. The results
differ from that of the over-dispersed Poisson model and from the chain-ladder
based bootstrap. We embed the log normal chain-ladder model in a class of
infinitely divisible distributions called the generalized log normal
chain-ladder model. The asymptotic theory uses small $\sigma$ asymptotics where
the dimension of the reserving triangle is kept fixed while the standard
deviation is assumed to decrease. The resulting asymptotic forecast
distributions follow t distributions. The theory is supported by simulations
and an empirical application.",Generalized Log-Normal Chain-Ladder,2018-06-15 16:04:30,"D. Kuang, B. Nielsen","http://arxiv.org/abs/1806.05939v1, http://arxiv.org/pdf/1806.05939v1",stat.ME
30051,em,"Geography, including climatic factors, have long been considered potentially
important elements in shaping socio-economic activities, alongside other
determinants, such as institutions. Here we demonstrate that geography and
climate satisfactorily explain worldwide economic activity as measured by the
per capita Gross Cell Product (GCP-PC) at a fine geographical resolution,
typically much higher than country average. A 1{\deg} by 1{\deg} GCP-PC dataset
has been key for establishing and testing a direct relationship between 'local'
geography/climate and GCP-PC. Not only have we tested the geography/climate
hypothesis using many possible explanatory variables, importantly we have also
predicted and reconstructed GCP-PC worldwide by retaining the most significant
predictors. While this study confirms that latitude is the most important
predictor for GCP-PC when taken in isolation, the accuracy of the GCP-PC
prediction is greatly improved when other factors mainly related to variations
in climatic variables, such as the variability in air pressure, rather than
average climatic conditions as typically used, are considered. Implications of
these findings include an improved understanding of why economically better-off
societies are geographically placed where they are.",Effect of Climate and Geography on worldwide fine resolution economic activity,2018-06-17 13:19:37,Alberto Troccoli,"http://arxiv.org/abs/1806.06358v2, http://arxiv.org/pdf/1806.06358v2",econ.EM
30052,em,"We investigate the relationships of the VIX with US and BRIC markets. In
detail, we pick up the analysis from the point left off by (Sarwar, 2012), and
we focus on the period: Jan 2007 - Feb 2018, thus capturing the relations
before, during and after the 2008 financial crisis. Results pinpoint frequent
structural breaks in the VIX and suggest an enhancement around 2008 of the fear
transmission in response to negative market moves; largely depending on
overlaps in trading hours, this has become even stronger post-crisis for the
US, while for BRIC countries it has gone back towards pre-crisis levels.",Is VIX still the investor fear gauge? Evidence for the US and BRIC markets,2018-06-20 08:15:43,"Marco Neffelli, Marina Resta","http://arxiv.org/abs/1806.07556v2, http://arxiv.org/pdf/1806.07556v2",q-fin.GN
30053,em,"Economists specify high-dimensional models to address heterogeneity in
empirical studies with complex big data. Estimation of these models calls for
optimization techniques to handle a large number of parameters. Convex problems
can be effectively executed in modern statistical programming languages. We
complement Koenker and Mizera (2014)'s work on numerical implementation of
convex optimization, with focus on high-dimensional econometric estimators.
Combining R and the convex solver MOSEK achieves faster speed and equivalent
accuracy, demonstrated by examples from Su, Shi, and Phillips (2016) and Shi
(2016). Robust performance of convex optimization is witnessed across platforms.
The convenience and reliability of convex optimization in R make it easy to
turn new ideas into prototypes.",Implementing Convex Optimization in R: Two Econometric Examples,2018-06-27 14:50:26,"Zhan Gao, Zhentao Shi","http://arxiv.org/abs/1806.10423v2, http://arxiv.org/pdf/1806.10423v2",stat.CO
30054,em,"We study the optimal referral strategy of a seller and its relationship with
the type of communication channels among consumers. The seller faces a
partially uninformed population of consumers, interconnected through a directed
social network. In the network, the seller offers rewards to informed consumers
(influencers) conditional on inducing purchases by uninformed consumers
(influenced). Rewards are needed to bear a communication cost and to induce
word-of-mouth (WOM) either privately (cost-per-contact) or publicly (fixed cost
to inform all friends). From the seller's viewpoint, eliciting Private WOM is
more costly than eliciting Public WOM. We investigate (i) the incentives for
the seller to move to a denser network, inducing either Private or Public WOM
and (ii) the optimal mix between the two types of communication. A denser
network is found to be always better, not only for information diffusion but
also for seller's profits, as long as Private WOM is concerned. Differently,
under Public WOM, the seller may prefer an environment with less competition
between informed consumers and the presence of highly connected influencers
(hubs) is the main driver to make network density beneficial to profits. When
the seller is able to discriminate between Private and Public WOM, the optimal
strategy is to cheaply incentivize the more connected people to pass on the
information publicly and then offer a high bonus for Private WOM.",Bring a friend! Privately or Publicly?,2018-07-04 13:46:46,"Elias Carroni, Paolo Pin, Simone Righi","http://arxiv.org/abs/1807.01994v2, http://arxiv.org/pdf/1807.01994v2",physics.soc-ph
30055,em,"We develop a new approach for estimating average treatment effects in
observational studies with unobserved group-level heterogeneity. We consider a
general model with group-level unconfoundedness and provide conditions under
which aggregate balancing statistics -- group-level averages of functions of
treatments and covariates -- are sufficient to eliminate differences between
groups. Building on these results, we reinterpret commonly used linear
fixed-effect regression estimators by writing them in the Mundlak form as
linear regression estimators without fixed effects but including group
averages. We use this representation to develop Generalized Mundlak Estimators
(GMEs) that capture group differences through group averages of (functions of)
the unit-level variables and adjust for these group differences in flexible and
robust ways in the spirit of the modern causal literature.",Fixed Effects and the Generalized Mundlak Estimator,2018-07-05 20:40:31,"Dmitry Arkhangelsky, Guido Imbens","http://arxiv.org/abs/1807.02099v9, http://arxiv.org/pdf/1807.02099v9",econ.EM
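The Mundlak form referred to above can be sketched as an OLS regression that replaces group fixed effects with group averages of the treatment and covariates; with all group means included, the coefficients on the unit-level variables coincide with the within (fixed-effects) estimates. The simulated group structure and the clustered standard errors below are illustrative choices.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
groups = np.repeat(np.arange(50), 40)                  # 50 groups, 40 units each
alpha = rng.normal(size=50)[groups]                    # unobserved group effect
x = rng.normal(size=groups.size) + 0.5 * alpha
d = (rng.uniform(size=groups.size) < 1 / (1 + np.exp(-alpha))).astype(float)
y = 1.0 * d + 0.5 * x + alpha + rng.normal(size=groups.size)

df = pd.DataFrame({"y": y, "d": d, "x": x, "g": groups})
df["d_bar"] = df.groupby("g")["d"].transform("mean")   # group averages (Mundlak terms)
df["x_bar"] = df.groupby("g")["x"].transform("mean")

X = sm.add_constant(df[["d", "x", "d_bar", "x_bar"]])
fit = sm.OLS(df["y"], X).fit(cov_type="cluster", cov_kwds={"groups": df["g"]})
print(fit.params[["d", "x"]])                          # 'd' close to the true effect 1.0
```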
30056,em,"We propose a framework for estimation and inference when the model may be
misspecified. We rely on a local asymptotic approach where the degree of
misspecification is indexed by the sample size. We construct estimators whose
mean squared error is minimax in a neighborhood of the reference model, based
on one-step adjustments. In addition, we provide confidence intervals that
contain the true parameter under local misspecification. As a tool to interpret
the degree of misspecification, we map it to the local power of a specification
test of the reference model. Our approach allows for systematic sensitivity
analysis when the parameter of interest may be partially or irregularly
identified. As illustrations, we study three applications: an empirical
analysis of the impact of conditional cash transfers in Mexico where
misspecification stems from the presence of stigma effects of the program, a
cross-sectional binary choice model where the error distribution is
misspecified, and a dynamic panel data binary choice model where the number of
time periods is small and the distribution of individual effects is
misspecified.",Minimizing Sensitivity to Model Misspecification,2018-07-05 22:33:23,"Stéphane Bonhomme, Martin Weidner","http://arxiv.org/abs/1807.02161v6, http://arxiv.org/pdf/1807.02161v6",econ.EM
30057,em,"In this paper we propose an autoregressive wild bootstrap method to construct
confidence bands around a smooth deterministic trend. The bootstrap method is
easy to implement and does not require any adjustments in the presence of
missing data, which makes it particularly suitable for climatological
applications. We establish the asymptotic validity of the bootstrap method for
both pointwise and simultaneous confidence bands under general conditions,
allowing for general patterns of missing data, serial dependence and
heteroskedasticity. The finite sample properties of the method are studied in a
simulation study. We use the method to study the evolution of trends in daily
measurements of atmospheric ethane obtained from a weather station in the Swiss
Alps, where the method can easily deal with the many missing observations due
to adverse weather conditions.",Autoregressive Wild Bootstrap Inference for Nonparametric Trends,2018-07-06 14:19:19,"Marina Friedrich, Stephan Smeekes, Jean-Pierre Urbain","http://dx.doi.org/10.1016/j.jeconom.2019.05.006, http://arxiv.org/abs/1807.02357v2, http://arxiv.org/pdf/1807.02357v2",stat.ME
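A minimal sketch of the autoregressive wild bootstrap for a deterministic trend: estimate a smooth trend, multiply the residuals by AR(1) wild-bootstrap multipliers, re-estimate the trend on each bootstrap sample, and read off pointwise bands. The crude moving-average smoother, the AR parameter, and the simulated series are illustrative assumptions, not the paper's estimator or tuning.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 300
t = np.arange(T)
noise = np.convolve(rng.normal(size=T + 20), np.full(5, 0.4))[10:10 + T]  # serially dependent errors
y = 0.01 * t + 2 * np.sin(t / 40) + noise

def smooth(series, h=15):
    # crude local-mean trend estimator (moving average with edge padding)
    padded = np.pad(series, h, mode="edge")
    kernel = np.ones(2 * h + 1) / (2 * h + 1)
    return np.convolve(padded, kernel, mode="valid")

trend_hat = smooth(y)
resid = y - trend_hat
gamma = 0.6                                            # AR parameter of the bootstrap multipliers

boot_trends = np.empty((999, T))
for b in range(999):
    xi = rng.normal(scale=np.sqrt(1 - gamma**2), size=T)
    nu = np.empty(T)
    nu[0] = rng.normal()
    for s in range(1, T):                              # AR(1) wild-bootstrap multipliers
        nu[s] = gamma * nu[s - 1] + xi[s]
    y_star = trend_hat + nu * resid                    # bootstrap sample
    boot_trends[b] = smooth(y_star)

lower, upper = np.percentile(boot_trends, [2.5, 97.5], axis=0)  # naive pointwise band
print("pointwise 95% band at t=150:", (round(lower[150], 3), round(upper[150], 3)))
```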
30058,em,"Motivated by applications such as viral marketing, the problem of influence
maximization (IM) has been extensively studied in the literature. The goal is
to select a small number of users to adopt an item such that it results in a
large cascade of adoptions by others. Existing works have three key
limitations. (1) They do not account for economic considerations of a user in
buying/adopting items. (2) Most studies on multiple items focus on competition,
with complementary items receiving limited attention. (3) For the network
owner, maximizing social welfare is important to ensure customer loyalty, which
is not addressed in prior work in the IM literature. In this paper, we address
all three limitations and propose a novel model called UIC that combines
utility-driven item adoption with influence propagation over networks. Focusing
on the mutually complementary setting, we formulate the problem of social
welfare maximization in this novel setting. We show that while the objective
function is neither submodular nor supermodular, surprisingly a simple greedy
allocation algorithm achieves a factor of $(1-1/e-\epsilon)$ of the optimum
expected social welfare. We develop \textsf{bundleGRD}, a scalable version of
this approximation algorithm, and demonstrate, with comprehensive experiments
on real and synthetic datasets, that it significantly outperforms all
baselines.",Maximizing Welfare in Social Networks under a Utility Driven Influence Diffusion Model,2018-07-06 20:40:02,"Prithu Banerjee, Wei Chen, Laks V. S. Lakshmanan","http://dx.doi.org/10.1145/3299869.3319879, http://arxiv.org/abs/1807.02502v2, http://arxiv.org/pdf/1807.02502v2",cs.SI
30107,em,"I introduce a simple permutation procedure to test conventional (non-sharp)
hypotheses about the effect of a binary treatment in the presence of a finite
number of large, heterogeneous clusters when the treatment effect is identified
by comparisons across clusters. The procedure asymptotically controls size by
applying a level-adjusted permutation test to a suitable statistic. The
adjustments needed for most empirically relevant situations are tabulated in
the paper. The adjusted permutation test is easy to implement in practice and
performs well at conventional levels of significance with at least four treated
clusters and a similar number of control clusters. It is particularly robust to
situations where some clusters are much more variable than others. Examples and
an empirical application are provided.",Permutation inference with a finite number of heterogeneous clusters,2019-07-01 23:15:04,Andreas Hagemann,"http://arxiv.org/abs/1907.01049v2, http://arxiv.org/pdf/1907.01049v2",econ.EM
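For intuition, the snippet below implements the plain cluster-level permutation test that the adjusted procedure builds on; it is not the level-adjusted test of the abstract (the tabulated adjustments are precisely what restores size control with few, heterogeneous clusters), and the cluster summary statistic is a hypothetical choice.

```python
import numpy as np
from itertools import combinations

def cluster_permutation_pvalue(cluster_means, treated_idx):
    """Two-sided permutation p-value for a difference in cluster-level means.
    cluster_means: one summary statistic per cluster; treated_idx: treated clusters.
    NOTE: plain permutation test only; the paper's level adjustments are omitted."""
    cluster_means = np.asarray(cluster_means, dtype=float)
    k, q = len(cluster_means), len(treated_idx)
    observed = cluster_means[list(treated_idx)].mean() - \
               np.delete(cluster_means, list(treated_idx)).mean()
    count, total = 0, 0
    for perm in combinations(range(k), q):      # enumerate all possible assignments
        stat = cluster_means[list(perm)].mean() - \
               np.delete(cluster_means, list(perm)).mean()
        count += abs(stat) >= abs(observed) - 1e-12
        total += 1
    return count / total

# Example with four treated and four control clusters (illustrative numbers)
p_value = cluster_permutation_pvalue([2.1, 1.8, 2.4, 2.0, 1.1, 0.9, 1.3, 1.0],
                                     treated_idx=[0, 1, 2, 3])
```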
30060,em,"When an individual purchases a home, they simultaneously purchase its
structural features, its accessibility to work, and the neighborhood amenities.
Some amenities, such as air quality, are measurable while others, such as the
prestige or the visual impression of a neighborhood, are difficult to quantify.
Despite the well-known impacts intangible housing features have on house
prices, limited attention has been given to systematically quantifying these
difficult-to-measure amenities. Two issues have led to this neglect: not only
do few quantitative methods exist that can measure the urban environment, but
the collection of such data is also costly and subjective.
  We show that street image and satellite image data can capture these urban
qualities and improve the estimation of house prices. We propose a pipeline
that uses a deep neural network model to automatically extract visual features
from images to estimate house prices in London, UK. We make use of traditional
housing features such as age, size, and accessibility as well as visual
features from Google Street View images and Bing aerial images in estimating
the house price model. We find encouraging results where learning to
characterize the urban quality of a neighborhood improves house price
prediction, even when generalizing to previously unseen London boroughs.
  We explore the use of non-linear vs. linear methods to fuse these cues with
conventional models of house pricing, and show how the interpretability of
linear models allows us to directly extract proxy variables for visual
desirability of neighborhoods that are both of interest in their own right, and
could be used as inputs to other econometric methods. This is particularly
valuable as once the network has been trained with the training data, it can be
applied elsewhere, allowing us to generate vivid dense maps of the visual
appeal of London streets.",Take a Look Around: Using Street View and Satellite Images to Estimate House Prices,2018-07-19 00:10:08,"Stephen Law, Brooks Paige, Chris Russell","http://dx.doi.org/10.1145/3342240, http://arxiv.org/abs/1807.07155v2, http://arxiv.org/pdf/1807.07155v2",econ.EM
30061,em,"This paper proposes a method for estimating multiple change points in panel
data models with unobserved individual effects via ordinary least-squares
(OLS). Typically, in this setting, the OLS slope estimators are inconsistent
due to the unobserved individual effects bias. As a consequence, existing
methods remove the individual effects before change point estimation through
data transformations such as first-differencing. We prove that under reasonable
assumptions, the unobserved individual effects bias has no impact on the
consistent estimation of change points. Our simulations show that since our
method does not remove any variation in the dataset before change point
estimation, it performs better in small samples compared to first-differencing
methods. We focus on short panels because they are commonly used in practice,
and allow for the unobserved individual effects to vary over time. Our method
is illustrated via two applications: the environmental Kuznets curve and the
U.S. house price expectations after the financial crisis.",Change Point Estimation in Panel Data with Time-Varying Individual Effects,2018-08-09 15:10:14,"Otilia Boldea, Bettina Drepper, Zhuojiong Gan","http://arxiv.org/abs/1808.03109v1, http://arxiv.org/pdf/1808.03109v1",econ.EM
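A rough illustration of the estimation idea, under the simplifying assumptions of a single break and pooled OLS within each regime; the unobserved individual effects are deliberately left in the error term, in the spirit of the consistency result described above. Multiple breaks, time-varying individual effects, and inference are not covered here.

```python
import numpy as np

def ols_ssr(y, X):
    """Sum of squared residuals from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def estimate_single_break(y, X):
    """Single change-point estimate by scanning candidate break dates.
    y: (N, T) outcomes; X: (N, T, K) regressors. Pooled OLS is run separately
    before and after each candidate break; the date minimizing total SSR wins."""
    N, T, K = X.shape
    best_tau, best_ssr = None, np.inf
    for tau in range(1, T):                               # break after period tau - 1
        ssr = ols_ssr(y[:, :tau].ravel(), X[:, :tau].reshape(-1, K)) + \
              ols_ssr(y[:, tau:].ravel(), X[:, tau:].reshape(-1, K))
        if ssr < best_ssr:
            best_tau, best_ssr = tau, ssr
    return best_tau
```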
30062,em,"This dissertation is to study the interplay between large-scale electric
vehicle (EV) charging and the power system. We address three important issues
pertaining to EV charging and integration into the power system: (1) charging
station placement, (2) pricing policy and energy management strategy, and (3)
electricity trading market and distribution network design to facilitate
integrating EV and renewable energy source (RES) into the power system.
  For charging station placement problem, we propose a multi-stage consumer
behavior based placement strategy with incremental EV penetration rates and
model the EV charging industry as an oligopoly where the entire market is
dominated by a few charging service providers (oligopolists). The optimal
placement policy for each service provider is obtained by solving a Bayesian
game.
  For pricing and energy management of EV charging stations, we provide
guidelines for charging service providers to determine charging price and
manage electricity reserve to balance the competing objectives of improving
profitability, enhancing customer satisfaction, and reducing impact on the
power system. Two algorithms --- a stochastic dynamic programming (SDP)
algorithm and a greedy benchmark algorithm --- are applied to derive the
pricing and electricity procurement strategy.
  We design a novel electricity trading market and distribution network, which
supports seamless RES integration, grid to vehicle (G2V), vehicle to grid
(V2G), vehicle to vehicle (V2V), and distributed generation (DG) and storage.
We apply a sharing economy model to the electricity sector to stimulate
different entities to exchange and monetize their underutilized electricity. A
fitness-score (FS)-based supply-demand matching algorithm is developed by
considering consumer surplus, electricity network congestion, and economic
dispatch.","Engineering and Economic Analysis for Electric Vehicle Charging Infrastructure --- Placement, Pricing, and Market Design",2018-08-12 09:05:08,Chao Luo,"http://arxiv.org/abs/1808.03897v1, http://arxiv.org/pdf/1808.03897v1",econ.EM
30063,em,"General Purpose Technologies (GPTs) that can be applied in many industries
are an important driver of economic growth and national and regional
competitiveness. In spite of this, the geography of their development and
diffusion has not received significant attention in the literature. We address
this with an analysis of Deep Learning (DL), a core technique in Artificial
Intelligence (AI) increasingly being recognized as the latest GPT. We identify
DL papers in a novel dataset from ArXiv, a popular preprints website, and use
CrunchBase, a technology business directory to measure industrial capabilities
related to it. After showing that DL conforms with the definition of a GPT,
having experienced rapid growth and diffusion into new fields where it has
generated an impact, we describe changes in its geography. Our analysis shows
China's rise in AI rankings and relative decline in several European countries.
We also find that initial volatility in the geography of DL has been followed
by consolidation, suggesting that the window of opportunity for new entrants
might be closing down as new DL research hubs become dominant. Finally, we
study the regional drivers of DL clustering. We find that competitive DL
clusters tend to be based in regions combining research and industrial
activities related to it. This could be because GPT developers and adopters
located close to each other can collaborate and share knowledge more easily,
thus overcoming coordination failures in GPT deployment. Our analysis also
reveals a Chinese comparative advantage in DL after we control for other
explanatory factors, perhaps underscoring the importance of access to data and
supportive policies for the successful development of this complex, `omni-use'
technology.","Deep learning, deep change? Mapping the development of the Artificial Intelligence General Purpose Technology",2018-08-20 12:14:54,"J. Klinger, J. Mateos-Garcia, K. Stathoulopoulos","http://arxiv.org/abs/1808.06355v1, http://arxiv.org/pdf/1808.06355v1",cs.CY
30064,em,"We consider inference in models defined by approximate moment conditions. We
show that near-optimal confidence intervals (CIs) can be formed by taking a
generalized method of moments (GMM) estimator, and adding and subtracting the
standard error times a critical value that takes into account the potential
bias from misspecification of the moment conditions. In order to optimize
performance under potential misspecification, the weighting matrix for this GMM
estimator takes into account this potential bias, and therefore differs from
the one that is optimal under correct specification. To formally show the
near-optimality of these CIs, we develop asymptotic efficiency bounds for
inference in the locally misspecified GMM setting. These bounds may be of
independent interest, due to their implications for the possibility of using
moment selection procedures when conducting inference in moment condition
models. We apply our methods in an empirical application to automobile demand,
and show that adjusting the weighting matrix can shrink the CIs by a factor of
3 or more.",Sensitivity Analysis using Approximate Moment Condition Models,2018-08-22 17:43:38,"Timothy B. Armstrong, Michal Kolesár","http://dx.doi.org/10.3982/QE1609, http://arxiv.org/abs/1808.07387v5, http://arxiv.org/pdf/1808.07387v5",econ.EM
30065,em,"Motivated by customer loyalty plans and scholarship programs, we study
tie-breaker designs which are hybrids of randomized controlled trials (RCTs)
and regression discontinuity designs (RDDs). We quantify the statistical
efficiency of a tie-breaker design in which a proportion $\Delta$ of observed
subjects are in the RCT. In a two line regression, statistical efficiency
increases monotonically with $\Delta$, so efficiency is maximized by an RCT. We
point to additional advantages of tie-breakers versus RDD: for a nonparametric
regression the boundary bias is much less severe and for quadratic regression,
the variance is greatly reduced. For a two line model we can quantify the short
term value of the treatment allocation and this comparison favors smaller
$\Delta$ with the RDD being best. We solve for the optimal tradeoff between
these exploration and exploitation goals. The usual tie-breaker design applies
an RCT on the middle $\Delta$ subjects as ranked by the assignment variable. We
quantify the efficiency of other designs such as experimenting only in the
second decile from the top. We also show that in some general parametric models
a Monte Carlo evaluation can be replaced by matrix algebra.",Optimizing the tie-breaker regression discontinuity design,2018-08-22 23:57:38,"Art B. Owen, Hal Varian","http://arxiv.org/abs/1808.07563v3, http://arxiv.org/pdf/1808.07563v3",stat.ME
30066,em,"Modern empirical work in Regression Discontinuity (RD) designs often employs
local polynomial estimation and inference with a mean square error (MSE)
optimal bandwidth choice. This bandwidth yields an MSE-optimal RD treatment
effect estimator, but is by construction invalid for inference. Robust bias
corrected (RBC) inference methods are valid when using the MSE-optimal
bandwidth, but we show they yield suboptimal confidence intervals in terms of
coverage error. We establish valid coverage error expansions for RBC confidence
interval estimators and use these results to propose new inference-optimal
bandwidth choices for forming these intervals. We find that the standard
MSE-optimal bandwidth for the RD point estimator is too large when the goal is
to construct RBC confidence intervals with the smallest coverage error. We
further optimize the constant terms behind the coverage error to derive new
optimal choices for the auxiliary bandwidth required for RBC inference. Our
expansions also establish that RBC inference yields higher-order refinements
(relative to traditional undersmoothing) in the context of RD designs. Our main
results cover sharp and sharp kink RD designs under conditional
heteroskedasticity, and we discuss extensions to fuzzy and other RD designs,
clustered sampling, and pre-intervention covariates adjustments. The
theoretical findings are illustrated with a Monte Carlo experiment and an
empirical application, and the main methodological results are available in
\texttt{R} and \texttt{Stata} packages.",Optimal Bandwidth Choice for Robust Bias Corrected Inference in Regression Discontinuity Designs,2018-09-01 21:48:54,"Sebastian Calonico, Matias D. Cattaneo, Max H. Farrell","http://arxiv.org/abs/1809.00236v4, http://arxiv.org/pdf/1809.00236v4",econ.EM
30067,em,"A common problem in econometrics, statistics, and machine learning is to
estimate and make inference on functions that satisfy shape restrictions. For
example, distribution functions are nondecreasing and range between zero and
one, height growth charts are nondecreasing in age, and production functions
are nondecreasing and quasi-concave in input quantities. We propose a method to
enforce these restrictions ex post on point and interval estimates of the
target function by applying functional operators. If an operator satisfies
certain properties that we make precise, the shape-enforced point estimates are
closer to the target function than the original point estimates and the
shape-enforced interval estimates have greater coverage and shorter length than
the original interval estimates. We show that these properties hold for six
different operators that cover commonly used shape restrictions in practice:
range, convexity, monotonicity, monotone convexity, quasi-convexity, and
monotone quasi-convexity. We illustrate the results with two empirical
applications to the estimation of a height growth chart for infants in India
and a production function for chemical firms in China.",Shape-Enforcing Operators for Point and Interval Estimators,2018-09-04 18:21:08,"Xi Chen, Victor Chernozhukov, Iván Fernández-Val, Scott Kostyshak, Ye Luo","http://arxiv.org/abs/1809.01038v5, http://arxiv.org/pdf/1809.01038v5",econ.EM
30068,em,"We propose novel methods for change-point testing for nonparametric
estimators of expected shortfall and related risk measures in weakly dependent
time series. We can detect general multiple structural changes in the tails of
marginal distributions of time series under general assumptions.
Self-normalization allows us to avoid the issues of standard error estimation.
The theoretical foundations for our methods are functional central limit
theorems, which we develop under weak assumptions. An empirical study of S&P
500 and US Treasury bond returns illustrates the practical use of our methods
in detecting and quantifying market instability via the tails of financial time
series.",Change-Point Testing for Risk Measures in Time Series,2018-09-07 07:14:03,"Lin Fan, Peter W. Glynn, Markus Pelger","http://arxiv.org/abs/1809.02303v2, http://arxiv.org/pdf/1809.02303v2",econ.EM
30069,em,"This paper proposes a variational Bayes algorithm for computationally
efficient posterior and predictive inference in time-varying parameter (TVP)
models. Within this context we specify a new dynamic variable/model selection
strategy for TVP dynamic regression models in the presence of a large number of
predictors. This strategy allows for assessing in individual time periods which
predictors are relevant (or not) for forecasting the dependent variable. The
new algorithm is evaluated numerically using synthetic data and its
computational advantages are established. Using macroeconomic data for the US
we find that regression models that combine time-varying parameters with the
information in many predictors have the potential to improve forecasts of price
inflation over a number of alternative forecasting models.",Bayesian dynamic variable selection in high dimensions,2018-09-09 22:52:05,"Gary Koop, Dimitris Korobilis","http://arxiv.org/abs/1809.03031v2, http://arxiv.org/pdf/1809.03031v2",stat.CO
30070,em,"Urban house prices are strongly associated with local socioeconomic factors.
In literature, house price modeling is based on socioeconomic variables from
traditional census, which is not real-time, dynamic and comprehensive. Inspired
by the emerging concept of ""digital census"" - using large-scale digital records
of human activities to measure urban population dynamics and socioeconomic
conditions, we introduce three typical datasets, namely 311 complaints, crime
complaints and taxi trips, into house price modeling. Based on the individual
housing sales data in New York City, we provide comprehensive evidence that
these digital census datasets can substantially improve modeling performance
for both house price levels and changes, regardless of whether traditional
census data are included. Hence, digital census can serve as both
effective alternatives and complements to traditional census for house price
modeling.",House Price Modeling with Digital Census,2018-08-29 19:13:58,"Enwei Zhu, Stanislav Sobolevsky","http://arxiv.org/abs/1809.03834v1, http://arxiv.org/pdf/1809.03834v1",econ.EM
30071,em,"We study regression discontinuity designs when covariates are included in the
estimation. We examine local polynomial estimators that include discrete or
continuous covariates in an additive separable way, but without imposing any
parametric restrictions on the underlying population regression functions. We
recommend a covariate-adjustment approach that retains consistency under
intuitive conditions, and characterize the potential for estimation and
inference improvements. We also present new covariate-adjusted mean squared
error expansions and robust bias-corrected inference procedures, with
heteroskedasticity-consistent and cluster-robust standard errors. An empirical
illustration and an extensive simulation study are presented. All methods are
implemented in \texttt{R} and \texttt{Stata} software packages.",Regression Discontinuity Designs Using Covariates,2018-09-11 16:58:17,"Sebastian Calonico, Matias D. Cattaneo, Max H. Farrell, Rocio Titiunik","http://arxiv.org/abs/1809.03904v1, http://arxiv.org/pdf/1809.03904v1",econ.EM
30072,em,"Due to the increasing availability of high-dimensional empirical applications
in many research disciplines, valid simultaneous inference becomes more and
more important. For instance, high-dimensional settings might arise in economic
studies due to very rich data sets with many potential covariates or in the
analysis of treatment heterogeneities. Also the evaluation of potentially more
complicated (non-linear) functional forms of the regression relationship leads
to many potential variables for which simultaneous inferential statements might
be of interest. Here we provide a review of classical and modern methods for
simultaneous inference in (high-dimensional) settings and illustrate their use
by a case study using the R package hdm. The hdm package implements valid,
powerful, and efficient joint hypothesis tests for a potentially large number
of coefficients, as well as the construction of simultaneous confidence intervals
and, therefore, provides useful methods to perform valid post-selection
inference based on the LASSO.",Valid Simultaneous Inference in High-Dimensional Settings (with the hdm package for R),2018-09-13 16:41:03,"Philipp Bach, Victor Chernozhukov, Martin Spindler","http://arxiv.org/abs/1809.04951v1, http://arxiv.org/pdf/1809.04951v1",econ.EM
30073,em,"Control variables provide an important means of controlling for endogeneity
in econometric models with nonseparable and/or multidimensional heterogeneity.
We allow for discrete instruments, giving identification results under a
variety of restrictions on the way the endogenous variable and the control
variables affect the outcome. We consider many structural objects of interest,
such as average or quantile treatment effects. We illustrate our results with
an empirical application to Engel curve estimation.","Control Variables, Discrete Instruments, and Identification of Structural Functions",2018-09-15 15:05:07,"Whitney Newey, Sami Stouli","http://arxiv.org/abs/1809.05706v2, http://arxiv.org/pdf/1809.05706v2",econ.EM
30074,em,"Central to many inferential situations is the estimation of rational
functions of parameters. The mainstream in statistics and econometrics
estimates these quantities based on the plug-in approach without consideration
of the main objective of the inferential situation. We propose the Bayesian
Minimum Expected Loss (MELO) approach focusing explicitly on the function of
interest, and calculating its frequentist variability. Asymptotic properties of
the MELO estimator are similar to the plug-in approach. Nevertheless,
simulation exercises show that our proposal is better in situations
characterized by small sample sizes and noisy models. In addition, we observe
in the applications that our approach gives lower standard errors than
frequently used alternatives when datasets are not very informative.",Focused econometric estimation for noisy and small datasets: A Bayesian Minimum Expected Loss estimator approach,2018-09-19 06:25:17,"Andres Ramirez-Hassan, Manuel Correa-Giraldo","http://arxiv.org/abs/1809.06996v1, http://arxiv.org/pdf/1809.06996v1",econ.EM
30075,em,"In this paper we propose the Single-equation Penalized Error Correction
Selector (SPECS) as an automated estimation procedure for dynamic
single-equation models with a large number of potentially (co)integrated
variables. By extending the classical single-equation error correction model,
SPECS enables the researcher to model large cointegrated datasets without
necessitating any form of pre-testing for the order of integration or
cointegrating rank. Under an asymptotic regime in which both the number of
parameters and time series observations jointly diverge to infinity, we show
that SPECS is able to consistently estimate an appropriate linear combination
of the cointegrating vectors that may occur in the underlying DGP. In addition,
SPECS is shown to enable the correct recovery of sparsity patterns in the
parameter space and to possess the same limiting distribution as the OLS oracle
procedure. A simulation study shows strong selective capabilities, as well as
superior predictive performance in the context of nowcasting compared to
high-dimensional models that ignore cointegration. An empirical application to
nowcasting Dutch unemployment rates using Google Trends confirms the strong
practical performance of our procedure.",An Automated Approach Towards Sparse Single-Equation Cointegration Modelling,2018-09-24 16:05:11,"Stephan Smeekes, Etienne Wijler","http://arxiv.org/abs/1809.08889v3, http://arxiv.org/pdf/1809.08889v3",econ.EM
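The sketch below imitates the single-equation error-correction setup with an off-the-shelf L1 penalty (scikit-learn's Lasso) and one lagged difference; the actual SPECS estimator uses adaptive penalty weights and its own tuning strategy, so this is only a stand-in showing how lagged levels and lagged differences enter the penalized regression.

```python
import numpy as np
from sklearn.linear_model import Lasso

def specs_like_ecm(y, X, alpha=0.1):
    """Penalized single-equation error-correction regression (simplified stand-in):
    regress Delta y_t on lagged levels z_{t-1} = (y_{t-1}, X_{t-1}) and the lagged
    difference Delta z_{t-1}, with a fixed L1 penalty instead of adaptive weights.
    y: (T,) target series; X: (T, p) potentially (co)integrated regressors."""
    z = np.column_stack([y, X])                  # levels, target series first
    dz = np.diff(z, axis=0)                      # first differences
    dy = dz[1:, 0]                               # Delta y_t
    regressors = np.column_stack([z[1:-1],       # z_{t-1}
                                  dz[:-1]])      # Delta z_{t-1}
    return Lasso(alpha=alpha, fit_intercept=True).fit(regressors, dy)
```

In practice one would standardize the regressors and choose alpha by information criteria or cross-validation; both steps are skipped here for brevity.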
30076,em,"Although stochastic volatility and GARCH (generalized autoregressive
conditional heteroscedasticity) models have successfully described the
volatility dynamics of univariate asset returns, extending them to the
multivariate models with dynamic correlations has been difficult due to several
major problems. First, there are too many parameters to estimate if available
data are only daily returns, which results in unstable estimates. One solution
to this problem is to incorporate additional observations based on intraday
asset returns, such as realized covariances. Second, since multivariate asset
returns are not synchronously traded, we have to use the largest time intervals
such that all asset returns are observed in order to compute the realized
covariance matrices. In doing so, however, we fail to make full use of the
available intraday information when some assets are traded less frequently.
Third, it is not straightforward to guarantee that the estimated (and the
realized) covariance matrices are positive definite. Our contributions are the
following: (1) we obtain the stable parameter estimates for the dynamic
correlation models using the realized measures, (2) we make full use of
intraday information by using pairwise realized correlations, (3) the
covariance matrices are guaranteed to be positive definite, (4) we avoid the
arbitrariness of the ordering of asset returns, (5) we propose the flexible
correlation structure model (e.g., such as setting some correlations to be zero
if necessary), and (6) the parsimonious specification for the leverage effect
is proposed. Our proposed models are applied to the daily returns of nine U.S.
stocks with their realized volatilities and pairwise realized correlations and
are shown to outperform the existing models with respect to portfolio
performances.",Multivariate Stochastic Volatility Model with Realized Volatilities and Pairwise Realized Correlations,2018-09-26 15:03:32,"Yuta Yamauchi, Yasuhiro Omori","http://dx.doi.org/10.1080/07350015.2019.1602048, http://arxiv.org/abs/1809.09928v2, http://arxiv.org/pdf/1809.09928v2",econ.EM
30077,em,"In this paper we investigate panel regression models with interactive fixed
effects. We propose two new estimation methods that are based on minimizing
convex objective functions. The first method minimizes the sum of squared
residuals with a nuclear (trace) norm regularization. The second method
minimizes the nuclear norm of the residuals. We establish the consistency of
the two resulting estimators. Those estimators have a very important
computational advantage compared to the existing least squares (LS) estimator,
in that they are defined as minimizers of a convex objective function. In
addition, the nuclear norm penalization helps to resolve a potential
identification problem for interactive fixed effect models, in particular when
the regressors are low-rank and the number of the factors is unknown. We also
show how to construct estimators that are asymptotically equivalent to the
least squares (LS) estimator in Bai (2009) and Moon and Weidner (2017) by using
our nuclear norm regularized or minimized estimators as initial values for a
finite number of LS minimizing iteration steps. This iteration avoids any
non-convex minimization, while the original LS estimation problem is generally
non-convex, and can have multiple local minima.",Nuclear Norm Regularized Estimation of Panel Regression Models,2018-10-25 20:21:30,"Hyungsik Roger Moon, Martin Weidner","http://arxiv.org/abs/1810.10987v3, http://arxiv.org/pdf/1810.10987v3",econ.EM
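A simple alternating scheme targeting the first (nuclear norm regularized) objective: OLS for the slope coefficients given the low-rank component, and singular value thresholding for the low-rank component given the slopes. The penalty lam and the iteration count are placeholders, and this is not necessarily the computational procedure the authors use.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nucnorm_panel(Y, X, lam, n_iter=200):
    """Alternate between OLS for beta and SVT for the interactive-effects term Gamma,
    minimizing ||Y - sum_k beta_k X_k - Gamma||_F^2 / 2 + lam * ||Gamma||_*.
    Y: (N, T) outcomes; X: (K, N, T) regressors."""
    K = X.shape[0]
    Xmat = X.reshape(K, -1).T                    # (N*T, K) design for the OLS step
    beta = np.zeros(K)
    Gamma = np.zeros_like(Y, dtype=float)
    for _ in range(n_iter):
        resid = (Y - Gamma).ravel()
        beta, *_ = np.linalg.lstsq(Xmat, resid, rcond=None)
        Gamma = svt(Y - np.tensordot(beta, X, axes=1), lam)
    return beta, Gamma
```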
30078,em,"Can instrumental variables be found from data? While instrumental variable
(IV) methods are widely used to identify causal effects, testing their validity
from observed data remains a challenge. This is because validity of an IV
depends on two assumptions, exclusion and as-if-random, that are largely
believed to be untestable from data. In this paper, we show that under certain
conditions, testing for instrumental variables is possible. We build upon prior
work on necessary tests to derive a test that characterizes the odds of being a
valid instrument, thus yielding the name ""necessary and probably sufficient"".
The test works by defining the class of invalid-IV and valid-IV causal models
as Bayesian generative models and comparing their marginal likelihood based on
observed data. When all variables are discrete, we also provide a method to
efficiently compute these marginal likelihoods.
  We evaluate the test on an extensive set of simulations for binary data,
inspired by an open problem for IV testing proposed in past work. We find that
the test is most powerful when an instrument follows monotonicity---effect on
treatment is either non-decreasing or non-increasing---and has moderate-to-weak
strength; incidentally, such instruments are commonly used in observational
studies. Among as-if-random and exclusion, it detects exclusion violations with
higher power. Applying the test to IVs from two seminal studies on instrumental
variables and five recent studies from the American Economic Review shows that
many of the instruments may be flawed, at least when all variables are
discretized. The proposed test opens the possibility of data-driven validation
and search for instrumental variables.",Necessary and Probably Sufficient Test for Finding Valid Instrumental Variables,2018-12-04 16:53:21,Amit Sharma,"http://arxiv.org/abs/1812.01412v1, http://arxiv.org/pdf/1812.01412v1",stat.ME
30079,em,"Uncovering the heterogeneity of causal effects of policies and business
decisions at various levels of granularity provides substantial value to
decision makers. This paper develops new estimation and inference procedures
for multiple treatment models in a selection-on-observables framework by
modifying the Causal Forest approach suggested by Wager and Athey (2018) in
several dimensions. The new estimators have desirable theoretical,
computational and practical properties for various aggregation levels of the
causal effects. While an Empirical Monte Carlo study suggests that they
outperform previously suggested estimators, an application to the evaluation of
an active labour market programme shows the value of the new methods for
applied research.",Modified Causal Forests for Estimating Heterogeneous Causal Effects,2018-12-22 12:48:56,Michael Lechner,"http://arxiv.org/abs/1812.09487v2, http://arxiv.org/pdf/1812.09487v2",econ.EM
30080,em,"What should researchers do when their baseline model is refuted? We provide
four constructive answers. First, researchers can measure the extent of
falsification. To do this, we consider continuous relaxations of the baseline
assumptions of concern. We then define the falsification frontier: The smallest
relaxations of the baseline model which are not refuted. This frontier provides
a quantitative measure of the extent of falsification. Second, researchers can
present the identified set for the parameter of interest under the assumption
that the true model lies somewhere on this frontier. We call this the
falsification adaptive set. This set generalizes the standard baseline estimand
to account for possible falsification. Third, researchers can present the
identified set for a specific point on this frontier. Finally, as a sensitivity
analysis, researchers can present identified sets for points beyond the
frontier. To illustrate these four ways of salvaging falsified models, we study
overidentifying restrictions in two instrumental variable models: a homogeneous
effects linear model, and heterogeneous effect models with either binary or
continuous outcomes. In the linear model, we consider the classical
overidentifying restrictions implied when multiple instruments are observed. We
generalize these conditions by considering continuous relaxations of the
classical exclusion restrictions. By sufficiently weakening the assumptions, a
falsified baseline model becomes non-falsified. We obtain analogous results in
the heterogeneous effect models, where we derive identified sets for marginal
distributions of potential outcomes, falsification frontiers, and falsification
adaptive sets under continuous relaxations of the instrument exogeneity
assumptions. We illustrate our results in four different empirical
applications.",Salvaging Falsified Instrumental Variable Models,2018-12-30 23:04:17,"Matthew A. Masten, Alexandre Poirier","http://arxiv.org/abs/1812.11598v3, http://arxiv.org/pdf/1812.11598v3",econ.EM
30081,em,"We analyze the role of selection bias in generating the changes in the
observed distribution of female hourly wages in the United States using CPS
data for the years 1975 to 2020. We account for the selection bias from the
employment decision by modeling the distribution of the number of working hours
and estimating a nonseparable model of wages. We decompose changes in the wage
distribution into composition, structural and selection effects. Composition
effects have increased wages at all quantiles while the impact of the
structural effects varies by time period and quantile. Changes in the role of
selection only appear at the lower quantiles of the wage distribution. The
evidence suggests that there is positive selection in the 1970s which
diminishes until the later 1990s. This reduces wages at lower quantiles and
increases wage inequality. Post 2000 there appears to be an increase in
positive sorting which reduces the selection effects on wage inequality.",Selection and the Distribution of Female Hourly Wages in the U.S,2018-12-21 19:26:50,"Iván Fernández-Val, Franco Peracchi, Aico van Vuuren, Francis Vella","http://arxiv.org/abs/1901.00419v5, http://arxiv.org/pdf/1901.00419v5",econ.EM
30082,em,"International trade research plays an important role to inform trade policy
and shed light on wider issues relating to poverty, development, migration,
productivity, and economy. With recent advances in information technology,
global and regional agencies distribute an enormous amount of internationally
comparable trading data among a large number of countries over time, providing
a goldmine for empirical analysis of international trade. Meanwhile, an array
of new statistical methods have recently been developed for dynamic network analysis.
However, these advanced methods have not been utilized for analyzing such
massive dynamic cross-country trading data. International trade data can be
viewed as a dynamic transport network because it emphasizes the amount of goods
moving across a network. Most literature on dynamic network analysis
concentrates on the connectivity network that focuses on link formation or
deformation rather than the transport moving across the network. We take a
different perspective from the pervasive node-and-edge level modeling: the
dynamic transport network is modeled as a time series of relational matrices.
We adopt a matrix factor model of \cite{wang2018factor}, with a specific
interpretation for the dynamic transport network. Under the model, the observed
surface network is assumed to be driven by a latent dynamic transport network
with lower dimensions. The proposed method is able to unveil the latent dynamic
structure and achieve the objective of dimension reduction. We applied the
proposed framework and methodology to a data set of monthly trading volumes
among 24 countries and regions from 1982 to 2015. Our findings shed light on
trading hubs, centrality, trends and patterns of international trade, and reveal
change points that match changes in trading policies. The dataset also provides a fertile
ground for future research on international trade.",Modeling Dynamic Transport Network with Matrix Factor Models: with an Application to International Trade Flow,2019-01-02 21:17:17,"Elynn Y. Chen, Rong Chen","http://arxiv.org/abs/1901.00769v1, http://arxiv.org/pdf/1901.00769v1",econ.EM
30108,em,"We provide a novel method for large volatility matrix prediction with
high-frequency data by applying eigen-decomposition to daily realized
volatility matrix estimators and capturing eigenvalue dynamics with ARMA
models. Given a sequence of daily volatility matrix estimators, we compute the
aggregated eigenvectors and obtain the corresponding eigenvalues. Eigenvalues
in the same relative magnitude form a time series and the ARMA models are
further employed to model the dynamics within each eigenvalue time series to
produce a predictor. We predict the future large volatility matrix based on the
predicted eigenvalues and the aggregated eigenvectors, and demonstrate the
advantages of the proposed method in volatility prediction and portfolio
allocation problems.",Large Volatility Matrix Prediction with High-Frequency Data,2019-07-02 09:54:21,Xinyu Song,"http://arxiv.org/abs/1907.01196v2, http://arxiv.org/pdf/1907.01196v2",stat.AP
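A condensed version of the forecasting recipe described in the abstract, written with statsmodels' ARIMA as the ARMA engine; the ARMA order, the use of the average realized covariance matrix to form the aggregated eigenvectors, and the log transform guarding positivity are ad-hoc choices rather than the paper's specifications.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def predict_volatility_matrix(rv_matrices, order=(1, 0, 1)):
    """One-step-ahead volatility matrix forecast:
    (i) aggregate eigenvectors from the average realized covariance matrix,
    (ii) rotate each daily matrix onto that basis to obtain eigenvalue series,
    (iii) fit an ARMA model to each (log) eigenvalue series and forecast,
    (iv) reassemble the predicted matrix.
    rv_matrices: array of shape (T, p, p) of daily realized covariance estimates."""
    T, p, _ = rv_matrices.shape
    _, V = np.linalg.eigh(rv_matrices.mean(axis=0))              # aggregated eigenvectors
    eig_series = np.einsum('ji,tjk,ki->ti', V, rv_matrices, V)   # diag of V' S_t V per day
    forecasts = np.empty(p)
    for i in range(p):
        series = np.log(np.maximum(eig_series[:, i], 1e-12))     # crude positivity guard
        fit = ARIMA(series, order=order).fit()
        forecasts[i] = np.exp(fit.forecast(1)[0])
    return V @ np.diag(forecasts) @ V.T
```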
30083,em,"The relationship between democracy and economic growth is of long-standing
interest. We revisit the panel data analysis of this relationship by Acemoglu,
Naidu, Restrepo and Robinson (forthcoming) using state of the art econometric
methods. We argue that this and lots of other panel data settings in economics
are in fact high-dimensional, resulting in principal estimators -- the fixed
effects (FE) and Arellano-Bond (AB) estimators -- to be biased to the degree
that invalidates statistical inference. We can however remove these biases by
using simple analytical and sample-splitting methods, and thereby restore valid
statistical inference. We find that the debiased FE and AB estimators produce
substantially higher estimates of the long-run effect of democracy on growth,
providing even stronger support for the key hypothesis in Acemoglu, Naidu,
Restrepo and Robinson (forthcoming). Given the ubiquitous nature of panel data,
we conclude that the use of debiased panel data estimators should substantially
improve the quality of empirical inference in economics.",Mastering Panel 'Metrics: Causal Impact of Democracy on Growth,2019-01-12 11:30:10,"Shuowen Chen, Victor Chernozhukov, Iván Fernández-Val","http://arxiv.org/abs/1901.03821v1, http://arxiv.org/pdf/1901.03821v1",econ.EM
30084,em,"In this paper we consider estimation of unobserved components in state space
models using a dynamic factor approach to incorporate auxiliary information
from high-dimensional data sources. We apply the methodology to unemployment
estimation as done by Statistics Netherlands, who uses a multivariate state
space model to produce monthly figures for the unemployment using series
observed with the labour force survey (LFS). We extend the model by including
auxiliary series of Google Trends about job-search and economic uncertainty,
and claimant counts, partially observed at higher frequencies. Our factor model
allows for nowcasting the variable of interest, providing reliable unemployment
estimates in real-time before LFS data become available.",A dynamic factor model approach to incorporate Big Data in state space models for official statistics,2019-01-31 16:47:13,"Caterina Schiavoni, Franz Palm, Stephan Smeekes, Jan van den Brakel","http://arxiv.org/abs/1901.11355v2, http://arxiv.org/pdf/1901.11355v2",econ.EM
30085,em,"This paper is concerned with cross-sectional dependence arising because
observations are interconnected through an observed network. Following Doukhan
and Louhichi (1999), we measure the strength of dependence by covariances of
nonlinearly transformed variables. We provide a law of large numbers and
central limit theorem for network dependent variables. We also provide a method
of calculating standard errors robust to general forms of network dependence.
For that purpose, we rely on a network heteroskedasticity and autocorrelation
consistent (HAC) variance estimator, and show its consistency. The results rely
on conditions characterized by tradeoffs between the rate of decay of
dependence across a network and the network's denseness. Our approach can
accommodate data generated by network formation models, random fields on
graphs, conditional dependency graphs, and large functional-causal systems of
equations.",Limit Theorems for Network Dependent Random Variables,2019-03-04 06:39:56,"Denis Kojevnikov, Vadim Marmer, Kyungchul Song","http://dx.doi.org/10.1016/j.jeconom.2020.05.019, http://arxiv.org/abs/1903.01059v6, http://arxiv.org/pdf/1903.01059v6",econ.EM
30086,em,"Ethane is the most abundant non-methane hydrocarbon in the Earth's atmosphere
and an important precursor of tropospheric ozone through various chemical
pathways. Ethane is also an indirect greenhouse gas (global warming potential),
influencing the atmospheric lifetime of methane through the consumption of the
hydroxyl radical (OH). Understanding the development of trends and identifying
trend reversals in atmospheric ethane is therefore crucial. Our dataset
consists of four series of daily ethane columns obtained from ground-based FTIR
measurements. As many other decadal time series, our data are characterized by
autocorrelation, heteroskedasticity, and seasonal effects. Additionally,
missing observations due to instrument failure or unfavorable measurement
conditions are common in such series. The goal of this paper is therefore to
analyze trends in atmospheric ethane with statistical tools that correctly
address these data features. We present selected methods designed for the
analysis of time trends and trend reversals. We consider bootstrap inference on
broken linear trends and smoothly varying nonlinear trends. In particular, for
the broken trend model, we propose a bootstrap method for inference on the
break location and the corresponding changes in slope. For the smooth trend
model we construct simultaneous confidence bands around the nonparametrically
estimated trend. Our autoregressive wild bootstrap approach, combined with a
seasonal filter, is able to handle all issues mentioned above.",A statistical analysis of time trends in atmospheric ethane,2019-03-13 13:32:19,"Marina Friedrich, Eric Beutner, Hanno Reuvers, Stephan Smeekes, Jean-Pierre Urbain, Whitney Bader, Bruno Franco, Bernard Lejeune, Emmanuel Mahieu","http://arxiv.org/abs/1903.05403v2, http://arxiv.org/pdf/1903.05403v2",stat.AP
30087,em,"A decision maker starts from a judgmental decision and moves to the closest
boundary of the confidence interval. This statistical decision rule is
admissible and does not perform worse than the judgmental decision with a
probability equal to the confidence level, which is interpreted as a
coefficient of statistical risk aversion. The confidence level is related to
the decision maker's aversion to uncertainty and can be elicited with
laboratory experiments using urns \`a la Ellsberg. The decision rule is applied
to a problem of asset allocation for an investor whose judgmental decision is
to keep all her wealth in cash.",Deciding with Judgment,2019-03-16 23:05:50,Simone Manganelli,"http://arxiv.org/abs/1903.06980v1, http://arxiv.org/pdf/1903.06980v1",stat.ME
30088,em,"For regulatory and interpretability reasons, logistic regression is still
widely used. To improve prediction accuracy and interpretability, a
preprocessing step quantizing both continuous and categorical data is usually
performed: continuous features are discretized and, if numerous, levels of
categorical features are grouped. An even better predictive accuracy can be
reached by embedding this quantization estimation step directly into the
predictive estimation step itself. But doing so, the predictive loss has to be
optimized on a huge set. To overcome this difficulty, we introduce a specific
two-step optimization strategy: first, the optimization problem is relaxed by
approximating discontinuous quantization functions by smooth functions; second,
the resulting relaxed optimization problem is solved via a particular neural
network. The good performances of this approach, which we call glmdisc, are
illustrated on simulated and real data from the UCI library and Cr\'edit
Agricole Consumer Finance (a major European historic player in the consumer
credit market).",Feature quantization for parsimonious and interpretable predictive models,2019-03-21 13:54:16,"Adrien Ehrhardt, Christophe Biernacki, Vincent Vandewalle, Philippe Heinrich","http://arxiv.org/abs/1903.08920v1, http://arxiv.org/pdf/1903.08920v1",stat.ME
30089,em,"We discuss the relevance of the recent Machine Learning (ML) literature for
economics and econometrics. First we discuss the differences in goals, methods
and settings between the ML literature and the traditional econometrics and
statistics literatures. Then we discuss some specific methods from the machine
learning literature that we view as important for empirical researchers in
economics. These include supervised learning methods for regression and
classification, unsupervised learning methods, as well as matrix completion
methods. Finally, we highlight newly developed methods at the intersection of
ML and econometrics, methods that typically perform better than either
off-the-shelf ML or more traditional econometric methods when applied to
particular classes of problems, problems that include causal inference for
average treatment effects, optimal policy estimation, and estimation of the
counterfactual effect of price changes in consumer choice models.",Machine Learning Methods Economists Should Know About,2019-03-25 01:58:02,"Susan Athey, Guido Imbens","http://arxiv.org/abs/1903.10075v1, http://arxiv.org/pdf/1903.10075v1",econ.EM
30090,em,"Dynamic decisions are pivotal to economic policy making. We show how existing
evidence from randomized control trials can be utilized to guide personalized
decisions in challenging dynamic environments with constraints such as limited
budget or queues. Recent developments in reinforcement learning make it
possible to solve many realistically complex settings for the first time. We
allow for restricted policy functions and prove that their regret decays at
rate $n^{-1/2}$, the same as in the static case. We illustrate our methods with
an application to job training. The approach scales to a wide range of
important problems faced by policy makers.",Dynamically Optimal Treatment Allocation using Reinforcement Learning,2019-04-01 21:18:16,"Karun Adusumilli, Friedrich Geiecke, Claudio Schilter","http://arxiv.org/abs/1904.01047v4, http://arxiv.org/pdf/1904.01047v4",econ.EM
30091,em,"Consider a predictor who ranks eventualities on the basis of past cases: for
instance a search engine ranking webpages given past searches. Resampling past
cases leads to different rankings and the extraction of deeper information. Yet
a rich database, with sufficiently diverse rankings, is often beyond reach.
Inexperience demands either ""on the fly"" learning-by-doing or prudence: the
arrival of a novel case does not force (i) a revision of current rankings, (ii)
dogmatism towards new rankings, or (iii) intransitivity. For this higher-order
framework of inductive inference, we derive a suitably unique numerical
representation of these rankings via a matrix on eventualities x cases and
describe a robust test of prudence. Applications include: the success/failure
of startups; the veracity of fake news; and novel conditions for the existence
of a yield curve that is robustly arbitrage-free.",Second-order Inductive Inference: an axiomatic approach,2019-04-05 11:32:27,Patrick H. O'Callaghan,"http://arxiv.org/abs/1904.02934v5, http://arxiv.org/pdf/1904.02934v5",econ.TH
30092,em,"Accurate estimation for extent of cross{sectional dependence in large panel
data analysis is paramount to further statistical analysis on the data under
study. Grouping more data with weak relations (cross{sectional dependence)
together often results in less efficient dimension reduction and worse
forecasting. This paper describes cross-sectional dependence among a large
number of objects (time series) via a factor model and parameterizes its extent
in terms of strength of factor loadings. A new joint estimation method,
benefiting from unique feature of dimension reduction for high dimensional time
series, is proposed for the parameter representing the extent and some other
parameters involved in the estimation procedure. Moreover, a joint asymptotic
distribution for a pair of estimators is established. Simulations illustrate
the effectiveness of the proposed estimation method in the finite sample
performance. Applications in cross-country macro-variables and stock returns
from S&P 500 are studied.",Estimation of Cross-Sectional Dependence in Large Panels,2019-04-15 08:15:05,"Jiti Gao, Guangming Pan, Yanrong Yang, Bo Zhang","http://arxiv.org/abs/1904.06843v1, http://arxiv.org/pdf/1904.06843v1",econ.EM
30093,em,"The goal of this paper is to provide some tools for nonparametric estimation
and inference in psychological and economic experiments. We consider an
experimental framework in which each of $n$ subjects provides $T$ responses to a
vector of $T$ stimuli. We propose to estimate the unknown function $f$ linking
stimuli to responses through a nonparametric sieve estimator. We give
conditions for consistency when either $n$ or $T$ or both diverge. The rate of
convergence depends upon the error covariance structure, that is allowed to
differ across subjects. With these results we derive the optimal divergence
rate of the dimension of the sieve basis with both $n$ and $T$. We provide
guidance about the optimal balance between the number of subjects and questions
in a laboratory experiment and argue that a large $n$ is often better than a
large $T$. We derive conditions for asymptotic normality of functionals of the
estimator of $f$ and apply them to obtain the asymptotic distribution of the
Wald test when the number of constraints under the null is finite and when it
diverges along with other asymptotic parameters. Lastly, we investigate the
previous properties when the conditional covariance matrix is replaced by an
estimator.",Nonparametric Estimation and Inference in Economic and Psychological Experiments,2019-04-25 07:25:22,"Raffaello Seri, Samuele Centorrino, Michele Bernasconi","http://arxiv.org/abs/1904.11156v3, http://arxiv.org/pdf/1904.11156v3",econ.EM
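To fix ideas, here is a toy polynomial sieve estimator for the stimulus-response function f: responses from all subjects are pooled and regressed on a polynomial basis in the stimulus. How the sieve dimension should grow with n and T, and the subject-specific error covariance structure, are exactly what the paper studies and are not handled in this sketch.

```python
import numpy as np

def sieve_estimate(stimuli, responses, degree):
    """Polynomial sieve estimate of the stimulus-response function f.
    stimuli: (T,) vector of stimuli; responses: (n, T) matrix of responses.
    The polynomial degree plays the role of the sieve dimension, which the
    theory lets diverge with n and T; here it is simply a user choice."""
    x = np.repeat(np.asarray(stimuli, dtype=float)[None, :],
                  responses.shape[0], axis=0).ravel()
    y = np.asarray(responses, dtype=float).ravel()
    B = np.vander(x, degree + 1, increasing=True)     # basis: 1, x, x^2, ...
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return lambda x_new: np.vander(np.atleast_1d(x_new), degree + 1,
                                   increasing=True) @ coef
```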
30094,em,"This paper investigates the role of high-dimensional information sets in the
context of Markov switching models with time varying transition probabilities.
Markov switching models are commonly employed in empirical macroeconomic
research and policy work. However, the information used to model the switching
process is usually limited drastically to ensure stability of the model.
Increasing the number of included variables to enlarge the information set
might even result in decreasing precision of the model. Moreover, it is often
not clear a priori which variables are actually relevant when it comes to
informing the switching behavior. Building strongly on recent contributions in
the field of factor analysis, we introduce a general type of Markov switching
autoregressive models for non-linear time series analysis. Large numbers of
time series are allowed to inform the switching process through a factor
structure. This factor-augmented Markov switching (FAMS) model overcomes
estimation issues that are likely to arise in previous assessments of the
modeling framework. More accurate estimates of the switching behavior as well
as improved model fit result. The performance of the FAMS model is illustrated
in a simulated data example as well as in an US business cycle application.",A Factor-Augmented Markov Switching (FAMS) Model,2019-04-30 15:44:05,"Gregor Zens, Maximilian Böck","http://arxiv.org/abs/1904.13194v2, http://arxiv.org/pdf/1904.13194v2",econ.EM
30095,em,"Structural models that admit multiple reduced forms, such as game-theoretic
models with multiple equilibria, pose challenges in practice, especially when
parameters are set-identified and the identified set is large. In such cases,
researchers often choose to focus on a particular subset of equilibria for
counterfactual analysis, but this choice can be hard to justify. This paper
shows that some parameter values can be more ""desirable"" than others for
counterfactual analysis, even if they are empirically equivalent given the
data. In particular, within the identified set, some counterfactual predictions
can exhibit more robustness than others, against local perturbations of the
reduced forms (e.g. the equilibrium selection rule). We provide a
representation of this subset which can be used to simplify the implementation.
We illustrate our message using moment inequality models, and provide an
empirical application based on a model with top-coded data.",Counterfactual Analysis under Partial Identification Using Locally Robust Refinement,2019-05-31 19:50:57,"Nathan Canen, Kyungchul Song","http://arxiv.org/abs/1906.00003v3, http://arxiv.org/pdf/1906.00003v3",econ.EM
30096,em,"We offer a rationalization of the weak generalized axiom of revealed
preference (WGARP) for both finite and infinite data sets of consumer choice.
We call it maximin rationalization, in which each pairwise choice is associated
with a ""local"" utility function. We develop its associated weak
revealed-preference theory. We show that preference recoverability and welfare
analysis \`a la Varian (1982) may not be informative enough when the weak
axiom holds but consumers are not utility maximizers. We clarify the
reasons for this failure and provide new informative bounds for the consumer's
true preferences.",The Theory of Weak Revealed Preference,2019-06-02 00:34:01,"Victor H. Aguiar, Per Hjertstrand, Roberto Serrano","http://arxiv.org/abs/1906.00296v1, http://arxiv.org/pdf/1906.00296v1",econ.TH
30097,em,"Over the last decade, big data have poured into econometrics, demanding new
statistical methods for analysing high-dimensional data and complex non-linear
relationships. A common approach for addressing dimensionality issues relies on
the use of static graphical structures for extracting the most significant
dependence interrelationships between the variables of interest. Recently,
Bayesian nonparametric techniques have become popular for modelling complex
phenomena in a flexible and efficient manner, but only few attempts have been
made in econometrics. In this paper, we provide an innovative Bayesian
nonparametric (BNP) time-varying graphical framework for making inference in
high-dimensional time series. We include a Bayesian nonparametric dependent
prior specification on the matrix of coefficients and the covariance matrix by
means of a Time-Series DPP as in Nieto-Barajas et al. (2012). Following Billio
et al. (2019), our hierarchical prior overcomes over-parametrization and
over-fitting issues by clustering the vector autoregressive (VAR) coefficients
into groups and by shrinking the coefficients of each group toward a common
location. Our BNP time-varying VAR model is based on a spike-and-slab
construction coupled with a dependent Dirichlet Process prior (DPP) and allows
us to: (i) infer time-varying Granger causality networks from time series; (ii)
flexibly model and cluster non-zero time-varying coefficients; (iii)
accommodate for potential non-linearities. In order to assess the performance
of the model, we study the merits of our approach by considering a well-known
macroeconomic dataset. Moreover, we check the robustness of the method by
comparing two alternative specifications, with Dirac and diffuse spike prior
distributions.",Bayesian nonparametric graphical models for time-varying parameters VAR,2019-06-03 12:21:21,"Matteo Iacopini, Luca Rossini","http://arxiv.org/abs/1906.02140v1, http://arxiv.org/pdf/1906.02140v1",econ.EM
30098,em,"Market sectors play a key role in the efficient flow of capital through the
modern Global economy. We analyze existing sectorization heuristics, and
observe that the most popular - the GICS (which informs the S&P 500), and the
NAICS (published by the U.S. Government) - are not entirely quantitatively
driven, but rather appear to be highly subjective and rooted in dogma. Building
on inferences from analysis of the capital structure irrelevance principle and
the Modigliani-Miller theoretic universe conditions, we postulate that
corporation fundamentals - particularly those components specific to the
Modigliani-Miller universe conditions - would be optimal descriptors of the
true economic domain of operation of a company. We generate a set of potential
candidate learned sector universes by varying the linkage method of a
hierarchical clustering algorithm, and the number of resulting sectors derived
from the model (ranging from 5 to 19), resulting in a total of 60 candidate
learned sector universes. We then introduce reIndexer, a backtest-driven sector
universe evaluation research tool, to rank the candidate sector universes
produced by our learned sector classification heuristic. This rank was utilized
to identify the risk-adjusted return optimal learned sector universe as being
the universe generated under CLINK (i.e. complete linkage), with 17 sectors.
The optimal learned sector universe was tested against the benchmark GICS
classification universe with reIndexer, outperforming on both absolute
portfolio value, and risk-adjusted return over the backtest period. We conclude
that our fundamentals-driven Learned Sector classification heuristic provides a
superior risk-diversification profile to that of the status quo classification
heuristic.",Learned Sectors: A fundamentals-driven sector reclassification project,2019-05-31 05:13:00,"Rukmal Weerawarana, Yiyi Zhu, Yuzhen He","http://arxiv.org/abs/1906.03935v1, http://arxiv.org/pdf/1906.03935v1",q-fin.GN
30099,em,"The presence of \b{eta}-convergence in European regions is an important issue
to be analyzed. In this paper, we adopt a quantile regression approach in
analyzing economic convergence. While previous work has performed quantile
regression at the national level, we focus on 187 European NUTS2 regions for
the period 1981-2009 and use spatial quantile regression to account for spatial
dependence.",Regional economic convergence and spatial quantile regression,2019-06-11 17:16:44,"Alfredo Cartone, Geoffrey JD Hewings, Paolo Postiglione","http://arxiv.org/abs/1906.04613v1, http://arxiv.org/pdf/1906.04613v1",stat.AP
30100,em,"We propose new confidence sets (CSs) for the regression discontinuity
parameter in fuzzy designs. Our CSs are based on local linear regression, and
are bias-aware, in the sense that they take possible bias explicitly into
account. Their construction shares similarities with that of Anderson-Rubin CSs
in exactly identified instrumental variable models, and thereby avoids issues
with ""delta method"" approximations that underlie most commonly used existing
inference methods for fuzzy regression discontinuity analysis. Our CSs are
asymptotically equivalent to existing procedures in canonical settings with
strong identification and a continuous running variable. However, due to their
particular construction they are also valid under a wide range of empirically
relevant conditions in which existing methods can fail, such as setups with
discrete running variables, donut designs, and weak identification.",Bias-Aware Inference in Fuzzy Regression Discontinuity Designs,2019-06-11 17:49:29,"Claudia Noack, Christoph Rothe","http://arxiv.org/abs/1906.04631v4, http://arxiv.org/pdf/1906.04631v4",econ.EM
30101,em,"Economists are often interested in estimating averages with respect to
distributions of unobservables, such as moments of individual fixed-effects, or
average partial effects in discrete choice models. For such quantities, we
propose and study posterior average effects (PAE), where the average is
computed conditional on the sample, in the spirit of empirical Bayes and
shrinkage methods. While the usefulness of shrinkage for prediction is
well-understood, a justification of posterior conditioning to estimate
population averages is currently lacking. We show that PAE have minimum
worst-case specification error under various forms of misspecification of the
parametric distribution of unobservables. In addition, we introduce a measure
of informativeness of the posterior conditioning, which quantifies the
worst-case specification error of PAE relative to parametric model-based
estimators. As illustrations, we report PAE estimates of distributions of
neighborhood effects in the US, and of permanent and transitory components in a
model of income dynamics.",Posterior Average Effects,2019-06-14 21:22:47,"Stéphane Bonhomme, Martin Weidner","http://arxiv.org/abs/1906.06360v6, http://arxiv.org/pdf/1906.06360v6",econ.EM
30102,em,"Based on the high-frequency recordings from Kraken, a cryptocurrency exchange
and professional trading platform that aims to bring Bitcoin and other
cryptocurrencies into the mainstream, the multiscale cross-correlations
involving the Bitcoin (BTC), Ethereum (ETH), Euro (EUR) and US dollar (USD) are
studied over the period between July 1, 2016 and December 31, 2018. It is shown
that the multiscaling characteristics of the exchange rate fluctuations related
to the cryptocurrency market approach those of the Forex. This, in particular,
applies to the BTC/ETH exchange rate, whose Hurst exponent by the end of 2018
started approaching the value of 0.5, which is characteristic of the mature
world markets. Furthermore, the BTC/ETH direct exchange rate has already
developed multifractality, which manifests itself via broad singularity
spectra. A particularly significant result is that the measures applied for
detecting cross-correlations between the dynamics of the BTC/ETH and EUR/USD
exchange rates do not show any noticeable relationships. This may be taken as
an indication that the cryptocurrency market has begun decoupling itself from
the Forex.",Signatures of crypto-currency market decoupling from the Forex,2019-06-19 01:36:37,"Stanisław Drożdż, Ludovico Minati, Paweł Oświęcimka, Marek Stanuszek, Marcin Wątorek","http://dx.doi.org/10.3390/fi11070154, http://arxiv.org/abs/1906.07834v2, http://arxiv.org/pdf/1906.07834v2",q-fin.ST
30103,em,"A joint conditional autoregressive expectile and Expected Shortfall framework
is proposed. The framework is extended through incorporating a measurement
equation which models the contemporaneous dependence between the realized
measures and the latent conditional expectile. A nonlinear threshold
specification is further incorporated into the proposed framework. A Bayesian
Markov Chain Monte Carlo method is adapted for estimation, whose properties are
assessed and compared with maximum likelihood via a simulation study.
One-day-ahead VaR and ES forecasting studies, with seven market indices,
provide empirical support to the proposed models.",Semi-parametric Realized Nonlinear Conditional Autoregressive Expectile and Expected Shortfall,2019-06-21 06:56:19,"Chao Wang, Richard Gerlach","http://arxiv.org/abs/1906.09961v1, http://arxiv.org/pdf/1906.09961v1",q-fin.RM
30104,em,"This study aims to find a Box-Jenkins time series model for the monthly OFW's
remittance in the Philippines. Forecasts of OFW's remittance for the years 2018
and 2019 will be generated using the appropriate time series model. The data
were retrieved from the official website of Bangko Sentral ng Pilipinas. There
are 108 observations, 96 of which were used in model building and the remaining
12 observations were used in forecast evaluation. ACF and PACF were used to
examine the stationarity of the series. Augmented Dickey Fuller test was used
to confirm the stationarity of the series. The data were found to have a
seasonal component, thus, seasonality has been considered in the final model
which is SARIMA (2,1,0)x(0,0,2)_12. There are no significant spikes in the ACF
and PACF of the residuals of the final model, and the Ljung-Box Q* test confirms
further that the residuals of the model are uncorrelated. Also, based on the
result of the Shapiro-Wilk test, the forecast errors
can be considered Gaussian white noise. Considering the results of diagnostic
checking and forecast evaluation, SARIMA (2,1,0)x(0,0,2)_12 is an appropriate
model for the series. All necessary computations were done using the R
statistical software.",Forecasting the Remittances of the Overseas Filipino Workers in the Philippines,2019-06-25 12:53:29,"Merry Christ E. Manayaga, Roel F. Ceballos","http://arxiv.org/abs/1906.10422v1, http://arxiv.org/pdf/1906.10422v1",stat.AP
30105,em,"In this study we used company level administrative data from the National
Labour Inspectorate and The Polish Social Insurance Institution in order to
estimate the prevalence of informal employment in Poland. Since the selection
mechanism is non-ignorable we employed a generalization of Heckman's sample
selection model assuming non-Gaussian correlation of errors and clustering by
incorporation of random effects. We found that 5.7% (4.6%, 7.1%; 95% CI) of
registered enterprises in Poland, to some extent, take advantage of the
informal labour force. Our study exemplifies a new approach to measuring
informal employment, which can be implemented in other countries. It also
contributes to the existing literature by providing, to the best of our
knowledge, the first estimates of informal employment at the level of companies
based solely on administrative data.",Estimation of the size of informal employment based on administrative records with non-ignorable selection mechanism,2019-06-26 13:29:12,"Maciej Beręsewicz, Dagmara Nikulin","http://dx.doi.org/10.1111/rssc.12481, http://arxiv.org/abs/1906.10957v1, http://arxiv.org/pdf/1906.10957v1",stat.AP
30106,em,"Purchase data from retail chains provide proxy measures of private household
expenditure on items that are the most troublesome to collect in the
traditional expenditure survey. Due to the sheer amount of proxy data, the bias
due to coverage and selection errors completely dominates the variance. We
develop tests for bias based on audit sampling, which makes use of available
survey data that cannot be linked to the proxy data source at the individual
level. However, audit sampling fails to yield a meaningful mean squared error
estimate, because the sampling variance is too large compared to the bias of
the big data estimate. We propose a novel accuracy measure that is applicable
in such situations. This can provide a necessary part of the statistical
argument for the uptake of a big data source, in place of traditional
survey sampling. An application to a disaggregated food price index is used to
demonstrate the proposed approach.",Proxy expenditure weights for Consumer Price Index: Audit sampling inference for big data statistics,2019-06-15 13:19:13,Li-Chun Zhang,"http://arxiv.org/abs/1906.11208v1, http://arxiv.org/pdf/1906.11208v1",econ.EM
30109,em,"Datasets that are terabytes in size are increasingly common, but computer
bottlenecks often frustrate a complete analysis of the data. While more data
are better than less, diminishing returns suggest that we may not need
terabytes of data to estimate a parameter or test a hypothesis. But which rows
of data should we analyze, and might an arbitrary subset of rows preserve the
features of the original data? This paper reviews a line of work that is
grounded in theoretical computer science and numerical linear algebra, and
which finds that an algorithmically desirable sketch, which is a randomly
chosen subset of the data, must preserve the eigenstructure of the data, a
property known as a subspace embedding. Building on this work, we study how
prediction and inference can be affected by data sketching within a linear
regression setup. We show that the sketching error is small compared to the
sample size effect which a researcher can control. As a sketch size that is
algorithmically optimal may not be suitable for prediction and inference, we
use statistical arguments to provide 'inference conscious' guides to the sketch
size. When appropriately implemented, an estimator that pools over different
sketches can be nearly as efficient as the infeasible one using the full
sample.",An Econometric Perspective on Algorithmic Subsampling,2019-07-03 17:04:12,"Sokbae Lee, Serena Ng","http://arxiv.org/abs/1907.01954v4, http://arxiv.org/pdf/1907.01954v4",econ.EM
30110,em,"In economic development, there are often regions that share similar economic
characteristics, and economic models on such regions tend to have similar
covariate effects. In this paper, we propose a Bayesian clustered regression
for spatially dependent data in order to detect clusters in the covariate
effects. Our proposed method is based on the Dirichlet process which provides a
probabilistic framework for simultaneous inference of the number of clusters
and the clustering configurations. The usage of our method is illustrated in
both simulation studies and an application to a housing cost dataset of Georgia.",Heterogeneous Regression Models for Clusters of Spatial Dependent Data,2019-07-04 07:13:36,"Zhihua Ma, Yishu Xue, Guanyu Hu","http://dx.doi.org/10.1080/17421772.2020.1784989, http://arxiv.org/abs/1907.02212v4, http://arxiv.org/pdf/1907.02212v4",econ.EM
30111,em,"This paper considers a semiparametric generalized autoregressive conditional
heteroskedasticity (S-GARCH) model. For this model, we first estimate the
time-varying long run component for unconditional variance by the kernel
estimator, and then estimate the non-time-varying parameters in GARCH-type
short run component by the quasi maximum likelihood estimator (QMLE). We show
that the QMLE is asymptotically normal with the parametric convergence rate.
Next, we construct a Lagrange multiplier test for linear parameter constraint
and a portmanteau test for model checking, and obtain their asymptotic null
distributions. Our entire statistical inference procedure works for
non-stationary data with two important features: first, our QMLE and two tests
are adaptive to the unknown form of the long run component; second, our QMLE
and two tests share the same efficiency and testing power as those of the variance
targeting method when the S-GARCH model is stationary.",Adaptive inference for a semiparametric generalized autoregressive conditional heteroskedasticity model,2019-07-09 16:24:40,"Feiyu Jiang, Dong Li, Ke Zhu","http://arxiv.org/abs/1907.04147v4, http://arxiv.org/pdf/1907.04147v4",stat.ME
30112,em,"We propose a framework for nonparametric identification and estimation of
discrete choice models with unobserved choice sets. We recover the joint
distribution of choice sets and preferences from a panel dataset on choices. We
assume that either the latent choice sets are sparse or that the panel is
sufficiently long. Sparsity requires the number of possible choice sets to be
relatively small. It is satisfied, for instance, when the choice sets are
nested, or when they form a partition. Our estimation procedure is
computationally fast and uses mixed-integer optimization to recover the sparse
support of choice sets. Analyzing the ready-to-eat cereal industry using a
household scanner dataset, we find that ignoring the unobservability of choice
sets can lead to biased estimates of preferences due to significant latent
heterogeneity in choice sets.",Identification and Estimation of Discrete Choice Models with Unobserved Choice Sets,2019-07-09 21:53:22,"Victor H. Aguiar, Nail Kashaev","http://arxiv.org/abs/1907.04853v3, http://arxiv.org/pdf/1907.04853v3",econ.EM
30113,em,"Climate change is a massive multidimensional shift. Temperature shifts, in
particular, have important implications for urbanization, agriculture, health,
productivity, and poverty, among other things. While much research has
documented rising mean temperature \emph{levels}, we also examine range-based
measures of daily temperature \emph{volatility}. Specifically, using data for
select U.S. cities over the past half-century, we compare the evolving time
series dynamics of the average temperature level, AVG, and the diurnal
temperature range, DTR (the difference between the daily maximum and minimum
temperatures). We characterize trend and seasonality in these two series using
linear models with time-varying coefficients. These straightforward yet
flexible approximations provide evidence of evolving DTR seasonality and stable
AVG seasonality.",On the Evolution of U.S. Temperature Dynamics,2019-07-15 03:27:20,"Francis X. Diebold, Glenn D. Rudebusch","http://arxiv.org/abs/1907.06303v3, http://arxiv.org/pdf/1907.06303v3",stat.AP
30114,em,"Travel decisions tend to exhibit sensitivity to uncertainty and information
processing constraints. These behavioural conditions can be characterized by a
generative learning process. We propose a data-driven generative model version
of rational inattention theory to emulate these behavioural representations. We
outline the methodology of the generative model and the associated learning
process as well as provide an intuitive explanation of how this process
captures the value of prior information in the choice utility specification. We
demonstrate the effects of information heterogeneity on a travel choice,
analyze the econometric interpretation, and explore the properties of our
generative model. Our findings indicate a strong correlation with rational
inattention behaviour theory, which suggests that individuals may ignore certain
exogenous variables and rely on prior information for evaluating decisions
under uncertainty. Finally, the principles demonstrated in this study can be
formulated as a generalized entropy and utility based multinomial logit model.",Information processing constraints in travel behaviour modelling: A generative learning approach,2019-07-16 17:35:23,"Melvin Wong, Bilal Farooq","http://arxiv.org/abs/1907.07036v2, http://arxiv.org/pdf/1907.07036v2",econ.EM
30134,em,"With the rapid development of Internet finance, a large number of studies
have shown that Internet financial platforms have different financial systemic
risk characteristics when they are subject to macroeconomic shocks or fragile
internal crises. From the perspective of the regional development of Internet
finance, this paper uses the t-SNE machine learning algorithm to mine China's
Internet finance development index, covering 31 provinces and 335 cities and
regions. We document peaked and thick-tailed characteristics and then propose a
three-way classification of Internet financial systemic risk, providing more
regionally targeted recommendations for the systemic risk of Internet
finance.",Systemic Risk Clustering of China Internet Financial Based on t-SNE Machine Learning Algorithm,2019-08-30 22:25:17,"Mi Chuanmin, Xu Runjie, Lin Qingtong","http://arxiv.org/abs/1909.03808v1, http://arxiv.org/pdf/1909.03808v1",q-fin.ST
30115,em,"Time-varying parameter (TVP) models are widely used in time series analysis
to flexibly deal with processes which gradually change over time. However, the
risk of overfitting in TVP models is well known. This issue can be dealt with
using appropriate global-local shrinkage priors, which pull time-varying
parameters towards static ones. In this paper, we introduce the R package
shrinkTVP (Knaus, Bitto-Nemling, Cadonna, and Fr\""uhwirth-Schnatter 2019),
which provides a fully Bayesian implementation of shrinkage priors for TVP
models, taking advantage of recent developments in the literature, in
particular that of Bitto and Fr\""uhwirth-Schnatter (2019). The package
shrinkTVP allows for posterior simulation of the parameters through an
efficient Markov Chain Monte Carlo (MCMC) scheme. Moreover, summary and
visualization methods, as well as the possibility of assessing predictive
performance through log predictive density scores (LPDSs), are provided. The
computationally intensive tasks have been implemented in C++ and interfaced
with R. The paper includes a brief overview of the models and shrinkage priors
implemented in the package. Furthermore, core functionalities are illustrated,
both with simulated and real data.",Shrinkage in the Time-Varying Parameter Model Framework Using the R Package shrinkTVP,2019-07-16 18:24:42,"Peter Knaus, Angela Bitto-Nemling, Annalisa Cadonna, Sylvia Frühwirth-Schnatter","http://arxiv.org/abs/1907.07065v3, http://arxiv.org/pdf/1907.07065v3",econ.EM
30116,em,"The heterogeneous autoregressive (HAR) model is revised by modeling the joint
distribution of the four partial-volatility terms therein involved. Namely,
today's, yesterday's, last week's and last month's volatility components. The
joint distribution relies on a (C-) Vine copula construction, allowing one to
conveniently extract volatility forecasts based on the conditional expectation
of today's volatility given its past terms. The proposed empirical application
involves more than seven years of high-frequency transaction prices for ten
stocks and evaluates the in-sample, out-of-sample and one-step-ahead forecast
performance of our model for daily realized-kernel measures. The model proposed
in this paper is shown to outperform the HAR counterpart under different models
for marginal distributions, copula construction methods, and forecasting
settings.",A Vine-copula extension for the HAR model,2019-07-19 17:18:50,Martin Magris,"http://arxiv.org/abs/1907.08522v1, http://arxiv.org/pdf/1907.08522v1",econ.EM
30117,em,"This study conducts a benchmarking study, comparing 23 different statistical
and machine learning methods in a credit scoring application. In order to do
so, the models' performance is evaluated over four different data sets in
combination with five data sampling strategies to tackle existing class
imbalances in the data. Six different performance measures are used to cover
different aspects of predictive performance. The results indicate a strong
superiority of ensemble methods and show that simple sampling strategies
deliver better results than more sophisticated ones.",Predicting credit default probabilities using machine learning techniques in the face of unequal class distributions,2019-07-30 18:01:10,Anna Stelzer,"http://arxiv.org/abs/1907.12996v1, http://arxiv.org/pdf/1907.12996v1",econ.EM
30118,em,"We discuss a simplified version of the testing problem considered by Pelican
and Graham (2019): testing for interdependencies in preferences over links
among N (possibly heterogeneous) agents in a network. We describe an exact test
which conditions on a sufficient statistic for the nuisance parameter
characterizing any agent-level heterogeneity. Employing an algorithm due to
Blitzstein and Diaconis (2011), we show how to simulate the null distribution
of the test statistic in order to estimate critical values and/or p-values. We
illustrate our methods using the Nyakatoke risk-sharing network. We find that
the transitivity of the Nyakatoke network far exceeds what can be explained by
degree heterogeneity across households alone.",Testing for Externalities in Network Formation Using Simulation,2019-08-01 00:10:04,"Bryan S. Graham, Andrin Pelican","http://arxiv.org/abs/1908.00099v1, http://arxiv.org/pdf/1908.00099v1",stat.ME
30119,em,"We study a class of permutation tests of the randomness of a collection of
Bernoulli sequences and their application to analyses of the human tendency to
perceive streaks of consecutive successes as overly representative of positive
dependence - the hot hand fallacy. In particular, we study permutation tests of
the null hypothesis of randomness (i.e., that trials are i.i.d.) based on test
statistics that compare the proportion of successes that directly follow k
consecutive successes with either the overall proportion of successes or the
proportion of successes that directly follow k consecutive failures. We
characterize the asymptotic distributions of these test statistics and their
permutation distributions under randomness, under a set of general stationary
processes, and under a class of Markov chain alternatives, which allow us to
derive their local asymptotic power. The results are applied to evaluate the
empirical support for the hot hand fallacy provided by four controlled
basketball shooting experiments. We establish that substantially larger data
sets are required to derive an informative measurement of the deviation from
randomness in basketball shooting. In one experiment, for which we were able to
obtain data, multiple testing procedures reveal that one shooter exhibits a
shooting pattern significantly inconsistent with randomness - supplying strong
evidence that basketball shooting is not random for all shooters all of the
time. However, we find that the evidence against randomness in this experiment
is limited to this shooter. Our results provide a mathematical and statistical
foundation for the design and validation of experiments that directly compare
deviations from randomness with human beliefs about deviations from randomness,
and thereby constitute a direct test of the hot hand fallacy.",Uncertainty in the Hot Hand Fallacy: Detecting Streaky Alternatives to Random Bernoulli Sequences,2019-08-05 00:49:14,"David M. Ritzwoller, Joseph P. Romano","http://arxiv.org/abs/1908.01406v6, http://arxiv.org/pdf/1908.01406v6",econ.EM
30120,em,"This paper develops the asymptotic theory of a Fully Modified Generalized
Least Squares estimator for multivariate cointegrating polynomial regressions.
Such regressions allow for deterministic trends, stochastic trends and integer
powers of stochastic trends to enter the cointegrating relations. Our fully
modified estimator incorporates: (1) the direct estimation of the inverse
autocovariance matrix of the multidimensional errors, and (2) second order bias
corrections. The resulting estimator has the intuitive interpretation of
applying a weighted least squares objective function to filtered data series.
Moreover, the required second order bias corrections are convenient byproducts
of our approach and lead to standard asymptotic inference. We also study
several multivariate KPSS-type tests for the null of cointegration. A
comprehensive simulation study shows good performance of the FM-GLS estimator
and the related tests. As a practical illustration, we reinvestigate the
Environmental Kuznets Curve (EKC) hypothesis for six early industrialized
countries as in Wagner et al. (2020).",Efficient Estimation by Fully Modified GLS with an Application to the Environmental Kuznets Curve,2019-08-07 15:17:30,"Yicong Lin, Hanno Reuvers","http://arxiv.org/abs/1908.02552v2, http://arxiv.org/pdf/1908.02552v2",econ.EM
30121,em,"A generalized distributed tool for mobility choice modelling is presented,
where participants do not share personal raw data, while all computations are
done locally. Participants use Blockchain based Smart Mobility Data-market
(BSMD), where all transactions are secure and private. Nodes in the blockchain can
transact information with other participants as long as both parties agree to
the transaction rules issued by the owner of the data. A case study is
presented where a mode choice model is distributed and estimated over BSMD. As
an example, the parameter estimation problem is solved on a distributed version
of simulated annealing. It is demonstrated that the estimated model parameters
are consistent and reproducible.",Privacy-Aware Distributed Mobility Choice Modelling over Blockchain,2019-08-09 16:08:26,"David Lopez, Bilal Farooq","http://arxiv.org/abs/1908.03446v2, http://arxiv.org/pdf/1908.03446v2",cs.CR
30122,em,"We propose a modification of the classical Black-Derman-Toy (BDT) interest
rate tree model, which includes the possibility of a jump with small
probability at each step to a practically zero interest rate. The corresponding
BDT algorithms are consequently modified to calibrate the tree containing the
zero interest rate scenarios. This modification is motivated by the recent
2008-2009 crisis in the United States and it quantifies the risk of a future
crisis in bond prices and derivatives. The proposed model is useful for pricing
derivatives. This exercise also provides a tool to calibrate the probability of
this event. A comparison of option prices and implied volatilities on US
Treasury bonds computed with both the proposed and the classical tree model is
provided, in six different scenarios along the different periods comprising the
years 2002-2017.",Zero Black-Derman-Toy interest rate model,2019-08-13 00:29:42,"Grzegorz Krzyżanowski, Ernesto Mordecki, Andrés Sosa","http://arxiv.org/abs/1908.04401v2, http://arxiv.org/pdf/1908.04401v2",econ.EM
30123,em,"I develop a model of a randomized experiment with a binary intervention and a
binary outcome. Potential outcomes in the intervention and control groups give
rise to four types of participants. Fixing ideas such that the outcome is
mortality, some participants would live regardless, others would be saved,
others would be killed, and others would die regardless. These potential
outcome types are not observable. However, I use the model to develop
estimators of the number of participants of each type. The model relies on the
randomization within the experiment and on deductive reasoning. I apply the
model to an important clinical trial, the PROWESS trial, and I perform a Monte
Carlo simulation calibrated to estimates from the trial. The reduced form from
the trial shows a reduction in mortality, which provided a rationale for FDA
approval. However, I find that the intervention killed two participants for
every three it saved.",A Model of a Randomized Experiment with an Application to the PROWESS Clinical Trial,2019-08-16 04:40:05,Amanda Kowalski,"http://arxiv.org/abs/1908.05810v2, http://arxiv.org/pdf/1908.05810v2",stat.ME
30124,em,"The LATE monotonicity assumption of Imbens and Angrist (1994) precludes
""defiers,"" individuals whose treatment always runs counter to the instrument,
in the terminology of Balke and Pearl (1993) and Angrist et al. (1996). I allow
for defiers in a model with a binary instrument and a binary treatment. The
model is explicit about the randomization process that gives rise to the
instrument. I use the model to develop estimators of the counts of defiers,
always takers, compliers, and never takers. I propose separate versions of the
estimators for contexts in which the parameter of the randomization process is
unspecified, which I intend for use with natural experiments with virtual
random assignment. I present an empirical application that revisits Angrist and
Evans (1998), which examines the impact of virtual random assignment of the sex
of the first two children on subsequent fertility. I find that subsequent
fertility is much more responsive to the sex mix of the first two children when
defiers are allowed.",Counting Defiers,2019-08-16 04:44:57,Amanda Kowalski,"http://arxiv.org/abs/1908.05811v2, http://arxiv.org/pdf/1908.05811v2",stat.ME
30125,em,"The drift diffusion model (DDM) is a model of sequential sampling with
diffusion (Brownian) signals, where the decision maker accumulates evidence
until the process hits a stopping boundary, and then stops and chooses the
alternative that corresponds to that boundary. This model has been widely used
in psychology, neuroeconomics, and neuroscience to explain the observed
patterns of choice and response times in a range of binary choice decision
problems. This paper provides a statistical test for DDM's with general
boundaries. We first prove a characterization theorem: we find a condition on
choice probabilities that is satisfied if and only if the choice probabilities
are generated by some DDM. Moreover, we show that the drift and the boundary
are uniquely identified. We then use our condition to nonparametrically
estimate the drift and the boundary and construct a test statistic.",Testing the Drift-Diffusion Model,2019-08-16 06:05:49,"Drew Fudenberg, Whitney K. Newey, Philipp Strack, Tomasz Strzalecki","http://dx.doi.org/10.1073/pnas.2011446117, http://arxiv.org/abs/1908.05824v1, http://arxiv.org/pdf/1908.05824v1",econ.EM
30126,em,"This paper investigates the time-varying impacts of international
macroeconomic uncertainty shocks. We use a global vector autoregressive
specification with drifting coefficients and factor stochastic volatility in
the errors to model six economies jointly. The measure of uncertainty is
constructed endogenously by estimating a scalar driving the innovation
variances of the latent factors, which is also included in the mean of the
process. To achieve regularization, we use Bayesian techniques for estimation,
and introduce a set of hierarchical global-local priors. The adopted priors
center the model on a constant parameter specification with homoscedastic
errors, but allow for time-variation if suggested by likelihood information.
Moreover, we assume coefficients across economies to be similar, but provide
sufficient flexibility via the hierarchical prior for country-specific
idiosyncrasies. The results point towards pronounced real and financial effects
of uncertainty shocks in all countries, with differences across economies and
over time.",Measuring international uncertainty using global vector autoregressions with drifting parameters,2019-08-17 20:47:19,Michael Pfarrhofer,"http://arxiv.org/abs/1908.06325v2, http://arxiv.org/pdf/1908.06325v2",econ.EM
30127,em,"Dyadic data, where outcomes reflecting pairwise interaction among sampled
units are of primary interest, arise frequently in social science research.
Regression analyses with such data feature prominently in many research
literatures (e.g., gravity models of trade). The dependence structure
associated with dyadic data raises special estimation and, especially,
inference issues. This chapter reviews currently available methods for
(parametric) dyadic regression analysis and presents guidelines for empirical
researchers.",Dyadic Regression,2019-08-23 23:31:25,Bryan S. Graham,"http://arxiv.org/abs/1908.09029v1, http://arxiv.org/pdf/1908.09029v1",econ.EM
30128,em,"This paper presents the asymptotic behavior of a linear instrumental
variables (IV) estimator that uses a ridge regression penalty. The
regularization tuning parameter is selected empirically by splitting the
observed data into training and test samples. Conditional on the tuning
parameter, the training sample creates a path from the IV estimator to a prior.
The optimal tuning parameter is the value along this path that minimizes the IV
objective function for the test sample.
  The empirically selected regularization tuning parameter becomes an estimated
parameter that jointly converges with the parameters of interest. The
asymptotic distribution of the tuning parameter is a nonstandard mixture
distribution. Monte Carlo simulations show the asymptotic distribution captures
the characteristics of the sampling distributions and when this ridge estimator
performs better than two-stage least squares.",The Ridge Path Estimator for Linear Instrumental Variables,2019-08-25 03:09:10,"Nandana Sengupta, Fallaw Sowell","http://arxiv.org/abs/1908.09237v1, http://arxiv.org/pdf/1908.09237v1",econ.EM
30129,em,"Forecasting costs is now a front burner in empirical economics. We propose an
unconventional tool for stochastic prediction of future expenses based on the
individual (micro) developments of recorded events. Consider a firm,
enterprise, institution, or state, which possesses knowledge about particular
historical events. For each event, there is a series of several related
subevents: payments or losses spread over time, which all leads to an
infinitely stochastic process at the end. Nevertheless, the issue is that some
events that have already occurred are not necessarily reported. The aim lies in
forecasting future subevent flows coming from already reported, occurred but
not reported, and not yet occurred events. Our methodology is illustrated on
quantitative risk assessment; however, it can be applied to other areas such as
startups, epidemics, war damages, advertising and commercials, digital
payments, or drug prescription as manifested in the paper. As a theoretical
contribution, inference for infinitely stochastic processes is developed. In
particular, a non-homogeneous Poisson process with non-homogeneous Poisson
processes as marks is used, which includes for instance the Cox process as a
special case.",Infinitely Stochastic Micro Forecasting,2019-08-28 13:54:18,"Matúš Maciak, Ostap Okhrin, Michal Pešta","http://arxiv.org/abs/1908.10636v2, http://arxiv.org/pdf/1908.10636v2",econ.EM
30130,em,"We present a symmetry analysis of the distribution of variations of different
financial indices, by means of a statistical procedure developed by the authors
based on a symmetry statistic by Einmahl and Mckeague. We applied this
statistical methodology to financial uninterrupted daily trends returns and to
other derived observables. In our opinion, to study distributional symmetry,
trends returns offer more advantages than the commonly used daily financial
returns; the two most important being: 1) Trends returns involve sampling over
different time scales and 2) By construction, this variable time series
contains practically the same number of non-negative and negative entry values.
We also show that these time multi-scale returns display distributional
bi-modality. The daily financial indices analyzed in this work are the Mexican
IPC, the American DJIA, DAX from Germany and the Japanese Market index Nikkei,
covering a time period from 11-08-1991 to 06-30-2017. We show that, at the time
scale resolution and significance considered in this paper, it is almost always
feasible to find an interval of possible symmetry points containing one most
plausible symmetry point denoted by C. Finally, we study the temporal evolution
of C showing that this point is seldom zero and responds with sensitivity to
extreme market events.",A multi-scale symmetry analysis of uninterrupted trends returns of daily financial indices,2019-08-27 22:24:37,"C. M. Rodríguez-Martínez, H. F. Coronel-Brizio, A. R. Hernández-Montoya","http://dx.doi.org/10.1016/j.physa.2021.125982, http://arxiv.org/abs/1908.11204v1, http://arxiv.org/pdf/1908.11204v1",q-fin.ST
30131,em,"Language and cultural diversity is a fundamental aspect of the present world.
We study three modern multilingual societies -- the Basque Country, Ireland and
Wales -- which are endowed with two, linguistically distant, official
languages: $A$, spoken by all individuals, and $B$, spoken by a bilingual
minority. In the three cases it is observed a decay in the use of minoritarian
$B$, a sign of diversity loss. However, for the ""Council of Europe"" the key
factor to avoid the shift of $B$ is its use in all domains. Thus, we
investigate the language choices of the bilinguals by means of an evolutionary
game theoretic model. We show that the language population dynamics has reached
an evolutionary stable equilibrium where a fraction of bilinguals have shifted
to speak $A$. Thus, this equilibrium captures the decline in the use of $B$. To
test the theory we build empirical models that predict the use of $B$ for each
proportion of bilinguals. We show that model-based predictions fit the
observed use of Basque, Irish, and Welsh very well.",The economics of minority language use: theory and empirical evidence for a language game model,2019-08-30 12:13:17,"Stefan Sperlich, Jose-Ramon Uriarte","http://arxiv.org/abs/1908.11604v1, http://arxiv.org/pdf/1908.11604v1",econ.EM
30132,em,"Chernozhukov et al. (2018) proposed the sorted effect method for nonlinear
regression models. This method consists of reporting percentiles of the partial
effects in addition to the average commonly used to summarize the heterogeneity
in the partial effects. They also proposed to use the sorted effects to carry
out classification analysis where the observational units are classified as
most and least affected if their causal effects are above or below some tail
sorted effects. The R package SortedEffects implements the estimation and
inference methods therein and provides tools to visualize the results. This
vignette serves as an introduction to the package and displays basic
functionality of the functions within.",SortedEffects: Sorted Causal Effects in R,2019-09-02 22:13:51,"Shuowen Chen, Victor Chernozhukov, Iván Fernández-Val, Ye Luo","http://arxiv.org/abs/1909.00836v3, http://arxiv.org/pdf/1909.00836v3",econ.EM
30133,em,"When researchers develop new econometric methods it is common practice to
compare the performance of the new methods to those of existing methods in
Monte Carlo studies. The credibility of such Monte Carlo studies is often
limited because of the freedom the researcher has in choosing the design. In
recent years a new class of generative models emerged in the machine learning
literature, termed Generative Adversarial Networks (GANs), which can be used to
systematically generate artificial data that closely mimics real economic
datasets, while limiting the degrees of freedom for the researcher and
optionally satisfying privacy guarantees with respect to their training data.
In addition, if an applied researcher is concerned with the performance of a
particular statistical method on a specific data set (beyond its theoretical
properties in large samples), she may wish to assess the performance, e.g., the
coverage rate of confidence intervals or the bias of the estimator, using
simulated data which resembles her setting. To illustrate these methods we
apply Wasserstein GANs (WGANs) to compare a number of different estimators for
average treatment effects under unconfoundedness in three distinct settings
(corresponding to three real data sets) and present a methodology for assessing
the robustness of the results. In this example, we find that (i) there is not
one estimator that outperforms the others in all three settings, so researchers
should tailor their analytic approach to a given setting, and (ii) systematic
simulation studies can be helpful for selecting among competing methods in this
situation.",Using Wasserstein Generative Adversarial Networks for the Design of Monte Carlo Simulations,2019-09-05 08:01:38,"Susan Athey, Guido Imbens, Jonas Metzger, Evan Munro","http://arxiv.org/abs/1909.02210v3, http://arxiv.org/pdf/1909.02210v3",econ.EM
30135,em,"We investigate the nature and extent of reallocation occurring within the
Indian income distribution, with a particular focus on the dynamics of the
bottom of the distribution. Specifically, we use a stochastic model of
Geometric Brownian Motion with a reallocation parameter that was constructed to
capture the quantum and direction of composite redistribution implied in the
income distribution. It is well known that inequality has been rising in India
in the recent past, but the assumption has been that while the rich benefit
more than proportionally from economic growth, the poor are also better off
than before. Findings from our model refute this, as we find that since the
early 2000s reallocation has consistently been negative, and that the Indian
income distribution has entered a regime of perverse redistribution of
resources from the poor to the rich. Outcomes from the model indicate not only
that income shares of the bottom decile (~1%) and bottom percentile (~0.03%)
are at historic lows, but also that real incomes of the bottom decile (-2.5%)
and percentile (-6%) have declined in the 2000s. We validate these findings
using income distribution data and find support for our contention of
persistent negative reallocation in the 2000s. We characterize these findings
in the context of increasing informalization of the workforce in the formal
manufacturing and service sectors, as well as the growing economic insecurity
of the agricultural workforce in India. Significant structural changes will be
required to address this phenomenon.",Dynamics of reallocation within India's income distribution,2019-09-10 15:54:54,"Anand Sahasranaman, Henrik Jeldtoft Jensen","http://arxiv.org/abs/1909.04452v4, http://arxiv.org/pdf/1909.04452v4",econ.EM
30136,em,"To make informed policy recommendations from observational data, we must be
able to discern true treatment effects from random noise and effects due to
confounding. Difference-in-Difference techniques which match treated units to
control units based on pre-treatment outcomes, such as the synthetic control
approach, have been presented as principled methods to account for confounding.
However, we show that use of synthetic controls or other matching procedures
can introduce regression to the mean (RTM) bias into estimates of the average
treatment effect on the treated. Through simulations, we show RTM bias can lead
to inflated type I error rates as well as decreased power in typical policy
evaluation settings. Further, we provide a novel correction for RTM bias which
can reduce bias and attain appropriate type I error rates. This correction can
be used to perform a sensitivity analysis which determines how results may be
affected by RTM. We use our proposed correction and sensitivity analysis to
reanalyze data concerning the effects of California's Proposition 99, a
large-scale tobacco control program, on statewide smoking rates.",Regression to the Mean's Impact on the Synthetic Control Method: Bias and Sensitivity Analysis,2019-09-10 21:48:18,"Nicholas Illenberger, Dylan S. Small, Pamela A. Shaw","http://arxiv.org/abs/1909.04706v1, http://arxiv.org/pdf/1909.04706v1",stat.ME
30137,em,"In this paper, an application of three GARCH-type models (sGARCH, iGARCH, and
tGARCH) with Student t-distribution, Generalized Error distribution (GED), and
Normal Inverse Gaussian (NIG) distribution is examined. The new development
allows for the modeling of volatility clustering effects, the leptokurtic and
the skewed distributions in the return series of Bitcoin. Compared to the
other two distributions, the normal inverse Gaussian distribution adequately
captured the fat tails and skewness in all the GARCH-type models. The tGARCH model was
the best model as it described the asymmetric occurrence of shocks in the
Bitcoin market. That is, the responses of investors to the same amount of good
and bad news are distinct. From the empirical results, it can be concluded that
tGARCH-NIG was the best model to estimate the volatility in the return series
of Bitcoin. Generally, it would be optimal to use the NIG distribution in
GARCH-type models since the time series of most cryptocurrencies are leptokurtic.",Estimating the volatility of Bitcoin using GARCH models,2019-09-11 11:16:48,Samuel Asante Gyamerah,"http://arxiv.org/abs/1909.04903v2, http://arxiv.org/pdf/1909.04903v2",q-fin.ST
30138,em,"We study preferences estimated from finite choice experiments and provide
sufficient conditions for convergence to a unique underlying ""true"" preference.
Our conditions are weak, and therefore valid in a wide range of economic
environments. We develop applications to expected utility theory, choice over
consumption bundles, menu choice and intertemporal consumption. Our framework
unifies the revealed preference tradition with models that allow for errors.",Recovering Preferences from Finite Data,2019-09-12 08:13:37,"Christopher P. Chambers, Federico Echenique, Nicolas Lambert","http://arxiv.org/abs/1909.05457v4, http://arxiv.org/pdf/1909.05457v4",econ.TH
30139,em,"This paper develops a framework for quantile regression in binary
longitudinal data settings. A novel Markov chain Monte Carlo (MCMC) method is
designed to fit the model and its computational efficiency is demonstrated in a
simulation study. The proposed approach is flexible in that it can account for
common and individual-specific parameters, as well as multivariate
heterogeneity associated with several covariates. The methodology is applied to
study female labor force participation and home ownership in the United States.
The results offer new insights at the various quantiles, which are of interest
to policymakers and researchers alike.",Estimation and Applications of Quantile Regression for Binary Longitudinal Data,2019-09-12 13:44:15,"Mohammad Arshad Rahman, Angela Vossmeyer","http://dx.doi.org/10.1108/S0731-90532019000040B009, http://arxiv.org/abs/1909.05560v1, http://arxiv.org/pdf/1909.05560v1",econ.EM
30140,em,"The Wald development of statistical decision theory addresses decision making
with sample data. Wald's concept of a statistical decision function (SDF)
embraces all mappings of the form [data -> decision]. An SDF need not perform
statistical inference; that is, it need not use data to draw conclusions about
the true state of nature. Inference-based SDFs have the sequential form [data
-> inference -> decision]. This paper motivates inference-based SDFs as
practical procedures for decision making that may accomplish some of what Wald
envisioned. The paper first addresses binary choice problems, where all SDFs
may be viewed as hypothesis tests. It next considers as-if optimization, which
uses a point estimate of the true state as if the estimate were accurate. It
then extends this idea to as-if maximin and minimax-regret decisions, which use
point estimates of some features of the true state as if they were accurate.
The paper primarily uses finite-sample maximum regret to evaluate the
performance of inference-based SDFs. To illustrate abstract ideas, it presents
specific findings concerning treatment choice and point prediction with sample
data.",Statistical inference for statistical decisions,2019-09-15 21:24:47,Charles F. Manski,"http://arxiv.org/abs/1909.06853v1, http://arxiv.org/pdf/1909.06853v1",econ.EM
30141,em,"We propose a robust method for constructing conditionally valid prediction
intervals based on models for conditional distributions such as quantile and
distribution regression. Our approach can be applied to important prediction
problems including cross-sectional prediction, k-step-ahead forecasts,
synthetic controls and counterfactual prediction, and individual treatment
effects prediction. Our method exploits the probability integral transform and
relies on permuting estimated ranks. Unlike regression residuals, ranks are
independent of the predictors, allowing us to construct conditionally valid
prediction intervals under heteroskedasticity. We establish approximate
conditional validity under consistent estimation and provide approximate
unconditional validity under model misspecification, overfitting, and with time
series data. We also propose a simple ""shape"" adjustment of our baseline method
that yields optimal prediction intervals.",Distributional conformal prediction,2019-09-17 18:30:21,"Victor Chernozhukov, Kaspar Wüthrich, Yinchu Zhu","http://arxiv.org/abs/1909.07889v3, http://arxiv.org/pdf/1909.07889v3",econ.EM
30142,em,"The empirical analysis of discrete complete-information games has relied on
behavioral restrictions in the form of solution concepts, such as Nash
equilibrium. Choosing the right solution concept is crucial not just for
identification of payoff parameters, but also for the validity and
informativeness of counterfactual exercises and policy implications. We say
that a solution concept is discernible if it is possible to determine whether
it generated the observed data on the players' behavior and covariates. We
propose a set of conditions that make it possible to discern solution concepts.
In particular, our conditions are sufficient to tell whether the players'
choices emerged from Nash equilibria. We can also discern between
rationalizable behavior, maxmin behavior, and collusive behavior. Finally, we
identify the correlation structure of unobserved shocks in our model using a
novel approach.",Discerning Solution Concepts,2019-09-20 07:44:02,"Nail Kashaev, Bruno Salcedo","http://arxiv.org/abs/1909.09320v1, http://arxiv.org/pdf/1909.09320v1",econ.EM
30143,em,"Causal decomposition analyses can help build the evidence base for
interventions that address health disparities (inequities). They ask how
disparities in outcomes may change under hypothetical intervention. Through
study design and assumptions, they can rule out alternate explanations such as
confounding, selection-bias, and measurement error, thereby identifying
potential targets for intervention. Unfortunately, the literature on causal
decomposition analysis and related methods has largely ignored equity concerns
that actual interventionists would respect, limiting their relevance and
practical value. This paper addresses these concerns by explicitly considering
what covariates the outcome disparity and hypothetical intervention adjust for
(so-called allowable covariates) and the equity value judgements these choices
convey, drawing from the bioethics, biostatistics, epidemiology, and health
services research literatures. From this discussion, we generalize
decomposition estimands and formulae to incorporate allowable covariate sets,
to reflect equity choices, while still allowing for adjustment of non-allowable
covariates needed to satisfy causal assumptions. For these general formulae, we
provide weighting-based estimators based on adaptations of
ratio-of-mediator-probability and inverse-odds-ratio weighting. We discuss when
these estimators reduce to already used estimators under certain equity value
judgements, and a novel adaptation under other judgements.","Meaningful causal decompositions in health equity research: definition, identification, and estimation through a weighting framework",2019-09-22 21:19:10,John W. Jackson,"http://arxiv.org/abs/1909.10060v3, http://arxiv.org/pdf/1909.10060v3",stat.ME
30144,em,"Conventional financial models fail to explain the economic and monetary
properties of cryptocurrencies due to the latter's dual nature: their usage as
financial assets on the one side and their tight connection to the underlying
blockchain structure on the other. In an effort to examine both components via
a unified approach, we apply a recently developed Non-Homogeneous Hidden Markov
(NHHM) model with an extended set of financial and blockchain specific
covariates on the Bitcoin (BTC) and Ether (ETH) price data. Based on the
observable series, the NHHM model offers a novel perspective on the underlying
microstructure of the cryptocurrency market and provides insight into
unobservable parameters such as the behavior of investors, traders and miners.
The algorithm identifies two alternating periods (hidden states) of inherently
different activity -- fundamental versus uninformed or noise traders -- in the
Bitcoin ecosystem and unveils differences in both the short/long run dynamics
and in the financial characteristics of the two states, such as significant
explanatory variables, extreme events and varying series autocorrelation. In a
somewhat unexpected result, the Bitcoin and Ether markets are found to be
influenced by markedly distinct indicators despite their perceived correlation.
The current approach backs earlier findings that cryptocurrencies are unlike
any conventional financial asset and makes a first step towards understanding
cryptocurrency markets via a more comprehensive lens.",A Peek into the Unobservable: Hidden States and Bayesian Inference for the Bitcoin and Ether Price Series,2019-09-24 17:33:32,"Constandina Koki, Stefanos Leonardos, Georgios Piliouras","http://arxiv.org/abs/1909.10957v2, http://arxiv.org/pdf/1909.10957v2",econ.EM
30145,em,"We propose a new nonparametric estimator for first-price auctions with
independent private values that imposes the monotonicity constraint on the
estimated inverse bidding strategy. We show that our estimator has a smaller
asymptotic variance than that of Guerre, Perrigne and Vuong's (2000) estimator.
In addition to establishing pointwise asymptotic normality of our estimator, we
provide a bootstrap-based approach to constructing uniform confidence bands for
the density function of latent valuations.",Monotonicity-Constrained Nonparametric Estimation and Inference for First-Price Auctions,2019-09-28 01:29:17,"Jun Ma, Vadim Marmer, Artyom Shneyerov, Pai Xu","http://arxiv.org/abs/1909.12974v1, http://arxiv.org/pdf/1909.12974v1",econ.EM
30146,em,"We obtain an elementary invariance principle for multi-dimensional Brownian
sheet where the underlying random fields are not necessarily independent or
stationary. Possible applications include unit-root tests for spatial as well
as panel data models.",A 2-Dimensional Functional Central Limit Theorem for Non-stationary Dependent Random Fields,2019-10-07 05:04:05,Michael C. Tseng,"http://arxiv.org/abs/1910.02577v1, http://arxiv.org/pdf/1910.02577v1",math.PR
30163,em,"Many economic activities are embedded in networks: sets of agents and the
(often) rivalrous relationships connecting them to one another. Input sourcing
by firms, interbank lending, scientific research, and job search are four
examples, among many, of networked economic activities. Motivated by the
premise that networks' structures are consequential, this chapter describes
econometric methods for analyzing them. I emphasize (i) dyadic regression
analysis incorporating unobserved agent-specific heterogeneity and supporting
causal inference, (ii) techniques for estimating, and conducting inference on,
summary network parameters (e.g., the degree distribution or transitivity
index); and (iii) empirical models of strategic network formation admitting
interdependencies in preferences. Current research challenges and open
questions are also discussed.",Network Data,2019-12-13 10:41:57,Bryan S. Graham,"http://arxiv.org/abs/1912.06346v1, http://arxiv.org/pdf/1912.06346v1",econ.EM
30147,em,"High dimensional predictive regressions are useful in wide range of
applications. However, the theory is mainly developed assuming that the model
is stationary with time invariant parameters. This is at odds with the
prevalent evidence for parameter instability in economic time series, but
theories for parameter instability are mainly developed for models with a small
number of covariates. In this paper, we present two $L_2$ boosting algorithms
for estimating high dimensional models in which the coefficients are modeled as
functions evolving smoothly over time and the predictors are locally
stationary. The first method uses componentwise local constant estimators as
base learner, while the second relies on componentwise local linear estimators.
We establish consistency of both methods, and address the practical issues of
choosing the bandwidth for the base learners and the number of boosting
iterations. In an extensive application to macroeconomic forecasting with many
potential predictors, we find that the benefits to modeling time variation are
substantial and they increase with the forecast horizon. Furthermore, the
timing of the benefits suggests that the Great Moderation is associated with
substantial instability in the conditional mean of various economic series.",Boosting High Dimensional Predictive Regressions with Time Varying Parameters,2019-10-08 00:58:27,"Kashif Yousuf, Serena Ng","http://arxiv.org/abs/1910.03109v1, http://arxiv.org/pdf/1910.03109v1",econ.EM
30148,em,"International trade policies have recently garnered attention for limiting
cross-border exchange of essential goods (e.g. steel, aluminum, soybeans, and
beef). Since trade critically affects employment and wages, predicting future
patterns of trade is a high priority for policy makers around the world. While
traditional economic models aim to be reliable predictors, we consider the
possibility that Machine Learning (ML) techniques allow for better predictions
to inform policy decisions. Open-government data provide the fuel to power the
algorithms that can explain and forecast trade flows to inform policies. Data
collected in this article describe international trade transactions and
commonly associated economic factors. The ML models deployed
include: ARIMA, GBoosting, XGBoosting, and LightGBM for predicting future trade
patterns, and K-Means clustering of countries according to economic factors.
Unlike short-term and subjective (straight-line) projections and medium-term
(aggregated) projections, ML methods provide a range of data-driven and
interpretable projections for individual commodities. Models, their results,
and policies are introduced and evaluated for prediction quality.",Application of Machine Learning in Forecasting International Trade Trends,2019-10-08 01:05:45,"Feras Batarseh, Munisamy Gopinath, Ganesh Nalluru, Jayson Beckman","http://arxiv.org/abs/1910.03112v1, http://arxiv.org/pdf/1910.03112v1",econ.EM
30149,em,"Beliefs are important determinants of an individual's choices and economic
outcomes, so understanding how they comove and differ across individuals is of
considerable interest. Researchers often rely on surveys that report individual
beliefs as qualitative data. We propose using a Bayesian hierarchical latent
class model to analyze the comovements and observed heterogeneity in
categorical survey responses. We show that the statistical model corresponds to
an economic structural model of information acquisition, which guides
interpretation and estimation of the model parameters. An algorithm based on
stochastic optimization is proposed to estimate a model for repeated surveys
when responses follow a dynamic structure and conjugate priors are not
appropriate. Guidance on selecting the number of belief types is also provided.
Two examples are considered. The first shows that there is information in the
Michigan survey responses beyond the consumer sentiment index that is
officially published. The second shows that belief types constructed from
survey responses can be used in a subsequent analysis to estimate heterogeneous
returns to education.",Latent Dirichlet Analysis of Categorical Survey Responses,2019-10-11 00:27:01,"Evan Munro, Serena Ng","http://dx.doi.org/10.1080/07350015.2020.1802285, http://arxiv.org/abs/1910.04883v3, http://arxiv.org/pdf/1910.04883v3",econ.EM
30150,em,"Regression discontinuity designs are frequently used to estimate the causal
effect of election outcomes and policy interventions. In these contexts,
treatment effects are typically estimated with covariates included to improve
efficiency. While including covariates improves precision asymptotically, in
practice, treatment effects are estimated with a small number of observations,
resulting in considerable fluctuations in treatment effect magnitude and
precision depending upon the covariates chosen. This practice thus incentivizes
researchers to select covariates which maximize treatment effect statistical
significance rather than precision. Here, I propose a principled approach for
estimating RDDs which provides a means of improving precision with covariates
while minimizing adverse incentives. This is accomplished by integrating the
adaptive LASSO, a machine learning method, into RDD estimation using an R
package developed for this purpose, adaptiveRDD. Using simulations, I show that
this method significantly improves treatment effect precision, particularly
when estimating treatment effects with fewer than 200 observations.",Principled estimation of regression discontinuity designs,2019-10-14 21:54:04,L. Jason Anastasopoulos,"http://arxiv.org/abs/1910.06381v2, http://arxiv.org/pdf/1910.06381v2",stat.AP
30151,em,"Social interactions determine many economic behaviors, but information on
social ties does not exist in most publicly available and widely used datasets.
We present results on the identification of social networks from observational
panel data that contains no information on social ties between agents. In the
context of a canonical social interactions model, we provide sufficient
conditions under which the social interactions matrix and the endogenous and
exogenous social effect parameters are all globally identified. While this result is
relevant across different estimation strategies, we then describe how
high-dimensional estimation techniques can be used to estimate the interactions
model based on the Adaptive Elastic Net GMM method. We employ the method to
study tax competition across US states. We find the identified social
interactions matrix implies tax competition differs markedly from the common
assumption of competition between geographically neighboring states, providing
further insights for the long-standing debate on the relative roles of factor
mobility and yardstick competition in driving tax setting behavior across
states. Most broadly, our identification and application show the analysis of
social interactions can be extended to economic realms where no network data
exists.",Identifying Network Ties from Panel Data: Theory and an Application to Tax Competition,2019-10-16 19:19:25,"Aureo de Paula, Imran Rasul, Pedro Souza","http://arxiv.org/abs/1910.07452v4, http://arxiv.org/pdf/1910.07452v4",econ.EM
30152,em,"Strategic valuation of efficient and well-timed network investments under
uncertain electricity market environment has become increasingly challenging,
because there generally exist multiple interacting options in these
investments, and failing to systematically consider these options can lead to
decisions that undervalue the investment. In our work, a real options valuation
(ROV) framework is proposed to determine the optimal strategy for executing
multiple interacting options within a distribution network investment, to
mitigate the risk of financial losses in the presence of future uncertainties.
To demonstrate the characteristics of the proposed framework, we determine the
optimal strategy to economically justify the investment in residential
PV-battery systems for additional grid supply during peak demand periods. The
options to defer, and then expand, are considered as multi-stage compound
options, since the option to expand is a subsequent option of the former. These
options are valued via the least squares Monte Carlo method, incorporating
uncertainty over growing power demand, varying diesel fuel price, and the
declining cost of PV-battery technology as random variables. Finally, a
sensitivity analysis is performed to demonstrate how the proposed framework
responds to uncertain events. The proposed framework shows that executing the
interacting options at the optimal timing increases the investment value.",Multi-Stage Compound Real Options Valuation in Residential PV-Battery Investment,2019-10-21 06:21:29,"Yiju Ma, Kevin Swandi, Archie Chapman, Gregor Verbic","http://arxiv.org/abs/1910.09132v1, http://arxiv.org/pdf/1910.09132v1",econ.EM
30153,em,"In this paper, we write the time-varying parameter (TVP) regression model
involving K explanatory variables and T observations as a constant coefficient
regression model with KT explanatory variables. In contrast with much of the
existing literature which assumes coefficients to evolve according to a random
walk, a hierarchical mixture model on the TVPs is introduced. The resulting
model closely mimics a random coefficients specification which groups the TVPs
into several regimes. These flexible mixtures allow for TVPs that feature a
small, moderate or large number of structural breaks. We develop
computationally efficient Bayesian econometric methods based on the singular
value decomposition of the KT regressors. In artificial data, we find our
methods to be accurate and much faster than standard approaches in terms of
computation time. In an empirical exercise involving inflation forecasting
using a large number of predictors, we find our models to forecast better than
alternative approaches and document different patterns of parameter change than
are found with approaches which assume random walk evolution of parameters.",Fast and Flexible Bayesian Inference in Time-varying Parameter Regression Models,2019-10-23 22:49:55,"Niko Hauzenberger, Florian Huber, Gary Koop, Luca Onorante","http://arxiv.org/abs/1910.10779v4, http://arxiv.org/pdf/1910.10779v4",econ.EM
30154,em,"This paper provides a thorough analysis on the dynamic structures and
predictability of China's Consumer Price Index (CPI-CN), with a comparison to
those of the United States. Despite the differences in the two leading
economies, both series can be well modeled by a class of Seasonal
Autoregressive Integrated Moving Average Model with Covariates (S-ARIMAX). The
CPI-CN series possesses regular patterns of dynamics with stable annual cycles
and strong Spring Festival effects, with fitting and forecasting errors largely
comparable to their US counterparts. Finally, for the CPI-CN, the diffusion
index (DI) approach offers better predictions than the S-ARIMAX models.",Analyzing China's Consumer Price Index Comparatively with that of United States,2019-10-29 17:43:40,"Zhenzhong Wang, Yundong Tu, Song Xi Chen","http://arxiv.org/abs/1910.13301v1, http://arxiv.org/pdf/1910.13301v1",econ.EM
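As a rough illustration of the S-ARIMAX class referenced above, the sketch below fits a seasonal ARIMA with an exogenous covariate using statsmodels; the orders, the holiday dummy, and the simulated series are illustrative assumptions rather than the paper's specification.

```python
# Illustrative sketch only: a seasonal ARIMA with exogenous covariates
# (S-ARIMAX) fitted via statsmodels. Orders and the dummy covariate are
# assumptions for demonstration, not the specification used in the paper.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# monthly CPI-like series (toy data) and a holiday dummy as the exogenous covariate
idx = pd.date_range("2005-01-01", periods=180, freq="MS")
rng = np.random.default_rng(1)
cpi = pd.Series(100 + np.cumsum(rng.normal(0.2, 0.3, 180)), index=idx)
holiday = pd.Series((idx.month == 2).astype(float), index=idx)  # e.g. a festival month

model = SARIMAX(cpi, exog=holiday,
                order=(1, 1, 1),               # (p, d, q)
                seasonal_order=(0, 1, 1, 12))  # (P, D, Q, s) with an annual cycle
res = model.fit(disp=False)
print(res.params)

# 12-month-ahead forecast with the future values of the dummy supplied
future_idx = pd.date_range("2020-01-01", periods=12, freq="MS")
future_holiday = (future_idx.month == 2).astype(float).reshape(-1, 1)
forecast = res.get_forecast(steps=12, exog=future_holiday)
print(forecast.predicted_mean.head())
```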
30155,em,"The existence of stylized facts in financial data has been documented in many
studies. In the past decade the modeling of financial markets by agent-based
computational economic market models has become a frequently used modeling
approach. The main purpose of these models is to replicate stylized facts and
to identify sufficient conditions for their creation. In this paper we
introduce the most prominent examples of stylized facts and especially present
stylized facts of financial data. Furthermore, we give an introduction to
agent-based modeling. Here, we not only provide an overview of this topic but
also introduce the idea of universal building blocks for agent-based economic market
models.",Stylized Facts and Agent-Based Modeling,2019-12-02 22:51:53,"Simon Cramer, Torsten Trimborn","http://arxiv.org/abs/1912.02684v1, http://arxiv.org/pdf/1912.02684v1",q-fin.GN
30156,em,"Time-varying parameter (TVP) models are very flexible in capturing gradual
changes in the effect of a predictor on the outcome variable. However, in
particular when the number of predictors is large, there is a known risk of
overfitting and poor predictive performance, since the effect of some
predictors is constant over time. We propose a prior for variance shrinkage in
TVP models, called triple gamma. The triple gamma prior encompasses a number of
priors that have been suggested previously, such as the Bayesian lasso, the
double gamma prior and the Horseshoe prior. We present the desirable properties
of such a prior and its relationship to Bayesian Model Averaging for variance
selection. The features of the triple gamma prior are then illustrated in the
context of time varying parameter vector autoregressive models, both for
simulated datasets and for a series of macroeconomic variables in the Euro
Area.",Triple the gamma -- A unifying shrinkage prior for variance and variable selection in sparse state space and TVP models,2019-12-06 16:26:01,"Annalisa Cadonna, Sylvia Frühwirth-Schnatter, Peter Knaus","http://arxiv.org/abs/1912.03100v1, http://arxiv.org/pdf/1912.03100v1",econ.EM
30157,em,"Staggered adoption of policies by different units at different times creates
promising opportunities for observational causal inference. Estimation remains
challenging, however, and common regression methods can give misleading
results. A promising alternative is the synthetic control method (SCM), which
finds a weighted average of control units that closely balances the treated
unit's pre-treatment outcomes. In this paper, we generalize SCM, originally
designed to study a single treated unit, to the staggered adoption setting. We
first bound the error for the average effect and show that it depends on both
the imbalance for each treated unit separately and the imbalance for the
average of the treated units. We then propose ""partially pooled"" SCM weights to
minimize a weighted combination of these measures; approaches that focus only
on balancing one of the two components can lead to bias. We extend this
approach to incorporate unit-level intercept shifts and auxiliary covariates.
We assess the performance of the proposed method via extensive simulations and
apply our results to the question of whether teacher collective bargaining
leads to higher school spending, finding minimal impacts. We implement the
proposed method in the augsynth R package.",Synthetic Controls with Staggered Adoption,2019-12-06 21:45:24,"Eli Ben-Michael, Avi Feller, Jesse Rothstein","http://arxiv.org/abs/1912.03290v2, http://arxiv.org/pdf/1912.03290v2",stat.ME
30158,em,"Energy system optimization models (ESOMs) should be used in an interactive
way to uncover knife-edge solutions, explore alternative system configurations,
and suggest different ways to achieve policy objectives under conditions of
deep uncertainty. In this paper, we do so by employing an existing optimization
technique called modeling to generate alternatives (MGA), which involves a
change in the model structure in order to systematically explore the
near-optimal decision space. The MGA capability is incorporated into Tools for
Energy Model Optimization and Analysis (Temoa), an open source framework that
also includes a technology rich, bottom up ESOM. In this analysis, Temoa is
used to explore alternative energy futures in a simplified single region energy
system that represents the U.S. electric sector and a portion of the light duty
transport sector. Given the dataset limitations, we place greater emphasis on
the methodological approach rather than specific results.",Energy Scenario Exploration with Modeling to Generate Alternatives (MGA),2019-12-09 02:50:21,"Joseph F. DeCarolis, Samaneh Babaee, Binghui Li, Suyash Kanungo","http://dx.doi.org/10.1016/j.envsoft.2015.11.019, http://arxiv.org/abs/1912.03788v1, http://arxiv.org/pdf/1912.03788v1",physics.soc-ph
30159,em,"We consider the estimation of approximate factor models for time series data,
where strong serial and cross-sectional correlations amongst the idiosyncratic
component are present. This setting comes up naturally in many applications,
but existing approaches in the literature rely on the assumption that such
correlations are weak, leading to mis-specification of the number of factors
selected and consequently inaccurate inference. In this paper, we explicitly
incorporate the dependent structure present in the idiosyncratic component
through lagged values of the observed multivariate time series. We formulate a
constrained optimization problem to estimate the factor space and the
transition matrices of the lagged values {\em simultaneously}, wherein the
constraints reflect the low rank nature of the common factors and the sparsity
of the transition matrices. We establish theoretical properties of the obtained
estimates, and introduce an easy-to-implement computational procedure for
empirical work. The performance of the model and the implementation procedure
is evaluated on synthetic data and compared with competing approaches, and
further illustrated on a data set involving weekly log-returns of 75 US large
financial institutions for the 2001-2016 period.",Approximate Factor Models with Strongly Correlated Idiosyncratic Errors,2019-12-09 18:33:33,"Jiahe Lin, George Michailidis","http://arxiv.org/abs/1912.04123v1, http://arxiv.org/pdf/1912.04123v1",stat.ME
30160,em,"A factor-augmented vector autoregressive (FAVAR) model is defined by a VAR
equation that captures lead-lag correlations amongst a set of observed
variables $X$ and latent factors $F$, and a calibration equation that relates
another set of observed variables $Y$ with $F$ and $X$. The latter equation is
used to estimate the factors that are subsequently used in estimating the
parameters of the VAR system. The FAVAR model has become popular in applied
economic research, since it can summarize a large number of variables of
interest as a few factors through the calibration equation and subsequently
examine their influence on core variables of primary interest through the VAR
equation. However, there is increasing need for examining lead-lag
relationships between a large number of time series, while incorporating
information from another high-dimensional set of variables. Hence, in this
paper we investigate the FAVAR model under high-dimensional scaling. We
introduce an appropriate identification constraint for the model parameters,
which when incorporated into the formulated optimization problem yields
estimates with good statistical properties. Further, we address a number of
technical challenges introduced by the fact that estimates of the VAR system
model parameters are based on estimated rather than directly observed
quantities. The performance of the proposed estimators is evaluated on
synthetic data. Further, the model is applied to commodity prices and reveals
interesting and interpretable relationships between the prices and the factors
extracted from a set of global macroeconomic indicators.",Regularized Estimation of High-dimensional Factor-Augmented Vector Autoregressive (FAVAR) Models,2019-12-09 19:11:31,"Jiahe Lin, George Michailidis","http://arxiv.org/abs/1912.04146v2, http://arxiv.org/pdf/1912.04146v2",stat.ME
30161,em,"Dynamic model averaging (DMA) combines the forecasts of a large number of
dynamic linear models (DLMs) to predict the future value of a time series. The
performance of DMA critically depends on the appropriate choice of two
forgetting factors. The first of these controls the speed of adaptation of the
coefficient vector of each DLM, while the second enables time variation in the
model averaging stage. In this paper we develop a novel, adaptive dynamic model
averaging (ADMA) methodology. The proposed methodology employs a stochastic
optimisation algorithm that sequentially updates the forgetting factor of each
DLM, and uses a state-of-the-art non-parametric model combination algorithm
from the prediction with expert advice literature, which offers finite-time
performance guarantees. An empirical application to quarterly UK house price
data suggests that ADMA produces more accurate forecasts than the benchmark
autoregressive model, as well as competing DMA specifications.",Adaptive Dynamic Model Averaging with an Application to House Price Forecasting,2019-12-10 15:03:58,"Alisa Yusupova, Nicos G. Pavlidis, Efthymios G. Pavlidis","http://arxiv.org/abs/1912.04661v1, http://arxiv.org/pdf/1912.04661v1",econ.EM
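The averaging stage of DMA, with its second forgetting factor, can be sketched as follows. This is the standard fixed-forgetting recursion rather than the adaptive scheme proposed in the paper, and the per-model predictive likelihoods are taken as given; the function name and toy numbers are hypothetical.

```python
# Sketch of the DMA model-averaging stage only (standard fixed-forgetting
# recursion, not the adaptive scheme proposed in the paper). The per-model
# one-step-ahead predictive densities pred_lik[t, k] are assumed given,
# e.g. from K dynamic linear models estimated separately.
import numpy as np

def dma_weights(pred_lik, alpha=0.99):
    """pred_lik: (T, K) array of predictive likelihoods p_k(y_t | y_{1:t-1}).
    Returns a (T, K) array of model probabilities used to average forecasts."""
    T, K = pred_lik.shape
    prob = np.full(K, 1.0 / K)            # equal prior model probabilities
    out = np.zeros((T, K))
    for t in range(T):
        # forgetting step: flatten past probabilities towards uniformity
        prior = prob ** alpha
        prior /= prior.sum()
        out[t] = prior                    # weights used for the t-th forecast
        # update step: multiply by the realised predictive likelihoods
        post = prior * pred_lik[t]
        prob = post / post.sum()
    return out

# toy usage: model 2 fits better in the second half of the sample
lik = np.column_stack([np.r_[np.full(50, 0.4), np.full(50, 0.1)],
                       np.r_[np.full(50, 0.1), np.full(50, 0.4)]])
print(dma_weights(lik)[[0, 49, 99]])
```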
30162,em,"We propose a regularized factor-augmented vector autoregressive (FAVAR) model
that allows for sparsity in the factor loadings. In this framework, factors may
only load on a subset of variables which simplifies the factor identification
and their economic interpretation. We identify the factors in a data-driven
manner without imposing specific relations between the unobserved factors and
the underlying time series. Using our approach, the effects of structural
shocks can be investigated on economically meaningful factors and on all
observed time series included in the FAVAR model. We prove consistency for the
estimators of the factor loadings, the covariance matrix of the idiosyncratic
component, the factors, as well as the autoregressive parameters in the dynamic
model. In an empirical application, we investigate the effects of a monetary
policy shock on a broad range of economically relevant variables. We identify
this shock using a joint identification of the factor model and the structural
innovations in the VAR model. We find impulse response functions which are in
line with economic rationale, both on the factor aggregates and observed time
series level.",A Regularized Factor-augmented Vector Autoregressive Model,2019-12-12 18:54:24,"Maurizio Daniele, Julie Schnaitmann","http://arxiv.org/abs/1912.06049v1, http://arxiv.org/pdf/1912.06049v1",econ.EM
30164,em,"Uncertainty quantification is a fundamental problem in the analysis and
interpretation of synthetic control (SC) methods. We develop conditional
prediction intervals in the SC framework, and provide conditions under which
these intervals offer finite-sample probability guarantees. Our method allows
for covariate adjustment and non-stationary data. The construction begins by
noting that the statistical uncertainty of the SC prediction is governed by two
distinct sources of randomness: one coming from the construction of the (likely
misspecified) SC weights in the pre-treatment period, and the other coming from
the unobservable stochastic error in the post-treatment period when the
treatment effect is analyzed. Accordingly, our proposed prediction intervals
are constructed taking into account both sources of randomness. For
implementation, we propose a simulation-based approach along with
finite-sample-based probability bound arguments, naturally leading to
principled sensitivity analysis methods. We illustrate the numerical
performance of our methods using empirical applications and a small simulation
study. \texttt{Python}, \texttt{R} and \texttt{Stata} software packages
implementing our methodology are available.",Prediction Intervals for Synthetic Control Methods,2019-12-16 01:03:16,"Matias D. Cattaneo, Yingjie Feng, Rocio Titiunik","http://arxiv.org/abs/1912.07120v4, http://arxiv.org/pdf/1912.07120v4",stat.ME
30165,em,"We introduce the \texttt{Stata} (and \texttt{R}) package \texttt{rdmulti},
which includes three commands (\texttt{rdmc}, \texttt{rdmcplot}, \texttt{rdms})
for analyzing Regression Discontinuity (RD) designs with multiple cutoffs or
multiple scores. The command \texttt{rdmc} applies to non-cumulative and
cumulative multi-cutoff RD settings. It calculates pooled and cutoff-specific
RD treatment effects, and provides robust bias-corrected inference procedures.
Post-estimation and inference are allowed. The command \texttt{rdmcplot} offers
RD plots for multi-cutoff settings. Finally, the command \texttt{rdms} concerns
multi-score settings, covering in particular cumulative cutoffs and two running
variables contexts. It also calculates pooled and cutoff-specific RD treatment
effects, provides robust bias-corrected inference procedures, and allows for
post-estimation and inference. These commands employ the
\texttt{Stata} (and \texttt{R}) package \texttt{rdrobust} for plotting,
estimation, and inference. Companion \texttt{R} functions with the same syntax
and capabilities are provided.",Analysis of Regression Discontinuity Designs with Multiple Cutoffs or Multiple Scores,2019-12-16 16:38:08,"Matias D. Cattaneo, Rocio Titiunik, Gonzalo Vazquez-Bare","http://arxiv.org/abs/1912.07346v2, http://arxiv.org/pdf/1912.07346v2",stat.CO
30166,em,"Interest-rate risk is a key factor for property-casualty insurer capital. P&C
companies tend to be highly leveraged, with bond holdings much greater than
capital. For GAAP capital, bonds are marked to market but liabilities are not,
so shifts in the yield curve can have a significant impact on capital.
Yield-curve scenario generators are one approach to quantifying this risk. They
produce many future simulated evolutions of the yield curve, which can be used
to quantify the probabilities of bond-value changes that would result from
various maturity-mix strategies. Some of these generators are provided as
black-box models where the user gets only the projected scenarios. One focus of
this paper is to provide methods for testing generated scenarios from such
models by comparing to known distributional properties of yield curves.
  P&C insurers hold bonds to maturity and manage cash-flow risk by matching
asset and liability flows. Derivative pricing and stochastic volatility are of
little concern over the relevant time frames. This requires different models
and model testing than what is common in the broader financial markets.
  To complicate things further, interest rates for the last decade have not
been following the patterns established in the sixty years following WWII. We
are now coming out of the period of very low rates, yet are still not returning
to what had been thought of as normal before that. Modeling and model testing
are in an evolving state while new patterns emerge.
  Our analysis starts with a review of the literature on interest-rate model
testing, with a P&C focus, and an update of the tests for current market
behavior. We then discuss models, and use them to illustrate the fitting and
testing methods. The testing discussion does not require the model-building
section.",Building and Testing Yield Curve Generators for P&C Insurance,2019-12-22 23:13:23,"Gary Venter, Kailan Shang","http://arxiv.org/abs/1912.10526v1, http://arxiv.org/pdf/1912.10526v1",q-fin.RM
30167,em,"The downward trend in the amount of Arctic sea ice has a wide range of
environmental and economic consequences including important effects on the pace
and intensity of global climate change. Based on several decades of satellite
data, we provide statistical forecasts of Arctic sea ice extent during the rest
of this century. The best fitting statistical model indicates that overall sea
ice coverage is declining at an increasing rate. By contrast, average
projections from the CMIP5 global climate models foresee a gradual slowing of
Arctic sea ice loss even in scenarios with high carbon emissions. Our
long-range statistical projections also deliver probability assessments of the
timing of an ice-free Arctic. These results indicate almost a 60 percent chance
of an effectively ice-free Arctic Ocean sometime during the 2030s -- much
earlier than the average projection from the global climate models.",Probability Assessments of an Ice-Free Arctic: Comparing Statistical and Climate Model Projections,2019-12-23 15:52:04,"Francis X. Diebold, Glenn D. Rudebusch","http://arxiv.org/abs/1912.10774v4, http://arxiv.org/pdf/1912.10774v4",stat.AP
30168,em,"The Pareto model is very popular in risk management, since simple analytical
formulas can be derived for financial downside risk measures (Value-at-Risk,
Expected Shortfall) or reinsurance premiums and related quantities (Large Claim
Index, Return Period). Nevertheless, in practice, distributions are (strictly)
Pareto only in the tails, above a (possibly very) large threshold. Therefore, it
could be interesting to take into account second-order behavior to provide a
better fit. In this article, we present how to go from a strict Pareto model to
Pareto-type distributions. We discuss inference, and derive formulas for
various measures and indices, and finally provide applications on insurance
losses and financial risks.",Pareto models for risk management,2019-12-26 03:56:52,"Arthur Charpentier, Emmanuel Flachaire","http://arxiv.org/abs/1912.11736v1, http://arxiv.org/pdf/1912.11736v1",econ.EM
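For reference, the closed-form downside risk measures for a strict Pareto tail mentioned in the abstract can be written in a few lines. This is a minimal sketch of the standard formulas under an assumed parametrization, not the article's Pareto-type extension.

```python
# Minimal sketch: closed-form downside risk measures for a strict Pareto tail,
# P(X > x) = (u / x)**alpha for x >= u. This illustrates the "simple analytical
# formulas" the abstract refers to; the article's Pareto-type extension modifies
# these through second-order tail behaviour.
def pareto_var(p, u, alpha):
    """Value-at-Risk at level p (e.g. 0.99)."""
    return u * (1.0 - p) ** (-1.0 / alpha)

def pareto_es(p, u, alpha):
    """Expected Shortfall at level p; finite only for alpha > 1."""
    if alpha <= 1:
        raise ValueError("Expected Shortfall is infinite for alpha <= 1")
    return alpha / (alpha - 1.0) * pareto_var(p, u, alpha)

print(pareto_var(0.99, u=1.0, alpha=2.5), pareto_es(0.99, u=1.0, alpha=2.5))
```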
30169,em,"This paper provides a simple, yet reliable, alternative to the (Bayesian)
estimation of large multivariate VARs with time variation in the conditional
mean equations and/or in the covariance structure. With our new methodology,
the original multivariate, n-dimensional model is treated as a set of n
univariate estimation problems, and cross-dependence is handled through the use
of a copula. Thus, only univariate distribution functions, which are often
available in closed form and easy to handle with MCMC (or other techniques),
are needed when estimating the individual equations. Estimation is carried out
in parallel for the individual equations. Thereafter, the individual posteriors
are combined with the copula, so obtaining a joint posterior which can be
easily resampled. We illustrate our approach by applying it to a large
time-varying parameter VAR with 25 macroeconomic variables.",Bayesian estimation of large dimensional time varying VARs using copulas,2019-12-29 00:44:16,"Mike Tsionas, Marwan Izzeldin, Lorenzo Trapani","http://arxiv.org/abs/1912.12527v1, http://arxiv.org/pdf/1912.12527v1",econ.EM
30170,em,"We consider discrete default intensity based and logit type reduced form
models for conditional default probabilities for corporate loans where we
develop simple closed form approximations to the maximum likelihood estimator
(MLE) when the underlying covariates follow a stationary Gaussian process. In a
practically reasonable asymptotic regime where the default probabilities are
small, say 1-3% annually, and the number of firms and the time period of data
available are reasonably large, we rigorously show that the proposed estimator
behaves similarly to, or slightly worse than, the MLE when the underlying model is
correctly specified. For the more realistic case of model misspecification, both
estimators are seen to be equally good, or equally bad. Further, beyond a
point, both are more-or-less insensitive to increase in data. These conclusions
are validated on empirical and simulated data. The proposed approximations
should also have applications outside finance, where logit-type models are used
and probabilities of interest are small.",Credit Risk: Simple Closed Form Approximate Maximum Likelihood Estimator,2019-12-29 11:56:45,"Anand Deo, Sandeep Juneja","http://arxiv.org/abs/1912.12611v1, http://arxiv.org/pdf/1912.12611v1",econ.EM
30171,em,"The term natural experiment is used inconsistently. In one interpretation, it
refers to an experiment where a treatment is randomly assigned by someone other
than the researcher. In another interpretation, it refers to a study in which
there is no controlled random assignment, but treatment is assigned by some
external factor in a way that loosely resembles a randomized experiment---often
described as an ""as if random"" assignment. In yet another interpretation, it
refers to any non-randomized study that compares a treatment to a control
group, without any specific requirements on how the treatment is assigned. I
introduce an alternative definition that seeks to clarify the integral features
of natural experiments and at the same time distinguish them from randomized
controlled experiments. I define a natural experiment as a research study where
the treatment assignment mechanism (i) is neither designed nor implemented by
the researcher, (ii) is unknown to the researcher, and (iii) is probabilistic
by virtue of depending on an external factor. The main message of this
definition is that the difference between a randomized controlled experiment
and a natural experiment is not a matter of degree, but of essence, and thus
conceptualizing a natural experiment as a research design akin to a randomized
experiment is neither rigorous nor a useful guide to empirical analysis. Using
my alternative definition, I discuss how a natural experiment differs from a
traditional observational study, and offer practical recommendations for
researchers who wish to use natural experiments to study causal effects.",Natural Experiments,2020-02-01 15:52:51,Rocio Titiunik,"http://arxiv.org/abs/2002.00202v1, http://arxiv.org/pdf/2002.00202v1",stat.ME
30172,em,"Our paper aims to model supply and demand curves of electricity day-ahead
auction in a parsimonious way. Our main task is to build an appropriate
algorithm to present the information about electricity prices and demands with
far less parameters than the original one. We represent each curve using
mesh-free interpolation techniques based on radial basis function
approximation. We describe results of this method for the day-ahead IPEX spot
price of Italy.",Efficient representation of supply and demand curves on day-ahead electricity markets,2020-02-03 02:14:58,"Mariia Soloviova, Tiziano Vargiolu","http://arxiv.org/abs/2002.00507v1, http://arxiv.org/pdf/2002.00507v1",q-fin.PR
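A minimal sketch of the mesh-free idea is given below, under illustrative assumptions (a toy stepwise supply curve, an arbitrary choice of 12 nodes, scipy's multiquadric RBF); it is not the authors' algorithm, but it shows how a stepwise curve can be stored with far fewer parameters than the full list of bid steps.

```python
# Illustrative sketch (assumptions throughout, not the authors' procedure):
# approximate a stepwise supply curve with a radial basis function interpolant
# built on a small set of nodes, so the curve is stored with far fewer
# parameters than the full list of (quantity, price) bid steps.
import numpy as np
from scipy.interpolate import Rbf

# toy stepwise supply curve: cumulative quantity vs. offered price
q = np.linspace(0, 1000, 400)
price = np.piecewise(q, [q < 300, (q >= 300) & (q < 700), q >= 700],
                     [lambda x: 20 + 0.01 * x,
                      lambda x: 30 + 0.05 * (x - 300),
                      lambda x: 50 + 0.2 * (x - 700)])

# keep only a handful of nodes and fit a smooth RBF through them
nodes = np.linspace(0, 1000, 12)
node_price = np.interp(nodes, q, price)
rbf = Rbf(nodes, node_price, function="multiquadric")

reconstructed = rbf(q)
print("max abs error:", np.max(np.abs(reconstructed - price)))
print("parameters stored:", nodes.size * 2)   # node locations + coefficients
```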
30173,em,"In time-series analysis, the term ""lead-lag effect"" is used to describe a
delayed effect on a given time series caused by another time series. Lead-lag
effects are ubiquitous in practice and are especially critical in formulating
investment strategies in high-frequency trading. At present, there are three
major challenges in analyzing the lead-lag effects. First, in practical
applications, not all time series are observed synchronously. Second, the size
of the relevant datasets and the rate of change of the environment keep
increasing, making it more difficult to complete the computation within a
particular time limit. Third, some lead-lag effects are time-varying and only
last for a short period, and their delay lengths are often affected by external
factors. In this paper, we propose NAPLES (Negative And Positive lead-lag
EStimator), a new statistical measure that resolves all these problems. Through
experiments on artificial and real datasets, we demonstrate that NAPLES has a
strong correlation with the actual lead-lag effects, including those triggered
by significant macroeconomic announcements.",NAPLES;Mining the lead-lag Relationship from Non-synchronous and High-frequency Data,2020-02-03 16:39:41,"Katsuya Ito, Kei Nakagawa","http://arxiv.org/abs/2002.00724v1, http://arxiv.org/pdf/2002.00724v1",q-fin.ST
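For orientation, the sketch below implements only the naive baseline that NAPLES improves upon: a lagged cross-correlation scan on synchronously observed series. It is not the NAPLES estimator, and the function name and toy data are hypothetical.

```python
# Baseline sketch only (not the NAPLES estimator): estimate the lead-lag delay
# between two synchronously observed series by maximising the lagged
# cross-correlation. NAPLES targets the harder cases of asynchronous
# observation and time-varying delays.
import numpy as np

def lead_lag_by_xcorr(x, y, max_lag=20):
    """Return the lag k maximising corr(x_t, y_{t+k}); k > 0 means x leads y."""
    best_lag, best_corr = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = x[: len(x) - k] if k else x, y[k:]
        else:
            a, b = x[-k:], y[: len(y) + k]
        c = np.corrcoef(a, b)[0, 1]
        if c > best_corr:
            best_lag, best_corr = k, c
    return best_lag, best_corr

# toy usage: y follows x with a delay of 5 periods plus noise
rng = np.random.default_rng(2)
x = rng.standard_normal(500).cumsum()
y = np.r_[np.zeros(5), x[:-5]] + 0.1 * rng.standard_normal(500)
print(lead_lag_by_xcorr(np.diff(x), np.diff(y)))
```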
30174,em,"We develop inference procedures robust to general forms of weak dependence.
The procedures utilize test statistics constructed by resampling in a manner
that does not depend on the unknown correlation structure of the data. We prove
that the statistics are asymptotically normal under the weak requirement that
the target parameter can be consistently estimated at the parametric rate. This
holds for regular estimators under many well-known forms of weak dependence and
justifies the claim of dependence-robustness. We consider applications to
settings with unknown or complicated forms of dependence, with various forms of
network dependence as leading examples. We develop tests for both moment
equalities and inequalities.",Dependence-Robust Inference Using Resampled Statistics,2020-02-06 07:49:51,Michael P. Leung,"http://arxiv.org/abs/2002.02097v4, http://arxiv.org/pdf/2002.02097v4",econ.EM
30175,em,"This paper introduces a new stochastic process with values in the set Z of
integers with sign. The increments of process are Poisson differences and the
dynamics has an autoregressive structure. We study the properties of the
process and exploit the thinning representation to derive stationarity
conditions and the stationary distribution of the process. We provide a
Bayesian inference method and an efficient posterior approximation procedure
based on Monte Carlo. Numerical illustrations on both simulated and real data
show the effectiveness of the proposed inference.",Generalized Poisson Difference Autoregressive Processes,2020-02-11 18:21:24,"Giulia Carallo, Roberto Casarin, Christian P. Robert","http://arxiv.org/abs/2002.04470v1, http://arxiv.org/pdf/2002.04470v1",stat.ME
30187,em,"Unit root tests form an essential part of any time series analysis. We
provide practitioners with a single, unified framework for comprehensive and
reliable unit root testing in the R package bootUR. The package's backbone is
the popular augmented Dickey-Fuller test paired with a union of rejections
principle, which can be performed directly on single time series or multiple
(including panel) time series. Accurate inference is ensured through the use of
bootstrap methods. The package addresses the needs of both novice users, by
providing user-friendly and easy-to-implement functions with sensible default
options, as well as expert users, by giving full user-control to adjust the
tests to one's desired settings. Our parallelized C++ implementation ensures
that all unit root tests are scalable to datasets containing many time series.",bootUR: An R Package for Bootstrap Unit Root Tests,2020-07-23 23:40:52,"Stephan Smeekes, Ines Wilms","http://arxiv.org/abs/2007.12249v5, http://arxiv.org/pdf/2007.12249v5",econ.EM
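As a rough illustration of what the package automates, the sketch below implements a simplified sieve-bootstrap ADF test in Python. It is not bootUR's algorithm (which, among other things, applies a union of rejections over deterministic specifications), and the AR order, lag selection, and replication count are assumptions; in practice the R package itself should be used.

```python
# Rough illustration only: a simplified sieve-bootstrap augmented Dickey-Fuller
# test. Not the bootUR implementation; AR order, lag selection and the number
# of replications below are simplifying assumptions.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ar_model import AutoReg

def sieve_bootstrap_adf(y, ar_order=4, n_boot=199, seed=0):
    rng = np.random.default_rng(seed)
    stat_obs = adfuller(y, regression="c", autolag="AIC")[0]

    # sieve step: approximate the short-run dynamics of the differences by an AR
    dy = np.diff(y)
    ar = AutoReg(dy, lags=ar_order).fit()
    resid = ar.resid - ar.resid.mean()
    phi = ar.params[1:]                       # AR coefficients (skip intercept)

    stats = np.empty(n_boot)
    for b in range(n_boot):
        eps = rng.choice(resid, size=len(dy), replace=True)
        dy_b = np.zeros(len(dy))
        for t in range(len(dy)):
            past = dy_b[max(0, t - ar_order):t][::-1]
            dy_b[t] = phi[:len(past)] @ past + eps[t]
        y_b = np.cumsum(np.r_[y[0], dy_b])    # impose the unit root under H0
        stats[b] = adfuller(y_b, regression="c", autolag="AIC")[0]

    pval = np.mean(stats <= stat_obs)         # left-tailed test
    return stat_obs, pval

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(250))       # a random walk: should not reject
print(sieve_bootstrap_adf(y))
```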
30176,em,"Causal mediation analysis aims at disentangling a treatment effect into an
indirect mechanism operating through an intermediate outcome or mediator, as
well as the direct effect of the treatment on the outcome of interest. However,
the evaluation of direct and indirect effects is frequently complicated by
non-ignorable selection into the treatment and/or mediator, even after
controlling for observables, as well as sample selection/outcome attrition. We
propose a method for bounding direct and indirect effects in the presence of
such complications using a method that is based on a sequence of linear
programming problems. Considering inverse probability weighting by propensity
scores, we compute the weights that would yield identification in the absence
of complications and perturb them by an entropy parameter reflecting a specific
amount of propensity score misspecification to set-identify the effects of
interest. We apply our method to data from the National Longitudinal Survey of
Youth 1979 to derive bounds on the explained and unexplained components of a
gender wage gap decomposition that is likely prone to non-ignorable mediator
selection and outcome attrition.",Bounds on direct and indirect effects under treatment/mediator endogeneity and outcome attrition,2020-02-13 00:51:00,"Martin Huber, Lukáš Lafférs","http://arxiv.org/abs/2002.05253v3, http://arxiv.org/pdf/2002.05253v3",econ.EM
30177,em,"We develop an analytical framework to study experimental design in two-sided
marketplaces. Many of these experiments exhibit interference, where an
intervention applied to one market participant influences the behavior of
another participant. This interference leads to biased estimates of the
treatment effect of the intervention. We develop a stochastic market model and
associated mean field limit to capture dynamics in such experiments, and use
our model to investigate how the performance of different designs and
estimators is affected by marketplace interference effects. Platforms typically
use two common experimental designs: demand-side (""customer"") randomization
(CR) and supply-side (""listing"") randomization (LR), along with their
associated estimators. We show that good experimental design depends on market
balance: in highly demand-constrained markets, CR is unbiased, while LR is
biased; conversely, in highly supply-constrained markets, LR is unbiased, while
CR is biased. We also introduce and study a novel experimental design based on
two-sided randomization (TSR) where both customers and listings are randomized
to treatment and control. We show that appropriate choices of TSR designs can
be unbiased in both extremes of market balance, while yielding relatively low
bias in intermediate regimes of market balance.",Experimental Design in Two-Sided Platforms: An Analysis of Bias,2020-02-13 20:49:42,"Ramesh Johari, Hannah Li, Inessa Liskovich, Gabriel Weintraub","http://arxiv.org/abs/2002.05670v5, http://arxiv.org/pdf/2002.05670v5",stat.ME
30178,em,"As technology continues to advance, there is increasing concern about
individuals being left behind. Many businesses are striving to adopt
responsible design practices and avoid any unintended consequences of their
products and services, ranging from privacy vulnerabilities to algorithmic
bias. We propose a novel approach to fairness and inclusiveness based on
experimentation. We use experimentation because we want to assess not only the
intrinsic properties of products and algorithms but also their impact on
people. We do this by introducing an inequality approach to A/B testing,
leveraging the Atkinson index from the economics literature. We show how to
perform causal inference over this inequality measure. We also introduce the
concept of site-wide inequality impact, which captures the inclusiveness impact
of targeting specific subpopulations for experiments, and show how to conduct
statistical inference on this impact. We provide real examples from LinkedIn,
as well as an open-source, highly scalable implementation of the computation of
the Atkinson index and its variance in Spark/Scala. We also provide over a
year's worth of learnings -- gathered by deploying our method at scale and
analyzing thousands of experiments -- on which areas and which kinds of product
innovations seem to inherently foster fairness through inclusiveness.",Fairness through Experimentation: Inequality in A/B testing as an approach to responsible design,2020-02-14 03:15:56,"Guillaume Saint-Jacques, Amir Sepehri, Nicole Li, Igor Perisic","http://arxiv.org/abs/2002.05819v1, http://arxiv.org/pdf/2002.05819v1",cs.SI
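A minimal sketch of the Atkinson index and a naive bootstrap comparison between experiment arms is given below. The Spark/Scala implementation and the closed-form variance described in the abstract are not reproduced, and the toy outcome distributions are assumptions.

```python
# Minimal sketch: the Atkinson inequality index and a naive bootstrap comparison
# between treatment and control arms of an experiment. The open-source
# Spark/Scala implementation and the variance formula mentioned in the abstract
# are not reproduced here.
import numpy as np

def atkinson(x, eps=1.0):
    """Atkinson index of inequality for non-negative outcomes x."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    if eps == 1.0:
        geo = np.exp(np.mean(np.log(x[x > 0])))   # geometric mean (positive part)
        return 1.0 - geo / mean
    ede = np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - ede / mean

def bootstrap_diff(treat, ctrl, eps=1.0, n_boot=2000, seed=0):
    """Bootstrap interval for the treatment-control difference in the Atkinson index."""
    rng = np.random.default_rng(seed)
    diffs = [atkinson(rng.choice(treat, treat.size), eps)
             - atkinson(rng.choice(ctrl, ctrl.size), eps)
             for _ in range(n_boot)]
    return np.percentile(diffs, [2.5, 97.5])

rng = np.random.default_rng(3)
ctrl = rng.lognormal(mean=1.0, sigma=0.8, size=5000)
treat = rng.lognormal(mean=1.0, sigma=0.6, size=5000)   # less dispersed outcomes
print(atkinson(ctrl), atkinson(treat), bootstrap_diff(treat, ctrl))
```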
30179,em,"This paper compares different implementations of monetary policy in a
new-Keynesian setting. We can show that a shift from Ramsey optimal policy
under short-term commitment (based on a negative feedback mechanism) to a
Taylor rule (based on a positive feedback mechanism) corresponds to a Hopf
bifurcation with opposite policy advice and a change of the dynamic properties.
This bifurcation occurs because of the ad hoc assumption that the interest rate is
a forward-looking variable when policy targets (inflation and output gap) are
forward-looking variables in the new-Keynesian theory.",Hopf Bifurcation from new-Keynesian Taylor rule to Ramsey Optimal Policy,2020-02-18 13:32:03,"Jean-Bernard Chatelain, Kirsten Ralf","http://dx.doi.org/10.1017/S1365100519001032, http://arxiv.org/abs/2002.07479v1, http://arxiv.org/pdf/2002.07479v1",econ.EM
30180,em,"We study identification and estimation in the Regression Discontinuity Design
(RDD) with a multivalued treatment variable. We also allow for the inclusion of
covariates. We show that without additional information, treatment effects are
not identified. We give necessary and sufficient conditions that lead to
identification of LATEs as well as of weighted averages of the conditional
LATEs. We show that if the first stage discontinuities of the multiple
treatments conditional on covariates are linearly independent, then it is
possible to identify multivariate weighted averages of the treatment effects
with convenient identifiable weights. If, moreover, treatment effects do not
vary with some covariates or a flexible parametric structure can be assumed, it
is possible to identify (in fact, over-identify) all the treatment effects. The
over-identification can be used to test these assumptions. We propose a simple
estimator, which can be programmed in packaged software as a Two-Stage Least
Squares regression, and packaged standard errors and tests can also be used.
Finally, we implement our approach to identify the effects of different types
of insurance coverage on health care utilization, as in Card, Dobkin and
Maestas (2008).",Regression Discontinuity Design with Multivalued Treatments,2020-07-01 05:22:26,"Carolina Caetano, Gregorio Caetano, Juan Carlos Escanciano","http://arxiv.org/abs/2007.00185v1, http://arxiv.org/pdf/2007.00185v1",econ.EM
30200,em,"This paper illustrates the use of entropy balancing in
difference-in-differences analyses when pre-intervention outcome trends suggest
a possible violation of the parallel trends assumption. We describe a set of
assumptions under which weighting to balance intervention and comparison groups
on pre-intervention outcome trends leads to consistent
difference-in-differences estimates even when pre-intervention outcome trends
are not parallel. Simulated results verify that entropy balancing of
pre-intervention outcomes trends can remove bias when the parallel trends
assumption is not directly satisfied, and thus may enable researchers to use
difference-in-differences designs in a wider range of observational settings
than previously acknowledged.",Reducing bias in difference-in-differences models using entropy balancing,2020-11-10 02:21:17,"Matthew Cefalu, Brian G. Vegetabile, Michael Dworsky, Christine Eibner, Federico Girosi","http://arxiv.org/abs/2011.04826v1, http://arxiv.org/pdf/2011.04826v1",stat.ME
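A minimal sketch of entropy balancing on pre-intervention outcome means, solved as a constrained optimization with scipy, is shown below. It illustrates the reweighting idea but is not the authors' exact procedure, and the toy pre-trends and function name are assumptions.

```python
# Illustrative sketch (not the authors' exact procedure): entropy-balancing
# weights for comparison units, chosen so the weighted comparison group matches
# the treated group's pre-intervention outcome means (and hence its pre-trend),
# solved with scipy's SLSQP on the primal problem.
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X_comparison, target_means):
    """X_comparison: (n, k) pre-period outcomes of comparison units.
    target_means: (k,) pre-period outcome means of the treated group."""
    n = X_comparison.shape[0]
    w0 = np.full(n, 1.0 / n)

    def neg_entropy(w):
        return np.sum(w * np.log(w * n))          # KL divergence from uniform weights

    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "eq", "fun": lambda w: X_comparison.T @ w - target_means},
    ]
    res = minimize(neg_entropy, w0, bounds=[(1e-10, 1.0)] * n,
                   constraints=constraints, method="SLSQP")
    return res.x

# toy usage: 3 pre-intervention periods, comparison group's trend is too flat
rng = np.random.default_rng(4)
X_c = rng.normal([10.0, 10.5, 11.0], 0.8, size=(200, 3))
target = np.array([10.0, 10.8, 11.6])             # treated group's steeper pre-trend
w = entropy_balance(X_c, target)
print(X_c.T @ w)                                   # should be close to target
```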
30181,em,"Crowding valuation of subway riders is an important input to various
supply-side decisions of transit operators. The crowding cost perceived by a
transit rider is generally estimated by capturing the trade-off that the rider
makes between crowding and travel time while choosing a route. However,
existing studies rely on static compensatory choice models and fail to account
for inertia and the learning behaviour of riders. To address these challenges,
we propose a new dynamic latent class model (DLCM) which (i) assigns riders to
latent compensatory and inertia/habit classes based on different decision
rules, (ii) enables transitions between these classes over time, and (iii)
adopts instance-based learning theory to account for the learning behaviour of
riders. We use the expectation-maximisation algorithm to estimate DLCM, and the
most probable sequence of latent classes for each rider is retrieved using the
Viterbi algorithm. The proposed DLCM can be applied in any choice context to
capture the dynamics of decision rules used by a decision-maker. We demonstrate
its practical advantages in estimating the crowding valuation of an Asian
metro's riders. To calibrate the model, we recover the daily route preferences
and in-vehicle crowding experiences of regular metro riders using a
two-month-long smart card and vehicle location data. The results indicate that
the average rider follows the compensatory rule on only 25.5% of route choice
occasions. DLCM estimates also show an increase of 47% in metro riders'
valuation of travel time under extremely crowded conditions relative to that
under uncrowded conditions.",A Dynamic Choice Model with Heterogeneous Decision Rules: Application in Estimating the User Cost of Rail Crowding,2020-07-07 13:26:56,"Prateek Bansal, Daniel Hörcher, Daniel J. Graham","http://arxiv.org/abs/2007.03682v1, http://arxiv.org/pdf/2007.03682v1",stat.AP
30182,em,"A structural Gaussian mixture vector autoregressive model is introduced. The
shocks are identified by combining simultaneous diagonalization of the reduced
form error covariance matrices with constraints on the time-varying impact
matrix. This leads to flexible identification conditions, and some of the
constraints are also testable. The empirical application studies asymmetries in
the effects of the U.S. monetary policy shock and finds strong asymmetries with
respect to the sign and size of the shock and to the initial state of the
economy. The accompanying CRAN distributed R package gmvarkit provides a
comprehensive set of tools for numerical analysis.",Structural Gaussian mixture vector autoregressive model with application to the asymmetric effects of monetary policy shocks,2020-07-09 14:19:16,Savi Virolainen,"http://arxiv.org/abs/2007.04713v7, http://arxiv.org/pdf/2007.04713v7",econ.EM
30183,em,"We introduce Multivariate Circulant Singular Spectrum Analysis (M-CiSSA) to
provide a comprehensive framework to analyze fluctuations, extracting the
underlying components of a set of time series, disentangling their sources of
variation and assessing their relative phase or cyclical position at each
frequency. Our novel method is non-parametric and can be applied to series out
of phase, highly nonlinear and modulated both in frequency and amplitude. We
prove a uniqueness theorem that in the case of common information and without
the need of fitting a factor model, allows us to identify common sources of
variation. This technique can be quite useful in several fields such as
climatology, biometrics, engineering or economics among others. We show the
performance of M-CiSSA through a synthetic example of latent signals modulated
both in amplitude and frequency and through the real data analysis of energy
prices to understand the main drivers and co-movements of primary energy
commodity prices at various frequencies that are key to assess energy policy at
different time horizons.",Understanding fluctuations through Multivariate Circulant Singular Spectrum Analysis,2020-07-15 12:21:32,"Juan Bógalo, Pilar Poncela, Eva Senra","http://arxiv.org/abs/2007.07561v5, http://arxiv.org/pdf/2007.07561v5",eess.SP
30184,em,"In the data-rich environment, using many economic predictors to forecast a
few key variables has become a new trend in econometrics. The commonly used
approach is the factor-augmented (FA) approach. In this paper, we pursue another
direction, variable selection (VS) approach, to handle high-dimensional
predictors. VS is an active topic in statistics and computer science. However,
it does not receive as much attention as FA in economics. This paper introduces
several cutting-edge VS methods to economic forecasting, which includes: (1)
classical greedy procedures; (2) l1 regularization; (3) gradient descent with
sparsification and (4) meta-heuristic algorithms. Comprehensive simulation
studies are conducted to compare their variable selection accuracy and
prediction performance under different scenarios. Among the reviewed methods, a
meta-heuristic algorithm, the sequential Monte Carlo algorithm, performs
best. Surprisingly, classical forward selection is comparable to it and
better than other more sophisticated algorithms. In addition, we apply these VS
methods to economic forecasting and compare them with the popular FA approach. It
turns out that for the employment rate and CPI inflation, some VS methods can achieve
considerable improvement over FA, and the selected predictors can be well
explained by economic theories.",Variable Selection in Macroeconomic Forecasting with Many Predictors,2020-07-20 17:33:57,"Zhenzhong Wang, Zhengyuan Zhu, Cindy Yu","http://arxiv.org/abs/2007.10160v1, http://arxiv.org/pdf/2007.10160v1",econ.EM
30185,em,"Multivalued treatments are commonplace in applications. We explore the use of
discrete-valued instruments to control for selection bias in this setting. Our
discussion revolves around the concept of targeting instruments: which
instruments target which treatments. It allows us to establish conditions under
which counterfactual averages and treatment effects are point- or
partially-identified for composite complier groups. We illustrate the
usefulness of our framework by applying it to data from the Head Start Impact
Study. Under a plausible positive selection assumption, we derive informative
bounds that suggest less beneficial effects of Head Start expansions than the
parametric estimates of Kline and Walters (2016).",Treatment Effects with Targeting Instruments,2020-07-20 22:43:31,"Sokbae Lee, Bernard Salanié","http://arxiv.org/abs/2007.10432v4, http://arxiv.org/pdf/2007.10432v4",econ.EM
30186,em,"A novel deep neural network framework -- that we refer to as Deep Dynamic
Factor Model (D$^2$FM) --, is able to encode the information available, from
hundreds of macroeconomic and financial time-series into a handful of
unobserved latent states. While similar in spirit to traditional dynamic factor
models (DFMs), differently from those, this new class of models allows for
nonlinearities between factors and observables due to the autoencoder neural
network structure. However, by design, the latent states of the model can still
be interpreted as in a standard factor model. Both in a fully real-time
out-of-sample nowcasting and forecasting exercise with US data and in a Monte
Carlo experiment, the D$^2$FM improves over the performance of a
state-of-the-art DFM.",Deep Dynamic Factor Models,2020-07-23 12:54:52,"Paolo Andreini, Cosimo Izzo, Giovanni Ricco","http://arxiv.org/abs/2007.11887v2, http://arxiv.org/pdf/2007.11887v2",econ.EM
30188,em,"This paper aims to examine whether the global economic policy uncertainty
(GEPU) and uncertainty changes have different impacts on crude oil futures
volatility. We establish single-factor and two-factor models under the
GARCH-MIDAS framework to investigate the predictive power of GEPU and GEPU
changes excluding and including realized volatility. The findings show that the
models with rolling-window specification perform better than those with
fixed-span specification. For single-factor models, the GEPU index and its
changes, as well as realized volatility, are consistent effective factors in
predicting the volatility of crude oil futures. In particular, GEPU changes have
stronger predictive power than the GEPU index. For two-factor models, GEPU is
not an effective forecast factor for the volatility of WTI crude oil futures or
Brent crude oil futures. The two-factor model with GEPU changes contains more
information and exhibits stronger forecasting ability for crude oil futures
market volatility than the single-factor models. The GEPU changes are indeed
the main source of long-term volatility of the crude oil futures.",The role of global economic policy uncertainty in predicting crude oil futures volatility: Evidence from a two-factor GARCH-MIDAS model,2020-07-25 05:52:44,"Peng-Fei Dai, Xiong Xiong, Wei-Xing Zhou","http://dx.doi.org/10.1016/j.resourpol.2022.102849, http://arxiv.org/abs/2007.12838v1, http://arxiv.org/pdf/2007.12838v1",q-fin.ST
30189,em,"We report results from the first comprehensive total quality evaluation of
five major indicators in the U.S. Census Bureau's Longitudinal
Employer-Household Dynamics (LEHD) Program Quarterly Workforce Indicators
(QWI): total flow-employment, beginning-of-quarter employment, full-quarter
employment, average monthly earnings of full-quarter employees, and total
quarterly payroll. Beginning-of-quarter employment is also the main tabulation
variable in the LEHD Origin-Destination Employment Statistics (LODES) workplace
reports as displayed in OnTheMap (OTM), including OnTheMap for Emergency
Management. We account for errors due to coverage; record-level non-response;
edit and imputation of item missing data; and statistical disclosure
limitation. The analysis reveals that the five publication variables under
study are estimated very accurately for tabulations involving at least 10 jobs.
Tabulations involving three to nine jobs are a transition zone, where cells may
be fit for use with caution. Tabulations involving one or two jobs, which are
generally suppressed on fitness-for-use criteria in the QWI and synthesized in
LODES, have substantial total variability but can still be used to estimate
statistics for untabulated aggregates as long as the job count in the aggregate
is more than 10.",Total Error and Variability Measures for the Quarterly Workforce Indicators and LEHD Origin-Destination Employment Statistics in OnTheMap,2020-07-27 05:15:29,"Kevin L. McKinney, Andrew S. Green, Lars Vilhuber, John M. Abowd","http://dx.doi.org/10.1093/jssam/smaa029, http://arxiv.org/abs/2007.13275v1, http://arxiv.org/pdf/2007.13275v1",econ.EM
30190,em,"Recent research finds that forecasting electricity prices is very relevant.
In many applications, it might be interesting to predict daily electricity
prices by using their own lags or renewable energy sources. However, the recent
turmoil of energy prices and the Russian-Ukrainian war increased attention to
evaluating the relevance of industrial production and the Purchasing Managers'
Index output survey in forecasting the daily electricity prices. We develop a
Bayesian reverse unrestricted MIDAS model which accounts for the mismatch in
frequency between the daily prices and the monthly macro variables in Germany
and Italy. We find that the inclusion of macroeconomic low frequency variables
is more important at short than at medium-term horizons, in terms of both point and
density forecast measures. In particular, accuracy increases by combining hard and soft
information, while using only surveys gives less accurate forecasts than using
only industrial production data.",Are low frequency macroeconomic variables important for high frequency electricity prices?,2020-07-24 15:21:30,"Claudia Foroni, Francesco Ravazzolo, Luca Rossini","http://arxiv.org/abs/2007.13566v2, http://arxiv.org/pdf/2007.13566v2",q-fin.ST
30191,em,"The COVID-19 recession that started in March 2020 led to an unprecedented
decline in economic activity across the globe. To fight this recession, policy
makers in central banks engaged in expansionary monetary policy. This paper
asks whether the measures adopted by the US Federal Reserve (Fed) have been
effective in boosting real activity and calming financial markets. To measure
these effects at high frequencies, we propose a novel mixed frequency vector
autoregressive (MF-VAR) model. This model allows us to combine weekly and
monthly information within a unified framework. Our model combines a set of
macroeconomic aggregates such as industrial production, unemployment rates and
inflation with high frequency information from financial markets such as stock
prices, interest rate spreads, and weekly information on the Fed's balance sheet
size. The latter set of high frequency time series is used to dynamically
interpolate the monthly time series to obtain weekly macroeconomic measures. We
use this setup to simulate counterfactuals in absence of monetary stimulus. The
results show that the monetary expansion caused higher output growth and stock
market returns, more favorable long-term financing conditions and a
depreciation of the US dollar compared to a no-policy benchmark scenario.",Measuring the Effectiveness of US Monetary Policy during the COVID-19 Recession,2020-07-30 15:28:14,"Martin Feldkircher, Florian Huber, Michael Pfarrhofer","http://arxiv.org/abs/2007.15419v1, http://arxiv.org/pdf/2007.15419v1",econ.EM
30192,em,"Many events and policies (treatments) occur at specific spatial locations,
with researchers interested in their effects on nearby units of interest. I
approach the spatial treatment setting from an experimental perspective: What
ideal experiment would we design to estimate the causal effects of spatial
treatments? This perspective motivates a comparison between individuals near
realized treatment locations and individuals near counterfactual (unrealized)
candidate locations, which differs from current empirical practice. I derive
design-based standard errors that are straightforward to compute irrespective
of spatial correlations in outcomes. Furthermore, I propose machine learning
methods to find counterfactual candidate locations using observational data
under unconfounded assignment of the treatment to locations. I apply the
proposed methods to study the causal effects of grocery stores on foot traffic
to nearby businesses during COVID-19 shelter-in-place policies, finding a
substantial positive effect at a very short distance, with no effect at larger
distances.",Causal Inference for Spatial Treatments,2020-11-01 01:09:26,Michael Pollmann,"http://arxiv.org/abs/2011.00373v2, http://arxiv.org/pdf/2011.00373v2",econ.EM
34568,th,"We study the canonical signaling game, endowing the sender with commitment
power. We provide a geometric characterization of the sender's attainable
payoffs, described by the topological join of the graphs of the sender's
interim payoff functions associated with different sender actions. We compare
the sender's payoffs in this setting with two benchmarks. In the first, in
addition to committing to her actions, the sender can commit to provide the
receiver with additional information. In the second, the sender can only commit
to release information, but cannot commit to her actions. We illustrate our
results with applications to job market signaling, communication with lying
costs, and disclosure.",Signaling With Commitment,2023-05-01 14:43:22,"Raphael Boleslavsky, Mehdi Shadmehr","http://arxiv.org/abs/2305.00777v1, http://arxiv.org/pdf/2305.00777v1",econ.TH
30193,em,"Graphical models are a powerful tool to estimate a high-dimensional inverse
covariance (precision) matrix, which has been applied to the portfolio
allocation problem. These models assume that the precision matrix is sparse.
However, when stock returns are driven by common factors, such an assumption
does not hold. We address this limitation and develop a
framework, Factor Graphical Lasso (FGL), which integrates graphical models with
the factor structure in the context of portfolio allocation by decomposing a
precision matrix into low-rank and sparse components. Our theoretical results
and simulations show that FGL consistently estimates the portfolio weights and
risk exposure and also that FGL is robust to heavy-tailed distributions which
makes our method suitable for financial applications. FGL-based portfolios are
shown to exhibit superior performance over several prominent competitors
including equal-weighted and Index portfolios in the empirical application for
the S&P500 constituents.",Optimal Portfolio Using Factor Graphical Lasso,2020-11-01 09:01:08,"Tae-Hwy Lee, Ekaterina Seregina","http://arxiv.org/abs/2011.00435v5, http://arxiv.org/pdf/2011.00435v5",econ.EM
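The decomposition into low-rank (factor) and sparse (idiosyncratic) components can be sketched with off-the-shelf tools. The snippet below is a simplified illustration of the idea rather than the paper's FGL estimator: factors are extracted by PCA, the residual precision matrix is estimated with scikit-learn's GraphicalLasso, the full precision matrix is recovered via the Woodbury identity, and global minimum-variance weights are formed. The simulated returns, the number of factors, and the penalty level are assumptions.

```python
# A simplified sketch of the factor-plus-sparse idea behind Factor Graphical Lasso:
# (1) extract K factors from returns by PCA, (2) fit a sparse precision matrix to the
# idiosyncratic residuals with the graphical lasso, (3) recover the full precision
# matrix via the Woodbury identity and form minimum-variance weights.
# This is not the paper's estimator or tuning; data and K are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
T, N, K = 500, 30, 3
F = rng.normal(size=(T, K))                       # latent factors (simulated)
B = rng.normal(size=(N, K))                       # loadings
R = F @ B.T + 0.5 * rng.normal(size=(T, N))       # returns = factor part + noise

pca = PCA(n_components=K).fit(R)
F_hat = pca.transform(R)                          # estimated factors
B_hat = np.linalg.lstsq(F_hat, R, rcond=None)[0].T
U = R - F_hat @ B_hat.T                           # idiosyncratic residuals

Theta_u = GraphicalLasso(alpha=0.05).fit(U).precision_   # sparse residual precision
Sigma_f = np.cov(F_hat, rowvar=False)

# Woodbury identity for (B Sigma_f B' + Sigma_u)^{-1}
M = np.linalg.inv(np.linalg.inv(Sigma_f) + B_hat.T @ Theta_u @ B_hat)
Theta = Theta_u - Theta_u @ B_hat @ M @ B_hat.T @ Theta_u

ones = np.ones(N)
w = Theta @ ones / (ones @ Theta @ ones)          # global minimum-variance weights
print("weights sum to", w.sum().round(4))
```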
30194,em,"This paper investigates the benefits of internet search data in the form of
Google Trends for nowcasting real U.S. GDP growth in real time through the lens
of mixed frequency Bayesian Structural Time Series (BSTS) models. We augment
and enhance both the model and the methodology to make them better suited to
nowcasting with a large number of potential covariates. Specifically, we allow
shrinking state variances towards zero to avoid overfitting, extend the SSVS
(spike and slab variable selection) prior to the more flexible
normal-inverse-gamma prior which stays agnostic about the underlying model
size, as well as adapt the horseshoe prior to the BSTS. The application to
nowcasting GDP growth as well as a simulation study demonstrate that the
horseshoe prior BSTS improves markedly upon the SSVS and the original BSTS
model with the largest gains in dense data-generating-processes. Our
application also shows that a large dimensional set of search terms is able to
improve nowcasts early in a specific quarter before other macroeconomic data
become available. Search terms with high inclusion probability have good
economic interpretation, reflecting leading signals of economic anxiety and
wealth effects.",Nowcasting Growth using Google Trends Data: A Bayesian Structural Time Series Model,2020-11-02 15:36:30,"David Kohns, Arnab Bhattacharjee","http://arxiv.org/abs/2011.00938v2, http://arxiv.org/pdf/2011.00938v2",econ.EM
30195,em,"Rapid and accurate detection of community outbreaks is critical to address
the threat of resurgent waves of COVID-19. A practical challenge in outbreak
detection is balancing accuracy vs. speed. In particular, while estimation
accuracy improves with longer fitting windows, speed degrades. This paper
presents a machine learning framework to balance this tradeoff using
generalized random forests (GRF), and applies it to detect county level
COVID-19 outbreaks. This algorithm chooses an adaptive fitting window size for
each county based on relevant features affecting the disease spread, such as
changes in social distancing policies. Experimental results show that our method
outperforms all non-adaptive window size choices in 7-day-ahead COVID-19
outbreak case number predictions.",Estimating County-Level COVID-19 Exponential Growth Rates Using Generalized Random Forests,2020-10-31 05:34:15,"Zhaowei She, Zilong Wang, Turgay Ayer, Asmae Toumi, Jagpreet Chhatwal","http://arxiv.org/abs/2011.01219v4, http://arxiv.org/pdf/2011.01219v4",cs.LG
30196,em,"We consider settings where an allocation has to be chosen repeatedly, returns
are unknown but can be learned, and decisions are subject to constraints. Our
model covers two-sided and one-sided matching, even with complex constraints.
We propose an approach based on Thompson sampling. Our main result is a
prior-independent finite-sample bound on the expected regret for this
algorithm. Although the number of allocations grows exponentially in the number
of participants, the bound does not depend on this number. We illustrate the
performance of our algorithm using data on refugee resettlement in the United
States.",Adaptive Combinatorial Allocation,2020-11-04 18:02:59,"Maximilian Kasy, Alexander Teytelboym","http://arxiv.org/abs/2011.02330v1, http://arxiv.org/pdf/2011.02330v1",econ.EM
30197,em,"Superspreading complicates the study of SARS-CoV-2 transmission. I propose a
model for aggregated case data that accounts for superspreading and improves
statistical inference. In a Bayesian framework, the model is estimated on
German data featuring over 60,000 cases with date of symptom onset and age
group. Several factors were associated with a strong reduction in transmission:
public awareness rising, testing and tracing, information on local incidence,
and high temperature. Immunity after infection, school and restaurant closures,
stay-at-home orders, and mandatory face covering were associated with a smaller
reduction in transmission. The data suggests that public distancing rules
increased transmission in young adults. Information on local incidence was
associated with a reduction in transmission of up to 44% (95%-CI: [40%, 48%]),
which suggests a prominent role of behavioral adaptations to local risk of
infection. Testing and tracing reduced transmission by 15% (95%-CI: [9%,20%]),
where the effect was strongest among the elderly. Extrapolating weather
effects, I estimate that transmission increases by 53% (95%-CI: [43%, 64%]) in
colder seasons.",Inference under Superspreading: Determinants of SARS-CoV-2 Transmission in Germany,2020-11-08 18:30:03,Patrick W. Schmidt,"http://arxiv.org/abs/2011.04002v1, http://arxiv.org/pdf/2011.04002v1",stat.AP
30198,em,"The existing approaches to sparse wealth allocations (1) are limited to
low-dimensional setup when the number of assets is less than the sample size;
(2) lack theoretical analysis of sparse wealth allocations and their impact on
portfolio exposure; (3) are suboptimal due to the bias induced by an
$\ell_1$-penalty. We address these shortcomings and develop an approach to
construct sparse portfolios in high dimensions. Our contribution is twofold:
from the theoretical perspective, we establish the oracle bounds of sparse
weight estimators and provide guidance regarding their distribution. From the
empirical perspective, we examine the merit of sparse portfolios during
different market scenarios. We find that in contrast to non-sparse
counterparts, our strategy is robust to recessions and can be used as a hedging
vehicle during such times.",A Basket Half Full: Sparse Portfolios,2020-11-05 22:33:59,Ekaterina Seregina,"http://arxiv.org/abs/2011.04278v2, http://arxiv.org/pdf/2011.04278v2",econ.EM
30199,em,"In this paper we propose a time-varying parameter (TVP) vector error
correction model (VECM) with heteroskedastic disturbances. We propose tools to
carry out dynamic model specification in an automatic fashion. This involves
using global-local priors, and postprocessing the parameters to achieve truly
sparse solutions. Depending on the respective set of coefficients, we achieve
this via minimizing auxiliary loss functions. Our two-step approach limits
overfitting and reduces parameter estimation uncertainty. We apply this
framework to modeling European electricity prices. When considering daily
electricity prices for different markets jointly, our model highlights the
importance of explicitly addressing cointegration and nonlinearities. In a
forecast exercise focusing on hourly prices for Germany, our approach yields
competitive metrics of predictive accuracy.",Sparse time-varying parameter VECMs with an application to modeling electricity prices,2020-11-09 20:22:31,"Niko Hauzenberger, Michael Pfarrhofer, Luca Rossini","http://arxiv.org/abs/2011.04577v2, http://arxiv.org/pdf/2011.04577v2",econ.EM
30201,em,"This paper is concerned with testing and dating structural breaks in the
dependence structure of multivariate time series. We consider a cumulative sum
(CUSUM) type test for constant copula-based dependence measures, such as
Spearman's rank correlation and quantile dependencies. The asymptotic null
distribution is not known in closed form and critical values are estimated by
an i.i.d. bootstrap procedure. We analyze size and power properties in a
simulation study under different dependence measure settings, such as skewed
and fat-tailed distributions. To date break points and to decide whether two
estimated break locations belong to the same break event, we propose a pivot
confidence interval procedure. Finally, we apply the test to the historical
data of ten large financial firms during the last financial crisis from 2002 to
mid-2013.",Testing and Dating Structural Changes in Copula-based Dependence Measures,2020-11-10 13:57:31,"Florian Stark, Sven Otto","http://dx.doi.org/10.1080/02664763.2020.1850655, http://arxiv.org/abs/2011.05036v1, http://arxiv.org/pdf/2011.05036v1",econ.EM
30202,em,"Conditional distribution functions are important statistical objects for the
analysis of a wide class of problems in econometrics and statistics. We propose
flexible Gaussian representations for conditional distribution functions and
give a concave likelihood formulation for their global estimation. We obtain
solutions that satisfy the monotonicity property of conditional distribution
functions, including under general misspecification and in finite samples. A
Lasso-type penalized version of the corresponding maximum likelihood estimator
is given that expands the scope of our estimation analysis to models with
sparsity. Inference and estimation results for conditional distribution,
quantile and density functions implied by our representations are provided and
illustrated with an empirical example and simulations.",Gaussian Transforms Modeling and the Estimation of Distributional Regression Functions,2020-11-12 17:34:57,"Richard Spady, Sami Stouli","http://arxiv.org/abs/2011.06416v1, http://arxiv.org/pdf/2011.06416v1",econ.EM
30203,em,"In this paper I revisit the interpretation of the linear instrumental
variables (IV) estimand as a weighted average of conditional local average
treatment effects (LATEs). I focus on a practically relevant situation in which
additional covariates are required for identification while the reduced-form
and first-stage regressions implicitly restrict the effects of the instrument
to be homogeneous, and are thus possibly misspecified. I show that the weights
on some conditional LATEs are negative and the IV estimand is no longer
interpretable as a causal effect under a weaker version of monotonicity, i.e.
when there are compliers but no defiers at some covariate values and defiers
but no compliers elsewhere. The problem of negative weights disappears in the
overidentified specification of Angrist and Imbens (1995) and in an alternative
method, termed ""reordered IV,"" that I also outline. Even if all weights are
positive, the IV estimand in the just identified specification is not
interpretable as the unconditional LATE parameter unless the groups with
different values of the instrument are roughly equal sized. I illustrate my
findings in an application to causal effects of college education using the
college proximity instrument. The benchmark estimates suggest that college
attendance yields earnings gains of about 60 log points, which is well outside
the range of estimates in the recent literature. I demonstrate, however, that
this result is driven by the presence of negative weights. Corrected estimates
indicate that attending college causes earnings to be roughly 20% higher.",When Should We (Not) Interpret Linear IV Estimands as LATE?,2020-11-13 03:04:30,Tymon Słoczyński,"http://arxiv.org/abs/2011.06695v6, http://arxiv.org/pdf/2011.06695v6",econ.EM
30204,em,"The impacts of new real estate developments are strongly associated to its
population distribution (types and compositions of households, incomes, social
demographics) conditioned on aspects such as dwelling typology, price,
location, and floor level. This paper presents a Machine Learning based method
to model the population distribution of upcoming developments of new buildings
within larger neighborhood/condo settings.
  We use a real data set from Ecopark Township, a real estate development
project in Hanoi, Vietnam, where we study two machine learning algorithms from
the deep generative models literature to create a population of synthetic
agents: Conditional Variational Auto-Encoder (CVAE) and Conditional Generative
Adversarial Networks (CGAN). A large experimental study was performed, showing
that the CVAE outperforms both the empirical distribution, a non-trivial
baseline model, and the CGAN in estimating the population distribution of new
real estate development projects.",Population synthesis for urban resident modeling using deep generative models,2020-11-13 13:48:19,"Martin Johnsen, Oliver Brandt, Sergio Garrido, Francisco C. Pereira","http://arxiv.org/abs/2011.06851v1, http://arxiv.org/pdf/2011.06851v1",cs.LG
30205,em,"In the stochastic volatility models for multivariate daily stock returns, it
has been found that the estimates of parameters become unstable as the
dimension of returns increases. To solve this problem, we focus on the factor
structure of multiple returns and consider two additional sources of
information: first, the realized stock index associated with the market factor,
and second, the realized covariance matrix calculated from high frequency data.
The proposed dynamic factor model with the leverage effect and realized
measures is applied to ten of the top stocks composing the exchange traded fund
linked with the investment return of the S&P 500 index, and the model is shown to
have a stable advantage in portfolio performance.","Dynamic factor, leverage and realized covariances in multivariate stochastic volatility",2020-11-13 16:48:24,"Yuta Yamauchi, Yasuhiro Omori","http://arxiv.org/abs/2011.06909v2, http://arxiv.org/pdf/2011.06909v2",econ.EM
30206,em,"This paper shows how to use a randomized saturation experimental design to
identify and estimate causal effects in the presence of spillovers--one
person's treatment may affect another's outcome--and one-sided
non-compliance--subjects can only be offered treatment, not compelled to take
it up. Two distinct causal effects are of interest in this setting: direct
effects quantify how a person's own treatment changes her outcome, while
indirect effects quantify how her peers' treatments change her outcome. We
consider the case in which spillovers occur within known groups, and take-up
decisions are invariant to peers' realized offers. In this setting we point
identify the effects of treatment-on-the-treated, both direct and indirect, in
a flexible random coefficients model that allows for heterogeneous treatment
effects and endogenous selection into treatment. We go on to propose a feasible
estimator that is consistent and asymptotically normal as the number and size
of groups increases. We apply our estimator to data from a large-scale job
placement services experiment, and find negative indirect treatment effects on
the likelihood of employment for those willing to take up the program. These
negative spillovers are offset by positive direct treatment effects from own
take-up.",Identifying Causal Effects in Experiments with Spillovers and Non-compliance,2020-11-13 21:43:16,"Francis J. DiTraglia, Camilo Garcia-Jimeno, Rossa O'Keeffe-O'Donovan, Alejandro Sanchez-Becerra","http://arxiv.org/abs/2011.07051v3, http://arxiv.org/pdf/2011.07051v3",econ.EM
30207,em,"This paper proposes a criterion for simultaneous GMM model and moment
selection: the generalized focused information criterion (GFIC). Rather than
attempting to identify the ""true"" specification, the GFIC chooses from a set of
potentially mis-specified moment conditions and parameter restrictions to
minimize the mean-squared error (MSE) of a user-specified target parameter. The
intent of the GFIC is to formalize a situation common in applied practice. An
applied researcher begins with a set of fairly weak ""baseline"" assumptions,
assumed to be correct, and must decide whether to impose any of a number of
stronger, more controversial ""suspect"" assumptions that yield parameter
restrictions, additional moment conditions, or both. Provided that the baseline
assumptions identify the model, we show how to construct an asymptotically
unbiased estimator of the asymptotic MSE to select over these suspect
assumptions: the GFIC. We go on to provide results for post-selection inference
and model averaging that can be applied both to the GFIC and various
alternative selection criteria. To illustrate how our criterion can be used in
practice, we specialize the GFIC to the problem of selecting over exogeneity
assumptions and lag lengths in a dynamic panel model, and show that it performs
well in simulations. We conclude by applying the GFIC to a dynamic panel data
model for the price elasticity of cigarette demand.",A Generalized Focused Information Criterion for GMM,2020-11-13 22:02:39,"Minsu Chang, Francis J. DiTraglia","http://arxiv.org/abs/2011.07085v1, http://arxiv.org/pdf/2011.07085v1",econ.EM
30208,em,"Factor model is an appealing and effective analytic tool for high-dimensional
time series, with a wide range of applications in economics, finance and
statistics. This paper develops two criteria for the determination of the
number of factors for tensor factor models where the signal part of an observed
tensor time series assumes a Tucker decomposition with the core tensor as the
factor tensor. The task is to determine the dimensions of the core tensor. One
of the proposed criteria is similar to information based criteria of model
selection, and the other is an extension of the approaches based on the ratios
of consecutive eigenvalues often used in factor analysis for panel time series.
Theoretical results, including sufficient conditions and convergence rates,
are established. The results include vector factor models as special cases,
with additional convergence rates. Simulation studies provide promising
finite sample performance for the two criteria.",Rank Determination in Tensor Factor Model,2020-11-14 00:04:47,"Yuefeng Han, Rong Chen, Cun-Hui Zhang","http://dx.doi.org/10.1214/22-EJS1991, http://arxiv.org/abs/2011.07131v3, http://arxiv.org/pdf/2011.07131v3",stat.ME
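The eigenvalue-ratio idea behind the second criterion can be illustrated on a simple panel. The sketch below applies the ratio of consecutive eigenvalues to a simulated vector factor model; in the tensor case one would apply the analogous step to each mode's unfolding. The data-generating process and the maximum rank considered are assumptions, and the paper's criteria include refinements not shown here.

```python
# A minimal sketch of the eigenvalue-ratio idea for choosing the number of factors.
# For a tensor factor model one would apply this to each mode's unfolding; here we
# illustrate it on a vector (panel) factor model with simulated data. The paper's
# criteria and conditions are more involved.
import numpy as np

rng = np.random.default_rng(4)
T, N, r_true = 300, 50, 3
F = rng.normal(size=(T, r_true))
L = rng.normal(size=(N, r_true))
X = F @ L.T + rng.normal(size=(T, N))            # panel with 3 factors plus noise

S = X.T @ X / T                                  # N x N sample second-moment matrix
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues, largest first

kmax = 10
ratios = eigvals[:kmax] / eigvals[1:kmax + 1]
r_hat = int(np.argmax(ratios)) + 1               # maximize lambda_k / lambda_{k+1}
print("estimated number of factors:", r_hat)     # expected to recover 3
```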
30209,em,"We propose a new framework for modeling high-dimensional matrix-variate time
series by a two-way transformation, where the transformed data consist of a
matrix-variate factor process, which is dynamically dependent, and three other
blocks of white noises. Specifically, for a given $p_1\times p_2$
matrix-variate time series, we seek common nonsingular transformations to
project the rows and columns onto another $p_1$ and $p_2$ directions according
to the strength of the dynamic dependence of the series on the past values.
Consequently, we treat the data as nonsingular linear row and column
transformations of dynamically dependent common factors and white noise
idiosyncratic components. We propose a common orthonormal projection method to
estimate the front and back loading matrices of the matrix-variate factors.
Under the setting that the largest eigenvalues of the covariance of the
vectorized idiosyncratic term diverge for large $p_1$ and $p_2$, we introduce a
two-way projected Principal Component Analysis (PCA) to estimate the associated
loading matrices of the idiosyncratic terms to mitigate such diverging noise
effects. A diagonal-path white noise testing procedure is proposed to estimate
the order of the factor matrix. Asymptotic properties of the
proposed method are established for both fixed and diverging dimensions as the
sample size increases to infinity. We use simulated and real examples to assess
the performance of the proposed method. We also compare our method with some
existing ones in the literature and find that the proposed approach not only
provides interpretable results but also performs well in out-of-sample
forecasting.",A Two-Way Transformed Factor Model for Matrix-Variate Time Series,2020-11-18 04:28:28,"Zhaoxing Gao, Ruey S. Tsay","http://arxiv.org/abs/2011.09029v1, http://arxiv.org/pdf/2011.09029v1",econ.EM
30210,em,"We provide an inference procedure for the sharp regression discontinuity
design (RDD) under monotonicity, with possibly multiple running variables.
Specifically, we consider the case where the true regression function is
monotone with respect to (all or some of) the running variables and assumed to
lie in a Lipschitz smoothness class. Such a monotonicity condition is natural
in many empirical contexts, and the Lipschitz constant has an intuitive
interpretation. We propose a minimax two-sided confidence interval (CI) and an
adaptive one-sided CI. For the two-sided CI, the researcher is required to
choose a Lipschitz constant such that the true regression function is believed
to lie in the corresponding smoothness class. This is the only tuning parameter, and the resulting CI has uniform
coverage and obtains the minimax optimal length. The one-sided CI can be
constructed to maintain coverage over all monotone functions, providing maximum
credibility in terms of the choice of the Lipschitz constant. Moreover, the
monotonicity makes it possible for the (excess) length of the CI to adapt to
the true Lipschitz constant of the unknown regression function. Overall, the
proposed procedures make it easy to see under what conditions on the underlying
regression function the given estimates are significant, which can add more
transparency to research using RDD methods.",Inference in Regression Discontinuity Designs under Monotonicity,2020-11-29 00:11:54,"Koohyun Kwon, Soonwoo Kwon","http://arxiv.org/abs/2011.14216v1, http://arxiv.org/pdf/2011.14216v1",econ.EM
30211,em,"Urban transportation and land use models have used theory and statistical
modeling methods to develop model systems that are useful in planning
applications. Machine learning methods have been considered too 'black box',
lacking interpretability, and their use has been limited within the land use
and transportation modeling literature. We present a use case in which
predictive accuracy is of primary importance, and compare the use of random
forest regression to multiple regression using ordinary least squares, to
predict rents per square foot in the San Francisco Bay Area using a large
volume of rental listings scraped from the Craigslist website. We find that we
are able to obtain useful predictions from both models using almost exclusively
local accessibility variables, though the predictive accuracy of the random
forest model is substantially higher.",A Comparison of Statistical and Machine Learning Algorithms for Predicting Rents in the San Francisco Bay Area,2020-11-26 11:50:45,"Paul Waddell, Arezoo Besharati-Zadeh","http://arxiv.org/abs/2011.14924v1, http://arxiv.org/pdf/2011.14924v1",econ.EM
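The comparison exercise can be reproduced in outline with scikit-learn. The sketch below contrasts random forest regression with OLS on synthetic data that merely stand in for the Craigslist listings and accessibility features used in the paper; variable definitions and hyperparameters are illustrative.

```python
# A small sketch of the prediction comparison in the paper: random forest regression
# vs. ordinary least squares on the same features. Synthetic data stand in for the
# Craigslist rental listings and accessibility variables used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 5000
X = rng.normal(size=(n, 6))                                  # accessibility-style features
y = 3 + X[:, 0] - 0.5 * X[:, 1] + np.sin(X[:, 2]) \
      + 0.8 * X[:, 3] * X[:, 4] + 0.3 * rng.normal(size=n)   # nonlinear "rent per sq ft"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("OLS out-of-sample R^2:", round(r2_score(y_te, ols.predict(X_te)), 3))
print("RF  out-of-sample R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
```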
30212,em,"Study samples often differ from the target populations of inference and
policy decisions in non-random ways. Researchers typically believe that such
departures from random sampling -- due to changes in the population over time
and space, or difficulties in sampling truly randomly -- are small, and their
corresponding impact on the inference should be small as well. We might
therefore be concerned if the conclusions of our studies are excessively
sensitive to a very small proportion of our sample data. We propose a method to
assess the sensitivity of applied econometric conclusions to the removal of a
small fraction of the sample. Manually checking the influence of all possible
small subsets is computationally infeasible, so we use an approximation to find
the most influential subset. Our metric, the ""Approximate Maximum Influence
Perturbation,"" is based on the classical influence function, and is
automatically computable for common methods including (but not limited to) OLS,
IV, MLE, GMM, and variational Bayes. We provide finite-sample error bounds on
approximation performance. At minimal extra cost, we provide an exact
finite-sample lower bound on sensitivity. We find that sensitivity is driven by
a signal-to-noise ratio in the inference problem, is not reflected in standard
errors, does not disappear asymptotically, and is not due to misspecification.
While some empirical applications are robust, results of several influential
economics papers can be overturned by removing less than 1% of the sample.",An Automatic Finite-Sample Robustness Metric: When Can Dropping a Little Data Make a Big Difference?,2020-11-30 20:05:48,"Tamara Broderick, Ryan Giordano, Rachael Meager","http://arxiv.org/abs/2011.14999v5, http://arxiv.org/pdf/2011.14999v5",stat.ME
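For OLS the influence-function approximation behind the metric is easy to sketch. The snippet below ranks observations by their empirical influence on a target slope, drops the top fraction predicted to move the estimate the most, and refits; it is a bare-bones illustration of the idea, not the authors' implementation, finite-sample bounds, or exact subset search.

```python
# A hedged sketch of the idea behind the Approximate Maximum Influence Perturbation for
# OLS: use the empirical influence of each observation on a target coefficient to pick
# the small subset whose removal is predicted to change the conclusion, then refit and
# check. This is a bare-bones illustration, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)                 # weak signal, noisy outcome

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Empirical influence of each observation on the slope coefficient (index 1)
infl = (XtX_inv @ X.T * resid).T[:, 1]

# Drop the alpha-fraction of observations predicted to push the slope down the most
alpha = 0.01
drop = np.argsort(infl)[-int(alpha * n):]         # removing these lowers the slope most
keep = np.setdiff1d(np.arange(n), drop)
beta_refit = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]

print("full-sample slope:", round(beta[1], 4))
print("slope after dropping", len(drop), "most influential points:",
      round(beta_refit[1], 4))
```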
30213,em,"Researchers have compared machine learning (ML) classifiers and discrete
choice models (DCMs) in predicting travel behavior, but the generalizability of
the findings is limited by the specifics of data, contexts, and authors'
expertise. This study seeks to provide a generalizable empirical benchmark by
comparing hundreds of ML and DCM classifiers in a highly structured manner. The
experiments evaluate both prediction accuracy and computational cost by
spanning four hyper-dimensions, including 105 ML and DCM classifiers from 12
model families, 3 datasets, 3 sample sizes, and 3 outputs. This experimental
design leads to an immense number of 6,970 experiments, which are corroborated
with a meta dataset of 136 experiment points from 35 previous studies. This
study is hitherto the most comprehensive and almost exhaustive comparison of
the classifiers for travel behavioral prediction. We found that the ensemble
methods and deep neural networks achieve the highest predictive performance,
but at a relatively high computational cost. Random forests are the most
computationally efficient, balancing between prediction and computation. While
discrete choice models offer accuracy with only 3-4 percentage points lower
than the top ML classifiers, they have much longer computational time and
become computationally infeasible with large sample sizes, high input
dimensions, or simulation-based estimation. The relative ranking of the ML and
DCM classifiers is highly stable, while the absolute values of the prediction
accuracy and computational time have large variations. Overall, this paper
suggests using deep neural networks, model ensembles, and random forests as
baseline models for future travel behavior prediction. For choice modeling, the
DCM community should shift more attention from fitting models to improving
computational efficiency, so that the DCMs can be widely adopted in the big
data context.",Comparing hundreds of machine learning classifiers and discrete choice models in predicting travel behavior: an empirical benchmark,2021-02-01 22:45:47,"Shenhao Wang, Baichuan Mo, Stephane Hess, Jinhua Zhao","http://arxiv.org/abs/2102.01130v1, http://arxiv.org/pdf/2102.01130v1",cs.LG
30214,em,"This paper investigates the size performance of Wald tests for CAViaR models
(Engle and Manganelli, 2004). We find that the usual strategy for estimating the
test statistics yields size distortions. Indeed, we show that existing density
estimation methods cannot adapt to the time-variation in the conditional
probability densities of CAViaR models. Consequently, we develop a method
called adaptive random bandwidth which can approximate time-varying conditional
probability densities robustly for inference testing on CAViaR models based on
the asymptotic normality of the model parameter estimator. This proposed method
also avoids the problem of choosing an optimal bandwidth in estimating
probability densities, and can be extended to multivariate quantile regressions
in a straightforward manner.",Adaptive Random Bandwidth for Inference in CAViaR Models,2021-02-02 20:46:02,"Alain Hecq, Li Sun","http://arxiv.org/abs/2102.01636v1, http://arxiv.org/pdf/2102.01636v1",econ.EM
30215,em,"We develop a novel method of constructing confidence bands for nonparametric
regression functions under shape constraints. This method can be implemented
via a linear programming, and it is thus computationally appealing. We
illustrate a usage of our proposed method with an application to the regression
kink design (RKD). Econometric analyses based on the RKD often suffer from wide
confidence intervals due to slow convergence rates of nonparametric derivative
estimators. We demonstrate that economic models and structures motivate shape
restrictions, which in turn contribute to shrinking the confidence interval for
an analysis of the causal effects of unemployment insurance benefits on
unemployment durations.",Linear programming approach to nonparametric inference under shape restrictions: with an application to regression kink designs,2021-02-12 18:52:05,"Harold D. Chiang, Kengo Kato, Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/2102.06586v1, http://arxiv.org/pdf/2102.06586v1",econ.EM
30216,em,"This article develops new closed-form variance expressions for power analyses
for commonly used difference-in-differences (DID) and comparative interrupted
time series (CITS) panel data estimators. The main contribution is to
incorporate variation in treatment timing into the analysis. The power formulas
also account for other key design features that arise in practice:
autocorrelated errors, unequal measurement intervals, and clustering due to the
unit of treatment assignment. We consider power formulas for both
cross-sectional and longitudinal models and allow for covariates. An
illustrative power analysis provides guidance on appropriate sample sizes. The
key finding is that accounting for treatment timing increases required sample
sizes. Further, DID estimators have considerably more power than standard CITS
and ITS estimators. An available Shiny R dashboard performs the sample size
calculations for the considered estimators.",Statistical Power for Estimating Treatment Effects Using Difference-in-Differences and Comparative Interrupted Time Series Designs with Variation in Treatment Timing,2021-02-12 23:56:12,Peter Z. Schochet,"http://arxiv.org/abs/2102.06770v2, http://arxiv.org/pdf/2102.06770v2",stat.ME
30217,em,"This paper introduces a feasible and practical Bayesian method for unit root
testing in financial time series. We propose a convenient approximation of the
Bayes factor in terms of the Bayesian Information Criterion as a
straightforward and effective strategy for testing the unit root hypothesis.
Our approximate approach relies on few assumptions, is of general
applicability, and preserves a satisfactory error rate. Among its advantages,
it does not require the prior distribution on model's parameters to be
specified. Our simulation study and empirical application on real exchange
rates show great accordance between the suggested simple approach and both
Bayesian and non-Bayesian alternatives.",Approximate Bayes factors for unit root testing,2021-02-19 20:31:21,"Magris Martin, Iosifidis Alexandros","http://arxiv.org/abs/2102.10048v2, http://arxiv.org/pdf/2102.10048v2",econ.EM
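The BIC-based approximation to the Bayes factor can be sketched for a simple unit-root comparison. The code below uses the textbook relation log BF_10 ≈ (BIC_0 - BIC_1)/2 to compare a driftless random walk against a stationary AR(1) with intercept; the model pair, the simulated series, and the omission of drift terms are assumptions rather than the paper's exact setup.

```python
# A sketch of approximating the Bayes factor for a unit root via BIC differences,
# BF_10 ~ exp{(BIC_0 - BIC_1)/2}, comparing a driftless random walk (model 0) with a
# stationary AR(1) with intercept (model 1). Details (drift terms, priors implicit in
# the approximation) follow a textbook setup rather than the paper's exact one.
import numpy as np

def bic_gaussian(resid, n_params):
    n = len(resid)
    sigma2 = np.mean(resid ** 2)
    return n * np.log(sigma2) + n_params * np.log(n)   # additive constants dropped

def approx_bf_unit_root(y):
    dy = np.diff(y)
    y_lag, y_cur = y[:-1], y[1:]
    # Model 0: random walk, y_t = y_{t-1} + e_t (only sigma^2 estimated)
    bic0 = bic_gaussian(dy, 1)
    # Model 1: AR(1) with intercept, y_t = c + phi * y_{t-1} + e_t
    X = np.column_stack([np.ones_like(y_lag), y_lag])
    beta = np.linalg.lstsq(X, y_cur, rcond=None)[0]
    bic1 = bic_gaussian(y_cur - X @ beta, 3)
    return np.exp(0.5 * (bic0 - bic1))                  # BF in favour of stationarity

rng = np.random.default_rng(7)
rw = np.cumsum(rng.normal(size=500))                    # unit-root series
ar = np.zeros(500)
for t in range(1, 500):
    ar[t] = 0.7 * ar[t - 1] + rng.normal()              # stationary AR(1)

print("BF_10 (random walk data):", round(approx_bf_unit_root(rw), 3))
print("BF_10 (stationary data): ", round(approx_bf_unit_root(ar), 3))
```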
30219,em,"A growing area of research in epidemiology is the identification of
health-related sibling spillover effects, or the effect of one individual's
exposure on their sibling's outcome. The health and health care of family
members may be inextricably confounded by unobserved factors, rendering
identification of spillover effects within families particularly challenging.
We demonstrate a gain-score regression method for identifying
exposure-to-outcome spillover effects within sibling pairs in a linear fixed
effects framework. The method can identify the exposure-to-outcome spillover
effect if only one sibling's exposure affects the other's outcome; and it
identifies the difference between the spillover effects if both siblings'
exposures affect the others' outcomes. The method fails in the presence of
outcome-to-exposure spillover and outcome-to-outcome spillover. Analytic
results and Monte Carlo simulations demonstrate the method and its limitations.
To exercise this method, we estimate the spillover effect of a child's preterm
birth on an older sibling's literacy skills, measured by the Phonological
Awareness Literacy Screening-Kindergarten test. We analyze 20,010 sibling
pairs from a population-wide, Wisconsin-based (United States) birth cohort.
Without covariate adjustment, we estimate that preterm birth modestly decreases
an older sibling's test score (-2.11 points; 95% confidence interval: -3.82,
-0.40 points). In conclusion, gain-scores are a promising strategy for
identifying exposure-to-outcome spillovers in sibling pairs while controlling
for sibling-invariant unobserved confounding in linear settings.",Estimating Sibling Spillover Effects with Unobserved Confounding Using Gain-Scores,2021-02-22 19:18:59,"David C. Mallinson, Felix Elwert","http://arxiv.org/abs/2102.11150v3, http://arxiv.org/pdf/2102.11150v3",stat.ME
30220,em,"Researchers regularly perform conditional prediction using imputed values of
missing data. However, applications of imputation often lack a firm foundation
in statistical theory. This paper originated when we were unable to find
analysis substantiating claims that imputation of missing data has good
frequentist properties when data are missing at random (MAR). We focused on the
use of observed covariates to impute missing covariates when estimating
conditional means of the form E(y|x, w). Here y is an outcome whose
realizations are always observed, x is a covariate whose realizations are
always observed, and w is a covariate whose realizations are sometimes
unobserved. We examine the probability limit of simple imputation estimates of
E(y|x, w) as sample size goes to infinity. We find that these estimates are not
consistent when covariate data are MAR. To the contrary, the estimates suffer
from a shrinkage problem. They converge to points intermediate between the
conditional mean of interest, E(y|x, w), and the mean E(y|x) that conditions
only on x. We use a type of genotype imputation to illustrate.",Misguided Use of Observed Covariates to Impute Missing Covariates in Conditional Prediction: A Shrinkage Problem,2021-02-22 23:06:08,"Charles F Manski, Michael Gmeiner, Anat Tamburc","http://arxiv.org/abs/2102.11334v1, http://arxiv.org/pdf/2102.11334v1",econ.EM
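The shrinkage phenomenon is easy to reproduce in a small simulation. The sketch below imputes a missing-at-random binary covariate by its most likely value given x and shows that the resulting cell-mean estimate of E(y|x, w) lands between E(y|x, w) and E(y|x); all distributions and parameter values are illustrative.

```python
# A small simulation of the shrinkage problem described above: when a missing binary
# covariate w is imputed by its most likely value given x, the cell-mean estimate of
# E(y | x, w) converges to a point between E(y | x, w) and E(y | x). All parameter
# values are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n = 500_000
x = rng.integers(0, 2, n)
w = (rng.random(n) < 0.3 + 0.4 * x).astype(int)      # P(w=1 | x) = 0.3 + 0.4 x
y = 1 + x + 2 * w + rng.normal(size=n)

miss = rng.random(n) < 0.3 + 0.4 * x                 # w missing at random given x
w_imp = w.copy()
w_imp[miss] = x[miss]                                # impute the most likely value given x

cell = (x == 1) & (w_imp == 1)
print("true E(y | x=1, w=1):             ", 4.0)
print("E(y | x=1) (conditions on x only): ", round(1 + 1 + 2 * 0.7, 2))
print("imputed-data cell-mean estimate:   ", round(y[cell].mean(), 3))
```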
30221,em,"Factor and sparse models are two widely used methods to impose a
low-dimensional structure in high-dimensions. However, they are seemingly
mutually exclusive. We propose a lifting method that combines the merits of
these two models in a supervised learning methodology that allows for
efficiently exploring all the information in high-dimensional datasets. The
method is based on a flexible model for high-dimensional panel data, called
factor-augmented regression model with observable and/or latent common factors,
as well as idiosyncratic components. This model not only includes both
principal component regression and sparse regression as specific models but
also significantly weakens the cross-sectional dependence and facilitates model
selection and interpretability. The method consists of several steps and a
novel test for (partial) covariance structure in high dimensions to infer the
remaining cross-section dependence at each step. We develop the theory for the
model and demonstrate the validity of the multiplier bootstrap for testing a
high-dimensional (partial) covariance structure. The theory is supported by a
simulation study and applications.",Bridging factor and sparse models,2021-02-22 23:25:38,"Jianqing Fan, Ricardo Masini, Marcelo C. Medeiros","http://arxiv.org/abs/2102.11341v4, http://arxiv.org/pdf/2102.11341v4",econ.EM
30222,em,"Mixed-frequency Vector AutoRegressions (MF-VAR) model the dynamics between
variables recorded at different frequencies. However, as the number of series
and high-frequency observations per low-frequency period grow, MF-VARs suffer
from the ""curse of dimensionality"". We curb this curse through a regularizer
that permits hierarchical sparsity patterns by prioritizing the inclusion of
coefficients according to the recency of the information they contain.
Additionally, we investigate the presence of nowcasting relations by sparsely
estimating the MF-VAR error covariance matrix. We study predictive Granger
causality relations in a MF-VAR for the U.S. economy and construct a coincident
indicator of GDP growth. Supplementary Materials for this article are available
online.",Hierarchical Regularizers for Mixed-Frequency Vector Autoregressions,2021-02-23 19:30:15,"Alain Hecq, Marie Ternes, Ines Wilms","http://arxiv.org/abs/2102.11780v2, http://arxiv.org/pdf/2102.11780v2",econ.EM
30223,em,"We provide a comprehensive theory of conducting in-sample statistical
inference about receiver operating characteristic (ROC) curves that are based
on predicted values from a first stage model with estimated parameters (such as
a logit regression). The term ""in-sample"" refers to the practice of using the
same data for model estimation (training) and subsequent evaluation, i.e., the
construction of the ROC curve. We show that in this case the first stage
estimation error has a generally non-negligible impact on the asymptotic
distribution of the ROC curve and develop the appropriate pointwise and
functional limit theory. We propose methods for simulating the distribution of
the limit process and show how to use the results in practice in comparing ROC
curves.",Inference for ROC Curves Based on Estimated Predictive Indices,2021-12-03 11:10:19,"Yu-Chin Hsu, Robert P. Lieli","http://arxiv.org/abs/2112.01772v1, http://arxiv.org/pdf/2112.01772v1",econ.EM
30224,em,"This paper studies nonparametric identification and estimation of causal
effects in centralized school assignment. In many centralized assignment
settings, students are subjected to both lottery-driven variation and
regression discontinuity (RD) driven variation. We characterize the full set of
identified atomic treatment effects (aTEs), defined as the conditional average
treatment effect between a pair of schools, given student characteristics.
Atomic treatment effects are the building blocks of more aggregated notions of
treatment contrasts, and common approaches estimating aggregations of aTEs can
mask important heterogeneity. In particular, many aggregations of aTEs put zero
weight on aTEs driven by RD variation, and estimators of such aggregations put
asymptotically vanishing weight on the RD-driven aTEs. We develop a diagnostic
tool for empirically assessing the weight put on aTEs driven by RD variation.
Lastly, we provide estimators and accompanying asymptotic results for inference
on aggregations of RD-driven aTEs.",Nonparametric Treatment Effect Identification in School Choice,2021-12-07 21:09:28,Jiafeng Chen,"http://arxiv.org/abs/2112.03872v3, http://arxiv.org/pdf/2112.03872v3",econ.EM
30225,em,"The ability to generalize experimental results from randomized control trials
(RCTs) across locations is crucial for informing policy decisions in targeted
regions. Such generalization is often hindered by the lack of identifiability
due to unmeasured effect modifiers that compromise direct transport of
treatment effect estimates from one location to another. We build upon
sensitivity analysis in observational studies and propose an optimization
procedure that allows us to get bounds on the treatment effects in targeted
regions. Furthermore, we construct more informative bounds by balancing on the
moments of covariates. In simulation experiments, we show that the covariate
balancing approach is promising in getting sharper identification intervals.",Covariate Balancing Sensitivity Analysis for Extrapolating Randomized Trials across Locations,2021-12-09 09:50:32,"Xinkun Nie, Guido Imbens, Stefan Wager","http://arxiv.org/abs/2112.04723v1, http://arxiv.org/pdf/2112.04723v1",econ.EM
30226,em,"We show that the Realized GARCH model yields close-form expression for both
the Volatility Index (VIX) and the volatility risk premium (VRP). The Realized
GARCH model is driven by two shocks, a return shock and a volatility shock, and
these are natural state variables in the stochastic discount factor (SDF). The
volatility shock endows the exponentially affine SDF with a compensation for
volatility risk. This leads to dissimilar dynamic properties under the physical
and risk-neutral measures that can explain time-variation in the VRP. In an
empirical application with the S&P 500 returns, the VIX, and the VRP, we find
that the Realized GARCH model significantly outperforms conventional GARCH
models.","Realized GARCH, CBOE VIX, and the Volatility Risk Premium",2021-12-10 05:38:42,"Peter Reinhard Hansen, Zhuo Huang, Chen Tong, Tianyi Wang","http://arxiv.org/abs/2112.05302v1, http://arxiv.org/pdf/2112.05302v1",econ.EM
30227,em,"We introduce a new volatility model for option pricing that combines Markov
switching with the Realized GARCH framework. This leads to a novel pricing
kernel with a state-dependent variance risk premium and a pricing formula for
European options, which is derived with an analytical approximation method. We
apply the Markov switching Realized GARCH model to S&P 500 index options from
1990 to 2019 and find that investors' aversion to volatility-specific risk is
time-varying. The proposed framework outperforms competing models and reduces
(in-sample and out-of-sample) option pricing errors by 15% or more.",Option Pricing with State-dependent Pricing Kernel,2021-12-10 05:58:36,"Chen Tong, Peter Reinhard Hansen, Zhuo Huang","http://arxiv.org/abs/2112.05308v2, http://arxiv.org/pdf/2112.05308v2",q-fin.PR
30228,em,"Synthetic control (SC) methods have been widely applied to estimate the
causal effect of large-scale interventions, e.g., the state-wide effect of a
change in policy. The idea of synthetic controls is to approximate one unit's
counterfactual outcomes using a weighted combination of some other units'
observed outcomes. The motivating question of this paper is: how does the SC
strategy lead to valid causal inferences? We address this question by
re-formulating the causal inference problem targeted by SC with a more
fine-grained model, where we change the unit of the analysis from ""large units""
(e.g., states) to ""small units"" (e.g., individuals in states). Under this
re-formulation, we derive sufficient conditions for the non-parametric causal
identification of the causal effect. We highlight two implications of the
reformulation: (1) it clarifies where ""linearity"" comes from, and how it falls
naturally out of the more fine-grained and flexible model, and (2) it suggests
new ways of using available data with SC methods for valid causal inference, in
particular, new ways of selecting observations from which to estimate the
counterfactual.",On the Assumptions of Synthetic Control Methods,2021-12-10 20:07:14,"Claudia Shi, Dhanya Sridhar, Vishal Misra, David M. Blei","http://arxiv.org/abs/2112.05671v2, http://arxiv.org/pdf/2112.05671v2",stat.ME
30229,em,"We provide a decision theoretic analysis of bandit experiments. Working
within the framework of diffusion asymptotics, we define suitable notions of
asymptotic Bayes and minimax risk for these experiments. For normally
distributed rewards, the minimal Bayes risk can be characterized as the
solution to a second-order partial differential equation (PDE). Using a limit
of experiments approach, we show that this PDE characterization also holds
asymptotically under both parametric and non-parametric distributions of the
rewards. The approach further describes the state variables it is
asymptotically sufficient to restrict attention to, and thereby suggests a
practical strategy for dimension reduction. The PDEs characterizing minimal
Bayes risk can be solved efficiently using sparse matrix routines. We derive
the optimal Bayes and minimax policies from their numerical solutions. These
optimal policies substantially dominate existing methods such as Thompson
sampling and UCB, often by a factor of two. The framework also covers time
discounting and pure exploration.",Risk and optimal policies in bandit experiments,2021-12-13 03:41:19,Karun Adusumilli,"http://arxiv.org/abs/2112.06363v14, http://arxiv.org/pdf/2112.06363v14",econ.EM
30230,em,"We reconcile the two worlds of dense and sparse modeling by exploiting the
positive aspects of both. We employ a factor model and assume that the dynamics
of the factors are non-pervasive while the idiosyncratic term follows a sparse
vector autoregressive model (VAR) which allows for cross-sectional and time
dependence. The estimation is articulated in two steps: first, the factors and
their loadings are estimated via principal component analysis and second, the
sparse VAR is estimated by regularized regression on the estimated
idiosyncratic components. We prove the consistency of the proposed estimation
approach as the time and cross-sectional dimension diverge. In the second step,
the estimation error of the first step needs to be accounted for. Here, we do
not follow the naive approach of simply plugging in the standard rates derived
for the factor estimation. Instead, we derive a more refined expression of the
error. This enables us to derive tighter rates. We discuss the implications of
our model for forecasting, factor augmented regression, bootstrap of factor
models, and time series dependence networks via semi-parametric estimation of
the inverse of the spectral density matrix.",Factor Models with Sparse VAR Idiosyncratic Components,2021-12-14 07:10:59,"Jonas Krampe, Luca Margaritella","http://arxiv.org/abs/2112.07149v2, http://arxiv.org/pdf/2112.07149v2",stat.ME
30231,em,"We provide the first behavioral characterization of nested logit, a
foundational and widely applied discrete choice model, through the introduction
of a non-parametric version of nested logit that we call Nested Stochastic
Choice (NSC). NSC is characterized by a single axiom that weakens Independence
of Irrelevant Alternatives based on revealed similarity to allow for the
similarity effect. Nested logit is characterized by an additional
menu-independence axiom. Our axiomatic characterization leads to a practical,
data-driven algorithm that identifies the true nest structure from choice data.
We also discuss limitations of generalizing nested logit by studying the
testable implications of cross-nested logit.",Behavioral Foundations of Nested Stochastic Choice and Nested Logit,2021-12-14 07:30:14,"Matthew Kovach, Gerelt Tserenjigmid","http://arxiv.org/abs/2112.07155v2, http://arxiv.org/pdf/2112.07155v2",econ.TH
30232,em,"State-space mixed-frequency vector autoregressions are now widely used for
nowcasting. Despite their popularity, estimating such models can be
computationally intensive, especially for large systems with stochastic
volatility. To tackle the computational challenges, we propose two novel
precision-based samplers to draw the missing observations of the low-frequency
variables in these models, building on recent advances in the band and sparse
matrix algorithms for state-space models. We show via a simulation study that
the proposed methods are more numerically accurate and computationally
efficient compared to standard Kalman-filter based methods. We demonstrate how
the proposed method can be applied in two empirical macroeconomic applications:
estimating the monthly output gap and studying the response of GDP to a
monetary policy shock at the monthly frequency. Results from these two
empirical applications highlight the importance of incorporating high-frequency
indicators in macroeconomic models.",Efficient Estimation of State-Space Mixed-Frequency VARs: A Precision-Based Approach,2021-12-21 19:06:04,"Joshua C. C. Chan, Aubrey Poon, Dan Zhu","http://arxiv.org/abs/2112.11315v1, http://arxiv.org/pdf/2112.11315v1",econ.EM
30233,em,"We present a novel characterization of random rank-dependent expected utility
for finite datasets and finite prizes. The test lends itself to statistical
testing using the tools in Kitamura and Stoye (2018).",Random Rank-Dependent Expected Utility,2021-12-27 16:18:42,"Nail Kashaev, Victor Aguiar","http://dx.doi.org/10.3390/g13010013, http://arxiv.org/abs/2112.13649v1, http://arxiv.org/pdf/2112.13649v1",econ.TH
30234,em,"We study a continuous treatment effect model in the presence of treatment
spillovers through social networks. We assume that one's outcome is affected
not only by his/her own treatment but also by a (weighted) average of his/her
neighbors' treatments, both of which are treated as endogenous variables. Using
a control function approach with appropriate instrumental variables, we show
that the conditional mean potential outcome can be nonparametrically
identified. We also consider a more empirically tractable semiparametric model
and develop a three-step estimation procedure for this model. As an empirical
illustration, we investigate the causal effect of the regional unemployment
rate on the crime rate.",Estimating a Continuous Treatment Model with Spillovers: A Control Function Approach,2021-12-30 19:16:30,Tadao Hoshino,"http://arxiv.org/abs/2112.15114v3, http://arxiv.org/pdf/2112.15114v3",econ.EM
30235,em,"Causally identifying the effect of digital advertising is challenging,
because experimentation is expensive, and observational data lacks random
variation. This paper identifies a pervasive source of naturally occurring,
quasi-experimental variation in user-level ad-exposure in digital advertising
campaigns. It shows how this variation can be utilized by ad-publishers to
identify the causal effect of advertising campaigns. The variation pertains to
auction throttling, a probabilistic method of budget pacing that is widely used
to spread an ad-campaign's budget over its deployed duration, so that the
campaign's budget is not exceeded or overly concentrated in any one period. The
throttling mechanism is implemented by computing a participation probability
based on the campaign's budget spending rate and then including the campaign in
a random subset of available ad-auctions each period according to this
probability. We show that access to logged-participation probabilities enables
identifying the local average treatment effect (LATE) in the ad-campaign. We
present a new estimator that leverages this identification strategy and outline
a bootstrap procedure for quantifying its variability. We apply our method to
real-world ad-campaign data from an e-commerce advertising platform, which uses
such throttling for budget pacing. We show our estimate is statistically
different from estimates derived using other standard observational methods
such as OLS and two-stage least squares estimators. Our estimated conversion
lift is 110%, a more plausible number than 600%, the conversion lifts estimated
using naive observational methods.",Auction Throttling and Causal Inference of Online Advertising Effects,2021-12-30 21:21:04,"George Gui, Harikesh Nair, Fengshi Niu","http://arxiv.org/abs/2112.15155v2, http://arxiv.org/pdf/2112.15155v2",econ.EM
30236,em,"In this paper, we consider a high-dimensional quantile regression model where
the sparsity structure may differ between two sub-populations. We develop
$\ell_1$-penalized estimators of both regression coefficients and the threshold
parameter. Our penalized estimators not only select covariates but also
discriminate between a model with homogeneous sparsity and a model with a
change point. As a result, it is not necessary to know or pretest whether the
change point is present, or where it occurs. Our estimator of the change point
achieves an oracle property in the sense that its asymptotic distribution is
the same as if the unknown active sets of regression coefficients were known.
Importantly, we establish this oracle property without requiring perfect
covariate selection, thereby avoiding the need for a minimum signal-strength
condition on the active covariates. Dealing with high-dimensional quantile regression
with an unknown change point calls for a new proof technique since the quantile
loss function is non-smooth and furthermore the corresponding objective
function is non-convex with respect to the change point. The technique
developed in this paper is applicable to a general M-estimation framework with
a change point, which may be of independent interest. The proposed methods are
then illustrated via Monte Carlo experiments and an application to tipping in
the dynamics of racial segregation.",Oracle Estimation of a Change Point in High Dimensional Quantile Regression,2016-03-01 13:21:42,"Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin","http://dx.doi.org/10.1080/01621459.2017.1319840, http://arxiv.org/abs/1603.00235v2, http://arxiv.org/pdf/1603.00235v2",stat.ME
30237,em,"In a randomized control trial, the precision of an average treatment effect
estimator can be improved either by collecting data on additional individuals,
or by collecting additional covariates that predict the outcome variable. We
propose the use of pre-experimental data such as a census, or a household
survey, to inform the choice of both the sample size and the covariates to be
collected. Our procedure seeks to minimize the resulting average treatment
effect estimator's mean squared error, subject to the researcher's budget
constraint. We rely on a modification of an orthogonal greedy algorithm that is
conceptually simple and easy to implement in the presence of a large number of
potential covariates, and does not require any tuning parameters. In two
empirical applications, we show that our procedure can lead to substantial
gains of up to 58%, measured either in terms of reductions in data collection
costs or in terms of improvements in the precision of the treatment effect
estimator.",Optimal Data Collection for Randomized Control Trials,2016-03-11 18:06:03,"Pedro Carneiro, Sokbae Lee, Daniel Wilhelm","http://arxiv.org/abs/1603.03675v4, http://arxiv.org/pdf/1603.03675v4",stat.ME
30238,em,"We study factor models augmented by observed covariates that have explanatory
powers on the unknown factors. In financial factor models, the unknown factors
can be reasonably well explained by a few observable proxies, such as the
Fama-French factors. In diffusion index forecasts, identified factors are
strongly related to several directly measurable economic variables such as
consumption-wealth variable, financial ratios, and term spread. With those
covariates, both the factors and loadings are identifiable up to a rotation
matrix even with only a finite dimension. To incorporate the explanatory power
of these covariates, we propose a smoothed principal component analysis (PCA):
(i) regress the data onto the observed covariates, and (ii) take the principal
components of the fitted data to estimate the loadings and factors. This allows
us to accurately estimate the percentage of both explained and unexplained
components in factors and thus to assess the explanatory power of covariates.
We show that both the factors and loadings can be estimated with
improved rates of convergence compared to the benchmark method. The degree of
improvement depends on the strength of the signals, representing the
explanatory power of the covariates on the factors. The proposed estimator is
robust to possibly heavy-tailed distributions. We apply the model to forecast
US bond risk premia, and find that the observed macroeconomic characteristics
contain strong explanatory power for the factors. The forecasting gain is more
substantial when the characteristics are used to estimate the common factors
than when they are used directly for forecasts.",Augmented Factor Models with Applications to Validating Market Risk Factors and Forecasting Bond Risk Premia,2016-03-23 03:38:24,"Jianqing Fan, Yuan Ke, Yuan Liao","http://arxiv.org/abs/1603.07041v2, http://arxiv.org/pdf/1603.07041v2",stat.ME
30239,em,"The random coefficients model is an extension of the linear regression model
that allows for unobserved heterogeneity in the population by modeling the
regression coefficients as random variables. Given data from this model, the
statistical challenge is to recover information about the joint density of the
random coefficients which is a multivariate and ill-posed problem. Because of
the curse of dimensionality and the ill-posedness, pointwise nonparametric
estimation of the joint density is difficult and suffers from slow convergence
rates. Larger features, such as an increase of the density along some direction
or a well-accentuated mode, can, however, be detected from data much more easily by
means of statistical tests. In this article, we follow this strategy and
construct tests and confidence statements for qualitative features of the joint
density, such as increases, decreases and modes. We propose a multiple testing
approach based on aggregating single tests which are designed to extract shape
information on fixed scales and directions. Using recent tools for Gaussian
approximations of multivariate empirical processes, we derive expressions for
the critical value. We apply our method to simulated and real data.",Tests for qualitative features in the random coefficients model,2017-04-04 18:33:19,"Fabian Dunker, Konstantin Eckle, Katharina Proksch, Johannes Schmidt-Hieber","http://arxiv.org/abs/1704.01066v3, http://arxiv.org/pdf/1704.01066v3",stat.ME
30240,em,"Triangular systems with nonadditively separable unobserved heterogeneity
provide a theoretically appealing framework for the modelling of complex
structural relationships. However, they are not commonly used in practice due
to the need for exogenous variables with large support for identification, the
curse of dimensionality in estimation, and the lack of inferential tools. This
paper introduces two classes of semiparametric nonseparable triangular models
that address these limitations. They are based on distribution and quantile
regression modelling of the reduced form conditional distributions of the
endogenous variables. We show that average, distribution and quantile
structural functions are identified in these systems through a control function
approach that does not require a large support condition. We propose a
computationally attractive three-stage procedure to estimate the structural
functions where the first two stages consist of quantile or distribution
regressions. We provide asymptotic theory and uniform inference methods for
each stage. In particular, we derive functional central limit theorems and
bootstrap functional central limit theorems for the distribution regression
estimators of the structural functions. These results establish the validity of
the bootstrap for three-stage estimators of structural functions, and lead to
simple inference algorithms. We illustrate the implementation and applicability
of all our methods with numerical simulations and an empirical application to
demand analysis.",Semiparametric Estimation of Structural Functions in Nonseparable Triangular Models,2017-11-07 00:37:43,"Victor Chernozhukov, Iván Fernández-Val, Whitney Newey, Sami Stouli, Francis Vella","http://arxiv.org/abs/1711.02184v3, http://arxiv.org/pdf/1711.02184v3",econ.EM
30241,em,"In this paper we extend the work by Ryuzo Sato devoted to the development of
economic growth models within the framework of the Lie group theory. We propose
a new growth model based on the assumption of logistic growth in factors. It is
employed to derive new production functions and introduce a new notion of wage
share. In the process it is shown that the new functions compare reasonably
well against relevant economic data. The corresponding problem of maximization
of profit under conditions of perfect competition is solved with the aid of one
of these functions. In addition, it is explained in reasonably rigorous
mathematical terms why Bowley's law no longer holds true in post-1960 data.",In search of a new economic model determined by logistic growth,2017-11-07 20:44:12,"Roman G. Smirnov, Kunpeng Wang","http://arxiv.org/abs/1711.02625v5, http://arxiv.org/pdf/1711.02625v5",math.GR
30242,em,"We consider continuous-time models with a large panel of moment conditions,
where the structural parameter depends on a set of characteristics, whose
effects are of interest. The leading example is the linear factor model in
financial economics where factor betas depend on observed characteristics such
as firm specific instruments and macroeconomic variables, and their effects
pick up long-run time-varying beta fluctuations. We specify the factor betas as
the sum of characteristic effects and an orthogonal idiosyncratic parameter
that captures high-frequency movements. It is often the case that researchers
do not know whether or not the latter exists, or its strengths, and thus the
inference about the characteristic effects should be valid uniformly over a
broad class of data generating processes for idiosyncratic parameters. We
construct our estimation and inference in a two-step continuous-time GMM
framework. It is found that the limiting distribution of the estimated
characteristic effects has a discontinuity when the variance of the
idiosyncratic parameter is near the boundary (zero), which makes the usual
""plug-in"" method using the estimated asymptotic variance only valid pointwise
and may produce either over- or under-coverage. We show that
the uniformity can be achieved by cross-sectional bootstrap. Our procedure
allows both known and estimated factors, and also features a bias correction
for the effect of estimating unknown factors.",Uniform Inference for Characteristic Effects of Large Continuous-Time Linear Models,2017-11-13 05:25:33,"Yuan Liao, Xiye Yang","http://arxiv.org/abs/1711.04392v2, http://arxiv.org/pdf/1711.04392v2",econ.EM
30243,em,"Given additional distributional information in the form of moment
restrictions, kernel density and distribution function estimators with implied
generalised empirical likelihood probabilities as weights achieve a reduction
in variance due to the systematic use of this extra information. The particular
interest here is the estimation of densities or distributions of (generalised)
residuals in semi-parametric models defined by a finite number of moment
restrictions. Such estimates are of great practical interest, being potentially
of use for diagnostic purposes, including tests of parametric assumptions on an
error distribution, goodness-of-fit tests or tests of overidentifying moment
restrictions. The paper gives conditions for consistency and describes the
asymptotic mean squared error properties of the kernel density and distribution
estimators proposed in the paper. A simulation study evaluates the small sample
performance of these estimators. Supplements provide analytic examples to
illustrate situations where kernel weighting provides a reduction in variance
together with proofs of the results in the paper.",Improved Density and Distribution Function Estimation,2017-11-13 22:02:26,"Vitaliy Oryshchenko, Richard J. Smith","http://dx.doi.org/10.1214/19-EJS1619, http://arxiv.org/abs/1711.04793v2, http://arxiv.org/pdf/1711.04793v2",stat.ME
30244,em,"Multivalued treatment models have typically been studied under restrictive
assumptions: ordered choice, and more recently unordered monotonicity. We show
how treatment effects can be identified in a more general class of models that
allows for multidimensional unobserved heterogeneity. Our results rely on two
main assumptions: treatment assignment must be a measurable function of
threshold-crossing rules, and enough continuous instruments must be available.
We illustrate our approach for several classes of models.",Identifying Effects of Multivalued Treatments,2018-04-30 21:43:25,"Sokbae Lee, Bernard Salanié","http://arxiv.org/abs/1805.00057v1, http://arxiv.org/pdf/1805.00057v1",econ.EM
30245,em,"This chapter covers methodological issues related to estimation, testing and
computation for models involving structural changes. Our aim is to review
developments as they relate to econometric applications based on linear models.
Substantial advances have been made to cover models at a level of generality
that allow a host of interesting practical applications. These include models
with general stationary regressors and errors that can exhibit temporal
dependence and heteroskedasticity, models with trending variables and possible
unit roots and cointegrated models, among others. Advances have been made
pertaining to computational aspects of constructing estimates, their limit
distributions, tests for structural changes, and methods to determine the
number of changes present. A variety of topics are covered. The first part
summarizes and updates developments described in an earlier review, Perron
(2006), with the exposition following heavily that of Perron (2008). Additions
are included for recent developments: testing for common breaks, models with
endogenous regressors (emphasizing that simply using least-squares is
preferable over instrumental variables methods), quantile regressions, methods
based on Lasso, panel data models, testing for changes in forecast accuracy,
factors models and methods of inference based on a continuous records
asymptotic framework. Our focus is on the so-called off-line methods whereby
one wants to retrospectively test for breaks in a given sample of data and form
confidence intervals about the break dates. The aim is to provide the readers
with an overview of methods that are of direct usefulness in practice as
opposed to issues that are mostly of theoretical interest.",Structural Breaks in Time Series,2018-05-10 07:18:10,"Alessandro Casini, Pierre Perron","http://arxiv.org/abs/1805.03807v1, http://arxiv.org/pdf/1805.03807v1",econ.EM
30246,em,"In the following paper, we use a topic modeling algorithm and sentiment
scoring methods to construct a novel metric that serves as a leading indicator
in recession prediction models. We hypothesize that the inclusion of such a
sentiment indicator, derived purely from unstructured news data, will improve
our capabilities to forecast future recessions because it provides a direct
measure of the polarity of the information consumers and producers are exposed
to. We go on to show that including our proposed news sentiment indicator,
alongside traditional sentiment data such as the Michigan Index of Consumer
Sentiment and the Purchasing Manager's Index and common factors derived from a
large panel of economic and financial indicators, improves
model performance significantly.",News Sentiment as Leading Indicators for Recessions,2018-05-10 23:21:28,"Melody Y. Huang, Randall R. Rojas, Patrick D. Convery","http://arxiv.org/abs/1805.04160v2, http://arxiv.org/pdf/1805.04160v2",stat.AP
30247,em,"The papers~\cite{hatfimmokomi11} and~\cite{azizbrilharr13} propose algorithms
for testing whether the choice function induced by a (strict) preference list
of length $N$ over a universe $U$ is substitutable. The running time of these
algorithms is $O(|U|^3\cdot N^3)$, respectively $O(|U|^2\cdot N^3)$. In this
note we present an algorithm with running time $O(|U|^2\cdot N^2)$. Note that
$N$ may be exponential in the size $|U|$ of the universe.",On testing substitutability,2018-05-19 22:09:27,"Cosmina Croitoru, Kurt Mehlhorn","http://arxiv.org/abs/1805.07642v1, http://arxiv.org/pdf/1805.07642v1",cs.DS
30248,em,"The synthetic control method (SCM) is a popular approach for estimating the
impact of a treatment on a single unit in panel data settings. The ""synthetic
control"" is a weighted average of control units that balances the treated
unit's pre-treatment outcomes as closely as possible. A critical feature of the
original proposal is to use SCM only when the fit on pre-treatment outcomes is
excellent. We propose Augmented SCM as an extension of SCM to settings where
such pre-treatment fit is infeasible. Analogous to bias correction for inexact
matching, Augmented SCM uses an outcome model to estimate the bias due to
imperfect pre-treatment fit and then de-biases the original SCM estimate. Our
main proposal, which uses ridge regression as the outcome model, directly
controls pre-treatment fit while minimizing extrapolation from the convex hull.
This estimator can also be expressed as a solution to a modified synthetic
controls problem that allows negative weights on some donor units. We bound the
estimation error of this approach under different data generating processes,
including a linear factor model, and show how regularization helps to avoid
over-fitting to noise. We demonstrate gains from Augmented SCM with extensive
simulation studies and apply this framework to estimate the impact of the 2012
Kansas tax cuts on economic growth. We implement the proposed method in the new
augsynth R package.",The Augmented Synthetic Control Method,2018-11-10 04:18:52,"Eli Ben-Michael, Avi Feller, Jesse Rothstein","http://arxiv.org/abs/1811.04170v3, http://arxiv.org/pdf/1811.04170v3",stat.ME
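As a rough illustration of the ridge-augmented idea in the abstract above, the sketch below computes simplex-constrained synthetic-control weights, fits a ridge outcome model on the donor units, and corrects the SCM estimate for any remaining pre-treatment imbalance. It is a minimal sketch under these assumptions, not the augsynth package's implementation; all names are illustrative.

```python
# Minimal ridge-augmented synthetic control sketch (illustrative, not augsynth).
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge

def augmented_scm(X_donors, X_treated, y_donors_post, y_treated_post, alpha=1.0):
    """X_donors: (J, T0) donor pre-treatment outcomes, X_treated: (T0,) treated
    pre-treatment outcomes, y_donors_post: (J,) donor post-treatment outcomes,
    y_treated_post: scalar treated post-treatment outcome."""
    J = X_donors.shape[0]
    # SCM weights: minimize pre-treatment imbalance over the simplex.
    obj = lambda w: np.sum((X_treated - X_donors.T @ w) ** 2)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    w = minimize(obj, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                 constraints=cons, method="SLSQP").x
    # Ridge outcome model fitted across donors.
    model = Ridge(alpha=alpha).fit(X_donors, y_donors_post)
    # Bias-correct the SCM prediction for the residual pre-treatment imbalance.
    y_synth = w @ y_donors_post + (model.predict(X_treated[None, :])[0]
                                   - w @ model.predict(X_donors))
    return y_treated_post - y_synth   # treatment effect estimate
```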
30249,em,"We propose a two-stage least squares (2SLS) estimator whose first stage is
the equal-weighted average over a complete subset with $k$ instruments among
$K$ available, which we call the complete subset averaging (CSA) 2SLS. The
approximate mean squared error (MSE) is derived as a function of the subset
size $k$ by the Nagar (1959) expansion. The subset size is chosen by minimizing
the sample counterpart of the approximate MSE. We show that this method
achieves the asymptotic optimality among the class of estimators with different
subset sizes. To deal with averaging over a growing set of irrelevant
instruments, we generalize the approximate MSE to find that the optimal $k$ is
larger than otherwise. An extensive simulation experiment shows that the
CSA-2SLS estimator outperforms the alternative estimators when instruments are
correlated. As an empirical illustration, we estimate the logistic demand
function in Berry, Levinsohn, and Pakes (1995) and find the CSA-2SLS estimate
is better supported by economic theory than the alternative estimates.",Complete Subset Averaging with Many Instruments,2018-11-20 08:46:37,"Seojeong Lee, Youngki Shin","http://dx.doi.org/10.1093/ectj/utaa033, http://arxiv.org/abs/1811.08083v6, http://arxiv.org/pdf/1811.08083v6",econ.EM
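The sketch below illustrates the complete-subset-averaging first stage described in the abstract above for a single endogenous regressor and a user-chosen subset size k; the paper's Nagar-expansion-based choice of k is omitted, and the names are illustrative rather than taken from the authors' code.

```python
# Minimal CSA-2SLS sketch: equal-weighted average of first-stage fits over all
# instrument subsets of size k, then a second-stage regression on the average.
import numpy as np
from itertools import combinations

def csa_2sls(y, x, Z, k):
    """y: (n,) outcome, x: (n,) endogenous regressor, Z: (n, K) instruments.
    Note: the number of subsets grows combinatorially in K, so this brute-force
    loop is only practical for modest K."""
    n, K = Z.shape
    fitted = np.zeros(n)
    subsets = list(combinations(range(K), k))
    for s in subsets:
        Zs = np.column_stack([np.ones(n), Z[:, s]])
        coef, *_ = np.linalg.lstsq(Zs, x, rcond=None)
        fitted += Zs @ coef
    x_hat = fitted / len(subsets)              # equal-weighted averaged first stage
    X = np.column_stack([np.ones(n), x_hat])   # second stage on the averaged fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]
```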
30250,em,"Multidimensional heterogeneity and endogeneity are important features of a
wide class of econometric models. We consider heterogenous coefficients models
where the outcome is a linear combination of known functions of treatment and
heterogenous coefficients. We use control variables to obtain identification
results for average treatment effects. With discrete instruments in a
triangular model we find that average treatment effects cannot be identified
when the number of support points is less than or equal to the number of
coefficients. A sufficient condition for identification is that the second
moment matrix of the treatment functions given the control is nonsingular with
probability one. We relate this condition to identification of average
treatment effects with multiple treatments.","Heterogenous Coefficients, Discrete Instruments, and Identification of Treatment Effects",2018-11-24 17:08:46,"Whitney K. Newey, Sami Stouli","http://arxiv.org/abs/1811.09837v1, http://arxiv.org/pdf/1811.09837v1",econ.EM
30251,em,"Berkson errors are commonplace in empirical microeconomics. In consumer
demand this form of measurement error occurs when the price an individual pays
is measured by the (weighted) average price paid by individuals in a specified
group (e.g., a county), rather than the true transaction price. We show the
importance of such measurement errors for the estimation of demand in a setting
with nonseparable unobserved heterogeneity. We develop a consistent estimator
using external information on the true distribution of prices. Examining the
demand for gasoline in the U.S., we document substantial within-market price
variability, and show that there are significant spatial differences in the
magnitude of Berkson errors across regions of the U.S. Accounting for Berkson
errors is found to be quantitatively important for estimating price effects and
for welfare calculations. Imposing the Slutsky shape constraint greatly reduces
the sensitivity to Berkson errors.",Estimation of a Heterogeneous Demand Function with Berkson Errors,2018-11-27 00:11:10,"Richard Blundell, Joel Horowitz, Matthias Parey","http://arxiv.org/abs/1811.10690v2, http://arxiv.org/pdf/1811.10690v2",econ.EM
30252,em,"This paper introduces an intuitive and easy-to-implement nonparametric
density estimator based on local polynomial techniques. The estimator is fully
boundary adaptive and automatic, but does not require pre-binning or any other
transformation of the data. We study the main asymptotic properties of the
estimator, and use these results to provide principled estimation, inference,
and bandwidth selection methods. As a substantive application of our results,
we develop a novel discontinuity in density testing procedure, an important
problem in regression discontinuity designs and other program evaluation
settings. An illustrative empirical application is given. Two companion Stata
and R software packages are provided.",Simple Local Polynomial Density Estimators,2018-11-28 14:55:10,"Matias D. Cattaneo, Michael Jansson, Xinwei Ma","http://arxiv.org/abs/1811.11512v2, http://arxiv.org/pdf/1811.11512v2",econ.EM
30253,em,"We develop a distribution regression model under endogenous sample selection.
This model is a semi-parametric generalization of the Heckman selection model.
It accommodates much richer effects of the covariates on outcome distribution
and patterns of heterogeneity in the selection process, and allows for drastic
departures from the Gaussian error structure, while maintaining the same level
of tractability as the classical model. The model applies to continuous, discrete
and mixed outcomes. We provide identification, estimation, and inference
methods, and apply them to obtain wage decomposition for the UK. Here we
decompose the difference between the male and female wage distributions into
composition, wage structure, selection structure, and selection sorting
effects. After controlling for endogenous employment selection, we still find a
substantial gender wage gap -- ranging from 21% to 40% throughout the (latent)
offered wage distribution that is not explained by composition. We also uncover
positive sorting for single men and negative sorting for married women that
accounts for a substantive fraction of the gender wage gap at the top of the
distribution.","Distribution Regression with Sample Selection, with an Application to Wage Decompositions in the UK",2018-11-28 17:56:32,"Victor Chernozhukov, Iván Fernández-Val, Siyi Luo","http://arxiv.org/abs/1811.11603v6, http://arxiv.org/pdf/1811.11603v6",econ.EM
30254,em,"This paper investigates asset allocation problems when returns are
predictable. We introduce a market-timing Bayesian hierarchical (BH) approach
that adopts heterogeneous time-varying coefficients driven by lagged
fundamental characteristics. Our approach includes a joint estimation of
conditional expected returns and covariance matrix and considers estimation
risk for portfolio analysis. The hierarchical prior allows modeling different
assets separately while sharing information across assets. We demonstrate the
performance of our approach on the U.S. equity market. Though the Bayesian
forecast is slightly biased, our BH approach outperforms most alternative
methods in point and interval prediction. Applied to sector investment over the
past twenty years, our BH approach delivers average monthly returns of 0.92\%
and a significant Jensen's alpha of 0.32\%. We also find technology, energy, and manufacturing are
important sectors in the past decade, and size, investment, and short-term
reversal factors are heavily weighted. Finally, the stochastic discount factor
constructed by our BH approach explains most anomalies.",Factor Investing: A Bayesian Hierarchical Approach,2019-02-04 05:48:03,"Guanhao Feng, Jingyu He","http://arxiv.org/abs/1902.01015v3, http://arxiv.org/pdf/1902.01015v3",econ.EM
34592,th,"This paper develops a dynamic monetary model to study the (in)stability of
the fractional reserve banking system. The model shows that the fractional
reserve banking system can endanger stability in that equilibrium is more prone
to exhibit endogenous cyclic, chaotic, and stochastic dynamics under lower
reserve requirements, although it can increase consumption in the steady-state.
Introducing endogenous unsecured credit to the baseline model does not change
the main results. This paper also provides empirical evidence that is
consistent with the prediction of the model. The calibrated exercise suggests
that this channel could be another source of economic fluctuations.",On the Instability of Fractional Reserve Banking,2023-05-23 23:16:50,Heon Lee,"http://arxiv.org/abs/2305.14503v2, http://arxiv.org/pdf/2305.14503v2",econ.TH
30255,em,"We synthesize the knowledge present in various scientific disciplines for the
development of semiparametric endogenous truncation-proof algorithm, correcting
for truncation bias due to endogenous self-selection. This synthesis enriches
the algorithm's accuracy, efficiency and applicability. Improving upon the
covariate shift assumption, data are intrinsically affected and largely
generated by their own behavior (cognition). Refining the concept of Vox Populi
(Wisdom of Crowd) allows data points to sort themselves out depending on their
estimated latent reference group opinion space. Monte Carlo simulations, based
on 2,000,000 different distribution functions, practically generating 100
million realizations, attest to a very high accuracy of our model.",Semiparametric correction for endogenous truncation bias with Vox Populi based participation decision,2019-02-17 19:20:03,"Nir Billfeld, Moshe Kim","http://dx.doi.org/10.1109/ACCESS.2018.2888575, http://arxiv.org/abs/1902.06286v1, http://arxiv.org/pdf/1902.06286v1",econ.EM
30256,em,"We introduce the Stata package Binsreg, which implements the binscatter
methods developed in Cattaneo, Crump, Farrell and Feng (2023a,b). The package
includes seven commands: binsreg, binslogit, binsprobit, binsqreg, binstest,
binspwc, and binsregselect. The first four commands implement point estimation
and uncertainty quantification (confidence intervals and confidence bands) for
canonical and extended least squares binscatter regression (binsreg) as well as
generalized nonlinear binscatter regression (binslogit for Logit regression,
binsprobit for Probit regression, and binsqreg for quantile regression). These
commands also offer binned scatter plots, allowing for one- and multi-sample
settings. The next two commands focus on pointwise and uniform inference:
binstest implements hypothesis testing procedures for parametric specifications
and for nonparametric shape restrictions of the unknown regression function,
while binspwc implements multi-group pairwise statistical comparisons. These
two commands cover both least squares as well as generalized nonlinear
binscatter methods. All our methods allow for multi-sample analysis, which is
useful when studying treatment effect heterogeneity in randomized and
observational studies. Finally, the command binsregselect implements
data-driven number of bins selectors for binscatter methods using either
quantile-spaced or evenly-spaced binning/partitioning. All the commands allow
for covariate adjustment, smoothness restrictions, weighting and clustering,
among many other features. Companion Python and R packages with similar syntax
and capabilities are also available.",Binscatter Regressions,2019-02-25 23:59:23,"Matias D. Cattaneo, Richard K. Crump, Max H. Farrell, Yingjie Feng","http://arxiv.org/abs/1902.09615v4, http://arxiv.org/pdf/1902.09615v4",econ.EM
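The Stata, R, and Python packages referenced above provide the full functionality. The minimal NumPy sketch below only illustrates the quantile-spaced binning step underlying a basic binned scatter plot, without covariate adjustment, smoothness restrictions, or confidence bands; it is an assumption-laden illustration, not the Binsreg implementation.

```python
# Minimal quantile-spaced binned-scatter sketch (no covariate adjustment).
import numpy as np

def binned_scatter(x, y, n_bins=20):
    """Returns within-bin means of x and y using quantile-spaced bins.
    Assumes enough distinct x values that every bin is non-empty."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))[1:-1]  # interior edges
    which = np.digitize(x, edges)                  # bin index in 0..n_bins-1
    bin_x = np.array([x[which == b].mean() for b in range(n_bins)])
    bin_y = np.array([y[which == b].mean() for b in range(n_bins)])
    return bin_x, bin_y
```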
30257,em,"We propose a semiparametric two-stage least square estimator for the
heterogeneous treatment effects (HTE). The HTE is the solution to a certain
integral equation belonging to the class of Fredholm integral equations of the
first kind, which is known to be an ill-posed problem. Naive semi/nonparametric
methods do not provide a stable solution to such problems. We therefore propose
to approximate the function of interest by an orthogonal series under a
constraint that makes the inverse mapping of the integral operator continuous
and eliminates the ill-posedness. We illustrate the performance of the proposed estimator through
simulation experiments.",Semiparametric estimation of heterogeneous treatment effects under the nonignorable assignment condition,2019-02-26 17:52:26,"Keisuke Takahata, Takahiro Hoshino","http://arxiv.org/abs/1902.09978v1, http://arxiv.org/pdf/1902.09978v1",econ.EM
30258,em,"We develop an LM test for Granger causality in high-dimensional VAR models
based on penalized least squares estimations. To obtain a test retaining the
appropriate size after the variable selection done by the lasso, we propose a
post-double-selection procedure to partial out effects of nuisance variables
and establish its uniform asymptotic validity. We conduct an extensive set of
Monte-Carlo simulations that show our tests perform well under different data
generating processes, even without sparsity. We apply our testing procedure to
find networks of volatility spillovers and we find evidence that causal
relationships become clearer in high-dimensional compared to standard
low-dimensional VARs.",Granger Causality Testing in High-Dimensional VARs: a Post-Double-Selection Procedure,2019-02-28 13:21:27,"Alain Hecq, Luca Margaritella, Stephan Smeekes","http://arxiv.org/abs/1902.10991v4, http://arxiv.org/pdf/1902.10991v4",econ.EM
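A simplified sketch of the post-double-selection idea described above: lasso-select controls for the outcome equation and for each lag under test, take the union of selected controls, and test the lags of interest by OLS. This sketch uses sklearn's LassoCV and a homoskedastic Wald statistic as stand-ins; the paper's LM statistic, tuning, and time-series details are not implemented, and all names are illustrative.

```python
# Post-double-selection Granger-causality sketch (illustrative assumptions only).
import numpy as np
from sklearn.linear_model import LassoCV
from scipy import stats

def pds_granger_test(y, X_test, W):
    """y: (n,) target, X_test: (n, p1) lags under test, W: (n, p2) nuisance lags."""
    sel = set(np.flatnonzero(LassoCV(cv=5).fit(W, y).coef_))       # outcome lasso
    for j in range(X_test.shape[1]):                               # treatment lassos
        sel |= set(np.flatnonzero(LassoCV(cv=5).fit(W, X_test[:, j]).coef_))
    Wsel = W[:, sorted(sel)]
    X = np.column_stack([np.ones(len(y)), X_test, Wsel])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    V = sigma2 * np.linalg.inv(X.T @ X)                            # homoskedastic vcov
    idx = np.arange(1, 1 + X_test.shape[1])                        # coefficients tested
    wald = beta[idx] @ np.linalg.solve(V[np.ix_(idx, idx)], beta[idx])
    return stats.chi2.sf(wald, df=len(idx))                        # p-value
```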
30259,em,"The Hodrick-Prescott (HP) filter is one of the most widely used econometric
methods in applied macroeconomic research. Like all nonparametric methods, the
HP filter depends critically on a tuning parameter that controls the degree of
smoothing. Yet in contrast to modern nonparametric methods and applied work
with these procedures, empirical practice with the HP filter almost universally
relies on standard settings for the tuning parameter that have been suggested
largely by experimentation with macroeconomic data and heuristic reasoning. As
recent research (Phillips and Jin, 2015) has shown, standard settings may not
be adequate in removing trends, particularly stochastic trends, in economic
data.
  This paper proposes an easy-to-implement practical procedure of iterating the
HP smoother that is intended to make the filter a smarter smoothing device for
trend estimation and trend elimination. We call this iterated HP technique the
boosted HP filter in view of its connection to $L_{2}$-boosting in machine
learning. The paper develops limit theory to show that the boosted HP (bHP)
filter asymptotically recovers trend mechanisms that involve unit root
processes, deterministic polynomial drifts, and polynomial drifts with
structural breaks. A stopping criterion is used to automate the iterative HP
algorithm, making it a data-determined method that is ready for modern
data-rich environments in economic research. The methodology is illustrated
using three real data examples that highlight the differences between simple HP
filtering, the data-determined boosted filter, and an alternative
autoregressive approach. These examples show that the bHP filter is helpful in
analyzing a large collection of heterogeneous macroeconomic time series that
manifest various degrees of persistence, trend behavior, and volatility.",Boosting: Why You Can Use the HP Filter,2019-05-01 06:53:22,"Peter C. B. Phillips, Zhentao Shi","http://arxiv.org/abs/1905.00175v3, http://arxiv.org/pdf/1905.00175v3",econ.EM
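The boosted HP filter iterates the HP smoother on the remaining cycle and accumulates the extracted trend. The sketch below shows the iteration using statsmodels' HP filter; the paper's data-driven stopping criterion is replaced by a fixed iteration count, which is an assumption of this illustration.

```python
# Minimal boosted (iterated) HP filter sketch with a fixed number of iterations.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def boosted_hp(y, lamb=1600.0, n_iter=5):
    y = np.asarray(y, dtype=float)
    cycle = y.copy()
    trend = np.zeros_like(y)
    for _ in range(n_iter):
        c, t = hpfilter(cycle, lamb=lamb)   # re-smooth the current cycle component
        trend += t                          # accumulate the extracted trend
        cycle = c
    return cycle, trend
```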
30371,em,"Multidimensional heterogeneity and endogeneity are important features of
models with multiple treatments. We consider a heterogeneous coefficients model
where the outcome is a linear combination of dummy treatment variables, with
each variable representing a different kind of treatment. We use control
variables to give necessary and sufficient conditions for identification of
average treatment effects. With mutually exclusive treatments we find that,
provided the heterogeneous coefficients are mean independent from treatments
given the controls, a simple identification condition is that the generalized
propensity scores (Imbens, 2000) be bounded away from zero and that their sum
be bounded away from one, with probability one. Our analysis extends to
distributional and quantile treatment effects, as well as corresponding
treatment effects on the treated. These results generalize the classical
identification result of Rosenbaum and Rubin (1983) for binary treatments.","Heterogeneous Coefficients, Control Variables, and Identification of Multiple Treatment Effects",2020-09-04 20:37:47,"Whitney K. Newey, Sami Stouli","http://arxiv.org/abs/2009.02314v3, http://arxiv.org/pdf/2009.02314v3",econ.EM
30260,em,"Variational Bayes (VB), a method originating from machine learning, enables
fast and scalable estimation of complex probabilistic models. Thus far,
applications of VB in discrete choice analysis have been limited to mixed logit
models with unobserved inter-individual taste heterogeneity. However, such a
model formulation may be too restrictive in panel data settings, since tastes
may vary both between individuals as well as across choice tasks encountered by
the same individual. In this paper, we derive a VB method for posterior
inference in mixed logit models with unobserved inter- and intra-individual
heterogeneity. In a simulation study, we benchmark the performance of the
proposed VB method against maximum simulated likelihood (MSL) and Markov chain
Monte Carlo (MCMC) methods in terms of parameter recovery, predictive accuracy
and computational efficiency. The simulation study shows that VB can be a fast,
scalable and accurate alternative to MSL and MCMC estimation, especially in
applications in which fast predictions are paramount. VB is observed to be
between 2.8 and 17.7 times faster than the two competing methods, while
affording comparable or superior accuracy. Besides, the simulation study
demonstrates that a parallelised implementation of the MSL estimator with
analytical gradients is a viable alternative to MCMC in terms of both
estimation accuracy and computational efficiency, as the MSL estimator is
observed to be between 0.9 and 2.1 times faster than MCMC.",Variational Bayesian Inference for Mixed Logit Models with Unobserved Inter- and Intra-Individual Heterogeneity,2019-05-01 14:58:13,"Rico Krueger, Prateek Bansal, Michel Bierlaire, Ricardo A. Daziano, Taha H. Rashidi","http://arxiv.org/abs/1905.00419v3, http://arxiv.org/pdf/1905.00419v3",stat.ME
30261,em,"This paper considers an augmented double autoregressive (DAR) model, which
allows null volatility coefficients to circumvent the over-parameterization
problem in the DAR model. Since the volatility coefficients might be on the
boundary, the statistical inference methods based on the Gaussian quasi-maximum
likelihood estimation (GQMLE) become non-standard, and their asymptotics
require the data to have a finite sixth moment, which narrows the applicable
scope for studying heavy-tailed data. To overcome this deficiency, this paper develops
a systematic statistical inference procedure based on the self-weighted GQMLE
for the augmented DAR model. Except for the Lagrange multiplier test statistic,
the Wald, quasi-likelihood ratio and portmanteau test statistics are all shown
to have non-standard asymptotics. The entire procedure is valid as long as the
data is stationary, and its usefulness is illustrated by simulation studies and
one real example.",Non-standard inference for augmented double autoregressive models with null volatility coefficients,2019-05-06 05:39:20,"Feiyu Jiang, Dong Li, Ke Zhu","http://arxiv.org/abs/1905.01798v1, http://arxiv.org/pdf/1905.01798v1",econ.EM
30262,em,"Timing decisions are common: when to file your taxes, finish a referee
report, or complete a task at work. We ask whether time preferences can be
inferred when \textsl{only} task completion is observed. To answer this
question, we analyze the following model: each period a decision maker faces
the choice whether to complete the task today or to postpone it to later. Cost
and benefits of task completion cannot be directly observed by the analyst, but
the analyst knows that net benefits are drawn independently between periods
from a time-invariant distribution and that the agent has time-separable
utility. Furthermore, we suppose the analyst can observe the agent's exact
stopping probability. We establish that for any agent with quasi-hyperbolic
$\beta,\delta$-preferences and given level of partial naivete $\hat{\beta}$,
the probability of completing the task conditional on not having done it
earlier increases towards the deadline. And conversely, for any given
preference parameters $\beta,\delta$ and (weakly increasing) profile of task
completion probability, there exists a stationary payoff distribution that
rationalizes her behavior as long as the agent is either sophisticated or fully
naive. An immediate corollary is that, without parametric assumptions, it is
impossible to rule out time-consistency even when imposing an a priori
assumption on the permissible long-run discount factor. We also provide an
exact partial identification result when the analyst can, in addition to the
stopping probability, observe the agent's continuation value.",Identifying Present-Bias from the Timing of Choices,2019-05-10 09:15:35,"Paul Heidhues, Philipp Strack","http://arxiv.org/abs/1905.03959v1, http://arxiv.org/pdf/1905.03959v1",econ.TH
30263,em,"We use supervised learning to identify factors that predict the cross-section
of returns and maximum drawdown for stocks in the US equity market. Our data
run from January 1970 to December 2019 and our analysis includes ordinary least
squares, penalized linear regressions, tree-based models, and neural networks.
We find that the most important predictors tended to be consistent across
models, and that non-linear models had better predictive power than linear
models. Predictive power was higher in calm periods than in stressed periods.
Environmental, social, and governance indicators marginally impacted the
predictive power of non-linear models in our data, despite their negative
correlation with maximum drawdown and positive correlation with returns. Upon
exploring whether ESG variables are captured by some models, we find that ESG
data contribute to the prediction nonetheless.",Sustainable Investing and the Cross-Section of Returns and Maximum Drawdown,2019-05-13 21:47:38,"Lisa R. Goldberg, Saad Mouti","http://dx.doi.org/10.1016/j.jfds.2022.11.002, http://arxiv.org/abs/1905.05237v2, http://arxiv.org/pdf/1905.05237v2",q-fin.ST
30264,em,"When evaluating the impact of a policy on a metric of interest, it may not be
possible to conduct a randomized control trial. In settings where only
observational data is available, Synthetic Control (SC) methods provide a
popular data-driven approach to estimate a ""synthetic"" control by combining
measurements of ""similar"" units (donors). Recently, Robust SC (RSC) was
proposed as a generalization of SC to overcome the challenges of missing data
and high levels of noise, while removing the reliance on domain knowledge for
selecting donors. However, SC, RSC, and their variants, suffer from poor
estimation when the pre-intervention period is too short. As the main
contribution, we propose a generalization of unidimensional RSC to
multi-dimensional RSC, mRSC. Our proposed mechanism incorporates multiple
metrics to estimate a synthetic control, thus overcoming the challenge of poor
inference from limited pre-intervention data. We show that the mRSC algorithm
with $K$ metrics leads to a consistent estimator of the synthetic control for
the target unit under any metric. Our finite-sample analysis suggests that the
prediction error decays to zero at a rate faster than the RSC algorithm by a
factor of $K$ and $\sqrt{K}$ for the training and testing periods (pre- and
post-intervention), respectively. Additionally, we provide a diagnostic test
that evaluates the utility of including additional metrics. Moreover, we
introduce a mechanism to validate the performance of mRSC: time series
prediction. That is, we propose a method to predict the future evolution of a
time series based on limited data when the notion of time is relative and not
absolute, i.e., we have access to a donor pool that has undergone the desired
future evolution. Finally, we conduct experimentation to establish the efficacy
of mRSC on synthetic data and two real-world case studies (retail and Cricket).",mRSC: Multi-dimensional Robust Synthetic Control,2019-05-15 22:20:56,"Muhummad Amjad, Vishal Misra, Devavrat Shah, Dennis Shen","http://arxiv.org/abs/1905.06400v3, http://arxiv.org/pdf/1905.06400v3",stat.ME
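A rough sketch of the robust synthetic control pipeline that mRSC generalizes: denoise the donor matrix by hard-thresholding its singular values, learn linear weights on the pre-intervention window, and project the post-intervention path. In the multi-metric case the donor matrices for the different metrics are stacked before denoising. This is an assumption-heavy illustration, not the authors' algorithm or code.

```python
# Rough robust-synthetic-control sketch (illustrative rank truncation + regression).
import numpy as np

def robust_sc(donors, treated_pre, rank, T0):
    """donors: (J, T) donor outcomes (metrics stacked for the multi-metric case),
    treated_pre: (T0,) treated unit's pre-intervention series."""
    U, s, Vt = np.linalg.svd(donors, full_matrices=False)
    s[rank:] = 0.0                                    # hard singular-value threshold
    M = U @ np.diag(s) @ Vt                           # denoised donor matrix
    w, *_ = np.linalg.lstsq(M[:, :T0].T, treated_pre, rcond=None)
    return M[:, T0:].T @ w                            # synthetic post-intervention path
```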
30265,em,"This paper describes three methods for carrying out non-asymptotic inference
on partially identified parameters that are solutions to a class of
optimization problems. Applications in which the optimization problems arise
include estimation under shape restrictions, estimation of models of discrete
games, and estimation based on grouped data. The partially identified
parameters are characterized by restrictions that involve the unknown
population means of observed random variables in addition to structural
parameters. Inference consists of finding confidence intervals for functions of
the structural parameters. Our theory provides finite-sample lower bounds on
the coverage probabilities of the confidence intervals under three sets of
assumptions of increasing strength. With the moderate sample sizes found in
most economics applications, the bounds become tighter as the assumptions
strengthen. We discuss estimation of population parameters that the bounds
depend on and contrast our methods with alternative methods for obtaining
confidence intervals for partially identified parameters. The results of Monte
Carlo experiments and empirical examples illustrate the usefulness of our
method.",Inference in a class of optimization problems: Confidence regions and finite sample bounds on errors in coverage probabilities,2019-05-16 04:40:51,"Joel L. Horowitz, Sokbae Lee","http://arxiv.org/abs/1905.06491v6, http://arxiv.org/pdf/1905.06491v6",stat.ME
30266,em,"In this paper, we consider a framework adapting the notion of cointegration
when two asset prices are generated by a driftless It\^{o}-semimartingale
featuring jumps with infinite activity, observed regularly and synchronously at
high frequency. We develop a regression based estimation of the cointegrated
relations method and show the related consistency and central limit theory when
there is cointegration within that framework. We also provide a Dickey-Fuller
type residual based test for the null of no cointegration against the
alternative of cointegration, along with its limit theory. Under no
cointegration, the asymptotic limit is the same as that of the original
Dickey-Fuller residual based test, so that critical values can be easily
tabulated in the same way. Finite-sample simulations indicate adequate size and
good power properties in a variety of realistic configurations, outperforming
the original Dickey-Fuller and Phillips-Perron type residual based tests, whose
sizes are distorted by non-ergodic time-varying variance and whose power is
altered by price jumps. Two empirical examples consolidate the Monte-Carlo
evidence that the adapted tests can reject while the original tests do not, and
vice versa.",Cointegration in high frequency data,2019-05-17 04:34:40,"Simon Clinet, Yoann Potiron","http://arxiv.org/abs/1905.07081v2, http://arxiv.org/pdf/1905.07081v2",q-fin.ST
30267,em,"We consider a nonparametric instrumental regression model with continuous
endogenous regressor where instruments are fully independent of the error term.
This assumption allows us to extend the reach of this model to cases where the
instrumental variable is discrete, and therefore to substantially enlarge its
potential empirical applications. Under our assumptions, the regression
function becomes solution to a nonlinear integral equation. We contribute to
existing literature by providing an exhaustive analysis of identification and a
simple iterative estimation procedure. Details on the implementation and on the
asymptotic properties of this estimation algorithm are given. We conclude the
paper with a simulation experiment for a binary instrument and an empirical
application to the estimation of the Engel curve for food, where we show that
our estimator delivers results that are consistent with existing evidence under
several discretizations of the instrumental variable.",Nonparametric Instrumental Regressions with (Potentially Discrete) Instruments Independent of the Error Term,2019-05-20 00:10:22,"Samuele Centorrino, Frédérique Fève, Jean-Pierre Florens","http://arxiv.org/abs/1905.07812v1, http://arxiv.org/pdf/1905.07812v1",econ.EM
30268,em,"We propose to smooth the entire objective function, rather than only the
check function, in a linear quantile regression context. Not only does the
resulting smoothed quantile regression estimator yield a lower mean squared
error and a more accurate Bahadur-Kiefer representation than the standard
estimator, but it is also asymptotically differentiable. We exploit the latter
to propose a quantile density estimator that does not suffer from the curse of
dimensionality. This means estimating the conditional density function without
worrying about the dimension of the covariate vector. It also allows for
two-stage efficient quantile regression estimation. Our asymptotic theory holds
uniformly with respect to the bandwidth and quantile level. Finally, we propose
a rule of thumb for choosing the smoothing bandwidth that should approximate
well the optimal bandwidth. Simulations confirm that our smoothed quantile
regression estimator indeed performs very well in finite samples.",Smoothing quantile regressions,2019-05-21 13:36:08,"Marcelo Fernandes, Emmanuel Guerre, Eduardo Horta","http://arxiv.org/abs/1905.08535v3, http://arxiv.org/pdf/1905.08535v3",econ.EM
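The sketch below illustrates the idea of smoothing the entire quantile-regression objective: with a Gaussian kernel, the convolution-smoothed check loss has a closed form, which a generic optimizer can minimize. The bandwidth rule of thumb from the paper is not implemented here; h is user-supplied, and the helper name is illustrative.

```python
# Minimal convolution-smoothed quantile regression sketch (Gaussian kernel).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def smoothed_qr(X, y, tau=0.5, h=0.1):
    """Minimize sum_i rho_{tau,h}(y_i - x_i'beta), where rho_{tau,h} is the
    check loss convoluted with a Gaussian kernel of bandwidth h."""
    X = np.column_stack([np.ones(len(y)), X])
    def loss(beta):
        u = y - X @ beta
        # Closed form of E[rho_tau(u - h Z)] for standard normal Z.
        return np.sum(u * (tau - 1.0 + norm.cdf(u / h)) + h * norm.pdf(u / h))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
    return minimize(loss, beta0, method="BFGS").x
```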
30269,em,"This paper analyzes the diagnostic of near multicollinearity in a multiple
linear regression from auxiliary centered regressions (with intercept) and
non-centered (without intercept). From these auxiliary regression, the centered
and non-centered Variance Inflation Factors are calculated, respectively. It is
also presented an expression that relate both of them.",Centered and non-centered variance inflation factor,2019-05-29 12:49:00,"Román Salmerón Gómez, Catalina García García y José García Pérez","http://arxiv.org/abs/1905.12293v1, http://arxiv.org/pdf/1905.12293v1",econ.EM
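A small sketch contrasting the two diagnostics discussed above: the centered VIF uses an auxiliary regression with intercept and a centered R-squared, while the non-centered VIF omits the intercept and uses an uncentered R-squared. The helper below is a hypothetical illustration, not the authors' code.

```python
# Centered vs. non-centered variance inflation factor from auxiliary regressions.
import numpy as np

def vif(X, j, centered=True):
    """VIF of column j of X from regressing X[:, j] on the remaining columns."""
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    if centered:
        Z = np.column_stack([np.ones(len(y)), Z])   # auxiliary regression with intercept
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    tss = np.sum((y - y.mean()) ** 2) if centered else np.sum(y ** 2)
    r2 = 1.0 - resid @ resid / tss                  # centered or uncentered R-squared
    return 1.0 / (1.0 - r2)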
30270,em,"This paper describes the results of research project on optimal pricing for
LLC ""Perm Local Rail Company"". In this study we propose a regression tree based
approach for estimation of demand function for local rail tickets considering
high degree of demand heterogeneity by various trip directions and the goals of
travel. Employing detailed data on ticket sales for 5 years we estimate the
parameters of demand function and reveal the significant variation in price
elasticity of demand. While in average the demand is elastic by price, near a
quarter of trips is characterized by weakly elastic demand. Lower elasticity of
demand is correlated with lower degree of competition with other transport and
inflexible frequency of travel.",Heterogeneity in demand and optimal price conditioning for local rail transport,2019-05-30 08:32:20,"Evgeniy M. Ozhegov, Alina Ozhegova","http://arxiv.org/abs/1905.12859v1, http://arxiv.org/pdf/1905.12859v1",econ.EM
30271,em,"We analyze the household savings problem in a general setting where returns
on assets, non-financial income and impatience are all state dependent and
fluctuate over time. All three processes can be serially correlated and
mutually dependent. Rewards can be bounded or unbounded and wealth can be
arbitrarily large. Extending classic results from an earlier literature, we
determine conditions under which (a) solutions exist, are unique and are
globally computable, (b) the resulting wealth dynamics are stationary, ergodic
and geometrically mixing, and (c) the wealth distribution has a Pareto tail. We
show how these results can be used to extend recent studies of the wealth
distribution. Our conditions have natural economic interpretations in terms of
asymptotic growth rates for discounting and return on savings.",The Income Fluctuation Problem and the Evolution of Wealth,2019-05-29 09:45:13,"Qingyin Ma, John Stachurski, Alexis Akira Toda","http://dx.doi.org/10.1016/j.jet.2020.105003, http://arxiv.org/abs/1905.13045v3, http://arxiv.org/pdf/1905.13045v3",econ.TH
30273,em,"The paper proposes an estimator to make inference of heterogeneous treatment
effects sorted by impact groups (GATES) for non-randomised experiments. The
groups can be understood as a broader aggregation of the conditional average
treatment effect (CATE) where the number of groups is set in advance. In
economics, this approach is similar to pre-analysis plans. Observational
studies are standard in policy evaluation from labour markets, educational
surveys and other empirical studies. To control for a potential selection-bias,
we implement a doubly-robust estimator in the first stage. We use machine
learning methods to learn the conditional mean functions as well as the
propensity score. The group average treatment effect is then estimated via a
linear projection model. The linear model is easy to interpret, provides
p-values and confidence intervals, and limits the danger of finding spurious
heterogeneity due to small subgroups in the CATE. To control for confounding in
the linear model, we use Neyman-orthogonal moments to partial out the effect
that covariates have on both the treatment assignment and the outcome. The
result is a best linear predictor for effect heterogeneity based on impact
groups. We find that our proposed method has lower absolute errors as well as
smaller bias than the benchmark doubly-robust estimator. We further introduce a
bagging type averaging for the CATE function for each observation to avoid
biases through sample splitting. The advantage of the proposed method is a
robust linear estimation of heterogeneous group treatment effects in
observational studies.",Group Average Treatment Effects for Observational Studies,2019-11-07 03:42:16,Daniel Jacob,"http://arxiv.org/abs/1911.02688v5, http://arxiv.org/pdf/1911.02688v5",econ.EM
30274,em,"Combinatorial/probabilistic models for cross-country dual-meets are proposed.
The first model assumes that all runners are equally likely to finish in any
possible order. The second model assumes that each team is selected from a
large identically distributed population of potential runners and with each
potential runner's ranking determined by the initial draw from the combined
population.",Combinatorial Models of Cross-Country Dual Meets: What is a Big Victory?,2019-11-12 21:06:45,Kurt S. Riedel,"http://arxiv.org/abs/1911.05044v1, http://arxiv.org/pdf/1911.05044v1",stat.AP
30275,em,"New nonparametric tests of copula exchangeability and radial symmetry are
proposed. The novel aspect of the tests is a resampling procedure that exploits
group invariance conditions associated with the relevant symmetry hypothesis.
They may be viewed as feasible versions of randomization tests of symmetry, the
latter being inapplicable due to the unobservability of margins. Our tests are
simple to compute, control size asymptotically, consistently detect arbitrary
forms of asymmetry, and do not require the specification of a tuning parameter.
Simulations indicate excellent small sample properties compared to existing
procedures involving the multiplier bootstrap.",Randomization tests of copula symmetry,2019-11-13 09:03:13,"Brendan K. Beare, Juwon Seo","http://dx.doi.org/10.1017/S0266466619000410, http://arxiv.org/abs/1911.05307v1, http://arxiv.org/pdf/1911.05307v1",econ.EM
30276,em,"We study a linear random coefficient model where slope parameters may be
correlated with some continuous covariates. Such a model specification may
occur in empirical research, for instance, when quantifying the effect of a
continuous treatment observed at two time periods. We show one can carry out
identification and estimation without instruments. We propose a semiparametric
estimator of average partial effects and of average treatment effects on the
treated. We showcase the small sample properties of our estimator in an
extensive simulation study. Among other things, we reveal that it compares
favorably with a control function estimator. We conclude with an application to
the effect of malaria eradication on economic development in Colombia.",Semiparametric Estimation of Correlated Random Coefficient Models without Instrumental Variables,2019-11-15 23:05:51,"Samuele Centorrino, Aman Ullah, Jing Xue","http://arxiv.org/abs/1911.06857v1, http://arxiv.org/pdf/1911.06857v1",econ.EM
30277,em,"This paper studies causal inference in randomized experiments under network
interference. Commonly used models of interference posit that treatments
assigned to alters beyond a certain network distance from the ego have no
effect on the ego's response. However, this assumption is violated in common
models of social interactions. We propose a substantially weaker model of
""approximate neighborhood interference"" (ANI) under which treatments assigned
to alters further from the ego have a smaller, but potentially nonzero, effect
on the ego's response. We formally verify that ANI holds for well-known models
of social interactions. Under ANI, restrictions on the network topology, and
asymptotics under which the network size increases, we prove that standard
inverse-probability weighting estimators consistently estimate useful exposure
effects and are approximately normal. For inference, we consider a network HAC
variance estimator. Under a finite population model, we show that the estimator
is biased but that the bias can be interpreted as the variance of unit-level
exposure effects. This generalizes Neyman's well-known result on conservative
variance estimation to settings with interference.",Causal Inference Under Approximate Neighborhood Interference,2019-11-16 22:51:36,Michael P. Leung,"http://arxiv.org/abs/1911.07085v4, http://arxiv.org/pdf/1911.07085v4",econ.EM
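As a bare-bones illustration of the inverse-probability-weighting estimators referenced above, the sketch below computes a Horvitz-Thompson contrast between two exposure levels, assuming the design probabilities of each exposure are known for every unit. The exposure mappings, network asymptotics, and HAC variance estimator of the paper are not implemented; all names are illustrative.

```python
# Horvitz-Thompson exposure-contrast sketch under known design probabilities.
import numpy as np

def ht_exposure_effect(y, exposure, pi_e1, pi_e0, e1, e0):
    """y: (n,) outcomes, exposure: (n,) realized exposure labels,
    pi_e1 / pi_e0: (n,) design probabilities of exposures e1 / e0 per unit."""
    n = len(y)
    term1 = np.sum(y * (exposure == e1) / pi_e1) / n   # mean outcome under e1
    term0 = np.sum(y * (exposure == e0) / pi_e0) / n   # mean outcome under e0
    return term1 - term0
```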
30278,em,"This paper studies inference in models of discrete choice with social
interactions when the data consists of a single large network. We provide
theoretical justification for the use of spatial and network HAC variance
estimators in applied work, the latter constructed by using network path
distance in place of spatial distance. Toward this end, we prove new central
limit theorems for network moments in a large class of social interactions
models. The results are applicable to discrete games on networks and dynamic
models where social interactions enter through lagged dependent variables. We
illustrate our results in an empirical application and simulation study.",Inference in Models of Discrete Choice with Social Interactions Using Network Data,2019-11-17 01:16:56,Michael P. Leung,"http://arxiv.org/abs/1911.07106v1, http://arxiv.org/pdf/1911.07106v1",econ.EM
30279,em,"A new statistical procedure, based on a modified spline basis, is proposed to
identify the linear components in the panel data model with fixed effects.
Under some mild assumptions, the proposed procedure is shown to consistently
estimate the underlying regression function, correctly select the linear
components, and effectively conduct the statistical inference. When compared to
existing methods for detection of linearity in the panel model, our approach is
demonstrated to be theoretically justified as well as practically convenient.
We provide a computational algorithm that implements the proposed procedure
along with a path-based solution method for linearity detection, which avoids
the burden of selecting the tuning parameter for the penalty term. Monte Carlo
simulations are conducted to examine the finite sample performance of our
proposed procedure with detailed findings that confirm our theoretical results
in the paper. Applications to Aggregate Production and Environmental Kuznets
Curve data also illustrate the necessity for detecting linearity in the
partially linear panel model.",Statistical Inference on Partially Linear Panel Model under Unobserved Linearity,2019-11-20 14:16:00,"Ruiqi Liu, Ben Boukai, Zuofeng Shang","http://arxiv.org/abs/1911.08830v1, http://arxiv.org/pdf/1911.08830v1",econ.EM
30280,em,"Quasi-Monte Carlo (qMC) methods are a powerful alternative to classical
Monte-Carlo (MC) integration. Under certain conditions, they can approximate
the desired integral at a faster rate than the usual Central Limit Theorem,
resulting in more accurate estimates. This paper explores these methods in a
simulation-based estimation setting with an emphasis on the scramble of Owen
(1995). For cross-sections and short-panels, the resulting Scrambled Method of
Moments simply replaces the random number generator with the scramble
(available in most software packages) to reduce simulation noise. Scrambled Indirect
Inference estimation is also considered. For time series, qMC may not apply
directly because of a curse of dimensionality on the time dimension. A simple
algorithm and a class of moments which circumvent this issue are described.
Asymptotic results are given for each algorithm. Monte-Carlo examples
illustrate these results in finite samples, including an income process with
""lots of heterogeneity.""",A Scrambled Method of Moments,2019-11-20 22:01:04,Jean-Jacques Forneron,"http://arxiv.org/abs/1911.09128v1, http://arxiv.org/pdf/1911.09128v1",econ.EM
30281,em,"In Regression Discontinuity (RD) design, self-selection leads to different
distributions of covariates on two sides of the policy intervention, which
essentially violates the continuity of potential outcome assumption. The
standard RD estimand becomes difficult to interpret due to the existence of
an indirect effect, i.e., the effect due to self-selection. We show that the
direct causal effect of interest can still be recovered under a class of
estimands. Specifically, we consider a class of weighted average treatment
effects tailored for potentially different target populations. We show that a
special case of our estimands can recover the average treatment effect under
the conditional independence assumption per Angrist and Rokkanen (2015), and
another example is the estimand recently proposed in Fr\""olich and Huber
(2018). We propose a set of estimators through a weighted local linear
regression framework and prove the consistency and asymptotic normality of the
estimators. Our approach can be further extended to the fuzzy RD case. In
simulation exercises, we compare the performance of our estimator with the
standard RD estimator. Finally, we apply our method to two empirical data sets:
the U.S. House elections data in Lee (2008) and a novel data set from Microsoft
Bing on Generalized Second Price (GSP) auctions.",Regression Discontinuity Design under Self-selection,2019-11-21 05:28:59,"Sida Peng, Yang Ning","http://arxiv.org/abs/1911.09248v1, http://arxiv.org/pdf/1911.09248v1",stat.ME
30282,em,"Asymmetric power GARCH models have been widely used to study the higher order
moments of financial returns, while their quantile estimation has been rarely
investigated. This paper introduces a simple monotonic transformation on its
conditional quantile function to make the quantile regression tractable. The
asymptotic normality of the resulting quantile estimators is established under
either stationarity or non-stationarity. Moreover, based on the estimation
procedure, new tests for strict stationarity and asymmetry are also
constructed. This is the first attempt at quantile estimation for
non-stationary ARCH-type models in the literature. The usefulness of the
proposed methodology is illustrated by simulation results and real data
analysis.",Hybrid quantile estimation for asymmetric power GARCH models,2019-11-21 11:45:30,"Guochang Wang, Ke Zhu, Guodong Li, Wai Keung Li","http://arxiv.org/abs/1911.09343v1, http://arxiv.org/pdf/1911.09343v1",econ.EM
30283,em,"Optimal trading strategies for pairs trading have been studied by models that
try to find either optimal shares of stocks by assuming no transaction costs or
optimal timing of trading fixed numbers of shares of stocks with transaction
costs. To find optimal strategies which determine optimally both trade times
and number of shares in pairs trading process, we use a singular stochastic
control approach to study an optimal pairs trading problem with proportional
transaction costs. Assuming a cointegrated relationship for a pair of stock
log-prices, we consider a portfolio optimization problem which involves dynamic
trading strategies with proportional transaction costs. We show that the value
function of the control problem is the unique viscosity solution of a nonlinear
quasi-variational inequality, which is equivalent to a free boundary problem
for the singular stochastic control value function. We then develop a discrete
time dynamic programming algorithm to compute the transaction regions, and show
the convergence of the discretization scheme. We illustrate our approach with
numerical examples and discuss the impact of different parameters on
transaction regions. We study the out-of-sample performance in an empirical
study that consists of six pairs of U.S. stocks selected from different
industry sectors, and demonstrate the efficiency of the optimal strategy.",A singular stochastic control approach for optimal pairs trading with proportional transaction costs,2019-11-24 05:57:10,Haipeng Xing,"http://arxiv.org/abs/1911.10450v1, http://arxiv.org/pdf/1911.10450v1",q-fin.TR
30284,em,"This study constructs an integrated early warning system (EWS) that
identifies and predicts stock market turbulence. Based on switching ARCH
(SWARCH) filtering probabilities of the high volatility regime, the proposed
EWS first classifies stock market crises according to an indicator function
with thresholds dynamically selected by the two-peak method. A hybrid algorithm
is then developed in the framework of a long short-term memory (LSTM) network
to make daily predictions that signal impending turmoil. In an empirical
evaluation based on ten years of Chinese stock data, the proposed EWS yields
satisfying results, with a test-set accuracy of $96.6\%$ and an average
forewarning period of $2.4$ days. The model's stability and practical value in
real-time decision-making are also demonstrated through cross-validation and
back-testing.",An Integrated Early Warning System for Stock Market Turbulence,2019-11-28 11:52:19,"Peiwan Wang, Lu Zong, Ye Ma","http://arxiv.org/abs/1911.12596v1, http://arxiv.org/pdf/1911.12596v1",econ.EM
30285,em,"In this study, we develop a novel estimation method for quantile treatment
effects (QTE) under rank invariance and rank stationarity assumptions. Ishihara
(2020) explores identification of the nonseparable panel data model under these
assumptions and proposes a parametric estimation based on the minimum distance
method. However, when the dimensionality of the covariates is large, the
minimum distance estimation using this process is computationally demanding. To
overcome this problem, we propose a two-step estimation method based on the
quantile regression and minimum distance methods. We then show the uniform
asymptotic properties of our estimator and the validity of the nonparametric
bootstrap. The Monte Carlo studies indicate that our estimator performs well in
finite samples. Finally, we present two empirical illustrations, estimating
the distributional effects of insurance provision on household production and
of TV watching on child cognitive development.",Panel Data Quantile Regression for Treatment Effect Models,2020-01-13 18:05:52,Takuya Ishihara,"http://arxiv.org/abs/2001.04324v3, http://arxiv.org/pdf/2001.04324v3",stat.ME
30286,em,"This article extends the widely-used synthetic controls estimator for
evaluating causal effects of policy changes to quantile functions. The proposed
method provides a geometrically faithful estimate of the entire counterfactual
quantile function of the treated unit. Its appeal stems from an efficient
implementation via a constrained quantile-on-quantile regression. This
constitutes a novel concept of independent interest. The method provides a
unique counterfactual quantile function in any scenario: for continuous,
discrete or mixed distributions. It operates in both repeated cross-sections
and panel data with as little as a single pre-treatment period. The article
also provides abstract identification results by showing that any synthetic
controls method, classical or our generalization, provides the correct
counterfactual for causal models that preserve distances between the outcome
distributions. Working with whole quantile functions instead of aggregate
values allows for tests of equality and stochastic dominance of the
counterfactual and the observed distributions. It can provide causal inference
on standard outcomes like average or quantile treatment effects, but also more
general concepts such as counterfactual Lorenz curves or interquartile ranges.",Distributional synthetic controls,2020-01-17 03:12:02,Florian Gunsilius,"http://arxiv.org/abs/2001.06118v5, http://arxiv.org/pdf/2001.06118v5",econ.EM
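A minimal Python sketch of the quantile-matching step behind such an estimator: simplex weights on donor units are chosen so that the weighted average of the donors' quantile functions matches the treated unit's pre-treatment quantile function. The data-generating process, the quantile grid, and the SLSQP solver are illustrative assumptions, not the paper's constrained quantile-on-quantile implementation.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
J, n = 8, 500                                   # donor units, observations per unit
taus = np.linspace(0.05, 0.95, 19)              # quantile grid

donors = [rng.normal(loc=m, scale=s, size=n)
          for m, s in zip(rng.uniform(-1, 1, J), rng.uniform(0.5, 2.0, J))]
treated = 0.6 * donors[0] + 0.4 * donors[3]     # toy treated unit (pre-treatment)

Q_donors = np.column_stack([np.quantile(d, taus) for d in donors])   # (19, J)
Q_treated = np.quantile(treated, taus)

def loss(w):
    return np.sum((Q_donors @ w - Q_treated) ** 2)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(loss, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
               constraints=cons, method="SLSQP")
w_hat = res.x

# The counterfactual post-treatment quantile function would then be the
# w_hat-weighted average of the donors' post-treatment quantile functions.
print("estimated simplex weights:", w_hat.round(2))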
30287,em,"This paper introduces entropy balancing for continuous treatments (EBCT) by
extending the original entropy balancing methodology of Hainm\""uller (2012). In
order to estimate balancing weights, the proposed approach solves a globally
convex constrained optimization problem. EBCT weights reliably eradicate
Pearson correlations between covariates and the continuous treatment variable.
This is the case even when other methods based on the generalized propensity
score tend to yield insufficient balance due to strong selection into different
treatment intensities. Moreover, the optimization procedure is more successful
in avoiding extreme weights attached to a single unit. Extensive Monte-Carlo
simulations show that treatment effect estimates using EBCT display similar or
lower bias and uniformly lower root mean squared error. These properties make
EBCT an attractive method for the evaluation of continuous treatments.",Entropy Balancing for Continuous Treatments,2020-01-17 15:56:17,Stefan Tübbicke,"http://arxiv.org/abs/2001.06281v2, http://arxiv.org/pdf/2001.06281v2",econ.EM
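A minimal Python sketch of the entropy-balancing idea for a continuous treatment: weights stay as close as possible (in Kullback-Leibler divergence) to uniform while the weighted covariance between the treatment and each centered covariate is driven to zero. The moment conditions are deliberately simplified relative to the full EBCT procedure and the data are simulated.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, k = 200, 3
X = rng.normal(size=(n, k))
T = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n)    # selection into dose

Xc = X - X.mean(axis=0)
Tc = T - T.mean()

def kl_to_uniform(w):
    return np.sum(w * np.log(w * n))                       # KL(w || uniform)

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},
    {"type": "eq", "fun": lambda w: Xc.T @ (w * Tc)},      # zero weighted covariances
]
res = minimize(kl_to_uniform, np.full(n, 1.0 / n),
               bounds=[(1e-10, 1.0)] * n, constraints=constraints,
               method="SLSQP", options={"maxiter": 500})
w = res.x

before = np.array([np.corrcoef(T, X[:, j])[0, 1] for j in range(k)])
after = np.array([np.sum(w * Tc * Xc[:, j]) for j in range(k)])
print("unweighted correlations:", before.round(3))
print("weighted covariances   :", after.round(5))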
30288,em,"In the present work we analyse the dynamics of indirect connections between
insurance companies that result from market price channels. In our analysis we
assume that the stock quotations of insurance companies reflect market
sentiments which constitute a very important systemic risk factor.
Interlinkages between insurers and their dynamics have a direct impact on
systemic risk contagion in the insurance sector. We propose herein a new hybrid
approach to the analysis of interlinkages dynamics based on combining the
copula-DCC-GARCH model and Minimum Spanning Trees (MST). Using the
copula-DCC-GARCH model we determine the tail dependence coefficients. Then, for
each analysed period we construct MST based on these coefficients. The dynamics
is analysed by means of time series of selected topological indicators of the
MSTs in the years 2005-2019. Our empirical results show the usefulness of the
proposed approach to the analysis of systemic risk in the insurance sector. The
time series obtained from the proposed hybrid approach reflect the phenomena
occurring on the market. The analysed MST topological indicators can be
considered as systemic risk predictors.",A tail dependence-based MST and their topological indicators in modelling systemic risk in the European insurance sector,2020-01-18 03:41:46,"Anna Denkowska, Stanisław Wanat","http://arxiv.org/abs/2001.06567v2, http://arxiv.org/pdf/2001.06567v2",q-fin.ST
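A minimal Python sketch of the MST step only: a symmetric dependence matrix (here randomly generated as a stand-in for the copula-DCC-GARCH tail-dependence coefficients) is mapped into distances, and the minimum spanning tree plus a couple of simple topological indicators are extracted for a single period.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(4)
k = 10                                         # number of insurers
B = rng.uniform(0.05, 0.6, size=(k, k))
dep = (B + B.T) / 2                            # symmetric pseudo tail-dependence
np.fill_diagonal(dep, 1.0)

dist = np.sqrt(2.0 * (1.0 - dep))              # common dependence-to-distance map
mst = minimum_spanning_tree(dist).toarray()    # upper-triangular edge weights

edges = np.transpose(np.nonzero(mst))
degrees = np.zeros(k, dtype=int)
for i, j in edges:
    degrees[i] += 1
    degrees[j] += 1

print("MST edges (i, j, distance):")
for i, j in edges:
    print(i, j, round(mst[i, j], 3))
print("average edge length:", round(mst[mst > 0].mean(), 3))
print("maximum degree (hub size):", degrees.max())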
30289,em,"In this paper, we propose an adaptive group lasso procedure to efficiently
estimate structural breaks in cointegrating regressions. It is well-known that
the group lasso estimator is not simultaneously estimation consistent and model
selection consistent in structural break settings. Hence, we use a first step
group lasso estimation of a diverging number of breakpoint candidates to
produce weights for a second adaptive group lasso estimation. We prove that
parameter changes are estimated consistently by group lasso and show that the
number of estimated breaks is greater than the true number but still
sufficiently close to it. Then, we use these results and prove that the
adaptive group lasso has oracle properties if weights are obtained from our
first step estimation. Simulation results show that the proposed estimator
delivers the expected results. An economic application to the long-run US money
demand function demonstrates the practical importance of this methodology.",Oracle Efficient Estimation of Structural Breaks in Cointegrating Regressions,2020-01-22 13:32:01,Karsten Schweikert,"http://arxiv.org/abs/2001.07949v4, http://arxiv.org/pdf/2001.07949v4",econ.EM
30290,em,"This article develops a Bayesian approach for estimating panel quantile
regression with binary outcomes in the presence of correlated random effects.
We construct a working likelihood using an asymmetric Laplace (AL) error
distribution and combine it with suitable prior distributions to obtain the
complete joint posterior distribution. For posterior inference, we propose two
Markov chain Monte Carlo (MCMC) algorithms but prefer the algorithm that
exploits the blocking procedure to produce lower autocorrelation in the MCMC
draws. We also explain how to use the MCMC draws to calculate the marginal
effects, relative risk and odds ratio. The performance of our preferred
algorithm is demonstrated in multiple simulation studies and shown to perform
extremely well. Furthermore, we implement the proposed framework to study crime
recidivism in Quebec, a Canadian province, using novel data from the
administrative correctional files. Our results suggest that the recently
implemented ""tough-on-crime"" policy of the Canadian government has been largely
successful in reducing the probability of repeat offenses in the post-policy
period. Besides, our results support existing findings on crime recidivism and
offer new insights at various quantiles.",Bayesian Panel Quantile Regression for Binary Outcomes with Correlated Random Effects: An Application on Crime Recidivism in Canada,2020-01-25 14:16:30,"Georges Bresson, Guy Lacroix, Mohammad Arshad Rahman","http://arxiv.org/abs/2001.09295v1, http://arxiv.org/pdf/2001.09295v1",econ.EM
30291,em,"This paper studies treatment effect models in which individuals are
classified into unobserved groups based on heterogeneous treatment rules. Using
a finite mixture approach, we propose a marginal treatment effect (MTE)
framework in which the treatment choice and outcome equations can be
heterogeneous across groups. Under the availability of instrumental variables
specific to each group, we show that the MTE for each group can be separately
identified. Based on our identification result, we propose a two-step
semiparametric procedure for estimating the group-wise MTE. We illustrate the
usefulness of the proposed method with an application to economic returns to
college education.",Estimating Marginal Treatment Effects under Unobserved Group Heterogeneity,2020-01-27 05:03:23,"Tadao Hoshino, Takahide Yanagi","http://arxiv.org/abs/2001.09560v6, http://arxiv.org/pdf/2001.09560v6",econ.EM
30292,em,"Many recent studies emphasize how important the role of cognitive and
social-emotional skills can be in determining people's quality of life.
Although skills are of great importance in many aspects, in this paper we will
focus our efforts to better understand the relationship between several types
of skills with academic progress delay. Our dataset contains the same students
in 2012 and 2017, and we consider that there was a academic progress delay for
a specific student if he or she progressed less than expected in school grades.
Our methodology primarily includes the use of a Bayesian logistic regression
model and our results suggest that both cognitive and social-emotional skills
may impact the conditional probability of falling behind in school, and the
magnitude of the impact between the two types of skills can be comparable.",Skills to not fall behind in school,2020-01-28 21:45:26,Felipe Maia Polo,"http://arxiv.org/abs/2001.10519v1, http://arxiv.org/pdf/2001.10519v1",stat.AP
30293,em,"The notion that an independent central bank reduces a country's inflation is
a controversial hypothesis. To date, it has not been possible to satisfactorily
answer this question because the complex macroeconomic structure that gives
rise to the data has not been adequately incorporated into statistical
analyses. We develop a causal model that summarizes the economic process of
inflation. Based on this causal model and recent data, we discuss and identify
the assumptions under which the effect of central bank independence on
inflation can be identified and estimated. Given these and alternative
assumptions, we estimate this effect using modern doubly robust effect
estimators, i.e., longitudinal targeted maximum likelihood estimators. The
estimation procedure incorporates machine learning algorithms and is tailored
to address the challenges associated with complex longitudinal macroeconomic
data. We do not find strong support for the hypothesis that having an
independent central bank for a long period of time necessarily lowers
inflation. Simulation studies evaluate the sensitivity of the proposed methods
in complex settings when certain assumptions are violated and highlight the
importance of working with appropriate learning algorithms for estimation.",Estimating the Effect of Central Bank Independence on Inflation Using Longitudinal Targeted Maximum Likelihood Estimation,2020-03-04 20:26:51,"Philipp F. M. Baumann, Michael Schomaker, Enzo Rossi","http://arxiv.org/abs/2003.02208v7, http://arxiv.org/pdf/2003.02208v7",econ.EM
30294,em,"This paper studies the joint estimation problem of a discrete choice model
and the arrival rate of potential customers when unobserved stock-out events
occur. In this paper, we generalize [Anupindi et al., 1998] and [Conlon and
Mortimer, 2013] in the sense that (1) we work with generic choice models, (2)
we allow arbitrary numbers of products and stock-out events, and (3) we
consider the existence of the null alternative and estimate the overall
arrival rate of potential customers. In addition, we point out that the
modeling in [Conlon and Mortimer, 2013] is problematic, and present the correct
formulation.",Joint Estimation of Discrete Choice Model and Arrival Rate with Unobserved Stock-out Events,2020-03-04 23:13:55,"Hongzhang Shao, Anton J. Kleywegt","http://arxiv.org/abs/2003.02313v1, http://arxiv.org/pdf/2003.02313v1",math.OC
30295,em,"This paper describes the impact on transportation network companies (TNCs) of
the imposition of a congestion charge and a driver minimum wage. The impact is
assessed using a market equilibrium model to calculate the changes in the
number of passenger trips and trip fare, number of drivers employed, the TNC
platform profit, the number of TNC vehicles, and city revenue. Two charges are
considered: (a) a charge per TNC trip similar to an excise tax, and (b) a
charge per vehicle operating hour (whether or not it has a passenger) similar
to a road tax. Both charges reduce the number of TNC trips, but this reduction
is limited by the wage floor, and the number of TNC vehicles reduced is not
significant. The time-based charge is preferable to the trip-based charge
since, by penalizing idle vehicle time, the former increases vehicle occupancy.
In a case study for San Francisco, the time-based charge is found to be Pareto
superior to the trip-based charge as it yields higher passenger surplus, higher
platform profits, and higher tax revenue for the city.",Impact of Congestion Charge and Minimum Wage on TNCs: A Case Study for San Francisco,2020-03-05 14:56:28,"Sen Li, Kameshwar Poolla, Pravin Varaiya","http://arxiv.org/abs/2003.02550v4, http://arxiv.org/pdf/2003.02550v4",econ.EM
30296,em,"It is well known that the conventional cumulative sum (CUSUM) test suffers
from low power and large detection delay. In order to improve the power of the
test, we propose two alternative statistics. The backward CUSUM detector
considers the recursive residuals in reverse chronological order, whereas the
stacked backward CUSUM detector sequentially cumulates a triangular array of
backwardly cumulated residuals. A multivariate invariance principle for partial
sums of recursive residuals is given, and the limiting distributions of the
test statistics are derived under local alternatives. In the retrospective
context, the local power of the tests is shown to be substantially higher than
that of the conventional CUSUM test if a break occurs in the middle or at the
end of the sample. When applied to monitoring schemes, the detection delay of
the stacked backward CUSUM is found to be much shorter than that of the
conventional monitoring CUSUM procedure. Furthermore, we propose an estimator
of the break date based on the backward CUSUM detector and show that in
monitoring exercises this estimator tends to outperform the usual maximum
likelihood estimator. Finally, an application of the methodology to COVID-19
data is presented.",Backward CUSUM for Testing and Monitoring Structural Change with an Application to COVID-19 Pandemic Data,2020-03-05 17:53:58,"Sven Otto, Jörg Breitung","http://dx.doi.org/10.1017/S0266466622000159, http://arxiv.org/abs/2003.02682v3, http://arxiv.org/pdf/2003.02682v3",econ.EM
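A minimal Python sketch of the backward-CUSUM idea: standardized recursive residuals are cumulated in reverse chronological order, so a break near the end of the sample shows up early in the cumulated path. The data-generating process and the crude variance estimate are illustrative, and the paper's boundary functions and critical values are not reproduced.

import numpy as np

rng = np.random.default_rng(5)
T, k = 200, 2
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)
y[160:] += 1.5                                   # level break near the sample end

def recursive_residuals(X, y, k):
    """One-step-ahead standardized prediction errors w_t, t = k+1,...,T."""
    w = []
    for t in range(k, len(y)):
        Xt, yt = X[:t], y[:t]
        b = np.linalg.lstsq(Xt, yt, rcond=None)[0]
        x_new = X[t]
        f = 1.0 + x_new @ np.linalg.solve(Xt.T @ Xt, x_new)
        w.append((y[t] - x_new @ b) / np.sqrt(f))
    return np.array(w)

w = recursive_residuals(X, y, k)
sigma = w.std(ddof=1)

forward_cusum = np.cumsum(w) / (sigma * np.sqrt(len(w)))
backward_cusum = np.cumsum(w[::-1]) / (sigma * np.sqrt(len(w)))

print("max |forward CUSUM| :", round(np.abs(forward_cusum).max(), 2))
print("max |backward CUSUM|:", round(np.abs(backward_cusum).max(), 2))
# With a break late in the sample, the backward statistic typically reacts much
# more strongly than the forward one, which is the motivation for the test.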
30297,em,"We propose a novel conditional quantile prediction method based on complete
subset averaging (CSA) for quantile regressions. All models under consideration
are potentially misspecified and the dimension of regressors goes to infinity
as the sample size increases. Since we average over the complete subsets, the
number of models is much larger than in the usual model averaging approach,
which adopts sophisticated weighting schemes. We propose to use equal weights but
select the proper size of the complete subset based on the leave-one-out
cross-validation method. Building upon the theory of Lu and Su (2015), we
investigate the large sample properties of CSA and show the asymptotic
optimality in the sense of Li (1987). We check the finite sample performance
via Monte Carlo simulations and empirical applications.",Complete Subset Averaging for Quantile Regressions,2020-03-06 19:23:59,"Ji Hyung Lee, Youngki Shin","http://dx.doi.org/10.1017/S0266466621000402, http://arxiv.org/abs/2003.03299v3, http://arxiv.org/pdf/2003.03299v3",econ.EM
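A minimal Python sketch of complete subset averaging for a quantile regression under a toy data-generating process: for each subset size k, every size-k quantile regression is fit and the predictions are averaged with equal weights, and k is chosen by leave-one-out cross-validation of the check loss.

import numpy as np
from itertools import combinations
import statsmodels.api as sm

rng = np.random.default_rng(6)
tau, n, p = 0.5, 80, 4
X = rng.normal(size=(n, p))
y = 1.0 + X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.standard_t(df=4, size=n)

def check_loss(u, tau):
    return np.mean(u * (tau - (u < 0)))

def csa_predict(X_tr, y_tr, X_te, k, tau):
    preds = []
    for S in combinations(range(X_tr.shape[1]), k):
        Z_tr = sm.add_constant(X_tr[:, S], has_constant="add")
        Z_te = sm.add_constant(X_te[:, S], has_constant="add")
        fit = sm.QuantReg(y_tr, Z_tr).fit(q=tau)
        preds.append(Z_te @ fit.params)
    return np.mean(preds, axis=0)                 # equal weights over subsets

cv_loss = {}
for k in range(1, p + 1):
    errs = []
    for i in range(n):                            # leave-one-out cross-validation
        tr = np.arange(n) != i
        pred = csa_predict(X[tr], y[tr], X[~tr], k, tau)
        errs.append(y[i] - pred[0])
    cv_loss[k] = check_loss(np.array(errs), tau)

k_star = min(cv_loss, key=cv_loss.get)
print("LOO check loss by subset size:", {k: round(v, 4) for k, v in cv_loss.items()})
print("selected subset size:", k_star)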
32784,gn,"With the development of Internet technology, the issue of privacy leakage has
attracted more and more attention from the public. In our daily life, mobile
phone applications and identity documents that we use may bring the risk of
privacy leakage, which had increasingly aroused public concern. The path of
privacy protection in the digital age remains to be explored. To explore the
source of this risk and how it can be reduced, we conducted this study by using
personal experience, collecting data and applying the theory.",Personal Privacy Protection Problems in the Digital Age,2022-11-17 18:38:32,"Zhiheng Yi, Xiaoli Chen","http://arxiv.org/abs/2211.09591v2, http://arxiv.org/pdf/2211.09591v2",econ.GN
30298,em,"This study provides a formal analysis of the customer targeting problem when
the cost for a marketing action depends on the customer response and proposes a
framework to estimate the decision variables for campaign profit optimization.
Targeting a customer is profitable if the impact and associated profit of the
marketing treatment are higher than its cost. Despite the growing literature on
uplift models to identify the strongest treatment-responders, no research has
investigated optimal targeting when the costs of the treatment are unknown at
the time of the targeting decision. Stochastic costs are ubiquitous in direct
marketing and customer retention campaigns because marketing incentives are
conditioned on a positive customer response. This study makes two contributions
to the literature, which are evaluated on an e-commerce coupon targeting
campaign. First, we formally analyze the targeting decision problem under
response-dependent costs. Profit-optimal targeting requires an estimate of the
treatment effect on the customer and an estimate of the customer response
probability under treatment. The empirical results demonstrate that the
consideration of treatment cost substantially increases campaign profit when
used for customer targeting in combination with an estimate of the average or
customer-level treatment effect. Second, we propose a framework to jointly
estimate the treatment effect and the response probability by combining methods
for causal inference with a hurdle mixture model. The proposed causal hurdle
model achieves competitive campaign profit while streamlining model building.
Code is available at https://github.com/Humboldt-WI/response-dependent-costs.",Targeting customers under response-dependent costs,2020-03-13 16:23:03,"Johannes Haupt, Stefan Lessmann","http://dx.doi.org/10.1016/j.ejor.2021.05.045, http://arxiv.org/abs/2003.06271v2, http://arxiv.org/pdf/2003.06271v2",econ.EM
30299,em,"Contribution of this paper lies in the formulation and estimation of a
generalized model for stochastic frontier analysis (SFA) that nests virtually
all forms used and includes some that have not been considered so far. The
model is based on the generalized t distribution for the observation error and
the generalized beta distribution of the second kind for the
inefficiency-related term. We use this general error structure framework for
formal testing, to compare alternative specifications and to conduct model
averaging. This allows us to deal with model specification uncertainty, which
is one of the main unresolved issues in SFA, and to relax a number of
potentially restrictive assumptions embedded within existing SF models. We also
develop Bayesian inference methods that are less restrictive compared to the
ones used so far and demonstrate feasible approximate alternatives based on
maximum likelihood.","Stochastic Frontier Analysis with Generalized Errors: inference, model comparison and averaging",2020-03-16 15:38:02,"Kamil Makieła, Błażej Mazur","http://arxiv.org/abs/2003.07150v2, http://arxiv.org/pdf/2003.07150v2",econ.EM
30300,em,"We propose a hypothesis test that allows for many tested restrictions in a
heteroskedastic linear regression model. The test compares the conventional F
statistic to a critical value that corrects for many restrictions and
conditional heteroskedasticity. This correction uses leave-one-out estimation
to correctly center the critical value and leave-three-out estimation to
appropriately scale it. The large sample properties of the test are established
in an asymptotic framework where the number of tested restrictions may be fixed
or may grow with the sample size, and can even be proportional to the number of
observations. We show that the test is asymptotically valid and has non-trivial
asymptotic power against the same local alternatives as the exact F test when
the latter is valid. Simulations corroborate these theoretical findings and
suggest excellent size control in moderately small samples, even under strong
heteroskedasticity.",Testing Many Restrictions Under Heteroskedasticity,2020-03-16 19:44:44,"Stanislav Anatolyev, Mikkel Sølvsten","http://arxiv.org/abs/2003.07320v3, http://arxiv.org/pdf/2003.07320v3",econ.EM
30301,em,"Dynamic pricing schemes are increasingly employed across industries to
maintain a self-organized balance of demand and supply. However, throughout
complex dynamical systems, unintended collective states exist that may
compromise their function. Here we reveal how dynamic pricing may induce
demand-supply imbalances instead of preventing them. Combining game theory and
time series analysis of dynamic pricing data from on-demand ride-hailing
services, we explain this apparent contradiction. We derive a phase diagram
demonstrating how and under which conditions dynamic pricing incentivizes
collective action of ride-hailing drivers to induce anomalous supply shortages.
By disentangling different timescales in price time series of ride-hailing
services at 137 locations across the globe, we identify characteristic patterns
in the price dynamics reflecting these anomalous supply shortages. Our results
provide systemic insights for the regulation of dynamic pricing, in particular
in publicly accessible mobility systems, by unraveling under which conditions
dynamic pricing schemes promote anomalous supply shortages.",Anomalous supply shortages from dynamic pricing in on-demand mobility,2020-03-16 20:45:21,"Malte Schröder, David-Maximilian Storch, Philip Marszal, Marc Timme","http://dx.doi.org/10.1038/s41467-020-18370-3, http://arxiv.org/abs/2003.07736v1, http://arxiv.org/pdf/2003.07736v1",physics.soc-ph
30302,em,"This paper studies the design of two-wave experiments in the presence of
spillover effects when the researcher aims to conduct precise inference on
treatment effects. We consider units connected through a single network, local
dependence among individuals, and a general class of estimands encompassing
average treatment and average spillover effects. We introduce a statistical
framework for designing two-wave experiments with networks, where the
researcher optimizes over participants and treatment assignments to minimize
the variance of the estimators of interest, using a first-wave (pilot)
experiment to estimate the variance. We derive guarantees for inference on
treatment effects and regret guarantees on the variance obtained from the
proposed design mechanism. Our results illustrate the existence of a trade-off
in the choice of the pilot study and formally characterize the pilot's size
relative to the main experiment. Simulations using simulated and real-world
networks illustrate the advantages of the method.",Experimental Design under Network Interference,2020-03-18 21:22:36,Davide Viviano,"http://arxiv.org/abs/2003.08421v4, http://arxiv.org/pdf/2003.08421v4",econ.EM
30309,em,"We consider a small set of axioms for income averaging -- recursivity,
continuity, and the boundary condition for the present. These properties yield
a unique averaging function that is the density of the reflected Brownian
motion with a drift started at the current income and moving over the past
incomes. When averaging is done over the short past, the weighting function is
asymptotically converging to a Gaussian. When averaging is done over the long
horizon, the weighting function converges to the exponential distribution. For
all intermediate averaging scales, we derive an explicit solution that
interpolates between the two.",On Vickrey's Income Averaging,2020-04-14 06:30:19,"Stefan Steinerberger, Aleh Tsyvinski","http://arxiv.org/abs/2004.06289v1, http://arxiv.org/pdf/2004.06289v1",econ.TH
30303,em,"Practical problems with missing data are common, and statistical methods have
been developed concerning the validity and/or efficiency of statistical
procedures. A central focus has been the mechanism governing data missingness,
and correctly identifying the appropriate mechanism is crucial for conducting
proper practical investigations.
The conventional notions include the three common potential classes -- missing
completely at random, missing at random, and missing not at random. In this
paper, we present a new hypothesis testing approach for deciding between
missing at random and missing not at random. Since the potential alternatives
of missing at random are broad, we focus our investigation on a general class
of models with instrumental variables for data missing not at random. Our
setting is broadly applicable because the model concerning the missing
data is nonparametric, requiring no explicit model specification for the data
missingness. The foundational idea is to develop appropriate discrepancy
measures between estimators whose properties significantly differ only when
missing at random does not hold. We show that our new hypothesis testing
approach achieves an objective, data-oriented choice between missing at random
and missing not at random. We demonstrate the feasibility, validity, and
efficacy of the new test
by theoretical analysis, simulation studies, and a real data analysis.",Missing at Random or Not: A Semiparametric Testing Approach,2020-03-25 05:09:42,"Rui Duan, C. Jason Liang, Pamela Shaw, Cheng Yong Tang, Yong Chen","http://arxiv.org/abs/2003.11181v1, http://arxiv.org/pdf/2003.11181v1",stat.ME
30304,em,"We develop monitoring procedures for cointegrating regressions, testing the
null of no breaks against the alternatives that there is either a change in the
slope, or a change to non-cointegration. After observing the regression for a
calibration sample m, we study a CUSUM-type statistic to detect the presence of
change during a monitoring horizon m+1,...,T. Our procedures use a class of
boundary functions which depend on a parameter whose value affects the delay in
detecting the possible break. Technically, these procedures are based on almost
sure limiting theorems whose derivation is not straightforward. We therefore
define a monitoring function which - at every point in time - diverges to
infinity under the null, and drifts to zero under alternatives. We cast this
sequence in a randomised procedure to construct an i.i.d. sequence, which we
then employ to define the detector function. Our monitoring procedure rejects
the null of no break (when correct) with a small probability, whilst it rejects
with probability one over the monitoring horizon in the presence of breaks.",Sequential monitoring for cointegrating regressions,2020-03-27 01:58:54,"Lorenzo Trapani, Emily Whitehouse","http://arxiv.org/abs/2003.12182v1, http://arxiv.org/pdf/2003.12182v1",econ.EM
30305,em,"This paper proposes a new class of nonparametric tests for the correct
specification of models based on conditional moment restrictions, paying
particular attention to generalized propensity score models. The test procedure
is based on two different projection arguments, leading to test statistics that
are suitable to setups with many covariates, and are (asymptotically) invariant
to the estimation method used to estimate the nuisance parameters. We show that
our proposed tests are able to detect a broad class of local alternatives
converging to the null at the usual parametric rate and illustrate their
attractive power properties via simulations. We also extend our proposal to
test parametric or semiparametric single-index-type models.",Specification tests for generalized propensity scores using double projections,2020-03-30 23:38:18,"Pedro H. C. Sant'Anna, Xiaojun Song","http://arxiv.org/abs/2003.13803v2, http://arxiv.org/pdf/2003.13803v2",econ.EM
30306,em,"The diminishing extent of Arctic sea ice is a key indicator of climate change
as well as an accelerant for future global warming. Since 1978, Arctic sea ice
has been measured using satellite-based microwave sensing; however, different
measures of Arctic sea ice extent have been made available based on differing
algorithmic transformations of the raw satellite data. We propose and estimate
a dynamic factor model that combines four of these measures in an optimal way
that accounts for their differing volatility and cross-correlations. We then
use the Kalman smoother to extract an optimal combined measure of Arctic sea
ice extent. It turns out that almost all weight is put on the NSIDC Sea Ice
Index, confirming and enhancing confidence in the Sea Ice Index and the NASA
Team algorithm on which it is based.",Optimal Combination of Arctic Sea Ice Extent Measures: A Dynamic Factor Modeling Approach,2020-03-31 18:02:27,"Francis X. Diebold, Maximilian Göbel, Philippe Goulet Coulombe, Glenn D. Rudebusch, Boyuan Zhang","http://arxiv.org/abs/2003.14276v2, http://arxiv.org/pdf/2003.14276v2",stat.AP
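A minimal Python sketch of the same generic idea using statsmodels: several noisy measures of one latent series are combined through a one-factor dynamic factor model and the Kalman-smoothed factor is extracted. The simulated series and noise levels are placeholders for the four satellite-based indices.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 300
latent = np.zeros(T)
for t in range(1, T):                                      # persistent latent truth
    latent[t] = 0.9 * latent[t - 1] + rng.normal(scale=0.3)

noise_sd = [0.1, 0.3, 0.5, 0.8]                            # differing volatilities
obs = pd.DataFrame({f"measure_{i}": latent + rng.normal(scale=s, size=T)
                    for i, s in enumerate(noise_sd)})
obs = (obs - obs.mean()) / obs.std()                       # standardize the measures

mod = sm.tsa.DynamicFactor(obs, k_factors=1, factor_order=1)
res = mod.fit(disp=False)
factor = res.factors.smoothed[0]                           # Kalman-smoothed factor

print("absolute correlation of the smoothed factor with the latent truth:",
      round(abs(np.corrcoef(factor, latent)[0, 1]), 3))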
30307,em,"We construct robust empirical Bayes confidence intervals (EBCIs) in a normal
means problem. The intervals are centered at the usual linear empirical Bayes
estimator, but use a critical value accounting for shrinkage. Parametric EBCIs
that assume a normal distribution for the means (Morris, 1983b) may
substantially undercover when this assumption is violated. In contrast, our
EBCIs control coverage regardless of the means distribution, while remaining
close in length to the parametric EBCIs when the means are indeed Gaussian. If
the means are treated as fixed, our EBCIs have an average coverage guarantee:
the coverage probability is at least $1 - \alpha$ on average across the $n$
EBCIs for each of the means. Our empirical application considers the effects of
U.S. neighborhoods on intergenerational mobility.",Robust Empirical Bayes Confidence Intervals,2020-04-07 17:54:04,"Timothy B. Armstrong, Michal Kolesár, Mikkel Plagborg-Møller","http://dx.doi.org/10.3982/ECTA18597, http://arxiv.org/abs/2004.03448v4, http://arxiv.org/pdf/2004.03448v4",econ.EM
30308,em,"This paper investigates the sensitivity of forecast performance measures to
taking a real time versus pseudo out-of-sample perspective. We use monthly
vintages for the United States (US) and the Euro Area (EA) and estimate a set
of vector autoregressive (VAR) models of different sizes with constant and
time-varying parameters (TVPs) and stochastic volatility (SV). Our results
suggest differences in the relative ordering of model performance for point and
density forecasts depending on whether real time data or truncated final
vintages in pseudo out-of-sample simulations are used for evaluating forecasts.
No clearly superior specification for the US or the EA across variable types
and forecast horizons can be identified, although larger models featuring TVPs
appear to be affected the least by missing values and data revisions. We
identify substantial differences in performance metrics with respect to whether
forecasts are produced for the US or the EA.",Forecasts with Bayesian vector autoregressions under real time conditions,2020-04-10 13:43:48,Michael Pfarrhofer,"http://arxiv.org/abs/2004.04984v1, http://arxiv.org/pdf/2004.04984v1",econ.EM
34099,th,"In a many-to-one matching model in which firms' preferences satisfy
substitutability, we study the set of worker-quasi-stable matchings.
Worker-quasi-stability is a relaxation of stability that allows blocking pairs
involving a firm and an unemployed worker. We show that this set has a lattice
structure and define a Tarski operator on this lattice that models a
re-equilibration process and has the set of stable matchings as its fixed
points.",The lattice of worker-quasi-stable matchings,2021-03-30 16:25:33,"Agustin G. Bonifacio, Nadia Guinazu, Noelia Juarez, Pablo Neme, Jorge Oviedo","http://arxiv.org/abs/2103.16330v4, http://arxiv.org/pdf/2103.16330v4",econ.TH
30310,em,"This paper proposes a new family of multi-frequency-band (MFB) tests for the
white noise hypothesis by using the maximum overlap discrete wavelet packet
transform (MODWPT). The MODWPT allows the variance of a process to be
decomposed into the variance of its components on different equal-length
frequency sub-bands, and the MFB tests then measure the distance between the
MODWPT-based variance ratio and its theoretical null value jointly over several
frequency sub-bands. The resulting MFB tests have the chi-squared asymptotic
null distributions under mild conditions, which allow the data to be
heteroskedastic. The MFB tests are shown to have the desirable size and power
performance by simulation studies, and their usefulness is further illustrated
by two applications.",Multi-frequency-band tests for white noise under heteroskedasticity,2020-04-20 12:40:58,"Mengya Liu, Fukan Zhu, Ke Zhu","http://arxiv.org/abs/2004.09161v1, http://arxiv.org/pdf/2004.09161v1",econ.EM
30311,em,"The United Nations' ambitions to combat climate change and prosper human
development are manifested in the Paris Agreement and the Sustainable
Development Goals (SDGs), respectively. These are inherently inter-linked as
progress towards some of these objectives may accelerate or hinder progress
towards others. We investigate how these two agendas influence each other by
defining networks of 18 nodes, consisting of the 17 SDGs and climate change,
for various groupings of countries. We compute a non-linear measure of
conditional dependence, the partial distance correlation, given any subset of
the remaining 16 variables. These correlations are treated as weights on edges,
and weighted eigenvector centralities are calculated to determine the most
important nodes. We find that SDG 6, clean water and sanitation, and SDG 4,
quality education, are most central across nearly all groupings of countries.
In developing regions, SDG 17, partnerships for the goals, is strongly
connected to the progress of other objectives in the two agendas whilst,
somewhat surprisingly, SDG 8, decent work and economic growth, is not as
important in terms of eigenvector centrality.",Non-linear interlinkages and key objectives amongst the Paris Agreement and the Sustainable Development Goals,2020-04-16 21:26:27,"Felix Laumann, Julius von Kügelgen, Mauricio Barahona","http://arxiv.org/abs/2004.09318v1, http://arxiv.org/pdf/2004.09318v1",econ.EM
30312,em,"Regression discontinuity designs assess causal effects in settings where
treatment is determined by whether an observed running variable crosses a
pre-specified threshold. Here we propose a new approach to identification,
estimation, and inference in regression discontinuity designs that uses
knowledge about exogenous noise (e.g., measurement error) in the running
variable. In our strategy, we weight treated and control units to balance a
latent variable of which the running variable is a noisy measure. Our approach
is explicitly randomization-based and complements standard formal analyses that
appeal to continuity arguments while ignoring the stochastic nature of the
assignment mechanism.",Noise-Induced Randomization in Regression Discontinuity Designs,2020-04-20 20:24:38,"Dean Eckles, Nikolaos Ignatiadis, Stefan Wager, Han Wu","http://arxiv.org/abs/2004.09458v4, http://arxiv.org/pdf/2004.09458v4",stat.ME
30313,em,"This paper proposes a new linearized mixed data sampling (MIDAS) model and
develops a framework to infer clusters in a panel regression with mixed
frequency data. The linearized MIDAS estimation method is more flexible and
substantially simpler to implement than competing approaches. We show that the
proposed clustering algorithm successfully recovers true membership in the
cross-section, both in theory and in simulations, without requiring prior
knowledge of the number of clusters. This methodology is applied to a
mixed-frequency Okun's law model for state-level data in the U.S. and uncovers
four meaningful clusters based on the dynamic features of state-level labor
markets.",Revealing Cluster Structures Based on Mixed Sampling Frequencies,2020-04-21 09:20:15,"Yeonwoo Rho, Yun Liu, Hie Joo Ahn","http://arxiv.org/abs/2004.09770v2, http://arxiv.org/pdf/2004.09770v2",econ.EM
30314,em,"Bayesian models often involve a small set of hyperparameters determined by
maximizing the marginal likelihood. Bayesian optimization is a popular
iterative method where a Gaussian process posterior of the underlying function
is sequentially updated by new function evaluations. An acquisition strategy
uses this posterior distribution to decide where to place the next function
evaluation. We propose a novel Bayesian optimization framework for situations
where the user controls the computational effort, and therefore the precision
of the function evaluations. This is a common situation in econometrics where
the marginal likelihood is often computed by Markov chain Monte Carlo (MCMC) or
importance sampling methods, with the precision of the marginal likelihood
estimator determined by the number of samples. The new acquisition strategy
gives the optimizer the option to explore the function with cheap noisy
evaluations and therefore find the optimum faster. The method is applied to
estimating the prior hyperparameters in two popular models on US macroeconomic
time series data: the steady-state Bayesian vector autoregressive (BVAR) and
the time-varying parameter BVAR with stochastic volatility. The proposed method
is shown to find the optimum much quicker than traditional Bayesian
optimization or grid search.",Bayesian Optimization of Hyperparameters from Noisy Marginal Likelihood Estimates,2020-04-21 18:11:19,"Oskar Gustafsson, Mattias Villani, Pär Stockhammar","http://arxiv.org/abs/2004.10092v2, http://arxiv.org/pdf/2004.10092v2",stat.CO
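A minimal Python sketch of Gaussian-process Bayesian optimization when the objective can only be evaluated with simulation noise: the noise variance is passed to the GP surrogate and expected improvement is used as the acquisition function. The one-dimensional toy objective stands in for a noisy marginal-likelihood estimate, and the paper's key feature of letting the optimizer choose the precision of each evaluation is simplified to a fixed noise level.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(8)
noise_sd = 0.3                                   # evaluation precision (fixed here)

def noisy_log_ml(lam):
    """Stand-in for a noisy marginal-likelihood estimate at hyperparameter lam."""
    return -(lam - 2.0) ** 2 + rng.normal(scale=noise_sd)

grid = np.linspace(0.0, 5.0, 200).reshape(-1, 1)
X_obs = rng.uniform(0.0, 5.0, size=(3, 1))       # small initial design
y_obs = np.array([noisy_log_ml(x[0]) for x in X_obs])

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
def fit_gp():
    gp = GaussianProcessRegressor(kernel=kernel, alpha=noise_sd ** 2,
                                  normalize_y=True)
    return gp.fit(X_obs, y_obs)

for _ in range(15):
    mu, sd = fit_gp().predict(grid, return_std=True)
    best = y_obs.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)      # expected improvement
    x_next = grid[np.argmax(ei)]
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, noisy_log_ml(x_next[0]))

mu = fit_gp().predict(grid)
print("hyperparameter with the best GP posterior mean:",
      round(float(grid[np.argmax(mu), 0]), 3))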
30315,em,"As the amount of economic and other data generated worldwide increases
vastly, a challenge for future generations of econometricians will be to master
efficient algorithms for inference in empirical models with large information
sets. This Chapter provides a review of popular estimation algorithms for
Bayesian inference in econometrics and surveys alternative algorithms developed
in machine learning and computing science that allow for efficient computation
in high-dimensional settings. The focus is on scalability and parallelizability
of each algorithm, as well as their ability to be adopted in various empirical
settings in economics and finance.",Machine Learning Econometrics: Bayesian algorithms and methods,2020-04-24 02:15:33,"Dimitris Korobilis, Davide Pettenuzzo","http://arxiv.org/abs/2004.11486v1, http://arxiv.org/pdf/2004.11486v1",stat.CO
30316,em,"We propose and study a maximum likelihood estimator of stochastic frontier
models with endogeneity in cross-section data when the composite error term may
be correlated with inputs and environmental variables. Our framework is a
generalization of the normal half-normal stochastic frontier model with
endogeneity. We derive the likelihood function in closed form using three
fundamental assumptions: the existence of control functions that fully capture
the dependence between regressors and unobservables; the conditional
independence of the two error components given the control functions; and the
conditional distribution of the stochastic inefficiency term given the control
functions being a folded normal distribution. We also provide a Battese-Coelli
estimator of technical efficiency. Our estimator is computationally fast and
easy to implement. We study some of its asymptotic properties, and we showcase
its finite sample behavior in Monte-Carlo simulations and an empirical
application to farmers in Nepal.",Maximum Likelihood Estimation of Stochastic Frontier Models with Endogeneity,2020-04-26 15:51:11,"Samuele Centorrino, María Pérez-Urdiales","http://arxiv.org/abs/2004.12369v3, http://arxiv.org/pdf/2004.12369v3",econ.EM
30317,em,"We propose a novel method for modeling data by using structural models based
on economic theory as regularizers for statistical models. We show that even if
a structural model is misspecified, as long as it is informative about the
data-generating mechanism, our method can outperform both the (misspecified)
structural model and statistical models without structural regularization. Our method
permits a Bayesian interpretation of theory as prior knowledge and can be used
both for statistical prediction and causal inference. It contributes to
transfer learning by showing how incorporating theory into statistical modeling
can significantly improve out-of-domain predictions and offers a way to
synthesize reduced-form and structural approaches for causal effect estimation.
Simulation experiments demonstrate the potential of our method in various
settings, including first-price auctions, dynamic models of entry and exit, and
demand estimation with instrumental variables. Our method has potential
applications not only in economics, but in other scientific disciplines whose
theoretical models offer important insight but are subject to significant
misspecification concerns.",Structural Regularization,2020-04-27 09:47:07,"Jiaming Mao, Zhesheng Zheng","http://arxiv.org/abs/2004.12601v4, http://arxiv.org/pdf/2004.12601v4",econ.EM
30318,em,"I provide a unifying perspective on forecast evaluation, characterizing
accurate forecasts of all types, from simple point to complete probabilistic
forecasts, in terms of two fundamental underlying properties, autocalibration
and resolution, which can be interpreted as describing a lack of systematic
mistakes and a high information content. This ""calibration-resolution
principle"" gives a new insight into the nature of forecasting and generalizes
the famous sharpness principle by Gneiting et al. (2007) from probabilistic to
all types of forecasts. Among other things, it exposes the shortcomings of several
widely used forecast evaluation methods. The principle is based on a fully
general version of the Murphy decomposition of loss functions, which I provide.
Special cases of this decomposition are well-known and widely used in
meteorology.
  Besides using the decomposition in this new theoretical way, after having
introduced it and the underlying properties in a proper theoretical framework,
accompanied by an illustrative example, I also employ it in its classical sense
as a forecast evaluation method as the meteorologists do: As such, it unveils
the driving forces behind forecast errors and complements classical forecast
evaluation methods. I discuss estimation of the decomposition via kernel
regression and then apply it to popular economic forecasts. Analysis of mean
forecasts from the US Survey of Professional Forecasters and quantile forecasts
derived from Bank of England fan charts indeed yield interesting new insights
and highlight the potential of the method.",The Murphy Decomposition and the Calibration-Resolution Principle: A New Perspective on Forecast Evaluation,2020-05-04 23:41:00,Marc-Oliver Pohle,"http://arxiv.org/abs/2005.01835v1, http://arxiv.org/pdf/2005.01835v1",stat.ME
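A minimal Python sketch of the decomposition for mean (squared-error) forecasts, MSE = uncertainty - resolution + reliability, with the conditional mean E[y | forecast] estimated by simple binning as a crude stand-in for the kernel regression mentioned in the abstract; the simulated forecast and outcome are illustrative.

import numpy as np

rng = np.random.default_rng(9)
n = 20000
signal = rng.normal(size=n)
y = signal + rng.normal(scale=1.0, size=n)       # outcomes
f = 0.7 * signal + 0.3                            # a miscalibrated mean forecast

mse = np.mean((y - f) ** 2)
uncertainty = np.var(y)

# Estimate the conditional mean of y given the forecast by binning f.
bins = np.quantile(f, np.linspace(0, 1, 51))
idx = np.clip(np.digitize(f, bins[1:-1]), 0, 49)
cond_mean = np.array([y[idx == b].mean() for b in range(50)])[idx]

resolution = np.mean((cond_mean - y.mean()) ** 2)    # information content
reliability = np.mean((f - cond_mean) ** 2)          # systematic mistakes

print("MSE                                    :", round(mse, 4))
print("uncertainty - resolution + reliability :",
      round(uncertainty - resolution + reliability, 4))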
30319,em,"On September 15th 2020, Arctic sea ice extent (SIE) ranked second-to-lowest
in history and keeps trending downward. The understanding of how feedback loops
amplify the effects of external CO2 forcing is still limited. We propose the
VARCTIC, which is a Vector Autoregression (VAR) designed to capture and
extrapolate Arctic feedback loops. VARs are dynamic simultaneous systems of
equations, routinely estimated to predict and understand the interactions of
multiple macroeconomic time series. The VARCTIC is a parsimonious compromise
between full-blown climate models and purely statistical approaches that
usually offer little explanation of the underlying mechanism. Our completely
unconditional forecast has SIE hitting 0 in September by the 2060's. Impulse
response functions reveal that anthropogenic CO2 emission shocks have an
unusually durable effect on SIE -- a property shared by no other shock. We find
Albedo- and Thickness-based feedbacks to be the main amplification channels
through which CO2 anomalies impact SIE in the short/medium run. Further,
conditional forecast analyses reveal that the future path of SIE crucially
depends on the evolution of CO2 emissions, with outcomes ranging from
recovering SIE to it reaching 0 in the 2050's. Finally, Albedo and Thickness
feedbacks are shown to play an important role in accelerating the speed at
which predicted SIE is heading towards 0.",Arctic Amplification of Anthropogenic Forcing: A Vector Autoregressive Analysis,2020-05-06 02:34:24,"Philippe Goulet Coulombe, Maximilian Göbel","http://arxiv.org/abs/2005.02535v4, http://arxiv.org/pdf/2005.02535v4",econ.EM
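A minimal Python sketch, using statsmodels, of the generic VAR workflow the abstract builds on (estimate a VAR, inspect orthogonalized impulse responses, produce forecasts). The simulated three-variable system and the fixed lag order are placeholders for the VARCTIC's actual Arctic series and specification.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(10)
T = 400
co2 = np.cumsum(rng.normal(0.05, 0.1, T))                  # trending forcing (toy)
thickness = 5.0 - 0.3 * co2 + rng.normal(0, 0.2, T)
sie = 10.0 - 0.4 * co2 + 0.5 * thickness + rng.normal(0, 0.3, T)
df = pd.DataFrame({"CO2": co2, "Thickness": thickness, "SIE": sie}).diff().dropna()

res = VAR(df).fit(4)                                       # VAR(4) on differences
irf = res.irf(24)                                          # 24-period horizon

i_sie, j_co2 = df.columns.get_loc("SIE"), df.columns.get_loc("CO2")
print("orthogonalized response of SIE to a CO2 shock, first 5 periods:")
print(irf.orth_irfs[:5, i_sie, j_co2].round(3))

fcast = res.forecast(df.values[-res.k_ar:], steps=12)      # unconditional forecast
print(pd.DataFrame(fcast, columns=df.columns).round(3).head())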
30320,em,"Tea auctions across India occur as an ascending open auction, conducted
online. Before the auction, a sample of the tea lot is sent to potential
bidders and a group of tea tasters. The seller's reserve price is a
confidential function of the tea taster's valuation, which also possibly acts
as a signal to the bidders.
  In this paper, we work with the dataset from a single tea auction house, J
Thomas, of tea dust category, on 49 weeks in the time span of 2018-2019, with
the following objectives in mind:
  $\bullet$ Objectively classify the 25 categories of tea dust into a more
manageable and robust classification, based on source and grade.
  $\bullet$ Predict which tea lots will be sold in the auction market, and model
the final price conditional on sale.
  $\bullet$ To study the distribution of price and ratio of the sold tea
auction lots.
  $\bullet$ Make a detailed analysis of the information obtained from the tea
taster's valuation and its impact on the final auction price.
  The model used shows promising results under cross-validation. The
importance of the valuation is firmly established through an analysis of the
causal relationship between the valuation and the actual price. The authors
hope that this study of the properties, and the detailed analysis of the role
played by the various factors, will be significant in the decision-making
process for the players of the auction game, pave the way towards removing
manual intervention in an attempt to automate the auction procedure, and help
improve tea quality in markets.",The Information Content of Taster's Valuation in Tea Auctions of India,2020-05-04 19:39:47,"Abhinandan Dalal, Diganta Mukherjee, Subhrajyoty Roy","http://arxiv.org/abs/2005.02814v1, http://arxiv.org/pdf/2005.02814v1",stat.AP
30321,em,"Power law distributions characterise several natural and social phenomena.
The Zipf law for cities is one of those. This study examines whether that
global regularity is independent of different spatial
distributions of cities. For that purpose, a typical Zipfian rank-size
distribution of cities is generated with random numbers. This distribution is
then cast into different settings of spatial coordinates. For the estimation,
the variables rank and size are supplemented by spatial spillover effects in a
standard spatial econometric approach. Results suggest that distance and
contiguity effects matter. This finding is further corroborated by three
country analyses.",Spatial dependence in the rank-size distribution of cities,2020-05-06 17:05:56,Rolf Bergs,"http://dx.doi.org/10.1371/journal.pone.0246796, http://arxiv.org/abs/2005.02836v1, http://arxiv.org/pdf/2005.02836v1",physics.soc-ph
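A minimal Python sketch of the baseline, non-spatial step: a Zipfian rank-size distribution is generated with random numbers and the Zipf coefficient is estimated by regressing log(rank - 1/2) on log(size) (the Gabaix-Ibragimov correction). The spatial-spillover terms discussed in the abstract would be added to this regression and are omitted here.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1000
u = rng.uniform(size=n)
sizes = (1.0 - u) ** (-1.0)                  # Pareto(alpha = 1) draws: Zipf's law
sizes = np.sort(sizes)[::-1]
ranks = np.arange(1, n + 1)

X = sm.add_constant(np.log(sizes))
res = sm.OLS(np.log(ranks - 0.5), X).fit()
print("estimated Zipf coefficient (should be near -1):", round(res.params[1], 3))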
30322,em,"This paper proposes a logistic undirected network formation model which
allows for assortative matching on observed individual characteristics and the
presence of edge-wise fixed effects. We model the coefficients of observed
characteristics to have a latent community structure and the edge-wise fixed
effects to be of low rank. We propose a multi-step estimation procedure
involving nuclear norm regularization, sample splitting, iterative logistic
regression and spectral clustering to detect the latent communities. We show
that the latent communities can be exactly recovered when the expected degree
of the network is of order log n or higher, where n is the number of nodes in
the network. The finite sample performance of the new estimation and inference
methods is illustrated through both simulated and real datasets.",Detecting Latent Communities in Network Formation Models,2020-05-07 06:34:29,"Shujie Ma, Liangjun Su, Yichong Zhang","http://arxiv.org/abs/2005.03226v3, http://arxiv.org/pdf/2005.03226v3",econ.EM
30323,em,"This paper proposes a new procedure to build factor models for
high-dimensional unit-root time series by postulating that a $p$-dimensional
unit-root process is a nonsingular linear transformation of a set of unit-root
processes, a set of stationary common factors, which are dynamically dependent,
and some idiosyncratic white noise components. For the stationary components,
we assume that the factor process captures the temporal-dependence and the
idiosyncratic white noise series explains, jointly with the factors, the
cross-sectional dependence. The estimation of nonsingular linear loading spaces
is carried out in two steps. First, we use an eigenanalysis of a nonnegative
definite matrix of the data to separate the unit-root processes from the
stationary ones and a modified method to specify the number of unit roots. We
then employ another eigenanalysis and a projected principal component analysis
to identify the stationary common factors and the white noise series. We
propose a new procedure to specify the number of white noise series and, hence,
the number of stationary common factors, establish asymptotic properties of the
proposed method for both fixed and diverging $p$ as the sample size $n$
increases, and use simulation and a real example to demonstrate the performance
of the proposed method in finite samples. We also compare our method with some
commonly used ones in the literature regarding the forecast ability of the
extracted factors and find that the proposed method performs well in
out-of-sample forecasting of a 508-dimensional PM$_{2.5}$ series in Taiwan.",Modeling High-Dimensional Unit-Root Time Series,2020-05-05 23:40:23,"Zhaoxing Gao, Ruey S. Tsay","http://dx.doi.org/10.1016/j.ijforecast.2020.09.008, http://arxiv.org/abs/2005.03496v2, http://arxiv.org/pdf/2005.03496v2",stat.ME
30324,em,"We propose a new semiparametric approach for modelling nonlinear univariate
diffusions, where the observed process is a nonparametric transformation of an
underlying parametric diffusion (UPD). This modelling strategy yields a general
class of semiparametric Markov diffusion models with parametric dynamic copulas
and nonparametric marginal distributions. We provide primitive conditions for
the identification of the UPD parameters together with the unknown
transformations from discrete samples. Likelihood-based estimators of both
parametric and nonparametric components are developed, and their asymptotic
properties are analyzed. Kernel-based drift and diffusion estimators are
also proposed and shown to be normally distributed in large samples. A
simulation study investigates the finite sample performance of our estimators
in the context of modelling US short-term interest rates. We also present a
simple application of the proposed method for modelling the CBOE volatility
index data.",Diffusion Copulas: Identification and Estimation,2020-05-07 17:20:00,"Ruijun Bu, Kaddour Hadri, Dennis Kristensen","http://arxiv.org/abs/2005.03513v1, http://arxiv.org/pdf/2005.03513v1",econ.EM
30325,em,"Time-varying parameter (TVP) regression models can involve a huge number of
coefficients. Careful prior elicitation is required to yield sensible posterior
and predictive inferences. In addition, the computational demands of Markov
Chain Monte Carlo (MCMC) methods mean their use is limited to the case where
the number of predictors is not too large. In light of these two concerns, this
paper proposes a new dynamic shrinkage prior which reflects the empirical
regularity that TVPs are typically sparse (i.e. time variation may occur only
episodically and only for some of the coefficients). A scalable MCMC algorithm
is developed which is capable of handling very high dimensional TVP regressions
or TVP Vector Autoregressions. In an exercise using artificial data we
demonstrate the accuracy and computational efficiency of our methods. In an
application involving the term structure of interest rates in the eurozone, we
find our dynamic shrinkage prior to effectively pick out small amounts of
parameter change and our methods to forecast well.",Dynamic Shrinkage Priors for Large Time-varying Parameter Regressions using Scalable Markov Chain Monte Carlo Methods,2020-05-08 11:40:09,"Niko Hauzenberger, Florian Huber, Gary Koop","http://arxiv.org/abs/2005.03906v2, http://arxiv.org/pdf/2005.03906v2",econ.EM
30326,em,"P-hacking is prevalent in reality but absent from classical hypothesis
testing theory. As a consequence, significant results are much more common than
they are supposed to be when the null hypothesis is in fact true. In this
paper, we build a model of hypothesis testing with p-hacking. From the model,
we construct critical values such that, if the values are used to determine
significance, and if scientists' p-hacking behavior adjusts to the new
significance standards, significant results occur with the desired frequency.
Such robust critical values allow for p-hacking, so they are larger than
classical critical values. To illustrate the amount of correction that
p-hacking might require, we calibrate the model using evidence from the medical
sciences. In the calibrated model the robust critical value for any test
statistic is the classical critical value for the same test statistic with one
fifth of the significance level.",Critical Values Robust to P-hacking,2020-05-08 19:37:11,"Adam McCloskey, Pascal Michaillat","http://arxiv.org/abs/2005.04141v8, http://arxiv.org/pdf/2005.04141v8",econ.EM
30327,em,"This note considers the problem of conducting statistical inference on the
share of individuals in some subgroup of a population that experience some
event. The specific complication is that the size of the subgroup needs to be
estimated, whereas the number of individuals that experience the event is
known. The problem is motivated by the recent study of Streeck et al. (2020),
who estimate the infection fatality rate (IFR) of SARS-CoV-2 infection in a
German town that experienced a super-spreading event in mid-February 2020. In
their case the subgroup of interest is comprised of all infected individuals,
and the event is death caused by the infection. We clarify issues with the
precise definition of the target parameter in this context, and propose
confidence intervals (CIs) based on classical statistical principles that
result in good coverage properties.",Combining Population and Study Data for Inference on Event Rates,2020-05-14 10:30:53,Christoph Rothe,"http://arxiv.org/abs/2005.06769v1, http://arxiv.org/pdf/2005.06769v1",stat.AP
30328,em,"Successful forecasting models strike a balance between parsimony and
flexibility. This is often achieved by employing suitable shrinkage priors that
penalize model complexity but also reward model fit. In this note, we modify
the stochastic volatility in mean (SVM) model proposed in Chan (2017) by
introducing state-of-the-art shrinkage techniques that allow for time-variation
in the degree of shrinkage. Using a real-time inflation forecast exercise, we
show that employing more flexible prior distributions on several key parameters
slightly improves forecast performance for the United States (US), the United
Kingdom (UK) and the Euro Area (EA). Comparing in-sample results reveals that
our proposed model yields qualitatively similar insights to the original
version of the model.",Dynamic shrinkage in time-varying parameter stochastic volatility in mean models,2020-05-14 13:10:09,"Florian Huber, Michael Pfarrhofer","http://arxiv.org/abs/2005.06851v1, http://arxiv.org/pdf/2005.06851v1",econ.EM
30329,em,"Models with a large number of latent variables are often used to fully
utilize the information in big or complex data. However, they can be difficult
to estimate using standard approaches, and variational inference methods are a
popular alternative. Key to the success of these is the selection of an
approximation to the target density that is accurate, tractable and fast to
calibrate using optimization methods. Most existing choices can be inaccurate
or slow to calibrate when there are many latent variables. Here, we propose a
family of tractable variational approximations that are more accurate and
faster to calibrate for this case. It combines a parsimonious parametric
approximation for the parameter posterior, with the exact conditional posterior
of the latent variables. We derive a simplified expression for the
re-parameterization gradient of the variational lower bound, which is the main
ingredient of efficient optimization algorithms used to implement variational
estimation. To do so only requires the ability to generate exactly or
approximately from the conditional posterior of the latent variables, rather
than to compute its density. We illustrate using two complex contemporary
econometric examples. The first is a nonlinear multivariate state space model
for U.S. macroeconomic variables. The second is a random coefficients tobit
model applied to two million sales by 20,000 individuals in a large consumer
panel from a marketing study. In both cases, we show that our approximating
family is considerably more accurate than mean field or structured Gaussian
approximations, and faster than Markov chain Monte Carlo. Last, we show how to
implement data sub-sampling in variational inference for our approximation,
which can lead to a further reduction in computation time. MATLAB code
implementing the method for our examples is included in supplementary material.",Fast and Accurate Variational Inference for Models with Many Latent Variables,2020-05-15 12:24:28,"Rubén Loaiza-Maya, Michael Stanley Smith, David J. Nott, Peter J. Danaher","http://arxiv.org/abs/2005.07430v3, http://arxiv.org/pdf/2005.07430v3",stat.ME
30330,em,"This paper evaluates the effects of being an only child in a family on
psychological health, leveraging data on the One-Child Policy in China. We use
an instrumental variable approach to address the potential unmeasured
confounding between the fertility decision and psychological health, where the
instrumental variable is an index on the intensity of the implementation of the
One-Child Policy. We establish an analytical link between the local
instrumental variable approach and principal stratification to accommodate the
continuous instrumental variable. Within the principal stratification
framework, we postulate a Bayesian hierarchical model to infer various causal
estimands of policy interest while adjusting for the clustering data structure.
We apply the method to the data from the China Family Panel Studies and find
small but statistically significant negative effects of being an only child on
self-reported psychological health for some subpopulations. Our analysis
reveals treatment effect heterogeneity with respect to both observed and
unobserved characteristics. In particular, urban males suffer the most from
being only children, and the negative effect has larger magnitude if the
families were more resistant to the One-Child Policy. We also conduct
sensitivity analysis to assess the key instrumental variable assumption.",Is being an only child harmful to psychological health?: Evidence from an instrumental variable analysis of China's One-Child Policy,2020-05-19 02:13:29,"Shuxi Zeng, Fan Li, Peng Ding","http://arxiv.org/abs/2005.09130v2, http://arxiv.org/pdf/2005.09130v2",stat.AP
30331,em,"Instrumental variables (IV) estimation suffers selection bias when the
analysis conditions on the treatment. Judea Pearl's early graphical definition
of instrumental variables explicitly prohibited conditioning on the treatment.
Nonetheless, the practice remains common. In this paper, we derive exact
analytic expressions for IV selection bias across a range of data-generating
models, and for various selection-inducing procedures. We present four sets of
results for linear models. First, IV selection bias depends on the conditioning
procedure (covariate adjustment vs. sample truncation). Second, IV selection
bias due to covariate adjustment is the limiting case of IV selection bias due
to sample truncation. Third, in certain models, the IV and OLS estimators under
selection bound the true causal effect in large samples. Fourth, we
characterize situations where IV remains preferred to OLS despite selection on
the treatment. These results broaden the notion of IV selection bias beyond
sample truncation, replace prior simulation findings with exact analytic
formulas, and enable formal sensitivity analyses.",Instrumental Variables with Treatment-Induced Selection: Exact Bias Results,2020-05-19 19:53:19,"Felix Elwert, Elan Segarra","http://arxiv.org/abs/2005.09583v1, http://arxiv.org/pdf/2005.09583v1",econ.EM
30332,em,"The synthetic control method is a an econometric tool to evaluate causal
effects when only one unit is treated. While initially aimed at evaluating the
effect of large-scale macroeconomic changes with very few available control
units, it has increasingly been used in place of more well-known
microeconometric tools in a broad range of applications, but its properties in
this context are unknown. This paper introduces an alternative to the synthetic
control method, which is developed both in the usual asymptotic framework and
in the high-dimensional scenario. We propose an estimator of average treatment
effect that is doubly robust, consistent and asymptotically normal. It is also
immunized against first-step selection mistakes. We illustrate these properties
using Monte Carlo simulations and applications to both standard and potentially
high-dimensional settings, and offer a comparison with the synthetic control
method.",An alternative to synthetic control for models with many covariates under sparsity,2020-05-25 19:56:45,"Marianne Bléhaut, Xavier D'Haultfoeuille, Jérémy L'Hour, Alexandre B. Tsybakov","http://arxiv.org/abs/2005.12225v2, http://arxiv.org/pdf/2005.12225v2",econ.EM
30338,em,"We conduct an extensive meta-regression analysis of counterfactual programme
evaluations from Italy, considering both published and grey literature on
enterprise and innovation policies. We specify a multilevel model for the
probability of finding positive effect estimates, also assessing correlation
possibly induced by co-authorship networks. We find that the probability of
positive effects is considerable, especially for weaker firms and outcomes that
are directly targeted by public programmes. However, these policies are less
likely to trigger change in the long run.",Evaluating Public Supports to the Investment Activities of Business Firms: A Multilevel Meta-Regression Analysis of Italian Studies,2020-06-02 21:56:22,"Chiara Bocci, Annalisa Caloffi, Marco Mariani, Alessandro Sterlacchini","http://dx.doi.org/10.1007/s40797-021-00170-3, http://arxiv.org/abs/2006.01880v1, http://arxiv.org/pdf/2006.01880v1",stat.AP
30333,em,"This paper evaluates the dynamic impact of various policies adopted by US
states on the growth rates of confirmed Covid-19 cases and deaths as well as
social distancing behavior measured by Google Mobility Reports, where we take
into consideration people's voluntarily behavioral response to new information
of transmission risks. Our analysis finds that both policies and information on
transmission risks are important determinants of Covid-19 cases and deaths and
shows that a change in policies explains a large fraction of observed changes
in social distancing behavior. Our counterfactual experiments suggest that
nationally mandating face masks for employees on April 1st could have reduced
the growth rate of cases and deaths by more than 10 percentage points in late
April, and could have led to as much as 17 to 55 percent fewer deaths nationally
by the end of May, which roughly translates into 17 to 55 thousand saved lives.
Our estimates imply that removing non-essential business closures (while
maintaining school closures, restrictions on movie theaters and restaurants)
could have led to -20 to 60 percent more cases and deaths by the end of May. We
also find that, without stay-at-home orders, cases would have been larger by 25
to 170 percent, which implies that 0.5 to 3.4 million more Americans could have
been infected if stay-at-home orders had not been implemented. Finally, not
having implemented any policies could have led to at least a seven-fold increase
with an uninformative upper bound in cases (and deaths) by the end of May in
the US, with considerable uncertainty over the effects of school closures,
which had little cross-sectional variation.","Causal Impact of Masks, Policies, Behavior on Early Covid-19 Pandemic in the U.S",2020-05-28 20:32:31,"Victor Chernozhukov, Hiroyuki Kasaha, Paul Schrimpf","http://dx.doi.org/10.1016/j.jeconom.2020.09.003, http://arxiv.org/abs/2005.14168v4, http://arxiv.org/pdf/2005.14168v4",econ.EM
30334,em,"This study investigates the impacts of asymmetry on the modeling and
forecasting of realized volatility in the Japanese futures and spot stock
markets. We employ heterogeneous autoregressive (HAR) models allowing for three
types of asymmetry: positive and negative realized semivariance (RSV),
asymmetric jumps, and leverage effects. The estimation results show that
leverage effects clearly influence the modeling of realized volatility models.
Leverage effects exist for both the spot and futures markets in the Nikkei 225.
Although realized semivariance aids modeling, the estimates of RSV
models depend on whether those models include leverage effects. Asymmetric jump
components have no clear influence on realized volatility models. While
leverage effects and realized semivariance also improve the out-of-sample
forecast performance of volatility models, asymmetric jumps do not improve
predictive ability. The empirical results of this study indicate that
asymmetric information, in particular, leverage effects and realized
semivariance, yield better modeling and more accurate forecast performance.
Accordingly, asymmetric information should be included when we model and
forecast the realized volatility of Japanese stock markets.",The impacts of asymmetry on modeling and forecasting realized volatility in Japanese stock markets,2020-05-30 06:26:35,"Daiki Maki, Yasushi Ota","http://arxiv.org/abs/2006.00158v1, http://arxiv.org/pdf/2006.00158v1",q-fin.ST
30335,em,"In ordinary quantile regression, quantiles of different order are estimated
one at a time. An alternative approach, which is referred to as quantile
regression coefficients modeling (QRCM), is to model quantile regression
coefficients as parametric functions of the order of the quantile. In this
paper, we describe how the QRCM paradigm can be applied to longitudinal data.
We introduce a two-level quantile function, in which two different quantile
regression models are used to describe the (conditional) distribution of the
within-subject response and that of the individual effects. We propose a novel
type of penalized fixed-effects estimator, and discuss its advantages over
standard methods based on $\ell_1$ and $\ell_2$ penalization. We provide model
identifiability conditions, derive asymptotic properties, describe
goodness-of-fit measures and model selection criteria, present simulation
results, and discuss an application. The proposed method has been implemented
in the R package qrcm.",Parametric Modeling of Quantile Regression Coefficient Functions with Longitudinal Data,2020-05-30 06:29:10,"Paolo Frumento, Matteo Bottai, Iván Fernández-Val","http://arxiv.org/abs/2006.00160v1, http://arxiv.org/pdf/2006.00160v1",stat.ME
30336,em,"We estimate the density and its derivatives using a local polynomial
approximation to the logarithm of an unknown density $f$. The estimator is
guaranteed to be nonnegative and achieves the same optimal rate of convergence
in the interior as well as the boundary of the support of $f$. The estimator is
therefore well-suited to applications in which nonnegative density estimates
are required, such as in semiparametric maximum likelihood estimation. In
addition, we show that our estimator compares favorably with other kernel-based
methods, both in terms of asymptotic performance and computational ease.
Simulation results confirm that our method can perform similarly in finite
samples to these alternative methods when they are used with optimal inputs,
i.e. an Epanechnikov kernel and optimally chosen bandwidth sequence. Further
simulation evidence demonstrates that, if the researcher modifies the inputs
and chooses a larger bandwidth, our approach can even improve upon these
optimized alternatives, asymptotically. We provide code in several languages.",Estimates of derivatives of (log) densities and related objects,2020-06-02 03:59:37,"Joris Pinkse, Karl Schurter","http://arxiv.org/abs/2006.01328v1, http://arxiv.org/pdf/2006.01328v1",econ.EM
30337,em,"Complexity of the problem of choosing among uncertain acts is a salient
feature of many of the environments in which departures from expected utility
theory are observed. I propose and axiomatize a model of choice under
uncertainty in which the size of the partition with respect to which an act is
measurable arises endogenously as a measure of subjective complexity. I derive
a representation of incomplete Simple Bounds preferences in which acts that are
complex from the perspective of the decision maker are bracketed by simple acts
to which they are related by statewise dominance. The key axioms are motivated
by a model of learning from limited data. I then consider choice behavior
characterized by a ""cautious completion"" of Simple Bounds preferences, and
discuss the relationship between this model and models of ambiguity aversion. I
develop general comparative statics results, and explore applications to
portfolio choice, contracting, and insurance choice.",Subjective Complexity Under Uncertainty,2020-06-02 21:03:11,Quitzé Valenzuela-Stookey,"http://arxiv.org/abs/2006.01852v3, http://arxiv.org/pdf/2006.01852v3",econ.TH
30339,em,"Multiple testing plagues many important questions in finance such as fund and
factor selection. We propose a new way to calibrate both Type I and Type II
errors. Next, using a double-bootstrap method, we establish a t-statistic
hurdle that is associated with a specific false discovery rate (e.g., 5%). We
also establish a hurdle that is associated with a certain acceptable ratio of
misses to false discoveries (Type II error scaled by Type I error), which
effectively allows for differential costs of the two types of mistakes.
Evaluating current methods, we find that they lack power to detect
outperforming managers.",False (and Missed) Discoveries in Financial Economics,2020-06-08 00:08:40,"Campbell R. Harvey, Yan Liu","http://arxiv.org/abs/2006.04269v1, http://arxiv.org/pdf/2006.04269v1",stat.ME
30340,em,"Statistical and structural modeling represent two distinct approaches to data
analysis. In this paper, we propose a set of novel methods for combining
statistical and structural models for improved prediction and causal inference.
Our first proposed estimator has the double robustness property in that it only
requires the correct specification of either the statistical or the structural
model. Our second proposed estimator is a weighted ensemble that has the
ability to outperform both models when they are both misspecified. Experiments
demonstrate the potential of our estimators in various settings, including
first-price auctions, dynamic models of entry and exit, and demand estimation
with instrumental variables.",Ensemble Learning with Statistical and Structural Models,2020-06-07 16:36:50,"Jiaming Mao, Jingzhi Xu","http://arxiv.org/abs/2006.05308v1, http://arxiv.org/pdf/2006.05308v1",econ.EM
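The following is a minimal, hedged sketch of the weighted-ensemble idea described in the abstract above: two predictive models are combined with a weight chosen by out-of-sample error. It is not the authors' estimator; the simulated data, model choices, and grid search are illustrative assumptions.

```python
# Minimal sketch (not the authors' estimator) of a weighted ensemble of two
# predictive models, with the weight chosen by out-of-sample squared error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
m1 = LinearRegression().fit(X_tr, y_tr)                      # "statistical" benchmark
m2 = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)   # flexible alternative

p1, p2 = m1.predict(X_val), m2.predict(X_val)
weights = np.linspace(0, 1, 101)
losses = [np.mean((y_val - (w * p1 + (1 - w) * p2)) ** 2) for w in weights]
w_star = weights[int(np.argmin(losses))]
print(f"ensemble weight on the linear model: {w_star:.2f}")
```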
30341,em,"We analyze the forces that explain inflation using a panel of 122 countries
from 1997 to 2015 with 37 regressors. 98 models motivated by economic theory
are compared to a gradient boosting algorithm; non-linearities and structural
breaks are considered. We show that the typical estimation methods are likely
to lead to fallacious policy conclusions which motivates the use of a new
approach that we propose in this paper. The boosting algorithm outperforms
theory-based models. We confirm that energy prices are important but what
really matters for inflation is their non-linear interplay with energy rents.
Demographic developments also make a difference. Globalization and technology,
public debt, central bank independence and political characteristics are less
relevant. GDP per capita is more relevant than the output gap, credit growth
more than M2 growth.",What Drives Inflation and How: Evidence from Additive Mixed Models Selected by cAIC,2020-06-11 12:30:00,"Philipp F. M. Baumann, Enzo Rossi, Alexander Volkmann","http://arxiv.org/abs/2006.06274v4, http://arxiv.org/pdf/2006.06274v4",stat.AP
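As a hedged illustration of the kind of comparison described above, the sketch below pits a linear specification against an off-the-shelf gradient boosting regressor on simulated data with a non-linear interaction; the data-generating process and tuning choices are assumptions, not the paper's setup.

```python
# Illustrative comparison of a linear model with gradient boosting when the
# target depends on a non-linear interaction between two regressors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 2000, 37                              # 37 regressors, as in the abstract
X = rng.normal(size=(n, p))
# non-linear interplay of two regressors (e.g. energy prices x energy rents)
y = 0.5 * X[:, 0] * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)

linear_cv = cross_val_score(LinearRegression(), X, y, cv=5,
                            scoring="neg_mean_squared_error")
boost_cv = cross_val_score(GradientBoostingRegressor(random_state=0), X, y, cv=5,
                           scoring="neg_mean_squared_error")
print("linear   MSE:", -linear_cv.mean())
print("boosting MSE:", -boost_cv.mean())
```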
30342,em,"Linear regression with measurement error in the covariates is a heavily
studied topic, however, the statistics/econometrics literature is almost silent
to estimating a multi-equation model with measurement error. This paper
considers a seemingly unrelated regression model with measurement error in the
covariates and introduces two novel estimation methods: a pure Bayesian
algorithm (based on Markov chain Monte Carlo techniques) and its mean field
variational Bayes (MFVB) approximation. The MFVB method has the added advantage
of being computationally fast and can handle big data. An issue pertinent to
measurement error models is parameter identification, and this is resolved by
employing a prior distribution on the measurement error variance. The methods
are shown to perform well in multiple simulation studies, where we analyze the
impact on posterior estimates arising due to different values of reliability
ratio or variance of the true unobserved quantity used in the data generating
process. The paper further implements the proposed algorithms in an application
drawn from the health literature and shows that modeling measurement error in
the data can improve model fitting.",Seemingly Unrelated Regression with Measurement Error: Estimation via Markov chain Monte Carlo and Mean Field Variational Bayes Approximation,2020-06-12 13:58:10,"Georges Bresson, Anoop Chaturvedi, Mohammad Arshad Rahman, Shalabh","http://arxiv.org/abs/2006.07074v1, http://arxiv.org/pdf/2006.07074v1",stat.ME
30343,em,"This paper extends the horseshoe prior of Carvalho et al. (2010) to Bayesian
quantile regression (HS-BQR) and provides a fast sampling algorithm for
computation in high dimensions. The performance of the proposed HS-BQR is
evaluated on Monte Carlo simulations and a high dimensional Growth-at-Risk
(GaR) forecasting application for the U.S. The Monte Carlo design considers
several sparsity and error structures. Compared to alternative shrinkage
priors, the proposed HS-BQR yields better (or at worst similar) performance in
coefficient bias and forecast error. The HS-BQR is particularly potent in
sparse designs and in estimating extreme quantiles. As expected, the
simulations also highlight that identifying quantile specific location and
scale effects for individual regressors in dense DGPs requires substantial
data. In the GaR application, we forecast tail risks as well as complete
forecast densities using the McCracken and Ng (2020) database. Quantile
specific and density calibration score functions show that the HS-BQR provides
the best performance, especially at short and medium run horizons. The ability
to produce well calibrated density forecasts and accurate downside risk
measures in large data contexts makes the HS-BQR a promising tool for
nowcasting applications and recession modelling.",Horseshoe Prior Bayesian Quantile Regression,2020-06-13 18:07:52,"David Kohns, Tibor Szendrei","http://arxiv.org/abs/2006.07655v2, http://arxiv.org/pdf/2006.07655v2",econ.EM
30344,em,"There has been an increase in interest in experimental evaluations to
estimate causal effects, partly because their internal validity tends to be
high. At the same time, as part of the big data revolution, large, detailed,
and representative administrative data sets have become more widely available.
However, the credibility of estimates of causal effects based on such data sets
alone can be low.
  In this paper, we develop statistical methods for systematically combining
experimental and observational data to obtain credible estimates of the causal
effect of a binary treatment on a primary outcome that we only observe in the
observational sample. Both the observational and experimental samples contain
data about a treatment, observable individual characteristics, and a secondary
(often short term) outcome. To estimate the effect of a treatment on the
primary outcome while addressing the potential confounding in the observational
sample, we propose a method that makes use of estimates of the relationship
between the treatment and the secondary outcome from the experimental sample.
If assignment to the treatment in the observational sample were unconfounded,
we would expect the treatment effects on the secondary outcome in the two
samples to be similar. We interpret differences in the estimated causal effects
on the secondary outcome between the two samples as evidence of unobserved
confounders in the observational sample, and develop control function methods
for using those differences to adjust the estimates of the treatment effects on
the primary outcome.
  We illustrate these ideas by combining data on class size and third grade
test scores from the Project STAR experiment with observational data on class
size and both third and eighth grade test scores from the New York school
system.",Combining Experimental and Observational Data to Estimate Treatment Effects on Long Term Outcomes,2020-06-17 09:30:48,"Susan Athey, Raj Chetty, Guido Imbens","http://arxiv.org/abs/2006.09676v1, http://arxiv.org/pdf/2006.09676v1",stat.ME
30345,em,"In this paper, we estimate the time-varying COVID-19 contact rate of a
Susceptible-Infected-Recovered (SIR) model. Our measurement of the contact rate
is constructed using data on actively infected, recovered and deceased cases.
We propose a new trend filtering method that is a variant of the
Hodrick-Prescott (HP) filter, constrained by the number of possible kinks. We
term it the $\textit{sparse HP filter}$ and apply it to daily data from five
countries: Canada, China, South Korea, the UK and the US. Our new method yields
the kinks that are well aligned with actual events in each country. We find
that the sparse HP filter provides fewer kinks than the $\ell_1$ trend
filter, while both methods fit the data equally well. Theoretically, we
establish risk consistency of both the sparse HP and $\ell_1$ trend filters.
Ultimately, we propose to use time-varying $\textit{contact growth rates}$ to
document and monitor outbreaks of COVID-19.",Sparse HP Filter: Finding Kinks in the COVID-19 Contact Rate,2020-06-18 17:07:14,"Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin","http://dx.doi.org/10.1016/j.jeconom.2020.08.008, http://arxiv.org/abs/2006.10555v2, http://arxiv.org/pdf/2006.10555v2",econ.EM
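For readers unfamiliar with the building block, the sketch below implements only the standard Hodrick-Prescott filter in closed form; the paper's kink-count constraint (the sparse HP filter) is not reproduced here, and the series and smoothing parameter are illustrative assumptions.

```python
# Closed-form Hodrick-Prescott trend: minimize ||y - tau||^2 + lam * ||D2 tau||^2,
# solved as (I + lam * D'D) tau = y with D the second-difference matrix.
import numpy as np

def hp_filter(y, lam=1600.0):
    """Return the HP trend of a 1-D series y."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(2)
t = np.arange(200)
y = 0.05 * t + np.sin(t / 15) + rng.normal(scale=0.2, size=200)
tau = hp_filter(y, lam=1600.0)
print("first five trend values:", np.round(tau[:5], 3))
```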
30346,em,"We study the effect of social distancing, food vulnerability, welfare and
labour COVID-19 policy responses on riots, violence against civilians and
food-related conflicts. Our analysis uses georeferenced data for 24 African
countries with monthly local prices and real-time conflict data reported in the
Armed Conflict Location and Event Data Project (ACLED) from January 2015 until
early May 2020. Lockdowns and recent welfare policies have been implemented in
light of COVID-19, but in some contexts also likely in response to ongoing
conflicts. To mitigate the potential risk of endogeneity, we use instrumental
variables. We exploit the exogeneity of global commodity prices, and three
variables that increase the risk of COVID-19 and efficiency in response such as
countries' colonial heritage, male mortality rate attributed to air pollution
and prevalence of diabetes in adults. We find that the probability of
experiencing riots, violence against civilians, food-related conflicts and food
looting has increased since lockdowns. Food vulnerability has been a
contributing factor. A 10% increase in the local price index is associated with
an increase of 0.7 percentage points in violence against civilians.
Nonetheless, for every additional anti-poverty measure implemented in response
to COVID-19 the probability of experiencing violence against civilians, riots
and food-related conflicts declines by approximately 0.2 percentage points.
These anti-poverty measures also reduce the number of fatalities associated
with these conflicts. Overall, our findings reveal that food vulnerability has
increased conflict risks, but also offer an optimistic view of the importance
of the state in providing an extensive welfare safety net.","Conflict in Africa during COVID-19: social distancing, food vulnerability and welfare response",2020-06-18 20:30:05,Roxana Gutiérrez-Romero,"http://arxiv.org/abs/2006.10696v1, http://arxiv.org/pdf/2006.10696v1",econ.EM
30347,em,"We consider both $\ell _{0}$-penalized and $\ell _{0}$-constrained quantile
regression estimators. For the $\ell _{0}$-penalized estimator, we derive an
exponential inequality on the tail probability of excess quantile prediction
risk and apply it to obtain non-asymptotic upper bounds on the mean-square
parameter and regression function estimation errors. We also derive analogous
results for the $\ell _{0}$-constrained estimator. The resulting rates of
convergence are nearly minimax-optimal and the same as those for $\ell
_{1}$-penalized and non-convex penalized estimators. Further, we characterize
expected Hamming loss for the $\ell _{0}$-penalized estimator. We implement the
proposed procedure via mixed integer linear programming and also a more
scalable first-order approximation algorithm. We illustrate the finite-sample
performance of our approach in Monte Carlo experiments and its usefulness in a
real data application concerning conformal prediction of infant birth weights
(with $n\approx 10^{3}$ and up to $p>10^{3}$). In sum, our $\ell _{0}$-based
method produces a much sparser estimator than the $\ell _{1}$-penalized and
non-convex penalized approaches without compromising precision.",Sparse Quantile Regression,2020-06-19 19:04:17,"Le-Yu Chen, Sokbae Lee","http://arxiv.org/abs/2006.11201v4, http://arxiv.org/pdf/2006.11201v4",stat.ME
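The sketch below shows only the $\ell_1$-penalized quantile regression benchmark that the proposed $\ell_0$-based estimators are compared against, using scikit-learn's QuantileRegressor; the $\ell_0$/mixed-integer estimator itself is not reproduced, and the sparse simulated design is an assumption.

```python
# l1-penalized median regression on a sparse design with heavy-tailed noise.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(3)
n, p = 500, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.5, 0.75]                     # sparse truth
y = X @ beta + rng.standard_t(df=3, size=n)      # heavy-tailed noise

qr = QuantileRegressor(quantile=0.5, alpha=0.01, solver="highs").fit(X, y)
print("indices of nonzero coefficients:", np.flatnonzero(np.abs(qr.coef_) > 1e-6))
```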
30348,em,"Quasi-experimental methods have proliferated over the last two decades, as
researchers develop causal inference tools for settings in which randomization
is infeasible. Two popular such methods, difference-in-differences (DID) and
comparative interrupted time series (CITS), compare observations before and
after an intervention in a treated group to an untreated comparison group
observed over the same period. Both methods rely on strong, untestable
counterfactual assumptions. Despite their similarities, the methodological
literature on CITS lacks the mathematical formality of DID. In this paper, we
use the potential outcomes framework to formalize two versions of CITS - a
general version described by Bloom (2005) and a linear version often used in
health services research. We then compare these to two corresponding DID
formulations - one with time fixed effects and one with time fixed effects and
group trends. We also re-analyze three previously published studies using these
methods. We demonstrate that the most general versions of CITS and DID impute
the same counterfactuals and estimate the same treatment effects. The only
difference between these two designs is the language used to describe them and
their popularity in distinct disciplines. We also show that these designs
diverge when one constrains them using linearity (CITS) or parallel trends
(DID). We recommend defaulting to the more flexible versions and provide advice
to practitioners on choosing between the more constrained versions by
considering the data-generating mechanism. We also recommend greater attention
to specifying the outcome model and counterfactuals in papers, allowing for
transparent evaluation of the plausibility of causal assumptions.",Do Methodological Birds of a Feather Flock Together?,2020-06-19 22:49:58,"Carrie E. Fry, Laura A. Hatfield","http://arxiv.org/abs/2006.11346v2, http://arxiv.org/pdf/2006.11346v2",stat.ME
30349,em,"This paper introduces unified models for high-dimensional factor-based Ito
processes, which can accommodate both continuous-time Ito diffusion and
discrete-time stochastic volatility (SV) models by embedding the discrete SV
model in the continuous instantaneous factor volatility process. We call it the
SV-Ito model. Based on the series of daily integrated factor volatility matrix
estimators, we propose quasi-maximum likelihood and least squares estimation
methods. Their asymptotic properties are established. We apply the proposed
method to predict future vast volatility matrix whose asymptotic behaviors are
studied. A simulation study is conducted to check the finite sample performance
of the proposed estimation and prediction method. An empirical analysis is
carried out to demonstrate the advantage of the SV-Ito model in volatility
prediction and portfolio allocation problems.",Unified Discrete-Time Factor Stochastic Volatility and Continuous-Time Ito Models for Combining Inference Based on Low-Frequency and High-Frequency,2020-06-22 10:20:04,"Donggyu Kim, Xinyu Song, Yazhen Wang","http://arxiv.org/abs/2006.12039v1, http://arxiv.org/pdf/2006.12039v1",stat.ME
30350,em,"I develop Macroeconomic Random Forest (MRF), an algorithm adapting the
canonical Machine Learning (ML) tool to flexibly model evolving parameters in a
linear macro equation. Its main output, Generalized Time-Varying Parameters
(GTVPs), is a versatile device nesting many popular nonlinearities
(threshold/switching, smooth transition, structural breaks/change) and allowing
for sophisticated new ones. The approach delivers clear forecasting gains over
numerous alternatives, predicts the 2008 drastic rise in unemployment, and
performs well for inflation. Unlike most ML-based methods, MRF is directly
interpretable -- via its GTVPs. For instance, the successful unemployment
forecast is due to the influence of forward-looking variables (e.g., term
spreads, housing starts) nearly doubling before every recession. Interestingly,
the Phillips curve has indeed flattened, and its might is highly cyclical.",The Macroeconomy as a Random Forest,2020-06-23 06:44:15,Philippe Goulet Coulombe,"http://arxiv.org/abs/2006.12724v3, http://arxiv.org/pdf/2006.12724v3",econ.EM
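As context, the sketch below fits the canonical Random Forest that MRF adapts; it is not the MRF algorithm (which fits a linear macro equation with generalized time-varying parameters), only the off-the-shelf benchmark it generalizes, applied to an illustrative regime-switching data-generating process.

```python
# Plain random forest forecast on data whose slope switches with a regime variable.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
T, k = 400, 5
X = rng.normal(size=(T, k))                      # e.g. lagged macro indicators
coef = np.where(X[:, 0] > 0, 0.8, -0.3)          # regime-switching "parameter"
y = coef * X[:, 1] + rng.normal(scale=0.2, size=T)

split = 300
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X[:split], y[:split])
pred = rf.predict(X[split:])
print("out-of-sample RMSE:", np.sqrt(np.mean((y[split:] - pred) ** 2)))
```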
30351,em,"An asset pricing model using long-run capital share growth risk has recently
been found to successfully explain U.S. stock returns. Our paper adopts a
recursive preference utility framework to derive an heterogeneous asset pricing
model with capital share risks.While modeling capital share risks, we account
for the elevated consumption volatility of high income stockholders. Capital
risks have strong volatility effects in our recursive asset pricing model.
Empirical evidence is presented in which capital share growth is also a source
of risk for stock return volatility. We uncover contrasting unconditional and
conditional asset pricing evidence for capital share risks.",Asset Prices and Capital Share Risks: Theory and Evidence,2020-06-24 22:59:47,"Joseph P. Byrne, Boulis M. Ibrahim, Xiaoyu Zong","http://arxiv.org/abs/2006.14023v1, http://arxiv.org/pdf/2006.14023v1",econ.EM
30352,em,"We develop a medium-size semi-structural time series model of inflation
dynamics that is consistent with the view - often expressed by central banks -
that three components are important: a trend anchored by long-run expectations,
a Phillips curve and temporary fluctuations in energy prices. We find that a
stable long-term inflation trend and a well identified steep Phillips curve are
consistent with the data, but they imply potential output declining since the
new millennium and energy prices affecting headline inflation not only via the
Phillips curve but also via an independent expectational channel. A
high-frequency energy price cycle can be related to global factors affecting
the commodity market, and often overpowers the Phillips curve thereby
explaining the inflation puzzles of the last ten years.",A Model of the Fed's View on Inflation,2020-06-25 03:21:45,"Thomas Hasenzagl, Filippo Pellegrino, Lucrezia Reichlin, Giovanni Ricco","http://arxiv.org/abs/2006.14110v1, http://arxiv.org/pdf/2006.14110v1",econ.EM
30353,em,"Becker (1973) presents a bilateral matching model in which scalar types
describe agents. For this framework, he establishes the conditions under which
positive sorting between agents' attributes is the unique market outcome.
Becker's celebrated sorting result has been applied to address many economic
questions. However, recent empirical studies in the fields of health,
household, and labor economics suggest that agents have multiple
outcome-relevant attributes. In this paper, I study a matching model with
multidimensional types. I offer multidimensional generalizations of concordance
and supermodularity to construct three multidimensional sorting patterns and
two classes of multidimensional complementarities. For each of these sorting
patterns, I identify the sufficient conditions which guarantee its optimality.
In practice, we observe sorting patterns between observed attributes that are
aggregated over unobserved characteristics. To reconcile theory with practice,
I establish the link between production complementarities and the aggregated
sorting patterns. Finally, I examine the relationship between agents' health
status and their spouses' education levels among U.S. households within the
framework for multidimensional matching markets. Preliminary analysis reveals a
weak positive association between agents' health status and their spouses'
education levels. This weak positive association is estimated to be a product
of three factors: (a) an attraction between better-educated individuals, (b) an
attraction between healthier individuals, and (c) a weak positive association
between agents' health status and their education levels. The attraction
channel suggests that the insurance risk associated with a two-person family
plan is higher than the aggregate risk associated with two individual policies.",Matching Multidimensional Types: Theory and Application,2020-06-25 11:19:32,Veli Safak,"http://arxiv.org/abs/2006.14243v1, http://arxiv.org/pdf/2006.14243v1",econ.EM
30354,em,"Empirical economic research crucially relies on highly sensitive individual
datasets. At the same time, increasing availability of public individual-level
data makes it possible for adversaries to potentially de-identify anonymized
records in sensitive research datasets. The most commonly accepted formal
definition of an individual non-disclosure guarantee is referred to as
differential privacy. It restricts the interaction of researchers with the data
by allowing them to issue queries to the data. The differential privacy
mechanism then replaces the actual outcome of the query with a randomised
outcome.
  The impact of differential privacy on the identification of empirical
economic models and on the performance of estimators in nonlinear empirical
Econometric models has not been sufficiently studied. Since privacy protection
mechanisms are inherently finite-sample procedures, we define the notion of
identifiability of the parameter of interest under differential privacy as a
property of the limit of experiments. It is naturally characterized by the
concepts from random set theory.
  We show that particular instances of regression discontinuity design may be
problematic for inference with differential privacy as parameters turn out to
be neither point nor partially identified. The set of differentially private
estimators converges weakly to a random set. Our analysis suggests that many
other estimators that rely on nuisance parameters may have similar properties
under the requirement of differential privacy. We show that identification
becomes possible if the target parameter can be deterministically located
within the random set. In that case, a full exploration of the random set of
the weak limits of differentially private estimators can allow the data curator
to select a sequence of instances of differentially private estimators
converging to the target parameter in probability.",Identification and Formal Privacy Guarantees,2020-06-26 02:36:45,"Tatiana Komarova, Denis Nekipelov","http://arxiv.org/abs/2006.14732v2, http://arxiv.org/pdf/2006.14732v2",econ.EM
30355,em,"Active mobility offers an array of physical, emotional, and social wellbeing
benefits. However, with the proliferation of the sharing economy, new
nonmotorized means of transport are entering the fold, complementing some
existing mobility options while competing with others. The purpose of this
research study is to investigate the adoption of three active travel modes,
namely walking, cycling and bikesharing, in a joint modeling framework. The
analysis is based on an adaptation of the stages of change framework, which
originates from the health behavior sciences. Multivariate ordered probit
modeling drawing on U.S. survey data provides well-needed insights into
individuals preparedness to adopt multiple active modes as a function of
personal, neighborhood and psychosocial factors. The research suggests three
important findings. 1) The joint model structure confirms interdependence among
different active mobility choices. The strongest complementarity is found for
walking and cycling adoption. 2) Each mode has a distinctive adoption path with
either three or four separate stages. We discuss the implications of derived
stage-thresholds and plot adoption contours for selected scenarios. 3)
Psychological and neighborhood variables generate more coupling among active
modes than individual and household factors. Specifically, identifying strongly
with active mobility aspirations, experiences with multimodal travel,
possessing better navigational skills, along with supportive local community
norms are the factors that appear to drive the joint adoption decisions. This
study contributes to the understanding of how decisions within the same
functional domain are related and help to design policies that promote active
mobility by identifying positive spillovers and joint determinants.","Interdependence in active mobility adoption: Joint modelling and motivational spill-over in walking, cycling and bike-sharing",2020-06-24 20:37:45,"M Said, A Biehl, A Stathopoulos","http://dx.doi.org/10.1080/15568318.2021.1885769, http://arxiv.org/abs/2006.16920v2, http://arxiv.org/pdf/2006.16920v2",cs.CY
30356,em,"Estimates of the approximate factor model are increasingly used in empirical
work. Their theoretical properties, studied some twenty years ago, also laid
the ground work for analysis on large dimensional panel data models with
cross-section dependence. This paper presents simplified proofs for the
estimates by using alternative rotation matrices, exploiting properties of low
rank matrices, as well as the singular value decomposition of the data in
addition to its covariance structure. These simplifications facilitate
interpretation of results and provide a more friendly introduction to
researchers new to the field. New results are provided to allow linear
restrictions to be imposed on factor models.",Simpler Proofs for Approximate Factor Models of Large Dimensions,2020-08-01 15:03:46,"Jushan Bai, Serena Ng","http://arxiv.org/abs/2008.00254v1, http://arxiv.org/pdf/2008.00254v1",econ.EM
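A minimal sketch of the principal-components estimator of an approximate factor model via the singular value decomposition of the data matrix, in the spirit of the SVD-based arguments mentioned above; the simulated panel and the normalizations used are assumptions.

```python
# Principal-components estimation of factors and loadings from a T x N panel.
import numpy as np

rng = np.random.default_rng(5)
T, N, r = 200, 100, 3
F = rng.normal(size=(T, r))                      # latent factors
L = rng.normal(size=(N, r))                      # loadings
X = F @ L.T + rng.normal(scale=0.5, size=(T, N))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = np.sqrt(T) * U[:, :r]                    # estimated factors (T x r)
L_hat = (X.T @ F_hat) / T                        # estimated loadings (N x r)
common = F_hat @ L_hat.T                         # estimated common component
explained = 1 - np.sum((X - common) ** 2) / np.sum(X ** 2)
print("share of variation explained by the common component:", round(explained, 3))
```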
30357,em,"Measuring the prevalence of active SARS-CoV-2 infections in the general
population is difficult because tests are conducted on a small and non-random
segment of the population. However, people admitted to the hospital for
non-COVID reasons are tested at very high rates, even though they do not appear
to be at elevated risk of infection. This sub-population may provide valuable
evidence on prevalence in the general population. We estimate upper and lower
bounds on the prevalence of the virus in the general population and the
population of non-COVID hospital patients under weak assumptions on who gets
tested, using Indiana data on hospital inpatient records linked to SARS-CoV-2
virological tests. The non-COVID hospital population is tested fifty times as
often as the general population, yielding much tighter bounds on prevalence. We
provide and test conditions under which this non-COVID hospitalization bound is
valid for the general population. The combination of clinical testing data and
hospital records may contain much more information about the state of the
epidemic than has been previously appreciated. The bounds we calculate for
Indiana could be constructed at relatively low cost in many other states.",What can we learn about SARS-CoV-2 prevalence from testing and hospital data?,2020-08-01 20:13:28,"Daniel W. Sacks, Nir Menachemi, Peter Embi, Coady Wing","http://arxiv.org/abs/2008.00298v2, http://arxiv.org/pdf/2008.00298v2",econ.EM
30358,em,"This paper develops a design-based theory of uncertainty that is suitable for
analyzing quasi-experimental settings, such as difference-in-differences (DiD).
A key feature of our framework is that each unit has an idiosyncratic treatment
probability that is unknown to the researcher and may be related to the
potential outcomes. We derive formulas for the bias of common estimators
(including DiD), and provide conditions under which they are unbiased for an
interpretable causal estimand (e.g., analogs to the ATE or ATT). We further
show that when the finite population is large, conventional standard errors are
valid but typically conservative estimates of the variance of the estimator
over the randomization distribution. An interesting feature of our framework is
that conventional standard errors tend to become more conservative when
treatment probabilities vary across units. This conservativeness helps to
(partially) mitigate the undercoverage of confidence intervals when the
estimator is biased. Thus, for example, confidence intervals for the DiD
estimator can have correct coverage for the average treatment effect on the
treated even if the parallel trends assumption does not hold exactly. We show
that these dynamics can be important in simulations calibrated to real
labor-market data. Our results also have implications for the appropriate level
to cluster standard errors, and for the analysis of linear covariate adjustment
and instrumental variables.",Design-Based Uncertainty for Quasi-Experiments,2020-08-03 04:12:35,"Ashesh Rambachan, Jonathan Roth","http://arxiv.org/abs/2008.00602v5, http://arxiv.org/pdf/2008.00602v5",econ.EM
30359,em,"The main goal of this paper is to develop a methodology for estimating time
varying parameter vector auto-regression (TVP-VAR) models with a time-invariant
long-run relationship between endogenous variables and changes in exogenous
variables. We propose a Gibbs sampling scheme for estimation of model
parameters as well as time-invariant long-run multiplier parameters. Further we
demonstrate the applicability of the proposed method by analyzing examples of
the Norwegian and Russian economies based on the data on real GDP, real
exchange rate and real oil prices. Our results show that incorporating the
time-invariance constraint on the long-run multipliers in the TVP-VAR model helps to
significantly improve the forecasting performance.",Estimating TVP-VAR models with time invariant long-run multipliers,2020-08-03 11:45:01,"Denis Belomestny, Ekaterina Krymova, Andrey Polbin","http://arxiv.org/abs/2008.00718v1, http://arxiv.org/pdf/2008.00718v1",econ.EM
30360,em,"Knowing the error distribution is important in many multivariate time series
applications. To alleviate the risk of error distribution mis-specification,
testing methodologies are needed to detect whether the chosen error
distribution is correct. However, the majority of the existing tests only deal
with the multivariate normal distribution for some special multivariate time
series models, and thus they cannot be used to test for the heavy-tailed and
skewed error distributions often observed in applications. In this paper, we
construct a new consistent test for general multivariate time series models,
based on the kernelized Stein discrepancy. To account for the estimation
uncertainty and unobserved initial values, a bootstrap method is provided to
calculate the critical values. Our new test is easy-to-implement for a large
scope of multivariate error distributions, and its importance is illustrated by
simulated and real data.",Testing error distribution by kernelized Stein discrepancy in multivariate time series models,2020-08-03 13:00:45,"Donghang Luo, Ke Zhu, Huan Gong, Dong Li","http://arxiv.org/abs/2008.00747v1, http://arxiv.org/pdf/2008.00747v1",econ.EM
30361,em,"We introduce the *decision-conflict logit*, a simple and disciplined
extension of the logit with an outside option that assigns a menu-dependent
utility to that option. The relative value of this utility at a menu could be
interpreted as proxying decision difficulty and determines the probability of
avoiding/delaying choice at that menu. We focus on two intuitively structured
special cases of the model that offer complementary insights, and argue that
they explain a variety of observed choice-deferral effects that are caused by
hard decisions. We conclude by illustrating the usability of the proposed
modelling framework in applications.",The Decision-Conflict Logit,2020-08-10 19:03:21,Georgios Gerasimou,"http://arxiv.org/abs/2008.04229v8, http://arxiv.org/pdf/2008.04229v8",econ.TH
30362,em,"We describe a (nonparametric) prediction algorithm for spatial data, based on
a canonical factorization of the spectral density function. We provide
theoretical results showing that the predictor has desirable asymptotic
properties. Finite sample performance is assessed in a Monte Carlo study that
also compares our algorithm to a rival nonparametric method based on the
infinite AR representation of the dynamics of the data. Finally, we apply our
methodology to predict house prices in Los Angeles.",Nonparametric prediction with spatial data,2020-08-10 20:10:01,"Abhimanyu Gupta, Javier Hidalgo","http://arxiv.org/abs/2008.04269v2, http://arxiv.org/pdf/2008.04269v2",econ.EM
30363,em,"Against the background of explosive growth in data volume, velocity, and
variety, I investigate the origins of the term ""Big Data"". Its origins are a
bit murky and hence intriguing, involving both academics and industry,
statistics and computer science, ultimately winding back to lunch-table
conversations at Silicon Graphics Inc. (SGI) in the mid 1990s. The Big Data
phenomenon continues unabated, and the ongoing development of statistical
machine learning tools continues to help us confront it.","""Big Data"" and its Origins",2020-08-13 14:44:26,Francis X. Diebold,"http://arxiv.org/abs/2008.05835v6, http://arxiv.org/pdf/2008.05835v6",econ.EM
34439,th,"A firm-worker hypergraph consists of edges in which each edge joins a firm
and its possible employees. We show that a stable matching exists in both
many-to-one matching with transferable utilities and discrete many-to-one
matching when the firm-worker hypergraph has no nontrivial odd-length cycle.
Firms' preferences satisfying this condition arise in a problem of matching
specialized firms with specialists.",Firm-worker hypergraphs,2022-11-13 15:25:54,Chao Huang,"http://arxiv.org/abs/2211.06887v3, http://arxiv.org/pdf/2211.06887v3",econ.TH
30364,em,"I propose novel partial identification bounds on infection prevalence from
information on test rate and test yield. The approach utilizes user-specified
bounds on (i) test accuracy and (ii) the extent to which tests are targeted,
formalized as restriction on the effect of true infection status on the odds
ratio of getting tested and thereby embeddable in logit specifications. The
motivating application is to the COVID-19 pandemic but the strategy may also be
useful elsewhere.
  Evaluated on data from the pandemic's early stage, even the weakest of the
novel bounds are reasonably informative. Notably, and in contrast to
speculations that were widely reported at the time, they place the infection
fatality rate for Italy well above the one of influenza by mid-April.",Bounding Infection Prevalence by Bounding Selectivity and Accuracy of Tests: With Application to Early COVID-19,2020-08-14 06:43:41,Jörg Stoye,"http://arxiv.org/abs/2008.06178v2, http://arxiv.org/pdf/2008.06178v2",econ.EM
30365,em,"The paper studies the problem of auction design in a setting where the
auctioneer accesses the knowledge of the valuation distribution only through
statistical samples. A new framework is established that combines the
statistical decision theory with mechanism design. Two optimality criteria,
maxmin, and equivariance, are studied along with their implications on the form
of auctions. The simplest form of the equivariant auction is the average bid
auction, which sets individual reservation prices proportional to the average of
other bids and historical samples. This form of auction can be motivated by the
Gamma distribution, and it sheds new light on the estimation of the optimal
price, an irregular parameter. Theoretical results show that it is often
possible to use the regular parameter population mean to approximate the
optimal price. An adaptive average bid estimator is developed under this idea,
and it has the same asymptotic properties as the empirical Myerson estimator.
The new proposed estimator has a significantly better performance in terms of
value at risk and expected shortfall when the sample size is small.",Finite-Sample Average Bid Auction,2020-08-24 09:40:09,Haitian Xie,"http://arxiv.org/abs/2008.10217v2, http://arxiv.org/pdf/2008.10217v2",econ.EM
30366,em,"Newton-step approximations to pseudo maximum likelihood estimates of spatial
autoregressive models with a large number of parameters are examined, in the
sense that the parameter space grows slowly as a function of sample size. These
have the same asymptotic efficiency properties as maximum likelihood under
Gaussianity but are of closed form. Hence they are computationally simple and
free from compactness assumptions, thereby avoiding two notorious pitfalls of
implicitly defined estimates of large spatial autoregressions. For an initial
least squares estimate, the Newton step can also lead to weaker regularity
conditions for a central limit theorem than those extant in the literature. A
simulation study demonstrates excellent finite sample gains from Newton
iterations, especially in large multiparameter models for which grid search is
costly. A small empirical illustration shows improvements in estimation
precision with real data.",Efficient closed-form estimation of large spatial autoregressions,2020-08-28 01:32:37,Abhimanyu Gupta,"http://arxiv.org/abs/2008.12395v4, http://arxiv.org/pdf/2008.12395v4",econ.EM
30367,em,"Consumers interact with firms across multiple devices, browsers, and
machines; these interactions are often recorded with different identifiers for
the same consumer. The failure to correctly match different identities leads to
a fragmented view of exposures and behaviors. This paper studies the identity
fragmentation bias, referring to the estimation bias resulting from using
fragmented data. Using a formal framework, we decompose the contributing
factors of the estimation bias caused by data fragmentation and discuss the
direction of bias. Contrary to conventional wisdom, this bias cannot be signed
or bounded under standard assumptions. Instead, upward biases and sign
reversals can occur even in experimental settings. We then compare several
corrective measures, and discuss their respective advantages and caveats.",The Identity Fragmentation Bias,2020-08-29 00:15:37,"Tesary Lin, Sanjog Misra","http://dx.doi.org/10.1287/mksc.2022.1360, http://arxiv.org/abs/2008.12849v2, http://arxiv.org/pdf/2008.12849v2",econ.EM
30368,em,"A multivariate score-driven filter is developed to extract signals from noisy
vector processes. By assuming that the conditional location vector from a
multivariate Student's t distribution changes over time, we construct a robust
filter which is able to overcome several issues that naturally arise when
modeling heavy-tailed phenomena and, more generally, vectors of dependent
non-Gaussian time series. We derive conditions for stationarity and
invertibility and estimate the unknown parameters by maximum likelihood (ML).
Strong consistency and asymptotic normality of the estimator are proved and the
finite sample properties are illustrated by a Monte-Carlo study. From a
computational point of view, analytical formulae are derived, which make it
possible to develop estimation procedures based on the Fisher scoring method. The theory is
supported by a novel empirical illustration that shows how the model can be
effectively applied to estimate consumer prices from home scanner data.",A Robust Score-Driven Filter for Multivariate Time Series,2020-09-03 11:37:46,"Enzo D'Innocenzo, Alessandra Luati, Mario Mazzocchi","http://arxiv.org/abs/2009.01517v3, http://arxiv.org/pdf/2009.01517v3",econ.EM
30369,em,"Much of the recent success of Artificial Intelligence (AI) has been spurred
on by impressive achievements within a broader family of machine learning
methods, commonly referred to as Deep Learning (DL). This paper provides
insights on the diffusion and impact of DL in science. Through a Natural
Language Processing (NLP) approach on the arXiv.org publication corpus, we
delineate the emerging DL technology and identify a list of relevant search
terms. These search terms allow us to retrieve DL-related publications from Web
of Science across all sciences. Based on that sample, we document the DL
diffusion process in the scientific system. We find i) an exponential growth in
the adoption of DL as a research tool across all sciences and all over the
world, ii) regional differentiation in DL application domains, and iii) a
transition from interdisciplinary DL applications to disciplinary research
within application domains. In a second step, we investigate how the adoption
of DL methods affects scientific development. To this end, we empirically assess
how DL adoption relates to re-combinatorial novelty and scientific impact in
the health sciences. We find that DL adoption is negatively correlated with
re-combinatorial novelty, but positively correlated with expectation as well as
variance of citation performance. Our findings suggest that DL does not (yet?)
work as an autopilot to navigate complex knowledge landscapes and overthrow
their structure. However, the 'DL principle' qualifies for its versatility as
the nucleus of a general scientific method that advances science in a
measurable way.",Deep Learning in Science,2020-09-03 13:41:29,"Stefano Bianchini, Moritz Müller, Pierre Pelletier","http://arxiv.org/abs/2009.01575v2, http://arxiv.org/pdf/2009.01575v2",cs.CY
30372,em,"This paper proposes a doubly robust two-stage semiparametric
difference-in-difference estimator for estimating heterogeneous treatment
effects with high-dimensional data. Our new estimator is robust to model
misspecifications and allows for, but does not require, many more regressors
than observations. The first stage allows a general set of machine learning
methods to be used to estimate the propensity score. In the second stage, we
derive the rates of convergence for both the parametric parameter and the
unknown function under a partially linear specification for the outcome
equation. We also provide bias correction procedures to allow for valid
inference for the heterogeneous treatment effects. We evaluate the finite
sample performance with extensive simulation studies. Additionally, a real data
analysis on the effect of Fair Minimum Wage Act on the unemployment rate is
performed as an illustration of our method. An R package for implementing the
proposed method is available on Github.",Doubly Robust Semiparametric Difference-in-Differences Estimators with High-Dimensional Data,2020-09-07 18:14:29,"Yang Ning, Sida Peng, Jing Tao","http://arxiv.org/abs/2009.03151v1, http://arxiv.org/pdf/2009.03151v1",econ.EM
30373,em,"Accuracy of crop price forecasting techniques is important because it enables
the supply chain planners and government bodies to take appropriate actions by
estimating market factors such as demand and supply. In emerging economies such
as India, the crop prices at marketplaces are manually entered every day, which
can be prone to human-induced errors like the entry of incorrect data or entry
of no data for many days. In addition to such human-induced errors, the
fluctuations in the prices themselves make the creation of a stable and robust
forecasting solution a challenging task. Considering such complexities in crop
price forecasting, in this paper, we present techniques to build robust crop
price prediction models considering various features such as (i) historical
price and market arrival quantity of crops, (ii) historical weather data that
influence crop production and transportation, (iii) data quality-related
features obtained by performing statistical analysis. We additionally propose a
framework for context-based model selection and retraining considering factors
such as model stability, data quality metrics, and trend analysis of crop
prices. To show the efficacy of the proposed approach, we show experimental
results on two crops - Tomato and Maize for 14 marketplaces in India and
demonstrate that the proposed approach not only improves accuracy metrics
significantly when compared against the standard forecasting techniques but
also provides robust models.",A Framework for Crop Price Forecasting in Emerging Economies by Analyzing the Quality of Time-series Data,2020-09-09 12:06:07,"Ayush Jain, Smit Marvaniya, Shantanu Godbole, Vitobha Munigala","http://arxiv.org/abs/2009.04171v1, http://arxiv.org/pdf/2009.04171v1",stat.AP
30374,em,"Outliers in discrete choice response data may result from misclassification
and misreporting of the response variable and from choice behaviour that is
inconsistent with modelling assumptions (e.g. random utility maximisation). In
the presence of outliers, standard discrete choice models produce biased
estimates and suffer from compromised predictive accuracy. Robust statistical
models are less sensitive to outliers than standard non-robust models. This
paper analyses two robust alternatives to the multinomial probit (MNP) model.
The two models are robit models whose kernel error distributions are
heavy-tailed t-distributions to moderate the influence of outliers. The first
model is the multinomial robit (MNR) model, in which a generic degrees of
freedom parameter controls the heavy-tailedness of the kernel error
distribution. The second model, the generalised multinomial robit (Gen-MNR)
model, is more flexible than MNR, as it allows for distinct heavy-tailedness in
each dimension of the kernel error distribution. For both models, we derive
Gibbs samplers for posterior inference. In a simulation study, we illustrate
the excellent finite sample properties of the proposed Bayes estimators and
show that MNR and Gen-MNR produce more accurate estimates when the choice data
contain outliers, as viewed through the lens of the non-robust MNP model. In a case study
on transport mode choice behaviour, MNR and Gen-MNR outperform MNP by
substantial margins in terms of in-sample fit and out-of-sample predictive
accuracy. The case study also highlights differences in elasticity estimates
across models.",Robust discrete choice models with t-distributed kernel errors,2020-09-14 15:36:28,"Rico Krueger, Michel Bierlaire, Thomas Gasos, Prateek Bansal","http://dx.doi.org/10.1007/s11222-022-10182-3, http://arxiv.org/abs/2009.06383v3, http://arxiv.org/pdf/2009.06383v3",econ.EM
30375,em,"This paper derives identification, estimation, and inference results using
spatial differencing in sample selection models with unobserved heterogeneity.
We show that under the assumption of smooth changes across space of the
unobserved sub-location specific heterogeneities and inverse Mills ratio, key
parameters of a sample selection model are identified. The smoothness of the
sub-location specific heterogeneities implies a correlation in the outcomes. We
assume that the correlation is restricted within a location or cluster and
derive asymptotic results showing that as the number of independent clusters
increases, the estimators are consistent and asymptotically normal. We also
propose a formula for standard error estimation. A Monte-Carlo experiment
illustrates the small sample properties of our estimator. The application of
our procedure to estimate the determinants of the municipality tax rate in
Finland shows the importance of accounting for unobserved heterogeneity.",Spatial Differencing for Sample Selection Models with Unobserved Heterogeneity,2020-09-14 19:57:48,"Alexander Klein, Guy Tchuente","http://arxiv.org/abs/2009.06570v1, http://arxiv.org/pdf/2009.06570v1",econ.EM
30376,em,"We present a new identification condition for regression discontinuity
designs. We replace the local randomization of Lee (2008) with two restrictions
on its main threat, namely, manipulation of the running variable. Furthermore,
we provide the first auxiliary assumption of McCrary's (2008) diagnostic test
to detect manipulation. Based on our auxiliary assumption, we derive a novel
expression of moments that immediately implies the worst-case bounds of Gerard,
Rokkanen, and Rothe (2020) and an enhanced interpretation of their target
parameters. We highlight two issues: an overlooked source of identification
failure, and a missing auxiliary assumption to detect manipulation. In the case
studies, we illustrate our solution to these issues using institutional details
and economic theories.",Manipulation-Robust Regression Discontinuity Designs,2020-09-16 11:42:03,"Takuya Ishihara, Masayuki Sawada","http://arxiv.org/abs/2009.07551v6, http://arxiv.org/pdf/2009.07551v6",econ.EM
30598,em,"We conduct an incentivized experiment on a nationally representative US
sample (N=708) to test whether people prefer to avoid ambiguity even when it
means choosing dominated options. In contrast to the literature, we find that
55\% of subjects prefer a risky act to an ambiguous act that always provides a
larger probability of winning. Our experimental design shows that such a
preference is not mainly due to a lack of understanding. We conclude that
subjects avoid ambiguity \textit{per se} rather than avoiding ambiguity because
it may yield a worse outcome. Such behavior cannot be reconciled with existing
models of ambiguity aversion in a straightforward manner.",A Two-Ball Ellsberg Paradox: An Experiment,2022-06-09 19:50:38,"Brian Jabarian, Simon Lazarus","http://arxiv.org/abs/2206.04605v6, http://arxiv.org/pdf/2206.04605v6",econ.TH
30377,em,"Proper scoring rules are used to assess the out-of-sample accuracy of
probabilistic forecasts, with different scoring rules rewarding distinct
aspects of forecast performance. Herein, we re-investigate the practice of
using proper scoring rules to produce probabilistic forecasts that are
`optimal' according to a given score, and assess when their out-of-sample
accuracy is superior to alternative forecasts, according to that score.
Particular attention is paid to relative predictive performance under
misspecification of the predictive model. Using numerical illustrations, we
document several novel findings within this paradigm that highlight the
important interplay between the true data generating process, the assumed
predictive model and the scoring rule. Notably, we show that only when a
predictive model is sufficiently compatible with the true process to allow a
particular score criterion to reward what it is designed to reward, will this
approach to forecasting reap benefits. Subject to this compatibility however,
the superiority of the optimal forecast will be greater, the greater is the
degree of misspecification. We explore these issues under a range of different
scenarios, and using both artificially simulated and empirical data.",Optimal probabilistic forecasts: When do they work?,2020-09-21 06:07:12,"Gael M. Martin, Rubén Loaiza-Maya, David T. Frazier, Worapree Maneesoonthorn, Andrés Ramírez Hassan","http://arxiv.org/abs/2009.09592v1, http://arxiv.org/pdf/2009.09592v1",econ.EM
30378,em,"This paper makes a selective survey on the recent development of the factor
model and its application on statistical learnings. We focus on the perspective
of the low-rank structure of factor models, and particularly draws attentions
to estimating the model from the low-rank recovery point of view. The survey
mainly consists of three parts: the first part is a review on new factor
estimations based on modern techniques on recovering low-rank structures of
high-dimensional models. The second part discusses statistical inferences of
several factor-augmented models and applications in econometric learning
models. The final part summarizes new developments dealing with unbalanced
panels from the matrix completion perspective.",Recent Developments on Factor Models and its Applications in Econometric Learning,2020-09-21 21:02:20,"Jianqing Fan, Kunpeng Li, Yuan Liao","http://dx.doi.org/10.1146/annurev-financial-091420-011735, http://arxiv.org/abs/2009.10103v1, http://arxiv.org/pdf/2009.10103v1",econ.EM
30379,em,"We study a semi-/nonparametric regression model with a general form of
nonclassical measurement error in the outcome variable. We show equivalence of
this model to a generalized regression model. Our main identifying assumptions
are a special regressor type restriction and monotonicity in the nonlinear
relationship between the observed and unobserved true outcome. Nonparametric
identification is then obtained under a normalization of the unknown link
function, which is a natural extension of the classical measurement error case.
We propose a novel sieve rank estimator for the regression function and
establish its rate of convergence.
  In Monte Carlo simulations, we find that our estimator corrects for biases
induced by nonclassical measurement error and provides numerically stable
results. We apply our method to analyze belief formation of stock market
expectations with survey data from the German Socio-Economic Panel (SOEP) and
find evidence for nonclassical measurement error in subjective belief data.",Nonclassical Measurement Error in the Outcome Variable,2020-09-26 21:44:18,"Christoph Breunig, Stephan Martin","http://arxiv.org/abs/2009.12665v2, http://arxiv.org/pdf/2009.12665v2",econ.EM
30380,em,"Multivariate dynamic time series models are widely encountered in practical
studies, e.g., modelling policy transmission mechanism and measuring
connectedness between economic agents. To better capture the dynamics, this
paper proposes a wide class of multivariate dynamic models with time-varying
coefficients, which have a general time-varying vector moving average (VMA)
representation, and nest, for instance, time-varying vector autoregression
(VAR), time-varying vector autoregression moving-average (VARMA), and so forth
as special cases. The paper then develops a unified estimation method for the
unknown quantities before an asymptotic theory for the proposed estimators is
established. In the empirical study, we investigate the transmission mechanism
of monetary policy using U.S. data, and uncover a fall in the volatilities of
exogenous shocks. In addition, we find that (i) monetary policy shocks have
less influence on inflation before and during the so-called Great Moderation,
(ii) inflation is more anchored recently, and (iii) the long-run level of
inflation is below, but quite close to the Federal Reserve's target of two
percent after the beginning of the Great Moderation period.",A Class of Time-Varying Vector Moving Average Models: Nonparametric Kernel Estimation and Application,2020-10-04 09:52:54,"Yayi Yan, Jiti Gao, Bin Peng","http://arxiv.org/abs/2010.01492v1, http://arxiv.org/pdf/2010.01492v1",econ.EM
30381,em,"We propose a recursive logit model which captures the notion of choice
aversion by imposing a penalty term that accounts for the dimension of the
choice set at each node of the transportation network. We make three
contributions. First, we show that our model overcomes the correlation problem
between routes, a common pitfall of traditional logit models, and that the
choice aversion model can be seen as an alternative to these models. Second, we
show how our model can generate violations of regularity in the path choice
probabilities. In particular, we show that removing edges in the network may
decrease the probability for existing paths. Finally, we show that under the
presence of choice aversion, adding edges to the network can make users worse
off. In other words, a type of Braess's paradox can emerge outside of
congestion and can be characterized in terms of a parameter that measures
users' degree of choice aversion. We validate these contributions by estimating
this parameter over GPS traffic data captured on a real-world transportation
network.",A Recursive Logit Model with Choice Aversion and Its Application to Transportation Networks,2020-10-06 03:00:14,"Austin Knies, Jorge Lorca, Emerson Melo","http://arxiv.org/abs/2010.02398v4, http://arxiv.org/pdf/2010.02398v4",econ.EM
30382,em,"I introduce a generic method for inference about a scalar parameter in
research designs with a finite number of heterogeneous clusters where only a
single cluster received treatment. This situation is commonplace in
difference-in-differences estimation but the test developed here applies more
generally. I show that the test controls size and has power under asymptotics
where the number of observations within each cluster is large but the number of
clusters is fixed. The test combines weighted, approximately Gaussian parameter
estimates with a rearrangement procedure to obtain its critical values. The
weights needed for most empirically relevant situations are tabulated in the
paper. Calculation of the critical values is computationally simple and does
not require simulation or resampling. The rearrangement test is highly robust
to situations where some clusters are much more variable than others. Examples
and an empirical application are provided.",Inference with a single treated cluster,2020-10-08 18:59:47,Andreas Hagemann,"http://arxiv.org/abs/2010.04076v1, http://arxiv.org/pdf/2010.04076v1",econ.EM
30383,em,"This paper assesses when the validity of difference-in-differences depends on
functional form. We provide a novel characterization: the parallel trends
assumption holds under all strictly monotonic transformations of the outcome if
and only if a stronger ``parallel trends''-type condition holds for the
cumulative distribution function of untreated potential outcomes. This
condition for parallel trends to be insensitive to functional form is satisfied
if and essentially only if the population can be partitioned into a subgroup
for which treatment is effectively randomly assigned and a remaining subgroup
for which the distribution of untreated potential outcomes is stable over time.
These conditions have testable implications, and we introduce falsification
tests for the null that parallel trends is insensitive to functional form.",When Is Parallel Trends Sensitive to Functional Form?,2020-10-10 00:25:43,"Jonathan Roth, Pedro H. C. Sant'Anna","http://arxiv.org/abs/2010.04814v5, http://arxiv.org/pdf/2010.04814v5",econ.EM
30384,em,"Randomized controlled trials generate experimental variation that can
credibly identify causal effects, but often suffer from limited scale, while
observational datasets are large, but often violate desired identification
assumptions. To improve estimation efficiency, I propose a method that
leverages imperfect instruments - pretreatment covariates that satisfy the
relevance condition but may violate the exclusion restriction. I show that
these imperfect instruments can be used to derive moment restrictions that, in
combination with the experimental data, improve estimation efficiency. I
outline estimators for implementing this strategy, and show that my methods can
reduce variance by up to 50%; therefore, only half of the experimental sample
is required to attain the same statistical precision. I apply my method to a
search listing dataset from Expedia that studies the causal effect of search
rankings on clicks, and show that the method can substantially improve the
precision.",Combining Observational and Experimental Data to Improve Efficiency Using Imperfect Instruments,2020-10-11 02:29:27,George Z. Gui,"http://dx.doi.org/10.1287/mksc.2020.0435, http://arxiv.org/abs/2010.05117v5, http://arxiv.org/pdf/2010.05117v5",econ.EM
30385,em,"This note discusses two recent studies on identification of individualized
treatment rules using instrumental variables---Cui and Tchetgen Tchetgen (2020)
and Qiu et al. (2020). It also proposes identifying assumptions that are
alternative to those used in both studies.",Comment: Individualized Treatment Rules Under Endogeneity,2020-10-15 13:46:22,"Sukjin, Han","http://arxiv.org/abs/2010.07656v1, http://arxiv.org/pdf/2010.07656v1",econ.EM
30386,em,"Researchers often treat data-driven and theory-driven models as two disparate
or even conflicting methods in travel behavior analysis. However, the two
methods are highly complementary because data-driven methods are more
predictive but less interpretable and robust, while theory-driven methods are
more interpretable and robust but less predictive. Using their complementary
nature, this study designs a theory-based residual neural network (TB-ResNet)
framework, which synergizes discrete choice models (DCMs) and deep neural
networks (DNNs) based on their shared utility interpretation. The TB-ResNet
framework is simple, as it uses a ($\delta$, 1-$\delta$) weighting to take
advantage of DCMs' simplicity and DNNs' richness, and to prevent underfitting
from the DCMs and overfitting from the DNNs. This framework is also flexible:
three instances of TB-ResNets are designed based on multinomial logit model
(MNL-ResNets), prospect theory (PT-ResNets), and hyperbolic discounting
(HD-ResNets), which are tested on three data sets. Compared to pure DCMs, the
TB-ResNets provide greater prediction accuracy and reveal a richer set of
behavioral mechanisms owing to the utility function augmented by the DNN
component in the TB-ResNets. Compared to pure DNNs, the TB-ResNets can modestly
improve prediction and significantly improve interpretation and robustness,
because the DCM component in the TB-ResNets stabilizes the utility functions
and input gradients. Overall, this study demonstrates that it is both feasible
and desirable to synergize DCMs and DNNs by combining their utility
specifications under a TB-ResNet framework. Although some limitations remain,
this TB-ResNet framework is an important first step to create mutual benefits
between DCMs and DNNs for travel behavior modeling, with joint improvement in
prediction, interpretation, and robustness.",Theory-based residual neural networks: A synergy of discrete choice models and deep neural networks,2020-10-22 15:31:01,"Shenhao Wang, Baichuan Mo, Jinhua Zhao","http://arxiv.org/abs/2010.11644v1, http://arxiv.org/pdf/2010.11644v1",cs.LG
30387,em,"Gross domestic product (GDP) is an important economic indicator that
aggregates useful information to assist economic agents and policymakers in
their decision-making process. In this context, GDP forecasting becomes a
powerful decision optimization tool in several areas. In order to contribute in
this direction, we investigated the efficiency of classical time series models,
the state-space models, and the neural network models, applied to Brazilian
gross domestic product. The models used were: a Seasonal Autoregressive
Integrated Moving Average (SARIMA) and a Holt-Winters method, which are
classical time series models; the dynamic linear model, a state-space model;
and neural network autoregression and the multilayer perceptron, artificial
neural network models. Based on statistical metrics of model comparison, the
multilayer perceptron presented the best in-sample and out-of-sample forecasting
performance for the analyzed period, while also incorporating the growth rate
structure.",A Systematic Comparison of Forecasting for Gross Domestic Product in an Emergent Economy,2020-10-26 03:25:52,"Kleyton da Costa, Felipe Leite Coelho da Silva, Josiane da Silva Cordeiro Coelho, André de Melo Modenesi","http://arxiv.org/abs/2010.13259v2, http://arxiv.org/pdf/2010.13259v2",econ.EM
30388,em,"Companies survey their customers to measure their satisfaction levels with
the company and its services. The received responses are crucial as they allow
companies to assess their respective performances and find ways to make needed
improvements. This study focuses on the non-systematic bias that arises when
customers assign numerical values in ordinal surveys. Using real customer
satisfaction survey data of a large retail bank, we show that the common
practice of segmenting ordinal survey responses into uneven segments limits the
value that can be extracted from the data. We then show that it is possible to
assess the magnitude of the irreducible error under simple assumptions, even in
real surveys, and place the achievable modeling goal in perspective. We finish
the study by suggesting that a thoughtful survey design, which uses either a
careful binning strategy or proper calibration, can reduce the compounding
non-systematic error even in elaborate ordinal surveys. A possible application
of the calibration method we propose is efficiently conducting targeted surveys
using active learning.",What can be learned from satisfaction assessments?,2020-10-26 08:00:56,"Naftali Cohen, Simran Lamba, Prashant Reddy","http://arxiv.org/abs/2010.13340v1, http://arxiv.org/pdf/2010.13340v1",econ.EM
30418,em,"Stips, Macias, Coughlan, Garcia-Gorriz, and Liang (2016, Nature Scientific
Reports) use information flows (Liang, 2008, 2014) to establish causality from
various forcings to global temperature. We show that the formulas being used
hinge on a simplifying assumption that is nearly always rejected by the data.
We propose an adequate measure of information flow based on Vector
Autoregressions, and find that most results in Stips et al. (2016) cannot be
corroborated. Then, it is discussed which modeling choices (e.g., the choice of
CO2 series and assumptions about simultaneous relationships) may help in
extracting credible estimates of causal flows and the transient climate
response simply by looking at the joint dynamics of two climatic time series.","On Spurious Causality, CO2, and Global Temperature",2021-03-19 05:58:06,"Philippe Goulet Coulombe, Maximilian Göbel","http://arxiv.org/abs/2103.10605v1, http://arxiv.org/pdf/2103.10605v1",stat.AP
30389,em,"Recurrent boom-and-bust cycles are a salient feature of economic and
financial history. Cycles found in the data are stochastic, often highly
persistent, and span substantial fractions of the sample size. We refer to such
cycles as ""long"". In this paper, we develop a novel approach to modeling
cyclical behavior specifically designed to capture long cycles. We show that
existing inferential procedures may produce misleading results in the presence
of long cycles, and propose a new econometric procedure for the inference on
the cycle length. Our procedure is asymptotically valid regardless of the cycle
length. We apply our methodology to a set of macroeconomic and financial
variables for the U.S. We find evidence of long stochastic cycles in the
standard business cycle variables, as well as in credit and house prices.
However, we rule out the presence of stochastic cycles in asset market data.
Moreover, according to our result, financial cycles as characterized by credit
and house prices tend to be twice as long as business cycles.",Modeling Long Cycles,2020-10-26 23:09:36,"Natasha Kang, Vadim Marmer","http://arxiv.org/abs/2010.13877v4, http://arxiv.org/pdf/2010.13877v4",econ.EM
30390,em,"We propose a model of labor market sector self-selection that combines
comparative advantage, as in the Roy model, and sector composition preference.
Two groups choose between two sectors based on heterogeneous potential incomes
and group compositions in each sector. Potential incomes incorporate group
specific human capital accumulation and wage discrimination. Composition
preferences are interpreted as reflecting group specific amenity preferences as
well as homophily and aversion to minority status. We show that occupational
segregation is amplified by the composition preferences and we highlight a
resulting tension between redistribution and diversity. The model also exhibits
tipping from extreme compositions to more balanced ones. Tipping occurs when a
small nudge, associated with affirmative action, pushes the system to a very
different equilibrium, and when the set of equilibria changes abruptly when a
parameter governing the relative importance of pecuniary and composition
preferences crosses a threshold.",Occupational segregation in a Roy model with composition preferences,2020-12-08 18:24:47,"Haoning Chen, Miaomiao Dong, Marc Henry, Ivan Sidorov","http://arxiv.org/abs/2012.04485v2, http://arxiv.org/pdf/2012.04485v2",econ.TH
30391,em,"For a panel model considered by Abadie et al. (2010), the counterfactual
outcomes constructed by Abadie et al., Hsiao et al. (2012), and Doudchenko and
Imbens (2017) may all be confounded by uncontrolled heterogeneous trends. Based
on exact matching on the trend predictors, I propose new methods of estimating
the model-specific treatment effects, which are free from heterogeneous trends.
When applied to Abadie et al.'s (2010) model and data, the new estimators
suggest considerably smaller effects of California's tobacco control program.",Exact Trend Control in Estimating Treatment Effects Using Panel Data with Heterogenous Trends,2020-12-16 17:33:53,Chirok Han,"http://arxiv.org/abs/2012.08988v1, http://arxiv.org/pdf/2012.08988v1",econ.EM
30392,em,"There is an innate human tendency, one might call it the ""league table
mentality,"" to construct rankings. Schools, hospitals, sports teams, movies,
and myriad other objects are ranked even though their inherent
multi-dimensionality would suggest that -- at best -- only partial orderings
were possible. We consider a large class of elementary ranking problems in
which we observe noisy, scalar measurements of merit for $n$ objects of
potentially heterogeneous precision and are asked to select a group of the
objects that are ""most meritorious."" The problem is naturally formulated in the
compound decision framework of Robbins's (1956) empirical Bayes theory, but it
also exhibits close connections to the recent literature on multiple testing.
The nonparametric maximum likelihood estimator for mixture models (Kiefer and
Wolfowitz (1956)) is employed to construct optimal ranking and selection rules.
Performance of the rules is evaluated in simulations and an application to
ranking U.S. kidney dialysis centers.",Invidious Comparisons: Ranking and Selection as Compound Decisions,2020-12-23 12:21:38,"Jiaying Gu, Roger Koenker","http://arxiv.org/abs/2012.12550v3, http://arxiv.org/pdf/2012.12550v3",econ.EM
30393,em,"Count time series obtained from online social media data, such as Twitter,
have drawn increasing interest among academics and market analysts over the
past decade. Transforming Web activity records into counts yields time series
with peculiar features, including the coexistence of smooth paths and sudden
jumps, as well as cross-sectional and temporal dependence. Using Twitter posts
about country risks for the United Kingdom and the United States, this paper
proposes an innovative state space model for multivariate count data with
jumps. We use the proposed model to assess the impact of public concerns in
these countries on market systems. To do so, public concerns inferred from
Twitter data are unpacked into country-specific persistent terms, risk social
amplification events, and co-movements of the country series. The identified
components are then used to investigate the existence and magnitude of
country-risk spillovers and social amplification effects on the volatility of
financial markets.",Filtering the intensity of public concern from social media count data with jumps,2020-12-24 17:18:04,"Matteo Iacopini, Carlo R. M. A. Santagiustina","http://arxiv.org/abs/2012.13267v1, http://arxiv.org/pdf/2012.13267v1",stat.AP
30394,em,"The paper aims at developing the Bayesian seasonally cointegrated model for
quarterly data. We propose the prior structure, derive the set of full
conditional posterior distributions, and propose the sampling scheme. The
identification of cointegrating spaces is obtained \emph{via} orthonormality
restrictions imposed on vectors spanning them. In the case of annual frequency,
the cointegrating vectors are complex, which should be taken into account when
identifying them. The point estimation of the cointegrating spaces is also
discussed. The presented methods are illustrated by a simulation experiment and
are employed in the analysis of money and prices in the Polish economy.",Bayesian analysis of seasonally cointegrated VAR model,2020-12-29 18:57:54,Justyna Wróblewska,"http://dx.doi.org/10.1016/j.ecosta.2023.02.002, http://arxiv.org/abs/2012.14820v2, http://arxiv.org/pdf/2012.14820v2",econ.EM
30395,em,"We consider inference on a scalar regression coefficient under a constraint
on the magnitude of the control coefficients. A class of estimators based on a
regularized propensity score regression is shown to exactly solve a tradeoff
between worst-case bias and variance. We derive confidence intervals (CIs)
based on these estimators that are bias-aware: they account for the possible
bias of the estimator. Under homoskedastic Gaussian errors, these estimators
and CIs are near-optimal in finite samples for MSE and CI length. We also
provide conditions for asymptotic validity of the CI with unknown and possibly
heteroskedastic error distribution, and derive novel optimal rates of
convergence under high-dimensional asymptotics that allow the number of
regressors to increase more quickly than the number of observations. Extensive
simulations and an empirical application illustrate the performance of our
methods.",Bias-Aware Inference in Regularized Regression Models,2020-12-29 19:06:43,"Timothy B. Armstrong, Michal Kolesár, Soonwoo Kwon","http://arxiv.org/abs/2012.14823v2, http://arxiv.org/pdf/2012.14823v2",econ.EM
30396,em,"We propose a sensitivity analysis for Synthetic Control (SC) treatment effect
estimates to interrogate the assumption that the SC method is well-specified,
namely that choosing weights to minimize pre-treatment prediction error yields
accurate predictions of counterfactual post-treatment outcomes. Our data-driven
procedure recovers the set of treatment effects consistent with the assumption
that the misspecification error incurred by the SC method is at most the
observable misspecification error incurred when using the SC estimator to
predict the outcomes of some control unit. We show that under one definition of
misspecification error, our procedure provides a simple, geometric motivation
for comparing the estimated treatment effect to the distribution of placebo
residuals to assess estimate credibility. When we apply our procedure to
several canonical studies that report SC estimates, we broadly confirm the
conclusions drawn by the source papers.",Assessing the Sensitivity of Synthetic Control Treatment Effect Estimates to Misspecification Error,2020-12-31 02:35:30,"Billy Ferguson, Brad Ross","http://arxiv.org/abs/2012.15367v3, http://arxiv.org/pdf/2012.15367v3",econ.EM
30397,em,"This paper provides a set of methods for quantifying the robustness of
treatment effects estimated using the unconfoundedness assumption (also known
as selection on observables or conditional independence). Specifically, we
estimate and do inference on bounds on various treatment effect parameters,
like the average treatment effect (ATE) and the average effect of treatment on
the treated (ATT), under nonparametric relaxations of the unconfoundedness
assumption indexed by a scalar sensitivity parameter c. These relaxations allow
for limited selection on unobservables, depending on the value of c. For large
enough c, these bounds equal the no assumptions bounds. Using a non-standard
bootstrap method, we show how to construct confidence bands for these bound
functions which are uniform over all values of c. We illustrate these methods
with an empirical application to effects of the National Supported Work
Demonstration program. We implement these methods in a companion Stata module
for easy use in practice.",Assessing Sensitivity to Unconfoundedness: Estimation and Inference,2020-12-31 20:14:38,"Matthew A. Masten, Alexandre Poirier, Linqi Zhang","http://arxiv.org/abs/2012.15716v1, http://arxiv.org/pdf/2012.15716v1",econ.EM
30398,em,"Network models represent a useful tool to describe the complex set of
financial relationships among heterogeneous firms in the system. In this paper,
we propose a new semiparametric model for temporal multilayer causal networks
with both intra- and inter-layer connectivity. A Bayesian model with a
hierarchical mixture prior distribution is assumed to capture heterogeneity in
the response of the network edges to a set of risk factors including the
European COVID-19 cases. We measure the financial connectedness arising from
the interactions between two layers defined by stock returns and volatilities.
In the empirical analysis, we study the topology of the network before and
after the spreading of the COVID-19 disease.",COVID-19 spreading in financial networks: A semiparametric matrix regression model,2021-01-02 14:06:43,"Billio Monica, Casarin Roberto, Costola Michele, Iacopini Matteo","http://arxiv.org/abs/2101.00422v1, http://arxiv.org/pdf/2101.00422v1",econ.EM
30399,em,"Numerous empirical studies employ regression discontinuity designs with
multiple cutoffs and heterogeneous treatments. A common practice is to
normalize all the cutoffs to zero and estimate one effect. This procedure
identifies the average treatment effect (ATE) on the observed distribution of
individuals local to existing cutoffs. However, researchers often want to make
inferences on more meaningful ATEs, computed over general counterfactual
distributions of individuals, rather than simply the observed distribution of
individuals local to existing cutoffs. This paper proposes a consistent and
asymptotically normal estimator for such ATEs when heterogeneity follows a
non-parametric function of cutoff characteristics in the sharp case. The
proposed estimator converges at the minimax optimal rate of root-n for a
specific choice of tuning parameters. Identification in the fuzzy case, with
multiple cutoffs, is impossible unless heterogeneity follows a
finite-dimensional function of cutoff characteristics. Under parametric
heterogeneity, this paper proposes an ATE estimator for the fuzzy case that
optimally combines observations to maximize its precision.",Regression Discontinuity Design with Many Thresholds,2021-01-05 00:46:35,Marinho Bertanha,"http://arxiv.org/abs/2101.01245v1, http://arxiv.org/pdf/2101.01245v1",econ.EM
30400,em,"Recent advances in the literature have demonstrated that standard supervised
learning algorithms are ill-suited for problems with endogenous explanatory
variables. To correct for the endogeneity bias, many variants of nonparametric
instrumental variable regression methods have been developed. In this paper, we
propose an alternative algorithm called boostIV that builds on the traditional
gradient boosting algorithm and corrects for the endogeneity bias. The
algorithm is very intuitive and resembles an iterative version of the standard
2SLS estimator. Moreover, our approach is data-driven, meaning that the
researcher does not have to take a stance on either the form of the target
function approximation or the choice of instruments. We demonstrate that our
estimator is consistent under mild conditions. We carry out extensive Monte
Carlo simulations to demonstrate the finite sample performance of our algorithm
compared to other recently developed methods. We show that boostIV is at worst
on par with the existing methods and on average significantly outperforms them.",Causal Gradient Boosting: Boosted Instrumental Variable Regression,2021-01-15 14:54:25,"Edvard Bakhitov, Amandeep Singh","http://arxiv.org/abs/2101.06078v1, http://arxiv.org/pdf/2101.06078v1",econ.EM
30401,em,"The literature on using yield curves to forecast recessions customarily uses
10-year--three-month Treasury yield spread without verification on the pair
selection. This study investigates whether the predictive ability of spread can
be improved by letting a machine learning algorithm identify the best maturity
pair and coefficients. Our comprehensive analysis shows that, despite the
likelihood gain, the machine learning approach does not significantly improve
prediction, owing to the estimation error. This is robust to the forecasting
horizon, control variable, sample period, and oversampling of the recession
observations. Our finding supports the use of the 10-year--three-month spread.",Yield Spread Selection in Predicting Recession Probabilities: A Machine Learning Approach,2021-01-23 04:26:54,"Jaehyuk Choi, Desheng Ge, Kyu Ho Kang, Sungbin Sohn","http://dx.doi.org/10.1002/for.2980, http://arxiv.org/abs/2101.09394v2, http://arxiv.org/pdf/2101.09394v2",econ.EM
30605,em,"Central banks manage about \$12 trillion in foreign exchange reserves,
influencing global exchange rates and asset prices. However, some of the
largest holders of reserves report minimal information about their currency
composition, hindering empirical analysis. I describe a Hidden Markov Model to
estimate the composition of a central bank's reserves by relating the
fluctuation in the portfolio's valuation to the exchange rates of major reserve
currencies. I apply the model to China and Singapore, two countries that
collectively hold about \$3.4 trillion in reserves and conceal their
composition. I find that China's reserve composition likely resembles the
global average, while Singapore probably holds fewer US dollars.",Estimating the Currency Composition of Foreign Exchange Reserves,2022-06-28 07:34:00,Matthew Ferranti,"http://arxiv.org/abs/2206.13751v4, http://arxiv.org/pdf/2206.13751v4",q-fin.ST
30402,em,"Since their introduction in Abadie and Gardeazabal (2003), Synthetic Control
(SC) methods have quickly become one of the leading methods for estimating
causal effects in observational studies in settings with panel data. Formal
discussions often motivate SC methods by the assumption that the potential
outcomes were generated by a factor model. Here we study SC methods from a
design-based perspective, assuming a model for the selection of the treated
unit(s) and period(s). We show that the standard SC estimator is generally
biased under random assignment. We propose a Modified Unbiased Synthetic
Control (MUSC) estimator that guarantees unbiasedness under random assignment
and derive its exact, randomization-based, finite-sample variance. We also
propose an unbiased estimator for this variance. We document in settings with
real data that under random assignment, SC-type estimators can have root
mean-squared errors that are substantially lower than those of other common
estimators. We show that such an improvement is weakly guaranteed if the
treated period is similar to the other periods, for example, if the treated
period was randomly selected. While our results only directly apply in settings
where treatment is assigned randomly, we believe that they can complement
model-based approaches even for observational studies.",A Design-Based Perspective on Synthetic Control Methods,2021-01-23 04:57:45,"Lea Bottmer, Guido Imbens, Jann Spiess, Merrill Warnick","http://arxiv.org/abs/2101.09398v4, http://arxiv.org/pdf/2101.09398v4",econ.EM
30403,em,"We propose a reduced-form benchmark predictive model (BPM) for fixed-target
forecasting of Arctic sea ice extent, and we provide a case study of its
real-time performance for target date September 2020. We visually detail the
evolution of the statistically-optimal point, interval, and density forecasts
as time passes, new information arrives, and the end of September approaches.
Comparison to the BPM may prove useful for evaluating and selecting among
various more sophisticated dynamical sea ice models, which are widely used to
quantify the likely future evolution of Arctic conditions and their two-way
interaction with economic activity.",A Benchmark Model for Fixed-Target Arctic Sea Ice Forecasting,2021-01-25 22:11:24,"Francis X. Diebold, Maximilian Gobel","http://arxiv.org/abs/2101.10359v3, http://arxiv.org/pdf/2101.10359v3",econ.EM
30404,em,"In this paper, we present a new approach based on dynamic factor models
(DFMs) to perform nowcasts for the percentage annual variation of the Mexican
Global Economic Activity Indicator (IGAE in Spanish). The procedure consists of
the following steps: i) build a timely and correlated database by using
economic and financial time series and real-time variables such as social
mobility and significant topics extracted by Google Trends; ii) estimate the
common factors using the two-step methodology of Doz et al. (2011); iii) use
the common factors in univariate time-series models for test data; and iv)
according to the best results obtained in the previous step, combine the
statistically equivalent best nowcasts (Diebold-Mariano test) to generate the
current nowcasts. We obtain timely and accurate nowcasts for the IGAE,
including those for the current phase of drastic drops in the economy related
to COVID-19 sanitary measures. Additionally, the approach allows us to
disentangle the key variables in the DFM by estimating the confidence interval
for both the factor loadings and the factor estimates. This approach can be
used in official statistics to obtain preliminary estimates for IGAE up to 50
days before the official results.",A nowcasting approach to generate timely estimates of Mexican economic activity: An application to the period of COVID-19,2021-01-25 23:10:34,"Francisco Corona, Graciela González-Farías, Jesús López-Pérez","http://arxiv.org/abs/2101.10383v1, http://arxiv.org/pdf/2101.10383v1",stat.AP
30405,em,"Methods for linking individuals across historical data sets, typically in
combination with AI based transcription models, are developing rapidly.
Probably the single most important identifier for linking is personal names.
However, personal names are prone to enumeration and transcription errors and
although modern linking methods are designed to handle such challenges, these
sources of errors are critical and should be minimized. For this purpose,
improved transcription methods and large-scale databases are crucial
components. This paper describes and provides documentation for HANA, a newly
constructed large-scale database which consists of more than 3.3 million names.
The database contains more than 105 thousand unique names with a total of more
than 1.1 million images of personal names, which proves useful for transfer
learning to other settings. We provide three examples hereof, obtaining
significantly improved transcription accuracy on both Danish and US census
data. In addition, we present benchmark results for deep learning models
automatically transcribing the personal names from the scanned documents.
Through making more challenging large-scale databases publicly available we
hope to foster more sophisticated, accurate, and robust models for handwritten
text recognition.",HANA: A HAndwritten NAme Database for Offline Handwritten Text Recognition,2021-01-22 19:23:01,"Christian M. Dahl, Torben Johansen, Emil N. Sørensen, Simon Wittrock","http://arxiv.org/abs/2101.10862v2, http://arxiv.org/pdf/2101.10862v2",cs.CV
30406,em,"In this paper we propose the adaptive lasso for predictive quantile
regression (ALQR). Reflecting empirical findings, we allow predictors to have
various degrees of persistence and exhibit different signal strengths. The
number of predictors is allowed to grow with the sample size. We study
regularity conditions under which stationary, local unit root, and cointegrated
predictors are present simultaneously. We next show the convergence rates,
model selection consistency, and asymptotic distributions of ALQR. We apply the
proposed method to the out-of-sample quantile prediction problem of stock
returns and find that it outperforms the existing alternatives. We also provide
numerical evidence from additional Monte Carlo experiments, supporting the
theoretical results.",Predictive Quantile Regression with Mixed Roots and Increasing Dimensions: The ALQR Approach,2021-01-27 20:50:42,"Rui Fan, Ji Hyung Lee, Youngki Shin","http://arxiv.org/abs/2101.11568v4, http://arxiv.org/pdf/2101.11568v4",econ.EM
30417,em,"We propose a contemporaneous bilinear transformation for a $p\times q$ matrix
time series to alleviate the difficulties in modeling and forecasting matrix
time series when $p$ and/or $q$ are large. The resulting transformed matrix
assumes a block structure consisting of several small matrices, and those small
matrix series are uncorrelated across all times. Hence an overall parsimonious
model is achieved by modelling each of those small matrix series separately
without the loss of information on the linear dynamics. Such a parsimonious
model often has better forecasting performance, even when the underlying true
dynamics deviates from the assumed uncorrelated block structure after
transformation. The uniform convergence rates of the estimated transformation
are derived, which vindicate an important virtue of the proposed bilinear
transformation, i.e. it is technically equivalent to the decorrelation of a
vector time series of dimension max$(p,q)$ instead of $p\times q$. The proposed
method is illustrated numerically via both simulated and real data examples.",Simultaneous Decorrelation of Matrix Time Series,2021-03-17 05:54:51,"Yuefeng Han, Rong Chen, Cun-Hui Zhang, Qiwei Yao","http://arxiv.org/abs/2103.09411v2, http://arxiv.org/pdf/2103.09411v2",stat.ME
30407,em,"Since its inception, the choice modelling field has been dominated by
theory-driven modelling approaches. Machine learning offers an alternative
data-driven approach for modelling choice behaviour and is increasingly drawing
interest in our field. Cross-pollination of machine learning models, techniques
and practices could help overcome problems and limitations encountered in the
current theory-driven modelling paradigm, such as subjective labour-intensive
search processes for model selection, and the inability to work with text and
image data. However, despite the potential benefits of using the advances of
machine learning to improve choice modelling practices, the choice modelling
field has been hesitant to embrace machine learning. This discussion paper aims
to consolidate knowledge on the use of machine learning models, techniques and
practices for choice modelling, and discuss their potential. Thereby, we hope
not only to make the case that further integration of machine learning in
choice modelling is beneficial, but also to further facilitate it. To this end,
we clarify the similarities and differences between the two modelling
paradigms; we review the use of machine learning for choice modelling; and we
explore areas of opportunities for embracing machine learning models and
techniques to improve our practices. To conclude this discussion paper, we put
forward a set of research questions which must be addressed to better
understand if and how machine learning can benefit choice modelling.",Choice modelling in the age of machine learning -- discussion paper,2021-01-28 14:57:08,"S. Van Cranenburgh, S. Wang, A. Vij, F. Pereira, J. Walker","http://dx.doi.org/10.1016/j.jocm.2021.100340, http://arxiv.org/abs/2101.11948v2, http://arxiv.org/pdf/2101.11948v2",econ.EM
30408,em,"We present a Gaussian Process - Latent Class Choice Model (GP-LCCM) to
integrate a non-parametric class of probabilistic machine learning within
discrete choice models (DCMs). Gaussian Processes (GPs) are kernel-based
algorithms that incorporate expert knowledge by assuming priors over latent
functions rather than priors over parameters, which makes them more flexible in
addressing nonlinear problems. By integrating a Gaussian Process within a LCCM
structure, we aim at improving discrete representations of unobserved
heterogeneity. The proposed model would assign individuals probabilistically to
behaviorally homogeneous clusters (latent classes) using GPs and simultaneously
estimate class-specific choice models by relying on random utility models.
Furthermore, we derive and implement an Expectation-Maximization (EM) algorithm
to jointly estimate/infer the hyperparameters of the GP kernel function and the
class-specific choice parameters by relying on a Laplace approximation and
gradient-based numerical optimization methods, respectively. The model is
tested on two different mode choice applications and compared against different
LCCM benchmarks. Results show that GP-LCCM allows for a more complex and
flexible representation of heterogeneity and improves both in-sample fit and
out-of-sample predictive power. Moreover, behavioral and economic
interpretability is maintained at the class-specific choice model level while
local interpretation of the latent classes can still be achieved, although the
non-parametric characteristic of GPs lessens the transparency of the model.",Gaussian Process Latent Class Choice Models,2021-01-28 22:56:42,"Georges Sfeir, Filipe Rodrigues, Maya Abou-Zeid","http://dx.doi.org/10.1016/j.trc.2022.103552, http://arxiv.org/abs/2101.12252v1, http://arxiv.org/pdf/2101.12252v1",econ.EM
30409,em,"Since network data commonly consists of observations from a single large
network, researchers often partition the network into clusters in order to
apply cluster-robust inference methods. Existing such methods require clusters
to be asymptotically independent. Under mild conditions, we prove that, for
this requirement to hold for network-dependent data, it is necessary and
sufficient that clusters have low conductance, the ratio of edge boundary size
to volume. This yields a simple measure of cluster quality. We find in
simulations that when clusters have low conductance, cluster-robust methods
control size better than HAC estimators. However, for important classes of
networks lacking low-conductance clusters, the former can exhibit substantial
size distortion. To determine the number of low-conductance clusters and
construct them, we draw on results in spectral graph theory that connect
conductance to the spectrum of the graph Laplacian. Based on these results, we
propose to use the spectrum to determine the number of low-conductance clusters
and spectral clustering to construct them.",Network Cluster-Robust Inference,2021-03-02 07:47:47,Michael P. Leung,"http://arxiv.org/abs/2103.01470v4, http://arxiv.org/pdf/2103.01470v4",econ.EM
30410,em,"In this paper, we develop a penalized realized variance (PRV) estimator of
the quadratic variation (QV) of a high-dimensional continuous It\^{o}
semimartingale. We adapt the principal idea of regularization from linear
regression to covariance estimation in a continuous-time high-frequency
setting. We show that under a nuclear norm penalization, the PRV is computed by
soft-thresholding the eigenvalues of realized variance (RV). It therefore
encourages sparsity of singular values or, equivalently, low rank of the
solution. We prove our estimator is minimax optimal up to a logarithmic factor.
We derive a concentration inequality, which reveals that the rank of PRV is --
with a high probability -- the number of non-negligible eigenvalues of the QV.
Moreover, we also provide the associated non-asymptotic analysis for the spot
variance. We suggest an intuitive data-driven bootstrap procedure to select the
shrinkage parameter. Our theory is supplemented by a simulation study and an
empirical application. The PRV detects about three to five factors in the equity
market, with a notable rank decrease during times of distress in financial
markets. This is consistent with most standard asset pricing models, where a
limited number of systematic factors driving the cross-section of stock returns
are perturbed by idiosyncratic errors, rendering the QV -- and also RV -- of
full rank.",High-dimensional estimation of quadratic variation based on penalized realized variance,2021-03-04 21:57:13,"Kim Christensen, Mikkel Slot Nielsen, Mark Podolskij","http://arxiv.org/abs/2103.03237v1, http://arxiv.org/pdf/2103.03237v1",econ.EM
30411,em,"We find the set of extremal points of Lorenz curves with fixed Gini index and
compute the maximal $L^1$-distance between Lorenz curves with given values of
their Gini coefficients. As an application we introduce a bidimensional index
that simultaneously measures relative inequality and dissimilarity between two
populations. This proposal employs the Gini indices of the variables and an
$L^1$-distance between their Lorenz curves. The index takes values in a
right-angled triangle, two of whose sides characterize perfect relative
inequality, expressed by the Lorenz ordering between the underlying
distributions. Further, the hypotenuse represents maximal distance between the
two distributions. As a consequence, we construct a chart to graphically either
track the evolution of (relative) inequality and distance between two
income distributions over time or compare the distribution of income of a
specific population between a fixed time point and a range of years. We prove
the mathematical results behind the above claims and provide a full description
of the asymptotic properties of the plug-in estimator of this index. Finally,
we apply the proposed bidimensional index to several real EU-SILC income
datasets to illustrate its performance in practice.",Extremal points of Lorenz curves and applications to inequality analysis,2021-03-04 22:44:08,"Amparo Baíllo, Javier Cárcamo, Carlos Mora-Corral","http://arxiv.org/abs/2103.03286v1, http://arxiv.org/pdf/2103.03286v1",econ.EM
30412,em,"In order to further overcome the difficulties of the existing models in
dealing with the non-stationary and nonlinear characteristics of high-frequency
financial time series data, especially its weak generalization ability, this
paper proposes an ensemble method based on data denoising methods, including
the wavelet transform (WT) and singular spectrum analysis (SSA), and a long
short-term memory (LSTM) neural network to build a data prediction model. The
financial time series is decomposed and reconstructed by WT and SSA to denoise it.
After denoising, a smooth sequence containing the effective information is
reconstructed. The smoothed sequence is fed into the LSTM and the predicted
value is obtained. With the Dow Jones Industrial Average (DJIA) as the research
object, the five-minute closing prices of the DJIA are divided into short-term
(1 hour), medium-term (3 hours) and long-term (6 hours) horizons. Based on root
mean square error (RMSE),
mean absolute error (MAE), mean absolute percentage error (MAPE) and absolute
percentage error standard deviation (SDAPE), the experimental results show that
in the short-term, medium-term and long-term, data denoising can greatly
improve the accuracy and stability of the prediction, and can effectively
improve the generalization ability of LSTM prediction model. As WT and SSA can
extract useful information from the original sequence and avoid overfitting,
the hybrid model can better grasp the sequence pattern of the closing price of
the DJIA. The WT-LSTM model also performs better than the benchmark LSTM model and the
SSA-LSTM model.",Prediction of financial time series using LSTM and data denoising methods,2021-03-05 10:32:36,"Qi Tang, Tongmei Fan, Ruchen Shi, Jingyan Huang, Yidan Ma","http://arxiv.org/abs/2103.03505v1, http://arxiv.org/pdf/2103.03505v1",econ.EM
30413,em,"This paper proposes methods for Bayesian inference in time-varying parameter
(TVP) quantile regression (QR) models featuring conditional heteroskedasticity.
I use data augmentation schemes to render the model conditionally Gaussian and
develop an efficient Gibbs sampling algorithm. Regularization of the
high-dimensional parameter space is achieved via flexible dynamic shrinkage
priors. A simple version of TVP-QR based on an unobserved component model is
applied to dynamically trace the quantiles of the distribution of inflation in
the United States, the United Kingdom and the euro area. In an out-of-sample
forecast exercise, I find the proposed model to be competitive and perform
particularly well for higher-order and tail forecasts. A detailed analysis of
the resulting predictive distributions reveals that they are sometimes skewed
and occasionally feature heavy tails.",Modeling tail risks of inflation using unobserved component quantile regressions,2021-03-05 15:29:34,Michael Pfarrhofer,"http://arxiv.org/abs/2103.03632v2, http://arxiv.org/pdf/2103.03632v2",econ.EM
30414,em,"Electricity supply must be matched with demand at all times. This helps
reduce the chance of issues such as load frequency control problems and
electricity blackouts. To gain a better understanding of the load that is
likely to be required over the next 24h, estimations under uncertainty are
needed. This is especially difficult in a decentralized electricity market with
many micro-producers which are not under central control.
  In this paper, we investigate the impact of eleven offline and five online
learning algorithms used to predict the electricity demand profile over the
next 24h. We achieve this through integration within the long-term agent-based
model, ElecSim. Through the prediction of electricity demand profile over the
next 24h, we can simulate the predictions made for a day-ahead market. Once we
have made these predictions, we sample from the residual distributions and
perturb the electricity market demand using the simulation, ElecSim. This
enables us to understand the impact of errors on the long-term dynamics of a
decentralized electricity market.
  We show we can reduce the mean absolute error by 30% using an online
algorithm when compared to the best offline algorithm, whilst reducing the
tendered national grid reserve required. This reduction in national
grid reserves leads to savings in costs and emissions. We also show that large
errors in prediction accuracy have a disproportionate effect on investments made
over a 17-year time frame, as well as electricity mix.",The impact of online machine-learning methods on long-term investment decisions and generator utilization in electricity markets,2021-03-07 14:28:54,"Alexander J. M. Kell, A. Stephen McGough, Matthew Forshaw","http://arxiv.org/abs/2103.04327v1, http://arxiv.org/pdf/2103.04327v1",econ.EM
30415,em,"The classic censored regression model (tobit model) has been widely used in
the economic literature. This model assumes normality for the error
distribution and is not recommended for cases where positive skewness is
present. Moreover, in regression analysis, it is well-known that a quantile
regression approach allows us to study the influences of the explanatory
variables on the dependent variable considering different quantiles. Therefore,
we propose in this paper a quantile tobit regression model based on
quantile-based log-symmetric distributions. The proposed methodology allows us
to model data with positive skewness (which is not suitable for the classic
tobit model), and to study the influence of the quantiles of interest, in
addition to accommodating heteroscedasticity. The model parameters are
estimated using the maximum likelihood method and an elaborate Monte Carlo
study is performed to evaluate the performance of the estimates. Finally, the
proposed methodology is illustrated using two female labor supply data sets.
The results show that the proposed log-symmetric quantile tobit model has a
better fit than the classic tobit model.",On a log-symmetric quantile tobit model applied to female labor supply data,2021-03-07 23:41:20,"Danúbia R. Cunha, Jose A. Divino, Helton Saulo","http://arxiv.org/abs/2103.04449v1, http://arxiv.org/pdf/2103.04449v1",stat.ME
30416,em,"This paper reexamines the seminal Lagrange multiplier test for cross-section
independence in a large panel model where both the number of cross-sectional
units n and the number of time series observations T can be large. The first
contribution of the paper is an enlargement of the test with two extensions:
first, the new asymptotic normality is derived in a simultaneous limiting
scheme where the two dimensions (n, T) tend to infinity with comparable
magnitudes; second, the result is valid for general error distributions (not
necessarily normal). The second contribution of the paper is a new test
statistic based on the sum of the fourth powers of cross-section correlations
from OLS residuals, instead of their squares used in the Lagrange multiplier
statistic. This new test is generally more powerful, and the improvement is
particularly visible against alternatives with weak or sparse cross-section
dependence. Both a simulation study and a real data analysis are presented to
demonstrate the advantages of the enlarged Lagrange multiplier test and the
power enhanced test in comparison with the existing procedures.",Extension of the Lagrange multiplier test for error cross-section independence to large panels with non normal errors,2021-03-10 17:25:54,"Zhaoyuan Li, Jianfeng Yao","http://arxiv.org/abs/2103.06075v1, http://arxiv.org/pdf/2103.06075v1",econ.EM
30419,em,"This study provides an efficient approach for using text data to calculate
patent-to-patent (p2p) technological similarity, and presents a hybrid
framework for leveraging the resulting p2p similarity for applications such as
semantic search and automated patent classification. We create embeddings using
Sentence-BERT (SBERT) based on patent claims. We leverage SBERT's efficiency in
creating embedding distance measures to map p2p similarity in large sets of
patent data. We deploy our framework for classification with a simple K-Nearest
Neighbors (KNN) model that predicts the Cooperative Patent Classification (CPC) of
a patent based on the class assignment of the K patents with the highest p2p
similarity. We thereby validate that the p2p similarity captures their
technological features in terms of CPC overlap, and at the same time demonstrate the
usefulness of this approach for automatic patent classification based on text
data. Furthermore, the presented classification framework is simple and the
results easy to interpret and evaluate by end-users. In the out-of-sample model
validation, we are able to perform a multi-label prediction of all assigned CPC
classes on the subclass (663) level on 1,492,294 patents with an accuracy of
54% and F1 score > 66%, which suggests that our model outperforms the current
state-of-the-art in text-based multi-label and multi-class patent
classification. We furthermore discuss the applicability of the presented
framework for semantic IP search, patent landscaping, and technology
intelligence. We finally point towards a future research agenda for leveraging
multi-source patent embeddings, their appropriateness across applications, as
well as to improve and validate patent embeddings by creating domain-expert
curated Semantic Textual Similarity (STS) benchmark datasets.",PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT,2021-03-22 18:23:19,"Hamid Bekamiri, Daniel S. Hain, Roman Jurowetzki","http://arxiv.org/abs/2103.11933v3, http://arxiv.org/pdf/2103.11933v3",cs.LG
30420,em,"Many real life situations require a set of items to be repeatedly placed in a
random sequence. In such circumstances, it is often desirable to test whether
such randomization indeed obtains, yet this problem has received very limited
attention in the literature. This paper articulates the key features of this
problem and presents three ""untargeted"" tests that require no a priori
information from the analyst. These methods are used to analyze the order in
which lottery numbers are drawn in Powerball, the order in which contestants
perform on American Idol, and the order of candidates on primary election
ballots in Texas and West Virginia. In this last application, multiple
deviations from full randomization are detected, with potentially serious
political and legal consequences. The form these deviations take varies,
depending on institutional factors, which sometimes necessitates the use of
tests that exchange power for increased robustness.",Uncovering Bias in Order Assignment,2021-03-22 18:47:10,Darren Grant,"http://dx.doi.org/10.1111/ecin.13114, http://arxiv.org/abs/2103.11952v2, http://arxiv.org/pdf/2103.11952v2",stat.AP
30421,em,"This paper proposes a hierarchical approximate-factor approach to analyzing
high-dimensional, large-scale heterogeneous time series data using distributed
computing. The new method employs a multiple-fold dimension reduction procedure
using Principal Component Analysis (PCA) and shows great promise for modeling
large-scale data that cannot be stored or analyzed by a single machine. Each
computer at the basic level performs a PCA to extract common factors among the
time series assigned to it and transfers those factors to one and only one node
of the second level. Each 2nd-level computer collects the common factors from
its subordinates and performs another PCA to select the 2nd-level common
factors. This process is repeated until the central server is reached, which
collects common factors from its direct subordinates and performs a final PCA
to select the global common factors. The noise terms of the 2nd-level
approximate factor model are the unique common factors of the 1st-level
clusters. We focus on the case of 2 levels in our theoretical derivations, but
the idea can easily be generalized to any finite number of hierarchies. We
discuss some clustering methods when the group memberships are unknown and
introduce a new diffusion index approach to forecasting. We further extend the
analysis to unit-root nonstationary time series. Asymptotic properties of the
proposed method are derived for the diverging dimension of the data in each
computing unit and the sample size $T$. We use both simulated data and real
examples to assess the performance of the proposed method in finite samples,
and compare our method with the commonly used ones in the literature concerning
the forecastability of extracted factors.",Divide-and-Conquer: A Distributed Hierarchical Factor Approach to Modeling Large-Scale Time Series Data,2021-03-26 20:40:48,"Zhaoxing Gao, Ruey S. Tsay","http://arxiv.org/abs/2103.14626v1, http://arxiv.org/pdf/2103.14626v1",stat.ME
30422,em,"We propose a definition for the average indirect effect of a binary treatment
in the potential outcomes model for causal inference under cross-unit
interference. Our definition is analogous to the standard definition of the
average direct effect, and can be expressed without needing to compare outcomes
across multiple randomized experiments. We show that the proposed indirect
effect satisfies a decomposition theorem whereby, in a Bernoulli trial, the sum
of the average direct and indirect effects always corresponds to the effect of
a policy intervention that infinitesimally increases treatment probabilities.
We also consider a number of parametric models for interference, and find that
our non-parametric indirect effect remains a natural estimand when re-expressed
in the context of these models.",Average Direct and Indirect Causal Effects under Interference,2021-04-08 17:28:16,"Yuchen Hu, Shuangning Li, Stefan Wager","http://arxiv.org/abs/2104.03802v4, http://arxiv.org/pdf/2104.03802v4",stat.ME
30423,em,"We show that the identification problem for a class of dynamic panel logit
models with fixed effects has a connection to the truncated moment problem in
mathematics. We use this connection to show that the sharp identified set of
the structural parameters is characterized by a set of moment equality and
inequality conditions. This result provides sharp bounds in models where moment
equality conditions do not exist or do not point identify the parameters. We
also show that the sharp identifying content of the non-parametric latent
distribution of the fixed effects is characterized by a vector of its
generalized moments, and that the number of moments grows linearly in T. This
final result lets us point identify, or sharply bound, specific classes of
functionals, without solving an optimization problem with respect to the latent
distribution.",Identification of Dynamic Panel Logit Models with Fixed Effects,2021-04-09 23:06:54,"Christopher Dobronyi, Jiaying Gu, Kyoo il Kim","http://arxiv.org/abs/2104.04590v2, http://arxiv.org/pdf/2104.04590v2",econ.EM
30430,em,"This article considers average marginal effects (AME) in a panel data fixed
effects logit model. Relating the identified set of the AME to an extremal
moment problem, we first show how to obtain sharp bounds on the AME
straightforwardly, without any optimization. Then, we consider two strategies
to build confidence intervals on the AME. In the first, we estimate the sharp
bounds with a semiparametric two-step estimator. The second, very simple
strategy estimates instead a quantity known to be at a bounded distance from
the AME. It does not require any nonparametric estimation but may result in
larger confidence intervals. Monte Carlo simulations suggest that both
approaches work well in practice, the second being often very competitive.
Finally, we show that our results also apply to average treatment effects,
average structural functions, and ordered fixed effects logit models.",Identification and Estimation of Average Marginal Effects in Fixed Effects Logit Models,2021-05-03 17:01:06,"Laurent Davezies, Xavier D'Haultfoeuille, Louise Laage","http://arxiv.org/abs/2105.00879v3, http://arxiv.org/pdf/2105.00879v3",econ.EM
30424,em,"The presence of outlying observations may adversely affect statistical
testing procedures, resulting in unstable test statistics and unreliable
inferences depending on the distortion in parameter estimates. Despite the
adverse effects of outliers in panel data models, there are only
a few robust testing procedures available for model specification. In this
paper, a new weighted likelihood based robust specification test is proposed to
determine the appropriate approach in panel data including individual-specific
components. The proposed test has been shown to have the same asymptotic
distribution as that of most commonly used Hausman's specification test under
null hypothesis of random effects specification. The finite sample properties
of the robust testing procedure are illustrated by means of Monte Carlo
simulations and an economic-growth data from the member countries of the
Organisation for Economic Co-operation and Development. Our results reveal that
the robust specification test exhibits improved performance in terms of size and
power of the test in the presence of contamination.",A robust specification test in linear panel data models,2021-04-15 22:12:36,"Beste Hamiye Beyaztas, Soutir Bandyopadhyay, Abhijit Mandal","http://arxiv.org/abs/2104.07723v1, http://arxiv.org/pdf/2104.07723v1",stat.ME
30425,em,"This paper establishes bounds on the performance of empirical risk
minimization for large-dimensional linear regression. We generalize existing
results by allowing the data to be dependent and heavy-tailed. The analysis
covers both the cases of identically and heterogeneously distributed
observations. Our analysis is nonparametric in the sense that the relationship
between the regressand and the regressors is not specified. The main results of
this paper show that the empirical risk minimizer achieves the optimal
performance (up to a logarithmic factor) in a dependent data setting.",Performance of Empirical Risk Minimization for Linear Regression with Dependent Data,2021-04-25 13:56:04,"Christian Brownlees, Guðmundur Stefán Guðmundsson","http://arxiv.org/abs/2104.12127v5, http://arxiv.org/pdf/2104.12127v5",econ.EM
30426,em,"Two-sided marketplace platforms often run experiments to test the effect of
an intervention before launching it platform-wide. A typical approach is to
randomize individuals into the treatment group, which receives the
intervention, and the control group, which does not. The platform then compares
the performance in the two groups to estimate the effect if the intervention
were launched to everyone. We focus on two common experiment types, where the
platform randomizes individuals either on the supply side or on the demand
side. The resulting estimates of the treatment effect in these experiments are
typically biased: because individuals in the market compete with each other,
individuals in the treatment group affect those in the control group and vice
versa, creating interference. We develop a simple tractable market model to
study bias and variance in these experiments with interference. We focus on two
choices available to the platform: (1) Which side of the platform should it
randomize on (supply or demand)? (2) What proportion of individuals should be
allocated to treatment? We find that both choices affect the bias and variance
of the resulting estimators but in different ways. The bias-optimal choice of
experiment type depends on the relative amounts of supply and demand in the
market, and we discuss how a platform can use market data to select the
experiment type. Importantly, we find that in many circumstances choosing the
bias-optimal experiment type has little effect on variance. On the other hand,
the choice of treatment proportion can induce a bias-variance tradeoff, where
the bias-minimizing proportion increases variance. We discuss how a platform
can navigate this tradeoff and best choose the treatment proportion, using a
combination of modeling as well as contextual knowledge about the market, the
risk of the intervention, and reasonable effect sizes of the intervention.","Interference, Bias, and Variance in Two-Sided Marketplace Experimentation: Guidance for Platforms",2021-04-25 21:11:09,"Hannah Li, Geng Zhao, Ramesh Johari, Gabriel Y. Weintraub","http://arxiv.org/abs/2104.12222v1, http://arxiv.org/pdf/2104.12222v1",stat.ME
30427,em,"Economists often estimate economic models on data and use the point estimates
as a stand-in for the truth when studying the model's implications for optimal
decision-making. This practice ignores model ambiguity, exposes the decision
problem to misspecification, and ultimately leads to post-decision
disappointment. Using statistical decision theory, we develop a framework to
explore, evaluate, and optimize robust decision rules that explicitly account
for estimation uncertainty. We show how to operationalize our analysis by
studying robust decisions in a stochastic dynamic investment model in which a
decision-maker directly accounts for uncertainty in the model's transition
dynamics.",Robust decision-making under risk and ambiguity,2021-04-23 13:42:16,"Maximilian Blesch, Philipp Eisenhauer","http://arxiv.org/abs/2104.12573v4, http://arxiv.org/pdf/2104.12573v4",econ.EM
30428,em,"This paper studies sequential search models that (1) incorporate unobserved
product quality, which can be correlated with endogenous observable
characteristics (such as price) and endogenous search cost variables (such as
product rankings in online search intermediaries); and (2) do not require
researchers to know the true distribution of the match value between consumers
and products. A likelihood approach to estimate such models gives biased
results. Therefore, I propose a new estimator -- pairwise maximum rank (PMR)
estimator -- for both preference and search cost parameters. I show that the
PMR estimator is consistent using only data on consumers' search order among
one pair of products rather than data on consumers' full consideration set or
final purchase. Additionally, we can use the PMR estimator to test for the true
match value distribution in the data. In the empirical application, I apply the
PMR estimator to quantify the effect of rankings in Expedia hotel search using
two samples of the data set, to which consumers are randomly assigned. I find
the position effect to be \$0.11-\$0.36, and the effect estimated using the
sample with randomly generated rankings is close to the effect estimated using
the sample with endogenous rankings. Moreover, I find that the true match value
distribution in the data is unlikely to be N(0,1). Likelihood estimation
ignoring endogeneity gives an upward bias of at least \$1.17; misspecification
of match value distribution as N(0,1) gives an upward bias of at least \$2.99.",Sequential Search Models: A Pairwise Maximum Rank Approach,2021-04-28 19:19:56,Jiarui Liu,"http://arxiv.org/abs/2104.13865v2, http://arxiv.org/pdf/2104.13865v2",econ.EM
30429,em,"How do inter-organizational networks emerge? Accounting for interdependence
among ties while studying tie formation is one of the key challenges in this
area of research. We address this challenge using an equilibrium framework
where firms' decisions to form links with other firms are modeled as a
strategic game. In this game, firms weigh the costs and benefits of
establishing a relationship with other firms and form ties if their net payoffs
are positive. We characterize the equilibrium networks as exponential random
graphs (ERGM), and we estimate the firms' payoffs using a Bayesian approach. To
demonstrate the usefulness of our approach, we apply the framework to a
co-investment network of venture capital firms in the medical device industry.
The equilibrium framework allows researchers to draw economic interpretation
from parameter estimates of the ERGM model. We learn that firms rely on their
joint partners (transitivity) and prefer to form ties with firms similar to
themselves (homophily). These results hold after controlling for the
interdependence among ties. Another critical advantage of a structural
approach is that it allows us to simulate the effects of economic shocks or
policy counterfactuals. We test two such policy shocks, namely, firm entry and
regulatory change. We show how new firms' entry or a regulatory shock of
minimum capital requirements increase the co-investment network's density and
clustering.",A model of inter-organizational network formation,2021-05-02 15:30:39,"Shweta Gaonkar, Angelo Mele","http://arxiv.org/abs/2105.00458v1, http://arxiv.org/pdf/2105.00458v1",econ.EM
30431,em,"With the heightened volatility in stock prices during the Covid-19 pandemic,
the need for price forecasting has become more critical. We investigated the
forecast performance of four models including Long-Short Term Memory, XGBoost,
Autoregression, and Last Value on stock prices of Facebook, Amazon, Tesla,
Google, and Apple during the COVID-19 pandemic to understand the accuracy and
predictability of the models in this highly volatile period. To train the
models, the data of all stocks are split into train and test datasets. The test
dataset starts from January 2020 to April 2021 which covers the COVID-19
pandemic period. The results show that the Autoregression and Last value models
have higher accuracy in predicting the stock prices because of the strong
correlation between the previous day's and the next day's price values.
Additionally, the results suggest that the machine learning models (Long-Short
Term Memory and XGBoost) are not performing as well as Autoregression models
when the market experiences high volatility.",Stock Price Forecasting in Presence of Covid-19 Pandemic and Evaluating Performances of Machine Learning Models for Time-Series Forecasting,2021-05-04 17:55:57,"Navid Mottaghi, Sara Farhangdoost","http://arxiv.org/abs/2105.02785v1, http://arxiv.org/pdf/2105.02785v1",q-fin.ST
30432,em,"We study linear peer effects models where peers interact in groups,
individual's outcomes are linear in the group mean outcome and characteristics,
and group effects are random. Our specification is motivated by the moment
conditions imposed in Graham 2008. We show that these moment conditions can be
cast in terms of a linear random group effects model and lead to a class of GMM
estimators that are generally identified as long as there is sufficient
variation in group size. We also show that our class of GMM estimators contains
a Quasi Maximum Likelihood estimator (QMLE) for the random group effects model,
as well as the Wald estimator of Graham 2008 and the within estimator of Lee
2007 as special cases. Our identification results extend insights in Graham
2008 that show how assumptions about random group effects as well as variation
in group size can be used to overcome the reflection problem in identifying
peer effects. Our QMLE and GMM estimators accommodate additional covariates and
are valid in situations with a large but finite number of different group sizes
or types. Because our estimators are general moment based procedures, using
instruments other than binary group indicators in estimation is
straightforward. Our QMLE estimator accommodates group level covariates in the spirit
of Mundlak and Chamberlain and offers an alternative to fixed effects
specifications. Monte-Carlo simulations show that the bias of the QMLE
estimator decreases with the number of groups and the variation in group size,
and increases with group size. We also prove the consistency and asymptotic
normality of the estimator under reasonable assumptions.",Efficient Peer Effects Estimators with Group Effects,2021-05-10 16:05:40,"Guido M. Kuersteiner, Ingmar R. Prucha, Ying Zeng","http://arxiv.org/abs/2105.04330v2, http://arxiv.org/pdf/2105.04330v2",econ.EM
30433,em,"One of the important and widely used classes of models for non-Gaussian time
series is the generalized autoregressive moving average (GARMA) model, which
specifies an ARMA structure for the conditional mean process of the underlying
time series. However, in many applications one often encounters conditional
heteroskedasticity. In this paper we propose a new class of models, referred to
as GARMA-GARCH models, that jointly specify both the conditional mean and
conditional variance processes of a general non-Gaussian time series. Under the
general modeling framework, we propose three specific models, as examples, for
proportional time series, nonnegative time series, and skewed and heavy-tailed
financial time series. The maximum likelihood estimator (MLE) and the quasi-Gaussian
MLE (GMLE) are used to estimate the parameters. Simulation studies and three
applications are used to demonstrate the properties of the models and the
estimation procedures.",Generalized Autoregressive Moving Average Models with GARCH Errors,2021-05-12 12:19:13,"Tingguo Zheng, Han Xiao, Rong Chen","http://arxiv.org/abs/2105.05532v1, http://arxiv.org/pdf/2105.05532v1",stat.ME
30434,em,"We propose a fast and flexible method to scale multivariate return volatility
predictions up to high-dimensions using a dynamic risk factor model. Our
approach increases parsimony via time-varying sparsity on factor loadings and
is able to sequentially learn the use of constant or time-varying parameters
and volatilities. We show in a dynamic portfolio allocation problem with 452
stocks from the S&P 500 index that our dynamic risk factor model is able to
produce more stable and sparse predictions, achieving not just considerable
portfolio performance improvements but also higher utility gains for the
mean-variance investor compared to the traditional Wishart benchmark and the
passive investment on the market index.",Dynamic Portfolio Allocation in High Dimensions using Sparse Risk Factors,2021-05-14 02:12:49,"Bruno P. C. Levy, Hedibert F. Lopes","http://arxiv.org/abs/2105.06584v2, http://arxiv.org/pdf/2105.06584v2",q-fin.ST
30435,em,"We propose employing a debiased-regularized, high-dimensional generalized
method of moments (GMM) framework to perform inference on large-scale spatial
panel networks. In particular, network structure with a flexible sparse
deviation, which can be regarded either as latent or as misspecified from a
predetermined adjacency matrix, is estimated using a debiased machine learning
approach. The theoretical analysis establishes the consistency and asymptotic
normality of our proposed estimator, taking into account general temporal and
spatial dependency inherent in the data-generating processes. The
dimensionality allowance in the presence of dependency is discussed. A primary
contribution of our study is the development of uniform inference theory that
enables hypothesis testing on the parameters of interest, including zero or
non-zero elements in the network structure. Additionally, the asymptotic
properties for the estimator are derived for both linear and nonlinear moments.
Simulations demonstrate superior performance of our proposed approach. Lastly,
we apply our methodology to investigate the spatial network effect of stock
returns.",Uniform Inference on High-dimensional Spatial Panel Networks,2021-05-16 15:52:18,"Victor Chernozhukov, Chen Huang, Weining Wang","http://arxiv.org/abs/2105.07424v3, http://arxiv.org/pdf/2105.07424v3",econ.EM
30436,em,"Recently, Szufa et al. [AAMAS 2020] presented a ""map of elections"" that
visualizes a set of 800 elections generated from various statistical cultures.
While similar elections are grouped together on this map, there is no obvious
interpretation of the elections' positions. We provide such an interpretation
by introducing four canonical ""extreme"" elections, acting as a compass on the
map. We use them to analyze both a dataset provided by Szufa et al. and a
number of real-life elections. In effect, we find a new variant of the Mallows
model and show that it captures real-life scenarios particularly well.",Putting a Compass on the Map of Elections,2021-05-17 16:28:39,"Niclas Boehmer, Robert Bredereck, Piotr Faliszewski, Rolf Niedermeier, Stanisław Szufa","http://arxiv.org/abs/2105.07815v1, http://arxiv.org/pdf/2105.07815v1",cs.GT
30437,em,"With uncertain changes of the economic environment, macroeconomic downturns
during recessions and crises can hardly be explained by a Gaussian structural
shock. There is evidence that the distribution of macroeconomic variables is
skewed and heavy tailed. In this paper, we contribute to the literature by
extending a vector autoregression (VAR) model to account for a more realistic
assumption of the multivariate distribution of the macroeconomic variables. We
propose a general class of generalized hyperbolic skew Student's t distributions
with stochastic volatility for the error term in the VAR model that allows us
to take into account skewness and heavy tails. Tools for Bayesian inference and
model selection using a Gibbs sampler are provided. In an empirical study, we
present evidence of skewness and heavy tails for monthly macroeconomic
variables. The analysis also gives a clear message that skewness should be
taken into account for better predictions during recessions and crises.",Vector autoregression models with skewness and heavy tails,2021-05-24 13:15:04,"Sune Karlsson, Stepan Mazur, Hoang Nguyen","http://arxiv.org/abs/2105.11182v1, http://arxiv.org/pdf/2105.11182v1",econ.EM
30438,em,"Financial advisors use questionnaires and discussions with clients to
determine a suitable portfolio of assets that will allow clients to reach their
investment objectives. Financial institutions assign risk ratings to each
security they offer, and those ratings are used to guide clients and advisors
to choose an investment portfolio risk that suits their stated risk tolerance.
This paper compares client Know Your Client (KYC) profile risk allocations to
their investment portfolio risk selections using a value-at-risk discrepancy
methodology. Value-at-risk is used to measure elicited and revealed risk to
show whether clients are over-risked or under-risked, whether changes in KYC risk lead
to changes in portfolio configuration, and whether cash flow affects a client's
portfolio risk. We demonstrate the effectiveness of value-at-risk at measuring
clients' elicited and revealed risk on a dataset provided by a private Canadian
financial dealership of over $50,000$ accounts for over $27,000$ clients and
$300$ advisors. By measuring both elicited and revealed risk using the same
measure, we can determine how well a client's portfolio aligns with their
stated goals. We believe that using value-at-risk to measure client risk
provides valuable insight to advisors to ensure that their practice is KYC
compliant, to better tailor their client portfolios to stated goals,
communicate advice to clients to either align their portfolios to stated goals
or refresh their goals, and to monitor changes to the clients' risk positions
across their practice.",Measuring Financial Advice: aligning client elicited and revealed risk,2021-05-25 15:55:03,"John R. J. Thompson, Longlong Feng, R. Mark Reesor, Chuck Grace, Adam Metzler","http://arxiv.org/abs/2105.11892v1, http://arxiv.org/pdf/2105.11892v1",econ.EM
30439,em,"This paper develops tests for the correct specification of the conditional
variance function in GARCH models when the true parameter may lie on the
boundary of the parameter space. The test statistics considered are of
Kolmogorov-Smirnov and Cram\'{e}r-von Mises type, and are based on a certain
empirical process marked by centered squared residuals. The limiting
distributions of the test statistics are not free from (unknown) nuisance
parameters, and hence critical values cannot be tabulated. A novel bootstrap
procedure is proposed to implement the tests; it is shown to be asymptotically
valid under general conditions, irrespective of the presence of nuisance
parameters on the boundary. The proposed bootstrap approach is based on
shrinking of the parameter estimates used to generate the bootstrap sample
toward the boundary of the parameter space at a proper rate. It is simple to
implement and fast in applications, as the associated test statistics have
simple closed form expressions. A simulation study demonstrates that the new
tests: (i) have excellent finite sample behavior in terms of empirical
rejection probabilities under the null as well as under the alternative; (ii)
provide a useful complement to existing procedures based on Ljung-Box type
approaches. Two data examples are considered to illustrate the tests.",Specification tests for GARCH processes,2021-05-28 22:50:02,"Giuseppe Cavaliere, Indeewara Perera, Anders Rahbek","http://arxiv.org/abs/2105.14081v1, http://arxiv.org/pdf/2105.14081v1",econ.EM
30440,em,"Datasets from field experiments with covariate-adaptive randomizations (CARs)
usually contain extra covariates in addition to the strata indicators. We
propose to incorporate these additional covariates via auxiliary regressions in
the estimation and inference of unconditional quantile treatment effects (QTEs)
under CARs. We establish the consistency and limit distribution of the
regression-adjusted QTE estimator and prove that the use of multiplier
bootstrap inference is non-conservative under CARs. The auxiliary regression
may be estimated parametrically, nonparametrically, or via regularization when
the data are high-dimensional. Even when the auxiliary regression is
misspecified, the proposed bootstrap inferential procedure still achieves the
nominal rejection probability in the limit under the null. When the auxiliary
regression is correctly specified, the regression-adjusted estimator achieves
the minimum asymptotic variance. We also discuss forms of adjustments that can
improve the efficiency of the QTE estimators. The finite sample performance of
the new estimation and inferential methods is studied in simulations and an
empirical application to a well-known dataset concerned with expanding access
to basic bank accounts on savings is reported.",Regression-Adjusted Estimation of Quantile Treatment Effects under Covariate-Adaptive Randomizations,2021-05-31 10:33:31,"Liang Jiang, Peter C. B. Phillips, Yubo Tao, Yichong Zhang","http://arxiv.org/abs/2105.14752v4, http://arxiv.org/pdf/2105.14752v4",econ.EM
30441,em,"Recently, an approach to modeling portfolio distribution with risk factors
distributed as Gram-Charlier (GC) expansions of the Gaussian law, has been
conceived. GC expansions prove effective when dealing with moderately
leptokurtic data. In order to cover the case of possibly severe leptokurtosis,
the so-called GC-like expansions have been devised by reshaping parent
leptokurtic distributions by means of orthogonal polynomials specific to them.
In this paper, we focus on the hyperbolic-secant (HS) law as parent
distribution whose GC-like expansions fit with kurtosis levels up to 19.4. A
portfolio distribution has been obtained with risk factors modeled as GC-like
expansions of the HS law which duly account for excess kurtosis. Empirical
evidence of the workings of the approach dealt with in the paper is included.",Modeling Portfolios with Leptokurtic and Dependent Risk Factors,2021-06-08 12:57:46,"Piero Quatto, Gianmarco Vacca, Maria Grazia Zoia","http://arxiv.org/abs/2106.04218v1, http://arxiv.org/pdf/2106.04218v1",econ.EM
30618,em,"In this chapter, we review variance selection for time-varying parameter
(TVP) models for univariate and multivariate time series within a Bayesian
framework. We show how both continuous as well as discrete spike-and-slab
shrinkage priors can be transferred from variable selection for regression
models to variance selection for TVP models by using a non-centered
parametrization. We discuss efficient MCMC estimation and provide an
application to US inflation modeling.",Sparse Bayesian State-Space and Time-Varying Parameter Models,2022-07-25 15:49:13,"Sylvia Frühwirth-Schnatter, Peter Knaus","http://arxiv.org/abs/2207.12147v1, http://arxiv.org/pdf/2207.12147v1",econ.EM
30442,em,"While most treatment evaluations focus on binary interventions, a growing
literature also considers continuously distributed treatments. We propose a
Cram\'{e}r-von Mises-type test for testing whether the mean potential outcome
given a specific treatment has a weakly monotonic relationship with the
treatment dose under a weak unconfoundedness assumption. In a nonseparable
structural model, applying our method amounts to testing monotonicity of the
average structural function in the continuous treatment of interest. To
flexibly control for a possibly high-dimensional set of covariates in our
testing approach, we propose a double debiased machine learning estimator that
accounts for covariates in a data-driven way. We show that the proposed test
controls asymptotic size and is consistent against any fixed alternative. These
theoretical findings are supported by Monte Carlo simulations. As an
empirical illustration, we apply our test to the Job Corps study and reject a
weakly negative relationship between the treatment (hours in academic and
vocational training) and labor market performance among relatively low
treatment values.",Testing Monotonicity of Mean Potential Outcomes in a Continuous Treatment with High-Dimensional Data,2021-06-08 13:33:09,"Yu-Chin Hsu, Martin Huber, Ying-Ying Lee, Chu-An Liu","http://arxiv.org/abs/2106.04237v3, http://arxiv.org/pdf/2106.04237v3",econ.EM
30443,em,"We study regressions with multiple treatments and a set of controls that is
flexible enough to purge omitted variable bias. We show these regressions
generally fail to estimate convex averages of heterogeneous treatment effects;
instead, estimates of each treatment's effect are contaminated by non-convex
averages of the effects of other treatments. We discuss three estimation
approaches that avoid such contamination bias, including a new estimator of
efficiently weighted average effects. We find minimal bias in a re-analysis of
Project STAR, due to idiosyncratic effect heterogeneity. But sizeable
contamination bias arises when effect heterogeneity becomes correlated with
treatment propensity scores.",Contamination Bias in Linear Regressions,2021-06-09 15:33:59,"Paul Goldsmith-Pinkham, Peter Hull, Michal Kolesár","http://arxiv.org/abs/2106.05024v3, http://arxiv.org/pdf/2106.05024v3",econ.EM
30444,em,"Artificial neural networks (ANNs) have been the catalyst to numerous advances
in a variety of fields and disciplines in recent years. Their impact on
economics, however, has been comparatively muted. One type of ANN, the long
short-term memory network (LSTM), is particularly well-suited to deal with
economic time-series. Here, the architecture's performance and characteristics
are evaluated in comparison with the dynamic factor model (DFM), currently a
popular choice in the field of economic nowcasting. LSTMs are found to produce
superior results to DFMs in the nowcasting of three separate variables; global
merchandise export values and volumes, and global services exports. Further
advantages include their ability to handle large numbers of input features in a
variety of time frequencies. A disadvantage is the inability to ascribe
contributions of input features to model outputs, common to all ANNs. In order
to facilitate continued applied research of the methodology by avoiding the
need for any knowledge of deep-learning libraries, an accompanying Python
library was developed using PyTorch, https://pypi.org/project/nowcast-lstm/.",Economic Nowcasting with Long Short-Term Memory Artificial Neural Networks (LSTM),2021-06-15 16:32:06,Daniel Hopp,"http://arxiv.org/abs/2106.08901v1, http://arxiv.org/pdf/2106.08901v1",econ.EM
30445,em,"Extra-large datasets are becoming increasingly accessible, and computing
tools designed to handle huge amount of data efficiently are democratizing
rapidly. However, conventional statistical and econometric tools are still
lacking fluency when dealing with such large datasets. This paper dives into
econometrics on big datasets, specifically focusing on the logistic regression
on Spark. We review the robustness of the functions available in Spark to fit
logistic regression and introduce a package that we developed in PySpark which
returns the statistical summary of the logistic regression, necessary for
statistical inference.",Scalable Econometrics on Big Data -- The Logistic Regression on Spark,2021-06-18 23:10:34,"Aurélien Ouattara, Matthieu Bulté, Wan-Ju Lin, Philipp Scholl, Benedikt Veit, Christos Ziakas, Florian Felice, Julien Virlogeux, George Dikos","http://arxiv.org/abs/2106.10341v1, http://arxiv.org/pdf/2106.10341v1",stat.CO
30446,em,"In time-series analyses, particularly for finance, generalized autoregressive
conditional heteroscedasticity (GARCH) models are widely applied statistical
tools for modelling volatility clusters (i.e., periods of increased or
decreased risk). In contrast, it has not been considered to be of critical
importance until now to model spatial dependence in the conditional second
moments. Only a few models have been proposed for modelling local clusters of
increased risks. In this paper, we introduce a novel spatial GARCH process in a
unified spatial and spatiotemporal GARCH framework, which also covers all
previously proposed spatial ARCH models, exponential spatial GARCH, and
time-series GARCH models. In contrast to previous spatiotemporal and time
series models, this spatial GARCH allows for instantaneous spill-overs across
all spatial units. For this common modelling framework, estimators are derived
based on a non-linear least-squares approach. Eventually, the use of the model
is demonstrated by a Monte Carlo simulation study and by an empirical example
that focuses on real estate prices from 1995 to 2014 across the ZIP-Code areas
of Berlin. A spatial autoregressive model is applied to the data to illustrate
how locally varying model uncertainties (e.g., due to latent regressors) can be
captured by the spatial GARCH-type models.",Generalized Spatial and Spatiotemporal ARCH Models,2021-06-19 14:50:39,"Philipp Otto, Wolfgang Schmid","http://dx.doi.org/10.1007/s00362-022-01357-1, http://arxiv.org/abs/2106.10477v1, http://arxiv.org/pdf/2106.10477v1",stat.ME
30447,em,"The paper proposes a supervised machine learning algorithm to uncover
treatment effect heterogeneity in classical regression discontinuity (RD)
designs. Extending Athey and Imbens (2016), I develop a criterion for building
an honest ""regression discontinuity tree"", where each leaf of the tree contains
the RD estimate of a treatment (assigned by a common cutoff rule) conditional
on the values of some pre-treatment covariates. It is a priori unknown which
covariates are relevant for capturing treatment effect heterogeneity, and it is
the task of the algorithm to discover them, without invalidating inference. I
study the performance of the method through Monte Carlo simulations and apply
it to the data set compiled by Pop-Eleches and Urquiola (2013) to uncover
various sources of heterogeneity in the impact of attending a better secondary
school in Romania.",Heterogeneous Treatment Effects in Regression Discontinuity Designs,2021-06-22 12:47:28,Ágoston Reguly,"http://arxiv.org/abs/2106.11640v3, http://arxiv.org/pdf/2106.11640v3",econ.EM
35163,th,"We explore the properties of optimal multi-dimensional auctions in a model
where a single object of multiple qualities is sold to several buyers. Using
simulations, we test some hypotheses conjectured by Belloni et al. [3] and
Kushnir and Shourideh [7]. As part of this work, we provide the first
open-source library for multi-dimensional auction simulations written in
Python.",Optimal Multi-Dimensional Auctions: Conjectures and Simulations,2022-07-04 21:28:23,"Alexey Kushnir, James Michelson","http://arxiv.org/abs/2207.01664v1, http://arxiv.org/pdf/2207.01664v1",econ.TH
30448,em,"This paper studies identification and inference in transformation models with
endogenous censoring. Many kinds of duration models, such as the accelerated
failure time model, proportional hazard model, and mixed proportional hazard
model, can be viewed as transformation models. We allow the censoring of a
duration outcome to be arbitrarily correlated with observed covariates and
unobserved heterogeneity. We impose no parametric restrictions on either the
transformation function or the distribution function of the unobserved
heterogeneity. In this setting, we develop bounds on the regression parameters
and the transformation function, which are characterized by conditional moment
inequalities involving U-statistics. We provide inference methods for them by
constructing an inference approach for conditional moment inequality models in
which the sample analogs of moments are U-statistics. We apply the proposed
inference methods to evaluate the effect of heart transplants on patients'
survival time using data from the Stanford Heart Transplant Study.",Partial Identification and Inference in Duration Models with Endogenous Censoring,2021-07-02 12:37:40,Shosei Sakaguchi,"http://arxiv.org/abs/2107.00928v1, http://arxiv.org/pdf/2107.00928v1",econ.EM
30449,em,"Many differentiated products have key attributes that are unstructured and
thus high-dimensional (e.g., design, text). Instead of treating unstructured
attributes as unobservables in economic models, quantifying them can be
important to answer interesting economic questions. To propose an analytical
framework for this type of products, this paper considers one of the simplest
design products -- fonts -- and investigates merger and product differentiation
using an original dataset from the world's largest online marketplace for
fonts. We quantify font shapes by constructing embeddings from a deep
convolutional neural network. Each embedding maps a font's shape onto a
low-dimensional vector. In the resulting product space, designers are assumed
to engage in Hotelling-type spatial competition. From the image embeddings, we
construct two alternative measures that capture the degree of design
differentiation. We then study the causal effects of a merger on the merging
firm's creative decisions using the constructed measures in a synthetic control
method. We find that the merger causes the merging firm to increase the visual
variety of font design. Notably, such effects are not captured when using
traditional measures for product offerings (e.g., specifications and the number
of products) constructed from structured data.",Shapes as Product Differentiation: Neural Network Embedding in the Analysis of Markets for Fonts,2021-07-06 20:12:27,"Sukjin Han, Eric H. Schulman, Kristen Grauman, Santhosh Ramakrishnan","http://arxiv.org/abs/2107.02739v1, http://arxiv.org/pdf/2107.02739v1",econ.EM
30450,em,"A factor copula model is proposed in which factors are either simulable or
estimable from exogenous information. Point estimation and inference are based
on a simulated methods of moments (SMM) approach with non-overlapping
simulation draws. Consistency and limiting normality of the estimator is
established and the validity of bootstrap standard errors is shown. Doing so,
previous results from the literature are verified under low-level conditions
imposed on the individual components of the factor structure. Monte Carlo
evidence confirms the accuracy of the asymptotic theory in finite samples and
an empirical application illustrates the usefulness of the model to explain the
cross-sectional dependence between stock returns.",Estimation and Inference in Factor Copula Models with Exogenous Covariates,2021-07-07 20:25:00,"Alexander Mayer, Dominik Wied","http://arxiv.org/abs/2107.03366v4, http://arxiv.org/pdf/2107.03366v4",econ.EM
30451,em,"This article is an introduction to machine learning for financial
forecasting, planning and analysis (FP\&A). Machine learning appears well
suited to support FP\&A with the highly automated extraction of information
from large amounts of data. However, because most traditional machine learning
techniques focus on forecasting (prediction), we discuss the particular care
that must be taken to avoid the pitfalls of using them for planning and
resource allocation (causal inference). While the naive application of machine
learning usually fails in this context, the recently developed double machine
learning framework can address causal questions of interest. We review the
current literature on machine learning in FP\&A and illustrate in a simulation
study how machine learning can be used for both forecasting and planning. We
also investigate how forecasting and planning improve as the number of data
points increases.","Machine Learning for Financial Forecasting, Planning and Analysis: Recent Developments and Pitfalls",2021-07-10 17:54:36,"Helmut Wasserbacher, Martin Spindler","http://arxiv.org/abs/2107.04851v1, http://arxiv.org/pdf/2107.04851v1",econ.EM
30452,em,"The proportional odds cumulative logit model (POCLM) is a standard regression
model for an ordinal response. Ordinality of predictors can be incorporated by
monotonicity constraints for the corresponding parameters. It is shown that
estimators defined by optimization, such as maximum likelihood estimators, for
an unconstrained model and for parameters in the interior set of the parameter
space of a constrained model are asymptotically equivalent. This is used in
order to derive asymptotic confidence regions and tests for the constrained
model, involving simple modifications for finite samples. The finite sample
coverage probability of the confidence regions is investigated by simulation.
Tests concern the effect of individual variables, monotonicity, and a specified
monotonicity direction. The methodology is applied to real data related to the
assessment of school performance.",Inference for the proportional odds cumulative logit model with monotonicity constraints for ordinal predictors and ordinal response,2021-07-11 05:37:05,"Javier Espinosa-Brito, Christian Hennig","http://arxiv.org/abs/2107.04946v3, http://arxiv.org/pdf/2107.04946v3",econ.EM
30453,em,"In nonseparable triangular models with a binary endogenous treatment and a
binary instrumental variable, Vuong and Xu (2017) established identification
results for individual treatment effects (ITEs) under the rank invariance
assumption. Using their approach, Feng, Vuong, and Xu (2019) proposed a
uniformly consistent kernel estimator for the density of the ITE that utilizes
estimated ITEs. In this paper, we establish the asymptotic normality of the
density estimator of Feng, Vuong, and Xu (2019) and show that the ITE
estimation errors have a non-negligible effect on the asymptotic distribution
of the estimator. We propose asymptotically valid standard errors that account
for ITE estimation, as well as a bias correction. Furthermore, we develop
uniform confidence bands for the density of the ITE using the jackknife
multiplier or nonparametric bootstrap critical values.",Inference on Individual Treatment Effects in Nonseparable Triangular Models,2021-07-12 19:29:42,"Jun Ma, Vadim Marmer, Zhengfei Yu","http://arxiv.org/abs/2107.05559v4, http://arxiv.org/pdf/2107.05559v4",econ.EM
34454,th,"We describe a two-stage mechanism that fully implements the set of efficient
outcomes in two-agent environments with quasi-linear utilities. The mechanism
asks one agent to set prices for each outcome, and the other agent to make a
choice, paying the corresponding price: Price \& Choose. We extend our
implementation result in three main directions: an arbitrary number of players,
non-quasi-linear utilities, and robustness to max-min behavior. Finally, we
discuss how to reduce the payoff inequality between players while still
achieving efficiency.",Price & Choose,2022-12-12 04:28:58,"Federico Echenique, Matías Núñez","http://arxiv.org/abs/2212.05650v2, http://arxiv.org/pdf/2212.05650v2",econ.TH
30454,em,"Score tests have the advantage of requiring estimation alone of the model
restricted by the null hypothesis, which often is much simpler than models
defined under the alternative hypothesis. This is typically so when the
alternative hypothesis involves inequality constraints. However, existing score
tests address only the joint testing of all parameters of interest; a leading
example is testing whether all ARCH parameters or variances of random coefficients
are zero or not. In such testing problems, rejection of the null hypothesis
does not provide evidence on the rejection of specific elements of the parameter of
interest. This paper proposes a class of one-sided score tests for testing a
model parameter that is subject to inequality constraints. The proposed tests are
constructed based on the minimum of a set of $p$-values. The minimand includes
the $p$-values for testing individual elements of the parameter of interest using
individual scores. It may be extended to include a $p$-value of existing score
tests. We show that our tests perform better than, or as well as, existing score
tests in terms of joint testing, and furthermore have the added
benefit of allowing for simultaneously testing individual elements of the parameter
of interest. The added benefit is appealing in the sense that it can identify a
model without estimating it. We illustrate our tests in linear regression
models, ARCH and random coefficient models. A detailed simulation study is
provided to examine the finite-sample performance of the proposed tests, and we
find that our tests perform well, as expected.",MinP Score Tests with an Inequality Constrained Parameter Space,2021-07-13 16:42:47,"Giuseppe Cavaliere, Zeng-Hua Lu, Anders Rahbek, Yuhong Yang","http://arxiv.org/abs/2107.06089v1, http://arxiv.org/pdf/2107.06089v1",econ.EM
30455,em,"This paper provides three results for SVARs under the assumption that the
primitive shocks are mutually independent. First, a framework is proposed to
accommodate a disaster-type variable with infinite variance into a SVAR. We
show that the least squares estimates of the SVAR are consistent but have
non-standard asymptotics. Second, the disaster shock is identified as the
component with the largest kurtosis and whose impact effect is negative. An
estimator that is robust to infinite variance is used to recover the mutually
independent components. Third, an independence test on the residuals
pre-whitened by the Choleski decomposition is proposed to test the restrictions
imposed on a SVAR. The test can be applied whether the data have fat or thin
tails, and to over-identified as well as exactly identified models. Three applications are
considered. In the first, the independence test is used to shed light on the
conflicting evidence regarding the role of uncertainty in economic
fluctuations. In the second, disaster shocks are shown to have short term
economic impact arising mostly from feedback dynamics. The third uses the
framework to study the dynamic effects of economic shocks post-covid.",Time Series Estimation of the Dynamic Effects of Disaster-Type Shock,2021-07-14 15:56:46,"Richard Davis, Serena Ng","http://arxiv.org/abs/2107.06663v3, http://arxiv.org/pdf/2107.06663v3",stat.ME
30456,em,"We consider a class of semi-parametric dynamic models with strong white noise
errors. This class of processes includes the standard Vector Autoregressive
(VAR) model, the nonfundamental structural VAR, the mixed causal-noncausal
models, as well as nonlinear dynamic models such as the (multivariate) ARCH-M
model. For estimation of processes in this class, we propose the Generalized
Covariance (GCov) estimator, which is obtained by minimizing a residual-based
multivariate portmanteau statistic as an alternative to the Generalized Method
of Moments. We derive the asymptotic properties of the GCov estimator and of
the associated residual-based portmanteau statistic. Moreover, we show that the
GCov estimators are semi-parametrically efficient and the residual-based
portmanteau statistics are asymptotically chi-square distributed. The finite
sample performance of the GCov estimator is illustrated in a simulation study.
The estimator is also applied to a dynamic model of cryptocurrency prices.",Generalized Covariance Estimator,2021-07-14 23:26:57,"Christian Gourieroux, Joann Jasiak","http://arxiv.org/abs/2107.06979v1, http://arxiv.org/pdf/2107.06979v1",econ.EM
30457,em,"Macroeconomists using large datasets often face the choice of working with
either a large Vector Autoregression (VAR) or a factor model. In this paper, we
develop methods for combining the two using a subspace shrinkage prior.
Subspace priors shrink towards a class of functions rather than directly
forcing the parameters of a model towards some pre-specified location. We
develop a conjugate VAR prior which shrinks towards the subspace which is
defined by a factor model. Our approach allows for estimating the strength of
the shrinkage as well as the number of factors. After establishing the
theoretical properties of our proposed prior, we carry out simulations and
apply it to US macroeconomic data. Using simulations we show that our framework
successfully detects the number of factors. In a forecasting exercise involving
a large macroeconomic data set we find that combining VARs with factor models
using our prior can lead to forecast improvements.",Subspace Shrinkage in Conjugate Bayesian Vector Autoregressions,2021-07-16 13:15:32,"Florian Huber, Gary Koop","http://arxiv.org/abs/2107.07804v1, http://arxiv.org/pdf/2107.07804v1",econ.EM
30458,em,"Empirical regression discontinuity (RD) studies often use covariates to
increase the precision of their estimates. In this paper, we propose a novel
class of estimators that use such covariate information more efficiently than
the linear adjustment estimators that are currently used widely in practice.
Our approach can accommodate a possibly large number of either discrete or
continuous covariates. It involves running a standard RD analysis with an
appropriately modified outcome variable, which takes the form of the difference
between the original outcome and a function of the covariates. We characterize
the function that leads to the estimator with the smallest asymptotic variance,
and show how it can be estimated via modern machine learning, nonparametric
regression, or classical parametric methods. The resulting estimator is easy to
implement, as tuning parameters can be chosen as in a conventional RD analysis.
An extensive simulation study illustrates the performance of our approach.",Flexible Covariate Adjustments in Regression Discontinuity Designs,2021-07-16 18:00:06,"Claudia Noack, Tomasz Olma, Christoph Rothe","http://arxiv.org/abs/2107.07942v2, http://arxiv.org/pdf/2107.07942v2",econ.EM
30459,em,"Latent variable models are becoming increasingly popular in economics for
high-dimensional categorical data such as text and surveys. Often the resulting
low-dimensional representations are plugged into downstream econometric models
that ignore the statistical structure of the upstream model, which presents
serious challenges for valid inference. We show how Hamiltonian Monte Carlo
(HMC) implemented with parallelized automatic differentiation provides a
computationally efficient, easy-to-code, and statistically robust solution for
this problem. Via a series of applications, we show that modeling integrated
structure can non-trivially affect inference and that HMC appears to markedly
outperform current approaches to inference in integrated models.",Hamiltonian Monte Carlo for Regression with High-Dimensional Categorical Data,2021-07-16 23:40:54,"Szymon Sacher, Laura Battaglia, Stephen Hansen","http://arxiv.org/abs/2107.08112v1, http://arxiv.org/pdf/2107.08112v1",econ.EM
30460,em,"This paper extends the idea of decoupling shrinkage and sparsity for
continuous priors to Bayesian Quantile Regression (BQR). The procedure follows
two steps: In the first step, we shrink the quantile regression posterior
through state-of-the-art continuous priors and in the second step, we sparsify
the posterior through an efficient variant of the adaptive lasso, the signal
adaptive variable selection (SAVS) algorithm. We propose a new variant of the
SAVS which automates the choice of penalisation through quantile-specific
loss functions that are valid in high dimensions. We show in large-scale
simulations that our selection procedure decreases bias irrespective of the
true underlying degree of sparsity in the data, compared to the un-sparsified
regression posterior. We apply our two-step approach to a high dimensional
growth-at-risk (GaR) exercise. The prediction accuracy of the un-sparsified
posterior is retained while yielding interpretable quantile-specific variable
selection results. Our procedure can be used to communicate to policymakers
which variables drive downside risk to the macro economy.",Decoupling Shrinkage and Selection for the Bayesian Quantile Regression,2021-07-18 20:22:33,"David Kohns, Tibor Szendrei","http://arxiv.org/abs/2107.08498v1, http://arxiv.org/pdf/2107.08498v1",econ.EM
30461,em,"Income inequality estimators are biased in small samples, leading generally
to an underestimation. This aspect deserves particular attention when
estimating inequality in small domains and performing small area estimation at
the area level. We propose a bias correction framework for a large class of
inequality measures comprising the Gini Index, the Generalized Entropy and the
Atkinson index families by accounting for complex survey designs. The proposed
methodology does not require any parametric assumption on income distribution,
making it very flexible. A design-based performance evaluation of our proposal has
been carried out using EU-SILC data; the results show a noticeable bias
reduction for all the measures. Lastly, an illustrative example of application
in small area estimation confirms that ignoring ex-ante bias correction
leads to model misspecification.",Mind the Income Gap: Bias Correction of Inequality Estimators in Small-Sized Samples,2021-07-19 18:06:26,"Silvia De Nicolò, Maria Rosaria Ferrante, Silvia Pacei","http://arxiv.org/abs/2107.08950v3, http://arxiv.org/pdf/2107.08950v3",stat.ME
30462,em,"We examine machine learning and factor-based portfolio optimization. We find
that factors based on autoencoder neural networks exhibit a weaker relationship
with commonly used characteristic-sorted portfolios than popular dimensionality
reduction techniques. Machine learning methods also lead to covariance and
portfolio weight structures that diverge from simpler estimators.
Minimum-variance portfolios using latent factors derived from autoencoders and
sparse methods outperform simpler benchmarks in terms of risk minimization.
These effects are amplified for investors with an increased sensitivity to
risk-adjusted returns, during high volatility periods or when accounting for
tail risk.",Machine Learning and Factor-Based Portfolio Optimization,2021-07-29 12:58:37,"Thomas Conlon, John Cotter, Iason Kynigakis","http://arxiv.org/abs/2107.13866v1, http://arxiv.org/pdf/2107.13866v1",q-fin.PM
30463,em,"We study inference on the common stochastic trends in a non-stationary,
$N$-variate time series $y_{t}$, in the possible presence of heavy tails. We
propose a novel methodology which does not require any knowledge or estimation
of the tail index, or even knowledge as to whether certain moments (such as the
variance) exist or not, and develop an estimator of the number of stochastic
trends $m$ based on the eigenvalues of the sample second moment matrix of
$y_{t}$. We study the rates of such eigenvalues, showing that the first $m$
ones diverge, as the sample size $T$ passes to infinity, at a rate faster by
$O\left(T \right)$ than the remaining $N-m$ ones, irrespective of the tail
index. We thus exploit this eigen-gap by constructing, for each eigenvalue, a
test statistic which diverges to positive infinity or drifts to zero according
to whether the relevant eigenvalue belongs to the set of the first $m$
eigenvalues or not. We then construct a randomised statistic based on this,
using it as part of a sequential testing procedure, ensuring consistency of the
resulting estimator of $m$. We also discuss an estimator of the common trends
based on principal components and show that, up to an invertible linear
transformation, such estimator is consistent in the sense that the estimation
error is of smaller order than the trend itself. Finally, we also consider the
case in which we relax the standard assumption of \textit{i.i.d.} innovations,
by allowing for heterogeneity of a very general form in the scale of the
innovations. A Monte Carlo study shows that the proposed estimator for $m$
performs particularly well, even in samples of small size. We complete the
paper by presenting four illustrative applications covering commodity prices,
interest rates data, long run PPP and cryptocurrency markets.",Inference in heavy-tailed non-stationary multivariate time series,2021-07-29 14:02:30,"Matteo Barigozzi, Giuseppe Cavaliere, Lorenzo Trapani","http://arxiv.org/abs/2107.13894v1, http://arxiv.org/pdf/2107.13894v1",econ.EM
30464,em,"We develop a Stata command, bootranktest, for implementing the matrix rank
test of Chen and Fang (2019) in linear instrumental variable regression models.
Existing rank tests employ critical values that may be too small, and hence may
not even be first-order valid in the sense that they may fail to control the
Type I error. By appealing to the bootstrap, they devise a test that overcomes
the deficiency of existing tests. The command bootranktest implements the
two-step version of their test, and also the analytic version if chosen. The
command also accommodates data with temporal and cluster dependence.",Implementing an Improved Test of Matrix Rank in Stata,2021-08-01 20:49:11,"Qihui Chen, Zheng Fang, Xun Huang","http://arxiv.org/abs/2108.00511v1, http://arxiv.org/pdf/2108.00511v1",econ.EM
30465,em,"It is important for policymakers to understand which financial policies are
effective in increasing climate risk disclosure in corporate reporting. We use
machine learning to automatically identify disclosures of five different types
of climate-related risks. For this purpose, we have created a dataset of over
120 manually-annotated annual reports by European firms. Applying our approach
to reporting of 337 firms over the last 20 years, we find that risk disclosure
is increasing. Disclosure of transition risks grows more dynamically than
physical risks, and there are marked differences across industries.
Country-specific dynamics indicate that regulatory environments potentially
have an important role to play for increasing disclosure.",Automated Identification of Climate Risk Disclosures in Annual Corporate Reports,2021-08-03 14:14:05,"David Friederich, Lynn H. Kaack, Alexandra Luccioni, Bjarne Steffen","http://arxiv.org/abs/2108.01415v1, http://arxiv.org/pdf/2108.01415v1",cs.CY
30466,em,"We study the impact of climate volatility on economic growth exploiting data
on 133 countries between 1960 and 2019. We show that the conditional (ex ante)
volatility of annual temperatures increased steadily over time, rendering
climate conditions less predictable across countries, with important
implications for growth. Controlling for concomitant changes in temperatures, a
+1 degree C increase in temperature volatility causes on average a 0.3 percent
decline in GDP growth and a 0.7 percent increase in the volatility of GDP.
Unlike changes in average temperatures, changes in temperature volatility
affect both rich and poor countries.",The macroeconomic cost of climate volatility,2021-07-21 18:22:11,"Piergiorgio Alessandri, Haroon Mumtaz","http://arxiv.org/abs/2108.01617v2, http://arxiv.org/pdf/2108.01617v2",q-fin.GN
30467,em,"In this work, we propose a novel framework for density forecast combination
by constructing time-varying weights based on time series features, which is
called Feature-based Bayesian Forecasting Model Averaging (FEBAMA). Our
framework estimates weights in the forecast combination via Bayesian log
predictive scores, in which the optimal forecasting combination is determined
by time series features from historical information. In particular, we use an
automatic Bayesian variable selection method to weigh the importance of
different features. As a result, our approach has better interpretability
compared to other black-box forecasting combination schemes. We apply our
framework to stock market data and M3 competition data. Based on our structure,
a simple maximum-a-posteriori scheme outperforms benchmark methods, and
Bayesian variable selection can further enhance the accuracy for both point and
density forecasts.",Bayesian forecast combination using time-varying features,2021-08-04 17:32:14,"Li Li, Yanfei Kang, Feng Li","http://arxiv.org/abs/2108.02082v3, http://arxiv.org/pdf/2108.02082v3",econ.EM
30468,em,"This article studies experimental design in settings where the experimental
units are large aggregate entities (e.g., markets), and only one or a small
number of units can be exposed to the treatment. In such settings,
randomization of the treatment may result in treated and control groups with
very different characteristics at baseline, inducing biases. We propose a
variety of synthetic control designs (Abadie, Diamond and Hainmueller, 2010,
Abadie and Gardeazabal, 2003) as experimental designs to select treated units
in non-randomized experiments with large aggregate units, as well as the
untreated units to be used as a control group. Average potential outcomes are
estimated as weighted averages of treated units, for potential outcomes with
treatment -- and control units, for potential outcomes without treatment. We
analyze the properties of estimators based on synthetic control designs and
propose new inferential techniques. We show that in experimental settings with
aggregate units, synthetic control designs can substantially reduce estimation
biases in comparison to randomization of the treatment.",Synthetic Controls for Experimental Design,2021-08-04 20:49:58,"Alberto Abadie, Jinglong Zhao","http://arxiv.org/abs/2108.02196v3, http://arxiv.org/pdf/2108.02196v3",stat.ME
30469,em,"We consider a high-dimensional model in which variables are observed over
time and space. The model consists of a spatio-temporal regression containing a
time lag and a spatial lag of the dependent variable. Unlike classical spatial
autoregressive models, we do not rely on a predetermined spatial interaction
matrix, but infer all spatial interactions from the data. Assuming sparsity, we
estimate the spatial and temporal dependence in a fully data-driven way by penalizing a
set of Yule-Walker equations. This regularization can be left unstructured, but
we also propose customized shrinkage procedures when observations originate
from spatial grids (e.g. satellite images). Finite sample error bounds are
derived and estimation consistency is established in an asymptotic framework
wherein the sample size and the number of spatial units diverge jointly.
Exogenous variables can be included as well. A simulation exercise shows strong
finite sample performance compared to competing procedures. As an empirical
application, we model satellite measured NO2 concentrations in London. Our
approach delivers forecast improvements over a competitive benchmark and we
discover evidence for strong spatial interactions.",Sparse Generalized Yule-Walker Estimation for Large Spatio-temporal Autoregressions with an Application to NO2 Satellite Data,2021-08-06 00:51:45,"Hanno Reuvers, Etienne Wijler","http://arxiv.org/abs/2108.02864v2, http://arxiv.org/pdf/2108.02864v2",econ.EM
30470,em,"The Gini index signals only the dispersion of the distribution and is not
very sensitive to income differences at the tails of the distribution. The
widely used index of inequality can be adjusted to also measure distributional
asymmetry by attaching weights to the distances between the Lorenz curve and
the 45-degree line. The measure is equivalent to the Gini if the distribution
is symmetric. The alternative measure of inequality inherits good properties
from the Gini but is more sensitive to changes in the extremes of the income
distribution.",Including the asymmetry of the Lorenz curve into measures of economic inequality,2021-08-08 15:35:19,Mario Schlemmer,"http://arxiv.org/abs/2108.03623v2, http://arxiv.org/pdf/2108.03623v2",econ.EM
30471,em,"This study aims to show the fundamental difference between logistic
regression and Bayesian classifiers in the case of exponential and
non-exponential families of distributions, yielding the following findings.
First, logistic regression is a less general representation of a Bayesian
classifier. Second, one should assume distributions of classes for the correct
specification of logistic regression equations. Third, in specific cases, there
is no difference between the predicted probabilities from a correctly specified
generative Bayesian classifier and a discriminative logistic regression.",A Theoretical Analysis of Logistic Regression and Bayesian Classifiers,2021-08-08 22:23:29,Roman V. Kirin,"http://arxiv.org/abs/2108.03715v1, http://arxiv.org/pdf/2108.03715v1",econ.EM
30472,em,"We develop a new approach for identifying and estimating average causal
effects in panel data under a linear factor model with unmeasured confounders.
Compared to other methods tackling factor models such as synthetic controls and
matrix completion, our method does not require the number of time periods to
grow infinitely. Instead, we draw inspiration from the two-way fixed effect
model as a special case of the linear factor model, where a simple
difference-in-differences transformation identifies the effect. We show that
analogous, albeit more complex, transformations exist in the more general
linear factor model, providing a new means to identify the effect in that
model. In fact many such transformations exist, called bridge functions, all
identifying the same causal effect estimand. This poses a unique challenge for
estimation and inference, which we solve by targeting the minimal bridge
function using a regularized estimation approach. We prove that our resulting
average causal effect estimator is root-N consistent and asymptotically normal,
and we provide asymptotically valid confidence intervals. Finally, we provide
extensions for the case of a linear factor model with time-varying unmeasured
confounders.",Controlling for Unmeasured Confounding in Panel Data Using Minimal Bridge Functions: From Two-Way Fixed Effects to Factor Models,2021-08-09 10:42:58,"Guido Imbens, Nathan Kallus, Xiaojie Mao","http://arxiv.org/abs/2108.03849v1, http://arxiv.org/pdf/2108.03849v1",stat.ME
30541,em,"A new integer-valued autoregressive process (INAR) with Generalised
Lagrangian Katz (GLK) innovations is defined. We show that our GLK-INAR process
is stationary, discrete semi-self-decomposable, infinitely divisible, and
provides a flexible modelling framework for count data allowing for under- and
over-dispersion, asymmetry, and excess kurtosis. A Bayesian inference
framework and an efficient posterior approximation procedure based on Markov
Chain Monte Carlo are provided. The proposed model family is applied to a
Google Trend dataset which proxies the public concern about climate change
around the world. The empirical results provide new evidence of heterogeneity
across countries and keywords in the persistence, uncertainty, and long-run
public awareness level.",First-order integer-valued autoregressive processes with Generalized Katz innovations,2022-02-04 12:10:39,"Federico Bassetti, Giulia Carallo, Roberto Casarin","http://arxiv.org/abs/2202.02029v1, http://arxiv.org/pdf/2202.02029v1",stat.ME
30473,em,"This paper develops a general methodology to conduct statistical inference
for observations indexed by multiple sets of entities. We propose a novel
multiway empirical likelihood statistic that converges to a chi-square
distribution in the non-degenerate case, where the corresponding Hoeffding-type
decomposition is dominated by linear terms. Our methodology is related to the
notion of jackknife empirical likelihood but the leave-out pseudo values are
constructed by leaving columns or rows. We further develop a modified version
of our multiway empirical likelihood statistic, which converges to a chi-square
distribution regardless of the degeneracy, and discover its desirable
higher-order property compared to the t-ratio by the conventional Eicker-White
type variance estimator. The proposed methodology is illustrated by several
important statistical problems, such as bipartite network, generalized
estimating equations, and three-way observations.",Multiway empirical likelihood,2021-08-10 21:13:22,"Harold D Chiang, Yukitoshi Matsushita, Taisuke Otsu","http://arxiv.org/abs/2108.04852v5, http://arxiv.org/pdf/2108.04852v5",stat.ME
30474,em,"A unified frequency domain cross-validation (FDCV) method is proposed to
obtain a heteroskedasticity and autocorrelation consistent (HAC) standard
error. This method enables model/tuning parameter selection across both
parametric and nonparametric spectral estimators simultaneously. The candidate
class for this approach consists of restricted maximum likelihood-based (REML)
autoregressive spectral estimators and lag-weights estimators with the Parzen
kernel. Additionally, an efficient technique for computing the REML estimators
of autoregressive models is provided. Through simulations, the reliability of
the FDCV method is demonstrated, comparing favorably with popular HAC
estimators such as Andrews-Monahan and Newey-West.",A Unified Frequency Domain Cross-Validatory Approach to HAC Standard Error Estimation,2021-08-13 10:02:58,"Zhihao Xu, Clifford M. Hurvich","http://arxiv.org/abs/2108.06093v3, http://arxiv.org/pdf/2108.06093v3",econ.EM
30475,em,"We provide a sharp identification region for discrete choice models where
consumers' preferences are not necessarily complete and only aggregate choice
data is available. Behavior is modeled using an upper and a lower utility for
each alternative so that non-comparability can arise. The identification region
places intuitive bounds on the probability distribution of upper and lower
utilities. We show that the existence of an instrumental variable can be used
to reject the hypothesis that the preferences of all consumers are complete. We
apply our methods to data from the 2018 mid-term elections in Ohio.",Identification of Incomplete Preferences,2021-08-13 18:11:31,"Luca Rigotti, Arie Beresteanu","http://arxiv.org/abs/2108.06282v3, http://arxiv.org/pdf/2108.06282v3",econ.EM
30476,em,"Using a state-space system, I forecasted the US Treasury yields by employing
frequentist and Bayesian methods after first decomposing the yields of varying
maturities into its unobserved term structure factors. Then, I exploited the
structure of the state-space model to forecast the Treasury yields and compared
the forecast performance of each model using mean squared forecast error. Among
the frequentist methods, I applied the two-step Diebold-Li, two-step principal
components, and one-step Kalman filter approaches. Likewise, I imposed the five
different priors in Bayesian VARs: Diffuse, Minnesota, natural conjugate, the
independent normal-inverse-Wishart, and the stochastic search variable
selection priors. After forecasting the Treasury yields for 9 different
forecast horizons, I found that the BVAR with Minnesota prior generally
minimizes the loss function. I augmented the above BVARs by including
macroeconomic variables and constructed impulse response functions with a
recursive ordering identification scheme. Finally, I fitted a sign-restricted
BVAR with dummy observations.",Dimensionality Reduction and State Space Systems: Forecasting the US Treasury Yields Using Frequentist and Bayesian VARs,2021-08-14 17:47:57,Sudiksha Joshi,"http://arxiv.org/abs/2108.06553v1, http://arxiv.org/pdf/2108.06553v1",econ.EM
30477,em,"We consider a causal inference model in which individuals interact in a
social network and they may not comply with the assigned treatments. In
particular, we suppose that the form of network interference is unknown to
researchers. To estimate meaningful causal parameters in this situation, we
introduce a new concept of exposure mapping, which summarizes potentially
complicated spillover effects into a fixed dimensional statistic of
instrumental variables. We investigate identification conditions for the
intention-to-treat effects and the average treatment effects for compliers,
while explicitly considering the possibility of misspecification of exposure
mapping. Based on our identification results, we develop nonparametric
estimation procedures via inverse probability weighting. Their asymptotic
properties, including consistency and asymptotic normality, are investigated
using an approximate neighborhood interference framework. For an empirical
illustration, we apply our method to experimental data on the anti-conflict
intervention school program. The proposed methods are readily available with
the companion R package latenetwork.",Causal Inference with Noncompliance and Unknown Interference,2021-08-17 09:12:37,"Tadao Hoshino, Takahide Yanagi","http://arxiv.org/abs/2108.07455v5, http://arxiv.org/pdf/2108.07455v5",stat.ME
30478,em,"This paper develops an individual-based stochastic network SIR model for the
empirical analysis of the Covid-19 pandemic. It derives moment conditions for
the number of infected and active cases for single as well as multigroup
epidemic models. These moment conditions are used to investigate the
identification and estimation of the transmission rates. The paper then
proposes a method that jointly estimates the transmission rate and the
magnitude of under-reporting of infected cases. Empirical evidence on six
European countries matches the simulated outcomes once the under-reporting of
infected cases is addressed. It is estimated that the number of actual cases
could be between 4 to 10 times higher than the reported numbers in October 2020
and declined to 2 to 3 times in April 2021. The calibrated models are used in
the counterfactual analyses of the impact of social distancing and vaccination
on the epidemic evolution, and the timing of early interventions in the UK and
Germany.",Matching Theory and Evidence on Covid-19 using a Stochastic Network SIR Model,2021-09-01 15:03:12,"M. Hashem Pesaran, Cynthia Fan Yang","http://arxiv.org/abs/2109.00321v2, http://arxiv.org/pdf/2109.00321v2",econ.EM
30485,em,"We study estimation of the conditional tail average treatment effect (CTATE),
defined as a difference between conditional tail expectations of potential
outcomes. The CTATE can capture heterogeneity and deliver aggregated local
information of treatment effects over different quantile levels, and is closely
related to the notion of second order stochastic dominance and the Lorenz
curve. These properties render it a valuable tool for policy evaluations. We
consider a semiparametric treatment effect framework under endogeneity for the
CTATE estimation using a newly introduced class of consistent loss functions
jointly for the conditional tail expectation and quantile. We establish
asymptotic theory of our proposed CTATE estimator and provide an efficient
algorithm for its implementation. We then apply the method to the evaluation of
effects from participating in programs of the Job Training Partnership Act in
the US.",Estimations of the Conditional Tail Average Treatment Effect,2021-09-18 04:05:49,"Le-Yu Chen, Yu-Min Yen","http://arxiv.org/abs/2109.08793v2, http://arxiv.org/pdf/2109.08793v2",stat.AP
30479,em,"We develop new semiparametric methods for estimating treatment effects. We
focus on settings where the outcome distributions may be thick tailed, where
treatment effects may be small, where sample sizes are large and where
assignment is completely random. This setting is of particular interest in
recent online experimentation. We propose using parametric models for the
treatment effects, leading to semiparametric models for the outcome
distributions. We derive the semiparametric efficiency bound for the treatment
effects for this setting, and propose efficient estimators. In the leading case
with constant quantile treatment effects one of the proposed efficient
estimators has an interesting interpretation as a weighted average of quantile
treatment effects, with the weights proportional to minus the second derivative
of the log of the density of the potential outcomes. Our analysis also suggests
an extension of Huber's model and trimmed mean to include asymmetry.",Semiparametric Estimation of Treatment Effects in Randomized Experiments,2021-09-06 20:01:03,"Susan Athey, Peter J. Bickel, Aiyou Chen, Guido W. Imbens, Michael Pollmann","http://dx.doi.org/10.1093/jrsssb/qkad072, http://arxiv.org/abs/2109.02603v2, http://arxiv.org/pdf/2109.02603v2",stat.ME
30480,em,"Pervasive cross-section dependence is increasingly recognized as a
characteristic of economic data and the approximate factor model provides a
useful framework for analysis. Assuming a strong factor structure where
$\Lambda'\Lambda/N^\alpha$ is positive definite in the limit when $\alpha=1$, early
work established convergence of the principal component estimates of the
factors and loadings up to a rotation matrix. This paper shows that the
estimates are still consistent and asymptotically normal when $\alpha\in(0,1]$
albeit at slower rates and under additional assumptions on the sample size. The
results hold whether $\alpha$ is constant or varies across factor loadings. The
framework developed for heterogeneous loadings and the simplified proofs that
can be also used in strong factor analysis are of independent interest.",Approximate Factor Models with Weaker Loadings,2021-09-08 19:52:37,"Jushan Bai, Serena Ng","http://arxiv.org/abs/2109.03773v4, http://arxiv.org/pdf/2109.03773v4",econ.EM
30481,em,"Autoregressive conditional duration (ACD) models are primarily used to deal
with data arising from times between two successive events. These models are
usually specified in terms of a time-varying conditional mean or median
duration. In this paper, we relax this assumption and consider a conditional
quantile approach to facilitate the modeling of different percentiles. The
proposed ACD quantile model is based on a skewed version of Birnbaum-Saunders
distribution, which provides better fitting of the tails than the traditional
Birnbaum-Saunders distribution, in addition to advancing the implementation of
an expectation conditional maximization (ECM) algorithm. A Monte Carlo
simulation study is performed to assess the behavior of the model as well as
the parameter estimation method and to evaluate a form of residual. A real
financial transaction data set is finally analyzed to illustrate the proposed
approach.",On a quantile autoregressive conditional duration model applied to high-frequency financial data,2021-09-08 21:00:47,"Helton Saulo, Narayanaswamy Balakrishnan, Roberto Vila","http://arxiv.org/abs/2109.03844v1, http://arxiv.org/pdf/2109.03844v1",stat.ME
30482,em,"Implicit copulas are the most common copula choice for modeling dependence in
high dimensions. This broad class of copulas is introduced and surveyed,
including elliptical copulas, skew $t$ copulas, factor copulas, time series
copulas and regression copulas. The common auxiliary representation of implicit
copulas is outlined, and how this makes them both scalable and tractable for
statistical modeling. Issues such as parameter identification, extended
likelihoods for discrete or mixed data, parsimony in high dimensions, and
simulation from the copula model are considered. Bayesian approaches to
estimate the copula parameters, and predict from an implicit copula model, are
outlined. Particular attention is given to implicit copula processes
constructed from time series and regression models, which is at the forefront
of current research. Two econometric applications -- one from macroeconomic
time series and the other from financial asset pricing -- illustrate the
advantages of implicit copula models.",Implicit Copulas: An Overview,2021-09-10 10:53:59,Michael Stanley Smith,"http://arxiv.org/abs/2109.04718v1, http://arxiv.org/pdf/2109.04718v1",stat.ME
30483,em,"Truncated conditional expectation functions are objects of interest in a wide
range of economic applications, including income inequality measurement,
financial risk management, and impact evaluation. They typically involve
truncating the outcome variable above or below certain quantiles of its
conditional distribution. In this paper, based on local linear methods, a
novel, two-stage, nonparametric estimator of such functions is proposed. In
this estimation problem, the conditional quantile function is a nuisance
parameter that has to be estimated in the first stage. The proposed estimator
is insensitive to the first-stage estimation error owing to the use of a
Neyman-orthogonal moment in the second stage. This construction ensures that
inference methods developed for the standard nonparametric regression can be
readily adapted to conduct inference on truncated conditional expectations. As
an extension, estimation with an estimated truncation quantile level is
considered. The proposed estimator is applied in two empirical settings: sharp
regression discontinuity designs with a manipulated running variable and
randomized experiments with sample selection.",Nonparametric Estimation of Truncated Conditional Expectation Functions,2021-09-13 20:43:00,Tomasz Olma,"http://arxiv.org/abs/2109.06150v1, http://arxiv.org/pdf/2109.06150v1",econ.EM
30484,em,"This paper studies the case of possibly high-dimensional covariates in the
regression discontinuity design (RDD) analysis. In particular, we propose
estimation and inference methods for the RDD models with covariate selection
which perform stably regardless of the number of covariates. The proposed
methods combine the local approach using kernel weights with
$\ell_{1}$-penalization to handle high-dimensional covariates. We provide
theoretical and numerical results which illustrate the usefulness of the
proposed methods. Theoretically, we present risk and coverage properties for
our point estimation and inference methods, respectively. In a certain special
case, the proposed estimator becomes more efficient than the conventional
covariate adjusted estimator at the cost of an additional sparsity condition.
Numerically, our simulation experiments and empirical example show the robust
behaviors of the proposed methods to the number of covariates in terms of bias
and variance for point estimation and coverage probability and interval length
for inference.",Regression Discontinuity Design with Potentially Many Covariates,2021-09-17 08:04:06,"Yoichi Arai, Taisuke Otsu, Myung Hwan Seo","http://arxiv.org/abs/2109.08351v3, http://arxiv.org/pdf/2109.08351v3",econ.EM
30542,em,"This article proposes an estimation method to detect breakpoints for linear
time series models whose parameters jump only rarely. Its basic idea
owes to the group LASSO (group least absolute shrinkage and selection operator).
In practice, the method provides estimates of the time-varying parameters of
such models. An example shows that our method can detect each structural
breakpoint's date and magnitude.",Detecting Structural Breaks in Foreign Exchange Markets by using the group LASSO technique,2022-02-07 11:04:01,Mikio Ito,"http://arxiv.org/abs/2202.02988v1, http://arxiv.org/pdf/2202.02988v1",econ.EM
30486,em,"This paper provides a design-based framework for variance (bound) estimation
in experimental analysis. Results are applicable to virtually any combination
of experimental design, linear estimator (e.g., difference-in-means, OLS, WLS)
and variance bound, allowing for unified treatment and a basis for systematic
study and comparison of designs using matrix spectral analysis. A proposed
variance estimator reproduces Eicker-Huber-White (aka. ""robust"",
""heteroskedastic consistent"", ""sandwich"", ""White"", ""Huber-White"", ""HC"", etc.)
standard errors and ""cluster-robust"" standard errors as special cases. While
past work has shown algebraic equivalences between design-based and the
so-called ""robust"" standard errors under some designs, this paper motivates
them for a wide array of design-estimator-bound triplets. In so doing, it
provides a clearer and more general motivation for variance estimators.",Unifying Design-based Inference: On Bounding and Estimating the Variance of any Linear Estimator in any Experimental Design,2021-09-19 23:44:33,Joel A. Middleton,"http://arxiv.org/abs/2109.09220v1, http://arxiv.org/pdf/2109.09220v1",stat.ME
30487,em,"This chapter presents an overview of a specific form of limited dependent
variable models, namely discrete choice models, where the dependent (response
or outcome) variable takes values which are discrete, inherently ordered, and
characterized by an underlying continuous latent variable. Within this setting,
the dependent variable may take only two discrete values (such as 0 and 1)
giving rise to binary models (e.g., probit and logit models) or more than two
values (say $j=1,2, \ldots, J$, where $J$ is some integer, typically small)
giving rise to ordinal models (e.g., ordinal probit and ordinal logit models).
In these models, the primary goal is to model the probability of
responses/outcomes conditional on the covariates. We connect the outcomes of a
discrete choice model to the random utility framework in economics, discuss
estimation techniques, present the calculation of covariate effects and
measures to assess model fitting. Some recent advances in discrete data
modeling are also discussed. Following the theoretical review, we utilize the
binary and ordinal models to analyze public opinion on marijuana legalization
and the extent of legalization -- a socially relevant but controversial topic
in the United States. We obtain several interesting results including that past
use of marijuana, belief about legalization and political partisanship are
important factors that shape the public opinion.",Modeling and Analysis of Discrete Response Data: Applications to Public Opinion on Marijuana Legalization in the United States,2021-09-21 15:09:00,"Mohit Batham, Soudeh Mirghasemi, Mohammad Arshad Rahman, Manini Ojha","http://arxiv.org/abs/2109.10122v2, http://arxiv.org/pdf/2109.10122v2",econ.EM
30488,em,"This paper aims to project a climate change scenario using a stochastic
paleotemperature time series model and compare it with the prevailing
consensus. The ARIMA - Autoregressive Integrated Moving Average Process model
was used for this purpose. The results show that the parameter estimates of the
model were below what is established by the anthropogenic current and
governmental organs, such as the IPCC (UN), considering a 100-year scenario,
which suggests a period of temperature reduction and a probable cooling. Thus,
we hope with this study to contribute to the discussion by adding a statistical
element of paleoclimate in counterpoint to the current scientific consensus and
place the debate in a long-term historical dimension, in line with other
existing research on the topic.",A new look at the anthropogenic global warming consensus: an econometric forecast based on the ARIMA model of paleoclimate series,2021-09-21 22:45:49,"Gilmar V. F. Santos, Lucas G. Cordeiro, Claudio A. Rojo, Edison L. Leismann","http://arxiv.org/abs/2109.10419v2, http://arxiv.org/pdf/2109.10419v2",econ.EM
30489,em,"When randomized trials are run in a marketplace equilibriated by prices,
interference arises. To analyze this, we build a stochastic model of treatment
effects in equilibrium. We characterize the average direct (ADE) and indirect
treatment effect (AIE) asymptotically. A standard RCT can consistently estimate
the ADE, but confidence intervals and AIE estimation require price elasticity
estimates, which we provide using a novel experimental design. We define
heterogeneous treatment effects and derive an optimal targeting rule that meets
an equilibrium stability condition. We illustrate our results using a freelance
labor market simulation and data from a cash transfer experiment.",Treatment Effects in Market Equilibrium,2021-09-23 23:59:38,"Evan Munro, Stefan Wager, Kuang Xu","http://arxiv.org/abs/2109.11647v2, http://arxiv.org/pdf/2109.11647v2",econ.EM
30490,em,"Sibling fixed effects (FE) models are useful for estimating causal treatment
effects while offsetting unobserved sibling-invariant confounding. However,
treatment estimates are biased if an individual's outcome affects their
sibling's outcome. We propose a robustness test for assessing the presence of
outcome-to-outcome interference in linear two-sibling FE models. We regress a
gain-score--the difference between siblings' continuous outcomes--on both
siblings' treatments and on a pre-treatment observed FE. Under certain
restrictions, the observed FE's partial regression coefficient signals the
presence of outcome-to-outcome interference. Monte Carlo simulations
demonstrated the robustness test under several models. We found that an
observed FE signaled outcome-to-outcome spillover if it was directly associated
with a sibling-invariant confounder of treatments and outcomes, directly
associated with a sibling's treatment, or directly and equally associated with
both siblings' outcomes. However, the robustness test collapsed if the observed
FE was directly but differentially associated with siblings' outcomes or if
outcomes affected siblings' treatments.",Assessing Outcome-to-Outcome Interference in Sibling Fixed Effects Models,2021-09-28 02:54:05,David C. Mallinson,"http://arxiv.org/abs/2109.13399v2, http://arxiv.org/pdf/2109.13399v2",stat.ME
30491,em,"A new mixture vector autoregressive model based on Gaussian and Student's $t$
distributions is introduced. As its mixture components, our model incorporates
conditionally homoskedastic linear Gaussian vector autoregressions and
conditionally heteroskedastic linear Student's $t$ vector autoregressions. For
a $p$th order model, the mixing weights depend on the full distribution of the
preceding $p$ observations, which leads to attractive theoretical properties
such as ergodicity and full knowledge of the stationary distribution of $p+1$
consecutive observations. A structural version of the model with statistically
identified shocks and a time-varying impact matrix is also proposed. The
empirical application studies asymmetries in the effects of the Euro area
monetary policy shock. Our model identifies two regimes: a high-growth regime
that is characterized by a positive output gap and mainly prevailing before the
Financial crisis, and a low-growth regime that is characterized by a negative but
volatile output gap and mainly prevailing after the Financial crisis. The
average inflationary effects of the monetary policy shock are stronger in the
high-growth regime than in the low-growth regime. On average, the effects of an
expansionary shock are less enduring than those of a contractionary shock. The CRAN
distributed R package gmvarkit accompanies the paper.",Gaussian and Student's $t$ mixture vector autoregressive model with application to the asymmetric effects of monetary policy shocks in the Euro area,2021-09-28 15:10:50,Savi Virolainen,"http://arxiv.org/abs/2109.13648v3, http://arxiv.org/pdf/2109.13648v3",econ.EM
30492,em,"A year following the initial COVID-19 outbreak in China, many countries have
approved emergency vaccines. Public-health practitioners and policymakers must
understand the predicted populational willingness for vaccines and implement
relevant stimulation measures. This study developed a framework for predicting
vaccination uptake rate based on traditional clinical data-involving an
autoregressive model with autoregressive integrated moving average (ARIMA)- and
innovative web search queries-involving a linear regression with ordinary least
squares/least absolute shrinkage and selection operator, and machine-learning
with boost and random forest. For accuracy, we implemented a stacking
regression for the clinical data and web search queries. The stacked regression
of ARIMA (1,0,8) for clinical data and boost with support vector machine for
web data formed the best model for forecasting vaccination speed in the US. The
stacked regression provided a more accurate forecast. These results can help
governments and policymakers predict vaccine demand and finance relevant
programs.",Forecasting the COVID-19 vaccine uptake rate: An infodemiological study in the US,2021-09-28 21:25:48,"Xingzuo Zhou, Yiang Li","http://dx.doi.org/10.1080/21645515.2021.2017216, http://arxiv.org/abs/2109.13971v2, http://arxiv.org/pdf/2109.13971v2",stat.AP
30493,em,"This study presents contemporaneous modeling of asset return and price range
within the framework of stochastic volatility with leverage. A new
representation of the probability density function for the price range is
provided, and its accurate sampling algorithm is developed. A Bayesian
estimation using Markov chain Monte Carlo (MCMC) method is provided for the
model parameters and unobserved variables. MCMC samples can be generated
rigorously, despite the estimation procedure requiring sampling from a density
function involving the sum of an infinite series. The empirical results obtained
using data from the U.S. market indices are consistent with the stylized facts
in the financial market, such as the existence of the leverage effect. In
addition, to explore the model's predictive ability, a model comparison based
on the volatility forecast performance is conducted.",Stochastic volatility model with range-based correction and leverage,2021-09-30 21:14:59,Yuta Kurose,"http://arxiv.org/abs/2110.00039v2, http://arxiv.org/pdf/2110.00039v2",stat.CO
30494,em,"We propose a simple dynamic model for estimating the relative contagiousness
of two virus variants. Maximum likelihood estimation and inference is
conveniently invariant to variation in the total number of cases over the
sample period and can be expressed as a logistic regression. We apply the model
to Danish SARS-CoV-2 variant data. We estimate the reproduction numbers of
Alpha and Delta to be larger than that of the ancestral variant by a factor of
1.51 [CI 95%: 1.50, 1.53] and 3.28 [CI 95%: 3.01, 3.58], respectively. In a
predominately vaccinated population, we estimate Omicron to be 3.15 [CI 95%:
2.83, 3.50] times more infectious than Delta. Forecasting the proportion of an
emerging virus variant is straightforward and we proceed to show how the
effective reproduction number for a new variant can be estimated without
contemporary sequencing results. This is useful for assessing the state of the
pandemic in real time as we illustrate empirically with the inferred effective
reproduction number for the Alpha variant.","Relative Contagiousness of Emerging Virus Variants: An Analysis of the Alpha, Delta, and Omicron SARS-CoV-2 Variants",2021-10-01 19:57:43,Peter Reinhard Hansen,"http://arxiv.org/abs/2110.00533v3, http://arxiv.org/pdf/2110.00533v3",econ.EM
30495,em,"This paper extends my research applying statistical decision theory to
treatment choice with sample data, using maximum regret to evaluate the
performance of treatment rules. The specific new contribution is to study as-if
optimization using estimates of illness probabilities in clinical choice
between surveillance and aggressive treatment. Beyond its specifics, the paper
sends a broad message. Statisticians and computer scientists have addressed
conditional prediction for decision making in indirect ways, the former
applying classical statistical theory and the latter measuring prediction
accuracy in test samples. Neither approach is satisfactory. Statistical
decision theory provides a coherent, generally applicable methodology.",Probabilistic Prediction for Binary Treatment Choice: with focus on personalized medicine,2021-10-02 21:34:59,Charles F. Manski,"http://arxiv.org/abs/2110.00864v1, http://arxiv.org/pdf/2110.00864v1",econ.EM
30496,em,"One of the most important studies in finance is to find out whether stock
returns could be predicted. This research aims to create a new multivariate
model, which includes dividend yield, earnings-to-price ratio, book-to-market
ratio as well as consumption-wealth ratio as explanatory variables, for future
stock returns predictions. The new multivariate model will be assessed for its
forecasting performance using empirical analysis. The empirical analysis is
performed on S&P500 quarterly data from Quarter 1, 1952 to Quarter 4, 2019 as
well as S&P500 monthly data from Month 12, 1920 to Month 12, 2019. Results have
shown that this new multivariate model has predictive power for future stock returns.
When compared to other benchmark models, the new multivariate model performs
the best in terms of the Root Mean Squared Error (RMSE) most of the time.",A New Multivariate Predictive Model for Stock Returns,2021-10-05 11:23:14,Jianying Xie,"http://arxiv.org/abs/2110.01873v1, http://arxiv.org/pdf/2110.01873v1",econ.EM
30497,em,"Gambits are central to human decision-making. Our goal is to provide a theory
of Gambits. A Gambit is a combination of psychological and technical factors
designed to disrupt predictable play. Chess provides an environment to study
gambits and behavioral game theory. Our theory is based on the Bellman
optimality path for sequential decision-making. This allows us to calculate the
$Q$-values of a Gambit where material (usually a pawn) is sacrificed for
dynamic play. On the empirical side, we study the effectiveness of a number of
popular chess Gambits. This is a natural setting as chess Gambits require a
sequential assessment of a set of moves (a.k.a. policy) after the Gambit has
been accepted. Our analysis uses Stockfish 14.1 to calculate the optimal
Bellman $Q$ values, which fundamentally measures if a position is winning or
losing. To test whether Bellman's equation holds in play, we estimate the
transition probabilities to the next board state via a database of expert human
play. This then allows us to test whether the \emph{Gambiteer} is following the
optimal path in his decision-making. Our methodology is applied to the popular
Stafford and reverse Stafford (a.k.a. Boden-Kieretsky-Morphy) Gambit and other
common ones including the Smith-Morra, Goring, Danish and Halloween Gambits. We
build on research in human decision-making by proving an irrational skewness
preference within agents in chess. We conclude with directions for future
research.",Gambits: Theory and Evidence,2021-10-05 15:02:11,"Shiva Maharaj, Nicholas Polson, Christian Turk","http://arxiv.org/abs/2110.02755v5, http://arxiv.org/pdf/2110.02755v5",econ.TH
34455,th,"We propose an extensive-form solution concept, with players that neglect
information from hypothetical events, but make inferences from observed events.
Our concept modifies cursed equilibrium (Eyster and Rabin, 2005), and allows
that players can be 'cursed about' endogenous information.",Sequential Cursed Equilibrium,2022-12-12 19:46:03,"Shani Cohen, Shengwu Li","http://arxiv.org/abs/2212.06025v4, http://arxiv.org/pdf/2212.06025v4",econ.TH
30498,em,"We develop a Bayesian non-parametric quantile panel regression model. Within
each quantile, the response function is a convex combination of a linear model
and a non-linear function, which we approximate using Bayesian Additive
Regression Trees (BART). Cross-sectional information at the pth quantile is
captured through a conditionally heteroscedastic latent factor. The
non-parametric feature of our model enhances flexibility, while the panel
feature, by exploiting cross-country information, increases the number of
observations in the tails. We develop Bayesian Markov chain Monte Carlo (MCMC)
methods for estimation and forecasting with our quantile factor BART model
(QF-BART), and apply them to study growth at risk dynamics in a panel of 11
advanced economies.",Investigating Growth at Risk Using a Multi-country Non-parametric Quantile Factor Model,2021-10-07 15:54:28,"Todd E. Clark, Florian Huber, Gary Koop, Massimiliano Marcellino, Michael Pfarrhofer","http://arxiv.org/abs/2110.03411v1, http://arxiv.org/pdf/2110.03411v1",econ.EM
30499,em,"A recent literature considers causal inference using noisy proxies for
unobserved confounding factors. The proxies are divided into two sets that are
independent conditional on the confounders. One set of proxies consists of
`negative control treatments' and the other of `negative control outcomes'. Existing
work applies to low-dimensional settings with a fixed number of proxies and
confounders. In this work we consider linear models with many proxy controls
and possibly many confounders. A key insight is that if each group of proxies
is strictly larger than the number of confounding factors, then a matrix of
nuisance parameters has a low-rank structure and a vector of nuisance
parameters has a sparse structure. We can exploit the rank-restriction and
sparsity to reduce the number of free parameters to be estimated. The number of
unobserved confounders is not known a priori but we show that it is identified,
and we apply penalization methods to adapt to this quantity. We provide an
estimator with a closed-form as well as a doubly-robust estimator that must be
evaluated using numerical methods. We provide conditions under which our
doubly-robust estimator is uniformly root-$n$ consistent, asymptotically
centered normal, and our suggested confidence intervals have asymptotically
correct coverage. We provide simulation evidence that our methods achieve
better performance than existing approaches in high dimensions, particularly
when the number of proxies is substantially larger than the number of
confounders.",Many Proxy Controls,2021-10-08 11:47:05,Ben Deaner,"http://arxiv.org/abs/2110.03973v1, http://arxiv.org/pdf/2110.03973v1",econ.EM
30500,em,"Beyond the new results mentioned hereafter, this article aims at
familiarizing researchers working in applied fields -- such as physics or
economics -- with notions or formulas that they use daily without always
identifying all their theoretical features or potentialities. Various
situations where the L1-norm distance E|X-Y| between real-valued random
variables intervenes are closely examined. The axiomatics surrounding this
distance are also explored. We constantly try to build bridges between the
concrete uses of E|X-Y| and the underlying probabilistic model. An alternative
interpretation of this distance is also examined, as well as its relation to
the Gini index (economics) and the Lukaszyk-Karmovsky distance (physics). The
main contributions are the following: (a) We show that under independence,
the triangle inequality holds for the normalized form E|X-Y|/(E|X| + E|Y|). (b) In
order to present a concrete advance, we determine the analytic form of E|X-Y|
and of its normalized expression when X and Y are independent with Gaussian or
uniform distribution. The resulting formulas generalize relevant tools already
in use in areas such as physics and economics. (c) We propose with all the
required rigor a brief one-dimensional introduction to the optimal transport
problem, essentially for a L1 cost function. The chosen illustrations and
examples should be of great help for newcomers to the field. New proofs and new
results are proposed.",Various issues around the L1-norm distance,2021-10-10 16:06:55,Jean-Daniel Rolle,"http://arxiv.org/abs/2110.04787v3, http://arxiv.org/pdf/2110.04787v3",econ.EM
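The Gaussian case mentioned in item (b) can be checked against the standard folded-normal identity: for independent X and Y, X-Y is normal, so E|X-Y| has a known closed form. The sketch below is a Monte Carlo check of that textbook identity with illustrative parameter values; it does not reproduce the paper's normalized expressions.

```python
# Minimal sketch: Monte Carlo check of the folded-normal identity for E|X-Y|
# with independent Gaussian X and Y. Parameter values are illustrative.
import numpy as np
from scipy.stats import norm

def mean_abs_diff_gaussian(mu_x, sig_x, mu_y, sig_y):
    """Closed form of E|X-Y| for independent X~N(mu_x,sig_x^2), Y~N(mu_y,sig_y^2)."""
    delta = mu_x - mu_y
    s = np.sqrt(sig_x**2 + sig_y**2)            # X - Y ~ N(delta, s^2)
    return (s * np.sqrt(2 / np.pi) * np.exp(-delta**2 / (2 * s**2))
            + delta * (1 - 2 * norm.cdf(-delta / s)))

rng = np.random.default_rng(0)
mu_x, sig_x, mu_y, sig_y = 1.0, 2.0, 0.5, 1.5   # illustrative values
x = rng.normal(mu_x, sig_x, 1_000_000)
y = rng.normal(mu_y, sig_y, 1_000_000)

print("Monte Carlo :", np.mean(np.abs(x - y)))
print("Closed form :", mean_abs_diff_gaussian(mu_x, sig_x, mu_y, sig_y))
```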
30501,em,"The Regression Discontinuity (RD) design is a widely used non-experimental
method for causal inference and program evaluation. While its canonical
formulation only requires a score and an outcome variable, it is common in
empirical work to encounter RD analyses where additional variables are used for
adjustment. This practice has led to misconceptions about the role of covariate
adjustment in RD analysis, from both methodological and empirical perspectives.
In this chapter, we review the different roles of covariate adjustment in RD
designs, and offer methodological guidance for its correct use.",Covariate Adjustment in Regression Discontinuity Designs,2021-10-16 02:41:19,"Matias D. Cattaneo, Luke Keele, Rocio Titiunik","http://arxiv.org/abs/2110.08410v2, http://arxiv.org/pdf/2110.08410v2",stat.ME
30502,em,"In an influential critique of empirical practice, Freedman (2008) showed that
the linear regression estimator was biased for the analysis of randomized
controlled trials under the randomization model. Under Freedman's assumptions,
we derive exact closed-form bias corrections for the linear regression
estimator with and without treatment-by-covariate interactions. We show that
the limiting distribution of the bias-corrected estimator is identical to that
of the uncorrected estimator, implying that the asymptotic gains from adjustment can
be attained without introducing any risk of bias. Taken together with results
from Lin (2013), our results show that Freedman's theoretical arguments against
the use of regression adjustment can be completely resolved with minor
modifications to practice.",Exact Bias Correction for Linear Adjustment of Randomized Controlled Trials,2021-10-16 03:48:20,"Haoge Chang, Joel Middleton, P. M. Aronow","http://arxiv.org/abs/2110.08425v2, http://arxiv.org/pdf/2110.08425v2",stat.ME
30503,em,"We revisit the finite-sample behavior of single-variable just-identified
instrumental variables (just-ID IV) estimators, arguing that in most
microeconometric applications, the usual inference strategies are likely
reliable. Three widely-cited applications are used to explain why this is so.
We then consider pretesting strategies of the form $t_{1}>c$, where $t_{1}$ is
the first-stage $t$-statistic, and the first-stage sign is given. Although
pervasive in empirical practice, pretesting on the first-stage $F$-statistic
exacerbates bias and distorts inference. We show, however, that median bias is
both minimized and roughly halved by setting $c=0$, that is by screening on the
sign of the \textit{estimated} first stage. This bias reduction is a free
lunch: conventional confidence interval coverage is unchanged by screening on
the estimated first-stage sign. To the extent that IV analysts sign-screen
already, these results strengthen the case for a sanguine view of the
finite-sample behavior of just-ID IV.",One Instrument to Rule Them All: The Bias and Coverage of Just-ID IV,2021-10-20 16:26:04,"Joshua Angrist, Michal Kolesár","http://arxiv.org/abs/2110.10556v7, http://arxiv.org/pdf/2110.10556v7",econ.EM
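A small Monte Carlo sketch of the sign-screening idea described above: keeping draws whose estimated first-stage coefficient has the hypothesized (here, positive) sign, i.e. the $c=0$ rule, and comparing median bias with and without screening. The design below (sample size, first-stage strength, error correlation) is illustrative and not taken from the paper.

```python
# Illustrative simulation of just-identified IV with first-stage sign screening.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 20_000
beta, pi, rho = 1.0, 0.15, 0.8          # weak-ish first stage, endogenous errors

estimates, first_stage = np.empty(reps), np.empty(reps)
for r in range(reps):
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    v = rho * u + np.sqrt(1 - rho**2) * rng.normal(size=n)
    x = pi * z + v
    y = beta * x + u
    first_stage[r] = np.cov(z, x)[0, 1]                      # sign of estimated first stage
    estimates[r] = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # just-ID IV (Wald ratio)

keep = first_stage > 0                   # screen on the estimated first-stage sign
print("median bias, all draws     :", np.median(estimates) - beta)
print("median bias, sign-screened :", np.median(estimates[keep]) - beta)
```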
30504,em,"We introduce an Attention Overload Model that captures the idea that
alternatives compete for the decision maker's attention, and hence the
attention that each alternative receives decreases as the choice problem
becomes larger. We provide testable implications on the observed choice
behavior that can be used to (point or partially) identify the decision maker's
preference and attention frequency. We then enhance our attention overload
model to accommodate heterogeneous preferences based on the idea of List-based
Attention Overload, where alternatives are presented to the decision makers as
a list that correlates with both heterogeneous preferences and random
attention. We show that preference and attention frequencies are (point or
partially) identifiable under nonparametric assumptions on the list and
attention formation mechanisms, even when the true underlying list is unknown
to the researcher. Building on our identification results, we develop
econometric methods for estimation and inference.",Attention Overload,2021-10-20 19:46:35,"Matias D. Cattaneo, Paul Cheung, Xinwei Ma, Yusufcan Masatlioglu","http://arxiv.org/abs/2110.10650v3, http://arxiv.org/pdf/2110.10650v3",econ.TH
30505,em,"Functional linear regression gets its popularity as a statistical tool to
study the relationship between function-valued response and exogenous
explanatory variables. However, in practice, it is hard to expect that the
explanatory variables of interest are perfectly exogenous, due to, for example,
the presence of omitted variables and measurement error. Despite its empirical
relevance, it was not until recently that this issue of endogeneity was studied
in the literature on functional regression, and the development in this
direction does not seem to sufficiently meet practitioners' needs; for example,
this issue has been discussed with particular attention to consistent
estimation, and thus the distributional properties of the proposed estimators
remain to be further explored. To fill this gap, this paper proposes new
consistent FPCA-based instrumental variable estimators and develops their
asymptotic properties in detail. Simulation experiments under a wide range of
settings show that the proposed estimators perform considerably well. We apply
our methodology to estimate the impact of immigration on native wages.",Functional instrumental variable regression with an application to estimating the impact of immigration on native wages,2021-10-25 11:16:08,"Dakyung Seong, Won-Ki Seo","http://arxiv.org/abs/2110.12722v3, http://arxiv.org/pdf/2110.12722v3",econ.EM
30506,em,"We study regression discontinuity designs in which many predetermined
covariates, possibly much more than the number of observations, can be used to
increase the precision of treatment effect estimates. We consider a two-step
estimator which first selects a small number of ""important"" covariates through
a localized Lasso-type procedure, and then, in a second step, estimates the
treatment effect by including the selected covariates linearly into the usual
local linear estimator. We provide an in-depth analysis of the algorithm's
theoretical properties, showing that, under an approximate sparsity condition,
the resulting estimator is asymptotically normal, with asymptotic bias and
variance that are conceptually similar to those obtained in low-dimensional
settings. Bandwidth selection and inference can be carried out using standard
methods. We also provide simulations and an empirical application.",Inference in Regression Discontinuity Designs with High-Dimensional Covariates,2021-10-26 17:20:06,"Alexander Kreiß, Christoph Rothe","http://arxiv.org/abs/2110.13725v3, http://arxiv.org/pdf/2110.13725v3",econ.EM
30507,em,"Classical measures of inequality use the mean as the benchmark of economic
dispersion. They are not sensitive to inequality at the left tail of the
distribution, where it would matter most. This paper presents a new inequality
measurement tool that gives more weight to inequality at the lower end of the
distribution. It is based on the comparison of all value pairs and synthesizes
the dispersion of the whole distribution. The differences that sum to the Gini
coefficient are scaled by angular differences between observations. The
resulting index possesses a set of desirable properties, including
normalization, scale invariance, population invariance, transfer sensitivity,
and weak decomposability.",Coupling the Gini and Angles to Evaluate Economic Dispersion,2021-10-26 19:44:13,Mario Schlemmer,"http://arxiv.org/abs/2110.13847v2, http://arxiv.org/pdf/2110.13847v2",econ.EM
30508,em,"Observations in various applications are frequently represented as a time
series of multidimensional arrays, called tensor time series, preserving the
inherent multidimensional structure. In this paper, we present a factor model
approach, in a form similar to tensor CP decomposition, to the analysis of
high-dimensional dynamic tensor time series. As the loading vectors are
uniquely defined but not necessarily orthogonal, it is significantly different
from the existing tensor factor models based on Tucker-type tensor
decomposition. The model structure allows for a set of uncorrelated
one-dimensional latent dynamic factor processes, making it much more convenient
to study the underlying dynamics of the time series. A new high order
projection estimator is proposed for such a factor model, utilizing the special
structure and the idea of the higher order orthogonal iteration procedures
commonly used in Tucker-type tensor factor model and general tensor CP
decomposition procedures. Theoretical investigation provides statistical error
bounds for the proposed methods, which shows the significant advantage of
utilizing the special model structure. A simulation study is conducted to further
demonstrate the finite sample properties of the estimators. A real data
application is used to illustrate the model and its interpretations.",CP Factor Model for Dynamic Tensors,2021-10-29 06:18:53,"Yuefeng Han, Cun-Hui Zhang, Rong Chen","http://arxiv.org/abs/2110.15517v1, http://arxiv.org/pdf/2110.15517v1",stat.ME
30509,em,"This paper explores the duration dynamics modelling under the Autoregressive
Conditional Durations (ACD) framework (Engle and Russell 1998). I test
different distributions assumptions for the durations. The empirical results
suggest unconditional durations approach the Gamma distributions. Moreover,
compared with exponential distributions and Weibull distributions, the ACD
model with Gamma distributed innovations provide the best fit of SPY durations.",Autoregressive conditional duration modelling of high frequency data,2021-11-03 18:32:38,Xiufeng Yan,"http://arxiv.org/abs/2111.02300v1, http://arxiv.org/pdf/2111.02300v1",econ.EM
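A minimal sketch of maximum likelihood estimation for an ACD(1,1) model with unit-mean Gamma innovations, the specification the abstract compares against exponential and Weibull alternatives. The recursion, starting values, and simulated data below are illustrative assumptions, not the paper's code or data.

```python
# Illustrative ACD(1,1): x_t = psi_t * eps_t, eps_t ~ Gamma(k, 1/k) (mean one),
# psi_t = omega + alpha * x_{t-1} + beta * psi_{t-1}.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, k=2.0, seed=1):
    """Simulate durations from the ACD(1,1) recursion above (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    psi = omega / (1.0 - alpha - beta)          # start at the unconditional mean
    for t in range(n):
        x[t] = psi * rng.gamma(shape=k, scale=1.0 / k)
        psi = omega + alpha * x[t] + beta * psi
    return x

def neg_loglik(params, x):
    omega, alpha, beta, k = params
    if min(omega, alpha, beta, k) <= 0 or alpha + beta >= 1:
        return np.inf                           # keep psi positive and stationary
    psi = np.empty_like(x)
    psi[0] = x.mean()                           # simple initialization
    for t in range(1, len(x)):
        psi[t] = omega + alpha * x[t - 1] + beta * psi[t - 1]
    # x_t | psi_t ~ Gamma(shape=k, scale=psi_t / k), so E[x_t | psi_t] = psi_t
    return -np.sum(gamma.logpdf(x, a=k, scale=psi / k))

x = simulate_acd(5000)
start = np.array([0.1 * x.mean(), 0.1, 0.8, 1.0])       # illustrative starting values
fit = minimize(neg_loglik, start, args=(x,), method="Nelder-Mead")
print("omega, alpha, beta, k:", fit.x)
```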
30528,em,"We provide new insights regarding the headline result that Medicaid increased
emergency department (ED) use from the Oregon experiment. We find meaningful
heterogeneous impacts of Medicaid on ED use using causal machine learning
methods. The individualized treatment effect distribution includes a wide range
of negative and positive values, suggesting the average effect masks
substantial heterogeneity. A small group, about 14% of participants, in the right
tail of the distribution drives the overall effect. We identify priority groups
with economically significant increases in ED usage based on demographics and
previous utilization. Intensive margin effects are an important driver of
increases in ED utilization.",Who Increases Emergency Department Use? New Insights from the Oregon Health Insurance Experiment,2022-01-18 18:53:28,"Augustine Denteh, Helge Liebert","http://arxiv.org/abs/2201.07072v4, http://arxiv.org/pdf/2201.07072v4",econ.EM
30510,em,"We consider a potential outcomes model in which interference may be present
between any two units but the extent of interference diminishes with spatial
distance. The causal estimand is the global average treatment effect, which
compares outcomes under the counterfactuals that all or no units are treated.
We study a class of designs in which space is partitioned into clusters that
are randomized into treatment and control. For each design, we estimate the
treatment effect using a Horvitz-Thompson estimator that compares the average
outcomes of units with all or no neighbors treated, where the neighborhood
radius is of the same order as the cluster size dictated by the design. We
derive the estimator's rate of convergence as a function of the design and
degree of interference and use this to obtain estimator-design pairs that
achieve near-optimal rates of convergence under relatively minimal assumptions
on interference. We prove that the estimators are asymptotically normal and
provide a variance estimator. For practical implementation of the designs, we
suggest partitioning space using clustering algorithms.",Rate-Optimal Cluster-Randomized Designs for Spatial Interference,2021-11-08 04:11:19,Michael P. Leung,"http://arxiv.org/abs/2111.04219v4, http://arxiv.org/pdf/2111.04219v4",stat.ME
30511,em,"This paper introduces a novel Ito diffusion process to model high-frequency
financial data, which can accommodate low-frequency volatility dynamics by
embedding the discrete-time non-linear exponential GARCH structure with
log-integrated volatility in a continuous instantaneous volatility process. The
key feature of the proposed model is that, unlike existing GARCH-Ito models,
the instantaneous volatility process has a non-linear structure, which ensures
that the log-integrated volatilities have the realized GARCH structure. We call
this the exponential realized GARCH-Ito (ERGI) model. Given the auto-regressive
structure of the log-integrated volatility, we propose a quasi-likelihood
estimation procedure for parameter estimation and establish its asymptotic
properties. We conduct a simulation study to check the finite sample
performance of the proposed model and an empirical study with 50 assets among
the S&P 500 constituents. The numerical studies show the advantages of the new
proposed model.",Exponential GARCH-Ito Volatility Models,2021-11-08 07:20:26,Donggyu Kim,"http://arxiv.org/abs/2111.04267v1, http://arxiv.org/pdf/2111.04267v1",econ.EM
30512,em,"We provide novel bounds on average treatment effects (on the treated) that
are valid under an unconfoundedness assumption. Our bounds are designed to be
robust in challenging situations, for example, when the conditioning variables
take on a large number of different values in the observed sample, or when the
overlap condition is violated. This robustness is achieved by only using
limited ""pooling"" of information across observations. Namely, the bounds are
constructed as sample averages over functions of the observed outcomes such
that the contribution of each outcome only depends on the treatment status of a
limited number of observations. No information pooling across observations
leads to so-called ""Manski bounds"", while unlimited information pooling leads
to standard inverse propensity score weighting. We explore the intermediate
range between these two extremes and provide corresponding inference methods.
We show in Monte Carlo experiments and through an empirical application that
our bounds are indeed robust and informative in practice.",Bounding Treatment Effects by Pooling Limited Information across Observations,2021-11-09 19:27:25,"Sokbae Lee, Martin Weidner","http://arxiv.org/abs/2111.05243v4, http://arxiv.org/pdf/2111.05243v4",econ.EM
30513,em,"Observational studies are needed when experiments are not possible. Within
study comparisons (WSC) compare observational and experimental estimates that
test the same hypothesis using the same treatment group, outcome, and estimand.
Meta-analyzing 39 of them, we compare mean bias and its variance for the eight
observational designs that result from combining whether there is a pretest
measure of the outcome or not, whether the comparison group is local to the
treatment group or not, and whether there is a relatively rich set of other
covariates or not. Of these eight designs, one combines all three design
elements, another has none, and the remainder include any one or two. We found
that both the mean and variance of bias decline as design elements are added,
with the lowest mean and smallest variance in a design with all three elements.
The probability of bias falling within 0.10 standard deviations of the
experimental estimate varied from 59 to 83 percent in Bayesian analyses and
from 86 to 100 percent in non-Bayesian ones -- the ranges depending on the
level of data aggregation. But confounding remains possible due to each of the
eight observational study design cells including a different set of WSC
studies.",Absolute and Relative Bias in Eight Common Observational Study Designs: Evidence from a Meta-analysis,2021-11-13 00:00:40,"Jelena Zurovac, Thomas D. Cook, John Deke, Mariel M. Finucane, Duncan Chaplin, Jared S. Coopersmith, Michael Barna, Lauren Vollmer Forrow","http://arxiv.org/abs/2111.06941v2, http://arxiv.org/pdf/2111.06941v2",stat.ME
30514,em,"Many popular specifications for Vector Autoregressions (VARs) with
multivariate stochastic volatility are not invariant to the way the variables
are ordered due to the use of a Cholesky decomposition for the error covariance
matrix. We show that the order invariance problem in existing approaches is
likely to become more serious in large VARs. We propose the use of a
specification which avoids the use of this Cholesky decomposition. We show that
the presence of multivariate stochastic volatility allows for identification of
the proposed model and prove that it is invariant to ordering. We develop a
Markov Chain Monte Carlo algorithm which allows for Bayesian estimation and
prediction. In exercises involving artificial and real macroeconomic data, we
demonstrate that the choice of variable ordering can have non-negligible
effects on empirical results. In a macroeconomic forecasting exercise involving
VARs with 20 variables we find that our order-invariant approach leads to the
best forecasts and that some choices of variable ordering can lead to poor
forecasts using a conventional, non-order invariant, approach.",Large Order-Invariant Bayesian VARs with Stochastic Volatility,2021-11-14 05:52:28,"Joshua C. C. Chan, Gary Koop, Xuewen Yu","http://arxiv.org/abs/2111.07225v1, http://arxiv.org/pdf/2111.07225v1",econ.EM
30529,em,"In the paper, we propose two models of Artificial Intelligence (AI) patents
in European Union (EU) countries addressing spatial and temporal behaviour. In
particular, the models can quantitatively describe the interaction between
countries or explain the rapidly growing trends in AI patents. For spatial
analysis Poisson regression is used to explain collaboration between a pair of
countries measured by the number of common patents. Through Bayesian inference,
we estimated the strengths of interactions between countries in the EU and the
rest of the world. In particular, a significant lack of cooperation has been
identified for some pairs of countries.
  Alternatively, an inhomogeneous Poisson process combined with logistic curve
growth accurately models the temporal behaviour through its trend line.
Bayesian analysis in the time domain revealed an upcoming slowdown in
patenting intensity.",Bayesian inference of spatial and temporal relations in AI patents for EU countries,2022-01-18 21:13:03,"Krzysztof Rusek, Agnieszka Kleszcz, Albert Cabellos-Aparicio","http://arxiv.org/abs/2201.07168v1, http://arxiv.org/pdf/2201.07168v1",econ.EM
30515,em,"Humans exhibit irrational decision-making patterns in response to
environmental triggers, such as experiencing an economic loss or gain. In this
paper we investigate whether algorithms exhibit the same behavior by examining
the observed decisions and latent risk and rationality parameters estimated by
a random utility model with a constant relative risk-aversion utility function.
We use a dataset consisting of 10,000 hands of poker played by Pluribus, the
first algorithm in the world to beat professional human players, and find that (1)
Pluribus does shift its playing style in response to economic losses and gains,
ceteris paribus; (2) Pluribus becomes more risk-averse and rational following a
trigger but the humans become more risk-seeking and irrational; (3) the
difference in playing styles between Pluribus and the humans on the dimensions
of risk-aversion and rationality are particularly differentiable when both have
experienced a trigger. This provides support that decision-making patterns
could be used as ""behavioral signatures"" to identify human versus algorithmic
decision-makers in unlabeled contexts.",Rational AI: A comparison of human and AI responses to triggers of economic irrationality in poker,2021-11-14 12:48:53,"C. Grace Haaf, Devansh Singh, Cinny Lin, Scofield Zou","http://arxiv.org/abs/2111.07295v1, http://arxiv.org/pdf/2111.07295v1",econ.TH
30516,em,"This paper extends Becker (1957)'s outcome test of discrimination to settings
where a (human or algorithmic) decision-maker produces a ranked list of
candidates. Ranked lists are particularly relevant in the context of online
platforms that produce search results or feeds, and also arise when human
decisionmakers express ordinal preferences over a list of candidates. We show
that non-discrimination implies a system of moment inequalities, which
intuitively impose that one cannot permute the position of a lower-ranked
candidate from one group with a higher-ranked candidate from a second group and
systematically improve the objective. Moreover, we show that these moment
inequalities are the only testable implications of non-discrimination when the
auditor observes only outcomes and group membership by rank. We show how to
statistically test the implied inequalities, and validate our approach in an
application using data from LinkedIn.",An Outcome Test of Discrimination for Ranked Lists,2021-11-15 19:42:57,"Jonathan Roth, Guillaume Saint-Jacques, YinYin Yu","http://arxiv.org/abs/2111.07889v1, http://arxiv.org/pdf/2111.07889v1",econ.EM
30517,em,"The COVID-19 pandemic has created a sudden need for a wider uptake of
home-based telework as a means of sustaining production. Generally,
teleworking arrangements directly impact workers' efficiency and motivation.
The direction of this impact, however, depends on the balance between positive
effects of teleworking (e.g. increased flexibility and autonomy) and its
downsides (e.g. blurring boundaries between private and work life). Moreover,
these effects of teleworking can be amplified in case of vulnerable groups of
workers, such as women. The first step in understanding the implications of
teleworking on women is to have timely information on the extent of teleworking
by age and gender. In the absence of timely official statistics, in this paper
we propose a method for nowcasting the teleworking trends by age and gender for
20 Italian regions using mobile network operators (MNO) data. The method is
developed and validated using MNO data together with the Italian quarterly
Labour Force Survey. Our results confirm that the MNO data have the potential
to be used as a tool for monitoring gender and age differences in teleworking
patterns. This tool becomes even more important today as it could support the
adequate gender mainstreaming in the ``Next Generation EU'' recovery plan and
help to manage related social impacts of COVID-19 through policymaking.",Monitoring COVID-19-induced gender differences in teleworking rates using Mobile Network Data,2021-11-04 18:11:03,"Sara Grubanov-Boskovic, Spyridon Spyratos, Stefano Maria Iacus, Umberto Minora, Francesco Sermi","http://dx.doi.org/10.6339/22-JDS1043, http://arxiv.org/abs/2111.09442v2, http://arxiv.org/pdf/2111.09442v2",cs.SI
30518,em,"This paper extends the literature on the theoretical properties of synthetic
controls to the case of non-linear generative models, showing that the
synthetic control estimator is generally biased in such settings. I derive a
lower bound for the bias, showing that the only component of it that is
affected by the choice of synthetic control is the weighted sum of pairwise
differences between the treated unit and the untreated units in the synthetic
control. To address this bias, I propose a novel synthetic control estimator
that allows for a constant difference of the synthetic control to the treated
unit in the pre-treatment period, and that penalizes the pairwise
discrepancies. Allowing for a constant offset makes the model more flexible,
thus creating a larger set of potential synthetic controls, and the
penalization term allows for the selection of the potential solution that will
minimize bias. I study the properties of this estimator and propose a
data-driven process for parameterizing the penalization term.",Why Synthetic Control estimators are biased and what to do about it: Introducing Relaxed and Penalized Synthetic Controls,2021-11-21 13:27:29,Oscar Engelbrektson,"http://arxiv.org/abs/2111.10784v1, http://arxiv.org/pdf/2111.10784v1",econ.EM
30519,em,"Researchers using instrumental variables to investigate ordered treatments
often recode treatment into an indicator for any exposure. We investigate this
estimand under the assumption that the instruments shift compliers from no
treatment to some but not from some treatment to more. We show that when there
are extensive margin compliers only (EMCO) this estimand captures a weighted
average of treatment effects that can be partially unbundled into each complier
group's potential outcome means. We also establish an equivalence between EMCO
and a two-factor selection model and apply our results to study treatment
heterogeneity in the Oregon Health Insurance Experiment.",On Recoding Ordered Treatments as Binary Indicators,2021-11-24 07:19:27,"Evan K. Rose, Yotam Shem-Tov","http://arxiv.org/abs/2111.12258v3, http://arxiv.org/pdf/2111.12258v3",econ.EM
30520,em,"To further develop the statistical inference problem for heterogeneous
treatment effects, this paper builds on Breiman's (2001) random forest tree
(RFT)and Wager et al.'s (2018) causal tree to parameterize the nonparametric
problem using the excellent statistical properties of classical OLS and the
division of local linear intervals based on covariate quantile points, while
preserving the random forest trees with the advantages of constructible
confidence intervals and asymptotic normality properties [Athey and Imbens
(2016),Efron (2014),Wager et al.(2014)\citep{wager2014asymptotic}], we propose
a decision tree using quantile classification according to fixed rules combined
with polynomial estimation of local samples, which we call the quantile local
linear causal tree (QLPRT) and forest (QLPRF).",Modelling hetegeneous treatment effects by quantitle local polynomial decision tree and forest,2021-11-30 15:02:16,Lai Xinglin,"http://arxiv.org/abs/2111.15320v2, http://arxiv.org/pdf/2111.15320v2",econ.EM
30521,em,"This paper presents an ensemble forecasting method that shows strong results
on the M4 Competition dataset by decreasing feature and model selection
assumptions, termed DONUT (DO Not UTilize human beliefs). Our assumption
reductions, primarily consisting of auto-generated features and a more diverse
model pool for the ensemble, significantly outperform the statistical,
feature-based ensemble method FFORMA by Montero-Manso et al. (2020). We also
investigate feature extraction with a Long Short-term Memory Network (LSTM)
Autoencoder and find that such features contain crucial information not
captured by standard statistical feature approaches. The ensemble weighting
model uses LSTM and statistical features to combine the models accurately. The
analysis of feature importance and interaction shows a slight superiority for
LSTM features over the statistical ones alone. Clustering analysis shows that
essential LSTM features differ from most statistical features and each other.
We also find that increasing the solution space of the weighting model by
augmenting the ensemble with new models is something the weighting model learns
to use, thus explaining part of the accuracy gains. Moreover, we present a
formal ex-post-facto analysis of an optimal combination and selection for
ensembles, quantifying differences through linear optimization on the M4
dataset. Our findings indicate that classical statistical time series features,
such as trend and seasonality, alone do not capture all relevant information
for forecasting a time series. On the contrary, our novel LSTM features contain
significantly more predictive power than the statistical ones alone, but
combining the two feature sets proved the best in practice.",Deep Learning and Linear Programming for Automated Ensemble Forecasting and Interpretation,2022-01-03 01:19:26,"Lars Lien Ankile, Kjartan Krange","http://arxiv.org/abs/2201.00426v2, http://arxiv.org/pdf/2201.00426v2",cs.LG
30522,em,"This paper synthesizes recent advances in the econometrics of
difference-in-differences (DiD) and provides concrete recommendations for
practitioners. We begin by articulating a simple set of ``canonical''
assumptions under which the econometrics of DiD are well-understood. We then
argue that recent advances in DiD methods can be broadly classified as relaxing
some components of the canonical DiD setup, with a focus on $(i)$ multiple
periods and variation in treatment timing, $(ii)$ potential violations of
parallel trends, or $(iii)$ alternative frameworks for inference. Our
discussion highlights the different ways that the DiD literature has advanced
beyond the canonical model, and helps to clarify when each of the papers will
be relevant for empirical work. We conclude by discussing some promising areas
for future research.",What's Trending in Difference-in-Differences? A Synthesis of the Recent Econometrics Literature,2022-01-04 18:21:33,"Jonathan Roth, Pedro H. C. Sant'Anna, Alyssa Bilinski, John Poe","http://arxiv.org/abs/2201.01194v3, http://arxiv.org/pdf/2201.01194v3",econ.EM
30523,em,"We propose an approximate factor model for time-dependent curve data that
represents a functional time series as the aggregate of a predictive
low-dimensional component and an unpredictive infinite-dimensional component.
Suitable identification conditions lead to a two-stage estimation procedure
based on functional principal components, and the number of factors is
estimated consistently through an information criterion-based approach. The
methodology is applied to the problem of modeling and predicting yield curves.
Our results indicate that more than three factors are required to characterize
the dynamics of the term structure of bond yields.",Approximate Factor Models for Functional Time Series,2022-01-07 19:44:13,"Sven Otto, Nazarii Salish","http://arxiv.org/abs/2201.02532v2, http://arxiv.org/pdf/2201.02532v2",econ.EM
30524,em,"This paper considers an endogenous binary response model with many weak
instruments. We employ a control function approach and a regularization
scheme to obtain better estimation results for the endogenous binary response
model in the presence of many weak instruments. We provide two consistent and
asymptotically normally distributed estimators: a regularized conditional
maximum likelihood estimator (RCMLE) and a regularized nonlinear least squares
estimator (RNLSE). Monte Carlo
simulations show that the proposed estimators outperform the existing
estimators when many weak instruments are present. We apply our estimation
method to study the effect of family income on college completion.",Binary response model with many weak instruments,2022-01-13 10:00:58,Dakyung Seong,"http://arxiv.org/abs/2201.04811v3, http://arxiv.org/pdf/2201.04811v3",econ.EM
30525,em,"We propose two specifications of a real-time mixed-frequency semi-structural
time series model for evaluating the output potential, output gap, Phillips
curve, and Okun's law for the US. The baseline model uses minimal theory-based
multivariate identification restrictions to inform trend-cycle decomposition,
while the alternative model adds the CBO's output gap measure as an observed
variable. The latter model results in a smoother output potential and lower
cyclical correlation between inflation and real variables but performs worse in
forecasting beyond the short term. This methodology allows for the assessment
and real-time monitoring of official trend and gap estimates.",Monitoring the Economy in Real Time: Trends and Gaps in Real Activity and Prices,2022-01-14 20:01:57,"Thomas Hasenzagl, Filippo Pellegrino, Lucrezia Reichlin, Giovanni Ricco","http://arxiv.org/abs/2201.05556v2, http://arxiv.org/pdf/2201.05556v2",econ.EM
30526,em,"We propose a method for reporting how program evaluations reduce gaps between
groups, such as the gender or Black-white gap. We first show that the reduction
in disparities between groups can be written as the difference in conditional
average treatment effects (CATE) for each group. Then, using a
Kitagawa-Oaxaca-Blinder-style decomposition, we highlight how these CATE can be
decomposed into unexplained differences in CATE in other observables versus
differences in composition across other observables (e.g. the ""endowment"").
Finally, we apply this approach to study the impact of Medicare on Americans'
access to health insurance.",Measuring Changes in Disparity Gaps: An Application to Health Insurance,2022-01-15 00:09:25,"Paul Goldsmith-Pinkham, Karen Jiang, Zirui Song, Jacob Wallace","http://arxiv.org/abs/2201.05672v1, http://arxiv.org/pdf/2201.05672v1",econ.EM
30527,em,"The Granular Instrumental Variables (GIV) methodology exploits panels with
factor error structures to construct instruments to estimate structural time
series models with endogeneity even after controlling for latent factors. We
extend the GIV methodology in several dimensions. First, we extend the
identification procedure to a large $N$ and large $T$ framework, which depends
on the asymptotic Herfindahl index of the size distribution of $N$
cross-sectional units. Second, we treat both the factors and loadings as
unknown and show that the sampling error in the estimated instrument and
factors is negligible when considering the limiting distribution of the
structural parameters. Third, we show that the sampling error in the
high-dimensional precision matrix is negligible in our estimation algorithm.
Fourth, we overidentify the structural parameters with additional constructed
instruments, which leads to efficiency gains. Monte Carlo evidence is presented
to support our asymptotic theory and application to the global crude oil market
leads to new results.",Inferential Theory for Granular Instrumental Variables in High Dimensions,2022-01-17 22:41:24,"Saman Banafti, Tae-Hwy Lee","http://arxiv.org/abs/2201.06605v2, http://arxiv.org/pdf/2201.06605v2",econ.EM
30530,em,"Time-varying parameter VARs with stochastic volatility are routinely used for
structural analysis and forecasting in settings involving a few endogenous
variables. Applying these models to high-dimensional datasets has proved to be
challenging due to intensive computations and over-parameterization concerns.
We develop an efficient Bayesian sparsification method for a class of models we
call hybrid TVP-VARs--VARs with time-varying parameters in some equations but
constant coefficients in others. Specifically, for each equation, the new
method automatically decides whether the VAR coefficients and contemporaneous
relations among variables are constant or time-varying. Using US datasets of
various dimensions, we find evidence that the parameters in some, but not all,
equations are time varying. The large hybrid TVP-VAR also forecasts better than
many standard benchmarks.",Large Hybrid Time-Varying Parameter VARs,2022-01-18 23:39:58,Joshua C. C. Chan,"http://arxiv.org/abs/2201.07303v2, http://arxiv.org/pdf/2201.07303v2",econ.EM
30531,em,"We present an algorithm for the calibration of local volatility from market
option prices through deep self-consistent learning, by approximating both
market option prices and local volatility with two separate deep neural
networks. Our method uses the initial-boundary value problem of the
underlying Dupire's partial differential equation solved by the parameterized
option prices to bring corrections to the parameterization in a self-consistent
way. By exploiting the differentiability of the neural networks, we can
evaluate Dupire's equation locally at each strike-maturity pair; while by
exploiting their continuity, we sample strike-maturity pairs uniformly from a
given domain, going beyond the discrete points where the options are quoted.
Moreover, the absence of arbitrage opportunities is imposed by penalizing an
associated loss function as a soft constraint. For comparison with existing
approaches, the proposed method is tested on both synthetic and market option
prices, which shows an improved performance in terms of reduced interpolation
and reprice errors, as well as the smoothness of the calibrated local
volatility. An ablation study has been performed, confirming the robustness and
significance of the proposed method.",Deep self-consistent learning of local volatility,2021-12-09 04:48:10,"Zhe Wang, Nicolas Privault, Claude Guet","http://arxiv.org/abs/2201.07880v2, http://arxiv.org/pdf/2201.07880v2",q-fin.CP
30532,em,"Integrated assessment models have become the primary tools for comparing
climate policies that seek to reduce greenhouse gas emissions. Policy
comparisons have often been performed by considering a planner who seeks to
make optimal trade-offs between the costs of carbon abatement and the economic
damages from climate change. The planning problem has been formalized as one of
optimal control, the objective being to minimize the total costs of abatement
and damages over a time horizon. Studying climate policy as a control problem
presumes that a planner knows enough to make optimization feasible, but
physical and economic uncertainties abound. Earlier, Manski, Sanstad, and
DeCanio proposed and studied use of the minimax-regret (MMR) decision criterion
to account for deep uncertainty in climate modeling. Here we study choice of
climate policy that minimizes maximum regret with deep uncertainty regarding
both the correct climate model and the appropriate time discount rate to use in
intergenerational assessment of policy consequences. The analysis specifies a
range of discount rates to express both empirical and normative uncertainty
about the appropriate rate. The findings regarding climate policy are novel and
informative. The MMR analysis points to use of a relatively low discount rate
of 0.02 for climate policy. The MMR decision rule keeps the maximum future
temperature increase below 2C above the 1900-10 level for most of the parameter
values used to weight costs and damages.",Minimax-Regret Climate Policy with Deep Uncertainty in Climate Modeling and Intergenerational Discounting,2022-01-21 21:21:21,"Stephen J. DeCanio, Charles F. Manski, Alan H. Sanstad","http://arxiv.org/abs/2201.08826v1, http://arxiv.org/pdf/2201.08826v1",econ.EM
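A generic sketch of the minimax-regret criterion applied to a finite menu of policies and a finite set of states (climate-model and discount-rate combinations). The cost matrix below is a made-up placeholder, not output from the integrated assessment models discussed in the abstract.

```python
# Illustrative minimax-regret choice over policies (rows) and states (columns).
import numpy as np

# Total cost of abatement plus damages; numbers are placeholders for illustration.
total_cost = np.array([
    [10.0, 14.0, 30.0],   # policy A under states s1, s2, s3
    [12.0, 13.0, 22.0],   # policy B
    [18.0, 16.0, 17.0],   # policy C
])

best_per_state = total_cost.min(axis=0)   # best achievable cost in each state
regret = total_cost - best_per_state      # regret of each policy in each state
max_regret = regret.max(axis=1)           # worst-case regret per policy
print("regret matrix:\n", regret)
print("minimax-regret policy index:", int(np.argmin(max_regret)))
```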
30533,em,"This paper presents a new approach for the estimation and inference of the
regression parameters in a panel data model with interactive fixed effects. It
relies on the assumption that the factor loadings can be expressed as an
unknown smooth function of the time average of covariates plus an idiosyncratic
error term. Compared to existing approaches, our estimator has a simple partial
least squares form and requires neither iterative procedures nor prior
estimation of the factors.
  We derive its asymptotic properties, showing that the limiting
distribution has a discontinuity, depending on the explanatory power of our
basis functions which is expressed by the variance of the error of the factor
loadings. As a result, the usual ""plug-in"" methods based on estimates of the
asymptotic covariance are only valid pointwise and may produce either over- or
under-coverage probabilities. We show that uniformly valid inference can be
achieved by using the cross-sectional bootstrap. A Monte Carlo study indicates
good performance in terms of mean squared error. We apply our methodology to
analyze the determinants of growth rates in OECD countries.",A semiparametric approach for interactive fixed effects panel data models,2022-01-27 15:37:23,"Georg Keilbar, Juan M. Rodriguez-Poo, Alexandra Soberon, Weining Wang","http://arxiv.org/abs/2201.11482v2, http://arxiv.org/pdf/2201.11482v2",econ.EM
30534,em,"Estimation of causal effects using machine learning methods has become an
active research field in econometrics. In this paper, we study the finite
sample performance of meta-learners for estimation of heterogeneous treatment
effects under the usage of sample-splitting and cross-fitting to reduce the
overfitting bias. In both synthetic and semi-synthetic simulations we find that
the performance of the meta-learners in finite samples greatly depends on the
estimation procedure. The results imply that sample-splitting and cross-fitting
are beneficial in large samples for bias reduction and efficiency of the
meta-learners, respectively, whereas full-sample estimation is preferable in
small samples. Furthermore, we derive practical recommendations for application
of specific meta-learners in empirical studies depending on particular data
characteristics such as treatment shares and sample size.",Meta-Learners for Estimation of Causal Effects: Finite Sample Cross-Fit Performance,2022-01-30 03:52:33,Gabriel Okasa,"http://arxiv.org/abs/2201.12692v1, http://arxiv.org/pdf/2201.12692v1",econ.EM
30540,em,"We develop and justify methodology to consistently test for long-horizon
return predictability based on realized variance. To accomplish this, we
propose a parametric transaction-level model for the continuous-time log price
process based on a pure jump point process. The model determines the returns
and realized variance at any level of aggregation with properties shown to be
consistent with the stylized facts in the empirical finance literature. Under
our model, the long-memory parameter propagates unchanged from the
transaction-level drift to the calendar-time returns and the realized variance,
leading endogenously to a balanced predictive regression equation. We propose
an asymptotic framework using power-law aggregation in the predictive
regression. Within this framework, we propose a hypothesis test for long
horizon return predictability which is asymptotically correctly sized and
consistent.",Long-Horizon Return Predictability from Realized Volatility in Pure-Jump Point Processes,2022-02-02 01:29:27,"Meng-Chen Hsieh, Clifford Hurvich, Philippe Soulier","http://arxiv.org/abs/2202.00793v1, http://arxiv.org/pdf/2202.00793v1",econ.EM
30535,em,"Practitioners and academics have long appreciated the benefits of covariate
balancing when they conduct randomized experiments. For web-facing firms
running online A/B tests, however, it remains challenging to balance
covariate information when experimental subjects arrive sequentially. In this
paper, we study an online experimental design problem, which we refer to as the
""Online Blocking Problem."" In this problem, experimental subjects with
heterogeneous covariate information arrive sequentially and must be immediately
assigned into either the control or the treated group. The objective is to
minimize the total discrepancy, which is defined as the minimum weight perfect
matching between the two groups. To solve this problem, we propose a randomized
design of experiment, which we refer to as the ""Pigeonhole Design."" The
pigeonhole design first partitions the covariate space into smaller spaces,
which we refer to as pigeonholes, and then, when the experimental subjects
arrive at each pigeonhole, balances the number of control and treated subjects
for each pigeonhole. We analyze the theoretical performance of the pigeonhole
design and show its effectiveness by comparing against two well-known benchmark
designs: the match-pair design and the completely randomized design. We
identify scenarios when the pigeonhole design demonstrates more benefits over
the benchmark design. To conclude, we conduct extensive simulations using
Yahoo! data to show a 10.2% reduction in variance if we use the pigeonhole
design to estimate the average treatment effect.",Pigeonhole Design: Balancing Sequential Experiments from an Online Matching Perspective,2022-01-31 02:14:24,"Jinglong Zhao, Zijie Zhou","http://arxiv.org/abs/2201.12936v5, http://arxiv.org/pdf/2201.12936v5",stat.ME
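A minimal sketch of an online pigeonhole-style assignment rule as described in the abstract: bin the covariate space into a grid of pigeonholes and, as subjects arrive, assign each to whichever arm currently has fewer subjects in its pigeonhole, randomizing when the hole is balanced. The binning scheme, class name, and covariate range are illustrative assumptions, not the authors' implementation.

```python
# Illustrative online "pigeonhole" assignment for sequentially arriving subjects.
import numpy as np
from collections import defaultdict

class PigeonholeDesign:
    def __init__(self, bins_per_dim=4, low=0.0, high=1.0, seed=0):
        self.edges = np.linspace(low, high, bins_per_dim + 1)[1:-1]  # interior bin edges
        self.counts = defaultdict(lambda: [0, 0])   # hole -> [n_control, n_treated]
        self.rng = np.random.default_rng(seed)

    def assign(self, covariates):
        hole = tuple(int(np.searchsorted(self.edges, c)) for c in covariates)
        n_control, n_treated = self.counts[hole]
        if n_control < n_treated:
            arm = 0
        elif n_treated < n_control:
            arm = 1
        else:
            arm = int(self.rng.integers(2))          # balanced hole: randomize
        self.counts[hole][arm] += 1
        return arm                                   # 0 = control, 1 = treated

# Illustrative stream of 1000 subjects with 2 covariates in [0, 1]:
rng = np.random.default_rng(1)
design = PigeonholeDesign()
assignments = [design.assign(x) for x in rng.random((1000, 2))]
print("treated share:", np.mean(assignments))
```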
30536,em,"Time series that display periodicity can be described with a Fourier
expansion. In a similar vein, a recently developed formalism enables
description of growth patterns with the optimal number of parameters (Elitzur
et al., 2020). The method has been applied to the growth of national GDP,
population and the COVID-19 pandemic; in all cases the deviations of long-term
growth patterns from pure exponential required no more than two additional
parameters, mostly only one. Here I utilize the new framework to develop a
unified formulation for all functions that describe growth deceleration,
wherein the growth rate decreases with time. The result offers the prospect of
a new general tool for trend removal in time-series analysis.
30537,em,"We investigate how to improve efficiency using regression adjustments with
covariates in covariate-adaptive randomizations (CARs) with imperfect subject
compliance. Our regression-adjusted estimators, which are based on the doubly
robust moment for local average treatment effects, are consistent and
asymptotically normal even with heterogeneous probability of assignment and
misspecified regression adjustments. We propose an optimal but potentially
misspecified linear adjustment and its further improvement via a nonlinear
adjustment, both of which lead to more efficient estimators than the one
without adjustments. We also provide conditions for nonparametric and
regularized adjustments to achieve the semiparametric efficiency bound under
CARs.",Improving Estimation Efficiency via Regression-Adjustment in Covariate-Adaptive Randomizations with Imperfect Compliance,2022-01-31 08:37:28,"Liang Jiang, Oliver B. Linton, Haihan Tang, Yichong Zhang","http://arxiv.org/abs/2201.13004v5, http://arxiv.org/pdf/2201.13004v5",econ.EM
30538,em,"Limited datasets and complex nonlinear relationships are among the challenges
that may emerge when applying econometrics to macroeconomic problems. This
research proposes deep learning as an approach to transfer learning in the
former case and to map relationships between variables in the latter case.
Although macroeconomists already apply transfer learning -- for example, when
assuming a given a priori distribution in a Bayesian context, estimating a
structural VAR with sign restrictions, or calibrating parameters based on
results observed in other models -- advancing a more systematic transfer
learning strategy in applied macroeconomics is the innovation we introduce.
We explore the proposed strategy empirically, showing that data from different
but related domains, a type of transfer learning, helps identify business
cycle phases when there is no business cycle dating committee and quickly
estimate an economics-based output gap. Next, since deep learning
methods are a way of learning representations, formed by the composition of
multiple non-linear transformations that yield more abstract representations,
we apply deep learning to map low-frequency variables from
high-frequency variables. The results obtained show the suitability of deep
learning models applied to macroeconomic problems. First, models learned to
classify United States business cycles correctly. Then, applying transfer
learning, they were able to identify the business cycles of out-of-sample
Brazilian and European data. Along the same lines, the models learned to
estimate the output gap based on the U.S. data and obtained good performance
when faced with Brazilian data. Additionally, deep learning proved adequate for
mapping low-frequency variables from high-frequency data to interpolate,
distribute, and extrapolate time series by related series.",Deep Learning Macroeconomics,2022-01-31 20:43:43,Rafael R. S. Guimaraes,"http://arxiv.org/abs/2201.13380v1, http://arxiv.org/pdf/2201.13380v1",econ.EM
30539,em,"The decisions of whether and how to evacuate during a climate disaster are
influenced by a wide range of factors, including sociodemographics, emergency
messaging, and social influence. Further complexity is introduced when multiple
hazards occur simultaneously, such as a flood evacuation taking place amid a
viral pandemic that requires physical distancing. Such multi-hazard events can
necessitate a nuanced navigation of competing decision-making strategies
wherein a desire to follow peers is weighed against contagion risks. To better
understand these nuances, we distributed an online survey during a pandemic
surge in July 2020 to 600 individuals in three midwestern and three southern
states in the United States with high risk of flooding. In this paper, we
estimate a random parameter logit model in both preference space and
willingness-to-pay space. Our results show that the directionality and
magnitude of the influence of peers' choices of whether and how to evacuate
vary widely across respondents. Overall, the decision of whether to evacuate is
positively impacted by peer behavior, while the decision of how to evacuate is
negatively impacted by peers. Furthermore, an increase in flood threat level
lessens the magnitude of these impacts. These findings have important
implications for the design of tailored emergency messaging strategies.
Specifically, emphasizing or deemphasizing the severity of each threat in a
multi-hazard scenario may assist in: (1) encouraging a reprioritization of
competing risk perceptions and (2) magnifying or neutralizing the impacts of
social influence, thereby (3) nudging evacuation decision-making toward a
desired outcome.",Protection or Peril of Following the Crowd in a Pandemic-Concurrent Flood Evacuation,2022-02-01 08:36:13,"Elisa Borowski, Amanda Stathopoulos","http://arxiv.org/abs/2202.00229v1, http://arxiv.org/pdf/2202.00229v1",econ.EM
30549,em,"This paper proposes a simple unified approach to testing transformations on
cumulative distribution functions (CDFs) in the presence of nuisance
parameters. The proposed test is constructed based on a new characterization
that avoids the estimation of nuisance parameters. The critical values are
obtained through a numerical bootstrap method which can easily be implemented
in practice. Under suitable conditions, the proposed test is shown to be
asymptotically size controlled and consistent. The local power property of the
test is established. Finally, Monte Carlo simulations and an empirical study
show that the test performs well on finite samples.",A Unified Nonparametric Test of Transformations on Distribution Functions with Nuisance Parameters,2022-02-20 11:08:51,"Xingyu Li, Xiaojun Song, Zhenting Sun","http://arxiv.org/abs/2202.11031v2, http://arxiv.org/pdf/2202.11031v2",stat.ME
30543,em,"This paper introduces a Threshold Asymmetric Conditional Autoregressive Range
(TACARR) formulation for modeling the daily price ranges of financial assets.
It is assumed that the process generating the conditional expected ranges at
each time point switches between two regimes, labeled as upward market and
downward market states. The disturbance term of the error process is also
allowed to switch between two distributions depending on the regime. It is
assumed that a self-adjusting threshold component that is driven by the past
values of the time series determines the current market regime. The proposed
model is able to capture aspects such as asymmetric and heteroscedastic
behavior of volatility in financial markets. The proposed model is an attempt
at addressing several potential deficits found in existing price range models
such as the Conditional Autoregressive Range (CARR), Asymmetric CARR (ACARR),
Feedback ACARR (FACARR) and Threshold Autoregressive Range (TARR) models.
Parameters of the model are estimated using the Maximum Likelihood (ML) method.
A simulation study shows that the ML method performs well in estimating the
TACARR model parameters. The empirical performance of the TACARR model was
investigated using IBM index data and results show that the proposed model is a
good alternative for in-sample prediction and out-of-sample forecasting of
volatility.
  Key Words: Volatility Modeling, Asymmetric Volatility, CARR Models, Regime
Switching.",Threshold Asymmetric Conditional Autoregressive Range (TACARR) Model,2022-02-07 19:50:58,"Isuru Ratnayake, V. A. Samaranayake","http://arxiv.org/abs/2202.03351v2, http://arxiv.org/pdf/2202.03351v2",econ.EM
30544,em,"We use machine learning techniques to investigate whether it is possible to
replicate the behavior of bank managers who assess the risk of commercial loans
made by a large commercial US bank. Even though a typical bank already relies
on an algorithmic scorecard process to evaluate risk, bank managers are given
significant latitude in adjusting the risk score in order to account for other
holistic factors based on their intuition and experience. We show that it is
possible to find machine learning algorithms that can replicate the behavior of
the bank managers. The input to the algorithms consists of a combination of
standard financials and soft information available to bank managers as part of
the typical loan review process. We also document the presence of significant
heterogeneity in the adjustment process that can be traced to differences
across managers and industries. Our results highlight the effectiveness of
machine learning based analytic approaches to banking and the potential
challenges to high-skill jobs in the financial sector.",Managers versus Machines: Do Algorithms Replicate Human Intuition in Credit Ratings?,2022-02-09 04:20:44,"Matthew Harding, Gabriel F. R. Vasconcelos","http://arxiv.org/abs/2202.04218v1, http://arxiv.org/pdf/2202.04218v1",econ.EM
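A hedged sketch of the workflow the abstract outlines: predict a manager's adjusted risk rating from scorecard and "soft" features with a gradient-boosted model. All column names and the synthetic data below are hypothetical stand-ins for the confidential loan-review data used in the paper.

```python
# Sketch: replicate manager risk-score adjustments with gradient boosting (illustrative data).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "scorecard_score": rng.normal(600, 50, n),   # hypothetical algorithmic score
    "leverage": rng.uniform(0, 3, n),            # hypothetical financial ratio
    "soft_info_flag": rng.integers(0, 2, n),     # hypothetical soft information
})
# Hypothetical manager adjustment: nonlinear in financials plus soft information
df["manager_rating"] = (0.01 * df["scorecard_score"]
                        - 1.5 * np.maximum(df["leverage"] - 2, 0)
                        + 0.8 * df["soft_info_flag"]
                        + rng.normal(0, 0.5, n))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="manager_rating"), df["manager_rating"], random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("out-of-sample R^2:", r2_score(y_test, model.predict(X_test)))
```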
30545,em,"Economists often estimate models using data from a particular domain, e.g.
estimating risk preferences in a particular subject pool or for a specific
class of lotteries. Whether a model's predictions extrapolate well across
domains depends on whether the estimated model has captured generalizable
structure. We provide a tractable formulation for this ""out-of-domain""
prediction problem and define the transfer error of a model based on how well
it performs on data from a new domain. We derive finite-sample forecast
intervals that are guaranteed to cover realized transfer errors with a
user-selected probability when domains are iid, and use these intervals to
compare the transferability of economic models and black box algorithms for
predicting certainty equivalents. We find that in this application, the black
box algorithms we consider outperform standard economic models when estimated
and tested on data from the same domain, but the economic models generalize
across domains better than the black-box algorithms do.",The Transfer Performance of Economic Models,2022-02-10 05:13:50,"Isaiah Andrews, Drew Fudenberg, Lihua Lei, Annie Liang, Chaofeng Wu","http://arxiv.org/abs/2202.04796v3, http://arxiv.org/pdf/2202.04796v3",econ.TH
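A minimal sketch of the "transfer error" notion described above: fit a model on data from one domain and measure its prediction error on a different domain. The data-generating processes are hypothetical, and the paper's finite-sample forecast intervals for transfer error are not reproduced here.

```python
# Sketch: in-domain error versus transfer error across two simulated domains.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)

def make_domain(curvature, n=1000):
    x = rng.uniform(0, 1, (n, 1))
    y = curvature * x[:, 0] ** 2 + rng.normal(0, 0.1, n)   # curvature differs by domain
    return x, y

X_a, y_a = make_domain(curvature=1.0)   # estimation domain
X_b, y_b = make_domain(curvature=1.8)   # new target domain

model = LinearRegression().fit(X_a, y_a)
in_domain_error = mean_squared_error(y_a, model.predict(X_a))
transfer_error = mean_squared_error(y_b, model.predict(X_b))
print(f"in-domain MSE: {in_domain_error:.4f}, transfer MSE: {transfer_error:.4f}")
```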
30546,em,"In many first-price auctions, bidders face considerable strategic
uncertainty: They cannot perfectly anticipate the other bidders' bidding
behavior. We propose a model in which bidders do not know the entire
distribution of opponent bids but only the expected (winning) bid and lower and
upper bounds on the opponent bids. We characterize the optimal bidding
strategies and prove the existence of equilibrium beliefs. Finally, we apply
the model to estimate the cost distribution in highway procurement auctions and
find good performance out-of-sample.",An Equilibrium Model of the First-Price Auction with Strategic Uncertainty: Theory and Empirics,2022-02-15 18:45:37,Bernhard Kasberger,"http://arxiv.org/abs/2202.07517v2, http://arxiv.org/pdf/2202.07517v2",econ.TH
30547,em,"Labor economists regularly analyze employment data by fitting predictive
models to small, carefully constructed longitudinal survey datasets. Although
modern machine learning methods offer promise for such problems, these survey
datasets are too small to take advantage of them. In recent years large
datasets of online resumes have also become available, providing data about the
career trajectories of millions of individuals. However, standard econometric
models cannot take advantage of their scale or incorporate them into the
analysis of survey data. To this end we develop CAREER, a transformer-based
model that uses transfer learning to learn representations of job sequences.
CAREER is first fit to large, passively-collected resume data and then
fine-tuned to smaller, better-curated datasets for economic inferences. We fit
CAREER to a dataset of 24 million job sequences from resumes, and fine-tune its
representations on longitudinal survey datasets. We find that CAREER forms
accurate predictions of job sequences on three widely-used economics datasets.
We further find that CAREER can be used to form good predictions of other
downstream variables; incorporating CAREER into a wage model provides better
predictions than the econometric models currently in use.",CAREER: Transfer Learning for Economic Prediction of Labor Sequence Data,2022-02-17 02:23:50,"Keyon Vafa, Emil Palikot, Tianyu Du, Ayush Kanodia, Susan Athey, David M. Blei","http://arxiv.org/abs/2202.08370v3, http://arxiv.org/pdf/2202.08370v3",cs.LG
30548,em,"This paper introduces a local-to-unity/small sigma process for a stationary
time series with strong persistence and non-negligible long run risk. This
process represents the stationary long run component in an unobserved short-
and long-run components model involving different time scales. More
specifically, the short run component evolves in the calendar time and the long
run component evolves in an ultra long time scale. We develop the methods of
estimation and long run prediction for the univariate and multivariate
Structural VAR (SVAR) models with unobserved components and reveal the
impossibility to consistently estimate some of the long run parameters. The
approach is illustrated by a Monte-Carlo study and an application to
macroeconomic data.",Long Run Risk in Stationary Structural Vector Autoregressive Models,2022-02-19 02:35:28,"Christian Gourieroux, Joann Jasiak","http://arxiv.org/abs/2202.09473v1, http://arxiv.org/pdf/2202.09473v1",econ.EM
30550,em,"In the context of treatment effect estimation, this paper proposes a new
methodology to recover the counterfactual distribution when there is a single
(or a few) treated unit and possibly a high-dimensional number of potential
controls observed in a panel structure. The methodology accommodates, though it
does not require, the number of units to be larger than the number of time
periods (high-dimensional setup). As opposed to modeling only the conditional
mean, we propose to model the entire conditional quantile function (CQF)
without intervention and estimate it using the pre-intervention period by a
l1-penalized regression. We derive non-asymptotic bounds for the estimated CQF
valid uniformly over the quantiles. The bounds are explicit in terms of the
number of time periods, the number of control units, the weak dependence
coefficient (beta-mixing), and the tail decay of the random variables. The
results allow practitioners to re-construct the entire counterfactual
distribution. Moreover, we bound the probability coverage of this estimated
CQF, which can be used to construct valid confidence intervals for the
(possibly random) treatment effect for every post-intervention period. We also
propose a new hypothesis test for the sharp null of no effect based on the Lp
norm of deviation of the estimated CQF to the population one. Interestingly,
the null distribution is quasi-pivotal in the sense that it only depends on the
estimated CQF, Lp norm, and the number of post-intervention periods, but not on
the size of the post-intervention period. For that reason, critical values can
then be easily simulated. We illustrate the methodology by revisiting the
empirical study in Acemoglu, Johnson, Kermani, Kwak and Mitton (2016).",Distributional Counterfactual Analysis in High-Dimensional Setup,2022-02-23 21:22:56,Ricardo Masini,"http://arxiv.org/abs/2202.11671v2, http://arxiv.org/pdf/2202.11671v2",econ.EM
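A hedged sketch of the core estimation step described above: fit the treated unit's conditional quantile function on the control units with an l1-penalized quantile regression using only pre-intervention periods, then predict counterfactual post-intervention quantiles. The data are simulated and the uniform bounds and hypothesis test from the paper are not computed here.

```python
# Sketch: pre-intervention l1-penalized quantile regression of the treated unit on controls.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(3)
T_pre, T_post, n_controls = 120, 20, 30
controls = rng.normal(size=(T_pre + T_post, n_controls))
treated = controls[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.3, T_pre + T_post)
treated[T_pre:] += 1.0                       # treatment effect after the intervention

quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
cqf_post = {}
for q in quantiles:
    reg = QuantileRegressor(quantile=q, alpha=0.05, solver="highs")  # alpha is the l1 penalty
    reg.fit(controls[:T_pre], treated[:T_pre])            # pre-intervention fit
    cqf_post[q] = reg.predict(controls[T_pre:])           # counterfactual quantiles

gap_median = treated[T_pre:] - cqf_post[0.5]
print("average post-intervention gap at the median:", gap_median.mean())
```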
30551,em,"In this paper, we introduce the weighted-average quantile regression
framework, $\int_0^1 q_{Y|X}(u)\psi(u)du = X'\beta$, where $Y$ is a dependent
variable, $X$ is a vector of covariates, $q_{Y|X}$ is the quantile function of
the conditional distribution of $Y$ given $X$, $\psi$ is a weighting function,
and $\beta$ is a vector of parameters. We argue that this framework is of
interest in many applied settings and develop an estimator of the vector of
parameters $\beta$. We show that our estimator is $\sqrt T$-consistent and
asymptotically normal with mean zero and easily estimable covariance matrix,
where $T$ is the size of the available sample. We demonstrate the usefulness of our
estimator by applying it in two empirical settings. In the first setting, we
focus on financial data and study the factor structures of the expected
shortfalls of the industry portfolios. In the second setting, we focus on wage
data and study inequality and social welfare dependence on commonly used
individual characteristics.",Weighted-average quantile regression,2022-03-06 22:06:53,"Denis Chetverikov, Yukun Liu, Aleh Tsyvinski","http://arxiv.org/abs/2203.03032v1, http://arxiv.org/pdf/2203.03032v1",econ.EM
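An illustrative sketch, not the paper's estimator: approximate the weighted-average quantile $\int_0^1 q_{Y|X}(u)\psi(u)du$ with a grid of linear quantile regressions, so that the implied coefficient vector is a $\psi$-weighted average of quantile-regression coefficients. The weighting function, grid, and data are all chosen for illustration.

```python
# Sketch: plug-in approximation of a weighted-average quantile regression coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + 0.5 * np.abs(x))   # heteroskedastic errors
X = sm.add_constant(x)

grid = np.linspace(0.05, 0.95, 19)
psi = np.ones_like(grid)                    # example: uniform weighting over quantiles
psi /= psi.sum()                            # normalize the discretized weights

betas = np.array([sm.QuantReg(y, X).fit(q=u).params for u in grid])
beta_weighted = psi @ betas                 # psi-weighted average of quantile coefficients
print("weighted-average quantile regression coefficients:", beta_weighted)
```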
30552,em,"`All models are wrong but some are useful' (George Box 1979). But, how to
find those useful ones starting from an imperfect model? How to make informed
data-driven decisions equipped with an imperfect model? These fundamental
questions appear to be pervasive in virtually all empirical fields -- including
economics, finance, marketing, healthcare, climate change, defense planning,
and operations research. This article presents a modern approach (built on two
core ideas: abductive thinking and the density-sharpening principle) and practical
guidelines to tackle these issues in a systematic manner.",Modelplasticity and Abductive Decision Making,2022-03-06 23:05:07,"Subhadeep, Mukhopadhyay","http://arxiv.org/abs/2203.03040v3, http://arxiv.org/pdf/2203.03040v3",econ.EM
30553,em,"In many longitudinal settings, economic theory does not guide practitioners
on the type of restrictions that must be imposed to solve the rotational
indeterminacy of factor-augmented linear models. We study this problem and
offer several novel results on identification using internally generated
instruments. We propose a new class of estimators and establish large sample
results using recent developments on clustered samples and high-dimensional
models. We carry out simulation studies which show that the proposed approaches
improve the performance of existing methods on the estimation of unknown
factors. Lastly, we consider three empirical applications using administrative
data of students clustered in different subjects in elementary school, high
school and college.",Estimation of a Factor-Augmented Linear Model with Applications Using Student Achievement Data,2022-03-07 00:01:21,"Matthew Harding, Carlos Lamarche, Chris Muris","http://arxiv.org/abs/2203.03051v1, http://arxiv.org/pdf/2203.03051v1",econ.EM
30554,em,"The prediction of financial markets is a challenging yet important task. In
modern electronically-driven markets, traditional time-series econometric
methods often appear incapable of capturing the true complexity of the
multi-level interactions driving the price dynamics. While recent research has
established the effectiveness of traditional machine learning (ML) models in
financial applications, their intrinsic inability to deal with uncertainties,
which is a great concern in econometrics research and real business
applications, constitutes a major drawback. Bayesian methods naturally appear
as a suitable remedy conveying the predictive ability of ML methods with the
probabilistically-oriented practice of econometric research. By adopting a
state-of-the-art second-order optimization algorithm, we train a Bayesian
bilinear neural network with temporal attention, suitable for the challenging
time-series task of predicting mid-price movements in ultra-high-frequency
limit-order book markets. We thoroughly compare our Bayesian model with
traditional ML alternatives by addressing the use of predictive distributions
to analyze errors and uncertainties associated with the estimated parameters
and model forecasts. Our results underline the feasibility of the Bayesian
deep-learning approach and its predictive and decisional advantages in complex
econometric tasks, prompting future research in this direction.",Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets,2022-03-07 21:59:54,"Martin Magris, Mostafa Shabani, Alexandros Iosifidis","http://arxiv.org/abs/2203.03613v2, http://arxiv.org/pdf/2203.03613v2",econ.EM
30555,em,"Rapidly diminishing Arctic summer sea ice is a strong signal of the pace of
global climate change. We provide point, interval, and density forecasts for
four measures of Arctic sea ice: area, extent, thickness, and volume.
Importantly, we enforce the joint constraint that these measures must
simultaneously arrive at an ice-free Arctic. We apply this constrained joint
forecast procedure to models relating sea ice to atmospheric carbon dioxide
concentration and models relating sea ice directly to time. The resulting
""carbon-trend"" and ""time-trend"" projections are mutually consistent and predict
a nearly ice-free summer Arctic Ocean by the mid-2030s with an 80% probability.
Moreover, the carbon-trend projections show that global adoption of a lower
carbon path would likely delay the arrival of a seasonally ice-free Arctic by
only a few years.","When Will Arctic Sea Ice Disappear? Projections of Area, Extent, Thickness, and Volume",2022-03-08 15:20:57,"Francis X. Diebold, Glenn D. Rudebusch, Maximilian Goebel, Philippe Goulet Coulombe, Boyuan Zhang","http://arxiv.org/abs/2203.04040v3, http://arxiv.org/pdf/2203.04040v3",econ.EM
30556,em,"Least squares regression with heteroskedasticity and autocorrelation
consistent (HAC) standard errors has proved very useful in cross section
environments. However, several major difficulties, which are generally
overlooked, must be confronted when transferring the HAC estimation technology
to time series environments. First, in plausible time-series environments
involving failure of strong exogeneity, OLS parameter estimates can be
inconsistent, so that HAC inference fails even asymptotically. Second, most
economic time series have strong autocorrelation, which renders HAC regression
parameter estimates highly inefficient. Third, strong autocorrelation similarly
renders HAC conditional predictions highly inefficient. Finally, the structure
of popular HAC estimators is ill-suited for capturing the autoregressive
autocorrelation typically present in economic time series, which produces large
size distortions and reduced power in HAC-based hypothesis testing, in all but
the largest samples. We show that all four problems are largely avoided by the
use of a simple dynamic regression procedure, which is easily implemented. We
demonstrate the advantages of dynamic regression with detailed simulations
covering a range of practical issues.",On Robust Inference in Time Series Regression,2022-03-08 16:49:10,"Richard T. Baillie, Francis X. Diebold, George Kapetanios, Kun Ho Kim","http://arxiv.org/abs/2203.04080v2, http://arxiv.org/pdf/2203.04080v2",econ.EM
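A hedged sketch of the comparison this abstract describes: a static regression with Newey-West (HAC) standard errors versus a simple dynamic regression that models the autocorrelation directly with lagged terms. The data are simulated; the paper's full simulation design is not reproduced.

```python
# Sketch: static OLS with HAC standard errors versus a simple dynamic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 500
x = np.empty(T); u = np.empty(T); x[0] = u[0] = 0.0
for t in range(1, T):                      # persistent regressor and persistent error
    x[t] = 0.9 * x[t - 1] + rng.normal()
    u[t] = 0.8 * u[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + u

# Static OLS with HAC (Newey-West) standard errors
static = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HAC", cov_kwds={"maxlags": 8})

# Dynamic regression: add lagged y and lagged x as regressors
X_dyn = sm.add_constant(np.column_stack([x[1:], y[:-1], x[:-1]]))
dynamic = sm.OLS(y[1:], X_dyn).fit()

print("static slope (HAC se):", static.params[1], static.bse[1])
print("dynamic slope (se):   ", dynamic.params[1], dynamic.bse[1])
```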
30557,em,"In this article we propose a set of simple principles to guide empirical
practice in synthetic control studies. The proposed principles follow from
formal properties of synthetic control estimators, and pertain to the nature,
implications, and prevention of over-fitting biases within a synthetic control
framework, to the interpretability of the results, and to the availability of
validation exercises. We discuss and visually demonstrate the relevance of the
proposed principles under a variety of data configurations.",Synthetic Controls in Action,2022-03-12 02:07:34,"Alberto Abadie, Jaume Vives-i-Bastida","http://arxiv.org/abs/2203.06279v1, http://arxiv.org/pdf/2203.06279v1",stat.ME
30558,em,"We propose a multivariate extension of the Lorenz curve based on multivariate
rearrangements of optimal transport theory. We define a vector Lorenz map as
the integral of the vector quantile map associated to a multivariate resource
allocation. Each component of the Lorenz map is the cumulative share of each
resource, as in the traditional univariate case. The pointwise ordering of such
Lorenz maps defines a new multivariate majorization order. We define a
multi-attribute Gini index and complete ordering based on the Lorenz map. We
formulate income egalitarianism and show that the class of egalitarian
allocations is maximal with respect to our inequality ordering over a large
class of allocations. We propose the level sets of an Inverse Lorenz Function
as a practical tool to visualize and compare inequality in two dimensions, and
apply it to income-wealth inequality in the United States between 1989 and
2019.","Lorenz map, inequality ordering and curves based on multidimensional rearrangements",2022-03-17 03:39:47,"Yanqin Fan, Marc Henry, Brendan Pass, Jorge A. Rivero","http://arxiv.org/abs/2203.09000v2, http://arxiv.org/pdf/2203.09000v2",econ.EM
30559,em,"Log-linear models are prevalent in empirical research. Yet, how to handle
zeros in the dependent variable remains an unsettled issue. This article
clarifies it and addresses the log of zero by developing a new family of
estimators called iterated Ordinary Least Squares (iOLS). This family nests
standard approaches such as log-linear and Poisson regressions, offers several
computational advantages, and corresponds to the correct way to perform the
popular $\log(Y+1)$ transformation. We extend it to the endogenous regressor
setting (i2SLS) and overcome other common issues with Poisson models, such as
controlling for many fixed-effects. We also develop specification tests to help
researchers select between alternative estimators. Finally, our methods are
illustrated through numerical simulations and replications of landmark
publications.",Dealing with Logs and Zeros in Regression Models,2022-03-22 18:40:01,"Christophe Bellégo, David Benatia, Louis Pape","http://arxiv.org/abs/2203.11820v1, http://arxiv.org/pdf/2203.11820v1",econ.EM
30560,em,"Quantifying both historic and future volatility is key in portfolio risk
management. This note presents and compares estimation strategies for
volatility estimation in an estimation universe consisting of 28,629 unique
companies from February 2010 to April 2021, with 858 different portfolios. The
estimation methods are compared in terms of how they rank the volatility of the
different subsets of portfolios. The overall best performing approach estimates
volatility from direct entity returns using a GARCH model for variance
estimation.",Performance evaluation of volatility estimation methods for Exabel,2022-03-23 16:26:36,"Øyvind Grotmol, Martin Jullum, Kjersti Aas, Michael Scheuerer","http://arxiv.org/abs/2203.12402v1, http://arxiv.org/pdf/2203.12402v1",stat.AP
30561,em,"Factor models have become a common and valued tool for understanding the
risks associated with an investing strategy. In this report we describe
Exabel's factor model, we quantify the fraction of the variability of the
returns explained by the different factors, and we show some examples of annual
returns of portfolios with different factor exposure.",Exabel's Factor Model,2022-03-23 16:34:06,"Øyvind Grotmol, Michael Scheuerer, Kjersti Aas, Martin Jullum","http://arxiv.org/abs/2203.12408v1, http://arxiv.org/pdf/2203.12408v1",stat.AP
30562,em,"This paper studies the network structure and fragmentation of the Argentinean
interbank market. Both the unsecured (CALL) and the secured (REPO) markets are
examined, applying complex network analysis. Results indicate that, although
the secured market has less participants, its nodes are more densely connected
than in the unsecured market. The interrelationships in the unsecured market
are less stable, making its structure more volatile and vulnerable to negative
shocks. The analysis identifies two 'hidden' underlying sub-networks within the
REPO market: one based on the transactions collateralized by Treasury bonds
(REPO-T) and the other based on the operations collateralized by Central Bank (CB)
securities (REPO-CB). The changes in monetary policy stance and monetary
conditions seem to have a substantially smaller impact in the former than in
the latter 'sub-market'. The connectivity levels within the REPO-T market and
its structure remain relatively unaffected by the swings (pronounced in some
periods) in the other segment of the market. Hence, the REPO market shows signs
of fragmentation in its inner structure, according to the type of collateral
asset involved in the transactions, so the average REPO interest rate reflects
the interplay between these two partially fragmented sub-markets. This mixed
structure of the REPO market entails one of the main sources of differentiation
with respect to the CALL market.",Network structure and fragmentation of the Argentinean interbank markets,2022-03-28 07:20:05,"Federico Forte, Pedro Elosegui, Gabriel Montes-Rojas","http://dx.doi.org/10.1016/j.latcb.2022.100066, http://arxiv.org/abs/2203.14488v1, http://arxiv.org/pdf/2203.14488v1",physics.soc-ph
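A hedged sketch of the kind of complex-network comparison described above: build directed graphs for two market segments from (lender, borrower) transaction lists and compare basic connectivity statistics. The edge lists are made up; the Argentinean interbank data are not public.

```python
# Sketch: compare density and reciprocity of two interbank market layers with networkx.
import networkx as nx

call_edges = [("A", "B"), ("B", "C"), ("C", "A"), ("D", "A")]               # unsecured (CALL)
repo_edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B"), ("B", "A")]   # secured (REPO)

call = nx.DiGraph(call_edges)
repo = nx.DiGraph(repo_edges)

for name, g in [("CALL", call), ("REPO", repo)]:
    print(name,
          "nodes:", g.number_of_nodes(),
          "density:", round(nx.density(g), 3),
          "reciprocity:", round(nx.reciprocity(g), 3))
```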
30563,em,"This study demonstrates the existence of a testable condition for the
identification of the causal effect of a treatment on an outcome in
observational data, which relies on two sets of variables: observed covariates
to be controlled for and a suspected instrument. Under a causal structure
commonly found in empirical applications, the testable conditional independence
of the suspected instrument and the outcome given the treatment and the
covariates has two implications. First, the instrument is valid, i.e. it does
not directly affect the outcome (other than through the treatment) and is
unconfounded conditional on the covariates. Second, the treatment is
unconfounded conditional on the covariates such that the treatment effect is
identified. We suggest tests of this conditional independence based on machine
learning methods that account for covariates in a data-driven way and
investigate their asymptotic behavior and finite sample performance in a
simulation study. We also apply our testing approach to evaluating the impact
of fertility on female labor supply when using the sibling sex ratio of the
first two children as supposed instrument, which by and large points to a
violation of our testable implication for the moderate set of socio-economic
covariates considered.",Testing the identification of causal effects in observational data,2022-03-29 23:45:11,"Martin Huber, Jannis Kueck","http://arxiv.org/abs/2203.15890v4, http://arxiv.org/pdf/2203.15890v4",econ.EM
30564,em,"Diagnostic tests are almost never perfect. Studies quantifying their
performance use knowledge of the true health status, measured with a reference
diagnostic test. Researchers commonly assume that the reference test is
perfect, which is not the case in practice. When the assumption fails,
conventional studies identify ""apparent"" performance or performance with
respect to the reference, but not true performance. This paper provides the
smallest possible bounds on the measures of true performance - sensitivity
(true positive rate) and specificity (true negative rate), or equivalently
false positive and negative rates, in standard settings. Implied bounds on
policy-relevant parameters are derived: 1) Prevalence in screened populations;
2) Predictive values. Methods for inference based on moment inequalities are
used to construct uniformly consistent confidence sets in level over a relevant
family of data distributions. Emergency Use Authorization (EUA) and independent
study data for the BinaxNOW COVID-19 antigen test demonstrate that the bounds
can be very informative. Analysis reveals that the estimated false negative
rates for symptomatic and asymptomatic patients are up to 3.17 and 4.59 times
higher than the frequently cited ""apparent"" false negative rate.",Measuring Diagnostic Test Performance Using Imperfect Reference Tests: A Partial Identification Approach,2022-04-01 06:15:25,Filip Obradović,"http://arxiv.org/abs/2204.00180v3, http://arxiv.org/pdf/2204.00180v3",stat.AP
30565,em,"This paper develops estimation and inference methods for conditional quantile
factor models. We first introduce a simple sieve estimation, and establish
asymptotic properties of the estimators under large $N$. We then provide a
bootstrap procedure for estimating the distributions of the estimators. We also
provide two consistent estimators for the number of factors. The methods allow
us not only to estimate conditional factor structures of distributions of asset
returns utilizing characteristics, but also to conduct robust inference in
conditional factor models, which enables us to analyze the cross section of
asset returns with heavy tails. We apply the methods to analyze the cross
section of individual US stock returns.",Robust Estimation of Conditional Factor Models,2022-04-02 11:13:17,Qihui Chen,"http://arxiv.org/abs/2204.00801v2, http://arxiv.org/pdf/2204.00801v2",econ.EM
30566,em,"Although the recursive logit (RL) model has been recently popular and has led
to many applications and extensions, an important numerical issue with respect
to the computation of value functions remains unsolved. This issue is
particularly significant for model estimation, during which the parameters are
updated every iteration and may violate the feasibility condition of the value
function. To solve this numerical issue of the value function in the model
estimation, this study performs an extensive analysis of a prism-constrained RL
(Prism-RL) model proposed by Oyama and Hato (2019), which has a path set
constrained by the prism defined based upon a state-extended network
representation. The numerical experiments have shown two important properties
of the Prism-RL model for parameter estimation. First, the prism-based approach
enables estimation regardless of the initial and true parameter values, even in
cases where the original RL model cannot be estimated due to the numerical
problem. We also successfully captured a positive effect of the presence of
street green on pedestrian route choice in a real application. Second, the
Prism-RL model achieved better fit and prediction performance than the RL
model, by implicitly restricting paths with large detour or many loops.
Defining the prism-based path set in a data-oriented manner, we demonstrated
that the Prism-RL model can describe more realistic route choice
behavior. Capturing positive network attributes while retaining the
diversity of path alternatives is important in many applications such as
pedestrian route choice and sequential destination choice behavior, and thus
the prism-based approach significantly extends the practical applicability of
the RL model.",Capturing positive network attributes during the estimation of recursive logit models: A prism-based approach,2022-04-04 05:49:25,Yuki Oyama,"http://dx.doi.org/10.1016/j.trc.2023.104014, http://arxiv.org/abs/2204.01215v3, http://arxiv.org/pdf/2204.01215v3",econ.EM
30567,em,"Kernel-weighted test statistics have been widely used in a variety of
settings including non-stationary regression, inference on propensity score and
panel data models. We develop the limit theory for a kernel-based specification
test of a parametric conditional mean when the law of the regressors may not be
absolutely continuous to the Lebesgue measure and is contaminated with singular
components. This result is of independent interest and may be useful in other
applications that utilize kernel smoothed U-statistics. Simulations illustrate
the non-trivial impact of the distribution of the conditioning variables on the
power properties of the test statistic.",Kernel-weighted specification testing under general distributions,2022-04-04 20:51:56,"Sid Kankanala, Victoria Zinde-Walsh","http://arxiv.org/abs/2204.01683v3, http://arxiv.org/pdf/2204.01683v3",econ.EM
30568,em,"Finding the optimal product prices and product assortment are two fundamental
problems in revenue management. Usually, a seller needs to jointly determine
the prices and assortment while managing a network of resources with limited
capacity. However, there is not yet a tractable method to efficiently solve
such a problem. Existing papers studying static joint optimization of price and
assortment cannot incorporate resource constraints. We therefore study the revenue
management problem with resource constraints and price bounds, where the prices
and the product assortments need to be jointly determined over time. We show
that under the Markov chain (MC) choice model (which subsumes the multinomial
logit (MNL) model), we can reformulate the choice-based joint optimization
problem as a tractable convex conic optimization problem. We also prove that
an optimal solution with a constant price vector exists even with constraints
on resources. In addition, a solution with both constant assortment and price
vector can be optimal when there is no resource constraint.",Revenue Management Under the Markov Chain Choice Model with Joint Price and Assortment Decisions,2022-04-11 00:30:51,"Anton J. Kleywegt, Hongzhang Shao","http://arxiv.org/abs/2204.04774v1, http://arxiv.org/pdf/2204.04774v1",math.OC
30569,em,"In recent years, more and more state-owned enterprises (SOEs) have been
embedded in the restructuring and governance of private enterprises through
equity participation, providing a more advantageous environment for private
enterprises in financing and innovation. However, there is a lack of knowledge
about the underlying mechanisms of SOE intervention on corporate innovation
performance. Hence, in this study, we investigated the association of state
capital intervention with innovation performance, meanwhile further
investigated the potential mediating and moderating role of managerial
sentiment and financing constraints, respectively, using all listed non-ST
firms from 2010 to 2020 as the sample. The results revealed two main findings:
1) state capital intervention would increase innovation performance through
managerial sentiment; 2) financing constraints would moderate the effect of
state capital intervention on firms' innovation performance.","State capital involvement, managerial sentiment and firm innovation performance Evidence from China",2022-04-11 07:20:35,Xiangtai Zuo,"http://arxiv.org/abs/2204.04860v1, http://arxiv.org/pdf/2204.04860v1",stat.AP
30570,em,"Weak consistency and asymptotic normality of the ordinary least-squares
estimator in a linear regression with adaptive learning is derived when the
crucial, so-called `gain' parameter is estimated in a first step by nonlinear
least squares from an auxiliary model. The singular limiting distribution of
the two-step estimator is normal and in general affected by the sampling
uncertainty from the first step. However, this `generated-regressor' issue
disappears for certain parameter combinations.",Two-step estimation in linear regressions with adaptive learning,2022-04-11 20:55:52,Alexander Mayer,"http://dx.doi.org/10.1016/j.spl.2022.109761, http://arxiv.org/abs/2204.05298v3, http://arxiv.org/pdf/2204.05298v3",econ.EM
30571,em,"Advances in estimating heterogeneous treatment effects enable firms to
personalize marketing mix elements and target individuals at an unmatched level
of granularity, but feasibility constraints limit such personalization. In
practice, firms choose which unique treatments to offer and which individuals to
offer these treatments to, with the goal of maximizing profits: we call this the
coarse personalization problem. We propose a two-step solution that makes
segmentation and targeting decisions in concert. First, the firm personalizes
by estimating conditional average treatment effects. Second, the firm
discretizes by utilizing treatment effects to choose which unique treatments to
offer and whom to assign to these treatments. We show that a combination of
available machine learning tools for estimating heterogeneous treatment effects
and a novel application of optimal transport methods provides a viable and
efficient solution. With data from a large-scale field experiment for
promotions management, we find that our methodology outperforms extant
approaches that segment on consumer characteristics or preferences and those
that only search over a prespecified grid. Using our procedure, the firm
recoups over 99.5% of its expected incremental profits under fully granular
personalization while offering only five unique treatments. We conclude by
discussing how coarse personalization arises in other domains.",Coarse Personalization,2022-04-12 16:29:33,"Walter W. Zhang, Sanjog Misra","http://arxiv.org/abs/2204.05793v2, http://arxiv.org/pdf/2204.05793v2",econ.EM
30572,em,"We introduce a novel pricing kernel with time-varying variance risk aversion
that yields closed-form expressions for the VIX. We also obtain closed-form
expressions for option prices with a novel approximation method. The model can
explain the observed time-variation in the shape of the pricing kernel. We
estimate the model with S&P 500 returns and option prices and find that
time-variation in volatility risk aversion brings a substantial reduction in
derivative pricing errors. The variance risk ratio emerges as a fundamental
variable and we show that it is closely related to economic fundamentals and
key measures of sentiment and uncertainty.",Option Pricing with Time-Varying Volatility Risk Aversion,2022-04-14 16:15:44,"Peter Reinhard Hansen, Chen Tong","http://arxiv.org/abs/2204.06943v2, http://arxiv.org/pdf/2204.06943v2",q-fin.PR
30573,em,"In this paper we study the finite sample and asymptotic properties of various
weighting estimators of the local average treatment effect (LATE), each of
which can be motivated by Abadie's (2003) kappa theorem. Our framework presumes
a binary treatment and a binary instrument, which may only be valid after
conditioning on additional covariates. We argue that two of the estimators
under consideration, which are weight normalized, are generally preferable.
Several other estimators, which are unnormalized, do not satisfy the properties
of scale invariance with respect to the natural logarithm and translation
invariance, thereby exhibiting sensitivity to the units of measurement when
estimating the LATE in logs and the centering of the outcome variable more
generally. We also demonstrate that, when noncompliance is one sided, certain
estimators have the advantage of being based on a denominator that is strictly
greater than zero by construction. This is the case for only one of the two
normalized estimators, and we recommend this estimator for wider use. We
illustrate our findings with a simulation study and three empirical
applications. The importance of normalization is particularly apparent in
applications to real data. The simulations also suggest that covariate
balancing estimation of instrument propensity scores may be more robust to
misspecification. Software for implementing these methods is available in
Stata.",Abadie's Kappa and Weighting Estimators of the Local Average Treatment Effect,2022-04-16 01:51:50,"Tymon Słoczyński, S. Derya Uysal, Jeffrey M. Wooldridge","http://arxiv.org/abs/2204.07672v3, http://arxiv.org/pdf/2204.07672v3",econ.EM
30574,em,"This paper considers the problem of inference in cluster randomized
experiments when cluster sizes are non-ignorable. Here, by a cluster randomized
experiment, we mean one in which treatment is assigned at the level of the
cluster; by non-ignorable cluster sizes we mean that the distribution of
potential outcomes, and the treatment effects in particular, may depend
non-trivially on the cluster sizes. In order to permit this sort of
flexibility, we consider a sampling framework in which cluster sizes themselves
are random. In this way, our analysis departs from earlier analyses of cluster
randomized experiments in which cluster sizes are treated as non-random. We
distinguish between two different parameters of interest: the equally-weighted
cluster-level average treatment effect, and the size-weighted cluster-level
average treatment effect. For each parameter, we provide methods for inference
in an asymptotic framework where the number of clusters tends to infinity and
treatment is assigned using a covariate-adaptive stratified randomization
procedure. We additionally permit the experimenter to sample only a subset of
the units within each cluster rather than the entire cluster and demonstrate
the implications of such sampling for some commonly used estimators. A small
simulation study and empirical demonstration show the practical relevance of
our theoretical results.",Inference for Cluster Randomized Experiments with Non-ignorable Cluster Sizes,2022-04-18 18:00:45,"Federico Bugni, Ivan Canay, Azeem Shaikh, Max Tabord-Meehan","http://arxiv.org/abs/2204.08356v5, http://arxiv.org/pdf/2204.08356v5",econ.EM
30575,em,"Modeling price risks is crucial for economic decision making in energy
markets. Besides the risk of a single price, the dependence structure of
multiple prices is often relevant. We therefore propose a generic and
easy-to-implement method for creating multivariate probabilistic forecasts
based on univariate point forecasts of day-ahead electricity prices. While each
univariate point forecast refers to one of the day's 24 hours, the multivariate
forecast distribution models dependencies across hours. The proposed method is
based on simple copula techniques and an optional time series component. We
illustrate the method for five benchmark data sets recently provided by Lago et
al. (2020). Furthermore, we demonstrate an example for constructing realistic
prediction intervals for the weighted sum of consecutive electricity prices,
as, e.g., needed for pricing individual load profiles.",From point forecasts to multivariate probabilistic forecasts: The Schaake shuffle for day-ahead electricity price forecasting,2022-04-21 18:04:39,"Oliver Grothe, Fabian Kächele, Fabian Krüger","http://dx.doi.org/10.1016/j.eneco.2023.106602, http://arxiv.org/abs/2204.10154v1, http://arxiv.org/pdf/2204.10154v1",econ.EM
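A minimal sketch of the Schaake shuffle step underlying the proposed method: samples drawn independently for each hour are reordered so that their ranks across samples match the rank structure of a historical dependence template, restoring realistic cross-hour dependence. The inputs are simulated rather than day-ahead price forecasts.

```python
# Sketch: Schaake shuffle, reordering independent marginal samples to a template's rank structure.
import numpy as np

def schaake_shuffle(samples, template):
    """samples, template: arrays of shape (n_samples, n_hours).
    Returns samples reordered column-wise to mimic the template's rank structure."""
    out = np.empty_like(samples)
    for h in range(samples.shape[1]):
        ranks = template[:, h].argsort().argsort()     # rank of each template row, hour h
        out[:, h] = np.sort(samples[:, h])[ranks]      # assign sorted samples by those ranks
    return out

rng = np.random.default_rng(9)
n_samples, n_hours = 200, 24
independent = rng.normal(size=(n_samples, n_hours))                   # e.g. draws around point forecasts
template = np.cumsum(rng.normal(size=(n_samples, n_hours)), axis=1)   # correlated historical template

shuffled = schaake_shuffle(independent, template)
print("corr(hour 0, hour 1) before:", round(np.corrcoef(independent[:, 0], independent[:, 1])[0, 1], 2))
print("corr(hour 0, hour 1) after: ", round(np.corrcoef(shuffled[:, 0], shuffled[:, 1])[0, 1], 2))
```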
30576,em,"We derive optimal statistical decision rules for discrete choice problems
when payoffs depend on a partially-identified parameter $\theta$ and the
decision maker can use a point-identified parameter $P$ to deduce restrictions
on $\theta$. Leading examples include optimal treatment choice under partial
identification and optimal pricing with rich unobserved heterogeneity. Our
optimal decision rules minimize the maximum risk or regret over the identified
set of payoffs conditional on $P$ and use the data efficiently to learn about
$P$. We discuss implementation of optimal decision rules via the bootstrap and
Bayesian methods, in both parametric and semiparametric models. We provide
detailed applications to treatment choice and optimal pricing. Using a limits
of experiments framework, we show that our optimal decision rules can dominate
seemingly natural alternatives. Our asymptotic approach is well suited for
realistic empirical settings in which the derivation of finite-sample optimal
rules is intractable.",Optimal Decision Rules when Payoffs are Partially Identified,2022-04-25 19:06:16,"Timothy Christensen, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/2204.11748v2, http://arxiv.org/pdf/2204.11748v2",econ.EM
30577,em,"This work concerns the estimation of recursive route choice models in the
situation that the trip observations are incomplete, i.e., there are
unconnected links (or nodes) in the observations. A direct approach to handle
this issue would be intractable because enumerating all paths between
unconnected links (or nodes) in a real network is typically not possible. We
exploit an expectation-maximization (EM) method that allows to deal with the
missing-data issue by alternatively performing two steps of sampling the
missing segments in the observations and solving maximum likelihood estimation
problems. Moreover, observing that the EM method would be expensive, we propose
a new estimation method based on the idea that the choice probabilities of
unconnected link observations can be exactly computed by solving systems of
linear equations. We further design a new algorithm, called
decomposition-composition (DC), that helps reduce the number of systems of
linear equations to be solved and speed up the estimation. We compare our
proposed algorithms with some standard baselines using a dataset from a real
network and show that the DC algorithm outperforms the other approaches in
recovering missing information in the observations. Our methods work with most
of the recursive route choice models proposed in the literature, including the
recursive logit, nested recursive logit, or discounted recursive models.",Estimation of Recursive Route Choice Models with Incomplete Trip Observations,2022-04-27 18:01:29,"Tien Mai, The Viet Bui, Quoc Phong Nguyen, Tho V. Le","http://arxiv.org/abs/2204.12992v1, http://arxiv.org/pdf/2204.12992v1",econ.EM
30578,em,"We are interested in the distribution of treatment effects for an experiment
where units are randomized to a treatment but outcomes are measured for pairs
of units. For example, we might measure risk sharing links between households
enrolled in a microfinance program, employment relationships between workers
and firms exposed to a trade shock, or bids from bidders to items assigned to
an auction format. Such a double randomized experimental design may be
appropriate when there are social interactions, market externalities, or other
spillovers across units assigned to the same treatment. Or it may describe a
natural or quasi experiment given to the researcher. In this paper, we propose
a new empirical strategy that compares the eigenvalues of the outcome matrices
associated with each treatment. Our proposal is based on a new matrix analog of
the Fr\'echet-Hoeffding bounds that play a key role in the standard theory. We
first use this result to bound the distribution of treatment effects. We then
propose a new matrix analog of quantile treatment effects that is given by a
difference in the eigenvalues. We call this analog spectral treatment effects.","Heterogeneous Treatment Effects for Networks, Panels, and other Outcome Matrices",2022-05-03 02:39:12,"Eric Auerbach, Yong Cai","http://arxiv.org/abs/2205.01246v2, http://arxiv.org/pdf/2205.01246v2",econ.EM
30579,em,"We obtain a necessary and sufficient condition under which random-coefficient
discrete choice models, such as mixed-logit models, are rich enough to
approximate any nonparametric random utility models arbitrarily well across
choice sets. The condition turns out to be the affine-independence of the set
of characteristic vectors. When the condition fails, resulting in some random
utility models that cannot be closely approximated, we identify preferences and
substitution patterns that are challenging to approximate accurately. We also
propose algorithms to quantify the magnitude of approximation errors.",Approximating Choice Data by Discrete Choice Models,2022-05-04 07:07:16,"Haoge Chang, Yusuke Narita, Kota Saito","http://arxiv.org/abs/2205.01882v4, http://arxiv.org/pdf/2205.01882v4",econ.TH
30586,em,"We study a joint facility location and cost planning problem in a competitive
market under random utility maximization (RUM) models. The objective is to
locate new facilities and make decisions on the costs (or budgets) to spend on
the new facilities, aiming to maximize an expected captured customer demand,
assuming that customers choose a facility among all available facilities
according to a RUM model. We examine two RUM frameworks in the discrete choice
literature, namely, the additive and multiplicative RUM. While the former has
been widely used in facility location problems, we are the first to explore the
latter in this context. We numerically show that the two RUM frameworks can well
approximate each other in the context of the cost optimization problem. In
addition, we show that, under the additive RUM framework, the resultant cost
optimization problem becomes highly non-convex and may have several local
optima. In contrast, the use of the multiplicative RUM brings several
advantages to the competitive facility location problem. For instance, the cost
optimization problem under the multiplicative RUM can be solved efficiently by
a general convex optimization solver or can be reformulated as a conic
quadratic program and handled by a conic solver available in some off-the-shelf
solvers such as CPLEX or GUROBI. Furthermore, we consider a joint location and
cost optimization problem under the multiplicative RUM and propose three
approaches to solve the problem, namely, an equivalent conic reformulation, a
multi-cut outer-approximation algorithm, and a local search heuristic. We
provide numerical experiments based on synthetic instances of various sizes to
evaluate the performances of the proposed algorithms in solving the cost
optimization, and the joint location and cost optimization problems.",Joint Location and Cost Planning in Maximum Capture Facility Location under Multiplicative Random Utility Maximization,2022-05-15 20:45:38,"Ngan Ha Duong, Tien Thanh Dam, Thuy Anh Ta, Tien Mai","http://arxiv.org/abs/2205.07345v2, http://arxiv.org/pdf/2205.07345v2",math.OC
30580,em,"Marketplace companies rely heavily on experimentation when making changes to
the design or operation of their platforms. The workhorse of experimentation is
the randomized controlled trial (RCT), or A/B test, in which users are randomly
assigned to treatment or control groups. However, marketplace interference
causes the Stable Unit Treatment Value Assumption (SUTVA) to be violated,
leading to bias in the standard RCT metric. In this work, we propose techniques
for platforms to run standard RCTs and still obtain meaningful estimates
despite the presence of marketplace interference. We specifically consider a
generalized matching setting, in which the platform explicitly matches supply
with demand via a linear programming algorithm. Our first proposal is for the
platform to estimate the value of global treatment and global control via
optimization. We prove that this approach is unbiased in the fluid limit. Our
second proposal is to compare the average shadow price of the treatment and
control groups rather than the total value accrued by each group. We prove that
this technique corresponds to the correct first-order approximation (in a
Taylor series sense) of the value function of interest even in a finite-size
system. We then use this result to prove that, under reasonable assumptions,
our estimator is less biased than the RCT estimator. At the heart of our result
is the idea that it is relatively easy to model interference in matching-driven
marketplaces since, in such markets, the platform intermediates the spillover.",Reducing Marketplace Interference Bias Via Shadow Prices,2022-05-04 21:22:52,"Ido Bright, Arthur Delarue, Ilan Lobel","http://arxiv.org/abs/2205.02274v3, http://arxiv.org/pdf/2205.02274v3",math.OC
30581,em,"There are many kinds of exogeneity assumptions. How should researchers choose
among them? When exogeneity is imposed on an unobservable like a potential
outcome, we argue that the form of exogeneity should be chosen based on the
kind of selection on unobservables it allows. Consequently, researchers can
assess the plausibility of any exogeneity assumption by studying the
distributions of treatment given the unobservables that are consistent with
that assumption. We use this approach to study two common exogeneity
assumptions: quantile and mean independence. We show that both assumptions
require a kind of non-monotonic relationship between treatment and the
potential outcomes. We discuss how to assess the plausibility of this kind of
treatment selection. We also show how to define a new and weaker version of
quantile independence that allows for monotonic treatment selection. We then
show the implications of the choice of exogeneity assumption for
identification. We apply these results in an empirical illustration of the
effect of child soldiering on wages.",Choosing Exogeneity Assumptions in Potential Outcome Models,2022-05-04 21:47:17,"Matthew A. Masten, Alexandre Poirier","http://arxiv.org/abs/2205.02288v1, http://arxiv.org/pdf/2205.02288v1",econ.EM
30582,em,"When sample data are governed by an unknown sequence of independent but
possibly non-identical distributions, the data-generating process (DGP) in
general cannot be perfectly identified from the data. For making decisions
facing such uncertainty, this paper presents a novel approach by studying how
the data can best be used to robustly improve decisions. That is, no matter
which DGP governs the uncertainty, one can make a better decision than without
using the data. I show that common inference methods, e.g., maximum likelihood
and Bayesian updating cannot achieve this goal. To address, I develop new
updating rules that lead to robustly better decisions either asymptotically
almost surely or in finite sample with a pre-specified probability. Especially,
they are easy to implement as are given by simple extensions of the standard
statistical procedures in the case where the possible DGPs are all independent
and identically distributed. Finally, I show that the new updating rules also
lead to more intuitive conclusions in existing economic models such as asset
pricing under ambiguity.",Robust Data-Driven Decisions Under Model Uncertainty,2022-05-10 00:36:20,Xiaoyu Cheng,"http://arxiv.org/abs/2205.04573v1, http://arxiv.org/pdf/2205.04573v1",econ.TH
30583,em,"We consider the problem of learning personalized treatment policies that are
externally valid or generalizable: they perform well in other target
populations besides the experimental (or training) population from which data
are sampled. We first show that welfare-maximizing policies for the
experimental population are robust to shifts in the distribution of outcomes
(but not characteristics) between the experimental and target populations. We
then develop new methods for learning policies that are robust to shifts in
outcomes and characteristics. In doing so, we highlight how treatment effect
heterogeneity within the experimental population affects the generalizability
of policies. Our methods may be used with experimental or observational data
(where treatment is endogenous). Many of our methods can be implemented with
linear programming.",Externally Valid Policy Choice,2022-05-11 18:19:22,"Christopher Adjaho, Timothy Christensen","http://arxiv.org/abs/2205.05561v3, http://arxiv.org/pdf/2205.05561v3",econ.EM
30584,em,"A typical situation in competing risks analysis is that the researcher is
only interested in a subset of risks. This paper considers a depending
competing risks model with the distribution of one risk being a parametric or
semi-parametric model, while the model for the other risks being unknown.
Identifiability is shown for popular classes of parametric models and the
semiparametric proportional hazards model. The identifiability of the
parametric models does not require a covariate, while the semiparametric model
requires at least one. Estimation approaches are suggested which are shown to
be $\sqrt{n}$-consistent. Applicability and attractive finite sample
performance are demonstrated with the help of simulations and data examples.",A single risk approach to the semiparametric copula competing risks model,2022-05-12 16:44:49,"Simon M. S. Lo, Ralf A. Wilke","http://arxiv.org/abs/2205.06087v1, http://arxiv.org/pdf/2205.06087v1",stat.ME
30585,em,"We develop a new permutation test for inference on a subvector of
coefficients in linear models. The test is exact when the regressors and the
error terms are independent. Then, we show that the test is asymptotically of
correct level, consistent and has power against local alternatives when the
independence condition is relaxed, under two main conditions. The first is a
slight reinforcement of the usual absence of correlation between the regressors
and the error term. The second is that the number of strata, defined by values
of the regressors not involved in the subvector test, is small compared to the
sample size. The latter implies that the vector of nuisance regressors is
discrete. Simulations and empirical illustrations suggest that the test has
good power in practice if, indeed, the number of strata is small compared to
the sample size.",A Robust Permutation Test for Subvector Inference in Linear Regressions,2022-05-13 18:35:06,"Xavier D'Haultfœuille, Purevdorj Tuvaandorj","http://arxiv.org/abs/2205.06713v4, http://arxiv.org/pdf/2205.06713v4",econ.EM
30587,em,"Incomplete observability of data generates an identification problem. There
is no panacea for missing data. What one can learn about a population parameter
depends on the assumptions one finds credible to maintain. The credibility of
assumptions varies with the empirical setting. No specific assumptions can
provide a realistic general solution to the problem of inference with missing
data. Yet Rubin has promoted random multiple imputation (RMI) as a general way
to deal with missing values in public-use data. This recommendation has been
influential to empirical researchers who seek a simple fix to the nuisance of
missing data. This paper adds to my earlier critiques of imputation. It
provides a transparent assessment of the mix of Bayesian and frequentist
thinking used by Rubin to argue for RMI. It evaluates random imputation to
replace missing outcome or covariate data when the objective is to learn a
conditional expectation. It considers steps that might help combat the allure
of making stuff up.",Inference with Imputed Data: The Allure of Making Stuff Up,2022-05-16 01:03:45,Charles F. Manski,"http://arxiv.org/abs/2205.07388v1, http://arxiv.org/pdf/2205.07388v1",econ.EM
30588,em,"Despite the impressive success of deep neural networks in many application
areas, neural network models have so far not been widely adopted in the context
of volatility forecasting. In this work, we aim to bridge the conceptual gap
between established time series approaches, such as the Heterogeneous
Autoregressive (HAR) model, and state-of-the-art deep neural network models.
The newly introduced HARNet is based on a hierarchy of dilated convolutional
layers, which facilitates an exponential growth of the receptive field of the
model in the number of model parameters. HARNets allow for an explicit
initialization scheme such that before optimization, a HARNet yields identical
predictions as the respective baseline HAR model. Particularly when considering
the QLIKE error as a loss function, we find that this approach significantly
stabilizes the optimization of HARNets. We evaluate the performance of HARNets
with respect to three different stock market indexes. Based on this evaluation,
we formulate clear guidelines for the optimization of HARNets and show that
HARNets can substantially improve upon the forecasting accuracy of their
respective HAR baseline models. In a qualitative analysis of the filter weights
learnt by a HARNet, we report clear patterns regarding the predictive power of
past information. Among information from the previous week, yesterday and the
day before, yesterday's volatility makes by far the largest contribution to
today's realized volatility forecast. Moreover, within the previous month, the
importance of single weeks diminishes almost linearly when moving further into
the past.",HARNet: A Convolutional Neural Network for Realized Volatility Forecasting,2022-05-16 17:33:32,"Rafael Reisenhofer, Xandro Bayer, Nikolaus Hautsch","http://arxiv.org/abs/2205.07719v1, http://arxiv.org/pdf/2205.07719v1",econ.EM
30589,em,"The literature focuses on minimizing the mean of welfare regret, which can
lead to undesirable treatment choice due to sampling uncertainty. We propose to
minimize the mean of a nonlinear transformation of regret and show that
admissible rules are fractional for nonlinear regret. Focusing on mean square
regret, we derive closed-form fractions for finite-sample Bayes and minimax
optimal rules. Our approach is grounded in decision theory and extends to limit
experiments. The treatment fractions can be viewed as the strength of evidence
favoring treatment. We apply our framework to a normal regression model and
sample size calculations in randomized experiments.",Treatment Choice with Nonlinear Regret,2022-05-17 22:06:12,"Toru Kitagawa, Sokbae Lee, Chen Qiu","http://arxiv.org/abs/2205.08586v4, http://arxiv.org/pdf/2205.08586v4",econ.EM
30590,em,"Researchers often use covariate balance tests to assess whether a treatment
variable is assigned ""as-if"" at random. However, standard tests may shed no
light on a key condition for causal inference: the independence of treatment
assignment and potential outcomes. We focus on a key factor that affects the
sensitivity and specificity of balance tests: the extent to which covariates
are prognostic, that is, predictive of potential outcomes. We propose a
""conditional balance test"" based on the weighted sum of covariate differences
of means, where the weights are coefficients from a standardized regression of
observed outcomes on covariates. Our theory and simulations show that this
approach increases power relative to other global tests when potential outcomes
are imbalanced, while limiting spurious rejections due to imbalance on
irrelevant covariates.",Conditional Balance Tests: Increasing Sensitivity and Specificity With Prognostic Covariates,2022-05-21 04:12:24,"Clara Bicalho, Adam Bouyamourn, Thad Dunning","http://arxiv.org/abs/2205.10478v1, http://arxiv.org/pdf/2205.10478v1",stat.ME
30591,em,"This paper proposes a causal decomposition framework for settings in which an
initial regime randomization influences the timing of a treatment duration. The
initial randomization and the treatment, in turn, affect a duration outcome of
interest. Our empirical application considers the survival of individuals on
the kidney transplant waitlist. Upon entering the waitlist, individuals with an
AB blood type, who are universal recipients, are effectively randomized to a
regime with a higher propensity to rapidly receive a kidney transplant. Our
dynamic potential outcomes framework allows us to identify the pre-transplant
effect of the blood type, and the transplant effects depending on blood type.
We further develop dynamic assumptions which build on the LATE framework and
allow researchers to separate effects for different population substrata. Our
main empirical result is that AB blood type candidates display a higher
pre-transplant mortality. We provide evidence that this effect is due to
behavioural changes rather than biological differences.",Regime and Treatment Effects in Duration Models: Decomposing Expectation and Transplant Effects on the Kidney Waitlist,2022-05-23 13:47:41,Stephen Kastoryano,"http://arxiv.org/abs/2205.11189v1, http://arxiv.org/pdf/2205.11189v1",econ.EM
30596,em,"In this paper, we survey recent econometric contributions to measure the
relationship between economic activity and climate change. Due to the critical
relevance of these effects for the well-being of future generations, there is
an explosion of publications devoted to measuring this relationship and its
main channels. The relation between economic activity and climate change is
complex with the possibility of causality running in both directions. Starting
from economic activity, the channels that relate economic activity and climate
change are energy consumption and the consequent pollution. Hence, we first
describe the main econometric contributions about the interactions between
economic activity and energy consumption, and then describe the
contributions on the interactions between economic activity and pollution.
Finally, we look at the main results on the relationship between climate change
and economic activity. An important consequence of climate change is the
increasing occurrence of extreme weather phenomena. Therefore, we also survey
contributions on the economic effects of catastrophic climate phenomena.",Economic activity and climate change,2022-06-07 14:18:15,"Aránzazu de Juan, Pilar Poncela, Vladimir Rodríguez-Caballero, Esther Ruiz","http://arxiv.org/abs/2206.03187v2, http://arxiv.org/pdf/2206.03187v2",econ.EM
30592,em,"The internet has changed the way we live, work and take decisions. As it is
the major modern resource for research, detailed data on internet usage
contains vast amounts of behavioral information. This paper aims to answer the
question of whether this information can be leveraged to predict future returns
of stocks on financial capital markets. In an empirical analysis it implements
gradient boosted decision trees to learn relationships between abnormal returns
of stocks within the S&P 100 index and lagged predictors derived from
historical financial data, as well as search term query volumes on the internet
search engine Google. Models predict the occurrence of day-ahead stock returns
in excess of the index median. On a time frame from 2005 to 2017, all disparate
datasets exhibit valuable information. Evaluated models have average areas
under the receiver operating characteristic curve between 54.2% and 56.7%, clearly
indicating a classification better than random guessing. Implementing a simple
statistical arbitrage strategy, models are used to create daily trading
portfolios of ten stocks and result in annual performances of more than 57%
before transaction costs. With ensembles of different data sets topping the
performance ranking, the results further question the weak form and semi-strong
form efficiency of modern financial capital markets. Even though transaction
costs are not included, the approach adds to the existing literature. It gives
guidance on how to use and transform data on internet usage behavior for
financial and economic modeling and forecasting.",Predicting Day-Ahead Stock Returns using Search Engine Query Volumes: An Application of Gradient Boosted Decision Trees to the S&P 100,2022-05-31 17:58:46,Christopher Bockel-Rickermann,"http://arxiv.org/abs/2205.15853v2, http://arxiv.org/pdf/2205.15853v2",econ.EM
30593,em,"The synthetic control method has become a widely popular tool to estimate
causal effects with observational data. Despite this, inference for synthetic
control methods remains challenging. Often, inferential results rely on linear
factor model data generating processes. In this paper, we characterize the
conditions on the factor model primitives (the factor loadings) for which the
statistical risk minimizers are synthetic controls (in the simplex). Then, we
propose a Bayesian alternative to the synthetic control method that preserves
the main features of the standard method and provides a new way of doing valid
inference. We explore a Bernstein-von Mises style result to link our Bayesian
inference to the frequentist inference. For linear factor model frameworks we
show that a maximum likelihood estimator (MLE) of the synthetic control weights
can consistently estimate the predictive function of the potential outcomes for
the treated unit and that our Bayes estimator is asymptotically close to the
MLE in the total variation sense. Through simulations, we show that there is
convergence between the Bayes and frequentist approach even in sparse settings.
Finally, we apply the method to revisit the study of the economic costs of the
German re-unification and the Catalan secession movement. The Bayesian
synthetic control method is available in the bsynth R-package.",Bayesian and Frequentist Inference for Synthetic Controls,2022-06-03 21:48:25,"Ignacio Martinez, Jaume Vives-i-Bastida","http://arxiv.org/abs/2206.01779v2, http://arxiv.org/pdf/2206.01779v2",stat.ME
30594,em,"Omitted variables are one of the most important threats to the identification
of causal effects. Several widely used approaches, including Oster (2019),
assess the impact of omitted variables on empirical conclusions by comparing
measures of selection on observables with measures of selection on
unobservables. These approaches either (1) assume the omitted variables are
uncorrelated with the included controls, an assumption that is often considered
strong and implausible, or (2) use a method called residualization to avoid
this assumption. In our first contribution, we develop a framework for
objectively comparing sensitivity parameters. We use this framework to formally
prove that the residualization method generally leads to incorrect conclusions
about robustness. In our second contribution, we then provide a new approach to
sensitivity analysis that avoids this critique, allows the omitted variables to
be correlated with the included controls, and lets researchers calibrate
sensitivity parameters by comparing the magnitude of selection on observables
with the magnitude of selection on unobservables as in previous methods. We
illustrate our results in an empirical study of the effect of historical
American frontier life on modern cultural beliefs. Finally, we implement these
methods in the companion Stata module regsensitivity for easy use in practice.",Assessing Omitted Variable Bias when the Controls are Endogenous,2022-06-06 04:08:12,"Paul Diegert, Matthew A. Masten, Alexandre Poirier","http://arxiv.org/abs/2206.02303v4, http://arxiv.org/pdf/2206.02303v4",econ.EM
30595,em,"We investigate the performance and sampling variability of estimated forecast
combinations, with particular attention given to the combination of forecast
distributions. Unknown parameters in the forecast combination are optimized
according to criterion functions based on proper scoring rules, which are
chosen to reward the form of forecast accuracy that matters for the problem at
hand, and forecast performance is measured using the out-of-sample expectation
of said scoring rule. Our results provide novel insights into the behavior of
estimated forecast combinations. Firstly, we show that, asymptotically, the
sampling variability in the performance of standard forecast combinations is
determined solely by estimation of the constituent models, with estimation of
the combination weights contributing no sampling variability whatsoever, at
first order. Secondly, we show that, if computationally feasible, forecast
combinations produced in a single step -- in which the constituent model and
combination function parameters are estimated jointly -- have superior
predictive accuracy and lower sampling variability than standard forecast
combinations -- where constituent model and combination function parameters are
estimated in two steps. These theoretical insights are demonstrated
numerically, both in simulation settings and in an extensive empirical
illustration using a time series of S&P500 returns.",The Impact of Sampling Variability on Estimated Combinations of Distributional Forecasts,2022-06-06 09:11:03,"Ryan Zischke, Gael M. Martin, David T. Frazier, D. S. Poskitt","http://arxiv.org/abs/2206.02376v1, http://arxiv.org/pdf/2206.02376v1",stat.ME
30597,em,"This paper examines the relationship between the price of the Dubai crude oil
and the price of the US natural gas using an updated monthly dataset from 1992
to 2018, incorporating the latest events in the energy markets. After employing
a variety of unit root and cointegration tests, the long-run relationship is
examined via the autoregressive distributed lag (ARDL) cointegration technique,
along with the Toda-Yamamoto (1995) causality test. Our results indicate that
there is a long-run relationship with a unidirectional causality running from
the Dubai crude oil market to the US natural gas market. A variety of
post-estimation specification tests indicate that the selected ARDL model is well-specified,
and the results of the Toda-Yamamoto approach via impulse response functions,
forecast error variance decompositions, and historical decompositions with
generalized weights, show that the Dubai crude oil price retains a positive
relationship and affects the US natural gas price.",Cointegration and ARDL specification between the Dubai crude oil and the US natural gas market,2022-06-03 14:57:02,Stavros Stavroyiannis,"http://arxiv.org/abs/2206.03278v1, http://arxiv.org/pdf/2206.03278v1",econ.EM
30599,em,"We address challenges in variable selection with highly correlated data that
are frequently present in finance and economics, but also in complex natural
systems such as weather. We develop a robustified version of the knockoff
framework, which addresses challenges with high dependence among possibly many
influencing factors and strong time correlation. In particular, the repeated
subsampling strategy tackles the variability of the knockoffs and the
dependency of factors. Simultaneously, we also control the proportion of false
discoveries over a grid of all possible values, which mitigates variability of
selected factors from ad-hoc choices of a specific false discovery level. In
the application for corporate bond recovery rates, we identify new important
groups of relevant factors on top of the known standard drivers. But we also
show that out-of-sample, the resulting sparse model has similar predictive
power to state-of-the-art machine learning models that use the entire set of
predictors.",Robust Knockoffs for Controlling False Discoveries With an Application to Bond Recovery Rates,2022-06-13 13:25:57,"Konstantin Görgen, Abdolreza Nazemi, Melanie Schienle","http://arxiv.org/abs/2206.06026v1, http://arxiv.org/pdf/2206.06026v1",econ.EM
30600,em,"In this paper we describe, formalize, implement, and experimentally evaluate
a novel transaction re-identification attack against official foreign-trade
statistics releases in Brazil. The attack's goal is to re-identify the
importers of foreign-trade transactions (by revealing the identity of the
company performing that transaction), which consequently violates those
importers' fiscal secrecy (by revealing sensitive information: the value and
volume of traded goods). We provide a mathematical formalization of this fiscal
secrecy problem using principles from the framework of quantitative information
flow (QIF), then carefully identify the main sources of imprecision in the
official data releases used as auxiliary information in the attack, and model
transaction reconstruction as a linear optimization problem solvable through
integer linear programming (ILP). We show that this problem is NP-complete, and
provide a methodology to identify tractable instances. We exemplify the
feasibility of our attack by performing 2,003 transaction re-identifications
that in total amount to more than \$137M, and affect 348 Brazilian companies.
Further, since similar statistics are produced by other statistical agencies,
our attack is of broader concern.","A novel reconstruction attack on foreign-trade official statistics, with a Brazilian case study",2022-06-14 00:48:17,"Danilo Fabrino Favato, Gabriel Coutinho, Mário S. Alvim, Natasha Fernandes","http://arxiv.org/abs/2206.06493v1, http://arxiv.org/pdf/2206.06493v1",cs.CR
30601,em,"A comprehensive methodology for inference in vector autoregressions (VARs)
using sign and other structural restrictions is developed. The reduced-form VAR
disturbances are driven by a few common factors and structural identification
restrictions can be incorporated in their loadings in the form of parametric
restrictions. A Gibbs sampler is derived that allows for reduced-form
parameters and structural restrictions to be sampled efficiently in one step. A
key benefit of the proposed approach is that it allows for treating parameter
estimation and structural inference as a joint problem. An additional benefit
is that the methodology can scale to large VARs with multiple shocks, and it
can be extended to accommodate non-linearities, asymmetries, and numerous other
interesting empirical features. The excellent properties of the new algorithm
for inference are explored using synthetic data experiments, and by revisiting
the role of financial factors in economic fluctuations using identification
based on sign restrictions.",A new algorithm for structural restrictions in Bayesian vector autoregressions,2022-06-14 17:48:24,Dimitris Korobilis,"http://arxiv.org/abs/2206.06892v1, http://arxiv.org/pdf/2206.06892v1",econ.EM
30602,em,"We present a data-driven prescriptive framework for fair decisions, motivated
by hiring. An employer evaluates a set of applicants based on their observable
attributes. The goal is to hire the best candidates while avoiding bias with
regard to a certain protected attribute. Simply ignoring the protected
attribute will not eliminate bias due to correlations in the data. We present a
hiring policy that depends on the protected attribute functionally, but not
statistically, and we prove that, among all possible fair policies, ours is
optimal with respect to the firm's objective. We test our approach on both
synthetic and real data, and find that it shows great practical potential to
improve equity for underrepresented and historically marginalized groups.",Optimal data-driven hiring with equity for underrepresented groups,2022-06-19 04:05:31,"Yinchu Zhu, Ilya O. Ryzhov","http://arxiv.org/abs/2206.09300v1, http://arxiv.org/pdf/2206.09300v1",econ.EM
30603,em,"The sample selection bias problem arises when a variable of interest is
correlated with a latent variable, and involves situations in which the
response variable had part of its observations censored. Heckman (1976)
proposed a sample selection model based on the bivariate normal distribution
that fits both the variable of interest and the latent variable. Recently, this
assumption of normality has been relaxed by more flexible models such as the
Student-t distribution (Marchenko and Genton, 2012; Lachos et al., 2021). The
aim of this work is to propose generalized Heckman sample selection models
based on symmetric distributions (Fang et al., 1990). This is a new class of
sample selection models, in which variables are added to the dispersion and
correlation parameters. A Monte Carlo simulation study is performed to assess
the behavior of the parameter estimation method. Two real data sets are
analyzed to illustrate the proposed approach.",Symmetric generalized Heckman models,2022-06-21 03:29:24,"Helton Saulo, Roberto Vila, Shayane S. Cordeiro","http://arxiv.org/abs/2206.10054v1, http://arxiv.org/pdf/2206.10054v1",stat.ME
30604,em,"We use ""glide charts"" (plots of sequences of root mean squared forecast
errors as the target date is approached) to evaluate and compare fixed-target
forecasts of Arctic sea ice. We first use them to evaluate the simple
feature-engineered linear regression (FELR) forecasts of Diebold and Goebel
(2021), and to compare FELR forecasts to naive pure-trend benchmark forecasts.
Then we introduce a much more sophisticated feature-engineered machine learning
(FEML) model, and we use glide charts to evaluate FEML forecasts and compare
them to a FELR benchmark. Our substantive results include the frequent
appearance of predictability thresholds, which differ across months, meaning
that accuracy initially fails to improve as the target date is approached but
then increases progressively once a threshold lead time is crossed. Also, we
find that FEML can improve appreciably over FELR when forecasting ""turning
point"" months in the annual cycle at horizons of one to three months ahead.",Assessing and Comparing Fixed-Target Forecasts of Arctic Sea Ice: Glide Charts for Feature-Engineered Linear Regression and Machine Learning Models,2022-06-21 23:43:48,"Francis X. Diebold, Maximilian Goebel, Philippe Goulet Coulombe","http://arxiv.org/abs/2206.10721v2, http://arxiv.org/pdf/2206.10721v2",econ.EM
30606,em,"This paper elaborates on the sectoral-regional view of the business cycle
synchronization in the EU -- a necessary condition for the optimal currency
area. We argue that complete and tidy clustering of the data improves the
decision maker's understanding of the business cycle and, by extension, the
quality of economic decisions. We define the business cycles by applying a
wavelet approach to drift-adjusted gross value added data spanning over 2000Q1
to 2021Q2. For the synchronization analysis, we propose a
novel soft-clustering approach, which adjusts hierarchical clustering in
several aspects. First, the method relies on synchronicity dissimilarity
measures, noting that, for time series data, the feature space is the set of
all points in time. Then, the ``soft'' part of the approach strengthens the
synchronization signal by using silhouette measures. Finally, we add a
probabilistic sparsity algorithm to drop out the most asynchronous ``noisy''
data, improving the silhouette scores of the most and less synchronous groups.
The method, hence, splits the sectoral-regional data into three groups: the
synchronous group that shapes the EU business cycle; the less synchronous group
that may contain information relevant for forecasting the cycle; the asynchronous group
that may help investors to diversify through-the-cycle risks of the investment
portfolios. The results support the core-periphery hypothesis.",Business Cycle Synchronization in the EU: A Regional-Sectoral Look through Soft-Clustering and Wavelet Decomposition,2022-06-28 19:42:02,"Saulius Jokubaitis, Dmitrij Celov","http://dx.doi.org/10.1007/s41549-023-00090-4, http://arxiv.org/abs/2206.14128v1, http://arxiv.org/pdf/2206.14128v1",econ.EM
30607,em,"We use a simulation study to compare three methods for adaptive
experimentation: Thompson sampling, Tempered Thompson sampling, and Exploration
sampling. We gauge the performance of each in terms of social welfare and
estimation accuracy, and as a function of the number of experimental waves. We
further construct a set of novel ""hybrid"" loss measures to identify which
methods are optimal for researchers pursuing a combination of experimental
aims. Our main results are: 1) the relative performance of Thompson sampling
depends on the number of experimental waves, 2) Tempered Thompson sampling
uniquely distributes losses across multiple experimental aims, and 3) in most
cases, Exploration sampling performs similarly to random assignment.",A Comparison of Methods for Adaptive Experimentation,2022-07-02 02:12:52,"Samantha Horn, Sabina J. Sloman","http://arxiv.org/abs/2207.00683v1, http://arxiv.org/pdf/2207.00683v1",stat.ME
30608,em,"We provide an analytical characterization of the model flexibility of the
synthetic control method (SCM) in the familiar form of degrees of freedom. We
obtain estimable information criteria. These may be used to circumvent
cross-validation when selecting either the weighting matrix in the SCM with
covariates, or the tuning parameter in model averaging or penalized variants of
SCM. We assess the impact of car license rationing in Tianjin and make a novel
use of SCM; while a natural match is available, it and other donors are noisy,
inviting the use of SCM to average over approximately matching donors. The very
large number of candidate donors calls for model averaging or penalized
variants of SCM and, with short pre-treatment series, model selection per
information criteria outperforms that per cross-validation.",Degrees of Freedom and Information Criteria for the Synthetic Control Method,2022-07-06 22:52:03,"Guillaume Allaire Pouliot, Zhen Xie","http://arxiv.org/abs/2207.02943v1, http://arxiv.org/pdf/2207.02943v1",econ.EM
30609,em,"Vector autoregressions (VARs) with multivariate stochastic volatility are
widely used for structural analysis. Often the structural model identified
through economically meaningful restrictions--e.g., sign restrictions--is
supposed to be independent of how the dependent variables are ordered. But
since the reduced-form model is not order invariant, results from the
structural analysis depend on the order of the variables. We consider a VAR
based on factor stochastic volatility that is constructed to be order
invariant. We show that the presence of multivariate stochastic volatility
allows for statistical identification of the model. We further prove that, with
a suitable set of sign restrictions, the corresponding structural model is
point-identified. An additional appeal of the proposed approach is that it can
easily handle a large number of dependent variables as well as sign
restrictions. We demonstrate the methodology through a structural analysis in
which we use a 20-variable VAR with sign restrictions to identify 5 structural
shocks.","Large Bayesian VARs with Factor Stochastic Volatility: Identification, Order Invariance and Structural Analysis",2022-07-08 19:22:26,"Joshua Chan, Eric Eisenstat, Xuewen Yu","http://arxiv.org/abs/2207.03988v1, http://arxiv.org/pdf/2207.03988v1",econ.EM
30610,em,"We produce methodology for regression analysis when the geographic locations
of the independent and dependent variables do not coincide, in which case we
speak of misaligned data. We develop and investigate two complementary methods
for regression analysis with misaligned data that circumvent the need to
estimate or specify the covariance of the regression errors. We carry out a
detailed reanalysis of Maccini and Yang (2009) and find economically
significant quantitative differences but sustain most qualitative conclusions.",Spatial Econometrics for Misaligned Data,2022-07-08 21:12:50,Guillaume Allaire Pouliot,"http://arxiv.org/abs/2207.04082v1, http://arxiv.org/pdf/2207.04082v1",econ.EM
30611,em,"Model diagnostics is an indispensable component of regression analysis, yet
it is not well addressed in standard textbooks on generalized linear models.
The lack of exposition is attributed to the fact that when outcome data are
discrete, classical methods (e.g., Pearson/deviance residual analysis and
goodness-of-fit tests) have limited utility in model diagnostics and treatment.
This paper establishes a novel framework for model diagnostics of discrete data
regression. Unlike the literature defining a single-valued quantity as the
residual, we propose to use a function as a vehicle to retain the residual
information. In the presence of discreteness, we show that such a functional
residual is appropriate for summarizing the residual randomness that cannot be
captured by the structural part of the model. We establish its theoretical
properties, which leads to the innovation of new diagnostic tools including the
functional-residual-vs-covariate plot and Function-to-Function (Fn-Fn) plot.
Our numerical studies demonstrate that the use of these tools can reveal a
variety of model misspecifications, such as not properly including a
higher-order term, an explanatory variable, an interaction effect, a dispersion
parameter, or a zero-inflation component. The functional residual yields, as a
byproduct, Liu-Zhang's surrogate residual mainly developed for cumulative link
models for ordinal data (Liu and Zhang, 2018, JASA). As a general notion, it
considerably broadens the diagnostic scope as it applies to virtually all
parametric models for binary, ordinal and count data, all in a unified
diagnostic scheme.",Model diagnostics of discrete data regression: a unifying framework using functional residuals,2022-07-09 20:00:18,"Zewei Lin, Dungang Liu","http://arxiv.org/abs/2207.04299v1, http://arxiv.org/pdf/2207.04299v1",stat.ME
30612,em,"Univariate normal regression models are statistical tools widely applied in
many areas of economics. Nevertheless, income data have asymmetric behavior and
are best modeled by non-normal distributions. The modeling of income plays an
important role in determining workers' earnings, as well as being an important
research topic in labor economics. Thus, the objective of this work is to
propose parametric quantile regression models based on two important asymmetric
income distributions, namely, Dagum and Singh-Maddala distributions. The
proposed quantile models are based on reparameterizations of the original
distributions by inserting a quantile parameter. We present the
reparameterizations, some properties of the distributions, and the quantile
regression models with their inferential aspects. We proceed with Monte Carlo
simulation studies, considering the maximum likelihood estimation performance
evaluation and an analysis of the empirical distribution of two residuals. The
Monte Carlo results show that both models meet the expected outcomes. We apply
the proposed quantile regression models to a household income data set provided
by the National Institute of Statistics of Chile. We show that both proposed
models provide a good fit to the data. Thus, we conclude
that results were favorable to the use of Singh-Maddala and Dagum quantile
regression models for positive asymmetric data, such as income data.",Parametric quantile regression for income data,2022-07-14 02:42:15,"Helton Saulo, Roberto Vila, Giovanna V. Borges, Marcelo Bourguignon","http://arxiv.org/abs/2207.06558v1, http://arxiv.org/pdf/2207.06558v1",stat.ME
30613,em,"This paper introduces and proves asymptotic normality for a new
semi-parametric estimator of continuous treatment effects in panel data.
Specifically, we estimate the average derivative. Our estimator uses the panel
structure of data to account for unobservable time-invariant heterogeneity and
machine learning (ML) methods to preserve statistical power while modeling
high-dimensional relationships. We construct our estimator using tools from
double de-biased machine learning (DML) literature. Monte Carlo simulations in
a nonlinear panel setting show that our method estimates the average derivative
with low bias and variance relative to other approaches. Lastly, we use our
estimator to measure the impact of extreme heat on United States (U.S.) corn
production, after flexibly controlling for precipitation and other weather
features. Our approach yields extreme heat effect estimates that are 50% larger
than estimates using linear regression. This difference in estimates
corresponds to an additional $3.17 billion in annual damages by 2050 under
median climate scenarios. We also estimate a dose-response curve, which shows
that damages from extreme heat decline somewhat in counties with more extreme
heat exposure.",Estimating Continuous Treatment Effects in Panel Data using Machine Learning with a Climate Application,2022-07-18 20:44:35,"Sylvia Klosin, Max Vilgalys","http://arxiv.org/abs/2207.08789v2, http://arxiv.org/pdf/2207.08789v2",econ.EM
30614,em,"We propose a one-to-many matching estimator of the average treatment effect
based on propensity scores estimated by isotonic regression. The method relies
on the monotonicity assumption on the propensity score function, which can be
justified in many applications in economics. We show that the nature of the
isotonic estimator can help us to fix many problems of existing matching
methods, including efficiency, choice of the number of matches, choice of
tuning parameters, robustness to propensity score misspecification, and
bootstrap validity. As a by-product, a uniformly consistent isotonic estimator
is developed for our proposed matching method.",Isotonic propensity score matching,2022-07-18 21:27:01,"Mengshan Xu, Taisuke Otsu","http://arxiv.org/abs/2207.08868v1, http://arxiv.org/pdf/2207.08868v1",econ.EM
30615,em,"Bias correction can often improve the finite sample performance of
estimators. We show that the choice of bias correction method has no effect on
the higher-order variance of semiparametrically efficient parametric
estimators, so long as the estimate of the bias is asymptotically linear. It is
also shown that bootstrap, jackknife, and analytical bias estimates are
asymptotically linear for estimators with higher-order expansions of a standard
form. In particular, we find that for a variety of estimators the
straightforward bootstrap bias correction gives the same higher-order variance
as more complicated analytical or jackknife bias corrections. In contrast, bias
corrections that do not estimate the bias at the parametric rate, such as the
split-sample jackknife, result in larger higher-order variances in the i.i.d.
setting we focus on. For both a cross-sectional MLE and a panel model with
individual fixed effects, we show that the split-sample jackknife has a
higher-order variance term that is twice as large as that of the
`leave-one-out' jackknife.",Efficient Bias Correction for Cross-section and Panel Data,2022-07-20 17:39:31,"Jinyong Hahn, David W. Hughes, Guido Kuersteiner, Whitney K. Newey","http://arxiv.org/abs/2207.09943v3, http://arxiv.org/pdf/2207.09943v3",econ.EM
30616,em,"In this paper we propose a new time-varying econometric model, called
Time-Varying Poisson AutoRegressive with eXogenous covariates (TV-PARX), suited
to model and forecast time series of counts. We show that the score-driven
framework is particularly suitable to recover the evolution of time-varying
parameters and provides the required flexibility to model and forecast time
series of counts characterized by convoluted nonlinear dynamics and structural
breaks. We study the asymptotic properties of the TV-PARX model and prove
that, under mild conditions, maximum likelihood estimation (MLE) yields
strongly consistent and asymptotically normal parameter estimates.
Finite-sample performance and forecasting accuracy are evaluated through Monte
Carlo simulations. The empirical usefulness of the time-varying specification
of the proposed TV-PARX model is shown by analyzing the number of new daily
COVID-19 infections in Italy and the number of corporate defaults in the US.",Time-Varying Poisson Autoregression,2022-07-22 13:43:18,"Giovanni Angelini, Giuseppe Cavaliere, Enzo D'Innocenzo, Luca De Angelis","http://arxiv.org/abs/2207.11003v1, http://arxiv.org/pdf/2207.11003v1",econ.EM
30617,em,"The difference-in-differences (DID) design is one of the most popular methods
used in empirical economics research. However, there is almost no work
examining what the DID method identifies in the presence of a misclassified
treatment variable. This paper studies the identification of treatment effects
in DID designs when the treatment is misclassified. Misclassification arises in
various ways, including when the timing of a policy intervention is ambiguous
or when researchers need to infer treatment from auxiliary data. We show that
the DID estimand is biased and recovers a weighted average of the average
treatment effects on the treated (ATT) in two subpopulations -- the correctly
classified and misclassified groups. In some cases, the DID estimand may yield
the wrong sign and is otherwise attenuated. We provide bounds on the ATT when
the researcher has access to information on the extent of misclassification in
the data. We demonstrate our theoretical results using simulations and provide
two empirical applications to guide researchers in performing sensitivity
analysis using our proposed methods.",Misclassification in Difference-in-differences Models,2022-07-25 06:37:10,"Augustine Denteh, Désiré Kédagni","http://arxiv.org/abs/2207.11890v2, http://arxiv.org/pdf/2207.11890v2",econ.EM
30619,em,"In this paper, we forecast euro area inflation and its main components using
an econometric model which exploits a massive number of time series on survey
expectations for the European Commission's Business and Consumer Survey. To
make estimation of such a huge model tractable, we use recent advances in
computational statistics to carry out posterior simulation and inference. Our
findings suggest that the inclusion of a wide range of firms and consumers'
opinions about future economic developments offers useful information to
forecast prices and assess tail risks to inflation. These predictive
improvements do not only arise from surveys related to expected inflation but
also from other questions related to the general economic environment. Finally,
we find that firms' expectations about the future seem to have more predictive
content than consumer expectations.",Forecasting euro area inflation using a huge panel of survey expectations,2022-07-25 17:24:32,"Florian Huber, Luca Onorante, Michael Pfarrhofer","http://arxiv.org/abs/2207.12225v1, http://arxiv.org/pdf/2207.12225v1",econ.EM
30620,em,"These notes shows how to do inference on the Demographic Parity (DP) metric.
Although the metric is a complex statistic involving min and max computations,
we propose a smooth approximation of those functions and derive its asymptotic
distribution. The limit of these approximations and their gradients converge to
those of the true max and min functions, wherever they exist. More importantly,
when the true max and min functions are not differentiable, the approximations
still are, and they provide valid asymptotic inference everywhere in the
domain. We conclude with some directions on how to compute confidence intervals
for DP, how to test if it is under 0.8 (the U.S. Equal Employment Opportunity
Commission fairness threshold), and how to do inference in an A/B test.",Identification and Inference with Min-over-max Estimators for the Measurement of Labor Market Fairness,2022-07-28 00:16:01,Karthik Rajkumar,"http://arxiv.org/abs/2207.13797v1, http://arxiv.org/pdf/2207.13797v1",stat.ME
30621,em,"Motivated by growing evidence of agents' mistakes in strategically simple
environments, we propose a solution concept -- robust equilibrium -- that
requires only an asymptotically optimal behavior. We use it to study large
random matching markets operated by the applicant-proposing Deferred Acceptance
(DA). Although truth-telling is a dominant strategy, almost all applicants may
be non-truthful in robust equilibrium; however, the outcome must be arbitrarily
close to the stable matching. Our results imply that one can assume truthful
agents to study DA outcomes, theoretically or counterfactually. However, to
estimate the preferences of mistaken agents, one should assume stable matching
but not truth-telling.",Stable Matching with Mistaken Agents,2022-07-28 11:04:12,"Georgy Artemov, Yeon-Koo Che, YingHua He","http://dx.doi.org/10.1086/722978, http://arxiv.org/abs/2207.13939v4, http://arxiv.org/pdf/2207.13939v4",econ.TH
30622,em,"A central goal in social science is to evaluate the causal effect of a
policy. One dominant approach is through panel data analysis in which the
behaviors of multiple units are observed over time. The information across time
and space motivates two general approaches: (i) horizontal regression (i.e.,
unconfoundedness), which exploits time series patterns, and (ii) vertical
regression (e.g., synthetic controls), which exploits cross-sectional patterns.
Conventional wisdom states that the two approaches are fundamentally different.
We establish this position to be partly false for estimation but generally true
for inference. In particular, we prove that both approaches yield identical
point estimates under several standard settings. For the same point estimate,
however, each approach quantifies uncertainty with respect to a distinct
estimand. In turn, the confidence interval developed for one estimand may have
incorrect coverage for another. This emphasizes that the source of randomness
that researchers assume has direct implications for the accuracy of inference.",Same Root Different Leaves: Time Series and Cross-Sectional Methods in Panel Data,2022-07-29 08:12:32,"Dennis Shen, Peng Ding, Jasjeet Sekhon, Bin Yu","http://arxiv.org/abs/2207.14481v2, http://arxiv.org/pdf/2207.14481v2",econ.EM
30623,em,"Omitted variables are a common concern in empirical research. We distinguish
between two ways omitted variables can affect baseline estimates: by driving
them to zero or by reversing their sign. We show that, depending on how the
impact of omitted variables is measured, it can be substantially easier for
omitted variables to flip coefficient signs than to drive them to zero.
Consequently, results which are considered robust to being ""explained away"" by
omitted variables are not necessarily robust to sign changes. We show that this
behavior occurs with ""Oster's delta"" (Oster 2019), a commonly reported measure
of regression coefficient robustness to the presence of omitted variables.
Specifically, we show that any time this measure is large--suggesting that
omitted variables may be unimportant--a much smaller value reverses the sign of
the parameter of interest. Relatedly, we show that selection bias adjusted
estimands can be extremely sensitive to the choice of the sensitivity
parameter. Specifically, researchers commonly compute a bias adjustment under
the assumption that Oster's delta equals one. Under the alternative assumption
that delta is very close to one, but not exactly equal to one, we show that the
bias can instead be arbitrarily large. To address these concerns, we propose a
modified measure of robustness that accounts for such sign changes, and discuss
best practices for assessing sensitivity to omitted variables. We demonstrate
this sign flipping behavior in an empirical application to social capital and
the rise of the Nazi party, where we show how it can overturn conclusions about
robustness, and how our proposed modifications can be used to regain
robustness. We analyze three additional empirical applications as well. We
implement our proposed methods in the companion Stata module regsensitivity for
easy use in practice.",The Effect of Omitted Variables on the Sign of Regression Coefficients,2022-08-01 03:50:42,"Matthew A. Masten, Alexandre Poirier","http://arxiv.org/abs/2208.00552v2, http://arxiv.org/pdf/2208.00552v2",econ.EM
30624,em,"We develop a penalized two-pass regression with time-varying factor loadings.
The penalization in the first pass enforces sparsity for the time-variation
drivers while also maintaining compatibility with the no-arbitrage restrictions
by regularizing appropriate groups of coefficients. The second pass delivers
risk premia estimates to predict equity excess returns. Our Monte Carlo results
and our empirical results on a large cross-sectional data set of US individual
stocks show that penalization without grouping can lead to nearly all
estimated time-varying models violating the no-arbitrage restrictions.
Moreover, our results demonstrate that the proposed method reduces the
prediction errors compared to a penalized approach without appropriate grouping
or a time-invariant factor model.",A penalized two-pass regression to predict stock returns with time-varying risk premia,2022-08-01 19:21:20,"Gaetan Bakalli, Stéphane Guerrier, Olivier Scaillet","http://arxiv.org/abs/2208.00972v1, http://arxiv.org/pdf/2208.00972v1",econ.EM
30630,em,"Conducting causal inference with panel data is a core challenge in social
science research. We adapt a deep neural architecture for time series
forecasting (the N-BEATS algorithm) to more accurately predict the
counterfactual evolution of a treated unit had treatment not occurred. Across a
range of settings, the resulting estimator (""SyNBEATS"") significantly
outperforms commonly employed methods (synthetic controls, two-way fixed
effects), and attains comparable or more accurate performance compared to
recently proposed methods (synthetic difference-in-differences, matrix
completion). Our results highlight how advances in the forecasting literature
can be harnessed to improve causal inference in panel data settings.",Forecasting Algorithms for Causal Inference with Panel Data,2022-08-06 13:23:38,"Jacob Goldin, Julian Nyarko, Justin Young","http://arxiv.org/abs/2208.03489v2, http://arxiv.org/pdf/2208.03489v2",econ.EM
30625,em,"We revisit the problem of estimating the local average treatment effect
(LATE) and the local average treatment effect on the treated (LATT) when
control variables are available, either to render the instrumental variable
(IV) suitably exogenous or to improve precision. Unlike previous approaches,
our doubly robust (DR) estimation procedures use quasi-likelihood methods
weighted by the inverse of the IV propensity score - so-called inverse
probability weighted regression adjustment (IPWRA) estimators. By properly
choosing models for the propensity score and outcome models, fitted values are
ensured to be in the logical range determined by the response variable,
producing DR estimators of LATE and LATT with appealing small sample
properties. Inference is relatively straightforward both analytically and using
the nonparametric bootstrap. Our DR LATE and DR LATT estimators work well in
simulations. We also propose a DR version of the Hausman test that can be used
to assess the unconfoundedness assumption through a comparison of different
estimates of the average treatment effect on the treated (ATT) under one-sided
noncompliance. Unlike the usual test that compares OLS and IV estimates, this
procedure is robust to treatment effect heterogeneity.",Doubly Robust Estimation of Local Average Treatment Effects Using Inverse Probability Weighted Regression Adjustment,2022-08-02 11:09:47,"Tymon Słoczyński, S. Derya Uysal, Jeffrey M. Wooldridge","http://arxiv.org/abs/2208.01300v2, http://arxiv.org/pdf/2208.01300v2",econ.EM
30626,em,"This paper is concerned with the findings related to the robust first-stage
F-statistic in the Monte Carlo analysis of Andrews (2018), who found in a
heteroskedastic grouped-data design that even for very large values of the
robust F-statistic, the standard 2SLS confidence intervals had large coverage
distortions. This finding appears to discredit the robust F-statistic as a test
for underidentification. However, it is shown here that large values of the
robust F-statistic do imply that there is first-stage information, but this may
not be utilized well by the 2SLS estimator, or the standard GMM estimator. An
estimator that corrects for this is a robust GMM estimator, denoted GMMf, with
the robust weight matrix not based on the structural residuals, but on the
first-stage residuals. For the grouped-data setting of Andrews (2018), this
GMMf estimator gives the weights to the group specific estimators according to
the group specific concentration parameters in the same way as 2SLS does under
homoskedasticity, which is formally shown using weak instrument asymptotics.
The GMMf estimator is much better behaved than the 2SLS estimator in the
Andrews (2018) design, behaving well in terms of relative bias and Wald-test
size distortion at more standard values of the robust F-statistic. We show that
the same patterns can occur in a dynamic panel data model when the error
variance is heteroskedastic over time. We further derive the conditions under
which the Stock and Yogo (2005) weak instruments critical values apply to the
robust F-statistic in relation to the behaviour of the GMMf estimator.","Weak Instruments, First-Stage Heteroskedasticity, the Robust F-Test and a GMM Estimator with the Weight Matrix Based on First-Stage Residuals",2022-08-03 13:34:20,Frank Windmeijer,"http://arxiv.org/abs/2208.01967v1, http://arxiv.org/pdf/2208.01967v1",econ.EM
30627,em,"We consider bootstrap inference for estimators which are (asymptotically)
biased. We show that, even when the bias term cannot be consistently estimated,
valid inference can be obtained by proper implementations of the bootstrap.
Specifically, we show that the prepivoting approach of Beran (1987, 1988),
originally proposed to deliver higher-order refinements, restores bootstrap
validity by transforming the original bootstrap p-value into an asymptotically
uniform random variable. We propose two different implementations of
prepivoting (plug-in and double bootstrap), and provide general high-level
conditions that imply validity of bootstrap inference. To illustrate the
practical relevance and implementation of our results, we discuss five
examples: (i) inference on a target parameter based on model averaging; (ii)
ridge-type regularized estimators; (iii) nonparametric regression; (iv) a
location model for infinite variance data; and (v) dynamic panel data models.",Bootstrap inference in the presence of bias,2022-08-03 15:50:07,"Giuseppe Cavaliere, Sílvia Gonçalves, Morten Ørregaard Nielsen, Edoardo Zanelli","http://arxiv.org/abs/2208.02028v3, http://arxiv.org/pdf/2208.02028v3",econ.EM
30628,em,"Decision-making often involves ranking and selection. For example, to
assemble a team of political forecasters, we might begin by narrowing our
choice set to the candidates we are confident rank among the top 10% in
forecasting ability. Unfortunately, we do not know each candidate's true
ability but observe a noisy estimate of it. This paper develops new Bayesian
algorithms to rank and select candidates based on noisy estimates. Using
simulations based on empirical data, we show that our algorithms often
outperform frequentist ranking and selection algorithms. Our Bayesian ranking
algorithms yield shorter rank confidence intervals while maintaining
approximately correct coverage. Our Bayesian selection algorithms select more
candidates while maintaining correct error rates. We apply our ranking and
selection procedures to field experiments, economic mobility, forecasting, and
similar problems. Finally, we implement our ranking and selection techniques in
a user-friendly Python package documented here:
https://dsbowen-conditional-inference.readthedocs.io/en/latest/.","Bayesian ranking and selection with applications to field studies, economic mobility, and forecasting",2022-08-03 16:02:34,Dillon Bowen,"http://arxiv.org/abs/2208.02038v1, http://arxiv.org/pdf/2208.02038v1",stat.ME
30629,em,"We establish new results for estimation and inference in financial durations
models, where events are observed over a given time span, such as a trading
day, or a week. For the classical autoregressive conditional duration (ACD)
models by Engle and Russell (1998, Econometrica 66, 1127-1162), we show that
the large sample behavior of likelihood estimators is highly sensitive to the
tail behavior of the financial durations. In particular, even under
stationarity, asymptotic normality breaks down for tail indices smaller than
one or, equivalently, when the clustering behaviour of the observed events is
such that the unconditional distribution of the durations has no finite mean.
Instead, we find that estimators are mixed Gaussian and have non-standard rates
of convergence. The results are based on exploiting the crucial fact that for
duration data the number of observations within any given time span is random.
Our results apply to general econometric models where the number of observed
events is random.",The Econometrics of Financial Duration Modeling,2022-08-03 17:28:03,"Giuseppe Cavaliere, Thomas Mikosch, Anders Rahbek, Frederik Vilandt","http://arxiv.org/abs/2208.02098v3, http://arxiv.org/pdf/2208.02098v3",econ.EM
30666,em,"We offer retrospective and prospective assessments of the Diebold-Yilmaz
connectedness research program, combined with personal recollections of its
development. Its centerpiece in many respects is Diebold and Yilmaz (2014),
around which our discussion is organized.","On the Past, Present, and Future of the Diebold-Yilmaz Approach to Dynamic Network Connectedness",2022-11-08 14:54:36,"Francis X. Diebold, Kamil Yilmaz","http://arxiv.org/abs/2211.04184v2, http://arxiv.org/pdf/2211.04184v2",econ.EM
30631,em,"Nowadays, patenting activities are essential in converting applied science to
technology in the prevailing innovation model. To gain strategic advantages in
the technological competitions between regions, nations need to leverage the
investments of public and private funds to diversify over all technologies or
specialize in a small number of technologies. In this paper, we investigated
who the leaders are at the regional and assignee levels, how they attained
their leadership positions, and whether they adopted diversification or
specialization strategies, using a dataset of 176,193 patent records on
graphene between 1986 and 2017 downloaded from Derwent Innovation. By applying
a co-clustering method to the IPC subclasses in the patents and using a z-score
method to extract keywords from their titles and abstracts, we identified seven
graphene technology areas emerging in the sequence synthesis - composites -
sensors - devices - catalyst - batteries - water treatment. We then examined
the top regions in their investment preferences and their changes in rankings
over time and found that they invested in all seven technology areas. In
contrast, at the assignee level, some were diversified while others were
specialized. We found that large entities diversified their portfolios across
multiple technology areas, while small entities specialized around their core
competencies. In addition, we found that universities had higher entropy values
than corporations on average, leading us to the hypothesis that corporations
file, buy, or sell patents to enable product development. In contrast,
universities focus only on licensing their patents. We validated this
hypothesis through an aggregate analysis of reassignment and licensing and a
more detailed analysis of three case studies - SAMSUNG, RICE UNIVERSITY, and
DYSON.",Strategic differences between regional investments into graphene technology and how corporations and universities manage patent portfolios,2022-08-07 16:28:07,"Ai Linh Nguyen, Wenyuan Liu, Khiam Aik Khor, Andrea Nanetti, Siew Ann Cheong","http://arxiv.org/abs/2208.03719v1, http://arxiv.org/pdf/2208.03719v1",cs.DL
30632,em,"Historically, testing if decision-makers obey certain choice axioms using
choice data takes two distinct approaches. The 'functional' approach observes
and tests the entire 'demand' or 'choice' function, whereas the 'revealed
preference (RP)' approach constructs inequalities to test finite choices. I
demonstrate that a statistical recasting of the revealed preference approach enables uniting both
approaches. Specifically, I construct a computationally efficient algorithm to
output one-sided statistical tests of choice data from functional
characterizations of axiomatic behavior, thus linking statistical and RP
testing. An application to weakly separable preferences, where RP
characterizations are provably NP-Hard, demonstrates the approach's merit. I
also show that without assuming monotonicity, all restrictions disappear.
Hence, any ability to resolve axiomatic behavior relies on the monotonicity
assumption.",(Functional)Characterizations vs (Finite)Tests: Partially Unifying Functional and Inequality-Based Approaches to Testing,2022-08-07 17:29:18,Raghav Malhotra,"http://arxiv.org/abs/2208.03737v4, http://arxiv.org/pdf/2208.03737v4",econ.TH
30633,em,"In a linear instrumental variables (IV) setting for estimating the causal
effects of multiple confounded exposure/treatment variables on an outcome, we
investigate the adaptive Lasso method for selecting valid instrumental
variables from a set of available instruments that may contain invalid ones. An
instrument is invalid if it fails the exclusion conditions and enters the model
as an explanatory variable. We extend the results as developed in Windmeijer et
al. (2019) for the single exposure model to the multiple exposures case. In
particular we propose a median-of-medians estimator and show that the
conditions on the minimum number of valid instruments under which this
estimator is consistent for the causal effects are only moderately stronger
than the simple majority rule that applies to the median estimator for the
single exposure case. The adaptive Lasso method which uses the initial
median-of-medians estimator for the penalty weights achieves consistent
selection with oracle properties of the resulting IV estimator. This is
confirmed by some Monte Carlo simulation results. We apply the method to
estimate the causal effects of educational attainment and cognitive ability on
body mass index (BMI) in a Mendelian Randomization setting.",Selecting Valid Instrumental Variables in Linear Models with Multiple Exposure Variables: Adaptive Lasso and the Median-of-Medians Estimator,2022-08-10 14:23:42,"Xiaoran Liang, Eleanor Sanderson, Frank Windmeijer","http://arxiv.org/abs/2208.05278v1, http://arxiv.org/pdf/2208.05278v1",stat.ME
30634,em,"One challenge in the estimation of financial market agent-based models
(FABMs) is to infer reliable insights using numerical simulations validated by
only a single observed time series. Ergodicity (besides stationarity) is a
strong precondition for any estimation, however it has not been systematically
explored and is often simply presumed. For finite-sample lengths and limited
computational resources empirical estimation always takes place in
pre-asymptopia. Thus broken ergodicity must be considered the rule, but it
remains largely unclear how to deal with the remaining uncertainty in
non-ergodic observables. Here we show how an understanding of the ergodic
properties of moment functions can help to improve the estimation of (F)ABMs.
We run Monte Carlo experiments and study the convergence behaviour of moment
functions of two prototype models. We find infeasibly-long convergence times
for most. Choosing an efficient mix of ensemble size and simulated time length
guided our estimation and might help in general.",Time is limited on the road to asymptopia,2022-08-17 12:15:07,"Ivonne Schwartz, Mark Kirstein","http://arxiv.org/abs/2208.08169v1, http://arxiv.org/pdf/2208.08169v1",econ.EM
30635,em,"This paper introduces a matrix quantile factor model for matrix-valued data
with a low-rank structure. We estimate the row and column factor spaces via
minimizing the empirical check loss function over all panels. We show the
estimates converge at rate $1/\min\{\sqrt{p_1p_2}, \sqrt{p_2T},$
$\sqrt{p_1T}\}$ in average Frobenius norm, where $p_1$, $p_2$ and $T$ are the
row dimensionality, column dimensionality and length of the matrix sequence.
This rate is faster than that of the quantile estimates via ``flattening"" the
matrix model into a large vector model. Smoothed estimates are given and their
central limit theorems are derived under some mild conditions. We provide three
consistent criteria to determine the pair of row and column factor numbers.
Extensive simulation studies and an empirical study justify our theory.",Matrix Quantile Factor Model,2022-08-18 11:15:56,"Xin-Bing Kong, Yong-Xin Liu, Long Yu, Peng Zhao","http://arxiv.org/abs/2208.08693v2, http://arxiv.org/pdf/2208.08693v2",stat.ME
30636,em,"pystacked implements stacked generalization (Wolpert, 1992) for regression
and binary classification via Python's scikit-learn. Stacking combines multiple
supervised machine learners -- the ""base"" or ""level-0"" learners -- into a
single learner. The currently supported base learners include regularized
regression, random forest, gradient boosted trees, support vector machines, and
feed-forward neural nets (multi-layer perceptron). pystacked can also be used
as a `regular' machine learning program to fit a single base learner and,
thus, provides an easy-to-use API for scikit-learn's machine learning
algorithms.",pystacked: Stacking generalization and machine learning in Stata,2022-08-23 15:03:04,"Achim Ahrens, Christian B. Hansen, Mark E. Schaffer","http://arxiv.org/abs/2208.10896v2, http://arxiv.org/pdf/2208.10896v2",econ.EM
30637,em,"Large Bayesian vector autoregressions with various forms of stochastic
volatility have become increasingly popular in empirical macroeconomics. One
main difficulty for practitioners is to choose the most suitable stochastic
volatility specification for their particular application. We develop Bayesian
model comparison methods -- based on marginal likelihood estimators that
combine conditional Monte Carlo and adaptive importance sampling -- to choose
among a variety of stochastic volatility specifications. The proposed methods
can also be used to select an appropriate shrinkage prior on the VAR
coefficients, which is a critical component for avoiding over-fitting in
high-dimensional settings. Using US quarterly data of different dimensions, we
find that both the Cholesky stochastic volatility and factor stochastic
volatility outperform the common stochastic volatility specification. Their
superior performance, however, can mostly be attributed to the more flexible
priors that accommodate cross-variable shrinkage.",Comparing Stochastic Volatility Specifications for Large Bayesian VARs,2022-08-28 20:28:56,Joshua C. C. Chan,"http://arxiv.org/abs/2208.13255v1, http://arxiv.org/pdf/2208.13255v1",econ.EM
30638,em,"The regression discontinuity (RD) design is widely used for program
evaluation with observational data. The primary focus of the existing
literature has been the estimation of the local average treatment effect at the
existing treatment cutoff. In contrast, we consider policy learning under the
RD design. Because the treatment assignment mechanism is deterministic,
learning better treatment cutoffs requires extrapolation. We develop a robust
optimization approach to finding optimal treatment cutoffs that improve upon
the existing ones. We first decompose the expected utility into
point-identifiable and unidentifiable components. We then propose an efficient
doubly-robust estimator for the identifiable parts. To account for the
unidentifiable components, we leverage the existence of multiple cutoffs that
are common under the RD design. Specifically, we assume that the heterogeneity
in the conditional expectations of potential outcomes across different groups
vary smoothly along the running variable. Under this assumption, we minimize
the worst case utility loss relative to the status quo policy. The resulting
new treatment cutoffs have a safety guarantee that they will not yield a worse
overall outcome than the existing cutoffs. Finally, we establish the asymptotic
regret bounds for the learned policy using semi-parametric efficiency theory.
We apply the proposed methodology to empirical and simulated data sets.",Safe Policy Learning under Regression Discontinuity Designs with Multiple Cutoffs,2022-08-29 04:27:23,"Yi Zhang, Eli Ben-Michael, Kosuke Imai","http://arxiv.org/abs/2208.13323v3, http://arxiv.org/pdf/2208.13323v3",stat.ME
30639,em,"The switchback is an experimental design that measures treatment effects by
repeatedly turning an intervention on and off for a whole system. Switchback
experiments are a robust way to overcome cross-unit spillover effects; however,
they are vulnerable to bias from temporal carryovers. In this paper, we
consider properties of switchback experiments in Markovian systems that mix at
a geometric rate. We find that, in this setting, standard switchback designs
suffer considerably from carryover bias: Their estimation error decays as
$T^{-1/3}$ in terms of the experiment horizon $T$, whereas in the absence of
carryovers a faster rate of $T^{-1/2}$ would have been possible. We also show,
however, that judicious use of burn-in periods can considerably improve the
situation, and enables errors that decay as $\log(T)^{1/2}T^{-1/2}$. Our formal
results are mirrored in an empirical evaluation.",Switchback Experiments under Geometric Mixing,2022-09-01 06:29:58,"Yuchen Hu, Stefan Wager","http://arxiv.org/abs/2209.00197v1, http://arxiv.org/pdf/2209.00197v1",stat.ME
30640,em,"Timely characterizations of risks in economic and financial systems play an
essential role in both economic policy and private sector decisions. However,
the informational content of low-frequency variables and the results from
conditional mean models provide only limited evidence to investigate this
problem. We propose a novel mixed-frequency quantile vector autoregression
(MF-QVAR) model to address this issue. Inspired by the univariate Bayesian
quantile regression literature, the multivariate asymmetric Laplace
distribution is exploited under the Bayesian framework to form the likelihood.
A data augmentation approach coupled with a precision sampler efficiently
estimates the missing low-frequency variables at higher frequencies under the
state-space representation. The proposed methods allow us to nowcast
conditional quantiles for multiple variables of interest and to derive
quantile-related risk measures at high frequency, thus enabling timely policy
interventions. The main application of the model is to nowcast conditional
quantiles of the US GDP, which is strictly related to the quantification of
Value-at-Risk and the Expected Shortfall.",Bayesian Mixed-Frequency Quantile Vector Autoregression: Eliciting tail risks of Monthly US GDP,2022-09-05 14:17:40,"Matteo Iacopini, Aubrey Poon, Luca Rossini, Dan Zhu","http://arxiv.org/abs/2209.01910v1, http://arxiv.org/pdf/2209.01910v1",econ.EM
30641,em,"The academic evaluation of the publication record of researchers is relevant
for identifying talented candidates for promotion and funding. A key tool for
this is the use of the indexes provided by Web of Science and SCOPUS, costly
databases whose cost often exceeds the means of academic institutions in
many parts of the world. We show here how the data in one of these databases can be
used to infer the main index of the other one. Methods of data analysis used in
Machine Learning allow us to select just a few of the hundreds of variables in
a database, which later are used in a panel regression, yielding a good
approximation to the main index in the other database. Since the information of
SCOPUS can be freely scraped from the Web, this approach allows one to infer at no
cost the Impact Factor of publications, the main index used in research
assessments around the globe.",An Assessment Tool for Academic Research Managers in the Third World,2022-09-07 17:59:25,"Fernando Delbianco, Andres Fioriti, Fernando Tohmé","http://arxiv.org/abs/2209.03199v1, http://arxiv.org/pdf/2209.03199v1",econ.EM
30642,em,"We propose a method for estimation and inference for bounds for heterogeneous
causal effect parameters in general sample selection models where the treatment
can affect whether an outcome is observed and no exclusion restrictions are
available. The method provides conditional effect bounds as functions of policy
relevant pre-treatment variables. It allows for conducting valid statistical
inference on the unidentified conditional effects. We use a flexible
debiased/double machine learning approach that can accommodate non-linear
functional forms and high-dimensional confounders. Easily verifiable high-level
conditions for estimation, misspecification robust confidence intervals, and
uniform confidence bands are provided as well. Re-analyzing data from a large
scale field experiment on Facebook, we find significant depolarization effects
of counter-attitudinal news subscription nudges. The effect bounds are highly
heterogeneous and suggest strong depolarization effects for moderates,
conservatives, and younger users.",Heterogeneous Treatment Effect Bounds under Sample Selection with an Application to the Effects of Social Media on Political Polarization,2022-09-09 17:42:03,Phillip Heiler,"http://arxiv.org/abs/2209.04329v3, http://arxiv.org/pdf/2209.04329v3",econ.EM
30643,em,"In this paper, we consider testing the martingale difference hypothesis for
high-dimensional time series. Our test is built on the sum of squares of the
element-wise max-norm of the proposed matrix-valued nonlinear dependence
measure at different lags. To conduct the inference, we approximate the null
distribution of our test statistic by Gaussian approximation and provide a
simulation-based approach to generate critical values. The asymptotic behavior
of the test statistic under the alternative is also studied. Our approach is
nonparametric as the null hypothesis only assumes the time series concerned is
martingale difference without specifying any parametric forms of its
conditional moments. As an advantage of Gaussian approximation, our test is
robust to the cross-series dependence of unknown magnitude. To the best of our
knowledge, this is the first valid test for the martingale difference
hypothesis that not only allows for large dimension but also captures nonlinear
serial dependence. The practical usefulness of our test is illustrated via
simulation and a real data analysis. The test is implemented in a user-friendly
R-function.",Testing the martingale difference hypothesis in high dimension,2022-09-11 05:59:39,"Jinyuan Chang, Qing Jiang, Xiaofeng Shao","http://dx.doi.org/10.1016/j.jeconom.2022.09.001, http://arxiv.org/abs/2209.04770v2, http://arxiv.org/pdf/2209.04770v2",econ.EM
30644,em,"We constructed a frequently updated, near-real-time global power generation
dataset, Carbon Monitor-Power, covering the period since January 2016 at national levels with
near-global coverage and hourly-to-daily time resolution. The data presented
here are collected from 37 countries across all continents for eight source
groups, including three types of fossil sources (coal, gas, and oil), nuclear
energy and four groups of renewable energy sources (solar energy, wind energy,
hydro energy and other renewables including biomass, geothermal, etc.). The
global near-real-time power dataset shows the dynamics of the global power
system, including its hourly, daily, weekly and seasonal patterns as influenced
by daily periodical activities, weekends, seasonal cycles, regular and
irregular events (e.g., holidays) and extreme events (e.g., the COVID-19
pandemic). The Carbon Monitor-Power dataset reveals that the COVID-19 pandemic
caused strong disruptions in some countries (e.g., China and India), leading to
a temporary or long-lasting shift to low carbon intensity, while it had only a
limited impact in some other countries (e.g., Australia). This dataset offers a
large range of opportunities for power-related scientific research and
policy-making.",Carbon Monitor-Power: near-real-time monitoring of global power generation on hourly to daily scales,2022-09-13 18:35:34,"Biqing Zhu, Xuanren Song, Zhu Deng, Wenli Zhao, Da Huo, Taochun Sun, Piyu Ke, Duo Cui, Chenxi Lu, Haiwang Zhong, Chaopeng Hong, Jian Qiu, Steven J. Davis, Pierre Gentine, Philippe Ciais, Zhu Liu","http://arxiv.org/abs/2209.06086v1, http://arxiv.org/pdf/2209.06086v1",physics.data-an
30645,em,"We estimate the causal effect of shared e-scooter services on traffic
accidents by exploiting variation in availability of e-scooter services,
induced by the staggered rollout across 93 cities in six countries.
Police-reported accidents in the average month increased by around 8.2% after
shared e-scooters were introduced. For cities with limited cycling
infrastructure and where mobility relies heavily on cars, estimated effects are
largest. In contrast, no effects are detectable in cities with high bike-lane
density. This heterogeneity suggests that public policy can play a crucial role
in mitigating accidents related to e-scooters and, more generally, to changes
in urban mobility.",Do shared e-scooter services cause traffic accidents? Evidence from six European countries,2022-09-14 21:32:34,"Cannon Cloud, Simon Heß, Johannes Kasinger","http://arxiv.org/abs/2209.06870v2, http://arxiv.org/pdf/2209.06870v2",econ.EM
30646,em,"The objective of this paper is to identify and analyze the response actions
of a set of players embedded in sub-networks in the context of interaction and
learning. We characterize strategic network formation as a static game of
interactions where players maximize their utility depending on the connections
they establish and multiple interdependent actions that permit group-specific
parameters of players. It is challenging to apply this type of model to
real-life scenarios for two reasons: The computation of the Bayesian Nash
Equilibrium is highly demanding and the identification of social influence
requires the use of excluded variables that are oftentimes unavailable. Based
on the theoretical proposal, we propose a set of simultaneous equations and discuss
the identification of the social interaction effect employing a multi-modal
network autoregressive model.",A Structural Model for Detecting Communities in Networks,2022-09-17 20:58:01,Alex Centeno,"http://arxiv.org/abs/2209.08380v2, http://arxiv.org/pdf/2209.08380v2",econ.TH
30647,em,"We propose a flexible stochastic block model for multi-layer networks, where
layer-specific hidden Markov-chain processes drive the changes in the formation
of communities. The changes in block membership of a node in a given layer may
be influenced by its own past membership in other layers. This allows for
clustering overlap, clustering decoupling, or more complex relationships
between layers including settings of unidirectional, or bidirectional, block
causality. We cope with the overparameterization issue of a saturated
specification by assuming a Multi-Laplacian prior distribution within a
Bayesian framework. Data augmentation and Gibbs sampling are used to make the
inference problem more tractable. Through simulations, we show that the
standard linear models are not able to detect the block causality under the
great majority of scenarios. As an application to trade networks, we show that
our model provides a unified framework including community detection and the
gravity equation. The model is used to study the causality between trade
agreements and trade by looking at the global topological properties of the
networks, as opposed to the main existing approaches, which focus on local
bilateral relationships. We are able to provide new evidence of unidirectional
causality from the free trade agreements network to the non-observable trade
barriers network structure for 159 countries in the period 1995-2017.",A Dynamic Stochastic Block Model for Multi-Layer Networks,2022-09-20 00:23:17,"Ovielt Baltodano López, Roberto Casarin","http://arxiv.org/abs/2209.09354v1, http://arxiv.org/pdf/2209.09354v1",stat.ME
30648,em,"The global financial crisis and Covid recession have renewed discussion
concerning trend-cycle discovery in macroeconomic data, and boosting has
recently upgraded the popular HP filter to a modern machine learning device
suited to data-rich and rapid computational environments. This paper sheds
light on its versatility in trend-cycle determination, explaining in a simple
manner both HP filter smoothing and the consistency delivered by boosting for
general trend detection. Applied to a universe of time series in FRED
databases, boosting outperforms other methods in timely capturing downturns at
crises and recoveries that follow. With its wide applicability the boosted HP
filter is a useful automated machine learning addition to the macroeconometric
toolkit.",The boosted HP filter is more general than you might think,2022-09-20 18:58:37,"Ziwei Mei, Peter C. B. Phillips, Zhentao Shi","http://arxiv.org/abs/2209.09810v1, http://arxiv.org/pdf/2209.09810v1",econ.EM
30649,em,"The COVID-19 pandemic dramatically catalyzed the proliferation of e-shopping.
The dramatic growth of e-shopping will undoubtedly cause significant impacts on
travel demand. As a result, transportation modellers' ability to model
e-shopping demand is becoming increasingly important. This study developed
models to predict households' weekly home delivery frequencies. We used both
classical econometric and machine learning techniques to obtain the best model.
It is found that socioeconomic factors such as having an online grocery
membership, household members' average age, the percentage of male household
members, the number of workers in the household and various land use factors
influence home delivery demand. This study also compared the interpretations
and performances of the machine learning models and the classical econometric
model. Agreement is found in the variables' effects identified through the
machine learning and econometric models. However, with similar recall accuracy,
the ordered probit model, a classical econometric model, can accurately predict
the aggregate distribution of household delivery demand. In contrast, both
machine learning models failed to match the observed distribution.",Modelling the Frequency of Home Deliveries: An Induced Travel Demand Contribution of Aggrandized E-shopping in Toronto during COVID-19 Pandemics,2022-09-22 00:18:25,"Yicong Liu, Kaili Wang, Patrick Loa, Khandker Nurul Habib","http://arxiv.org/abs/2209.10664v1, http://arxiv.org/pdf/2209.10664v1",econ.EM
30650,em,"In this paper we revisit some common recommendations regarding the analysis
of matched-pair and stratified experimental designs in the presence of
attrition. Our main objective is to clarify a number of well-known claims about
the practice of dropping pairs with an attrited unit when analyzing
matched-pair designs. Contradictory advice appears in the literature about
whether or not dropping pairs is beneficial or harmful, and stratifying into
larger groups has been recommended as a resolution to the issue. To address
these claims, we derive the estimands obtained from the difference-in-means
estimator in a matched-pair design both when the observations from pairs with
an attrited unit are retained and when they are dropped. We find limited
evidence to support the claims that dropping pairs helps recover the average
treatment effect, but we find that it may potentially help in recovering a
convex weighted average of conditional average treatment effects. We report
similar findings for stratified designs when studying the estimands obtained
from a regression of outcomes on treatment with and without strata fixed
effects.",Revisiting the Analysis of Matched-Pair and Stratified Experiments in the Presence of Attrition,2022-09-23 23:06:18,"Yuehao Bai, Meng Hsuan Hsieh, Jizhou Liu, Max Tabord-Meehan","http://arxiv.org/abs/2209.11840v6, http://arxiv.org/pdf/2209.11840v6",econ.EM
30651,em,"Big data analytics has opened new avenues in economic research, but the
challenge of analyzing datasets with tens of millions of observations is
substantial. Conventional econometric methods based on extreme estimators
require large amounts of computing resources and memory, which are often not
readily available. In this paper, we focus on linear quantile regression
applied to ""ultra-large"" datasets, such as U.S. decennial censuses. A fast
inference framework is presented, utilizing stochastic subgradient descent
(S-subGD) updates. The inference procedure handles cross-sectional data
sequentially: (i) updating the parameter estimate with each incoming ""new
observation"", (ii) aggregating it as a $\textit{Polyak-Ruppert}$ average, and
(iii) computing a pivotal statistic for inference using only a solution path.
The methodology draws from time-series regression to create an asymptotically
pivotal statistic through random scaling. Our proposed test statistic is
calculated in a fully online fashion and critical values are calculated without
resampling. We conduct extensive numerical studies to showcase the
computational merits of our proposed inference. For inference problems as large
as $(n, d) \sim (10^7, 10^3)$, where $n$ is the sample size and $d$ is the
number of regressors, our method generates new insights, surpassing current
inference methods in computation. Our method specifically reveals trends in the
gender gap in the U.S. college wage premium using millions of observations,
while controlling over $10^3$ covariates to mitigate confounding effects.",Fast Inference for Quantile Regression with Tens of Millions of Observations,2022-09-29 04:39:53,"Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin","http://arxiv.org/abs/2209.14502v5, http://arxiv.org/pdf/2209.14502v5",econ.EM
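The core update in the entry above can be illustrated with a stripped-down sketch (not the authors' implementation): each observation is used once, the running coefficient vector is updated with a subgradient of the check loss, and the Polyak-Ruppert average of the iterates is reported. The paper's random-scaling inference, which builds a pivotal statistic from the solution path, is omitted here.

```python
# Stochastic subgradient descent for linear quantile regression with averaging.
import numpy as np

def sgd_quantile(X, y, tau, lr0=0.5, power=0.75):
    n, d = X.shape
    beta = np.zeros(d)
    beta_bar = np.zeros(d)
    for t in range(n):
        x_t, y_t = X[t], y[t]
        grad_sign = tau - float(y_t - x_t @ beta < 0)
        beta = beta + lr0 * (t + 1) ** (-power) * grad_sign * x_t
        beta_bar += (beta - beta_bar) / (t + 1)      # online Polyak-Ruppert average
    return beta_bar

rng = np.random.default_rng(0)
n, d = 200_000, 5
X = np.c_[np.ones(n), rng.normal(size=(n, d - 1))]
beta_true = np.array([1.0, 0.5, -0.5, 0.25, 0.0])
y = X @ beta_true + rng.normal(size=n)     # symmetric noise: the median regression target is beta_true
print(sgd_quantile(X, y, tau=0.5))
```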
30652,em,"Statistical inference under market equilibrium effects has attracted
increasing attention recently. In this paper we focus on the specific case of
linear Fisher markets. They have been widely used in fair resource allocation of
food/blood donations and budget management in large-scale Internet ad auctions.
In resource allocation, it is crucial to quantify the variability of the
resource received by the agents (such as blood banks and food banks) in
addition to fairness and efficiency properties of the systems. For ad auction
markets, it is important to establish statistical properties of the platform's
revenues in addition to their expected values. To this end, we propose a
statistical framework based on the concept of infinite-dimensional Fisher
markets. In our framework, we observe a market formed by a finite number of
items sampled from an underlying distribution (the ""observed market"") and aim
to infer several important equilibrium quantities of the underlying long-run
market. These equilibrium quantities include individual utilities, social
welfare, and pacing multipliers. Through the lens of sample average
approximation (SAA), we derive a collection of statistical results and show
that the observed market provides useful statistical information of the
long-run market. In other words, the equilibrium quantities of the observed
market converge to the true ones of the long-run market with strong statistical
guarantees. These include consistency, finite sample bounds, asymptotics, and
confidence. As an extension, we discuss revenue inference in quasilinear Fisher
markets.",Statistical Inference for Fisher Market Equilibrium,2022-09-29 18:45:47,"Luofeng Liao, Yuan Gao, Christian Kroer","http://arxiv.org/abs/2209.15422v1, http://arxiv.org/pdf/2209.15422v1",econ.EM
30653,em,"This paper identifies the probability of causation when there is sample
selection. We show that the probability of causation is partially identified
for individuals who are always observed regardless of treatment status and
derive sharp bounds under three increasingly restrictive sets of assumptions.
The first set imposes an exogenous treatment and a monotone sample selection
mechanism. To tighten these bounds, the second set also imposes the monotone
treatment response assumption, while the third set additionally imposes a
stochastic dominance assumption. Finally, we use experimental data from the
Colombian job training program J\'ovenes en Acci\'on to empirically illustrate
our approach's usefulness. We find that, among always-employed women, at least
18% and at most 24% transitioned to the formal labor market because of the
program.",Probability of Causation with Sample Selection: A Reanalysis of the Impacts of Jóvenes en Acción on Formality,2022-10-05 01:20:58,"Vitor Possebom, Flavio Riva","http://arxiv.org/abs/2210.01938v4, http://arxiv.org/pdf/2210.01938v4",econ.EM
30654,em,"We study the problem of selecting the best $m$ units from a set of $n$ as $m
/ n \to \alpha \in (0, 1)$, where noisy, heteroskedastic measurements of the
units' true values are available and the decision-maker wishes to maximize the
average true value of the units selected. Given a parametric prior
distribution, the empirical Bayes decision rule incurs $O_p(n^{-1})$ regret
relative to the Bayesian oracle that knows the true prior. More generally, if
the error in the estimated prior is of order $O_p(r_n)$, regret is
$O_p(r_n^2)$. In this sense selecting the best units is easier than estimating
their values. We show this regret bound is sharp in the parametric case, by
giving an example in which it is attained. Using priors calibrated from a
dataset of over four thousand internet experiments, we find that empirical
Bayes methods perform well in practice for detecting the best treatments given
only a modest number of experiments.",Empirical Bayes Selection for Value Maximization,2022-10-08 07:01:42,"Dominic Coey, Kenneth Hung","http://arxiv.org/abs/2210.03905v2, http://arxiv.org/pdf/2210.03905v2",stat.ME
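A minimal sketch of the kind of empirical Bayes selection rule described above, under an assumed normal prior fitted by the method of moments; the paper's parametric priors, regret analysis, and internet-experiment calibration are not reproduced, and the simulated numbers below are purely illustrative.

```python
# Shrink noisy, heteroskedastic estimates toward an estimated prior and pick the top m.
import numpy as np

def eb_select(estimates, std_errors, m):
    mu = estimates.mean()                                 # prior mean (method of moments)
    tau2 = max(estimates.var() - np.mean(std_errors**2), 0.0)  # prior variance
    shrink = tau2 / (tau2 + std_errors**2)
    post_means = mu + shrink * (estimates - mu)           # normal-normal posterior means
    return np.argsort(post_means)[::-1][:m], post_means

rng = np.random.default_rng(0)
n, m = 4000, 40
theta = rng.normal(0.0, 0.02, size=n)        # true treatment values
se = rng.uniform(0.01, 0.05, size=n)         # heteroskedastic noise levels
est = theta + rng.normal(0.0, se)
chosen, _ = eb_select(est, se, m)
print("average true value of selected units:", theta[chosen].mean())
```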
30655,em,"This research aims at building a multivariate statistical model for assessing
users' perceptions of acceptance of ride-sharing services in Dhaka City. A
structured questionnaire is developed based on the users' reported attitudes
and perceived risks. A total of 350 normally distributed responses are
collected from ride-sharing service users and stakeholders of Dhaka City.
Respondents are interviewed to express their experience and opinions on
ride-sharing services through the stated preference questionnaire. Structural
Equation Modeling (SEM) is used to validate the research hypotheses.
Statistical parameters and several trials are used to choose the best SEM. The
responses are also analyzed using the Relative Importance Index (RII) method,
validating the chosen SEM. Inside SEM, the quality of ride-sharing services is
measured by two latent and eighteen observed variables. The latent variable
'safety & security' is more influential than 'service performance' on the
overall quality of service index. Under 'safety & security' the other two
variables, i.e., 'account information' and 'personal information' are found to
be the most significant that impact the decision to share rides with others. In
addition, 'risk of conflict' and 'possibility of accident' are identified using
the perception model as the lowest contributing variables. Factor analysis
reveals the suitability and reliability of the proposed SEM. Identifying the
influential parameters in this study will help the service providers understand and
improve the quality of ride-sharing service for users.",A Structural Equation Modeling Approach to Understand User's Perceptions of Acceptance of Ride-Sharing Services in Dhaka City,2022-10-08 21:47:53,"Md. Mohaimenul Islam Sourav, Mohammed Russedul Islam, H M Imran Kays, Md. Hadiuzzaman","http://arxiv.org/abs/2210.04086v2, http://arxiv.org/pdf/2210.04086v2",stat.AP
30656,em,"We consider estimation and inference for a regression coefficient in panels
with interactive fixed effects (i.e., with a factor structure). We show that
previously developed estimators and confidence intervals (CIs) might be heavily
biased and size-distorted when some of the factors are weak. We propose
estimators with improved rates of convergence and bias-aware CIs that are
uniformly valid regardless of whether the factors are strong or not. Our
approach applies the theory of minimax linear estimation to form a debiased
estimate using a nuclear norm bound on the error of an initial estimate of the
interactive fixed effects. We use the obtained estimate to construct a
bias-aware CI taking into account the remaining bias due to weak factors. In
Monte Carlo experiments, we find a substantial improvement over conventional
approaches when factors are weak, with little cost to estimation error when
factors are strong.",Robust Estimation and Inference in Panels with Interactive Fixed Effects,2022-10-13 03:32:58,"Timothy B. Armstrong, Martin Weidner, Andrei Zeleneev","http://arxiv.org/abs/2210.06639v2, http://arxiv.org/pdf/2210.06639v2",econ.EM
30657,em,"This paper presents a fast algorithm for estimating hidden states of Bayesian
state space models. The algorithm is a variation of amortized simulation-based
inference algorithms, where a large number of artificial datasets are generated
at the first stage, and then a flexible model is trained to predict the
variables of interest. In contrast to those proposed earlier, the procedure
described in this paper makes it possible to train estimators for hidden states
by concentrating only on certain characteristics of the marginal posterior
distributions and introducing inductive bias. Illustrations using the examples
of the stochastic volatility model, nonlinear dynamic stochastic general
equilibrium model, and seasonal adjustment procedure with breaks in seasonality
show that the algorithm has sufficient accuracy for practical use. Moreover,
after pretraining, which takes several hours, finding the posterior
distribution for any dataset takes from hundredths to tenths of a second.",Fast Estimation of Bayesian State Space Models Using Amortized Simulation-Based Inference,2022-10-13 19:37:05,"Ramis Khabibullin, Sergei Seleznev","http://arxiv.org/abs/2210.07154v1, http://arxiv.org/pdf/2210.07154v1",econ.EM
30658,em,"We propose a new method for generating random correlation matrices that makes
it simple to control both location and dispersion. The method is based on a
vector parameterization, $\gamma = g(C)$, which maps any distribution on $R^d$, $d =
n(n-1)/2$, to a distribution on the space of non-singular $n \times n$ correlation
matrices. Correlation matrices with certain properties, such as being
well-conditioned, having block structures, and having strictly positive
elements, are simple to generate. We compare the new method with existing
methods.",A New Method for Generating Random Correlation Matrices,2022-10-15 03:30:24,"Ilya Archakov, Peter Reinhard Hansen, Yiyao Luo","http://arxiv.org/abs/2210.08147v1, http://arxiv.org/pdf/2210.08147v1",econ.EM
30659,em,"We develop a methodology for conducting inference on extreme quantiles of
unobserved individual heterogeneity (heterogeneous coefficients, heterogeneous
treatment effects, etc.) in a panel data or meta-analysis setting. Inference in
such settings is challenging: only noisy estimates of unobserved heterogeneity
are available, and approximations based on the central limit theorem work
poorly for extreme quantiles. For this situation, under weak assumptions we
derive an extreme value theorem and an intermediate order theorem for noisy
estimates and appropriate rate and moment conditions. Both theorems are then
used to construct confidence intervals for extremal quantiles. The intervals
are simple to construct and require no optimization. Inference based on the
intermediate order theorem involves a novel self-normalized intermediate order
theorem. In simulations, our extremal confidence intervals have favorable
coverage properties in the tail. Our methodology is illustrated with an
application to firm productivity in denser and less dense areas.",Inference on Extreme Quantiles of Unobserved Individual Heterogeneity,2022-10-16 15:53:50,Vladislav Morozov,"http://arxiv.org/abs/2210.08524v3, http://arxiv.org/pdf/2210.08524v3",econ.EM
30660,em,"Variational Bayes methods are a potential scalable estimation approach for
state space models. However, existing methods are inaccurate or computationally
infeasible for many state space models. This paper proposes a variational
approximation that is accurate and fast for any model with a closed-form
measurement density function and a state transition distribution within the
exponential family of distributions. We show that our method can accurately and
quickly estimate a multivariate Skellam stochastic volatility model with
high-frequency tick-by-tick discrete price changes of four stocks, and a
time-varying parameter vector autoregression with a stochastic volatility model
using eight macroeconomic variables.",Efficient variational approximations for state space models,2022-10-20 07:31:49,"Rubén Loaiza-Maya, Didier Nibbering","http://arxiv.org/abs/2210.11010v3, http://arxiv.org/pdf/2210.11010v3",econ.EM
30661,em,"In this paper, we propose a class of low-rank panel quantile regression
models which allow for unobserved slope heterogeneity over both individuals and
time. We estimate the heterogeneous intercept and slope matrices via nuclear
norm regularization followed by sample splitting, row- and column-wise quantile
regressions and debiasing. We show that the estimators of the factors and
factor loadings associated with the intercept and slope matrices are
asymptotically normally distributed. In addition, we develop two specification
tests: one for the null hypothesis that the slope coefficient is a constant
over time and/or individuals under the case that true rank of slope matrix
equals one, and the other for the null hypothesis that the slope coefficient
exhibits an additive structure under the case that the true rank of slope
matrix equals two. We illustrate the finite sample performance of estimation
and inference via Monte Carlo simulations and real datasets.",Low-rank Panel Quantile Regression: Estimation and Inference,2022-10-20 10:34:15,"Yiren Wang, Liangjun Su, Yichong Zhang","http://arxiv.org/abs/2210.11062v1, http://arxiv.org/pdf/2210.11062v1",econ.EM
30662,em,"The generalized least square (GLS) is one of the most basic tools in
regression analyses. A major issue in implementing the GLS is estimation of the
conditional variance function of the error term, which typically requires a
restrictive functional form assumption for parametric estimation or tuning
parameters for nonparametric estimation. In this paper, we propose an
alternative approach to estimate the conditional variance function under
nonparametric monotonicity constraints by utilizing the isotonic regression
method. Our GLS estimator is shown to be asymptotically equivalent to the
infeasible GLS estimator with knowledge of the conditional error variance, and
is free from tuning parameters, not only for point estimation but also for
interval estimation or hypothesis testing. Our analysis extends the scope of
the isotonic regression method by showing that the isotonic estimates, possibly
with generated variables, can be employed as first stage estimates to be
plugged in for semiparametric objects. Simulation studies illustrate excellent
finite sample performances of the proposed method. As an empirical example, we
revisit Acemoglu and Restrepo's (2017) study on the relationship between an
aging population and economic growth to illustrate how our GLS estimator
effectively reduces estimation errors.",GLS Under Monotone Heteroskedasticity,2022-10-25 12:04:54,"Yoichi Arai, Taisuke Otsu, Mengshan Xu","http://arxiv.org/abs/2210.13843v1, http://arxiv.org/pdf/2210.13843v1",econ.EM
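The entry above can be illustrated with a short feasible-GLS sketch (assumptions, not the authors' code): the conditional error variance is estimated by isotonic regression of squared OLS residuals on a scalar covariate assumed to drive the heteroskedasticity monotonically, and the observations are then reweighted. Variable names and the simulated design are illustrative.

```python
# Feasible GLS with an isotonic (tuning-free) estimate of the conditional variance.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n = 5000
z = rng.uniform(0, 2, size=n)                        # variance increases in z
X = np.c_[np.ones(n), z, rng.normal(size=n)]
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(size=n) * (0.5 + z)   # monotone conditional std. dev.

# Step 1: OLS residuals.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid2 = (y - X @ beta_ols) ** 2

# Step 2: isotonic estimate of the conditional variance.
var_hat = IsotonicRegression(increasing=True, y_min=1e-6).fit_transform(z, resid2)

# Step 3: weighted least squares with weights 1 / var_hat.
w = 1.0 / var_hat
Xw = X * w[:, None]
beta_gls = np.linalg.solve(Xw.T @ X, Xw.T @ y)
print(beta_gls)
```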
30663,em,"Event Studies (ES) are statistical tools that assess whether a particular
event of interest has caused changes in the level of one or more relevant time
series. We are interested in ES applied to multivariate time series
characterized by high spatial (cross-sectional) and temporal dependence. We
pursue two goals. First, we propose to extend the existing taxonomy on ES,
mainly deriving from the financial field, by generalizing the underlying
statistical concepts and then adapting them to the time series analysis of
airborne pollutant concentrations. Second, we address the spatial
cross-sectional dependence by adopting a twofold adjustment. Initially, we use
a linear mixed spatio-temporal regression model (HDGM) to estimate the
relationship between the response variable and a set of exogenous factors,
while accounting for the spatio-temporal dynamics of the observations. Later,
we apply a set of sixteen ES test statistics, both parametric and
nonparametric, some of which directly adjusted for cross-sectional dependence.
We apply ES to evaluate the impact on NO2 concentrations generated by the
lockdown restrictions adopted in the Lombardy region (Italy) during the
COVID-19 pandemic in 2020. The HDGM model distinctly reveals the level shift
caused by the event of interest, while reducing the volatility and isolating
the spatial dependence of the data. Moreover, all the test statistics
unanimously suggest that the lockdown restrictions generated significant
reductions in the average NO2 concentrations.",Spatio-temporal Event Studies for Air Quality Assessment under Cross-sectional Dependence,2022-10-24 22:10:20,"Paolo Maranzano, Matteo Maria Pelagatti","http://dx.doi.org/10.1007/s13253-023-00564-z, http://arxiv.org/abs/2210.17529v1, http://arxiv.org/pdf/2210.17529v1",stat.AP
30664,em,"Despite the popularity of factor models with sparse loading matrices, little
attention has been given to formally address identifiability of these models
beyond standard rotation-based identification such as the positive lower
triangular constraint. To fill this gap, we present a counting rule on the
number of nonzero factor loadings that is sufficient for achieving generic
uniqueness of the variance decomposition in the factor representation. This is
formalized in the framework of sparse matrix spaces and some classical elements
from graph and network theory. Furthermore, we provide a computationally
efficient tool for verifying the counting rule. Our methodology is illustrated
for real data in the context of post-processing posterior draws in Bayesian
sparse factor analysis.",Cover It Up! Bipartite Graphs Uncover Identifiability in Sparse Factor Analysis,2022-11-01 21:01:54,"Darjus Hosszejni, Sylvia Frühwirth-Schnatter","http://arxiv.org/abs/2211.00671v3, http://arxiv.org/pdf/2211.00671v3",econ.EM
30665,em,"To effectively optimize and personalize treatments, it is necessary to
investigate the heterogeneity of treatment effects. With the wide range of
users being treated over many online controlled experiments, the typical
approach of manually investigating each dimension of heterogeneity becomes
overly cumbersome and prone to subjective human biases. We need an efficient
way to search through thousands of experiments with hundreds of target
covariates and hundreds of breakdown dimensions. In this paper, we propose a
systematic paradigm for detecting, surfacing and characterizing heterogeneous
treatment effects. First, we detect if treatment effect variation is present in
an experiment, prior to specifying any breakdowns. Second, we surface the most
relevant dimensions for heterogeneity. Finally, we characterize the
heterogeneity beyond just the conditional average treatment effects (CATE) by
studying the conditional distributions of the estimated individual treatment
effects. We show the effectiveness of our methods using simulated data and
empirical studies.","A Systematic Paradigm for Detecting, Surfacing, and Characterizing Heterogeneous Treatment Effects (HTE)",2022-11-03 04:43:30,"John Cai, Weinan Wang","http://arxiv.org/abs/2211.01547v1, http://arxiv.org/pdf/2211.01547v1",stat.ME
30667,em,"We develop Bayesian neural networks (BNNs) that permit to model generic
nonlinearities and time variation for (possibly large sets of) macroeconomic
and financial variables. From a methodological point of view, we allow for a
general specification of networks that can be applied to either dense or sparse
datasets, and combines various activation functions, a possibly very large
number of neurons, and stochastic volatility (SV) for the error term. From a
computational point of view, we develop fast and efficient estimation
algorithms for the general BNNs we introduce. From an empirical point of view,
we show both with simulated data and with a set of common macro and financial
applications that our BNNs can be of practical use, particularly so for
observations in the tails of the cross-sectional or time series distributions
of the target variables, which makes the method particularly informative for
policy making in uncommon times.",Enhanced Bayesian Neural Networks for Macroeconomics and Finance,2022-11-09 12:10:57,"Niko Hauzenberger, Florian Huber, Karin Klieber, Massimiliano Marcellino","http://arxiv.org/abs/2211.04752v3, http://arxiv.org/pdf/2211.04752v3",econ.EM
30668,em,"This paper studies nonparametric estimation of treatment and spillover
effects using observational data from a single large network. We consider a
model in which interference decays with network distance, which allows for peer
influence in both outcomes and selection into treatment. Under this model, the
total network and covariates of all units constitute sources of confounding, in
contrast to existing work that assumes confounding can be summarized by a
known, low-dimensional function of these objects. We propose to use graph
neural networks to estimate the high-dimensional nuisance functions of a doubly
robust estimator. We establish a network analog of approximate sparsity to
justify the use of shallow architectures.",Unconfoundedness with Network Interference,2022-11-15 04:00:04,"Michael P. Leung, Pantelis Loupos","http://arxiv.org/abs/2211.07823v2, http://arxiv.org/pdf/2211.07823v2",econ.EM
30669,em,"This paper considers nonlinear dynamic models where the main parameter of
interest is a nonnegative matrix characterizing the network (contagion)
effects. This network matrix is usually constrained either by assuming a
limited number of nonzero elements (sparsity), or by considering a reduced rank
approach for nonnegative matrix factorization (NMF). We follow the latter
approach and develop a new probabilistic NMF method. We introduce a new
Identifying Maximum Likelihood (IML) method for consistent estimation of the
identified set of admissible NMFs and derive its asymptotic distribution.
Moreover, we propose a maximum likelihood estimator of the parameter matrix for
a given non-negative rank, derive its asymptotic distribution and the
associated efficiency bound.",Structural Modelling of Dynamic Networks and Identifying Maximum Likelihood,2022-11-22 01:00:23,"Christian Gourieroux, Joann Jasiak","http://arxiv.org/abs/2211.11876v1, http://arxiv.org/pdf/2211.11876v1",econ.EM
30670,em,"This paper provides new insights into the asymptotic properties of the
synthetic control method (SCM). We show that the synthetic control (SC) weight
converges to a limiting weight that minimizes the mean squared prediction risk
of the treatment-effect estimator when the number of pretreatment periods goes
to infinity, and we also quantify the rate of convergence. Observing the link
between the SCM and model averaging, we further establish the asymptotic
optimality of the SC estimator under imperfect pretreatment fit, in the sense
that it achieves the lowest possible squared prediction error among all
possible treatment effect estimators that are based on an average of control
units, such as matching, inverse probability weighting and
difference-in-differences. The asymptotic optimality holds regardless of
whether the number of control units is fixed or divergent. Thus, our results
provide justifications for the SCM in a wide range of applications. The
theoretical results are verified via simulations.",Asymptotic Properties of the Synthetic Control Method,2022-11-22 11:55:07,"Xiaomeng Zhang, Wendun Wang, Xinyu Zhang","http://arxiv.org/abs/2211.12095v1, http://arxiv.org/pdf/2211.12095v1",econ.EM
30671,em,"This paper considers the problem of inference in cluster randomized trials
where treatment status is determined according to a ""matched pairs'' design.
Here, by a cluster randomized experiment, we mean one in which treatment is
assigned at the level of the cluster; by a ""matched pairs'' design we mean that
a sample of clusters is paired according to baseline, cluster-level covariates
and, within each pair, one cluster is selected at random for treatment. We
study the large-sample behavior of a weighted difference-in-means estimator and
derive two distinct sets of results depending on if the matching procedure does
or does not match on cluster size. We then propose a single variance estimator
which is consistent in either regime. Combining these results establishes the
asymptotic exactness of tests based on these estimators. Next, we consider the
properties of two common testing procedures based on t-tests constructed from
linear regressions, and argue that both are generally conservative in our
framework. We additionally study the behavior of a randomization test which
permutes the treatment status for clusters within pairs, and establish its
finite-sample and asymptotic validity for testing specific null hypotheses.
Finally, we propose a covariate-adjusted estimator which adjusts for additional
baseline covariates not used for treatment assignment, and establish conditions
under which such an estimator leads to improvements in precision. A simulation
study confirms the practical relevance of our theoretical results.",Inference in Cluster Randomized Trials with Matched Pairs,2022-11-27 21:10:45,"Yuehao Bai, Jizhou Liu, Azeem M. Shaikh, Max Tabord-Meehan","http://arxiv.org/abs/2211.14903v3, http://arxiv.org/pdf/2211.14903v3",econ.EM
30672,em,"This article proposes a novel Bayesian multivariate quantile regression to
forecast the tail behavior of US macro and financial indicators, where the
homoskedasticity assumption is relaxed to allow for time-varying volatility. In
particular, we exploit the mixture representation of the multivariate
asymmetric Laplace likelihood and the Cholesky-type decomposition of the scale
matrix to introduce stochastic volatility and GARCH processes, and we provide
an efficient MCMC to estimate them. The proposed models outperform the
homoskedastic benchmark mainly when predicting the distribution's tails. We
provide a model combination using a quantile score-based weighting scheme,
which leads to improved performances, notably when no single model uniformly
outperforms the other across quantiles, time, or variables.",Bayesian Multivariate Quantile Regression with alternative Time-varying Volatility Specifications,2022-11-29 14:47:51,"Matteo Iacopini, Francesco Ravazzolo, Luca Rossini","http://arxiv.org/abs/2211.16121v1, http://arxiv.org/pdf/2211.16121v1",econ.EM
30907,em,"This paper provides estimation and inference methods for an identified set's
boundary (i.e., support function) where the selection among a very large number
of covariates is based on modern regularized tools. I characterize the boundary
using a semiparametric moment equation. Combining Neyman-orthogonality and
sample splitting ideas, I construct a root-N consistent, uniformly
asymptotically Gaussian estimator of the boundary and propose a multiplier
bootstrap procedure to conduct inference. I apply this result to the partially
linear model, the partially linear IV model and the average partial derivative
with an interval-valued outcome.",Debiased Machine Learning of Set-Identified Linear Models,2017-12-28 22:04:28,Vira Semenova,"http://arxiv.org/abs/1712.10024v6, http://arxiv.org/pdf/1712.10024v6",stat.ML
30673,em,"Calibration tests based on the probability integral transform (PIT) are
routinely used to assess the quality of univariate distributional forecasts.
However, PIT-based calibration tests for multivariate distributional forecasts
face various challenges. We propose two new types of tests based on proper
scoring rules, which overcome these challenges. They arise from a general
framework for calibration testing in the multivariate case, introduced in this
work. The new tests have good size and power properties in simulations and
solve various problems of existing tests. We apply the tests to forecast
distributions for macroeconomic and financial time series data.",Score-based calibration testing for multivariate forecast distributions,2022-11-29 19:44:36,"Malte Knüppel, Fabian Krüger, Marc-Oliver Pohle","http://arxiv.org/abs/2211.16362v3, http://arxiv.org/pdf/2211.16362v3",econ.EM
30674,em,"This paper presents a multinomial multi-state micro-level reserving model,
denoted mCube. We propose a unified framework for modelling the time and the
payment process for IBNR and RBNS claims and for modeling IBNR claim counts. We
use multinomial distributions for the time process and spliced mixture models
for the payment process. We illustrate the excellent performance of the
proposed model on a real data set of a major insurance company consisting of
bodily injury claims. It is shown that the proposed model produces a best
estimate distribution that is centered around the true reserve.",mCube: Multinomial Micro-level reserving Model,2022-11-30 23:17:48,"Emmanuel Jordy Menvouta, Jolien Ponnet, Robin Van Oirbeek, Tim Verdonck","http://arxiv.org/abs/2212.00101v1, http://arxiv.org/pdf/2212.00101v1",stat.AP
30675,em,"In empirical studies, the data usually don't include all the variables of
interest in an economic model. This paper shows the identification of
unobserved variables in observations at the population level. When the
observables are distinct in each observation, there exists a function mapping
from the observables to the unobservables. Such a function guarantees the
uniqueness of the latent value in each observation. The key lies in the
identification of the joint distribution of observables and unobservables from
the distribution of observables. The joint distribution of observables and
unobservables then reveals the latent value in each observation. Three examples
of this result are discussed.",Identification of Unobservables in Observations,2022-12-05 23:24:19,Yingyao Hu,"http://arxiv.org/abs/2212.02585v1, http://arxiv.org/pdf/2212.02585v1",econ.EM
30676,em,"For the classical linear model with an endogenous variable estimated by the
method of instrumental variables (IVs) with multiple instruments, Masten and
Poirier (2021) introduced the falsification adaptive set (FAS). When a model is
falsified, the FAS reflects the model uncertainty that arises from
falsification of the baseline model. It is the set of just-identified IV
estimands, where each relevant instrument is considered as the just-identifying
instrument in turn, whilst all other instruments are included as controls. It
therefore applies to the case where the exogeneity assumption holds and invalid
instruments violate the exclusion assumption only. We propose a generalized FAS
that reflects the model uncertainty when some instruments violate the
exogeneity assumption and/or some instruments violate the exclusion assumption.
This FAS is the set of all possible just-identified IV estimands where the
just-identifying instrument is relevant. There are a maximum of
$k_{z}2^{k_{z}-1}$ such estimands, where $k_{z}$ is the number of instruments.
If there is at least one relevant instrument that is valid in the sense that it
satisfies the exogeneity and exclusion assumptions, then this generalized FAS
is guaranteed to contain $\beta$ and therefore to be the identified set for
$\beta$.",The Falsification Adaptive Set in Linear Models with Instrumental Variables that Violate the Exogeneity or Exclusion Restriction,2022-12-09 15:42:17,"Nicolas Apfel, Frank Windmeijer","http://arxiv.org/abs/2212.04814v1, http://arxiv.org/pdf/2212.04814v1",econ.EM
30677,em,"Cluster standard error (Liang and Zeger, 1986) is widely used by empirical
researchers to account for cluster dependence in linear model. It is well known
that this standard error is biased. We show that the bias does not vanish under
high dimensional asymptotics by revisiting Chesher and Jewitt (1987)'s
approach. An alternative leave-cluster-out crossfit (LCOC) estimator that is
unbiased, consistent and robust to cluster dependence is provided under high
dimensional setting introduced by Cattaneo, Jansson and Newey (2018). Since
LCOC estimator nests the leave-one-out crossfit estimator of Kline, Saggio and
Solvsten (2019), the two papers are unified. Monte Carlo comparisons are
provided to give insights on its finite sample properties. The LCOC estimator
is then applied to Angrist and Lavy's (2009) study of the effects of high
school achievement award and Donohue III and Levitt's (2001) study of the
impact of abortion on crime.",Robust Inference in High Dimensional Linear Model with Cluster Dependence,2022-12-11 20:36:05,Ng Cheuk Fai,"http://arxiv.org/abs/2212.05554v1, http://arxiv.org/pdf/2212.05554v1",econ.EM
30678,em,"When studying an outcome $Y$ that is weakly-positive but can equal zero (e.g.
earnings), researchers frequently estimate an average treatment effect (ATE)
for a ""log-like"" transformation that behaves like $\log(Y)$ for large $Y$ but
is defined at zero (e.g. $\log(1+Y)$, $\mathrm{arcsinh}(Y)$). We argue that
ATEs for log-like transformations should not be interpreted as approximating
percentage effects, since unlike a percentage, they depend on the units of the
outcome. In fact, we show that if the treatment affects the extensive margin,
one can obtain a treatment effect of any magnitude simply by re-scaling the
units of $Y$ before taking the log-like transformation. This arbitrary
unit-dependence arises because an individual-level percentage effect is not
well-defined for individuals whose outcome changes from zero to non-zero when
receiving treatment, and the units of the outcome implicitly determine how much
weight the ATE for a log-like transformation places on the extensive margin. We
further establish a trilemma: when the outcome can equal zero, there is no
treatment effect parameter that is an average of individual-level treatment
effects, unit-invariant, and point-identified. We discuss several alternative
approaches that may be sensible in settings with an intensive and extensive
margin, including (i) expressing the ATE in levels as a percentage (e.g. using
Poisson regression), (ii) explicitly calibrating the value placed on the
intensive and extensive margins, and (iii) estimating separate effects for the
two margins (e.g. using Lee bounds). We illustrate these approaches in three
empirical applications.",Logs with zeros? Some problems and solutions,2022-12-12 20:56:15,"Jiafeng Chen, Jonathan Roth","http://arxiv.org/abs/2212.06080v7, http://arxiv.org/pdf/2212.06080v7",econ.EM
31325,gn,"We study the effect of religion and intense religious experiences on
terrorism by focusing on one of the five pillars of Islam: Ramadan fasting. For
identification, we exploit two facts: First, daily fasting from dawn to sunset
during Ramadan is considered mandatory for most Muslims. Second, the Islamic
calendar is not synchronized with the solar cycle. We find a robust negative
effect of more intense Ramadan fasting on terrorist events within districts and
country-years in predominantly Muslim countries. This effect seems to operate
partly through decreases in public support for terrorism and the operational
capabilities of terrorist groups.",Religion and Terrorism: Evidence from Ramadan Fasting,2018-10-23 17:02:38,"Roland Hodler, Paul Raschky, Anthony Strittmatter","http://arxiv.org/abs/1810.09869v3, http://arxiv.org/pdf/1810.09869v3",econ.GN
30679,em,"In a high dimensional linear predictive regression where the number of
potential predictors can be larger than the sample size, we consider using
LASSO, a popular L1-penalized regression method, to estimate the sparse
coefficients when many unit root regressors are present. Consistency of LASSO
relies on two building blocks: the deviation bound of the cross product of the
regressors and the error term, and the restricted eigenvalue of the Gram matrix
of the regressors. In our setting where unit root regressors are driven by
temporal dependent non-Gaussian innovations, we establish original
probabilistic bounds for these two building blocks. The bounds imply that the
rates of convergence of LASSO are different from those in the familiar cross
sectional case. In practical applications, given a mixture of stationary and
nonstationary predictors, the asymptotic guarantee of LASSO is preserved if all
predictors are scale-standardized. In an empirical example of forecasting the
unemployment rate with many macroeconomic time series, strong performance is
delivered by LASSO when the initial specification is guided by macroeconomic
domain expertise.",On LASSO for High Dimensional Predictive Regression,2022-12-14 09:14:58,"Ziwei Mei, Zhentao Shi","http://arxiv.org/abs/2212.07052v1, http://arxiv.org/pdf/2212.07052v1",econ.EM
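A minimal sketch of the practical recommendation above: scale-standardize a mix of stationary and unit-root (random-walk) predictors before running LASSO. The data are simulated and the penalty level is an illustrative choice, not the theoretically derived rate from the paper.
```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, p_stat, p_ur = 400, 20, 20

X_stat = rng.standard_normal((T, p_stat))                 # stationary predictors
X_ur = np.cumsum(rng.standard_normal((T, p_ur)), axis=0)  # unit-root predictors
X = np.hstack([X_stat, X_ur])
y = 0.5 * X_stat[:, 0] + 0.02 * X_ur[:, 0] + rng.standard_normal(T)

# Scale-standardize every predictor so stationary and nonstationary
# regressors enter the L1 penalty on a comparable footing.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

fit = Lasso(alpha=0.05).fit(X_std, y)
print("selected predictors:", np.flatnonzero(fit.coef_))
```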
30680,em,"In many applications, data are observed as matrices with temporal dependence.
Matrix-variate time series modeling is a new branch of econometrics. Although
smooth regime changes are a stylized fact in several fields, existing models do
not account for regime switches in the dynamics of matrices that are not abrupt.
In this paper, we extend linear matrix-variate autoregressive models by
introducing a regime-switching model capable of accounting for smooth changes:
the matrix smooth transition autoregressive model. We present the estimation
procedure and its asymptotic properties, which are demonstrated with simulated
and real data.",A smooth transition autoregressive model for matrix-variate time series,2022-12-16 20:46:46,Andrea Bucci,"http://arxiv.org/abs/2212.08615v1, http://arxiv.org/pdf/2212.08615v1",stat.ME
30681,em,"This paper extends quantile factor analysis to a probabilistic variant that
incorporates regularization and computationally efficient variational
approximations. By means of synthetic and real data experiments it is
established that the proposed estimator can achieve, in many cases, better
accuracy than a recently proposed loss-based estimator. We contribute to the
literature on measuring uncertainty by extracting new indexes of low, medium
and high economic policy uncertainty, using the probabilistic quantile factor
methodology. Medium and high indexes have clear contractionary effects, while
the low index is benign for the economy, showing that not all manifestations of
uncertainty are the same.",Probabilistic quantile factor analysis,2022-12-20 17:49:27,"Dimitris Korobilis, Maximilian Schröder","http://arxiv.org/abs/2212.10301v2, http://arxiv.org/pdf/2212.10301v2",econ.EM
30682,em,"We consider a general difference-in-differences model in which the treatment
variable of interest may be non-binary and its value may change in each period.
It is generally difficult to estimate treatment parameters defined with the
potential outcome given the entire path of treatment adoption, because each
treatment path may be experienced by only a small number of observations. We
propose an alternative approach using the concept of effective treatment, which
summarizes the treatment path into an empirically tractable low-dimensional
variable, and develop doubly robust identification, estimation, and inference
methods. We also provide a companion R software package.",An Effective Treatment Approach to Difference-in-Differences with General Treatment Patterns,2022-12-26 20:20:59,Takahide Yanagi,"http://arxiv.org/abs/2212.13226v3, http://arxiv.org/pdf/2212.13226v3",econ.EM
30683,em,"In this paper, we develop spectral and post-spectral estimators for grouped
panel data models. Both estimators are consistent in the asymptotics where the
number of observations $N$ and the number of time periods $T$ simultaneously
grow large. In addition, the post-spectral estimator is $\sqrt{NT}$-consistent
and asymptotically normal with mean zero under the assumption of well-separated
groups even if $T$ is growing much slower than $N$. The post-spectral estimator
has, therefore, theoretical properties that are comparable to those of the
grouped fixed-effect estimator developed by Bonhomme and Manresa (2015). In
contrast to the grouped fixed-effect estimator, however, our post-spectral
estimator is computationally straightforward.",Spectral and post-spectral estimators for grouped panel data models,2022-12-27 02:30:37,"Denis Chetverikov, Elena Manresa","http://arxiv.org/abs/2212.13324v2, http://arxiv.org/pdf/2212.13324v2",econ.EM
30684,em,"This paper studies the identification, estimation, and inference of long-term
(binary) treatment effect parameters when balanced panel data is not available,
or consists of only a subset of the available data. We develop a new estimator:
the chained difference-in-differences, which leverages the overlapping
structure of many unbalanced panel data sets. This approach consists in
aggregating a collection of short-term treatment effects estimated on multiple
incomplete panels. Our estimator accommodates (1) multiple time periods, (2)
variation in treatment timing, (3) treatment effect heterogeneity, (4) general
missing data patterns, and (5) sample selection on observables. We establish
the asymptotic properties of the proposed estimator and discuss identification
and efficiency gains in comparison to existing methods. Finally, we illustrate
its relevance through (i) numerical simulations, and (ii) an application about
the effects of an innovation policy in France.",The Chained Difference-in-Differences,2023-01-03 16:26:22,"Christophe Bellégo, David Benatia, Vincent Dortet-Bernardet","http://arxiv.org/abs/2301.01085v2, http://arxiv.org/pdf/2301.01085v2",econ.EM
30685,em,"This article describes the mixrandregret command, which extends the
randregret command introduced in Guti\'errez-Vargas et al. (2021, The Stata
Journal 21: 626-658) incorporating random coefficients for Random Regret
Minimization models. The newly developed command mixrandregret allows the
inclusion of random coefficients in the regret function of the classical RRM
model introduced in Chorus (2010, European Journal of Transport and
Infrastructure Research 10: 181-196). The command allows the user to specify a
combination of fixed and random coefficients. In addition, the user can specify
normal and log-normal distributions for the random coefficients using the
command's options. The models are fitted by maximum simulated likelihood, with
numerical integration used to approximate the choice probabilities.",Fitting mixed logit random regret minimization models using maximum simulated likelihood,2023-01-03 16:34:53,"Ziyue Zhu, Álvaro A. Gutiérrez-Vargas, Martina Vandebroek","http://arxiv.org/abs/2301.01091v1, http://arxiv.org/pdf/2301.01091v1",econ.EM
31689,gn,"This study examines the role of pawnshops as a risk-coping device in prewar
Japan. Using data on pawnshop loans for more than 250 municipalities and
exploiting the 1918-1920 influenza pandemic as a natural experiment, we find
that the adverse health shock increased the total amount of loans from
pawnshops. This is because those who regularly relied on pawnshops borrowed
more money from them than usual to cope with the adverse health shock, and not
because the number of people who used pawnshops increased.",The role of pawnshops in risk coping in early twentieth-century Japan,2019-05-11 04:28:28,Tatsuki Inoue,"http://dx.doi.org/10.1017/S0968565021000111, http://arxiv.org/abs/1905.04419v2, http://arxiv.org/pdf/1905.04419v2",econ.GN
30686,em,"Modeling lies at the core of both the financial and the insurance industry
for a wide variety of tasks. The rise and development of machine learning and
deep learning models have created many opportunities to improve our modeling
toolbox. Breakthroughs in these fields often come with the requirement of large
amounts of data. Such large datasets are often not publicly available in
finance and insurance, mainly due to privacy and ethics concerns. This lack of
data is currently one of the main hurdles in developing better models. One
possible option for alleviating this issue is generative modeling. Generative
models are capable of simulating fake but realistic-looking data, also referred
to as synthetic data, that can be shared more freely. Generative Adversarial
Networks (GANs) are one such class of models, increasing our capacity to fit very
high-dimensional distributions of data. While research on GANs is an active
topic in fields like computer vision, they have found limited adoption within
the human sciences, like economics and insurance. The reason for this is that in
these fields, most questions are inherently about identification of causal
effects, while to this day neural networks, which are at the center of the GAN
framework, focus mostly on high-dimensional correlations. In this paper we
study the causal preservation capabilities of GANs and whether the produced
synthetic data can reliably be used to answer causal questions. This is done by
performing causal analyses on the synthetic data, produced by a GAN, with
increasingly more lenient assumptions. We consider the cross-sectional case,
the time series case and the case with a complete structural model. It is shown
that in the simple cross-sectional scenario where correlation equals causation
the GAN preserves causality, but that challenges arise for more advanced
analyses.",On the causality-preservation capabilities of generative modelling,2023-01-03 17:09:15,"Yves-Cédric Bauwelinckx, Jan Dhaene, Tim Verdonck, Milan van den Heuvel","http://arxiv.org/abs/2301.01109v1, http://arxiv.org/pdf/2301.01109v1",cs.LG
30687,em,"Randomized experiments are an excellent tool for estimating internally valid
causal effects with the sample at hand, but their external validity is
frequently debated. While classical results on the estimation of Population
Average Treatment Effects (PATE) implicitly assume random selection into
experiments, this is typically far from true in many medical,
social-scientific, and industry experiments. When the experimental sample is
different from the target sample along observable or unobservable dimensions,
experimental estimates may be of limited use for policy decisions. We begin by
decomposing the extrapolation bias from estimating the Target Average Treatment
Effect (TATE) using the Sample Average Treatment Effect (SATE) into covariate
shift, overlap, and effect modification components, which researchers can
reason about in order to diagnose the severity of extrapolation bias. Next, we
cast covariate shift as a sample selection problem and propose estimators that
re-weight the doubly-robust scores from experimental subjects to estimate
treatment effects in the overall sample (=: generalization) or in an alternate
target sample (=: transportation). We implement these estimators in the
open-source R package causalTransportR, illustrate it in a simulation study, and
discuss diagnostics to evaluate its performance.",A Framework for Generalization and Transportation of Causal Estimates Under Covariate Shift,2023-01-12 04:00:36,"Apoorva Lal, Wenjing Zheng, Simon Ejdemyr","http://arxiv.org/abs/2301.04776v1, http://arxiv.org/pdf/2301.04776v1",stat.ME
30688,em,"In this study, we propose a test for the coefficient randomness in
autoregressive models where the autoregressive coefficient is local to unity,
which is empirically relevant given the results of earlier studies. Under this
specification, we theoretically analyze the effect of the correlation between
the random coefficient and disturbance on tests' properties, which remains
largely unexplored in the literature. Our analysis reveals that the correlation
crucially affects the power of tests for coefficient randomness and that tests
proposed by earlier studies can perform poorly when the degree of the
correlation is moderate to large. The test we propose in this paper is designed
to have a power function robust to the correlation. Because the asymptotic null
distribution of our test statistic depends on the correlation $\psi$ between
the disturbance and its square as earlier tests do, we also propose a modified
version of the test statistic such that its asymptotic null distribution is
free from the nuisance parameter $\psi$. The modified test is shown to have
better power properties than existing ones in large and finite samples.",Testing for Coefficient Randomness in Local-to-Unity Autoregressions,2023-01-12 10:34:49,Mikihito Nishi,"http://arxiv.org/abs/2301.04853v2, http://arxiv.org/pdf/2301.04853v2",econ.EM
30689,em,"This study considers testing the specification of spillover effects in causal
inference. We focus on experimental settings in which the treatment assignment
mechanism is known to researchers. We develop a new randomization test
utilizing a hierarchical relationship between different exposures. Compared
with existing approaches, our approach is essentially applicable to any null
exposure specifications and produces powerful test statistics without a priori
knowledge of the true interference structure. As empirical illustrations, we
revisit two existing social network experiments: one on farmers' insurance
adoption and the other on anti-conflict education programs.",Randomization Test for the Specification of Interference Structure,2023-01-13 17:38:53,"Tadao Hoshino, Takahide Yanagi","http://arxiv.org/abs/2301.05580v2, http://arxiv.org/pdf/2301.05580v2",stat.ME
30690,em,"Plausible identification of conditional average treatment effects (CATEs) may
rely on controlling for a large number of variables to account for confounding
factors. In these high-dimensional settings, estimation of the CATE requires
estimating first-stage models whose consistency relies on correctly specifying
their parametric forms. While doubly-robust estimators of the CATE exist,
inference procedures based on the second stage CATE estimator are not
doubly-robust. Using the popular augmented inverse propensity weighting signal,
we propose an estimator for the CATE whose resulting Wald-type confidence
intervals are doubly-robust. We assume a logistic model for the propensity
score and a linear model for the outcome regression, and estimate the
parameters of these models using an $\ell_1$ (Lasso) penalty to address the
high dimensional covariates. Our proposed estimator remains consistent at the
nonparametric rate and our proposed pointwise and uniform confidence intervals
remain asymptotically valid even if one of the logistic propensity score or
linear outcome regression models are misspecified. These results are obtained
under similar conditions to existing analyses in the high-dimensional and
nonparametric literatures.",Doubly-Robust Inference for Conditional Average Treatment Effects with High-Dimensional Controls,2023-01-16 09:44:46,"Adam Baybutt, Manu Navjeevan","http://arxiv.org/abs/2301.06283v1, http://arxiv.org/pdf/2301.06283v1",econ.EM
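A stylized sketch of the construction described above, on assumed simulated data: an l1-penalized logistic propensity model, l1-penalized outcome regressions, and the augmented inverse propensity weighting (AIPW) signal regressed on the covariate along which the CATE varies. This illustrates only the signal; cross-fitting and the paper's pointwise and uniform inference procedures are not reproduced.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Lasso, LinearRegression

rng = np.random.default_rng(2)
n, p = 2000, 50
X = rng.standard_normal((n, p))
ps_true = 1 / (1 + np.exp(-X[:, 0]))
D = rng.binomial(1, ps_true)
Y = D * (1 + X[:, 1]) + X[:, 2] + rng.standard_normal(n)   # CATE = 1 + X[:, 1]

# l1-penalized nuisance estimates (propensity score and outcome regressions).
ps = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, D).predict_proba(X)[:, 1]
mu1 = Lasso(alpha=0.05).fit(X[D == 1], Y[D == 1]).predict(X)
mu0 = Lasso(alpha=0.05).fit(X[D == 0], Y[D == 0]).predict(X)

# AIPW (doubly-robust) signal.
psi = mu1 - mu0 + D * (Y - mu1) / ps - (1 - D) * (Y - mu0) / (1 - ps)

# Second stage: project the signal on the covariate of interest.
second_stage = LinearRegression().fit(X[:, [1]], psi)
print("estimated CATE(x1) ~", second_stage.intercept_, "+", second_stage.coef_[0], "* x1")
```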
30726,em,"The paper deals with the construction of a synthetic indicator of economic
growth, obtained by projecting a quarterly measure of aggregate economic
activity, namely gross domestic product (GDP), into the space spanned by a
finite number of smooth principal components, representative of the
medium-to-long-run component of economic growth of a high-dimensional time
series, available at the monthly frequency. The smooth principal components
result from applying a cross-sectional filter distilling the low-pass component
of growth in real time. The outcome of the projection is a monthly nowcast of
the medium-to-long-run component of GDP growth. After discussing the
theoretical properties of the indicator, we deal with the assessment of its
reliability and predictive validity with reference to a panel of macroeconomic
U.S. time series.",Band-Pass Filtering with High-Dimensional Time Series,2023-05-11 10:28:30,"Alessandro Giovannelli, Marco Lippi, Tommaso Proietti","http://arxiv.org/abs/2305.06618v1, http://arxiv.org/pdf/2305.06618v1",econ.EM
30691,em,"Despite the popularity of factor models with sparse loading matrices, little
attention has been given to formally address identifiability of these models
beyond standard rotation-based identification such as the positive lower
triangular (PLT) constraint. To fill this gap, we review the advantages of
variance identification in sparse factor analysis and introduce the generalized
lower triangular (GLT) structures. We show that the GLT assumption is an
improvement over PLT without compromise: GLT is also unique but, unlike PLT, a
non-restrictive assumption. Furthermore, we provide a simple counting rule for
variance identification under GLT structures, and we demonstrate that within
this model class the unknown number of common factors can be recovered in an
exploratory factor analysis. Our methodology is illustrated for simulated data
in the context of post-processing posterior draws in Bayesian sparse factor
analysis.",When it counts -- Econometric identification of the basic factor model based on GLT structures,2023-01-16 13:54:45,"Sylvia Frühwirth-Schnatter, Darjus Hosszejni, Hedibert Freitas Lopes","http://dx.doi.org/10.3390/econometrics11040026, http://arxiv.org/abs/2301.06354v1, http://arxiv.org/pdf/2301.06354v1",stat.ME
30692,em,"The spatial dependence in mean has been well studied by plenty of models in a
large strand of literature; however, the investigation of spatial dependence in
variance lags significantly behind. Existing models for spatial
dependence in variance are scarce, with neither probabilistic structure nor
statistical inference procedure being explored. To circumvent this deficiency,
this paper proposes a new generalized logarithmic spatial heteroscedasticity
model with exogenous variables (denoted by the log-SHE model) to study the
spatial dependence in variance. For the log-SHE model, its spatial near-epoch
dependence (NED) property is investigated, and a systematic statistical
inference procedure is provided, including the maximum likelihood and
generalized method of moments estimators, the Wald, Lagrange multiplier and
likelihood-ratio-type D tests for model parameter constraints, and the
overidentification test for the model diagnostic checking. Using the tool of
spatial NED, the asymptotics of all proposed estimators and tests are
established under regular conditions. The usefulness of the proposed
methodology is illustrated by simulation results and a real data example on
house selling prices.",Statistical inference for the logarithmic spatial heteroskedasticity model with exogenous variables,2023-01-17 04:48:14,"Bing Su, Fukang Zhu, Ke Zhu","http://arxiv.org/abs/2301.06658v1, http://arxiv.org/pdf/2301.06658v1",econ.EM
30693,em,"A practical challenge for structural estimation is the requirement to
accurately minimize a sample objective function which is often non-smooth,
non-convex, or both. This paper proposes a simple algorithm designed to find
accurate solutions without performing an exhaustive search. It augments each
iteration from a new Gauss-Newton algorithm with a grid search step. A finite
sample analysis derives its optimization and statistical properties
simultaneously using only econometric assumptions. After a finite number of
iterations, the algorithm automatically transitions from global to fast local
convergence, producing accurate estimates with high probability. Simulated
examples and an empirical application illustrate the results.","Noisy, Non-Smooth, Non-Convex Estimation of Moment Condition Models",2023-01-18 00:22:27,Jean-Jacques Forneron,"http://arxiv.org/abs/2301.07196v2, http://arxiv.org/pdf/2301.07196v2",econ.EM
30694,em,"Canada and other major countries are investigating the implementation of
``digital money'' or Central Bank Digital Currencies, necessitating answers to
key questions about how demographic and geographic factors influence the
population's digital literacy. This paper uses the Canadian Internet Use Survey
(CIUS) 2020 and survey versions of Lasso inference methods to assess the
digital divide in Canada and determine the relevant factors that influence it.
We find that a significant divide in the use of digital technologies, e.g.,
online banking and virtual wallet, continues to exist across different
demographic and geographic categories. We also create a digital divide score
that measures the survey respondents' digital literacy and provide multiple
correspondence analyses that further corroborate these findings.",Digital Divide: Empirical Study of CIUS 2020,2023-01-19 05:52:42,"Joann Jasiak, Peter MacKenzie, Purevdorj Tuvaandorj","http://arxiv.org/abs/2301.07855v2, http://arxiv.org/pdf/2301.07855v2",econ.EM
30695,em,"This paper studies inference in two-stage randomized experiments under
covariate-adaptive randomization. In the initial stage of this experimental
design, clusters (e.g., households, schools, or graph partitions) are
stratified and randomly assigned to control or treatment groups based on
cluster-level covariates. Subsequently, an independent second-stage design is
carried out, wherein units within each treated cluster are further stratified
and randomly assigned to either control or treatment groups, based on
individual-level covariates. Under the homogeneous partial interference
assumption, I establish conditions under which the proposed
difference-in-""average of averages"" estimators are consistent and
asymptotically normal for the corresponding average primary and spillover
effects and develop consistent estimators of their asymptotic variances.
Combining these results establishes the asymptotic validity of tests based on
these estimators. My findings suggest that ignoring covariate information in
the design stage can result in efficiency loss, and commonly used inference
methods that ignore or improperly use covariate information can lead to either
conservative or invalid inference. Finally, I apply these results to studying
optimal use of covariate information under covariate-adaptive randomization in
large samples, and demonstrate that a specific generalized matched-pair design
achieves minimum asymptotic variance for each proposed estimator. The practical
relevance of the theoretical results is illustrated through a simulation study
and an empirical application.",Inference for Two-stage Experiments under Covariate-Adaptive Randomization,2023-01-22 00:55:40,Jizhou Liu,"http://arxiv.org/abs/2301.09016v5, http://arxiv.org/pdf/2301.09016v5",econ.EM
30696,em,"We introduce the package ddml for Double/Debiased Machine Learning (DDML) in
Stata. Estimators of causal parameters for five different econometric models
are supported, allowing for flexible estimation of causal effects of endogenous
variables in settings with unknown functional forms and/or many exogenous
variables. ddml is compatible with many existing supervised machine learning
programs in Stata. We recommend using DDML in combination with stacking
estimation which combines multiple machine learners into a final predictor. We
provide Monte Carlo evidence to support our recommendation.",ddml: Double/debiased machine learning in Stata,2023-01-23 15:37:34,"Achim Ahrens, Christian B. Hansen, Mark E. Schaffer, Thomas Wiemann","http://arxiv.org/abs/2301.09397v2, http://arxiv.org/pdf/2301.09397v2",econ.EM
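The ddml command itself is for Stata; as a language-agnostic illustration of the double/debiased machine learning idea it implements, here is a minimal Python sketch of cross-fitted partialling-out in a partially linear model, using a single random forest learner rather than the stacked ensemble recommended above. The simulated data and learner choice are illustrative assumptions.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n, p, theta = 1000, 20, 0.5
X = rng.standard_normal((n, p))
d = np.sin(X[:, 0]) + rng.standard_normal(n)          # treatment-style regressor
y = theta * d + np.cos(X[:, 1]) + rng.standard_normal(n)

y_res, d_res = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Cross-fitting: nuisance functions are fit on one set of folds and
    # predicted on the held-out fold.
    my = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])
    md = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], d[train])
    y_res[test] = y[test] - my.predict(X[test])
    d_res[test] = d[test] - md.predict(X[test])

theta_hat = (d_res @ y_res) / (d_res @ d_res)          # partialling-out estimate
print("theta_hat =", round(theta_hat, 3))
```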
35490,th,"In this discussion draft, we explore heterogeneous oligopoly games with an
increasing number of players and quadratic costs, where the market is assumed to
have isoelastic demand. For each of the models considered in this draft, we
analytically investigate the necessary and sufficient condition of the local
stability of its positive equilibrium. Furthermore, we rigorously prove that
the stability regions are enlarged as the number of involved firms is
increasing.",Stability analysis of heterogeneous oligopoly games of increasing players with quadratic costs,2021-12-24 06:52:20,Xiaoliang Li,"http://arxiv.org/abs/2112.13844v1, http://arxiv.org/pdf/2112.13844v1",econ.TH
30697,em,"The processes of ecological interactions, dispersal and mutations shape the
dynamics of biological communities, and analogous eco-evolutionary processes
acting upon economic entities have been proposed to explain economic change.
This hypothesis is compelling because it explains economic change through
endogenous mechanisms, but it has not been quantitatively tested at the global
economy level. Here, we use an inverse modelling technique and 59 years of
economic data covering 77 countries to test whether the collective dynamics of
national economic activities can be characterised by eco-evolutionary
processes. We estimate the statistical support of dynamic community models in
which the dynamics of economic activities are coupled with positive and
negative interactions between the activities, the spatial dispersal of the
activities, and their transformations into other economic activities. We find
strong support for the models capturing positive interactions between economic
activities and spatial dispersal of the activities across countries. These
results suggest that processes akin to those occurring in ecosystems play a
significant role in the dynamics of economic systems. The strength-of-evidence
obtained for each model varies across countries and may be caused by
differences in the distance between countries, specific institutional contexts,
and historical contingencies. Overall, our study provides a new quantitative,
biologically inspired framework to study the forces shaping economic change.",Processes analogous to ecological interactions and dispersal shape the dynamics of economic activities,2023-01-23 18:30:10,"Victor Boussange, Didier Sornette, Heike Lischke, Loïc Pellissier","http://arxiv.org/abs/2301.09486v1, http://arxiv.org/pdf/2301.09486v1",econ.EM
30698,em,"Reverse Unrestricted MIxed DAta Sampling (RU-MIDAS) regressions are used to
model high-frequency responses by means of low-frequency variables. However,
due to the periodic structure of RU-MIDAS regressions, the dimensionality grows
quickly if the frequency mismatch between the high- and low-frequency variables
is large. Additionally the number of high-frequency observations available for
estimation decreases. We propose to counteract this reduction in sample size by
pooling the high-frequency coefficients and further reduce the dimensionality
through a sparsity-inducing convex regularizer that accounts for the temporal
ordering among the different lags. To this end, the regularizer prioritizes the
inclusion of lagged coefficients according to the recency of the information
they contain. We demonstrate the proposed method on an empirical application
for daily realized volatility forecasting where we explore whether modeling
high-frequency volatility data in terms of low-frequency macroeconomic data
pays off.",Hierarchical Regularizers for Reverse Unrestricted Mixed Data Sampling Regressions,2023-01-25 16:53:16,"Alain Hecq, Marie Ternes, Ines Wilms","http://arxiv.org/abs/2301.10592v1, http://arxiv.org/pdf/2301.10592v1",econ.EM
30699,em,"This paper generalises dynamic factor models for multidimensional dependent
data. In doing so, it develops an interpretable technique to study complex
information sources ranging from repeated surveys with a varying number of
respondents to panels of satellite images. We specialise our results to model
microeconomic data on US households jointly with macroeconomic aggregates. This
results in a powerful tool able to generate localised predictions,
counterfactuals and impulse response functions for individual households,
accounting for traditional time-series complexities depicted in the state-space
literature. The model is also compatible with the growing focus of policymakers
for real-time economic analysis as it is able to process observations online,
while handling missing values and asynchronous data releases.",Multidimensional dynamic factor models,2023-01-29 20:37:29,"Matteo Barigozzi, Filippo Pellegrino","http://arxiv.org/abs/2301.12499v1, http://arxiv.org/pdf/2301.12499v1",econ.EM
30700,em,"This paper extends the canonical model of epidemiology, the SIRD model, to
allow for time-varying parameters for real-time measurement and prediction of
the trajectory of the Covid-19 pandemic. Time variation in model parameters is
captured using the generalized autoregressive score modeling structure designed
for the typical daily count data related to the pandemic. The resulting
specification permits a flexible yet parsimonious model with a low
computational cost. The model is extended to allow for unreported cases using a
mixed-frequency setting. Results suggest that these cases' effects on the
parameter estimates might be sizeable. Full sample results show that the
flexible framework accurately captures the successive waves of the pandemic. A
real-time exercise indicates that the proposed structure delivers timely and
precise information on the pandemic's current stance. This superior
performance, in turn, transforms into accurate predictions of the confirmed and
death cases.",Bridging the Covid-19 Data and the Epidemiological Model using Time-Varying Parameter SIRD Model,2023-01-31 18:08:40,"Cem Cakmakli, Yasin Simsek","http://arxiv.org/abs/2301.13692v1, http://arxiv.org/pdf/2301.13692v1",econ.EM
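For readers unfamiliar with the baseline model being extended, a minimal discrete-time SIRD update with a time-varying transmission rate is sketched below. The decaying path for beta and the parameter values are arbitrary assumptions; the generalized autoregressive score dynamics of the paper are not reproduced here.
```python
import numpy as np

T, N = 200, 1_000_000
gamma, nu = 0.1, 0.01                      # recovery and death rates (illustrative)
beta = 0.4 * np.exp(-0.01 * np.arange(T))  # assumed time-varying transmission rate

S, I, R, D = N - 100.0, 100.0, 0.0, 0.0
for t in range(T):
    new_inf = beta[t] * S * I / N
    new_rec = gamma * I
    new_dead = nu * I
    S -= new_inf
    I += new_inf - new_rec - new_dead
    R += new_rec
    D += new_dead
print(f"final susceptible share: {S / N:.2%}, cumulative deaths: {D:.0f}")
```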
30701,em,"In this study, we construct an adaptive hedging method based on the empirical
mode decomposition (EMD) to extract the adaptive hedging horizon, and we build a
time series cross-validation method for robust hedging performance
estimation. Based on the variance reduction criterion and the value-at-risk
(VaR) criterion, we find that the estimation of in-sample hedging performance
is inconsistent with that of the out-sample hedging performance. The EMD
hedging method family exhibits superior performance on the VaR criterion
compared with the minimum variance hedging method. The matching degree of the
spot and futures contracts at the specific time scale is the key determinant of
the hedging performance in the corresponding hedging horizon.",Adaptive hedging horizon and hedging performance estimation,2023-02-01 08:29:08,"Wang Haoyu, Junpeng Di, Qing Han","http://arxiv.org/abs/2302.00251v1, http://arxiv.org/pdf/2302.00251v1",econ.EM
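As background for the benchmark compared against above, a minimal sketch of the minimum-variance hedge ratio, the variance-reduction criterion, and a simple historical VaR of the hedged position, on simulated spot and futures returns. The EMD-based horizon selection itself is not reproduced here.
```python
import numpy as np

rng = np.random.default_rng(4)
f = rng.standard_normal(1000) * 0.01                  # futures returns
s = 0.9 * f + rng.standard_normal(1000) * 0.004       # spot returns, correlated with futures

C = np.cov(s, f)
h = C[0, 1] / C[1, 1]                                 # minimum-variance hedge ratio
hedged = s - h * f
var_reduction = 1 - np.var(hedged) / np.var(s)        # variance-reduction criterion
var_95 = np.quantile(hedged, 0.05)                    # historical 5% quantile (VaR-style criterion)
print(f"h = {h:.3f}, variance reduction = {var_reduction:.2%}, 5% quantile = {var_95:.4f}")
```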
30702,em,"We explore time-varying networks for high-dimensional locally stationary time
series, using the large VAR model framework with both the transition and
(error) precision matrices evolving smoothly over time. Two types of
time-varying graphs are investigated: one containing directed edges of Granger
causality linkages, and the other containing undirected edges of partial
correlation linkages. Under the sparse structural assumption, we propose a
penalised local linear method with time-varying weighted group LASSO to jointly
estimate the transition matrices and identify their significant entries, and a
time-varying CLIME method to estimate the precision matrices. The estimated
transition and precision matrices are then used to determine the time-varying
network structures. Under some mild conditions, we derive the theoretical
properties of the proposed estimates including the consistency and oracle
properties. In addition, we extend the methodology and theory to cover
highly-correlated large-scale time series, for which the sparsity assumption
becomes invalid and we allow for common factors before estimating the
factor-adjusted time-varying networks. We provide extensive simulation studies
and an empirical application to a large U.S. macroeconomic dataset to
illustrate the finite-sample performance of our methods.",Estimating Time-Varying Networks for High-Dimensional Time Series,2023-02-05 23:27:09,"Jia Chen, Degui Li, Yuning Li, Oliver Linton","http://arxiv.org/abs/2302.02476v1, http://arxiv.org/pdf/2302.02476v1",stat.ME
30703,em,"Quantile forecasts made across multiple horizons have become an important
output of many financial institutions, central banks and international
organisations. This paper proposes misspecification tests for such quantile
forecasts that assess optimality over a set of multiple forecast horizons
and/or quantiles. The tests build on multiple Mincer-Zarnowitz quantile
regressions cast in a moment equality framework. Our main test is for the null
hypothesis of autocalibration, a concept which assesses optimality with respect
to the information contained in the forecasts themselves. We provide an
extension that allows to test for optimality with respect to larger information
sets and a multivariate extension. Importantly, our tests do not just inform
about general violations of optimality, but may also provide useful insights
into specific forms of sub-optimality. A simulation study investigates the
finite sample performance of our tests, and two empirical applications to
financial returns and U.S. macroeconomic series illustrate that our tests can
yield interesting insights into quantile forecast sub-optimality and its
causes.",Testing Quantile Forecast Optimality,2023-02-06 15:47:34,"Jack Fosten, Daniel Gutknecht, Marc-Oliver Pohle","http://arxiv.org/abs/2302.02747v2, http://arxiv.org/pdf/2302.02747v2",econ.EM
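As background, the Mincer-Zarnowitz-style quantile regression on which the tests build can be run as follows: under autocalibration at level tau, the intercept should be 0 and the slope 1. This is only the single-horizon, single-quantile building block on simulated data, not the joint moment-equality tests proposed in the paper.
```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(5)
T, tau = 500, 0.25
y = rng.standard_normal(T)
q_forecast = np.quantile(y, tau) + 0.1 * rng.standard_normal(T)  # hypothetical quantile forecasts

X = sm.add_constant(q_forecast)
res = QuantReg(y, X).fit(q=tau)
print(res.params)   # autocalibration suggests intercept ~ 0 and slope ~ 1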
30704,em,"This paper studies covariate adjusted estimation of the average treatment
effect in stratified experiments. We work in a general framework that includes
matched tuples designs, coarse stratification, and complete randomization as
special cases. Regression adjustment with treatment-covariate interactions is
known to weakly improve efficiency for completely randomized designs. By
contrast, we show that for stratified designs such regression estimators are
generically inefficient, potentially even increasing estimator variance
relative to the unadjusted benchmark. Motivated by this result, we derive the
asymptotically optimal linear covariate adjustment for a given stratification.
We construct several feasible estimators that implement this efficient
adjustment in large samples. In the special case of matched pairs, for example,
the regression including treatment, covariates, and pair fixed effects is
asymptotically optimal. We also provide novel asymptotically exact inference
methods that allow researchers to report smaller confidence intervals, fully
reflecting the efficiency gains from both stratification and adjustment.
Simulations and an empirical application demonstrate the value of our proposed
methods.",Covariate Adjustment in Stratified Experiments,2023-02-07 21:59:41,Max Cytrynbaum,"http://arxiv.org/abs/2302.03687v3, http://arxiv.org/pdf/2302.03687v3",econ.EM
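A minimal sketch of the matched-pairs special case highlighted above: the regression of the outcome on treatment, covariates, and pair fixed effects. The data are simulated, and the paper's asymptotically exact inference procedure is not reproduced; only the point estimator is shown.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_pairs = 300
pair = np.repeat(np.arange(n_pairs), 2)
d = np.tile([0, 1], n_pairs)                     # one treated unit per matched pair
x = rng.standard_normal(2 * n_pairs)
y = 1.0 * d + 0.5 * x + rng.standard_normal(2 * n_pairs)

df = pd.DataFrame({"y": y, "d": d, "x": x, "pair": pair})
res = smf.ols("y ~ d + x + C(pair)", data=df).fit()
print("adjusted ATE estimate:", round(res.params["d"], 3))
```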
30705,em,"The identification of choice models is crucial for understanding consumer
behavior, designing marketing policies, and developing new products. The
identification of parametric choice-based demand models, such as the
multinomial choice model (MNL), is typically straightforward. However,
nonparametric models, which are highly effective and flexible in explaining
customer choices, may encounter the curse of the dimensionality and lose their
identifiability. For example, the ranking-based model, which is a nonparametric
model and designed to mirror the random utility maximization (RUM) principle,
is known to be nonidentifiable from the collection of choice probabilities
alone. In this paper, we develop a new class of nonparametric models that is
not subject to the problem of nonidentifiability. Our model assumes bounded
rationality of consumers, which results in symmetric demand cannibalization and
intriguingly enables full identification. That is to say, we can uniquely
construct the model based on its observed choice probabilities over
assortments. We further propose an efficient estimation framework using a
combination of column generation and expectation-maximization algorithms. Using
real-world data, we show that our choice model demonstrates competitive
prediction accuracy compared to the state-of-the-art benchmarks, despite
incorporating the assumption of bounded rationality which could, in theory,
limit the representation power of our model.","A Nonparametric Stochastic Set Model: Identification, Optimization, and Prediction",2023-02-09 01:05:02,"Yi-Chun Chen, Dmitry Mitrofanov","http://arxiv.org/abs/2302.04354v2, http://arxiv.org/pdf/2302.04354v2",econ.EM
30706,em,"The conditional variance, skewness, and kurtosis play a central role in time
series analysis. These three conditional moments (CMs) are often studied by
some parametric models but with two big issues: the risk of model
mis-specification and the instability of model estimation. To avoid the above
two issues, this paper proposes a novel method to estimate these three CMs by
the so-called quantiled CMs (QCMs). The QCM method first adopts the idea of
Cornish-Fisher expansion to construct a linear regression model, based on $n$
different estimated conditional quantiles. Next, it computes the QCMs simply
and simultaneously by using the ordinary least squares estimator of this
regression model, without any prior estimation of the conditional mean. Under
certain conditions, the QCMs are shown to be consistent with the convergence
rate $n^{-1/2}$. Simulation studies indicate that the QCMs perform well under
different scenarios of Cornish-Fisher expansion errors and quantile estimation
errors. In the application, the study of QCMs for three exchange rates
demonstrates the effectiveness of financial rescue plans during the COVID-19
pandemic outbreak, and suggests that the existing ``news impact curve''
functions for the conditional skewness and kurtosis may not be suitable.","Quantiled conditional variance, skewness, and kurtosis by Cornish-Fisher expansion",2023-02-14 05:45:01,"Ningning Zhang, Ke Zhu","http://arxiv.org/abs/2302.06799v2, http://arxiv.org/pdf/2302.06799v2",stat.ME
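A stylized sketch of the Cornish-Fisher regression idea behind the QCMs: regress a set of estimated quantiles on Cornish-Fisher basis terms in the standard normal quantiles and back out variance, skewness, and kurtosis from the OLS coefficients. The quantiles below come from a known distribution rather than a fitted conditional quantile model, and the exact construction in the paper may differ.
```python
import numpy as np
from scipy import stats

taus = np.linspace(0.05, 0.95, 19)
z = stats.norm.ppf(taus)

# Hypothetical "estimated" quantiles: exact quantiles of a skewed distribution
# stand in for the output of a conditional quantile model.
q = stats.skewnorm.ppf(taus, a=4, loc=0, scale=2)

# Cornish-Fisher basis: q_tau ~ mu + sigma*z + (sigma*s/6)*(z^2-1) + (sigma*k/24)*(z^3-3z)
B = np.column_stack([np.ones_like(z), z, z**2 - 1, z**3 - 3 * z])
b, *_ = np.linalg.lstsq(B, q, rcond=None)

sigma = b[1]
skew = 6 * b[2] / sigma
ex_kurt = 24 * b[3] / sigma
print(f"variance={sigma**2:.3f}, skewness={skew:.3f}, excess kurtosis={ex_kurt:.3f}")
```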
30707,em,"High covariate dimensionality is increasingly common in model estimation,
and existing techniques to address this issue typically require sparsity or
discrete heterogeneity of the unobservable parameter vector. However, neither
restriction may be supported by economic theory in some empirical contexts,
leading to severe bias and misleading inference. The clustering-based grouped
parameter estimator (GPE) introduced in this paper drops both restrictions in
favour of the natural one that the parameter support be compact. GPE exhibits
robust large sample properties under standard conditions and accommodates both
sparse and non-sparse parameters whose support can be bounded away from zero.
Extensive Monte Carlo simulations demonstrate the excellent performance of GPE
in terms of bias reduction and size control compared to competing estimators.
An empirical application of GPE to estimating price and income elasticities of
demand for gasoline highlights its practical utility.",Clustered Covariate Regression,2023-02-18 11:01:47,"Abdul-Nasah Soale, Emmanuel Selorm Tsyawo","http://dx.doi.org/10.2139/ssrn.3394012, http://arxiv.org/abs/2302.09255v2, http://arxiv.org/pdf/2302.09255v2",econ.EM
30737,em,"An impulse response function describes the dynamic evolution of an outcome
variable following a stimulus or treatment. A common hypothesis of interest is
whether the treatment affects the outcome. We show that this hypothesis is best
assessed using significance bands rather than relying on commonly displayed
confidence bands. Under the null hypothesis, we show that significance bands
are trivial to construct with standard statistical software using the LM
principle, and should be reported as a matter of routine when displaying
impulse responses graphically.",Significance Bands for Local Projections,2023-06-05 20:50:07,"Atsushi Inoue, Òscar Jordà, Guido M. Kuersteiner","http://arxiv.org/abs/2306.03073v1, http://arxiv.org/pdf/2306.03073v1",econ.EM
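As a rough sketch of the object under discussion: local projection impulse responses estimated horizon by horizon, with a band centered at zero (the null of no effect) built from the estimators' standard errors. This only approximates the idea; the authors' LM-based construction of significance bands is not reproduced, and the simulated data and HAC lag choice are illustrative.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T, H = 400, 12
shock = rng.standard_normal(T)
y = np.convolve(shock, 0.8 ** np.arange(8), mode="full")[:T] + rng.standard_normal(T)

irf, se = [], []
for h in range(H + 1):
    # Local projection at horizon h: regress y_{t+h} on the shock at t.
    res = sm.OLS(y[h:], sm.add_constant(shock[: T - h])).fit(
        cov_type="HAC", cov_kwds={"maxlags": h + 1})
    irf.append(res.params[1])
    se.append(res.bse[1])

irf, se = np.array(irf), np.array(se)
sig_band = 1.96 * se          # band around zero under the null of no effect
print(np.column_stack([irf, -sig_band, sig_band]).round(3))
```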
30708,em,"Latent Class Choice Models (LCCM) are extensions of discrete choice models
(DCMs) that capture unobserved heterogeneity in the choice process by
segmenting the population based on the assumption of preference similarities.
We present a method of efficiently incorporating attitudinal indicators in the
specification of LCCM, by introducing Artificial Neural Networks (ANN) to
formulate latent variables constructs. This formulation overcomes structural
equations in its capability of exploring the relationship between the
attitudinal indicators and the decision choice, given the Machine Learning (ML)
flexibility and power in capturing unobserved and complex behavioural features,
such as attitudes and beliefs. All of this is achieved while maintaining the
consistency of the theoretical assumptions presented in the Generalized Random
Utility model and the interpretability of the estimated parameters. We test our
proposed framework for estimating a Car-Sharing (CS) service subscription
choice with stated preference data from Copenhagen, Denmark. The results show
that our proposed approach provides a complete and realistic segmentation,
which helps design better policies.",Attitudes and Latent Class Choice Models using Machine learning,2023-02-20 13:03:01,"Lorena Torres Lahoz, Francisco Camara Pereira, Georges Sfeir, Ioanna Arkoudi, Mayara Moraes Monteiro, Carlos Lima Azevedo","http://arxiv.org/abs/2302.09871v1, http://arxiv.org/pdf/2302.09871v1",econ.EM
30709,em,"The synthetic control (SC) method is a popular approach for estimating
treatment effects from observational panel data. It rests on a crucial
assumption that we can write the treated unit as a linear combination of the
untreated units. This linearity assumption, however, can be unlikely to hold in
practice and, when violated, the resulting SC estimates are incorrect. In this
paper we examine two questions: (1) How large can the misspecification error
be? (2) How can we limit it? First, we provide theoretical bounds to quantify
the misspecification error. The bounds are comforting: small misspecifications
induce small errors. With these bounds in hand, we then develop new SC
estimators that are specially designed to minimize misspecification error. The
estimators are based on additional data about each unit, which is used to
produce the SC weights. (For example, if the units are countries then the
additional data might be demographic information about each.) We study our
estimators on synthetic data; we find they produce more accurate causal
estimates than standard synthetic controls. We then re-analyze the California
tobacco-program data of the original SC paper, now including additional data
from the US census about per-state demographics. Our estimators show that the
observations in the pre-treatment period lie within the bounds of
misspecification error, and that the observations post-treatment lie outside of
those bounds. This is evidence that our SC methods have uncovered a true
effect.",On the Misspecification of Linear Assumptions in Synthetic Control,2023-02-24 20:48:48,"Achille Nazaret, Claudia Shi, David M. Blei","http://arxiv.org/abs/2302.12777v1, http://arxiv.org/pdf/2302.12777v1",stat.ME
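For reference, the standard synthetic control weights that the bounds and estimators above build on can be computed as a constrained least-squares problem over the simplex. The sketch below uses generic simulated pre-treatment outcomes only and does not include the additional-data correction proposed in the paper.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
T0, J = 30, 10                                  # pre-treatment periods, donor units
Y_donors = rng.standard_normal((T0, J)).cumsum(axis=0)
w_true = np.r_[0.6, 0.4, np.zeros(J - 2)]
y_treated = Y_donors @ w_true + 0.1 * rng.standard_normal(T0)

def loss(w):
    # Pre-treatment fit of the synthetic control.
    return np.sum((y_treated - Y_donors @ w) ** 2)

res = minimize(loss, x0=np.full(J, 1 / J), method="SLSQP",
               bounds=[(0, 1)] * J,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print("estimated SC weights:", res.x.round(2))
```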
30710,em,"We disentangle structural breaks in dynamic factor models by establishing a
projection based equivalent representation theorem which decomposes any break
into a rotational change and orthogonal shift. Our decomposition leads to the
natural interpretation of these changes as a change in the factor variance and
loadings respectively, which allows us to formulate two separate tests to
differentiate between these two cases, unlike the pre-existing literature at
large. We derive the asymptotic distributions of the two tests, and demonstrate
their good finite sample performance. We apply the tests to the FRED-MD dataset
focusing on the Great Moderation and Global Financial Crisis as candidate
breaks, and find evidence that the Great Moderation may be better characterised
as a break in the factor variance as opposed to a break in the loadings,
whereas the Global Financial Crisis is a break in both. Our empirical results
highlight how distinguishing between the breaks can nuance the interpretation
attributed to them by existing methods.",Disentangling Structural Breaks in High Dimensional Factor Models,2023-03-01 05:10:52,"Bonsoo Koo, Benjamin Wong, Ze-Yu Zhong","http://arxiv.org/abs/2303.00178v1, http://arxiv.org/pdf/2303.00178v1",stat.ME
30711,em,"Welfare effects of price changes are often estimated with cross-sections;
these do not identify demand with heterogeneous consumers. We develop a
theoretical method addressing this, utilizing uncompensated demand moments to
construct local approximations for compensated demand moments, robust to
unobserved preference heterogeneity. Our methodological contribution offers
robust approximations for average and distributional welfare estimates,
extending to price indices, taxable income elasticities, and general
equilibrium welfare. Our methods apply to any cross-section; we demonstrate
them via UK household budget survey data. We uncover an insight: simple
non-parametric representative agent models might be less biased than complex
parametric models accounting for heterogeneity.",Robust Hicksian Welfare Analysis under Individual Heterogeneity,2023-03-01 13:57:56,"Sebastiaan Maes, Raghav Malhotra","http://arxiv.org/abs/2303.01231v3, http://arxiv.org/pdf/2303.01231v3",econ.TH
30712,em,"On-demand service platforms face a challenging problem of forecasting a large
collection of high-frequency regional demand data streams that exhibit
instabilities. This paper develops a novel forecast framework that is fast and
scalable, and automatically assesses changing environments without human
intervention. We empirically test our framework on a large-scale demand data
set from a leading on-demand delivery platform in Europe, and find strong
performance gains from using our framework against several industry benchmarks,
across all geographical regions, loss functions, and both pre- and post-Covid
periods. We translate forecast gains to economic impacts for this on-demand
service platform by computing financial gains and reductions in computing
costs.",Fast Forecasting of Unstable Data Streams for On-Demand Service Platforms,2023-03-03 15:33:32,"Yu Jeffrey Hu, Jeroen Rombouts, Ines Wilms","http://arxiv.org/abs/2303.01887v1, http://arxiv.org/pdf/2303.01887v1",econ.EM
30713,em,"This paper considers statistical inference of time-varying network vector
autoregression models for large-scale time series. A latent group structure is
imposed on the heterogeneous and node-specific time-varying momentum and
network spillover effects so that the number of unknown time-varying
coefficients to be estimated can be reduced considerably. A classic
agglomerative clustering algorithm with normalized distance matrix estimates is
combined with a generalized information criterion to consistently estimate the
latent group number and membership. A post-grouping local linear smoothing
method is proposed to estimate the group-specific time-varying momentum and
network effects, substantially improving the convergence rates of the
preliminary estimates which ignore the latent structure. In addition, a
post-grouping specification test is conducted to verify the validity of the
parametric model assumption for group-specific time-varying coefficient
functions, and the asymptotic theory is derived for the test statistic
constructed via a kernel weighted quadratic form under the null and alternative
hypotheses. Numerical studies including Monte-Carlo simulation and an empirical
application to the global trade flow data are presented to examine the
finite-sample performance of the developed model and methodology.",Inference of Grouped Time-Varying Network Vector Autoregression Models,2023-03-17 19:57:29,"Degui Li, Bin Peng, Songqiao Tang, Weibiao Wu","http://arxiv.org/abs/2303.10117v1, http://arxiv.org/pdf/2303.10117v1",stat.ME
30714,em,"Instrumental variable (IV) strategies are widely used in political science to
establish causal relationships. However, the identifying assumptions required
by an IV design are demanding, and it remains challenging for researchers to
assess their validity. In this paper, we replicate 67 papers published in three
top journals in political science during 2010-2022 and identify several
troubling patterns. First, researchers often overestimate the strength of their
IVs due to non-i.i.d. errors, such as a clustering structure. Second, the most
commonly used t-test for the two-stage-least-squares (2SLS) estimates often
severely underestimates uncertainty. Using more robust inferential methods, we
find that around 19-30% of the 2SLS estimates in our sample are underpowered.
Third, in the majority of the replicated studies, the 2SLS estimates are much
larger than the ordinary-least-squares estimates, and their ratio is negatively
correlated with the strength of the IVs in studies where the IVs are not
experimentally generated, suggesting potential violations of unconfoundedness
or the exclusion restriction. To help researchers avoid these pitfalls, we
provide a checklist for better practice.",How Much Should We Trust Instrumental Variable Estimates in Political Science? Practical Advice Based on Over 60 Replicated Studies,2023-03-20 22:11:56,"Apoorva Lal, Mac Lockhart, Yiqing Xu, Ziwen Zu","http://arxiv.org/abs/2303.11399v3, http://arxiv.org/pdf/2303.11399v3",econ.EM
30715,em,"This paper considers estimating functional-coefficient models in panel
quantile regression with individual effects, allowing the cross-sectional and
temporal dependence for large panel observations. A latent group structure is
imposed on the heterogenous quantile regression models so that the number of
nonparametric functional coefficients to be estimated can be reduced
considerably. With the preliminary local linear quantile estimates of the
subject-specific functional coefficients, a classic agglomerative clustering
algorithm is used to estimate the unknown group structure and an
easy-to-implement ratio criterion is proposed to determine the group number.
The estimated group number and structure are shown to be consistent.
Furthermore, a post-grouping local linear smoothing method is introduced to
estimate the group-specific functional coefficients, and the relevant
asymptotic normal distribution theory is derived with a normalisation rate
comparable to that in the literature. The developed methodologies and theory
are verified through a simulation study and showcased with an application to
house price data from UK local authority districts, which reveals different
homogeneity structures at different quantile levels.",Functional-Coefficient Quantile Regression for Panel Data with Latent Group Structure,2023-03-23 15:29:40,"Xiaorong Yang, Jia Chen, Degui Li, Runze Li","http://arxiv.org/abs/2303.13218v1, http://arxiv.org/pdf/2303.13218v1",econ.EM
30716,em,"Inflation is a major determinant for allocation decisions and its forecast is
a fundamental aim of governments and central banks. However, forecasting
inflation is not a trivial task, as its prediction relies on low frequency,
highly fluctuating data with unclear explanatory variables. While classical
models show some possibility of predicting inflation, reliably beating the
random walk benchmark remains difficult. Recently, (deep) neural networks have
shown impressive results in a multitude of applications, increasingly setting
the new state-of-the-art. This paper investigates the potential of the
transformer deep neural network architecture to forecast different inflation
rates. The results are compared to a study on classical time series and machine
learning models. We show that our adapted transformer, on average, outperforms
the baseline in 6 out of 16 experiments, achieving the best scores in two of the
four inflation rates investigated. Our results demonstrate that a transformer-based
neural network can outperform classical regression and machine learning models
in certain inflation rates and forecasting horizons.",Inflation forecasting with attention based transformer neural networks,2023-03-13 16:36:16,"Maximilian Tschuchnig, Petra Tschuchnig, Cornelia Ferner, Michael Gadermayr","http://arxiv.org/abs/2303.15364v2, http://arxiv.org/pdf/2303.15364v2",econ.EM
30717,em,"Our paper discovers a new trade-off of using regression adjustments (RAs) in
causal inference under covariate-adaptive randomizations (CARs). On one hand,
RAs can improve the efficiency of causal estimators by incorporating
information from covariates that are not used in the randomization. On the
other hand, RAs can degrade estimation efficiency due to their estimation
errors, which are not asymptotically negligible when the number of regressors
is of the same order as the sample size. Ignoring the estimation errors of RAs
may result in serious over-rejection of causal inference under the null
hypothesis. To address the issue, we develop a unified inference theory for the
regression-adjusted average treatment effect (ATE) estimator under CARs. Our
theory has two key features: (1) it ensures the exact asymptotic size under the
null hypothesis, regardless of whether the number of covariates is fixed or
diverges no faster than the sample size; and (2) it guarantees weak efficiency
improvement over the ATE estimator without adjustments.",Adjustment with Many Regressors Under Covariate-Adaptive Randomizations,2023-04-17 14:50:35,"Liang Jiang, Liyao Li, Ke Miao, Yichong Zhang","http://arxiv.org/abs/2304.08184v2, http://arxiv.org/pdf/2304.08184v2",econ.EM
30718,em,"In this paper, we derive a new class of doubly robust estimators for
treatment effect estimands that is also robust against weak covariate overlap.
Our proposed estimator relies on trimming observations with extreme propensity
scores and uses a bias correction device for trimming bias. Our framework
accommodates many research designs, such as unconfoundedness, local treatment
effects, and difference-in-differences. Simulation exercises illustrate that
our proposed tools indeed have attractive finite sample properties, which are
aligned with our theoretical asymptotic results.",Doubly Robust Estimators with Weak Overlap,2023-04-18 16:13:11,"Yukun Ma, Pedro H. C. Sant'Anna, Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/2304.08974v2, http://arxiv.org/pdf/2304.08974v2",econ.EM
30719,em,"One of the most popular club football tournaments, the UEFA Champions League,
will see a fundamental reform from the 2024/25 season: the traditional group
stage will be replaced by one league where each of the 36 teams plays eight
matches. To guarantee that the opponents of the clubs are of the same strength
in the new design, it is crucial to forecast the performance of the teams
before the tournament as well as possible. This paper investigates whether the
currently used rating of the teams, the UEFA club coefficient, can be improved
by taking the games played in the national leagues into account. According to
our logistic regression models, a variant of the Elo method provides a higher
accuracy in terms of explanatory power in the Champions League matches. The
Union of European Football Associations (UEFA) is encouraged to follow the
example of the FIFA World Ranking and reform the calculation of the club
coefficients in order to avoid unbalanced schedules in the novel tournament
format of the Champions League.",Club coefficients in the UEFA Champions League: Time for shift to an Elo-based formula,2023-04-18 18:51:13,László Csató,"http://dx.doi.org/10.1080/24748668.2023.2274221, http://arxiv.org/abs/2304.09078v6, http://arxiv.org/pdf/2304.09078v6",stat.AP
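For context, a minimal implementation of the standard Elo update, of which the rating considered above is a variant. The K-factor and draw handling below are illustrative choices, not UEFA's or the paper's exact specification.
```python
def elo_update(r_home, r_away, score_home, k=32.0):
    """score_home: 1 for a home win, 0.5 for a draw, 0 for an away win."""
    expected_home = 1.0 / (1.0 + 10 ** ((r_away - r_home) / 400.0))
    delta = k * (score_home - expected_home)
    return r_home + delta, r_away - delta

# Example: a 1500-rated club beats a 1600-rated club.
print(elo_update(1500.0, 1600.0, 1.0))
```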
30720,em,"This paper proposes a new approach to identifying the effective cointegration
rank in high-dimensional unit-root (HDUR) time series from a prediction
perspective using reduced-rank regression. For a HDUR process $\mathbf{x}_t\in
\mathbb{R}^N$ and a stationary series $\mathbf{y}_t\in \mathbb{R}^p$ of
interest, our goal is to predict future values of $\mathbf{y}_t$ using
$\mathbf{x}_t$ and lagged values of $\mathbf{y}_t$. The proposed framework
consists of a two-step estimation procedure. First, the Principal Component
Analysis is used to identify all cointegrating vectors of $\mathbf{x}_t$.
Second, the co-integrated stationary series are used as regressors, together
with some lagged variables of $\mathbf{y}_t$, to predict $\mathbf{y}_t$. The
estimated reduced rank is then defined as the effective cointegration rank of
$\mathbf{x}_t$. Under the scenario that the autoregressive coefficient matrices
are sparse (or of low-rank), we apply the Least Absolute Shrinkage and
Selection Operator (or the reduced-rank techniques) to estimate the
autoregressive coefficients when the dimension involved is high. Theoretical
properties of the estimators are established under the assumptions that the
dimensions $p$ and $N$ and the sample size $T \to \infty$. Both simulated and
real examples are used to illustrate the proposed framework, and the empirical
application suggests that the proposed procedure fares well in predicting stock
returns.",Determination of the effective cointegration rank in high-dimensional time-series predictive regressions,2023-04-24 17:41:34,"Puyi Fang, Zhaoxing Gao, Ruey S. Tsay","http://arxiv.org/abs/2304.12134v2, http://arxiv.org/pdf/2304.12134v2",econ.EM
30721,em,"Generalized and Simulated Method of Moments are often used to estimate
structural economic models. Yet, it is commonly reported that optimization is
challenging because the corresponding objective function is non-convex. For
smooth problems, this paper shows that convexity is not required: under a
global rank condition involving the Jacobian of the sample moments, certain
algorithms are globally convergent. These include a gradient-descent and a
Gauss-Newton algorithm with appropriate choice of tuning parameters. The
results are robust to 1) non-convexity, 2) one-to-one non-linear
reparameterizations, and 3) moderate misspecification. In contrast,
Newton-Raphson and quasi-Newton methods can fail to converge for the same
estimation because of non-convexity. A simple example illustrates a non-convex
GMM estimation problem that satisfies the aforementioned rank condition.
Empirical applications to random coefficient demand estimation and impulse
response matching further illustrate the results.",Convexity Not Required: Estimation of Smooth Moment Condition Models,2023-04-27 20:52:41,"Jean-Jacques Forneron, Liang Zhong","http://arxiv.org/abs/2304.14386v1, http://arxiv.org/pdf/2304.14386v1",econ.EM
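To make the Gauss-Newton idea in the record above concrete, here is a minimal sketch on a simple exactly identified, smooth (but nonlinear) moment condition. The moment function, data, damping step, and stopping rule are all illustrative assumptions rather than the paper's setup.

```python
# Damped Gauss-Newton iteration on sample moments gbar(theta) with weighting W.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
z = np.column_stack([np.ones(n), x])                 # instruments
theta_true = np.array([0.5, 0.3])
y = np.exp(theta_true[0] + theta_true[1] * x) + rng.normal(scale=0.1, size=n)

def gbar(theta):                                     # sample moments (1/n) z'(y - exp(x'theta))
    resid = y - np.exp(theta[0] + theta[1] * x)
    return z.T @ resid / n

def jac(theta, eps=1e-6):                            # numerical Jacobian of gbar
    J = np.zeros((2, 2))
    for k in range(2):
        e = np.zeros(2); e[k] = eps
        J[:, k] = (gbar(theta + e) - gbar(theta - e)) / (2 * eps)
    return J

theta, W = np.zeros(2), np.eye(2)
for _ in range(50):                                  # Gauss-Newton step with damping 0.5
    g, J = gbar(theta), jac(theta)
    theta = theta - 0.5 * np.linalg.solve(J.T @ W @ J, J.T @ W @ g)
print(theta)                                         # close to theta_true in this toy problem
```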
30722,em,"Recent years have seen tremendous advances in the theory and application of
sequential experiments. While these experiments are not always designed with
hypothesis testing in mind, researchers may still be interested in performing
tests after the experiment is completed. The purpose of this paper is to aid in
the development of optimal tests for sequential experiments by analyzing their
asymptotic properties. Our key finding is that the asymptotic power function of
any test can be matched by a test in a limit experiment where a Gaussian
process is observed for each treatment, and inference is made for the drifts of
these processes. This result has important implications, including a powerful
sufficiency result: any candidate test only needs to rely on a fixed set of
statistics, regardless of the type of sequential experiment. These statistics
are the number of times each treatment has been sampled by the end of the
experiment, along with the final value of the score (for parametric models) or
efficient influence function (for non-parametric models) process for each
treatment. We then characterize asymptotically optimal tests under various
restrictions such as unbiasedness, $\alpha$-spending constraints, etc. Finally, we
apply our results to three key classes of sequential experiments: costly
sampling, group sequential trials, and bandit experiments, and show how optimal
inference can be conducted in these scenarios.",Optimal tests following sequential experiments,2023-04-30 09:09:49,Karun Adusumilli,"http://arxiv.org/abs/2305.00403v2, http://arxiv.org/pdf/2305.00403v2",econ.EM
30723,em,"We consider a panel data analysis to examine the heterogeneity in treatment
effects with respect to a pre-treatment covariate of interest in the staggered
difference-in-differences setting of Callaway and Sant'Anna (2021). Under
standard identification conditions, a doubly robust estimand conditional on the
covariate identifies the group-time conditional average treatment effect given
the covariate. Focusing on the case of a continuous covariate, we propose a
three-step estimation procedure based on nonparametric local polynomial
regressions and parametric estimation methods. Using uniformly valid
distributional approximation results for empirical processes and multiplier
bootstrapping, we develop doubly robust inference methods to construct uniform
confidence bands for the group-time conditional average treatment effect
function. The accompanying R package didhetero allows for easy implementation
of the proposed methods.",Doubly Robust Uniform Confidence Bands for Group-Time Conditional Average Treatment Effects in Difference-in-Differences,2023-05-03 18:29:22,"Shunsuke Imai, Lei Qin, Takahide Yanagi","http://arxiv.org/abs/2305.02185v2, http://arxiv.org/pdf/2305.02185v2",econ.EM
30724,em,"Statistical inferential results generally come with a measure of reliability
for decision-making purposes. For a policy implementer, the value of
implementing published policy research depends critically upon this
reliability. For a policy researcher, the value of policy implementation may
depend weakly or not at all upon the policy's outcome. Some researchers might
find it advantageous to overstate the reliability of statistical results.
Implementers may find it difficult or impossible to determine whether
researchers are overstating reliability. This information asymmetry between
researchers and implementers can lead to an adverse selection problem where, at
best, the full benefits of a policy are not realized or, at worst, a policy is
deemed too risky to implement at any scale. Researchers can remedy this by
guaranteeing the policy outcome. Researchers can overcome their own risk
aversion and wealth constraints by exchanging risks with other researchers or
offering only partial insurance. The problem and remedy are illustrated using a
confidence interval for the success probability of a binomial policy outcome.",Risk management in the use of published statistical results for policy decisions,2023-05-05 02:28:34,Duncan Ermini Leaf,"http://arxiv.org/abs/2305.03205v1, http://arxiv.org/pdf/2305.03205v1",stat.OT
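As a concrete companion to the binomial illustration mentioned in the record above, the following is a minimal sketch of a Wilson confidence interval for a binomial success probability. The counts are made-up illustration values, and the Wilson interval is only one of several standard choices.

```python
# Wilson score interval for a binomial success probability (z = 1.96 for ~95%).
import math

def wilson_ci(successes, trials, z=1.96):
    phat = successes / trials
    denom = 1 + z**2 / trials
    center = (phat + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(phat * (1 - phat) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

print(wilson_ci(42, 60))   # e.g. 42 successful outcomes out of 60 hypothetical pilot sites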
30725,em,"Objective To assess the price elasticity of branded imatinib in chronic
myeloid leukemia (CML) patients on Medicare Part D to determine if high
out-of-pocket payments (OOP) are driving the substantial levels of
non-adherence observed in this population.
  Data sources and study setting We use data from the TriNetX Diamond Network
(TDN) United States database for the period from first availability in 2011
through the end of patent exclusivity following the introduction of generic
imatinib in early 2016.
  Study design We implement a fuzzy regression discontinuity design to
separately estimate the effect of Medicare Part D enrollment at age 65 on
adherence and OOP in newly-diagnosed CML patients initiating branded imatinib.
The corresponding price elasticity of demand (PED) is estimated and results are
assessed across a variety of specifications and robustness checks.
  Data collection/extraction methods Data from eligible patients following the
application of inclusion and exclusion criteria were analyzed.
  Principal findings Our analysis suggests that there is a significant increase
in initial OOP of $232 (95% Confidence interval (CI): $102 to $362) for
individuals that enrolled in Part D due to expanded eligibility at age 65. The
relatively smaller and non-significant decrease in adherence of only 6
percentage points (95% CI: -0.21 to 0.08) led to a PED of -0.02 (95% CI:
-0.056, 0.015).
  Conclusion This study provides evidence regarding the financial impact of
coinsurance-based benefit designs on Medicare-age patients with CML initiating
branded imatinib. Results indicate that factors besides high OOP are driving
the substantial non-adherence observed in this population and add to the
growing literature on PED for specialty drugs.",The price elasticity of Gleevec in patients with Chronic Myeloid Leukemia enrolled in Medicare Part D: Evidence from a regression discontinuity design,2023-05-10 14:56:09,"Samantha E. Clark, Ruth Etzioni, Jerry Radich, Zachary Marcum, Anirban Basu","http://arxiv.org/abs/2305.06076v1, http://arxiv.org/pdf/2305.06076v1",econ.EM
30727,em,"The classic newsvendor model yields an optimal decision for a ""newsvendor""
selecting a quantity of inventory, under the assumption that the demand is
drawn from a known distribution. Motivated by applications such as cloud
provisioning and staffing, we consider a setting in which newsvendor-type
decisions must be made sequentially, in the face of demand drawn from a
stochastic process that is both unknown and nonstationary. All prior work on
this problem either (a) assumes that the level of nonstationarity is known, or
(b) imposes additional statistical assumptions that enable accurate predictions
of the unknown demand.
  We study the Nonstationary Newsvendor, with and without predictions. We
first, in the setting without predictions, design a policy which we prove (via
matching upper and lower bounds) achieves order-optimal regret -- ours is the
first policy to accomplish this without being given the level of
nonstationarity of the underlying demand. We then, for the first time,
introduce a model for generic (i.e. with no statistical assumptions)
predictions with arbitrary accuracy, and propose a policy that incorporates
these predictions without being given their accuracy. We upper bound the regret
of this policy, and show that it matches the best achievable regret had the
accuracy of the predictions been known. Finally, we empirically validate our
new policy with experiments based on two real-world datasets containing
thousands of time-series, showing that it succeeds in closing approximately 74%
of the gap between the best approaches based on nonstationarity and predictions
alone.",The Nonstationary Newsvendor with (and without) Predictions,2023-05-13 23:14:50,"Lin An, Andrew A. Li, Benjamin Moseley, R. Ravi","http://arxiv.org/abs/2305.07993v2, http://arxiv.org/pdf/2305.07993v2",math.OC
30728,em,"We propose a multicountry quantile factor augmented vector autoregression
(QFAVAR) to model heterogeneities both across countries and across
characteristics of the distributions of macroeconomic time series. The presence
of quantile factors allows for summarizing these two heterogeneities in a
parsimonious way. We develop two algorithms for posterior inference that
feature varying levels of trade-off between estimation precision and
computational speed. Using monthly data for the euro area, we establish the
good empirical properties of the QFAVAR as a tool for assessing the effects of
global shocks on country-level macroeconomic risks. In particular, QFAVAR
short-run tail forecasts are more accurate compared to a FAVAR with symmetric
Gaussian errors, as well as univariate quantile autoregressions that ignore
comovements among quantiles of macroeconomic variables. We also illustrate how
quantile impulse response functions and quantile connectedness measures,
resulting from the new model, can be used to implement joint risk scenario
analysis.",Monitoring multicountry macroeconomic risk,2023-05-16 18:59:07,"Dimitris Korobilis, Maximilian Schröder","http://arxiv.org/abs/2305.09563v1, http://arxiv.org/pdf/2305.09563v1",econ.EM
30729,em,"Experiments on online marketplaces and social networks suffer from
interference, where the outcome of a unit is impacted by the treatment status
of other units. We propose a framework for modeling interference using a
ubiquitous deployment mechanism for experiments, staggered roll-out designs,
which slowly increase the fraction of units exposed to the treatment to
mitigate any unanticipated adverse side effects. Our main idea is to leverage
the temporal variations in treatment assignments introduced by roll-outs to
model the interference structure. Since there are often multiple competing
models of interference in practice we first develop a model selection method
that evaluates models based on their ability to explain outcome variation
observed along the roll-out. Through simulations, we show that our heuristic
model selection method, Leave-One-Period-Out, outperforms other baselines.
Next, we present a set of model identification conditions under which the
estimation of common estimands is possible and show how these conditions are
aided by roll-out designs. We conclude with a set of considerations, robustness
checks, and potential limitations for practitioners wishing to use our
framework.",Modeling Interference Using Experiment Roll-out,2023-05-18 08:57:37,"Ariel Boyarsky, Hongseok Namkoong, Jean Pouget-Abadie","http://dx.doi.org/10.1145/3580507.3597757, http://arxiv.org/abs/2305.10728v2, http://arxiv.org/pdf/2305.10728v2",stat.ME
30730,em,"This paper considers the specification of covariance structures with tail
estimates. We focus on two aspects: (i) the estimation of the VaR-CoVaR risk
matrix in the case of larger number of time series observations than assets in
a portfolio using quantile predictive regression models without assuming the
presence of nonstationary regressors; and (ii) the construction of a novel
variable selection algorithm, the so-called Feature Ordering by Centrality
Exclusion (FOCE), which is based on an assumption-lean regression framework,
has no tuning parameters and is proved to be consistent under general sparsity
assumptions. We illustrate the usefulness of our proposed methodology with
numerical studies of real and simulated datasets when modelling systemic risk
in a network.",Statistical Estimation for Covariance Structures with Tail Estimates using Nodewise Quantile Predictive Regression Models,2023-05-18 22:57:23,Christis Katsouris,"http://arxiv.org/abs/2305.11282v2, http://arxiv.org/pdf/2305.11282v2",econ.EM
30731,em,"This article develops a random effects quantile regression model for panel
data that allows for increased distributional flexibility, multivariate
heterogeneity, and time-invariant covariates in situations where mean
regression may be unsuitable. Our approach is Bayesian and builds upon the
generalized asymmetric Laplace distribution to decouple the modeling of
skewness from the quantile parameter. We derive an efficient simulation-based
estimation algorithm, demonstrate its properties and performance in targeted
simulation studies, and employ it in the computation of marginal likelihoods to
enable formal Bayesian model comparisons. The methodology is applied in a study
of U.S. residential rental rates following the Global Financial Crisis. Our
empirical results provide interesting insights on the interaction between rents
and economic, demographic and policy variables, weigh in on key modeling
features, and overwhelmingly support the additional flexibility at nearly all
quantiles and across several sub-samples. The practical differences that arise
as a result of allowing for flexible modeling can be nontrivial, especially for
quantiles away from the median.",Flexible Bayesian Quantile Analysis of Residential Rental Rates,2023-05-23 07:49:12,"Ivan Jeliazkov, Shubham Karnawat, Mohammad Arshad Rahman, Angela Vossmeyer","http://arxiv.org/abs/2305.13687v2, http://arxiv.org/pdf/2305.13687v2",econ.EM
30772,em,"Systematically biased forecasts are typically interpreted as evidence of
forecasters' irrationality and/or asymmetric loss. In this paper we propose an
alternative explanation: when forecasts inform economic policy decisions, and
the resulting actions affect the realization of the forecast target itself,
forecasts may be optimally biased even under quadratic loss. The result arises
in environments in which the forecaster is uncertain about the decision maker's
reaction to the forecast, which is presumably the case in most applications. We
illustrate the empirical relevance of our theory by reviewing some stylized
properties of Green Book inflation forecasts and relating them to the
predictions from our model. Our results point out that the presence of policy
feedback poses a challenge to traditional tests of forecast rationality.",Forecasting with Feedback,2023-08-29 09:50:13,"Robert P. Lieli, Augusto Nieto-Barthaburu","http://arxiv.org/abs/2308.15062v1, http://arxiv.org/pdf/2308.15062v1",econ.TH
30732,em,"Empirical research typically involves a robustness-efficiency tradeoff. A
researcher seeking to estimate a scalar parameter can invoke strong assumptions
to motivate a restricted estimator that is precise but may be heavily biased,
or they can relax some of these assumptions to motivate a more robust, but
variable, unrestricted estimator. When a bound on the bias of the restricted
estimator is available, it is optimal to shrink the unrestricted estimator
towards the restricted estimator. For settings where a bound on the bias of the
restricted estimator is unknown, we propose adaptive shrinkage estimators that
minimize the percentage increase in worst case risk relative to an oracle that
knows the bound. We show that adaptive estimators solve a weighted convex
minimax problem and provide lookup tables facilitating their rapid computation.
Revisiting five empirical studies where questions of model specification arise,
we examine the advantages of adapting to -- rather than testing for --
misspecification.",Adapting to Misspecification,2023-05-23 20:16:09,"Timothy B. Armstrong, Patrick Kline, Liyang Sun","http://arxiv.org/abs/2305.14265v3, http://arxiv.org/pdf/2305.14265v3",econ.EM
30733,em,"The shocks which hit macroeconomic models such as Vector Autoregressions
(VARs) have the potential to be non-Gaussian, exhibiting asymmetries and fat
tails. This consideration motivates the VAR developed in this paper which uses
a Dirichlet process mixture (DPM) to model the shocks. However, we do not
follow the obvious strategy of simply modeling the VAR errors with a DPM since
this would lead to computationally infeasible Bayesian inference in larger VARs
and potentially a sensitivity to the way the variables are ordered in the VAR.
Instead we develop a particular additive error structure inspired by Bayesian
nonparametric treatments of random effects in panel data models. We show that
this leads to a model which allows for computationally fast and order-invariant
inference in large VARs with nonparametric shocks. Our empirical results with
nonparametric VARs of various dimensions shows that nonparametric treatment of
the VAR errors is particularly useful in periods such as the financial crisis
and the pandemic.",Fast and Order-invariant Inference in Bayesian VARs with Non-Parametric Shocks,2023-05-26 14:11:23,"Florian Huber, Gary Koop","http://arxiv.org/abs/2305.16827v1, http://arxiv.org/pdf/2305.16827v1",econ.EM
30734,em,"We study the effect of using high-resolution elevation data on the selection
of the most fuel-efficient (greenest) path for different trucks in various
urban environments. We adapt a variant of the Comprehensive Modal Emission
Model (CMEM) to show that the optimal speed and the greenest path are slope
dependent (dynamic). When there are no elevation changes in a road network, the
most fuel-efficient path is the shortest path with a constant (static) optimal
speed throughout. However, if the network is not flat, then the shortest path
is not necessarily the greenest path, and the optimal driving speed is dynamic.
We prove that the greenest path converges to an asymptotic greenest path as the
payload approaches infinity and that this limiting path is attained for a
finite load. In a set of extensive numerical experiments, we benchmark the CO2
emissions reduction of our dynamic speed and the greenest path policies against
policies that ignore elevation data. We use the geo-spatial data of 25 major
cities across 6 continents, such as Los Angeles, Mexico City, Johannesburg,
Athens, Ankara, and Canberra. Our results show that, on average, traversing the
greenest path with a dynamic optimal speed policy can reduce the CO2 emissions
by 1.19% to 10.15% depending on the city and truck type for a moderate payload.
They also demonstrate that the average CO2 reduction of the optimal dynamic
speed policy is between 2% to 4% for most of the cities, regardless of the
truck type. We confirm that disregarding elevation data yields sub-optimal
paths that are significantly less CO2 efficient than the greenest paths.",Load Asymptotics and Dynamic Speed Optimization for the Greenest Path Problem: A Comprehensive Analysis,2023-06-02 20:02:25,"Poulad Moradi, Joachim Arts, Josué Velázquez-Martínez","http://arxiv.org/abs/2306.01687v1, http://arxiv.org/pdf/2306.01687v1",math.OC
30735,em,"School choice mechanism designers use discrete choice models to understand
and predict families' preferences. The most widely-used choice model, the
multinomial logit (MNL), is linear in school and/or household attributes. While
the model is simple and interpretable, it assumes the ranked preference lists
arise from a choice process that is uniform throughout the ranking, from top to
bottom. In this work, we introduce two strategies for rank-heterogeneous choice
modeling tailored for school choice. First, we adapt a context-dependent random
utility model (CDM), considering down-rank choices as occurring in the context
of earlier up-rank choices. Second, we consider stratifying the choice modeling
by rank, regularizing rank-adjacent models towards one another when
appropriate. Using data on household preferences from the San Francisco Unified
School District (SFUSD) across multiple years, we show that the contextual
models considerably improve our out-of-sample evaluation metrics across all
rank positions over the non-contextual models in the literature. Meanwhile,
stratifying the model by rank can yield more accurate first-choice predictions
while down-rank predictions are relatively unimproved. These models provide
performance upgrades that school choice researchers can adopt to improve
predictions and counterfactual analyses.",Rank-heterogeneous Preference Models for School Choice,2023-06-01 19:31:29,"Amel Awadelkarim, Arjun Seshadri, Itai Ashlagi, Irene Lo, Johan Ugander","http://dx.doi.org/10.1145/3580305.3599484, http://arxiv.org/abs/2306.01801v1, http://arxiv.org/pdf/2306.01801v1",stat.AP
30736,em,"Estimating weights in the synthetic control method, typically resulting in
sparse weights where only a few control units have non-zero weights, involves
an optimization procedure that simultaneously selects and aligns control units
to closely match the treated unit. However, this simultaneous selection and
alignment of control units may lead to a loss of efficiency. Another concern
arising from the aforementioned procedure is its susceptibility to
under-fitting due to imperfect pre-treatment fit. It is not uncommon for the
linear combination, using nonnegative weights, of pre-treatment period outcomes
for the control units to inadequately approximate the pre-treatment outcomes
for the treated unit. To address both of these issues, this paper proposes a
simple and effective method called Synthetic Regressing Control (SRC). The SRC
method begins by performing the univariate linear regression to appropriately
align the pre-treatment periods of the control units with the treated unit.
Subsequently, a SRC estimator is obtained by synthesizing (taking a weighted
average) the fitted controls. To determine the weights in the synthesis
procedure, we propose an approach based on an unbiased risk estimator
criterion. Theoretically, we show that this synthesis step is asymptotically
optimal in the sense of achieving the lowest possible squared error. Extensive
numerical experiments highlight the advantages of the SRC method.",Synthetic Regressing Control Method,2023-06-05 07:23:54,Rong J. B. Zhu,"http://arxiv.org/abs/2306.02584v2, http://arxiv.org/pdf/2306.02584v2",econ.EM
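A minimal sketch of the SRC idea from the record above on toy data: each control unit is first aligned with the treated unit by a univariate pre-treatment regression, and the fitted controls are then averaged. For illustration the weights simply minimize pre-treatment squared error on a simplex; the paper instead selects weights with an unbiased-risk criterion, so this is a stand-in, not the proposed estimator.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
T0, T1, J = 30, 10, 8
controls = rng.normal(size=(T0 + T1, J)).cumsum(axis=0)
treated = controls[:, :3].mean(axis=1) + rng.normal(scale=0.2, size=T0 + T1)

fitted = np.zeros((T0 + T1, J))
for j in range(J):                                   # step 1: align each control unit
    xj = np.column_stack([np.ones(T0), controls[:T0, j]])
    b = np.linalg.lstsq(xj, treated[:T0], rcond=None)[0]
    fitted[:, j] = b[0] + b[1] * controls[:, j]

w, _ = nnls(fitted[:T0], treated[:T0])               # step 2: nonnegative synthesis weights
w = w / w.sum()
synthetic = fitted @ w
effect = treated[T0:] - synthetic[T0:]               # post-period gap estimate
print(effect.mean())
```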
30738,em,"The US stock market experienced instability following the recession
(2007-2009). COVID-19 poses a significant challenge to US stock traders and
investors, who need to keep up with the market to mitigate risks and improve
profits by using forecasting models that account for the effects of the
pandemic. With consideration of the COVID-19 pandemic
after the recession, two machine learning models, including Random Forest and
LSTM are used to forecast two major US stock market indices. Data on historical
prices after the big recession is used for developing machine learning models
and forecasting index returns. To evaluate the model performance during
training, cross-validation is used. Additionally, hyperparameter optimization,
regularization (such as dropout and weight decay), and preprocessing improve
the performance of the machine learning techniques. Using high-accuracy machine
learning techniques, traders and investors can forecast stock market behavior,
stay ahead of their competition, and improve profitability. Keywords: COVID-19,
LSTM, S&P500, Random Forest, Russell 2000, Forecasting, Machine Learning, Time
Series JEL Code: C6, C8, G4.",Forecasting the Performance of US Stock Market Indices During COVID-19: RF vs LSTM,2023-06-06 15:15:45,"Reza Nematirad, Amin Ahmadisharaf, Ali Lashgari","http://arxiv.org/abs/2306.03620v1, http://arxiv.org/pdf/2306.03620v1",econ.EM
30739,em,"In many choice modeling applications, people demand is frequently
characterized as multiple discrete, which means that people choose multiple
items simultaneously. The analysis and prediction of people behavior in
multiple discrete choice situations pose several challenges. In this paper, to
address this, we propose a random utility maximization (RUM) based model that
considers each subset of choice alternatives as a composite alternative, where
individuals choose a subset according to the RUM framework. While this approach
offers a natural and intuitive modeling approach for multiple-choice analysis,
the large number of subsets of choices in the formulation makes its estimation
and application intractable. To overcome this challenge, we introduce directed
acyclic graph (DAG) based representations of choices where each node of the DAG
is associated with an elemental alternative and additional information such
as the number of selected elemental alternatives. Our innovation is to show
that the multi-choice model is equivalent to a recursive route choice model on
the DAG, leading to the development of new efficient estimation algorithms
based on dynamic programming. In addition, the DAG representations enable us to
bring some advanced route choice models to capture the correlation between
subset choice alternatives. Numerical experiments based on synthetic and real
datasets show many advantages of our modeling approach and the proposed
estimation algorithms.",Network-based Representations and Dynamic Discrete Choice Models for Multiple Discrete Choice Analysis,2023-06-07 20:16:41,"Hung Tran, Tien Mai","http://arxiv.org/abs/2306.04606v1, http://arxiv.org/pdf/2306.04606v1",econ.EM
30740,em,"Matrix-variate time series data are largely available in applications.
However, no attempt has been made to study their conditional heteroskedasticity
that is often observed in economic and financial data. To address this gap, we
propose a novel matrix generalized autoregressive conditional
heteroskedasticity (GARCH) model to capture the dynamics of conditional row and
column covariance matrices of matrix time series. The key innovation of the
matrix GARCH model is the use of a univariate GARCH specification for the trace
of conditional row or column covariance matrix, which allows for the
identification of conditional row and column covariance matrices. Moreover, we
introduce a quasi maximum likelihood estimator (QMLE) for model estimation and
develop a portmanteau test for model diagnostic checking. Simulation studies
are conducted to assess the finite-sample performance of the QMLE and
portmanteau test. To handle large dimensional matrix time series, we also
propose a matrix factor GARCH model. Finally, we demonstrate the superiority of
the matrix GARCH and matrix factor GARCH models over existing multivariate
GARCH-type models in volatility forecasting and portfolio allocations using
three applications on credit default swap prices, global stock sector indices,
and futures prices.",Matrix GARCH Model: Inference and Application,2023-06-08 16:06:13,"Cheng Yu, Dong Li, Feiyu Jiang, Ke Zhu","http://arxiv.org/abs/2306.05169v1, http://arxiv.org/pdf/2306.05169v1",stat.ME
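A toy filter illustrating the key identification device described in the record above: a univariate GARCH(1,1) recursion applied to the trace of the conditional row covariance, with tr(Y_t Y_t') as its realized proxy. The parameter values and the data-generating process are made up, and the paper's full matrix GARCH recursion and QMLE are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
T, m, n = 300, 4, 5
Y = rng.normal(size=(T, m, n))                       # toy matrix-valued time series

omega, alpha, beta = 0.1, 0.1, 0.8                   # assumed GARCH(1,1) parameters
proxy = np.array([np.trace(Yt @ Yt.T) for Yt in Y])  # realized trace proxy
h = np.empty(T)
h[0] = proxy.mean()
for t in range(1, T):                                # trace-level GARCH(1,1) recursion
    h[t] = omega + alpha * proxy[t - 1] + beta * h[t - 1]
print(h[-5:])
```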
30741,em,"We develop a practical way of addressing the Errors-In-Variables (EIV)
problem in the Generalized Method of Moments (GMM) framework. We focus on the
settings in which the variability of the EIV is a fraction of that of the
mismeasured variables, which is typical for empirical applications. For any
initial set of moment conditions our approach provides a corrected set of
moment conditions that are robust to the EIV. We show that the GMM estimator
based on these moments is root-n-consistent, with the standard tests and
confidence intervals providing valid inference. This is true even when the EIV
are so large that naive estimators (that ignore the EIV problem) may be heavily
biased with the confidence intervals having 0% coverage. Our approach involves
no nonparametric estimation, which is particularly important for applications
with multiple covariates, and settings with multivariate, serially correlated,
or non-classical EIV.",Simple Estimation of Semiparametric Models with Measurement Errors,2023-06-25 21:52:29,"Kirill S. Evdokimov, Andrei Zeleneev","http://arxiv.org/abs/2306.14311v1, http://arxiv.org/pdf/2306.14311v1",econ.EM
30742,em,"When evaluating partial effects, it is important to distinguish between
structural endogeneity and measurement errors. In contrast to linear models,
these two sources of endogeneity affect partial effects differently in
nonlinear models. We study this issue focusing on the Instrumental Variable
(IV) Probit and Tobit models. We show that even when a valid IV is available,
failing to differentiate between the two types of endogeneity can lead to
either under- or over-estimation of the partial effects. We develop simple
estimators of the bounds on the partial effects and provide easy to implement
confidence intervals that correctly account for both types of endogeneity. We
illustrate the methods in a Monte Carlo simulation and an empirical
application.",Marginal Effects for Probit and Tobit with Endogeneity,2023-06-26 20:19:45,"Kirill S. Evdokimov, Ilze Kalnina, Andrei Zeleneev","http://arxiv.org/abs/2306.14862v1, http://arxiv.org/pdf/2306.14862v1",econ.EM
30748,em,"Arctic sea ice has steadily diminished as atmospheric greenhouse gas
concentrations have increased. Using observed data from 1979 to 2019, we
estimate a close contemporaneous linear relationship between Arctic sea ice
area and cumulative carbon dioxide emissions. For comparison, we provide
analogous regression estimates using simulated data from global climate models
(drawn from the CMIP5 and CMIP6 model comparison exercises). The carbon
sensitivity of Arctic sea ice area is considerably stronger in the observed
data than in the climate models. Thus, for a given future emissions path, an
ice-free Arctic is likely to occur much earlier than the climate models
project. Furthermore, little progress has been made in recent global climate
modeling (from CMIP5 to CMIP6) to more accurately match the observed
carbon-climate response of Arctic sea ice.",Climate Models Underestimate the Sensitivity of Arctic Sea Ice to Carbon Emissions,2023-07-07 15:36:23,"Francis X. Diebold, Glenn D. Rudebusch","http://arxiv.org/abs/2307.03552v1, http://arxiv.org/pdf/2307.03552v1",econ.EM
30743,em,"Social disruption occurs when a policy creates or destroys many network
connections between agents. It is a costly side effect of many interventions
and so a growing empirical literature recommends measuring and accounting for
social disruption when evaluating the welfare impact of a policy. However,
there is currently little work characterizing what can actually be learned
about social disruption from data in practice. In this paper, we consider the
problem of identifying social disruption in a research design that is popular
in the literature. We provide two sets of identification results. First, we
show that social disruption is not generally point identified, but informative
bounds can be constructed using the eigenvalues of the network adjacency
matrices observed by the researcher. Second, we show that point identification
follows from a theoretically motivated monotonicity condition, and we derive a
closed form representation. We apply our methods in two empirical illustrations
and find large policy effects that otherwise might be missed by alternatives in
the literature.",Identifying Socially Disruptive Policies,2023-06-26 21:31:43,"Eric Auerbach, Yong Cai","http://arxiv.org/abs/2306.15000v2, http://arxiv.org/pdf/2306.15000v2",econ.EM
30744,em,"We propose a causal framework for decomposing a group disparity in an outcome
in terms of an intermediate treatment variable. Our framework captures the
contributions of group differences in baseline potential outcome, treatment
prevalence, average treatment effect, and selection into treatment. This
framework is counterfactually formulated and readily informs policy
interventions. The decomposition component for differential selection into
treatment is particularly novel, revealing a new mechanism for explaining and
ameliorating disparities. This framework reformulates the classic
Kitagawa-Blinder-Oaxaca decomposition in causal terms, supplements causal
mediation analysis by explaining group disparities instead of group effects,
and resolves conceptual difficulties of recent random equalization
decompositions. We also provide a conditional decomposition that allows
researchers to incorporate covariates in defining the estimands and
corresponding interventions. We develop nonparametric estimators based on
efficient influence functions of the decompositions. We show that, under mild
conditions, these estimators are $\sqrt{n}$-consistent, asymptotically normal,
semiparametrically efficient, and doubly robust. We apply our framework to
study the causal role of education in intergenerational income persistence. We
find that both differential prevalence of and differential selection into
college graduation significantly contribute to the disparity in income
attainment between income origin groups.",Nonparametric Causal Decomposition of Group Disparities,2023-06-29 02:01:44,"Ang Yu, Felix Elwert","http://arxiv.org/abs/2306.16591v1, http://arxiv.org/pdf/2306.16591v1",stat.ME
30745,em,"We suggest double/debiased machine learning estimators of direct and indirect
quantile treatment effects under a selection-on-observables assumption. This
permits disentangling the causal effect of a binary treatment at a specific
outcome rank into an indirect component that operates through an intermediate
variable called mediator and an (unmediated) direct impact. The proposed method
is based on the efficient score functions of the cumulative distribution
functions of potential outcomes, which are robust to certain misspecifications
of the nuisance parameters, i.e., the outcome, treatment, and mediator models.
We estimate these nuisance parameters by machine learning and use cross-fitting
to reduce overfitting bias in the estimation of direct and indirect quantile
treatment effects. We establish uniform consistency and asymptotic normality of
our effect estimators. We also propose a multiplier bootstrap for statistical
inference and show the validity of the multiplier bootstrap. Finally, we
investigate the finite sample performance of our method in a simulation study
and apply it to empirical data from the National Job Corp Study to assess the
direct and indirect earnings effects of training.",Doubly Robust Estimation of Direct and Indirect Quantile Treatment Effects with Machine Learning,2023-07-03 17:27:15,"Yu-Chin Hsu, Martin Huber, Yu-Min Yen","http://arxiv.org/abs/2307.01049v1, http://arxiv.org/pdf/2307.01049v1",econ.EM
30746,em,"In this paper, we consider estimating spot/instantaneous volatility matrices
of high-frequency data collected for a large number of assets. We first combine
classic nonparametric kernel-based smoothing with a generalised shrinkage
technique in the matrix estimation for noise-free data under a uniform sparsity
assumption, a natural extension of the approximate sparsity commonly used in
the literature. The uniform consistency property is derived for the proposed
spot volatility matrix estimator with convergence rates comparable to the
optimal minimax one. For the high-frequency data contaminated by microstructure
noise, we introduce a localised pre-averaging estimation method that reduces
the effective magnitude of the noise. We then use the estimation tool developed
in the noise-free scenario, and derive the uniform convergence rates for the
developed spot volatility matrix estimator. We further combine the kernel
smoothing with the shrinkage technique to estimate the time-varying volatility
matrix of the high-dimensional noise vector. In addition, we consider large
spot volatility matrix estimation in time-varying factor models with observable
risk factors and derive the uniform convergence property. We provide numerical
studies including simulation and empirical application to examine the
performance of the proposed estimation methods in finite samples.",Nonparametric Estimation of Large Spot Volatility Matrices for High-Frequency Financial Data,2023-07-03 23:43:48,"Ruijun Bu, Degui Li, Oliver Linton, Hanchao Wang","http://arxiv.org/abs/2307.01348v1, http://arxiv.org/pdf/2307.01348v1",econ.EM
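A stylized sketch of the kernel-smoothing-plus-shrinkage step from the record above, for a spot covariance matrix at a single time point using noise-free toy returns. The bandwidth, kernel, and threshold are illustrative choices, and the scaling, pre-averaging step, and factor-model extension in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_assets = 2000, 30
times = np.linspace(0.0, 1.0, n_obs)
returns = rng.normal(scale=0.01, size=(n_obs, n_assets))   # toy high-frequency returns

def spot_cov(t0, bandwidth=0.05, threshold=2e-5):
    w = np.exp(-0.5 * ((times - t0) / bandwidth) ** 2)      # Gaussian kernel weights
    w /= w.sum()
    sigma = (returns * w[:, None]).T @ returns              # kernel-weighted average of outer products
    off = sigma - np.diag(np.diag(sigma))
    off = np.sign(off) * np.maximum(np.abs(off) - threshold, 0.0)  # soft-threshold off-diagonals
    return np.diag(np.diag(sigma)) + off

print(np.linalg.eigvalsh(spot_cov(0.5))[:3])
```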
30747,em,"Financial order flow exhibits a remarkable level of persistence, wherein buy
(sell) trades are often followed by subsequent buy (sell) trades over extended
periods. This persistence can be attributed to the division and gradual
execution of large orders. Consequently, distinct order flow regimes might
emerge, which can be identified through suitable time series models applied to
market data. In this paper, we propose the use of Bayesian online change-point
detection (BOCPD) methods to identify regime shifts in real-time and enable
online predictions of order flow and market impact. To enhance the
effectiveness of our approach, we have developed a novel BOCPD method using a
score-driven approach. This method accommodates temporal correlations and
time-varying parameters within each regime. Through empirical application to
NASDAQ data, we have found that: (i) Our newly proposed model demonstrates
superior out-of-sample predictive performance compared to existing models that
assume i.i.d. behavior within each regime; (ii) When examining the residuals,
our model demonstrates good specification in terms of both distributional
assumptions and temporal correlations; (iii) Within a given regime, the price
dynamics exhibit a concave relationship with respect to time and volume,
mirroring the characteristics of actual large orders; (iv) By incorporating
regime information, our model produces more accurate online predictions of
order flow and market impact compared to models that do not consider regimes.",Online Learning of Order Flow and Market Impact with Bayesian Change-Point Detection Methods,2023-07-05 18:42:06,"Ioanna-Yvonni Tsaknaki, Fabrizio Lillo, Piero Mazzarisi","http://arxiv.org/abs/2307.02375v1, http://arxiv.org/pdf/2307.02375v1",q-fin.TR
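For readers unfamiliar with BOCPD, the following is a compact sketch in the spirit of Adams and MacKay (2007): a constant hazard and a Gaussian observation model with known variance and a conjugate Normal prior on the mean. This is generic BOCPD on simulated data, not the score-driven variant with temporal correlation proposed in the record above; all settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(0, 1, 150), rng.normal(2.5, 1, 150)])  # one true break

hazard = 1.0 / 100.0
mu0, kappa0, sigma = 0.0, 1.0, 1.0                    # prior mean, prior pseudo-count, known sd
R = np.array([1.0])                                   # run-length posterior
mu, kappa = np.array([mu0]), np.array([kappa0])
map_run = []

for t, x in enumerate(data):
    pred_sd = np.sqrt(sigma**2 + sigma**2 / kappa)    # posterior predictive sd per run length
    pred = norm.pdf(x, loc=mu, scale=pred_sd)
    growth = R * pred * (1 - hazard)                  # run continues
    cp = (R * pred * hazard).sum()                    # a change point occurs
    R = np.append(cp, growth)
    R /= R.sum()
    mu = np.append(mu0, (kappa * mu + x) / (kappa + 1))   # conjugate updates, fresh prior first
    kappa = np.append(kappa0, kappa + 1)
    map_run.append(np.argmax(R))

print(map_run[145:160])                               # run length collapses near the break
```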
30749,em,"The covariance of two random variables measures the average joint deviations
from their respective means. We generalise this well-known measure by replacing
the means with other statistical functionals such as quantiles, expectiles, or
thresholds. Deviations from these functionals are defined via generalised
errors, often induced by identification or moment functions. As a normalised
measure of dependence, a generalised correlation is constructed. Replacing the
common Cauchy-Schwarz normalisation by a novel Fr\'echet-Hoeffding
normalisation, we obtain attainability of the entire interval $[-1, 1]$ for any
given marginals. We uncover favourable properties of these new dependence
measures. The families of quantile and threshold correlations give rise to
function-valued distributional correlations, exhibiting the entire dependence
structure. They lead to tail correlations, which should arguably supersede the
coefficients of tail dependence. Finally, we construct summary covariances
(correlations), which arise as (normalised) weighted averages of distributional
covariances. We retrieve Pearson covariance and Spearman correlation as special
cases. The applicability and usefulness of our new dependence measures is
illustrated on demographic data from the Panel Study of Income Dynamics.",Generalised Covariances and Correlations,2023-07-07 16:38:04,"Tobias Fissler, Marc-Oliver Pohle","http://arxiv.org/abs/2307.03594v2, http://arxiv.org/pdf/2307.03594v2",stat.ME
30750,em,"We undertake a systematic study of historic market volatility spanning
roughly five preceding decades. We focus specifically on the time series of
realized volatility (RV) of the S&P500 index and its distribution function. As
expected, the largest values of RV coincide with the largest economic upheavals
of the period: Savings and Loan Crisis, Tech Bubble, Financial Crisis and Covid
Pandemic. We address the question of whether these values belong to one of the
three categories: Black Swans (BS), that is they lie on scale-free, power-law
tails of the distribution; Dragon Kings (DK), defined as statistically
significant upward deviations from BS; or Negative Dragon Kings (nDK), defined
as statistically significant downward deviations from BS. In analyzing the
tails of the distribution with RV > 40, we observe the appearance of
""potential"" DK which eventually terminate in an abrupt plunge to nDK. This
phenomenon becomes more pronounced with the increase of the number of days over
which the average RV is calculated -- here from daily, n=1, to ""monthly,"" n=21.
We fit the entire distribution with a modified Generalized Beta (mGB)
distribution function, which terminates at a finite value of the variable but
exhibits a long power-law stretch prior to that, as well as Generalized Beta
Prime (GB2) distribution function, which has a power-law tail. We also fit the
tails directly with a straight line on a log-log scale. In order to ascertain
BS, DK or nDK behavior, all fits include their confidence intervals and
p-values are evaluated for the data points to check if they can come from the
respective distributions.",Are there Dragon Kings in the Stock Market?,2023-07-07 18:59:45,"Jiong Liu, M. Dashti Moghaddam, R. A. Serota","http://arxiv.org/abs/2307.03693v1, http://arxiv.org/pdf/2307.03693v1",q-fin.ST
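A small sketch of the direct tail fit mentioned in the record above: a straight line on a log-log plot of the empirical survival function for large realized-volatility values. The synthetic Pareto-type data and the cutoff at 40 are used purely for illustration; the mGB and GB2 fits and the DK/nDK significance tests are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
rv = rng.pareto(3.0, size=20000) * 10 + 5             # toy heavy-tailed "realized volatility"

tail = np.sort(rv[rv > 40])
surv = np.arange(len(tail), 0, -1) / len(rv)          # empirical P(RV >= x) over the tail
slope, intercept = np.polyfit(np.log(tail), np.log(surv), 1)
print(slope)                                          # rough power-law tail exponent estimate
```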
30751,em,"Choice Modeling is at the core of many economics, operations, and marketing
problems. In this paper, we propose a fundamental characterization of choice
functions that encompasses a wide variety of extant choice models. We
demonstrate how nonparametric estimators like neural nets can easily
approximate such functionals and overcome the curse of dimensionality that is
inherent in the non-parametric estimation of choice functions. We demonstrate
through extensive simulations that our proposed functionals can flexibly
capture underlying consumer behavior in a completely data-driven fashion and
outperform traditional parametric models. As demand settings often exhibit
endogenous features, we extend our framework to incorporate estimation under
endogenous features. Further, we also describe a formal inference procedure to
construct valid confidence intervals on objects of interest like price
elasticity. Finally, to assess the practical applicability of our estimator, we
utilize a real-world dataset from S. Berry, Levinsohn, and Pakes (1995). Our
empirical analysis confirms that the estimator generates realistic and
comparable own- and cross-price elasticities that are consistent with the
observations reported in the existing literature.",Choice Models and Permutation Invariance,2023-07-14 02:24:05,"Amandeep Singh, Ye Liu, Hema Yoganarasimhan","http://arxiv.org/abs/2307.07090v1, http://arxiv.org/pdf/2307.07090v1",econ.EM
30752,em,"This paper investigates the properties of Quasi Maximum Likelihood estimation
of an approximate factor model for an $n$-dimensional vector of stationary time
series. We prove that the factor loadings estimated by Quasi Maximum Likelihood
are asymptotically equivalent, as $n\to\infty$, to those estimated via
Principal Components. Both estimators are, in turn, also asymptotically
equivalent, as $n\to\infty$, to the unfeasible Ordinary Least Squares estimator
we would have if the factors were observed. We also show that the usual
sandwich form of the asymptotic covariance matrix of the Quasi Maximum
Likelihood estimator is asymptotically equivalent to the simpler asymptotic
covariance matrix of the unfeasible Ordinary Least Squares. These results hold
in the general case in which the idiosyncratic components are cross-sectionally
heteroskedastic, as well as serially and cross-sectionally weakly correlated.
This paper provides a simple solution to computing the Quasi Maximum Likelihood
estimator and its asymptotic confidence intervals without the need of running
any iterated algorithm, whose convergence properties are unclear, and
estimating the Hessian and Fisher information matrices, whose expressions are
very complex.",Asymptotic equivalence of Principal Components and Quasi Maximum Likelihood estimators in Large Approximate Factor Models,2023-07-19 12:50:14,Matteo Barigozzi,"http://arxiv.org/abs/2307.09864v3, http://arxiv.org/pdf/2307.09864v3",econ.EM
30753,em,"This work considers estimation and forecasting in a multivariate count time
series model based on a copula-type transformation of a Gaussian dynamic factor
model. The estimation is based on second-order properties of the count and
underlying Gaussian models and applies to the case where the model dimension is
larger than the sample length. In addition, novel cross-validation schemes are
suggested for model selection. The forecasting is carried out through a
particle-based sequential Monte Carlo, leveraging Kalman filtering techniques.
A simulation study and an application are also considered.",Latent Gaussian dynamic factor modeling and forecasting for multivariate count time series,2023-07-19 23:48:44,"Younghoon Kim, Zachary F. Fisher, Vladas Pipiras","http://arxiv.org/abs/2307.10454v1, http://arxiv.org/pdf/2307.10454v1",stat.ME
30801,em,"We show, using three empirical applications, that linear regression estimates
which rely on the assumption of sparsity are fragile in two ways. First, we
document that different choices of the regressor matrix that don't impact
ordinary least squares (OLS) estimates, such as the choice of baseline category
with categorical controls, can move sparsity-based estimates two standard
errors or more. Second, we develop two tests of the sparsity assumption based
on comparing sparsity-based estimators with OLS. The tests tend to reject the
sparsity assumption in all three applications. Unless the number of regressors
is comparable to or exceeds the sample size, OLS yields more robust results at
little efficiency cost.",The Fragility of Sparsity,2023-11-04 05:03:48,"Michal Kolesár, Ulrich K. Müller, Sebastian T. Roelsgaard","http://arxiv.org/abs/2311.02299v1, http://arxiv.org/pdf/2311.02299v1",econ.EM
30754,em,"We introduce PySDTest, a Python package for statistical tests of stochastic
dominance. PySDTest can implement the testing procedures of Barrett and Donald
(2003), Linton et al. (2005), Linton et al. (2010), Donald and Hsu (2016), and
their extensions. PySDTest provides several options to compute the critical
values including bootstrap, subsampling, and numerical delta methods. In
addition, PySDTest allows various notions of the stochastic dominance
hypothesis, including stochastic maximality among multiple prospects and
prospect dominance. We briefly give an overview of the concepts of stochastic
dominance and testing methods. We then provide a practical guidance for using
PySDTest. For an empirical illustration, we apply PySDTest to the portfolio
choice problem between the daily returns of Bitcoin and S&P 500 index. We find
that the S&P 500 index returns second-order stochastically dominate the Bitcoin
returns.",PySDTest: a Python Package for Stochastic Dominance Tests,2023-07-20 11:37:20,"Kyungho Lee, Yoon-Jae Whang","http://arxiv.org/abs/2307.10694v1, http://arxiv.org/pdf/2307.10694v1",econ.EM
30755,em,"Claim reserving is primarily accomplished using macro-level models, with the
Chain-Ladder method being the most widely adopted method. These methods are
usually constructed heuristically and rely on oversimplified data assumptions,
neglecting the heterogeneity of policyholders, and frequently leading to modest
reserve predictions. In contrast, micro-level reserving leverages on stochastic
modeling with granular information for improved predictions, but usually comes
at the cost of more complex models that are unattractive to practitioners. In
this paper, we introduce a simple macro-level type approach that can
incorporate granular information from the individual level. To do so, we imply
a novel framework in which we view the claim reserving problem as a population
sampling problem and propose a reserve estimator based on inverse probability
weighting techniques, with weights driven by policyholders' attributes. The
framework provides a statistically sound method for aggregate claim reserving
in a frequency and severity distribution-free fashion, while also incorporating
the capability to utilize granular information via a regression-type framework.
The resulting reserve estimator has the attractiveness of resembling the
Chain-Ladder claim development principle, but applied at the individual claim
level, so it is easy to interpret and more appealing to practitioners.",Claim Reserving via Inverse Probability Weighting: A Micro-Level Chain-Ladder Method,2023-07-05 10:24:23,"Sebastian Calcetero-Vanegas, Andrei L. Badescu, X. Sheldon Lin","http://arxiv.org/abs/2307.10808v2, http://arxiv.org/pdf/2307.10808v2",econ.EM
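A stylized rendering of the inverse-probability-weighting idea described in the record above, on made-up data: each reported claim is weighted by the inverse of its estimated probability of being reported by the valuation date, and the reserve is the implied not-yet-reported amount. The regression model, attributes, and data-generating process are illustrative assumptions; the paper's actual weights and estimator differ in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 5000
age = rng.uniform(20, 70, n)                          # policyholder attribute (toy)
lag = rng.exponential(6, n)                           # reporting delay in months (toy)
reported = (lag < 12).astype(int)                     # reported within the first year
amount = rng.gamma(2.0, 500.0, n)                     # claim severities (toy)

z = (age - 45) / 10
X = np.column_stack([z, z**2])
p_hat = LogisticRegression(max_iter=1000).fit(X, reported).predict_proba(X)[:, 1]

obs = reported == 1                                    # only reported claims are observed in practice
ultimate_hat = np.sum(amount[obs] / p_hat[obs])        # IPW estimate of total ultimate claims
reserve_hat = ultimate_hat - amount[obs].sum()         # reserve for not-yet-reported claims
print(round(reserve_hat))
```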
30756,em,"This study examines the short-term employment changes in the US after
hurricane impacts. An analysis of hurricane events during 1990-2021 suggests
that county-level employment changes in the initial month are small on average,
though large employment losses (>30%) can occur after extreme storms. The
overall small changes are partly a result of compensation among different
employment sectors, such as the construction and leisure and hospitality
sectors. Employment losses tend to be relatively pronounced in the
service-providing industries. The post-storm employment shock is negatively
correlated with the metrics of storm hazards (e.g., extreme wind and
precipitation) and geospatial details of impacts (e.g., storm-entity distance).
Additionally, non-storm factors such as county characteristics also strongly
affect short-term employment changes. The findings inform predictive modeling
of short-term employment changes, which shows promising skills for
service-providing industries and high-impact storms. The Random Forests model,
which can account for nonlinear relationships, greatly outperforms the multiple
linear regression model commonly used by economics studies. These findings may
help improve post-storm aid programs and the modeling of hurricanes'
socioeconomic impacts in a changing climate.",Characteristics and Predictive Modeling of Short-term Impacts of Hurricanes on the US Employment,2023-07-25 20:46:57,"Gan Zhang, Wenjun Zhu","http://arxiv.org/abs/2307.13686v1, http://arxiv.org/pdf/2307.13686v1",econ.EM
30757,em,"An analyst observes the frequency with which an agent takes actions, but not
the frequency with which she takes actions conditional on a payoff relevant
state. In this setting, we ask when the analyst can rationalize the agent's
choices as the outcome of the agent learning something about the state before
taking action. Our characterization marries the obedience approach in
information design (Bergemann and Morris, 2016) and the belief approach in
Bayesian persuasion (Kamenica and Gentzkow, 2011) relying on a theorem by
Strassen (1965) and Hall's marriage theorem. We apply our results to
ring-network games and to identify conditions under which a data set is
consistent with a public information structure in first-order Bayesian
persuasion games.",The Core of Bayesian Persuasion,2023-07-26 01:48:55,"Laura Doval, Ran Eilat","http://arxiv.org/abs/2307.13849v1, http://arxiv.org/pdf/2307.13849v1",econ.TH
30758,em,"We extend nonparametric regression smoothing splines to a context where there
is endogeneity and instrumental variables are available. Unlike popular
existing estimators, the resulting estimator is one-step and relies on a single
regularization parameter. We derive uniform rates of convergence for the
estimator and its first derivative. We also address the issue of imposing
monotonicity in estimation. Simulations confirm the good performances of our
estimator compared to some popular two-step procedures. Our method yields
economically sensible results when used to estimate Engel curves.",One-step nonparametric instrumental regression using smoothing splines,2023-07-27 16:53:34,"Jad Beyhum, Elia Lapenta, Pascal Lavergne","http://arxiv.org/abs/2307.14867v2, http://arxiv.org/pdf/2307.14867v2",econ.EM
30759,em,"This paper studies the inferential theory for estimating low-rank matrices.
It also provides an inference method for the average treatment effect as an
application. We show that the least square estimation of eigenvectors following
the nuclear norm penalization attains the asymptotic normality. The key
contribution of our method is that it does not require sample splitting. In
addition, this paper allows dependent observation patterns and heterogeneous
observation probabilities. Empirically, we apply the proposed procedure to
estimating the impact of the presidential vote on allocating the U.S. federal
budget to the states.",Inference for Low-rank Completion without Sample Splitting with Application to Treatment Effect Estimation,2023-07-31 05:26:52,"Jungjun Choi, Hyukjun Kwon, Yuan Liao","http://arxiv.org/abs/2307.16370v1, http://arxiv.org/pdf/2307.16370v1",econ.EM
30770,em,"Shrinkage methods are frequently used to estimate fixed effects to reduce the
noisiness of the least squares estimators. However, widely used shrinkage
estimators guarantee such noise reduction only under strong distributional
assumptions. I develop an estimator for the fixed effects that obtains the best
possible mean squared error within a class of shrinkage estimators. This class
includes conventional shrinkage estimators and the optimality does not require
distributional assumptions. The estimator has an intuitive form and is easy to
implement. Moreover, the fixed effects are allowed to vary with time and to be
serially correlated, and the shrinkage optimally incorporates the underlying
correlation structure in this case. In such a context, I also provide a method
to forecast fixed effects one period ahead.",Optimal Shrinkage Estimation of Fixed Effects in Linear Panel Data Models,2023-08-24 03:59:42,Soonwoo Kwon,"http://arxiv.org/abs/2308.12485v2, http://arxiv.org/pdf/2308.12485v2",econ.EM
30760,em,"This paper develops an inferential framework for matrix completion when
missing is not at random and without the requirement of strong signals. Our
development is based on the observation that if the number of missing entries
is small enough compared to the panel size, then they can be estimated well
even when missing is not at random. Taking advantage of this fact, we divide
the missing entries into smaller groups and estimate each group via nuclear
norm regularization. In addition, we show that with appropriate debiasing, our
proposed estimate is asymptotically normal even for fairly weak signals. Our
work is motivated by recent research on the Tick Size Pilot Program, an
experiment conducted by the Security and Exchange Commission (SEC) to evaluate
the impact of widening the tick size on the market quality of stocks from 2016
to 2018. While previous studies were based on traditional regression or
difference-in-difference methods by assuming that the treatment effect is
invariant with respect to time and unit, our analyses suggest significant
heterogeneity across units and intriguing dynamics over time during the pilot
program.",Matrix Completion When Missing Is Not at Random and Its Applications in Causal Panel Data Models,2023-08-04 17:54:29,"Jungjun Choi, Ming Yuan","http://arxiv.org/abs/2308.02364v1, http://arxiv.org/pdf/2308.02364v1",stat.ME
30761,em,"""The rich are getting richer"" implies that the population income
distributions are getting more right skewed and heavily tailed. For such
distributions, the mean is not the best measure of the center, but the
classical indices of income inequality, including the celebrated Gini index,
are all mean-based. In view of this, Professor Gastwirth sounded an alarm back
in 2014 by suggesting to incorporate the median into the definition of the Gini
index, although he noted a few shortcomings of his proposed index. In the present
paper we make a further step in the modification of classical indices and, to
acknowledge the possibility of differing viewpoints, arrive at three
median-based indices of inequality. They avoid the shortcomings of the previous
indices and can be used even when populations are ultra heavily tailed, that
is, when their first moments are infinite. The new indices are illustrated both
analytically and numerically using parametric families of income distributions,
and further illustrated using capital incomes coming from 2001 and 2018 surveys
of fifteen European countries. We also discuss the performance of the indices
from the perspective of income transfers.",Measuring income inequality via percentile relativities,2023-08-07 19:25:12,"Vytaras Brazauskas, Francesca Greselin, Ricardas Zitikis","http://arxiv.org/abs/2308.03708v1, http://arxiv.org/pdf/2308.03708v1",stat.ME
30762,em,"We demonstrate that the forecasting combination puzzle is a consequence of
the methodology commonly used to produce forecast combinations. By the
combination puzzle, we refer to the empirical finding that predictions formed
by combining multiple forecasts in ways that seek to optimize forecast
performance often do not out-perform more naive, e.g. equally-weighted,
approaches. In particular, we demonstrate that, due to the manner in which such
forecasts are typically produced, tests that aim to discriminate between the
predictive accuracy of competing combination strategies can have low power, and
can lack size control, leading to an outcome that favours the naive approach.
We show that this poor performance is due to the behavior of the corresponding
test statistic, which has a non-standard asymptotic distribution under the null
hypothesis of no inferior predictive accuracy, rather than the standard normal
distribution that is typically adopted. In addition, we demonstrate that the
low power of such predictive accuracy tests in the forecast combination setting
can be completely avoided if more efficient estimation strategies are used in
the production of the combinations, when feasible. We illustrate these findings
both in the context of forecasting a functional of interest and in terms of
predictive densities. A short empirical example using daily financial returns
exemplifies how researchers can avoid the puzzle in practical settings.",Solving the Forecast Combination Puzzle,2023-08-10 03:16:31,"David T. Frazier, Ryan Covey, Gael M. Martin, Donald Poskitt","http://arxiv.org/abs/2308.05263v1, http://arxiv.org/pdf/2308.05263v1",econ.EM
30763,em,"The Clustered Factor (CF) model induces a block structure on the correlation
matrix and is commonly used to parameterize correlation matrices. Our results
reveal that the CF model imposes superfluous restrictions on the correlation
matrix. This can be avoided by a different parametrization, involving the
logarithmic transformation of the block correlation matrix.",Characterizing Correlation Matrices that Admit a Clustered Factor Representation,2023-08-11 04:07:39,"Chen Tong, Peter Reinhard Hansen","http://arxiv.org/abs/2308.05895v1, http://arxiv.org/pdf/2308.05895v1",econ.EM
30764,em,"Serendipity plays an important role in scientific discovery. Indeed, many of
the most important breakthroughs, ranging from penicillin to the electric
battery, have been made by scientists who were stimulated by a chance exposure
to unsought but useful information. However, not all scientists are equally
likely to benefit from such serendipitous exposure. Although scholars generally
agree that scientists with a prepared mind are most likely to benefit from
serendipitous encounters, there is much less consensus over what precisely
constitutes a prepared mind, with some research suggesting the importance of
openness and others emphasizing the need for deep prior experience in a
particular domain. In this paper, we empirically investigate the role of
serendipity in science by leveraging a policy change that exogenously shifted
the shelving location of journals in university libraries and subsequently
exposed scientists to unsought scientific information. Using large-scale data
on 2.4 million papers published in 9,750 journals by 520,000 scientists at 115
North American research universities, we find that scientists with greater
openness are more likely to benefit from serendipitous encounters. Following
the policy change, these scientists tended to cite less familiar and newer
work, and ultimately published papers that were more innovative. By contrast,
we find little effect on innovativeness for scientists with greater depth of
experience, who, in our sample, tended to cite more familiar and older work
following the policy change.",Serendipity in Science,2023-08-15 04:22:07,"Pyung Nahm, Raviv Murciano-Goroff, Michael Park, Russell J. Funk","http://arxiv.org/abs/2308.07519v1, http://arxiv.org/pdf/2308.07519v1",econ.EM
30771,em,"We study the econometric properties of so-called donut regression
discontinuity (RD) designs, a robustness exercise which involves repeating
estimation and inference without the data points in some area around the
treatment threshold. This approach is often motivated by concerns that possible
systematic sorting of units, or similar data issues, in some neighborhood of
the treatment threshold might distort estimation and inference of RD treatment
effects. We show that donut RD estimators can have substantially larger bias
and variance than conventional RD estimators, and that the corresponding
confidence intervals can be substantially longer. We also provide a formal
testing framework for comparing donut and conventional RD estimation results.",Donut Regression Discontinuity Designs,2023-08-28 13:02:38,"Claudia Noack, Christoph Rothe","http://arxiv.org/abs/2308.14464v1, http://arxiv.org/pdf/2308.14464v1",econ.EM
30765,em,"Estimating the effects of long-term treatments in A/B testing presents a
significant challenge. Such treatments -- including updates to product
functions, user interface designs, and recommendation algorithms -- are
intended to remain in the system for a long period after their launches. On the
other hand, given the constraints of conducting long-term experiments,
practitioners often rely on short-term experimental results to make product
launch decisions. It remains an open question how to accurately estimate the
effects of long-term treatments using short-term experimental data. To address
this question, we introduce a longitudinal surrogate framework. We show that,
under standard assumptions, the effects of long-term treatments can be
decomposed into a series of functions, which depend on the user attributes, the
short-term intermediate metrics, and the treatment assignments. We describe the
identification assumptions, the estimation strategies, and the inference
technique under this framework. Empirically, we show that our approach
outperforms existing solutions by leveraging two real-world experiments, each
involving millions of users on WeChat, one of the world's largest social
networking platforms.",Estimating Effects of Long-Term Treatments,2023-08-16 08:42:58,"Shan Huang, Chen Wang, Yuan Yuan, Jinglong Zhao, Jingjing Zhang","http://arxiv.org/abs/2308.08152v1, http://arxiv.org/pdf/2308.08152v1",econ.EM
30766,em,"Visual imagery is indispensable to many multi-attribute decision situations.
Examples of such decision situations in travel behaviour research include
residential location choices, vehicle choices, tourist destination choices, and
various safety-related choices. However, current discrete choice models cannot
handle image data and thus cannot incorporate information embedded in images
into their representations of choice behaviour. This gap between discrete
choice models' capabilities and the real-world behaviour they seek to model
leads to incomplete and, possibly, misleading outcomes. To close this gap, this
study proposes ""Computer Vision-enriched Discrete Choice Models"" (CV-DCMs).
CV-DCMs can handle choice tasks involving numeric attributes and images by
integrating computer vision and traditional discrete choice models. Moreover,
because CV-DCMs are grounded in random utility maximisation principles, they
maintain the solid behavioural foundation of traditional discrete choice
models. We demonstrate the proposed CV-DCM by applying it to data obtained
through a novel stated choice experiment involving residential location
choices. In this experiment, respondents faced choice tasks with trade-offs
between commute time, monthly housing cost and street-level conditions,
presented using images. As such, this research contributes to the growing body
of literature in the travel behaviour field that seeks to integrate discrete
choice modelling and machine learning.","Computer vision-enriched discrete choice models, with an application to residential location choice",2023-08-16 13:33:24,"Sander van Cranenburgh, Francisco Garrido-Valenzuela","http://arxiv.org/abs/2308.08276v1, http://arxiv.org/pdf/2308.08276v1",cs.CV
30767,em,"This paper studies linear time series regressions with many regressors. Weak
exogeneity is the most used identifying assumption in time series. Weak
exogeneity requires the structural error to have zero conditional expectation
given the present and past regressor values, allowing errors to correlate with
future regressor realizations. We show that weak exogeneity in time series
regressions with many controls may produce substantial biases and even render
the least squares (OLS) estimator inconsistent. The bias arises in settings
with many regressors because the normalized OLS design matrix remains
asymptotically random and correlates with the regression error when only weak
(but not strict) exogeneity holds. This bias's magnitude increases with the
number of regressors and their average autocorrelation. To address this issue,
we propose an innovative approach to bias correction that yields a new
estimator with improved properties relative to OLS. We establish consistency
and conditional asymptotic Gaussianity of this new estimator and provide a
method for inference.",Linear Regression with Weak Exogeneity,2023-08-17 15:57:51,"Anna Mikusheva, Mikkel Sølvsten","http://arxiv.org/abs/2308.08958v1, http://arxiv.org/pdf/2308.08958v1",econ.EM
30768,em,"This paper develops power series expansions of a general class of moment
functions, including transition densities and option prices, of continuous-time
Markov processes, including jump--diffusions. The proposed expansions extend
the ones in Kristensen and Mele (2011) to cover general Markov processes. We
demonstrate that the class of expansions nests the transition density and
option price expansions developed in Yang, Chen, and Wan (2019) and Wan and
Yang (2021) as special cases, thereby connecting seemingly different ideas in a
unified framework. We show how the general expansion can be implemented for
fully general jump--diffusion models. We provide a new theory for the validity
of the expansions which shows that series expansions are not guaranteed to
converge as more terms are added in general. Thus, these methods should be used
with caution. At the same time, the numerical studies in this paper demonstrate
good performance of the proposed implementation in practice when a small number
of terms are included.",Closed-form approximations of moments and densities of continuous-time Markov models,2023-08-17 17:25:39,"Dennis Kristensen, Young Jun Lee, Antonio Mele","http://arxiv.org/abs/2308.09009v1, http://arxiv.org/pdf/2308.09009v1",econ.EM
30769,em,"This paper examines the effectiveness of several forecasting methods for
predicting inflation, focusing on aggregating disaggregated forecasts - also
known in the literature as the bottom-up approach. Taking the Brazilian case as
an application, we consider different disaggregation levels for inflation and
employ a range of traditional time series techniques as well as linear and
nonlinear machine learning (ML) models to deal with a larger number of
predictors. For many forecast horizons, the aggregation of disaggregated
forecasts performs just as well as survey-based expectations and models that
generate forecasts using the aggregate directly. Overall, ML methods outperform
traditional time series models in predictive accuracy, with outstanding
performance in forecasting disaggregates. Our results reinforce the benefits of
using models in a data-rich environment for inflation forecasting, including
aggregating disaggregated forecasts from ML techniques, mainly during volatile
periods. Since the start of the COVID-19 pandemic, the random forest model based on
both aggregate and disaggregated inflation achieves remarkable predictive
performance at intermediate and longer horizons.",Forecasting inflation using disaggregates and machine learning,2023-08-22 07:01:40,"Gilberto Boaretto, Marcelo C. Medeiros","http://arxiv.org/abs/2308.11173v1, http://arxiv.org/pdf/2308.11173v1",econ.EM
30871,em,"First-best climate policy is a uniform carbon tax which gradually rises over
time. Civil servants have complicated climate policy to expand bureaucracies,
politicians to create rents. Environmentalists have exaggerated climate change
to gain influence, other activists have joined the climate bandwagon. Opponents
to climate policy have attacked the weaknesses in climate research. The climate
debate is convoluted and polarized as a result, and climate policy complex.
Climate policy should become easier and more rational as the Paris Agreement
has shifted climate policy back towards national governments. Changing
political priorities, austerity, and a maturing bureaucracy should lead to a
more constructive climate debate.",The structure of the climate debate,2016-08-19 16:36:55,Richard S. J. Tol,"http://arxiv.org/abs/1608.05597v1, http://arxiv.org/pdf/1608.05597v1",q-fin.EC
30773,em,"This paper develops a novel method to estimate a latent factor model for a
large target panel with missing observations by optimally using the information
from auxiliary panel data sets. We refer to our estimator as target-PCA.
Transfer learning from auxiliary panel data allows us to deal with a large
fraction of missing observations and weak signals in the target panel. We show
that our estimator is more efficient and can consistently estimate weak
factors, which are not identifiable with conventional methods. We provide the
asymptotic inferential theory for target-PCA under very general assumptions on
the approximate factor model and missing patterns. In an empirical study of
imputing data in a mixed-frequency macroeconomic panel, we demonstrate that
target-PCA significantly outperforms all benchmark methods.",Target PCA: Transfer Learning Large Dimensional Panel Data,2023-08-29 23:53:06,"Junting Duan, Markus Pelger, Ruoxuan Xiong","http://arxiv.org/abs/2308.15627v1, http://arxiv.org/pdf/2308.15627v1",econ.EM
30774,em,"These lecture notes provide an overview of existing methodologies and recent
developments for estimation and inference with high dimensional time series
regression models. First, we present main limit theory results for high
dimensional dependent data which is relevant to covariance matrix structures as
well as to dependent time series sequences. Second, we present main aspects of
the asymptotic theory related to time series regression models with many
covariates. Third, we discuss various applications of statistical learning
methodologies for time series analysis purposes.",High Dimensional Time Series Regression Models: Applications to Statistical Learning Methods,2023-08-27 18:53:31,Christis Katsouris,"http://arxiv.org/abs/2308.16192v1, http://arxiv.org/pdf/2308.16192v1",econ.EM
30775,em,"Causal machine learning methods which flexibly generate heterogeneous
treatment effect estimates could be very useful tools for governments trying to
make and implement policy. However, as the critical artificial intelligence
literature has shown, governments must be very careful of unintended
consequences when using machine learning models. One way to try and protect
against unintended bad outcomes is with AI Fairness methods which seek to
create machine learning models where sensitive variables like race or gender do
not influence outcomes. In this paper we argue that standard AI Fairness
approaches developed for predictive machine learning are not suitable for all
causal machine learning applications because causal machine learning generally
(at least so far) uses modelling to inform a human who is the ultimate
decision-maker while AI Fairness approaches assume a model that is making
decisions directly. We define these scenarios as indirect and direct
decision-making respectively and suggest that policy-making is best seen as a
joint decision where the causal machine learning model usually only has
indirect power. We lay out a definition of fairness for this scenario - a model
that provides the information a decision-maker needs to accurately make a value
judgement about just policy outcomes - and argue that the complexity of causal
machine learning models can make this difficult to achieve. The solution here
is not traditional AI Fairness adjustments, but careful modelling and awareness
of some of the decision-making biases that these methods might encourage, which
we describe.",Fairness Implications of Heterogeneous Treatment Effect Estimation with Machine Learning Methods in Policy-making,2023-09-02 06:06:14,"Patrick Rehill, Nicholas Biddle","http://arxiv.org/abs/2309.00805v1, http://arxiv.org/pdf/2309.00805v1",econ.EM
30776,em,"This paper proposes the option-implied Fourier-cosine method, iCOS, for
non-parametric estimation of risk-neutral densities, option prices, and option
sensitivities. The iCOS method leverages the Fourier-based COS technique,
proposed by Fang and Oosterlee (2008), by utilizing the option-implied cosine
series coefficients. Notably, this procedure does not rely on any model
assumptions about the underlying asset price dynamics, it is fully
non-parametric, and it does not involve any numerical optimization. These
features make it rather general and computationally appealing. Furthermore, we
derive the asymptotic properties of the proposed non-parametric estimators and
study their finite-sample behavior in Monte Carlo simulations. Our empirical
analysis using S&P 500 index options and Amazon equity options illustrates the
effectiveness of the iCOS method in extracting valuable information from option
prices under different market conditions.",iCOS: Option-Implied COS Method,2023-09-02 16:39:57,Evgenii Vladimirov,"http://arxiv.org/abs/2309.00943v1, http://arxiv.org/pdf/2309.00943v1",q-fin.ST
30777,em,"In lending, where prices are specific to both customers and products, having
a well-functioning personalized pricing policy in place is essential to
effective business making. Typically, such a policy must be derived from
observational data, which introduces several challenges. While the problem of
``endogeneity'' is prominently studied in the established pricing literature,
the problem of selection bias (or, more precisely, bid selection bias) is not.
We take a step towards understanding the effects of selection bias by posing
pricing as a problem of causal inference. Specifically, we consider the
reaction of a customer to price as a treatment effect. In our experiments, we
simulate varying levels of selection bias on a semi-synthetic dataset on
mortgage loan applications in Belgium. We investigate the potential of
parametric and nonparametric methods for the identification of individual
bid-response functions. Our results illustrate how conventional methods such as
logistic regression and neural networks suffer adversely from selection bias.
In contrast, we implement state-of-the-art methods from causal machine learning
and show their capability to overcome selection bias in pricing data.",A Causal Perspective on Loan Pricing: Investigating the Impacts of Selection Bias on Identifying Bid-Response Functions,2023-09-07 17:14:30,"Christopher Bockel-Rickermann, Sam Verboven, Tim Verdonck, Wouter Verbeke","http://arxiv.org/abs/2309.03730v1, http://arxiv.org/pdf/2309.03730v1",cs.LG
30778,em,"This paper introduces non-linear dimension reduction in factor-augmented
vector autoregressions to analyze the effects of different economic shocks. I
argue that controlling for non-linearities between a large-dimensional dataset
and the latent factors is particularly useful during turbulent times of the
business cycle. In simulations, I show that non-linear dimension reduction
techniques yield good forecasting performance, especially when data is highly
volatile. In an empirical application, I identify a monetary policy shock as well
as an uncertainty shock, both excluding and including observations of the COVID-19
pandemic. Those two applications suggest that the non-linear FAVAR approaches
are capable of dealing with the large outliers caused by the COVID-19 pandemic
and yield reliable results in both scenarios.",Non-linear dimension reduction in factor-augmented vector autoregressions,2023-09-09 18:22:30,Karin Klieber,"http://arxiv.org/abs/2309.04821v1, http://arxiv.org/pdf/2309.04821v1",econ.EM
30779,em,"This study considers tests for coefficient randomness in predictive
regressions. Our focus is on how tests for coefficient randomness are
influenced by the persistence of the random coefficient. We find that when the
random coefficient is stationary, or I(0), Nyblom's (1989) LM test loses its
optimality (in terms of power), which is established against the alternative of
an integrated, or I(1), random coefficient. We demonstrate this by constructing
tests that are more powerful than the LM test when the random coefficient is
stationary, although these tests are dominated in terms of power by the LM test
when the random coefficient is integrated. This implies that the best test for
coefficient randomness differs from context to context, and practitioners
should take into account the persistence of the potentially random coefficient
and choose from several tests accordingly. We apply tests for coefficient constancy
to real data. The results mostly reverse the conclusion of an earlier empirical
study.",Testing for Stationary or Persistent Coefficient Randomness in Predictive Regressions,2023-09-10 06:18:44,Mikihito Nishi,"http://arxiv.org/abs/2309.04926v2, http://arxiv.org/pdf/2309.04926v2",econ.EM
30780,em,"We propose a novel sensitivity analysis framework for linear estimands when
identification failure can be viewed as seeing the wrong distribution of
outcomes. Our family of assumptions bounds the density ratio between the
observed and true conditional outcome distribution. This framework links
naturally to selection models, generalizes existing assumptions for the
Regression Discontinuity (RD) and Inverse Propensity Weighting (IPW) estimand,
and provides a novel nonparametric perspective on violations of identification
assumptions for ordinary least squares (OLS). Our sharp partial identification
results extend existing results for IPW to cover other estimands and
assumptions that allow even unbounded likelihood ratios, yielding a simple and
unified characterization of bounds under assumptions like the c-dependence
assumption of Masten and Poirier (2018). The sharp bounds can be written as a
simple closed form moment of the data, the nuisance functions estimated in the
primary analysis, and the conditional outcome quantile function. We find our
method does well in simulations even when targeting a discontinuous and nearly
infinite bound.",Sensitivity Analysis for Linear Estimands,2023-09-12 18:16:23,"Jacob Dorn, Luther Yap","http://arxiv.org/abs/2309.06305v2, http://arxiv.org/pdf/2309.06305v2",econ.EM
30781,em,"In experimental design, Neyman allocation refers to the practice of
allocating subjects into treated and control groups, potentially in unequal
numbers proportional to their respective standard deviations, with the
objective of minimizing the variance of the treatment effect estimator. This
widely recognized approach increases statistical power in scenarios where the
treated and control groups have different standard deviations, as is often the
case in social experiments, clinical trials, marketing research, and online A/B
testing. However, Neyman allocation cannot be implemented unless the standard
deviations are known in advance. Fortunately, the multi-stage nature of the
aforementioned applications allows the use of earlier stage observations to
estimate the standard deviations, which further guide allocation decisions in
later stages. In this paper, we introduce a competitive analysis framework to
study this multi-stage experimental design problem. We propose a simple
adaptive Neyman allocation algorithm, which almost matches the
information-theoretic limit of conducting experiments. Using online A/B testing
data from a social media site, we demonstrate the effectiveness of our adaptive
Neyman allocation algorithm, highlighting its practicality especially when
applied with only a limited number of stages.",Adaptive Neyman Allocation,2023-09-16 02:23:31,Jinglong Zhao,"http://arxiv.org/abs/2309.08808v2, http://arxiv.org/pdf/2309.08808v2",stat.ME
30782,em,"A nonlinear regression framework is proposed for time series and panel data
for the situation where certain explanatory variables are available at a higher
temporal resolution than the dependent variable. The main idea is to use the
moments of the empirical distribution of these variables to construct
regressors with the correct resolution. As the moments are likely to display
nonlinear marginal and interaction effects, an artificial neural network
regression function is proposed. The corresponding model operates within the
traditional stochastic nonlinear least squares framework. In particular, a
numerical Hessian is employed to calculate confidence intervals. The practical
usefulness is demonstrated by analyzing the influence of daily temperatures in
260 European NUTS2 regions on the yearly growth of gross value added in these
regions in the time period 2000 to 2021. In the particular example, the model
allows for an appropriate assessment of regional economic impacts resulting
from (future) changes in the regional temperature distribution (mean AND
variance).",Regressing on distributions: The nonlinear effect of temperature on regional economic growth,2023-09-19 12:51:15,Malte Jahn,"http://arxiv.org/abs/2309.10481v1, http://arxiv.org/pdf/2309.10481v1",stat.ME
30783,em,"Country comparisons using standardized test scores may in some cases be
misleading unless we make sure that the potential sample selection bias created
by drop-outs and non-enrollment patterns does not alter the analysis. In this
paper, I propose an answer to this issue which consists of identifying the
counterfactual distribution of achievement (that is, the distribution of
achievement if there were hypothetically no selection) from the observed
distribution of achievements. International comparison measures like means,
quantiles, and inequality measures have to be computed using that
counterfactual distribution which is statistically closer to the observed one
for a low proportion of out-of-school children. I identify the quantiles of
that latent distribution by readjusting the percentile levels of the observed
quantile function of achievement. Because the data on test scores is by nature
truncated, I rely on auxiliary data to borrow identification power. Finally, I
apply my method to compute selection-corrected means using PISA 2018 and PASEC
2019 and find that rankings/comparisons can change.",Testing and correcting sample selection in academic achievement comparisons,2023-09-19 17:22:26,Onil Boussim,"http://arxiv.org/abs/2309.10642v3, http://arxiv.org/pdf/2309.10642v3",econ.EM
30802,em,"We propose a new Bayesian heteroskedastic Markov-switching structural vector
autoregression with data-driven time-varying identification. The model selects
alternative exclusion restrictions over time and, as a condition for the
search, allows to verify identification through heteroskedasticity within each
regime. Based on four alternative monetary policy rules, we show that a monthly
six-variable system supports time variation in US monetary policy shock
identification. In the sample-dominating first regime, systematic monetary
policy follows a Taylor rule extended by the term spread, effectively curbing
inflation. In the second regime, occurring after 2000 and gaining more
persistence after the global financial and COVID crises, it is characterized by
a money-augmented Taylor rule. This regime's unconventional monetary policy
provides economic stimulus, features the liquidity effect, and is complemented
by a pure term spread shock. Absent the specific monetary policy of the second
regime, inflation would be over one percentage point higher on average after
2008.",Time-Varying Identification of Monetary Policy Shocks,2023-11-10 08:53:01,"Annika Camehl, Tomasz Woźniak","http://arxiv.org/abs/2311.05883v2, http://arxiv.org/pdf/2311.05883v2",econ.EM
30784,em,"This study aims to use simultaneous quantile regression (SQR) to examine the
impact of macroeconomic and financial uncertainty including global pandemic,
geopolitical risk on the futures returns of crude oil (ROC). The data for this
study is sourced from the FRED (Federal Reserve Economic Database) economic
dataset; the importance of the factors have been validated by using variation
inflation factor (VIF) and principal component analysis (PCA). To fully
understand the combined effect of these factors on WTI, study includes
interaction terms in the multi-factor model. Empirical results suggest that
changes in ROC can have varying impacts depending on the specific period and
market conditions. The results can be used for informed investment decisions
and to construct portfolios that are well-balanced in terms of risk and return.
Structural breaks, such as changes in global economic conditions or shifts in
demand for crude oil, can cause return on crude oil to be sensitive to changes
in different time periods. The unique aspect ness of this study also lies in
its inclusion of explanatory factors related to the pandemic, geopolitical
risk, and inflation.","Impact of Economic Uncertainty, Geopolitical Risk, Pandemic, Financial & Macroeconomic Factors on Crude Oil Returns -- An Empirical Investigation",2023-10-02 14:55:01,Sarit Maitra,"http://arxiv.org/abs/2310.01123v2, http://arxiv.org/pdf/2310.01123v2",econ.EM
30785,em,"We propose a novel family of test statistics to detect the presence of
changepoints in a sequence of dependent, possibly multivariate,
functional-valued observations. Our approach allows testing for a very general
class of changepoints, including the ""classical"" case of changes in the mean,
and even changes in the whole distribution. Our statistics are based on a
generalisation of the empirical energy distance; we propose weighted
functionals of the energy distance process, which are designed in order to
enhance the ability to detect breaks occurring at sample endpoints. The
limiting distribution of the maximally selected version of our statistics
requires only the computation of the eigenvalues of the covariance function,
thus being readily implementable in the most commonly employed packages, e.g.
R. We show that, under the alternative, our statistics are able to detect
changepoints occurring even very close to the beginning/end of the sample. In
the presence of multiple changepoints, we propose a binary segmentation
algorithm to estimate the number of breaks and the locations thereof.
Simulations show that our procedures work very well in finite samples. We
complement our theory with applications to financial and temperature data.",On changepoint detection in functional data using empirical energy distance,2023-10-07 18:28:42,"B. Cooper Boniece, Lajos Horváth, Lorenzo Trapani","http://arxiv.org/abs/2310.04853v1, http://arxiv.org/pdf/2310.04853v1",stat.ME
30786,em,"We consider a decision maker who faces a binary treatment choice when their
welfare is only partially identified from data. We contribute to the literature
by anchoring our finite-sample analysis on mean square regret, a decision
criterion advocated by Kitagawa, Lee, and Qiu (2022). We find that optimal
rules are always fractional, irrespective of the width of the identified set
and precision of its estimate. The optimal treatment fraction is a simple
logistic transformation of the commonly used t-statistic multiplied by a factor
calculated by a simple constrained optimization. This treatment fraction gets
closer to 0.5 as the width of the identified set becomes wider, implying the
decision maker becomes more cautious against the adversarial Nature.","Treatment Choice, Mean Square Regret and Partial Identification",2023-10-10 04:36:38,"Toru Kitagawa, Sokbae Lee, Chen Qiu","http://arxiv.org/abs/2310.06242v1, http://arxiv.org/pdf/2310.06242v1",econ.EM
30787,em,"This paper examines the degree of integration at euro area financial markets.
To that end, we estimate overall and country-specific integration indices based
on a panel vector-autoregression with factor stochastic volatility. Our results
indicate a more heterogeneous bond market compared to the market for lending
rates. At both markets, the global financial crisis and the sovereign debt
crisis led to a severe decline in financial integration, which fully recovered
since then. We furthermore identify countries that deviate from their peers
either by responding differently to crisis events or by taking on different
roles in the spillover network. The latter analysis reveals two sets of
countries, namely a main body of countries that receives and transmits
spillovers and a second, smaller group of spillover absorbing economies.
Finally, we demonstrate by estimating an augmented Taylor rule that euro area
short-term interest rates are positively linked to the level of integration on
the bond market.",Integration or fragmentation? A closer look at euro area financial markets,2023-10-11 21:23:31,"Martin Feldkircher, Karin Klieber","http://arxiv.org/abs/2310.07790v1, http://arxiv.org/pdf/2310.07790v1",econ.EM
30788,em,"Using CPS data for 1976 to 2022 we explore how wage inequality has evolved
for married couples with both spouses working full time full year, and its
impact on household income inequality. We also investigate how marriage sorting
patterns have changed over this period. To determine the factors driving income
inequality we estimate a model explaining the joint distribution of wages which
accounts for the spouses' employment decisions. We find that income inequality
has increased for these households and increased assortative matching of wages
has exacerbated the inequality resulting from individual wage growth. We find
that positive sorting partially reflects the correlation across unobservables
influencing the wages of both members of the marriage. We decompose the changes in
sorting patterns over the 47 years comprising our sample into structural,
composition and selection effects and find that the increase in positive
sorting primarily reflects the increased skill premia for both observed and
unobserved characteristics.","Marital Sorting, Household Inequality and Selection",2023-10-11 22:32:45,"Iván Fernández-Val, Aico van Vuuren, Francis Vella","http://arxiv.org/abs/2310.07839v1, http://arxiv.org/pdf/2310.07839v1",econ.EM
30789,em,"This paper proposes a simple method for balancing distributions of covariates
for causal inference based on observational studies. The method makes it
possible to balance an arbitrary number of quantiles (e.g., medians, quartiles,
or deciles) together with means if necessary. The proposed approach is based on
the theory of calibration estimators (Deville and Särndal 1992), in
particular, calibration estimators for quantiles, proposed by Harms and
Duchesne (2006). By modifying the entropy balancing method and the covariate
balancing propensity score method, it is possible to balance the distributions
of the treatment and control groups. The method does not require numerical
integration, kernel density estimation or assumptions about the distributions;
valid estimates can be obtained by drawing on existing asymptotic theory.
Results of a simulation study indicate that the method efficiently estimates
average treatment effects on the treated (ATT), the average treatment effect
(ATE), the quantile treatment effect on the treated (QTT) and the quantile
treatment effect (QTE), especially in the presence of non-linearity and
mis-specification of the models. The proposed methods are implemented in an
open source R package jointCalib.",Survey calibration for causal inference: a simple method to balance covariate distributions,2023-10-18 16:50:32,Maciej Beręsewicz,"http://arxiv.org/abs/2310.11969v1, http://arxiv.org/pdf/2310.11969v1",stat.ME
30790,em,"Causal machine learning tools are beginning to see use in real-world policy
evaluation tasks to flexibly estimate treatment effects. One issue with these
methods is that the machine learning models used are generally black boxes,
i.e., there is no globally interpretable way to understand how a model makes
estimates. This is a clear problem in policy evaluation applications,
particularly in government, because it is difficult to understand whether such
models are functioning in ways that are fair, based on the correct
interpretation of evidence and transparent enough to allow for accountability
if things go wrong. However, there has been little discussion of transparency
problems in the causal machine learning literature and how these might be
overcome. This paper explores why transparency issues are a problem for causal
machine learning in public policy evaluation applications and considers ways
these problems might be addressed through explainable AI tools and by
simplifying models in line with interpretable AI principles. It then applies
these ideas to a case-study using a causal forest model to estimate conditional
average treatment effects for a hypothetical change in the school leaving age
in Australia. It shows that existing tools for understanding black-box
predictive models are poorly suited to causal machine learning and that
simplifying the model to make it interpretable leads to an unacceptable
increase in error (in this application). It concludes that new tools are needed
to properly understand causal machine learning models and the algorithms that
fit them.",Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability,2023-10-20 05:48:29,"Patrick Rehill, Nicholas Biddle","http://arxiv.org/abs/2310.13240v1, http://arxiv.org/pdf/2310.13240v1",cs.LG
30791,em,"Bayesian vector autoregressions (BVARs) are the workhorse in macroeconomic
forecasting. Research in the last decade has established the importance of
allowing time-varying volatility to capture both secular and cyclical
variations in macroeconomic uncertainty. This recognition, together with the
growing availability of large datasets, has propelled a surge in recent
research in building stochastic volatility models suitable for large BVARs.
Some of these new models are also equipped with additional features that are
especially desirable for large systems, such as order invariance -- i.e.,
estimates are not dependent on how the variables are ordered in the BVAR -- and
robustness against COVID-19 outliers. Estimation of these large, flexible
models is made possible by the recently developed equation-by-equation approach
that drastically reduces the computational cost of estimating large systems.
Despite these recent advances, there remains much ongoing work, such as the
development of parsimonious approaches for time-varying coefficients and other
types of nonlinearities in large BVARs.",BVARs and Stochastic Volatility,2023-10-23 01:48:17,Joshua Chan,"http://arxiv.org/abs/2310.14438v1, http://arxiv.org/pdf/2310.14438v1",econ.EM
30792,em,"A decision-maker (DM) faces uncertainty governed by a data-generating process
(DGP), which is only known to belong to a set of sequences of independent but
possibly non-identical distributions. A robust decision maximizes the DM's
expected payoff against the worst possible DGP in this set. This paper studies
how such robust decisions can be improved with data, where improvement is
measured by expected payoff under the true DGP. In this paper, I fully
characterize when and how such an improvement can be guaranteed under all
possible DGPs and develop inference methods to achieve it. These inference
methods are needed because, as this paper shows, common inference methods
(e.g., maximum likelihood or Bayesian) often fail to deliver such an
improvement. Importantly, the developed inference methods are given by simple
augmentations to standard inference procedures, and are thus easy to implement
in practice.",Improving Robust Decisions with Data,2023-10-25 04:27:24,Xiaoyu Cheng,"http://arxiv.org/abs/2310.16281v2, http://arxiv.org/pdf/2310.16281v2",econ.TH
30793,em,"Randomized experiments have been the gold standard for assessing the
effectiveness of a treatment or policy. The classical complete randomization
approach assigns treatments based on a prespecified probability and may lead to
inefficient use of data. Adaptive experiments improve upon complete
randomization by sequentially learning and updating treatment assignment
probabilities. However, their application can also raise fairness and equity
concerns, as assignment probabilities may vary drastically across groups of
participants. Furthermore, when treatment is expected to be extremely
beneficial to certain groups of participants, it is more appropriate to expose
many of these participants to favorable treatment. In response to these
challenges, we propose a fair adaptive experiment strategy that simultaneously
enhances data use efficiency, achieves an envy-free treatment assignment
guarantee, and improves the overall welfare of participants. An important
feature of our proposed strategy is that we do not impose parametric modeling
assumptions on the outcome variables, making it more versatile and applicable
to a wider array of applications. Through our theoretical investigation, we
characterize the convergence rate of the estimated treatment effects and the
associated standard deviations at the group level and further prove that our
adaptive treatment assignment algorithm, despite not having a closed-form
expression, approaches the optimal allocation rule asymptotically. Our proof
strategy takes into account the fact that the allocation decisions in our
design depend on sequentially accumulated data, which poses a significant
challenge in characterizing the properties and conducting statistical inference
of our method. We further provide simulation evidence to showcase the
performance of our fair adaptive experiment strategy.",Fair Adaptive Experiments,2023-10-25 04:52:41,"Waverly Wei, Xinwei Ma, Jingshen Wang","http://arxiv.org/abs/2310.16290v1, http://arxiv.org/pdf/2310.16290v1",stat.ME
30794,em,"This paper adopts the random matrix theory (RMT) to analyze the correlation
structure of the global agricultural futures market from 2000 to 2020. It is
found that the distribution of correlation coefficients is asymmetric and right
skewed, and many eigenvalues of the correlation matrix deviate from the RMT
prediction. The largest eigenvalue reflects a collective market effect common
to all agricultural futures, the other largest deviating eigenvalues can be
used to identify futures groups, and there are modular structures based
on regional properties or agricultural commodities among the significant
participants of their corresponding eigenvectors. Except for the smallest
eigenvalue, the other smallest deviating eigenvalues represent the agricultural
futures pairs with the highest correlations. This paper provides a useful
reference for using agricultural futures to manage risk and optimize asset
allocation.",Correlation structure analysis of the global agricultural futures market,2023-10-24 10:21:31,"Yun-Shi Dai, Ngoc Quang Anh Huynh, Qing-Huan Zheng, Wei-Xing Zhou","http://dx.doi.org/10.1016/j.ribaf.2022.101677, http://arxiv.org/abs/2310.16849v1, http://arxiv.org/pdf/2310.16849v1",q-fin.ST
30803,em,"Latent variable models are widely used to account for unobserved determinants
of economic behavior. Traditional nonparametric methods to estimate latent
heterogeneity do not scale well into multidimensional settings. Distributional
restrictions alleviate tractability concerns but may impart non-trivial
misspecification bias. Motivated by these concerns, this paper introduces a
quasi-Bayes approach to estimate a large class of multidimensional latent
variable models. Our approach to quasi-Bayes is novel in that we center it
around relating the characteristic function of observables to the distribution
of unobservables. We propose a computationally attractive class of priors that
are supported on Gaussian mixtures and derive contraction rates for a variety
of latent variable models.",Quasi-Bayes in Latent Variable Models,2023-11-12 16:07:02,Sid Kankanala,"http://arxiv.org/abs/2311.06831v1, http://arxiv.org/pdf/2311.06831v1",econ.EM
30795,em,"The ongoing Russia-Ukraine conflict between two major agricultural powers has
posed significant threats and challenges to the global food system and world
food security. Focusing on the impact of the conflict on the global
agricultural market, we propose a new analytical framework for tail dependence,
and combine the Copula-CoVaR method with the ARMA-GARCH-skewed Student-t model
to examine the tail dependence structure and extreme risk spillover between
agricultural futures and spots over the pre- and post-outbreak periods. Our
results indicate that the tail dependence structures in the futures-spot
markets of soybean, maize, wheat, and rice have all reacted to the
Russia-Ukraine conflict. Furthermore, the outbreak of the conflict has
intensified risks in the four agricultural markets to varying degrees, with the
wheat market being affected the most. Additionally, all the agricultural
futures markets exhibit significant downside and upside risk spillovers to
their corresponding spot markets before and after the outbreak of the conflict,
whereas the strengths of these extreme risk spillover effects demonstrate
significant asymmetries at the directional (downside versus upside) and
temporal (pre-outbreak versus post-outbreak) levels.",The impact of the Russia-Ukraine conflict on the extreme risk spillovers between agricultural futures and spots,2023-10-24 11:12:12,"Wei-Xing Zhou, Yun-Shi Dai, Kiet Tuan Duong, Peng-Fei Dai","http://dx.doi.org/10.1016/j.jebo.2023.11.004, http://arxiv.org/abs/2310.16850v1, http://arxiv.org/pdf/2310.16850v1",q-fin.ST
30796,em,"A novel spatial autoregressive model for panel data is introduced, which
incorporates multilayer networks and accounts for time-varying relationships.
Moreover, the proposed approach allows the structural variance to evolve
smoothly over time and enables the analysis of shock propagation in terms of
time-varying spillover effects. The framework is applied to analyse the
dynamics of international relationships among the G7 economies and their impact
on stock market returns and volatilities. The findings underscore the
substantial impact of cooperative interactions and highlight discernible
disparities in network exposure across G7 nations, along with nuanced patterns
in direct and indirect spillover effects.",Bayesian SAR model with stochastic volatility and multiple time-varying weights,2023-10-26 18:24:11,"Michele Costola, Matteo Iacopini, Casper Wichers","http://arxiv.org/abs/2310.17473v1, http://arxiv.org/pdf/2310.17473v1",stat.AP
30797,em,"Feedforward neural network (FFN) and two specific types of recurrent neural
network, long short-term memory (LSTM) and gated recurrent unit (GRU), are used
for modeling US recessions in the period from 1967 to 2021. The estimated
models are then employed to conduct real-time predictions of the Great
Recession and the Covid-19 recession in US. Their predictive performances are
compared to those of the traditional linear models, the logistic regression
model both with and without the ridge penalty. The out-of-sample performance
supports the application of LSTM and GRU in the area of recession forecasting,
especially for long-term forecasting tasks. They outperform other types of
models across 5 forecasting horizons with respect to different types of
statistical performance metrics. Shapley additive explanations (SHAP) method is
applied to the fitted GRUs across different forecasting horizons to gain
insight into the feature importance. The evaluation of predictor importance
differs between the GRU and ridge logistic regression models, as reflected in
the variable order determined by SHAP values. When considering the top 5
predictors, key indicators such as the S&P 500 index, real GDP, and private
residential fixed investment consistently appear for short-term forecasts (up
to 3 months). In contrast, for longer-term predictions (6 months or more), the
term spread and producer price index become more prominent. These findings are
supported by both local interpretable model-agnostic explanations (LIME) and
marginal effects.",Inside the black box: Neural network-based real-time prediction of US recessions,2023-10-26 19:58:16,Seulki Chung,"http://arxiv.org/abs/2310.17571v1, http://arxiv.org/pdf/2310.17571v1",econ.EM
30798,em,"We show that when the propensity score is estimated using a suitable
covariate balancing procedure, the commonly used inverse probability weighting
(IPW) estimator, augmented inverse probability weighting (AIPW) with linear
conditional mean, and inverse probability weighted regression adjustment
(IPWRA) with linear conditional mean are all numerically the same for
estimating the average treatment effect (ATE) or the average treatment effect
on the treated (ATT). Further, suitably chosen covariate balancing weights are
automatically normalized, which means that normalized and unnormalized versions
of IPW and AIPW are identical. For estimating the ATE, the weights that achieve
the algebraic equivalence of IPW, AIPW, and IPWRA are based on propensity
scores estimated using the inverse probability tilting (IPT) method of Graham,
Pinto and Egel (2012). For the ATT, the weights are obtained using the
covariate balancing propensity score (CBPS) method developed in Imai and
Ratkovic (2014). These equivalences also make covariate balancing methods
attractive when the treatment is confounded and one is interested in the local
average treatment effect.",Covariate Balancing and the Equivalence of Weighting and Doubly Robust Estimators of Average Treatment Effects,2023-10-28 05:25:14,"Tymon Słoczyński, S. Derya Uysal, Jeffrey M. Wooldridge","http://arxiv.org/abs/2310.18563v1, http://arxiv.org/pdf/2310.18563v1",econ.EM
30799,em,"Cluster-randomized trials often involve units that are irregularly
distributed in space without well-separated communities. In these settings,
cluster construction is a critical aspect of the design due to the potential
for cross-cluster interference. The existing literature relies on partial
interference models, which take clusters as given and assume no cross-cluster
interference. We relax this assumption by allowing interference to decay with
geographic distance between units. This induces a bias-variance trade-off:
constructing fewer, larger clusters reduces bias due to interference but
increases variance. We propose new estimators that exclude units most
potentially impacted by cross-cluster interference and show that this
substantially reduces asymptotic bias relative to conventional
difference-in-means estimators. We provide formal justification for a new
design that chooses the number of clusters to balance the asymptotic bias and
variance of our estimators and uses unsupervised learning to automate cluster
construction.",Design of Cluster-Randomized Trials with Cross-Cluster Interference,2023-10-29 01:36:37,Michael P. Leung,"http://arxiv.org/abs/2310.18836v2, http://arxiv.org/pdf/2310.18836v2",stat.ME
30800,em,"The spatial autoregressive (SAR) model is extended by introducing a Markov
switching dynamics for the weight matrix and spatial autoregressive parameter.
The framework enables the identification of regime-specific connectivity
patterns and strengths and the study of the spatiotemporal propagation of
shocks in a system with a time-varying spatial multiplier matrix. The proposed
model is applied to disaggregated CPI data from 15 EU countries to examine
cross-price dependencies. The analysis identifies distinct connectivity
structures and spatial weights across the states, which capture shifts in
consumer behaviour, with marked cross-country differences in the spillover from
one price category to another.",A Bayesian Markov-switching SAR model for time-varying cross-price spillovers,2023-10-30 17:12:05,"Christian Glocker, Matteo Iacopini, Tamás Krisztin, Philipp Piribauer","http://arxiv.org/abs/2310.19557v1, http://arxiv.org/pdf/2310.19557v1",stat.AP
30804,em,"Bayesian predictive synthesis (BPS) provides a method for combining multiple
predictive distributions based on agent/expert opinion analysis theory and
encompasses a range of existing density forecast pooling methods. The key
ingredient in BPS is a ""synthesis"" function. This is typically specified
parametrically as a dynamic linear regression. In this paper, we develop a
nonparametric treatment of the synthesis function using regression trees. We
show the advantages of our tree-based approach in two macroeconomic forecasting
applications. The first uses density forecasts for GDP growth from the euro
area's Survey of Professional Forecasters. The second combines density
forecasts of US inflation produced by many regression models involving
different predictors. Both applications demonstrate the benefits -- in terms of
improved forecast accuracy and interpretability -- of modeling the synthesis
function nonparametrically.",Predictive Density Combination Using a Tree-Based Synthesis Function,2023-11-21 18:29:09,"Tony Chernis, Niko Hauzenberger, Florian Huber, Gary Koop, James Mitchell","http://arxiv.org/abs/2311.12671v1, http://arxiv.org/pdf/2311.12671v1",econ.EM
30805,em,"We introduce a new regression method that relates the mean of an outcome
variable to covariates, given the ""adverse condition"" that a distress variable
falls in its tail. This allows to tailor classical mean regressions to adverse
economic scenarios, which receive increasing interest in managing macroeconomic
and financial risks, among many others. In the terminology of the systemic risk
literature, our method can be interpreted as a regression for the Marginal
Expected Shortfall. We propose a two-step procedure to estimate the new models,
show consistency and asymptotic normality of the estimator, and propose
feasible inference under weak conditions allowing for cross-sectional and time
series applications. The accuracy of the asymptotic approximations of the
two-step estimator is verified in simulations. Two empirical applications show
that our regressions under adverse conditions are valuable in such diverse
fields as the study of the relation between systemic risk and asset price
bubbles, and dissecting macroeconomic growth vulnerabilities into individual
components.",Regressions under Adverse Conditions,2023-11-22 14:47:40,"Timo Dimitriadis, Yannick Hoga","http://arxiv.org/abs/2311.13327v1, http://arxiv.org/pdf/2311.13327v1",econ.EM
30806,em,"When there are multiple outcome series of interest, Synthetic Control
analyses typically proceed by estimating separate weights for each outcome. In
this paper, we instead propose estimating a common set of weights across
outcomes, by balancing either a vector of all outcomes or an index or average
of them. Under a low-rank factor model, we show that these approaches lead to
lower bias bounds than separate weights, and that averaging leads to further
gains when the number of outcomes grows. We illustrate this via simulation and
in a re-analysis of the impact of the Flint water crisis on educational
outcomes.",Using Multiple Outcomes to Improve the Synthetic Control Method,2023-11-27 22:07:29,"Liyang Sun, Eli Ben-Michael, Avi Feller","http://arxiv.org/abs/2311.16260v1, http://arxiv.org/pdf/2311.16260v1",econ.EM
30807,em,"We reinvigorate maximum likelihood estimation (MLE) for macroeconomic density
forecasting through a novel neural network architecture with dedicated mean and
variance hemispheres. Our architecture features several key ingredients making
MLE work in this context. First, the hemispheres share a common core at the
entrance of the network, which accommodates various forms of time variation
in the error variance. Second, we introduce a volatility emphasis constraint
that breaks mean/variance indeterminacy in this class of overparametrized
nonlinear models. Third, we conduct a blocked out-of-bag reality check to curb
overfitting in both conditional moments. Fourth, the algorithm utilizes
standard deep learning software and thus handles large data sets - both
computationally and statistically. Ergo, our Hemisphere Neural Network (HNN)
provides proactive volatility forecasts based on leading indicators when it
can, and reactive volatility based on the magnitude of previous prediction
errors when it must. We evaluate point and density forecasts with an extensive
out-of-sample experiment and benchmark against a suite of models ranging from
classics to more modern machine learning-based offerings. In all cases, HNN
fares well by consistently providing accurate mean/variance forecasts for all
targets and horizons. Studying the resulting volatility paths reveals its
versatility, while probabilistic forecasting evaluation metrics showcase its
enviable reliability. Finally, we also demonstrate how this machinery can be
merged with other structured deep learning models by revisiting Goulet Coulombe
(2022)'s Neural Phillips Curve.",From Reactive to Proactive Volatility Modeling with Hemisphere Neural Networks,2023-11-28 00:37:50,"Philippe Goulet Coulombe, Mikael Frenette, Karin Klieber","http://arxiv.org/abs/2311.16333v1, http://arxiv.org/pdf/2311.16333v1",econ.EM
30808,em,"This paper studies the inference about linear functionals of high-dimensional
low-rank matrices. While most existing inference methods would require
consistent estimation of the true rank, our procedure is robust to rank
misspecification, making it a promising approach in applications where rank
estimation can be unreliable. We estimate the low-rank spaces using
pre-specified weighting matrices, known as diversified projections. A novel
statistical insight is that, unlike the usual statistical wisdom that
overfitting mainly introduces additional variance, the over-estimated low-rank
space also gives rise to a non-negligible bias due to an implicit ridge-type
regularization. We develop a new inference procedure and show that the central
limit theorem holds as long as the pre-specified rank is no smaller than the
true rank. Empirically, we apply our method to the U.S. federal grants
allocation data and test the existence of pork-barrel politics.",Inference for Low-rank Models without Estimating the Rank,2023-11-28 05:42:53,"Jungjun Choi, Hyukjun Kwon, Yuan Liao","http://arxiv.org/abs/2311.16440v1, http://arxiv.org/pdf/2311.16440v1",econ.EM
30809,em,"This paper discusses estimation with a categorical instrumental variable in
settings with potentially few observations per category. The proposed
categorical instrumental variable estimator (CIV) leverages a regularization
assumption that implies existence of a latent categorical variable with fixed
finite support achieving the same first stage fit as the observed instrument.
In asymptotic regimes that allow the number of observations per category to
grow at an arbitrarily small polynomial rate with the sample size, I show that when
the cardinality of the support of the optimal instrument is known, CIV is
root-n asymptotically normal, achieves the same asymptotic variance as the
oracle IV estimator that presumes knowledge of the optimal instrument, and is
semiparametrically efficient under homoskedasticity. Under-specifying the
number of support points reduces efficiency but maintains asymptotic normality.",Optimal Categorical Instrumental Variables,2023-11-28 21:20:05,Thomas Wiemann,"http://arxiv.org/abs/2311.17021v1, http://arxiv.org/pdf/2311.17021v1",econ.EM
30810,em,"Regression adjustment, sometimes known as Controlled-experiment Using
Pre-Experiment Data (CUPED), is an important technique in internet
experimentation. It decreases the variance of effect size estimates, often
cutting confidence interval widths in half or more while never making them
worse. It does so by carefully regressing the goal metric against
pre-experiment features to reduce the variance. The tremendous gains of
regression adjustment beg the question: How much better can we do by
engineering better features from pre-experiment data, for example by using
machine learning techniques or synthetic controls? Could we even reduce the
variance in our effect sizes arbitrarily close to zero with the right
predictors? Unfortunately, our answer is negative. A simple form of regression
adjustment, which uses just the pre-experiment values of the goal metric,
captures most of the benefit. Specifically, under a mild assumption that
observations closer in time are easier to predict than ones further away in
time, we upper bound the potential gains of more sophisticated feature
engineering, with respect to the gains of this simple form of regression
adjustment. The maximum reduction in variance is $50\%$ in Theorem 1, or
equivalently, the confidence interval width can be reduced by at most an
additional $29\%$.",On the Limits of Regression Adjustment,2023-11-29 21:04:39,"Daniel Ting, Kenneth Hung","http://arxiv.org/abs/2311.17858v1, http://arxiv.org/pdf/2311.17858v1",stat.ME
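A minimal sketch of the simple form of regression adjustment the abstract refers to, using only the pre-experiment value of the goal metric as the covariate; the simulated experiment and variable names are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)                        # pre-experiment value of the goal metric
treat = rng.integers(0, 2, size=n)            # random assignment
y = x + 0.1 * treat + rng.normal(size=n)      # in-experiment goal metric

# CUPED / regression adjustment with the pre-experiment metric as the only feature.
theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
y_adj = y - theta * (x - x.mean())

def diff_in_means(outcome):
    d = outcome[treat == 1].mean() - outcome[treat == 0].mean()
    se = np.sqrt(outcome[treat == 1].var(ddof=1) / (treat == 1).sum()
                 + outcome[treat == 0].var(ddof=1) / (treat == 0).sum())
    return d, se

print(diff_in_means(y))        # unadjusted effect estimate and standard error
print(diff_in_means(y_adj))    # adjusted estimate: markedly smaller standard error here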
30811,em,"This paper develops a novel nonparametric identification method for treatment
effects in settings where individuals self-select into treatment sequences. I
propose an identification strategy which relies on a dynamic version of
standard Instrumental Variables (IV) assumptions and builds on a dynamic
version of the Marginal Treatment Effects (MTE) as the fundamental building
block for treatment effects. The main contribution of the paper is to relax
assumptions on the support of the observed variables and on unobservable gains
of treatment that are present in the dynamic treatment effects literature.
A close-to-application Monte Carlo simulation study illustrates the desirable
finite-sample performance of a sieve estimator for MTEs and Average Treatment
Effects (ATEs).",Identification in Endogenous Sequential Treatment Regimes,2023-11-30 16:47:49,Pedro Picchetti,"http://arxiv.org/abs/2311.18555v1, http://arxiv.org/pdf/2311.18555v1",econ.EM
30812,em,"Numerous studies use regression discontinuity design (RDD) for panel data by
assuming that the treatment effects are homogeneous across all
individuals/groups and pooling the data together. It is unclear how to test for
the significance of treatment effects when the treatments vary across
individuals/groups and the error terms may exhibit complicated dependence
structures. This paper examines the estimation and inference of multiple
treatment effects when the errors are not independent and identically
distributed, and the treatment effects vary across individuals/groups. We
derive a simple analytical expression for approximating the variance-covariance
structure of the treatment effect estimators under general dependence
conditions and propose two test statistics, one is to test for the overall
significance of the treatment effect and the other for the homogeneity of the
treatment effects. We find that in the Gaussian approximations to the test
statistics, the dependence structures in the data can be safely ignored due to
the localized nature of the statistics. This has the important implication that
the simulated critical values can be easily obtained. Simulations demonstrate
our tests have superb size control and reasonable power performance in finite
samples regardless of the presence of strong cross-section dependence or/and
weak serial dependence in the data. We apply our tests to two datasets and find
significant overall treatment effects in each case.",Tests for Many Treatment Effects in Regression Discontinuity Panel Data Models,2023-12-02 18:52:24,"Likai Chen, Georg Keilbar, Liangjun Su, Weining Wang","http://arxiv.org/abs/2312.01162v1, http://arxiv.org/pdf/2312.01162v1",econ.EM
30813,em,"This paper proposes a new Bayesian machine learning model that can be applied
to large datasets arising in macroeconomics. Our framework sums over many
simple two-component location mixtures. The transition between components is
determined by a logistic function that depends on a single threshold variable
and two hyperparameters. Each of these individual models only accounts for a
minor portion of the variation in the endogenous variables. But many of them
are capable of capturing arbitrary nonlinear conditional mean relations.
Conjugate priors enable fast and efficient inference. In simulations, we show
that our approach produces accurate point and density forecasts. In a real-data
exercise, we forecast US macroeconomic aggregates and consider the nonlinear
effects of financial shocks in a large-scale nonlinear VAR.",Bayesian Nonlinear Regression using Sums of Simple Functions,2023-12-04 16:24:46,Florian Huber,"http://arxiv.org/abs/2312.01881v1, http://arxiv.org/pdf/2312.01881v1",econ.EM
30814,em,"We analyze the qualitative differences between prices of double barrier
no-touch options in the Heston model and pure jump KoBoL model calibrated to
the same set of the empirical data, and discuss the potential for arbitrage
opportunities if the correct model is a pure jump model. We explain and
demonstrate with numerical examples that accurate and fast calculations of
prices of double barrier options in jump models are extremely difficult using
the numerical methods available in the literature. We develop a new efficient
method (GWR-SINH method) based on the Gaver-Wynn-Rho acceleration applied to
the Bromwich integral; the SINH-acceleration and simplified trapezoid rule are
used to evaluate perpetual double barrier options for each value of the
spectral parameter in GWR-algorithm. The program in Matlab running on a Mac
with moderate characteristics achieves precision of the order of E-5 or better
in several dozen milliseconds; precision of the order of E-07 is
achievable in about 0.1 sec. We outline the extension of GWR-SINH method to
regime-switching models and models with stochastic parameters and stochastic
interest rates.","Alternative models for FX, arbitrage opportunities and efficient pricing of double barrier options in Lévy models",2023-12-07 00:26:58,"Svetlana Boyarchenko, Sergei Levendorskii","http://arxiv.org/abs/2312.03915v1, http://arxiv.org/pdf/2312.03915v1",q-fin.CP
30815,em,"This study presents a novel approach to assessing food security risks at the
national level, employing a probabilistic scenario-based framework that
integrates both Shared Socioeconomic Pathways (SSP) and Representative
Concentration Pathways (RCP). This innovative method allows each scenario,
encompassing socio-economic and climate factors, to be treated as a model
capable of generating diverse trajectories. This approach offers a more dynamic
understanding of food security risks under varying future conditions. The paper
details the methodologies employed, showcasing their applicability through a
focused analysis of food security challenges in Egypt and Ethiopia, and
underscores the importance of considering a spectrum of socio-economic and
climatic factors in national food security assessments.",Probabilistic Scenario-Based Assessment of National Food Security Risks with Application to Egypt and Ethiopia,2023-12-07 19:54:46,"Phoebe Koundouri, Georgios I. Papayiannis, Achilleas Vassilopoulos, Athanasios N. Yannacopoulos","http://arxiv.org/abs/2312.04428v2, http://arxiv.org/pdf/2312.04428v2",econ.EM
30816,em,"This paper addresses a key question in economic forecasting: does pure noise
truly lack predictive power? Economists typically conduct variable selection to
eliminate noises from predictors. Yet, we prove a compelling result that in
most economic forecasts, the inclusion of noises in predictions yields greater
benefits than their exclusion. Furthermore, if the total number of predictors is
not sufficiently large, intentionally adding more noises yields superior
forecast performance, outperforming benchmark predictors relying on dimension
reduction. The intuition lies in economic predictive signals being densely
distributed among regression coefficients, maintaining modest forecast bias
while diversifying away overall variance, even when a significant proportion of
predictors constitute pure noises. One of our empirical demonstrations shows
that intentionally adding 300 to 6,000 pure noise predictors to the Welch and
Goyal (2008) dataset achieves a noteworthy out-of-sample R-squared of 10% in
forecasting the annual U.S. equity premium. The performance surpasses the
majority of sophisticated machine learning models.",Economic Forecasts Using Many Noises,2023-12-09 18:17:19,"Yuan Liao, Xinjie Ma, Andreas Neuhierl, Zhentao Shi","http://arxiv.org/abs/2312.05593v2, http://arxiv.org/pdf/2312.05593v2",econ.EM
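A stylized sketch of the noise-augmentation idea on simulated data with dense, weak signals; the ridge forecaster, dimensions, and penalty are assumptions for illustration and do not reproduce the paper's estimator or its empirical result.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n, p, p_noise = 200, 50, 500
beta = rng.normal(scale=0.1, size=p)          # dense, weak signals
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

X_aug = np.hstack([X, rng.normal(size=(n, p_noise))])   # intentionally appended pure noise

train, test = slice(0, 150), slice(150, n)
fit_base = Ridge(alpha=10.0).fit(X[train], y[train])
fit_aug = Ridge(alpha=10.0).fit(X_aug[train], y[train])
print("out-of-sample R^2, original predictors:", fit_base.score(X[test], y[test]))
print("out-of-sample R^2, with added noises  :", fit_aug.score(X_aug[test], y[test]))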
30817,em,"The presence of units with extreme values in the dependent and/or independent
variables (i.e., vertical outliers, leveraged data) has the potential to
severely bias regression coefficients and/or standard errors. This is common
with short panel data because the researcher cannot rely on asymptotic theory.
Examples include cross-country studies, cell-group analyses, and field or
laboratory experimental studies, where the researcher is forced to use few
cross-sectional observations repeated over time due to the structure of the
data or research design. Available diagnostic tools may fail to properly detect
these anomalies, because they are not designed for panel data. In this paper,
we formalise statistical measures for panel data models with fixed effects to
quantify the degree of leverage and outlyingness of units, and the joint and
conditional influences of pairs of units. We first develop a method to visually
detect anomalous units in a panel data set, and identify their type. Second, we
investigate the effect of these units on LS estimates, and on other units'
influence on the estimated parameters. To illustrate and validate the proposed
method, we use a synthetic data set contaminated with different types of
anomalous units. We also provide an empirical example.",Influence Analysis with Panel Data,2023-12-10 01:44:09,Annalivia Polselli,"http://arxiv.org/abs/2312.05700v1, http://arxiv.org/pdf/2312.05700v1",econ.EM
30818,em,"Without a credible control group, the most widespread methodologies for
estimating causal effects cannot be applied. To fill this gap, we propose the
Machine Learning Control Method (MLCM), a new approach for causal panel
analysis based on counterfactual forecasting with machine learning. The MLCM
estimates policy-relevant causal parameters in short- and long-panel settings
without relying on untreated units. We formalize identification in the
potential outcomes framework and then provide estimation based on supervised
machine learning algorithms. To illustrate the advantages of our estimator, we
present simulation evidence and an empirical application on the impact of the
COVID-19 crisis on educational inequality in Italy. We implement the proposed
method in the companion R package MachineControl.",The Machine Learning Control Method for Counterfactual Forecasting,2023-12-10 14:45:31,"Augusto Cerqua, Marco Letta, Fiammetta Menchetti","http://arxiv.org/abs/2312.05858v1, http://arxiv.org/pdf/2312.05858v1",econ.EM
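A minimal sketch of counterfactual forecasting with machine learning in the spirit of the approach above (the authors' implementation is the R package MachineControl); the simulated panel and the gradient-boosting learner are illustrative assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n_units, T0, T1 = 100, 8, 2                   # pre- and post-treatment periods
panel = np.cumsum(rng.normal(size=(n_units, T0 + T1)), axis=1)
panel[:, T0:] += 1.5                          # all units are treated after period T0

# Learn the mapping from a unit's pre-treatment history to its next outcome using
# pre-treatment data only, then forecast the no-treatment counterfactual.
X_pre, y_pre = panel[:, :T0 - 1], panel[:, T0 - 1]
learner = GradientBoostingRegressor(random_state=0).fit(X_pre, y_pre)
y0_hat = learner.predict(panel[:, 1:T0])      # counterfactual for the first post period
att = (panel[:, T0] - y0_hat).mean()
print("estimated average effect in the first post-treatment period:", att)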
30819,em,"We investigate the dynamics of the update of subjective homicide
victimization risk after an informational shock by developing two econometric
models able to accommodate both optimal decisions of changing prior
expectations which enable us to rationalize skeptical Bayesian agents with
their disregard to new information. We apply our models to a unique household
data (N = 4,030) that consists of socioeconomic and victimization expectation
variables in Brazil, coupled with an informational ``natural experiment''
brought by the sample design methodology, which randomized interviewers to
interviewees. The higher individuals set their priors about their own subjective
homicide victimization risk, the more likely they are to change their initial
perceptions. In the case of an update, we find that elders and females are
more reluctant to change priors and choose the new response level. In addition,
even though the respondents' level of education is not significant, the
interviewers' level of education has a key role in changing and updating
decisions. The results show that our econometric approach fits the available
empirical evidence reasonably well, stressing the salient role that heterogeneity,
as captured by the individual characteristics of interviewees and interviewers,
plays in belief updating and its absence, that is, skepticism. Furthermore, we can
rationalize skeptics through an informational quality/credibility argument.","Individual Updating of Subjective Probability of Homicide Victimization: a ""Natural Experiment'' on Risk Communication",2023-12-13 17:31:27,"José Raimundo Carvalho, Diego de Maria André, Yuri Costa","http://arxiv.org/abs/2312.08171v1, http://arxiv.org/pdf/2312.08171v1",econ.EM
30820,em,"Many current approaches to shrinkage within the time-varying parameter
framework assume that each state is equipped with only one innovation variance
for all time points. Sparsity is then induced by shrinking this variance
towards zero. We argue that this is not sufficient if the states display large
jumps or structural changes, something which is often the case in time series
analysis. To remedy this, we propose the dynamic triple gamma prior, a
stochastic process that has a well-known triple gamma marginal form, while
still allowing for autocorrelation. Crucially, the triple gamma has many
interesting limiting and special cases (including the horseshoe shrinkage
prior) which can also be chosen as the marginal distribution. Not only is the
marginal form well understood; we also derive many interesting properties of
the dynamic triple gamma, which showcase its dynamic shrinkage characteristics.
We develop an efficient Markov chain Monte Carlo algorithm to sample from the
posterior and demonstrate the performance through sparse covariance modeling
and forecasting of the returns of the components of the EURO STOXX 50 index.",The Dynamic Triple Gamma Prior as a Shrinkage Process Prior for Time-Varying Parameter Models,2023-12-16 18:46:32,"Peter Knaus, Sylvia Frühwirth-Schnatter","http://arxiv.org/abs/2312.10487v1, http://arxiv.org/pdf/2312.10487v1",econ.EM
30836,em,"In this chapter we discuss conceptually high dimensional sparse econometric
models as well as estimation of these models using L1-penalization and
post-L1-penalization methods. Focusing on linear and nonparametric regression
frameworks, we discuss various econometric examples, present basic theoretical
results, and illustrate the concepts and methods with Monte Carlo simulations
and an empirical application. In the application, we examine and confirm the
empirical validity of the Solow-Swan model for international economic growth.",High Dimensional Sparse Econometric Models: An Introduction,2011-06-26 21:21:14,"Alexandre Belloni, Victor Chernozhukov","http://arxiv.org/abs/1106.5242v2, http://arxiv.org/pdf/1106.5242v2",stat.AP
30821,em,"We propose a family of weighted statistics based on the CUSUM process of the
WLS residuals for the online detection of changepoints in a Random Coefficient
Autoregressive model, using both the standard CUSUM and the Page-CUSUM process.
We derive the asymptotics under the null of no changepoint for all possible
weighting schemes, including the case of the standardised CUSUM, for which we
derive a Darling-Erdos-type limit theorem; our results guarantee the
procedure-wise size control under both an open-ended and a closed-ended
monitoring. In addition to considering the standard RCA model with no
covariates, we also extend our results to the case of exogenous regressors. Our
results can be applied irrespective of (and with no prior knowledge required as
to) whether the observations are stationary or not, and irrespective of whether
they change into a stationary or nonstationary regime. Hence, our methodology
is particularly suited to detect the onset, or the collapse, of a bubble or an
epidemic. Our simulations show that our procedures, especially when
standardising the CUSUM process, can ensure very good size control and short
detection delays. We complement our theory by studying the online detection of
breaks in epidemiological and housing prices series.",Real-time monitoring with RCA models,2023-12-19 00:10:10,"Lajos Horváth, Lorenzo Trapani","http://arxiv.org/abs/2312.11710v1, http://arxiv.org/pdf/2312.11710v1",stat.ME
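A minimal sketch of open-ended CUSUM monitoring of one-step-ahead residuals for an autoregression; plain OLS stands in for the WLS fit, and the boundary constant is illustrative rather than the asymptotic critical value derived in the paper.

import numpy as np

rng = np.random.default_rng(4)
m, horizon = 200, 300                      # training size and monitoring horizon
y = np.zeros(m + horizon)
for t in range(1, m + horizon):
    rho = 0.5 if t < m + 150 else 1.05     # the regime turns explosive during monitoring
    y[t] = rho * y[t - 1] + rng.normal()

# Fit the autoregressive coefficient on the training sample (OLS stands in for WLS).
rho_hat = (y[1:m] @ y[:m - 1]) / (y[:m - 1] @ y[:m - 1])
sigma_hat = (y[1:m] - rho_hat * y[:m - 1]).std(ddof=1)

# Open-ended monitoring: cumulate one-step-ahead residuals and compare the CUSUM
# with a boundary that grows with the monitoring horizon.
crit = 2.0                                 # illustrative constant, not the paper's critical value
cusum = 0.0
for k in range(1, horizon):
    cusum += y[m + k] - rho_hat * y[m + k - 1]
    if abs(cusum) > crit * sigma_hat * np.sqrt(m) * (1 + k / m):
        print("changepoint flagged at monitoring step", k)
        break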
30822,em,"Improving the productivity of the agricultural sector is part of one of the
Sustainable Development Goals set by the United Nations. To this end, many
international organizations have funded training and technology transfer
programs that aim to promote productivity and income growth, fight poverty and
enhance food security among smallholder farmers in developing countries.
Stochastic production frontier analysis can be a useful tool when evaluating
the effectiveness of these programs. However, accounting for treatment
endogeneity, often intrinsic to these interventions, only recently has received
any attention in the stochastic frontier literature. In this work, we extend
the classical maximum likelihood estimation of stochastic production frontier
models by allowing both the production frontier and inefficiency to depend on a
potentially endogenous binary treatment. We use instrumental variables to
define an assignment mechanism for the treatment, and we explicitly model the
density of the first and second-stage composite error terms. We provide
empirical evidence of the importance of controlling for endogeneity in this
setting using farm-level data from a soil conservation program in El Salvador.",Binary Endogenous Treatment in Stochastic Frontier Models with an Application to Soil Conservation in El Salvador,2023-12-21 18:32:31,"Samuele Centorrino, Maria Pérez-Urdiales, Boris Bravo-Ureta, Alan J. Wall","http://arxiv.org/abs/2312.13939v1, http://arxiv.org/pdf/2312.13939v1",econ.EM
30823,em,"We use house prices (HP) and house price indices (HPI) as a proxy to income
distribution. Specifically, we analyze sale prices in the 1970-2010 window of
over 116,000 single-family homes in Hamilton County, Ohio, including Cincinnati
metro area of about 2.2 million people. We also analyze HPI, published by
Federal Housing Finance Agency (FHFA), for nearly 18,000 US ZIP codes that
cover a period of over 40 years starting in the 1980s. If HP can be viewed as a
first derivative of income, HPI can be viewed as its second derivative. We use
generalized beta (GB) family of functions to fit distributions of HP and HPI
since GB naturally arises from the models of economic exchange described by
stochastic differential equations. Our main finding is that HP and multi-year
HPI exhibit a negative Dragon King (nDK) behavior, wherein power-law
distribution tail gives way to an abrupt decay to a finite upper limit value,
which is similar to our recent findings for realized volatility of S\&P500
index in the US stock market. This type of tail behavior is best fitted by a
modified GB (mGB) distribution. Tails of single-year HPI appear to show more
consistency with power-law behavior, which is better described by a GB Prime
(GB2) distribution. We supplement full distribution fits by mGB and GB2 with
direct linear fits (LF) of the tails. Our numerical procedure relies on
evaluation of confidence intervals (CI) of the fits, as well as of p-values
that give the likelihood that data come from the fitted distributions.",Exploring Distributions of House Prices and House Price Indices,2023-12-22 01:38:24,"Jiong Liu, Hamed Farahani, R. A. Serota","http://arxiv.org/abs/2312.14325v1, http://arxiv.org/pdf/2312.14325v1",econ.EM
30824,em,"Studies using instrumental variables (IV) often assess the validity of their
identification assumptions using falsification tests. However, these tests are
often carried out in an ad-hoc manner, without theoretical foundations. In this
paper, we establish a theoretical framework for negative control tests, the
predominant category of falsification tests for IV designs. These tests are
conditional independence tests between negative control variables and either
the IV or the outcome (e.g., examining the ``effect'' on the lagged outcome).
We introduce a formal definition for threats to IV exogeneity (alternative path
variables) and characterize the necessary conditions that proxy variables for
such unobserved threats must meet to serve as negative controls. The theory
highlights prevalent errors in the implementation of negative control tests and
how they could be corrected. Our theory can also be used to design new
falsification tests by identifying appropriate negative control variables,
including currently underutilized types, and suggesting alternative statistical
tests. The theory shows that all negative control tests assess IV exogeneity.
However, some commonly used tests simultaneously evaluate the 2SLS functional
form assumptions. Lastly, we show that while negative controls are useful for
detecting biases in IV designs, their capacity to correct or quantify such
biases requires additional non-trivial assumptions.",Negative Controls for Instrumental Variable Designs,2023-12-25 09:14:40,"Oren Danieli, Daniel Nevo, Itai Walk, Bar Weinstein, Dan Zeltzer","http://arxiv.org/abs/2312.15624v1, http://arxiv.org/pdf/2312.15624v1",econ.EM
30825,em,"With the violation of the assumption of homoskedasticity, least squares
estimators of the variance become inefficient and statistical inference
conducted with invalid standard errors leads to misleading rejection rates.
Despite a vast cross-sectional literature on the downward bias of robust
standard errors, the problem is not extensively covered in the panel data
framework. We investigate the consequences of the simultaneous presence of
small sample size, heteroskedasticity and data points that exhibit extreme
values in the covariates ('good leverage points') on the statistical inference.
Focusing on one-way linear panel data models, we examine asymptotic and finite
sample properties of a battery of heteroskedasticity-consistent estimators
using Monte Carlo simulations. We also propose a hybrid estimator of the
variance-covariance matrix. Results show that conventional standard errors are
always dominated by more conservative estimators of the variance, especially in
small samples. In addition, all types of HC standard errors show excellent
performance in terms of size and power of tests under homoskedasticity.",Robust Inference in Panel Data Models: Some Effects of Heteroskedasticity and Leveraged Data in Small Samples,2023-12-29 19:43:19,Annalivia Polselli,"http://arxiv.org/abs/2312.17676v1, http://arxiv.org/pdf/2312.17676v1",econ.EM
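A minimal sketch of a heteroskedasticity-consistent variance for the within (fixed-effects) estimator in a small simulated panel; the design and the simple HC1-style correction are illustrative and do not reproduce the hybrid estimator proposed in the paper.

import numpy as np

rng = np.random.default_rng(5)
N, T = 10, 6                                   # a short panel
alpha = rng.normal(size=N)                     # unit fixed effects
x = rng.normal(size=(N, T))
e = rng.normal(size=(N, T)) * (1 + np.abs(x))  # heteroskedastic errors
y = 2.0 * x + alpha[:, None] + e

# Within (fixed-effects) transformation and pooled OLS on the transformed data.
x_w = (x - x.mean(axis=1, keepdims=True)).ravel()
y_w = (y - y.mean(axis=1, keepdims=True)).ravel()
beta_hat = (x_w @ y_w) / (x_w @ x_w)
u = y_w - beta_hat * x_w

# Sandwich variance with squared residuals and a crude HC1-style scaling
# (degrees-of-freedom corrections for the N fixed effects are ignored here).
n_obs = x_w.size
var_hc = np.sum(x_w ** 2 * u ** 2) / (x_w @ x_w) ** 2 * n_obs / (n_obs - 1)
print("slope estimate:", beta_hat, " HC-type standard error:", np.sqrt(var_hc))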
30826,em,"We propose a robust hypothesis testing procedure for the predictability of
multiple predictors that could be highly persistent. Our method improves the
popular extended instrumental variable (IVX) testing (Phillips and Lee, 2013;
Kostakis et al., 2015) in that, besides addressing the two bias effects found
in Hosseinkouchack and Demetrescu (2021), we find and deal with the
variance-enlargement effect. We show that two types of higher-order terms
induce these distortion effects in the test statistic, leading to significant
over-rejection for one-sided tests and tests in multiple predictive
regressions. Our improved IVX-based test includes three steps to tackle all the
issues above regarding finite sample bias and variance terms. Thus, the test
statistics perform well in size control, while its power performance is
comparable with the original IVX. Monte Carlo simulations and an empirical
study on the predictability of bond risk premia are provided to demonstrate the
effectiveness of the newly proposed approach.",Robust Inference for Multiple Predictive Regressions with an Application on Bond Risk Premia,2024-01-02 09:56:10,"Xiaosai Liao, Xinjue Li, Qingliang Fan","http://arxiv.org/abs/2401.01064v1, http://arxiv.org/pdf/2401.01064v1",stat.ME
30827,em,"This paper proposes a Heaviside composite optimization approach and presents
a progressive (mixed) integer programming (PIP) method for solving multi-class
classification and multi-action treatment problems with constraints. A
Heaviside composite function is a composite of a Heaviside function (i.e., the
indicator function of either the open $(0,\infty)$ or closed $[0,\infty)$
interval) with a possibly nondifferentiable function.
Modeling-wise, we show how Heaviside composite optimization provides a unified
formulation for learning the optimal multi-class classification and
multi-action treatment rules, subject to rule-dependent constraints stipulating
a variety of domain restrictions. A Heaviside composite function has an
equivalent discrete formulation, and the resulting optimization problem can in
principle be solved by integer programming (IP) methods. Nevertheless, for
constrained learning problems with large data sets, a straightforward
application of off-the-shelf IP solvers is usually ineffective in achieving
global optimality. To alleviate such a computational burden, our major
contribution is the proposal of the PIP method by leveraging the effectiveness
of state-of-the-art IP solvers for problems of modest sizes. We provide the
theoretical advantage of the PIP method with the connection to continuous
optimization and show that the computed solution is locally optimal for a broad
class of Heaviside composite optimization problems. The numerical performance
of the PIP method is demonstrated by extensive computational experimentation.",Classification and Treatment Learning with Constraints via Composite Heaviside Optimization: a Progressive MIP Method,2024-01-03 09:39:18,"Yue Fang, Junyi Liu, Jong-Shi Pang","http://arxiv.org/abs/2401.01565v2, http://arxiv.org/pdf/2401.01565v2",math.OC
30828,em,"This paper discusses pairing double/debiased machine learning (DDML) with
stacking, a model averaging method for combining multiple candidate learners,
to estimate structural parameters. We introduce two new stacking approaches for
DDML: short-stacking exploits the cross-fitting step of DDML to substantially
reduce the computational burden and pooled stacking enforces common stacking
weights over cross-fitting folds. Using calibrated simulation studies and two
applications estimating gender gaps in citations and wages, we show that DDML
with stacking is more robust to partially unknown functional forms than common
alternative approaches based on single pre-selected learners. We provide Stata
and R software implementing our proposals.",Model Averaging and Double Machine Learning,2024-01-03 12:38:13,"Achim Ahrens, Christian B. Hansen, Mark E. Schaffer, Thomas Wiemann","http://arxiv.org/abs/2401.01645v1, http://arxiv.org/pdf/2401.01645v1",econ.EM
30829,em,"Economic models produce moment inequalities, which can be used to form tests
of the true parameters. Confidence sets (CS) of the true parameters are derived
by inverting these tests. However, they often lack analytical expressions,
necessitating a grid search to obtain the CS numerically by retaining the grid
points that pass the test. When the statistic is not asymptotically pivotal,
constructing the critical value for each grid point in the parameter space adds
to the computational burden. In this paper, we convert the computational issue
into a classification problem by using a support vector machine (SVM)
classifier. Its decision function provides a faster and more systematic way of
dividing the parameter space into two regions: inside vs. outside of the
confidence set. We label those points in the CS as 1 and those outside as -1.
Researchers can train the SVM classifier on a grid of manageable size and use
it to determine whether points on denser grids are in the CS or not. We
establish certain conditions for the grid so that there is a tuning that allows
us to asymptotically reproduce the test in the CS. This means that in the
limit, a point is classified as belonging to the confidence set if and only if
it is labeled as 1 by the SVM.",Efficient Computation of Confidence Sets Using Classification on Equidistributed Grids,2024-01-03 19:04:14,Lujie Zhou,"http://arxiv.org/abs/2401.01804v1, http://arxiv.org/pdf/2401.01804v1",econ.EM
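A minimal sketch of the classification idea: label a coarse grid with an (expensive) test, train an SVM classifier, and use it to classify a much denser grid. The toy acceptance region and tuning parameters below are illustrative stand-ins for a moment-inequality test.

import numpy as np
from sklearn.svm import SVC

# Toy stand-in for an expensive moment-inequality test: accept theta inside a disc.
def passes_test(theta):
    return 1 if theta @ theta <= 1.0 else -1

# Label a coarse, manageable grid with the (expensive) test.
coarse = np.array([[a, b] for a in np.linspace(-2, 2, 21)
                          for b in np.linspace(-2, 2, 21)])
labels = np.array([passes_test(th) for th in coarse])

clf = SVC(kernel="rbf", C=10.0).fit(coarse, labels)

# Classify a much denser grid into inside/outside the confidence set at low cost.
dense = np.array([[a, b] for a in np.linspace(-2, 2, 201)
                         for b in np.linspace(-2, 2, 201)])
in_cs = clf.predict(dense) == 1
print("fraction of the dense grid classified as inside the CS:", in_cs.mean())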
30830,em,"Quantile regression is an important tool for estimation of conditional
quantiles of a response Y given a vector of covariates X. It can be used to
measure the effect of covariates not only in the center of a distribution, but
also in the upper and lower tails. This paper develops a theory of quantile
regression in the tails. Specifically, it obtains the large sample properties
of extremal (extreme order and intermediate order) quantile regression
estimators for the linear quantile regression model with the tails restricted
to the domain of minimum attraction and closed under tail equivalence across
regressor values. This modeling setup combines restrictions of extreme value
theory with leading homoscedastic and heteroscedastic linear specifications of
regression analysis. In large samples, extreme order regression quantiles
converge weakly to \argmin functionals of stochastic integrals of Poisson
processes that depend on regressors, while intermediate regression quantiles
and their functionals converge to normal vectors with variance matrices
dependent on the tail parameters and the regressor design.",Extremal quantile regression,2005-05-30 12:57:15,Victor Chernozhukov,"http://dx.doi.org/10.1214/009053604000001165, http://arxiv.org/abs/math/0505639v1, http://arxiv.org/pdf/math/0505639v1",math.ST
30858,em,"The problem of endogeneity in statistics and econometrics is often handled by
introducing instrumental variables (IV) which fulfill the mean independence
assumption, i.e. the unobservable is mean independent of the instruments. When
full independence of IVs and the unobservable is assumed, nonparametric IV
regression models and nonparametric demand models lead to nonlinear integral
equations with unknown integral kernels. We prove convergence rates for the
mean integrated square error of the iteratively regularized Newton method
applied to these problems. Compared to related results we derive stronger
convergence results that rely on weaker nonlinearity restrictions. We
demonstrate in numerical simulations for a nonparametric IV regression that the
method produces better results than the standard model.",Adaptive estimation for some nonparametric instrumental variable models,2015-11-12 19:26:47,Fabian Dunker,"http://arxiv.org/abs/1511.03977v3, http://arxiv.org/pdf/1511.03977v3",stat.CO
30831,em,"This paper proposes a method to address the longstanding problem of lack of
monotonicity in estimation of conditional and structural quantile functions,
also known as the quantile crossing problem. The method consists in sorting or
monotone rearranging the original estimated non-monotone curve into a monotone
rearranged curve. We show that the rearranged curve is closer to the true
quantile curve in finite samples than the original curve, establish a
functional delta method for rearrangement-related operators, and derive
functional limit theory for the entire rearranged curve and its functionals. We
also establish validity of the bootstrap for estimating the limit law of the
entire rearranged curve and its functionals. Our limit results are generic
in that they apply to every estimator of a monotone econometric function,
provided that the estimator satisfies a functional central limit theorem and
the function satisfies some smoothness conditions. Consequently, our results
apply to estimation of other econometric functions with monotonicity
restrictions, such as demand, production, distribution, and structural
distribution functions. We illustrate the results with an application to
estimation of structural quantile functions using data on Vietnam veteran
status and earnings.",Quantile and Probability Curves Without Crossing,2007-04-27 21:58:35,"Victor Chernozhukov, Ivan Fernandez-Val, Alfred Galichon","http://dx.doi.org/10.3982/ECTA7880, http://arxiv.org/abs/0704.3649v3, http://arxiv.org/pdf/0704.3649v3",stat.ME
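A minimal sketch of the rearrangement step on a simulated estimated quantile curve: sorting the estimates across quantile indices restores monotonicity and, as the paper shows, weakly reduces the distance to the true curve. The data-generating process below is illustrative.

import numpy as np

rng = np.random.default_rng(6)
taus = np.linspace(0.05, 0.95, 19)
q_true = np.quantile(rng.normal(size=5000), taus)          # a monotone target curve
q_hat = q_true + rng.normal(scale=0.15, size=taus.size)    # noisy estimate; may cross

q_rearranged = np.sort(q_hat)       # monotone rearrangement across the quantile index

print("decreasing steps before:", int(np.sum(np.diff(q_hat) < 0)))
print("decreasing steps after :", int(np.sum(np.diff(q_rearranged) < 0)))
print("L2 distance to the true curve, before:", np.linalg.norm(q_hat - q_true))
print("L2 distance to the true curve, after :", np.linalg.norm(q_rearranged - q_true))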
30832,em,"Suppose that a target function is monotonic, namely, weakly increasing, and
an original estimate of the target function is available, which is not weakly
increasing. Many common estimation methods used in statistics produce such
estimates. We show that these estimates can always be improved with no harm
using rearrangement techniques: The rearrangement methods, univariate and
multivariate, transform the original estimate to a monotonic estimate, and the
resulting estimate is closer to the true curve in common metrics than the
original estimate. We illustrate the results with a computational example and
an empirical example dealing with age-height growth charts.",Improving Estimates of Monotone Functions by Rearrangement,2007-04-27 17:49:17,"Victor Chernozhukov, Ivan Fernandez-Val, Alfred Galichon","http://arxiv.org/abs/0704.3686v2, http://arxiv.org/pdf/0704.3686v2",stat.ME
30833,em,"Counterfactual distributions are important ingredients for policy analysis
and decomposition analysis in empirical economics. In this article we develop
modeling and inference tools for counterfactual distributions based on
regression methods. The counterfactual scenarios that we consider consist of
ceteris paribus changes in either the distribution of covariates related to the
outcome of interest or the conditional distribution of the outcome given
covariates. For either of these scenarios we derive joint functional central
limit theorems and bootstrap validity results for regression-based estimators
of the status quo and counterfactual outcome distributions. These results allow
us to construct simultaneous confidence sets for function-valued effects of the
counterfactual changes, including the effects on the entire distribution and
quantile functions of the outcome as well as on related functionals. These
confidence sets can be used to test functional hypotheses such as no-effect,
positive effect, or stochastic dominance. Our theory applies to general
counterfactual changes and covers the main regression methods including
classical, quantile, duration, and distribution regressions. We illustrate the
results with an empirical application to wage decompositions using data for the
United States.
  As a part of developing the main results, we introduce distribution
regression as a comprehensive and flexible tool for modeling and estimating the
\textit{entire} conditional distribution. We show that distribution regression
encompasses the Cox duration regression and represents a useful alternative to
quantile regression. We establish functional central limit theorems and
bootstrap validity results for the empirical distribution regression process
and various related functionals.",Inference on Counterfactual Distributions,2009-04-06 18:17:37,"Victor Chernozhukov, Ivan Fernandez-Val, Blaise Melly","http://arxiv.org/abs/0904.0951v6, http://arxiv.org/pdf/0904.0951v6",stat.ME
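A minimal sketch of distribution regression as described above: the conditional distribution is estimated by a sequence of binary (here, logit) regressions of the indicator 1{Y <= y} on covariates over a grid of thresholds. The simulated data and scikit-learn logit are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=(n, 2))
y = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(size=n)

# Distribution regression: one binary regression of 1{Y <= y} on X per threshold y.
thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
x_new = np.array([[0.0, 0.0]])
F_hat = np.array([
    LogisticRegression(max_iter=1000).fit(x, (y <= t).astype(int)).predict_proba(x_new)[0, 1]
    for t in thresholds
])
print(np.round(F_hat, 3))           # estimated conditional distribution F(y | x = 0)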
30834,em,"We develop results for the use of Lasso and Post-Lasso methods to form
first-stage predictions and estimate optimal instruments in linear instrumental
variables (IV) models with many instruments, $p$. Our results apply even when
$p$ is much larger than the sample size, $n$. We show that the IV estimator
based on using Lasso or Post-Lasso in the first stage is root-n consistent and
asymptotically normal when the first-stage is approximately sparse; i.e. when
the conditional expectation of the endogenous variables given the instruments
can be well-approximated by a relatively small set of variables whose
identities may be unknown. We also show the estimator is semi-parametrically
efficient when the structural error is homoscedastic. Notably our results allow
for imperfect model selection, and do not rely upon the unrealistic ""beta-min""
conditions that are widely used to establish validity of inference following
model selection. In simulation experiments, the Lasso-based IV estimator with a
data-driven penalty performs well compared to recently advocated
many-instrument-robust procedures. In an empirical example dealing with the
effect of judicial eminent domain decisions on economic outcomes, the
Lasso-based IV estimator outperforms an intuitive benchmark.
  In developing the IV results, we establish a series of new results for Lasso
and Post-Lasso estimators of nonparametric conditional expectation functions
which are of independent theoretical and practical interest. We construct a
modification of Lasso designed to deal with non-Gaussian, heteroscedastic
disturbances which uses a data-weighted $\ell_1$-penalty function. Using
moderate deviation theory for self-normalized sums, we provide convergence
rates for the resulting Lasso and Post-Lasso estimators that are as sharp as
the corresponding rates in the homoscedastic Gaussian case under the condition
that $\log p = o(n^{1/3})$.",Sparse Models and Methods for Optimal Instruments with an Application to Eminent Domain,2010-10-21 03:49:43,"Alexandre Belloni, Daniel Chen, Victor Chernozhukov, Christian Hansen","http://arxiv.org/abs/1010.4345v5, http://arxiv.org/pdf/1010.4345v5",stat.ME
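A minimal sketch of the Lasso-based first stage on simulated data: the endogenous regressor is predicted from many instruments by Lasso, and the fitted value serves as the instrument in the second stage. Cross-validated penalties stand in for the data-driven, data-weighted penalty developed in the paper.

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(8)
n, p = 500, 200
Z = rng.normal(size=(n, p))                   # many instruments
pi = np.zeros(p)
pi[:3] = 1.0                                  # approximately sparse first stage
v = rng.normal(size=n)
d = Z @ pi + v                                # endogenous regressor
y = 1.5 * d + v + rng.normal(size=n)          # structural equation; endogeneity through v

# First stage: Lasso prediction of d from the instruments.
d_hat = LassoCV(cv=5).fit(Z, d).predict(Z)

# Second stage: use the fitted value as the instrument for d.
beta_iv = (d_hat @ y) / (d_hat @ d)
print("IV estimate of the structural coefficient (true value 1.5):", beta_iv)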
30835,em,"Quantile regression (QR) is a principal regression method for analyzing the
impact of covariates on outcomes. The impact is described by the conditional
quantile function and its functionals. In this paper we develop the
nonparametric QR-series framework, covering many regressors as a special case,
for performing inference on the entire conditional quantile function and its
linear functionals. In this framework, we approximate the entire conditional
quantile function by a linear combination of series terms with
quantile-specific coefficients and estimate the function-valued coefficients
from the data. We develop large sample theory for the QR-series coefficient
process, namely we obtain uniform strong approximations to the QR-series
coefficient process by conditionally pivotal and Gaussian processes. Based on
these strong approximations, or couplings, we develop four resampling methods
(pivotal, gradient bootstrap, Gaussian, and weighted bootstrap) that can be
used for inference on the entire QR-series coefficient function.
  We apply these results to obtain estimation and inference methods for linear
functionals of the conditional quantile function, such as the conditional
quantile function itself, its partial derivatives, average partial derivatives,
and conditional average partial derivatives. Specifically, we obtain uniform
rates of convergence and show how to use the four resampling methods mentioned
above for inference on the functionals. All of the above results are for
function-valued parameters, holding uniformly in both the quantile index and
the covariate value, and covering the pointwise case as a by-product. We
demonstrate the practical utility of these results with an example, where we
estimate the price elasticity function and test the Slutsky condition of the
individual demand for gasoline, as indexed by the individual unobserved
propensity for gasoline consumption.",Conditional Quantile Processes based on Series or Many Regressors,2011-05-31 06:15:37,"Alexandre Belloni, Victor Chernozhukov, Denis Chetverikov, Iván Fernández-Val","http://arxiv.org/abs/1105.6154v4, http://arxiv.org/pdf/1105.6154v4",stat.ME
30837,em,"This article is about estimation and inference methods for high dimensional
sparse (HDS) regression models in econometrics. High dimensional sparse models
arise in situations where many regressors (or series terms) are available and
the regression function is well-approximated by a parsimonious, yet unknown set
of regressors. The latter condition makes it possible to estimate the entire
regression function effectively by searching for approximately the right set of
regressors. We discuss methods for identifying this set of regressors and
estimating their coefficients based on $\ell_1$-penalization and describe key
theoretical results. In order to capture realistic practical situations, we
expressly allow for imperfect selection of regressors and study the impact of
this imperfect selection on estimation and inference results. We focus the main
part of the article on the use of HDS models and methods in the instrumental
variables model and the partially linear model. We present a set of novel
inference results for these models and illustrate their use with applications
to returns to schooling and growth regression.",Inference for High-Dimensional Sparse Econometric Models,2011-12-31 06:31:00,"Alexandre Belloni, Victor Chernozhukov, Christian Hansen","http://arxiv.org/abs/1201.0220v1, http://arxiv.org/pdf/1201.0220v1",stat.ME
30838,em,"We propose robust methods for inference on the effect of a treatment variable
on a scalar outcome in the presence of very many controls. Our setting is a
partially linear model with possibly non-Gaussian and heteroscedastic
disturbances. Our analysis allows the number of controls to be much larger than
the sample size. To make informative inference feasible, we require the model
to be approximately sparse; that is, we require that the effect of confounding
factors can be controlled for up to a small approximation error by conditioning
on a relatively small number of controls whose identities are unknown. The
latter condition makes it possible to estimate the treatment effect by
selecting approximately the right set of controls. We develop a novel
estimation and uniformly valid inference method for the treatment effect in
this setting, called the ""post-double-selection"" method. Our results apply to
Lasso-type methods used for covariate selection as well as to any other model
selection method that is able to find a sparse model with good approximation
properties.
  The main attractive feature of our method is that it allows for imperfect
selection of the controls and provides confidence intervals that are valid
uniformly across a large class of models. In contrast, standard post-model
selection estimators fail to provide uniform inference even in simple cases
with a small, fixed number of controls. Thus our method resolves the problem of
uniform inference after model selection for a large, interesting class of
models. We illustrate the use of the developed methods with numerical
simulations and an application to the effect of abortion on crime rates.",Inference on Treatment Effects After Selection Amongst High-Dimensional Controls,2011-12-31 06:37:19,"Alexandre Belloni, Victor Chernozhukov, Christian Hansen","http://arxiv.org/abs/1201.0224v3, http://arxiv.org/pdf/1201.0224v3",stat.ME
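A minimal sketch of the post-double-selection idea on simulated data: run Lasso of the outcome on the controls and Lasso of the treatment on the controls, then regress the outcome on the treatment and the union of selected controls. Cross-validated penalties and the design are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(9)
n, p = 400, 300
X = rng.normal(size=(n, p))
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)        # treatment depends on a few controls
y = 1.0 * d + X[:, 0] - X[:, 2] + rng.normal(size=n)    # outcome equation, true effect 1.0

# Step 1: select controls that predict the outcome. Step 2: select controls that
# predict the treatment. Step 3: OLS of y on d and the union of the two sets.
sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
union = np.union1d(sel_y, sel_d)

W = np.column_stack([d, X[:, union]])
fit = LinearRegression().fit(W, y)
print("post-double-selection estimate of the treatment effect:", fit.coef_[0])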
30839,em,"We propose dual regression as an alternative to the quantile regression
process for the global estimation of conditional distribution functions under
minimal assumptions. Dual regression provides all the interpretational power of
the quantile regression process while avoiding the need for repairing the
intersecting conditional quantile surfaces that quantile regression often
produces in practice. Our approach introduces a mathematical programming
characterization of conditional distribution functions which, in its simplest
form, is the dual program of a simultaneous estimator for linear location-scale
models. We apply our general characterization to the specification and
estimation of a flexible class of conditional distribution functions, and
present asymptotic theory for the corresponding empirical dual regression
process.",Dual Regression,2012-10-25 22:25:04,"Richard Spady, Sami Stouli","http://arxiv.org/abs/1210.6958v4, http://arxiv.org/pdf/1210.6958v4",stat.ME
30840,em,"We provide a comprehensive semi-parametric study of Bayesian partially
identified econometric models. While the existing literature on Bayesian
partial identification has mostly focused on the structural parameter, our
primary focus is on Bayesian credible sets (BCS's) of the unknown identified
set and the posterior distribution of its support function. We construct a
(two-sided) BCS based on the support function of the identified set. We prove
the Bernstein-von Mises theorem for the posterior distribution of the support
function. This powerful result in turn infers that, while the BCS and the
frequentist confidence set for the partially identified parameter are
asymptotically different, our constructed BCS for the identified set has an
asymptotically correct frequentist coverage probability. Importantly, we
illustrate that the constructed BCS for the identified set does not require a
prior on the structural parameter. It can be computed efficiently for subset
inference, especially when the target of interest is a sub-vector of the
partially identified parameter, where projecting to a low-dimensional subset is
often required. Hence, the proposed methods are useful in many applications.
  The Bayesian partial identification literature has been assuming a known
parametric likelihood function. However, econometric models usually only
identify a set of moment inequalities, and therefore using an incorrect
likelihood function may result in misleading inferences. In contrast, with a
nonparametric prior on the unknown likelihood function, our proposed Bayesian
procedure only requires a set of moment conditions, and can efficiently make
inference about both the partially identified parameter and its identified set.
This makes it widely applicable in general moment inequality models. Finally,
the proposed method is illustrated in a financial asset pricing problem.",Semi-parametric Bayesian Partially Identified Models based on Support Function,2012-12-13 20:58:43,"Yuan Liao, Anna Simoni","http://arxiv.org/abs/1212.3267v2, http://arxiv.org/pdf/1212.3267v2",stat.ME
30841,em,"We derive a Gaussian approximation result for the maximum of a sum of
high-dimensional random vectors. Specifically, we establish conditions under
which the distribution of the maximum is approximated by that of the maximum of
a sum of the Gaussian random vectors with the same covariance matrices as the
original vectors. This result applies when the dimension of random vectors
($p$) is large compared to the sample size ($n$); in fact, $p$ can be much
larger than $n$, without restricting correlations of the coordinates of these
vectors. We also show that the distribution of the maximum of a sum of the
random vectors with unknown covariance matrices can be consistently estimated
by the distribution of the maximum of a sum of the conditional Gaussian random
vectors obtained by multiplying the original vectors with i.i.d. Gaussian
multipliers. This is the Gaussian multiplier (or wild) bootstrap procedure.
Here too, $p$ can be large or even much larger than $n$. These distributional
approximations, either Gaussian or conditional Gaussian, yield a high-quality
approximation to the distribution of the original maximum, often with
approximation error decreasing polynomially in the sample size, and hence are
of interest in many applications. We demonstrate how our Gaussian
approximations and the multiplier bootstrap can be used for modern
high-dimensional estimation, multiple hypothesis testing, and adaptive
specification testing. All these results contain nonasymptotic bounds on
approximation errors.",Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors,2012-12-31 17:14:12,"Victor Chernozhukov, Denis Chetverikov, Kengo Kato","http://dx.doi.org/10.1214/13-AOS1161, http://arxiv.org/abs/1212.6906v6, http://arxiv.org/pdf/1212.6906v6",math.ST
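A minimal sketch of the Gaussian multiplier (wild) bootstrap for the maximum of high-dimensional sums; the data-generating process and dimensions are illustrative.

import numpy as np

rng = np.random.default_rng(10)
n, p, B = 200, 1000, 500
X = rng.standard_exponential(size=(n, p)) - 1.0   # mean-zero, non-Gaussian, p >> n

# Statistic of interest: the maximum of the normalized coordinate-wise sums.
T_stat = np.max(X.sum(axis=0) / np.sqrt(n))

# Gaussian multiplier (wild) bootstrap: multiply centred rows by i.i.d. N(0,1)
# weights and recompute the maximum.
Xc = X - X.mean(axis=0)
T_boot = np.empty(B)
for b in range(B):
    g = rng.normal(size=n)
    T_boot[b] = np.max((g[:, None] * Xc).sum(axis=0) / np.sqrt(n))

print("observed maximum:", T_stat)
print("bootstrap 95% critical value:", np.quantile(T_boot, 0.95))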
30842,em,"We develop uniformly valid confidence regions for regression coefficients in
a high-dimensional sparse median regression model with homoscedastic errors.
Our methods are based on a moment equation that is immunized against
non-regular estimation of the nuisance part of the median regression function
by using Neyman's orthogonalization. We establish that the resulting
instrumental median regression estimator of a target regression coefficient is
asymptotically normally distributed uniformly with respect to the underlying
sparse model and is semi-parametrically efficient. We also generalize our
method to a general non-smooth Z-estimation framework with the number of target
parameters $p_1$ being possibly much larger than the sample size $n$. We extend
Huber's results on asymptotic normality to this setting, demonstrating uniform
asymptotic normality of the proposed estimators over $p_1$-dimensional
rectangles, constructing simultaneous confidence bands on all of the $p_1$
target parameters, and establishing asymptotic validity of the bands uniformly
over underlying approximately sparse models.
  Keywords: Instrument; Post-selection inference; Sparsity; Neyman's Orthogonal
Score test; Uniformly valid inference; Z-estimation.",Uniform Post Selection Inference for LAD Regression and Other Z-estimation problems,2013-04-01 05:29:25,"Alexandre Belloni, Victor Chernozhukov, Kengo Kato","http://dx.doi.org/10.1093/biomet/asu056, http://arxiv.org/abs/1304.0282v6, http://arxiv.org/pdf/1304.0282v6",math.ST
30843,em,"This paper considers generalized linear models in the presence of many
controls. We lay out a general methodology to estimate an effect of interest
based on the construction of an instrument that immunize against model
selection mistakes and apply it to the case of logistic binary choice model.
More specifically we propose new methods for estimating and constructing
confidence regions for a regression parameter of primary interest $\alpha_0$, a
parameter in front of the regressor of interest, such as the treatment variable
or a policy variable. These methods allow estimation of $\alpha_0$ at the
root-$n$ rate when the total number $p$ of other regressors, called controls,
potentially exceeds the sample size $n$ under sparsity assumptions. The sparsity
assumption means that there is a subset of $s<n$ controls which suffices to
accurately approximate the nuisance part of the regression function.
Importantly, the estimators and these resulting confidence regions are valid
uniformly over $s$-sparse models satisfying $s^2\log^2 p = o(n)$ and other
technical conditions. These procedures do not rely on traditional consistent
model selection arguments for their validity. In fact, they are robust with
respect to moderate model selection mistakes in variable selection. Under
suitable conditions, the estimators are semi-parametrically efficient in the
sense of attaining the semi-parametric efficiency bounds for the class of
models in this paper.",Post-Selection Inference for Generalized Linear Models with Many Controls,2013-04-15 05:20:57,"Alexandre Belloni, Victor Chernozhukov, Ying Wei","http://arxiv.org/abs/1304.3969v3, http://arxiv.org/pdf/1304.3969v3",stat.ME
30844,em,"In this supplementary appendix we provide additional results, omitted proofs
and extensive simulations that complement the analysis of the main text
(arXiv:1201.0224).","Supplementary Appendix for ""Inference on Treatment Effects After Selection Amongst High-Dimensional Controls""",2013-05-27 06:27:18,"Alexandre Belloni, Victor Chernozhukov, Christian Hansen","http://arxiv.org/abs/1305.6099v2, http://arxiv.org/pdf/1305.6099v2",math.ST
30845,em,"This paper concerns robust inference on average treatment effects following
model selection. In the selection on observables framework, we show how to
construct confidence intervals based on a doubly-robust estimator that are
robust to model selection errors and prove that they are valid uniformly over a
large class of treatment effect models. The class allows for multivalued
treatments with heterogeneous effects (in observables), general
heteroskedasticity, and selection amongst (possibly) more covariates than
observations. Our estimator attains the semiparametric efficiency bound under
appropriate conditions. Precise conditions are given for any model selector to
yield these results, and we show how to combine data-driven selection with
economic theory. For implementation, we give a specific proposal for selection
based on the group lasso, which is particularly well-suited to treatment
effects data, and derive new results for high-dimensional, sparse multinomial
logistic regression. A simulation study shows our estimator performs very well
in finite samples over a wide range of models. Revisiting the National
Supported Work demonstration data, our method yields accurate estimates and
tight confidence intervals.",Robust Inference on Average Treatment Effects with Possibly More Covariates than Observations,2013-09-18 18:47:27,Max H. Farrell,"http://dx.doi.org/10.1016/j.jeconom.2015.06.017, http://arxiv.org/abs/1309.4686v3, http://arxiv.org/pdf/1309.4686v3",math.ST
30846,em,"We study the problem of nonparametric regression when the regressor is
endogenous, an important problem known as nonparametric instrumental variables (NPIV)
regression in econometrics and a difficult ill-posed inverse problem with
unknown operator in statistics. We first establish a general upper bound on the
sup-norm (uniform) convergence rate of a sieve estimator, allowing for
endogenous regressors and weakly dependent data. This result leads to the
optimal sup-norm convergence rates for spline and wavelet least squares
regression estimators under weakly dependent data and heavy-tailed error terms.
This upper bound also yields the sup-norm convergence rates for sieve NPIV
estimators under i.i.d. data: the rates coincide with the known optimal
$L^2$-norm rates for severely ill-posed problems, and are a power of $\log(n)$
slower than the optimal $L^2$-norm rates for mildly ill-posed problems. We then
establish the minimax risk lower bound in sup-norm loss, which coincides with
our upper bounds on sup-norm rates for the spline and wavelet sieve NPIV
estimators. This sup-norm rate optimality provides another justification for
the wide application of sieve NPIV estimators. Useful results on
weakly-dependent random matrices are also provided.",Optimal Uniform Convergence Rates for Sieve Nonparametric Instrumental Variables Regression,2013-11-02 23:25:13,"Xiaohong Chen, Timothy Christensen","http://arxiv.org/abs/1311.0412v1, http://arxiv.org/pdf/1311.0412v1",math.ST
30847,em,"This work proposes new inference methods for a regression coefficient of
interest in a (heterogeneous) quantile regression model. We consider a
high-dimensional model where the number of regressors potentially exceeds the
sample size, but a subset of them suffices to construct a reasonable
approximation to the conditional quantile function. The proposed methods are
(explicitly or implicitly) based on orthogonal score functions that protect
against moderate model selection mistakes, which are often inevitable in the
approximately sparse model considered in the present paper. We establish the
uniform validity of the proposed confidence regions for the quantile regression
coefficient. Importantly, these methods directly apply to more than one
variable and a continuum of quantile indices. In addition, the performance of
the proposed methods is illustrated through Monte-Carlo experiments and an
empirical example, dealing with risk factors in childhood malnutrition.",Valid Post-Selection Inference in High-Dimensional Approximately Sparse Quantile Regression Models,2013-12-27 06:31:35,"Alexandre Belloni, Victor Chernozhukov, Kengo Kato","http://arxiv.org/abs/1312.7186v4, http://arxiv.org/pdf/1312.7186v4",math.ST
30848,em,"This paper considers the problem of testing many moment inequalities where
the number of moment inequalities, denoted by $p$, is possibly much larger than
the sample size $n$. There is a variety of economic applications where solving
this problem allows one to carry out inference on causal and structural parameters;
a notable example is the market structure model of Ciliberto and Tamer (2009)
where $p=2^{m+1}$ with $m$ being the number of firms that could possibly enter
the market. We consider the test statistic given by the maximum of $p$
Studentized (or $t$-type) inequality-specific statistics, and analyze various
ways to compute critical values for the test statistic. Specifically, we
consider critical values based upon (i) the union bound combined with a
moderate deviation inequality for self-normalized sums, (ii) the multiplier and
empirical bootstraps, and (iii) two-step and three-step variants of (i) and
(ii) by incorporating the selection of uninformative inequalities that are far
from being binding and a novel selection of weakly informative inequalities
that are potentially binding but do not provide first order information. We
prove validity of these methods, showing that under mild conditions, they lead
to tests with the error in size decreasing polynomially in $n$ while allowing
for $p$ being much larger than $n$, indeed $p$ can be of order $\exp (n^{c})$
for some $c > 0$. Importantly, all these results hold without any restriction
on the correlation structure between $p$ Studentized statistics, and also hold
uniformly with respect to suitably large classes of underlying distributions.
Moreover, in the online supplement, we show validity of a test based on the
block multiplier bootstrap in the case of dependent data under some general
mixing conditions.",Inference on causal and structural parameters using many moment inequalities,2013-12-30 04:46:52,"Victor Chernozhukov, Denis Chetverikov, Kengo Kato","http://arxiv.org/abs/1312.7614v6, http://arxiv.org/pdf/1312.7614v6",math.ST
30849,em,"This paper considers inference on functionals of semi/nonparametric
conditional moment restrictions with possibly nonsmooth generalized residuals,
which include all of the (nonlinear) nonparametric instrumental variables (IV) models
as special cases. These models are often ill-posed and hence it is difficult to
verify whether a (possibly nonlinear) functional is root-$n$ estimable or not.
We provide computationally simple, unified inference procedures that are
asymptotically valid regardless of whether a functional is root-$n$ estimable
or not. We establish the following new useful results: (1) the asymptotic
normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a
(possibly nonlinear) functional; (2) the consistency of simple sieve variance
estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square
distribution of the sieve Wald statistic; (3) the asymptotic chi-square
distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test
under the null hypothesis; (4) the asymptotic tight distribution of a
non-optimally weighted sieve QLR statistic under the null; (5) the consistency
of generalized residual bootstrap sieve Wald and QLR tests; (6) local power
properties of sieve Wald and QLR tests and of their bootstrap versions; (7)
asymptotic properties of sieve Wald and SQLR for functionals of increasing
dimension. Simulation studies and an empirical illustration of a nonparametric
quantile IV regression are presented.",Sieve Wald and QLR Inferences on Semi/nonparametric Conditional Moment Models,2014-11-05 06:31:59,"Xiaohong Chen, Demian Pouzo","http://arxiv.org/abs/1411.1144v2, http://arxiv.org/pdf/1411.1144v2",math.ST
30850,em,"This paper establishes consistency of the weighted bootstrap for quadratic
forms $\left( n^{-1/2} \sum_{i=1}^{n} Z_{i,n} \right)^{T}\left( n^{-1/2}
\sum_{i=1}^{n} Z_{i,n} \right)$ where $(Z_{i,n})_{i=1}^{n}$ are mean zero,
independent $\mathbb{R}^{d}$-valued random variables and $d=d(n)$ is allowed to
grow with the sample size $n$, slower than $n^{1/4}$. The proof relies on an
adaptation of the Lindeberg interpolation technique whereby we simplify the
original problem to a Gaussian approximation problem. We apply our bootstrap
results to model-specification testing problems when the number of moments is
allowed to grow with the sample size.",Bootstrap Consistency for Quadratic Forms of Sample Averages with Increasing Dimension,2014-11-11 06:58:55,Demian Pouzo,"http://arxiv.org/abs/1411.2701v4, http://arxiv.org/pdf/1411.2701v4",math.ST
30851,em,"We propose new concepts of statistical depth, multivariate quantiles, ranks
and signs, based on canonical transportation maps between a distribution of
interest on $R^d$ and a reference distribution on the $d$-dimensional unit
ball. The new depth concept, called Monge-Kantorovich depth, specializes to
halfspace depth in the case of spherical distributions, but, for more general
distributions, differs from the latter in the ability for its contours to
account for non convex features of the distribution of interest. We propose
empirical counterparts to the population versions of those Monge-Kantorovich
depth contours, quantiles, ranks and signs, and show their consistency by
establishing a uniform convergence property for empirical transport maps, which
is of independent interest.","Monge-Kantorovich Depth, Quantiles, Ranks, and Signs",2014-12-29 21:32:34,"Victor Chernozhukov, Alfred Galichon, Marc Hallin, Marc Henry","http://arxiv.org/abs/1412.8434v4, http://arxiv.org/pdf/1412.8434v4",math.ST
30852,em,"Here we present an expository, general analysis of valid post-selection or
post-regularization inference about a low-dimensional target parameter,
$\alpha$, in the presence of a very high-dimensional nuisance parameter,
$\eta$, which is estimated using modern selection or regularization methods.
Our analysis relies on high-level, easy-to-interpret conditions that allow one
to clearly see the structures needed for achieving valid post-regularization
inference. Simple, readily verifiable sufficient conditions are provided for a
class of affine-quadratic models. We focus our discussion on estimation and
inference procedures based on using the empirical analog of theoretical
equations $$M(\alpha, \eta)=0$$ which identify $\alpha$. Within this structure,
we show that setting up such equations in a manner such that the
orthogonality/immunization condition $$\partial_\eta M(\alpha, \eta) = 0$$ at
the true parameter values is satisfied, coupled with plausible conditions on
the smoothness of $M$ and the quality of the estimator $\hat \eta$, guarantees
that inference on the main parameter $\alpha$ based on the testing or point
estimation methods discussed below will be regular despite selection or
regularization biases occurring in estimation of $\eta$. In particular, the
estimator of $\alpha$ will often be uniformly consistent at the root-$n$ rate
and uniformly asymptotically normal even though estimators $\hat \eta$ will
generally not be asymptotically linear and regular. The uniformity holds over
large classes of models that do not impose highly implausible ""beta-min""
conditions. We also show that inference can be carried out by inverting tests
formed from Neyman's $C(\alpha)$ (orthogonal score) statistics.","Valid Post-Selection and Post-Regularization Inference: An Elementary, General Approach",2015-01-14 20:06:26,"Victor Chernozhukov, Christian Hansen, Martin Spindler","http://dx.doi.org/10.1146/annurev-economics-012315-015826, http://arxiv.org/abs/1501.03430v3, http://arxiv.org/pdf/1501.03430v3",math.ST
30859,em,"We propose a general framework for regularization in M-estimation problems
under time dependent (absolutely regular-mixing) data which encompasses many of
the existing estimators. We derive non-asymptotic concentration bounds for the
regularized M-estimator. Our results exhibit a variance-bias trade-off, with
the variance term being governed by a novel measure of the complexity of the
parameter set. We also show that the mixing structure affects the variance term
by scaling the number of observations; depending on the decay rate of the
mixing coefficients, this scaling can even affect the asymptotic behavior.
Finally, we propose a data-driven method for choosing the tuning parameters of
the regularized estimator that yields the same (up to constants) concentration
bound as one that optimally balances the (squared) bias and variance terms. We
illustrate the results with several canonical examples.",On the Non-Asymptotic Properties of Regularized M-estimators,2015-12-20 00:05:52,Demian Pouzo,"http://arxiv.org/abs/1512.06290v3, http://arxiv.org/pdf/1512.06290v3",math.ST
30853,em,"Common high-dimensional methods for prediction rely on having either a sparse
signal model, a model in which most parameters are zero and there are a small
number of non-zero parameters that are large in magnitude, or a dense signal
model, a model with no large parameters and very many small non-zero
parameters. We consider a generalization of these two basic models, termed here
a ""sparse+dense"" model, in which the signal is given by the sum of a sparse
signal and a dense signal. Such a structure poses problems for traditional
sparse estimators, such as the lasso, and for traditional dense estimation
methods, such as ridge estimation. We propose a new penalization-based method,
called lava, which is computationally efficient. With suitable choices of
penalty parameters, the proposed method strictly dominates both lasso and
ridge. We derive analytic expressions for the finite-sample risk function of
the lava estimator in the Gaussian sequence model. We also provide a deviation
bound for the prediction risk in the Gaussian regression model with fixed
design. In both cases, we provide Stein's unbiased estimator for lava's
prediction risk. A simulation compares the performance of lava to
lasso, ridge, and elastic net in a regression example using feasible,
data-dependent penalty parameters and illustrates lava's improved performance
relative to these benchmarks.",A lava attack on the recovery of sums of dense and sparse signals,2015-02-11 02:29:16,"Victor Chernozhukov, Christian Hansen, Yuan Liao","http://arxiv.org/abs/1502.03155v2, http://arxiv.org/pdf/1502.03155v2",stat.ME
30854,em,"Non-standard distributional approximations have received considerable
attention in recent years. They often provide more accurate approximations in
small samples, and theoretical improvements in some cases. This paper shows
that the seemingly unrelated ""many instruments asymptotics"" and ""small
bandwidth asymptotics"" share a common structure, where the object determining
the limiting distribution is a V-statistic with a remainder that is an
asymptotically normal degenerate U-statistic. We illustrate how this general
structure can be used to derive new results by obtaining a new asymptotic
distribution of a series estimator of the partially linear model when the
number of terms in the series approximation possibly grows as fast as the
sample size, which we call ""many terms asymptotics"".",Alternative Asymptotics and the Partially Linear Model with Many Regressors,2015-05-28 22:44:22,"Matias D. Cattaneo, Michael Jansson, Whitney K. Newey","http://arxiv.org/abs/1505.08120v1, http://arxiv.org/pdf/1505.08120v1",math.ST
30855,em,"The linear regression model is widely used in empirical work in Economics,
Statistics, and many other disciplines. Researchers often include many
covariates in their linear model specification in an attempt to control for
confounders. We give inference methods that allow for many covariates and
heteroskedasticity. Our results are obtained using high-dimensional
approximations, where the number of included covariates is allowed to grow as
fast as the sample size. We find that all of the usual versions of Eicker-White
heteroskedasticity-consistent standard error estimators for linear models are
inconsistent under these asymptotics. We then propose a new
heteroskedasticity-consistent standard error formula that is fully automatic and robust to
both (conditional) heteroskedasticity of unknown form and the inclusion of possibly
many covariates. We apply our findings to three settings: parametric linear
models with many covariates, linear panel models with many fixed effects, and
semiparametric semi-linear models with many technical regressors. Simulation
evidence consistent with our theoretical results is also provided. The proposed
methods are also illustrated with an empirical application.",Inference in Linear Regression Models with Many Covariates and Heteroskedasticity,2015-07-09 16:13:47,"Matias D. Cattaneo, Michael Jansson, Whitney K. Newey","http://arxiv.org/abs/1507.02493v2, http://arxiv.org/pdf/1507.02493v2",math.ST
30856,em,"The ill-posedness of the inverse problem of recovering a regression function
in a nonparametric instrumental variable model leads to estimators that may
suffer from a very slow, logarithmic rate of convergence. In this paper, we
show that restricting the problem to models with monotone regression functions
and monotone instruments significantly weakens the ill-posedness of the
problem. In stark contrast to the existing literature, the presence of a
monotone instrument implies boundedness of our measure of ill-posedness when
restricted to the space of monotone functions. Based on this result we derive a
novel non-asymptotic error bound for the constrained estimator that imposes
monotonicity of the regression function. For a given sample size, the bound is
independent of the degree of ill-posedness as long as the regression function
is not too steep. As an implication, the bound allows us to show that the
constrained estimator converges at a fast, polynomial rate, independently of
the degree of ill-posedness, in a large, but slowly shrinking neighborhood of
constant functions. Our simulation study demonstrates significant finite-sample
performance gains from imposing monotonicity even when the regression function
is rather far from being a constant. We apply the constrained estimator to the
problem of estimating gasoline demand functions from U.S. data.",Nonparametric instrumental variable estimation under monotonicity,2015-07-19 13:04:41,"Denis Chetverikov, Daniel Wilhelm","http://arxiv.org/abs/1507.05270v1, http://arxiv.org/pdf/1507.05270v1",stat.AP
30857,em,"Nonparametric methods play a central role in modern empirical work. While
they provide inference procedures that are more robust to parametric
misspecification bias, they may be quite sensitive to tuning parameter choices.
We study the effects of bias correction on confidence interval coverage in the
context of kernel density and local polynomial regression estimation, and prove
that bias correction can be preferred to undersmoothing for minimizing coverage
error and increasing robustness to tuning parameter choice. This is achieved
using a novel, yet simple, Studentization, which leads to a new way of
constructing kernel-based bias-corrected confidence intervals. In addition, for
practical cases, we derive coverage error optimal bandwidths and discuss
easy-to-implement bandwidth selectors. For interior points, we show that the
MSE-optimal bandwidth for the original point estimator (before bias correction)
delivers the fastest coverage error decay rate after bias correction when
second-order (equivalent) kernels are employed, but is otherwise suboptimal
because it is too ""large"". Finally, for odd-degree local polynomial regression,
we show that, as with point estimation, coverage error adapts to boundary
points automatically when appropriate Studentization is used; however, the
MSE-optimal bandwidth for the original point estimator is suboptimal. All the
results are established using valid Edgeworth expansions and illustrated with
simulated data. Our findings have important consequences for empirical work as
they indicate that bias-corrected confidence intervals, coupled with
appropriate standard errors, have smaller coverage error and are less sensitive
to tuning parameter choices in practically relevant cases where additional
smoothness is available.",On the Effect of Bias Estimation on Coverage Accuracy in Nonparametric Inference,2015-08-12 19:13:30,"Sebastian Calonico, Matias D. Cattaneo, Max H. Farrell","http://arxiv.org/abs/1508.02973v6, http://arxiv.org/pdf/1508.02973v6",math.ST
31708,gn,"Malaysia is experiencing ever increasing domestic energy consumption. This
study is an attempt at analyzing the changes in sectoral energy intensities in
Malaysia for the period 1995 to 2011. The study quantifies the sectoral total,
direct, and indirect energy intensities to track the sectors that are
responsible for the increasing energy consumption. The energy input-output
model, a frontier method popular among scholars for examining resource
embodiments in goods and services at a sectoral scale, is applied in this
study.",Pursuing More Sustainable Energy Consumption by Analyzing Sectoral Direct and Indirect Energy Use in Malaysia: An Input-Output Analysis,2020-01-07 10:25:24,Mukaramah Harun,"http://arxiv.org/abs/2001.02508v1, http://arxiv.org/pdf/2001.02508v1",econ.GN
30860,em,"We propose a bootstrap-based calibrated projection procedure to build
confidence intervals for single components and for smooth functions of a
partially identified parameter vector in moment (in)equality models. The method
controls asymptotic coverage uniformly over a large class of data generating
processes. The extreme points of the calibrated projection confidence interval
are obtained by extremizing the value of the function of interest subject to a
proper relaxation of studentized sample analogs of the moment (in)equality
conditions. The degree of relaxation, or critical level, is calibrated so that
the function of theta, not theta itself, is uniformly asymptotically covered
with prespecified probability. This calibration is based on repeatedly checking
feasibility of linear programming problems, rendering it computationally
attractive.
  Nonetheless, the program defining an extreme point of the confidence interval
is generally nonlinear and potentially intricate. We provide an algorithm,
based on the response surface method for global optimization, that approximates
the solution rapidly and accurately, and we establish its rate of convergence.
The algorithm is of independent interest for optimization problems with simple
objectives and complicated constraints. An empirical application estimating an
entry game illustrates the usefulness of the method. Monte Carlo simulations
confirm the accuracy of the solution algorithm, the good statistical as well as
computational performance of calibrated projection (including in comparison to
other methods), and the algorithm's potential to greatly accelerate computation
of other confidence intervals.",Confidence Intervals for Projections of Partially Identified Parameters,2016-01-05 20:45:22,"Hiroaki Kaido, Francesca Molinari, Jörg Stoye","http://dx.doi.org/10.3982/ECTA14075, http://arxiv.org/abs/1601.00934v4, http://arxiv.org/pdf/1601.00934v4",math.ST
30861,em,"We discuss efficient Bayesian estimation of dynamic covariance matrices in
multivariate time series through a factor stochastic volatility model. In
particular, we propose two interweaving strategies (Yu and Meng, Journal of
Computational and Graphical Statistics, 20(3), 531-570, 2011) to substantially
accelerate convergence and mixing of standard MCMC approaches. Similar to
marginal data augmentation techniques, the proposed acceleration procedures
exploit non-identifiability issues which frequently arise in factor models. Our
new interweaving strategies are easy to implement and come at almost no extra
computational cost; nevertheless, they can boost estimation efficiency by
several orders of magnitude as is shown in extensive simulation studies. To
conclude, the application of our algorithm to a 26-dimensional exchange rate
data set illustrates the superior performance of the new approach for
real-world data.",Efficient Bayesian Inference for Multivariate Factor Stochastic Volatility Models,2016-02-26 02:01:27,"Gregor Kastner, Sylvia Frühwirth-Schnatter, Hedibert Freitas Lopes","http://dx.doi.org/10.1080/10618600.2017.1322091, http://arxiv.org/abs/1602.08154v3, http://arxiv.org/pdf/1602.08154v3",stat.CO
30862,em,"There are many settings where researchers are interested in estimating
average treatment effects and are willing to rely on the unconfoundedness
assumption, which requires that the treatment assignment be as good as random
conditional on pre-treatment variables. The unconfoundedness assumption is
often more plausible if a large number of pre-treatment variables are included
in the analysis, but this can worsen the performance of standard approaches to
treatment effect estimation. In this paper, we develop a method for de-biasing
penalized regression adjustments to allow sparse regression methods like the
lasso to be used for $\sqrt{n}$-consistent inference of average treatment effects
in high-dimensional linear models. Given linearity, we do not need to assume
that the treatment propensities are estimable, or that the average treatment
effect is a sparse contrast of the outcome model parameters. Rather, in
addition to the standard assumptions used to make lasso regression on the outcome
model consistent under 1-norm error, we only require overlap, i.e., that the
propensity score be uniformly bounded away from 0 and 1. Procedurally, our
method combines balancing weights with a regularized regression adjustment.",Approximate Residual Balancing: De-Biased Inference of Average Treatment Effects in High Dimensions,2016-04-25 07:29:31,"Susan Athey, Guido W. Imbens, Stefan Wager","http://arxiv.org/abs/1604.07125v5, http://arxiv.org/pdf/1604.07125v5",stat.ME
30863,em,"In complicated/nonlinear parametric models, it is generally hard to know
whether the model parameters are point identified. We provide computationally
attractive procedures to construct confidence sets (CSs) for identified sets of
full parameters and of subvectors in models defined through a likelihood or a
vector of moment equalities or inequalities. These CSs are based on level sets
of optimal sample criterion functions (such as likelihood or optimally-weighted
or continuously-updated GMM criteria). The level sets are constructed using
cutoffs that are computed via Monte Carlo (MC) simulations directly from the
quasi-posterior distributions of the criterions. We establish new Bernstein-von
Mises (or Bayesian Wilks) type theorems for the quasi-posterior distributions
of the quasi-likelihood ratio (QLR) and profile QLR in partially-identified
regular models and some non-regular models. These results imply that our MC CSs
have exact asymptotic frequentist coverage for identified sets of full
parameters and of subvectors in partially-identified regular models, and have
valid but potentially conservative coverage in models with reduced-form
parameters on the boundary. Our MC CSs for identified sets of subvectors are
shown to have exact asymptotic coverage in models with singularities. We also
provide results on uniform validity of our CSs over classes of DGPs that
include point and partially identified models. We demonstrate good
finite-sample coverage properties of our procedures in two simulation
experiments. Finally, our procedures are applied to two non-trivial empirical
examples: an airline entry game and a model of trade flows.",Monte Carlo Confidence Sets for Identified Sets,2016-05-02 17:18:33,"Xiaohong Chen, Timothy Christensen, Elie Tamer","http://dx.doi.org/10.3982/ECTA14525, http://arxiv.org/abs/1605.00499v3, http://arxiv.org/pdf/1605.00499v3",stat.ME
30864,em,"This paper develops and implements a nonparametric test of Random Utility
Models. The motivating application is to test the null hypothesis that a sample
of cross-sectional demand distributions was generated by a population of
rational consumers. We test a necessary and sufficient condition for this that
does not rely on any restriction on unobserved heterogeneity or the number of
goods. We also propose and implement a control function approach to account for
endogenous expenditure. An econometric result of independent interest is a test
for linear inequality constraints when these are represented as the vertices of
a polyhedron rather than its faces. An empirical application to the U.K.
Household Expenditure Survey illustrates computational feasibility of the
method in demand problems with 5 goods.",Nonparametric Analysis of Random Utility Models,2016-06-15 18:28:29,"Yuichi Kitamura, Jörg Stoye","http://dx.doi.org/10.3982/ECTA14478, http://arxiv.org/abs/1606.04819v3, http://arxiv.org/pdf/1606.04819v3",math.ST
30865,em,"We propose two types of Quantile Graphical Models (QGMs) --- Conditional
Independence Quantile Graphical Models (CIQGMs) and Prediction Quantile
Graphical Models (PQGMs). CIQGMs characterize the conditional independence of
distributions by evaluating the distributional dependence structure at each
quantile index. As such, CIQGMs can be used for validation of the graph
structure in the causal graphical models (\cite{pearl2009causality,
robins1986new, heckman2015causal}). One main advantage of these models is that
we can apply them to large collections of variables driven by non-Gaussian and
non-separable shocks. PQGMs characterize the statistical dependencies through
the graphs of the best linear predictors under asymmetric loss functions. PQGMs
make weaker assumptions than CIQGMs as they allow for misspecification. Because
of QGMs' ability to handle large collections of variables and focus on specific
parts of the distributions, we can apply them to quantify tail
interdependence. The resulting tail risk network can be used for measuring
systemic risk contributions that help make inroads in understanding
international financial contagion and dependence structures of returns under
downside market movements.
  We develop estimation and inference methods for QGMs focusing on the
high-dimensional case, where the number of variables in the graph is large
compared to the number of observations. For CIQGMs, these methods and results
include valid simultaneous choices of penalty functions, uniform rates of
convergence, and confidence regions that are simultaneously valid. We also
derive analogous results for PQGMs, which include new results for penalized
quantile regressions in high-dimensional settings to handle misspecification,
many controls, and a continuum of additional conditioning events.",Quantile Graphical Models: Prediction and Conditional Independence with Applications to Systemic Risk,2016-07-01 18:19:25,"Alexandre Belloni, Mingli Chen, Victor Chernozhukov","http://arxiv.org/abs/1607.00286v3, http://arxiv.org/pdf/1607.00286v3",math.ST
30866,em,"Bayesian and frequentist criteria are fundamentally different, but often
posterior and sampling distributions are asymptotically equivalent (e.g.,
Gaussian). For the corresponding limit experiment, we characterize the
frequentist size of a certain Bayesian hypothesis test of (possibly nonlinear)
inequalities. If the null hypothesis is that the (possibly
infinite-dimensional) parameter lies in a certain half-space, then the Bayesian
test's size is $\alpha$; if the null hypothesis is a subset of a half-space,
then size is above $\alpha$ (sometimes strictly); and in other cases, size may
be above, below, or equal to $\alpha$. Two examples illustrate our results:
testing stochastic dominance and testing curvature of a translog cost function.",Frequentist size of Bayesian inequality tests,2016-07-01 23:02:05,"David M. Kaplan, Longhao Zhuo","http://dx.doi.org/10.1016/j.jeconom.2020.05.015, http://arxiv.org/abs/1607.00393v3, http://arxiv.org/pdf/1607.00393v3",math.ST
30867,em,"This paper proposes a straightforward algorithm to carry out inference in
large time-varying parameter vector autoregressions (TVP-VARs) with mixture
innovation components for each coefficient in the system. We significantly
decrease the computational burden by approximating the latent indicators that
drive the time-variation in the coefficients with a latent threshold process
that depends on the absolute size of the shocks. The merits of our approach are
illustrated with two applications. First, we forecast the US term structure of
interest rates and demonstrate forecast gains of the proposed mixture
innovation model relative to other benchmark models. Second, we apply our
approach to US macroeconomic data and find significant evidence for
time-varying effects of a monetary policy tightening.",Should I stay or should I go? A latent threshold approach to large-scale mixture innovation models,2016-07-15 17:42:32,"Florian Huber, Gregor Kastner, Martin Feldkircher","http://dx.doi.org/10.1002/jae.2680, http://arxiv.org/abs/1607.04532v5, http://arxiv.org/pdf/1607.04532v5",stat.ME
30868,em,"We report the first ex post study of the economic impact of sea level rise.
We apply two econometric approaches to estimate the past effects of sea level
rise on the economy of the USA, viz. Barro type growth regressions adjusted for
spatial patterns and a matching estimator. The unit of analysis is the 3063 counties of
the USA. We fit growth regressions for 13 time periods and estimate
numerous variants and robustness tests for both the growth regressions and the
matching estimator. Although there is some evidence that sea level rise has a
positive effect on economic growth, in most specifications the estimated
effects are insignificant. We therefore conclude that there is no stable,
significant effect of sea level rise on economic growth. This finding
contradicts previous ex ante studies.",Effects of Sea Level Rise on Economy of the United States,2016-07-21 12:43:56,"Monika Novackova, Richard S. J. Tol","http://arxiv.org/abs/1607.06247v1, http://arxiv.org/pdf/1607.06247v1",q-fin.EC
30869,em,"Many economic and causal parameters depend on nonparametric or high
dimensional first steps. We give a general construction of locally
robust/orthogonal moment functions for GMM, where moment conditions have zero
derivative with respect to first steps. We show that orthogonal moment
functions can be constructed by adding to identifying moments the nonparametric
influence function for the effect of the first step on identifying moments.
Orthogonal moments reduce model selection and regularization bias, as is very
important in many applications, especially for machine learning first steps.
  We give debiased machine learning estimators of functionals of high
dimensional conditional quantiles and of dynamic discrete choice parameters
with high dimensional state variables. We show that adding to identifying
moments the nonparametric influence function provides a general construction of
orthogonal moments, including regularity conditions, and show that the
nonparametric influence function is robust to additional unknown functions on
which it depends. We give a general approach to estimating the unknown
functions in the nonparametric influence function and use it to automatically
debias estimators of functionals of high dimensional conditional location
learners. We give a variety of new doubly robust moment equations and
characterize double robustness. We give general and simple regularity
conditions and apply these for asymptotic inference on functionals of high
dimensional regression quantiles and dynamic discrete choice parameters with
high dimensional state variables.",Locally Robust Semiparametric Estimation,2016-07-30 00:32:27,"Victor Chernozhukov, Juan Carlos Escanciano, Hidehiko Ichimura, Whitney K. Newey, James M. Robins","http://arxiv.org/abs/1608.00033v4, http://arxiv.org/pdf/1608.00033v4",math.ST
30870,em,"In this article the package High-dimensional Metrics (\texttt{hdm}) is
introduced. It is a collection of statistical methods for estimation and
quantification of uncertainty in high-dimensional approximately sparse models.
It focuses on providing confidence intervals and significance testing for
(possibly many) low-dimensional subcomponents of the high-dimensional parameter
vector. Efficient estimators and uniformly valid confidence intervals for
regression coefficients on target variables (e.g., treatment or policy
variable) in a high-dimensional approximately sparse regression model, for
average treatment effect (ATE) and average treatment effect for the treated
(ATET), as well as for extensions of these parameters to the endogenous setting
are provided. Theory grounded, data-driven methods for selecting the
penalization parameter in Lasso regressions under heteroscedastic and
non-Gaussian errors are implemented. Moreover, joint/simultaneous confidence
intervals for regression coefficients of a high-dimensional sparse regression
are implemented. Data sets which have been used in the literature and might be
useful for classroom demonstration and for testing new estimators are included.",hdm: High-Dimensional Metrics,2016-08-01 11:51:31,"Victor Chernozhukov, Chris Hansen, Martin Spindler","http://arxiv.org/abs/1608.00354v1, http://arxiv.org/pdf/1608.00354v1",stat.ME
30872,em,"This paper studies a penalized statistical decision rule for the treatment
assignment problem. Consider the setting of a utilitarian policy maker who must
use sample data to allocate a binary treatment to members of a population,
based on their observable characteristics. We model this problem as a
statistical decision problem where the policy maker must choose a subset of the
covariate space to assign to treatment, out of a class of potential subsets. We
focus on settings in which the policy maker may want to select amongst a
collection of constrained subset classes: examples include choosing the number
of covariates over which to perform best-subset selection, and model selection
when approximating a complicated class via a sieve. We adapt and extend results
from statistical learning to develop the Penalized Welfare Maximization (PWM)
rule. We establish an oracle inequality for the regret of the PWM rule which
shows that it is able to perform model selection over the collection of
available classes. We then use this oracle inequality to derive relevant bounds
on maximum regret for PWM. An important consequence of our results is that we
are able to formalize model-selection using a ""hold-out"" procedure, where the
policy maker would first estimate various policies using half of the data, and
then select the policy which performs the best when evaluated on the other half
of the data.",Model Selection for Treatment Choice: Penalized Welfare Maximization,2016-09-11 18:09:28,"Eric Mbakop, Max Tabord-Meehan","http://arxiv.org/abs/1609.03167v6, http://arxiv.org/pdf/1609.03167v6",math.ST
30873,em,"We propose generalized random forests, a method for non-parametric
statistical estimation based on random forests (Breiman, 2001) that can be used
to fit any quantity of interest identified as the solution to a set of local
moment equations. Following the literature on local maximum likelihood
estimation, our method considers a weighted set of nearby training examples;
however, instead of using classical kernel weighting functions that are prone
to a strong curse of dimensionality, we use an adaptive weighting function
derived from a forest designed to express heterogeneity in the specified
quantity of interest. We propose a flexible, computationally efficient
algorithm for growing generalized random forests, develop a large sample theory
for our method showing that our estimates are consistent and asymptotically
Gaussian, and provide an estimator for their asymptotic variance that enables
valid confidence intervals. We use our approach to develop new methods for
three statistical tasks: non-parametric quantile regression, conditional
average partial effect estimation, and heterogeneous treatment effect
estimation via instrumental variables. A software implementation, grf for R and
C++, is available from CRAN.",Generalized Random Forests,2016-10-05 07:42:13,"Susan Athey, Julie Tibshirani, Stefan Wager","http://arxiv.org/abs/1610.01271v4, http://arxiv.org/pdf/1610.01271v4",stat.ME
30874,em,"This paper considers maximum likelihood (ML) estimation in a large class of
models with hidden Markov regimes. We investigate consistency of the ML
estimator and local asymptotic normality for the models under general
conditions which allow for autoregressive dynamics in the observable process,
Markov regime sequences with covariate-dependent transition matrices, and
possible model misspecification. A Monte Carlo study examines the finite-sample
properties of the ML estimator in correctly specified and misspecified models.
An empirical application is also discussed.",Maximum Likelihood Estimation in Markov Regime-Switching Models with Covariate-Dependent Transition Probabilities,2016-12-15 08:24:01,"Demian Pouzo, Zacharias Psaradakis, Martin Sola","http://arxiv.org/abs/1612.04932v3, http://arxiv.org/pdf/1612.04932v3",math.ST
30875,em,"The Fisher Ideal index, developed to measure price inflation, is applied to
define a population-weighted temperature trend. This method has the advantage
that the trend is representative of the population distribution throughout the
sample, without conflating the trend in the population distribution with the
trend in the temperature. I show that the trend in the global area-weighted
average surface air temperature is different in key details from the
population-weighted trend. I extend the index to include urbanization and the
urban heat island effect. This substantially changes the trend again. I further
extend the index to include international migration, but this has a minor
impact on the trend.",Population and trends in the global mean temperature,2016-12-27 20:15:36,Richard S. J. Tol,"http://arxiv.org/abs/1612.09123v1, http://arxiv.org/pdf/1612.09123v1",q-fin.EC
30876,em,"We extend the Granger-Johansen representation theorems for I(1) and I(2)
vector autoregressive processes to accommodate processes that take values in an
arbitrary complex separable Hilbert space. This more general setting is of
central relevance for statistical applications involving functional time
series. We first obtain a range of necessary and sufficient conditions for a
pole in the inverse of a holomorphic index-zero Fredholm operator pencil to be
of first or second order. Those conditions form the basis for our development
of I(1) and I(2) representations of autoregressive Hilbertian processes.
Cointegrating and attractor subspaces are characterized in terms of the
behavior of the autoregressive operator pencil in a neighborhood of one.",Representation of I(1) and I(2) autoregressive Hilbertian processes,2017-01-27 21:49:53,"Brendan K. Beare, Won-Ki Seo","http://dx.doi.org/10.1017/S0266466619000276, http://arxiv.org/abs/1701.08149v4, http://arxiv.org/pdf/1701.08149v4",math.ST
30877,em,"In the recent years more and more high-dimensional data sets, where the
number of parameters $p$ is high compared to the number of observations $n$ or
even larger, are available for applied researchers. Boosting algorithms
represent one of the major advances in machine learning and statistics in
recent years and are suitable for the analysis of such data sets. While Lasso
has been applied very successfully for high-dimensional data sets in Economics,
boosting has been underutilized in this field, although it has been proven very
powerful in fields like Biostatistics and Pattern Recognition. We attribute
this to missing theoretical results for boosting. The goal of this paper is to
fill this gap and show that boosting is a competitive method for inference of a
treatment effect or instrumental variable (IV) estimation in a high-dimensional
setting. First, we present the $L_2$Boosting with componentwise least squares
algorithm and variants which are tailored for regression problems which are the
workhorse for most Econometric problems. Then we show how $L_2$Boosting can be
used for estimation of treatment effects and IV estimation. We highlight the
methods and illustrate them with simulations and empirical examples. For
further results and technical details we refer to Luo and Spindler (2016, 2017)
and to the online supplement of the paper.",$L_2$Boosting for Economic Applications,2017-02-10 19:35:06,"Ye Luo, Martin Spindler","http://arxiv.org/abs/1702.03244v1, http://arxiv.org/pdf/1702.03244v1",stat.ML
31709,gn,"This study was conducted to examine and understand the spending behavior of
low-income households (B40), namely households with an income of RM3800 and below.
The study focused on the Kubang Pasu District of Kedah.","Ways to Reduce Cost of Living: A Case Study among Low Income Household in Kubang Pasu, Kedah, Malaysia",2020-01-07 09:39:23,Mukaramah Harun,"http://arxiv.org/abs/2001.02509v1, http://arxiv.org/pdf/2001.02509v1",econ.GN
30878,em,"This paper examines the limit properties of information criteria (such as
AIC, BIC, HQIC) for distinguishing between the unit root model and the various
kinds of explosive models. The explosive models include the local-to-unit-root
model, the mildly explosive model and the regular explosive model. Initial
conditions with different order of magnitude are considered. Both the OLS
estimator and the indirect inference estimator are studied. It is found that
BIC and HQIC, but not AIC, consistently select the unit root model when data
come from the unit root model. When data come from the local-to-unit-root
model, both BIC and HQIC select the wrong model with probability approaching 1
while AIC has a positive probability of selecting the right model in the limit.
When data come from the regular explosive model or from the mildly explosive
model in the form of $1+n^{\alpha }/n$ with $\alpha \in (0,1)$, all three
information criteria consistently select the true model. Indirect inference
estimation can increase or decrease the probability for information criteria to
select the right model asymptotically relative to OLS, depending on the
information criteria and the true model. Simulation results confirm our
asymptotic results in finite samples.",Model Selection for Explosive Models,2017-03-08 09:16:28,"Yubo Tao, Jun Yu","http://dx.doi.org/10.1108/S0731-905320200000041003, http://arxiv.org/abs/1703.02720v1, http://arxiv.org/pdf/1703.02720v1",math.ST
30879,em,"This paper uses model symmetries in the instrumental variable (IV) regression
to derive an invariant test for the causal structural parameter. Contrary to
popular belief, we show that there exist model symmetries when equation errors
are heteroskedastic and autocorrelated (HAC). Our theory is consistent with
existing results for the homoskedastic model (Andrews, Moreira, and Stock
(2006) and Chamberlain (2007)). We use these symmetries to propose the
conditional integrated likelihood (CIL) test for the causality parameter in the
over-identified model. Theoretical and numerical findings show that the CIL
test performs well compared to other tests in terms of power and
implementation. We recommend that practitioners use the Anderson-Rubin (AR)
test in the just-identified model, and the CIL test in the over-identified
model.",Optimal Invariant Tests in an Instrumental Variables Regression With Heteroskedastic and Autocorrelated Errors,2017-04-29 23:04:51,"Marcelo J. Moreira, Mahrad Sharifvaghefi, Geert Ridder","http://arxiv.org/abs/1705.00231v2, http://arxiv.org/pdf/1705.00231v2",math.ST
30880,em,"Consider a researcher estimating the parameters of a regression function
based on data for all 50 states in the United States or on data for all visits
to a website. What is the interpretation of the estimated parameters and the
standard errors? In practice, researchers typically assume that the sample is
randomly drawn from a large population of interest and report standard errors
that are designed to capture sampling variation. This is common even in
applications where it is difficult to articulate what that population of
interest is, and how it differs from the sample. In this article, we explore an
alternative approach to inference, which is partly design-based. In a
design-based setting, the values of some of the regressors can be manipulated,
perhaps through a policy intervention. Design-based uncertainty emanates from
lack of knowledge about the values that the regression outcome would have taken
under alternative interventions. We derive standard errors that account for
design-based uncertainty instead of, or in addition to, sampling-based
uncertainty. We show that our standard errors in general are smaller than the
usual infinite-population sampling-based standard errors and provide conditions
under which they coincide.",Sampling-based vs. Design-based Uncertainty in Regression Analysis,2017-06-06 17:11:25,"Alberto Abadie, Susan Athey, Guido W. Imbens, Jeffrey M. Wooldridge","http://arxiv.org/abs/1706.01778v2, http://arxiv.org/pdf/1706.01778v2",math.ST
30881,em,"Bayesian inference for stochastic volatility models using MCMC methods highly
depends on actual parameter values in terms of sampling efficiency. While draws
from the posterior utilizing the standard centered parameterization break down
when the volatility of volatility parameter in the latent state equation is
small, non-centered versions of the model show deficiencies for highly
persistent latent variable series. The novel approach of
ancillarity-sufficiency interweaving has recently been shown to aid in
overcoming these issues for a broad class of multilevel models. In this paper,
we demonstrate how such an interweaving strategy can be applied to stochastic
volatility models in order to greatly improve sampling efficiency for all
parameters and throughout the entire parameter range. Moreover, this method of
""combining best of different worlds"" allows for inference for parameter
constellations that have previously been infeasible to estimate without the
need to select a particular parameterization beforehand.",Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Estimation of Stochastic Volatility Models,2017-06-16 17:11:43,"Gregor Kastner, Sylvia Frühwirth-Schnatter","http://dx.doi.org/10.1016/j.csda.2013.01.002, http://arxiv.org/abs/1706.05280v1, http://arxiv.org/pdf/1706.05280v1",stat.ME
30882,em,"Multinomial choice models are fundamental for empirical modeling of economic
choices among discrete alternatives. We analyze identification of binary and
multinomial choice models when the choice utilities are nonseparable in
observed attributes and multidimensional unobserved heterogeneity with
cross-section and panel data. We show that derivatives of choice probabilities
with respect to continuous attributes are weighted averages of utility
derivatives in cross-section models with exogenous heterogeneity. In the
special case of random coefficient models with an independent additive effect,
we further characterize that the probability derivative at zero is proportional
to the population mean of the coefficients. We extend the identification
results to models with endogenous heterogeneity using either a control function
or panel data. In time stationary panel models with two periods, we find that
differences over time of derivatives of choice probabilities identify utility
derivatives ""on the diagonal,"" i.e. when the observed attributes take the same
values in the two periods. We also show that time stationarity does not
identify structural derivatives ""off the diagonal"" both in continuous and
multinomial choice panel models.",Nonseparable Multinomial Choice Models in Cross-Section and Panel Data,2017-06-26 17:46:45,"Victor Chernozhukov, Iván Fernández-Val, Whitney Newey","http://arxiv.org/abs/1706.08418v2, http://arxiv.org/pdf/1706.08418v2",stat.ME
30889,em,"This paper establishes a general equivalence between discrete choice and
rational inattention models. Matejka and McKay (2015, AER) showed that when
information costs are modelled using the Shannon entropy function, the
resulting choice probabilities in the rational inattention model take the
multinomial logit form. By exploiting convex-analytic properties of the
discrete choice model, we show that when information costs are modelled using a
class of generalized entropy functions, the choice probabilities in any
rational inattention model are observationally equivalent to some additive
random utility discrete choice model and vice versa. Thus any additive random
utility model can be given an interpretation in terms of boundedly rational
behavior. This includes empirically relevant specifications such as the probit
and nested logit models.",Discrete Choice and Rational Inattention: a General Equivalence Result,2017-09-26 19:31:59,"Mogens Fosgerau, Emerson Melo, Andre de Palma, Matthew Shum","http://arxiv.org/abs/1709.09117v1, http://arxiv.org/pdf/1709.09117v1",econ.EM
30883,em,"In this paper we present tools for applied researchers that re-purpose
off-the-shelf methods from the computer-science field of machine learning to
create a ""discovery engine"" for data from randomized controlled trials (RCTs).
The applied problem we seek to solve is that economists invest vast resources
into carrying out RCTs, including the collection of a rich set of candidate
outcome measures. But given concerns about inference in the presence of
multiple testing, economists usually wind up exploring just a small subset of
the hypotheses that the available data could be used to test. This prevents us
from extracting as much information as possible from each RCT, which in turn
impairs our ability to develop new theories or strengthen the design of policy
interventions. Our proposed solution combines the basic intuition of reverse
regression, where the dependent variable of interest now becomes treatment
assignment itself, with methods from machine learning that use the data
themselves to flexibly identify whether there is any function of the outcomes
that predicts (or has signal about) treatment group status. This leads to
correctly-sized tests with appropriate $p$-values, which also have the
important virtue of being easy to implement in practice. One open challenge
that remains with our work is how to meaningfully interpret the signal that
these methods find.",Machine-Learning Tests for Effects on Multiple Outcomes,2017-07-05 20:13:08,"Jens Ludwig, Sendhil Mullainathan, Jann Spiess","http://arxiv.org/abs/1707.01473v2, http://arxiv.org/pdf/1707.01473v2",stat.ML
30884,em,"This paper develops theory for feasible estimators of finite-dimensional
parameters identified by general conditional quantile restrictions, under much
weaker assumptions than previously seen in the literature. This includes
instrumental variables nonlinear quantile regression as a special case. More
specifically, we consider a set of unconditional moments implied by the
conditional quantile restrictions, providing conditions for local
identification. Since estimators based on the sample moments are generally
impossible to compute numerically in practice, we study feasible estimators
based on smoothed sample moments. We propose a method of moments estimator for
exactly identified models, as well as a generalized method of moments estimator
for over-identified models. We establish consistency and asymptotic normality
of both estimators under general conditions that allow for weakly dependent
data and nonlinear structural models. Simulations illustrate the finite-sample
properties of the methods. Our in-depth empirical application concerns the
consumption Euler equation derived from quantile utility maximization.
Advantages of the quantile Euler equation include robustness to fat tails,
decoupling of risk attitude from the elasticity of intertemporal substitution,
and log-linearization without any approximation error. For the four countries
we examine, the quantile estimates of discount factor and elasticity of
intertemporal substitution are economically reasonable for a range of quantiles
above the median, even when two-stage least squares estimates are not
reasonable.",Smoothed GMM for quantile models,2017-07-11 22:01:38,"Luciano de Castro, Antonio F. Galvao, David M. Kaplan, Xin Liu","http://dx.doi.org/10.1016/j.jeconom.2019.04.008, http://arxiv.org/abs/1707.03436v2, http://arxiv.org/pdf/1707.03436v2",math.ST
30885,em,"When comparing two distributions, it is often helpful to learn at which
quantiles or values there is a statistically significant difference. This
provides more information than the binary ""reject"" or ""do not reject"" decision
of a global goodness-of-fit test. Framing our question as multiple testing
across the continuum of quantiles $\tau\in(0,1)$ or values $r\in\mathbb{R}$, we
show that the Kolmogorov--Smirnov test (interpreted as a multiple testing
procedure) achieves strong control of the familywise error rate. However, its
well-known flaw of low sensitivity in the tails remains. We provide an
alternative method that retains such strong control of familywise error rate
while also having even sensitivity, i.e., equal pointwise type I error rates at
each of $n\to\infty$ order statistics across the distribution. Our one-sample
method computes instantly, using our new formula that also instantly computes
goodness-of-fit $p$-values and uniform confidence bands. To improve power, we
also propose stepdown and pre-test procedures that maintain control of the
asymptotic familywise error rate. One-sample and two-sample cases are
considered, as well as extensions to regression discontinuity designs and
conditional distributions. Simulations, empirical examples, and code are
provided.",Comparing distributions by multiple testing across quantiles or CDF values,2017-08-15 22:30:50,"Matt Goldman, David M. Kaplan","http://dx.doi.org/10.1016/j.jeconom.2018.04.003, http://arxiv.org/abs/1708.04658v1, http://arxiv.org/pdf/1708.04658v1",math.ST
30886,em,"Shrinkage estimation usually reduces variance at the cost of bias. But when
we care only about some parameters of a model, I show that we can reduce
variance without incurring bias if we have additional information about the
distribution of covariates. In a linear regression model with homoscedastic
Normal noise, I consider shrinkage estimation of the nuisance parameters
associated with control variables. For at least three control variables and
exogenous treatment, I establish that the standard least-squares estimator is
dominated with respect to squared-error loss in the treatment effect even among
unbiased estimators and even when the target parameter is low-dimensional. I
construct the dominating estimator by a variant of James-Stein shrinkage in a
high-dimensional Normal-means problem. It can be interpreted as an invariant
generalized Bayes estimator with an uninformative (improper) Jeffreys prior in
the target parameter.",Unbiased Shrinkage Estimation,2017-08-22 01:20:31,Jann Spiess,"http://arxiv.org/abs/1708.06436v2, http://arxiv.org/pdf/1708.06436v2",math.ST
30887,em,"The two-stage least-squares (2SLS) estimator is known to be biased when its
first-stage fit is poor. I show that better first-stage prediction can
alleviate this bias. In a two-stage linear regression model with Normal noise,
I consider shrinkage in the estimation of the first-stage instrumental variable
coefficients. For at least four instrumental variables and a single endogenous
regressor, I establish that the standard 2SLS estimator is dominated with
respect to bias. The dominating IV estimator applies James-Stein type shrinkage
in a first-stage high-dimensional Normal-means problem followed by a
control-function approach in the second stage. It preserves invariances of the
structural instrumental variable equations.",Bias Reduction in Instrumental Variable Estimation through First-Stage Shrinkage,2017-08-22 01:38:21,Jann Spiess,"http://arxiv.org/abs/1708.06443v2, http://arxiv.org/pdf/1708.06443v2",math.ST
30888,em,"We show that estimators based on spectral regularization converge to the best
approximation of a structural parameter in a class of nonidentified linear
ill-posed inverse models. Importantly, this convergence holds in the uniform
and Hilbert space norms. We describe several circumstances when the best
approximation coincides with a structural parameter, or at least reasonably
approximates it, and discuss how our results can be useful in the partial
identification setting. Lastly, we document that identification failures have
important implications for the asymptotic distribution of a linear functional
of regularized estimators, which can have a weighted chi-squared component. The
theory is illustrated for various high-dimensional and nonparametric IV
regressions.",Is completeness necessary? Estimation in nonidentified linear models,2017-09-11 19:49:19,"Andrii Babii, Jean-Pierre Florens","http://arxiv.org/abs/1709.03473v4, http://arxiv.org/pdf/1709.03473v4",math.ST
30890,em,"In many macroeconomic applications, confidence intervals for impulse
responses are constructed by estimating VAR models in levels - ignoring
cointegration rank uncertainty. We investigate the consequences of ignoring
this uncertainty. We adapt several methods for handling model uncertainty and
highlight their shortcomings. We propose a new method -
Weighted-Inference-by-Model-Plausibility (WIMP) - that takes rank uncertainty
into account in a data-driven way. In simulations the WIMP outperforms all
other methods considered, delivering intervals that are robust to rank
uncertainty, yet not overly conservative. We also study potential ramifications
of rank uncertainty on applied macroeconomic analysis by re-assessing the
effects of fiscal policy shocks.",Inference for Impulse Responses under Model Uncertainty,2017-09-27 18:35:17,"Lenard Lieb, Stephan Smeekes","http://arxiv.org/abs/1709.09583v3, http://arxiv.org/pdf/1709.09583v3",econ.EM
30891,em,"To quantify uncertainty around point estimates of conditional objects such as
conditional means or variances, parameter uncertainty has to be taken into
account. Attempts to incorporate parameter uncertainty are typically based on
the unrealistic assumption of observing two independent processes, where one is
used for parameter estimation, and the other for conditioning upon. Such
an unrealistic foundation raises the question of whether these intervals are
theoretically justified in a realistic setting. This paper presents an
asymptotic justification for this type of intervals that does not require such
an unrealistic assumption, but relies on a sample-split approach instead. By
showing that our sample-split intervals coincide asymptotically with the
standard intervals, we provide a novel, and realistic, justification for
confidence intervals of conditional objects. The analysis is carried out for a
rich class of time series models.",A Justification of Conditional Confidence Intervals,2017-10-02 16:53:13,"Eric Beutner, Alexander Heinemann, Stephan Smeekes","http://arxiv.org/abs/1710.00643v2, http://arxiv.org/pdf/1710.00643v2",econ.EM
30892,em,"In empirical work it is common to estimate parameters of models and report
associated standard errors that account for ""clustering"" of units, where
clusters are defined by factors such as geography. Clustering adjustments are
typically motivated by the concern that unobserved components of outcomes for
units within clusters are correlated. However, this motivation does not provide
guidance about questions such as: (i) Why should we adjust standard errors for
clustering in some situations but not others? How can we justify the common
practice of clustering in observational studies but not randomized experiments,
or clustering by state but not by gender? (ii) Why is conventional clustering a
potentially conservative ""all-or-nothing"" adjustment, and are there alternative
methods that respond to data and are less conservative? (iii) In what settings
does the choice of whether and how to cluster make a difference? We address
these questions using a framework of sampling and design inference. We argue
that clustering can be needed to address sampling issues if sampling follows a
two-stage process where, in the first stage, a subset of clusters is sampled
from a population of clusters, and in the second stage, units are sampled from
the sampled clusters. Then, clustered standard errors account for the existence
of clusters in the population that we do not see in the sample. Clustering can
be needed to account for design issues if treatment assignment is correlated
with membership in a cluster. We propose new variance estimators to deal with
intermediate settings where conventional cluster standard errors are
unnecessarily conservative and robust standard errors are too small.",When Should You Adjust Standard Errors for Clustering?,2017-10-09 06:18:11,"Alberto Abadie, Susan Athey, Guido Imbens, Jeffrey Wooldridge","http://arxiv.org/abs/1710.02926v4, http://arxiv.org/pdf/1710.02926v4",math.ST
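For reference, the conventional "all-or-nothing" cluster adjustment discussed above is the Liang-Zeger sandwich estimator. A minimal OLS sketch follows; the simulated two-stage data are an assumption, small-sample corrections are omitted, and the paper's proposed intermediate variance estimators are not implemented here.

```python
# Conventional cluster-robust (Liang-Zeger) sandwich variance for OLS.
import numpy as np

def ols_cluster_se(y, X, cluster_ids):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster_ids):
        score = X[cluster_ids == g].T @ resid[cluster_ids == g]
        meat += np.outer(score, score)
    V = XtX_inv @ meat @ XtX_inv                   # sandwich variance
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(3)
G, m = 50, 20                                      # 50 sampled clusters of 20 units
cluster_ids = np.repeat(np.arange(G), m)
x = rng.normal(size=G * m)
y = 1.0 + 0.5 * x + rng.normal(size=G)[cluster_ids] + rng.normal(size=G * m)
X = np.column_stack([np.ones(G * m), x])
beta, se = ols_cluster_se(y, X, cluster_ids)
print("coefficients:", np.round(beta, 3), "clustered SEs:", np.round(se, 3))
```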
30893,em,"In this paper, a unified approach is proposed to derive the exact local
asymptotic power for panel unit root tests, which is one of the most important
issues in the nonstationary panel data literature. The two most widely used panel unit root tests, known as the Levin-Lin-Chu (LLC, Levin, Lin and Chu (2002)) and Im-Pesaran-Shin (IPS, Im, Pesaran and Shin (2003)) tests, are systematically studied in various situations to illustrate our method. Our approach is
characteristic function based, and can be used directly in deriving the moments
of the asymptotic distributions of these test statistics under the null and the
local-to-unity alternatives. For the LLC test, the approach provides an
alternative way to obtain results that can also be derived by existing methods. For the IPS test, we obtain new results, filling a gap in the literature where few results exist because the IPS test is non-admissible. Moreover, our approach has an advantage in deriving Edgeworth expansions of these tests, which are also given in the paper. Simulations are presented to illustrate our theoretical findings.",A Unified Approach on the Local Power of Panel Unit Root Tests,2017-10-09 08:57:06,Zhongwen Liang,"http://arxiv.org/abs/1710.02944v1, http://arxiv.org/pdf/1710.02944v1",econ.EM
30894,em,"This paper characterizes the minimax linear estimator of the value of an
unknown function at a boundary point of its domain in a Gaussian white noise
model under the restriction that the first-order derivative of the unknown
function is Lipschitz continuous (the second-order H\""{o}lder class). The
result is then applied to construct the minimax optimal estimator for the
regression discontinuity design model, where the parameter of interest involves
function values at boundary points.",Minimax Linear Estimation at a Boundary Point,2017-10-18 19:11:13,Wayne Yuan Gao,"http://arxiv.org/abs/1710.06809v1, http://arxiv.org/pdf/1710.06809v1",econ.EM
30895,em,"Ratio of medians or other suitable quantiles of two distributions is widely
used in medical research to compare treatment and control groups or in
economics to compare various economic variables when repeated cross-sectional
data are available. Inspired by the so-called growth incidence curves
introduced in poverty research, we argue that the ratio of quantile functions
is a more appropriate and informative tool to compare two distributions. We
present an estimator for the ratio of quantile functions and develop
corresponding simultaneous confidence bands, which allow one to assess the significance of certain features of the quantile functions ratio. The derived simultaneous confidence bands rely on the asymptotic distribution of the quantile functions
ratio and do not require re-sampling techniques. The performance of the
simultaneous confidence bands is demonstrated in simulations. Analysis of the
expenditure data from Uganda in years 1999, 2002 and 2005 illustrates the
relevance of our approach.",Asymptotic Distribution and Simultaneous Confidence Bands for Ratios of Quantile Functions,2017-10-25 01:28:58,"Fabian Dunker, Stephan Klasen, Tatyana Krivobokova","http://arxiv.org/abs/1710.09009v1, http://arxiv.org/pdf/1710.09009v1",stat.ME
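The point estimator underlying the bands above is simply the ratio of two empirical quantile functions evaluated on a grid of probabilities. The sketch below computes that ratio on simulated expenditure-like data (an assumption); the simultaneous confidence bands require the asymptotic distribution derived in the paper and are not reproduced here.

```python
# Ratio of two empirical quantile functions on a grid of probabilities.
import numpy as np

def quantile_function_ratio(x_num, x_den, probs):
    return np.quantile(x_num, probs) / np.quantile(x_den, probs)

rng = np.random.default_rng(4)
expenditure_later = rng.lognormal(mean=0.2, sigma=0.6, size=3000)    # illustrative data
expenditure_earlier = rng.lognormal(mean=0.0, sigma=0.6, size=3000)
probs = np.linspace(0.05, 0.95, 19)
ratio = quantile_function_ratio(expenditure_later, expenditure_earlier, probs)
print(np.round(ratio, 3))
```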
31710,gn,"This study examines the disparities of infrastructure in four states in
Northern Peninsular Malaysia. This study used a primer data which is collected
by using a face to face interview with a structure questionnaire on head of
household at Kedah, Perlis, Penang and Perak. The list of respondents is
provided by the Department of Statistics of Malaysia (DOS). The Department of
Statistics of Malaysia (DOS) uses the population observation in 2010 to
determine the respondents is provided by the Department of Statistics of
Malaysia (DOS).",Infrastructure Disparities in Northern Malaysia,2020-01-07 09:33:13,Mukaramah Harun,"http://arxiv.org/abs/2001.02510v1, http://arxiv.org/pdf/2001.02510v1",econ.GN
30896,em,"In this paper we study methods for estimating causal effects in settings with
panel data, where some units are exposed to a treatment during some periods and
the goal is estimating counterfactual (untreated) outcomes for the treated
unit/period combinations. We propose a class of matrix completion estimators
that uses the observed elements of the matrix of control outcomes corresponding
to untreated unit/periods to impute the ""missing"" elements of the control
outcome matrix, corresponding to treated units/periods. This leads to a matrix
that well-approximates the original (incomplete) matrix, but has lower
complexity according to the nuclear norm for matrices. We generalize results
from the matrix completion literature by allowing the patterns of missing data
to have a time series dependency structure that is common in social science
applications. We present novel insights concerning the connections between the
matrix completion literature, the literature on interactive fixed effects
models and the literatures on program evaluation under unconfoundedness and
synthetic control methods. We show that all these estimators can be viewed as
focusing on the same objective function. They differ solely in the way they
deal with identification, in some cases solely through regularization (our
proposed nuclear norm matrix completion estimator) and in other cases primarily
through imposing hard restrictions (the unconfoundedness and synthetic control
approaches). The proposed method outperforms unconfoundedness-based or
synthetic control estimators in simulations based on real data.",Matrix Completion Methods for Causal Panel Data Models,2017-10-27 20:34:11,"Susan Athey, Mohsen Bayati, Nikolay Doudchenko, Guido Imbens, Khashayar Khosravi","http://dx.doi.org/10.1080/01621459.2021.1891924, http://arxiv.org/abs/1710.10251v5, http://arxiv.org/pdf/1710.10251v5",math.ST
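A minimal way to see the nuclear-norm estimator above in code is the soft-impute iteration: fill the treated (missing) cells with current fitted values, take an SVD, soft-threshold the singular values, and repeat. The sketch below does exactly that on simulated low-rank data; unit and time fixed effects, the data-driven choice of the penalty, and the paper's other refinements are omitted, and the penalty level is an arbitrary assumption.

```python
# Soft-impute style nuclear-norm-regularized matrix completion.
import numpy as np

def soft_impute(Y, observed, lam, n_iter=200):
    """Y: outcomes matrix; observed: boolean mask of untreated (control) cells."""
    Z = np.zeros_like(Y)                                    # start missing cells at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(np.where(observed, Y, Z), full_matrices=False)
        s = np.maximum(s - lam, 0.0)                         # soft-threshold singular values
        Z = (U * s) @ Vt
    return Z

rng = np.random.default_rng(5)
N, T, r = 60, 40, 3
L = rng.normal(size=(N, r)) @ rng.normal(size=(r, T))       # low-rank control outcomes
Y = L + 0.1 * rng.normal(size=(N, T))
observed = rng.random((N, T)) > 0.2                          # ~20% of cells treated/missing
L_hat = soft_impute(Y, observed, lam=1.0)
rmse = np.sqrt(np.mean((L_hat[~observed] - L[~observed]) ** 2))
print("imputation RMSE on missing cells:", round(rmse, 3))
```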
30897,em,"Artificial intelligence (AI) has achieved superhuman performance in a growing
number of tasks, but understanding and explaining AI remain challenging. This
paper clarifies the connections between machine-learning algorithms to develop
AIs and the econometrics of dynamic structural models through the case studies
of three famous game AIs. Chess-playing Deep Blue is a calibrated value
function, whereas shogi-playing Bonanza is an estimated value function via
Rust's (1987) nested fixed-point method. AlphaGo's ""supervised-learning policy
network"" is a deep neural network implementation of Hotz and Miller's (1993)
conditional choice probability estimation; its ""reinforcement-learning value
network"" is equivalent to Hotz, Miller, Sanders, and Smith's (1994) conditional
choice simulation method. Relaxing these AIs' implicit econometric assumptions
would improve their structural interpretability.","Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo",2017-10-30 17:25:39,Mitsuru Igami,"http://dx.doi.org/10.1093/ectj/utaa005, http://arxiv.org/abs/1710.10967v3, http://arxiv.org/pdf/1710.10967v3",econ.EM
30898,em,"This article contains new tools for studying the shape of the stationary
distribution of sizes in a dynamic economic system in which units experience
random multiplicative shocks and are occasionally reset. Each unit has a
Markov-switching type which influences their growth rate and reset probability.
We show that the size distribution has a Pareto upper tail, with exponent equal
to the unique positive solution to an equation involving the spectral radius of
a certain matrix-valued function. Under a non-lattice condition on growth
rates, an eigenvector associated with the Pareto exponent provides the
distribution of types in the upper tail of the size distribution.",Determination of Pareto exponents in economic models driven by Markov multiplicative processes,2017-12-05 04:02:17,"Brendan K. Beare, Alexis Akira Toda","http://dx.doi.org/10.3982/ECTA17984, http://arxiv.org/abs/1712.01431v5, http://arxiv.org/pdf/1712.01431v5",econ.EM
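To make the spectral-radius equation above concrete, the stylized sketch below solves rho(M(zeta)) = 1 for a two-type example. The specific matrix-valued function used, M(s) with entries (1 - q) P_ij E[G_j^s] under lognormal growth and reset probability q, is an assumption chosen purely for illustration; the paper derives its own equation from the model primitives.

```python
# Stylized two-type example: solve for the Pareto exponent as the root of a
# spectral-radius equation (the matrix form here is an illustrative assumption).
import numpy as np
from scipy.optimize import brentq

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])              # type transition probabilities
mu = np.array([-0.02, -0.05])           # mean log growth by type
sigma = np.array([0.10, 0.10])          # volatility of log growth by type
q = 0.10                                # reset probability

def spectral_radius_minus_one(s):
    moments = np.exp(mu * s + 0.5 * (sigma * s) ** 2)       # E[G_j^s] for lognormal G_j
    M = (1 - q) * P * moments[None, :]
    return np.max(np.abs(np.linalg.eigvals(M))) - 1.0

zeta = brentq(spectral_radius_minus_one, 0.1, 50.0)          # unique positive root
print("Pareto tail exponent in this stylized example:", round(zeta, 3))
```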
30899,em,"This paper illustrates how one can deduce preference from observed choices
when attention is not only limited but also random. In contrast to earlier
approaches, we introduce a Random Attention Model (RAM) where we abstain from
any particular attention formation, and instead consider a large class of
nonparametric random attention rules. Our model imposes one intuitive
condition, termed Monotonic Attention, which captures the idea that each
consideration set competes for the decision-maker's attention. We then develop
revealed preference theory within RAM and obtain precise testable implications
for observable choice probabilities. Based on these theoretical findings, we
propose econometric methods for identification, estimation, and inference of
the decision maker's preferences. To illustrate the applicability of our
results and their concrete empirical content in specific settings, we also
develop revealed preference theory and accompanying econometric methods under
additional nonparametric assumptions on the consideration set for binary choice
problems. Finally, we provide general purpose software implementation of our
estimation and inference results, and showcase their performance using
simulations.",A Random Attention Model,2017-12-10 01:50:39,"Matias D. Cattaneo, Xinwei Ma, Yusufcan Masatlioglu, Elchin Suleymanov","http://arxiv.org/abs/1712.03448v3, http://arxiv.org/pdf/1712.03448v3",econ.EM
30900,em,"This paper proposes a method for estimating the effect of a policy
intervention on an outcome over time. We train recurrent neural networks (RNNs)
on the history of control unit outcomes to learn a useful representation for
predicting future outcomes. The learned representation of control units is then
applied to the treated units for predicting counterfactual outcomes. RNNs are
specifically structured to exploit temporal dependencies in panel data, and are
able to learn negative and nonlinear interactions between control unit
outcomes. We apply the method to the problem of estimating the long-run impact
of U.S. homestead policy on public school spending.","RNN-based counterfactual prediction, with an application to homestead policy and public schooling",2017-12-10 19:00:18,"Jason Poulos, Shuxi Zeng","http://dx.doi.org/10.1111/rssc.12511, http://arxiv.org/abs/1712.03553v7, http://arxiv.org/pdf/1712.03553v7",stat.ML
30901,em,"We consider estimation and inference on average treatment effects under
unconfoundedness conditional on the realizations of the treatment variable and
covariates. Given nonparametric smoothness and/or shape restrictions on the
conditional mean of the outcome variable, we derive estimators and confidence
intervals (CIs) that are optimal in finite samples when the regression errors
are normal with known variance. In contrast to conventional CIs, our CIs use a
larger critical value that explicitly takes into account the potential bias of
the estimator. When the error distribution is unknown, feasible versions of our
CIs are valid asymptotically, even when $\sqrt{n}$-inference is not possible
due to lack of overlap, or low smoothness of the conditional mean. We also
derive the minimum smoothness conditions on the conditional mean that are
necessary for $\sqrt{n}$-inference. When the conditional mean is restricted to
be Lipschitz with a large enough bound on the Lipschitz constant, the optimal
estimator reduces to a matching estimator with the number of matches set to
one. We illustrate our methods in an application to the National Supported Work
Demonstration.",Finite-Sample Optimal Estimation and Inference on Average Treatment Effects Under Unconfoundedness,2017-12-13 05:57:02,"Timothy B. Armstrong, Michal Kolesár","http://dx.doi.org/10.3982/ECTA16907, http://arxiv.org/abs/1712.04594v5, http://arxiv.org/pdf/1712.04594v5",stat.AP
30902,em,"We propose strategies to estimate and make inference on key features of
heterogeneous effects in randomized experiments. These key features include
best linear predictors of the effects using machine learning proxies, average
effects sorted by impact groups, and average characteristics of most and least
impacted units. The approach is valid in high dimensional settings, where the
effects are proxied (but not necessarily consistently estimated) by predictive
and causal machine learning methods. We post-process these proxies into
estimates of the key features. Our approach is generic, it can be used in
conjunction with penalized methods, neural networks, random forests, boosted
trees, and ensemble methods, both predictive and causal. Estimation and
inference are based on repeated data splitting to avoid overfitting and achieve
validity. We use quantile aggregation of the results across many potential
splits, in particular taking medians of p-values and medians and other
quantiles of confidence intervals. We show that quantile aggregation lowers
estimation risks over a single split procedure, and establish its principal
inferential properties. Finally, our analysis reveals ways to build provably
better machine learning proxies through causal learning: we can use the
objective functions that we develop to construct the best linear predictors of
the effects, to obtain better machine learning proxies in the initial step. We
illustrate the use of both inferential tools and causal learners with a
randomized field experiment that evaluates a combination of nudges to stimulate
demand for immunization in India.","Fisher-Schultz Lecture: Generic Machine Learning Inference on Heterogenous Treatment Effects in Randomized Experiments, with an Application to Immunization in India",2017-12-13 17:47:57,"Victor Chernozhukov, Mert Demirer, Esther Duflo, Iván Fernández-Val","http://arxiv.org/abs/1712.04802v8, http://arxiv.org/pdf/1712.04802v8",stat.ML
30903,em,"Flexible estimation of heterogeneous treatment effects lies at the heart of
many statistical challenges, such as personalized medicine and optimal resource
allocation. In this paper, we develop a general class of two-step algorithms
for heterogeneous treatment effect estimation in observational studies. We
first estimate marginal effects and treatment propensities in order to form an
objective function that isolates the causal component of the signal. Then, we
optimize this data-adaptive objective function. Our approach has several
advantages over existing methods. From a practical perspective, our method is
flexible and easy to use: In both steps, we can use any loss-minimization
method, e.g., penalized regression, deep neural networks, or boosting;
moreover, these methods can be fine-tuned by cross validation. Meanwhile, in
the case of penalized kernel regression, we show that our method has a
quasi-oracle property: Even if the pilot estimates for marginal effects and
treatment propensities are not particularly accurate, we achieve the same error
bounds as an oracle who has a priori knowledge of these two nuisance
components. We implement variants of our approach based on penalized
regression, kernel ridge regression, and boosting in a variety of simulation
setups, and find promising performance relative to existing baselines.",Quasi-Oracle Estimation of Heterogeneous Treatment Effects,2017-12-13 21:32:13,"Xinkun Nie, Stefan Wager","http://arxiv.org/abs/1712.04912v4, http://arxiv.org/pdf/1712.04912v4",stat.ML
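The two-step recipe in the abstract above can be sketched with generic scikit-learn components: cross-fitted estimates of the outcome mean m(x) and the propensity e(x), followed by a weighted regression of residualized outcomes on covariates. The particular learners, the simulated data, and the ridge final stage below are illustrative assumptions, not the authors' implementation; the weights (W - e(x))^2 keep observations with near-zero treatment residuals from dominating the pseudo-outcome regression.

```python
# Two-step sketch: (1) cross-fitted nuisance estimates m(x) and e(x);
# (2) weighted regression of pseudo-outcomes for the treatment effect tau(x).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n, p = 4000, 5
X = rng.normal(size=(n, p))
e = 1 / (1 + np.exp(-X[:, 0]))                       # true propensity
W = rng.binomial(1, e)
tau = 1.0 + X[:, 1]                                   # heterogeneous treatment effect
Y = X[:, 0] + tau * W + rng.normal(size=n)

# Step 1: cross-fitted estimates of E[Y | X] and P(W = 1 | X)
m_hat = cross_val_predict(GradientBoostingRegressor(), X, Y, cv=5)
e_hat = cross_val_predict(GradientBoostingClassifier(), X, W, cv=5,
                          method="predict_proba")[:, 1]
e_hat = np.clip(e_hat, 0.01, 0.99)

# Step 2: weighted regression of (Y - m_hat)/(W - e_hat) on X
resid_w = W - e_hat
tau_model = Ridge(alpha=1.0).fit(X, (Y - m_hat) / resid_w,
                                 sample_weight=resid_w ** 2)
print("average estimated effect:", round(tau_model.predict(X).mean(), 3))
```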
30904,em,"We present a general framework for studying regularized estimators; such
estimators are pervasive in estimation problems wherein ""plug-in"" type
estimators are either ill-defined or ill-behaved. Within this framework, we
derive, under primitive conditions, consistency and a generalization of the
asymptotic linearity property. We also provide data-driven methods for choosing
tuning parameters that, under some conditions, achieve the aforementioned
properties. We illustrate the scope of our approach by presenting a wide range
of applications.",Towards a General Large Sample Theory for Regularized Estimators,2017-12-20 01:46:32,"Michael Jansson, Demian Pouzo","http://arxiv.org/abs/1712.07248v4, http://arxiv.org/pdf/1712.07248v4",math.ST
30905,em,"Most long memory forecasting studies assume that the memory is generated by
the fractional difference operator. We argue that the most cited theoretical
arguments for the presence of long memory do not imply the fractional
difference operator, and assess the performance of the autoregressive
fractionally integrated moving average $(ARFIMA)$ model when forecasting series
with long memory generated by nonfractional processes. We find that high-order
autoregressive $(AR)$ models produce similar or superior forecast performance to that of $ARFIMA$ models at short horizons. Nonetheless, as the forecast horizon
increases, the $ARFIMA$ models tend to dominate in forecast performance. Hence,
$ARFIMA$ models are well suited for forecasts of long memory processes
regardless of the long memory generating mechanism, particularly for medium and
long forecast horizons. Additionally, we analyse the forecasting performance of
the heterogeneous autoregressive ($HAR$) model which imposes restrictions on
high-order $AR$ models. We find that the structure imposed by the $HAR$ model
produces better long horizon forecasts than $AR$ models of the same order, at
the price of inferior short horizon forecasts in some cases. Our results have
implications for, among others, Climate Econometrics and Financial Econometrics
models dealing with long memory series at different forecast horizons. We show
in an example that while a short memory autoregressive moving average $(ARMA)$
model gives the best performance when forecasting the Realized Variance of the
S\&P 500 up to a month ahead, the $ARFIMA$ model gives the best performance for
longer forecast horizons.",On Long Memory Origins and Forecast Horizons,2017-12-21 19:23:47,J. Eduardo Vera-Valdés,"http://dx.doi.org/10.1002/for.2651, http://arxiv.org/abs/1712.08057v1, http://arxiv.org/pdf/1712.08057v1",econ.EM
30906,em,"We propose a new variational Bayes estimator for high-dimensional copulas
with discrete, or a combination of discrete and continuous, margins. The method
is based on a variational approximation to a tractable augmented posterior, and
is faster than previous likelihood-based approaches. We use it to estimate
drawable vine copulas for univariate and multivariate Markov ordinal and mixed
time series. These have dimension $rT$, where $T$ is the number of observations
and $r$ is the number of series, and are difficult to estimate using previous
methods. The vine pair-copulas are carefully selected to allow for
heteroskedasticity, which is a feature of most ordinal time series data. When
combined with flexible margins, the resulting time series models also allow for
other common features of ordinal data, such as zero inflation, multiple modes
and under- or over-dispersion. Using six example series, we illustrate both the
flexibility of the time series copula models, and the efficacy of the
variational Bayes estimator for copulas of up to 792 dimensions and 60
parameters. This far exceeds the size and complexity of copula models for
discrete data that can be estimated using previous methods.",Variational Bayes Estimation of Discrete-Margined Copula Models with Application to Time Series,2017-12-26 03:38:39,"Ruben Loaiza-Maya, Michael Stanley Smith","http://arxiv.org/abs/1712.09150v2, http://arxiv.org/pdf/1712.09150v2",stat.ME
30908,em,"Empirical researchers are increasingly faced with rich data sets containing
many controls or instrumental variables, making it essential to choose an
appropriate approach to variable selection. In this paper, we provide results
for valid inference after post- or orthogonal $L_2$-Boosting is used for
variable selection. We consider treatment effects after selecting among many
control variables and instrumental variable models with potentially many
instruments. To achieve this, we establish new results for the rate of
convergence of iterated post-$L_2$-Boosting and orthogonal $L_2$-Boosting in a
high-dimensional setting similar to Lasso, i.e., under approximate sparsity
without assuming the beta-min condition. These results are extended to the 2SLS
framework and valid inference is provided for treatment effect analysis. We
give extensive simulation results for the proposed methods and compare them
with Lasso. In an empirical application, we construct efficient IVs with our
proposed methods to estimate the effect of pre-merger overlap of bank branch
networks in the US on the post-merger stock returns of the acquirer bank.",Estimation and Inference of Treatment Effects with $L_2$-Boosting in High-Dimensional Settings,2018-01-01 01:15:57,"Jannis Kueck, Ye Luo, Martin Spindler, Zigan Wang","http://arxiv.org/abs/1801.00364v2, http://arxiv.org/pdf/1801.00364v2",stat.ML
30909,em,"This document collects the lecture notes from my mini-course ""Complexity
Theory, Game Theory, and Economics,"" taught at the Bellairs Research Institute
of McGill University, Holetown, Barbados, February 19--23, 2017, as the 29th
McGill Invitational Workshop on Computational Complexity.
  The goal of this mini-course is twofold: (i) to explain how complexity theory
has helped illuminate several barriers in economics and game theory; and (ii)
to illustrate how game-theoretic questions have led to new and interesting
complexity theory, including several recent breakthroughs. It consists of two
five-lecture sequences: the Solar Lectures, focusing on the communication and
computational complexity of computing equilibria; and the Lunar Lectures,
focusing on applications of complexity theory in game theory and economics. No
background in game theory is assumed.","Complexity Theory, Game Theory, and Economics: The Barbados Lectures",2018-01-02 20:18:11,Tim Roughgarden,"http://dx.doi.org/10.1561/0400000085, http://arxiv.org/abs/1801.00734v3, http://arxiv.org/pdf/1801.00734v3",cs.CC
30910,em,"We introduce the simulation tool SABCEMM (Simulator for Agent-Based
Computational Economic Market Models) for agent-based computational economic
market (ABCEM) models. Our simulation tool is implemented in C++ and we can
easily run ABCEM models with several million agents. The object-oriented
software design enables the isolated implementation of building blocks for
ABCEM models, such as agent types and market mechanisms. The user can design
and compare ABCEM models in a unified environment by recombining existing
building blocks using the XML-based SABCEMM configuration file. We introduce an
abstract ABCEM model class which our simulation tool is built upon.
Furthermore, we present the software architecture as well as computational
aspects of SABCEMM. Here, we focus on the efficiency of SABCEMM with respect to
the run time of our simulations. We show the great impact of different random
number generators on the run time of ABCEM models. The code and documentation
are published on GitHub at https://github.com/SABCEMM/SABCEMM, so that all
results can be reproduced by the reader.",SABCEMM-A Simulator for Agent-Based Computational Economic Market Models,2018-01-05 19:08:14,"Torsten Trimborn, Philipp Otte, Simon Cramer, Max Beikirch, Emma Pabich, Martin Frank","http://arxiv.org/abs/1801.01811v2, http://arxiv.org/pdf/1801.01811v2",q-fin.CP
30911,em,"This paper studies the problem of stochastic dynamic pricing and energy
management policy for electric vehicle (EV) charging service providers. In the
presence of renewable energy integration and energy storage system, EV charging
service providers must deal with multiple uncertainties --- charging demand
volatility, inherent intermittency of renewable energy generation, and
wholesale electricity price fluctuation. The motivation behind our work is to
offer guidelines for charging service providers to determine proper charging
prices and manage electricity to balance the competing objectives of improving
profitability, enhancing customer satisfaction, and reducing impact on power
grid in spite of these uncertainties. We propose a new metric to assess the
impact on power grid without solving complete power flow equations. To protect
service providers from severe financial losses, a safeguard of profit is
incorporated in the model. Two algorithms --- stochastic dynamic programming
(SDP) algorithm and greedy algorithm (benchmark algorithm) --- are applied to
derive the pricing and electricity procurement policy. A Pareto front of the
multiobjective optimization is derived. Simulation results show that using the SDP algorithm can achieve up to a 7% profit gain over using the greedy algorithm.
Additionally, we observe that the charging service provider is able to reshape
spatial-temporal charging demands to reduce the impact on power grid via
pricing signals.",Stochastic Dynamic Pricing for EV Charging Stations with Renewables Integration and Energy Storage,2018-01-07 07:55:43,"Chao Luo, Yih-Fang Huang, Vijay Gupta","http://dx.doi.org/10.1109/TSG.2017.2696493, http://arxiv.org/abs/1801.02128v1, http://arxiv.org/pdf/1801.02128v1",eess.SP
30912,em,"This paper studies the problem of multi-stage placement of electric vehicle
(EV) charging stations with incremental EV penetration rates. A nested logit
model is employed to analyze the charging preference of the individual consumer
(EV owner), and predict the aggregated charging demand at the charging
stations. The EV charging industry is modeled as an oligopoly where the entire
market is dominated by a few charging service providers (oligopolists). At the
beginning of each planning stage, an optimal placement policy for each service
provider is obtained through analyzing strategic interactions in a Bayesian
game. To derive the optimal placement policy, we consider both the
transportation network graph and the electric power network graph. A simulation
tool --- The EV Virtual City 1.0 --- is developed using Java to investigate
the interactions among the consumers (EV owner), the transportation network
graph, the electric power network graph, and the charging stations. Through a
series of experiments using the geographic and demographic data from the city
of San Pedro District of Los Angeles, we show that the charging station
placement is highly consistent with the heatmap of the traffic flow. In
addition, we observe a spatial economic phenomenon that service providers
prefer clustering instead of separation in the EV charging market.",Placement of EV Charging Stations --- Balancing Benefits among Multiple Entities,2018-01-07 07:57:27,"Chao Luo, Yih-Fang Huang, Vijay Gupta","http://dx.doi.org/10.1109/TSG.2015.2508740, http://arxiv.org/abs/1801.02129v1, http://arxiv.org/pdf/1801.02129v1",eess.SP
32444,gn,"This paper uses a large UK cohort to investigate the impact of early-life
pollution exposure on individuals' human capital and health outcomes in older
age. We compare individuals who were exposed to the London smog in December
1952 whilst in utero or in infancy to those born after the smog and those born
at the same time but in unaffected areas. We find that those exposed to the
smog have substantially lower fluid intelligence and worse respiratory health,
with some evidence of a reduction in years of schooling.",The Long-Term Effects of Early-Life Pollution Exposure: Evidence from the London Smog,2022-02-24 00:04:46,"Stephanie von Hinke, Emil N. Sørensen","http://arxiv.org/abs/2202.11785v3, http://arxiv.org/pdf/2202.11785v3",econ.GN
30913,em,"This paper presents a multi-stage approach to the placement of charging
stations under the scenarios of different electric vehicle (EV) penetration
rates. The EV charging market is modeled as an oligopoly. A consumer behavior
based approach is applied to forecast the charging demand of the charging
stations using a nested logit model. The impacts of both the urban road network
and the power grid network on charging station planning are also considered. At
each planning stage, the optimal station placement strategy is derived through
solving a Bayesian game among the service providers. To investigate the
interplay of the travel pattern, the consumer behavior, urban road network,
power grid network, and the charging station placement, a simulation platform
(The EV Virtual City 1.0) is developed using Java on Repast. We conduct a case
study in the San Pedro District of Los Angeles by importing the geographic and
demographic data of that region into the platform. The simulation results
demonstrate a strong consistency between the charging station placement and the
traffic flow of EVs. The results also reveal an interesting phenomenon that
service providers prefer clustering instead of spatial separation in this
oligopoly market.",A Consumer Behavior Based Approach to Multi-Stage EV Charging Station Placement,2018-01-07 08:29:06,"Chao Luo, Yih-Fang Huang, Vijay Gupta","http://dx.doi.org/10.1109/VTCSpring.2015.7145593, http://arxiv.org/abs/1801.02135v1, http://arxiv.org/pdf/1801.02135v1",eess.SP
30914,em,"This paper presents a dynamic pricing and energy management framework for
electric vehicle (EV) charging service providers. To set the charging prices,
the service provider faces three uncertainties: the volatility of wholesale
electricity price, intermittent renewable energy generation, and
spatial-temporal EV charging demand. The main objective of our work here is to
help charging service providers to improve their total profits while enhancing
customer satisfaction and maintaining power grid stability, taking into account
those uncertainties. We employ a linear regression model to estimate the EV
charging demand at each charging station, and introduce a quantitative measure
for customer satisfaction. Both the greedy algorithm and the dynamic
programming (DP) algorithm are employed to derive the optimal charging prices
and determine how much electricity to purchase from the wholesale market in each planning horizon. Simulation results show that the DP algorithm achieves an
increased profit (up to 9%) compared to the greedy algorithm (the benchmark
algorithm) under certain scenarios. Additionally, we observe that the
integration of a low-cost energy storage into the system can not only improve
the profit, but also smooth out the charging price fluctuation, protecting the
end customers from the volatile wholesale market.",Dynamic Pricing and Energy Management Strategy for EV Charging Stations under Uncertainties,2018-01-09 06:45:06,"Chao Luo, Yih-Fang Huang, Vijay Gupta","http://dx.doi.org/10.5220/0005797100490059, http://arxiv.org/abs/1801.02783v1, http://arxiv.org/pdf/1801.02783v1",eess.SP
30915,em,"We propose a robust implementation of the Nerlove--Arrow model using a
Bayesian structural time series model to explain the relationship between
advertising expenditures of a country-wide fast-food franchise network with its
weekly sales. Thanks to the flexibility and modularity of the model, it is well
suited to generalization to other markets or situations. Its Bayesian nature
facilitates incorporating \emph{a priori} information (the manager's views),
which can be updated with relevant data. This aspect of the model will be used
to present a strategy of budget scheduling across time and channels.",Assessing the effect of advertising expenditures upon sales: a Bayesian structural time series model,2018-01-09 20:39:51,"Víctor Gallego, Pablo Suárez-García, Pablo Angulo, David Gómez-Ullate","http://dx.doi.org/10.1002/asmb.2460, http://arxiv.org/abs/1801.03050v3, http://arxiv.org/pdf/1801.03050v3",stat.ML
30916,em,"The major perspective of this paper is to provide more evidence into the
empirical determinants of capital structure adjustment in different
macroeconomics states by focusing and discussing the relative importance of
firm-specific and macroeconomic characteristics from an alternative scope in
U.S. This study extends the empirical research on the topic of capital
structure by focusing on a quantile regression method to investigate the
behavior of firm-specific characteristics and macroeconomic variables across
all quantiles of the distribution of leverage (total debt, long-term debt and short-term debt). Based on a partial adjustment model, we find that the long-term and short-term debt ratios differ in their partial adjustment speeds: the short-term debt ratio adjusts more quickly, while the long-term debt ratio adjusts more slowly, over the same periods.","Capital Structure in U.S., a Quantile Regression Approach with Macroeconomic Impacts",2018-01-20 13:45:31,"Andreas Kaloudis, Dimitrios Tsolis","http://arxiv.org/abs/1801.06651v1, http://arxiv.org/pdf/1801.06651v1",q-fin.EC
30917,em,"The fractional difference operator remains to be the most popular mechanism
to generate long memory due to the existence of efficient algorithms for their
simulation and forecasting. Nonetheless, there is no theoretical argument
linking the fractional difference operator with the presence of long memory in
real data. In this regard, one of the most predominant theoretical explanations
for the presence of long memory is cross-sectional aggregation of persistent
micro units. Yet, the type of processes obtained by cross-sectional aggregation
differs from the one due to fractional differencing. Thus, this paper develops
fast algorithms to generate and forecast long memory by cross-sectional
aggregation. Moreover, it is shown that the antipersistent phenomenon that
arises for negative degrees of memory in the fractional difference literature
is not present for cross-sectionally aggregated processes. Pointedly, while the
autocorrelations for the fractional difference operator are negative for
negative degrees of memory by construction, this restriction does not apply to
the cross-sectional aggregated scheme. We show that this has implications for
long memory tests in the frequency domain, which will be misspecified for
cross-sectionally aggregated processes with negative degrees of memory.
Finally, we assess the forecast performance of high-order $AR$ and $ARFIMA$
models when the long memory series are generated by cross-sectional
aggregation. Our results are of interest to practitioners developing forecasts
of long memory variables like inflation, volatility, and climate data, where
aggregation may be the source of long memory.","Nonfractional Memory: Filtering, Antipersistence, and Forecasting",2018-01-20 16:59:44,J. Eduardo Vera-Valdés,"http://arxiv.org/abs/1801.06677v1, http://arxiv.org/pdf/1801.06677v1",math.ST
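A textbook way to generate long memory by cross-sectional aggregation, in the spirit of the mechanism discussed above, is to average many AR(1) series whose squared autoregressive coefficients are drawn from a Beta distribution (a Granger-type construction). The sketch below uses that construction as an assumption; it is a brute-force illustration, not the fast generation and forecasting algorithms developed in the paper.

```python
# Long memory by cross-sectional aggregation: average many AR(1) micro series
# with Beta-distributed squared persistence parameters (illustrative only).
import numpy as np

def aggregate_ar1(n_units=2000, T=1000, a=1.0, b=1.4, burn=200, seed=7):
    rng = np.random.default_rng(seed)
    phi = np.sqrt(rng.beta(a, b, size=n_units))    # micro persistence parameters
    x = np.zeros(n_units)
    agg = np.empty(T)
    for t in range(T + burn):
        x = phi * x + rng.normal(size=n_units)
        if t >= burn:
            agg[t - burn] = x.mean()
    return agg

series = aggregate_ar1()
# Slowly decaying sample autocorrelations are the signature of long memory
acf = {k: np.corrcoef(series[:-k], series[k:])[0, 1] for k in (1, 5, 20, 50)}
print({k: round(v, 3) for k, v in acf.items()})
```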
30918,em,"Markov regime switching models have been used in numerous empirical studies
in economics and finance. However, the asymptotic distribution of the
likelihood ratio test statistic for testing the number of regimes in Markov
regime switching models has been an unresolved problem. This paper derives the
asymptotic distribution of the likelihood ratio test statistic for testing the
null hypothesis of $M_0$ regimes against the alternative hypothesis of $M_0 +
1$ regimes for any $M_0 \geq 1$ both under the null hypothesis and under local
alternatives. We show that the contiguous alternatives converge to the null
hypothesis at a rate of $n^{-1/8}$ in regime switching models with normal
density. The asymptotic validity of the parametric bootstrap is also
established.",Testing the Number of Regimes in Markov Regime Switching Models,2018-01-21 20:49:27,"Hiroyuki Kasahara, Katsumi Shimotsu","http://arxiv.org/abs/1801.06862v3, http://arxiv.org/pdf/1801.06862v3",econ.EM
30919,em,"This paper analyzes consumer choices over lunchtime restaurants using data
from a sample of several thousand anonymous mobile phone users in the San
Francisco Bay Area. The data is used to identify users' approximate typical
morning location, as well as their choices of lunchtime restaurants. We build a
model where restaurants have latent characteristics (whose distribution may
depend on restaurant observables, such as star ratings, food category, and
price range), each user has preferences for these latent characteristics, and
these preferences are heterogeneous across users. Similarly, each item has
latent characteristics that describe users' willingness to travel to the
restaurant, and each user has individual-specific preferences for those latent
characteristics. Thus, both users' willingness to travel and their base utility
for each restaurant vary across user-restaurant pairs. We use a Bayesian
approach to estimation. To make the estimation computationally feasible, we
rely on variational inference to approximate the posterior distribution, as
well as stochastic gradient descent as a computational approach. Our model
performs better than more standard competing models such as multinomial logit
and nested logit models, in part due to the personalization of the estimates.
We analyze how consumers re-allocate their demand after a restaurant closes to
nearby restaurants versus more distant restaurants with similar
characteristics, and we compare our predictions to actual outcomes. Finally, we
show how the model can be used to analyze counterfactual questions such as what
type of restaurant would attract the most consumers in a given location.",Estimating Heterogeneous Consumer Preferences for Restaurants and Travel Time Using Mobile Location Data,2018-01-23 02:55:42,"Susan Athey, David Blei, Robert Donnelly, Francisco Ruiz, Tobias Schmidt","http://arxiv.org/abs/1801.07826v1, http://arxiv.org/pdf/1801.07826v1",econ.EM
30920,em,"We test the existence of a neighborhood based peer effect around
participation in an incentive based conservation program called `Water Smart
Landscapes' (WSL) in the city of Las Vegas, Nevada. We use 15 years of
geo-coded daily records of WSL program applications and approvals compiled by
the Southern Nevada Water Authority and Clark County Tax Assessors rolls for
home characteristics. We use this data to test whether a spatially mediated
peer effect can be observed in WSL participation likelihood at the household
level. We show that epidemic spreading models provide more flexibility in modeling assumptions than hazard models, which can also be applied to address the same questions, and that they offer one mechanism for dealing with problems associated with correlated unobservables. We build networks of neighborhood based
peers for 16 randomly selected neighborhoods in Las Vegas and test for the
existence of a peer based influence on WSL participation by using a
Susceptible-Exposed-Infected-Recovered epidemic spreading model (SEIR), in
which a home can become infected via autoinfection or through contagion from
its infected neighbors. We show that this type of epidemic model can be
directly recast as an additive-multiplicative hazard model, but not as a purely multiplicative one. Using both inference and prediction approaches, we find
evidence of peer effects in several Las Vegas neighborhoods.",Are `Water Smart Landscapes' Contagious? An epidemic approach on networks to study peer effects,2018-01-29 17:49:16,"Christa Brelsford, Caterina De Bacco","http://arxiv.org/abs/1801.10516v1, http://arxiv.org/pdf/1801.10516v1",econ.EM
30921,em,"Corporate insolvency can have a devastating effect on the economy. With an
increasing number of companies making expansion overseas to capitalize on
foreign resources, a multinational corporate bankruptcy can disrupt the world's
financial ecosystem. Corporations do not fail instantaneously; objective
measures and rigorous analysis of qualitative (e.g. brand) and quantitative
(e.g. econometric factors) data can help identify a company's financial risk.
Gathering and storage of data about a corporation has become less difficult
with recent advancements in communication and information technologies. The
remaining challenge lies in mining relevant information about a company's
health hidden under the vast amounts of data, and using it to forecast
insolvency so that managers and stakeholders have time to react. In recent
years, machine learning has become a popular field in big data analytics
because of its success in learning complicated models. Methods such as support
vector machines, adaptive boosting, artificial neural networks, and Gaussian
processes can be used for recognizing patterns in the data (with a high degree
of accuracy) that may not be apparent to human analysts. This thesis studied
corporate bankruptcy of manufacturing companies in Korea and Poland using
experts' opinions and financial measures, respectively. Using publicly
available datasets, several machine learning methods were applied to learn the
relationship between the company's current state and its fate in the near
future. Results showed that predictions with accuracy greater than 95% were
achievable using any machine learning technique when informative features like
experts' assessment were used. However, when using purely financial factors to
predict whether or not a company will go bankrupt, the correlation is not as
strong.",Analysis of Financial Credit Risk Using Machine Learning,2018-02-15 00:33:26,Jacky C. K. Chow,"http://dx.doi.org/10.13140/RG.2.2.30242.53449, http://arxiv.org/abs/1802.05326v1, http://arxiv.org/pdf/1802.05326v1",q-fin.ST
30922,em,"In unit root testing, a piecewise locally stationary process is adopted to
accommodate nonstationary errors that can have both smooth and abrupt changes
in second- or higher-order properties. Under this framework, the limiting null
distributions of the conventional unit root test statistics are derived and
shown to contain a number of unknown parameters. To circumvent the difficulty
of direct consistent estimation, we propose to use the dependent wild bootstrap
to approximate the non-pivotal limiting null distributions and provide a
rigorous theoretical justification for bootstrap consistency. The proposed
method is compared through finite sample simulations with the recolored wild
bootstrap procedure, which was developed for errors that follow a
heteroscedastic linear process. Further, a combination of autoregressive sieve
recoloring with the dependent wild bootstrap is shown to perform well. The
validity of the dependent wild bootstrap in a nonstationary setting is
demonstrated for the first time, showing the possibility of extensions to other
inference problems associated with locally stationary processes.",Bootstrap-Assisted Unit Root Testing With Piecewise Locally Stationary Errors,2018-02-15 01:00:23,"Yeonwoo Rho, Xiaofeng Shao","http://arxiv.org/abs/1802.05333v1, http://arxiv.org/pdf/1802.05333v1",econ.EM
30923,em,"Algorithmic collusion is an emerging concept in current artificial
intelligence age. Whether algorithmic collusion is a creditable threat remains
as an argument. In this paper, we propose an algorithm which can extort its
human rival to collude in a Cournot duopoly competing market. In experiments,
we show that, the algorithm can successfully extorted its human rival and gets
higher profit in long run, meanwhile the human rival will fully collude with
the algorithm. As a result, the social welfare declines rapidly and stably.
Both in theory and in experiment, our work confirms that, algorithmic collusion
can be a creditable threat. In application, we hope, the frameworks, the
algorithm design as well as the experiment environment illustrated in this
work, can be an incubator or a test bed for researchers and policymakers to
handle the emerging algorithmic collusion.",Algorithmic Collusion in Cournot Duopoly Market: Evidence from Experimental Economics,2018-02-21 21:34:27,"Nan Zhou, Li Zhang, Shijian Li, Zhijian Wang","http://arxiv.org/abs/1802.08061v1, http://arxiv.org/pdf/1802.08061v1",econ.EM
30924,em,"We provide adaptive inference methods, based on $\ell_1$ regularization, for
regular (semi-parametric) and non-regular (nonparametric) linear functionals of
the conditional expectation function. Examples of regular functionals include
average treatment effects, policy effects, and derivatives. Examples of
non-regular functionals include average treatment effects, policy effects, and
derivatives conditional on a covariate subvector fixed at a point. We construct
a Neyman orthogonal equation for the target parameter that is approximately
invariant to small perturbations of the nuisance parameters. To achieve this
property, we include the Riesz representer for the functional as an additional
nuisance parameter. Our analysis yields weak ``double sparsity robustness'':
either the approximation to the regression or the approximation to the
representer can be ``completely dense'' as long as the other is sufficiently
``sparse''. Our main results are non-asymptotic and imply asymptotic uniform
validity over large classes of models, translating into honest confidence bands
for both global and local parameters.",De-Biased Machine Learning of Global and Local Parameters Using Regularized Riesz Representers,2018-02-23 21:17:42,"Victor Chernozhukov, Whitney Newey, Rahul Singh","http://arxiv.org/abs/1802.08667v6, http://arxiv.org/pdf/1802.08667v6",stat.ML
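For the leading regular example above, the average treatment effect, the Riesz representer of the functional is alpha(d, x) = d/e(x) - (1 - d)/(1 - e(x)), so the orthogonal score can be assembled from standard nuisance estimates. The sketch below uses plug-in Lasso and logistic nuisances on simulated data (all assumptions), rather than the paper's direct regularized estimation of the representer or its cross-fitting scheme.

```python
# Orthogonal (doubly robust) score for the ATE, with the propensity-based
# Riesz representer alpha(d, x) = d/e(x) - (1-d)/(1-e(x)).
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV

rng = np.random.default_rng(8)
n, p = 3000, 20
X = rng.normal(size=(n, p))
e = 1 / (1 + np.exp(-0.5 * X[:, 0]))
D = rng.binomial(1, e)
Y = X[:, 0] + 2.0 * D + rng.normal(size=n)                  # true ATE = 2

# Plug-in nuisance estimates (no cross-fitting here, to keep the sketch short)
mu1 = LassoCV(cv=5).fit(X[D == 1], Y[D == 1]).predict(X)
mu0 = LassoCV(cv=5).fit(X[D == 0], Y[D == 0]).predict(X)
e_hat = np.clip(LogisticRegressionCV(cv=5).fit(X, D).predict_proba(X)[:, 1],
                0.01, 0.99)

alpha = D / e_hat - (1 - D) / (1 - e_hat)                   # Riesz representer of the ATE
score = mu1 - mu0 + alpha * (Y - np.where(D == 1, mu1, mu0))
ate, se = score.mean(), score.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate: {ate:.3f} +/- {1.96 * se:.3f}")
```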
30925,em,"A conditional expectation function (CEF) can at best be partially identified
when the conditioning variable is interval censored. When the number of bins is
small, existing methods often yield minimally informative bounds. We propose
three innovations that make meaningful inference possible in interval data
contexts. First, we prove novel nonparametric bounds for contexts where the
distribution of the censored variable is known. Second, we show that a class of
measures that describe the conditional mean across a fixed interval of the
conditioning space can often be bounded tightly even when the CEF itself
cannot. Third, we show that a constraint on CEF curvature can either tighten
bounds or can substitute for the monotonicity assumption often made in interval
data applications. We derive analytical bounds that use the first two
innovations, and develop a numerical method to calculate bounds under the
third. We show the performance of the method in simulations and then present
two applications. First, we resolve a known problem in the estimation of
mortality as a function of education: because individuals with high school or
less are a smaller and thus more negatively selected group over time, estimates
of their mortality change are likely to be biased. Our method makes it possible
to hold education rank bins constant over time, revealing that current
estimates of rising mortality for less educated women are biased upward in some
cases by a factor of three. Second, we apply the method to the estimation of
intergenerational mobility, where researchers frequently use coarsely measured
education data in the many contexts where matched parent-child income data are
unavailable. Conventional measures like the rank-rank correlation may be
uninformative once interval censoring is taken into account; CEF interval-based
measures of mobility are bounded tightly.",Partial Identification of Expectations with Interval Data,2018-02-28 18:52:05,"Sam Asher, Paul Novosad, Charlie Rafkin","http://arxiv.org/abs/1802.10490v1, http://arxiv.org/pdf/1802.10490v1",econ.EM
30926,em,"The fundamental purpose of the present research article is to introduce the
basic principles of Dimensional Analysis in the context of the neoclassical
economic theory, in order to apply such principles to the fundamental relations
that underlie most models of economic growth. In particular, basic instruments
from Dimensional Analysis are used to evaluate the analytical consistency of
the Neoclassical economic growth model. The analysis shows that an adjustment
to the model is required in such a way that the principle of dimensional
homogeneity is satisfied.",Dimensional Analysis in Economics: A Study of the Neoclassical Economic Growth Model,2018-02-28 19:42:36,"Miguel Alvarez Texocotitla, M. David Alvarez Hernandez, Shani Alvarez Hernandez","http://dx.doi.org/10.1177/0260107919845269, http://arxiv.org/abs/1802.10528v1, http://arxiv.org/pdf/1802.10528v1",econ.EM
30927,em,"In this paper, we propose deep learning techniques for econometrics,
specifically for causal inference and for estimating individual as well as
average treatment effects. The contribution of this paper is twofold: 1. For
generalized neighbor matching to estimate individual and average treatment
effects, we analyze the use of autoencoders for dimensionality reduction while
maintaining the local neighborhood structure among the data points in the
embedding space. This deep learning based technique is shown to perform better
than simple k nearest neighbor matching for estimating treatment effects,
especially when the data points have several features/covariates but reside in
a low dimensional manifold in high dimensional space. We also observe better
performance than manifold learning methods for neighbor matching. 2. Propensity
score matching is one specific and popular way to perform matching in order to
estimate average and individual treatment effects. We propose the use of deep
neural networks (DNNs) for propensity score matching, and present a network
called PropensityNet for this. This is a generalization of the logistic
regression technique traditionally used to estimate propensity scores and we
show empirically that DNNs perform better than logistic regression at
propensity score matching. Code for both methods will be made available shortly
on Github at: https://github.com/vikas84bf",Deep Learning for Causal Inference,2018-03-01 04:01:16,Vikas Ramachandra,"http://arxiv.org/abs/1803.00149v1, http://arxiv.org/pdf/1803.00149v1",econ.EM
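A minimal analog of the propensity-score idea above can be written with scikit-learn: fit a small neural network for the propensity score and match each treated unit to the nearest control on the estimated score. The network architecture, the simulated data, and one-to-one matching below are assumptions made for illustration; this is not the paper's PropensityNet code.

```python
# Neural-network propensity scores plus one-to-one nearest-neighbor matching.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(9)
n, p = 2000, 10
X = rng.normal(size=(n, p))
e = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.5)))   # nonlinear propensity
D = rng.binomial(1, e)
Y = X[:, 0] + 1.5 * D + rng.normal(size=n)                     # true effect = 1.5

ps = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X, D).predict_proba(X)[:, 1]

nn = NearestNeighbors(n_neighbors=1).fit(ps[D == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[D == 1].reshape(-1, 1))
att = np.mean(Y[D == 1] - Y[D == 0][idx.ravel()])
print("matched estimate of the effect on the treated:", round(att, 3))
```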
30928,em,"We propose a new efficient online algorithm to learn the parameters governing
the purchasing behavior of a utility maximizing buyer, who responds to prices,
in a repeated interaction setting. The key feature of our algorithm is that it
can learn even non-linear buyer utility while working with arbitrary price
constraints that the seller may impose. This overcomes a major shortcoming of
previous approaches, which use unrealistic prices to learn these parameters,
making them unsuitable in practice.",An Online Algorithm for Learning Buyer Behavior under Realistic Pricing Restrictions,2018-03-06 03:48:02,"Debjyoti Saharoy, Theja Tulabandhula","http://arxiv.org/abs/1803.01968v1, http://arxiv.org/pdf/1803.01968v1",stat.ML
30929,em,"This paper establishes the argmin of a random objective function to be unique
almost surely. It first formulates a general result that proves almost
sure uniqueness without convexity of the objective function. The general result
is then applied to a variety of applications in statistics. Four applications
are discussed, including uniqueness of M-estimators, both classical likelihood
and penalized likelihood estimators, and two applications of the argmin
theorem, threshold regression and weak identification.",Almost Sure Uniqueness of a Global Minimum Without Convexity,2018-03-06 23:27:28,Gregory Cox,"http://arxiv.org/abs/1803.02415v3, http://arxiv.org/pdf/1803.02415v3",econ.EM
30930,em,"In this paper, we examine the recent trend towards in-browser mining of
cryptocurrencies; in particular, the mining of Monero through Coinhive and
similar codebases. In this model, a user visiting a website will download a
JavaScript code that executes client-side in her browser, mines a
cryptocurrency, typically without her consent or knowledge, and pays out the
seigniorage to the website. Websites may consciously employ this as an
alternative or to supplement advertisement revenue, may offer premium content
in exchange for mining, or may be unwittingly serving the code as a result of a
breach (in which case the seigniorage is collected by the attacker). The
cryptocurrency Monero is preferred seemingly for its unfriendliness to
large-scale ASIC mining that would drive browser-based efforts out of the
market, as well as for its purported privacy features. In this paper, we survey
this landscape, conduct some measurements to establish its prevalence and
profitability, outline an ethical framework for considering whether it should
be classified as an attack or business opportunity, and make suggestions for
the detection, mitigation and/or prevention of browser-based mining for
non-consenting users.",A first look at browser-based Cryptojacking,2018-03-08 00:50:37,"Shayan Eskandari, Andreas Leoutsarakos, Troy Mursch, Jeremy Clark","http://arxiv.org/abs/1803.02887v1, http://arxiv.org/pdf/1803.02887v1",cs.CR
30947,em,"We consider nonparametric estimation of a mixed discrete-continuous
distribution under anisotropic smoothness conditions and possibly increasing
number of support points for the discrete part of the distribution. For these
settings, we derive lower bounds on the estimation rates in the total variation
distance. Next, we consider a nonparametric mixture of normals model that uses
continuous latent variables for the discrete part of the observations. We show
that the posterior in this model contracts at rates that are equal to the
derived lower bounds up to a log factor. Thus, Bayesian mixture of normals
models can be used for optimal adaptive estimation of mixed discrete-continuous
distributions.",Adaptive Bayesian Estimation of Mixed Discrete-Continuous Distributions under Smoothness and Sparsity,2018-06-20 01:22:38,"Andriy Norets, Justinas Pelenis","http://arxiv.org/abs/1806.07484v1, http://arxiv.org/pdf/1806.07484v1",math.ST
30931,em,"We examine volume computation of general-dimensional polytopes and more
general convex bodies, defined as the intersection of a simplex with a family of
parallel hyperplanes, and another family of parallel hyperplanes or a family of
concentric ellipsoids. Such convex bodies appear in modeling and predicting
financial crises. The impact of crises on the economy (labor, income, etc.)
makes its detection of prime interest. Certain features of dependencies in the
markets clearly identify times of turmoil. We describe the relationship between
asset characteristics by means of a copula; each characteristic is either a
linear or quadratic form of the portfolio components, hence the copula can be
constructed by computing volumes of convex bodies. We design and implement
practical algorithms in the exact and approximate setting, we experimentally
juxtapose them and study the tradeoff of exactness and accuracy for speed. We
analyze the following methods in order of increasing generality: rejection
sampling relying on uniformly sampling the simplex, which is the fastest
approach, but inaccurate for small volumes; exact formulae based on the
computation of integrals of probability distribution functions; an optimized
Lawrence sign decomposition method, since the polytopes at hand are shown to be
simple; Markov chain Monte Carlo algorithms using random walks based on the
hit-and-run paradigm generalized to nonlinear convex bodies and relying on new
methods for computing an enclosed ball; the latter is experimentally extended to
non-convex bodies with very encouraging results. Our C++ software, based on
CGAL and Eigen and available on github, is shown to be very effective in up to
100 dimensions. Our results offer novel, effective means of computing portfolio
dependencies and an indicator of financial crises, which is shown to correctly
identify past crises.","Practical volume computation of structured convex bodies, and an application to modeling portfolio dependencies and financial crises",2018-03-15 19:56:01,"Ludovic Cales, Apostolos Chalkis, Ioannis Z. Emiris, Vissarion Fisikopoulos","http://arxiv.org/abs/1803.05861v1, http://arxiv.org/pdf/1803.05861v1",cs.CG
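As a rough illustration of the first (and fastest, but least accurate for small volumes) method listed above, the sketch below estimates by rejection sampling the volume fraction of the standard simplex cut out by two parallel hyperplanes. The slab constraint and dimension are assumptions chosen purely for illustration.

```python
# Illustrative sketch: relative volume of the intersection of the standard
# simplex with a slab {0.2 <= c.x <= 0.5} between two parallel hyperplanes,
# estimated by rejection sampling. The slab and dimension are assumed examples.
import numpy as np

rng = np.random.default_rng(1)
d, n_samples = 10, 200_000

# Uniform samples from the standard d-simplex via normalized exponentials
# (equivalently, a flat Dirichlet distribution).
w = rng.exponential(size=(n_samples, d))
x = w / w.sum(axis=1, keepdims=True)

c = np.linspace(0.0, 1.0, d)            # direction defining the parallel hyperplanes
inside = (x @ c >= 0.2) & (x @ c <= 0.5)

# Accuracy degrades when the target body is small relative to the simplex,
# which is exactly the limitation noted in the abstract.
print("estimated volume fraction:", inside.mean())
```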
30932,em,"We develop a novel ""decouple-recouple"" dynamic predictive strategy and
contribute to the literature on forecasting and economic decision making in a
data-rich environment. Under this framework, clusters of predictors generate
different latent states in the form of predictive densities that are later
synthesized within an implied time-varying latent factor model. As a result,
the latent inter-dependencies across predictive densities and biases are
sequentially learned and corrected. Unlike sparse modeling and variable
selection procedures, we do not assume a priori that there is a given subset of
active predictors, which characterize the predictive density of a quantity of
interest. We test our procedure by investigating the predictive content of a
large set of financial ratios and macroeconomic variables on both the equity
premium across different industries and the inflation rate in the U.S., two
contexts of topical interest in finance and macroeconomics. We find that our
predictive synthesis framework generates both statistically and economically
significant out-of-sample benefits while maintaining interpretability of the
forecasting variables. In addition, the main empirical results highlight that
our proposed framework outperforms LASSO-type shrinkage regressions,
factor-based dimension reduction, sequential variable selection, and
equal-weighted linear pooling methodologies.",Large-Scale Dynamic Predictive Regressions,2018-03-19 00:01:01,"Daniele Bianchi, Kenichiro McAlinn","http://arxiv.org/abs/1803.06738v1, http://arxiv.org/pdf/1803.06738v1",stat.ME
30933,em,"In this article, we consider identification, estimation, and inference
procedures for treatment effect parameters using Difference-in-Differences
(DiD) with (i) multiple time periods, (ii) variation in treatment timing, and
(iii) when the ""parallel trends assumption"" holds potentially only after
conditioning on observed covariates. We show that a family of causal effect
parameters are identified in staggered DiD setups, even if differences in
observed characteristics create non-parallel outcome dynamics between groups.
Our identification results allow one to use outcome regression, inverse
probability weighting, or doubly-robust estimands. We also propose different
aggregation schemes that can be used to highlight treatment effect
heterogeneity across different dimensions as well as to summarize the overall
effect of participating in the treatment. We establish the asymptotic
properties of the proposed estimators and prove the validity of a
computationally convenient bootstrap procedure to conduct asymptotically valid
simultaneous (instead of pointwise) inference. Finally, we illustrate the
relevance of our proposed tools by analyzing the effect of the minimum wage on
teen employment from 2001--2007. Open-source software is available for
implementing the proposed methods.",Difference-in-Differences with Multiple Time Periods,2018-03-24 02:45:05,"Brantly Callaway, Pedro H. C. Sant'Anna","http://arxiv.org/abs/1803.09015v4, http://arxiv.org/pdf/1803.09015v4",econ.EM
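A minimal numerical sketch of the group-time building block in a staggered adoption design: each ATT(g, t) below is a 2x2 difference-in-differences against never-treated units, using the last pre-treatment period as the base period. This is only a simplified stand-in for the estimators described above (no covariates, inverse probability weighting, doubly-robust correction, aggregation weights, or inference), with simulated data.

```python
# Illustrative sketch: unconditional ATT(g, t) in a staggered design via 2x2
# difference-in-differences against never-treated units. Simulated data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
periods, groups = [1, 2, 3, 4], [2, 3, 0]        # 0 = never treated
rows = []
for i in range(3000):
    g = rng.choice(groups)
    alpha = rng.normal()                          # unit fixed effect
    for t in periods:
        effect = 1.0 + 0.5 * (t - g) if (g > 0 and t >= g) else 0.0
        rows.append({"id": i, "g": g, "t": t,
                     "y": alpha + 0.2 * t + effect + rng.normal()})
df = pd.DataFrame(rows)

def att_gt(df, g, t):
    base = g - 1                                  # last pre-treatment period
    treat, ctrl = df[df.g == g], df[df.g == 0]
    d_treat = treat[treat.t == t].y.mean() - treat[treat.t == base].y.mean()
    d_ctrl = ctrl[ctrl.t == t].y.mean() - ctrl[ctrl.t == base].y.mean()
    return d_treat - d_ctrl

atts = {(g, t): att_gt(df, g, t) for g in [2, 3] for t in periods if t >= g}
print({k: round(v, 2) for k, v in atts.items()})
```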
30934,em,"In the recent literature on estimating heterogeneous treatment effects, each
proposed method makes its own set of restrictive assumptions about the
intervention's effects and which subpopulations to explicitly estimate.
Moreover, the majority of the literature provides no mechanism to identify
which subpopulations are the most affected--beyond manual inspection--and
provides little guarantee on the correctness of the identified subpopulations.
Therefore, we propose Treatment Effect Subset Scan (TESS), a new method for
discovering which subpopulation in a randomized experiment is most
significantly affected by a treatment. We frame this challenge as a pattern
detection problem where we efficiently maximize a nonparametric scan statistic
(a measure of the conditional quantile treatment effect) over subpopulations.
Furthermore, we identify the subpopulation which experiences the largest
distributional change as a result of the intervention, while making minimal
assumptions about the intervention's effects or the underlying data generating
process. In addition to the algorithm, we demonstrate that under the sharp null
hypothesis of no treatment effect, the asymptotic Type I and II error can be
controlled, and provide sufficient conditions for detection consistency--i.e.,
exact identification of the affected subpopulation. Finally, we validate the
efficacy of the method by discovering heterogeneous treatment effects in
simulations and in real-world data from a well-known program evaluation study.",Efficient Discovery of Heterogeneous Quantile Treatment Effects in Randomized Experiments via Anomalous Pattern Detection,2018-03-24 23:21:06,"Edward McFowland III, Sriram Somanchi, Daniel B. Neill","http://arxiv.org/abs/1803.09159v3, http://arxiv.org/pdf/1803.09159v3",stat.ME
30935,em,"Under the classical long-span asymptotic framework we develop a class of
Generalized Laplace (GL) inference methods for the change-point dates in a
linear time series regression model with multiple structural changes analyzed
in, e.g., Bai and Perron (1998). The GL estimator is defined by an integration
rather than optimization-based method and relies on the least-squares criterion
function. It is interpreted as a classical (non-Bayesian) estimator and the
inference methods proposed retain a frequentist interpretation. This approach
provides a better approximation of the uncertainty about the change-point dates
than existing methods. On the theoretical side, depending
on some input (smoothing) parameter, the class of GL estimators exhibits a dual
limiting distribution; namely, the classical shrinkage asymptotic distribution,
or a Bayes-type asymptotic distribution. We propose an inference method based
on Highest Density Regions using the latter distribution. We show that it has
attractive theoretical properties not shared by the other popular alternatives,
i.e., it is bet-proof. Simulations confirm that these theoretical properties
translate to better finite-sample performance.",Generalized Laplace Inference in Multiple Change-Points Models,2018-03-29 01:34:57,"Alessandro Casini, Pierre Perron","http://dx.doi.org/10.1017/S0266466621000013, http://arxiv.org/abs/1803.10871v4, http://arxiv.org/pdf/1803.10871v4",math.ST
30936,em,"For a partial structural change in a linear regression model with a single
break, we develop a continuous record asymptotic framework to build inference
methods for the break date. We have T observations with a sampling frequency h
over a fixed time horizon [0, N], and let $T \to \infty$ with $h \to 0$ while keeping the time
span N fixed. We impose very mild regularity conditions on an underlying
continuous-time model assumed to generate the data. We consider the
least-squares estimate of the break date and establish consistency and
convergence rate. We provide a limit theory for shrinking magnitudes of shifts
and locally increasing variances. The asymptotic distribution corresponds to
the location of the extremum of a function of the quadratic variation of the
regressors and of a Gaussian centered martingale process over a certain time
interval. We can account for the asymmetric informational content provided by
the pre- and post-break regimes and show how the location of the break and
shift magnitude are key ingredients in shaping the distribution. We consider a
feasible version based on plug-in estimates, which provides a very good
approximation to the finite sample distribution. We use the concept of Highest
Density Region to construct confidence sets. Overall, our method is reliable
and delivers accurate coverage probabilities and relatively short average
length of the confidence sets. Importantly, it does so irrespective of the size
of the break.",Continuous Record Asymptotics for Change-Points Models,2018-03-29 02:58:03,"Alessandro Casini, Pierre Perron","http://arxiv.org/abs/1803.10881v3, http://arxiv.org/pdf/1803.10881v3",math.ST
30937,em,"Building upon the continuous record asymptotic framework recently introduced
by Casini and Perron (2018a) for inference in structural change models, we
propose a Laplace-based (Quasi-Bayes) procedure for the construction of the
estimate and confidence set for the date of a structural change. It is defined
by an integration rather than an optimization-based method. A transformation of
the least-squares criterion function is evaluated in order to derive a proper
distribution, referred to as the Quasi-posterior. For a given choice of a loss
function, the Laplace-type estimator is the minimizer of the expected risk with
the expectation taken under the Quasi-posterior. Besides providing an
alternative estimate that is more precise (lower mean absolute error (MAE) and
lower root mean squared error (RMSE)) than the usual least-squares one, the
Quasi-posterior distribution can be used to construct asymptotically valid
inference using the concept of Highest Density Region. The resulting
Laplace-based inferential procedure is shown to have lower MAE and RMSE, and
the confidence sets strike the best balance between empirical coverage rates
and average lengths of the confidence sets relative to traditional long-span
methods, whether the break size is small or large.",Continuous Record Laplace-based Inference about the Break Date in Structural Change Models,2018-04-01 03:04:25,"Alessandro Casini, Pierre Perron","http://arxiv.org/abs/1804.00232v3, http://arxiv.org/pdf/1804.00232v3",econ.EM
30938,em,"The common practice in difference-in-difference (DiD) designs is to check for
parallel trends prior to treatment assignment, yet typical estimation and
inference does not account for the fact that this test has occurred. I analyze
the properties of the traditional DiD estimator conditional on having passed
(i.e. not rejected) the test for parallel pre-trends. When the DiD design is
valid and the test for pre-trends confirms it, the typical DiD estimator is
unbiased, but traditional standard errors are overly conservative.
Additionally, there exists an alternative unbiased estimator that is more
efficient than the traditional DiD estimator under parallel trends. However,
when in population there is a non-zero pre-trend but we fail to reject the
hypothesis of parallel pre-trends, the DiD estimator is generally biased
relative to the population DiD coefficient. Moreover, if the trend is monotone,
then under reasonable assumptions the bias from conditioning exacerbates the
bias relative to the true treatment effect. I propose new estimation and
inference procedures that account for the test for parallel trends, and compare
their performance to that of the traditional estimator in a Monte Carlo
simulation.",Should We Adjust for the Test for Pre-trends in Difference-in-Difference Designs?,2018-04-04 04:54:37,Jonathan Roth,"http://arxiv.org/abs/1804.01208v2, http://arxiv.org/pdf/1804.01208v2",econ.EM
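The conditioning problem described above can be illustrated with a small Monte Carlo sketch: when a true differential trend exists and the true treatment effect is zero, the draws that survive the pre-trends test tend to be more, not less, biased than the unconditional DiD. The three-period design and all parameter values below are assumptions chosen only for illustration.

```python
# Illustrative sketch: bias of the DiD estimator conditional on "passing" a
# pre-trends test, under a true differential trend and zero treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n, trend = 2000, 50, 0.25
did_all, did_passed = [], []
for _ in range(n_sims):
    # Three periods (two pre, one post); treated units drift upward by `trend`
    # each period.
    y_t = trend * np.arange(1, 4) + rng.normal(0, 1, (n, 3))
    y_c = rng.normal(0, 1, (n, 3))

    pre_change_t = y_t[:, 1] - y_t[:, 0]
    pre_change_c = y_c[:, 1] - y_c[:, 0]
    _, pval = stats.ttest_ind(pre_change_t, pre_change_c)

    did = (y_t[:, 2] - y_t[:, 1]).mean() - (y_c[:, 2] - y_c[:, 1]).mean()
    did_all.append(did)
    if pval > 0.05:                 # sample "passes" the parallel pre-trends test
        did_passed.append(did)

# Both means are biased away from zero; the mean among passing draws is
# typically the larger of the two, illustrating the conditioning problem.
print("mean DiD, all draws:    ", round(float(np.mean(did_all)), 3))
print("mean DiD, passing draws:", round(float(np.mean(did_passed)), 3))
```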
30939,em,"We propose simultaneous mean-variance regression for the linear estimation
and approximation of conditional mean functions. In the presence of
heteroskedasticity of unknown form, our method accounts for varying dispersion
in the regression outcome across the support of conditioning variables by using
weights that are jointly determined with the mean regression parameters.
Simultaneity generates outcome predictions that are guaranteed to improve over
ordinary least-squares prediction error, with corresponding parameter standard
errors that are automatically valid. Under shape misspecification of the
conditional mean and variance functions, we establish existence and uniqueness
of the resulting approximations and characterize their formal interpretation
and robustness properties. In particular, we show that the corresponding
mean-variance regression location-scale model weakly dominates the ordinary
least-squares location model under a Kullback-Leibler measure of divergence,
with strict improvement in the presence of heteroskedasticity. The simultaneous
mean-variance regression loss function is globally convex and the corresponding
estimator is easy to implement. We establish its consistency and asymptotic
normality under misspecification, provide robust inference methods, and present
numerical simulations that show large improvements over ordinary and weighted
least-squares in terms of estimation and inference in finite samples. We
further illustrate our method with two empirical applications to the estimation
of the relationship between economic prosperity in 1500 and today, and demand
for gasoline in the United States.",Simultaneous Mean-Variance Regression,2018-04-05 03:23:02,"Richard Spady, Sami Stouli","http://arxiv.org/abs/1804.01631v2, http://arxiv.org/pdf/1804.01631v2",econ.EM
30940,em,"We present large sample results for partitioning-based least squares
nonparametric regression, a popular method for approximating conditional
expectation functions in statistics, econometrics, and machine learning. First,
we obtain a general characterization of their leading asymptotic bias. Second,
we establish integrated mean squared error approximations for the point
estimator and propose feasible tuning parameter selection. Third, we develop
pointwise inference methods based on undersmoothing and robust bias correction.
Fourth, employing different coupling approaches, we develop uniform
distributional approximations for the undersmoothed and robust bias-corrected
t-statistic processes and construct valid confidence bands. In the univariate
case, our uniform distributional approximations require seemingly minimal rate
restrictions and improve on approximation rates known in the literature.
Finally, we apply our general results to three partitioning-based estimators:
splines, wavelets, and piecewise polynomials. The supplemental appendix
includes several other general and example-specific technical and
methodological results. A companion R package is provided.",Large Sample Properties of Partitioning-Based Series Estimators,2018-04-13 15:33:48,"Matias D. Cattaneo, Max H. Farrell, Yingjie Feng","http://arxiv.org/abs/1804.04916v3, http://arxiv.org/pdf/1804.04916v3",math.ST
30941,em,"Based on 1-minute price changes recorded since year 2012, the fluctuation
properties of the rapidly-emerging Bitcoin (BTC) market are assessed over
chosen sub-periods, in terms of return distributions, volatility
autocorrelation, Hurst exponents and multiscaling effects. The findings are
compared to the stylized facts of mature world markets. While early trading was
affected by system-specific irregularities, it is found that over the months
preceding Apr 2018 all these statistical indicators approach the features
hallmarking maturity. This can be taken as an indication that the Bitcoin
market, and possibly other cryptocurrencies, carry concrete potential of
imminently becoming a regular market, alternative to the foreign exchange
(Forex). Since high-frequency price data are available since the beginning of
trading, Bitcoin offers a unique window into the statistical
characteristics of a market maturation trajectory.","Bitcoin market route to maturity? Evidence from return fluctuations, temporal correlations and multiscaling effects",2018-04-16 23:00:01,"Stanisław Drożdż, Robert Gębarowski, Ludovico Minati, Paweł Oświęcimka, Marcin Wątorek","http://dx.doi.org/10.1063/1.5036517, http://arxiv.org/abs/1804.05916v2, http://arxiv.org/pdf/1804.05916v2",q-fin.ST
30942,em,"Deep learning searches for nonlinear factors for predicting asset returns.
Predictability is achieved via multiple layers of composite factors as opposed
to additive ones. Viewed in this way, asset pricing studies can be revisited
using multi-layer deep learners, such as rectified linear units (ReLU) or
long-short-term-memory (LSTM) for time-series effects. State-of-the-art
algorithms including stochastic gradient descent (SGD), TensorFlow and dropout
design provide implementation and efficient factor exploration. To illustrate
our methodology, we revisit the equity market risk premium dataset of Welch and
Goyal (2008). We find the existence of nonlinear factors which explain
predictability of returns, in particular at the extremes of the characteristic
space. Finally, we conclude with directions for future research.",Deep Learning for Predicting Asset Returns,2018-04-25 04:52:34,"Guanhao Feng, Jingyu He, Nicholas G. Polson","http://arxiv.org/abs/1804.09314v2, http://arxiv.org/pdf/1804.09314v2",stat.ML
30943,em,"This paper develops the limit theory of the GARCH(1,1) process that
moderately deviates from the IGARCH process towards both stationary and explosive
regimes. The GARCH(1,1) process is defined by equations $u_t = \sigma_t
\varepsilon_t$, $\sigma_t^2 = \omega + \alpha_n u_{t-1}^2 +
\beta_n\sigma_{t-1}^2$, and $\alpha_n + \beta_n$ approaches unity as the sample
size goes to infinity. The asymptotic theory developed in this paper extends
Berkes et al. (2005) by allowing the parameters to have a slower convergence
rate. The results can be applied to unit root tests for processes with
mildly-integrated GARCH innovations (e.g. Boswijk (2001), Cavaliere and Taylor
(2007, 2009)) and deriving limit theory of estimators for models involving
mildly-integrated GARCH processes (e.g. Jensen and Rahbek (2004), Francq and
Zako\""ian (2012, 2013)).",Limit Theory for Moderate Deviation from Integrated GARCH Processes,2018-06-04 20:16:56,Yubo Tao,"http://dx.doi.org/10.1016/j.spl.2019.03.001, http://arxiv.org/abs/1806.01229v3, http://arxiv.org/pdf/1806.01229v3",math.ST
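For concreteness, the GARCH(1,1) recursion spelled out above can be simulated directly; the sketch below uses parameter values (an assumption, not taken from the paper) with $\alpha_n + \beta_n$ close to one to mimic the near-IGARCH regime the paper studies.

```python
# Illustrative sketch: simulate u_t = sigma_t * eps_t,
# sigma_t^2 = omega + alpha_n * u_{t-1}^2 + beta_n * sigma_{t-1}^2,
# with alpha_n + beta_n close to one. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(3)
T = 5000
omega, alpha_n, beta_n = 0.1, 0.09, 0.905            # alpha_n + beta_n = 0.995
u = np.zeros(T)
sigma2 = np.empty(T)
sigma2[0] = omega / (1.0 - alpha_n - beta_n)          # implied unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha_n * u[t - 1] ** 2 + beta_n * sigma2[t - 1]
    u[t] = np.sqrt(sigma2[t]) * rng.normal()

print("sample variance of u:", round(float(u.var()), 2))
print("implied unconditional variance:", round(omega / (1 - alpha_n - beta_n), 2))
```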
30944,em,"This chapter presents key concepts and theoretical results for analyzing
estimation and inference in high-dimensional models. High-dimensional models
are characterized by having a number of unknown parameters that is not
vanishingly small relative to the sample size. We first present results in a
framework where estimators of parameters of interest may be represented
directly as approximate means. Within this context, we review fundamental
results including high-dimensional central limit theorems, bootstrap
approximation of high-dimensional limit distributions, and moderate deviation
theory. We also review key concepts underlying inference when many parameters
are of interest such as multiple testing with family-wise error rate or false
discovery rate control. We then turn to a general high-dimensional minimum
distance framework with a special focus on generalized method of moments
problems where we present results for estimation and inference about model
parameters. The presented results cover a wide array of econometric
applications, and we discuss several leading special cases including
high-dimensional linear regression and linear instrumental variables models to
illustrate the general results.",High-Dimensional Econometrics and Regularized GMM,2018-06-05 21:46:12,"Alexandre Belloni, Victor Chernozhukov, Denis Chetverikov, Christian Hansen, Kengo Kato","http://arxiv.org/abs/1806.01888v2, http://arxiv.org/pdf/1806.01888v2",math.ST
30945,em,"Symmetry is a fundamental concept in modern physics and other related
sciences. Being such a powerful tool, almost all physical theories can be
derived from symmetry, and the effectiveness of such an approach is
astonishing. Since many physicists do not actually believe that symmetry is a
fundamental feature of nature, it seems more likely it is a fundamental feature
of human cognition. According to evolutionary psychologists, humans have a
sensory bias for symmetry. The unconscious quest for symmetrical patterns has
developed as a solution to specific adaptive problems related to survival and
reproduction. Therefore, it comes as no surprise that some fundamental concepts
in psychology and behavioral economics necessarily involve symmetry. The
purpose of this paper is to draw attention to the role of symmetry in
decision-making and to illustrate how it can be algebraically operationalized
through the use of mathematical group theory.",Role of Symmetry in Irrational Choice,2018-06-07 14:48:31,Ivan Kozic,"http://arxiv.org/abs/1806.02627v3, http://arxiv.org/pdf/1806.02627v3",physics.pop-ph
30946,em,"The nonuniqueness of rational expectations is explained: in the stochastic,
discrete-time, linear, constant-coefficients case, the associated free
parameters are coefficients that determine the public's most immediate
reactions to shocks. The requirement of model-consistency may leave these
parameters completely free, yet when their values are appropriately specified,
a unique solution is determined. In a broad class of models, the requirement of
least-square forecast errors determines the parameter values, and therefore
defines a unique solution. This approach is independent of dynamical stability,
and generally does not suppress model dynamics.
  Application to a standard New Keynesian example shows that the traditional
solution suppresses precisely those dynamics that arise from rational
expectations. The uncovering of those dynamics reveals their incompatibility
with the new I-S equation and the expectational Phillips curve.",The Origin and the Resolution of Nonuniqueness in Linear Rational Expectations,2018-06-18 16:35:51,John G. Thistle,"http://arxiv.org/abs/1806.06657v3, http://arxiv.org/pdf/1806.06657v3",q-fin.EC
30948,em,"This note presents a proof of the conjecture in Pearl (1995) about
testing the validity of an instrumental variable in hidden variable
models. It implies that instrument validity cannot be tested in the case where
the endogenous treatment is continuously distributed. This stands in contrast
to the classical testability results for instrument validity when the treatment
is discrete. However, imposing weak structural assumptions on the model, such
as continuity between the observable variables, can re-establish theoretical
testability in the continuous setting.",Non-testability of instrument validity under continuous endogenous variables,2018-06-25 18:05:22,Florian Gunsilius,"http://arxiv.org/abs/1806.09517v3, http://arxiv.org/pdf/1806.09517v3",econ.EM
30949,em,"Domestic and foreign scholars have already done much research on regional
disparity and its evolution in China, but there is a big difference in
conclusions. What is the reason for this? We think it is mainly due to
different analytic approaches, perspectives, spatial units, statistical
indicators and different periods for studies. On the basis of previous analyses
and findings, we have done some further quantitative computation and empirical
study, and revealed the inter-provincial disparity and regional disparity of
economic development and their evolution trends from 1952-2000. The results
show that (a) Regional disparity in economic development in China, including
the inter-provincial disparity, inter-regional disparity and intra-regional
disparity, has existed for years; (b) Gini coefficient and Theil coefficient
have revealed a similar dynamic trend for comparative disparity in economic
development between provinces in China. From 1952 to 1978, except for the
""Great Leap Forward"" period, comparative disparity basically assumes a upward
trend and it assumed a slowly downward trend from 1979 to1990. Afterwards from
1991 to 2000 the disparity assumed a slowly upward trend again; (c) A
comparison between Shanghai and Guizhou shows that absolute inter-provincial
disparity has been quite big for years; and (d) The Hurst exponent (H=0.5) in
the period of 1966-1978 indicates that the comparative inter-provincial
disparity of economic development showed a random characteristic, and the
Hurst exponent (H>0.5) in the period of 1979-2000 indicates that in this period the
evolution of the comparative inter-provincial disparity of economic development
in China has a long-enduring characteristic.",Quantitative analysis on the disparity of regional economic development in China and its evolution from 1952 to 2000,2018-06-28 09:56:57,"Jianhua Xu, Nanshan Ai, Yan Lu, Yong Chen, Yiying Ling, Wenze Yue","http://arxiv.org/abs/1806.10794v1, http://arxiv.org/pdf/1806.10794v1",stat.AP
30950,em,"This paper considers inference for a function of a parameter vector in a
partially identified model with many moment inequalities. This framework allows
the number of moment conditions to grow with the sample size, possibly at
exponential rates. Our main motivating application is subvector inference,
i.e., inference on a single component of the partially identified parameter
vector associated with a treatment effect or a policy variable of interest.
  Our inference method compares a MinMax test statistic (minimum over
parameters satisfying $H_0$ and maximum over moment inequalities) against
critical values that are based on bootstrap approximations or analytical
bounds. We show that this method controls asymptotic size uniformly over a
large class of data generating processes despite the partially identified many
moment inequality setting. The finite sample analysis allows us to obtain
explicit rates of convergence on the size control. Our results are based on
combining non-asymptotic approximations and new high-dimensional central limit
theorems for the MinMax of the components of random matrices. Unlike the
previous literature on functional inference in partially identified models, our
results do not rely on weak convergence results based on Donsker's class
assumptions and, in fact, our test statistic may not even converge in
distribution. Our bootstrap approximation requires the choice of a tuning
parameter sequence that can avoid the excessive concentration of our test
statistic. To this end, we propose an asymptotically valid data-driven method
to select this tuning parameter sequence. This method generalizes the selection
of tuning parameter sequences to problems outside the Donsker's class
assumptions and may also be of independent interest. Our procedures based on
self-normalized moderate deviation bounds are relatively more conservative but
easier to implement.",Subvector Inference in Partially Identified Models with Many Moment Inequalities,2018-06-29 18:15:26,"Alexandre Belloni, Federico Bugni, Victor Chernozhukov","http://arxiv.org/abs/1806.11466v1, http://arxiv.org/pdf/1806.11466v1",math.ST
30951,em,"Errors-in-variables is a long-standing, difficult issue in linear regression;
and progress depends in part on new identifying assumptions. I characterize
measurement error as bad-leverage points and assume that fewer than half the
sample observations are heavily contaminated, in which case a high-breakdown
robust estimator may be able to isolate and downweight or discard the
problematic data. In simulations of simple and multiple regression where EIV
affects 25% of the data and R-squared is mediocre, certain high-breakdown
estimators have small bias and reliable confidence intervals.",Measurement Errors as Bad Leverage Points,2018-07-08 16:25:45,Eric Blankmeyer,"http://arxiv.org/abs/1807.02814v2, http://arxiv.org/pdf/1807.02814v2",econ.EM
30952,em,"The paper establishes the central limit theorems and proposes how to perform
valid inference in factor models. We consider a setting where many
counties/regions/assets are observed for many time periods, and when estimation
of a global parameter includes aggregation of a cross-section of heterogeneous
micro-parameters estimated separately for each entity. The central limit
theorem applies for quantities involving both cross-sectional and time series
aggregation, as well as for quadratic forms in time-aggregated errors. The
paper studies the conditions when one can consistently estimate the asymptotic
variance, and proposes a bootstrap scheme for cases when one cannot. A small
simulation study illustrates performance of the asymptotic and bootstrap
procedures. The results are useful for making inferences in two-step estimation
procedures related to factor models, as well as in other related contexts. Our
treatment avoids structural modeling of cross-sectional dependence but imposes
time-series independence.",Limit Theorems for Factor Models,2018-07-17 13:56:52,"Stanislav Anatolyev, Anna Mikusheva","http://dx.doi.org/10.1017/S0266466620000468, http://arxiv.org/abs/1807.06338v3, http://arxiv.org/pdf/1807.06338v3",econ.EM
30953,em,"Machine learning methods tend to outperform traditional statistical models at
prediction. In the prediction of academic achievement, ML models have not shown
substantial improvement over logistic regression. So far, these results have
almost entirely focused on college achievement, due to the availability of
administrative datasets, and have contained relatively small sample sizes by ML
standards. In this article we apply popular machine learning models to a large
dataset ($n=1.2$ million) containing primary and middle school performance on a
standardized test given annually to Australian students. We show that machine
learning models do not outperform logistic regression for detecting students
who will perform in the `below standard' band of achievement upon sitting their
next test, even in a large-$n$ setting.",Machine Learning Classifiers Do Not Improve the Prediction of Academic Risk: Evidence from Australia,2018-07-19 05:01:36,"Sarah Cornell-Farrow, Robert Garrard","http://dx.doi.org/10.1080/23737484.2020.1752849, http://arxiv.org/abs/1807.07215v4, http://arxiv.org/pdf/1807.07215v4",stat.ML
30959,em,"In this paper, we introduce a new machine learning (ML) model for nonlinear
regression called the Boosted Smooth Transition Regression Trees (BooST), which
is a combination of boosting algorithms with smooth transition regression
trees. The main advantage of the BooST model is the estimation of the
derivatives (partial effects) of very general nonlinear models. Therefore, the
model can provide more interpretation about the mapping between the covariates
and the dependent variable than other tree-based models, such as Random
Forests. We present several examples with both simulated and real data.",BooST: Boosting Smooth Trees for Partial Effect Estimation in Nonlinear Regressions,2018-08-10 23:37:52,"Yuri Fonseca, Marcelo Medeiros, Gabriel Vasconcelos, Alvaro Veiga","http://arxiv.org/abs/1808.03698v5, http://arxiv.org/pdf/1808.03698v5",stat.ML
30954,em,"We study the implications of including many covariates in a first-step
estimate entering a two-step estimation procedure. We find that a first order
bias emerges when the number of \textit{included} covariates is ""large""
relative to the square-root of sample size, rendering standard inference
procedures invalid. We show that the jackknife is able to estimate this ""many
covariates"" bias consistently, thereby delivering a new automatic
bias-corrected two-step point estimator. The jackknife also consistently
estimates the standard error of the original two-step point estimator. For
inference, we develop a valid post-bias-correction bootstrap approximation that
accounts for the additional variability introduced by the jackknife
bias-correction. We find that the jackknife bias-corrected point estimator and
the bootstrap post-bias-correction inference perform very well in simulations,
offering important improvements over conventional two-step point estimators and
inference procedures, which are not robust to including many covariates. We
apply our results to an array of distinct treatment effect, policy evaluation,
and other applied microeconomics settings. In particular, we discuss production
function and marginal treatment effect estimation in detail.",Two-Step Estimation and Inference with Possibly Many Included Covariates,2018-07-26 15:53:58,"Matias D. Cattaneo, Michael Jansson, Xinwei Ma","http://arxiv.org/abs/1807.10100v1, http://arxiv.org/pdf/1807.10100v1",econ.EM
30955,em,"This paper describes a fundamental and empirically conspicuous problem that
is inherent to all surveys of human feelings and opinions in which subjective
responses are elicited on numerical scales. The paper also proposes a solution.
The problem is ""focal value response"" (FVR) behavior, a deep tendency by some
kinds of individuals -- particularly those with low levels of education -- to
simplify the response scale by considering only a subset of possible responses
such as the lowest, middle, and highest. This renders even the weak ordinality
assumption used with such data invalid. I show in this paper that, especially
if a researcher is trying to estimate the influence of education, the resulting
biases in inference can be overwhelmingly large. Using the application of
""happiness"" or life satisfaction data as an example, I first use a multinomial
logit model to diagnose the factors empirically associated with FVR. I then
introduce a new computational approach to estimate the bias resulting from FVR,
to correct estimates of univariate and multivariate inference, and to estimate
an index quantifying the fraction of respondents who exhibit FVR. Addressing
this problem resolves a longstanding and unexplained anomaly in the life
satisfaction literature, namely that the returns to education, after adjusting
for income, appear to be small or negative. I also show that, due to the same
econometric problem, the marginal utility of income in a subjective wellbeing
sense has been consistently underestimated by economists of happiness.",The econometrics of happiness: Are we underestimating the returns to education and income?,2018-07-31 17:20:49,Christopher P Barrington-Leigh,"http://arxiv.org/abs/1807.11835v2, http://arxiv.org/pdf/1807.11835v2",econ.EM
30956,em,"Nonlinear panel data models with fixed individual effects provide an
important set of tools for describing microeconometric data. In a large class
of such models (including probit, proportional hazard and quantile regression
to name just a few) it is impossible to difference out individual effects, and
inference is usually justified in a `large n large T' asymptotic framework.
However, there is a considerable gap in the type of assumptions that are
currently imposed in models with smooth score functions (such as probit, and
proportional hazard) and quantile regression. In the present paper we show that
this gap can be bridged and establish unbiased asymptotic normality for
quantile regression panels under conditions on n,T that are very close to what
is typically assumed in standard nonlinear panels. Our results considerably
improve upon existing theory and show that quantile regression is applicable to
the same type of panel data (in terms of n,T) as other commonly used nonlinear
panel data models. Thorough numerical experiments confirm our theoretical
findings.",On the Unbiased Asymptotic Normality of Quantile Regression with Fixed Effects,2018-07-31 18:17:56,"Antonio F. Galvao, Jiaying Gu, Stanislav Volgushev","http://arxiv.org/abs/1807.11863v2, http://arxiv.org/pdf/1807.11863v2",econ.EM
30957,em,"This paper studies higher-order inference properties of nonparametric local
polynomial regression methods under random sampling. We prove Edgeworth
expansions for $t$ statistics and coverage error expansions for interval
estimators that (i) hold uniformly in the data generating process, (ii) allow
for the uniform kernel, and (iii) cover estimation of derivatives of the
regression function. The terms of the higher-order expansions, and their
associated rates as a function of the sample size and bandwidth sequence,
depend on the smoothness of the population regression function, the smoothness
exploited by the inference procedure, and on whether the evaluation point is in
the interior or on the boundary of the support. We prove that robust bias
corrected confidence intervals have the fastest coverage error decay rates in
all cases, and we use our results to deliver novel, inference-optimal bandwidth
selectors. The main methodological results are implemented in companion
\textsf{R} and \textsf{Stata} software packages.",Coverage Error Optimal Confidence Intervals for Local Polynomial Regression,2018-08-04 04:10:13,"Sebastian Calonico, Matias D. Cattaneo, Max H. Farrell","http://arxiv.org/abs/1808.01398v4, http://arxiv.org/pdf/1808.01398v4",econ.EM
30958,em,"This paper introduces a quantile regression estimator for panel data models
with individual heterogeneity and attrition. The method is motivated by the
fact that attrition bias is often encountered in Big Data applications. For
example, many users sign-up for the latest program but few remain active users
several months later, making the evaluation of such interventions inherently
very challenging. Building on earlier work by Hausman and Wise (1979), we
provide a simple identification strategy that leads to a two-step estimation
procedure. In the first step, the coefficients of interest in the selection
equation are consistently estimated using parametric or nonparametric methods.
In the second step, standard panel quantile methods are employed on a subset of
weighted observations. The estimator is computationally easy to implement in
Big Data applications with a large number of subjects. We investigate the
conditions under which the parameter estimator is asymptotically Gaussian and
we carry out a series of Monte Carlo simulations to investigate the finite
sample properties of the estimator. Lastly, using a simulation exercise, we
apply the method to the evaluation of a recent Time-of-Day electricity pricing
experiment inspired by the work of Aigner and Hausman (1980).",A Panel Quantile Approach to Attrition Bias in Big Data: Evidence from a Randomized Experiment,2018-08-10 01:35:27,"Matthew Harding, Carlos Lamarche","http://arxiv.org/abs/1808.03364v1, http://arxiv.org/pdf/1808.03364v1",econ.EM
30960,em,"In non-experimental settings, the Regression Discontinuity (RD) design is one
of the most credible identification strategies for program evaluation and
causal inference. However, RD treatment effect estimands are necessarily local,
making statistical methods for the extrapolation of these effects a key area
for development. We introduce a new method for extrapolation of RD effects that
relies on the presence of multiple cutoffs, and is therefore design-based. Our
approach employs an easy-to-interpret identifying assumption that mimics the
idea of ""common trends"" in difference-in-differences designs. We illustrate our
methods with data on a subsidized loan program on post-education attendance in
Colombia, and offer new evidence on program effects for students with test
scores away from the cutoff that determined program eligibility.",Extrapolating Treatment Effects in Multi-Cutoff Regression Discontinuity Designs,2018-08-13 22:38:11,"Matias D. Cattaneo, Luke Keele, Rocio Titiunik, Gonzalo Vazquez-Bare","http://arxiv.org/abs/1808.04416v3, http://arxiv.org/pdf/1808.04416v3",econ.EM
30961,em,"In this paper we study estimation of and inference for average treatment
effects in a setting with panel data. We focus on the setting where units,
e.g., individuals, firms, or states, adopt the policy or treatment of interest
at a particular point in time, and then remain exposed to this treatment at all
times afterwards. We take a design perspective where we investigate the
properties of estimators and procedures given assumptions on the assignment
process. We show that under random assignment of the adoption date the standard
Difference-In-Differences estimator is an unbiased estimator of a particular
weighted average causal effect. We characterize the properties of this
estimand, and show that the standard variance estimator is conservative.",Design-based Analysis in Difference-In-Differences Settings with Staggered Adoption,2018-08-16 01:10:57,"Susan Athey, Guido Imbens","http://arxiv.org/abs/1808.05293v3, http://arxiv.org/pdf/1808.05293v3",econ.EM
30962,em,"Graphical models have become a very popular tool for representing
dependencies within a large set of variables and are key for representing
causal structures. We provide results for uniform inference on high-dimensional
graphical models with the number of target parameters $d$ being possibly much
larger than sample size. This is in particular important when certain features
or structures of a causal model should be recovered. Our results highlight how
in high-dimensional settings graphical models can be estimated and recovered
with modern machine learning methods in complex data sets. To construct
simultaneous confidence regions on many target parameters, sufficiently fast
estimation rates of the nuisance functions are crucial. In this context, we
establish uniform estimation rates and sparsity guarantees of the square-root
estimator in a random design under approximate sparsity conditions that might
be of independent interest for related problems in high-dimensions. We also
demonstrate in a comprehensive simulation study that our procedure has good
small sample properties.",Uniform Inference in High-Dimensional Gaussian Graphical Models,2018-08-31 00:53:06,"Sven Klaassen, Jannis Kück, Martin Spindler, Victor Chernozhukov","http://arxiv.org/abs/1808.10532v2, http://arxiv.org/pdf/1808.10532v2",stat.ME
30963,em,"Insurance companies must manage millions of claims per year. While most of
these claims are non-fraudulent, fraud detection is core for insurance
companies. The ultimate goal is a predictive model to single out the fraudulent
claims and pay out the non-fraudulent ones immediately. Modern machine learning
methods are well suited for this kind of problem. Health care claims often have
a data structure that is hierarchical and of variable length. We propose one
model based on piecewise feed forward neural networks (deep learning) and
another model based on self-attention neural networks for the task of claim
management. We show that the proposed methods outperform bag-of-words based
models, hand designed features, and models based on convolutional neural
networks, on a data set of two million health care claims. The proposed
self-attention method performs the best.",A Self-Attention Network for Hierarchical Data Structures with an Application to Claims Management,2018-08-31 01:56:46,"Leander Löw, Martin Spindler, Eike Brechmann","http://arxiv.org/abs/1808.10543v1, http://arxiv.org/pdf/1808.10543v1",cs.LG
30964,em,"Empirical research often cites observed choice responses to variation that
shifts expected discounted future utilities, but not current utilities, as an
intuitive source of information on time preferences. We study the
identification of dynamic discrete choice models under such economically
motivated exclusion restrictions on primitive utilities. We show that each
exclusion restriction leads to an easily interpretable moment condition with
the discount factor as the only unknown parameter. The identified set of
discount factors that solves this condition is finite, but not necessarily a
singleton. Consequently, in contrast to common intuition, an exclusion
restriction does not in general give point identification. Finally, we show
that exclusion restrictions have nontrivial empirical content: The implied
moment conditions impose restrictions on choices that are absent from the
unconstrained model.",Identifying the Discount Factor in Dynamic Discrete Choice Models,2018-08-31 12:53:21,"Jaap H. Abbring, Øystein Daljord","http://dx.doi.org/10.3982/QE1352, http://arxiv.org/abs/1808.10651v4, http://arxiv.org/pdf/1808.10651v4",econ.EM
30965,em,"Portfolio sorting is ubiquitous in the empirical finance literature, where it
has been widely used to identify pricing anomalies. Despite its popularity,
little attention has been paid to the statistical properties of the procedure.
We develop a general framework for portfolio sorting by casting it as a
nonparametric estimator. We present valid asymptotic inference methods and a
valid mean square error expansion of the estimator leading to an optimal choice
for the number of portfolios. In practical settings, the optimal choice may be
much larger than the standard choices of 5 or 10. To illustrate the relevance
of our results, we revisit the size and momentum anomalies.",Characteristic-Sorted Portfolios: Estimation and Inference,2018-09-10 23:31:28,"Matias D. Cattaneo, Richard K. Crump, Max H. Farrell, Ernst Schaumburg","http://arxiv.org/abs/1809.03584v3, http://arxiv.org/pdf/1809.03584v3",econ.EM
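Viewed as a nonparametric estimator, portfolio sorting is simply a binned regression of returns on the sorting characteristic; the sketch below illustrates this with simulated data and an arbitrary choice of 10 portfolios (the paper derives an MSE-optimal number of portfolios, which may be much larger). All numbers are illustrative assumptions.

```python
# Illustrative sketch: characteristic-sorted portfolios as a binned estimator
# of E[return | characteristic], plus the high-minus-low spread.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
size = rng.normal(size=n)                              # sorting characteristic
ret = -0.3 * size + rng.normal(scale=2.0, size=n)      # simulated returns

J = 10                                                 # arbitrary number of portfolios
edges = np.quantile(size, np.linspace(0, 1, J + 1))
bins = np.digitize(size, edges[1:-1])                  # bin labels 0..J-1
portfolio_means = np.array([ret[bins == j].mean() for j in range(J)])

print("portfolio mean returns:", portfolio_means.round(2))
print("high-minus-low spread: ", round(portfolio_means[-1] - portfolio_means[0], 2))
```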
30966,em,"This paper presents a simple method for carrying out inference in a wide
variety of possibly nonlinear IV models under weak assumptions. The method is
non-asymptotic in the sense that it provides a finite sample bound on the
difference between the true and nominal probabilities of rejecting a correct
null hypothesis. The method is a non-Studentized version of the Anderson-Rubin
test but is motivated and analyzed differently. In contrast to the conventional
Anderson-Rubin test, the method proposed here does not require restrictive
distributional assumptions, linearity of the estimated model, or simultaneous
equations. Nor does it require knowledge of whether the instruments are strong
or weak. It does not require testing or estimating the strength of the
instruments. The method can be applied to quantile IV models that may be
nonlinear and can be used to test a parametric IV model against a nonparametric
alternative. The results presented here hold in finite samples, regardless of
the strength of the instruments.",Non-Asymptotic Inference in Instrumental Variables Estimation,2018-09-11 00:20:12,Joel L. Horowitz,"http://arxiv.org/abs/1809.03600v1, http://arxiv.org/pdf/1809.03600v1",econ.EM
30967,em,"Many causal and structural effects depend on regressions. Examples include
policy effects, average derivatives, regression decompositions, average
treatment effects, causal mediation, and parameters of economic structural
models. The regressions may be high dimensional, making machine learning
useful. Plugging machine learners into identifying equations can lead to poor
inference due to bias from regularization and/or model selection. This paper
gives automatic debiasing for linear and nonlinear functions of regressions.
The debiasing is automatic in using Lasso and the function of interest without
the full form of the bias correction. The debiasing can be applied to any
regression learner, including neural nets, random forests, Lasso, boosting, and
other high dimensional methods. In addition to providing the bias correction we
give standard errors that are robust to misspecification, convergence rates for
the bias correction, and primitive conditions for asymptotic inference for
estimators of a variety of estimators of structural and causal effects. The
automatic debiased machine learning is used to estimate the average treatment
effect on the treated for the NSW job training data and to estimate demand
elasticities from Nielsen scanner data while allowing preferences to be
correlated with prices and income.",Automatic Debiased Machine Learning of Causal and Structural Effects,2018-09-14 05:24:07,"Victor Chernozhukov, Whitney K Newey, Rahul Singh","http://arxiv.org/abs/1809.05224v5, http://arxiv.org/pdf/1809.05224v5",math.ST
30968,em,"In this paper, we are interested in evaluating the resilience of financial
portfolios under extreme economic conditions. Therefore, we use empirical
measures to characterize the transmission process of macroeconomic shocks to
risk parameters. We propose the use of an extensive family of models, called
General Transfer Function Models, which condense well the characteristics of
the transmission described by the impact measures. The procedure for estimating
the parameters of these models is described employing the Bayesian approach and
using the prior information provided by the impact measures. In addition, we
illustrate the use of the estimated models with credit risk data from a
portfolio.",Transmission of Macroeconomic Shocks to Risk Parameters: Their uses in Stress Testing,2018-09-19 23:36:02,"Helder Rojas, David Dias","http://arxiv.org/abs/1809.07401v3, http://arxiv.org/pdf/1809.07401v3",stat.AP
30969,em,"We consider two recent suggestions for how to perform an empirically
motivated Monte Carlo study to help select a treatment effect estimator under
unconfoundedness. We show theoretically that neither is likely to be
informative except under restrictive conditions that are unlikely to be
satisfied in many contexts. To test empirical relevance, we also apply the
approaches to a real-world setting where estimator performance is known. Both
approaches are worse than random at selecting estimators which minimise
absolute bias. They are better when selecting estimators that minimise mean
squared error. However, using a simple bootstrap is at least as good and often
better. For now researchers would be best advised to use a range of estimators
and compare estimates for robustness.",Mostly Harmless Simulations? Using Monte Carlo Studies for Estimator Selection,2018-09-25 17:54:11,"Arun Advani, Toru Kitagawa, Tymon Słoczyński","http://arxiv.org/abs/1809.09527v2, http://arxiv.org/pdf/1809.09527v2",econ.EM
30970,em,"This paper proposes new estimators for the propensity score that aim to
maximize the covariate distribution balance among different treatment groups.
Heuristically, our proposed procedure attempts to estimate a propensity score
model by making the underlying covariate distribution of different treatment
groups as close to each other as possible. Our estimators are data-driven, do
not rely on tuning parameters such as bandwidths, admit an asymptotic linear
representation, and can be used to estimate different treatment effect
parameters under different identifying assumptions, including unconfoundedness
and local treatment effects. We derive the asymptotic properties of inverse
probability weighted estimators for the average, distributional, and quantile
treatment effects based on the proposed propensity score estimator and
illustrate their finite sample performance via Monte Carlo simulations and two
empirical applications.",Covariate Distribution Balance via Propensity Scores,2018-10-02 20:00:13,"Pedro H. C. Sant'Anna, Xiaojun Song, Qi Xu","http://arxiv.org/abs/1810.01370v4, http://arxiv.org/pdf/1810.01370v4",econ.EM
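A toy version of the balancing idea described above: instead of maximum likelihood, logistic propensity-score coefficients are chosen to minimize a covariate imbalance criterion between inverse-probability-weighted treatment groups. This sketch only balances covariate means, whereas the paper targets whole covariate distributions; the data and criterion are illustrative assumptions, not the authors' estimator.

```python
# Illustrative sketch: fit a logistic propensity score by minimizing the squared
# difference between IPW-weighted covariate means of treated and control units.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, p = 4000, 3
X = rng.normal(size=(n, p))
true_ps = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
D = rng.binomial(1, true_ps)
Xc = np.column_stack([np.ones(n), X])                 # include an intercept

def imbalance(beta):
    e = 1 / (1 + np.exp(-Xc @ beta))
    w1, w0 = D / e, (1 - D) / (1 - e)
    diff = (X * w1[:, None]).sum(0) / w1.sum() - (X * w0[:, None]).sum(0) / w0.sum()
    return (diff ** 2).sum()

res = minimize(imbalance, x0=np.zeros(p + 1), method="Nelder-Mead")
print("balance criterion at optimum:", round(res.fun, 6))
```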
30971,em,"Applied work often studies the effect of a binary variable (""treatment"")
using linear models with additive effects. I study the interpretation of the
OLS estimands in such models when treatment effects are heterogeneous. I show
that the treatment coefficient is a convex combination of two parameters, which
under certain conditions can be interpreted as the average treatment effects on
the treated and untreated. The weights on these parameters are inversely
related to the proportion of observations in each group. Reliance on these
implicit weights can have serious consequences for applied work, as I
illustrate with two well-known applications. I develop simple diagnostic tools
that empirical researchers can use to avoid potential biases. Software for
implementing these methods is available in R and Stata. In an important special
case, my diagnostics only require the knowledge of the proportion of treated
units.",Interpreting OLS Estimands When Treatment Effects Are Heterogeneous: Smaller Groups Get Larger Weights,2018-10-03 07:01:27,Tymon Słoczyński,"http://arxiv.org/abs/1810.01576v3, http://arxiv.org/pdf/1810.01576v3",econ.EM
30972,em,"Proxies for regulatory reforms based on categorical variables are
increasingly used in empirical evaluation models. We surveyed 63 studies that
rely on such indices to analyze the effects of entry liberalization,
privatization, unbundling, and independent regulation of the electricity,
natural gas, and telecommunications sectors. We highlight methodological issues
related to the use of these proxies. Next, taking stock of the literature, we
provide practical advice for the design of the empirical strategy and discuss
the selection of control and instrumental variables to attenuate endogeneity
problems undermining identification of the effects of regulatory reforms.",Evaluating regulatory reform of network industries: a survey of empirical models based on categorical proxies,2018-10-08 12:49:57,"Andrea Bastianin, Paolo Castelnovo, Massimo Florio","http://dx.doi.org/10.1016/j.jup.2018.09.012, http://arxiv.org/abs/1810.03348v1, http://arxiv.org/pdf/1810.03348v1",econ.GN
30989,em,"Randomized control trials (RCTs) are the gold standard for estimating causal
effects, but often use samples that are non-representative of the actual
population of interest. We propose a reweighting method for estimating
population average treatment effects in settings with noncompliance.
Simulations show the proposed compliance-adjusted population estimator
outperforms its unadjusted counterpart when compliance is relatively low and
can be predicted by observed covariates. We apply the method to evaluate the
effect of Medicaid coverage on health care use for a target population of
adults who may benefit from expansions to the Medicaid program. We draw RCT
data from the Oregon Health Insurance Experiment, where less than one-third of
those randomly selected to receive Medicaid benefits actually enrolled.",Estimating population average treatment effects from experiments with noncompliance,2019-01-10 04:35:38,"Kellie Ottoboni, Jason Poulos","http://dx.doi.org/10.1515/jci-2018-0035, http://arxiv.org/abs/1901.02991v3, http://arxiv.org/pdf/1901.02991v3",stat.ME
30973,em,"In many settings, a decision-maker wishes to learn a rule, or policy, that
maps from observable characteristics of an individual to an action. Examples
include selecting offers, prices, advertisements, or emails to send to
consumers, as well as the problem of determining which medication to prescribe
to a patient. While there is a growing body of literature devoted to this
problem, most existing results are focused on the case where data comes from a
randomized experiment, and further, there are only two possible actions, such
as giving a drug to a patient or not. In this paper, we study the offline
multi-action policy learning problem with observational data, where the
policy may need to respect budget constraints or belong to a restricted policy
class such as decision trees. We build on the theory of efficient
semi-parametric inference in order to propose and implement a policy learning
algorithm that achieves asymptotically minimax-optimal regret. To the best of
our knowledge, this is the first result of this type in the multi-action setup,
and it provides a substantial performance improvement over the existing
learning algorithms. We then consider additional computational challenges that
arise in implementing our method for the case where the policy is restricted to
take the form of a decision tree. We propose two different approaches, one
using a mixed integer program formulation and the other using a tree-search
based algorithm.",Offline Multi-Action Policy Learning: Generalization and Optimization,2018-10-11 02:34:37,"Zhengyuan Zhou, Susan Athey, Stefan Wager","http://arxiv.org/abs/1810.04778v2, http://arxiv.org/pdf/1810.04778v2",stat.ML
30974,em,"We develop a cross-sectional research design to identify causal effects in
the presence of unobservable heterogeneity without instruments. When units are
dense in physical space, it may be sufficient to regress the ""spatial first
differences"" (SFD) of the outcome on the treatment and omit all covariates. The
identifying assumptions of SFD are similar in mathematical structure and
plausibility to other quasi-experimental designs. We use SFD to obtain new
estimates for the effects of time-invariant geographic factors, soil and
climate, on long-run agricultural productivities --- relationships crucial for
economic decisions, such as land management and climate policy, but notoriously
confounded by unobservables.",Accounting for Unobservable Heterogeneity in Cross Section Using Spatial First Differences,2018-10-16 21:19:38,"Hannah Druckenmiller, Solomon Hsiang","http://arxiv.org/abs/1810.07216v2, http://arxiv.org/pdf/1810.07216v2",econ.EM
30975,em,"This paper studies the inference problem in quantile regression (QR) for a
large sample size $n$ but under a limited memory constraint, where the memory
can only store a small batch of data of size $m$. A natural method is the
na\""ive divide-and-conquer approach, which splits data into batches of size
$m$, computes the local QR estimator for each batch, and then aggregates the
estimators via averaging. However, this method only works when $n=o(m^2)$ and
is computationally expensive. This paper proposes a computationally efficient
method, which only requires an initial QR estimator on a small batch of data
and then successively refines the estimator via multiple rounds of
aggregations. Theoretically, as long as $n$ grows polynomially in $m$, we
establish the asymptotic normality for the obtained estimator and show that our
estimator with only a few rounds of aggregations achieves the same efficiency
as the QR estimator computed on all the data. Moreover, our result allows the
case that the dimensionality $p$ goes to infinity. The proposed method can also
be applied to address the QR problem in a distributed computing environment
(e.g., in a large-scale sensor network) or for real-time streaming data.",Quantile Regression Under Memory Constraint,2018-10-18 23:03:51,"Xi Chen, Weidong Liu, Yichen Zhang","http://dx.doi.org/10.1214/18-AOS1777, http://arxiv.org/abs/1810.08264v1, http://arxiv.org/pdf/1810.08264v1",stat.ME
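A minimal sketch of the naive divide-and-conquer baseline described in this abstract (not the paper's refined multi-round aggregation estimator), assuming statsmodels' QuantReg for the per-batch fits and simple averaging of the batch estimates on synthetic data.

```python
# Naive divide-and-conquer quantile regression: split the data into batches of
# size m, fit a local QR estimator on each batch, and average the coefficients.
import numpy as np
import statsmodels.api as sm

def dc_quantile_regression(X, y, m, tau=0.5):
    """Average per-batch QR estimates; X should already include a constant."""
    n = len(y)
    betas = []
    for start in range(0, n, m):
        Xb, yb = X[start:start + m], y[start:start + m]
        fit = sm.QuantReg(yb, Xb).fit(q=tau)
        betas.append(fit.params)
    return np.mean(betas, axis=0)

rng = np.random.default_rng(1)
n, m = 10_000, 1_000
x = rng.normal(size=(n, 2))
y = 1.0 + x @ np.array([2.0, -1.0]) + rng.standard_t(df=5, size=n)
X = sm.add_constant(x)
print(dc_quantile_regression(X, y, m, tau=0.5))
```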
30976,em,"Graph-based techniques emerged as a choice to deal with the dimensionality
issues in modeling multivariate time series. However, there is yet no complete
understanding of how the underlying structure could be exploited to ease this
task. This work provides contributions in this direction by considering the
forecasting of a process evolving over a graph. We make use of the
(approximate) time-vertex stationarity assumption, i.e., time-varying graph
signals whose first and second order statistical moments are invariant over
time and correlated to a known graph topology. The latter is combined with VAR
and VARMA models to tackle the dimensionality issues present in predicting the
temporal evolution of multivariate time series. We find that by projecting
the data to the graph spectral domain: (i) the multivariate model estimation
reduces to that of fitting a number of uncorrelated univariate ARMA models and
(ii) an optimal low-rank data representation can be exploited so as to further
reduce the estimation costs. In the case that the multivariate process can be
observed at a subset of nodes, the proposed models extend naturally to Kalman
filtering on graphs allowing for optimal tracking. Numerical experiments with
both synthetic and real data validate the proposed approach and highlight its
benefits over state-of-the-art alternatives.",Forecasting Time Series with VARMA Recursions on Graphs,2018-10-19 19:51:11,"Elvin Isufi, Andreas Loukas, Nathanael Perraudin, Geert Leus","http://dx.doi.org/10.1109/TSP.2019.2929930, http://arxiv.org/abs/1810.08581v2, http://arxiv.org/pdf/1810.08581v2",eess.SP
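A minimal sketch of the projection idea described in this abstract: map a time-varying graph signal to a graph spectral domain and fit one univariate ARMA model per graph frequency. The Laplacian eigenbasis, the ARMA(1,1) order, and the synthetic data are illustrative assumptions rather than the paper's exact joint VARMA specification.

```python
# Project a multivariate (graph) time series onto the Laplacian eigenbasis and
# fit independent univariate ARMA models in the spectral domain.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
N, T = 5, 200                              # nodes, time steps
A = rng.binomial(1, 0.5, size=(N, N))
A = np.triu(A, 1); A = A + A.T             # symmetric adjacency, no self-loops
L = np.diag(A.sum(axis=1)) - A             # combinatorial graph Laplacian
_, U = np.linalg.eigh(L)                   # graph Fourier basis (columns of U)

X = rng.normal(size=(T, N))                # toy graph signal: T x N
X_hat = X @ U                              # project each snapshot to the spectral domain

forecasts_spectral = []
for k in range(N):                         # one univariate ARMA per graph frequency
    fit = ARIMA(X_hat[:, k], order=(1, 0, 1)).fit()
    forecasts_spectral.append(fit.forecast(steps=1)[0])

x_next = U @ np.array(forecasts_spectral)  # map the forecast back to the vertex domain
print(x_next)
```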
30977,em,"We derive properties of the cdf of random variables defined as saddle-type
points of real valued continuous stochastic processes. This facilitates the
derivation of the first-order asymptotic properties of tests for stochastic
spanning given some stochastic dominance relation. We define the concept of
Markowitz stochastic dominance spanning, and develop an analytical
representation of the spanning property. We construct a non-parametric test for
spanning based on subsampling, and derive its asymptotic exactness and
consistency. The spanning methodology determines whether introducing new
securities or relaxing investment constraints improves the investment
opportunity set of investors driven by Markowitz stochastic dominance. In an
application to standard data sets of historical stock market returns, we reject
market portfolio Markowitz efficiency as well as two-fund separation. Hence, we
find evidence that equity management through base assets can outperform the
market, for investors with Markowitz type preferences.",Spanning Tests for Markowitz Stochastic Dominance,2018-10-25 12:21:59,"Stelios Arvanitis, Olivier Scaillet, Nikolas Topaloglou","http://arxiv.org/abs/1810.10800v1, http://arxiv.org/pdf/1810.10800v1",q-fin.ST
31052,em,"Empirical economic research frequently applies maximum likelihood estimation
in cases where the likelihood function is analytically intractable. Most of the
theoretical literature focuses on maximum simulated likelihood (MSL)
estimators, while empirical and simulation analyses often find that alternative
approximation methods such as quasi-Monte Carlo simulation, Gaussian
quadrature, and integration on sparse grids behave considerably better
numerically. This paper generalizes the theoretical results widely known for
MSL estimators to a general set of maximum approximated likelihood (MAL)
estimators. We provide general conditions for both the model and the
approximation approach to ensure consistency and asymptotic normality. We also
show specific examples and finite-sample simulation results.",Maximum Approximated Likelihood Estimation,2019-08-12 15:22:52,"Michael Griebel, Florian Heiss, Jens Oettershagen, Constantin Weiser","http://arxiv.org/abs/1908.04110v1, http://arxiv.org/pdf/1908.04110v1",econ.EM
30978,em,"Inverse Probability Weighting (IPW) is widely used in empirical work in
economics and other disciplines. As Gaussian approximations perform poorly in
the presence of ""small denominators,"" trimming is routinely employed as a
regularization strategy. However, ad hoc trimming of the observations renders
usual inference procedures invalid for the target estimand, even in large
samples. In this paper, we first show that the IPW estimator can have different
(Gaussian or non-Gaussian) asymptotic distributions, depending on how ""close to
zero"" the probability weights are and on how large the trimming threshold is.
As a remedy, we propose an inference procedure that is robust not only to small
probability weights entering the IPW estimator but also to a wide range of
trimming threshold choices, by adapting to these different asymptotic
distributions. This robustness is achieved by employing resampling techniques
and by correcting a non-negligible trimming bias. We also propose an
easy-to-implement method for choosing the trimming threshold by minimizing an
empirical analogue of the asymptotic mean squared error. In addition, we show
that our inference procedure remains valid with the use of a data-driven
trimming threshold. We illustrate our method by revisiting a dataset from the
National Supported Work program.",Robust Inference Using Inverse Probability Weighting,2018-10-26 18:47:29,"Xinwei Ma, Jingshen Wang","http://arxiv.org/abs/1810.11397v2, http://arxiv.org/pdf/1810.11397v2",econ.EM
30979,em,"In this paper, we study the dynamic assortment optimization problem under a
finite selling season of length $T$. At each time period, the seller offers an
arriving customer an assortment of substitutable products under a cardinality
constraint, and the customer makes the purchase among offered products
according to a discrete choice model. Most existing work associates each
product with a real-valued fixed mean utility and assumes a multinomial logit
choice (MNL) model. In many practical applications, feature/contextual
information of products is readily available. In this paper, we incorporate the
feature information by assuming a linear relationship between the mean utility
and the feature. In addition, we allow the feature information of products to
change over time so that the underlying choice model can also be
non-stationary. To solve the dynamic assortment optimization under this
changing contextual MNL model, we need to simultaneously learn the underlying
unknown coefficient and make decisions on the assortment. To this end, we
develop an upper confidence bound (UCB) based policy and establish the regret
bound on the order of $\widetilde O(d\sqrt{T})$, where $d$ is the dimension of
the feature and $\widetilde O$ suppresses logarithmic dependence. We further
establish a lower bound of $\Omega(d\sqrt{T}/K)$, where $K$ is the cardinality
constraint of an offered assortment, which is usually small. When $K$ is a
constant, our policy is optimal up to logarithmic factors. In the exploitation
phase of the UCB algorithm, we need to solve a combinatorial optimization for
assortment optimization based on the learned information. We further develop an
approximation algorithm and an efficient greedy heuristic. The effectiveness of
the proposed policy is further demonstrated by our numerical studies.",Dynamic Assortment Optimization with Changing Contextual Information,2018-10-31 04:52:59,"Xi Chen, Yining Wang, Yuan Zhou","http://arxiv.org/abs/1810.13069v2, http://arxiv.org/pdf/1810.13069v2",econ.EM
30980,em,"This paper develops a general framework for conducting inference on the rank
of an unknown matrix $\Pi_0$. A defining feature of our setup is the null
hypothesis of the form $\mathrm H_0: \mathrm{rank}(\Pi_0)\le r$. The problem is
of first order importance because the previous literature focuses on $\mathrm
H_0': \mathrm{rank}(\Pi_0)= r$ by implicitly assuming away
$\mathrm{rank}(\Pi_0)<r$, which may lead to invalid rank tests due to
over-rejections. In particular, we show that limiting distributions of test
statistics under $\mathrm H_0'$ may not stochastically dominate those under
$\mathrm{rank}(\Pi_0)<r$. A multiple test on the nulls
$\mathrm{rank}(\Pi_0)=0,\ldots,r$, though valid, may be substantially
conservative. We employ a test statistic whose limiting distributions under
$\mathrm H_0$ are highly nonstandard due to the inherently irregular nature of
the problem, and then construct bootstrap critical values that deliver size
control and improved power. Since our procedure relies on a tuning parameter, a
two-step procedure is designed to mitigate concerns on this nuisance. We
additionally argue that our setup is also important for estimation. We
illustrate the empirical relevance of our results through testing
identification in linear IV models that allows for clustered data and inference
on sorting dimensions in a two-sided matching model with transferable utility.",Improved Inference on the Rank of a Matrix,2018-12-06 06:46:11,"Qihui Chen, Zheng Fang","http://arxiv.org/abs/1812.02337v2, http://arxiv.org/pdf/1812.02337v2",econ.EM
30981,em,"We study the qualitative and quantitative appearance of stylized facts in
several agent-based computational economic market (ABCEM) models. We perform
our simulations with the SABCEMM (Simulator for Agent-Based Computational
Economic Market Models) tool recently introduced by the authors (Trimborn et
al. 2019). Furthermore, we present novel ABCEM models created by recombining
existing models and study them with respect to stylized facts as well. This can
be efficiently performed by the SABCEMM tool thanks to its object-oriented
software design. The code is available on GitHub (Trimborn et al. 2018), such
that all results can be reproduced by the reader.",Simulation of Stylized Facts in Agent-Based Computational Economic Market Models,2018-11-28 01:07:46,"Maximilian Beikirch, Simon Cramer, Martin Frank, Philipp Otte, Emma Pabich, Torsten Trimborn","http://arxiv.org/abs/1812.02726v2, http://arxiv.org/pdf/1812.02726v2",econ.GN
30982,em,"In 2016, the majority of full-time employed women in the U.S. earned
significantly less than comparable men. The extent to which women were affected
by gender inequality in earnings, however, depended greatly on socio-economic
characteristics, such as marital status or educational attainment. In this
paper, we analyzed data from the 2016 American Community Survey using a
high-dimensional wage regression and applying double lasso to quantify
heterogeneity in the gender wage gap. We found that the gap varied
substantially across women and was driven primarily by marital status, having
children at home, race, occupation, industry, and educational attainment. We
recommend that policy makers use these insights to design policies that will
reduce discrimination and unequal pay more effectively.",Closing the U.S. gender wage gap requires understanding its heterogeneity,2018-12-11 15:05:26,"Philipp Bach, Victor Chernozhukov, Martin Spindler","http://arxiv.org/abs/1812.04345v2, http://arxiv.org/pdf/1812.04345v2",econ.EM
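A minimal sketch of a double-lasso (partialling-out) estimate of a single coefficient of interest with many controls, in the spirit of the approach described in this abstract; the synthetic data and LassoCV settings are illustrative only and do not reproduce the paper's specification or the ACS data.

```python
# Double lasso via partialling out: lasso y on controls, lasso d on controls,
# then regress the y-residuals on the d-residuals to recover the coefficient.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p = 2_000, 200
W = rng.normal(size=(n, p))                                 # high-dimensional controls
d = (W[:, 0] + rng.normal(size=n) > 0).astype(float)        # binary regressor of interest
y = 0.8 * d + W[:, :5] @ np.ones(5) + rng.normal(size=n)    # outcome, true coefficient 0.8

ry = y - LassoCV(cv=5).fit(W, y).predict(W)                 # residualize outcome
rd = d - LassoCV(cv=5).fit(W, d).predict(W)                 # residualize regressor
theta = (rd @ ry) / (rd @ rd)                               # final partialled-out coefficient
print(f"estimated gap coefficient: {theta:.3f}")
```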
30990,em,"We study the effectiveness of a community-level information intervention
aimed at improving sanitation using a cluster-randomized controlled trial (RCT)
in Nigerian communities. The intervention, Community-Led Total Sanitation
(CLTS), is currently part of national sanitation policy in more than 25
countries. While average impacts are exiguous almost three years after
implementation at scale, the results hide important heterogeneity: the
intervention has strong and lasting effects on sanitation practices in poorer
communities. These are realized through increased sanitation investments. We
show that community wealth, widely available in secondary data, is a key
statistic for effective intervention targeting. Using data from five other
similar randomized interventions in various contexts, we find that
community-level wealth heterogeneity can rationalize the wide range of impact
estimates in the literature. This exercise provides plausible external validity
to our findings, with implications for intervention scale-up. JEL Codes: O12,
I12, I15, I18.",Community Matters: Heterogeneous Impacts of a Sanitation Intervention,2019-01-11 13:39:38,"Laura Abramovsky, Britta Augsburg, Melanie Lührmann, Francisco Oteiza, Juan Pablo Rud","http://arxiv.org/abs/1901.03544v5, http://arxiv.org/pdf/1901.03544v5",econ.GN
30983,em,"Statistical and multiscaling characteristics of WTI Crude Oil prices
expressed in US dollar in relation to the most traded currencies as well as to
gold futures and to the E-mini S$\&$P500 futures prices on 5 min intra-day
recordings in the period January 2012 - December 2017 are studied. It is shown
that in most of the cases the tails of return distributions of the considered
financial instruments follow the inverse cubic power law. The only exception is
the Russian ruble for which the distribution tail is heavier and scales with
the exponent close to 2. From the perspective of multiscaling the analysed time
series reveal the multifractal organization with the left-sided asymmetry of
the corresponding singularity spectra. Moreover, all the considered financial
instruments appear to be multifractally cross-correlated with oil, especially
at the level of medium-size fluctuations, as shown by the multifractal
cross-correlation analysis (MFCCA) and the detrended cross-correlation
coefficient $\rho_q$. The degree of such cross-correlations, however, varies
among the financial instruments. The strongest ties to oil characterize the
currencies of oil-extracting countries. The strength of this multifractal
coupling also appears to depend on the
oil market trend. In the analysed time period the level of cross-correlations
systematically increases during the bear phase on the oil market and it
saturates after the trend reversal in the first half of 2016. The same methodology is
also applied to identify possible causal relations between considered
observables. Searching for some related asymmetry in the information flow
mediating cross-correlations indicates that it was the oil price that led the
Russian ruble over the time period here considered rather than vice versa.",Multifractal cross-correlations between the World Oil and other Financial Markets in 2012-2017,2018-12-20 16:20:55,"Marcin Wątorek, Stanisław Drożdż, Paweł Oświȩcimka, Marek Stanuszek","http://dx.doi.org/10.1016/j.eneco.2019.05.015, http://arxiv.org/abs/1812.08548v2, http://arxiv.org/pdf/1812.08548v2",q-fin.ST
30984,em,"In the following paper, we analyse the ID$_3$-Price in the German Intraday
Continuous electricity market using an econometric time series model. A
multivariate approach is conducted for hourly and quarter-hourly products
separately. We estimate the model using lasso and elastic net techniques and
perform an out-of-sample, very short-term forecasting study. The model's
performance is compared with benchmark models and is discussed in detail.
Forecasting results provide new insights to the German Intraday Continuous
electricity market regarding its efficiency and to the ID$_3$-Price behaviour.",Econometric modelling and forecasting of intraday electricity prices,2018-12-21 15:36:07,"Michał Narajewski, Florian Ziel","http://dx.doi.org/10.1016/j.jcomm.2019.100107, http://arxiv.org/abs/1812.09081v2, http://arxiv.org/pdf/1812.09081v2",q-fin.ST
30985,em,"We consider the problem of a firm seeking to use personalized pricing to sell
an exogenously given stock of a product over a finite selling horizon to
different consumer types. We assume that the type of an arriving consumer can
be observed but the demand function associated with each type is initially
unknown. The firm sets personalized prices dynamically for each type and
attempts to maximize the revenue over the season. We provide a learning
algorithm that is near-optimal when the demand and capacity scale in
proportion. The algorithm utilizes the primal-dual formulation of the problem
and learns the dual optimal solution explicitly. It allows the algorithm to
overcome the curse of dimensionality (the rate of regret is independent of the
number of types) and sheds light on novel algorithmic designs for learning
problems with resource constraints.",A Primal-dual Learning Algorithm for Personalized Dynamic Pricing with an Inventory Constraint,2018-12-20 09:15:16,"Ningyuan Chen, Guillermo Gallego","http://arxiv.org/abs/1812.09234v3, http://arxiv.org/pdf/1812.09234v3",cs.LG
30986,em,"In testing for correlation of the errors in regression models the power of
tests can be very low for strongly correlated errors. This counterintuitive
phenomenon has become known as the ""zero-power trap"". Despite a considerable
amount of literature devoted to this problem, mainly focusing on its detection,
a convincing solution has not yet been found. In this article we first discuss
theoretical results concerning the occurrence of the zero-power trap
phenomenon. Then, we suggest and compare three ways to avoid it. Given an
initial test that suffers from the zero-power trap, the method we recommend for
practice leads to a modified test whose power converges to one as the
correlation gets very strong. Furthermore, the modified test has approximately
the same power function as the initial test, and thus approximately preserves
all of its optimality properties. We also provide some numerical illustrations
in the context of testing for network generated correlation.",How to avoid the zero-power trap in testing for correlation,2018-12-27 19:10:19,David Preinerstorfer,"http://dx.doi.org/10.1017/S0266466621000062, http://arxiv.org/abs/1812.10752v1, http://arxiv.org/pdf/1812.10752v1",math.ST
30987,em,"This paper discusses difference-in-differences (DID) estimation when there
exist many control variables, potentially more than the sample size. In this
case, traditional estimation methods, which require a limited number of
variables, do not work. One may consider using statistical or machine learning
(ML) methods. However, by the well-known theory of inference of ML methods
proposed in Chernozhukov et al. (2018), directly applying ML methods to the
conventional semiparametric DID estimators will cause significant bias and make
these DID estimators fail to be $\sqrt{N}$-consistent. This article proposes three
new DID estimators for three different data structures, which are able to
shrink the bias and achieve $\sqrt{N}$-consistency and asymptotic normality with
mean zero when applying ML methods. This leads to straightforward inferential
procedures. In addition, I show that these new estimators have the small bias
property (SBP), meaning that their bias converges to zero faster than the
pointwise bias of the nonparametric estimators on which they are based.",Semiparametric Difference-in-Differences with Potentially Many Control Variables,2018-12-28 02:31:58,Neng-Chieh Chang,"http://arxiv.org/abs/1812.10846v3, http://arxiv.org/pdf/1812.10846v3",econ.GN
30988,em,"Motivated by the application of real-time pricing in e-commerce platforms, we
consider the problem of revenue-maximization in a setting where the seller can
leverage contextual information describing the customer's history and the
product's type to predict her valuation of the product. However, her true
valuation is unobservable to the seller; only a binary outcome, in the form of
the success or failure of a transaction, is observed. Unlike in usual contextual bandit
settings, the optimal price/arm given a covariate in our setting is sensitive
to the detailed characteristics of the residual uncertainty distribution. We
develop a semi-parametric model in which the residual distribution is
non-parametric and provide the first algorithm which learns both regression
parameters and residual distribution with $\tilde O(\sqrt{n})$ regret. We
empirically test a scalable implementation of our algorithm and observe good
performance.",Semi-parametric dynamic contextual pricing,2019-01-07 23:07:16,"Virag Shah, Jose Blanchet, Ramesh Johari","http://arxiv.org/abs/1901.02045v4, http://arxiv.org/pdf/1901.02045v4",cs.LG
31066,em,"We describe and examine a test for a general class of shape constraints, such
as constraints on the signs of derivatives, U-(S-)shape, symmetry,
quasi-convexity, log-convexity, $r$-convexity, among others, in a nonparametric
framework using partial sums empirical processes. We show that, after a
suitable transformation, its asymptotic distribution is a functional of the
standard Brownian motion, so that critical values are available. However, due
to the possible poor approximation of the asymptotic critical values to the
finite sample ones, we also describe a valid bootstrap algorithm.",Testing nonparametric shape restrictions,2019-09-04 13:13:14,"Tatiana Komarova, Javier Hidalgo","http://arxiv.org/abs/1909.01675v2, http://arxiv.org/pdf/1909.01675v2",stat.ME
30991,em,"This paper presents a unified second order asymptotic framework for
conducting inference on parameters of the form $\phi(\theta_0)$, where
$\theta_0$ is unknown but can be estimated by $\hat\theta_n$, and $\phi$ is a
known map that admits null first order derivative at $\theta_0$. For a large
number of examples in the literature, the second order Delta method reveals a
nondegenerate weak limit for the plug-in estimator $\phi(\hat\theta_n)$. We
show, however, that the `standard' bootstrap is consistent if and only if the
second order derivative $\phi_{\theta_0}''=0$ under regularity conditions,
i.e., the standard bootstrap is inconsistent if $\phi_{\theta_0}''\neq 0$, and
provides degenerate limits unhelpful for inference otherwise. We thus identify
a source of bootstrap failures distinct from that in Fang and Santos (2018)
because the problem (of consistently bootstrapping a \textit{nondegenerate}
limit) persists even if $\phi$ is differentiable. We show that the correction
procedure in Babu (1984) can be extended to our general setup. Alternatively, a
modified bootstrap is proposed when the map is \textit{in addition} second
order nondifferentiable. Both are shown to provide local size control under
some conditions. As an illustration, we develop a test of common conditional
heteroskedastic (CH) features, a setting with both degeneracy and
nondifferentiability -- the latter is because the Jacobian matrix is degenerate
at zero and we allow the existence of multiple common CH features.",Inference on Functionals under First Order Degeneracy,2019-01-15 17:40:27,"Qihui Chen, Zheng Fang","http://arxiv.org/abs/1901.04861v1, http://arxiv.org/pdf/1901.04861v1",econ.EM
30992,em,"The Kalman Filter has been called one of the greatest inventions in
statistics during the 20th century. Its purpose is to measure the state of a
system by processing the noisy data received from different electronic sensors.
In comparison, a useful resource for managers in their effort to make the right
decisions is the wisdom of crowds. This phenomenon allows managers to combine
judgments by different employees to get estimates that are often more accurate
and reliable than estimates which managers produce alone. Since harnessing the
collective intelligence of employees, and filtering signals from multiple noisy
sensors appear related, we looked at the possibility of using the Kalman Filter
on estimates by people. Our predictions suggest, and our findings based on the
Survey of Professional Forecasters reveal, that the Kalman Filter can help
managers solve their decision-making problems by giving them stronger signals
before they choose. Indeed, when used on a subset of forecasters identified by
the Contribution Weighted Model, the Kalman Filter beat that rule clearly,
across all the forecasting horizons in the survey.",The Wisdom of a Kalman Crowd,2019-01-24 00:17:12,Ulrik W. Nash,"http://arxiv.org/abs/1901.08133v1, http://arxiv.org/pdf/1901.08133v1",econ.EM
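A minimal sketch of the filtering idea described in this abstract: a scalar local-level Kalman filter applied to a stream of noisy crowd forecasts. The state equation and noise variances are illustrative assumptions, not the model used in the paper.

```python
# Scalar Kalman filter for a local-level model applied to noisy crowd forecasts.
import numpy as np

def kalman_filter_forecasts(z, q=0.01, r=1.0):
    """Local-level model: state x_t = x_{t-1} + w_t, observation z_t = x_t + v_t."""
    x, p = z[0], 1.0                 # initialize at the first observation
    filtered = []
    for obs in z:
        p = p + q                    # predict step (random-walk state)
        k = p / (p + r)              # Kalman gain
        x = x + k * (obs - x)        # update with the new crowd estimate
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(4)
truth = np.cumsum(rng.normal(scale=0.1, size=50)) + 2.0
crowd = truth + rng.normal(scale=1.0, size=50)   # noisy consensus forecasts
print(kalman_filter_forecasts(crowd)[-5:])       # filtered signal for the last periods
```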
30993,em,"This work is devoted to the study of modeling geophysical and financial time
series. A class of volatility models with time-varying parameters is presented
to forecast the volatility of time series in a stationary environment. The
modeling of stationary time series with consistent properties facilitates
prediction with much certainty. Using the GARCH and stochastic volatility
models, we forecast one-step-ahead volatility with +/- 2 standard
prediction errors, estimated via Maximum Likelihood Estimation. We
compare the stochastic volatility model, which relies on a filtering technique
for the conditional volatility, with the GARCH model. We conclude that the
stochastic volatility model is a better forecasting tool than GARCH(1,1), since it
is less conditioned by autoregressive past information.",Volatility Models Applied to Geophysics and High Frequency Financial Market Data,2019-01-26 05:35:54,"Maria C Mariani, Md Al Masum Bhuiyan, Osei K Tweneboah, Hector Gonzalez-Huizar, Ionut Florescu","http://dx.doi.org/10.1016/j.physa.2018.02.167, http://arxiv.org/abs/1901.09145v1, http://arxiv.org/pdf/1901.09145v1",stat.AP
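A minimal sketch of the GARCH(1,1) baseline mentioned in this abstract, fitted by maximum likelihood with the `arch` Python package and used to produce a one-step-ahead volatility forecast; the synthetic return series is purely illustrative.

```python
# One-step-ahead GARCH(1,1) volatility forecast fitted by maximum likelihood.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
returns = rng.normal(scale=1.0, size=1_000)       # placeholder return series

am = arch_model(returns, mean="Zero", vol="GARCH", p=1, q=1)
res = am.fit(disp="off")                          # MLE of the GARCH(1,1) parameters
fcast = res.forecast(horizon=1)
sigma2_next = fcast.variance.values[-1, 0]        # one-step-ahead conditional variance
print(f"one-step-ahead volatility forecast: {np.sqrt(sigma2_next):.3f}")
```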
30994,em,"The sampling efficiency of MCMC methods in Bayesian inference for stochastic
volatility (SV) models is known to highly depend on the actual parameter
values, and the effectiveness of samplers based on different parameterizations
varies significantly. We derive novel algorithms for the centered and the
non-centered parameterizations of the practically highly relevant SV model with
leverage, where the return process and innovations of the volatility process
are allowed to correlate. Moreover, based on the idea of
ancillarity-sufficiency interweaving (ASIS), we combine the resulting samplers
in order to guarantee stable sampling efficiency irrespective of the baseline
parameterization. We carry out an extensive comparison to already existing
sampling methods for this model using simulated as well as real world data.",Approaches Toward the Bayesian Estimation of the Stochastic Volatility Model with Leverage,2019-01-31 20:43:13,"Darjus Hosszejni, Gregor Kastner","http://dx.doi.org/10.1007/978-3-030-30611-3_8, http://arxiv.org/abs/1901.11491v2, http://arxiv.org/pdf/1901.11491v2",stat.CO
30995,em,"Variational Bayes (VB) is a recent approximate method for Bayesian inference.
It has the merit of being a fast and scalable alternative to Markov Chain Monte
Carlo (MCMC) but its approximation error is often unknown. In this paper, we
derive the approximation error of VB in terms of mean, mode, variance,
predictive density and KL divergence for the linear Gaussian multi-equation
regression. Our results indicate that VB approximates the posterior mean
perfectly. Factors affecting the magnitude of underestimation in posterior
variance and mode are revealed. Importantly, we demonstrate that VB estimates
predictive densities accurately.",Approximation Properties of Variational Bayes for Vector Autoregressions,2019-03-02 06:24:16,Reza Hajargasht,"http://arxiv.org/abs/1903.00617v1, http://arxiv.org/pdf/1903.00617v1",stat.ML
30996,em,"Classical approaches to experimental design assume that intervening on one
unit does not affect other units. There are many important settings, however,
where this non-interference assumption does not hold, as when running
experiments on supply-side incentives on a ride-sharing platform or subsidies
in an energy marketplace. In this paper, we introduce a new approach to
experimental design in large-scale stochastic systems with considerable
cross-unit interference, under an assumption that the interference is
structured enough that it can be captured via mean-field modeling. Our approach
enables us to accurately estimate the effect of small changes to system
parameters by combining unobtrusive randomization with lightweight modeling,
all while remaining in equilibrium. We can then use these estimates to optimize
the system by gradient descent. Concretely, we focus on the problem of a
platform that seeks to optimize supply-side payments p in a centralized
marketplace where different suppliers interact via their effects on the overall
supply-demand equilibrium, and show that our approach enables the platform to
optimize p in large systems using vanishingly small perturbations.",Experimenting in Equilibrium,2019-03-06 03:16:43,"Stefan Wager, Kuang Xu","http://arxiv.org/abs/1903.02124v5, http://arxiv.org/pdf/1903.02124v5",math.OC
30997,em,"Universal function approximators, such as artificial neural networks, can
learn a large variety of target functions arbitrarily well given sufficient
training data. This flexibility comes at the cost of the ability to perform
parametric inference. We address this gap by proposing a generic framework
based on the Shapley-Taylor decomposition of a model. A surrogate parametric
regression analysis is performed in the space spanned by the Shapley value
expansion of a model. This allows for the testing of standard hypotheses of
interest. At the same time, the proposed approach provides novel insights into
statistical learning processes themselves derived from the consistency and bias
properties of the nonparametric estimators. We apply the framework to the
estimation of heterogeneous treatment effects in simulated and real-world
randomised experiments. We introduce an explicit treatment function based on
higher-order Shapley-Taylor indices. This can be used to identify potentially
complex treatment channels and help the generalisation of findings from
experimental settings. More generally, the presented approach allows for a
standardised use and communication of results from machine learning models.",Parametric inference with universal function approximators,2019-03-11 13:37:05,Andreas Joseph,"http://arxiv.org/abs/1903.04209v4, http://arxiv.org/pdf/1903.04209v4",stat.ML
30998,em,"This paper examines how homestead policies, which opened vast frontier lands
for settlement, influenced the development of American frontier states. It uses
a treatment propensity-weighted matrix completion model to estimate the
counterfactual size of these states without homesteading. In simulation
studies, the method shows lower bias and variance than other estimators,
particularly in higher complexity scenarios. The empirical analysis reveals
that homestead policies significantly and persistently reduced state government
expenditure and revenue. These findings align with continuous
difference-in-differences estimates using 1.46 million land patent records.
This study's extension of the matrix completion method to include propensity
score weighting for causal effect estimation in panel data, especially in
staggered treatment contexts, enhances policy evaluation by improving the
precision of long-term policy impact assessments.",State-Building through Public Land Disposal? An Application of Matrix Completion for Counterfactual Prediction,2019-03-19 17:49:22,Jason Poulos,"http://arxiv.org/abs/1903.08028v4, http://arxiv.org/pdf/1903.08028v4",econ.GN
30999,em,"We study the finite sample behavior of Lasso-based inference methods such as
post double Lasso and debiased Lasso. We show that these methods can exhibit
substantial omitted variable biases (OVBs) due to Lasso not selecting relevant
controls. This phenomenon can occur even when the coefficients are sparse and
the sample size is large and larger than the number of controls. Therefore,
relying on the existing asymptotic inference theory can be problematic in
empirical applications. We compare the Lasso-based inference methods to modern
high-dimensional OLS-based methods and provide practical guidance.",Omitted variable bias of Lasso-based inference methods: A finite sample analysis,2019-03-20 22:05:35,"Kaspar Wuthrich, Ying Zhu","http://arxiv.org/abs/1903.08704v9, http://arxiv.org/pdf/1903.08704v9",math.ST
31000,em,"We propose a new Conditional BEKK matrix-F (CBF) model for the time-varying
realized covariance (RCOV) matrices. This CBF model is capable of capturing
heavy-tailed RCOV, which is an important stylized fact but could not be handled
adequately by the Wishart-based models. To further mimic the long memory
feature of the RCOV, a special CBF model with the conditional heterogeneous
autoregressive (HAR) structure is introduced. Moreover, we give a systematic
study of the probabilistic properties and statistical inference of the CBF
model, including exploring its stationarity, establishing the asymptotics of
its maximum likelihood estimator, and giving some new inner-product-based tests
for its model checking. In order to handle a large dimensional RCOV matrix, we
construct two reduced CBF models -- the variance-target CBF model (for moderate
but fixed dimensional RCOV matrix) and the factor CBF model (for high
dimensional RCOV matrix). For both reduced models, the asymptotic theory of the
estimated parameters is derived. The importance of our entire methodology is
illustrated by simulation results and two real examples.",Time series models for realized covariance matrices based on the matrix-F distribution,2019-03-26 08:13:05,"Jiayuan Zhou, Feiyu Jiang, Ke Zhu, Wai Keung Li","http://arxiv.org/abs/1903.12077v2, http://arxiv.org/pdf/1903.12077v2",math.ST
31001,em,"Understanding the effect of a particular treatment or a policy pertains to
many areas of interest, ranging from political economics, marketing to
healthcare. In this paper, we develop a non-parametric algorithm for detecting
the effects of treatment over time in the context of Synthetic Controls. The
method builds on counterfactual predictions from many algorithms without
necessarily assuming that the algorithms correctly capture the model. We
introduce an inferential procedure for detecting treatment effects and show
that the testing procedure is asymptotically valid for stationary, beta mixing
processes without imposing any restriction on the set of base algorithms under
consideration. We discuss consistency guarantees for average treatment effect
estimates and derive regret bounds for the proposed methodology. The class of
algorithms may include Random Forest, Lasso, or any other machine-learning
estimator. Numerical studies and an application illustrate the advantages of
the method.",Synthetic learner: model-free inference on treatments over time,2019-04-02 18:28:21,"Davide Viviano, Jelena Bradic","http://arxiv.org/abs/1904.01490v2, http://arxiv.org/pdf/1904.01490v2",stat.ME
31002,em,"Variational Bayes (VB) methods have emerged as a fast and
computationally-efficient alternative to Markov chain Monte Carlo (MCMC)
methods for scalable Bayesian estimation of mixed multinomial logit (MMNL)
models. It has been established that VB is substantially faster than MCMC with
practically no compromise in predictive accuracy. In this paper, we address
two critical gaps concerning the usage and understanding of VB for MMNL. First,
extant VB methods are limited to utility specifications involving only
individual-specific taste parameters. Second, the finite-sample properties of
VB estimators and the relative performance of VB, MCMC and maximum simulated
likelihood estimation (MSLE) are not known. To address the former, this study
extends several VB methods for MMNL to admit utility specifications including
both fixed and random utility parameters. To address the latter, we conduct an
extensive simulation-based evaluation to benchmark the extended VB methods
against MCMC and MSLE in terms of estimation times, parameter recovery and
predictive accuracy. The results suggest that all VB variants with the
exception of the ones relying on an alternative variational lower bound
constructed with the help of the modified Jensen's inequality perform as well
as MCMC and MSLE at prediction and parameter recovery. In particular, VB with
nonconjugate variational message passing and the delta-method (VB-NCVMP-Delta)
is up to 16 times faster than MCMC and MSLE. Thus, VB-NCVMP-Delta can be an
attractive alternative to MCMC and MSLE for fast, scalable and accurate
estimation of MMNL models.",Bayesian Estimation of Mixed Multinomial Logit Models: Advances and Simulation-Based Evaluations,2019-04-07 16:03:56,"Prateek Bansal, Rico Krueger, Michel Bierlaire, Ricardo A. Daziano, Taha H. Rashidi","http://dx.doi.org/10.1016/j.trb.2019.12.001, http://arxiv.org/abs/1904.03647v4, http://arxiv.org/pdf/1904.03647v4",stat.ML
31003,em,"We develop a novel asymptotic theory for local polynomial (quasi-)
maximum-likelihood estimators of time-varying parameters in a broad class of
nonlinear time series models. Under weak regularity conditions, we show the
proposed estimators are consistent and follow normal distributions in large
samples. Our conditions impose weaker smoothness and moment conditions on the
data-generating process and its likelihood compared to existing theories.
Furthermore, the bias terms of the estimators take a simpler form. We
demonstrate the usefulness of our general results by applying our theory to
local (quasi-)maximum-likelihood estimators of time-varying VAR, ARCH, and
GARCH models, and of Poisson autoregressions. For the first three models, we are able to
substantially weaken the conditions found in the existing literature. For the
Poisson autoregression, existing theories cannot be applied, while our novel
approach allows us to analyze it.",Local Polynomial Estimation of Time-Varying Parameters in Nonlinear Models,2019-04-10 17:26:16,"Dennis Kristensen, Young Jun Lee","http://arxiv.org/abs/1904.05209v2, http://arxiv.org/pdf/1904.05209v2",econ.EM
31004,em,"We propose to combine smoothing, simulations and sieve approximations to
solve for either the integrated or expected value function in a general class
of dynamic discrete choice (DDC) models. We use importance sampling to
approximate the Bellman operators defining the two functions. The random
Bellman operators, and therefore also the corresponding solutions, are
generally non-smooth which is undesirable. To circumvent this issue, we
introduce a smoothed version of the random Bellman operator and solve for the
corresponding smoothed value function using sieve methods. We show that one can
avoid using sieves by generalizing and adapting the `self-approximating' method
of Rust (1997) to our setting. We provide an asymptotic theory for the
approximate solutions and show that they converge at a $\sqrt{N}$-rate, where $N$
is the number of Monte Carlo draws, towards Gaussian processes. We examine their
performance in practice through a set of numerical experiments and find that
both methods perform well with the sieve method being particularly attractive
in terms of computational speed and accuracy.",Solving Dynamic Discrete Choice Models Using Smoothing and Sieve Methods,2019-04-10 18:05:22,"Dennis Kristensen, Patrick K. Mogensen, Jong Myun Moon, Bertel Schjerning","http://arxiv.org/abs/1904.05232v2, http://arxiv.org/pdf/1904.05232v2",econ.EM
31005,em,"In this paper we discuss how the notion of subgeometric ergodicity in Markov
chain theory can be exploited to study stationarity and ergodicity of nonlinear
time series models. Subgeometric ergodicity means that the transition
probability measures converge to the stationary measure at a rate slower than
geometric. Specifically, we consider suitably defined higher-order nonlinear
autoregressions that behave similarly to a unit root process for large values
of the observed series but we place almost no restrictions on their dynamics
for moderate values of the observed series. Results on the subgeometric
ergodicity of nonlinear autoregressions have previously appeared only in the
first-order case. We provide an extension to the higher-order case and show
that the autoregressions we consider are, under appropriate conditions,
subgeometrically ergodic. As useful implications we also obtain stationarity
and $\beta$-mixing with subgeometrically decaying mixing coefficients.",Subgeometrically ergodic autoregressions,2019-04-15 17:50:46,"Mika Meitz, Pentti Saikkonen","http://dx.doi.org/10.1017/S0266466620000419, http://arxiv.org/abs/1904.07089v3, http://arxiv.org/pdf/1904.07089v3",econ.EM
31006,em,"It is well known that stationary geometrically ergodic Markov chains are
$\beta$-mixing (absolutely regular) with geometrically decaying mixing
coefficients. Furthermore, for initial distributions other than the stationary
one, geometric ergodicity implies $\beta$-mixing under suitable moment
assumptions. In this note we show that similar results hold also for
subgeometrically ergodic Markov chains. In particular, for both stationary and
other initial distributions, subgeometric ergodicity implies $\beta$-mixing
with subgeometrically decaying mixing coefficients. Although this result is
simple it should prove very useful in obtaining rates of mixing in situations
where geometric ergodicity can not be established. To illustrate our results we
derive new subgeometric ergodicity and $\beta$-mixing results for the
self-exciting threshold autoregressive model.",Subgeometric ergodicity and $β$-mixing,2019-04-15 18:11:01,"Mika Meitz, Pentti Saikkonen","http://arxiv.org/abs/1904.07103v2, http://arxiv.org/pdf/1904.07103v2",econ.EM
31007,em,"In econometrics, many parameters of interest can be written as ratios of
expectations. The main approach to construct confidence intervals for such
parameters is the delta method. However, this asymptotic procedure yields
intervals that may not be relevant for small sample sizes or, more generally,
in a sequence-of-model framework that allows the expectation in the denominator
to decrease to $0$ with the sample size. In this setting, we prove a
generalization of the delta method for ratios of expectations and the
consistency of the nonparametric percentile bootstrap. We also investigate
finite-sample inference and show a partial impossibility result: nonasymptotic
uniform confidence intervals can be built for ratios of expectations but not at
every level. Based on this, we propose an easy-to-compute index to appraise the
reliability of the intervals based on the delta method. Simulations and an
application illustrate our results and the practical usefulness of our rule of
thumb.",On the construction of confidence intervals for ratios of expectations,2019-04-10 20:26:23,"Alexis Derumigny, Lucas Girard, Yannick Guyonvarch","http://arxiv.org/abs/1904.07111v1, http://arxiv.org/pdf/1904.07111v1",math.ST
31008,em,"The standard Gibbs sampler of Mixed Multinomial Logit (MMNL) models involves
sampling from conditional densities of utility parameters using
Metropolis-Hastings (MH) algorithm due to unavailability of conjugate prior for
logit kernel. To address this non-conjugacy concern, we propose the application
of P\'olygamma data augmentation (PG-DA) technique for the MMNL estimation. The
posterior estimates of the augmented and the default Gibbs sampler are similar
for the two-alternative scenario (binary choice), but we encounter empirical
identification issues in the case of more alternatives ($J \geq 3$).",Pólygamma Data Augmentation to address Non-conjugacy in the Bayesian Estimation of Mixed Multinomial Logit Models,2019-04-13 19:38:18,"Prateek Bansal, Rico Krueger, Michel Bierlaire, Ricardo A. Daziano, Taha H. Rashidi","http://arxiv.org/abs/1904.07688v1, http://arxiv.org/pdf/1904.07688v1",stat.ML
31034,em,"The R package stochvol provides a fully Bayesian implementation of
heteroskedasticity modeling within the framework of stochastic volatility. It
utilizes Markov chain Monte Carlo (MCMC) samplers to conduct inference by
obtaining draws from the posterior distribution of parameters and latent
variables which can then be used for predicting future volatilities. The
package can straightforwardly be employed as a stand-alone tool; moreover, it
allows for easy incorporation into other MCMC samplers. The main focus of this
paper is to show the functionality of stochvol. In addition, it provides a
brief mathematical description of the model, an overview of the sampling
schemes used, and several illustrative examples using exchange rate data.",Dealing with Stochastic Volatility in Time Series Using the R Package stochvol,2019-06-28 14:16:00,Gregor Kastner,"http://dx.doi.org/10.18637/jss.v069.i05, http://arxiv.org/abs/1906.12134v1, http://arxiv.org/pdf/1906.12134v1",stat.CO
31009,em,"I analyze treatment effects in situations when agents endogenously select
into the treatment group and into the observed sample. As a theoretical
contribution, I propose pointwise sharp bounds for the marginal treatment
effect (MTE) of interest within the always-observed subpopulation under
monotonicity assumptions. Moreover, I impose an extra mean dominance assumption
to tighten the previous bounds. I further discuss how to identify those bounds
when the support of the propensity score is either continuous or discrete.
Using these results, I estimate bounds for the MTE of the Job Corps Training
Program on hourly wages for the always-employed subpopulation and find that it
is decreasing in the likelihood of attending the program within the
Non-Hispanic group. For example, the Average Treatment Effect on the Treated is
between \$.33 and \$.99 while the Average Treatment Effect on the Untreated is
between \$.71 and \$3.00.",Sharp Bounds for the Marginal Treatment Effect with Sample Selection,2019-04-18 01:21:17,Vitor Possebom,"http://arxiv.org/abs/1904.08522v1, http://arxiv.org/pdf/1904.08522v1",econ.EM
31010,em,"In this paper, I show that classic two-stage least squares (2SLS) estimates
are highly unstable with weak instruments. I propose a ridge estimator (ridge
IV) and show that it is asymptotically normal even with weak instruments,
whereas 2SLS is severely distorted and un-bounded. I motivate the ridge IV
estimator as a convex optimization problem with a GMM objective function and an
L2 penalty. I show that ridge IV leads to sizable mean squared error reductions
theoretically and validate these results in a simulation study inspired by data
designs of papers published in the American Economic Review.",Ridge regularization for Mean Squared Error Reduction in Regression with Weak Instruments,2019-04-18 06:25:52,Karthik Rajkumar,"http://arxiv.org/abs/1904.08580v1, http://arxiv.org/pdf/1904.08580v1",econ.EM
31011,em,"This paper highlights a tension between semiparametric efficiency and
bootstrap consistency in the context of a canonical semiparametric estimation
problem, namely the problem of estimating the average density. It is shown that
although simple plug-in estimators suffer from bias problems preventing them
from achieving semiparametric efficiency under minimal smoothness conditions,
the nonparametric bootstrap automatically corrects for this bias and that, as a
result, these seemingly inferior estimators achieve bootstrap consistency under
minimal smoothness conditions. In contrast, several ""debiased"" estimators that
achieve semiparametric efficiency under minimal smoothness conditions do not
achieve bootstrap consistency under those same conditions.",Average Density Estimators: Efficiency and Bootstrap Consistency,2019-04-20 02:34:42,"Matias D. Cattaneo, Michael Jansson","http://arxiv.org/abs/1904.09372v2, http://arxiv.org/pdf/1904.09372v2",econ.EM
31012,em,"We prove a central limit theorem for network moments in a model of network
formation with strategic interactions and homophilous agents. Since data often
consists of observations on a single large network, we consider an asymptotic
framework in which the network size diverges. We argue that a modification of
""exponential stabilization"" conditions from the literature on geometric graphs
provides a useful high-level formulation of weak dependence, which we use to
establish an abstract central limit theorem. We then derive primitive
conditions for stabilization using results in branching process theory. We
discuss practical inference procedures justified by our results and outline a
methodology for deriving primitive conditions that can be applied more broadly
to other large network models with strategic interactions.",Normal Approximation in Large Network Models,2019-04-24 23:42:11,"Michael P. Leung, Hyungsik Roger Moon","http://arxiv.org/abs/1904.11060v5, http://arxiv.org/pdf/1904.11060v5",econ.EM
31013,em,"This paper considers improved forecasting in possibly nonlinear dynamic
settings, with high-dimension predictors (""big data"" environments). To overcome
the curse of dimensionality and manage data and model complexity, we examine
shrinkage estimation of a back-propagation algorithm of a deep neural net with
skip-layer connections. We expressly include both linear and nonlinear
components. This is a high-dimensional learning approach including both
sparsity L1 and smoothness L2 penalties, allowing high-dimensionality and
nonlinearity to be accommodated in one step. This approach selects significant
predictors as well as the topology of the neural network. We estimate optimal
values of shrinkage hyperparameters by incorporating a gradient-based
optimization technique resulting in robust predictions with improved
reproducibility. The latter has been an issue in some approaches. This is
statistically interpretable and unravels some network structure, commonly left
to a black box. An additional advantage is that the nonlinear part tends to get
pruned if the underlying process is linear. In an application to forecasting
equity returns, the proposed approach captures nonlinear dynamics between
equities to enhance forecast performance. It offers an appreciable improvement
over current univariate and multivariate models by RMSE and actual portfolio
performance.",Forecasting in Big Data Environments: an Adaptable and Automated Shrinkage Estimation of Neural Networks (AAShNet),2019-04-25 06:43:02,"Ali Habibnia, Esfandiar Maasoumi","http://arxiv.org/abs/1904.11145v1, http://arxiv.org/pdf/1904.11145v1",econ.EM
31014,em,"This paper considers the problem of testing many moment inequalities, where
the number of moment inequalities ($p$) is possibly larger than the sample size
($n$). Chernozhukov et al. (2019) proposed asymptotic tests for this problem
using the maximum $t$ statistic. We observe that such tests can have low power
if multiple inequalities are violated. As an alternative, we propose novel
randomization tests based on a maximum non-negatively weighted combination of
$t$ statistics. We provide a condition guaranteeing size control in large
samples. Simulations show that the tests control size in small samples ($n =
30$, $p = 1000$), and often have substantially higher power against alternatives
with multiple violations than tests based on the maximum $t$ statistic.",Exact Testing of Many Moment Inequalities Against Multiple Violations,2019-04-29 18:33:12,"Nick Koning, Paul Bekker","http://arxiv.org/abs/1904.12775v3, http://arxiv.org/pdf/1904.12775v3",math.ST
31015,em,"Mesh refinement in pseudospectral (PS) optimal control is embarrassingly easy
--- simply increase the order $N$ of the Lagrange interpolating polynomial and
the mathematics of convergence automates the distribution of the grid points.
Unfortunately, as $N$ increases, the condition number of the resulting linear
algebra increases as $N^2$; hence, spectral efficiency and accuracy are lost in
practice. In this paper, we advance Birkhoff interpolation concepts over an
arbitrary grid to generate well-conditioned PS optimal control discretizations.
We show that the condition number increases only as $\sqrt{N}$ in general, but
is independent of $N$ for the special case of one of the boundary points being
fixed. Hence, spectral accuracy and efficiency are maintained as $N$ increases.
The effectiveness of the resulting fast mesh refinement strategy is
demonstrated by using \underline{polynomials of order greater than one thousand} to
solve a low-thrust, long-duration orbit transfer problem.",Fast Mesh Refinement in Pseudospectral Optimal Control,2019-04-30 02:50:07,"N. Koeppen, I. M. Ross, L. C. Wilcox, R. J. Proulx","http://arxiv.org/abs/1904.12992v1, http://arxiv.org/pdf/1904.12992v1",math.OC
31016,em,"Nonparametric kernel density and local polynomial regression estimators are
very popular in Statistics, Economics, and many other disciplines. They are
routinely employed in applied work, either as part of the main empirical
analysis or as a preliminary ingredient entering some other estimation or
inference procedure. This article describes the main methodological and
numerical features of the software package nprobust, which offers an array of
estimation and inference procedures for nonparametric kernel-based density and
local polynomial regression methods, implemented in both the R and Stata
statistical platforms. The package includes not only classical bandwidth
selection, estimation, and inference methods (Wand and Jones, 1995; Fan and
Gijbels, 1996), but also other recent developments in the statistics and
econometrics literatures such as robust bias-corrected inference and coverage
error optimal bandwidth selection (Calonico, Cattaneo and Farrell, 2018, 2019).
Furthermore, this article also proposes a simple way of estimating optimal
bandwidths in practice that always delivers the optimal mean square error
convergence rate regardless of the specific evaluation point, that is, no
matter whether it is implemented at a boundary or interior point. Numerical
performance is illustrated using an empirical application and simulated data,
where a detailed numerical comparison with other R packages is given.",nprobust: Nonparametric Kernel-Based Estimation and Robust Bias-Corrected Inference,2019-06-01 13:33:04,"Sebastian Calonico, Matias D. Cattaneo, Max H. Farrell","http://arxiv.org/abs/1906.00198v1, http://arxiv.org/pdf/1906.00198v1",stat.CO
31017,em,"Nonparametric partitioning-based least squares regression is an important
tool in empirical work. Common examples include regressions based on splines,
wavelets, and piecewise polynomials. This article discusses the main
methodological and numerical features of the R software package lspartition,
which implements modern estimation and inference results for partitioning-based
least squares (series) regression from Cattaneo and Farrell (2013) and
Cattaneo, Farrell, and Feng (2019). These results cover the multivariate
regression function as well as its derivatives. First, the package provides
data-driven methods to choose the number of partition knots optimally,
according to integrated mean squared error, yielding optimal point estimation.
Second, robust bias correction is implemented to combine this point estimator
with valid inference. Third, the package provides estimates and inference for
the unknown function both pointwise and uniformly in the conditioning
variables. In particular, valid confidence bands are provided. Finally, an
extension to two-sample analysis is developed, which can be used in
treatment-control comparisons and related problems.",lspartition: Partitioning-Based Least Squares Regression,2019-06-01 13:56:29,"Matias D. Cattaneo, Max H. Farrell, Yingjie Feng","http://arxiv.org/abs/1906.00202v2, http://arxiv.org/pdf/1906.00202v2",stat.CO
31018,em,"An resilience optimal evaluation of financial portfolios implies having
plausible hypotheses about the multiple interconnections between the
macroeconomic variables and the risk parameters. In this paper, we propose a
graphical model for the reconstruction of the causal structure that links the
multiple macroeconomic variables and the assessed risk parameters, it is this
structure that we call Stress Testing Network (STN). In this model, the
relationships between the macroeconomic variables and the risk parameter define
a ""relational graph"" among their time-series, where related time-series are
connected by an edge. Our proposal is based on the temporal causal models, but
unlike, we incorporate specific conditions in the structure which correspond to
intrinsic characteristics this type of networks. Using the proposed model and
given the high-dimensional nature of the problem, we used regularization
methods to efficiently detect causality in the time-series and reconstruct the
underlying causal structure. In addition, we illustrate the use of model in
credit risk data of a portfolio. Finally, we discuss its uses and practical
benefits in stress testing.",Stress Testing Network Reconstruction via Graphical Causal Model,2019-06-03 20:16:24,"Helder Rojas, David Dias","http://arxiv.org/abs/1906.01468v3, http://arxiv.org/pdf/1906.01468v3",stat.AP
31019,em,"Personalized interventions in social services, education, and healthcare
leverage individual-level causal effect predictions in order to give the best
treatment to each individual or to prioritize program interventions for the
individuals most likely to benefit. While the sensitivity of these domains
compels us to evaluate the fairness of such policies, we show that actually
auditing their disparate impacts per standard observational metrics, such as
true positive rates, is impossible since ground truths are unknown. Whether our
data is experimental or observational, an individual's actual outcome under an
intervention different than that received can never be known, only predicted
based on features. We prove how we can nonetheless point-identify these
quantities under the additional assumption of monotone treatment response,
which may be reasonable in many applications. We further provide a sensitivity
analysis for this assumption by means of sharp partial-identification bounds
under violations of monotonicity of varying strengths. We show how to use our
results to audit personalized interventions using partially-identified ROC and
xROC curves and demonstrate this in a case study of a French job training
dataset.",Assessing Disparate Impacts of Personalized Interventions: Identifiability and Bounds,2019-06-04 19:11:07,"Nathan Kallus, Angela Zhou","http://arxiv.org/abs/1906.01552v1, http://arxiv.org/pdf/1906.01552v1",stat.ML
31020,em,"This paper proposes a method for estimating consumer preferences among
discrete choices, where the consumer chooses at most one product in a category,
but selects from multiple categories in parallel. The consumer's utility is
additive in the different categories. Her preferences about product attributes
as well as her price sensitivity vary across products and are in general
correlated across products. We build on techniques from the machine learning
literature on probabilistic models of matrix factorization, extending the
methods to account for time-varying product attributes and products going out
of stock. We evaluate the performance of the model using held-out data from
weeks with price changes or out of stock products. We show that our model
improves over traditional modeling approaches that consider each category in
isolation. One source of the improvement is the ability of the model to
accurately estimate heterogeneity in preferences (by pooling information across
categories); another source of improvement is its ability to estimate the
preferences of consumers who have rarely or never made a purchase in a given
category in the training data. Using held-out data, we show that our model can
accurately distinguish which consumers are most price sensitive to a given
product. We consider counterfactuals such as personally targeted price
discounts, showing that using a richer model such as the one we propose
substantially increases the benefits of personalization in discounts.",Counterfactual Inference for Consumer Choice Across Many Product Categories,2019-06-06 18:11:01,"Rob Donnelly, Francisco R. Ruiz, David Blei, Susan Athey","http://dx.doi.org/10.1007/s11129-021-09241-2, http://arxiv.org/abs/1906.02635v2, http://arxiv.org/pdf/1906.02635v2",cs.LG
31021,em,"The Stochastic Volatility (SV) model and its variants are widely used in the
financial sector while recurrent neural network (RNN) models are successfully
used in many large-scale industrial applications of Deep Learning. Our article
combines these two methods in a non-trivial way and proposes a model, which we
call the Statistical Recurrent Stochastic Volatility (SR-SV) model, to capture
the dynamics of stochastic volatility. The proposed model is able to capture
complex volatility effects (e.g., non-linearity and long-memory
auto-dependence) overlooked by the conventional SV models, is statistically
interpretable and has an impressive out-of-sample forecast performance. These
properties are carefully discussed and illustrated through extensive simulation
studies and applications to five international stock index datasets: The German
stock index DAX30, the Hong Kong stock index HSI50, the France market index
CAC40, the US stock market index SP500 and the Canada market index TSX250. A
user-friendly software package together with the examples reported in the paper
are available at \url{https://github.com/vbayeslab}.",A Statistical Recurrent Stochastic Volatility Model for Stock Markets,2019-06-07 06:29:46,"Trong-Nghia Nguyen, Minh-Ngoc Tran, David Gunawan, R. Kohn","http://arxiv.org/abs/1906.02884v3, http://arxiv.org/pdf/1906.02884v3",econ.EM
31022,em,"The advantages of sequential Monte Carlo (SMC) are exploited to develop
parameter estimation and model selection methods for GARCH (Generalized
AutoRegressive Conditional Heteroskedasticity) style models. It provides an
alternative method for quantifying estimation uncertainty relative to classical
inference. Even with long time series, it is demonstrated that the posterior
distributions of the model parameters are non-normal, highlighting the need for a
Bayesian approach and an efficient posterior sampling method. Efficient
approaches for both constructing the sequence of distributions in SMC, and
leave-one-out cross-validation, for long time series data are also proposed.
Finally, an unbiased estimator of the likelihood is developed for the Bad
Environment-Good Environment model, a complex GARCH-type model, which permits
exact Bayesian inference not previously available in the literature.",Efficient Bayesian estimation for GARCH-type models via Sequential Monte Carlo,2019-06-10 10:59:26,"Dan Li, Adam Clements, Christopher Drovandi","http://arxiv.org/abs/1906.03828v2, http://arxiv.org/pdf/1906.03828v2",stat.AP
31023,em,"This handbook chapter gives an introduction to the sharp regression
discontinuity design, covering identification, estimation, inference, and
falsification methods.",The Regression Discontinuity Design,2019-06-10 22:20:25,"Matias D. Cattaneo, Rocio Titiunik, Gonzalo Vazquez-Bare","http://dx.doi.org/10.4135/9781526486387.n47, http://arxiv.org/abs/1906.04242v2, http://arxiv.org/pdf/1906.04242v2",econ.EM
31024,em,"We argue that a stochastic model of economic exchange, whose steady-state
distribution is a Generalized Beta Prime (also known as GB2), and some unique
properties of the latter, are the reason for GB2's success in describing
wealth/income distributions. We use housing sale prices as a proxy for
wealth/income distribution to numerically illustrate this point. We also
explore parametric limits of the distribution to do so analytically. We discuss
parametric properties of the inequality indices -- Gini, Hoover, Theil T and
Theil L -- vis-a-vis those of GB2 and introduce a new inequality index, which
serves a similar purpose. We argue that Hoover and Theil L are more appropriate
measures for distributions with power-law dependencies, especially fat tails,
such as GB2.",Generalized Beta Prime Distribution: Stochastic Model of Economic Exchange and Properties of Inequality Indices,2019-06-12 00:17:05,"M. Dashti Moghaddam, Jeffrey Mills, R. A. Serota","http://arxiv.org/abs/1906.04822v1, http://arxiv.org/pdf/1906.04822v1",econ.EM
31025,em,"Density estimation and inference methods are widely used in empirical work.
When the underlying distribution has compact support, conventional kernel-based
density estimators are no longer consistent near or at the boundary because of
their well-known boundary bias. Alternative smoothing methods are available to
handle boundary points in density estimation, but they all require additional
tuning parameter choices or other typically ad hoc modifications depending on
the evaluation point and/or approach considered. This article discusses the R
and Stata package lpdensity implementing a novel local polynomial density
estimator proposed and studied in Cattaneo, Jansson, and Ma (2020, 2021), which
is boundary adaptive and involves only one tuning parameter. The methods
implemented also cover local polynomial estimation of the cumulative
distribution function and density derivatives. In addition to point estimation
and graphical procedures, the package offers consistent variance estimators,
mean squared error optimal bandwidth selection, robust bias-corrected
inference, and confidence bands construction, among other features. A
comparison with other density estimation packages available in R using a Monte
Carlo experiment is provided.",lpdensity: Local Polynomial Density Estimation and Inference,2019-06-15 14:26:21,"Matias D. Cattaneo, Michael Jansson, Xinwei Ma","http://arxiv.org/abs/1906.06529v3, http://arxiv.org/pdf/1906.06529v3",stat.CO
31026,em,"We theoretically analyze the problem of testing for $p$-hacking based on
distributions of $p$-values across multiple studies. We provide general results
for when such distributions have testable restrictions (are non-increasing)
under the null of no $p$-hacking. We find novel additional testable
restrictions for $p$-values based on $t$-tests. Specifically, the shape of the
power functions results in both complete monotonicity as well as bounds on the
distribution of $p$-values. These testable restrictions result in more powerful
tests for the null hypothesis of no $p$-hacking. When there is also publication
bias, our tests are joint tests for $p$-hacking and publication bias. A
reanalysis of two prominent datasets shows the usefulness of our new tests.",Detecting p-hacking,2019-06-16 17:44:26,"Graham Elliott, Nikolay Kudrin, Kaspar Wuthrich","http://arxiv.org/abs/1906.06711v5, http://arxiv.org/pdf/1906.06711v5",econ.EM
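The basic testable restriction above, a non-increasing p-value density under the null of no p-hacking, can be probed with a very simple device. The sketch below is only an illustration of that restriction, not one of the tests proposed by Elliott, Kudrin, and Wuthrich: for two adjacent equal-width bins, a non-increasing density implies that the count in the upper bin is stochastically dominated by a Binomial(n, 1/2) given the pair total, so a one-sided binomial test gives a conservative check. The bin edges and simulated p-values are arbitrary choices.

```python
# Illustrative sketch (not the authors' tests): checking the testable restriction
# that the p-value density is non-increasing under the null of no p-hacking.
# For two adjacent equal-width bins, a non-increasing density implies the count in
# the upper bin is at most Binomial(n_pair, 1/2) in distribution, so a one-sided
# binomial test gives a simple, conservative check for each adjacent pair.
import numpy as np
from scipy.stats import binomtest

def adjacent_bin_checks(pvals, edges=(0.00, 0.01, 0.02, 0.03, 0.04, 0.05)):
    counts, _ = np.histogram(pvals, bins=edges)
    results = []
    for lo, hi in zip(counts[:-1], counts[1:]):
        n_pair = lo + hi
        pv = binomtest(int(hi), int(n_pair), p=0.5, alternative="greater").pvalue if n_pair else 1.0
        results.append((int(lo), int(hi), pv))
    return results

# Toy example: a spike of p-values just below 0.05, a classic p-hacking signature.
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(0, 0.05, 400), rng.uniform(0.04, 0.05, 120)])
for lo, hi, pv in adjacent_bin_checks(pvals):
    print(f"counts {lo:4d} -> {hi:4d}   one-sided p = {pv:.4f}")
```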
31027,em,"In this paper, we consider an unknown functional estimation problem in a
general nonparametric regression model with the feature of having both
multiplicative and additive noise. We propose two new wavelet estimators in this
general context. We prove that they achieve fast convergence rates under the
mean integrated square error over Besov spaces. The obtained rates have the
particularity of being established under weak conditions on the model. A
numerical study in a context comparable to stochastic frontier estimation (with
the difference that the boundary is not necessarily a production function)
supports the theory.",Nonparametric estimation in a regression model with additive and multiplicative noise,2019-06-18 20:16:32,"Christophe Chesneau, Salima El Kolei, Junke Kou, Fabien Navarro","http://dx.doi.org/10.1016/j.cam.2020.112971, http://arxiv.org/abs/1906.07695v2, http://arxiv.org/pdf/1906.07695v2",math.ST
31028,em,"We study issues related to external validity for treatment effects using over
100 replications of the Angrist and Evans (1998) natural experiment on the
effects of sibling sex composition on fertility and labor supply. The
replications are based on census data from around the world going back to 1960.
We decompose sources of error in predicting treatment effects in external
contexts in terms of macro and micro sources of variation. In our empirical
setting, we find that macro covariates dominate over micro covariates for
reducing errors in predicting treatment effects, an issue that past studies of
external validity have been unable to evaluate. We develop methods for two
applications to evidence-based decision-making, including determining where to
locate an experiment and whether policy-makers should commission new
experiments or rely on an existing evidence base for making a policy decision.",From Local to Global: External Validity in a Fertility Natural Experiment,2019-06-19 16:43:37,"Rajeev Dehejia, Cristian Pop-Eleches, Cyrus Samii","http://arxiv.org/abs/1906.08096v1, http://arxiv.org/pdf/1906.08096v1",econ.EM
31029,em,"Social scientists have frequently sought to understand the distinct effects
of age, period, and cohort, but disaggregation of the three dimensions is
difficult because cohort = period - age. We argue that this technical
difficulty reflects a disconnection between how cohort effect is conceptualized
and how it is modeled in the traditional age-period-cohort framework. We
propose a new method, called the age-period-cohort-interaction (APC-I) model,
that is qualitatively different from previous methods in that it represents
Ryder's (1965) theoretical account about the conditions under which cohort
differentiation may arise. This APC-I model does not require problematic
statistical assumptions and the interpretation is straightforward. It
quantifies inter-cohort deviations from the age and period main effects and
also permits hypothesis testing about intra-cohort life-course dynamics. We
demonstrate how this new model can be used to examine age, period, and cohort
patterns in women's labor force participation.",The Age-Period-Cohort-Interaction Model for Describing and Investigating Inter-Cohort Deviations and Intra-Cohort Life-Course Dynamics,2019-06-03 04:33:35,"Liying Luo, James Hodges","http://arxiv.org/abs/1906.08357v1, http://arxiv.org/pdf/1906.08357v1",stat.AP
31030,em,"The internal validity of observational study is often subject to debate. In
this study, we define the counterfactuals as the unobserved sample and intend
to quantify its relationship with the null hypothesis statistical testing
(NHST). We propose the probability of a causal inference is robust for internal
validity, i.e., the PIV, as a robustness index of causal inference. Formally,
the PIV is the probability of rejecting the null hypothesis again based on both
the observed sample and the counterfactuals, provided the same null hypothesis
has already been rejected based on the observed sample. Under either
frequentist or Bayesian framework, one can bound the PIV of an inference based
on one's bounded beliefs about the counterfactuals, which is often needed when the
unconfoundedness assumption is dubious. The PIV is equivalent to statistical
power when the NHST is thought to be based on both the observed sample and the
counterfactuals. We summarize the process of evaluating internal validity with
the PIV into an eight-step procedure and illustrate it with an empirical
example (i.e., Hong and Raudenbush (2005)).",On the probability of a causal inference is robust for internal validity,2019-06-20 19:13:03,"Tenglong Li, Kenneth A. Frank","http://arxiv.org/abs/1906.08726v1, http://arxiv.org/pdf/1906.08726v1",stat.AP
31031,em,"This paper studies the problem of optimally allocating treatments in the
presence of spillover effects, using information from a (quasi-)experiment. I
introduce a method that maximizes the sample analog of average social welfare
when spillovers occur. I construct semi-parametric welfare estimators with
known and unknown propensity scores and cast the optimization problem into a
mixed-integer linear program, which can be solved using off-the-shelf
algorithms. I derive a strong set of guarantees on regret, i.e., the difference
between the maximum attainable welfare and the welfare evaluated at the
estimated policy. The proposed method presents attractive features for
applications: (i) it does not require network information of the target
population; (ii) it exploits heterogeneity in treatment effects for targeting
individuals; (iii) it does not rely on the correct specification of a
particular structural model; and (iv) it accommodates constraints on the policy
function. An application for targeting information on social networks
illustrates the advantages of the method.",Policy Targeting under Network Interference,2019-06-25 01:42:32,Davide Viviano,"http://arxiv.org/abs/1906.10258v13, http://arxiv.org/pdf/1906.10258v13",econ.EM
31032,em,"Exchangeable arrays are natural tools to model common forms of dependence
between units of a sample. Jointly exchangeable arrays are well suited to
dyadic data, where observed random variables are indexed by two units from the
same population. Examples include trade flows between countries or
relationships in a network. Separately exchangeable arrays are well suited to
multiway clustering, where units sharing the same cluster (e.g. geographical
areas or sectors of activity when considering individual wages) may be
dependent in an unrestricted way. We prove uniform laws of large numbers and
central limit theorems for such exchangeable arrays. We obtain these results
under the same moment restrictions and conditions on the class of functions as
those typically assumed with i.i.d. data. We also show the convergence of
bootstrap processes adapted to such arrays.",Empirical Process Results for Exchangeable Arrays,2019-06-24 18:55:24,"Laurent Davezies, Xavier D'Haultfoeuille, Yannick Guyonvarch","http://dx.doi.org/10.1214/20-AOS1981, http://arxiv.org/abs/1906.11293v4, http://arxiv.org/pdf/1906.11293v4",math.ST
31033,em,"Stochastic volatility (SV) models are nonlinear state-space models that enjoy
increasing popularity for fitting and predicting heteroskedastic time series.
However, due to the large number of latent quantities, their efficient
estimation is non-trivial and software that allows one to easily fit SV models to
data is rare. We aim to alleviate this issue by presenting novel
implementations of four SV models delivered in two R packages. Several unique
features are included and documented. As opposed to previous versions, stochvol
is now capable of handling linear mean models, heavy-tailed SV, and SV with
leverage. Moreover, we newly introduce factorstochvol which caters for
multivariate SV. Both packages offer a user-friendly interface through the
conventional R generics and a range of tailor-made methods. Computational
efficiency is achieved via interfacing R to C++ and doing the heavy work in the
latter. In the paper at hand, we provide a detailed discussion on Bayesian SV
estimation and showcase the use of the new software through various examples.",Modeling Univariate and Multivariate Stochastic Volatility in R with stochvol and factorstochvol,2019-06-28 13:35:36,"Darjus Hosszejni, Gregor Kastner","http://dx.doi.org/10.18637/jss.v100.i12, http://arxiv.org/abs/1906.12123v3, http://arxiv.org/pdf/1906.12123v3",stat.CO
31035,em,"Suppose X and Y are binary exposure and outcome variables, and we have full
knowledge of the distribution of Y, given application of X. From this we know
the average causal effect of X on Y. We are now interested in assessing, for a
case that was exposed and exhibited a positive outcome, whether it was the
exposure that caused the outcome. The relevant ""probability of causation"", PC,
typically is not identified by the distribution of Y given X, but bounds can be
placed on it, and these bounds can be improved if we have further information
about the causal process. Here we consider cases where we know the
probabilistic structure for a sequence of complete mediators between X and Y.
We derive a general formula for calculating bounds on PC for any pattern of
data on the mediators (including the case with no data). We show that the
largest and smallest upper and lower bounds that can result from any complete
mediation process can be obtained in processes with at most two steps. We also
consider homogeneous processes with many mediators. PC can sometimes be
identified as 0 with negative data, but it cannot be identified at 1 even with
positive data on an infinite set of mediators. The results have implications
for learning about causation from knowledge of general processes and of data on
cases.",Bounding Causes of Effects with Mediators,2019-06-30 18:41:36,"Philip Dawid, Macartan Humphreys, Monica Musio","http://arxiv.org/abs/1907.00399v1, http://arxiv.org/pdf/1907.00399v1",math.ST
31036,em,"This paper introduces measures for how each moment contributes to the
precision of parameter estimates in GMM settings. For example, one of the
measures asks what would happen to the variance of the parameter estimates if a
particular moment was dropped from the estimation. The measures are all easy to
compute. We illustrate the usefulness of the measures through two simple
examples as well as an application to a model of joint retirement planning of
couples. We estimate the model using the UK-BHPS, and we find evidence of
complementarities in leisure. Our sensitivity measures illustrate that the
estimate of the complementarity is primarily informed by the distribution of
differences in planned retirement dates. The estimated econometric model can be
interpreted as a bivariate ordered choice model that allows for simultaneity.
This makes the model potentially useful in other applications.",The Informativeness of Estimation Moments,2019-07-03 21:54:39,"Bo Honore, Thomas Jorgensen, Aureo de Paula","http://arxiv.org/abs/1907.02101v2, http://arxiv.org/pdf/1907.02101v2",econ.EM
31037,em,"This article presents a set of tools for the modeling of a spatial allocation
problem in a large geographic market and gives examples of applications. In our
settings, the market is described by a network that maps the cost of travel
between each pair of adjacent locations. Two types of agents are located at the
nodes of this network. The buyers choose the most competitive sellers depending
on their prices and the cost to reach them. Their utility is assumed additive
in both these quantities. Each seller, taking other sellers' prices as given,
sets her own price so that her demand equals the one observed. We give a
linear programming formulation for the equilibrium conditions. After formally
introducing our model we apply it to two examples: prices offered by petrol
stations and quality of services provided by maternity wards. These examples
illustrate the applicability of our model to aggregating demand, ranking prices
and estimating cost structure over the network. We emphasize the possibility of
applications to large-scale data sets using modern linear programming solvers
such as Gurobi. In addition to this paper we released an R toolbox to implement
our results and an online tutorial (http://optimalnetwork.github.io)","Optimal transport on large networks, a practitioner's guide",2019-07-04 13:45:57,"Arthur Charpentier, Alfred Galichon, Lucas Vernet","http://arxiv.org/abs/1907.02320v2, http://arxiv.org/pdf/1907.02320v2",econ.GN
31038,em,"Artificial intelligence, or AI, enhancements are increasingly shaping our
daily lives. Financial decision-making is no exception to this. We introduce
the notion of AI Alter Egos, which are shadow robo-investors, and use a unique
data set covering brokerage accounts for a large cross-section of investors
over a sample from January 2003 to March 2012, which includes the 2008
financial crisis, to assess the benefits of robo-investing. We have detailed
investor characteristics and records of all trades. Our data set consists of
investors typically targeted for robo-advising. We explore robo-investing
strategies commonly used in the industry, including some involving advanced
machine learning methods. The man versus machine comparison allows us to shed
light on potential benefits the emerging robo-advising industry may provide to
certain segments of the population, such as low income and/or high risk averse
investors.",Artificial Intelligence Alter Egos: Who benefits from Robo-investing?,2019-07-08 03:10:09,"Catherine D'Hondt, Rudy De Winne, Eric Ghysels, Steve Raymond","http://arxiv.org/abs/1907.03370v1, http://arxiv.org/pdf/1907.03370v1",q-fin.PM
31039,em,"Different agents need to make a prediction. They observe identical data, but
have different models: they predict using different explanatory variables. We
study which agent believes they have the best predictive ability -- as measured
by the smallest subjective posterior mean squared prediction error -- and show
how it depends on the sample size. With small samples, we present results
suggesting it is an agent using a low-dimensional model. With large samples, it
is generally an agent with a high-dimensional model, possibly including
irrelevant variables, but never excluding relevant ones. We apply our results
to characterize the winning model in an auction of productive assets, to argue
that entrepreneurs and investors with simple models will be over-represented in
new sectors, and to understand the proliferation of ""factors"" that explain the
cross-sectional variation of expected stock returns in the asset-pricing
literature.",Competing Models,2019-07-08 21:50:09,"Jose Luis Montiel Olea, Pietro Ortoleva, Mallesh M Pai, Andrea Prat","http://dx.doi.org/10.1093/qje/qjac015, http://arxiv.org/abs/1907.03809v5, http://arxiv.org/pdf/1907.03809v5",econ.TH
31040,em,"We propose a simple test for moment inequalities that has exact size in
normal models with known variance and has uniformly asymptotically exact size
more generally. The test compares the quasi-likelihood ratio statistic to a
chi-squared critical value, where the degree of freedom is the rank of the
inequalities that are active in finite samples. The test requires no simulation
and thus is computationally fast and especially suitable for constructing
confidence sets for parameters by test inversion. It uses no tuning parameter
for moment selection and yet still adapts to the slackness of the moment
inequalities. Furthermore, we show how the test can be easily adapted for
inference on subvectors for the common empirical setting of conditional moment
inequalities with nuisance parameters entering linearly.",Simple Adaptive Size-Exact Testing for Full-Vector and Subvector Inference in Moment Inequality Models,2019-07-15 05:54:55,"Gregory Cox, Xiaoxia Shi","http://arxiv.org/abs/1907.06317v2, http://arxiv.org/pdf/1907.06317v2",econ.EM
31041,em,"We develop tools for utilizing correspondence experiments to detect illegal
discrimination by individual employers. Employers violate US employment law if
their propensity to contact applicants depends on protected characteristics
such as race or sex. We establish identification of higher moments of the
causal effects of protected characteristics on callback rates as a function of
the number of fictitious applications sent to each job ad. These moments are
used to bound the fraction of jobs that illegally discriminate. Applying our
results to three experimental datasets, we find evidence of significant
employer heterogeneity in discriminatory behavior, with the standard deviation
of gaps in job-specific callback probabilities across protected groups
averaging roughly twice the mean gap. In a recent experiment manipulating
racially distinctive names, we estimate that at least 85% of jobs that contact
both of two white applications and neither of two black applications are
engaged in illegal discrimination. To assess the tradeoff between type I and II
errors presented by these patterns, we consider the performance of a series of
decision rules for investigating suspicious callback behavior under a simple
two-type model that rationalizes the experimental data. Though, in our
preferred specification, only 17% of employers are estimated to discriminate on
the basis of race, we find that an experiment sending 10 applications to each
job would enable accurate detection of 7-10% of discriminators while falsely
accusing fewer than 0.2% of non-discriminators. A minimax decision rule
acknowledging partial identification of the joint distribution of callback
rates yields higher error rates but more investigations than our baseline
two-type model. Our results suggest illegal labor market discrimination can be
reliably monitored with relatively small modifications to existing audit
designs.","Audits as Evidence: Experiments, Ensembles, and Enforcement",2019-07-15 20:43:02,"Patrick Kline, Christopher Walters","http://arxiv.org/abs/1907.06622v2, http://arxiv.org/pdf/1907.06622v2",econ.EM
31042,em,"The paper shows that matching without replacement on propensity scores
produces estimators that generally are inconsistent for the average treatment
effect of the treated. To achieve consistency, practitioners must either assume
that no units exist with propensity scores greater than one-half or assume that
there is no confounding among such units. The result is not driven by the use
of propensity scores, and similar artifacts arise when matching on other scores
as long as it is without replacement.",On the inconsistency of matching without replacement,2019-07-17 02:50:56,Fredrik Sävje,"http://arxiv.org/abs/1907.07288v2, http://arxiv.org/pdf/1907.07288v2",econ.EM
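A toy Monte Carlo makes the point concrete. The sketch below is not from the paper; it simulates one particular data-generating process in which some true propensity scores exceed one half and shows that greedy 1:1 nearest-neighbor matching without replacement typically overstates the (constant) treatment effect on the treated under that design. All parameter values are illustrative.

```python
# Toy simulation (illustrative only): 1:1 propensity-score matching *without
# replacement* can be biased for the ATT when some true propensity scores exceed
# one half, because treated units in that region locally outnumber controls and
# must be matched to distant controls.
import numpy as np

rng = np.random.default_rng(0)
n, tau = 4000, 1.0
x = rng.uniform(size=n)
e = 0.1 + 0.7 * x                         # true propensity, above 1/2 for x > 4/7
d = rng.uniform(size=n) < e
y = 3.0 * x + tau * d + rng.normal(scale=0.5, size=n)

# Greedy 1:1 matching of each treated unit to the nearest unused control on e(x).
treated, controls = np.where(d)[0], np.where(~d)[0]
order = treated[np.argsort(-e[treated])]  # match the hardest (high-e) cases first
available = e[controls]
used = np.zeros(controls.size, dtype=bool)
diffs = []
for i in order:
    cand = np.where(~used)[0]
    j = cand[np.argmin(np.abs(available[cand] - e[i]))]
    used[j] = True
    diffs.append(y[i] - y[controls[j]])

print("true ATT:", tau, " matched estimate:", round(float(np.mean(diffs)), 3))
```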
31043,em,"Aesthetics are critically important to market acceptance. In the automotive
industry, an improved aesthetic design can boost sales by 30% or more. Firms
invest heavily in designing and testing aesthetics. A single automotive ""theme
clinic"" can cost over $100,000, and hundreds are conducted annually. We propose
a model to augment the commonly-used aesthetic design process by predicting
aesthetic scores and automatically generating innovative and appealing product
designs. The model combines a probabilistic variational autoencoder (VAE) with
adversarial components from generative adversarial networks (GAN) and a
supervised learning component. We train and evaluate the model with data from
an automotive partner: images of 203 SUVs evaluated by targeted consumers and
180,000 high-quality unrated images. Our model predicts well the appeal of new
aesthetic designs, with a 43.5% improvement relative to a uniform baseline and
substantial improvement over conventional machine learning models and
pretrained deep neural networks. New automotive designs are generated in a
controllable manner for use by design teams. We empirically verify that
automatically generated designs are (1) appealing to consumers and (2) resemble
designs which were introduced to the market five years after our data were
collected. We provide an additional proof-of-concept application using
opensource images of dining room chairs.",Product Aesthetic Design: A Machine Learning Augmentation,2019-07-18 00:56:55,"Alex Burnap, John R. Hauser, Artem Timoshenko","http://arxiv.org/abs/1907.07786v2, http://arxiv.org/pdf/1907.07786v2",cs.LG
31044,em,"Several methods have been developed for the simulation of the Hawkes process.
The oldest approach is inverse transform sampling (ITS), suggested in
\citep{ozaki1979maximum} but rapidly abandoned in favor of more efficient
alternatives. This manuscript shows that the ITS approach can be conveniently
discussed in terms of Lambert-W functions. An optimized and efficient
implementation suggests that this approach is computationally more performant
than more recent alternatives available for the simulation of the Hawkes
process.",On the simulation of the Hawkes process via Lambert-W functions,2019-07-22 10:28:22,Martin Magris,"http://arxiv.org/abs/1907.09162v1, http://arxiv.org/pdf/1907.09162v1",econ.EM
31045,em,"This paper develops an approach to detect identification failure in moment
condition models. This is achieved by introducing a quasi-Jacobian matrix
computed as the slope of a linear approximation of the moments on an estimate
of the identified set. It is asymptotically singular when local and/or global
identification fails, and equivalent to the usual Jacobian matrix which has
full rank when the model is point and locally identified. Building on this
property, a simple test with chi-squared critical values is introduced to
conduct subvector inferences allowing for strong, semi-strong, and weak
identification without \textit{a priori} knowledge about the underlying
identification structure. Monte-Carlo simulations and an empirical application
to the Long-Run Risks model illustrate the results.",Detecting Identification Failure in Moment Condition Models,2019-07-30 20:26:06,Jean-Jacques Forneron,"http://arxiv.org/abs/1907.13093v5, http://arxiv.org/pdf/1907.13093v5",econ.EM
31046,em,"We study nonparametric estimation of density functions for undirected dyadic
random variables (i.e., random variables defined for all
$n\overset{def}{\equiv}\tbinom{N}{2}$ unordered pairs of agents/nodes in a
weighted network of order N). These random variables satisfy a local dependence
property: any random variables in the network that share one or two indices may
be dependent, while those sharing no indices in common are independent. In this
setting, we show that density functions may be estimated by an application of
the kernel estimation method of Rosenblatt (1956) and Parzen (1962). We suggest
an estimate of their asymptotic variances inspired by a combination of (i)
Newey's (1994) method of variance estimation for kernel estimators in the
""monadic"" setting and (ii) a variance estimator for the (estimated) density of
a simple network first suggested by Holland and Leinhardt (1976). More unusual
are the rates of convergence and asymptotic (normal) distributions of our
dyadic density estimates. Specifically, we show that they converge at the same
rate as the (unconditional) dyadic sample mean: the square root of the number,
N, of nodes. This differs from the results for nonparametric estimation of
densities and regression functions for monadic data, which generally have a
slower rate of convergence than their corresponding sample mean.",Kernel Density Estimation for Undirected Dyadic Data,2019-07-31 20:55:08,"Bryan S. Graham, Fengshi Niu, James L. Powell","http://arxiv.org/abs/1907.13630v1, http://arxiv.org/pdf/1907.13630v1",math.ST
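The point estimator itself is a direct application of the Rosenblatt-Parzen recipe to the N(N-1)/2 dyadic outcomes; what changes relative to the monadic case is the dependence structure, and hence the variance estimator and the convergence rate. The sketch below illustrates only the point estimate, not the proposed variance estimator; the Gaussian kernel, bandwidth, and simulated dyadic design are illustrative choices.

```python
# Minimal sketch (illustrative, not the authors' code): kernel density estimation
# for undirected dyadic data. Each of the n = N*(N-1)/2 pair outcomes W_ij gets a
# standard Rosenblatt-Parzen kernel contribution; pairs sharing a node may be
# dependent, which matters for the variance but not for the point estimate.
import numpy as np

def dyadic_kde(W, eval_points, bandwidth):
    """W: symmetric (N, N) array of dyadic outcomes (diagonal ignored)."""
    N = W.shape[0]
    iu = np.triu_indices(N, k=1)
    w = W[iu]                                     # the n = C(N,2) dyadic outcomes
    u = (eval_points[:, None] - w[None, :]) / bandwidth
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return k.mean(axis=1) / bandwidth

# Toy example: pair outcomes W_ij = a_i + a_j + noise, a standard dyadic structure.
rng = np.random.default_rng(0)
N = 200
a = rng.normal(size=N)
noise = rng.normal(scale=0.5, size=(N, N))
W = a[:, None] + a[None, :] + (noise + noise.T) / 2
grid = np.linspace(-4, 4, 9)
print(np.round(dyadic_kde(W, grid, bandwidth=0.3), 3))
```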
33428,gn,"In this paper an algorithm designed for large databases is introduced for the
enhancement of pass rates in mathematical university lower division courses
with several sections. Using integer programming techniques, the algorithm
finds the optimal pairing of students and lecturers in order to maximize the
success chances of the student body. The student-lecturer success
probability is computed from the corresponding profiles stored in the
databases.",A big data based method for pass rates optimization in mathematics university lower division courses,2018-09-18 17:15:33,"Fernando A Morales, Cristian C Chica, Carlos A Osorio, Daniel Cabarcas J","http://arxiv.org/abs/1809.09724v3, http://arxiv.org/pdf/1809.09724v3",econ.GN
31047,em,"This paper proposes a new method to identify leaders and followers in a
network. Prior works use spatial autoregression models (SARs) which implicitly
assume that each individual in the network has the same peer effects on others.
Mechanically, they conclude the key player in the network to be the one with
the highest centrality. However, when some individuals are more influential
than others, centrality may fail to be a good measure. I develop a model that
allows for individual-specific endogenous effects and propose a two-stage LASSO
procedure to identify influential individuals in a network. Under an assumption
of sparsity, namely that only a subset of individuals (which can increase with
sample size n) is influential, I show that my 2SLSS estimator for individual-specific
endogenous effects is consistent and achieves asymptotic normality. I also
develop robust inference including uniformly valid confidence intervals. These
results also carry through to scenarios where the influential individuals are
not sparse. I extend the analysis to allow for multiple types of connections
(multiple networks), and I show how to use the sparse group LASSO to detect
which of the multiple connection types is more influential. Simulation evidence
shows that my estimator has good finite sample performance. I further apply my
method to the data in Banerjee et al. (2013) and my proposed procedure is able
to identify leaders and effective networks.",Heterogeneous Endogenous Effects in Networks,2019-08-02 03:05:05,Sida Peng,"http://arxiv.org/abs/1908.00663v1, http://arxiv.org/pdf/1908.00663v1",econ.EM
31048,em,"We show the equivalence of discrete choice models and a forest of binary
decision trees. This suggests that standard machine learning techniques based
on random forests can serve to estimate discrete choice models with an
interpretable output: the underlying trees can be viewed as the internal choice
process of customers. Our data-driven theoretical results show that random
forests can predict the choice probability of any discrete choice model
consistently. Moreover, our algorithm predicts unseen assortments with
mechanisms and errors that can be theoretically analyzed. We also prove that
the splitting criterion in random forests, the Gini index, is capable of
recovering preference rankings of customers. The framework has unique practical
advantages: it can capture behavioral patterns such as irrationality or
sequential searches; it handles nonstandard formats of training data that
result from aggregation; it can measure product importance based on how
frequently a random customer would make decisions depending on the presence of
the product; it can also incorporate price information and customer features.
Our numerical results show that using random forests to estimate customer
choices can outperform the best parametric models in synthetic and real
datasets when presented with enough data or when the underlying discrete choice
model cannot be correctly specified by existing parametric models.",The Use of Binary Choice Forests to Model and Estimate Discrete Choices,2019-08-03 05:34:49,"Ningyuan Chen, Guillermo Gallego, Zhuodong Tang","http://arxiv.org/abs/1908.01109v4, http://arxiv.org/pdf/1908.01109v4",cs.LG
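A minimal version of the idea, estimating choice probabilities from assortment data with an off-the-shelf random forest, can be sketched as follows. This is an illustration in the spirit of the paper, not the authors' implementation: features are offer indicators for each product, the label is the chosen alternative (with 0 denoting no purchase), and a multinomial logit is used only to simulate training data for the demo.

```python
# Illustrative sketch (not the authors' code): estimating choice probabilities
# from assortment data with a random forest. Features are 0/1 indicators of which
# products are offered; label k > 0 means product k-1 was chosen, 0 means no
# purchase. The training data are simulated from a multinomial logit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_products, n_obs = 5, 5000
true_util = rng.normal(size=n_products)               # latent product utilities

# Simulate assortments and logit choices (ground truth only for this demo).
assortments = rng.integers(0, 2, size=(n_obs, n_products))
choices = np.zeros(n_obs, dtype=int)
for i, a in enumerate(assortments):
    util = np.where(a == 1, np.exp(true_util), 0.0)
    probs = np.append(1.0, util) / (1.0 + util.sum())  # index 0 = outside option
    choices[i] = rng.choice(n_products + 1, p=probs)

forest = RandomForestClassifier(n_estimators=500, min_samples_leaf=20, random_state=0)
forest.fit(assortments, choices)

# Predicted choice probabilities for an unseen assortment offering products 0, 2, 4.
new_assortment = np.array([[1, 0, 1, 0, 1]])
print(dict(zip(forest.classes_, forest.predict_proba(new_assortment)[0].round(3))))
```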
31049,em,"We propose a new method for studying environments with unobserved individual
heterogeneity. Based on model-implied pairwise inequalities, the method
classifies individuals in the sample into groups defined by discrete unobserved
heterogeneity with unknown support. We establish conditions under which the
groups are identified and consistently estimated through our method. We show
that the method performs well in finite samples through Monte Carlo simulation.
We then apply the method to estimate a model of lowest-price procurement
auctions with unobserved bidder heterogeneity, using data from the California
highway procurement market.",Estimating Unobserved Individual Heterogeneity Using Pairwise Comparisons,2019-08-04 07:58:24,"Elena Krasnokutskaya, Kyungchul Song, Xun Tang","http://arxiv.org/abs/1908.01272v3, http://arxiv.org/pdf/1908.01272v3",econ.EM
31050,em,"With the industry trend of shifting from a traditional hierarchical approach
to flatter management structure, crowdsourced performance assessment gained
mainstream popularity. One fundamental challenge of crowdsourced performance
assessment is the risk that personal interest can introduce distortions of
facts, especially when the system is used to determine merit pay or promotion.
In this paper, we developed a method to identify bias and strategic behavior in
crowdsourced performance assessment, using a rich dataset collected from a
professional service firm in China. We find a pattern of ""discriminatory
generosity"" on the part of peer evaluation, where raters downgrade their peer
coworkers who have passed objective promotion requirements while overrating
their peer coworkers who have not yet passed. This introduces two types of
biases: the first aimed against more competent competitors, and the other
favoring less eligible peers, which can serve as a mask for the first bias. This
paper also aims to bring a fairness-aware data mining perspective to talent and
management computing. Historical decision records, such as performance ratings,
often contain subjective judgment which is prone to bias and strategic
behavior. For practitioners of predictive talent analytics, it is important to
investigate potential bias and strategic behavior underlying historical
decision records.",Discovery of Bias and Strategic Behavior in Crowdsourced Performance Assessment,2019-08-05 19:51:09,"Yifei Huang, Matt Shum, Xi Wu, Jason Zezhong Xiao","http://arxiv.org/abs/1908.01718v2, http://arxiv.org/pdf/1908.01718v2",cs.LG
31051,em,"Data in the form of networks are increasingly available in a variety of
areas, yet statistical models allowing for parameter estimates with desirable
statistical properties for sparse networks remain scarce. To address this, we
propose the Sparse $\beta$-Model (S$\beta$M), a new network model that
interpolates the celebrated Erd\H{o}s-R\'enyi model and the $\beta$-model that
assigns one different parameter to each node. By a novel reparameterization of
the $\beta$-model to distinguish global and local parameters, our S$\beta$M can
drastically reduce the dimensionality of the $\beta$-model by requiring some of
the local parameters to be zero. We derive the asymptotic distribution of the
maximum likelihood estimator of the S$\beta$M when the support of the parameter
vector is known. When the support is unknown, we formulate a penalized
likelihood approach with the $\ell_0$-penalty. Remarkably, we show via a
monotonicity lemma that the seemingly combinatorial computational problem due
to the $\ell_0$-penalty can be overcome by assigning nonzero parameters to
those nodes with the largest degrees. We further show that a $\beta$-min
condition guarantees our method to identify the true model and provide excess
risk bounds for the estimated parameters. The estimation procedure enjoys good
finite sample properties as shown by simulation studies. The usefulness of the
S$\beta$M is further illustrated via the analysis of a microfinance take-up
example.",Analysis of Networks via the Sparse $β$-Model,2019-08-08 19:25:32,"Mingli Chen, Kengo Kato, Chenlei Leng","http://arxiv.org/abs/1908.03152v3, http://arxiv.org/pdf/1908.03152v3",math.ST
31053,em,"We introduce new forecast encompassing tests for the risk measure Expected
Shortfall (ES). The ES currently receives much attention through its
introduction into the Basel III Accords, which stipulate its use as the primary
market risk measure for international banking regulation. We utilize joint
loss functions for the pair ES and Value at Risk to set up three ES
encompassing test variants. The tests are built on misspecification robust
asymptotic theory and we investigate the finite sample properties of the tests
in an extensive simulation study. We use the encompassing tests to illustrate
the potential of forecast combination methods for different financial assets.",Forecast Encompassing Tests for the Expected Shortfall,2019-08-13 13:26:03,"Timo Dimitriadis, Julie Schnaitmann","http://arxiv.org/abs/1908.04569v3, http://arxiv.org/pdf/1908.04569v3",q-fin.RM
31054,em,"The family of rank estimators, including Han's maximum rank correlation (Han,
1987) as a notable example, has been widely exploited in studying regression
problems. For these estimators, although the linear index is introduced for
alleviating the impact of dimensionality, the effect of large dimension on
inference is rarely studied. This paper fills this gap via studying the
statistical properties of a larger family of M-estimators, whose objective
functions are formulated as U-processes and may be discontinuous in increasing
dimension set-up where the number of parameters, $p_{n}$, in the model is
allowed to increase with the sample size, $n$. First, we find that often in
estimation, as $p_{n}/n\rightarrow 0$, $(p_{n}/n)^{1/2}$ rate of convergence is
obtainable. Second, we establish Bahadur-type bounds and study the validity of
normal approximation, which we find often requires a much stronger scaling
requirement than $p_{n}^{2}/n\rightarrow 0.$ Third, we state conditions under
which the numerical derivative estimator of asymptotic covariance matrix is
consistent, and show that the step size in implementing the covariance
estimator has to be adjusted with respect to $p_{n}$. All theoretical results
are further backed up by simulation studies.",On rank estimators in increasing dimensions,2019-08-14 20:35:07,"Yanqin Fan, Fang Han, Wei Li, Xiao-Hua Zhou","http://arxiv.org/abs/1908.05255v1, http://arxiv.org/pdf/1908.05255v1",math.ST
31055,em,"In many applications of network analysis, it is important to distinguish
between observed and unobserved factors affecting network structure. To this
end, we develop spectral estimators for both unobserved blocks and the effect
of covariates in stochastic blockmodels. On the theoretical side, we establish
asymptotic normality of our estimators for the subsequent purpose of performing
inference. On the applied side, we show that computing our estimator is much
faster than standard variational expectation--maximization algorithms and
scales well for large networks. Monte Carlo experiments suggest that the
estimator performs well under different data generating processes. Our
application to Facebook data shows evidence of homophily in gender, role and
campus-residence, while allowing us to discover unobserved communities. The
results in this paper provide a foundation for spectral estimation of the
effect of observed covariates as well as unobserved latent community structure
on the probability of link formation in networks.",Spectral inference for large Stochastic Blockmodels with nodal covariates,2019-08-18 16:03:13,"Angelo Mele, Lingxin Hao, Joshua Cape, Carey E. Priebe","http://arxiv.org/abs/1908.06438v2, http://arxiv.org/pdf/1908.06438v2",stat.ME
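As background for this kind of estimator, the sketch below shows a generic adjacency spectral embedding followed by k-means on a toy two-block stochastic blockmodel. It is not the paper's covariate-adjusted estimator, which additionally separates the effect of observed covariates from the latent communities; the block sizes, connection probabilities, and embedding scaling are illustrative.

```python
# Generic spectral-clustering sketch for recovering latent blocks from an
# adjacency matrix (background illustration only; the paper's estimator also
# handles nodal covariates).
import numpy as np
from sklearn.cluster import KMeans

def spectral_blocks(A, k):
    vals, vecs = np.linalg.eigh(A.astype(float))
    idx = np.argsort(np.abs(vals))[-k:]              # k leading eigenpairs by magnitude
    embedding = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)

# Toy two-block stochastic blockmodel.
rng = np.random.default_rng(0)
N, z = 200, np.repeat([0, 1], 100)
P = np.where(z[:, None] == z[None, :], 0.10, 0.02)   # within/between link probabilities
A = (rng.uniform(size=(N, N)) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                       # symmetrize, no self-loops
print(np.bincount(spectral_blocks(A, 2)))            # estimated block sizes
```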
31056,em,"This survey reviews recent developments in revealed preference theory. It
discusses the testable implications of theories of choice that are germane to
specific economic environments. The focus is on expected utility in risky
environments; subjective expected utility and maxmin expected utility in the
presence of uncertainty; and exponentially discounted utility for intertemporal
choice. The testable implications of these theories for data on choice from
classical linear budget sets are described, and shown to follow a common
thread. The theories all imply an inverse relation between prices and
quantities, with different qualifications depending on the functional forms in
the theory under consideration.","New developments in revealed preference theory: decisions under risk, uncertainty, and intertemporal choice",2019-08-20 21:36:04,Federico Echenique,"http://dx.doi.org/10.1146/annurev-economics-082019-110800, http://arxiv.org/abs/1908.07561v2, http://arxiv.org/pdf/1908.07561v2",econ.TH
31057,em,"We propose a factor state-space approach with stochastic volatility to model
and forecast the term structure of future contracts on commodities. Our
approach builds upon the dynamic 3-factor Nelson-Siegel model and its 4-factor
Svensson extension and assumes for the latent level, slope and curvature
factors a Gaussian vector autoregression with a multivariate Wishart stochastic
volatility process. Exploiting the conjugacy of the Wishart and the Gaussian
distribution, we develop a computationally fast and easy to implement MCMC
algorithm for the Bayesian posterior analysis. An empirical application to
daily prices for contracts on crude oil with stipulated delivery dates ranging
from one to 24 months ahead shows that the estimated 4-factor Svensson model
with two curvature factors provides a good parsimonious representation of the
serial correlation in the individual prices and their volatility. It also shows
that this model has a good out-of-sample forecast performance.",Analyzing Commodity Futures Using Factor State-Space Models with Wishart Stochastic Volatility,2019-08-21 14:15:28,"Tore Selland Kleppe, Roman Liesenfeld, Guilherme Valle Moura, Atle Oglend","http://arxiv.org/abs/1908.07798v1, http://arxiv.org/pdf/1908.07798v1",stat.CO
31058,em,"Real-time bidding (RTB) systems, which utilize auctions to allocate user
impressions to competing advertisers, continue to enjoy success in digital
advertising. Assessing the effectiveness of such advertising remains a
challenge in research and practice. This paper proposes a new approach to
perform causal inference on advertising bought through such mechanisms.
Leveraging the economic structure of first- and second-price auctions, we first
show that the effects of advertising are identified by the optimal bids. Hence,
since these optimal bids are the only objects that need to be recovered, we
introduce an adapted Thompson sampling (TS) algorithm to solve a multi-armed
bandit problem that succeeds in recovering such bids and, consequently, the
effects of advertising while minimizing the costs of experimentation. We derive
a regret bound for our algorithm which is order optimal and use data from RTB
auctions to show that it outperforms commonly used methods that estimate the
effects of advertising.",Online Causal Inference for Advertising in Real-Time Bidding Auctions,2019-08-23 00:13:03,"Caio Waisman, Harikesh S. Nair, Carlos Carrion","http://arxiv.org/abs/1908.08600v3, http://arxiv.org/pdf/1908.08600v3",cs.LG
31059,em,"Several studies of the Job Corps tend to nd more positive earnings effects
for males than for females. This effect heterogeneity favouring males contrasts
with the results of the majority of other training programmes' evaluations.
Applying the translated quantile approach of Bitler, Hoynes, and Domina (2014),
I investigate a potential mechanism behind the surprising findings for the Job
Corps. My results provide suggestive evidence that the effect heterogeneity
by gender operates through existing gender earnings inequality rather than Job
Corps trainability differences.",Heterogeneous Earnings Effects of the Job Corps by Gender Earnings: A Translated Quantile Approach,2019-08-23 11:55:51,Anthony Strittmatter,"http://dx.doi.org/10.1016/j.labeco.2019.101760, http://arxiv.org/abs/1908.08721v1, http://arxiv.org/pdf/1908.08721v1",econ.EM
31060,em,"This paper gives a consistent, asymptotically normal estimator of the
expected value function when the state space is high-dimensional and the
first-stage nuisance functions are estimated by modern machine learning tools.
First, we show that value function is orthogonal to the conditional choice
probability, therefore, this nuisance function needs to be estimated only at
$n^{-1/4}$ rate. Second, we give a correction term for the transition density
of the state variable. The resulting orthogonal moment is robust to
misspecification of the transition density and does not require this nuisance
function to be consistently estimated. Third, we generalize this result by
considering the weighted expected value. In this case, the orthogonal moment is
doubly robust in the transition density and additional second-stage nuisance
functions entering the correction term. We complete the asymptotic theory by
providing bounds on second-order asymptotic terms.",Inference on weighted average value function in high-dimensional state space,2019-08-24 20:34:40,"Victor Chernozhukov, Whitney Newey, Vira Semenova","http://arxiv.org/abs/1908.09173v1, http://arxiv.org/pdf/1908.09173v1",stat.ML
31061,em,"We provide general formulation of weak identification in semiparametric
models and an efficiency concept. Weak identification occurs when a parameter
is weakly regular, i.e., when it is locally homogeneous of degree zero. When
this happens, consistent or equivariant estimation is shown to be impossible.
We then show that there exists an underlying regular parameter that fully
characterizes the weakly regular parameter. While this parameter is not unique,
concepts of sufficiency and minimality help pin down a desirable one. If
estimation of minimal sufficient underlying parameters is inefficient, it
introduces noise in the corresponding estimation of weakly regular parameters,
whence we can improve the estimators by local asymptotic Rao-Blackwellization.
We call an estimator weakly efficient if it does not admit such improvement.
New weakly efficient estimators are presented in linear IV and nonlinear
regression models. Simulation of a linear IV model demonstrates how 2SLS and
optimal IV estimators are improved.",Theory of Weak Identification in Semiparametric Models,2019-08-28 00:53:03,Tetsuya Kaji,"http://dx.doi.org/10.3982/ECTA16413, http://arxiv.org/abs/1908.10478v3, http://arxiv.org/pdf/1908.10478v3",econ.EM
31062,em,"In this work we use Recurrent Neural Networks and Multilayer Perceptrons to
predict NYSE, NASDAQ and AMEX stock prices from historical data. We experiment
with different architectures and compare data normalization techniques. Then,
we leverage those findings to question the efficient-market hypothesis through
a formal statistical test.",Stock Price Forecasting and Hypothesis Testing Using Neural Networks,2019-08-28 06:10:30,Kerda Varaku,"http://arxiv.org/abs/1908.11212v1, http://arxiv.org/pdf/1908.11212v1",q-fin.ST
31063,em,"This paper introduces the concept of travel behavior embeddings, a method for
re-representing discrete variables that are typically used in travel demand
modeling, such as mode, trip purpose, education level, family type or
occupation. This re-representation process essentially maps those variables
into a latent space called the \emph{embedding space}. The benefit of this is
that such spaces allow for richer nuances than the typical transformations used
in categorical variables (e.g. dummy encoding, contrasted encoding, principal
components analysis). While the usage of latent variable representations is not
new per se in travel demand modeling, the idea presented here brings several
innovations: it is an entirely data driven algorithm; it is informative and
consistent, since the latent space can be visualized and interpreted based on
distances between different categories; it preserves interpretability of
coefficients, despite being based on Neural Network principles; and it is
transferable, in that embeddings learned from one dataset can be reused for
other ones, as long as travel behavior remains consistent between the datasets.
  The idea is strongly inspired by natural language processing techniques,
namely the word2vec algorithm, which is behind recent developments such as
automatic translation and next-word prediction. Our method is demonstrated
using a mode choice model, and shows improvements of up to 60% with respect to
initial likelihood, and up to 20% with respect to the likelihood of the
corresponding traditional model (i.e. using dummy variables) in
out-of-sample evaluation. We provide a new Python package, called PyTre (PYthon
TRavel Embeddings), that others can straightforwardly use to replicate our
results or improve their own models. Our experiments are themselves based on an
open dataset (swissmetro).",Rethinking travel behavior modeling representations through embeddings,2019-08-31 10:05:43,Francisco C. Pereira,"http://arxiv.org/abs/1909.00154v1, http://arxiv.org/pdf/1909.00154v1",econ.EM
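The following is a minimal sketch of the embedding idea described above, assuming PyTorch and hypothetical sizes for the categorical variable and number of modes; it is only a generic embedding layer feeding a small utility head, not the PyTre package.

```python
import torch
import torch.nn as nn

# Sketch: map a categorical travel variable (e.g., trip purpose) into a latent
# embedding space and combine it with a numeric feature to score travel modes.
n_categories, emb_dim, n_modes = 10, 3, 4   # hypothetical sizes

class EmbeddingChoiceModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Each category is mapped to a dense vector in the embedding space.
        self.purpose_emb = nn.Embedding(n_categories, emb_dim)
        self.head = nn.Linear(emb_dim + 1, n_modes)  # +1 for travel time

    def forward(self, purpose_id, travel_time):
        z = self.purpose_emb(purpose_id)                       # latent representation
        x = torch.cat([z, travel_time.unsqueeze(-1)], dim=-1)
        return self.head(x)                                    # utilities over modes

model = EmbeddingChoiceModel()
logits = model(torch.tensor([2, 7]), torch.tensor([15.0, 40.0]))
print(logits.shape)  # (2, n_modes)
```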
31064,em,"Where do firms innovate? Mapping their locations and directions in
technological space is challenging due to its high dimensionality. We propose a
new method to characterize firms' inventive activities via topological data
analysis (TDA) that represents high-dimensional data in a shape graph. Applying
this method to 333 major firms' patents in 1976--2005 reveals substantial
heterogeneity: some firms remain undifferentiated; others develop unique
portfolios. Firms with unique trajectories, which we define and measure
graph-theoretically as ""flares"" in the Mapper graph, perform better. This
association is statistically and economically significant, and continues to
hold after we control for portfolio size, firm survivorship, industry
classification, and firm fixed effects. By contrast, existing techniques --
such as principal component analysis (PCA) and Jaffe's (1989) clustering method
-- struggle to track these firm-level dynamics.",Mapping Firms' Locations in Technological Space: A Topological Analysis of Patent Statistics,2019-08-31 21:20:25,"Emerson G. Escolar, Yasuaki Hiraoka, Mitsuru Igami, Yasin Ozcan","http://arxiv.org/abs/1909.00257v7, http://arxiv.org/pdf/1909.00257v7",econ.EM
31065,em,"The uncertainties in future Bitcoin price make it difficult to accurately
predict the price of Bitcoin. Accurately predicting the price for Bitcoin is
therefore important for the decision-making process of investors and market players
in the cryptocurrency market. Using historical data from 01/01/2012 to
16/08/2019, machine learning techniques (Generalized linear model via penalized
maximum likelihood, random forest, support vector regression with linear
kernel, and stacking ensemble) were used to forecast the price of Bitcoin. The
prediction models employed key and high dimensional technical indicators as the
predictors. The performance of these techniques was evaluated using mean
absolute percentage error (MAPE), root mean square error (RMSE), mean absolute
error (MAE), and coefficient of determination (R-squared). The performance
metrics revealed that the stacking ensemble model with two base learners (random
forest and generalized linear model via penalized maximum likelihood) and
support vector regression with linear kernel as meta-learner was the optimal
model for forecasting Bitcoin price. The MAPE, RMSE, MAE, and R-squared values
for the stacking ensemble model were 0.0191%, 15.5331 USD, 124.5508 USD, and
0.9967 respectively. These values show a high degree of reliability in
predicting the price of Bitcoin using the stacking ensemble model. Accurately
predicting the future price of Bitcoin will yield significant returns for
investors and market players in the cryptocurrency market.",Are Bitcoins price predictable? Evidence from machine learning techniques using technical indicators,2019-09-03 19:03:13,Samuel Asante Gyamerah,"http://arxiv.org/abs/1909.01268v1, http://arxiv.org/pdf/1909.01268v1",q-fin.ST
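A hedged sketch of a stacking ensemble in the spirit of the abstract above: a random forest and a penalized linear model as base learners with a linear-kernel SVR as meta-learner. The feature matrix below is a placeholder, not the technical indicators or Bitcoin prices used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))          # stand-in for technical indicators
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=500)

# Base learners feed out-of-fold predictions to the meta-learner (linear SVR).
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("glm", ElasticNet(alpha=0.1)),     # penalized linear model
    ],
    final_estimator=SVR(kernel="linear"),
)
stack.fit(X, y)
print("in-sample R^2:", round(stack.score(X, y), 3))
```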
31067,em,"Opioid overdose rates have reached an epidemic level and state-level policy
innovations have followed suit in an effort to prevent overdose deaths.
State-level drug law is a set of policies that may reinforce or undermine each
other, and analysts have a limited set of tools for handling the policy
collinearity using statistical methods. This paper uses a machine learning
method called hierarchical clustering to empirically generate ""policy bundles""
by grouping states with similar sets of policies in force at a given time
together for analysis in a 50-state, 10-year interrupted time series regression
with drug overdose deaths as the dependent variable. Policy clusters were
generated from 138 binomial variables observed by state and year from the
Prescription Drug Abuse Policy System. Clustering reduced the policies to a set
of 10 bundles. The approach allows for ranking of the relative effect of
different bundles and is a tool to recommend those most likely to succeed. This
study shows that a set of policies balancing Medication Assisted Treatment,
Naloxone Access, Good Samaritan Laws, Prescription Drug Monitoring Programs,
and legalization of medical marijuana
leads to a reduced number of overdose deaths, but not until its second year in
force.",State Drug Policy Effectiveness: Comparative Policy Analysis of Drug Overdose Mortality,2019-09-03 19:41:44,"Jarrod Olson, Po-Hsu Allen Chen, Marissa White, Nicole Brennan, Ning Gong","http://arxiv.org/abs/1909.01936v3, http://arxiv.org/pdf/1909.01936v3",stat.AP
31068,em,"In order to estimate the conditional risk of a portfolio's return, two
strategies can be advocated. A multivariate strategy requires estimating a
dynamic model for the vector of risk factors, which is often challenging, when
at all possible, for large portfolios. A univariate approach based on a dynamic
model for the portfolio's return seems more attractive. However, when the
combination of the individual returns is time varying, the portfolio's return
series is typically nonstationary, which may invalidate statistical inference.
An alternative approach consists in reconstituting a ""virtual portfolio"", whose
returns are built using the current composition of the portfolio and for which
a stationary dynamic model can be estimated.
  This paper establishes the asymptotic properties of this method, that we call
Virtual Historical Simulation. Numerical illustrations on simulated and real
data are provided.",Virtual Historical Simulation for estimating the conditional VaR of large portfolios,2019-09-10 19:01:19,"Christian Francq, Jean-Michel Zakoian","http://arxiv.org/abs/1909.04661v1, http://arxiv.org/pdf/1909.04661v1",econ.EM
31069,em,"Volatility estimation based on high-frequency data is key to accurately
measure and control the risk of financial assets. A L\'{e}vy process with
infinite jump activity and microstructure noise is considered one of the
simplest, yet accurate enough, models for financial data at high-frequency.
Utilizing this model, we propose a ""purposely misspecified"" posterior of the
volatility obtained by ignoring the jump-component of the process. The
misspecified posterior is further corrected by a simple estimate of the
location shift and re-scaling of the log likelihood. Our main result
establishes a Bernstein-von Mises (BvM) theorem, which states that the proposed
adjusted posterior is asymptotically Gaussian, centered at a consistent
estimator, and with variance equal to the inverse of the Fisher information. In
the absence of microstructure noise, our approach can be extended to inferences
of the integrated variance of a general It\^o semimartingale. Simulations are
provided to demonstrate the accuracy of the resulting credible intervals, and
the frequentist properties of the approximate Bayesian inference based on the
adjusted posterior.",Bayesian Inference on Volatility in the Presence of Infinite Jump Activity and Microstructure Noise,2019-09-11 08:06:47,"Qi Wang, José E. Figueroa-López, Todd Kuffner","http://arxiv.org/abs/1909.04853v1, http://arxiv.org/pdf/1909.04853v1",math.ST
31070,em,"We propose a novel approach for causal mediation analysis based on
changes-in-changes assumptions restricting unobserved heterogeneity over time.
This allows disentangling the causal effect of a binary treatment on a
continuous outcome into an indirect effect operating through a binary
intermediate variable (called mediator) and a direct effect running via other
causal mechanisms. We identify average and quantile direct and indirect effects
for various subgroups under the condition that the outcome is monotonic in the
unobserved heterogeneity and that the distribution of the latter does not
change over time conditional on the treatment and the mediator. We also provide
a simulation study and an empirical application to the Jobs II programme.",Direct and Indirect Effects based on Changes-in-Changes,2019-09-11 14:32:54,"Martin Huber, Mark Schelker, Anthony Strittmatter","http://dx.doi.org/10.1080/07350015.2020.1831929, http://arxiv.org/abs/1909.04981v3, http://arxiv.org/pdf/1909.04981v3",econ.EM
31071,em,"The Efficient Market Hypothesis has been a staple of economics research for
decades. In particular, weak-form market efficiency -- the notion that past
prices cannot predict future performance -- is strongly supported by
econometric evidence. In contrast, machine learning algorithms implemented to
predict stock price have been touted, to varying degrees, as successful.
Moreover, some data scientists boast the ability to garner above-market returns
using price data alone. This study endeavors to connect existing econometric
research on weak-form efficient markets with data science innovations in
algorithmic trading. First, a traditional exploration of stationarity in stock
index prices over the past decade is conducted with Augmented Dickey-Fuller and
Variance Ratio tests. Then, an algorithmic trading platform is implemented with
the use of five machine learning algorithms. Econometric findings identify
potential stationarity, hinting that technical evaluation may be possible, though
algorithmic trading results find little predictive power in any machine
learning model, even when using trend-specific metrics. Accounting for
transaction costs and risk, no system achieved above-market returns
consistently. Our findings reinforce the validity of weak-form market
efficiency.",Validating Weak-form Market Efficiency in United States Stock Markets with Trend Deterministic Price Data and Machine Learning,2019-09-11 18:39:49,"Samuel Showalter, Jeffrey Gropp","http://arxiv.org/abs/1909.05151v1, http://arxiv.org/pdf/1909.05151v1",q-fin.ST
31078,em,"The paper introduces an approach to telematics devices data application in
automotive insurance. We conduct a comparative analysis of different types of
devices that collect information on vehicle utilization and driving style of
its driver, describe advantages and disadvantages of these devices and indicate
the most efficient from the insurer's point of view. The possible formats of
telematics data are described and methods of their processing to a format
convenient for modelling are proposed. We also introduce an approach to
classify the accidents strength. Using all the available information, we
estimate accident probability models for different types of accidents and
identify an optimal set of factors for each of the models. We assess the
quality of resulting models using both in-sample and out-of-sample estimates.",Usage-Based Vehicle Insurance: Driving Style Factors of Accident Probability and Severity,2019-10-01 17:45:50,"Konstantin Korishchenko, Ivan Stankevich, Nikolay Pilnik, Daria Petrova","http://arxiv.org/abs/1910.00460v2, http://arxiv.org/pdf/1910.00460v2",stat.AP
31072,em,"The widespread use of quantile regression methods depends crucially on the
existence of fast algorithms. Despite numerous algorithmic improvements, the
computation time is still non-negligible because researchers often estimate
many quantile regressions and use the bootstrap for inference. We suggest two
new fast algorithms for the estimation of a sequence of quantile regressions at
many quantile indexes. The first algorithm applies the preprocessing idea of
Portnoy and Koenker (1997) but exploits a previously estimated quantile
regression to guess the sign of the residuals. This step allows for a reduction
of the effective sample size. The second algorithm starts from a previously
estimated quantile regression at a similar quantile index and updates it using
a single Newton-Raphson iteration. The first algorithm is exact, while the
second is only asymptotically equivalent to the traditional quantile regression
estimator. We also apply the preprocessing idea to the bootstrap by using the
sample estimates to guess the sign of the residuals in the bootstrap sample.
Simulations show that our new algorithms provide very large improvements in
computation time without significant (if any) cost in the quality of the
estimates. For instance, we divide by 100 the time required to estimate 99
quantile regressions with 20 regressors and 50,000 observations.",Fast Algorithms for the Quantile Regression Process,2019-09-12 19:34:37,"Victor Chernozhukov, Iván Fernández-Val, Blaise Melly","http://arxiv.org/abs/1909.05782v2, http://arxiv.org/pdf/1909.05782v2",econ.EM
31073,em,"This paper studies the forecasting ability of cryptocurrency time series.
This study is about the four most capitalized cryptocurrencies: Bitcoin,
Ethereum, Litecoin and Ripple. Different Bayesian models are compared,
including models with constant and time-varying volatility, such as stochastic
volatility and GARCH. Moreover, some crypto-predictors are included in the
analysis, such as the S&P 500 and Nikkei 225. The results show that stochastic
volatility significantly outperforms the VAR benchmark in both point and
density forecasting. Regarding the error distribution of the stochastic
volatility model, the Student-t specification outperforms the standard normal
approach.",Comparing the forecasting of cryptocurrencies by Bayesian time-varying volatility models,2019-09-14 17:01:09,"Rick Bohte, Luca Rossini","http://arxiv.org/abs/1909.06599v1, http://arxiv.org/pdf/1909.06599v1",econ.EM
31074,em,"We use official data for all 16 federal German states to study the causal
effect of a flat 1000 Euro state-dependent university tuition fee on the
enrollment behavior of students during the years 2006-2014. In particular, we
show how the variation in the introduction scheme across states and time can
be exploited to identify the federal average causal effect of tuition fees by
controlling for a large number of potentially influencing attributes for state
heterogeneity. We suggest a stability post-double selection methodology to
robustly determine the causal effect across types in the transparently modeled
unknown response components. The proposed stability resampling scheme in the
two LASSO selection steps efficiently mitigates the risk of model
underspecification and thus biased effects when the tuition fee policy decision
also depends on relevant variables for the state enrollment rates. Correct
inference for the full cross-section state population in the sample requires
adequate design -- rather than sampling-based standard errors. With the
data-driven model selection and explicit control for spatial cross-effects we
detect that tuition fees induce substantial migration effects where the
mobility occurs from both fee and non-fee states, suggesting a general
movement towards quality. Overall, we find a significant negative impact of
up to 4.5 percentage points of fees on student enrollment. This is in contrast
to plain one-step LASSO or previous empirical studies with full fixed effects
linear panel regressions, which generally underestimate the size and find only an
insignificant effect.",How have German University Tuition Fees Affected Enrollment Rates: Robust Model Selection and Design-based Inference in High-Dimensions,2019-09-18 12:12:54,"Konstantin Görgen, Melanie Schienle","http://arxiv.org/abs/1909.08299v2, http://arxiv.org/pdf/1909.08299v2",stat.AP
31075,em,"We study identification and estimation of causal effects in settings with
panel data. Traditionally researchers follow model-based identification
strategies relying on assumptions governing the relation between the potential
outcomes and the observed and unobserved confounders. We focus on a different,
complementary approach to identification where assumptions are made about the
connection between the treatment assignment and the unobserved confounders.
Such strategies are common in cross-section settings but rarely used with panel
data. We introduce different sets of assumptions that follow the two paths to
identification and develop a doubly robust approach. We propose estimation
methods that build on these identification strategies.",Doubly Robust Identification for Causal Panel Data Models,2019-09-20 13:25:23,"Dmitry Arkhangelsky, Guido W. Imbens","http://arxiv.org/abs/1909.09412v3, http://arxiv.org/pdf/1909.09412v3",econ.EM
31076,em,Structural Change Analysis of Active Cryptocurrency Market,Structural Change Analysis of Active Cryptocurrency Market,2019-09-24 05:02:24,"C. Y. Tan, Y. B. Koh, K. H. Ng, K. H. Ng","http://arxiv.org/abs/1909.10679v1, http://arxiv.org/pdf/1909.10679v1",q-fin.ST
31077,em,"Allocating multiple scarce items across a set of individuals is an important
practical problem. In the case of divisible goods and additive preferences a
convex program can be used to find the solution that maximizes Nash welfare
(MNW). The MNW solution is equivalent to finding the equilibrium of a market
economy (aka. the competitive equilibrium from equal incomes, CEEI) and thus
has good properties such as Pareto optimality, envy-freeness, and incentive
compatibility in the large. Unfortunately, this equivalence (and nice
properties) breaks down for general preference classes. Motivated by real world
problems such as course allocation and recommender systems we study the case of
additive `at most one' (AMO) preferences - individuals want at most 1 of each
item and lotteries are allowed. We show that in this case the MNW solution is
still a convex program and importantly is a CEEI solution when the instance
gets large but has a `low rank' structure. Thus a polynomial time algorithm can
be used to scale CEEI (which is in general PPAD-hard) for AMO preferences. We
examine whether the properties guaranteed in the limit hold approximately in
finite samples using several real datasets.",Scalable Fair Division for 'At Most One' Preferences,2019-09-24 16:50:06,"Christian Kroer, Alexander Peysakhovich","http://arxiv.org/abs/1909.10925v1, http://arxiv.org/pdf/1909.10925v1",cs.GT
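The benchmark case described above (divisible goods, additive preferences) has a simple convex-program form. The sketch below, assuming cvxpy and a hypothetical valuation matrix, maximizes the log of Nash welfare; the AMO extension studied in the paper is not shown.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_items = 4, 6
V = rng.uniform(0.1, 1.0, size=(n_agents, n_items))   # hypothetical valuations

X = cp.Variable((n_agents, n_items), nonneg=True)      # fractional allocation
utilities = cp.sum(cp.multiply(V, X), axis=1)           # additive utilities
objective = cp.Maximize(cp.sum(cp.log(utilities)))      # Nash welfare in log form
constraints = [cp.sum(X, axis=0) <= 1]                  # each item allocated at most once
cp.Problem(objective, constraints).solve()
print(np.round(X.value, 3))
```

Maximizing the sum of log utilities is equivalent to maximizing the product of utilities, which is why this convex program recovers the MNW (and hence CEEI) allocation in the divisible, additive case.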
31463,gn,"The Total Economic Time Capacity of a Year 525600 minutes is postulated as a
time standard for a new Monetary Minute currency in this evaluation study.
Consequently, the Monetary Minute MonMin is defined as a 1/525600 part of the
Total Economic Time Capacity of a Year. The Value CMonMin of the Monetary
Minute MonMin is equal to a 1/525600 part of the GDP, p.c., expressed in a
specific state currency C. There is described how the Monetary Minutes MonMin
are determined, and how their values CMonMin are calculated based on the GDP
and all the population in specific economies. The Monetary Minutes trace
different aggregate productivity, i.e. exploitation of the total time capacity
of a year for generating of the GDP in economies of different states.",Currency Based on Time Standard,2019-10-17 15:40:01,Tomas Kala,"http://arxiv.org/abs/1910.07859v1, http://arxiv.org/pdf/1910.07859v1",econ.GN
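A tiny worked example of the arithmetic defined above, using a purely hypothetical GDP per capita figure rather than any number from the paper:

```python
# Value of one Monetary Minute = GDP per capita / 525,600.
MINUTES_PER_YEAR = 365 * 24 * 60        # 525,600 minutes
gdp_per_capita = 42_000.0               # hypothetical, in some state currency C

value_monmin = gdp_per_capita / MINUTES_PER_YEAR
print(f"Value of one Monetary Minute: {value_monmin:.4f} C")  # about 0.0799 C
```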
31079,em,"This chapter covers different approaches to policy evaluation for assessing
the causal effect of a treatment or intervention on an outcome of interest. As
an introduction to causal inference, the discussion starts with the
experimental evaluation of a randomized treatment. It then reviews evaluation
methods based on selection on observables (assuming a quasi-random treatment
given observed covariates), instrumental variables (inducing a quasi-random
shift in the treatment), difference-in-differences and changes-in-changes
(exploiting changes in outcomes over time), as well as regression
discontinuities and kinks (using changes in the treatment assignment at some
threshold of a running variable). The chapter discusses methods particularly
suited for data with many observations for a flexible (i.e. semi- or
nonparametric) modeling of treatment effects, and/or many (i.e. high
dimensional) observed covariates by applying machine learning to select and
control for covariates in a data-driven way. This is not only useful for
tackling confounding by controlling for instance for factors jointly affecting
the treatment and the outcome, but also for learning effect heterogeneities
across subgroups defined upon observable covariates and optimally targeting
those groups for which the treatment is most effective.",An introduction to flexible methods for policy evaluation,2019-10-01 22:59:51,Martin Huber,"http://arxiv.org/abs/1910.00641v1, http://arxiv.org/pdf/1910.00641v1",econ.EM
31080,em,"The availability of charging infrastructure is essential for large-scale
adoption of electric vehicles (EV). Charging patterns and the utilization of
infrastructure not only have consequences for energy demand, loading local
power grids, but also influence the economic returns, parking policies, and
further adoption of EVs. We develop a data-driven approach that exploits
predictors compiled from GIS data describing the urban context and urban
activities near charging infrastructure to explore correlations with a
comprehensive set of indicators measuring the performance of charging
infrastructure. The best fit was identified for the size of the unique group of
visitors (popularity) attracted by the charging infrastructure. Subsequently,
charging infrastructure is ranked by popularity. The question of whether or not
a given charging spot belongs to the top tier is posed as a binary
classification problem, and the predictive performance of logistic regression
regularized with an L1 penalty, random forests, and gradient boosted regression
trees is evaluated. Obtained results indicate that the collected predictors
contain information that can be used to predict the popularity of charging
infrastructure. The significance of predictors and how they are linked with the
popularity are explored as well. The proposed methodology can be used to inform
charging infrastructure deployment strategies.",Predicting popularity of EV charging infrastructure from GIS data,2019-10-06 21:38:40,"Milan Straka, Pasquale De Falco, Gabriella Ferruzzi, Daniela Proto, Gijs van der Poel, Shahab Khormali, Ľuboš Buzna","http://dx.doi.org/10.1109/ACCESS.2020.2965621, http://arxiv.org/abs/1910.02498v1, http://arxiv.org/pdf/1910.02498v1",stat.AP
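A minimal sketch of the binary "top-tier popularity" classification step described above, assuming scikit-learn; the feature matrix stands in for the GIS-derived predictors and is not the study's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 12))                                   # stand-in GIS predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=400) > 0).astype(int)  # top-tier label

models = {
    "L1 logistic": LogisticRegression(penalty="l1", solver="liblinear"),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean CV AUC = {auc:.3f}")
```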
31081,em,"This paper studies Quasi Maximum Likelihood estimation of Dynamic Factor
Models for large panels of time series. Specifically, we consider the case in
which the autocorrelation of the factors is explicitly accounted for, and
therefore the model has a state-space form. Estimation of the factors and their
loadings is implemented through the Expectation Maximization (EM) algorithm,
jointly with the Kalman smoother. We prove that as both the dimension of the
panel $n$ and the sample size $T$ diverge to infinity, up to logarithmic terms:
(i) the estimated loadings are $\sqrt T$-consistent and asymptotically normal
if $\sqrt T/n\to 0$; (ii) the estimated factors are $\sqrt n$-consistent and
asymptotically normal if $\sqrt n/T\to 0$; (iii) the estimated common component
is $\min(\sqrt n,\sqrt T)$-consistent and asymptotically normal regardless of
the relative rate of divergence of $n$ and $T$. Although the model is estimated
as if the idiosyncratic terms were cross-sectionally and serially uncorrelated
and normally distributed, we show that these mis-specifications do not affect
consistency. Moreover, the estimated loadings are asymptotically as efficient
as those obtained with the Principal Components estimator, while the estimated
factors are more efficient if the idiosyncratic covariance is sparse enough. We
then propose robust estimators of the asymptotic covariances, which can be used
to conduct inference on the loadings and to compute confidence intervals for
the factors and common components. Finally, we study the performance of our
estimators and we compare them with the traditional Principal Components
approach through Monte Carlo simulations and analysis of US macroeconomic data.",Quasi Maximum Likelihood Estimation and Inference of Large Approximate Dynamic Factor Models via the EM algorithm,2019-10-09 10:31:59,"Matteo Barigozzi, Matteo Luciani","http://arxiv.org/abs/1910.03821v4, http://arxiv.org/pdf/1910.03821v4",math.ST
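As an off-the-shelf illustration of EM estimation of a state-space dynamic factor model with Kalman smoothing, the sketch below uses statsmodels' DynamicFactorMQ on simulated one-factor data; it is not the authors' own implementation or asymptotic analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
T, n = 200, 10
factor = np.zeros(T)
for t in range(1, T):                       # AR(1) common factor
    factor[t] = 0.7 * factor[t - 1] + rng.normal()
loadings = rng.normal(size=n)
panel = pd.DataFrame(np.outer(factor, loadings) + rng.normal(size=(T, n)))

# DynamicFactorMQ estimates the state-space model by EM with the Kalman smoother.
model = sm.tsa.DynamicFactorMQ(panel, factors=1, factor_orders=1)
result = model.fit()
print(f"log-likelihood at convergence: {result.llf:.1f}")
```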
31082,em,"This study develops a framework for testing hypotheses on structural
parameters in incomplete models. Such models make set-valued predictions and
hence do not generally yield a unique likelihood function. The model structure,
however, allows us to construct tests based on the least favorable pairs of
likelihoods using the theory of Huber and Strassen (1973). We develop tests
robust to model incompleteness that possess certain optimality properties. We
also show that sharp identifying restrictions play a role in constructing such
tests in a computationally tractable manner. A framework for analyzing the
local asymptotic power of the tests is developed by embedding the least
favorable pairs into a model that allows local approximations under the limits
of experiments argument. Examples of the hypotheses we consider include those
on the presence of strategic interaction effects in discrete games of complete
information. Monte Carlo experiments demonstrate the robust performance of the
proposed tests.",Robust Likelihood Ratio Tests for Incomplete Economic Models,2019-10-10 17:41:10,"Hiroaki Kaido, Yi Zhang","http://arxiv.org/abs/1910.04610v2, http://arxiv.org/pdf/1910.04610v2",econ.EM
31083,em,"In recent years, probabilistic forecasting is an emerging topic, which is why
there is a growing need of suitable methods for the evaluation of multivariate
predictions. We analyze the sensitivity of the most common scoring rules,
especially regarding the quality of the forecasted dependency structures.
Additionally, we propose scoring rules based on the copula, which uniquely
describes the dependency structure for every probability distribution with
continuous marginal distributions. Efficient estimation of the considered
scoring rules and evaluation methods such as the Diebold-Mariano test are
discussed. In detailed simulation studies, we compare the performance of the
renowned scoring rules and the ones we propose. Besides extended synthetic
studies based on recently published results we also consider a real data
example. We find that the energy score, which is probably the most widely used
multivariate scoring rule, performs comparably well in detecting forecast
errors, also regarding dependencies. This contradicts other studies. The
results also show that a proposed copula score provides very strong distinction
between models with correct and incorrect dependency structure. We close with a
comprehensive discussion on the proposed methodology.",Multivariate Forecasting Evaluation: On Sensitive and Strictly Proper Scoring Rules,2019-10-16 15:57:00,"Florian Ziel, Kevin Berk","http://arxiv.org/abs/1910.07325v1, http://arxiv.org/pdf/1910.07325v1",stat.ME
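Since the energy score is the central benchmark in the abstract above, here is a minimal sample-based implementation for an ensemble forecast; the ensemble and observation below are simulated placeholders.

```python
import numpy as np

def energy_score(ensemble: np.ndarray, observation: np.ndarray) -> float:
    """Sample-based energy score: E||X - y|| - 0.5 * E||X - X'|| (lower is better).

    ensemble: (M, d) array of forecast draws; observation: (d,) realized outcome.
    """
    term1 = np.linalg.norm(ensemble - observation, axis=1).mean()
    diffs = ensemble[:, None, :] - ensemble[None, :, :]
    term2 = 0.5 * np.linalg.norm(diffs, axis=2).mean()
    return term1 - term2

rng = np.random.default_rng(6)
draws = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=1000)
print(energy_score(draws, np.array([0.2, -0.1])))
```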
31084,em,"This paper develops asymptotic theory of integrals of empirical quantile
functions with respect to random weight functions, which is an extension of
classical $L$-statistics. They appear when sample trimming or Winsorization is
applied to asymptotically linear estimators. The key idea is to consider
empirical processes in the spaces appropriate for integration. First, we
characterize weak convergence of empirical distribution functions and random
weight functions in the space of bounded integrable functions. Second, we
establish the delta method for empirical quantile functions as integrable
functions. Third, we derive the delta method for $L$-statistics. Finally, we
prove weak convergence of their bootstrap processes, showing validity of
nonparametric bootstrap.",Asymptotic Theory of $L$-Statistics and Integrable Empirical Processes,2019-10-16 22:01:51,Tetsuya Kaji,"http://arxiv.org/abs/1910.07572v1, http://arxiv.org/pdf/1910.07572v1",math.ST
31085,em,"This paper develops a uniformly valid and asymptotically nonconservative test
based on projection for a class of shape restrictions. The key insight we
exploit is that these restrictions form convex cones, a simple and yet elegant
structure that has been barely harnessed in the literature. Based on a
monotonicity property afforded by such a geometric structure, we construct a
bootstrap procedure that, unlike many studies in nonstandard settings,
dispenses with estimation of local parameter spaces, and the critical values
are obtained in a way as simple as computing the test statistic. Moreover, by
appealing to strong approximations, our framework accommodates nonparametric
regression models as well as distributional/density-related and structural
settings. Since the test entails a tuning parameter (due to the nonstandard
nature of the problem), we propose a data-driven choice and prove its validity.
Monte Carlo simulations confirm that our test works well.",A Projection Framework for Testing Shape Restrictions That Form Convex Cones,2019-10-17 06:00:42,"Zheng Fang, Juwon Seo","http://dx.doi.org/10.3982/ECTA17764, http://arxiv.org/abs/1910.07689v4, http://arxiv.org/pdf/1910.07689v4",econ.EM
31086,em,"Partial identification approaches are a flexible and robust alternative to
standard point-identification approaches in general instrumental variable
models. However, this flexibility comes at the cost of a ``curse of
cardinality'': the number of restrictions on the identified set grows
exponentially with the number of points in the support of the endogenous
treatment. This article proposes a novel path-sampling approach to this
challenge. It is designed for partially identifying causal effects of interest
in the most complex models with continuous endogenous treatments. A stochastic
process representation allows one to seamlessly incorporate assumptions on
individual behavior into the model. Some potential applications include
dose-response estimation in randomized trials with imperfect compliance, the
evaluation of social programs, welfare estimation in demand models, and
continuous choice models. As a demonstration, the method provides informative
nonparametric bounds on household expenditures under the assumption that
expenditure is continuous. The mathematical contribution is an approach to
approximately solving infinite dimensional linear programs on path spaces via
sampling.",A path-sampling method to partially identify causal effects in instrumental variable models,2019-10-21 19:44:15,Florian Gunsilius,"http://arxiv.org/abs/1910.09502v2, http://arxiv.org/pdf/1910.09502v2",econ.EM
31087,em,"A principal component analysis based on the generalized Gini correlation
index is proposed (Gini PCA). The Gini PCA generalizes the standard PCA based
on the variance. It is shown, in the Gaussian case, that the standard PCA is
equivalent to the Gini PCA. It is also proven that the dimensionality reduction
based on the generalized Gini correlation matrix, that relies on city-block
distances, is robust to outliers. Monte Carlo simulations and an application on
cars data (with outliers) show the robustness of the Gini PCA and provide
different interpretations of the results compared with the variance PCA.",Principal Component Analysis: A Generalized Gini Approach,2019-10-22 20:32:33,"Charpentier, Arthur, Mussard, Stephane, Tea Ouraga","http://arxiv.org/abs/1910.10133v1, http://arxiv.org/pdf/1910.10133v1",stat.ME
31088,em,"In this paper, we consider the problem of learning models with a latent
factor structure. The focus is to find what is possible and what is impossible
if the usual strong factor condition is not imposed. We study the minimax rate
and adaptivity issues in two problems: pure factor models and panel regression
with interactive fixed effects. For pure factor models, if the number of
factors is known, we develop adaptive estimation and inference procedures that
attain the minimax rate. However, when the number of factors is not specified a
priori, we show that there is a tradeoff between validity and efficiency: any
confidence interval that has uniform validity for arbitrary factor strength has
to be conservative; in particular its width is bounded away from zero even when
the factors are strong. Conversely, any data-driven confidence interval that
does not require as an input the exact number of factors (including weak ones)
and has shrinking width under strong factors does not have uniform coverage and
the worst-case coverage probability is at most 1/2. For panel regressions with
interactive fixed effects, the tradeoff is much better. We find that the
minimax rate for learning the regression coefficient does not depend on the
factor strength and propose a simple estimator that achieves this rate.
However, when weak factors are allowed, uncertainty in the number of factors
can cause a great loss of efficiency although the rate is not affected. In most
cases, we find that the strong factor condition (and/or exact knowledge of
number of factors) improves efficiency, but this condition needs to be imposed
by faith and cannot be verified in data for inference purposes.",How well can we learn large factor models without assuming strong factors?,2019-10-23 09:41:53,Yinchu Zhu,"http://arxiv.org/abs/1910.10382v3, http://arxiv.org/pdf/1910.10382v3",math.ST
31089,em,"We present a novel algorithm for non-linear instrumental variable (IV)
regression, DualIV, which simplifies traditional two-stage methods via a dual
formulation. Inspired by problems in stochastic programming, we show that
two-stage procedures for non-linear IV regression can be reformulated as a
convex-concave saddle-point problem. Our formulation enables us to circumvent
the first-stage regression which is a potential bottleneck in real-world
applications. We develop a simple kernel-based algorithm with an analytic
solution based on this formulation. Empirical results show that we are
competitive to existing, more complicated algorithms for non-linear
instrumental variable regression.",Dual Instrumental Variable Regression,2019-10-28 00:36:26,"Krikamol Muandet, Arash Mehrjou, Si Kai Lee, Anant Raj","http://arxiv.org/abs/1910.12358v3, http://arxiv.org/pdf/1910.12358v3",stat.ML
31090,em,"This paper proposes a new estimator for selecting weights to average over
least squares estimates obtained from a set of models. Our proposed estimator
builds on the Mallows model average (MMA) estimator of Hansen (2007), but,
unlike MMA, simultaneously controls for location bias and regression error
through a common constant. We show that our proposed estimator -- the mean-shift
Mallows model average (MSA) estimator -- is asymptotically optimal relative to the
original MMA estimator in terms of mean squared error. A simulation study is
presented, where we show that our proposed estimator uniformly outperforms the
MMA estimator.",Mean-shift least squares model averaging,2019-12-03 08:19:17,"Kenichiro McAlinn, Kosaku Takanashi","http://arxiv.org/abs/1912.01194v1, http://arxiv.org/pdf/1912.01194v1",econ.EM
31091,em,"We propose a generalization of the linear panel quantile regression model to
accommodate both \textit{sparse} and \textit{dense} parts: sparse means that while
the number of covariates available is large, potentially only a much smaller
number of them have a nonzero impact on each conditional quantile of the
response variable; while the dense part is represented by a low-rank matrix that
can be approximated by latent factors and their loadings. Such a structure
poses problems for traditional sparse estimators, such as the
$\ell_1$-penalised Quantile Regression, and for traditional latent factor
estimator, such as PCA. We propose a new estimation procedure, based on the
ADMM algorithm, which combines the quantile loss function with $\ell_1$
\textit{and} nuclear norm regularization. We show, under general conditions,
that our estimator can consistently estimate both the nonzero coefficients of
the covariates and the latent low-rank matrix.
  Our proposed model has a ""Characteristics + Latent Factors"" Asset Pricing
Model interpretation: we apply our model and estimator with a large-dimensional
panel of financial data and find that (i) characteristics have sparser
predictive power once latent factors are controlled for, and (ii) the factors and
coefficients at upper and lower quantiles are different from the median.",High Dimensional Latent Panel Quantile Regression with an Application to Asset Pricing,2019-12-04 21:03:53,"Alexandre Belloni, Mingli Chen, Oscar Hernan Madrid Padilla, Zixuan, Wang","http://arxiv.org/abs/1912.02151v2, http://arxiv.org/pdf/1912.02151v2",econ.EM
31092,em,"We explore the international transmission of monetary policy and central bank
information shocks by the Federal Reserve and the European Central Bank.
Identification of these shocks is achieved by using a combination of
high-frequency market surprises around announcement dates of policy decisions
and sign restrictions. We propose a high-dimensional macroeconometric framework
for modeling aggregate quantities alongside country-specific variables to study
international shock propagation and spillover effects. Our results are in line
with the established literature focusing on individual economies, and moreover
suggest substantial international spillover effects in both directions for
monetary policy and central bank information shocks. In addition, we detect
heterogeneities in the transmission of ECB policy actions to individual member
states.",The international effects of central bank information shocks,2019-12-06 17:43:07,"Michael Pfarrhofer, Anna Stelzer","http://arxiv.org/abs/1912.03158v1, http://arxiv.org/pdf/1912.03158v1",econ.EM
31093,em,"We introduce several new estimation methods that leverage shape constraints
in auction models to estimate various objects of interest, including the
distribution of a bidder's valuations, the bidder's ex ante expected surplus,
and the seller's counterfactual revenue. The basic approach applies broadly in
that (unlike most of the literature) it works for a wide range of auction
formats and allows for asymmetric bidders. Though our approach is not
restrictive, we focus our analysis on first-price, sealed-bid auctions with
independent private valuations. We highlight two nonparametric estimation
strategies, one based on a least squares criterion and the other on a maximum
likelihood criterion. We also provide the first direct estimator of the
strategy function. We establish several theoretical properties of our methods
to guide empirical analysis and inference. In addition to providing the
asymptotic distributions of our estimators, we identify ways in which
methodological choices should be tailored to the objects of interest. For
objects like the bidders' ex ante surplus and the seller's counterfactual
expected revenue with an additional symmetric bidder, we show that our
input-parameter-free estimators achieve the semiparametric efficiency bound.
For objects like the bidders' inverse strategy function, we provide an easily
implementable boundary-corrected kernel smoothing and transformation method in
order to ensure the squared error is integrable over the entire support of the
valuations. An extensive simulation study illustrates our analytical results
and demonstrates the respective advantages of our least-squares and maximum
likelihood estimators in finite samples. Compared to estimation strategies
based on kernel density estimation, the simulations indicate that the smoothed
versions of our estimators enjoy a large degree of robustness to the choice of
an input parameter.",Estimation of Auction Models with Shape Restrictions,2019-12-16 19:01:16,"Joris Pinkse, Karl Schurter","http://arxiv.org/abs/1912.07466v1, http://arxiv.org/pdf/1912.07466v1",econ.EM
31094,em,"There has been considerable advance in understanding the properties of sparse
regularization procedures in high-dimensional models. In the time series context,
it is mostly restricted to Gaussian autoregressions or mixing sequences. We
study oracle properties of LASSO estimation of weakly sparse
vector-autoregressive models with heavy tailed, weakly dependent innovations
with virtually no assumption on the conditional heteroskedasticity. In contrast
to the current literature, our innovation process satisfies an $L^1$ mixingale type
condition on the centered conditional covariance matrices. This condition
covers $L^1$-NED sequences and strong ($\alpha$-) mixing sequences as
particular examples.",Regularized Estimation of High-Dimensional Vector AutoRegressions with Weakly Dependent Innovations,2019-12-19 06:19:23,"Ricardo P. Masini, Marcelo C. Medeiros, Eduardo F. Mendes","http://arxiv.org/abs/1912.09002v3, http://arxiv.org/pdf/1912.09002v3",math.ST
31095,em,"We study robust versions of pricing problems where customers choose products
according to a generalized extreme value (GEV) choice model, and the choice
parameters are not known exactly but lie in an uncertainty set. We show that,
when the robust problem is unconstrained and the price sensitivity parameters
are homogeneous, the robust optimal prices have a constant markup over
products, and we provide formulas that allow us to compute this constant markup by
bisection. We further show that, in the case that the price sensitivity
parameters are only homogeneous in each partition of the products, under the
assumption that the choice probability generating function and the uncertainty
set are partition-wise separable, a robust solution will have a constant markup
in each subset, and this constant-markup vector can be found efficiently by
convex optimization. We provide numerical results to illustrate the advantages
of our robust approach in protecting from bad scenarios. Our results hold for
convex and bounded uncertainty sets, and for any arbitrary GEV model,
including the multinomial logit, nested or cross-nested logit.",Robust Product-line Pricing under Generalized Extreme Value Models,2019-12-20 00:32:50,"Tien Mai, Patrick Jaillet","http://arxiv.org/abs/1912.09552v2, http://arxiv.org/pdf/1912.09552v2",math.OC
31096,em,"This paper deals with the Gaussian and bootstrap approximations to the
distribution of the max statistic in high dimensions. This statistic takes the
form of the maximum over components of the sum of independent random vectors
and its distribution plays a key role in many high-dimensional econometric
problems. Using a novel iterative randomized Lindeberg method, the paper
derives new bounds for the distributional approximation errors. These new
bounds substantially improve upon existing ones and simultaneously allow for a
larger class of bootstrap methods.",Improved Central Limit Theorem and bootstrap approximations in high dimensions,2019-12-22 23:38:16,"Victor Chernozhukov, Denis Chetverikov, Kengo Kato, Yuta Koike","http://arxiv.org/abs/1912.10529v2, http://arxiv.org/pdf/1912.10529v2",math.ST
34528,th,"This paper presents a new nested production function that is specifically
designed for analyzing capital and labor intensity of manufacturing industries
in developing and developed regions. The paper provides a rigorous theoretical
foundation for this production function, as well as an empirical analysis of
its performance in a sample of industries. The analysis shows that the
production function can be used to accurately estimate the level of capital and
labor intensity in industries, as well as to analyze the capacity utilization
of these industries.",A New Production Function Approach,2023-03-25 13:21:22,Samidh Pal,"http://arxiv.org/abs/2303.14428v1, http://arxiv.org/pdf/2303.14428v1",econ.TH
31097,em,"This paper is about the feasibility and means of root-n consistently
estimating linear, mean-square continuous functionals of a high dimensional,
approximately sparse regression. Such objects include a wide variety of
interesting parameters such as regression coefficients, average derivatives,
and the average treatment effect. We give lower bounds on the convergence rate
of estimators of a regression slope and an average derivative and find that
these bounds are substantially larger than in a low dimensional, semiparametric
setting. We also give debiased machine learners that are root-n consistent
under either a minimal approximate sparsity condition or rate double
robustness. These estimators improve on existing estimators in being root-n
consistent under more general conditions than previously known.",Minimax Semiparametric Learning With Approximate Sparsity,2019-12-27 19:13:21,"Jelena Bradic, Victor Chernozhukov, Whitney K. Newey, Yinchu Zhu","http://arxiv.org/abs/1912.12213v6, http://arxiv.org/pdf/1912.12213v6",math.ST
31098,em,"Based on administrative data of unemployed in Belgium, we estimate the labour
market effects of three training programmes at various aggregation levels using
Modified Causal Forests, a causal machine learning estimator. While all
programmes have positive effects after the lock-in period, we find substantial
heterogeneity across programmes and unemployed. Simulations show that
'black-box' rules that reassign unemployed to programmes that maximise
estimated individual gains can considerably improve effectiveness: up to 20
percent more (less) time spent in (un)employment within a 30 months window. A
shallow policy tree delivers a simple rule that realizes about 70 percent of
this gain.",Priority to unemployed immigrants? A causal machine learning evaluation of training in Belgium,2019-12-30 12:44:34,"Bart Cockx, Michael Lechner, Joost Bollens","http://arxiv.org/abs/1912.12864v4, http://arxiv.org/pdf/1912.12864v4",econ.EM
31099,em,"In this paper we develop a data-driven smoothing technique for
high-dimensional and non-linear panel data models. We allow for individual
specific (non-linear) functions and estimation with econometric or machine
learning methods by using weighted observations from other individuals. The
weights are determined in a data-driven way and depend on the similarity
between the corresponding functions, measured based on initial
estimates. The key feature of such a procedure is that it clusters individuals
based on the distance / similarity between them, estimated in a first stage.
Our estimation method can be combined with various statistical estimation
procedures, in particular modern machine learning methods, which are
especially fruitful in the high-dimensional case and with complex,
heterogeneous data. The approach can be interpreted as a ""soft clustering"" in
comparison to traditional ""hard clustering"" that assigns
each individual to exactly one group. We conduct a simulation study which shows
that the prediction can be greatly improved by using our estimator. Finally, we
analyze a big data set from didichuxing.com, a leading company in
transportation industry, to analyze and predict the gap between supply and
demand based on a large set of covariates. Our estimator clearly performs much
better in out-of-sample prediction compared to existing linear panel data
estimators.",Adaptive Discrete Smoothing for High-Dimensional and Nonlinear Panel Data,2019-12-30 12:50:58,"Xi Chen, Ye Luo, Martin Spindler","http://arxiv.org/abs/1912.12867v2, http://arxiv.org/pdf/1912.12867v2",stat.ME
31100,em,"We develop new robust discrete choice tools to learn about the average
willingness to pay for a price subsidy and its effects on demand given
exogenous, discrete variation in prices. Our starting point is a nonparametric,
nonseparable model of choice. We exploit the insight that our welfare
parameters in this model can be expressed as functions of demand for the
different alternatives. However, while the variation in the data reveals the
value of demand at the observed prices, the parameters generally depend on its
values beyond these prices. We show how to sharply characterize what we can
learn when demand is specified to be entirely nonparametric or to be
parameterized in a flexible manner, both of which imply that the parameters are
not necessarily point identified. We use our tools to analyze the welfare
effects of price subsidies provided by school vouchers in the DC Opportunity
Scholarship Program. We robustly find that the provision of the status quo
voucher and a wide range of counterfactual vouchers of different amounts have
positive benefits net of costs. This positive effect can be explained by the
popularity of low-tuition schools in the program; removing them from the
program can result in a negative net benefit. Relative to our bounds, we also
find that comparable logit estimates potentially understate the benefits for
certain voucher amounts, and provide a misleading sense of robustness for
alternative amounts.",Estimating Welfare Effects in a Nonparametric Choice Model: The Case of School Vouchers,2020-02-01 02:46:48,"Vishal Kamat, Samuel Norris","http://arxiv.org/abs/2002.00103v5, http://arxiv.org/pdf/2002.00103v5",econ.GN
31101,em,"Discrete choice models (DCMs) require a priori knowledge of the utility
functions, especially how tastes vary across individuals. Utility
misspecification may lead to biased estimates, inaccurate interpretations and
limited predictability. In this paper, we utilize a neural network to learn
taste representation. Our formulation consists of two modules: a neural network
(TasteNet) that learns taste parameters (e.g., time coefficient) as flexible
functions of individual characteristics; and a multinomial logit (MNL) model
with utility functions defined with expert knowledge. Taste parameters learned
by the neural network are fed into the choice model and link the two modules.
  Our approach extends the L-MNL model (Sifringer et al., 2020) by allowing the
neural network to learn the interactions between individual characteristics and
alternative attributes. Moreover, we formalize and strengthen the
interpretability condition - requiring realistic estimates of behavior
indicators (e.g., value-of-time, elasticity) at the disaggregated level, which
is crucial for a model to be suitable for scenario analysis and policy
decisions. Through a unique network architecture and parameter transformation,
we incorporate prior knowledge and guide the neural network to output realistic
behavior indicators at the disaggregated level. We show that TasteNet-MNL
reaches the ground-truth model's predictability and recovers the nonlinear
taste functions on synthetic data. Its estimated value-of-time and choice
elasticities at the individual level are close to the ground truth. On a
publicly available Swissmetro dataset, TasteNet-MNL outperforms benchmark MNL
and Mixed Logit models in predictability. It learns a broader spectrum of
taste variations within the population and suggests a higher average
value-of-time.",A Neural-embedded Choice Model: TasteNet-MNL Modeling Taste Heterogeneity with Flexibility and Interpretability,2020-02-03 21:03:54,"Yafei Han, Francisco Camara Pereira, Moshe Ben-Akiva, Christopher Zegras","http://arxiv.org/abs/2002.00922v2, http://arxiv.org/pdf/2002.00922v2",econ.EM
31127,em,"We propose a restrictiveness measure for economic models based on how well
they fit synthetic data from a pre-defined class. This measure, together with a
measure for how well the model fits real data, outlines a Pareto frontier,
where models that rule out more regularities, yet capture the regularities that
are present in real data, are preferred. To illustrate our approach, we
evaluate the restrictiveness of popular models in two laboratory settings --
certainty equivalents and initial play -- and in one field setting -- takeup of
microfinance in Indian villages. The restrictiveness measure reveals new
insights about each of the models, including that some economic models with
only a few parameters are very flexible.",How Flexible is that Functional Form? Quantifying the Restrictiveness of Theories,2020-07-17 23:08:53,"Drew Fudenberg, Wayne Gao, Annie Liang","http://arxiv.org/abs/2007.09213v4, http://arxiv.org/pdf/2007.09213v4",econ.TH
31102,em,"Choosing the technique that is the best at forecasting your data, is a
problem that arises in any forecasting application. Decades of research have
resulted into an enormous amount of forecasting methods that stem from
statistics, econometrics and machine learning (ML), which leads to a very
difficult and elaborate choice to make in any forecasting exercise. This paper
aims to facilitate this process for high-level tactical sales forecasts by
comparing a large array of techniques for 35 time series that consist of both
industry data from the Coca-Cola Company and publicly available datasets.
However, instead of solely focusing on the accuracy of the resulting forecasts,
this paper introduces a novel and completely automated profit-driven approach
that takes into account the expected profit that a technique can create during
both the model building and evaluation process. The expected profit function
that is used for this purpose, is easy to understand and adaptable to any
situation by combining forecasting accuracy with business expertise.
Furthermore, we examine the added value of ML techniques, the inclusion of
external factors and the use of seasonal models in order to ascertain which
type of model works best in tactical sales forecasting. Our findings show that
simple seasonal time series models consistently outperform other methodologies
and that the profit-driven approach can lead to selecting a different
forecasting model.",Profit-oriented sales forecasting: a comparison of forecasting techniques from a business perspective,2020-02-03 17:50:24,"Tine Van Calster, Filip Van den Bossche, Bart Baesens, Wilfried Lemahieu","http://arxiv.org/abs/2002.00949v1, http://arxiv.org/pdf/2002.00949v1",econ.EM
31103,em,"Even before the start of the COVID-19 pandemic, bus ridership in the United
States had attained its lowest level since 1973. If transit agencies hope to
reverse this trend, they must understand how their service allocation policies
affect ridership. This paper is among the first to model ridership trends on a
hyper-local level over time. A Poisson fixed-effects model is developed to
evaluate the ridership elasticity to frequency on weekdays using passenger
count data from Portland, Miami, Minneapolis/St-Paul, and Atlanta between 2012
and 2018. In every agency, ridership is found to be elastic to frequency when
observing the variation between individual route-segments at one point in time.
In other words, the most frequent routes are already the most productive in
terms of passengers per vehicle-trip. When observing the variation within each
route-segment over time, however, ridership is inelastic; each additional
vehicle-trip is expected to generate less ridership than the average bus
already on the route. In three of the four agencies, the elasticity is a
decreasing function of prior frequency, meaning that low-frequency routes are
the most sensitive to changes in frequency. This paper can help transit
agencies anticipate the marginal effect of shifting service throughout the
network. As the quality and availability of passenger count data improve, this
paper can serve as the methodological basis to explore the dynamics of bus
ridership.",On Ridership and Frequency,2020-02-06 22:57:04,"Simon Berrebi, Sanskruti Joshi, Kari E Watkins","http://arxiv.org/abs/2002.02493v3, http://arxiv.org/pdf/2002.02493v3",physics.soc-ph
31104,em,"We consider a matching market where buyers and sellers arrive according to
independent Poisson processes at the same rate and independently abandon the
market if not matched after an exponential amount of time with the same mean.
In this centralized market, the utility for the system manager from matching
any buyer and any seller is a general random variable. We consider a sequence
of systems indexed by $n$ where the arrivals in the $n^{\mathrm{th}}$ system
are sped up by a factor of $n$. We analyze two families of one-parameter
policies: the population threshold policy immediately matches an arriving agent
to its best available mate only if the number of mates in the system is above a
threshold, and the utility threshold policy matches an arriving agent to its
best available mate only if the corresponding utility is above a threshold.
Using a fluid analysis of the two-dimensional Markov process of buyers and
sellers, we show that when the matching utility distribution is light-tailed,
the population threshold policy with threshold $\frac{n}{\ln n}$ is
asymptotically optimal among all policies that make matches only at agent
arrival epochs. In the heavy-tailed case, we characterize the optimal threshold
level for both policies. We also study the utility threshold policy in an
unbalanced matching market with heavy-tailed matching utilities and find that
the buyers and sellers have the same asymptotically optimal utility threshold.
We derive optimal thresholds when the matching utility distribution is
exponential, uniform, Pareto, and correlated Pareto. We find that as the right
tail of the matching utility distribution gets heavier, the threshold level of
each policy (and hence market thickness) increases, as does the magnitude by
which the utility threshold policy outperforms the population threshold policy.",Asymptotically Optimal Control of a Centralized Dynamic Matching Market with General Utilities,2020-02-08 20:34:14,"Jose H. Blanchet, Martin I. Reiman, Viragh Shah, Lawrence M. Wein, Linjia Wu","http://arxiv.org/abs/2002.03205v2, http://arxiv.org/pdf/2002.03205v2",math.PR
31105,em,"We propose a sequential monitoring scheme to find structural breaks in real
estate markets. The changes in the real estate prices are modeled by a
combination of linear and autoregressive terms. The monitoring scheme is based
on a detector and a suitably chosen boundary function. If the detector crosses
the boundary function, a structural break is detected. We provide the
asymptotics for the procedure under the stability null hypothesis and the
stopping time under the change point alternative. Monte Carlo simulation is
used to show the size and the power of our method under several conditions. We
study the real estate markets in Boston, Los Angeles and at the national U.S.
level. We find structural breaks in the markets, and we segment the data into
stationary segments. It is observed that the autoregressive parameter is
increasing but stays below 1.",Sequential Monitoring of Changes in Housing Prices,2020-02-11 00:49:19,"Lajos Horváth, Zhenya Liu, Shanglin Lu","http://arxiv.org/abs/2002.04101v1, http://arxiv.org/pdf/2002.04101v1",econ.EM
31106,em,"The goal of many scientific experiments including A/B testing is to estimate
the average treatment effect (ATE), which is defined as the difference between
the expected outcomes of two or more treatments. In this paper, we consider a
situation where an experimenter can assign a treatment to research subjects
sequentially. In adaptive experimental design, the experimenter is allowed to
change the probability of assigning a treatment using past observations for
estimating the ATE efficiently. However, with this approach, it is difficult to
apply a standard statistical method to construct an estimator because the
observations are not independent and identically distributed. We thus propose
an algorithm for efficient experiments with estimators constructed from
dependent samples. We also introduce a sequential testing framework using the
proposed estimator. To justify our proposed approach, we provide finite and
infinite sample analyses. Finally, we experimentally show that the proposed
algorithm exhibits preferable performance.",Efficient Adaptive Experimental Design for Average Treatment Effect Estimation,2020-02-13 05:04:17,"Masahiro Kato, Takuya Ishihara, Junya Honda, Yusuke Narita","http://arxiv.org/abs/2002.05308v4, http://arxiv.org/pdf/2002.05308v4",stat.ML
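To make the idea of adaptive assignment concrete, here is a toy sketch in which the treatment probability is steered toward a Neyman-style allocation using running outcome variances, and the ATE is then estimated by inverse-probability weighting with the probabilities actually used. This is an illustrative simplification under our own assumptions, not the estimator or algorithm proposed in the paper above.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
p = 0.5                                   # initial assignment probability
obs = []

for t in range(n):
    a = rng.binomial(1, p)                # adaptive treatment assignment
    y = (2.0 + rng.normal(scale=2.0)) if a == 1 else rng.normal(scale=0.5)
    obs.append((a, y, p))
    ys1 = [o[1] for o in obs if o[0] == 1]
    ys0 = [o[1] for o in obs if o[0] == 0]
    if len(ys1) > 5 and len(ys0) > 5:
        s1, s0 = np.std(ys1), np.std(ys0)
        p = np.clip(s1 / (s1 + s0), 0.1, 0.9)   # move toward Neyman allocation

# IPW estimate of the ATE using the assignment probabilities actually used.
a, y, pr = map(np.array, zip(*obs))
ate_hat = np.mean(a * y / pr - (1 - a) * y / (1 - pr))
print(ate_hat)  # true ATE in this simulation is 2.0
```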
31476,gn,"The aim of this study is to contribute to the theory of exogenous economic
shocks and their equivalents in an attempt to explain business cycle
fluctuations, which still do not have a clear explanation. To this end the
author has developed an econometric model based on a regression analysis.
Another objective is to tackle the issue of hybrid threats, which have not yet
been subjected to cross-disciplinary research. These were reviewed in terms
of their economic characteristics in order to complement research in the fields
of defence and security.",Hybrid threats as an exogenous economic shock,2019-12-17 18:32:03,Shteryo Nozharov,"http://arxiv.org/abs/1912.08916v1, http://arxiv.org/pdf/1912.08916v1",econ.GN
31107,em,"We consider the estimation of treatment effects in settings when multiple
treatments are assigned over time and treatments can have a causal effect on
future outcomes or the state of the treated unit. We propose an extension of
the double/debiased machine learning framework to estimate the dynamic effects
of treatments, which can be viewed as a Neyman orthogonal (locally robust)
cross-fitted version of $g$-estimation in the dynamic treatment regime. Our
method applies to a general class of non-linear dynamic treatment models known
as Structural Nested Mean Models and allows the use of machine learning methods
to control for potentially high dimensional state variables, subject to a mean
square error guarantee, while still allowing parametric estimation and
construction of confidence intervals for the structural parameters of interest.
These structural parameters can be used for off-policy evaluation of any target
dynamic policy at parametric rates, subject to semi-parametric restrictions on
the data generating process. Our work is based on a recursive peeling process,
typical in $g$-estimation, and formulates a strongly convex objective at each
stage, which allows us to extend the $g$-estimation framework in multiple
directions: i) to provide finite sample guarantees, ii) to estimate non-linear
effect heterogeneity with respect to fixed unit characteristics, within
arbitrary function spaces, enabling a dynamic analogue of the RLearner
algorithm for heterogeneous effects, iii) to allow for high-dimensional sparse
parameterizations of the target structural functions, enabling automated model
selection via a recursive lasso algorithm. We also provide guarantees for data
stemming from a single treated unit over a long horizon and under stationarity
conditions.",Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation,2020-02-18 01:32:34,"Greg Lewis, Vasilis Syrgkanis","http://arxiv.org/abs/2002.07285v5, http://arxiv.org/pdf/2002.07285v5",econ.EM
31108,em,"The availability of tourism-related big data increases the potential to
improve the accuracy of tourism demand forecasting, but presents significant
challenges for forecasting, including the curse of dimensionality and high model
complexity. A novel bagging-based multivariate ensemble deep learning approach
integrating stacked autoencoders and kernel-based extreme learning machines
(B-SAKE) is proposed to address these challenges in this study. By using
historical tourist arrival data, economic variable data and search intensity
index (SII) data, we forecast tourist arrivals in Beijing from four countries.
The consistent results of multiple schemes suggest that our proposed B-SAKE
approach outperforms benchmark models in terms of level accuracy, directional
accuracy and even statistical significance. Both bagging and stacked
autoencoder can effectively alleviate the challenges brought by tourism big
data and improve the forecasting performance of the models. The ensemble deep
learning model we propose contributes to tourism forecasting literature and
benefits relevant government officials and tourism practitioners.",Tourism Demand Forecasting: An Ensemble Deep Learning Approach,2020-02-19 05:23:38,"Shaolong Sun, Yanzhao Li, Ju-e Guo, Shouyang Wang","http://arxiv.org/abs/2002.07964v3, http://arxiv.org/pdf/2002.07964v3",stat.AP
31109,em,"The accurate seasonal and trend forecasting of tourist arrivals is a very
challenging task. Despite the importance of seasonal and trend forecasting of
tourist arrivals, limited research has addressed it previously. In this study,
a new adaptive multiscale ensemble (AME)
learning approach incorporating variational mode decomposition (VMD) and least
square support vector regression (LSSVR) is developed for short-, medium-, and
long-term seasonal and trend forecasting of tourist arrivals. In the
formulation of our developed AME learning approach, the original tourist
arrivals series are first decomposed into the trend, seasonal and remainder
volatility components. Then, ARIMA is used to forecast the trend component,
SARIMA is used to forecast the seasonal component with a 12-month cycle, while
the LSSVR is used to forecast remainder volatility components. Finally, the
forecasting results of the three components are aggregated to generate an
ensemble forecasting of tourist arrivals by the LSSVR based nonlinear ensemble
approach. Furthermore, a direct strategy is used to implement multi-step-ahead
forecasting. Using two accuracy measures and the Diebold-Mariano test, the
empirical results demonstrate that our proposed AME learning approach can
achieve higher level and directional forecasting accuracy compared with other
benchmarks used in this study, indicating that our proposed approach is a
promising model for forecasting tourist arrivals with high seasonality and
volatility.",Seasonal and Trend Forecasting of Tourist Arrivals: An Adaptive Multiscale Ensemble Learning Approach,2020-02-19 09:32:01,"Shaolong Suna, Dan Bi, Ju-e Guo, Shouyang Wang","http://arxiv.org/abs/2002.08021v2, http://arxiv.org/pdf/2002.08021v2",stat.AP
31110,em,"It has been known since Elliott (1998) that standard methods of inference on
cointegrating relationships break down entirely when autoregressive roots are
near but not exactly equal to unity. We consider this problem within the
framework of a structural VAR, arguing that it is as much a problem of
identification failure as it is of inference. We develop a characterisation of
cointegration based on the impulse response function, which allows long-run
equilibrium relationships to remain identified even in the absence of exact
unit roots. Our approach also provides a framework in which the structural
shocks driving the common persistent components continue to be identified via
long-run restrictions, just as in an SVAR with exact unit roots. We show that
inference on the cointegrating relationships is affected by nuisance
parameters, in a manner familiar from predictive regression; indeed the two
problems are asymptotically equivalent. By adapting the approach of Elliott,
M\""uller and Watson (2015) to our setting, we develop tests that robustly
control size while sacrificing little power (relative to tests that are
efficient in the presence of exact unit roots).",Cointegration without Unit Roots,2020-02-19 13:02:50,"James A. Duffy, Jerome R. Simons","http://arxiv.org/abs/2002.08092v2, http://arxiv.org/pdf/2002.08092v2",econ.EM
31111,em,"We consider a variant of the contextual bandit problem. In standard
contextual bandits, when a user arrives we get the user's complete feature
vector and then assign a treatment (arm) to that user. In a number of
applications (like healthcare), collecting features from users can be costly.
To address this issue, we propose algorithms that avoid needless feature
collection while maintaining strong regret guarantees.",Survey Bandits with Regret Guarantees,2020-02-23 06:24:03,"Sanath Kumar Krishnamurthy, Susan Athey","http://arxiv.org/abs/2002.09814v1, http://arxiv.org/pdf/2002.09814v1",cs.LG
31477,gn,"Digital advertising markets are growing and attracting increased scrutiny.
This paper explores four market inefficiencies that remain poorly understood:
ad effect measurement, frictions between and within advertising channel
members, ad blocking and ad fraud. These topics are not unique to digital
advertising, but each manifests in new ways in markets for digital ads. We
identify relevant findings in the academic literature, recent developments in
practice, and promising topics for future research.",Inefficiencies in Digital Advertising Markets,2019-12-19 07:34:02,"Brett R Gordon, Kinshuk Jerath, Zsolt Katona, Sridhar Narayanan, Jiwoong Shin, Kenneth C Wilbur","http://arxiv.org/abs/1912.09012v2, http://arxiv.org/pdf/1912.09012v2",econ.GN
31112,em,"Regulation is an important feature characterising many dynamical phenomena
and can be tested within the threshold autoregressive setting, with the null
hypothesis being a global non-stationary process. Nonetheless, this setting is
debatable since data are often corrupted by measurement errors. Thus, it is
more appropriate to consider a threshold autoregressive moving-average model as
the general hypothesis. We implement this new setting with the integrated
moving-average model of order one as the null hypothesis. We derive a Lagrange
multiplier test which has an asymptotically similar null distribution and
provide the first rigorous proof of tightness pertaining to testing for
threshold nonlinearity against difference stationarity, which is of independent
interest. Simulation studies show that the proposed approach enjoys less bias
and higher power in detecting threshold regulation than existing tests when
there are measurement errors. We apply the new approach to the daily real
exchange rates of Eurozone countries. It lends support to the purchasing power
parity hypothesis, via a nonlinear mean-reversion mechanism triggered upon
crossing a threshold located in the extreme upper tail. Furthermore, we analyse
the Eurozone series and propose a threshold autoregressive moving-average
specification, which sheds new light on the purchasing power parity debate.",Testing for threshold regulation in presence of measurement error with an application to the PPP hypothesis,2020-02-23 21:54:51,"Kung-Sik Chan, Simone Giannerini, Greta Goracci, Howell Tong","http://arxiv.org/abs/2002.09968v3, http://arxiv.org/pdf/2002.09968v3",stat.ME
31113,em,"In todays global economy, accuracy in predicting macro-economic parameters
such as the foreign the exchange rate or at least estimating the trend
correctly is of key importance for any future investment. In recent times, the
use of computational intelligence-based techniques for forecasting
macroeconomic variables has been proven highly successful. This paper tries to
come up with a multivariate time series approach to forecast the exchange rate
(USD/INR) while parallelly comparing the performance of three multivariate
prediction modelling techniques: Vector Auto Regression (a Traditional
Econometric Technique), Support Vector Machine (a Contemporary Machine Learning
Technique), and Recurrent Neural Networks (a Contemporary Deep Learning
Technique). We have used monthly historical data for several macroeconomic
variables from April 1994 to December 2018 for USA and India to predict USD-INR
Foreign Exchange Rate. The results clearly depict that contemporary techniques
of SVM and RNN (Long Short-Term Memory) outperform the widely used traditional
method of Auto Regression. The RNN model with Long Short-Term Memory (LSTM)
provides the maximum accuracy (97.83%) followed by SVM Model (97.17%) and VAR
Model (96.31%). At last, we present a brief analysis of the correlation and
interdependencies of the variables used for forecasting.","Forecasting Foreign Exchange Rate: A Multivariate Comparative Analysis between Traditional Econometric, Contemporary Machine Learning & Deep Learning Techniques",2020-02-19 21:11:57,"Manav Kaushik, A K Giri","http://arxiv.org/abs/2002.10247v1, http://arxiv.org/pdf/2002.10247v1",q-fin.ST
31114,em,"Intra-day price spreads are of interest to electricity traders, storage and
electric vehicle operators. This paper formulates dynamic density functions,
based upon skewed-t and similar representations, to model and forecast the
German electricity price spreads between different hours of the day, as
revealed in the day-ahead auctions. The four specifications of the density
functions are dynamic and conditional upon exogenous drivers, thereby
permitting the location, scale and shape parameters of the densities to respond
hourly to such factors as weather and demand forecasts. The best fitting and
forecasting specifications for each spread are selected based on the Pinball
Loss function, following the closed-form analytical solutions of the cumulative
distribution functions.",Forecasting the Intra-Day Spread Densities of Electricity Prices,2020-02-20 22:57:07,"Ekaterina Abramova, Derek Bunn","http://dx.doi.org/10.3390/en13030687, http://arxiv.org/abs/2002.10566v1, http://arxiv.org/pdf/2002.10566v1",stat.AP
31115,em,"Holston, Laubach and Williams' (2017) estimates of the natural rate of
interest are driven by the downward trending behaviour of 'other factor'
$z_{t}$. I show that their implementation of Stock and Watson's (1998) Median
Unbiased Estimation (MUE) to determine the size of the $\lambda _{z}$ parameter
which drives this downward trend in $z_{t}$ is unsound. It cannot recover the
ratio of interest $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$ from MUE
required for the estimation of the full structural model. This failure is due
to an 'unnecessary' misspecification in Holston et al.'s (2017) formulation of
the Stage 2 model. More importantly, their implementation of MUE on this
misspecified Stage 2 model spuriously amplifies the point estimate of $\lambda
_{z}$. Using a simulation experiment, I show that their procedure generates
excessively large estimates of $\lambda _{z}$ when applied to data generated
from a model where the true $\lambda _{z}$ is equal to zero. Correcting the
misspecification in their Stage 2 model and the implementation of MUE leads to
a substantially smaller $\lambda _{z}$ estimate, and with this, a more subdued
downward trending influence of 'other factor' $z_{t}$ on the natural rate.
Moreover, the $\lambda _{z}$ point estimate is statistically highly
insignificant, suggesting that there is no role for 'other factor' $z_{t}$ in
this model. I also discuss various other estimation issues that arise in
Holston et al.'s (2017) model of the natural rate that make it unsuitable for
policy analysis.",Econometric issues with Laubach and Williams' estimates of the natural rate of interest,2020-02-26 19:08:01,Daniel Buncic,"http://arxiv.org/abs/2002.11583v2, http://arxiv.org/pdf/2002.11583v2",econ.EM
31116,em,"We consider evaluating and training a new policy for the evaluation data by
using the historical data obtained from a different policy. The goal of
off-policy evaluation (OPE) is to estimate the expected reward of a new policy
over the evaluation data, and that of off-policy learning (OPL) is to find a
new policy that maximizes the expected reward over the evaluation data.
Although standard OPE and OPL assume the same covariate distribution
between the historical and evaluation data, a covariate shift often exists,
i.e., the distribution of the covariate of the historical data is different
from that of the evaluation data. In this paper, we derive the efficiency bound
of OPE under a covariate shift. Then, we propose doubly robust and efficient
estimators for OPE and OPL under a covariate shift by using a nonparametric
estimator of the density ratio between the historical and evaluation data
distributions. We also discuss other possible estimators and compare their
theoretical properties. Finally, we confirm the effectiveness of the proposed
estimators through experiments.",Off-Policy Evaluation and Learning for External Validity under a Covariate Shift,2020-02-26 20:18:43,"Masahiro Kato, Masatoshi Uehara, Shota Yasui","http://arxiv.org/abs/2002.11642v3, http://arxiv.org/pdf/2002.11642v3",stat.ML
34529,th,"We revisit Popper's falsifiability criterion. A tester hires a potential
expert to produce a theory, offering payments contingent on the observed
performance of the theory. We argue that if the informed expert can acquire
additional information, falsifiability does have the power to identify
worthless theories.",Redeeming Falsifiability?,2023-03-28 07:17:21,"Mark Whitmeyer, Kun Zhang","http://arxiv.org/abs/2303.15723v1, http://arxiv.org/pdf/2303.15723v1",econ.TH
31117,em,"Off-policy evaluation (OPE) is the problem of evaluating new policies using
historical data obtained from a different policy. In the recent OPE context,
most studies have focused on single-player cases, and not on multi-player
cases. In this study, we propose OPE estimators constructed by the doubly
robust and double reinforcement learning estimators in two-player zero-sum
Markov games. The proposed estimators target exploitability, which is often used
as a metric for determining how close a policy profile (i.e., a tuple of
policies) is to a Nash equilibrium in two-player zero-sum games. We prove the
exploitability estimation error bounds for the proposed estimators. We then
propose the methods to find the best candidate policy profile by selecting the
policy profile that minimizes the estimated exploitability from a given policy
profile class. We prove the regret bounds of the policy profiles selected by
our methods. Finally, we demonstrate the effectiveness and performance of the
proposed estimators through experiments.",Off-Policy Exploitability-Evaluation in Two-Player Zero-Sum Markov Games,2020-07-04 19:51:56,"Kenshi Abe, Yusuke Kaneko","http://arxiv.org/abs/2007.02141v2, http://arxiv.org/pdf/2007.02141v2",cs.LG
31118,em,"Study populations are typically sampled from limited points in space and
time, and marginalized groups are underrepresented. To assess the external
validity of randomized and observational studies, we propose and evaluate the
worst-case treatment effect (WTE) across all subpopulations of a given size,
which guarantees positive findings remain valid over subpopulations. We develop
a semiparametrically efficient estimator for the WTE that analyzes the external
validity of the augmented inverse propensity weighted estimator for the average
treatment effect. Our cross-fitting procedure leverages flexible nonparametric
and machine learning-based estimates of nuisance parameters and is a regular
root-$n$ estimator even when nuisance estimates converge more slowly. On real
examples where external validity is of core concern, our proposed framework
guards against brittle findings that are invalidated by unanticipated
population shifts.",Assessing External Validity Over Worst-case Subpopulations,2020-07-05 21:34:14,"Sookyo Jeong, Hongseok Namkoong","http://arxiv.org/abs/2007.02411v3, http://arxiv.org/pdf/2007.02411v3",stat.ML
31119,em,"This paper extends the canonical model of epidemiology, SIRD model, to allow
for time varying parameters for real-time measurement of the stance of the
COVID-19 pandemic. Time variation in model parameters is captured using the
generalized autoregressive score modelling structure designed for the typically
daily count data related to pandemic. The resulting specification permits a
flexible yet parsimonious model structure with a very low computational cost.
This is especially crucial at the onset of the pandemic when the data is scarce
and the uncertainty is abundant. Full sample results show that countries
including US, Brazil and Russia are still not able to contain the pandemic with
the US having the worst performance. Furthermore, Iran and South Korea are
likely to experience a second wave of the pandemic. A real-time exercise shows
that the proposed structure delivers timely and precise information on the
current stance of the pandemic ahead of the competitors that use rolling
window. This, in turn, transforms into accurate short-term predictions of the
active cases. We further modify the model to allow for unreported cases.
Results suggest that the effects of the presence of these cases on the
estimation results diminish towards the end of the sample as the amount of
testing increases.",Bridging the COVID-19 Data and the Epidemiological Model using Time Varying Parameter SIRD Model,2020-07-03 15:31:55,"Cem Cakmakli, Yasin Simsek","http://arxiv.org/abs/2007.02726v2, http://arxiv.org/pdf/2007.02726v2",q-bio.PE
31120,em,"This study presents a semi-nonparametric Latent Class Choice Model (LCCM)
with a flexible class membership component. The proposed model formulates the
latent classes using mixture models as an alternative approach to the
traditional random utility specification with the aim of comparing the two
approaches on various measures including prediction accuracy and representation
of heterogeneity in the choice process. Mixture models are parametric
model-based clustering techniques that have been widely used in areas such as
machine learning, data mining and pattern recognition for clustering and
classification problems. An Expectation-Maximization (EM) algorithm is derived
for the estimation of the proposed model. Using two different case studies on
travel mode choice behavior, the proposed model is compared to traditional
discrete choice models on the basis of parameter estimates' signs, value of
time, statistical goodness-of-fit measures, and cross-validation tests. Results
show that mixture models improve the overall performance of latent class choice
models by providing better out-of-sample prediction accuracy in addition to
better representations of heterogeneity without weakening the behavioral and
economic interpretability of the choice models.",Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach,2020-07-06 16:19:26,"Georges Sfeir, Maya Abou-Zeid, Filipe Rodrigues, Francisco Camara Pereira, Isam Kaysi","http://dx.doi.org/10.1016/j.jocm.2021.100320, http://arxiv.org/abs/2007.02739v1, http://arxiv.org/pdf/2007.02739v1",econ.EM
31121,em,"We consider a testing problem for cross-sectional dependence for
high-dimensional panel data, where the number of cross-sectional units is
potentially much larger than the number of observations. The cross-sectional
dependence is described through a linear regression model. We study three tests
named the sum test, the max test and the max-sum test, where the latter two are
new. The sum test is initially proposed by Breusch and Pagan (1980). We design
the max and sum tests for sparse and non-sparse residuals in the linear
regressions, respectively, and the max-sum test is devised to accommodate both
situations on the residuals. Indeed, our simulation shows that the max-sum test
outperforms the previous two tests. This makes the max-sum test very useful in
practice where sparsity or not for a set of data is usually vague. Towards the
theoretical analysis of the three tests, we have settled two conjectures
regarding the sum of squares of sample correlation coefficients posed by
Pesaran (2004, 2008). In addition, we establish the asymptotic theory for
maxima of sample correlation coefficients appearing in the linear regression
model for panel data, which is also the first successful attempt to our
knowledge. To study the max-sum test, we create a novel method to show
asymptotic independence between maxima and sums of dependent random variables.
We expect the method itself is useful for other problems of this nature.
Finally, an extensive simulation study as well as a case study are carried out.
They demonstrate advantages of our proposed methods in terms of both empirical
powers and robustness for residuals regardless of sparsity or not.",Max-sum tests for cross-sectional dependence of high-dimensional panel data,2020-07-08 09:10:02,"Long Feng, Tiefeng Jiang, Binghui Liu, Wei Xiong","http://arxiv.org/abs/2007.03911v1, http://arxiv.org/pdf/2007.03911v1",math.ST
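The sum- and max-type statistics discussed above are built from pairwise sample correlations of regression residuals. The sketch below computes Breusch-Pagan-style sum and max statistics from a residual matrix; the exact centering and normalization used in the paper differ, so treat this only as a schematic.

```python
import numpy as np

def max_and_sum_statistics(resid):
    """resid: (T, N) residual matrix from the N unit-level regressions (sketch)."""
    T, N = resid.shape
    corr = np.corrcoef(resid, rowvar=False)   # N x N sample correlation matrix
    iu = np.triu_indices(N, k=1)
    rho = corr[iu]                            # all pairwise correlations
    sum_stat = T * np.sum(rho ** 2)           # Breusch-Pagan-type sum statistic
    max_stat = T * np.max(rho ** 2)           # max-type statistic
    return sum_stat, max_stat

rng = np.random.default_rng(0)
print(max_and_sum_statistics(rng.normal(size=(200, 50))))
```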
31122,em,"In this paper, we model the trajectory of the cumulative confirmed cases and
deaths of COVID-19 (in log scale) via a piecewise linear trend model. The model
naturally captures the phase transitions of the epidemic growth rate via
change-points and further enjoys great interpretability due to its
semiparametric nature. On the methodological front, we advance the nascent
self-normalization (SN) technique (Shao, 2010) to testing and estimation of a
single change-point in the linear trend of a nonstationary time series. We
further combine the SN-based change-point test with the NOT algorithm
(Baranowski et al., 2019) to achieve multiple change-point estimation. Using
the proposed method, we analyze the trajectory of the cumulative COVID-19 cases
and deaths for 30 major countries and discover interesting patterns with
potentially relevant implications for effectiveness of the pandemic responses
by different countries. Furthermore, based on the change-point detection
algorithm and a flexible extrapolation function, we design a simple two-stage
forecasting scheme for COVID-19 and demonstrate its promising performance in
predicting cumulative deaths in the U.S.",Time Series Analysis of COVID-19 Infection Curve: A Change-Point Perspective,2020-07-09 07:46:13,"Feiyu Jiang, Zifeng Zhao, Xiaofeng Shao","http://arxiv.org/abs/2007.04553v1, http://arxiv.org/pdf/2007.04553v1",econ.EM
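To illustrate the piecewise linear trend idea in the COVID-19 abstract above, here is a least-squares grid search for a single slope change in a trend; it is not the self-normalization test or the NOT algorithm the paper uses, and the synthetic series is ours.

```python
import numpy as np

def fit_single_break(y):
    """Least-squares single change-point in a piecewise linear trend (sketch)."""
    T = len(y)
    t = np.arange(T, dtype=float)
    best = (np.inf, None)
    for k in range(5, T - 5):  # candidate break locations
        X = np.column_stack([np.ones(T), t, np.maximum(t - k, 0.0)])  # slope change at k
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = rss[0] if rss.size else np.sum((y - X @ beta) ** 2)
        if rss < best[0]:
            best = (rss, k)
    return best[1]

rng = np.random.default_rng(2)
t = np.arange(100.0)
log_cases = 0.2 * t - 0.15 * np.maximum(t - 40, 0) + rng.normal(scale=0.3, size=100)
print(fit_single_break(log_cases))  # should be near the true break at t = 40
```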
31123,em,"Nowadays consumer loan plays an important role in promoting the economic
growth, and credit cards are the most popular consumer loan. One of the most
essential parts in credit cards is the credit limit management. Traditionally,
credit limits are adjusted based on limited heuristic strategies, which are
developed by experienced professionals. In this paper, we present a data-driven
approach to manage the credit limit intelligently. Firstly, a conditional
independence testing is conducted to acquire the data for building models.
Based on these testing data, a response model is then built to measure the
heterogeneous treatment effect of increasing credit limits (i.e. treatments)
for different customers, who are depicted by several control variables (i.e.
features). In order to incorporate the diminishing marginal effect, a carefully
selected log transformation is introduced to the treatment variable. Moreover,
the model's capability can be further enhanced by applying a non-linear
transformation on features via GBDT encoding. Finally, a well-designed metric
is proposed to properly measure the performances of compared methods. The
experimental results demonstrate the effectiveness of the proposed approach.",Intelligent Credit Limit Management in Consumer Loans Based on Causal Inference,2020-07-10 09:22:44,"Hang Miao, Kui Zhao, Zhun Wang, Linbo Jiang, Quanhui Jia, Yanming Fang, Quan Yu","http://arxiv.org/abs/2007.05188v1, http://arxiv.org/pdf/2007.05188v1",cs.LG
31124,em,"Standard inference about a scalar parameter estimated via GMM amounts to
applying a t-test to a particular set of observations. If the number of
observations is not very large, then moderately heavy tails can lead to poor
behavior of the t-test. This is a particular problem under clustering, since
the number of observations then corresponds to the number of clusters, and
heterogeneity in cluster sizes induces a form of heavy tails. This paper
combines extreme value theory for the smallest and largest observations with a
normal approximation for the average of the remaining observations to construct
a more robust alternative to the t-test. The new test is found to control size
much more successfully in small samples compared to existing methods.
Analytical results in the canonical inference for the mean problem demonstrate
that the new test provides a refinement over the full sample t-test under more
than two but less than three moments, while the bootstrapped t-test does not.",A More Robust t-Test,2020-07-14 17:42:27,Ulrich K. Mueller,"http://arxiv.org/abs/2007.07065v1, http://arxiv.org/pdf/2007.07065v1",econ.EM
31125,em,"Researchers may perform regressions using a sketch of data of size $m$
instead of the full sample of size $n$ for a variety of reasons. This paper
considers the case when the regression errors do not have constant variance and
heteroskedasticity robust standard errors would normally be needed for test
statistics to provide accurate inference. We show that estimates using data
sketched by random projections will behave `as if' the errors were
homoskedastic. Estimation by random sampling would not have this property. The
result arises because the sketched estimates in the case of random projections
can be expressed as degenerate $U$-statistics, and under certain conditions,
these statistics are asymptotically normal with homoskedastic variance. We
verify that the conditions hold not only in the case of least squares
regression when the covariates are exogenous, but also in instrumental
variables estimation when the covariates are endogenous. The result implies
that inference, including first-stage F tests for instrument relevance, can be
simpler than the full sample case if the sketching scheme is appropriately
chosen.",Least Squares Estimation Using Sketched Data with Heteroskedastic Errors,2020-07-15 18:58:27,"Sokbae Lee, Serena Ng","http://arxiv.org/abs/2007.07781v3, http://arxiv.org/pdf/2007.07781v3",stat.ML
31126,em,"A fundamental problem in revenue management is to optimally choose the
attributes of products, such that the total profit or revenue or market share
is maximized. Usually, these attributes can affect both a product's market
share (probability to be chosen) and its profit margin. For example, if a smart
phone has a better battery, then it is more costly to be produced, but is more
likely to be purchased by a customer. The decision maker then needs to choose
an optimal vector of attributes for each product that balances this trade-off.
In spite of the importance of such problems, there is not yet a method to solve
them efficiently in general. Past literature on revenue management and discrete
choice models focuses on pricing problems, where price is the only attribute to
be chosen for each product. Existing approaches to solve pricing problems
tractably cannot be generalized to the optimization problem with multiple
product attributes as decision variables. On the other hand, papers studying
product line design with multiple attributes all result in intractable
optimization problems. We show how to reformulate the static
multi-attribute optimization problem, as well as the multi-stage fluid
optimization problem with both resource constraints and upper and lower bounds
of attributes, as a tractable convex conic optimization problem. Our result
applies to optimization problems under the multinomial logit (MNL) model, the
Markov chain (MC) choice model, and with certain conditions, the nested logit
(NL) model.",Tractable Profit Maximization over Multiple Attributes under Discrete Choice Models,2020-07-17 22:20:34,"Hongzhang Shao, Anton J. Kleywegt","http://arxiv.org/abs/2007.09193v3, http://arxiv.org/pdf/2007.09193v3",math.OC
31484,gn,"Most doctors in the NRMP are matched to one of their most-preferred
internship programs. Since various surveys indicate similarities across
doctors' preferences, this suggests a puzzle. How can nearly everyone get a
position in a highly-desirable program when positions in each program are
scarce? We provide one possible explanation for this puzzle. We show that the
patterns observed in the NRMP data may be an artifact of the interview process
that precedes the match. Our analysis highlights the importance of interactions
occurring outside of a matching clearinghouse for resulting outcomes, and casts
doubts on analysis of clearinghouses that take reported preferences at face
value.",Top of the Batch: Interviews and the Match,2020-02-13 06:25:06,"Federico Echenique, Ruy Gonzalez, Alistair Wilson, Leeat Yariv","http://arxiv.org/abs/2002.05323v2, http://arxiv.org/pdf/2002.05323v2",econ.GN
31128,em,"The European Union Emission Trading Scheme is a carbon emission allowance
trading system designed by the European Union to achieve its emission reduction targets. The
amount of carbon emission caused by production activities is closely related to
the socio-economic environment. Therefore, from the perspective of economic
policy uncertainty, this article constructs the GARCH-MIDAS-EUEPU and
GARCH-MIDAS-GEPU models for investigating the impact of European and global
economic policy uncertainty on carbon price fluctuations. The results show that
both European and global economic policy uncertainty will exacerbate the
long-term volatility of European carbon spot return, with the latter having a
stronger impact when the change is the same. Moreover, the volatility of the
European carbon spot return can be forecasted better by the predictor, global
economic policy uncertainty. This research can provide some implications for
market managers in grasping carbon market trends and helping participants
control the risk of fluctuations in carbon allowances.",The impact of economic policy uncertainties on the volatility of European carbon market,2020-07-21 05:17:03,"Peng-Fei Dai, Xiong Xiong, Toan Luu Duc Huynh, Jiqiang Wang","http://arxiv.org/abs/2007.10564v2, http://arxiv.org/pdf/2007.10564v2",econ.GN
31129,em,"In this paper we develop valid inference for high-dimensional time series. We
extend the desparsified lasso to a time series setting under Near-Epoch
Dependence (NED) assumptions allowing for non-Gaussian, serially correlated and
heteroskedastic processes, where the number of regressors can possibly grow
faster than the time dimension. We first derive an error bound under weak
sparsity, which, coupled with the NED assumption, means this inequality can
also be applied to the (inherently misspecified) nodewise regressions performed
in the desparsified lasso. This allows us to establish the uniform asymptotic
normality of the desparsified lasso under general conditions, including for
inference on parameters of increasing dimensions. Additionally, we show
consistency of a long-run variance estimator, thus providing a complete set of
tools for performing inference in high-dimensional linear time series models.
Finally, we perform a simulation exercise to demonstrate the small sample
properties of the desparsified lasso in common time series settings.",Lasso Inference for High-Dimensional Time Series,2020-07-21 20:12:39,"Robert Adamek, Stephan Smeekes, Ines Wilms","http://arxiv.org/abs/2007.10952v6, http://arxiv.org/pdf/2007.10952v6",econ.EM
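The desparsified (debiased) lasso in the abstract above corrects the lasso via nodewise regressions. Below is an iid-style sketch of the point estimates using scikit-learn; it omits the long-run variance estimation and the Near-Epoch Dependence machinery that the time series setting requires, so it is only a schematic of the debiasing step.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def desparsified_lasso(X, y):
    """Debiased lasso point estimates via nodewise regressions (iid-style sketch)."""
    n, p = X.shape
    beta = LassoCV(cv=5, fit_intercept=False).fit(X, y).coef_
    resid = y - X @ beta
    b_debiased = np.empty(p)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        gamma = LassoCV(cv=5, fit_intercept=False).fit(X[:, others], X[:, j]).coef_
        z_j = X[:, j] - X[:, others] @ gamma           # nodewise residual
        b_debiased[j] = beta[j] + z_j @ resid / (z_j @ X[:, j])
    return b_debiased

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] * 1.0 + X[:, 1] * 0.5 + rng.normal(size=n)
print(desparsified_lasso(X, y)[:3].round(2))
```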
31130,em,"This paper considers linear rational expectations models in the frequency
domain under general conditions. The paper develops necessary and sufficient
conditions for existence and uniqueness of particular and generic systems and
characterizes the space of all solutions as an affine space in the frequency
domain. It is demonstrated that solutions are not generally continuous with
respect to the parameters of the models, invalidating mainstream frequentist
and Bayesian methods. The ill-posedness of the problem motivates regularized
solutions with theoretically guaranteed uniqueness, continuity, and even
differentiability properties. Regularization is illustrated in an analysis of
the limiting Gaussian likelihood functions of two analytically tractable
models.",The Spectral Approach to Linear Rational Expectations Models,2020-07-27 21:39:45,Majid M. Al-Sadoon,"http://arxiv.org/abs/2007.13804v4, http://arxiv.org/pdf/2007.13804v4",econ.EM
31131,em,"We present an empirical study of debiasing methods for classifiers, showing
that debiasers often fail in practice to generalize out-of-sample, and can in
fact make fairness worse rather than better. A rigorous evaluation of the
debiasing treatment effect requires extensive cross-validation beyond what is
usually done. We demonstrate that this phenomenon can be explained as a
consequence of bias-variance trade-off, with an increase in variance
necessitated by imposing a fairness constraint. Follow-up experiments validate
the theoretical prediction that the estimation variance depends strongly on the
base rates of the protected class. Considering fairness--performance trade-offs
justifies the counterintuitive notion that partial debiasing can actually yield
better results in practice on out-of-sample data.",Debiasing classifiers: is reality at variance with expectation?,2020-11-04 20:00:54,"Ashrya Agrawal, Florian Pfisterer, Bernd Bischl, Francois Buet-Golfouse, Srijan Sood, Jiahao Chen, Sameena Shah, Sebastian Vollmer","http://arxiv.org/abs/2011.02407v2, http://arxiv.org/pdf/2011.02407v2",cs.LG
31132,em,"We study the bias of classical quantile regression and instrumental variable
quantile regression estimators. While being asymptotically first-order
unbiased, these estimators can have non-negligible second-order biases. We
derive a higher-order stochastic expansion of these estimators using empirical
process theory. Based on this expansion, we derive an explicit formula for the
second-order bias and propose a feasible bias correction procedure that uses
finite-difference estimators of the bias components. The proposed bias
correction method performs well in simulations. We provide an empirical
illustration using Engel's classical data on household expenditure.",Bias correction for quantile regression estimators,2020-11-05 22:43:13,"Grigory Franguridi, Bulat Gafarov, Kaspar Wuthrich","http://arxiv.org/abs/2011.03073v6, http://arxiv.org/pdf/2011.03073v6",econ.EM
31133,em,"In addition to efficient statistical estimators of a treatment's effect,
successful application of causal inference requires specifying assumptions
about the mechanisms underlying observed data and testing whether they are
valid, and to what extent. However, most libraries for causal inference focus
only on the task of providing powerful statistical estimators. We describe
DoWhy, an open-source Python library that is built with causal assumptions as
its first-class citizens, based on the formal framework of causal graphs to
specify and test causal assumptions. DoWhy presents an API for the four steps
common to any causal analysis---1) modeling the data using a causal graph and
structural assumptions, 2) identifying whether the desired effect is estimable
under the causal model, 3) estimating the effect using statistical estimators,
and finally 4) refuting the obtained estimate through robustness checks and
sensitivity analyses. In particular, DoWhy implements a number of robustness
checks including placebo tests, bootstrap tests, and tests for unobserved
confounding. DoWhy is an extensible library that supports interoperability with
other implementations, such as EconML and CausalML, for the estimation step.
The library is available at https://github.com/microsoft/dowhy",DoWhy: An End-to-End Library for Causal Inference,2020-11-09 09:22:11,"Amit Sharma, Emre Kiciman","http://arxiv.org/abs/2011.04216v1, http://arxiv.org/pdf/2011.04216v1",stat.ME
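The four-step API described above can be exercised on simulated data roughly as follows, assuming a recent DoWhy release is installed; the data-generating process and variable names are ours, and other identification graphs and estimation/refutation methods are available.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Simulated data: Z confounds treatment T and outcome Y; true effect of T on Y is 2.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)
t = (z + rng.normal(size=n) > 0).astype(int)
y = 2.0 * t + z + rng.normal(size=n)
df = pd.DataFrame({"Z": z, "T": t, "Y": y})

# Step 1: model the data with causal assumptions (Z as a common cause).
model = CausalModel(data=df, treatment="T", outcome="Y", common_causes=["Z"])
# Step 2: check identifiability of the effect.
estimand = model.identify_effect()
# Step 3: estimate the effect with a statistical estimator.
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
# Step 4: refute the estimate with a robustness check.
refute = model.refute_estimate(estimand, estimate, method_name="placebo_treatment_refuter")
print(estimate.value, refute.new_effect)  # placebo effect should be near zero
```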
31147,em,"We study estimation of causal effects in staggered rollout designs, i.e.
settings where there is staggered treatment adoption and the timing of
treatment is as-good-as randomly assigned. We derive the most efficient
estimator in a class of estimators that nests several popular generalized
difference-in-differences methods. A feasible plug-in version of the efficient
estimator is asymptotically unbiased with efficiency (weakly) dominating that
of existing approaches. We provide both $t$-based and permutation-test-based
methods for inference. In an application to a training program for police
officers, confidence intervals for the proposed estimator are as much as eight
times shorter than for existing approaches.",Efficient Estimation for Staggered Rollout Designs,2021-02-02 07:04:18,"Jonathan Roth, Pedro H. C. Sant'Anna","http://arxiv.org/abs/2102.01291v7, http://arxiv.org/pdf/2102.01291v7",econ.EM
31134,em,"We offer straightforward theoretical results that justify incorporating
machine learning in the standard linear instrumental variable setting. The key
idea is to use machine learning, combined with sample-splitting, to predict the
treatment variable from the instrument and any exogenous covariates, and then
use this predicted treatment and the covariates as technical instruments to
recover the coefficients in the second-stage. This allows the researcher to
extract non-linear co-variation between the treatment and instrument that may
dramatically improve estimation precision and robustness by boosting instrument
strength. Importantly, we constrain the machine-learned predictions to be
linear in the exogenous covariates, thus avoiding spurious identification
arising from non-linear relationships between the treatment and the covariates.
We show that this approach delivers consistent and asymptotically normal
estimates under weak conditions and that it may be adapted to be
semiparametrically efficient (Chamberlain, 1992). Our method preserves standard
intuitions and interpretations of linear instrumental variable methods,
including under weak identification, and provides a simple, user-friendly
upgrade to the applied economics toolbox. We illustrate our method with an
example in law and criminal justice, examining the causal effect of appellate
court reversals on district court sentencing decisions.",Mostly Harmless Machine Learning: Learning Optimal Instruments in Linear IV Models,2020-11-12 04:55:11,"Jiafeng Chen, Daniel L. Chen, Greg Lewis","http://arxiv.org/abs/2011.06158v3, http://arxiv.org/pdf/2011.06158v3",econ.EM
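The key step in the abstract above is to learn the instrument-to-treatment relationship with machine learning on one subsample and use the prediction as a technical instrument on the other. The sketch below illustrates that split-sample idea in a stripped-down setting with no exogenous covariates (so the paper's linearity constraint on covariates does not arise); the data and forest settings are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 4000
z = rng.normal(size=n)
u = rng.normal(size=n)                               # unobserved confounder
d = np.sin(z) + 0.5 * u + 0.3 * rng.normal(size=n)   # nonlinear first stage
y = 1.5 * d + u + rng.normal(size=n)                 # true effect = 1.5

# Sample splitting: learn the first stage on fold A, build the instrument on fold B.
idx = rng.permutation(n)
a, b = idx[: n // 2], idx[n // 2:]
ml = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
ml.fit(z[a].reshape(-1, 1), d[a])
z_hat = ml.predict(z[b].reshape(-1, 1))              # machine-learned instrument

# Just-identified IV on fold B using z_hat as the instrument for d.
d_b, y_b = d[b], y[b]
iv_estimate = np.cov(y_b, z_hat)[0, 1] / np.cov(d_b, z_hat)[0, 1]
print(iv_estimate)  # should be near 1.5
```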
31135,em,"There is increasing interest in allocating treatments based on observed
individual characteristics: examples include targeted marketing, individualized
credit offers, and heterogeneous pricing. Treatment personalization introduces
incentives for individuals to modify their behavior to obtain a better
treatment. Strategic behavior shifts the joint distribution of covariates and
potential outcomes. The optimal rule without strategic behavior allocates
treatments only to those with a positive Conditional Average Treatment Effect.
With strategic behavior, we show that the optimal rule can involve
randomization, allocating treatments with less than 100% probability even to
those who respond positively on average to the treatment. We propose a
sequential experiment based on Bayesian Optimization that converges to the
optimal treatment rule without parametric assumptions on individual strategic
behavior.",Treatment Allocation with Strategic Agents,2020-11-12 20:40:53,Evan Munro,"http://arxiv.org/abs/2011.06528v5, http://arxiv.org/pdf/2011.06528v5",econ.EM
31136,em,"We study the impact of weak identification in discrete choice models, and
provide insights into the determinants of identification strength in these
models. Using these insights, we propose a novel test that can consistently
detect weak identification in commonly applied discrete choice models, such as
probit, logit, and many of their extensions. Furthermore, we demonstrate that
when the null hypothesis of weak identification is rejected, Wald-based
inference can be carried out using standard formulas and critical values. A
Monte Carlo study compares our proposed testing approach against commonly
applied weak identification tests. The results simultaneously demonstrate the
good performance of our approach and the fundamental failure of using
conventional weak identification tests for linear models in the discrete choice
model context. Furthermore, we compare our approach against those commonly
applied in the literature in two empirical examples: married women labor force
participation, and US food aid and civil conflicts.",Weak Identification in Discrete Choice Models,2020-11-13 07:21:15,"David T. Frazier, Eric Renault, Lina Zhang, Xueyan Zhao","http://arxiv.org/abs/2011.06753v2, http://arxiv.org/pdf/2011.06753v2",econ.EM
31137,em,"This paper studies experimental designs for estimation and inference on
policies with spillover effects. Units are organized into a finite number of
large clusters and interact in unknown ways within each cluster. First, we
introduce a single-wave experiment that, by varying the randomization across
cluster pairs, estimates the marginal effect of a change in treatment
probabilities, taking spillover effects into account. Using the marginal
effect, we propose a test for policy optimality. Second, we design a
multiple-wave experiment to estimate welfare-maximizing treatment rules. We
provide strong theoretical guarantees and an implementation in a large-scale
field experiment.",Policy design in experiments with unknown interference,2020-11-16 21:58:54,"Davide Viviano, Jess Rudder","http://arxiv.org/abs/2011.08174v8, http://arxiv.org/pdf/2011.08174v8",econ.EM
31138,em,"Time series forecasting is essential for agents to make decisions.
Traditional approaches rely on statistical methods to forecast given past
numeric values. In practice, end-users often rely on visualizations such as
charts and plots to reason about their forecasts. Inspired by practitioners, we
re-imagine the topic by creating a novel framework to produce visual forecasts,
similar to the way humans intuitively do. In this work, we leverage advances in
deep learning to extend the field of time series forecasting to a visual
setting. We capture input data as an image and train a model to produce the
subsequent image. This approach results in predicting distributions as opposed
to pointwise values. We examine various synthetic and real datasets with
diverse degrees of complexity. Our experiments show that visual forecasting is
effective for cyclic data but somewhat less for irregular data such as stock
price. Importantly, when using image-based evaluation metrics, we find the
proposed visual forecasting method to outperform various numerical baselines,
including ARIMA and a numerical variation of our method. We demonstrate the
benefits of incorporating vision-based approaches in forecasting tasks -- both
for the quality of the forecasts produced, as well as the metrics that can be
used to evaluate them.",Visual Time Series Forecasting: An Image-driven Approach,2020-11-18 05:51:37,"Srijan Sood, Zhen Zeng, Naftali Cohen, Tucker Balch, Manuela Veloso","http://dx.doi.org/10.1145/3490354.3494387, http://arxiv.org/abs/2011.09052v3, http://arxiv.org/pdf/2011.09052v3",cs.CV
31139,em,"This paper analyzes the effect of a discrete treatment Z on a duration T. The
treatment is not randomly assigned. The confounding issue is treated using a
discrete instrumental variable explaining the treatment and independent of the
error term of the model. Our framework is nonparametric and allows for random
right censoring. This specification generates a nonlinear inverse problem and
the average treatment effect is derived from its solution. We provide local and
global identification properties that rely on a nonlinear system of equations.
We propose an estimation procedure to solve this system and derive rates of
convergence and conditions under which the estimator is asymptotically normal.
When censoring makes identification fail, we develop partial identification
results. Our estimators exhibit good finite sample properties in simulations.
We also apply our methodology to the Illinois Reemployment Bonus Experiment.",Nonparametric instrumental regression with right censored duration outcomes,2020-11-20 17:40:42,"Jad Beyhum, Jean-Pierre FLorens, Ingrid Van Keilegom","http://arxiv.org/abs/2011.10423v1, http://arxiv.org/pdf/2011.10423v1",math.ST
31140,em,"A key element in the education of youths is their sensitization to historical
and artistic heritage. We analyze a field experiment conducted in Florence
(Italy) to assess how appropriate incentives assigned to high-school classes
may induce teens to visit museums in their free time. Non-compliance and
spillover effects make the impact evaluation of this clustered encouragement
design challenging. We propose to blend principal stratification and causal
mediation, by defining sub-populations of units according to their compliance
behavior and using the information on their friendship networks as mediator. We
formally define principal natural direct and indirect effects and principal
controlled direct and spillover effects, and use them to disentangle spillovers
from other causal channels. We adopt a Bayesian approach for inference.",Exploiting network information to disentangle spillover effects in a field experiment on teens' museum attendance,2020-11-22 17:30:02,"Silvia Noirjean, Marco Mariani, Alessandra Mattei, Fabrizia Mealli","http://arxiv.org/abs/2011.11023v2, http://arxiv.org/pdf/2011.11023v2",stat.AP
31141,em,"We study identifiability of the parameters in autoregressions defined on a
network. Most identification conditions that are available for these models
either rely on the network being observed repeatedly, are only sufficient, or
require strong distributional assumptions. This paper derives conditions that
apply even when the individuals composing the network are observed only once,
are necessary and sufficient for identification, and require weak
distributional assumptions. We find that the model parameters are generically,
in the measure theoretic sense, identified even without repeated observations,
and analyze the combinations of the interaction matrix and the regressor matrix
causing identification failures. This is done both in the original model and
after certain transformations in the sample space, the latter case being
relevant, for example, in some fixed effects specifications.",Non-Identifiability in Network Autoregressions,2020-11-22 21:38:27,Federico Martellosio,"http://arxiv.org/abs/2011.11084v2, http://arxiv.org/pdf/2011.11084v2",econ.EM
31142,em,"Functional principal component analysis (FPCA) has played an important role
in the development of functional time series analysis. This note investigates
how FPCA can be used to analyze cointegrated functional time series and
proposes a modification of FPCA as a novel statistical tool. Our modified FPCA
not only provides an asymptotically more efficient estimator of the
cointegrating vectors, but also leads to novel FPCA-based tests for examining
essential properties of cointegrated functional time series.",Functional Principal Component Analysis for Cointegrated Functional Time Series,2020-11-25 17:43:30,Won-Ki Seo,"http://arxiv.org/abs/2011.12781v8, http://arxiv.org/pdf/2011.12781v8",stat.ME
31143,em,"In this paper, the technical requirements to perform a cost-benefit analysis
of a Demand Responsive Transport (DRT) service with the traffic simulation
software MATSim are elaborated in order to achieve the long-term goal of
assessing the introduction of a DRT service in G\""ottingen and the surrounding
area. The aim was to determine if the software is suitable for a cost-benefit
analysis while providing a user manual for building a basic simulation that can
be extended with public transport and DRT. The main result is that the software
is suitable for a cost-benefit analysis of a DRT service. In particular, the
most important internal and external costs, such as usage costs of the various
modes of transport and emissions, can be integrated into the simulation
scenarios. Thus, the scenarios presented in this paper can be extended by data
from a mobility study of G\""ottingen and its surroundings in order to achieve
the long-term goal. This paper is aimed at transport economists and researchers
who are not familiar with MATSim and provides them with a guide to the first
steps of working with traffic simulation software.",Implementation of a cost-benefit analysis of Demand-Responsive Transport with a Multi-Agent Transport Simulation,2020-11-25 19:49:07,"Conny Grunicke, Jan Christian Schlüter, Jani-Pekka Jokinen","http://arxiv.org/abs/2011.12869v2, http://arxiv.org/pdf/2011.12869v2",econ.GN
31144,em,"A general class of time-varying regression models is considered in this
paper. We estimate the regression coefficients by using local linear
M-estimation. For these estimators, weak Bahadur representations are obtained
and are used to construct simultaneous confidence bands. For practical
implementation, we propose a bootstrap based method to circumvent the slow
logarithmic convergence of the theoretical simultaneous bands. Our results
substantially generalize and unify the treatments for several time-varying
regression and auto-regression models. The performance for ARCH and GARCH
models is studied in simulations and a few real-life applications of our study
are presented through analysis of some popular financial datasets.",Simultaneous inference for time-varying models,2020-11-26 10:00:09,"Sayar Karmakar, Stefan Richter, Wei Biao Wu","http://arxiv.org/abs/2011.13157v2, http://arxiv.org/pdf/2011.13157v2",math.ST
31145,em,"We consider the problem of adaptive inference on a regression function at a
point under a multivariate nonparametric regression setting. The regression
function belongs to a H\""older class and is assumed to be monotone with respect
to some or all of the arguments. We derive the minimax rate of convergence for
confidence intervals (CIs) that adapt to the underlying smoothness, and provide
an adaptive inference procedure that obtains this minimax rate. The procedure
differs from that of Cai and Low (2004) and is intended to yield shorter CIs under
practically relevant specifications. The proposed method applies to general
linear functionals of the regression function, and is shown to have favorable
performance compared to existing inference procedures.",Adaptive Inference in Multivariate Nonparametric Regression Models Under Monotonicity,2020-11-29 00:21:04,"Koohyun Kwon, Soonwoo Kwon","http://arxiv.org/abs/2011.14219v1, http://arxiv.org/pdf/2011.14219v1",math.ST
31146,em,"We propose a novel bootstrap procedure for dependent data based on Generative
Adversarial networks (GANs). We show that the dynamics of common stationary
time series processes can be learned by GANs and demonstrate that GANs trained
on a single sample path can be used to generate additional samples from the
process. We find that temporal convolutional neural networks provide a suitable
design for the generator and discriminator, and that convincing samples can be
generated on the basis of a vector of iid normal noise. We demonstrate the
finite sample properties of GAN sampling and the suggested bootstrap using
simulations where we compare the performance to circular block bootstrapping in
the case of resampling an AR(1) time series process. We find that resampling
using the GAN can outperform circular block bootstrapping in terms of empirical
coverage.",Time Series (re)sampling using Generative Adversarial Networks,2021-01-30 13:58:15,"Christian M. Dahl, Emil N. Sørensen","http://arxiv.org/abs/2102.00208v1, http://arxiv.org/pdf/2102.00208v1",cs.LG
31508,gn,"This paper aims to provide a comprehensive analysis of the income
inequalities recorded in the EU-15 in the 1995-2014 period and to estimate the
impact of private sector credit on income disparities. In order to estimate the
impact, I used the panel data technique with 15 cross-sections for the first 15
Member States of the European Union, applying a generalized error correction
model.",The impact of private sector credit on income inequalities in European Union (15 member states),2020-07-20 17:49:22,Ionut Jianu,"http://arxiv.org/abs/2007.11408v1, http://arxiv.org/pdf/2007.11408v1",econ.GN
31148,em,"Data acquisition forms the primary step in all empirical research. The
availability of data directly impacts the quality and extent of conclusions and
insights. In particular, larger and more detailed datasets provide convincing
answers even to complex research questions. The main problem is that 'large and
detailed' usually implies 'costly and difficult', especially when the data
medium is paper and books. Human operators and manual transcription have been
the traditional approach for collecting historical data. We instead advocate
the use of modern machine learning techniques to automate the digitisation
process. We give an overview of the potential for applying machine digitisation
for data collection through two illustrative applications. The first
demonstrates that unsupervised layout classification applied to raw scans of
nurse journals can be used to construct a treatment indicator. Moreover, it
allows an assessment of assignment compliance. The second application uses
attention-based neural networks for handwritten text recognition in order to
transcribe age and birth and death dates from a large collection of Danish
death certificates. We describe each step in the digitisation pipeline and
provide implementation insights.",Applications of Machine Learning in Document Digitisation,2021-02-05 18:35:28,"Christian M. Dahl, Torben S. D. Johansen, Emil N. Sørensen, Christian E. Westermann, Simon F. Wittrock","http://arxiv.org/abs/2102.03239v1, http://arxiv.org/pdf/2102.03239v1",cs.CV
31149,em,"Inverse propensity weighting (IPW) is a popular method for estimating
treatment effects from observational data. However, its correctness relies on
the untestable (and frequently implausible) assumption that all confounders
have been measured. This paper introduces a robust sensitivity analysis for IPW
that estimates the range of treatment effects compatible with a given amount of
unobserved confounding. The estimated range converges to the narrowest possible
interval (under the given assumptions) that must contain the true treatment
effect. Our proposal is a refinement of the influential sensitivity analysis by
Zhao, Small, and Bhattacharya (2019), which we show gives bounds that are too
wide even asymptotically. This analysis is based on new partial identification
results for Tan (2006)'s marginal sensitivity model.",Sharp Sensitivity Analysis for Inverse Propensity Weighting via Quantile Balancing,2021-02-09 00:47:23,"Jacob Dorn, Kevin Guo","http://dx.doi.org/10.1080/01621459.2022.2069572, http://arxiv.org/abs/2102.04543v3, http://arxiv.org/pdf/2102.04543v3",math.ST
31150,em,"In Historical Economics, Persistence studies document the persistence of some
historical phenomenon or leverage this persistence to identify causal
relationships of interest in the present. In this chapter, we analyze the
implications of allowing for heterogeneous treatment effects in these studies.
We delineate their common empirical structure, argue that heterogeneous
treatment effects are likely in their context, and propose minimal abstract
models that help interpret results and guide the development of empirical
strategies to uncover the mechanisms generating the effects.",LATE for History,2021-02-16 17:22:44,"Alberto Bisin, Andrea Moro","http://arxiv.org/abs/2102.08174v1, http://arxiv.org/pdf/2102.08174v1",econ.GN
31151,em,"Adaptive experiments, including efficient average treatment effect estimation
and multi-armed bandit algorithms, have garnered attention in various
applications, such as social experiments, clinical trials, and online
advertisement optimization. This paper considers estimating the mean outcome of
an action from samples obtained in adaptive experiments. In causal inference,
the mean outcome of an action has a crucial role, and the estimation is an
essential task, where the average treatment effect estimation and off-policy
value estimation are its variants. In adaptive experiments, the probability of
choosing an action (logging policy) is allowed to be sequentially updated based
on past observations. Due to this logging policy depending on the past
observations, the samples are often not independent and identically distributed
(i.i.d.), making developing an asymptotically normal estimator difficult. A
typical approach for this problem is to assume that the logging policy
converges to a time-invariant function. However, this assumption is restrictive
in various applications, such as when the logging policy fluctuates or becomes
zero at some periods. To mitigate this limitation, we propose another
assumption that the average logging policy converges to a time-invariant
function and show the doubly robust (DR) estimator's asymptotic normality.
Under the assumption, the logging policy itself can fluctuate or be zero for
some actions. We also show the empirical properties by simulations.",Adaptive Doubly Robust Estimator from Non-stationary Logging Policy under a Convergence of Average Probability,2021-02-17 22:05:53,Masahiro Kato,"http://arxiv.org/abs/2102.08975v2, http://arxiv.org/pdf/2102.08975v2",stat.ME
31152,em,"Survey scientists increasingly face the problem of high-dimensionality in
their research as digitization makes it much easier to construct
high-dimensional (or ""big"") data sets through tools such as online surveys and
mobile applications. Machine learning methods are able to handle such data, and
they have been successfully applied to solve \emph{predictive} problems.
However, in many situations, survey statisticians want to learn about
\emph{causal} relationships to draw conclusions and be able to transfer the
findings of one survey to another. Standard machine learning methods provide
biased estimates of such relationships. We introduce into survey statistics the
double machine learning approach, which gives approximately unbiased estimators
of causal parameters, and show how it can be used to analyze survey nonresponse
in a high-dimensional panel setting.",Big Data meets Causal Survey Research: Understanding Nonresponse in the Recruitment of a Mixed-mode Online Panel,2021-02-17 22:37:53,"Barbara Felderer, Jannis Kueck, Martin Spindler","http://arxiv.org/abs/2102.08994v1, http://arxiv.org/pdf/2102.08994v1",stat.ME
31153,em,"Feature-based dynamic pricing is an increasingly popular model of setting
prices for highly differentiated products with applications in digital
marketing, online sales, real estate and so on. The problem was formally
studied as an online learning problem [Javanmard & Nazerzadeh, 2019] where a
seller needs to propose prices on the fly for a sequence of $T$ products based
on their features $x$ while having a small regret relative to the best --
""omniscient"" -- pricing strategy she could have come up with in hindsight. We
revisit this problem and provide two algorithms (EMLP and ONSP) for stochastic
and adversarial feature settings, respectively, and prove the optimal
$O(d\log{T})$ regret bounds for both. In comparison, the best existing results
are $O\left(\min\left\{\frac{1}{\lambda_{\min}^2}\log{T},
\sqrt{T}\right\}\right)$ and $O(T^{2/3})$ respectively, with $\lambda_{\min}$
being the smallest eigenvalue of $\mathbb{E}[xx^T]$ that could be arbitrarily
close to $0$. We also prove an $\Omega(\sqrt{T})$ information-theoretic lower
bound for a slightly more general setting, which demonstrates that
""knowing-the-demand-curve"" leads to an exponential improvement in feature-based
dynamic pricing.",Logarithmic Regret in Feature-based Dynamic Pricing,2021-02-20 03:45:33,"Jianyu Xu, Yu-Xiang Wang","http://arxiv.org/abs/2102.10221v2, http://arxiv.org/pdf/2102.10221v2",cs.LG
31154,em,"Time series forecasting is essential for decision making in many domains. In
this work, we address the challenge of predicting the price evolution of
multiple potentially interacting financial assets. A solution to this problem
has obvious importance for governments, banks, and investors. Statistical
methods such as Auto Regressive Integrated Moving Average (ARIMA) are widely
applied to these problems. In this paper, we propose to approach economic time
series forecasting of multiple financial assets in a novel way via video
prediction. Given past prices of multiple potentially interacting financial
assets, we aim to predict their price evolution in the future. Instead of
treating the snapshot of prices at each time point as a vector, we spatially
layout these prices in 2D as an image, such that we can harness the power of
CNNs in learning a latent representation for these financial assets. Thus, the
history of these prices becomes a sequence of images, and our goal becomes
predicting future images. We build on a state-of-the-art video prediction
method for forecasting future images. Our experiments involve the prediction
task of the price evolution of nine financial assets traded in U.S. stock
markets. The proposed method outperforms baselines including ARIMA, Prophet,
and variations of the proposed method, demonstrating the benefits of harnessing
the power of CNNs in the problem of economic time series forecasting.",Deep Video Prediction for Time Series Forecasting,2021-02-24 07:27:23,"Zhen Zeng, Tucker Balch, Manuela Veloso","http://arxiv.org/abs/2102.12061v2, http://arxiv.org/pdf/2102.12061v2",cs.CV
31155,em,"This paper proposes a dynamic process of portfolio risk measurement to
address potential information loss. The proposed model takes advantage of
financial big data to incorporate out-of-target-portfolio information that may
be missed when one considers the Value at Risk (VaR) measures only from certain
assets of the portfolio. We investigate how the curse of dimensionality can be
overcome in the use of financial big data and discuss where and when benefits
occur from a large number of assets. In this regard, the proposed approach is
the first to suggest the use of financial big data to improve the accuracy of
risk analysis. We compare the proposed model with benchmark approaches and
empirically show that the use of financial big data improves small portfolio
risk analysis. Our findings are useful for portfolio managers and financial
regulators, who may seek an innovation to improve the accuracy of portfolio
risk estimation.",Next Generation Models for Portfolio Risk Management: An Approach Using Financial Big Data,2021-02-25 14:09:46,"Kwangmin Jung, Donggyu Kim, Seunghyeon Yu","http://dx.doi.org/10.1111/jori.12374, http://arxiv.org/abs/2102.12783v3, http://arxiv.org/pdf/2102.12783v3",q-fin.RM
31156,em,"During online decision making in Multi-Armed Bandits (MAB), one needs to
conduct inference on the true mean reward of each arm based on data collected
so far at each step. However, since the arms are adaptively selected--thereby
yielding non-iid data--conducting inference accurately is not straightforward.
In particular, sample averaging, which is used in the family of UCB and
Thompson sampling (TS) algorithms, does not provide a good choice as it suffers
from bias and a lack of good statistical properties (e.g. asymptotic
normality). Our thesis in this paper is that more sophisticated inference
schemes that take into account the adaptive nature of the sequentially
collected data can unlock further performance gains, even though both UCB and
TS type algorithms are optimal in the worst case. In particular, we propose a
variant of TS-style algorithms--which we call doubly adaptive TS--that
leverages recent advances in causal inference and adaptively reweights the
terms of a doubly robust estimator on the true mean reward of each arm. Through
20 synthetic domain experiments and a semi-synthetic experiment based on data
from an A/B test of a web service, we demonstrate that using an adaptive
inferential scheme (while still retaining the exploration efficacy of TS)
provides clear benefits in online decision making: the proposed DATS algorithm
has superior empirical performance to existing baselines (UCB and TS) in terms
of regret and sample complexity in identifying the best arm. In addition, we
also provide a finite-time regret bound of doubly adaptive TS that matches (up
to log factors) those of UCB and TS algorithms, thereby establishing that its
improved practical benefits do not come at the expense of worst-case
suboptimality.",Online Multi-Armed Bandits with Adaptive Inference,2021-02-26 01:29:25,"Maria Dimakopoulou, Zhimei Ren, Zhengyuan Zhou","http://arxiv.org/abs/2102.13202v2, http://arxiv.org/pdf/2102.13202v2",cs.LG
31157,em,"Time-varying parameter (TVP) regressions commonly assume that time-variation
in the coefficients is determined by a simple stochastic process such as a
random walk. While such models are capable of capturing a wide range of dynamic
patterns, the true nature of time variation might stem from other sources, or
arise from different laws of motion. In this paper, we propose a flexible TVP
VAR that assumes the TVPs to depend on a panel of partially latent covariates.
The latent parts of these covariates differ in their state dynamics and thus
capture smoothly evolving or abruptly changing coefficients. To determine which
of these covariates are important, and thus to decide on the appropriate state
evolution, we introduce Bayesian shrinkage priors to perform model selection.
As an empirical application, we forecast the US term structure of interest
rates and show that our approach performs well relative to a set of competing
models. We then show how the model can be used to explain structural breaks in
coefficients related to the US yield curve.",General Bayesian time-varying parameter VARs for predicting government bond yields,2021-02-26 14:02:58,"Manfred M. Fischer, Niko Hauzenberger, Florian Huber, Michael Pfarrhofer","http://arxiv.org/abs/2102.13393v1, http://arxiv.org/pdf/2102.13393v1",econ.EM
31158,em,"Various parametric volatility models for financial data have been developed
to incorporate high-frequency realized volatilities and better capture market
dynamics. However, because high-frequency trading data are not available during
the close-to-open period, the volatility models often ignore volatility
information over the close-to-open period and thus may suffer from loss of
important information relevant to market dynamics. In this paper, to account
for whole-day market dynamics, we propose an overnight volatility model based
on It\^o diffusions to accommodate two different instantaneous volatility
processes for the open-to-close and close-to-open periods. We develop a
weighted least squares method to estimate model parameters for two different
periods and investigate its asymptotic properties. We conduct a simulation
study to check the finite sample performance of the proposed model and method.
Finally, we apply the proposed approaches to real trading data.",Overnight GARCH-Itô Volatility Models,2021-02-25 00:51:44,"Donggyu Kim, Minseok Shin, Yazhen Wang","http://arxiv.org/abs/2102.13467v2, http://arxiv.org/pdf/2102.13467v2",q-fin.ST
31509,gn,"This research aims to provide an overview of the existing inequalities and
their drivers in the member states of the European Union as well as their
developments in the 2002-2008 and 2009-2015 sub-periods. It also analyses the
impact of health and education government spending on income inequality in the
European Union over the 2002-2015 period. In this context, I applied the
Estimated Generalized Least Squares method using panel data for the 28-member
states of the European Union.",The impact of government health and education expenditure on income inequality in European Union,2020-07-20 17:53:35,Ionut Jianu,"http://arxiv.org/abs/2007.11409v1, http://arxiv.org/pdf/2007.11409v1",econ.GN
31159,em,"Classical two-sample permutation tests for equality of distributions have
exact size in finite samples, but they fail to control size for testing
equality of parameters that summarize each distribution. This paper proposes
permutation tests for equality of parameters that are estimated at root-$n$ or
slower rates. Our general framework applies to both parametric and
nonparametric models, with two samples or one sample split into two subsamples.
Our tests have correct size asymptotically while preserving exact size in
finite samples when distributions are equal. They have no loss in local
asymptotic power compared to tests that use asymptotic critical values. We
propose confidence sets with correct coverage in large samples that also have
exact coverage in finite samples if distributions are equal up to a
transformation. We apply our theory to four commonly-used hypothesis tests of
nonparametric functions evaluated at a point. Lastly, simulations show good
finite sample properties, and two empirical examples illustrate our tests in
practice.",Permutation Tests at Nonparametric Rates,2021-02-26 21:30:22,"Marinho Bertanha, EunYi Chung","http://arxiv.org/abs/2102.13638v3, http://arxiv.org/pdf/2102.13638v3",econ.EM
31160,em,"This paper explores the use of deep neural networks for semiparametric
estimation of economic models of maximizing behavior in production or discrete
choice. We argue that certain deep networks are particularly well suited as a
nonparametric sieve to approximate regression functions that result from
nonlinear latent variable models of continuous or discrete optimization.
Multi-stage models of this type will typically generate rich interaction
effects between regressors (""inputs"") in the regression function so that there
may be no plausible separability restrictions on the ""reduced-form"" mapping
from inputs to outputs to alleviate the curse of dimensionality. Rather,
economic shape, sparsity, or separability restrictions either at a global level
or intermediate stages are usually stated in terms of the latent variable
model. We show that restrictions of this kind are imposed in a more
straightforward manner if a sufficiently flexible version of the latent
variable model is in fact used to approximate the unknown regression function.",Structural Sieves,2021-12-01 19:37:02,Konrad Menzel,"http://arxiv.org/abs/2112.01377v2, http://arxiv.org/pdf/2112.01377v2",econ.EM
31161,em,"A main difficulty in actuarial claim size modeling is that there is no simple
off-the-shelf distribution that simultaneously provides a good distributional
model for the main body and the tail of the data. In particular, covariates may
have different effects for small and for large claim sizes. To cope with this
problem, we introduce a deep composite regression model whose splicing point is
given in terms of a quantile of the conditional claim size distribution rather
than a constant. To facilitate M-estimation for such models, we introduce and
characterize the class of strictly consistent scoring functions for the triplet
consisting of a quantile, as well as the lower and upper expected shortfall beyond
that quantile. In a second step, this elicitability result is applied to fit
deep neural network regression models. We demonstrate the applicability of our
approach and its superiority over classical approaches on a real accident
insurance data set.",Deep Quantile and Deep Composite Model Regression,2021-12-06 17:31:25,"Tobias Fissler, Michael Merz, Mario V. Wüthrich","http://dx.doi.org/10.1016/j.insmatheco.2023.01.001, http://arxiv.org/abs/2112.03075v1, http://arxiv.org/pdf/2112.03075v1",stat.ME
31162,em,"Local volatility is a versatile option pricing model due to its state
dependent diffusion coefficient. Calibration is, however, non-trivial as it
involves both proposing a hypothesis model of the latent function and a method
for fitting it to data. In this paper we present novel Bayesian inference with
Gaussian process priors. We obtain a rich representation of the local
volatility function with a probabilistic notion of uncertainty attached to the
calibration. We propose an inference algorithm and apply our approach to S&P 500
market data.",A Bayesian take on option pricing with Gaussian processes,2021-12-07 17:14:30,"Martin Tegner, Stephen Roberts","http://arxiv.org/abs/2112.03718v1, http://arxiv.org/pdf/2112.03718v1",q-fin.MF
31163,em,"Matching on covariates is a well-established framework for estimating causal
effects in observational studies. The principal challenge stems from the often
high-dimensional structure of the problem. Many methods have been introduced to
address this, with different advantages and drawbacks in computational and
statistical performance as well as interpretability. This article introduces a
natural optimal matching method based on multimarginal unbalanced optimal
transport that possesses many useful properties in this regard. It provides
interpretable weights based on the distance of matched individuals, can be
efficiently implemented via the iterative proportional fitting procedure, and
can match several treatment arms simultaneously. Importantly, the proposed
method only selects good matches from either group, hence is competitive with
the classical k-nearest neighbors approach in terms of bias and variance in
finite samples. Moreover, we prove a central limit theorem for the empirical
process of the potential functions of the optimal coupling in the unbalanced
optimal transport problem with a fixed penalty term. This implies a parametric
rate of convergence of the empirically obtained weights to the optimal weights
in the population for a fixed penalty term.",Matching for causal effects via multimarginal unbalanced optimal transport,2021-12-08 19:45:31,"Florian Gunsilius, Yuliang Xu","http://arxiv.org/abs/2112.04398v2, http://arxiv.org/pdf/2112.04398v2",stat.ME
31164,em,"Historical processes manifest remarkable diversity. Nevertheless, scholars
have long attempted to identify patterns and categorize historical actors and
influences with some success. A stochastic process framework provides a
structured approach for the analysis of large historical datasets that allows
for detection of sometimes surprising patterns, identification of relevant
causal actors both endogenous and exogenous to the process, and comparison
between different historical cases. The combination of data, analytical tools
and the organizing theoretical framework of stochastic processes complements
traditional narrative approaches in history and archaeology.",The Past as a Stochastic Process,2021-12-11 03:15:59,"David H. Wolpert, Michael H. Price, Stefani A. Crabtree, Timothy A. Kohler, Jurgen Jost, James Evans, Peter F. Stadler, Hajime Shimao, Manfred D. Laubichler","http://arxiv.org/abs/2112.05876v1, http://arxiv.org/pdf/2112.05876v1",stat.AP
31189,em,"This paper studies the local asymptotic relationship between two scalar
estimates. We define sensitivity of a target estimate to a control estimate to
be the directional derivative of the target functional with respect to the
gradient direction of the control functional. Sensitivity according to the
information metric on the model manifold is the asymptotic covariance of
regular efficient estimators. Sensitivity according to a general policy metric
on the model manifold can be obtained from influence functions of regular
efficient estimators. Policy sensitivity has a local counterfactual
interpretation, where the ceteris paribus change to a counterfactual
distribution is specified by the combination of a control parameter and a
Riemannian metric on the model manifold.",Sensitivity of Regular Estimators,2018-05-23 00:57:42,Yaroslav Mukhin,"http://arxiv.org/abs/1805.08883v1, http://arxiv.org/pdf/1805.08883v1",econ.EM
31165,em,"This paper introduces a simple and tractable sieve estimation of
semiparametric conditional factor models with latent factors. We establish
large-$N$-asymptotic properties of the estimators without requiring large $T$.
We also develop a simple bootstrap procedure for conducting inference about the
conditional pricing errors as well as the shapes of the factor loading
functions. These results enable us to estimate conditional factor structure of
a large set of individual assets by utilizing arbitrary nonlinear functions of
a number of characteristics without the need to pre-specify the factors, while
allowing us to disentangle the characteristics' role in capturing factor betas
from alphas (i.e., undiversifiable risk from mispricing). We apply these
methods to the cross-section of individual U.S. stock returns and find strong
evidence of large nonzero pricing errors that combine to produce arbitrage
portfolios with Sharpe ratios above 3. We also document a significant decline
in apparent mispricing over time.",Semiparametric Conditional Factor Models: Estimation and Inference,2021-12-14 05:46:21,"Qihui Chen, Nikolai Roussanov, Xiaoliang Wang","http://arxiv.org/abs/2112.07121v4, http://arxiv.org/pdf/2112.07121v4",econ.EM
31166,em,"Predicting the success of startup companies is of great importance for both
startup companies and investors. It is difficult due to the lack of available
data and appropriate general methods. With data platforms like Crunchbase
aggregating information on startup companies, it is possible to make such predictions
with machine learning algorithms. Existing research suffers from the data
sparsity problem as most early-stage startup companies do not have much data
available to the public. We try to leverage the recent algorithms to solve this
problem. We investigate several machine learning algorithms with a large
dataset from Crunchbase. The results suggest that LightGBM and XGBoost perform
best and achieve 53.03% and 52.96% F1 scores. We interpret the predictions from
the perspective of feature contribution. We construct portfolios based on the
models and achieve high success rates. These findings have substantial
implications for how machine learning methods can help startup companies and
investors.",Solving the Data Sparsity Problem in Predicting the Success of the Startups with Machine Learning Methods,2021-12-15 12:21:32,"Dafei Yin, Jing Li, Gaosheng Wu","http://arxiv.org/abs/2112.07985v1, http://arxiv.org/pdf/2112.07985v1",cs.LG
31167,em,"This paper presents a framework for how to incorporate prior sources of
information into the design of a sequential experiment. These sources can
include previous experiments, expert opinions, or the experimenter's own
introspection. We formalize this problem using a Bayesian approach that maps
each source to a Bayesian model. These models are aggregated according to their
associated posterior probabilities. We evaluate a broad class of policy rules
according to three criteria: whether the experimenter learns the parameters of
the payoff distributions, the probability that the experimenter chooses the
wrong treatment when deciding to stop the experiment, and the average rewards.
We show that our framework exhibits several nice finite sample theoretical
guarantees, including robustness to any source that is not externally valid.",Reinforcing RCTs with Multiple Priors while Learning about External Validity,2021-12-16 22:38:09,"Frederico Finan, Demian Pouzo","http://arxiv.org/abs/2112.09170v4, http://arxiv.org/pdf/2112.09170v4",econ.EM
31168,em,"In all areas of human knowledge, datasets are increasing in both size and
complexity, creating the need for richer statistical models. This trend is also
true for economic data, where high-dimensional and nonlinear/nonparametric
inference is the norm in several fields of applied econometric work. The
purpose of this paper is to introduce the reader to the world of Bayesian model
determination, by surveying modern shrinkage and variable selection algorithms
and methodologies. Bayesian inference is a natural probabilistic framework for
quantifying uncertainty and learning about model parameters, and this feature
is particularly important for inference in modern models of high dimensions and
increased complexity.
  We begin with a linear regression setting in order to introduce various
classes of priors that lead to shrinkage/sparse estimators of comparable value
to popular penalized likelihood estimators (e.g.\ ridge, lasso). We explore
various methods of exact and approximate inference, and discuss their pros and
cons. Finally, we explore how priors developed for the simple regression
setting can be extended in a straightforward way to various classes of
interesting econometric models. In particular, the following case-studies are
considered, that demonstrate application of Bayesian shrinkage and variable
selection strategies to popular econometric contexts: i) vector autoregressive
models; ii) factor models; iii) time-varying parameter regressions; iv)
confounder selection in treatment effects models; and v) quantile regression
models. A MATLAB package and an accompanying technical manual allow the reader
to replicate many of the algorithms described in this review.",Bayesian Approaches to Shrinkage and Sparse Estimation,2021-12-22 12:35:27,"Dimitris Korobilis, Kenichi Shimizu","http://arxiv.org/abs/2112.11751v1, http://arxiv.org/pdf/2112.11751v1",econ.EM
31169,em,"In order to evaluate the impact of a policy intervention on a group of units
over time, it is important to correctly estimate the average treatment effect
(ATE) measure. Due to lack of robustness of the existing procedures of
estimating ATE from panel data, in this paper, we introduce a robust estimator
of the ATE and the subsequent inference procedures using the popular approach
of minimum density power divergence inference. Asymptotic properties of the
proposed ATE estimator are derived and used to construct robust test statistics
for testing parametric hypotheses related to the ATE. Besides asymptotic
analyses of efficiency and power, extensive simulation studies are conducted
to study the finite-sample performances of our proposed estimation and testing
procedures under both pure and contaminated data. The robustness of the ATE
estimator is further investigated theoretically through influence function
analyses. Finally, our proposal is applied to study the long-term economic
effects of the 2004 Indian Ocean earthquake and tsunami on the (per-capita)
gross domestic products (GDP) of the five most affected countries, namely
Indonesia, Sri Lanka, Thailand, India and Maldives.",Robust Estimation of Average Treatment Effects from Panel Data,2021-12-25 15:20:35,"Sayoni Roychowdhury, Indrila Ganguly, Abhik Ghosh","http://arxiv.org/abs/2112.13228v2, http://arxiv.org/pdf/2112.13228v2",stat.ME
31202,em,"Binscatter is a popular method for visualizing bivariate relationships and
conducting informal specification testing. We study the properties of this
method formally and develop enhanced visualization and econometric binscatter
tools. These include estimating conditional means with optimal binning and
quantifying uncertainty. We also highlight a methodological problem related to
covariate adjustment that can yield incorrect conclusions. We revisit two
applications using our methodology and find substantially different results
relative to those obtained using prior informal binscatter methods. General
purpose software in Python, R, and Stata is provided. Our technical work is of
independent interest for the nonparametric partition-based estimation
literature.",On Binscatter,2019-02-25 23:53:04,"Matias D. Cattaneo, Richard K. Crump, Max H. Farrell, Yingjie Feng","http://arxiv.org/abs/1902.09608v4, http://arxiv.org/pdf/1902.09608v4",econ.EM
31170,em,"We derive general, yet simple, sharp bounds on the size of the omitted
variable bias for a broad class of causal parameters that can be identified as
linear functionals of the conditional expectation function of the outcome. Such
functionals encompass many of the traditional targets of investigation in
causal inference studies, such as, for example, (weighted) average of potential
outcomes, average treatment effects (including subgroup effects, such as the
effect on the treated), (weighted) average derivatives, and policy effects from
shifts in covariate distribution -- all for general, nonparametric causal
models. Our construction relies on the Riesz-Frechet representation of the
target functional. Specifically, we show how the bound on the bias depends only
on the additional variation that the latent variables create both in the
outcome and in the Riesz representer for the parameter of interest. Moreover,
in many important cases (e.g., average treatment effects and average
derivatives) the bound is shown to depend on easily interpretable quantities
that measure the explanatory power of the omitted variables. Therefore, simple
plausibility judgments on the maximum explanatory power of omitted variables
(in explaining treatment and outcome variation) are sufficient to place overall
bounds on the size of the bias. Furthermore, we use debiased machine learning
to provide flexible and efficient statistical inference on learnable components
of the bounds. Finally, empirical examples demonstrate the usefulness of the
approach.",Long Story Short: Omitted Variable Bias in Causal Machine Learning,2021-12-26 18:38:23,"Victor Chernozhukov, Carlos Cinelli, Whitney Newey, Amit Sharma, Vasilis Syrgkanis","http://arxiv.org/abs/2112.13398v4, http://arxiv.org/pdf/2112.13398v4",econ.EM
31171,em,"Nearest neighbor (NN) matching as a tool to align data sampled from different
groups is both conceptually natural and practically well-used. In a landmark
paper, Abadie and Imbens (2006) provided the first large-sample analysis of NN
matching under, however, a crucial assumption that the number of NNs, $M$, is
fixed. This manuscript reveals something new out of their study and shows that,
once allowing $M$ to diverge with the sample size, an intrinsic statistic in
their analysis actually constitutes a consistent estimator of the density
ratio. Furthermore, through selecting a suitable $M$, this statistic can attain
the minimax lower bound of estimation over a Lipschitz density function class.
Consequently, with a diverging $M$, the NN matching provably yields a doubly
robust estimator of the average treatment effect and is semiparametrically
efficient if the density functions are sufficiently smooth and the outcome
model is appropriately specified. It can thus be viewed as a precursor of
double machine learning estimators.",Estimation based on nearest neighbor matching: from density ratio to average treatment effect,2021-12-27 07:45:29,"Zhexiao Lin, Peng Ding, Fang Han","http://arxiv.org/abs/2112.13506v1, http://arxiv.org/pdf/2112.13506v1",math.ST
31172,em,"We study the asymptotic normality of two feasible estimators of the
integrated volatility of volatility based on the Fourier methodology, which
does not require the pre-estimation of the spot volatility. We show that the
bias-corrected estimator reaches the optimal rate $n^{1/4}$, while the
estimator without bias-correction has a slower convergence rate and a smaller
asymptotic variance. Additionally, we provide simulation results that support
the theoretical asymptotic distribution of the rate-efficient estimator and
show the accuracy of the latter in comparison with a rate-optimal estimator
based on the pre-estimation of the spot volatility. Finally, using the
rate-optimal Fourier estimator, we reconstruct the time series of the daily
volatility of volatility of the S\&P500 and EUROSTOXX50 indices over long
samples and provide novel insight into the existence of stylized facts about
the volatility of volatility dynamics.",Volatility of volatility estimation: central limit theorems for the Fourier transform estimator and empirical study of the daily time series stylized facts,2021-12-29 15:53:02,"Giacomo Toscano, Giulia Livieri, Maria Elvira Mancino, Stefano Marmi","http://arxiv.org/abs/2112.14529v3, http://arxiv.org/pdf/2112.14529v3",math.ST
31173,em,"The package High-dimensional Metrics (\Rpackage{hdm}) is an evolving
collection of statistical methods for estimation and quantification of
uncertainty in high-dimensional approximately sparse models. It focuses on
providing confidence intervals and significance testing for (possibly many)
low-dimensional subcomponents of the high-dimensional parameter vector.
Efficient estimators and uniformly valid confidence intervals for regression
coefficients on target variables (e.g., treatment or policy variable) in a
high-dimensional approximately sparse regression model, for average treatment
effect (ATE) and average treatment effect for the treated (ATET), as well as for
extensions of these parameters to the endogenous setting are provided. Theory
grounded, data-driven methods for selecting the penalization parameter in Lasso
regressions under heteroscedastic and non-Gaussian errors are implemented.
Moreover, joint/simultaneous confidence intervals for regression coefficients
of a high-dimensional sparse regression are implemented, including a joint
significance test for Lasso regression. Data sets which have been used in the
literature and might be useful for classroom demonstration and for testing new
estimators are included. \R and the package \Rpackage{hdm} are open-source
software projects and can be freely downloaded from CRAN:
\texttt{http://cran.r-project.org}.",High-Dimensional Metrics in R,2016-03-05 10:57:26,"Victor Chernozhukov, Chris Hansen, Martin Spindler","http://arxiv.org/abs/1603.01700v2, http://arxiv.org/pdf/1603.01700v2",stat.ML
31174,em,"Estimating the long-term effects of treatments is of interest in many fields.
A common challenge in estimating such treatment effects is that long-term
outcomes are unobserved in the time frame needed to make policy decisions. One
approach to overcome this missing data problem is to analyze treatment effects
on an intermediate outcome, often called a statistical surrogate, if it
satisfies the condition that treatment and outcome are independent conditional
on the statistical surrogate. The validity of the surrogacy condition is often
controversial. Here we exploit the fact that in modern datasets, researchers
often observe a large number, possibly hundreds or thousands, of intermediate
outcomes, thought to lie on or close to the causal chain between the treatment
and the long-term outcome of interest. Even if none of the individual proxies
satisfies the statistical surrogacy criterion by itself, using multiple proxies
can be useful in causal inference. We focus primarily on a setting with two
samples, an experimental sample containing data about the treatment indicator
and the surrogates and an observational sample containing information about the
surrogates and the primary outcome. We state assumptions under which the
average treatment effect can be identified and estimated with a high-dimensional
vector of proxies that collectively satisfy the surrogacy assumption, derive the
bias from violations of the surrogacy assumption, and show that even
if the primary outcome is also observed in the experimental sample, there is
still information to be gained from using surrogates.",Estimating Treatment Effects using Multiple Surrogates: The Role of the Surrogate Score and the Surrogate Index,2016-03-30 22:45:52,"Susan Athey, Raj Chetty, Guido Imbens, Hyunseung Kang","http://arxiv.org/abs/1603.09326v3, http://arxiv.org/pdf/1603.09326v3",stat.ME
31175,em,"We develop a Bayesian vector autoregressive (VAR) model with multivariate
stochastic volatility that is capable of handling vast dimensional information
sets. Three features are introduced to permit reliable estimation of the model.
First, we assume that the reduced-form errors in the VAR feature a factor
stochastic volatility structure, allowing for conditional equation-by-equation
estimation. Second, we apply recently developed global-local shrinkage priors
to the VAR coefficients to cure the curse of dimensionality. Third, we utilize
recent innovations to efficiently sample from high-dimensional multivariate
Gaussian distributions. This makes simulation-based fully Bayesian inference
feasible when the dimensionality is large but the time series length is
moderate. We demonstrate the merits of our approach in an extensive simulation
study and apply the model to US macroeconomic data to evaluate its forecasting
capabilities.",Sparse Bayesian vector autoregressions in huge dimensions,2017-04-11 14:14:41,"Gregor Kastner, Florian Huber","http://dx.doi.org/10.1002/for.2680, http://arxiv.org/abs/1704.03239v3, http://arxiv.org/pdf/1704.03239v3",stat.CO
31176,em,"This paper proposes a valid bootstrap-based distributional approximation for
M-estimators exhibiting a Chernoff (1964)-type limiting distribution. For
estimators of this kind, the standard nonparametric bootstrap is inconsistent.
The method proposed herein is based on the nonparametric bootstrap, but
restores consistency by altering the shape of the criterion function defining
the estimator whose distribution we seek to approximate. This modification
leads to a generic and easy-to-implement resampling method for inference that
is conceptually distinct from other available distributional approximations. We
illustrate the applicability of our results with four examples in econometrics
and machine learning.",Bootstrap-Based Inference for Cube Root Asymptotics,2017-04-26 14:41:13,"Matias D. Cattaneo, Michael Jansson, Kenichi Nagasawa","http://arxiv.org/abs/1704.08066v3, http://arxiv.org/pdf/1704.08066v3",math.ST
31177,em,"In this paper, we provide some new results for the Weibull-R family of
distributions (Alzaghal, Ghosh and Alzaatreh (2016)). We derive some new
structural properties of the Weibull-R family of distributions. We provide
various characterizations of the family via conditional moments, some functions
of order statistics and via record values.",On some further properties and application of Weibull-R family of distributions,2017-11-01 05:24:18,"Indranil Ghosh, Saralees Nadarajah","http://arxiv.org/abs/1711.00171v1, http://arxiv.org/pdf/1711.00171v1",math.ST
31178,em,"We assess the relationship between model size and complexity in the
time-varying parameter VAR framework via thorough predictive exercises for the
Euro Area, the United Kingdom and the United States. It turns out that
sophisticated dynamics through drifting coefficients are important in small
data sets while simpler models tend to perform better in sizeable data sets. To
combine the best of both worlds, novel shrinkage priors help to mitigate the curse
of dimensionality, resulting in competitive forecasts for all scenarios
considered. Furthermore, we discuss dynamic model selection to improve upon the
best performing individual model for each point in time.",Sophisticated and small versus simple and sizeable: When does it pay off to introduce drifting coefficients in Bayesian VARs?,2017-11-02 02:34:11,"Martin Feldkircher, Florian Huber, Gregor Kastner","http://arxiv.org/abs/1711.00564v2, http://arxiv.org/pdf/1711.00564v2",stat.ME
31179,em,"We develop SHOPPER, a sequential probabilistic model of shopping data.
SHOPPER uses interpretable components to model the forces that drive how a
customer chooses products; in particular, we designed SHOPPER to capture how
items interact with other items. We develop an efficient posterior inference
algorithm to estimate these forces from large-scale data, and we analyze a
large dataset from a major chain grocery store. We are interested in answering
counterfactual queries about changes in prices. We found that SHOPPER provides
accurate predictions even under price interventions, and that it helps identify
complementary and substitutable pairs of products.",SHOPPER: A Probabilistic Model of Consumer Choice with Substitutes and Complements,2017-11-09 22:04:21,"Francisco J. R. Ruiz, Susan Athey, David M. Blei","http://arxiv.org/abs/1711.03560v3, http://arxiv.org/pdf/1711.03560v3",stat.ML
31180,em,"Testing for regime switching when the regime switching probabilities are
specified either as constants (`mixture models') or are governed by a
finite-state Markov chain (`Markov switching models') are long-standing
problems that have also attracted recent interest. This paper considers testing
for regime switching when the regime switching probabilities are time-varying
and depend on observed data (`observation-dependent regime switching').
Specifically, we consider the likelihood ratio test for observation-dependent
regime switching in mixture autoregressive models. The testing problem is
highly nonstandard, involving unidentified nuisance parameters under the null,
parameters on the boundary, singular information matrices, and higher-order
approximations of the log-likelihood. We derive the asymptotic null
distribution of the likelihood ratio test statistic in a general mixture
autoregressive setting using high-level conditions that allow for various forms
of dependence of the regime switching probabilities on past observations, and
we illustrate the theory using two particular mixture autoregressive models.
The likelihood ratio test has a nonstandard asymptotic distribution that can
easily be simulated, and Monte Carlo studies show the test to have satisfactory
finite sample size and power properties.",Testing for observation-dependent regime switching in mixture autoregressive models,2017-11-10 21:40:36,"Mika Meitz, Pentti Saikkonen","http://arxiv.org/abs/1711.03959v1, http://arxiv.org/pdf/1711.03959v1",econ.EM
31181,em,"It is well known that sequential decision making may lead to information
cascades. That is, when agents make decisions based on their private
information, as well as observing the actions of those before them, then it
might be rational to ignore their private signal and imitate the action of
previous individuals. If the individuals are choosing between a right and a
wrong state, and the initial actions are wrong, then the whole cascade will be
wrong. This issue is due to the fact that cascades can be based on very little
information.
  We show that if agents occasionally disregard the actions of others and base
their action only on their private information, then wrong cascades can be
avoided. Moreover, we study the optimal asymptotic rate at which the error
probability at time $t$ can go to zero. The optimal policy is for the player at
time $t$ to follow their private information with probability $p_{t} = c/t$,
leading to a learning rate of $c'/t$, where the constants $c$ and $c'$ are
explicit.",How fragile are information cascades?,2017-11-11 00:49:22,"Yuval Peres, Miklos Z. Racz, Allan Sly, Izabella Stuhl","http://arxiv.org/abs/1711.04024v2, http://arxiv.org/pdf/1711.04024v2",math.PR
31182,em,"We present a robust generalization of the synthetic control method for
comparative case studies. Like the classical method, we present an algorithm to
estimate the unobservable counterfactual of a treatment unit. A distinguishing
feature of our algorithm is that of de-noising the data matrix via singular
value thresholding, which renders our approach robust in multiple facets: it
automatically identifies a good subset of donors, overcomes the challenges of
missing data, and continues to work well in settings where covariate
information may not be provided. To begin, we establish the condition under
which the fundamental assumption in synthetic control-like approaches holds,
i.e. when the linear relationship between the treatment unit and the donor pool
prevails in both the pre- and post-intervention periods. We provide the first
finite sample analysis for a broader class of models, the Latent Variable
Model, in contrast to Factor Models previously considered in the literature.
Further, we show that our de-noising procedure accurately imputes missing
entries, producing a consistent estimator of the underlying signal matrix
provided $p = \Omega( T^{-1 + \zeta})$ for some $\zeta > 0$; here, $p$ is the
fraction of observed data and $T$ is the time interval of interest. Under the
same setting, we prove that the mean-squared-error (MSE) in our prediction
estimation scales as $O(\sigma^2/p + 1/\sqrt{T})$, where $\sigma^2$ is the
noise variance. Using a data aggregation method, we show that the MSE can be
made as small as $O(T^{-1/2+\gamma})$ for any $\gamma \in (0, 1/2)$, leading to
a consistent estimator. We also introduce a Bayesian framework to quantify the
model uncertainty through posterior probabilities. Our experiments, using both
real-world and synthetic datasets, demonstrate that our robust generalization
yields an improvement over the classical synthetic control method.",Robust Synthetic Control,2017-11-19 02:22:34,"Muhammad Jehangir Amjad, Devavrat Shah, Dennis Shen","http://arxiv.org/abs/1711.06940v1, http://arxiv.org/pdf/1711.06940v1",econ.EM
31183,em,"Contextual bandit algorithms are sensitive to the estimation method of the
outcome model as well as the exploration method used, particularly in the
presence of rich heterogeneity or complex outcome models, which can lead to
difficult estimation problems along the path of learning. We study a
consideration for the exploration vs. exploitation framework that does not
arise in multi-armed bandits but is crucial in contextual bandits: the way
exploration and exploitation are conducted in the present affects the bias and
variance in the potential outcome model estimation in subsequent stages of
learning. We develop parametric and non-parametric contextual bandits that
integrate balancing methods from the causal inference literature in their
estimation to make it less prone to problems of estimation bias. We provide the
first regret bound analyses for contextual bandits with balancing in the domain
of linear contextual bandits that match the state of the art regret bounds. We
demonstrate the strong practical advantage of balanced contextual bandits on a
large number of supervised learning datasets and on a synthetic example that
simulates model mis-specification and prejudice in the initial training data.
Additionally, we develop contextual bandits with simpler assignment policies by
leveraging sparse model estimation methods from the econometrics literature and
demonstrate empirically that in the early stages they can improve the rate of
learning and decrease regret.",Estimation Considerations in Contextual Bandits,2017-11-19 23:49:47,"Maria Dimakopoulou, Zhengyuan Zhou, Susan Athey, Guido Imbens","http://arxiv.org/abs/1711.07077v4, http://arxiv.org/pdf/1711.07077v4",stat.ML
31184,em,"This study explores the validity of chain effects of clean water, which are
known as the ""Mills-Reincke phenomenon,"" in early twentieth-century Japan.
Recent studies have reported that water purification systems make substantial
contributions to human capital. Although some studies have
investigated the instantaneous effects of water-supply systems in pre-war
Japan, little is known about the chain effects of these systems. By analyzing
city-level cause-specific mortality data from 1922-1940, we find that a decline
in typhoid deaths by one per 1,000 people decreased the risk of death due to
non-waterborne diseases such as tuberculosis and pneumonia by 0.742-2.942 per
1,000 people. Our finding suggests that the observed Mills-Reincke phenomenon
could have resulted in the relatively rapid decline in the mortality rate in
early twentieth-century Japan.",Chain effects of clean water: The Mills-Reincke phenomenon in early twentieth-century Japan,2018-04-26 13:48:24,"Tatsuki Inoue, Kota Ogasawara","http://arxiv.org/abs/1805.00875v3, http://arxiv.org/pdf/1805.00875v3",stat.AP
31185,em,"A new mixture autoregressive model based on Student's $t$-distribution is
proposed. A key feature of our model is that the conditional $t$-distributions
of the component models are based on autoregressions that have multivariate
$t$-distributions as their (low-dimensional) stationary distributions. That
autoregressions with such stationary distributions exist is not immediate. Our
formulation implies that the conditional mean of each component model is a
linear function of past observations and the conditional variance is also time
varying. Compared to previous mixture autoregressive models our model may
therefore be useful in applications where the data exhibits rather strong
conditional heteroskedasticity. Our formulation also has the theoretical
advantage that conditions for stationarity and ergodicity are always met and
these properties are much more straightforward to establish than is common in
nonlinear autoregressive models. An empirical example employing a realized
kernel series based on S&P 500 high-frequency data shows that the proposed
model performs well in volatility forecasting.",A mixture autoregressive model based on Student's $t$-distribution,2018-05-10 17:58:58,"Mika Meitz, Daniel Preve, Pentti Saikkonen","http://arxiv.org/abs/1805.04010v1, http://arxiv.org/pdf/1805.04010v1",econ.EM
31186,em,"We study the rise in the acceptability fiat money in a Kiyotaki-Wright
economy by developing a method that can determine dynamic Nash equilibria for a
class of search models with genuine heterogenous agents. We also address open
issues regarding the stability properties of pure strategies equilibria and the
presence of multiple equilibria. Experiments illustrate the liquidity
conditions that favor the transition from partial to full acceptance of fiat
money, and the effects of inflationary shocks on production, liquidity, and
trade.",A Dynamic Analysis of Nash Equilibria in Search Models with Fiat Money,2018-05-12 18:12:21,"Federico Bonetto, Maurizio Iacopetta","http://arxiv.org/abs/1805.04733v1, http://arxiv.org/pdf/1805.04733v1",econ.EM
31187,em,"This paper proposes computationally efficient methods that can be used for
instrumental variable quantile regressions (IVQR) and related methods with
statistical guarantees. This is much needed when we investigate heterogeneous
treatment effects, since interactions between the endogenous treatment and
control variables lead to an increased number of endogenous covariates. We
prove that the GMM formulation of IVQR is NP-hard and finding an approximate
solution is also NP-hard. Hence, solving the problem exactly from a purely
computational perspective appears infeasible. Instead, we aim to obtain an estimate
that has good statistical properties and is not necessarily the global solution
of any optimization problem.
  The proposal consists of employing $k$-step correction on an initial
estimate. The initial estimate exploits the latest advances in mixed integer
linear programming and can be computed within seconds. One theoretical
contribution is that such initial estimators and the Jacobian of the moment
condition used in the $k$-step correction need not even be consistent, and merely
$k=4\log n$ fast iterations are needed to obtain an efficient estimator. The
overall proposal scales well to handle extremely large sample sizes because
the lack of a consistency requirement allows one to use a very small subsample to
obtain the initial estimate, and the $k$-step iterations on the full sample can be
implemented efficiently. Another contribution that is of independent interest
is to propose a tuning-free estimator for the Jacobian matrix, whose
definition involves conditional densities. This Jacobian estimator generalizes
bootstrap quantile standard errors and can be efficiently computed via
closed-form solutions. We evaluate the performance of the proposal in
simulations and an empirical example on the heterogeneous treatment effect of
the Job Training Partnership Act.",Learning non-smooth models: instrumental variable quantile regressions and related problems,2018-05-17 19:58:11,Yinchu Zhu,"http://arxiv.org/abs/1805.06855v4, http://arxiv.org/pdf/1805.06855v4",econ.EM
31188,em,"We develop an empirical framework to identify and estimate the effects of
treatments on outcomes of interest when the treatments are the result of
strategic interaction (e.g., bargaining, oligopolistic entry, peer effects). We
consider a model where agents play a discrete game with complete information
whose equilibrium actions (i.e., binary treatments) determine a post-game
outcome in a nonseparable model with endogeneity. Due to the simultaneity in
the first stage, the model as a whole is incomplete and the selection process
fails to exhibit the conventional monotonicity. Absent parametric
restrictions or large support assumptions, this poses challenges in recovering
treatment parameters. To address these challenges, we first establish a
monotonic pattern of the equilibria in the first-stage game in terms of the
number of treatments selected. Based on this finding, we derive bounds on the
average treatment effects (ATEs) under nonparametric shape restrictions and the
existence of excluded exogenous variables. We show that instrument variation
that compensates for strategic substitution helps solve the multiple equilibria
problem. We apply our method to data on airlines and air pollution in cities in
the U.S. We find that (i) the causal effect of each airline on pollution is
positive, and (ii) the effect is increasing in the number of firms but at a
decreasing rate.",Multiple Treatments with Strategic Interaction,2018-05-21 23:05:36,"Jorge Balat, Sukjin Han","http://arxiv.org/abs/1805.08275v2, http://arxiv.org/pdf/1805.08275v2",econ.EM
31190,em,"This paper develops a nonparametric model that represents how sequences of
outcomes and treatment choices influence one another in a dynamic manner. In
this setting, we are interested in identifying the average outcome for
individuals in each period, had a particular treatment sequence been assigned.
The identification of this quantity allows us to identify the average treatment
effects (ATE's) and the ATE's on transitions, as well as the optimal treatment
regimes, namely, the regimes that maximize the (weighted) sum of the average
potential outcomes, possibly less the cost of the treatments. The main
contribution of this paper is to relax the sequential randomization assumption
widely used in the biostatistics literature by introducing a flexible
choice-theoretic framework for a sequence of endogenous treatments. We show
that the parameters of interest are identified under each period's two-way
exclusion restriction, i.e., with instruments excluded from the
outcome-determining process and other exogenous variables excluded from the
treatment-selection process. We also consider partial identification in the
case where the latter variables are not available. Lastly, we extend our
results to a setting where treatments do not appear in every period.",Identification in Nonparametric Models for Dynamic Treatment Effects,2018-05-23 22:37:47,Sukjin Han,"http://arxiv.org/abs/1805.09397v3, http://arxiv.org/pdf/1805.09397v3",econ.EM
31191,em,"Smooth transition autoregressive models are widely used to capture
nonlinearities in univariate and multivariate time series. The existence of a
stationary solution is typically assumed, implicitly or explicitly. In this
paper we describe conditions for stationarity and ergodicity of vector STAR
models. The key condition is that the joint spectral radius of certain matrices
is below 1, which is not guaranteed if only separate spectral radii are below
1. Our result allows one to use recently introduced toolboxes from computational
mathematics to verify the stationarity and ergodicity of vector STAR models.",Stationarity and ergodicity of vector STAR models,2018-05-29 11:54:24,"Igor L. Kheifets, Pentti J. Saikkonen","http://dx.doi.org/10.1080/07474938.2019.1651489, http://arxiv.org/abs/1805.11311v3, http://arxiv.org/pdf/1805.11311v3",math.ST
31192,em,"We forecast S&P 500 excess returns using a flexible Bayesian econometric
state space model with non-Gaussian features at several levels. More precisely,
we control for overparameterization via novel global-local shrinkage priors on
the state innovation variances as well as the time-invariant part of the state
space model. The shrinkage priors are complemented by heavy tailed state
innovations that cater for potentially large breaks in the latent states.
Moreover, we allow for leptokurtic stochastic volatility in the observation
equation. The empirical findings indicate that several variants of the proposed
approach outperform typical competitors frequently used in the literature, both
in terms of point and density forecasts.",Introducing shrinkage in heavy-tailed state space models to predict equity excess returns,2018-05-30 23:30:12,"Florian Huber, Gregor Kastner, Michael Pfarrhofer","http://dx.doi.org/10.1007/s00181-023-02437-3, http://arxiv.org/abs/1805.12217v2, http://arxiv.org/pdf/1805.12217v2",econ.EM
31193,em,"We study the impact of oil price shocks on the U.S. stock market volatility.
We jointly analyze three different structural oil market shocks (i.e.,
aggregate demand, oil supply, and oil-specific demand shocks) and stock market
volatility using a structural vector autoregressive model. Identification is
achieved by assuming that the price of crude oil reacts to stock market
volatility only with delay. This implies that innovations to the price of crude
oil are not strictly exogenous, but predetermined with respect to the stock
market. We show that volatility responds significantly to oil price shocks
caused by unexpected changes in aggregate and oil-specific demand, whereas the
impact of supply-side shocks is negligible.",How does stock market volatility react to oil shocks?,2018-11-09 12:05:51,"Andrea Bastianin, Matteo Manera","http://dx.doi.org/10.1017/S1365100516000353, http://arxiv.org/abs/1811.03820v1, http://arxiv.org/pdf/1811.03820v1",econ.EM
31194,em,"The major perspective of this paper is to provide more evidence regarding how
""quickly"", in different macroeconomic states, companies adjust their capital
structure to their leverage targets. This study extends the empirical research
on the topic of capital structure by focusing on a quantile regression method
to investigate the behavior of firm-specific characteristics and macroeconomic
factors across all quantiles of distribution of leverage (book leverage and
market leverage). Therefore, based on a partial adjustment model, we find
that the adjustment speed fluctuated across different stages of book versus market
leverage. Furthermore, while macroeconomic states change, we detect clear
differentiations of the contribution and the effects of the firm-specific and
the macroeconomic variables between market leverage and book leverage debt
ratios. Consequently, we deduce that across different macroeconomic states the
nature and maturity of borrowing influence the persistence and endurance of the
relation between determinants and borrowing.",Capital Structure and Speed of Adjustment in U.S. Firms. A Comparative Study in Microeconomic and Macroeconomic Conditions - A Quantile Regression Approach,2018-11-11 23:40:16,"Andreas Kaloudis, Dimitrios Tsolis","http://arxiv.org/abs/1811.04473v1, http://arxiv.org/pdf/1811.04473v1",econ.EM
31195,em,"We introduce a flexible framework that produces high-quality almost-exact
matches for causal inference. Most prior work in matching uses ad-hoc distance
metrics, often leading to poor quality matches, particularly when there are
irrelevant covariates. In this work, we learn an interpretable distance metric
for matching, which leads to substantially higher quality matches. The learned
distance metric stretches the covariate space according to each covariate's
contribution to outcome prediction: this stretching means that mismatches on
important covariates carry a larger penalty than mismatches on irrelevant
covariates. Our ability to learn flexible distance metrics leads to matches
that are interpretable and useful for the estimation of conditional average
treatment effects.",MALTS: Matching After Learning to Stretch,2018-11-19 01:29:59,"Harsh Parikh, Cynthia Rudin, Alexander Volfovsky","http://arxiv.org/abs/1811.07415v9, http://arxiv.org/pdf/1811.07415v9",stat.ME
31226,em,"We develop an estimator for the high-dimensional covariance matrix of a
locally stationary process with a smoothly varying trend and use this statistic
to derive consistent predictors in non-stationary time series. In contrast to
the currently available methods for this problem, the predictor developed here
does not rely on fitting an autoregressive model and does not require a
vanishing trend. The finite sample properties of the new methodology are
illustrated by means of a simulation study and a financial indices study.",Prediction in locally stationary time series,2020-01-02 16:03:14,"Holger Dette, Weichi Wu","http://arxiv.org/abs/2001.00419v2, http://arxiv.org/pdf/2001.00419v2",stat.ME
31196,em,"In this paper, we propose a new threshold-kernel jump-detection method for
jump-diffusion processes, which iteratively applies thresholding and kernel
methods in an approximately optimal way to achieve improved finite-sample
performance. We use the expected number of jump misclassifications as the
objective function to optimally select the threshold parameter of the jump
detection scheme. We prove that the objective function is quasi-convex and
obtain a new second-order infill approximation of the optimal threshold in
closed form. The approximate optimal threshold depends not only on the spot
volatility, but also on the jump intensity and the value of the jump density at
the origin. Estimation methods for these quantities are then developed, where
the spot volatility is estimated by a kernel estimator with thresholding and
the value of the jump density at the origin is estimated by a density kernel
estimator applied to those increments deemed to contain jumps by the chosen
thresholding criterion. Due to the interdependency between the model parameters
and the approximate optimal estimators built to estimate them, a type of
iterative fixed-point algorithm is developed to implement them. Simulation
studies for a prototypical stochastic volatility model show that it is not only
feasible to implement the higher-order local optimal threshold scheme but also
that this is superior to those based only on the first-order approximation
and/or on average values of the parameters over the estimation time period.",Optimal Iterative Threshold-Kernel Estimation of Jump Diffusion Processes,2018-11-19 07:46:20,"José E. Figueroa-López, Cheng Li, Jeffrey Nisen","http://dx.doi.org/10.1007/s11203-020-09211-7., http://arxiv.org/abs/1811.07499v4, http://arxiv.org/pdf/1811.07499v4",math.ST
31197,em,"We consider a high dimensional binary classification problem and construct a
classification procedure by minimizing the empirical misclassification risk
with a penalty on the number of selected features. We derive non-asymptotic
probability bounds on the estimated sparsity as well as on the excess
misclassification risk. In particular, we show that our method yields a sparse
solution whose l0-norm can be arbitrarily close to true sparsity with high
probability and obtain the rates of convergence for the excess
misclassification risk. The proposed procedure is implemented via the method of
mixed integer linear programming. Its numerical performance is illustrated in
Monte Carlo experiments.",High Dimensional Classification through $\ell_0$-Penalized Empirical Risk Minimization,2018-11-23 19:26:27,"Le-Yu Chen, Sokbae Lee","http://arxiv.org/abs/1811.09540v1, http://arxiv.org/pdf/1811.09540v1",stat.ME
31198,em,"This paper proposes a Sieve Simulated Method of Moments (Sieve-SMM) estimator
for the parameters and the distribution of the shocks in nonlinear dynamic
models where the likelihood and the moments are not tractable. An important
concern with SMM, which matches sample moments with simulated moments, is that a
parametric distribution is required. However, economic quantities that depend
on this distribution, such as welfare and asset-prices, can be sensitive to
misspecification. The Sieve-SMM estimator addresses this issue by flexibly
approximating the distribution of the shocks with a Gaussian and tails mixture
sieve. The asymptotic framework provides consistency, rate of convergence and
asymptotic normality results, extending existing results to a new framework
with more general dynamics and latent variables. An application to asset
pricing in a production economy shows a large decline in the estimates of
relative risk-aversion, highlighting the empirical relevance of
misspecification bias.",A Sieve-SMM Estimator for Dynamic Models,2019-02-04 23:51:02,Jean-Jacques Forneron,"http://arxiv.org/abs/1902.01456v4, http://arxiv.org/pdf/1902.01456v4",econ.EM
31199,em,"Finite mixtures of multivariate normal distributions have been widely used in
empirical applications in diverse fields such as statistical genetics and
statistical finance. Testing the number of components in multivariate normal
mixture models is a long-standing challenge even in the most important case of
testing homogeneity. This paper develops likelihood-based tests of the null
hypothesis of $M_0$ components against the alternative hypothesis of $M_0 + 1$
components for a general $M_0 \geq 1$. For heteroscedastic normal mixtures, we
propose an EM test and derive the asymptotic distribution of the EM test
statistic. For homoscedastic normal mixtures, we derive the asymptotic
distribution of the likelihood ratio test statistic. We also derive the
asymptotic distribution of the likelihood ratio test statistic and EM test
statistic under local alternatives and show the validity of parametric
bootstrap. The simulations show that the proposed test has good finite sample
size and power properties.",Testing the Order of Multivariate Normal Mixture Models,2019-02-08 05:43:11,"Hiroyuki Kasahara, Katsumi Shimotsu","http://arxiv.org/abs/1902.02920v1, http://arxiv.org/pdf/1902.02920v1",math.ST
31200,em,"Random forests are powerful non-parametric regression method but are severely
limited in their usage in the presence of randomly censored observations, and
naively applied can exhibit poor predictive performance due to the incurred
biases. Based on a local adaptive representation of random forests, we develop
its regression adjustment for randomly censored regression quantile models.
Regression adjustment is based on new estimating equations that adapt to
censoring and lead to the quantile score whenever the data do not exhibit
censoring. The proposed procedure, named censored quantile regression forest,
allows us to estimate quantiles of time-to-event without any parametric
modeling assumption. We establish its consistency under mild model
specifications. Numerical studies showcase a clear advantage of the proposed
procedure.",Censored Quantile Regression Forests,2019-02-09 02:29:50,"Alexander Hanbo Li, Jelena Bradic","http://arxiv.org/abs/1902.03327v1, http://arxiv.org/pdf/1902.03327v1",stat.ML
31201,em,"Randomized experiments, or A/B tests are used to estimate the causal impact
of a feature on the behavior of users by creating two parallel universes in
which members are simultaneously assigned to treatment and control. However, in
social network settings, members interact, such that the impact of a feature is
not always contained within the treatment group. Researchers have developed a
number of experimental designs to estimate network effects in social settings.
Alternatively, naturally occurring exogenous variation, or 'natural
experiments,' allow researchers to recover causal estimates of peer effects
from observational data in the absence of experimental manipulation. Natural
experiments trade off the engineering costs and some of the ethical concerns
associated with network randomization with the search costs of finding
situations with natural exogenous variation. To mitigate the search costs
associated with discovering natural counterfactuals, we identify a common
engineering requirement used to scale massive online systems, in which natural
exogenous variation is likely to exist: notification queueing. We identify two
natural experiments on the LinkedIn platform based on the order of notification
queues to estimate the causal impact of a received message on the engagement of
a recipient. We show that receiving a message from another member significantly
increases a member's engagement, but that some popular observational
specifications, such as fixed-effects estimators, overestimate this effect by
as much as 2.7x. We then apply the estimated network effect coefficients to a
large body of past experiments to quantify the extent to which it changes our
interpretation of experimental results. The study points to the benefits of
using messaging queues to discover naturally occurring counterfactuals for the
estimation of causal effects without experimenter intervention.",Estimating Network Effects Using Naturally Occurring Peer Notification Queue Counterfactuals,2019-02-19 19:44:08,"Craig Tutterow, Guillaume Saint-Jacques","http://arxiv.org/abs/1902.07133v1, http://arxiv.org/pdf/1902.07133v1",cs.SI
31203,em,"This paper considers estimation and inference for a weighted average
derivative (WAD) of a nonparametric quantile instrumental variables regression
(NPQIV). NPQIV is a non-separable and nonlinear ill-posed inverse problem,
which might be why there is no published work on the asymptotic properties of
any estimator of its WAD. We first characterize the semiparametric efficiency
bound for a WAD of a NPQIV, which, unfortunately, depends on an unknown
conditional derivative operator and hence an unknown degree of ill-posedness,
making it difficult to know if the information bound is singular or not. In
either case, we propose a penalized sieve generalized empirical likelihood
(GEL) estimation and inference procedure, which is based on the unconditional
WAD moment restriction and an increasing number of unconditional moments that
are implied by the conditional NPQIV restriction, where the unknown quantile
function is approximated by a penalized sieve. Under some regularity
conditions, we show that the self-normalized penalized sieve GEL estimator of
the WAD of a NPQIV is asymptotically standard normal. We also show that the
quasi likelihood ratio statistic based on the penalized sieve GEL criterion is
asymptotically chi-square distributed regardless of whether or not the
information bound is singular.",Penalized Sieve GEL for Weighted Average Derivatives of Nonparametric Quantile IV Regressions,2019-02-26 21:29:53,"Xiaohong Chen, Demian Pouzo, James L. Powell","http://arxiv.org/abs/1902.10100v1, http://arxiv.org/pdf/1902.10100v1",math.ST
31204,em,"Many popular methods for building confidence intervals on causal effects
under high-dimensional confounding require strong ""ultra-sparsity"" assumptions
that may be difficult to validate in practice. To alleviate this difficulty, we
here study a new method for average treatment effect estimation that yields
asymptotically exact confidence intervals assuming that either the conditional
response surface or the conditional probability of treatment allows for an
ultra-sparse representation (but not necessarily both). This guarantee allows
us to provide valid inference for average treatment effect in high dimensions
under considerably more generality than available baselines. In addition, we
showcase that our results are semi-parametrically efficient.",Sparsity Double Robust Inference of Average Treatment Effects,2019-05-02 16:47:15,"Jelena Bradic, Stefan Wager, Yinchu Zhu","http://arxiv.org/abs/1905.00744v1, http://arxiv.org/pdf/1905.00744v1",math.ST
31205,em,"For an $N \times T$ random matrix $X(\beta)$ with weakly dependent uniformly
sub-Gaussian entries $x_{it}(\beta)$ that may depend on a possibly
infinite-dimensional parameter $\beta\in \mathbf{B}$, we obtain a uniform bound
on its operator norm of the form $\mathbb{E} \sup_{\beta \in \mathbf{B}}
||X(\beta)|| \leq CK \left(\sqrt{\max(N,T)} +
\gamma_2(\mathbf{B},d_\mathbf{B})\right)$, where $C$ is an absolute constant,
$K$ controls the tail behavior of (the increments of) $x_{it}(\cdot)$, and
$\gamma_2(\mathbf{B},d_\mathbf{B})$ is Talagrand's functional, a measure of
multi-scale complexity of the metric space $(\mathbf{B},d_\mathbf{B})$. We
illustrate how this result may be used for estimation that seeks to minimize
the operator norm of moment conditions as well as for estimation of the maximal
number of factors with functional data.",A Uniform Bound on the Operator Norm of Sub-Gaussian Random Matrices and Its Applications,2019-05-03 12:58:07,"Grigory Franguridi, Hyungsik Roger Moon","http://arxiv.org/abs/1905.01096v4, http://arxiv.org/pdf/1905.01096v4",econ.EM
31206,em,"In dealing with high-dimensional data, factor models are often used for
reducing dimensions and extracting relevant information. The spectrum of
covariance matrices from power data exhibits two aspects: 1) bulk, which arises
from random noise or fluctuations and 2) spikes, which represents factors
caused by anomaly events. In this paper, we propose a new approach to the
estimation of high-dimensional factor models, minimizing the distance between
the empirical spectral density (ESD) of covariance matrices of the residuals of
power data that are obtained by subtracting principal components and the
limiting spectral density (LSD) from a multiplicative covariance structure
model. The free probability theory (FPT) is used to derive the spectral density
of the multiplicative covariance model, which efficiently solves the
computational difficulties. The proposed approach connects the estimation of
the number of factors to the LSD of covariance matrices of the residuals, which
provides estimators of the number of factors and the correlation structure
information in the residuals. Because the power data contain substantial
measurement noise and the residuals have a complex correlation structure, the
approach approximates the ESD of the covariance matrices of the residuals
through a multiplicative covariance model, which avoids making crude
assumptions or simplifications about the complex structure of the data.
Theoretical studies show the proposed approach is robust against noise and
sensitive to the presence of weak factors. The synthetic data from IEEE 118-bus
power system is used to validate the effectiveness of the approach.
Furthermore, the application to the analysis of the real-world online
monitoring data in a power grid shows that the estimators in the approach can
be used to indicate the system behavior.",Estimation of high-dimensional factor models and its application in power data analysis,2019-05-06 17:33:17,"Xin Shi, Robert Qiu","http://arxiv.org/abs/1905.02061v2, http://arxiv.org/pdf/1905.02061v2",stat.AP
31207,em,"Women belonging to the socially disadvantaged caste-groups in India have
historically been engaged in labour-intensive, blue-collar work. We study
whether there has been any change in the ability to predict a woman's
work-status and work-type based on her caste by interpreting machine learning
models using feature attribution. We find that caste is now a less important
determinant of work for the younger generation of women compared to the older
generation. Moreover, younger women from disadvantaged castes are now more
likely to be working in white-collar jobs.",Working women and caste in India: A study of social disadvantage using feature attribution,2019-04-27 10:15:33,"Kuhu Joshi, Chaitanya K. Joshi","http://arxiv.org/abs/1905.03092v2, http://arxiv.org/pdf/1905.03092v2",econ.EM
31208,em,"We present a method for computing the likelihood of a mixed hitting-time
model that specifies durations as the first time a latent L\'evy process
crosses a heterogeneous threshold. This likelihood is not generally known in
closed form, but its Laplace transform is. Our approach to its computation
relies on numerical methods for inverting Laplace transforms that exploit
special properties of the first passage times of L\'evy processes. We use our
method to implement a maximum likelihood estimator of the mixed hitting-time
model in MATLAB. We illustrate the application of this estimator with an
analysis of Kennan's (1985) strike data.",The Likelihood of Mixed Hitting Times,2019-05-09 10:02:32,"Jaap H. Abbring, Tim Salimans","http://dx.doi.org/10.1016/j.jeconom.2019.08.017, http://arxiv.org/abs/1905.03463v2, http://arxiv.org/pdf/1905.03463v2",econ.EM
31243,em,"An increasing number of decisions are guided by machine learning algorithms.
In many settings, from consumer credit to criminal justice, those decisions are
made by applying an estimator to data on an individual's observed behavior. But
when consequential decisions are encoded in rules, individuals may
strategically alter their behavior to achieve desired outcomes. This paper
develops a new class of estimator that is stable under manipulation, even when
the decision rule is fully transparent. We explicitly model the costs of
manipulating different behaviors, and identify decision rules that are stable
in equilibrium. Through a large field experiment in Kenya, we show that
decision rules estimated with our strategy-robust method outperform those based
on standard supervised learning approaches.",Manipulation-Proof Machine Learning,2020-04-08 11:04:01,"Daniel Björkegren, Joshua E. Blumenstock, Samsun Knight","http://arxiv.org/abs/2004.03865v1, http://arxiv.org/pdf/2004.03865v1",econ.TH
31209,em,"Many real-life settings of consumer-choice involve social interactions,
causing targeted policies to have spillover-effects. This paper develops novel
empirical tools for analyzing demand and welfare-effects of
policy-interventions in binary choice settings with social interactions.
Examples include subsidies for health-product adoption and vouchers for
attending a high-achieving school. We establish the connection between
econometrics of large games and Brock-Durlauf-type interaction models, under
both I.I.D. and spatially correlated unobservables. We develop new convergence
results for associated beliefs and estimates of preference-parameters under
increasing-domain spatial asymptotics. Next, we show that even with fully
parametric specifications and unique equilibrium, choice data, that are
sufficient for counterfactual demand-prediction under interactions, are
insufficient for welfare-calculations. This is because distinct underlying
mechanisms producing the same interaction coefficient can imply different
welfare-effects and deadweight-loss from a policy-intervention. Standard
index-restrictions imply distribution-free bounds on welfare. We illustrate our
results using experimental data on mosquito-net adoption in rural Kenya.",Demand and Welfare Analysis in Discrete Choice Models with Social Interactions,2019-05-10 12:32:07,"Debopam Bhattacharya, Pascaline Dupas, Shin Kanaya","http://arxiv.org/abs/1905.04028v1, http://arxiv.org/pdf/1905.04028v1",econ.EM
31210,em,"We define a moment-based estimator that maximizes the empirical saddlepoint
(ESP) approximation of the distribution of solutions to empirical moment
conditions. We call it the ESP estimator. We prove its existence, consistency
and asymptotic normality, and we propose novel test statistics. We also show
that the ESP estimator corresponds to the MM (method of moments) estimator
shrunk toward parameter values with lower estimated variance, so it reduces the
documented instability of existing moment-based estimators. In the case of
just-identified moment conditions, which is the case we focus on, the ESP
estimator is different from the MM estimator, unlike the recently proposed
alternatives, such as the empirical-likelihood-type estimators.",The Empirical Saddlepoint Estimator,2019-05-16 21:04:52,"Benjamin Holcblat, Fallaw Sowell","http://arxiv.org/abs/1905.06977v1, http://arxiv.org/pdf/1905.06977v1",math.ST
31211,em,"The recent literature often cites Fang and Wang (2015) for analyzing the
identification of time preferences in dynamic discrete choice under exclusion
restrictions (e.g. Yao et al., 2012; Lee, 2013; Ching et al., 2013; Norets and
Tang, 2014; Dub\'e et al., 2014; Gordon and Sun, 2015; Bajari et al., 2016;
Chan, 2017; Gayle et al., 2018). Fang and Wang's Proposition 2 claims generic
identification of a dynamic discrete choice model with hyperbolic discounting.
This claim uses a definition of ""generic"" that does not preclude the
possibility that a generically identified model is nowhere identified. To
illustrate this point, we provide two simple examples of models that are
generically identified in Fang and Wang's sense, but that are, respectively,
everywhere and nowhere identified. We conclude that Proposition 2 is void: It
has no implications for identification of the dynamic discrete choice model. We
show that its proof is incorrect and incomplete and suggest alternative
approaches to identification.","A Comment on ""Estimating Dynamic Discrete Choice Models with Hyperbolic Discounting"" by Hanming Fang and Yang Wang",2019-05-17 00:55:46,"Jaap H. Abbring, Øystein Daljord","http://dx.doi.org/10.1111/iere.12434, http://arxiv.org/abs/1905.07048v2, http://arxiv.org/pdf/1905.07048v2",econ.EM
31212,em,"Build-to-order (BTO) supply chains have become common-place in industries
such as electronics, automotive and fashion. They enable building products
based on individual requirements with a short lead time and minimum inventory
and production costs. Due to their nature, they differ significantly from
traditional supply chains. However, there have not been studies dedicated to
demand forecasting methods for this type of setting. This work makes two
contributions. First, it presents a new and unique data set from a manufacturer
in the BTO sector. Second, it proposes a novel data transformation technique
for demand forecasting of BTO products. Results from thirteen forecasting
methods show that the approach compares well to the state-of-the-art while
being easy to implement and to explain to decision-makers.",Demand forecasting techniques for build-to-order lean manufacturing supply chains,2019-05-20 09:33:53,"Rodrigo Rivera-Castro, Ivan Nazarov, Yuke Xiang, Alexander Pletneev, Ivan Maksimov, Evgeny Burnaev","http://arxiv.org/abs/1905.07902v1, http://arxiv.org/pdf/1905.07902v1",cs.LG
31213,em,"We consider the estimation of heterogeneous treatment effects with arbitrary
machine learning methods in the presence of unobserved confounders with the aid
of a valid instrument. Such settings arise in A/B tests with an intent-to-treat
structure, where the experimenter randomizes over which user will receive a
recommendation to take an action, and we are interested in the effect of the
downstream action. We develop a statistical learning approach to the estimation
of heterogeneous effects, reducing the problem to the minimization of an
appropriate loss function that depends on a set of auxiliary models (each
corresponding to a separate prediction task). The reduction enables the use of
all recent algorithmic advances (e.g. neural nets, forests). We show that the
estimated effect model is robust to estimation errors in the auxiliary models,
by showing that the loss satisfies a Neyman orthogonality criterion. Our
approach can be used to estimate projections of the true effect model on
simpler hypothesis spaces. When these spaces are parametric, then the parameter
estimates are asymptotically normal, which enables construction of confidence
sets. We applied our method to estimate the effect of membership on downstream
webpage engagement on TripAdvisor, using as an instrument an intent-to-treat
A/B test among 4 million TripAdvisor users, where some users received an easier
membership sign-up process. We also validate our method on synthetic data and
on public datasets for the effects of schooling on income.",Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments,2019-05-24 15:14:08,"Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, Greg Lewis","http://arxiv.org/abs/1905.10176v3, http://arxiv.org/pdf/1905.10176v3",econ.EM
31214,em,"Motivated by the increasing abundance of data describing real-world networks
that exhibit dynamical features, we propose an extension of the Exponential
Random Graph Models (ERGMs) that accommodates the time variation of their
parameters. Inspired by the fast-growing literature on Dynamic Conditional
Score-driven models, each parameter evolves according to an updating rule driven
by the score of the ERGM distribution. We demonstrate the flexibility of the
score-driven ERGMs (SD-ERGMs), both as data generating processes and as
filters, and we show the advantages of the dynamic version with respect to the
static one. We discuss two applications to time-varying networks from financial
and political systems. First, we consider the prediction of future links in the
Italian inter-bank credit network. Second, we show that the SD-ERGM allows us to
discriminate between static and time-varying parameters when used to model the
dynamics of the US congress co-voting network.",Score-Driven Exponential Random Graphs: A New Class of Time-Varying Parameter Models for Dynamical Networks,2019-05-26 16:48:43,"Domenico Di Gangi, Giacomo Bormetti, Fabrizio Lillo","http://arxiv.org/abs/1905.10806v2, http://arxiv.org/pdf/1905.10806v2",stat.AP
31215,em,"Assessing world-wide financial integration constitutes a recurrent challenge
in macroeconometrics, often addressed by visual inspections searching for data
patterns. Econophysics literature enables us to build complementary,
data-driven measures of financial integration using graphs. The present
contribution investigates the potential and interest of a novel 3-step
approach that combines several state-of-the-art procedures to i) compute
graph-based representations of the multivariate dependence structure of asset
prices time series representing the financial states of 32 countries world-wide
(1955-2015); ii) compute time series of 5 graph-based indices that characterize
the time evolution of the topologies of the graph; iii) segment these time
evolutions in piece-wise constant eras, using an optimization framework
constructed on a multivariate multi-norm total variation penalized functional.
The method shows first that it is possible to find endogenous stable eras of
world-wide financial integration. Then, our results suggest that the most
relevant globalization eras would be based on the historical patterns of global
capital flows, while the major regulatory events of the 1970s would only appear
as a cause of sub-segmentation.",Graph-based era segmentation of international financial integration,2019-05-28 17:25:36,"Cécile Bastidon, Antoine Parent, Pablo Jensen, Patrice Abry, Pierre Borgnat","http://dx.doi.org/10.1016/j.physa.2019.122877, http://arxiv.org/abs/1905.11842v1, http://arxiv.org/pdf/1905.11842v1",q-fin.GN
31216,em,"When pre-processing observational data via matching, we seek to approximate
each unit with maximally similar peers that had an alternative treatment
status--essentially replicating a randomized block design. However, as one
considers a growing number of continuous features, a curse of dimensionality
applies, making asymptotically valid inference impossible (Abadie and Imbens,
2006). The alternative of ignoring plausibly relevant features is certainly no
better, and the resulting trade-off substantially limits the application of
matching methods to ""wide"" datasets. Instead, Li and Fu (2017) recasts the
problem of matching in a metric learning framework that maps features to a
low-dimensional space that facilitates ""closer matches"" while still capturing
important aspects of unit-level heterogeneity. However, that method lacks key
theoretical guarantees and can produce inconsistent estimates in cases of
heterogeneous treatment effects. Motivated by a straightforward extension of
existing results in the matching literature, we present alternative techniques
that learn latent matching features through either MLPs or through siamese
neural networks trained on a carefully selected loss function. We benchmark the
resulting alternative methods in simulations as well as against two
experimental data sets--including the canonical NSW worker training program
data set--and find superior performance of the neural-net-based methods.",Matching on What Matters: A Pseudo-Metric Learning Approach to Matching Estimation in High Dimensions,2019-05-28 21:26:46,"Gentry Johnson, Brian Quistorff, Matt Goldman","http://arxiv.org/abs/1905.12020v1, http://arxiv.org/pdf/1905.12020v1",econ.EM
31217,em,"Instrumental variable analysis is a powerful tool for estimating causal
effects when randomization or full control of confounders is not possible. The
application of standard methods such as 2SLS, GMM, and more recent variants is
significantly impeded when the causal effects are complex, the instruments are
high-dimensional, and/or the treatment is high-dimensional. In this paper, we
propose the DeepGMM algorithm to overcome this. Our algorithm is based on a new
variational reformulation of GMM with optimal inverse-covariance weighting that
allows us to efficiently control very many moment conditions. We further
develop practical techniques for optimization and model selection that make it
particularly successful in practice. Our algorithm is also computationally
tractable and can handle large-scale datasets. Numerical results show our
algorithm matches the performance of the best tuned methods in standard
settings and continues to work in high-dimensional settings where even recent
methods break.",Deep Generalized Method of Moments for Instrumental Variable Analysis,2019-05-29 17:30:09,"Andrew Bennett, Nathan Kallus, Tobias Schnabel","http://arxiv.org/abs/1905.12495v2, http://arxiv.org/pdf/1905.12495v2",stat.ML
31218,em,"We propose a new estimator for causal effects in applications where the
exogenous variation comes from aggregate time-series shocks. We address the
critical identification challenge in such applications -- unobserved
confounding, which renders conventional estimators invalid. Our estimator uses
a new data-based aggregation scheme and remains consistent in the presence of
unobserved aggregate shocks. We illustrate the advantages of our algorithm
using data from Nakamura and Steinsson (2014). We also establish the
statistical properties of our estimator in a practically relevant regime, where
both cross-sectional and time-series dimensions are large, and show how to use
our method to conduct inference.",On Policy Evaluation with Aggregate Time-Series Shocks,2019-05-31 18:01:43,"Dmitry Arkhangelsky, Vasily Korovkin","http://arxiv.org/abs/1905.13660v7, http://arxiv.org/pdf/1905.13660v7",econ.EM
31219,em,"We introduce a variable importance measure to quantify the impact of
individual input variables to a black box function. Our measure is based on the
Shapley value from cooperative game theory. Many measures of variable
importance operate by changing some predictor values with others held fixed,
potentially creating unlikely or even logically impossible combinations. Our
cohort Shapley measure uses only observed data points. Instead of changing the
value of a predictor we include or exclude subjects similar to the target
subject on that predictor to form a similarity cohort. Then we apply Shapley
value to the cohort averages. We connect variable importance measures from
explainable AI to function decompositions from global sensitivity analysis. We
introduce a squared cohort Shapley value that splits previously studied Shapley
effects over subjects, consistent with a Shapley axiom.",Explaining black box decisions by Shapley cohort refinement,2019-11-01 20:10:20,"Masayoshi Mase, Art B. Owen, Benjamin Seiler","http://arxiv.org/abs/1911.00467v2, http://arxiv.org/pdf/1911.00467v2",cs.LG
31220,em,"The lack of longitudinal studies of the relationship between the built
environment and travel behavior has been widely discussed in the literature.
This paper discusses how standard propensity score matching estimators can be
extended to enable such studies by pairing observations across two dimensions:
longitudinal and cross-sectional. Researchers mimic randomized controlled
trials (RCTs) and match observations in both dimensions, to find synthetic
control groups that are similar to the treatment group and to match subjects
synthetically across before-treatment and after-treatment time periods. We call
this a two-dimensional propensity score matching (2DPSM). This method
demonstrates superior performance for estimating treatment effects based on
Monte Carlo evidence. A near-term opportunity for such matching is identifying
the impact of transportation infrastructure on travel behavior.",A two-dimensional propensity score matching method for longitudinal quasi-experimental studies: A focus on travel behavior and the built environment,2019-11-02 09:50:05,"Haotian Zhong, Wei Li, Marlon G. Boarnet","http://arxiv.org/abs/1911.00667v2, http://arxiv.org/pdf/1911.00667v2",econ.EM
31221,em,"We propose a novel framework of the model specification test in regression
using unlabeled test data. In many cases, we have conducted statistical
inferences based on the assumption that we can correctly specify a model.
However, it is difficult to confirm whether a model is correctly specified. To
overcome this problem, existing works have devised statistical tests for model
specification. Existing works have defined a correctly specified model in
regression as a model with zero conditional mean of the error term over train
data only. Extending the definition in conventional statistical tests, we
define a correctly specified model as a model with zero conditional mean of the
error term over any distribution of the explanatory variable. This definition
is a natural consequence of the orthogonality of the explanatory variable and
the error term. If a model does not satisfy this condition, the model might
lack robustness with regard to distribution shift. The proposed method
would enable us to reject a misspecified model under our definition. By
applying the proposed method, we can obtain a model that predicts the label for
the unlabeled test data well without losing the interpretability of the model.
In experiments, we show how the proposed method works for synthetic and
real-world datasets.",Model Specification Test with Unlabeled Data: Approach from Covariate Shift,2019-11-02 13:06:17,"Masahiro Kato, Hikaru Kawarazaki","http://arxiv.org/abs/1911.00688v2, http://arxiv.org/pdf/1911.00688v2",stat.ME
31222,em,"In this paper, we study the design and analysis of experiments conducted on a
set of units over multiple time periods where the starting time of the
treatment may vary by unit. The design problem involves selecting an initial
treatment time for each unit in order to most precisely estimate both the
instantaneous and cumulative effects of the treatment. We first consider
non-adaptive experiments, where all treatment assignment decisions are made
prior to the start of the experiment. For this case, we show that the
optimization problem is generally NP-hard, and we propose a near-optimal
solution. Under this solution, the fraction entering treatment each period is
initially low, then high, and finally low again. Next, we study an adaptive
experimental design problem, where both the decision to continue the experiment
and treatment assignment decisions are updated after each period's data is
collected. For the adaptive case, we propose a new algorithm, the
Precision-Guided Adaptive Experiment (PGAE) algorithm, that addresses the
challenges at both the design stage and at the stage of estimating treatment
effects, ensuring valid post-experiment inference accounting for the adaptive
nature of the design. Using realistic settings, we demonstrate that our
proposed solutions can reduce the opportunity cost of the experiments by over
50%, compared to static design benchmarks.",Optimal Experimental Design for Staggered Rollouts,2019-11-09 22:46:29,"Ruoxuan Xiong, Susan Athey, Mohsen Bayati, Guido Imbens","http://arxiv.org/abs/1911.03764v6, http://arxiv.org/pdf/1911.03764v6",econ.EM
31223,em,"Understanding disaggregate channels in the transmission of monetary policy is
of crucial importance for effectively implementing policy measures. We extend
the empirical econometric literature on the role of production networks in the
propagation of shocks along two dimensions. First, we allow for
industry-specific responses that vary over time, reflecting non-linearities and
cross-sectional heterogeneities in direct transmission channels. Second, we
allow for time-varying network structures and dependence. This feature captures
both variation in the structure of the production network, but also differences
in cross-industry demand elasticities. We find that impacts vary substantially
over time and the cross-section. Higher-order effects appear to be particularly
important in periods of economic and financial uncertainty, often coinciding
with tight credit market conditions and financial stress. Differentials in
industry-specific responses can be explained by how close the respective
industries are to end-consumers.",Bayesian state-space modeling for analyzing heterogeneous network effects of US monetary policy,2019-11-14 19:00:02,"Niko Hauzenberger, Michael Pfarrhofer","http://arxiv.org/abs/1911.06206v3, http://arxiv.org/pdf/1911.06206v3",econ.EM
31224,em,"Cross-correlations in fluctuations of the daily exchange rates within the
basket of the 100 highest-capitalization cryptocurrencies over the period
October 1, 2015, through March 31, 2019, are studied. The corresponding
dynamics predominantly involve one leading eigenvalue of the correlation
matrix, while the others largely coincide with those of Wishart random
matrices. However, the magnitude of the principal eigenvalue, and thus the
degree of collectivity, strongly depends on which cryptocurrency is used as a
base. It is largest when the base is the most peripheral cryptocurrency; when
more significant ones are taken into consideration, its magnitude
systematically decreases, nevertheless preserving a sizable gap with respect to
the random bulk, which in turn indicates that the organization of correlations
becomes more heterogeneous. This finding provides a criterion for recognizing
which currencies or cryptocurrencies play a dominant role in the global
crypto-market. The present study shows that over the period under
consideration, the Bitcoin (BTC) predominates, with exchange rate
dynamics at least as influential as those of the US dollar. The BTC started dominating
around the year 2017, while further cryptocurrencies, like the Ethereum (ETH)
and even Ripple (XRP), assumed similar trends. At the same time, the USD, an
original value determinant for the cryptocurrency market, became increasingly
disconnected, its related characteristics eventually approaching those of a
fictitious currency. These results are strong indicators of incipient
independence of the global cryptocurrency market, delineating a self-contained
trade resembling the Forex.",Competition of noise and collectivity in global cryptocurrency trading: route to a self-contained market,2019-11-20 17:52:12,"Stanisław Drożdż, Ludovico Minati, Paweł Oświęcimka, Marek Stanuszek, Marcin Wątorek","http://dx.doi.org/10.1063/1.5139634, http://arxiv.org/abs/1911.08944v2, http://arxiv.org/pdf/1911.08944v2",q-fin.ST
31225,em,"In this Element and its accompanying Element, Matias D. Cattaneo, Nicolas
Idrobo, and Rocio Titiunik provide an accessible and practical guide for the
analysis and interpretation of Regression Discontinuity (RD) designs that
encourages the use of a common set of practices and facilitates the
accumulation of RD-based empirical evidence. In this Element, the authors
discuss the foundations of the canonical Sharp RD design, which has the
following features: (i) the score is continuously distributed and has only one
dimension, (ii) there is only one cutoff, and (iii) compliance with the
treatment assignment is perfect. In the accompanying Element, the authors
discuss practical and conceptual extensions to the basic RD setup.",A Practical Introduction to Regression Discontinuity Designs: Foundations,2019-11-21 17:58:18,"Matias D. Cattaneo, Nicolas Idrobo, Rocio Titiunik","http://dx.doi.org/10.1017/9781108684606, http://arxiv.org/abs/1911.09511v1, http://arxiv.org/pdf/1911.09511v1",stat.ME
31227,em,"We develop a Bayesian median autoregressive (BayesMAR) model for time series
forecasting. The proposed method utilizes time-varying quantile regression at
the median, favorably inheriting the robustness of median regression in
contrast to the widely used mean-based methods. Motivated by a working Laplace
likelihood approach in Bayesian quantile regression, BayesMAR adopts a
parametric model bearing the same structure as autoregressive models by
altering the Gaussian error to Laplace, leading to a simple, robust, and
interpretable modeling strategy for time series forecasting. We estimate model
parameters by Markov chain Monte Carlo. Bayesian model averaging is used to
account for model uncertainty, including the uncertainty in the autoregressive
order, in addition to a Bayesian model selection approach. The proposed methods
are illustrated using simulations and real data applications. An application to
U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and
often superior predictive performance compared to the selected mean-based
alternatives under various loss functions that encompass both point and
probabilistic forecasts. The proposed methods are generic and can be used to
complement a rich class of methods that build on autoregressive models.",Bayesian Median Autoregression for Robust Time Series Forecasting,2020-01-04 22:44:33,"Zijian Zeng, Meng Li","http://dx.doi.org/10.1016/j.ijforecast.2020.11.002, http://arxiv.org/abs/2001.01116v2, http://arxiv.org/pdf/2001.01116v2",stat.AP
31228,em,"We develop and implement a novel fast bootstrap for dependent data. Our
scheme is based on the i.i.d. resampling of the smoothed moment indicators. We
characterize the class of parametric and semi-parametric estimation problems
for which the method is valid. We show the asymptotic refinements of the
proposed procedure, proving that it is higher-order correct under mild
assumptions on the time series, the estimating functions, and the smoothing
kernel. We illustrate the applicability and the advantages of our procedure for
Generalized Empirical Likelihood estimation. As a by-product, our fast
bootstrap provides higher-order correct asymptotic confidence distributions.
Monte Carlo simulations on an autoregressive conditional duration model provide
numerical evidence that the novel bootstrap yields higher-order accurate
confidence intervals. A real-data application on dynamics of trading volume of
stocks illustrates the advantage of our method over the routinely-applied
first-order asymptotic theory, when the underlying distribution of the test
statistic is skewed or fat-tailed.",A Higher-Order Correct Fast Moving-Average Bootstrap for Dependent Data,2020-01-14 19:11:22,"Davide La Vecchia, Alban Moor, Olivier Scaillet","http://arxiv.org/abs/2001.04867v2, http://arxiv.org/pdf/2001.04867v2",stat.ME
31229,em,"This paper introduces a new data-driven methodology for estimating sparse
covariance matrices of the random coefficients in logit mixture models.
Researchers typically specify covariance matrices in logit mixture models under
one of two extreme assumptions: either an unrestricted full covariance matrix
(allowing correlations between all random coefficients), or a restricted
diagonal matrix (allowing no correlations at all). Our objective is to find
optimal subsets of correlated coefficients for which we estimate covariances.
We propose a new estimator, called MISC, that uses a mixed-integer optimization
(MIO) program to find an optimal block diagonal structure specification for the
covariance matrix, corresponding to subsets of correlated coefficients, for any
desired sparsity level using Markov Chain Monte Carlo (MCMC) posterior draws
from the unrestricted full covariance matrix. The optimal sparsity level of the
covariance matrix is determined using out-of-sample validation. We demonstrate
the ability of MISC to correctly recover the true covariance structure from
synthetic data. In an empirical illustration using a stated preference survey
on modes of transportation, we use MISC to obtain a sparse covariance matrix
indicating how preferences for attributes are related to one another.",Sparse Covariance Estimation in Logit Mixture Models,2020-01-14 23:19:15,"Youssef M Aboutaleb, Mazen Danaf, Yifei Xie, Moshe Ben-Akiva","http://arxiv.org/abs/2001.05034v1, http://arxiv.org/pdf/2001.05034v1",stat.ME
31230,em,"Social network data can be expensive to collect. Breza et al. (2017) propose
aggregated relational data (ARD) as a low-cost substitute that can be used to
recover the structure of a latent social network when it is generated by a
specific parametric random effects model. Our main observation is that many
economic network formation models produce networks that are effectively
low-rank. As a consequence, network recovery from ARD is generally possible
without parametric assumptions using a nuclear-norm penalized regression. We
demonstrate how to implement this method and provide finite-sample bounds on
the mean squared error of the resulting estimator for the distribution of
network links. Computation takes seconds for samples with hundreds of
observations. Easy-to-use code in R and Python can be found at
https://github.com/mpleung/ARD.",Recovering Network Structure from Aggregated Relational Data using Penalized Regression,2020-01-16 22:43:52,"Hossein Alidaee, Eric Auerbach, Michael P. Leung","http://arxiv.org/abs/2001.06052v1, http://arxiv.org/pdf/2001.06052v1",econ.EM
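The nuclear-norm penalty mentioned above is typically handled by soft-thresholding singular values inside a proximal-gradient loop. The sketch below shows only that penalty mechanic on a toy low-rank matrix-denoising problem; it is not the ARD regression design, and the authors' code at the linked repository is the reference implementation. The rank, noise level, and penalty weight are arbitrary choices.

```python
# Sketch of nuclear-norm penalized estimation via singular-value
# soft-thresholding (proximal gradient) on a matrix-denoising toy problem.
import numpy as np

rng = np.random.default_rng(1)
n, r = 100, 3
U = rng.standard_normal((n, r))
M_true = U @ U.T / r                                  # low-rank target matrix
Y = M_true + 0.1 * rng.standard_normal((n, n))        # noisy observation

def svt(A, tau):
    """Soft-threshold the singular values of A at level tau."""
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

lam, step = 1.0, 1.0
M = np.zeros((n, n))
for _ in range(100):
    grad = M - Y                                      # gradient of 0.5*||M - Y||_F^2
    M = svt(M - step * grad, step * lam)              # prox step: nuclear-norm penalty

print("rank of estimate:", np.linalg.matrix_rank(M, tol=1e-6))
print("relative error:", np.linalg.norm(M - M_true) / np.linalg.norm(M_true))
```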
31231,em,"We develop new higher-order asymptotic techniques for the Gaussian maximum
likelihood estimator in a spatial panel data model, with fixed effects,
time-varying covariates, and spatially correlated errors. Our saddlepoint
density and tail area approximation feature relative error of order
$O(1/(n(T-1)))$ with $n$ being the cross-sectional dimension and $T$ the
time-series dimension. The main theoretical tool is the tilted-Edgeworth
technique in a non-identically distributed setting. The density approximation
is always non-negative, does not need resampling, and is accurate in the tails.
Monte Carlo experiments on density approximation and testing in the presence of
nuisance parameters illustrate the good performance of our approximation over
first-order asymptotics and Edgeworth expansions. An empirical application to
the investment-saving relationship in OECD (Organisation for Economic
Co-operation and Development) countries shows disagreement between testing
results based on first-order asymptotics and saddlepoint techniques.",Saddlepoint approximations for spatial panel data models,2020-01-22 22:40:01,"Chaonan Jiang, Davide La Vecchia, Elvezio Ronchetti, Olivier Scaillet","http://arxiv.org/abs/2001.10377v3, http://arxiv.org/pdf/2001.10377v3",math.ST
31284,gn,"We study nonzero-sum stochastic switching games. Two players compete for
market dominance through controlling (via timing options) the discrete-state
market regime $M$. Switching decisions are driven by a continuous stochastic
factor $X$ that modulates instantaneous revenue rates and switching costs. This
generates a competitive feedback between the short-term fluctuations due to $X$
and the medium-term advantages based on $M$. We construct threshold-type
Feedback Nash Equilibria which characterize stationary strategies describing
long-run dynamic equilibrium market organization. Two sequential approximation
schemes link the switching equilibrium to (i) constrained optimal switching,
(ii) multi-stage timing games. We provide illustrations using an
Ornstein-Uhlenbeck $X$ that leads to a recurrent equilibrium $M^\ast$ and a
Geometric Brownian Motion $X$ that makes $M^\ast$ eventually ""absorbed"" as one
player gains a permanent advantage. Explicit computations and
comparative statics regarding the emergent macroscopic market equilibrium are
also provided.",Stochastic Switching Games,2018-07-11 01:37:20,"Liangchen Li, Michael Ludkovski","http://arxiv.org/abs/1807.03893v1, http://arxiv.org/pdf/1807.03893v1",econ.GN
31232,em,"A recent literature in econometrics models unobserved cross-sectional
heterogeneity in panel data by assigning each cross-sectional unit a
one-dimensional, discrete latent type. Such models have been shown to allow
estimation and inference by regression clustering methods. This paper is
motivated by the finding that the clustered heterogeneity models studied in
this literature can be badly misspecified, even when the panel has significant
discrete cross-sectional structure. To address this issue, we generalize
previous approaches to discrete unobserved heterogeneity by allowing each unit
to have multiple, imperfectly-correlated latent variables that describe its
response-type to different covariates. We give inference results for a k-means
style estimator of our model and develop information criteria to jointly select
the number of clusters for each latent variable. Monte Carlo simulations confirm
our theoretical results and give intuition about the finite-sample performance
of estimation and model selection. We also contribute to the theory of
clustering with an over-specified number of clusters and derive new convergence
rates for this setting. Our results suggest that over-fitting can be severe in
k-means style estimators when the number of clusters is over-specified.",Blocked Clusterwise Regression,2020-01-30 02:29:31,Max Cytrynbaum,"http://arxiv.org/abs/2001.11130v1, http://arxiv.org/pdf/2001.11130v1",econ.EM
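As a point of reference for the clustered-heterogeneity setup the paper generalizes, here is a minimal sketch of the single-latent-type "regression clustering" baseline: estimate a slope per unit, then run k-means on the estimated slopes. The data-generating process and the use of scikit-learn's KMeans are illustrative assumptions; this is not the blocked, multi-latent-variable estimator itself.

```python
# Sketch: k-means style regression clustering. Estimate a slope per
# cross-sectional unit, then cluster units by their estimated slopes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
N, T, K = 200, 50, 3
true_type = rng.integers(0, K, size=N)
betas = np.array([-1.0, 0.0, 1.5])                    # type-specific slopes

x = rng.standard_normal((N, T))
y = betas[true_type][:, None] * x + 0.5 * rng.standard_normal((N, T))

# Unit-by-unit OLS slope (no intercept, for brevity).
beta_hat = (x * y).sum(axis=1) / (x * x).sum(axis=1)

km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(beta_hat.reshape(-1, 1))
print("estimated cluster centers:", np.sort(km.cluster_centers_.ravel()))
```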
31233,em,"Researchers are often interested in linking individuals between two datasets
that lack a common unique identifier. Matching procedures often struggle to
match records with common names, birthplaces or other field values.
Computational feasibility is also a challenge, particularly when linking large
datasets. We develop a Bayesian method for automated probabilistic record
linkage and show it recovers more than 50% more true matches, holding accuracy
constant, than comparable methods in a matching of military recruitment data to
the 1900 US Census for which expert-labelled matches are available. Our
approach, which builds on a recent state-of-the-art Bayesian method, refines
the modelling of comparison data, allowing disagreement probability parameters
conditional on non-match status to be record-specific in the smaller of the two
datasets. This flexibility significantly improves matching when many records
share common field values. We show that our method is computationally feasible
in practice, despite the added complexity, with an R/C++ implementation that
achieves significant improvement in speed over comparable recent methods. We
also suggest a lightweight method for treatment of very common names and show
how to estimate true positive rate and positive predictive value when true
match status is unavailable.",Fast Bayesian Record Linkage With Record-Specific Disagreement Parameters,2020-03-09 19:23:54,Thomas Stringham,"http://dx.doi.org/10.1080/07350015.2021.1934478, http://arxiv.org/abs/2003.04238v2, http://arxiv.org/pdf/2003.04238v2",stat.ME
31234,em,"We introduce a new mixture autoregressive model which combines Gaussian and
Student's $t$ mixture components. The model has very attractive properties
analogous to the Gaussian and Student's $t$ mixture autoregressive models, but
it is more flexible, as it enables modelling series that consist of both
conditionally homoscedastic Gaussian regimes and conditionally heteroscedastic
Student's $t$ regimes. The usefulness of our model is demonstrated in an
empirical application to the monthly U.S. interest rate spread between the
3-month Treasury bill rate and the effective federal funds rate.",A mixture autoregressive model based on Gaussian and Student's $t$-distributions,2020-03-11 14:16:36,Savi Virolainen,"http://arxiv.org/abs/2003.05221v3, http://arxiv.org/pdf/2003.05221v3",econ.EM
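A rough sense of the model class can be had from a simplified simulation of a two-regime mixture AR(1) with one Gaussian and one Student's t component. In the paper the mixing weights depend on past observations; holding them constant below is a deliberate simplification, and all parameter values are made up.

```python
# Simplified simulation of a two-regime mixture AR(1): one Gaussian
# (homoscedastic) regime and one Student's t (heavy-tailed) regime.
import numpy as np

rng = np.random.default_rng(3)
T = 1000
alpha = 0.6                             # constant weight on the Gaussian regime
phi_g, sigma_g = 0.5, 1.0               # Gaussian regime parameters
phi_t, sigma_t, nu = 0.8, 0.7, 4.0      # Student's t regime parameters

y = np.zeros(T)
for t in range(1, T):
    if rng.random() < alpha:            # Gaussian regime
        y[t] = phi_g * y[t - 1] + sigma_g * rng.standard_normal()
    else:                               # Student's t regime
        y[t] = phi_t * y[t - 1] + sigma_t * rng.standard_t(nu)

print("sample kurtosis:", ((y - y.mean())**4).mean() / y.var()**2)
```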
31235,em,"Policy evaluation studies, which intend to assess the effect of an
intervention, face some statistical challenges: in real-world settings
treatments are not randomly assigned and the analysis might be further
complicated by the presence of interference between units. Researchers have
started to develop novel methods for managing spillover mechanisms in
observational studies; recent works focus primarily on binary treatments.
However, many policy evaluation studies deal with more complex interventions.
For instance, in political science, evaluating the impact of policies
implemented by administrative entities often implies a multivariate approach,
as a policy towards a specific issue operates at many different levels and can
be defined along a number of dimensions. In this work, we extend the
statistical framework about causal inference under network interference in
observational studies, allowing for a multi-valued individual treatment and an
interference structure shaped by a weighted network. The estimation strategy is
based on a joint multiple generalized propensity score and allows one to
estimate direct effects, controlling for both individual and network
covariates. We follow the proposed methodology to analyze the impact of the
national immigration policy on the crime rate. We define a multi-valued
characterization of political attitudes towards migrants and we assume that the
extent to which each country can be influenced by another country is modeled by
an appropriate indicator, summarizing their cultural and geographical
proximity. Results suggest that implementing a highly restrictive immigration
policy leads to an increase in the crime rate, and the estimated effect is
larger if we take into account interference from other countries.",Modelling Network Interference with Multi-valued Treatments: the Causal Effect of Immigration Policy on Crime Rates,2020-02-28 14:17:00,"C. Tortù, I. Crimaldi, F. Mealli, L. Forastiere","http://arxiv.org/abs/2003.10525v3, http://arxiv.org/pdf/2003.10525v3",stat.AP
31236,em,"Given a causal graph, the do-calculus can express treatment effects as
functionals of the observational joint distribution that can be estimated
empirically. Sometimes the do-calculus identifies multiple valid formulae,
prompting us to compare the statistical properties of the corresponding
estimators. For example, the backdoor formula applies when all confounders are
observed and the frontdoor formula applies when an observed mediator transmits
the causal effect. In this paper, we investigate the over-identified scenario
where both confounders and mediators are observed, rendering both estimators
valid. Addressing the linear Gaussian causal model, we demonstrate that either
estimator can dominate the other by an unbounded constant factor. Next, we
derive an optimal estimator, which leverages all observed variables, and bound
its finite-sample variance. We show that it strictly outperforms the backdoor
and frontdoor estimators and that this improvement can be unbounded. We also
present a procedure for combining two datasets, one with observed confounders
and another with observed mediators. Finally, we evaluate our methods on both
simulated data and the IHDP and JTPA datasets.",Estimating Treatment Effects with Observed Confounders and Mediators,2020-03-26 18:50:25,"Shantanu Gupta, Zachary C. Lipton, David Childers","http://arxiv.org/abs/2003.11991v3, http://arxiv.org/pdf/2003.11991v3",stat.ME
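To make the over-identified setting concrete, the sketch below simulates a linear Gaussian model with an observed confounder W and mediator M and computes the backdoor and frontdoor estimates of the same causal effect. The structural coefficients and the plain least-squares helper are illustrative; the paper's optimal combined estimator is not reproduced.

```python
# Sketch: backdoor vs frontdoor estimates of the same causal effect in a
# linear Gaussian model. The true effect of X on Y is a * b (through M only).
import numpy as np

rng = np.random.default_rng(4)
n, a, b, c = 5000, 1.0, 2.0, 1.5        # X->M, M->Y, confounding strength
W = rng.standard_normal(n)              # observed confounder
X = c * W + rng.standard_normal(n)
M = a * X + rng.standard_normal(n)      # observed mediator
Y = b * M + c * W + rng.standard_normal(n)

def ols(y, *cols):
    Z = np.column_stack(cols + (np.ones(len(y)),))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

backdoor = ols(Y, X, W)[0]              # adjust for the confounder
frontdoor = ols(M, X)[0] * ols(Y, M, X)[0]   # combine X->M and M->Y given X

print(f"true effect={a*b:.2f}, backdoor={backdoor:.2f}, frontdoor={frontdoor:.2f}")
```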
31293,gn,"Debates on an EU-leaving referendum arose in several member states after
Brexit. We want to highlight how the exit of an additional country affects the
power distribution in the Council of the European Union. We inspect the power
indices of the member states both with and without the country which might
leave the union. Our results show a pattern connected to a change in the
threshold of the number of member states required for a decision. An exit that
modifies this threshold benefits the countries with high population, while an
exit that does not cause such a change benefits the small member states.
According to our calculations, the threat of Brexit would have worked
differently before the entry of Croatia.",Brexit: The Belated Threat,2018-08-14 10:31:15,"Dóra Gréta Petróczy, Mark Francis Rogers, László Á. Kóczy","http://dx.doi.org/10.3390/g13010018, http://arxiv.org/abs/1808.05142v1, http://arxiv.org/pdf/1808.05142v1",econ.GN
31237,em,"This study attempts to investigate into the structure and features of global
equity markets from a time-frequency perspective. An analysis grounded on this
framework allows one to capture information from a different dimension, as
opposed to the traditional time domain analyses, where multiscale structures of
financial markets are clearly extracted. In financial time series, multiscale
features manifest themselves due to the presence of multiple time horizons. The
existence of multiple time horizons necessitates a careful investigation of
each time horizon separately as market structures are not homogenous across
different time horizons. The presence of multiple time horizons, with varying
levels of complexity, requires one to investigate financial time series from a
heterogeneous market perspective where market players are said to operate at
different investment horizons. This thesis extends the application of
time-frequency based wavelet techniques to: i) analyse the interdependence of
global equity markets from a heterogeneous investor perspective with a special
focus on the Indian stock market, ii) investigate the contagion effect, if any,
of financial crises on Indian stock market, and iii) to study fractality and
scaling properties of global equity markets and analyse the efficiency of
Indian stock markets using wavelet based long memory methods.","A wavelet analysis of inter-dependence, contagion and long memory among global equity markets",2020-03-31 14:28:20,Avishek Bhandari,"http://arxiv.org/abs/2003.14110v1, http://arxiv.org/pdf/2003.14110v1",econ.EM
31238,em,"This research paper explores the performance of Machine Learning (ML)
algorithms and techniques that can be used for financial asset price
forecasting. The prediction and forecasting of asset prices and returns remains
one of the most challenging and exciting problems for quantitative finance and
practitioners alike. The massive increase in data generated and captured in
recent years presents an opportunity to leverage Machine Learning algorithms.
This study directly compares and contrasts state-of-the-art implementations of
modern Machine Learning algorithms on high performance computing (HPC)
infrastructures versus the traditional and highly popular Capital Asset Pricing
Model (CAPM) on U.S equities data. The implemented Machine Learning models -
trained on time series data for an entire stock universe (in addition to
exogenous macroeconomic variables) significantly outperform the CAPM on
out-of-sample (OOS) test data.",Machine Learning Algorithms for Financial Asset Price Forecasting,2020-03-31 21:14:18,Philip Ndikum,"http://arxiv.org/abs/2004.01504v1, http://arxiv.org/pdf/2004.01504v1",q-fin.ST
31239,em,"We develop a method for uniform valid confidence bands of a nonparametric
component $f_1$ in the general additive model $Y=f_1(X_1)+\ldots + f_p(X_p) +
\varepsilon$ in a high-dimensional setting. We employ sieve estimation and
embed it in a high-dimensional Z-estimation framework allowing us to construct
uniformly valid confidence bands for the first component $f_1$. As usual in
high-dimensional settings where the number of regressors $p$ may increase with
the sample size, a sparsity assumption is critical for the analysis. We also run
simulation studies which show that our proposed method gives reliable results
concerning the estimation properties and coverage properties even in small
samples. Finally, we illustrate our procedure with an empirical application
demonstrating the implementation and the use of the proposed method in
practice.",Uniform Inference in High-Dimensional Generalized Additive Models,2020-04-03 18:30:29,"Philipp Bach, Sven Klaassen, Jannis Kueck, Martin Spindler","http://arxiv.org/abs/2004.01623v1, http://arxiv.org/pdf/2004.01623v1",stat.ME
31240,em,"We first revisit the problem of estimating the spot volatility of an It\^o
semimartingale using a kernel estimator. We prove a Central Limit Theorem with
optimal convergence rate for a general two-sided kernel. Next, we introduce a
new pre-averaging/kernel estimator for spot volatility to handle the
microstructure noise of ultra high-frequency observations. We prove a Central
Limit Theorem for the estimation error with an optimal rate and study the
optimal selection of the bandwidth and kernel functions. We show that the
pre-averaging/kernel estimator's asymptotic variance is minimal for exponential
kernels, hence, justifying the need of working with kernels of unbounded
support as proposed in this work. We also develop a feasible implementation of
the proposed estimators with optimal bandwidth. Monte Carlo experiments confirm
the superior performance of the devised method.",Kernel Estimation of Spot Volatility with Microstructure Noise Using Pre-Averaging,2020-04-04 08:43:25,"José E. Figueroa-López, Bei Wu","http://arxiv.org/abs/2004.01865v3, http://arxiv.org/pdf/2004.01865v3",econ.EM
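A basic version of the kernel spot-volatility estimator (without microstructure noise, and hence without the paper's pre-averaging step) weights squared increments around the target time with a two-sided exponential kernel. The simulated volatility path and the bandwidth below are illustrative assumptions.

```python
# Sketch: kernel estimator of spot volatility from noise-free high-frequency
# increments, with a two-sided exponential kernel. Pre-averaging is omitted.
import numpy as np

rng = np.random.default_rng(5)
n = 23400                               # one trading day of 1-second returns
dt = 1.0 / n
t = np.arange(n) * dt
sigma = 0.2 + 0.1 * np.sin(2 * np.pi * t)            # true spot volatility path
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)    # Ito increments, no drift

def spot_vol(tau, h):
    """Kernel-weighted sum of squared increments around time tau."""
    k = np.exp(-np.abs(t - tau) / h) / (2 * h)       # exponential kernel, integrates to 1
    return np.sqrt(np.sum(k * dX**2))

for tau in (0.25, 0.5, 0.75):
    true = 0.2 + 0.1 * np.sin(2 * np.pi * tau)
    print(f"tau={tau}: estimate={spot_vol(tau, h=0.02):.3f}, true={true:.3f}")
```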
31241,em,"Traditional data sources for the analysis of housing markets show several
limitations, that recently started to be overcome using data coming from
housing sales advertisements (ads) websites. In this paper, using a large
dataset of ads in Italy, we provide the first comprehensive analysis of the
problems and potential of these data. The main problem is that multiple ads
(""duplicates"") can correspond to the same housing unit. We show that this issue
is mainly caused by sellers' attempts to increase the visibility of their listings.
Duplicates lead to misrepresentation of the volume and composition of housing
supply, but this bias can be corrected by identifying duplicates with machine
learning tools. We then focus on the potential of these data. We show that the
timeliness, granularity, and online nature of these data allow monitoring of
housing demand, supply and liquidity, and that the (asking) prices posted on
the website can be more informative than transaction prices.",What do online listings tell us about the housing market?,2020-04-06 17:40:09,"Michele Loberto, Andrea Luciani, Marco Pangallo","http://arxiv.org/abs/2004.02706v1, http://arxiv.org/pdf/2004.02706v1",econ.EM
31242,em,"Understanding disease spread through data visualisation has concentrated on
trends and maps. Whilst these are helpful, they neglect important
multi-dimensional interactions between characteristics of communities. Using
the Topological Data Analysis Ball Mapper algorithm we construct an abstract
representation of NUTS3 level economic data, overlaying onto it the confirmed
cases of Covid-19 in England. In so doing we may understand how the disease
spreads along different socio-economic dimensions. It is observed that some
areas of the characteristic space have quickly raced to the highest levels of
infection, while others nearby in the characteristic space do not show large
infection growth. Likewise, we see patterns emerging in very different areas
that call for closer monitoring. A strong contribution of Topological Data
Analysis, and the Ball Mapper algorithm especially, to comprehending dynamic
epidemic data is signposted.",Visualising the Evolution of English Covid-19 Cases with Topological Data Analysis Ball Mapper,2020-04-07 14:37:24,"Pawel Dlotko, Simon Rudkin","http://arxiv.org/abs/2004.03282v2, http://arxiv.org/pdf/2004.03282v2",physics.soc-ph
34577,th,"I study how to regulate firms' access to consumer data when the latter is
used for price discrimination and the regulator faces non-Bayesian uncertainty
about the correlation structure between data and willingness to pay, and hence
about the way the monopolist will segment the market. I characterize all
policies that are worst-case optimal when the regulator maximizes consumer
surplus: the regulator allows the monopolist to access data if and only if the
monopolist cannot use the database to identify a small group of consumers.",Robust Regulation of Firms' Access to Consumer Data,2023-05-10 03:36:06,Jose Higueras,"http://arxiv.org/abs/2305.05822v3, http://arxiv.org/pdf/2305.05822v3",econ.TH
31244,em,"In cities, the creation of public transport infrastructure such as light
rails can cause changes on a very detailed spatial scale, with different
stories unfolding next to each other within a same urban neighborhood. We study
the direct effect of a light rail line built in Florence (Italy) on the retail
density of the street where it was built and its spillover effect on other
streets in the treated street's neighborhood. To this end, we investigate the
use of Synthetic Control Group (SCG) methods in panel comparative case
studies where interference between the treated and the untreated units is
plausible, an issue still little researched in the SCG methodological
literature. We frame our discussion in the potential outcomes approach. Under a
partial interference assumption, we formally define relevant direct and
spillover causal effects. We also consider the ""unrealized"" spillover effect
on the treated street in the hypothetical scenario that another street in the
treated unit's neighborhood had been assigned to the intervention.",Direct and spillover effects of a new tramway line on the commercial vitality of peripheral streets. A synthetic-control approach,2020-04-10 16:24:15,"Giulio Grossi, Marco Mariani, Alessandra Mattei, Patrizia Lattarulo, Özge Öner","http://arxiv.org/abs/2004.05027v5, http://arxiv.org/pdf/2004.05027v5",stat.AP
31245,em,"We propose a new method for flagging bid rigging, which is particularly
useful for detecting incomplete bid-rigging cartels. Our approach combines
screens, i.e. statistics derived from the distribution of bids in a tender,
with machine learning to predict the probability of collusion. As a
methodological innovation, we calculate such screens for all possible subgroups
of three or four bids within a tender and use summary statistics like the mean,
median, maximum, and minimum of each screen as predictors in the machine
learning algorithm. This approach tackles the issue that competitive bids in
incomplete cartels distort the statistical signals produced by bid rigging. We
demonstrate that our algorithm outperforms previously suggested methods in
applications to incomplete cartels based on empirical data from Switzerland.",A Machine Learning Approach for Flagging Incomplete Bid-rigging Cartels,2020-04-12 18:04:35,"Hannes Wallimann, David Imhof, Martin Huber","http://arxiv.org/abs/2004.05629v1, http://arxiv.org/pdf/2004.05629v1",econ.EM
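The feature-engineering step described above, screens computed on all subgroups of three or four bids and then summarized per tender, can be sketched as follows. The two screens used here (coefficient of variation and the relative distance between the two lowest bids) are common examples rather than the paper's full set, and the function names are hypothetical.

```python
# Sketch: tender-level features built from screens on all bid subgroups of
# size 3 or 4, summarized with mean/median/min/max per screen and size.
from itertools import combinations
import numpy as np

def screens(bids):
    b = np.sort(np.asarray(bids, dtype=float))
    cv = b.std(ddof=1) / b.mean()                     # coefficient of variation
    losers = b[1:]
    rd = (b[1] - b[0]) / (losers.std(ddof=1) + 1e-12) # relative distance screen
    return cv, rd

def tender_features(bids):
    feats = {}
    for size in (3, 4):
        vals = np.array([screens(sub) for sub in combinations(bids, size)])
        for j, name in enumerate(("cv", "rd")):
            col = vals[:, j]
            feats.update({f"{name}{size}_mean": col.mean(),
                          f"{name}{size}_median": np.median(col),
                          f"{name}{size}_min": col.min(),
                          f"{name}{size}_max": col.max()})
    return feats

example_tender = [100.0, 101.5, 102.0, 107.0, 108.5]
print(tender_features(example_tender))
# One such feature vector per tender would then be fed to a classifier
# (e.g. a random forest) trained on tenders labelled collusive/competitive.
```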
31246,em,"Leadership is a process that leaders influence followers to achieve
collective goals. One of special cases of leadership is the coordinated pattern
initiation. In this context, leaders are initiators who initiate coordinated
patterns that everyone follows. Given a set of individual-multivariate time
series of real numbers, the mFLICA package provides a framework for R users to
infer coordination events within time series, initiators and followers of these
coordination events, as well as dynamics of group merging and splitting. The
mFLICA package also has a visualization function to make results of leadership
inference more understandable. The package is available on Comprehensive R
Archive Network (CRAN) at https://CRAN.R-project.org/package=mFLICA.",mFLICA: An R package for Inferring Leadership of Coordination From Time Series,2020-04-10 19:14:06,Chainarong Amornbunchornvej,"http://dx.doi.org/10.1016/j.softx.2021.100781, http://arxiv.org/abs/2004.06092v3, http://arxiv.org/pdf/2004.06092v3",cs.SI
31247,em,"As a consequence of missing data on tests for infection and imperfect
accuracy of tests, reported rates of population infection by the SARS CoV-2
virus are lower than actual rates of infection. Hence, reported rates of severe
illness conditional on infection are higher than actual rates. Understanding
the time path of the COVID-19 pandemic has been hampered by the absence of
bounds on infection rates that are credible and informative. This paper
explains the logical problem of bounding these rates and reports illustrative
findings, using data from Illinois, New York, and Italy. We combine the data
with assumptions on the infection rate in the untested population and on the
accuracy of the tests that appear credible in the current context. We find that
the infection rate might be substantially higher than reported. We also find
that the infection fatality rate in Italy is substantially lower than reported.",Estimating the COVID-19 Infection Rate: Anatomy of an Inference Problem,2020-04-13 23:04:34,"Charles F. Manski, Francesca Molinari","http://arxiv.org/abs/2004.06178v1, http://arxiv.org/pdf/2004.06178v1",econ.EM
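The bounding logic can be illustrated with back-of-the-envelope arithmetic: the population infection rate mixes the observed rate among the tested with the unobserved rate among the untested. Assuming, as a crude version of the paper's assumptions, that the untested are infected at a rate between zero and the tested rate yields an identified interval; test accuracy is ignored here and the numbers are invented.

```python
# Crude version of the bound: P(infected) = P(inf | tested) * P(tested)
#                                         + P(inf | untested) * P(untested),
# with P(inf | untested) assumed to lie between 0 and P(inf | tested).
p_tested = 0.02               # hypothetical share of the population tested
p_pos_given_tested = 0.20     # hypothetical positivity rate among the tested

lower = p_pos_given_tested * p_tested        # untested infected at rate 0
upper = p_pos_given_tested                   # untested infected at the tested rate
print(f"infection rate bounds: [{lower:.4f}, {upper:.4f}]")
```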
31248,em,"We study the problem of optimal control of the stochastic SIR model. Models
of this type are used in mathematical epidemiology to capture the time
evolution of highly infectious diseases such as COVID-19. Our approach relies
on reformulating the Hamilton-Jacobi-Bellman equation as a stochastic minimum
principle. This results in a system of forward backward stochastic differential
equations, which is amenable to numerical solution via Monte Carlo simulations.
We present a number of numerical solutions of the system under a variety of
scenarios.",Epidemic control via stochastic optimal control,2020-04-14 20:34:07,Andrew Lesniewski,"http://arxiv.org/abs/2004.06680v3, http://arxiv.org/pdf/2004.06680v3",q-bio.PE
31249,em,"We establish nonparametric identification in a class of so-called index
models using a novel approach that relies on general topological results. Our
proof strategy requires substantially weaker conditions on the functions and
distributions characterizing the model compared to existing strategies; in
particular, it does not require any large support conditions on the regressors
of our model. We apply the general identification result to additive random
utility and competing risk models.",Identification of a class of index models: A topological approach,2020-04-16 22:43:57,"Mogens Fosgerau, Dennis Kristensen","http://arxiv.org/abs/2004.07900v1, http://arxiv.org/pdf/2004.07900v1",econ.EM
31250,em,"The number of Covid-19 cases is increasing dramatically worldwide. Therefore,
the availability of reliable forecasts for the number of cases in the coming
days is of fundamental importance. We propose a simple statistical method for
short-term real-time forecasting of the number of Covid-19 cases and fatalities
in countries that are latecomers -- i.e., countries where cases of the disease
started to appear some time after others. In particular, we propose a penalized
(LASSO) regression with an error correction mechanism to construct a model of a
latecomer in terms of the other countries that were at a similar stage of the
pandemic some days before. By tracking the number of cases and deaths in those
countries, we forecast through an adaptive rolling-window scheme the number of
cases and deaths in the latecomer. We apply this methodology to Brazil, and
show that (so far) it has been performing very well. These forecasts aim to
foster a better short-run management of the health system capacity.",Short-Term Covid-19 Forecast for Latecomers,2020-04-17 01:09:14,"Marcelo Medeiros, Alexandre Street, Davi Valladão, Gabriel Vasconcelos, Eduardo Zilberman","http://arxiv.org/abs/2004.07977v3, http://arxiv.org/pdf/2004.07977v3",stat.AP
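A stripped-down version of the forecasting scheme, a rolling-window LASSO projecting the latecomer on leader-country series, is sketched below on synthetic data. The stage-alignment of countries and the error-correction term used in the paper are omitted, and the penalty level is an arbitrary choice.

```python
# Sketch: rolling-window LASSO one-step-ahead forecasts of a "latecomer"
# series from leader-country series, on synthetic cumulative-count data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
T, n_leaders, window = 120, 10, 30
leaders = np.cumsum(rng.poisson(5, size=(T, n_leaders)), axis=0).astype(float)
latecomer = 0.7 * leaders[:, 0] + 0.3 * leaders[:, 3] + rng.normal(0, 5, T)

forecasts = []
for t in range(window, T - 1):
    Xw, yw = leaders[t - window:t], latecomer[t - window:t]
    model = Lasso(alpha=1.0, max_iter=5000).fit(Xw, yw)
    forecasts.append(model.predict(leaders[t + 1:t + 2])[0])

errors = latecomer[window + 1:T] - np.array(forecasts)
print("rolling one-step-ahead RMSE:", np.sqrt(np.mean(errors**2)))
```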
31251,em,"We study causal inference under case-control and case-population sampling.
Specifically, we focus on the binary-outcome and binary-treatment case, where
the parameters of interest are causal relative and attributable risks defined
via the potential outcome framework. It is shown that strong ignorability is
not always as powerful as it is under random sampling and that certain
monotonicity assumptions yield comparable results in terms of sharp identified
intervals. Specifically, the usual odds ratio is shown to be a sharp identified
upper bound on causal relative risk under the monotone treatment response and
monotone treatment selection assumptions. We offer algorithms for inference on
the causal parameters that are aggregated over the true population distribution
of the covariates. We show the usefulness of our approach by studying three
empirical examples: the benefit of attending private school for entering a
prestigious university in Pakistan; the relationship between staying in school
and getting involved with drug-trafficking gangs in Brazil; and the link
between physicians' hours and size of the group practice in the United States.",Causal Inference under Outcome-Based Sampling with Monotonicity Assumptions,2020-04-17 19:01:34,"Sung Jae Jun, Sokbae Lee","http://arxiv.org/abs/2004.08318v6, http://arxiv.org/pdf/2004.08318v6",econ.EM
31252,em,"The main focus of this study is to collect and prepare data on air passengers
traffic worldwide with the scope of analyze the impact of travel ban on the
aviation sector. Based on historical data from January 2010 till October 2019,
a forecasting model is implemented in order to set a reference baseline. Making
use of airplane movements extracted from online flight tracking platforms and
on-line booking systems, this study also presents a first assessment of recent
changes in flight activity around the world as a result of the COVID-19
pandemic. To study the effects of the air travel ban on aviation and, in turn,
its socio-economic impact, several scenarios are constructed based on past
pandemic crises and the observed flight volumes. It turns out that, according
to these hypothetical scenarios, in the first quarter of 2020 the impact of
aviation losses could have reduced World GDP by 0.02% to 0.12% according to
the observed data; in the worst case scenarios, by the end of 2020 the loss
could be as high as 1.41-1.67% and job losses may reach 25-30 million.
Focusing on the EU27, the GDP loss may amount to 1.66-1.98% by the end of
2020, with 4.2 to 5 million job losses in the worst case scenarios.
Some countries will be more affected than others in the short run
and most European airlines companies will suffer from the travel ban.",Estimating and Projecting Air Passenger Traffic during the COVID-19 Coronavirus Outbreak and its Socio-Economic Impact,2020-04-18 00:40:31,"Stefano Maria Iacus, Fabrizio Natale, Carlos Satamaria, Spyridon Spyratos, Michele Vespe","http://arxiv.org/abs/2004.08460v2, http://arxiv.org/pdf/2004.08460v2",stat.AP
31253,em,"Economic Scenario Generators (ESGs) simulate economic and financial variables
forward in time for risk management and asset allocation purposes. It is often
not feasible to calibrate the dynamics of all variables within the ESG to
historical data alone. Calibration to forward-information such as future
scenarios and return expectations is needed for stress testing and portfolio
optimization, but no generally accepted methodology is available. This paper
introduces the Conditional Scenario Simulator, which is a framework for
consistently calibrating simulations and projections of economic and financial
variables both to historical data and forward-looking information. The
framework can be viewed as a multi-period, multi-factor generalization of the
Black-Litterman model, and can embed a wide array of financial and
macroeconomic models. Two practical examples demonstrate this in a frequentist
and Bayesian setting.",Consistent Calibration of Economic Scenario Generators: The Case for Conditional Simulation,2020-04-20 06:44:47,Misha van Beek,"http://arxiv.org/abs/2004.09042v1, http://arxiv.org/pdf/2004.09042v1",econ.EM
31254,em,"Assessing sampling uncertainty in extremum estimation can be challenging when
the asymptotic variance is not analytically tractable. Bootstrap inference
offers a feasible solution but can be computationally costly, especially when
the model is complex. This paper uses iterates of a specially designed
stochastic optimization algorithm as draws from which both point estimates and
bootstrap standard errors can be computed in a single run. The draws are
generated by the gradient and Hessian computed from batches of data that are
resampled at each iteration. We show that these draws yield consistent
estimates and asymptotically valid frequentist inference for a large class of
regular problems. The algorithm provides accurate standard errors in simulation
examples and empirical applications at low computational costs. The draws from
the algorithm also provide a convenient way to detect data irregularities.",Inference by Stochastic Optimization: A Free-Lunch Bootstrap,2020-04-20 23:43:28,"Jean-Jacques Forneron, Serena Ng","http://arxiv.org/abs/2004.09627v3, http://arxiv.org/pdf/2004.09627v3",econ.EM
31255,em,"This paper proposes two distinct contributions to econometric analysis of
large information sets and structural instabilities. First, it treats a
regression model with time-varying coefficients, stochastic volatility and
exogenous predictors, as an equivalent high-dimensional static regression
problem with thousands of covariates. Inference in this specification proceeds
using Bayesian hierarchical priors that shrink the high-dimensional vector of
coefficients either towards zero or time-invariance. Second, it introduces the
frameworks of factor graphs and message passing as a means of designing
efficient Bayesian estimation algorithms. In particular, a Generalized
Approximate Message Passing (GAMP) algorithm is derived that has low
algorithmic complexity and is trivially parallelizable. The result is a
comprehensive methodology that can be used to estimate time-varying parameter
regressions with arbitrarily large number of exogenous predictors. In a
forecasting exercise for U.S. price inflation this methodology is shown to work
very well.",High-dimensional macroeconomic forecasting using message passing algorithms,2020-04-24 02:10:04,Dimitris Korobilis,"http://dx.doi.org/10.1080/07350015.2019.1677472, http://arxiv.org/abs/2004.11485v1, http://arxiv.org/pdf/2004.11485v1",stat.ME
31260,em,"This paper investigates the case of interference, when a unit's treatment
also affects other units' outcome. When interference is at work, policy
evaluation mostly relies on the use of randomized experiments under cluster
interference and binary treatment. Instead, we consider a non-experimental
setting under continuous treatment and network interference. In particular, we
define spillover effects by specifying the exposure to network treatment as a
weighted average of the treatment received by units connected through physical,
social or economic interactions. We provide a generalized propensity
score-based estimator to estimate both direct and spillover effects of a
continuous treatment. Our estimator also allows for asymmetric network
connections characterized by heterogeneous intensities. To showcase this
methodology, we investigate whether and how spillover effects shape the optimal
level of policy interventions in agricultural markets. Our results show that,
in this context, neglecting interference may lead to underestimating the degree of policy
effectiveness.",Causal Inference on Networks under Continuous Treatment Interference,2020-04-28 15:28:30,"Laura Forastiere, Davide Del Prete, Valerio Leone Sciabolazza","http://arxiv.org/abs/2004.13459v2, http://arxiv.org/pdf/2004.13459v2",stat.ME
31256,em,"This study presents a systematic comparison of methods for individual
treatment assignment, a general problem that arises in many applications and
has received significant attention from economists, computer scientists, and
social scientists. We group the various methods proposed in the literature into
three general classes of algorithms (or metalearners): learning models to
predict outcomes (the O-learner), learning models to predict causal effects
(the E-learner), and learning models to predict optimal treatment assignments
(the A-learner). We compare the metalearners in terms of (1) their level of
generality and (2) the objective function they use to learn models from data;
we then discuss the implications that these characteristics have for modeling
and decision making. Notably, we demonstrate analytically and empirically that
optimizing for the prediction of outcomes or causal effects is not the same as
optimizing for treatment assignments, suggesting that in general the A-learner
should lead to better treatment assignments than the other metalearners. We
demonstrate the practical implications of our findings in the context of
choosing, for each user, the best algorithm for playlist generation in order to
optimize engagement. This is the first comparison of the three different
metalearners on a real-world application at scale (based on more than half a
billion individual treatment assignments). In addition to supporting our
analytical findings, the results show how large A/B tests can provide
substantial value for learning treatment assignment policies, rather than
simply choosing the variant that performs best on average.",A Comparison of Methods for Treatment Assignment with an Application to Playlist Generation,2020-04-24 07:56:15,"Carlos Fernández-Loría, Foster Provost, Jesse Anderton, Benjamin Carterette, Praveen Chandar","http://arxiv.org/abs/2004.11532v5, http://arxiv.org/pdf/2004.11532v5",econ.EM
31257,em,"In regional economics research, a problem of interest is to detect
similarities between regions, and estimate their shared coefficients in
economics models. In this article, we propose a mixture of finite mixtures
(MFM) clustered regression model with auxiliary covariates that account for
similarities in demographic or economic characteristics over a spatial domain.
Our Bayesian construction provides both inference for number of clusters and
clustering configurations, and estimation for parameters for each cluster.
Empirical performance of the proposed model is illustrated through simulation
experiments, and further applied to a study of influential factors for monthly
housing cost in Georgia.",Bayesian Clustered Coefficients Regression with Auxiliary Covariates Assistant Random Effects,2020-04-25 03:21:33,"Guanyu Hu, Yishu Xue, Zhihua Ma","http://arxiv.org/abs/2004.12022v2, http://arxiv.org/pdf/2004.12022v2",stat.ME
31258,em,"In an A/B test, the typical objective is to measure the total average
treatment effect (TATE), which measures the difference between the average
outcome if all users were treated and the average outcome if all users were
untreated. However, a simple difference-in-means estimator will give a biased
estimate of the TATE when outcomes of control units depend on the outcomes of
treatment units, an issue we refer to as test-control interference. Using a
simulation built on top of data from Airbnb, this paper considers the use of
methods from the network interference literature for online marketplace
experimentation. We model the marketplace as a network in which an edge exists
between two sellers if their goods substitute for one another. We then simulate
seller outcomes, specifically considering a ""status quo"" context and
""treatment"" context that forces all sellers to lower their prices. We use the
same simulation framework to approximate TATE distributions produced by using
blocked graph cluster randomization, exposure modeling, and the Hajek estimator
for the difference in means. We find that while blocked graph cluster
randomization reduces the bias of the naive difference-in-means estimator by as
much as 62%, it also significantly increases the variance of the estimator. On
the other hand, the use of more sophisticated estimators produces mixed
results. While some provide (small) additional reductions in bias and small
reductions in variance, others lead to increased bias and variance. Overall,
our results suggest that experiment design and analysis techniques from the
network experimentation literature are promising tools for reducing bias due to
test-control interference in marketplace experiments.",Limiting Bias from Test-Control Interference in Online Marketplace Experiments,2020-04-25 17:54:49,"David Holtz, Sinan Aral","http://arxiv.org/abs/2004.12162v1, http://arxiv.org/pdf/2004.12162v1",stat.AP
31259,em,"Online marketplace designers frequently run A/B tests to measure the impact
of proposed product changes. However, given that marketplaces are inherently
connected, total average treatment effect estimates obtained through Bernoulli
randomized experiments are often biased due to violations of the stable unit
treatment value assumption. This can be particularly problematic for
experiments that impact sellers' strategic choices, affect buyers' preferences
over items in their consideration set, or change buyers' consideration sets
altogether. In this work, we measure and reduce bias due to interference in
online marketplace experiments by using observational data to create clusters
of similar listings, and then using those clusters to conduct
cluster-randomized field experiments. We provide a lower bound on the magnitude
of bias due to interference by conducting a meta-experiment that randomizes
over two experiment designs: one Bernoulli randomized, one cluster randomized.
In both meta-experiment arms, treatment sellers are subject to a different
platform fee policy than control sellers, resulting in different prices for
buyers. By conducting a joint analysis of the two meta-experiment arms, we find
a large and statistically significant difference between the total average
treatment effect estimates obtained with the two designs, and estimate that
32.60% of the Bernoulli-randomized treatment effect estimate is due to
interference bias. We also find weak evidence that the magnitude and/or
direction of interference bias depends on the extent to which a marketplace is
supply- or demand-constrained, and analyze a second meta-experiment to
highlight the difficulty of detecting interference bias when treatment
interventions require intention-to-treat analysis.",Reducing Interference Bias in Online Marketplace Pricing Experiments,2020-04-27 01:09:37,"David Holtz, Ruben Lobel, Inessa Liskovich, Sinan Aral","http://arxiv.org/abs/2004.12489v1, http://arxiv.org/pdf/2004.12489v1",stat.ME
31283,gn,"This paper studies the problem of optimally extracting nonrenewable natural
resources. Taking into account the fact that the market values of the main
natural resources (e.g., oil, natural gas, and copper) fluctuate randomly
following global and seasonal macroeconomic parameters, the prices of natural
resources are modeled using Markov switching L\'evy processes. We formulate
this optimal extraction problem as an infinite-time horizon optimal control
problem. We derive closed-form solutions for the value function as well as the
optimal extraction policy. Numerical examples are presented to illustrate these
results.",Explicit Solutions for Optimal Resource Extraction Problems under Regime Switching Lévy Models,2018-06-15 22:53:36,Moustapha Pemy,"http://arxiv.org/abs/1806.06105v1, http://arxiv.org/pdf/1806.06105v1",econ.GN
31261,em,"As we reach the apex of the COVID-19 pandemic, the most pressing question
facing us is: can we even partially reopen the economy without risking a second
wave? We first need to understand if shutting down the economy helped. And if
it did, is it possible to achieve similar gains in the war against the pandemic
while partially opening up the economy? To do so, it is critical to understand
the effects of the various interventions that can be put into place and their
corresponding health and economic implications. Since many interventions exist,
the key challenge facing policy makers is understanding the potential
trade-offs between them, and choosing the particular set of interventions that
works best for their circumstance. In this memo, we provide an overview of
Synthetic Interventions (a natural generalization of Synthetic Control), a
data-driven and statistically principled method to perform what-if scenario
planning, i.e., for policy makers to understand the trade-offs between
different interventions before having to actually enact them. In essence, the
method leverages information from different interventions that have already
been enacted across the world and fits it to a policy maker's setting of
interest, e.g., to estimate the effect of mobility-restricting interventions on
the U.S., we use daily death data from countries that enforced severe mobility
restrictions to create a ""synthetic low mobility U.S."" and predict the
counterfactual trajectory of the U.S. if it had indeed applied a similar
intervention. Using Synthetic Interventions, we find that lifting severe
mobility restrictions and only retaining moderate mobility restrictions (at
retail and transit locations), seems to effectively flatten the curve. We hope
this provides guidance on weighing the trade-offs between the safety of the
population, strain on the healthcare system, and impact on the economy.",Two Burning Questions on COVID-19: Did shutting down the economy help? Can we (partially) reopen the economy without risking the second wave?,2020-04-30 22:39:34,"Anish Agarwal, Abdullah Alomar, Arnab Sarker, Devavrat Shah, Dennis Shen, Cindy Yang","http://arxiv.org/abs/2005.00072v2, http://arxiv.org/pdf/2005.00072v2",econ.EM
31262,em,"Utilizing a generative regime switching framework, we perform Monte-Carlo
simulations of asset returns for Value at Risk threshold estimation. Using
equity markets and long term bonds as test assets in the global, US, Euro area
and UK setting over a sample horizon of up to 1,250 weeks ending in August 2018,
we investigate neural networks along three design steps relating (i) to the
initialization of the neural network, (ii) its incentive function according to
which it has been trained and (iii) the amount of data we feed. First, we
compare neural networks with random seeding with networks that are initialized
via estimations from the best-established model (i.e. the Hidden Markov). We
find the latter to outperform in terms of the frequency of VaR breaches (i.e. the
realized return falling short of the estimated VaR threshold). Second, we
balance the incentive structure of the loss function of our networks by adding
a second objective to the training instructions so that the neural networks
optimize for accuracy while also aiming to stay in empirically realistic regime
distributions (i.e. bull vs. bear market frequencies). In particular this
design feature enables the balanced incentive recurrent neural network (RNN) to
outperform the single incentive RNN as well as any other neural network or
established approach by statistically and economically significant levels.
Third, we halve our training data set of 2,000 days. We find our networks, when
fed with substantially less data (i.e. 1,000 days), to perform significantly
worse, which highlights a crucial weakness of neural networks in their
dependence on very large data sets ...",Neural Networks and Value at Risk,2020-05-04 20:41:59,"Alexander Arimond, Damian Borth, Andreas Hoepner, Michael Klawunn, Stefan Weisheit","http://arxiv.org/abs/2005.01686v2, http://arxiv.org/pdf/2005.01686v2",q-fin.RM
31263,em,"While causal models are robust in that they are prediction optimal under
arbitrarily strong interventions, they may not be optimal when the
interventions are bounded. We prove that the classical K-class estimator
satisfies such optimality by establishing a connection between K-class
estimators and anchor regression. This connection further motivates a novel
estimator in instrumental variable settings that minimizes the mean squared
prediction error subject to the constraint that the estimator lies in an
asymptotically valid confidence region of the causal coefficient. We call this
estimator PULSE (p-uncorrelated least squares estimator), relate it to work on
invariance, show that it can be computed efficiently as a data-driven K-class
estimator, even though the underlying optimization problem is non-convex, and
prove consistency. We evaluate the estimators on real data and perform
simulation experiments illustrating that PULSE suffers from less variability.
There are several settings including weak instrument settings, where it
outperforms other estimators.",Distributional robustness of K-class estimators and the PULSE,2020-05-07 12:39:07,"Martin Emil Jakobsen, Jonas Peters","http://dx.doi.org/10.1093/ectj/utab031, http://arxiv.org/abs/2005.03353v3, http://arxiv.org/pdf/2005.03353v3",econ.EM
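The K-class family referred to above nests OLS (kappa = 0) and 2SLS (kappa = 1), and can be computed without forming any n-by-n projection matrix. The sketch below shows the family on simulated IV data; the data-driven, constraint-based choice of kappa that defines PULSE is not reproduced here.

```python
# Sketch: the K-class estimator beta(kappa) = [X'(I - kappa*M_Z)X]^{-1} X'(I - kappa*M_Z)y,
# where M_Z is the annihilator of the instruments. kappa=0 is OLS, kappa=1 is 2SLS.
import numpy as np

rng = np.random.default_rng(7)
n, beta_true = 5000, 1.0
Z = rng.standard_normal((n, 1))                       # instrument
U = rng.standard_normal(n)                            # unobserved confounder
X = (Z[:, 0] + U + rng.standard_normal(n)).reshape(-1, 1)
y = beta_true * X[:, 0] + U + rng.standard_normal(n)

# Residuals of X after projecting on the instruments (i.e. M_Z X).
X_res = X - Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)

def k_class(kappa):
    A = X.T @ X - kappa * (X_res.T @ X_res)           # X'(I - kappa*M_Z)X
    b = X.T @ y - kappa * (X_res.T @ y)               # X'(I - kappa*M_Z)y
    return np.linalg.solve(A, b)[0]

for kappa in (0.0, 0.5, 1.0):                         # OLS, intermediate, 2SLS
    print(f"kappa={kappa:.1f}: beta_hat={float(k_class(kappa)):.3f}  (true={beta_true})")
```

OLS is biased here because of the unobserved confounder, while the 2SLS end of the family recovers the true coefficient; intermediate kappa values trade off the two.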
31264,em,"In Canada, financial advisors and dealers are required by provincial
securities commissions and self-regulatory organizations--charged with direct
regulation over investment dealers and mutual fund dealers--to respectively
collect and maintain Know Your Client (KYC) information, such as their age or
risk tolerance, for investor accounts. With this information, investors, under
their advisor's guidance, make decisions on their investments which are
presumed to be beneficial to their investment goals. Our unique dataset is
provided by a financial investment dealer with over 50,000 accounts for over
23,000 clients. We use a modified behavioural finance recency, frequency,
monetary model for engineering features that quantify investor behaviours, and
machine learning clustering algorithms to find groups of investors that behave
similarly. We show that the KYC information collected does not explain client
behaviours, whereas trade and transaction frequency and volume are most
informative. We believe the results shown herein encourage financial regulators
and advisors to use more advanced metrics to better understand and predict
investor behaviours.",Know Your Clients' behaviours: a cluster analysis of financial transactions,2020-05-07 20:22:40,"John R. J. Thompson, Longlong Feng, R. Mark Reesor, Chuck Grace","http://arxiv.org/abs/2005.03625v2, http://arxiv.org/pdf/2005.03625v2",econ.EM
31265,em,"We develop theoretical finite-sample results concerning the size of wild
bootstrap-based heteroskedasticity robust tests in linear regression models. In
particular, these results provide an efficient diagnostic check, which can be
used to weed out tests that are unreliable for a given testing problem in the
sense that they overreject substantially. This allows us to assess the
reliability of a large variety of wild bootstrap-based tests in an extensive
numerical study.",How Reliable are Bootstrap-based Heteroskedasticity Robust Tests?,2020-05-08 18:06:36,"Benedikt M. Pötscher, David Preinerstorfer","http://arxiv.org/abs/2005.04089v2, http://arxiv.org/pdf/2005.04089v2",math.ST
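For orientation, here is a generic wild (Rademacher) bootstrap p-value for a heteroskedasticity-robust t-test with the null imposed when resampling. It is a textbook version meant only to fix ideas, not the paper's finite-sample size diagnostic; the data-generating process and HC0 weighting are illustrative choices.

```python
# Sketch: wild bootstrap p-value for a heteroskedasticity robust t-test of
# H0: beta1 = 0, resampling restricted residuals with Rademacher weights.
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.standard_normal(n)
y = 0.0 * x + np.abs(x) * rng.standard_normal(n)      # heteroskedastic, H0 true
X = np.column_stack([np.ones(n), x])

def hc_tstat(X, y):
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    V = XtX_inv @ (X.T * u**2) @ X @ XtX_inv           # HC0 covariance
    return beta[1] / np.sqrt(V[1, 1])

t_obs = hc_tstat(X, y)

# Restricted fit under H0 (intercept only); resample with Rademacher weights.
u0 = y - y.mean()
B = 999
t_boot = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)
    t_boot[b] = hc_tstat(X, y.mean() + u0 * v)

p_value = np.mean(np.abs(t_boot) >= np.abs(t_obs))
print(f"robust t = {t_obs:.2f}, wild bootstrap p-value = {p_value:.3f}")
```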
31315,gn,"This article outlines different stages in development of the national culture
model, created by Geert Hofstede and his affiliates. This paper reveals and
synthesizes the contemporary review of the application spheres of this
framework. Numerous applications of the dimensions set are used as a source of
identifying significant critiques, concerning different aspects in model's
operation. These critiques are classified and their underlying reasons are also
outlined by means of a fishbone diagram.",Geert Hofstede et al's set of national cultural dimensions - popularity and criticisms,2018-10-05 14:31:40,Kiril Dimitrov,"http://dx.doi.org/10.5281/zenodo.1434882, http://arxiv.org/abs/1810.02621v1, http://arxiv.org/pdf/1810.02621v1",econ.GN
31266,em,"Water demand is a highly important variable for operational control and
decision making. Hence, the development of accurate forecasts is a valuable
field of research to further improve the efficiency of water utilities.
Focusing on probabilistic multi-step-ahead forecasting, a time series model is
introduced, to capture typical autoregressive, calendar and seasonal effects,
to account for time-varying variance, and to quantify the uncertainty and
path-dependency of the water demand process. To deal with the high complexity
of the water demand process, a high-dimensional feature space is applied, which
is efficiently tuned by an automatic shrinkage and selection operator (lasso).
This yields an accurate, easily interpretable and fast-to-compute
forecasting model, which is well suited for real-time applications. The
complete probabilistic forecasting framework allows not only for simulating the
mean and the marginal properties, but also the correlation structure between
hours within the forecasting horizon. For practitioners, complete probabilistic
multi-step-ahead forecasts are of considerable relevance as they provide
additional information about the expected aggregated or cumulative water
demand, so that a statement can be made about the probability with which a
water storage capacity can guarantee the supply over a certain period of time.
This information allows to better control storage capacities and to better
ensure the smooth operation of pumps. To appropriately evaluate the forecasting
performance of the considered models, the energy score (ES) as a strictly
proper multidimensional evaluation criterion, is introduced. The methodology is
applied to the hourly water demand data of a German water supplier.",Probabilistic Multi-Step-Ahead Short-Term Water Demand Forecasting with Lasso,2020-05-10 01:26:09,"Jens Kley-Holsteg, Florian Ziel","http://dx.doi.org/10.1061/(ASCE)WR.1943-5452.0001268, http://arxiv.org/abs/2005.04522v1, http://arxiv.org/pdf/2005.04522v1",stat.AP
31267,em,"In this work we provide a review of basic ideas and novel developments about
Conformal Prediction -- an innovative distribution-free, non-parametric
forecasting method, based on minimal assumptions -- that is able to yield, in a
very straightforward way, prediction sets that are valid in a statistical sense
even in the finite-sample case. The in-depth discussion provided in the
paper covers the theoretical underpinnings of Conformal Prediction, and then
proceeds to list the more advanced developments and adaptations of the original
idea.",Conformal Prediction: a Unified Review of Theory and New Challenges,2020-05-16 15:38:19,"Matteo Fontana, Gianluca Zeni, Simone Vantini","http://arxiv.org/abs/2005.07972v2, http://arxiv.org/pdf/2005.07972v2",cs.LG
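The simplest construction reviewed above is split conformal prediction, which wraps any fitted regressor. The sketch below uses a random forest purely for illustration; the data, split sizes and miscoverage level are arbitrary.

```python
# Sketch: split conformal prediction intervals for regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n = 1000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(n)

# Split into a proper training set and a calibration set.
train, calib = np.arange(0, 600), np.arange(600, 1000)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], y[train])

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y[calib] - model.predict(X[calib]))
alpha = 0.1
k = int(np.ceil((len(calib) + 1) * (1 - alpha)))       # finite-sample quantile rank
q = np.sort(scores)[k - 1]

# Prediction interval for a new point: point prediction +/- q.
x_new = np.array([[1.0]])
pred = model.predict(x_new)[0]
print(f"90% conformal interval: [{pred - q:.3f}, {pred + q:.3f}]")
```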
31268,em,"We introduce a new stochastic duration model for transaction times in asset
markets. We argue that widely accepted rules for aggregating seemingly related
trades mislead inference pertaining to durations between unrelated trades:
while any two trades executed in the same second are probably related, it is
extremely unlikely that all such pairs of trades are, in a typical sample. By
placing uncertainty about which trades are related within our model, we improve
inference for the distribution of durations between unrelated trades,
especially near zero. We introduce a normalized conditional distribution for
durations between unrelated trades that is both flexible and amenable to
shrinkage towards an exponential distribution, which we argue is an appropriate
first-order model. Thanks to highly efficient draws of state variables,
numerical efficiency of posterior simulation is much higher than in previous
studies. In an empirical application, we find that the conditional hazard
function for durations between unrelated trades varies much less than what most
studies find. We claim that this is because we avoid statistical artifacts that
arise from deterministic trade-aggregation rules and unsuitable parametric
distributions.",A Flexible Stochastic Conditional Duration Model,2020-05-19 05:01:45,"Samuel Gingras, William J. McCausland","http://arxiv.org/abs/2005.09166v1, http://arxiv.org/pdf/2005.09166v1",econ.EM
31269,em,"During the early part of the Covid-19 pandemic, national and local
governments introduced a number of policies to combat the spread of Covid-19.
In this paper, we propose a new approach to bound the effects of such
early-pandemic policies on Covid-19 cases and other outcomes while dealing with
complications arising from (i) limited availability of Covid-19 tests, (ii)
differential availability of Covid-19 tests across locations, and (iii)
eligibility requirements for individuals to be tested. We use our approach
to study the effects of Tennessee's expansion of Covid-19 testing early in the
pandemic and find that the policy decreased Covid-19 cases.",Evaluating Policies Early in a Pandemic: Bounding Policy Effects with Nonrandomly Missing Data,2020-05-19 20:26:52,"Brantly Callaway, Tong Li","http://arxiv.org/abs/2005.09605v6, http://arxiv.org/pdf/2005.09605v6",econ.EM
31270,em,"We study the problem of a decision maker who must provide the best possible
treatment recommendation based on an experiment. The desirability of the
outcome distribution resulting from the policy recommendation is measured
through a functional capturing the distributional characteristic that the
decision maker is interested in optimizing. This could be, e.g., its inherent
inequality, welfare, level of poverty or its distance to a desired outcome
distribution. If the functional of interest is not quasi-convex or if there are
constraints, the optimal recommendation may be a mixture of treatments. This
vastly expands the set of recommendations that must be considered. We
characterize the difficulty of the problem by obtaining maximal expected regret
lower bounds. Furthermore, we propose two (near) regret-optimal policies. The
first policy is static and thus applicable irrespective of whether subjects
arrive sequentially or not in the course of the experimentation phase. The second
policy can utilize that subjects arrive sequentially by successively
eliminating inferior treatments and thus spends the sampling effort where it is
most needed.",Treatment recommendation with distributional targets,2020-05-19 22:27:21,"Anders Bredahl Kock, David Preinerstorfer, Bezirgen Veliyev","http://arxiv.org/abs/2005.09717v4, http://arxiv.org/pdf/2005.09717v4",econ.EM
31271,em,"This paper describes a general approach for stochastic modeling of assets
returns and liability cash-flows of a typical pensions insurer. On the asset
side, we model the investment returns on equities and various classes of
fixed-income instruments including short- and long-maturity fixed-rate bonds as
well as index-linked and corporate bonds. On the liability side, the risks are
driven by future mortality developments as well as price and wage inflation.
All the risk factors are modeled as a multivariate stochastic process that
captures the dynamics and the dependencies across different risk factors. The
model is easy to interpret and to calibrate to both historical data and to
forecasts or expert views concerning the future. The simple structure of the
model allows for efficient computations. The construction of a million
scenarios takes only a few minutes on a personal computer. The approach is
illustrated with an asset-liability analysis of a defined benefit pension fund.",Stochastic modeling of assets and liabilities with mortality risk,2020-05-20 14:37:08,"Sergio Alvares Maffra, John Armstrong, Teemu Pennanen","http://dx.doi.org/10.1111/j.1365-2966.2005.09974.x, http://arxiv.org/abs/2005.09974v1, http://arxiv.org/pdf/2005.09974v1",q-fin.RM
31316,gn,"The current article unveils and analyzes some important factors, influencing
diversity in strategic decision-making approaches in local companies.
The researcher's attention is oriented towards surveying important characteristics of the
strategic moves, undertaken by leading companies in Bulgaria.","Contemporary facets of business successes among leading companies, operating in Bulgaria",2018-10-05 14:37:27,Kiril Dimitrov,"http://dx.doi.org/10.5281/zenodo.1434878, http://arxiv.org/abs/1810.02622v1, http://arxiv.org/pdf/1810.02622v1",econ.GN
31272,em,"The internal validity of observational study is often subject to debate. In
this study, we define the unobserved sample based on the counterfactuals and
formalize its relationship with the null hypothesis statistical testing (NHST)
for regression models. The probability of a robust inference for internal
validity, i.e., the PIV, is the probability of rejecting the null hypothesis
again based on the ideal sample which is defined as the combination of the
observed and unobserved samples, provided the same null hypothesis has already
been rejected for the observed sample. When the unconfoundedness assumption is
dubious, one can bound the PIV of an inference based on bounded belief about
the mean counterfactual outcomes, which is often needed in this case.
Essentially, the PIV is the statistical power of the NHST that is thought to be
built on the ideal sample. We summarize the process of evaluating internal
validity with the PIV into a six-step procedure and illustrate it with an
empirical example (i.e., Hong and Raudenbush (2005)).",The probability of a robust inference for internal validity and its applications in regression models,2020-05-22 06:29:19,"Tenglong Li, Kenneth A. Frank","http://arxiv.org/abs/2005.12784v1, http://arxiv.org/pdf/2005.12784v1",stat.ME
31273,em,"This paper introduces structured machine learning regressions for
high-dimensional time series data potentially sampled at different frequencies.
The sparse-group LASSO estimator can take advantage of such time series data
structures and outperforms the unstructured LASSO. We establish oracle
inequalities for the sparse-group LASSO estimator within a framework that
allows for mixing processes and recognizes that financial and
macroeconomic data may have heavier-than-exponential tails. An empirical
application to nowcasting US GDP growth indicates that the estimator performs
favorably compared to other alternatives and that text data can be a useful
addition to more traditional numerical data.",Machine Learning Time Series Regressions with an Application to Nowcasting,2020-05-28 17:42:58,"Andrii Babii, Eric Ghysels, Jonas Striaukas","http://arxiv.org/abs/2005.14057v4, http://arxiv.org/pdf/2005.14057v4",econ.EM
31274,em,"As the COVID-19 pandemic progresses, researchers are reporting findings of
randomized trials comparing standard care with care augmented by experimental
drugs. The trials have small sample sizes, so estimates of treatment effects
are imprecise. Seeing imprecision, clinicians reading research articles may
find it difficult to decide when to treat patients with experimental drugs.
Whatever decision criterion one uses, there is always some probability that
random variation in trial outcomes will lead to prescribing sub-optimal
treatments. A conventional practice when comparing standard care and an
innovation is to choose the innovation only if the estimated treatment effect
is positive and statistically significant. This practice defers to standard
care as the status quo. To evaluate decision criteria, we use the concept of
near-optimality, which jointly considers the probability and magnitude of
decision errors. An appealing decision criterion from this perspective is the
empirical success rule, which chooses the treatment with the highest observed
average patient outcome in the trial. Considering the design of recent and
ongoing COVID-19 trials, we show that the empirical success rule yields
treatment results that are much closer to optimal than those generated by
prevailing decision criteria based on hypothesis tests.",Statistical Decision Properties of Imprecise Trials Assessing COVID-19 Drugs,2020-05-30 22:45:27,"Charles F. Manski, Aleksey Tetenov","http://arxiv.org/abs/2006.00343v1, http://arxiv.org/pdf/2006.00343v1",econ.EM
31275,em,"We develop a multiple-events model and exploit within and between country
variation in the timing, type and level of intensity of various public policies
to study their dynamic effects on the daily incidence of COVID-19 and on
population mobility patterns across 135 countries. We remove concurrent policy
bias by taking into account the contemporaneous presence of multiple
interventions. The main result of the paper is that cancelling public events
and imposing restrictions on private gatherings followed by school closures
have quantitatively the most pronounced effects on reducing the daily incidence
of COVID-19. They are followed by workplace as well as stay-at-home
requirements, whose statistical significance and levels of effect are not as
pronounced. Instead, we find no effects for international travel controls,
public transport closures and restrictions on movements across cities and
regions. We establish that these findings are mediated by their effect on
population mobility patterns in a manner consistent with time-use and
epidemiological factors.","Lockdown Strategies, Mobility Patterns and COVID-19",2020-05-31 17:24:38,"Nikos Askitas, Konstantinos Tatsiramos, Bertrand Verheyden","http://dx.doi.org/10.1038/s41598-021-81442-x, http://arxiv.org/abs/2006.00531v1, http://arxiv.org/pdf/2006.00531v1",econ.EM
31276,em,"Deliberation among individuals online plays a key role in shaping the
opinions that drive votes, purchases, donations and other critical offline
behavior. Yet, the determinants of opinion-change via persuasion in
deliberation online remain largely unexplored. Our research examines the
persuasive power of $\textit{ethos}$ -- an individual's ""reputation"" -- using a
7-year panel of over a million debates from an argumentation platform
containing explicit indicators of successful persuasion. We identify the causal
effect of reputation on persuasion by constructing an instrument for reputation
from a measure of past debate competition, and by controlling for unstructured
argument text using neural models of language in the double machine-learning
framework. We find that an individual's reputation significantly impacts their
persuasion rate above and beyond the validity, strength and presentation of
their arguments. In our setting, we find that having 10 additional reputation
points causes a 31% increase in the probability of successful persuasion over
the platform average. We also find that the impact of reputation is moderated
by characteristics of the argument content, in a manner consistent with a
theoretical model that attributes the persuasive power of reputation to
heuristic information-processing under cognitive overload. We discuss
managerial implications for platforms that facilitate deliberative
decision-making for public and private organizations online.",Influence via Ethos: On the Persuasive Power of Reputation in Deliberation Online,2020-06-01 07:25:40,"Emaad Manzoor, George H. Chen, Dokyun Lee, Michael D. Smith","http://arxiv.org/abs/2006.00707v1, http://arxiv.org/pdf/2006.00707v1",econ.EM
31317,gn,"This paper studies the extent to which social capital drives performance in
the Chinese venture capital market and explores the trend toward VC syndication
in China. First, we propose a hybrid model based on syndicated social networks
and the latent-variable model, which describes the social capital at venture
capital firms and builds relationships between social capital and performance
at VC firms. Then, we build three hypotheses about the relationships and test
the hypotheses using our proposed model. Some numerical simulations are given
to support the test results. Finally, we show that the correlations between
social capital and financial performance at venture capital firms are weak in
China and find that China's venture capital firms lack mature social capital
links.",Social capital at venture capital firms and their financial performance: Evidence from China,2018-10-06 10:51:13,"Qi-lin Cao, Hua-yun Xiang, You-jia Mao, Ben-zhang Yang","http://arxiv.org/abs/1810.02952v1, http://arxiv.org/pdf/1810.02952v1",econ.GN
31277,gn,"In this paper, a mathematical model based on the one-parameter Mittag-Leffler
function is proposed to be used for the first time to describe the relation
between unemployment rate and inflation rate, also known as the Phillips curve.
The Phillips curve is in the literature often represented by an
exponential-like shape. On the other hand, Phillips in his fundamental paper
used a power function in the model definition. Considering that the ordinary as
well as generalised Mittag-Leffler function behaves between a purely
exponential function and a power function it is natural to implement it in the
definition of the model used to describe the relation between the data
representing the Phillips curve. For the modelling purposes the data of two
different European economies, France and Switzerland, were used and an
""out-of-sample"" forecast was done to compare the performance of the
Mittag-Leffler model to the performance of the power-type and exponential-type
models. The results demonstrate that the ability of the Mittag-Leffler function
to fit data that manifest signs of stretched exponentials, oscillations or even
damped oscillations can be of use when describing economic relations and
phenomena, such as the Phillips curve.",The Mittag-Leffler Fitting of the Phillips Curve,2016-04-01 22:28:06,Tomas Skovranek,"http://dx.doi.org/10.3390/math7070589, http://arxiv.org/abs/1604.00369v3, http://arxiv.org/pdf/1604.00369v3",econ.GN
31278,gn,"By borrowing methods from complex system analysis, in this paper we analyze
the features of the complex relationship that links the development and the
industrialization of a country to economic inequality. In order to do this, we
identify industrialization as a combination of a monetary index, the GDP per
capita, and a recently introduced measure of the complexity of an economy, the
Fitness. At first we explore these relations on a global scale over the time
period 1990--2008 focusing on two different dimensions of inequality: the
capital share of income and a Theil measure of wage inequality. In both cases,
the movement of inequality follows a pattern similar to the one theorized by
Kuznets in the fifties. We then narrow down the object of study and we
concentrate on wage inequality within the United States. By employing data on
wages and employment on the approximately 3100 US counties for the time
interval 1990--2014, we generalize the Fitness-Complexity algorithm for
counties and NAICS sectors, and we investigate wage inequality between
industrial sectors within counties. At this scale, in the early nineties we
recover a behavior similar to the global one. While, in more recent years, we
uncover a trend reversal: wage inequality monotonically increases as
industrialization levels grow. Hence at a county level, net of the social
and institutional factors that differ among countries, we not only observe an
upturn in inequality but also a change in the structure of the relation between
wage inequality and development.",Economic Development and Inequality: a complex system analysis,2016-05-10 21:09:59,"Angelica Sbardella, Emanuele Pugliese, Luciano Pietronero","http://dx.doi.org/10.1371/journal.pone.0182774, http://arxiv.org/abs/1605.03133v1, http://arxiv.org/pdf/1605.03133v1",econ.GN
31279,gn,"We present a simple continuous-time model of clearing in financial networks.
Financial firms are represented as ""tanks"" filled with fluid (money), flowing
in and out. Once ""pipes"" connecting ""tanks"" are open, the system reaches the
clearing payment vector in finite time. This approach provides a simple
recursive solution to a classical static model of financial clearing in
bankruptcy, and suggests a practical payment mechanism. With sufficient
resources, a system of mutual obligations can be restructured into an
equivalent system that has a cascade structure: there is a group of banks that
paid off their debts, another group that owes money only to banks in the first
group, and so on. Technically, we use the machinery of Markov chains to analyze
the evolution of a deterministic dynamical system.",Banks as Tanks: A Continuous-Time Model of Financial Clearing,2017-05-17 01:19:33,"Isaac M. Sonin, Konstantin Sonin","http://arxiv.org/abs/1705.05943v3, http://arxiv.org/pdf/1705.05943v3",econ.GN
31280,gn,"This note is a contribution to the debate about the optimal algorithm for
Economic Complexity that recently appeared on ArXiv [1, 2] . The authors of [2]
eventually agree that the ECI+ algorithm [1] consists just in a renaming of the
Fitness algorithm we introduced in 2012, as we explicitly showed in [3].
However, they omit any comment on the fact that their extensive numerical tests
claimed to demonstrate that the same algorithm works well if they name it ECI+,
but not if its name is Fitness. They should realize that this eliminates any
credibility of their numerical methods and therefore also of their new
analysis, in which they consider many algorithms [2]. Since by their own
admission the best algorithm is the Fitness one, their new claim became that
the search for the best algorithm is pointless and all algorithms are alike.
This is exactly the opposite of what they claimed a few days ago and it does
not deserve much comment. After these clarifications we also present a
constructive analysis of the status of Economic Complexity, its algorithms, its
successes and its perspectives. For us the discussion closes here, we will not
reply to further comments.","Economic Complexity: ""Buttarla in caciara"" vs a constructive approach",2017-09-15 18:36:48,"Luciano Pietronero, Matthieu Cristelli, Andrea Gabrielli, Dario Mazzilli, Emanuele Pugliese, Andrea Tacchella, Andrea Zaccaria","http://arxiv.org/abs/1709.05272v1, http://arxiv.org/pdf/1709.05272v1",econ.GN
31281,gn,"We study a generalization of the model of a dark market due to
Duffie-G\^arleanu-Pedersen [6]. Our market is segmented and involves multiple
assets. We show that this market has a unique asymptotically stable
equilibrium. In order to establish this result, we use a novel approach
inspired by a theory due to McKenzie and Hawkins-Simon. Moreover, we obtain a
closed form solution for the price of each asset at which investors trade at
equilibrium. We conduct a comparative statics analysis which shows, among other
sensitivities, how equilibrium prices respond to the level of interactions
between investors.","Dark Markets with Multiple Assets: Segmentation, Asymptotic Stability, and Equilibrium Prices",2018-06-05 23:16:00,"Alain Bélanger, Ndouné Ndouné, Roland Pongou","http://arxiv.org/abs/1806.01924v1, http://arxiv.org/pdf/1806.01924v1",econ.GN
31282,gn,"The question about fair income inequality has been an important open question
in economics and in political philosophy for over two centuries with only
qualitative answers such as the ones suggested by Rawls, Nozick, and Dworkin.
We provided a quantitative answer recently, for an ideal free-market society,
by developing a game-theoretic framework that proved that the ideal inequality
is a lognormal distribution of income at equilibrium. In this paper, we develop
another approach, using the Nash Bargaining Solution (NBS) framework, which
also leads to the same conclusion. Even though the conclusion is the same, the
new approach, however, reveals the true nature of NBS, which has been of
considerable interest for several decades. Economists have wondered about the
economic meaning or purpose of the NBS. While some have alluded to its fairness
property, we show more conclusively that it is all about fairness. Since the
essence of entropy is also fairness, we see an interesting connection between
the Nash product and entropy for a large population of rational economic
agents.",How much income inequality is fair? Nash bargaining solution and its connection to entropy,2018-06-13 23:45:31,"Venkat Venkatasubramanian, Yu Luo","http://arxiv.org/abs/1806.05262v1, http://arxiv.org/pdf/1806.05262v1",econ.GN
31285,gn,"It has been conjectured that canonical Bewley--Huggett--Aiyagari
heterogeneous-agent models cannot explain the joint distribution of income and
wealth. The results stated below verify this conjecture and clarify its
implications under very general conditions. We show in particular that if (i)
agents are infinitely-lived, (ii) saving is risk-free, and (iii) agents have
constant discount factors, then the wealth distribution inherits the tail
behavior of income shocks (e.g., light-tailedness or the Pareto exponent). Our
restrictions on utility require only that relative risk aversion is bounded,
and a large variety of income processes are admitted. Our results show
conclusively that it is necessary to go beyond standard models to explain the
empirical fact that wealth is heavier-tailed than income. We demonstrate
through examples that relaxing any of the above three conditions can generate
Pareto tails.",An Impossibility Theorem for Wealth in Heterogeneous-agent Models with Limited Heterogeneity,2018-07-23 05:17:40,"John Stachurski, Alexis Akira Toda","http://dx.doi.org/10.1016/j.jet.2019.04.001, http://arxiv.org/abs/1807.08404v3, http://arxiv.org/pdf/1807.08404v3",econ.GN
31286,gn,"Despite the importance of CAP-related agricultural market regulation
mechanisms within Europe, the agricultural sectors in European countries retain
a degree of sensitivity to macroeconomic activity and policies. This reality
now raises the question of the effects to be expected from the implementation
of the single monetary policy on these agricultural sectors within the Monetary
Union.",CAP and Monetary Policy,2018-07-25 11:21:39,Carl Duisberg,"http://arxiv.org/abs/1807.09475v1, http://arxiv.org/pdf/1807.09475v1",econ.GN
31287,gn,"There are many analogies among fortune hunting in business, politics, and
science. The prime task of the gold digger was to go to the Klondikes, find the
right mine and mine the richest veins. This task requires motivation, sense of
purpose and ability. Techniques and equipment must be developed. Fortune
hunting in New England was provided at one time by hunting for whales. One went
to a great whalers' station such as New Bedford and joined the whale hunters.
The hunt in academic research is similar. A single-minded passion is called
for. These notes here are the wrap-up comments containing some terminal
observations of mine on a hunt for a theory of money and financial institutions.",Apologia Pro Vita Sua: The Vanishing of the White Whale in the Mists,2018-07-25 16:26:01,Martin Shubik,"http://arxiv.org/abs/1807.09577v1, http://arxiv.org/pdf/1807.09577v1",econ.GN
31288,gn,"The tourism and hospitality industry worldwide has been confronted with the
problem of attracting and retaining quality employees. If today's students are
to become the effective practitioners of tomorrow, it is fundamental to
understand their perceptions of tourism employment. Therefore, this research
aims at investigating the perceptions of hospitality students at the Faculty of
Tourism in Alexandria University towards the industry as a career choice. A
self-administered questionnaire was developed to rate the importance of 20
factors in influencing career choice, and the extent to which hospitality as a
career offers these factors. From the results, it is clear that students
generally do not believe that the hospitality career will offer them the
factors they found important. However, most respondents (70.6%) indicated
that they would work in the industry after graduation. Finally, a set of
specific remedial actions that hospitality stakeholders could initiate to
improve the perceptions of hospitality career are discussed.",Hospitality Students' Perceptions towards Working in Hotels: a case study of the faculty of tourism and hotels in Alexandria University,2018-07-22 15:17:00,Sayed El-Houshy,"http://arxiv.org/abs/1807.09660v1, http://arxiv.org/pdf/1807.09660v1",econ.GN
31289,gn,"In order to induce farmers to adopt a productive new agricultural technology,
we apply simple and complex contagion diffusion models on rich social network
data from 200 villages in Malawi to identify seed farmers to target and train
on the new technology. A randomized controlled trial compares these
theory-driven network targeting approaches to simpler strategies that either
rely on a government extension worker or an easily measurable proxy for the
social network (geographic distance between households) to identify seed
farmers. Our results indicate that technology diffusion is characterized by a
complex contagion learning environment in which most farmers need to learn from
multiple people before they adopt themselves. Network theory based targeting
can out-perform traditional approaches to extension, and we identify methods to
realize these gains at low cost to policymakers.
  Keywords: Social Learning, Agricultural Technology Adoption, Complex
Contagion, Malawi
  JEL Classification Codes: O16, O13",Can Network Theory-based Targeting Increase Technology Adoption?,2018-08-03 17:30:04,"Lori Beaman, Ariel BenYishay, Jeremy Magruder, Ahmed Mushfiq Mobarak","http://arxiv.org/abs/1808.01205v1, http://arxiv.org/pdf/1808.01205v1",econ.GN
31290,gn,"This paper examines the question whether information is contained in
forecasts from DSGE models beyond that contained in lagged values, which are
extensively used in the models. Four sets of forecasts are examined. The
results are encouraging for DSGE forecasts of real GDP. The results suggest
that there is information in the DSGE forecasts not contained in forecasts
based only on lagged values and that there is no information in the
lagged-value forecasts not contained in the DSGE forecasts. The opposite is
true for forecasts of the GDP deflator.
  Keywords: DSGE forecasts, Lagged values
  JEL Classification Codes: E10, E17, C53",Information Content of DSGE Forecasts,2018-08-08 21:29:54,Ray Fair,"http://arxiv.org/abs/1808.02910v1, http://arxiv.org/pdf/1808.02910v1",econ.GN
31291,gn,"This empirical research explores the impact of age on nationality bias. World
Cup competition data suggest that judges of professional ski jumping
competitions prefer jumpers of their own nationality and exhibit this
preference by rewarding them with better marks. Furthermore, the current study
reveals that this nationality bias is diminished among younger judges, in
accordance with the reported lower levels of national discrimination among
younger generations. Globalisation and its effect in reducing class-based
thinking may explain this reduced bias in judgment of others.",The Impact of Age on Nationality Bias: Evidence from Ski Jumping,2018-08-11 16:49:47,"Sandra Schneemann, Hendrik Scholten, Christian Deutscher","http://arxiv.org/abs/1808.03804v1, http://arxiv.org/pdf/1808.03804v1",econ.GN
31292,gn,"In recent years, there have been a lot of sharp changes in the oil price.
These rapid changes cause the traditional models to fail in predicting the
price behavior. The main reason for the failure of the traditional models is
that they consider the actual value of parameters instead of their
expectational ones. In this paper, we propose a system dynamics model that
incorporates expectational variables in determining the oil price. In our
model, the oil price is determined by the expected demand and supply vs. their
actual values. Our core model is based upon regression analysis on several
historic time series and adjusted by adding many casual loops in the oil
market. The proposed model in simulated in different scenarios that have
happened in the past and our results comply with the trends of the oil price in
each of the scenarios.",A Predictive Model for Oil Market under Uncertainty: Data-Driven System Dynamics Approach,2018-08-13 14:14:44,"Sina Aghaei, Amirreza Safari Langroudi, Masoud Fekri","http://arxiv.org/abs/1808.04150v1, http://arxiv.org/pdf/1808.04150v1",econ.GN
31294,gn,"Among several developments, the field of Economic Complexity (EC) has notably
seen the introduction of two new techniques. One is the Bootstrapped Selective
Predictability Scheme (SPSb), which can provide quantitative forecasts of the
Gross Domestic Product of countries. The other, Hidden Markov Model (HMM)
regularisation, denoises the datasets typically employed in the literature. We
contribute to EC along three different directions. First, we prove the
convergence of the SPSb algorithm to a well-known statistical learning
technique known as Nadaraya-Watson Kernel regression. The latter has
significantly lower time complexity, produces deterministic results, and it is
interchangeable with SPSb for the purpose of making predictions. Second, we
study the effects of HMM regularization on the Product Complexity and logPRODY
metrics, for which a model of time evolution has been recently proposed. We
find confirmation for the original interpretation of the logPRODY model as
describing the change in the global market structure of products with new
insights allowing a new interpretation of the Complexity measure, for which we
propose a modification. Third, we explore new effects of regularisation on the
data. We find that it reduces noise, and observe for the first time that it
increases nestedness in the export network adjacency matrix.",Complexity of products: the effect of data regularisation,2018-08-24 21:10:21,"Orazio Angelini, Tiziana Di Matteo","http://dx.doi.org/10.3390/e20110814, http://arxiv.org/abs/1808.08249v2, http://arxiv.org/pdf/1808.08249v2",econ.GN
31295,gn,"The fossil-fuel induced contribution to further warming over the 21st century
will be determined largely by integrated CO2 emissions over time rather than
the precise timing of the emissions, with a relation of near-proportionality
between global warming and cumulative CO2 emissions. This paper examines
optimal abatement pathways under an exogenous constraint on cumulative
emissions. Least-cost abatement pathways have the carbon tax rising at the
risk-free interest rate, but if endogenous learning or climate damage costs are
included in the analysis, the carbon tax grows more slowly. The inclusion of
damage costs in the optimization leads to a higher initial carbon tax, whereas
the effect of learning depends on whether it appears as an additive or
multiplicative contribution to the marginal cost curve. Multiplicative models
are common in the literature and lead to delayed abatement and a smaller
initial tax. The required initial carbon tax increases with the cumulative
abatement goal and is higher for lower interest rates. Delaying the start of
abatement is costly owing to the increasing marginal abatement cost. Lower
interest rates lead to higher relative costs of delaying abatement because
these induce higher abatement rates early on. The fraction of business-as-usual
emissions (BAU) avoided in optimal pathways increases for low interest rates
and rapid growth of the abatement cost curve, which allows a lower threshold
global warming goal to become attainable without overshoot in temperature. Each
year of delay in starting abatement raises this threshold by an increasing
amount, because the abatement rate increases exponentially with time.",Economics of carbon-dioxide abatement under an exogenous constraint on cumulative emissions,2018-08-27 10:46:23,Ashwin K Seshadri,"http://arxiv.org/abs/1808.08717v2, http://arxiv.org/pdf/1808.08717v2",econ.GN
31296,gn,"Attempts to curb illegal activity by enforcing regulations gets complicated
when agents react to the new regulatory regime in unanticipated ways to
circumvent enforcement. We present a research strategy that uncovers such
reactions, and permits program evaluation net of such adaptive behaviors. Our
interventions were designed to reduce over-fishing of the critically endangered
Pacific hake by either (a) monitoring and penalizing vendors that sell illegal
fish or (b) discouraging consumers from purchasing using an information
campaign. Vendors attempt to circumvent the ban through hidden sales and other
means, which we track using mystery shoppers. Instituting random monitoring
visits is much more effective in reducing true hake availability by limiting
such cheating, compared to visits that occur on a predictable schedule.
Monitoring at higher frequency (designed to limit temporal displacement of
illegal sales) backfires, because targeted agents learn faster, and cheat more
effectively. Sophisticated policy design is therefore crucial for determining
the sustained, longer-term effects of enforcement. Data collected from
fishermen, vendors, and consumers allow us to document the upstream,
downstream, spillover, and equilibrium effects of enforcement on the entire
supply chain. The consumer information campaign generates two-thirds of the
gains compared to random monitoring, but is simpler for the government to
implement and almost as cost-effective.",Enforcing Regulation Under Illicit Adaptation,2018-08-29 18:38:24,"Andres Gonzalez Lira, Ahmed Mushfiq Mobarak","http://arxiv.org/abs/1808.09887v1, http://arxiv.org/pdf/1808.09887v1",econ.GN
31297,gn,"Development and growth are complex and tumultuous processes. Modern economic
growth theories identify some key determinants of economic growth. However, the
relative importance of the determinants remains unknown, and additional
variables may help clarify the directions and dimensions of the interactions.
The novel stream of literature on economic complexity goes beyond aggregate
measures of productive inputs, and considers instead a more granular and
structural view of the productive possibilities of countries, i.e. their
capabilities. Different endowments of capabilities are crucial ingredients in
explaining differences in economic performances. In this paper we employ
economic fitness, a measure of productive capabilities obtained through complex
network techniques. Focusing on the combined roles of fitness and some more
traditional drivers of growth, we build a bridge between economic growth
theories and the economic complexity literature. Our findings, in agreement
with other recent empirical studies, show that fitness plays a crucial role in
fostering economic growth and, when it is included in the analysis, can be
either complementary to traditional drivers of growth or can completely
overshadow them.",The role of complex analysis in modeling economic growth,2018-08-30 20:47:28,"Angelica Sbardella, Emanuele Pugliese, Andrea Zaccaria, Pasquale Scaramozzino","http://dx.doi.org/10.3390/e20110883, http://arxiv.org/abs/1808.10428v1, http://arxiv.org/pdf/1808.10428v1",econ.GN
31318,gn,"The current article traces back the scientific interest to cultural levels
across the organization at the University of National and World Economy, and
especially in the series of Economic Alternatives - an official scientific
magazine, issued by this Institution. Further, a wider and critical review of
international achievements in this field is performed, revealing diverse
analysis perspectives with respect to cultural levels. Also, a useful model of
exploring and teaching the cultural levels beyond the organization is proposed.
  Keywords: globalization, national culture, organization culture, cultural
levels, cultural economics. JEL: M14, Z10.","Critical review of models, containing cultural levels beyond the organizational one",2018-10-08 11:47:46,Kiril Dimitrov,"http://dx.doi.org/10.5281/zenodo.1434856, http://arxiv.org/abs/1810.03605v1, http://arxiv.org/pdf/1810.03605v1",econ.GN
31298,gn,"Considering the risk aversion for gains and the risk seeking for losses of
venture capitalists, the TODIM has been chosen as the decision-making method.
Moreover, group decision is an available way to avoid the limited ability and
knowledge etc. of venture capitalists.Simultaneously, venture capitalists may
be hesitant among several assessed values with different probabilities to
express their real perceptionbecause of the uncertain decision-making
environment. However, the probabilistic hesitant fuzzy information can solve
such problems effectively. Therefore, the TODIM has been extended to
probabilistic hesitant fuzzy circumstance for the sake of settling the
decision-making problem of venture capitalists in this paper. Moreover, due to
the uncertain investment environment, the criteria weights are considered as
probabilistic hesitant fuzzyinformation as well. Then, a case study has been
used to verify the feasibility and validity of the proposed TODIM.Also, the
TODIM with hesitant fuzzy information has been carried out to analysis the same
case.From the comparative analysis, the superiority of the proposed TODIM in
this paper has already appeared.",Finding a promising venture capital project with todim under probabilistic hesitant fuzzy circumstance,2018-09-01 10:20:20,"Weike Zhang, Jiang Du, Xiaoli Tian","http://arxiv.org/abs/1809.00128v1, http://arxiv.org/pdf/1809.00128v1",econ.GN
31299,gn,"We consider a partially asymmetric three-players zero-sum game with two
strategic variables. Two players (A and B) have the same payoff functions, and
Player C does not. Two strategic variables are $t_i$'s and $s_i$'s for $i=A, B,
C$. Mainly we will show the following results.
  1. The equilibrium when all players choose $t_i$'s is equivalent to the
equilibrium when Players A and B choose $t_i$'s and Player C chooses $s_C$ as
their strategic variables. 2. The equilibrium when all players choose $s_i$'s
is equivalent to the equilibrium when Players A and B choose $s_i$'s and Player
C chooses $t_C$ as their strategic variables.
  The equilibrium when all players choose $t_i$'s and the equilibrium when all
players choose $s_i$'s are not equivalent although they are equivalent in a
symmetric game in which all players have the same payoff functions.",Nash equilibrium of partially asymmetric three-players zero-sum game with two strategic variables,2018-09-04 07:57:17,"Atsuhiro Satoh, Yasuhito Tanaka","http://arxiv.org/abs/1809.02465v1, http://arxiv.org/pdf/1809.02465v1",econ.GN
31300,gn,"We consider the relation between Sion's minimax theorem for a continuous
function and a Nash equilibrium in a five-players game with two groups which is
zero-sum and symmetric in each group. We will show the following results.
  1. The existence of Nash equilibrium which is symmetric in each group implies
Sion's minimax theorem for a pair of players in each group. 2. Sion's minimax
theorem for a pair of players in each group implies the existence of a Nash
equilibrium which is symmetric in each group.
  Thus, they are equivalent. An example of such a game is a relative profit
maximization game in each group under oligopoly with two groups such that firms
in each group have the same cost functions and maximize their relative profits
in each group, and the demand functions are symmetric for the firms in each
group.",Sion's mini-max theorem and Nash equilibrium in a five-players game with two groups which is zero-sum and symmetric in each group,2018-09-04 07:57:23,"Atsuhiro Satoh, Yasuhito Tanaka","http://arxiv.org/abs/1809.02466v1, http://arxiv.org/pdf/1809.02466v1",econ.GN
31301,gn,"We study individual decision-making behavioral on generic view. Using a
formal mathematical model, we investigate the action mechanism of decision
behavioral under subjective perception changing of task attributes. Our model
is built on work in two kinds classical behavioral decision making theory:
""prospect theory (PT)"" and ""image theory (IT)"". We consider subjective
attributes preference of decision maker under the whole decision process.
Strategies collection and selection mechanism are induced according the
description of multi-attributes decision making. A novel behavioral
decision-making framework named ""ladder theory (LT)"" is proposed. By real four
cases comparing, the results shows that the LT have better explanation and
prediction ability then PT and IT under some decision situations. Furthermore,
we use our model to shed light on that the LT theory can cover PT and IT
ideally. It is the enrichment and development for classical behavioral decision
theory and, it has positive theoretical value and instructive significance for
explaining plenty of real decision-making phenomena. It may facilitate our
understanding of how individual decision-making performed actually.",The Ladder Theory of Behavioral Decision Making,2018-09-07 15:29:11,Xingguang Chen,"http://arxiv.org/abs/1809.03442v3, http://arxiv.org/pdf/1809.03442v3",econ.GN
31302,gn,"Remittances provide an essential connection between people working abroad and
their home countries. This paper considers these transfers as a measure of
preferences revealed by the workers, underlying a ranking of countries around
the world. In particular, we use the World Bank bilateral remittances data of
international salaries and interpersonal transfers between 2010 and 2015 to
compare European countries. The suggested least squares method has favourable
axiomatic properties. Our ranking reveals a crucial aspect of quality of life
and may become an alternative to various composite indices.",An alternative quality of life ranking on the basis of remittances,2018-09-11 18:34:16,Dóra Gréta Petróczy,"http://dx.doi.org/10.1016/j.seps.2021.101042, http://arxiv.org/abs/1809.03977v6, http://arxiv.org/pdf/1809.03977v6",econ.GN
31303,gn,"We propose a design for philanthropic or publicly-funded seeding to allow
(near) optimal provision of a decentralized, self-organizing ecosystem of
public goods. The concept extends ideas from Quadratic Voting to a funding
mechanism for endogenous community formation. Individuals make public goods
contributions to projects of value to them. The amount received by the project
is (proportional to) the square of the sum of the square roots of contributions
received. Under the ""standard model"" this yields first best public goods
provision. Variations can limit the cost, help protect against collusion and
aid coordination. We discuss applications to campaign finance, open source
software ecosystems, news media finance and urban public projects. More
broadly, we offer a resolution to the classic liberal-communitarian debate in
political philosophy by providing neutral and non-authoritarian rules that
nonetheless support collective organization.",A Flexible Design for Funding Public Goods,2018-09-17 23:07:13,"Vitalik Buterin, Zoe Hitzig, E. Glen Weyl","http://dx.doi.org/10.1287/mnsc.2019.3337, http://arxiv.org/abs/1809.06421v2, http://arxiv.org/pdf/1809.06421v2",econ.GN
31326,gn,"Using the Panama Papers, we show that the beginning of media reporting on
expropriations and property confiscations in a country increases the
probability that offshore entities are incorporated by agents from the same
country in the same month. This result is robust to the use of country-year
fixed effects and the exclusion of tax havens. Further analysis shows that the
effect is driven by countries with non-corrupt and effective governments, which
supports the notion that offshore entities are incorporated when reasonably
well-intended and well-functioning governments become more serious about
fighting organized crime by confiscating proceeds of crime.","Expropriations, Property Confiscations and New Offshore Entities: Evidence from the Panama Papers",2018-10-23 17:22:29,"Ralph-Christopher Bayer, Roland Hodler, Paul Raschky, Anthony Strittmatter","http://dx.doi.org/10.1016/j.jebo.2020.01.002, http://arxiv.org/abs/1810.09876v1, http://arxiv.org/pdf/1810.09876v1",econ.GN
31304,gn,"The field of study of this paper is the analysis of the exchange between two
subjects. Circumscribed to the micro dimension, it is however expanded with
respect to standard economic theory by introducing both the dimension of power
and the motivation to exchange. Our basic reference is the reflections
of those economists, preeminently John Kenneth Galbraith, who criticize the
removal of the ""power"" dimension from neoclassical economics. We have also
referred to the criticism that Galbraith, among others, makes of the assumption
of neoclassical economists that the ""motivation"" in exchanges is solely linked
to the reward, to the money obtained in the exchange. We have got around the
problem of having a large number of types of power and also a large number of
forms of motivation by directly taking into account the effects on the welfare
of each subject, regardless of the means with which they are achieved: that is,
referring to everything that happens in the negotiation process to the
potential or real variations of the welfare function induced in each subject
due to the exercise of the specific form of power, on a case by case basis, and
of the intensity of the motivation to perform the exchange. In the construction
of a mathematical model we paid great attention to its usability in field
testing.","The ""power"" dimension in a process of exchange",2018-09-21 23:02:52,Alberto Banterle,"http://arxiv.org/abs/1809.08293v3, http://arxiv.org/pdf/1809.08293v3",econ.GN
31305,gn,"In this study we present evidence that endowment effect can be elicited
merely by assigned ownership. Using Google Customer Survey, we administered a
survey were participants (n=495) were randomly split into 4 groups. Each group
was assigned ownership of either legroom or their ability to recline on an
airline. Using this experiment setup we were able to generate endowment effect,
a 15-20x (at p<0.05) increase between participant's willingness to pay (WTP)
and their willingness to accept (WTA).",Eliciting the Endowment Effect under Assigned Ownership,2018-09-23 01:44:23,"Patrick Barranger, Rohit Nair, Rob Mulla, Shane Conner","http://arxiv.org/abs/1809.08500v2, http://arxiv.org/pdf/1809.08500v2",econ.GN
31306,gn,"This paper finds near equilibrium prices for electricity markets with
nonconvexities due to binary variables, in order to reduce the market
participants' opportunity costs, such as generators' unrecovered costs. The
opportunity cost is defined as the difference between the profit when the
instructions of the market operator are followed and when the market
participants can freely make their own decisions based on the market prices. We
use the minimum complementarity approximation to the minimum total opportunity
cost (MTOC) model, from previous research, with tests on a much more realistic
unit commitment (UC) model than in previous research, including features such
as reserve requirements, ramping constraints, and minimum up and down times.
The developed model incorporates flexible price responsive demand, as in
previous research, but since not all demand is price responsive, we consider
the more realistic case that total demand is a mixture of fixed and flexible.
Another improvement over previous MTOC research is computational: whereas the
previous research had nonconvex terms among the objective function's continuous
variables, we convert the objective to an equivalent form that contains only
linear and convex quadratic terms in the continuous variables. We compare the
unit commitment model with the standard social welfare optimization version of
UC, in a series of sensitivity analyses, varying flexible demand to represent
varying degrees of future penetration of electric vehicles and smart
appliances, different ratios of generation availability, and different values
of transmission line capacities to consider possible congestion. The minimum
total opportunity cost and social welfare solutions are mostly very close in
different scenarios, except in some extreme cases.",Extended opportunity cost model to find near equilibrium electricity prices under non-convexities,2018-09-26 00:26:54,"Hassan Shavandi, Mehrdad Pirnia, J. David Fuller","http://dx.doi.org/10.1016/j.apenergy.2019.02.059, http://arxiv.org/abs/1809.09734v1, http://arxiv.org/pdf/1809.09734v1",econ.GN
31307,gn,"It is one of hottest topics in Vietnam whether to construct a High Speed Rail
(HSR) system or not in near future. To analyze the impacts of introducing the
HSR on the intercity travel behavior, this research develops an integrated
intercity demand forecasting model to represent trip generation and frequency,
destination choice and travel mode choice behavior. For this purpose, a
comprehensive questionnaire survey with both Revealed Preference (RP)
information (an inter-city trip diary) and Stated Preference (SP) information
was conducted in Hanoi in 2011. In the SP part, not only HSR, but also Low Cost
Carrier is included in the choice set, together with other existing inter-city
travel modes. To make full use of the advantages of each type of data and to
overcome their disadvantages, RP and SP data are combined to describe the
destination choice and mode choice behavior, while trip generation and
frequency are represented by using the RP data. The model estimation results
show the inter-relationship between trip generation and frequency, destination
choice and travel mode choice, and confirm that those components should not be
dealt with separately.",Influence of introducing high speed railways on intercity travel behavior in Vietnam,2018-09-29 09:03:25,"Tho V. Le, Junyi Zhang, Makoto Chikaraishi, Akimasa Fujiwara","http://arxiv.org/abs/1810.00155v1, http://arxiv.org/pdf/1810.00155v1",econ.GN
31308,gn,"This theoretical model contains concept, equations, and graphical results for
venture banking. A system of 27 equations describes the behavior of the
venture-bank and underwriter system allowing phase-space type graphs that show
where profits and losses occur. These results confirm and expand those obtained
from the original spreadsheet-based model. An example investment in a castle at
a loss is provided to clarify the concept. This model requires that all investments
are in enterprises that create new utility value. The assessed utility value
created is the new money out of which the venture bank and underwriter are
paid. The model presented chooses parameters that ensure that the venture-bank
experiences losses before the underwriter does. Parameters are: DIN Premium,
0.05; Clawback lien fraction, 0.77; Clawback bonds and equity futures discount,
1.5 x (USA 12 month LIBOR); Range of clawback bonds sold, 0 to 100%; Range of
equity futures sold 0 to 70%.",A New Form of Banking -- Concept and Mathematical Model of Venture Banking,2018-10-01 06:37:08,Brian P Hanley,"http://arxiv.org/abs/1810.00516v8, http://arxiv.org/pdf/1810.00516v8",econ.GN
31324,gn,"We examine the household-specific effects of the introduction of Time-of-Use
(TOU) electricity pricing schemes. Using a causal forest (Athey and Imbens,
2016; Wager and Athey, 2018; Athey et al., 2019), we consider the association
between past consumption and survey variables, and the effect of TOU pricing on
household electricity demand. We describe the heterogeneity in household
variables across quartiles of estimated demand response and utilise variable
importance measures.
  Household-specific estimates produced by a causal forest exhibit reasonable
associations with covariates. For example, households that are younger, more
educated, and that consume more electricity, are predicted to respond more to a
new pricing scheme. In addition, variable importance measures suggest that some
aspects of past consumption information may be more useful than survey
information in producing these estimates.",Causal Tree Estimation of Heterogeneous Household Response to Time-Of-Use Electricity Pricing Schemes,2018-10-22 14:17:18,"Eoghan O'Neill, Melvyn Weeks","http://arxiv.org/abs/1810.09179v3, http://arxiv.org/pdf/1810.09179v3",econ.GN
31309,gn,"The objective of this study is to understand the different behavioral
considerations that govern the choice of people to engage in a crowd-shipping
market. Using novel data collected by the researchers in the US, we develop
discrete-continuous models. A binary logit model has been used to estimate
crowd-shippers' willingness to work, and an ordinary least-squares regression
model has been employed to calculate crowd-shippers' maximum tolerance for
shipping and delivery times. A selectivity-bias term has been included in the
model to correct for the conditional relationships of the crowd-shipper's
willingness to work and their maximum travel time tolerance. The results show
socio-demographic characteristics (e.g. age, gender, race, income, and
education level), experience transporting freight, and number of social media
usages significantly influence the decision to participate in the crowd-shipping
market. In addition, crowd-shippers' pay expectations were found to be
reasonable and concurrent with the literature on value-of-time. Findings from
this research are helpful for crowd-shipping companies to identify and attract
potential shippers. In addition, an understanding of crowd-shippers - their
behaviors, perceptions, demographics, pay expectations, and in which contexts
they are willing to divert from their route - is valuable to the development
of business strategies such as matching criteria and compensation schemes for
driver-partners.",Selectivity correction in discrete-continuous models for the willingness to work as crowd-shippers and travel time tolerance,2018-10-02 00:28:26,"Tho V. Le, Satish V. Ukkusuri","http://arxiv.org/abs/1810.00985v1, http://arxiv.org/pdf/1810.00985v1",econ.GN
31310,gn,"South Africa's disability grants program is tied to its HIV/AIDS recovery
program, such that individuals who are ill enough may qualify. Qualification is
historically tied to a CD4 count of 200 cells/mm3, which improves when a person
adheres to antiretroviral therapy. This creates a potential unintended
consequence where poor individuals, faced with potential loss of their income,
may choose to limit their recovery through non-adherence. To test for
manipulation caused by grant rules, we identify differences in disability grant
recipients and non-recipients' rate of CD4 recovery around the qualification
threshold, implemented as a fixed-effects difference-in-difference around the
threshold. We use data from the Africa Health Research Institute Demographic
and Health Surveillance System (AHRI DSS) in rural KwaZulu-Natal, South Africa,
utilizing DG status and laboratory CD4 count records for 8,497 individuals to
test whether there are any systematic differences in CD4 recovery rates among
eligible patients. We find that disability grant threshold rules caused
recipients to have a relatively slower CD4 recovery rate of about 20-30
cells/mm3/year, or a 20% reduction in the speed of recovery around the
threshold.",Disability for HIV and Disincentives for Health: The Impact of South Africa's Disability Grant on HIV/AIDS Recovery,2018-10-03 23:58:38,"Noah Haber, Till Bärnighausen, Jacob Bor, Jessica Cohen, Frank Tanser, Deenan Pillay, Günther Fink","http://arxiv.org/abs/1810.01971v1, http://arxiv.org/pdf/1810.01971v1",econ.GN
31311,gn,"Low CO2 prices have prompted discussion about political measures aimed at
increasing the cost of carbon dioxide emissions. These costs affect, inter
alia, integrated district heating system operators (DHSO), often owned by
municipalities with some political influence, that use a variety of (CO2 emis-
sion intense) heat generation technologies. We examine whether DHSOs have an
incentive to support measures that increase CO2 emission prices in the short
term. Therefore, we (i) develop a simplified analytical framework to analyse
optimal decisions of a district heating operator, and (ii) investigate the
market-wide effects of increasing emission prices, in particular the
pass-through from emission costs to electricity prices. Using a numerical model of
the common Austrian and German power system, we estimate a pass-through from
CO2 emission prices to power prices between 0.69 and 0.53 as of 2017, depending
on the absolute emission price level. We find the CO2 emission cost
pass-through to be sufficiently high so that low-emission district heating
systems operating at least moderately efficient generation units benefit from
rising CO2 emission prices in the short term.",District heating systems under high CO2 emission prices: the role of the pass-through from emission cost to electricity prices,2018-10-04 12:12:53,"Sebastian Wehrle, Johannes Schmidt","http://arxiv.org/abs/1810.02109v1, http://arxiv.org/pdf/1810.02109v1",econ.GN
31312,gn,"The current article explores interesting, significant and recently identified
nuances in the relationship ""culture-strategy"". The shared views of leading
scholars at the University of National and World Economy in relation with the
essence, direction, structure, role and hierarchy of ""culture-strategy""
relation are defined as a starting point of the analysis. The research emphasis
is directed on recent developments in interpreting the observed realizations of
the aforementioned link among the community of international scholars and
consultants, publishing in selected electronic scientific databases. In this
way a contemporary notion of the nature of ""culture-strategy"" relationship for
the entities from the world of business is outlined.","Exploring the nuances in the relationship ""culture-strategy"" for the business world",2018-10-05 14:15:49,Kiril Dimitrov,"http://dx.doi.org/10.5281/zenodo.1434904, http://arxiv.org/abs/1810.02613v1, http://arxiv.org/pdf/1810.02613v1",econ.GN
31313,gn,"The current article unveils and analyzes important shades of meaning for the
widely discussed term talent management. It not only grounds the outlined
perspectives in incremental formulation and elaboration of this construct, but
also is oriented to exploring the underlying reasons for the social actors,
proposing new nuances. Thus, a mind map and a fish-bone diagram are constructed
to depict effectively and efficiently the current state of development for
talent management and to facilitate future research
endeavours in this field.",Talent management - an etymological study,2018-10-05 14:22:32,Kiril Dimitrov,"http://dx.doi.org/10.5281/zenodo.1434892, http://arxiv.org/abs/1810.02615v1, http://arxiv.org/pdf/1810.02615v1",econ.GN
31314,gn,"This article aims to outline the diversity of cultural phenomena that occur
at organizational level, emphasizing the place and role of the key attributes
of professed firm culture for the survival and successful development of big
business organizations. The holding companies, members of the Bulgarian
Industrial Capital Association, are chosen as a survey object as the mightiest
driving engines of the local economy. That is why their emergence and
development in the transition period is monitored and analyzed. Based on an
empirical study of relevant website content, important implications about
dominating attributes of professed firm culture on them are found and several
useful recommendations to their senior management are made.",Dominating Attributes Of Professed Firm Culture Of Holding Companies - Members Of The Bulgarian Industrial Capital Association,2018-10-05 14:25:20,"Kiril Dimitrov, Marin Geshkov","http://dx.doi.org/10.5281/zenodo.1442380, http://arxiv.org/abs/1810.02617v1, http://arxiv.org/pdf/1810.02617v1",econ.GN
31319,gn,"This paper presents empirically-estimated average hourly relationships
between regional electricity trade in the United States and prices, emissions,
and generation from 2015 through 2018. Consistent with economic theory, the
analysis finds a negative relationship between electricity prices in California
and regional trade, conditional on local demand. Each 1 gigawatt-hour increase
in California electricity imports is associated with an average $0.15 per
megawatt-hour decrease in the California Independent System Operator's
wholesale electricity price. There is a net-negative short-term relationship
between carbon dioxide emissions in California and electricity imports that is
partially offset by positive emissions from exporting neighbors. Specifically,
each 1 GWh increase in regional trade is associated with a net 70-ton average
decrease in CO2 emissions across the western U.S., conditional on demand
levels. The results provide evidence that electricity imports mostly displace
natural gas generation on the margin in the California electricity market. A
small positive relationship is observed between short-run SO2 and NOx emissions
in neighboring regions and California electricity imports. The magnitude of the
SO2 and NOx results suggests that an average increase of 0.1 MWh from neighboring
coal plants is associated with a 1 MWh increase in imports to California.",Integrating electricity markets: Impacts of increasing trade on prices and emissions in the western United States,2018-10-11 00:59:48,Steven Dahlke,"http://dx.doi.org/10.5278/ijsepm.3416, http://arxiv.org/abs/1810.04759v3, http://arxiv.org/pdf/1810.04759v3",econ.GN
31320,gn,"Feeny (1982, pp. 26-28) referred to a three-factor two-good general
equilibrium trade model when he explained the relative importance of trade and
factor endowments in Thailand, 1880-1940. For example, Feeny (1982) stated that
the growth in labor stock would be responsible for a substantial increase in
rice output relative to textile output. Is Feeny's statement plausible? The
purpose of this paper is to derive the Rybczynski sign patterns, which express
the factor endowment--commodity output relationship, for Thailand during the
period 1920 to 1927 using the EWS (economy-wide substitution)-ratio vector. A
'strong Rybczynski result' necessarily holds. I derived three Rybczynski sign
patterns. However, a more detailed estimate allowed a reduction from three
candidates to two. I restrict the analysis to the period 1920-1927 because of
data availability. The results imply that Feeny's statement might not
necessarily hold. Hence, labor stock might not positively affect the share of
the exportable sector in national income. Moreover, the percentage of Chinese
immigration in total population growth was not as large as expected. This
study will be useful when simulating real wage in Thailand.",Deriving the factor endowment--commodity output relationship for Thailand (1920-1927) using a three-factor two-good general equilibrium trade model,2018-10-11 05:05:23,Yoshiaki Nakada,"http://arxiv.org/abs/1810.04819v1, http://arxiv.org/pdf/1810.04819v1",econ.GN
31321,gn,"Aggressive incentive schemes that allow individuals to impose economic
punishment on themselves if they fail to meet health goals present a promising
approach for encouraging healthier behavior. However, the element of choice
inherent in these schemes introduces concerns that only non-representative
sectors of the population will select aggressive incentives, leaving value on
the table for those who don't opt in. In a field experiment conducted over a 29
week period on individuals wearing Fitbit activity trackers, we find modest and
short-lived increases in physical activity for those provided the choice of
aggressive incentives. In contrast, we find significant and persistent
increases for those assigned (oftentimes against their stated preference) to
the same aggressive incentives. The modest benefits for those provided a choice
seem to emerge because those who benefited most from the aggressive incentives
were the least likely to choose them, and it was those who did not need them
who opted in. These results are confirmed in a follow-up lab experiment. We
also find that benefits to individuals assigned to aggressive incentives were
pronounced if they also updated their step target in the Fitbit mobile
application to match the new activity goal we provided them. Our findings have
important implications for incentive-based interventions to improve health
behavior. For firms and policy makers, our results suggest that one effective
strategy for encouraging sustained healthy behavior combines exposure to
aggressive incentive schemes to jolt individuals out of their comfort zones
with technology decision aids that help individuals sustain this behavior after
incentives end.",Aggressive Economic Incentives and Physical Activity: The Role of Choice and Technology Decision Aids,2018-10-16 00:13:33,"Idris Adjerid, Rachael Purta, Aaron Striegel, George Loewenstein","http://arxiv.org/abs/1810.06698v2, http://arxiv.org/pdf/1810.06698v2",econ.GN
31322,gn,"Most of today's products and services are made in global supply chains. As a
result, the consumption of goods and services in one country is associated with
various environmental pressures all over the world due to international trade.
Advances in global multi-region input-output models have allowed researchers to
draw detailed, international supply-chain connections between production and
consumption activities and associated environmental impacts. Due to limited
data availability, there is little evidence about more recent trends in the
global energy footprint. In order to expand the analytical potential of the
existing WIOD 2016 dataset to a wider range of research themes, this paper
develops energy accounts and presents the global energy footprint trends for
the period 2000-2014.",Constructing energy accounts for WIOD 2016 release,2018-10-16 19:12:46,Viktoras Kulionis,"http://arxiv.org/abs/1810.07112v1, http://arxiv.org/pdf/1810.07112v1",econ.GN
31323,gn,"A noteworthy feature of U.S. politics in recent years is serious partisan
conflict, which has intensified polarization and heightened policy uncertainty.
The US is a significant player in oil and gold markets. Oil
and gold also form the basis of important strategic reserves in the US. We
investigate whether U.S. partisan conflict affects the returns and price
volatility of oil and gold using a parametric test of Granger causality in
quantiles. The empirical results suggest that U.S. partisan conflict has an
effect on the returns of oil and gold, and the effects are concentrated at the
tail of the conditional distribution of returns. More specifically, the
partisan conflict mainly affects oil returns when the crude oil market is in a
bearish state (lower quantiles). By contrast, partisan conflict matters for
gold returns only when the gold market is in a bullish scenario (higher
quantiles). In addition, for the volatility of oil and gold, the predictive
power of the partisan conflict index covers virtually the entire distribution of
volatility.",Does the price of strategic commodities respond to U.S. Partisan Conflict?,2018-10-19 11:34:21,"Yong Jiang, Yi-Shuai Ren, Chao-Qun Ma, Jiang-Long Liu, Basil Sharp","http://dx.doi.org/10.1016/j.resourpol.2020.101617, http://arxiv.org/abs/1810.08396v2, http://arxiv.org/pdf/1810.08396v2",econ.GN
31327,gn,"While researchers increasingly use deep neural networks (DNN) to analyze
individual choices, overfitting and interpretability issues remain obstacles
in theory and practice. By using statistical learning theory, this study
presents a framework to examine the tradeoff between estimation and
approximation errors, and between prediction and interpretation losses. It
operationalizes the DNN interpretability in the choice analysis by formulating
the metrics of interpretation loss as the difference between true and estimated
choice probability functions. This study also uses the statistical learning
theory to upper bound the estimation error of both prediction and
interpretation losses in DNN, shedding light on why DNN does not suffer from
overfitting. Three scenarios are then simulated to compare DNN to the binary
logit model (BNL). We find that DNN outperforms BNL in terms of both
prediction and interpretation for most of the scenarios, and larger sample size
unleashes the predictive power of DNN but not BNL. DNN is also used to analyze
the choice of trip purposes and travel modes based on the National Household
Travel Survey 2017 (NHTS2017) dataset. These experiments indicate that DNN can
be used for choice analysis beyond the current practice of demand forecasting
because it has the inherent utility interpretation, the flexibility of
accommodating various information formats, and the power of automatically
learning utility specification. DNN is both more predictive and interpretable
than BNL unless the modelers have complete knowledge about the choice task, and
the sample size is small. Overall, statistical learning theory can be a
foundation for future studies in the non-asymptotic data regime or using
high-dimensional statistical models in choice analysis, and the experiments
show the feasibility and effectiveness of DNN for its wide applications to
policy and behavioral analysis.",Deep Neural Networks for Choice Analysis: A Statistical Learning Theory Perspective,2018-10-24 18:57:28,"Shenhao Wang, Qingyi Wang, Nate Bailey, Jinhua Zhao","http://arxiv.org/abs/1810.10465v2, http://arxiv.org/pdf/1810.10465v2",econ.GN
31328,gn,"The formation of consortiums of a broadband access Internet Service Provider
(ISP) and multiple Content Providers (CP) is considered for large-scale content
caching. The consortium members share costs from operations and investments in
the supporting infrastructure. Correspondingly, the model's cost function
includes marginal and fixed costs; the latter has been important in determining
industry structure. Also, if Net Neutrality regulations permit, additional
network capacity on the ISP's last mile may be contracted by the CPs. The
number of subscribers is determined by a combination of users' price elasticity
of demand and Quality of Experience. The profit generated by a coalition after
pricing and design optimization determines the game's characteristic function.
Coalition formation follows a bargaining procedure due to Okada (1996) based on
random proposers in a non-cooperative, multi-player game-theoretic framework. A
necessary and sufficient condition is obtained for the Grand Coalition to form,
which bounds subsidies from large to small contributors. Caching is generally
supported even under Net Neutrality regulations. The Grand Coalition's profit
matches upper bounds. Numerical results illustrate the analytic results.",The Case for Formation of ISP-Content Providers Consortiums by Nash Bargaining for Internet Content Delivery,2018-10-25 03:15:25,"Debasis Mitra, Abhinav Sridhar","http://arxiv.org/abs/1810.10660v1, http://arxiv.org/pdf/1810.10660v1",econ.GN
31329,gn,"Research has highlighted relationships between size and scaled growth across
a large variety of biological and social organisms, ranging from bacteria,
through animals and plants, to cities and companies. Yet, heretofore,
identifying a similar relationship at the country level has proven challenging.
One reason is that, unlike the former, countries have predefined borders, which
limit their ability to grow ""organically."" This paper addresses this issue by
identifying and validating an effective measure of organic growth at the
country level: nighttime light emissions, which serve as a proxy of energy
allocations where more productive activity takes place. This indicator is
compared to population size to illustrate that while nighttime light emissions
are associated with superlinear growth, population size at the country level is
associated with sublinear growth. These relationships and their implications
for economic inequalities are then explored using high-resolution geospatial
datasets spanning the last three decades.","Nighttime Light, Superlinear Growth, and Economic Inequalities at the Country Level",2018-10-30 23:41:23,"Ore Koren, Laura Mann","http://arxiv.org/abs/1810.12996v1, http://arxiv.org/pdf/1810.12996v1",econ.GN
31330,gn,"The cost-effectiveness of policies providing subsidized goods is often
compromised by limited use of the goods provided. Through a randomized trial,
we test two approaches to improve the cost-effectiveness of a program
distributing free eyeglasses to myopic children in rural China. Requiring
recipients to undergo an ordeal better targeted eyeglasses to those who used
them without reducing usage relative to free delivery. An information campaign
increased use when eyeglasses were freely delivered but not under an ordeal.
Free delivery plus information was determined to be the most socially
cost-effective approach and obtained the highest rate of eyeglass use.","Ordeal Mechanisms, Information, and the Cost-Effectiveness of Subsidies: Evidence from Subsidized Eyeglasses in Rural China",2018-12-02 15:59:11,"Sean Sylvia, Xiaochen Ma, Yaojiang Shi, Scott Rozelle, C. -Y. Cynthia Lin Lawell","http://arxiv.org/abs/1812.00383v1, http://arxiv.org/pdf/1812.00383v1",econ.GN
31331,gn,"We examine how the institutional context affects the relationship between
gender and opportunity entrepreneurship. To do this, we develop a multi-level
model that connects feminist theory at the micro-level to institutional theory
at the macro-level. It is hypothesized that the gender gap in opportunity
entrepreneurship is more pronounced in low-quality institutional contexts and
less pronounced in high-quality institutional contexts. Using data from the
Global Entrepreneurship Monitor (GEM) and regulation data from the economic
freedom of the world index (EFW), we test our predictions and find evidence in
support of our model. Our findings suggest that, while there is a gender gap in
entrepreneurship, these disparities are reduced as the quality of the
institutional context improves.",Shattering the glass ceiling? How the institutional context mitigates the gender gap in entrepreneurship,2018-12-10 15:53:07,"Christopher J. Boudreaux, Boris Nikolaev","http://arxiv.org/abs/1812.03771v1, http://arxiv.org/pdf/1812.03771v1",econ.GN
31332,gn,"To analyze the influence of introducing the High-Speed Railway (HSR) system
on business and non-business travel behavior, this study develops an integrated
inter-city travel demand model to represent trip generation, destination
choice, and travel mode choice behavior. The accessibility calculated from the
RP/SP (Revealed Preference/Stated Preference) combined nested logit model of
destination and mode choices is used as an explanatory variable in the trip
frequency models. One of the important findings is that additional travel would
be induced by introducing HSR. Our simulation analyses also reveal that HSR and
conventional airlines will be the main modes for middle distances and long
distances, respectively. The development of zones may highly influence the
destination choices for business purposes, while prices of HSR and Low-Cost
Carriers affect choices for non-business purposes. Finally, the research
reveals that people on non-business trips are more sensitive to changes in
travel time, travel cost and regional attributes than people on business trips.",Influence of High-Speed Railway System on Inter-city Travel Behavior in Vietnam,2018-12-11 05:00:16,"Tho V. Le, Junyi Zhang, Makoto Chikaraishi, Akimasa Fujiwara","http://arxiv.org/abs/1812.04184v1, http://arxiv.org/pdf/1812.04184v1",econ.GN
31333,gn,"Unexpected increases of natural resource prices can generate rents, value
that should be recovered by the State to minimize inefficiencies, avoid
arbitrary discrimination between citizens and keep a sustainable trajectory. As
a case study about private appropriation of natural resource rent, this work
explores the case of copper in Chile since 1990, empirically analyzing if the
12 main private mining companies have recovered in present value more than
their investment during their life cycle. The results of this exercise,
applicable to other natural resources, indicate that some actually have,
capturing about US$ 40 billion up to 2012. Elaborating an adequate
institutional framework for future deposits remains an important challenge for
Chile if it is to take full advantage of its mining potential, as well as for
any country with an abundant resource base seeking to better enjoy its natural wealth. For
that purpose, a concession known as Least Present Value Revenue (LPVR) is
proposed.",Apropiación privada de renta de recursos naturales? El caso del cobre en Chile,2018-12-12 21:59:36,Benjamín Leiva,"http://dx.doi.org/10.20430/ete.v83i332.233, http://arxiv.org/abs/1812.05093v1, http://arxiv.org/pdf/1812.05093v1",econ.GN
31334,gn,"This paper provides new conditions for dynamic optimality in discrete time
and uses them to establish fundamental dynamic programming results for several
commonly used recursive preference specifications. These include Epstein-Zin
preferences, risk-sensitive preferences, narrow framing models and recursive
preferences with sensitivity to ambiguity. The results obtained for these
applications include (i) existence of optimal policies, (ii) uniqueness of
solutions to the Bellman equation, (iii) a complete characterization of optimal
policies via Bellman's principle of optimality, and (iv) a globally convergent
method of computation via value function iteration.",Dynamic Programming with Recursive Preferences: Optimality and Applications,2018-12-14 03:29:39,"Guanlong Ren, John Stachurski","http://arxiv.org/abs/1812.05748v4, http://arxiv.org/pdf/1812.05748v4",econ.GN
31335,gn,"The endogenous adaptation of agents, that may adjust their local contact
network in response to the risk of being infected, can have the perverse effect
of increasing the overall systemic infectiveness of a disease. We study a
dynamical model over two geographically distinct but interacting locations, to
better understand theoretically the mechanism at play. Moreover, we provide
empirical motivation from the Italian National Bovine Database, for the period
2006-2013.",Spreading of an infectious disease between different locations,2018-12-19 12:11:13,"Alessio Muscillo, Paolo Pin, Tiziano Razzolini","http://dx.doi.org/10.1016/j.jebo.2021.01.004, http://arxiv.org/abs/1812.07827v1, http://arxiv.org/pdf/1812.07827v1",econ.GN
31336,gn,"Background: Absenteism can generate important economic costs. Aim: To analyze
the determinants of the time off work for sick leaves granted to workers of a
regional health service. Material and Methods: Information about 2033
individuals working at a health service who were granted at least one sick
leave during 2012, was analyzed. Personal identification was censored. Special
emphasis was given to the type of health insurance system of the workers
(public or private). Results: Workers ascribed to the Chilean public health
insurance system (FONASA) had 11 days more off work than their counterparts
ascribed to private health insurance systems. A higher amount of time off work
was observed among older subjects and women. Conclusions: Age, gender and the
type of health insurance system influence the number of days off work due to
sick leaves.",Social security and labor absenteeism in a regional health service,2018-12-19 20:06:36,"Ariel Soto Caro, Roberto Herrera Cofre, Rodrigo Fuentes Solis","http://dx.doi.org/10.4067/S0034-98872015000800004, http://arxiv.org/abs/1812.08091v1, http://arxiv.org/pdf/1812.08091v1",econ.GN
31337,gn,"In this study, a method will be developed and applied for estimating
biological migration parameters of the biomass of a fishery resource by means
of a decision analysis of the spatial behavior of the fleet. First, a model of
discrete selection is estimated, together with patch capture function. This
will allow estimating the biomass availability on each patch. In the second
regression, values of biomass are used in order to estimate a model of
biological migration between patches. This method is proven in the Chilean jack
mackerel fishery. This will allow estimating statistically significant
migration parameters, identifying migration patterns.",Estimating biomass migration parameters by analyzing the spatial behavior of the fishing fleet,2018-12-19 20:18:34,"Hugo Salgado, Ariel Soto-Caro","http://dx.doi.org/10.4067/S0718-88702016000100003, http://arxiv.org/abs/1812.08099v1, http://arxiv.org/pdf/1812.08099v1",econ.GN
31338,gn,"Migration the main process shaping patterns of human settlement within and
between countries. It is widely acknowledged to be integral to the process of
human development as it plays a significant role in enhancing educational
outcomes. At regional and national levels, internal migration underpins the
efficient functioning of the economy by bringing knowledge and skills to the
locations where they are needed. It is the multi-dimensional nature of
migration that underlines its significance in the process of human development.
Human mobility extends in the spatial domain from local travel to international
migration, and in the temporal dimension from short-term stays to permanent
relocations. Classification and measurement of such phenomena are inevitably
complex, which has severely hindered progress in comparative research, with
very few large-scale cross-national comparisons of migration. The linkages
between migration and education have been explored in a separate line of
inquiry that has predominantly focused on country-specific analyses as to the
ways in which migration affects educational outcomes and how educational
attainment affects migration behaviour. A recurrent theme has been the
educational selectivity of migrants, which in turn leads to an increase of
human capital in some regions, primarily cities, at the expense of others.
Questions have long been raised as to the links between education and migration
in response to educational expansion, but have not yet been fully answered
because of the absence, until recently, of adequate data for comparative
analysis of migration. In this paper, we bring these two separate strands of
research together to systematically explore links between internal migration
and education across a global sample of 57 countries at various stages of
development, using data drawn from the IPUMS database.",Internal migration and education: A cross-national comparison,2018-12-21 05:06:50,"Aude Bernard, Martin Bell","http://arxiv.org/abs/1812.08913v1, http://arxiv.org/pdf/1812.08913v1",econ.GN
31346,gn,"E-commerce is on the rise in Hungary, with significantly growing numbers of
customers shopping online. This paper aims to identify the direct and indirect
drivers of the double-digit growth rate, including the related macroeconomic
indicators and the Digital Economy and Society Index (DESI). Moreover, this
study provides a deep insight into industry trends and outlooks, including high
industry concentration and top industrial players. It also draws the profile of
the typical online shopper and the dominant characteristics of online
purchases. Development of e-commerce is robust, but there is still plenty of
potential for growth and progress in Hungary.",E-commerce in Hungary: A Market Analysis,2018-12-30 11:17:35,Szabolcs Nagy,"http://dx.doi.org/10.18096/TMP.2016.03.03, http://arxiv.org/abs/1812.11488v1, http://arxiv.org/pdf/1812.11488v1",econ.GN
31339,gn,"Functions or 'functionings' enable to give a structure to any activity and
their combinations constitute the capabilities which characterize economic
assets such as work utility. The basic law of supply and demand naturally
emerges from that structure while integrating this utility within frames of
reference in which conditions of growth and associated inflation are identified
in the exchange mechanisms. Growth sustainability is built step by step taking
into account functional and organizational requirements which are followed
through a project up to a product delivery with different levels of
externalities. Entering the market through that structure leads to designing
basic equations of its dynamics and to finding canonical solutions, or
particular equilibria, after specifying the notion of maturity introduced in
order to refine the basic model. This approach makes it possible to tackle the behavioral
foundations of Prospect Theory through a generalization of its probability
weighting function for rationality analyses which apply to Western, Educated,
Industrialized, Rich, and Democratic societies as well as to the poorest ones.
The nature of reality and well-being appears then as closely related to the
relative satisfaction reached on the market, as it can be conceived by an
agent, according to business cycles; this reality being the result of the
complementary systems that govern the human mind, as structured by rational
psychologists. The final concepts of growth integrate and extend the maturity
part of the behavioral model into virtuous or erroneous sustainability.","Growth, Industrial Externality, Prospect Dynamics, and Well-being on Markets",2018-11-10 19:20:03,Emmanuel Chauvet,"http://arxiv.org/abs/1812.09302v4, http://arxiv.org/pdf/1812.09302v4",econ.GN
31340,gn,"This study tries to find the relationship among poverty inequality and
growth. The major finding of this study is poverty has reduced significantly
from 2000 to 2016, which is more than 100 percent but in recent time poverty
reduction has slowed down. Slower and unequal household consumption growth
makes sloth the rate of poverty reduction. Average annual consumption fell from
1.8 percent to 1.4 percent from 2010 to 2016 and poorer households experienced
slower consumption growth compared to richer households.","Poverty, Income Inequality and Growth in Bangladesh: Revisited Karl-Marx",2018-12-22 00:48:44,"Md Niaz Murshed Chowdhury, Md Mobarak Hossain","http://arxiv.org/abs/1812.09385v1, http://arxiv.org/pdf/1812.09385v1",econ.GN
31341,gn,"Bangladesh is the 2nd largest growing country in the world in 2016 with 7.1%
GDP growth. This study undertakes an econometric analysis to examine the
relationship between population growth and economic development. This result
indicates population growth adversely related to per capita GDP growth, which
means rapid population growth is a real problem for the development of
Bangladesh.",Population Growth and Economic Development in Bangladesh: Revisited Malthus,2018-12-22 01:14:47,"Md Niaz Murshed Chowdhury, Md. Mobarak Hossain","http://arxiv.org/abs/1812.09393v2, http://arxiv.org/pdf/1812.09393v2",econ.GN
31342,gn,"The random utility model (RUM, McFadden and Richter, 1990) has been the
standard tool to describe the behavior of a population of decision makers. RUM
assumes that decision makers behave as if they maximize a rational preference
over a choice set. This assumption may fail when consideration of all
alternatives is costly. We provide a theoretical and statistical framework that
unifies well-known models of random (limited) consideration and generalizes
them to allow for preference heterogeneity. We apply this methodology in a
novel stochastic choice dataset that we collected in a large-scale online
experiment. Our dataset is unique since it exhibits both choice set and
(attention) frame variation. We run a statistical survival race between
competing models of random consideration and RUM. We find that RUM cannot
explain the population behavior. In contrast, we cannot reject the hypothesis
that decision makers behave according to the logit attention model (Brady and
Rehbeck, 2016).",Random Utility and Limited Consideration,2018-12-23 02:01:33,"Victor H. Aguiar, Maria Jose Boccardi, Nail Kashaev, Jeongbin Kim","http://arxiv.org/abs/1812.09619v3, http://arxiv.org/pdf/1812.09619v3",econ.GN
31343,gn,"In the first part of the paper, we prove the equivalence of the unsymmetric
transformation function and an efficient joint production function (JPF) under
strong monotonicity conditions imposed on input and output correspondences.
Monotonicity, continuity, and convexity properties sufficient for a symmetric
transformation function to be an efficient JPF are also stated. In the second
part, we show that the most frequently used functional form for the directional
technology distance function (DTDF), the quadratic, does not satisfy
homogeneity of degree $-1$ in the direction vector. This implies that the
quadratic function is not the directional technology distance function. We
provide a derivation of the DTDF from a symmetric transformation function and
show how this approach can be used to obtain functional forms that satisfy both
translation property and homogeneity of degree $-1$ in the direction vector if
the optimal solution of an underlying optimization problem can be expressed in
closed form.",Revisiting Transformation and Directional Technology Distance Functions,2018-12-25 16:55:15,Yaryna Kolomiytseva,"http://arxiv.org/abs/1812.10108v1, http://arxiv.org/pdf/1812.10108v1",econ.GN
31344,gn,"We use insights from epidemiology, namely the SIR model, to study how agents
infect each other with ""investment ideas."" Once an investment idea ""goes
viral,"" equilibrium prices exhibit the typical ""fever peak,"" which is
characteristic of speculative excesses. Using our model, we identify a time
line of symptoms that indicate whether a boom is in its early or later stages.
Regarding the market's top, we find that prices start to decline while the
number of infected agents, who buy the asset, is still rising. Moreover, the
presence of fully rational agents (i) accelerates booms, (ii) lowers peak prices,
and (iii) produces broad, drawn-out, market tops.",Thought Viruses and Asset Prices,2018-12-29 21:16:03,Wolfgang Kuhle,"http://arxiv.org/abs/1812.11417v1, http://arxiv.org/pdf/1812.11417v1",econ.GN
31345,gn,"We offer a parsimonious model to investigate how strategic wind producers
sell energy under stochastic production constraints, where the extent of
heterogeneity of wind energy availability varies according to wind farm
locations. The main insight of our analysis is that increasing heterogeneity in
resource availability improves social welfare, as a function of its effects
both on improving diversification and on reducing withholding by firms. We show
that this insight is quite robust for any concave and downward-sloping inverse
demand function. The model is also used to analyze the effect of heterogeneity
on firm profits and opportunities for collusion. Finally, we analyze the
impacts of improving public information and weather forecasting; enhanced
public forecasting increases welfare, but it is not always in the best
interests of strategic producers.",Selling Wind,2018-12-29 21:35:03,"Ali Kakhbod, Asuman Ozdaglar, Ian Schneider","http://arxiv.org/abs/1812.11420v1, http://arxiv.org/pdf/1812.11420v1",econ.GN
31347,gn,"Social responsibility of consumers is one of the main conditions for the
recoupment of enterprises' expenses associated with the implementation of
social and ethical marketing tasks. Therefore, enterprises that plan to operate
on the principles of social and ethical marketing should monitor the social responsibility
of consumers in the relevant markets. At the same time, special attention
should be paid to the analysis of factors that prevent consumers from
implementing their socially responsible intentions in the regions with a low
level of social activity of consumers. The purpose of the article is to develop
methodological guidelines that determine the tasks and directions of conducting
empirical studies aimed at assessing the gap between the socially responsible
intentions of consumers and the actual implementation of these intentions, as
well as to identify the causes of this gap. An empirical survey of the sampled
consumers in Kharkiv was carried out in terms of the proposed methodological
provisions. It revealed a rather high level of respondents' willingness to
support socially responsible enterprises and a rather low level of
implementation of these intentions due to the lack of consumer awareness. To
test the proposed methodological guidelines, an empirical study of the
consumers' social responsibility was conducted in 2017 on a sample of students
and professors of the Semen Kuznets Kharkiv National University of Economics
(120 people). Questioning of the respondents was carried out using the Google
Forms. The findings led to the conclusion that there is a high level of
respondents' willingness to support socially responsible and socially active
enterprises. However, the study also revealed the existence of a significant
gap between the intentions and actions of consumers, caused by the lack of
awareness.",Methodological provisions for conducting empirical research of the availability and implementation of the consumers socially responsible intentions,2019-01-01 21:22:08,"Lyudmyla Potrashkova, Diana Raiko, Leonid Tseitlin, Olga Savchenko, Szabolcs Nagy","http://dx.doi.org/10.21272/mmi.2018.3-11, http://arxiv.org/abs/1901.00191v1, http://arxiv.org/pdf/1901.00191v1",econ.GN
31348,gn,"We live in the Digital Age in which both economy and society have been
transforming significantly. The Internet and the connected digital devices are
inseparable parts of our daily life and the engine of the economic growth. In
this paper, first I analyzed the status of digital economy and society in
Hungary, then compared it with Ukraine and drew conclusions regarding future
development tendencies. Using secondary data provided by the European
Commission I investigated the five components of the Digital Economy and
Society Index of Hungary. I performed cross country analysis to find out the
significant differences between Ukraine and Hungary in terms of access to the
Internet and device use including smartphones, computers and tablets. Based on
my findings, I concluded that Hungary is more developed in terms of the
significant parameters of the digital economy and society than Ukraine, but
even Hungary is an emerging digital nation. Considering the high growth rate of
Internet, tablet and smartphone penetration in both countries, I expect faster
progress in the development of the digital economy and society in Hungary and
Ukraine.",Digital Economy And Society. A Cross Country Comparison Of Hungary And Ukraine,2019-01-02 10:19:54,Szabolcs Nagy,"http://arxiv.org/abs/1901.00283v1, http://arxiv.org/pdf/1901.00283v1",econ.GN
31349,gn,"The main purpose of the study is to develop the model for transaction costs
measurement in the Collective Waste Recovery Systems. The methodology of New
Institutional Economics is used in the research. The contribution of the study
lies both in extending the theory of the interaction between transaction costs
and social costs and in identifying institutional failures of the European
concept of the circular economy. A new model for social cost measurement is
developed. Keywords:
circular economy, transaction costs, extended producer responsibility JEL: A13,
C51, D23, L22, Q53",The Institutional Economics of Collective Waste Recovery Systems: an empirical investigation,2019-01-02 12:48:27,Shteryo Nozharov,"http://arxiv.org/abs/1901.00495v2, http://arxiv.org/pdf/1901.00495v2",econ.GN
31350,gn,"Mobile phones play a very important role in our life. Mobile phone sales have
been soaring over the last decade due to the growing acceptance of
technological innovations, especially by Generations Y and Z. Understanding the
change in customers' requirements is the key to success in the smartphone
business. New, strong mobile phone models will emerge if the voice of the
customer can be heard. Although it is widely known that country of origin has a
serious impact on the attitudes and purchase decisions of mobile phone
consumers, there is a lack of substantial studies investigating the mobile phone
preferences of young adults aged 18-25, members of late Generation Y and early
Generation Z. In order to investigate the role of country of origin in mobile
phone choice of Generations Y and Z, an online survey with 228 respondents was
conducted in Hungary in 2016. Besides the descriptive statistical methods,
crosstabs, ANOVA and Pearson correlation are used to analyze the collected data
and find out significant relationships. Factor analysis (Principal Component
Analysis) is used for data reduction to create new factor components. The
findings of this exploratory study support the idea that country of origin
plays a significant role in many respects related to young adults' mobile phone
choice. Mobile phone owners with different countries of origin attribute
crucial importance to the various product features including technical
parameters, price, design, brand name, operating system, and memory size.
Country of origin has a moderating effect on the price sensitivity of consumers
with varied net income levels. It is also found that frequent buyers of mobile
phones, especially US brand products, spend the most significant amount of
money for their consumption in this aspect.",The Impact Of Country Of Origin In Mobile Phone Choice Of Generation Y And Z,2019-01-02 09:53:15,Szabolcs Nagy,"http://dx.doi.org/10.12792/JMTI.4.2.16, http://arxiv.org/abs/1901.00793v1, http://arxiv.org/pdf/1901.00793v1",econ.GN
31351,gn,"Every year, 10 million people die from lack of access to treatment for
curable diseases, especially in developing countries. Meanwhile, legal but
unsafe drugs cause 130 thousand deaths per year. How can this be happening in
the 21st century? What role does the pharmaceutical industry play in this
tragedy? In this research, WHO reports are analyzed and primary information is
gathered so as to answer these questions.",Public Health and access to medicine. Pharmaceutical industry's role,2019-01-08 19:05:15,Juan Gonzalez-Blanco,"http://arxiv.org/abs/1901.02384v1, http://arxiv.org/pdf/1901.02384v1",econ.GN
31352,gn,"We explore how institutional quality moderates the effectiveness of
privatization on entrepreneurs' sales performance. To do this, we blend agency
theory and entrepreneurial cognition theory with insights from institutional
economics to develop a model of emerging market venture performance. Using data
from the World Bank's Enterprise Survey of entrepreneurs in China, our results
suggest that private-owned enterprises (POEs) outperform state-owned
enterprises (SOEs) but only in environments with high-quality market
institutions. In environments with low-quality market institutions, SOEs
outperform POEs. These findings suggest that the effectiveness of privatization
on entrepreneurial performance is context-specific, which reveals more nuance
than previously has been attributed.",When does privatization spur entrepreneurial performance? The moderating effect of institutional quality in an emerging market,2019-01-10 22:49:03,Christopher Boudreaux,"http://arxiv.org/abs/1901.03356v1, http://arxiv.org/pdf/1901.03356v1",econ.GN
31353,gn,"This paper proposes a model of optimal tax-induced transfer pricing with a
fuzzy arm's length parameter. Fuzzy numbers provide a suitable structure for
modelling the ambiguity that is intrinsic to the arm's length parameter. For
the usual conditions regarding the anti-shifting mechanisms, the optimal
transfer price becomes a maximising $\alpha$-cut of the fuzzy arm's length
parameter. Nonetheless, we show that it is profitable for firms to choose any
maximising transfer price if the probability of tax audit is sufficiently low,
even if the chosen price is considered a completely non-arm's length price by
tax authorities. In this case, we derive the necessary and sufficient
conditions to prevent this extreme shifting strategy.",Fuzzy Profit Shifting: A Model for Optimal Tax-induced Transfer Pricing with Fuzzy Arm's Length Parameter,2019-01-12 13:43:52,Alex A. T. Rathke,"http://arxiv.org/abs/1901.03843v1, http://arxiv.org/pdf/1901.03843v1",econ.GN
31354,gn,"Microwork platforms allocate fragmented tasks to crowds of providers with
remunerations as low as few cents. Instrumental to the development of today's
artificial intelligence, these micro-tasks push to the extreme the logic of
casualization already observed in ""uberized"" workers. The present article uses
the results of the DiPLab study to estimate the number of people who microwork
in France. We distinguish three categories of microworkers, corresponding to
different modes of engagement: a group of 14,903 ""very active"" microworkers,
most of whom are present on these platforms at least once a week; a second
featuring 52,337 ""routine"" microworkers, more selective and present at least
once a month; a third circle of 266,126 ""casual"" microworkers, more
heterogeneous and who alternate inactivity and various levels of work practice.
Our results show that microwork is comparable to, and even larger than, the
workforce of ride-sharing and delivery platforms in France. It is therefore not
an anecdotal phenomenon and deserves great attention from researchers, unions
and policy-makers.",How many people microwork in France? Estimating the size of a new labor force,2019-01-12 21:16:41,"Clément Le Ludec, Paola Tubaro, Antonio A. Casilli","http://arxiv.org/abs/1901.03889v1, http://arxiv.org/pdf/1901.03889v1",econ.GN
31355,gn,"In this study, the prevalent methodology for design of the industrial policy
in developing countries was critically assessed, and it was shown that the
mechanism and content of classical method is fundamentally contradictory to the
goals and components of the endogenous growth theories. This study, by
proposing a new approach, along settling Schumpeter's economic growth theory as
a policy framework, designed the process of entering, analyzing and processing
data as the mechanism of the industrial policy in order to provide ""theoretical
consistency"" and ""technical and Statistical requirements"" for targeting the
growth stimulant factor effectively.",Designing An Industrial Policy For Developing Countries: A New Approach,2019-01-14 15:43:24,"Ali Haeri, Abbas Arabmazar","http://arxiv.org/abs/1901.04265v1, http://arxiv.org/pdf/1901.04265v1",econ.GN
31356,gn,"This paper describes asset price and return disturbances as result of
relations between transactions and multiple kinds of expectations. We show that
disturbances of expectations can cause fluctuations of trade volume, price and
return. We model price disturbances for transactions made under all types of
expectations as weighted sum of partial price and trade volume disturbances for
transactions made under separate kinds of expectations. Relations on price
allow present return as weighted sum of partial return and trade volume
""return"" for transactions made under separate expectations. Dependence of price
disturbances on trade volume disturbances as well as dependence of return on
trade volume ""return"" cause dependence of volatility and statistical
distributions of price and return on statistical properties of trade volume
disturbances and trade volume ""return"" respectively.","Econophysics of Asset Price, Return and Multiple Expectations",2019-01-15 22:06:42,Victor Olkhov,"http://dx.doi.org/10.21314/JNTF.2019.055, http://arxiv.org/abs/1901.05024v2, http://arxiv.org/pdf/1901.05024v2",econ.GN
31357,gn,"As of 2015, a millennial born in the 1990's became the largest population in
the workplace and are still growing. Studies indicate that a millennial is tech
savvy but lag in the exercise of digital responsibility. In addition, they are
passive towards environmental sustainability and fail to grasp the importance
of social responsibility. This paper provides a review of such findings
relating to business communications educators in their classrooms. The
literature should enable the development of a millennial as an excellent global
citizen through business communications curricula that emphasizes digital
citizenship, environmental sustainability and social responsibility. The
impetus for this work is to provide guidance in the development of courses and
teaching strategies customized to the development of each millennial as a
digital, environmental and socially responsible global citizen.",Preparing millennials as digital citizens and socially and environmentally responsible business professionals in a socially irresponsible climate,2019-01-20 04:05:01,"Barbara Burgess-Wilkerson, Clovia Hamilton, Chlotia Garrison, Keith Robbins","http://arxiv.org/abs/1901.06609v1, http://arxiv.org/pdf/1901.06609v1",econ.GN
31358,gn,"Does academic engagement accelerate or crowd out the commercialization of
university knowledge? Research on this topic seldom considers the impact of the
institutional environment, especially when a formal institution for encouraging
the commercial activities of scholars has not yet been established. This study
investigates this question in the context of China, which is in the
institutional transition stage. Based on a survey of scholars from Shanghai
Maritime University, we demonstrate that academic engagement has a positive
impact on commercialization and that this impact is greater for risk-averse
scholars than for other risk-seeking scholars. Our results suggest that in an
institutional transition environment, the government should consider
encouraging academic engagement to stimulate the commercialization activities
of conservative scholars.",Academic Engagement and Commercialization in an Institutional Transition Environment: Evidence from Shanghai Maritime University,2019-01-23 08:13:11,"Dongbo Shi, Yeyanran Ge","http://arxiv.org/abs/1901.07725v1, http://arxiv.org/pdf/1901.07725v1",econ.GN
31359,gn,"We calculate measures of economic complexity for US metropolitan areas for
the years 2007-2015 based on industry employment data. We show that the concept
of economic complexity translates well from the cross-country to the regional
setting, and is able to incorporate local as well as traded industries. The
largest cities and the Northeast of the US have the highest average complexity,
while traded industries are more complex than local-serving ones on average,
but with some exceptions. On average, regions with higher complexity have a
higher income per capita, but those regions also were more affected by the
financial crisis. Finally, economic complexity is a significant predictor of
within-decreases in income per capita and population. Our findings highlight
the importance of subnational regions, and particularly metropolitan areas, as
units of economic geography.",The Economic Complexity of US Metropolitan Areas,2019-01-23 23:00:51,"Benedikt S. L. Fritz, Robert A. Manduca","http://dx.doi.org/10.1080/00343404.2021.1884215, http://arxiv.org/abs/1901.08112v1, http://arxiv.org/pdf/1901.08112v1",econ.GN
31360,gn,"Technological parasitism is a new theory to explain the evolution of
technology in society. In this context, this study proposes a model to analyze
the interaction between a host technology (system) and a parasitic technology
(subsystem) to explain evolutionary pathways of technologies as complex
systems. The coefficient of evolutionary growth of the model here indicates the
typology of evolution of parasitic technology in relation to host technology:
i.e., underdevelopment, growth and development. This approach is illustrated
with realistic examples using empirical data of product and process
technologies. Overall, then, the theory of technological parasitism can be
useful for bringing a new perspective to explain and generalize the evolution
of technology and predict which innovations are likely to evolve rapidly in
society.",Technological Parasitism,2019-01-25 23:35:59,Mario Coccia,"http://arxiv.org/abs/1901.09073v1, http://arxiv.org/pdf/1901.09073v1",econ.GN
31361,gn,"Bid leakage is a corrupt scheme in a first-price sealed-bid auction in which
the procurer leaks the opponents' bids to a favoured participant. The rational
behaviour of such participant is to bid close to the deadline in order to
receive all bids, which allows him to ensure his win at the best price
possible. While such behaviour does leave detectable traces in the data, the
absence of bid leakage labels makes supervised classification impossible.
Instead, we reduce the problem of bid leakage detection to
positive-unlabeled classification. The key idea is to regard the losing
participants as fair and the winners as possibly corrupted. This allows us to
estimate the prior probability of bid leakage in the sample, as well as the
posterior probability of bid leakage for each specific auction.
  We extract and analyze the data on 600,000 Russian procurement auctions
between 2014 and 2018. We find that around 9% of the auctions are exposed to
bid leakage, which results in an overall 1.5% price increase. The predicted
probability of bid leakage is higher for auctions with a higher reserve price,
with too few or too many participants, and if the winner has met the
auctioneer in earlier auctions.",Stealed-bid Auctions: Detecting Bid Leakage via Semi-Supervised Learning,2019-03-01 15:09:40,"Dmitry I. Ivanov, Alexander S. Nesterov","http://arxiv.org/abs/1903.00261v2, http://arxiv.org/pdf/1903.00261v2",econ.GN
31362,gn,"Norms are challenging to define and measure, but this paper takes advantage
of text data and recent developments in machine learning to create an
encompassing measure of norms. An LSTM neural network is trained to detect
gendered language. The network functions as a tool to create a measure of how
gender norms change in relation to the #Metoo movement on Swedish Twitter. This
paper shows that gender norms on average are less salient half a year after the
date of the first appearance of the hashtag #Metoo. Previous literature
suggests that gender norms change over generations, but the current result
suggests that norms can change in the short run.",Using Artificial Intelligence to Recapture Norms: Did #metoo change gender norms in Sweden?,2019-03-02 15:12:45,Sara Moricz,"http://arxiv.org/abs/1903.00690v1, http://arxiv.org/pdf/1903.00690v1",econ.GN
31363,gn,"Using an additional decade of CNLSY data, this study replicated and extended
Deming's (2009) evaluation of Head Start's life-cycle skill formation impacts
in three ways. Extending the measurement interval for Deming's adulthood
outcomes, we found no statistically significant impacts on earnings and mixed
evidence of impacts on other adult outcomes. Applying Deming's sibling
comparison framework to more recent birth cohorts born to CNLSY mothers
revealed mostly negative Head Start impacts. Combining all cohorts shows
generally null impacts on school-age and early adulthood outcomes.",Elusive Longer-Run Impacts of Head Start: Replications Within and Across Cohorts,2019-03-05 20:39:57,"Remy J. -C. Pages, Dylan J. Lukes, Drew H. Bailey, Greg J. Duncan","http://arxiv.org/abs/1903.01954v4, http://arxiv.org/pdf/1903.01954v4",econ.GN
31364,gn,"Until recently, analysis of optimal global climate policy has focused on
mitigation. Exploration of policies to meet the 1.5{\deg}C target has brought
carbon dioxide removal (CDR), a second instrument, into the climate policy
mainstream. Far less agreement exists regarding the role of solar
geoengineering (SG), a third instrument to limit global climate risk.
Integrated assessment modelling (IAM) studies offer little guidance on
trade-offs between these three instruments because they have dealt with CDR and
SG in isolation. Here, I extend the Dynamic Integrated model of Climate and
Economy (DICE) to include both CDR and SG to explore the temporal ordering of
the three instruments. Contrary to implicit assumptions that SG would be
employed only after mitigation and CDR are exhausted, I find that SG is
introduced in parallel with mitigation, temporarily reducing climate risks
during the era of peak CO2 concentrations. CDR reduces concentrations after
mitigation is exhausted, enabling SG to be phased out.","Optimal Climate Strategy with Mitigation, Carbon Removal, and Solar Geoengineering",2019-03-05 23:35:39,Mariia Belaia,"http://arxiv.org/abs/1903.02043v1, http://arxiv.org/pdf/1903.02043v1",econ.GN
31365,gn,"Regulation is commonly viewed as a hindrance to entrepreneurship, but
heterogeneity in the effects of regulation is rarely explored. We focus on
regional variation in the effects of national-level regulations by developing a
theory of hierarchical institutional interdependence. Using the political
science theory of market-preserving federalism, we argue that regional economic
freedom attenuates the negative influence of national regulation on net job
creation. Using U.S. data, we find that regulation destroys jobs on net, but
regional economic freedom moderates this effect. In regions with average
economic freedom, a one percent increase in regulation results in 14 fewer jobs
created on net. However, a standard deviation increase in economic freedom
attenuates this relationship by four jobs. Interestingly, this moderation
accrues strictly to older firms; regulation usually harms young firm job
creation, and economic freedom does not attenuate this relationship.","The Interdependence of Hierarchical Institutions: Federal Regulation, Job Creation, and the Moderating Effect of State Economic Freedom",2019-03-07 17:11:18,"David S. Lucas, Christopher J. Boudreaux","http://arxiv.org/abs/1903.02924v1, http://arxiv.org/pdf/1903.02924v1",econ.GN
31366,gn,"Entrepreneurship is often touted for its ability to generate economic growth.
Through the creative-destructive process, entrepreneurs are often able to
innovate and outperform incumbent organizations, all of which is supposed to
lead to higher employment and economic growth. Although some empirical evidence
supports this logic, it has also been the subject of recent criticisms.
Specifically, entrepreneurship does not lead to growth in developing countries;
it only does in more developed countries with higher income levels. Using
Global Entrepreneurship Monitor data for a panel of 83 countries from 2002 to
2014, we examine the contribution of entrepreneurship towards economic growth.
Our evidence validates earlier studies' findings but also exposes previously
undiscovered findings. That is, we find that entrepreneurship encourages
economic growth but not in developing countries. In addition, our evidence
finds that the institutional environment of the country, as measured by GEM
Entrepreneurial Framework Conditions, only contributes to economic growth in
more developed countries but not in developing countries. These findings have
important policy implications. Namely, our evidence contradicts policy
proposals that advocate entrepreneurship, and the adoption of pro-market
institutions that support it, as a means of encouraging economic growth in
developing countries. Our evidence suggests these policy proposals are unlikely to
generate the economic growth desired.","Entrepreneurship, Institutions, and Economic Growth: Does the Level of Development Matter?",2019-03-07 17:26:11,Christopher J. Boudreaux,"http://arxiv.org/abs/1903.02934v1, http://arxiv.org/pdf/1903.02934v1",econ.GN
31379,gn,"This study suggests a new decomposition of the effect of Foreign Direct
Investment (FDI) on long-term growth in developing countries. It reveals that
FDI not only have a positive direct effect on growth, but also increase the
latter by reducing the recessionary effect resulting from a banking crisis.
Even more, they reduce its occurrence. JEL: F65, F36, G01, G15","FDI, banking crisis and growth: direct and spill over effects",2019-04-08 16:10:57,"Brahim Gaies, Khaled Guesmi, Stéphane Goutte","http://arxiv.org/abs/1904.04911v1, http://arxiv.org/pdf/1904.04911v1",econ.GN
31367,gn,"Workers who earn at or below the minimum wage in the United States are mostly
either less educated, young, or female. Little is known, however, concerning
the extent to which the minimum wage influences wage differentials among
workers with different observed characteristics and among workers with the same
observed characteristics. This paper shows that changes in the real value of
the minimum wage over recent decades have affected the relationship of hourly
wages with education, experience, and gender. The results suggest that changes
in the real value of the minimum wage account in part for the patterns of
changes in education, experience, and gender wage differentials and mostly for
the patterns of changes in within-group wage differentials among female workers
with lower levels of experience.",Heterogeneous Impact of the Minimum Wage: Implications for Changes in Between- and Within-group Inequality,2019-03-10 08:28:15,"Tatsushi Oka, Ken Yamada","http://dx.doi.org/10.3368/jhr.58.3.0719-10339R1, http://arxiv.org/abs/1903.03925v2, http://arxiv.org/pdf/1903.03925v2",econ.GN
31368,gn,"We reconsider the Cournot duopoly problem in light of the theory for long
memory. We introduce the Caputo fractional-order difference calculus to
classical duopoly theory to propose a fractional-order discrete Cournot duopoly
game model, which allows participants to make decisions while making full use
of their historical information. Then we discuss Nash equilibria and local
stability by using linear approximation. Finally, we detect chaos in the
model by employing the 0-1 test algorithm.",A fractional-order difference Cournot duopoly game with long memory,2019-03-11 16:55:44,"Baogui Xin, Wei Peng, Yekyung Kwon","http://arxiv.org/abs/1903.04305v1, http://arxiv.org/pdf/1903.04305v1",econ.GN
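The abstract above describes a Caputo fractional-order (long-memory) extension of
the discrete Cournot adjustment dynamic. As a point of reference only, the Python
sketch below iterates the classical integer-order boundedly rational Cournot map
under assumed linear demand and cost parameters; it does not implement the paper's
fractional-order memory kernel or the 0-1 chaos test.

    # Classical (integer-order) discrete Cournot duopoly with gradient-type
    # output adjustment; assumed linear inverse demand p = a - b*(q1+q2) and
    # constant marginal costs. The Caputo fractional-order memory kernel used
    # in the paper is NOT implemented here.
    a, b = 10.0, 1.0          # demand parameters (assumed)
    c1, c2 = 2.0, 2.0         # marginal costs (assumed)
    alpha = 0.2               # adjustment speed (assumed)

    q1, q2 = 1.0, 4.0         # initial outputs
    for t in range(50):
        # marginal profits: d(pi_i)/d(q_i) = a - c_i - 2*b*q_i - b*q_j
        m1 = a - c1 - 2 * b * q1 - b * q2
        m2 = a - c2 - 2 * b * q2 - b * q1
        q1, q2 = q1 + alpha * q1 * m1, q2 + alpha * q2 * m2

    # With these parameters the outputs approach the Nash equilibrium
    # q* = (a - c) / (3b), roughly 2.67 for each firm.
    print(q1, q2)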
31369,gn,"This paper presents a farm level irrigation microsimulation model of the
southern Murray-Darling Basin. The model leverages detailed ABARES survey data
to estimate a series of input demand and output supply equations, derived from
a normalised quadratic profit function. The parameters from this estimation are
then used to simulate the impact on total cost, revenue and profit of a
hypothetical 30 per cent increase in the price of water. The model is still
under development, with several potential improvements suggested in the
conclusion. This is a working paper, provided for the purpose of receiving
feedback on the analytical approach to improve future iterations of the
microsimulation model.",A micro-simulation model of irrigation farms in the southern Murray-Darling Basin,2019-03-14 04:06:03,"Huong Dinh, Manannan Donoghoe, Neal Hughes, Tim Goesch","http://arxiv.org/abs/1903.05781v1, http://arxiv.org/pdf/1903.05781v1",econ.GN
31370,gn,"Planetary life support systems are collapsing due to climate change and the
biodiversity crisis. The root cause is the existing consumer economy, coupled
with profit maximisation based on ecological and social externalities. Trends
can be reversed and civilisation may be saved by transforming the
profit-maximising consumer economy into an ecologically and socially just
economy, which we call the prosumer economy. The prosumer economy is a
macro-scale circular economy with minimal negative, or even positive, ecological
and social impact: an ecosystem of producers and prosumers who have synergistic,
circular relationships with deepened circular supply chains and networks, where
leakage of wealth out of the system is minimised. In a prosumer economy there is no waste,
no lasting negative impacts on the ecology and no social exploitation. The
prosumer economy is like a lake or a forest, an economic ecosystem that is
productive and supportive of the planet. We are already planting this forest
through Good4Trust.org, started in Turkey. Good4Trust is a community platform
bringing together ecologically and socially just producers and prosumers.
Prosumers come together around a basic ethical tenet, the golden rule, and share
their good deeds on the platform. The relationships are already deepening and
circularity is forming to create a prosumer economy. The platform's software to
structure the economy is open source, and is available to be licensed to start
Good4Trust anywhere on the planet. Complexity theory tells us that if enough
agents in a given system adopt simple rules which they all follow, the system
may shift. The shift from a consumer economy to a prosumer economy has already
started, the future is either ecologically and socially just or bust.",The Prosumer Economy -- Being Like a Forest,2019-03-16 15:56:19,Uygar Ozesmi,"http://arxiv.org/abs/1903.07615v1, http://arxiv.org/pdf/1903.07615v1",econ.GN
31371,gn,"This paper examines the effects of institutional quality on financing choice
of individual using a large dataset of 137,160 people from 131 countries. We
classify borrowing activities into three categories, including formal,
constructive informal, and underground borrowing. Although the result shows
that better institutions aids the uses of formal borrowing, the impact of
institutions on constructive informal and underground borrowing among three
country sub-groups differs. Higher institutional quality improves constructive
informal borrowing in middle-income countries but reduces the use of
underground borrowing in high- and low-income countries.","The effects of institutional quality on formal and informal borrowing across high-, middle-, and low-income countries",2019-03-19 10:35:25,Lan Chu Khanh,"http://arxiv.org/abs/1903.07866v1, http://arxiv.org/pdf/1903.07866v1",econ.GN
31372,gn,"We propose a combinatorial model of economic development. An economy develops
by acquiring new capabilities allowing for the production of an ever greater
variety of increasingly complex products. Taking into account that
economies abandon the least complex products as they develop over time, we show
that variety first increases and then decreases in the course of economic
development. This is consistent with the empirical pattern known as 'the hump'.
Our results question the common association of variety with complexity. We
further discuss the implications of our model for future research.","Variety, Complexity and Economic Development",2019-03-19 16:41:06,"Alje van Dam, Koen Frenken","http://arxiv.org/abs/1903.07997v1, http://arxiv.org/pdf/1903.07997v1",econ.GN
31373,gn,"The purpose of this study is to find a relation between sex education and
abortion in the United States. Accordingly, multivariate logistic regression is
employed to study the relation between abortion and frequency of sex,
pre-marriage sex, and pregnancy by rape. The findings show that the odds of
abortion among those who have had premarital sex, more frequent sex before
marriage, or been the victim of rape are higher than among those who have not
experienced any of these incidents. The estimates indicate that a one-unit
increase in pre-marriage sex raises the log-odds of abortion by 0.47. Similarly,
a one-unit increase in the frequency of sex raises the log-odds of abortion by
0.39. Also, each additional pregnancy by rape is associated with a 3.17 increase
in the log-odds of abortion. The findings of this study also suggest that
abortion is associated with sex education. Despite previous findings, this study
shows that age, having children, and social standing are not considered burdens
to parents and thus do not have a causal relation to
abortion.","The Impact of Sex Education on Sexual Activity, Pregnancy, and Abortion",2019-03-20 04:25:47,Nima Khodakarami,"http://arxiv.org/abs/1903.08307v1, http://arxiv.org/pdf/1903.08307v1",econ.GN
31374,gn,"As early as the 1920's Marshall suggested that firms co-locate in cities to
reduce the costs of moving goods, people, and ideas. These 'forces of
agglomeration' have given rise, for example, to the high tech clusters of San
Francisco and Boston, and the automobile cluster in Detroit. Yet, despite its
importance for city planners and industrial policy-makers, until recently there
has been little success in estimating the relative importance of each
Marshallian channel to the location decisions of firms.
  Here we explore a burgeoning literature that aims to exploit the co-location
patterns of industries in cities in order to disentangle the relationship
between industry co-agglomeration and customer/supplier, labour and idea
sharing. Building on previous approaches that focus on across- and
between-industry estimates, we propose a network-based method to estimate the
relative importance of each Marshallian channel at a meso scale. Specifically,
we use a community detection technique to construct a hierarchical
decomposition of the full set of industries into clusters based on
co-agglomeration patterns, and show that these industry clusters exhibit
distinct patterns in terms of their relative reliance on individual Marshallian
channels.",Unravelling the forces underlying urban industrial agglomeration,2019-03-22 03:16:48,"Neave O'Clery, Samuel Heroy, Francois Hulot, Mariano Beguerisse-Díaz","http://arxiv.org/abs/1903.09279v2, http://arxiv.org/pdf/1903.09279v2",econ.GN
31375,gn,"In this paper we study the impact of errors in wind and solar power forecasts
on intraday electricity prices. We develop a novel econometric model which is
based on day-ahead wholesale auction curves data and errors in wind and solar
power forecasts. The model shifts day-ahead supply curves to calculate intraday
prices. We apply our model to the German EPEX SPOT SE data. Our model
outperforms both linear and non-linear benchmarks. Our study allows us to
conclude that errors in renewable energy forecasts exert a non-linear impact on
intraday prices. We demonstrate that additional wind and solar power capacities
induce non-linear changes in the intraday price volatility. Finally, we comment
on the economic and policy implications of our findings.",The Impact of Renewable Energy Forecasts on Intraday Electricity Prices,2019-03-22 13:32:05,"Sergei Kulakov, Florian Ziel","http://dx.doi.org/10.5547/2160-5890.10.1.skul, http://arxiv.org/abs/1903.09641v2, http://arxiv.org/pdf/1903.09641v2",econ.GN
31376,gn,"The granting process of all credit institutions rejects applicants who seem
risky regarding the repayment of their debt. A credit score is calculated and
associated with a cut-off value beneath which an applicant is rejected.
Developing a new score implies having a learning dataset in which the response
variable good/bad borrower is known, so that rejects are de facto excluded from
the learning process. We first introduce the context and some useful notations.
Then we formalize whether this particular sampling has consequences for the
score's relevance. Finally, we elaborate on methods that use non-financed
clients' characteristics and conclude that none of these methods are satisfactory in
practice using data from Cr\'edit Agricole Consumer Finance.
  -----
  A credit granting system may refuse loan applications judged too risky. Within
this system, the credit score provides a value measuring a risk of default,
which is compared to an acceptability threshold. This score is built exclusively
on data from financed clients, containing in particular the information 'good or
bad payer', whereas it is subsequently applied to all applications. Is such a
score statistically relevant? In this note, we make this question precise and
formal, and study the effect of the absence of the non-financed applicants on
the scores that are built. We then present methods to reintegrate the
non-financed applicants and conclude on their ineffectiveness in practice, based
on data from Cr\'edit Agricole Consumer Finance.",Réintégration des refusés en Credit Scoring,2019-03-21 14:15:10,"Adrien Ehrhardt, Christophe Biernacki, Vincent Vandewalle, Philippe Heinrich, Sébastien Beben","http://arxiv.org/abs/1903.10855v1, http://arxiv.org/pdf/1903.10855v1",econ.GN
31377,gn,"In this paper we develop a novel method of wholesale electricity market
modeling. Our optimization-based model decomposes wholesale supply and demand
curves into buy and sell orders of individual market participants. In doing so,
the model detects and removes arbitrage orders. As a result, we construct an
innovative fundamental model of a wholesale electricity market. First, our
fundamental demand curve has a unique composition. The demand curve lies in
between the wholesale demand curve and a perfectly inelastic demand curve.
Second, our fundamental supply and demand curves contain only actual (i.e.
non-arbitrage) transactions with physical assets on buy and sell sides. Third,
these transactions are designated to one of the three groups of wholesale
electricity market participants: retailers, suppliers, or utility companies. To
evaluate the performance of our model, we use the German wholesale market data.
Our fundamental model yields a more precise approximation of the actual load
values than a model with perfectly inelastic demand. Moreover, we conduct a
study of wholesale demand elasticities. The obtained conclusions regarding
wholesale demand elasticity are consistent with the existing academic
literature.",Determining Fundamental Supply and Demand Curves in a Wholesale Electricity Market,2019-03-22 14:17:52,"Sergei Kulakov, Florian Ziel","http://arxiv.org/abs/1903.11383v2, http://arxiv.org/pdf/1903.11383v2",econ.GN
31378,gn,"Food insecurity is associated with increased risk for several health
conditions and with poor chronic disease management. Key determinants for
household food insecurity are income and food costs. Whereas short-term
household incomes are likely to remain static, increased food prices would be a
significant driver of food insecurity. Objectives: to investigate food price
drivers for household food security and its health consequences in the UK under
Brexit Deal and No-deal scenarios, and to estimate the 5\% and 95\% quantiles of
the projected price distributions. Design: structured expert judgement
elicitation, a well-established method for quantifying uncertainty using
experts. In July 2018, each expert estimated the median, 5\% and 95\% quantiles
of changes in price for ten food categories under Brexit Deal and No-deal to
June 2020, assuming Brexit had taken place on 29th March 2019. These were
aggregated based on the accuracy and informativeness of the experts on
calibration questions. Participants: ten specialists in food procurement,
retail, agriculture, economics, statistics and household food security. Results:
when combined in the proportions used to calculate Consumer Prices Index food
basket costs, the median food price change for Brexit with a Deal is expected to
be +6.1\% [90\% credible interval: -3\%, +17\%] and with No deal +22.5\% [+1\%,
+52\%]. The number of households
experiencing food insecurity and its severity are likely to increase because of
expected sizeable increases in median food prices after Brexit. Higher
increases are more likely than lower rises and towards the upper limits, these
would entail severe impacts. Research showing a low food budget leads to
increasingly poor diet suggests that demand for health services in both the
short and longer term is likely to increase due to the effects of food
insecurity on the incidence and management of diet-sensitive conditions.",Anticipated impacts of Brexit scenarios on UK food prices and implications for policies on poverty and health: a structured expert judgement approach,2019-04-05 16:24:12,"Martine J Barons, Willy Aspinall","http://arxiv.org/abs/1904.03053v3, http://arxiv.org/pdf/1904.03053v3",econ.GN
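As a purely mechanical illustration of the final aggregation step reported in the
Results (combining category-level median price changes with Consumer Prices Index
basket shares), the Python sketch below uses made-up category medians and shares;
the study itself aggregates full expert distributions, so a weighted average of
medians is a deliberate simplification.

    # Illustrative only: combine per-category median price changes (percent)
    # using CPI food-basket expenditure shares. Both the shares and the
    # category medians below are made-up placeholders, not the study's data.
    category_median_change = {"meat": 8.0, "dairy": 5.0, "fruit_veg": 12.0, "bread_cereals": 4.0}
    cpi_share = {"meat": 0.30, "dairy": 0.20, "fruit_veg": 0.25, "bread_cereals": 0.25}

    basket_change = sum(category_median_change[c] * cpi_share[c] for c in cpi_share)
    print(f"weighted food-basket price change: {basket_change:.1f}%")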
31380,gn,"Bitcoin as well as other cryptocurrencies are all plagued by the impact from
bifurcation. Since the marginal cost of bifurcation is theoretically zero, it
causes the coin holders to doubt on the existence of the coin's intrinsic
value. This paper suggests a normative dual-value theory to assess the
fundamental value of Bitcoin. We draw on the experience from the art market,
where similar replication problems are prevalent. The idea is to decompose the
total value of a cryptocurrency into two parts: one is its art value and the
other is its use value. The tradeoff between these two values is also analyzed,
which motivates our proposal of an image coin for Bitcoin to elevate its
use value without sacrificing its art value. To show the general validity of
the dual-value theory, we also apply it to evaluate the prospects of four major
cryptocurrencies. We find this framework is helpful for both the investors and
the exchanges to examine a new coin's value when it first appears in the
market.",A Normative Dual-value Theory for Bitcoin and other Cryptocurrencies,2019-04-10 10:19:02,"Zhiyong Tu, Lan Ju","http://arxiv.org/abs/1904.05028v1, http://arxiv.org/pdf/1904.05028v1",econ.GN
31381,gn,"The transport sector is witnessing unprecedented levels of disruption.
Privately owned cars that operate on internal combustion engines have been the
dominant modes of passenger transport for much of the last century. However,
recent advances in transport technologies and services, such as the development
of autonomous vehicles, the emergence of shared mobility services, and the
commercialization of alternative fuel vehicle technologies, promise to
revolutionise how humans travel. The implications are profound: some have
predicted the end of private car dependent Western societies, others have
portended greater suburbanization than has ever been observed before. If
transport systems are to fulfil current and future needs of different
subpopulations, and satisfy short and long-term societal objectives, it is
imperative that we comprehend the many factors that shape individual behaviour.
This chapter introduces the technologies and services most likely to disrupt
prevailing practices in the transport sector. We review past studies that have
examined current and future demand for these new technologies and services, and
their likely short and long-term impacts on extant mobility patterns. We
conclude with a summary of what these new technologies and services might mean
for the future of mobility.","Understanding consumer demand for new transport technologies and services, and implications for the future of mobility",2019-04-11 10:06:14,Akshay Vij,"http://arxiv.org/abs/1904.05554v1, http://arxiv.org/pdf/1904.05554v1",econ.GN
31382,gn,"Most people are mistaken about the details of their pensions. Mistaken
beliefs about financially important policies imply significant informational
frictions. This paper incorporates informational friction, specifically a cost
of attention to an uncertain pension policy, into a life-cycle model of
retirement. This entails solving a dynamic rational inattention model with
endogenous heterogeneous beliefs: a significant methodological contribution in
itself. The resulting endogenous mistaken beliefs help explain a puzzle, namely
that labour market exits concentrate at official retirement ages despite weak
incentives to do so. The context of the study is the UK female state pension
age (SPA) reform. I find most women are mistaken about their SPA, mistakes are
predictive of behaviour, and mistakes decrease with age. I estimate the model
using simulated method of moments. Costly attention significantly improves
model predictions of the labour supply response to the SPA whilst accommodating
the observed learning about the individual's SPA. An extension addresses
another retirement puzzle, the extremely low take-up of actuarially
advantageous deferral options. Introducing costly attention into a model with
claiming significantly increases the number of people claiming early when the
option to defer appears actuarially advantageous.",Costly Attention and Retirement,2019-04-13 13:19:46,Jamie Hentall MacCuish,"http://arxiv.org/abs/1904.06520v2, http://arxiv.org/pdf/1904.06520v2",econ.GN
31383,gn,"Transportation Network Companies (TNCs) are changing the transportation
ecosystem, but micro-decisions of drivers and users need to be better
understood to assess the system-level impacts of TNCs. In this regard, we
contribute to the literature by estimating a) individuals' preferences of being
a rider, a driver, or a non-user of TNC services; b) preferences of ridehailing
users for ridepooling; c) TNC drivers' choice to switch to vehicles with better
fuel economy, and also d) the drivers' decision to buy, rent or lease new
vehicles with driving for TNCs being a major consideration. Elicitation of
drivers' preferences using a unique sample (N=11,902) of the U.S. population
residing in TNC-served areas is the key feature of this study. The statistical
analysis indicates that ridehailing services are mainly attracting personal
vehicle users as riders, without substantially affecting demand for transit.
Moreover, around 10% of ridehailing users reported postponing the purchase of a
new car due to the availability of TNC services. The model estimation results
indicate that the likelihood of being a TNC user increases with the increase in
age for someone younger than 44 years, but the pattern is reversed post 44
years. This change in direction of the marginal effect of age is insightful as
the previous studies have reported a negative association. We also find that
postgraduate drivers who live in metropolitan regions are more likely to switch
to fuel-efficient vehicles. These findings would inform transportation planners
and TNCs in developing policies to improve the fuel economy of the fleet.",Eliciting Preferences of Ridehailing Users and Drivers: Evidence from the United States,2019-04-14 16:57:42,"Prateek Bansal, Akanksha Sinha, Rubal Dua, Ricardo Daziano","http://arxiv.org/abs/1904.06695v1, http://arxiv.org/pdf/1904.06695v1",econ.GN
31384,gn,"We formalize one aspect of reliability in the context of Mobility-on-Demand
(MoD) systems by acknowledging the uncertainty in the pick-up time of these
services. This study answers two key questions: i) how the difference between
the stated and actual pick-up times affect the propensity of a passenger to
choose an MoD service? ii) how an MoD service provider can leverage this
information to increase its ridership? We conduct a discrete choice experiment
in New York to answer the former question and adopt a micro-simulation-based
optimization method to answer the latter question. In our experiments, the
ridership of an MoD service could be increased by up to 10\% via displaying the
predicted wait time strategically.",Can Mobility-on-Demand services do better after discerning reliability preferences of riders?,2019-04-17 00:13:05,"Prateek Bansal, Yang Liu, Ricardo Daziano, Samitha Samaranayake","http://arxiv.org/abs/1904.07987v1, http://arxiv.org/pdf/1904.07987v1",econ.GN
31523,gn,"We propose and estimate a model of demand and supply of annuities. To this
end, we use rich data from Chile, where annuities are bought and sold in a
private market via a two-stage process: first-price auctions followed by
bargaining. We model firms with private information about costs and retirees
with different mortalities and preferences for bequests and firms' risk
ratings. We find substantial costs and preference heterogeneity, and because
there are many firms, the market performs well. Counterfactuals show that
simplifying the current mechanism with English auctions and ""shutting down""
risk ratings increase pensions, but only for high-savers.",Auctioning Annuities,2020-11-05 18:19:30,"Gaurab Aryal, Eduardo Fajnzylber, Maria F. Gabrielli, Manuel Willington","http://arxiv.org/abs/2011.02899v7, http://arxiv.org/pdf/2011.02899v7",econ.GN
31385,gn,"Although wage inequality has evolved in advanced countries over recent
decades, it remains unknown the extent to which changes in wage inequality and
their differences across countries are attributable to specific capital and
labor quantities. We examine this issue by estimating a sector-level production
function extended to allow for capital-skill complementarity and factor-biased
technological change using cross-country and cross-industry panel data. Our
results indicate that most of the changes in the skill premium are attributable
to the relative quantities of ICT equipment, skilled labor, and unskilled labor
in the goods and service sectors of the majority of advanced countries.",ICT Capital-Skill Complementarity and Wage Inequality: Evidence from OECD Countries,2019-04-22 16:25:45,"Hiroya Taniguchi, Ken Yamada","http://dx.doi.org/10.1016/j.labeco.2022.102151, http://arxiv.org/abs/1904.09857v5, http://arxiv.org/pdf/1904.09857v5",econ.GN
31386,gn,"We study Bayesian coordination games where agents receive noisy private
information over the game's payoff structure, and over each other's actions. If
private information over actions is precise, we find that agents can coordinate
on multiple equilibria. If private information over actions is of low quality,
equilibrium uniqueness obtains, as in a standard global games setting. The
current model, with its flexible information structure, can thus be used to
study phenomena such as bank runs, currency crises, recessions, riots, and
revolutions, where agents rely on information over each other's actions.",Observing Actions in Bayesian Games,2019-04-24 14:10:27,"Dominik Grafenhofer, Wolfgang Kuhle","http://arxiv.org/abs/1904.10744v1, http://arxiv.org/pdf/1904.10744v1",econ.GN
31387,gn,"Manufacturing industry is heading towards socialization, interconnection, and
platformization. Motivated by the infiltration of sharing economy usage in
manufacturing, this paper addresses a new factory model -- shared factory --
and provides a theoretical architecture and some actual cases for manufacturing
sharing. Concepts related to three kinds of shared factories which deal
respectively with sharing production-orders, manufacturing-resources and
manufacturing-capabilities, are defined accordingly. These three kinds of
shared factory modes can be used to build corresponding sharing-manufacturing
ecosystems. On the basis of a sharing-economy analysis, we identify feasible key
enabling technologies for configuring and running a shared factory. At the same
time, the opportunities and challenges of enabling the shared factory are
analyzed in detail. In fact, the shared factory, as a new production node,
enhances the sharing nature of the social manufacturing paradigm, fits the needs
of asset-light operation, and gives us a new chance to use socialized
manufacturing resources. We conclude that implementing a shared factory can
achieve a win-win outcome through production value-added transformation and
social innovation.",Shared factory: a new production node for social manufacturing in the context of sharing economy,2019-04-12 12:27:46,"Pingyu Jiang, Pulin Li","http://dx.doi.org/10.1177/0954405419863220, http://arxiv.org/abs/1904.11377v1, http://arxiv.org/pdf/1904.11377v1",econ.GN
31388,gn,"We aim to determine whether a game-theoretic model between an insurer and a
healthcare practice yields a predictive equilibrium that incentivizes either
player to deviate from a fee-for-service to capitation payment system. Using
United States data from various primary care surveys, we find that non-extreme
equilibria (i.e., shares of patients, or shares of patient visits, seen under a
fee-for-service payment system) can be derived from a Stackelberg game if
insurers award a non-linear bonus to practices based on performance. Overall,
both insurers and practices can be incentivized to embrace capitation payments
somewhat, but potentially at the expense of practice performance.",A Game Theoretic Setting of Capitation Versus Fee-For-Service Payment Systems,2019-04-26 00:58:55,Allison Koenecke,"http://dx.doi.org/10.1371/journal.pone.0223672, http://arxiv.org/abs/1904.11604v2, http://arxiv.org/pdf/1904.11604v2",econ.GN
31389,gn,"The automation technology is emerging, but the adoption rate of autonomous
vehicles (AV) will largely depend upon how policymakers and the government
address various challenges such as public acceptance and infrastructure
development. This study proposes a five-step method to understand these
barriers to AV adoption. First, based on a literature review followed by
discussions with experts, ten barriers are identified. Second, the opinions of
eighteen experts from industry and academia regarding inter-relations between
these barriers are recorded. Third, a multicriteria decision making (MCDM)
technique, the grey-based Decision-making Trial and Evaluation Laboratory
(Grey-DEMATEL), is applied to characterize the structure of relationships
between the barriers. Fourth, robustness of the results is tested using
sensitivity analysis. Fifth, the key results are depicted in a causal loop
diagram (CLD), a systems thinking approach, to comprehend cause-and-effect
relationships between the barriers. The results indicate that the lack of
customer acceptance (LCA) is the most prominent barrier, the one which should
be addressed at the highest priority. The CLD suggests that LCA can be rather
mitigated by addressing two other prominent, yet more tangible, barriers --
lack of industry standards and the absence of regulations and certifications.
The study's overarching contribution thus lies in bringing to the fore multiple
barriers to AV adoption and their potential influences on each other. Moreover,
the insights from this study can help associations related to AVs prioritize
their endeavors to expedite AV adoption. From the methodological perspective,
this is the first study in transportation literature that integrates
Grey-DEMATEL with systems thinking.",A Multicriteria Decision Making Approach to Study the Barriers to the Adoption of Autonomous Vehicles,2019-04-27 00:10:50,"Alok Raj, J Ajith Kumar, Prateek Bansal","http://arxiv.org/abs/1904.12051v3, http://arxiv.org/pdf/1904.12051v3",econ.GN
31395,gn,"Existing research argues that countries increase their production basket by
adding products which require similar capabilities to those they already
produce, a process referred to as path dependency. Green economic growth is a
global movement that seeks to achieve economic expansion while at the same time
mitigating environmental risks. We postulate that countries engaging in green
economic growth are motivated to invest strategically to develop new
capabilities that will help them transition to a green economy. As a result,
they could potentially increase their production baskets not only by a
path-dependent process but also by a non-path-dependent process we term
high-investment structural jumps. The main objective of this research is to
determine whether countries increase their green production baskets mainly by a
process of path dependency or, alternatively, by a process of structural jumps.
We analyze data from 65 countries over the period 2007 to 2017. We
focus on China as our main case study. The results of this research show that
countries not only increase their green production baskets based on their
available capabilities, following path dependency, but also expand to products
that path dependency does not predict by investing in innovating and developing
new environmental related technologies.",Growing green: the role of path dependency and structural jumps in the green economy expansion,2019-06-12 20:50:29,"Seyyedmilad Talebzadehhosseini, Steven R. Scheinert, Ivan Garibay","http://dx.doi.org/10.18278/jpcs.6.1.2, http://arxiv.org/abs/1906.05269v2, http://arxiv.org/pdf/1906.05269v2",econ.GN
31390,gn,"In the global economy, the intermediate companies owned by multinational
corporations are becoming an important policy issue as they are likely to cause
international profit shifting and diversion of foreign direct investments. The
purpose of this analysis is to identify the intermediate companies with a high
risk of international profit shifting, which we call key firms, and to clarify
their characteristics. For this aim, we propose a model that focuses on each
affiliate's position in the ownership structure of each multinational
corporation. Based on the information contained in the Orbis database, we
constructed the Global Ownership Network, reflecting relationships through which
significant influence can be exerted over a firm, and analyzed large
multinational corporations listed in the Fortune Global 500. In this analysis,
we first confirmed the validity of this model by identifying, to a certain
degree, affiliates playing an important role in international tax avoidance.
Secondly, intermediate companies are mainly found in the Netherlands, the United
Kingdom and similar jurisdictions, and tend to be located where treaty shopping
is favorable. We also found that such key firms are concentrated in the IN
component of the bow-tie structure formed by the giant weakly connected
component of the Global Ownership Network. Therefore, the analysis clarifies
that key firms are geographically located in specific jurisdictions and
concentrated in specific components of the Global Ownership Network. The
location of key firms is related to the ease of treaty shopping, and the
jurisdictions where key firms are located differ depending on the location of
the multinational corporations.",Identification of Key Companies for International Profit Shifting in the Global Ownership Network,2019-04-29 02:31:29,"Tembo Nakamoto, Abhijit Chakraborty, Yuichi Ikeda","http://dx.doi.org/10.1007/s41109-019-0158-8, http://arxiv.org/abs/1904.12397v1, http://arxiv.org/pdf/1904.12397v1",econ.GN
31391,gn,"Direct elicitation, guided by theory, is the standard method for eliciting
latent preferences. The canonical direct-elicitation approach for measuring
individuals' valuations for goods is the Becker-DeGroot-Marschak procedure,
which generates willingness-to-pay (WTP) values that are imprecise and
systematically biased by understating valuations. We show that enhancing
elicited WTP values with supervised machine learning (SML) can substantially
improve estimates of peoples' out-of-sample purchase behavior. Furthermore,
swapping WTP data with choice data generated from a simple task,
two-alternative forced choice, leads to comparable performance. Combining all
the data with the best-performing SML methods yields large improvements in
predicting out-of-sample purchases. We quantify the benefit of using various
SML methods in conjunction with using different types of data. Our results
suggest that prices set by SML would increase revenue by 28% over using the
stated WTP, with the same data.",Supervised Machine Learning for Eliciting Individual Demand,2019-04-30 18:45:29,"John A. Clithero, Jae Joon Lee, Joshua Tasoff","http://arxiv.org/abs/1904.13329v4, http://arxiv.org/pdf/1904.13329v4",econ.GN
31392,gn,"While the disruptive potential of artificial intelligence (AI) and Big Data
has been receiving growing attention and concern in a variety of research and
application fields over the last few years, it has not received much scrutiny
in contemporary entrepreneurship research so far. Here we present some
reflections and a collection of papers on the role of AI and Big Data for this
emerging area in the study and application of entrepreneurship research. While
being mindful of the potentially overwhelming nature of the rapid progress in
machine intelligence and other Big Data technologies for contemporary
structures in entrepreneurship research, we put an emphasis on the reciprocity
of the co-evolving fields of entrepreneurship research and practice. How can AI
and Big Data contribute to a productive transformation of the research field
and the real-world phenomena (e.g., 'smart entrepreneurship')? We also discuss,
however, ethical issues as well as challenges around a potential contradiction
between entrepreneurial uncertainty and rule-driven AI rationality. The
editorial gives researchers and practitioners orientation and showcases avenues
and examples for concrete research in this field. At the same time, however, it
is not unlikely that we will encounter unforeseeable and currently inexplicable
developments in the field soon. We call on entrepreneurship scholars,
educators, and practitioners to proactively prepare for future scenarios.",Artificial Intelligence and Big Data in Entrepreneurship: A New Era Has Begun,2019-06-03 06:52:47,"Martin Obschonka, David B. Audretsch","http://dx.doi.org/10.1007/s11187-019-00202-4, http://arxiv.org/abs/1906.00553v1, http://arxiv.org/pdf/1906.00553v1",econ.GN
31393,gn,"This chapter examines the geographical meaning of the Sahel, its fluid
boundaries, and its spatial dynamics. Unlike other approaches that define the
Sahel as a bioclimatic zone or as an ungoverned area, it shows that the Sahel
is primarily a space of circulation in which uncertainty has historically been
overcome by mobility. The first part of the paper discusses how pre-colonial
empires relied on a network of markets and cities that facilitated trade and
social relationships across the region and beyond. The second part explores
changing regional mobility patterns precipitated by colonial powers and the new
approach they developed to control networks and flows. The third part discusses
the contradiction between the mobile strategies adopted by local herders,
farmers and traders in the Sahel and the territorial development initiatives of
modern states and international donors. Particular attention is paid in the
last section to how the Sahel was progressively redefined through a security
lens.",Mapping the Sahelian Space,2019-06-05 21:05:59,"Olivier Walther, Denis Retaille","http://arxiv.org/abs/1906.02223v1, http://arxiv.org/pdf/1906.02223v1",econ.GN
31394,gn,"Ergodicity describes an equivalence between the expectation value and the
time average of observables. Applied to human behaviour, ergodic theories of
decision-making reveal how individuals should tolerate risk in different
environments. To optimise wealth over time, agents should adapt their utility
function according to the dynamical setting they face. Linear utility is
optimal for additive dynamics, whereas logarithmic utility is optimal for
multiplicative dynamics. Whether humans approximate time optimal behavior
across different dynamics is unknown. Here we compare the effects of additive
versus multiplicative gamble dynamics on risky choice. We show that utility
functions are modulated by gamble dynamics in ways not explained by prevailing
decision theory. Instead, as predicted by time optimality, risk aversion
increases under multiplicative dynamics, distributing close to the values that
maximise the time average growth of wealth. We suggest that our findings
motivate a need for explicitly grounding theories of decision-making on ergodic
considerations.",Ergodicity-breaking reveals time optimal decision making in humans,2019-06-11 18:26:02,"David Meder, Finn Rabe, Tobias Morville, Kristoffer H. Madsen, Magnus T. Koudahl, Ray J. Dolan, Hartwig R. Siebner, Oliver J. Hulme","http://arxiv.org/abs/1906.04652v5, http://arxiv.org/pdf/1906.04652v5",econ.GN
31396,gn,"Market definition is an important component in the premerger investigation,
but the models used in the market definition have not developed much in the
past three decades since the Critical Loss Analysis (CLA) was proposed in 1989.
The CLA helps the Hypothetical Monopolist Test to determine whether the
hypothetical monopolist is going to profit from the small but significant and
non-transitory increase in price (SSNIP). However, the CLA has long been
criticized by academic scholars for its tendency to conclude a narrow market.
Although the CLA was adopted by the 2010 Horizontal Merger Guidelines (the 2010
Guidelines), the criticisms are likely still valid. In this dissertation, we
discussed the mathematical deduction of CLA, the data used, and the SSNIP
defined by the Agencies. Based on our research, we concluded that the narrow
market conclusion was due to the incorrect implementation of the CLA; not the
model itself. On the other hand, there are other unresolvable problems in the
CLA and the Hypothetical Monopolist Test. The SSNIP test and the CLA were sound
solutions to the market definition problem in their time, but more advanced
tools are available for the task nowadays. In this dissertation, we propose a
model which is based directly on the multi-dimensional substitutability between
the products and is capable of maximizing the substitutability of product
features within each group. Since the 2010 Guidelines does not exclude the use
of models other than the ones mentioned by the Guidelines, our method can
hopefully supplement the current models to show a better picture of the
substitutive relations and provide a more stable definition of the market.",A New Solution to Market Definition: An Approach Based on Multi-dimensional Substitutability Statistics,2019-06-24 18:42:33,Yan Yang,"http://arxiv.org/abs/1906.10030v1, http://arxiv.org/pdf/1906.10030v1",econ.GN
31397,gn,"Attempts to allocate capital across a selection of different investments are
often hampered by the fact that investors' decisions are made under limited
information (no historical return data) and during an extremely limited
timeframe. Nevertheless, in some cases, rational investors with a certain level
of experience are able to ordinally rank investment alternatives through
relative assessments of the probabilities that investments will be successful.
However, to apply traditional portfolio optimization models, analysts must use
historical (or simulated/expected) return data as the basis for their
calculations. This paper develops an alternative portfolio optimization
framework that is able to handle this kind of information (given by an ordinal
ranking of investment alternatives) and to calculate an optimal capital
allocation based on a Cobb-Douglas function, which we call the Sorted Weighted
Portfolio (SWP). Considering risk-neutral investors, we show that the results
of this portfolio optimization model usually outperform the output generated by
the (intuitive) Equally Weighted Portfolio (EWP) of different investment
alternatives, which is the result of optimization when one is unable to
incorporate additional data (the ordinal ranking of the alternatives). To
further extend this work, we show that our model can also accommodate risk-averse
investors and capture correlation effects.",On Capital Allocation under Information Constraints,2019-06-14 11:57:56,"Christoph J. Börner, Ingo Hoffmann, Fabian Poetter, Tim Schmitz","http://arxiv.org/abs/1906.10624v2, http://arxiv.org/pdf/1906.10624v2",econ.GN
31398,gn,"We review and interpret two basic propositions published by Ellerman (2014).
The propositions address the algebraic structure of T accounts and double entry
bookkeeping (DEB). The paper builds on this previous contribution with the view
of reconciling the two, apparently dichotomous, perspectives of accounting
measurement: the one that focuses preferably on the stock of wealth and to the
one that focuses preferably on the flow of income. The paper claims that
T-accounts and DEB have an underlying algebraic structure suitable for
approaching measurement from either or both perspectives. Accountants'
preferences for stocks or flows can be framed in ways which are mutually
consistent. The paper is a first step in addressing this consistency issue. It
avoids the difficult mathematics of abstract algebra by applying the concept of
syntax to accounting numbers such that the accounting procedure qualifies as a
formal language with which accountants convey meaning.",The Syntax of the Accounting Language: A First Step,2019-06-26 09:31:06,Frederico Botafogo,"http://arxiv.org/abs/1906.10865v1, http://arxiv.org/pdf/1906.10865v1",econ.GN
31399,gn,"Clinical research should conform to high standards of ethical and scientific
integrity, given that human lives are at stake. However, economic incentives
can generate conflicts of interest for investigators, who may be inclined to
withhold unfavorable results or even tamper with data in order to achieve
desired outcomes. To shed light on the integrity of clinical trial results,
this paper systematically analyzes the distribution of p-values of primary
outcomes for phase II and phase III drug trials reported to the
ClinicalTrials.gov registry. First, we detect no bunching of results just above
the classical 5% threshold for statistical significance. Second, a density
discontinuity test reveals an upward jump at the 5% threshold for phase III
results by small industry sponsors. Third, we document a larger fraction of
significant results in phase III compared to phase II. Linking trials across
phases, we find that early favorable results increase the likelihood of
continuing into the next phase. Once we take into account this selective
continuation, we can explain almost completely the excess of significant
results in phase III for trials conducted by large industry sponsors. For small
industry sponsors, instead, part of the excess remains unexplained.",P-hacking in clinical trials and how incentives shape the distribution of results across phases,2019-06-29 14:48:30,"Jérôme Adda, Christian Decker, Marco Ottaviani","http://dx.doi.org/10.1073/pnas.1919906117, http://arxiv.org/abs/1907.00185v3, http://arxiv.org/pdf/1907.00185v3",econ.GN
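A crude version of the first descriptive check (bunching of reported p-values
around the 5% threshold) can be sketched as follows; the list of p-values is a
hypothetical placeholder, and the paper's formal density discontinuity test is not
implemented here.

    # Minimal bunching check around the 5% significance threshold.
    # `pvalues` is a hypothetical placeholder; the paper uses primary-outcome
    # p-values of phase II/III trials from the ClinicalTrials.gov registry.
    pvalues = [0.001, 0.02, 0.03, 0.049, 0.051, 0.04, 0.12, 0.30, 0.046, 0.07]

    width = 0.005  # half-width of the window around 0.05
    just_below = sum(0.05 - width <= p < 0.05 for p in pvalues)
    just_above = sum(0.05 <= p < 0.05 + width for p in pvalues)

    print(f"p-values in [0.045, 0.050): {just_below}")
    print(f"p-values in [0.050, 0.055): {just_above}")
    # A large excess just below 0.05 relative to just above would be suggestive
    # of bunching; a formal density discontinuity test is needed for inference.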
31400,gn,"Presidential debates are viewed as providing an important public good by
revealing information on candidates to voters. We consider an endogenous model
of presidential debates in which an incumbent and a challenger (who is
privately informed about her own quality) publicly announce whether they are
willing to participate in a public debate, taking into account that a voter's
choice of candidate depends on her beliefs regarding the candidates' qualities
and on the state of nature. It is found that in equilibrium a debate occurs or
does not occur independently of the challenger's quality and therefore the
candidates' announcements are uninformative. This is because opting-out is
perceived to be worse than losing a debate and therefore the challenger never
refuses to participate.",A Model of Presidential Debates,2019-07-02 16:45:16,"Doron Klunover, John Morgan","http://arxiv.org/abs/1907.01362v10, http://arxiv.org/pdf/1907.01362v10",econ.GN
34226,th,"In this paper, we propose a model of the marriage market in which individuals
meet potential partners either directly or through their friends. When
socialization is exogenous, a higher arrival rate of direct meetings also
implies more meetings through friends. When individuals decide how much to
invest in socialization, meetings through friends are first increasing and then
decreasing in the arrival rate of direct offers. Hence, our model can
rationalize the negative correlation between the advent of online dating and
the decrease of marriages through friends observed in the US over the past
decades.",Marriage through friends,2021-11-06 10:39:59,"Ugo Bolletta, Luca Paolo Merlino","http://arxiv.org/abs/2111.03825v2, http://arxiv.org/pdf/2111.03825v2",econ.TH
31401,gn,"Standard macroeconomic models assume that households are rational in the
sense that they are perfect utility maximizers, and explain economic dynamics
in terms of shocks that drive the economy away from the stead-state. Here we
build on a standard macroeconomic model in which a single rational
representative household makes a savings decision of how much to consume or
invest. In our model households are myopic boundedly rational heterogeneous
agents embedded in a social network. From time to time each household updates
its savings rate by copying the savings rate of its neighbor with the highest
consumption. If the updating time is short, the economy is stuck in a poverty
trap, but for longer updating times economic output approaches its optimal
value, and we observe a critical transition to an economy with irregular
endogenous oscillations in economic output, resembling a business cycle. In
this regime households divide into two groups: Poor households with low savings
rates and rich households with high savings rates. Thus inequality and economic
dynamics both occur spontaneously as a consequence of imperfect household
decision making. Our work here supports an alternative program of research that
replaces utility maximization with behaviorally grounded decision making.",Emergent inequality and endogenous dynamics in a simple behavioral macroeconomic model,2019-07-04 01:55:28,"Yuki M. Asano, Jakob J. Kolb, Jobst Heitzig, J. Doyne Farmer","http://arxiv.org/abs/1907.02155v1, http://arxiv.org/pdf/1907.02155v1",econ.GN
31402,gn,"Through this paper, an attempt has been made to quantify the underlying
relationships between the leading macroeconomic indicators. More specifically, an
effort has been made in this paper to assess the cointegrating relationships
and examine the error correction behavior revealed by macroeconomic variables
using econometric techniques that were initially developed by Engle and Granger
(1987), and further explored by various succeeding papers, with the latest
being Tu and Yi (2017). Gross Domestic Product, Discount Rate, Consumer Price
Index and population of U.S are representatives of the economy that have been
used in this study to analyze the relationships between economic indicators and
understand how an adverse change in one of these variables might have
ramifications on the others. This is performed to corroborate and guide the
belief that a policy maker with specified intentions cannot ignore the
spillover effects caused by implementation of a certain policy.",Relationships between different Macroeconomic Variables using VECM,2019-07-10 01:41:58,Saannidhya Rawat,"http://arxiv.org/abs/1907.04447v1, http://arxiv.org/pdf/1907.04447v1",econ.GN
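A minimal sketch of the workflow described above (Johansen rank selection followed
by a VECM fit), assuming statsmodels is available and using simulated stand-ins
for the four U.S. series, might look as follows; it illustrates the mechanics only
and does not reproduce the paper's data or specification.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

    # Hypothetical quarterly series standing in for GDP, the discount rate,
    # the CPI and population; the study uses actual U.S. data.
    rng = np.random.default_rng(0)
    n = 120
    common_trend = np.cumsum(rng.normal(size=n))   # shared stochastic trend
    data = pd.DataFrame({
        "gdp":        common_trend + rng.normal(scale=0.5, size=n),
        "disc_rate":  0.5 * common_trend + rng.normal(scale=0.5, size=n),
        "cpi":        0.8 * common_trend + rng.normal(scale=0.5, size=n),
        "population": np.linspace(100, 130, n) + rng.normal(scale=0.2, size=n),
    })

    # Choose the cointegration rank with the Johansen trace test, then fit a VECM
    rank = select_coint_rank(data, det_order=0, k_ar_diff=1, method="trace").rank
    res = VECM(data, k_ar_diff=1, coint_rank=max(rank, 1), deterministic="ci").fit()
    print("selected cointegration rank:", rank)
    print(res.alpha)   # error-correction (adjustment) coefficients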
31403,gn,"This paper constructs a global economic policy uncertainty index through the
principal component analysis of the economic policy uncertainty indices for
twenty primary economies around the world. We find that the PCA-based global
economic policy uncertainty index is a good proxy for the economic policy
uncertainty on a global scale, which is quite consistent with the GDP-weighted
global economic policy uncertainty index. The PCA-based economic policy
uncertainty index is found to be positively related to the volatility and
correlation of the global financial market, which indicates that stocks are
more volatile and correlated when the global economic policy uncertainty is
higher. The PCA-based global economic policy uncertainty index performs
slightly better because the relationship between the PCA-based uncertainty and
market volatility and correlation is more significant.",A global economic policy uncertainty index from principal component analysis,2019-07-11 11:42:03,"Peng-Fei Dai, Xiong Xiong, Wei-Xing Zhou","http://dx.doi.org/10.1016/j.frl.2020.101686, http://arxiv.org/abs/1907.05049v2, http://arxiv.org/pdf/1907.05049v2",econ.GN
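The construction described above, taking the first principal component of
country-level EPU indices as a global index, can be sketched as follows; the
country series here are simulated placeholders, and standardising the indices
before PCA is an assumption rather than a detail stated in the abstract.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical monthly EPU indices for a handful of economies; the paper
    # uses the published indices of twenty primary economies.
    rng = np.random.default_rng(1)
    months, countries = 240, ["US", "CN", "DE", "JP", "UK"]
    global_factor = rng.normal(size=months).cumsum()
    epu = pd.DataFrame(
        {c: 100 + 10 * global_factor + rng.normal(scale=15, size=months) for c in countries}
    )

    # The first principal component of the standardised country indices serves
    # as the global EPU index (up to sign and scale).
    scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(epu))
    global_epu = pd.Series(scores[:, 0], name="global_epu_pc1")
    print(global_epu.head())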
31404,gn,"Economic theory is a mathematically rich field in which there are
opportunities for the formal analysis of singularities and catastrophes. This
article looks at the historical context of singularities through the work of
two eminent Frenchmen around the late 1960s and 1970s. Ren\'e Thom (1923-2002)
was an acclaimed mathematician having received the Fields Medal in 1958,
whereas G\'erard Debreu (1921-2004) would receive the Nobel Prize in economics
in 1983. Both were highly influential within their fields and given the
fundamental nature of their work, the potential for cross-fertilisation would
seem to be quite promising. This was not to be the case: Debreu knew of Thom's
work and cited it in the analysis of his own work, but despite this, and despite
other applied mathematicians taking catastrophe theory to economics, the theory
never achieved a lasting following and relatively few results were published.
This article reviews Debreu's analysis of the so-called ${\it regular}$ and
${\it critical}$ economies in order to draw some insights into the economic
perspective of singularities before moving to how singularities arise naturally
in the Nash equilibria of game theory. Finally a modern treatment of stochastic
game theory is covered through recent work on the quantal response equilibrium.
In this view the Nash equilibrium is to the quantal response equilibrium what
deterministic catastrophe theory is to stochastic catastrophe theory, with some
caveats regarding when this analogy breaks down discussed at the end.",Singularities and Catastrophes in Economics: Historical Perspectives and Future Directions,2019-07-12 08:45:18,"Michael S. Harré, Adam Harris, Scott McCallum","http://arxiv.org/abs/1907.05582v1, http://arxiv.org/pdf/1907.05582v1",econ.GN
31405,gn,"A controversy involving loan loss provisions in banks concerns their
relationship with the business cycle. While international accounting standards
for recognizing provisions (incurred loss model) would presumably be
pro-cyclical, accentuating the effects of the current economic cycle, an
alternative model, the expected loss model, has countercyclical
characteristics, acting as a buffer against economic imbalances caused by
expansionary or contractionary phases in the economy. In Brazil, a mixed
accounting model exists, whose behavior is not known to be pro-cyclical or
countercyclical. The aim of this research is to analyze the behavior of these
accounting models in relation to the business cycle, using an econometric model
consisting of financial and macroeconomic variables. The study allowed us to
identify the impact of credit risk behavior, earnings management, capital
management, Gross Domestic Product (GDP) behavior, and the behavior of the
unemployment rate on provisions in countries that use different accounting
models. Data from commercial banks in the United Kingdom (incurred loss), in
Spain (expected loss), and in Brazil (mixed model) were used, covering the
period from 2001 to 2012. Despite the accounting models of the three countries
being formed by very different rules regarding possible effects on the business
cycles, the results revealed a pro-cyclical behavior of provisions in each
country, indicating that when GDP grows, provisions tend to fall and vice
versa. The results also revealed other factors influencing the behavior of loan
loss provisions, such as earnings management.","The cyclicality of loan loss provisions under three different accounting models: the United Kingdom, Spain, and Brazil",2019-07-17 16:12:43,"A. M. B. Araujo, P. R. B. Lustosa","http://dx.doi.org/10.1590/1808-057x201804490, http://arxiv.org/abs/1907.07491v1, http://arxiv.org/pdf/1907.07491v1",econ.GN
31406,gn,"In this paper, we contrast parametric and semi-parametric representations of
unobserved heterogeneity in hierarchical Bayesian multinomial logit models and
leverage these methods to infer distributions of willingness to pay for
features of shared automated vehicle (SAV) services. Specifically, we compare
the multivariate normal (MVN), finite mixture of normals (F-MON) and Dirichlet
process mixture of normals (DP-MON) mixing distributions. The latter promises
to be particularly flexible in respect to the shapes it can assume and unlike
other semi-parametric approaches does not require that its complexity is fixed
prior to estimation. However, its properties relative to simpler mixing
distributions are not well understood. In this paper, we evaluate the
performance of the MVN, F-MON and DP-MON mixing distributions using simulated
data and real data sourced from a stated choice study on preferences for SAV
services in New York City. Our analysis shows that the DP-MON mixing
distribution provides superior fit to the data and performs at least as well as
the competing methods at out-of-sample prediction. The DP-MON mixing
distribution also offers substantive behavioural insights into the adoption of
SAVs. We find that preferences for in-vehicle travel time by SAV with
ride-splitting are strongly polarised. Whereas one third of the sample is
willing to pay between 10 and 80 USD/h to avoid sharing a vehicle with
strangers, the remainder of the sample is either indifferent to ride-splitting
or even desires it. Moreover, we estimate that new technologies such as vehicle
automation and electrification are relatively unimportant to travellers. This
suggests that travellers may primarily derive indirect, rather than immediate
benefits from these new technologies through increases in operational
efficiency and lower operating costs.",Semi-Parametric Hierarchical Bayes Estimates of New Yorkers' Willingness to Pay for Features of Shared Automated Vehicle Services,2019-07-23 03:29:27,"Rico Krueger, Taha H. Rashidi, Akshay Vij","http://arxiv.org/abs/1907.09639v1, http://arxiv.org/pdf/1907.09639v1",econ.GN
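The paper's estimator is a hierarchical Bayesian multinomial logit with a
Dirichlet process mixture of normals as the mixing distribution, which is well
beyond a short sketch. The flavour of a DP mixture capturing polarised
preferences can nevertheless be illustrated with scikit-learn's variational
BayesianGaussianMixture on hypothetical individual-level willingness-to-pay
draws; everything below (data, component cap, seed) is an assumption for
illustration, not the paper's method.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Hypothetical individual-level willingness-to-pay draws with two polarised
    # groups, loosely mimicking the polarised preferences for ride-splitting
    # described in the abstract.
    rng = np.random.default_rng(2)
    wtp = np.concatenate([
        rng.normal(loc=-40, scale=10, size=300),   # strong aversion to sharing
        rng.normal(loc=5, scale=8, size=700),      # indifferent or favourable
    ]).reshape(-1, 1)

    # Truncated Dirichlet-process mixture of normals: the number of effective
    # components is inferred rather than fixed in advance.
    dpgm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500,
        random_state=0,
    ).fit(wtp)

    print("component weights:", np.round(dpgm.weights_, 3))
    print("component means:", np.round(dpgm.means_.ravel(), 1))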
31407,gn,"The concept of the sustainable business model describes the rationale of how
an organization creates, delivers, and captures value, in economic, social,
cultural, or other contexts, in a sustainable way. The process of sustainable
business model construction forms an innovative part of a business strategy.
Different industries and businesses have utilized sustainable business models
concept to satisfy their economic, environmental, and social goals
simultaneously. However, the success, popularity, and progress of sustainable
business models in different application domains are not clear. To explore this
issue, this research provides a comprehensive review of sustainable business
models literature in various application areas. Notable sustainable business
models are identified and further classified into fourteen unique categories, and
in every category, the progress -- either failure or success -- has been
reviewed, and the research gaps are discussed. Taxonomy of the applications
includes innovation, management and marketing, entrepreneurship, energy,
fashion, healthcare, agri-food, supply chain management, circular economy,
developing countries, engineering, construction and real estate, mobility and
transportation, and hospitality. The key contribution of this study is that it
provides an insight into the state of the art of sustainable business models in
the various application areas and future research directions. This paper
concludes that the popularity and success rate of sustainable business models
in all application domains have increased along with the increasing use of
advanced technologies.",Sustainable Business Models: A Review,2019-07-23 10:34:18,"Saeed Nosratabadi, Amir Mosavi, Shahaboddin Shamshirband, Edmundas Kazimieras Zavadskas, Andry Rakotonirainy, Kwok Wing Chau","http://dx.doi.org/10.3390/su11061663, http://arxiv.org/abs/1907.10052v1, http://arxiv.org/pdf/1907.10052v1",econ.GN
31408,gn,"Following the value relevance literature, this study verifies whether the
marketplace differentiates companies of high, medium, and low long-term
operational performance, measured by accounting information on profitability,
sales variation and indebtedness. The data comprises the Corporate Financial
Statements disclosed during the period from 1996 to 2009 and stock prices of
companies listed on the Sao Paulo Stock Exchange and Commodities and Futures
Exchange - BM&FBOVESPA. The final sample is composed of 142 non-financial
companies. Five-year moving windows were used, which resulted in ten five-year
periods. After checking each company's indices, the accounting variables were
unified into an Index Performance Summary to synthesize the final performance for
each five-year period, which allowed segregation into operational performance
levels. Multiple regressions were performed using panel data techniques, a fixed
effects model, and dummy variables, and then hypothesis tests were conducted.
Regarding the explanatory power of each individual variable, the results show
that not all behaviors accord with the research hypothesis and that the
Brazilian stock market differentiates companies of high and low long-term
operational performance. This distinction is not fully perceived between
companies of high and medium operational performance.",Market and Long Term Accounting Operational Performance,2019-07-26 18:21:13,"M. S. S. Rosa, P. R. B. Lustosa","http://dx.doi.org/10.4013/base.2014.111.03, http://arxiv.org/abs/1907.11719v1, http://arxiv.org/pdf/1907.11719v1",econ.GN
31409,gn,"City size distributions are known to be well approximated by power laws
across a wide range of countries. But such distributions are also meaningful at
other spatial scales, such as within certain regions of a country. Using data
from China, France, Germany, India, Japan, and the US, we first document that
large cities are significantly more spaced out than would be expected by chance
alone. We next construct spatial hierarchies for countries by first
partitioning geographic space using a given number of their largest cities as
cell centers, and then continuing this partitioning procedure within each cell
recursively. We find that city size distributions in different parts of these
spatial hierarchies exhibit power laws that are again far more similar than
would be expected by chance alone -- suggesting the existence of a spatial
fractal structure.",Cities and space: Common power laws and spatial fractal structures,2019-07-29 12:09:50,"Tomoya Mori, Tony E. Smith, Wen-Tai Hsu","http://arxiv.org/abs/1907.12289v1, http://arxiv.org/pdf/1907.12289v1",econ.GN
31410,gn,"Killer technology is a radical innovation, based on new products and/or
processes, that with high technical and/or economic performance destroys the
usage value of established techniques previously sold and used. Killer
technology is a new concept in the economics of innovation that may be useful for
bringing a new perspective to explain and generalize the behavior and
characteristics of innovations that generate a destructive creation for
sustaining technical change. To explore the behavior of killer technologies, a
simple model is proposed to analyze and predict how killer technologies destroy
and substitute established technologies. Empirical evidence of this theoretical
framework is based on historical data on the evolution of some example
technologies. The theoretical framework and empirical evidence hint at general
properties of the behavior of killer technologies to explain corporate,
industrial, economic and social change and to support best practices for
technology management of firms and innovation policy of nations. Overall, then,
the proposed theoretical framework can lay a foundation for the development of
more sophisticated concepts to explain the behavior of vital technologies that
generate technological and industrial change in society.",Killer Technologies: the destructive creation in the technical change,2019-07-29 16:14:27,Mario Coccia,"http://arxiv.org/abs/1907.12406v1, http://arxiv.org/pdf/1907.12406v1",econ.GN
31411,gn,"Effective treatment strategies exist for substance use disorder (SUD),
however severe hurdles remain in ensuring adequacy of the SUD treatment (SUDT)
workforce as well as improving SUDT affordability, access and stigma. Although
evidence shows recent increases in SUD medication access from expanding
Medicaid availability under the Affordable Care Act, it is not yet known whether
these policies also led to growth in, or changes in the nature of, hiring in the
SUDT-related workforce, partly due to poor data availability. Our study uses
novel data to shed light on recent trends in a fast-evolving and
policy-relevant labor market, and contributes to understanding the current SUDT
related workforce and the effect of Medicaid expansion on hiring attempts in
this sector. We examine attempts over 2010-2018 at hiring in the SUDT and
related behavioral health sector as background for estimating the causal effect
of the 2014-and-beyond state Medicaid expansion on these outcomes through
""difference-in-difference"" econometric models. We use Burning Glass
Technologies (BGT) data covering virtually all U.S. job postings by employers.
Nationally, we find little growth in the sector's hiring attempts in 2010-2018
relative to the rest of the economy or to health care as a whole. However, this
masks diverging trends across subsectors, which saw reductions in hospital-based
hiring attempts, increases at outpatient facilities, and occupational hiring
demand shifting from medical personnel towards counselors
and social workers. Although Medicaid expansion did not lead to any
statistically significant or meaningful change in overall hiring attempts,
there was a shift in the hiring landscape.",Hiring in the substance use disorder treatment related sector during the first five years of Medicaid expansion,2019-08-01 08:18:35,"Olga Scrivner, Thuy Nguyen, Kosali Simon, Esmé Middaugh, Bledi Taska, Katy Börner","http://dx.doi.org/10.1371/journal.pone.0228394, http://arxiv.org/abs/1908.00216v1, http://arxiv.org/pdf/1908.00216v1",econ.GN
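The record above relies on difference-in-differences econometric models around the 2014 state Medicaid expansion. A generic two-way fixed effects sketch on a simulated state-by-year panel is shown below; the variable names, simulated posting counts and clustering choice are illustrative assumptions, not the paper's Burning Glass Technologies specification.

```python
# Generic difference-in-differences sketch with state and year fixed effects,
# run on a hypothetical state-by-year panel of SUDT-related job postings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
states, years = [f"s{i}" for i in range(20)], list(range(2010, 2019))
df = pd.DataFrame([(s, y) for s in states for y in years], columns=["state", "year"])
df["expanded"] = df["state"].isin(states[:10]).astype(int)   # half the states expand Medicaid
df["post"] = (df["year"] >= 2014).astype(int)
df["postings"] = 100 + 5 * df["expanded"] * df["post"] + rng.normal(0, 10, len(df))

did = smf.ols("postings ~ expanded:post + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}
)
print(did.params["expanded:post"])   # the difference-in-differences estimate
```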
31412,gn,"This study shows evidence for collaborative knowledge creation among
individual researchers through direct exchanges of their mutual differentiated
knowledge. Using patent application data from Japan, the collaborative output
is evaluated according to the quality and novelty of the developed patents,
which are measured in terms of forward citations and the order of application
within their primary technological category, respectively. Knowledge exchange
is shown to raise collaborative productivity more through the extensive margin
(i.e., the number of patents developed) in the quality dimension, whereas it
does so more through the intensive margin (i.e., the novelty of each patent) in
the novelty dimension.",Creation of knowledge through exchanges of knowledge: Evidence from Japanese patent data,2019-08-04 04:56:34,"Tomoya Mori, Shosei Sakaguchi","http://arxiv.org/abs/1908.01256v2, http://arxiv.org/pdf/1908.01256v2",econ.GN
31413,gn,"This paper investigates the dynamics of gambling and how they can affect
risk-taking behavior in regions not explored by Kahneman and Tversky's Prospect
Theory. Specifically, it questions why extreme outcomes do not fit the theory
and proposes alternative ways to measure prospects. The paper introduces a
measure of contrast between gambles and conducts an experiment to test the
hypothesis that individuals prospect gambles with nonadditive dynamics
differently. The results suggest a strong bias towards certain options, which
challenges the predictions of Kahneman and Tversky's theory.",Behavioral Biases and Nonadditive Dynamics in Risk Taking: An Experimental Investigation,2019-08-05 19:15:54,José Cláudio do Nascimento,"http://arxiv.org/abs/1908.01709v3, http://arxiv.org/pdf/1908.01709v3",econ.GN
31414,gn,"Farmers use pesticides to reduce yield losses. The efficacies of pesticide
treatments are often evaluated by analyzing the average treatment effects and
risks. The stochastic efficiency with respect to a function is often employed
in such evaluations through ranking the certainty equivalents of each
treatment. The main challenge of using this method is gathering an adequate
number of observations to produce results with statistical power. However, in
many cases, only a limited number of trials are replicated in field
experiments, leaving an inadequate number of observations. In addition, this
method focuses only on the farmer's profit without incorporating the impact of
disease pressure on yield and profit. The objective of our study is to propose
a methodology to address the issue of an insufficient number of observations
using simulations and take into account the effect of disease pressure on yield
through a quantile regression model. We apply this method to the case of
strawberry disease management in Florida.",Evaluating Pest Management Strategies: A Robust Method and its Application to Strawberry Disease Management,2019-08-05 22:18:11,"Ariel Soto-Caro, Feng Wu, Zhengfei Guan, Natalia Peres","http://arxiv.org/abs/1908.01808v2, http://arxiv.org/pdf/1908.01808v2",econ.GN
31415,gn,"This research aims to explore business processes and what the factors have
major influence on electronic marketing and CRM systems? Which data needs to be
analyzed and integrated in the system, and how to do that? How effective of
integration the electronic marketing and CRM with big data enabled to support
Marketing and Customer Relation operations. Research based on case studies at
XYZ Organization: International Language Education Service in Surabaya.
Research is studying secondary data which is supported by qualitative research
methods. Using purposive sampling technique with observation and interviewing
several respondents who need the system integration. The documentation of
interview is coded to keep confidentiality of the informant. Method of
extending participation, triangulation of data sources, discussions and the
adequacy of the theory are uses to validate data. Miles and Huberman models is
uses to do analysis the data interview. Results of the research are expected to
become a holistic approach to fully integrate the Big Data Analytics program
with electronic marketing and CRM systems.",Review of the Plan for Integrating Big Data Analytics Program for the Electronic Marketing System and Customer Relationship Management: A Case Study XYZ Institution,2019-08-07 06:34:08,Idha Sudianto,"http://dx.doi.org/10.5281/zenodo.3361854, http://arxiv.org/abs/1908.02430v1, http://arxiv.org/pdf/1908.02430v1",econ.GN
31416,gn,"Recently, the French Senate approved a law that imposes a 3% tax on revenue
generated from digital services by companies above a certain size. While there
is a lot of political debate about economic consequences of this action, it is
actually interesting to reverse the question: We consider the long-term
implications of an economy with no such digital tax. More generally, we can
think of digital services as a special case of products with low or zero cost
of transportation. With basic economic models we show that a market with no
transportation costs is prone to monopolization as minuscule, random
differences in quality are rewarded disproportionally. We then propose a
distance-based tax to counter-balance the tendencies of random centralisation.
Unlike a tax that scales with physical (cardinal) distance, a ranked (ordinal)
distance tax leverages the benefits of digitalization while maintaining a
stable economy.",Ordinal Tax To Sustain a Digital Economy,2019-08-06 20:23:55,"Nate Dwyer, Sandro Claudio Lera, Alex Sandy Pentland","http://arxiv.org/abs/1908.03287v1, http://arxiv.org/pdf/1908.03287v1",econ.GN
31417,gn,"To establish an updated understanding of the U.S. textile and apparel (TAP)
industry's competitive position within the global textile environment, trade
data from UN-COMTRADE (1996-2016) was used to calculate the Normalized Revealed
Comparative Advantage (NRCA) index for 169 TAP categories at the four-digit
Harmonized Schedule (HS) code level. Univariate time series models using
Autoregressive Integrated Moving Average (ARIMA) specifications forecast the
short-term future performance of categories with revealed export advantage. Accompanying
outlier analysis examined permanent level shifts that might convey important
information about policy changes, influential drivers and random events.",Forecasting U.S. Textile Comparative Advantage Using Autoregressive Integrated Moving Average Models and Time Series Outlier Analysis,2019-08-13 23:47:13,"Zahra Saki, Lori Rothenberg, Marguerite Moor, Ivan Kandilov, A. Blanton Godfrey","http://arxiv.org/abs/1908.04852v1, http://arxiv.org/pdf/1908.04852v1",econ.GN
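The record above combines an NRCA index computed from UN-COMTRADE data with ARIMA forecasts and outlier analysis. The sketch below shows one common NRCA normalisation and a short statsmodels ARIMA forecast on a made-up series; the trade figures, the specific NRCA formula used and the ARIMA order are assumptions for illustration and may differ from the paper's choices.

```python
# Sketch: a common normalised revealed comparative advantage (NRCA) formula for one
# country-product cell, plus a short ARIMA forecast of its yearly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def nrca(e_ij, e_i, e_j, e_world):
    """One common normalisation: E_ij/E - (E_i * E_j) / E^2."""
    return e_ij / e_world - (e_i * e_j) / e_world**2

rng = np.random.default_rng(2)
years = range(1996, 2017)
series = pd.Series(nrca(rng.uniform(1.0, 2.0, len(years)), 50.0, 30.0, 1000.0),
                   index=list(years))   # plain integer index; statsmodels may warn

model = ARIMA(series, order=(1, 1, 0)).fit()   # order chosen purely for illustration
print(model.forecast(steps=3))                 # short-term forecast
```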
31418,gn,"I use a panel of higher education clearinghouse data to study the centralized
assignment of applicants to Finnish polytechnics. I show that on a yearly
basis, large numbers of top applicants unnecessarily remain unassigned to any
program. There are programs which rejected applicants would find acceptable,
but the assignment mechanism both discourages applicants from applying, and
stops programs from admitting those who do. A mechanism which would admit each
year's most eligible applicants has the potential to substantially reduce
re-applications, thereby shortening the long queues into Finnish higher
education.",Why Finnish polytechnics reject top applicants,2019-08-15 10:08:21,Kristian Koerselman,"http://dx.doi.org/10.1080/09645292.2020.1787953, http://arxiv.org/abs/1908.05443v1, http://arxiv.org/pdf/1908.05443v1",econ.GN
31419,gn,"When facing threats from automation, a worker residing in a large Chinese
city might not be as lucky as a worker in a large U.S. city, depending on the
type of large city in which one resides. Empirical studies found that large
U.S. cities exhibit resilience to automation impacts because of the increased
occupational and skill specialization. However, in this study, we observe
polarized responses in large Chinese cities to automation impacts. The
polarization might be attributed to the elaborate master planning of the
central government, through which cities are assigned different industrial
goals to achieve globally optimal economic success and, thus, a fast-growing
economy. By dividing Chinese cities into two groups based on their
administrative levels and premium resources allocated by the central
government, we find that Chinese cities follow two distinct industrial
development trajectories: the trajectory with government support leads to a
diversified industrial structure and, thus, a diversified job market, while the
other leads to specialty cities and, thus, a specialized job market. By
revisiting the automation impacts on a polarized job market, we observe a
Simpson's paradox: larger cities with diversified job markets exhibit greater
resilience, whereas larger cities with specialized job markets are more
susceptible. These findings can inform policy makers in deploying appropriate
policies to mitigate the polarized automation impacts.",Automation Impacts on China's Polarized Job Market,2019-08-15 15:45:11,"Haohui 'Caron' Chen, Xun Li, Morgan Frank, Xiaozhen Qin, Weipan Xu, Manuel Cebrian, Iyad Rahwan","http://arxiv.org/abs/1908.05518v1, http://arxiv.org/pdf/1908.05518v1",econ.GN
31420,gn,"The compact city, as a sustainable concept, is intended to augment the
efficiency of urban function. However, previous studies have concentrated more
on morphology than on structure. The present study focuses on urban structural
elements, i.e., urban hotspots consisting of high-density and high-intensity
socioeconomic zones, and explores the economic performance associated with
their spatial structure. We use nighttime luminosity (NTL) data and the Loubar
method to identify and extract hotspots, and ultimately draw two conclusions.
First, as population increases, the hotspot number scales sublinearly with
an exponent of approximately 0.50-0.55, regardless of whether the location is in
China, the EU or the US, while the intercept values differ substantially, which is
mainly due to different levels of economic development. Secondly, we demonstrate
that the compactness of hotspots imposes an inverted U-shaped influence on
economic growth, which implies that an optimal compactness coefficient does
exist. These findings are helpful for urban planning.",The inverted U-shaped effect of urban hotspots spatial compactness on urban economic growth,2019-08-15 16:30:16,"Weipan Xu, Haohui'Caron' Chen, Enrique Frias-Martinez, Manuel Cebrian, Xun Li","http://arxiv.org/abs/1908.05530v1, http://arxiv.org/pdf/1908.05530v1",econ.GN
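The record above reports that hotspot counts scale sublinearly with population, with an exponent of roughly 0.50-0.55. A minimal sketch of how such an exponent can be estimated, by ordinary least squares on log-transformed data, is given below; the simulated values and the fitted exponent are illustrative, not the paper's results.

```python
# Sketch: estimating the scaling exponent beta in  N_hotspots ~ a * Population^beta
# by OLS on logs. Simulated data roughly mimic the reported sublinear exponent ~0.5.
import numpy as np

rng = np.random.default_rng(3)
population = rng.uniform(1e5, 2e7, 200)
hotspots = 0.02 * population**0.52 * np.exp(rng.normal(0, 0.1, 200))

beta, log_a = np.polyfit(np.log(population), np.log(hotspots), deg=1)
print(f"estimated exponent: {beta:.2f}, intercept: {log_a:.2f}")
```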
31421,gn,"The choice of appropriate measures of deprivation, identification and
aggregation of poverty has been a challenge for many years. The works of Sen,
Atkinson and others have been the cornerstone for most of the literature on
poverty measuring. Recent contributions have focused on what we now know as
multidimensional poverty measuring. Current aggregation and identification
measures for multidimensional poverty make the implicit assumption that
dimensions are independent of each other, thus ignoring the natural dependence
between them. In this article, a variant of the usual method of measuring
deprivation is presented. It allows for the aforementioned
connections by drawing on geometric and networking notions. This new
methodology relies on previous identification and aggregation methods, but with
small modifications to prevent arbitrary manipulations. It is also proved that
this measure still complies with the axiomatic framework of its predecessor.
Moreover, the general form of the latter can be considered a particular case of
this new measure, although this identification is not unique.",A complex net of intertwined complements: Measuring interdimensional dependence among the poor,2019-08-21 16:49:09,Felipe Del Canto M,"http://arxiv.org/abs/1908.07870v1, http://arxiv.org/pdf/1908.07870v1",econ.GN
31422,gn,"From a theoretical point of view, result-based agri-environmental payments
are clearly preferable to action-based payments. However, they suffer from two
major practical disadvantages: costs of measuring the results and payment
uncertainty for the participating farmers. In this paper, we propose an
alternative design to overcome these two disadvantages by means of modelling
(instead of measuring) the results. We describe the concept of model-informed
result-based agri-environmental payments (MIRBAP), including a hypothetical
example of payments for the protection and enhancement of soil functions. We
offer a comprehensive discussion of the relative advantages and disadvantages
of MIRBAP, showing that it not only unites most of the advantages of
result-based and action-based schemes, but also adds two new advantages: the
potential to address trade-offs among multiple policy objectives and management
for long-term environmental effects. We argue that MIRBAP would be a valuable
addition to the agri-environmental policy toolbox and a reflection of recent
advancements in agri-environmental modelling.",Implementing result-based agri-environmental payments by means of modelling,2019-08-22 09:40:14,"Bartosz Bartkowski, Nils Droste, Mareike Ließ, William Sidemo-Holm, Ulrich Weller, Mark V. Brady","http://dx.doi.org/10.1016/j.landusepol.2020.105230, http://arxiv.org/abs/1908.08219v2, http://arxiv.org/pdf/1908.08219v2",econ.GN
31423,gn,"In this study, we consider research and development investment by the
government. Our study is motivated by the bias in the budget allocation owing
to the competitive funding system. In our model, each researcher presents
research plans and expenses, and the government selects a research plan in two
periods---before and after the government knows its favorite plan---and spends
funds on the adopted program in each period. We demonstrate that, in a subgame
perfect equilibrium, the government adopts equally as many active plans as
possible. In an equilibrium, the selected plans are distributed proportionally.
Thus, the investment in research projects is symmetric and unbiased. Our
results imply that equally widespread expenditure across all research fields is
better than the selection of and concentration in some specific fields.",Government Expenditure on Research Plans and their Diversity,2019-08-03 11:27:03,"Ryosuke Ishii, Kuninori Nakagawa","http://arxiv.org/abs/1908.08786v1, http://arxiv.org/pdf/1908.08786v1",econ.GN
31424,gn,"This paper extends the core results of discrete time infinite horizon dynamic
programming to the case of state-dependent discounting. We obtain a condition
on the discount factor process under which all of the standard optimality
results can be recovered. We also show that the condition cannot be
significantly weakened. Our framework is general enough to handle complications
such as recursive preferences and unbounded rewards. Economic and financial
applications are discussed.",Dynamic Programming with State-Dependent Discounting,2019-08-23 15:58:42,"John Stachurski, Junnan Zhang","http://arxiv.org/abs/1908.08800v4, http://arxiv.org/pdf/1908.08800v4",econ.GN
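The record above concerns dynamic programming when the discount factor depends on the state. The sketch below runs plain value iteration on a toy problem where an exogenous Markov state switches the discount factor; the rewards, transition matrix and discount values are invented for illustration, and the sketch does not implement the paper's general condition on the discount factor process.

```python
# Minimal sketch: value iteration when the discount factor depends on an exogenous
# Markov state z. The tiny reward/transition structure below is illustrative only.
import numpy as np

beta = np.array([0.90, 0.98])          # state-dependent discount factors
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])             # transition probabilities for z
r = np.array([[1.0, 0.5],              # rewards r[z, a] for two actions
              [0.2, 1.5]])

v = np.zeros(2)
for _ in range(1000):
    # Bellman operator: current reward plus state-dependent discounted
    # expected continuation value, maximised over actions.
    q = r + beta[:, None] * (P @ v)[:, None]
    v_new = q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new
print("fixed point:", v)
```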
31425,gn,"The disclosure of the VW emission manipulation scandal caused a
quasi-experimental market shock to the observable environmental quality of VW
diesel vehicles. To investigate the market reaction to this shock, we collect
data from a used-car online advertisement platform. We find that the supply of
used VW diesel vehicles increases after the VW emission scandal. The positive
supply side effects increase with the probability of manipulation. Furthermore,
we find negative impacts on the asking prices of used cars subject to a high
probability of manipulation. We rationalize these findings with a model for
sorting by the environmental quality of used cars.",Sorting on the Used-Car Market After the Volkswagen Emission Scandal,2019-08-26 14:46:02,"Anthony Strittmatter, Michael Lechner","http://dx.doi.org/10.1016/j.jeem.2020.102305, http://arxiv.org/abs/1908.09609v1, http://arxiv.org/pdf/1908.09609v1",econ.GN
31426,gn,"Many large cities are found at locations with certain first nature
advantages. Yet, those exogenous locational features may not be the most potent
forces governing the spatial pattern of cities. In particular, population size,
spacing and industrial composition of cities exhibit simple, persistent and
monotonic relationships. Theories of economic agglomeration suggest that this
regularity is a consequence of interactions between endogenous agglomeration
and dispersion forces. This paper reviews the extant formal models that explain
the spatial pattern together with the size distribution of cities, and
discusses the remaining research questions to be answered in this literature.
To obtain results about explicit spatial patterns of cities, a model needs to
depart from the most popular two-region and systems-of-cities frameworks in
urban and regional economics in which there is no variation in interregional
distance. This is one of the major reasons why only a few formal models have
been proposed in this literature. To draw implications as much as possible from
the extant theories, this review involves extensive discussions on the behavior
of the many-region extension of these models. The mechanisms that link the
spatial pattern of cities and the diversity in city sizes are also discussed in
detail.",Spatial pattern and city size distribution,2019-08-26 17:32:58,Tomoya Mori,"http://arxiv.org/abs/1908.09706v1, http://arxiv.org/pdf/1908.09706v1",econ.GN
31427,gn,"Meeting the defined greenhouse gas (GHG) reduction targets in Germany is only
possible by switching to renewable technologies in the energy sector. A major
share of that reduction needs to be covered by the heat sector, which accounts
for ~35% of the energy-based emissions in Germany. Biomass is the renewable key
player in the heterogeneous heat sector today. Its properties, such as weather
independence, simple storage and flexible utilization, open up a wide field of
applications for biomass. However, in a future heat sector that fulfils GHG
reduction targets, with energy sectors becoming increasingly connected, which
bioenergy technology concepts are competitive options against other renewable
heating systems? In this paper, the cost-optimal allocation of the limited
German biomass potential is investigated under long-term scenarios using a
mathematical optimization approach. The model results show that bioenergy can
be a competitive option in the future. In particular, the use of biomass from
residues can be highly competitive in hybrid combined heat and power (CHP)
pellet combustion plants in the private household sector. However, towards
2050, wood-based biomass use in high-temperature industry applications is found
to be the most cost-efficient way to reduce heat-based emissions by 95% in
2050.",Future competitive bioenergy technologies in the German heat sector: Findings from an economic optimization approach,2019-08-27 10:44:45,"Matthias Jordan, Volker Lenz, Markus Millinger, Katja Oehmichen, Daniela Thrän","http://dx.doi.org/10.1016/j.energy.2019.116194, http://arxiv.org/abs/1908.10065v2, http://arxiv.org/pdf/1908.10065v2",econ.GN
31428,gn,"A potential solution to reduce greenhouse gas (GHG) emissions in the
transport sector is to use alternatively fueled vehicles (AFV). Heavy-duty
vehicles (HDV) emit a large share of GHG emissions in the transport sector and
are therefore the subject of growing attention from global regulators. Fuel
cell and green hydrogen technologies are a promising option to decarbonize
HDVs, as their fast refueling and long vehicle ranges are in line with current
logistic operation concepts. Moreover, the application of green hydrogen in
transport could enable more effective integration of renewable energies (RE)
across different energy sectors. This paper explores the interplay between HDV
Hydrogen Refueling Stations (HRS) that produce hydrogen locally and the power
system by combining an infrastructure location planning model and an energy
system optimization model that takes grid expansion options into account. Two
scenarios - one sizing refueling stations in symbiosis with the power system
and one sizing them independently of it - are assessed regarding their impacts
on the total annual energy system costs, regional RE integration and the
levelized cost of hydrogen (LCOH). The impacts are calculated based on
locational marginal pricing for 2050. Depending on the integration scenario, we
find an average LCOH of between 5.66 euro/kg and 6.20 euro/kg, for which nodal
electricity prices are the main determining factor, as well as a strong
difference in LCOH between northern and southern Germany. From a system perspective,
investing in HDV-HRS in symbiosis with the power system rather than
independently promises cost savings of around one billion euros per annum. We
therefore conclude that the co-optimization of multiple energy sectors is
important for investment planning and has the potential to exploit synergies.",Interaction of a Hydrogen Refueling Station Network for Heavy-Duty Vehicles and the Power System in Germany for 2050,2019-08-27 13:26:01,"Philipp Kluschke, Fabian Neumann","http://dx.doi.org/10.1016/j.trd.2020.102358, http://arxiv.org/abs/1908.10119v1, http://arxiv.org/pdf/1908.10119v1",econ.GN
31429,gn,"We show that competitive equilibria in a range of models related to
production networks can be recovered as solutions to dynamic programs. Although
these programs fail to be contractive, we prove that they are tractable. As an
illustration, we treat Coase's theory of the firm, equilibria in production
chains with transaction costs, and equilibria in production networks with
multiple partners. We then show how the same techniques extend to other
equilibrium and decision problems, such as the distribution of management
layers within firms and the spatial distribution of cities.",Coase Meets Bellman: Dynamic Programming for Production Networks,2019-08-28 08:39:40,"Tomoo Kikuchi, Kazuo Nishimura, John Stachurski, Junnan Zhang","http://arxiv.org/abs/1908.10557v3, http://arxiv.org/pdf/1908.10557v3",econ.GN
31430,gn,"A number of macroeconomic theories, very popular in the 1980s, seem to have
completely disappeared and been replaced by the dynamic stochastic general
equilibrium (DSGE) approach. We will argue that this replacement is due to a
tacit agreement on a number of assumptions, previously seen as mutually
exclusive, and not due to a settlement by 'nature'. As opposed to econometrics
and microeconomics and despite massive progress in the access to data and the
use of statistical software, macroeconomic theory appears not to be a
cumulative science so far. Observational equivalence of different models and
the problem of identification of parameters of the models persist as will be
highlighted by examining two examples: one in growth theory and a second in
testing inflation persistence.",Publish and Perish: Creative Destruction and Macroeconomic Theory,2019-08-19 15:19:11,"Jean-Bernard Chatelain, Kirsten Ralf","http://dx.doi.org/10.19272/201806102004, http://arxiv.org/abs/1908.10680v1, http://arxiv.org/pdf/1908.10680v1",econ.GN
31431,gn,"In this paper, we extend the model of Gao and Su (2016) and consider an
omnichannel strategy in which inventory can be replenished when a retailer
sells only in physical stores. With ""buy-online-and-pick-up-in-store"" (BOPS)
having been introduced, consumers can choose to buy directly online, buy from a
retailer using BOPS, or go directly to a store to make purchases without using
BOPS. The retailer is able to select the inventory level to maximize the
probability of inventory availability at the store. Furthermore, the retailer
can incur an additional cost to reduce the BOPS ordering lead time, which
results in a lowered hassle cost for consumers who use BOPS. In conclusion, we
find that there are two types of equilibrium: one in which all consumers go
directly to the store without using BOPS and one in which all consumers use
BOPS.",Buy-Online-and-Pick-up-in-Store in Omnichannel Retailing,2019-09-02 21:25:49,Yasuyuki Kusuda,"http://arxiv.org/abs/1909.00822v2, http://arxiv.org/pdf/1909.00822v2",econ.GN
31432,gn,"This paper aims to investigate the factors that can mitigate carbon-dioxide
(CO2) intensity and further assess CMRBS in China based on a household scale
via decomposition analysis. Here we show that: Three types of housing economic
indicators and the final emission factor significantly contributed to the
decrease in CO2 intensity in the residential building sector. In addition, the
CMRBS from 2001-2016 was 1816.99 MtCO2, and the average mitigation intensity
during this period was 266.12 kgCO2/household/year. Furthermore, the
energy-conservation and emission-mitigation strategy caused CMRBS to
effectively increase and is the key to promoting a more significant emission
mitigation in the future. Overall, this paper fills the CMRBS assessment gap
in China, and the proposed assessment model can be regarded as a reference for
other countries and cities for measuring the retrospective CO2 mitigation
effect in residential buildings.",CO2 mitigation model for China's residential building sector,2019-09-03 18:23:43,"Minda Ma, Weiguang Cai","http://arxiv.org/abs/1909.01249v1, http://arxiv.org/pdf/1909.01249v1",econ.GN
31433,gn,"There is an emerging consensus in the literature that locally embedded
capabilities and industrial know-how are key determinants of growth and
diversification processes. In order to model these dynamics as a branching
process, whereby industries grow as a function of the availability of related
or relevant skills, industry networks are typically employed. These networks,
sometimes referred to as industry spaces, describe the complex structure of the
capability or skill overlap between industry pairs, measured here via
inter-industry labour flows. Existing models typically deploy a local or
'nearest neighbour' approach to capture the size of the labour pool available
to an industry in related sectors. This approach, however, ignores higher order
interactions in the network, and the presence of industry clusters or groups of
industries which exhibit high internal skill overlap. We argue that these
clusters represent skill basins in which workers circulate and diffuse
knowledge, and delineate the size of the skilled labour force available to an
industry. By applying a multi-scale community detection algorithm to this
network of flows, we identify industry clusters on a range of scales, from many
small clusters to few large groupings. We construct a new variable, cluster
employment, which captures the workforce available to an industry within its
own cluster. Using UK data we show that this variable is predictive of
industry-city employment growth and, exploiting the multi-scale nature of the
industrial clusters detected, propose a methodology to uncover the optimal
scale at which labour pooling operates.",Modular structure in labour networks reveals skill basins,2019-09-08 06:14:56,"Neave O'Clery, Stephen Kinsella","http://arxiv.org/abs/1909.03379v3, http://arxiv.org/pdf/1909.03379v3",econ.GN
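The record above applies multi-scale community detection to an inter-industry labour-flow network to delineate skill basins. As a rough stand-in for that procedure, the sketch below sweeps the resolution parameter of Louvain community detection in networkx (version 2.8 or later assumed) on a toy weighted flow graph; the industries, flow weights and resolution values are illustrative assumptions.

```python
# Sketch: detecting industry clusters at multiple scales from an inter-industry
# labour-flow network by varying the Louvain resolution parameter.
import networkx as nx

G = nx.Graph()
flows = [("retail", "wholesale", 50), ("retail", "hospitality", 30),
         ("wholesale", "logistics", 40), ("software", "finance", 35),
         ("software", "consulting", 45), ("finance", "consulting", 25)]
G.add_weighted_edges_from(flows)

for resolution in (0.5, 1.0, 2.0):   # low resolution -> few large clusters, and vice versa
    clusters = nx.community.louvain_communities(G, weight="weight",
                                                resolution=resolution, seed=0)
    print(resolution, [sorted(c) for c in clusters])
```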
31434,gn,"Sustainable development is a worldwide recognized social and political goal,
discussed in both academic and political discourse and with much research on
the topic related to sustainable development in higher education. Since mental
models are formed more effectively at school age, we propose a new way of
thinking that will help achieve this goal. This paper was written in the
context of Russia, where the topic of sustainable development in education is
poorly developed. The authors used the classical methodology of case
analysis. The analysis and interpretation of the results were conducted within
the framework of institutional theory. Presented is the case of Ural Federal
University, which has been working for several years on the creation of a
device for the purification of industrial sewage water within the framework of
a student initiative group. Schoolchildren recently joined the program, and such
projects have been called university-to-school projects. Successful solutions
of inventive tasks contribute to the formation of mental models. This case has
been analyzed in terms of institutionalism, and the authors argue for the
primacy of mental institutions over normative ones during sustainable society
construction. This case study is the first to analyze a partnership between a
Federal University and local schools regarding sustainable education and
proposes a new way of thinking.",Education Projects for Sustainable Development: Evidence from Ural Federal University,2019-09-08 14:04:39,"Marina Volkova, Jol Stoffers, Dmitry Kochetkov","http://dx.doi.org/10.13140/RG.2.2.28890.08641, http://arxiv.org/abs/1909.03429v2, http://arxiv.org/pdf/1909.03429v2",econ.GN
31435,gn,"We build a novel stochastic dynamic regional integrated assessment model
(IAM) of the climate and economic system including a number of important
climate science elements that are missing in most IAMs. These elements are
spatial heat transport from the Equator to the Poles, sea level rise,
permafrost thaw and tipping points. We study optimal policies under cooperation
and noncooperation between two regions (the North and the Tropic-South) in the
face of risks and recursive utility. We introduce a new general computational
algorithm to find feedback Nash equilibrium. Our results suggest that when the
elements of climate science are ignored, important policy variables such as the
optimal regional carbon tax and adaptation could be seriously biased. We also
find the regional carbon tax is significantly smaller in the feedback Nash
equilibrium than in the social planner's problem in each region, and the North
has higher carbon taxes than the Tropic-South.",Climate Policy under Spatial Heat Transport: Cooperative and Noncooperative Regional Outcomes,2019-09-09 20:44:37,"Yongyang Cai, William Brock, Anastasios Xepapadeas, Kenneth Judd","http://arxiv.org/abs/1909.04009v1, http://arxiv.org/pdf/1909.04009v1",econ.GN
31436,gn,"We examine the impact of a new tool for suppressing the expression of
dissent---a daily tax on social media use. Using a synthetic control framework,
we estimate that the tax reduced the number of georeferenced Twitter users in
Uganda by 13 percent. The estimated treatment effects are larger for poorer and
less frequent users. Despite the overall decline in Twitter use, tweets
referencing collective action increased by 31 percent and observed protests
increased by 47 percent. These results suggest that taxing social media use may
not be an effective tool for reducing political dissent.",Taxing dissent: The impact of a social media tax in Uganda,2019-09-09 22:03:41,"Levi Boxell, Zachary Steinert-Threlkeld","http://arxiv.org/abs/1909.04107v1, http://arxiv.org/pdf/1909.04107v1",econ.GN
31437,gn,"Although definitions of technology exist to explain the patterns of
technological innovations, there is no general definition that explains the role
of technology for humans and other animal species in the environment. The goal of
this study is to suggest a new concept of technology with a systemic-purposeful
perspective for technology analysis. Technology here is a complex system of
artifacts, made and/or used by living systems, that is composed of more than one
entity or sub-system and a relationship that holds between each entity and at
least one other entity in the system, selected considering practical, technical
and/or economic characteristics to satisfy needs, achieve goals and/or solve
problems of users for purposes of adaptation and/or survival in the environment.
Technology T changes current modes of cognition and action to enable makers
and/or users to take advantage of important opportunities or to cope with
consequential environmental threats. Technology, as a complex system, is formed
by different elements given by incremental and radical innovations.
Technological change generates the progress from a system T1 to T2, T3, etc.
driven by changes of technological trajectories and technological paradigms.
Several examples illustrate these concepts here, and a simple model with a
preliminary empirical analysis shows how to operationalize the suggested
definition of technology. Overall, then, the role of adaptation (i.e.
reproductive advantage) can be explained as a main driver of technology use for
adopters to take advantage of important opportunities or to cope with
environmental threats. This study begins the process of clarifying and
generalizing, as far as possible, the concept of technology with a new
perspective that can lay a foundation for the development of more
sophisticated concepts and theories to explain technological and economic
change in the environment.","A new concept of technology with systemic-purposeful perspective: theory, examples and empirical application",2019-09-11 11:18:37,Mario Coccia,"http://arxiv.org/abs/1909.05689v1, http://arxiv.org/pdf/1909.05689v1",econ.GN
31438,gn,"We develop a general model for finding the optimal penal strategy based on
the behavioral traits of the offenders. We focus on how the discount rate
(level of time discounting) affects criminal propensity on the individual
level, and how the aggregation of these effects influences criminal activities
on the population level. The effects are aggregated based on the distribution
of discount rate among the population. We study this distribution empirically
through a survey with 207 participants, and we show that it follows a
zero-inflated exponential distribution. We quantify the effectiveness of the
penal strategy as its net utility for the population, and show how this
quantity can be maximized. When we apply the maximization procedure to the
offense of impaired driving (DWI), we discover that the effectiveness of DWI
deterrence depends critically on the amount of the fine and on prison conditions.",The Optimal Deterrence of Crime: A Focus on the Time Preference of DWI Offenders,2019-09-14 05:31:12,"Yuqing Wang, Yan Ru Pei","http://arxiv.org/abs/1909.06509v2, http://arxiv.org/pdf/1909.06509v2",econ.GN
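The record above reports that the surveyed discount rates follow a zero-inflated exponential distribution. A minimal sketch of fitting that distribution is shown below, using the fact that the maximum-likelihood estimates are the share of exact zeros and the reciprocal of the mean of the positive responses; the simulated sample and parameter values are assumptions, not the survey data.

```python
# Sketch: fitting a zero-inflated exponential distribution to survey-elicited
# discount rates via its closed-form maximum-likelihood estimates.
import numpy as np

rng = np.random.default_rng(4)
n = 207
true_p0, true_rate = 0.3, 5.0
discount_rates = np.where(rng.uniform(size=n) < true_p0, 0.0,
                          rng.exponential(scale=1.0 / true_rate, size=n))

p0_hat = np.mean(discount_rates == 0)                       # share of exact zeros
rate_hat = 1.0 / discount_rates[discount_rates > 0].mean()  # 1 / mean of positives
print(f"zero-inflation prob: {p0_hat:.2f}, exponential rate: {rate_hat:.2f}")
```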
31439,gn,"Structural socioeconomic analysis of Brazil. All basic information about this
South American country is gathered in a comprehensive outlook that includes the
challenges Brazil faces, as well as their causes and possible economic
solutions.",Informe-pais Brasil,2019-09-18 19:43:45,Juan Gonzalez-Blanco,"http://arxiv.org/abs/1909.08564v1, http://arxiv.org/pdf/1909.08564v1",econ.GN
31440,gn,"Food insecurity is a term used to measure hunger and food deprivation of a
large population. As per the 2015 statistics provided by Feeding America - one
of the largest domestic hunger-relief organizations in the United States, 42.2
million Americans live in food insecure households, including 29.1 million
adults and 13.1 million children. This constitutes about 13.1% of households
that are food insecure. Food Banks have been developed to improve food security
for the needy. We have developed a novel food distribution policy using
suitable welfare and poverty indices and functions. In this work, we propose an
equitable and fair distribution of donated foods as per the demands and
requirements of the people, thus ensuring minimum wastage of food (perishable
and non-perishable) with a focus on nutrition. We present results and
analysis based on the application of the proposed policy using the information
of a local food bank as a case study. The results show that the new policy
performs better than the current methods in terms of the population covered
and the reduction of food wastage, while obtaining suitable levels of nutrition.",New Policy Design for Food Accessibility to the People in Need,2019-09-18 21:05:52,"Rahul Srinivas Sucharitha, Seokcheon Lee","http://arxiv.org/abs/1909.08648v1, http://arxiv.org/pdf/1909.08648v1",econ.GN
31441,gn,"The increasing difficulties in financing the welfare state and in particular
public retirement pensions have been one of the outcomes both of the decrease
of fertility and birth rates combined with the increase of life expectancy. The
dynamics of retirement pensions are usually studied in Economics using
overlapping generation models. These models are based on simplifying
assumptions like the use of a representative agent to ease the problem of
tractability. Alternatively, we propose to use agent-based modelling (ABM),
relaxing the need for those assumptions and enabling the use of interacting and
heterogeneous agents, assigning special importance to the study of
inter-generational relations. We treat pension dynamics from both economic and
political perspectives. The model we build, following the ODD protocol, will
try to understand the dynamics of choice of public versus private retirement
pensions resulting from the conflicting preferences of different agents but
also from the cooperation between them. The aggregation of these individual
preferences is done by voting. We combine a microsimulation approach following
the evolution of synthetic populations over time, with the ABM approach
studying the interactions between the different agent types. Our objective is
to depict the conditions for the survival of the public pensions system
emerging from the relation between egoistic and altruistic individual and
collective behaviours.",Generational political dynamics of retirement pensions systems: An agent based model,2019-09-13 14:58:36,"Sérgio Bacelar, Luis Antunes","http://arxiv.org/abs/1909.08706v1, http://arxiv.org/pdf/1909.08706v1",econ.GN
31442,gn,"The Cooperation Council for the Arab States of the Gulf (GCC) is generally
regarded as a success story for economic integration in Arab countries. The
idea of regional integration gained ground by signing the GCC Charter. It
envisioned a closer economic relationship between member states. Although
economic integration among GCC member states is an ambitious step in the right
direction, there are gaps and challenges ahead. The best way to address the
gaps and challenges that exist in formulating integration processes in the GCC
is to start with a clear set of rules and put the necessary mechanisms in
place. Integration attempts must also exhibit a high level of commitment in
order to deflect dynamics of disintegration that have all too often frustrated
meaningful integration in Arab countries. If the GCC can address these issues,
it could become an economic powerhouse within Arab countries and even Asia.",Legal Architecture and Design for Gulf Cooperation Council Economic Integration,2019-09-19 07:24:56,Bashar H. Malkawi,"http://arxiv.org/abs/1909.08798v1, http://arxiv.org/pdf/1909.08798v1",econ.GN
31450,gn,"In this paper, we revisit the equity premium puzzle reported in 1985 by Mehra
and Prescott. We show that the large equity premium that they report can be
explained by choosing a more appropriate distribution for the return data. We
demonstrate that the high risk-aversion value observed by Mehra and Prescott
may be attributable to the problem of fitting a proper distribution to the
historical returns and partly caused by poorly fitting the tail of the return
distribution. We describe a new distribution that better fits the return
distribution and when used to describe historical returns can explain the large
equity risk premium and thereby explains the puzzle.",Equity Premium Puzzle or Faulty Economic Modelling?,2019-09-28 06:57:09,"Abootaleb Shirvani, Stoyan V. Stoyanov, Frank J. Fabozzi, Svetlozar T. Rachev","http://arxiv.org/abs/1909.13019v2, http://arxiv.org/pdf/1909.13019v2",econ.GN
31443,gn,"We seek to investigate the effect of oil price on UAE goods trade deficit
with the U.S. The current increase in the price of oil and the absence of
significant studies in the UAE economy are the main motives behind the current
study. Our paper focuses on a small portion of UAE trade, which is 11% of the
UAE foreign trade; however, it is a significant part, since the U.S. is a major
trade partner of the UAE. The current paper concludes that the oil price has a
significant positive influence on real imports. At the same time, oil price
does not have a significant effect on real exports. As a result, any increase
in the price of oil increases the goods trade deficit of the UAE economy. The
policy implication of the current paper is that the revenue of oil sales is not
used to encourage UAE real exports.",The Effect of Oil Price on United Arab Emirates Goods Trade Deficit with the United States,2019-09-19 18:55:40,"Osama D. Sweidan, Bashar H. Malkawi","http://arxiv.org/abs/1909.09057v1, http://arxiv.org/pdf/1909.09057v1",econ.GN
31444,gn,"Rules of origin (ROO) are pivotal element of the Greater Arab Free Trade Area
(GAFTA). ROO are basically established to ensure that only eligible products
receive preferential tariff treatment. Taking into consideration the profound
implications of ROO for enhancing trade flows and facilitating the success of
regional integration, this article sheds light on the way that ROO in GAFTA are
designed and implemented. Moreover, the article examines the extent to which
ROO still represent an obstacle to the full implementation of GAFTA. In
addition, the article provides ways to overcome the most important shortcomings
of the ROO text in the agreement, ultimately offering possible solutions to
those issues.",The Design and Operation of Rules of Origin in Greater Arab Free Trade Area: Challenges of Implementation and Reform,2019-09-19 18:59:46,"Bashar H. Malkawi, Mohammad I. El-Shafie","http://arxiv.org/abs/1909.09061v1, http://arxiv.org/pdf/1909.09061v1",econ.GN
31445,gn,"Why are we good? Why are we bad? Questions regarding the evolution of
morality have spurred an astoundingly large interdisciplinary literature. Some
significant subset of this body of work addresses questions regarding our moral
psychology: how did humans evolve the psychological properties which underpin
our systems of ethics and morality? Here I do three things. First, I discuss
some methodological issues, and defend particularly effective methods for
addressing many research questions in this area. Second, I give an in-depth
example, describing how an explanation can be given for the evolution of
guilt---one of the core moral emotions---using the methods advocated here.
Last, I lay out which sorts of strategic scenarios generally are the ones that
our moral psychology evolved to `solve', and thus which models are the most
useful in further exploring this evolution.","Methods, Models, and the Evolution of Moral Psychology",2019-09-10 01:01:12,Cailin O'Connor,"http://arxiv.org/abs/1909.09198v1, http://arxiv.org/pdf/1909.09198v1",econ.GN
31446,gn,"We model sectoral production by cascading binary compounding processes. The
sequence of processes is discovered in a self-similar hierarchical structure
stylized in the economy-wide networks of production. Nested substitution
elasticities and Hicks-neutral productivity growth are measured such that the
general equilibrium feedbacks between all sectoral unit cost functions
replicate the transformation of networks observed as a set of two temporally
distant input-output coefficient matrices. We examine this system of unit cost
functions to determine how idiosyncratic sectoral productivity shocks propagate
into aggregate macroeconomic fluctuations in light of potential network
transformation. Additionally, we study how sectoral productivity increments
propagate into the dynamic general equilibrium, thereby allowing network
transformation and ultimately producing social benefits.",Productivity propagation with networks transformation,2019-09-20 09:32:45,"Satoshi Nakano, Kazuhiko Nishimura","http://dx.doi.org/10.1016/j.jmacro.2020.103216, http://arxiv.org/abs/1909.09641v3, http://arxiv.org/pdf/1909.09641v3",econ.GN
31447,gn,"We investigate state-dependent effects of fiscal multipliers and allow for
endogenous sample splitting to determine whether the US economy is in a slack
state. When the endogenized slack state is estimated as the period of the
unemployment rate higher than about 12 percent, the estimated cumulative
multipliers are significantly larger during slack periods than non-slack
periods and are above unity. We also examine the possibility of time-varying
regimes of slackness and find that our empirical results are robust under a
more flexible framework. Our estimation results point out the importance of the
heterogeneous effects of fiscal policy and shed light on the prospect of fiscal
policy in response to economic shocks from the current COVID-19 pandemic.",Desperate times call for desperate measures: government spending multipliers in hard times,2019-09-21 16:42:17,"Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin","http://dx.doi.org/10.1111/ecin.12919, http://arxiv.org/abs/1909.09824v3, http://arxiv.org/pdf/1909.09824v3",econ.GN
31448,gn,"We develop a model of electoral accountability with mainstream and
alternative media. In addition to regular high- and low-competence types, the
incumbent may be an aspiring autocrat who controls the mainstream media and
will subvert democracy if retained in office. A truthful alternative media can
help voters identify and remove these subversive types while re-electing
competent leaders. A malicious alternative media, in contrast, spreads false
accusations about the incumbent and demotivates policy effort. If the
alternative media is very likely to be malicious and hence unreliable, voters
ignore it and use only the mainstream media to hold regular incumbents
accountable, leaving aspiring autocrats to win re-election via propaganda that
portrays them as effective policymakers. When the alternative media's
reliability is intermediate, voters heed its warnings about subversive
incumbents, but the prospect of being falsely accused demotivates effort by
regular incumbents and electoral accountability breaks down.","Propaganda, Alternative Media, and Accountability in Fragile Democracies",2019-09-26 04:02:37,"Anqi Li, Davin Raiha, Kenneth W. Shotts","http://arxiv.org/abs/1909.11836v6, http://arxiv.org/pdf/1909.11836v6",econ.GN
31449,gn,"Today's service companies operate in a technology-oriented and
knowledge-intensive environment while recruiting and training individuals from
an increasingly diverse population. One of the resulting challenges is ensuring
strategic alignment between their two key resources - technology and workforce
- through the resource planning and allocation processes. The traditional
hierarchical decision approach to resource planning and allocation considers
only technology planning as a strategic-level decision, with workforce
recruiting and training planning as a subsequent tactical-level decision.
However, two other decision approaches - joint and integrated - elevate
workforce planning to the same strategic level as technology planning. Thus we
investigate the impact of strategically aligning technology and workforce
decisions through the comparison of joint and integrated models to each other
and to a baseline hierarchical model in terms of the total cost. Numerical
experiments are conducted to characterize key features of solutions provided by
these approaches under conditions typically found in this type of service
company. Our results show that the integrated model is the lowest cost across
all conditions. This is because the integrated approach maintains a small but
skilled workforce that can operate new and more advanced technology with higher
capacity. However, the cost performance of the joint model is very close to the
integrated model under many conditions and is easier to implement
computationally and managerially, making it a good choice in many environments.
Managerial insights derived from this study can serve as a valuable guide for
choosing the proper decision approach for technology-oriented and
knowledge-intensive service companies.",Decision Models for Workforce and Technology Planning in Services,2019-09-27 20:58:42,"Gang Li, Joy M. Field, Hongxun Jiang, Tian He, Youming Pang","http://dx.doi.org/10.1287/serv.2015.0094, http://arxiv.org/abs/1909.12829v1, http://arxiv.org/pdf/1909.12829v1",econ.GN
31451,gn,"Background Food taxes and subsidies are one intervention to address poor
diets. Price elasticity (PE) matrices are commonly used to model the change in
food purchasing. Usually a PE matrix is generated in one setting then applied
to another setting with differing starting consumption and prices of foods.
This violates econometric assumptions resulting in likely misestimation of
total food consumption. We illustrate rescaling all consumption after applying
a PE matrix using a total food expenditure elasticity (TFEe, the expenditure
elasticity for all food combined given the policy-induced change in the total
price of food). We use case studies of NZ$2 per 100g saturated fat (SAFA) tax,
NZ$0.4 per 100g sugar tax, and a 20% fruit and vegetable (F&V) subsidy. Methods:
We estimated changes in food purchasing using a NZ PE matrix applied
conventionally, then with TFEe adjustment. Impacts were quantified for total
food expenditure and health adjusted life years (HALYs) for the total NZ
population alive in 2011 over the rest of their lifetime using a multistate
lifetable model. Results: Two NZ studies gave TFEes of 0.68 and 0.83, with
international estimates ranging from 0.46 to 0.90. Without TFEe adjustment,
total food expenditure decreased with the tax policies and increased with the
F&V subsidy, implausible directions of shift given economic theory. After TFEe
adjustment, HALY gains reduced by a third to a half for the two taxes and
reversed from an apparent health loss to a health gain for the F&V subsidy.
With TFEe adjustment, HALY gains (in 1000s) were 1,805 (95% uncertainty
interval 1,337 to 2,340) for the SAFA tax, 1,671 (1,220 to 2,269) for the sugar
tax, and 953 (453 to 1,308) for the F&V subsidy. Conclusions: If PE matrices are
applied in settings beyond where they were derived, additional scaling is
likely required. We suggest that the TFEe is a useful scalar.",Modelling the health impact of food taxes and subsidies with price elasticities: the case for additional scaling of food consumption using the total food expenditure elasticity,2019-09-29 04:31:56,"Tony Blakely, Nhung Nghiem, Murat Genc, Anja Mizdrak, Linda Cobiac, Cliona Ni Mhurchu, Boyd Swinburn, Peter Scarborough, Christine Cleghorn","http://dx.doi.org/10.1371/journal.pone.0230506, http://arxiv.org/abs/1909.13179v1, http://arxiv.org/pdf/1909.13179v1",econ.GN
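As an aside for readers, here is a minimal Python sketch of the TFEe rescaling step described in the abstract above. It is not taken from the paper: the two-good example data, variable names, and the first-order (linear) approximation are illustrative assumptions. The idea is to apply a price-elasticity matrix to get first-pass quantity changes, then rescale all food quantities by a common factor so that the change in total food expenditure equals the TFEe times the policy-induced change in the overall food price.

```python
import numpy as np

# Hypothetical two-food example: baseline prices, quantities, and a PE matrix.
p0 = np.array([10.0, 5.0])           # baseline prices
q0 = np.array([100.0, 200.0])        # baseline quantities purchased
pe = np.array([[-0.8, 0.1],          # own- and cross-price elasticities
               [0.2, -0.6]])
dp = np.array([0.20, 0.0])           # policy-induced proportional price changes (e.g., 20% tax on good 1)
tfee = 0.75                          # assumed total food expenditure elasticity

# Step 1: conventional application of the PE matrix (first-order approximation).
dq = pe @ dp                          # proportional quantity changes
q1 = q0 * (1.0 + dq)
p1 = p0 * (1.0 + dp)

# Step 2: target change in total food expenditure implied by the TFEe.
food_price_change = (p1 @ q0) / (p0 @ q0) - 1.0   # Laspeyres-style change in the food price index
target_expenditure = (p0 @ q0) * (1.0 + tfee * food_price_change)

# Step 3: rescale all quantities by a common factor so expenditure matches the target.
scale = target_expenditure / (p1 @ q1)
q_adjusted = q1 * scale

print("Unscaled expenditure change: %.1f%%" % (100 * ((p1 @ q1) / (p0 @ q0) - 1)))
print("TFEe-adjusted expenditure change: %.1f%%" % (100 * (target_expenditure / (p0 @ q0) - 1)))
```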
31452,gn,"This paper shows that black and Hispanic borrowers are 39% more likely to
experience a debt collection judgment than white borrowers, even after
controlling for credit scores and other relevant credit attributes. The racial
gap in judgments is more pronounced in areas with a high density of payday
lenders, a high share of income-less households, and low levels of tertiary
education. State-level measures of racial discrimination cannot explain the
judgment gap, nor can neighborhood-level differences in the previous share of
contested judgments or cases with attorney representation. A
back-of-the-envelope calculation suggests that closing the racial wealth gap
could significantly reduce the racial disparity in debt collection judgments.",Racial Disparities in Debt Collection,2019-10-07 04:07:56,"Jessica LaVoice, Domonkos F. Vamossy","http://arxiv.org/abs/1910.02570v2, http://arxiv.org/pdf/1910.02570v2",econ.GN
31453,gn,"This study proposes a method to identify treatment effects without exclusion
restrictions in randomized experiments with noncompliance. Exploiting a
baseline survey commonly available in randomized experiments, I decompose the
intention-to-treat effects conditional on the endogenous treatment status. I
then identify these parameters to understand the effects of the assignment and
treatment. The key assumption is that a baseline variable maintains rank orders
similar to the control outcome. I also reveal that the change-in-changes
strategy may work without repeated outcomes. Finally, I propose a new estimator
that flexibly incorporates covariates and demonstrate its properties using two
experimental studies.",Noncompliance in randomized control trials without exclusion restrictions,2019-10-08 07:14:53,Masayuki Sawada,"http://arxiv.org/abs/1910.03204v5, http://arxiv.org/pdf/1910.03204v5",econ.GN
31454,gn,"Attaining the optimal scale size of production systems is an issue frequently
found in the priority questions on management agendas of various types of
organizations. Determining the most productive scale size (MPSS) allows the
decision makers not only to know the best scale size that their systems can
achieve but also to tell the decision makers how to move the inefficient
systems onto the MPSS region. This paper investigates the MPSS concept for
production systems consisting of multiple subsystems connected in parallel.
First, we propose a relational model where the MPSS of the whole system and the
internal subsystems are measured in a single DEA implementation. Then, it is
proved that the MPSS of the system can be decomposed as the weighted sum of the
MPSS of the individual subsystems. The main result is that the system is
overall MPSS if and only if it is MPSS in each subsystem. MPSS decomposition
allows the decision makers to target the non-MPSS subsystems so that the
necessary improvements can be readily suggested. An application of China's
Five-Year Plans (FYPs) with shared inputs is used to show the applicability of
the proposed model for estimating and decomposing MPSS in parallel network DEA.
Industry and Agriculture sectors are selected as two parallel subsystems in the
FYPs. Several interesting findings emerge. Using the same amount of
resources, the Industry sector achieved a better economic scale than the Agriculture
sector. Furthermore, the last two FYPs, the 11th and 12th, performed best among
all the plans considered.",Estimating and decomposing most productive scale size in parallel DEA networks with shared inputs: A case of China's Five-Year Plans,2019-10-08 17:25:34,"Saeed Assani, Jianlin Jiang, Ahmad Assani, Feng Yang","http://arxiv.org/abs/1910.03421v3, http://arxiv.org/pdf/1910.03421v3",econ.GN
31455,gn,"In this paper, I empirically investigate how the openness of political
institutions to diverse representation can impact conflict-related violence. By
exploiting plausibly exogenous variations in the number of councillors in
Colombian municipalities, I develop two sets of results. First, regression
discontinuity estimates show that larger municipal councils have a considerably
greater number of political parties with at least one elected representative. I
interpret this result as evidence that larger municipal councils are more open
to diverse political participation. The estimates also reveal that
non-traditional parties are the main beneficiaries of this greater political
openness. Second, regression discontinuity estimates show that political
openness substantially decreases conflict-related violence, namely the killing
of civilian non-combatants. By exploiting plausibly exogenous variations in
local election results, I show that the lower level of political violence stems
from greater participation by parties with close links to armed groups. Using
data about the types of violence employed by these groups, and representation
at higher levels of government, I argue that armed violence has decreased not
because of power-sharing arrangements involving armed groups linked to the
parties with more political representation, but rather because armed groups
with less political power and visibility are deterred from initiating certain
types of violence.",Political Openness and Armed Conflict: Evidence from Local Councils in Colombia,2019-10-09 02:07:20,Hector Galindo-Silva,"http://dx.doi.org/10.1016/j.ejpoleco.2020.101984, http://arxiv.org/abs/1910.03712v2, http://arxiv.org/pdf/1910.03712v2",econ.GN
34362,th,"An uninformed sender publicly commits to an informative experiment about an
uncertain state, privately observes its outcome, and sends a cheap-talk message
to a receiver. We provide an algorithm valid for arbitrary state-dependent
preferences that will determine the sender's optimal experiment, and give
sufficient conditions for information design to be valuable or not under
different payoff structures. These conditions depend more on marginal
incentives -- how payoffs vary with the state -- than on the alignment of
sender's and receiver's rankings over actions within a state.",Information Design in Cheap Talk,2022-07-11 18:08:36,"Qianjun Lyu, Wing Suen","http://arxiv.org/abs/2207.04929v1, http://arxiv.org/pdf/2207.04929v1",econ.TH
31456,gn,"Direct cash transfer programs have shown success as poverty interventions in
both the developing and developed world, yet little research exists examining
the society-wide outcomes of an unconditional cash transfer program disbursed
without means-testing. This paper attempts to determine the impact of direct
cash transfers on educational outcomes in a developed society by investigating
the impacts of the Alaska Permanent Fund Dividend, which was launched in 1982
and continues to be disbursed on an annual basis to every Alaskan. A synthetic
control model is deployed to examine the path of educational attainment among
Alaskans between 1977 and 1991 in order to determine if high school status
completion rates after the launch of the dividend diverge from the synthetic in
a manner suggestive of a treatment effect.",The Impacts of the Alaska Permanent Fund Dividend on High School Status Completion Rates,2019-10-09 19:01:41,Mattathias Lerner,"http://arxiv.org/abs/1910.04083v1, http://arxiv.org/pdf/1910.04083v1",econ.GN
31457,gn,"The link between taxation and justice is a classic debate issue, while also
being very relevant at a time of changing environmental factors and conditions
of the social and economic system. Technically speaking, there are three
types of taxes: progressive, proportional and regressive. Although justice,
like freedom, is an element and manifestation of the imagined reality in
citizens' minds, the state must comply with it. In particular, the tax system
has to adapt to the mass imagined reality in order for it to appear fairer and
more acceptable.",Taxation and Social Justice,2019-10-08 21:08:54,Boyan Durankev,"http://arxiv.org/abs/1910.04155v1, http://arxiv.org/pdf/1910.04155v1",econ.GN
31458,gn,"Challenge Theory (CT), a new approach to decision under risk departs
significantly from expected utility, and is based on firmly psychological,
rather than economic, assumptions. The paper demonstrates that a purely
cognitive-psychological paradigm for decision under risk can yield excellent
predictions, comparable to those attained by more complex economic or
psychological models that remain attached to conventional economic constructs
and assumptions. The study presents a new model for predicting the popularity
of choices made in binary risk problems. A CT-based regression model is tested
on data gathered from 126 respondents who indicated their preferences with
respect to 44 choice problems. Results support CT's central hypothesis,
showing a strong association between the Challenge Index (CI) attributable to every
binary risk problem and the observed popularity of the bold prospect in that
problem (with r=-0.92 and r=-0.93 for gains and for losses, respectively). The
novelty of the CT perspective as a new paradigm is illuminated by its simple,
single-index (CI) representation of psychological effects proposed by Prospect
Theory for describing choice behavior (certainty effect, reflection effect,
overweighting small probabilities and loss aversion).",Risk as Challenge: A Dual System Stochastic Model for Binary Choice Behavior,2019-10-10 14:31:08,"Samuel Shye, Ido Haber","http://arxiv.org/abs/1910.04487v1, http://arxiv.org/pdf/1910.04487v1",econ.GN
31459,gn,"How much should you receive in a week to be indifferent to \$ 100 in six
months? Note that the indifference requires a rule to ensure the similarity
between early and late payments. Assuming that rational individuals have low
accuracy, then the following rule is valid: if the amounts to be paid are much
less than the personal wealth, then the $q$-exponential discounting guarantees
indifference in several periods. Thus, the discounting can be interpolated
between hyperbolic and exponential functions due to the low accuracy to
distinguish time averages when the payments have low impact on personal wealth.
Therefore, there are physical conditions that allow the hyperbolic discounting
regardless psycho-behavioral assumption.",Rational hyperbolic discounting,2019-10-11 17:18:57,José Cláudio do Nascimento,"http://arxiv.org/abs/1910.05209v2, http://arxiv.org/pdf/1910.05209v2",econ.GN
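For background on the $q$-exponential discounting mentioned above, one common parameterization (stated here as general context, not quoted from the paper; sign conventions for $q$ vary across the literature) is:

```latex
% q-exponential discount factor with rate k > 0:
D_q(t) = \bigl[\, 1 + (q-1)\, k\, t \,\bigr]^{-\frac{1}{q-1}}
% Limiting cases:
%   q \to 1:  D(t) = e^{-k t}           (exponential discounting)
%   q = 2:    D(t) = \frac{1}{1 + k t}  (hyperbolic discounting)
```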
31460,gn,"Universal Basic Income (UBI) has recently been gaining traction. Arguments
exist on both sides in favor of and against it. Like any other financial tool,
UBI can be useful if used with discretion. This paper seeks to clarify how UBI
affects the economy, including how it can be beneficial. The key point is to
regulate the rate of UBI based on the inflation rate. This should be done by an
independent institution from the executive branch of the government. If
implemented correctly, UBI can add a powerful tool to the Federal Reserve
toolkit. UBI can be used to reintroduce inflation in countries that suffer from a
long-lasting deflationary environment. UBI has the potential to decrease
wealth disparity, decrease the national debt, increase productivity, and
increase the comparative advantage of the economy. UBI can also substitute for
current welfare systems because of its transparency and efficiency. This
article focuses more on the United States, but similar ideas can be implemented
in other developed nations.",Universal Basic Income: The Last Bullet in the Darkness,2019-10-13 02:06:44,Mohammad Rasoolinejad,"http://arxiv.org/abs/1910.05658v2, http://arxiv.org/pdf/1910.05658v2",econ.GN
31461,gn,"This paper has the following objectives: to understand the concepts of
Environmental Accounting in Brazil; Make criticisms and propositions anchored
in the reality or demand of environmental accounting for Amazonia Paraense. The
methodological strategy was a critical analysis of Ferreira's books (2007);
Ribeiro (2010) and Tinoco and Kraemer (2011) using their correlation with the
scientific production of authors discussing the Paraense Amazon, besides our
experience as researchers of this territory. As a result, we created three
sections: one for understanding the current constructs of environmental
accounting, one for criticism and one for propositions.","Precisamos de uma Contabilidade Ambiental para as ""Amazônias"" Paraense?",2019-10-15 05:59:27,Ailton Castro Pinheiro,"http://arxiv.org/abs/1910.06499v1, http://arxiv.org/pdf/1910.06499v1",econ.GN
31462,gn,"Many countries embroiled in non-religious civil conflicts have experienced a
dramatic increase in religious competition in recent years. This study examines
whether increasing competition between religions affects violence in
non-religious or secular conflicts. The study focuses on Colombia, a deeply
Catholic country that has suffered one of the world's longest-running internal
conflicts and, in the last few decades, has witnessed an intense increase in
religious competition between the Catholic Church and new non-Catholic
churches. The estimation of a dynamic treatment effect model shows that
establishing the first non-Catholic church in a municipality substantially
increases the probability of conflict-related violence. The effect is larger
for violence by guerrilla groups, and is concentrated on municipalities where
the establishment of the first non-Catholic church leads to more intense
religious competition. Further analysis suggests that the increase in guerrilla
violence is associated with an expectation among guerrilla groups that their
membership will decline as a consequence of more intense competition with
religious groups for followers.",Fighting for Not-So-Religious Souls: The Role of Religious Competition in Secular Conflicts,2019-10-17 07:36:38,"Hector Galindo-Silva, Guy Tchuente","http://dx.doi.org/10.1016/j.jebo.2021.08.027, http://arxiv.org/abs/1910.07707v3, http://arxiv.org/pdf/1910.07707v3",econ.GN
31464,gn,"Car-sharing platforms provide access to a shared rather than a private fleet
of automobiles distributed in the region. Participation in such services
induces changes in mobility behaviour as well as vehicle ownership patterns
that could have positive environmental impacts. This study contributes to the
understanding of the total mobility-related greenhouse gas emissions reduction
related to business-to-consumer car-sharing participation. A comprehensive
model which takes into account distances travelled annually by the major urban
transport modes as well as their life-cycle emissions factors is proposed, and
the before-and-after analysis is conducted for an average car-sharing member in
three geographical cases (Netherlands, San Francisco, Calgary). In addition to
non-operational emissions for all the transport modes involved, this approach
considers the rebound effects associated with the modal shift effect
(substituting driving distances with alternative modes) and the lifetime shift
effect for the shared automobiles, phenomena which have been barely analysed in
the previous studies. As a result, in contrast to the previous impact
assessments in the field, a significantly more modest reduction of the annual
total mobility-related life-cycle greenhouse gas emissions caused by
car-sharing participation has been estimated, 3-18% for three geographical case
studies investigated (versus up to 67% estimated previously). This suggests the
significance of the newly considered effects and provides practical
implications for improved assessments in the future.",Does car sharing reduce greenhouse gas emissions? Life cycle assessment of the modal shift and lifetime shift rebound effects,2019-10-25 11:34:25,"Levon Amatuni, Juudit Ottelin, Bernhard Steubing, José Mogollon","http://dx.doi.org/10.1016/j.jclepro.2020.121869, http://arxiv.org/abs/1910.11570v1, http://arxiv.org/pdf/1910.11570v1",econ.GN
31465,gn,"This paper investigates the relationships between economic growth, investment
in human capital and income equality in Turkey. The conclusion drawn based on
the data from the OECD and the World Bank suggests that economic growth can
improve income equality depending on the expenditures undertaken by the
government. As opposed to the standard view that economic growth and income
inequality are positively related, the findings of this paper suggest that
other factors such as education and healthcare spending are also driving
factors of income inequality in Turkey. The demonstrated positive impact of
investment in education and health care on income equality could guide the
investment decisions of policymakers aiming to achieve a fairer income
distribution and economic growth.",Inequality in Turkey: Looking Beyond Growth,2019-10-25 18:04:44,"Bayram Cakir, Ipek Ergul","http://arxiv.org/abs/1910.11780v1, http://arxiv.org/pdf/1910.11780v1",econ.GN
31466,gn,"This paper shows that the power law property of the firm size distribution is
a robust prediction of the standard entry-exit model of firm dynamics. Only one
variation is required: the usual restriction that firm productivity lies below
an ad hoc upper bound is dropped. We prove that, after this small modification,
the Pareto tail of the distribution is predicted under a wide and empirically
plausible class of specifications for firm-level productivity growth. We also
provide necessary and sufficient conditions under which the entry-exit model
exhibits a unique stationary recursive equilibrium in the setting where firm
size is unbounded.",Power Laws without Gibrat's Law,2019-10-29 21:28:24,John Stachurski,"http://arxiv.org/abs/1910.14023v1, http://arxiv.org/pdf/1910.14023v1",econ.GN
31467,gn,"After the Fall of the Berlin Wall, Central Eastern European cities (CEEc)
integrated into the globalized world, characterized by a core-periphery structure
and hierarchical interactions between cities. This article gives evidence of
the core-periphery effect on CEEc in 2013 in terms of differentiation of their
urban functions after 1989. We investigate the position of all CEEc in
transnational company networks in 2013. We examine the orientations of
ownership links between firms, the spatial patterns of these networks and the
specialization of firms in CEEc involved. The major contribution of this paper
is to provide evidence of a core-periphery structure within Central Eastern
Europe itself, and to support the diffusion-of-innovations theory, as not only
large cities but also medium-sized and small ones are part of the
multinational networks of firms. These findings provide significant insights
for the targeting of specific regional policies of the European Union.",Exploring cities of Central and Eastern Europe within transnational company networks: the core-periphery effect,2019-10-31 20:43:17,Natalia Zdanowska,"http://arxiv.org/abs/1910.14652v1, http://arxiv.org/pdf/1910.14652v1",econ.GN
31468,gn,"After the fall of the Berlin Wall, Central and Eastern Europe were subject to
strong polarisation processes. This article examines two neglected
aspects regarding the transition period: a comparative static assessment of
foreign trade from 1967 to 2012 and a city-centred analysis of
transnational companies in 2013. Results show a growing economic
differentiation between the North-West and South-East as well as a division
between large metropolises and other cities. These findings may complement the
targeting of specific regional strategies such as those conceived within the
Cohesion policy of the European Union.",Spatial polarisation within foreign trade and transnational firms' networks. The Case of Central and Eastern Europe,2019-10-31 20:50:52,Natalia Zdanowska,"http://arxiv.org/abs/1910.14658v1, http://arxiv.org/pdf/1910.14658v1",econ.GN
31469,gn,"Forests will have two notable economic roles in the future: providing
renewable raw material and storing carbon to mitigate climate change. The
pricing of forest carbon leads to longer rotation times and consequently larger
carbon stocks, but also exposes landowners to a greater risk of forest damage.
This paper investigates optimal forest rotation under carbon pricing and forest
damage risk. I provide the optimality conditions for this problem and
illustrate the setting with numerical calculations representing boreal forests
under a range of carbon prices and damage probabilities. The optimal rotation length
responds nearly linearly to both damage probability and carbon price, with
carbon pricing having a far greater impact. As such,
increasing forest carbon stocks by lengthening rotations is an economically
attractive method for climate change mitigation, despite the forest damage
risk. Carbon pricing also increases land expectation value and reduces the
economic risks of the landowner. The production possibility frontier under
optimal rotation suggests that significantly larger forest carbon stocks are
achievable, but imply lower harvests. However, forests' societally optimal role
between these two activities is not yet clear-cut, but rests on the future
development of relative prices between timber, carbon and other commodities
dependent on land-use.",Optimal forest rotation under carbon pricing and forest damage risk,2019-12-01 00:50:14,Tommi Ekholm,"http://arxiv.org/abs/1912.00269v1, http://arxiv.org/pdf/1912.00269v1",econ.GN
31470,gn,"In recent years, it has been debated whether a reduction in working hours
would be a viable solution to tackle the unemployment caused by technological
change. The improvement of existing production technology is gradually being
seen to reduce labor demand. Although this debate has been at the forefront for
many decades, the high and persistent unemployment encountered in the European
Union has renewed interest in implementing this policy in order to increase
employment. According to advocates of reducing working hours, this policy will
increase the number of workers needed during the production process, increasing
employment. However, the contradiction confronting advocates of working time
reduction is that the increase in labor costs will lead to a reduction in
business activity and ultimately to a reduction in demand for human resources.
In this article, we will attempt to answer the question of whether reducing
working hours is a way of countering the potential decline in employment due to
technological change. In order to answer this question, the aforementioned
conflicting views will be examined. As we will see during our statistical
examination of the existing empirical studies, the reduction of working time
does not lead to increased employment and cannot be seen as a solution to
long-lasting unemployment.",Examination of the Correlation between Working Time Reduction and Employment,2019-12-03 02:23:29,Virginia Tsoukatou,"http://arxiv.org/abs/1912.01605v1, http://arxiv.org/pdf/1912.01605v1",econ.GN
31471,gn,"This study investigates the socio-demographic characteristics that individual
cryptocurrency investors exhibit and the factors which go into their investment
decisions in different Initial Coin Offerings. A web based revealed preference
survey was conducted among Australian and Chinese blockchain and cryptocurrency
followers, and a Multinomial Logit model was applied to inferentially analyze
the characteristics of cryptocurrency investors and the determinants of the
choice of investment in cryptocurrency coins versus other types of ICO tokens.
The results show a difference in the determinants of these two choices
between Australian and Chinese cryptocurrency followers. The significant factors of
these two choices include age, gender, education, occupation, and investment
experience, and they align well with the behavioural literature. Furthermore,
alongside differences in how they rank the attributes of ICOs, there is further
variance between how Chinese and Australian investors rank deterrence factors
and investment strategies.",Investigating the Investment Behaviors in Cryptocurrency,2019-12-06 08:58:00,"Dingli Xi, Timothy Ian O'Brien, Elnaz Irannezhad","http://arxiv.org/abs/1912.03311v1, http://arxiv.org/pdf/1912.03311v1",econ.GN
31472,gn,"Use of healthcare services is inadequate in Ethiopia in spite of the high
burden of diseases. User-fee charges are the most important factor for this
deficiency in healthcare utilization. Hence, the country has been introducing
community-based and social health insurance schemes since 2010 to tackle such
problems. This cross-sectional study was conducted in March 2013 to assess the
willingness of rural households to pay for community-based health insurance in
the Debub Bench district of Southwest Ethiopia. A two-stage sampling technique was
used to select 845 households. Selected households were contacted using simple
random sampling. A double bounded dichotomous choice method was used to
elicit the willingness to pay. Data were analyzed with STATA 11. The Krinsky and
Robb method was used to calculate the mean/median willingness to pay with 95% CIs
after the predictors had been estimated using Seemingly Unrelated Bivariate
Probit Regression. Eight hundred and eight (95.6%) of the sampled households
were interviewed. Among them, 629 (77.8%) households were willing to join the
proposed CBHI scheme. About 54% of the households in the district were willing
to pay either the initial or second bid presented. On average, these
households were willing to pay 162.61 Birr (8.9 US$) per household
annually. If the community-based health insurance is rolled out in the
district, about half of the households will contribute 163 Birr (8.9 US$) annually.
If the premium exceeds this amount, the majority of households would
not join the scheme. Key words: community based health insurance, willingness
to pay, contingent valuation method, double bounded dichotomous choice, Krinsky
and Robb, rural households, Ethiopia.",Willingness to Pay for Community-Based Health Insurance among Rural Households of Southwest Ethiopia,2019-12-09 15:03:40,"Melaku Haile Likka, Shimeles Ololo Sinkie, Berhane Megerssa","http://arxiv.org/abs/1912.04281v1, http://arxiv.org/pdf/1912.04281v1",econ.GN
31473,gn,"Urban waste heat recovery, in which low temperature heat from urban sources
is recovered for use in a district heat network, has a great deal of potential
in helping to achieve 2050 climate goals. For example, heat from data centres,
metro systems, public sector buildings and waste water treatment plants could
be used to supply ten percent of Europe's heat demand. Despite this, at
present, urban waste heat recovery is not widespread and is an immature
technology. To help achieve greater uptake, three policy recommendations are
made. First, policy raising awareness of waste heat recovery and creating a
legal framework is suggested. Second, it is recommended that pilot projects are
promoted to help demonstrate technical and economic feasibility. Finally, a
pilot credit facility is proposed aimed at bridging the gap between potential
investors and heat recovery projects.",The role of low temperature waste heat recovery in achieving 2050 goals: a policy positioning paper,2019-12-13 18:31:37,"Edward Wheatcroft, Henry Wynn, Kristina Lygnerud, Giorgio Bonvicini","http://arxiv.org/abs/1912.06558v1, http://arxiv.org/pdf/1912.06558v1",econ.GN
31474,gn,"This is the first study that attempts to assess the regional economic impacts
of the European Institute of Innovation and Technology (EIT) investments in a
spatially explicit macroeconomic model. The model allows us to take into account
all key direct, indirect and spatial spillover effects of EIT investments via
inter-regional trade and investment linkages, as well as the spatial diffusion of
technology via an endogenously determined global knowledge frontier, with
endogenous growth engines driven by investments in knowledge and human capital.
Our simulation results of highly detailed EIT expenditure data suggest that,
besides sizable direct effects in those regions that receive the EIT investment
support, there are also significant spatial spillover effects to other
(non-supported) EU regions. Taking into account all key indirect and spatial
spillover effects is a particular strength of the adopted spatial general
equilibrium methodology; our results suggest that they are important indeed and
need to be taken into account when assessing the impacts of EIT investment
policies on regional economies.",EU Economic Modelling System,2019-12-16 00:28:07,"Olga Ivanova, d'Artis Kancs, Mark Thissen","http://dx.doi.org/10.2791/184008, http://arxiv.org/abs/1912.07115v1, http://arxiv.org/pdf/1912.07115v1",econ.GN
31475,gn,"This paper analyzes identification issues of a behavorial New Keynesian model
and estimates it using likelihood-based and limited-information methods with
identification-robust confidence sets. The model presents some of the same
difficulties that exist in simple benchmark DSGE models, but the analytical
solution is able to indicate in what conditions the cognitive discounting
parameter (attention to the future) can be identified, and the robust estimation
methods are able to confirm its importance for explaining the proposed
behavioral model.",Estimating a Behavioral New Keynesian Model,2019-12-16 19:36:51,"Joaquim Andrade, Pedro Cordeiro, Guilherme Lambais","http://arxiv.org/abs/1912.07601v1, http://arxiv.org/pdf/1912.07601v1",econ.GN
31478,gn,"This paper explores whether unconventional monetary policy operations have
redistributive effects on household wealth. Drawing on household balance sheet
data from the Wealth and Asset Survey, we construct monthly time series
indicators on the distribution of different asset types held by British
households for the period during which the monetary policy stance shifted as the
policy rate reached the zero lower bound (2006-2016). Using this series, we estimate
the response of wealth inequality to monetary policy, taking into account the
effect of unconventional policies conducted by the Bank of England in response
to the Global Financial Crisis. Our evidence reveals that unconventional
monetary policy shocks have significant long-lasting effects on wealth
inequality: an expansionary monetary policy in the form of asset purchases
raises wealth inequality across households, as measured by their Gini
coefficients of net wealth, housing wealth, and financial wealth. The evidence
of our analysis helps to raise awareness of central bankers about the
redistributive effects of their monetary policy decisions.",Monetary Policy and Wealth Inequalities in Great Britain: Assessing the role of unconventional policies for a decade of household data,2019-12-20 11:59:26,"Anastasios Evgenidis, Apostolos Fasianos","http://arxiv.org/abs/1912.09702v1, http://arxiv.org/pdf/1912.09702v1",econ.GN
31479,gn,"A recent paper by Hausmann and collaborators (1) reaches the important
conclusion that Complexity-weighted diversification is the essential element to
predict country growth. We like this result because Complexity-weighted
diversification is precisely the first equation of the Fitness algorithm that
we introduced in 2012 (2,3). However, contrary to what is claimed in (1), it is
incorrect to say that diversification is contained also in the ECI algorithm
(4). We discuss the origin of this misunderstanding and show that the ECI
algorithm contains exactly zero diversification. This is actually one of the
reasons for the poor performance of ECI, which leads to completely unrealistic
results, such as the conclusion that Qatar or Saudi Arabia are
industrially more competitive than China (5,6). Another important element of
our new approach is the representation of the economic dynamics of countries as
trajectories in the GDPpc-Fitness space (7-10). In some way, this too has been
rediscovered by Hausmann and collaborators and renamed as ""Stream plots"", but,
given their weaker metrics and methods, they propose to use it only for
qualitative insight, while ours led to quantitative and successful forecasting.
The Fitness approach has paved the way to a robust and testable framework for
Economic Complexity resulting in a highly competitive scheme for growth
forecasting (7-10). According to a recent report by Bloomberg (9): The new
Fitness method, ""systematically outperforms standard methods, despite requiring
much less data"".","Economic Complexity: why we like ""Complexity weighted diversification""",2019-12-23 19:29:08,"Luciano Pietronero, Andrea Gabrielli, Andrea Zaccaria","http://arxiv.org/abs/1912.10955v1, http://arxiv.org/pdf/1912.10955v1",econ.GN
31480,gn,"This paper examines voters' responses to the disclosure of electoral crime
information in large democracies. I focus on Brazil, where the electoral court
makes candidates' criminal records public before every election. Using a sample
of local candidates running for office between 2004 and 2016, I find that a
conviction for an electoral crime reduces candidates' probability of election
and vote share by 10.3 and 12.9 percentage points (p.p.), respectively. These
results are not explained by (potential) changes in judge, voter, or candidate
behavior over the electoral process. I additionally perform machine
classification of court documents to estimate heterogeneous punishment for
severe and trivial crimes. I document a larger electoral penalty (6.5 p.p.) if
candidates are convicted for severe crimes. These results supplement the
information shortcut literature by examining how judicial information
influences voters' decisions and showing that voters react more strongly to
more credible sources of information.",Electoral Crime Under Democracy: Information Effects from Judicial Decisions in Brazil,2019-12-23 19:31:45,Andre Assumpcao,"http://arxiv.org/abs/1912.10958v1, http://arxiv.org/pdf/1912.10958v1",econ.GN
31481,gn,"When it comes to preventive healthcare, place matters. It is increasingly
clear that social factors, particularly reliable access to healthy food, are as
determinant to health and health equity as medical care. However, food access
studies often only present one-dimensional measurements of access. We
hypothesize that food access is a multidimensional concept and evaluate
Penchansky and Thomas's 1981 definition of access. In our approach, we identify
ten variables contributing to food access in the City of Chicago and use
principal component analysis to identify vulnerable populations with low
access. Our results indicate that within the urban environment of the case
study site, affordability is the most important factor in low food
accessibility, followed by urban youth, reduced mobility, and higher immigrant
population.",Healthy Access for Healthy Places: A Multidimensional Food Access Measure,2019-12-23 03:12:27,"Irena Gao, Marynia Kolak","http://arxiv.org/abs/1912.11351v1, http://arxiv.org/pdf/1912.11351v1",econ.GN
31482,gn,"This paper aims to analyze the relationship between yield curve -being a line
of the interests in various maturities at a given time- and GDP growth in
Turkey. The paper focuses on analyzing the yield curve in relation to its
predictive power on Turkish macroeconomic dynamics using the linear regression
model. To do so, the interest rate spreads of different maturities are used as
a proxy of the yield curve. Findings of the OLS regression are similar to those
found in the literature and support the positive relation between the slope of the
yield curve and GDP growth in Turkey. Moreover, the predicted values of GDP
growth from interest rate spread closely follow the actual GDP growth in
Turkey, indicating its predictive power on the economic activity.",Reading Macroeconomics From the Yield Curve: The Turkish Case,2019-12-23 22:52:32,"Ipek Turker, Bayram Cakir","http://arxiv.org/abs/1912.12351v1, http://arxiv.org/pdf/1912.12351v1",econ.GN
31483,gn,"We develop an exchange rate target zone model with finite exit time and
non-Gaussian tails. We show how the tails are a consequence of time-varying
investor risk aversion, which generates mean-preserving spreads in the
fundamental distribution. We solve explicitly for stationary and non-stationary
exchange rate paths, and show how both depend continuously on the distance to
the exit time and the target zone bands. This enables us to show how central
bank intervention is endogenous to both the distance of the fundamental to the
band and the underlying risk. We discuss how the feasibility of the target zone
is shaped by the set horizon and the degree of underlying risk, and we
determine a minimum time at which the required parity can be reached. We prove
that increases in risk after a certain threshold can yield endogenous regime
shifts where the ``honeymoon effects'' vanish and the target zone cannot be
feasibly maintained. None of these results can be obtained by means of the
standard Gaussian or affine models. Numerical simulations allow us to recover
all the exchange rate densities established in the target zone literature. The
generality of our framework has important policy implications for modern target
zone arrangements.",Can one hear the shape of a target zone?,2020-02-03 17:11:15,"Jean-Louis Arcand, Max-Olivier Hongler, Shekhar Hari Kumar, Daniele Rinaldo","http://arxiv.org/abs/2002.00948v3, http://arxiv.org/pdf/2002.00948v3",econ.GN
31485,gn,"Every nation prioritizes the inclusive economic growth and development of all
regions. However, we observe that economic activities are clustered in space,
which results in a disparity in per-capita income among different regions. A
complexity-based method was proposed by Hidalgo and Hausmann [PNAS 106,
10570-10575 (2009)] to explain the large gaps in per-capita income across
countries. Although there have been extensive studies on countries' economic
complexity using international export data, studies on economic complexity at
the regional level are relatively less studied. Here, we study the industrial
sector complexity of prefectures in Japan based on the basic information of
more than one million firms. We aggregate the data as a bipartite network of
prefectures and industrial sectors. We decompose the bipartite network as a
prefecture-prefecture network and sector-sector network, which reveals the
relationships among them. Similarities among the prefectures and among the
sectors are measured using a metric. From these similarity matrices, we cluster
the prefectures and sectors using the minimal spanning tree technique. The
computed economic complexity index from the structure of the bipartite network
shows a high correlation with macroeconomic indicators, such as per-capita
gross prefectural product and prefectural income per person. We argue that this
index reflects the present economic performance and hidden potential of the
prefectures for future growth.",Economic complexity of prefectures in Japan,2020-02-12 06:35:23,"Abhijit Chakraborty, Hiroyasu Inoue, Yoshi Fujiwara","http://dx.doi.org/10.1371/journal.pone.0238017, http://arxiv.org/abs/2002.05785v2, http://arxiv.org/pdf/2002.05785v2",econ.GN
31486,gn,"Governments spend billions of dollars subsidizing the adoption of different
goods. However, it is difficult to gauge whether those goods are resold, or are
valued by their ultimate recipients. This project studies a program to
subsidize the adoption of mobile phones in one of the poorest countries in the
world. Rwanda subsidized the equivalent of 8% of the stock of mobile phones for
select rural areas. We analyze the program using 5.3 billion transaction
records from the dominant mobile phone network. Transaction records reveal
where and how much subsidized handsets were ultimately used, and indicators of
resale. Some subsidized handsets drifted from the rural areas where they were
allocated to urban centers, but the subsidized handsets were used as much as
handsets purchased at retail prices, suggesting they were valued. Recipients
are similar to those who paid for phones, but are highly connected to each
other. We then simulate welfare effects using a network demand system that
accounts for how each person's adoption affects the rest of the network.
Spillovers are substantial: 73-76% of the operator revenue generated by the
subsidy comes from nonrecipients. We compare the enacted subsidy program to
counterfactual targeting based on different network heuristics.",The Effect of Network Adoption Subsidies: Evidence from Digital Traces in Rwanda,2020-02-14 00:50:04,"Daniel Björkegren, Burak Ceyhun Karaca","http://dx.doi.org/10.1016/j.jdeveco.2021.102762, http://arxiv.org/abs/2002.05791v1, http://arxiv.org/pdf/2002.05791v1",econ.GN
31487,gn,"In recent decades, global oil palm production has shown an abrupt increase,
with almost 90% produced in Southeast Asia alone. Monitoring oil palm is
largely based on national surveys and inventories or one-off mapping studies.
However, they do not provide detailed spatial extent or timely updates and
trends in oil palm expansion or age. Palm oil yields vary significantly with
plantation age, which is critical for landscape-level planning. Here we show
the extent and age of oil palm plantations for the year 2017 across Southeast
Asia using remote sensing. Satellites reveal a total of 11.66 (+/- 2.10)
million hectares (Mha) of plantations with more than 45% located in Sumatra.
Plantation age varies from ~7 years in Kalimantan to ~13 in Insular Malaysia.
More than half the plantations on Kalimantan are young (<7 years) and not yet
in full production compared to Insular Malaysia where 45% of plantations are
older than 15 years, with declining yields. For the first time, these results
provide a consistent, independent, and transparent record of oil palm
plantation extent and age structure, which are complementary to national
statistics.",Satellite reveals age and extent of oil palm plantations in Southeast Asia,2020-02-17 12:02:36,"Olha Danylo, Johannes Pirker, Guido Lemoine, Guido Ceccherini, Linda See, Ian McCallum, Hadi, Florian Kraxner, Frédéric Achard, Steffen Fritz","http://arxiv.org/abs/2002.07163v1, http://arxiv.org/pdf/2002.07163v1",econ.GN
31488,gn,"We test how individuals with incorrect beliefs about their ability learn
about an external parameter (`fundamental') when they cannot separately
identify the effects of their ability, actions, and the parameter on their
output. Heidhues et al. (2018) argue that learning makes overconfident
individuals worse off as their beliefs about the fundamental get less accurate,
causing them to take worse actions. In our experiment, subjects take
incorrectly-marked tests, and we measure how they learn about the marker's
accuracy over time. Overconfident subjects put in less effort, and their
beliefs about the marker's accuracy got worse, as they learnt. Beliefs about
the proportion of correct answers marked as correct fell by 0.05 over the
experiment. We find no effect in underconfident subjects.",How Do Expectations Affect Learning About Fundamentals? Some Experimental Evidence,2020-02-17 23:06:42,"Kieran Marray, Nikhil Krishna, Jarel Tang","http://arxiv.org/abs/2002.07229v3, http://arxiv.org/pdf/2002.07229v3",econ.GN
31489,gn,"Towards the realization of a sustainable, fair and inclusive society, we
proposed a novel decision-making model that incorporates social norms in a
rational choice model from the standpoints of deontology and utilitarianism. We
proposed a hypothesis that interprets the choice of action as the X-point (crossing
point) of an individual utility function that increases with actions and a social norm
function that decreases with actions. This hypothesis is based on humans
psychologically balancing the value of utility and norms in selecting actions.
Using the hypothesis and approximation, we were able to isolate and infer
utility function and norm function from real-world measurement data of actions
on environmental conditions, and elucidate the interaction between the two
functions that leads from the current status to target actions. As examples of
collective data that aggregate decision-making of individuals, we looked at the
changes in power usage before and after the Great East Japan Earthquake and the
correlation between national GDP and CO2 emission in different countries. The
first example showed that the perceived benefit of power (i.e., the utility of
power usage) was stronger than the power usage restrictions imposed by norms
after the earthquake, contrary to our expectation. The second example showed
that a reduction of CO2 emission in each country was not related to utility
derived from GDP but to norms related to CO2 emission. Going forward, we will
apply this new X-point model to actual social practices involving normative
problems, and design the approaches for the diagnosis, prognosis and
intervention of social systems by IT systems.",Rational Choice Hypothesis as X-point of Utility Function and Norm Function,2020-02-19 13:14:31,"Takeshi Kato, Yasuyuki Kudo, Junichi Miyakoshi, Jun Otsuka, Hayato Saigo, Kaori Karasawa, Hiroyuki Yamaguchi, Yasuo Deguchi","http://dx.doi.org/10.11114/aef.v7i4.4890, http://arxiv.org/abs/2002.09036v2, http://arxiv.org/pdf/2002.09036v2",econ.GN
31490,gn,"We introduced a decision-making model based on value functions that included
individualistic utility function and socio-constructivistic norm function and
proposed a norm-fostering process that recursively updates norm function
through mutual recognition between the self and others. As an example, we
looked at the resource-sharing problem typical of economic activities and
assumed the distribution of individual actions to define the (1) norm function
fostered through mutual comparison of value/action ratio based on the equity
theory (progressive tax-like), (2) norm function proportional to resource
utilization (proportional tax-like) and (3) fixed norm function independent of
resource utilization (fixed tax-like). By carrying out numerical simulation, we
showed that the progressive tax-like norm function (i) does not increase
disparity for the distribution of the actions, unlike the other norm functions,
and (ii) has high resource productivity and a low Gini coefficient. Therefore, the
progressive tax-like norm function has the highest sustainability and fairness.",Sustainability and Fairness Simulations Based on Decision-Making Model of Utility Function and Norm Function,2020-02-19 13:20:10,"Takeshi Kato, Yasuyuki Kudo, Junichi Miyakoshi, Jun Otsuka, Hayato Saigo, Kaori Karasawa, Hiroyuki Yamaguchi, Yoshinori Hiroi, Yasuo Deguchi","http://dx.doi.org/10.11114/aef.v7i3.4825, http://arxiv.org/abs/2002.09037v2, http://arxiv.org/pdf/2002.09037v2",econ.GN
31491,gn,"We prove that the consumption functions in optimal savings problems are
asymptotically linear if the marginal utility is regularly varying. We also
analytically characterize the asymptotic marginal propensities to consume
(MPCs) out of wealth. Our results are useful for obtaining good initial guesses
when numerically computing consumption functions, and provide a theoretical
justification for linearly extrapolating consumption functions outside the
grid.",Asymptotic Linearity of Consumption Functions and Computational Efficiency,2020-02-21 06:30:49,"Qingyin Ma, Alexis Akira Toda","http://dx.doi.org/10.1016/j.jmateco.2021.102562, http://arxiv.org/abs/2002.09108v2, http://arxiv.org/pdf/2002.09108v2",econ.GN
31492,gn,"The Asian-pacific region is the major international tourism demand market in
the world, and its tourism demand is deeply affected by various factors.
Previous studies have shown that different market factors influence the tourism
market demand at different timescales. Accordingly, the decomposition ensemble
learning approach is proposed to analyze the impact of different market factors
on market demand, and the potential advantages of the proposed method on
forecasting tourism demand in the Asia-pacific region are further explored.
This study carefully explores the multi-scale relationship between tourist
destinations and the major source countries, by decomposing the corresponding
monthly tourist arrivals with noise-assisted multivariate empirical mode
decomposition. With China and Malaysia as case studies, the respective
empirical results show that the decomposition ensemble approach performs significantly
better than the benchmarks, which include statistical, machine learning,
and deep learning models, in terms of both level forecasting accuracy and
directional forecasting accuracy.",A New Decomposition Ensemble Approach for Tourism Demand Forecasting: Evidence from Major Source Countries,2020-02-21 13:07:07,"Chengyuan Zhang, Fuxin Jiang, Shouyang Wang, Shaolong Sun","http://arxiv.org/abs/2002.09201v1, http://arxiv.org/pdf/2002.09201v1",econ.GN
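To illustrate the decomposition ensemble idea summarized above, here is a simplified Python sketch. It is not the authors' code: a centred moving average stands in for the noise-assisted multivariate empirical mode decomposition used in the paper, a naive drift forecast stands in for the statistical and machine-learning learners, and the synthetic arrivals series is invented for demonstration.

```python
import numpy as np

def decompose(y, window=12):
    """Split a monthly series into a smooth trend and a residual component.
    (Stand-in for the multivariate empirical mode decomposition used in the paper.)"""
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="same")
    return trend, y - trend

def forecast_component(c, horizon=6):
    """Naive component forecast: extend the recent average drift of the component."""
    drift = np.mean(np.diff(c[-12:]))
    return c[-1] + drift * np.arange(1, horizon + 1)

def ensemble_forecast(y, horizon=6):
    # Forecast each component separately, then recombine by summation.
    trend, residual = decompose(y)
    return forecast_component(trend, horizon) + forecast_component(residual, horizon)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    arrivals = (1000 + 20 * np.arange(120)
                + 100 * np.sin(np.arange(120) * 2 * np.pi / 12)
                + rng.normal(0, 30, 120))   # synthetic monthly tourist arrivals
    print(ensemble_forecast(arrivals, horizon=6))
```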
31493,gn,"We examine COVID-19-related states of emergency (SOEs) using data on 180
countries in the period January 1 through June 12, 2020. The results suggest
that states' declaration of SOEs is driven by both external and internal
factors. A permissive regional environment, characterized by many and
simultaneously declared SOEs, may have diminished reputational and political
costs, making employment of emergency powers more palatable for a wider range
of governments. At the same time, internal characteristics, specifically
democratic institutions and pandemic preparedness, shaped governments'
decisions. Weak democracies with poor pandemic preparedness were considerably
more likely to opt for SOEs than dictatorships and robust democracies with
higher preparedness. We find no significant association between pandemic
impact, measured as national COVID-19-related deaths, and SOEs, suggesting that
many states adopted SOEs proactively before the disease spread locally.","Emergency Powers in Response to COVID-19: Policy diffusion, Democracy, and Preparedness",2020-07-02 10:16:15,"Magnus Lundgren, Mark Klamberg, Karin Sundström, Julia Dahlqvist","http://arxiv.org/abs/2007.00933v1, http://arxiv.org/pdf/2007.00933v1",econ.GN
31494,gn,"We lightly modify Eriksson's (1996) model to accommodate the home office in a
simple model of endogenous growth. By home office we mean any working activity
carried out away from the workplace which is assumed to be fixed. Due to the
strong mobility restrictions imposed on citizens during the COVID-19 pandemic,
we allow the home office to be located at home. At the home office, however, in
consequence of the fear and anxiety workers feel because of COVID-19, they
become distracted and spend less time working. We show that in the long run,
the intertemporal elasticity of substitution of the home-office labor is
sufficiently small only if the intertemporal elasticity of substitution of the
time spent on distracting activities is also small enough.",The Home Office in Times of COVID-19 Pandemic and its impact in the Labor Supply,2020-07-05 06:16:08,"José Nilmar Alves de Oliveira, Jaime Orrillo, Franklin Gamboa","http://arxiv.org/abs/2007.02935v2, http://arxiv.org/pdf/2007.02935v2",econ.GN
31495,gn,"The COVID-19 pandemic has caused more than 8 million confirmed cases and
500,000 deaths to date. In response to this emergency, many countries have
introduced a series of social-distancing measures including lockdowns and
businesses' temporary shutdowns, in an attempt to curb the spread of the
infection. Accordingly, the pandemic has been generating unprecedented disruption
on practically every aspect of society. This paper demonstrates that
high-frequency electricity market data can be used to estimate the causal,
short-run impact of COVID-19 on the economy. In the current uncertain economic
conditions, timeliness is essential. Unlike official statistics, which are
published with a delay of a few months, with our approach one can monitor
virtually every day the impact of the containment policies, the extent of the
recession and measure whether the monetary and fiscal stimuli introduced to
address the crisis are being effective. We illustrate our methodology on daily
data for the Italian day-ahead power market. Not surprisingly, we find that the
containment measures caused a significant reduction in economic activities and
that GDP at the end of May 2020 is still about 11% lower than what it
would have been without the outbreak.",Real-time estimation of the short-run impact of COVID-19 on economic activity using electricity market data,2020-07-07 17:10:41,"Carlo Fezzi, Valeria Fanghella","http://arxiv.org/abs/2007.03477v1, http://arxiv.org/pdf/2007.03477v1",econ.GN
31496,gn,"What are the chances of an ethical individual rising through the ranks of a
political party or a corporation in the presence of unethical peers? To answer
this question, I consider a four-player two-stage elimination tournament, in
which players are partitioned into those willing to be involved in sabotage
behavior and those who are not. I show that, under certain conditions, the
latter are more likely to win the tournament.",Nice guys don't always finish last: succeeding in hierarchical organizations,2020-07-07 13:42:10,Doron Klunover,"http://arxiv.org/abs/2007.04435v4, http://arxiv.org/pdf/2007.04435v4",econ.GN
31497,gn,"Based on the theoretical assumptions of the Labeling Approach, Critical
Criminology and Behavioral Economics, taking into account that almost half of
the Brazilian prison population is composed of individuals who are serving
pre-trial detention, it was sought to assess the characteristics of the
flagranteated were presented as determinants to influence the subjectivity of
the judges in the decision to determine, or not, the custody of these. The
research initially adopted a deductive methodology, based on the principle that
external objective factors encourage magistrates to decide in a certain sense.
It was then focused on the identification of which characteristics of the
flagranteated and allegedly committed crimes would be relevant to guide such
decisions. Subsequently, after deduction of such factors, an inductive
methodology was adopted, analyzing the data from the theoretical assumptions
pointed out. During the research, 277 decisions were analyzed, considering
decision as individual decision and not custody hearing. This sample embarked
decisions of six judges among the largest cities of the State of Paran\'a and
concerning the crimes of theft, robbery and drug trafficking. It was then
concluded that the age, gender, social class and type of accusation that the
flagranteated suffered are decisive for the decree of his provisional arrest,
being that, depending on the judge competent in the case, the chances of the
decree can increase in Up to 700%, taking into account that the circumstantial
and causal variables are constant. Given the small sample size, as far as the
number of judges is concerned, more extensive research is needed so that
conclusions of national validity can be developed.","Análise dos fatores determinantes para o decreto de prisão preventiva em casos envolvendo acusações por roubo, tráfico de drogas e furto: Um estudo no âmbito das cidades mais populosas do Paraná",2020-07-09 20:51:27,"Giovane Cerezuela Policeno, Mario Edson Passerino Fischer da Silva, Vitor Pestana Ostrensky","http://arxiv.org/abs/2007.04962v1, http://arxiv.org/pdf/2007.04962v1",econ.GN
31498,gn,"Education has traditionally been classroom-oriented with a gradual growth of
online courses in recent times. However, the outbreak of the COVID-19 pandemic
has dramatically accelerated the shift to online classes. Associated with this
learning format is the question: what do people think about the educational
value of an online course compared to a course taken in-person in a classroom?
This paper addresses the question and presents a Bayesian quantile analysis of
public opinion using a nationally representative survey data from the United
States. Our findings show that previous participation in online courses and
full-time employment status favor the educational value of online courses. We
also find that the older demographic and females have a greater propensity for
online education. In contrast, highly educated individuals have a lower
willingness towards online education vis-\`a-vis traditional classes. Besides,
covariate effects show heterogeneity across quantiles which cannot be captured
using probit or logit models.",Do Online Courses Provide an Equal Educational Value Compared to In-Person Classroom Teaching? Evidence from US Survey Data using Quantile Regression,2020-07-14 15:23:18,"Manini Ojha, Mohammad Arshad Rahman","http://arxiv.org/abs/2007.06994v1, http://arxiv.org/pdf/2007.06994v1",econ.GN
31499,gn,"The objective of this work is twofold: to expand the depression models
proposed by Tobin and analyse a supply shock, such as the Covid-19 pandemic, in
this Keynesian conceptual environment. The expansion allows us to propose the
evolution of all endogenous macroeconomic variables. The result obtained is
relevant due to its theoretical and practical implications. A quantity or
Keynesian adjustment to the shock produces a depression through the effect on
aggregate demand. This depression worsens in the medium/long-term. It is
accompanied by increases in inflation, inflation expectations and the real
interest rate. A stimulus tax policy is also recommended, as well as an active
monetary policy to reduce real interest rates. On the other hand, the pricing
or Marshallian adjustment foresees a more severe and rapid depression in the
short-term. There would be a reduction in inflation and inflation expectations,
and an increase in the real interest rates. The tax or monetary stimulus
measures would only impact inflation. This result makes it possible to clarify
and assess the resulting depression, as well as propose policies. Finally, it
offers conflicting predictions that allow one of the two models to be
falsified.",Keynesian models of depression. Supply shocks and the COVID-19 Crisis,2020-07-11 21:08:10,Ignacio Escanuela Romana,"http://arxiv.org/abs/2007.07353v2, http://arxiv.org/pdf/2007.07353v2",econ.GN
31500,gn,"An important theme is how to maximize the cooperation of employees when
dealing with crisis measures taken by the company. Therefore, to find out what
kind of employees have cooperated with the company's measures in the current
corona (COVID-19) crisis, and what effect the cooperation has had to these
employees/companies to get hints for preparing for the next crisis, the pass
analysis was carried out using awareness data obtained from a questionnaire
survey conducted on 2,973 employees of Japanese companies in China. The results
showed that employees with higher social capital and resilience were more
supportive of the company's measures against corona and that employees who were
more supportive of corona measures were less likely to leave their jobs.
However, regarding fatigue and anxiety about the corona felt by employees, it
was shown that it not only works to support cooperation in corona
countermeasures but also enhances the turnover intention. This means that just
by raising the anxiety of employees, even if a company achieves the short-term
goal of having them cooperate with the company's countermeasures against
corona, it may not reach the longer-term goal by making them increase their
intention to leave. It is important for employees to be aware of the crisis and
to fear it properly. But more than that, it should be possible for the company
to help employees stay resilient, build good relationships with them, and
increase their social capital to make them support crisis measurement of the
company most effectively while keeping their turnover intention low.",Social capital and resilience make an employee cooperate for coronavirus measures and lower his/her turnover intention,2020-07-15 22:36:51,"Keisuke Kokubun, Yoshiaki Ino, Kazuyoshi Ishimura","http://dx.doi.org/10.1108/IJWHM-07-2021-0142, http://arxiv.org/abs/2007.07963v3, http://arxiv.org/pdf/2007.07963v3",econ.GN
31501,gn,"Because SARS-Cov-2 (COVID-19) statistics affect economic policies and
political outcomes, governments have an incentive to control them. Manipulation
may be less likely in democracies, which have checks to ensure transparency. We
show that data on disease burden bear indicia of data modification by
authoritarian governments relative to democratic governments. First, data on
COVID-19 cases and deaths from authoritarian governments show significantly
less variation from a 7 day moving average. Because governments have no reason
to add noise to data, lower deviation is evidence that data may be massaged.
Second, data on COVID-19 deaths from authoritarian governments do not follow
Benford's law, which describes the distribution of leading digits of numbers.
Deviations from this law are used to test for accounting fraud. Smoothing and
adjustments to COVID-19 data may indicate other alterations to these data and a
need to account for such alterations when tracking the disease.",Authoritarian Governments Appear to Manipulate COVID Data,2020-07-19 05:54:01,"Mudit Kapoor, Anup Malani, Shamika Ravi, Arnav Agrawal","http://arxiv.org/abs/2007.09566v1, http://arxiv.org/pdf/2007.09566v1",econ.GN
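The Benford's-law check described in the abstract above can be illustrated with a short sketch; the daily death counts below are hypothetical placeholders, and a chi-square goodness-of-fit test is one common (though not the only) way to quantify deviations from the Benford distribution.

```python
# Sketch of a first-digit Benford's-law test on reported COVID-19 death counts.
# `daily_deaths` is illustrative dummy data, not real reporting figures.
import numpy as np
from scipy.stats import chisquare

def benford_test(counts):
    """Chi-square test of the first-digit distribution against Benford's law."""
    counts = [c for c in counts if c > 0]
    first_digits = np.array([int(str(c)[0]) for c in counts])
    observed = np.array([(first_digits == d).sum() for d in range(1, 10)])
    # Benford probabilities: P(d) = log10(1 + 1/d), d = 1..9
    expected = np.log10(1 + 1 / np.arange(1, 10)) * len(first_digits)
    return chisquare(observed, f_exp=expected)

daily_deaths = [12, 17, 23, 31, 38, 45, 52, 61, 70, 84, 97, 110, 128, 151]
print(benford_test(daily_deaths))  # large deviations -> low p-value
```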
31502,gn,"This paper reviews the research on the impacts of incarceration on crime.
Where data availability permits, reviewed studies are replicated and
reanalyzed. Among three dozen studies I reviewed, I obtained or reconstructed
the data and code for eight. Replication and reanalysis revealed significant
methodological concerns in seven and led to major reinterpretations of four. I
estimate that, at typical policy margins in the United States today,
decarceration has zero net impact on crime outside of prison. That estimate is
uncertain, but at least as much evidence suggests that decarceration reduces
crime as increases it. The crux of the matter is that tougher sentences hardly
deter crime, and that while imprisoning people temporarily stops them from
committing crime outside prison walls, it also tends to increase their
criminality after release. As a result, ""tough-on-crime"" initiatives can reduce
crime in the short run but cause offsetting harm in the long run. A
cost-benefit analysis finds that even under a devil's advocate reading of this
evidence, in which incarceration does reduce crime in the U.S., it is unlikely to
increase aggregate welfare.",The impacts of incarceration on crime,2020-07-16 03:28:02,David Roodman,"http://arxiv.org/abs/2007.10268v1, http://arxiv.org/pdf/2007.10268v1",econ.GN
31503,gn,"This paper critically reviews the research on the impact of immigration on
employment and wages of natives in wealthy countries--where ""natives"" includes
previous immigrants and their descendants. While written for a non-technical
audience, the paper engages with technical issues and probes one particularly
intense scholarly debate in an appendix. While the available evidence is not
definitive, it paints a consistent picture. Industrial economies can generally
absorb migrants quickly, in part because capital is mobile and flexible, in
part because immigrants are consumers as well as producers. Thus, long-term
average impacts are probably zero at worst. And the long-term may come quickly,
especially if migration inflows are predictable to investors. Possibly, skilled
immigration boosts productivity and wages for many others. Around the averages,
there are distributional effects. Among low-income ""native"" workers, the ones
who stand to lose the most are those who most closely resemble new arrivals, in
being immigrants themselves, being low-skill, being less assimilated, and,
potentially, undocumented. But native workers and earlier immigrants tend to
benefit from the arrival of workers different from them, who complement more
than compete with them in production. Thus skilled immigration can offset the
effects of low-skill immigration on natives and earlier immigrants.",The domestic economic impacts of immigration,2020-07-16 03:36:24,David Roodman,"http://arxiv.org/abs/2007.10269v1, http://arxiv.org/pdf/2007.10269v1",econ.GN
31504,gn,"This paper reviews the research on the impacts of alcohol taxation outcomes
such as heavy drinking and mortality. Where data availability permits, reviewed
studies are replicated and reanalyzed. Despite weaknesses in the majority of
studies, and despite seeming disagreements among the more credible one--ones
based on natural experiments--we can be reasonably confident that taxing
alcohol reduces drinking in general and problem drinking in particular. The
larger and cleaner the underlying natural experiment, the more apt a study is
to detect impacts on drinking. Estimates from the highest-powered study
settings, such as in Alaska in 2002 and Finland in 2004, suggest an elasticity
of mortality with respect to price of -1 to -3. A 10% price increase in the US
would, according to this estimate, save 2,000-6,000 lives and 48,000-130,000
years of life each year.",The impacts of alcohol taxes: A replication review,2020-07-16 03:40:48,David Roodman,"http://arxiv.org/abs/2007.10270v1, http://arxiv.org/pdf/2007.10270v1",econ.GN
31505,gn,"Agricultural research has fostered productivity growth, but the historical
influence of anthropogenic climate change on that growth has not been
quantified. We develop a robust econometric model of weather effects on global
agricultural total factor productivity (TFP) and combine this model with
counterfactual climate scenarios to evaluate impacts of past climate trends on
TFP. Our baseline model indicates that anthropogenic climate change has reduced
global agricultural TFP by about 21% since 1961, a slowdown that is equivalent
to losing the last 9 years of productivity growth. The effect is substantially
more severe (a reduction of ~30-33%) in warmer regions such as Africa and Latin
America and the Caribbean. We also find that global agriculture has grown more
vulnerable to ongoing climate change.",The Historical Impact of Anthropogenic Climate Change on Global Agricultural Productivity,2020-07-20 22:05:29,"Ariel Ortiz-Bobea, Toby R. Ault, Carlos M. Carrillo, Robert G. Chambers, David B. Lobell","http://dx.doi.org/10.1038/s41558-021-01000-1, http://arxiv.org/abs/2007.10415v2, http://arxiv.org/pdf/2007.10415v2",econ.GN
31506,gn,"Many interventions in global health save lives. One criticism sometimes
lobbed at these interventions invokes the spirit of Malthus. The good done, the
charge goes, is offset by the harm of spreading the earth's limited resources
more thinly: more people, and more misery per person. To the extent this holds,
the net benefit of saving lives is lower than it appears at first. On the
other hand, if lower mortality, especially in childhood, leads families to have
fewer children, life-saving interventions could reduce population. This
document critically reviews the evidence. It finds that the impact of
life-saving interventions on fertility and population growth varies by context,
and is rarely greater than 1:1. In places where lifetime births/woman has been
converging to 2 or lower, saving one child's life should lead parents to avert
a birth they would otherwise have. The impact of mortality drops on fertility
will be nearly 1:1, so population growth will hardly change. In the
increasingly exceptional locales where couples appear not to limit fertility
much, such as Niger and Mali, the impact of saving a life on total births will
be smaller, and may come about mainly through the biological channel of
lactational amenorrhea. Here, mortality-drop-fertility-drop ratios of 1:0.5 and
1:0.33 appear more plausible. But in the long-term, it would be surprising if
these few countries do not join the rest of the world in the transition to
lower and more intentionally controlled fertility.",The impact of life-saving interventions on fertility,2020-07-16 03:45:00,David Roodman,"http://arxiv.org/abs/2007.11388v1, http://arxiv.org/pdf/2007.11388v1",econ.GN
31507,gn,"This research aims to provide an explanatory analyses of the business cycles
divergence between Euro Area and Romania, respectively its drivers, since the
synchronisation of output-gaps is one of the most important topic in the
context of a potential EMU accession. According to the estimates, output-gaps
synchronisation entered on a downward path in the subperiod 2010-2017, compared
to 2002-2009. The paper demonstrates there is a negative relationship between
business cycles divergence and three factors (economic structure convergence,
wage structure convergence and economic openness), but also a positive
relationship between it and its autoregressive term, respectively the GDP per
capita convergence.",Examining the drivers of business cycle divergence between Euro Area and Romania,2020-07-20 17:45:41,Ionut Jianu,"http://arxiv.org/abs/2007.11407v1, http://arxiv.org/pdf/2007.11407v1",econ.GN
31510,gn,"This paper aims to estimate the effect of young people who are not in
employment, education or training (neets rate) on the people at risk of poverty
rate in the European Union. Statistical data covering the 2010-2016 period for
all EU-28 Member States have been used. Regarding the methodology, the study
was performed by using Panel Estimated Generalized Least Squares method,
weighted by the Period SUR option. The effect of the NEET rate on the poverty rate proved
to be positive and statistically significant in European Union, since this
indicator includes two main areas which are extremely relevant for poverty
dimension. Firstly, the youth unemployment rate was one of the main channels
through which the financial crisis affected population incomes.
Secondly, it accounts for the educational system's coverage and its skills
deficiencies.","The Effect of Young People Not In Employment, Education or Training, On Poverty Rate in European Union",2020-07-20 17:24:36,Ionut Jianu,"http://arxiv.org/abs/2007.11435v1, http://arxiv.org/pdf/2007.11435v1",econ.GN
31511,gn,"This paper aims to review the different impacts of income inequality drivers
on the Gini coefficient, depending on institutional specificities. In this
context, we divided the European Union member states in two clusters (the
cluster of member states with inclusive institutions / extractive institutions)
using the institutional pillar as a clustering criterion. In both cases, we
assessed the impact of income inequality drivers on the Gini coefficient by using a
fixed effects model in order to examine the role and importance of the
institutions in the dynamics of income disparities. The models were estimated by
applying the Panel Estimated Generalized Least Squares (EGLS) method, this
being weighted by Cross-section weights option. The separate assessment of the
income inequality reactivity to the change in its determinants according to the
institutional criterion represents a new approach in this field of research and
the results show that the impact of moderating income inequality strategies is
limited in the case of member states with extractive institutions.",The implications of institutional specificities on the income inequalities drivers in European Union,2020-07-20 17:37:22,"Ionut Jianu, Ion Dobre, Dumitru Alexandru Bodislav, Carmen Valentina Radulescu, Sorin Burlacu","http://arxiv.org/abs/2007.11436v1, http://arxiv.org/pdf/2007.11436v1",econ.GN
31512,gn,"The main goal of the paper is to extract the aggregate demand and aggregate
supply shocks in Greece, Ireland, Italy and Portugal, as well as to examine the
correlation between the two types of shocks. The decomposition of the shocks was
achieved by using a structural vector autoregression that analyses the
relationship between the evolution of the gross domestic product and inflation
in the period 1997-2015. The goal of the paper is to confirm the aggregate
demand - aggregate supply model in the above-mentioned economies.","A comprehensive view of the manifestations of aggregate demand and aggregate supply shocks in Greece, Ireland, Italy and Portugal",2020-07-20 17:42:51,Ionut Jianu,"http://arxiv.org/abs/2007.11439v1, http://arxiv.org/pdf/2007.11439v1",econ.GN
31513,gn,"This article uses data of subjective Life Satisfaction aggregated to the
community level in Canada and examines the spatial interdependencies and
spatial spillovers of community happiness. A theoretical model of utility is
presented. Using spatial econometric techniques, we find that the utility of
community, proxied by subjective measures of life satisfaction, is affected
both by the utility of neighbouring communities as well as by the latter's
average household income and unemployment rate. Shared cultural traits and
institutions may justify such spillovers. The results are robust to the
different binary contiguity spatial weights matrices used and to the various
econometric models. Clusters of both high-high and low-low Life Satisfaction
communities are also found based on the Moran's I test.",How happy are my neighbours? Modelling spatial spillover effects of well-being,2020-07-22 14:24:09,"Thanasis Ziogas, Dimitris Ballas, Sierdjan Koster, Arjen Edzes","http://arxiv.org/abs/2007.11580v1, http://arxiv.org/pdf/2007.11580v1",econ.GN
31514,gn,"This paper aims to estimate the impact of economic and financial crises on
the unemployment rate in the European Union, taking also into consideration the
institutional specificities, since unemployment was the main channel through
which the economic and financial crisis influenced social developments. In
this context, I formed two institutional clusters depending on countries'
inclusive or extractive institutional features and, in each case, I computed
the crisis effect on the unemployment rate over the 2003-2017 period. Both models
were estimated using the Panel Estimated Generalized Least Squares method and
are weighted by the Period SUR option in order to remove, in advance, possible
inconveniences of the models. Institutions proved to be a relevant
criterion that drives the impact of economic and financial crises on the
unemployment rate, highlighting that countries with inclusive institutions are
less vulnerable to economic shocks and are more resilient than countries with
extractive institutions. The quality of institutions was also found to have a
significant effect on the response of unemployment rate to the dynamic of its
drivers.",The Relationship between the Economic and Financial Crises and Unemployment Rate in the European Union -- How Institutions Affected Their Linkage,2020-07-20 17:18:56,Ionut Jianu,"http://arxiv.org/abs/2007.12007v1, http://arxiv.org/pdf/2007.12007v1",econ.GN
31515,gn,"Home advantage (HA) in football, soccer is well documented in the literature;
however, the explanation for such phenomenon is yet to be completed, as this
paper demonstrates that it is possible to overcome such disadvantage through
managerial planning and intervention (tactics), an effect so far absent in the
literature. To accomplish that, this study develops an integrative theoretical
model of team performance to explain HA based on prior literature, pushing its
limits to unfold the manager role in such a persistent pattern of performance
in soccer. Data on one decade of the national tournament of Brazil was obtained
from public sources, including information about matches and coaches of all 12
teams who played these competitions. Our conceptual modeling allows an
empirical analysis of HA and performance that covers the effects of tactics,
presence of supporters in matches and team fatigue via logistic regression. The
results confirm the HA in the elite division of Brazilian soccer across all
levels of comparative technical quality, a new variable introduced to control
for a potential technical gap between teams (something that would turn the
managerial influence null). Further analysis provides evidence that highlights
managerial capacity to block the HA effect above and beyond the influences of
fatigue (distance traveled) and density of people in the matches. This is the
case of coaches Abel Braga, Marcelo Fernandes and Roger Machado, who were
capable of reversing HA when playing against teams of similar quality. Overall,
the home advantage diminishes as the comparative quality increases but
disappears only when the two teams are of extremely different technical
quality.",Home Advantage in the Brazilian Elite Football: Verifying managers' capacity to outperform their disadvantage,2020-07-24 00:07:02,"Carlos Denner dos Santos, Jessica Alves","http://arxiv.org/abs/2007.12255v1, http://arxiv.org/pdf/2007.12255v1",econ.GN
31516,gn,"The dynamic capability and marketing strategy are challenges to the banking
sector in Indonesia. This study uses a survey method solving 39 banks in
Makassar. Data collection was conducted of questionnaires. The results show
that, the dynamic capability has a positive yet insignificant impact on the
organizational performance, the marketing strategy has a positive and
significant effect on organizational performance and, dynamic capability and
marketing strategy have a positive and significant effect on the organization's
performance in the banking sector in Makassar. Keywords : dynamic capability,
marketing strategy, organizational performance, banking","Effects of dynamic capability and marketing strategy on the organizational performance of the banking sector in Makassar, Indonesia",2020-07-24 13:02:35,"Akhmad Muhammadin, Rashila Ramli, Syamsul Ridjal, Muhlis Kanto, Syamsul Alam, Hamzah Idris","http://arxiv.org/abs/2007.12433v1, http://arxiv.org/pdf/2007.12433v1",econ.GN
31517,gn,"Social status and political connections could confer large economic benefits
to an individual. Previous studies focused on China examine the relationship
between Communist party membership and earnings and find a positive
correlation. However, this correlation may be partly or totally spurious,
thereby generating upwards-biased estimates of the importance of political
party membership. Using data from three surveys spanning more than three
decades, we estimate the causal effect of Chinese party membership on monthly
earnings in China. We find that, on average, membership in the Communist
party of China increases monthly earnings and we find evidence that the wage
premium has grown in recent years. We explore potential mechanisms and we
find suggestive evidence that improvements in one's social network, acquisition
of job-related qualifications and improvement in one's social rank and life
satisfaction likely play an important role. (JEL D31, J31, P2)",The Wage Premium of Communist Party Membership: Evidence from China,2020-07-23 21:35:25,"Plamen Nikolov, Hongjian Wang, Kevin Acker","http://arxiv.org/abs/2007.13549v1, http://arxiv.org/pdf/2007.13549v1",econ.GN
31518,gn,"This paper investigates the relationship between economic media sentiment and
individuals' expectations and perceptions about economic conditions. We test if
economic media sentiment Granger-causes individuals' expectations and opinions
concerning economic conditions, controlling for macroeconomic variables. We
develop a measure of economic media sentiment using a supervised machine
learning method on a data set of Swedish economic media during the period
1993-2017. We classify the sentiment of 179,846 media items, stemming from
1,071 unique media outlets, and use the number of news items with positive and
negative sentiment to construct a time series index of economic media
sentiment. Our results show that this index Granger-causes individuals'
perception of macroeconomic conditions. This indicates that the way the
economic media selects and frames macroeconomic news matters for individuals'
aggregate perception of macroeconomic reality.","Economic Reality, Economic Media and Individuals' Expectations",2020-07-27 22:30:08,Kristoffer Persson,"http://arxiv.org/abs/2007.13823v1, http://arxiv.org/pdf/2007.13823v1",econ.GN
31519,gn,"Using a semi-structural approach, the paper identifies how heterogeneity and
financial frictions affect the transmission of aggregate shocks. Approximating
a heterogeneous agent model around the representative agent allocation can
successfully trace the aggregate and distributional dynamics and can be
consistent with alternative mechanisms. Employing Spanish macroeconomic data as
well as firm and household survey data, the paper finds that frictions on both
consumption and investment have rich interactions with aggregate shocks. The
response of heterogeneity amplifies or dampens these effects depending on the
type of the shock. Both dispersion in consumption shares and the marginal
revenue product of firms, as well as the proportion of investment constrained
firms are key determinants of the fiscal multiplier.",Heterogeneity and the Dynamic Effects of Aggregate Shocks,2020-07-28 09:59:29,Andreas Tryphonides,"http://arxiv.org/abs/2007.14022v1, http://arxiv.org/pdf/2007.14022v1",econ.GN
31520,gn,"The employment status of billions of people has been affected by the COVID
epidemic around the globe. New evidence is needed on how to mitigate the job
market crisis, but there exists only a handful of studies mostly focusing on
developed countries. We fill in this gap in the literature by using novel data
from Ukraine, a transition country in Eastern Europe, which enacted strict
quarantine policies early on. We model four binary outcomes to identify
respondents (i) who are not working during quarantine, (ii) those who are more
likely to work from home, (iii) respondents who are afraid of losing a job,
and, finally, (iv) survey participants who have savings for 1 month or less if
quarantine is further extended. Our findings suggest that respondents employed
in public administration, programming and IT, as well as highly qualified
specialists, were more likely to secure their jobs during the quarantine.
Females, better educated respondents, and those who lived in Kyiv were more
likely to work remotely. Working in the public sector also made people more
confident about their future employment perspectives. Although our findings are
limited to urban households only, they provide important early evidence on the
correlates of job market outcomes, expectations, and financial security,
indicating potential deterioration of socio-economic inequalities.",Job market effects of COVID-19 on urban Ukrainian households,2020-07-30 22:41:57,"Tymofii Brik, Maksym Obrizan","http://arxiv.org/abs/2007.15704v1, http://arxiv.org/pdf/2007.15704v1",econ.GN
31521,gn,"This paper documents the representation of women in Economics academia in
India by analyzing the share of women in faculty positions, and their
participation in a prestigious conference held annually. Data from the elite
institutions shows that the presence of women as the Economics faculty members
remains low. Of the authors of the papers which were in the final schedule of
the prestigious research conference, the proportion of women authors is again
found to be disproportionately low. Our findings from further analysis indicate
that women are not under-represented at the post-graduate level. Further, the
proportion of women in doctoral programmes has increased over time, and is now
almost proportionate. The tendency of women who earn a doctorate abroad not to
return to India, the time needed to complete a doctoral program, and
responsibilities towards the family may explain the lower presence of women in
Economics academia in India.",Presence of Women in Economics Academia: Evidence from India,2020-11-01 00:42:37,"Ambrish Dongre, Karan Singhal, Upasak Das","http://arxiv.org/abs/2011.00366v2, http://arxiv.org/pdf/2011.00366v2",econ.GN
31522,gn,"Mobile phone-based sports betting has exploded in popularity in many African
countries. Commentators worry that low-ability gamblers will not learn from
experience, and may rely on debt to gamble. Using data on financial
transactions for over 50 000 Kenyan smartphone users, we find that gamblers do
learn from experience. Gamblers are less likely to bet following poor results
and more likely to bet following good results. The reaction to positive and
negative feedback is of equal magnitude and is consistent with a model of
Bayesian updating. Using an instrumental variables strategy, we find no
evidence that increased gambling leads to increased debt.",Gamblers Learn from Experience,2020-11-01 08:38:18,"Joshua E. Blumenstock, Matthew Olckers","http://arxiv.org/abs/2011.00432v2, http://arxiv.org/pdf/2011.00432v2",econ.GN
31524,gn,"This paper studies the impact of an university opening on incentives for
human capital accumulation of prospective students in its neighborhood. The
opening causes an exogenous fall on the cost to attend university, through the
decrease in distance, leading to an incentive to increase effort - shown by the
positive effect on students' grades. I use an event study approach with two-way
fixed effects to retrieve a causal estimate, exploiting the variation across
groups of students that receive treatment at different times - mitigating the
bias created by the decision of governments on the location of new
universities. Results show an average increase of $0.038$ standard deviations
in test grades, for the municipality where the university was established, and
are robust to a series of potential problems, including some of the usual
concerns in event study models.",How the Availability of Higher Education Affects Incentives? Evidence from Federal University Openings in Brazil,2020-11-06 01:13:48,Guilherme Jardim,"http://arxiv.org/abs/2011.03120v2, http://arxiv.org/pdf/2011.03120v2",econ.GN
31525,gn,"Agent-based computational economics (ACE) - while adopted comparably widely
in other domains of managerial science - is a rather novel paradigm for
management accounting research (MAR). This paper provides an overview of
opportunities and difficulties that ACE may have for research in management
accounting and, in particular, introduces a framework that researchers in
management accounting may employ when considering ACE as a paradigm for their
particular research endeavor. The framework builds on the two interrelated
paradigmatic elements of ACE: a set of theoretical assumptions on economic
agents and the approach of agent-based modeling. Particular focus is put on
contrasting opportunities and difficulties of ACE in comparison to other
research methods employed in MAR.",Agent-based Computational Economics in Management Accounting Research: Opportunities and Difficulties,2020-11-06 14:39:06,"Friederike Wall, Stephan Leitner","http://dx.doi.org/10.2308/jmar-19-073, http://arxiv.org/abs/2011.03297v1, http://arxiv.org/pdf/2011.03297v1",econ.GN
31526,gn,"The article pays close attention to obtaining gender assessments in the world
of work, which made it possible to characterize the effectiveness of social
policy aimed at achieving gender equality.",Socio-demographic goals within digitalization environment: a gender aspect,2020-11-06 15:22:34,"Olga A. Zolotareva, Aleksandr V. Bezrukov","http://arxiv.org/abs/2011.03310v1, http://arxiv.org/pdf/2011.03310v1",econ.GN
31527,gn,"Do firm dynamics matter for the transmission of monetary policy? Empirically,
the startup rate declines following a monetary contraction, while the exit rate
increases, both of which reduce aggregate employment. I present a model that
combines firm dynamics in the spirit of Hopenhayn (1992) with New-Keynesian
frictions and calibrate it to match cross-sectional evidence. The model can
qualitatively account for the responses of entry and exit rates to a monetary
policy shock. However, the responses of macroeconomic variables closely
resemble those in a representative-firm model. I discuss the equilibrium forces
underlying this approximate equivalence, and what may overturn this result.",Monetary Policy and Firm Dynamics,2020-11-06 21:45:36,Matthew Read,"http://arxiv.org/abs/2011.03514v1, http://arxiv.org/pdf/2011.03514v1",econ.GN
31528,gn,"This paper identifies latent group structures in the effect of motherhood on
employment by employing the C-Lasso, a recently developed, purely data-driven
classification method. Moreover, I assess how the introduction of the generous
German parental benefit reform in 2007 affects the different cluster groups by
taking advantage of an identification strategy that combines the sharp
regression discontinuity design and hypothesis testing of predicted employment
probabilities. The C-Lasso approach enables heterogeneous employment effects
across mothers, which are classified into an a priori unknown number of cluster
groups, each with its own group-specific effect. Using novel German
administrative data, the C-Lasso identifies three different cluster groups pre-
and post-reform. My findings reveal marked unobserved heterogeneity in maternal
employment and that the reform affects the identified cluster groups'
employment patterns differently.",Identifying Latent Structures in Maternal Employment: Evidence on the German Parental Benefit Reform,2020-11-06 16:51:52,Sophie-Charlotte Klose,"http://arxiv.org/abs/2011.03541v1, http://arxiv.org/pdf/2011.03541v1",econ.GN
31529,gn,"This paper studies politically feasible policy solutions to inequities in
local public goods provision. I focus in particular on the entwined issues of
high property taxes, geographic income disparities, and inequalities in public
education prevalent in the United States. It has long been recognized that with
a mobile population, local administration and funding of schools leads to
competition between districts. By accounting for heterogeneity in incomes and
home qualities, I am able to shed new light on this phenomenon, and make novel
policy recommendations. I characterize the equilibrium in a dynamic general
equilibrium model of location choice and education investment with a
competitive housing market, heterogeneous wealth levels and home qualities, and
strategic district governments. When all homes are owner-occupied, I show that
competition between strategic districts leads to over-taxation in an attempt to
attract wealthier residents. A simple class of policies that cap and/or tax the
expenditure of richer districts are Pareto improving, and thus politically
feasible. These policies reduce inequality in access to education while
increasing expenditure for under-funded schools. Gains are driven by mitigation
of the negative externalities generated by excessive spending among wealthier
districts. I also discuss the policy implications of the degree of
homeownership. The model sheds new light on observed patterns of homeownership,
location choice, and income. Finally, I test the assumptions and implications
empirically using a regression discontinuity design and data on property tax
referenda in Massachusetts.",Redistribution Through Tax Relief,2020-11-08 03:56:48,Quitzé Valenzuela-Stookey,"http://arxiv.org/abs/2011.03878v1, http://arxiv.org/pdf/2011.03878v1",econ.GN
31530,gn,"Electrification of transport in RES-based power system will support the
decarbonisation of the transportsector. However, due to the increase in energy
demand and the large peak effects of charging, the passiveintegration of
electric cars is likely to undermine sustainability efforts. This study
investigates three differentcharging strategies for electric vehicle in Europe
offering various degrees of flexibility: passive charging,smart charging and
vehicle-to-grid, and puts this flexibility in perspective with the flexibility
offered byinterconnections. We use the Balmorel optimization tool to represent
the short-term dispatch and long-terminvestment in the energy system and we
contribute to the state-of-the-art in developing new methodologiesto represent
home charging and battery degradation. Our results show how each step of
increased chargingflexibility reduces system costs, affects energy mix, impacts
spot prices and reduces CO2 emissions untilthe horizon 2050. We quantify how
flexible charging and variable generation mutually support each other(¿100TWh
from wind and solar energy in 2050) and restrict the business case for
stationary batteries, whereaspassive charging results in a substitution of wind
by solar energy. The comparison of each charging schemewith and without
interconnection expansion highlights the interplay between European countries
in terms ofelectricity prices and CO2 emissions in the context of electrified
transport. Although the best outcome isreached under the most flexible scenario
at the EU level, the situation of the countries with the cheapest andmost
decarbonised electricity mix is damaged, which calls for adapted coordination
policy at the EU level.",From passive to active: Flexibility from electric vehicles in the context of transmission system development,2020-11-11 17:49:30,"Philipp Andreas Gunkel, Claire Bergaentzlé, Ida Græsted Jensen, Fabian Scheller","http://dx.doi.org/10.1016/j.apenergy.2020.115526, http://arxiv.org/abs/2011.05830v1, http://arxiv.org/pdf/2011.05830v1",econ.GN
31531,gn,"Compliance with the public health guidelines during a pandemic requires
coordinated community actions which might be undermined in socially diverse
areas. In this paper, we assess the relationship between caste-group diversity
and the spread of COVID-19 infection during the nationwide lockdown and
unlocking period in India. On the extensive margin, we find that
caste-homogeneous districts systematically took more days to cross the
concentration thresholds of 50 to 500 cases. Estimates on the intensive margin,
using daily cases, further show that caste-homogeneous districts experienced
slower growth in infection. Overall, the effects of caste-group homogeneity
remained positive and statistically significant for 2.5 months (about 76 days)
after the beginning of the lockdown and weakened with subsequent phases of the
lockdown. The results hold even after accounting for the emergence of initial
hotspots before lockdown, broader diffusion patterns through daily fixed
effects, region fixed effects, and dynamic administrative response through
time-variant lagged COVID-19 fatalities at the district level. These effects
are not found to be confounded by differential levels of testing and
underreporting of cases in some states. Consistent estimates from bias-adjusted
treatment effects also ensure that our findings remain robust even after
accounting for other unobservables. We find suggestive evidence of higher
engagement of community health workers in caste-homogeneous localities, which
further increased after the lockdown. We posit this as one potential channel
that can explain our results. Our findings reveal how caste-group diversity can
be used to identify potential hotspots during public health emergencies and
emphasize the importance of community health workers and decentralized policy
response.",Social Diversity and Spread of Pandemic: Evidence from India,2020-11-11 18:06:48,"Upasak Das, Udayan Rathore, Prasenjit Sarkhel","http://arxiv.org/abs/2011.05839v2, http://arxiv.org/pdf/2011.05839v2",econ.GN
31532,gn,"Research attention on decentralized autonomous energy systems has increased
exponentially in the past three decades, as demonstrated by the absolute number
of publications and the share of these studies in the corpus of energy system
modelling literature. This paper shows the status quo and future modelling
needs for research on local autonomous energy systems. A total of 359 studies
are reviewed at a broad level, and a subset of 123 is examined in detail. The studies are
assessed with respect to the characteristics of their methodology and
applications, in order to derive common trends and insights. Most case studies
apply to middle-income countries and only focus on the supply of electricity in
the residential sector. Furthermore, many of the studies are comparable
regarding objectives and applied methods. Local energy autonomy is associated
with high costs, leading to levelized costs of electricity of 0.41 $/kWh on
average. By analysing the studies, many improvements for future studies could
be identified: the studies lack an analysis of the impact of autonomous energy
systems on surrounding energy systems. In addition, the robust design of
autonomous energy systems requires higher time resolutions and extreme
conditions. Future research should also develop methodologies to consider local
stakeholders and their preferences for energy systems.",Reviewing energy system modelling of decentralized energy autonomy,2020-11-11 20:13:23,"Jann Michael Weinand, Fabian Scheller, Russell McKenna","http://dx.doi.org/10.1016/j.energy.2020.117817, http://arxiv.org/abs/2011.05915v1, http://arxiv.org/pdf/2011.05915v1",econ.GN
31533,gn,"This paper studies existence and uniqueness of equilibrium prices in a model
of the banking sector in which banks trade contingent convertible bonds with
stock price triggers among each other. This type of financial product was
proposed as an instrument for stabilizing the global banking system after the
financial crisis. Yet it was recognized early on that these products may create
circularity problems in the definition of stock prices - even in the absence of
trade. We find that if conversion thresholds are such that bond holders are
indifferent about marginal conversions, there exists a unique equilibrium
irrespective of the network structure. When thresholds are lower, existence of
equilibrium breaks down while higher thresholds may lead to multiplicity of
equilibria. Moreover, there are complex network effects. One bank's conversion
may trigger further conversions - or prevent them, depending on the
constellations of asset values and conversion triggers.",Contingent Capital with Stock Price Triggers in Interbank Networks,2020-11-12 19:29:29,"Anne G. Balter, Nikolaus Schweizer, Juan C. Vera","http://arxiv.org/abs/2011.06474v1, http://arxiv.org/pdf/2011.06474v1",econ.GN
31534,gn,"Extreme Value Theory (EVT) is one of the most commonly used approaches in
finance for measuring the downside risk of investment portfolios, especially
during financial crises. In this paper, we propose a novel approach based on
EVT called Uncertain EVT to improve its forecast accuracy and capture the
statistical characteristics of risk beyond the EVT threshold. In our framework,
the extreme risk threshold, which is commonly assumed to be constant, is a dynamic
random variable. More precisely, we model and calibrate the EVT threshold by a
state-dependent hidden variable, called Break-Even Risk Threshold (BRT), as a
function of both risk and ambiguity. We will show that when the EVT approach is
combined with the unobservable BRT process, the Uncertain EVT's predicted VaR
can foresee the risk of large financial losses, outperforms the original EVT
approach out-of-sample, and is competitive to well-known VaR models when
back-tested for validity and predictability.",The Uncertain Shape of Grey Swans: Extreme Value Theory with Uncertain Threshold,2020-11-13 03:02:42,"Hamidreza Arian, Hossein Poorvasei, Azin Sharifi, Shiva Zamani","http://arxiv.org/abs/2011.06693v1, http://arxiv.org/pdf/2011.06693v1",econ.GN
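As a point of reference for the threshold modelling discussed above, a standard peaks-over-threshold EVT VaR with a fixed threshold can be sketched as follows; the loss series and the 95th-percentile threshold are illustrative assumptions, whereas the paper's contribution is precisely to replace the fixed threshold with a state-dependent Break-Even Risk Threshold.

```python
# Sketch of a classical peaks-over-threshold (POT) EVT VaR with a fixed threshold.
# The simulated loss series is purely illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
losses = -rng.standard_t(df=4, size=5000) * 0.01   # daily portfolio losses (dummy data)

u = np.quantile(losses, 0.95)                      # fixed EVT threshold (assumption)
exceed = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0)        # GPD shape and scale of exceedances

alpha, n, n_u = 0.99, len(losses), len(exceed)
# Standard POT quantile formula: VaR = u + (beta/xi) * [((n/n_u)*(1-alpha))^(-xi) - 1]
var = u + (beta / xi) * (((n / n_u) * (1 - alpha)) ** (-xi) - 1)
print(f"VaR(99%) = {var:.4f}")
```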
31535,gn,"In this study we investigated impact of crop diversification on
socio-economic life of tribal people from eastern ghats of India. We have
adopted linear regression formalism to check impact of cross diversification.
We observe a positive intercept for almost all factors. Coefficient of
correlation is calculated to examine the inter dependence of CDI and our
various individually measured dependent variables. A positive correlation is
observed in almost all factors. This study shows that a positive change
occurred in their social, economic life in the post diversification era.",Impact of crop diversification on socio-economic life of tribal farmers: A case study from Eastern ghats of India,2020-11-14 19:01:01,"Sadasiba Tripathy, Sandhyarani Das","http://arxiv.org/abs/2011.07326v1, http://arxiv.org/pdf/2011.07326v1",econ.GN
31536,gn,"This paper examines how the group membership fee influences the formation of
groups and the cooperation rate within the socialized groups. We found that
monetary transactions do not ruin the establishment of social ties and the
formation of group relations.",The effect of monetary incentives on sociality induced cooperation,2020-11-15 21:28:02,"Tatiana Kozitsina, Alexander Chaban, Evgeniya Lukinova, Mikhail Myagkov","http://arxiv.org/abs/2011.07597v1, http://arxiv.org/pdf/2011.07597v1",econ.GN
31581,gn,"We investigate the allegation that legacy U.S. airlines communicated via
earnings calls to coordinate with other legacy airlines in offering fewer seats
on competitive routes. To this end, we first use text analytics to build a
novel dataset on communication among airlines about their capacity choices.
Estimates from our preferred specification show that the number of offered
seats is 2% lower when all legacy airlines in a market discuss the concept of
""capacity discipline."" We verify that this reduction materializes only when
legacy airlines communicate concurrently, and that it cannot be explained by
other possibilities, including that airlines are simply announcing to investors
their unilateral plans to reduce capacity, and then following through on those
announcements.",Coordinated Capacity Reductions and Public Communication in the Airline Industry,2021-02-11 00:06:58,"Gaurab Aryal, Federico Ciliberto, Benjamin T. Leyden","http://arxiv.org/abs/2102.05739v3, http://arxiv.org/pdf/2102.05739v3",econ.GN
31537,gn,"There are high aspirations to foster growth in Namibia's Zambezi region via
the development of tourism. The Zambezi region is a core element of the
Kavango-Zambezi Transfrontier Conservation Area (KAZA), a mosaic of areas with
varying degrees of protection, which is designed to combine nature conservation
and rural development. These conservation areas serve as a resource base for
wildlife tourism, and growth corridor policy aims to integrate the region into
tourism global production networks (GPNs) by means of infrastructure
development. Despite the increasing popularity of growth corridors, little is
known about the effectiveness of this development strategy at local level. The
mixed-methods approach reveals that the improvement of infrastructure has led
to increased tourism in the region. However, the establishment of a territorial
conservation imaginary that results in the designation of conservation areas is
a necessary precondition for such a development. Despite the far-reaching
territorial claims associated with tourism, the benefits for rural residents
are limited.","Do tar roads bring tourism? Growth corridor policy and tourism development in the Zambezi region, Namibia",2020-11-16 12:23:52,"Linus Kalvelage, Javier Revilla Diez, Michael Bollig","http://dx.doi.org/10.1057/s41287-021-00402-3, http://arxiv.org/abs/2011.07809v1, http://arxiv.org/pdf/2011.07809v1",econ.GN
31538,gn,"Credit scoring is an essential tool used by global financial institutions and
credit lenders for financial decision making. In this paper, we introduce a new
method based on the Gaussian Mixture Model (GMM) to forecast the probability of
default for individual loan applicants. Clustering similar customers with each
other, our model associates a probability of being healthy to each group. In
addition, our GMM-based model probabilistically associates individual samples
to clusters, and then estimates the probability of default for each individual
based on how it relates to GMM clusters. We provide applications for risk
managers and decision makers in banks and non-bank financial institutions to
maximize profit and mitigate the expected loss by giving loans to those who
have a probability of default below a decision threshold. Our model has a
number of advantages. First, it gives a probabilistic view of credit standing
for each individual applicant instead of a binary classification and therefore
provides more information for financial decision makers. Second, the expected
loss on the training set calculated from our GMM-based default probabilities is very
close to the actual loss, and third, our approach is computationally efficient.",Forecasting Probability of Default for Consumer Loan Management with Gaussian Mixture Models,2020-11-16 15:40:53,"Hamidreza Arian, Seyed Mohammad Sina Seyfi, Azin Sharifi","http://arxiv.org/abs/2011.07906v1, http://arxiv.org/pdf/2011.07906v1",econ.GN
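A minimal sketch of the clustering-and-scoring logic described above may help fix ideas: fit a Gaussian mixture to applicant features, estimate a default rate per cluster, and score a new applicant as the responsibility-weighted average of those rates. The feature matrix, labels and number of components below are hypothetical, not the authors' specification.

```python
# Sketch: GMM-based probability of default via soft cluster memberships.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))      # applicant features (dummy data)
y_train = rng.integers(0, 2, size=500)   # 1 = defaulted, 0 = healthy (dummy labels)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train)
resp = gmm.predict_proba(X_train)                  # soft cluster memberships
cluster_pd = resp.T @ y_train / resp.sum(axis=0)   # default rate per cluster

def probability_of_default(x):
    """Responsibility-weighted default probability for one applicant."""
    weights = gmm.predict_proba(x.reshape(1, -1)).ravel()
    return float(weights @ cluster_pd)

print(probability_of_default(X_train[0]))  # compare against a lending threshold
```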
31539,gn,"We present a hierarchical architecture based on Recurrent Neural Networks
(RNNs) for predicting disaggregated inflation components of the Consumer Price
Index (CPI). While the majority of existing research is focused mainly on
predicting the inflation headline, many economic and financial entities are
more interested in its partial disaggregated components. To this end, we
developed the novel Hierarchical Recurrent Neural Network (HRNN) model that
utilizes information from higher levels in the CPI hierarchy to improve
predictions at the more volatile lower levels. Our evaluations, based on a
large data-set from the US CPI-U index, indicate that the HRNN model
significantly outperforms a vast array of well-known inflation prediction
baselines.",Forecasting CPI Inflation Components with Hierarchical Recurrent Neural Networks,2020-11-16 16:13:05,"Oren Barkan, Jonathan Benchimol, Itamar Caspi, Eliya Cohen, Allon Hammer, Noam Koenigstein","http://arxiv.org/abs/2011.07920v3, http://arxiv.org/pdf/2011.07920v3",econ.GN
31540,gn,"Monte Carlo Approaches for calculating Value-at-Risk (VaR) are powerful tools
widely used by financial risk managers across the globe. However, they are time
consuming and sometimes inaccurate. In this paper, a fast and accurate Monte
Carlo algorithm for calculating VaR and ES based on Gaussian Mixture Models is
introduced. Gaussian Mixture Models are able to cluster input data with respect
to market conditions, and therefore no correlation matrices are needed for
risk computation. Sampling from each cluster with respect to their weights and
then calculating the volatility-adjusted stock returns leads to possible
scenarios for prices of assets. Our results on a sample of US stocks show that
the GMM-based VaR model is computationally efficient and accurate. From a
managerial perspective, our model can efficiently mimic the turbulent behavior
of the market. As a result, our VaR measures before, during and after crisis
periods realistically reflect the highly non-normal behavior and non-linear
correlation structure of the market.",Portfolio Risk Measurement Using a Mixture Simulation Approach,2020-11-16 17:44:11,"Seyed Mohammad Sina Seyfi, Azin Sharifi, Hamidreza Arian","http://arxiv.org/abs/2011.07994v1, http://arxiv.org/pdf/2011.07994v1",econ.GN
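The mixture-simulation idea above can be sketched briefly: fit a Gaussian mixture to historical multi-asset returns, draw scenarios from it, and read VaR and ES off the simulated portfolio P&L. The return sample, weights and number of components are illustrative assumptions rather than the paper's calibration.

```python
# Sketch: Monte Carlo VaR/ES from scenarios sampled out of a fitted Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=(1000, 3)) * 0.01   # daily returns, 3 assets (dummy)
weights = np.array([0.5, 0.3, 0.2])                     # portfolio weights (assumption)

gmm = GaussianMixture(n_components=2, random_state=1).fit(returns)
scenarios, _ = gmm.sample(100_000)                      # simulated return scenarios
pnl = scenarios @ weights

alpha = 0.99
var = -np.quantile(pnl, 1 - alpha)        # 99% VaR reported as a positive loss
es = -pnl[pnl <= -var].mean()             # Expected Shortfall beyond the VaR level
print(f"VaR(99%) = {var:.4f}, ES(99%) = {es:.4f}")
```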
31541,gn,"We show that the quotient of Levy processes of jump-diffusion type has a
fat-tailed distribution. An application is to price theory in economics. We
show that fat tails arise endogenously from modeling of price change based on
an excess demand analysis resulting in a quotient of arbitrarily correlated
demand and supply whether or not jump discontinuities are present. The
assumption is that supply and demand are described by drift terms, Brownian
(i.e., Gaussian) and compound Poisson jump processes. If $P^{-1}dP/dt$ (the
relative price change in an interval $dt$) is given by a suitable function of
relative excess demand, $\left( \mathcal{D}-\mathcal{S}\right) /\mathcal{S}$
(where $\mathcal{D}$ and $\mathcal{S}$ are demand and supply), then the
distribution has tail behavior $F\left( x\right) \sim x^{-\zeta}$ for a power
$\zeta$ that depends on the function $G$ in $P^{-1}dP/dt=G\left(
\mathcal{D}/\mathcal{S}\right) $. For $G\left( x\right) \sim\left\vert
x\right\vert ^{1/q}$ one has $\zeta=q.$ The empirical data for assets typically
yields a value $\zeta \approx 3$, or $\zeta \in \left[ 3,5\right]$ for some
markets.
  The discrepancy between the empirical result and theory never arises if one
models price dynamics using basic economics methodology, i.e., generalized
Walrasian adjustment, rather than the usual starting point for classical
finance which assumes a normal distribution of price changes. The function $G$
is deterministic, and can be calibrated with a smaller data set. The results
establish a simple link between the decay exponent of the density function and
the price adjustment function, a feature that can improve methodology for risk
assessment.
  The mathematical results can be applied to other problems involving the
relative difference or quotient of Levy processes of jump-diffusion type.","Fat tails arise endogenously in asset prices from supply/demand, with or without jump processes",2020-11-17 00:03:27,Gunduz Caginalp,"http://arxiv.org/abs/2011.08275v2, http://arxiv.org/pdf/2011.08275v2",econ.GN
31547,gn,"The purpose of this study is to analyze the power of Islamic scholars
lectures to decide using Islamic banks with a customer response strength
approach. The sampling technique uses purposive sampling, and the respondents
were those who attended lectures on Islamic banks delivered by Islamic
scholars. The number of respondents who met the requirements was 96
respondents. Data were analyzed using the customer response strength method.
The instrument has met the valid and reliable criteria. The results showed 99%
of the total number of respondents acted according to their perceptions of the
contents of Islamic banks lectures. Lecture material delivered by scholars
about Islamic banks has a strong relationship with their responses ranging from
giving attention, interest, fostering desires, and beliefs to having an
interest in making transactions with Islamic banks.",The power of islamic scholars lecture to decide using Islamic bank with customer response strength approach,2020-11-22 09:57:23,"Suryo Budi Santoso, Herni Justiana Astuti","http://arxiv.org/abs/2011.10957v1, http://arxiv.org/pdf/2011.10957v1",econ.GN
31542,gn,"Big data sources provide a significant opportunity for governments and
development stakeholders to sense and identify in near real time, economic
impacts of shocks on populations at high spatial and temporal resolutions. In
this study, we assess the potential of transaction and location based measures
obtained from automatic teller machine (ATM) terminals, belonging to a major
private sector bank in Indonesia, to sense in near real time, the impacts of
shocks across income groups. For each customer and separately for years 2014
and 2015, we model the relationship between aggregate measures of cash
withdrawals for each year, total inter-terminal distance traversed by the
customer for the specific year and reported customer income group. Results
reveal that the model was able to predict the corresponding income groups with
80% accuracy, with high precision and recall values in comparison to the
baseline model, across both the years. Shapley values suggest that the total
inter-terminal distance traversed by a customer in each year differed
significantly between customer income groups. Kruskal-Wallis test further
showed that customers in the lower-middle class income group have
significantly higher median values of inter-terminal distances traversed (7.21
Kms for 2014 and 2015) in comparison to high (2.55 Kms and 0.66 Kms for years
2014 and 2015), and low (6.47 Kms for 2014 and 2015) income groups. Although no
major shocks were noted in 2014 and 2015, our results show that lower-middle
class income group customers, exhibit relatively high mobility in comparison to
customers in low and high income groups. Additional work is needed to leverage
the sensing capabilities of these data to provide insights on who, where and by
how much the population is impacted by a shock, in order to facilitate targeted
responses.",Assessing the use of transaction and location based insights derived from Automatic Teller Machines (ATMs) as near real time sensing systems of economic shocks,2020-11-16 18:14:58,"Dharani Dhar Burra, Sriganesh Lokanathan","http://arxiv.org/abs/2011.08721v2, http://arxiv.org/pdf/2011.08721v2",econ.GN
31543,gn,"Social media-transmitted online information, which is associated with
emotional expressions, shapes our thoughts and actions. In this study, we
incorporate social network theories and analyses and use a computational
approach to investigate how emotional expressions, particularly
\textit{negative discrete emotional expressions} (i.e., anxiety, sadness,
anger, and disgust), lead to differential diffusion of online content in social
media networks. We rigorously quantify diffusion cascades' structural
properties (i.e., size, depth, maximum breadth, and structural virality) and
analyze the individual characteristics (i.e., age, gender, and network degree)
and social ties (i.e., strong and weak) involved in the cascading process. In
our sample, more than six million unique individuals transmitted 387,486
randomly selected articles in a massive-scale online social network, WeChat. We
detect the expression of discrete emotions embedded in these articles, using a
newly generated domain-specific and up-to-date emotion lexicon. We apply a
partially linear instrumental variable approach with a double machine learning
framework to causally identify the impact of the negative discrete emotions on
online content diffusion. We find that articles with more expressions of
anxiety spread to a larger number of individuals and diffuse more deeply,
broadly, and virally. Expressions of anger and sadness, however, reduce
cascades' size and maximum breadth. We further show that the articles with
different degrees of negative emotional expressions tend to spread differently
based on individual characteristics and social ties. Our results shed light on
content marketing and regulation, utilizing negative emotional expressions.",Emotions in Online Content Diffusion,2020-11-18 02:38:10,"Yifan Yu, Shan Huang, Yuchen Liu, Yong Tan","http://arxiv.org/abs/2011.09003v2, http://arxiv.org/pdf/2011.09003v2",econ.GN
31544,gn,"Does COVID-19 change the willingness to cooperate? Four Austrian university
courses in economics play a public goods game in consecutive semesters on the
e-learning platform Moodle: two of them in the year before the crisis, one
immediately after the beginning of the first lockdown in March 2020 and the
last one in the days before the announcement of the second lockdown in October
2020. Between 67% and 76% of the students choose to cooperate, i.e. contribute
to the public good, in the pre-crisis year. Immediately after the imposition of
the lockdown, 71% choose to cooperate. Seven months into the crisis, however,
cooperation drops to 43%. Depending on whether two types of biases resulting
from the experimental design are eliminated or not, probit and logit
regressions show that this drop is statistically significant at the 0.05 or the
0.1 significance level.",Cooperation in the Age of COVID-19: Evidence from Public Goods Games,2020-11-18 13:23:03,Patrick Mellacher,"http://arxiv.org/abs/2011.09189v1, http://arxiv.org/pdf/2011.09189v1",econ.GN
31545,gn,"There is a growing interest in the integration of energy infrastructures to
increase systems' flexibility and reduce operational costs. The most studied
case is the synergy between electric and heating networks. Even though
integrated heat and power markets can be described by a convex optimization
problem, prices derived from dual values do not guarantee cost recovery. In
this work, a two-step approach is presented for the calculation of the optimal
energy dispatch and prices. The proposed methodology guarantees cost-recovery
for each of the energy vectors and revenue-adequacy for the integrated market.",Pricing in Integrated Heat and Power Markets,2020-11-20 15:49:15,"Alvaro Gonzalez-Castellanos, David Pozo, Aldo Bischi","http://arxiv.org/abs/2011.10384v1, http://arxiv.org/pdf/2011.10384v1",econ.GN
31546,gn,"Various poverty reduction strategies are being implemented in the pursuit of
eliminating extreme poverty. One such strategy is increased access to
microcredit in poor areas around the world. Microcredit, typically defined as
the supply of small loans to underserved entrepreneurs that originally aimed at
displacing expensive local money-lenders, has been both praised and criticized
as a development tool (Banerjee et al., 2015b). This paper presents an analysis
of heterogeneous impacts from increased access to microcredit using data from
three randomised trials. Recognising that, in general, the impact of a policy
intervention varies conditional on an unknown set of factors, we investigate in
particular whether heterogeneity presents itself as groups of winners and
losers, and whether such subgroups share characteristics across RCTs. We find
no evidence of impacts, either average or distributional, from increased access
to microcredit on consumption levels. In contrast, the lack of average effects
on profits seems to mask heterogeneous impacts. The findings are, however, not
robust to the specific machine learning algorithm applied. Switching from the
better-performing Elastic Net to the worse-performing Random Forest leads to a
sharp increase in the variance of the estimates. In this context, methods for
evaluating the relative performance of machine learning algorithms, developed
by Chernozhukov et al. (2019), provide a disciplined way for the analyst to
counter the uncertainty as to which
algorithm to deploy.",Understanding the Distributional Aspects of Microcredit Expansions,2020-11-20 20:08:39,"Melvyn Weeks, Tobias Gabel Christiansen","http://arxiv.org/abs/2011.10509v1, http://arxiv.org/pdf/2011.10509v1",econ.GN
31558,gn,"This paper explores the link between education and the decision to start
smoking as well as the decision to quit smoking. Data is gathered from IPUMS
CPS and the Centers for Disease Control and Prevention. Probit analysis (using
probability weights and robust standard errors) indicates that every additional
year of education reduces smoking probability by 2.3 percentage points and
increases the likelihood of quitting by 3.53 percentage points, holding home
restrictions, public restrictions, cigarette price, family income, age, gender,
race, and ethnicity constant. I believe that the tobacco epidemic is a
serious global issue that may be mitigated by careful regulations on
smoking restriction and education.",The Effect of Education on Smoking Decisions in the United States,2020-11-15 05:58:07,Sang T. Truong,"http://arxiv.org/abs/2011.14834v1, http://arxiv.org/pdf/2011.14834v1",econ.GN
31548,gn,"The purpose of this study is the design model of Islamic bank socialization
in terms of four pillars (Business Institution, Formal Education, Islamic
Scholar and Higher Education) through Synergy and Proactive. The location of
the study was conducted in the Regency of Banyumas, Indonesia. The results of
the survey on respondents obtained 145 respondents' answers that deserve to be
analyzed. Data were analyzed using SEM models with Partial Least Squares
approach, designing measurement models (outer models) and designing inner
models. The results of the calculation outside the model of all measurements
are more than the minimum criteria required by removing Formal Education from
the model because it does not meet the requirements. While the inner model
results show that the socialization model was only built by the Business
Institution, Islamic Scholar and Higher Education through synergy and
proactivity. All independent variables directly influence the dependent
variable, while the intervening variables also significantly influence except
the relationship of Islamic Scholar Islamic to Bank Socialization through
Proactive.",A Framework for Conceptualizing Islamic Bank Socialization in Indonesia,2020-11-22 10:06:03,"Suryo Budi Santoso, Herni Justiana Astuti","http://arxiv.org/abs/2011.10958v1, http://arxiv.org/pdf/2011.10958v1",econ.GN
31549,gn,"Human decision-making differs due to variation in both incentives and
available information. This constitutes a substantial challenge for the
evaluation of whether and how machine learning predictions can improve decision
outcomes. We propose a framework that incorporates machine learning on
large-scale data into a choice model featuring heterogeneity in decision maker
payoff functions and predictive skill. We apply this framework to the major
health policy problem of improving the efficiency in antibiotic prescribing in
primary care, one of the leading causes of antibiotic resistance. Our analysis
reveals large variation in physicians' skill to diagnose bacterial infections
and in how physicians trade off the externality inherent in antibiotic use
against its curative benefit. Counterfactual policy simulations show that the
combination of machine learning predictions with physician diagnostic skill
results in a 25.4 percent reduction in prescribing and achieves the largest
welfare gains compared to alternative policies, for both estimated physician
and conservative social planner preference weights on the antibiotic
resistance externality.",Machine Predictions and Human Decisions with Variation in Payoffs and Skill,2020-11-22 16:53:24,"Michael Allan Ribers, Hannes Ullrich","http://arxiv.org/abs/2011.11017v1, http://arxiv.org/pdf/2011.11017v1",econ.GN
31550,gn,"This study investigates the impact of competitive project-funding on
researchers' publication outputs. Using detailed information on applicants at
the Swiss National Science Foundation (SNSF) and their proposals' evaluation,
we employ a case-control design that accounts for individual heterogeneity of
researchers and selection into treatment (e.g. funding). We estimate the impact
of grant award on a set of output indicators measuring the creation of new
research results (the number of peer-reviewed articles), its relevance (number
of citations and relative citation ratios), as well as its accessibility and
dissemination as measured by the publication of preprints and by altmetrics.
The results show that the funding program facilitates the publication and
dissemination of additional research amounting to about one additional article
in each of the three years following the grant. The higher citation metrics and
altmetrics of publications by funded researchers suggest that the effect goes
beyond quantity: funding also fosters quality and impact.",The Impact of Research Funding on Knowledge Creation and Dissemination: A study of SNSF Research Grants,2020-11-23 11:43:49,"Rachel Heyard, Hanna Hottenrott","http://arxiv.org/abs/2011.11274v2, http://arxiv.org/pdf/2011.11274v2",econ.GN
31551,gn,"In this paper, we analyzed how business age and mortality are related during
the first years of life, and tested the different hypotheses proposed in the
literature. For that, we used data on U.S. business establishments, with 1-year
resolution in the range of age of 0-5 years, in the period 1977-2016, published
by the United States Census Bureau. First, we explored the adaptation of
classical techniques of survival analysis (the Life Table and Peto-Turnbull
methods) to the business survival analysis. Then, we considered nine parametric
probabilistic models, most of them well-known in reliability analysis and in
the actuarial literature, with different shapes of the hazard function, that we
fitted by the maximum likelihood method and compared using the Akaike information
criterion. Our findings show that newborn firms seem to have a decreasing
failure rate with age during the first five years in the market, with the
exception of the first months of some years in which the risk can rise.",The risk of death in newborn businesses during the first years in market,2020-11-24 01:35:25,"Faustino Prieto, José María Sarabia, Enrique Calderín-Ojeda","http://arxiv.org/abs/2011.11776v1, http://arxiv.org/pdf/2011.11776v1",econ.GN
31552,gn,"Using high-quality nation-wide social security data combined with machine
learning tools, we develop predictive models of income support receipt
intensities for any payment enrolee in the Australian social security system
between 2014 and 2018. We show that off-the-shelf machine learning algorithms
can significantly improve predictive accuracy compared to simpler heuristic
models or early warning systems currently in use. Specifically, the former
predicts the proportion of time individuals are on income support in the
subsequent four years with greater accuracy, by a magnitude of at least 22% (a
14 percentage point increase in the R2), compared to the latter. This gain can be
achieved at no extra cost to practitioners since the algorithms use
administrative data currently available to caseworkers. Consequently, our
machine learning algorithms can improve the detection of long-term income
support recipients, which can potentially provide governments with large
savings in accrued welfare costs.",Using Machine Learning to Create an Early Warning System for Welfare Recipients,2020-11-24 15:19:30,"Dario Sansone, Anna Zhu","http://arxiv.org/abs/2011.12057v2, http://arxiv.org/pdf/2011.12057v2",econ.GN
31565,gn,"This paper surveys recent applications of methods from the theory of optimal
transport to econometric problems.",A survey of some recent applications of optimal transport methods to econometrics,2021-02-02 22:16:59,Alfred Galichon,"http://dx.doi.org/10.1111/ectj.12083, http://arxiv.org/abs/2102.01716v1, http://arxiv.org/pdf/2102.01716v1",econ.GN
31606,gn,"I propose a new conceptual framework to disentangle the impacts of weather
and climate on economic activity and growth: A stochastic frontier model with
climate in the production frontier and weather shocks as a source of
inefficiency. I test it on a sample of 160 countries over the period 1950-2014.
Temperature and rainfall determine production possibilities in both rich and
poor countries; positively in cold countries and negatively in hot ones.
Weather anomalies reduce inefficiency in rich countries but increase
inefficiency in poor and hot countries; and more so in countries with low
weather variability. The climate effect is larger than the weather effect.",The economic impact of weather and climate,2021-02-25 21:08:12,Richard S. J. Tol,"http://arxiv.org/abs/2102.13110v3, http://arxiv.org/pdf/2102.13110v3",econ.GN
31553,gn,"Many workers at the production department of Libyan Textile Company work with
different performances. Plan of company management is paying the money
according to the specific performance and quality requirements for each worker.
Thus, it is important to predict the accurate evaluation of workers to extract
the knowledge for management, how much money it will pay as salary and
incentive. For example, if the evaluation is average, then management of the
company will pay part of the salary. If the evaluation is good, then it will
pay full salary, moreover, if the evaluation is excellent, then it will pay
salary plus incentive percentage. Twelve variables with 121 instances for each
variable collected to predict the evaluation of the process for each worker.
Before starting classification, feature selection used to predict the
influential variables which impact the evaluation process. Then, four
algorithms of decision trees used to predict the output and extract the
influential relationship between inputs and output. To make sure get the
highest accuracy, ensemble algorithm (Bagging) used to deploy four algorithms
of decision trees and predict the highest prediction result 99.16%. Standard
errors for four algorithms were very small; this means that there is a strong
relationship between inputs (7 variables) and output (Evaluation). The curve of
(Receiver operating characteristics) for algorithms gave a high-level
specificity and sensitivity, and Gain charts were very close to together.
According to the results, management of the company should take a logic
decision about the evaluation of production process and extract the important
variables that impact the evaluation.",Use Bagging Algorithm to Improve Prediction Accuracy for Evaluation of Worker Performances at a Production Company,2020-11-24 22:52:22,Hamza Saad,"http://dx.doi.org/10.4172/2169-0316.1000257, http://arxiv.org/abs/2011.12343v1, http://arxiv.org/pdf/2011.12343v1",econ.GN
31554,gn,"Index insurance has been promoted as a promising solution for reducing
agricultural risk compared to traditional farm-based insurance. By linking
payouts to a regional factor instead of individual loss, index insurance
reduces monitoring costs, and alleviates the problems of moral hazard and
adverse selection. Despite its theoretical appeal, demand for index insurance
has remained low in many developing countries, triggering a debate on the
causes of the low uptake. Surprisingly, there has been little discussion in
this debate about the experience in the United States. The US is a unique case,
as both farm-based and index-based products have been available for more than
two decades. Furthermore, the number of insurance zones is very large, allowing
interesting comparisons over space. As in developing countries, the adoption of
index insurance is rather low -- less than 5\% of insured acreage. Does this
mean that we should give up on index insurance?
  In this paper, we investigate the low take-up of index insurance in the US,
leveraging a field-level dataset for corn and soybean obtained from satellite
predictions. While previous studies were based either on county aggregates or
on relatively small farm-level datasets, our satellite-derived data give us a
very large number of fields (close to 1.8 million) spread across a large
number of index zones (600) observed over 20 years. To evaluate the suitability
of index insurance, we run a large-scale simulation comparing the benefits of
both insurance schemes using a new measure of farm-equivalent risk coverage of
index insurance. We make two main contributions. First, we show that in our
simulations, demand for index insurance is unexpectedly high, at about 30\% to
40\% of total demand. This result is robust to relaxing several assumptions of
the model and to using prospect theory instead of expected utility.",On the benefits of index insurance in US agriculture: a large-scale analysis using satellite data,2020-11-25 09:42:05,"Matthieu Stigler, David Lobell","http://arxiv.org/abs/2011.12544v2, http://arxiv.org/pdf/2011.12544v2",econ.GN
31555,gn,"In this paper, we investigate the effectiveness of conventional and
unconventional monetary policy measures by the European Central Bank (ECB)
conditional on the prevailing level of uncertainty. To obtain exogenous
variation in central bank policy, we rely on high-frequency surprises in
financial market data for the euro area (EA) around policy announcement dates.
We trace the dynamic effects of shocks to the short-term policy rate, forward
guidance and quantitative easing on several key macroeconomic and financial
quantities alongside survey-based measures of expectations. For this purpose,
we propose a Bayesian smooth-transition vector autoregression (ST-VAR). Our
results suggest that transmission channels are impaired when uncertainty is
elevated. While conventional monetary policy is less effective during such
periods, and sometimes also forward guidance, quantitative easing measures seem
to work comparatively well in uncertain times.",On the effectiveness of the European Central Bank's conventional and unconventional policies under uncertainty,2020-11-29 22:21:55,"Niko Hauzenberger, Michael Pfarrhofer, Anna Stelzer","http://arxiv.org/abs/2011.14424v1, http://arxiv.org/pdf/2011.14424v1",econ.GN
31556,gn,"How do severe shocks such as war alter the economy? We study how a country's
production network is affected by a devastating but localized conflict. Using
unique transaction-level data on Ukrainian railway shipments around the start
of the 2014 Russia-Ukraine crisis, we uncover several novel indirect effects of
conflict on firms. First, we document substantial propagation effects on
interfirm trade--trade declines even between partners outside the conflict
areas if one of them had traded with those areas before the conflict events.
The magnitude of such a second-degree effect of conflict is one-fifth of the
first-degree effect. Ignoring this propagation would lead to an underestimate
of the total impact of conflict on trade by about 67\%. Second, war induces
sudden changes in the production-network structure that influence firm
performance. Specifically, we find that firms that exogenously became more
central--after the conflict practically cut off certain regions from the rest
of Ukraine--received a relative boost to their revenues. Finally, in a
production-network model, we separately estimate the effects of the exogenous
firm removal and the subsequent endogenous network adjustment on firm revenue
distribution. For a median firm, network adjustment compensates for 80\% of the
network destruction a year after the conflict onset.",Production Networks and War,2020-11-30 16:11:35,"Vasily Korovkin, Alexey Makarin","http://arxiv.org/abs/2011.14756v4, http://arxiv.org/pdf/2011.14756v4",econ.GN
31557,gn,"The aim of this paper is twofold:1)contribute to a better understanding of
the place of women in Economics and Management disciplines by characterizing
the difference in levels of scientific collaboration between men and women at
the specialties level;2) Investigate the relationship between gender diversity
and citation impact in Economics and Management. Our data, extracted from the
Web of Science database, cover global production as indexed in 302 journals in
Economics and 370 journals in Management, with respectively 153 667 and 163 567
articles published between 2008 and 2018. Results show that collaborative
practices between men and women are quite different in Economics and
Management. We also find that there is a positive and significant effect of
gender diversity on the academic impact of publications. Mixed-gender
publications (co-authored by men and women) receive more citations than
non-mixed papers (written by same-gender author teams) or single-author
publications. The effect is slightly stronger in Management. The regression
analysis also indicates that there is, for both disciplines, a small negative
effect on citations received if the corresponding author is a woman.",Gender diversity in research teams and citation impact in Economics and Management,2020-11-24 00:16:35,"Abdelghani Maddi, Yves Gingras","http://arxiv.org/abs/2011.14823v1, http://arxiv.org/pdf/2011.14823v1",econ.GN
31559,gn,"The COVID-19 pandemic has posed an unprecedented challenge to individuals
around the globe. To mitigate the spread of the virus, many states in the U.S.
issued lockdown orders to urge their residents to stay at their homes, avoid
get-togethers, and minimize physical interactions. While many offline workers
are experiencing significant challenges performing their duties, digital
technologies have provided ample tools for individuals to continue working and
to maintain their productivity. Although using digital platforms to build
resilience in remote work is effective, other aspects of remote work (beyond
the continuation of work) should also be considered in gauging true resilience.
In this study, we focus on content creators and investigate how restrictions
on individuals' physical environment impact their online content creation
behavior. Exploiting a natural experimental setting wherein four states issued
state-wide lockdown orders on the same day whereas five states never issued a
lockdown order, and using a unique dataset collected from a short video-sharing
social media platform, we study the impact of lockdown orders on content
creators' behaviors in terms of content volume, content novelty, and content
optimism. We combined econometric methods (difference-in-differences
estimations of a matched sample) with machine learning-based natural language
processing to show that on average, compared to the users residing in
non-lockdown states, the users residing in lockdown states create more content
after the lockdown order enforcement. However, we find a decrease in the
novelty level and optimism of the content generated by the latter group. Our
findings make important contributions to the digital resilience literature and
shed light on managers' decision-making process related to the adjustment of
employees' work mode in the long run.",The Unintended Consequences of Stay-at-Home Policies on Work Outcomes: The Impacts of Lockdown Orders on Content Creation,2020-11-30 21:05:17,"Xunyi Wang, Reza Mousavi, Yili Hong","http://arxiv.org/abs/2011.15068v1, http://arxiv.org/pdf/2011.15068v1",econ.GN
31560,gn,"How has the science system reacted to the early stages of the COVID-19
pandemic? Here we compare the (growing) international network for coronavirus
research with the broader international health science network. Our findings
show that, before the outbreak, coronavirus research realized a relatively
small and rather peculiar niche within the global health sciences. As a
response to the pandemic, the international network for coronavirus research
expanded rapidly along the hierarchical structure laid out by the global health
science network. Thus, in the face of the crisis, the global health science system
proved to be structurally stable yet versatile in research. The observed
versatility supports optimistic views on the role of science in meeting future
challenges. However, the stability of the global core-periphery structure may
be worrying, because it reduces learning opportunities and social capital of
scientifically peripheral countries -- not only during this pandemic but also
in its ""normal"" mode of operation.",Global health science leverages established collaboration network to fight COVID-19,2021-01-30 22:39:35,"Stefano Bianchini, Moritz Müller, Pierre Pelletier, Kevin Wirtz","http://arxiv.org/abs/2102.00298v1, http://arxiv.org/pdf/2102.00298v1",econ.GN
31561,gn,"This paper reports on a two-tiered experiment designed to separately identify
the selection and effort margins of pay-for-performance (P4P). At the
recruitment stage, teacher labor markets were randomly assigned to a
'pay-for-percentile' or fixed-wage contract. Once recruits were placed, an
unexpected, incentive-compatible, school-level re-randomization was performed,
so that some teachers who applied for a fixed-wage contract ended up being paid
by P4P, and vice versa. By the second year of the study, the within-year effort
effect of P4P was 0.16 standard deviations of pupil learning, with the total
effect rising to 0.20 standard deviations after allowing for selection.","Recruitment, effort, and retention effects of performance contracts for civil servants: Experimental evidence from Rwandan primary schools",2021-01-31 16:06:41,"Clare Leaver, Owen Ozier, Pieter Serneels, Andrew Zeitlin","http://arxiv.org/abs/2102.00444v1, http://arxiv.org/pdf/2102.00444v1",econ.GN
31562,gn,"The COVID-19 outbreak has posed an unprecedented challenge to humanity and
science. On the one side, public and private incentives have been put in place
to promptly allocate resources toward research areas strictly related to the
COVID-19 emergency. But on the flip side, research in many fields not directly
related to the pandemic has lagged behind. In this paper, we assess the impact
of COVID-19 on world scientific production in the life sciences. We investigate
how the usage of medical subject headings (MeSH) has changed following the
outbreak. We estimate through a difference-in-differences approach the impact
of COVID-19 on scientific production through PubMed. We find that COVID-related
research topics have risen to prominence, displaced clinical publications,
diverted funds away from research areas not directly related to COVID-19 and
that the number of publications on clinical trials in unrelated fields has
contracted. Our results call for urgent targeted policy interventions to
reactivate biomedical research in areas that have been neglected by the
COVID-19 emergency.",The Impact of the COVID-19 Pandemic on Scientific Research in the Life Sciences,2021-01-31 20:32:32,"Massimo Riccaboni, Luca Verginer","http://dx.doi.org/10.1371/journal.pone.0263001, http://arxiv.org/abs/2102.00497v1, http://arxiv.org/pdf/2102.00497v1",econ.GN
31563,gn,"The main advantage of wind and solar power plants is the power production
free of CO2. Their main disadvantage is the volatility of the generated power.
According to the estimates of H.-W. Sinn[1], suppressing this volatility
requires pumped-storage plants with a huge capacity, several orders of
magnitude larger than the presently available capacity in Germany[2]. Sinn
concluded that wind-solar power can be used only together with conventional
power plants as backups. However, based on German power data[3] of 2019, we show
that the required storage capacity can be significantly reduced, provided that
i) a surplus of wind-solar power plants is supplied, ii) smart meters are
installed, and iii) partly different kinds of wind turbines and solar panels are
used in Germany. Our calculations suggest that all the electric energy presently
produced in Germany can be obtained from wind-solar power alone. Our results
further let us predict that wind-solar power can additionally produce the
energy for transportation, warm water, space heating and, in part, process
heating, implying an increase of the present electric energy production by a
factor of about 5[1]. Of course, to put such a prediction on firm ground, the
present calculations have to be confirmed over a period of many years. It
should also be kept in mind that, in any case, a huge number of wind turbines and
solar panels is required.",Controlling volatility of wind-solar power,2021-02-01 04:54:34,Hans Lustfeld,"http://arxiv.org/abs/2102.00587v1, http://arxiv.org/pdf/2102.00587v1",econ.GN
31564,gn,"The worries expressed by Alan Greenspan that the long run economic growth of
the United States will fade away due to the increasing burden of entitlements
motivated us to empirically investigate the impact of entitlements on key
macroeconomic variables. To examine this contemporary issue, a vector
error-correction model is used to analyze the impact of entitlements on
the price level, real output, and the long-term interest rate. The results show
that a shock to entitlements leads to a decrease in output and lends support to
the assertion made by Alan Greenspan. Several robustness checks are conducted,
and the results of the model remain qualitatively unchanged.",The Macroeconomic Impacts of Entitlements,2021-02-02 20:07:40,"Ateeb Akhter Shah Syed, Kaneez Fatima, Riffat Arshad","http://arxiv.org/abs/2102.01609v1, http://arxiv.org/pdf/2102.01609v1",econ.GN
31566,gn,"We argue that models coming from a variety of fields, such as matching models
and discrete choice models among others, share a common structure that we call
matching function equilibria with partial assignment. This structure includes
an aggregate matching function and a system of nonlinear equations. We provide
a proof of existence and uniqueness of an equilibrium and propose an efficient
algorithm to compute it. For a subclass of matching models, we also develop a
new parameter-free approach for constructing the counterfactual matching
equilibrium. It has the advantage of not requiring parametric estimation when
computing counterfactuals. We use our procedure to analyze the impact of the
elimination of the Social Security Student Benefit Program in 1982 on the
marriage market in the United States. We estimate several candidate models from
our general class of matching functions and select the best-fitting model using
an information-based criterion.","Matching Function Equilibria with Partial Assignment: Existence, Uniqueness and Estimation",2021-02-03 17:02:39,"Liang Chen, Eugene Choo, Alfred Galichon, Simon Weber","http://arxiv.org/abs/2102.02071v2, http://arxiv.org/pdf/2102.02071v2",econ.GN
31567,gn,"The purpose of this dissertation is to present an overview of the operational
and financial performance of airports in Europe. In benchmarking studies,
airports are assessed and compared with other airports based on key indicators
from a technical and an economic point of view. The interest lies primarily in
the question of which key figures can best measure passengers' perception of
the quality of service at an airport.","Airport Capacity and Performance in Europe -- A study of transport economics, service quality and sustainability",2021-02-04 05:42:36,Branko Bubalo,"http://dx.doi.org/10.25592/uhhfdm.8631, http://arxiv.org/abs/2102.02379v1, http://arxiv.org/pdf/2102.02379v1",econ.GN
31568,gn,"I show that the structure of the firm is not neutral in respect to regulatory
capital budgeted under rules which are based on the Value-at-Risk.",The VAR at Risk,2021-02-04 15:30:07,Alfred Galichon,"http://dx.doi.org/10.1142/S0219024910005875, http://arxiv.org/abs/2102.02577v1, http://arxiv.org/pdf/2102.02577v1",econ.GN
31569,gn,"This paper exhibits a duality between the theory of Revealed Preference of
Afriat and the housing allocation problem of Shapley and Scarf. In particular,
it is shown that Afriat's theorem can be interpreted as a second welfare
theorem in the housing problem. Using this duality, the revealed preference
problem is connected to an optimal assignment problem, and a geometrical
characterization of the rationalizability of experimental data is given. This
in turn allows us to give new indices of rationalizability of the data, and to
define weaker notions of rationalizability, in the spirit of Afriat's
efficiency index.",The housing problem and revealed preference theory: duality and an application,2021-02-04 16:09:50,"Ivar Ekeland, Alfred Galichon","http://dx.doi.org/10.1007/s00199-012-0719-x, http://arxiv.org/abs/2102.02593v1, http://arxiv.org/pdf/2102.02593v1",econ.GN
31570,gn,"The Massachusetts Attorney General issued an Enforcement Notice in 2016 to
announce a new interpretation of a key phrase in the state's assault weapons
ban. The Enforcement Notice increased sales of tagged assault rifles by 616% in
the first 5 days, followed by a 9% decrease over the next three weeks. Sales of
Handguns and Shotguns did not change significantly. Tagged assault rifle sales
fell 28-30% in 2017 compared to previous years, suggesting that the Enforcement
Notice reduced assault weapon sales but also that many banned weapons continued
to be sold. Tagged assault rifles sold most in 2017 in zip codes with higher
household incomes and proportions of white males. Overall, the results suggest
that the firearm market reacts rapidly to policy changes and partially complies
with firearm restrictions.",How the Massachusetts Assault Weapons Ban Enforcement Notice Changed Firearm Sales,2021-02-04 23:55:31,"Meenakshi Balakrishna, Kenneth C. Wilbur","http://arxiv.org/abs/2102.02884v1, http://arxiv.org/pdf/2102.02884v1",econ.GN
31571,gn,"Does the ability to pledge an asset as collateral, after purchase, affect its
price? This paper identifies the impact of collateral service flows on house
prices, exploiting a plausibly exogenous constitutional amendment in Texas
which legalized home equity loans in 1998. The law change increased Texas house
prices 4%; this is price-based evidence that households are credit-constrained
and value home equity loans to facilitate consumption smoothing. Prices rose
more in locations with inelastic supply, higher pre-law house prices, higher
income, and lower unemployment. These estimates reveal that richer households
value the option to pledge their home as collateral more strongly.",Does Collateral Value Affect Asset Prices? Evidence from a Natural Experiment in Texas,2021-02-05 03:37:49,Albert Alex Zevelev,"http://dx.doi.org/10.1093/rfs/hhaa117, http://arxiv.org/abs/2102.02935v1, http://arxiv.org/pdf/2102.02935v1",econ.GN
31572,gn,"The involvement of children in the family dairy farming is pivotal point to
reduce the cost of production input, especially in smallholder dairy farming.
The purposes of the study are to analysis the factors that influence children's
participation in working in the family dairy farm. The study was held December
2020 in the development center of dairy farming in Pangalengan subdistrict,
West Java Province, Indonesia. The econometric method used in the study was the
logit regression model. The results of the study determine that the there were
number of respondents who participates in family farms was 52.59% of total
respondents, and the rest was no participation in the family farms. There are 3
variables in the model that are very influential on children's participation in
the family dairy farming, such as X1 (number of dairy farm land ownership), X2
(number of family members), and X6 (the amount of work spent on the family's
dairy farm). Key words: Participation, children, family, dairy farming, logit
model","Econometric model of children participation in family dairy farming in the center of dairy farming, West Java Province, Indonesia",2021-02-05 17:16:16,"Achmad Firman, Ratna Ayu Saptati","http://arxiv.org/abs/2102.03187v1, http://arxiv.org/pdf/2102.03187v1",econ.GN
31573,gn,"David Gauthier in his article, Maximization constrained: the rationality of
cooperation, tries to defend the joint strategy in situations in which no
outcome is both equilibrium and optimal. Prisoner Dilemma is the most familiar
example of these situations. He first starts with some quotes by Hobbes in
Leviathan; Hobbes, in chapter 15 discusses an objection by someone is called
Foole, and then will reject his view. In response to Foole, Hobbes presents two
strategies (i.e. joint and individual) and two kinds of agents in such problems
including Prisoner Dilemma, i.e. straightforward maximizer (SM) and constrained
maximizer(CM). Then he considers two arguments respectively for SM and CM, and
he will show that why in an ideal and transparent situation, the first argument
fails and the second one would be the only valid argument. Likewise, in the
following part of his article, he considers more realistic situations with
translucency and he concludes that under some conditions, the joint strategy
would be still the rational decision.",Prisoner Dilemma in maximization constrained: the rationality of cooperation,2021-02-06 21:56:09,Shahin Esmaeili,"http://arxiv.org/abs/2102.03644v2, http://arxiv.org/pdf/2102.03644v2",econ.GN
31574,gn,"The rapid early spread of COVID-19 in the U.S. was experienced very
differently by different socioeconomic groups and business industries. In this
study, we examine aggregate mobility patterns of New York City and Chicago to
identify the relationship between the amount of interpersonal contact between
people in urban neighborhoods and the disparity in the growth of positive cases
among these groups. We introduce an aggregate Contact Exposure Index (CEI) to
measure exposure due to this interpersonal contact and combine it with social
distancing metrics to show its effect on positive case growth. With the help of
structural equations modeling, we find that the effect of exposure on case
growth was consistently positive and that it remained consistently higher in
lower-income neighborhoods, suggesting a causal path of income on case growth
via contact exposure. Using the CEI, schools and restaurants are identified as
high-exposure industries, and the estimation suggests that implementing
specific mobility restrictions on these point-of-interest categories is most
effective. This analysis can be useful in providing insights for government
officials targeting specific population groups and businesses to reduce
infection spread as reopening efforts continue to expand across the nation.",Mobility-based contact exposure explains the disparity of spread of COVID-19 in urban neighborhoods,2021-02-07 05:02:15,"Rajat Verma, Takahiro Yabe, Satish V. Ukkusuri","http://arxiv.org/abs/2102.03698v1, http://arxiv.org/pdf/2102.03698v1",econ.GN
31575,gn,"I provide evidence that, when income-contingent loans are available, student
enrolment in university courses is not significantly affected by large
increases in the price of those courses. I use publicly available domestic
enrolment data from Australia. I study whether large increases in the price of
higher education for selected disciplines in Australia in 2009 and in 2012 were
associated with changes in their enrolment growth. I find that large increases
in the price of a course did not lead to significant changes in enrolment
growth for that course.",Increasing the price of a university degree does not significantly affect enrolment if income contingent loans are available: evidence from HECS in Australia,2021-02-08 03:50:45,Fabio Italo Martinenghi,"http://arxiv.org/abs/2102.03956v1, http://arxiv.org/pdf/2102.03956v1",econ.GN
31576,gn,"Although the share of platform work is very small compared to conventional
and traditional employment system, research shows that the use of platform work
is increasingly growing all over the world. Trade unions have also paid special
attention to the platform work because they know that the transfer of a
percentage of human resources to the platforms is undeniable. To this end, the
trade unions prepare themselves for the challenges and dynamics of this
emerging phenomenon in the field of human resources. Using a qualitative
research method and a case study of Hungary, the present study aimed to
identify the strategies adopted by Trade Unions to manage the transition to
platform works and provide suggestions for both practitioners and future
research.",Trade Union Strategies towards Platform Workers: Exploration Instead of Action (The Case of Hungarian Trade Unions),2021-02-08 14:33:22,"Szilvia Borbely, Csaba Mako, Miklos Illessy, Saeed Nostrabadi","http://arxiv.org/abs/2102.04137v1, http://arxiv.org/pdf/2102.04137v1",econ.GN
31577,gn,"The global production (as a system of creating values) is eventually forming
a vast web of value chains that explains the transitional structures of global
trade and development of the world economy. It is truly a new wave of
globalisation, and we can term it as the global value chains (GVCs), creating
the nexus among firms, workers and consumers around the globe. The emergence of
this new scenario is asking how an economy's businesses, producers and
employees are connecting to the global economy and capturing the gains out of
it regarding different dimensions of economic development. Indeed, this GVC
approach is very crucial for understanding the organisation of the global
industries (including firms) through analysing the statics and dynamics of
different economic players involved in this complex global production network.
Its widespread notion deals with various global issues (including regional
value chains also) from the top down to the bottom up, founding a scope for
policy analysis.",Research Methods of Assessing Global Value Chains,2021-02-08 16:12:56,Sourish Dutta,"http://dx.doi.org/10.2139/ssrn.3784173, http://arxiv.org/abs/2102.04176v3, http://arxiv.org/pdf/2102.04176v3",econ.GN
31578,gn,"We characterize solutions for two-sided matching, both in the transferable
and in the nontransferable-utility frameworks, using a cardinal formulation.
Our approach makes the comparison of the matching models with and without
transfers particularly transparent. We introduce the concept of a no-trade
matching to study the role of transfers in matching. A no-trade matching is one
in which the availability of transfers does not affect the outcome.",Ordinal and cardinal solution concepts for two-sided matching,2021-02-08 19:42:52,"Federico Echenique, Alfred Galichon","http://dx.doi.org/10.1016/j.geb.2015.10.002, http://arxiv.org/abs/2102.04337v2, http://arxiv.org/pdf/2102.04337v2",econ.GN
31579,gn,"In Japan, the increase in the consumption tax rate, a measure of balanced
public finance, reduces the inequality of fiscal burden between the present and
future generations. This study estimates the effect of grandchildren on an
older person's view of consumption tax, using independently collected data. The
results show that having grandchildren is positively associated with supporting
an increase in consumption tax. Further, this association is observed strongly
between granddaughters and grandparents. However, the association between
grandsons and grandparents depends on the sub-sample. This implies that people
of the older generation are likely to accept the tax burden to reduce the burden
on their grandchildren, especially granddaughters. In other words, grandparents
show intergenerational altruism.",View about consumption tax and grandchildren,2021-02-09 08:57:58,Eiji Yamamura,"http://arxiv.org/abs/2102.04658v1, http://arxiv.org/pdf/2102.04658v1",econ.GN
31580,gn,"What role does culture play in determining institutions in a country? This
paper argues that the establishment of institutions is a process originating
predominantly in a nation's culture and tries to discern the role of a cultural
background in the governance of countries. We use the six Hofstede's Cultural
Dimensions and the six Worldwide Governance Indicators to test the strength of
the relationship on 94 countries between 1996 and 2019. We find that the
strongest cultural characteristics are Power Distance, with a negative effect on
governance, and Long-Term Orientation, with a positive effect. We also determine
how well countries transform their cultural characteristics into institutions
using stochastic frontier analysis.",The Role of a Nation's Culture in the Country's Governance: Stochastic Frontier Analysis,2021-02-10 16:09:22,"Vladimír Holý, Tomáš Evan","http://dx.doi.org/10.1007/s10100-021-00754-5, http://arxiv.org/abs/2102.05411v2, http://arxiv.org/pdf/2102.05411v2",econ.GN
31582,gn,"We develop a model of inter-temporal and intra-temporal price discrimination
by monopoly airlines to study the ability of different discriminatory pricing
mechanisms to increase efficiency and the associated distributional
implications. To estimate the model, we use unique data from international
airline markets with flight-level variation in prices across time, cabins, and
markets and information on passengers' reasons for travel and time of purchase.
The current pricing practice yields approximately 77% of the first-best
welfare. The source of this inefficiency arises primarily from private
information about passenger valuations, not dynamic uncertainty about demand.
We also find that if airlines could discriminate between business and leisure
passengers, total welfare would improve at the expense of business passenger
surplus. Also, replacing the current pricing that involves screening passengers
across cabin classes with offering a single cabin class has minimal effect on
total welfare.",Price Discrimination in International Airline Markets,2021-02-11 00:49:55,"Gaurab Aryal, Charles Murry, Jonathan W. Williams","http://arxiv.org/abs/2102.05751v6, http://arxiv.org/pdf/2102.05751v6",econ.GN
31583,gn,"The paper investigates the effects of the credit market development on the
labor mobility between the informal and formal labor sectors. In the case of
Russia, due to the absence of a credit score system, a formal lender may set a
credit limit based on the verified amount of income. To get a loan, an informal
worker must first formalize his or her income (switch to a formal job), and
then apply for a loan. To show this mechanism, the RLMS data were utilized, and
the empirical method is a dynamic multinomial logit model of employment. The
empirical results show that a relaxation of credit constraints increases the
probability of transition from an informal to a formal job, and improved credit
market accessibility (CMA) by one standard deviation increases the chances of
informal-sector workers formalizing by 5.4 ppt. These results are robust across
different specifications of
the model. Policy simulations show strong support for a reduction in informal
employment in response to better CMA in credit-constrained communities.",Labor Informality and Credit Market Accessibility,2021-02-11 04:54:02,"Alina Malkova, Klara Sabirianova Peter, Jan Svejnar","http://arxiv.org/abs/2102.05803v1, http://arxiv.org/pdf/2102.05803v1",econ.GN
31584,gn,"Experimental studies regularly show that third-party punishment (TPP)
substantially exists in various settings. This study further investigates the
robustness of TPP under an environment where context effects are involved. In
our experiment, we offer a third party an additional but unattractive risky
investment option. We find that, when the dominated investment option
irrelevant to prosocial behavior is available, the demand for punishment
decreases, whereas the demand for investment increases. These findings support
our hypothesis that the seemingly unrelated and dominated investment option may
work as a compromise and suggest the fragility of TPP in this setting.",On the Fragility of Third-party Punishment: The Context Effect of a Dominated Risky Investment Option,2021-02-11 10:40:31,"Changkuk Im, Jinkwon Lee","http://arxiv.org/abs/2102.05876v3, http://arxiv.org/pdf/2102.05876v3",econ.GN
31585,gn,"We propose a new measure of deviations from expected utility theory. For any
positive number~$e$, we give a characterization of the datasets with a
rationalization that is within~$e$ (in beliefs, utility, or perceived prices)
of expected utility theory. The number~$e$ can then be used as a measure of how
far the data is from expected utility theory. We apply our methodology to data
from three large-scale experiments. Many subjects in those experiments are
consistent with utility maximization, but not with expected utility
maximization. Our measure of distance to expected utility is correlated with
subjects' demographic characteristics.",Approximate Expected Utility Rationalization,2021-02-12 05:37:35,"Federico Echenique, Kota Saito, Taisuke Imai","http://arxiv.org/abs/2102.06331v1, http://arxiv.org/pdf/2102.06331v1",econ.GN
31586,gn,"No matter its source, financial- or policy-related, uncertainty can feed onto
itself, inflicting the real economic sector, altering expectations and
behaviours, and leading to identification challenges in empirical applications.
The strong intertwining between policy and financial realms prevailing in
Europe, and in Euro Area in particular, might complicate the problem and create
amplification mechanisms difficult to pin down. To reveal the complex
transmission of country-specific uncertainty shocks in a multi-country setting,
and to properly account for cross-country interdependencies, we employ a global
VAR specification for which we adapt an identification approach based on
magnitude restrictions. Once we separate policy uncertainty from financial
uncertainty shocks, we find evidence of important cross-border uncertainty
spill-overs. We also uncover a new amplification mechanism for domestic
uncertainty shocks, whose true nature becomes more blurred once they cross the
national boundaries and spill over to other countries. With respect to ECB
policy reactions, we reveal stronger but less persistent responses to financial
uncertainty shocks compared to policy uncertainty shocks. This points to the ECB
adopting a more (passive or) accommodative stance towards the former, but a
more proactive stance towards the latter shocks, possibly as an attempt to
tame policy uncertainty spill-overs and prevent the fragmentation of the Euro
Area financial markets.",Uncertainty spill-overs: when policy and financial realms overlap,2021-02-12 12:09:22,"Emanuele Bacchiocchi, Catalin Dragomirescu-Gaina","http://arxiv.org/abs/2102.06404v1, http://arxiv.org/pdf/2102.06404v1",econ.GN
31587,gn,"In many economic contexts, agents from a same population team up to better
exploit their human capital. In such contexts (often called ""roommate matching
problems""), stable matchings may fail to exist even when utility is
transferable. We show that when each individual has a close substitute, a
stable matching can be implemented with minimal policy intervention. Our
results shed light on the stability of partnerships on the labor market.
Moreover, they imply that the tools crafted in empirical studies of the
marriage problem can easily be adapted to many roommate problems.",On Human Capital and Team Stability,2021-02-12 15:39:03,"Pierre-André Chiappori, Alfred Galichon, Bernard Salanié","http://dx.doi.org/10.1086/702925, http://arxiv.org/abs/2102.06487v1, http://arxiv.org/pdf/2102.06487v1",econ.GN
31607,gn,"Does the ability to protect an asset from unsecured creditors affect its
price? This paper identifies the impact of bankruptcy protection on house
prices using 139 changes in homestead exemptions. Large increases in the
homestead exemption raised house prices 3% before 2005. Smaller exemption
increases, to adjust for inflation, did not affect house prices. The effect
disappeared after BAPCPA, a 2005 federal law designed to prevent bankruptcy
abuse. The effect was bigger in inelastic locations.",Does Bankruptcy Protection Affect Asset Prices? Evidence from changes in Homestead Exemptions,2021-02-25 23:32:45,"Yildiray Yildirim, Albert Alex Zevelev","http://arxiv.org/abs/2102.13157v1, http://arxiv.org/pdf/2102.13157v1",econ.GN
31588,gn,"In this paper, we extend Gary Becker's empirical analysis of the marriage
market to same-sex couples. Becker's theory rationalizes the well-known
phenomenon of homogamy among different-sex couples: individuals mate with their
likes because many characteristics, such as education, consumption behaviour,
desire to nurture children, religion, etc., exhibit strong complementarities in
the household production function. However, because of asymmetries in the
distributions of male and female characteristics, men and women may need to
marry ""up"" or ""down"" according to the relative shortage of their
characteristics among the populations of men and women. Yet, among same-sex
couples, this limitation does not exist as partners are drawn from the same
population, and thus the theory of assortative mating would boldly predict that
individuals will choose a partner with nearly identical characteristics.
Empirical evidence suggests a very different picture: a robust stylized fact is
that the correlation of the characteristics is in fact weaker among same-sex
couples. In this paper, we build an equilibrium model of same-sex marriage
market which allows for straightforward identification of the gains to
marriage. We estimate the model with 2008-2012 ACS data on California and show
that positive assortative mating is weaker for homosexuals than for
heterosexuals with respect to age and race. Our results suggest that positive
assortative mating with respect to education is stronger among lesbians, and
not significantly different when comparing gay men and married different-sex
couples. As regards labor market outcomes, such as hourly wages and working
hours, we find some indications that the process of specialization within the
household mainly applies to different-sex couples.",Like Attract Like? A Structural Comparison of Homogamy across Same-Sex and Different-Sex Households,2021-02-12 17:22:05,"Edoardo Ciscato, Alfred Galichon, Marion Goussé","http://dx.doi.org/10.1086/704611, http://arxiv.org/abs/2102.06547v1, http://arxiv.org/pdf/2102.06547v1",econ.GN
31589,gn,"Autonomous vehicles (AVs) have the potential of reshaping the human mobility
in a wide variety of aspects. This paper focuses on a new possibility in which
AV owners have the option of ""renting"" their AVs to a company, which can use
these collected AVs to provide on-demand ride services without any drivers. We
call such a mobility market with AV renting options the ""AV crowdsourcing
market"". This paper establishes an aggregate equilibrium model with multiple
transport modes to analyze the AV crowdsourcing market. The modeling framework
can capture the customers' mode choices and AV owners' rental decisions with
the presence of traffic congestion. Then, we explore different scenarios that
either maximize the crowdsourcing platform's profit or maximize social welfare.
Gradient-based optimization algorithms are designed for solving the problems.
The results obtained by numerical examples reveal the welfare enhancement and
the strong profitability of the AV crowdsourcing service. However, when the
crowdsourcing scale is small, the crowdsourcing platform might not be
profitable. A second-best pricing scheme is able to avoid such undesirable
cases. The insights generated from the analyses provide guidance for
regulators, service providers and citizens to make future decisions regarding
the utilization of AV crowdsourcing markets for serving the good of
society.",Aggregate Modeling and Equilibrium Analysis of the Crowdsourcing Market for Autonomous Vehicles,2021-02-14 16:13:32,"Xiaoyan Wang, Xi Lin, Meng Li","http://arxiv.org/abs/2102.07147v1, http://arxiv.org/pdf/2102.07147v1",econ.GN
31590,gn,"This paper investigates various ways in which a pandemic such as the novel
coronavirus, could be predicted using different mathematical models. It also
studies the various ways in which these models could be depicted using various
visualization techniques. This paper aims to present various statistical
techniques suggested by the Centres for Disease Control and Prevention in order
to represent epidemiological data. The main focus of this paper is to analyse
how epidemiological data on contagious diseases are theorized from the
available information and may later be presented wrongly by not following the
guidelines, leading to inaccurate representation and interpretation of the
current state of the pandemic, with special reference to the Indian
Subcontinent.",How Misuse of Statistics Can Spread Misinformation: A Study of Misrepresentation of COVID-19 Data,2021-02-14 20:09:08,"Shailesh Bharati, Rahul Batra","http://arxiv.org/abs/2102.07198v1, http://arxiv.org/pdf/2102.07198v1",econ.GN
31591,gn,"Which and how many attributes are relevant for the sorting of agents in a
matching market? This paper addresses these questions by constructing indices
of mutual attractiveness that aggregate information about agents' attributes.
The first k indices for agents on each side of the market provide the best
approximation of the matching surplus by a k-dimensional model. The methodology
is applied to a unique Dutch household survey containing information about
education, height, BMI, health, attitude toward risk and personality traits of
spouses.",Personality Traits and the Marriage Market,2021-02-15 14:35:35,"Arnaud Dupuy, Alfred Galichon","http://dx.doi.org/10.1086/677191, http://arxiv.org/abs/2102.07476v1, http://arxiv.org/pdf/2102.07476v1",econ.GN
31592,gn,"In the context of the Beckerian theory of marriage, when men and women match
on a single-dimensional index that is the weighted sum of their respective
multivariate attributes, many papers in the literature have used linear
canonical correlation, and related techniques, in order to estimate these
weights. We argue that this estimation technique is inconsistent and suggest
some solutions.",Canonical Correlation and Assortative Matching: A Remark,2021-02-15 14:45:13,"Arnaud Dupuy, Alfred Galichon","http://dx.doi.org/10.15609/annaeconstat2009.119-120.375, http://arxiv.org/abs/2102.07489v1, http://arxiv.org/pdf/2102.07489v1",econ.GN
31593,gn,"This study presents the implementation of a non-parametric multiple criteria
decision aiding (MCDA) model, the Multi-group Hierarchy Discrimination
(M.H.DIS) model, with the Preference Ranking Organization Method for Enrichment
Evaluations (PROMETHEE), on a dataset of 114 European unlisted companies
operating in the energy sector. Firstly, the M.H.DIS model has been developed
following a five-fold cross validation procedure to analyze whether the model
explains and replicates a two-group pre-defined classification of companies in
the considered sample, provided by Bureau van Dijk's Amadeus database. Since
the M.H.DIS method achieves only limited accuracy in predicting the considered
Amadeus classification in the holdout sample, the PROMETHEE method is then
applied to provide a benchmark sorting procedure useful for comparison
purposes.",Assessment of a failure prediction model in the energy sector: a multicriteria discrimination approach with Promethee based classification,2021-02-15 19:34:37,"Silvia Angilella, Maria Rosaria Pappalardo","http://arxiv.org/abs/2102.07656v1, http://arxiv.org/pdf/2102.07656v1",econ.GN
31594,gn,"Rawls' theory of justice aims at fairness. He does not only think of justice
between existing parties in existing society, but also of justice between
generations: the intergenerational justice problem. Rawls' solution to this problem
is the saving principle, and he says that we are responsible for being just
with the next generations. Wolf thinks of our responsibility for future
generations as a kind of financial debt that we have to pay. He also develops
the meaning of ""saving"" and says that it is not restricted to the monetary one.
Wolf extends the definition of ""saving"" such that it includes investment on
behalf of the next generations as well. In this paper, I want to extend the
meaning of ""saving"" to ""using the resources for sustainable and resilient
systems."" By referring to the problem of time, I show that our decision on
behalf of the next generations will be rational only if we entirely use the
natural resources and wealth to produce sustainable and resilient systems.",Sustainable and Resilient Systems for Intergenerational Justice,2021-02-18 05:47:43,Sahar Zandi,"http://arxiv.org/abs/2102.09122v1, http://arxiv.org/pdf/2102.09122v1",econ.GN
31595,gn,"Economic shocks due to Covid-19 were exceptional in their severity,
suddenness and heterogeneity across industries. To study the upstream and
downstream propagation of these industry-specific demand and supply shocks, we
build a dynamic input-output model inspired by previous work on the economic
response to natural disasters. We argue that standard production functions, at
least in their most parsimonious parametrizations, are not adequate to model
input substitutability in the context of Covid-19 shocks. We use a survey of
industry analysts to evaluate, for each industry, which inputs were absolutely
necessary for production over a short time period. We calibrate our model on
the UK economy and study the economic effects of the lockdown that was imposed
at the end of March and gradually released in May. Looking back at predictions
that we released in May, we show that the model predicted aggregate dynamics
very well, and sectoral dynamics to a large extent. We discuss the relative
extent to which the model's dynamics and performance were due to the choice of
the production function or the choice of an exogenous shock scenario. To
further explore the behavior of the model, we use simpler scenarios with only
demand or supply shocks, and find that popular metrics used to predict a priori
the impact of shocks, such as output multipliers, are only mildly useful.",In and out of lockdown: Propagation of supply and demand shocks in a dynamic input-output model,2021-02-18 23:49:05,"Anton Pichler, Marco Pangallo, R. Maria del Rio-Chanona, François Lafond, J. Doyne Farmer","http://arxiv.org/abs/2102.09608v1, http://arxiv.org/pdf/2102.09608v1",econ.GN
31596,gn,"In recent decades there has been a marked shift in the profile of publications
in the economic analysis of law and in the empirical methods most commonly
used. In the coming years, however, the change may be even larger and faster,
requiring researchers in the field to adapt. In this chapter I analyze some
recent trends and the future opportunities opened up by advances in databases,
statistics, computing and in countries' regulatory frameworks. I advance the
hypothesis that the expansion of research objects and methods will favor
larger, interdisciplinary research teams, and I present circumstantial
evidence from bibliometric data that this is already happening at the Journal
of Law and Economics.",Métodos Empíricos Aplicados à Análise Econômica do Direito,2021-02-20 03:17:33,Thomas V. Conti,"http://arxiv.org/abs/2102.10211v1, http://arxiv.org/pdf/2102.10211v1",econ.GN
31597,gn,"This paper empirically examines how the opening of K-12 schools and colleges
is associated with the spread of COVID-19 using county-level panel data in the
United States. Using data on foot traffic and K-12 school opening plans, we
analyze how an increase in visits to schools and the opening of schools with
different teaching methods (in-person, hybrid, and remote) are related to the
2-weeks forward growth rate of confirmed COVID-19 cases. Our debiased panel data
regression analysis with a set of county dummies, interactions of state and
week dummies, and other controls shows that an increase in visits to both K-12
schools and colleges is associated with a subsequent increase in case growth
rates. The estimates indicate that fully opening K-12 schools with in-person
learning is associated with a 5 (SE = 2) percentage points increase in the
growth rate of cases. We also find that the positive association of K-12 school
visits or in-person school openings with case growth is stronger for counties
that do not require staff to wear masks at schools. These results have a causal
interpretation in a structural model with unobserved county and time
confounders. Sensitivity analysis shows that the baseline results are robust to
timing assumptions and alternative specifications.",The Association of Opening K-12 Schools with the Spread of COVID-19 in the United States: County-Level Panel Data Analysis,2021-02-21 01:00:53,"Victor Chernozhukov, Hiroyuki Kasahara, Paul Schrimpf","http://dx.doi.org/10.1073/pnas.2103420118, http://arxiv.org/abs/2102.10453v2, http://arxiv.org/pdf/2102.10453v2",econ.GN
31598,gn,"It has been almost 30 years since the emergence of e-commerce, yet it remains
a global phenomenon to this day. E-commerce is replacing the traditional way
of doing business, but expectations of sustainable development have not been
met, and there are still significant differences between online and offline
shopping. Although many academic studies have been conducted on the adoption
of various forms of e-commerce, little research addresses East African
countries. The adoption of B2C e-commerce in East African countries has faced
many challenges that remain unaddressed because of the complex nature of
e-commerce in these nations. This study examines the adoption of B2C in East
Africa using the theory of diffusion of innovation. Data collected from 279
participants in Tanzania were used to test the research model. The results show
that awareness, infrastructure innovation and social media play a significant
role in the adoption of e-commerce, while the lack of a sound e-commerce policy
and of awareness discourages the adoption of B2C. We also examine how time
influences the diffusion of B2C e-commerce to the majority. Thus, unlike
previous adoption studies, which have tended to focus on technological,
organizational, and environmental factors, this study guides the government on
how to use social media to promote B2C e-commerce.","Exploring the role of Awareness, Government Policy, and Infrastructure in adapting B2C E-Commerce to East African Countries",2021-02-23 17:43:00,"Emmanuel H. Yindi, Immaculate Maumoh, Prisillah L. Mahavile","http://arxiv.org/abs/2102.11729v1, http://arxiv.org/pdf/2102.11729v1",econ.GN
31599,gn,"We focus on how individual behavior that complies with social norms
interferes with performance-based incentive mechanisms in organizations with
multiple distributed decision-making agents. We model social norms to emerge
from interactions between agents: agents observe the other agents' actions and,
from these observations, induce what kind of behavior is socially acceptable.
By complying with the induced socially accepted behavior, agents experience
utility. Also, agents get utility from a pay-for-performance incentive
mechanism. Thus, agents pursue two objectives. We place the interaction between
social norms and performance-based incentive mechanisms in the complex
environment of an organization with distributed decision-makers, in which a set
of interdependent tasks is allocated to multiple agents. The results suggest
that, unless the sets of assigned tasks are highly correlated, complying with
emergent socially accepted behavior is detrimental to the organization's
performance. However, we find that incentive schemes can help offset the
performance loss by applying individual-based incentives in environments with
lower task-complexity and team-based incentives in environments with higher
task-complexity.",Interactions between social norms and incentive mechanisms in organizations,2021-02-24 17:37:42,"Ravshanbek Khodzhimatov, Stephan Leitner, Friederike Wall","http://arxiv.org/abs/2102.12309v1, http://arxiv.org/pdf/2102.12309v1",econ.GN
31600,gn,"In the context of modern marketing, Twitter is considered a communication
platform for spreading information. Many companies create and acquire several
Twitter accounts to support and perform a variety of marketing mix activities.
Each account is intended to capture a specific market profile. Together, the
accounts create a network of information that provides consumers with the
information they need, depending on the context in which they use it. Given the
many accounts available, the fundamental question is how to measure the
influence of each account in the market, based not only on their relations but
also on the effects of their postings. The Magnitude of Influence (MOI) metric
is adapted together with the Influence Rank (IR) measurement of accounts in
their social network neighborhood. We use a social network analysis approach to
analyze 65 accounts in the social network of an Indonesian mobile phone network
operator, Telkomsel, which are involved in marketing communications mix
activities through series of related tweets. The social network perspective
provides insight into the activity of building and maintaining relationships
with the target audience. This paper presents the most potential accounts based
on the network structure and engagement. Based on this research, the more
followers an account has, the greater its responsibility to generate
interaction from those followers in order to achieve the expected
effectiveness. The focus of this paper is to determine the most potential
accounts in the application of the marketing communications mix on Twitter.",Measuring Marketing Communications Mix Effort Using Magnitude Of Influence And Influence Rank Metric,2021-02-24 17:56:47,"Andry Alamsyah, Endang Sofyan, Tsana Hasti Nabila","http://arxiv.org/abs/2102.12320v1, http://arxiv.org/pdf/2102.12320v1",econ.GN
31601,gn,"Knowledge management is an important aspect of an organization, especially in
the ICT industry, and keeping control of it is essential for the organization
to stay competitive in the business. One way to assess the organization's
knowledge capital is by measuring employee knowledge networks and employees'
personal reputation in social media. Using this measurement, we see how
employees build relationships around their peer networks or clients virtually.
We are also able to see how knowledge networks support organizational
performance. The research objective is to map the knowledge network and
reputation formation in order to understand how knowledge flows and whether
employee reputation has a higher degree of influence in the organization's
knowledge network. We develop formulas to measure knowledge networks and
personal reputation based on social media activities. As a case study, we pick
an Indonesian ICT company that actively builds its business around its
employees' peer knowledge outside the company. For the knowledge network, we
collect data by conducting interviews; for reputation management, we collect
data from several popular social media platforms. We base our work on the
Social Network Analysis (SNA) methodology. The result shows that employees'
knowledge is directly proportional to their reputation, but reputation levels
differ across the social media platforms observed in this research.",Mapping Organization Knowledge Network and Social Media Based Reputation Management,2021-02-24 18:15:16,"Andry Alamsyah, Maribella Syawiluna","http://dx.doi.org/10.21108/jdsa.2018.1.3, http://arxiv.org/abs/2102.12337v1, http://arxiv.org/pdf/2102.12337v1",econ.GN
31602,gn,"The tourism industry is an important source of revenue and plays an important
role in the Indonesian economy. It brings job and business opportunities,
foreign exchange earnings, and infrastructure development, and is one of the
main drivers of socio-economic progress in Indonesia. The number of foreign
tourists visiting Indonesia has increased cumulatively, reaching 10.41 million
visits, an increase of 10.46 percent over the same period in the previous year.
The government is trying to increase the number of tourists visiting Indonesia
by promoting many Indonesian tourist
attractions.",Summarizing Online Conversation of Indonesia Tourism Industry using Network Text Analysis,2021-02-24 18:33:14,"Andry Alamsyah, Sheila Shafira, Muhamad Alfin Yudhistira","http://arxiv.org/abs/2102.12350v1, http://arxiv.org/pdf/2102.12350v1",econ.GN
31603,gn,"Knowledge is known to be a pre-condition for an individual's behavior. For the
most efficient informational strategies for education, it is essential to
identify the types of knowledge that promote behavior effectively and to
investigate their structure. The purpose of this paper is therefore to examine
the factors that affect Kenyan farmers' environmental citizenship behavior
(ECB) in the context of adaptation and mitigation (climate-smart agriculture).
To achieve this objective, a theoretical framework has been developed based on
value-belief-norm (VBN) theory. Data were obtained from 350 farmers using a
survey method, and partial least squares structural equation modelling
(PLS-SEM) was used to examine the hypothesized model. The results of the PLS
analysis confirm the direct and mediating effects of the causal sequences of
the variables in the VBN model. The moderating role of environmental knowledge
is found to be impactful in climate-smart
agriculture.","Understanding the Farmers, Environmental Citizenship Behaviors Towards Climate Change. The Moderating Mediating Role of Environmental Knowledge and Ascribed Responsibility",2021-02-23 18:05:51,"Immaculate Maumoh, Emmanuel H. Yindi","http://arxiv.org/abs/2102.12378v1, http://arxiv.org/pdf/2102.12378v1",econ.GN
31604,gn,"In this paper, we first revisit the Koenker and Bassett variational approach
to (univariate) quantile regression, emphasizing its link with latent factor
representations and correlation maximization problems. We then review the
multivariate extension due to Carlier et al. (2016, 2017) which relates vector
quantile regression to an optimal transport problem with mean independence
constraints. We introduce an entropic regularization of this problem, implement
a gradient descent numerical method and illustrate its feasibility on
univariate and bivariate examples.","Vector quantile regression and optimal transport, from theory to numerics",2021-02-25 15:25:11,"Guillaume Carlier, Victor Chernozhukov, Gwendoline De Bie, Alfred Galichon","http://dx.doi.org/10.1007/s00181-020-01919-y, http://arxiv.org/abs/2102.12809v1, http://arxiv.org/pdf/2102.12809v1",econ.GN
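The entropic regularization mentioned in this abstract can be illustrated on a plain discrete optimal transport problem. The following sketch is my own illustration, not the authors' code: the point clouds, the squared-distance cost and the regularization strength eps are assumptions, and it solves the regularized problem with Sinkhorn iterations (the closed-form coordinate updates of the dual ascent scheme), whereas the paper's full method additionally imposes mean-independence constraints for vector quantile regression.

```python
import numpy as np

# Minimal sketch: entropic regularization of a discrete optimal transport
# problem, solved with Sinkhorn iterations. All sizes and parameters are
# illustrative assumptions.
rng = np.random.default_rng(0)
n, m, eps = 50, 60, 0.05
x = rng.normal(size=(n, 2))                 # reference ("quantile") points
y = rng.normal(size=(m, 2))                 # observed data points
mu = np.full(n, 1.0 / n)                    # uniform reference measure
nu = np.full(m, 1.0 / m)                    # empirical measure
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared-distance cost

K = np.exp(-C / eps)                        # Gibbs kernel of the regularized problem
u, v = np.ones(n), np.ones(m)
for _ in range(500):                        # Sinkhorn fixed-point iterations
    u = mu / (K @ v)
    v = nu / (K.T @ u)

pi = u[:, None] * K * v[None, :]            # regularized transport plan
print("marginal error:", abs(pi.sum(1) - mu).max(), abs(pi.sum(0) - nu).max())
```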
31605,gn,"We investigate in this paper the theory and econometrics of optimal matchings
with competing criteria. The surplus from a marriage match, for instance, may
depend both on the incomes and on the educations of the partners, as well as on
characteristics that the analyst does not observe. Even if the surplus is
complementary in incomes, and complementary in educations, imperfect
correlation between income and education at the individual level implies that
the social optimum must trade off matching on incomes and matching on
educations. Given a flexible specification of the surplus function, we
characterize under mild assumptions the properties of the set of feasible
matchings and of the socially optimal matching. Then we show how data on the
covariation of the types of the partners in observed matches can be used to
test that the observed matches are socially optimal for this specification, and
to estimate the parameters that define social preferences over matches.",Matching with Trade-offs: Revealed Preferences over Competing Characteristics,2021-02-25 15:27:16,"Alfred Galichon, Bernard Salanié","http://arxiv.org/abs/2102.12811v1, http://arxiv.org/pdf/2102.12811v1",econ.GN
31609,gn,"This paper studies whether and how differently projected information about
the impact of the Covid-19 pandemic affects individuals' prosocial behavior and
expectations on future outcomes. We conducted an online experiment with British
participants (N=961) when the UK introduced its first lockdown and the outbreak
was on its growing stage. Participants were primed with either the
environmental or economic consequences (i.e., negative primes), or the
environmental or economic benefits (i.e., positive primes) of the pandemic, or
with neutral information. We measured priming effects on an incentivized
take-and-give dictator game and on participants' expectations about future
environmental quality and economic growth. Our results show that primes affect
participants' expectations, but not their prosociality. In particular,
participants primed with environmental consequences hold a more pessimistic
view on future environmental quality, while those primed with economic benefits
are more optimistic about future economic growth. Instead, the positive
environmental prime and the negative economic prime do not influence
expectations. Our results offer insights into how information affects behavior
and expectations during the Covid-19 pandemic.",Priming prosocial behavior and expectations in response to the Covid-19 pandemic -- Evidence from an online experiment,2021-02-26 18:16:35,"Valeria Fanghella, Thi-Thanh-Tam Vu, Luigi Mittone","http://arxiv.org/abs/2102.13538v1, http://arxiv.org/pdf/2102.13538v1",econ.GN
31610,gn,"Reaching the 2030 targets for the EU primary energy use (PE) and CO2eq
emissions (CE) requires an accurate assessment of how different technologies
perform on these two fronts. In this regard, the focus in academia is
increasingly shifting from traditional technologies to electricity consuming
alternatives. Calculating and comparing their performance with respect to
traditional technologies requires conversion factors (CFs) like a primary
energy factor and a CO2eq intensity. These reflect the PE and CE associated
with each unit of electricity consumed. Previous work has shown that the
calculation and use of CFs is a contentious and multifaceted issue. However,
this has mostly remained a theoretical discussion. A stock-taking of how CFs
are actually calculated and used in academic literature has so far been
missing, impeding insight into what the contemporary trends and challenges are.
Therefore, we structurally review 65 publications across six methodological
aspects. We find that 72% of the publications consider only a single country,
86% apply a purely retrospective perspective, 54% apply a yearly temporal
resolution, 65% apply a purely operational (instead of a life-cycle)
perspective, 91% make use of average (rather than marginal) CFs, and 75% ignore
electricity imports from surrounding countries. We conclude that there is a
strong need in the literature for a publicly available, transparently
calculated dataset of CFs, which avoids the shortcomings found in the
literature. This would enable more accurate and transparent PE and CE
calculations, and support the development of new building energy performance
assessment methods and smart grid algorithms.",The use of primary energy factors and CO2 intensities -- reviewing the state of play in academic literature,2021-02-26 18:21:45,"Sam Hamels, Eline Himpe, Jelle Laverge, Marc Delghust, Kjartan Van den Brande, Arnold Janssens, Johan Albrecht","http://dx.doi.org/10.1016/j.rser.2021.111182, http://arxiv.org/abs/2102.13539v1, http://arxiv.org/pdf/2102.13539v1",econ.GN
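To make concrete how the conversion factors (CFs) discussed above are used, here is a minimal sketch comparing an electric heat pump with a gas boiler for the same useful heat demand; all numbers (PEF values, CO2eq intensities, efficiencies, heat demand) are illustrative assumptions, not values from the review.

```python
# Minimal sketch: applying a primary energy factor (PEF) and a CO2eq intensity
# to compare two heating technologies. All figures below are assumptions.
heat_demand_kwh = 10_000          # annual useful heat demand (assumed)
pef_elec, co2_elec = 2.1, 0.250   # electricity PEF [-] and kgCO2eq/kWh (assumed)
pef_gas, co2_gas = 1.1, 0.202     # natural gas PEF [-] and kgCO2eq/kWh (assumed)
cop_hp, eta_boiler = 3.5, 0.95    # heat pump COP and boiler efficiency (assumed)

elec_kwh = heat_demand_kwh / cop_hp       # final electricity use of the heat pump
gas_kwh = heat_demand_kwh / eta_boiler    # final gas use of the boiler

print("heat pump :", elec_kwh * pef_elec, "kWh primary,", elec_kwh * co2_elec, "kgCO2eq")
print("gas boiler:", gas_kwh * pef_gas, "kWh primary,", gas_kwh * co2_gas, "kgCO2eq")
```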
31611,gn,"We address the question of how sensitively international knowledge flows
respond to geo-political conflicts, taking the politico-economic tensions
between the EU and Russia since the 2014 Ukraine crisis as a case study. We base our econometric
analysis on comprehensive data covering more than 500 million scientific
publications and 8 million international co-publications between 1995 and 2018.
Our findings indicate that the imposition of EU sanctions and Russian
counter-sanctions from 2014 onwards has significant negative effects on
bilateral international scientific co-publication rates between EU countries
and Russia. Depending on the chosen control group and sectors considered,
effect size ranges from 15% to 70%. Effects are also observed to grow over
time.","Geo-political conflicts, economic sanctions and international knowledge flows",2021-12-01 18:30:05,"Teemu Makkonen, Timo Mitze","http://arxiv.org/abs/2112.00564v1, http://arxiv.org/pdf/2112.00564v1",econ.GN
31612,gn,"Enhancing residents' willingness to participate in basic health services is a
key initiative to optimize the allocation of health care resources and promote
equitable improvements in group health. This paper investigates the effect of
education on resident health record completion rates using a system GMM model
based on a pseudo-panel consisting of five years of cross-sectional data. To
mitigate possible endogeneity, this paper controls for cohort effects while
also attenuating dynamic bias in the estimation from a dynamic perspective and
provides robust estimates based on multi-model regression. The results show
that (1) education can give positive returns on health needs to the mobile
population under the static perspective, and such returns are underestimated
when cohort effects are ignored; (2) there is a significant cumulative effect
of file completion rate under the dynamic perspective, and file completion in
previous years will have a positive effect on the current year. (3) The positive
relationship between education and willingness to make health decisions is also
characterized by heterogeneity by gender, generation, and education level
itself. Among them, education is more likely to promote decision-making
intentions among men and younger groups, and this motivational effect is more
significant among those who received basic education.",Can Education Motivate Individual Health Demands? Dynamic Pseudo-panel Evidence from China's Immigration,2021-12-02 11:17:11,"Shixi Kang, Jingwen Tan","http://arxiv.org/abs/2112.01046v1, http://arxiv.org/pdf/2112.01046v1",econ.GN
31613,gn,"This study investigates whether a uni-directional or bi-directional causal
relationship exists between financial development and international trade for
the Indian economy during the time period from 1980 to 2019. The empirical
analysis utilizes three measures of financial development created by IMF,
namely, financial institutional development index, financial market development
index and a composite index of financial development, encompassing dimensions
of financial access, depth and efficiency. Johansen cointegration, vector error
correction model and vector auto regressive model are estimated to examine the
long run relationship and short run dynamics among the variables of interest.
The econometric results indicate that there is indeed a long run causal
relationship between the composite index of financial development and trade
openness. Cointegration is also found to exist between trade openness and index
of financial market development. However, there is no evidence of cointegration
between financial institutional development and trade openness. Granger
causality test results indicate the presence of uni-directional causality
running from composite index of financial development to trade openness.
Financial market development is also found to Granger cause trade openness. In
contrast, trade openness is found to promote financial institutional
development in the short run. Empirical evidence thus underlines the importance
of formulating policies which recognize the role of well-developed financial
markets in accelerating international trade of the Indian economy.","Financial Markets, Financial Institutions and International Trade: Examining the causal links for Indian Economy",2021-12-03 10:10:59,"Ummuhabeeba Chaliyan, Mini P. Thomas","http://arxiv.org/abs/2112.01749v2, http://arxiv.org/pdf/2112.01749v2",econ.GN
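A minimal sketch of the type of cointegration and Granger-causality analysis described above, using statsmodels on simulated series in place of the IMF financial development indices and Indian trade openness data; the variable construction, lag choices and deterministic terms are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated stand-ins for a financial development index and trade openness.
rng = np.random.default_rng(1)
T = 40
fd = np.cumsum(rng.normal(size=T))                   # I(1) financial development index
trade = 0.8 * fd + rng.normal(scale=0.5, size=T)     # trade openness, cointegrated with fd
df = pd.DataFrame({"fd": fd, "trade": trade})

# Johansen test for the cointegration rank (constant, 1 lag in differences).
jres = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# Granger causality: does fd help predict trade?
grangercausalitytests(df[["trade", "fd"]], maxlag=2)
```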
31614,gn,"In light of the ongoing integration efforts, the question of whether CAPADR
economies may benefit from a single currency arises naturally. This paper
examines the feasibility of an Optimum Currency Area (OCA) within seven CAPADR
countries. We estimate SVAR models to retrieve demand and supply shocks between
2009:01 - 2020:01 and determine their extent of symmetry. We then go on to
compute two regional indicators of dispersion and the cost of inclusion into a
hypothetical OCA for each country. Our results indicate that asymmetric shocks
tend to prevail. In addition, the dispersion indexes show that business cycles
have become more synchronous over time. However, CAPADR countries are still
sources of cyclical divergence, so that they would incur significant costs in
terms of cycle correlation whenever they pursue currency unification. We
conclude that the region does not meet the required symmetry and synchronicity
for an OCA to be appropriate.",Shock Symmetry and Business Cycle Synchronization: Is Monetary Unification Feasible among CAPADR Countries?,2021-12-03 21:24:26,Jafet Baca,"http://arxiv.org/abs/2112.02063v2, http://arxiv.org/pdf/2112.02063v2",econ.GN
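A minimal sketch of how demand and supply shocks can be recovered from an estimated VAR with a Blanchard-Quah style long-run restriction (demand shocks have no long-run effect on output), which is one common way to implement the SVAR identification described above; the data are simulated and the bivariate specification is an assumption, not the paper's exact model.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulated output growth and inflation as stand-ins for a CAPADR country.
rng = np.random.default_rng(2)
T = 200
dy = rng.normal(size=T)                     # output growth (simulated)
infl = 0.3 * dy + rng.normal(size=T)        # inflation (simulated)
data = np.column_stack([dy, infl])

res = VAR(data).fit(2)
A1 = np.eye(2) - res.coefs.sum(axis=0)      # I - A(1), long-run AR polynomial
A1inv = np.linalg.inv(A1)
lr_cov = A1inv @ res.sigma_u @ A1inv.T      # long-run covariance of the VAR
F = np.linalg.cholesky(lr_cov)              # lower-triangular long-run impact matrix
B0 = A1 @ F                                 # contemporaneous impact of structural shocks
shocks = res.resid @ np.linalg.inv(B0).T    # recovered supply (col 0) and demand (col 1) shocks
print("long-run impact matrix:\n", F)
print("structural shock series shape:", shocks.shape)
```

The symmetry of shocks across countries could then be assessed by correlating the recovered supply and demand shock series between country pairs.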
31615,gn,"Production of cocoa, the third largest trade commodity globally has
experienced climate related yield stagnation since 2016, forcing farmers to
expand production in forested habitats and to shift from nature friendly
agroforestry systems to intensive monocultures. The goal for future large-scale
cocoa production combines high yields with biodiversity friendly management
into a climate adapted smart agroforestry system (SAS). As pollination
limitation is a key driver of global production, we use data of more than
150,000 cocoa farms and results of hand pollination experiments to show that
manually enhancing cocoa pollination (hereafter manual pollination) can produce
SAS. Manual pollination can triple farm yields and double farmers' annual profit
in the major producer countries Ivory Coast, Ghana, and Indonesia, and can
increase global cocoa supplies by up to 13%. We propose a win-win scenario to
mitigate negative long term price and socioeconomic effects, whereby manual
pollination compensates only for yield losses resulting from climate and
disease related decreases in production area and conversion of monocultures
into agroforestry systems. Our results highlight that yields in biodiversity
friendly and climate adapted SAS can be similar to yields currently only
achieved in monocultures. Adoption of manual pollination could be achieved
through wider implementation of ecocertification standards, carbon markets, and
zero deforestation pledges.","Cocoa pollination, biodiversity-friendly production, and the global market",2021-12-06 12:09:12,"Thomas Cherico Wanger, Francis Dennig, Manuel Toledo-Hernández, Teja Tscharntke, Eric F. Lambin","http://arxiv.org/abs/2112.02877v1, http://arxiv.org/pdf/2112.02877v1",econ.GN
31616,gn,"Weather is one of the main drivers of both the power demand and supply,
especially in the Nordic region which is characterized by high heating needs
and a high share of renewable energy. Furthermore, ambitious decarbonization
plans may cause power to replace fossil fuels for heating in the Nordic region,
at the same time as large wind power expansions are expected, resulting in even
greater exposure to weather risk. In this study, we quantify the increase in
weather risk resulting from replacing fossil fuels with power for heating in
the Nordic region, at the same time as variable renewable generation expands.
First, we calibrate statistical weather-driven power consumption models for
each of the countries Norway, Sweden, Denmark, and Finland. Then, we modify the
weather sensitivity of the models to simulate different levels of heating
electrification, and use 300 simulated weather years to investigate how
differing weather conditions impact power consumption at each electrification
level. The results show that full replacement of fossil fuels by power for
heating in 2040 leads to an increase in annual consumption of 155 TWh (30%)
compared to a business-as-usual scenario during an average weather year, but a
178 TWh (34%) increase during a one-in-twenty weather year. However, the
increase in the peak consumption is greater: around 50% for a normal weather
year, and 70% for a one-in-twenty weather year. Furthermore, wind and solar
generation contribute little during the consumption peaks. The increased
weather sensitivity caused by heating electrification causes greater total
load, but also causes a significant increase in inter-annual, seasonal, and
intra-seasonal variations. We conclude that heating electrification must be
accompanied by an increase in power system flexibility to ensure a stable and
secure power supply.",Increased Electrification of Heating and Weather Risk in the Nordic Power System,2021-12-06 12:30:43,"Ian M. Trotter, Torjus F. Bolkesjø, Eirik O. Jåstad, Jon Gustav Kirkerud","http://arxiv.org/abs/2112.02893v1, http://arxiv.org/pdf/2112.02893v1",econ.GN
31617,gn,"We study the diffusion of shocks in the global financial cycle and global
liquidity conditions to emerging and developing economies. We show that the
classification according to their external trade patterns (as commodities' net
exporters or net importers) allows to evaluate the relative importance of
international monetary spillovers and their impact on the domestic financial
cycle volatility -i.e., the coefficient of variation of financial spreads and
risks. Given the relative importance of commodity trade in the economic
structure of these countries, our study reveals that the sign and size of the
trade balance of commodity goods are key parameters to rationalize the impact
of global financial and liquidity conditions. Hence, the sign and volume of
commodity external trade will define the effect on countries' financial
spreads. We implement a two-equation dynamic panel data model for 33 countries
during 1999:Q1-2020:Q4 that identifies the effect of global conditions on the
countries' commodities terms of trade and financial spreads, first in a direct
way, and then by a feedback mechanism by which the terms of trade have an
asymmetric additional influence on spreads.","Global Financial Cycle, Commodity Terms of Trade and Financial Spreads in Emerging Markets and Developing Economies",2021-12-08 13:40:42,"Jorge Carrera, Gabriel Montes-Rojas, Fernando Toledo","http://arxiv.org/abs/2112.04218v1, http://arxiv.org/pdf/2112.04218v1",econ.GN
31618,gn,"Macroeconomic data on the Spanish economy during the Second Republic are not
accurate, and the interpretation of historical events from the figures obtained
is divergent and misleading. Hasty laws were enacted in attempts to resolve social
problems arising mainly from deep economic inequalities, but they were often
nothing more than declarations of good intentions. Spain suffered in the
aftermath of the international economic downturn as it began to be felt at the
end of the dictatorship of General Primo de Rivera. Economic policy was
developed under the Constitution, but, despite the differences between the first
and second biennium, there was a tendency to maintain the guidelines from the
previous stage and in general, sometimes unfairly, it aimed at least to avoid
the destabilization of the financial system. Nonetheless, it ultimately failed
to achieve its goals, mainly because of the frequent changes of government
mediated by a social crisis of greater significance that had relegated economic
issues into the background.",Aproximacion a los estudios sobre la economia en la Segunda Republica espanola hasta 1936 -- Approaches to the economics of the Spanish Second Republic prior to 1936,2021-12-08 18:23:29,I. Martin-de-Santos,"http://dx.doi.org/10.20318/revhisto.2018.4297, http://arxiv.org/abs/2112.04332v1, http://arxiv.org/pdf/2112.04332v1",econ.GN
31619,gn,"Analysis and assessment of female characters in some thirty relevant films on
economic and financial themes, from the beginning of cinema, selected from
various bibliographies. Descriptive, comparative and impartial study.
Quantitative techniques are applied to measure the influence of women in terms
of protagonism, positive values, negative values, and compliance or
non-compliance with the Bechdel test, and the results are evaluated. It is
based on two previous related publications: Marzal et al., The crisis of the
real. Representations of the 2008 financial crisis in contemporary audiovisual
(2018), and Martín-de-Santos, Financial administration through cinema.
Analytical and critical study of the most interesting films for university
education (2021). Realistic film versions are contrasted with actual events
today.",La mujer a través de los personajes femeninos en el cine de temática financiera -- Women through female characters in financial topics films,2021-12-08 19:19:41,I. Martín-de-Santos,"http://dx.doi.org/10.5281/zenodo.5704697, http://arxiv.org/abs/2112.04366v1, http://arxiv.org/pdf/2112.04366v1",econ.GN
31652,gn,"There are visible changes in the world organization, environment and health
of national conscience that create a background for discussion on possible
redefinition of global, state and regional management goals. The author applies
the sustainable development criteria to a hierarchical management scheme that
is to lead the world community to non-contradictory growth. Concrete
definitions are discussed in respect of decision-making process representing
the state mostly. With the help of systems analysis it is highlighted how to
understand who would carry the distinctive sign of world leadership in the near
future.",Information Technologies in Public Administration,2018-04-05 16:49:09,V. I. Gorelov,"http://arxiv.org/abs/1805.12107v1, http://arxiv.org/pdf/1805.12107v1",econ.GN
31620,gn,"China aims for net-zero carbon emissions by 2060, and an emissions peak
before 2030. This will reduce its consumption of coal for power generation and
steel making. Simultaneously, China aims for improved energy security,
primarily with expanded domestic coal production and transport infrastructure.
Here, we analyze effects of both these pressures on seaborne coal imports, with
a purpose-built model of China's coal production, transport, and consumption
system with installation-level geospatial and technical detail. This represents
a 1000-fold increase in granularity versus earlier models, allowing
representation of aspects that have previously been obscured. We find that
reduced Chinese coal consumption affects seaborne imports much more strongly
than domestic supply. Recent expansions of rail and port capacity, which reduce
costs of getting domestic coal to Southern coastal provinces, will further
reduce demand for seaborne thermal coal and amplify the effect of
decarbonisation on coal imports. Seaborne coking coal imports are also likely
to fall, because of expanded supply of cheap and high quality coking coal from
neighbouring Mongolia.",An installation-level model of China's coal sector shows how its decarbonization and energy security plans will reduce overseas coal imports,2021-12-13 03:01:31,"Jorrit Gosens, Alex Turnbull, Frank Jotzo","http://arxiv.org/abs/2112.06357v2, http://arxiv.org/pdf/2112.06357v2",econ.GN
31621,gn,"This paper empirically examines the impact of enterprise digital
transformation on the level of enterprise diversification. It is found that the
digital transformation of enterprises has significantly improved the level of
enterprise diversification, and the conclusion has passed a series of
robustness and endogeneity tests. Through mechanism analysis, we find that
the promotion effect of enterprise digital transformation on enterprise
diversification is mainly realized through a market power channel and a firm
risk channel: the pursuit of market power and monopoly profits, the challenge
to the monopolistic position of incumbent market occupiers made possible by
digital transformation, and the decentralization strategy adopted to deal with
the risks associated with digital transformation are important reasons for
enterprises to pursue diversification in the context of digital transformation.
Although the organization cost channel, transaction cost channel, blockholder
control channel, industry type and information asymmetry channel have some
influence on the main effect, they are not the main channels because they do
not pass the statistical test for differences in regression coefficients
across groups.",Will enterprise digital transformation affect diversification strategy?,2021-12-13 15:40:40,"Ge-zhi Wu, Da-ming You","http://arxiv.org/abs/2112.06605v5, http://arxiv.org/pdf/2112.06605v5",econ.GN
31622,gn,"Humans have a major challenge: how to share knowledge effectively. People
often need quick informational help, and meanwhile are often able to offer
help. Let alone professional insights, knowledge common to some is frequently
needed by others. However, rapid advice among strangers is rarely offered in
reality. This signifies a critical issue: the current labor market, called by
this paper the Conventional Market (CM), massively fails such Burst Jobs,
leading to huge job opportunity losses and human intelligence underutilization.
This paper attributes the failure to high transaction costs due to technology
constraints. Thus, this paper proposes creating an online Burst Market (BM)
letting people sell any services at their own rate via video or audio chats
lasting as short as a few seconds or minutes. Empowering all people with
enormous job creation, the BM will aid in poverty alleviation, mitigate the
aging society problem, reduce AI-led unemployment, maximize human values with
Life-Long Working, and increase prosperity and human achievements. The BM will
also reshape industries, and may cause a global power reshuffle or even the
ultimate destruction of nations. In short, this paper proposes the concept of
the BM and demonstrates that it will provide a major leap for humanity.",The Burst Market: the Next Leap for Humanity,2021-12-06 02:21:23,Vincent Yuansang Zha,"http://arxiv.org/abs/2112.06646v6, http://arxiv.org/pdf/2112.06646v6",econ.GN
31623,gn,"Due to the specificity of China's dualistic household registration system and
the differences in the rights and interests attached to it, household
registration is widely used as a control variable in empirical studies. In
the context of family planning policies, this paper proposes to use family size
and number of children as instrumental variables for household registration,
and discusses qualitatively and statistically verifies their relevance and
exogeneity, while empirically analyzing the impact of the household
registration system on citizenship of the mobile population. After controlling
for city, individual control variables and fixed effects, the following
conclusions are drawn: family size and number of children pass the
over-identification test when used as instrumental variables for household
registration; non-agricultural households have about 20.2% lower settlement
intentions and 7.28% lower employment levels in inflow cities than agricultural
households; the mechanism of the effect of the nature of household registration
on employment still holds for the non-mobile population group.",Finding the Instrumental Variables of Household Registration: A discussion of the impact of China's household registration system on the citizenship of the migrant population,2021-12-14 12:59:40,"Jingwen Tan, Shixi Kang","http://arxiv.org/abs/2112.07268v2, http://arxiv.org/pdf/2112.07268v2",econ.GN
31624,gn,"While the size of China's mobile population continues to expand, the
fertility rate is significantly lower than the stable generation replacement
level of the population, and the structural imbalance of human resource supply
has attracted widespread attention. This paper uses LPM and Probit models to
estimate the impact of house prices on the fertility intentions of the mobile
population based on data from the 2018 National Mobile Population Dynamics
Monitoring Survey. The lagged land sales price is used as an instrumental
variable of house price to mitigate the potential endogeneity problem. The
results show that for every 100% increase in the ratio of house price to
household income of mobile population, the fertility intention of the female
mobile population of working age at the inflow location will decrease by
4.42%, and the marginal effect of relative house price on labor force
fertility intention is EXP(-0.222); the sensitivity of mobile population
fertility intention to house price is affected by the moderating effect of
infrastructure construction at the inflow location. The willingness to have
children in the inflow area is higher for female migrants of working age with
lower age, smaller family size and higher education. Based on the above
findings, the study attempts to provide a new practical perspective for the
mainline institutional change and balanced economic development in China's
economic transition phase.",Urban Housing Prices and Migration's Fertility Intentions: Based on the 2018 China Migrants' Dynamic Survey,2021-12-14 13:09:08,"Jingwen Tan, Shixi Kang","http://arxiv.org/abs/2112.07273v1, http://arxiv.org/pdf/2112.07273v1",econ.GN
31625,gn,"This study examines the relationship between road infrastructure and crime
rate in rural India using a nationally representative survey. On the one hand,
building roads in villages may increase connectivity, boost employment, and
lead to better living standards, reducing criminal activities. On the other
hand, if the benefits of roads are non-uniformly distributed among villagers,
it may lead to higher inequality and possibly higher crime. We empirically test
the relationship using the two waves of the Indian Human Development Survey. We
use an instrumental variable estimation strategy and observe that building
roads in rural parts of India has reduced crime. The findings are robust to
relaxing the strict instrument exogeneity condition and using alternate
measures. On exploring the pathways, we find that improved street lighting,
better public bus services and higher employment are a few of the direct
potential channels through which road infrastructure impedes crime. We also
find a negative association between villages with roads and various types of
inequality measures confirming the broad economic benefits of roads. Our study
also highlights that the negative impact of roads on crime is more pronounced
in states with weaker institutions and higher income inequality.",The road to safety- Examining the nexus between road infrastructure and crime in rural India,2021-12-14 14:48:34,"Ritika Jain, Shreya Biswas","http://arxiv.org/abs/2112.07314v1, http://arxiv.org/pdf/2112.07314v1",econ.GN
31626,gn,"The objective of this paper is to examine the existence of a relationship
between stock market prices and fundamental macroeconomic indicators. We
build a Vector Auto Regression (VAR) model comprising nine major
macroeconomic indicators (interest rate, inflation, exchange rate, money
supply, GDP, FDI, trade-GDP ratio, oil prices, gold prices) and then
forecast them for the next 5 years. Finally we calculate the cross-correlation
of these forecasted values with the BSE Sensex closing price for each of those
years. We find a very high correlation of the closing price with the exchange
rate and money supply in the Indian economy.",Stock prices and Macroeconomic indicators: Investigating a correlation in Indian context,2021-12-15 15:23:11,"Dhruv Rawat, Sujay Patni, Ram Mehta","http://arxiv.org/abs/2112.08071v2, http://arxiv.org/pdf/2112.08071v2",econ.GN
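A minimal sketch of the workflow described in this abstract, fit a VAR, forecast the indicators, and correlate the forecasts with an index series, using statsmodels on simulated data; the reduced number of indicators, the naive Sensex extrapolation, and all numbers are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated annual macro series as stand-ins for the paper's indicators.
rng = np.random.default_rng(3)
T, k = 30, 4
names = ["interest_rate", "inflation", "exchange_rate", "money_supply"]
macro = pd.DataFrame(np.cumsum(rng.normal(size=(T, k)), axis=0), columns=names)
sensex = pd.Series(np.cumsum(rng.normal(size=T)), name="sensex")  # simulated index

res = VAR(macro).fit(2)                                    # fit a VAR(2)
fcast = res.forecast(macro.values[-res.k_ar:], steps=5)    # 5-year-ahead forecast
fcast = pd.DataFrame(fcast, columns=names)

# Naive linear extrapolation of the index over the same horizon (illustrative),
# then cross-correlation of each forecasted indicator with the index path.
sensex_fc = pd.Series(np.repeat(sensex.iloc[-1], 5)) + np.arange(1, 6) * sensex.diff().mean()
for col in names:
    print(col, np.corrcoef(fcast[col], sensex_fc)[0, 1])
```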
31627,gn,"Online reviews are typically written by volunteers and, as a consequence,
information about seller quality may be under-provided in digital marketplaces.
We study the extent of this under-provision in a large-scale randomized
experiment conducted by Airbnb. In this experiment, buyers are offered a coupon
to review listings that have no prior reviews. The treatment induces additional
reviews and these reviews tend to be more negative than reviews in the control
group, consistent with selection bias in reviewing. Reviews induced by the
treatment result in a temporary increase in transactions but these transactions
are for fewer nights, on average. The effects on transactions and nights per
transaction cancel out so that there is no detectable effect on total nights
sold and revenue. Measures of transaction quality in the treatment group fall,
suggesting that incentivized reviews do not improve matching. We show how
market conditions and the design of the reputation system can explain our
findings.",More Reviews May Not Help: Evidence from Incentivized First Reviews on Airbnb,2021-12-18 00:56:34,"Andrey Fradkin, David Holtz","http://arxiv.org/abs/2112.09783v1, http://arxiv.org/pdf/2112.09783v1",econ.GN
31628,gn,"Identifying who should be treated is a central question in economics. There
are two competing approaches to targeting - paternalistic and autonomous. In
the paternalistic approach, policymakers optimally target the policy given
observable individual characteristics. In contrast, the autonomous approach
acknowledges that individuals may possess key unobservable information on
heterogeneous policy impacts, and allows them to self-select into treatment. In
this paper, we propose a new approach that mixes paternalistic assignment and
autonomous choice. Our approach uses individual characteristics and empirical
welfare maximization to identify who should be treated, untreated, and decide
whether to be treated themselves. We apply this method to design a targeting
policy for an energy saving programs using data collected in a randomized field
experiment. We show that optimally mixing paternalistic assignments and
autonomous choice significantly improves the social welfare gain of the policy.
Exploiting random variation generated by the field experiment, we develop a
method to estimate average treatment effects for each subgroup of individuals
who would make the same autonomous treatment choice. Our estimates confirm that
the estimated assignment policy optimally allocates individuals to be treated,
untreated, or choose themselves based on the relative merits of paternalistic
assignments and autonomous choice for individual types.","Paternalism, Autonomy, or Both? Experimental Evidence from Energy Saving Programs",2021-12-18 08:59:17,"Takanori Ida, Takunori Ishihara, Koichiro Ito, Daido Kido, Toru Kitagawa, Shosei Sakaguchi, Shusaku Sasaki","http://arxiv.org/abs/2112.09850v1, http://arxiv.org/pdf/2112.09850v1",econ.GN
31629,gn,"The development of trajectory-based operations and the rolling network
operations plan in European air traffic management network implies a move
towards more collaborative, strategic flight planning. This opens up the
possibility for inclusion of additional information in the collaborative
decision-making process. With that in mind, we define the indicator for the
economic risk of network elements (e.g., sectors or airports) as the expected
costs that the elements impose on airspace users due to Air Traffic Flow
Management (ATFM) regulations. The definition of the indicator is based on the
analysis of historical ATFM regulations data, that provides an indication of
the risk of accruing delay. This risk of delay is translated into a monetary
risk for the airspace users, creating the new metric of the economic risk of a
given airspace element. We then use some machine learning techniques to find
the parameters leading to this economic risk. The metric is accompanied by an
indication of the accuracy of the delay cost prediction model. Lastly, the
economic risk is transformed into a qualitative economic severity
classification. The economic risks and consequently economic severity can be
estimated for different temporal horizons and time periods providing an
indicator which can be used by Air Navigation Service Providers to identify
areas which might need the implementation of strategic measures (e.g.,
resectorisation or capacity provision change), and by Airspace Users to
consider operation of routes which use specific airspace regions.",Estimating economic severity of Air Traffic Flow Management regulations,2021-12-21 17:43:34,"Luis Delgado, Gérald Gurtner, Tatjana Bolić, Lorenzo Castelli","http://dx.doi.org/10.1016/j.trc.2021.103054, http://arxiv.org/abs/2112.11263v1, http://arxiv.org/pdf/2112.11263v1",econ.GN
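The economic-risk indicator described above reduces to an expected-cost calculation over possible ATFM delays. A minimal sketch follows, with purely illustrative delay probabilities and a linear delay-cost assumption rather than the paper's calibrated cost model.

```python
import numpy as np

# Minimal sketch: economic risk of an airspace element as the expected delay
# cost it imposes on airspace users. Probabilities and the cost rate below are
# illustrative assumptions, not values estimated from ATFM regulation data.
delay_minutes = np.array([0, 5, 15, 30, 60])      # possible ATFM delay outcomes
prob = np.array([0.70, 0.12, 0.09, 0.06, 0.03])   # estimated delay probabilities (assumed)
cost_per_min = 100.0                              # assumed airspace-user cost (EUR/min)

expected_cost = float(np.sum(prob * delay_minutes * cost_per_min))
print("economic risk of the element (EUR per flight):", expected_cost)
```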
34408,th,"Lexicographic composition is a natural way to build an aggregate choice
function from component choice functions. As the name suggests, the components
are ordered and choose sequentially. The sets that subsequent components select
from are constrained by the choices made by earlier choice functions. The
specific constraints affect whether properties like path independence are
preserved. For several domains of inputs, we characterize the constraints that
ensure such preservation.",Lexicographic Composition of Choice Functions,2022-09-19 21:37:31,"Sean Horan, Vikram Manjunath","http://arxiv.org/abs/2209.09293v1, http://arxiv.org/pdf/2209.09293v1",econ.TH
31630,gn,"The progress made by the entrepreneurial university, which is a newly
emerging category in Hungarian higher education after its change of model, has
not only deepened relations between universities and the industry and
intensified the technology and knowledge transfer processes, but also increased
the role of universities in shaping regional innovation policy. This
transformation places co-operation between the actors of the regional
innovation ecosystem and the relationships between the economic, governmental
and academic systems into a new framework. The purpose of this paper is to
describe the process of the change in the model through a specific example, and
to outline the future possibilities of university involvement in the currently
changing Hungarian innovation policy system.",The Changing Role of Entrepreneurial Universities in the Altering Innovation Policy: Opportunities Arising from the Paradigm Change in Light of the Experience of Széchenyi István University,2021-12-21 22:38:13,"Attila Lajos Makai, Szabolcs Rámháp","http://dx.doi.org/10.24307/psz.2020.1219, http://arxiv.org/abs/2112.11499v1, http://arxiv.org/pdf/2112.11499v1",econ.GN
31631,gn,"The COVID-19 pandemic and subsequent public health restrictions led to a
significant slump in economic activities around the globe. This slump has been met
by various policy actions to cushion the detrimental socio-economic
consequences of the COVID-19 crisis and eventually bring the economy back on
track. We provide an ex-ante evaluation of the effectiveness of a massive
increase in research and innovation (R&I) funding in Finland to stimulate
post-crisis recovery growth through an increase in R&I activities of Finnish
firms. We make use of the fact that novel R&I grants for firms in disruptive
circumstances granted in 2020 were allocated through established R&I policy
channels. This allows us to estimate the structural link between R&I funding
and economic growth for Finnish NUTS-3 regions using pre-COVID-19 data.
Estimates are then used to forecast regional recovery growth out of sample and
to quantify the growth contribution of R&I funding. Depending on the chosen
scenario, our forecasts point to a mean recovery growth rate of GDP between
2-4% in 2021 after a decline of up to -2.5% in 2020. R&I funding constitutes a
significant pillar of the recovery process with mean contributions in terms of
GDP growth of between 0.4% and 1%.",Can large-scale R&I funding stimulate post-crisis recovery growth? Evidence for Finland during COVID-19,2021-12-20 19:33:10,"Timo Mitze, Teemu Makkonen","http://arxiv.org/abs/2112.11562v1, http://arxiv.org/pdf/2112.11562v1",econ.GN
31632,gn,"Hofstede's six cultural dimensions make it possible to measure the culture of
countries but are criticized for assuming the homogeneity of each country. In
this paper, we propose two measures based on Hofstede's cultural dimensions
which take into account the heterogeneous structure of citizens with respect to
their countries of origin. Using these improved measures, we study the
influence of heterogeneous culture and cultural diversity on the quality of
institutions measured by the six worldwide governance indicators. We use a
linear regression model allowing for dependence in spatial and temporal
dimensions as well as high correlation between the governance indicators. Our
results show that the effect of cultural diversity improves some of the
governance indicators while worsening others depending on the individual
Hofstede cultural dimension.",Cultural Diversity and Its Impact on Governance,2021-12-16 02:36:56,"Tomáš Evan, Vladimír Holý","http://arxiv.org/abs/2112.11563v1, http://arxiv.org/pdf/2112.11563v1",econ.GN
31633,gn,"Background The COVID-19 pandemic has increased mental distress globally. The
proportion of people reporting anxiety is 26%, and depression is 34%.
Disentangling associational and causal contributions of behavior, COVID-19
cases, and economic distress on mental distress will dictate different
mitigation strategies to reduce long-term pandemic-related mental distress.
Methods We use the Household Pulse Survey (HPS) April 2020 to February 2021
data to examine mental distress among U.S. citizens attributable to COVID-19.
We combined HPS survey data with publicly available state-level weekly
COVID-19 case and death data from the Centers for Disease Control, public
policies, and Apple and Google mobility data. Finally, we constructed economic
and mental distress measures to estimate structural models with lag dependent
variables to tease out public health policies' associational and causal path
coefficients on economic and mental distress. Findings: From April 2020 to
February 2021, we found that anxiety and depression had steadily climbed in the
U.S. By design, mobility restrictions primarily affected public health policies
where businesses and restaurants absorbed the biggest hit. Period t-1 COVID-19
cases increased job loss by 4.1% and economic distress by 6.3% points in the
same period. Job-loss and housing insecurity in t-1 increased period t mental
distress by 29.1% and 32.7%, respectively. However, t-1 food insecurity
decreased mental distress by 4.9% in time t. The pandemic-related potential
causal path coefficient of period t-1 economic distress on period t depression
is 57.8%, and anxiety is 55.9%. Thus, we show that period t-1 COVID-19 case
information, behavior, and economic distress may be causally associated with
pandemic related period t mental distress.",Associational and plausible causal effects of COVID-19 public health policies on economic and mental distress,2021-12-20 17:29:29,"Reka Sundaram-Stukel, Richard J Davidson","http://arxiv.org/abs/2112.11564v1, http://arxiv.org/pdf/2112.11564v1",econ.GN
31634,gn,"In 2013, U.S. President Barack Obama announced a policy to minimize civilian
casualties following drone strikes in undeclared theaters of war. The policy
calibrated Obama's approval of strikes against the near certainty of no civilian
casualties. Scholars do not empirically study the merits of Obama's policy.
Rather, they rely on descriptive trends for civilian casualties in Pakistan to
justify competing claims for the policy's impact. We provide a novel estimate
of the impact of Obama's policy on civilian casualties in Pakistan following
U.S. drone strikes. We employ a regression discontinuity design to estimate the
effect of Obama's policy on civilian casualties, strike precision, and averted
civilian casualties. We find a discontinuity in civilian casualties
approximately two years before Obama's policy announcement, corroborating our
primary research including interviews with senior officials responsible for
implementing the near certainty standard. After confirming the sharp cutoff, we
estimate the policy resulted in a reduction of 12 civilian deaths per month or
2 casualties per strike. The policy also enhanced the precision of U.S. drone
strikes to the point that they only killed the intended targets. Finally, we
use a Monte Carlo simulation to estimate that the policy averted 320 civilian
casualties. We then conduct a Value of Statistical Life calculation to show
that the averted civilian casualties represent a gain of 80 to 260 million
U.S. dollars. In addition to conditioning social and political outcomes, then,
the near certainty standard also imposed economic implications that are much
less studied.",Double Standards: The Implications of Near Certainty Drone Strikes in Pakistan,2021-12-20 07:53:48,"Shyam Raman, Paul Lushenko, Sarah Kreps","http://arxiv.org/abs/2112.11565v1, http://arxiv.org/pdf/2112.11565v1",econ.GN
31653,gn,"This paper examines the history of previous examples of EMU from the
viewpoint that state actors make decisions about whether to participate in a
monetary union based on rational self-interest concerning costs and benefits to
their national economies. Illustrative examples are taken from nineteenth
century German, Italian and Japanese attempts at monetary integration with
early twentieth century ones from the Latin Monetary Union and the Scandinavian
Monetary Union and contemporary ones from the West African Monetary Union and
the European Monetary System. Lessons learned from the historical examples will
be used to identify issues that could arise with the move towards closer EMU in
Europe.",Lessons from the History of European EMU,2018-05-25 13:58:01,Chris Kirrane,"http://arxiv.org/abs/1805.12112v1, http://arxiv.org/pdf/1805.12112v1",econ.GN
31635,gn,"This study is the first to provide a systematic review of the literature
focused on the relationship between digitalization and organizational agility
(OA). It applies the bibliographic coupling method to 171 peer-reviewed
contributions published by 30 June 2021. It uses the digitalization perspective
to investigate the enablers, barriers and benefits of processes aimed at
providing firms with the agility required to effectively face increasingly
turbulent environments. Three different, though interconnected, thematic
clusters are discovered and analysed, respectively focusing on big-data
analytic capabilities as crucial drivers of OA, the relationship between
digitalization and agility at a supply chain level, and the role of information
technology capabilities in improving OA. By adopting a dynamic capabilities
perspective, this study overcomes the traditional view, which mainly considers
digital capabilities as enablers of OA rather than as possible outcomes. Our
findings reveal that, in addition to being complex, the relationship between
digitalization and OA has a bidirectional character. This study also identifies
extant research gaps and develops 13 original research propositions on possible
future research pathways and new managerial solutions.","The co-evolutionary relationship between digitalization and organizational agility: Ongoing debates, theoretical developments and future research perspectives",2021-12-22 14:57:40,"Francesco Ciampi, Monica Faraoni, Jacopo Ballerini, Francesco Meli","http://dx.doi.org/10.1016/j.techfore.2021.121383, http://arxiv.org/abs/2112.11822v1, http://arxiv.org/pdf/2112.11822v1",econ.GN
31636,gn,"Although single empirical studies provide important insights into who adopts
a specific LCT for what reason, fundamental questions concerning the relations
between decision subject (= who decides), decision object (= what is decided
upon) and context (= when and where it is decided) remain unanswered. In this
paper, this research gap is addressed by deriving a decision framework for
residential decision-making, suggesting that traits of decision subject and
object are determinants of financial, environmental, symbolic, normative,
effort and technical considerations preceding adoption. Thereafter, the
decision framework is initially verified by employing literature on the
adoption of photovoltaic systems, energy-efficient appliances and green
tariffs. Of the six proposed relations, two could be confirmed (financial and
environmental), one could be rejected (effort), and three could neither be
confirmed nor rejected due to lacking evidence. Future research on LCT adoption
could use the decision framework as a guidepost to establish a more coordinated
and integrated approach, ultimately allowing researchers to address fundamental questions.","Product traits, decision-makers, and household low-carbon technology adoptions: moving beyond single empirical studies",2021-12-22 16:24:00,"Emily Schulte, Fabian Scheller, Wilmer Pasut, Thomas Bruckner","http://dx.doi.org/10.1016/j.erss.2021.102313, http://arxiv.org/abs/2112.11867v1, http://arxiv.org/pdf/2112.11867v1",econ.GN
31637,gn,"The goal of this paper is to revise the Henderson-Chu approach by dividing
the commuters into two income-based groups: the `rich' and the `poor'. The rich
are clearly more flexible in their arrival times and would rather have more
schedule delay cost instead of travel delay. The poor are quite inflexible as
they have to arrive at work at a specified time -- travel delay cost is
preferred over schedule delay cost. We combined multiple models of peak-load
bottleneck congestion with and without toll-pricing to generate a Pareto
improvement in Lexus lanes.",Henderson--Chu model extended to two heterogeneous groups,2021-11-27 06:25:42,"Oliver Chiriac, Jonathan Hall","http://arxiv.org/abs/2112.12179v5, http://arxiv.org/pdf/2112.12179v5",econ.GN
31638,gn,"The adoption of residential photovoltaic systems (PV) is seen as an important
part of the sustainable energy transition. To facilitate this process, it is
crucial to identify the determinants of solar adoption. This paper follows a
meta-analytical structural equation modeling approach, presenting a
meta-analysis of studies on residential PV adoption intention, and assessing
four behavioral models based on the theory of planned behavior to advance
theory development. Of 653 initially identified studies, 110 remained for
full-text screening. Only eight studies were sufficiently homogeneous, provided
bivariate correlations, and could thus be integrated into the meta-analysis. The
pooled correlations across primary studies revealed medium to large
correlations between environmental concern, novelty seeking, perceived
benefits, subjective norm and intention to adopt a residential PV system,
whereas socio-demographic variables were uncorrelated with intention.
Meta-analytical structural equation modeling revealed a model (N = 1,714) in
which adoption intention was predicted by benefits and perceived behavioral
control, and benefits in turn could be explained by environmental concern,
novelty seeking, and subjective norm. Our results imply that measures should
primarily focus on enhancing the perception of benefits. Based on obstacles we
encountered within the analysis, we suggest guidelines to facilitate the future
aggregation of scientific evidence, such as the systematic inclusion of key
variables and reporting of bivariate correlations.","A meta-analysis of residential PV adoption: the important role of perceived benefits, intentions and antecedents in solar energy acceptance",2021-12-23 14:04:24,"Emily Schulte, Fabian Scheller, Daniel Sloot, Thomas Bruckner","http://dx.doi.org/10.1016/j.erss.2021.102339, http://arxiv.org/abs/2112.12464v1, http://arxiv.org/pdf/2112.12464v1",econ.GN
31639,gn,"In models of intra-household resource allocation, the earnings from joint
work between two or more household members are often omitted. I test
assumptions about complete pooling of resources within a household, by
accounting for income earned jointly by multiple household members, in addition
to income earned individually by men and women. Applied in the case of Malawi,
by explicitly including intra-household collaboration, I find
evidence of partial income pooling and partial insurance within the household,
specifically for expenditures on food. Importantly, including joint income
reveals dynamics between household members, as well as opportunities and
vulnerabilities which may previously have been obscured in simpler, binary
specifications. Contrasting with previous studies and empirical practice, my
findings suggest that understanding detailed intra-household interactions and
their outcomes on household behavior has important consequences for household
resource allocation and decision making.",Intra-Household Management of Joint Resources: Evidence from Malawi,2021-12-23 21:39:50,Anna Josephson,"http://arxiv.org/abs/2112.12766v2, http://arxiv.org/pdf/2112.12766v2",econ.GN
31681,gn,"This paper examines how subsistence farmers respond to extreme heat. Using
micro-data from Peruvian households, we find that high temperatures reduce
agricultural productivity, increase area planted, and change crop mix. These
findings are consistent with farmers using input adjustments as a short-term
mechanism to attenuate the effect of extreme heat on output. This response
seems to complement other coping strategies, such as selling livestock, but
exacerbates the drop in yields, a standard measure of agricultural
productivity. Using our estimates, we show that accounting for land adjustments
is important to quantify damages associated with climate change.",Climate Change and Agriculture: Subsistence Farmers' Response to Extreme Heat,2019-02-25 14:40:42,"Fernando M. Aragón, Francisco Oteiza, Juan Pablo Rud","http://arxiv.org/abs/1902.09204v2, http://arxiv.org/pdf/1902.09204v2",econ.GN
31640,gn,"The creative Education system is one of the effective education systems in
many countries like Finland, Denmark, and South Korea. Bangladesh Government
has also launched the creative curriculum system in 2009 in both primary and
secondary levels, where changes have been made in educational contents and exam
question patterns. These changes in the previous curriculum aimed to avoid
memorization and less creativity and increase the students' level of
understanding and critical thinking. Though the Government has taken these
steps, the quality of the educational system in Bangladesh is still
deteriorating. Since the curriculum has been changed recently, this policy
issue got massive attention of the people because the problem of a substandard
education system has arisen. Many students have poor performances in
examinations, including entrance hall exams in universities and board
examinations. This deteriorating situation is mostly for leakage of question
paper, inadequate equipment and materials, and insufficient training. As a
result, the existing education system has failed to provide the standard level
of education. This research will discuss and find why this creative educational
system is getting impacted by these factors. It will be qualitative research. A
systematic questionnaire will interview different school teachers, parents,
experts, and students.",Economics of Innovation and Perceptions of Renewed Education and Curriculum Design in Bangladesh,2021-12-24 02:14:00,"Shifa Taslim Chowdhury, Mohammad Nur Nobi, Anm Moinul Islam","http://arxiv.org/abs/2112.13842v1, http://arxiv.org/pdf/2112.13842v1",econ.GN
31641,gn,"This paper examines the long-term effects of prenatal, childhood, and teen
exposure to electoral violence on health and human capital. Furthermore, it
investigates whether these effects are passed down to future generations. We
exploit the temporal and spatial variation of electoral violence in Kenya
between 1992 and 2013 in conjunction with a nationally representative survey to
identify people exposed to such violence. Using coarsened matching, we find
that exposure to electoral violence between the prenatal period and age sixteen
reduces adult height. Previous research has demonstrated that protracted,
large-scale armed conflicts can pass down stunting effects to descendants. In
line with these studies, we find that the low-scale but recurrent electoral
violence in Kenya has affected the height-for-age of children whose parents
were exposed to such violence during their growing years. Only boys exhibit
this intergenerational effect, possibly due to their increased susceptibility
to malnutrition and stunting in Sub-Saharan Africa. In contrast to previous
research on large-scale conflicts, childhood exposure to electoral violence has
no long-term effect on educational attainment or household consumption per
capita. Most electoral violence in Kenya has occurred during school breaks,
which may have mitigated its long-term effects on human capital and earning
capacity.",The Long-Run Impact of Electoral Violence on Health and Human Capital in Kenya,2021-12-27 15:23:04,Roxana Gutiérrez-Romero,"http://arxiv.org/abs/2112.13849v3, http://arxiv.org/pdf/2112.13849v3",econ.GN
31642,gn,"Coal extraction was an influential economic activity in interwar Japan. In
the initial stage, coal mines used both males and females as miners working in
the pits. However, the innovation of labor-saving technologies and the renewal
of traditional extraction methodology induced institutional change through the
revision of labor regulations on female miners in the early 1930s. This
dramatically changed the mines into places where skilled males were the
principal miners engaged in underground work. I investigate the impact of
coal mining on regional growth and assess how the institutional change induced
by the labor regulations affected its process. By linking the location
information of mines with registration- and census-based statistics, I found
that coal mines led to remarkable population growth. The labor regulations did
not slow but rather accelerated local population growth, as they forced female
miners to exit from the labor market and form their families. The regulations
prohibited risky underground work by female miners. This reduction in
occupational hazards also reduced early-life mortality via the mortality
selection mechanism in utero.","Technology, Institution, and Regional Growth: Evidence from Mineral Mining Industry in Industrializing Japan",2021-12-29 14:47:08,Kota Ogasawara,"http://arxiv.org/abs/2112.14514v10, http://arxiv.org/pdf/2112.14514v10",econ.GN
31643,gn,"The model shift of higher education in Hungary brought not only the deepening
of university-industry relations and technology transfer processes, but also
contributed to the emerging role of universities in shaping regional innovation
policy. This process provides a new framework for cooperation between actors in
regional innovation ecosystems and raises the relationship between
economic-governmental-academic systems to a new level. Active involvement of
government, the predominance of state resources, and the strong
innovation-organizing power of higher education institutions are similarities
that characterize both the Hungarian and Chinese innovation systems. This paper
attempts to gather Chinese good practices whose adaptation can contribute to
successful public-university collaboration. In the light of the examined
practices, the processes related to the university model shift implemented so
far can be placed in a new context, which are presented through the example of
the Sz\'echenyi Istv\'an University of Gy\H{o}r.","Perspectives in Public and University Sector Co-operation in the Change of Higher Education Model in Hungary, in Light of China's Experience",2021-12-21 22:45:18,"Attila Lajos Makai, Szabolcs Ramhap","http://dx.doi.org/10.24307/psz.2021.0116, http://arxiv.org/abs/2112.14713v1, http://arxiv.org/pdf/2112.14713v1",econ.GN
31644,gn,"In most cities, transit consists solely of fixed-route transportation, whence
the inherent limited Quality of Service for travellers in suburban areas and
during off-peak periods. On the other hand, completely replacing fixed-route
(FR) with demand-responsive (DR) transit would imply a huge operational cost.
It is still unclear how to integrate DR transportation into current transit
systems to take full advantage of it. We propose a Continuous Approximation
model of a transit system that gets the best from fixed-route and DR
transportation. Our model allows one to decide whether to deploy an FR or a DR
feeder, in each sub-region of an urban conurbation and each time of day, and to
redesign the line frequencies and the stop spacing of the main trunk service.
Since such a transit design can adapt to the spatial and temporal variation of
the demand, we call it Adaptive Transit. Numerical results show that, with
respect to conventional transit, Adaptive Transit significantly reduces
user-related costs by drastically reducing access time to the main trunk
service. Such benefits are particularly remarkable in the suburbs. Moreover,
the generalized cost, including agency and user cost, is also reduced. These
findings are also confirmed in scenarios with automated vehicles. Our model can
assist in planning future-generation transit systems, able to improve urban
mobility by appropriately combining fixed and DR transportation.",Adaptive Transit Design: Optimizing Fixed and Demand Responsive Multi-Modal Transportation via Continuous Approximation,2021-12-29 21:53:06,"Giovanni Calabro', Andrea Araldo, Simon Oh, Ravi Seshadri, Giuseppe Inturri, Moshe Ben-Akiva","http://arxiv.org/abs/2112.14748v2, http://arxiv.org/pdf/2112.14748v2",econ.GN
31645,gn,"One frequently given explanation for why autocrats maintain corrupt and
inefficient institutions is that the autocrats benefit personally even though
the citizens of their countries are worse off. The empirical evidence does not
support this hypothesis. Autocrats in countries with low-quality institutions
do tend to be wealthy, but typically, they were wealthy before they assumed
power. A plausible explanation, consistent with the data, is that wealthy
individuals in countries with inefficient and corrupt institutions face the
threat of having their wealth appropriated by government, so have the incentive
to use some of their wealth to seek political power to protect the rest of
their wealth from confiscation. While autocrats may use government institutions
to increase their wealth, autocrats in countries with low-quality institutions
tend to be wealthy when they assume power, because wealthy individuals have the
incentive to use their wealth to acquire political power to protect themselves
from a potentially predatory government.",Institutional Quality and the Wealth of Autocrats,2021-12-30 01:17:55,"Christopher Boudreaux, Randall Holcombe","http://arxiv.org/abs/2112.14849v1, http://arxiv.org/pdf/2112.14849v1",econ.GN
31646,gn,"This study attempted to analyze whether there is a relationship between the
performance of drivers and the number of overtime hours worked by them. The
number of overtime hours worked by the drivers in the pool for the years 2017
and 2018 was extracted from the overtime registers, and feedback received on
the performance of drivers from staff members who frequently traveled in the
University vehicles was used for this study. The overall performance of a
driver was decided by taking the aggregate of marks received by him for the
traits: skillfulness, patience, responsibility, customer service and care for
the vehicle. The type of vehicle the driver is assigned to is also taken into
account in the analysis of this study. The study revealed that there is no
significant relationship between the performance of the drivers and the number
of overtime hours worked by them, but the type and condition of the vehicle
affect the assignment of long journeys to them, which enables them to earn
more overtime hours.",Analysis of Performance of Drivers and Usage of Overtime Hours: A Case Study of a Higher Educational Institution,2021-12-29 10:32:58,K. C. Sanjeevani Perera,"http://arxiv.org/abs/2112.15447v1, http://arxiv.org/pdf/2112.15447v1",econ.GN
31647,gn,"Two network measures known as the Economic Complexity Index (ECI) and Product
Complexity Index (PCI) have provided important insights into patterns of
economic development. We show that the ECI and PCI are equivalent to a spectral
clustering algorithm that partitions a similarity graph into two parts. The
measures are also related to various dimensionality reduction methods and can
be interpreted as vectors that determine distances between nodes based on their
similarity. Our results shed new light on the ECI's empirical success in
explaining cross-country differences in GDP/capita and economic growth, which
is often linked to the diversity of country export baskets. In fact, countries
with high (low) ECI tend to specialize in high (low) PCI products. We also find
that the ECI and PCI uncover economically informative specialization patterns
across US states and UK regions.",Interpreting Economic Complexity,2017-11-22 15:12:57,"Penny Mealy, J. Doyne Farmer, Alexander Teytelboym","http://arxiv.org/abs/1711.08245v3, http://arxiv.org/pdf/1711.08245v3",q-fin.EC
31648,gn,"In this article, a game-theoretic model is constructed that is related to the
problem of optimal assignments. Examples are considered. A compromise point is
found, Nash equilibria and the solution of the Nash arbitration scheme
are constructed.",Forecasting the sustainable status of the labor market in agriculture,2018-05-23 13:18:45,"O. A. Malafeyev, V. E. Onishenko, I. V. Zaytseva","http://arxiv.org/abs/1805.09686v1, http://arxiv.org/pdf/1805.09686v1",econ.GN
31649,gn,"This paper scrutinizes the rationale for the adoption of inflation targeting
(IT) by Bank of Ghana in 2002. In this case, we determine the stability or
otherwise of the relationship between money supply and inflation in Ghana over
the period 1970M1-2016M3 using a battery of econometric methods. The empirical
results show an unstable link between inflation and monetary growth in Ghana,
while the final state coefficient of inflation elasticity to money growth is
positive but statistically insignificant. We find that inflation elasticity to
monetary growth has continued to decline since the 1970s, showing a waning
impact of money growth on inflation in Ghana. Notably, there is also evidence
of negative inflation elasticity to monetary growth between 2001 and 2004,
lending support to the adoption of IT framework in Ghana in 2002. We emphasized
that the unprecedented 31 months of single-digit inflation (June 2010-December
2012), despite the observed inflationary shocks in 2010 and 2012, reinforces
the immense contribution of the IT framework in anchoring inflation
expectations, with better inflation outcomes and inflation variability in
Ghana. The paper therefore recommends the continuous pursuance and
strengthening of the IT framework in Ghana, as it embodies a more eclectic
approach to policy formulation and implementation.",Justifying the Adoption and Relevance of Inflation Targeting Framework: A Time-Varying Evidence from Ghana,2018-05-29 19:17:19,"Nana Kwame Akosah, Francis W. Loloh, Maurice Omane-Adjepong","http://arxiv.org/abs/1805.11562v1, http://arxiv.org/pdf/1805.11562v1",econ.GN
31650,gn,"This paper discusses how public research organizations consume funding for
research, applying a new approach based on economic metabolism of research
labs, in a broad analogy with biology. This approach is applied to a case study
in Europe represented by one of the biggest European public research
organizations, the National Research council of Italy. Results suggest that
funding for research (state subsidy and public contracts) of this public
research organization is mainly consumed for the cost of personnel. In
addition, the analysis shows a disproportionate growth of the cost of personnel
in public research labs in comparison with total revenue from government. In
the presence of shrinking public research lab budgets, this organizational
behavior generates inefficiencies and stress. R&D management and public policy
implications are suggested for improving economic performance of public
research organizations in turbulent markets.",How do public research labs use funding for research? A case study,2018-05-25 14:43:27,Mario Coccia,"http://arxiv.org/abs/1805.11932v1, http://arxiv.org/pdf/1805.11932v1",econ.GN
31651,gn,"A theoretical self-sustainable economic model is established based on the
fundamental factors of production, consumption, reservation and reinvestment,
where currency is set as an unconditional credit symbol serving as a transaction
equivalent and stock means. Principal properties of currency are explored in
this ideal economic system. Physical analysis reveals some facts that were not
addressed by traditional monetary theory, and several basic principles of ideal
currency are concluded: 1. The saving-replacement is a more primary function of
currency than the transaction equivalents; 2. The ideal efficiency of currency
corresponds to the least practical value; 3. The contradiction between constant
face value of currency and depreciable goods leads to intrinsic inflation.",A Physical Review on Currency,2018-04-18 06:07:24,Ran Huang,"http://arxiv.org/abs/1805.12102v1, http://arxiv.org/pdf/1805.12102v1",econ.GN
31654,gn,"Monetary integration has both costs and benefits. Europeans have a strong
aversion to exchange rate instability. From this perspective, the EMS has shown
its limits and full monetary union involving a single currency appears to be a
necessity. This is the goal of the EMU project contained in the Maastricht
Treaty. This paper examines the pertinent choices: independence of the Central
Bank, budgetary discipline and economic policy coordination. Therefore, the
implications of EMU for the economic policy of France will be examined. If the
external force disappears, the public sector still cannot circumvent its
solvency constraint. The instrument of national monetary policy will not be
available so the absorption of asymmetric shocks will require greater wage
flexibility and fiscal policy will play a greater role. The paper includes
three parts. The first concerns the economic foundations of monetary union and
the costs it entails. The second is devoted to the institutional arrangements
under the Treaty of Maastricht. The third examines the consequences of monetary
union for the economy and the economic policy of France.",Implications of EMU for the European Community,2018-05-25 14:02:51,Chris Kirrane,"http://arxiv.org/abs/1805.12113v1, http://arxiv.org/pdf/1805.12113v1",econ.GN
31655,gn,"Livestock industries are vulnerable to disease threats, which can cost
billions of dollars and have substantial negative social ramifications. Losses
are mitigated through increased use of disease-related biosecurity practices,
making increased biosecurity an industry goal. Currently, there is no
industry-wide standard for sharing information about disease incidence or
on-site biosecurity strategies, resulting in uncertainty regarding disease
prevalence and biosecurity strategies employed by industry stakeholders. Using
an experimental simulation game, we examined human participants' willingness to
invest in biosecurity when confronted with disease outbreak scenarios. We
varied the scenarios by changing the information provided about 1) disease
incidence and 2) biosecurity strategy or response by production facilities to
the threat of disease. Here we show that willingness to invest in biosecurity
increases with increased information about disease incidence, but decreases
with increased information about biosecurity practices used by nearby
facilities. Thus, the type or context of the uncertainty confronting the
decision maker may be a major factor influencing behavior. Our findings suggest
that policies and practices that encourage greater sharing of disease incidence
information should have the greatest benefit for protecting herd health.",Decision-making in Livestock Biosecurity Practices amidst Environmental and Social Uncertainty: Evidence from an Experimental Game,2018-11-02 23:36:47,"Scott C. Merrill, Christopher J. Koliba, Susan M. Moegenburg, Asim Zia, Jason Parker, Timothy Sellnow, Serge Wiltshire, Gabriela Bucini, Caitlin Danehy, Julia M. Smith","http://dx.doi.org/10.1371/journal.pone.0214500, http://arxiv.org/abs/1811.01081v2, http://arxiv.org/pdf/1811.01081v2",econ.GN
31656,gn,"The ability to uncover preferences from choices is fundamental for both
positive economics and welfare analysis. Overwhelming evidence shows that
choice is stochastic, which has given rise to random utility models as the
dominant paradigm in applied microeconomics. However, as is well known, it is
not possible to infer the structure of preferences in the absence of
assumptions on the structure of noise. This makes it impossible to empirically
test the structure of noise independently from the structure of preferences.
Here, we show that the difficulty can be bypassed if data sets are enlarged to
include response times. A simple condition on response time distributions (a
weaker version of first order stochastic dominance) ensures that choices reveal
preferences without assumptions on the structure of utility noise. Sharper
results are obtained if the analysis is restricted to specific classes of
models. Under symmetric noise, response times allow one to uncover preferences for
choice pairs outside the data set, and if noise is Fechnerian, even choice
probabilities can be forecast out of sample. We conclude by showing that
standard random utility models from economics and standard drift-diffusion
models from psychology necessarily generate data sets fulfilling our sufficient
condition on response time distributions.",Time will tell - Recovering Preferences when Choices are Noisy,2018-11-06 20:11:21,"Carlos Alos-Ferrer, Ernst Fehr, Nick Netzer","http://arxiv.org/abs/1811.02497v1, http://arxiv.org/pdf/1811.02497v1",econ.GN
31657,gn,"We use a simple combinatorial model of technological change to explain the
Industrial Revolution. The Industrial Revolution was a sudden large improvement
in technology, which resulted in significant increases in human wealth and life
spans. In our model, technological change is combining or modifying earlier
goods to produce new goods. The underlying process, which has been the same for
at least 200,000 years, was sure to produce a very long period of relatively
slow change followed with probability one by a combinatorial explosion and
sudden takeoff. Thus, in our model, after many millennia of relative quiescence
in wealth and technology, a combinatorial explosion created the sudden takeoff
of the Industrial Revolution.",A Simple Combinatorial Model of World Economic History,2018-11-12 02:11:54,"Roger Koppl, Abigail Devereaux, Jim Herriot, Stuart Kauffman","http://arxiv.org/abs/1811.04502v1, http://arxiv.org/pdf/1811.04502v1",econ.GN
31658,gn,"This paper examines the association between household healthcare expenses and
participation in the Supplemental Nutrition Assistance Program (SNAP) when
moderated by factors associated with financial stability of households. Using a
large longitudinal panel encompassing eight years, this study finds that an
inter-temporal increase in out-of-pocket medical expenses increased the
likelihood of household SNAP participation in the current period. Financially
stable households with precautionary financial assets to cover at least 6
months' worth of household expenses were significantly less likely to
participate in SNAP. Low-income households that recently experienced an
increase in out-of-pocket medical expenses but had adequate precautionary
savings were less likely than similar households who did not have precautionary
savings to participate in SNAP. Implications for economists, policy makers, and
household finance professionals are discussed.","Health Care Expenditures, Financial Stability, and Participation in the Supplemental Nutrition Assistance Program (SNAP)",2018-11-13 20:21:15,"Yunhee Chang, Jinhee Kim, Swarn Chatterjee","http://arxiv.org/abs/1811.05421v1, http://arxiv.org/pdf/1811.05421v1",econ.GN
31687,gn,"We experimentally evaluate the comparative performance of the winner-bid,
average-bid, and loser-bid auctions for the dissolution of a partnership. The
analysis of these auctions based on the empirical equilibrium refinement of
Velez and Brown (2020) arXiv:1907.12408 reveals that as long as behavior
satisfies weak payoff monotonicity, winner-bid and loser-bid auctions
necessarily exhibit a form of bias when empirical distributions of play
approximate best responses (Velez and Brown, 2020 arXiv:1905.08234). We find
support for both weak payoff monotonicity and the form of bias predicted by the
theory for these two auctions. Consistent with the theory, the average-bid
auction does not exhibit this form of bias. It has lower efficiency than the
winner-bid auction, however.",Empirical bias and efficiency of alpha-auctions: experimental evidence,2019-05-10 01:16:42,"Alexander L. Brown, Rodrigo A. Velez","http://arxiv.org/abs/1905.03876v2, http://arxiv.org/pdf/1905.03876v2",econ.GN
31659,gn,"Almost a decade on from the launch of Bitcoin, cryptocurrencies continue to
generate headlines and intense debate. What started as an underground
experiment by a ragtag group of programmers armed with a Libertarian manifesto
has now resulted in a thriving $230 billion ecosystem, with constant ongoing
innovation. Scholars and researchers alike are realizing that cryptocurrencies
are far more than mere technical innovation; they represent a distinct and
revolutionary new economic paradigm tending towards decentralization.
Unfortunately, this bold new universe is little explored from the perspective
of Islamic economics and finance. Our work aims to address these deficiencies.
Our paper makes the following distinct contributions: we significantly expand
the discussion on whether cryptocurrencies qualify as ""money"" from an Islamic
perspective and we argue that this debate necessitates rethinking certain
fundamental definitions. We conclude that the cryptocurrency phenomenon, with
its radical new capabilities, may hold considerable opportunity which merits
deeper investigation.",Navigating the Cryptocurrency Landscape: An Islamic Perspective,2018-11-14 21:06:57,"Hina Binte Haq, Syed Taha Ali","http://arxiv.org/abs/1811.05935v1, http://arxiv.org/pdf/1811.05935v1",econ.GN
31660,gn,"The increasing policy interests and the vivid academic debate on non-tariff
measures (NTMs) has stimulated a growing literature on how NTMs affect agrifood
trade. The empirical literature provides contrasting and heterogeneous
evidence, with some studies supporting the standards as catalysts view, and
others favouring the standards as barriers explanation. To the extent that NTMs
can influence trade, understanding the prevailing effect, and the motivations
behind one effect or the other, is a pressing issue. We review a large body of
empirical evidence on the effect of NTMs on agri-food trade and conduct a
meta-analysis to disentangle potential determinants of heterogeneity in
estimates. Our findings show the role played by the publication process and by
study-specific assumptions. Some characteristics of the studies are correlated
with positive significant estimates, others covary with negative significant
estimates. Overall, we found that the effects of NTMs vary across types of
NTMs, proxy for NTMs, and levels of details of studies. Not negligible is the
influence of methodological issues and publication process.",The effects of non-tariff measures on agri-food trade: a review and meta-analysis of empirical evidence,2018-11-15 15:57:54,"Fabio Gaetano Santeramo, Emilia Lamonaca","http://dx.doi.org/10.1111/1477-9552.12316, http://arxiv.org/abs/1811.06323v1, http://arxiv.org/pdf/1811.06323v1",econ.GN
31661,gn,"This study examines the network of supply and use of significant innovations
across industries in Sweden, 1970-2013. It is found that 30% of innovation
patterns can be predicted by network stimulus from backward and forward
linkages. The network is hierarchical, characterized by hubs that connect
diverse industries in closely knitted communities. To explain the network
structure, a preferential weight assignment process is proposed as an
adaptation of the classical preferential attachment process to weighted
directed networks. The network structure is strongly predicted by this process
where historical technological linkages and proximities matter, while human
capital flows and economic input-output flows have conflicting effects on link
formation. The results are consistent with the idea that innovations emerge in
closely connected communities, but suggest that the transformation of
technological systems is shaped by technological requirements, imbalances and
opportunities that are not straightforwardly related to other proximities.",Evolution and structure of technological systems - An innovation output network,2018-11-16 14:57:14,Josef Taalbi,"http://dx.doi.org/10.1016/j.respol.2020.104010, http://arxiv.org/abs/1811.06772v1, http://arxiv.org/pdf/1811.06772v1",econ.GN
31662,gn,"Monetization of the non-use and nonmarket values of ecosystem services is
important especially in the areas of environmental cost-benefit analysis,
management and environmental impact assessment. However, the reliability of
valuation estimations has been criticized due to the biases associated with
methods like the popular contingent valuation method (CVM). In order to
provide alternative valuation results for comparison purposes, we proposed the
possibility of using a method that incorporates fact-based costs and contingent
preferences for evaluating non-use and nonmarket values, which we referred to
as value allotment method (VAM). In this paper, we discussed the economic
principles of VAM, introduced the performing procedure, analyzed assumptions
and potential biases associated with the method, and compared VAM with CVM
through a case study in Guangzhou, China. The case study showed that the VAM
gave more conservative estimates than the CVM, which could be a merit since CVM
often generates overestimated values. We believe that this method can be used
at least as a referential alternative to CVM and might be particularly useful
in assessing the non-use and nonmarket values of ecosystem services from
human-invested ecosystems, such as restored ecosystems, man-made parks and
croplands.",A possible alternative evaluation method for the non-use and nonmarket values of ecosystem services,2018-11-20 20:25:07,"Shuyao Wu, Shuangcheng Li","http://arxiv.org/abs/1811.08376v1, http://arxiv.org/pdf/1811.08376v1",econ.GN
31663,gn,"This is the first study to explore the transmission paths for liquidity
shocks in China's segmented money market. We examine how money market
transactions create such pathways between China's closely-guarded banking
sector and the rest of its financial system, and empirically capture the
transmission of liquidity shocks through these pathways during two recent
market events. We find strong indications that money market transactions allow
liquidity shocks to circumvent certain regulatory restrictions and financial
market segmentation in China. Our findings suggest that a widespread
illiquidity contagion facilitated by money market transactions can happen in
China and new policy measures are needed to prevent such contagion.",The transmission of liquidity shocks via China's segmented money market: evidence from recent market events,2018-11-21 23:59:29,"Ruoxi Lu, David A. Bessler, David J. Leatham","http://dx.doi.org/10.1016/j.intfin.2018.07.005, http://arxiv.org/abs/1811.08949v1, http://arxiv.org/pdf/1811.08949v1",econ.GN
31664,gn,"We study long-run selection and treatment effects of a health insurance
subsidy in Ghana, where mandates are not enforceable. We randomly provide
different levels of subsidy (1/3, 2/3, and full), with follow-up surveys seven
months and three years after the initial intervention. We find that a one-time
subsidy promotes and sustains insurance enrollment for all treatment groups,
but long-run health care service utilization increases only for the partial
subsidy groups. We find evidence that selection explains this pattern: those
who were enrolled due to the subsidy, especially the partial subsidy, are more
ill and have greater health care utilization.",Long-run Consequences of Health Insurance Promotion When Mandates are Not Enforceable: Evidence from a Field Experiment in Ghana,2018-11-22 06:34:30,"Patrick Asuming, Hyuncheol Bryant Kim, Armand Sim","http://arxiv.org/abs/1811.09004v2, http://arxiv.org/pdf/1811.09004v2",econ.GN
31665,gn,"Global achievement of climate change mitigation will heavy reply on how much
of CO2 emission has and will be released by China. After rapid growth of
emissions during last decades, China CO2 emissions declined since 2014 that
driven by decreased coal consumption, suggesting a possible peak of China coal
consumption and CO2 emissions. Here, by combining a updated methodology and
underlying data from different sources, we reported the soaring 5.5% (range:
+2.5% to +8.5% for one sigma) increase of China CO2 emissions in 2018 compared
to 2017, suggesting China CO2 is not yet to peak and leaving a big uncertain to
whether China emission will continue to rise in the future. Although our best
estimate of total emission (9.9Gt CO2 in 2018) is lower than international
agencies in the same year, the results show robust on a record-high energy
consumption and total CO2 emission in 2018. During 2014-2016, China energy
intensity (energy consumption per unit of GDP) and total CO2 emissions has
decreased driven by energy and economic structure optimization. However, the
decrease in emissions is now offset by stimulates of heavy industry production
under economic downturn that driving coal consumption (+5% in 2018), as well as
the surging of natural gas consumption (+18% in 2018) due to the government led
coal-to-gas energy transition to reduce local air pollutions. Timing policy and
actions are urgent needed to address on these new drivers to turn down the
total emission growth trend.",New dynamics of energy use and CO2 emissions in China,2018-11-23 16:45:39,"Zhu Liu, Bo Zheng, Qiang Zhang","http://arxiv.org/abs/1811.09475v1, http://arxiv.org/pdf/1811.09475v1",econ.GN
31666,gn,"In this article, we have modeled mortality rates of Peruvian female and male
populations during the period of 1950-2017 using the Lee-Carter (LC) model. The
stochastic mortality model was introduced by Lee and Carter (1992) and has been
used by many authors for fitting and forecasting the human mortality rates. The
Singular Value Decomposition (SVD) approach is used for estimation of the
parameters of the LC model. Utilizing the best-fitted autoregressive
integrated moving average (ARIMA) model, we forecast the values of the time
dependent parameter of the LC model for the next thirty years. The forecasted
values of life expectancy at different age group with $95\%$ confidence
intervals are also reported for the next thirty years. In this research we use
the data, obtained from the Peruvian National Institute of Statistics (INEI).",Lee-Carter method for forecasting mortality for Peruvian Population,2018-11-23 07:21:25,"J. Cerda-Hernández, A. Sikov","http://dx.doi.org/10.17268/sel.mat.2021.01.05, http://arxiv.org/abs/1811.09622v1, http://arxiv.org/pdf/1811.09622v1",econ.GN
31667,gn,"Understanding market participants' channel choices is important to policy
makers because it yields information on which channels are effective in
transmitting information. These channel choices are the result of a recursive
process of social interactions and determine the observable trading networks.
They are characterized by feedback mechanisms due to peer interaction and
therefore need to be understood as complex adaptive systems (CAS). When
modelling CAS, conventional approaches like regression analyses face severe
drawbacks since endogeneity is omnipresent. As an alternative, process-based
analyses allow researchers to capture these endogenous processes and multiple
feedback loops. This paper applies an agent-based modelling approach (ABM) to
the empirical example of the Indonesian rubber trade. The feedback mechanisms
are modelled via an innovative approach of a social matrix, which allows
decisions made in a specific period to feed back into the decision processes in
subsequent periods, and allows agents to systematically assign different
weights to the decision parameters based on their individual characteristics.
In the validation against the observed network, uncertainty in the found
estimates, as well as underdetermination of the model, are dealt with via an
approach of evolutionary calibration: a genetic algorithm finds the combination
of parameters that maximizes the similarity between the simulated and the
observed network. Results indicate that the sellers' channel choice decisions
are mostly driven by physical distance and debt obligations, as well as
peer-interaction. Within the social matrix, the most influential individuals
are sellers that live close by to other traders, are active in social groups
and belong to the ethnic majority in their village.",Modelling Social Evolutionary Processes and Peer Effects in Agricultural Trade Networks: the Rubber Value Chain in Indonesia,2018-11-28 13:10:40,"Thomas Kopp, Jan Salecker","http://arxiv.org/abs/1811.11476v1, http://arxiv.org/pdf/1811.11476v1",econ.GN
31668,gn,"Corruption is an endemic societal problem with profound implications in the
development of nations. In combating this issue, cross-national evidence
supporting the effectiveness of the rule of law seems at odds with poorly
realized outcomes from reforms inspired in such literature. This paper provides
an explanation for such contradiction. By taking a computational approach, we
develop two methodological novelties into the empirical study of corruption:
(1) generating large within-country variation by means of simulation (instead
of cross-national data pooling), and (2) accounting for interactions between
covariates through a spillover network. The latter (the network) seems
responsible for a significant reduction in the effectiveness of the rule of
law; especially among the least developed countries. We also find that
effectiveness can be boosted by improving complementary policy issues that may
lie beyond the governance agenda. Moreover, our simulations suggest that
improvements to the rule of law are a necessary yet not sufficient condition to
curve corruption.",Does Better Governance Guarantee Less Corruption? Evidence of Loss in Effectiveness of the Rule of Law,2019-02-01 19:05:41,"Omar A. Guerrero, Gonzalo Castañeda","http://arxiv.org/abs/1902.00428v1, http://arxiv.org/pdf/1902.00428v1",econ.GN
31669,gn,"We provide two methodological insights on \emph{ex ante} policy evaluation
for macro models of economic development. First, we show that the problems of
parameter instability and lack of behavioral constancy can be overcome by
considering learning dynamics. Hence, instead of defining social constructs as
fixed exogenous parameters, we represent them through stable functional
relationships such as social norms. Second, we demonstrate how agent computing
can be used for this purpose. By deploying a model of policy prioritization
with endogenous government behavior, we estimate the performance of different
policy regimes. We find that, while strictly adhering to policy recommendations
increases efficiency, the nature of such recipes has a bigger effect. In other
words, while it is true that lack of discipline is detrimental to prescription
outcomes (a common defense of failed recommendations), it is more important
that such prescriptions consider the systemic and adaptive nature of the
policymaking process (something neglected by traditional technocratic advice).",The Importance of Social and Government Learning in Ex Ante Policy Evaluation,2019-02-01 19:05:47,"Gonzalo Castañeda, Omar A. Guerrero","http://arxiv.org/abs/1902.00429v1, http://arxiv.org/pdf/1902.00429v1",econ.GN
31688,gn,"This study investigates the impacts of the Automobile NOx Law of 1992 on
ambient air pollutants and fetal and infant health outcomes in Japan. Using
panel data taken from more than 1,500 monitoring stations between 1987 and
1997, we find that NOx and SO2 levels were reduced by 87% and 52%, respectively, in
regulated areas following the 1992 regulation. In addition, using a
municipal-level Vital Statistics panel dataset and adopting the regression
differences-in-differences method, we find that the enactment of the regulation
explained most of the improvements in the fetal death rate between 1991 and
1993. This study is the first to provide evidence on the positive impacts of
this large-scale automobile regulation policy on fetal health.","Particulate Air Pollution, Birth Outcomes, and Infant Mortality: Evidence from Japan's Automobile Emission Control Law of 1992",2019-05-11 04:12:19,"Tatsuki Inoue, Nana Nunokawa, Daisuke Kurisu, Kota Ogasawara","http://arxiv.org/abs/1905.04417v2, http://arxiv.org/pdf/1905.04417v2",econ.GN
31670,gn,"Over the last 30 years, the concept of policy coherence for development has
received especial attention among academics, practitioners and international
organizations. However, its quantification and measurement remain elusive. To
address this challenge, we develop a theoretical and empirical framework to
measure the coherence of policy priorities for development. Our procedure takes
into account the country-specific constraints that governments face when trying
to reach specific development goals. Hence, we put forward a new definition of
policy coherence where context-specific efficient resource allocations are
employed as the baseline to construct an index. To demonstrate the usefulness
and validity of our index, we analyze the cases of Mexico, Korea and Estonia,
three developing countries that, arguably, joined the OECD with the aim of
coherently establishing policies that could enable a catch-up process. We find
that Korea shows significant signs of policy coherence, Estonia seems to be in
the process of achieving it, and Mexico has unequivocally failed. Furthermore,
our results highlight the limitations of assessing coherence in terms of naive
benchmark comparisons using development-indicator data. Altogether, our
framework sheds new light in a promising direction to develop bespoke analytic
tools to meet the 2030 agenda.",Quantifying the Coherence of Development Policy Priorities,2019-02-01 19:05:52,"Omar A. Guerrero, Gonzalo Castañeda","http://arxiv.org/abs/1902.00430v1, http://arxiv.org/pdf/1902.00430v1",econ.GN
31671,gn,"Determining policy priorities is a challenging task for any government
because there may be, for example, a multiplicity of objectives to be
simultaneously attained, a multidimensional policy space to be explored,
inefficiencies in the implementation of public policies, interdependencies
between policy issues, etc. Altogether, these factors generate a complex
landscape that governments need to navigate in order to reach their goals. To
address this problem, we develop a framework to model the evolution of
development indicators as a political economy game on a network. Our approach
accounts for the --recently documented-- network of spillovers between policy
issues, as well as the well-known political economy problem arising from budget
assignment. This allows us to infer not only policy priorities, but also the
effective use of resources in each policy issue. Using development indicators
data from more than 100 countries over 11 years, we show that the
country-specific context is a central determinant of the effectiveness of
policy priorities. In addition, our model explains well-known aggregate facts
about the relationship between corruption and development. Finally, this
framework provides a new analytic tool to generate bespoke advice on
development strategies.",How do governments determine policy priorities? Studying development strategies through spillover networks,2019-02-01 19:05:59,"Omar A. Guerrero, Gonzalo Castañeda, Florian Chávez-Juárez","http://arxiv.org/abs/1902.00432v1, http://arxiv.org/pdf/1902.00432v1",econ.GN
31672,gn,"Sources of bias in empirical studies can be separated in those coming from
the modelling domain (e.g. multicollinearity) and those coming from outliers.
We propose a two-step approach to counter both issues. First, by
decontaminating data with a multivariate outlier detection procedure and
second, by consistently estimating parameters of the production function. We
apply this approach to a panel of German field crop data. Results show that the
decontamination procedure detects multivariate outliers. In general,
multivariate outlier control delivers more reasonable results with a higher
precision in the estimation of some parameters and seems to mitigate the
effects of multicollinearity.",Robust Productivity Analysis: An application to German FADN data,2019-02-02 12:30:08,"Mathias Kloss, Thomas Kirschstein, Steffen Liebscher, Martin Petrick","http://arxiv.org/abs/1902.00678v2, http://arxiv.org/pdf/1902.00678v2",econ.GN
31673,gn,"We prove that a subtle but substantial bias exists in a common measure of the
conditional dependence of present outcomes on streaks of past outcomes in
sequential data. The magnitude of this streak selection bias generally
decreases as the sequence gets longer, but increases in streak length, and
remains substantial for a range of sequence lengths often used in empirical
work. We observe that the canonical study in the influential hot hand fallacy
literature, along with replications, are vulnerable to the bias. Upon
correcting for the bias we find that the long-standing conclusions of the
canonical study are reversed.",Surprised by the Hot Hand Fallacy? A Truth in the Law of Small Numbers,2019-02-04 18:53:56,"Joshua B. Miller, Adam Sanjurjo","http://dx.doi.org/10.3982/ECTA14943, http://arxiv.org/abs/1902.01265v1, http://arxiv.org/pdf/1902.01265v1",econ.GN
31674,gn,"In this book, we outline a model of a non-capitalist market economy based on
not-for-profit forms of business. This work presents both a critique of the
current economic system and a vision of a more socially, economically, and
ecologically sustainable economy. The point of departure is the purpose and
profit-orientation embedded in the legal forms used by businesses (e.g.,
for-profit or not-for-profit) and the ramifications of this for global
sustainability challenges such as environmental pollution, resource use,
climate change, and economic inequality. We document the rapid rise of
not-for-profit forms of business in the global economy and offer a conceptual
framework and an analytical lens through which to view these relatively new
economic actors and their potential for transforming the economy. The book
explores how a market consisting of only or mostly not-for-profit forms of
business might lead to better financial circulation, economic equality, social
well-being, and environmental regeneration as compared to for-profit markets.",How on Earth: Flourishing in a Not-for-Profit World by 2050,2019-02-04 20:33:48,"Jennifer Hinton, Donnie Maclurcan","http://arxiv.org/abs/1902.01398v1, http://arxiv.org/pdf/1902.01398v1",econ.GN
31675,gn,"Issues such as urban sprawl, congestion, oil dependence, climate change and
public health are prompting urban and transportation planners to turn to land
use and urban design to rein in automobile use. One of the implicit beliefs in
this effort is that the right land-use policies will, in fact, help to reduce
automobile use and increase the use of alternative modes of transportation.
Thus, planners and transport engineers are increasingly viewing land use
policies and lifestyle patterns as a way to manage transportation demand. While
a substantial body of work has looked at the relationship between the built
environment and travel behaviour, as well as the influence of lifestyles and
lifestyle-related decisions on using different travel modes and activity
behaviours, limited work has been done in capturing these effects
simultaneously and also in exploring the effect of intra-household interaction
on individual attitudes and beliefs towards travel and activity behavior, and
their subsequent influence on lifestyles and modality styles. Therefore, for
this study we proposed a framework that captures the concurrent influence of
lifestyles and modality styles on both household-level decisions, such as
neighbourhood location, and individual-level decisions, such as travel mode
choices using a hierarchical Latent Class Choice Model.",A lifestyle-based model of household neighbourhood location and individual travel mode choice behaviours,2019-02-06 03:41:08,"Ali Ardeshiri, Akshay Vij","http://arxiv.org/abs/1902.01986v2, http://arxiv.org/pdf/1902.01986v2",econ.GN
31676,gn,"A significant part of the United Nations World Heritage Sites (WHSs) is
located in developing countries. These sites attract an increasing number of
tourists and income to these countries. Unfortunately, many of these WHSs are in
a poor condition due to climatic and environmental impacts, war and tourism
pressure, creating an urgent need for restoration and preservation (Tuan &
Navrud, 2007). In this study, we characterise Shiraz city residents'
(both visitors' and non-visitors') willingness to invest in the management of the
heritage sites through models for the preservation of heritage and development
of tourism as a local resource. The research looks at different categories of
heritage sites within Shiraz city, Iran. The measurement instrument is a stated
preference referendum task administered state-wide to a sample of 489
respondents, with the payment mechanism defined as a purpose-specific
incremental levy of a fixed amount over a set period of years. A Latent Class
Binary Logit model, using parametric constraints, is used innovatively to deal
with any strategic voting such as Yea-sayers and Nay-sayers, as well as
revealing the latent heterogeneity among sample members. Results indicate that
almost 14% of the sampled population is unwilling to be levied any amount
(Nay-sayers) to preserve any heritage sites. Not recognizing the presence of
nay-sayers in the data or recognizing them but eliminating them from the
estimation will result in biased Willingness to Pay (WTP) results and,
consequently, biased policy propositions by authorities. Moreover, it is found
that the type of heritage site is a driver of WTP. The results from this study
provide insights into the WTP of heritage site visitors and non-visitors with
respect to avoiding the impacts of future erosion and destruction and
contributing to heritage management and maintenance policies.",Conservation or deterioration in heritage sites? Estimating willingness to pay for preservation,2019-02-07 01:32:32,"Ali Ardeshiri, Roya Etminani Ghasrodashti, Taha Hossein Rashidi, Mahyar Ardeshiri, Ken Willis","http://arxiv.org/abs/1902.02418v1, http://arxiv.org/pdf/1902.02418v1",econ.GN
31677,gn,"Using discrete choice modelling, the study investigates 946 American
consumers' willingness-to-pay and preferences for diverse beef products. A novel
experiment was used to elicit the number of beef products that each consumer
would purchase. The range of products explored in this study included ground,
diced, roast, and six cuts of steaks (sirloin, tenderloin, flank, flap, New
York and cowboy or rib-eye). The outcome of the study suggests that US
consumers vary in their preferences for beef products by season. The presence
of a USDA certification logo is by far the most important factor affecting
consumers' willingness to pay for all beef cuts, which is also heavily dependent
on season. In relation to packaging, US consumers have mixed preferences for
different beef products by season. The results from a scaled adjusted ordered
logit model showed that after price, safety-related attributes such as
certification logos, types of packaging, and antibiotic-free and organic
products are a stronger influence on American consumers' choices. Furthermore, US
consumers on average purchase diced and roast products more often in the winter
slow-cooking season than in summer, whereas New York strip and flank steak are
more popular in the summer grilling season. This study provides valuable
insights for businesses as well as policymakers to make informed decisions by
considering how consumers value different labelling and
product attributes by season, and to better address any ethical, safety and
aesthetic concerns that consumers might have.",Seasonality Effects on Consumers Preferences Over Quality Attributes of Different Beef Products,2019-02-07 01:35:43,"Ali Ardeshiri, Spring Sampson, Joffre Swait","http://dx.doi.org/10.1016/j.meatsci.2019.06.004, http://arxiv.org/abs/1902.02419v1, http://arxiv.org/pdf/1902.02419v1",econ.GN
31678,gn,"This study explores the potential of crowdfunding as a tool for achieving
citizen co-funding of public projects. Focusing on philanthropic crowdfunding,
we examine whether collaborative projects between public and private
organizations are more successful in fundraising than projects initiated solely
by private organizations. We argue that government involvement in crowdfunding
provides some type of accreditation or certification that attests to a project's
aim of achieving public rather than private goals, thereby mitigating information
asymmetry and improving mutual trust between creators (i.e., private sector
organizations) and funders (i.e., crowd). To support this argument, we show
that crowdfunding projects with government involvement achieved a greater
success rate and attracted a greater amount of funding than comparable projects
without government involvement. This evidence shows that governments may take
advantage of crowdfunding to co-fund public projects with the citizenry for
addressing the complex challenges that we face in the twenty-first century.",Crowdfunding Public Projects: Collaborative Governance for Achieving Citizen Co-funding of Public Goods,2019-02-07 08:36:45,"Sounman Hong, Jungmin Ryu","http://arxiv.org/abs/1902.02480v1, http://arxiv.org/pdf/1902.02480v1",econ.GN
31679,gn,"Coastal erosion is a global and pervasive phenomenon that predicates a need
for a strategic approach to the future management of coastal values and assets
(both built and natural), should we invest in protective structures like
seawalls that aim to preserve specific coastal features, or allow natural
coastline retreat to preserve sandy beaches and other coastal ecosystems.
Determining the most suitable management approach in a specific context
requires a better understanding of the full suite of economic values the
populations holds for coastal assets, including non-market values. In this
study, we characterise New South Wales residents willingness to pay to maintain
sandy beaches (width and length). We use an innovative application of a Latent
Class Binary Logit model to deal with Yea-sayers and Nay-sayers, as well as
revealing the latent heterogeneity among sample members. We find that 65% of
the population would be willing to pay some amount of levy, dependent on the
policy setting. In most cases, the degree of beach deterioration, characterised
as a loss of sandy beach width and length of between 5% and 100%, has no effect
on respondents' willingness to pay for a management levy.
This suggests that respondents who agreed to pay a management levy were
motivated to preserve sandy beaches in their current state irrespective of the
severity of sand loss likely to occur as a result of coastal erosion.
Willingness to pay also varies according to beach type (amongst Iconic, Main,
Bay and Surf beaches), a finding that can assist with spatial prioritisation of
coastal management. Not recognizing the presence of nay-sayers in the data or
recognizing them but eliminating them from the estimation will result in biased
WTP results and, consequently, biased policy propositions by coastal managers.",Preserve or retreat? Willingness-to-pay for Coastline Protection in New South Wales,2019-02-07 01:28:44,"Ali Ardeshiri, Joffre Swait, Elizabeth C. Heagney, Mladen Kovac","http://dx.doi.org/10.1016/j.ocecoaman.2019.05.007, http://arxiv.org/abs/1902.03310v1, http://arxiv.org/pdf/1902.03310v1",econ.GN
31680,gn,"The objective of this study is to understand how senders choose shipping
services for different products, given the availability of both emerging
crowd-shipping (CS) and traditional carriers in a logistics market. Using data
collected from a US survey, Random Utility Maximization (RUM) and Random Regret
Minimization (RRM) models have been employed to reveal factors that influence
the diversity of decisions made by senders. Shipping costs, along with
additional real-time services such as courier reputations, tracking info,
e-notifications, and customized delivery time and location, have been found to
have remarkable impacts on senders' choices. Interestingly, potential senders
were willing to pay more to ship grocery items such as food, beverages, and
medicines by CS services. Moreover, the real-time services have low
elasticities, meaning that only a slight change in those services will lead to
a change in sender-behavior. Finally, data-science techniques were used to
assess the performance of the RUM and RRM models and found to have similar
accuracies. The findings from this research will help logistics firms address
potential market segments, prepare service configurations to fulfill senders'
expectations, and develop effective business operations strategies.",Influencing factors that determine the usage of the crowd-shipping services,2019-02-23 00:49:22,"Tho V. Le, Satish V. Ukkusuri","http://arxiv.org/abs/1902.08681v1, http://arxiv.org/pdf/1902.08681v1",econ.GN
31682,gn,"We introduce a dynamic principal-agent model to understand the nature of
contracts between an employer and an independent gig worker. We model the
worker's self-respect with an endogenous backward-looking participation
constraint; he accepts a job offer if and only if its utility is at least as
large as his reference value, which is based on the average of previously
realized wages. If the dynamically changing reference value capturing the
worker's demand is too high, then no contract is struck until the reference
value hits a threshold. Below the threshold, contracts are offered and
accepted, and the worker's wage demand follows a stochastic process. We apply
our model to perfectly competitive and monopsonistic labor market structures
and investigate first-best and second-best solutions. We show that a
far-sighted employer with market power may sacrifice instantaneous profit to
regulate the agent's demand. Moreover, the far-sighted employer implements
increasing and path-dependent effort levels. Our model captures the worker's
bargaining power by a vulnerability parameter that measures the rate at which
his wage demand decreases when unemployed. With a low vulnerability parameter,
the worker can afford to go unemployed and need not take a job at all costs.
Conversely, a worker with high vulnerability can be exploited by the employer,
and in this case our model also exhibits self-exploitation.",Self-respecting worker in the precarious gig economy: A dynamic principal-agent model,2019-02-26 19:05:06,"Zsolt Bihary, Péter Csóka, Péter Kerényi, Alexander Szimayer","http://dx.doi.org/10.2139/ssrn.3866721, http://arxiv.org/abs/1902.10021v4, http://arxiv.org/pdf/1902.10021v4",econ.GN
31683,gn,"Research in operations management has traditionally focused on models for
understanding, mostly at a strategic level, how firms should operate. Spurred
by the growing availability of data and recent advances in machine learning and
optimization methodologies, data analytics has been increasingly applied to
problems in operations management. In this paper, we review recent
applications of data analytics to operations management, in three major areas
-- supply chain management, revenue management and healthcare operations -- and
highlight some exciting directions for the future.",Data Analytics in Operations Management: A Review,2019-05-02 05:52:26,"Velibor V. Mišić, Georgia Perakis","http://arxiv.org/abs/1905.00556v1, http://arxiv.org/pdf/1905.00556v1",econ.GN
31684,gn,"We consider an environment where players need to decide whether to buy a
certain product (or adopt a technology) or not. The product is either good or
bad, but its true value is unknown to the players. Instead, each player has her
own private information on its quality. Each player can observe the previous
actions of other players and estimate the quality of the product. A classic
result in the literature shows that in similar settings informational cascades
occur where learning stops for the whole network and players repeat the actions
of their predecessors. In contrast to this literature, in this work, players
get more than one opportunity to act. In each turn, a player is chosen
uniformly at random from all players and can decide to buy the product and
leave the market or wait. Her utility is the total expected discounted reward,
and thus myopic strategies may not constitute equilibria. We provide a
characterization of perfect Bayesian equilibria (PBE) with forward-looking
strategies through a fixed-point equation of dimensionality that grows only
quadratically with the number of players. Using this tractable fixed-point
equation, we show the existence of a PBE and characterize PBE with threshold
strategies. Based on this characterization we study informational cascades in
two regimes. First, we show that for a discount factor {\delta} strictly
smaller than one, informational cascades happen with high probability as the
number of players N increases. Furthermore, only a small portion of the total
information in the system is revealed before a cascade occurs ...",Do Informational Cascades Happen with Non-myopic Agents?,2019-05-03 21:07:45,"Ilai Bistritz, Nasimeh Heydaribeni, Achilleas Anastasopoulos","http://arxiv.org/abs/1905.01327v4, http://arxiv.org/pdf/1905.01327v4",econ.GN
31685,gn,"The central problems of Development Economics are the explanation of the
gross disparities in the global distribution, $\cal{D}$, of economic
performance, $\cal{E}$, and the persistence, $\cal{P}$, of said distribution.
Douglass North argued, epigrammatically, that institutions, $\cal{I}$, are the
rules of the game, meaning that $\cal{I}$ determines or at least constrains
$\cal{E}$. This promised to explain $\cal{D}$. 65,000 citations later, the
central problems remain unsolved. North's institutions are informal, slowly
changing cultural norms as well as roads, guilds, and formal legislation that
may change overnight. This definition, mixing the static and the dynamic, is
unsuited for use in a necessarily time dependent theory of developing
economies. We offer here a suitably precise definition of $\cal{I}$, a
dynamical theory of economic development, a new measure of the economy, an
explanation of $\cal{P}$, a bivariate model that explains half of $\cal{D}$,
and a critical reconsideration of North's epigram.",Economic Performance Through Time: A Dynamical Theory,2019-05-08 11:38:17,"Daniel Seligson, Anne McCants","http://arxiv.org/abs/1905.02956v1, http://arxiv.org/pdf/1905.02956v1",econ.GN
31686,gn,"This article analyzes the role of Finnish regulation in achieving the
broadband penetration goals defined by the National Regulatory Authority. It is
well known that in the absence of regulatory mitigation the population density
has a positive effect on broadband diffusion. Hence, we measure the effect of
the population density on the determinants of broadband diffusion throughout
the postal codes of Finland via Geographically Weighted Regression. We suggest
that the main determinants of broadband diffusion and the population density
follow a spatial pattern that is either concentric with a weak/medium/strong
strength or non-concentric convex/concave. Based on 10 patterns, we argue that
the Finnish spectrum policy encouraged Mobile Network Operators to satisfy
ambitious Universal Service Obligations without the need for a Universal
Service Fund. Spectrum auctions facilitated infrastructure-based competition
via equitable spectrum allocation and coverage obligation delivery via low-fee
licenses. However, state subsidies for fiber deployment did not attract
investment from nationwide operators due to mobile preference. These subsidies
encouraged demand-driven investment, leading to the emergence of fiber consumer
cooperatives. To explain this emergence, we show that when population density
decreases, the level of mobile service quality decreases and community
commitment increases. Hence, we recommend that regulators implement market-driven
strategies for 5G to stimulate local investment, for example by allocating the
3.5 GHz and higher bands partly through local light licensing.",The mitigating role of regulation on the concentric patterns of broadband diffusion. The case of Finland,2019-05-08 13:43:54,"Jaume Benseny, Juuso Töyli, Heikki Hämmäinen, Andrés Arcia-Moret","http://dx.doi.org/10.1016/j.tele.2019.04.008, http://arxiv.org/abs/1905.03002v3, http://arxiv.org/pdf/1905.03002v3",econ.GN
31690,gn,"The literature suggests that autonomous vehicles (AVs) may drastically change
the user experience of private automobile travel by allowing users to engage in
productive or relaxing activities while travelling. As a consequence, the
generalised cost of car travel may decrease, and car users may become less
sensitive to travel time. By facilitating private motorised mobility, AVs may
eventually impact land use and households' residential location choices. This
paper seeks to advance the understanding of the potential impacts of AVs on
travel behaviour and land use by investigating stated preferences for
combinations of residential locations and travel options for the commute in the
context of autonomous automobile travel. Our analysis draws from a stated
preference survey, which was completed by 512 commuters from the Sydney
metropolitan area in Australia and provides insights into travel time
valuations in a long-term decision-making context. For the analysis of the
stated choice data, mixed logit models are estimated. Based on the empirical
results, no changes in the valuation of travel time due to the advent of AVs
should be expected. However, given the hypothetical nature of the stated
preference survey, the results may be affected by methodological limitations.",Autonomous Driving and Residential Location Preferences: Evidence from a Stated Choice Survey,2019-05-27 23:25:29,"Rico Krueger, Taha H. Rashidi, Vinayak V. Dixit","http://dx.doi.org/10.1016/j.trc.2019.09.018, http://arxiv.org/abs/1905.11486v3, http://arxiv.org/pdf/1905.11486v3",econ.GN
31691,gn,"Relative advantage, or the degree to which a new technology is perceived to
be better than the existing technology it supersedes, has a significant impact
on individuals' decisions to adopt the new technology. This paper
investigates the impact of electric vehicles' perceived advantage over
conventional internal combustion engine vehicles, from consumers' perspective,
on their decision to select electric vehicles. Data is obtained from a stated
preference survey from 1176 residents in New South Wales, Australia. The
collected data is used to estimate an integrated choice and latent variable
model of electric vehicle choice, which incorporates the perceived advantage of
electric vehicles in the form of latent variables in the utility function. The
design of the electric vehicle, impact on the environment, and safety are three
identified advantages from consumers' point of view. The model is used to
simulate the effectiveness of various policies to promote electric vehicles among
different cohorts. A rebate on the purchase price is found to be the most
effective strategy to promote electric vehicles adoption.",Perceived Advantage in Perspective Application of Integrated Choice and Latent Variable Model to Capture Electric Vehicles Perceived Advantage from Consumers Perspective,2019-05-28 07:41:42,"Milad Ghasri, Ali Ardeshiri, Taha Rashidi","http://arxiv.org/abs/1905.11606v2, http://arxiv.org/pdf/1905.11606v2",econ.GN
31692,gn,"We use a rich, census-like Brazilian dataset containing information on
spatial mobility, schooling, and income in which we can link children to
parents to assess the impact of early education on several labor market
outcomes. Brazilian public primary schools admit children up to one year
younger than the national minimum age to enter school if their birthday is
before an arbitrary threshold, causing an exogenous variation in schooling at
adulthood. Using a Regression Discontinuity Design, we estimate that one additional
year of schooling increases labor income by 25.8% - almost twice as large as
estimates from Mincerian models. Around this cutoff there is also a gap of
9.6% in the probability of holding a college degree in adulthood, with which we
estimate the college premium and find a 201% increase in labor income. We test
the robustness of our estimates using placebo variables, alternative model
specifications and McCrary Density Tests.",Labor Market Outcomes and Early Schooling: Evidence from School Entry Policies Using Exact Date of Birth,2019-05-30 23:04:34,"Pedro Cavalcante Oliveira, Daniel Duque","http://arxiv.org/abs/1905.13281v1, http://arxiv.org/pdf/1905.13281v1",econ.GN
31693,gn,"The fall of the Berlin Wall in 1989, modified the relations between cities of
the former communist bloc. The European and worldwide reorientation of
interactions that followed raises the question of the actual state of
historical relationships between Central Eastern European cities, but also with
ex-USSR and ex-Yugoslavian ones. Do Central and Eastern European cities
reproduce trajectories from the past in a new economic context? This paper will
examine their evolution in terms of trade exchanges and air traffic connexions
since 1989. These are confronted with transnational firm networks for recent
years. The main contribution is to show the progressive formation of several
economic regions in Central and Eastern Europe as a result of integration into
Braudel's économie-monde.",Integration into économie-monde and regionalisation of the Central Eastern European space since 1989,2019-10-31 21:10:27,Natalia Zdanowska,"http://arxiv.org/abs/1911.00033v1, http://arxiv.org/pdf/1911.00033v1",econ.GN
31694,gn,"We implement nonparametric revealed-preference tests of subjective expected
utility theory and its generalizations. We find that a majority of subjects'
choices are consistent with the maximization of some utility function. They
respond to price changes in the direction subjective expected utility theory
predicts, but not to a degree that makes them consistent with the theory.
Maxmin expected utility adds no explanatory power. The degree of deviations
from the theory is uncorrelated with demographic characteristics. Our findings
are essentially the same in laboratory data with a student population and in a
panel survey with a general sample of the U.S. population.",Decision Making under Uncertainty: An Experimental Study in Market Settings,2019-11-03 22:05:59,"Federico Echenique, Taisuke Imai, Kota Saito","http://arxiv.org/abs/1911.00946v3, http://arxiv.org/pdf/1911.00946v3",econ.GN
31695,gn,"This paper shows how data science can contribute to improving empirical
research in economics by leveraging large datasets and extracting
information otherwise unsuitable for a traditional econometric approach. As a
test-bed for our framework, machine learning algorithms allow us to create a
new holistic measure of innovation built on a 2012 Italian Law aimed at
boosting new high-tech firms. We adopt this measure to analyse the impact of
innovativeness on a large population of Italian firms which entered the market
at the beginning of the 2008 global crisis. The methodological contribution is
organised in different steps. First, we train seven supervised learning
algorithms to recognise innovative firms on 2013 firmographics data and select
a combination of those with the best predictive power. Second, we apply this
combination to the 2008 dataset and predict which firms would have been labelled as
innovative according to the definition of the law. Finally, we adopt this new
indicator as a regressor in a survival model to explain firms' ability to remain
in the market after 2008. Results suggest that innovative firms
are more likely to survive than the rest of the sample, but the survival
premium is likely to depend on location.",The survival of start-ups in time of crisis. A machine learning approach to measure innovation,2019-11-04 11:40:00,"Marco Guerzoni, Consuelo R. Nava, Massimiliano Nuccio","http://arxiv.org/abs/1911.01073v1, http://arxiv.org/pdf/1911.01073v1",econ.GN
31696,gn,"This paper develops a sufficient-statistic formula for the unemployment gap
-- the difference between the actual unemployment rate and the efficient
unemployment rate. While lowering unemployment puts more people into work, it
forces firms to post more vacancies and to devote more resources to recruiting.
This unemployment-vacancy tradeoff, governed by the Beveridge curve, determines
the efficient unemployment rate. Accordingly, the unemployment gap can be
measured from three sufficient statistics: elasticity of the Beveridge curve,
social cost of unemployment, and cost of recruiting. Applying this formula to
the United States, 1951--2019, we find that the efficient unemployment rate
averages 4.3%, always remains between 3.0% and 5.4%, and has been stable
between 3.8% and 4.6% since 1990. As a result, the unemployment gap is
countercyclical, reaching 6 percentage points in slumps. The US labor market is
therefore generally inefficient and especially inefficiently slack in slumps.
In turn, the unemployment gap is a crucial statistic to design labor-market and
macroeconomic policies.",Beveridgean Unemployment Gap,2019-11-13 06:34:08,"Pascal Michaillat, Emmanuel Saez","http://dx.doi.org/10.1016/j.pubecp.2021.100009, http://arxiv.org/abs/1911.05271v4, http://arxiv.org/pdf/1911.05271v4",econ.GN
31697,gn,"One of the fundamental questions in science is how scientific disciplines
evolve and sustain progress in society. No study to date allows us to explain
the endogenous processes that support the evolution of scientific disciplines
and the emergence of new scientific fields in the applied sciences of physics. This
study confronts this problem by investigating the evolution of
experimental physics to explain and generalize some characteristics of the
dynamics of applied sciences. Empirical analysis suggests properties about the
evolution of experimental physics and in general of applied sciences, such as:
a) scientific fission, the evolution of scientific disciplines generates a
process of division into two or more research fields that evolve as autonomous
entities over time; b) ambidextrous drivers of science, the evolution of
science via scientific fission is due to scientific discoveries or new
technologies; c) new driving research fields, the drivers of scientific
disciplines are new research fields rather than old ones; d) science driven by
development of general purpose technologies, the evolution of experimental
physics and applied sciences is due to the convergence of experimental and
theoretical branches of physics associated with the development of computer,
information systems and applied computational science. Results also reveal that
the average duration of the upwave of scientific production in scientific fields
supporting experimental physics is about 80 years. Overall, then, this study
begins the process of clarifying and generalizing, as far as possible, some
characteristics of the evolutionary dynamics of scientific disciplines that can
lay a foundation for the development of comprehensive properties explaining the
evolution of science as a whole for supporting fruitful research policy
implications directed to advancement of science and technological progress in
society.",How do scientific disciplines evolve in applied sciences? The properties of scientific fission and ambidextrous scientific drivers,2019-11-13 12:25:59,Mario Coccia,"http://arxiv.org/abs/1911.05363v1, http://arxiv.org/pdf/1911.05363v1",econ.GN
31698,gn,"This paper reexamines the validity of the natural resource curse hypothesis,
using the database of mineral exporting countries. Our findings are as follows:
(i) Resource-rich countries (RRCs) do not necessarily exhibit poor political,
economic and social performance; (ii) RRCs that perform poorly have a low
diversified exports portfolio; (iii) In contrast, RRCs with a low diversified
exports portfolio do not necessarily perform poorly. Then, we develop a model
of strategic interaction from a Bayesian game setup to study the role of
leadership and governance in the management of natural resources. We show that
an improvement in the leadership-governance binomial helps to discipline the
behavior of lobby groups (theorem 1) and generate a Pareto improvement in the
management of natural resources (theorem 2). Evidence from the World Bank
Group's CPIA data confirms the latter finding. Our results remain valid after
some robustness checks.",The artefact of the Natural Resources Curse,2019-11-21 08:36:57,"Matata Ponyo Mapon, Jean-Paul K. Tsasa","http://arxiv.org/abs/1911.09681v1, http://arxiv.org/pdf/1911.09681v1",econ.GN
31699,gn,"The indirect transactions between sectors of an economic system has been a
long-standing open problem. There have been numerous attempts to conceptually
define and mathematically formulate this notion in various other scientific
fields in literature as well. The existing direct and indirect effects
formulations, however, can neither determine the direct and indirect
transactions separately nor quantify these transactions between two individual
sectors of interest in a multisectoral economic system. The novel concepts of
the direct, indirect and transfer (total) transactions between any two sectors
are introduced, and the corresponding requirements matrices and coefficients
are systematically formulated relative to both final demands and gross outputs
based on the system decomposition theory in the present manuscript. It is
demonstrated theoretically and through illustrative examples that the proposed
requirements matrices accurately define and correctly quantify the
corresponding direct, indirect, and total interactions and relationships. The
proposed requirements matrices for the US economy using aggregated input-output
tables for multiple years are then presented and briefly analyzed.",Direct and indirect transactions and requirements,2019-11-23 16:21:18,"Husna Betul Coskun, Huseyin Coskun","http://dx.doi.org/10.31219/osf.io/w2a4d, http://arxiv.org/abs/1911.11569v5, http://arxiv.org/pdf/1911.11569v5",econ.GN
31700,gn,"We propose to study electricity capacity remuneration mechanism design
through a Principal-Agent approach. The Principal represents the aggregation of
electricity consumers (or a representative entity), subject to the physical
risk of shortage, and the Agent represents the electricity capacity owners, who
invest in capacity and produce electricity to satisfy consumers' demand, and
are subject to financial risks. Following the methodology of Cvitanic et al.
(2017), we propose an optimal contract, from consumers' perspective, which
complements the revenue capacity owners achieve from the spot energy market,
and incentivizes both parties to perform an optimal level of investments while
sharing the physical and financial risks. Numerical results provide insights on
the necessity of a capacity remuneration mechanism and also show how this is
especially true when the level of uncertainties on demand or production side
increases.",A Principal-Agent approach to Capacity Remuneration Mechanisms,2019-11-28 13:19:00,"Clémence Alasseur, Heythem Farhat, Marcelo Saguan","http://arxiv.org/abs/1911.12623v3, http://arxiv.org/pdf/1911.12623v3",econ.GN
31701,gn,"It is still common wisdom amongst economists, politicians and lay people that
economic growth is a necessity of our social systems, at least to avoid
distributional conflicts. This paper challenges this belief, starting from a
purely physical theoretical perspective. It formally considers the constraints
imposed by a finite environment on the prospect of continuous growth, including
the dynamics of costs. As costs grow faster than production, it is easy to
deduce a final, unavoidable global collapse. Then, analyzing and discussing the
evolution of the unequal share of wealth under the premises of growth and
competition, it is shown that the increase of inequalities is a necessary
consequence of the premises.",Growth and inequalities in a physicist's view,2019-12-30 13:42:15,Angelo Tartaglia,"http://dx.doi.org/10.1007/s41247-020-00071-6, http://arxiv.org/abs/2001.00478v3, http://arxiv.org/pdf/2001.00478v3",econ.GN
31702,gn,"An annual well-being index constructed from thirteen socioeconomic factors is
proposed in order to dynamically measure the mood of the US citizenry.
Econometric models are fitted to the log-returns of the index in order to
quantify its tail risk and perform option pricing and risk budgeting. By
providing a statistically sound assessment of socioeconomic content, the index
is consistent with rational finance theory, enabling the construction and
valuation of insurance-type financial instruments to serve as contracts written
against it. Endogenously, the VXO volatility measure of the stock market
appears to be the greatest contributor to tail risk. Exogenously,
""stress-testing"" the index against the politically important factors of trade
imbalance and legal immigration quantifies the systemic risk. For probability
levels in the range of 5% to 10%, values of trade below these thresholds are
associated with larger downward movements of the index than for immigration at
the same level. The main intent of the index is to provide early-warning for
negative changes in the mood of citizens, thus alerting policy makers and
private agents to potential future market downturns.",A Socioeconomic Well-Being Index,2020-01-04 07:50:26,"A. Alexandre Trindade, Abootaleb Shirvani, Xiaohan Ma","http://dx.doi.org/10.11114/aef.v7i4.4855, http://arxiv.org/abs/2001.01036v1, http://arxiv.org/pdf/2001.01036v1",econ.GN
31703,gn,"To completely understand the effects of urban ecosystems, the effects of
ecosystem disservices should be considered along with the ecosystem services
and require more research attention. In this study, we tried to better
understand its formation through the use of cascade flowchart and
classification systems and compare their effects with ecosystem services. It is
vitally important to differentiate final and intermediate ecosystem disservices
for understanding the negative effects of the ecosystem on human well-being.
The proposed functional classification of EDS (i.e. provisioning, regulating
and cultural EDS) should also help better bridging EDS and ES studies. In
addition, we used Beijing as a case study area to value the EDS caused by urban
ecosystems and compare the findings with ES values. The results suggested that
although EDS caused great financial losses, the potential economic gains from
ecosystem services still significantly outweigh the losses. Our study only sheds
light on valuing the net effects of urban ecosystems. In the future, we
believe that EDS valuation should be at least equally considered in ecosystem
valuation studies to create more comprehensive and sustainable development
policies, land use proposals and management plans.","Classifying ecosystem disservices and comparing their effects with ecosystem services in Beijing, China",2020-01-06 17:31:33,"Shuyao Wu, Jiao Huang, Shuangcheng Li","http://arxiv.org/abs/2001.01605v1, http://arxiv.org/pdf/2001.01605v1",econ.GN
31704,gn,"This paper employs the survey data of CHFS (2013) to investigate the impact
of housing investment on household stock market participation and portfolio
choice. The results show that larger housing investment encourages
household participation in the stock market, but reduces the proportion of
their stockholding. The above conclusion remains true even when the endogeneity
problem is controlled for with risk attitude classification, a Heckman model test and
subsample regression. This study shows that growth in the housing market
will not lead to stock market development because of a lack of household
financial literacy and the low expected yield on stock market.","Housing Investment, Stock Market Participation and Household Portfolio choice: Evidence from China's Urban Areas",2020-01-06 19:07:04,Huirong Liu,"http://arxiv.org/abs/2001.01641v1, http://arxiv.org/pdf/2001.01641v1",econ.GN
31705,gn,"Public transit disruption is becoming more common across different transit
services, which can have a destructive influence on the resiliency and
reliability of the transportation system. Utilizing recently collected data
on transit users in the Chicago Metropolitan Area, the current study aims to
analyze how transit users respond to unplanned service disruptions and to identify
the factors that affect their behavior.",Passengers' Travel Behavior in Response to Unplanned Transit Disruptions,2020-01-07 02:01:34,"Nima Golshani, Ehsan Rahimi, Ramin Shabanpour, Kouros Mohammadian, Joshua Auld, Hubert Ley","http://arxiv.org/abs/2001.01718v4, http://arxiv.org/pdf/2001.01718v4",econ.GN
31706,gn,"An income loss can have a negative impact on households, forcing them to
reduce their consumption of some staple goods. This can lead to health issues
and, consequently, generate significant costs for society. We suggest that
consumers can, to prevent these negative consequences, buy insurance to secure
sufficient consumption of a staple good if they lose part of their income. We
develop a two-period/two-good principal-agent problem with adverse selection
and endogenous reservation utility to model insurance with in-kind benefits.
This model allows us to obtain semi-explicit solutions for the insurance
contract and is applied to the context of fuel poverty. For this application,
our model allows us to conclude that, even in the least efficient scenario from
the households' point of view, i.e., when the insurance is provided by a
monopoly, this mechanism significantly decreases the risk of fuel poverty of
households by ensuring them a sufficient consumption of energy. The
effectiveness of in-kind insurance is highlighted through a comparison with
income insurance, but our results nevertheless underline the need to regulate
such an insurance market.",Optimal contracts under adverse selection for staple goods: efficiency of in-kind insurance,2020-01-06 17:44:06,"Clémence Alasseur, Corinne Chaton, Emma Hubert","http://arxiv.org/abs/2001.02099v3, http://arxiv.org/pdf/2001.02099v3",econ.GN
31707,gn,"In most of the recent literature on state capacity, the significance of wars
in state-building assumes that threats from foreign countries generate common
interests among domestic groups, leading to larger investments in state
capacity. However, many countries that have suffered external conflicts don't
experience increased unity. Instead, they face factional politics that often
lead to destructive civil wars. This paper develops a theory of the impact of
interstate conflicts on fiscal capacity in which fighting an external threat is
not always a common-interest public good, and in which interstate conflicts can
lead to civil wars. The theory identifies conditions under which an increased
risk of external conflict decreases the chance of civil war, which in turn
results in a government with a longer political life and with more incentives
to invest in fiscal capacity. These conditions depend on the cohesiveness of
institutions, but in a non-trivial and novel way: a higher risk of an external
conflict that results in lower political turnover, but that also makes a
foreign invasion more likely, contributes to state-building only if
institutions are sufficiently incohesive.","External Threats, Political Turnover and Fiscal Capacity",2020-01-08 03:44:33,Hector Galindo-Silva,"http://dx.doi.org/10.1111/ecpo.12155, http://arxiv.org/abs/2001.02322v1, http://arxiv.org/pdf/2001.02322v1",econ.GN
31711,gn,"Two distinct trends can prove the existence of technological unemployment in
the US. First, there are more open jobs than the number of unemployed persons
looking for a job, and second, the shift of the Beveridge curve. There have
been many attempts to find the cause of technological unemployment. However,
all of these approaches fail when it comes to evaluating the impact of modern
technologies on the future of employment. This study hypothesizes that rather than
looking into skill requirements or the routine/non-routine distinction between tasks,
a holistic approach is required to predict which occupations are going to be
vulnerable with the advent of this 4th industrial revolution, i.e., widespread
application of AI, ML algorithms, and Robotics. Three critical attributes are
considered: bottleneck, hazardous, and routine. Forty-five relevant attributes
are chosen from the O*NET database that can define these three types of tasks.
Performing Principal Axis Factor Analysis and K-medoid clustering, the study
identifies a list of 367 vulnerable occupations. The study further analyzes the
last nine years of national employment data and finds that over the previous
four years, the growth of vulnerable occupations is only half that of
non-vulnerable ones despite the long rally of economic expansion.",If the Prospect of Some Occupations Are Stagnating With Technological Advancement? A Task Attribute Approach to Detect Employment Vulnerability,2020-01-09 02:44:29,"Iftekhairul Islam, Fahad Shaon","http://arxiv.org/abs/2001.02783v1, http://arxiv.org/pdf/2001.02783v1",econ.GN
31712,gn,"Robust assessment of the institutionalist account of comparative development
is hampered by problems of omitted variable bias and reverse causation, since
institutional quality is not randomly assigned with respect to geographic and
human capital endowments. A recent series of papers has applied spatial
regression discontinuity designs to estimate the impact of institutions on
incomes at international borders, drawing inference from the abrupt
discontinuity in governance at borders, whereas other determinants of income
vary smoothly across borders. I extend this literature by assessing the
importance of sub-national variation in institutional quality at provincial
borders in China. Employing nighttime lights emissions as a proxy for income,
across multiple specifications I find no evidence in favour of an
institutionalist account of the comparative development of Chinese provinces.",Institutions and China's comparative development,2020-01-09 04:43:27,Paul Minard,"http://dx.doi.org/10.13140/RG.2.2.26690.73922, http://arxiv.org/abs/2001.02804v1, http://arxiv.org/pdf/2001.02804v1",econ.GN
31713,gn,"China is the world's second largest economy. After four decades of economic
miracles, China's economy is transitioning into an advanced, knowledge-based
economy. Yet, we still lack a detailed understanding of the skills that underlie
the Chinese labor force, and the development and spatial distribution of these
skills. For example, the US standardized skill taxonomy O*NET played an
important role in understanding the dynamics of manufacturing and
knowledge-based work, as well as potential risks from automation and
outsourcing. Here, we use Machine Learning techniques to bridge this gap,
creating China's first workforce skill taxonomy, and map it to O*NET. This
enables us to reveal workforce skill polarization into social-cognitive skills
and sensory-physical skills, to explore China's regional inequality in
light of workforce skills, and to compare it to traditional metrics such as
education. We build an online tool for the public and policy makers to explore
the skill taxonomy: skills.sysu.edu.cn. We will also make the taxonomy dataset
publicly available for other researchers upon publication.",China's First Workforce Skill Taxonomy,2020-01-09 10:03:32,"Weipan Xu, Xiaozhen Qin, Xun Li, Haohui""Caron"" Chen, Morgan Frank, Alex Rutherford, Andrew Reeson, Iyad Rahwan","http://arxiv.org/abs/2001.02863v1, http://arxiv.org/pdf/2001.02863v1",econ.GN
31714,gn,"This paper generalizes the original Schelling (1969, 1971a,b, 2006) model of
racial and residential segregation to a context of variable externalities due
to social linkages. In a setting in which individuals' utility function is a
convex combination of a heuristic function a la Schelling, of the distance to
friends, and of the cost of moving, the prediction of the original model gets
attenuated: the segregation equilibria are not the unique solutions. While the
cost of distance has a monotonic pro-status-quo effect, equivalent to that of
models of migration and gravity models, if friends and neighbours are formed
following independent processes, the location of friends in space generates an
externality that reinforces the initial configuration if the distance to
friends is minimal, and if the degree of each agent is high. The effect on
segregation equilibria crucially depends on the role played by network
externalities.",Segregation with Social Linkages: Evaluating Schelling's Model with Networked Individuals,2020-01-09 16:14:52,"Roy Cerqueti, Luca De Benedictis, Valerio Leone Sciabolazza","http://dx.doi.org/10.1111/meca.12367, http://arxiv.org/abs/2001.02959v1, http://arxiv.org/pdf/2001.02959v1",econ.GN
31715,gn,"Colleting the data through a survey in the Northern region of Malaysia;
Kedah, Perlis, Penang and Perak, this study investigates intergenerational
social mobility in Malaysia. We measure and analyzed the factors that influence
social-economic mobility by using binary choice model (logit model). Social
mobility can be measured in several ways, by income, education, occupation or
social class. More often, economic research has focused on some measure of
income. Social mobility variable is measured using the difference between
educational achievement between a father and son. If there is a change of at
least of two educational levels between a father and son, then this study will
assign the value one which means that social mobility has occurred.",Determinants of Social-economic Mobility in the Northern Region of Malaysia,2020-01-07 10:35:09,Mukaramah Harun,"http://arxiv.org/abs/2001.03043v1, http://arxiv.org/pdf/2001.03043v1",econ.GN
31716,gn,"The implementation of Goods and Services Tax(GST) is often attributed as the
main cause of the rising prices of goods and services. The main objective of
this study is to estimate the extent of GST implementation impact on the costs
of production, which in turn have implication on households living costs.",Estimating the Impact of GST Implementation on Cost of Production and Cost of Living in Malaysia,2020-01-07 10:40:07,Mukaramah Harun,"http://dx.doi.org/10.17576/JEM-2016-5002-02, http://arxiv.org/abs/2001.03045v1, http://arxiv.org/pdf/2001.03045v1",econ.GN
31717,gn,"This study examines the relationship between type of risks and income of the
rural households in Pattani province,Thailand using the standard multiple
regression analysis.A multi-stage sampling technique is employed to select 600
households of 12 districts in the rural Pattani province and a structured
questionnaire is used for data collection.Evidences from descriptive analysis
show that the type of risks faced by households in rural Pattani province are
job loss,reduction of salary,household member died,household members who work
have accident,marital problem and infection of crops or livestock.In
addition,result from the regression analysis suggests that job loss,household
member died and marital problem have significant negative effects on the
households income.The result suggests that job loss has adverse impact on
households income.",Relationship between Type of Risks and Income of the Rural Households in the Pattani Province of Thailand,2020-01-07 10:53:54,Mukaramah Harun,"http://dx.doi.org/10.5539/ass.v10n17p204, http://arxiv.org/abs/2001.03046v1, http://arxiv.org/pdf/2001.03046v1",econ.GN
31718,gn,"Economists are showing increasing interest in the use of text as an input to
economic research. Here, we analyse online text to construct a real time metric
of welfare. For purposes of description, we call it the Feel Good Factor (FGF).
The particular example used to illustrate the concept is confined to data from
the London area, but the methodology is readily generalisable to other
geographical areas. The FGF illustrates the use of online data to create a
measure of welfare which is not based, as GDP is, on value added in a
market-oriented economy. There is already a large literature which measures
wellbeing/happiness. But this relies on conventional survey approaches, and
hence on the stated preferences of respondents. In unstructured online media
text, users reveal their emotions in ways analogous to the principle of
revealed preference in consumer demand theory. The analysis of online media
offers further advantages over conventional survey-based measures of sentiment
or well-being. It can be carried out in real time rather than with the lags
which are involved in survey approaches. In addition, it is very much cheaper.",Text as Data: Real-time Measurement of Economic Welfare,2020-01-10 14:50:11,"Rickard Nyman, Paul Ormerod","http://arxiv.org/abs/2001.03401v1, http://arxiv.org/pdf/2001.03401v1",econ.GN
31719,gn,"Improving road safety and setting targets for reducing traffic-related
crashes and deaths are highlighted as part of the United Nation's sustainable
development goals and vision zero efforts around the globe. The advent of
transportation network companies, such as ridesourcing, expands mobility
options in cities and may impact road safety outcomes. In this study, we
analyze the effects of ridesourcing use on road crashes, injuries, fatalities,
and driving while intoxicated (DWI) offenses in Travis County, Texas. Our
approach leverages real-time ridesourcing volume to explain variation in road
safety outcomes. Spatial panel data models with fixed effects are deployed to
examine whether the use of ridesourcing is significantly associated with road
crashes and other safety metrics. Our results suggest that for a 10% increase
in ridesourcing trips, we expect a 0.12% decrease in road crashes (p<0.05), a
0.25% decrease in road injuries (p<0.001), and a 0.36% decrease in DWI offenses
(p<0.0001) in Travis County. Ridesourcing use is not associated with road
fatalities at a 0.05 significance level. This study augments existing work
because it moves beyond binary indicators of ridesourcing presence or absence
and analyzes patterns within an urbanized area rather than metropolitan-level
variation. Contributions include developing a data-rich approach for assessing
the impacts of ridesourcing use on our transportation system's safety, which
may serve as a template for future analyses of other US cities. Our findings
provide feedback to policymakers by clarifying associations between
ridesourcing use and traffic safety, while helping identify sets of actions to
achieve safer and more efficient shared mobility systems.",Associating Ridesourcing with Road Safety Outcomes: Insights from Austin Texas,2020-01-09 01:36:26,"Eleftheria Kontou, Noreen C. McDonald","http://dx.doi.org/10.1371/journal.pone.0248311, http://arxiv.org/abs/2001.03461v4, http://arxiv.org/pdf/2001.03461v4",econ.GN
31720,gn,"The main objective of this paper is to fill a critical gap in the literature
by analyzing the effects of decentralization on macroeconomic stability. A
survey of the voluminous literature on decentralization suggests that the
question of the links between decentralization and macroeconomic stability has
been relatively scantily analyzed. Even though there is still a lot of room for
analysis as far as the effects of decentralization on other aspects of the
economy are concerned, we believe that it is in this area that more thorough
analyses are most needed. Through this paper, we try to shed more
light on the issue, notably by looking at other dimensions of macroeconomic
stability than the ones usually employed in previous studies as well as by
examining other factors that might accentuate or diminish the effects of
decentralization on macroeconomic stability. Our results show that
decentralization appears to lead to a decrease in the inflation rate. However, we
do not find any correlation between decentralization and the level of fiscal
deficit. Our results also show that the impact of decentralization on inflation
is conditional on the level of perceived corruption and political institutions.",Macroeconomic Instability And Fiscal Decentralization: An Empirical Analysis,2020-01-07 11:18:49,"Ahmad Zafarullah Abdul Jalil, Mukaramah Harun, Siti Hadijah Che Mat","http://dx.doi.org/10.18267/j.pep.416, http://arxiv.org/abs/2001.03486v1, http://arxiv.org/pdf/2001.03486v1",econ.GN
31721,gn,"This paper used a primary data collected through a surveys among farmers in
rural Kedah to examine the effect of non farm income on poverty and income
inequality. This paper employed two method, for the first objective which is to
examine the impact of non farm income to poverty, we used poverty decomposition
techniques - Foster, greer and Thorbecke (FGT) as has been done by Adams
(2004). For the second objective, which is to examine the impact of non farm
income to income inequality, we used Gini decomposition techniques.",Does Non-Farm Income Improve The Poverty and Income Inequality Among Agricultural Household In Rural Kedah?,2020-01-07 11:33:46,"Siti Hadijah Che Mata, Ahmad Zafarullah Abdul Jalil, Mukaramah Harun","http://arxiv.org/abs/2001.03487v1, http://arxiv.org/pdf/2001.03487v1",econ.GN
31722,gn,"The use of the social accounting matrix (SAM) in income distribution analysis
is a method recommended by economists. However, until now, there have only been
a few SAMs developed in Malaysia. The last SAM produced for Malaysia was
developed in 1984 based upon data from 1970 and has not been updated since,
despite the significant changes in the structure of the Malaysian
economy. The paper proposes a new Malaysian SAM framework to analyse public
expenditure impact on income distribution in Malaysia. The SAM developed in the
present paper is based on more recent data, providing an up-to-date and
coherent picture of the complexity of the Malaysian economy. The paper
describes the structure of the SAM framework with a detailed aggregation and
disaggregation of accounts related to public expenditure and income
distribution issues. In the SAM utilized in the present study, the detailed
framework of the different components of public expenditure in the production
sectors and household groups is essential in the analysis of the different
effects of the various public expenditure programmes on the incomes of
households among different groups.",Constructing a Social Accounting Matrix Framework to Analyse the Impact of Public Expenditure on Income Distribution in Malaysia,2020-01-07 11:37:27,"Mukaramah Harun, A. R. Zakariah, M. Azali","http://arxiv.org/abs/2001.03488v1, http://arxiv.org/pdf/2001.03488v1",econ.GN
31757,gn,"How does economics research help in solving societal challenges? This brief
note sheds additional light on this question by providing ways to connect
Journal of Economic Literature (JEL) codes and Sustainable Development Goals
(SDGs) of the United Nations. These simple linkages illustrate that the themes
of SDGs have corresponding JEL classification codes. As the mappings presented
here are necessarily imperfect and incomplete, there is plenty of room for
improvement. In an ideal world, there would be a JEL classification system for
SDGs, a separate JEL code for each of the 17 SDGs.",Classifying economics for the common good: Connecting sustainable development goals to JEL codes,2020-04-09 09:54:47,Jussi T. S. Heikkilä,"http://arxiv.org/abs/2004.04384v1, http://arxiv.org/pdf/2004.04384v1",econ.GN
31723,gn,"We propose and investigate the concept of commuting service platforms (CSP)
that leverage emerging mobility services to provide commuting services and
directly connect commuters (employees) and their worksites (employers). By
applying the two-sided market analysis framework, we show under what conditions
a CSP may present the two-sidedness. Both the monopoly and duopoly CSPs are
then analyzed. We show how the price allocation, i.e., the prices charged to
commuters and worksites, can impact the participation and profit of the CSPs.
We also add demand constraints to the duopoly model so that the participation
rates of worksites and employees are (almost) the same. With demand constraints,
the competition between the two CSPs becomes less intense in general.
Discussions are presented on how the results and findings in this paper may
help build CSP in practice and how to develop new, CSP-based travel demand
management strategies.",Commuting Service Platform: Concept and Analysis,2020-01-10 22:48:18,"Rong Fan, Xuegang, Ban","http://arxiv.org/abs/2001.03646v1, http://arxiv.org/pdf/2001.03646v1",econ.GN
31724,gn,"This paper investigates the contribution of business model innovations in
improvement of food supply chains. Through a systematic literature review, the
notable business model innovations in the food industry are identified,
surveyed, and evaluated. Findings reveal that the innovations in value
proposition, value creation processes, and value delivery processes of business
models are the successful strategies proposed in food industry. It is further
disclosed that rural female entrepreneurs, social movements, and urban
conditions are the most important driving forces inducing farmers to reconsider
their business models. In addition, new technologies and environmental factors
are secondary contributors to business model innovation for food processors. It
is concluded that digitalization has disruptively changed the business models
of food distributors. E-commerce models and the internet of things are reported
as the essential factors compelling retailers to innovate their business
models. Furthermore, consumption demand and product quality are the two main
factors affecting the business models of all firms operating in the food supply
chain, regardless of their positions in the chain. The findings of the current
study provide insight for the food industry in designing a sustainable business
model that bridges the gap between food
supply and food demand.",Food Supply Chain and Business Model Innovation,2020-01-12 22:19:31,"Saeed Nosratabadi, Amirhosein Mosavi, Zoltan Lakner","http://arxiv.org/abs/2001.03982v1, http://arxiv.org/pdf/2001.03982v1",econ.GN
31725,gn,"We develop a network reconstruction model based on entropy maximization
considering the sparsity of networks. We reconstruct the interbank network in
Japan from financial data in individual banks' balance sheets using the
developed reconstruction model from 2000 to 2016. The observed sparsity of the
interbank network is successfully reproduced. We examine the characteristics of
the reconstructed interbank network by calculating important network
attributes. We obtain the following characteristics, which are consistent with
the previously known stylized facts. Although we do not introduce a mechanism
to generate a core-periphery structure, and only impose sparsity constraints
(no transactions within the same bank category, except for major commercial
banks), the core-periphery structure emerges spontaneously. We identify major
nodes in each community using the
value of PageRank and degree to examine the changing role of each bank
category. The observed changing role of banks is considered a result of the
quantitative and qualitative monetary easing policy started by the Bank of
Japan in April 2013.",Reconstruction of Interbank Network using Ridge Entropy Maximization Model,2020-01-13 11:14:42,"Yuichi Ikeda, Hidetoshi Takeda","http://arxiv.org/abs/2001.04097v2, http://arxiv.org/pdf/2001.04097v2",econ.GN
31726,gn,"Technological change is responsible for major changes in the labor market.
One of the offspring of technological change is skill-biased technological
change (SBTC), which for many economists is the leading cause of rising wage
inequality. However, although technological change affected most developed
countries similarly, the increase in wage inequality was not similar across
them. Following the predictions of the SBTC theory, the different
levels of inequality could be due to varying degrees of skill inequality
between economies, possibly caused by variations in the number of skilled
workers available. However, recent research shows that the difference mentioned
above can explain a small percentage of the difference between countries.
Therefore, most of the resulting inequality could be due to the different ways
in which the higher level of skills is valued in each labor market. The
position advocated in this article is that technological change is largely
given for all countries, with little scope for reversal. Therefore, in order to
illustrate the changes in the structure of the wage distribution that cause
wage inequality, we need to understand how technology affects labor market
institutions. In this sense, the pay inequality caused by technological
progress is not a phenomenon we must passively accept. On the contrary,
recognizing that the structure and functioning of labor market institutions are
largely shaped by how institutions respond to technological change, we can
understand and perhaps reverse this underlying wage inequality.",Examining the correlation of the level of wage inequality with labor market institutions,2020-01-16 13:49:13,Virginia Tsoukatou,"http://dx.doi.org/10.5281/zenodo.3609865, http://arxiv.org/abs/2001.06003v2, http://arxiv.org/pdf/2001.06003v2",econ.GN
31727,gn,"We propose a novel explanation for classic international macro puzzles
regarding capital flows and portfolio investment, which builds on modern
macro-finance models of experience-based belief formation. Individual
experiences of past macroeconomic outcomes have been shown to exert a
long-lasting influence on beliefs about future realizations, and to explain
domestic stock-market investment. We argue that experience effects can explain
the tendency of investors to hold a disproportionately large fraction of their equity
wealth in domestic stocks (home bias), to invest in domestic equity markets in
periods of domestic crises (retrenchment), and to withdraw capital from foreign
equity markets in periods of foreign crises (fickleness). Experience-based
learning generates additional implications regarding the strength of these
puzzles in times of higher or lower economic activity and depending on the
demographic composition of market participants. We test and confirm these
predictions in the data.",Investor Experiences and International Capital Flows,2020-01-22 00:54:39,"Ulrike Malmendier, Demian Pouzo, Victoria Vanasco","http://arxiv.org/abs/2001.07790v1, http://arxiv.org/pdf/2001.07790v1",econ.GN
31939,gn,"Unconventional monetary policy (UMP) may make the effective lower bound (ELB)
on the short-term interest rate irrelevant. We develop a theoretical model that
underpins our empirical test of this `irrelevance hypothesis' based on the
simple idea that under the hypothesis, the short rate can be excluded in any
empirical model that accounts for alternative measures of monetary policy. We
test the hypothesis for Japan and the United States using a structural vector
autoregressive model with the ELB. We firmly reject the hypothesis but find
that UMP has had strong delayed effects.",Testing the effectiveness of unconventional monetary policy in Japan and the United States,2020-12-30 16:58:56,"Daisuke Ikeda, Shangshang Li, Sophocles Mavroeidis, Francesco Zanetti","http://arxiv.org/abs/2012.15158v4, http://arxiv.org/pdf/2012.15158v4",econ.GN
31728,gn,"We introduce the Input Rank as a measure of relevance of direct and indirect
suppliers in Global Value Chains. We conceive an intermediate input to be more
relevant for a downstream buyer if a decrease in that input's productivity
affects that buyer more. In particular, in our framework, the relevance of any
input depends on: i) the network position of the supplier relative to the
buyer, ii) the patterns of intermediate-input vs labor intensities connecting
the buyer and the supplier, and iii) the competitive pressures along supply
chains. After we compute the Input Rank from both U.S. and world Input-Output
tables, we provide useful insights on the crucial role of services inputs as
well as on the relatively higher relevance of domestic suppliers and suppliers
coming from regionally integrated partners. Finally, we test that the Input
Rank is a good predictor of vertical integration choices made by 20,489 U.S.
parent companies controlling 154,836 subsidiaries worldwide.",Measuring the Input Rank in Global Supply Networks,2020-01-22 16:25:18,"Armando Rungi, Loredana Fattorini, Kenan Huremovic","http://arxiv.org/abs/2001.08003v2, http://arxiv.org/pdf/2001.08003v2",econ.GN
31729,gn,"Scholars present their new research at seminars and conferences, and send
drafts to peers, hoping to receive comments and suggestions that will improve
the quality of their work. Using a dataset of papers published in economics
journals, this article measures how much peers' individual and collective
comments improve the quality of research. Controlling for the quality of the
research idea and author, I find that a one standard deviation increase in the
number of peers' individual and collective comments increases the quality of
the journal in which the research is published by 47%.",Comments are welcome,2020-01-21 13:37:56,Asier Minondo,"http://arxiv.org/abs/2001.08376v2, http://arxiv.org/pdf/2001.08376v2",econ.GN
31730,gn,"Political systems shape institutions and govern institutional change
supporting economic performance, production and diffusion of technological
innovation. Using global country-level data, this study shows that
institutional change, based on the progressive democratization of countries, is
a driving force of invention and of the adoption and diffusion of innovations
in society. The relation between technological innovation and the level of
democracy can be explained by the following factors: higher economic freedom in
society, effective
regulation, higher economic and political stability, higher investments in R&D
and higher education, good economic governance and higher level of education
system for training high-skilled human resources. Overall, then, the positive
associations between institutional change, based on a process of
democratization, and paths of technological innovation can sustain best
practices of political economy for the development of economies in the presence
of globalization and geographical expansion of markets.",Effects of the institutional change based on democratization on origin and diffusion of technological innovation,2020-01-23 13:22:54,Mario Coccia,"http://arxiv.org/abs/2001.08432v1, http://arxiv.org/pdf/2001.08432v1",econ.GN
31731,gn,"Many researches have discussed the phenomenon and definition of sharing
economy, but an understanding of sharing economy's reconstructions of the world
remains elusive. We illustrate the mechanism of sharing economy's
reconstructions of the world in detail based on big data including the
mechanism of sharing economy's reconstructions of society, time and space,
users, industry, and self-reconstruction in the future, which is very important
for society to make full use of the reconstruction opportunity to upgrade our
world through sharing economy. On the one hand, we established the mechanisms
for sharing economy rebuilding society, industry, space-time, and users through
qualitative analyses, and on the other hand, we demonstrated the rationality of
the mechanisms through quantitative analyses of big data.",Big Data based Research on Mechanisms of Sharing Economy Restructuring the World,2020-01-24 12:30:54,Dingju Zhu,"http://arxiv.org/abs/2001.08926v1, http://arxiv.org/pdf/2001.08926v1",econ.GN
31732,gn,"Social cost of carbon (SCC) is estimated by integrated assessment models
(IAM) and is widely used by government agencies to value climate policy
impacts. While there is an ongoing debate about obtained numerical estimates
and related uncertainties, little attention has been paid so far to the SCC
calculation method itself.
  This work attempts to fill the gap by providing theoretical background and
economic interpretation of the SCC calculation approach implemented in the
open-source IAM DICE (Dynamic Integrated model of Climate and the Economy). Our
analysis indicates that the present calculation method provides an
approximation that might work well in some cases, while in other cases the
estimated value deviates substantially (by a factor of four) from the ""true""
value. This deviation stems from the inability of the present calculation
method to capture the linkages between the two key components of an IAM --
climate and economy, both complex interconnected systems influenced by
emission abatement policies. Within the modeling framework of DICE, the
presently estimated SCC values policy-uncontrolled emissions against
economically unjustified consumption, which makes it irrelevant for application
in climate-economic policies and, therefore, calls for a replacement by a more
appropriate indicator.
An apparent SCC alternative, which can be employed for policy formulation, is
the direct output of the DICE model -- the socially optimal marginal abatement
cost (SMAC), which corresponds to technological possibilities at the optimal
level of carbon emissions abatement. In policy making, because of the previously
employed implicit approximation, great attention needs to be paid to the use of
SCC estimates obtained earlier.",Social Cost of Carbon: What Do the Numbers Really Mean?,2020-01-24 13:09:48,"Nikolay Khabarov, Alexey Smirnov, Michael Obersteiner","http://arxiv.org/abs/2001.08935v3, http://arxiv.org/pdf/2001.08935v3",econ.GN
31733,gn,"The design of integrated mobility-on-demand services requires jointly
considering the interactions between traveler choice behavior and operators'
operation policies to design a financially sustainable pricing scheme. However,
most existing studies focus on the supply side perspective, disregarding the
impact of customer choice behavior in the presence of co-existing transport
networks. We propose a modeling framework for dynamic integrated
mobility-on-demand service operation policy evaluation with two service
options: door-to-door rideshare and rideshare with transit transfer. A new
constrained dynamic pricing model is proposed to maximize operator profit,
taking into account the correlated structure of different modes of transport.
User willingness to pay is considered as a stochastic constraint, resulting in
a more realistic ticket price setting while maximizing operator profit. Unlike
most studies, which assume that travel demand is known, we propose a demand
learning process to calibrate customer demand over time based on customers'
historical purchase data. We evaluate the proposed methodology through
simulations under different scenarios on a test network by considering the
interactions of supply and demand in a multimodal market. Different scenarios
in terms of customer arrival intensity, vehicle capacity, and the variance of
user willingness to pay are tested. Results suggest that the proposed
chance-constrained assortment price optimization model allows increasing
operator profit while keeping the proposed ticket prices acceptable.",Integrated ridesharing services with chance-constrained dynamic pricing and demand learning,2020-01-23 23:10:42,"Tai-Yu Ma, Sylvain Klein","http://arxiv.org/abs/2001.09151v2, http://arxiv.org/pdf/2001.09151v2",econ.GN
31734,gn,"Network Science is an emerging discipline using the network paradigm to model
communication systems as pair-sets of interconnected nodes and their linkages
(edges). This paper applies this paradigm to study an interacting system in
regional economy consisting of daily road transportation flows for labor
purposes, the so-called commuting phenomenon. In particular, the commuting
system in Greece including 39 non-insular prefectures is modeled into a complex
network and it is studied using measures and methods of complex network
analysis and empirical techniques. The study aims to detect the structural
characteristics of the Greek interregional commuting network (GCN) and to
interpret how this network is related to the regional development. The analysis
highlights the effect of the spatial constraints in the structure of the GCN,
it provides insights about the major road transport projects constructed over
the last decade, and it outlines a population-controlled (gravity) pattern of
commuting, illustrating that highly populated regions attract larger volumes of
the commuting activity, which consequently affects their productivity. Overall,
this paper highlights the effectiveness of complex network analysis in the
modeling of systems of regional economy, such as the systems of spatial
interaction and the transportation networks, and it promotes the use of the
network paradigm in regional research.",The network paradigm as a modeling tool in regional economy: the case of interregional commuting in Greece,2020-01-27 13:18:36,"Dimitrios Tsiotas, Labros Sdrolias, Dimitrios Belias","http://arxiv.org/abs/2001.09664v1, http://arxiv.org/pdf/2001.09664v1",econ.GN
31735,gn,"Technological developments worldwide are contributing to the improvement of
transport infrastructures and they are helping to reduce the overall transport
costs. At the same time, such developments along with the reduction in
transport costs are affecting the spatial interdependence between the regions
and countries, a fact inducing significant effects on their economies and, in
general, on their growth-rates. A specific class of transport infrastructures
contributing significantly to overcoming the spatial constraints is the
airtransport infrastructures. Nowadays, the importance of air-transport
infrastructures in the economic development is determinative, especially for
the geographically isolated regions, such as for the island regions of Greece.
Within this context, this paper studies the Greek airports and particularly the
evolution of their overall transportation imprint, their geographical
distribution, and the volume of the transport activity of each airport. Also,
it discusses, in a broad context, the seasonality of the Greek airport
activity, the importance of the airports for the local and regional
development, and it formulates general conclusions.","Regional airports in Greece, their characteristics and their importance for the local economic development",2020-01-27 13:22:01,"Serafeim Polyzos, Dimitrios Tsiotas","http://arxiv.org/abs/2001.09666v1, http://arxiv.org/pdf/2001.09666v1",econ.GN
31736,gn,"Using a large dataset of research seminars held at US economics departments
in 2018, I explore the factors that determine who is invited to present at a
research seminar and whether the invitation is accepted. I find that
high-quality scholars have a higher probability of being invited than
low-quality scholars, and researchers are more likely to accept an invitation
if it is issued by a top economics department. The probability of being invited
increases with the size of the host department. Young and low-quality scholars
have a higher probability of accepting an invitation. The distance between the
host department and invited scholar reduces the probability of being invited
and accepting the invitation. Female scholars do not have a lower probability
of being invited to give a research seminar than men.",Who presents and where? An analysis of research seminars in US economics departments,2020-01-28 22:12:56,Asier Minondo,"http://arxiv.org/abs/2001.10561v2, http://arxiv.org/pdf/2001.10561v2",econ.GN
31737,gn,"We develop a dynamic decomposition of the empirical Beveridge curve, i.e.,
the level of vacancies conditional on unemployment. Using a standard model, we
show that three factors can shift the Beveridge curve: reduced-form matching
efficiency, changes in the job separation rate, and out-of-steady-state
dynamics. We find that the shift in the Beveridge curve during and after the
Great Recession was due to all three factors, and each factor taken separately
had a large effect. Comparing the pre-2010 period to the post-2010 period, a
fall in matching efficiency and out-of-steady-state dynamics both pushed the
curve upward, while the changes in the separation rate pushed the curve
downward. The net effect was the observed upward shift in vacancies given
unemployment. In previous recessions changes in matching efficiency were
relatively unimportant, while dynamics and the separation rate had more impact.
Thus, the unusual feature of the Great Recession was the deterioration in
matching efficiency, while separations and dynamics have played significant,
partially offsetting roles in most downturns. The importance of these latter
two margins contrasts with much of the literature, which abstracts from one or
both of them. We show that these factors affect the slope of the empirical
Beveridge curve, an important quantity in recent welfare analyses estimating
the natural rate of unemployment.",Dynamic Beveridge Curve Accounting,2020-02-28 22:20:41,"Hie Joo Ahn, Leland D. Crane","http://arxiv.org/abs/2003.00033v1, http://arxiv.org/pdf/2003.00033v1",econ.GN
31738,gn,"Our study aims at quantifying the impact of climate change on corn farming in
Ontario under several warming scenarios at the 2068 horizon. It is articulated
around a discrete-time dynamic model of corn farm income with an annual
time-step, corresponding to one agricultural cycle from planting to harvest. At
each period, we compute the income given the corn yield, which is highly
dependent on weather variables. We also provide a reproducible forecast of the
yearly distribution of corn yield for 10 cities in Ontario. The price of corn
futures at harvest time is taken into account and we fit our model by using 49
years of historical data. We then conduct out-of-sample Monte-Carlo simulations
to obtain the farm income forecasts under a given climate change scenario.",Influence Of Climate Change On The Corn Yield In Ontario And Its Impact On Corn Farms Income At The 2068 Horizon,2020-03-03 03:52:37,"Antoine Kornprobst, Matt Davison","http://arxiv.org/abs/2003.01270v2, http://arxiv.org/pdf/2003.01270v2",econ.GN
31784,gn,"The era of technological change entails complex patterns of changes in wages
and employment. We develop a unified framework to measure the contribution of
technological change embodied in new capital to changes in the relative wages
and income shares of different types of labor. We obtain the aggregate
elasticities of substitution by estimating and aggregating sectoral production
function parameters with cross-country and cross-industry panel data from OECD
countries. We show that advances in information, communication, and computation
technologies contribute significantly to narrowing the gender wage gap,
widening the skill wage gap, and declining labor shares.",The Race between Technology and Woman: Changes in Gender and Skill Premia in OECD Countries,2020-05-26 12:41:30,"Hiroya Taniguchi, Ken Yamada","http://arxiv.org/abs/2005.12600v6, http://arxiv.org/pdf/2005.12600v6",econ.GN
31739,gn,"Integrated Assessment Models (IAMs) of the climate and economy aim to analyze
the impact and efficacy of policies intended to control climate change, such as
carbon taxes and subsidies. A major characteristic of IAMs is that their
geophysical sector determines the mean surface temperature increase over the
preindustrial level, which in turn determines the damage function. Most of the
existing IAMs are perfect-foresight forward-looking models, assuming that we
know all of the future information. However, there are significant
uncertainties in the climate and economic system, including parameter
uncertainty, model uncertainty, climate tipping risks, economic risks, and
ambiguity. For example, climate damages are uncertain: some researchers assume
that climate damages are proportional to instantaneous output, while others
assume that climate damages have a more persistent impact on economic growth.
Climate tipping risks represent (nearly) irreversible climate events that may
lead to significant changes in the climate system, such as the Greenland ice
sheet collapse, while the conditions, probability of tipping, duration, and
associated damage are also uncertain. Technological progress in carbon capture
and storage, adaptation, renewable energy, and energy efficiency is uncertain
too. In the face of these uncertainties, policymakers have to make decisions
that consider important factors such as risk aversion, inequality aversion, and
the sustainability of the economy and ecosystem. Solving this problem
may require richer and more realistic models than standard IAMs, and advanced
computational methods. The recent literature has shown that these uncertainties
can be incorporated into IAMs and may change optimal climate policies
significantly.",The Role of Uncertainty in Controlling Climate Change,2020-03-03 19:00:26,Yongyang Cai,"http://arxiv.org/abs/2003.01615v2, http://arxiv.org/pdf/2003.01615v2",econ.GN
31740,gn,"I study the relationship between the likelihood of a violent domestic
conflict and the risk that such a conflict ""externalizes"" (i.e. spreads to
another country by creating an international dispute). I consider a situation
in which a domestic conflict between a government and a rebel group has the
potential to externalize. I show that the risk of externalization increases the
likelihood of a peaceful outcome, but only if the government is sufficiently
powerful relative to the rebels, the risk of externalization is sufficiently
high, and the foreign actor who can intervene in the domestic conflict is
sufficiently uninterested in material costs and benefits. I show how this model
helps to understand the recent and successful peace process between the
Colombian government and the country's most powerful rebel group, the
Revolutionary Armed Forces of Colombia (FARC).",Conflict externalization and the quest for peace: theory and case evidence from Colombia,2020-03-06 04:33:27,Hector Galindo-Silva,"http://dx.doi.org/10.1515/peps-2020-0010, http://arxiv.org/abs/2003.02990v2, http://arxiv.org/pdf/2003.02990v2",econ.GN
31741,gn,"We provide one of the first systematic assessments of the development and
determinants of economic anxiety at the onset of the coronavirus pandemic.
Using a global dataset on internet searches and two representative surveys from
the US, we document a substantial increase in economic anxiety during and after
the arrival of the coronavirus. We also document a large dispersion in beliefs
about the pandemic risk factors of the coronavirus, and demonstrate that these
beliefs causally affect individuals' economic anxieties. Finally, we show that
individuals' mental models of infectious disease spread understate non-linear
growth and shape the extent of economic anxiety.",Coronavirus Perceptions And Economic Anxiety,2020-03-09 00:18:54,"Thiemo Fetzer, Lukas Hensel, Johannes Hermle, Christopher Roth","http://arxiv.org/abs/2003.03848v4, http://arxiv.org/pdf/2003.03848v4",econ.GN
31742,gn,"This article analyzes the procedure for the initial employment of research
assistants in Turkish universities to see if it complies with the rules and
regulations. We manually collected 2409 applicant data from 53 Turkish
universities to see if applicants are ranked according to the rules suggested
by the Higher Education Council of Turkey. The rulebook states that applicants
should be ranked according to a final score based on the weighted average of
their GPA, graduate examination score, academic examination score, and foreign
language skills score. Thus, the research assistant selection is supposed to be
a fair process where each applicant is evaluated based on objective metrics.
However, our analysis of data suggests that the final score of the applicants
is almost entirely based on the highly subjective academic examination
conducted by the hiring institution. Thus, the applicants' GPA, standardized
graduate examination score, and standardized foreign language score are
irrelevant in the selection process, making it a very unfair process based on
favoritism.",Favoritism in Research Assistantship Selection in Turkish Academia,2020-03-09 15:19:12,Osman Gulseven,"http://arxiv.org/abs/2003.04060v1, http://arxiv.org/pdf/2003.04060v1",econ.GN
31743,gn,"This paper examines the export promotion of processed foods by a regional
economy and regional vitalisation policy. We employ Bertrand models that
contain a major home producer and a home producer in a local area. In our
model, growth in the profit of one producer does not result in an increase in
the profit of the other, despite strategic complements. We show that the profit
of the producer in the local area decreases because of the deterioration of a
location condition, and its profit increases through the reinforcement of the
administrative guidance. Furthermore, when the inefficiency of the location
worsens, the local government should optimally decrease the level of
administrative guidance. Hence, the local government should strategically
eliminate this inefficiency to maintain a sufficient effect of administrative
guidance.",Optimal trade strategy of a regional economy by food exports,2020-03-08 12:08:37,M. Okimoto,"http://arxiv.org/abs/2003.04307v1, http://arxiv.org/pdf/2003.04307v1",econ.GN
31744,gn,"The installation of high-speed rail in the world during the last two decades
resulted in significant socioeconomic and environmental changes. The U.S. has
the longest rail network in the world, but the focus is on carrying a wide
variety of loads including coal, farm crops, industrial products, commercial
goods, and miscellaneous mixed shipments. Freight and passenger services in the
U.S. date to 1970, with both carried out by private railway companies.
Railways were the main means of transport between cities from the late 19th
century through the middle of the 20th century. However, rapid growth in
production and improvements in technologies changed those dynamics. The fierce
competition for comfortability and pleasantness in passenger travel and the
proliferation of aviation services in the U.S. channeled federal and state
budgets towards motor vehicle infrastructure, which brought demand for
railroads to a halt in the 1950s. Presently, the U.S. has no high-speed trains,
aside from sections of Amtrak's Acela line in the Northeast Corridor that can
reach 150 mph for only 34 miles of its 457-mile span. The average speed between
New York and Boston is about 65 mph. On the other hand, China has the world's
fastest and largest high-speed rail network, with more than 19,000 miles, of
which the vast majority was built in the past decade. Japan's bullet trains can
reach nearly 200 miles per hour and date to the 1960s. That system moved more
than 9 billion people without a single passenger casualty. In this systematic
review, we studied the effect of High-Speed Rail (HSR) on the U.S. and other
countries including France, Japan, Germany, Italy, and China in terms of energy
consumption, land use, economic development, travel behavior, time use, human
health, and quality of life.",A Systematic and Analytical Review of the Socioeconomic and Environmental Impact of the Deployed High-Speed Rail (HSR) Systems on the World,2020-03-10 02:18:01,"Mohsen Momenitabar, Zhila Dehdari Ebrahimi, Mohammad Arani","http://arxiv.org/abs/2003.04452v2, http://arxiv.org/pdf/2003.04452v2",econ.GN
31751,gn,"Using a survey on wage expectations among students at two Swiss institutions
of higher education, we examine the wage expectations of our respondents along
two main lines. First, we investigate the rationality of wage expectations by
comparing average expected wages from our sample with those of similar
graduates; we further examine how our respondents revise their expectations
when provided information about actual wages. Second, using causal mediation
analysis, we test whether the consideration of a rich set of personal and
professional controls, namely concerning family formation and children in
addition to professional preferences, accounts for the difference in wage
expectations across genders. We find that males and females overestimate their
wages compared to actual ones, and that males respond in an overconfident
manner to information about outside wages. Despite the attenuation of the
gender difference in wage expectations brought about by the comprehensive set
of controls, gender generally retains a significant direct, unexplained effect
on wage expectations.",Gender Differences in Wage Expectations,2020-03-25 19:57:57,"Ana Fernandes, Martin Huber, Giannina Vaccaro","http://dx.doi.org/10.1371/journal.pone.0250892, http://arxiv.org/abs/2003.11496v1, http://arxiv.org/pdf/2003.11496v1",econ.GN
31745,gn,"Pursuing three important elements including economic, safety, and traffic are
the overall objective of decision evaluation across all transport projects. In
this study, we investigate the feasibility of the development of city
interchanges and road connections for network users. To achieve this goal, a
series of minor goals are required to be met in advance including determining
benefits, costs of implement-ing new highway interchanges, quantifying the
effective parameters, the increase in fuel consumption, the reduction in travel
time, and finally influence on travel speed. In this study, geometric
advancement of Hakim highway, and Yadegar-e-Emam Highway were investigated in
the Macro view from the cloverleaf inter-section with a low capacity to a
three-level directional intersection of the enhanced cloverleaf. For this
purpose, the simulation was done by EMME software of INRO Company. The results
of the method were evaluated by the objective of net present value (NPV), and
the benefit and cost of each one was stated precisely in different years. At
the end, some suggestion has been provided.",A New Approach for Macroscopic Analysis to Improve the Technical and Economic Impacts of Urban Interchanges on Traffic Networks,2020-03-10 02:30:11,"Seyed Hassan Hosseini, Ahmad Mehrabian, Zhila Dehdari Ebrahimi, Mohsen Momenitabar, Mohammad Arani","http://arxiv.org/abs/2003.04459v2, http://arxiv.org/pdf/2003.04459v2",econ.GN
31746,gn,"Existing emissions trading system (ETS) designs inhibit emissions but do not
constrain warming to any fixed level, preventing certainty of the global path of
warming. Instead, they have the indirect objective of reducing emissions. They
provide poor future price information. And they have high transaction costs for
implementation, requiring treaties and laws. To address these shortcomings,
this paper proposes a novel double-sided auction mechanism of emissions permits
and sequestration contracts tied to temperature. This mechanism constrains
warming for many (e.g., 150) years into the future and every auction would
provide price information for this time range. In addition, this paper proposes
a set of market rules and a bottom-up implementation path. A coalition of
businesses begins implementation, with jurisdictions joining as they are ready.
The combination of the selected market rules and the proposed implementation
path appears to incentivize participation. This design appears to be closer to
""first best"" with a lower cost of mitigation than any in the literature, while
increasing the certainty of avoiding catastrophic warming. This design should
also have a faster pathway to implementation. A numerical simulation shows
surprising results, e.g., that static prices are wrong, prices should evolve
over time in a way that contradicts other recent proposals, and ""global warming
potential"" as used in existing ETSs is generally erroneous.",A price on warming with a supply chain directed market,2020-03-11 08:04:39,John F. Raffensperger,"http://dx.doi.org/10.1007/s43621-021-00011-4, http://arxiv.org/abs/2003.05114v2, http://arxiv.org/pdf/2003.05114v2",econ.GN
31747,gn,"The expansion of global production networks has raised many important
questions about the interdependence among countries and how future changes in
the world economy are likely to affect the countries' positioning in global
value chains. We are approaching the structure and lengths of value chains from
a completely different perspective than has been available so far. By assigning
a random endogenous variable to a network linkage representing the number of
intermediate sales/purchases before absorption (final use or value added), the
discrete-time absorbing Markov chains proposed here shed new light on the world
input/output networks. The variance of this variable can help assess the risk
when shaping the chain length and optimize the level of production. Contrary to
what might be expected simply on the basis of comparative advantage, the
results reveal that both the input and output chains exhibit the same
quasi-stationary product distribution. Put differently, the expected proportion
of time spent in a state before absorption is invariant to changes of the
network type. Finally, the several global metrics proposed here, including the
probability distribution of global value added/final output, provide guidance
for policy makers when estimating the resilience of the world trading system and
forecasting the macroeconomic developments.",On the structure of the world economy: An absorbing Markov chain approach,2020-03-11 13:27:32,"Olivera Kostoska, Viktor Stojkoski, Ljupco Kocarev","http://dx.doi.org/10.3390/e22040482, http://arxiv.org/abs/2003.05204v1, http://arxiv.org/pdf/2003.05204v1",econ.GN
31748,gn,"This study estimates the risk contributions of individual European countries
regarding the indemnity payments in agricultural insurance. We model the total
risk exposure as an insurance portfolio where each country is unique in terms
of its risk characteristics. The data has been collected from the recent
surveys conducted by the European Commission and the World Bank. Farm
Accountancy Data Network is used as well. 22 out of 26 member states are
included in the study. The results suggest that the Euro-Mediterranean countries
are the major risk contributors. These countries not only have the highest
expected loss but also high volatility of indemnity payments. Nordic countries
have the lowest indemnity payments and risk exposure.",Indemnity Payments in Agricultural Insurance: Risk Exposure of EU States,2020-03-12 15:16:56,"Osman Gulseven, Kasirga Yildirak","http://arxiv.org/abs/2003.05726v1, http://arxiv.org/pdf/2003.05726v1",econ.GN
31749,gn,"This article introduces the Hedonic Metric (HM) approach as an original
method to model the demand for differentiated products. Using this approach,
initially, we create an n-dimensional hedonic space based on the characteristic
information available to consumers. Next, we allocate products into this space
and estimate the elasticities using distances. Our model makes it possible to
estimate a large number of differentiated products in a single demand system.
We applied our model to estimate the retail demand for fluid milk products.",A Hedonic Metric Approach to Estimating the Demand for Differentiated Products: An Application to Retail Milk Demand,2020-03-12 15:06:19,"Osman Gulseven, Michael Wohlgenant","http://arxiv.org/abs/2003.07197v1, http://arxiv.org/pdf/2003.07197v1",econ.GN
31750,gn,"Many countries are ethnically diverse. However, despite the benefits of
ethnic heterogeneity, ethnic-based political inequality and discrimination are
pervasive. Why is this? This study suggests that part of the variation in
ethnic-based political inequality depends on the relative size of ethnic groups
within each country. Using group-level data for 569 ethnic groups in 175
countries from 1946 to 2017, I find evidence of an inverted-U-shaped
relationship between an ethnic group's relative size and its access to power.
This single-peaked relationship is robust to many alternative specifications,
and a battery of robustness checks suggests that relative size influences
access to power. Through a very simple model, I propose an explanation based on
an initial high level of political inequality, and on the incentives that more
powerful groups have to continue limiting other groups' access to power. This
explanation incorporates essential elements of several existing theories on the
relationship between group size and discrimination, and suggests a new
empirical prediction: the single-peaked pattern should be weaker in countries
where political institutions have historically been less open. This additional
prediction is supported by the data.",Ethnic Groups' Access to State Power and Group Size,2020-03-18 09:35:35,Hector Galindo-Silva,"http://arxiv.org/abs/2003.08064v1, http://arxiv.org/pdf/2003.08064v1",econ.GN
31752,gn,"As baby boomers have begun to downsize and retire, their preferences now
overlap with millennials' predilection for urban amenities and smaller living
spaces. This confluence in tastes between the two largest age segments of the
U.S. population has meaningfully changed the evolution of home prices in the
United States. Utilizing a Bartik shift-share instrument for demography-driven
demand shocks, we show that from 2000 to 2018 (i) the price growth of four- and
five-bedroom houses has lagged that of one- and two-bedroom homes, (ii)
within local labor markets, the relative home prices in baby boomer-rich zip
codes have declined compared with millennial-rich neighborhoods, and (iii) the
zip codes with the largest relative share of smaller homes have grown fastest.
These patterns have become more pronounced during the latest economic cycle. We
show that the effects are concentrated in areas where housing supply is most
inelastic. If this pattern in the housing market persists or expands, the
approximately $16.5 trillion in real estate wealth held by households headed by
those aged 55 or older will be significantly affected. We find little evidence
that these upcoming changes have been incorporated into current prices.","The Millennial Boom, the Baby Bust, and the Housing Market",2020-03-25 21:03:45,"Marijn A. Bolhuis, Judd N. L. Cramer","http://arxiv.org/abs/2003.11565v2, http://arxiv.org/pdf/2003.11565v2",econ.GN
31753,gn,"While the coronavirus spreads, governments are attempting to reduce contagion
rates at the expense of negative economic effects. Market expectations
plummeted, foreshadowing the risk of a global economic crisis and mass
unemployment. Governments provide huge financial aid programmes to mitigate the
economic shocks. To achieve higher effectiveness with such policy measures, it
is key to identify the industries that are most in need of support. In this
study, we introduce a data-mining approach to measure industry-specific risks
related to COVID-19. We examine company risk reports filed to the U.S.
Securities and Exchange Commission (SEC). This alternative data set can
complement more traditional economic indicators in times of the fast-evolving
crisis as it allows for a real-time analysis of risk assessments. Preliminary
findings suggest that the companies' awareness towards corona-related business
risks is ahead of the overall stock market developments. Our approach allows us
to distinguish industries by their risk awareness towards COVID-19. Based on
natural language processing, we identify corona-related risk topics and their
perceived relevance for different industries. The preliminary findings are
summarised as an up-to-date online index. The CoRisk-Index tracks the
industry-specific risk assessments related to the crisis, as it spreads through
the economy. The tracking tool is updated weekly. It could provide relevant
empirical data to inform models on the economic effects of the crisis. Such
complementary empirical information could ultimately help policymakers to
effectively target financial support in order to mitigate the economic shocks
of the crisis.",The CoRisk-Index: A data-mining approach to identify industry-specific risk assessments related to COVID-19 in real-time,2020-03-27 17:21:50,"Fabian Stephany, Niklas Stoehr, Philipp Darius, Leonie Neuhäuser, Ole Teutloff, Fabian Braesemann","http://arxiv.org/abs/2003.12432v3, http://arxiv.org/pdf/2003.12432v3",econ.GN
31754,gn,"Challenge Theory (Shye & Haber 2015; 2020) has demonstrated that a newly
devised challenge index (CI) attributable to every binary choice problem
predicts the popularity of the bold option, the one of lower probability to
gain a higher monetary outcome (in a gain problem); and the one of higher
probability to lose a lower monetary outcome (in a loss problem). In this paper
we show how Facet Theory structures the choice-behavior concept-space and
yields rationalized measurements of gambling behavior. The data of this study
consist of responses obtained from 126 students, specifying their preferences in
44 risky decision problems. A Faceted Smallest Space Analysis (SSA) of the 44
problems confirmed the hypothesis that the space of binary risky choice
problems is partitionable by two binary axial facets: (a) Type of Problem (gain
vs. loss); and (b) CI (Low vs. High). Four composite variables, representing
the validated constructs: Gain, Loss, High-CI and Low-CI, were processed using
Multiple Scaling by Partial Order Scalogram Analysis with base Coordinates
(POSAC), leading to a meaningful and intuitively appealing interpretation of
two necessary and sufficient gambling-behavior measurement scales.",Challenge Theory: The Structure and Measurement of Risky Binary Choice Behavior,2020-03-24 23:45:11,"Samuel Shye, Ido Haber","http://arxiv.org/abs/2003.12474v1, http://arxiv.org/pdf/2003.12474v1",econ.GN
31755,gn,"A large body of research documents that the 2010 dependent coverage mandate
of the Affordable Care Act was responsible for significantly increasing health
insurance coverage among young adults. No prior research has examined whether
sexual minority young adults also benefitted from the dependent coverage
mandate, despite previous studies showing lower health insurance coverage among
sexual minorities and the fact that their higher likelihood of strained
relationships with their parents might predict a lower ability to use parental
coverage. Our estimates from the American Community Surveys using
difference-in-differences and event study models show that men in same-sex
couples age 21-25 were significantly more likely to have any health insurance
after 2010 compared to the associated change for slightly older 27 to
31-year-old men in same-sex couples. This increase is concentrated among
employer-sponsored insurance, and it is robust to permutations of time periods
and age groups. Effects for women in same-sex couples and men in different-sex
couples are smaller than the associated effects for men in same-sex couples.
These findings confirm the broad effects of expanded dependent coverage and
suggest that eliminating the federal dependent mandate could reduce health
insurance coverage among young adult sexual minorities in same-sex couples.",Effects of the Affordable Care Act Dependent Coverage Mandate on Health Insurance Coverage for Individuals in Same-Sex Couples,2020-04-05 22:59:45,"Christopher S. Carpenter, Gilbert Gonzales, Tara McKay, Dario Sansone","http://arxiv.org/abs/2004.02296v1, http://arxiv.org/pdf/2004.02296v1",econ.GN
31756,gn,"The Coase Theorem has a central place in the theory of environmental
economics and regulation. But its applicability for solving real-world
externality problems remains debated. In this paper, we first place this
seminal contribution in its historical context. We then survey the experimental
literature that has tested the importance of the many, often tacit assumptions
in the Coase Theorem in the laboratory. We discuss a selection of applications
of the Coase Theorem to actual environmental problems, distinguishing between
situations in which the polluter or the pollutee pays. While limited in scope,
Coasian bargaining over externalities offers a pragmatic solution to problems
that are difficult to solve in any other way.",Applications of the Coase Theorem,2020-04-08 23:54:37,"Tatyana Deryugina, Frances Moore, Richard S. J. Tol","http://arxiv.org/abs/2004.04247v2, http://arxiv.org/pdf/2004.04247v2",econ.GN
31758,gn,"Social distancing is the primary policy prescription for combating the
COVID-19 pandemic, and has been widely adopted in Europe and North America. We
estimate the value of disease avoidance using an epidemiological model that
projects the spread of COVID-19 across rich and poor countries. Social
distancing measures that ""flatten the curve"" of the disease to bring demand
within the capacity of healthcare systems are predicted to save many lives in
high-income countries, such that practically any economic cost is worth
bearing. These social distancing policies are estimated to be less effective in
poor countries with younger populations less susceptible to COVID-19, and more
limited healthcare systems, which were overwhelmed before the pandemic.
Moreover, social distancing lowers disease risk by limiting people's economic
opportunities. Poorer people are less willing to make those economic
sacrifices. They place relatively greater value on their livelihood concerns
compared to contracting COVID-19. Not only are the epidemiological and economic
benefits of social distancing much smaller in poorer countries, such policies
may exact a heavy toll on the poorest and most vulnerable. Workers in the
informal sector lack the resources and social protections to isolate themselves
and sacrifice economic opportunities until the virus passes. By limiting their
ability to earn a living, social distancing can lead to an increase in hunger,
deprivation, and related mortality and morbidity. Rather than a blanket
adoption of social distancing measures, we advocate for the exploration of
alternative harm-reduction strategies, including universal mask adoption and
increased hygiene measures.",The Benefits and Costs of Social Distancing in Rich and Poor Countries,2020-04-10 03:48:15,"Zachary Barnett-Howell, Ahmed Mushfiq Mobarak","http://dx.doi.org/10.1093/trstmh/traa140, http://arxiv.org/abs/2004.04867v1, http://arxiv.org/pdf/2004.04867v1",econ.GN
31759,gn,"Most of the world poorest people come from rural areas and depend on their
local ecosystems for food production. Recent research has highlighted the
importance of self-reinforcing dynamics between low soil quality and persistent
poverty but little is known on how they affect poverty alleviation. We
investigate how the intertwined dynamics of household assets, nutrients
(especially phosphorus), water and soil quality influence food production and
determine the conditions for escape from poverty for the rural poor. We have
developed a suite of dynamic, multidimensional poverty trap models of
households that combine economic aspects of growth with ecological dynamics of
soil quality, water and nutrient flows to analyze the effectiveness of common
poverty alleviation strategies such as intensification through agrochemical
inputs, diversification of energy sources and conservation tillage. Our results
show that (i) agrochemical inputs can reinforce poverty by degrading soil
quality, (ii) diversification of household energy sources can create
possibilities for effective application of other strategies, and (iii)
sequencing of interventions can improve effectiveness of conservation tillage.
Our model-based approach demonstrates the interdependence of economic and
ecological dynamics, which precludes blanket solutions for poverty alleviation.
Stylized models as developed here can be used for testing the effectiveness of
different strategies given biophysical and economic settings in the target
region.","Effective alleviation of rural poverty depends on the interplay between productivity, nutrients, water and soil quality",2020-04-09 00:39:54,"Sonja Radosavljevic, L. Jamila Haider, Steven J. Lade, Maja Schluter","http://dx.doi.org/10.1016/j.ecolecon.2019.106494, http://arxiv.org/abs/2004.05229v1, http://arxiv.org/pdf/2004.05229v1",econ.GN
31760,gn,"We provide quantitative predictions of first order supply and demand shocks
for the U.S. economy associated with the COVID-19 pandemic at the level of
individual occupations and industries. To analyze the supply shock, we classify
industries as essential or non-essential and construct a Remote Labor Index,
which measures the ability of different occupations to work from home. Demand
shocks are based on a study of the likely effect of a severe influenza epidemic
developed by the US Congressional Budget Office. Compared to the pre-COVID
period, these shocks would threaten around 22% of the US economy's GDP,
jeopardise 24% of jobs and reduce total wage income by 17%. At the industry
level, sectors such as transport are likely to have output constrained by
demand shocks, while sectors relating to manufacturing, mining and services are
more likely to be constrained by supply shocks. Entertainment, restaurants and
tourism face large supply and demand shocks. At the occupation level, we show
that high-wage occupations are relatively immune from adverse supply and
demand-side shocks, while low-wage occupations are much more vulnerable. We
should emphasize that our results are only first-order shocks -- we expect them
to be substantially amplified by feedback effects in the production network.",Supply and demand shocks in the COVID-19 pandemic: An industry and occupation perspective,2020-04-14 22:19:15,"R. Maria del Rio-Chanona, Penny Mealy, Anton Pichler, Francois Lafond, Doyne Farmer","http://dx.doi.org/10.1093/oxrep/graa033, http://arxiv.org/abs/2004.06759v1, http://arxiv.org/pdf/2004.06759v1",econ.GN
31761,gn,"The Hicks induced innovation hypothesis states that a price increase of a
production factor is a spur to invention. We propose an alternative hypothesis
restating that a spur to invention requires not only an increase in one factor
but also a decrease in at least one other factor to offset companies' costs.
We illustrate the need for our alternative hypothesis in a historical example
of the industrial revolution in the United Kingdom. Furthermore, we
econometrically evaluate both hypotheses in a case study of research and
development (R&D) in 29 OECD countries from 2003 to 2017. Specifically, we
investigate dependence of investments to R&D on economic environment
represented by average wages and oil prices using panel regression. We find
that our alternative hypothesis is supported for R&D funded and/or performed by
business enterprises while the original Hicks hypothesis holds for R&D funded
by the government and R&D performed by universities. Our results reflect that
business sector is significantly influenced by market conditions, unlike the
government and higher education sectors.",Economic Conditions for Innovation: Private vs. Public Sector,2020-04-15 22:37:21,"Tomáš Evan, Vladimír Holý","http://dx.doi.org/10.1016/j.seps.2020.100966, http://arxiv.org/abs/2004.07814v3, http://arxiv.org/pdf/2004.07814v3",econ.GN
31823,gn,"The paper is a collection of knowledge regarding the phenomenon of climate
change, competitiveness, and literature linking the two phenomena to
agricultural market competitiveness. The objective is to investigate the peer
reviewed and grey literature on the subject to explore the link between climate
change and agricultural market competitiveness and also explore an appropriate
technique to validate the presumed relationship empirically. The paper
concludes by identifying implications for developing an agricultural
competitiveness index while incorporating the climate change impacts, to
enhance the potential of agricultural markets for optimizing the agricultural
sector's competitiveness.",Reviewing climate change and agricultural market competitiveness,2020-08-31 19:45:03,"Bakhtmina Zia, Dr Muhammad Rafiq PhD Research Scholar, Institute of Management Sciences, Peshawar, Pakistan, Associate Professor, Institute of Management Sciences, Peshawar, Pakistan","http://arxiv.org/abs/2008.13726v1, http://arxiv.org/pdf/2008.13726v1",econ.GN
31762,gn,"The long-lasting socio-economic impact of the global financial crisis has
questioned the adequacy of traditional tools in explaining periods of financial
distress, as well as the adequacy of the existing policy response. In
particular, the effect of complex interconnections among financial institutions
on financial stability has been widely recognized. A recent debate focused on
the effects of unconventional policies aimed at achieving both price and
financial stability. In particular, Quantitative Easing (QE, i.e., the
large-scale asset purchase programme conducted by a central bank upon the
creation of new money) has been recently implemented by the European Central
Bank (ECB). In this context, two questions deserve more attention in the
literature. First, to what extent, by injecting liquidity, the QE may alter the
bank-firm lending level and stimulate the real economy. Second, to what extent
the QE may also alter the pattern of intra-financial exposures among financial
actors (including banks, investment funds, insurance corporations, and pension
funds) and what are the implications in terms of financial stability. Here, we
address these two questions by developing a methodology to map the
macro-network of financial exposures among institutional sectors across
financial instruments (e.g., equity, bonds, and loans) and we illustrate our
approach on recently available data (i.e., data on loans and private and public
securities purchased within the QE). We then test the effect of the
implementation of ECB's QE on the time evolution of the financial linkages in
the macro-network of the euro area, as well as the effect on macroeconomic
variables, such as output and prices.",Real implications of Quantitative Easing in the euro area: a complex-network perspective,2020-04-02 22:23:16,"Chiara Perillo, Stefano Battiston","http://dx.doi.org/10.1007/978-3-319-72150-7_94, http://arxiv.org/abs/2004.09418v1, http://arxiv.org/pdf/2004.09418v1",econ.GN
31763,gn,"Several studies have shown that birth order and the sex of siblings may have
an influence on individual behavioral traits. In particular, it has been found
that second brothers (of older male siblings) tend to have more disciplinary
problems. If this is the case, this should also be shown in contact sports. To
assess this hypothesis we use a data set from the South Rugby Union (URS) from
Bahía Blanca, Argentina, and information obtained by surveying more than four
hundred players of that league. We find a statistically significant positive
relation between being a second-born male rugby player with an older male
brother and the number of yellow cards received.
  Keywords: Birth Order; Behavior; Contact Sports; Rugby.",The Impact of Birth Order on Behavior in Contact Team Sports: the Evidence of Rugby Teams in Argentina,2020-04-17 01:16:02,"Fernando Delbianco, Federico Fioravanti, Fernando Tohmé","http://arxiv.org/abs/2004.09421v1, http://arxiv.org/pdf/2004.09421v1",econ.GN
31764,gn,"We propose a highly schematic economic model in which, in some cases, wage
inequalities lead to higher overall social welfare. This is due to the fact
that high earners can consume low-productivity, non-essential products, which
allows everybody to remain employed even when the productivity of essential
goods is high and producing them does not require everybody to work. We derive
a relation between heterogeneities in technologies and the minimum Gini
coefficient required to maximize global welfare. Stronger inequalities appear
to be economically unjustified. Our model may shed light on the role of
non-essential goods in the economy, a topical issue when thinking about the
post-Covid-19 world.",How Much Income Inequality Is Too Much?,2020-04-21 12:08:28,Jean-Philippe Bouchaud,"http://arxiv.org/abs/2004.09835v1, http://arxiv.org/pdf/2004.09835v1",econ.GN
31765,gn,"The issue of climate change has become increasingly noteworthy in the past
years, the transition towards a renewable energy system is a priority in the
transition to a sustainable society. In this document, we explore the
definition of green energy transition, how it is reached, and what are the
driven factors to achieve it. To answer that firstly, we have conducted a
literature review discovering definitions from different disciplines, secondly,
gathering the key factors that are drivers for energy transition, finally, an
analysis of the factors is conducted within the context of European Union data.
Preliminary results have shown that household net income and governmental legal
actions related to environmental issues are potential candidates to predict
energy transition within countries. With this research, we intend to spark new
research directions in order to get a common social and scientific
understanding of green energy transition.",Venturing the Definition of Green Energy Transition: A systematic literature review,2020-04-22 16:25:53,"Pedro V Hernandez Serrano, Amrapali Zaveri","http://arxiv.org/abs/2004.10562v2, http://arxiv.org/pdf/2004.10562v2",econ.GN
31766,gn,"In this article, we provide a comparative analysis of the industry,
communication, and research infrastructure among the GCC member states as
measured by the United Nations sustainable development goal 9. SDG 9 provides a
clear framework for measuring the performance of nations in achieving
sustainable industrialization. Three pillars of this goal are defined as
quality logistics and efficient transportation, availability of mobile-cellular
network with high-speed internet access, and quality research output. Based on
the data from both the United Nations' SDG database and the Bertelsmann
Stiftung SDG-index, our results suggest that while most of the sub-goals in SDG
9 are achieved, significant challenges remain ahead. Notably, the research
output of the GCC member states is not on par with that of the developed world.
We suggest that GCC decision-makers initiate national and supranational research
schemes in order to boost research and development in the region.",The Divergence Between Industrial Infrastructure and Research Output among the GCC Member States,2020-04-22 17:44:37,"Osman Gulseven, Abdulrahman Elmi, Odai Bataineh","http://arxiv.org/abs/2004.11235v1, http://arxiv.org/pdf/2004.11235v1",econ.GN
31767,gn,"Proof-of-work blockchains need to be carefully designed so as to create the
proper incentives for miners to faithfully maintain the network in a
sustainable way. This paper describes how the economic engineering of the
Conflux Network, a high throughput proof-of-work blockchain, leads to sound
economic incentives that support desirable and sustainable mining behavior. In
detail, this paper parameterizes the level of income, and thus network
security, that Conflux can generate, and it describes how this depends on user
behavior and ""policy variables'' such as block and interest inflation. It also
discusses how the underlying economic engineering design makes the Conflux
Network resilient against double spending and selfish mining attacks.",Engineering Economics in the Conflux Network,2020-04-28 20:51:28,"Yuxi Cai, Fan Long, Andreas Park, Andreas Veneris","http://arxiv.org/abs/2004.13696v1, http://arxiv.org/pdf/2004.13696v1",econ.GN
34726,th,"Given a player is guaranteed the same payoff for each delivery path in a
single-cube delivery network, the player's best response is to randomly divide
all goods and deliver them to all other nodes, and the best response satisfies
the Kuhn-Tucker condition. The state of the delivery network is randomly
complete. If congestion costs are introduced to the player's maximization
problem in a multi-cubic delivery network, the congestion paradox arises where
all coordinates become congested as long as the previous assumptions about
payoffs are maintained.",Theoretical Steps to Optimize Transportation in the Cubic Networks and the Congestion Paradox,2024-01-01 22:10:25,Joonkyung Yoo,"http://arxiv.org/abs/2401.00940v1, http://arxiv.org/pdf/2401.00940v1",econ.TH
31768,gn,"We conduct a unique, Amazon MTurk-based global experiment to investigate the
importance of an exponential-growth prediction bias (EGPB) in understanding why
the COVID-19 outbreak has exploded. The scientific basis for our inquiry is the
well-established fact that disease spread, especially in the initial stages,
follows an exponential function meaning few positive cases can explode into a
widespread pandemic if the disease is sufficiently transmittable. We define
prediction bias as the systematic error arising from faulty prediction of the
number of cases x-weeks hence when presented with y-weeks of prior, actual data
on the same. Our design permits us to identify the root of this
under-prediction as an EGPB arising from the general tendency to underestimate
the speed at which exponential processes unfold. Our data reveals that the
""degree of convexity"" reflected in the predicted path of the disease is
significantly and substantially lower than the actual path. The bias is
significantly higher for respondents from countries at a later stage relative
to those at an early stage of disease progression. We find that individuals who
exhibit EGPB are also more likely to reveal markedly reduced compliance with
the WHO-recommended safety measures, find general violations of safety
protocols less alarming, and show greater faith in their government's actions.
A simple behavioral nudge which shows prior data in terms of raw numbers, as
opposed to a graph, causally reduces EGPB. Clear communication of risk via raw
numbers could increase accuracy of risk perception, in turn facilitating
compliance with suggested protective behaviors.",Exponential-growth prediction bias and compliance with safety measures in the times of COVID-19,2020-04-29 08:28:29,"Ritwik Banerjee, Joydeep Bhattacharya, Priyama Majumdar","http://arxiv.org/abs/2005.01273v1, http://arxiv.org/pdf/2005.01273v1",econ.GN
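A tiny numerical example of the exponential-growth prediction bias described above: with cases doubling weekly, a respondent who extrapolates linearly from two weeks of data will heavily under-predict the count a few weeks ahead. All numbers are purely illustrative, not taken from the experiment.

```python
# Illustrative arithmetic: exponential growth vs. linear extrapolation.
weeks_observed = [100, 200]          # two weeks of prior data, doubling weekly
growth = weeks_observed[1] / weeks_observed[0]

true_in_3_weeks = weeks_observed[1] * growth ** 3                            # 200 * 2^3 = 1600
linear_guess = weeks_observed[1] + 3 * (weeks_observed[1] - weeks_observed[0])  # 200 + 300 = 500

print(f"actual: {true_in_3_weeks:.0f}, linear extrapolation: {linear_guess:.0f}")
print(f"under-prediction: {1 - linear_guess / true_in_3_weeks:.0%}")         # ~69%
```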
31769,gn,"Over sixty percent of employees at a large South African company contribute
the minimum rate of 7.5 percent to a retirement fund, far below the rate of 15
percent recommended by financial advisers. I use a field experiment to
investigate whether providing employees with a retirement calculator, which
shows projections of retirement income, leads to increases in contributions.
The impact is negligible. The lack of response to the calculator suggests many
employees may wish to save less than the minimum. I use a model of asymmetric
information to explain why the employer sets a binding minimum.",On Track for Retirement?,2020-05-04 20:45:35,Matthew Olckers,"http://arxiv.org/abs/2005.01692v3, http://arxiv.org/pdf/2005.01692v3",econ.GN
31770,gn,"We investigate structural change in the PR China during a period of
particularly rapid growth 1998-2014. For this, we utilize sectoral data from
the World Input-Output Database and firm-level data from the Chinese Industrial
Enterprise Database. Starting with correlation laws known from the literature
(Fabricant's laws), we investigate which empirical regularities hold at the
sectoral level and show that many of these correlations cannot be recovered at
the firm level. For a more detailed analysis, we propose a multi-level
framework, which is validated empirically. For this, we perform a robust
regression, since various input variables at the firm-level as well as the
residuals of exploratory OLS regressions are found to be heavy-tailed. We
conclude that Fabricant's laws and other regularities are primarily
characteristics of the sectoral level which rely on aspects like
infrastructure, technology level, innovation capabilities, and the knowledge
base of the relevant labor force. We illustrate our analysis by showing the
development of some of the larger sectors in detail and offer some policy
implications in the context of development economics, evolutionary economics,
and industrial organization.",Levels of structural change: An analysis of China's development push 1998-2014,2020-05-05 02:02:16,"Torsten Heinrich, Jangho Yang, Shuanping Dai","http://arxiv.org/abs/2005.01882v2, http://arxiv.org/pdf/2005.01882v2",econ.GN
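Because the abstract above reports heavy-tailed firm-level inputs and OLS residuals, an M-estimator is a natural reading of the "robust regression" it mentions. The snippet below is a generic sketch using statsmodels' Huber M-estimator on synthetic heavy-tailed data; it is not the authors' specification.

```python
# Robust (Huber M-estimator) regression sketch for heavy-tailed data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
# Heavy-tailed noise (Student-t with 2 degrees of freedom).
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                  # sensitive to outliers
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()      # downweights outliers

print("OLS slope:", round(ols.params[1], 3))
print("Huber slope:", round(rlm.params[1], 3))
```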
31771,gn,"This paper uses transaction data from a large bank in Scandinavia to estimate
the effect of social distancing laws on consumer spending in the COVID-19
pandemic. The analysis exploits a natural experiment to disentangle the effects
of the virus and the laws aiming to contain it: Denmark and Sweden were
similarly exposed to the pandemic but only Denmark imposed significant
restrictions on social and economic activities. We estimate that aggregate
spending dropped by around 25 percent in Sweden and, as a result of the
shutdown, by 4 additional percentage points in Denmark. This implies that most
of the economic contraction is caused by the virus itself and occurs regardless
of social distancing laws. The age gradient in the estimates suggest that
social distancing reinforces the virus-induced drop in spending for low
health-risk individuals but attenuates it for high-risk individuals by lowering
the overall prevalence of the virus in the society.","Pandemic, Shutdown and Consumer Spending: Lessons from Scandinavian Policy Responses to COVID-19",2020-05-10 14:06:13,"Asger Lau Andersen, Emil Toft Hansen, Niels Johannesen, Adam Sheridan","http://arxiv.org/abs/2005.04630v1, http://arxiv.org/pdf/2005.04630v1",econ.GN
31772,gn,"The COVID-19 pandemic has caused a massive economic shock across the world
due to business interruptions and shutdowns from social-distancing measures. To
evaluate the socio-economic impact of COVID-19 on individuals, a micro-economic
model is developed to estimate the direct impact of distancing on household
income, savings, consumption, and poverty. The model assumes two periods: a
crisis period during which some individuals experience a drop in income and can
use their precautionary savings to maintain consumption; and a recovery period,
when households save to replenish their depleted savings to pre-crisis level.
The San Francisco Bay Area is used as a case study, and the impacts of a
lockdown are quantified, accounting for the effects of unemployment insurance
(UI) and the CARES Act federal stimulus. Assuming a shelter-in-place period of
three months, the poverty rate would temporarily increase from 17.1% to 25.9%
in the Bay Area in the absence of social protection, and the lowest income
earners would suffer the most in relative terms. If fully implemented, the
combination of UI and CARES could keep the increase in poverty close to zero,
and reduce the average recovery time, for individuals who suffer an income
loss, from 11.8 to 6.7 months. However, the severity of the economic impact is
spatially heterogeneous, and certain communities are more affected than the
average and could take more than a year to recover. Overall, this model is a
first step in quantifying the household-level impacts of COVID-19 at a regional
scale. This study can be extended to explore the impact of indirect
macroeconomic effects, the role of uncertainty in households' decision-making
and the potential effect of simultaneous exogenous shocks (e.g., natural
disasters).",Socio-Economic Impacts of COVID-19 on Household Consumption and Poverty,2020-05-12 20:43:26,"Amory Martin, Maryia Markhvida, Stéphane Hallegatte, Brian Walsh","http://dx.doi.org/10.1007/s41885-020-00070-3, http://arxiv.org/abs/2005.05945v1, http://arxiv.org/pdf/2005.05945v1",econ.GN
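The two-period logic described above (a crisis period in which households draw down precautionary savings to sustain consumption, followed by a recovery period in which they rebuild savings) can be illustrated with a very small simulation. All numbers and the replenishment rule below are hypothetical, chosen only to make the mechanism concrete.

```python
# Toy sketch of the crisis/recovery household mechanism (illustrative only).
income = 3000.0           # monthly income before the crisis
savings = 4000.0          # precautionary savings
consumption = 2500.0      # monthly consumption the household tries to maintain
crisis_months = 3         # shelter-in-place duration
income_drop = 0.5         # fraction of income lost during the crisis
save_rate = 0.2           # share of income saved during recovery

# Crisis: income falls; the consumption gap is covered from savings (floored at zero).
for _ in range(crisis_months):
    gap = consumption - income * (1 - income_drop)
    drawn = min(max(gap, 0.0), savings)
    savings -= drawn

depleted = 4000.0 - savings
# Recovery: months needed to rebuild savings to the pre-crisis level.
recovery_months = depleted / (save_rate * income) if depleted > 0 else 0.0
print(f"savings drawn down: {depleted:.0f}, recovery time: {recovery_months:.1f} months")
```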
31824,gn,"This paper considers how to elicit information from sensitive survey
questions. First we thoroughly evaluate list experiments (LE), a leading method
in the experimental literature on sensitive questions. Our empirical results
demonstrate that the assumptions required to identify sensitive information in
LE are violated for the majority of surveys. Next we propose a novel survey
method, called Multiple Response Technique (MRT), for eliciting information
from sensitive questions. We require all of the respondents to answer three
questions related to the sensitive information. This technique recovers
sensitive information at a disaggregated level while still allowing arbitrary
misreporting in survey responses. An application of the MRT provides novel
empirical evidence on sexual orientation and Lesbian, Gay, Bisexual, and
Transgender (LGBT)-related sentiment.",Eliciting Information from Sensitive Survey Questions,2020-09-03 06:20:48,"Yonghong An, Pengfei Liu","http://arxiv.org/abs/2009.01430v1, http://arxiv.org/pdf/2009.01430v1",econ.GN
31773,gn,"We provide the first nationally representative estimates of sexual minority
representation in STEM fields by studying 142,641 men and women in same-sex
couples from the 2009-2018 American Community Surveys. These data indicate that
men in same-sex couples are 12 percentage points less likely to have completed
a bachelor's degree in a STEM field compared to men in different-sex couples;
there is no gap observed for women in same-sex couples compared to women in
different-sex couples. The STEM gap between men in same-sex and different-sex
couples is larger than the STEM gap between white and black men but is smaller
than the gender STEM gap. We also document a gap in STEM occupations between
men in same-sex and different-sex couples, and we replicate this finding using
independently drawn data from the 2013-2018 National Health Interview Surveys.
These differences persist after controlling for demographic characteristics,
location, and fertility. Our findings further the call for interventions
designed at increasing representation of sexual minorities in STEM.",Turing's Children: Representation of Sexual Minorities in STEM,2020-05-14 02:36:09,"Dario Sansone, Christopher S. Carpenter","http://dx.doi.org/10.1371/journal.pone.0241596, http://arxiv.org/abs/2005.06664v1, http://arxiv.org/pdf/2005.06664v1",econ.GN
31774,gn,"Social mobility captures the extent to which socio-economic status of
children, is independent of status of their respective parents. In order to
measure social mobility, most widely used indicators of socio-economic status
are income, education and occupation. While social mobility measurement based
on income is less contested, data availability in Indian context limits us to
observing mobility patterns along the dimensions of either education or
occupation. In this study we observe social mobility patterns for different
social groups along these two main dimensions, and find that while upward and
downward mobility prospects in education for SCs/STs is somewhat improving in
the recent times, occupational mobility patterns are rather worrisome. These
results motivate the need for reconciling disparate trends along education and
occupation, in order to get a more comprehensive picture of social mobility in
the country.",Patterns of social mobility across social groups in India,2020-05-14 10:37:00,Vinay Reddy Venumuddala,"http://arxiv.org/abs/2005.06771v1, http://arxiv.org/pdf/2005.06771v1",econ.GN
31775,gn,"India like many other developing countries is characterized by huge
proportion of informal labour in its total workforce. The percentage of
Informal Workforce is close to 92% of the total, as computed from the NSSO 68th round on
Employment and Unemployment, 2011-12. There are many traditional and
geographical factors which might have been responsible for this staggering
proportion of Informality in our country. As a part of this study, we focus
mainly on finding out how Informality varies with Region, Sector, Gender,
Social Group, and Working Age Groups. Further we look at how Total Inequality
is contributed by Formal and Informal Labour, and how much do
occupations/industries contribute to inequality within each of formal and
informal labour groups separately. For the purposes of our study we use NSSO
rounds 61 (2004-05) and 68 (2011-12) on employment and unemployment. The study
intends to look at an overall picture of Informality, and based on the data
highlight any inferences which are visible from the data.",Informal Labour in India,2020-05-14 11:04:17,Vinay Reddy Venumuddala,"http://arxiv.org/abs/2005.06795v1, http://arxiv.org/pdf/2005.06795v1",econ.GN
31776,gn,"Ad blockers allow users to browse websites without viewing ads. Online news
providers that rely on advertising revenue tend to perceive users' adoption of
ad blockers purely as a threat to revenue. Yet, this perception ignores the
possibility that avoiding ads, which users presumably dislike, may affect users'
online news consumption behavior in positive ways. Using 3.1 million anonymized
visits from 79,856 registered users on a news website, we find that adopting an
ad blocker has a robust positive effect on the quantity and variety of articles
users consume (21.5% - 43.3% more articles and 13.4% - 29.1% more content
categories). An increase in repeat user visits of the news website, rather than
the number of page impressions per visit, drives the news consumption. These
visits tend to start with direct navigation to the news website, indicating
user loyalty. The increase in news consumption is more substantial for users
who have less prior experience with the website. We discuss how news publishers
could benefit from these findings, including exploring revenue models that
consider users' desire to avoid ads.",How Does the Adoption of Ad Blockers Affect News Consumption?,2020-05-14 12:39:38,"Shunyao Yan, Klaus M. Miller, Bernd Skiera","http://arxiv.org/abs/2005.06840v2, http://arxiv.org/pdf/2005.06840v2",econ.GN
31777,gn,"In our country, majority of agricultural workers (who may include farmers
working within a cooperative framework, or those who work individually either
as owners or tenants) are shown to be reaping the least amount of profits in
the agriculture value chain when compared to the effort they put in. There is a
good amount of literature which broadly substantiates this situation in our
country. Main objective of this study is to have a broad understanding of the
role played by public systems in this value chain, particularly in the segment
that interacts with farmers. As a starting point, we first try to get a better
understanding of how farmers are placed in a typical agriculture value chain.
For this we take the help of recent seminal works on this topic that captured
the situation of farmers' within certain types of value chains. Then, we
isolate the segment which interacts with farmers and deep-dive into data to
understand the role played by public interventions in determining farmers'
income from agriculture. NSSO 70th round on Situation Assessment Survey of
farmers has data pertaining to the choices of farmers and the type of their
interaction with different players in the value chain. Using this data we tried
to get an econometric picture of the role played by government interventions and
the extent to which they determine the incomes that a typical farming household
derives out of agriculture.",Farmers' situation in agriculture markets and role of public interventions in India,2020-05-14 11:38:48,Vinay Reddy Venumuddala,"http://arxiv.org/abs/2005.07538v1, http://arxiv.org/pdf/2005.07538v1",econ.GN
31790,gn,"Attitudes toward risk underlie virtually every important economic decision an
individual makes. In this experimental study, I examine how introducing a time
delay into the execution of an investment plan influences individuals' risk
preferences. The field experiment proceeded in three stages: a decision stage,
an execution stage and a payout stage. At the outset, in the Decision Stage
(Stage 1), each subject was asked to make an investment plan by splitting a
monetary investment amount between a risky asset and a safe asset. Subjects
were informed that the investment plans they made in the Decision Stage are
binding and will be executed during the Execution Stage (Stage 2). The Payout
Stage (Stage 3) was the payout date. The timing of the Decision Stage and
Payout Stage was the same for each subject, but the timing of the Execution
Stage varied experimentally. I find that individuals who were assigned to
execute their investment plans later (i.e., for whom there was a greater delay
prior to the Execution Stage) invested a greater amount in the risky asset
during the Decision Stage.",Time Delay and Investment Decisions: Evidence from an Experiment in Tanzania,2020-06-02 08:25:10,Plamen Nikolov,"http://arxiv.org/abs/2006.02143v2, http://arxiv.org/pdf/2006.02143v2",econ.GN
31778,gn,"It is now widely recognised that components of the environment play the role
of economic assets, termed natural capital, that are a foundation of social and
economic development. National governments monitor the state and trends of
natural capital through a range of activities including natural capital
accounting, national ecosystem assessments, ecosystem service valuation, and
economic and environmental analyses. Indicators play an integral role in these
activities as they facilitate the reporting of complex natural capital
information. One factor that hinders the success of these activities and their
comparability across countries is the absence of a coherent framework of
indicators concerning natural capital (and its benefits) that can aid
decision-making. Here we present an integrated Natural Capital Indicator
Framework (NCIF) alongside example indicators, which provides an illustrative
structure for countries to select and organise indicators to assess their use
of and dependence on natural capital. The NCIF sits within a wider context of
indicators related to natural, human, social and manufactured capital, and
associated flows of benefits. The framework provides decision-makers with a
structured approach to selecting natural capital indicators with which to make
decisions about economic development that take into account national natural
capital and associated flows of benefits.",The Natural Capital Indicator Framework (NCIF): A framework of indicators for national natural capital reporting,2020-05-18 13:27:09,"Alison Fairbrass, Georgina Mace, Paul Ekins, Ben Milligan","http://dx.doi.org/10.1016/j.ecoser.2020.101198, http://arxiv.org/abs/2005.08568v1, http://arxiv.org/pdf/2005.08568v1",econ.GN
31779,gn,"We provide a survey of the Kolkata index of social inequality, focusing in
particular on income inequality. Based on the observation that inequality
functions (such as the Lorenz function), giving the measures of income or
wealth against that of the population, are generally nonlinear, we show that
the fixed point (like Kolkata index k) of such a nonlinear function (or
related, like the complementary Lorenz function) offers a better measure of
inequality than the average quantities (like Gini index). Indeed the Kolkata
index can be viewed as a generalized Hirsch index for a normalized inequality
function and gives the fraction k of the total wealth possessed by the rich
(1-k) fraction of the population. We analyze the structures of the inequality
indices for both continuous and discrete income distributions. We also compare
the Kolkata index to some other measures like the Gini coefficient and the
Pietra index. Lastly, we provide some empirical studies which illustrate the
differences between the Kolkata index and the Gini coefficient.",Inequality Measures: The Kolkata index in comparison with other measures,2020-05-15 19:24:27,"Suchismita Banerjee, Bikas K. Chakrabarti, Manipushpak Mitra, Suresh Mutuswami","http://dx.doi.org/10.3389/fphy.2020.562182, http://arxiv.org/abs/2005.08762v2, http://arxiv.org/pdf/2005.08762v2",econ.GN
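The Kolkata index described above is the fixed point at which the Lorenz curve satisfies L(k) = 1 - k, so that the richest (1 - k) fraction holds a fraction k of total income. The sketch below computes both the Kolkata index and the Gini coefficient from a synthetic income sample; it follows the definition quoted in the abstract rather than any specific code from the paper, and the lognormal sample is an assumption.

```python
# Kolkata index k (solving L(k) = 1 - k) and Gini coefficient from sample incomes.
import numpy as np

rng = np.random.default_rng(2)
income = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)     # synthetic incomes

income = np.sort(income)
cum_pop = np.arange(1, income.size + 1) / income.size        # population share
cum_inc = np.cumsum(income) / income.sum()                   # Lorenz curve L(p)

# Kolkata index: population share k where L(k) + k = 1.
k_idx = np.argmin(np.abs(cum_inc + cum_pop - 1.0))
kolkata = cum_pop[k_idx]

# Gini coefficient: 1 - 2 * area under the Lorenz curve.
gini = 1.0 - 2.0 * np.trapz(cum_inc, cum_pop)

print(f"Kolkata index k = {kolkata:.3f} "
      f"(richest {1 - kolkata:.1%} hold {kolkata:.1%} of income)")
print(f"Gini coefficient = {gini:.3f}")
```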
31780,gn,"This paper uses Bureau of Labor Statistics employment and wage data to study
the distributional impact of the COVID-19 crisis on wages in the United States
by mid-April. It answers whether wages of lower-wage workers decreased more
than others', and to what extent. We find that the COVID-19 outbreak
exacerbates existing inequalities. Workers at the bottom quintile in mid-March
were three times more likely to be laid off by mid-April compared to
higher-wage workers. Weekly wages of workers at the bottom quintile decreased
by 6% on average between mid-February and mid-March and by 26% between
mid-March and mid-April. The average decrease for higher quintiles was less
than 1% between mid-February and mid-March and about 10% between mid-March and
mid-April. We also find that workers aged 16-24 were hit much harder than older
workers. Hispanic workers were also hurt more than other racial groups. Their
wages decreased by 2-3 percentage points more than other workers' between
mid-March and mid-April.",The Distributional Short-Term Impact of the COVID-19 Crisis on Wages in the United States,2020-05-18 17:30:45,Yonatan Berman,"http://arxiv.org/abs/2005.08763v1, http://arxiv.org/pdf/2005.08763v1",econ.GN
31781,gn,"In the following study, we inquire into the financial inclusion from a demand
side perspective. Utilizing IHDS round-1 (2004-05) and round-2 (2011-12),
starting from a broad picture of demand side access to finance at the country
level, we venture into analysing the patterns at state level, and then lastly
at district level. Particularly at district level, we focus on agriculture
households in rural areas to identify if there is a shift in the demand side
financial access towards non-agriculture households in certain parts of the
country. In order to do this, we use District level 'Basic Statistical Returns
of Scheduled Commercial Banks' for the years 2004 and 2011, made available by
RBI, to first construct supply side financial inclusion indices, and then infer
about a relative shift in access to formal finance away from agriculture
households, using a logistic regression framework.",Patterns in demand side financial inclusion in India -- An inquiry using IHDS Panel Data,2020-05-18 06:25:37,Vinay Reddy Venumuddala,"http://arxiv.org/abs/2005.08961v1, http://arxiv.org/pdf/2005.08961v1",econ.GN
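The last step described above (inferring a relative shift in formal-credit access away from agricultural households with a logistic regression) could look roughly like the sketch below, where the outcome is whether a household has access to formal finance and the coefficient of interest is the interaction between the later survey round and an agricultural-household indicator. The variable names, the synthetic data, and the specification are assumptions for illustration only.

```python
# Sketch: logistic regression for a shift in formal-credit access (illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "round2": rng.integers(0, 2, n),        # 1 = IHDS round-2 (2011-12)
    "agri_hh": rng.integers(0, 2, n),       # 1 = agricultural household
    "fin_index": rng.normal(0, 1, n),       # district supply-side inclusion index
})
# Synthetic outcome: access improves over time, less so for agricultural households.
xb = -0.2 + 0.6 * df["round2"] + 0.3 * df["fin_index"] - 0.4 * df["round2"] * df["agri_hh"]
df["formal_access"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-xb))).astype(int)

model = smf.logit("formal_access ~ round2 * agri_hh + fin_index", data=df).fit(disp=0)
print(model.params)   # a negative round2:agri_hh term would indicate a shift away from agri households
```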
31782,gn,"Knowledge of consumers' willingness to pay (WTP) is a prerequisite to
profitable price-setting. To gauge consumers' WTP, practitioners often rely on
a direct single question approach in which consumers are asked to explicitly
state their WTP for a product. Despite its popularity among practitioners, this
approach has been found to suffer from hypothetical bias. In this paper, we
propose a rigorous method that improves the accuracy of the direct single
question approach. Specifically, we systematically assess the hypothetical
biases associated with the direct single question approach and explore ways to
de-bias it. Our results show that by using the de-biasing procedures we
propose, we can generate a de-biased direct single question approach that is
accurate enough to be useful for managerial decision-making. We validate this
approach with two studies in this paper.",A De-biased Direct Question Approach to Measuring Consumers' Willingness to Pay,2020-05-22 13:45:15,"Reto Hofstetter, Klaus M. Miller, Harley Krohmer, Z. John Zhang","http://arxiv.org/abs/2005.11318v1, http://arxiv.org/pdf/2005.11318v1",econ.GN
31783,gn,"We study the stochastic dynamics of natural resources under the threat of
ecological regime shifts. We establish a Pareto optimal framework of regime
shift detection under uncertainty that minimizes the delay with which economic
agents become aware of the shift. We integrate ecosystem surveillance in the
formation of optimal resource extraction policies. We fully solve the case of a
profit-maximizing monopolist and provide the conditions that determine whether
anticipating detection of an adverse regime shift can lead to an aggressive or
a precautionary extraction policy, depending on the interaction between market
and environmental parameters. We compare the monopolist's policy under
detection to a social planner's. We apply our framework to the case of the
Cantareira water reservoir in São Paulo, Brazil, and study the events that
led to its depletion and the consequent water supply crisis.",Quickest Detection of Ecological Regimes for Dynamic Resource Management,2020-05-23 12:22:15,"Neha Deopa, Daniele Rinaldo","http://arxiv.org/abs/2005.11500v7, http://arxiv.org/pdf/2005.11500v7",econ.GN
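Quickest detection of a regime shift, as described above, is commonly operationalised with a CUSUM-type rule that accumulates log-likelihood ratios and raises an alarm when a threshold is crossed. The sketch below is a generic CUSUM for a downward shift in the mean of Gaussian observations; it is meant only to illustrate the detection idea, not the paper's specific framework, and all parameters are hypothetical.

```python
# Generic CUSUM sketch for detecting a downward shift in the mean of a resource signal.
import numpy as np

rng = np.random.default_rng(4)
pre = rng.normal(loc=1.0, scale=0.3, size=100)     # pre-shift regime
post = rng.normal(loc=0.4, scale=0.3, size=100)    # post-shift regime (e.g. lower inflows)
signal = np.concatenate([pre, post])

mu0, mu1, sigma = 1.0, 0.4, 0.3                    # assumed pre/post means
threshold = 5.0                                    # alarm threshold (trades delay vs. false alarms)

stat, alarm_at = 0.0, None
for t, x in enumerate(signal):
    # Log-likelihood ratio of post-shift vs. pre-shift Gaussian densities.
    llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    stat = max(0.0, stat + llr)                    # CUSUM recursion
    if stat > threshold:
        alarm_at = t
        break

print(f"true change point: 100, alarm raised at t = {alarm_at}")
```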
31785,gn,"Policymakers in developing countries increasingly see science, technology,
and innovation (STI) as an avenue for meeting sustainable development goals
(SDGs), with STI-based startups as a key part of these efforts. Market failures
call for government interventions in supporting STI for SDGs and
publicly-funded incubators can potentially fulfil this role. Using the specific
case of India, we examine how publicly-funded incubators could contribute to
strengthening STI-based entrepreneurship. India's STI policy and its links to
societal goals span multiple decades -- but since 2015 these goals became
formally organized around the SDGs. We examine why STI-based incubators were
created under different policy priorities before 2015, the role of public
agencies in implementing these policies, and how some incubators were
particularly effective in addressing the societal challenges that can now be
mapped to SDGs. We find that effective incubation for supporting STI-based
entrepreneurship to meet societal goals extended beyond traditional incubation
activities. For STI-based incubators to be effective, policymakers must
strengthen the 'incubation system'. This involves incorporating targeted SDGs
in specific incubator goals, promoting coordination between existing incubator
programs, developing a performance monitoring system, and finally, extending
extensive capacity building at multiple levels including for incubator managers
and for broader STI in the country.","Strengthening science, technology, and innovation-based incubators to help achieve Sustainable Development Goals: Lessons from India",2020-05-27 06:17:52,"Kavita Surana, Anuraag Singh, Ambuj D Sagar","http://dx.doi.org/10.1016/j.techfore.2020.120057, http://arxiv.org/abs/2005.13138v2, http://arxiv.org/pdf/2005.13138v2",econ.GN
31786,gn,"We extend the model of Parenti (2018) on large and small firms by introducing
cost heterogeneity among small firms. We propose a novel necessary and
sufficient condition for the existence of such a mixed market structure.
Furthermore, in contrast to Parenti (2018), we show that in the presence of
cost heterogeneity among small firms, trade liberalization may raise or reduce
the mass of small firms in operation.",Competition among Large and Heterogeneous Small Firms,2020-05-29 11:16:19,"Lijun Pan, Yongjin Wang","http://arxiv.org/abs/2005.14442v1, http://arxiv.org/pdf/2005.14442v1",econ.GN
31787,gn,"The resilience of the food supply chain is a matter of critical importance,
both for national security and broader societal well-being. COVID-19 has
presented a test to the current system, as well as a means by which to explore
whether the UK's food supply chain will be resilient to future disruptions. In
the face of a growing need to ensure that food supply is more environmentally
sustainable and socially just, COVID-19 also represents an opportunity to
consider the ability of the system to innovate, and its capacity for change.
The purpose of this case-based study is to explore the response and resilience
of the UK fruit and vegetable food supply chain to COVID-19, and to assess this
empirical evidence in the context of a resilience framework based on the
adaptive cycle. To achieve this we reviewed secondary data associated with
changes to retail demand, conducted interviews with 23 organisations associated
with supply to this market, and conducted four video workshops with 80
organisations representing half of the UK fresh produce community. The results
highlight that, despite significant disruption, the retail dominated fresh food
supply chain has demonstrated a high degree of resilience. In the context of
the adaptive cycle, the system has shown signs of being stuck in a rigidity
trap, as yet unable to exploit more radical innovations that may also assist in
addressing other drivers for change. This has highlighted the significant role
that innovation and R&D communities will need to play in enabling the supply
chain to imagine and implement alternative future states post COVID.",The impact of COVID-19 on the UK fresh food supply chain,2020-05-30 17:11:14,"Rebecca Mitchell, Roger Maull, Simon Pearson, Steve Brewer, Martin Collison","http://arxiv.org/abs/2006.00279v1, http://arxiv.org/pdf/2006.00279v1",econ.GN
31788,gn,"Numerous studies have considered the important role of cognition in
estimating the returns to schooling. How cognitive abilities affect schooling
may have important policy implications, especially in developing countries
during periods of increasing educational attainment. Using two longitudinal
labor surveys that collect direct proxy measures of cognitive skills, we study
the importance of specific cognitive domains for the returns to schooling in
two samples. We instrument for schooling levels and we find that each
additional year of schooling leads to an increase in earnings by approximately
18-20 percent. The estimated effect sizes-based on the two-stage least squares
estimates-are above the corresponding ordinary least squares estimates.
Furthermore, we estimate and demonstrate the importance of specific cognitive
domains in the classical Mincer equation. We find that executive functioning
skills (i.e., memory and orientation) are important drivers of earnings in the
rural sample, whereas higher-order cognitive skills (i.e., numeracy) are more
important for determining earnings in the urban sample. Although numeracy is
tested in both samples, it is only a statistically significant predictor of
earnings in the urban sample.",The Importance of Cognitive Domains and the Returns to Schooling in South Africa: Evidence from Two Labor Surveys,2020-06-01 09:31:59,"Plamen Nikolov, Nusrat Jimi","http://dx.doi.org/10.1016/j.labeco.2020.101849, http://arxiv.org/abs/2006.00739v2, http://arxiv.org/pdf/2006.00739v2",econ.GN
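The abstract above instruments for schooling in a Mincer-type earnings equation and compares the two-stage least squares estimate with OLS. The snippet below sketches the 2SLS mechanics by hand on synthetic data; the instrument (`z`), the coefficients, and the data-generating process are hypothetical and chosen only to make the two stages explicit.

```python
# Hand-rolled 2SLS sketch for a Mincer-type return to schooling (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
n = 2000
ability = rng.normal(size=n)                     # unobserved confounder
z = rng.integers(0, 2, n)                        # hypothetical binary instrument
school = 8 + 2 * z + 0.8 * ability + rng.normal(size=n)
log_wage = 0.5 + 0.19 * school + 0.5 * ability + rng.normal(size=n)

def ols(y, X):
    # OLS with an intercept; returns [intercept, slope(s)].
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project schooling on the instrument; Stage 2: regress wages on the fit.
school_hat = np.column_stack([np.ones(n), z]) @ ols(school, z)
beta_ols = ols(log_wage, school)[1]
beta_2sls = ols(log_wage, school_hat)[1]
print(f"OLS return: {beta_ols:.3f}, 2SLS return: {beta_2sls:.3f} (true 0.19)")
```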
31789,gn,"Ageing populations in developing countries have spurred the introduction of
public pension programs to preserve the standard of living for the elderly. The
often-overlooked mechanism of intergenerational transfers, however, can dampen
these intended policy effects as adult children who make income contributions
to their parents could adjust their behavior to changes in their parents'
income. Exploiting a unique policy intervention in China, we examine using a
difference-in-difference-in-differences (DDD) approach how a new pension
program impacts inter vivos transfers. We show that pension benefits lower the
propensity of receiving transfers from adult children in the context of a large
middle-income country and we also estimate a small crowd-out effect. Taken
together, these estimates fit the pattern of previous research in high-income
countries, although our estimates of the crowd-out effect are significantly
smaller than previous studies in both high-income and middle-income countries.",Do Private Household Transfers to the Elderly Respond to Public Pension Benefits? Evidence from Rural China,2020-06-01 21:20:05,"Plamen Nikolov, Alan Adelman","http://dx.doi.org/10.1016/j.jeoa.2019.100204, http://arxiv.org/abs/2006.01185v2, http://arxiv.org/pdf/2006.01185v2",econ.GN
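The triple-difference design mentioned above can be written compactly as a regression with a three-way interaction. The specification below is a generic DDD template with hypothetical variable names rather than the paper's exact equation: Y is transfer receipt for household i in group g at time t, Post indicates the post-reform period, Eligible the pension-eligible group, and Treat the treated region.

```latex
% Generic triple-difference (DDD) specification; variable names are illustrative.
\begin{equation*}
Y_{igt} = \alpha
        + \beta_1\,\mathrm{Post}_t + \beta_2\,\mathrm{Eligible}_g + \beta_3\,\mathrm{Treat}_i
        + \beta_4\,(\mathrm{Post}_t \times \mathrm{Eligible}_g)
        + \beta_5\,(\mathrm{Post}_t \times \mathrm{Treat}_i)
        + \beta_6\,(\mathrm{Eligible}_g \times \mathrm{Treat}_i)
        + \delta\,(\mathrm{Post}_t \times \mathrm{Eligible}_g \times \mathrm{Treat}_i)
        + \varepsilon_{igt},
\end{equation*}
% where $\delta$ is the triple-difference estimate of the pension's effect on transfers.
```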
31803,gn,"Structural models with no solution are incoherent, and those with multiple
solutions are incomplete. We show that models with occasionally binding
constraints are not generically coherent. Coherency requires restrictions on
the parameters or on the support of the distribution of the shocks. In presence
of multiple shocks, the support restrictions cannot be independent from each
other, so the assumption of orthogonality of structural shocks is incompatible
with coherency. Models whose coherency is based on support restrictions are
generically incomplete, admitting a very large number of minimum state variable
solutions.",The unbearable lightness of equilibria in a low interest rate environment,2020-06-20 23:59:30,"Guido Ascari, Sophocles Mavroeidis","http://arxiv.org/abs/2006.12966v4, http://arxiv.org/pdf/2006.12966v4",econ.GN
31791,gn,"This paper examines the impact of the New Rural Pension Scheme (NRPS) in
China. Exploiting the staggered implementation of an NRPS policy expansion that
began in 2009, we use a difference-in-difference approach to study the effects
of the introduction of pension benefits on the health status, health behaviors,
and healthcare utilization of rural Chinese adults age 60 and above. The
results point to three main conclusions. First, in addition to improvements in
self-reported health, older adults with access to the pension program
experienced significant improvements in several important measures of health,
including mobility, self-care, usual activities, and vision. Second, regarding
the functional domains of mobility and self-care, we found that the females in
the study group led in improvements over their male counterparts. Third, in our
search for the mechanisms that drive positive retirement program results, we
find evidence that changes in individual health behaviors, such as a reduction
in drinking and smoking, and improved sleep habits, play an important role. Our
findings point to the potential benefits of retirement programs resulting from
social spillover effects. In addition, these programs may lessen the morbidity
burden among the retired population.",Short-Run Health Consequences of Retirement and Pension Benefits: Evidence from China,2020-06-02 20:25:16,"Plamen Nikolov, Alan Adelman","http://dx.doi.org/10.1515/fhep-2017-0031, http://arxiv.org/abs/2006.02900v1, http://arxiv.org/pdf/2006.02900v1",econ.GN
31792,gn,"Prior literature has argued that flood insurance maps may not capture the
extent of flood risk. This paper performs a granular assessment of coastal
flood risk in the mortgage market by using physical simulations of hurricane
storm surge heights instead of using FEMA's flood insurance maps. Matching
neighborhood-level predicted storm surge heights with mortgage files suggests
that coastal flood risk may be large: originations and securitizations in storm
surge areas have been rising sharply since 2012, while they remain stable when
using flood insurance maps. Every year, more than 50 billion dollars of
originations occur in storm surge areas outside of insurance floodplains. The
share of agency mortgages increases in storm surge areas, yet remains stable in
the flood insurance 100-year floodplain. Mortgages in storm surge areas are
more likely to be complex: non-fully amortizing features such as interest-only
or adjustable rates. Households may also be more vulnerable in storm surge
areas: median household income is lower, the share of African Americans and
Hispanics is substantially higher, the share of individuals with health
coverage is lower. Price-to-rent ratios are declining in storm surge areas
while they are increasing in flood insurance areas. This paper suggests that
uncovering future financial flood risk requires scientific models that are
independent of the flood insurance mapping process.",Coastal Flood Risk in the Mortgage Market: Storm Surge Models' Predictions vs. Flood Insurance Maps,2020-06-04 19:08:08,Amine Ouazad,"http://arxiv.org/abs/2006.02977v2, http://arxiv.org/pdf/2006.02977v2",econ.GN
31793,gn,"Did sovereign default risk affect macroeconomic activity through firms'
access to credit during the European sovereign debt crisis? We investigate this
question by a estimating a structural panel vector autoregressive model for
Italy, Spain, Portugal, and Ireland, where the sovereign risk shock is
identified using sign restrictions. The results suggest that decline in the
creditworthiness of the sovereign contributed to a fall in private lending and
economic activity in several euro-area countries by reducing the value of
banks' assets and crowding out private lending.",Sovereign Default Risk and Credit Supply: Evidence from the Euro Area,2020-06-05 13:55:45,Olli Palmén,"http://dx.doi.org/10.1016/j.jimonfin.2020.102257, http://arxiv.org/abs/2006.03592v1, http://arxiv.org/pdf/2006.03592v1",econ.GN
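Sign-restriction identification of the sovereign-risk shock, as mentioned above, is typically implemented by drawing random orthogonal rotations of a factorised reduced-form covariance and keeping only the draws whose impulse responses carry the required signs. The sketch below shows the core accept/reject step for a toy two-variable system; the restriction pattern, the covariance matrix, and the variable interpretation are all hypothetical.

```python
# Core accept/reject step of sign-restriction identification (toy 2-variable system).
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical reduced-form residual covariance (e.g. sovereign spread, private lending).
sigma_u = np.array([[1.0, -0.3],
                    [-0.3, 0.8]])
P = np.linalg.cholesky(sigma_u)          # an initial factorisation of sigma_u

accepted = []
for _ in range(2000):
    # Draw a random orthogonal matrix Q via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(2, 2)))
    q = q @ np.diag(np.sign(np.diag(r)))  # normalise so the rotation is unique
    impact = P @ q                        # candidate impact matrix

    # Hypothetical restriction: an adverse sovereign-risk shock (column 0)
    # raises the spread (row 0) and lowers private lending (row 1) on impact.
    col = impact[:, 0] if impact[0, 0] > 0 else -impact[:, 0]
    if col[0] > 0 and col[1] < 0:
        accepted.append(col)

print(f"accepted {len(accepted)} of 2000 draws")
print("median impact responses:", np.median(accepted, axis=0))
```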
31794,gn,"Improving productivity among farm microenterprises is important, especially
in low-income countries where market imperfections are pervasive and resources
are scarce. Relaxing credit constraints can increase the productivity of
farmers. Using a field experiment involving microenterprises in Bangladesh, we
estimate the impact of access to credit on the overall productivity of rice
farmers, and disentangle the total effect into technological change (frontier
shift) and technical efficiency changes. We find that relative to the baseline
rice output per decimal, access to credit results in, on average, approximately
a 14 percent increase in yield, holding all other inputs constant. After
decomposing the total effect into the frontier shift and efficiency
improvement, we find that, on average, around 11 percent of the increase in
output comes from changes in technology, or frontier shift, while the remaining
3 percent is attributed to improvements in technical efficiency. The efficiency
gain is higher for modern hybrid rice varieties, and almost zero for
traditional rice varieties. Within the treatment group, the effect is greater
among pure tenant and mixed-tenant farm households compared with farmers that
only cultivate their own land.",The Effects of Access to Credit on Productivity Among Microenterprises: Separating Technological Changes from Changes in Technical Efficiency,2020-06-05 22:45:46,"Nusrat Abedin Jimi, Plamen Nikolov, Mohammad Abdul Malek, Subal Kumbhakar","http://dx.doi.org/10.1007/s11123-019-00555-8, http://arxiv.org/abs/2006.03650v1, http://arxiv.org/pdf/2006.03650v1",econ.GN
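The decomposition reported above (splitting the total output effect into a frontier shift and a technical-efficiency change) follows the usual stochastic-frontier logic, which can be written as below. The notation is generic rather than copied from the paper: f_t is the period-t production frontier evaluated at a fixed input bundle x, and TE_t is the farm's technical efficiency.

```latex
% Generic decomposition of output growth into frontier shift and efficiency change,
% holding inputs x constant, with y_t = f_t(x)\,TE_t.
\begin{equation*}
\ln y_{t+1} - \ln y_{t}
  = \underbrace{\ln f_{t+1}(x) - \ln f_{t}(x)}_{\text{frontier shift (technological change)}}
  + \underbrace{\ln TE_{t+1} - \ln TE_{t}}_{\text{technical-efficiency change}}.
\end{equation*}
```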
31795,gn,"Evidence on educational returns and the factors that determine the demand for
schooling in developing countries is extremely scarce. Building on previous
studies that show individuals underestimating the returns to schooling, we use
two surveys from Tanzania to estimate both the actual and perceived schooling
returns and subsequently examine what factors drive individual misperceptions
regarding actual returns. Using ordinary least squares and instrumental
variable methods, we find that each additional year of schooling in Tanzania
increases earnings, on average, by 9 to 11 percent. We find that on average
individuals underestimate returns to schooling by 74 to 79 percent and three
factors are associated with these misperceptions: income, asset poverty and
educational attainment. Shedding light on what factors relate to individual
beliefs about educational returns can inform policy on how to structure
effective interventions in order to correct individual misperceptions.",What Factors Drive Individual Misperceptions of the Returns to Schooling in Tanzania? Some Lessons for Education Policy,2020-06-06 01:38:06,"Plamen Nikolov, Nusrat Jimi","http://dx.doi.org/10.1080/00036846.2018.1466991, http://arxiv.org/abs/2006.03723v1, http://arxiv.org/pdf/2006.03723v1",econ.GN
31796,gn,"Against the background of renewed interest in vertical support policies
targeting specific industries or technologies, we investigate the effects of
vertical vs. horizontal policies in a combinatorial model of economic
development. In the framework we propose, an economy develops by acquiring new
capabilities allowing for the production of an ever greater variety of products
with an increasing complexity. Innovation policy can aim to expand the number
of capabilities (vertical policy) or the ability to combine capabilities
(horizontal policy). The model shows that for low-income countries, the two
policies are complementary. For high-income countries that are specialised in
the most complex products, focusing on horizontal policy only yields the
highest returns. We reflect on the model results in the light of the
contemporary debate on vertical policy.",Vertical vs. Horizontal Policy in a Capabilities Model of Economic Development,2020-06-08 17:24:00,"Alje van Dam, Koen Frenken","http://arxiv.org/abs/2006.04624v1, http://arxiv.org/pdf/2006.04624v1",econ.GN
31797,gn,"This paper proposes a public-private insurance scheme for earthquakes and
floods in Italy in which property-owners, the insurer and the government
co-operate in risk financing. Our model departs from the existing literature by
describing a public-private insurance intended to relieve the financial burden
that natural events place on governments, while at the same time assisting
individuals and protecting the insurance business. Hence, the business is
aiming at maximizing social welfare rather than profits. Given the limited
amount of data available on natural risks, expected losses per individual have
been estimated through risk-modeling. In order to evaluate the insurer's loss
profile, spatial correlation among insured assets has been evaluated by means
of the Hoeffding bound for r-dependent random variables. Though earthquakes
generate expected losses that are almost six times greater than floods, we
found that the amount of public funds needed to manage the two perils is almost
the same. We argue that this result is determined by a combination of the risk
aversion of individuals and the shape of the loss distribution. Lastly, since
earthquakes and floods are uncorrelated, we tested whether jointly managing the
two perils can counteract the negative impact of spatial correlation. Some
benefit from risk diversification emerged, though the probability of the
government having to inject further capital might be considerable. Our findings
suggest that, when not supported by the government, private insurance might
either financially over-expose the insurer or set premiums so high that
individuals would fail to purchase policies.",A Public-Private Insurance Model for Natural Risk Management: an Application to Seismic and Flood Risks on Residential Buildings in Italy,2020-06-10 17:06:25,"Selene Perazzini, Giorgio Stefano Gnecco, Fabio Pammolli","http://arxiv.org/abs/2006.05840v1, http://arxiv.org/pdf/2006.05840v1",econ.GN
31798,gn,"Natural hazards can considerably impact the overall society of a country. As
some degree of public sector involvement is always necessary to deal with the
consequences of natural disasters, central governments have increasingly
invested in proactive risk management planning. In order to empower and involve
the whole society, some countries have established public-private partnerships,
mainly with the insurance industry, with satisfactory outcomes. Although they
have proven necessary and most often effective, the public-private initiatives
have often incurred high debts or have failed to achieve the desired risk
reduction objectives. We review the role of these partnerships in the
management of natural risks, with particular attention to the insurance sector.
Among other country-specific issues, poor risk knowledge and weak governance
have widely challenged the initiatives during the recent years, while the
future is threatened by the uncertainty of climate change and unsustainable
development. In order to strengthen the country's resilience, a greater
involvement of all segments of the community, especially the weakest layers, is
needed and the management of natural risks should be included in a sustainable
development plan.",Public-Private Partnership in the Management of Natural Disasters: A Review,2020-06-10 17:16:07,Selene Perazzini,"http://arxiv.org/abs/2006.05845v1, http://arxiv.org/pdf/2006.05845v1",econ.GN
31799,gn,"Behavioral responses to pandemics are less shaped by actual mortality or
hospitalization risks than they are by risk attitudes. We explore human
mobility patterns as a measure of behavioral responses during the COVID-19
pandemic. Our results indicate that risk-taking attitude is a critical factor
in predicting reductions in human mobility and increased social confinement
around the globe. We find that the sharp decline in movement after the WHO
(World Health Organization) declared COVID-19 to be a pandemic can be
attributed to risk attitudes. Our results suggest that regions with risk-averse
attitudes are more likely to adjust their behavioral activity in response to
the declaration of a pandemic even before most official government lockdowns.
Further understanding of the basis of responses to epidemics, e.g.,
precautionary behavior, will help improve the containment of the spread of the
virus.",Risk Attitudes and Human Mobility during the COVID-19 Pandemic,2020-06-11 00:47:34,"Ho Fai Chan, Ahmed Skali, David Savage, David Stadelmann, Benno Torgler","http://dx.doi.org/10.1038/s41598-020-76763-2, http://arxiv.org/abs/2006.06078v1, http://arxiv.org/pdf/2006.06078v1",econ.GN
31800,gn,"A negative interest rate policy is often accompanied by tiered remuneration,
which allows for exemption from negative rates. This study proposes a basic
model of interest rates formed in the interbank market with a tiering system.
The results predicted by the model largely mirror actual market developments in
late 2019, when the European Central Bank introduced, and the Switzerland
National Bank modified, the tiering system.",A simple model of interbank trading with tiered remuneration,2020-06-19 06:43:39,Toshifumi Nakamura,"http://arxiv.org/abs/2006.10946v1, http://arxiv.org/pdf/2006.10946v1",econ.GN
31801,gn,"This paper analyzes the evolution of Keynesianism making use of concepts
offered by Imre Lakatos. The Keynesian ""hard core"" lies in its views regarding
the instability of the market economy, its ""protective belt"" in the policy
strategy for macroeconomic stabilization using fiscal policy and monetary
policy. Keynesianism developed as a policy program to counter classical
liberalism, which attributes priority to the autonomy of the market economy and
tries to limit the role of government. In general, the core of every policy
program consists in an unfalsifiable worldview and a value judgment that remain
unchanged. On the other hand, a policy strategy with a protective belt
inevitably evolves owing to changes in reality and advances in scientific
knowledge. This is why the Keynesian policy strategy has shifted from being
fiscal-led to one that is monetary-led because of the influence of monetarism;
further, the Great Recession has even led to their integration.",Shifting Policy Strategy in Keynesianism,2020-06-21 12:47:16,Asahi Noguchi,"http://arxiv.org/abs/2006.11749v1, http://arxiv.org/pdf/2006.11749v1",econ.GN
31802,gn,"The coronavirus disease (COVID-19) has caused one of the most serious social
and economic losses to countries around the world since the Spanish influenza
pandemic of 1918 (during World War I). It has resulted in enormous economic as
well as social costs, such as increased deaths from the spread of infection in
a region. This is because public regulations imposed by national and local
governments to deter the spread of infection inevitably involves a deliberate
suppression of the level of economic activity. Given this trade-off between
economic activity and epidemic prevention, governments should execute public
interventions to minimize social and economic losses from the pandemic. A major
problem regarding the resultant economic losses is that it unequally impacts
certain strata of the society. This raises an important question on how such
economic losses should be shared equally across the society. At the same time,
there is some antipathy towards economic compensation by means of public debt,
which is likely to increase economic burden in the future. However, as Paul
Samuelson once argued, much of the burden, whether due to public debt or
otherwise, can only be borne by the present generation, and not by future
generations.",The Economic Costs of Containing a Pandemic,2020-06-21 12:51:04,Asahi Noguchi,"http://arxiv.org/abs/2006.11750v1, http://arxiv.org/pdf/2006.11750v1",econ.GN
31804,gn,"We study the welfare effects of school district consolidation, i.e. the
integration of disjoint school districts into a centralised clearinghouse. We
show theoretically that, in the worst-case scenario, district consolidation may
unambiguously reduce students' welfare, even if the student-optimal stable
matching is consistently chosen. However, on average all students experience
expected welfare gains from district consolidation, particularly those who
belong to smaller and over-demanded districts. Using data from the Hungarian
secondary school assignment mechanism, we compute the actual welfare gains from
district consolidation in Budapest and compare these to our theoretical
predictions. We empirically document substantial welfare gains from district
consolidation for students, equivalent to attending a school five kilometres
closer to the students' home addresses. As an important building block of our
empirical strategy, we describe a method to consistently estimate students'
preferences over schools and vice versa that does not fully assume that
students report their preferences truthfully in the student-proposing deferred
acceptance algorithm.",What Happens when Separate and Unequal School Districts Merge?,2020-06-23 00:50:01,"Robert Aue, Thilo Klein, Josue Ortega","http://arxiv.org/abs/2006.13209v1, http://arxiv.org/pdf/2006.13209v1",econ.GN
31805,gn,"We suggest the use of indicators to analyze entrepreneurial ecosystems, in a
way similar to ecological indicators: simple, measurable, and actionable
characteristics, used to convey relevant information to stakeholders and
policymakers. We define 3 possible such indicators: Fundraising Speed,
Acceleration and nth-year speed, all related to the ability of startups to
develop more or less rapidly in a given ecosystem. Results based on these 3
indicators for 6 prominent ecosystems (Berlin, Israel, London, New York, Paris,
Silicon Valley) exhibit markedly different situations and trajectories.
Altogether, they help confirm that such indicators can shed new
and interesting light on entrepreneurial ecosystems, to the benefit of
potentially more grounded policy decisions, all the more so in otherwise
blurred and somewhat cacophonous environments.",Towards Entrepreneurial Ecosystem Indicators : Speed and Acceleration,2020-06-25 14:26:01,"Théophile Carniel, Jean-Michel Dalle","http://arxiv.org/abs/2006.14313v1, http://arxiv.org/pdf/2006.14313v1",econ.GN
31806,gn,"Inertia and context-dependent choice effects are well-studied classes of
behavioural phenomena. While much is known about these effects in isolation,
little is known about whether one of them ""dominates"" the other when both can
potentially be present. Knowledge of any such dominance is relevant for
effective choice architecture and descriptive modelling. We initiate this
empirical investigation with a between-subjects lab experiment in which each
subject made a single decision over two or three money lotteries. Our
experiment was designed to test for dominance between *status quo bias* and the
*decoy effect*. We find strong evidence for status quo bias and no evidence for
the decoy effect. We also find that status quo bias can be powerful enough so
that, at the aggregate level, a fraction of subjects switch from being
risk-averse to being risk-seeking. Survey evidence suggests that this is due to
subjects focusing on the maximum possible amount when the risky lottery is the
default and on the highest probability of winning the biggest possible reward
when there is no default. The observed reversal in risk attitudes is
explainable by a large class of Koszegi-Rabin (2006) reference-dependent
preferences.",Status Quo Bias and the Decoy Effect: A Comparative Analysis in Choice under Risk,2020-06-26 12:02:11,"Miguel Costa-Gomes, Georgios Gerasimou","http://arxiv.org/abs/2006.14868v3, http://arxiv.org/pdf/2006.14868v3",econ.GN
31807,gn,"I study labor markets in which firms hire via referrals. I develop an
employment model showing that--despite initial equality in ability, employment,
wages, and network structure--minorities receive fewer jobs through referral
and lower expected wages, simply because their social group is smaller. This
disparity, termed ""social network discrimination,"" falls outside the dominant
economics discrimination models--taste-based and statistical. Social network
discrimination can be mitigated by minorities having more social ties or a
""stronger-knit"" network. I calibrate the model using a
nationally-representative U.S. sample and estimate the lower-bound welfare gap
caused by social network discrimination at over four percent, disadvantaging
black workers.",Social Networks as a Mechanism for Discrimination,2020-06-26 15:20:23,Chika O. Okafor,"http://arxiv.org/abs/2006.15988v2, http://arxiv.org/pdf/2006.15988v2",econ.GN
31808,gn,"Since 2018 UK firms with at least 250 employees have been mandated to
publicly disclose gender equality indicators. Exploiting variations in this
mandate across firm size and time we show that pay transparency closes 18
percent of the gender pay gap by reducing men's wage growth. The public
availability of the equality indicators seems to influence employers' response
as worse performing firms and industries more exposed to public scrutiny reduce
their gender pay gap the most. Employers are also 9 percent more likely to post
wages in job vacancies, potentially in an effort to improve gender equality at
entry level.",Pay Transparency and Gender Equality,2020-06-25 14:21:57,"Emma Duchini, Stefania Simion, Arthur Turrell, Jack Blundell","http://arxiv.org/abs/2006.16099v3, http://arxiv.org/pdf/2006.16099v3",econ.GN
31809,gn,"Immigration legal services providers (ISPs) are a principal source of support
for low-income immigrants seeking immigration benefits. Yet there is scant
quantitative evidence on the prevalence and geographic distribution of ISPs in
the United States. To fill this gap, we construct a comprehensive, nationwide
database of 2,138 geocoded ISP offices that offer low- or no-cost legal
services to low-income immigrants. We use spatial optimization methods to
analyze the geographic network of ISPs and measure ISPs' proximity to the
low-income immigrant population. Because both ISPs and immigrants are highly
concentrated in major urban areas, most low-income immigrants live close to an
ISP. However, we also find a sizable fraction of low-income immigrants in
underserved areas, which are primarily in midsize cities in the South. This
reflects both a general skew in non-governmental organization service provision
and the more recent arrival of immigrants in these largely Southern
destinations. Finally, our optimization analysis suggests significant gains
from placing new ISPs in underserved areas to maximize the number of low-income
immigrants who live near an ISP. Overall, our results provide vital information
to immigrants, funders, and policymakers about the current state of the ISP
network and opportunities to improve it.",Identifying Opportunities to Improve the Network of Immigration Legal Services Providers,2020-08-05 19:57:36,"Vasil Yasenov, David Hausman, Michael Hotard, Duncan Lawrence, Alexandra Siegel, Jessica S. Wolff, David D. Laitin, Jens Hainmueller","http://arxiv.org/abs/2008.02230v1, http://arxiv.org/pdf/2008.02230v1",econ.GN
31810,gn,"The spread of new coronavirus (COVID-19) infections continues to increase.
The practice of social distance attracts attention as a measure to prevent the
spread of infection, but it is difficult for some occupations. Therefore, in
previous studies, the scale of factors that determine social distance has been
developed. However, it was not clear how to select the items among them, and it
seemed to be somewhat arbitrary. In response to this trend, this paper
extracted eight scales by performing exploratory factor analysis based on
certain rules while eliminating arbitrariness as much as possible. They were
Adverse Conditions, Leadership, Information Processing, Response to Aggression,
Mechanical Movement, Autonomy, Communication with the Outside, and Horizontal
Teamwork. Of these, Adverse Conditions, Response to Aggression, and Horizontal
Teamwork had a positive correlation with Physical Proximity, and Information
Processing, Mechanical Movement, Autonomy, and Communication with the Outside
had a negative correlation with Physical Proximity. Furthermore, as a result of
multiple regression analysis, it was shown that Response to Aggression, not the
mere teamwork assumed in previous studies, had the greatest influence on
Physical Proximity.",Aggression in the workplace makes social distance difficult,2020-08-10 16:48:42,Keisuke Kokubun,"http://dx.doi.org/10.3390/ijerph18105074, http://arxiv.org/abs/2008.04131v2, http://arxiv.org/pdf/2008.04131v2",econ.GN
31811,gn,"Technological change is essential to balance economic growth and
environmental sustainability. This study documents energy-saving technological
change to understand the trends and differences therein in OECD countries. We
estimate sector-level production functions with factor-augmenting technologies
using cross-country and cross-industry panel data and shift-share instruments,
thereby measuring energy-saving technological change for each country and
sector. Our results show how the levels and growth rates of energy-saving
technology vary across countries, sectors, and time. In addition, we evaluate
the extent to which factor-augmenting technologies contribute to economic
growth and how this contribution differs across countries and sectors.",Measuring Energy-saving Technological Change: International Trends and Differences,2020-08-11 14:47:17,"Emiko Inoue, Hiroya Taniguchi, Ken Yamada","http://dx.doi.org/10.1016/j.jeem.2022.102709, http://arxiv.org/abs/2008.04639v4, http://arxiv.org/pdf/2008.04639v4",econ.GN
31812,gn,"The outbreak of COVID-19 in March 2020 led to a shutdown of economic
activities in Europe. This included the sports sector, since public gatherings
were prohibited. The German Bundesliga was among the first sports leagues to
restart without spectators. Several recent studies suggest that the
home advantage of teams was eroded for the remaining matches. Our paper
analyses the reaction of bookmakers to the disappearance of this home
advantage. We show that bookmakers had problems adjusting the betting odds to
the disappeared home advantage, opening opportunities for
profitable betting strategies.",Bookmakers' mispricing of the disappeared home advantage in the German Bundesliga after the COVID-19 break,2020-08-12 19:19:51,"Christian Deutscher, David Winkelmann, Marius Ötting","http://arxiv.org/abs/2008.05417v2, http://arxiv.org/pdf/2008.05417v2",econ.GN
31813,gn,"This paper provides the first estimates of the pass-through rate of the
ongoing temporary value-added tax (VAT) reduction, which is part of the German
fiscal response to COVID-19. Using a unique dataset containing the universe of
price changes at fuel stations in Germany and France in June and July 2020, we
employ a difference-in-differences strategy and find that pass-through is fast
and substantial but remains incomplete for all fuel types. Furthermore, we find
a high degree of heterogeneity between the pass-through estimates for different
fuel types. Our results are consistent with the interpretation that
pass-through rates are higher for customer groups who are more likely to exert
competitive pressure by shopping for lower prices. Our results have important
implications for the effectiveness of the stimulus measure and the
cost-effective design of unconventional fiscal policy.",Are temporary value-added tax reductions passed on to consumers? Evidence from Germany's stimulus,2020-08-19 18:42:15,"Felix Montag, Alina Sagimuldina, Monika Schnitzer","http://arxiv.org/abs/2008.08511v1, http://arxiv.org/pdf/2008.08511v1",econ.GN
31814,gn,"I have analyzed the practicality of the Evans Rule in the state based forward
guidance and possible ways to reform it. I examined the biases, measurement
errors, and other limitations extant in the unemployment and the inflation rate
in the Evans Rule. Using time series analysis, I calibrated the thresholds of
ECI wage growth and the employment to population ratio and investigated the
relationship between other labor utilization variables. Then I imposed various
shocks and constructed impulse response functions to contrast the paths of
eight macroeconomic variables under three scenarios. The results suggest that
under the wage growth rate scenario, the federal funds rate lifts off earlier
than under the current Evans Rule.",Reforming the State-Based Forward Guidance through Wage Growth Rate Threshold: Evidence from FRB/US Simulations,2020-08-20 02:39:07,Sudiksha Joshi,"http://arxiv.org/abs/2008.08705v1, http://arxiv.org/pdf/2008.08705v1",econ.GN
31815,gn,"We investigate a group choice problem of agents pursuing social status. We
assume heterogeneous agents want to signal their private information (ability,
income, patience, altruism, etc.) to others, facing tradeoff between ""outside
status"" (desire to be perceived in prestigious group from outside observers)
and ""inside status"" (desire to be perceived talented from peers inside their
group). To analyze the tradeoff, we develop two stage signaling model in which
each agent firstly chooses her group and secondly chooses her action in the
group she chose. They face binary choice problems both in group and action
choices. Using cutoff strategy, we construct an partially separating
equilibrium such that there are four populations: (i) choosing high group with
strong incentive for action in the group, (ii) high group with weak incentive,
(iii) low group with strong incentive, and (iv) low group with weak incentive.
By comparative statics results, we find some spillover effects from a certain
group to another, on how four populations change, when a policy is taken in
each group. These results have rich implications for group choice problems like
school, firm or residential preference.",Implications of the Tradeoff between Inside and Outside Social Status in Group Choice,2020-08-24 04:01:52,Takaaki Hamada,"http://arxiv.org/abs/2008.10145v1, http://arxiv.org/pdf/2008.10145v1",econ.GN
31816,gn,"We exploit variation in the timing of decriminalization of same-sex sexual
intercourse across U.S. states to estimate the impact of these law changes on
crime through difference-in-difference and event-study models. We provide the
first evidence that sodomy law repeals led to a decline in the number of
arrests for disorderly conduct, prostitution, and other sex offenses.
Furthermore, we show that these repeals led to a reduction in arrests for drug
and alcohol consumption.",The Impact of Sodomy Law Repeals on Crime,2020-08-25 13:15:38,"Riccardo Ciacci, Dario Sansone","http://arxiv.org/abs/2008.10926v1, http://arxiv.org/pdf/2008.10926v1",econ.GN
31817,gn,"Global historical series spanning the last two centuries recently became
available for primary energy consumption (PEC) and Gross Domestic Product
(GDP). Based on a thorough analysis of the data, we propose a new, simple
macroeconomic model whereby physical power is fueling economic power. From 1820
to 1920, the linearity between global PEC and world GDP justifies basic
equations where, originally, PEC incorporates unskilled human labor that
consumes and converts energy from food. In a consistent model, both physical
capital and human capital are fed by PEC and represent a form of stored energy.
In the following century, from 1920 to 2016, GDP grows quicker than PEC.
Periods of quasi-linearity of the two variables are separated by distinct
jumps, which can be interpreted as radical technology shifts. The GDP to PEC
ratio accumulates game-changing innovation, at an average growth rate
proportional to PEC. These results seed alternative strategies for modeling and
for political management of the climate crisis and the energy transition.",An energy-based macroeconomic model validated by global historical series since 1820,2020-08-25 15:45:58,"Herve Bercegol, Henri Benisty","http://dx.doi.org/10.1016/j.ecolecon.2021.107253, http://arxiv.org/abs/2008.10967v3, http://arxiv.org/pdf/2008.10967v3",econ.GN
31818,gn,"This paper presents the way in which can be determined the exchange rates
that simultaneously balance the trade balances of all countries that trade with
each other within a common market. A mathematical synthesis between the theory
of comparative advantages of Ricardo and Mill and the New Theory on
International Trade of Paul Krugman is also presented in this paper. This
mathematical synthesis shows that these theories are complementary. Also
presented in this paper is a proposal to organize a common market of the
American hemisphere. This economic alliance would allow to establish a
political alliance for the common defense of the entire American hemisphere.
The formula we developed in this paper to determine the exchange rates of the
countries solves the problem that Mexico, Canada, Europe, Japan and China
currently experience with the United States in relation to the deficits and
surpluses in the trade balance of the countries and their consequent impediment
so that stable growth of international trade can be achieved.",Formula to Determine the Countries Equilibrium Exchange Rate With the Dollar and Proposal for a Second Bretton Woods Conference,2020-08-24 13:40:42,Walter H. Bruckman,"http://arxiv.org/abs/2008.11275v1, http://arxiv.org/pdf/2008.11275v1",econ.GN
31819,gn,"Global ballast water management regulations aiming to decrease aquatic
species invasion require actions that can increase shipping costs. We employ an
integrated shipping cost and global economic modeling approach to investigate
the impacts of ballast water regulations on bilateral trade, national
economies, and shipping patterns. Given the potential need for more stringent
regulation at regional hotspots of species invasions, this work considers two
ballast water treatment policy scenarios: implementation of current
international regulations, and a possible stricter regional regulation that
targets ships traveling to and from the United States while other vessels
continue to face current standards. We find that ballast water management
compliance costs under both scenarios lead to modest negative impacts on
international trade and national economies overall. However, stricter
regulations applied to U.S. ports are expected to have large negative impacts
on bilateral trade of several specific commodities for a few countries. Trade
diversion causes decreased U.S. imports of some products, leading to minor
economic welfare losses.","Potential impacts of ballast water regulations on international trade, shipping patterns, and the global economy: An integrated transportation and economic modeling assessment",2020-08-26 04:45:34,"Zhaojun Wang, Duy Nong, Amanda M. Countryman, James J. Corbett, Travis Warziniack","http://dx.doi.org/10.1016/j.jenvman.2020.110892, http://arxiv.org/abs/2008.11334v1, http://arxiv.org/pdf/2008.11334v1",econ.GN
31820,gn,"This paper examines the spatial distribution of income in Ireland. Median
gross household disposable income data from the CSO, available at the Electoral
Division (ED) level, is used to explore the spatial variability in income.
Geary's C highlights the spatial dependence of income, highlighting that the
distribution of income is not random across space and is influenced by
location. Given the presence of spatial autocorrelation, utilising a global OLS
regression will lead to biased results. Geographically Weighted Regression
(GWR) is used to examine the spatial heterogeneity of income and the impact of
local demographic drivers on income. GWR results show the demographic drivers
have varying levels of influence on income across locations. Lone parenthood has a
stronger negative impact in the Cork commuter belt than it does in the Dublin
commuter belt. The relationship between household income and the demographic
context of the area is a complicated one. This paper attempts to examine these
relationships acknowledging the impact of space.",A Spatial Analysis of Disposable Income in Ireland: A GWR Approach,2020-08-26 14:31:46,"Paul Kilgarriff, Martin Charlton","http://arxiv.org/abs/2008.11720v1, http://arxiv.org/pdf/2008.11720v1",econ.GN
31821,gn,"In Australia and beyond, journalism is reportedly an industry in crisis, a
crisis exacerbated by COVID-19. However, the evidence revealing the crisis is
often anecdotal or limited in scope. In this unprecedented longitudinal
research, we draw on data from the Australian journalism jobs market from
January 2012 until March 2020. Using Data Science and Machine Learning
techniques, we analyse two distinct data sets: job advertisements (ads) data
comprising 3,698 journalist job ads from a corpus of over 8 million Australian
job ads; and official employment data from the Australian Bureau of Statistics.
Having matched and analysed both sources, we address both the demand for and
supply of journalists in Australia over this critical period. The data show
that the crisis is real, but there are also surprises. Counter-intuitively, the
number of journalism job ads in Australia rose from 2012 until 2016, before
falling into decline. Less surprisingly, for the entire period studied the
figures reveal extreme volatility, characterised by large and erratic
fluctuations. The data also clearly show that COVID-19 has significantly
worsened the crisis. We then tease out more granular findings, including: that
there are now more women than men journalists in Australia, but that gender
inequity is worsening, with women journalists getting younger and worse-paid
just as men journalists are, on average, getting older and better-paid; that,
despite the crisis besetting the industry, the demand for journalism skills has
increased; and that, perhaps concerningly, the skills sought by journalism job
ads increasingly include social media and generalist communications.","Layoffs, Inequity and COVID-19: A Longitudinal Study of the Journalism Jobs Crisis in Australia from 2012 to 2020",2020-08-28 06:20:48,"Nik Dawson, Sacha Molitorisz, Marian-Andrei Rizoiu, Peter Fray","http://dx.doi.org/10.1177/1464884921996286, http://arxiv.org/abs/2008.12459v2, http://arxiv.org/pdf/2008.12459v2",econ.GN
31822,gn,"Failure to receive post-natal care within first week of delivery causes a 3%
increase in the possibility of Acute Respiratory Infection in children under
five. Mothers with unpaid maternity leave put their children at a risk of 3.9%
increase in the possibility of ARI compared to those with paid maternity leave.",Implication of Natal Care and Maternity Leave on Child Morbidity: Evidence from Ghana,2020-08-29 08:11:00,"Danny Turkson, Joy Kafui Ahiabor","http://dx.doi.org/10.5539/gjhs.v12n9p94, http://arxiv.org/abs/2008.12910v1, http://arxiv.org/pdf/2008.12910v1",econ.GN
31825,gn,"We use a randomized experiment to compare a workforce training program to
cash transfers in Rwanda. Conducted in a sample of poor and underemployed
youth, this study measures the impact of the training program not only relative
to a control group but relative to the counterfactual of simply disbursing the
cost of the program directly to beneficiaries. While the training program was
successful in improving a number of core outcomes (productive hours, assets,
savings, and subjective well-being), cost-equivalent cash transfers move all
these outcomes as well as consumption, income, and wealth. In the head-to-head
costing comparison cash proves superior across a number of economic outcomes,
while training outperforms cash only in the production of business knowledge.
We find little evidence of complementarity between human and physical capital
interventions, and no signs of heterogeneity or spillover effects.",Using Household Grants to Benchmark the Cost Effectiveness of a USAID Workforce Readiness Program,2020-09-02 19:52:37,"Craig McIntosh, Andrew Zeitlin","http://arxiv.org/abs/2009.01749v1, http://arxiv.org/pdf/2009.01749v1",econ.GN
31826,gn,"A major focus of debate about rationing guidelines for COVID-19 vaccines is
whether and how to prioritize access for minority populations that have been
particularly affected by the pandemic, and been the subject of historical and
structural disadvantage, particularly Black and Indigenous individuals. We
simulate the 2018 CDC Vaccine Allocation guidelines using data from the
American Community Survey under different assumptions on total vaccine supply.
Black and Indigenous individuals combined receive a higher share of vaccines
compared to their population share for all assumptions on total vaccine supply.
However, their vaccine share under the 2018 CDC guidelines is considerably
lower than their share of COVID-19 deaths and age-adjusted deaths. We then
simulate one method to incorporate disadvantage in vaccine allocation via a
reserve system. In a reserve system, units are placed into categories and units
reserved for a category give preferential treatment to individuals from that
category. Using the Area Deprivation Index (ADI) as a proxy for disadvantage,
we show that a 40% high-ADI reserve increases the number of vaccines allocated
to Black or Indigenous individuals, with a share that approaches their COVID-19
death share when there are about 75 million units. Our findings illustrate that
whether an allocation is equitable depends crucially on the benchmark and
highlight the importance of considering the expected distribution of outcomes
from implementing vaccine allocation guidelines.",Do Black and Indigenous Communities Receive their Fair Share of Vaccines Under the 2018 CDC Guidelines,2020-09-07 04:51:00,"Parag A. Pathak, Harald Schmidt, Adam Solomon, Edwin Song, Tayfun Sönmez, M. Utku Ünver","http://arxiv.org/abs/2009.02853v1, http://arxiv.org/pdf/2009.02853v1",econ.GN
31827,gn,"The embrace of globalization and protectionism among economies has ebbed and
flowed over the past few decades. These fluctuations call for quantitative
analytics to help countries improve their trade policies. Changing attitudes
about globalization also imply that the best trade policies may vary over time
and be country-specific. We argue that the imports and exports of all economies
constitute a counterbalanced network where conflict and cooperation are two
sides of the same coin. Quantitative competitiveness is then formulated for
each country using a network counterbalance equilibrium. A country could
improve its relative strength in the network by embracing globalization,
protectionism, trade collaboration, or conflict. This paper presents the
necessary conditions for globalization and trade wars, evaluates their side
effects, derives national bargaining powers, identifies appropriate targets for
conflict or collaboration, and recommends fair resolutions for trade conflicts.
Data and events from the past twenty years support these conditions.",Globalization? Trade War? A Counterbalance Perspective,2020-09-04 15:38:51,"Arthur Hu, Xingwei Hu, Hui Tong","http://arxiv.org/abs/2009.03436v3, http://arxiv.org/pdf/2009.03436v3",econ.GN
31828,gn,"This paper undertakes a near real-time analysis of the income distribution
effects of the COVID-19 crisis in Australia to understand the ongoing changes
in the income distribution as well as the impact of policy responses. By
semi-parametrically combining incomplete observed data from three different
sources, namely, the Monthly Longitudinal Labour Force Survey, the Survey of
Income and Housing and the administrative payroll data, we estimate the impact
of COVID-19 and the associated policy responses on the Australian income
distribution between February and June 2020, covering the immediate periods
before and after the initial outbreak. Our results suggest that despite the
growth in unemployment, the Gini coefficient of equivalised disposable income
dropped by nearly 0.03 points since February. The reduction is due to the
additional wage subsidies and welfare supports offered as part of the policy
response, offsetting a potential surge in income inequality. Additionally, the
poverty rate, which could have been doubled in the absence of the government
response, also reduced by 3 to 4 percentage points. The result shows the
effectiveness of temporary policy measures in maintaining both the living
standards and the level of income inequality. However, the heavy reliance on
the support measures raises the possibility that the changes in the income
distribution may be reversed and even substantially worsened should the
measures be withdrawn.",The Impact of COVID-19 and Policy Responses on Australian Income Distribution and Poverty,2020-09-09 02:55:45,"Jinjing Li, Yogi Vidyattama, Hai Anh La, Riyana Miranti, Denisa M Sologon","http://arxiv.org/abs/2009.04037v1, http://arxiv.org/pdf/2009.04037v1",econ.GN
31829,gn,"The impacts of COVID-19 reach far beyond the hundreds of lives lost to the
disease; in particular, the pre-existing learning crisis is expected to be
magnified during school shutdown. Despite efforts to put distance learning
strategies in place, the threat of student dropouts, especially among
adolescents, looms as a major concern. Are interventions to motivate
adolescents to stay in school effective amidst the pandemic? Here we show that,
in Brazil, nudges via text messages to high-school students, to motivate them
to stay engaged with school activities, substantially reduced dropouts during
school shutdown, and greatly increased their motivation to go back to school
when classes resume. While such nudges had been shown to decrease dropouts
during normal times, it is surprising that those impacts replicate in the
absence of regular classes because their effects are typically mediated by
teachers (whose effort in the classroom changes in response to the nudges).
Results show that insights from the science of adolescent psychology can be
leveraged to shift developmental trajectories at a critical juncture. They also
qualify those insights: effects increase with exposure and gradually fade out
once communication stops, providing novel evidence that motivational
interventions work by redirecting adolescents' attention.",Using Nudges to Prevent Student Dropouts in the Pandemic,2020-09-10 13:51:02,"Guilherme Lichand, Julien Christen","http://arxiv.org/abs/2009.04767v1, http://arxiv.org/pdf/2009.04767v1",econ.GN
31830,gn,"The development of Digital Economy sets its own requirements for the
formation and development of so-called digital doubles and digital shadows of
real objects (subjects/regions). An integral element of their development and
application is a multi-level matrix of targets and resource constraints (time,
financial, technological, production, etc.). The volume of statistical
information collected for a digital double must meet several criteria: be
objective, characterize the real state of the managed object as accurately as
possible, contain all the necessary information on all managed parameters, and
at the same time avoid unnecessary and duplicate indicators (""information
garbage""). The relevance of forming the profile of the ""digital shadow of the
region"" in the context of multitasking and conflict of departmental and Federal
statistics predetermined the goal of this work: to form a system of indicators of
the socio-economic situation of regions based on the harmonization of
information resources. In this study, an inventory of the indicators in
statistical forms was carried out to assess their relevance, using as an example
the assessment of the economic health of the subject and the level of provision
of banking services.",Application of a system of indicators for assessing the socio-economic situation of a subject based on digital shadows,2020-09-12 14:47:07,Olga G. Lebedinskaya,"http://arxiv.org/abs/2009.05771v1, http://arxiv.org/pdf/2009.05771v1",econ.GN
31831,gn,"We present a model for the equilibrium frequency of offenses and the
informativeness of witness reports when potential offenders can commit multiple
offenses and witnesses are subject to retaliation risk and idiosyncratic
reporting preferences. We compare two ways of handling multiple accusations
discussed in legal scholarship: (i) When convictions are based on the
probability that the defendant committed at least one, unspecified offense and
entail a severe punishment, potential offenders induce negative correlation in
witnesses' private information, which leads to uninformative reports,
information aggregation failures, and frequent offenses in equilibrium.
Moreover, lowering the punishment in case of conviction can improve deterrence
and the informativeness of witnesses' reports. (ii) When accusations are
treated separately to adjudicate guilt and conviction entails a severe
punishment, witness reports are highly informative and offenses are infrequent
in equilibrium.","Crime Aggregation, Deterrence, and Witness Credibility",2020-09-14 17:25:46,"Harry Pei, Bruno Strulovici","http://arxiv.org/abs/2009.06470v1, http://arxiv.org/pdf/2009.06470v1",econ.GN
31832,gn,"To prevent the outbreak of the Coronavirus disease (COVID-19), many countries
around the world went into lockdown and imposed unprecedented containment
measures. These restrictions progressively produced changes to social behavior
and global mobility patterns, evidently disrupting social and economic
activities. Here, using maritime traffic data collected via a global network of
AIS receivers, we analyze the effects that the COVID-19 pandemic and
containment measures had on the shipping industry, which accounts alone for
more than 80% of the world trade. We rely on multiple data-driven maritime
mobility indexes to quantitatively assess ship mobility in a given unit of
time. The mobility analysis here presented has a worldwide extent and is based
on the computation of: CNM of all ships reporting their position and
navigational status via AIS, number of active and idle ships, and fleet average
speed. To highlight significant changes in shipping routes and operational
patterns, we also compute and compare global and local density maps. We compare
2020 mobility levels to those of previous years assuming that an unchanged
growth rate would have been achieved, if not for COVID-19. Following the
outbreak, we find an unprecedented drop in maritime mobility, across all
categories of commercial shipping. With few exceptions, a generally reduced
activity is observable from March to June, when the most severe restrictions
were in force. We quantify a variation of mobility between -5.62% and -13.77%
for container ships, between +2.28% and -3.32% for dry bulk, between -0.22% and
-9.27% for wet bulk, and between -19.57% and -42.77% for passenger traffic.
This study is unprecedented for the uniqueness and completeness of the employed
dataset, which comprises a trillion AIS messages broadcast worldwide by 50000
ships, a figure that closely parallels the documented size of the world
merchant fleet.",COVID-19 Impact on Global Maritime Mobility,2020-09-15 13:08:51,"Leonardo M. Millefiori, Paolo Braca, Dimitris Zissis, Giannis Spiliopoulos, Stefano Marano, Peter K. Willett, Sandro Carniel","http://dx.doi.org/10.1038/s41598-021-97461-7, http://arxiv.org/abs/2009.06960v3, http://arxiv.org/pdf/2009.06960v3",econ.GN
31833,gn,"In economic literature, economic complexity is typically approximated on the
basis of an economy's gross export structure. However, in times of ever
increasingly integrated global value chains, gross exports may convey an
inaccurate image of a country's economic performance since they also
incorporate foreign value-added and double-counted exports. Thus, I introduce a
new empirical approach approximating economic complexity based on a country's
value-added export structure. This approach leads to substantially different
complexity rankings compared to established metrics. Moreover, the explanatory
power of GDP per capita growth rates for a sample of 40 lower-middle- to
high-income countries is considerably higher, even if controlling for typical
growth regression covariates.",Economic Complexity and Growth: Can value-added exports better explain the link?,2020-09-16 13:53:12,Philipp Koch,"http://dx.doi.org/10.1016/j.econlet.2020.109682, http://arxiv.org/abs/2009.07599v2, http://arxiv.org/pdf/2009.07599v2",econ.GN
31834,gn,"We assess the correlation between CAP support provided to farmers and their
income and use of capital and labour in the first year of the new CAP regime.
This is done by applying three regression models to the Italian FADN farms,
controlling for other farm characteristics. CAP annual payments are positively
correlated with farm income and capital but are negatively correlated with
labour use. Farm investment support provided by RDP measures is positively
correlated to the amount of capital. Results suggest that CAP is positively
affecting farm income directly but also indirectly by supporting the
substitution of labour with capital.",The direct and indirect effect of CAP support on farm income enhancement: a farm-based econometric analysis,2020-09-16 16:41:23,"Simone Severini, Luigi Biagini","http://arxiv.org/abs/2009.07684v3, http://arxiv.org/pdf/2009.07684v3",econ.GN
31835,gn,"This paper develops a methodology for tracking in real time the impact of
shocks (such as natural disasters, financial crises or pandemics) on gross
domestic product (GDP) by analyzing high-frequency electricity market data. As
an illustration, we estimate the GDP loss caused by COVID-19 in twelve European
countries during the first wave of the pandemic. Our results are almost
indistinguishable from the official statistics of the recession during the
first two quarters of 2020 (correlation coefficient of 0.98) and are validated
by several robustness tests. However, they are also more chronologically
disaggregated and up-to-date than standard macroeconomic indicators and,
therefore, can provide crucial and timely information for policy evaluation.
Our results show that delaying intervention and pursuing 'herd immunity' have
not been successful strategies so far, since they increased both economic
disruption and mortality. We also find that coordinating policies
internationally is fundamental for minimizing spillover effects from NPIs
across countries.",Tracking GDP in real-time using electricity market data: insights from the first wave of COVID-19 across Europe,2020-09-19 15:58:30,"Carlo Fezzi, Valeria Fanghella","http://arxiv.org/abs/2009.09222v3, http://arxiv.org/pdf/2009.09222v3",econ.GN
31836,gn,"This article explores the challenges for the adoption of scrubbers and low
sulfur fuels on ship manufacturers and shipping companies. Results show that
ship manufacturers, must finance their working capital and operating costs,
which implies an increase in the prices of the ships employing these new
technologies. On the other hand, shipping companies must adopt the most
appropriate technology according to the areas where ships navigate, the scale
economies of trade routes, and the cost-benefit analysis of ship modernization.",Sulfur emission reduction in cargo ship manufacturers and shipping companies based on MARPOL Annex VI,2020-09-21 03:06:53,"Abraham Londono Pineda, Jose Alejandro Cano, Lissett Pulgarin","http://arxiv.org/abs/2009.09547v1, http://arxiv.org/pdf/2009.09547v1",econ.GN
32795,gn,"This paper shows that a non-price intervention which increased the prevalence
of a new technology facilitated its further adoption. The BlueLA program put
Electric Vehicles (EVs) for public use in many heavily trafficked areas,
primarily (but not exclusively) aimed at low-to-middle income households. We
show, using data on subsidies for these households and a difference-in-differences
strategy, that BlueLA is associated with a 33% increase in new EV
adoptions, justifying a substantial portion of public investment. While the
program provides a substitute to car ownership, our findings are consistent
with the hypothesis that increasing familiarity with EVs could facilitate
adoption.",Familiarity Facilitates Adoption: Evidence from Electric Vehicles,2022-11-26 21:18:21,"Jonathan Libgober, Ruozi Song","http://arxiv.org/abs/2211.14634v1, http://arxiv.org/pdf/2211.14634v1",econ.GN
31837,gn,"Economic growth is measured as the rate of relative change in gross domestic
product (GDP) per capita. Yet, when incomes follow random multiplicative
growth, the ensemble-average (GDP per capita) growth rate is higher than the
time-average growth rate achieved by each individual in the long run. This
mathematical fact is the starting point of ergodicity economics. Using the
atypically high ensemble-average growth rate as the principal growth measure
creates an incomplete picture. Policymaking would be better informed by
reporting both ensemble-average and time-average growth rates. We analyse
rigorously these growth rates and describe their evolution in the United States
and France over the last fifty years. The difference between the two growth
rates gives rise to a natural measure of income inequality, equal to the mean
logarithmic deviation. Despite being estimated as the average of individual
income growth rates, the time-average growth rate is independent of income
mobility.",The Two Growth Rates of the Economy,2020-09-22 14:16:11,"Alexander Adamou, Yonatan Berman, Ole Peters","http://arxiv.org/abs/2009.10451v1, http://arxiv.org/pdf/2009.10451v1",econ.GN
31838,gn,"This study analyzes the factors affecting the configuration and consolidation
of green ports in Colombia. For this purpose a case stady of maritime cargo
ports of Cartagena, Barranquilla and Santa Marta is performed addressing
semiestructured interviews to identify the factors contributing to the
consolidation of green ports and the factors guiding the sustainability
management in the ports that have not yet been certified as green ports. The
results show that environmental regulations are atarting point not the key
factor to consolidate asgreen ports. As a conclusions, the conversion of
Colombian to green ports should not be limited to the attaiment of
certifications, such as Ecoport certification, but should ensure the
contribution to sustainable development through economic, social and
environmental dimensions and the achievement of the SDGs",Analysis of the main factors for the configuration of green ports in Colombia,2020-09-23 00:51:26,"Abraham Londono Pineda, Tatiana Arias Naranjo, Jose Alejandro Cano Arenas","http://arxiv.org/abs/2009.10834v1, http://arxiv.org/pdf/2009.10834v1",econ.GN
31839,gn,"Empirical evidence for the Heckscher-Ohlin model has been inconclusive. We
test whether the predictions of the Heckscher-Ohlin Theorem with respect to
labor and capital find support in value-added trade. Defining labor-capital
intensities and endowments as the ratio of hours worked to the nominal capital
stock, we find evidence against Heckscher-Ohlin. However, taking the ratio of
total factor compensations, and thus accounting for differences in
technologies, we find strong support for it. That is, labor-abundant countries
tend to export value-added in goods of labor-intensive industries. Moreover,
differentiating between broad industries, we find support for nine out of
twelve industries.",A test for Heckscher-Ohlin using value-added exports,2020-09-24 18:08:25,"Philipp Koch, Clemens Fessler","http://arxiv.org/abs/2009.11743v1, http://arxiv.org/pdf/2009.11743v1",econ.GN
31840,gn,"Despite widely documented shortfalls of teacher skills and effort, there is
little systematic evidence of rates of teacher turnover in low-income
countries. I investigate the incidence and consequences of teacher turnover in
Rwandan public primary schools over the period from 2016-2019. To do so, I
combine the universe of teacher placement records with student enrollment
figures and school-average Primary Leaving Exam scores in a nationally
representative sample of 259 schools. Results highlight five features of
teacher turnover. First, rates of teacher turnover are high: annually, 20
percent of teachers separate from their jobs, of which 11 percent exit from the
public-sector teaching workforce. Second, the burden of teacher churn is higher
in schools with low learning levels and, perhaps surprisingly, in low
pupil-teacher-ratio schools. Third, teacher turnover is concentrated among
early-career teachers, male teachers, and those assigned to teach Math. Fourth,
replacing teachers quickly after they exit is a challenge; 23 percent of
exiting teachers are not replaced the following year. And fifth, teacher
turnover is associated with subsequent declines in learning outcomes. On
average, the loss of a teacher is associated with a reduction in learning
levels of 0.05 standard deviations. In addition to class-size increases, a
possible mechanism for these learning outcomes is the prevalence of teachers
teaching outside of their areas of subject expertise: in any given year, at
least 21 percent of teachers teach in subjects in which they have not been
trained. Taken together, these results suggest that the problem of teacher
turnover is substantial in magnitude and consequential for learning outcomes in
schools.",Teacher turnover in Rwanda,2020-09-28 09:19:41,Andrew Zeitlin,"http://arxiv.org/abs/2009.13091v1, http://arxiv.org/pdf/2009.13091v1",econ.GN
31841,gn,"This paper presents preliminary summary results from a longitudinal study of
participants in seven U.S. states during the COVID-19 pandemic. In addition to
standard socio-economic characteristics, we collect data on various economic
preference parameters: time, risk, and social preferences, and risk perception
biases. We pay special attention to predictors that are both important drivers
of social distancing and are potentially malleable and susceptible to policy
levers. We note three important findings: (1) demographic characteristics exert
the largest influence on social distancing measures and mask-wearing, (2) we
show that individual risk perception and cognitive biases play a critical role
in influencing the decision to adopt social distancing measures, (3) we
identify important demographic groups that are most susceptible to changing
their social distancing behaviors. These findings can help inform the design of
policy interventions regarding targeting specific demographic groups, which can
help reduce the transmission speed of the COVID-19 virus.",Predictors of Social Distancing and Mask-Wearing Behavior: Panel Survey in Seven U.S. States,2020-09-28 10:17:02,"Plamen Nikolov, Andreas Pape, Ozlem Tonguc, Charlotte Williams","http://arxiv.org/abs/2009.13103v1, http://arxiv.org/pdf/2009.13103v1",econ.GN
31848,gn,"We investigate how protectionist policies influence economic growth. Our
empirical strategy exploits an extraordinary tax scandal that gave rise to an
unexpected change of government in Sweden. A free-trade majority in parliament
was overturned by a protectionist majority in 1887. The protectionist
government increased tariffs. We employ the synthetic control method to select
control countries against which economic growth in Sweden can be compared. We
do not find evidence suggesting that protectionist policies influenced economic
growth and examine channels why. The new tariff laws increased government
revenue. However, the results do not suggest that the protectionist government
stimulated the economy by increasing government expenditure.",Protectionism and economic growth: Causal evidence from the first era of globalization,2020-10-06 01:50:28,"Niklas Potrafke, Fabian Ruthardt, Kaspar Wüthrich","http://arxiv.org/abs/2010.02378v3, http://arxiv.org/pdf/2010.02378v3",econ.GN
31842,gn,"On April 16th, The White House launched ""Opening up America Again"" (OuAA)
campaign while many U.S. counties had stay-at-home orders in place. We created
a panel data set of 1,563 U.S. counties to study the impact of U.S. counties'
stay-at-home orders on community mobility before and after The White House's
campaign to reopen the country. Our results suggest that before the OuAA
campaign stay-at-home orders brought down time spent in retail and recreation
businesses by about 27% for typical conservative and liberal counties. However,
after the launch of OuAA campaign, the time spent at retail and recreational
businesses in a typical conservative county increased significantly more than
in liberal counties (15% increase in a typical conservative county vs. 9%
increase in a typical liberal county). We also found that in conservative
counties with stay-at-home orders in place, time spent at retail and
recreational businesses increased less than that of conservative counties
without stay-at-home orders. These findings illuminate the extent to which
residents' political ideology could determine whether they follow local
orders, and how the White House's OuAA campaign polarized compliance
between liberal and conservative counties. The silver lining in our
study is that even when the federal government was reopening the country, the
local authorities that enforced stay-at-home restrictions were to some extent
effective.","When Local Governments' Stay-at-Home Orders Meet the White House's ""Opening Up America Again""",2020-09-29 18:27:25,"Reza Mousavi, Bin Gu","http://arxiv.org/abs/2009.14097v2, http://arxiv.org/pdf/2009.14097v2",econ.GN
31843,gn,"In a fast-changing technology-driven era, drafting an implementable strategic
roadmap to achieve economic prosperity becomes a real challenge. Although the
national and international strategic development plans may vary, they usually
target the improvement of the quality of living standards through boosting the
national GDP per capita and the creation of decent jobs. There is no doubt that
human capacity building, through higher education, is vital to the availability
of a highly qualified workforce supporting the implementation of the
aforementioned strategies. In other words, fulfillment of most strategic
development plan goals becomes dependent on the drafting and implementation of
successful higher education strategies. For MENA region countries, this is
particularly crucial due to many specific challenges, some of which are
different from those facing developed nations. More details on the MENA region
higher education strategic planning challenges as well as the proposed higher
education strategic requirements to support national economic prosperity and
fulfill the 2030 UN SDGs are given in the paper.",On The Quest For Economic Prosperity: A Higher Education Strategic Perspective For The Mena Region,2020-09-30 06:26:31,Amr A. Adly,"http://arxiv.org/abs/2009.14408v1, http://arxiv.org/pdf/2009.14408v1",econ.GN
31844,gn,"Using machine learning methods in a quasi-experimental setting, I study the
heterogeneous effects of introducing waste prices - unit prices on household
unsorted waste disposal - on waste demands, municipal costs, and pollution.
Using a unique panel of Italian municipalities with large variation in prices
and observables, I show that waste demands are nonlinear. I find evidence of
constant elasticities at low prices, and increasing elasticities at high prices
driven by income effects and waste habits before policy. The policy reduces
waste management costs and pollution in all municipalities after three years of
adoption, when prices cause significant waste avoidance.",Policy evaluation of waste pricing programs using heterogeneous causal effect estimation,2020-10-02 20:03:26,Marica Valente,"http://arxiv.org/abs/2010.01105v8, http://arxiv.org/pdf/2010.01105v8",econ.GN
31845,gn,"In this paper we examine the mechanism proposed by Buterin, Hitzig, and Weyl
(2019) for public goods financing, particularly regarding its matching funds
requirements, related efficiency implications, and incentives for strategic
behavior. Then, we use emerging evidence from Gitcoin Grants to identify
stylized facts in contribution giving and test our propositions. Because of its
quadratic design, matching funds requirements scale rapidly, particularly with
more numerous and equally contributed projects. As a result, matching funds are
exhausted early in the funding rounds, and much space remains for social
efficiency improvement. Empirically, there is also a tendency by contributors
to give small amounts, scattered among multiple projects, which accelerates
this process. Among other findings, we also identify a significant amount of
reciprocal backing, which could be consistent with the kind of strategic
behavior we discuss.",Quadratic Funding and Matching Funds Requirements,2020-10-02 23:50:48,Ricardo A. Pasquini,"http://arxiv.org/abs/2010.01193v3, http://arxiv.org/pdf/2010.01193v3",econ.GN
31846,gn,"This paper analyzes the differences in poverty in high wealth communities and
low wealth communities. We first discuss methods of measuring poverty and
analyze the causes of individual poverty and poverty in the Bay Area. Three
cases are considered regarding relative poverty. The first two cases involve
neighborhoods in the Bay Area while the third case evaluates two neighborhoods
within the city of San Jose, CA. We find that low wealth communities have more
crime, more teen births, and more cost-burdened renters because of high
concentrations of temporary and seasonal workers, extensive regulations on
greenhouse gas emissions, minimum wage laws, and limited housing supply. In the
conclusion, we review past attempts to alleviate the effects of poverty and
give suggestions on how future policy can be influenced to eventually create a
future free of poverty.",Wealth and Poverty: The Effect of Poverty on Communities,2020-10-03 15:23:36,"Merrick Wang, Robert Johnston","http://arxiv.org/abs/2010.01335v1, http://arxiv.org/pdf/2010.01335v1",econ.GN
31847,gn,"The purpose of this paper is to review the concept of cryptocurrencies in our
economy. First, Bitcoin and alternative cryptocurrencies' histories are
analyzed. We then study the implementation of Bitcoin in the airline and real
estate industries. Our study finds that many Bitcoin companies partner with
airlines in order to decrease processing times, to provide ease of access for
spending in international airports, and to reduce fees on foreign exchanges for
fuel expenses, maintenance, and flight operations. Bitcoin transactions have
occurred in the real estate industry, but many businesses are concerned with
Bitcoin's potential interference with the U.S. government and its high
volatility. As Bitcoin's price has been growing rapidly, we assessed Bitcoin's
real value; Bitcoin derives value from its scarcity, utility, and public trust.
In the conclusion, we discuss Bitcoin's future and conclude that Bitcoin may
change from a short-term profit investment to a more steady industry as we
identify Bitcoin with the ""greater fool theory"", and as the number of available
Bitcoins to be mined dwindles and technology becomes more expensive.",Bitcoin and its impact on the economy,2020-10-03 15:28:49,Merrick Wang,"http://arxiv.org/abs/2010.01337v1, http://arxiv.org/pdf/2010.01337v1",econ.GN
31849,gn,"Empirical studies on food expenditure are largely based on cross-section data
and for a few studies based on longitudinal (or panel) data the focus has been
on the conditional mean. While the former, by construction, cannot model the
dependencies between observations across time, the latter cannot look at the
relationship between food expenditure and covariates (such as income,
education, etc.) at lower (or upper) quantiles, which are of interest to
policymakers. This paper analyzes expenditures on total food (TF), food at home
(FAH), and food away from home (FAFH) using mean regression and quantile
regression models for longitudinal data to examine the impact of economic
recession and various demographic, socioeconomic, and geographic factors. The
data is taken from the Panel Study of Income Dynamics (PSID) and comprises
2174 families in the United States (US) observed between 2001-2015. Results
indicate that age and education of the head, family income, female headed
family, marital status, and economic recession are important determinants for
all three types of food expenditure. Spouse education, family size, and some
regional indicators are important for expenditures on TF and FAH, but not for
FAFH. Quantile analysis reveals considerable heterogeneity in the covariate
effects for all types of food expenditure, which cannot be captured by models
focused on conditional mean. The study ends by showing that modeling
conditional dependence between observations across time for the same family
unit is crucial to reducing/avoiding heterogeneity bias and better model
fitting.",Heterogeneity in Food Expenditure amongst US families: Evidence from Longitudinal Quantile Regression,2020-10-06 13:46:30,"Arjun Gupta, Soudeh Mirghasemi, Mohammad Arshad Rahman","http://arxiv.org/abs/2010.02614v1, http://arxiv.org/pdf/2010.02614v1",econ.GN
31850,gn,"This article identifies how scarcity, abundance, and sufficiency influence
exchange behavior. Analyzing the mechanisms governing exchange of resources
constitutes the foundation of several social-science perspectives. Neoclassical
economics provides one of the most well-known perspectives of how rational
individuals allocate and exchange resources. Using Rational Choice Theory
(RCT), neoclassical economics assumes that exchange between two individuals
will occur when resources are scarce and that these individuals interact
rationally to satisfy their requirements (i.e., preferences). While RCT is
useful to characterize interaction in closed and stylized systems, it proves
insufficient to capture social and psychological reality where culture,
emotions, and habits play an integral part in resource exchange. Social
Resource Theory (SRT) improves on RCT in several respects by making the social
nature of resources the object of study. SRT shows how human interaction is
driven by an array of psychological mechanisms, from emotions to heuristics.
Thus, SRT provides a more realistic foundation for analyzing and explaining
social exchange than the stylized instrumental rationality of RCT. Yet SRT has
no clear place for events of abundance and sufficiency as additional
motivations to exchange resources. This article synthesizes and formalizes a
foundation for SRT using not only scarcity but also abundance and sufficiency.",Extending Social Resource Exchange to Events of Abundance and Sufficiency,2020-10-06 15:10:40,"Jonas Bååth, Adel Daoud","http://arxiv.org/abs/2010.02658v1, http://arxiv.org/pdf/2010.02658v1",econ.GN
31851,gn,"Deep learning (DL) and machine learning (ML) methods have recently
contributed to the advancement of models in the various aspects of prediction,
planning, and uncertainty analysis of smart cities and urban development. This
paper presents the state of the art of DL and ML methods used in this realm.
Through a novel taxonomy, the advances in model development and new application
domains in urban sustainability and smart cities are presented. Findings reveal
that five DL and ML methods have been most applied to address the different
aspects of smart cities. These are artificial neural networks; support vector
machines; decision trees; ensembles, Bayesians, hybrids, and neuro-fuzzy; and
deep learning. It is also found that energy, health, and urban transport
are the main domains of smart cities in which DL and ML methods have been
applied to address problems.",State of the Art Survey of Deep Learning and Machine Learning Models for Smart Cities and Urban Sustainability,2020-10-06 15:40:45,"Saeed Nosratabadi, Amir Mosavi, Ramin Keivani, Sina Ardabili, Farshid Aram","http://dx.doi.org/10.1007/978-3-030-36841-8_22, http://arxiv.org/abs/2010.02670v1, http://arxiv.org/pdf/2010.02670v1",econ.GN
31852,gn,"The recent developments of computer and electronic systems have made the use
of intelligent systems for the automation of agricultural industries. In this
study, the temperature variation of the mushroom growing room was modeled by
multi-layered perceptron and radial basis function networks based on
independent parameters including ambient temperature, water temperature, fresh
air and circulation air dampers, and water tap. According to the obtained
results, the best MLP network was obtained in the second repetition, with 12
neurons in the hidden layer, and the best radial basis function network had 20
neurons in the hidden layer. The comparison of the two networks showed the
highest correlation coefficient (0.966),
the lowest root mean square error (RMSE) (0.787) and the lowest mean absolute
error (MAE) (0.02746) for the radial basis function network. Therefore, the
radial basis function network was selected to predict the temperature behavior
of the mushroom growing hall control system.",Modelling Temperature Variation of Mushroom Growing Hall Using Artificial Neural Networks,2020-10-06 15:44:43,"Sina Ardabili, Amir Mosavi, Asghar Mahmoudi, Tarahom Mesri Gundoshmian, Saeed Nosratabadi, Annamaria R. Varkonyi-Koczy","http://dx.doi.org/10.1007/978-3-030-36841-8_3, http://arxiv.org/abs/2010.02673v1, http://arxiv.org/pdf/2010.02673v1",econ.GN
31853,gn,"One of the challenges for international companies is to manage multicultural
environments effectively. Cultural intelligence (CQ) is a soft skill required
of the leaders of organizations working in cross-cultural contexts to be able
to communicate effectively in such environments. On the other hand,
organizational structure plays an active role in developing and promoting such
skills in an organization. Therefore, this study aimed to investigate the
effect of leader CQ on organizational performance mediated by organizational
structure. To achieve the objective of this research, first, conceptual models
and hypotheses of this research were formed based on the literature. Then, a
quantitative empirical research design using a questionnaire, as a tool for
data collection, and structural equation modeling, as a tool for data analysis,
was employed among executives of knowledge-based companies in the Science and
Technology Park, Bushehr, Iran. The results disclosed that leader CQ directly
and indirectly (i.e., through the organizational structure) has a positive and
significant effect on organizational performance. In other words, in
organizations that operate in a multicultural environment, the higher the level
of leader CQ, the higher the performance of that organization. Accordingly,
such companies are encouraged to invest in improving the cultural intelligence
of their leaders to improve their performance in cross-cultural environments,
and to design appropriate organizational structures for the development of
their intellectual capital.",Leader Cultural Intelligence and Organizational Performance,2020-10-06 15:50:58,"Saeed Nosratabadi, Parvaneh Bahrami, Khodayar Palouzian, Amir Mosavi","http://dx.doi.org/10.1080/23311975.2020.1809310, http://arxiv.org/abs/2010.02678v1, http://arxiv.org/pdf/2010.02678v1",econ.GN
31854,gn,"The process of technological change can be regarded as a non-deterministic
system governed by factors of a cumulative nature that generate cyclical
phenomena. In this context, the process of growth and decline of technology can
be systematically analyzed to design best practices for technology management
of firms and innovation policy of nations. In this perspective, this study
focuses on the evolution of technologies in the U.S. recorded music industry.
Empirical findings reveal that technological change in the sector under study
here has recurring fluctuations of technological innovations. In particular,
the technology cycle has an up-wave phase longer than its down-wave phase as it
evolves in markets before being substituted by a new technology.
Results suggest that radical innovation is one of the main sources of cyclical
phenomena for industrial and corporate change, and as a consequence, economic
and social change.",Cyclical phenomena in technological change,2020-10-05 16:50:40,Mario Coccia,"http://arxiv.org/abs/2010.03168v2, http://arxiv.org/pdf/2010.03168v2",econ.GN
31855,gn,"Decision makers are often confronted with complex tasks which cannot be
solved by an individual alone, but require collaboration in the form of a
coalition. Previous literature argues that instability, in terms of the
re-organization of a coalition with respect to its members over time, is
detrimental to performance. Other lines of research, such as the dynamic
capabilities framework, challenge this view. Our objective is to understand the
effects of instability on the performance of coalitions which are formed to
solve complex tasks. In order to do so, we adapt the NK-model to the context of
human decision-making in coalitions, and introduce an auction-based mechanism
for autonomous coalition formation and a learning mechanism for human agents.
Preliminary results suggest that re-organizing innovative and well-performing
teams is beneficial, but that this is true only in certain situations.",Dynamic coalitions in complex task environments: To change or not to change a winning team?,2020-10-07 15:35:36,"Dario Blanco-Fernandez, Stephan Leitner, Alexandra Rausch","http://arxiv.org/abs/2010.03371v1, http://arxiv.org/pdf/2010.03371v1",econ.GN
31856,gn,"Recommendation systems are essential ingredients in producing matches between
products and buyers. Despite their ubiquity, they face two important
challenges. First, they are data-intensive, a feature that precludes
sophisticated recommendations by some types of sellers, including those selling
durable goods. Second, they often focus on estimating fixed evaluations of
products by consumers while ignoring state-dependent behaviors identified in
the Marketing literature.
  We propose a recommendation system based on consumer browsing behaviors,
which bypasses the ""cold start"" problem described above, and takes into account
the fact that consumers act as ""moving targets,"" behaving differently depending
on the recommendations suggested to them along their search journey. First, we
recover the consumers' search policy function via machine learning methods.
Second, we include that policy into the recommendation system's dynamic problem
via a Bellman equation framework.
  When compared with the seller's own recommendations, our system produces a
profit increase of 33%. Our counterfactual analyses indicate that browsing
history along with past recommendations feature strong complementary effects in
value creation. Moreover, managing customer churn effectively is a big part of
value creation, whereas recommending alternatives in a forward-looking way
produces moderate effects.",No data? No problem! A Search-based Recommendation System with Cold Starts,2020-10-07 17:48:58,"Pedro M. Gardete, Carlos D. Santos","http://arxiv.org/abs/2010.03455v1, http://arxiv.org/pdf/2010.03455v1",econ.GN
31857,gn,"We present a model of political competition in which an incumbent politician,
may implement a costly policy to prevent a possible threat to, for example,
national security or a natural disaster.",To Act or not to Act? Political competition in the presence of a threat,2020-10-07 17:59:56,"Arthur Fishman, Doron Klunover","http://arxiv.org/abs/2010.03464v2, http://arxiv.org/pdf/2010.03464v2",econ.GN
31858,gn,"This work is dedicated to finding the determinants of voting behavior in
Poland at the poviat level. The 2019 parliamentary election has been analyzed and
an attempt to explain vote share for the winning party (Law and Justice) has
been made. Sentiment analysis of tweets in Polish (original) and English
(machine-translations), collected in the period around the election, has been
applied. Among the multiple machine learning approaches tested, the best
classification accuracy has been achieved by Huggingface BERT on
machine-translated tweets. OLS regression, with sentiment of tweets and
selected socio-economic features as independent variables, has been utilized to
explain Law and Justice vote share in poviats. Sentiment of tweets has been
found to be a significant predictor, as stipulated by the literature of the
field.",Sentiment of tweets and socio-economic characteristics as the determinants of voting behavior at the regional level. Case study of 2019 Polish parliamentary election,2020-10-07 18:58:20,Grzegorz Krochmal,"http://arxiv.org/abs/2010.03493v1, http://arxiv.org/pdf/2010.03493v1",econ.GN
31859,gn,"We find UK 'local lockdowns' of cities and small regions, focused on limiting
how many people a household can interact with and in what settings, are
effective in turning the tide on rising positive COVID-19 cases. Yet, by
focusing on household mixing within the home, these local lockdowns have not
inflicted the large declines in consumption observed in March 2020 when the
first virus wave and first national lockdown occurred. Our study harnesses a
new source of real-time, transaction-level consumption data that we show to be
highly correlated with official statistics. The effectiveness of local
lockdowns is evaluated by applying a difference-in-differences approach which
exploits nearby localities not subject to local lockdowns as comparison groups.
Our findings indicate that policymakers may be able to contain virus outbreaks
without killing local economies. However, the ultimate effectiveness of local
lockdowns is expected to be highly dependent on co-ordination between regions
and an effective system of testing.",The English Patient: Evaluating Local Lockdowns Using Real-Time COVID-19 & Consumption Data,2020-10-08 20:14:21,"John Gathergood, Benedict Guttman-Kenney","http://arxiv.org/abs/2010.04129v3, http://arxiv.org/pdf/2010.04129v3",econ.GN
31860,gn,"After the U.S market earned strong returns in 2003, day trading made a
comeback and once again became a popular trading method among traders. Although
there is no comprehensive empirical evidence available to answer the question
of whether individual day traders make money, a number of studies point out
that only a few are able to consistently earn profits sufficient to cover
transaction costs and thus make money. The day trading concept of buying and
selling stocks on margin alone suggests that it is more risky than the usual
going-long way of making a profit. This paper offers a new approach to day
trading, an approach that eliminates some of the risks of day trading through
specialization. The concept is that the trader should specialize in
just one (blue chip) stock and use existing day trading techniques (trend
following, playing news, range trading, scalping, technical analysis, covering
spreads) to make money.",Specilized day trading -- a new view on an old game,2020-10-11 15:53:46,"V Simovic, V Simovic","http://arxiv.org/abs/2010.05238v1, http://arxiv.org/pdf/2010.05238v1",econ.GN
31861,gn,"Sex differences in early age mortality have been explained in prior
literature by differences in biological make-up and gender discrimination in
the allocation of household resources. Studies estimating the effects of these
factors have generally assumed that offspring sex ratio is random, which is
implausible in view of recent evidence that the sex of a child is partly
determined by prenatal environmental factors. These factors may also affect
child health and survival in utero or after birth, which implies that
conventional approaches to explaining sex differences in mortality are likely
to yield biased estimates. We propose a methodology for decomposing these
differences into the effects of prenatal environment, child biology, and
parental preferences. Using a large sample of twins, we compare mortality rates
in male-female twin pairs in India, a region known for discriminating against
daughters, and sub-Saharan Africa, a region where sons and daughters are
thought to be valued by their parents about equally. We find that: (1) prenatal
environment positively affects the mortality of male children; (2) biological
make-up of the latter contributes to their excess mortality, but its effect has
been previously overestimated; and (3) parental discrimination against female
children in India negatively affects their survival; but failure to control for
the effects of prenatal and biological factors leads conventional approaches to
underestimating its effect by 237 percent during infancy, and 44 percent during
childhood.","Twin Estimates of the Effects of Prenatal Environment, Child Biology, and Parental Bias on Sex Differences in Early Age Mortality",2020-10-12 16:49:12,Roland Pongou,"http://arxiv.org/abs/2010.05712v1, http://arxiv.org/pdf/2010.05712v1",econ.GN
31862,gn,"China's pledge to reach carbon neutrality before 2060 is an ambitious goal
and could provide the world with much-needed leadership on how to limit warming
to +1.5C above pre-industrial levels by the end of the century. But the
pathways that would achieve net zero by 2060 are still unclear, including the
role of negative emissions technologies. We use the Global Change Analysis
Model to simulate how negative emissions technologies, in general, and direct
air capture (DAC) in particular, could contribute to China's meeting this
target. Our results show that negative emissions could play a large role,
offsetting on the order of 3 GtCO2 per year from difficult-to-mitigate sectors
such as freight transportation and heavy industry. This includes up to a 1.6
GtCO2 per year contribution from DAC, constituting up to 60% of total projected
negative emissions in China. But DAC, like bioenergy with carbon capture and
storage and afforestation, has not yet been demonstrated at anywhere
approaching the scales required to meaningfully contribute to climate
mitigation. Deploying NETs at these scales will have widespread impacts on
financial systems and natural resources such as water, land, and energy in
China.",The role of negative emissions in meeting China's 2060 carbon neutrality goal,2020-10-14 01:36:39,"Jay Fuhrman, Andres F. Clarens, Haewon McJeon, Pralit Patel, Scott C. Doney, William M. Shobe, Shreekar Pradhan","http://arxiv.org/abs/2010.06723v2, http://arxiv.org/pdf/2010.06723v2",econ.GN
31863,gn,"In this study, I aimed to estimate the incidence of catastrophic health
expenditure and analyze the extent of inequalities in out-of-pocket health
expenditure and its decomposition according to gender, sector, religion and
social groups of the households across Districts of West Bengal. I analysed
health spending in West Bengal, using National Sample Survey 71st round pooled
data suitably represented to produce estimates at the district level. I
measured CHE at different thresholds of OOP health expenditure. The Gini
coefficient and its decomposition techniques were applied to assess the degree
of inequality in OOP health expenditure across socio-geographic factors and across
districts. The incidence of catastrophic payments varies considerably across
districts. Only 14.1 percent population of West Bengal was covered under health
coverage in 2014. The inequality in OOP health expenditure for West Bengal has
been observed with gini coefficient of 0.67. Based on the findings from this
analysis, more attention is needed on effective financial protection for people
of West Bengal to promote fairness, with special focus on the districts with
higher inequality. This study only documents the extent of CHE and inequality
across districts of West Bengal; examining causality is left for future
work.",Catastrophic health expenditure and inequalities -- a district level study of West Bengal,2020-10-14 10:46:33,Pijush Kanti Das,"http://arxiv.org/abs/2010.06856v1, http://arxiv.org/pdf/2010.06856v1",econ.GN
31864,gn,"The article presents the results of multivariate classification of Russian
regions by the indicators characterizing the population income and their
concentration. The clustering was performed using the authors' own approach to
selecting the characteristics, which constitutes the academic novelty of the
evaluation of regional differentiation by population income and the
interconnected characteristics. The analysis was aimed at the
evaluation of the real scale of disproportions in spatial development of the
country territories by the considered characteristics. The clusterization
results allowed us to establish the relatively ""strong"" position of
a group of high-income regions (the changes in the array of regions
constituting it are highly unlikely in the foreseeable future). Additionally,
the analysis revealed a group of Russian regions whose population is struggling
to live on quite low incomes. Under the crisis conditions caused by Covid-19,
these so-called ""poor"" regions are in need of additional public support,
without which their population will become further impoverished.",The application of multivariate classification in evaluating the regional differentiation by population income in Russia,2020-10-08 11:32:10,"Natalia A. Sadovnikova, Olga A. Zolotareva","http://arxiv.org/abs/2010.07403v1, http://arxiv.org/pdf/2010.07403v1",econ.GN
31865,gn,"I study the heterogeneity of credence goods provision in taxi drivers taking
detours in New York City. First, I document that there is significant detouring
on average by drivers. Second, there is significant heterogeneity in cheating
across individuals, yet each individual's propensity to take detours is stable:
drivers who detour almost always detour, while those who do not detour almost
never do. Drivers who take longer detours on each trip also take such trips
more often. Third, cultural attitudes plausibly explain some of this
heterogeneity in behavior across individuals.",Individual Heterogeneity and Cultural Attitudes in Credence Goods Provision,2020-10-16 16:40:17,Johnny Tang,"http://arxiv.org/abs/2010.08386v1, http://arxiv.org/pdf/2010.08386v1",econ.GN
31877,gn,"This research is to assess cryptocurrencies with the conditional beta,
compared with prior studies based on unconditional beta or fixed beta. It is a
new approach to building a pricing model for cryptocurrencies. Therefore, we
expect that the use of conditional beta will increase the explanatory ability
of factors in previous pricing models. Moreover, this research is a pioneer
in placing the uncertainty factor in the cryptocurrency pricing model. Earlier
studies on cryptocurrency pricing have ignored this factor. However, it is a
significant factor in the valuation of cryptocurrencies because uncertainty
leads to investor sentiment and affects prices.",Conditional beta and uncertainty factor in the cryptocurrency pricing model,2020-10-24 04:31:43,Khanh Q. Nguyen,"http://arxiv.org/abs/2010.12736v1, http://arxiv.org/pdf/2010.12736v1",econ.GN
31866,gn,"Two decades of studies have found significant regional differences in the
timing of transitions in national business cycles and their durations. Earlier
studies partly detect regional synchronization during business cycle expansions
and contractions in Europe, the United States, and Japan. We examine this
possibility applying a sophisticated method for identifying the time-varying
degree of synchronization to regional business cycle data in the U.S. and
Japan. The method is prominent in nonlinear sciences but has been infrequently
applied in business cycle studies. We find that synchronization in regional
business cycles increased during contractions and decreased during expansions
throughout the period under study. Such asymmetry between the contraction and
expansion phases of a business cycle will contribute to a better understanding
of the phenomenon of business cycles.",Regional Synchronization during Economic Contraction: The Case of the U.S. and Japan,2020-10-17 20:55:44,"Makoto Muto, Tamotsu Onozaki, Yoshitaka Saiki","http://arxiv.org/abs/2010.08835v2, http://arxiv.org/pdf/2010.08835v2",econ.GN
31867,gn,"This article presents the results of a cluster analysis of the regions of the
Russian Federation in terms of the main parameters of socio-economic
development according to the data presented in the official data sources of the
Federal State Statistics Service (Rosstat). The domestic and foreign (Eurostat)
methodologies for assessing the socio-economic development of territories were
studied and analyzed. The aim of the study is to determine the main parameters of
territorial differentiation and to identify key indicators that affect the
socio-economic development of Russian regions. The authors have carried out a
classification of the constituent entities of the Russian Federation not in
terms of territorial location and geographical features, but in terms of the
specifics and key parameters of the socio-economic situation.",Differentiation of subjects of the Russian Federation according to the main parameters of socio-economic development,2020-10-18 22:00:23,"Natalia A. Sadovnikova, Leysan A. Davletshina, Olga A. Zolotareva, Olga O. Lebedinskaya","http://arxiv.org/abs/2010.09068v1, http://arxiv.org/pdf/2010.09068v1",econ.GN
31868,gn,"Extensive research has established a strong influence of product display on
demand. In this domain, managers have many tools at their disposal to influence
the demand through the product display, like assortment, shelf-space, or
allocating profitable products to highly attractive shelf locations. In this
research, we focus on the influence of the product arrangement on competition
among products within the display. Intuitively, products located next to each
other are compared more often than when they are placed far apart. We introduce
a model that allows for product competition effects mediated by the products'
relative positions in the display. This model naturally produces increased
competition among products located closer together, inducing demand
correlations based on products' proximity to their competitors on the shelf, and not only their
relative characteristics. We fit this model to experimental data from physical
retail stores and in online product displays. The proposed model shows this
effect to be significant; moreover, this model outperforms traditional models
in fit and prediction power, and shows that ignoring the shelf-implied
competition generates a bias in the price sensitivity, affecting price and
promotion strategies. The proposed model was used to evaluate different shelf
displays, and to evaluate and select displays with higher profitability, by
exploiting the influence on competition among products to shift demand to
higher-profitability products. Finally, from the model, we generate
recommendations for retail managers to construct better shelf designs; testing
these suggestions on our fitted model on retail store data, we achieve a 3%
increase in profits over current shelf designs.",Influencing Competition Through Shelf Design,2020-10-19 08:17:03,"Francisco Cisternas, Wee Chaimanowong, Alan Montgomery","http://arxiv.org/abs/2010.09227v2, http://arxiv.org/pdf/2010.09227v2",econ.GN
31869,gn,"The Covid-19 pandemic exposed firms, organisations and their respective
supply chains which are directly involved in the manufacturing of products that
are critical to alleviating the effects of the health crisis, collectively
referred to as the Crisis-Critical Sector, to unprecedented challenges. Firms
from other sectors, such as automotive, luxury and home appliances, have rushed
into the Crisis-Critical Sector in order to support the effort to upscale
incumbent manufacturing capacities, thereby introducing Intellectual Property
(IP)-related dynamics and challenges. We apply an innovation ecosystem
perspective on the Crisis-Critical Sector and adopt a novel visual mapping
approach to identify IP-associated challenges and IP-specific dynamic
developments during and potentially beyond the crisis. In this paper, we add
methodologically by devising and testing a visual approach to capturing IP
related dynamics in evolving innovation ecosystems and contribute to literature
on IP management in the open innovation context by proposing paraground IP as a
novel IP type. Finally, we also deduce managerial implications for IP management
practitioners at both incumbent firms and new entrants for navigating
innovation ecosystems subject to crisis-induced dynamic shifts.",Identifying Crisis-Critical Intellectual Property Challenges during the Covid-19 Pandemic: A scenario analysis and conceptual extrapolation of innovation ecosystem dynamics using a visual mapping approach,2020-10-20 10:27:21,"Alexander Moerchel, Frank Tietze, Leonidas Aristodemou, Pratheeba Vimalnath","http://dx.doi.org/10.17863/CAM.58372, http://arxiv.org/abs/2010.10086v1, http://arxiv.org/pdf/2010.10086v1",econ.GN
31870,gn,"In this investigation we analyze impact of diversification of agriculture on
farmer's income, a study from primitive tribal groups from eastern ghats of
India. We have taken crop diversification index to measure the extent and
regression formalism to analyze the impact, of crop diversification.
Descriptive statistics is employed to know the average income of the farmers,
paired results of crop diversification index. We observed a positive impact on
crop diversification in scheduled areas and investigated reasons where it did
not work.",Impact of crop diversification on tribal farmer's income: A case study from Eastern ghats of India,2020-10-20 14:41:15,"Sadasiba Tripathy, Dr. Sandhyarani Das","http://arxiv.org/abs/2010.10208v1, http://arxiv.org/pdf/2010.10208v1",econ.GN
31894,gn,"We build a formal model that examines how different policymaking environments
shape career-concerned officials' reform decisions and implementation. When
career concerns are strong, officials will inefficiently initiate reforms to
signal to the central government that they are congruent. To improve the
quality of reform policymaking, the central government must hold officials
accountable to policy outcomes. We demonstrate that the central government can
exercise this accountability by requiring officials to publicize policy
outcomes while maintaining secrecy on implementation details. In this situation,
officials can signal their congruence only through a desirable policy outcome,
so they are highly motivated to carry out a reform well. We also demonstrate
that the accountability on policy outcomes is infeasible under alternative
policymaking environments. We apply the results to China's recent practice in
decentralized reform policymaking.",Accountability and Motivation,2020-12-02 20:02:09,Liqun Liu,"http://arxiv.org/abs/2012.01331v7, http://arxiv.org/pdf/2012.01331v7",econ.GN
31871,gn,"In this paper it is demonstrated that the application of principal components
analysis for regional cluster modelling and analysis is essential in the
situations where there is significant multicollinearity among several
parameters, especially when the dimensionality of regional data is measured in
tens. The proposed principal components model allows for same-quality
representation of the clustering of regions. In fact, the clusters become more
distinctive and the apparent outliers become either more pronounced with the
component model clustering or are alleviated with the respective hierarchical
cluster. Thus, a five-component model was obtained and validated upon 85
regions of the Russian Federation and 19 socio-economic parameters. The principal
components allowed us to describe approximately 75 percent of the initial
parameters' variation and enabled further simulations upon the studied variables.
The cluster analysis upon the principal components modelling enabled better
exposure of regional structure and disparity in economic development in the
Russian Federation, consisting of four main clusters: a small group of
highest-development regions, the clusters with mid-to-high and low economic
development, and the ""poorest"" regions. It is observable that the development
in most regions relies upon a resource economy, and the industrial potential as
well as inter-regional infrastructural potential are not realized to their
fullest; only the wealthiest regions show a highly developed economy, while
the industry in other regions shows signs of stagnation, which is scaled further
due to the conditions entailed by economic sanctions and the recent Covid-19
pandemic. Most Russian regions are in need of additional public support and
industrial development, as their capital assets potential is hampered and,
while having sufficient labor resources, their donorship will increase.",Analysis of Regional Cluster Structure By Principal Components Modelling in Russian Federation,2020-10-21 02:31:39,Alexander V. Bezrukov,"http://arxiv.org/abs/2010.10625v1, http://arxiv.org/pdf/2010.10625v1",econ.GN
31872,gn,"Four radical ideas are presented. First, that the rationale for cancellation
of principal can be modified in modern banking. Second, that non-cancellation
of loan principal upon payment may cure an old problem of maintenance of
positive equity in the non-governmental sector. Third, that crediting this
money to local/state government, and fourth crediting to at-risk loans that
create new utility value, creates an additional virtuous monetary circuit that
ties finances of government directly to commercial activity.
  Taking these steps can cure a problem I have identified with modern monetary
theory, which is that breaking the monetary circuit of taxation in the minds of
politicians will free them from centuries of restraint, optimizing their
opportunities for implementing tyranny. It maintains and strengthens the
current circuit, creating a new, more direct monetary circuit that in some
respects combats inequality.",Cancellation of principal in banking: Four radical ideas emerge from deep examination of double entry bookkeeping in banking,2020-10-21 04:27:35,Brian P. Hanley,"http://arxiv.org/abs/2010.10703v3, http://arxiv.org/pdf/2010.10703v3",econ.GN
31873,gn,"Bank's asset fire sales and recourse to central bank credit are modelled with
continuous asset liquidity, allowing us to derive the liability structure of a
bank. Both asset sales liquidity and the central bank collateral framework are
modeled as power functions within the unit interval. Funding stability is
captured as a strategic bank run game in pure strategies between depositors.
Fire sale liquidity and the central bank collateral framework determine jointly
the ability of the banking system to deliver maturity transformation without
endangering financial stability. The model also explains why banks tend to use
the least liquid eligible collateral with the central bank and why a sudden
non-anticipated reduction of asset liquidity, or a tightening of the collateral
framework, can trigger a bank run. The model also shows that the collateral
framework can be understood, beyond its aim to protect the central bank, as
financial stability and non-conventional monetary policy instrument.","Fire Sales, the LOLR and Bank Runs with Continuous Asset Liquidity",2020-10-21 17:22:01,"Ulrich Bindseil, Edoardo Lanari","http://arxiv.org/abs/2010.11030v1, http://arxiv.org/pdf/2010.11030v1",econ.GN
31874,gn,"I measure the uncertainty affecting estimates of economic inequality in the
US and investigate how accounting for properly estimated standard errors can
affect the results of empirical and structural macroeconomic studies. In my
analysis, I rely upon two data sets: the Survey of Consumer Finances (SCF),
which is a triennial survey of household financial condition, and the
Individual Tax Model Public Use File (PUF), an annual sample of individual
income tax returns. While focusing on the six income and wealth shares of the
top 10 to the top 0.01 percent between 1988 and 2018, my results suggest that
ignoring uncertainties in estimated wealth and income shares can lead to
erroneous conclusions about the current state of the economy and, therefore,
lead to inaccurate predictions and ineffective policy recommendations. My
analysis suggests that for the six top-decile income shares under
consideration, the PUF estimates are considerably better than those constructed
using the SCF; for wealth shares of the top 10 to the top 0.5 percent, the SCF
estimates appear to be more reliable than the PUF estimates; finally, for the
two most granular wealth shares, the top 0.1 and 0.01 percent, both data sets
present non-trivial challenges that cannot be readily addressed.",Quantifying Uncertainties in Estimates of Income and Wealth Inequality,2020-10-21 22:34:07,Marta Boczon,"http://arxiv.org/abs/2010.11261v1, http://arxiv.org/pdf/2010.11261v1",econ.GN
31875,gn,"Higher duration of programs that involve legal protection may entail gradual
positive changes in social norms that can be leveraged by potential
beneficiaries in their favor. This paper examines the heterogeneous impact of
the duration of exposure to gender-neutral reforms in the inheritance law in
India on two latent domains of women empowerment: intrinsic, which pertains to
the expansion of agency, and instrumental, which relates to the ability to make
decisions. The time lag between the year of the amendment in the respective
states and the year of marriage generates exogenous variation in reform exposure
across women. The findings indicate a significant non-linear increase in the
instrumental as well as intrinsic empowerment. Importantly, improvements in
education along with increase in the age of marriage and changes in family
structure are found to be the potential channels that signal gradual relaxation
of social norms and explain the higher returns to exposure on empowerment.",Duration of exposure to inheritance law in India: Examining the heterogeneous effects on empowerment,2020-10-22 08:55:43,"Shreya Biswas, Upasak Das, Prasenjit Sarkhel","http://arxiv.org/abs/2010.11460v1, http://arxiv.org/pdf/2010.11460v1",econ.GN
31876,gn,"This study analyses the current viability of this business based on a sample
of European countries in the year 2019; countries where electricity prices
(day-ahead market) and financial conditions show a certain degree of
heterogeneity. We basically follow a sequence of three analyses in our study.
Firstly, a Mixed-Integer Linear Programming model has been developed to
optimize the arbitrage strategy for each country in the sample. Secondly, using
the cash-flows from the optimization model, we calculate two financial
indicators (NPV and IRR) in order to select the optimal converter size for each
country. Tax and discount rates specific to each country have been used with
the calculation of this second rate following the methodology proposed by the
Spanish regulator. Thirdly, a mixed linear regression model is proposed in
order to investigate the importance of observed and unobserved heterogeneity
(at country level) in explaining the business profitability.",An assessment of European electricity arbitrage using storage systems,2020-10-22 20:45:18,"Fernando Núñez, David Canca, Ángel Arcos-Vargas","http://dx.doi.org/10.1016/j.energy.2021.122916, http://arxiv.org/abs/2010.11912v1, http://arxiv.org/pdf/2010.11912v1",econ.GN
31878,gn,"Basic broadband connectivity is regarded as generally having a positive
macroeconomic effect. However, over the past decade there has been an emerging
school of thought suggesting the impacts of upgrading to higher speed broadband
have been overstated, potentially leading to the inefficient allocation of
taxpayer-funded subsidies. In this analysis we model the impacts of Next
Generation Access on new business creation using high-resolution panel data.
After controlling for a range of factors, the results provide evidence of a
small but significant negative impact of high-speed broadband on new business
creation over the study period which we suggest could be due to two factors.
Firstly, moving from basic to high-speed broadband provides few benefits to
enable new businesses being formed. Secondly, strong price competition and
market consolidation from online service providers (e.g. Amazon etc.) may be
deterring new business start-ups. This analysis provides another piece of
evidence to suggest that the economic impact of broadband is more nuanced than
the debate has traditionally suggested. Our conjecture is that future policy
decisions need to be more realistic about the potential economic impacts of
broadband, including those effects that could be negative on the stock of local
businesses and therefore the local tax base.",Evaluating the impact of next generation broadband on local business creation,2020-10-27 10:54:31,"Philip Chen, Edward J Oughton, Pete Tyler, Mo Jia, Jakub Zagdanski","http://arxiv.org/abs/2010.14113v1, http://arxiv.org/pdf/2010.14113v1",econ.GN
31879,gn,"Are low-income individuals relying on government transfers liquidity
constrained by the end of the month to a degree that they postpone medical
treatment? I investigate this question using Danish administrative data
comprising the universe of welfare recipients and the filling of all
prescription drugs. I find that on transfer income payday, recipients have a
52% increase in the propensity to fill a prescription. By separating
prophylaxis drugs used to treat chronic conditions, where the patient can
anticipate the need to fill the prescription, e.g. cholesterol-lowering
statins, I find an increase of up to 99% on payday. Even for drugs
used to treat acute conditions, where timely treatment is essential, I find a
22% increase on payday for antibiotics and a 5-8% decrease in the four days
preceding payday. Lastly, exploiting the difference between the day the doctor
writes the prescription and the day the patient fills it, I show that liquidity
constraints are the key operating mechanism for postponing antibiotic treatment.",Liquidity Constraints and Demand for Healthcare: Evidence from Danish Welfare Recipients,2020-10-28 01:27:53,Frederik Plesner Lyngse,"http://arxiv.org/abs/2010.14651v1, http://arxiv.org/pdf/2010.14651v1",econ.GN
31880,gn,"How should central banks optimally aggregate sectoral inflation rates in the
presence of imperfect labor mobility across sectors? We study this issue in a
two-sector New-Keynesian model and show that a lower degree of sectoral labor
mobility, ceteris paribus, increases the optimal weight on inflation in a
sector that would otherwise receive a lower weight. We analytically and
numerically find that, with limited labor mobility, adjustment to asymmetric
shocks cannot fully occur through the reallocation of labor, thus putting more
pressure on wages, causing inefficient movements in relative prices, and
creating scope for central bank intervention. These findings challenge
standard central bank practice of computing sectoral inflation weights based
solely on sector size, and unveil a significant role for the degree of sectoral
labor mobility to play in the optimal computation. In an extended estimated
model of the U.S. economy, featuring customary frictions and shocks, the
estimated inflation weights imply a decrease in welfare up to 10 percent
relative to the case of optimal weights.",Sectoral Labor Mobility and Optimal Monetary Policy,2020-10-28 02:37:00,"Alessandro Cantelmo, Giovanni Melina","http://arxiv.org/abs/2010.14668v2, http://arxiv.org/pdf/2010.14668v2",econ.GN
31881,gn,"In this paper, I present a visual representation of the relationship between
mean hourly total compensation divided by per-capita GDP, hours worked per
capita, and the labor share, and show the represented labor equilibrium
equation is the definition of the labor share. I also present visual
examination of the productivity horizon and wage compression, and use these to
show the relationship between productivity, available employment per capita,
and minimum wage. From this I argue that wages are measured in relation to
per-capita GDP, and that minimum wage controls income inequality and
productivity growth.","Minimum Wage, Labor Equilibrium, and the Productivity Horizon: A Visual Examination",2020-10-28 02:43:38,John R. Moser,"http://arxiv.org/abs/2010.14669v3, http://arxiv.org/pdf/2010.14669v3",econ.GN
31882,gn,"The adoption of a ""makeup"" strategy is one of the proposals in the ongoing
review of the Fed's monetary policy framework. Another suggestion, to avoid the
zero lower bound, is a more active role for fiscal policy. We put together
these ideas to study monetary-fiscal interactions under price level targeting.
Under price level targeting and a fiscally-led regime, we find that following a
deflationary demand shock: (i) the central bank increases (rather than
decreases) the policy rate; (ii) the central bank, thus, avoids the zero lower
bound; (iii) price level targeting is generally welfare improving if compared
to inflation targeting.",Monetary-fiscal interactions under price level targeting,2020-10-28 16:41:24,"Guido Ascari, Anna Florio, Alessandro Gobbi","http://arxiv.org/abs/2010.14979v1, http://arxiv.org/pdf/2010.14979v1",econ.GN
31883,gn,"We study the effects on economic activity of a pure temporary change in
government debt and the relationship between the debt multiplier and the level
of debt in an overlapping generations framework. The debt multiplier is
positive but quite small during normal times while it is much larger during
crises. Moreover, it increases with the steady state level of debt. Hence, the
call for fiscal consolidation during recessions seems ill-advised. Finally, a
rise in the steady state debt-to-GDP level increases the steady state real
interest rate providing more room for manoeuvre to monetary policy to fight
deflationary shocks.",The public debt multiplier,2020-10-28 21:26:01,"Alice Albonico, Guido Ascari, Alessandro Gobbi","http://dx.doi.org/10.1016/j.jedc.2021.104204, http://arxiv.org/abs/2010.15165v2, http://arxiv.org/pdf/2010.15165v2",econ.GN
31895,gn,"We consider demand-side economy. Using Caratheodory's approach, we define
empirical existence of equation of state (EoS) and coordinates. We found new
insights of thermodynamics EoS, the {\it effect structure}. Rules are proposed
as criteria in promoting and classifying an empirical law to EoS status. Four
laws of thermodynamics are given for economics. We proposed a method to model
the EoS with econometrics. Consumer surplus in economics can not be considered
as utility. Concepts such as total wealth, generalized utility and generalized
surplus are introduced. EoS provides solid foundation in statistical mechanics
modelling of economics and finance.",Thermodynamics Formulation of Economics,2020-12-02 23:14:09,Burin Gumjudpai,"http://arxiv.org/abs/2012.01505v3, http://arxiv.org/pdf/2012.01505v3",econ.GN
31884,gn,"Food insecurity is associated with increased risk for several health
conditions and with increased national burden of chronic disease. Key
determinants for household food insecurity are income and food costs. Forecasts
show household disposable income for 2020 expected to fall and for 2021 to rise
only slightly. Prices are forecast to rise. Thus, future increased food prices
would be a significant driver of greater food insecurity. We used structured
expert judgement elicitation, a well-established method for quantifying
uncertainty using experts. In July 2020, each expert estimated the median, 5th percentile
and 95th percentile quantiles of changes in price to April 2022 for ten food
categories under three end-2020 settlement Brexit scenarios: A: full WTO terms;
B: a moderately disruptive trade agreement (better than WTO); C: a minimally
disruptive trade agreement. When combined in proportions to calculate Consumer
Prices Index food basket costs, the median food price change under full WTO
terms is expected to be +17.9% [90% credible interval:+5.2%, +35.1%]; with
moderately disruptive trade agreement: +13.2% [+2.6%, +26.4%] and with a
minimally disruptive trade agreement +9.3% [+0.8%, +21.9%]. The number of
households experiencing food insecurity and its severity are likely to increase
because of expected sizeable increases in median food prices in the months
after Brexit, whereas low income group spending on food is unlikely to
increase, and may be further eroded by other factors not considered here (e.g.
COVID-19). Higher increases are more likely than lower rises and towards the
upper limits, these would entail severe impacts. Research showing a low food
budget leads to increasingly poor diet suggests that demand for health services
in both the short and longer term is likely to increase due to the effects of
food insecurity on the incidence and management of diet-sensitive conditions.",Anticipated impacts of Brexit scenarios on UK food prices and implications for policies on poverty and health: a structured expert judgement update,2020-10-29 14:00:19,"Martine J Barons, Willy Aspinall","http://arxiv.org/abs/2010.15484v1, http://arxiv.org/pdf/2010.15484v1",econ.GN
31885,gn,"The implementation of large-scale containment measures by governments to
contain the spread of the COVID-19 virus has resulted in a large supply and
demand shock throughout the global economy. Here, we use empirical vessel
tracking data and a newly developed algorithm to estimate the global maritime
trade losses during the first eight months of the pandemic. Our results show
widespread trade losses on a port level with the largest absolute losses found
for ports in China, the Middle-East and Western Europe, associated with the
collapse of specific supply-chains (e.g. oil, vehicle manufacturing). In total,
we estimate that global maritime trade reduced by -7.0% to -9.6% during the
first eight months of 2020, which is equal to around 206-286 million tonnes in
volume losses and up to 225-412 billion USD in value losses. The fishery,
mining and quarrying, electrical equipment and machinery manufacturing, and
transport equipment manufacturing sectors are hit hardest, with losses up to
11.8%. Moreover, we find a large geographical disparity in losses, with some
small islands developing states and low-income economies suffering the largest
relative trade losses. We find a clear negative impact of COVID-19 related
business and public transport closures on country-wide exports. Overall, we
show how real-time indicators of economic activity can support governments and
international organisations in economic recovery efforts and allocate funds to
the hardest hit economies and sectors.",The implications of large-scale containment policies on global maritime trade during the COVID-19 pandemic,2020-10-29 22:46:47,"Jasper Verschuur, Elco Koks, Jim Hall","http://arxiv.org/abs/2010.15907v2, http://arxiv.org/pdf/2010.15907v2",econ.GN
31886,gn,"The Deferred Acceptance algorithm is a popular school allocation mechanism
thanks to its strategy proofness. However, with application costs, strategy
proofness fails, leading to an identification problem. In this paper, I address
this identification problem by developing a new Threshold Rank setting that
models the entire rank order list as a one-step utility maximization problem. I
apply this framework to study student assignments in Chile. There are three
critical contributions of the paper. I develop a recursive algorithm to compute
the likelihood of my one-step decision model. Partial identification is
addressed by incorporating the outside value and the expected probability of
admission into a linear cost framework. The empirical application reveals that
although school proximity is a vital variable in school choice, student ability
is critical for ranking high academic score schools. The results suggest that
policy interventions such as tutoring aimed at improving student ability can
help increase the representation of low-income low-ability students in better
quality schools in Chile.",Preference Estimation in Deferred Acceptance with Partial School Rankings,2020-10-30 00:51:29,Shanjukta Nath,"http://arxiv.org/abs/2010.15960v1, http://arxiv.org/pdf/2010.15960v1",econ.GN
31887,gn,"This paper examines discrimination by early-stage investors based on startup
founders' gender and race using two complementary field experiments with real
U.S. venture capitalists. Results show the following. (i) Discrimination varies
depending on the context. Investors implicitly discriminate against female and
Asian founders when evaluating attractive startups, but they favor female and
Asian founders when evaluating struggling startups. This helps to reconcile the
contradictory results in the extant literature and confirms the theoretical
predictions of ""discrimination reversion"" and ""pro-cyclical discrimination""
phenomena. (ii) Among multiple coexisting sources of discrimination identified,
statistical discrimination and implicit discrimination are important reasons
for investors' ""anti-minority"" behaviors. A consistent estimator is developed
to measure the polarization of investors' discrimination behaviors and their
separate driving forces. (iii) Homophily exists when investors provide
anonymous encouragement to startups in a non-investment setting. (iv) There was
temporary, stronger discrimination against Asian founders during the COVID-19
outbreak.",Discrimination in the Venture Capital Industry: Evidence from Field Experiments,2020-10-30 08:27:34,Ye Zhang,"http://arxiv.org/abs/2010.16084v3, http://arxiv.org/pdf/2010.16084v3",econ.GN
31888,gn,"The paper presents the professor-student network of Nobel laureates in
economics. 74 of the 79 Nobelists belong to one family tree. The remaining 5
belong to 3 separate trees. There are 350 men in the graph, and 4 women. Karl
Knies is the central-most professor, followed by Wassily Leontief. No classical
and few neo-classical economists have left notable descendants. Harvard is the
central-most university, followed by Chicago and Berlin. Most candidates for
the Nobel prize belong to the main family tree, but new trees may arise for the
students of Terence Gorman and Denis Sargan.",Rise of the Kniesians: The professor-student network of Nobel laureates in economics,2020-11-26 21:49:57,Richard S. J. Tol,"http://arxiv.org/abs/2012.00103v2, http://arxiv.org/pdf/2012.00103v2",econ.GN
31924,gn,"We examine how Green governments influence environmental, macroeconomic, and
education outcomes. We exploit that the Fukushima nuclear disaster in Japan
gave rise to an unanticipated change in government in the German state
Baden-Wuerttemberg in 2011. Using the synthetic control method, we find no
evidence that the Green government influenced CO2 emissions or increased
renewable energy usage overall. The share of wind power usage even decreased.
Intra-ecological conflicts prevented the Green government from implementing
drastic changes in environmental policies. The results do not suggest that the
Green government influenced macroeconomic outcomes. Inclusive education
policies caused comprehensive schools to become larger.",Green governments,2020-12-17 22:59:55,"Niklas Potrafke, Kaspar Wuthrich","http://arxiv.org/abs/2012.09906v4, http://arxiv.org/pdf/2012.09906v4",econ.GN
31889,gn,"Ad exchanges, i.e., platforms where real-time auctions for ad impressions
take place, have developed sophisticated technology and data ecosystems to
allow advertisers to target users, yet advertisers may not know which sites
their ads appear on, i.e., the ad context. In practice, ad exchanges can
require publishers to provide accurate ad placement information to ad buyers
prior to submitting their bids, allowing them to adjust their bids for ads at
specific domains, subdomains or URLs. However, ad exchanges have historically
been reluctant to disclose placement information due to fears that buyers will
start buying ads only on the most desirable sites leaving inventory on other
sites unsold and lowering average revenue. This paper explores the empirical
effect of ad placement disclosure using a unique data set describing a change
in context information provided by a major private European ad exchange.
Analyzing this as a quasi-experiment using diff-in-diff, we find that average
revenue per impression rose when more context information was provided. This
shows that ad context information is important to ad buyers and that providing
more context information will not lead to deconflation. The exception to this
are sites which had a low number of buyers prior to the policy change;
consistent with theory, these sites with thin markets do not show a rise in
prices. Our analysis adds evidence that ad exchanges with reputable publishers,
particularly smaller volume, high quality sites, should provide ad buyers with
site placement information, which can be done at almost no cost.",Context information increases revenue in ad auctions: Evidence from a policy change,2020-12-02 00:30:37,"Sıla Ada, Nadia Abou Nabout, Elea McDonnell Feit","http://arxiv.org/abs/2012.00840v1, http://arxiv.org/pdf/2012.00840v1",econ.GN
31890,gn,"With Australia's significant capacity of household PV, decreasing battery
costs may lead to widespread use of household PV-battery systems. As the
adoption of these systems are heavily influenced by retail tariffs, this paper
analyses the effects of flat retail tariffs with households free to invest in
PV battery systems. Using Perth, Australia for context, an open-source model is
used to simulate household PV battery investments over a 20-year period. We
find that flat usage and feed-in tariffs lead to distinct residual demand
patterns as households transition from PV-only to PV-battery systems.
Analysing these patterns qualitatively from the bottom-up, we identify tipping
point transitions that may challenge future electricity system management,
market participation and energy policy. The continued use of flat tariffs
incentivises PV-battery households to maximise self-consumption, which reduces
annual grid-imports, increases annual grid-exports, and shifts residual demand
towards winter. Diurnal and seasonal demand patterns continue to change as
PV-battery households eventually become net-generators. Unmanaged, these
bottom-up changes may complicate energy decarbonisation efforts within
centralised electricity markets and suggest that policymakers should prepare
for PV-battery households to play a more active role in the energy system.",Molehills into mountains: Transitional pressures from household PV-battery adoption under flat retail and feed-in tariffs,2020-12-02 05:43:21,"Kelvin Say, Michele John","http://arxiv.org/abs/2012.00934v1, http://arxiv.org/pdf/2012.00934v1",econ.GN
31891,gn,"We examine the impact of labour law deregulations in the Indian state of
Rajasthan on plant employment and productivity. In 2014, after a long period
without such reforms, Rajasthan became the first Indian state to introduce
labour reforms in the
Industrial Disputes Act (1947), the Factories Act (1948), the Contract Labour
(Regulation and Abolition) Act (1970), and the Apprentices Act (1961).
Exploiting this unique quasi-natural experiment, we apply a
difference-in-difference framework using the Annual Survey of Industries panel
data of manufacturing plants. Our results show that the reforms had the
unintended consequence of a decline in labour use. Also, worryingly, the
flexibility resulted in a disproportionate decline in directly employed workers.
Evidence suggests that the reforms positively impacted the plants' value-added
and productivity. The strength of these effects varies depending on the
underlying industry and reform structure. These findings prove robust to a set
of specifications.",Labor Reforms in Rajasthan: A boon or a bane?,2020-12-02 11:13:25,"Diti Goswami, Sourabh Bikas Paul","http://arxiv.org/abs/2012.01016v1, http://arxiv.org/pdf/2012.01016v1",econ.GN
31892,gn,"Individuals' behavior in economic decisions depends on such factors as
ethnicity, gender, social environment, and personal traits. However, the
distinctive features of decision making have not been studied properly so far
between indigenous populations from different ethnicities in a modern and
multinational state like the Russian Federation. Addressing this issue, we
conducted a series of experiments between the Russians in Moscow (the capital
of Russia) and the Yakuts in Yakutsk (the capital of a Russian region with
mostly non-Russian residents). We investigated the effect of socialization on
participants' strategies in the Prisoner's Dilemma game, Ultimatum game, and
Trust game. At the baseline stage, before socialization, the rates of
cooperation, egalitarianism, and trust for the Yakuts are higher than for the
Russians in groups composed of unfamiliar people. After socialization, for the
Russians all these indicators increase considerably; whereas, for the Yakuts
only the rate of cooperation demonstrates a rising trend. The Yakuts are
characterized by relatively unchanged indicators regardless of the
socialization stage. Furthermore, Yakut females have higher rates of
cooperation and trust than Yakut males before socialization. After
socialization, we observed the alignment in indicators for males and females
both for the Russians and for the Yakuts. Hence, we concluded that cultural
differences can exist within one country despite equal economic, political,
and social conditions.",Ethnicity and gender influence the decision making in a multinational state: The case of Russia,2020-12-02 18:27:31,"Tatiana Kozitsina, Anna Mikhaylova, Anna Komkova, Anastasia Peshkovskaya, Anna Sedush, Olga Menshikova, Mikhail Myagkov, Ivan Menshikov","http://arxiv.org/abs/2012.01272v1, http://arxiv.org/pdf/2012.01272v1",econ.GN
31893,gn,"Real-world problems are becoming highly complex and, therefore, have to be
solved with combinatorial optimisation (CO) techniques. Motivated by the strong
increase of publications on CO, 8,393 articles from this research field are
subjected to a bibliometric analysis. The corpus of literature is examined
using mathematical methods and a novel algorithm for keyword analysis. In
addition to the most relevant countries, organisations and authors as well as
their collaborations, the most relevant CO problems, solution methods and
application areas are presented. Publications on CO focus mainly on the
development or enhancement of metaheuristics like genetic algorithms. The
increasingly problem-oriented studies deal particularly with real-world
applications within the energy sector, production sector or data management,
which are of increasing relevance due to various global developments. The
demonstration of global research trends in CO can support researchers in
identifying the relevant issues regarding this expanding and transforming
research area.",Research trends in combinatorial optimisation,2020-12-02 19:05:17,"Jann Michael Weinand, Kenneth Sörensen, Pablo San Segundo, Max Kleinebrahm, Russell McKenna","http://arxiv.org/abs/2012.01294v1, http://arxiv.org/pdf/2012.01294v1",econ.GN
31896,gn,"Men and women systematically differ in their beliefs about their performance
relative to others; in particular, men tend to be more overconfident. This
paper provides support for one explanation for gender differences in
overconfidence, performance-motivated reasoning, in which people distort how
they process new information in ways that make them believe they outperformed
others. Using a large online experiment, I find that male subjects distort
information processing in ways that favor their performance, while female
subjects do not systematically distort information processing in either
direction. These statistically-significant gender differences in
performance-motivated reasoning mimic gender differences in overconfidence;
beliefs of male subjects are systematically overconfident, while beliefs of
female subjects are well-calibrated on average. The experiment also includes
political questions, and finds that politically-motivated reasoning is similar
for both men and women. These results suggest that, while men and women are
both susceptible to motivated reasoning in general, men find it particularly
attractive to believe that they outperformed others.",Gender Differences in Motivated Reasoning,2020-12-03 00:24:01,Michael Thaler,"http://arxiv.org/abs/2012.01538v3, http://arxiv.org/pdf/2012.01538v3",econ.GN
31897,gn,"People often receive good news that makes them feel better or bad news that
makes them feel worse about the world around them. This paper studies how good
news and bad news affect belief updating. Using two experiments with over 1,500
subjects and 5,500 observations, I test whether people engage in motivated
reasoning to overly trust good news versus bad news on issues such as cancer
survival rates, others' happiness, and infant mortality. The estimate for
motivated reasoning towards good news is a precisely-estimated null. Modest
effects, of one-third the size of motivated reasoning about politics and
self-image, can be ruled out. Complementary survey evidence shows that most
people expect good news to make people happier, but to not lead to motivated
reasoning, suggesting that news that makes people feel better is not sufficient
to lead them to distort belief updating to favor those beliefs.",Good News Is Not a Sufficient Condition for Motivated Reasoning,2020-12-03 00:43:02,Michael Thaler,"http://arxiv.org/abs/2012.01548v3, http://arxiv.org/pdf/2012.01548v3",econ.GN
31898,gn,"Motivated reasoning posits that people distort how they process information
in the direction of beliefs they find attractive. This paper creates a novel
experimental design to identify motivated reasoning from Bayesian updating when
people have preconceived beliefs. It analyzes how subjects assess the veracity
of information sources that tell them the median of their belief distribution
is too high or too low. Bayesians infer nothing about the source veracity, but
motivated beliefs are evoked. Evidence supports politically-motivated reasoning
about immigration, income mobility, crime, racial discrimination, gender,
climate change, and gun laws. Motivated reasoning helps explain belief biases,
polarization, and overconfidence.",The Fake News Effect: Experimentally Identifying Motivated Reasoning Using Trust in News,2020-12-03 05:33:04,Michael Thaler,"http://arxiv.org/abs/2012.01663v4, http://arxiv.org/pdf/2012.01663v4",econ.GN
31899,gn,"I estimate the semi-elasticity of blood donations with respect to a monetary
benefit, namely the waiver of user fees when using the National Health Service,
in Portugal. Using within-county variation over time in the value of the
benefit, I estimate both the unconditional elasticity, which captures the overall
response of the market, and the conditional elasticity, which holds constant
the number of blood drives. This amounts to fixing a measure of the cost of
donation to the blood donor. I instrument for the number of blood drives, which
is endogenous, using a variable based on the number of weekend days and the
proportion of blood drives on weekends. A one euro increase in the subsidy
leads to 1.8% more donations per 10,000 inhabitants, conditional on the number of
blood drives. The unconditional effect is smaller. The benefit does not attract
new donors; instead, it fosters repeat donation. Furthermore, the
discontinuation of the benefit led to a predicted decrease in donations of
around 18%, on average. However, I show that blood drives have the potential to
effectively substitute monetary incentives in solving market imbalances.",Estimating the Blood Supply Elasticity: Evidence from a Universal Scale Benefit Scheme,2020-12-03 13:42:49,Sara R. Machado,"http://arxiv.org/abs/2012.01814v1, http://arxiv.org/pdf/2012.01814v1",econ.GN
31900,gn,"The COVID-19 crisis has led to the sharpest collapse in the Spanish trade of
goods and services in recent decades. The containment measures adopted to
arrest the spread of the virus have caused an especially intense fall of trade
in services. Spain's export specialization in transport equipment, capital and
outdoor goods, and services that rely on the movement of people has made the
COVID-19 trade crisis more intense in Spain than in the rest of the European
Union. However, the nature of the collapse suggests that trade in goods can
recover swiftly when the health crisis ends. On the other hand, COVID-19 may
have a long-term negative impact on the trade of services that rely on the
movement of people.",Impact of COVID-19 on the trade of goods and services in Spain,2020-12-03 16:35:54,Asier Minondo,"http://arxiv.org/abs/2012.01903v1, http://arxiv.org/pdf/2012.01903v1",econ.GN
31901,gn,"This paper examines the evolution of business and consumer uncertainty amid
the coronavirus pandemic in 32 European countries and the European Union
(EU). Since uncertainty is not directly observable, we approximate it using the
geometric discrepancy indicator of Claveria et al. (2019). This approach allows
us to quantify the proportion of disagreement in business and consumer
expectations across the 32 countries. We use information from all monthly
forward-looking questions contained in the Joint Harmonised Programme of
Business and Consumer Surveys conducted by the European Commission (the
industry, service, retail trade, building and consumer surveys). First, we
calculate a discrepancy indicator for each of the 17 survey questions analysed,
which allows us to approximate the proportion of uncertainty about different
aspects of economic activity, from both the demand and the supply sides of the
economy. We then use these indicators to calculate disagreement indices at the
sector level. We chart the evolution of the degree of uncertainty in the main
economic sectors of the analysed economies up to June 2020. We observe marked
differences across variables, sectors and countries since the inception of the
COVID-19 crisis. Finally, by aggregating the sectoral indicators, an indicator
of business uncertainty is calculated and compared with that of consumers.
Again, we find substantial differences in the evolution of uncertainty between
managers and consumers. This
analysis seeks to offer a global overview of the degree of economic uncertainty
in the midst of the coronavirus crisis at the sectoral level.",Business and consumer uncertainty in the face of the pandemic: A sector analysis in European countries,2020-12-03 20:28:25,Oscar Claveria,"http://arxiv.org/abs/2012.02091v1, http://arxiv.org/pdf/2012.02091v1",econ.GN
31902,gn,"It is important and informative to compare and contrast major economic crises
in order to confront novel and unknown cases such as the COVID-19 pandemic. The
2006 Great Recession and then the 2019 pandemic have a lot to share in terms of
unemployment rate, consumption expenditures, and interest rates set by Federal
Reserve. In addition to quantitative historical data, it is also interesting to
compare the contents of Federal Reserve statements for the period of these two
crises and find out whether Federal Reserve cares about similar concerns or
there are some other issues that demand separate and unique monetary policies.
This paper conducts an analysis to explore the Federal Reserve concerns as
expressed in their statements for the period of 2005 to 2020. The concern
analysis is performed using natural language processing (NLP) algorithms and a
trend analysis of concern is also presented. We observe that there are some
similarities between the Federal Reserve statements issued during the Great
Recession and those issued during the COVID-19 pandemic.",A Concern Analysis of FOMC Statements Comparing The Great Recession and The COVID-19 Pandemic,2020-12-03 20:38:53,"Luis Felipe Gutiérrez, Sima Siami-Namini, Neda Tavakoli, Akbar Siami Namin","http://arxiv.org/abs/2012.02098v1, http://arxiv.org/pdf/2012.02098v1",econ.GN
31903,gn,"A minimal central bank credibility, with a non-zero probability of not
reneging on his commitment (""quasi-commitment""), is a necessary condition for
anchoring inflation expectations and stabilizing inflation dynamics. By
contrast, a complete lack of credibility, with the certainty that the policy
maker will renege on his commitment (""optimal discretion""), leads to the local
instability of inflation dynamics. In the textbook example of the new-Keynesian
Phillips curve, the response of the policy instrument to inflation gaps for
optimal policy under quasi-commitment has an opposite sign than in optimal
discretion, which explains this bifurcation.",Imperfect Credibility versus No Credibility of Optimal Monetary Policy,2020-12-04 18:37:06,"Jean-Bernard Chatelain, Kirsten Ralf","http://arxiv.org/abs/2012.02662v1, http://arxiv.org/pdf/2012.02662v1",econ.GN
31904,gn,"The aim of the present paper is to provide criteria for a central bank of how
to choose among different monetary-policy rules when caring about a number of
policy targets such as the output gap and expected inflation. Special attention
is given to the question of whether policy instruments are predetermined or only
forward looking. Using the new-Keynesian Phillips curve with a cost-push-shock
policy-transmission mechanism, the forward-looking case implies an extreme lack
of robustness and of credibility of stabilization policy. The backward-looking
case is such that the simple-rule parameters can be the solution of Ramsey
optimal policy under limited commitment. As a consequence, we suggest
explicitly modelling the rational behavior of the policy maker with Ramsey
optimal policy, rather than using simple rules with an ambiguous assumption leading to
policy advice that is neither robust nor credible.",Policy Maker's Credibility with Predetermined Instruments for Forward-Looking Targets,2020-12-04 22:21:25,"Jean-Bernard Chatelain, Kirsten Ralf","http://arxiv.org/abs/2012.02806v1, http://arxiv.org/pdf/2012.02806v1",econ.GN
31905,gn,"We review economic research regarding the decision making processes of
individuals, with a particular focus on papers that analyze the factors
affecting decision making across the evolution of the history of economic
thought. The factors discussed here are psychological,
emotional, cognitive systems, and social norms. Apart from analyzing these
factors, it deals with the reasons behind the limitations of rational
decision-making theory in individual decision making and the need for a
behavioral theory of decision making. In this regard, it has also reviewed the
role of situated learning in the decision-making process.",Decision making in Economics -- a behavioral approach,2020-12-05 10:33:32,Amitesh Saha,"http://arxiv.org/abs/2012.02968v1, http://arxiv.org/pdf/2012.02968v1",econ.GN
31906,gn,"This study aimed to find new perspectives on the use of humor through digital
media. A qualitative approach was used to conduct this study, where data were
collected through a literature review. Stress is caused by a person's inability
to reconcile desires with reality. All forms of stress are fundamentally caused
by a lack of understanding of one's own limitations; the inability to overcome
these limitations causes frustration, conflict, anxiety, and guilt. Too much
stress can threaten a person's ability to deal with the environment. As a
result, employees develop various kinds of stress symptoms that can interfere
with their work performance. Thus, managing work stress is important, and humor
is one way to do so. In the digital age, moreover, humor can spread easily.
This review identifies new perspectives for reducing stress through digital
humor, namely interactive humor, funny photos, manipulations, phanimation,
celebrity soundboards, and PowerPoint humor. The research shows that the use of
humor as a coping strategy predicts positive affect and work-related
well-being. Moreover, because digital humor takes many forms and spreads
easily, quickly, and widely, its effect is increasingly significant.",New Perspectives to Reduce Stress through Digital Humor,2020-12-06 02:32:01,"Misnal Munir, Amaliyah, Moses Glorino Rumambo Pandin","http://arxiv.org/abs/2012.03144v1, http://arxiv.org/pdf/2012.03144v1",econ.GN
31907,gn,"Social capital creates a synergy that benefits all members of a community.
This review examines how social capital contributes to the food security of
communities. A systematic literature review, based on PRISMA, is designed to
provide a state-of-the-art review of the capacity of social capital in this
realm. This method yielded 39 related articles. Studying these
articles illustrates that social capital improves food security through two
mechanisms of knowledge sharing and product sharing (i.e., sharing food
products). It reveals that social capital through improving the food security
pillars (i.e., food availability, food accessibility, food utilization, and
food system stability) affects food security. In other words, the interaction
among the community members results in sharing food products and information
among community members, which facilitates food availability and access to
food. There is considerable evidence in the literature that sharing food and
food products among community members decreases household food insecurity,
provides healthy nutrition to vulnerable families, and improves the food
utilization pillar of food security. It is also shown that belonging to social
networks increases community members' resilience and decreases the community's
vulnerability, which subsequently strengthens the stability of a food
system. This study contributes to the common literature on food security and
social capital by providing a conceptual model based on the literature. In
addition to researchers, policymakers can use this study's findings to provide
solutions to address food insecurity problems.",Social Capital Contributions to Food Security: A Comprehensive Literature Review,2020-12-07 14:44:48,"Saeed Nosratabadi, Nesrine Khazami, Marwa Ben Abdallah, Zoltan Lackner, Shahab S. Band, Amir Mosavi, Csaba Mako","http://arxiv.org/abs/2012.03606v1, http://arxiv.org/pdf/2012.03606v1",econ.GN
31908,gn,"I study the economic effects of testing during the outbreak of a novel
disease. I propose a model where testing permits isolation of the infected and
provides agents with information about the prevalence and lethality of the
disease. Additional testing reduces the perceived lethality of the disease, but
might increase the perceived risk of infection. As a result, more testing could
increase the perceived risk of dying from the disease - i.e. ""stoke fear"" - and
cause a fall in economic activity, despite improving health outcomes. Two main
insights emerge. First, increased testing is beneficial to the economy and pays
for itself if performed at a sufficiently large scale, but not necessarily
otherwise. Second, heterogeneous risk perceptions across age-groups can have
important aggregate consequences. For a SARS-CoV-2 calibration of the model,
heterogeneous risk perceptions across young and old individuals mitigate GDP
losses by 50% and reduce the death toll by 30% relative to a scenario in which
all individuals have the same perceptions of risk.",The Testing Multiplier: Fear vs Containment,2020-12-07 19:34:55,Francesco Furno,"http://arxiv.org/abs/2012.03834v1, http://arxiv.org/pdf/2012.03834v1",econ.GN
31909,gn,"Cognitive skills are an important personal attribute that affects career
success. However, colleagues' support is also vital as most work is done in
groups, and the degree of their support is influenced by their generosity.
Social norms enter in groups, and gender may interact with cognitive skills
through gender norms in society. Because these gender norms penalize women with
high potential, they can reduce colleagues' generosity towards these women.
Using a novel experimental design where I exogenously vary gender and cognitive
skills and a sufficiently powered analysis, I find that neither attribute nor
their interaction affects other people's generosity; if anything, people are
more generous to women with high potential. I argue that my findings have
implications for the role of gender norms in labor markets.",The Role of Gender and Cognitive Skills on Other People's Generosity,2020-12-08 20:46:05,Yuki Takahashi,"http://arxiv.org/abs/2012.04591v2, http://arxiv.org/pdf/2012.04591v2",econ.GN
31910,gn,"Literature about the scholarly impact of scientific research offers very few
contributions on private sector research and its comparison with the public
sector. In this work, we try to fill this gap by examining the citation-based
impact of Italian 2010-2017 publications, distinguishing authorship by the
private sector from the public sector. In particular, we investigate the
relation between different forms of collaboration and impact: how intra-sector
private publications compare to public, and how private-public joint
publications compare to intra-sector extramural collaborations. Finally, we
assess the different effect of international collaboration on private and
public research impact, and whether differences occur across research
fields.",The relative impact of private research on scientific advancement,2020-12-09 11:23:31,"Giovanni Abramo, Ciriaco Andrea D'Angelo, Flavia Di Costa","http://arxiv.org/abs/2012.04908v1, http://arxiv.org/pdf/2012.04908v1",econ.GN
31911,gn,"This paper shows how nature (i.e., one's genetic endowments) and nurture
(i.e., one's environment) interact in producing educational attainment. Genetic
endowments are measured using a polygenic score for educational attainment,
while we use birth order as an important environmental determinant of
educational attainment. Since genetic endowments are randomly assigned
within-families and orthogonal to one's birth order, our family fixed effects
approach exploits exogenous variation in genetic endowments as well as
environments. We find that those with higher genetic endowments benefit
disproportionally more from being firstborn compared to those with lower
genetic endowments.",Nature-nurture interplay in educational attainment,2020-12-09 16:00:17,"Dilnoza Muslimova, Hans van Kippersluis, Cornelius A. Rietveld, Stephanie von Hinke, S. Fleur W. Meddens","http://arxiv.org/abs/2012.05021v3, http://arxiv.org/pdf/2012.05021v3",econ.GN
31912,gn,"We analyse the Bihar assembly elections of 2020, and find that poverty was
the key driving factor, over and above female voters as determinants. The
results show that the poor were more likely to support the NDA. This result is
particularly relevant for an election held in the midst of a pandemic, given
that the poor were the hardest hit. Secondly, in contrast to
conventional commentary, the empirical results show that the AIMIM-factor and
the LJP-factor hurt the NDA while benefitting the MGB, with their presence in
these elections. The methodological novelty in this paper is combining
elections data with wealth index data to study the effect of poverty on
elections outcomes.",Bihar Assembly Elections 2020: An Analysis,2020-12-11 11:36:54,"Mudit Kapoor, Shamika Ravi","http://arxiv.org/abs/2012.06192v1, http://arxiv.org/pdf/2012.06192v1",econ.GN
31913,gn,"This study investigates the relationship of the equity home bias with 1) the
country-level behavioral unfamiliarity, and 2) the home-foreign return
correlation. We set the hypotheses that 1) unfamiliarity about foreign equities
plays a role in the portfolio set up and 2) the correlation of return on home
and foreign equities affects the equity home bias when there is a lack of
information about foreign equities. For the empirical analysis, the proportion
of respondents to the question ""How much do you trust? - People you meet for
the first time"" is used as a proxy measure for country-specific unfamiliarity.
Based on the eleven developed countries for which such data are available, we
implement a feasible generalized least squares (FGLS) method. Empirical
results suggest that country-specific unfamiliarity has a significant and
positive correlation with the equity home bias. When it comes to the
correlation of return between home and foreign equities, we identify that there
is a negative correlation with the equity home bias, which is against our
hypothesis. Moreover, an excess return on home equities compared to foreign
ones is found to have a positive correlation with the equity home bias, which
is consistent with the comparative statics only if foreign investors have a
sufficiently higher risk aversion than domestic investors. We check the
robustness of our empirical analysis by fitting alternative specifications and
by using a log-transformed measure of the equity home bias, obtaining results
consistent with those based on the original measure.",Non-fundamental Home Bias in International Equity Markets,2020-12-12 06:48:56,Gyu Hyun Kim,"http://arxiv.org/abs/2012.06716v1, http://arxiv.org/pdf/2012.06716v1",econ.GN
31931,gn,"In this paper, we attempt to estimate the effect of the implementation of
subscription-based streaming services on the demand of the associated game
consoles. We do this by applying the BLP demand estimation model proposed by
Berry (1994). This results in a linear demand specification which can be
identified using conventional identification methods such as instrumental
variables estimation and fixed-effects models. We find that given our dataset,
the two-stage least squares (2SLS) regression provides us with convincing
estimates that subscription-based streaming services do have a positive
effect on the demand for game consoles, as proposed by the general principle of
complementary goods.",Estimating The Effect Of Subscription based Streaming Services On The Demand For Game Consoles,2020-12-23 17:30:55,"Tung Yu Marco Chan, Yue Zhang, Tsun Yi Yeung","http://arxiv.org/abs/2012.12704v1, http://arxiv.org/pdf/2012.12704v1",econ.GN
31914,gn,"Industries can enter one country first, and then enter its neighbors'
markets. Firms in an industry can expand their trade networks through the export
behavior of other firms in the industry. If a firm is dependent on a few
foreign markets, the political risks of the markets will hurt the firm. The
frequent trade disputes reflect the importance of the choice of export
destinations. Although the market diversification strategy was proposed before,
most firms still focus on a few markets, and this paper shows why. In this
paper, we assume that firms' entry costs are not entirely sunk, and show two
ways in which product heterogeneity impacts the extensive margin of exports theoretically
and empirically. Firstly, the increase in product heterogeneity promotes the
increase in market power and profit, and more firms are able to pay the entry
cost. If more firms enter the market, the information of the market will be
known by other firms in the industry. Firms can adjust their behavior according
to other firms, so the information changes entry cost and is not sunk cost
completely. The information makes firms more likely to enter the market, and to
enter markets surrounding the existing markets of other firms in the industry.
When firms choose new markets, they tend to enter markets with few competitors
first. Meanwhile, product heterogeneity directly affects firms' network
expansion, and a reduction in product heterogeneity increases the value of peer
information. This makes firms more likely to enter the market, and firms in the
industry concentrate on the same markets.",Product Differentiation and Geographical Expansion of Exports Network at Industry level,2020-12-13 12:14:46,Xuejian Wang,"http://arxiv.org/abs/2012.07008v1, http://arxiv.org/pdf/2012.07008v1",econ.GN
31915,gn,"Different regional reactions to war in 1894 and 1900 can significantly impact
Chinese imports in 2001. As international relations become tense and China
rises, international conflicts could decrease trade. We analyze the impact of
historic political conflict. We measure the regional change in the number of
people passing the imperial exam because of the war. The war led to an
unsuccessful reform and shocked elites. Elites in different regions had
different ideas about modernization, and the change in the number of people
passing the exam differs considerably across regions after the war. When the
regional number of people passing the exam increases by 1% after the war,
imports from the then empires decrease by 2.050% in 2001, which shows the
impact of cultural barriers. Manufactured goods can be affected because brands
are easily identified. Risk aversion towards expensive products in conservative
regions can increase imports of equipment. Value chains need deep trust, and
this decreases imports by foreign companies and assembly trade.",Impact of Regional Reactions to War on Contemporary Chinese Trade,2020-12-13 13:49:34,Xuejian Wang,"http://arxiv.org/abs/2012.07027v1, http://arxiv.org/pdf/2012.07027v1",econ.GN
31916,gn,"This article is a response to a question many economists ask: how can I
improve my first draft? The first section addresses a common approach to doing
this: treating problems visible on the surface. This paper presents six such
symptoms along with treatments for them. The second section addresses another
approach, one that often turns out to be more effective: looking deeper for the
underlying malady that causes several symptoms to show up on the surface and
treating it. This paper presents five common maladies that matter for eventual
outcomes, such as publishing and hiring.",Treating Research Writing: Symptoms and Maladies,2020-12-08 16:57:06,Varanya Chaubey,"http://arxiv.org/abs/2012.07787v1, http://arxiv.org/pdf/2012.07787v1",econ.GN
31917,gn,"In this paper we study the so-called minimum income condition order, which is
used in some day-ahead electricity power exchanges to represent the
production-related costs of generating units. This order belongs to the family
of complex orders, which imply non-convexities in the market clearing problem.
We demonstrate via simple numerical examples that if more of such bids are
present in the market, their interplay may open the possibility of strategic
bidding. More precisely, we show that by the manipulation of bid parameters, a
strategic player may increase its own profit and potentially induce the
deactivation of another minimum income condition order, which would be
accepted under truthful bidding. Furthermore, we show that if we modify the
objective function used in the market clearing according to principles
suggested in the literature, it is possible to prevent the possibility of such
strategic bidding, but the modification raises other issues.",Strategic bidding via the interplay of minimum income condition orders in day-ahead power exchanges,2020-12-07 14:44:54,Dávid Csercsik,"http://dx.doi.org/10.1016/j.eneco.2021.105126, http://arxiv.org/abs/2012.07789v1, http://arxiv.org/pdf/2012.07789v1",econ.GN
31918,gn,"The UK Welfare Reform Act 2012 imposed a series of deep welfare cuts, which
disproportionately affected ex-ante poorer areas. In this paper, we provide the
first evidence of the impact of these austerity measures on two different but
complementary elements of crime -- the crime rate and the less-studied
concentration of crime -- over the period 2011-2015 in England and Wales, and
document four new facts. First, areas more exposed to the welfare reforms
experience increased levels of crime, an effect driven by a rise in violent
crime. Second, both violent and property crime become more concentrated within
an area due to the welfare reforms. Third, it is ex-ante more deprived
neighborhoods that bear the brunt of the crime increases over this period.
Fourth, we find no evidence that the welfare reforms increased recidivism,
suggesting that the changes in crime we find are likely driven by new
criminals. Combining these results, we document unambiguous evidence of a
negative spillover of the welfare reforms at the heart of the UK government's
austerity program on social welfare, which reinforced the direct
inequality-worsening effect of this program. Guided by a hedonic house price
model, we calculate the welfare effects implied by the cuts in order to provide
a financial quantification of the impact of the reform. We document an implied
welfare loss of the policy -- borne by the public -- that far exceeds the
savings made to government coffers.",Kicking You When You're Already Down: The Multipronged Impact of Austerity on Crime,2020-12-15 10:39:39,"Corrado Giulietti, Brendon McConnell","http://arxiv.org/abs/2012.08133v4, http://arxiv.org/pdf/2012.08133v4",econ.GN
31932,gn,"This paper investigates the heterogeneous impacts of either Global or Local
Investor Sentiments on stock returns. We study 10 industry sectors through the
lens of six (so-called) emerging countries: China, Brazil, India, Mexico,
Indonesia and Turkey, over the 2000 to 2014 period. Using a panel data
framework, our study sheds light on a significant effect of Local Investor
Sentiments on expected returns for basic materials, consumer goods, industrial,
and financial industries. Moreover, our results suggest that from Global
Investor Sentiments alone, one cannot predict expected stock returns in these
markets.","If Global or Local Investor Sentiments are Prone to Developing an Impact on Stock Returns, is there an Industry Effect?",2020-12-22 01:51:42,"Jing Shi, Marcel Ausloos, Tingting Zhu","http://dx.doi.org/10.1002/ijfe.2216, http://arxiv.org/abs/2012.12951v1, http://arxiv.org/pdf/2012.12951v1",econ.GN
31919,gn,"Hardly any other area of research has recently attracted as much attention as
machine learning (ML), owing to the rapid advances in artificial intelligence
(AI). This publication provides a short introduction to practical concepts and
methods of machine learning, problems and emerging research questions, as well
as an overview of the participants, the application areas and
the socio-economic framework conditions of the research.
  In expert circles, ML is used as a key technology for modern artificial
intelligence techniques, which is why AI and ML are often used interchangeably,
especially in an economic context. Machine learning and, in particular, deep
learning (DL) opens up entirely new possibilities in automatic language
processing, image analysis, medical diagnostics, process management and
customer management. One of the important aspects in this article is
chipization. Due to the rapid development of digitalization, the number of
applications will continue to grow as digital technologies advance. In the
future, machines will more and more provide results that are important for
decision making. To this end, it is important to ensure the safety, reliability
and sufficient traceability of automated decision-making processes from the
technological side. At the same time, it is necessary to ensure that ML
applications are compatible with legal issues such as responsibility and
liability for algorithmic decisions, as well as technically feasible. The
formulation and regulatory implementation of such requirements is an important and complex issue
that requires an interdisciplinary approach. Last but not least, public
acceptance is critical to the continued diffusion of machine learning processes
in applications. This requires widespread public discussion and the involvement
of various social groups.","Development of cloud, digital technologies and the introduction of chip technologies",2020-12-16 14:05:41,Ali R. Baghirzade,"http://arxiv.org/abs/2012.08864v1, http://arxiv.org/pdf/2012.08864v1",econ.GN
31920,gn,"A substantial increase in illegal extraction of the benthic resources in
central Chile is likely driven by an interplay of numerous socio-economic local
factors that threatens the success of the fisheries management areas (MA)
system. To assess this problem, the exploitation state of a commercially
important benthic resource (i.e., keyhole limpet) in the MAs was related with
socio-economic drivers of the small-scale fisheries. The potential drivers of
illegal extraction included the rebound effect of fishing effort displacement by
MAs, level of enforcement, distance to surveillance authorities, wave exposure
and land-based access to the MA, and alternative economic activities in the
fishing village. The exploitation state of limpets was assessed by the
proportion of the catch that is below the minimum legal size, with high
proportions indicating a poor state, and by the relative median size of limpets
fished within the MAs in comparison with neighbouring open-access (OA) areas, with larger
relative sizes in the MA indicating a good state. A Bayesian-Belief Network
approach was adopted to assess the effects of potential drivers of illegal
fishing on the status of the benthic resource in the MAs. Results evidenced the
absence of a direct link between the level of enforcement and the status of the
resource, with other socio-economic (e.g., alternative economic activities in
the village) and context variables (e.g., fishing effort or distance to
surveillance authorities) playing important roles. Scenario analysis explored
variables that are susceptible to be managed, evidencing that BBN is a powerful
approach to explore the role of multiple external drivers, and their impact on
marine resources, in complex small-scale fisheries.",Disentangling the socio-ecological drivers behind illegal fishing in a small-scale fishery managed by a TURF system,2020-12-15 12:38:22,"Silvia de Juan, Maria Dulce Subida, Andres Ospina-Alvarez, Ainara Aguilar, Miriam Fernandez","http://arxiv.org/abs/2012.08970v1, http://arxiv.org/pdf/2012.08970v1",econ.GN
31921,gn,"The publication is one of the first studies of its kind, devoted to the
economic dimension of crimes against cultural and archaeological heritage. Lack
of research in this area is largely due to their irregular global prevalence
and the vague definition of the economic value of the damage these crimes cause
to society at the national and global level, and to present and future generations. The author uses
classical models of Becker and Freeman, by modifying and complementing them
with the tools of economics of culture based on the values of non-use. The
model tries to determine the opportunity costs of this type of crime in several
scenarios and, based on this, to determine the extent to which they can be
limited at an affordable cost to society while raising the public benefits of conservation of World
and National Heritage.",Economic dimension of crimes against cultural-historical and archaeological heritage (EN),2020-12-11 13:57:20,Shteryo Nozharov,"http://arxiv.org/abs/2012.09113v1, http://arxiv.org/pdf/2012.09113v1",econ.GN
31922,gn,"We show the recovery in consumer spending in the United Kingdom through the
second half of 2020 is unevenly distributed across regions. We utilise Fable
Data: a real-time source of consumption data that is a highly correlated,
leading indicator of Bank of England and Office for National Statistics data.
The UK's recovery is heavily weighted towards the ""home counties"" around outer
London and the South. We observe a stark contrast between strong online
spending growth while offline spending contracts. The strongest recovery in
spending is seen in online spending in the ""commuter belt"" areas in outer
London and the surrounding localities and also in areas of high second home
ownership, where working from home (including working from second homes) has
significantly displaced the location of spending. Year-on-year spending growth
in November 2020 in localities facing the UK's new tighter ""Tier 3""
restrictions (mostly the midlands and northern areas) was 38.4% lower compared
with areas facing the less restrictive ""Tier 2"" (mostly London and the South).
These patterns had been further exacerbated during November 2020 when a second
national lockdown was imposed. To prevent such COVID-19-driven regional
inequalities from becoming persistent we propose governments introduce
temporary, regionally-targeted interventions in 2021. The availability of
real-time, regional data enables policymakers to efficiently decide when, where
and how to implement such regional interventions and to be able to rapidly
evaluate their effectiveness to consider whether to expand, modify or remove
them.",Levelling Down and the COVID-19 Lockdowns: Uneven Regional Recovery in UK Consumer Spending,2020-12-17 03:27:31,"John Gathergood, Fabian Gunzinger, Benedict Guttman-Kenney, Edika Quispe-Torreblanca, Neil Stewart","http://arxiv.org/abs/2012.09336v2, http://arxiv.org/pdf/2012.09336v2",econ.GN
31923,gn,"In this paper we propose and solve a real options model for the optimal
adoption of an electric vehicle. A policymaker promotes the abeyance of
fossil-fueled vehicles through an incentive, and the representative
fossil-fueled vehicle's owner decides the time at which to buy an electric
vehicle, while minimizing a certain expected cost. This involves a combination
of various types of costs: the stochastic opportunity cost of driving one unit
distance with a traditional fossil-fueled vehicle instead of an electric one,
the cost associated with traffic bans, and the net purchase cost. After
determining the optimal switching time and the minimal cost function for a
general diffusive opportunity cost, we specialize to the case of a
mean-reverting process. In such a setting, we provide a model calibration on
real data from Italy, and we study the dependency of the optimal switching time
with respect to the model's parameters. Moreover, we study the effect of
traffic bans and incentive on the expected optimal switching time. We observe
that incentives and traffic bans on fossil-fueled transport can be used as
effective tools in the hands of the policymaker to encourage the adoption of
electric vehicles, and hence to reduce air pollution.",Optimal switch from a fossil-fueled to an electric vehicle,2020-12-17 13:34:44,"Paolo Falbo, Giorgio Ferrari, Giorgio Rizzini, Maren Diane Schmeck","http://dx.doi.org/10.1007/s10203-021-00359-2, http://arxiv.org/abs/2012.09493v1, http://arxiv.org/pdf/2012.09493v1",econ.GN
31925,gn,"The 2008 economic crisis was not forecastable by at that time existing models
of macroeconomics. Thus macroeconomics needs new tools. We introduce a model
based on National Accounts that shows how macroeconomic sectors are
interconnected. These connections explain the spread of business cycles from
one industry to another and from the financial sector to the real economy. These
linkages cannot be explained by General Equilibrium-type models. Our model
describes the real part of National Accounts (NA) of an economy. The accounts
are presented in the form of a money flow diagram between the following
macro-sectors: Non-financial firms, financial firms, households, government,
and rest of the world. The model contains all main items in NA and the
corresponding simulation model creates time paths for 59 key macroeconomic
quantities for an unlimited future. Finnish data of NA from time period
1975-2012 is used in calibrating the parameters of the model, and the model
follows the historical data with sufficient accuracy. Our study serves as a
basis for systems analytic macro-models that can explain the positive and
negative feed-backs in the production system of an economy. These feed-backs
are born from interactions between economic units and between real and
financial markets. JEL E01, E10.
  Key words: Stock-Flow Models, National Accounts, Simulation model.","National Accounts as a Stock-Flow Consistent System, Part 1: The Real Accounts",2020-12-21 15:22:21,"Matti Estola, Kristian Vepsäläinen","http://arxiv.org/abs/2012.11282v1, http://arxiv.org/pdf/2012.11282v1",econ.GN
31926,gn,"In 2014 the Patient Protection and Affordable Care Act (ACA) introduced the
expansion of Medicaid where states can opt to expand the eligibility for those
in need of free health insurance. In this paper, we attempt to assess the
effectiveness of Medicaid expansion on health outcomes of state populations
using Difference-in-Difference (DD) regressions to estimate the causal impacts of
expanding Medicaid on health outcomes in 49 states. We find that in the time
frame of 2013 to 2016, Medicaid expansion seems to have had no significant
impact on the health outcomes of states that have chosen to expand.",An Empirical Evaluation On The Effectiveness Of Medicaid Expansion Across 49 States,2020-12-21 15:27:16,Tung Yu Marco Chan,"http://arxiv.org/abs/2012.11286v1, http://arxiv.org/pdf/2012.11286v1",econ.GN
31927,gn,"This paper quantifies the significance and magnitude of the effect of
measurement error in remote sensing weather data in the analysis of smallholder
agricultural productivity. The analysis leverages 17 rounds of
nationally-representative, panel household survey data from six countries in
Sub-Saharan Africa. These data are spatially-linked with a range of geospatial
weather data sources and related metrics. We provide systematic evidence on
measurement error introduced by 1) different methods used to obfuscate the
exact GPS coordinates of households, 2) different metrics used to quantify
precipitation and temperature, and 3) different remote sensing measurement
technologies. First, we find no discernible effect of measurement error
introduced by different obfuscation methods. Second, we find that simple
weather metrics, such as total seasonal rainfall and mean daily temperature,
outperform more complex metrics, such as deviations in rainfall from the
long-run average or growing degree days, in a broad range of settings. Finally,
we find substantial amounts of measurement error based on remote sensing
product. In extreme cases, data drawn from different remote sensing products
result in opposite signs for coefficients on weather metrics, meaning that
precipitation or temperature drawn from one product purportedly increases crop
output while the same metrics drawn from a different product purportedly
reduces crop output. We conclude with a set of six best practices for
researchers looking to combine remote sensing weather data with socioeconomic
survey data.",Estimating the Impact of Weather on Agriculture,2020-12-22 03:41:06,"Jeffrey D. Michler, Anna Josephson, Talip Kilic, Siobhan Murray","http://arxiv.org/abs/2012.11768v3, http://arxiv.org/pdf/2012.11768v3",econ.GN
31928,gn,"Using recent data from voluntary mass testing, I provide credible bounds on
prevalence of SARS-CoV-2 for Austrian counties in early December 2020. When
estimating prevalence, a natural missing data problem arises: no test results
are generated for non-tested people. In addition, tests are not perfectly
predictive for the underlying infection. This is particularly relevant for mass
SARS-CoV-2 testing as these are conducted with rapid Antigen tests, which are
known to be somewhat imprecise. Using insights from the literature on partial
identification, I propose a framework addressing both issues at once. I use the
framework to study differing selection assumptions for the Austrian data.
Whereas weak monotone selection assumptions provide limited identification
power, reasonably stronger assumptions reduce the uncertainty on prevalence
significantly.",How many people are infected? A case study on SARS-CoV-2 prevalence in Austria,2020-12-22 17:12:06,Gabriel Ziegler,"http://arxiv.org/abs/2012.12020v1, http://arxiv.org/pdf/2012.12020v1",econ.GN
31929,gn,"Governments have used social distancing to stem the spread of COVID-19, but
lack evidence on the most effective policy to ensure compliance. We examine the
effectiveness of fines and informational messages (nudges) in promoting social
distancing in a web-based interactive experiment conducted during the first
wave of the pandemic on a near-representative sample of the US population.
Fines promote distancing, but nudges only have a marginal impact. Individuals
do more social distancing when they are aware they are a superspreader. Using
an instrumental variable approach, we argue progressives are more likely to
practice distancing, and they are marginally more responsive to fines.",Social distancing in networks: A web-based interactive experiment,2020-12-22 18:56:33,"Edoardo Gallo, Darija Barak, Alastair Langtry","http://arxiv.org/abs/2012.12118v4, http://arxiv.org/pdf/2012.12118v4",econ.GN
31930,gn,"The existence of involuntary unemployment advocated by J. M. Keynes is a very
important problem in modern economic theory. Using a three-generation
overlapping generations model, we show that the existence of involuntary
unemployment is due to the instability of the economy. Instability of the
economy is the instability of the difference equation about the equilibrium
price around the full-employment equilibrium, which means that a fall in the
nominal wage rate caused by the presence of involuntary unemployment further
reduces employment. This instability is due to the negative real balance effect
that occurs when consumers' net savings (the difference between savings and
pensions) are smaller than their debt multiplied by the marginal propensity to
consume from childhood consumption.",Involuntary unemployment in overlapping generations model due to instability of the economy,2020-12-13 12:23:09,Yasuhito Tanaka,"http://arxiv.org/abs/2012.12199v1, http://arxiv.org/pdf/2012.12199v1",econ.GN
34818,th,"We study the cake-cutting problem when agents have single-peaked preferences
over the cake. We show that a recently proposed mechanism by Wang-Wu (2019) to
obtain envy-free allocations can yield large welfare losses. Using a
simplifying assumption, we characterize all Pareto optimal allocations, which
have a simple structure: they are peak-preserving and non-wasteful. Finally, we
provide simple alternative mechanisms that Pareto dominate that of Wang-Wu, and
which achieve envy-freeness or Pareto optimality.",Fairness and Efficiency in Cake-Cutting with Single-Peaked Preferences,2020-02-08 17:41:25,"Bhavook Bhardwaj, Rajnish Kumar, Josue Ortega","http://arxiv.org/abs/2002.03174v3, http://arxiv.org/pdf/2002.03174v3",cs.GT
31933,gn,"Research shows that women volunteer significantly more for tasks that people
prefer others to complete. Such tasks carry little monetary incentives because
of their very nature. We use a modified version of the volunteer's dilemma game
to examine if non-monetary interventions, particularly, social recognition can
be used to change the gender norms associated with such tasks. We design three
treatments, where a) a volunteer receives positive social recognition, b) a
non-volunteer receives negative social recognition, and c) a volunteer receives
positive, but a non-volunteer receives negative social recognition. Our results
indicate that competition for social recognition increases the overall
likelihood that someone in a group has volunteered. Positive social recognition
closes the gender gap observed in the baseline treatment, so does the
combination of positive and negative social recognition. Our results,
consistent with the prior literature on gender differences in competition,
suggest that public recognition of volunteering can change the default gender
norms in organizations and increase efficiency at the same time.",Using social recognition to address the gender difference in volunteering for low-promotability tasks,2020-12-25 07:41:57,"Ritwik Banerjee, Priyoma Mustafi","http://arxiv.org/abs/2012.13514v1, http://arxiv.org/pdf/2012.13514v1",econ.GN
31934,gn,"The popularity of business intelligence (BI) systems to support business
analytics has tremendously increased in the last decade. The determination of
data items that should be stored in the BI system is vital to ensure the
success of an organisation's business analytic strategy. Expanding conventional
BI systems often leads to high costs of internally generating, cleansing and
maintaining new data items whilst the additional data storage costs are in many
cases of minor concern -- a conceptual difference from big data systems.
Thus, potential additional insights resulting from a new data item in the BI
system need to be balanced with the often high costs of data creation. While
the literature acknowledges this decision problem, no model-based approach to
inform this decision has hitherto been proposed. The present research describes
a prescriptive framework to prioritise data items for business analytics and
applies it to human resources. To achieve this goal, the proposed framework
captures core business activities in a comprehensive process map and assesses
their relative importance and possible data support with multi-criteria
decision analysis.",Prioritising data items for business analytics: Framework and application to human resources,2020-12-27 00:19:39,Tom Pape,"http://dx.doi.org/10.1016/j.ejor.2016.01.052, http://arxiv.org/abs/2012.13813v1, http://arxiv.org/pdf/2012.13813v1",econ.GN
31935,gn,"In multi-criteria decision analysis workshops, participants often appraise
the options individually before discussing the scoring as a group. The
individual appraisals lead to score ranges within which the group then seeks
the necessary agreement to identify their preferred option. Preference
programming enables some options to be identified as dominated even before the
group agrees on a precise scoring for them. Workshop participants usually face
time pressure to make a decision. Decision support can be provided by flagging
options for which further agreement on their scores seems particularly
valuable. By valuable, we mean the opportunity to identify other options as
dominated (using preference programming) without having their precise scores
agreed beforehand. The present paper quantifies this Value of Agreement and
extends the concept to portfolio decision analysis and criterion weights. The
new concept is validated through a case study in recruitment.","Value of agreement in decision analysis: Concept, measures and application",2020-12-27 00:26:39,Tom Pape,"http://dx.doi.org/10.1016/j.ejor.2016.01.052, http://arxiv.org/abs/2012.13816v1, http://arxiv.org/pdf/2012.13816v1",econ.GN
31936,gn,"I study the co-evolution between public opinion and party policy in
situations of crises by investigating a policy U-turn of a major Austrian
right-wing party (FPOE) during the Covid-19 pandemic. My analysis suggests the
existence of both i) a ""Downsian"" effect, which causes voters to adapt their
party preferences based on policy congruence and ii) a ""party identification""
effect, which causes partisans to realign their policy preferences based on
""their"" party's platform. Specifically, I use individual-level panel data to
show that i) ""corona skeptical"" voters who did not vote for the FPOE in the
pre-Covid-19 elections of 2019 were more likely to vote for the party after it
embraced ""corona populism"", and ii) beliefs of respondents who declared that
they voted for the FPOE in 2019 diverged from the rest of the population in
three out of four health-dimensions only after the turn, causing them to
underestimate the threat posed by Covid-19 compared to the rest of the
population. Using aggregate-level panel data, I study whether the turn has
produced significant behavioral differences which could be observed in terms of
reported cases and deaths per capita. Paradoxically, after the turn the FPOE
vote share is significantly positively correlated with deaths per capita, but
not with the reported number of infections. I hypothesize that this is due to a
self-selection bias in testing, which causes a correlation between the number
of ""corona skeptics"" and the share of unreported cases after the turn. I find
empirical support for this hypothesis in individual-level data from a Covid-19
prevalence study that involves information about participants' true vs.
reported infection status. I finally study a simple heterogeneous mixing
epidemiological model and show that a testing bias can indeed explain the
apparent paradox of an increase in deaths without an increase in reported
cases.",The Impact of Corona Populism: Empirical Evidence from Austria and Theory,2020-12-30 01:36:09,Patrick Mellacher,"http://arxiv.org/abs/2012.14962v5, http://arxiv.org/pdf/2012.14962v5",econ.GN
31937,gn,"The industrial life cycle theory has proved to be helpful for describing the
evolution of industries from birth to maturity. This paper highlights the
historical evolution stages of Atlantic City's gambling industry within a
structural framework covering the industrial market, industrial organization,
industrial policies and innovation. Data mining of local official documents was
employed to verify the industrial life cycle model across the phases of
introduction, development, maturity and decline. The trajectory of Atlantic
City's gambling sector reveals the process from introduction to decline through
a set of variables describing structural properties of the industry, such as
product, market and industrial organization, under a particular industry
environment; the resulting industry recession provides further evidence for the
theory of the industry life cycle. The ongoing recovery of the Atlantic City
gambling industry through innovation enriches the theory of the industrial life
cycle in service sectors.",The Involution of Industrial Life Cycle on Atlantic City Gambling Industry,2020-12-30 04:31:46,"Jin Quan Zhou, Wen Jin He","http://arxiv.org/abs/2012.14999v1, http://arxiv.org/pdf/2012.14999v1",econ.GN
31938,gn,"Labor displacement off-or nearshore is a performance improvement instrument
that currently sparks a lot of interest in the service sector. This article
proposes a model to understand the consequences of such a decision on
management consulting firms. Its calibration on the market of consulting
services for the German transportation industry highlights that, under
realistic assumptions, labor displacement translates in price decrease by-0.5%
on average per year and that for MC practices to remain competitive/profitable
they have to at least increase the amount of work they off/nears shore by +0.7%
a year.",What is the impact of labor displacement on management consulting services?,2020-12-30 16:12:34,Edouard Ribes,"http://arxiv.org/abs/2012.15144v1, http://arxiv.org/pdf/2012.15144v1",econ.GN
31940,gn,"The transformation of the electricity sector is a main element of the
transition to a decarbonized economy. Conventional generators powered by fossil
fuels have to be replaced by variable renewable energy (VRE) sources in
combination with electricity storage and other options for providing temporal
flexibility. We discuss the market dynamics of increasing VRE penetration and
their integration in the electricity system. We describe the merit-order effect
(decline of wholesale electricity prices as VRE penetration increases) and the
cannibalization effect (decline of VRE value as their penetration increases).
We further review the role of electricity storage and other flexibility options
for integrating variable renewables, and how storage can contribute to
mitigating the two mentioned effects. We also use a stylized open-source model
to provide some graphical intuition on this. While relatively high shares of
VRE are achievable with moderate amounts of electricity storage, the role of
long-term storage increases as the VRE share approaches 100%.",The Economics of Variable Renewables and Electricity Storage,2020-12-31 02:54:19,"Javier López Prol, Wolf-Peter Schill","http://dx.doi.org/10.1146/annurev-resource-101620-081246, http://arxiv.org/abs/2012.15371v1, http://arxiv.org/pdf/2012.15371v1",econ.GN
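To make the merit-order effect described in this abstract concrete, the sketch below dispatches a stylised merit order against residual demand; the plant blocks, costs and load level are illustrative assumptions, not the authors' open-source model.

```python
# Minimal merit-order sketch: the wholesale price falls as VRE infeed rises
# (illustrative numbers only, not the authors' open-source model).

# Conventional plant blocks: (capacity in GW, marginal cost in EUR/MWh), assumed values
blocks = [(20, 10.0), (25, 30.0), (20, 50.0), (15, 80.0), (10, 120.0)]

def clearing_price(demand_gw, vre_gw):
    """Price set by the costliest block needed to cover residual demand."""
    residual = max(demand_gw - vre_gw, 0.0)
    served = 0.0
    for capacity, cost in sorted(blocks, key=lambda b: b[1]):
        served += capacity
        if served >= residual:
            return cost
    return float("nan")  # residual demand exceeds total conventional capacity

demand = 70.0  # GW, assumed constant load
for vre in [0, 10, 20, 30, 40]:
    print(f"VRE infeed {vre:2d} GW -> price {clearing_price(demand, vre):6.1f} EUR/MWh")
# Prices decline with VRE infeed (merit-order effect). Averaging prices over
# hours weighted by VRE output would likewise show the declining market value
# of VRE as penetration grows (cannibalization effect).
```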
31941,gn,"We estimate the relationship between GDP per capita growth and the growth
rate of the national savings rate using a panel of 130 countries over the
period 1960-2017. We find that GDP per capita growth increases (decreases) the
growth rate of the national savings rate in poor countries (rich countries),
and a higher credit-to-GDP ratio decreases the national savings rate as well as
the income elasticity of the national savings rate. We develop a model with a
credit constraint to explain the growth-saving relationship by the saving
behavior of entrepreneurs at both the intensive and extensive margins. We
further present supporting evidence for our theoretical findings by utilizing
cross-country time series data of the number of new businesses registered and
the corporate savings rate.",Transitional Dynamics of the Saving Rate and Economic Growth,2020-12-31 07:01:13,"Markus Brueckner, Tomoo Kikuchi, George Vachadze","http://arxiv.org/abs/2012.15435v3, http://arxiv.org/pdf/2012.15435v3",econ.GN
31942,gn,"Society is undergoing many transformations and faces economic crises,
environmental, social, and public health issues. At the same time, the
Internet, mobile communications, cloud technologies, and social networks are
growing rapidly and fostering the digitalization processes of business and
society. It is in this context that the shared economy has assumed itself as a
new social and economic system based on the sharing of resources and has
allowed the emergence of innovative businesses like Airbnb. However, COVID-19
has challenged this business model in the face of restrictions imposed in the
tourism sector. Its consequences are not exclusively short-term and may also
call into question the sustainability of Airbnb. In this sense, this study aims
to explore the sustainability of the Airbnb business model considering two
theories: one advocates that hosts can cover the short-term financial effects,
while the other defends a paradigm shift in the demand for long-term
accommodations to ensure greater stability for hosts.",Exploring the Impact of COVID-19 in the Sustainability of Airbnb Business Model,2021-01-01 20:57:53,"Rim Krouk, Fernando Almeida","http://arxiv.org/abs/2101.00281v1, http://arxiv.org/pdf/2101.00281v1",econ.GN
31943,gn,"The objective of this work was to know the amount and frequency with which
the people of Arandas in the Altos de Jalisco region use disposable cups and
then know how willing they are to use edible cups made with natural gelatin. In
this regard, it is worth commenting that these can not only be nutritious for
those who consume them (since gelatin is a fortifying nutrient created from the
skin and bone of pigs and cows), but they could also be degraded in a few days
or be ingested by animals. To collect the information, a survey consisting of
six questions was used, which was applied to 31 people by telephone and another
345 personally (in both cases they were applied to young people and adults).
The results show that the residents of that town make considerable use of
plastic cups at the various events that take place each week, which are more numerous
during the patron saint festivities or at the end of the year. Even so, these
people would be willing to change these habits, although for this, measures
must be taken that do not affect the companies in that area, which work mainly
with plastics and generate a high percentage of jobs.",What does the consumer know about the environmental damage caused by the disposable cup and the need to replace it,2021-01-03 16:34:14,Guillermo José Navarro del Toro,"http://arxiv.org/abs/2101.00625v1, http://arxiv.org/pdf/2101.00625v1",econ.GN
31944,gn,"Tradable mobility credit (TMC) schemes are an approach to travel demand
management that have received significant attention in recent years. This paper
proposes and analyzes alternative market models for a TMC system -- focusing on
market design aspects such as allocation/expiration of tokens, rules governing
trading, transaction fees, and regulator intervention -- and develops a
methodology to explicitly model the dis-aggregate behavior of individuals
within the market. Extensive simulation experiments are conducted within a
combined mode and departure time context for the morning commute problem to
compare the performance of the alternative designs relative to congestion
pricing and a no-control scenario. The simulation experiments employ a
day-to-day assignment framework wherein transportation demand is modeled using
a logit-mixture model with income effects and supply is modeled using a
standard bottleneck model. The results indicate that small fixed transaction
fees can effectively mitigate undesirable behavior in the market without a
significant loss in efficiency (total welfare) whereas proportional transaction
fees are less effective both in terms of efficiency and in avoiding undesirable
market behavior. Further, an allocation of tokens in continuous time can be
beneficial in dealing with non-recurrent events and avoiding concentrated
trading activity. In the presence of income effects, despite small fixed
transaction fees, the TMC system yields a marginally higher social welfare than
congestion pricing while attaining revenue neutrality. Further, it is more
robust in the presence of forecasting errors and non-recurrent events due to
the adaptiveness of the market. Finally, as expected, the TMC scheme is more
equitable (when revenues from congestion pricing are not redistributed)
although it is not guaranteed to be Pareto-improving when tokens are
distributed equally.",Market Design for Tradable Mobility Credits,2021-01-03 20:14:52,"Siyu Chen, Ravi Seshadri, Carlos Lima Azevedo, Arun P. Akkinepally, Renming Liu, Andrea Araldo, Yu Jiang, Moshe E. Ben-Akiva","http://arxiv.org/abs/2101.00669v2, http://arxiv.org/pdf/2101.00669v2",econ.GN
31945,gn,"A new and rapidly growing econometric literature is making advances in the
problem of using machine learning methods for causal inference questions. Yet,
the empirical economics literature has not started to fully exploit the
strengths of these modern methods. We revisit influential empirical studies
with causal machine learning methods and identify several advantages of using
these techniques. We show that these advantages and their implications are
empirically relevant and that the use of these methods can improve the
credibility of causal analysis.",The Value Added of Machine Learning to Causal Inference: Evidence from Revisited Studies,2021-01-04 13:42:49,"Anna Baiardi, Andrea A. Naghi","http://arxiv.org/abs/2101.00878v1, http://arxiv.org/pdf/2101.00878v1",econ.GN
31946,gn,"Models simulating household energy demand based on different occupant and
household types and their behavioral patterns have received increasing
attention over the last years due to the need to better understand fundamental
characteristics that shape the demand side. Most of the models described in the
literature are based on Time Use Survey data and Markov chains. Due to the
nature of the underlying data and the Markov property, it is not sufficiently
possible to consider day to day dependencies in occupant behavior. An accurate
mapping of day to day dependencies is of increasing importance for accurately
reproducing mobility patterns and therefore for assessing the charging
flexibility of electric vehicles. This study bridges the gap between energy
related activity modelling and novel machine learning approaches with the
objective to better incorporate findings from the field of social practice
theory in the simulation of occupancy behavior. Weekly mobility data are merged
with daily time use survey data by using attention based models. In a first
step an autoregressive model is presented, which generates synthetic weekly
mobility schedules of individual occupants and thereby captures day to day
dependencies in mobility behavior. In a second step, an imputation model is
presented, which enriches the weekly mobility schedules with detailed
information about energy-relevant at-home activities. The weekly activity
profiles form the basis for modelling consistent electricity, heat and
mobility demand profiles of households. Furthermore, the approach presented
forms the basis for providing data on socio-demographically differentiated
occupant behavior to the general public.",Using attention to model long-term dependencies in occupancy behavior,2021-01-04 16:13:48,"Max Kleinebrahm, Jacopo Torriti, Russell McKenna, Armin Ardone, Wolf Fichtner","http://arxiv.org/abs/2101.00940v1, http://arxiv.org/pdf/2101.00940v1",econ.GN
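For readers unfamiliar with the attention mechanism such models build on, the following is a generic scaled dot-product attention sketch over a week of daily feature vectors; the dimensions, random inputs and projection matrices are assumptions, not the authors' architecture or data.

```python
# Generic scaled dot-product attention over a week of daily feature vectors
# (illustrative sketch; not the authors' architecture or data).
import numpy as np

rng = np.random.default_rng(0)
days, d_model = 7, 16                  # one week, assumed feature dimension
x = rng.normal(size=(days, d_model))   # stand-in daily mobility embeddings

# Assumed projection matrices for queries, keys and values
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d_model)            # pairwise day-to-day affinities
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)  # softmax over the week
context = weights @ V                          # each day attends to every other day

print(weights.round(2))  # rows sum to 1: day-to-day dependencies are explicit
```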
31947,gn,"Sales of new petrol and diesel passenger vehicles may not be permitted in the
United Kingdom (UK) post-2030. Should this happen, it is likely that vehicles
presently powered by hydrocarbons will be progressively replaced by Battery
Electric Vehicles (BEVs). This paper describes the use of mathematical
modelling, drawing on real time records of the UK electricity grid, to
investigate the likely performance of the grid when supplying power to a fleet
of up to 35 million BEVs. The model highlights the importance of understanding
how the grid will cope when powering a BEV fleet under conditions similar to
those experienced during an extended wind lull during the 3rd week of January
2017. Allowing a two-way flow of electricity between the BEVs and the grid,
known as the vehicle-to-grid (V2G) configuration, turns out to be of key
importance in minimising the need for additional gas turbine generation or
energy storage during wind lulls. This study has shown that with the use of
V2G, it should be possible to provide power to about 15 million BEVs with the
gas turbine capacity currently available. Without V2G, it is likely that the
current capacity of the gas turbines and associated gas infrastructure might be
overwhelmed by even a relatively small BEV fleet. Since it is anticipated that
80% of BEV owners will be able to park the vehicles at their residences,
widespread V2G will enable both the powering of residences when supply from the
grid is constrained and the charging of BEVs when supply is in excess. The
model shows that this configuration will maintain a constant load on the grid
and avoid the use of either expensive alternative storage or hydrogen obtained
by reforming methane. There should be no insuperable problem in providing power
to the 20% of BEV owners who do not have parking at their residences; their
power could come directly from the grid.",Predicting the Performance of a Future United Kingdom Grid and Wind Fleet When Providing Power to a Fleet of Battery Electric Vehicles,2020-12-24 06:40:35,"Anthony D Stephens, David R Walwyn","http://arxiv.org/abs/2101.01065v1, http://arxiv.org/pdf/2101.01065v1",econ.GN
31948,gn,"The opioid epidemic began with prescription pain relievers. In 2010 Purdue
Pharma reformulated OxyContin to make it more difficult to abuse. OxyContin
misuse fell dramatically, and concurrently heroin deaths began to rise.
Previous research overlooked generic oxycodone and argued that the
reformulation induced OxyContin users to switch directly to heroin. Using a
novel and fine-grained source of all oxycodone sales from 2006-2014, we show
that the reformulation led users to substitute from OxyContin to generic
oxycodone, and the reformulation had no overall impact on opioid or heroin
mortality. In fact, generic oxycodone, instead of OxyContin, was the driving
factor in the transition to heroin. Finally, we show that by omitting generic
oxycodone we recover the results of the literature. These findings highlight
the important role generic oxycodone played in the opioid epidemic and the
limited effectiveness of a partial supply-side intervention.",The OxyContin Reformulation Revisited: New Evidence From Improved Definitions of Markets and Substitutes,2021-01-04 20:49:36,"Shiyu Zhang, Daniel Guth","http://arxiv.org/abs/2101.01128v2, http://arxiv.org/pdf/2101.01128v2",econ.GN
31949,gn,"As we can understand with the spread of GVCs, a lot of new questions emerge
regarding the measurement of participation and positioning in the globalised
production process. The World Development Report (WDR) 2020 explains the GVC
phenomenon and then focuses on participation and the prospects, especially in a
world of change in technology. From the overview section, we can figure out
that nowadays, goods and services flow across borders as intermediate inputs
rather than final goods. In traditional trade, we need two countries with the
notions of export and import. However, in GVC trade, the goods and services
cross borders multiple times requiring more than two countries. Remarkable
improvements in information, communication, and transport technologies have
made it possible to fragment production across national boundaries. So the
question is: how to conceptualise this type of new trade to justify the
measurement of participation.",Measurement of Global Value Chain (GVC) Participation in World Development Report 2020,2021-01-07 13:55:18,Sourish Dutta,"http://dx.doi.org/10.2139/ssrn.3763103, http://arxiv.org/abs/2101.02478v1, http://arxiv.org/pdf/2101.02478v1",econ.GN
31950,gn,"Objective: An understanding of when one or more external factors may
influence the evolution of innovation tracking indices (such as US patent and
trademark applications (PTA)) is an important aspect of examining economic
progress/regress. Using exploratory statistics, the analysis uses a novel tool
to leverage the long-range dependency (LRD) intrinsic to PTA to resolve when
such factor(s) may have caused significant disruptions in the evolution of the
indices, and thus give insight into substantive economic growth dynamics.
Approach: This paper uses the Chronological Hurst Exponent (CHE) to explore the
LRD, using overlapping time windows to quantify long-memory
dynamics in the monthly PTA time-series spanning 1977 to 2016.
Results/Discussion: The CHE is found to increase in a clear S-curve pattern,
achieving persistence (H~1) from non-persistence (H~0.5). For patents, the
inflection occurred over a span of 10 years (1980-1990), while it was much
sharper (3 years) for trademarks (1977-1980). Conclusions/Originality/Value:
This analysis suggests (in part) that the rapid augmentation in R&D expenditure
and the introduction of the various patent directed policy acts (e.g.,
Bayh-Dole, Stevenson-Wydler) are the key impetuses behind persistency, latent
in PTA. The post-1990s exogenic factors seem to be simply maintaining the high
degree and consistency of the persistency metric. These findings suggest
investigators should consider latent persistency when using these data and the
CHE may be an important tool to investigate the impact of substantive exogenous
variables on growth dynamics.",Leveraging latent persistency in United States patent and trademark applications to gain insight into the evolution of an innovation-driven economy,2021-01-02 04:40:22,Iraj Daizadeh,"http://dx.doi.org/10.47909/ijsmc.32, http://arxiv.org/abs/2101.02588v2, http://arxiv.org/pdf/2101.02588v2",econ.GN
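The windowed Hurst estimation described in this abstract can be sketched as follows, using a simple rescaled-range (R/S) estimator over overlapping windows on synthetic data; the window length, scales and series are illustrative assumptions, not the paper's exact procedure.

```python
# Rolling Hurst exponent over overlapping windows (rescaled-range estimator).
# Synthetic monthly series and window settings are illustrative assumptions.
import numpy as np

def hurst_rs(x, scales=(8, 16, 32, 64)):
    """Estimate H by regressing log(R/S) on log(scale)."""
    rs_vals = []
    for s in scales:
        rs_seg = []
        for i in range(len(x) // s):
            seg = x[i * s:(i + 1) * s]
            dev = np.cumsum(seg - seg.mean())
            sd = seg.std(ddof=1)
            if sd > 0:
                rs_seg.append((dev.max() - dev.min()) / sd)
        rs_vals.append(np.mean(rs_seg))
    slope, _ = np.polyfit(np.log(scales), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(42)
series = np.cumsum(rng.normal(size=480))    # 40 years of synthetic monthly data

window = 120                                 # 10-year overlapping windows
for start in range(0, len(series) - window + 1, 60):
    h = hurst_rs(np.diff(series[start:start + window]))
    print(f"months {start:3d}-{start + window:3d}: H ~ {h:.2f}")
# Plotting H against the window start date gives the chronological profile;
# a sustained move from H ~ 0.5 towards H ~ 1 indicates emerging persistence.
```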
31951,gn,"This paper aims to assess the effects of industrial pollution on infant
mortality between the years 1850-1940 using full count decennial censuses. In
this period, the US economy experienced a tremendous rise in industrial activity
with significant variation among different counties in absorbing manufacturing
industries. Since manufacturing industries are shown to be the main source of
pollution, we use the share of employment at the county level in this industry
to proxy for space-time variation in industrial pollution. Since male embryos
are more vulnerable to external stressors like pollution during prenatal
development, they face a higher likelihood of fetal death. Therefore, we
proxy infant mortality with different measures of gender ratio. We show that
the upswing in industrial pollution during the late nineteenth and early
twentieth century has led to an increase in infant mortality. The results are
consistent and robust across different scenarios, measures for our proxies, and
aggregation levels. We find that infants, and more specifically male infants,
paid the price of pollution during the upswing in industrial growth at the dawn of
the 20th century. Contemporary datasets are used to verify the validity of the
proxies. Some policy implications are discussed.",Upswing in Industrial Activity and Infant Mortality during Late 19th Century US,2021-01-05 22:16:54,"Nahid Tavassoli, Hamid Noghanibehambari, Farzaneh Noghani, Mostafa Toranji","http://dx.doi.org/10.20448/journal.505.2020.61.1.13, http://arxiv.org/abs/2101.02590v2, http://arxiv.org/pdf/2101.02590v2",econ.GN
31952,gn,"A planner aims to target individuals who exceed a threshold in a
characteristic, such as wealth or ability. The individuals can rank their
friends according to the characteristic. We study a strategy-proof mechanism
for the planner to use the rankings for targeting. We discuss how the mechanism
works in practice, when the rankings may contain errors.",Friend-Based Ranking in Practice,2021-01-08 08:43:30,"Francis Bloch, Matthew Olckers","http://arxiv.org/abs/2101.02857v1, http://arxiv.org/pdf/2101.02857v1",econ.GN
31953,gn,"This paper investigates the effects of introducing external medical review
for disability insurance (DI) in a system relying on treating physician
testimony for eligibility determination. Using a unique policy change and
administrative data from Switzerland, I show that medical review reduces DI
incidence by 23%. Incidence reductions are closely tied to
difficult-to-diagnose conditions, suggesting inaccurate assessments by treating
physicians. Due to a partial benefit system, reductions in full benefit awards
are partly offset by increases in partial benefits. More intense screening also
increases labor market participation. Existing benefit recipients are
downgraded and lose part of their benefit income when scheduled medical reviews
occur. Back-of-the-envelope calculations indicate that external medical review
is highly cost-effective. Under additional assumptions, the results provide a
lower bound of the effect on the false positive award error rate.",Does external medical review reduce disability insurance inflow?,2021-01-08 20:18:43,Helge Liebert,"http://dx.doi.org/10.1016/j.jhealeco.2018.12.005, http://arxiv.org/abs/2101.03117v1, http://arxiv.org/pdf/2101.03117v1",econ.GN
31954,gn,"Among industrialized countries, U.S. holds two somehow inglorious records:
the highest rate of fatal police shootings and the highest rate of deaths
related to firearms. The latter has been associated with strong diffusion of
firearms ownership largely due to loose legislation in several member states.
The present paper investigates the relation between firearms
legislation/diffusion and the number of fatal police shooting episodes using a
seven-year panel dataset. While our results confirm the negative impact of
stricter firearms regulations found in previous cross-sectional studies, we
find that the diffusion of guns ownership has no statistically significant
effect. Furthermore, regulations pertaining to the sphere of gun owner
accountability seem to be the most effective in reducing fatal police
shootings.",Firearms Law and Fatal Police Shootings: A Panel Data Analysis,2021-01-08 20:53:02,"Marco Rogna, Diep Bich Nguyen","http://arxiv.org/abs/2101.03131v1, http://arxiv.org/pdf/2101.03131v1",econ.GN
31955,gn,"Investment in research and development is a key factor in increasing
countries' competitiveness. However, its impact can potentially be broader and
include other socially relevant elements like job quality. In effect, the
quantity of jobs generated is an incomplete indicator since it does not permit
conclusions about the quality of the jobs generated. In this sense, this paper
intends to explore the relevance of R&D investments for the job quality in the
European Union between 2009 and 2018. For this purpose, we investigate the
effects of R&D expenditures made by the business sector, government, and higher
education sector on three dimensions of job quality. Three research methods are
employed, i.e. univariate linear analysis, multiple linear analysis, and
cluster analysis. The findings only confirm the association between R&D
expenditure and the number of hours worked, such that the European Union
countries with the highest R&D expenses are those with the lowest average
weekly working hours.",Exploring the association between R&D expenditure and the job quality in the European Union,2021-01-08 23:38:58,"Fernando Almeida, Nelson Amoedo","http://dx.doi.org/10.29358/sceco.v0i32.476, http://arxiv.org/abs/2101.03214v1, http://arxiv.org/pdf/2101.03214v1",econ.GN
31956,gn,"Previous studies show that prenatal shocks to embryos could have adverse
impacts on health endowment at birth. Using the universe of birth data and a
difference-in-difference-in-difference strategy, I find that exposure to
Ramadan during prenatal development negatively affects birth outcomes. Exposure to a
full month of fasting is associated with 96 grams lower birth-weight. These
results are robust across specifications and do not appear to be driven by
mothers' selective fertility.",Ramadan and Infants Health Outcomes,2021-01-09 03:00:55,Hossein Abbaszadeh Shahri,"http://arxiv.org/abs/2101.03259v2, http://arxiv.org/pdf/2101.03259v2",econ.GN
31957,gn,"Indeed, the global production (as a system of creating values) is eventually
forming like a gigantic and complex network/web of value chains that explains
the transitional structures of global trade and development of the global
economy. It's truly a new wave of globalisation, and we term it as the global
value chains (GVCs), creating the nexus among firms, workers and consumers
around the globe. The emergence of this new scenario asks: how an economy's
firms, producers and workers connect in the global economy. And how are they
capturing the gains out of it in terms of different dimensions of economic
development? This GVC approach is very crucial for understanding the
organisation of the global industries and firms. It requires the statics and
dynamics of diverse players involved in this complex global production network.
Its broad notion deals with different global issues (including regional value
chains also) from the top down to the bottom up, founding a scope for policy
analysis (Gereffi & Fernandez-Stark 2011). But it is true that, as Feenstra
(1998) points out, any single computational framework is not sufficient to
quantification this whole range of economic activities. We should adopt an
integrative framework for accurate projection of this dynamic multidimensional
phenomenon.",Mechanistic Framework of Global Value Chains,2021-01-09 16:24:11,Sourish Dutta,"http://dx.doi.org/10.2139/ssrn.3762963, http://arxiv.org/abs/2101.03358v2, http://arxiv.org/pdf/2101.03358v2",econ.GN
31958,gn,"Using the econometric models, this paper addresses the ability of Albanian
Small and Medium-sized Enterprises (SMEs) to identify the risks they face. To
write this paper, we studied SMEs operating in the Gjirokastra region. First,
qualitative data gathered through a questionnaire was used. Next, the 5-level
Likert scale was used to measure it. Finally, the data was processed through
statistical software SPSS version 21, using the binary logistic regression
model, which reveals the probability of occurrence of an event when all
independent variables are included. Logistic regression is an integral part of
a category of statistical models, which are called General Linear Models.
Logistic regression is used to analyze problems in which one or more
independent variables interfere, which influences the dichotomous dependent
variable. In such cases, the latter is seen as the random variable and is
dependent on them. To evaluate whether Albanian SMEs can identify risks, we
analyzed the factors that SMEs perceive as directly affecting the risks they
face. At the end of the paper, we conclude that Albanian SMEs can identify risk",Using the Econometric Models for Identification of Risk Factors for Albanian SMEs (Case study: SMEs of Gjirokastra region),2021-01-10 21:39:40,"Lorenc Kociu, Kledian Kodra","http://dx.doi.org/10.37394/23207.2021.18.17, http://arxiv.org/abs/2101.03598v1, http://arxiv.org/pdf/2101.03598v1",econ.GN
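The binary logistic regression described in this abstract (run in SPSS) can be sketched in Python as follows; the sample size, variable names and synthetic Likert-scale data are assumptions, not the authors' survey.

```python
# Illustrative binary logistic regression on synthetic Likert-scale predictors
# (stand-in for the SPSS analysis described above; names and data are assumed).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200  # assumed sample size

# Assumed 5-point Likert predictors of perceived risk factors
X = rng.integers(1, 6, size=(n, 3)).astype(float)
X = sm.add_constant(X)

# Synthetic dichotomous outcome: 1 = firm identifies its risks
logits = -4.0 + 0.6 * X[:, 1] + 0.4 * X[:, 2] + 0.2 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = sm.Logit(y, X).fit(disp=False)
print(model.summary(xname=["const", "competition", "finance", "regulation"]))
```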
31959,gn,"This paper investigates the causal relationship between income shocks during
the first years of life and adulthood mortality due to specific causes of
death. Using all death records in the United States during 1968-2004 for
individuals who were born in the first half of the 20th century, we document a
sizable and statistically significant association between income shocks early
in life, proxied by GDP per capita fluctuations, and old age cause-specific
mortality. Conditional on individual characteristics and controlling for a
broad array of current and early-life conditions, we find that a 1 percent
decrease in the aggregate business cycle in the year of birth is associated
with 2.2, 2.3, 3.1, 3.7, 0.9, and 2.1 percent increases in the likelihood of
mortality in old ages due to malignant neoplasms, Diabetes Mellitus,
cardiovascular diseases, Influenza, chronic respiratory diseases, and all other
diseases, respectively.",Early-life Income Shocks and Old-Age Cause-Specific Mortality,2021-01-08 18:47:08,"Hamid NoghaniBehambari, Farzaneh Noghani, Nahid Tavassoli","http://dx.doi.org/10.28934/ea.20.53.2.pp1-19, http://arxiv.org/abs/2101.03943v1, http://arxiv.org/pdf/2101.03943v1",econ.GN
31960,gn,"The topic of my research is ""Learning and Upgrading in Global Value Chains:
An Analysis of India's Manufacturing Sector"". To analyse India's learning and
upgrading through position, functions, specialisation & value addition of
manufacturing GVCs, it is required to quantify the extent, drivers, and impacts
of India's Manufacturing links in GVCs. I have transformed this overall broad
objective into three fundamental questions: (1) What is the extent of India's
Manufacturing Links in GVCs? (2) What are the determinants of India's
Manufacturing Links in GVCs? (3) What are the impacts of India's Manufacturing
Links in GVCs? These three questions correspond to the three chapters of my PhD
thesis.",Learning and Upgrading in Global Value Chains: An Analysis of India's Manufacturing Sector,2021-01-12 15:39:15,Sourish Dutta,"http://dx.doi.org/10.2139/ssrn.3615725, http://arxiv.org/abs/2101.04447v1, http://arxiv.org/pdf/2101.04447v1",econ.GN
31961,gn,"Many important economic outcomes result from cumulative effects of smaller
choices, so the best outcomes require accounting for other choices at each
decision point. We document narrow bracketing -- the neglect of such accounting
-- in work choices in a pre-registered experiment on MTurk: bracketing changes
average willingness to work by 13-28%. In our experiment, broad bracketing is
so simple to implement that narrow bracketing cannot possibly be due to optimal
conservation of cognitive resources, so it must be suboptimal. We jointly
estimate disutility of work and bracketing, finding gender differences in
convexity of disutility, but not in bracketing.",Narrow Bracketing in Work Choices,2021-01-12 18:03:24,"Francesco Fallucchi, Marc Kaufmann","http://arxiv.org/abs/2101.04529v2, http://arxiv.org/pdf/2101.04529v2",econ.GN
31962,gn,"This paper develops a theoretical model to study the economic incentives for
a social media platform to moderate user-generated content. We show that a
self-interested platform can use content moderation as an effective marketing
tool to expand its installed user base, to increase the utility of its users,
and to achieve its positioning as a moderate or extreme content platform. The
optimal content moderation strategy differs for platforms with different
revenue models, advertising or subscription. We also show that a platform's
content moderation strategy depends on its technical sophistication. Because of
imperfect technology, a platform may optimally throw away the moderate content
more than the extreme content. Therefore, one cannot judge how extreme a
platform is by just looking at its content moderation strategy. Furthermore, we
show that a platform under advertising does not necessarily benefit from a
better technology for content moderation, but one under subscription does. This
means that platforms under different revenue models can have different
incentives to improve their content moderation technology. Finally, we draw
managerial and policy implications from our insights.","Social Media, Content Moderation, and Technology",2021-01-12 20:17:51,"Yi Liu, Pinar Yildirim, Z. John Zhang","http://dx.doi.org/10.1287/mksc.2022.1361, http://arxiv.org/abs/2101.04618v2, http://arxiv.org/pdf/2101.04618v2",econ.GN
31963,gn,"This paper illustrates the intergenerational transmission of the gender gap
in education among first and second-generation immigrants. Using the Current
Population Survey (1994-2018), we find that the difference in female-male
education persists from the home country to the new environment. A one standard
deviation increase of the ancestral country female-male difference in schooling
is associated with 17.2% and 2.5% of a standard deviation increase in the
gender gap among first and second generations, respectively. Since gender
perspective in education uncovers a new channel for cultural transmission among
families, we interpret the findings as evidence of cultural persistence among
first generations and partial cultural assimilation of second generations.
Moreover, disaggregation into country groups reveals different paths for this
transmission: descendants of immigrants of lower-income countries show fewer
attachments to the gender opinions of their home country. Average local
education of natives can facilitate the acculturation process. Immigrants
residing in states with higher education reveal a lower tendency to follow
their home country attitudes regarding the gender gap.",Intergenerational transmission of culture among immigrants: Gender gap in education among first and second generations,2021-01-12 07:15:19,"Hamid NoghaniBehambari, Nahid Tavassoli, Farzaneh Noghani","http://arxiv.org/abs/2101.05364v1, http://arxiv.org/pdf/2101.05364v1",econ.GN
31964,gn,"A continuous variable changing between 0 and 1 is introduced to characterise
contentment, or satisfaction with life, of an individual and an equation
governing its evolution is postulated from analysis of several factors likely
to affect the contentment. As contentment is strongly affected by material
well-being, a similar equation is formulated for the wealth of an individual,
and from these two equations an evolution equation is derived for the joint
distribution of individuals' wealth and contentment within a society. The
equation so obtained is used to compute the evolution of this joint distribution
in a society with initially low variation of wealth and contentment over a long
period of time. As an illustration of the model's capabilities, effects of the wealth
tax rate are simulated and it is shown that a higher taxation in the longer run
may lead to a wealthier and more content society. It is also shown that lower
rates of the wealth tax lead to pronounced stratification of the society in
terms of both wealth and contentment and that there is no direct relationship
between the average values of these two variables.",Dynamics of contentment,2021-01-14 18:08:32,Alexey A. Burluka,"http://dx.doi.org/10.1016/j.physd.2021.133012, http://arxiv.org/abs/2101.05655v2, http://arxiv.org/pdf/2101.05655v2",econ.GN
31965,gn,"In repeated-game applications where both the collusive and non-collusive
outcomes can be supported as equilibria, researchers must resolve underlying
selection questions if theory will be used to understand counterfactual
policies. One guide to selection, based on clear theoretical underpinnings, has
shown promise in predicting when collusive outcomes will emerge in controlled
repeated-game experiments. In this paper we both expand upon and experimentally
test this model of selection, and its underlying mechanism: strategic
uncertainty. Adding an additional source of strategic uncertainty (the number
of players) to the more-standard payoff sources, we stress test the model. Our
results affirm the model as a tool for predicting when tacit collusion is
likely/unlikely to be successful. Extending the analysis, we corroborate the
mechanism of the model. When we remove strategic uncertainty through an
explicit coordination device, the model no longer predicts the selected
equilibrium.",Testing Models of Strategic Uncertainty: Equilibrium Selection in Repeated Games,2021-01-15 01:57:33,"Emanuel Vespa, Taylor Weidman, Alistair J. Wilson","http://arxiv.org/abs/2101.05900v1, http://arxiv.org/pdf/2101.05900v1",econ.GN
31966,gn,"Signalling social status through the consumption of visible goods has often
been perceived as a way in which individuals seek to emulate or move up
compared to others within the community. Using representative migration survey
data from the Indian state of Kerala, this paper assesses the impact of
transnational migration on consumption of visible goods. We utilize the
plausibly exogenous variation in migration networks in the neighbourhood and
religious communities to account for the potential endogeneity. The findings
indicate a significantly positive and robust effect of migration on conspicuous
consumption, even after controlling for household income. In terms of the
mechanisms, while we are unable to rule out the associated taste-based changes
in preferences and the peer group effects driving up the spending on status
goods, we observe only limited effects of these channels. A potential channel
that we propose is the information gap among permanent residents about the income
levels of an out-migrant, which is leveraged by them to signal higher status in
society. We explore this channel through a theoretical model where we connect
migration, information gap and status good consumption. We derive a set of
conditions to test whether migrants exhibit snobbish or conformist behaviour.
Empirical observations indicate the predominance of snobbish behaviour.","Moving Away from the Joneses to Move Ahead: Migration, Information Gap and Signalling",2021-01-15 17:47:16,"Shihas Abdul-Razak, Upasak Das, Rupayan Pal","http://arxiv.org/abs/2101.06149v2, http://arxiv.org/pdf/2101.06149v2",econ.GN
31967,gn,"To reduce computational complexity, macro-energy system models commonly
implement reduced time-series data. For renewable energy systems dependent on
seasonal storage and characterized by intermittent renewables, like wind and
solar, adequacy of time-series reduction is in question. Using a capacity
expansion model, we evaluate different methods for creating and implementing
reduced time-series regarding loss of load and system costs.
  Results show that adequacy greatly depends on the length of the reduced
time-series and how it is implemented into the model. Implementation as a
chronological sequence with re-scaled time-steps prevents loss of load best but
imposes a positive bias on seasonal storage resulting in an overestimation of
system costs. Compared to chronological sequences, grouped periods require more
time to solve for the same number of time-steps, because the approach requires
additional variables and constraints. Overall, results suggest further efforts
to improve time-series reduction and other methods for reducing computational
complexity.",Adequacy of time-series reduction for renewable energy systems,2021-01-15 20:07:48,"Leonard Göke, Mario Kendziorski","http://dx.doi.org/10.1016/j.energy.2021.121701, http://arxiv.org/abs/2101.06221v2, http://arxiv.org/pdf/2101.06221v2",econ.GN
31968,gn,"In normal times, it is assumed that financial institutions operating in
non-overlapping sectors have complementary and distinct outcomes, typically
reflected in mostly uncorrelated outcomes and asset returns. Such is the
reasoning behind common ""free lunches"" to be had in investing, like
diversifying assets across equity and bond sectors. Unfortunately, the
recurrence of crises like the Great Financial Crisis of 2007-2008 demonstrates
that such convenient assumptions often break down, with dramatic consequences
for all financial actors. In hindsight, the emergence of systemic risk (as
exemplified by failure in one part of a system spreading to ostensibly
unrelated parts of the system) has been explained by narratives such as
deregulation and leverage. But can we diagnose and quantify the ongoing
emergence of systemic risk in financial systems? In this study, we focus on two
previously-documented measures of systemic risk that require only easily
available time series data (e.g., monthly asset returns): cross-correlation and
principal component analysis. We apply these tests to daily and monthly returns
on hedge fund indexes and broad-based market indexes, and discuss their
results. We hope that a frank discussion of these simple, non-parametric
measures can help inform legislators, lawmakers, and financial actors of
potential crises looming on the horizon.",Diagnosis of systemic risk and contagion across financial sectors,2021-01-17 07:13:58,"Sayuj Choudhari, Richard Licheng Zhu","http://arxiv.org/abs/2101.06585v1, http://arxiv.org/pdf/2101.06585v1",econ.GN
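A minimal sketch of the two measures named in this abstract, applied to synthetic monthly returns rather than the hedge fund and market indexes used in the study: the average pairwise cross-correlation and the variance share absorbed by the leading principal components.

```python
# Two simple systemic-risk indicators on a panel of monthly returns:
# mean pairwise correlation and variance share of the leading principal components.
# The return matrix below is synthetic; real inputs would be index returns.
import numpy as np

rng = np.random.default_rng(7)
T, N = 120, 10                       # 10 years of monthly returns, 10 indexes
common = rng.normal(size=(T, 1))     # a common factor inducing co-movement
returns = 0.6 * common + rng.normal(size=(T, N))

corr = np.corrcoef(returns, rowvar=False)
mean_corr = corr[np.triu_indices(N, k=1)].mean()

eigvals = np.linalg.eigvalsh(corr)[::-1]          # eigenvalues, largest first
absorption = eigvals[:2].sum() / eigvals.sum()    # share absorbed by top 2 PCs

print(f"mean pairwise correlation: {mean_corr:.2f}")
print(f"variance share of first two PCs: {absorption:.2f}")
# Rising values of either measure over rolling windows would signal growing
# commonality across sectors, i.e. emerging systemic risk.
```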
32070,gn,"Expansions in the size and scope of public procurement across the Atlantic
have increased calls for accountability of democratic governments. Indefinite
Delivery, Indefinite Quantity contracts by their very nature are less
transparent but serve as major tools of public procurement in both the European
and American economies. This paper utilizes a cross-Atlantic perspective to
discuss common challenges faced by governments and contracting entities while
highlighting the need for balancing transparency with efficiency to avoid
negative economic outcomes. It concludes by discussing and providing potential
solutions to certain common challenges.",Improving Transparency in IDIQ Contracts: A Comparison of Common Procurement Issues Affecting Economies Across the Atlantic and Suggested Solutions,2021-04-16 20:58:17,Sareesh Rawat,"http://arxiv.org/abs/2104.08276v1, http://arxiv.org/pdf/2104.08276v1",econ.GN
31969,gn,"One of the goals of any business, in addition to producing high-quality,
community-accepted products, is to significantly increase sales. Unfortunately,
there are regions where new marketing technologies that make it possible to
reach a larger number of potential consumers, not only at the regional level,
but also at the state and national level, are not yet used. This research,
which included qualitative and quantitative methods, as well as interviews
applied to owners, employees and clients of three sausage companies, seeks to
measure the impact of digital marketing in the Altos of Jalisco, Mexico. Thus,
in addition to inquiring about the degree of knowledge they have regarding
information and communication technologies (ICT) to expand their markets to
areas with higher population density, another goal is to know the opinion about
their manufactured products, their quality and acceptance. It should not be
forgotten that companies are moving to an increasingly connected world, which
enables entrepreneurs to get their products to a greater number of consumers
through the Internet and smart devices, such as cell phones, tablets and
computers; and thus ensure the survival of the company and a longer stay in the
market.",The Impact of Digital Marketing on Sausage Manufacturing Companies in the Altos of Jalisco,2021-01-17 09:31:48,Guillermo Jose Navarro del Toro,"http://dx.doi.org/10.23913/ricea.v9i18.148, http://arxiv.org/abs/2101.06603v1, http://arxiv.org/pdf/2101.06603v1",econ.GN
31970,gn,"Communicating new scientific discoveries is key to human progress. Yet, this
endeavor is hindered by monetary restrictions for publishing one's findings and
accessing other scientists' reports. This process is further exacerbated by a
large portion of publishing media owned by private, for-profit companies that
do not reinject academic publishing benefits into the scientific community, in
contrast with journals from scientific societies. As the academic world is not
exempt from economic crises, new alternatives are necessary to support a fair
publishing system for society. After summarizing the general issues of academic
publishing today, we present several solutions at the levels of the individual
scientist, the scientific community, and the publisher towards more sustainable
scientific publishing. By providing a voice to the many scientists who are
fundamental protagonists, yet often powerless witnesses, of the academic
publishing system, and a roadmap for implementing solutions, this initiative
can spark increased awareness and promote shifts towards impactful practices.",Towards a more sustainable academic publishing system,2021-01-18 04:53:02,"Mohsen Kayal, Jane Ballard, Ehsan Kayal","http://dx.doi.org/10.24908/iee.2021.14.3.f, http://arxiv.org/abs/2101.06834v1, http://arxiv.org/pdf/2101.06834v1",econ.GN
31971,gn,"We argue that uncertainty network structures extracted from option prices
contain valuable information for business cycles. Classifying U.S. industries
according to their contribution to system-related uncertainty across business
cycles, we uncover an uncertainty hub role for the communications, industrials
and information technology sectors, while shocks to materials, real estate and
utilities do not create strong linkages in the network. Moreover, we find that
this ex-ante network of uncertainty is a useful predictor of business cycles,
especially when it is based on uncertainty hubs. The industry uncertainty
network behaves counter-cyclically in that a tighter network tends to associate
with future business cycle contractions.",Dynamic industry uncertainty networks and the business cycle,2021-01-18 12:40:11,"Jozef Barunik, Mattia Bevilacqua, Robert Faff","http://arxiv.org/abs/2101.06957v2, http://arxiv.org/pdf/2101.06957v2",econ.GN
31972,gn,"Natural and anthropogenic disasters frequently affect both the supply and
demand side of an economy. A striking recent example is the Covid-19 pandemic
which has created severe disruptions to economic output in most countries.
These direct shocks to supply and demand will propagate downstream and upstream
through production networks. Given the exogenous shocks, we derive a lower
bound on total shock propagation. We find that even in this best case scenario
network effects substantially amplify the initial shocks. To obtain more
realistic model predictions, we study the propagation of shocks bottom-up by
imposing different rationing rules on industries if they are not able to
satisfy incoming demand. Our results show that economic impacts depend strongly
on the emergence of input bottlenecks, making the rationing assumption a key
variable in predicting adverse economic impacts. We further establish that the
magnitude of initial shocks and network density heavily influence model
predictions.","Simultaneous supply and demand constraints in input-output networks: The case of Covid-19 in Germany, Italy, and Spain",2021-01-19 22:02:51,"Anton Pichler, J. Doyne Farmer","http://arxiv.org/abs/2101.07818v2, http://arxiv.org/pdf/2101.07818v2",econ.GN
31973,gn,"Consumer preference elicitation is critical to devise effective policies for
the diffusion of electric vehicles (EVs) in India. This study contributes to
the EV demand literature in the Indian context by (a) analysing the EV
attributes and attitudinal factors of Indian car buyers that determine
consumers' preferences for EVs, (b) estimating Indian consumers' willingness to
pay (WTP) to buy EVs with improved attributes, and (c) quantifying how the
reference dependence affects the WTP estimates. We adopt a hybrid choice
modelling approach for the above analysis. The results indicate that accounting
for reference dependence provides more realistic WTP estimates than the
standard utility estimation approach. Our results suggest that Indian consumers
are willing to pay an additional USD 10-34 in the purchase price to reduce the
fast charging time by 1 minute, USD 7-40 to add a kilometre to the driving
range of EVs at 200 kilometres, and USD 104-692 to save USD 1 per 100
kilometres in operating cost. These estimates and the effect of attitudes on
the likelihood to adopt EVs provide insights about EV design, marketing
strategies, and pro-EV policies (e.g., specialised lanes and reserved parking
for EVs) to expedite the adoption of EVs in India.",Willingness to Pay and Attitudinal Preferences of Indian Consumers for Electric Vehicles,2021-01-20 10:43:06,"Prateek Bansal, Rajeev Ranjan Kumar, Alok Raj, Subodh Dubey, Daniel J. Graham","http://arxiv.org/abs/2101.08008v2, http://arxiv.org/pdf/2101.08008v2",econ.GN
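In discrete choice models of this kind, willingness to pay for an attribute is commonly recovered as the ratio of the attribute coefficient to the marginal utility of money; the toy calculation below uses assumed coefficients, not the paper's estimates.

```python
# Toy willingness-to-pay (WTP) calculation from discrete-choice coefficients.
# Coefficient values are assumed for illustration, not the paper's estimates.
beta_price = -0.004        # utility per USD of purchase price
beta_charge_time = -0.08   # utility per minute of fast-charging time
beta_range = 0.06          # utility per km of driving range

# Marginal WTP = marginal utility of the attribute change divided by the
# marginal utility of money (-beta_price).
wtp_per_minute_saved = -beta_charge_time / -beta_price  # USD per minute of charging avoided
wtp_per_km_range = beta_range / -beta_price             # USD per extra km of range

print(f"WTP to cut fast-charging time by 1 minute: ${wtp_per_minute_saved:.0f}")
print(f"WTP for 1 extra km of driving range:       ${wtp_per_km_range:.0f}")
```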
31974,gn,"The spread of the novel coronavirus disease caused schools in Japan to close
to cope with the pandemic. In response to this, parents of students were
obliged to care for their children during the daytime when they were usually at
school. Does the increased burden of childcare influence parents' mental
health? Based on short panel data from mid-March to mid-April 2020, we explored
how school closures influenced the mental health of parents with school-aged
children. Using a fixed effects model, we found that school closures led to
students' mothers suffering from worse mental health than other females, while
fathers' mental health did not differ from other males. This tendency was
only observed for less educated mothers who had children attending primary
school, but not those attending junior high school. The contribution of this
paper is to show that school closures increased the inequality of mental health
between genders and the educational background of parents.",Impact of closing schools on mental health during the COVID-19 pandemic: Evidence using panel data from Japan,2021-01-21 10:31:27,"Eiji Yamamura, Yoshiro Tsutsui","http://arxiv.org/abs/2101.08476v1, http://arxiv.org/pdf/2101.08476v1",econ.GN
31975,gn,"COVID-19 has led to school closures in Japan to cope with the pandemic. Under
the state of emergency, in addition to school closure, after-school care has
not been sufficiently supplied. We independently collected individual level
data through internet surveys to construct short panel data from mid-March to
mid-June 2020, which covered before and after the state of emergency. We
analyze how the presence of school-aged children influences their parents' views
about working from home. After controlling for various factors using a fixed
effects model, we find that, for working parents, if the children are (1) in
primary school, parents are willing to promote working from home; if children
are (2) in junior high school, the parents' view is hardly affected; and (3)
surprisingly, workers whose children are primary school pupils
are most likely to support promotion of working from home after schools reopen.
Due to school closure and a lack of after-school care, parents need to work
from home, and this experience motivated workers with small children to
continue doing so to improve work-life balance even after schools reopen.",Changing views about remote working during the COVID-19 pandemic: Evidence using panel data from Japan,2021-01-21 10:40:29,"Eiji Yamamura, Yoshiro Tsutsui","http://arxiv.org/abs/2101.08480v1, http://arxiv.org/pdf/2101.08480v1",econ.GN
31976,gn,"This study examines the influence of learning in a female teacher homeroom
class in elementary school on pupils' voting behavior later in life, using
independently collected individual-level data. Further, we evaluate its effect
on preference for women's participation in the workplace in adulthood. Our
study found that having a female teacher in the first year of school makes
individuals more likely to vote for female candidates, and to prefer policies supporting
female labor participation in adulthood. However, the effect is only observed
among males, and not female pupils. These findings offer new evidence for the
female socialization hypothesis.",Female teachers effect on male pupils' voting behavior and preference formation,2021-01-21 10:53:02,Eiji Yamamura,"http://arxiv.org/abs/2101.08487v1, http://arxiv.org/pdf/2101.08487v1",econ.GN
31977,gn,"In Japan, teacher and student is randomly matched in the first year of
elementary school. Under the quasi-natural experimental setting, we examine how
learning in female teacher homeroom class in the elementary school influence
pupils' smoking behavior after they become adult. We found that pupils are
unlikely to smoke later in life if they belonged to female teacher homeroom
class in pupil's first year of school.",Long-term effects of female teacher on her pupils' smoking behaviour later in life,2021-01-21 11:01:12,Eiji Yamamura,"http://arxiv.org/abs/2101.08488v1, http://arxiv.org/pdf/2101.08488v1",econ.GN
31978,gn,"Decision theorists propose a normative theory of rational choice.
Traditionally, they assume that they should provide some constant and invariant
principles as criteria for rational decisions, and indirectly, for agents. They
seek a decision theory that invariably works for all agents all the time. They
believe that a rational agent should follow a certain principle, perhaps the
principle of maximizing expected utility, everywhere and all the time. Regardless
of the given context, these principles are considered, in this sense,
context-independent.
  Furthermore, decision theorists usually assume that the relevant agents at
work are ideal agents, and they believe that non-ideal agents should follow
them so that their decisions qualify as rational. These principles are
universal rules. I will refer to this context-independent and universal
approach in traditional decision theory as Invariantism. This approach is,
implicitly or explicitly, adopted by theories which are proposed on the basis
of these two assumptions.",A Contextualist Decision Theory,2021-01-22 04:37:05,Saleh Afroogh,"http://arxiv.org/abs/2101.08914v1, http://arxiv.org/pdf/2101.08914v1",econ.GN
31979,gn,"In this study, we explored how the coronavirus disease (COVID-19) affected
the demand for insurance and vaccines in Japan from mid-March to mid-April
2020. Through independent internet surveys, respondents were asked hypothetical
questions concerning the demand for insurance and vaccines for protection
against COVID-19. Using the collected short-panel data, after controlling for
individual characteristics using the fixed effects model, the key findings,
within the context of the pandemic, were as follows: (1) Contrary to extant
studies, the demand for insurance by females was smaller than that by their
male counterparts; (2) The gap in demand for insurance between genders
increased as the pandemic prevailed; (3) The demand for a vaccine by females
was higher than that for males; and (4) As COVID-19 spread throughout Japan,
demand for insurance decreased, whereas the demand for a vaccine increased.",How does COVID-19 change insurance and vaccine demand? Evidence from short-panel data in Japan,2021-01-22 05:25:35,"Eiji Yamamura, Yoshiro Tsutsui","http://arxiv.org/abs/2101.08922v1, http://arxiv.org/pdf/2101.08922v1",econ.GN
31980,gn,"This paper aims at proposing a model representing individuals' welfare using
Sen's capability approach (CA). It is the first step of an attempt to measure
the negative impact caused by damage to a Common on a given population's
welfare and, broadly speaking, a first step towards modelling collective threat.
The CA is a multidimensional representation of persons' well-being which
accounts for human diversity. It has received substantial attention from
scholars from different disciplines such as philosophy, economics and social
science. Nevertheless, there is no empirical work that really fits the
theoretical framework. Our goal is to show that the capability approach can be
very useful for decision aiding, especially if we fill the gap between the
theory and the empirical work; thus we will propose a framework that is both
usable and a close representation of what capability is.",Is the Capability approach a useful tool for decision aiding in public policy making?,2021-01-17 15:39:20,"Nicolas Fayard, Chabane Mazri, Alexis Tsoukiàs","http://arxiv.org/abs/2101.09357v1, http://arxiv.org/pdf/2101.09357v1",econ.GN
31981,gn,"This study more complex digital platforms in early stages in the two-sided
market to produce powerful network effects. In this study, I use Transfer
Entropy to look for super users who connect hominids in different networks to
achieve higher network effects in the digital platform in the two-sided market,
which has recently become more complex. And this study also aims to redefine
the decision criteria of product managers by helping them define users with
stronger network effects. With the development of technology, the structure of
the industry is becoming more difficult to interpret and the complexity of
business logic is increasing. This phenomenon is the biggest problem that makes
it difficult for start-ups to challenge themselves. I hope this study will help
product managers create new digital economic networks, enable them to make
prioritized, data-driven decisions, and find users who can be the hub of the
network even in small products.",The Two-Sided Market Network Analysis Based on Transfer Entropy & Labelr,2021-01-25 07:16:04,Seung Bin Baik,"http://arxiv.org/abs/2101.09886v2, http://arxiv.org/pdf/2101.09886v2",econ.GN
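
The abstract above relies on Transfer Entropy to score which users drive activity in others' networks. A minimal plug-in sketch of that quantity for binary activity series is given below; the simulated "driver" and "follower" series, the one-step histories, and the noise level are illustrative assumptions, not the study's actual pipeline.

import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Estimate TE(source -> target) in bits for 1-step histories of binary series."""
    s, x = np.asarray(source), np.asarray(target)
    triples = Counter(zip(x[1:], x[:-1], s[:-1]))   # (x_{t+1}, x_t, s_t)
    pairs_xs = Counter(zip(x[:-1], s[:-1]))         # (x_t, s_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
    singles_x = Counter(x[:-1])                     # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, s0), c in triples.items():
        p_joint = c / n
        p_x1_given_x0s0 = c / pairs_xs[(x0, s0)]
        p_x1_given_x0 = pairs_xx[(x1, x0)] / singles_x[x0]
        te += p_joint * np.log2(p_x1_given_x0s0 / p_x1_given_x0)
    return te

rng = np.random.default_rng(0)
driver = rng.integers(0, 2, 500)                            # hypothetical "super user" activity
follower = np.roll(driver, 1) ^ (rng.random(500) < 0.1).astype(int)  # noisy one-step copy
print(f"TE(driver -> follower) = {transfer_entropy(driver, follower):.3f} bits")
print(f"TE(follower -> driver) = {transfer_entropy(follower, driver):.3f} bits")

In this toy setup the entropy transferred from the driver to the follower clearly exceeds the reverse direction, which is the asymmetry one would look for when ranking candidate super users.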
31982,gn,"Healthcare workers are more likely to be infected with the 2019 novel
coronavirus (COVID-19) because of unavoidable contact with infected people.
Although they are equipped to reduce the likelihood of infection, their
distress has increased. This study examines how COVID-19 influences healthcare
workers' happiness, compared to other workers. We constructed panel data via
Internet surveys during the COVID-19 epidemic in Japan, from March to June
2020, by surveying the same respondents at different times. The survey period
started before the state of emergency, and ended after deregulation. The key
findings are as follows. (1) Overall, the happiness level of healthcare workers
is lower than that of other workers. (2) The biggest disparity in happiness
level, between healthcare workers and others, was observed after deregulation
and not during the state of emergency. After deregulation, the difference was
larger by 0.26 points, on an 11-point scale, than in the initial wave before
the state of emergency.",How COVID-19 influences healthcare workers' happiness: Panel data analysis in Japan,2021-01-25 23:50:34,"Eiji Yamamura, Yoshiro Tsutsui","http://arxiv.org/abs/2101.10408v1, http://arxiv.org/pdf/2101.10408v1",econ.GN
31983,gn,"While patents and standards have been identified as essential driving
components of innovation and market growth, the inclusion of a patent in a
standard poses many difficulties. These difficulties arise from the
contradicting natures of patents and standards, which makes their combination
genuinely challenging, but also from the opposing business and market strategies
of different patent owners involved in the standardisation process. However, a
varying set of policies has been adopted to address the issues occurring from
the unavoidable inclusion of patents in standards concerning certain industry
sectors with a constant high degree of innovation, such as telecommunications.
As these policies have not always proven adequate, constant efforts are
being made to improve and expand them. The intriguing and complicated
relationship between patents and standards is finally examined through a review
of the use cases of well-known standards of the telecommunications sector which
include a growing set of essential patents.","Exploring the Complicated Relationship Between Patents and Standards, With a Particular Focus on the Telecommunications Sector",2021-01-26 07:01:21,Nikolaos Athanasios Anagnostopoulos,"http://arxiv.org/abs/2101.10548v1, http://arxiv.org/pdf/2101.10548v1",econ.GN
31984,gn,"Using network analysis, this paper develops a multidimensional methodological
framework for understanding the uneven (cross-country) spread of COVID-19 in
the context of the global interconnected economy. The globally interconnected
system of tourism mobility is modeled as a complex network, where two main
stages in the temporal spread of COVID-19 are revealed and defined by the
cutting-point of the 44th day from Wuhan. The first stage describes the
outbreak in Asia and North America, the second one in Europe, South America,
and Africa, while the outbreak in Oceania is spread along both stages. The
analysis shows that highly connected nodes in the global tourism network (GTN)
are infected early by the pandemic, while nodes of lower connectivity are late
infected. Moreover, countries with the same network centrality as China were
early infected on average by COVID-19. The paper also finds that network
interconnectedness, economic openness, and transport integration are key
determinants in the early global spread of the pandemic, and it reveals that
the spatio-temporal patterns of the worldwide spread of COVID-19 are more a
matter of network interconnectivity than of spatial proximity.",Understanding the uneven spread of COVID-19 in the context of the global interconnected economy,2021-01-26 22:12:42,"Dimitrios Tsiotas, Vassilis Tselios","http://arxiv.org/abs/2101.11036v1, http://arxiv.org/pdf/2101.11036v1",econ.GN
31985,gn,"Policymakers worldwide draft privacy laws that require trading-off between
safeguarding consumer privacy and preventing economic loss to companies that
use consumer data. However, little empirical knowledge exists as to how privacy
laws affect companies' performance. Accordingly, this paper empirically
quantifies the effects of the enforcement of the EU's General Data Protection
Regulation (GDPR) on online user behavior over time, analyzing data from 6,286
websites spanning 24 industries during the 10 months before and 18 months after
the GDPR's enforcement in 2018. A panel differences estimator, with a synthetic
control group approach, isolates the short- and long-term effects of the GDPR
on user behavior. The results show that, on average, the GDPR's effects on user
quantity and usage intensity are negative; e.g., the number of total visits to
a website decreases by 4.9% and 10% due to the GDPR in the short and long term,
respectively. These effects could translate into average revenue losses of $7
million for e-commerce websites and almost $2.5 million for ad-based websites
18 months after GDPR. The GDPR's effects vary across websites, with some
industries even benefiting from it; moreover, more-popular websites suffer
less, suggesting that the GDPR increased market concentration.",The Impact of Privacy Laws on Online User Behavior,2021-01-27 16:02:08,"Julia Schmitt, Klaus M. Miller, Bernd Skiera","http://arxiv.org/abs/2101.11366v2, http://arxiv.org/pdf/2101.11366v2",econ.GN
31986,gn,"This paper studies optimal bundling of products with non-additive values.
Under monotonic preferences and single-peaked profits, I show a monopolist
finds pure bundling optimal if and only if the optimal sales volume for the
grand bundle is larger than the optimal sales volume for any smaller bundle. I
then (i) detail how my analysis relates to ""ratio monotonicity"" results on
bundling; and (ii) describe the implications for non-linear pricing.",A Characterization for Optimal Bundling of Products with Non-Additive Values,2021-01-27 19:36:29,Soheil Ghili,"http://arxiv.org/abs/2101.11532v3, http://arxiv.org/pdf/2101.11532v3",econ.GN
31992,gn,"This paper presents a model addressing welfare optimal policies of demand
responsive transportation service, where passengers cause external travel time
costs for other passengers due to the route changes. Optimal pricing and trip
production policies are modelled both on the aggregate level and on the network
level. The aggregate model extends Jokinen (2016) with a flat pricing model,
but the occupancy rate is now modelled as an endogenous variable depending on
demand and capacity levels. The network model makes it possible to describe
differences between routes from the viewpoint of occupancy rate and efficient
trip combining. Moreover, the model defines the optimal differentiated pricing
for routes.",Modelling Optimal Policies of Demand Responsive Transport and Interrelationships between Occupancy Rate and Costs,2021-02-28 20:06:55,Jani-Pekka Jokinen,"http://arxiv.org/abs/2103.00565v1, http://arxiv.org/pdf/2103.00565v1",econ.GN
31993,gn,"The indirect environmental impacts of transport disruptions in urban mobility
are frequently overlooked due to a lack of appropriate assessment methods.
Consequential Life Cycle Assessment (CLCA) is a method to capture the
environmental consequences of the entire cause and effect chain of these
disruptions but has never been adapted to transport disruptions at the city
scale. This paper proposes a mathematical formalization of CLCA applied to a
territorial mobility change. The method is applied to quantify the impact on
climate change of the breakthrough of free-floating e-scooters (FFES) in Paris.
A FFES user survey is conducted to estimate the modal shifts due to FFES. Trip
substitutions from all the Parisian modes concerned are considered - personal
or shared bicycles and motor scooters, private car, taxi and ride-hailing, bus,
streetcar, metro and RER (the Paris metropolitan area mass rapid transit
system). All these Parisian modes are assessed for the first time using LCA.
Final results estimate that over one year, the FFES generated an extra thirteen
thousand tons of CO2eq under an assumption of one million users, mainly due to
major shifts coming from lower-emitting modes (60% from the metro and the RER,
22% from active modes). Recommendations are given to reduce their carbon
footprint. A scenario analysis shows that increasing the lifetime mileage is
insufficient to get a positive balance: reducing drastically servicing
emissions is also required. A sensitivity analysis switching the French
electricity mix for eleven other country mixes suggests a better climate change
effect of the FFES in similar metropolitan areas with higher electricity carbon
intensity, such as in Germany and China. Finally, the novelty and the limits of
the method are discussed, as well as the results and the role of e-scooters,
micromobility, and shared vehicles towards sustainable mobility.",Consequential LCA for territorial and multimodal transportation policies: method and application to the free-floating e-scooter disruption in Paris,2021-03-01 04:20:32,"Anne de Bortoli, Zoi Christoforou","http://dx.doi.org/10.1016/j.jclepro.2020.122898, http://arxiv.org/abs/2103.00680v1, http://arxiv.org/pdf/2103.00680v1",econ.GN
31987,gn,"This paper describes a study designed to investigate the current and emergent
impacts of Covid-19 and Brexit on UK horticultural businesses. Various
characteristics of UK horticultural production, notably labour reliance and
import dependence, make it an important sector for policymakers concerned to
understand the effects of these disruptive events as we move from 2020 into
2021. The study design prioritised timeliness, using a rapid survey to gather
information from a relatively small (n = 19) but indicative group of producers.
The main novelty of the results is to suggest that a very substantial majority
of producers either plan to scale back production in 2021 (47%) or have been
unable to make plans for 2021 because of uncertainty (37%). The results also
add to broader evidence that the sector has experienced profound labour supply
challenges, with implications for labour cost and quality. The study discusses
the implications of these insights from producers in terms of productivity and
automation, as well as in terms of broader economic implications. Although
automation is generally recognised as the long-term future for the industry
(89%), it appeared in the study as the second most referred short-term option
(32%) only after changes to labour schemes and policies (58%). Currently,
automation plays a limited role in contributing to the UK's horticultural
workforce shortage due to economic and socio-political uncertainties. The
conclusion highlights policy recommendations and future investigative
intentions, as well as suggesting methodological and other discussion points
for the research community.",Current and Emergent Economic Impacts of Covid-19 and Brexit on UK Fresh Produce and Horticultural Businesses,2021-01-27 20:13:51,"Lilian Korir, Archie Drake, Martin Collison, Tania Carolina Camacho-Villa, Elizabeth Sklar, Simon Pearson","http://arxiv.org/abs/2101.11551v1, http://arxiv.org/pdf/2101.11551v1",econ.GN
31988,gn,"The allocation of venture capital is one of the primary factors determining
who takes products to market, which startups succeed or fail, and as such who
gets to participate in the shaping of our collective economy. While gender
diversity contributes to startup success, most funding is allocated to
male-only entrepreneurial teams. In the wake of COVID-19, 2020 is seeing a
notable decline in funding to female and mixed-gender teams, giving rise to an
urgent need to study and correct the longstanding gender bias in startup
funding allocation. We conduct an in-depth data analysis of over 48,000
companies on Crunchbase, comparing funding allocation based on the gender
composition of founding teams. Detailed findings across diverse industries and
geographies are presented. Further, we construct machine learning models to
predict whether startups will reach an equity round, revealing the surprising
finding that the CEO's gender is the primary determining factor for attaining
funding. Policy implications for this pressing issue are discussed.","Investors Embrace Gender Diversity, Not Female CEOs: The Role of Gender in Startup Fundraising",2021-01-22 02:28:46,"Christopher Cassion, Yuhang Qian, Constant Bossou, Margareta Ackerman","http://arxiv.org/abs/2101.12008v1, http://arxiv.org/pdf/2101.12008v1",econ.GN
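
The study above trains machine-learning models on Crunchbase data to predict whether startups reach an equity round. The sketch below is a hedged stand-in using a logistic regression on synthetic founder-team features; the feature names, coefficients, and outcome model are assumptions for illustration, not the paper's pipeline or findings.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "ceo_female": rng.integers(0, 2, n),
    "team_size": rng.integers(1, 6, n),
    "prior_exits": rng.poisson(0.3, n),
    "industry_software": rng.integers(0, 2, n),
})
# Hypothetical outcome: whether the startup reaches an equity round.
logit = -1.0 + 0.4 * df.prior_exits + 0.2 * df.team_size - 0.3 * df.ceo_female
df["equity_round"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="equity_round"), df["equity_round"], random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
print(dict(zip(X_train.columns, model.coef_[0].round(2))))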
31989,gn,"Preferences often change -- even in short time intervals -- due to either the
mere passage of time (present-biased preferences) or changes in environmental
conditions (state-dependent preferences). On the basis of the empirical
findings in the context of state-dependent preferences, we critically discuss
the Aristotelian view of unitary decision makers in economics and urge a more
Heraclitean perspective on human decision-making. We illustrate that the
conceptualization of preferences as present-biased or state-dependent has very
different normative implications under the Aristotelian view, although both
concepts are empirically hard to distinguish. This is highly problematic, as it
renders almost any paternalistic intervention justifiable.",The Behavioral Economics of Intrapersonal Conflict: A Critical Assessment,2021-01-29 14:28:53,"Sebastian Krügel, Matthias Uhl","http://arxiv.org/abs/2101.12526v1, http://arxiv.org/pdf/2101.12526v1",econ.GN
31990,gn,"The Atoyac River is among the two most polluted in Mexico. Water quality in
the Upper Atoyac River Basin (UARB) has been devastated by industrial and
municipal wastewater, as well as from effluents from local dwellers, that go
through little to no treatment, affecting health, production, ecosystems and
property value. We did a systematic review and mapping of the costs that
pollution imposes on different sectors and localities in the UARB, and
initially found 358 studies, of which 17 were of our particular interest. We
focus on estimating the cost of pollution through different valuation methods
such as averted costs, hedonic pricing, and contingent valuation, and for that
we only use 10 studies. Costs range from less than a million to over $16
million dollars a year, depending on the sector, with agriculture, industry and
tourism yielding the highest costs. This exercise is the first of its kind in
the UARB that maps costs for sectors and localities affected, and sheds light
on the need for additional research to estimate the total cost of pollution
throughout the basin. This information may help design further research needs
in the region.",The Cost of Pollution in the Upper Atoyac River Basin: A Systematic Review,2021-02-27 04:07:05,"Maria Eugenia Ibarraran, Romeo A. Saldana-Vazquez, Tamara Perez-Garcia","http://arxiv.org/abs/2103.00095v1, http://arxiv.org/pdf/2103.00095v1",econ.GN
31991,gn,"The rapid growth of the e-commerce market in Indonesia, making various
e-commerce companies appear and there has been high competition among them.
Marketing intelligence is an important activity to measure competitive
position. One element of marketing intelligence is to assess customer
satisfaction. Many Indonesian customers express their sense of satisfaction or
dissatisfaction towards the company through social media. Hence, using social
media data provides a new practical way to measure marketing intelligence
effort. This research performs sentiment analysis using the naive bayes
classifier classification method with TF-IDF weighting. We compare the
sentiments towards of top-3 e-commerce sites visited companies, are Bukalapak,
Tokopedia, and Elevenia. We use Twitter data for sentiment analysis because
it's faster, cheaper, and easier from both the customer and the researcher
side. The purpose of this research is to find out how to process the huge
customer sentiment Twitter to become useful information for the e-commerce
company, and which of those top-3 e-commerce companies has the highest level of
customer satisfaction. The experiment results show the method can be used to
classify customer sentiments in social media Twitter automatically and Elevenia
is the highest e-commerce with customer satisfaction.",A Comparison of Indonesia E-Commerce Sentiment Analysis for Marketing Intelligence Effort,2021-02-27 17:32:11,"Andry Alamsyah, Fatma Saviera","http://arxiv.org/abs/2103.00231v1, http://arxiv.org/pdf/2103.00231v1",econ.GN
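
The abstract above describes sentiment classification with a naive Bayes classifier and TF-IDF weighting. A minimal scikit-learn version of that pipeline is sketched below; the toy English-language tweets and labels are placeholders for the study's Indonesian Twitter data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "delivery was fast and the product is great",
    "checkout keeps failing, very disappointed",
    "love the discounts on this marketplace",
    "my order never arrived and support ignored me",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features feeding a multinomial naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(tweets, labels)
print(clf.predict(["great service, order arrived quickly"]))  # -> ['positive']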
31994,gn,"We investigate the possibility of a harvesting effect, i.e. a temporary
forward shift in mortality, associated with the COVID-19 pandemic by looking at
the excess mortality trends of an area that registered one of the highest death
tolls in the world during the first wave, Northern Italy. We do not find any
evidence of a sizable COVID-19 harvesting effect, neither in the summer months
after the slowdown of the first wave nor at the beginning of the second wave.
According to our estimates, only a minor share of the total excess deaths
detected in Northern Italian municipalities over the entire period under
scrutiny (February - November 2020) can be attributed to an anticipatory role
of COVID-19. A slightly higher share is detected for the most severely affected
areas (the provinces of Bergamo and Brescia, in particular), but even in these
territories, the harvesting effect can only account for less than 20% of excess
deaths. Furthermore, the lower mortality rates observed in these areas at the
beginning of the second wave may be due to several factors other than a
harvesting effect, including behavioral change and some degree of temporary
herd immunity. The very limited presence of short-run mortality displacement
restates the case for containment policies aimed at minimizing the health
impacts of the pandemic.",Was there a COVID-19 harvesting effect in Northern Italy?,2021-03-02 18:43:46,"Augusto Cerqua, Roberta Di Stefano, Marco Letta, Sara Miccoli","http://arxiv.org/abs/2103.01812v3, http://arxiv.org/pdf/2103.01812v3",econ.GN
31995,gn,"The importance of trade to an economy needs no emphasis. You sell products or
services that you are competitive at and buy those where you are not.
Experience of countries such as South Korea and China demonstrate that
resources required for development can be garnered through trade; thus,
motivating many countries to embrace trade as a means for development.
Simultaneously, the emergence of 'Global Value Chains', or 'GVCs' as they are
popularly known, has changed the way we trade. Though the concept of GVCs was
introduced in the early 2000s, there are examples of global value chains before
the 1980s. However, the scale of the phenomenon, and the way in which
technological change, by lowering trade costs, has allowed the fragmentation of
production, were not possible before (Hernandez et al., 2014). In this context,
the World Bank has recently published its 'World Development Report 2020:
Trading for Development in the Age of Global Value Chains' (WDR). The report
prescribes that GVCs still offer developing countries a clear path to progress
and that developing countries can achieve better outcomes by pursuing
market-oriented reforms specific to their stage of development.",Commentary on World Development Report 2020: Trading for Development in the Age of Global Value Chains,2021-03-02 19:02:00,"Rajkumar Byahut, Sourish Dutta, Chidambaran G. Iyer, Manikantha Nataraj","http://arxiv.org/abs/2103.01824v1, http://arxiv.org/pdf/2103.01824v1",econ.GN
31996,gn,"Using firm-level survey- and register-data for both Sweden and Denmark we
show systematic mis-measurement in both vacancy measures. While the
register-based measure on the aggregate constitutes a quarter of the
survey-based measure, the latter is not a super-set of the former. To obtain
the full set of unique vacancies in these two databases, the number of survey
vacancies should be multiplied by approximately 1.2. Importantly, this
adjustment factor varies over time and across firm characteristics. Our
findings have implications for both the search-matching literature and policy
analysis based on vacancy measures: Observed changes in vacancies can be an
outcome of changes in mis-measurement, and are not necessarily changes in the
actual number of vacancies.",Measuring Vacancies: Firm-level Evidence from Two Measures,2021-03-03 12:17:15,"Niels-Jakob Harbo Hansen, Hans Henrik Sievertsen","http://arxiv.org/abs/2103.02272v1, http://arxiv.org/pdf/2103.02272v1",econ.GN
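
The adjustment-factor logic in the abstract above can be illustrated with a small back-of-the-envelope calculation: if the register holds roughly a quarter as many vacancies as the survey but is not a subset of it, the union of unique vacancies exceeds the survey count by about 20 percent. The figures below are hypothetical, not the Swedish or Danish counts.

survey = 1000            # survey-based vacancies
register = 250           # register-based vacancies (~ a quarter of the survey count)
overlap = 50             # vacancies appearing in both sources (assumed)
union = survey + register - overlap
adjustment_factor = union / survey
print(f"unique vacancies: {union}, adjustment factor: {adjustment_factor:.2f}")  # ~1.20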
31997,gn,"To solve complex tasks, individuals often autonomously organize in teams.
Examples of complex tasks include disaster relief rescue operations or project
development in consulting. The teams that work on such tasks are adaptive at
multiple levels: First, by autonomously choosing the individuals that jointly
perform a specific task, the team itself adapts to the complex task at hand,
whereby the composition of teams might change over time. We refer to this
process as self-organization. Second, the members of a team adapt to the
complex task environment by learning. There is, however, a lack of extensive
research on multi-level adaptation processes that consider self-organization
and individual learning as simultaneous processes in the field of management
science. We introduce an agent-based model based on the NK-framework to study
the effects of simultaneous multi-level adaptation on a team's performance. We
implement the multi-level adaptation process by a second-price auction
mechanism for self-organization at the team level. Adaptation at the individual
level follows an autonomous learning mechanism. Our preliminary results suggest
that, depending on the task's complexity, different configurations of
individual and collective adaptation can be associated with higher overall task
performance. Tasks of low complexity favour high individual and collective
adaptation, while moderate individual and collective adaptation is associated
with better performance in the case of moderately complex tasks. For highly complex
tasks, the results suggest that collective adaptation is harmful to
performance.",Multi-level Adaptation of Distributed Decision-Making Agents in Complex Task Environments,2021-03-03 14:49:07,"Darío Blanco-Fernández, Stephan Leitner, Alexandra Rausch","http://arxiv.org/abs/2103.02345v1, http://arxiv.org/pdf/2103.02345v1",econ.GN
31998,gn,"There is a resurging interest in automation because of rapid progress of
machine learning and AI. In our view, innovation is not exempt from their
expansion. This situation gives us an opportunity to reflect on a
direction of future innovation studies. In this conceptual paper, we propose a
framework of innovation process by exploiting the concept of unit process.
Deploying it in the context of automation, we indicate the important aspects of
innovation process, i.e. human, organizational, and social factors. We also
highlight the cognitive and interactive underpinnings at micro- and
macro-levels of the process. We propose to embrace all those factors in what we
call Innovation-Automation-Strategy cycle (IAS). Implications of IAS for future
research are also put forward.
  Keywords: innovation, automation of innovation, unit process,
innovation-automation-strategy cycle",Automation-driven innovation management? Toward Innovation-Automation-Strategy cycle,2021-03-03 16:41:06,"Piotr Tomasz Makowski, Yuya Kajikawa","http://arxiv.org/abs/2103.02395v1, http://arxiv.org/pdf/2103.02395v1",econ.GN
31999,gn,"When it comes to conversations about funding, the questions of whether the
United States should be spending its resources on space-based research often
rears its head. Opponents of the idea tend to share the opinion that the
resources would be better spent helping citizens on the ground. With an
estimated homeless population around 562,000 throughout the country, roughly
39.4 million Americans (12.3% of the population) living below the poverty
level, and 63.1 million tons of food waste per year, it's hard to argue that
the United States does not have its share of problems that need to be
addressed. However, a history of space-based research has proven time and time
again to bring forth advances in technology and scientific understanding that
benefit humans across the globe and provide crucial protection for life on
Earth.",The Importance of Funding Space-Based Research,2021-03-03 23:14:24,Devan Taylor,"http://arxiv.org/abs/2103.02658v2, http://arxiv.org/pdf/2103.02658v2",econ.GN
32000,gn,"Why do vaccination rates remain low even in countries where long-established
immunization programs exist and vaccines are provided for free? We study this
paradox in the context of India, which contributes to the world's largest pool
of under-vaccinated children and about one-third of all vaccine-preventable
deaths globally. Combining historical records with survey datasets, we examine
the Indian government's forced sterilization policy, a short-term aggressive
family planning program implemented between 1976 and 1977. Using multiple
estimation methods, including an instrumental variable (IV) and a geographic
regression discontinuity design (RDD) approach, we document that the current
vaccination completion rate is low in places where forced sterilization was
high. We also explore the heterogeneous effects, mechanisms, and reasons for
the mechanism. Finally, we examine the enduring consequence and present
evidence that places more exposed to forced sterilization have an average 60
percent higher child mortality rate today. Together, these findings suggest
that government policies implemented in the past can have persistent adverse
impacts on demand for health-seeking behavior, even if the burden is
exceedingly high.",Understanding Vaccine Hesitancy: Empirical Evidence from India,2021-03-04 12:27:28,Pramod Kumar Sur,"http://arxiv.org/abs/2103.02909v3, http://arxiv.org/pdf/2103.02909v3",econ.GN
32001,gn,"Decomposing taxes by source (labor, capital, sales), we analyze the impact of
automation on tax revenues and the structure of taxation in 19 EU countries
during 1995-2016. Pre-2008, robot diffusion led to decreasing factor and tax
income, and a shift from taxes on capital to goods. ICTs changed the structure
of taxation from capital to labor, with decreasing employment, but increasing
wages and labor income. Post-2008, we find an ICT-induced increase in capital
income and services, but no effect on taxation from ICT/robots. Overall,
automation goes through various phases with heterogeneous economic effects
which impact the amount and structure of taxes. Whether automation erodes
taxation depends on the technology and stage of diffusion, and thus concerns
about public budgets might be myopic when focusing on the short-run and
ignoring relevant technological trends.",Automation and Taxation,2021-03-06 16:36:28,"Kerstin Hötte, Angelos Theodorakopoulos, Pantelis Koutroumpis","http://arxiv.org/abs/2103.04111v2, http://arxiv.org/pdf/2103.04111v2",econ.GN
32002,gn,"This paper considers the use of instruments to identify and estimate private
and social returns to education within a model of employer learning. What an
instrument identifies depends on whether it is hidden from, or transparent
(i.e., observed) to, the employers. A hidden instrument identifies private
returns to education, and a transparent instrument identifies social returns to
education. We use variation in compulsory schooling laws across non-central and
central municipalities in Norway to, respectively, construct hidden and
transparent instruments. We estimate a private return of 7.9%, of which 70% is
due to increased productivity and the remaining 30% is due to signaling.",Signaling and Employer Learning with Instruments,2021-03-06 17:35:04,"Gaurab Aryal, Manudeep Bhuller, Fabian Lange","http://arxiv.org/abs/2103.04123v2, http://arxiv.org/pdf/2103.04123v2",econ.GN
32003,gn,"Identifying the real causes of democracy is an ongoing debate. We contribute
to the literature by examining the robustness of a comprehensive list of 42
potential determinants of democracy. We take a step forward and employ the
Instrumental Variable Bayesian Model Averaging (IVBMA) method to tackle
endogeneity explicitly. Using the data of 111 countries, our IVBMA results mark
arable land as the most persistent predictor of democracy with a posterior
inclusion probability (PIP) of 0.961. Youth population (PIP: 0.893), life
expectancy (PIP: 0.839), and GDP per capita (PIP: 0.758) are the next critical
independent variables. In a subsample of 80 developing countries, in addition
to arable land (PIP: 0.919), state fragility proves to be a significant
determinant of democracy (PIP: 0.779).",The Determinants of Democracy Revisited: An Instrumental Variable Bayesian Model Averaging Approach,2021-03-07 07:15:11,Sajad Rahimian,"http://arxiv.org/abs/2103.04255v1, http://arxiv.org/pdf/2103.04255v1",econ.GN
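
Posterior inclusion probabilities (PIPs) such as those reported above arise from weighting every candidate model by its posterior probability. The sketch below illustrates the idea with plain Bayesian model averaging over all regressor subsets, using a BIC approximation and synthetic data; it deliberately omits the instrumental-variable layer of IVBMA, and the variable names are reused only as labels.

from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
names = ["arable_land", "youth_pop", "life_exp", "gdp_pc"]
y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(size=n)   # true model uses two regressors

subsets, bics = [], []
for k in range(len(names) + 1):
    for subset in combinations(range(len(names)), k):
        exog = sm.add_constant(X[:, subset]) if subset else np.ones((n, 1))
        subsets.append(subset)
        bics.append(sm.OLS(y, exog).fit().bic)

bics = np.array(bics)
weights = np.exp(-0.5 * (bics - bics.min()))      # BIC-approximate posterior model weights
weights /= weights.sum()
pips = {name: sum(w for w, s in zip(weights, subsets) if j in s)
        for j, name in enumerate(names)}
print({name: round(float(p), 3) for name, p in pips.items()})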
32004,gn,"The environmental performance of shared micromobility services compared to
private alternatives has never been assessed using an integrated modal Life
Cycle Assessment (LCA) relying on field data. Such an LCA is conducted on three
shared micromobility services in Paris - bikes, second-generation e-scooters,
and e-mopeds - and their private alternatives. Global warming potential,
primary energy consumption, and the three endpoint damages are calculated.
Sensitivity analyses on vehicle lifespan, shipping, servicing distance, and
electricity mix are conducted. Electric micromobility ranks between active
modes and personal ICE modes. Its impacts are globally driven by vehicle
manufacturing. Ownership does not directly affect environmental performance:
the vehicle lifetime mileage does. Assessing the carbon footprint alone leads
to biased environmental decision-making, as it is not
correlated to the three damages: multicriteria LCA is mandatory to preserve the
planet. Finally, a major change of paradigm is needed to eco-design modern
transportation policies.",Environmental performance of shared micromobility and personal alternatives using integrated modal LCA,2021-03-08 00:50:31,Anne de Bortoli,"http://dx.doi.org/10.1016/j.trd.2021.102743, http://arxiv.org/abs/2103.04464v1, http://arxiv.org/pdf/2103.04464v1",econ.GN
32005,gn,"The initial period of vaccination shows strong heterogeneity between
countries' vaccinations rollout, both in the terms of the start of the
vaccination process and in the dynamics of the number of people that are
vaccinated. A predominant thesis in the ongoing debate on the drivers of this
observed heterogeneity is that a key determinant of the swift and extensive
vaccine rollout is state capacity. Here, we utilize two measures that quantify
different aspects of the state capacity: i) the external capacity (measured
through the soft power and the economic power of the country) and ii) the
internal capacity (measured via the country's government effectiveness) and
investigate their relationship with the coronavirus vaccination outcome in the
initial period (up to 30th January 2021). By using data on 189 countries and a
two-step Heckman approach, we find that the economic power of the country and
its soft power are robust determinants of whether a country has started with
the vaccination process. In addition, government effectiveness is a key
factor that determines vaccine rollout. Altogether, our findings are in line
with the hypothesis that state capacity determines the observed heterogeneity
between countries in the initial period of COVID-19 vaccines rollout.",The impact of state capacity on the cross-country variations in COVID-19 vaccination rates,2021-03-08 21:58:31,"Dragan Tevdovski, Petar Jolakoski, Viktor Stojkoski","http://arxiv.org/abs/2103.04981v1, http://arxiv.org/pdf/2103.04981v1",econ.GN
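
The two-step Heckman approach mentioned above first models whether a country has started vaccinating, then corrects the outcome equation for that selection. A compact sketch with synthetic data is shown below; the covariates, coefficients, and sample construction are assumptions, not the paper's estimates.

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 189
econ_power = rng.normal(size=n)
soft_power = rng.normal(size=n)
gov_effect = rng.normal(size=n)

started = (0.8 * econ_power + 0.6 * soft_power + rng.normal(size=n)) > 0
rate = 2.0 + 1.0 * gov_effect + rng.normal(size=n)
rate[~started] = np.nan                       # rollout only observed if started

# Step 1: selection equation (probit for having started the rollout)
Z = sm.add_constant(np.column_stack([econ_power, soft_power]))
probit = sm.Probit(started.astype(float), Z).fit(disp=False)
xb = probit.fittedvalues                      # linear predictor
mills = norm.pdf(xb) / norm.cdf(xb)           # inverse Mills ratio

# Step 2: outcome equation on the selected sample, adding the Mills ratio
sel = started
X = sm.add_constant(np.column_stack([gov_effect[sel], mills[sel]]))
outcome = sm.OLS(rate[sel], X).fit()
print(outcome.params.round(2))                # const, gov_effect, lambda (Mills)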
32006,gn,"The question of how a pure fiat currency is enforced and comes to have a
non-zero value has been much debated \cite{10.2307/2077948}. What is less often
addressed is the case where the enforcement is taken for granted and we ask
what value (in terms of goods and services) the currency will end up taking.
Establishing a decentralised mechanism for price formation has proven a
challenge for economists: ""Since no decentralized out-of-equilibrium adjustment
mechanism has been discovered, we currently have no acceptable dynamical model
of the Walrasian system"" (Gintis 2006). In his paper, Gintis put forward a
model for price discovery based on the evolution of the model's agents, i.e.
""poorly performing agents dying and being replaced by copies of the well
performing agents."" It seems improbable that this mechanism is the driving
force behind price discovery in the real world. This paper proposes a more
realistic mechanism and presents results from a corresponding agent-based
model.",On the marginal utility of fiat money: insurmountable circularity or not?,2021-03-09 20:08:33,Michael Reiss,"http://arxiv.org/abs/2103.05556v1, http://arxiv.org/pdf/2103.05556v1",econ.GN
32007,gn,"Nowadays there are a lot of creative and innovative ideas of business
start-ups or various projects starting from a novel or music album and
finishing with some innovative goods or website that makes our life better and
easier. Unfortunately, young people often do not have enough financial support
to bring their ideas to life. The best way to solve particular problem is to
use crowdfunding platforms. Crowdfunding itself is a way of financing a project
by raising money from a crowd or simply large number of people. It is believed
that crowdfunding term appeared at the same time as crowdsourcing in 2006. Its
author is Jeff Howe. However, the phenomenon of the national funding, of
course, much older. For instance, the construction of the Statue of Liberty in
New York, for which funds were collected by the people. Currently, the national
project is financed with the use of the Internet. Author of the project in need
of funding, can post information about the project on a special website and
request sponsorship of the audience. Firstly, author selects the best
crowdfunding platform for project requirements and sign in. then he or she
creates and draws up the project. The project that is created must correspond
to one of the categories available for selection (music, film, publishing,
etc.). If you create brand new product, it is necessary to submit the
draft-working prototype or sample product. A full list of design rules for a
project can be viewed directly on the site of crowdfunding platform. While
calculating the cost of project it is necessary to take into account the cost
of realization the project, reward for your sponsors, moreover commission of
payment systems and taxes. The project is considered successfully launched
after it gets through moderation on website.",Crowdfunding for Independent Parties,2021-03-10 13:12:09,"A. R. Baghirzade, B. Kushbakov","http://arxiv.org/abs/2103.05973v1, http://arxiv.org/pdf/2103.05973v1",econ.GN
32008,gn,"The outbreak of the Covid-19 pandemic has led to an increasing interest in
Universal Basic Income (UBI) proposals as it exposed the inadequacy of
traditional welfare systems to provide basic financial security to a large
share of the population. In this paper, we use a static tax-benefit
microsimulation model to analyse the fiscal and distributional effects of the
hypothetical implementation in Brazil of alternative UBI schemes which
partially replace the existing tax-transfer system. The results indicate that
the introduction of a UBI/Flat Tax system in the country could be both
extremely effective in reducing poverty and inequality and economically viable.",A Universal Basic Income For Brazil: Fiscal and Distributional Effects of Alternative Schemes,2021-03-10 15:36:53,"Rozane Bezerra de Siqueira, Jose Ricardo Bezerra Nogueira","http://arxiv.org/abs/2103.06020v2, http://arxiv.org/pdf/2103.06020v2",econ.GN
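
A static tax-benefit microsimulation of a UBI financed by a flat tax can be reduced, in caricature, to recomputing disposable incomes and comparing inequality before and after the reform. The sketch below does exactly that on a synthetic income distribution; the benefit level, the tax rate, and the lognormal incomes are assumptions and bear no relation to the paper's calibrated Brazilian model.

import numpy as np

def gini(x):
    """Gini coefficient of a non-negative income vector."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1) / n

rng = np.random.default_rng(7)
market_income = rng.lognormal(mean=7.5, sigma=0.9, size=10_000)  # monthly, BRL-like (assumed)

ubi = 600.0          # flat monthly transfer to every individual (assumption)
flat_tax = 0.30      # single marginal rate replacing existing schedules (assumption)

disposable = market_income * (1 - flat_tax) + ubi
print(f"Gini before reform: {gini(market_income):.3f}")
print(f"Gini after reform:  {gini(disposable):.3f}")
print(f"Net cost per person: {ubi - flat_tax * market_income.mean():.1f}")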
32009,gn,"We conduct a sensitivity analysis of a new type of integrated
climate-economic model recently proposed in the literature, where the core
economic component is based on the Goodwin-Keen dynamics instead of a
neoclassical growth model. Because these models can exhibit much richer
behaviour, including multiple equilibria, runaway trajectories and unbounded
oscillations, it is crucial to determine how sensitive they are to changes in
underlying parameters. We focus on four economic parameters (markup rate, speed
of price adjustments, coefficient of money illusion, growth rate of
productivity) and two climate parameters (size of upper ocean reservoir,
equilibrium climate sensitivity) and show how their relative effects on the
outcomes of the model can be quantified by methods that can be applied to an
arbitrary number of parameters.",Sensitivity analysis of an integrated climate-economic model,2021-03-10 20:57:13,"Benjamin M. Bolker, Matheus R. Grasselli, Emma Holmes","http://arxiv.org/abs/2103.06227v1, http://arxiv.org/pdf/2103.06227v1",econ.GN
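
A common way to begin the kind of sensitivity analysis described above is to perturb each parameter one at a time and record the elasticity of an outcome of interest. The sketch below applies this to a toy Goodwin-type cycle standing in for the much richer integrated climate-economic model; the equations, baseline parameter values, and outcome choice are all illustrative assumptions.

import numpy as np

def simulate(params, T=200, dt=0.05):
    """Toy Goodwin-type cycle: wage share u and employment rate v."""
    alpha, beta, gamma, delta = (params[k] for k in ("alpha", "beta", "gamma", "delta"))
    u, v = 0.8, 0.9
    for _ in range(int(T / dt)):
        du = u * (-gamma + delta * v) * dt
        dv = v * (alpha - beta * u) * dt
        u, v = u + du, v + dv
    return v                                   # outcome: final employment rate

baseline = {"alpha": 0.05, "beta": 0.06, "gamma": 0.04, "delta": 0.05}
y0 = simulate(baseline)
for name in baseline:
    bumped = dict(baseline, **{name: baseline[name] * 1.01})   # +1% perturbation
    elasticity = (simulate(bumped) - y0) / y0 / 0.01
    print(f"elasticity of outcome w.r.t. {name}: {elasticity:+.2f}")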
32010,gn,"In the article we made an attempt to reveal the contents and development of
the concept of economic clusters, to characterize the specificity of the
regional cluster as a project. We have identified features of an estimation of
efficiency of state participation in the cluster, where the state is an
institution representing the interests of society.",Assessment of the Effectiveness of State Participation in Economic Clusters,2021-03-11 11:47:41,"A. R. Baghirzade, B. Kushbakov","http://arxiv.org/abs/2103.06530v1, http://arxiv.org/pdf/2103.06530v1",econ.GN
32011,gn,"We develop a method suitable for detecting whether racial homophily is on the
rise and also whether the economic divide (i.e., the gap between individuals
with different education levels and thereby with different abilities to
generate income) is growing in a society. We identify these changes with the
changing aggregate marital preferences over the partners' race and education
level through their effects on the share of inter-racial couples and the share
of educationally homogamous couples. These shares are shaped not only by
preferences, but also by the distributions of marriageable men and women by
traits. The method proposed is designed to control for changes in the trait
distributions from one generation to another. By applying the method, we find
the economic divide in the US to display a U-curve pattern between 1960 and
2010 followed by its slightly negative trend between 2010 and 2015. The
identified trend of racial homophily suggests that the American society has
become more and more permissive towards racial intermarriages since 1970.
Finally, we refute the aggregate version of the status-caste exchange hypothesis
based on the joint dynamics of the economic divide and the racial homophily.","A new method for identifying what Cupid's invisible hand is doing. Is it spreading color blindness while turning us more ""picky'' about spousal education?",2021-03-12 01:54:29,"Anna Naszodi, Francisco Mendonca","http://arxiv.org/abs/2103.06991v2, http://arxiv.org/pdf/2103.06991v2",econ.GN
32012,gn,"An important area of anti-crisis public administration is the development of
small businesses. They are an important part of the economy of developed and
developing countries, provide employment for a significant part of the
population and tax revenues to budgets, and contribute to increased competition
and the development of entrepreneurial abilities of citizens. Therefore, the
primary task of federal and regional state policy is to reduce administrative
barriers and risks, the time and resources spent on opening and developing
small businesses, problems with small businesses' access to bank
capital [8], etc. Despite the loud statements of officials, administrative
barriers to the development of small businesses in trade and public catering
are constantly increasing, including during the 2014-2016 crisis.",Modern risks of small businesses,2021-03-12 14:08:29,A. R Baghirzade,"http://arxiv.org/abs/2103.07213v1, http://arxiv.org/pdf/2103.07213v1",econ.GN
32013,gn,"Academic research projects receive hundreds of billions of dollars of
government investment each year. They complement business research projects by
focusing on the generation of new foundational knowledge and addressing
societal challenges. Despite the importance of academic research, the
management of it is often undisciplined and ad hoc. It has been postulated that
the inherent uncertainty and complexity of academic research projects make them
challenging to manage. However, based on this study's analysis of input and
voting from more than 500 academic research team members in facilitated risk
management sessions, the most important perceived risks are general, as opposed
to being research specific. Overall participants' top risks related to funding,
team instability, unreliable partners, study participant recruitment, and data
access. Many of these risks would require system- or organization-level
responses that are beyond the scope of individual academic research teams.","Risks for Academic Research Projects, An Empirical Study of Perceived Negative Risks and Possible Responses",2021-03-15 00:50:48,P. Alison Paprica,"http://arxiv.org/abs/2103.08048v1, http://arxiv.org/pdf/2103.08048v1",econ.GN
32014,gn,"Since the 1980s, technology business incubators (TBIs), which focus on
accelerating businesses through resource sharing, knowledge agglomeration, and
technology innovation, have become a booming industry. As such, research on
TBIs has gained international attention, most notably in the United States,
Europe, Japan, and China. The present study proposes an entrepreneurial
ecosystem framework with four key components, i.e., people, technology,
capital, and infrastructure, to investigate which factors have an impact on the
performance of TBIs. We also empirically examine this framework based on
unique, three-year panel survey data from 857 national TBIs across China. We
implemented factor analysis and panel regression models on dozens of variables
from 857 national TBIs between 2015 and 2017 in all major cities in China and
found that a number of factors associated with the people, technology, capital,
and infrastructure components have various statistically significant impacts on
the performance of TBIs in either the national model or the regional models.",What are the key components of an entrepreneurial ecosystem in a developing economy? A longitudinal empirical study on technology business incubators in China,2021-03-15 07:18:40,"Xiangfei Yuan, Haijing Hao, Chenghua Guan, Alex Pentland","http://arxiv.org/abs/2103.08131v1, http://arxiv.org/pdf/2103.08131v1",econ.GN
32015,gn,"This paper relies on a microsimulation framework to undertake an analysis of
the distributional implications of the COVID-19 crisis over three waves. Given
the lack of real-time survey data during the fast moving crisis, it applies a
nowcasting methodology and real-time aggregate administrative data to calibrate
an income survey and to simulate changes in the tax benefit system that
attempted to mitigate the impacts of the crisis. Our analysis shows how
crisis-induced income-support policy innovations combined with existing
progressive elements of the tax-benefit system were effective in avoiding an
increase in income inequality at all stages of waves 1-3 of the COVID-19
emergency in Ireland. There was, however, a decline in generosity over time as
benefits became more targeted. On a methodological level, our paper makes a
specific contribution in relation to the choice of welfare measure in assessing
the impact of the COVID-19 crisis on inequality.",A Microsimulation Analysis of the Distributional Impact over the Three Waves of the COVID-19 Crisis in Ireland,2021-03-15 17:14:29,"Cathal O'Donoghue, Denisa M. Sologon, Iryna Kyzyma, John McHale","http://arxiv.org/abs/2103.08398v1, http://arxiv.org/pdf/2103.08398v1",econ.GN
32016,gn,"This paper explores a decentralisation initiative in the United Kingdom - the
Northern Powerhouse strategy (NPS) - in terms of its main goal: strengthening
connectivity between Northern cities of England. It focuses on economic
interactions of these cities, defined by ownership linkages between firms,
since the NPS's launch in 2010. The analysis reveals a relatively weak increase
in the intensity of economic regional patterns in the North, in spite of a
shift away from NPS cities' traditional manufacturing base. These results
suggest potential directions for policy-makers in terms of the future
implementation of the NPS.",Decentralising the United Kingdom: the Northern Powerhouse strategy and urban ownership links between firms since 2010,2021-03-15 21:09:11,"Natalia Zdanowska, Robin Morphet","http://arxiv.org/abs/2103.08627v1, http://arxiv.org/pdf/2103.08627v1",econ.GN
32017,gn,"Why do people engage in certain behavior. What are the effects of social
expectations and perceptions of community behavior and beliefs on own behavior.
Given that proper infant feeding practices are observable and have significant
health impacts, we explore the relevance of these questions in the context of
exclusive infant breastfeeding behavior using social norms theory. We make use
of a primary survey of mothers of children below the age of two years in the
Kayes and Sikasso region of Mali, which have a historically lower prevalence of
exclusive breastfeeding. The findings from regression estimations, controlling
for a host of potential confounding factors, indicate that expectations about
the behavior of other community members can strongly predict individual
exclusive breastfeeding. Beliefs about approval of the infant feeding behavior
of the community though are found to be only modestly associated with it. In
addition, mothers who hold false but positive beliefs about the community are
found to exclusively breastfeed their kids. Further, using responses from
randomly assigned vignettes where we experimentally manipulated the levels of
social expectations, our data reveal a strong relationship between perceived
prevalence of community level exclusive breastfeeding and individual behavior.
This result indicates the existence of a potential causal relationship. We
argue that our findings represent an important foundation for the design of
policy interventions aimed at altering social expectations, and thus effecting
a measurable change in individual behaviors. This type of intervention, by
using social norm messaging to end negative behavior, avoids the use of
coercive measures to effect behavior change in a cost-effective and efficient
way.",Examining norms and social expectations surrounding exclusive breastfeeding: Evidence from Mali,2021-03-17 17:41:23,"Cristina Bicchieri, Upasak Das, Samuel Gant, Rachel Sander","http://arxiv.org/abs/2103.09690v1, http://arxiv.org/pdf/2103.09690v1",econ.GN
32018,gn,"The rapid uptake of renewable energy technologies in recent decades has
increased the demand of energy researchers, policymakers and energy planners
for reliable data on the spatial distribution of their costs and potentials.
For onshore wind energy this has resulted in an active research field devoted
to analysing these resources for regions, countries or globally. A particular
thread of this research attempts to go beyond purely technical or spatial
restrictions and determine the realistic, feasible or actual potential for wind
energy. Motivated by these developments, this paper reviews methods and
assumptions for analysing geographical, technical, economic and, finally,
feasible onshore wind potentials. We address each of these potentials in turn,
including aspects related to land eligibility criteria, energy meteorology, and
technical developments relating to wind turbine characteristics such as power
density, specific rotor power and spacing aspects. Economic aspects of
potential assessments are central to future deployment and are discussed on a
turbine and system level covering levelized costs depending on locations, and
the system integration costs which are often overlooked in such analyses.
Non-technical approaches include scenicness assessments of the landscape,
expert and stakeholder workshops, willingness to pay / accept elicitations and
socioeconomic cost-benefit studies. For each of these different potential
estimations, the state of the art is critically discussed, with an attempt to
derive best practice recommendations and highlight avenues for future research.",Reviewing methods and assumptions for high-resolution large-scale onshore wind energy potential assessments,2021-03-17 20:07:38,"Russell McKenna, Stefan Pfenninger, Heidi Heinrichs, Johannes Schmidt, Iain Staffell, Katharina Gruber, Andrea N. Hahmann, Malte Jansen, Michael Klingler, Natascha Landwehr, Xiaoli Guo Larsén, Johan Lilliestam, Bryn Pickering, Martin Robinius, Tim Tröndle, Olga Turkovska, Sebastian Wehrle, Jann Michael Weinand, Jan Wohland","http://arxiv.org/abs/2103.09781v1, http://arxiv.org/pdf/2103.09781v1",econ.GN
32019,gn,"We investigate the sources of variability in agricultural production and
their relative importance in the context of weather index insurance for
smallholder farmers in India. Using parcel-level panel data, multilevel
modeling, and Bayesian methods we measure how large a role seasonal variation
in weather plays in explaining yield variance. Seasonal variation in weather
accounts for 19-20 percent of total variance in crop yields. Motivated by this
result, we derive pricing and payout schedules for actuarially fair index
insurance. These calculations shed light on the low uptake rates of index
insurance and provide direction for designing more suitable index insurance.","Risk, Agricultural Production, and Weather Index Insurance in Village India",2021-03-20 01:11:18,"Jeffrey D. Michler, Frederi G. Viens, Gerald E. Shively","http://arxiv.org/abs/2103.11047v1, http://arxiv.org/pdf/2103.11047v1",econ.GN
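
The variance-decomposition exercise above asks how much of total yield variance is attributable to seasonal weather. A hedged sketch of that idea using a frequentist random-intercept model (rather than the paper's Bayesian multilevel setup) is given below; the simulated parcel-year data and the single grouping level are simplifying assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
years, parcels = 10, 300
year_effect = rng.normal(scale=0.45, size=years)           # seasonal weather shocks
rows = [{"year": t, "yield_": 2.0 + year_effect[t] + rng.normal(scale=0.9)}
        for t in range(years) for _ in range(parcels)]
df = pd.DataFrame(rows)

# Random intercept by season (year); residual variance captures within-season noise.
model = smf.mixedlm("yield_ ~ 1", data=df, groups=df["year"]).fit()
var_year = model.cov_re.iloc[0, 0]                          # between-season variance
var_resid = model.scale                                     # within-season variance
print(f"share of yield variance from seasonal variation: "
      f"{var_year / (var_year + var_resid):.2%}")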
32020,gn,"Elections are crucial for legitimating modern democracies, and giving all
candidates the possibility to run a proper electoral campaign is necessary for
elections' success in providing such legitimization. Yet, during a pandemic,
the risk that electoral campaigns would enhance the spread of the disease
exists and is substantive. In this work, we estimate the causal impact of
electoral campaigns on the spread of COVID-19. Exploiting plausibly exogenous
variation in the schedule of local elections across Italy, we show that the
electoral campaign preceding these elections led to a significant worsening of the
epidemiological situation related to the disease. Our results strongly
highlight the importance of undertaking stringent measures along the entire
electoral process to minimize its epidemiological consequences.","To vote, or not to vote: on the epidemiological impact of electoral campaigns at the time of COVID-19",2021-03-22 15:12:30,"Davide Cipullo, Marco Le Moglie","http://arxiv.org/abs/2103.11753v1, http://arxiv.org/pdf/2103.11753v1",econ.GN
32021,gn,"We propose a portable framework to infer shareholders' preferences and
influences on firms' prosocial decisions and the costs these decisions impose
on firms and shareholders. Using quasi-experimental variations from the media
coverage of firms' annual general meetings, we find that shareholders support
costly prosocial decisions, such as covid-related donations and private
sanctions on Russia, if they can earn image gains from them. In contrast,
shareholders that the public cannot readily associate with specific firms, like
financial corporations with large portfolios, oppose them. These prosocial
expenditures crowd out investments at exposed firms, reducing productivity and
earnings by between 1 and 3%: pursuing the values of some shareholders comes
at the cost of others, which the shareholders' monitoring motivated by
heterogeneous preferences could prevent.",The Shared Cost of Pursuing Shareholder Value,2021-03-22 22:01:15,"Michele Fioretti, Victor Saint-Jean, Simon C. Smith","http://arxiv.org/abs/2103.12138v13, http://arxiv.org/pdf/2103.12138v13",econ.GN
32022,gn,"In this paper, we provide causal evidence on abortions and risky health
behaviors as determinants of mental health development among young women. Using
administrative in- and outpatient records from Sweden, we apply a novel grouped
fixed-effects estimator proposed by Bonhomme and Manresa (2015) to allow for
time-varying unobserved heterogeneity. We show that the positive association
obtained from standard estimators shrinks to zero once we control for grouped
time-varying unobserved heterogeneity. We estimate the group-specific profiles
of unobserved heterogeneity, which reflect differences in unobserved risk to be
diagnosed with a mental health condition. We then analyze mental health
development and risky health behaviors other than unwanted pregnancies across
groups. Our results suggest that these are determined by the same type of
unobserved heterogeneity, which we attribute to the same unobserved process of
decision-making. We develop and estimate a theoretical model of risky choices
and mental health, in which mental health disparity across groups is generated
by different degrees of self-control problems. Our findings imply that mental
health concerns cannot be used to justify restrictive abortion policies.
Moreover, potential self-control problems should be targeted as early as
possible to combat future mental health consequences.","Mental Health and Abortions among Young Women: Time-varying Unobserved Heterogeneity, Health Behaviors, and Risky Decisions",2021-03-22 23:15:03,"Lena Janys, Bettina Siflinger","http://arxiv.org/abs/2103.12159v4, http://arxiv.org/pdf/2103.12159v4",econ.GN
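
The grouped fixed-effects estimator of Bonhomme and Manresa (2015) referenced above assigns units to a small number of groups, each with its own time profile of unobserved heterogeneity. A stripped-down sketch of the estimator's core assignment-update loop, without covariates and on simulated data, is given below; it only illustrates the mechanics, not the paper's specification.

import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)
N, T, G = 300, 8, 3
true_profiles = np.cumsum(rng.normal(size=(G, T)), axis=1)   # group-specific time profiles
true_groups = rng.integers(0, G, N)
Y = true_profiles[true_groups] + rng.normal(scale=0.5, size=(N, T))

# k-means-style alternation: assign each unit to the closest group profile,
# then recompute each profile as the mean outcome path of its members.
profiles = Y[rng.choice(N, G, replace=False)].copy()
groups = np.zeros(N, dtype=int)
for _ in range(50):
    dists = ((Y[:, None, :] - profiles[None, :, :]) ** 2).sum(axis=2)
    groups = dists.argmin(axis=1)
    for g in range(G):
        if np.any(groups == g):
            profiles[g] = Y[groups == g].mean(axis=0)

# Group labels are identified only up to relabelling, so score the best permutation.
accuracy = max(np.mean(groups == np.array(perm)[true_groups])
               for perm in permutations(range(G)))
print(f"share of units assigned to the correct group: {accuracy:.2%}")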
32023,gn,"We investigate the motivation and means through which individuals expand
their skill-set by analyzing a survey of applicants from the Facebook Jobs
product. Individuals who report being influenced by their networks or local
economy are over 29% more likely to have a postsecondary degree, but peer
effects still exist among those who do not acknowledge such influences. Users
with postsecondary degrees are more likely to upskill in general, by continuing
coursework or applying to higher-skill jobs, though the latter is more common
among users across all education backgrounds. These findings indicate that
policies aimed at connecting individuals with different educational backgrounds
can encourage upskilling. Policies that encourage users to enroll in coursework
may not be as effective among individuals with a high school degree or less.
Instead, connecting such individuals to opportunities that value skills
acquired outside of a formal education, and allow for on-the-job training, may
be more effective.",Understanding Factors that Influence Upskilling,2021-03-23 00:50:39,"Eduardo Laguna-Muggenburg, Monica Bhole, Michael Meaney","http://arxiv.org/abs/2103.12193v2, http://arxiv.org/pdf/2103.12193v2",econ.GN
32024,gn,"Using the method of Haberis and Lipinska (2020), this paper explores the
effect of forward guidance (FG) in a two-country New Keynesian (NK) economy
under the zero lower bound (ZLB). We simulate the effect of different lengths
of FG or the zero interest rate policy under the circumstance of the global
liquidity trap. We show that the size of the intertemporal elasticity of
substitution plays an important role in determining the beggar-thy-neighbor
effect or the prosper-thy-neighbor effect of home FG policy on the foreign
economy. In the former case, by each targeting the minimum welfare loss of its
own country rather than the global welfare loss, the two central banks can
engage in an interesting FG bargaining game in which they either cooperatively
adopt the same length of FG or strategically deviate from cooperation.",The interaction of forward guidance in a two-country new Keynesian model,2021-03-23 16:06:15,"Daisuke Ida, Hirokuni Iiboshi","http://arxiv.org/abs/2103.12503v2, http://arxiv.org/pdf/2103.12503v2",econ.GN
32025,gn,"Because of the ongoing Covid-19 crisis, supply chain management performance
seems to be struggling. The purpose of this paper is to examine a variety of
critical factors related to the application of contingency theory to determine
its feasibility in preventing future supply chain bottlenecks. The study
reviewed current online news reports, previous research on contingency theory,
as well as strategic and structural contingency theories. This paper also
systematically reviewed several global supply chain management and strategic
decision-making studies in an effort to promote a new strategy. The findings
indicated that mass production of products within the United States, as well
as within trading partners, is necessary to prevent additional Covid-19
related supply chain gaps. The paper noted that in many instances, the United
States has become dependent on foreign products, and that preventing future
supply chain gaps requires the United States to restore its manufacturing
prowess.",The Current Chinese Global Supply Chain Monopoly and the Covid-19 Pandemic,2021-03-23 22:56:36,"George Rapciewicz Jr., Dr. Donald Buresh","http://dx.doi.org/10.14302/issn.2692-1537.ijcv-21-3720, http://arxiv.org/abs/2103.12812v1, http://arxiv.org/pdf/2103.12812v1",econ.GN
32026,gn,"Difference-in-differences estimation is a widely used method of program
evaluation. When treatment is implemented in different places at different
times, researchers often use two-way fixed effects to control for
location-specific and period-specific shocks. Such estimates can be severely
biased when treatment effects change over time within treated units. I review
the sources of this bias and propose several simple diagnostics for assessing
its likely severity. I illustrate these tools through a case study of free
primary education in Sub-Saharan Africa.",Simple Diagnostics for Two-Way Fixed Effects,2021-03-24 17:39:43,Pamela Jakiela,"http://arxiv.org/abs/2103.13229v1, http://arxiv.org/pdf/2103.13229v1",econ.GN
32027,gn,"In higher education, gamification offers the prospect of providing a pivotal
shift from traditional asynchronous forms of engagement, to developing methods
to foster greater levels of synchronous interactivity and partnership between
and amongst teaching and learning stakeholders. The small vein of research on
gamification in teaching and learning contexts has mainly focused
on the implementation of pre-determined game elements. This approach reflects a
largely asynchronous approach to the development of learning practices in
educational settings, thereby limiting stakeholder engagement in their design
and adoption. Therefore, we draw on the theory of co-creation to examine the
development process of gamification-based learning as a synchronous partnership
between and amongst teaching and learning stakeholders. Empirical insights
suggest that students gain a greater sense of partnership and inclusivity as
part of a synchronous co-creation gamification-based learning development and
implementation process.",Co-Creation of Innovative Gamification Based Learning: A Case of Synchronous Partnership,2021-03-24 18:40:44,"Nicholas Dacre, Vasilis Gkogkidis, Peter Jenkins","http://dx.doi.org/10.2139/ssrn.3486496, http://arxiv.org/abs/2103.13273v2, http://arxiv.org/pdf/2103.13273v2",econ.GN
32028,gn,"The COVID19 pandemic has forced Indian engineering institutions (EIs) to
bring their previous half-shut shades completely down. Attracting new
admissions to EI campuses during the pandemic has become a now-or-never
situation for EIs. During crisis situations, institutions have struggled to
return to the normal track. The pandemic has drastically changed students'
behavior and family preferences due to mental stress and the emotional life
attached to it. Consequently, it has become a prerequisite, and an urgent
need, to examine the choice characteristics influencing the selection of an
EI during the COVID19 pandemic situation.
  The purpose of this study is to critically examine the institutional
influence and the pandemic influence due to COVID19 that affect students'
choice of an engineering institution (EI) and, consequently, to explore
relationships between institutional and pandemic influence. The findings of
this quantitative research, conducted through a self-reported survey, have
revealed that institutional and pandemic influence have governed EI choice
under the COVID19 pandemic. Second, pandemic influence is positively affected
by institutional influence. The study demonstrated that EIs will have to
reposition themselves to normalize pandemic influence by tuning the
institutional characteristics that regulate situational influence and new
enrollments. It can serve as a yardstick for policy makers to attract new
enrollments under pandemic situations.",Making it normal for new enrollments: Effect of institutional and pandemic influence on selecting an engineering institution under the COVID-19 pandemic situation,2021-03-23 20:35:29,"Prashant Mahajan, Vaishali Patil","http://dx.doi.org/10.21203/rs.3.rs-354086/v1, http://arxiv.org/abs/2103.13297v1, http://arxiv.org/pdf/2103.13297v1",econ.GN
32029,gn,"I study households' primary health care usage in India, which presents a
paradox. I examine why most households use fee-charging private health care
services even though (1) most providers have no formal medical qualifications
and (2) qualified doctors offer free care through public hospitals in the same
markets. I present evidence that this puzzling practice has deep historical
roots. I examine India's coercive forced sterilization policy implemented
between 1976 and 1977. Utilizing the unexpected timing of the policy, multiple
measures of forced sterilization, including at a granular level, and an
instrumental variable approach, I document that places heavily affected by the
policy have lower public health care usage today. I also show that the
instrument I use is unrelated to a battery of demographic, economic, or
political aspects before the forced sterilization period. Finally, I explore
the mechanism and document that supply-side factors do not explain these
differences. Instead, I demonstrate that places with greater exposure to forced
sterilization have higher confidence in private hospitals and doctors to
provide good treatment.",Understanding the Paradox of Primary Health Care Use: Empirical Evidence from India,2021-03-25 13:34:18,Pramod Kumar Sur,"http://arxiv.org/abs/2103.13737v2, http://arxiv.org/pdf/2103.13737v2",econ.GN
32030,gn,"This study analyses the actual effect of a representative low-emission zone
(LEZ) in terms of shifting vehicle registrations towards alternative fuel
technologies and its effectiveness for reducing vehicle fleet CO2 emissions.
Vehicle registration data is combined with real life fuel consumption values on
individual vehicle model level, and the impact of the LEZ is then determined
via an econometric approach. The increase in alternative fuel vehicles (AFV)
registration shares due to the LEZ is found to be significant, but it fosters
fossil fuel powered AFV and plug-in hybrid electric vehicles rather than zero
emission vehicles. This is reflected in the average CO2 emissions of newly
registered vehicles, which do not decrease significantly. In consequence, while
the LEZ is an effective measure for stimulating the shift towards low emission
vehicles, the support of non-electric AFV as low emission vehicles jeopardizes
its effectiveness for decarbonizing the vehicle fleet.",Low emission zones: Effects on alternative-fuel vehicle uptake and fleet CO2 emissions,2021-03-25 15:53:09,"Jens F. Peters, Mercedes Burguillo, Jose M. Arranz","http://dx.doi.org/10.1016/j.trd.2021.102882, http://arxiv.org/abs/2103.13801v2, http://arxiv.org/pdf/2103.13801v2",econ.GN
32031,gn,"Government employees in Brazil are granted tenure after three years on the
job. Firing a tenured government employee is all but impossible, so tenure is a
big employee benefit. But exactly how big is it? In other words: how much money
is tenure worth to a government employee in Brazil? No one has ever attempted
to answer that question. I do that in this paper. I use a modified version of
the Sharpe ratio to estimate what the risk-adjusted salaries of government
workers should be. The difference between actual salary and risk-adjusted
salary gives us an estimate of how much tenure is worth to each employee. I
find that in the 2005-2019 period the monthly value of tenure was 3980 reais to
the median federal government employee, 1971 reais to the median state
government employee, and 500 reais to the median municipal government employee.",Putting a price on tenure,2021-03-25 19:45:34,Thiago Marzagao,"http://arxiv.org/abs/2103.13965v3, http://arxiv.org/pdf/2103.13965v3",econ.GN
32032,gn,"During the global spread of COVID-19, Japan has been among the top countries
to maintain a relatively low number of infections, despite implementing limited
institutional interventions. Using a Tokyo Metropolitan dataset, this study
investigated how these limited intervention policies have affected public
health and economic conditions in the COVID-19 context. A causal loop analysis
suggested that there were risks to prematurely terminating such interventions.
On the basis of this result and subsequent quantitative modelling, we found
that a short-term pre-emptive stay-at-home request was effective only in the
short term and was followed by a resurgence in the number of positive cases,
whereas an additional request provided a limited negative add-on effect for economic
measures (e.g. the number of electronic word-of-mouth (eWOM) communications and
restaurant visits). These findings suggest the superiority of a mild and
continuous intervention as a long-term countermeasure under epidemic pressures
when compared to strong intermittent interventions.",Superiority of mild interventions against COVID-19 on public health and economic measures,2021-03-26 10:27:42,"Makoto Niwa, Yasushi Hara, Yusuke Matsuo, Hodaka Narita, Lim Yeongjoo, Shintaro Sengoku, Kota Kodama","http://arxiv.org/abs/2103.14298v1, http://arxiv.org/pdf/2103.14298v1",econ.GN
32033,gn,"The form of political polarization where citizens develop strongly negative
attitudes towards out-party policies and members has become increasingly
prominent across many democracies. Economic hardship and social inequality, as
well as inter-group and racial conflict, have been identified as important
contributing factors to this phenomenon known as ""affective polarization."" Such
partisan animosities are exacerbated when these interests and identities become
aligned with existing party cleavages. In this paper we use a model of cultural
evolution to study how these forces combine to generate and maintain affective
political polarization. We show that economic events can drive both affective
polarization and sorting of group identities along party lines, which in turn
can magnify the effects of underlying inequality between those groups. But on a
more optimistic note, we show that sufficiently high levels of wealth
redistribution through the provision of public goods can counteract this
feedback and limit the rise of polarization. We test some of our key
theoretical predictions using survey data on inter-group polarization, sorting
of racial groups and affective polarization in the United States over the past
50 years.","Inequality, Identity, and Partisanship: How redistribution can stem the tide of mass polarization",2021-03-26 20:31:44,"Alexander J. Stewart, Joshua B. Plotkin, Nolan McCarty","http://dx.doi.org/10.1073/pnas.2102140118, http://arxiv.org/abs/2103.14619v1, http://arxiv.org/pdf/2103.14619v1",econ.GN
32034,gn,"This paper aims to further understand the main factors influencing the
behavioural intentions (BI) of private vehicle users towards public transport
to provide policymakers and public transport operators with the tools they need
to attract more private vehicle users. As service quality, satisfaction and
attitudes towards public transport are considered the main motivational forces
behind the BI of public transport users, this research analyses 26 indicators
frequently associated with these constructs for both public transport users and
private vehicle users. Non-parametric tests and ordinal logit models have been
applied to an online survey conducted in Madrid's metropolitan area with a sample
size of 1,025 respondents (525 regular public transport users and 500 regular
private vehicle users). In order to achieve a comprehensive analysis and to
deal with heterogeneity in perceptions, 338 models have been developed for the
entire sample and for 12 users' segments. The results led to the identification
of indicators with no significant differences between public transport and
private vehicle users in any of the segments being considered (punctuality,
information and low-income), as well as those that did show significant
differences in all the segments (proximity, intermodality, save time and money,
and lifestyle). The main differences between public transport and private
vehicle users were found in the attitudes towards public transport and for
certain user segments (residents in the city centre, males, young, with
university qualification and with incomes above 2,700 EUR/month). Findings from
this study can be used to develop policies and recommendations for persuading
more private vehicle users to use the public transport services.","Public transport users versus private vehicle users: differences about quality of service, satisfaction and attitudes toward public transport in Madrid (Spain)",2021-03-27 02:01:13,"Juan de Oña, Esperanza Estévez, Rocio de Oña","http://dx.doi.org/10.1016/j.tbs.2020.11.003, http://arxiv.org/abs/2103.14762v1, http://arxiv.org/pdf/2103.14762v1",econ.GN
32035,gn,"This study on public-private research collaboration measures the variation
over time of the propensity of academics to collaborate with colleagues from
private companies. It also investigates the change in weights of the main
drivers underlying the academics' propensity to collaborate, and whether the
type profile of the collaborating academics changes. To do this, the study
applies an inferential model to a dataset of professors working in Italian
universities in consecutive periods, 2010-2013 and 2014-2017. The results,
obtained at overall and field levels, support the formulation of policies aimed
at fostering public-private research collaborations, and should be taken into
account in post-assessment of their effectiveness.","Public-private research collaborations: longitudinal field-level analysis of determinants, frequency and impact",2021-03-27 12:35:22,"Giovanni Abramo, Francesca Apponi, Ciriaco Andrea D'Angelo","http://arxiv.org/abs/2103.14857v1, http://arxiv.org/pdf/2103.14857v1",econ.GN
32036,gn,"University-industry research collaboration is one of the major research
policy priorities of advanced economies. In this study, we try to identify the
main drivers that could influence the propensity of academics to engage in
research collaborations with the private sector, in order to better inform
policies and initiatives to foster such collaborations. For this purpose, we
apply an inferential model to a dataset of 32,792 Italian professors in order
to analyze the relative impact of individual and contextual factors affecting
the propensity of academics to engage in collaboration with industry, at
overall level and across disciplines. The outcomes reveal that the typical
profile of the professor collaborating with industry is a male under age 40,
full professor, very high performer, with highly diversified research, and who
has a certain tradition in collaborating with industry. This professor is
likely to be part of a staff used to collaborating with industry, in a small
university, typically a polytechnic, located in the north of the country.",Drivers of academic engagement in public-private research collaboration: an empirical study,2021-03-27 12:35:46,"Giovanni Abramo, Ciriaco Andrea D'Angelo","http://arxiv.org/abs/2103.14859v1, http://arxiv.org/pdf/2103.14859v1",econ.GN
32037,gn,"We document the evolution of labour market power by employers on the US and
Peruvian labour markets during the 2010s. Making use of a structural estimation
model of labour market dynamics, we estimate differences in market power that
workers face depending on their sector of activity, their age, sex, location
and educational level. In particular, we show that differences in
cross-sectional market power are significant and higher than variations over
the ten-year time span of our data. In contrast to findings of labour market
power in developed countries such as the US, we document significant market
power of employers in Peru vis-à-vis the tertiary educated workforce,
regardless of age and sector. In contrast, for the primary educated workforce,
market power seems to be high in (private) services and manufacturing. For
secondary educated workers, only the mining sector stands out as moderately
more monopsonistic than the rest of the labour market. We also show that at
least for the 2010s, labour market power declined in Peru. We contrast these
findings with similar estimates obtained for the United States where we are
able to show that increases in labour market power are particularly acute in
certain sectors, such as agriculture and entertainment and recreational
services, as well as in specific geographic areas where these sectors are
dominant. Moreover, we show that for the US, the labour market power has
gradually increased over the past ten years, in line with the general rise in
inequality. Importantly, we show that the pervasive gender pay gap cannot be
linked to differential market power as men face higher labour market power than
women. We also discuss possible reasons for these findings, including for the
differences in labour market power across skill levels. In particular, we
discuss the reasons for the polarization of market power that are specific to
sectors and locations.",How has labour market power evolved? Comparing labour market monopsony in Peru and the United States,2021-03-28 20:46:05,"Jorge Davalos, Ekkehard Ernst","http://arxiv.org/abs/2103.15183v1, http://arxiv.org/pdf/2103.15183v1",econ.GN
32038,gn,"This study examines the impact of technological leapfrogging on manufacturing
value-added in SSA. The study utilizes secondary data spanning 1990 to 2018.
The data is analyzed using cross-sectional autoregressive distributed lags
(CS-ARDL) and cross-sectional distributed lags (CS-DL) techniques. The study
found that technological leapfrogging is a positive driver of manufacturing
value-added in SSA. This implies that SSA can copy the foreign technologies and
adapt them for domestic uses, rather than going through the evolutionary
process of the old technologies that are relatively less efficient. If the
governments of SSA reinforce their absorptive capacity and raise productivity
through proper utilization of the existing technology, the productive
activities of the domestic firms will stir new innovations and discoveries
that will eventually translate into indigenous technology.",Technological Leapfrogging and Manufacturing Value-added in sub-Saharan African (1990-2018),2021-03-30 06:31:52,"Segun Michael Ojo, Edward Oladipo Ogunleye","http://arxiv.org/abs/2103.16049v1, http://arxiv.org/pdf/2103.16049v1",econ.GN
32039,gn,"Tis paper is a literature review focusing on human capital, skills of
employees, demographic change, management, training and their impact on
productivity growth. Intrafirm behaviour has been recognized as a potentially
important driver for productivity. Results from surveys show that management
practices have become more structured, in the sense of involving more data
collection and analysis. Furthermore, a strong positive correlation between the
measured management quality and firm performance can be observed. Studies
suggest that there is a positive association between management score and
productivity growth. A lack or low level of employees' skills and
qualifications might, in different ways, help explain the observed slowdown of
productivity growth. The main reason for the decline in skilled labor is
demographic change. Construction sectors are increasingly
affected by demographic developments. Labour reserves in construction are
largely exhausted. Shortage of qualified workforce is impacting project cost,
schedules and quality.",Productivity development in the construction industry and human capital: a literature review,2021-04-01 00:46:49,"Matthias Bahr, Leif Laszig","http://dx.doi.org/10.5121/civej.2021.8101, http://arxiv.org/abs/2104.00129v1, http://arxiv.org/pdf/2104.00129v1",econ.GN
32040,gn,"Plagiarism is the representation of another author's language, thoughts,
ideas, or expressions as one's own original work. In educational contexts,
there are differing definitions of plagiarism depending on the institution.
Prominent scholars of plagiarism include Rebecca Moore Howard, Susan Blum,
Tracey Bretag, and Sarah Elaine Eaton, among others. Plagiarism is considered a
violation of academic integrity and a breach of journalistic ethics. It is
subject to sanctions such as penalties, suspension, expulsion from school or
work, substantial fines and even incarceration. Recently, cases of ""extreme
plagiarism"" have been identified in academia. The modern concept of plagiarism
as immoral and originality as an ideal emerged in Europe in the 18th century,
particularly with the Romantic movement. Generally, plagiarism is not in itself
a crime, but like counterfeiting, fraud can be punished in a court for
prejudices caused by copyright infringement, violation of moral rights, or
torts. In academia and industry, it is a serious ethical offense. Plagiarism
and copyright infringement overlap to a considerable extent, but they are not
equivalent concepts, and many types of plagiarism do not constitute copyright
infringement, which is defined by copyright law and may be adjudicated by
courts. Plagiarism might not be the same in all countries. Some countries, such
as India and Poland, consider plagiarism to be a crime, and there have been
cases of people being imprisoned for plagiarizing. In other instances
plagiarism might be the complete opposite of ""academic dishonesty,"" in fact
some countries find the act of plagiarizing a professional's work flattering.
Students who move to the United States and other Western countries from
countries where plagiarism is not frowned upon often find the transition
difficult.",On the Perception of Plagiarism in Academia: Context and Intent,2021-04-01 18:57:00,"Aaron Gregory, Joshua Leeman","http://arxiv.org/abs/2104.00574v1, http://arxiv.org/pdf/2104.00574v1",econ.GN
32047,gn,"Managing morning commute traffic through parking provision management has
been well studied in the literature. However, most previous studies made the
assumption that all road users require parking spaces in the CBD area. In
recent years, however, due to technological advancements and low market entry
barriers, more and more e-dispatch FHVs (eFHVs) are provided in service. The
rapidly growing eFHVs, on the one hand, supply substantial trip services and
complete trips requiring no parking; on the other hand, they impose congestion
effects on all road users. In this study, we investigate the multi-modal
morning commute problem with bottleneck congestion and parking space
constraints in the presence of ride-sourcing and transit service. Meanwhile, we
derive the optimal number of parking spaces to best manage the commute traffic.
One interesting finding is that, in the presence of ride-sourcing, excessive
supply of parking spaces could incur higher system commute costs in the
multi-modal case.",Optimal parking provision in multi-modal morning commute problem considering ride-sourcing service,2021-04-05 07:04:51,Qida Su,"http://arxiv.org/abs/2104.01761v1, http://arxiv.org/pdf/2104.01761v1",econ.GN
32041,gn,"The initial purpose of the study is to search whether the market exhibits
herd behaviour or not by examining the crypto-asset market in the context of
behavioural finance. And the second purpose of the study is to measure whether
the financial information stimulates the herd behaviour or not. Within this
frame, the announcements of the Federal Open Market Committee (FOMC), Governing
Council of European Central Bank (ECB) and Policy Board of Bank of Japan (BOJ)
for interest change, and S&P 500, Nikkei 225, FTSE 100 and GOLD SPOT indices
data were used. In the study, the analyses were made over 100 cryptocurrencies
with the highest trading volume by the use of the 2014:5 - 2019:12 period. For
the analysis, the Markov Switching approach, as well as loads of empiric models
developed by Chang et al. (2000), were used. According to the results obtained,
the presence of herd behaviour in the crypto-asset market was determined in the
relevant period. But it was found that interest rate announcements and stock
exchange performances had no effect on herd behaviour.",Herd Behavior in Crypto Asset Market and Effect of Financial Information on Herd Behavior,2021-04-01 23:54:46,"Üzeyir Aydin, Büşra Ağan, Ömer Aydin","http://dx.doi.org/10.34109/ijefs.202012221, http://arxiv.org/abs/2104.00763v1, http://arxiv.org/pdf/2104.00763v1",econ.GN
32042,gn,"More than half a century has passed since the Great Chinese Famine
(1959-1961), and China has transformed from a poor, underdeveloped country to
the world's leading emerging economy. Does the effect of the famine persist
today? To explore this question, we combine historical data on province-level
famine exposure with contemporary data on individual wealth. To better
understand if the relationship is causal, we simultaneously account for the
well-known historical evidence on the selection effect arising for those who
survive the famine and those born during this period, as well as the issue of
the endogeneity of a province's exposure to the famine. We find robust
evidence showing that famine exposure has had a considerable negative effect on
the contemporary wealth of individuals born during this period. Together, the
evidence suggests that the famine had an adverse effect on wealth, and it is
even present among the wealthiest cohort of individuals in present-day China.",The Persistent Effect of Famine on Present-Day China: Evidence from the Billionaires,2021-04-02 11:27:12,"Pramod Kumar Sur, Masaru Sasaki","http://arxiv.org/abs/2104.00935v2, http://arxiv.org/pdf/2104.00935v2",econ.GN
32043,gn,"Flextime is one of the efficient approaches in travel demand management to
reduce peak hour congestion and encourage social distancing in epidemic
prevention. Previous literature has developed bi-level models of the work
starting time choice considering both labor output and urban mobility. Yet,
most analytical studies assume a single trip purpose in peak hours (commuting
to work) only and do not consider household travels (daycare drop-off/pick-up). In
fact, as one of the main reasons to adopt flextime, household travel plays an
influential role in travelers' decision making on work schedule selection. On
this account, we incorporate household travels into the work starting time
choice model in this study. Both short-run travel behaviours and long-run work
start time selection of heterogeneous commuters are examined under agglomeration
economies. If flextime is not flexible enough, commuters tend to agglomerate in
work schedule choice at long-run equilibrium. Further, we analyze optimal
schedule choices with two system performance indicators. For total commuting
cost, it is found that the rigid school schedule for households may impede the
benefits of flextime in commuting cost saving. In terms of total net benefit,
while work schedule agglomeration of all commuters leads to the maximum in some
cases, the polarized agglomeration of the two heterogeneous groups can never
achieve the optimum.",Bottleneck Congestion And Work Starting Time Distribution Considering Household Travels,2021-04-02 11:37:59,"Qida Su, David Z. W. Wang","http://arxiv.org/abs/2104.00938v1, http://arxiv.org/pdf/2104.00938v1",econ.GN
32044,gn,"We propose a general methodology to measure labour market dynamics, inspired
by the search and matching framework, based on the estimate of the transition
rates between labour market states. We show how to estimate instantaneous
transition rates starting from discrete time observations provided in
longitudinal datasets, allowing for any number of states. We illustrate the
potential of such methodology using Italian labour market data. First, we
decompose the unemployment rate fluctuations into inflow and outflow driven
components; then, we evaluate the impact of the implementation of a labour
market reform, which substantially changed the regulations of temporary
contracts.",A general methodology to measure labour market dynamics,2021-04-02 18:44:36,"Davide Fiaschi, Cristina Tealdi","http://arxiv.org/abs/2104.01097v1, http://arxiv.org/pdf/2104.01097v1",econ.GN
32045,gn,"This paper presents a model where intergenerational occupational mobility is
the joint outcome of three main determinants: income incentives, equality of
opportunity and changes in the composition of occupations. The model
rationalizes the use of transition matrices to measure mobility, which allows
for the identification of asymmetric mobility patterns and for the formulation
of a specific mobility index for each determinant. Italian children born in
1940-1951 had lower mobility than those born after 1965. The
steady mobility for children born after 1965, however, covers a lower
structural mobility in favour of upper-middle classes and a higher downward
mobility from upper-middle classes. Equality of opportunity was far from
perfect but steady for those born after 1965. Changes in income incentives
instead played a major role, leading to a higher downward mobility from
upper-middle classes and lower upward mobility from the lower class.",Occupational Mobility: Theory and Estimation for Italy,2021-04-03 04:04:04,"Irene Brunetti, Davide Fiaschi","http://dx.doi.org/10.1007/s10888-023-09568-8, http://arxiv.org/abs/2104.01285v1, http://arxiv.org/pdf/2104.01285v1",econ.GN
32046,gn,"We use geospatial data to examine the unprecedented national program
currentlyunderway in the United States to distribute and administer vaccines
against COVID-19. We quantify the impact of the proposed federal partnership
with the companyDollar General to serve as vaccination sites and compare
vaccine access with DollarGeneral to the current Federal Retail Pharmacy
Partnership Program. Although dollarstores have been viewed with skepticism and
controversy in the policy sector, we showthat, relative to the locations of the
current federal program, Dollar General stores aredisproportionately likely to
be located in Census tracts with high social vulnerability;using these stores
as vaccination sites would greatly decrease the distance to vaccinesfor both
low-income and minority households. We consider a hypothetical
alternativepartnership with Dollar Tree and show that adding these stores to
the vaccinationprogram would be similarly valuable, but impact different
geographic areas than theDollar General partnership. Adding Dollar General to
the current pharmacy partnersgreatly surpasses the goal set by the Biden
administration of having 90% of the popu-lation within 5 miles of a vaccine
site. We discuss the potential benefits of leveragingthese partnerships for
other vaccinations, including against influenza.",Equity Impacts of Dollar Store Vaccine Distribution,2021-04-03 05:19:50,"Judith A. Chevalier, Jason L. Schwartz, Yihua Su, Kevin R. Williams","http://arxiv.org/abs/2104.01295v1, http://arxiv.org/pdf/2104.01295v1",econ.GN
32048,gn,"Travellers in autonomous vehicles (AVs) need not to walk to the destination
any more after parking like those in conventional human-driven vehicles (HVs).
Instead, they can drop off directly at the destination and AVs can cruise for
parking autonomously. It is a revolutionary change that such parking autonomy
of AVs may increase the potential parking span substantially and affect the
spatial parking equilibrium. Given this, from urban planners' perspective, it
is of great necessity to reconsider the planning of parking supply along the
city. To this end, this paper is the first to examine the spatial parking
equilibrium considering the mix of AVs and HVs with parking cruising effect. It
is found that the equilibrium solution of travellers' parking location choices
can be biased due to the ignorance of cruising effects. On top of that, the
optimal parking span of AVs at given parking supply should be no less than that
at equilibrium. Besides, the optimal parking planning to minimize the total
parking cost is also explored in a bi-level parking planning design problem
(PPDP). While the optimal differentiated pricing allows the system to achieve
optimal parking distribution, this study suggests that it is beneficial to
encourage AVs to cruise further to park by reserving less than enough parking
areas for AVs.",Spatial parking planning design with mixed conventional and autonomous vehicles,2021-04-05 07:59:27,"Qida Su, David Z. W. Wang","http://arxiv.org/abs/2104.01773v1, http://arxiv.org/pdf/2104.01773v1",econ.GN
32049,gn,"Standard economic theory uses mathematics as its main means of understanding,
and this brings clarity of reasoning and logical power. But there is a
drawback: algebraic mathematics restricts economic modeling to what can be
expressed only in quantitative nouns, and this forces theory to leave out
matters to do with process, formation, adjustment, creation and nonequilibrium.
For these we need a different means of understanding, one that allows verbs as
well as nouns. Algorithmic expression is such a means. It allows verbs
(processes) as well as nouns (objects and quantities). It allows fuller
description in economics, and can include heterogeneity of agents, actions as
well as objects, and realistic models of behavior in ill-defined situations.
The world that algorithms reveal is action-based as well as object-based,
organic, possibly ever-changing, and not fully knowable. But it is strangely
and wonderfully alive.",Economics in Nouns and Verbs,2021-04-05 15:13:36,W. Brian Arthur,"http://arxiv.org/abs/2104.01868v2, http://arxiv.org/pdf/2104.01868v2",econ.GN
32050,gn,"Retail centers can be considered as places for interactional and recreational
activities and such social roles of retail centers contribute to the popularity
of the retail centers. Therefore, the main objective of this study was to
identify effective factors encouraging customers to engage with interactional
activities and measure how these factors affect customer behavior. Accordingly,
two hypotheses were proposed, stating that the travel time (i.e., the time it
takes for a customer to reach the retail center) and the variety of shops (in a
retail center) increase the percentage of people who spend their leisure time
and recreational activities in retail centers. Two case studies were conducted in
two analogous retail centers, one in Tehran, Iran, and the other in Madrid,
Spain. According to the results, there is an interaction between the travel
time and the motivation for the presence of people in the retail center.
Furthermore, the results revealed that, in both retail centers, half of the
goers who spend more than 10 minutes reaching the retail center prefer leisure
activities and browsing to shopping. In other words, the longer it takes a
person to get to the center, the more likely he/she is to spend more time in
the mall and do more leisure activities. It is also found that there is a
significant relationship between the variety of shops in a retail center and
the motivation of customers attending a retail center that encourages people to
spend their leisure time in retail centers.",Driving Factors Behind the Social Role of Retail Centers on Recreational Activities,2021-04-06 17:35:56,"Sepideh Baghaee, Saeed Nosratabadi, Farshid Aram, Amir Mosavi","http://arxiv.org/abs/2104.02544v1, http://arxiv.org/pdf/2104.02544v1",econ.GN
32051,gn,"As environmental concerns mostly drive the electrification of our economy and
the corresponding increase in demand for battery storage systems, information
about the potential environmental impacts of the different battery systems is
required. However, this kind of information is scarce for emerging post-lithium
systems such as the magnesium-sulfur (MgS) battery. Therefore, we use life
cycle assessment following a cradle-to-gate perspective to quantify the
cumulative energy demand and potential environmental impacts per Wh of the
storage capacity of a hypothetical MgS battery (46 Wh/kg). Furthermore, we also
estimate global warming potential (0.33 kg CO2 eq/Wh), fossil depletion
potential (0.09 kg oil eq/Wh), ozone depletion potential (2.5E-08 kg
CFC-11/Wh) and metal depletion potential (0.044 kg Fe eq/Wh), associated with
the MgS battery production. The battery is modelled based on an existing
prototype MgS pouch cell and hypothetically optimised according to the current
state of the art in lithium-ion batteries (LIB), exploring future improvement
potentials. It turns out that the initial (non-optimised) prototype cell cannot
compete with current LIB in terms of energy density or environmental
performance, mainly due to the high share of non-active components, decreasing
its performance substantially. Therefore, if the assumed evolutions of the MgS
cell composition are achieved to overcome current design hurdles and reach
lifespan, efficiency, cost and safety levels comparable to those of existing
LIB, then the MgS battery has significant potential to outperform both existing
LIB and lithium-sulfur batteries.",Environmental assessment of a new generation battery: The magnesium-sulfur system,2021-04-08 17:18:49,"Claudia Tomasini Montenegro, Jens F. Peters, Manuel Baumann, Zhirong Zhao-Karger, Christopher Wolter, Marcel Weil","http://dx.doi.org/10.1016/j.est.2020.102053, http://arxiv.org/abs/2104.03794v2, http://arxiv.org/pdf/2104.03794v2",econ.GN
32052,gn,"Online dating emerged as a key platform for human mating. Previous research
focused on socio-demographic characteristics to explain human mating in online
dating environments, neglecting the commonly recognized relevance of sport.
This research investigates the effect of sport activity on human mating by
exploiting a unique data set from an online dating platform. Thereby, we
leverage recent advances in the causal machine learning literature to estimate
the causal effect of sport frequency on the contact chances. We find that for
male users, doing sport on a weekly basis increases the probability of
receiving a first message from a woman by 50%, relative to not doing sport at all. For
female users, we do not find evidence for such an effect. In addition, for male
users the effect increases with higher income.",The Effect of Sport in Online Dating: Evidence from Causal Machine Learning,2021-04-07 18:51:55,"Daniel Boller, Michael Lechner, Gabriel Okasa","http://arxiv.org/abs/2104.04601v1, http://arxiv.org/pdf/2104.04601v1",econ.GN
32053,gn,"Carbon emission right allowance is a double-edged sword, one edge is to
reduce emission as its original design intention, another edge has in practice
slain many less developed coal-consuming enterprises, especially for those in
thermal power industry. Partially governed on the hilt in hands of the
authority, body of this sword is the prices of carbon emission right. How
should the thermal power plants dance on the blade motivates this research.
Considering the impact of price fluctuations of carbon emission right
allowance, we investigate the operation of Chinese thermal power plant by
modeling the decision-making with optimal stopping problem, which is
established on the stochastic environment with carbon emission allowance price
process simulated by geometric Brownian motion. Under the overall goal of
maximizing the ultimate profitability, the optimal stopping indicates the
timing of suspend or halt of production, hence the optimal stopping boundary
curve implies the edge of life and death with regard to this enterprise.
Applying this methodology, real cases of failure and survival of several
Chinese representative thermal power plants were analyzed to explore the
industry ecotope, which leads to the following findings: 1) The survival
environment of existing thermal power plants becomes harsher when facing more
pressure from the newborn carbon-finance market. 2) The boundaries of the
survival environment are mainly drawn by technical improvements that raise the
utilization rate of carbon emissions. Based on the same optimal stopping model, the outlook of this
industry is drawn with a demarcation surface defining the vivosphere of thermal
power plants with different levels of profitability. This finding provides
benchmarks for those enterprises struggling for survival and for policy makers
designing better supervision and necessary interventions.",Option to survive or surrender: carbon asset management and optimization in thermal power enterprises from China,2021-04-10 13:22:02,"Yue Liu, Lixin Tian, Zhuyun Xie, Zaili Zhen, Huaping Sun","http://arxiv.org/abs/2104.04729v1, http://arxiv.org/pdf/2104.04729v1",econ.GN
32054,gn,"We live in an age of consumption with an ever-increasing demand of already
scarce resources and equally fast growing problems of waste generation and
climate change. To tackle these difficult issues, we must learn from mother
nature. Just like waste does not exist in nature, we must strive to create
circular ecosystems where waste is minimized and energy is conserved. This
paper focuses on how public procurement can help us transition to a more
circular economy, while navigating international trade laws that govern it.",WTO GPA and Sustainable Procurement as Tools for Transitioning to a Circular Economy,2021-04-10 15:05:13,Sareesh Rawat,"http://arxiv.org/abs/2104.04744v1, http://arxiv.org/pdf/2104.04744v1",econ.GN
32055,gn,"This paper studies the impact of Demand-pull (DP) and Technology-push (TP) on
growth, innovation, and the factor bias of technological change in a two-layer
network of input-output (market) and patent citation (innovation) links among
307 6-digit US manufacturing industries in 1977-2012. Two types of TP and DP
are distinguished: (1) DP and TP are between-layer spillovers when market
demand shocks pull innovation and innovation pushes market growth. (2)
Within-layer DP arises if downstream users trigger upstream innovation and
growth, while TP effects spill over from up- to downstream industries. The
results support between- and within-layer TP: Innovation spillovers from
upstream industries drive market growth and innovation. Within the market,
upstream supply shocks stimulate growth, but this effect differs across
industries. DP is not supported but shows a factor bias favoring labor, while
TP comes with a shift towards non-production work. The results are strongest
after the 2000s and shed light on the drivers of recent technological change
and its factor bias.","Demand-pull, technology-push, and the direction of technological change",2021-04-10 19:56:53,Kerstin Hötte,"http://arxiv.org/abs/2104.04813v5, http://arxiv.org/pdf/2104.04813v5",econ.GN
32056,gn,"In this note an assessment of the condition \(K_w/K=S_w/S\) is made to
interpret its meaning to the Passineti's theory of
distribution\cite{pasinetti1962rate}. This condition leads the theory to
enforce the result \(s_w\rightarrow0\) as \(P_w\rightarrow 0\), which is the
Pasinetti's description about behavior of the workers. We find that the
Pasinetti's claim, of long run worker's propensity to save as not influencing
the distribution of income between profits and the wage can not be generalized.
This claim is found to be valid only when \(W>>P_w\) or \(P_w=0\) with
\(W\ne0\). In practice, the Pasinetti's condition imposes a restriction on the
actual savings by one of the agents to a lower level compared to its full
saving capacity. An implied relationship between the propensities to save by
workers and capitalists shows that the Passineti's condition can be practiced
only through a contract for a constant value of \(R=s_w/s_c\), to be agreed
upon between the workers and the capitalists. It is showed that the Passineti's
condition can not be described as a dynamic equilibrium of economic growth.
Implementation of this condition (a) may lead to accumulation of unsaved
income, (b) reduces growth of capital, (c)is not practicable and (d) is not
warranted. We have also presented simple mathematical steps for the derivation
of the Pasinetti's final equation compared to those presented in
\cite{pasinetti1962rate}",Assessing the practicability of the condition used for dynamic equilibrium in Pasinetti theory of distribution,2021-04-12 09:51:43,"A Jayakrishnan, Anil Lal S","http://arxiv.org/abs/2104.05229v1, http://arxiv.org/pdf/2104.05229v1",econ.GN
32057,gn,"Despite the key role of multinational enterprises (MNEs) in both
international markets and domestic economies, there is no consensus on their
impact on their host economy. In particular, do MNEs stimulate new domestic
firms through knowledge spillovers? Here, we look at the impact of MNEs on the
entry and exit of domestic industries in Irish regions before, during, and
after the 2008 Financial Crisis. Specifically, we are interested in whether the
presence of MNEs in a region results in knowledge spillovers and the creation
of new domestic industries in related sectors. To quantify how related an
industry is to a region's industry basket we propose two cohesion measures,
weighted closeness and strategic closeness, which capture direct linkages and
the complex connectivity structure between industries in a region respectively.
We use a dataset of government-supported firms in Ireland (covering 90% of
manufacturing and exporting) between 2006-2019. We find that domestic
industries are both more likely to enter and less likely to leave a region if
they are related to so-called 'overlapping' industries containing both domestic
and MNE firms. In contrast, we find a negative impact on domestic entry and
survival from cohesion to 'exclusive MNE' industries, suggesting that domestic
firms are unable to 'leap' and thrive in MNE-proximate industries likely due to
a technology or know-how gap. This dynamic was broken, with domestic firms
entering MNE exclusive sectors, by a large injection of Brexit diversification
funds in 2017-18. Finally, the type of cohesion matters. For example, strategic
rather than weighted closeness to exclusive domestic sectors matters for both
entries and exits.",The role of relatedness and strategic linkages between domestic and MNE sectors in regional branching and resilience,2021-04-12 21:32:34,"Mattie Landman, Sanna Ojanperä, Stephen Kinsella, Neave O'Clery","http://arxiv.org/abs/2104.05754v2, http://arxiv.org/pdf/2104.05754v2",econ.GN
32058,gn,"Social norms are rules and standards of expected behavior that emerge in
societies as a result of information exchange between agents. This paper
studies the effects of emergent social norms on the performance of teams. We
use the NK-framework to build an agent-based model, in which agents work on a
set of interdependent tasks and exchange information regarding their past
behavior with their peers. Social norms emerge from these interactions. We find
that social norms come at a cost for the overall performance, unless tasks
assigned to the team members are highly correlated, and the effect is stronger
when agents share information regarding more tasks, but is unchanged when
agents communicate with more peers. Finally, we find that the established
finding that the team-based incentive schemes improve performance for highly
complex tasks still holds in presence of social norms.",On the effect of social norms on performance in teams with distributed decision makers,2021-04-13 10:54:15,"Ravshanbek Khodzhimatov, Stephan Leitner, Friederike Wall","http://arxiv.org/abs/2104.05993v2, http://arxiv.org/pdf/2104.05993v2",econ.GN
32059,gn,"We study a class of deterministic mean field games on finite and infinite
time horizons arising in models of optimal exploitation of exhaustible
resources. The main characteristic of our game is an absorption constraint on
the players' state process. As a result of the state constraint the optimal
time of absorption becomes part of the equilibrium. This requires a novel
approach when applying Pontryagin's maximum principle. We prove the existence
and uniqueness of equilibria and solve the infinite horizon models in closed
form. As players may drop out of the game over time, equilibrium production
rates need not be monotone nor smooth.",A Maximum Principle approach to deterministic Mean Field Games of Control with Absorption,2021-04-13 15:55:36,"Paulwin Graewe, Ulrich Horst, Ronnie Sircar","http://arxiv.org/abs/2104.06152v1, http://arxiv.org/pdf/2104.06152v1",econ.GN
32060,gn,"The sharp devaluation of the ruble in 2014 increased the real returns to
Russians from working in a global online labor marketplace, as contracts in
this market are dollar-denominated. Russians clearly noticed the opportunity,
with Russian hours-worked increasing substantially, primarily on the extensive
margin -- incumbent Russians already active were fairly inelastic. Contrary to
the predictions of bargaining models, there was little to no pass-through of
the ruble price changes into wages. There was also no evidence of a
demand-side response, with buyers not posting more ""Russian friendly"" jobs,
suggesting limited cross-side externalities. The key findings -- a high
extensive margin elasticity but low intensive margin elasticity; little
pass-through into wages; and little evidence of a cross-side externality --
have implications for market designers with respect to pricing and supply
acquisition.",The Ruble Collapse in an Online Marketplace: Some Lessons for Market Designers,2021-04-13 16:22:00,John Horton,"http://arxiv.org/abs/2104.06170v1, http://arxiv.org/pdf/2104.06170v1",econ.GN
32061,gn,"In 1974, Robert Caro published The Power Broker, a critical biography of
Robert Moses's dictatorial tenure as the ""master builder"" of mid-century New
York. Moses profoundly transformed New York's urban fabric and transportation
system, producing the Brooklyn Battery Tunnel, the Verrazano Narrows Bridge,
the Westside Highway, the Cross-Bronx Expressway, the Lincoln Center, the UN
headquarters, Shea Stadium, Jones Beach State Park and many other projects.
However, The Power Broker did lasting damage to his public image and today he
remains one of the most controversial figures in city planning history. On
August 26, 1974, Moses issued a turgid 23-page statement denouncing Caro's work
as ""full of mistakes, unsupported charges, nasty baseless personalities, and
random haymakers."" Moses's original typewritten statement survives today as a
grainy photocopy in the New York City Parks Department archive. To better
preserve and disseminate it, I have extracted and transcribed its text using
optical character recognition and edited the result to correct errors. Here I
compile my transcription of Moses's statement, alongside Caro's reply to it.",We Live in a Motorized Civilization: Robert Moses Replies to Robert Caro,2021-03-26 23:22:00,Geoff Boeing,"http://arxiv.org/abs/2104.06179v1, http://arxiv.org/pdf/2104.06179v1",econ.GN
32062,gn,"In this paper, I reply to the recent article by Jeffery and Verheijen (2020)
'A new soil health policy paradigm: Pay for practice not performance!'. While
expressing support for their call for a more pronounced role of soil protection
in agri-environmental policy, I critically discuss the two main elements of
their specific proposal: its emphasis on the concept of soil health and the
recommendation to use action-based payments as the main policy instrument. I
argue for using soil functions as a more established concept (and thus more
adequate for policy purposes), which is also informationally richer than soil
health. Furthermore, I provide a more differentiated discussion of the relative
advantages and disadvantages of result-based and action-based payments, while
addressing the specific criticisms towards the former that Jeffery and
Verheijen voice. Also, I suggest an alternative approach (a hybrid model-based
scheme) that addresses the limitations of both Jeffery and Verheijen's own
proposal and the valid criticisms they direct at result-based payments.",Don't throw efficiency out with the bathwater: A reply to Jeffery and Verheijen (2020),2021-04-12 13:35:09,Bartosz Bartkowski,"http://dx.doi.org/10.1016/j.envsci.2021.04.011, http://arxiv.org/abs/2104.06229v2, http://arxiv.org/pdf/2104.06229v2",econ.GN
32063,gn,"General partners (GP) are sometimes paid on a deal-by-deal basis and other
times on a whole-portfolio basis. When is one method of payment better than the
other? I show that when assets (projects or firms) are highly correlated or
when GPs have low reputation, whole-portfolio contracting is superior to
deal-by-deal contracting. In this case, by bundling payouts together,
whole-portfolio contracting enhances incentives for GPs to exert effort.
Therefore, it is better suited to alleviate the moral hazard problem which is
stronger than the adverse selection problem in the case of high correlation of
assets or low reputation of GPs. In contrast, for low correlation of assets or
high reputation of GPs, information asymmetry concerns dominate and
deal-by-deal contracts become optimal, as they can efficiently weed out bad
projects one by one. These results shed light on recent empirical findings on
the relationship between investors and venture capitalists.",Optimal Design of Limited Partnership Agreements,2021-04-14 21:01:27,Mohammad Abbas Rezaei,"http://arxiv.org/abs/2104.07049v1, http://arxiv.org/pdf/2104.07049v1",econ.GN
32064,gn,"I use the quasi-natural experiment of the 2018 African swine fever (ASF)
outbreak in China to analyze swine exporters' reaction to a foreign market's
positive demand shock. I use the universe of Spanish firms' export transactions
to China and other countries, and compare the performance of swine and other
exporters before and after the ASF. The ASF almost tripled Spanish swine
exporters' sales to China. Swine exporters did not increase exported product
portfolio or export revenue concentration in their best-performing products in
China after the ASF. The increase in exports to China positively impacted
export revenue and survival in third markets. This positive impact was
especially intense for small swine exporters. Domestic sales also increased for
swine exporters with liquidity constraints before the ASF.",Exporters' reaction to positive foreign demand shocks,2021-04-15 12:17:42,Asier Minondo,"http://arxiv.org/abs/2104.07319v2, http://arxiv.org/pdf/2104.07319v2",econ.GN
32065,gn,"The mission statement(s) (MS) is one of the most-used tools for planning and
management. Universities worldwide have implemented MS in their knowledge
planning and management processes since the 1980s. Research studies have
extensively explored the content and readability of MS and their effect on
performance in firms, but their effect on public or nonprofit institutions such
as universities has not been scrutinized with the same intensity. This study
used Gunning's Fog Index score to determine the readability of a sample of
worldwide universities' MS and two rankings, i.e., Quacquarelli Symonds World
University Ranking and SCImago Institutions Rankings, to determine their effect
on performance. No significant readability differences were identified in
regions, size, focus, research type, age band, or status. Logistic regression
(cumulative link model) results showed that variables, such as universities'
age, focus, and size, have more-significant explanatory power on performance
than MS readability.",Mission Statements in Universities: Readability and performance,2021-04-13 21:35:12,"Julian D. Cortes, Liliana Rivera, Katerina Bohle Carbonell","http://dx.doi.org/10.1016/j.iedeen.2021.100183, http://arxiv.org/abs/2104.07438v1, http://arxiv.org/pdf/2104.07438v1",econ.GN
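As a companion to the readability analysis described in the abstract above, here is a minimal sketch of the Gunning Fog Index, assuming the standard formula 0.4 * (words per sentence + 100 * share of complex words) and a rough heuristic syllable counter; the example mission statement text is hypothetical and not drawn from the study's sample.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels, subtracting a silent final 'e'.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def gunning_fog(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    words_per_sentence = len(words) / max(len(sentences), 1)
    complex_share = 100 * len(complex_words) / max(len(words), 1)
    return 0.4 * (words_per_sentence + complex_share)

# Hypothetical mission statement, for illustration only.
ms = ("We advance knowledge and educate students in science, technology, "
      "and other areas of scholarship that will best serve the nation.")
print(round(gunning_fog(ms), 1))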
32066,gn,"The mission statement (MS) is the most widely used organizational strategic planning
tool worldwide. The relationship between an MS and an organization's financial
performance has been shown to be significantly positive, albeit small. However,
an MS's relationship to the macroeconomic environment and to organizational
innovation has not been investigated. We implemented Structural Equation
Modeling using the SCImago Institutional Ranking (SIR) as a global baseline
sample and assessment of organizational research and innovation (R&I), an
automated MS content analysis, and the Economic Complexity Index (ECI) as a
comprehensive macroeconomic environment measure. We found that the median
performance of organizations that do not report an MS is significantly higher
than that of reporting organizations, and that a path dependence driven by the
State's long-term view and investment is a better explanatory variable for
organizational R&I performance than the MS construct or the intermediate-term
macroeconomic environment.",Mission Statement Effect on Research and Innovation Performance,2021-04-13 22:00:13,"Julian D. Cortes, Diego Tellez, Jesus Godoy","http://arxiv.org/abs/2104.07476v1, http://arxiv.org/pdf/2104.07476v1",econ.GN
32067,gn,"Auctions have become the primary instrument for promoting renewable energy
around the world. However, the data published on such auctions are typically
limited to aggregated information (e.g., total awarded capacity, average
payments). These data constraints hinder the evaluation of realisation rates
and other relevant auction dynamics. In this study, we present an algorithm to
overcome these data limitations in the German renewable energy auction programme by
combining publicly available information from four different databases. We
apply it to the German solar auction programme and evaluate auctions using
quantitative methods. We calculate realisation rates and - using correlation
and regression analysis - explore the impact of PV module prices, competition,
and project and developer characteristics on project realisation and bid
values. Our results confirm that the German auctions were effective. We also
found that project realisation took, on average, 1.5 years (with 28% of
projects finished late and incurring a financial penalty), that nearly half of
the projects changed location before completion (again incurring a financial
penalty), and that small and inexperienced developers could successfully participate
in auctions.",Lessons Learned from Photovoltaic Auctions in Germany,2021-04-15 18:45:21,"Taimyra Batz Liñeiro, Felix Müsgens","http://arxiv.org/abs/2104.07536v1, http://arxiv.org/pdf/2104.07536v1",econ.GN
32068,gn,"The Municipality of Rondonópolis has several tourist attractions, such as a great
diversity of waterfalls and small river beaches located in the surroundings of the
urban area, which attract tourists from various locations. This study aims to
understand how ecotourism can contribute to the conservation of water resources in
these leisure areas, as well as the potential for developing tourism activities in
those places. The procedures included several techniques based on remote sensing and
geoprocessing tools, which allowed the analysis of the spatial distribution of tourism
activities in the main leisure areas. In mapping the waterfalls and their surroundings,
we observed biophysical features such as the endemic vegetation, the waterfalls, the
rocky outcrops, the rivers, and the small, shallow river beaches. The results showed a
correct perception among respondents of the existing interrelationships between
ecotourism practices and the sustainable use of water resources. In conclusion,
however, a long way remains to be travelled in order to prevent the economic benefits
of ecotourism from generating an inappropriate exploitation of natural resources and
causing environmental problems, particularly
to water resources in the surroundings.",Contribuição do ecoturismo para o uso sustentável dos recursos hídricos do município de Rondonópolis-MT,2021-04-16 17:46:11,Manoel Benedito Nirdo da Silva Campos,"http://arxiv.org/abs/2104.08144v1, http://arxiv.org/pdf/2104.08144v1",econ.GN
32069,gn,"General Santos City, as the tuna capital of the Philippines, relies on the
presence of tricycles for moving people and goods. Considered a
highly urbanized city, General Santos City serves as a vital link for the entire
SOCKSARGEN region's economic activities. With the current thrust of the city towards
providing a sustainable transport service, several options were identified for
adoption in the entire city, including cleaner and better transport modes.
The electric tricycle is a sought-after alternative that offers a better choice in
terms of identified factors of sustainable transport: reliability, safety,
comfort, environment, affordability, and facility. A literature review was
conducted to compare cost and emissions between a motorized
tricycle and an e-tricycle. The study identified the existing tricycle industry
of the city and reviewed the modal share within the city's travel pattern. The
survey revealed a number of hazards associated with the current motorized tricycle
that need to be addressed for the welfare of passengers and drivers. The study
favors the shift to adopting the e-tricycle. The model derived from binary
logistic regression provided a 72.72% model accuracy. Based on the results
and findings, the electric tricycle can be an alternative mode of public transport
in the city that strongly supports a sustainable option and enables the local populace
to improve their quality of life through mobility and economic activity.
Further recommendations to local policy makers in the transport sector of the
city include the clustering of barangays for better traffic management and
franchise regulation, the inclusion of tricycle-related transport infrastructure
in their investment planning and programming, the roll-out
and implementation of the city's tricycle code, and the piloting activity
of introducing e-tricycle in the city.",Exploratory Data Analysis of Electric Tricycle as Sustainable Public Transport Mode in General Santos City Using Logistic Regression,2021-04-16 18:45:48,"Geoffrey L. Cueto, Francis Aldrine A. Uy, Keith Anshilo Diaz","http://arxiv.org/abs/2104.08182v1, http://arxiv.org/pdf/2104.08182v1",econ.GN
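The 72.72% accuracy reported in the abstract above comes from a binary logistic regression; below is a minimal sketch of how such a model and its accuracy could be computed with scikit-learn, using synthetic data and hypothetical predictor names rather than the study's actual survey variables.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: perceived safety, comfort, affordability (Likert-like scores).
X = rng.integers(1, 6, size=(n, 3)).astype(float)
# Synthetic willingness-to-shift outcome, loosely increasing in the predictors.
logits = 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2] - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 4))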
32086,gn,"Decentralization is a centerpiece of the design of Cameroon's government institutions.
This chapter elaborates a simple hierarchy model for the analysis of
the effects of power devolution. The model predicts overall positive effects of
decentralization with larger effects when the local authority processes useful
information on how to better allocate the resources. The estimation of the
effects of the 2010's power devolution to municipalities in Cameroon suggests a
positive impact of decentralization on early human capital accumulation. The
value added by decentralization is the same for Anglophone and Francophone
municipalities; the effects of decentralization are larger for advanced levels
of primary school.",Early Human Capital Accumulation and Decentralization,2021-04-27 01:36:09,Guy Tchuente,"http://arxiv.org/abs/2104.12902v1, http://arxiv.org/pdf/2104.12902v1",econ.GN
32071,gn,"This paper promotes the application of a path-independent decomposition
scheme. Besides presenting some theoretical arguments supporting this
decomposition scheme, this study also illustrates the difference between the
path-independent decomposition scheme and a popular sequential decomposition
with an empirical application of the two schemes. The empirical application is
about identifying a directly unobservable phenomenon, i.e. the changing social
gap between people from different educational strata, through its effect on
marriages and cohabitations. It exploits census data from four waves between
1977 and 2011 about the American, French, Hungarian, Portuguese, and Romanian
societies. For some societies and periods, the outcome of the decomposition is
found to be highly sensitive to the choice of the decomposition scheme. These
examples illustrate the point that a careful selection of the decomposition
scheme is crucial for adequately documenting the dynamics of unobservable
factors.",Decomposition scheme matters more than you may think,2021-04-19 11:59:54,Anna Naszodi,"http://arxiv.org/abs/2104.09141v1, http://arxiv.org/pdf/2104.09141v1",econ.GN
32072,gn,"Brazil is the 5th largest country in the world; despite having a 'High
Human Development' classification, it is the 9th most unequal country. The existing Brazilian
micro pension programme is one of the safety nets for poor people. To become
eligible for this benefit, each person must have an income that is less than a
quarter of the Brazilian minimum monthly wage and be either over 65 or
considered disabled. That minimum income corresponds to approximately 2
dollars per day. This paper analyses quantitatively some aspects of this
programme in the Public Pension System of Brazil. We examine the impact of
some particular economic variables on the number of people receiving the
benefit, and ask whether that impact significantly differs among the 27 Brazilian
Federal Units (UFs), searching for heterogeneity. We perform regression and spatial
cluster analysis to detect geographical grouping. We use a database that
includes the entire population that receives the benefit. Afterwards, we
calculate the amount that the system spends on the beneficiaries, estimate
per capita values and the weight of each UF, and look for
heterogeneity reflected in the amount spent per capita. In this latter
calculation we use a more comprehensive database, by individual, that includes
all people who started receiving a benefit under the programme in the period
from 2nd of January 2018 to 6th of April 2018. We compute the expected
discounted benefit and confirm a high heterogeneity among UFs as well as by
gender. We propose achieving a more equitable system by introducing 'age
adjusting factors' to change the benefit age.",A public micro pension programme in Brazil: Heterogeneity among states and setting up of benefit age adjustment,2021-04-19 14:11:57,"Renata Gomes Alcoforado, Alfredo D. Egídio dos Reis","http://arxiv.org/abs/2104.09210v1, http://arxiv.org/pdf/2104.09210v1",econ.GN
32073,gn,"In the context of nonlinear prices, the empirical evidence suggests that
consumers have cognitive biases reflected in a limited understanding of
nonlinear price structures, and that they respond to alternative perceptions of
the marginal prices. In particular, consumers usually make choices based more
on the average than on the marginal prices, which can result in suboptimal
behavior. Taking the misspecification in the marginal price as exogenous, this
document analyzes the optimal quadratic price scheme for a monopolist
and the optimal quadratic price scheme that maximizes welfare when there is a
continuum of representative consumers with an arbitrary perception of the
marginal price. Under simple preference and cost functional forms and very
straightforward hypotheses, the results suggest that the bias in the marginal
price does not affect the maximum welfare attainable with quadratic price
schemes, and that it has a negligible effect on the efficiency cost caused by the
monopolist, so the misspecification in the marginal price is not relevant for
welfare-increasing policies. However, the misspecification in the
marginal price is almost always beneficial for the monopolist. An interesting result is that,
under these functional forms, more misspecification in the marginal price is
beneficial for both the consumers and the monopolist if the level of bias is
low, so it is not always socially optimal to have consumers educated about the real
marginal price. Finally, the document shows that the bias in the marginal price
has a negative effect on the reduction of aggregate consumption achieved with two-tier
increasing tariffs, which are commonly used for reducing aggregate
consumption.",Nonlinear Pricing with Misspecified and Arbitrary Perception of the Marginal Price,2021-04-21 02:22:42,Diego Alejandro Murillo Taborda,"http://arxiv.org/abs/2104.10281v1, http://arxiv.org/pdf/2104.10281v1",econ.GN
32074,gn,"Financial literacy and financial education are important components of modern
life. The importance of financial literacy is increasing for financial
consumers because of the weakening of both government and employer-based
retirement systems. Unfortunately, empirical research shows that financial
consumers are not fully informed and are not able to make proper choices even
when appropriate information is available. More research is needed as to how
financial consumers obtain investment and financial planning information. A
primary data study was conducted to understand the differences between the
demographic categories of gender, age, education-level, and income-level with
the means of obtaining investment and financial planning information. In this
research study, which selected a population from the LinkedIn platform,
statistical differences between gender, age, education-level, and income-level
were confirmed. These differences helped to confirm prior research in this
field of study. Practical opportunities for commercial outreach to specific
populations became evident through this type of research. Providers of
investment and financial planning information can access their targeted
audience more effectively by understanding the demographic profile of the
audience, as well as the propensity of the demographic profile of the audience
to respond. As this type of research is relatively easy to construct and
administer, commercial outreach for providers of investment and financial
planning information can be conducted in a cost-efficient and effective manner.",An Examination of Demographic Differences in Obtaining Investment and Financial Planning Information,2021-04-22 04:59:33,Paul Bechly,"http://dx.doi.org/10.2139/ssrn.3369267, http://arxiv.org/abs/2104.10827v1, http://arxiv.org/pdf/2104.10827v1",econ.GN
32087,gn,"The purpose of this study is to measure the benefits and costs of using
biochar, a carbon sequestration technology, to reduce the B.C. Wine Industry's
carbon emissions. An economic model was developed to calculate the value-added
for each of the three sectors that comprise the BC Wine industry. Results
indicate that each sector of the wine value chain is potentially profitable,
with 9,000 tonnes of CO2 sequestered each year. The study is unique in that it
demonstrates that using biochar, produced from wine industry waste, to
sequester atmospheric CO2 can be both profitable and environmentally
sustainable.",Climate Change Adaptation in the British Columbia Wine Industry Can carbon sequestration technology lower the B.C. Wine Industry's greenhouse gas emissions?,2021-04-27 20:09:34,"Lee Cartier, Svan Lembke","http://dx.doi.org/10.11114/aef.v8i4.5259, http://arxiv.org/abs/2104.13330v1, http://arxiv.org/pdf/2104.13330v1",econ.GN
32075,gn,"Public transport ridership around the world has been hit hard by the COVID-19
pandemic. Travellers are likely to adapt their behaviour to avoid the risk of
transmission and these changes may even be sustained after the pandemic. To
evaluate travellers' behaviour in public transport networks during these times
and assess how they will respond to future changes in the pandemic, we conduct
a stated choice experiment with train travellers in the Netherlands. We
specifically assess behaviour related to three criteria affecting the risk of
COVID-19 transmission: (i) crowding, (ii) exposure duration, and (iii)
prevalent infection rate.
  Observed choices are analysed using a latent class choice model which reveals
two, nearly equally sized traveller segments: 'COVID Conscious' and 'Infection
Indifferent'. The former has a significantly higher valuation of crowding,
accepting, on average, 8.75 minutes of extra waiting time to reduce one person
on-board. Moreover, they demonstrate a strong desire to sit without anybody in
their neighbouring seat and are quite sensitive to changes in the prevalent
infection rate. By contrast, Infection Indifferent travellers' value of
crowding (1.04 waiting time minutes/person) is only slightly higher than
pre-pandemic estimates and they are relatively unaffected by infection rates.
We find that older and female travellers are more likely to be COVID Conscious
while those reporting to use the trains more frequently during the pandemic
tend to be Infection Indifferent. Further analysis also reveals differences
between the two segments in attitudes towards the pandemic and self-reported
rule-following behaviour. The behavioural insights from this study will not
only contribute to better demand forecasting for service planning but will also
inform public transport policy decisions aimed at curbing the shift to private
modes.",Traveller behaviour in public transport in the early stages of the COVID-19 pandemic in the Netherlands,2021-04-22 13:12:24,"Sanmay Shelat, Oded Cats, Sander van Cranenburgh","http://dx.doi.org/10.1016/j.tra.2022.03.027, http://arxiv.org/abs/2104.10973v2, http://arxiv.org/pdf/2104.10973v2",econ.GN
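The crowding valuations in the abstract above are marginal rates of substitution between the crowding and waiting-time coefficients of the latent class choice model; the sketch below uses purely hypothetical coefficient values, chosen only so that the ratio reproduces the reported 8.75 minutes, to illustrate the calculation.

# Hypothetical latent-class coefficients (utility per on-board passenger and per waiting minute);
# the true estimates are in the paper, these values are for illustration only.
beta_crowding_cc = -0.35   # 'COVID Conscious' class
beta_wait_cc = -0.04

# Value of crowding expressed in waiting-time minutes per additional passenger on board.
value_of_crowding = beta_crowding_cc / beta_wait_cc
print(value_of_crowding)  # 8.75 minutes of extra waiting time per person removed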
32076,gn,"It is well-known that the home team has an inherent advantage over visiting
teams in team sports. One of the most obvious underlying reasons, the
presence of supporting fans, mostly disappeared from major leagues with the
emergence of the COVID-19 pandemic. This paper investigates, with the help of
historical National Football League (NFL) data, how much effect spectators have
on game outcomes. Our findings reveal that when no spectators are allowed,
the home teams' performance is substantially lower than under normal
circumstances, with home teams even performing slightly worse than the visiting teams. On the
other hand, when a limited number of spectators are allowed into the game, the
home teams' performance is no longer significantly different from what we
observe with full stadiums. This suggests that from a psychological point of
view the effect of crowd support is already induced by a fraction of regular
fans.",Does home advantage without crowd exist in American football?,2021-04-22 12:18:17,"Dávid Zoltán Szabó, Diego Andrés Pérez","http://arxiv.org/abs/2104.11595v1, http://arxiv.org/pdf/2104.11595v1",econ.GN
32077,gn,"The paper presents the results of a behavioral experiment conducted between
February 2020 and March 2021 at Università Cattolica del Sacro Cuore, Milan
Campus, in which students were matched with either a human or a humanoid robotic
partner to play an iterated Prisoner's Dilemma. The results of a Logit
estimation procedure show that subjects are more likely to cooperate with human
rather than robotic partners; that they are more likely to cooperate after receiving a
dialogic verbal reaction following the realization of a sub-optimal social
outcome; and that the effect of the verbal reaction is independent of the nature of
the partner. Our findings provide new evidence on the effect of verbal
communication in strategic frameworks. Results are robust to the exclusion of
students of Economics-related subjects, to the inclusion of a set of
psychological and behavioral controls, to the way subjects perceive robots'
behavior and to potential gender biases in human-human interactions.",If it Looks like a Human and Speaks like a Human ... Dialogue and cooperation in human-robot interactions,2021-04-23 18:07:58,"Mario A. Maggioni, Domenico Rossignoli","http://arxiv.org/abs/2104.11652v4, http://arxiv.org/pdf/2104.11652v4",econ.GN
32078,gn,"The rigorous evaluation of anti-poverty programs is key to the fight against
global poverty. Traditional evaluation approaches rely heavily on repeated
in-person field surveys to measure changes in economic well-being and thus
program effects. However, this is known to be costly, time-consuming, and often
logistically challenging. Here we provide the first evidence that we can
conduct such program evaluations based solely on high-resolution satellite
imagery and deep learning methods. Our application estimates changes in
household welfare in the context of a recent anti-poverty program in rural
Kenya. The approach we use is based on a large literature documenting a
reliable relationship between housing quality and household wealth. We infer
changes in household wealth based on satellite-derived changes in housing
quality and obtain consistent results with the traditional field-survey based
approach. Our approach can be used to obtain inexpensive and timely insights on
program effectiveness in international development programs.",Using Satellite Imagery and Deep Learning to Evaluate the Impact of Anti-Poverty Programs,2021-04-23 21:30:09,"Luna Yue Huang, Solomon Hsiang, Marco Gonzalez-Navarro","http://arxiv.org/abs/2104.11772v1, http://arxiv.org/pdf/2104.11772v1",econ.GN
32079,gn,"How does an entrepreneur's social capital improve small informal business
productivity? Although studies have investigated this relationship, we still
know little about the underlying theoretical mechanisms driving these findings.
Using a unique Zambian Business Survey of 1,971 entrepreneurs administered by
the World Bank, we find an entrepreneur's social capital facilitates small
business productivity through the mediating channels of firm financing and
customer relationships. Our findings identify specific mechanisms that channel
social capital toward an informal business' productivity, which prior studies
have overlooked.",Social capital and small business productivity: The mediating roles of financing and customer relationships,2021-04-24 22:09:50,"Christopher Boudreaux, George Clarke, Anand Jha","http://arxiv.org/abs/2104.12004v1, http://arxiv.org/pdf/2104.12004v1",econ.GN
32080,gn,"This study examines how foreign aid and institutions affect entrepreneurship
activity following natural disasters. We use insights from the
entrepreneurship, development, and institutions literature to develop a model
of entrepreneurship activity in the aftermath of natural disasters. First, we
hypothesize the effect of natural disasters on entrepreneurship activity
depends on the amount of foreign aid received. Second, we hypothesize that
natural disasters and foreign aid either encourage or discourage
entrepreneurship activity depending on two important institutional conditions:
the quality of government and economic freedom. The findings from our panel of
85 countries from 2006 to 2016 indicate that natural disasters are negatively
associated with entrepreneurship activity, but both foreign aid and economic
freedom attenuate this effect. In addition, we observe that foreign aid is
positively associated with entrepreneurship activity but only in countries with
high quality government. Hence, we conclude that the effect of natural
disasters on entrepreneurship depends crucially on the quality of government,
economic freedom, and foreign aid. Our findings provide new insights into how
natural disasters and foreign aid affect entrepreneurship and highlight the
important role of the institutional context.",Weathering the Storm: How Foreign Aid and Institutions Affect Entrepreneurship Following Natural Disasters,2021-04-24 22:23:09,"Christopher Boudreaux, Anand Jha, Monica Escaleras","http://dx.doi.org/10.1177/10422587211002185, http://arxiv.org/abs/2104.12008v1, http://arxiv.org/pdf/2104.12008v1",econ.GN
32081,gn,"One important dimension of Conditional Cash Transfer Programs, apart from
conditionality, is the provision of payouts at a continuous frequency. In
contrast, the Apni Beti Apna Dhan program, implemented in the state of Haryana
in India from 1994 to 1998, offers a promised amount to female beneficiaries,
redeemable only after attaining 18 years of age if the beneficiary remains unmarried. This
paper assesses the impact of this long-term financial incentivization on
outcomes not directly associated with the conditionality. Using multiple
datasets in a triple difference framework, the findings reveal a significant
positive impact on years of education, though it does not translate into gains
in labor participation. While gauging the potential channels, we did not
observe higher educational effects beyond secondary education. Additionally,
the impacts on time allocation for leisure, socialization or self-care, age of
marriage beyond 18 years, age at first birth, and post-marital empowerment
indicators are found to be limited. This evidence indicates the failure of the
program to alter prevailing gender norms despite improvements in
educational outcomes. The paper recommends a set of complementary potential
policy instruments that include altering gender norms through behavioral
interventions, skill development, and incentives to encourage female work
participation.",Whats the worth of a promise? Evaluating the indirect effects of a program to reduce early marriage in India,2021-04-25 20:27:30,"Shreya Biswas, Upasak Das","http://arxiv.org/abs/2104.12215v1, http://arxiv.org/pdf/2104.12215v1",econ.GN
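A triple difference design of the kind mentioned in the abstract above is typically estimated as an OLS regression with a three-way interaction term; the sketch below uses statsmodels on synthetic data with hypothetical variable names (treated state, eligible cohort, post period) and is not the paper's actual specification or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "treated_state": rng.integers(0, 2, n),   # Haryana vs. comparison states (hypothetical)
    "eligible_cohort": rng.integers(0, 2, n), # girls of beneficiary age vs. older cohorts
    "post": rng.integers(0, 2, n),            # after the program vs. before
})
# Synthetic outcome: years of education with a positive triple-interaction effect of 0.5.
df["years_edu"] = (
    6 + 0.3 * df.treated_state + 0.2 * df.eligible_cohort + 0.4 * df.post
    + 0.5 * df.treated_state * df.eligible_cohort * df.post
    + rng.normal(0, 1, n)
)
ddd = smf.ols("years_edu ~ treated_state * eligible_cohort * post", data=df).fit(cov_type="HC1")
# The coefficient on the three-way interaction is the triple-difference estimate.
print(ddd.params["treated_state:eligible_cohort:post"])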
32082,gn,"Unemployment benefits in the US were extended by up to 73 weeks during the
Great Recession. Equilibrium labor market theory indicates that extensions of
benefit duration impact not only search decisions by job seekers but also job
vacancy creation by employers. Most of the literature has focused on the former to
show the partial equilibrium effect that increases in unemployment benefits
discourage job search and lead to a rise in unemployment. To study the total
effect of UI benefit extensions on unemployment, I follow a border-county
identification strategy, take advantage of a quasi-differenced specification to
control for changes in future benefit policies, and apply an interactive fixed effects
model to deal with unobserved shocks so as to obtain unbiased and consistent
estimates. I find that benefit extensions have a statistically significant
positive effect on unemployment, which is consistent with the results of the
prevailing literature.",To What Extent do Labor Market Outcomes respond to UI Extensions?,2021-04-26 10:58:41,Aiwei Huang,"http://arxiv.org/abs/2104.12387v1, http://arxiv.org/pdf/2104.12387v1",econ.GN
32083,gn,"A polycentric approach to ecosystem service (ES) governance that combines
individual incentives for interdependent ES providers with collective action is
a promising lever to overcome the decline in ES and generate win-win solutions
in agricultural landscapes. In this study, we explored the effectiveness of
such an approach by focusing on incentives for managed pollination targeting
either beekeepers or farmers who were either in communication with each other
or not. We used a stylized bioeconomic model to simulate (i) the mutual
interdependency through pollination in intensive agricultural landscapes and
(ii) the economic and ecological impacts of introducing two beekeeping
subsidies and one pesticide tax. The findings showed that incentives generated
a spillover effect, affecting not only targeted stakeholders but non-targeted
stakeholders as well as the landscape, and that this effect was amplified by
communication. However, none of the simulated types of polycentric ES
governance proved sustainable overall: subsidies showed excellent economic but
low environmental performance, while the tax led to economic losses but was
beneficial for the landscape. Based on these results, we identified three
conditions for sustainable ES governance based on communication between
stakeholders and incentives: (i) strong mutual interdependency (i.e. few
alternatives exist for stakeholders), (ii) the benefits of communication
outweigh the costs, and (iii) the incentivized ES drivers are not detrimental
to other ES. Further research is needed to systematize which combinations of
individual payments and collaboration are sustainable in which conditions.",Combining incentives for pollination with collective action to provide a bundle of ecosystem services in farmland,2021-04-26 18:10:26,"Jerome Faure, Lauriane Mouysset, Sabrina Gaba","http://arxiv.org/abs/2104.12640v1, http://arxiv.org/pdf/2104.12640v1",econ.GN
32084,gn,"Brazil's rise as a global powerhouse producer of soybeans and corn over the
past 15 years has fundamentally changed global markets in these commodities.
This is arguably due to the development of varieties of soybean and corn
adapted to climates within Brazil, allowing farmers to double-crop corn after
soybeans in the same year. Corn and soybean market participants increasingly
look to Brazil for fundamental price information, and studies have shown that
the two markets have become cointegrated. However, little is known about how
much volatility from each market spills over to the other. In this article we
measure volatility spillover ratios between U.S. and Brazilian first crop corn,
second crop corn, and soybeans. We find that linkages between the two countries
increased after double cropping of corn after soybeans expanded, that volatility
spillover magnitudes expanded, and that the direction of volatility spillovers
flipped from U.S. volatility spilling over to Brazil before double cropping, to
Brazil spilling over to U.S. after double cropping.",The Impact of Brazil on Global Grain Dynamics: A Study on Cross-Market Volatility Spillovers,2021-04-22 18:04:47,"Felipe Avileis, Mindy Mallory","http://arxiv.org/abs/2104.12706v1, http://arxiv.org/pdf/2104.12706v1",econ.GN
32085,gn,"The present study investigates the price (co)volatility of four dairy
commodities -- skim milk powder, whole milk powder, butter and cheddar cheese
-- in three major dairy markets. It uses a multivariate factor stochastic
volatility model for estimating the time-varying covariance and correlation
matrices by imposing a low-dimensional latent dynamic factor structure. The
empirical results support four factors representing the European Union and
Oceania dairy sectors as well as the milk powder markets. Factor volatilities
and marginal posterior volatilities of each dairy commodity increase after the
2006/07 global (food) crisis, which also coincides with the free trade
agreements enacted from 2007 onwards and EU and US liberalization policy
changes. The model-implied correlation matrices show increasing dependence
during the second half of 2006, throughout the first half of 2007, as well as
during 2008 and 2014, which can be attributed to various regional agricultural
dairy policies. Furthermore, in-sample value at risk measures (VaRs and CoVaRs)
are provided for each dairy commodity under consideration.",On the joint volatility dynamics in dairy markets,2021-04-21 20:30:57,"Anthony N. Rezitis, Gregor Kastner","http://arxiv.org/abs/2104.12707v1, http://arxiv.org/pdf/2104.12707v1",econ.GN
32088,gn,"Many empirical studies have shown that government quality is a key
determinant of vulnerability to natural disasters. Protection against natural
disasters can be a public good -- flood protection, for example -- or a natural
monopoly -- early warning systems, for instance. Recovery from natural
disasters is easier when the financial system is well-developed, particularly
insurance services. This requires a strong legal and regulatory environment.
This paper reviews the empirical literature to find that government quality and
democracy reduce vulnerability to natural disasters while corruption of public
officials increases vulnerability. The paper complements the literature by
including tax revenue as an explanatory variable for vulnerability to natural
disasters, and by modelling both the probability of natural disaster and the
damage done. Countries with a larger public sector are better at preventing
extreme events from doing harm. Countries that take more of their revenue in
income taxes are better at reducing harm from natural disasters.",State capacity and vulnerability to natural disasters,2021-04-27 21:52:13,Richard S. J. Tol,"http://arxiv.org/abs/2104.13425v1, http://arxiv.org/pdf/2104.13425v1",econ.GN
32089,gn,"Acemoglu and Johnson (2007) put forward the unprecedented view that health
improvement has no significant effect on income growth. To arrive at this
conclusion, they constructed predicted mortality as an instrumental variable
based on the WHO international disease interventions to analyse this problem. I
replicate the process of their research and eliminate some biases in their
estimate. In addition, and more importantly, I argue that the construction of
their instrumental variable violates the exclusion restriction. The negative
correlation between health improvement and income growth still lacks an
accurate causal explanation: the instrumental variable they constructed increases reverse
causality bias instead of eliminating it.",A Review of Disease and Development,2021-04-26 12:19:24,Ruiwu Liu,"http://arxiv.org/abs/2104.13475v1, http://arxiv.org/pdf/2104.13475v1",econ.GN
32090,gn,"Incentives have surprisingly inconsistent effects when it comes to
encouraging people to behave prosocially. Classical economic theory, according
to which a specific behavior becomes more prevalent when it is rewarded,
struggles to explain why incentives sometimes backfire. More recent theories
therefore posit a reputational cost offsetting the benefits of receiving an
incentive -- yet unexplained effects of incentives remain, for instance across
incentive types and countries. We propose that social norms can offer an
explanation for these inconsistencies. Ultimately, social norms determine the
reputational costs or benefits resulting from a given behavior, and thus
variation in the effect of incentives may reflect variation in norms. We
implemented a formal model of prosocial behavior integrating social norms,
which we empirically tested on the real-world prosocial behavior of blood
donation. Blood donation is essential for many life-saving medical procedures,
but also presents an ideal testing ground for our theory: Various incentive
policies for blood donors exist across countries, enabling a comparative
approach. Our preregistered analyses reveal that social norms can indeed
account for the varying effects of financial and time incentives on
individual-level blood donation behavior across 28 European countries.
Incentives are associated with higher levels of prosociality when norms
regarding the incentive are more positive. The results indicate that social
norms play an important role in explaining the relationship between incentives
and prosocial behavior. More generally, our approach highlights the potential
of integrating theory from across the economic and behavioral sciences to
generate novel insights, with tangible consequences for policy-making.",Social Norms Offer Explanation for Inconsistent Effects of Incentives on Prosocial Behavior,2021-04-28 12:18:50,"Caroline Graf, Eva-Maria Merz, Bianca Suanet, Pamala Wiepking","http://arxiv.org/abs/2104.13652v1, http://arxiv.org/pdf/2104.13652v1",econ.GN
32091,gn,"Computational models of managerial search often build on backward-looking
search based on hill-climbing algorithms. Despite the prevalence of this approach, there
is some evidence that this family of algorithms does not universally represent
managers' search behavior. Against this background, the paper proposes an
alternative algorithm that captures key elements of Simon's concept of
satisficing which received considerable support in behavioral experiments. The
paper contrasts the satisficing-based algorithm to two variants of
hill-climbing search in an agent-based model of a simple decision-making
organization. The model builds on the framework of NK fitness landscapes which
allows controlling for the complexity of the decision problem to be solved. The
results suggest that the model's behavior may remarkably differ depending on
whether satisficing or hill-climbing serves as an algorithmic representation
for decision-makers' search. Moreover, with the satisficing algorithm, results
indicate oscillating aspiration levels, which may even turn negative, and intense - and
potentially destabilizing - search activities when intra-organizational
complexity increases. Findings may shed some new light on prior computational
models of decision-making in organizations and point to avenues for future
research.",Modeling Managerial Search Behavior based on Simon's Concept of Satisficing,2021-04-28 23:09:53,Friederike Wall,"http://arxiv.org/abs/2104.14002v2, http://arxiv.org/pdf/2104.14002v2",econ.GN
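As an illustration of the modelling ingredients named in the abstract above, here is a toy sketch of an NK fitness landscape searched once by one-bit hill climbing and once by a simple satisficing rule that accepts the first alternative meeting an aspiration level; it is an assumption-laden simplification, not the paper's agent-based model.

import itertools
import random

def nk_landscape(N, K, seed=0):
    rng = random.Random(seed)
    # Random fitness contribution tables: one per locus, over the locus and its K neighbours.
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]
    def fitness(config):
        total = 0.0
        for i in range(N):
            bits = tuple(config[(i + j) % N] for j in range(K + 1))
            total += tables[i][bits]
        return total / N
    return fitness

def hill_climb(fitness, config, steps=200):
    for _ in range(steps):
        neighbours = [config[:i] + (1 - config[i],) + config[i + 1:] for i in range(len(config))]
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(config):
            break  # local optimum reached
        config = best
    return config

def satisfice(fitness, config, aspiration=0.6, steps=200, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        i = rng.randrange(len(config))
        candidate = config[:i] + (1 - config[i],) + config[i + 1:]
        # Accept the first alternative that meets the aspiration level.
        if fitness(candidate) >= aspiration:
            return candidate
    return config

N, K = 10, 3  # K tunes the complexity (ruggedness) of the decision problem
fit = nk_landscape(N, K)
rng0 = random.Random(42)
start = tuple(rng0.randint(0, 1) for _ in range(N))
print("hill-climbing fitness:", round(fit(hill_climb(fit, start)), 3))
print("satisficing fitness:  ", round(fit(satisfice(fit, start)), 3))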
32092,gn,"This paper introduces on-the-way choice of retail outlet as a form of
convenience shopping. It presents a model of on-the-way choice of retail outlet
and applies the model in the context of fuel retailing to explore its
implications for segmentation and spatial competition. The model is a latent
class random utility choice model. An application to gas station choices
observed in a medium-sized Asian city shows the model to fit substantially
better than existing models. The empirical results indicate consumers may adopt
one of two decision strategies. When adopting an immediacy-oriented strategy
they behave in accordance with the traditional gravity-based retail models and
tend to choose the most spatially convenient outlet. When following a
destination-oriented strategy they focus more on maintaining their overall trip
efficiency and so will tend to visit outlets located closer to their main
destination and are more susceptible to retail agglomeration effects. The paper
demonstrates how the model can be used to inform segmentation and local
competition analyses that account for variations in these strategies as well as
variations in consumer type, origin and time of travel. Simulations of a
duopoly setting further demonstrate the implications.",Where to Refuel: Modeling On-the-way Choice of Convenience Outlet,2021-04-29 02:02:24,"Ari Pramono, Harmen Oppewal","http://dx.doi.org/10.1016/j.jretconser.2021.102572, http://arxiv.org/abs/2104.14043v1, http://arxiv.org/pdf/2104.14043v1",econ.GN
32104,gn,"We conduct a randomized experiment that varies one-time health insurance
subsidy amounts (partial and full) in Ghana to study the impacts of subsidies
on insurance enrollment and health care utilization. We find that both partial
and full subsidies promote insurance enrollment in the long run, even after the
subsidies expired. Although the long run enrollment rate and selective
enrollment do not differ by subsidy level, long run health care utilization
increased only for the partial subsidy group. We show that this can plausibly
be explained by stronger learning-through-experience behavior in the partial
than in the full subsidy group.",Selection and Behavioral Responses of Health Insurance Subsidies in the Long Run: Evidence from a Field Experiment in Ghana,2021-05-03 06:40:14,"Patrick Opoku Asuming, Hyuncheol Bryant Kim, Armand Sim","http://arxiv.org/abs/2105.00617v1, http://arxiv.org/pdf/2105.00617v1",econ.GN
32093,gn,"Since its inception, the E.U.'s Common Agricultural Policy (CAP) has aimed at
ensuring an adequate and stable farm income. While recognizing that the CAP
pursues a larger set of objectives, this thesis focuses on the impact of the
CAP on the level and the stability of farm income in Italian farms. It uses
microdata from a highly standardized dataset, the Farm Accountancy Data Network
(FADN), which is available in all E.U. countries. This allows the analyses, if perceived as
useful, to be replicated in other countries. The thesis first assesses
the Income Transfer Efficiency (i.e., how much of the support translates to farm
income) of several CAP measures. Secondly, it analyses the role of a specific
and relatively new CAP measure (i.e., the Income Stabilisation Tool - IST) that
is specifically aimed at stabilising farm income. Thirdly, it assesses the
potential use of Machine Learning procedures to develop an adequate ratemaking
for the IST. These are used to predict indemnity levels because this is an essential
point for a similar insurance scheme. The assessment of ratemaking is
challenging: the indemnity distribution is zero-inflated, non-continuous,
right-skewed, and several factors can potentially explain it. We address these
problems by using Tweedie distributions and three Machine Learning procedures.
The objective is to assess whether this improves the ratemaking by using the
prospective application of the Income Stabilization Tool in Italy as a case
study. We look at the econometric performance of the models and the impact of
using their predictions in practice. Some of these procedures efficiently
predict indemnities, using a limited number of regressors, and ensuring the
scheme's financial stability.",The role of Common Agricultural Policy (CAP) in enhancing and stabilising farm income: an analysis of income transfer efficiency and the Income Stabilisation Tool,2021-04-29 11:13:19,"Luigi Biagini, Simone Severini","http://arxiv.org/abs/2104.14188v1, http://arxiv.org/pdf/2104.14188v1",econ.GN
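A minimal sketch of a Tweedie regression of the kind referenced in the abstract above, using scikit-learn's TweedieRegressor with a power parameter between 1 and 2 (a compound Poisson-gamma model suited to zero-inflated, right-skewed indemnities); the data and regressor names are synthetic and purely illustrative, not the thesis' FADN data or its chosen procedures.

import numpy as np
from sklearn.linear_model import TweedieRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Hypothetical farm-level regressors: farm size (ha) and a revenue-volatility proxy.
X = np.column_stack([rng.uniform(1, 200, n), rng.uniform(0, 1, n)])
# Synthetic indemnities: many zeros, right-skewed positive amounts otherwise.
claim = rng.random(n) < 0.3
y = np.where(claim, rng.gamma(shape=2.0, scale=50 + 2 * X[:, 0] + 100 * X[:, 1]), 0.0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# A power between 1 and 2 selects a compound Poisson-gamma (Tweedie) distribution.
model = TweedieRegressor(power=1.5, alpha=0.1, max_iter=1000).fit(X_train, y_train)
print("test D^2 (Tweedie deviance explained):", round(model.score(X_test, y_test), 3))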
32094,gn,"We estimate the short- to medium term impact of six major past pandemic
crises on the CO2 emissions and energy transition to renewable electricity. The
results show that the previous pandemics led on average to a 3.4-3.7% fall in
the CO2 emissions in the short-run (1-2 years since the start of the pandemic).
The effect is present only in the rich countries, as well as in countries with
the highest pandemic death toll (where it disappears only after 8 years) and in
countries that were hit by the pandemic during economic recessions. We found
that the past pandemics increased the share of electricity generated from
renewable sources within the five-year horizon by 1.9-2.3 percentage points in
the OECD countries and by 3.2-3.9 percentage points in countries experiencing
economic recessions. We discuss the implications of our findings in the context
of CO2 emissions and the transition to renewable energy in the post-COVID-19
era.",The impact of past pandemics on CO$_2$ emissions and transition to renewable energy,2021-04-29 11:37:16,Michal Brzezinski,"http://arxiv.org/abs/2104.14199v1, http://arxiv.org/pdf/2104.14199v1",econ.GN
32095,gn,"Although there is a clear indication that stages of residential decision
making are characterized by their own stakeholders, activities, and outcomes,
many studies on residential low-carbon technology adoption only implicitly
address stage-specific dynamics. This paper explores stakeholder influences on
residential photovoltaic adoption from a procedural perspective, so-called
stakeholder dynamics. The major objective is the understanding of underlying
mechanisms to better exploit the potential for residential photovoltaic uptake.
Four focus groups have been conducted in close collaboration with the
independent institute for social science research SINUS Markt- und
Sozialforschung in East Germany. By applying a qualitative content analysis,
major influence dynamics within three decision stages are synthesized with the
help of egocentric network maps from the perspective of residential
decision-makers. Results indicate that actors closest in terms of emotional and
spatial proximity, such as members of the social network, represent the major
influence on residential PV decision-making throughout the stages. Furthermore,
decision-makers with a higher level of knowledge are more likely to move on to
the subsequent stage. A shift from passive exposure to proactive search takes
place through the process, but this shift is less pronounced among risk-averse
decision-makers who continuously request proactive influences. The discussions
revealed largely unexploited potential regarding two stakeholder groups, local
utilities and local governments, which are perceived as independent, trustworthy
and credible stakeholders. Public stakeholders must fulfill their
responsibility in achieving climate goals by advising, assisting, and financing
services for low-carbon technology adoption at the local level. Supporting
community initiatives through political frameworks appears to be another
promising step.",Stakeholder dynamics in residential solar energy adoption: findings from focus group discussions in Germany,2021-04-29 13:14:18,"Fabian Scheller, Isabel Doser, Emily Schulte, Simon Johanning, Russell McKenna, Thomas Bruckner","http://dx.doi.org/10.1016/j.erss.2021.102065, http://arxiv.org/abs/2104.14240v1, http://arxiv.org/pdf/2104.14240v1",econ.GN
32096,gn,"Advancing models for accurate estimation of food production is essential for
policymaking and managing national plans of action for food security. This
research proposes two machine learning models for the prediction of food
production. The adaptive network-based fuzzy inference system (ANFIS) and
multilayer perceptron (MLP) methods are used to advance the prediction models.
In the present study, two variables of livestock production and agricultural
production were considered as the source of food production. Three variables
were used to evaluate livestock production, namely livestock yield, live
animals, and animals slaughtered, and two variables were used to assess
agricultural production, namely agricultural production yields and losses. Iran
was selected as the case study of the current study. Therefore, time-series
data related to livestock and agricultural productions in Iran from 1961 to
2017 have been collected from the FAOSTAT database. First, 70% of this data was
used to train ANFIS and MLP, and the remaining 30% of the data was used to test
the models. The results disclosed that the ANFIS model with Generalized
bell-shaped (Gbell) built-in membership functions has the lowest error level in
predicting food production. The findings of this study provide a suitable tool
for policymakers who can use this model and predict the future of food
production to provide a proper plan for the future of food security and food
supply for the next generations.",Prediction of Food Production Using Machine Learning Algorithms of Multilayer Perceptron and ANFIS,2021-04-29 15:14:53,"Saeed Nosratabadi, Sina Ardabili, Zoltan Lakner, Csaba Mako, Amir Mosavi","http://arxiv.org/abs/2104.14286v1, http://arxiv.org/pdf/2104.14286v1",econ.GN
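The MLP branch of the comparison in the abstract above can be sketched with scikit-learn's MLPRegressor and a 70/30 chronological split; the code below uses synthetic annual series whose names mirror the abstract's variables, but all numbers are made up and the network architecture is an arbitrary assumption.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
years = np.arange(1961, 2018)
n = len(years)
# Synthetic inputs: livestock yield, live animals, animals slaughtered, crop yields, crop losses.
X = rng.normal(size=(n, 5)).cumsum(axis=0)
# Synthetic food production target driven by the inputs plus noise.
y = X @ np.array([0.5, 0.3, 0.2, 0.8, -0.4]) + rng.normal(scale=0.5, size=n)

split = int(0.7 * n)  # first 70% of years for training, remaining 30% for testing
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
model.fit(X[:split], y[:split])
rmse = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
print("test RMSE:", round(rmse, 3))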
32097,gn,"Analyzing the financial benefit of marketing is still a critical topic for
both practitioners and researchers. Companies consider marketing costs as a
type of investment and expect this investment to be returned to the company in
the form of profit. On the other hand, companies adopt different innovative
strategies to increase their value. Therefore, this study aims to test the
impact of marketing investment on firm value and systematic risk. To do so,
data related to four Arabic emerging markets during the period 2010-2019 are
considered, and firm share price and beta share are considered to measure firm
value and systematic risk, respectively. Since a firm's ownership concentration
is a determinant factor in firm value and systematic risk, this variable is
considered a moderated variable in the relationship between marketing
investment and firm value and systematic risk. The findings of the study, using
panel data regression, indicate that increasing investment in marketing has a
positive effect on firm value. It is also found that the
ownership concentration variable has a reinforcing role in the relationship
between marketing investment and firm value. It is also disclosed that ownership concentration
moderates systematic risk, in line with the monitoring impact of controlling
shareholders. This study provides a logical combination of governance-marketing
dimensions to interpret performance indicators in the capital market.",The Effect of Marketing Investment on Firm Value and Systematic Risk,2021-04-29 15:41:05,"Musaab Mousa, Saeed Nosratabadi, Judit Sagi, Amir Mosavi","http://dx.doi.org/10.3390/joitmc7010064, http://arxiv.org/abs/2104.14301v1, http://arxiv.org/pdf/2104.14301v1",econ.GN
32098,gn,"Background: Poverty among the population of a country is one of the most
disputable topics in social studies. Many researchers devote their work to
identifying the factors that influence it most. Bulgaria is one of the EU
member states with the highest poverty levels. Regional facets of social
exclusion and risks of poverty among the population are a key priority of the
National Development Strategy for the third decade of the 21st century. In order to
mitigate the regional poverty levels it is necessary for the social policy
makers to pay more attention to the various factors expected to influence these
levels. Results: Poverty reduction is observed in most areas of the country.
The regions with obviously favorable developments are Sofia district, Pernik,
Pleven, Lovech, Gabrovo, Veliko Tarnovo, Silistra, Shumen, Stara Zagora,
Smolyan, Kyustendil and others. Increased levels of poverty are found for
Razgrad and Montana districts. It was found that the reduction in the risk of
poverty is associated with increases in employment, investment, and housing.
Conclusion: The social policy making needs to be aware of the fact that the
degree of exposition to risk of poverty and social exclusion significantly
relates to the levels of regional employment, investment and housing.",Regional poverty in Bulgaria in the period 2008-2019,2021-04-28 09:19:55,Iva Raycheva,"http://dx.doi.org/10.9790/0837-2603064147, http://arxiv.org/abs/2104.14414v1, http://arxiv.org/pdf/2104.14414v1",econ.GN
32099,gn,"Hydrogen can contribute substantially to the reduction of carbon emissions in
industry and transportation. However, the production of hydrogen through
electrolysis creates interdependencies between hydrogen supply chains and
electricity systems. Therefore, as governments worldwide are planning
considerable financial subsidies and new regulation to promote hydrogen
infrastructure investments in the next years, energy policy research is needed
to guide such policies with holistic analyses. In this study, we link an
electrolytic hydrogen supply chain model with an electricity system dispatch
model, for a cross-sectoral case study of Germany in 2030. We find that
hydrogen infrastructure investments and their effects on the electricity system
are strongly influenced by electricity prices. Given current uniform prices,
hydrogen production increases congestion costs in the electricity grid by 17%.
In contrast, passing spatially resolved electricity price signals leads to
electrolyzers being placed at low-cost grid nodes and further away from
consumption centers. This causes lower end-use costs for hydrogen. Moreover,
congestion management costs decrease substantially, by up to 20% compared to
the benchmark case without hydrogen. These savings could be transferred into
corresponding subsidies for hydrogen production. Thus, our study demonstrates the
benefits of differentiating economic signals for hydrogen production based on
spatial criteria.",Integrating Hydrogen in Single-Price Electricity Systems: The Effects of Spatial Economic Signals,2021-05-01 03:36:44,"Frederik vom Scheidt, Jingyi Qu, Philipp Staudt, Dharik S. Mallapragada, Christof Weinhardt","http://arxiv.org/abs/2105.00130v2, http://arxiv.org/pdf/2105.00130v2",econ.GN
32100,gn,"We provide the first direct test of how the credibility of an auction format
affects bidding behavior and final outcomes. To do so, we conduct a series of
laboratory experiments where the role of the seller is played by a human
subject who receives the revenue from the auction and who (depending on the
treatment) has agency to determine the outcome of the auction. Contrary to
theoretical predictions, we find that the non-credible second-price auction
fails to converge to the first-price auction. We provide a behavioral
explanation for our results based on sellers' aversion to rule-breaking, which
is confirmed by an additional experiment.",Credibility in Second-Price Auctions: An Experimental Test,2021-05-01 12:55:58,"Ahrash Dianat, Mikhail Freer","http://arxiv.org/abs/2105.00204v2, http://arxiv.org/pdf/2105.00204v2",econ.GN
32101,gn,"We propose an original application of screening methods using machine
learning to detect collusive groups of firms in procurement auctions. As a
methodical innovation, we calculate coalition-based screens by forming
coalitions of bidders in tenders to flag bid-rigging cartels. Using Swiss,
Japanese and Italian procurement data, we investigate the effectiveness of our
method in different countries and auction settings, in our cases first-price
sealed-bid and mean-price sealed-bid auctions. We correctly classify 90% of
the collusive and competitive coalitions when applying four machine learning
algorithms: lasso, support vector machine, random forest, and super learner
ensemble method. Finally, we find that coalition-based screens for the variance
and the uniformity of bids are in all cases the most important predictors
according to the random forest.",Detecting bid-rigging coalitions in different countries and auction formats,2021-05-01 22:48:51,"David Imhof, Hannes Wallimann","http://arxiv.org/abs/2105.00337v1, http://arxiv.org/pdf/2105.00337v1",econ.GN
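A minimal sketch of coalition-based screening as described in the abstract above: per-coalition screens (here just a coefficient of variation and a crude spread statistic, simplifications of the paper's variance and uniformity screens) feed a random forest that labels coalitions as collusive or competitive; the bid data are simulated, not the Swiss, Japanese or Italian procurement data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)

def coalition_screens(bids):
    # Screens computed over the bids of one coalition in one tender.
    cv = bids.std() / bids.mean()                      # coefficient of variation (variance screen)
    spread = (bids.max() - bids.min()) / bids.mean()   # crude uniformity-style screen
    return [cv, spread]

def simulate(collusive, n_bids=5):
    # Colluding coalitions submit artificially close cover bids around the designated winner.
    scale = 0.01 if collusive else 0.10
    bids = 100 * (1 + rng.normal(0, scale, n_bids))
    return coalition_screens(bids)

X = np.array([simulate(c) for c in (True,) * 500 + (False,) * 500])
y = np.array([1] * 500 + [0] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("share correctly classified:", forest.score(X_te, y_te))
print("screen importances (cv, spread):", forest.feature_importances_)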
32102,gn,"Black markets can reduce the effects of distortionary regulations by
reallocating scarce resources toward consumers who value them most. The illegal
nature of black markets, however, creates transaction costs that reduce the
gains from trade. We take a partial identification approach to infer gains from
trade and transaction costs in the black market for Beijing car license plates,
which emerged following their recent rationing. We find that at least 11% of
issued license plates are illegally traded. The estimated transaction costs
suggest severe market frictions: between 61% and 82% of the realized gains from
trade are lost to transaction costs.",The Black Market for Beijing License Plates,2021-05-02 20:50:54,"Øystein Daljord, Guillaume Pouliot, Junji Xiao, Mandy Hu","http://arxiv.org/abs/2105.00517v1, http://arxiv.org/pdf/2105.00517v1",econ.GN
32103,gn,"A series of crises, culminating with COVID-19, shows that going Beyond GDP is
urgently necessary. Social and environmental degradation are consequences of
emphasizing GDP as a measure of progress. This degradation created the
conditions for the COVID-19 pandemic and limited the efficacy of
counter-measures. Additionally, rich countries did not fare the pandemic much
better than poor ones. COVID-19 thrived on inequalities and a lack of
cooperation. In this article we leverage defensive growth models to explain
the complex relationships between these factors, and we put forward the idea of
neo-humanism, a cultural movement grounded on evidence from quality-of-life
studies. The movement proposes a new culture leading towards a socially and
environmentally sustainable future. Specifically, neo-humanism suggests that
prioritizing well-being by, for instance, promoting social relations, would
benefit the environment, enable collective action to address public issues,
which in turn positively affects productivity and health, among other
behavioral outcomes, and thereby instills a virtuous cycle. Arguably, such a
society would have been better endowed to cope with COVID-19, and possibly even
prevented the pandemic. Neo-humanism proposes a world in which the well-being
of people comes before the well-being of markets, in which promoting
cooperation and social relations represents the starting point for better
lives, and a peaceful and respectful coexistence with other species on Earth.",Neo-humanism and COVID-19: Opportunities for a socially and environmentally sustainable world,2021-05-03 00:37:52,"Francesco Sarracino, Kelsey J. O'Connor","http://arxiv.org/abs/2105.00556v1, http://arxiv.org/pdf/2105.00556v1",econ.GN
32105,gn,"While the importance of peer influences has been demonstrated in several
studies, little is known about the underlying mechanisms of active peer effects
in residential photovoltaic (PV) diffusion. Empirical evidence indicates that
the impacts of inter-subjective exchanges are dependent on the subjective
mutual evaluation of the interlocutors. This paper aims to quantify, how
subjective evaluations of peers affect peer effects across different stages of
PV adoption decision-making. The findings of a survey among potential and
current adopters in Germany (N=1,165) confirm two hypotheses. First, peer effects
play a role in residential PV adoption: the number of peer adopters in the
decision-maker's social circle has a positive effect on the decision-maker's
belief that their social network supports PV adoption; their ascription of
credibility on PV-related topics to their peers; and their interest in actively
seeking information from their peers in all decision-making stages. Second,
there is a correlation between the perceived positive attributes of a given
peer and the reported influence of said peer within the decision-making
process, suggesting that decision-makers' subjective evaluations of peers play
an important role in active peer effects. Decision-makers are significantly
more likely to engage in and be influenced by interactions with peers who they
perceive as competent, trustworthy, and likeable. In contrast, attributes such
as physical closeness and availability have a less significant effect. From a
policymaking perspective, this study suggests that the density and quality of
peer connections empower potential adopters. Accordingly, peer consultation and
community-led outreach initiatives should be promoted to accelerate residential
PV adoption.",Active peer effects in residential photovoltaic adoption: evidence on impact drivers among potential and current adopters in Germany,2021-05-03 15:48:03,"Fabian Scheller, Sören Graupner, James Edwards, Jann Weinand, Thomas Bruckner","http://arxiv.org/abs/2105.00796v1, http://arxiv.org/pdf/2105.00796v1",econ.GN
32106,gn,"Observation of other people's choices can provide useful information in many
circumstances. However, individuals may not utilize this information
efficiently, i.e., they may make decision-making errors in social interactions.
In this paper, I use a simple and transparent experimental setting to identify
these errors. In a within-subject design, I first show that subjects exhibit a
higher level of irrationality in the presence than in the absence of social
interaction, even when they receive informationally equivalent signals across
the two conditions. A series of treatments aimed at identifying mechanisms
suggests that a decision maker is often uncertain about the behavior of other
people so that she has difficulty in inferring the information contained in
others' choices. Building upon these reduced-form results, I then introduce a
general decision-making process to highlight three sources of error in
decision-making under social interactions. This model is non-parametrically
estimated and sheds light on what variation in the data identifies which error.",Errors in Learning from Others' Choices,2021-05-03 20:40:44,Mohsen Foroughifar,"http://arxiv.org/abs/2105.01043v3, http://arxiv.org/pdf/2105.01043v3",econ.GN
32107,gn,"Climate and energy policy targets of the European Commission aim to make
Europe the first climate-neutral continent by 2050. For low-carbon and
net-neutral energy systems primarily based on variable renewable power
generation, issues related to market integration, cannibalisation of
revenues, and cost recovery of wind and solar photovoltaics have become major
concerns. The traditional discussion of the merit-order effect expects
wholesale power prices in a system with 100 % renewable energy sources to
alternate between very high and very low values. Unlike previous work, we
present a structured and technology-specific analysis of the cross-sectoral
demand bidding effect for the price formation in low-carbon power markets.
Starting from a stylised market arrangement and by successively augmenting it
with all relevant technologies, we construct and quantify the cross-sectoral
demand bidding effects in future European power markets with the cross-sectoral
market modelling framework SCOPE SD. As the main contribution, we explain and
substantiate the market clearing effects of new market participants in detail.
Here, we put a special focus on hybrid heat supply systems consisting of a
combined heat and power plant, a fuel boiler, thermal storage, and electrical
backup, and derive the opportunity costs of these systems. Furthermore, we show the
effects of cross-border integration for a large-scale European net-neutral
energy scenario. Finally, the detailed information on market clearing effects
allows us to evaluate the resulting revenues of all major technology categories
on future electricity markets.",On Wholesale Electricity Prices and Market Values in a Carbon-Neutral Energy System,2021-05-03 22:00:36,"Diana Böttger, Philipp Härtel","http://arxiv.org/abs/2105.01127v1, http://arxiv.org/pdf/2105.01127v1",econ.GN
32108,gn,"State governments in the U.S. have been facing difficult decisions involving
tradeoffs between economic and health-related outcomes during the COVID-19
pandemic. Despite evidence of the effectiveness of government-mandated
restrictions in mitigating the spread of contagion, these orders are stigmatized
due to undesirable economic consequences. This tradeoff resulted in state
governments employing mandates in widely different ways. We compare the
different policies states implemented during periods of restriction (lockdown)
and reopening with indicators of COVID-19 spread and consumer card spending at
each state during the first wave of the pandemic in the U.S. between March and
August 2020. We find that while some states enacted reopening decisions when
the incidence rate of COVID-19 was minimal or sustained in its relative
decline, other states relaxed socioeconomic restrictions near their highest
incidence and prevalence rates experienced so far. Nevertheless, all states
experienced similar trends in consumer card spending recovery, which was
strongly correlated with reopening policies following the lockdowns and
relatively independent from COVID-19 incidence rates at the time. Our findings
suggest that consumer card spending patterns can be attributed to government
mandates rather than COVID-19 incidence in the states. We estimate that the recovery
in states that reopened in late April exceeded that in states that did not reopen
in the same period by 15% for consumer card spending and 18% for
spending by high-income households. This result highlights the important role
of state policies in minimizing health impacts while promoting economic
recovery and helps in planning effective interventions in subsequent waves and
immunization efforts.","Relationship among state reopening policies, health outcomes and economic recovery through first wave of the COVID-19 pandemic in the U.S",2021-05-03 22:43:34,"Alexandre K. Ligo, Emerson Mahoney, Jeffrey Cegan, Benjamin D. Trump, Andrew S. Jin, Maksim Kitsak, Jesse Keenan, Igor Linkov","http://dx.doi.org/10.1371/journal.pone.0260015, http://arxiv.org/abs/2105.01142v2, http://arxiv.org/pdf/2105.01142v2",econ.GN
32109,gn,"It is well known that rightly applied reverse auctions offer big commercial
potential to procurement departments. However, the sheer number of auction
types often overwhelms users in practice. And since the implications of a
wrongly chosen auction type are equally well known, the overall usage of
reverse auctions lacks its potential significantly. In this paper, a novel
method is being proposed that guides the user in selecting the right
combination of basic auction forms for single lot events, considering both
market-, as well as supplier-related, bijective criteria.",How the 'Auction Cube' Supports the Selection of Auction Designs in Industrial Procurement,2021-05-03 23:16:06,"Gregor Berz, Florian Rupp, Brian Sieben","http://arxiv.org/abs/2105.01154v2, http://arxiv.org/pdf/2105.01154v2",econ.GN
32110,gn,"The COVID-19 pandemic forced almost all professional and amateur sports to be
played without crowds in attendance. Thus, it induced a large-scale natural
experiment on the impact of social pressure on decision making and behavior in
sports fields. Using a data set of 1027 rugby union matches from 11 tournaments
in 10 countries, we find that home teams won fewer matches and their points
difference decreased during the pandemic, shedding light on the impact of
crowd attendance on the ""home advantage"" of sports teams.",Home advantage and crowd attendance: Evidence from rugby during the Covid 19 pandemic,2021-05-04 15:14:20,"Federico Fioravanti, Fernando Delbianco, Fernando Tohmé","http://arxiv.org/abs/2105.01446v1, http://arxiv.org/pdf/2105.01446v1",econ.GN
32111,gn,"How are economies in a modern age impacted by epidemics? In what ways is
economic life disrupted? How can pandemics be modeled? What can be done to
mitigate and manage the danger? Does the threat of pandemics increase or
decrease in the modern world? The Covid-19 pandemic has demonstrated the
importance of these questions and the potential of complex systems science to
provide answers. This article offers a broad overview of the history of
pandemics, of established facts, and of models of infection diffusion,
mitigation strategies, and economic impact. The example of the Covid-19
pandemic is used to illustrate the theoretical aspects, but the article also
includes considerations concerning other historic epidemics and the danger of
more infectious and less controllable outbreaks in the future.",Epidemics in modern economies,2021-05-06 04:24:43,Torsten Heinrich,"http://arxiv.org/abs/2105.02387v2, http://arxiv.org/pdf/2105.02387v2",econ.GN
32112,gn,"Demand response (DR) programs have gained much attention after the
restructuring of the electricity markets and have been used to optimize the
decisions of market participants. They can potentially enhance system
reliability and manage price volatility by modifying the amount or time of
electricity consumption. This paper proposes a novel game-theoretical model
accounting for the relationship between retailers (leaders) and consumers
(followers) in a dynamic price environment under uncertainty. The quality and
economic gains brought by the proposed procedure essentially stem from the
utilization of demand elasticity in a hierarchical decision process that
renders the options of different market configurations under different sources
of uncertainty. The model is solved under two frameworks: by considering the
retailer's market power and by accounting for an equilibrium setting based on a
perfect competitive game. These are formulated in terms of a mathematical
program with equilibrium constraints (MPEC) and with a mixed-integer linear
program (MILP), respectively. In particular, the retailers' market power model
is first formulated as a bi-level optimization problem, and the MPEC is
subsequently derived by replacing the consumers' problem (lower level) with its
Karush-Kuhn-Tucker (KKT) optimality conditions. In contrast, the equilibrium
model is solved as a MILP by concatenating the retailer's and consumers' KKT
optimality conditions. We illustrate the proposed procedure and numerically
assess the performance of the model using realistic data. Numerical results
show the applicability and effectiveness of the proposed model to explore the
interactions of market power and DR programs. The results confirm that
consumers are better off in an equilibrium framework while the retailer
increases its expected profit when exercising its market power.",Dynamic tariffs-based demand response in retail electricity market under uncertainty,2021-05-07 20:35:51,"Arega Getaneh Abate, Rosana Riccardi, Carlos Ruiz","http://arxiv.org/abs/2105.03405v3, http://arxiv.org/pdf/2105.03405v3",econ.GN
32113,gn,"With the costs of renewable energy technologies declining, new forms of urban
energy systems are emerging that can be established in a cost-effective way.
The SolarEV City concept has been proposed, which uses rooftop photovoltaics (PV)
to the maximum extent, combined with electric vehicles (EVs) with bi-directional
charging for energy storage. Urban environments consist of various areas, such
as residential and commercial districts, with different energy consumption
patterns, building structures, and car parks. The cost effectiveness and
decarbonization potentials of PV + EV and PV (+ battery) systems vary across
these different urban environments and change over time as cost structures
gradually shift. To evaluate these characteristics, we performed
techno-economic analyses of PV, battery, and EV technologies for a residential
area in Shinchi, Fukushima and the central commercial district of Kyoto, Japan
between 2020 and 2040. We found that PV + EV and PV only systems in 2020 are
already cost competitive relative to existing energy systems (grid electricity
and gasoline car). In particular, the PV + EV system rapidly increases its
economic advantage over time, particularly in the residential district which
has larger PV capacity and EV battery storage relative to the size of energy
demand. Electricity exchanges between neighbors (e.g., peer-to-peer or
microgrid) further enhanced the economic value (net present value) and
decarbonization potential of PV + EV systems up to 23 percent and 7 percent in
2030, respectively. These outcomes have important strategic implications for
urban decarbonization over the coming decades.",Deeply decarbonizing residential and urban central districts through photovoltaics plus electric vehicle applications,2021-05-08 05:40:15,"Takuro Kobashi, Younghun Choi, Yujiro Hirano, Yoshiki Yamagata, Kelvin Say","http://dx.doi.org/10.1016/j.apenergy.2021.118142, http://arxiv.org/abs/2105.03562v1, http://arxiv.org/pdf/2105.03562v1",econ.GN
32114,gn,"A meta-analysis of published estimates shows that the social cost of carbon
has increased as knowledge about climate change accumulates. Correcting for
inflation and emission year and controlling for the discount rate, kernel
density decomposition reveals a non-stationary distribution. In the last 10
years, estimates of the social cost of carbon have increased from $33/tC to
$146/tC for a high discount rate and from $446/tC to $1925/tC for a low
discount rate. Actual carbon prices are almost everywhere below the estimated
social cost of carbon and should therefore go up.",Estimates of the social cost of carbon have increased over time,2021-05-08 12:59:06,Richard S. J. Tol,"http://arxiv.org/abs/2105.03656v3, http://arxiv.org/pdf/2105.03656v3",econ.GN
32115,gn,"This tidal stream energy industry has to date been comprised of small
demonstrator projects made up of one to a four turbines. However, there are
currently plans to expand to commercially sized projects with tens of turbines
or more. As the industry moves to large-scale arrays for the first time, there
has been a push to develop tools to optimise the array design and help bring
down the costs. This review investigates different methods of modelling the
economic performance of tidal-stream arrays, for use within these optimisation
tools. The different cost reduction pathways are discussed: from costs falling
as the global installed capacity increases due to greater experience, to improved
power curves through larger-diameter, higher-rated turbines, and to economic
efficiencies that can be found by moving to large-scale arrays. A literature
review is conducted to establish the most appropriate input values for use in
economic models. This includes finding a best case, worst case and typical
values for costs and other related parameters. The information collated in this
review can provide a useful steering for the many optimisation tools that have
been developed, especially when cost information is commercially sensitive and
a realistic parameter range is difficult to obtain.",Economic analysis of tidal stream turbine arrays: a review,2021-05-11 03:14:33,"Zoe Goss, Daniel Coles, Matthew Piggott","http://arxiv.org/abs/2105.04718v1, http://arxiv.org/pdf/2105.04718v1",econ.GN
32116,gn,"I propose an approach to quantify attention to inflation in the data and show
that the decrease in the volatility and persistence of U.S. inflation after the
Great Inflation period was accompanied by a decline in the public's attention
to inflation. This decline in attention has important implications (positive
and normative) for monetary policy as it renders managing inflation
expectations more difficult and can lead to inflation-attention traps:
prolonged periods of a binding lower bound and low inflation due to
slowly-adjusting inflation expectations. As attention declines the optimal
policy response is to increase the inflation target. Accounting for the lower
bound fundamentally changes the normative implications of declining attention.
While lower attention raises welfare absent the lower-bound constraint, it
decreases welfare when accounting for the lower bound.",Inflation -- who cares? Monetary Policy in Times of Low Attention,2021-05-11 21:48:07,Oliver Pfäuti,"http://arxiv.org/abs/2105.05297v6, http://arxiv.org/pdf/2105.05297v6",econ.GN
32117,gn,"The study examines the essential features of the so-called platform-based
work, which is rapidly evolving into a major, potentially game-changing force
in the labor market. From low-skilled, low-paid services (such as passenger
transport) to highly skilled and high-paying project-based work (such as the
development of artificial intelligence algorithms), a broad range of tasks can
be carried out through a variety of digital platforms. Our paper discusses the
platform-based content, working conditions, employment status, and advocacy
problems. Terminological and methodological problems are dealt with in-depth in
the course of the literature review, together with the 'gray areas' of work and
employment regulation. To examine some of the complex dynamics of this
fast-evolving arena, we focus on the unsuccessful market entry of the digital
platform company Uber in Hungary in 2016 and its relationship to
institutional-regulatory standards for platform-based work. Dilemmas relevant to
the enforcement of labor law regarding platform-based work are also given
special attention in the study. Employing a digital workforce is a significant
challenge not only for labor law regulation but also for stakeholder advocacy.",Emerging Platform Work in the Context of the Regulatory Loophole (The Uber Fiasco in Hungary),2021-05-12 16:34:55,"Csaba Mako, Miklos Illessy, Jozsef Pap, Saeed Nosratabadi","http://arxiv.org/abs/2105.05651v1, http://arxiv.org/pdf/2105.05651v1",econ.GN
32118,gn,"U.S. metropolitan areas, particularly in the industrial Midwest and
Northeast, are well-known for high levels of racial segregation. This is
especially true where core cities end and suburbs begin; often crossing the
street can lead to a physically similar, but much less ethnically diverse,
suburban neighborhood. While these differences are often visually or
""intuitively"" apparent, this study seeks to quantify them using Geographic
Information Systems and a variety of statistical methods. 2016 Census block
group data are used to calculate an ethnic Herfindahl index for a set of two
dozen large U.S. cities and their contiguous suburbs. Then, a mathematical
method is developed to calculate a block-group-level ""Border Disparity Index""
(BDI), which is shown to vary by MSA and by specific suburbs. Its values can be
compared across the sample to examine which cities are more likely to have
borders that separate more-diverse block groups from less-diverse ones. The
index can also be used to see which core cities are relatively more or less
diverse than their suburbs, and which individual suburbs have the largest
disparities vis-à-vis their core city. Atlanta and Detroit have particularly
diverse suburbs, while Milwaukee's are not. Regression analysis shows that
income differences and suburban shares of Black residents play significant
roles in explaining variation across suburbs.",Do City Borders Constrain Ethnic Diversity?,2021-05-13 04:08:07,Scott W. Hegerty,"http://arxiv.org/abs/2105.06017v1, http://arxiv.org/pdf/2105.06017v1",econ.GN
32119,gn,"Milwaukee's 53206 ZIP code, located on the city's near North Side, has drawn
considerable attention for its poverty and incarceration rates, as well as for
its large proportion of vacant properties. As a result, it has benefited from
targeted policies at the city level. Keeping in mind that ZIP codes are often
not the most effective unit of geographic analysis, this study investigates
Milwaukee's socioeconomic conditions at the block group level. These smaller
areas' statistics are then compared with those of their corresponding ZIP
codes. The 53206 ZIP code is compared against others in Milwaukee for eight
socioeconomic variables and is found to be near the extreme end of most
rankings. This ZIP code would also be among Chicago's most extreme areas, but
would lie near the middle of the rankings if located in Detroit. Parts of other
ZIP codes, which are often adjacent, are statistically similar to 53206,
however--suggesting that a focus solely on ZIP codes, while a convenient
shorthand, might overlook neighborhoods that have a similar need for investment.
A multivariate index created for this study performs similarly to a standard
multivariate index of economic deprivation if spatial correlation is taken into
account, confirming that poverty and other socioeconomic stresses are
clustered, both in the 53206 ZIP code and across Milwaukee.",How Unique is Milwaukee's 53206? An Examination of Disaggregated Socioeconomic Characteristics Across the City and Beyond,2021-05-13 04:13:33,Scott W. Hegerty,"http://arxiv.org/abs/2105.06021v1, http://arxiv.org/pdf/2105.06021v1",econ.GN
32120,gn,"Decades of economic decline have led to areas of increased deprivation in a
number of U.S. inner cities, which can be linked to adverse health and other
outcomes. Yet the calculation of a single ""deprivation"" index, which has
received wide application in Britain and elsewhere in the world, involves a
choice of variables and methods that have not been directly compared in the
American context. This study creates four related measures--using two sets of
variables and two weighting schemes--to create such indices for block groups in
Buffalo, Cleveland, Detroit, and Milwaukee. After examining the indices'
similarities, we then map concentrations of high deprivation in each city and
analyze their relationships to income, racial makeup, and transportation usage.
Overall, we find certain measures to have higher correlations than others, but
that all show deprivation to be linked with lower incomes and a higher nonwhite
population.",Spatial Measures of Socioeconomic Deprivation: An Application to Four Midwestern Industrial Cities,2021-05-13 03:53:10,Scott W. Hegerty,"http://arxiv.org/abs/2105.07821v1, http://arxiv.org/pdf/2105.07821v1",econ.GN
32121,gn,"According to ""Social Disorganization"" theory, criminal activity increases if
the societal institutions that might be responsible for maintaining order are
weakened. Do large apartment buildings, which often have fairly transient
populations and low levels of community involvement, have disproportionately
high rates of crime? Do these rates differ during the daytime or nighttime,
depending on when residents are present at, or away from, their property? This study
examines four types of ""acquisitive"" crime in Milwaukee during 2014. Overall,
nighttime crimes are shown to be more dispersed than daytime crimes. A spatial
regression estimation finds that the density of multiunit housing is positively
related to all types of crime except burglaries, but not for all times of day.
Daytime robberies, in particular, increase as the density of multiunit housing
increases.","Acquisitive Crimes, Time of Day, and Multiunit Housing in the City of Milwaukee",2021-05-13 03:51:06,Scott W. Hegerty,"http://arxiv.org/abs/2105.07822v1, http://arxiv.org/pdf/2105.07822v1",econ.GN
32136,gn,"Based on the Global Entrepreneurship Monitor (GEM) surveys and conducting a
panel data estimation to test our hypothesis, this paper examines whether
corruption perceptions might sand or grease the wheels for entrepreneurship
inside companies or intrapreneurship in a sample of 92 countries for the period
2012 to 2019. Our results indicate that perceived corruption sands the wheels
of intrapreneurship. There is evidence of a quadratic relation, but this
relation is only clear for the less developed countries, which somewhat moderates
the strongly negative effect of corruption in those countries. The results also
confirm that corruption influences intrapreneurship differently depending on
the level of development of the country.",Perception of corruption influences entrepreneurship inside established companies,2021-05-25 13:54:37,"F. Javier Sanchez-Vidal, Camino Ramon-Llorens","http://arxiv.org/abs/2105.11829v1, http://arxiv.org/pdf/2105.11829v1",econ.GN
32122,gn,"Recent research on the geographic locations of bank branches in the United
States has identified thresholds below which a given area can be considered to
be a ""banking desert."" Thus far, most analyses of the country as a whole have
tended to focus on minimum distances from geographic areas to the nearest bank,
while a recent density-based analysis focused only on the city of Chicago. As
such, there is not yet a nationwide study of bank densities for the entire
United States. This study calculates banks per square mile for U.S. Census
tracts over ten different ranges of population density. One main finding is
that bank density is sensitive to the measurement radius used (for example,
density in urban areas can be calculated as the number of banks within two
miles, while some rural areas require a 20-mile radius). This study then
compiles a set of lower 5- and 10-percent thresholds that might be used to
identify ""banking deserts"" in various urban, suburban, and rural areas; these
largely conform to the findings of previous analyses. Finally, adjusting for
population density using regression residuals, this paper examines whether an
index of economic deprivation is significantly higher in the five percent of
""desert"" tracts than in the remaining 95 percent. The differences are largest
-- and highly significant -- in the densest tracts in large urban areas.","Bank Density, Population Density, and Economic Deprivation Across the United States",2021-05-13 03:46:12,Scott W. Hegerty,"http://arxiv.org/abs/2105.07823v1, http://arxiv.org/pdf/2105.07823v1",econ.GN
32123,gn,"Decades of deindustrialization have led to economic decline and population
loss throughout the U.S. Midwest, with the highest national poverty rates found
in Detroit, Cleveland, and Buffalo. This poverty is often confined to core
cities themselves, however, as many of their surrounding suburbs continue to
prosper. Poverty can therefore be highly concentrated at the MSA level, but
more evenly distributed within the borders of the city proper. One result of
this disparity is that if suburbanites consider poverty to be confined to the
central city, they might be less willing to devote resources to alleviate it.
But due to recent increases in suburban poverty, particularly since the 2008
recession, such urban-suburban gaps might be shrinking. Using Census
tract-level data, this study quantifies poverty concentrations for four ""Rust
Belt"" MSAs, comparing core-city and suburban concentrations in 2000, 2010, and
2015. There is evidence of a large gap between core cities and outlying areas,
which is closing in the three highest-poverty cities, but not in Milwaukee. A
set of four comparison cities show a smaller, more stable city-suburban divide
in the U.S. ""Sunbelt,"" while Chicago resembles a ""Rust Belt"" metro.",Are the Spatial Concentrations of Core-City and Suburban Poverty Converging in the Rust Belt?,2021-05-13 03:32:26,Scott W. Hegerty,"http://arxiv.org/abs/2105.07824v1, http://arxiv.org/pdf/2105.07824v1",econ.GN
32124,gn,"Climate change perceptions are fundamental for adaptation and environmental
policy support. Although Africa is one of the most vulnerable regions to
climate change, little research has focused on how climate change is perceived
in the continent. Using random forest methodology, we analyse Afrobarometer
data (N = 45,732), joint with climatic data, to explore what shapes climate
change perceptions in Africa. We include 5 different dimensions of climate
change perceptions: awareness, belief in its human cause, risk perception, need
to stop it and self-efficacy. Results indicate that perceived agriculture
conditions are crucial for perceiving climate change. Country-level factors and
long-term changes in local weather conditions are among the most important
predictors. Moreover, education level, access to information, poverty,
authoritarian values, and trust in institutions shape individual climate change
perceptions. Demographic effects -- including religion -- seem negligible.
These findings suggest to policymakers and environmental communicators how to
frame climate change in Africa to raise awareness, gather public support and
induce adaptation.",What shapes climate change perceptions in Africa? A random forest approach,2021-05-17 17:03:14,"Juan B Gonzalez, Alfonso Sanchez","http://arxiv.org/abs/2105.07867v1, http://arxiv.org/pdf/2105.07867v1",econ.GN
32125,gn,"I introduce a model-free methodology to assess the impact of disaster risk on
the market return. Using S&P500 returns and the risk-neutral quantile function
derived from option prices, I employ quantile regression to estimate local
differences between the conditional physical and risk-neutral distributions.
The results indicate substantial disparities primarily in the left-tail,
reflecting the influence of disaster risk on the equity premium. These
differences vary over time and persist beyond crisis periods. On average, the
bottom 5% of returns contribute to 17% of the equity premium, shedding light on
the Peso problem. I also find that disaster risk increases the stochastic
discount factor's volatility. Using a lower bound observed from option prices
on the left-tail difference between the physical and risk-neutral quantile
functions, I obtain similar results, reinforcing the robustness of my findings.",A Tale of Two Tails: A Model-free Approach to Estimating Disaster Risk Premia and Testing Asset Pricing Models,2021-05-18 03:22:34,Tjeerd de Vries,"http://arxiv.org/abs/2105.08208v6, http://arxiv.org/pdf/2105.08208v6",econ.GN
32126,gn,"The Online Labour Index (OLI) was launched in 2016 to measure the global
utilisation of online freelance work at scale. Five years after its creation,
the OLI has become a point of reference for scholars and policy experts
investigating the online gig economy. As the market for online freelancing work
matures, a high volume of data and new analytical tools allow us to revisit
half a decade of online freelance monitoring and extend the index's scope to
more dimensions of the global online freelancing market. In addition to
measuring the utilisation of online labour across countries and occupations by
tracking the number of projects and tasks posted on major English-language
platforms, the new Online Labour Index 2020 (OLI 2020) also tracks Spanish- and
Russian-language platforms, reveals changes over time in the geography of
labour supply, and estimates female participation in the online gig economy.
The rising popularity of software and tech work and the concentration of
freelancers on the Indian subcontinent are examples of the insights that the
OLI 2020 provides. The OLI 2020 delivers a more detailed picture of the world
of online freelancing via an interactive online visualisation updated daily. It
provides easy access to downloadable open data for policymakers, labour market
researchers, and the general public (www.onlinelabourobservatory.org).",Online Labour Index 2020: New ways to measure the world's remote freelancing market,2021-05-19 17:10:04,"Fabian Stephany, Otto Kässi, Uma Rani, Vili Lehdonvirta","http://arxiv.org/abs/2105.09148v2, http://arxiv.org/pdf/2105.09148v2",econ.GN
32161,gn,"A novel token-distance-based triple approach is proposed for identifying EPU
mentions in textual documents. The method is applied to a corpus of
French-language news to construct a century-long historical EPU index for the
Canadian province of Quebec. The relevance of the index is shown in a
macroeconomic nowcasting experiment.",A Century of Economic Policy Uncertainty Through the French-Canadian Lens,2021-06-09 20:34:44,"David Ardia, Keven Bluteau, Alaa Kassem","http://dx.doi.org/10.1016/j.econlet.2021.109938, http://arxiv.org/abs/2106.05240v2, http://arxiv.org/pdf/2106.05240v2",econ.GN
32127,gn,"The ambitious Net Zero aspirations of Great Britain (GB) require massive and
rapid developments of Variable Renewable Energy (VRE) technologies. GB
possesses substantial resources for these technologies, but questions remain
about which VRE should be exploited where. This study explores the trade-offs
between landscape impact, land use competition and resource quality for onshore
wind as well as ground- and roof-mounted photovoltaic (PV) systems for GB.
These trade-offs constrain the technical and economic potentials for these
technologies at the Local Authority level. Our approach combines
techno-economic and geospatial analyses with crowd-sourced scenicness data to
quantify landscape aesthetics. Despite strong correlations between scenicness
and planning application outcomes for onshore wind, no such relationship exists
for ground-mounted PV. The innovative method for rooftop-PV assessment combines
bottom-up analysis of four cities with a top-down approach at the national
level. The results show large technical potentials that are strongly
constrained by both landscape and land use aspects. This equates to about 1324
TWh of onshore wind, 153 TWh of rooftop PV and 1200-7093 TWh ground-mounted PV,
depending on scenario. We conclude with five recommendations that focus around
aligning energy and planning policies for VRE technologies across multiple
scales and governance arenas.","Exploring trade-offs between landscape impact, land use and resource quality for onshore variable renewable energy: an application to Great Britain",2021-05-20 16:35:13,"R. McKenna, I. Mulalic, I. Soutar, J. M. Weinand, J. Price, S. Petrovic, K. Mainzer","http://arxiv.org/abs/2105.09736v1, http://arxiv.org/pdf/2105.09736v1",econ.GN
32128,gn,"Calcium (Ca) requirement increases tenfold upon parturition in dairy cows &
buffaloes and its deficiency leads to a condition called milk fever (MF).
Estimation of losses is necessary to understand the depth of the problem and
design preventive measures. How much is the economic loss due to MF? What will
be the efficiency gain if MF is prevented with the advent of a preventive technology? We
answer these questions using survey data and official statistics, employing an
economic surplus model. MF incidence in sample buffaloes and cows was 19% and
28%, respectively. Total economic losses were calculated as a sum total of
losses from milk production, mortality of animals and treatment costs. Yearly
economic loss due to MF was estimated to be INR 1000 crores (US$ 137 million)
in Haryana. Value of milk lost had the highest share in total economic losses
(58%), followed by losses due to mortality (29%) and treatment costs (13%).
Despite lower MF incidence, losses were higher in buffaloes due to higher milk
prices and market value of animals. The efficiency gain accruing to producers
if MF is prevented, resulting from increased milk production at decreased costs
was estimated at INR 10990 crores (US$ 1.5 billion). As the potential gain if
prevented is around 10 times the economic losses, this study calls for the use
of preventive technology against MF.","Estimation of economic losses due to milk fever and efficiency gains if prevented: evidence from Haryana, India",2021-05-20 17:31:01,"A. G. A. Cariappa, B. S. Chandel, G. Sankhala, V. Mani, R. Sendhil, A. K. Dixit, B. S. Meena","http://dx.doi.org/10.2139/ssrn.3851567, http://arxiv.org/abs/2105.09782v2, http://arxiv.org/pdf/2105.09782v2",econ.GN
32129,gn,"We develop a novel fixed-k tail regression method that accommodates the
unique feature in the Forbes 400 data that observations are truncated from
below at the 400th largest order statistic. Applying this method, we find that
higher maximum marginal income tax rates induce higher wealth Pareto exponents.
Setting the maximum tax rate to 30-40% (as in the U.S. currently) leads to a Pareto
exponent of 1.5-1.8, while counterfactually setting it to 80% (as suggested by
Piketty, 2014) would lead to a Pareto exponent of 2.6. We present a simple
economic model that explains these findings and discuss the welfare
implications of taxation.",Fixed-k Tail Regression: New Evidence on Tax and Wealth Inequality from Forbes 400,2021-05-20 22:48:54,"Ji Hyung Lee, Yuya Sasaki, Alexis Akira Toda, Yulong Wang","http://arxiv.org/abs/2105.10007v4, http://arxiv.org/pdf/2105.10007v4",econ.GN
32130,gn,"Both within the United States and worldwide, the city of Detroit has become
synonymous with economic decline, depopulation, and crime. Is Detroit's
situation unique, or can similar neighborhoods be found elsewhere? This study
examines Census block group data, as well as local crime statistics for 2014,
for a set of five Midwestern cities. Roughly three percent of Chicago's and
Milwaukee's block groups--all of which are in majority nonwhite areas--exceed
Detroit's median values for certain crimes, vacancies, and a poverty measure.
This figure rises to 11 percent for St. Louis, while Minneapolis has only a
single ""Detroit-like"" block group. Detroit's selected areas are more likely to
be similar to the entire city itself, both spatially and statistically, while
these types of neighborhoods form highly concentrated ""pockets"" of poverty
elsewhere. Development programs that are targeted in one city, therefore, must
take these differences into account and should be targeted to appropriate
neighborhoods.","Deprivation, Crime, and Abandonment: Do Other Midwestern Cities Have 'Little Detroits'?",2021-05-21 23:50:57,Scott W. Hegerty,"http://arxiv.org/abs/2105.10567v1, http://arxiv.org/pdf/2105.10567v1",econ.GN
32131,gn,"The energy trade is an important pillar of each country's development, making
up for the imbalance in the production and consumption of fossil fuels.
Geopolitical risks affect the energy trade of various countries to a certain
extent, but the causes of geopolitical risks are complex, and energy trade also
involves many aspects, so the impact of geopolitics on energy trade is also
complex. Based on the monthly data from 2000 to 2020 of 17 emerging economies,
this paper employs the fixed-effect model and the regression-discontinuity (RD)
model to verify the negative impact of geopolitics on energy trade first and
then analyze the mechanism and heterogeneity of the impact. The following
conclusions are drawn: First, geopolitics has a significant negative impact on
the import and export of the energy trade, and the inhibition on the export is
greater than that on the import. Second, the impact mechanism of geopolitics on
the energy trade is reflected in the lagging effect and mediating effect on the
imports and exports; that is, the negative impact of geopolitics on energy
trade continued to be significant 10 months later. Coal and crude oil prices,
as mediating variables, decreased to reduce the imports and exports, whereas
natural gas prices showed an increase. Third, the impact of geopolitics on
energy trade is heterogeneous in terms of national attribute characteristics
and geo-event types.",Does Geopolitics Have an Impact on Energy Trade? Empirical Research on Emerging Countries,2021-05-24 06:12:23,"Fen Li, Cunyi Yang, Zhenghui Li, Pierre Failler","http://dx.doi.org/10.3390/su13095199, http://arxiv.org/abs/2105.11077v1, http://arxiv.org/pdf/2105.11077v1",econ.GN
32132,gn,"The difficulty of balance between environment and energy consumption makes
countries and enterprises face a dilemma, and improving energy efficiency has
become one of the ways to solve this dilemma. Based on data of 158 countries
from 1980 to 2018, the dynamic TFP of different countries is calculated by
means of the Super-SBM-GML model. The TFP is decomposed into EC (Technical
Efficiency Change) and TC (Technological Change) indexes, and EC is further
decomposed into PEC (Pure Efficiency Change) and SEC (Scale Efficiency Change).
Then the fixed effect model and fixed effect panel quantile model are used to
analyze the moderating effect and exogenous effect of energy efficiency on
PM2.5 concentration on the basis of verifying that energy efficiency can reduce
PM2.5 concentration. We conclude, first, that global energy efficiency improved
continuously during the sample period, with improvements in both technological
progress and technical efficiency. Second, the impact of
energy efficiency on PM2.5 is heterogeneous which is reflected in the various
elements of energy efficiency decomposition. The increase of energy efficiency
can inhibit PM2.5 concentration and the inhibition effect mainly comes from TC
and PEC but SEC promotes PM2.5 emission. Third, energy investment plays a
moderating role in the environmental protection effect of energy efficiency.
Fourth, the impact of energy efficiency on PM2.5 concentration is heterogeneous
in terms of national attribute, which is embodied in the differences of
national development, science & technology development level, new energy
utilization ratio and the role of international energy trade.",Does energy efficiency affect ambient PM2.5? The moderating role of energy investment,2021-05-24 06:45:22,"Cunyi Yang, Tinghui Li, Khaldoon Albitar","http://dx.doi.org/10.3389/fenvs.2021.707751, http://arxiv.org/abs/2105.11080v1, http://arxiv.org/pdf/2105.11080v1",econ.GN
32133,gn,"Involving residential actors in the energy transition is crucial for its
success. Local energy generation, consumption and trading are identified as
desirable forms of involvement, especially in energy communities. The
potentials for energy communities in the residential building stock are high
but are largely untapped in multi-family buildings. In many countries, rapidly
evolving legal frameworks aim at overcoming related barriers, e.g. ownership
structures, principal-agent problems and system complexity. But academic
literature is scarce regarding the techno-economic and environmental
implications of such complex frameworks. This paper develops a mixed-integer
linear program (MILP) optimisation model for assessing the implementation of
multi-energy systems in an energy community in multi-family buildings with a
special distinction between investor and user. The model is applied to the
German Tenant Electricity Law. Based on hourly demands from appliances, heating
and electric vehicles, the optimal energy system layout and dispatch are
determined. The results contain a rich set of performance indicators that
demonstrate how the legal framework affects the technologies' interdependencies
and economic viability of multi-energy system energy communities. Certain
economic technology combinations may fail to support national emissions
mitigation goals and lead to lock-ins in Europe's largest residential building
stock. The subsidies do not lead to the utilisation of battery storage.
Despite this, self-sufficiency ratios of more than 90% are observable for
systems with combined heat and power plants and heat pumps. Public CO2
mitigation costs range between 147.5-272.8 EUR/tCO2. Finally, the results show
the strong influence of the heat demand on the system layout.",Optimal system design for energy communities in multi-family buildings: the case of the German Tenant Electricity Law,2021-05-24 13:44:24,"Fritz Braeuer, Max Kleinebrahm, Elias Naber, Fabian Scheller, Russell McKenna","http://arxiv.org/abs/2105.11195v1, http://arxiv.org/pdf/2105.11195v1",econ.GN
32134,gn,"Financial inclusion and inclusive growth are the buzzwords today. Inclusive
growth empowers people belonging to vulnerable sections. This in turn depends
upon a variety of factors, the most important being financial inclusion, which
plays a strategic role in promoting inclusive growth and helps in reducing
poverty by providing regular and reliable sources of finance to the vulnerable
sections. In this direction, the Government of India in its drive for financial
inclusion has taken several measures to increase access to and uptake of
formal financial services by unbanked households. The purpose of this paper is
to assess the nature and extent of financial inclusion and its impact on the
socio-economic status of households belonging to vulnerable sections focusing
on inclusive growth. This has been analyzed with the theoretical background on
financial access and economic growth, and by analyzing the primary data
collected from the Revenue Divisions of Karnataka. The results show that there
is a disparity in the nature and extent of financial inclusion. Access to and use
of formal banking services pave the way to positive, correlated changes in the
socio-economic status of households belonging to vulnerable sections, leading to
inclusive growth; based on this, the paper proposes a
model to make the financial system more inclusive and pro-poor.",Impact of Financial Inclusion on the Socio-Economic Status of Rural and Urban Households of Vulnerable Sections in Karnataka,2021-05-25 10:39:33,"Manohar Serrao, Aloysius Sequeira, K. V. M. Varambally","http://arxiv.org/abs/2105.11716v1, http://arxiv.org/pdf/2105.11716v1",econ.GN
32135,gn,"In this article, we are presenting the relationship between environmental
pollution and the income level of the selected twenty-four countries. We
implemented a data-based research analysis where, for each country, we analyzed
the related data for fifty-six years, from 1960 to 2016, to assess the
relationship between the carbon emission and income level. After performing the
related data analysis for each country, we concluded whether the results for
that country were in line with the Environmental Kuznets Curve (EKC)
hypothesis. The EKC hypothesis suggests that the carbon emission per capita
starts a declining trend when the country-specific high level of income is
reached. The results of our data analyses show that the EKC hypothesis is valid
for high-income countries and the declining trends of carbon emission are
clearly observed when the income level reaches a specific high enough level. On
the other hand, for the non-high income countries, our analysis results show
that it is too early to make an assessment at this growth stage of their
economies because they have not yet reached sufficiently high income per
capita levels. Furthermore, we performed two additional analyses on
high-income countries. First, we analyzed the related starting years of their
carbon emission declining trends. The large variance in these starting years
shows that international policies have been
clearly ineffective in initiating the declining trend in carbon emission. In
addition, for the high-income countries, we explained the differences in their
carbon emission per capita levels in 2014 with their SGI indices and their
dependence on high-carbon emission energy production.",Environmental Kuznets Curve & Effectiveness of International Policies: Evidence from Cross Country Carbon Emission Analysis,2021-05-25 11:52:36,"Elvan Ece Satici, Bayram Cakir","http://arxiv.org/abs/2105.11756v1, http://arxiv.org/pdf/2105.11756v1",econ.GN
32137,gn,"We analyze the impact of obtaining a residence permit on foreign workers'
labor market and residential attachment. To overcome the usually severe
selection issues, we exploit a unique migration lottery that randomly assigns
access to otherwise restricted residence permits in Liechtenstein (situated
between Austria and Switzerland). Using an instrumental variable approach, our
results show that lottery compliers (whose migration behavior complies with the
assignment in their first lottery) raise their employment probability in
Liechtenstein by on average 24 percentage points across outcome periods (2008
to 2018) as a result of receiving a permit. Relatedly, their activity level and
employment duration in Liechtenstein increase by on average 20 percentage
points and 1.15 years, respectively, over the outcome window. These substantial
and statistically significant effects are mainly driven by individuals not
(yet) working in Liechtenstein prior to the lottery rather than by previous
cross-border commuters. Moreover, we find both the labor market and residential
effects to be persistent even several years after the lottery with no sign of
fading out. These results suggest that granting resident permits to foreign
workers can be effective to foster labor supply even beyond the effect of
cross-border commuting from adjacent regions.",How residence permits affect the labor market attachment of foreign workers: Evidence from a migration lottery in Liechtenstein,2021-05-25 14:23:30,"Berno Buechel, Selina Gangl, Martin Huber","http://arxiv.org/abs/2105.11840v1, http://arxiv.org/pdf/2105.11840v1",econ.GN
32138,gn,"Agriculture is arguably the most climate-sensitive sector of the economy.
Growing concerns about anthropogenic climate change have increased research
interest in assessing its potential impact on the sector and in identifying
policies and adaptation strategies to help the sector cope with a changing
climate. This chapter provides an overview of recent advancements in the
analysis of climate change impacts and adaptation in agriculture with an
emphasis on methods. The chapter provides an overview of recent research
efforts addressing key conceptual and empirical challenges. The chapter also
discusses practical matters about conducting research in this area and provides
reproducible R code to perform common tasks of data preparation and model
estimation in this literature. The chapter provides a hands-on introduction to
new researchers in this area.","Climate, Agriculture and Food",2021-05-25 19:22:28,Ariel Ortiz-Bobea,"http://arxiv.org/abs/2105.12044v1, http://arxiv.org/pdf/2105.12044v1",econ.GN
32139,gn,"Internet survey experiment is conducted to examine how providing peer
information of evaluation about progressive firms changed individual's
evaluations. Using large sample including over 13,000 observations collected by
two-step experimental surveys, I found; (1) provision of the information leads
individuals to expect higher probability of rising of stocks and be more
willing to buy it. (2) the effect on willingness to buy is larger than the
expected probability of stock price rising, (3) The effect for woman is larger
than for man. (4) individuals who prefer environment (woman's empowerment)
become more willing to buy stock of pro-environment (gender-balanced) firms
than others if they have the information. (5) The effect of the peer
information is larger for individuals with ""warm -glow"" motivation.",The Effect of Providing Peer Information on Evaluation for Gender Equalized and ESG Oriented Firms: An Internet Survey Experiment,2021-05-26 04:34:43,Eiji Yamamura,"http://arxiv.org/abs/2105.12292v1, http://arxiv.org/pdf/2105.12292v1",econ.GN
32140,gn,"Limited memory of decision-makers is often neglected in economic models,
although it is reasonable to assume that it significantly influences the
models' outcomes. The hidden-action model introduced by Holmström also
includes this assumption. In delegation relationships between a principal and
an agent, this model provides the optimal sharing rule for the outcome that
optimizes both parties' utilities. This paper introduces an agent-based model
of the hidden-action problem that includes limitations in the cognitive
capacity of contracting parties. Our analysis mainly focuses on the sensitivity
of the principal's and the agent's utilities to the relaxed assumptions. The
results indicate that the agent's utility drops with limitations in the
principal's cognitive capacity. Also, we find that the agent's cognitive
capacity limitations affect neither his nor the principal's utility. Thus, the
agent bears all adverse effects resulting from limitations in cognitive
capacity.",Effects of limited and heterogeneous memory in hidden-action situations,2021-05-26 13:55:28,"Patrick Reinwald, Stephan Leitner, Friederike Wall","http://arxiv.org/abs/2105.12469v1, http://arxiv.org/pdf/2105.12469v1",econ.GN
32141,gn,"This paper aims to identify the robust determinants of corruption after
integrating out the effects of spatial spillovers in corruption levels between
countries. In other words, we want to specify which variables play the most
critical role in determining the corruption levels after accounting for the
effects that neighbouring countries have on each other. We collected the annual
data of 115 countries over the 1985-2015 period and used the averaged values to
conduct our empirical analysis. Among 39 predictors of corruption, our spatial
BMA models identify Rule of Law as the most persistent determinant of
corruption.","Corruption Determinants, Geography, and Model Uncertainty",2021-05-27 02:39:56,Sajad Rahimian,"http://arxiv.org/abs/2105.12878v1, http://arxiv.org/pdf/2105.12878v1",econ.GN
32142,gn,"Growing interdependencies between organizations lead them towards the
creation of inter-organizational networks where cybersecurity and sustainable
development have become among the most important issues. The Environmental
Goods and Services Sector (EGSS) is one of the fastest developing sectors of
the economy fueled by the growing relationships between network entities based
on ICT usage. In this sector, Green Cybersecurity is an emerging issue because
it secures processes related directly and indirectly to environmental
management and protection. In the future, the multidimensional development of
the EGSS can help the European Union to overcome upcoming crises. At the same
time, computer technologies and cybersecurity can contribute to the
implementation of the concept of sustainable development. The development of
environmental technologies along with their cybersecurity is one of the aims of
the realization of sustainable production and domestic security concepts among
the EU countries. Hence, the aim of this article is a theoretical discussion
and research on the relationships between cybersecurity and sustainable
development in inter-organizational networks. Therefore, the article attempts
to answer the question of the current state of the
implementation of cybersecurity in the EGSS part of the economy across
different EU countries.",Cybersecurity and Sustainable Development,2021-05-28 10:58:46,"Adam Sulich, Malgorzata Rutkowska, Agnieszka Krawczyk-Jezierska, Jaroslaw Jezierski, Tomasz Zema","http://dx.doi.org/10.13140/RG.2.2.16633.60001, http://arxiv.org/abs/2105.13652v1, http://arxiv.org/pdf/2105.13652v1",econ.GN
32143,gn,"The study investigates the relationship between bank profitability and a
comprehensive list of bank specific, industry specific and macroeconomic
variables using unique panel data from 23 Bangladeshi banks with large market
shares from 2005 to 2019, employing the Pooled Ordinary Least Squares (POLS)
method for regression estimation. The random effects model has been used to
check for robustness. Three variables, namely, Return on Asset (ROA), Return on
Equity (ROE), and Net Interest Margin (NIM), have been used as profitability
proxies. Non-interest income, capital ratio, and GDP growth have been found to
have a significant relationship with ROA. In addition to non-interest income,
market share, bank size, and real exchange rates are significant explaining
variables if profitability is measured as NIM. The only significant determinant
of profitability measured by ROE is market share. The primary contribution of
this study to the existing knowledge base is an extensive empirical analysis by
covering the entire gamut of independent variables (bank specific, industry
related, and macroeconomic) to explain the profitability of the banks in
Bangladesh. It also covers an extensive and recent data set. Banking sector
stakeholders may find great value from the outputs of this paper. Regulators
and policymakers may find this useful in undertaking analyses in setting policy
rates, banking industry stability, and impact assessment of critical policy
measures before and after the enactment, etc. Investors and bank management
can use the findings of this paper in analyzing the real drivers of
profitability of the banks they are contemplating investing in and managing on a
daily basis.",Comprehensive Analysis On Determinants Of Bank Profitability In Bangladesh,2021-05-29 06:35:42,"Md Saimum Hossain, Faruque Ahamed","http://arxiv.org/abs/2105.14198v2, http://arxiv.org/pdf/2105.14198v2",econ.GN
32144,gn,"This paper aims to study the impact of public and private investments on the
economic growth of developing countries. The study uses panel data on 39
developing countries covering the period 1990-2019. The study is based on
neoclassical growth models, or exogenous growth models, in which land,
labor, capital accumulation, etc., and technology are substantial for
economic growth. The paper finds that public investment has a stronger positive
impact on economic growth than private investment. Gross capital formation,
labor growth, and government final consumption expenditure were found
significant in explaining the economic growth. Overall, both public and private
investments are substantial for the economic growth and development of
developing countries.",Impact of Public and Private Investments on Economic Growth of Developing Countries,2021-05-29 06:40:26,Faruque Ahamed,"http://arxiv.org/abs/2105.14199v1, http://arxiv.org/pdf/2105.14199v1",econ.GN
32145,gn,"This paper revisits the discussion on determinants of budget balances and
investigates the change in their effect in light of the COVID-19 crisis by
utilizing data on 43 countries and a system generalized method of moments
approach. The results show that the overall impact of the global pandemic led
to a disproportionate increase in the magnitude of the estimated effects of the
macroeconomic determinants on the budget balance. However, we also find that
more developed economies were able to undertake larger stimulus packages for roughly the same level of primary balance. We believe that one of the factors behind this outcome is that they exhibit a higher government
debt position in domestic currency denomination.",Determinants of budget deficits: Focus on the effects from the COVID-19 crisis,2021-05-31 16:45:04,"Dragan Tevdovski, Petar Jolakoski, Viktor Stojkoski","http://arxiv.org/abs/2105.14959v1, http://arxiv.org/pdf/2105.14959v1",econ.GN
32146,gn,"Since the beginning of the 1990s, Brazil has introduced different policies
for increasing agricultural production among family farms, such as the National
Program for Strengthening Family Farming (Pronaf), the technical assistance and
rural extension programmes (ATER), and seeds distribution. Despite the
importance of these policies for the development of family farming, there is a
lack of empirical studies investigating their impact on commercialization of
food products. By considering household-level data from the 2014 Brazilian
National Household Sample Survey, we use propensity score matching techniques
accounting for the interaction effects between policies to compare the
commercialisation behaviour of recipients with non-recipients. We find that Pronaf has a significant positive impact on family farmers' propensity to engage in commercialisation, and this effect increases if farmers also have access to
ATER. Receiving technical assistance alone has a positive effect, but this is
mostly limited to smaller farms. In turn, seed distribution appears not to
increase commercialization significantly. A well balanced policy mix could
ensure that, in a country subject to the pressure of international food
markets, increased commercialisation does not result in reduced food security
for rural dwellers.",Assessing Brazilian agri-food policies: what impact on family farms?,2021-05-31 17:30:58,"Valdemar J. Wesz Junior, Simone Piras, Catia Grisa, Stefano Ghinoi","http://arxiv.org/abs/2105.14996v1, http://arxiv.org/pdf/2105.14996v1",econ.GN
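For readers unfamiliar with the matching step, the following is a minimal sketch of propensity score matching of the kind described above, on simulated data rather than the Brazilian household survey; the covariates, treatment indicator and outcome are all hypothetical, and the paper's interaction effects between policies are not modelled here.

```python
# Illustrative sketch only (simulated data): propensity score matching of
# programme recipients to non-recipients, one-to-one nearest neighbour on the
# estimated score, followed by a simple ATT on a commercialisation outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                           # household covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # stand-in for Pronaf receipt
y = 0.5 * treat + 0.2 * X[:, 1] + rng.normal(size=n)  # commercialisation outcome

# 1. Estimate propensity scores.
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

# 2. Match each treated unit to the nearest control on the propensity score.
treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, match_idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = control[match_idx.ravel()]

# 3. Average treatment effect on the treated (ATT).
att = y[treated].mean() - y[matched_controls].mean()
print(f"ATT estimate: {att:.3f}")
```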
32147,gn,"We benchmark a multi-dimensional child nutrition intervention against an
unconditional cash transfer of equal cost. Randomized variation in transfer
amounts allows us to estimate impacts of cash transfers at expenditure levels
equivalent to the in-kind program, as well as to estimate the return to
increasing cash transfer values. While neither the in-kind program nor a
cost-equivalent transfer costing \$124 per household moves core child outcomes
within a year, cash transfers create significantly greater consumption than the
in-kind alternative. A larger cash transfer costing \$517 substantially
improves consumption and investment outcomes and drives modest improvements in
dietary diversity and child growth.",Cash versus Kind: Benchmarking a Child Nutrition Program against Unconditional Cash Transfers in Rwanda,2021-06-01 07:03:15,"Craig McIntosh, Andrew Zeitlin","http://arxiv.org/abs/2106.00213v1, http://arxiv.org/pdf/2106.00213v1",econ.GN
32148,gn,"In this paper, we analyze the effect of transport infrastructure investments
in railways. As a testing ground, we use data from a new historical database
that includes annual panel data on approximately 2,400 Swedish rural
geographical areas during the period 1860-1917. We use a staggered event study
design that is robust to treatment effect heterogeneity. Importantly, we find
extremely large reduced-form effects of having access to railways. For real
nonagricultural income, the cumulative treatment effect is approximately 120%
after 30 years. Equally important, we also show that our reduced-form effect is
likely to reflect growth rather than a reorganization of existing economic
activity since we find no spillover effects between treated and untreated
regions. Specifically, our results are consistent with the big push hypothesis,
which argues that simultaneous/coordinated investment, such as large
infrastructure investment in railways, can generate economic growth if there
are strong aggregate demand externalities (e.g., Murphy et al. 1989). We use
plant-level data to further corroborate this mechanism. Indeed, we find that
investments in local railways dramatically, and independently of initial
conditions, increase local industrial production and employment on the order of
100-300% across almost all industrial sectors.",The Causal Effect of Transport Infrastructure: Evidence from a New Historical Database,2021-06-01 12:48:01,"Lindgren Erik, Per Pettersson-Lidbom, Bjorn Tyrefors","http://arxiv.org/abs/2106.00348v2, http://arxiv.org/pdf/2106.00348v2",econ.GN
32149,gn,"In this paper, we estimate the causal effect of political power on the
provision of public education. We use data from a historical nondemocratic
society with a weighted voting system where eligible voters received votes in
proportion to their taxable income and without any limit on the maximum of
votes, i.e., the political system used in Swedish local governments during the
period 1862-1909. We use a novel identification strategy that combines a threshold regression analysis with a generalized event-study design, both of which exploit nonlinearities or
discontinuities in the effect of political power between two opposing local
elites: agricultural landowners and emerging industrialists. The results
suggest that school spending is approximately 90-120% higher if the nonagrarian
interest controls all of the votes compared to when landowners have more than a
majority of votes. Moreover, we find no evidence that the concentration of
landownership affected this relationship",The causal effect of political power on the provision of public education: Evidence from a weighted voting system,2021-06-01 12:55:29,"Lindgren Erik, Per Pettersson-Lidbom, Bjorn Tyrefors","http://arxiv.org/abs/2106.00350v1, http://arxiv.org/pdf/2106.00350v1",econ.GN
32150,gn,"Green Management (GM) is now one of many methods proposed to achieve new,
more ecological, and sustainable economic models. The paper is focused on the
impact of the developing human population on the environment measured by
researched variables. Anthropopressure can have both a positive and a negative
dimension. This paper aims to present an econometric model of the Green
Industrial Revolution (GIR) impact on the Labour Market. The GIR is similar to
the Fourth Industrial Revolution (FIR) and takes place as the next stage in the
development of humanity in the perception of both machines and devices and the
natural environment. The processes of the GIR in the European Union can be
identified based on selected indicators of Sustainable Development (SD), in
particular with the use of indicators of the Green Economy (GE) using taxonomic
methods and regression analysis. The GM strives to implement the idea of the SD
in many areas and to transform the whole economy, and elements of this process are visible in the Green Labour Market (GLM). The adopted direction of economic development depends on the assumptions of strategic management, which can be defined, for example, through green management, mainly manifested in the
creation of green jobs.",The Green Management Towards a Green Industrial Revolution,2021-05-28 11:19:38,"Malgorzata Rutkowska, Adam Sulich","http://dx.doi.org/10.13140/RG.2.2.10997.50401, http://arxiv.org/abs/2106.00464v1, http://arxiv.org/pdf/2106.00464v1",econ.GN
32151,gn,"The graduates careers are the most spectacular and visible outcome of
excellent university education. This is also important for the university
performance assessment when its graduates can easily find jobs in the labor
market. The information about graduates matching their qualifications and
fields of studies versus undertaken employment, creates an important set of
data for future students and employers to be analyzed. Additionally, there is
business environment pressure to transform workplaces and whole organizations
towards a more green and sustainable form. Green Jobs (GJ) are the elements of
the whole economic transformation. This change is based on the green
qualifications and green careers which translate theoretical assumptions into
business language. Therefore, the choice of future career path is based on
specified criteria, which were examined by surveys performed among graduates by
the career office at Wroclaw University of Technology (WUT) in Poland. The aim
of this article was to address the question about the most significant criteria
of green career paths among graduates of WUT in 2019. Special attention was
paid to the GJ understood as green careers. In this article, the multi-criteria
Bellinger method is explained, presented, and then used to analyze chosen factors in graduates' choice of career paths; the results are then compared with Gale-Shapley algorithm results in a comparative analysis. Future research can develop a
graduate profile willing to be employed in GJ.",Decision Towards Green Careers and Sustainable Development,2021-05-28 11:07:30,"Adam Sulich, Malgorzata Rutkowska, Uma Shankar Singh","http://dx.doi.org/10.13140/RG.2.2.14326.73280/1, http://arxiv.org/abs/2106.00465v1, http://arxiv.org/pdf/2106.00465v1",econ.GN
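Since the abstract compares the Bellinger results with Gale-Shapley results, a compact sketch of the Gale-Shapley deferred-acceptance algorithm may help; the graduate and employer preference lists below are toy examples, not survey data, and complete preference lists over equal numbers of agents are assumed.

```python
# Minimal sketch of the Gale-Shapley deferred-acceptance algorithm with
# hypothetical graduates and employers (toy preference lists, equal sizes,
# complete rankings assumed).
def gale_shapley(proposer_prefs, receiver_prefs):
    """Proposer-optimal stable matching; prefs map name -> ordered list."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                             # receiver -> proposer
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:   # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)
    return {p: r for r, p in match.items()}

grads = {"Ann": ["GreenCo", "EcoLab"], "Bob": ["GreenCo", "EcoLab"]}
firms = {"GreenCo": ["Bob", "Ann"], "EcoLab": ["Ann", "Bob"]}
print(gale_shapley(grads, firms))   # Bob -> GreenCo, Ann -> EcoLab
```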
32152,gn,"The article reveals the main theoretical approaches to the analysis and study
of the phenomenon of corruption. Special attention is paid to the consideration
of the index approach to the analysis of corruption.",Theoretical and methodological approaches to the study of the problem of corruption,2021-06-03 15:19:11,"Valeri Lipunov, Vladislav Shirshikov, Jonathan Lewis","http://arxiv.org/abs/2106.01787v1, http://arxiv.org/pdf/2106.01787v1",econ.GN
32153,gn,"The study examines the effect of cooking fuel choice on educational outcomes
of adolescent children in rural India. Using multiple large-scale nationally
representative datasets, we observe household solid fuel usage to adversely
impact school attendance, years of schooling and age-appropriate grade
progression among children. This inference is robust to alternative ways of
measuring educational outcomes, other datasets, specifications and estimation
techniques. Importantly, the effect is found to be more pronounced for females
in comparison to the males highlighting the gendered nature of the impact. On
exploring possible pathways, we find that the direct time substitution on
account of solid fuel collection and preparation can explain the detrimental
educational outcomes that include learning outcomes as well, even though we are
unable to reject the health channel. In the light of the micro and macro level
vulnerabilities posed by the COVID-19 outbreak, the paper recommends
interventions that have the potential to fasten the household energy transition
towards clean fuel in the post-covid world.",Adding fuel to human capital: Exploring the educational effects of cooking fuel choice from rural India,2021-06-03 16:08:33,"Shreya Biswas, Upasak Das","http://arxiv.org/abs/2106.01815v1, http://arxiv.org/pdf/2106.01815v1",econ.GN
32154,gn,"Based on debt collection agency (PAIR Finance) data, we developed a novel
debtor typology framework by expanding previous approaches to 4 behavioral
dimensions. The 4 dimensions we identified were willingness to pay, ability to
pay, financial organization, and rational behavior. Using these dimensions,
debtors could be classified into 16 different typologies. We identified 5 main
typologies, which account for 63% of the debtors in our data set. Further, we
observed that each debtor typology reacted differently to the content and
timing of reminder messages, allowing us to define an optimal debt collection
strategy for each typology. For example, sending a reciprocity message at 8
p.m. is the most successful strategy for getting a reaction from a
debtor who is willing to pay their debt, able to pay their debt, chaotic in
terms of their financial organization, and emotional when communicating and
handling their finances. In sum, our findings suggest that each debtor type
should be approached in a personalized way using different tonalities and
timing schedules.",Personalized Communication Strategies: Towards A New Debtor Typology Framework,2021-06-03 18:59:00,"Minou Ghaffari, Maxime Kaniewicz, Stephan Stricker","http://arxiv.org/abs/2106.01952v1, http://arxiv.org/pdf/2106.01952v1",econ.GN
32155,gn,"We investigate a model of one-to-one matching with transferable utility and
general unobserved heterogeneity. Under a separability assumption that
generalizes Choo and Siow (2006), we first show that the equilibrium matching
maximizes a social gain function that trades off exploiting complementarities
in observable characteristics and matching on unobserved characteristics. We
use this result to derive simple closed-form formulae that identify the joint
matching surplus and the equilibrium utilities of all participants, given any
known distribution of unobserved heterogeneity. We provide efficient algorithms
to compute the stable matching and to estimate parametric versions of the
model. Finally, we revisit Choo and Siow's empirical application to illustrate
the potential of our more general approach.",Cupid's Invisible Hand: Social Surplus and Identification in Matching Models,2021-06-04 12:38:47,"Alfred Galichon, Bernard Salanié","http://arxiv.org/abs/2106.02371v2, http://arxiv.org/pdf/2106.02371v2",econ.GN
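In the logit special case of Choo and Siow (2006), the closed-form formulae referred to above reduce to mu_xy = sqrt(mu_x0 * mu_0y) * exp(Phi_xy / 2), which can be solved by an iterative proportional fitting procedure (IPFP). The sketch below is purely illustrative, with a hypothetical 2x2 surplus matrix and margins; it is not the paper's estimation code.

```python
# Illustrative IPFP sketch for the Choo-Siow logit case (toy surplus and
# margins): solve mu_xy = a_x * b_y * K_xy with K_xy = exp(Phi_xy / 2),
# a_x = sqrt(mu_x0), b_y = sqrt(mu_0y), subject to
# mu_x0 + sum_y mu_xy = n_x and mu_0y + sum_x mu_xy = m_y.
import numpy as np

Phi = np.array([[1.0, 0.2],
                [0.3, 0.8]])          # hypothetical joint surplus Phi_xy
n = np.array([1.0, 1.5])              # masses of men by type x
m = np.array([1.2, 1.3])              # masses of women by type y
K = np.exp(Phi / 2)

b = np.sqrt(m)                        # initial guess for b_y = sqrt(mu_0y)
for _ in range(10_000):
    s = K @ b                         # solve a_x^2 + s_x * a_x = n_x
    a = (-s + np.sqrt(s**2 + 4 * n)) / 2
    t = K.T @ a                       # solve b_y^2 + t_y * b_y = m_y
    b_new = (-t + np.sqrt(t**2 + 4 * m)) / 2
    if np.max(np.abs(b_new - b)) < 1e-13:
        b = b_new
        break
    b = b_new

mu = np.outer(a, b) * K               # matched masses mu_xy
mu_x0, mu_0y = a**2, b**2             # unmatched men and women
# Feasibility: matched plus singles recover the margins (up to tolerance).
assert np.allclose(mu.sum(axis=1) + mu_x0, n, atol=1e-8)
assert np.allclose(mu.sum(axis=0) + mu_0y, m, atol=1e-8)
print(mu)
```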
32156,gn,"Recent event of ousting Rohingyas from Rakhine State by the Tatmadaw provoked
worldwide public-and-academic interest in history and social evolution of the
Rohingyas, and this is to what the article is devoted. As the existing
literature presents a debate over Who are the Rohingyas?, and How legitimate is
their claim over Rakhine State?, the paper reinvestigates the issues using a
qualitative research method. Compiling a detailed history, the paper finds that
Rohingya community developed through historically complicated processes marked
by invasions and counter-invasions. The paper argues many people entered Bengal
from Arakan before the British brought people into Rakhine State. The Rohingyas
believe Rakhine State is their ancestral homeland and they developed a sense of
Ethnic Nationalism. Their right over Rakhine State is as significant as other
groups. The paper concludes that the UN must pursue a solution to the crisis and
the government should accept the Rohingyas as it did the land or territory.",The Rohingyas of Rakhine State: Social Evolution and History in the Light of Ethnic Nationalism,2021-06-05 22:00:12,"Sarwar J. Minar, Abdul Halim","http://dx.doi.org/10.30884/seh/2020.02.06, http://arxiv.org/abs/2106.02945v1, http://arxiv.org/pdf/2106.02945v1",econ.GN
32157,gn,"The intellectual property protection system constructed by China's Foreign
Investment Law has opened a new phase of rule of law protection of intellectual
property rights for foreign-invested enterprises, which is an important
institutional support indispensable for optimizing the business environment
under the rule of law. The development of the regime was influenced by the major
concerns of investors' home countries, the ""innovation-driven development""
strategy, and the trend towards a high level of stringent protection of
international intellectual property and investment rules. In addition, there is
a latent game of interests between multiple subjects, which can be analyzed by
constructing two standard formal game models according to legal game theory. The first game model compares and analyzes the gains and losses of China's and India's IPR protection systems for foreign-invested enterprises in attracting foreign investment. The second game model is designed to analyze the benefits of
China and foreign investors under their respective possible behaviors before
and after the inclusion of IPR protection provisions in the Foreign Investment
Law, with the optimal solution being a ""moderately cautious"" strategy for
foreign investors and a ""strict enforcement"" strategy for China.","The Intellectual Property Protection System of the Foreign Investment Law: Basic Structure, Motivation and Game Logic",2021-06-07 12:49:14,Luo Ying,"http://arxiv.org/abs/2106.03467v1, http://arxiv.org/pdf/2106.03467v1",econ.GN
32158,gn,"Calcium deficiency in high yielding bovines during calving causes milk fever
which leads to economic losses of around INR 1000 crores (USD 137 million) per
annum in Haryana, India. With increasing milk production, the risk of milk
fever is continuously rising. In the context, we aim to address the most
fundamental research question: What is the effect of a preventive health
product (anionic mineral mixture (AMM)) on milk fever incidence, milk
productivity, and farmers' income? In an effort to contribute to the scanty
economic literature on effect of preventive measures on nutritional deficiency
disorders in dairy animals, specifically, on AMM effects in India, this study
uses a randomized controlled design to obtain internally valid estimates.
Using data from 200 dairy farms, results indicate that milk fever incidence
decreases from 21 per cent at baseline to 2 per cent in treated animals at
follow-up. Further, AMM leads to a 12 per cent and 38 per cent increase in milk
yield and farmers' net income, respectively. Profits earned due to the prevention of milk fever [INR 16000 (USD 218.7)] outweigh the losses from
milk fever [INR 4000 (USD 54.7)]; thus, prevention using AMM is better than
cure.","Prevention Is Better Than Cure: Experimental Evidence From Milk Fever Incidence in Dairy Animals of Haryana, India",2021-06-07 17:12:53,"A. G. Adeeth Cariappa, B. S. Chandel, Gopal Sankhala, Veena Mani, Sendhil R, Anil Kumar Dixit, B. S. Meena","http://dx.doi.org/10.2139/ssrn.3851561, http://arxiv.org/abs/2106.03643v1, http://arxiv.org/pdf/2106.03643v1",econ.GN
32159,gn,"The article considers the practice of applying the cluster approach in Russia
as a tool for overcoming economic inequality between Russian regions. The
authors of the study analyze the legal framework of cluster policy in Russia,
noting that its successful implementation requires a little more time than
originally planned. Special attention is paid to the experience of
benchmarking.",The Russian practice of applying cluster approach in regional development,2021-06-08 13:35:59,"Victor Grebenik, Yuri Tarasenko, Dmitry Zerkin, Mattia Masolletti","http://arxiv.org/abs/2106.04239v1, http://arxiv.org/pdf/2106.04239v1",econ.GN
32160,gn,"Major theories of military innovation focus on relatively narrow
technological developments, such as nuclear weapons or aircraft carriers.
Arguably the most profound military implications of technological change,
however, come from more fundamental advances arising from general purpose
technologies, such as the steam engine, electricity, and the computer. With few
exceptions, political scientists have not theorized about GPTs. Drawing from
the economics literature on GPTs, we distill several propositions on how and
when GPTs affect military affairs. We call these effects general-purpose
military transformations. In particular, we argue that the impacts of GMTs on
military effectiveness are broad, delayed, and shaped by indirect productivity
spillovers. Additionally, GMTs differentially advantage those militaries that
can draw from a robust industrial base in the GPT. To illustrate the
explanatory value of our theory, we conduct a case study of the military
consequences of electricity, the prototypical GPT. Finally, we apply our
findings to artificial intelligence, which will plausibly cause a profound
general-purpose military transformation.","Engines of Power: Electricity, AI, and General-Purpose Military Transformations",2021-06-08 16:55:19,"Jeffrey Ding, Allan Dafoe","http://arxiv.org/abs/2106.04338v1, http://arxiv.org/pdf/2106.04338v1",econ.GN
32162,gn,"According to common understanding, in free completion of a private product,
market and price, the two main factors in the competition that leads to
economic efficiency, always exist together. This paper, however, points out the
phenomenon that in some free competitions the two factors are separated hence
causing inefficiency. For one type, the market exists whereas the price is
absent, i.e. free, for a product. An example of this type is the job
application market where the problem of over-application commonly exists,
costing recruiters much time in finding desired candidates from a massive pool of
applicants, resulting in inefficiency. To solve the problem, this paper
proposes a solution in which recruiters charge submission fees for applications to make the competition complete with both factors, hence
enhancing the efficiency. For the other type, the price exists whereas the
market is absent for a product. An example of this type is the real estate
agent market, where the price of the agents exists but the market, i.e. the
facility allowing the sellers' information to be efficiently discovered, is
largely absent, also causing inefficiency. In summary, the contribution of this
paper consists of two aspects: one is the discovery of the possible separation
of the two factors in free competitions; the other is, thanks to the discovery,
a solution to the over-application problem in the job market.",The separation of market and price in some free competitions and its related solution to the over-application problem in the job market,2021-06-10 05:44:50,Vincent Zha,"http://arxiv.org/abs/2106.05972v2, http://arxiv.org/pdf/2106.05972v2",econ.GN
32163,gn,"Conventional energy production based on fossil fuels causes emissions which
contribute to global warming. Accurate energy system models are required for a
cost-optimal transition to a zero-emission energy system, an endeavor that
requires an accurate modeling of cost reductions due to technological learning
effects. In this review, we summarize common methodologies for modeling
technological learning and associated cost reductions. The focus is on learning
effects in hydrogen production technologies due to their importance in a
low-carbon energy system, as well as the application of endogenous learning in
energy system models. Finally, we present an overview of the learning rates of
relevant low-carbon technologies required to model future energy systems.",Applying endogenous learning models in energy system optimization,2021-06-11 16:25:08,"Jabir Ali Ouassou, Julian Straus, Marte Fodstad, Gunhild Reigstad, Ove Wolfgang","http://dx.doi.org/10.3390/en14164819, http://arxiv.org/abs/2106.06373v1, http://arxiv.org/pdf/2106.06373v1",econ.GN
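One standard building block in such models is the single-factor experience curve, in which unit cost falls by a fixed learning rate with every doubling of cumulative capacity. The snippet below is a minimal sketch with hypothetical numbers (cost, capacity and learning rate are invented) and is not drawn from the review.

```python
# Single-factor experience (learning) curve sketch with hypothetical numbers:
# cost(X) = c0 * (X / X0) ** (-beta), where beta = -log2(1 - learning_rate),
# so each doubling of cumulative capacity X cuts unit cost by the learning rate.
import numpy as np

c0 = 1000.0            # current unit cost, e.g. EUR/kW (hypothetical)
X0 = 1.0               # current cumulative capacity, e.g. GW (hypothetical)
learning_rate = 0.18   # 18% cost reduction per doubling (hypothetical)
beta = -np.log2(1 - learning_rate)

def unit_cost(X):
    return c0 * (X / X0) ** (-beta)

for X in [1, 2, 4, 8, 16]:
    print(f"cumulative capacity {X:>4.0f} GW -> unit cost {unit_cost(X):7.1f} EUR/kW")
# Each doubling multiplies unit cost by (1 - learning_rate) = 0.82.
```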
32164,gn,"This review paper identifies the core evidence of research on employee
engagement , considering a stern challenge facing the financial sector
nowadays. The study highlights the noteworthy knowledge gaps that will support
human resource management practitioners to embed in the research towards
sectoral context. Pertinent articles were selected through key search points
and excerpt-related literature. The key search points covered the topic related
to different terms of engagement for example ""employee engagement"" OR ""work
engagement"" OR ""job engagement"" OR ""organization engagement"" OR ""staff
engagement"" OR ""personnel engagement"" which were steered in diverse context
particularly financial sector. Through critically reviewing the literature for
the last 11 years i.e., 2009-2019, we discovered 91 empirical studies in
financial sector. From these studies, we found the overall concept of
engagement and its different determinants (e.g., organizational factors,
individual factors, job factors) as well as its various outcomes (e.g.,
employee outcomes, organizational outcomes). We also formulated a conceptual
model to expand the body of knowledge in the area of employee engagement for a
better understanding of its predictors and outcomes. In addition, limitations of
the study and future recommendations are also contemplated.",Finding the Contextual Gap Towards Employee Engagement in Financial Sector: A Review Study,2021-06-11 17:48:29,"Habiba Akter, Ilham Sentosa, Sheikh Muhamad Hizam, Waqas Ahmed, Arifa Akter","http://dx.doi.org/10.6007/IJARBSS/v11-i5/9847, http://arxiv.org/abs/2106.06436v1, http://arxiv.org/pdf/2106.06436v1",econ.GN
32165,gn,"Travel speed is an intrinsic feature of transport, and enlarging the speed is
considered as beneficial. The benefit of a speed increase is generally assessed
as the value of the saved travel time. However, this approach conflicts with
the observation that time spent on travelling is rather constant and might not
be affected by speed changes. The paper aims to define the benefits of a speed
increase and addresses two research questions. First, how will a speed increase
in person transport work out, which factors are affected? Second, is the value
of time a good proxy for the value of speed? Based on studies on time spending
and research on the association between speed and land use, we argue that human
wealth could be the main factor affected by speed changes, rather than time or
access. Then the value of time is not a good proxy for the value of speed: the
benefits of a wealth increase are negatively correlated with prosperity
following the law of diminishing marginal utility, while the calculated
benefits of saved travel time prove to be positively correlated. The inadequacy
of the value of time is explained by some shortcomings with respect to the
willingness to pay that is generally used for assessing the value of time:
people do not correctly predict the personal benefits that will be gained from
a decision, and they neglect the social impacts.",The value of travel speed,2021-06-11 23:19:44,Cornelis Dirk van Goeverden,"http://arxiv.org/abs/2106.06599v1, http://arxiv.org/pdf/2106.06599v1",econ.GN
32166,gn,"Immigration to the United States is certainly not a new phenomenon, and it is
therefore natural for immigration, culture and identity to be given due
attention by the public and policy makers. However, current discussion of
immigration, legal and illegal, and the philosophical underpinnings is lost in
translation, not necessarily on ideological lines, but on political
orientation. In this paper we reexamine the philosophical underpinnings of the
melting pot versus multiculturalism as antecedents and precedents of current
immigration debate and how the core issues are lost in translation. We take a
brief look at immigrants and the economy to situate the current immigration
debate. We then discuss the two philosophical approaches to immigration and how
the understanding of the philosophical foundations can help streamline the
current immigration debate.",Re-examining the Philosophical Underpinnings of the Melting Pot vs. Multiculturalism in the Current Immigration Debate in the United States,2021-06-15 14:46:57,"Daniel Woldeab, Robert Yawson, Irina Woldeab","http://dx.doi.org/10.31124/advance.14749101.v1, http://arxiv.org/abs/2106.08066v1, http://arxiv.org/pdf/2106.08066v1",econ.GN
33176,gn,"We study how competitive forces may drive firms to inefficiently acquire
startup talent. In our model, two rival firms have the capacity to acquire and
integrate a startup operating in a possibly orthogonal market. We show that
firms may pursue such ""acquihires"" primarily as a preemptive strategy, even
when these transactions appear unprofitable in isolation. Thus, acquihires,
even absent traditional competition-reducing effects, need not be benign as
they can lead to inefficient talent allocation. Additionally, our analysis
underscores that such talent hoarding can diminish consumer surplus and
exacerbate job volatility for acquihired employees.",Startup Acquisitions: Acquihires and Talent Hoarding,2023-08-19 18:10:30,"Jean-Michel Benkert, Igor Letina, Shuo Liu","http://arxiv.org/abs/2308.10046v2, http://arxiv.org/pdf/2308.10046v2",econ.GN
32167,gn,"This paper examines the relationship between net FDI inflows and real GDP for
Turkey from 1970 to 2019. Although conventional economic growth theories and
most empirical research suggest that there is a bi-directional positive effect
between these macro variables, the results indicate that there is a
uni-directional significant short-run positive effect of real GDP on net FDI
inflows to Turkey by employing the Vector Error Correction Model, Granger
Causality, Impulse Response Functions and Variance Decomposition. Also, no long-run effect has been found. The findings recommend that Turkish authorities optimally benefit from the potential positive effect of net
incoming FDI on the real GDP by allocating it for the productive sectoral
establishments while effectively maintaining the country's real economic growth
to attract further FDI inflows.",The Relationship between Foreign Direct Investment and Economic Growth: A Case of Turkey,2021-06-15 16:47:44,Orhan Gokmen,"http://dx.doi.org/10.5539/ijef.v13n7p85, http://arxiv.org/abs/2106.08144v1, http://arxiv.org/pdf/2106.08144v1",econ.GN
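A minimal sketch of the kind of Granger-causality test and VECM estimation described above can be written with statsmodels; the series below are simulated stand-ins for real GDP and net FDI inflows, not the Turkish data, and the cointegration rank is simply assumed.

```python
# Illustrative only: Granger causality and a VECM on simulated series standing
# in for real GDP and net FDI inflows (50 annual observations); not the paper's
# data, and the cointegration rank below is assumed rather than tested.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(2)
T = 50
gdp = np.cumsum(rng.normal(0.04, 0.02, T))            # log real GDP (toy series)
fdi = 0.5 * np.roll(gdp, 1) + rng.normal(0, 0.05, T)  # FDI responding to lagged GDP
fdi[0] = 0.0
data = pd.DataFrame({"fdi": fdi, "gdp": gdp})

# Does GDP (second column) Granger-cause FDI (first column)?
grangercausalitytests(data[["fdi", "gdp"]], maxlag=2)

# Vector error correction model with one assumed cointegrating relation.
vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(vecm_res.summary())
```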
32168,gn,"We analyse the distribution and the flows between different types of
employment (self-employment, temporary, and permanent), unemployment,
education, and other types of inactivity, with particular focus on the duration
of the school-to-work transition (STWT). The aim is to assess the impact of the
COVID-19 pandemic in Italy on the careers of individuals aged 15-34. We find
that the pandemic worsened an already concerning situation of higher
unemployment and inactivity rates and significantly longer STWT duration
compared to other EU countries, particularly for females and residents in the
South of Italy. In the midst of the pandemic, individuals aged 20-29 were less
in (permanent and temporary) employment and more in the NLFET (Neither in the
Labour Force nor in Education or Training) state, particularly females and non
Italian citizens. We also provide evidence of an increased propensity to return
to schooling, but most importantly of a substantial prolongation of the STWT
duration towards permanent employment, mostly for males and non Italian
citizens. Our contribution lies in providing a rigorous estimation and analysis
of the impact of COVID-19 on the careers of young individuals in Italy, which
has not yet been explored in the literature.",Young people between education and the labour market during the COVID-19 pandemic in Italy,2021-06-15 20:13:06,"Davide Fiaschi, Cristina Tealdi","http://dx.doi.org/10.1108/IJM-06-2021-0352, http://arxiv.org/abs/2106.08296v1, http://arxiv.org/pdf/2106.08296v1",econ.GN
32169,gn,"This paper studies Bayesian games with general action spaces, correlated
types and interdependent payoffs. We introduce the condition of ``decomposable
coarser payoff-relevant information'', and show that this condition is both
sufficient and necessary for the existence of pure-strategy equilibria and
purification from behavioral strategies. As a consequence of our purification
method, a new existence result on pure-strategy equilibria is also obtained for
discontinuous Bayesian games. Illustrative applications of our results to
oligopolistic competitions and all-pay auctions are provided.",Characterization of equilibrium existence and purification in general Bayesian games,2021-06-16 08:58:52,"Wei He, Xiang Sun, Yeneng Sun, Yishu Zeng","http://arxiv.org/abs/2106.08563v1, http://arxiv.org/pdf/2106.08563v1",econ.GN
32170,gn,"Over the past 70 years, the number of international environmental agreements
(IEAs) has increased substantially, highlighting their prominent role in
environmental governance. This paper applies the toolkit of network analysis to
identify the network properties of international environmental cooperation
based on 546 IEAs signed between 1948 and 2015. We identify four stylised facts
that offer topological corroboration for some key themes in the IEA literature.
First, we find that a statistically significant cooperation network did not
emerge until early 1970, but since then the network has grown continuously in
strength, resulting in higher connectivity and intensity of cooperation between
signatory countries. Second, over time the network has become closer, denser
and more cohesive, allowing more effective policy coordination and knowledge
diffusion. Third, the network, while global, has a noticeable European imprint:
initially the United Kingdom and more recently France and Germany have been the
most strategic players to broker environmental cooperation. Fourth,
international environmental coordination started with the management of
fisheries and the sea, but is now most intense on waste and hazardous
substances. The network of air and atmosphere treaties is weaker on a number of
metrics and lacks the hierarchical structure found in other networks. It is the
only network whose topological properties are shaped significantly by
UN-sponsored treaties.",What does Network Analysis teach us about International Environmental Cooperation?,2021-06-09 08:32:36,"Stefano Carattini, Sam Fankhauser, Jianjian Gao, Caterina Gennaioli, Pietro Panzarasa","http://arxiv.org/abs/2106.08883v1, http://arxiv.org/pdf/2106.08883v1",econ.GN
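To make the network metrics concrete, the toy sketch below computes density, clustering and a brokerage measure on an invented country-level cooperation graph; the edge list is hypothetical and does not come from the IEA database.

```python
# Toy sketch (hypothetical co-signature counts, not the IEA data): build a
# weighted country network and compute density, clustering and brokerage.
import networkx as nx

# Edges: (country, country, number of jointly signed agreements) -- invented.
edges = [
    ("UK", "France", 12), ("UK", "Germany", 10), ("France", "Germany", 9),
    ("UK", "Norway", 6), ("France", "Spain", 5), ("Germany", "Poland", 4),
    ("Norway", "Sweden", 7), ("Sweden", "Poland", 2),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

print("density:", nx.density(G))
print("average clustering:", nx.average_clustering(G))
# Unweighted betweenness as a simple brokerage proxy (weighted variants treat
# weights as distances, so an inverse transform would be needed there).
print("betweenness:", nx.betweenness_centrality(G))
# Strength (weighted degree) captures intensity of cooperation per country.
print("strength:", dict(G.degree(weight="weight")))
```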
32171,gn,"Measuring the number of ""likes"" in Twitter and the number of bills voted in
favor by the members of the Chilean Chambers of Deputies. We empirically study
how signals of agreement in Twitter translates into cross-cutting voting during
a high political polarization period of time. Our empirical analysis is guided
by a spatial voting model that can help us to understand Twitter as a market of
signals. Our model, which is standard for the public choice literature,
introduces authenticity, an intrinsic factor that distorts politicians' willingness to agree (Trilling, 2009). As our main contribution, we document
empirical evidence that ""likes"" between opponents are positively related to the
number of bills voted by the same pair of politicians in Congress, even when we
control by politicians' time-invariant characteristics, coalition affiliation
and following links on Twitter. Our results shed light on several contingent
topics, such as polarization and disagreement within the public sphere.",Politicians' Willingness to Agree: Evidence from the interactions in Twitter of Chilean Deputies,2021-06-17 01:29:09,"Pablo Henríquez, Jorge Sabat, José Patrìcio Sullivan","http://arxiv.org/abs/2106.09163v2, http://arxiv.org/pdf/2106.09163v2",econ.GN
32172,gn,"The article analyzes the essence of the phenomenon of corruption, highlights
its main varieties and characteristics. The authors of the study apply
historical analysis, emphasizing the long-term nature of corruption and its
historical roots. The paper uses legal analysis to characterize the legal
interpretation of corruption as an economic crime.","The Concept, Types and Structure of Corruption",2021-06-17 16:48:09,"Oleg Antonov, Ekaterina Lineva","http://arxiv.org/abs/2106.09498v1, http://arxiv.org/pdf/2106.09498v1",econ.GN
32173,gn,"By investigating the exam scores of introductory economics in a business
school in Taiwan between 2008 and 2019, we find three sets of results: First,
we find no significant difference between genders in the exam scores. Second,
students' majors are significantly associated with their exam scores, which
likely reflects their academic ability measured at college admission. Third,
the exam scores are strong predictors of students' future academic performance.","Introductory Economics: Gender, Majors, and Future Performance",2021-06-18 15:33:16,"Natsuki Arai, Shian Chang, Biing-Shen Kuo","http://arxiv.org/abs/2106.10091v1, http://arxiv.org/pdf/2106.10091v1",econ.GN
32174,gn,"Active labor market programs are important instruments used by European
employment agencies to help the unemployed find work. Investigating large
administrative data on German long-term unemployed persons, we analyze the
effectiveness of three job search assistance and training programs using Causal
Machine Learning. Participants benefit from quickly realized and long-lasting
positive effects across all programs, with placement services being the most
effective. For women, we find differential effects in various characteristics.
In particular, women benefit from better local labor market conditions. We propose
more effective data-driven rules for allocating the unemployed to the
respective labor market programs that could be employed by decision-makers.",Active labour market policies for the long-term unemployed: New evidence from causal machine learning,2021-06-18 17:13:18,"Daniel Goller, Tamara Harrer, Michael Lechner, Joachim Wolff","http://arxiv.org/abs/2106.10141v2, http://arxiv.org/pdf/2106.10141v2",econ.GN
32175,gn,"In the race to achieve climate goals, many governments and organizations are
encouraging the local development of Renewable Energy Technology (RET). The
spatial innovation dynamics of the development of a technology partly depends
on the characteristics of the knowledge base on which this technology builds,
in particular the analyticity and cumulativeness of knowledge. Theoretically,
greater analyticity and lesser cumulativeness are positively associated with
more widespread development. In this study, we first empirically evaluate these
relations for general technology and then systematically determine the
knowledge base characteristics for a set of 14 different RETs. We find that,
while several RETs (photovoltaics, fuel cells, energy storage) have a highly analytic knowledge base and develop in a more widespread way, there are also important RETs (wind turbines, solar thermal, geothermal and hydro energy) for which the knowledge base is less analytic and which develop in a less widespread way. Likewise,
the technological cumulativeness tends to be lower for the former than for the
latter group. This calls for regional and country-level policies to be specific
for different RETs, taking into account, for a given RET, both the type of
knowledge it builds on as well as the local presence of this knowledge.",The Knowledge Mobility of Renewable Energy Technology,2021-06-19 14:38:26,"P. G. J. Persoon, R. N. A. Bekkers, F. Alkemade","http://arxiv.org/abs/2106.10474v2, http://arxiv.org/pdf/2106.10474v2",econ.GN
32176,gn,"Non-unitary household models suggest that enhancing women's bargaining power
can influence child health, a crucial determinant of human capital and economic
standing throughout adulthood. We examine the effects of a policy shift, the
Hindu Succession Act Amendment (HSAA), which granted inheritance rights to
unmarried women in India, on child health. Our findings indicate that the HSAA
improved children's height and weight. Furthermore, we uncover evidence
supporting a mechanism whereby the policy bolstered women's intra-household
bargaining power, resulting in downstream benefits through enhanced parental
care for children and improved child health. These results emphasize that
children fare better when mothers control a larger share of family resources.
Policies empowering women can yield additional positive externalities for
children's human capital.",Entitled to Property: How Breaking the Gender Barrier Improves Child Health in India,2021-06-21 07:05:11,"Md Shahadath Hossain, Plamen Nikolov","http://arxiv.org/abs/2106.10841v10, http://arxiv.org/pdf/2106.10841v10",econ.GN
32177,gn,"This paper examines the short- and long-run effects of U.S. federal personal
income and corporate income tax cuts on a wide array of economic policy
variables in a data-rich environment. Using a U.S. macroeconomic panel data set made up of 132 quarterly series for 1959-2018, the study estimates factor-augmented vector autoregression (FAVAR) models in which an extended narrative tax-changes dataset is combined with unobserved factors. The narrative approach classifies whether tax changes are exogenous or endogenous. This
paper identifies narrative tax shocks in the vector autoregression model using
the sign restrictions with Uhlig's (2005) penalty function. Empirical findings
show a significant expansionary effect of tax cuts on the macroeconomic
variables. Cuts in personal and corporate income taxes cause a rise in output,
investment, employment, and consumption; however, cuts in personal taxes appear
to be a more effective fiscal policy tool than the cut in corporate income
taxes. Real GDP, employment, investment, and industrial production increase
significantly and reach their maximum response values two years after personal
income tax cuts. The effects of corporate tax cuts have relatively smaller
effects on output and consumption but show immediate and higher effects on
fixed investment and price levels.","Output, Employment, and Price Effects of U.S. Narrative Tax Changes: A Factor-Augmented Vector Autoregression Approach",2021-06-21 07:16:25,Masud Alam,"http://arxiv.org/abs/2106.10844v1, http://arxiv.org/pdf/2106.10844v1",econ.GN
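As a heavily simplified illustration of the FAVAR idea (factors extracted from a large panel, then placed in a VAR alongside an observed policy series), the sketch below uses simulated series; it omits the narrative classification and the sign-restriction identification with Uhlig's penalty function entirely.

```python
# Simplified FAVAR-style sketch: extract principal-component factors from a
# simulated macro panel and put them in a VAR with a stand-in tax-shock series.
# Illustrative only; the identification step via sign restrictions is omitted.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T, N, K = 240, 132, 3                        # quarters, series, number of factors
common = rng.normal(size=(T, K))             # latent common components (toy)
panel = common @ rng.normal(size=(K, N)) + rng.normal(scale=0.5, size=(T, N))
tax_shock = rng.normal(size=T)               # stand-in for narrative tax changes

# Standardize the panel and extract factors by principal components.
z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
factors = PCA(n_components=K).fit_transform(z)

df = pd.DataFrame(np.column_stack([tax_shock, factors]),
                  columns=["tax_shock", "f1", "f2", "f3"])
res = VAR(df).fit(maxlags=4, ic="aic")
irf = res.irf(12)                            # impulse responses over 12 quarters
print(irf.irfs.shape)                        # horizon x variables x shocks
```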
32178,gn,"During the COVID-19 epidemic, many health professionals started using mass
communication on social media to relay critical information and persuade
individuals to adopt preventative health behaviors. Our group of clinicians and
nurses developed and recorded short video messages to encourage viewers to stay
home for the Thanksgiving and Christmas Holidays. We then conducted a two-stage
clustered randomized controlled trial in 820 counties (covering 13 States) in
the United States of a large-scale Facebook ad campaign disseminating these
messages. In the first level of randomization, we randomly divided the counties
into two groups: high intensity and low intensity. In the second level, we
randomly assigned zip codes to either treatment or control such that 75% of zip
codes in high intensity counties received the treatment, while 25% of zip codes
in low intensity counties received the treatment. In each treated zip code, we
sent the ad to as many Facebook subscribers as possible (11,954,109 users
received at least one ad at Thanksgiving and 23,302,290 users received at least
one ad at Christmas). The first primary outcome was aggregate holiday travel,
measured using mobile phone location data, available at the county level: we
find that average distance travelled in high-intensity counties decreased by
-0.993 percentage points (95% CI -1.616, -0.371, p-value 0.002) the three days
before each holiday. The second primary outcome was COVID-19 infection at the
zip-code level: COVID-19 infections recorded in the two-week period starting
five days post-holiday declined by 3.5 percent (adjusted 95% CI [-6.2 percent,
-0.7 percent], p-value 0.013) in intervention zip codes compared to control zip
codes.",Doctors and Nurses Social Media Ads Reduced Holiday Travel and COVID-19 infections: A cluster randomized controlled trial in 13 States,2021-06-21 15:09:08,"Emily Breza, Fatima Cody Stanford, Marcela Alsan, M. D. Ph. D., Burak Alsan, Abhijit Banerjee, Arun G. Chandrasekhar, Sarah Eichmeyer, Traci Glushko, Paul Goldsmith-Pinkham, Kelly Holland, Emily Hoppe, Mohit Karnani, Sarah Liegl, Tristan Loisel, Lucy Ogbu-Nwobodo, Benjamin A. Olken Carlos Torres, Pierre-Luc Vautrey, Erica Warner, Susan Wootton, Esther Duflo","http://arxiv.org/abs/2106.11012v1, http://arxiv.org/pdf/2106.11012v1",econ.GN
32179,gn,"We analyze a series of trials that randomly assigned Wikipedia users in
Germany to different web banners soliciting donations. The trials varied
framing or content of social information about how many other users are
donating. Framing a given number of donors in a negative way increased donation
rates. Variations in the communicated social information had no detectable
effects. The findings are consistent with the results from a survey experiment.
In line with donations being strategic substitutes, the survey documents that
the negative framing lowers beliefs about others' donations. Varying the social
information, in contrast, is ineffective in changing average beliefs.",Framing and Social Information Nudges at Wikipedia,2021-06-21 17:06:34,"Maximilian Linek, Christian Traxler","http://arxiv.org/abs/2106.11128v1, http://arxiv.org/pdf/2106.11128v1",econ.GN
32180,gn,"Widely discredited ideas nevertheless persist. Why do people fail to
``unlearn''? We study one explanation: beliefs are resistant to retractions
(the revoking of earlier information). Our experimental design identifies
unlearning -- i.e., updating from retractions -- and enables its comparison
with learning from equivalent new information. Across different kinds of
retractions -- for instance, those consistent or contradictory with the prior,
or those occurring when prior beliefs are either extreme or moderate --
subjects do not fully unlearn from retractions and update less from them than
from equivalent new information. This phenomenon is not explained by most of
the well-studied violations of Bayesian updating, which yield differing
predictions in our design. However, it is consistent with difficulties in
conditional reasoning, which have been documented in other domains and
circumstances.",Learning versus Unlearning: An Experiment on Retractions,2021-06-22 01:24:23,"Duarte Gonçalves, Jonathan Libgober, Jack Willis","http://arxiv.org/abs/2106.11433v2, http://arxiv.org/pdf/2106.11433v2",econ.GN
32181,gn,"A customized internet survey experiment is conducted in Japan to examine how
individuals' relative income position influences preferences for income
redistribution and individual perceptions regarding income tax burden. I first
asked respondents about their perceived income position in their country and
their preferences for redistribution and perceived tax burden. In the follow-up
survey for the treatment group, I provided information on their true income
position and asked the same questions as in the first survey. For the control
group, I did not provide their true income position and asked the same
questions. I gathered a large sample that comprised observations of the
treatment group (4,682) and the control group (2,268). The key findings suggest
that after being informed of individuals' real income position, (1) individuals
who thought their income position was higher than the true one perceived their
tax burden to be larger, (2) individuals' preference for redistribution hardly
changes, and (3) irreciprocal individuals perceive their tax burden to be
larger and are more likely to prefer redistribution. However, the share of
irreciprocal ones is small. This leads Japan to be a non-welfare state.",Information of income position and its impact on perceived tax burden and preference for redistribution: An Internet Survey Experiment,2021-06-22 07:26:22,Eiji Yamamura,"http://arxiv.org/abs/2106.11537v1, http://arxiv.org/pdf/2106.11537v1",econ.GN
32182,gn,"Democracy often fails to meet its ideals, and these failures may be made
worse by electoral institutions. Unwanted outcomes include polarized
institutions, unresponsive representatives, and the ability of a faction of
voters to gain power at the expense of the majority. Various reforms have been
proposed to address these problems, but their effectiveness is difficult to
predict against a backdrop of complex interactions. Here we outline a path for
systems-level modeling to help understand and optimize repairs to U.S.
democracy. Following the tradition of engineering and biology, models of
systems include mechanisms with dynamical properties that include
nonlinearities and amplification (voting rules), positive feedback mechanisms
(single-party control, gerrymandering), negative feedback (checks and
balances), integration over time (lifetime judicial appointments), and low
dimensionality (polarization). To illustrate a systems-level approach we
analyze three emergent phenomena: low dimensionality, elite polarization, and
anti-majoritarianism in legislatures. In each case, long-standing rules now
contribute to undesirable outcomes as a consequence of changes in the political
environment. Theoretical understanding at a general level will also help
evaluate whether a proposed reform's benefits will materialize and be lasting,
especially as conditions change again. In this way, rigorous modeling may not
only shape new lines of research, but aid in the design of effective and
lasting reform.",A systems framework for remedying dysfunction in U.S. democracy,2021-06-22 19:12:47,"Samuel S. -H. Wang, Jonathan Cervas, Bernard Grofman, Keena Lipsitz","http://dx.doi.org/10.1073/pnas.2102154118, http://arxiv.org/abs/2106.11901v2, http://arxiv.org/pdf/2106.11901v2",econ.GN
32183,gn,"This paper provides a general overview of different perspectives and studies
on trust, offers a definition of trust, and provides factors that play a
substantial role in developing social trust, and shows from which perspectives
it can be fostered. The results show that trust plays an important role
in success for organizations involved in cross-national strategic partnerships.
Trust can reduce transaction costs, promote inter-organizational
relationships, and improve relationships between managers and subordinates.","Relationship between Cultural Values, Sense of Community and Trust and the Effect of Trust in Workplace",2021-06-25 02:04:33,"Nazli Mohammad, Yvonne Stedham","http://arxiv.org/abs/2106.13347v1, http://arxiv.org/pdf/2106.13347v1",econ.GN
32184,gn,"The social and psychological concept of herding behavior provides a suitable
solution to give an understanding of the behavioral biases that often occur in
the capital market. The aim of this paper is to provide an overview of the
broader bibliometric literature on the term and concept of herding behavior.
Articles were collected with the help of software consisting of Publish or Perish (PoP), Google Scholar, Mendeley, and VOSViewer, following a systematic, explicit, and reproducible approach. In addition, the articles were
scanned by Scimagojr.com (Q1, Q2, Q3, and Q4), analyzing 83 articles of 261
related articles from reputable and non-reputable journals from 1996 to 2021.
Mendeley software was used to manage and summarize references. To review this database, classification was performed using the VOSviewer software. Four clusters were reviewed; the words that appear most often in each group concern the
type of stock market, the type of crisis, and the factors that cause herding.
Thus these four clusters became the main research themes on the topic of
herding in times of crisis. Meanwhile, methodology and strategy are the themes
for future research in the future.",Bibliometric Analysis Of Herding Behavior In Times Of Crisis,2021-06-09 19:18:43,"Fenny Marietza, Ridwan Nurazi, Fitri Santi, Saiful","http://arxiv.org/abs/2106.13598v1, http://arxiv.org/pdf/2106.13598v1",econ.GN
32185,gn,"We study the link between political influence and industrial concentration.
We present a joint model of political influence and market competition: an
oligopoly lobbies the government over regulation, and competes in the product
market shaped by this influence. We show broad conditions for mergers to
increase lobbying, both on the intensive margin and the extensive margin. We
combine data on mergers with data on lobbying expenditures and campaign
contributions in the US from 1999 to 2017. We document a positive association
between mergers and lobbying, both by individual firms and by industry trade
associations. Mergers are also associated with extensive margin changes such as
the formation of in-house lobbying teams and corporate PACs. We find some
evidence for a positive association between mergers and higher campaign
contributions.",Political Power and Market Power,2021-06-25 16:05:59,"Bo Cowgill, Andrea Prat, Tommaso Valletti","http://arxiv.org/abs/2106.13612v7, http://arxiv.org/pdf/2106.13612v7",econ.GN
32186,gn,"We study a fully funded, collective defined-contribution (DC) pension system
with multiple overlapping generations. We investigate whether the welfare of
participants can be improved by intergenerational risk sharing (IRS)
implemented with a realistic investment strategy (e.g., no borrowing) and
without an outside entity (e.g., shareholders) that helps finance the pension
fund. To implement IRS, the pension system uses an automatic adjustment rule
for the indexation of individual accounts, which adapts to the notional funding
ratio of the pension system. The pension system has two parameters that
determine the investment strategy and the strength of the adjustment rule,
which are optimized by expected utility maximization using Bayesian
optimization. The volatility of the retirement benefits and that of the funding
ratio are analyzed, and it is shown that the trade-off between them can be
controlled by the optimal adjustment parameter to attain IRS. Compared with the
optimal individual DC benchmark using the life-cycle strategy, the studied
pension system with IRS is shown to improve the welfare of risk-averse
participants, when the financial market is volatile.",Intergenerational risk sharing in a Defined Contribution pension system: analysis with Bayesian optimization,2021-06-25 16:55:10,"An Chen, Motonobu Kanagawa, Fangyuan Zhang","http://arxiv.org/abs/2106.13644v3, http://arxiv.org/pdf/2106.13644v3",econ.GN
32187,gn,"Sovereign wealth funds are created in those countries whose budget is highly
dependent on market factors, usually world commodity prices. At the same time,
these funds are large institutional investors. An analysis of the nature of
investments by the State Pension Fund Global of Norway showed that investments
of the Fund are based on a seven-level model of diversifying its investments.
This model can also be applied to the investments of the National Wealth Fund
of Russia to increase its profitability.",Sovereign wealth funds: main activity trends,2021-06-25 17:39:46,"Oksana Mamina, Alexander Barannikov, Ludmila Gruzdeva","http://arxiv.org/abs/2106.13670v1, http://arxiv.org/pdf/2106.13670v1",econ.GN
32188,gn,"We propose an empirical framework for asymmetric Cournot oligopoly with
private information about variable costs. First, considering a linear demand
for a homogenous product with a random intercept, we characterize the Bayesian
Cournot-Nash equilibrium. Then we establish the identification of the joint
distribution of demand and firm-specific cost distributions. Following the
identification steps, we propose a likelihood-based estimation method and apply
it to the global market for crude-oil and quantify the welfare effect of
private information. We also consider extensions of the model to include either
product differentiation, conduct parameters, nonlinear demand, or selective
entry.",Empirical Framework for Cournot Oligopoly with Private Information,2021-06-29 03:22:40,"Gaurab Aryal, Federico Zincenko","http://arxiv.org/abs/2106.15035v5, http://arxiv.org/pdf/2106.15035v5",econ.GN
32189,gn,"Onshore wind development has historically focused on cost-efficiency, which
may lead to inequitable turbine distributions and public resistance due to
landscape impacts. Using a multi-criteria planning approach, we show how
onshore wind capacity targets can be achieved by 2050 in a cost-efficient,
equitable and publicly acceptable way. For the case study of Germany, we build
on the existing turbine stock and use open data on technically feasible turbine
locations and scenicness of landscapes to plan the optimal expansion. The
analysis shows that while the trade-off between cost-efficiency and public
acceptance is rather weak with about 15% higher costs or scenicness, an
equitable distribution has a large impact on these criteria. Although the
onshore wind capacity per inhabitant could be distributed about 220% more
equitably through the expansion, equity would severely limit planning
flexibility by 2050. Our analysis assists stakeholders in resolving the onshore
wind expansion trilemma.","Exploring the trilemma of cost-efficient, equitable and publicly acceptable onshore wind expansion planning",2021-06-29 12:33:07,"Jann Michael Weinand, Russell McKenna, Heidi Heinrichs, Michael Roth, Detlef Stolten, Wolf Fichtner","http://arxiv.org/abs/2106.15198v1, http://arxiv.org/pdf/2106.15198v1",econ.GN
32190,gn,"Due to the unavailability of nationally representative data on time use, a
systematic analysis of the gender gap in unpaid household and care work has not
been undertaken in the context of India. The present paper, using the recent
Time Use Survey (2019) data, examines the socioeconomic and demographic factors
associated with variation in time spent on unpaid household and care work among
men and women. It analyses how much of the gender gap in the time allocated to
unpaid work can be explained by differences in these factors. The findings show
that women spend considerably more time than men on unpaid household and care
work. The decomposition results reveal that differences in socioeconomic and
demographic factors between men and women do not explain most of the gender gap
in unpaid household work. Our results indicate that unobserved gender norms and
practices most crucially govern the allocation of unpaid work within Indian
households.",What Explains Gender Gap in Unpaid Household and Care Work in India?,2021-06-29 16:05:17,"Athary Janiso, Prakash Kumar Shukla, Bheemeshwar Reddy A","http://arxiv.org/abs/2106.15376v2, http://arxiv.org/pdf/2106.15376v2",econ.GN
32191,gn,"Models on innovation, for the most part, do not include a comprehensive and
end-to-end view. Most innovation policy attention seems to be focused on the
capacity to innovate and on input factors such as R&D investment, scientific
institutions, human resources and capital. Such inputs frequently serve as
proxies for innovativeness and are correlated with intermediate outputs such as
patent counts and outcomes such as GDP per capita. While this kind of analysis
is generally indicative of innovative behaviour, it is less useful in terms of
discriminating causality and what drives successful strategy or public policy
interventions. This situation has led to the development of new frameworks for
the innovation system led by National Science and Technology Policy Centres
across the globe. These new models of innovation are variously referred to as
the National Innovation Ecosystem. There is, however, a fundamental question
that needs to be answered: what elements should an innovation policy include,
and how should such policies be implemented? This paper attempts to answer this
question.",The Ecological System of Innovation: A New Architectural Framework for a Functional Evidence-Based Platform for Science and Innovation Policy,2021-06-24 14:05:08,Robert M Yawson,"http://dx.doi.org/10.31124/advance.7367138.v1, http://arxiv.org/abs/2106.15479v1, http://arxiv.org/pdf/2106.15479v1",econ.GN
32192,gn,"The authors of the study conduct a legal analysis of the concept of energy
security. Energy is vital for sustainable development, and sustainability is
not only at the heart of development, but also economic, environmental, social
and military policies. To ensure the sustainability of the policy, 'security'
seems to be a mandatory goal to achieve. The article critically assesses the
change in the energy paradigm.","Energy security: key concepts, components, and change of the paradigm",2021-06-30 18:15:30,"Julia Edigareva, Tatiana Khimich, Oleg Antonov, Jesus Gonzalez","http://arxiv.org/abs/2106.16117v1, http://arxiv.org/pdf/2106.16117v1",econ.GN
32193,gn,"To explore the relationship between corporate green technological innovation
and the risk of stock price crashes, we first analyzed the data of listed
companies in China from 2008 to 2018 and constructed indicators for the
quantity and quality of corporate green technology innovation. The study found
that the quantity of green technology innovation is not related to the risk of
stock price crashes, while the quality of green technology innovation is
negatively related to the risk of stock price crashes. Second, we studied the
impact of corporate ownership on the relationship between the quality of green
technological innovation and the risk of stock price crashes and found that in
nonstate-owned enterprises, the quality of green technological innovation is
negatively correlated with the risk of a stock price collapse, while in
state-owned enterprises, the relationship between the quality of green
technological innovation and the risk of a stock price collapse is positive but
not significant. Furthermore,
we studied the mediating effect of the number of negative news reports in the
media of listed companies on the relationship between the quality of corporate
green technology innovation and the stock price crash and found that the
quality of green technology innovation is positively correlated with the number
of negative news reports in the media of listed companies, while the number of
negative news reports in the media of listed companies is positively correlated
with the risk of a stock price collapse. Finally, we conducted a DID regression
by using the impact of exogenous policy shocks on the quality of green
technology innovation, and the main results passed the robustness test.","""Stabilizer"" or ""catalyst""? How green technology innovation affects the risk of stock price crashes: an analysis based on the quantity and quality of patents",2021-06-30 19:12:19,"Ge-zhi Wu, Da-ming You","http://arxiv.org/abs/2106.16177v3, http://arxiv.org/pdf/2106.16177v3",econ.GN
32194,gn,"Recent trends in academics show an increase in enrollment levels in higher
education Predominantly in Doctoral programmes where individual scholars
institutes and supervisors play the key roles The human factor at receiving end
of academic excellence is the scholar having a supervisor at the facilitating
end In this paper I try to establish the role of different factors and
availability of information about them in forming the basic choice set in a
scholars mind After studying three different groups of individuals who were
subjected to substitutive choices we found that scholars prefer an
approachable, moderately intervening and frequently interacting professor as
their guide","Choice of a Mentor: A Subjective Evaluation of Expectations, Experiences and Feedbacks",2021-07-01 00:18:37,Kaibalyapati Mishra,"http://arxiv.org/abs/2107.00106v1, http://arxiv.org/pdf/2107.00106v1",econ.GN
32195,gn,"The authors of the article analyze the impact of the global COVID-19 pandemic
on the transport and logistics sector. The research is interdisciplinary in
nature. The purpose of the study is to identify and briefly characterize new
trends in the field of transport and cargo transportation in post-COVID
conditions.",Political and legal aspects of the COVID-19 pandemic impact on world transport systems,2021-07-01 15:06:54,"Alexey Gubin, Valeri Lipunov, Mattia Masolletti","http://arxiv.org/abs/2107.00390v1, http://arxiv.org/pdf/2107.00390v1",econ.GN
32196,gn,"This study examines patterns of regionalisation in the International Trade
Network (ITN). The study makes use of Gould Fernandez brokerage to examine the
roles countries play in the ITN linking different regional partitions. An
examination of three ITNs is provided for three networks with varying levels of
technological content, representing trade in high tech, medium tech, and
low-tech goods. Simulated network data, based on multiple approaches including
an advanced network model controlling for degree centralisation and clustering
patterns, is compared to the observed data to examine whether the roles
countries play within and between regions are a result of centralisation and
clustering patterns. The findings indicate that the roles countries play
between and within regions are indeed a result of centralisation patterns and
chiefly clustering patterns, indicating a need to examine the presence of hubs
when investigating regionalisation and globalisation patterns in the modern
global economy.",Trading patterns within and between regions: an analysis of Gould-Fernandez brokerage roles,2021-07-04 20:52:55,"Matthew Smith, Yasaman Sarabi","http://arxiv.org/abs/2107.01696v3, http://arxiv.org/pdf/2107.01696v3",econ.GN
32197,gn,"In this contribution, we exploit machine learning techniques to evaluate
whether and how close firms are to becoming successful exporters. First, we
train and test various algorithms using financial information on both exporters
and non-exporters in France in 2010-2018. Thus, we show that we are able to
predict the distance of non-exporters from export status. In particular, we
find that a Bayesian Additive Regression Tree with Missingness In Attributes
(BART-MIA) performs better than other techniques with an accuracy of up to
0.90. Predictions are robust to changes in definitions of exporters and in the
presence of discontinuous exporting activity. Finally, we discuss how our
exporting scores can be helpful for trade promotion, trade credit, and
assessing aggregate trade potential. For example, back-of-the-envelope
estimates show that a representative firm with just below-average exporting
scores needs up to 44% more cash resources and up to 2.5 times more capital to
get to foreign markets.",Predicting Exporters with Machine Learning,2021-07-06 13:11:59,"Francesca Micocci, Armando Rungi","http://arxiv.org/abs/2107.02512v2, http://arxiv.org/pdf/2107.02512v2",econ.GN
32198,gn,"Fake news is a growing problem in developing countries with potentially
far-reaching consequences. We conduct a randomized experiment in urban Pakistan
to evaluate the effectiveness of two educational interventions to counter
misinformation among low-digital literacy populations. We do not find a
significant effect of video-based general educational messages about
misinformation. However, when such messages are augmented with personalized
feedback based on individuals' past engagement with fake news, we find an
improvement of 0.14 standard deviations in identifying fake news. We also find
negative but insignificant effects on identifying true news, driven by female
respondents. Our results suggest that educational interventions can enable
information discernment but their effectiveness critically depends on how well
their features and delivery are customized for the population of interest.",Countering Misinformation on Social Media Through Educational Interventions: Evidence from a Randomized Experiment in Pakistan,2021-07-06 20:37:56,"Ayesha Ali, Ihsan Ayyub Qazi","http://arxiv.org/abs/2107.02775v1, http://arxiv.org/pdf/2107.02775v1",econ.GN
32937,gn,"This synthetic control study quantifies the economic costs of the
Russo-Ukrainian war in terms of foregone entrepreneurial activity in both
countries since the invasion of Crimea in 2014. Relative to its synthetic
counterfactual, Ukraine's number of self-employed dropped by 675,000,
corresponding to a relative loss of 20%. The number of Ukrainian SMEs
temporarily dropped by 71,000 (14%) and recovered within five years of the
conflict. In contrast, Russia had lost more than 1.4 million SMEs (42%) five
years into the conflict. The disappearance of Russian SMEs is driven by both
fewer new businesses created and more existing business closures.",The Economic Costs of the Russia-Ukraine War: A Synthetic Control Study of (Lost) Entrepreneurship,2023-03-06 00:19:14,"David Audretsch, Paul P. Momtaz, Hanna Motuzenko, Silvio Vismara","http://arxiv.org/abs/2303.02773v1, http://arxiv.org/pdf/2303.02773v1",econ.GN
32199,gn,"We use a controlled laboratory experiment to study the causal impact of
income decreases within a time period on redistribution decisions at the end of
that period, in an environment where we keep fixed the sum of incomes over the
period. First, we investigate the effect of a negative income trend
(intra-personal decrease), which means a decreasing income compared to one's
recent past. Second, we investigate the effect of a negative income trend
relative to the income trend of another person (inter-personal decrease). If
intra-personal or inter-personal decreases create dissatisfaction for an
individual, that person may become more selfish to obtain compensation. We
formalize both effects in a multi-period model augmenting a standard model of
inequality aversion. Overall, conditional on exhibiting sufficiently-strong
social preferences, we find that individuals indeed behave more selfishly when
they experience decreasing incomes. While many studies examine the effect of
income inequality on redistribution decisions, we delve into the history behind
one's income to isolate the effect of income changes.",Decreasing Incomes Increase Selfishness,2021-07-06 23:57:32,"Nickolas Gagnon, Riccardo D. Saulle, Henrik W. Zaunbrecher","http://arxiv.org/abs/2107.02888v1, http://arxiv.org/pdf/2107.02888v1",econ.GN
32200,gn,"Global concern regarding ultrafine particles (UFPs), which are particulate
matter (PM) with a diameter of less than 100 nm, is increasing. These
particles, which have more serious health effects than PM smaller than 2.5
micrometers (PM2.5), are difficult to measure using the current methods because their
characteristics are different from those of other air pollutants. Therefore, a
new monitoring system is required to obtain accurate UFPs information, which
will raise the financial burden of the government and people. In this study, we
estimated the economic value of UFPs information by evaluating the
willingness-to-pay (WTP) for the UFPs monitoring and reporting system. We used
the contingent valuation method (CVM) and the one-and-one-half-bounded
dichotomous choice (OOHBDC) spike model. We analyzed how the respondents'
socio-economic variables, as well as their cognition level of PM, affected
their WTP. Therefore, we collected WTP data of 1,040 Korean respondents through
an online survey. The estimated mean WTP for building a UFPs monitoring and
reporting system is KRW 6,958.55-7,222.55 (USD 6.22-6.45) per household per
year. We found that people who are satisfied with the current air pollutant
information, and who generally possess relatively greater knowledge of UFPs,
have higher WTP for a UFPs monitoring and reporting system. The results can be
used to establish new policies in response to PM, including UFPs.",Estimating the economic value of ultrafine particles information: A contingent valuation method,2021-07-07 09:24:39,"Eunjung Cho, Youngsang Cho","http://arxiv.org/abs/2107.03034v1, http://arxiv.org/pdf/2107.03034v1",econ.GN
32201,gn,"The authors of the article have reviewed the scientific literature on the
development of the Russian-Chinese cooperation in the field of combining
economic and logistics projects of the Eurasian Economic Union and the Silk
Road Economic Belt. The opinions of not only Russian, but also Chinese experts
on these projects are presented, which broadens the vision of
the concept of the New Silk Road in both countries.",Economic prospects of the Russian-Chinese partnership in the logistics projects of the Eurasian Economic Union and the Silk Road Economic Belt: a scientific literature review,2021-07-07 12:53:50,"Elena Rudakova, Alla Pavlova, Oleg Antonov, Kira Kuntsevich, Yue Yang","http://arxiv.org/abs/2107.03116v1, http://arxiv.org/pdf/2107.03116v1",econ.GN
32202,gn,"Models of economic decision makers often include idealized assumptions, such
as rationality, perfect foresight, and access to all relevant pieces of
information. These assumptions often assure the models' internal validity, but,
at the same time, might limit the models' power to explain empirical phenomena.
This paper is particularly concerned with the model of the hidden action
problem, which proposes an optimal performance-based sharing rule for
situations in which a principal assigns a task to an agent, and the action
taken to carry out this task is not observable by the principal. We follow the
agentization approach and introduce an agent-based version of the hidden action
problem, in which some of the idealized assumptions about the principal and the
agent are relaxed so that they only have limited information access, are
endowed with the ability to gain information, and store it in and retrieve it
from their (limited) memory. We follow an evolutionary approach and analyze how
the principal's and the agent's decisions affect the sharing rule, task
performance, and their utility over time. The results indicate that the optimal
sharing rule does not emerge. The principal's utility is relatively robust to
variations in intelligence, while the agent's utility is highly sensitive to
limitations in intelligence. The principal's behavior appears to be driven by
opportunism, as she withholds a premium from the agent to assure the optimal
utility for herself.",Limited intelligence and performance-based compensation: An agent-based model of the hidden action problem,2021-07-08 14:19:49,"Patrick Reinwald, Stephan Leitner, Friederike Wall","http://arxiv.org/abs/2107.03764v1, http://arxiv.org/pdf/2107.03764v1",econ.GN
32203,gn,"Plastic pollution is one of the most challenging problems affecting the
marine environment of our time. Based on a unique dataset covering four
European seas and eight European countries, this paper adds to the limited
empirical evidence base related to the societal welfare effects of marine
litter management. We use a discrete choice experiment to elicit public
willingness-to-pay (WTP) for macro and micro plastic removal to achieve Good
Environmental Status across European seas as required by the European Marine
Strategy Framework Directive. Using a common valuation design and following
best-practice guidelines, we draw meaningful comparisons between countries,
seas and policy contexts. European citizens have strong preferences to improve
the environmental status of the marine environment by removing both micro and
macro plastic litter favouring a pan-European approach. However, public WTP
estimates differ significantly across European countries and seas. We explain
why and discuss implications for policymaking.",Public preferences for marine plastic litter reductions across Europe,2021-07-08 19:31:34,"Salma Khedr, Katrin Rehdanz, Roy Brouwer, Hanna Dijkstra, Sem Duijndam, Pieter van Beukering, Ikechukwu C. Okoli","http://arxiv.org/abs/2107.03957v1, http://arxiv.org/pdf/2107.03957v1",econ.GN
32229,gn,"We present evidence that the word entropy of American English has been rising
steadily since around 1900, contrary to predictions from existing
sociolinguistic theories. We also find differences in word entropy between
media categories, with short-form media such as news and magazines having
higher entropy than long-form media, and social media feeds having higher
entropy still. To explain these results we develop an ecological model of the
attention economy that combines ideas from Zipf's law and information foraging.
In this model, media consumers maximize information utility rate taking into
account the costs of information search, while media producers adapt to
technologies that reduce search costs, driving them to generate higher entropy
content in increasingly shorter formats.",The Rising Entropy of English in the Attention Economy,2021-07-27 17:31:50,"Charlie Pilgrim, Weisi Guo, Thomas T. Hills","http://arxiv.org/abs/2107.12848v5, http://arxiv.org/pdf/2107.12848v5",econ.GN
32204,gn,"How will the novel coronavirus evolve? I study a simple epidemiological
model, in which mutations may change the properties of the virus and its
associated disease stochastically and antigenic drifts allow new variants to
partially evade immunity. I show analytically that variants with higher
infectiousness, longer disease duration, and shorter latent period prove to be
fitter. ""Smart"" containment policies targeting symptomatic individuals may
redirect the evolution of the virus, as they give an edge to variants with a
longer incubation period and a higher share of asymptomatic infections. Reduced
mortality, on the other hand, does not per se prove to be an evolutionary
advantage. I then implement this model as an agent-based simulation model in
order to explore its aggregate dynamics. Monte Carlo simulations show that a)
containment policy design has an impact on both speed and direction of viral
evolution, b) the virus may circulate in the population indefinitely, provided
that containment efforts are too relaxed and the propensity of the virus to
escape immunity is high enough, and crucially c) that it may not be possible to
distinguish between a slowly and a rapidly evolving virus by looking only at
short-term epidemiological outcomes. Thus, what looks like a successful
mitigation strategy in the short run, may prove to have devastating long-run
effects. These results suggest that optimal containment policy must take the
propensity of the virus to mutate and escape immunity into account,
strengthening the case for genetic and antigenic surveillance even in the early
stages of an epidemic.","Endogenous viral mutations, evolutionary selection, and containment policy design",2021-07-09 13:49:54,Patrick Mellacher,"http://dx.doi.org/10.1007/s11403-021-00344-3, http://arxiv.org/abs/2107.04358v2, http://arxiv.org/pdf/2107.04358v2",econ.GN
32205,gn,"Optimal transport has become part of the standard quantitative economics
toolbox. It is the framework of choice to describe models of matching with
transfers, but beyond that, it allows one to: extend quantile regression; identify
discrete choice models; provide new algorithms for computing the random
coefficient logit model; and generalize the gravity model in trade. This paper
offers a brief review of the basics of the theory, its applications to
economics, and some extensions.",The unreasonable effectiveness of optimal transport in economics,2021-07-10 01:22:10,Alfred Galichon,"http://arxiv.org/abs/2107.04700v1, http://arxiv.org/pdf/2107.04700v1",econ.GN
32206,gn,"Prescription Drug Monitoring Programs (PDMPs) seek to potentially reduce
opioid misuse by restricting the sale of opioids in a state. We examine
discontinuities along state borders, where one side may have a PDMP and the
other side may not. We find that electronic PDMP implementation, whereby
doctors and pharmacists can observe a patient's opioid purchase history,
reduces a state's opioid sales but increases opioid sales in neighboring
counties on the other side of the state border. We also find systematic
differences in opioid sales and mortality between border counties and interior
counties. These differences decrease when neighboring states both have ePDMPs,
which is consistent with the hypothesis that individuals cross state lines to
purchase opioids. Our work highlights the importance of understanding the
opioid market as connected across counties or states, as we show that states
are affected by the opioid policies of their neighbors.",Geographic Spillover Effects of Prescription Drug Monitoring Programs (PDMPs),2021-07-11 01:59:38,"Daniel Guth, Shiyu Zhang","http://arxiv.org/abs/2107.04925v2, http://arxiv.org/pdf/2107.04925v2",econ.GN
32207,gn,"Unabated coal power in India must be phased out by mid-century to achieve
global climate targets under the Paris Agreement. Here we estimate the costs of
hybrid power plants - lithium-ion battery storage with wind and solar PV - to
replace coal generation. We design least cost mixes of these technologies to
supply stylized baseload and load-following generation profiles in three Indian
states - Karnataka, Gujarat, and Tamil Nadu. Our analysis shows that
availability of low cost capital, solar PV capital costs of at least $250/kW,
and battery storage capacity costs at least 50% cheaper than current levels
will be required to phase out existing coal power plants. Phaseout by 2040
requires a 6% annual decline in the cost of hybrid systems over the next two
decades. We find that replacing coal generation with hybrid systems 99% of the
hours over multiple decades is roughly 40% cheaper than 100% replacement,
indicating a key role for other low cost grid flexibility mechanisms to help
hasten coal phaseout. Solar PV is more suited to pairing with short duration
storage than wind power. Overall, our results describe the challenging
technological and policy advances needed to achieve the temperature goals of
the Paris Agreement.",Sustained cost declines in solar PV and battery storage needed to eliminate coal generation in India,2021-07-11 02:28:16,"Aniruddh Mohan, Shayak Sengupta, Parth Vaishnav, Rahul Tongia, Asim Ahmed, Ines L. Azevedo","http://dx.doi.org/10.1088/1748-9326/ac98d8, http://arxiv.org/abs/2107.04928v4, http://arxiv.org/pdf/2107.04928v4",econ.GN
32208,gn,"While controversial, e-learning has become an essential tool for all kinds of
education, especially within the kindergarten-to-twelfth sector. However,
pockets of this sector, mainly economically underserved students, lack access.
This paper explores the options available to underserved and aptly resourced
members of the kindergarten-to-twelfth educational sector: a 250-million-person
market, with only 9 million students enrolled in online education. The paper
also provides a brief overview of the options and challenges of making
e-learning available to everyone in the kindergarten-to-twelfth educational
sector. To establish whether e-learning is beneficial, it also discusses the
results of a survey conducted on students and educators who have experienced
e-learning, with the results showing that it is beneficial, with a general
trend of teachers showing more comfort with online learning than students. The
paper utilizes primary and secondary resources for this purpose, with
information both from the internet, and from surveys conducted within people
from the system: parents, students, and teachers.",E-Learning and its Socioeconomics,2021-07-11 16:08:20,Avni Singh,"http://arxiv.org/abs/2107.05041v2, http://arxiv.org/pdf/2107.05041v2",econ.GN
32209,gn,"We compare three populations commonly used in experiments by economists and
other social scientists: undergraduate students at a physical location (lab),
Amazon's Mechanical Turk (MTurk), and Prolific. The comparison is made along
three dimensions: the noise in the data due to inattention, the cost per
observation, and the elasticity of response. We draw samples from each
population, examining decisions in four one-shot games with varying tensions
between the individual and socially efficient choices. When there is no
tension, where individual and pro-social incentives coincide, noisy behavior
accounts for 60% of the observations on MTurk, 19% on Prolific, and 14% for the
lab. Taking costs into account, if noisy data is the only concern Prolific
dominates from an inferential power point of view, combining relatively low
noise with a cost per observation one fifth of the lab's. However, because the
lab population is more sensitive to treatment, across our main PD game
comparison the lab still outperforms both Prolific and MTurk.",The Experimenters' Dilemma: Inferential Preferences over Populations,2021-07-11 18:01:59,"Neeraja Gupta, Luca Rigotti, Alistair Wilson","http://arxiv.org/abs/2107.05064v2, http://arxiv.org/pdf/2107.05064v2",econ.GN
32210,gn,"The transition to a low-carbon economy is one of the ambitions of the
European Union for 2030. Biobased industries play an essential role in this
transition. However, there has been an on-going discussion about the actual
benefit of using biomass to produce biobased products, specifically the use of
agricultural materials (e.g., corn and sugarcane). This paper presents the
environmental impact assessment of 30% and 100% biobased PET (polyethylene
terephthalate) production using EU biomass supply chains (e.g., sugar beet,
wheat, and Miscanthus). An integral assessment between the life cycle
assessment methodology and the global sensitivity assessment is presented as an
early-stage support tool to propose and select supply chains that improve the
environmental performance of biobased PET production. From the results,
Miscanthus is the best option for the production of biobased PET: promoting EU
local supply chains, reducing greenhouse gas (GHG) emissions (process and
land-use change), and generating lower impacts in midpoint categories related
to resource depletion, ecosystem quality, and human health. This tool can help
improve the environmental performance of processes that could boost the shift
to a low-carbon economy.",Can we improve the environmental benefits of biobased PET production through local 1 biomass value chains? A life cycle assessment perspective,2021-07-12 11:18:48,"Carlos Garcia-Velasquez, Yvonne van der Meer","http://dx.doi.org/10.1016/j.jclepro.2022.135039, http://arxiv.org/abs/2107.05251v1, http://arxiv.org/pdf/2107.05251v1",econ.GN
32211,gn,"Background: The US Food and Drug Administration (FDA) regulates medical
devices (MD), which are predicated on a concoction of economic and policy
forces (e.g., supply/demand, crises, patents). Assuming that the number of FDA
MD (Premarketing Notifications (PMN), Approvals (PMAs), and their sum)
Applications behaves similarly to those of other econometrics, this work
explores the hypothesis of the existence (and, if so, the length scale(s)) of
economic cycles (periodicities). Methods: Beyond summary statistics, the
monthly (May, 1976 to December, 2020) number of observed FDA MD Applications
are investigated via an assortment of time series techniques (including:
Discrete Wavelet Transform, Running Moving Average Filter (RMAF), Complete
Ensemble Empirical Mode with Adaptive Noise decomposition (CEEMDAN), and
Seasonal Trend Loess (STL) decomposition) to exhaustively search and
characterize such periodicities. Results: The data were found to be non-normal,
non-stationary (fractional order of integration < 1), non-linear, and strongly
persistent (Hurst > 0.5). Importantly, periodicities exist and follow seasonal,
1 year short-term, 5-6 year (Juglar), and a single 24-year medium-term
(Kuznets) period (when considering the total number of MD Applications).
Economic crises (e.g., COVID-19) do not seem to affect the evolution of the
periodicities. Conclusions: This work concludes that (1) PMA and PMN data may
be viewed as a proxy measure of the MD industry; (2) periodicities exist in
the data with time lengths associated with seasonal/1-year, Juglar and Kuznets
effects; (4) these metrics do not seem affected by specific crises (such as
COVID-19) (similarly with other econometrics used in periodicity assessments);
(5) PMNs and PMAs evolve inversely and suggest a structural industrial
transformation; (6) Total MDs are predicted to continue their decline into the
mid-2020s prior to recovery.",Seasonal and Secular Periodicities Identified in the Dynamics of US FDA Medical Devices (1976 2020) Portends Intrinsic Industrial Transformation and Independence of Certain Crises,2021-07-09 18:00:53,Iraj Daizadeh,"http://dx.doi.org/10.1007/s43441-021-00334-4, http://arxiv.org/abs/2107.05347v2, http://arxiv.org/pdf/2107.05347v2",econ.GN
32212,gn,"In many U.S. central cities, property values are relatively low, while rents
are closer to those in better-off neighborhoods. This gap can lead to
relatively large profits for landlords, and has been referred to as
""exploitaton"" for renters. While much of this gap might be explained by risk,
factors such as income and race might play important roles as well. This study
calculates Census tract-level measures of the rent-to-property-value (RPV)
ratio for 30 large cities and their surrounding metropolitan areas. After
examining the spatial distribution of this ratio and relationships with other
socioeconomic variables for Milwaukee and three other cities, Z-scores and
quantiles are used to identify ""extreme"" RPV values nationwide. ""Rust Belt""
cities such as Detroit, Cleveland, and Milwaukee are shown to have higher
median and 95% values than do West Coast cities such as Seattle and San
Francisco. A spatial lag regression estimation shows that, controlling for
income, property values, and vacancy rates, racial characteristics often have
the ""opposite"" signs from what might be expected and that there is little
evidence of purely race-based ""exploitation"" of renters. A significantly
negative coefficient for the percentage of Black residents, for example, might
suggest that the RPV ratio is lower in a given tract, all else equal. While
this study shows where RPV values are highest within as well as between cities,
further investigation might uncover the drivers of these spatial differences
more fully.",Are Rents Excessive in the Central City?: A Geospatial Analysis,2021-07-12 18:59:15,Scott W. Hegerty,"http://arxiv.org/abs/2107.05529v1, http://arxiv.org/pdf/2107.05529v1",econ.GN
32213,gn,"The hidden-action model provides an optimal sharing rule for situations in
which a principal assigns a task to an agent who makes an effort to carry out
the task assigned to him. However, the principal can only observe the task
outcome but not the agent's actual action, which is why the sharing rule can
only be based on the outcome. The hidden-action model builds on somewhat
idealized assumptions about the principal's and the agent's capabilities
related to information access. We propose an agent-based model that relaxes
some of these assumptions. Our analysis lays particular focus on the
micro-level dynamics triggered by limited access to information. For the
principal's sphere, we identify the so-called Sisyphus effect that explains why
the sharing rule that provides the agent with incentives to take optimal action
is difficult to achieve if the information is limited, and we identify factors
that moderate this effect. In addition, we analyze the behavioral dynamics in
the agent's sphere. We show that the agent might make even more of an effort
than optimal under unlimited access to information, which we refer to as excess
effort. Interestingly, the principal can control the probability of making an
excess effort via the incentive mechanism. However, how much excess effort the
agent finally makes is out of the principal's direct control.",Micro-level dynamics in hidden action situations with limited information,2021-07-13 14:52:09,"Stephan Leitner, Friederike Wall","http://arxiv.org/abs/2107.06002v2, http://arxiv.org/pdf/2107.06002v2",econ.GN
32214,gn,"Most explanations of economic growth are based on knowledge spillovers, where
the development of some technologies facilitates the enhancement of others.
Empirical studies show that these spillovers can have a heterogeneous and
rather complex structure. But, so far, little attention has been paid to the
consequences of different structures of such cross-technology interactions: Is
economic development more easily fostered by homogenous or heterogeneous
interactions, by uni- or bidirectional spillovers? Using a detailed description
of an r&d sector with cross-technology interactions embedded in a simple growth
model, we analyse how the structure of spillovers influences growth prospects
and growth patterns. We show that some types of interactions (e.g., one-way
interactions) cannot induce exponential growth, whereas other structures can.
Furthermore, depending on the structure of interactions, all or only some
technologies will contribute to growth in the long run. Finally, some spillover
structures can lead to complex growth patterns, such as technology transitions,
where, over time, different technology clusters are the main engine of growth.",Economic development and the structure of cross-technology interactions,2021-07-13 17:39:37,"Anton Bondarev, Frank C. Krysiak","http://dx.doi.org/10.1016/j.euroecorev.2020.103628, http://arxiv.org/abs/2107.06137v1, http://arxiv.org/pdf/2107.06137v1",econ.GN
32235,gn,"The 2017 crackdown on Rakhine Rohingyas by the Myanmar army (Tatmadaw) pushed
more than 600,000 refugees into Bangladesh. Both Western and Islamic countries
denounced Aung Sang Suu Kyi's government, but both Asian giants, China and
India, supported Myanmar's actions. Both also have high stakes in Myanmar given
their long-term geopolitics and geoeconomic South and Southeast Asian plans. In
spite of Myanmar-based commonalities, China's and India's approaches differ
significantly, predicting equally dissimilar outcomes. This chapter examines
their foreign policy and stakes in Myanmar in order to draw a sketch of the
future of Rakhine Rohingyas stuck in Bangladesh.","China, India, Myanmar: Playing Rohingya Roulette",2021-07-29 06:29:54,Hossain Ahmed Taufiq,"http://dx.doi.org/10.1007/978-981-13-7240-7_4, http://arxiv.org/abs/2107.13727v1, http://arxiv.org/pdf/2107.13727v1",econ.GN
32215,gn,"To analyze climate change mitigation strategies, economists rely on
simplified climate models - climate emulators. We propose a generic and
transparent calibration and evaluation strategy for these climate emulators
that is based on Coupled Model Intercomparison Project, Phase 5 (CMIP5). We
demonstrate that the appropriate choice of the free model parameters can be of
key relevance for the predicted social cost of carbon. We propose to use four
different test cases: two tests to separately calibrate and evaluate the carbon
cycle and temperature response, a test to quantify the transient climate
response, and a final test to evaluate the performance for scenarios close to
those arising from economic models. We re-calibrate the climate part of the
widely used DICE-2016: the multi-model mean as well as extreme, but still
permissible climate sensitivities and carbon cycle responses. We demonstrate
that the functional form of the climate emulator of the DICE-2016 model is fit
for purpose, despite its simplicity, but its carbon cycle and temperature
equations are miscalibrated. We examine the importance of the calibration for
the social cost of carbon in the context of a partial equilibrium setting where
interest rates are exogenous, as well as the simple general equilibrium setting
from DICE-2016. We find that the model uncertainty from different consistent
calibrations of the climate system can change the social cost of carbon by a
factor of four if one assumes a quadratic damage function. When calibrated to
the multi-model mean, our model predicts similar values for the social cost of
carbon as the original DICE-2016, but with a strongly reduced sensitivity to
the discount rate and about one degree less long-term warming. The social cost
of carbon in DICE-2016 is oversensitive to the discount rate, leading to
extreme comparative statics responses to changes in preferences.",The climate in climate economics,2021-07-13 18:23:13,"Doris Folini, Felix Kübler, Aleksandra Malova, Simon Scheidegger","http://arxiv.org/abs/2107.06162v3, http://arxiv.org/pdf/2107.06162v3",econ.GN
32216,gn,"We analyze differences in mode of transportation to work by sexual
orientation, using the American Community Survey 2008-2019. Individuals in
same-sex couples are significantly less likely to drive to work than men and
women in different-sex couples. This gap is particularly stark among men: on
average, almost 12 percentage point (or 13%) lower likelihood of driving to
work for men in same-sex couples. Individuals in same-sex couples are also more
likely to use public transport, walk, or bike to work: on average, men and
women are 7 and 3 percentage points more likely, respectively, to take public
transportation to work than those in different-sex couples. These differences
persist after controlling for demographic characteristics, partner's
characteristics, location, fertility, and marital status. Additional evidence
from the General Social Survey 2008-2018 suggests that these disparities by
sexual orientation may be due to lesbian, gay, and bisexual individuals caring
more for the environment than straight individuals.",Sissy That Walk: Transportation to Work by Sexual Orientation,2021-07-13 19:06:54,"Sonia Oreffice, Dario Sansone","http://dx.doi.org/10.1371/journal.pone.0263687, http://arxiv.org/abs/2107.06210v1, http://arxiv.org/pdf/2107.06210v1",econ.GN
32217,gn,"The article examines both the legal responsibility itself and its types, and
in various aspects. The authors apply legal analysis, as well as the principles
of consistency and integrity. The contradictions of administrative
responsibility, as well as legal gaps in its interpretation, are highlighted.",Key features of administrative responsibility,2021-07-16 13:53:32,"Vladimir Zhavoronkov, Valeri Lipunov, Mattia Masolletti","http://arxiv.org/abs/2107.07816v1, http://arxiv.org/pdf/2107.07816v1",econ.GN
32218,gn,"This paper evaluates the significant factors contributing to environmental
awareness among individuals living in the urban area of Sylhet, Bangladesh.
Ordered Probit (OPM) estimation is applied to the value of ten measures of
individual environmental concern. The estimated results of OPM reveal the
dominance of higher education, higher income, and full-employment status on
environmental concern and environmentally responsible behavior. Younger and
more educated respondents tended to be more knowledgeable and concerned than
older and less educated respondents. The marginal effect of household size,
middle income level, and part-time employment status of the survey
respondents played a less significant role in the degree of environmental
awareness. Findings also validate the ""age hypothesis"" proposed by Van Liere
and Dunlap (1980), and the gender effect reveals an insignificant role in
determining the degree of environmental concern. Environmental awareness among
urban individuals with higher income increased linearly with environmental
awareness programs, which may have significant policy implications, such as
environmental awareness programs for old-aged and less-educated individuals,
and may lead to increased taxation on higher income groups to mitigate city
areas' pollution problems.","A Probit Estimation of Urban Bases of Environmental Awareness: Evidence from Sylhet City, Bangladesh",2021-07-18 05:08:23,"Mohammad Masud Alam, AFM Zakaria","http://arxiv.org/abs/2107.08342v3, http://arxiv.org/pdf/2107.08342v3",econ.GN
32219,gn,"In the last two decades, composite indicators' construction to measure and
compare multidimensional phenomena in a broad spectrum of domains has increased
considerably. Different methodological approaches are used to summarize huge
data sets of information in a single figure. This paper proposes a new approach
that consists of computing a multicriteria composite performance interval based
on different aggregation rules. The suggested approach provides an additional
layer of information as the performance interval displays a lower bound from a
non-compensability perspective, and an upper bound allowing for
full-compensability. The outstanding features of this proposal are: (i) a
distance-based multicriteria technique is taken as the baseline to construct
the multicriteria performance interval; (ii) the aggregation of
distances/separation measures is made using particular cases of Minkowski's
$L_p$ metrics; (iii) the span of the multicriteria performance interval can be
considered as a sign of the dimensions or indicators balance.",Monitoring multidimensional phenomena with a multicriteria composite performance interval approach,2021-07-18 12:11:18,"Ana Garcia-Bernabeu, Adolfo Hilario-Caballero","http://arxiv.org/abs/2107.08393v1, http://arxiv.org/pdf/2107.08393v1",econ.GN
32220,gn,"Is the typical specification of the Euler equation for investment employed in
DSGE models consistent with aggregate macro data? Using state-of-the-art
econometric methods that are robust to weak instruments and exploit information
in possible structural changes, the answer is yes. Unfortunately, however,
there is very little information about the values of these parameters in
aggregate data because investment is unresponsive to changes in capital
utilization and the real interest rate. In DSGE models, the investment
adjustment cost and the persistence of the investment-specific technology shock
parameters are mainly identified by, respectively, the cross-equation
restrictions and the dynamics implied by the structure of the model.",Empirical evidence on the Euler equation for investment in the US,2021-07-19 12:38:59,"Guido Ascari, Qazi Haque, Leandro M. Magnusson, Sophocles Mavroeidis","http://arxiv.org/abs/2107.08713v2, http://arxiv.org/pdf/2107.08713v2",econ.GN
32221,gn,"Taxes on goods and services account for about 45% of total tax revenue in
Brazil. This tax collection results in a highly complex system, with several
taxes, different tax bases, and a multiplicity of rates. Moreover, about 43% of
taxes on goods fall on inputs. In this context, the effective tax rates can
substantially differ from the legal rates. In this study we estimate the final
incidence of indirect taxes in Brazil using the 2015 Brazilian input-output
matrix and a method that incorporates the multisector effects of the taxation
of inputs.",A Incidência Final dos Tributos Indiretos no Brasil: Estimativa Usando a Matriz de Insumo-Produto 2015,2021-07-20 13:31:18,"Rozane Bezerra de Siqueira, José Ricardo Bezerra Nogueira, Carlos Feitosa Luna","http://arxiv.org/abs/2107.09396v1, http://arxiv.org/pdf/2107.09396v1",econ.GN
32222,gn,"This study examines how housing sector volatilities affect real estate
investment trust (REIT) equity return in the United States. I argue that
unexpected changes in housing variables can be a source of aggregate housing
risk, and the first principal component extracted from the volatilities of U.S.
housing variables can predict the expected REIT equity returns. I propose and
construct a factor-based housing risk index as an additional factor in asset
price models that uses the time-varying conditional volatility of housing
variables within the U.S. housing sector. The findings show that the proposed
housing risk index is economically and theoretically consistent with the
risk-return relationship of the conditional Intertemporal Capital Asset Pricing
Model (ICAPM) of Merton (1973), which predicts an average maximum of 5.6
percent of risk premium in REIT equity return. In subsample analyses, the
positive relationship is not affected by the choice of sample period but shows
higher housing risk beta values for the 2009-18 sample period. The relationship
remains significant after controlling for VIX, Fama-French three factors, and a
broad set of macroeconomic and financial variables. Moreover, the proposed
housing beta also accurately forecasts U.S. macroeconomic and financial
conditions.",Time Varying Risk in U.S. Housing Sector and Real Estate Investment Trusts Equity Return,2021-07-22 07:57:39,Masud Alam,"http://arxiv.org/abs/2107.10455v1, http://arxiv.org/pdf/2107.10455v1",econ.GN
32223,gn,"Can online education enable all students to participate in and benefit from
it equally? Massive online education without addressing the huge access gap and
disparities in digital infrastructure would not only exclude a vast majority of
students from learning opportunities but also exacerbate the existing
socio-economic disparities in educational opportunities.",Of Access and Inclusivity Digital Divide in Online Education,2021-07-22 17:56:38,"Bheemeshwar Reddy A, Sunny Jose, Vaidehi R","http://arxiv.org/abs/2107.10723v1, http://arxiv.org/pdf/2107.10723v1",econ.GN
32224,gn,"A tailor-made internet survey experiment provides individuals with
information on their income positions to examine their effects on subjective
well-being. In the first survey, respondents were asked about their household
income and subjective well-being. Based on the data collected, three different
respondents' income positions within the residential locality, within a group
of the same educational background, and cohort were obtained. In the follow-up
survey for the treatment group, respondents are informed of their income
positions and then asked for subjective well-being. Key findings are that,
after obtaining the information, individuals with a higher income position
report greater subjective well-being. The effects varied according to individual
characteristics and proxies.",Where do I rank? Am I happy?: learning income position and subjective-wellbeing in an internet experiment,2021-07-22 01:51:34,Eiji Yamamura,"http://arxiv.org/abs/2107.11185v1, http://arxiv.org/pdf/2107.11185v1",econ.GN
32225,gn,"This paper studies reputation in the online market for illegal drugs in which
no legal institutions exist to alleviate uncertainty. Trade takes place on
platforms that offer rating systems for sellers, thereby providing an
observable measure of reputation. The analysis exploits the fact that one of
the two dominant platforms unexpectedly disappeared. Re-entering sellers reset
their rating. The results show that on average prices decreased by up to 9% and
that a 1% increase in rating causes a price increase of 1%. Ratings and prices
recover after about three months. We calculate that identified good types earn
1,650 USD more per week.",Dealing with Uncertainty: The Value of Reputation in the Absence of Legal Institutions,2021-07-23 18:47:17,"Nicolas Eschenbaum, Helge Liebert","http://arxiv.org/abs/2107.11314v1, http://arxiv.org/pdf/2107.11314v1",econ.GN
32226,gn,"The ongoing Rohingya refugee crisis is considered as one of the largest
human-made humanitarian disasters of the 21st century. So far, Bangladesh is
the largest recipient of these refugees. According to the United Nations Office
for the Coordination of Humanitarian Affairs (UN OCHA), approximately 650,000
new entrants have been recorded since the new violence erupted on 25 August
2017 in the Rakhine state of Myanmar. However, such a crisis is nothing new in
Bangladesh, nor are the security-related challenges new that such an exodus
brings with it. Ever since the military came to power in Myanmar (in 1962),
Rohingya exodus to neighboring countries became a recurring incident. The
latest mass exodus of Rohingyas from Rakhine state of Myanmar to Bangladesh is
the largest of such influxes. Unlike the previous refugee crises, the ongoing
crisis has wide-ranging security implications on Bangladesh. They are also
varied and multifaceted. Thus, responsibilities for ensuring effective
protection have become operationally multilateral. The problem of security
regarding the Rohingya refugee issue is complicated by the Islamist insurgency,
illicit methamphetamine/yaba drug trafficking, and HIV/AIDS/STI prevalence
factors. The chapter examines the different dimensions of security challenges
that the recent spell of Rohingya exodus brings to Bangladesh and the refugees
themselves. In order to understand the challenges, firstly the chapter attempts
to conceptualize the prominent security frameworks. Secondly, it examines the
context and political economy behind the persecution of Rohingyas in the
Rakhine state. Thirdly, it explores the political and military aspects of
security. Fourthly, it explores the social and economic dimensions. Finally, it
examines the environmental impacts of Rohingya crisis in Bangladesh.",Rohingya Refugee Crisis and the State of Insecurity in Bangladesh,2021-07-26 12:58:54,Hossain Ahmed Taufiq,"http://arxiv.org/abs/2107.12080v1, http://arxiv.org/pdf/2107.12080v1",econ.GN
32227,gn,"Water-logging is a major challenge for Dhaka city, the capital of Bangladesh.
The rapid, unregulated, and unplanned urbanization, as well as detrimental
social, economic, infrastructural, and environmental consequences, not to
mention diseases like dengue, challenge the several crash programs combating
water-logging in the city. This study provides a brief contextual analysis of
Dhaka's topography and its natural and storm water drainage systems,
before concentrating on the man-made causes and effects of water-logging,
ultimately exploring a few remedial measures.","Dhaka Water-logging: Causes, Effects and Remedial Policy Options",2021-07-27 09:47:45,Hossain Ahmed Taufiq,"http://arxiv.org/abs/2107.12625v1, http://arxiv.org/pdf/2107.12625v1",econ.GN
32228,gn,"Using three rounds of NSS datasets, the present paper attempts to understand
the relationship between income inequality and intergenerational income
mobility (IGIM) by segregating generations into social and income classes. The
originality of the paper lies in assessing the IGIM using different approaches,
which we expect to contribute to the existing literature. We conclude that the
country has low income mobility and high inequality, which is no longer
associated with a particular social class in India. Also, both may have a
negative or positive relationship, which hence needs to be studied at a regional
level.",Income Inequality and Intergenerational Mobility in India,2021-07-27 13:00:13,Anuradha Singh,"http://arxiv.org/abs/2107.12702v1, http://arxiv.org/pdf/2107.12702v1",econ.GN
32258,gn,"In this paper, we estimate the causal impact (i.e. Average Treatment Effect,
ATT) of the EU ETS on GHG emissions and firm competitiveness (primarily
measured by employment, turnover, and exports levels) by combining a
difference-in-differences approach with semi-parametric matching techniques and
estimators, and to investigate the effect of the EU ETS on the economic
performance of these German manufacturing firms using a Stochastic Production
Frontier model.",Causal Impact Of European Union Emission Trading Scheme On Firm Behaviour And Economic Performance: A Study Of German Manufacturing Firms,2021-08-16 18:29:59,"Nitish Gupta, Jay Shah, Satwik Gupta, Ruchir Kaul","http://arxiv.org/abs/2108.07163v1, http://arxiv.org/pdf/2108.07163v1",econ.GN
32230,gn,"Social accountability refers to promoting good governance by making ruling
elites more responsive. In Bangladesh, where bureaucracy and legislature
operate with little effective accountability or checks and balances,
traditional horizontal or vertical accountability proved to be very blunt and
weak. In the presence of such faulty mechanisms, ordinary citizens access to
information is frequently denied, and their voices are kept mute. It impasses
the formation of an enabling environment, where activists and civil society
institutions representing the ordinary peoples interest are actively
discouraged. They become vulnerable to retribution. Social accountability, on
the other hand, provides an enabling environment for activists and civil
society institutions to operate freely. Thus, leaders and administration become
more accountable to people. An enabling environment means providing legal
protection, enhancing the availability of information and increasing citizen
voice, strengthening institutional and public service capacities and directing
incentives that foster accountability. Donors allocate significant shares of
resources to encouraging civil society to partner with elites rather than
holding them accountable. This paper advocate for a stronger legal environment
to protect critical civil society and whistle-blowers, and for independent
grant-makers tasked with building strong, self-regulating social accountability
institutions.
  Key Words: Accountability, Legal Protection, Efficiency, Civil Society,
Responsiveness",Towards an enabling environment for social accountability in Bangladesh,2021-07-28 05:05:31,Hossain Ahmed Taufiq,"http://arxiv.org/abs/2107.13128v1, http://arxiv.org/pdf/2107.13128v1",econ.GN
32231,gn,"To decarbonize the economy, many governments have set targets for the use of
renewable energy sources. These are often formulated as relative shares of
electricity demand or supply. Implementing respective constraints in energy
models is a surprisingly delicate issue. They may cause a modeling artifact of
excessive electricity storage use. We introduce this phenomenon as 'unintended
storage cycling', which can be detected in case of simultaneous storage
charging and discharging. In this paper, we provide an analytical
representation of different approaches for implementing minimum renewable share
constraints in models, and show how these may lead to unintended storage
cycling. Using a parsimonious optimization model, we quantify related
distortions of optimal dispatch and investment decisions as well as market
prices, and identify important drivers of the phenomenon. Finally, we provide
recommendations on how to avoid the distorting effects of unintended storage
cycling in energy modeling.",Renewable Energy Targets and Unintended Storage Cycling: Implications for Energy Modeling,2021-07-28 17:11:24,"Martin Kittel, Wolf-Peter Schill","http://dx.doi.org/10.1016/j.isci.2022.104002, http://arxiv.org/abs/2107.13380v2, http://arxiv.org/pdf/2107.13380v2",econ.GN
32232,gn,"The MobilityCoin is a new, all-encompassing currency for the management of
the multimodal urban transportation system. MobilityCoins include and replace
various existing transport policy instruments while also incentivizing a shift
to more sustainable modes as well as empowering the public to vote for
infrastructure measures.",MobilityCoins -- A new currency for the multimodal urban transportation system,2021-07-28 18:48:06,"Klaus Bogenberger, Philipp Blum, Florian Dandl, Lisa-Sophie Hamm, Allister Loder, Patrick Malcolm, Martin Margreiter, Natalie Sautter","http://arxiv.org/abs/2107.13441v2, http://arxiv.org/pdf/2107.13441v2",econ.GN
32233,gn,"This paper investigates the assumption of homogeneous effects of federal tax
changes across the U.S. states and identifies where and why that assumption may
not be valid. More specifically, what determines the transmission mechanism of
tax shocks at the state level? How vital are states' fiscal structures,
financial conditions, labor market rigidities, and industry mix? Do these
economic and structural characteristics drive the transmission mechanism of the
tax changes at the state level at different horizons? This study employs a
panel factor-augmented vector autoregression (FAVAR) technique to answer these
issues. The findings show that state economies respond homogeneously in terms
of employment and price levels; however, they react heterogeneously in real GDP
and personal income growth. In most states, these reactions are statistically
significant, and the heterogeneity in the effects of tax cuts is significantly
related to the state's fiscal structure, manufacturing and financial
composition, and the labor market's rigidity. A cross-state regression analysis
shows that states with higher tax elasticity, higher personal income tax,
strict labor market regulation, and economic policy uncertainties are
relatively less responsive to federal tax changes. In contrast, the magnitude
of the response in real GDP, personal income, and employment to tax cuts is
relatively higher in states with a larger share of finance, manufacturing,
lower tax burdens, and flexible credit markets.",Heterogeneous Responses to the U.S. Narrative Tax Changes: Evidence from the U.S. States,2021-07-29 03:21:11,Masud Alam,"http://arxiv.org/abs/2107.13678v1, http://arxiv.org/pdf/2107.13678v1",econ.GN
32234,gn,"Non-governmental organisations have made a significant contribution in the
development of Bangladesh. Today, Bangladesh has more than 2000 NGOs, and few
of them are among the largest in the world. NGOs are claimed to have impacts on
the sustainable development in Bangladesh. However, to what extent they have
fostered equity and social inclusion in the urban cities of Bangladesh remains
a subject of thorough examination. The 11th goal of the Sustainable Development
Goals (SDG) advocates for making cities and human settlements inclusive, safe,
resilient and sustainable. Bangladesh which is the most densely populated
country in the world faces multifaceted urbanization challenges. The capital
city Dhaka itself has experienced staggering population growth in last few
decades. Today, Dhaka has become one of the fastest growing megacities in the
world. Dhaka started its journey with a manageable population of 2.2 million in
1975, which has now reached 14.54 million; the growth rate averaged 6 per cent each
year. As this rapid growth of Dhaka City is not commensurate with its
industrial development, a significant portion of its population is living in
informal settlements or slums where they experience the highest level of
poverty and vulnerability. Many NGOs have taken either concerted or individual
efforts to address socio-economic challenges in the city. Earlier results
suggest that programs undertaken by NGOs have shown potential to positively
contribute to fostering equity and reducing social exclusion. This paper
attempts to explore what types of relevant NGO programs are currently in place,
taking the case of Dhaka city.",Role of NGOs in fostering equity and social inclusion in cities of Bangladesh: The Case of Dhaka,2021-07-29 06:03:40,Hossain Ahmed Taufiq,"http://arxiv.org/abs/2107.13716v1, http://arxiv.org/pdf/2107.13716v1",econ.GN
32944,gn,"We illustrate the point with an empirical analysis of assortative mating in
the US, namely, that the outcome of comparing two distant groups can be
sensitive to whether comparing the groups directly, or indirectly via a series
of counterfactual decompositions involving the groups' comparisons to some
intermediate groups. We argue that the latter approach is typically more fit
for its purpose.",Direct comparison or indirect comparison via a series of counterfactual decompositions?,2023-03-09 00:51:06,Anna Naszodi,"http://arxiv.org/abs/2303.04905v1, http://arxiv.org/pdf/2303.04905v1",econ.GN
32236,gn,"The consumers' willingness to pay plays an important role in economic theory
and in setting policy. For a market, this function can often be estimated from
observed behavior -- preferences are revealed. However, economists would like
to measure consumers' willingness to pay for some goods where this can only be
measured through stated valuation. Confirmed convergence of valuations based on
stated preferences as compared to valuations based on revealed preferences is
rare, and it is important to establish circumstances under which one can expect
such convergence. By building a simple probabilistic model for the consumers'
likelihood of travel, we provide an approach that should make comparing stated
and revealed preferences easier in cases where the preference is tied to travel
or some other behavior whose cost can be measured. We implemented this approach
in a pilot study and found an estimate of willingness to pay for visiting an
environmentally enhanced recreational site based on actual travel in good
agreement with an estimate based on a survey using stated preferences. To use
the probabilistic model we used population statistics to adjust for the
relevant duration and thus compare stated and revealed responses.",Reconciling revealed and stated measures for willingness to pay in recreation by building a probability model,2021-07-30 00:54:38,"Edoh Y. Amiran, Joni S. James Charles","http://arxiv.org/abs/2107.14343v1, http://arxiv.org/pdf/2107.14343v1",econ.GN
32237,gn,"How do matching of spouses and the nature of work jointly shape the
distribution of COVID-19 health risks? To address this question, I study the
association between the incidence of COVID-19 and the degree of spousal sorting
into occupations that differ by contact intensity at the workplace. The
mechanism that I explore implies that a higher degree of positive spousal
sorting mitigates intra-household contagion, and this translates into a smaller
number of individuals exposed to COVID-19 risk. Using U.S. data at the
state level, I argue that spousal sorting is an important factor for
understanding the disparities in the prevalence of COVID-19 during the early
stages of the pandemic. First, I document that it creates about two-thirds of
the U.S. dual-earner couples that are exposed to higher COVID-19 health risk
due to within-household transmission. Moreover, I uncover substantial
heterogeneity in the degree of spousal sorting by state. Next, for the first
week of April 2020, I estimate that a one standard deviation increase in the
measure of spousal sorting is associated with a 30% reduction in the total
number of cases per 100000 inhabitants and a 39.3% decline in the total number
of deaths per 100000 inhabitants. Furthermore, I find substantial temporal
heterogeneity as the coefficients decline in magnitude over time. My results
speak to the importance of policies that allow mitigating intra-household
contagion.",Spousal Occupational Sorting and COVID-19 Incidence: Evidence from the United States,2021-07-30 01:10:42,Egor Malkov,"http://arxiv.org/abs/2107.14350v2, http://arxiv.org/pdf/2107.14350v2",econ.GN
32238,gn,"Why do household saving rates differ so much across countries? This
micro-level question has global implications: countries that systematically
""oversave"" export capital by running current account surpluses. In the
recipient countries, interest rates are thus too low and financial stability is
put at risk. Existing theories argue that saving is precautionary, but tests
are limited to cross-country comparisons and are not always supportive. We
report the findings of an original survey experiment. Using a simulated
financial saving task implemented online, we compare the saving preferences of
a large and diverse sample of Chinese-Canadians with other Canadians. This
comparison is instructive given that Chinese-Canadians migrated from, or
descend from those who migrated from, a high-saving environment to a
low-savings, high-debt environment. We also compare behavior in the presence
and absence of a simulated ""welfare state,"" which we represent in the form of
mandatory insurance. Our respondents exhibit behavior in the saving task that
corresponds to standard economic assumptions about lifecycle savings and risk
aversion. We find strong evidence that precautionary saving is reduced when a
mandatory insurance is present, but no sign that Chinese cultural influences -
represented in linguistic or ethnic terms - have any effect on saving behavior.",Knowing When to Splurge: Precautionary Saving and Chinese-Canadians,2021-08-01 21:40:28,"Mark S. Manger, J. Scott Matthews","http://arxiv.org/abs/2108.00519v1, http://arxiv.org/pdf/2108.00519v1",econ.GN
32239,gn,"This study investigates the causal relationship between patent grants and
firms' dynamics in the Information and Communication Technology (ICT) industry,
as the latter is a peculiar sector of modern economies, often under the lens of
antitrust authorities. For our purpose, we exploit matched information about
financial accounts and patenting activity in 2009-2017 by 179,660 companies
operating in 39 countries. Preliminarily, we show how bigger companies are less
than 2% of the sample, although they concentrate about 89% of the grants
obtained in the period of analysis. We then show that patent grants in the ICT
industry have a significant and large impact on market shares and firm size of
smaller companies (31.5% and 30.7%, respectively) in the first year after the
grants, while we have no evidence of an impact for bigger companies. After a
novel instrumental variable strategy that exploits information at the level of
patent offices, we confirm that most of the effects on smaller companies are
due to the protection of property rights and not to the innovative content of
inventions. Finally, we never observe a significant impact on either
profitability or productivity for any firm size category. Lastly, we
discuss how our findings support the idea that the ICT industry is a case of
endogenous R&D sunk costs, which prevent profit margins from rising in the
presence of a relatively high market concentration.",What do Firms Gain from Patenting? The Case of the Global ICT Industry,2021-08-02 15:30:46,"Dimitrios Exadaktylos, Mahdi Ghodsi, Armando Rungi","http://arxiv.org/abs/2108.00814v5, http://arxiv.org/pdf/2108.00814v5",econ.GN
32240,gn,"Each individual in society experiences an evolution of their income during
their lifetime. Macroscopically, this dynamics creates a statistical
relationship between age and income for each society. In this study, we
investigate income distribution and its relationship with age and identify a
stable joint distribution function for age and income within the United Kingdom
and the United States. We demonstrate a flexible calibration methodology using
panel and population surveys and capture the characteristic differences between
the UK and the US populations. The model here presented can be utilised for
forecasting income and planning pensions.",A generative model for age and income distribution,2021-07-26 15:03:59,"Fatih Ozhamaratli, Oleg Kitov, Paolo Barucca","http://dx.doi.org/10.1140/epjds/s13688-022-00317-x, http://arxiv.org/abs/2108.00848v1, http://arxiv.org/pdf/2108.00848v1",econ.GN
32241,gn,"This study examines the impact of nighttime light intensity on child health
outcomes in Bangladesh. We use nighttime light intensity as a proxy measure of
urbanization and argue that the higher intensity of nighttime light, the higher
is the degree of urbanization, which positively affects child health outcomes.
In econometric estimation, we employ a methodology that combines parametric and
non-parametric approaches using the Gradient Boosting Machine (GBM), K-Nearest
Neighbors (KNN), and Bootstrap Aggregating that originate from machine learning
algorithms. Based on our benchmark estimates, findings show that a one standard
deviation increase in nighttime light intensity is associated with a 1.515-unit rise
in the weight-for-age Z-score after controlling for several control variables.
The maximum increase in the weight-for-height and height-for-age scores ranges from
5.35 to 7.18 units. To further understand our benchmark estimates, generalized
additive models also provide a robust positive relationship between nighttime
light intensity and children's health outcomes. Finally, we develop an economic
model that supports the empirical findings of this study that the marginal
effect of urbanization on children's nutritional outcomes is strictly positive.",Nighttime Light Intensity and Child Health Outcomes in Bangladesh,2021-08-02 17:21:12,"Mohammad Rafiqul Islam, Masud Alam, Munshi Naser İbne Afzal, Sakila Alam","http://arxiv.org/abs/2108.00926v2, http://arxiv.org/pdf/2108.00926v2",econ.GN
32242,gn,"This paper presents evidence of an informational effect in changes of the
federal funds rate around FOMC announcements by exploiting exchange rate
variations for a panel of emerging economies. For several FOMC announcements
dates, emerging market economies' exchange rate strengthened relative to the US
dollar, in contrast to what the standard theory predicts. These results are in
line with the information effect, which denotes the Federal Reserve's disclosure
of information about the state of the economy. Using Jarocinski and Karadi's
(2020) identification scheme relying on sign restrictions and high-frequency
surprises of multiple financial instruments, I show how different US monetary
policy shocks imply different spillovers on emerging markets financial flows
and macroeconomic performance. I emphasize the contrast in dynamics of
financial flows and equity indexes and how different exchange rate regimes
shape aggregate fluctuations. Using a structural DSGE model and IRF-matching
techniques, I argue that ignoring information shocks biases the inference over key
frictions for small open economy models.",US Spillovers of US Monetary Policy: Information effects & Financial Flows,2021-08-02 19:34:16,Santiago Camara,"http://arxiv.org/abs/2108.01026v1, http://arxiv.org/pdf/2108.01026v1",econ.GN
32243,gn,"Social scientists have become increasingly interested in how narratives --
the stories in fiction, politics, and life -- shape beliefs, behavior, and
government policies. This paper provides an unsupervised method to quantify
latent narrative structures in text documents. Our new software package RELATIO
identifies coherent entity groups and maps explicit relations between them in
the text. We provide an application to the United States Congressional Record
to analyze political and economic narratives in recent decades. Our analysis
highlights the dynamics, sentiment, polarization, and interconnectedness of
narratives in political discourse.",RELATIO: Text Semantics Capture Political and Economic Narratives,2021-08-03 22:47:13,"Elliott Ash, Germain Gauthier, Philine Widmer","http://arxiv.org/abs/2108.01720v3, http://arxiv.org/pdf/2108.01720v3",econ.GN
32244,gn,"Why is the U.S. industry-level productivity dispersion countercyclical?
Theoretically, we build a duopoly model in which heterogeneous R&D costs
determine firms' optimal behaviors and the equilibrium technology gap after a
negative profit shock. Quantitatively, we calibrate a parameterized model,
simulate firms' post--shock responses and predict that productivity dispersion
is due to the low-cost firm increasing R&D efforts and the high-cost firm doing
the opposite. Empirically, we construct an index of negative profit shocks and
provide two reduced-form tests for this mechanism.",R&D Heterogeneity and Countercyclical Productivity Dispersion,2021-08-04 23:24:21,"Shuowen Chen, Yang Ming","http://arxiv.org/abs/2108.02272v4, http://arxiv.org/pdf/2108.02272v4",econ.GN
32245,gn,"In this research we perform hedonic regression model to examine the
residential property price determinants in the city of Boulder in the state of
Colorado, USA. The urban housing markets are too compounded to be considered as
homogeneous markets. The heterogeneity of an urban property market requires
creation of market segmentation. To test whether residential properties in the
real estate market in the city of Boulder are analyzed and predicted at the
disaggregate level or at an aggregate level, we stratify the housing market
based on both property types and location and estimate separate hedonic price
models for each submarket. The results indicate that the implicit values of the
property characteristics are not identical across property types and locations
in the city of Boulder and market segmentation exists.","House Price Determinants and Market Segmentation in Boulder, Colorado: A Hedonic Price Approach",2021-08-05 11:19:48,Mahdieh Yazdani,"http://arxiv.org/abs/2108.02442v1, http://arxiv.org/pdf/2108.02442v1",econ.GN
32246,gn,"This work sought to find out the effectiveness of Anambra Broadcasting
Service (ABS) Radio news on teaching and learning. The study focused mainly on
listeners of ABS radio news broadcast in Awka, the capital of Anambra State,
Nigeria. Its objectives were to find out whether Awka-based students are exposed
to ABS radio; to identify students' favourite ABS radio programmes; the need
gratifications that drive students to listen to ABS radio news; the
contributions of radio news to students' teaching and learning; and the
effectiveness of ABS radio news on teaching and learning in Awka. The
population of Awka students, which is also the population of the study, is
198,868; a sample size of 400 was chosen and administered with
questionnaires. The study was grounded in uses and gratification theory and
adopted a survey research design. The data gathered were analyzed using simple
percentages and frequency tables. The study revealed that news is very
effective in teaching and learning. It was concluded that news is the best
instructional medium to be employed in teaching and learning. Among other
things, it was recommended that teachers and students should listen to and make
judicious use of news for academic purposes.",Effectiveness of Anambra Broadcasting Service (ABS) Radio News on Teaching and Learning (a case study of Awka based Students),2021-08-06 06:07:40,Okechukwu Christopher Onuegbu,"http://dx.doi.org/10.14445/2349641X/IJCMS-V8I2P103, http://arxiv.org/abs/2108.02925v1, http://arxiv.org/pdf/2108.02925v1",econ.GN
32951,gn,"Nobel laureates cluster together. 696 of the 727 winners of the Nobel Prize
in physics, chemistry, medicine, and economics belong to one single academic
family tree. 668 trace their ancestry to Emmanuel Stupanus, 228 to Lord
Rayleigh (physics, 1904). Craig Mello (medicine, 2006) counts 51 Nobelists
among his ancestors. Chemistry laureates have the most Nobel ancestors and
descendants, economics laureates the fewest. Chemistry is the central
discipline. Its Nobelists have trained and are trained by Nobelists in other
fields. Nobelists in physics (medicine) have trained (been trained by) others. Economics
stands apart. Openness to other disciplines is the same in recent and earlier
times. The familial concentration of Nobelists is lower now than it used to be.",The Nobel Family,2023-03-10 20:46:57,Richard S. J. Tol,"http://arxiv.org/abs/2303.06106v1, http://arxiv.org/pdf/2303.06106v1",econ.GN
32247,gn,"In this study, I construct a growth model of the Great Divergence, which
formalizes Pomeranz's (2000) hypothesis that the relief of land constraints in
Europe caused divergence in economic growth between Europe and China since the
19th century. The model includes the agricultural and manufacturing sectors.
The agricultural sector produces subsistence goods from land, intermediate
goods made in the manufacturing sector, and labor. The manufacturing sector
produces goods from labor and its productivity grows through learning-by-doing.
Households make fertility decisions. In the model, a large exogenous positive
shock in land supply causes the transition of the economy from the Malthusian
state, in which all workers are engaged in agricultural production and per
capita income is constant, to the non-Malthusian state, in which the share of
workers engaging in agricultural production gradually decreases and per capita
income grows at a roughly constant growth rate. Quantitative predictions of the
model provide several insights into the causes of the Great Divergence.",A Pomeranzian Growth Theory of the Great Divergence,2021-08-06 16:27:20,Shuhei Aoki,"http://arxiv.org/abs/2108.03110v2, http://arxiv.org/pdf/2108.03110v2",econ.GN
32248,gn,"Technologies can help strengthen the resilience of our economy against
existential climate-risks. We investigate climate change adaptation
technologies (CCATs) in US patents to understand (1) historical patterns and
drivers of innovation; (2) scientific and technological requirements to develop
and use CCATs; and (3) CCATs' potential technological synergies with
mitigation. First, in contrast to mitigation, innovation in CCATs only slowly
takes off, indicating a relatively low awareness among investors of solutions to
cope with climate risks. Historical trends in environmental regulation, energy
prices, and public support can be associated with patenting in CCATs. Second,
CCATs form two main clusters: science-intensive ones in agriculture, health,
and monitoring technologies; and engineering-intensive ones in coastal, water,
and infrastructure technologies. Analyses of technology-specific scientific and
technological knowledge bases inform directions for how to facilitate
advancement, transfer and use of CCATs. Lastly, CCATs show strong technological
complementarities with mitigation as more than 25% of CCATs bear mitigation
benefits. While not judging the complementarity of mitigation and
adaptation in general, our results suggest how policymakers can harness these
technological synergies to achieve both goals simultaneously.",Knowledge for a warmer world: a patent analysis of climate change adaptation technologies,2021-08-08 23:04:39,"Kerstin Hötte, Su Jung Jee","http://arxiv.org/abs/2108.03722v2, http://arxiv.org/pdf/2108.03722v2",econ.GN
32249,gn,"Agriculture plays a significant role in economic development of the
underdeveloped region. Multiple factors influence the performance of
agricultural sector but a few of these have a strong bearing on its growth. We
develop a growth diagnostics framework for agricultural sector in Bihar located
in eastern India to identify the most binding constraints. Our results show
that poor functioning of agricultural markets and low level of crop
diversification are the important reasons for lower agricultural growth in
Bihar. A rise in the level of instability in the prices of agricultural produce
indicates weak price transmission across markets even after the repeal of the
agricultural produce market committee act. Poor market linkages and
non-functioning producer collectives at the village level affect farmers'
motivation for undertaking crop diversification. Our policy suggestions include
state provision of basic market infrastructure to attract private investment in
agricultural marketing, strengthening the farmer producer organisations, and a
comprehensive policy on crop diversification.","Agricultural Growth Diagnostics: Identifying the Binding Constraints and Policy Remedies for Bihar, India",2021-08-09 13:10:24,"Elumalai Kannan, Sanjib Pohit","http://arxiv.org/abs/2108.03912v1, http://arxiv.org/pdf/2108.03912v1",econ.GN
32250,gn,"During recent crisis, wage subsidies played a major role in sheltering firms
and households from economic shocks. During COVID-19, most workers were
affected and many liberal welfare states introduced new temporary wage
subsidies to protect workers' earnings and employment (OECD, 2021). New wage
subsidies marked a departure from the structure of traditional income support
payments and required reform. This paper uses simulated datasets to assess the
structure and incentives of the Irish COVID-19 wage subsidy scheme (CWS) under
five designs. We use a nowcasting approach to update 2017 microdata, producing
a near real time picture of the labour market at the peak of the crisis. Using
microsimulation modelling, we assess the impact of different designs on income
replacement, work incentives and income inequality. Our findings suggest that
pro rata designs support middle earners more and flat rate designs support low
earners more. We find evidence for strong work disincentives under all designs,
though flat rate designs perform better. Disincentives are primarily driven by
generous unemployment payments and work related costs. The impact of design on
income inequality depends on the generosity of payments. Earnings-related pro
rata designs were associated with higher market earnings inequality. The
difference in inequality levels falls once benefits, taxes and work related
costs are considered. In our discussion, we turn to transaction costs, the
rationale for reform and reintegration of CWS. We find some support for the
claim that design changes were motivated by political considerations. We
suggest that establishing permanent wage subsidies based on sectoral turnover
rules could offer enhanced protection to middle-and high-earners and reduce
uncertainty, the need for reform, and the risk of politically motivated
designs.",The Structure and Incentives of a COVID related Emergency Wage Subsidy,2021-08-09 20:36:36,"Jules Linden, Cathal O'Donoghue, Denisa M. Sologon","http://arxiv.org/abs/2108.04198v1, http://arxiv.org/pdf/2108.04198v1",econ.GN
32251,gn,"Every year, natural disasters such as earthquake, flood, hurricane and etc.
impose immense financial and humane losses on governments owing to their
unpredictable character and arise of emergency situations and consequently the
reduction of the abilities due to serious damages to infrastructures, increases
demand for logistic services and supplies. First, in this study the necessity
of paying attention to locating procedures in emergency situations is pointed
out and an outline for the studied case of disaster relief supply chain was
discussed and the problem was validated at small scale. On the other hand, to
solve this kind of problems involving three objective functions and complicated
time calculation, meta-heuristic methods which yield almost optimum solutions
in less time are applied. The EC method and NSGA II algorithm are among the
evolutionary multi-objective optimization algorithms applied in this case. In
this study the aforementioned algorithm is used for solving problems at large
scale.",A New Multi Objective Mathematical Model for Relief Distribution Location at Natural Disaster Response Phase,2021-08-12 00:49:38,"Mohamad Ebrahim Sadeghi, Morteza Khodabakhsh, Mahmood Reza Ganjipoor, Hamed Kazemipoor, Hamed Nozari","http://dx.doi.org/10.52547/ijie.1.1.22, http://arxiv.org/abs/2108.05458v1, http://arxiv.org/pdf/2108.05458v1",econ.GN
32952,gn,"We propose a new sorting framework: composite sorting. Composite sorting
comprises (1) distinct worker types assigned to the same occupation, and (2)
a given worker type simultaneously being part of both positive and negative
sorting. Composite sorting arises when fixed investments mitigate variable
costs of mismatch. We completely characterize optimal sorting and additionally
show it is more positive when mismatch costs are less concave. We then
characterize equilibrium wages. Wages have a regional hierarchical structure -
relative wages depend solely on sorting within skill groups. Quantitatively,
composite sorting can generate a sizable portion of within-occupations wage
dispersion in the US.",Composite Sorting,2023-03-12 19:33:00,"Job Boerma, Aleh Tsyvinski, Ruodu Wang, Zhenyuan Zhang","http://arxiv.org/abs/2303.06701v2, http://arxiv.org/pdf/2303.06701v2",econ.GN
32252,gn,"In this work, we explore the relationship between monetary poverty and
production combining relatedness theory, graph theory, and regression analysis.
We develop two measures at product level that capture short-run and long-run
patterns of poverty, respectively. We use the network of related products (or
product space) and both metrics to estimate the influence of the productive
structure of a country in its current and future levels of poverty. We found
that poverty is highly associated with poorly connected nodes in the PS,
especially products based on natural resources. We perform a series of
regressions with several controls (including human capital, institutions,
income, and population) to show the robustness of our measures as predictors of
poverty. Finally, by means of some illustrative examples, we show how our
measures distinguish between nuanced cases of countries with similar poverty
and production and identify possibilities of improving their current poverty
levels.",Identifying poverty traps based on the network structure of economic output,2021-08-12 04:29:39,"Vanessa Echeverri, Juan C. Duque, Daniel E. Restrepo","http://arxiv.org/abs/2108.05488v2, http://arxiv.org/pdf/2108.05488v2",econ.GN
32253,gn,"We examine the effects of introducing a political outsider to the nomination
process leading to an election. To this end, we develop a sequential game where
politicians -- insiders and outsiders -- make a platform offer to a party, and
parties in turn decide which offer to accept; this process conforms the voting
ballot. Embedded in the evaluation of a party-candidate match is partisan
affect, a variable comprising the attitudes of voters towards the party.
Partisan affect may bias the electorate's appraisal of a match in a positive or
negative way. We characterize the conditions that lead to the nomination of an
outsider and determine whether her introduction as a potential candidate has
any effect on the winning policy and on the welfare of voters. We find that the
victory of an outsider generally leads to policy polarization, and that
partisan affect has a more significant effect on welfare than ideology
extremism.",Partisan affect and political outsiders,2021-08-12 22:44:10,Fernanda Herrera,"http://arxiv.org/abs/2108.05943v1, http://arxiv.org/pdf/2108.05943v1",econ.GN
32254,gn,"This paper studies the effects of economies of density in transportation
markets, focusing on ridesharing. Our theoretical model predicts that (i)
economies of density skew the supply of drivers away from less dense regions,
(ii) the skew will be more pronounced for smaller platforms, and (iii)
rideshare platforms do not find this skew efficient and thus use prices and
wages to mitigate (but not eliminate) it. We then develop a general empirical
strategy with simple implementation and limited data requirements to test for
spatial skew of supply from demand. Applying our method to ride-level,
multi-platform data from New York City (NYC), we indeed find evidence for a
skew of supply toward busier areas, especially for smaller platforms. We
discuss the implications of our analysis for business strategy (e.g., spatial
pricing) and public policy (e.g., consequences of breaking up or downsizing a
rideshare platform).",Spatial Distribution of Supply and the Role of Market Thickness: Theory and Evidence from Ride Sharing,2021-08-12 23:21:13,"Soheil Ghili, Vineet Kumar","http://arxiv.org/abs/2108.05954v1, http://arxiv.org/pdf/2108.05954v1",econ.GN
32255,gn,"Quantile regression and quantile treatment effect methods are powerful
econometric tools for considering economic impacts of events or variables of
interest beyond the mean. The use of quantile methods allows for an examination
of impacts of some independent variable over the entire distribution of
continuous dependent variables. Measurement in many quantitative settings in
economic history have as a key input continuous outcome variables of interest.
Among many other cases, human height and demographics, economic growth,
earnings and wages, and crop production are generally recorded as continuous
measures, and are collected and studied by economic historians. In this paper
we describe and discuss the broad utility of quantile regression for use in
research in economic history, review recent quantitative literature in the field,
and provide an illustrative example of the use of these methods based on 20,000
records of human height measured across 50-plus years in the 19th and 20th
centuries. We suggest that there is considerably more room in the literature on
economic history to convincingly and productively apply quantile regression
methods.",The Use of Quantile Methods in Economic History,2021-08-13 07:20:25,"Damian Clarke, Manuel Llorca Jaña, Daniel Pailañir","http://arxiv.org/abs/2108.06055v1, http://arxiv.org/pdf/2108.06055v1",econ.GN
32256,gn,"This study investigates the role of logistics and its six components on trade
flows in selected Economic Community of West Africa States (ECOWAS) countries.
The impact of other macro-economic variables on trade flows was also
investigated. Ten countries were selected in eight years period. We decomposed
trade flows into import and export trade. The World Bank Logistics Performance
Index was used as a measure of logistics performance. The LPI has six
components, and the impact of these components on trade flows were also
examined. The fixed-effect model was used to explain the cross-country result
that was obtained. The results showed that logistics has no significant impact
on either imports or exports; thus logistics plays no role in trade flows among the
selected ECOWAS countries. The components of logistics, except timeliness of
shipments in reaching the final destination (CRC), have no impact on trade
flows. Income was found to be positively related to imports. Exchange rate,
consumption and money supply, reserve and tariff have no significant impact on
imports. Relative import price has an inverse and significant relationship with
imports. GDP has a positive and significant impact on export trade. The study
also found FDI, savings, exchange rate and labour to have insignificant impact
on exports. Finally, we found that logistics is not a driver of trade among the
selected ECOWAS countries. The study recommended the introduction of the single
window system and improvement in border management in order to reduce the cost
associated with Logistics and thereby enhance trade.",Logistics and trade flows in selected ECOWAS Countries: An empirical verification,2021-08-14 04:38:04,Eriamiatoe Efosa Festus,"http://arxiv.org/abs/2108.06441v1, http://arxiv.org/pdf/2108.06441v1",econ.GN
32257,gn,"The results based on the nonparametric nearest neighbor matching suggest a
statistically significant positive effect of the EU ETS on the economic
performance of the regulated firms during Phase I of the EU ETS. A year-by-year
analysis shows that the effect was only significant during the first year of
Phase I. The EU ETS, therefore, had a particularly strong effect when it was
introduced. It is important to note that the EU ETS does not homogeneously
affect firms in the manufacturing sector. We found a significant positive
impact of EU ETS on the economic performance of regulated firms in the paper
industry.",Study Of German Manufacturing Firms: Causal Impact Of European Union Emission Trading Scheme On Firm Behaviour And Economic Performance,2021-08-16 17:38:38,"Nitish Gupta, Ruchir Kaul, Satwik Gupta, Jay Shah","http://arxiv.org/abs/2108.07116v1, http://arxiv.org/pdf/2108.07116v1",econ.GN
32259,gn,"Background and Objective: Different industries go through high-precision and
complex processes that need to analyze their data and discover defects before
growing up. Big data may contain large variables with missed data that play a
vital role to understand what affect the quality. So, specialists of the
process might be struggling to defined what are the variables that have direct
effect in the process. Aim of this study was to build integrated data analysis
using data mining and quality tools to improve the quality of production and
process. Materials and Methods: Data collected in different steps to reduce
missed data. The specialists in the production process recommended to select
the most important variables from big data and then predictor screening was
used to confirm 16 of 71 variables. Seven important variables built the output
variable that called textile quality score. After testing ten algorithms,
boosted tree and random forest were evaluated to extract knowledge. In the
voting process, three variables were confirmed to use as input factors in the
design of experiments. The response of design was estimated by data mining and
the results were confirmed by the quality specialists. Central composite
(surface response) has been run 17 times to extract the main effects and
interactions on the textile quality score. Results: Current study found that a
machine productivity has negative effect on the quality, so this validated by
the management. After applying changes, the efficiency of production has
improved 21%. Conclusion: Results confirmed a big improvement in quality
processes in industrial sector. The efficiency of production improved to 21%,
weaving process improved to 23% and the overall process improved to 17.06%.",Analysis of Data Mining Process for Improvement of Production Quality in Industrial Sector,2021-08-17 16:28:47,"Hamza Saad, Nagendra Nagarur, Abdulrahman Shamsan","http://dx.doi.org/10.3923/jas.2021.10.20, http://arxiv.org/abs/2108.07615v1, http://arxiv.org/pdf/2108.07615v1",econ.GN
32260,gn,"Economists have predicted that damages from global warming will be as low as
2.1% of global economic production for a 3$^\circ$C rise in global average
surface temperature, and 7.9% for a 6$^\circ$C rise. Such relatively trivial
estimates of economic damages -- when these economists otherwise assume that
human economic productivity will be an order of magnitude higher than today --
contrast strongly with predictions made by scientists of significantly reduced
human habitability from climate change. Nonetheless, the coupled economic and
climate models used to make such predictions have been influential in the
international climate change debate and policy prescriptions. Here we review
the empirical work done by economists and show that it severely underestimates
damages from climate change by committing several methodological errors,
including neglecting tipping points, and assuming that economic sectors not
exposed to the weather are insulated from climate change. Most fundamentally,
the influential Integrated Assessment Model DICE is shown to be incapable of
generating an economic collapse, regardless of the level of damages. Given
these flaws, economists' empirical estimates of economic damages from global
warming should be rejected as unscientific, and models that have been
calibrated to them, such as DICE, should not be used to evaluate economic risks
from climate change, or in the development of policy to attenuate damages.",Economists' erroneous estimates of damages from climate change,2021-08-17 22:32:56,"Stephen Keen, Timothy M. Lenton, Antoine Godin, Devrim Yilmaz, Matheus Grasselli, Timothy J. Garrett","http://arxiv.org/abs/2108.07847v1, http://arxiv.org/pdf/2108.07847v1",econ.GN
32261,gn,"The study compares the competitiveness of three Korean groups raised in
different institutional environments: South Korea, North Korea, and China.
Laboratory experiments reveal that North Korean refugees are less likely to
participate in competitive tournaments than South Koreans and Korean-Chinese
immigrants. Analysis using a choice model with probability weighting suggests
that lower cognitive ability may lead to lower expected performance, more
pessimistic beliefs, and greater aversion to competition.",Why North Korean Refugees are Reluctant to Compete: The Roles of Cognitive Ability,2021-08-18 14:38:40,"Syngjoo Choi, Byung-Yeon Kim, Jungmin Lee, Sokbae Lee","http://arxiv.org/abs/2108.08097v2, http://arxiv.org/pdf/2108.08097v2",econ.GN
32262,gn,"Rapid rise in income inequality in India is a serious concern. While the
emphasis is on inclusive growth, it seems difficult to tackle the problem
without looking at the intricacies of the problem. The Social Mobility Index is
an important tool that focuses on bringing long-term equality by identifying
priority policy areas in the country. The PCA technique is employed in
computation of the index. Overall, the Union Territory of Delhi ranks first
with the highest social mobility, while social mobility is lowest in
Chhattisgarh. In addition, health and education access, quality and equity are
key priority areas that can help improve social mobility in India. Thus, we
conclude that human capital is of great importance in promoting social mobility
and development in the present times.",Regional disparities in Social Mobility of India,2021-08-19 20:35:19,Anuradha Singh,"http://arxiv.org/abs/2108.08816v1, http://arxiv.org/pdf/2108.08816v1",econ.GN
32263,gn,"Designing waterfront redevelopment generally focuses on attractiveness,
leisure, and beauty, resulting in various types of building and block shapes
with limited consideration of environmental aspects. However, increasing
climate change impacts necessitate that these buildings be sustainable,
resilient, and have zero CO2 emissions. By producing five scenarios (plus existing
buildings) with constant floor areas, we investigated how building and district
form with building integrated photovoltaics (BIPV) affect energy consumption
and production, self-sufficiency, CO2 emission, and energy costs in the context
of waterfront redevelopment in Tokyo. From estimated hourly electricity demands
of the buildings, techno-economic analyses are conducted for rooftop PV systems
for 2018 and 2030 with declining costs of rooftop PV systems. We found that
environmental building designs with rooftop PV systems are increasingly
economical in Tokyo, with CO2 emission reductions of 2-9% depending on rooftop
sizes. Payback periods drop from 14 years in 2018 to 6 years in 2030. Toward
net-zero CO2 emissions by 2050, immediate actions are necessary to install
rooftop PVs on existing and new buildings with energy efficiency improvements
by construction industry and building owners. To facilitate such actions,
national and local governments need to adopt appropriate policies.",Assessment of waterfront office redevelopment plan on optimal building energy demand and rooftop photovoltaics for urban decarbonization,2021-08-20 10:24:36,"Younghun Choi, Takuro Kobashi, Yoshiki Yamagata, Akito Murayama","http://dx.doi.org/10.3390/en15030883, http://arxiv.org/abs/2108.09029v1, http://arxiv.org/pdf/2108.09029v1",econ.GN
32276,gn,"A recent contribution to research on age and well-being (Blanchflower 2021)
found that the impact of age on happiness is ""u-shaped"" virtually everywhere:
happiness declines towards middle age and subsequently rises, in almost all
countries. This paper evaluates that finding for European countries,
considering whether it is robust to alternative methodological approaches. The
analysis here excludes control variables that are affected by age (noting that
those variables are not themselves antecedents of age) and uses data from the
entire adult age range (rather than using data only from respondents younger
than 70). I also explore the relationship via models that do not impose a
quadratic functional form. The paper shows that these alternate approaches do
not lead us to perceive a u-shape ""everywhere"": u-shapes are evident for some
countries, but for others the pattern is quite different.",Is happiness u-shaped in age everywhere? A methodological reconsideration for Europe,2021-08-31 11:23:53,David Bartram,"http://arxiv.org/abs/2108.13671v2, http://arxiv.org/pdf/2108.13671v2",econ.GN
32264,gn,"COVID-19 pandemic has affected each and every country's health service and
plunged refugees into the most desperate conditions. The plight of Rohingya
refugees is among the harshest. It has severely affected their existing HIV/STI
prevention and management services and further increased the risk of violence
and onward HIV transmission within the camps. In this commentary, we discuss
the context and the changing dynamics of HIV/AIDS during COVID-19 among the
Rohingya refugee community in Bangladesh. What we currently observe is the
worst crisis in the Rohingya refugee camps thus far. Firstly, because of being
displaced, Rohingya refugees have increased vulnerability to HIV, as well as to
STIs and other poor health outcomes. Secondly, for the same reason, they have
inadequate access to HIV testing, treatment and care, not only because of their
refugee status but also because of the poor capacity of the host country to
provide services. Thirdly, a host of complex economic, socio-cultural and
behavioural factors exacerbate their dire situation with access to HIV testing,
treatment and care. And finally, the advent of the COVID-19 pandemic has
changed priorities in all societies, including the refugee camps. In the
context of the unfolding COVID-19 crisis, more emphasis is placed on COVID-19
rather than other health issues, which exacerbates the dire situation with HIV
detection, management, and prevention among Rohingya refugees. Despite the
common crisis experienced by most countries around the world, the international
community has an obligation to work together to improve the life, livelihood,
and health of those who are most vulnerable. Rohingya refugees are among them.",The changing dynamics of HIV/AIDS during the Covid-19 pandemic in the Rohingya refugee camps in Bangladesh a call for action,2021-08-22 14:30:59,"Muhammad Anwar Hossain, Iryna Zablotska-Manos","http://arxiv.org/abs/2108.09690v1, http://arxiv.org/pdf/2108.09690v1",econ.GN
32265,gn,"This paper develops a framework for assessing the welfare effects of labor
income tax changes on married couples. I build a static model of couples' labor
supply that features both intensive and extensive margins and derive a
tractable expression that delivers a transparent understanding of how labor
supply responses, policy parameters, and income distribution affect the
reform-induced welfare gains. Using this formula, I conduct a comparative
welfare analysis of four tax reforms implemented in the United States over the
last four decades, namely the Tax Reform Act of 1986, the Omnibus Budget
Reconciliation Act of 1993, the Economic Growth and Tax Relief Reconciliation
Act of 2001, and the Tax Cuts and Jobs Act of 2017. I find that these reforms
created welfare gains ranging from -0.16 to 0.62 percent of aggregate labor
income. A sizable part of the gains is generated by the labor force
participation responses of women. Although three reforms resulted in aggregate
welfare gains, I show that each reform created both winners and losers.
Furthermore, I uncover two patterns in the relationship between welfare gains
and couples' labor income. In particular, the reforms of 1986 and 2017 display
a monotonically increasing relationship, while the other two reforms
demonstrate a U-shaped pattern. Finally, I characterize the bias in welfare
gains resulting from the assumption about a linear tax function. I consider a
reform that changes tax progressivity and show that the linearization bias is
given by the ratio between the tax progressivity parameter and the inverse
elasticity of taxable income. Quantitatively, it means that linearization
overestimates the welfare effects of the U.S. tax reforms by 3.6-18.1%.",Welfare Effects of Labor Income Tax Changes on Married Couples: A Sufficient Statistics Approach,2021-08-23 10:32:19,Egor Malkov,"http://arxiv.org/abs/2108.09981v2, http://arxiv.org/pdf/2108.09981v2",econ.GN
32266,gn,"Corrections among colleagues are an integral part of group work, but people
may take corrections as personal criticism, especially corrections by women. I
study whether people dislike collaborating with someone who corrects them and
more so when that person is a woman. People, including those with high
productivity, are less willing to collaborate with a person who has corrected
them even if the correction improves group performance. Yet, people respond to
corrections by women as negatively as by men. These findings suggest that
although women do not face a higher hurdle, correcting colleagues is costly and
reduces group efficiency.",Gender Differences in the Cost of Corrections in Group Work,2021-08-23 15:24:27,Yuki Takahashi,"http://arxiv.org/abs/2108.10109v1, http://arxiv.org/pdf/2108.10109v1",econ.GN
32267,gn,"This study reports on the current state-of-affairs in the funding of
entrepreneurship and innovations in China and provides a broad survey of
academic findings on the subject. We also discuss the implications of these
findings for public policies governing the Chinese financial system,
particularly regulations governing the initial public offering (IPO) process.
We also identify and discuss promising areas for future research.",Financing Entrepreneurship and Innovation in China,2021-08-25 01:40:16,"Lin William Cong, Charles M. C. Lee, Yuanyu Qu, Tao Shen","http://arxiv.org/abs/2108.10982v1, http://arxiv.org/pdf/2108.10982v1",econ.GN
32268,gn,"We introduce systematic tests exploiting robust statistical and behavioral
patterns in trading to detect fake transactions on 29 cryptocurrency exchanges.
Regulated exchanges feature patterns consistently observed in financial markets
and nature; abnormal first-significant-digit distributions, size rounding, and
transaction tail distributions on unregulated exchanges reveal rampant
manipulations unlikely driven by strategy or exchange heterogeneity. We
quantify the wash trading on each unregulated exchange, which averaged over 70%
of the reported volume. We further document how these fabricated volumes
(trillions of dollars annually) improve exchange ranking, temporarily distort
prices, and relate to exchange characteristics (e.g., age and userbase), market
conditions, and regulation.",Crypto Wash Trading,2021-08-25 01:48:28,"Lin William Cong, Xi Li, Ke Tang, Yang Yang","http://dx.doi.org/10.2139/ssrn.3530220, http://arxiv.org/abs/2108.10984v1, http://arxiv.org/pdf/2108.10984v1",econ.GN
32269,gn,"The relevance of the topic is dictated by the fact that in recent decades,
the threat to international security emanating from terrorism has increased
many times. Terrorist organizations have become full-fledged subjects of
politics on a par with political parties. In addition, enormous power and
resources are concentrated in the hands of terrorist groups. Terrorist activity
has become a usual way of waging political struggle and expressing social
protest. In addition, terrorism has become a tool in economic competition. Each
terrorist action entails more and more human casualties. It breeds instability,
fear, hatred, and distrust in society. The authors pay special attention to
counter-terrorism activities in the North Caucasus Region.",Prevention of Terrorist Crimes in the North Caucasus Region,2021-08-25 18:10:03,"Ivan Kucherkov, Mattia Masolletti","http://arxiv.org/abs/2108.11287v1, http://arxiv.org/pdf/2108.11287v1",econ.GN
32277,gn,"This note discusses some aspects of interpretations of the theory of optimal
taxation presented in recent works on the Brazilian tax system.",Nota Sobre Algumas Interpretacoes da Teoria de Tributacao Otima,2021-09-01 13:37:43,Jose Ricardo Bezerra Nogueira,"http://arxiv.org/abs/2109.00297v2, http://arxiv.org/pdf/2109.00297v2",econ.GN
32270,gn,"Conventional wisdom suggests that large-scale refugees pose security threats
to the host community or state. With the massive influx of Rohingyas into Bangladesh
in 2017, resulting in a staggering total of 1.6 million Rohingyas, a popular
discourse emerged that Bangladesh would face severe security threats. This
article investigates the security experience of Bangladesh in case of Rohingya
influx over a three-year period, August 2017 to August 2020. The research
question I intend to address is, 'has Bangladesh experienced security threats
due to the massive Rohingya influx?' If so, in what ways? I test four security
threat areas: societal security, economic security, internal security, and
public security. I have used newspaper content analysis over the past three years
along with interview data collected from local people in the Cox's
Bazar area where the Rohingya camps are located. To assess if the threats are
low level, medium level, or high level, I investigated both the frequency of
reports and the way they are interpreted. I find that Bangladesh did not
experience any serious security threats over the last three years. There are
some criminal activities and offenses, but these are only low-level security
threat at best. My research presents empirical evidence that challenges
conventional assertions that refugees are security threats or challenges to the
host states.",Refugees and Host State Security: An Empirical Investigation of Rohingya Refuge in Bangladesh,2021-08-25 20:08:22,Sarwar J. Minar,"http://arxiv.org/abs/2108.11344v1, http://arxiv.org/pdf/2108.11344v1",econ.GN
32271,gn,"The article analyzes the legal framework regulating the legal provision of
transport security in Russia. Special attention is paid to the role of
prosecutor's supervision in the field of prevention of crimes in transport.",Ensuring Transport Security; Features of Legal Regulation,2021-08-26 15:17:33,"Vitaly Khrustalev, Mattia Masolletti","http://arxiv.org/abs/2108.11732v1, http://arxiv.org/pdf/2108.11732v1",econ.GN
32272,gn,"In this study, I propose a five-step algorithm for synthetic control method
for comparative studies. My algorithm builds on the synthetic control model of
Abadie et al., 2015 and the later model of Amjad et al., 2018. I apply all
three methods (robust PCA synthetic control, synthetic control, and robust
synthetic control) to answer the hypothetical question, what would have been
the per capita GDP of West Germany if it had not reunified with East Germany in
1990? I then apply all three algorithms in two placebo studies. Finally, I
check for robustness. This paper demonstrates that my method can outperform the
robust synthetic control model of Amjad et al., 2018 in placebo studies and is
less sensitive to the weights of synthetic members than the model of Abadie et
al., 2015.",Robust PCA Synthetic Control,2021-08-28 04:18:12,Mani Bayani,"http://arxiv.org/abs/2108.12542v2, http://arxiv.org/pdf/2108.12542v2",econ.GN
32273,gn,"I present a structural empirical model of a one-sided one-to-many matching
with complementarities to quantify the effect of subsidy design on endogenous
merger matching. I investigate shipping mergers and consolidations in Japan in
1964. At the time, 95 firms formed six large groups. I find that the existence
of unmatched firms enables us to recover merger costs, and the importance of
technological diversification varies across carrier and firm types. The
counterfactual simulations show that 20% of government subsidy expenditures
could have been cut. The government could have possibly changed the equilibrium
number of groups to between one and six.",Estimating Endogenous Coalitional Mergers: Merger Costs and Assortativeness of Size and Specialization,2021-08-29 06:55:19,Suguru Otani,"http://arxiv.org/abs/2108.12744v5, http://arxiv.org/pdf/2108.12744v5",econ.GN
32274,gn,"The Ballast Water Management Convention can decrease the introduction risk of
harmful aquatic organisms and pathogens, yet the Convention increases shipping
costs and causes subsequent economic impacts. This paper examines whether the
Convention generates disproportionate invasion risk reduction results and
economic impacts on Small Island Developing States (SIDS) and Least Developed
Countries (LDCs). Risk reduction is estimated with an invasion risk assessment
model based on a higher-order network, and the effects of the regulation on
national economies and trade are estimated with an integrated shipping cost and
computable general equilibrium modeling framework. Then we use the Lorenz curve
to examine if the regulation generates risk or economic inequality among
regions. Risk reduction ratios of all regions (except Singapore) are above 99%,
which proves the effectiveness of the Convention. The Gini coefficient of 0.66
shows the inequality in risk changes relative to income levels among regions,
but risk reductions across nations vary, without particularly higher risks for
SIDS and LDCs than for large economies. Similarly, we reveal inequality in
economic impacts relative to income levels (the Gini coefficient is 0.58), but
there is no evidence that SIDS and LDCs are disproportionately impacted
compared to more developed regions. Most changes in GDP, real exports, and real
imports of studied regions are minor (smaller than 0.1%). However, there are
more noteworthy changes for select sectors and trade partners, including Togo,
Bangladesh, and the Dominican Republic, whose exports of textiles, metals, and
chemicals may decrease. We conclude that the Convention decreases biological invasion
risk and does not generate disproportionate negative impacts on SIDS and LDCs.",Economic and environmental impacts of ballast water management on Small Island Developing States and Least Developed Countries,2021-08-30 18:31:36,"Zhaojun Wang, Amanda M. Countryman, James J. Corbett, Mandana Saebi","http://arxiv.org/abs/2108.13315v1, http://arxiv.org/pdf/2108.13315v1",econ.GN
32275,gn,"The Covid-19 pandemic has led to the rise of remote work with consequences
for the global division of work. Remote work could connect labour markets, but
it could also increase spatial polarisation. However, our understanding of the
geographies of remote work is limited. Specifically, does remote work bring
jobs to rural areas or is it concentrating in large cities, and how do skill
requirements affect competition for jobs and wages? We use data from a fully
remote labour market - an online labour platform - to show that remote work is
polarised along three dimensions. First, countries are globally divided: North
American, European, and South Asian remote workers attract most jobs, while
many Global South countries participate only marginally. Second, remote jobs
are pulled to urban regions; rural areas fall behind. Third, remote work is
polarised along the skill axis: workers with in-demand skills attract
profitable jobs, while others face intense competition and obtain low wages.
The findings suggest that remote work is shaped by agglomerative forces, which
are deepening the gap between urban and rural areas. To make remote work an
effective tool for rural development, it needs to be embedded in local
skill-building and labour market programmes.",The global polarisation of remote work,2021-08-30 19:28:10,"Fabian Braesemann, Fabian Stephany, Ole Teutloff, Otto Kässi, Mark Graham, Vili Lehdonvirta","http://dx.doi.org/10.1371/journal.pone.0274630, http://arxiv.org/abs/2108.13356v2, http://arxiv.org/pdf/2108.13356v2",econ.GN
32294,gn,"The study examines the relationship between mobile financial services and
individual financial behavior in India wherein a sizeable population is yet to
be financially included. Addressing the endogeneity associated with the use of
mobile financial services using an instrumental variable method, the study
finds that the use of mobile financial services increases the likelihood of
investment, having insurance and borrowing from formal financial institutions.
Further, the analysis highlights that access to mobile financial services has
the potential to bridge the gender divide in financial inclusion. Hastening the
pace of access to mobile financial services may partially alleviate pandemic-
induced poverty.",Effect of mobile financial services on financial behavior in developing economies-Evidence from India,2021-09-15 08:00:07,Shreya Biswas,"http://arxiv.org/abs/2109.07077v1, http://arxiv.org/pdf/2109.07077v1",econ.GN
32278,gn,"The paper provides energy system-wide estimates of the effects sufficiency
measures in different sectors can have on energy supply and system costs. In
distinction to energy efficiency, we define sufficiency as behavioral changes
to reduce useful energy without significantly reducing utility, for example by
adjusting thermostats. By reducing demand, sufficiency measures are a
potentially decisive but seldomly considered factor to support the
transformation towards a decarbonized energy system. Therefore, this paper
addresses the following question: What is the potential of sufficiency measures
and what are their impacts on the supply side of a 100% renewable energy system?
For this purpose, an extensive literature review is conducted to obtain
estimates for the effects of different sufficiency measures on final energy
demand in Germany. Afterwards, the impact of these measures on the supply side
and system costs is quantified using a bottom-up planning model of a renewable
energy system. Results indicate that final energy could be reduced by up to
20.5% and, as a result, cost reductions between 11.3% and 25.6% are conceivable.
The greatest potential for sufficiency measures was identified in the heating
sector.",The Potential of Sufficiency Measures to Achieve a Fully Renewable Energy System -- A case study for Germany,2021-09-01 18:52:48,"Elmar Zozmann, Mirjam Helena Eerma, Dylan Manning, Gro Lill Økland, Citlali Rodriguez del Angel, Paul E. Seifert, Johanna Winkler, Alfredo Zamora Blaumann, Seyedsaeed Hosseinioun, Leonard Göke, Mario Kendziorski, Christian von Hirschhausen","http://arxiv.org/abs/2109.00453v3, http://arxiv.org/pdf/2109.00453v3",econ.GN
32279,gn,"The concept of transportation demand management (TDM) upholds the development
of sustainable mobility through the triumph of optimally balanced transport
modal share in cities. The modal split management directly reflects on TDM of
each transport subsystem, including parking. In developing countries, the
policy-makers have largely focused on supply-side measures, yet demand-side
measures have remained unaddressed in policy implications. Ample literature is
available presenting responses of TDM strategies, but most studies account mode
choice and parking choice behaviour separately rather than considering
trade-offs between them. Failing to do so may lead to biased model estimates
and impropriety in policy implications. This paper seeks to fill this gap by
admitting parking choice as an endogenous decision within the model of mode
choice behaviour. This study integrates attitudinal factors and
built-environment variables in addition to parking and travel attributes for
developing comprehensive estimation results. A mixed logit model with random
coefficients is estimated using hierarchical Bayes approach based on the Markov
Chain Monte Carlo simulation method. The results reveal a significant influence
of mode/parking-specific attitudes on commuters' choice behaviour, in addition
to built-environment factors and mode/parking-related attributes. It is found
that a considerable shift occurs between parking types, in preference to
switching travel mode, under hypothetical changes in parking attributes. In
addition, the study investigates heterogeneity in willingness-to-pay through a
follow-up regression model, which provides important insights for identifying
possible sources of this heterogeneity among respondents. The study provides
notable results which may be beneficial to planning authorities for improving
TDM strategies, especially in developing
countries.",Analysis of taste heterogeneity in commuters travel decisions using joint parking and mode choice model: A case from urban India,2021-08-26 15:35:03,"Janak Parmar, Gulnazbanu Saiyed, Sanjaykumar Dave","http://dx.doi.org/10.1016/j.tra.2023.103610, http://arxiv.org/abs/2109.01045v4, http://arxiv.org/pdf/2109.01045v4",econ.GN
32280,gn,"This study aims to analyze the impact of the crude oil market on the Toronto
Stock Exchange Index (TSX)c based on monthly data from 1970 to 2021 using
Markov-switching vector autoregressive (MSI-VAR) model. The results indicate
that TSX return contains two regimes, including: positive return (regime 1),
when growth rate of stock index is positive; and negative return (regime 2),
when growth rate of stock index is negative. Moreover, regime 1 is more
volatile than regime 2. The findings also show the crude oil market has
negative effect on the stock market in regime 1, while it has positive effect
on the stock market in regime 2. In addition, we can see this effect in regime
1 more significantly in comparison to regime 2. Furthermore, two period lag of
oil price decreases stock return in regime 1, while it increases stock return
in regime 2.",Detection of Structural Regimes and Analyzing the Impact of Crude Oil Market on Canadian Stock Market: Markov Regime-Switching Approach,2021-08-31 07:43:16,"Mohammadreza Mahmoudi, Hana Ghaneei","http://arxiv.org/abs/2109.01046v3, http://arxiv.org/pdf/2109.01046v3",econ.GN
32281,gn,"Cognition, a component of human capital, is fundamental for decision-making,
and understanding the causes of human capital depreciation in old age is
especially important in aging societies. Using various proxy measures of
cognitive performance from a longitudinal survey in South Africa, we study how
education affects cognition in late adulthood. We show that an extra year of
schooling improves memory performance and general cognition. We find evidence
of heterogeneous effects by gender: the effects are stronger among women. We
explore potential mechanisms, and we show that a more supportive social
environment, improved health habits, and reduced stress levels likely play a
critical role in mediating the beneficial effects of educational attainment on
cognition among the elderly.",Reaping the Rewards Later: How Education Improves Old-Age Cognition in South Africa,2021-09-06 01:29:30,"Plamen Nikolov, Steve Yeh","http://arxiv.org/abs/2109.02177v2, http://arxiv.org/pdf/2109.02177v2",econ.GN
32282,gn,"There is growing awareness within the economics profession of the important
role narratives play in the economy. Even though empirical approaches that try
to quantify economic narratives are getting increasingly popular, there is no
theory or even a universally accepted definition of economic narratives
underlying this research. First, we review and categorize the economic
literature concerned with narratives and work out the different paradigms that
are at play. Only a subset of the literature considers narratives to be active
drivers of economic activity. In order to solidify the foundation of narrative
economics, we propose a definition of collective economic narratives, isolating
five important characteristics. We argue that, for a narrative to be
economically relevant, it must be a sense-making story that emerges in a social
context and suggests action to a social group. We also systematize how a
collective economic narrative differs from a topic and from other kinds of
narratives that are likely to have less impact on the economy. With regard to
the popular use of topic modeling as an empirical strategy, we suggest that the
complementary use of other canonical methods from the natural language
processing toolkit and the development of new methods is indispensable to move
beyond identifying topics and towards true empirical narrative
economics.",Narratives in economics,2021-09-06 13:05:08,"Michael Roos, Matthias Reccius","http://dx.doi.org/10.4419/96973068, http://arxiv.org/abs/2109.02331v2, http://arxiv.org/pdf/2109.02331v2",econ.GN
32295,gn,"The explosive nature of Covid-19 transmission drastically altered the rhythm
of daily life by forcing billions of people to stay at their homes. A critical
challenge facing transportation planners is to identify the type and the extent
of changes in people's activity-travel behavior in the post-pandemic world. In
this study, we investigated the travel behavior evolution by analyzing a
longitudinal two-wave panel survey data conducted in the United States from
April 2020 to October 2020 (wave 1) and from November 2020 to May 2021 (wave 2).
Encompassing nearly 3,000 respondents across different states, we explored
pandemic-induced changes and underlying reasons in four major categories of
telecommute/telemedicine, commute mode choice, online shopping, and air travel.
Based on concrete evidence, our findings substantiate significant observed and
expected changes in habits and preferences. According to the results, nearly
half of employees anticipate having the option to telecommute, and among them
71% expect to work from home at least twice a week after the pandemic. In the
post-pandemic period, auto and transit commuters are expected to be 9% and 31%
fewer than pre-pandemic, respectively. A considerable rise in hybrid work and
grocery/non-grocery online shopping is expected. Moreover, 41% of pre-COVID
business travelers expect to take fewer flights after the pandemic, while only
8% anticipate more than pre-pandemic. Based on our analyses, we discuss
a spectrum of policy implications in all mentioned areas.","The Enduring Effects of COVID-19 on Travel Behavior in the United States: A Panel Study on Observed and Expected Changes in Telecommuting, Mode Choice, Online Shopping and Air Travel",2021-09-16 16:56:49,"Mohammadjavad Javadinasr, Tassio B. Magassy, Ehsan Rahimi, Motahare Mohammadi, Amir Davatgari, Abolfazl Mohammadian, Deborah Salon, Matthew Wigginton Bhagat-Conway, Rishabh Singh Chauhan, Ram M. Pendyala, Sybil Derrible, Sara Khoeini","http://dx.doi.org/10.1016/j.trf.2022.09.019, http://arxiv.org/abs/2109.07988v1, http://arxiv.org/pdf/2109.07988v1",econ.GN
32283,gn,"While the potential for peer-to-peer electricity trading, where households
trade surplus electricity with peers in a local energy market, is rapidly
growing, the drivers of participation in this trading scheme have been
understudied so far. In particular, there is a dearth of research on the role
of non-monetary incentives for trading surplus electricity, despite their
potentially important role. This paper presents the first discrete choice
experiment conducted with prosumers (i.e. proactive households actively
managing their electricity production and consumption) in the Netherlands.
Electricity trading preferences are analyzed regarding economic, environmental,
social and technological parameters, based on survey data (N = 74). The
dimensions most valued by prosumers are the environmental and, to a lesser
extent, economic dimensions, highlighting the key motivating roles of
environmental factors. Furthermore, a majority of prosumers stated they would
provide surplus electricity for free or for non-monetary compensations,
especially to energy-poor households. These observed trends were more
pronounced among members of energy cooperatives. This suggests that
peer-to-peer energy trading can advance a socially just energy transition.
Regarding policy recommendations, these findings point to the need for
communicating environmental and economic benefits when marketing P2P
electricity trading platforms and for technical designs enabling effortless and
customizable transactions.","Keep it green, simple and socially fair: a choice experiment on prosumers' preferences for peer to peer electricity trading in the Netherlands",2021-09-06 16:29:09,"Elena Georgarakis, Thomas Bauwens, Anne-Marie Pronk, Tarek AlSkaif","http://arxiv.org/abs/2109.02452v1, http://arxiv.org/pdf/2109.02452v1",econ.GN
32284,gn,"We fully solve a sorting problem with heterogeneous firms and multiple
heterogeneous workers whose skills are imperfect substitutes. We show that
optimal sorting, which we call mixed and countermonotonic, is comprised of two
regions. In the first region, mediocre firms sort with mediocre workers and
coworkers such that the output losses are equal across all these teams
(mixing). In the second region, a high skill worker sorts with low skill
coworkers and a high productivity firm (countermonotonicity). We characterize
the equilibrium wages and firm values. Quantitatively, our model can generate
the dispersion of earnings within and across US firms.",Sorting with Teams,2021-09-06 23:49:37,"Job Boerma, Aleh Tsyvinski, Alexander P. Zimin","http://arxiv.org/abs/2109.02730v3, http://arxiv.org/pdf/2109.02730v3",econ.GN
32285,gn,"This paper aims to present empirical analysis of Iranian economic growth from
1950 to 2018 using data from the World Bank, Madison Data Bank, Statistical
Center of Iran, and Central Bank of Iran. The results show that Gross Domestic
Product (GDP) per capital increased by 2 percent annually during this time,
however this indicator has had a huge fluctuation over time. In addition, the
economic growth of Iran and oil revenue have close relationship with each
other. In fact, whenever oil crises happen, great fluctuation in growth rate
and other indicators happened subsequently. Even though the shares of other
sectors like industry and services in GDP have increased over time, the oil
sector still plays a key role in the economic growth of Iran. Moreover, growth
accounting analysis shows contribution of capital plays a significant role in
economic growth of Iran. Furthermore, based on growth accounting framework the
steady state of effective capital is 4.27 for Iran's economy.",Identifying the Main Factors of Iran's Economic Growth using Growth Accounting Framework,2021-09-07 03:02:37,Mohammadreza Mahmoudi,"http://dx.doi.org/10.24018/EJBMR, http://arxiv.org/abs/2109.02787v3, http://arxiv.org/pdf/2109.02787v3",econ.GN
32286,gn,"The authors of the article analyze the content of the Eurasian integration,
from the initial initiative to the modern Eurasian Economic Union, paying
attention to the factors that led to the transition from the Customs Union and
the Single Economic Space to a stronger integration association. The main
method of research is historical and legal analysis.",Eurasian Economic Union: Current Concept and Prospects,2021-09-08 16:38:09,"Larisa Kargina, Mattia Masolletti","http://arxiv.org/abs/2109.03644v1, http://arxiv.org/pdf/2109.03644v1",econ.GN
32287,gn,"The recovery of the public transportation system is critical for both social
re-engagement and economic rebooting after a shutdown during a pandemic like
COVID-19. In this study, we focus on the integrated optimization of the service
line reopening plan and timetable design. We model the transit system as a
space-time network in which the number of passengers on each vehicle at any
given time can be represented by arc flow. We then apply a simplified spatial
compartmental model of epidemic (SCME) to each vehicle and platform to model
the spread of the pandemic in the system as our objective, and calculate the
optimal reopening plan and timetable. We demonstrate that this optimization
problem can be decomposed into a simple integer program and a linear
multi-commodity network flow problem using Lagrangian relaxation techniques.
Finally, we test the proposed model using real-world data from the Bay Area
Rapid Transit (BART) and give some useful suggestions to system managers.",Optimizing timetable and network reopen plans for public transportation networks during a COVID19-like pandemic,2021-09-09 00:24:14,"Yiduo Huang, Zuojun Max Shen","http://arxiv.org/abs/2109.03940v1, http://arxiv.org/pdf/2109.03940v1",econ.GN
32288,gn,"The authors of the article analyze the policy of the Russian government in
the field of family support, paying attention to legal programs at the federal
and regional levels. The maternity capital program is considered separately, as
well as measures aimed at supporting large families.",Protection of the Rights of Large Families as One of the Key Tasks of the State's Social Policy,2021-09-09 19:00:03,"Valery Dolgov, Mattia Masolletti","http://arxiv.org/abs/2109.04370v1, http://arxiv.org/pdf/2109.04370v1",econ.GN
32289,gn,"During the last three decades, shrimp has remained one of the major export
items in Bangladesh. It contributes to the development of this country by
enhancing export earnings and promoting employment. However, coastal wetlands
and agricultural lands are used for shrimp culture, which reduces agricultural
opportunity and peasants' incomes, and destroys the mangroves and the coastal
ecosystem. These are external environmental costs that are not reflected in
farmers' price and output decisions. This study aims to estimate those
external environmental costs through the contingent valuation method. The
calculated environmental cost of shrimp farming is USD 13.66 per acre per year.
Findings suggest that current shrimp production and shrimp price will no longer
be optimal once the external costs are internalized. Thus alternative policy
recommendations have been proposed so that shrimp farming becomes a sustainable
and equitable means of aquaculture.",Estimating the Environmental Cost of Shrimp Farming in Coastal Areas of Chittagong and Coxs bazaar in Bangladesh,2021-09-12 06:34:05,"Mohammad Nur Nobi, Dr. A N M Moinul Islam","http://arxiv.org/abs/2109.05416v4, http://arxiv.org/pdf/2109.05416v4",econ.GN
32290,gn,"This study aims to assess the net benefit of the kaptai dam on the Karnafuli
river in Kaptai, Chittagong, Bangladesh. Kaptai Dam, the only hydroelectricity
power source in Bangladesh, provides only 5% electricity demand of Bangladesh.
The Dam is located on the Karnafuli River at Kaptai in Rangamati District, 65
km upstream from Chittagong. It is an earth-fill or embankment dam with a
reservoir with a water storage capacity of 11,000 skm. Though the Dam's primary
purpose is to generate electricity, it became a reservoir of water used for
fishing and tourism. To find the net benefit value and estimate the
environmental costs and benefits, we considered the environmental net benefit
from 1962 to 1997. We identify the costs of Kaptai Dam, including its
establishment cost, operational costs, the costs of lives that have been lost
due to conflicts, and environmental costs, including loss of biodiversity, loss
of land uses, and human displacement. Also, we assess the benefits of
electricity production, earnings from fisheries production, and gain from
tourism to Kaptai Lake. The findings show that the Dam contributes tremendous
value to Bangladesh. As a source of hydroelectricity, the Kaptai Dam is a
source of clean energy, and its value might have been worthwhile had the Dam
produced a significant portion of the country's electricity. However, providing
less than 5% of the national electricity demand while entailing various
external and sensitive costs, the Dam hardly contributes to the Bangladesh economy. This
study thus recommends that Bangladesh should look for other sources of clean
energy that have no chances of eco-political conflicts.","Cost-Benefit Analysis of Kaptai Dam in Rangamati District, Chittagong, Bangladesh",2021-09-12 06:47:45,Mohammad Nur Nobi,"http://arxiv.org/abs/2109.05419v3, http://arxiv.org/pdf/2109.05419v3",econ.GN
32291,gn,"A significant proportion of slum residents offer vital services that are
relied upon by wealthier urban residents. However, the lack of access to clean
drinking water and adequate sanitation facilities causes considerable health
risks for slum residents, leading to interruption in services and potential
transmission of diseases to the beneficiaries. This study explores the
willingness of the households benefitting from these services to contribute
financially towards the measures that can mitigate the negative externalities
of the diseases resulting from poor water and sanitation in slums. This study
adopts the Contingent Valuation Method using face-to-face interviews with 260
service-receiving households in Chittagong City Corporation of Bangladesh.
Estimating the logistic regression model, the findings indicate that 74 percent
of respondents express their willingness to contribute financially towards an
improvement of water and sanitation facilities in the slums. Within this group,
16 percent are willing to pay 1.88 USD/month, 18 percent prefer 3.86 USD/year,
and 40 percent are willing to contribute a lump sum of 3.92 USD. The empirical
findings suggest a significant influence of gender, college education, and
housemaids' working hours in the households on respondents' willingness to pay. For example,
female respondents with a college degree and households with longer working
hours of housemaids are more likely to contribute towards the improvement of
the water and sanitation facilities in slums. Though the findings are
statistically significant at a 5% level across different estimated models, the
regression model exhibits a low goodness of fit.","Willingness to Pay to Prevent Water and Sanitation-Related Diseases Suffered by Slum Dwellers and Beneficiary Households: Evidence from Chittagong, Bangladesh",2021-09-12 06:56:16,Mohammad Nur Nobi,"http://arxiv.org/abs/2109.05421v3, http://arxiv.org/pdf/2109.05421v3",econ.GN
32292,gn,"The article analyzes the population's assessment of their own health and
attitude to a healthy lifestyle in the context of distribution by age groups.
Of particular interest is the presence of transformations taking into account
the complex epidemiological situation, the increase in the incidence of
coronavirus infection in the population (the peak of the incidence came during
the period of selective observation in 2020). The article assesses the
closeness of the relationship between the respondents' belonging to a
particular socio-demographic group and their social well-being during the
period of self-isolation, quarantine or other restrictions imposed during the
coronavirus pandemic in 2020. To solve this problem, the demographic and
socio-economic characteristics of respondents are presented, the distribution
of responses according to the survey results is estimated and the most
significant factor characteristics are selected. The distributions of
respondents' responses are presented for the selected questions. To determine
the closeness of the relationship between the respondents' answers to the
question and their gender or age distribution, the coefficients of mutual
conjugacy and rank correlation coefficients were calculated and analyzed. The
ultimate goal of the analytical component of this study is to determine the
social well-being of the Russian population during the pandemic on the basis of
sample survey data. As a result of the analysis of changes for the period
2019-2020, the assessment of the closeness of association revealed the
parameters that form differences (gender, wealth, territory of residence).",The state of health of the Russian population during the pandemic (according to sample surveys),2021-09-13 15:40:31,"Leysan Anvarovna Davletshina, Natalia Alekseevna Sadovnikova, Alexander Valeryevich Bezrukov, Olga Guryevna Lebedinskaya","http://arxiv.org/abs/2109.05917v1, http://arxiv.org/pdf/2109.05917v1",econ.GN
32293,gn,"Given limited supply of approved vaccines and constrained medical resources,
design of a vaccination strategy to control a pandemic is an economic problem.
We use time-series and panel methods with real-world country-level data to
estimate effects on COVID-19 cases and deaths of two key elements of mass
vaccination - time between doses and vaccine type. We find that new infections
and deaths are both significantly negatively associated with the fraction of
the population vaccinated with at least one dose. Conditional on first-dose
coverage, an increased fraction with two doses appears to offer no further
reductions in new cases and deaths. For vaccines from China, however, we find
significant effects on both health outcomes only after two doses. Our results
support a policy of extending the interval between first and second doses of
vaccines developed in Europe and the US. As vaccination progresses, population
mobility increases, which partially offsets the direct effects of vaccination.
This suggests that non-pharmaceutical interventions remain important to contain
transmission as vaccination is rolled out.",Vaccination strategies and transmission of COVID-19: evidence across advanced countries,2021-09-14 08:43:03,"Dongwoo Kim, Young Jun Lee","http://arxiv.org/abs/2109.06453v2, http://arxiv.org/pdf/2109.06453v2",econ.GN
32302,gn,"We study the relationship between foreign debt and GDP growth using a panel
dataset of 50 countries from 1997 to 2015. We find that economic growth
correlates positively with foreign debt and that the relationship is causal in
nature by using the sovereign credit default swap spread as an instrumental
variable. Furthermore, we find that foreign debt increases investment and then
GDP growth in subsequent years. Our findings suggest that lower sovereign
default risks lead to higher foreign debt contributing to GDP growth more in
OECD than non-OECD countries.",Does Foreign Debt Contribute to Economic Growth?,2021-09-22 07:54:41,"Tomoo Kikuchi, Satoshi Tobe","http://arxiv.org/abs/2109.10517v3, http://arxiv.org/pdf/2109.10517v3",econ.GN
32296,gn,"The existing theorization of development economics and transition economics
is probably inadequate and perhaps even flawed to accurately explain and
analyze a dual economic system such as that in China. China is a country in the
transition of its dual structure and system. The reform of its economic system
has involved a long period of transformation. Owing to the dualistic system,
the allocation of factors is subject to the dual regulation of planning (or
administration) and the market, and thus signal distortion is a common
occurrence. From the perspective of balanced and safe growth, the
institutional distortions of population birth, population flow, land
transaction and housing supply, together with changes in exports, may greatly
influence production and demand, including the iterative contraction of
consumption, the increase in export competition costs, the widening of the
urban-rural income gap, the transfer of residents' income and the crowding
out of consumption. In view of the worldwide shift from a conservative model
with more income than expenditure to the debt-based model with more expenditure
than income and the need for loose monetary policy, we must explore a basic
model that includes variables of debt and land assets that affect the money
supply and price changes, especially in China, where the current debt ratio is
high and is likely to rise continuously. Based on such a logical framework of
dualistic system economics and its analysis method, a preliminary calculation
system is formed through the establishment of models.",An Economic Analysis on the Potential and Steady Growth of China: a Practice Based on the Dualistic System Economics in China,2021-09-16 19:38:29,Tianyong Zhou,"http://arxiv.org/abs/2109.08099v2, http://arxiv.org/pdf/2109.08099v2",econ.GN
32297,gn,"How does food consumption improve educational outcomes is an important policy
issue for developing countries. Applying the Indonesian Family Life Survey
(IFLS) 2014, we estimate the returns of food consumption to education and
investigate if more educated individuals tend to consume healthier bundles than
less-educated individuals do. We implement the Expected Outcome Methodology,
which is similar to Average Treatment on The Treated (ATT) conceptualized by
Angrist and Pischke (2009). We find that education tends to tilt consumption
towards healthier foods. Specifically, individuals with upper secondary or
higher levels of education, on average, consume 31.5% more healthy foods than
those with lower secondary education or lower levels of education. With respect
to unhealthy food consumption, more highly-educated individuals, on average,
consume 22.8% less unhealthy food than less-educated individuals. This suggests
that education can increase the inequality in the consumption of healthy food
bundles. Our study suggests that it is important to design policies to expand
education for all for at least up to higher secondary level in the context of
Indonesia. Our finding also speaks to the link between the food-health gradient and
human capital formation for a developing country such as Indonesia.",Education and Food Consumption Patterns: Quasi-Experimental Evidence from Indonesia,2021-09-16 20:25:13,"Dr Mohammad Rafiqul Islam, Dr Nicholas Sim","http://arxiv.org/abs/2109.08124v1, http://arxiv.org/pdf/2109.08124v1",econ.GN
32298,gn,"In the General Theory, Keynes remarked that the economy's state depends on
expectations, and that these expectations can be subject to sudden swings. In
this work, we develop a multiple equilibria behavioural business cycle model
that can account for demand or supply collapses due to abrupt drops in consumer
confidence, which affect both consumption propensity and investment. We show
that, depending on the model parameters, four qualitatively different outcomes
can emerge, characterised by the frequency of capital scarcity and/or demand
crises. In the absence of policy measures, the duration of such crises can
increase by orders of magnitude when parameters are varied, as a result of the
""paradox of thrift"". Our model suggests policy recommendations that prevent the
economy from getting trapped in extended stretches of low output, low
investment and high unemployment.",Economic Crises in a Model with Capital Scarcity and Self-Reflexive Confidence,2021-09-20 12:16:54,"Federico Guglielmo Morelli, Karl Naumann-Woleske, Michael Benzaquen, Marco Tarzia, Jean-Philippe Bouchaud","http://arxiv.org/abs/2109.09386v1, http://arxiv.org/pdf/2109.09386v1",econ.GN
32299,gn,"Using data from World Bank Enterprises Survey 2014, we find that having a
female owner in India increases firm innovation probability using both input
and output indicators of innovation. We account for possible endogeneity of
the female owner variable using a two-stage instrumental variable probit model.
We find that the positive effect of the female owner variable is observed in
the sub-samples of firms with more access to internal funding, young firms,
and firms located in regions with little or no crime. This study highlights the need
to promote female entrepreneurship as a potential channel for promoting firm
innovation in India.",She Innovates- Female owner and firm innovation in India,2021-09-20 15:59:54,Shreya Biswas,"http://arxiv.org/abs/2109.09515v1, http://arxiv.org/pdf/2109.09515v1",econ.GN
32300,gn,"We study how overreaction and underreaction to signals depend on their
informativeness. While a large literature has studied belief updating in
response to highly informative signals, people in important real-world settings
are often faced with a steady stream of weak signals. We use a tightly
controlled experiment and new empirical evidence from betting and financial
markets to demonstrate that updating behavior differs meaningfully by signal
strength: across domains, our consistent and robust finding is overreaction to
weak signals and underreaction to strong signals. Both sets of results align
well with a simple theory of cognitive imprecision about signal
informativeness. Our framework and findings can help harmonize apparently
contradictory results from the experimental and empirical literatures.",Overinference from Weak Signals and Underinference from Strong Signals,2021-09-21 01:33:20,"Ned Augenblick, Eben Lazarus, Michael Thaler","http://arxiv.org/abs/2109.09871v4, http://arxiv.org/pdf/2109.09871v4",econ.GN
32301,gn,"This paper studies the content of central bank speech communication from 1997
through 2020 and asks the following questions: (i) What global topics do
central banks talk about? (ii) How do these topics evolve over time? I turn to
natural language processing, and more specifically Dynamic Topic Models, to
answer these questions. The analysis consists of an aggregate study of nine
major central banks and a case study of the Federal Reserve, which allows for
region specific control variables. I show that: (i) Central banks address a
broad range of topics. (ii) The topics are well captured by Dynamic Topic
Models. (iii) The global topics exhibit strong and significant autoregressive
properties not easily explained by financial control variables.",Evolution of topics in central bank speech communication,2021-09-21 12:57:18,Magnus Hansson,"http://arxiv.org/abs/2109.10058v1, http://arxiv.org/pdf/2109.10058v1",econ.GN
32303,gn,"We present a circularity transition index based on open data principles and
circularity of energy, material, and information. The aim of the Circular City
Index is to provide data and a succinct measurement of the attributes related
to municipalities' performance that can support the definition of green
policies at the national and local level. We have identified a set of key
performance indicators, defined at the municipality level, measuring factors
that, directly and indirectly, could influence circularity and the green
transition, with a focus on the green new deal vision embraced by the European
Union. The CCI is tested on an open dataset that covers 100% of the Italian
municipalities (7,904). Our results show that the computation of the CCI on a
large sample leads to a normal distribution of the index, suggesting
disparities from both the territorial point of view and the point of view of
city size. Results provide useful information to practitioners, policy makers,
and experts from academia alike, to define effective tools able to
underpin a careful planning of investments supported by the national recovery
and resilience plan recently issued by the Italian government. This may be
particularly useful to enhance enabling factors of the green transition that
may differ across territories, helping policymakers to promote a smooth and
fair transition by fostering the preparedness of municipalities in addressing
the challenge.",Circular City Index: An Open Data analysis to assess the urban circularity preparedness of cities to address the green transition -- A study on the Italian municipalities,2021-09-22 19:46:27,"Alessio Muscillo, Simona Re, Sergio Gambacorta, Giuseppe Ferrara, Nicola Tagliafierro, Emiliano Borello, Alessandro Rubino, Angelo Facchini","http://arxiv.org/abs/2109.10832v1, http://arxiv.org/pdf/2109.10832v1",econ.GN
32304,gn,"Most online markets establish reputation systems to assist building trust
between sellers and buyers. Sellers' reputations not only provide guidelines
for buyers but may also inform sellers of their optimal pricing strategy. In this
research, we assumed two types of buyers: informed buyers and uninformed buyers.
Informed buyers know more about the reputation of the seller but may incur a
search cost. We then developed a benchmark model and a competition model. We
found that high-reputation sellers and low-reputation sellers adopt different
pricing strategies depending on the informativeness of buyers and the
competition among sellers. With a large proportion of informed buyers,
high-reputation sellers may charge lower prices than low-reputation sellers,
which constitutes a negative price premium effect, in contrast to the
conclusions of some previous studies. Empirical findings were consistent with our theoretical models. We
collected data on five categories of products (televisions, laptops, cosmetics,
shoes, and beverages) from Taobao, a leading Chinese C2C online marketplace.
A negative price premium effect was observed for TVs, laptops, and cosmetics; a
price premium effect was observed for beverages; no significant trend was
observed for shoes. We infer that product value and market complexity are the main
factors of buyer informativeness.",Reputation dependent pricing strategy: analysis based on a Chinese C2C marketplace,2021-09-26 05:46:57,"Zehao Chen, Yanchen Zhu, Tianyang Shen, Yufan Ye","http://arxiv.org/abs/2109.12477v1, http://arxiv.org/pdf/2109.12477v1",econ.GN
32305,gn,"While extensive, research on policing in America has focused on documented
actions such as stops and arrests -- less is known about patrolling and
presence. We map the movements of over ten thousand police officers across
twenty-one of America's largest cities by combining anonymized smartphone data
with station and precinct boundaries. Police spend considerably more time in
Black neighborhoods, a disparity which persists after controlling for density,
socioeconomics, and crime-driven demand for policing. Our results suggest that
roughly half of observed racial disparities in arrests are associated with this
exposure disparity, which is lower in cities with more supervisor (but not
officer) diversity.",Smartphone Data Reveal Neighborhood-Level Racial Disparities in Police Presence,2021-09-26 07:50:42,"M. Keith Chen, Katherine L. Christensen, Elicia John, Emily Owens, Yilin Zhuo","http://arxiv.org/abs/2109.12491v2, http://arxiv.org/pdf/2109.12491v2",econ.GN
32306,gn,"Comparing the results for preference attainment, self-perceived influence and
reputational influence, this paper analyzes the relationship between financial
resources and lobbying influence. The empirical analysis builds on data from an
original survey with 312 Swiss energy policy stakeholders combined with
document data from multiple policy consultation submission processes. The
results show that the distribution of influence varies substantially depending
on the measure. While financial resources for political purposes predict
influence across all measures, the relationship is positive only for some. An
analysis of indirect effects sheds light on the potential mechanisms that
translate financial resources into influence.","Lobbying Influence -- The Role of Money, Strategies and Measurements",2021-09-28 14:56:11,"Fintan Oeri, Adrian Rinscheid, Aya Kachi","http://arxiv.org/abs/2109.13928v1, http://arxiv.org/pdf/2109.13928v1",econ.GN
32307,gn,"Growth in the global human population this century will have momentous
consequences for societies and the environment. Population growth has come with
higher aggregate human welfare, but also climate change and biodiversity loss.
Based on the well-established empirical association and plausible causal
relationship between economic and population growth, we devised a novel method
for forecasting population based on Gross Domestic Product (GDP) per capita.
Although not mechanistically causal, our model is intuitive, transparent,
replicable, and grounded on historical data. Our central finding is that a
richer world is likely to be associated with a lower population, an effect
especially pronounced in rapidly developing countries. In our baseline
scenario, where GDP per capita follows a business-as-usual trajectory, global
population is projected to reach 9.2 billion in 2050 and peak in 2062. With 50%
higher annual economic growth, population peaks even earlier, in 2056, and
declines to below 8 billion by the end of the century. Without any economic
growth after 2020, however, the global population will grow to 9.9 billion in
2050 and continue rising thereafter. Economic growth has the largest effect on
low-income countries. The gap between the highest and lowest GDP scenarios
reaches almost 4 billion by 2100. Education and family planning are important
determinants of population growth, but economic growth is also likely to be a
driver of slowing population growth by changing incentives for childbearing.
Since economic growth could slow population growth, it will offset
environmental impacts stemming from higher per-capita consumption of food,
water, and energy, and work in tandem with technological innovation.",Constrained scenarios for twenty-first century human population size based on the empirical coupling to economic growth,2021-09-29 09:22:43,"Barry W. Brook, Jessie C. Buettel, Sanghyun Hong","http://arxiv.org/abs/2109.14209v1, http://arxiv.org/pdf/2109.14209v1",econ.GN
32308,gn,"The Sundarban Reserve Forest (SRF) of Bangladesh provides tourism services to
local and international visitors. Indeed, tourism is one of the major ecosystem
services that this biodiversity-rich mangrove forest provides. Through a
convenience sampling technique, 421 tourist respondents were interviewed to
assess their willingness to pay for the tourism services of the Sundarban,
using the Zonal Travel Cost Method (ZTCM). The estimated annual economic
contribution of tourism in the Sundarban mangroves to the Bangladesh economy is
USD 53 million. The findings of this study showed that facilities for watching
wildlife and walking inside the forest can increase the number of tourists in
the SRF. The findings also show that the availability of information like
forest maps, wildlife precautionary signs, and danger zones would increase the
number of tourists as well. Thus, the government of Bangladesh should consider
increasing visitor entry fees to fund improvements and to enhance the
ecotourism potential of the Sundarban mangroves.","Economic valuation of tourism of the Sundarban Mangroves, Bangladesh",2021-10-01 05:56:21,"Mohammad Nur Nobi, A. H. M. Raihan Sarker, Biswajit Nath, Eivin Røskaft, Ma Suza, Paul Kvinta","http://dx.doi.org/10.5897/JENE2021.0910, http://arxiv.org/abs/2110.00182v1, http://arxiv.org/pdf/2110.00182v1",econ.GN
32309,gn,"We present an integrated database suitable for the investigations of the
Economic development of countries by using the Economic Fitness and Complexity
framework. Firstly, we implement machine learning techniques to reconstruct the
database of Trade of Services and we integrate it with the database of the
Trade of the physical Goods, generating a complete view of the International
Trade and denoted the Universal database. Using this data, we derive a
statistically significant network of interaction of the Economic activities,
where preferred paths of development and clusters of High-Tech industries
naturally emerge. Finally, we compute the Economic Fitness, an algorithmic
assessment of the competitiveness of countries, removing the unexpected
misbehaviour of Economies under-represented when considering only the
Trade of the physical Goods.",Universal Database for Economic Complexity,2021-10-01 13:25:16,"Aurelio Patelli, Andrea Zaccaria, Luciano Pietronero","http://dx.doi.org/10.1038/s41597-022-01732-5, http://arxiv.org/abs/2110.00302v1, http://arxiv.org/pdf/2110.00302v1",econ.GN
32310,gn,"We develop a tractable macroeconomic model that captures dynamic behaviors
across multiple timescales, including business cycles. The model is anchored in
a dynamic capital demand framework reflecting an interactions-based process
whereby firms determine capital needs and make investment decisions at the
micro level. We derive equations for aggregate demand from this micro setting
and embed them in the Solow growth economy. As a result, we obtain a
closed-form dynamical system with which we study economic fluctuations and
their impact on long-term growth. For realistic parameters, the model has two
attracting equilibria: one at which the economy contracts and one at which it
expands. This bi-stable configuration gives rise to quasiperiodic fluctuations,
characterized by the economy's prolonged entrapment in either a contraction or
expansion mode punctuated by rapid alternations between them. We identify the
underlying endogenous mechanism as a coherence resonance phenomenon. In
addition, the model admits a stochastic limit cycle likewise capable of
generating quasiperiodic fluctuations; however, we show that these fluctuations
cannot be realized as they induce unrealistic growth dynamics. We further find
that while the fluctuations powered by coherence resonance can cause
substantial excursions from the equilibrium growth path, such deviations vanish
in the long run as supply and demand converge.",Capital Demand Driven Business Cycles: Mechanism and Effects,2021-09-30 09:50:50,"Karl Naumann-Woleske, Michael Benzaquen, Maxim Gusev, Dimitri Kroujiline","http://arxiv.org/abs/2110.00360v3, http://arxiv.org/pdf/2110.00360v3",econ.GN
32311,gn,"Income inequality is a distributional phenomenon. This paper examines the
impact of U.S. governors' party allegiance (Republican vs. Democrat) on the
ethnic wage gap. A descriptive analysis of the distribution of yearly earnings of
Whites and Blacks reveals a divergence in their respective shapes over time
suggesting that aggregate analysis may mask important heterogeneous effects.
This motivates a granular estimation of the comparative causal effect of
governors' party affiliation on labor market outcomes. We use a regression
discontinuity design (RDD) based on marginal electoral victories and samples of
quantile groups by wage and hours worked. Overall, the distributional causal
estimates show that the earnings of the vast majority of subgroups of Black
workers are not affected by Democrat governors' policies, suggesting the
possible existence of structural factors in the labor markets that contribute
to creating and maintaining a wage trap and/or hours-worked trap for most
subgroups of Black workers. Democrat governors increase the number of hours
worked by Black workers at the highest quartiles of earnings. A bivariate
quantile-group analysis shows that Democrats decrease the total hours worked
for Black workers who work the most hours and earn the least. Black workers
earning more and working fewer hours than half of the sample see their number
of hours worked increase under a Democrat governor.",The Forest Behind the Tree: Heterogeneity in How US Governor's Party Affects Black Workers,2021-10-01 18:44:56,"Guy Tchuente, Johnson Kakeu, John Nana Francois","http://arxiv.org/abs/2110.00582v1, http://arxiv.org/pdf/2110.00582v1",econ.GN
32312,gn,"Flexibility options, such as demand response, energy storage and
interconnection, have the potential to reduce variation in electricity prices
between different future scenarios, therefore reducing investment risk.
Moreover, investment in flexibility options can lower the need for generation
capacity. However, there are complex interactions between different flexibility
options. In this paper, we investigate the interactions between flexibility and
investment risk in electricity markets. We employ a large-scale stochastic
transmission and generation expansion model of the European electricity system.
Using this model, we first investigate the effect of risk aversion on the
investment decisions. We find that the interplay of parameters leads to (i)
more investment in a less emission-intensive energy system if planners are risk
averse (hedging against CO2 price uncertainty) and (ii) constant total
installed capacity, regardless of the level of risk aversion (planners do not
hedge against demand and RES deployment uncertainties). Second, we investigate
the individual effects of three flexibility elements on optimal investment
levels under different levels of risk aversion: demand response, investment in
additional interconnection capacity and investment in additional energy
storage. We find that flexible technologies have a higher value for
risk-averse decision-makers, although the effects are nonlinear. Finally, we
investigate the interactions between the flexibility elements. We find that
risk-averse decision-makers show a strong preference for transmission grid
expansion once flexibility is available at low cost levels.",Risk aversion in flexible electricity markets,2021-10-08 15:32:46,"Thomas Möbius, Iegor Riepin, Felix Müsgens, Adriaan H. van der Weijde","http://arxiv.org/abs/2110.04088v1, http://arxiv.org/pdf/2110.04088v1",econ.GN
32313,gn,"This paper studies gross labour market flows and determinants of labour
market transitions for urban Indian workers using a panel dataset constructed
from Indian Periodic Labour Force Survey (PLFS) data for the period 2017--18 to
2019--20. Longitudinal studies based on the PLFS have been hampered by data
problems that prevent a straightforward merging of the 2017--18 and 2018--19
data releases. In this paper, we propose and validate a matching procedure
based on individual and household characteristics that can successfully link
almost all records across these two years. We use the constructed data set to
document a number of stylised facts about gross worker flows and to estimate
the effects of different individual characteristics and work histories on
probabilities of job gain and loss.",Indian urban workers' labour market transitions,2021-10-11 11:22:25,Jyotirmoy Bhattacharya,"http://dx.doi.org/10.1007/s41027-023-00434-9, http://arxiv.org/abs/2110.05482v2, http://arxiv.org/pdf/2110.05482v2",econ.GN
32314,gn,"This paper proposes a stylised model to address the issue of online sorting.
There are two large homogeneous groups of individuals. Everyone must choose
between two online platforms, one of which has superior amenities. Each
individual enjoys interacting online with those from their own group but
dislikes being on the same platform as those in the other group. Unlike a
Tiebout model of residential sorting, both platforms have unlimited capacity so
at any moment anyone is free to switch. We find that an online platform is
tipped from integrated to segregated by a combination of the current Schelling
ratio and the absolute numbers of each group on each platform. That is, it is
tipping sets and not tipping points that matter. In certain cases, the flight
of one group from a platform can be triggered by a change in the group ratio in
favor of those in the group that leave. If online integration of the two
communities is the desired outcome then the optimal policy is clear: make the
preferred platform even more desirable; revitalizing the inferior platform will
never lead to integration. Finally, integration is more elastic in response to
changes in neighborhood characteristics than to reductions in intolerance.",Tiebout Meets Schelling Online: Sorting in Cybercommunities,2021-10-08 18:03:20,"John Lynham, Philip R Neary","http://arxiv.org/abs/2110.05608v2, http://arxiv.org/pdf/2110.05608v2",econ.GN
32315,gn,"The empirical evidence suggests that key accumulation decisions and risky
choices associated with economic development depend, at least in part, on
economic preferences such as willingness to take risk and patience. This paper
studies whether temperature could be one of the potential channels that
influences such economic preferences. Using data from the Indonesia Family Life
Survey and NASAs Modern Era Retrospective Analysis for Research and
Applications data we exploit quasi exogenous variations in outdoor temperatures
caused by the random allocation of survey dates. This approach allows us to
estimate the effects of temperature on elicited measures of risk aversion,
rational choice violations, and impatience. We then explore three possible
mechanisms behind this relationship, cognition, sleep, and mood. Our findings
show that higher temperatures lead to significantly increased rational choice
violations and impatience, but do not significantly increase risk aversion.
These effects are mainly driven by night time temperatures on the day prior to
the survey and less so by temperatures on the day of the survey. This impact is
quasi-linear and increasing when midnight outdoor temperatures are above 22°C.
The evidence shows that night time temperatures significantly deplete cognitive
functioning, mathematical skills in particular. Based on these findings we
posit that heat induced night time disturbances cause stress on critical parts
of the brain, which then manifest in significantly lower cognitive functions
that are critical for individuals to perform economically rational decision
making.",Heat and Economic Preferences,2021-10-05 09:21:23,"Michelle Escobar Carias, David Johnston, Rachel Knott, Rohan Sweeney","http://arxiv.org/abs/2110.05611v2, http://arxiv.org/pdf/2110.05611v2",econ.GN
32316,gn,"The UN states that inequalities are determined along with income by other
factors - gender, age, origin, ethnicity, disability, sexual orientation,
class, and religion. India has had socio-political stratification since the
ancient period, which has induced socio-economic inequality that continues to
this day. There have been attempts to reduce socio-economic inequality through
policy interventions since the first plan, yet there is still evidence of
social and economic discrimination. This paper examines earning gaps between
forward castes and traditionally disadvantaged caste workers in the Indian
labour market using two distinct estimation methods. First, we use the Theil
inequality index and decompose it to show within- and between-group
inequalities. Second, a Threefold Oaxaca Decomposition is
employed to break the earnings differentials into components of endowment,
coefficient and interaction. Earnings gaps are examined separately in urban and
rural divisions. Within-group inequalities are found to be larger than
between-group inequalities across variables, with higher overall inequality for
forward castes. A high endowment component is observed, which implies
pre-market discrimination in human
capital investment such as nutrition and education. Policymakers should first
invest in basic quality education and simultaneously expand post-graduate
diploma opportunities, subsequently increasing the participation in the labour
force for the traditionally disadvantaged in disciplines and occupations where
the forward castes have long dominated.",Interpreting the Caste-based Earning Gaps in the Indian Labour Market: Theil and Oaxaca Decomposition Analysis,2021-10-13 19:08:15,"Pallavi Gupta, Satyanarayan Kothe","http://arxiv.org/abs/2110.06822v1, http://arxiv.org/pdf/2110.06822v1",econ.GN
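A minimal sketch (not from the paper) of the Theil-T within/between decomposition referenced in the abstract above, assuming individual earnings grouped by caste category; the group labels and toy data are illustrative only.

```python
import numpy as np

def theil_decomposition(earnings, groups):
    """Theil-T index and its between-/within-group decomposition.

    earnings: 1-D array of positive earnings
    groups:   1-D array of group labels (e.g. caste categories)
    Returns (total, between, within) with total == between + within.
    """
    earnings, groups = np.asarray(earnings, float), np.asarray(groups)
    mu = earnings.mean()
    total = np.mean(earnings / mu * np.log(earnings / mu))
    between = within = 0.0
    for g in np.unique(groups):
        x = earnings[groups == g]
        pop_share, mu_g = len(x) / len(earnings), x.mean()
        between += pop_share * (mu_g / mu) * np.log(mu_g / mu)
        within += pop_share * (mu_g / mu) * np.mean(x / mu_g * np.log(x / mu_g))
    return total, between, within

# Toy example with two illustrative groups (labels and data are hypothetical)
rng = np.random.default_rng(0)
earn = np.concatenate([rng.lognormal(10.0, 0.6, 500),
                       rng.lognormal(9.7, 0.5, 500)])
grp = np.array(["forward"] * 500 + ["disadvantaged"] * 500)
print(theil_decomposition(earn, grp))
```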
32317,gn,"Health behaviors are plagued by self-control problems, and commitment devices
are frequently proposed as a solution. We show that a simple alternative works
even better: appointments. We randomly offer HIV testing appointments and
financial commitment devices to high-risk men in Malawi. Appointments are much
more effective than financial commitment devices, more than doubling testing
rates. In contrast, most men who take up financial commitment devices lose
their investments. Appointments address procrastination without the potential
drawback of commitment failure, and also address limited memory problems.
Appointments have the potential to increase demand for healthcare in the
developing world.",Appointments: A More Effective Commitment Device for Health Behaviors,2021-10-09 06:04:50,"Laura Derksen, Jason Kerwin, Natalia Ordaz Reynoso, Olivier Sterck","http://arxiv.org/abs/2110.06876v1, http://arxiv.org/pdf/2110.06876v1",econ.GN
32357,gn,"Air pollution has been linked to elevated levels of risk aversion. This paper
provides the first evidence showing that such an effect reduces life-threatening
risky behaviors. We study the impact of air pollution on traffic accidents
caused by risky driving behaviors, using the universe of accident records and
high-resolution air quality data of Taiwan from 2009 to 2015. We find that air
pollution significantly decreases accidents caused by driver violations, and
that this effect is nonlinear. In addition, our results suggest that air
pollution primarily reduces road users' risky behaviors through visual channels
rather than through the respiratory system.",Can Air Pollution Save Lives? Air Quality and Risky Behaviors on Roads,2021-11-12 20:42:21,"Wen Hsu, Bing-Fang Hwang, Chau-Ren Jung, Yau-Huo Jimmy Shr","http://arxiv.org/abs/2111.06837v2, http://arxiv.org/pdf/2111.06837v2",econ.GN
32318,gn,"This research adds to the expanding field of data-driven analysis, scientific
modeling, and forecasting on the impact of having access to the Internet and
IoT on the general US population regarding immigrants and immigration policies.
More specifically, this research focuses on the public opinion of undocumented
immigrants in the United States and having access to the Internet in their
local settings. The term Undocumented Immigrants refers to those who live in
the United States without legal papers, documents, or visas. Undocumented
immigrants may have entered the country unlawfully or with valid
documentation, but their legal status has since expired. Using the 2020 American
National Election Studies (ANES) time series dataset, I investigated the
relationship between internet access (A2I) and public perception of
undocumented immigrants. According to my research and analysis, non-Hispanic
whites with at least a bachelor's degree and an annual household income of less
than $99,000 who have greater internet access are more likely to oppose the
deportation of undocumented immigrants and the separation of unaccompanied
children from their families in borderland areas. Individuals with a strong
Republican political ideology exhibit significantly weaker opposition to
deporting undocumented immigrants or separating unaccompanied children from
their families. The evidence from multiple statistical models is robust to a
variety of factors. The findings show that increased internet access may
improve undocumented immigrants' social integration and acceptance. During
health emergencies, it may be especially beneficial to make them feel safe,
included, and supported in their local settings.",An Empirical Analysis of how Internet Access Influences Public Opinion towards Undocumented Immigrants and Unaccompanied Children,2021-09-28 16:02:23,Muhammad Hassan Bin Afzal,"http://arxiv.org/abs/2110.07489v1, http://arxiv.org/pdf/2110.07489v1",econ.GN
32319,gn,"In the aftermath of the Great Recession, the regulatory framework for credit
union operations has become a subject of controversy. Competing financial
enterprises such as thrifts and other banks have argued that credit unions have
received preferential treatment under existing tax and regulatory codes,
whereas credit unions complained of undue restrictions on their ability to
scale up and increase their scope of operations. Building on previous location
models, this analysis focuses on credit union headquarters locations immediately
following the 2008 financial crisis. We offer a new perspective for
understanding credit union behavior based on central place theory, new
industrial organization literature, and credit union location analysis. Our
findings indicate that credit unions avoid locating near other lending
institutions, instead operating in areas with a low concentration of other
banks. This finding provides evidence that credit unions serve niche markets
through product differentiation and are not a significant source of direct competition
for thrifts and other banks.",Locational Factors in the Competition between Credit Unions and Banks after the Great Recession,2021-10-14 20:10:48,"Reka Sundaram-Stukel, Steven C Deller","http://arxiv.org/abs/2110.07611v2, http://arxiv.org/pdf/2110.07611v2",econ.GN
32320,gn,"Based on a multisector general equilibrium framework, we show that the
sectoral elasticity of substitution plays the key role in the evolution of
asymmetric tails of macroeconomic fluctuations and the establishment of
robustness against productivity shocks. Non-unitary elasticity of substitution
renders a nonlinear Domar aggregation, where normal sectoral productivity
shocks translate into non-normal aggregated shocks with variable expected
output growth. We empirically estimate 100 sectoral elasticities of
substitution, using the time-series linked input-output tables for Japan, and
find that the production economy is elastic overall, relative to Cobb-Douglas
with unitary elasticity. Along with the previous assessment of an inelastic
production economy for the US, the contrasting tail asymmetry of the
distribution of aggregated shocks between the US and Japan is explained.
Moreover, the robustness of an economy is assessed by the expected output
growth, whose level is determined by the sectoral elasticities of substitution under
zero mean productivity shocks.",The elastic origins of tail asymmetry,2021-10-16 19:51:21,"Satoshi Nakano, Kazuhiko Nishimura","http://dx.doi.org/10.1017/S1365100523000172, http://arxiv.org/abs/2110.08612v3, http://arxiv.org/pdf/2110.08612v3",econ.GN
32321,gn,"How does women's obedience to traditional gender roles affect their labour
outcomes? To investigate this question, we employ discontinuity tests and
fixed effect regressions with time lag to measure how married women in China
diminish their labour outcomes so as to maintain the bread-winning status of
their husbands. In the first half of this research, our discontinuity test
exhibits a missing mass of married women who just out-earn their husbands,
which we interpret as evidence that these women diminish their
earnings under the influence of gender norms. In the second half, we use fixed
effect regressions with time lag to assess the change of a female's future
labour outcomes if she currently earns more than her husband. Our results
suggest that women's future labour participation decisions (whether they still
join the workforce) are unaffected, but their yearly incomes and weekly working
hours will be reduced in the future. Lastly, heterogeneity analyses are
conducted, showing that low-income and less-educated married women are more
susceptible to the influence of gender norms.",Gender identity and relative income within household: Evidence from China,2021-10-17 07:52:44,"Han Dongcheng, Kong Fanbo, Wang Zixun","http://arxiv.org/abs/2110.08723v1, http://arxiv.org/pdf/2110.08723v1",econ.GN
32322,gn,"Leveraging unique insights into the special education placement process
through written individual psychological records, I present results from the
first ever study to examine short- and long-term returns to special education
programs with causal machine learning and computational text analysis methods.
I find that special education programs in inclusive settings have positive
returns in terms of academic performance as well as labor-market integration.
Moreover, I uncover a positive effect of inclusive special education programs
in comparison to segregated programs. This effect is heterogeneous: segregation
has the least negative effects for students with emotional or behavioral
problems and for non-native students with special needs. Finally, I deliver optimal
program placement rules that would maximize aggregated school performance and
labor market integration for students with special needs at lower program
costs. These placement rules would reallocate most students with special needs
from segregation to inclusion.",Estimating returns to special education: combining machine learning and text analysis to address confounding,2021-10-17 15:25:35,Aurélien Sallin,"http://arxiv.org/abs/2110.08807v2, http://arxiv.org/pdf/2110.08807v2",econ.GN
32364,gn,"We study Bayesian coordination games where agents receive noisy private
information over the game's payoffs, and over each others' actions. If private
information over actions is of low quality, equilibrium uniqueness obtains in a
manner similar to a global games setting. On the contrary, if private
information over actions (and thus over the game's payoff coefficient) is
precise, agents can coordinate on multiple equilibria. We argue that our
results apply to phenomena such as bank-runs, currency crises, recessions, or
riots and revolutions, where agents monitor each other closely.",Observing Actions in Global Games,2021-11-20 13:27:57,"Dominik Grafenhofer, Wolfgang Kuhle","http://arxiv.org/abs/2111.10554v1, http://arxiv.org/pdf/2111.10554v1",econ.GN
32323,gn,"I investigate how political incentives affect the behavior of district
attorneys (DAs). I develop a theoretical model that predicts DAs will increase
sentencing intensity in an election period compared to the period prior. To
empirically test this prediction, I compile one of the most comprehensive
datasets to date on the political careers of all district attorneys in office
during the steepest rise in incarceration in U.S. history (roughly 1986-2006).
Using quasi-experimental methods, I find causal evidence that being in a DA
election year increases total admissions per capita and total months sentenced
per capita. I estimate that the election year effects on admissions are akin to
moving 0.85 standard deviations along the distribution of DA behavior within
state (e.g., going from the 50th to 80th percentile in sentencing intensity). I
find evidence that election effects are larger (1) when DA elections are
contested, (2) in Republican counties, and (3) in the southern United
States--all these factors are consistent with the perspective that election
effects arise from political incentives influencing DAs. Further, I find that
district attorney election effects decline over the period 1986-2006, in tandem
with U.S. public opinion softening regarding criminal punishment. These
findings suggest DA behavior may respond to voter preferences--in particular to
public sentiment regarding the harshness of the court system.",Prosecutor Politics: The Impact of Election Cycles on Criminal Sentencing in the Era of Rising Incarceration,2021-10-18 13:37:13,Chika O. Okafor,"http://arxiv.org/abs/2110.09169v1, http://arxiv.org/pdf/2110.09169v1",econ.GN
32324,gn,"This paper considers how sanctions affected the Iranian economy using a novel
measure of sanctions intensity based on daily newspaper coverage. It finds
sanctions to have significant effects on exchange rates, inflation, and output
growth, with the Iranian rial over-reacting to sanctions, followed up with a
rise in inflation and a fall in output. In the absence of sanctions, Iran's average
annual growth could have been around 4-5 per cent, as compared to the 3 per
cent realized. Sanctions are also found to have adverse effects on employment,
labor force participation, secondary and high-school education, with such
effects amplified for females.",Identifying the Effects of Sanctions on the Iranian Economy using Newspaper Coverage,2021-10-04 03:25:19,"Dario Laudati, M. Hashem Pesaran","http://arxiv.org/abs/2110.09400v1, http://arxiv.org/pdf/2110.09400v1",econ.GN
32325,gn,"We present a revealed preference characterization of marital stability where
some couples are committed. A couple is committed if they can only divorce upon
mutual consent. We provide theoretical insights into the potential of the
characterization for identifying intrahousehold consumption patterns. We show
that when there is no price variation for private goods between potential
couples, it is only possible to identify intrahousehold resource allocations
for non-committed couples. Simulation exercises using household data drawn from
the Longitudinal Internet Studies for the Social Sciences (LISS) panel support
our theoretical findings. Our results show that in the presence of price
variation, the empirical implications of marital stability can be used for
identifying household consumption allocations for both committed and
non-committed couples.",Marital Stability With Committed Couples: A Revealed Preference Analysis,2021-10-21 00:13:51,"Mikhail Freer, Khushboo Surana","http://arxiv.org/abs/2110.10781v5, http://arxiv.org/pdf/2110.10781v5",econ.GN
32326,gn,"This paper proposes a two-stage pricing strategy for nondurable (such as
typical electronics) products, where retail price is cut down at certain time
points of the product lifecycle. We consider learning effect of electronic
products that, with the accumulation of production, average production cost
decreases over time as manufacturers get familiar with the production process.
Moreover, word-of-mouth (WOM) of existing customers is used to analyze future
demand, which is sensitive to the difference between the actual reliability and
the perceived reliability of products. We theoretically prove the existence and
uniqueness of the optimal switch time between the two stages and the optimal
price in each stage. In addition, warranty as another important factor of
electronic products is also considered, whose interaction with word-of-mouth as
well as the corresponding influences on total profit are analyzed.
Interestingly, our findings indicate that (1) the main reason for manufacturers
to cut down prices for electronic products pertains to the learning effects;
(2) even though both internal factors (e.g., the learning effects of
manufacturers) and external factors (e.g., the price elasticity of customers)
affect product price, their influence on the manufacturer's profit is
widely divergent; (3) generally, warranty weakens the influence of external
advertising on the reliability estimate, because the warranty price only
partially reflects the actual reliability information of products; and (4) the
optimal warranty price can increase the manufacturer's profits by approximately
10%.",A Two-stage Pricing Strategy Considering Learning Effects and Word-of-Mouth,2021-10-22 07:18:53,"Yanrong Li, Lai Wei, Wei Jiang","http://arxiv.org/abs/2110.11581v1, http://arxiv.org/pdf/2110.11581v1",econ.GN
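A toy illustration of the kind of two-stage pricing problem with learning effects described above; it is not the paper's model. The linear demand, learning rate, cost level, and lifecycle length are all assumptions, and the grid search simply picks the best switch time and price pair under them.

```python
import numpy as np

# Toy two-stage pricing problem: price p1 before switch period tau, p2 after,
# with unit cost falling along a learning curve as cumulative output grows.
T = 24                      # product lifecycle in periods (assumed)
a, b = 100.0, 0.8           # linear per-period demand d(p) = a - b*p (assumed)
c0, lr = 60.0, 0.15         # initial unit cost and 15% learning rate (assumed)
beta = np.log(1 - lr) / np.log(2)   # learning-curve exponent

def profit(tau, p1, p2):
    q_cum, total = 1.0, 0.0          # cumulative output starts at one unit
    for t in range(T):
        p = p1 if t < tau else p2
        d = max(a - b * p, 0.0)
        unit_cost = c0 * q_cum ** beta   # cost after q_cum units produced
        total += d * (p - unit_cost)
        q_cum += d
    return total

# Brute-force grid search over the switch time and the two prices (p2 <= p1)
grid_p = np.linspace(40, 120, 41)
best = max(((profit(tau, p1, p2), tau, p1, p2)
            for tau in range(1, T)
            for p1 in grid_p for p2 in grid_p if p2 <= p1),
           key=lambda x: x[0])
print("best profit %.0f at switch period %d, prices %.1f -> %.1f" % best)
```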
32327,gn,"We investigate whether pandemic-induced contagion disamenities and income
effects arising due to COVID-related unemployment adversely affected real
estate prices of one- or two-family owner-occupied properties across New York
City (NYC). First, OLS hedonic results indicate that greater COVID case numbers
are concentrated in neighborhoods with lower-valued properties. Second, we use
a repeat-sales approach for the period 2003 to 2020, and we find that both the
possibility of contagion and pandemic-induced income effects adversely impacted
home sale prices. Estimates suggest sale prices fell by roughly $60,000 or
around 8% in response to both of the following: 1,000 additional infections per
100,000 residents; and a 10-percentage point increase in unemployment in a
given Modified Zip Code Tabulation Area (MODZCTA). These price effects were
more pronounced during the second wave of infections. Based on cumulative
MODZCTA infection rates through 2020, the estimated COVID-19 price discount
ranged from approximately 1% to 50% in the most affected neighborhoods, and
averaged 14%. The contagion effect intensified in the more affluent, but less
densely populated NYC neighborhoods, while the income effect was more
pronounced in the most densely populated neighborhoods with more rental
properties and greater population shares of foreign-born residents. This
disparity implies the pandemic may have been correlated with a wider gap in
housing wealth in NYC between homeowners in lower-priced and higher-priced
neighborhoods.",The Impact of the Coronavirus Pandemic on New York City Real Estate: First Evidence,2021-10-22 23:34:45,"Jeffrey P. Cohen, Felix L. Friedt, Jackson P. Lautier","http://dx.doi.org/10.1111/jors.12591, http://arxiv.org/abs/2110.12050v2, http://arxiv.org/pdf/2110.12050v2",econ.GN
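A minimal sketch of a Bailey-Muth-Nourse repeat-sales index regression of the kind referenced above, using simulated transactions; it is not the paper's specification (which also includes infection and unemployment measures).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Repeat-sales sketch: for properties sold twice, regress the change in log
# price on period dummies that are +1 in the resale period and -1 in the
# first-sale period. The transaction data below are simulated for illustration.
rng = np.random.default_rng(0)
periods, n = 10, 400
true_index = np.cumsum(rng.normal(0.02, 0.01, periods))   # latent log price index

first = rng.integers(0, periods - 1, n)
second = rng.integers(first + 1, periods)                  # later resale period
dlogp = true_index[second] - true_index[first] + rng.normal(0, 0.05, n)

X = np.zeros((n, periods))
X[np.arange(n), second] = 1.0
X[np.arange(n), first] = -1.0
X = X[:, 1:]                                               # period 0 is the base

fit = sm.OLS(dlogp, X).fit()
index_estimate = np.concatenate([[0.0], fit.params])       # estimated log index
print(pd.Series(index_estimate).round(3).to_list())
```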
32328,gn,"In a laboratory experiment we compare voluntary cooperation in Iceland and
the US. We furthermore compare the associated thought processes across
cultures. The two countries have similar economic performance, but survey
measures show that they differ culturally. Our hypotheses are based on two such
measures, the Inglehart cultural world map and the Knack and Keefer scale of
civic attitudes toward large-scale societal functioning. We prime the
participants with different social foci, emphasizing in one a narrow grouping
and in the other a larger social unit. In each country we implement this using
two different feedback treatments. Under group feedback, participants only know
the contributions by the four members of their directly cooperating group.
Under session feedback they are informed of the contributions within their
group as well as by everyone else in the session. Under group feedback,
cooperation levels do not differ between the two cultures. However, under
session feedback cooperation levels increase in Iceland and decline in the US.
Even when contribution levels are the same, members of the two cultures differ
in their motives to cooperate: Icelanders tend to cooperate unconditionally and
US subjects conditionally. Our findings indicate that different cultures can
achieve similar economic and societal performance through different cultural
norms and suggest that cooperation should be encouraged through culturally
tailored suasion tactics. We also find that some decision factors such as
Inequity Aversion do not differ across the two countries, which raises the
question whether they are human universals.",Reciprocity or community: Different cultural pathways to cooperation and welfare,2021-10-23 01:34:40,"Anna Gunnthorsdottir, Palmar Thorsteinsson","http://arxiv.org/abs/2110.12085v1, http://arxiv.org/pdf/2110.12085v1",econ.GN
32329,gn,"Push-pull theory, one of the most important macro theories in demography,
argues that population migration is driven by a combination of push (repulsive)
forces at the place of emigration and pull (attractive) forces at the place of
immigration. Based on the push-pull theory, this paper offers another practical
perspective of the theory by measuring the reverse push and pull forces from
the perspective of housing property rights. We use OLS and sequential Probit
models to analyze the impact of urban and rural property rights factors on the
social integration of the migrant population, based on the ""China Migrants' Dynamic
Survey"". We found that after controlling for personal and urban
characteristics, there is a significant negative effect of rural property
rights (homestead) ownership of the mobile population on their socio-economic
integration, and cultural and psychological integration in the inflow area. The
effect of urban house price on social integration of the migrant population is
consistent with the ""inverted U-shaped"" nonlinear assumption: when the house
price to income ratio of the migrant population in the inflow area increases
beyond the inflection point, its social integration level decreases. That is,
there is an inverse push force and pull force mechanism of housing property
rights on population mobility.",Housing property rights and social integration of migrant population: based on the 2017 china migrants' dynamic survey,2021-10-24 12:07:35,"Jingwen Tan, Shixi Kang","http://arxiv.org/abs/2110.12394v1, http://arxiv.org/pdf/2110.12394v1",econ.GN
32330,gn,"The South China Sea (SCS) is one of the most economically valuable resources
on the planet, and as such has become a source of territorial disputes between
its bordering nations. Among other things, states compete to harvest the
multitude of fish species in the SCS. In an effort to gain a competitive
advantage states have turned to increased maritime patrols, as well as the use
of ""maritime militias,"" which are fishermen armed with martial assets to resist
the influence of patrols. This conflict suggests a game of strategic resource
allocation where states allocate patrols intelligently to earn the greatest
possible utility. The game, however, is quite computationally challenging when
considering its size (there are several distinct fisheries in the SCS), the
nonlinear nature of biomass growth, and the influence of patrol allocations on
costs imposed on fishermen. Further, uncertainty in player behavior attributed
to modeling error requires a robust analysis to fully capture the dispute's
dynamics. To model such a complex scenario, this paper employs a response
surface methodology to assess optimal patrolling strategies and their impact on
realized utilities. The methodology developed successfully finds strategies
which are more robust to behavioral uncertainty than a more straightforward
method.",Analyzing a Complex Game for the South China Sea Fishing Dispute using Response Surface Methodologies,2021-10-25 04:01:59,Michael Macgregor Perry,"http://arxiv.org/abs/2110.12568v2, http://arxiv.org/pdf/2110.12568v2",econ.GN
32331,gn,"Fisheries in the East China Sea (ECS) face multiple concerning trends. Aside
from depleted stocks caused by overfishing, illegal encroachments by fishermen
from one nation into another's legal waters are a common occurrence. This
behavior presumably could be stopped via strong monitoring, controls, and
surveillance (MCS), but MCS is routinely rated below standards for nations
bordering the ECS. This paper generalizes the ECS to a model of a congested
maritime environment, defined as an environment where multiple nations can fish
in the same waters with equivalent operating costs, and uses game-theoretic
analysis to explain why the observed behavior persists in the ECS. The paper
finds that nations in congested environments are incentivized to issue
excessive quotas, which in turn tacitly encourages illegal fishing and extracts
illegal rent from another's legal waters. This behavior could not persist in
the face of strong MCS measures, and states are thus likewise incentivized to
maintain weak MCS. A bargaining problem is analyzed to complement the
noncooperative game, and a key finding is that the nation with lower
nonoperating costs has greater leverage in the bargaining process.",Fisheries Management in Congested Waters: A Game-Theoretic Assessment of the East China Sea,2021-10-26 22:06:07,Michael Macgregor Perry,"http://dx.doi.org/10.1007/s10640-022-00688-9, http://arxiv.org/abs/2110.13966v2, http://arxiv.org/pdf/2110.13966v2",econ.GN
32332,gn,"I study how political bias and audience costs impose domestic institutional
constraints that affect states' capacity to reach peaceful agreements during
crises. With a mechanism design approach, I show that the existence of peaceful
agreements hinges crucially on whether the resource being divided can appease
two sides of the highest type (i.e. the maximum war capacity). The derivation
has two major implications. On the one hand, if war must be averted, then
political leaders are not incentivized by audience costs to communicate private
information; they will pool on the strategy that induces the maximum bargaining
gains. On the other hand, political bias matters for the scope of peace because
it alters a state's expected war payoff.",Domestic Constraints in Crisis Bargaining,2021-10-28 11:15:30,Liqun Liu,"http://arxiv.org/abs/2110.14938v3, http://arxiv.org/pdf/2110.14938v3",econ.GN
32333,gn,"This paper considers dynamic moral hazard settings, in which the consequences
of the agent's actions are not precisely understood. In a new continuous-time
moral hazard model with drift ambiguity, the agent's unobservable action
translates into a drift set that describes the evolution of output. The agent and
the principal have imprecise information about the technology, and both seek
robust performance from a contract in relation to their respective worst-case
scenarios. We show that the optimal long-term contract aligns the parties'
pessimistic expectations and broadly features a compression of the high-powered
incentives. Methodologically, we provide a tractable way to formulate and
characterize optimal long-run contracts with drift ambiguity. Substantively,
our results provide some insights into the formal link between robustness and
simplicity of dynamic contracts; in particular, high-powered incentives become
less effective in the presence of ambiguity.","Moral Hazard, Dynamic Incentives, and Ambiguous Perceptions",2021-10-28 18:42:13,Martin Dumav,"http://arxiv.org/abs/2110.15229v1, http://arxiv.org/pdf/2110.15229v1",econ.GN
32334,gn,"Mathematical ability is among the most important determinants of prospering
in the labour market. Using multiple representative datasets with learning
outcomes of over 2 million children from rural India in the age group 8 to 16
years, the paper examines the prevalence of a gender gap in mathematics
performance and its temporal variation from 2010 to 2018. Our findings from the
regressions show a significant gender gap in mathematics, which is not observed
for reading skills. This difference in mathematics scores remains prevalent
across households of different socio-economic and demographic groups. This gap
is found to be persistent over time and it appears to increase as the children
get older. We also find significant inter-state variation with the north Indian
states lagging behind considerably and the south Indian states showing a
reverse gender gap. As an explanation, we find evidence of a robust
association between pre-existing gender norms at the household and district
levels and a wider gender gap. The findings, in light of other available
evidence on the consequences of such gaps, call for the need to understand
these gender-specific differences more granularly and periodically to inform
gender-specific interventions.",Solving it correctly Prevalence and Persistence of Gender Gap in Basic Mathematics in rural India,2021-10-28 20:30:16,"Upasak Das, Karan Singhal","http://arxiv.org/abs/2110.15312v1, http://arxiv.org/pdf/2110.15312v1",econ.GN
32335,gn,"Para-Aminophenol is one of the key chemicals required for the synthesis of
Paracetamol, an analgesic and antipyretic drug. Data shows a large fraction of
India's demand for Para-Aminophenol being met through imports from China. The
uncertainty in India-China relations would affect the supply and price of
this ""Key Starting Material."" This report is a detailed business plan for
setting up a plant and producing Para-Aminophenol in India at a competitive
price. The plant is simulated in Aspen Plus V8, and material balance and
energy balance calculations are carried out. The plant produces 22.7 kmol of
Para-Aminophenol per hour with a purity of 99.9%. Along with the simulation,
economic analysis is carried out for this plant to determine the financial
parameters like Payback Period and Return on Investment.",Process Design and Economics of Production of p-Aminophenol,2021-10-29 15:54:33,"Chinmay Ghoroi, Jay Shah, Devanshu Thakar, Sakshi Baheti","http://arxiv.org/abs/2110.15750v1, http://arxiv.org/pdf/2110.15750v1",econ.GN
32336,gn,"Upon arrival to a new country, many immigrants face job downgrading, a
phenomenon describing workers being in jobs below the ones they have based on
the skills they possess. Moreover, in the presence of downgrading immigrants
receiving lower wage returns to the same skills compared to natives. The level
of downgrading could depend on the immigrant type and numerous other factors.
This study examines the determinants of skill downgrading among two types of
immigrants - refugees and economic immigrants - in the German labor markets
between 1984 and 2018. We find that refugees downgrade more than economic
immigrants, and this discrepancy between the two groups persists over time. We
show that language skill improvements exert a strong influence on subsequent
labor market outcomes of both groups.",Skill Downgrading Among Refugees and Economic Immigrants in Germany,2021-10-30 22:46:53,"Plamen Nikolov, Leila Salarpour, David Titus","http://arxiv.org/abs/2111.00319v2, http://arxiv.org/pdf/2111.00319v2",econ.GN
32337,gn,"Precise information is essential for making good policies, especially those
regarding reform decisions. However, decision-makers may hesitate to gather
such information if certain decisions could have negative impacts on their
future careers. We model how decision-makers with career concerns may acquire
policy-relevant information and carry out reform decisions when their policy
discretion can be limited ex ante. Typically, decision-makers with career
concerns have weaker incentives to acquire information compared to
decision-makers without such concerns. In this context, we demonstrate that the
public can encourage information acquisition by eliminating either the
""moderate policy"" or the status quo from decision-makers' discretion. We also
analyze when reform decisions should be strategically delegated to
decision-makers with or without career concerns.","The Politics of (No) Compromise: Information Acquisition, Policy Discretion, and Reputation",2021-10-31 18:14:13,Liqun Liu,"http://arxiv.org/abs/2111.00522v2, http://arxiv.org/pdf/2111.00522v2",econ.GN
32338,gn,"The expansion of trade agreements has provided a potential basis for trade
integration and economic convergence of different countries. Moreover,
developing and expanding global value chains (GVCs) have provided more
opportunities for knowledge and technology spillovers and the potential
convergence of production techniques. This can result in conceivable
environmental outcomes in developed and developing countries. This study
investigates whether GVCs can become a basis for the carbon intensity (CI)
convergence of different countries. To answer this question, data from 101
countries from 1997 to 2014 are analyzed using spatial panel data econometrics.
The results indicate a spatial correlation between GVCs trade partners in terms
of CI growth, and they confirm the GVCs-based conditional CI convergence of the
countries. Moreover, estimates indicate that expanding GVCs even stimulates
bridging the CI gap between countries, i.e., directly and indirectly through
spillover effects. According to the results, GVCs have the potential capacity
to improve the effectiveness of carbon efficiency policies. Therefore,
different dimensions of GVCs and their benefits should be taken into account
when devising environmental policies.",The Role of Global Value Chains in Carbon Intensity Convergence: A Spatial Econometrics Approach,2021-10-31 21:37:53,"Kazem Biabany Khameneh, Reza Najarzadeh, Hassan Dargahi, Lotfali Agheli","http://arxiv.org/abs/2111.00566v1, http://arxiv.org/pdf/2111.00566v1",econ.GN
32339,gn,"The classical DICE model is a widely accepted integrated assessment model for
the joint modeling of economic and climate systems, where all model state
variables evolve over time deterministically. We reformulate and solve the DICE
model as an optimal control dynamic programming problem with six state
variables (related to the carbon concentration, temperature, and economic
capital) evolving over time deterministically and affected by two controls
(carbon emission mitigation rate and consumption). We then extend the model by
adding a discrete stochastic shock variable to model the economy in the
stressed and normal regimes as a jump process caused by events such as the
COVID-19 pandemic. These shocks reduce the world gross output leading to a
reduction in both the world net output and carbon emission. The extended model
is solved under several scenarios as an optimal stochastic control problem,
assuming that the shock events occur randomly on average once every 100 years
and last for 5 years. The results show that, if the world gross output recovers
in full after each event, the impact of the COVID-19 events on the temperature
and carbon concentration will be immaterial even in the case of a conservative
10\% drop in the annual gross output over a 5-year period. The impact becomes
noticeable, although still extremely small (long-term temperature drops by
$0.1^\circ \mathrm{C}$), in a presence of persistent shocks of a 5\% output
drop propagating to the subsequent time periods through the recursively reduced
productivity. If the deterministic DICE model policy is applied in the presence
of stochastic shocks (i.e. when this policy is suboptimal), then the drop in
temperature is larger (approximately $0.25^\circ \mathrm{C}$), that is, the
lower economic activities owing to shocks imply that more ambitious mitigation
targets are now feasible at lower costs.",Impact of COVID-19 type events on the economy and climate under the stochastic DICE model,2021-11-01 14:09:49,"Pavel V. Shevchenko, Daisuke Murakami, Tomoko Matsui, Tor A. Myrvoll","http://arxiv.org/abs/2111.00835v1, http://arxiv.org/pdf/2111.00835v1",econ.GN
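A minimal sketch of the shock process described above (not the DICE model itself): stress events arrive on average once every 100 years, last 5 years, and reduce gross output by 10% while active. The baseline growth rate is an illustrative assumption.

```python
import numpy as np

# Simulate the stressed/normal regime for gross output as a simple jump process.
rng = np.random.default_rng(42)
years, p_onset, duration, drop, growth = 300, 1 / 100, 5, 0.10, 0.02

gross = np.empty(years)
regime = np.zeros(years, dtype=int)     # 1 = stressed, 0 = normal
level, remaining = 1.0, 0               # pre-shock output level, shock years left
for t in range(years):
    level *= 1 + growth                 # deterministic baseline growth (assumed)
    if remaining == 0 and rng.random() < p_onset:
        remaining = duration            # a new stress event starts
    if remaining > 0:
        regime[t], remaining = 1, remaining - 1
    gross[t] = level * (1 - drop * regime[t])

print("fraction of years in the stressed regime: %.3f" % regime.mean())
```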
32340,gn,"The most important resource to improve technologies in the field of
artificial intelligence is data. Two types of policies are crucial in this
respect: privacy and data-sharing regulations, and the use of surveillance
technologies for policing. Both types of policies vary substantially across
countries and political regimes. In this chapter, we examine how authoritarian
and democratic political institutions can influence the quality of research in
artificial intelligence, and the availability of large-scale datasets to
improve and train deep learning algorithms. We focus mainly on the Chinese
case, and find that -- ceteris paribus -- authoritarian political institutions
continue to have a negative effect on innovation. They can, however, have a
positive effect on research in deep learning, via the availability of
large-scale datasets that have been obtained through government surveillance.
We propose a research agenda to study which of the two effects might dominate
in a race for leadership in artificial intelligence between countries with
different political institutions, such as the United States and China.","Artificial Intelligence, Surveillance, and Big Data",2021-11-01 17:57:13,"David Karpa, Torben Klarl, Michael Rochlitz","http://arxiv.org/abs/2111.00992v1, http://arxiv.org/pdf/2111.00992v1",econ.GN
32341,gn,"This paper examines the economic consequences of the COVID-19 pandemic to
sub-Saharan Africa (SSA) using the historical approach by analyzing the policy
responses of the region to past crises and their economic consequences. The
study employs the manufacturing-value-added share of GDP as a performance
indicator. The analysis shows that misguided policy interventions in response
to past crises led the African sub-region into its current deplorable economic
situation. The study observes that the region leapfrogged prematurely into
import substitution, export
promotion, and global value chains. Based on these past experiences, the region
should adopt a gradual approach in responding to the COVID-19 economic
consequences. The sub-region should first address relevant areas of
sustainability, including proactive investment in research and development to
develop home-grown technology, upgrade essential infrastructural facilities,
develop security infrastructures, and strengthen the financial sector.",Economic consequences of covid-19 pandemic to the sub-Saharan Africa: an historical perspective,2021-11-01 18:41:56,"Anthony Enisan Akinlo, Segun Michael Ojo","http://arxiv.org/abs/2111.01038v2, http://arxiv.org/pdf/2111.01038v2",econ.GN
32342,gn,"We provide a generalized revealed preference test for quasilinear
preferences. The test applies to nonlinear budget sets and non-convex
preferences as those found in taxation and nonlinear pricing contexts. We study
the prevalence of quasilinear preferences in a laboratory real-effort task
experiment with nonlinear wages. The experiment demonstrates the empirical
relevance of our test. We find support for either convex (non-separable)
preferences or quasilinear preferences but weak support for the hypothesis of
both quasilinear and convex preferences.",A General Revealed Preference Test for Quasilinear Preferences: Theory and Experiments,2021-11-01 23:29:50,"Mikhail Freer, Marco Castillo","http://arxiv.org/abs/2111.01248v2, http://arxiv.org/pdf/2111.01248v2",econ.GN
32343,gn,"This integrated assessment modeling research analyzes what Korea's 2050
carbon neutrality would require for the national energy system and the role of
the power sector concerning the availability of critical mitigation
technologies. Our scenario-based assessments show that Korea's current policy
falls short of what the nation's carbon-neutrality ambition would require.
Across all technology scenarios examined in this study, extensive and rapid
energy system transition is imperative, requiring the large-scale deployment of
renewables and carbon capture & storage (CCS) early on and negative emission
technologies (NETs) by the mid-century. Importantly, rapid decarbonization of
the power sector that goes with rapid electrification of end-uses seems to be a
robust national decarbonization strategy. Furthermore, we contextualize our
net-zero scenario results using policy costs, requirements for natural
resources, and the expansion rate of zero-carbon technologies. We find that the
availability of nuclear power lowers the required expansion rate of renewables
and CCS, alleviating any stress on terrestrial and geological systems. By
contrast, the limited availability of CCS without nuclear power necessarily
demands a very high penetration of renewables and significantly high policy
compliance costs, which would decrease the feasibility of achieving the carbon
neutrality target.",Integrated Assessment Modeling of Korea 2050 Carbon Neutrality Technology Pathways,2021-11-02 16:53:01,"Hanwoong Kim, Haewon McJeon, Dawoon Jung, Hanju Lee, Candelaria Bergero, Jiyong Eom","http://arxiv.org/abs/2111.01598v1, http://arxiv.org/pdf/2111.01598v1",econ.GN
32344,gn,"We provide asymptotic approximations to the distribution of statistics that
are obtained from network data for limiting sequences that let the number of
nodes (agents) in the network grow large. Network formation is permitted to be
strategic in that agents' incentives for link formation may depend on the ego
and alter's positions in that endogenous network. Our framework does not limit
the strength of these interaction effects, but assumes that the network is
sparse. We show that the model can be approximated by a sampling experiment in
which subnetworks are generated independently from a common equilibrium
distribution, and any dependence across subnetworks is captured by state
variables at the level of the entire network. Under many-player asymptotics,
the leading term of the approximation error to the limiting model established
in Menzel (2015b) is shown to be Gaussian, with an asymptotic bias and variance
that can be estimated consistently from a single network.",Central Limit Theory for Models of Strategic Network Formation,2021-11-02 18:38:14,Konrad Menzel,"http://arxiv.org/abs/2111.01678v1, http://arxiv.org/pdf/2111.01678v1",econ.GN
32345,gn,"Using a unique dataset containing gridded data on population densities,
rents, housing sizes, and transportation in 192 cities worldwide, we
investigate the empirical relevance of the monocentric standard urban model
(SUM). Overall, the SUM seems surprisingly capable of capturing the inner
structure of cities, both in developed and developing countries. As expected,
cities spread out when they are richer, more populated, and when transportation
or farmland is cheaper. Respectively 100% and 87% of the cities exhibit the
expected negative density and rent gradients: on average, a 1% decrease in
income net of transportation costs leads to a 21% decrease in densities and a
3% decrease in rents per m2. We also investigate the heterogeneity between
cities of different characteristics in terms of monocentricity, informality,
and amenities.",Testing the monocentric standard urban model in a global sample of cities,2021-11-03 13:10:58,"Charlotte Liotta, Vincent Viguié, Quentin Lepetit","http://dx.doi.org/10.1016/j.regsciurbeco.2022.103832, http://arxiv.org/abs/2111.02112v2, http://arxiv.org/pdf/2111.02112v2",econ.GN
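A minimal sketch (not the paper's estimation) of how a negative density gradient of the standard urban model can be estimated by OLS on gridded data; the simulated distances and densities are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

# Estimate ln D(x) = ln D0 - g * x, the exponential density gradient, by OLS.
rng = np.random.default_rng(5)
dist_km = rng.uniform(0, 30, 2000)                       # distance of grid cells
log_density = np.log(12000) - 0.12 * dist_km + rng.normal(0, 0.4, 2000)

X = sm.add_constant(dist_km)
fit = sm.OLS(log_density, X).fit()
print("estimated gradient per km: %.3f" % fit.params[1])   # expect about -0.12
```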
32346,gn,"We study the effect of globalization of world economy between 1980 and 2010
by using network analysis technics on trade and GDP data of 71 countries in the
world. We draw results distinguishing relatively developing and relatively
developed countries during this period of time and point out the standing out
economies among the BRICS countries during the years of globalization: within
our context of study, China and Russia are the countries that already exhibit
developed economy characters, India is next in line but have some unusual
features, while Brazil and South Africa still have erratic behaviors",Network analysis regarding international trade network,2021-11-04 08:17:18,"Xiufeng Yan, Qi Tang","http://arxiv.org/abs/2111.02633v1, http://arxiv.org/pdf/2111.02633v1",econ.GN
32347,gn,"The present study aims at exploring the strategies for managing innovation in
technical education by using blended learning philosophy and practices with
special reference to Politeknik Brunei. Based on a literature review and desk
research, the study identified salient characteristics, explored constraining
factors, elicited the strategies of Politeknik Brunei, and suggested options
and a framework for managing innovation and developing effective blended
teaching and learning. The limiting factors identified are the unwillingness of
top-level management, a lack of structural support, the limited readiness of
stakeholders, the gap between teachers' expectations and changing student
characteristics, and blended-teaching myopia, all of which stand in the way of
the effective application of blended learning strategies. Notable suggestions
for strategic development include developing a wide-angle vision and
self-renewal processes and analyzing the environment to determine needs.
Clarity of purpose and tasks, technological adaptability, data-driven decision
making, prompt feedback, flipped classrooms, and the development of learning
clusters are other dimensions that may go a long way toward innovating teaching
and learning and the overall development of an academic institution. Finally,
the study suggested
important guidelines for applying the strategies and proposed framework for
quality blended learning and managing innovations in technical education.",Managing Innovation in Technical Education: Revisiting the Developmental Strategies of Politeknik Brunei,2021-11-04 16:23:31,"Bashir Ahmed Bhuiyan, Mohammad Shahansha Molla, Masud Alam","http://dx.doi.org/10.33166/ACDMHR.2021.04.004, http://arxiv.org/abs/2111.02850v1, http://arxiv.org/pdf/2111.02850v1",econ.GN
32348,gn,"Agents, some with a bias, decide between undertaking a risky project and a
safe alternative based on information about the project's efficiency. Only a
part of that information is verifiable. Unbiased agents want to undertake only
efficient projects, while biased agents want to undertake any project. If the
project causes harm, a court examines the verifiable information, forms a
belief about the agent's type, and decides the punishment. Tension arises
between deterring inefficient projects and a chilling effect on using the
unverifiable information. Improving the unverifiable information always
increases overall efficiency, but improving the verifiable information may
reduce efficiency.",The Wrong Kind of Information,2021-11-07 23:21:03,"Aditya Kuvalekar, João Ramos, Johannes Schneider","http://dx.doi.org/10.1111/1756-2171.12440, http://arxiv.org/abs/2111.04172v3, http://arxiv.org/pdf/2111.04172v3",econ.GN
32349,gn,"We propose an empirical method to analyze data from first-price procurements
where bidders are asymmetric in their risk-aversion (CRRA) coefficients and
distributions of private costs. Our Bayesian approach evaluates the likelihood
by solving type-symmetric equilibria using the boundary-value method and
integrates out unobserved heterogeneity through data augmentation. We study a
new dataset from Russian government procurements focusing on the category of
printing papers. We find that there is no unobserved heterogeneity (presumably
because the job is routine), but bidders are highly asymmetric in their cost
and risk-aversion. Our counterfactual study shows that choosing a type-specific
cost-minimizing reserve price marginally reduces the procurement cost; however,
inviting one more bidder substantially reduces the cost, by at least 5.5%.
Furthermore, incorrectly imposing risk-neutrality would severely mislead
inference and policy recommendations, but the bias from imposing homogeneity in
risk-aversion is small.",Procurements with Bidder Asymmetry in Cost and Risk-Aversion,2021-11-08 19:57:29,"Gaurab Aryal, Hanna Charankevich, Seungwon Jeong, Dong-Hyuk Kim","http://dx.doi.org/10.1080/07350015.2022.2115497, http://arxiv.org/abs/2111.04626v2, http://arxiv.org/pdf/2111.04626v2",econ.GN
32350,gn,"How well do firearm markets comply with firearm restrictions? The
Massachusetts Attorney General issued an Enforcement Notice in 2016 to announce
a new interpretation of the key phrase ""copies and duplicates"" in the state's
assault weapons ban. The Enforcement Notice increased assault rifle sales by
1,349 (+560%) within five days, followed by a reduction of 211 (-58%) over the
next three weeks. Assault rifle sales were 64-66% lower in 2017 than in
comparable earlier periods, suggesting that the Enforcement Notice reduced
assault weapon sales but also that many banned weapons continued to be sold.",Do Firearm Markets Comply with Firearm Restrictions? How the Massachusetts Assault Weapons Ban Enforcement Notice Changed Firearm Sales,2021-11-09 20:06:04,"Meenakshi Balakrishna, Kenneth C. Wilbur","http://arxiv.org/abs/2111.05272v2, http://arxiv.org/pdf/2111.05272v2",econ.GN
32388,gn,"Do low corporate taxes always favor multinational production over economic
integration? We propose a two-country model in which multinationals choose the
locations of production plants and foreign distribution affiliates and shift
profits between them through transfer prices. With high trade costs, plants are
concentrated in the low-tax country; surprisingly, this pattern reverses with
low trade costs. Indeed, economic integration has a non-monotonic impact:
falling trade costs first decrease and then increase the plant share in the
high-tax country, which we empirically confirm. Moreover, allowing for transfer
pricing makes tax competition tougher and international coordination on
transfer-pricing regulation can be beneficial.",Economic Integration and Agglomeration of Multinational Production with Transfer Pricing,2022-01-09 06:55:09,"Hayato Kato, Hirofumi Okoshi","http://arxiv.org/abs/2201.02919v4, http://arxiv.org/pdf/2201.02919v4",econ.GN
32351,gn,"Billions of people live in urban poverty, with many forced to reside in
disaster-prone areas. Research suggests that such disasters harm child
nutrition and increase adult morbidity. However, little is known about impacts
on mental health, particularly of people living in slums. In this paper we
estimate the effects of flood disasters on the mental and physical health of
poor adults and children in urban Indonesia. Our data come from the Indonesia
Family Life Survey and new surveys of informal settlement residents. We find
that urban poor populations experience increases in acute morbidities and
depressive symptoms following floods, that the negative mental health effects
last longer, and that the urban wealthy show no health effects from flood
exposure. Further analysis suggests that worse economic outcomes may be partly
responsible. Overall, the results provide a more nuanced understanding of the
morbidities experienced by populations most vulnerable to increased disaster
occurrence.",Flood Disasters and Health Among the Urban Poor,2021-11-10 02:34:32,"Michelle Escobar Carias, David Johnston, Rachel Knott, Rohan Sweeney","http://arxiv.org/abs/2111.05455v2, http://arxiv.org/pdf/2111.05455v2",econ.GN
32352,gn,"In this paper, we design and implement an experiment aimed at testing the
level-k model of auctions. We begin by asking which (simple) environments can
best disentangle the level-k model from its leading rival, Bayes-Nash
equilibrium. We find two environments that are particularly suited to this
purpose: an all-pay auction with uniformly distributed values, and a
first-price auction with the possibility of cancelled bids. We then implement
both of these environments in a virtual laboratory in order to see which theory
can best explain observed bidding behaviour. We find that, when plausibly
calibrated, the level-k model substantially under-predicts the observed bids
and is clearly out-performed by equilibrium. Moreover, attempting to fit the
level-k model to the observed data results in implausibly high estimated
levels, which in turn bear no relation to the levels inferred from a game known
to trigger level-k reasoning. Finally, subjects almost never appeal to iterated
reasoning when asked to explain how they bid. Overall, these findings suggest
that, despite its notable success in predicting behaviour in other strategic
settings, the level-k model (and its close cousin cognitive hierarchy) cannot
explain behaviour in auctions.",Going... going... wrong: a test of the level-k (and cognitive hierarchy) models of bidding behaviour,2021-11-10 16:49:54,Itzhak Rasooly,"http://arxiv.org/abs/2111.05686v1, http://arxiv.org/pdf/2111.05686v1",econ.GN
32353,gn,"Using state-of-the-art techniques in computer vision, we analyze one million
satellite images covering 12% of the African continent between 1984 and 2019 to
track local development around 1,658 mineral deposits. We use stacked event
studies and difference-in-difference models to estimate the impact of mine
openings and closings. The magnitude of the effect of mine openings is
considerable - after 15 years, urban areas within 20km of an opening mine
almost double in size. We find strong evidence of a political resource curse at
the local level. Although mining boosts the local economy in democratic
countries, these gains are meager in autocracies and come at the expense of
tripling the likelihood of conflict relative to before the onset of mining.
Furthermore, our results suggest that the growth acceleration in mining areas
is only temporary and diminishes with the closure of the mine.",The Local Economic Impact of Mineral Mining in Africa: Evidence from Four Decades of Satellite Imagery,2021-11-10 19:36:29,"Sandro Provenzano, Hannah Bull","http://arxiv.org/abs/2111.05783v4, http://arxiv.org/pdf/2111.05783v4",econ.GN
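A minimal sketch of a two-way fixed-effects difference-in-differences regression in the spirit of the design described above; it is not the paper's specification, and the panel of (log) built-up area around mine openings is simulated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: half the areas get a mine at a random opening year; outcome
# is log built-up area with unit and year effects plus a treatment effect.
rng = np.random.default_rng(1)
units, years = 60, 20
open_year = rng.integers(5, 15, size=units)          # year each mine opens
treated = rng.random(units) < 0.5                     # half the areas get a mine

rows = []
for i in range(units):
    alpha_i = rng.normal(0, 0.3)                      # unit fixed effect
    for t in range(years):
        post = int(treated[i] and t >= open_year[i])
        y = 1.0 + alpha_i + 0.02 * t + 0.5 * post + rng.normal(0, 0.1)
        rows.append({"unit": i, "year": t, "treat_post": post, "log_area": y})
df = pd.DataFrame(rows)

fit = smf.ols("log_area ~ treat_post + C(unit) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]})
print(fit.params["treat_post"], fit.bse["treat_post"])
```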
32354,gn,"When people choose what messages to send to others, they often consider how
others will interpret the messages. A sender may expect a receiver to engage in
motivated reasoning, leading the receiver to trust good news more than bad
news, relative to a Bayesian. This paper experimentally studies how motivated
reasoning affects information transmission in political settings. Senders are
randomly matched with receivers whose political party's stances happen to be
aligned or misaligned with the truth, and either face incentives to be rated as
truthful or face no incentives. Incentives to be rated as truthful cause
senders to be less truthful; when incentivized, senders send false information
to align messages with receivers' politically-motivated beliefs. The adverse
effect of incentives is not appreciated by receivers, who rate senders in both
conditions as being equally likely to be truthful. A complementary experiment
further identifies senders' beliefs about receivers' motivated reasoning as the
mechanism driving these results. Senders are additionally willing to pay to
learn the politics of their receivers, and use this information to send more
false messages.",The Supply of Motivated Beliefs,2021-11-11 08:59:10,Michael Thaler,"http://arxiv.org/abs/2111.06062v7, http://arxiv.org/pdf/2111.06062v7",econ.GN
32355,gn,"While volume-based grid tariffs have been the norm for residential consumers,
capacity-based tariffs will become more relevant with the increasing
electrification of society. A further development is capacity subscription,
where consumers are financially penalised for exceeding their subscribed
capacity, or alternatively their demand is limited to the subscribed level. The
penalty or limitation can either be static (always active) or dynamic, meaning
that it is only activated when there are active grid constraints. We
investigate the cost impact for static and dynamic capacity subscription
tariffs, for 84 consumers based on six years of historical load data. We use
several approaches for finding the optimal subscription level ex ante. The
results show that annual costs remain both stable and similar for most
consumers, with a few exceptions for those that have high peak demand. In the
case of a physical limitation, it is important to use a stochastic approach for
the optimal subscription level to avoid excessive demand limitations. Facing
increased peak loads due to electrification, regulators should consider a move
to capacity-based tariffs in order to reduce cross-subsidisation between
consumers and increase cost reflectivity without impacting the DSO cost
recovery.",Grid Tariffs Based on Capacity Subscription: Multi Year Analysis on Metered Consumer Data,2021-11-05 15:01:38,"Sigurd Bjarghov, Hossein Farahmand, Gerard Doorman","http://arxiv.org/abs/2111.06253v1, http://arxiv.org/pdf/2111.06253v1",econ.GN
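A minimal sketch of choosing a capacity subscription level ex ante from historical hourly load; the tariff structure (a fee per subscribed kW plus a penalty per kWh above the subscribed level) and all parameters are assumptions for illustration, not the tariffs studied in the paper.

```python
import numpy as np

def annual_cost(load_kwh, level_kw, fee_per_kw=600.0, penalty_per_kwh=1.0):
    """Annual cost under an assumed capacity-subscription tariff."""
    excess = np.clip(load_kwh - level_kw, 0.0, None)   # hourly exceedance
    return fee_per_kw * level_kw + penalty_per_kwh * excess.sum()

def optimal_subscription(load_kwh, candidates):
    costs = [annual_cost(load_kwh, c) for c in candidates]
    return candidates[int(np.argmin(costs))], min(costs)

# Illustrative synthetic load: one year of hourly data with a seasonal pattern
rng = np.random.default_rng(7)
hours = np.arange(8760)
load = 1.5 + 1.0 * np.cos(2 * np.pi * hours / 8760) + rng.gamma(2.0, 0.3, 8760)

level, cost = optimal_subscription(load, np.linspace(1.0, 8.0, 71))
print("optimal subscribed capacity %.1f kW, annual cost %.0f" % (level, cost))
```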
32356,gn,"This paper introduces a transparent framework to identify the informational
content of FOMC announcements. We do so by modelling the expectations of the
FOMC and private sector agents using state of the art computational linguistic
tools on both FOMC statements and New York Times articles. We identify the
informational content of FOMC announcements as the projection of high frequency
movements in financial assets onto differences in expectations. Our recovered
series is intuitively reasonable and shows that information disclosure has a
significant impact on the yields of short-term government bonds.","It's not always about the money, sometimes it's about sending a message: Evidence of Informational Content in Monetary Policy Announcements",2021-11-11 21:25:00,"Yong Cai, Santiago Camara, Nicholas Capel","http://arxiv.org/abs/2111.06365v1, http://arxiv.org/pdf/2111.06365v1",econ.GN
32358,gn,"Index insurance is a promising tool to reduce the risk faced by farmers, but
high basis risk, which arises from imperfect correlation between the index and
individual farm yields, has limited its adoption to date. Basis risk arises
from two fundamental sources: the intrinsic heterogeneity within an insurance
zone (zonal risk), and the lack of predictive accuracy of the index (design
risk). Whereas previous work has focused almost exclusively on design risk, a
theoretical and empirical understanding of the role of zonal risk is still
lacking.
  Here we investigate the relative roles of zonal and design risk, using the
case of maize yields in Kenya. Our first contribution is to derive a formal
decomposition of basis risk, providing a simple upper bound on the insurable
basis risk that any index can reach within a given zone. Our second
contribution is to provide the first large-scale empirical analysis of the
extent of zonal versus design risk. To do so, we use satellite estimates of
yields at 10m resolution across Kenya, and investigate the effect of using
smaller zones versus using different indices. Our results show a strong local
heterogeneity in yields, underscoring the challenge of implementing index
insurance in smallholder systems, and the potential benefits of low-cost yield
measurement approaches that can enable more local definitions of insurance
zones.",Optimal index insurance and basis risk decomposition: an application to Kenya,2021-11-16 19:34:47,"Matthieu Stigler, David Lobell","http://dx.doi.org/10.1111/ajae.12375, http://arxiv.org/abs/2111.08601v2, http://arxiv.org/pdf/2111.08601v2",econ.GN
32359,gn,"This paper quantifies the international spillovers of US monetary policy by
exploiting the high-frequency movement of multiple financial assets around FOMC
announcements. I use the identification strategy introduced by Jarocinski &
Karadi (2022) to identify two FOMC shocks: a pure US monetary policy and an
information disclosure shock. These two FOMC shocks have intuitive and very
different international spillovers. On the one hand, a US tightening caused by
a pure US monetary policy shock leads to an economic recession, an exchange
rate depreciation and tighter financial conditions. On the other hand, a
tightening of US monetary policy caused by the FOMC disclosing positive
information about the state of the US economy leads to an economic expansion,
an exchange rate appreciation and looser financial conditions. Ignoring the
disclosure of information by the FOMC biases the impact of a US monetary policy
tightening and may explain recent atypical findings.",Spillovers of US Interest Rates: Monetary Policy & Information Effects,2021-11-16 20:21:56,Santiago Camara,"http://arxiv.org/abs/2111.08631v3, http://arxiv.org/pdf/2111.08631v3",econ.GN
32360,gn,"I develop a simple Schumpeterian agent-based model where the entry and exit
of firms, their productivity and markup, the birth of new industries and the
social structure of the population are endogenous and use it to study the
causes of rising inequality and ""declining business dynamism"" since the 1980s.
My hybrid model combines features of i) the so-called Schumpeter Mark I
(centering around the entrepreneur), ii) the Mark II model (emphasizing the
innovative capacities of firms), and iii) Cournot competition, with firms using
OLS learning to estimate the market environment and the behavior of their
competitors. A scenario which is quantitatively calibrated to US data on growth
and inequality replicates a large number of stylized facts regarding the
industry life-cycle, growth, inequality and all ten stylized facts on
""declining business dynamism"" proposed by Akcigit and Ates (AEJ:Macro, 2021).
Counterfactual simulations show that antitrust policy is highly effective at
combatting inequality and increasing business dynamism and growth, but is
subject to a conflict of interest between workers and firm owners, as GDP and
wages grow at the expense of profits. Technological factors, on the other hand,
are much less effective in combatting declining business dynamism in my model.","Growth, Inequality and Declining Business Dynamism in a Unified Schumpeter Mark I + II Model",2021-11-18 00:50:42,Patrick Mellacher,"http://arxiv.org/abs/2111.09407v2, http://arxiv.org/pdf/2111.09407v2",econ.GN
32361,gn,"I develop a rather simple agent-based model to capture a co-evolution of
opinion formation, political decision making and economic outcomes. I use this
model to study how societies form opinions if their members have opposing
interests. Agents are connected in a social network and exchange opinions, but
differ with regard to their interests and ability to gain information about
them. I show that inequality in information and economic resources can have a
drastic impact on aggregated opinion. In particular, my model illustrates how a
tiny, but well-informed minority can influence group decisions to their favor.
This effect is amplified if these agents are able to command more economic
resources to advertise their views and if they can target their advertisements
efficiently, as made possible by the rise of information technology. My results
contribute to the understanding of pressing questions such as climate change
denial and highlight the dangers that economic and information inequality can
pose for democracies.",Opinion Dynamics with Conflicting Interests,2021-11-18 00:52:55,Patrick Mellacher,"http://arxiv.org/abs/2111.09408v1, http://arxiv.org/pdf/2111.09408v1",econ.GN
32362,gn,"In this article we explain why the November 2021 election for the Ward 2 city
council seat in Minneapolis, MN, may be the mathematically most interesting
ranked choice election in US history.",The Curious Case of the 2021 Minneapolis Ward 2 City Council Election,2021-11-18 21:14:16,"David McCune, Lori McCune","http://arxiv.org/abs/2111.09846v3, http://arxiv.org/pdf/2111.09846v3",econ.GN
32363,gn,"Purpose: The purpose of this paper is to contribute to the debate as to
whether collaboration in coworking spaces contributes to firm innovativeness
and impacts the business models of organizations in a positive manner.
  Methodology: This paper includes primary data from 75 organizations in 17
coworking spaces and uses quantitative research methods. The methodology
includes multiple statistical methods, such as principal component analysis,
correlation analysis as well as linear and binary regression analysis.
  Results: The results show a positive interrelation between collaboration and
innovation, indicating that coworkers are able to improve their innovative
capabilities by making use of strategic partnerships in coworking spaces.
Further, this study shows that business models are significantly affected by
the level of collaboration in coworking spaces, which suggests that coworking
is a promoting force for business model development or business model
innovation. Contributions: The paper contributes to the management literature and
represents the first empirical investigation that focuses on the effects of
collaboration at the firm level in coworking spaces.
  Practical implications: The results indicate that organizations in coworking
spaces should embrace a collaborative mindset and should actively seek out
collaborative alliances and partnerships, as doing so is shown to increase
their innovativeness and/or help develop their business model.
  Future Research: Future research should focus on the antecedents of
collaboration or could investigate the effects of collaboration in coworking
spaces on a community level.",Collaboration in Coworking Spaces: Impact on Firm Innovativeness and Business Models,2021-11-18 21:46:23,M. Moore,"http://arxiv.org/abs/2111.09866v1, http://arxiv.org/pdf/2111.09866v1",econ.GN
32365,gn,"Technology codes are assigned to each patent for classification purposes and
to identify the components of its novelty. Not all the technology codes are
used with the same frequency: studying the use frequency of codes in a given
year reveals predominant technologies used in many patents alongside technology
codes that appear in patents only rarely. In this paper, we measure that
inequality in the use frequency of patent technology codes. First, we analyze
the total inequality in that use frequency considering the patent applications
filed under the Patent Co-operation Treaty at international phase, with the
European Patent Office as designated office, in the period 1977-2018, on a
yearly basis. Then, we analyze the decomposition of that inequality by grouping
the technology codes by productive economic activities. We show that total
inequality had an initial period of growth followed by a phase of relative
stabilization, and that it tends to be persistently high. We also show that
total inequality was mainly driven by inequality within productive economic
activities, with a low contribution of the between-activities component.",Inequality in the use frequency of patent technology codes,2021-11-22 16:53:37,"José Alejandro Mendoza, Faustino Prieto, José María Sarabia","http://arxiv.org/abs/2111.11211v1, http://arxiv.org/pdf/2111.11211v1",econ.GN
32366,gn,"Bank operational risk capital modeling using the Basel II advanced
measurement approach (AMA) often leads to a counter-intuitive capital estimate
of value at risk at 99.9% due to extreme loss events. To address this issue, a
flexible semi-nonparametric (SNP) model is introduced using the change of
variables technique to enrich the family of distributions to handle extreme
loss events. The SNP models are proved to have the same maximum domain of
attraction (MDA) as the parametric kernels, and it follows that the SNP models
are consistent with the extreme value theory peaks over threshold method but
with different shape and scale parameters from the kernels. By using the
simulation dataset generated from a mixture of distributions with both light
and heavy tails, the SNP models in the Frechet and Gumbel MDAs are shown to fit
the tail dataset satisfactorily through increasing the number of model
parameters. The SNP model quantile estimates at 99.9 percent are not overly
sensitive towards the body-tail threshold change, which is in sharp contrast to
the parametric models. When applied to a bank operational risk dataset with
three Basel event types, the SNP model provides a significant improvement in
the goodness of fit to the two event types with heavy tails, yielding an
intuitive capital estimate that is in the same magnitude as the event type
total loss. Since the third event type does not have a heavy tail, the
parametric model yields an intuitive capital estimate, and the SNP model cannot
provide additional improvement. This research suggests that the SNP model may
enable banks to continue with the AMA or its partial use to obtain an intuitive
operational risk capital estimate when the simple non-model based Basic
Indicator Approach or Standardized Approach is not suitable per Basel
Committee Banking Supervision OPE10 (2019).",Semi-nonparametric Estimation of Operational Risk Capital with Extreme Loss Events,2021-11-22 22:01:41,"Heng Z. Chen, Stephen R. Cosslett","http://arxiv.org/abs/2111.11459v2, http://arxiv.org/pdf/2111.11459v2",econ.GN
32367,gn,"This paper explores different methods to estimate prices paid per efficiency
unit of labor in panel data. We study the sensitivity of skill price estimates
to different assumptions regarding workers' choice problem, identification
strategies, the number of occupations considered, skill accumulation processes,
and estimation strategies. In order to do so, we conduct careful Monte Carlo
experiments designed to generate similar features as in German panel data. We
find that once skill accumulation is appropriately modelled, skill price
estimates are generally robust to modelling choices when the number of
occupations is small, i.e., switches between occupations are rare. When
switching is important, subtle issues emerge and the performance of different
methods varies more strongly.",The Performance of Recent Methods for Estimating Skill Prices in Panel Data,2021-11-24 15:26:14,"Michael J. Böhm, Hans-Martin von Gaudecker","http://arxiv.org/abs/2111.12459v1, http://arxiv.org/pdf/2111.12459v1",econ.GN
32368,gn,"This paper extends a standard general equilibrium framework with a corporate
tax code featuring two key elements: tax depreciation policy and the
distinction between c-corporations and pass-through businesses. In the model,
the stimulative effect of a tax rate cut on c-corporations is smaller when tax
depreciation policy is accelerated, and is further diluted in the aggregate by
the presence of pass-through entities. Because of a highly accelerated tax
depreciation policy and a large share of pass-through activity in 2017, the
model predicts small stimulus, large payouts to shareholders, and a dramatic
loss of corporate tax revenues following the Tax Cuts and Jobs Act (TCJA-17).
These predictions are consistent with novel micro- and macro-level evidence
from professional forecasters and sectoral tax returns. At the same time,
because of less-accelerated tax depreciation and a lower pass-through share in
the early 1960s, the model predicts sizable stimulus in response to Kennedy's
corporate tax cuts, which is also supported by the data. The model-implied
corporate tax multipliers for Trump's TCJA-17 and Kennedy's tax cuts are +0.6
and +2.5, respectively.",The Macroeconomic Effects of Corporate Tax Reforms,2021-11-25 00:10:50,Francesco Furno,"http://arxiv.org/abs/2111.12799v1, http://arxiv.org/pdf/2111.12799v1",econ.GN
32369,gn,"The paper discusses the process of social and economic development of
municipalities. A conclusion is made that developing an adequate model of
social and economic development using conventional approaches presents a
considerable challenge. It is proposed to use semantic modeling to represent
the social and economic development of municipalities, and cognitive mapping to
identify the set of connections that occur among indicators and that have a
direct impact on social and economic development.",Management of Social and Economic Development of Municipalities,2021-11-26 14:55:26,"Maria A. Shishanina, Anatoly A. Sidorov","http://arxiv.org/abs/2111.13690v1, http://arxiv.org/pdf/2111.13690v1",econ.GN
32370,gn,"Economists increasingly refer to monopsony power to reconcile the absence of
negative employment effects of minimum wages with theory. However, systematic
evidence for the monopsony argument is scarce. In this paper, I perform a
comprehensive test of monopsony theory by using labor market concentration as a
proxy for monopsony power. Labor market concentration turns out to be substantial in
Germany. Absent wage floors, a 10 percent increase in labor market
concentration makes firms reduce wages by 0.5 percent and employment by 1.6
percent, reflecting monopsonistic exploitation. In line with perfect
competition, sectoral minimum wages lead to negative employment effects in
slightly concentrated labor markets. This effect weakens with increasing
concentration and, ultimately, becomes positive in highly concentrated or
monopsonistic markets. Overall, the results lend empirical support to the
monopsony argument, implying that conventional minimum wage effects on
employment conceal heterogeneity across market forms.",Minimum Wages in Concentrated Labor Markets,2021-11-26 18:24:42,Martin Popp,"http://arxiv.org/abs/2111.13692v6, http://arxiv.org/pdf/2111.13692v6",econ.GN
32402,gn,"The article discusses the controversy over infinite growth. It analyzes the
two dominant perspectives on economic growth and finds them based on a limited
and subjective view of reality. An examination of the principal aspects of
economic growth (economic activity, value, and value creation) helps to
understand what fuels economic growth. The article also discusses the
correlations between production, consumption, as well as resources and
population dynamics. The article finds that infinite and exponential economic
growth is essential for the survival of our civilization.",Infinite Growth: A Curse or a Blessing?,2022-01-24 20:11:02,Gennady Shkliarevsky,"http://dx.doi.org/10.13140/RG.2.2.33330.89287, http://arxiv.org/abs/2201.09806v1, http://arxiv.org/pdf/2201.09806v1",econ.GN
32371,gn,"This paper examines the effect of two different soda taxes on consumption
behaviour and health of school-aged children in Europe: Hungary imposed a
Public Health Product Tax (PHPT) on several unhealthy products in 2011. France
introduced solely a soda tax, containing sugar or artificial sweeteners, in
2012. In order to exploit spatial variation, I use a semi-parametric
Difference-in-Differences (DID) approach. Since the policies differ in Hungary
and France, I analyse the effects separately by using a neighbouring country
without a soda tax as a control group. The results suggest a counter-intuitive
positive effect of the tax on soda consumption in Hungary. The reason for this
finding could be the substitution of other unhealthy beverages, which are taxed
at a higher rate, by sodas. The effect of the soda tax in France is, as
expected, negative but insignificant, which might be caused by the low tax rate. The body
mass index (BMI) is not affected by the tax in any country. Consequently,
policy makers should think carefully about the design and the tax rate before
implementing a soda tax.",Do soda taxes affect the consumption and health of school-aged children? Evidence from France and Hungary,2021-11-29 16:24:10,Selina Gangl,"http://arxiv.org/abs/2111.14521v1, http://arxiv.org/pdf/2111.14521v1",econ.GN
32372,gn,"We analyse the effect of mandatory kindergarten attendance for four-year-old
children on maternal labour market outcomes in Switzerland. To determine the
causal effect of this policy, we combine two different datasets and
quasi-experiments in this paper: Firstly, we investigate a large administrative
dataset and apply a non-parametric regression discontinuity design (RDD) to
evaluate the effect of the reform at the birthday cut-off for entering the
kindergarten in the same versus in the following year. Secondly, we complement
this analysis by exploiting spatial variation and staggered treatment
implementation of the reform across cantons (administrative units in
Switzerland) in a difference-in-differences (DiD) approach based on a household
survey. All in all, the results suggest that if anything, mandatory
kindergarten increases the labour market outcomes of mothers very moderately.
The effects are driven by previously non-employed mothers and by older rather
than younger mothers.",From homemakers to breadwinners? How mandatory kindergarten affects maternal labour market outcomes,2021-11-29 16:32:27,"Selina Gangl, Martin Huber","http://arxiv.org/abs/2111.14524v3, http://arxiv.org/pdf/2111.14524v3",econ.GN
32373,gn,"In this paper, we assess the demand effects of lower public transport fares
in Geneva, an urban area in Switzerland. Considering a unique sample based on
transport companies' annual reports, we find that, when reducing the costs of
annual season tickets, day tickets and hourly tickets (by up to 29%, 6% and
20%, respectively), demand increases by, on average, over five years, about
10.6%. To the best of our knowledge, we are the first to show how the synthetic
control method (Abadie and Gardeazabal, 2003, Abadie, Diamond, and Hainmueller,
2010) can be used to assess such (for policy-makers) important price reduction
effects in urban public transport. Furthermore, we propose an aggregate metric
that accounts for changes in public transport supply (e.g., frequency increases) to
assess these demand effects, namely passenger trips per vehicle kilometre. This
metric helps us to isolate the impact of price reductions by ensuring that
companies' frequency increases do not affect estimators of interest. In
addition, we show how to investigate the robustness of results in similar
settings. Using a recent statistical method and a different study design, i.e.,
not blocking off supply changes as an alternate explanation of the effect,
leads us to a lower bound of the effect, amounting to an increase of 3.7%.
Finally, as far as we know, this is the first causal estimate of a price reduction
on urban public transport initiated by direct democracy.",Do price reductions attract customers in urban public transport? A synthetic control approach,2021-11-25 10:43:26,"Hannes Wallimann, Kevin Blättler, Widar von Arx","http://arxiv.org/abs/2111.14613v2, http://arxiv.org/pdf/2111.14613v2",econ.GN
32374,gn,"Examining the trend of the global economy shows that global trade is moving
towards high-tech products. Given that these products generate very high added
value, countries that can produce and export these products will have high
growth in the industrial sector. The importance of investing in advanced
technologies for economic and social growth and development is so great that it
is mentioned as one of the strong levers to achieve development. It should be
noted that the policy of developing advanced technologies requires
consideration of various performance aspects, risks and future risks in the
investment phase. Risk in high-tech investment projects extends beyond purely
financial concepts. In recent years, researchers have focused on identifying,
analyzing, and prioritizing risk. Measuring investment risk in high-tech
industries involves two important components: identifying the characteristics
and criteria for measuring system risk, and determining how to measure them.
This study evaluates and ranks the investment risks in
advanced industries using fuzzy TOPSIS technique based on verbal variables.",Ranking of different of investment risk in high-tech projects using TOPSIS method in fuzzy environment based on linguistic variables,2021-11-29 19:25:26,"Mohammad Ebrahim Sadeghi, Hamed Nozari, Hadi Khajezadeh Dezfoli, Mehdi Khajezadeh","http://dx.doi.org/10.22105/JFEA.2021.298002.1159, http://arxiv.org/abs/2111.14665v1, http://arxiv.org/pdf/2111.14665v1",econ.GN
32375,gn,"The traditional monetary transmission mechanism usually views the equity
markets as the monetary reservoir that absorbs over-issued money, but due to
China's unique fiscal and financial system, the real estate sector has become
an ""invisible"" non-traditional monetary reservoir in China for many years.
First, using data from the Chinese housing market and the central bank for
parameter estimation, we construct a dynamic general equilibrium model that
includes fiscal expansion and a financial accelerator to reveal the mechanism
of the monetary reservoir. An asset can be called a loan product, serving as a
financing asset for local fiscal expansion, as long as it satisfies the following three
conditions: leveraged trading system, balance commitment payment, and the
existence of the utility of local governments. This paper refers to this
mechanism as the monetary reservoir, which pushes up the premium of the loan
product, forms asset bubbles, and has a significant impact on the effectiveness
of monetary policy. Local governments leverage the loan-product sector
to obtain short-term growth by influencing the balance sheets of financial
intermediaries through fiscal financing, expenditure and also investment, but
this mechanism undermines the foundations of long-term growth by crowding out
human capital and technological accumulation.",China's Easily Overlooked Monetary Transmission Mechanism: Monetary Reservoir,2021-11-30 15:11:26,"Shuguang Xiao, Xinglin Lai, Jiamin Peng","http://arxiv.org/abs/2111.15327v4, http://arxiv.org/pdf/2111.15327v4",econ.GN
32376,gn,"The idea that research investments respond to market rewards is well
established in the literature on markets for innovation (Schmookler, 1966;
Acemoglu and Linn, 2004; Bryan and Williams, 2021). Empirical evidence tells us
that a change in market size, such as the one measured by demographical shifts,
is associated with an increase in the number of new drugs available (Acemoglu
and Linn, 2004; Dubois et al., 2015). However, the debate about potential
reverse causality is still open (Cerda et al., 2007). In this paper we analyze
market size's effect on innovation as measured by active clinical trials. The
idea is to exploit product recalls, an innovative instrument tested to be sharp,
strong, and unexpected. The work analyses the relationship between US market
size and innovation at ATC-3 level through an original dataset and the two-step
IV methodology proposed by Wooldridge et al. (2019). The results reveal a
robust and significantly positive response of the number of active trials to market
size.","Product recalls, market size and innovation in the pharmaceutical industry",2021-11-30 16:33:25,"Federico Nutarelli, Massimo Riccaboni, Andrea Morescalchi","http://arxiv.org/abs/2111.15389v1, http://arxiv.org/pdf/2111.15389v1",econ.GN
32377,gn,"We study a model of two-player bargaining game in the shadow of a preventive
trade war that examines why states deliberately maintain trade barriers in the
age of globalization. Globalization can induce substantial power shifts between
states, which makes the threat of a preventive trade war salient. In this
situation, there may exist ""healthy"" levels of trade barriers that dampen the
war incentives by reducing states' expected payoffs from such a war. Thus, we
demonstrate that trade barriers can sometimes serve as breaks and cushions
necessary to sustain inefficient yet peaceful economic cooperation between
states. We assess the theoretical implications by examining the US-China trade
relations since 1972.",Inefficient Peace or Preventive War?,2021-11-30 20:45:58,"Liqun Liu, Tusi, Wen","http://arxiv.org/abs/2111.15598v2, http://arxiv.org/pdf/2111.15598v2",econ.GN
32378,gn,"Depressive disorders, in addition to causing direct negative impacts on
health, are also responsible for imposing substantial costs on society. In
relation to the treatment of depression, antidepressants have proven effective,
and, according to the World Health Organization, access to psychotropic drugs for people
with mental illnesses offers a chance of improved health and an opportunity for
reengagement in society. The aim of this study is to analyze the use of and
access to antidepressants in Brazil, according to macro-regions and to
demographic, social and economic conditions of the population, using the
National Survey on Access, Use and Promotion of Rational Use of Medicines
(PNAUM 2013/2014). The results show that there is a high prevalence of
antidepressant use in individuals with depression in Brazil. The main profile
of use of these drugs is: female individuals, between 20 and 59 years old,
white, from the Southeast region, of the economic class D/E, with a high
schooling level, in a marital situation, without health insurance coverage,
without limitations derived from depression, and who self-evaluated health as
regular.",Analise Demografica e Socioeconomica do Uso e do Acesso a Medicamentos Antidepressivos no Brasil,2021-11-30 21:16:07,"Karinna Moura Boaviagem, José Ricardo Bezerra Nogueira","http://arxiv.org/abs/2111.15618v1, http://arxiv.org/pdf/2111.15618v1",econ.GN
32379,gn,"Enabling children to acquire an education is one of the most effective means
to reduce inequality, poverty, and ill-health globally. While in normal times a
government controls its educational policies, during times of macroeconomic
instability, that control may shift to supporting international organizations,
such as the International Monetary Fund (IMF). While much research has focused
on which sectors have been affected by IMF policies, scholars have devoted
little attention to the policy content of IMF interventions affecting the
education sector and children's education outcomes: denoted IMF education
policies. This article evaluates the extent to which IMF education policies
exist across programs and how these policies and IMF programs affect children's
likelihood of completing school. While IMF education policies have a small yet
statistically insignificant adverse effect on children's probability of
completing school, these policies moderate effect heterogeneity for IMF
programs. IMF programs (as a joint set of policies) adversely affect children's
chances of completing school by six percentage points. By analyzing how IMF
education policies, and IMF programs more broadly, affect the education
sector in low- and middle-income countries, scholars will gain a deeper
sector in low and middle-income countries, scholars will gain a deeper
understanding of how such policies will likely affect downstream outcomes.",The International Monetary Funds intervention in education systems and its impact on childrens chances of completing school,2021-12-31 01:56:49,Adel Daoud,"http://arxiv.org/abs/2201.00013v1, http://arxiv.org/pdf/2201.00013v1",econ.GN
32380,gn,"This paper aims to study the economic impact of COVID-19. To do that, in the
first step, I show that the adjusted SEQIER model, a generalized form of the
SEIR model, fits the real COVID-induced daily death data well and captures its
nonlinearities. Then, I use this model with extra parameters to evaluate the
economic effect of COVID-19 through the job market. The results show that there
was a simple strategy the US government could have implemented to reduce the
negative effect of COVID-19; for this reason, the answer to the paper's title
is yes. If lockdown policies had accounted for the heterogeneous characteristics
of the population, imposed more restrictions on older people, and controlled
their interactions with the rest of the population, the devastating impact of
COVID-19 on people's lives and the US economy would have been reduced
dramatically. Specifically, based on this paper's results, this strategy could
have reduced the death rate and GDP loss of the United States to 0.03 percent
and 2 percent, respectively. Comparing these results with actual data, which
show a death rate of 0.1 percent and a GDP loss of 3.5 percent, implies a
death-rate reduction of 0.07 percent: for the same GDP loss, the optimal
targeted policy could have saved two out of three lives. Approximately 378,000
people died of COVID-19 during 2020, so reducing the death rate to 0.03 percent
means saving around
280,000 lives, which is huge.",COVID Lessons: Was there any way to reduce the negative effect of COVID-19 on the United States economy?,2022-01-02 05:25:34,Mohammadreza Mahmoudi,"http://arxiv.org/abs/2201.00274v1, http://arxiv.org/pdf/2201.00274v1",econ.GN
32381,gn,"This paper develops a formal framework to assess policies of learning
algorithms in economic games. We investigate whether reinforcement-learning
agents with collusive pricing policies can successfully extrapolate collusive
behavior from training to the market. We find that in testing environments
collusion consistently breaks down. Instead, we observe static Nash play. We
then show that restricting algorithms' strategy space can make algorithmic
collusion robust, because it limits overfitting to rival strategies. Our
findings suggest that policy-makers should focus on firm behavior aimed at
coordinating algorithm design in order to make collusive policies robust.",Robust Algorithmic Collusion,2022-01-02 15:10:26,"Nicolas Eschenbaum, Filip Mellgren, Philipp Zahn","http://arxiv.org/abs/2201.00345v2, http://arxiv.org/pdf/2201.00345v2",econ.GN
32382,gn,"The diffusion of new technologies is crucial for the realization of social
and economic returns to innovation. Tracking and mapping technology diffusion
is, however, typically limited by the extent to which we can observe technology
adoption. This study uses website texts to train a multilingual language model
ensemble to map technology diffusion for the case of 3D printing. The study
identifies relevant actors and their roles in the diffusion process. The
results show that besides manufacturers, service providers, retailers, and
information providers play an important role. The geographic distribution of
adoption intensity suggests that regional 3D-printing intensity is driven by
experienced lead users and the presence of technical universities. The overall
adoption intensity varies by sector and firm size. These patterns indicate that
the approach of using webAI provides a useful and novel tool for technology
mapping which adds to existing measures based on patents or survey data.",Technology Mapping Using WebAI: The Case of 3D Printing,2022-01-04 16:10:56,"Julian Schwierzy, Robert Dehghan, Sebastian Schmidt, Elisa Rodepeter, Andreas Stoemmer, Kaan Uctum, Jan Kinne, David Lenz, Hanna Hottenrott","http://arxiv.org/abs/2201.01125v1, http://arxiv.org/pdf/2201.01125v1",econ.GN
32383,gn,"A gradual growth in flexible work over many decades has been suddenly and
dramatically accelerated by the COVID-19 pandemic. The share of flexible work
days in the United States is forecasted to grow from 4\% in 2018 to over 26\%
by 2022. This rapid and unexpected shift in the nature of work will have a
profound effect on the demand for, and supply of, urban transportation.
Understanding how people make decisions around where and with whom to work will
be critical for predicting future travel patterns and designing mobility
systems to serve flexible commuters. To that end, this paper establishes a
formal taxonomy for describing possible flexible work arrangements, the
stakeholders involved and the relationships between them. An analytical
framework is then developed for adapting existing transportation models to
incorporate the unique dynamics of flexible work location choice. Several
examples are provided to demonstrate how the new taxonomy and analytical
framework can be applied across a broad set of scenarios. Finally, a critical
research agenda is proposed to create both the empirical knowledge and
methodological tools to prepare urban mobility for the future of work.",Preparing urban mobility for the future of work,2022-01-04 22:17:29,"Nicholas S. Caros, Jinhua Zhao","http://arxiv.org/abs/2201.01321v1, http://arxiv.org/pdf/2201.01321v1",econ.GN
32384,gn,"Proxy means testing (PMT) and community-based targeting (CBT) are two of the
leading methods for targeting social assistance in developing countries. In
this paper, we present a hybrid targeting method that incorporates CBT's
emphasis on local information and preferences with PMT's reliance on verifiable
indicators. Specifically, we outline a Bayesian framework for targeting that
resembles PMT in that beneficiary selection is based on a weighted sum of
sociodemographic characteristics. We nevertheless propose calibrating the
weights to preference rankings from community targeting exercises, implying
that the weights used by our method reflect how potential beneficiaries
themselves substitute sociodemographic features when making targeting
decisions. We discuss several practical extensions to the model, including a
generalization to multiple rankings per community, an adjustment for elite
capture, a method for incorporating auxiliary information on potential
beneficiaries, and a dynamic updating procedure. We further provide an
empirical illustration using data from Burkina Faso and Indonesia.",A hybrid approach to targeting social assistance,2022-01-05 00:35:01,"Lendie Follett, Heath Henderson","http://arxiv.org/abs/2201.01356v1, http://arxiv.org/pdf/2201.01356v1",econ.GN
32385,gn,"This study investigates the influence of infection cases of COVID-19 and two
non-compulsory lockdowns on human mobility within the Tokyo metropolitan area.
Using the data of hourly staying population in each 500m$\times$500m cell and
their city-level residency, we show that long-distance trips or trips to
crowded places decrease significantly when infection cases increase. The same
result holds for the two lockdowns, although the second lockdown was less
effective. Hence, Japanese non-compulsory lockdowns influence mobility in a
similar way to the increase in infection cases. This means that they are
accepted as alarm triggers for people who are at risk of contracting COVID-19.",Influence of trip distance and population density on intra-city mobility patterns in Tokyo during COVID-19 pandemic,2022-01-05 03:53:06,"Kazufumi Tsuboi, Naoya Fujiwara, Ryo Itoh","http://dx.doi.org/10.1371/journal.pone.0276741, http://arxiv.org/abs/2201.01398v1, http://arxiv.org/pdf/2201.01398v1",econ.GN
32386,gn,"We provide the first economic research on `buy now, pay later' (BNPL): an
unregulated FinTech credit product enabling consumers to defer payments into
interest-free instalments. We study BNPL using UK credit card transaction data.
We document consumers charging BNPL transactions to their credit card. Charging
of BNPL to credit cards is most prevalent among younger consumers and those
living in the most deprived geographies. Charging a $0\%$ interest, amortizing
BNPL debt to credit cards - where typical interest rates are $20\%$ and
amortization schedules decades-long - raises doubts on these consumers' ability
to pay for BNPL. This prompts a regulatory question as to whether consumers
should be allowed to refinance their unsecured debt.","Buy Now, Pay Later (BNPL)...On Your Credit Card",2022-01-05 21:42:49,"Benedict Guttman-Kenney, Christopher Firth, John Gathergood","http://arxiv.org/abs/2201.01758v5, http://arxiv.org/pdf/2201.01758v5",econ.GN
32387,gn,"This paper studies the role of households' heterogeneity in access to
financial markets and the consumption of commodity goods in the transmission of
foreign shocks. First, I use survey data from Uruguay to show that low income
households have poor to no access to savings technology while spending a
significant share of their income on commodity-based goods. Second, I construct
a Two-Agent New Keynesian (TANK) small open economy model with two main
features: (i) limited access to financial markets, and (ii) non-homothetic
preferences over commodity goods. I show how these features shape aggregate
dynamics and amplify foreign shocks. Additionally, I argue that these features
introduce a redistribution channel for monetary policy and a rationale for
""fear-of-floating"" exchange rate regimes. Lastly, I study the design of optimal
policy regimes and find that households have opposing preferences over
monetary and fiscal rules.","TANK meets Diaz-Alejandro: Household heterogeneity, non-homothetic preferences & policy design",2022-01-09 06:34:16,Santiago Camara,"http://arxiv.org/abs/2201.02916v1, http://arxiv.org/pdf/2201.02916v1",econ.GN
33359,gn,"A major gap exists between the conceptual suggestion of how much a nation
should invest in science, innovation, and technology, and the practical
implementation of what is done. We identify 4 critical challenges that must be
address in order to develop an environment conducive to collaboration across
organizations and governments, while also preserving commercial rewards for
investors and innovators, in order to move towards a new Research Ecosystem.",Towards a Framework for a New Research Ecosystem,2023-12-12 11:39:20,"Roberto Savona, Cristina Maria Alberini, Lucia Alessi, Iacopo Baussano, Petros Dellaportas, Ranieri Guerra, Sean Khozin, Andrea Modena, Sergio Pecorelli, Guido Rasi, Paolo Daniele Siviero, Roger M. Stein","http://arxiv.org/abs/2312.07065v2, http://arxiv.org/pdf/2312.07065v2",econ.GN
32389,gn,"The StableSims project set out to determine optimal parameters for the new
auction mechanism, Liquidations 2.0, used by MakerDAO, a protocol built on
Ethereum offering a decentralized, collateralized stablecoin called Dai. We
developed an agent-based simulation that emulates both the Maker protocol smart
contract logic, and how profit-motivated agents (""keepers"") will act in the
real world when faced with decisions such as liquidating ""vaults""
(collateralized debt positions) and bidding on collateral auctions. This
research focuses on the incentive structure introduced in Liquidations 2.0,
which implements both a constant fee (tip) and a fee proportional to vault size
(chip) paid to keepers that liquidate vaults or restart stale collateral
auctions. We sought to minimize the amount paid in incentives while maximizing
the speed with which undercollateralized vaults were liquidated. Our findings
indicate that it is more cost-effective to increase the constant fee, as
opposed to the proportional fee, in order to decrease the time it takes for
keepers to liquidate vaults.",StableSims: Optimizing MakerDAO Liquidations 2.0 Incentives via Agent-Based Modeling,2022-01-10 21:30:32,"Andrew Kirillov, Sehyun Chung","http://arxiv.org/abs/2201.03519v1, http://arxiv.org/pdf/2201.03519v1",econ.GN
32390,gn,"In this comment, I revisit the question raised in Karadja and Prawitz (2019)
concerning a causal relationship between mass emigration and long-run political
outcomes. I discuss a number of potential problems with their instrumental
variable analysis. First, there are at least three reasons why their instrument
violates the exclusion restriction: (i) failing to control for internal
migration, (ii) insufficient control for confounders correlated with their
instrument, and (iii) emigration measured with a nonclassical measurement
error. Second, I also discuss two problems with the statistical inference, both
of which indicate that the instrument does not fulfill the relevance condition,
i.e., the instrument is not sufficiently correlated with the endogenous
variable emigration. Correcting for any of these problems reveals that there is
no relationship between emigration and political outcomes.","Exit, Voice and Political Change: Evidence from Swedish Mass Migration to the United States; A Comment",2022-01-13 13:48:57,Per Pettersson-Lidbom,"http://arxiv.org/abs/2201.04880v1, http://arxiv.org/pdf/2201.04880v1",econ.GN
32391,gn,"We replicate Meissner (2016), where debt aversion was reported for the first
time in an intertemporal consumption and saving problem. While Meissner (2016)
uses a German sample, our participants are US undergraduate students. All of
the original study's main findings replicate with similar effect sizes.
Additionally, we extend the original analysis by introducing a new individual
index of debt aversion, which we use to compare debt aversion across countries.
Interestingly, we find no significant differences in debt aversion between the
original German and the new US sample. We then test whether debt aversion
correlates with individual characteristics such as gender, cognitive reflection
ability, and risk aversion. Overall, this paper confirms the importance of debt
aversion in intertemporal consumption and saving problems and validates the
approach of Meissner (2016).",Intertemporal Consumption and Debt Aversion: A Replication and Extension,2022-01-16 12:33:56,"Steffen Ahrens, Ciril Bosch-Rosa, Thomas Meissner","http://arxiv.org/abs/2201.06006v2, http://arxiv.org/pdf/2201.06006v2",econ.GN
32392,gn,"When do multinationals show resilience during natural disasters? To answer
this, we develop a simple model in which foreign multinationals and local firms
in the host country are interacted through input-output linkages. When natural
disasters seriously hit local firms and thus increase the cost of sourcing
local intermediate inputs, most multinationals may leave the host country.
However, they are likely to stay if they are tightly linked with local
suppliers and face low trade costs of importing foreign intermediates. We
further provide a number of extensions of the basic model to incorporate, for
example, multinationals with heterogeneous productivity and disaster
reconstruction.",The Resilience of FDI to Natural Disasters through Industrial Linkages,2022-01-17 06:44:47,"Hayato Kato, Toshihiro Okubo","http://arxiv.org/abs/2201.06197v5, http://arxiv.org/pdf/2201.06197v5",econ.GN
32393,gn,"In this study, we identify the relative standard deviation volatility (RSD
volatility) in the individual target time fulfilment of the complete set of
comparables (e.g., all individuals in the same organisational structure) as a
possible key performance indicator (KPI) for predicting employee job
performance. KPIs are a well-established, measurable benchmark of an
organisation's critical success metrics; thus, in this paper, we attempt to
identify employees experiencing a transition in their RSD towards a higher per
cent deviation, indicating emerging inadequate work conditions. We believe RSD
volatility can be utilised as an additional assessment factor, particularly in
profiling.",Volatility in the Relative Standard Deviation of Target Fulfilment as Key Performance Indicator (KPI),2022-01-17 15:18:13,"Andreas Bauer, Jasna Omeragic","http://arxiv.org/abs/2201.06373v6, http://arxiv.org/pdf/2201.06373v6",econ.GN
32394,gn,"China has been developing very fast since the beginning of the 21st century.
The net income of households has increased considerably as well. Nonetheless,
households that migrate from rural areas to urban sectors tend to maintain a
high saving rate instead of consuming. This essay uses conventional Ordinary
Least Squares regression, along with an instrumental variable approach to
address endogeneity, to examine the relationship between the saving rates
of rural households and labor migration, controlling for other characteristic
variables including insurance coverage, marital status, education, having
children, and health conditions. The assumption is that migration contributes
positively to the dependent variable, meaning that migration could increase the
household saving rate. However, the conclusion is that migration is negatively
related to the household saving rate. All the other variables regarding education, health
conditions, marital status, insurance, and number of children are negatively
related with the household saving rates.",Examining the Relations between Household Saving Rate of Rural Areas and Migration,2022-01-13 07:10:04,Fuhao Lou,"http://arxiv.org/abs/2201.07159v1, http://arxiv.org/pdf/2201.07159v1",econ.GN
32395,gn,"We exploit the new country-by-country reporting data of multinational
corporations, with unparalleled country coverage, to reveal the distributional
consequences of profit shifting. We estimate that multinational corporations
worldwide shifted over \$850 billion in profits in 2017, primarily to countries
with effective tax rates below 10\%. Countries with lower incomes lose a larger
share of their total tax revenue due to profit shifting. We further show that a
logarithmic function is better suited for capturing the non-linear relationship
between profits and tax rates than linear or quadratic functions. Our findings
highlight effective tax rates' importance for profit shifting and tax reforms.",Profit Shifting of Multinational Corporations Worldwide,2022-01-20 23:36:25,"Javier Garcia-Bernardo, Petr Janský","http://arxiv.org/abs/2201.08444v2, http://arxiv.org/pdf/2201.08444v2",econ.GN
33372,gn,"Global hunger levels have set new records in each of the last three years,
outpacing aid budgets (WFP, 2023; FSIN, 2023). Most households experiencing
food insecurity crises are now in fragile states (Townsend et al., 2021),
making it difficult to support vulnerable, hard-to-reach populations without
interference from oppressive governments and non-state actors (Kurtzer, 2019;
Cliffe et al., 2023). Despite growing interest in using digital payments for
crisis response (Suri and Jack, 2016; Pazarbasioglu et al., 2020; Aiken et al.,
2022), there is limited causal evidence on their efficacy (Gentilini, 2022;
WFP, 2023). We show that digital payments can address basic needs during a
humanitarian crisis. We conducted a randomized evaluation among very poor,
mostly tech-illiterate, female-headed households in Afghanistan with digital
transfers of $45 every two weeks for two months. Digital aid led to significant
improvements in nutrition and in mental well-being. We find high usage rates,
no evidence of diversion by the Taliban despite rigorous checks, and that 80%
of recipients would prefer digital aid rather than pay a 2.5% fee to receive
aid in cash. Conservative assumptions put the cost of delivery under 7 cents
per dollar, which is 10 cents per dollar less than the World Food Program's
global figure for cash-based humanitarian assistance. These savings could help
reduce hunger without additional resources. Asked to predict our findings,
policymakers and experts underestimated beneficiaries' ability to use digital
transfers and overestimated the likelihood of diversion and the costs of
delivery. Such misperceptions might impede the adoption of new technologies
during crises. These results highlight the potential for digital aid to
complement existing approaches in supporting vulnerable populations during
crises.",Can Digital Aid Deliver During Humanitarian Crises?,2023-12-21 00:17:57,"Michael Callen, Miguel Fajardo-Steinhäuser, Michael G. Findley, Tarek Ghani","http://arxiv.org/abs/2312.13432v1, http://arxiv.org/pdf/2312.13432v1",econ.GN
32396,gn,"This paper investigates the impact of Social Discount Rate (SDR) choice on
intergenerational equity issues caused by Public-Private Partnerships (PPPs)
projects. Indeed, more PPPs mean more debt being accumulated for future
generations leading to a fiscal deficit crisis. The paper draws on how the SDR
level taken today distributes societies on the Social Welfare Function (SWF).
This is done by answering two sub-questions: (i) What is the risk of PPP debts
being off-balance sheet? (ii) How do public policies, based on the envisaged
SDR, position society within different ethical perspectives? The answers are
obtained from a discussion of the different SDRs (applied in the UK for
examples) according to the merits of the pertinent ethical theories, namely
libertarian, egalitarian, utilitarian and Rawlsian. We find that public
policymakers can manipulate the SDR to make PPPs looking like a better option
than the traditional financing form. However, this antagonises the Value for
Money principle. We also point out that public policy is not harmonised with
ethical theories. We find that at present (in the UK), the SDR is somewhere
between weighted utilitarian and Rawlsian societies in the trade-off curve.
Alas, our study finds no evidence that the (UK) government is using a
sophisticated system to keep pace with the accumulated off-balance sheet debts.
Thus, the exact prediction of the final state is hardly made because of the
uncertainty factor. We conclude that our study hopefully provides a good
analytical framework for policymakers in order to draw on the merits of ethical
theories before initiating public policies like PPPs.",An Intergenerational Issue: The Equity Issues due to Public-Private Partnerships. The Critical Aspect of the Social Discount Rate Choice for Future Generations,2022-01-22 17:44:06,"Abeer Al Yaqoobi, Marcel Ausloos","http://dx.doi.org/10.3390/jrfm15020049, http://arxiv.org/abs/2201.09064v1, http://arxiv.org/pdf/2201.09064v1",econ.GN
32397,gn,"We recall the historically admitted prerequisites of Economic Freedom (EF).
We have examined 908 data points for the Economic Freedom of the World (EFW)
index and 1884 points for the Index of Economic Freedom (IEF); the studied
periods are 2000-2006 and 1997-2007, respectively, thereby following the Berlin
wall collapse, and including Sept. 11, 2001. After discussing EFW index and
IEF, in order to compare the indices, one needs to study their overlap in time
and space. That leaves 138 countries to be examined over a period extending
from 2000 to 2006, thus 2 sets of 862 data points. The data analysis pertains
to the rank-size law technique. It is examined whether the distributions obey
an exponential or a power law. A correlation with the country Gross Domestic
Product (GDP), an admittedly major determinant of EF, follows, distinguishing
regional aspects, i.e. defining 6 continents. Semi-log plots show that the
EFW-rank relationship is exponential for countries of high rank ($\ge 20$);
overall the log-log plots point to a behaviour close to a power law. In
contrast, for the IEF, the overall ranking has an exponential behaviour; but
the log-log plots point to the existence of a transitional point between two
different power laws, i.e., near rank 10. Moreover, log-log plots of the EFW
index relationship to country GDP is characterised by a power law, with a
rather stable exponent ($\gamma \simeq 0.674$) as a function of time. In
contrast, log-log plots of the IEF relationship with the country's gross
domestic product point to a downward evolutive power law as a function of time.
Markedly the two studied indices provide different aspects of EF.","Economic Freedom: The Top, the Bottom, and the Reality. I. 1997-2007",2022-01-22 18:16:49,"Marcel Ausloos, Philippe Bronlet","http://dx.doi.org/10.3390/e24010038, http://arxiv.org/abs/2201.09073v1, http://arxiv.org/pdf/2201.09073v1",econ.GN
32398,gn,"We analyze the link between standardization and economic growth by
systematically reviewing leading economics journals, leading economic growth
researchers' articles, and economic growth-related books. We make the following
observations: 1) No article has analyzed the link between standardization and
economic growth in top5 economics journals between 1996 and 2018. 2) A
representative sample of the leading researchers of economic growth has
allocated little attention to the link between standardization and economic
growth. 3) Typically, economic growth textbooks do not contain ""standards"" or
""standardization"" in their word indexes. These findings suggest that the
economic growth theory has neglected the role of standardization.",The Link Between Standardization and Economic Growth: A Bibliometric Analysis,2022-01-22 23:28:33,"Jussi Heikkilä, Timo Ali-Vehmas, Julius Rissanen","http://dx.doi.org/10.4018/ijsr.287101, http://arxiv.org/abs/2201.09125v1, http://arxiv.org/pdf/2201.09125v1",econ.GN
32399,gn,"We show that public firm profit rates fell by half since 1980. Inferred as
the residual from the rise of US corporate profit rates in aggregate data,
private firm profit rates doubled since 1980. Public firm financial returns
matched their fall in profit rates, while public firm representativeness
increased from 30% to 60% of the US capital stock. These results imply that
time-varying selection biases in extrapolating public firms to the aggregate
economy can be severe.",Profit Puzzles or: Public Firm Profits Have Fallen,2022-01-23 05:27:08,"Carter Davis, Alexandre Sollaci, James Traina","http://arxiv.org/abs/2201.09160v1, http://arxiv.org/pdf/2201.09160v1",econ.GN
32400,gn,"This study focuses on the impact of digital finance on households. While
digital finance has brought financial inclusion, it has also increased the risk
of households falling into a debt trap. We provide evidence that supports this
notion and explain the channel through which digital finance increases the
likelihood of financial distress. Our results show that the widespread use of
digital finance increases credit market participation. The broadened access to
credit markets increases household consumption by changing the marginal
propensity to consume. However, the easier access to credit markets also
increases the risk of households falling into a debt trap.",The rise of digital finance: Financial inclusion or debt trap,2022-01-23 13:15:15,"Pengpeng Yue, Aslihan Gizem Korkmaz, Zhichao Yin, Haigang Zhou","http://dx.doi.org/10.1016/j.frl.2021.102604, http://arxiv.org/abs/2201.09221v1, http://arxiv.org/pdf/2201.09221v1",econ.GN
32401,gn,"We document that existing gender equality indices do not account for
gender-specific mandatory peace-time conscription (compulsory military
service). This suggests that gender-specific conscription is not considered to
be an important gender issue. If an indicator measuring the gender equality of
mandatory conscription was to be included in gender equality indices with
appropriate weight, then the relative rankings of countries in terms of
measured gender equality could be affected. In the context of the Nordic
countries, this would mean that Finland and Denmark - the countries with
mandatory conscription for men only - would have worse scores with respect to
gender equality compared to Sweden and Norway, countries with conscription for
both men and women - and Iceland, which has no mandatory conscription,
regardless of gender.",Gender-specific Call of Duty: A Note on the Neglect of Conscription in Gender Equality Indices,2022-01-23 17:00:03,"Jussi Heikkilä, Ina Laukkanen","http://dx.doi.org/10.1080/10242694.2020.1844400, http://arxiv.org/abs/2201.09270v1, http://arxiv.org/pdf/2201.09270v1",econ.GN
32403,gn,"This paper documents that the COVID-19 pandemic induced pressures on the
health care system have significant adverse knock-on effects on the
accessibility and quality of non-COVID-19 care. We observe persistently
worsened performance and longer waiting times in A&E; drastically limited
access to specialist care; notably delayed or inaccessible diagnostic services;
acutely undermined access to and quality of cancer care. We find that providers
under COVID-19 pressures experience notably more excess deaths among
non-COVID-related hospital episodes, for example for treatment of heart attacks.
We estimate that, for every 30 COVID-19 deaths, there is at least one excess
death among patients admitted to hospital for non-COVID-19 reasons that is
caused by the disruption to the quality of care due to COVID-19. In total, this
amounts to 4,003 non-COVID-19 excess deaths from March 2020 to February 2021.
Further, there are at least 32,189 missing cancer patients who should
counterfactually have started receiving treatment, which suggests continued
elevated numbers of excess deaths in the future due to delayed access to care
in the past.",Pandemic Pressures and Public Health Care: Evidence from England,2022-01-24 21:56:48,"Thiemo Fetzer, Christopher Rauh","http://arxiv.org/abs/2201.09876v1, http://arxiv.org/pdf/2201.09876v1",econ.GN
32404,gn,"The interplay between risk aversion and financial derivatives has received
increasing attention since the advent of electricity market liberalization. One
important challenge in this context is how to develop economically efficient
and cost-effective models to integrate renewable energy sources (RES) in the
electricity market, which constitutes a relatively new and exciting field of
research. This paper proposes a game-theoretical equilibrium model that
characterizes the interactions between oligopolistic generators in a two-stage
electricity market under the presence of high RES penetration. Given
conventional generators with generation cost uncertainty and renewable
generators with intermittent and stochastic capacity, we consider a single
futures contract market that is cleared prior to a spot market where the energy
delivery takes place. We introduce physical and financial contracts to evaluate
their performance, assess their impact on electricity market outcomes, and
examine how these depend on the level of RES penetration. Since market
participants are usually risk-averse, a coherent risk measure is introduced to
deal with both risk-neutral and risk-averse generators. We derive analytical
relationships between contracts, study the implications of uncertainties, test
the performance of the proposed equilibrium model and its main properties
through numerical examples. Our results show that overall electricity prices,
generation costs, profits, and quantities for conventional generators decrease,
whereas quantities and profits for RES generators increase with RES
penetration. Hence, both physical and financial contracts efficiently mitigate
the impact of uncertainties and help the integration of RES into the
electricity system.",Contract design in electricity markets with high penetration of renewables: A two-stage approach,2022-01-24 22:34:48,"Arega Getaneh Abate, Rossana Riccardi, Carlos Ruiz","http://dx.doi.org/10.1016/j.omega.2022.102666, http://arxiv.org/abs/2201.09927v3, http://arxiv.org/pdf/2201.09927v3",econ.GN
32405,gn,"Extraordinary fiscal and monetary interventions in response to the COVID-19
pandemic have revived concerns about zombie prevalence in advanced economies.
Within a sample of publicly listed U.S. companies, we find zombie prevalence
and zombie-lending not to be a widespread phenomenon per se. Nevertheless, our
results reveal negative spillovers of zombie-lending on productivity,
capital-growth, and employment-growth of non-zombies as well as on overall
business dynamism. It is predominantly the class of healthy small- and
medium-sized companies that is sensitive to zombie-lending activities, with
financial constraints further amplifying these effects.",Zombie-Lending in the United States -- Prevalence versus Relevance,2022-01-23 14:12:05,"Maximilian Göbel, Nuno Tavares","http://arxiv.org/abs/2201.10524v2, http://arxiv.org/pdf/2201.10524v2",econ.GN
32406,gn,"We argue that the recent growth in income inequality is driven by disparate
growth in investment income rather than by disparate growth in wages.
Specifically, we present evidence that real wages are flat across a range of
professions, including doctors, software engineers, auto mechanics, and cashiers, while
stock ownership favors higher education and income levels. Artificial
Intelligence and automation allocate an increased share of job tasks towards
capital and away from labor. The rewards of automation accrue to capital, and
are reflected in the growth of the stock market with several companies now
valued in the trillions. We propose a Deferred Investment Payroll plan to
enable all workers to participate in the rewards of automation and analyze the
performance of such a plan.
  JEL Classification: J31, J33, O33","Income Inequality, Cause and Cure",2022-01-26 06:20:37,B. N. Kausik,"http://dx.doi.org/10.1080/05775132.2022.2046883, http://arxiv.org/abs/2201.10726v5, http://arxiv.org/pdf/2201.10726v5",econ.GN
32407,gn,"College students graduating in a recession have been shown to face large and
persistent negative effects on their earnings, health, and other outcomes. This
paper investigates whether students delay graduation to avoid these effects.
Using data on the universe of students in higher education in Brazil and
leveraging variation in labor market conditions across time, space, and chosen
majors, the paper finds that students in public institutions delay graduation
to avoid entering depressed labor markets. A typical recession causes the
on-time graduation rate to fall by 6.5% in public universities and there is no
effect on private institutions. The induced delay increases average time to
graduation by 0.11 semesters, consistent with 1 out of 18 students delaying
graduation by one year in public universities. The delaying effect is larger
for students with higher scores, in higher-earnings majors, and from more
advantaged backgrounds. This has important implications for the distributional
impact of recessions.",Labor market conditions and college graduation: evidence from Brazil,2022-01-26 19:43:16,Lucas Finamor,"http://arxiv.org/abs/2201.11047v4, http://arxiv.org/pdf/2201.11047v4",econ.GN
32408,gn,"Many companies nowadays offer compensation to online reviews (called
compensated reviews), expecting to increase the volume of their non-compensated
reviews and overall rating. Does this strategy work? On what subjects or topics
does this strategy work the best? These questions have still not been answered
in the literature but draw substantial interest from the industry. In this
paper, we study the effect of compensated reviews on non-compensated reviews by
utilizing online reviews on 1,240 auto shipping companies over a ten-year
period from a transportation website. Because some online reviews have missing
information on their compensation status, we first develop a classification
algorithm to differentiate compensated reviews from non-compensated reviews by
leveraging a machine learning-based identification process, drawing upon the
unique features of the compensated reviews. From the classification results, we
empirically investigate the effects of compensated reviews on non-compensated ones.
Our results indicate that the number of compensated reviews does indeed
increase the number of non-compensated reviews. In addition, the ratings of
compensated reviews positively affect the ratings of non-compensated reviews.
Moreover, if the compensated reviews feature the topic or subject of a car
shipping function, the positive effect of compensated reviews on
non-compensated ones is the strongest. Besides methodological contributions in
text classification and empirical modeling, our study provides empirical
evidence on the effectiveness of compensated online reviews in improving the
platform's overall online reviews and ratings. Also, it
suggests a guideline for utilizing compensated reviews to their full strength,
that is, with regard to featuring certain topics or subjects in these reviews
to achieve the best outcome.",Toward a More Populous Online Platform: The Economic Impacts of Compensated Reviews,2022-01-26 19:45:02,"Peng Li, Arim Park, Soohyun Cho, Yao Zhao","http://arxiv.org/abs/2201.11051v1, http://arxiv.org/pdf/2201.11051v1",econ.GN
32409,gn,"Studying potential global catastrophes is vital. The high stakes of
existential risk studies (ERS) necessitate serious scrutiny and
self-reflection. We argue that existing approaches to studying existential risk
are not yet fit for purpose, and perhaps even run the risk of increasing harm.
We highlight general challenges in ERS: accommodating value pluralism, crafting
precise definitions, developing comprehensive tools for risk assessment,
dealing with uncertainty, and accounting for the dangers associated with taking
exceptional actions to mitigate or prevent catastrophes. The most influential
framework for ERS, the 'techno-utopian approach' (TUA), struggles with these
issues and has a unique set of additional problems: it unnecessarily combines
the study of longtermism and longtermist ethics with the study of extinction,
relies on a non-representative moral worldview, uses ambiguous and inadequate
definitions, fails to incorporate insights from risk assessment in relevant
fields, chooses arbitrary categorisations of risk, and advocates for dangerous
mitigation strategies. Its moral and empirical assumptions might be
particularly vulnerable to securitisation and misuse. We suggest several key
improvements: separating the study of extinction ethics (ethical implications
of extinction) and existential ethics (the ethical implications of different
societal forms), from the analysis of human extinction and global catastrophe;
drawing on the latest developments in risk assessment literature; diversifying
the field; and democratising its policy recommendations.",Democratising Risk: In Search of a Methodology to Study Existential Risk,2021-12-27 11:21:52,"Carla Zoe Cremer, Luke Kemp","http://arxiv.org/abs/2201.11214v1, http://arxiv.org/pdf/2201.11214v1",econ.GN
32410,gn,"The increasing complexity of interrelated systems has made the use of
multiplex networks an important tool for explaining the nature of relations
between elements in the system. In this paper, we aim at investigating various
aspects of countries' behaviour during the coronavirus pandemic period. By
means of a multiplex network we consider simultaneously stringency index
values, COVID-19 infections and international trade data, in order to detect
clusters of countries that showed a similar reaction to the pandemic. We
propose a new methodological approach based on the Estrada communicability for
identifying communities on a multiplex network, based on a two-step
optimization. At first, we determine the optimal inter-layer intensity between
levels by minimizing a distance function. Hence, the optimal inter-layer
intensity is used to detect communities on each layer. Our findings show that
the community detection on this multiplex network has greater information power
than classical methods for single-layer networks. Our approach reveals clusters
on each layer better than applying the same approach to each single layer.
Moreover, detected groups in the multiplex case benefit from higher cohesion,
leading to fewer communities being identified on each layer than in the
single-layer cases.",The effect of the pandemic on complex socio-economic systems: community detection induced by communicability,2022-01-29 20:04:27,"Gian Paolo Clemente, Rosanna Grassi, Giorgio Rizzini","http://arxiv.org/abs/2201.12618v1, http://arxiv.org/pdf/2201.12618v1",econ.GN
32411,gn,"We study how Chapter 11 bankruptcies affect local legal labor markets. We
document that bankruptcy shocks increase county legal employment and
corroborate this finding by exploiting a stipulation of the law known as Forum
Shopping during the Court Competition Era (1991-1996). We quantify losses to
local communities from firms forum shopping away from their local area as
follows. First, we calculate the unrealized potential employment gains implied
by our reduced-form results. Second, we structurally estimate a model of legal
labor markets and quantify welfare losses. We uncover meaningful costs to local
communities from lax bankruptcy venue laws.",Bankruptcy Shocks and Legal Labor Markets: Evidence from the Court Competition Era,2022-01-31 22:12:59,"Chad Brown, Jeronimo Carballo, Alessandro Peri","http://arxiv.org/abs/2202.00044v1, http://arxiv.org/pdf/2202.00044v1",econ.GN
32412,gn,"In this article, I consider the issues of financial resource shortage, the
limited possibility of attracting bank loans, and business risk. All these
problems constrain the development of entrepreneurship. My objective is to
determine a scientifically based financial mechanism for financing the priority
directions of territories' development. In this study, I develop tools for
financing the priority directions of the municipal economy. The proposed
financial scheme makes it possible to expand the volume of financing and ensure the access
of businesses to financial support. The article proposes concrete financing
mechanisms for investment with minimal risk for the budget and preferential
conditions for business.",The regulation methods of fiscal risk in the framework of the implementation of entrepreneurship support,2022-01-11 15:25:55,Elena G. Demidova,"http://dx.doi.org/10.24891/fc.23.36.2189, http://arxiv.org/abs/2202.00108v1, http://arxiv.org/pdf/2202.00108v1",econ.GN
32413,gn,"Behavioral science has witnessed an explosion in the number of biases
identified by behavioral scientists, to more than 200 at present. This article
identifies the 10 most important behavioral biases for project management.
First, we argue it is a mistake to equate behavioral bias with cognitive bias,
as is common. Cognitive bias is half the story; political bias the other half.
Second, we list the top 10 behavioral biases in project management: (1)
strategic misrepresentation, (2) optimism bias, (3) uniqueness bias, (4) the
planning fallacy, (5) overconfidence bias, (6) hindsight bias, (7) availability
bias, (8) the base rate fallacy, (9) anchoring, and (10) escalation of
commitment. Each bias is defined, and its impacts on project management are
explained, with examples. Third, base rate neglect is identified as a primary
reason that projects underperform. This is supported by presentation of the
most comprehensive set of base rates that exist in project management
scholarship, from 2,062 projects. Finally, recent findings of power law
outcomes in project performance are identified as a possible first stage in
discovering a general theory of project management, with more fundamental and
more scientific explanations of project outcomes than found in conventional
theory.",Top Ten Behavioral Biases in Project Management: An Overview,2022-01-24 21:33:21,Bent Flyvbjerg,"http://dx.doi.org/10.1177/87569728211049046, http://arxiv.org/abs/2202.00125v1, http://arxiv.org/pdf/2202.00125v1",econ.GN
32414,gn,"We argue that contemporary stock market designs are, due to traders'
inability to fully express their preferences over the execution times of their
orders, prone to latency arbitrage. In turn, we propose a new order type which
allows traders to specify the time at which their orders are executed after
reaching the exchange. Using this order type, traders can synchronize order
executions across different exchanges, such that high-frequency traders, even
if they operate at the speed of light, can no longer engage in latency
arbitrage.",On Market Design and Latency Arbitrage,2021-12-26 16:32:18,Wolfgang Kuhle,"http://arxiv.org/abs/2202.00127v1, http://arxiv.org/pdf/2202.00127v1",econ.GN
32415,gn,"There is a large literature on earnings and income volatility in labor
economics, household finance, and macroeconomics. One strand of that literature
has studied whether individual earnings volatility has risen or fallen in the
U.S. over the last several decades. There are strong disagreements in the
empirical literature on this important question, with some studies showing
upward trends, some showing downward trends, and some showing no trends. Some
studies have suggested that the differences are the result of using flawed
survey data instead of more accurate administrative data. This paper summarizes
the results of a project attempting to reconcile these findings with four
different data sets and six different data series--three survey and three
administrative data series, including two which match survey respondent data to
their administrative data. Using common specifications, measures of volatility,
and other treatments of the data, four of the six data series show a lack of
any significant long-term trend in male earnings volatility over the last
20-to-30+ years when differences across the data sets are properly accounted
for. A fifth data series (the PSID) shows a positive but small net trend. A
sixth, administrative data set, available only since 1998, shows no net trend
over 1998-2011 and only a small decline thereafter. Many of the
remaining differences across data series can be explained by differences in
their cross-sectional distribution of earnings, particularly differences in the
size of the lower tail. We conclude that the data sets we have analyzed, which
include many of the most important available, show little evidence of any
significant trend in male earnings volatility since the mid-1980s.",Reconciling Trends in U.S. Male Earnings Volatility: Results from Survey and Administrative Data,2022-02-01 22:04:50,"Robert Moffitt, John Abowd, Christopher Bollinger, Michael Carr, Charles Hokayem, Kevin McKinney, Emily Wiemers, Sisi Zhang, James Ziliak","http://arxiv.org/abs/2202.00713v1, http://arxiv.org/pdf/2202.00713v1",econ.GN
32416,gn,"This paper analyzes whether a minimum wage should be used for redistribution
on top of taxes and transfers. I characterize optimal redistribution for a
government with three policy instruments -- labor income taxes and transfers,
corporate income taxes, and a minimum wage -- using an empirically grounded
model of the labor market with positive firm profits. A minimum wage can
increase social welfare when it increases the average post-tax wages of
low-skill labor market participants and when corporate profit incidence is
large. When chosen together with taxes, the minimum wage can help the
government redistribute efficiently to low-skill workers by preventing firms
from capturing low-wage income subsidies such as the EITC and from enjoying
high profits that cannot be redistributed via corporate taxes due to capital
mobility in unaffected industries. Event studies show that the average US
state-level minimum wage reform over the last two decades increased average
post-tax wages of low-skilled labor market participants and reduced corporate
profits in affected industries, namely low-skill labor-intensive services. A
sufficient statistics analysis implies that US minimum wages typically remain
below their optimum under the current tax and transfer system.",Minimum Wages and Optimal Redistribution,2022-02-02 04:25:27,Damián Vergara,"http://arxiv.org/abs/2202.00839v3, http://arxiv.org/pdf/2202.00839v3",econ.GN
32417,gn,"This paper studies the realizability and compatibility of the three CEP2020
targets, focusing on electricity prices. We study the impact of renewables and
other fundamental determinants on wholesale and household retail electricity
prices in ten EU countries from 2008 to 2016. Increases in production from
renewables decrease wholesale electricity prices in all countries. As decreases
in prices should promote consumption, an apparent contradiction emerges between
the target of an increase in renewables and the target of a reduction in
consumption. However, the impact of renewables on the non-energy part of
household retail electricity prices is positive in six countries. Therefore,
decreases in wholesale prices, that may compromise the CEP2020 target of
decrease in consumption, do not necessarily translate into lower household
retail prices.",Are EU Climate and Energy Package 20-20-20 targets achievable and compatible? Evidence from the impact of renewables on electricity prices,2022-02-03 20:37:02,"Juan Ignacio Peña, Rosa Rodriguez","http://arxiv.org/abs/2202.01720v1, http://arxiv.org/pdf/2202.01720v1",econ.GN
32418,gn,"This paper studies premiums got by winning bidders in default supply
auctions, and speculation and hedging activities in power derivatives markets
in dates near auctions. Data includes fifty-six auction prices from 2007 to
2013, those of CESUR in the Spanish OMEL electricity market, and those of Basic
Generation Service auctions (PJM-BGS) in New Jersey's PJM market. Winning
bidders earned an average ex-post yearly forward premium of 7% (CESUR) and 38%
(PJM-BGS). The premium using an index of futures prices is 1.08% (CESUR) and
24% (PJM-BGS). Ex-post forward premium is negatively related to the number of
bidders and spot price volatility. In CESUR, hedging-driven trading in power
derivatives markets predominates around auction dates, but in PJM-BGS,
speculation-driven trading prevails.",Default Supply Auctions in Electricity Markets: Challenges and Proposals,2022-02-03 21:08:53,"Juan Ignacio Peña, Rosa Rodriguez","http://arxiv.org/abs/2202.01743v1, http://arxiv.org/pdf/2202.01743v1",econ.GN
32419,gn,"Unemployment insurance transfers should balance the provision of consumption
to the unemployed with the disincentive effects on the search behavior.
Developing countries face the additional challenge of informality. Workers can
choose to hide their employment state and labor income in informal jobs, an
additional form of moral hazard. To provide evidence about the effects of this
policy in a country affected by informality, we exploit kinks in the schedule of
transfers in Argentina. Our results suggest that higher benefits induce
moderate behavioral responses in job-finding rates and increase re-employment
wages. We use a sufficient statistics formula from a model with random wage
offers and we calibrate it with our estimates. We show that welfare could rise
substantially if benefits were increased in Argentina. Importantly, our
conclusion is relevant for the median eligible worker who is strongly affected
by informality.",The welfare effects of unemployment insurance in Argentina. New estimates using changes in the schedule of transfers,2022-02-04 00:06:44,"Martin Gonzalez-Rozada, Hernan Ruffo","http://arxiv.org/abs/2202.01844v1, http://arxiv.org/pdf/2202.01844v1",econ.GN
32420,gn,"In the for-hire truckload market, firms often experience unexpected
transportation cost increases due to contracted transportation service provider
(carrier) load rejections. The dominant procurement strategy results in
long-term, fixed-price contracts that become obsolete as transportation
providers' networks change and freight markets fluctuate between times of over-
and under-supply. We build behavioral models of the contracted carrier's load
acceptance decision under two distinct freight market conditions based on
empirical load transaction data. With the results, we quantify carriers'
likelihood of sticking to the contract as their best known alternative priced
load options increase and become more attractive; in other words, carriers'
contract price stickiness. Finally, we explore carriers' contract price
stickiness for different lane, freight, and carrier segments and offer insights
for shippers to identify where they can expect to see substantial improvement
in contracted carrier load acceptance as they consider alternative,
market-based pricing strategies.",The end of 'set it and forget it' pricing? Opportunities for market-based freight contracts,2022-02-04 23:06:05,"Angela Acocella, Chris Caplice, Yossi Sheffi","http://arxiv.org/abs/2202.02367v1, http://arxiv.org/pdf/2202.02367v1",econ.GN
32421,gn,"Rapid increases in food supplies have reduced global hunger, while rising
burdens of diet-related disease have made poor diet quality the leading cause
of death and disability around the world. Today's ""double burden"" of
undernourishment in utero and early childhood, then undesired weight gain and
obesity later in life, is accompanied by a third, less visible burden of
micronutrient imbalances. The triple burden of undernutrition, obesity, and
unbalanced micronutrients that underlies many diet-related diseases such as
diabetes, hypertension and other cardiometabolic disorders often coexist in the
same person, household and community. All kinds of deprivation are closely
linked to food insecurity and poverty, but income growth does not always
improve diet quality in part because consumers cannot directly or immediately
observe the health consequences of their food options, especially for newly
introduced or reformulated items. Even after direct experience and
epidemiological evidence reveals relative risks of dietary patterns and
nutritional exposures, many consumers may not consume a healthy diet because
food choice is driven by other factors. This chapter reviews the evidence on
dietary transition and food system transformation during economic development,
drawing implications for how research and practice in agricultural economics
can improve nutritional outcomes.",The economics of malnutrition: Dietary transition and food system transformation,2022-02-05 18:31:16,"William A. Masters, Amelia B. Finaret, Steven A. Block","http://arxiv.org/abs/2202.02579v1, http://arxiv.org/pdf/2202.02579v1",econ.GN
32422,gn,"Existing research on the static effects of the manipulation of welfare
program benefit parameters on labor supply has allowed only restrictive forms
of heterogeneity in preferences. Yet preference heterogeneity implies that the
marginal effects on labor supply of welfare expansions and contractions may
differ in different time periods with different populations that sweep out
different portions of the distribution of preferences. A new examination of the
heavily studied AFDC program uses variation in state-level administrative
barriers to entering the program in the late 1980s and early 1990s to estimate
the marginal labor supply effects of changes in program participation induced
by that variation. The estimates are obtained from a theory-consistent reduced
form model which allows for a nonparametric specification of how changes in
welfare program participation affect labor supply on the margin. Estimates
using a form of local instrumental variables show that the marginal treatment
effects are quadratic, rising and then falling as participation rates rise
(i.e., becoming more negative then less negative on hours of work). The average
work disincentive is not large but that masks some margins where effects are
close to zero and some which are sizable. Traditional IV which estimates a
weighted average of marginal effects gives a misleading picture of marginal
responses. A counterfactual exercise which applies the estimates to three
historical reform periods in 1967, 1981, and 1996 when the program tax rate was
significantly altered shows that marginal labor supply responses differed in
each period because of differences in the level of participation in the period
and the composition of who was on the program.",The Marginal Labor Supply Disincentives of Welfare: Evidence from Administrative Barriers to Participation,2022-02-04 18:40:07,"Robert A. Moffitt, Matthew V. Zahn","http://arxiv.org/abs/2202.03413v1, http://arxiv.org/pdf/2202.03413v1",econ.GN
32423,gn,"I study the effects of US salary history bans which restrict employers from
inquiring about job applicants' pay history during the hiring process, but
allow candidates to voluntarily share information. Using a
difference-in-differences design, I show that these policies narrowed the
gender pay gap significantly by 2 p.p., driven almost entirely by an increase
in female earnings. The bans were also successful in weakening the
auto-correlation between current and future earnings, especially among
job-changers. I provide novel evidence showing that when employers could no
longer nudge candidates for information, the likelihood of voluntarily
disclosing salary history decreased among job applicants and by 2 p.p. more
among women. I then develop a salary negotiation model with asymmetric
information, where I allow job applicants to choose whether to reveal pay
history, and use this framework to explain my empirical findings on disclosure
behavior and the gender pay gap.",US Salary History Bans -- Strategic Disclosure by Job Applicants and the Gender Pay Gap,2022-02-08 05:10:08,Sourav Sinha,"http://arxiv.org/abs/2202.03602v1, http://arxiv.org/pdf/2202.03602v1",econ.GN
32424,gn,"Recent democratic backsliding and the rise of authoritarian regimes worldwide
have rekindled interest in understanding the causes and consequences of such
authoritarian rule in democracies. In this paper, I study the long-run
political consequences of authoritarianism in the world's largest democracy.
Exploiting the unexpected timing of the authoritarian rule imposed in India in
the 1970s and using a difference-in-difference (DID), triple difference (DDD),
and a regression discontinuity design (RDD) estimation approach, I document a
sharp decline in the then-dominant incumbent, the Indian National Congress
party's political dominance in subsequent years. I also present evidence that
the decline in political dominance was not at the expense of a lower voter
turnout rate. Instead, a sharp rise in the number of opposition candidates
contesting elections in subsequent years played an important role. Finally, I
examine the enduring consequences, revealing that confidence in politicians
remains low in states where the intensity of the draconian policy was high.",The Legacy of Authoritarianism in a Democracy,2022-02-08 10:05:13,Pramod Kumar Sur,"http://arxiv.org/abs/2202.03682v2, http://arxiv.org/pdf/2202.03682v2",econ.GN
32425,gn,"Cuierzhuang Phenomenon (or Cuierzhuang Model) is a regional development
phenomenon or rural revitalization model driven by ICT in the information era,
characterized by the storage and transportation, processing, packaging and
online sales of agricultural products, as well as online and offline
coordination, long-distance and cross-regional economic cooperation, ethnic
blending, equality, and mutual benefit. Unlike the Wenzhou Model, South Jiangsu
Model, and Pearl River Model in the 1980s and 1990s, the Cuierzhuang Model is
not only a rural revitalization brought about by the industrialization and
modernization of northern rural areas with the characteristics of industrial
development in the information age, but also an innovative regional economic
cooperation and development model with folk nature, spontaneous formation,
equality, and mutual benefit. Taking southern Xinjiang as the production base,
Xinjiang jujubes from Hotan and Ruoqiang are continuously transported to
Cuierzhuang, Cangzhou City, Hebei Province, where they are transferred,
cleaned, dried and packaged, and finally sold all over the country. With red
dates as a link, the eastern town of Cuierzhuang is connected with Xinjiang in
the western region, more than 4,000 kilometers away. Along the ancient Silk
Road, the route extends as far as Kashgar via southern Xinjiang. This paper
carries out a preliminary economic analysis of how this long-distance,
cross-regional economic cooperation channel formed, the regional economics and
economic geography principles behind Cuierzhuang's attraction of Xinjiang
jujube, and the challenges and opportunities facing the Cuierzhuang
phenomenon.",Cuierzhuang Phenomenon: A model of rural industrialization in north China,2022-02-08 14:53:34,"Jinghan Tian, Jianhua Wang","http://arxiv.org/abs/2202.03806v2, http://arxiv.org/pdf/2202.03806v2",econ.GN
32426,gn,"We quantify Facebook's ability to build shadow profiles by tracking
individuals across the web, irrespective of whether they are users of the
social network. For a representative sample of US Internet users, we find that
Facebook is able to track about 40 percent of the browsing time of both users
and non-users of Facebook, including on privacy-sensitive domains and across
user demographics. We show that the collected browsing data can produce
accurate predictions of personal information that is valuable for advertisers,
such as age or gender. Because Facebook users reveal their demographic
information to the platform, and because the browsing behavior of users and
non-users of Facebook overlaps, users impose a data externality on non-users by
allowing Facebook to infer their personal information.",Facebook Shadow Profiles,2022-02-08 23:19:39,"Luis Aguiar, Christian Peukert, Maximilian Schäfer, Hannes Ullrich","http://arxiv.org/abs/2202.04131v2, http://arxiv.org/pdf/2202.04131v2",econ.GN
32427,gn,"This paper combines a canonical epidemiology model of disease dynamics with
government policy of lockdown and testing, and agents' decision to social
distance in order to avoid getting infected. The model is calibrated with data
on deaths and testing outcomes in the United States. It is shown that an
intermediate but prolonged lockdown is socially optimal when both mortality and
GDP are taken into account. This is because the government wants the economy to
keep producing some output and the slack in reducing infection is picked up by
social distancing agents. Social distancing best responds to the optimal
government policy to keep the effective reproductive number at one and avoid
multiple waves through the pandemic. Calibration shows testing to have been
effective, but it could have been even more instrumental if it had been
aggressively pursued from the beginning of the pandemic. Not having any
lockdown or shutting down social distancing would have had extreme
consequences. Greater centralized control on social activities would have
mitigated further the spread of the pandemic.",Behavioral epidemiology: An economic model to evaluate optimal policy in the midst of a pandemic,2022-02-09 01:17:19,"Shomak Chakrabarti, Ilia Krasikov, Rohit Lamba","http://arxiv.org/abs/2202.04174v1, http://arxiv.org/pdf/2202.04174v1",econ.GN
32428,gn,"A common assumption in the literature is that the level of income inequality
shapes individuals' beliefs about whether the income distribution is fair
(``fairness views,'' for short). However, individuals do not directly observe
income inequality (which often leads to large misperceptions), nor do they
consider all inequities to be unfair. In this paper, we empirically assess the
link between objective measures of income inequality and fairness views in a
context of high but decreasing income inequality. We combine opinion poll data
with harmonized data from household surveys of 18 Latin American countries from
1997--2015. We report three main findings. First, we find a strong and
statistically significant relationship between income inequality and unfairness
views across countries and over time. Unfairness views evolved in the same
direction as income inequality for 17 out of the 18 countries in our sample.
Second, individuals who are older, unemployed, and left-wing are, on average,
more likely to perceive the income distribution as very unfair. Third, fairness
views and income inequality have predictive power for individuals'
self-reported propensity to mobilize and protest independent of each other,
suggesting that these two variables capture different channels through which
changes in the income distribution can affect social unrest.",Are Fairness Perceptions Shaped by Income Inequality? Evidence from Latin America,2022-02-09 20:42:26,"Leonardo Gasparini, Germán Reyes","http://dx.doi.org/10.1007/s10888-022-09526-w, http://arxiv.org/abs/2202.04591v2, http://arxiv.org/pdf/2202.04591v2",econ.GN
32429,gn,"Humanitarian and disaster management actors have increasingly adopted cash
transfers to reduce the suffering and vulnerability of survivors. Cash
transfers have also been used as a critical instrument in the current COVID-19
pandemic. Unfortunately, academic work on humanitarian and disaster-cash
transfer related issues remains limited. This article explores how NGOs and
governments implement humanitarian cash transfer in a post-disaster setting
using an exploratory research strategy. It asks what institutional constraints
and opportunities humanitarian emergency responders face in ensuring effective
humanitarian cash transfers, and how humanitarian actors address such
institutional conditions. We introduce a new conceptual
framework, namely the humanitarian and disaster management ecosystem for cash
transfer. This framework allows non-governmental actors to restore complex
relations between the state, disaster survivors or citizens, the local market
economy, and civil society. Mixed methods and a multistage research strategy
were used to collect and analyze primary and secondary data. The findings
suggest that, in implementing cash transfers in the aftermath of tsunamigenic
earthquakes and liquefaction hazards, NGOs must co-create an ecosystem of
response that not only aims at restoring people's access to cash and basic
needs but first restores relations between the state and its citizens while
linking at-risk communities with the private sector to jump-start local
livelihoods and the market economy.",Creating an institutional ecosystem for cash transfer programming: Lessons from post-disaster governance in Indonesia,2022-02-10 06:10:55,"Jonatan A. Lassa, Gisela Emanuela Nappoe, Susilo Budhi Sulistyo","http://arxiv.org/abs/2202.04811v1, http://arxiv.org/pdf/2202.04811v1",econ.GN
32430,gn,"A nudge changes people's actions without removing their options or altering
their incentives. During the COVID-19 vaccine rollout, the Swedish Region of
Uppsala sent letters with pre-booked appointments to inhabitants aged 16-17
instead of opening up manual appointment booking. Using regional and municipal
vaccination data, we document a higher vaccine uptake among 16- to 17-year-olds
in Uppsala compared to untreated control regions (constructed using the
synthetic control method as well as neighboring municipalities). The results
highlight pre-booked appointments as a strategy for increasing vaccination
rates in populations with low perceived risk.",Vaccination nudges: A study of pre-booked COVID-19 vaccinations in Sweden,2022-02-10 12:39:28,"Carl Bonander, Mats Ekman, Niklas Jakobsson","http://dx.doi.org/10.1016/j.socscimed.2022.115248, http://arxiv.org/abs/2202.04931v2, http://arxiv.org/pdf/2202.04931v2",econ.GN
32443,gn,"The Great Recession highlighted the role of financial and uncertainty shocks
as drivers of business cycle fluctuations. However, the fact that uncertainty
shocks may affect economic activity by tightening financial conditions makes
empirically distinguishing these shocks difficult. This paper examines the
macroeconomic effects of the financial and uncertainty shocks in the United
States in an SVAR model that exploits the non-normalities of the time series to
identify the uncertainty and the financial shock. The results show that
macroeconomic uncertainty and financial shocks seem to affect business cycles
independently as well as through dynamic interaction. Uncertainty shocks appear
to tighten financial conditions, whereas there appears to be no causal
relationship between financial conditions and uncertainty. Moreover, the
results suggest that uncertainty shocks may have persistent effects on output
and investment that last beyond the business cycle.",Macroeconomic Effect of Uncertainty and Financial Shocks: a non-Gaussian VAR approach,2022-02-22 14:48:28,Olli Palmén,"http://arxiv.org/abs/2202.10834v1, http://arxiv.org/pdf/2202.10834v1",econ.GN
32431,gn,"When publishing socioeconomic survey data, survey programs implement a
variety of statistical methods designed to preserve privacy but which come at
the cost of distorting the data. We explore the extent to which spatial
anonymization methods to preserve privacy in the large-scale surveys supported
by the World Bank Living Standards Measurement Study - Integrated Surveys on
Agriculture (LSMS-ISA) introduce measurement error in econometric estimates
when that survey data is integrated with remote sensing weather data. Guided by
a pre-analysis plan, we produce 90 linked weather-household datasets that vary
by the spatial anonymization method and the remote sensing weather product. By
varying the data along with the econometric model we quantify the magnitude and
significance of measurement error coming from the loss of accuracy that results
from privacy protection measures. We find that spatial anonymization techniques
currently in general use have, on average, limited to no impact on estimates of
the relationship between weather and agricultural productivity. However, the
degree to which spatial anonymization introduces mismeasurement is a function
of which remote sensing weather product is used in the analysis. We conclude
that care must be taken in choosing a remote sensing weather product when
looking to integrate it with publicly available survey data.","Privacy Protection, Measurement Error, and the Integration of Remote Sensing and Socioeconomic Survey Data",2022-02-10 21:24:19,"Jeffrey D. Michler, Anna Josephson, Talip Kilic, Siobhan Murray","http://arxiv.org/abs/2202.05220v1, http://arxiv.org/pdf/2202.05220v1",econ.GN
32432,gn,"In the recent times of global Covid pandemic, the Federal Reserve has raised
the concerns of upsurges in prices. Given the complexity of interaction between
inflation and inequality, we examine whether the impact of inflation on
inequality differs among distinct levels of income inequality across the US
states. Results reveal that there is a negative contemporaneous effect of
inflation on the inequality which becomes stronger with higher levels of income
inequality. However, over a one year period, we find higher inflation rate to
further increase income inequality only when income inequality is initially
relatively low.",Inflation and income inequality: Does the level of income inequality matter?,2022-02-11 19:32:22,"Edmond Berisha, Ram Sewak Dubey, Orkideh Gharehgozli","http://arxiv.org/abs/2202.05743v1, http://arxiv.org/pdf/2202.05743v1",econ.GN
32433,gn,"Stablecoins and central bank digital currencies are on the horizon in Asia,
and in some cases have already arrived. This paper provides new analysis and a
critique of the use case for both forms of digital currency. It provides
time-varying estimates of devaluation risk for the leading stablecoin, Tether,
using data from the futures market. It describes the formidable obstacles to
widespread use of central bank digital currencies in cross-border transactions,
the context in which their utility is arguably greatest. The bottom line is
that significant uncertainties continue to dog the region's digital currency
initiatives.",Stablecoins and Central Bank Digital Currencies: Policy and Regulatory Challenges,2022-02-15 19:47:47,"Barry Eichengreen, Ganesh Viswanath-Natraj","http://arxiv.org/abs/2202.07564v1, http://arxiv.org/pdf/2202.07564v1",econ.GN
32434,gn,"Increases in national concentration have been a salient feature of industry
dynamics in the U.S. and have contributed to concerns about increasing market
power. Yet, local trends may be more informative about market power,
particularly in the retail sector where consumers have traditionally shopped at
nearby stores. We find that local concentration has increased almost in
parallel with national concentration using novel Census data on product-level
revenue for all U.S. retail stores between 1992 and 2012. The increases in
concentration are broad based, affecting most markets, products, and retail
industries. We show that the expansion of multi-market firms into new markets
explains most of the increase in national retail concentration, with
consolidation via increases in local market shares increasing in importance
between 1997 and 2007, and single-market firms playing a negligible role.
Finally, we find that increases in local concentration can explain one-quarter
to one-third of the observed rise in retail gross margins.",The Evolution of U.S. Retail Concentration,2022-02-15 20:53:32,"Dominic A. Smith, Sergio Ocampo","http://arxiv.org/abs/2202.07609v4, http://arxiv.org/pdf/2202.07609v4",econ.GN
32435,gn,"Decentralized Finance (DeFi) services are moving traditional financial
operations to the Internet of Value (IOV) by exploiting smart contracts,
distributed ledgers, and clever heterogeneous transactions among different
protocols. The exponential increase of the Total Value Locked (TVL) in DeFi
foreshadows a bright future for automated money transfers in a plethora of
services. In this short survey paper, we describe the business model for
different DeFi domains - namely, Protocols for Loanable Funds (PLFs),
Decentralized Exchanges (DEXs), and Yield Aggregators. We claim that the
current literature is still unclear on how to value the thousands of
different competitors (tokens) in DeFi. With this work, we abstract the general
business model for different DeFi domains and compare them. Finally, we provide
open research challenges that will involve heterogeneous domains such as
economics, finance, and computer science.",A Short Survey on Business Models of Decentralized Finance (DeFi) Protocols,2022-02-16 00:49:15,"Teng Andrea Xu, Jiahua Xu","http://dx.doi.org/10.1007/978-3-031-32415-4_13, http://arxiv.org/abs/2202.07742v2, http://arxiv.org/pdf/2202.07742v2",econ.GN
32436,gn,"When the prices of cereal grains rise, social unrest and conflict become
likely. In rural areas, the predation motives of perpetrators can explain the
positive relationship between prices and conflict. Predation happens at places
and in periods where and when spoils to be appropriated are available. In
predominantly agrarian societies, such opportune times align with the harvest
season. Does the seasonality of agricultural income lead to the seasonality of
conflict? We address this question by analyzing over 55 thousand incidents
involving violence against civilians staged by paramilitary groups across
Africa during the 1997-2020 period. We investigate the crop year pattern of
violence in response to agricultural income shocks via changes in international
cereal prices. We find that a year-on-year one standard deviation annual growth
of the price of the major cereal grain results in a harvest-time spike in
violence by militias in a one-degree cell where this cereal grain is grown.
This translates to a nearly ten percent increase in violence during the early
postharvest season. We observe no such change in violence by state forces or
rebel groups--the other two notable actors. By further investigating the
mechanisms, we show that the violence by militias is amplified after plausibly
rich harvest seasons when the value of spoils to be appropriated is higher. By
focusing on harvest-related seasonality of conflict, as well as actors more
likely to be involved in violence against civilians, we contribute to the
growing literature on the economic causes of conflict in predominantly agrarian
societies.",Agricultural Windfalls and the Seasonality of Political Violence in Africa,2022-02-16 08:08:24,"David Ubilava, Justin V. Hastings, Kadir Atalay","http://dx.doi.org/10.1111/ajae.12364, http://arxiv.org/abs/2202.07863v4, http://arxiv.org/pdf/2202.07863v4",econ.GN
32437,gn,"The study and measurement of economic resilience is ruled by high level of
complexity related to the diverse structure, functionality, spatiality, and
dynamics describing economic systems. Towards serving the demand of
integration, this paper develops a three-dimensional index, capturing
engineering, ecological, and evolutionary aspects of economic resilience that
are considered separately in the current literature. The proposed index is
computed on GDP data of worldwide countries, for the period 1960-2020,
concerning 14 crises considered as shocks, and was found well defined in a
conceptual context of its components. Its application on real-world data allows
introducing a novel classification of countries in terms of economic
resilience, and reveals geographical patterns and structural determinants of
this attribute. Impressively enough, economic resilience appears positively
related to major productivity coefficients, gravitationally driven, and
dependent on agricultural specialization, with high structural heterogeneity in
the low class. Also, the analysis fills the literature gap by shaping the
worldwide map of economic resilience, revealing geographical duality and
centrifugal patterns in its geographical distribution, a relationship between
diachronically good performance in economic resilience and geographical
distance from the shocks origin, and a continent differentiation expressed by
the specialization of America in engineering resilience, Africa and Asia in
ecological and evolutionary resilience, and a relative lag of Europe and
Oceania. Finally, the analysis provides insights into the effect of the 2008
crisis on the globe and supports a further research hypothesis that political instability
is a main determinant of low economic resilience, addressing avenues of further
research.",A 3D index for measuring economic resilience with application to the modern international and global financial crises,2022-02-17 13:17:44,Dimitrios Tsiotas,"http://arxiv.org/abs/2202.08564v1, http://arxiv.org/pdf/2202.08564v1",econ.GN
32438,gn,"Soybean is the most globalized, traded and processed crop commodity. USA,
Argentina and Brazil continue to be the top three producers and exporters of
soybean and soymeal. The Indian soy-industry has also made a mark in the national
and global arena. While soymeal, soyoil, lecithin and other soy-derivatives
stand to be driven up by commerce, the soyfoods for human health and nutrition
need to be further promoted. The changing habitat of commerce in soy-derivatives
necessitates a shift in strategy, technological tools and policy environment to
make the Indian soybean industry continue to thrive in the new industrial era.
Terms of trade for soyfarming and soy-industry could be further improved.
Present trends, volatilities, slowdowns, challenges faced and associated
desiderata are accordingly spelt out in the present article.",Emerging trends in soybean industry,2022-02-17 14:15:19,Siddhartha Paul Tiwari,"http://arxiv.org/abs/2202.08590v1, http://arxiv.org/pdf/2202.08590v1",econ.GN
32439,gn,"Business economics research on digital platforms often overlooks existing
knowledge from other fields of research, leading to conceptual ambiguity and
inconsistent findings. To reduce these restrictions and foster the utilization
of the extensive body of literature, we apply a mixed methods design to
summarize the key findings of scientific platform research. Our bibliometric
analysis identifies 14 platform-related research fields. Conducting a
systematic qualitative content analysis, we identify three primary research
objectives related to platform ecosystems: (1) general literature defining and
unifying research on platforms; (2) exploitation of platform and ecosystem
strategies; (3) improvement of platforms and ecosystems. Finally, we discuss
the identified insights from a business economics perspective and present
promising future research directions that could enhance business economics and
management research on digital platforms and platform ecosystems.",Objectives of platform research: A co-citation and systematic literature review analysis,2022-02-17 21:45:28,"Fabian Schueler, Dimitri Petrik","http://dx.doi.org/10.1007/978-3-658-31118-6_1, http://arxiv.org/abs/2202.08822v1, http://arxiv.org/pdf/2202.08822v1",econ.GN
32440,gn,"We analyze work commute time by sexual orientation of partnered or married
individuals, using the American Community Survey 2008-2019. Women in same-sex
couples have a longer commute to work than working women in different-sex
couples, whereas the commute to work of men in same-sex couples is shorter than
the one of working men in different-sex couples, also after controlling for
demographic characteristics, partner characteristics, location, fertility, and
marital status. These differences are particularly stark among married couples
with children: on average, about 3 minutes more one-way to work for married
mothers in same-sex couples, and almost 2 minutes less for married fathers in
same-sex couples, than their corresponding working parents in different-sex
couples. These gaps among men and women amount to 50 percent, and 100 percent,
respectively, of the gender commuting gap estimated in the literature.
Within-couple gaps in commuting time are also significantly smaller in same-sex
couples. We interpret these differences as evidence that it is
gender-conforming social norms boosted by parenthood that lead women in
different-sex couples to specialize into jobs with a shorter commute while
their male partners or spouses hold jobs with a longer commute.",Commuting to work and gender-conforming social norms: evidence from same-sex couples,2022-02-21 19:29:47,"Sonia Oreffice, Dario Sansone","http://arxiv.org/abs/2202.10344v1, http://arxiv.org/pdf/2202.10344v1",econ.GN
32441,gn,"Geographic proximity is acknowledged to be a key factor in research
collaborations. Specifically, it can work as a possible substitute for
institutional proximity. The present study investigates the relevance of the
""proximity"" effect for different types of national research collaborations. We
apply a bibliometric approach based on the Italian 2010-2017 scientific
production indexed in the Web of Science. On such dataset, we apply statistical
tools for analyzing whether and to what extent geographical distance between
co-authors in the byline of a publication varies across collaboration types,
scientific disciplines, and along time. Results can inform policies aimed at
effectively stimulating cross-sector collaborations, and also bear direct
practical implications for research performance assessments.",The geographic proximity effect on domestic cross-sector vis-a-vis intra-sector research collaborations,2022-02-18 18:58:33,"Giovanni Abramo, Francesca Apponi, Ciriaco Andrea D'Angelo","http://arxiv.org/abs/2202.10347v1, http://arxiv.org/pdf/2202.10347v1",econ.GN
32442,gn,"This paper tests for discrimination against immigrant defendants in the
criminal justice system in Chile using a decade of nationwide administrative
records on pretrial detentions. Observational benchmark regressions show that
immigrant defendants are 8.6 percentage points less likely to be released
pretrial relative to Chilean defendants with similar proxies for pretrial
misconduct potential. Diagnostics for omitted variable bias -- including a
novel test to assess the quality of the proxy vector based on comparisons of
pretrial misconduct rates among released defendants -- suggest that the
discrimination estimates are not driven by omitted variable bias and that, if
anything, failing to fully account for differences in misconduct potential
leads to an underestimation of discrimination. Our estimates suggest that
discrimination stems from an informational problem because judges do not
observe criminal records in origin countries, with stereotypes and taste-based
discrimination playing a role in the problem's resolution. We find that
discrimination is especially large for drug offenses and that discrimination
increased after a recent immigration wave.",Discrimination Against Immigrants in the Criminal Justice System: Evidence from Pretrial Detentions,2022-02-22 09:03:39,"Patricio Domínguez, Nicolás Grau, Damián Vergara","http://arxiv.org/abs/2202.10685v1, http://arxiv.org/pdf/2202.10685v1",econ.GN
32445,gn,"We investigate how crises alter societies by analyzing the timing and
channels of change using a longitudinal multi-wave survey of a representative
sample of Americans throughout 2020. This methodology allows us to overcome
some of the limitations of previous studies and uncover novel insights: (1)
individuals with a negative personal experience during a crisis become more
pro-welfare spending, in particular for policies they perceive will benefit
them personally, and they become less trusting of institutions; (2) indirect
shocks or the mere exposure to the crisis does not have a similar effect; (3)
policy preferences and institutional trust can change quickly after a negative
experience; and (4) consuming partisan media can mitigate or exacerbate these
effects by distorting perceptions of reality. In an experiment, we find that
exposing individuals to the same information can recalibrate distorted
perceptions with lasting effects. Using a machine learning model to test for
heterogeneous treatment effects, we find a negative personal experience did not
make individuals more responsive to the information treatment, suggesting that
lived and perceived experiences play an equally important role in changing
preferences during a crisis.",Crises and Political Polarization: Towards a Better Understanding of the Timing and Impact of Shocks and Media,2022-02-24 23:01:02,"Guglielmo Briscese, Maddalena Grignani, Stephen Stapleton","http://arxiv.org/abs/2202.12339v2, http://arxiv.org/pdf/2202.12339v2",econ.GN
32446,gn,"Offshore wind energy is rapidly expanding, facilitated largely through
auctions run by governments. We provide a detailed quantified overview of
applied auction schemes, including geographical spread, volumes, results, and
design specifications. Our comprehensive global dataset reveals heterogeneous
designs. Although most remuneration designs provide some form of revenue
stabilisation, their specific instrument choices vary and include feed-in
tariffs, one-sided and two-sided contracts for difference, mandated power
purchase agreements, and mandated renewable energy certificates. We review the
schemes used in all eight major offshore wind jurisdictions across Europe,
Asia, and North America and evaluate bids in their jurisdictional context. We
analyse cost competitiveness, likelihood of timely construction, occurrence of
strategic bidding, and identify jurisdictional aspects that might have
influenced auction results. We find that auctions are embedded within their
respective regulatory and market design context, and are remarkably diverse,
though with regional similarities. Auctions in each jurisdiction have evolved
and tend to become more exposed to market price risks over time. Less mature
markets are more prone to make use of lower-risk designs. Still, some form of
revenue stabilisation is employed for all auctioned offshore wind energy farms
analysed here, regardless of the specific policy choices. Our data confirm a
coincidence of declining costs and growing diffusion of auction regimes.",Policy choices and outcomes for offshore wind auctions globally,2022-02-25 11:27:26,"Malte Jansen, Philipp Beiter, Iegor Riepin, Felix Muesgens, Victor Juarez Guajardo-Fajardo, Iain Staffell, Bernard Bulder, Lena Kitzing","http://dx.doi.org/10.1016/j.enpol.2022.113000, http://arxiv.org/abs/2202.12548v2, http://arxiv.org/pdf/2202.12548v2",econ.GN
32447,gn,"We investigate the consequences of legal rulings on the conduct of monetary
policy. Several unconventional monetary policy measures of the European Central
Bank have come under scrutiny before national courts and the European Court of
Justice. These lawsuits have the potential to severely impact the scope and
flexibility of central bank policies, and central bank independence in a wide
sense, with important consequences for the real and financial economy. Since
the number of relevant legal challenges is small, we develop an econometric
approach that searches for minimum variance regimes which we use to isolate and
measure the effects of these events. Our results suggest that legal rulings
addressing central bank policies have a powerful effect on financial markets.
Expansionary shocks ease financial conditions along various dimensions, and
inflation swap reactions suggest inflationary pressures with stronger effects
in the short term.",Measuring Shocks to Central Bank Independence using Legal Rulings,2022-02-25 16:53:46,"Stefan Griller, Florian Huber, Michael Pfarrhofer","http://arxiv.org/abs/2202.12695v1, http://arxiv.org/pdf/2202.12695v1",econ.GN
32448,gn,"This paper examines the relationship between changes in the cost of imported
inputs and export performance using a novel dataset from Argentina which
identifies domestic firms' network of foreign suppliers. To guide my empirical
strategy, I construct a heterogeneous firm model subject to quality choice and
frictions in the market for foreign suppliers. The model predicts that the
impact of an increase in the cost of imported inputs is increasing in the
adjustment costs of supplier linkages and in the quality of the product
exported. I take the model to the data by constructing firm-specific shocks
using a shift-share analysis which exploits firms' lagged exposure to foreign
suppliers and finely defined import price shifts. Evidence suggests the
presence of significant adjustment costs in firms' foreign supplier linkages and
strong complementarities between imported inputs and export performance,
particularly of high-quality products.",Does an increase in the cost of imported inputs hurt exports? Evidence from firms' network of foreign suppliers,2022-02-25 19:41:16,Santiago Camara,"http://arxiv.org/abs/2202.12811v1, http://arxiv.org/pdf/2202.12811v1",econ.GN
32449,gn,"Through documentary research and interviews with nutrition experts, we found
that all nutrients have two thresholds, the Recommended Daily Allowance (RDA)
and the Tolerable Upper Intake Level (UL). Intake less than the RDA or more
than the UL negatively affects health. Intake quantities of nutrients within
these limits covers 100% of the objective physiological needs without negative
repercussions. These characteristics, and others, are common knowledge among
nutrition experts; however, they are not adequately reflected in the
microeconomics models that study these needs. We conclude that the generalized
presence of these thresholds determines the existence of significant
indifference areas that should be added to the microeconomics models of the
indifference curves, thus improving the modelling of reality.",An analysis of indifference curves and areas from a human nutrition perspective,2022-02-27 05:59:27,"Diego Roldán, Angélica Abad Cisneros, Francisco Roldán-Aráuz, Samantha Leta Angamarca, Anahí Ramírez Zambrano","http://arxiv.org/abs/2202.13276v1, http://arxiv.org/pdf/2202.13276v1",econ.GN
32450,gn,"We investigate the attachment to the labour market of women in their 30s, who
are combining career and family choices, through their reactions to an
exogenous, and potentially symmetric shock, such as the COVID-19 pandemic. We
find that in Italy a large number of females with small children, living in the
North, left permanent (and temporary) employment and became inactive in 2020.
Despite the short period of observation after the burst of the pandemic, the
identified impacts appear large and persistent, particularly with respect to
the males of the same age. We argue that this evidence is ascribable to
specific regional socio-cultural factors, which foreshadow a potential
long-term detrimental impact on female labour force participation.",The attachment of adult women to the Italian labour market in the shadow of COVID-19,2022-02-27 12:16:05,"Davide Fiaschi, Cristina Tealdi","http://arxiv.org/abs/2202.13317v2, http://arxiv.org/pdf/2202.13317v2",econ.GN
32451,gn,"Digital loans have exploded in popularity across low and middle income
countries, providing short term, high interest credit via mobile phones. This
paper reports the results of a randomized evaluation of a digital loan product
in Nigeria. Being randomly approved for digital credit (irrespective of credit
score) substantially increases subjective well-being after an average of three
months. For those who are approved, being randomly offered larger loans has an
insignificant effect. Neither treatment significantly impacts other measures of
welfare. We rule out large short-term impacts either positive or negative: on
income and expenditures, resilience, and women's economic empowerment.",Instant Loans Can Lift Subjective Well-Being: A Randomized Evaluation of Digital Credit in Nigeria,2022-02-28 07:32:58,"Daniel Björkegren, Joshua Blumenstock, Omowunmi Folajimi-Senjobi, Jacqueline Mauro, Suraj R. Nair","http://arxiv.org/abs/2202.13540v1, http://arxiv.org/pdf/2202.13540v1",econ.GN
32452,gn,"We propose a parametric specification of the probability of tax penalisation
faced by a taxpayer, based on the amount of deduction chosen by her to reduce
total taxation. Comparative analyses lead to a closed-form solution for the
optimum tax deduction, and provide the maximising conditions with respect to
the probability parameters.",Taxpayer deductions and the endogenous probability of tax penalisation,2022-02-28 14:30:03,Alex A. T. Rathke,"http://arxiv.org/abs/2202.13695v1, http://arxiv.org/pdf/2202.13695v1",econ.GN
32453,gn,"This study uses a randomized control trial to evaluate a new program for
increased labor market integration of refugees. The program introduces highly
intensive assistance immediately after the residence permit is granted. The
early intervention strategy contrasts previous integration policies, which
typically constitute low-intensive help over long periods of time. We find
positive effects of the program on employment. The magnitude of the effect is
substantial, corresponding to around 15 percentage points. Our cost estimates
suggest that the new policy is less expensive than comparable labor market
programs used in the past.",Labor Market Integration of Refugees: RCT Evidence from an Early Intervention Program in Sweden,2022-03-01 17:36:00,"Matz Dahlberg, Johan Egebark, Gülay Özcan, Ulrika Vikman","http://arxiv.org/abs/2203.00487v2, http://arxiv.org/pdf/2203.00487v2",econ.GN
32454,gn,"Economists and social scientists have debated the relative importance of
nature (one's genes) and nurture (one's environment) for decades, if not
centuries. This debate can now be informed by the ready availability of genetic
data in a growing number of social science datasets. This paper explores the
potential uses of genetic data in economics, with a focus on estimating the
interplay between nature (genes) and nurture (environment). We discuss how
economists can benefit from incorporating genetic data into their analyses even
when they do not have a direct interest in estimating genetic effects. We argue
that gene--environment (GxE) studies can be instrumental for (i) testing
economic theory, (ii) uncovering economic or behavioral mechanisms, and (iii)
analyzing treatment effect heterogeneity, thereby improving the understanding
of how (policy) interventions affect population subgroups. We introduce the
reader to essential genetic terminology, develop a conceptual economic model to
interpret gene-environment interplay, and provide practical guidance to
empirical researchers.",The Economics and Econometrics of Gene-Environment Interplay,2022-03-01 23:33:29,"Pietro Biroli, Titus J. Galama, Stephanie von Hinke, Hans van Kippersluis, Cornelius A. Rietveld, Kevin Thom","http://arxiv.org/abs/2203.00729v1, http://arxiv.org/pdf/2203.00729v1",econ.GN
32455,gn,"This contribution examines the current controversy over research
productivity. There are two sides in this controversy. Using extensive data
from several industries and areas of research, one side argues that research
productivity is currently in decline. The other side disputes this conclusion.
It contends that the data used in making this argument are selective and
limited; they do not reflect the overall state of research. The conclusion that
follows from this critique is that the indicators of research productivity we
currently use are not reliable and do not warrant a definitive answer to the
problem.
  The article agrees that we need a new set of indicators in assessing research
productivity. It proposes that we should look at global indicators related to
knowledge production in general, rather than look at selective data that are
inevitably limited in their scope. The article argues that the process of
creation plays the essential role in knowledge production. Therefore, the
perspective that uses the process of creation as its central organizing
principle offers a unique and global view on the production of knowledge and
makes a definitive resolution of the controversy possible. The article also
outlines some steps for improving research productivity and realizing the full
potential of the human capacity to produce knowledge.
  Key words: Research productivity, knowledge growth, the process of creation,
levels of organization, equilibration and the production of disequilibrium.",Is Our Research Productivity In Decline? A New Approach in Resolving the Controversy,2022-03-02 19:57:53,Gennady Shkliarevsky,"http://arxiv.org/abs/2203.01235v1, http://arxiv.org/pdf/2203.01235v1",econ.GN
32456,gn,"This paper studies how gifts - monetary or in-kind payments - from drug firms
to physicians in the US affect prescriptions and drug costs. We estimate
heterogeneous treatment effects by combining physician-level data on
antidiabetic prescriptions and payments with causal inference and machine
learning methods. We find that payments cause physicians to prescribe more
brand drugs, resulting in a cost increase of $30 per dollar received. Responses
differ widely across physicians, and are primarily explained by variation in
patients' out-of-pocket costs. A gift ban is estimated to decrease drug costs
by 3-4%. Taken together, these novel findings reveal how payments shape
prescription choices and drive up costs.",The Cost of Influence: How Gifts to Physicians Shape Prescriptions and Drug Costs,2022-03-03 18:46:06,"Melissa Newham, Marica Valente","http://arxiv.org/abs/2203.01778v3, http://arxiv.org/pdf/2203.01778v3",econ.GN
32457,gn,"Nature (one's genes) and nurture (one's environment) jointly contribute to
the formation and evolution of health and human capital over the life cycle.
This complex interplay between genes and environment can be estimated and
quantified using genetic information readily available in a growing number of
social science data sets. Using genetic data to improve our understanding of
individual decision making, inequality, and to guide public policy is possible
and promising, but requires a grounding in essential genetic terminology,
knowledge of the literature in economics and social-science genetics, and a
careful discussion of the policy implications and prospects of the use of
genetic data in the social sciences and economics.",Gene-Environment Interplay in the Social Sciences,2022-03-04 12:14:20,"Rita Dias Pereira, Pietro Biroli, Titus Galama, Stephanie von Hinke, Hans van Kippersluis, Cornelius A. Rietveld, Kevin Thom","http://dx.doi.org/10.1093/acrefore/9780190625979.013.804, http://arxiv.org/abs/2203.02198v2, http://arxiv.org/pdf/2203.02198v2",econ.GN
32458,gn,"We report a large-scale randomized controlled trial designed to assess
whether the partisan cue of a pro-vaccine message from Donald Trump would
induce Americans to get COVID-19 vaccines. Our study involved presenting a
27-second advertisement to millions of U.S. YouTube users in October 2021.
Results indicate that the campaign increased the number of vaccines in the
average treated county by 103. Spread across 1,014 treated counties, the total
effect of the campaign was an estimated increase of 104,036 vaccines. The
campaign was cost-effective: with an overall budget of about \$100,000, the
cost to obtain an additional vaccine was about \$1 or less.",Using Donald Trump's COVID-19 Vaccine Endorsement to Give Public Health a Shot in the Arm: A Large-Scale Ad Experiment,2022-03-05 03:45:13,"Bradley J. Larsen, Timothy J. Ryan, Steven Greene, Marc J. Hetherington, Rahsaan Maxwell, Steven Tadelis","http://arxiv.org/abs/2203.02625v3, http://arxiv.org/pdf/2203.02625v3",econ.GN
32459,gn,"Putin's Ukraine war has caused gas prices to skyrocket. Because of Europe's
dependence on Russian gas supplies, we all pay significantly more for heating,
involuntarily helping to fund Russia's war against Ukraine. Based on an
analysis of real-time gas price data, we present a calculation that estimates
every household's financial contribution for heating paid to Russian gas
suppliers daily at current prices - six euros per household per day. We show
ways everyone can save energy and help reduce the dependency on Russian gas
supply.",Data Science vs Putin: How much does each of us pay for Putin's war?,2022-03-05 17:19:59,"Fabian Braesemann, Max Schuler","http://arxiv.org/abs/2203.02756v1, http://arxiv.org/pdf/2203.02756v1",econ.GN
32460,gn,"Determining the development of Germany's energy system by taking the energy
transition objectives into account is the subject of a series of studies. Since
their assumptions and results play a significant role in the political energy
debate for understanding the role of hydrogen and synthetic energy carriers, a
better discussion is needed. This article provides a comparative assessment of
published transition pathways for Germany to assess the role and advantages of
hydrogen and synthetic energy carriers. Twelve energy studies were selected and
37 scenarios for the years 2030 and 2050 were evaluated. Despite the
variations, the two carriers will play an important future role. While their
deployment is expected to have only started by 2030 with a mean demand of 91
TWh/a (4% of the final energy demand) in Germany, they will be an essential
part by 2050 with a mean demand of 480 TWh/a (24% of the final energy demand).
A moderately positive correlation (0.53) between the decarbonisation targets
and the share of hydrogen-based carriers in final energy demand underlines the
relevance for reaching the climate targets. Additionally, value creation
effects of about 5 bn EUR/a in 2030 can be expected for hydrogen-based
carriers. By 2050, these effects will increase to almost 16 bn EUR/a. Hydrogen
is expected to be mainly produced domestically while synthetic fuels are
projected to be mostly imported. Despite all the advantages, the
construction of the facilities is associated with high costs, which should
not be neglected in the discussion.",Future role and economic benefits of hydrogen and synthetic energy carriers in Germany: a systematic review of long-term energy scenarios,2022-03-06 02:37:03,"Fabian Scheller, Stefan Wald, Hendrik Kondziella, Philipp Andreas Gunkel, Thomas Bruckner, Dogan Keles","http://arxiv.org/abs/2203.02834v1, http://arxiv.org/pdf/2203.02834v1",econ.GN
32461,gn,"A lack of financial access, which is often an issue in many central-city U.S.
neighborhoods, can be linked to higher interest rates as well as negative
health and psychological outcomes. A number of analyses of ""banking deserts""
have also found these areas to be poorer and less White than other parts of the
city. While previous research has examined specific cities, or has classified
areas by population densities, no study to date has examined a large set of
individual cities. This study looks at 319 U.S. cities with populations greater
than 100,000 and isolates areas with fewer than 0.318 banks per square mile
based on distances from block-group centroids. The relative shares of these
""deserts"" appears to be independent of city population across the sample, and
there is little relationship between these shares and socioeconomic variables
such as the poverty rate or the percentage of Black residents. One plausible
explanation is that only a subset of many cities' poorest, least White block
groups can be classified as banking deserts; nearby block groups with similar
socioeconomic characteristics are therefore non-deserts. Outside of the
Northeast, non-desert areas tend to be poorer than deserts, suggesting that
income- and bank-poor neighborhoods might not be as prevalent as is commonly
assumed.","Banking Deserts,"" City Size, and Socioeconomic Characteristics in Medium and Large U.S. Cities",2022-03-07 02:05:26,Scott W. Hegerty,"http://arxiv.org/abs/2203.03069v1, http://arxiv.org/pdf/2203.03069v1",econ.GN
32462,gn,"This paper aims to clarify the relationship between monetary policy shocks
and wage inequality. We emphasize the relevance of within and between wage
group inequalities in explaining total wage inequality in the United States.
Relying on quarterly data for the period 2000-2020, our analysis shows that
racial disparities explain 12\% of observed total wage inequality.
Subsequently, we examine the role of monetary policy in wage inequality. We do
not find compelling evidence that shows that monetary policy plays a role in
exacerbating the racial wage gap. However, there is evidence that accommodative
monetary policy plays a role in magnifying between-group wage inequalities, but
the impact occurs after 2008.",Monetary policy and the racial wage gap,2022-03-07 21:09:29,"Edmond Berisha, Ram Sewak Dubey, Eric Olson","http://arxiv.org/abs/2203.03565v1, http://arxiv.org/pdf/2203.03565v1",econ.GN
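As a worked illustration of the within/between split the preceding abstract relies on, the sketch below decomposes the total variance of simulated log wages for two hypothetical groups into between-group and within-group components. All numbers are made up; only the standard variance-decomposition logic is shown.

```python
import numpy as np

# Illustrative log wages for two hypothetical groups (not the paper's data).
rng = np.random.default_rng(1)
wages_a = rng.normal(3.0, 0.5, 1000)   # group A log wages
wages_b = rng.normal(2.8, 0.5, 800)    # group B log wages

all_wages = np.concatenate([wages_a, wages_b])
groups = [wages_a, wages_b]
n = all_wages.size
grand_mean = all_wages.mean()

# Between-group part: size-weighted variance of group means around the grand
# mean; within-group part: size-weighted group variances. They sum to the total.
between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups) / n
within = sum(g.size * g.var() for g in groups) / n
total = all_wages.var()

print(f"between share: {between / total:.2%}, within share: {within / total:.2%}")
```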
32463,gn,"This paper examines experimentally how reputational uncertainty and the rate
of change of the social environment determine cooperation. Reputational
uncertainty significantly decreases cooperation, while a fast-changing social
environment only causes a second-order qualitative increase in cooperation. At
the individual level, reputational uncertainty induces more leniency and
forgiveness in imposing network punishment through the link proposal and
removal processes, inhibiting the formation of cooperative clusters. However,
this effect is significant only in the fast-changing environment and not in the
slow-changing environment. A substitution pattern between network punishment
and action punishment (retaliatory defection) explains this discrepancy across
the two social environments.",Cooperation and punishment mechanisms in uncertain and dynamic networks,2022-03-08 13:56:26,"Edoardo Gallo, Yohanes E. Riyanto, Nilanjan Roy, Tat-How Teh","http://arxiv.org/abs/2203.04001v1, http://arxiv.org/pdf/2203.04001v1",econ.GN
34145,th,"When opposing parties compete for a prize, the sunk effort players exert
during the conflict can affect the value of the winner's reward. These
spillovers can have substantial influence on the equilibrium behavior of
participants in applications such as lobbying, warfare, labor tournaments,
marketing, and R&D races. To understand this influence, we study a general
class of asymmetric, two-player all-pay auctions where we allow for spillovers
in each player's reward. The link between participants' efforts and rewards
yields novel effects -- in particular, players with higher costs and lower
values than their opponent sometimes extract larger payoffs.",Asymmetric All-Pay Auctions with Spillovers,2021-06-16 03:28:27,"Maria Betto, Matthew W. Thomas","http://arxiv.org/abs/2106.08496v2, http://arxiv.org/pdf/2106.08496v2",econ.TH
32464,gn,"Using institutional economic theory as our guiding framework, we develop a
model to describe how populist discourse by a nation's political leader
influences entrepreneurship. We hypothesize that populist discourse reduces
entrepreneurship by creating regime uncertainty concerning the future stability
of the institutional environment, resulting in entrepreneurs anticipating
higher future transaction costs. Our model highlights two important factors
that moderate the relationship. The first is the strength of political checks and
balances, which we hypothesize weakens the negative relationship between
populist discourse and entrepreneurship by providing entrepreneurs with greater
confidence that the actions of a populist will be constrained. Second, the
political ideology of the leader moderates the relationship between populist
discourse and entrepreneurship. The anti-capitalistic rhetoric of left-wing
populism will create greater regime uncertainty than right-wing populism, which
is often accompanied by rhetoric critical of free trade and foreigners but also
supportive of business interests. The effect of centrist populism, which is
accompanied by a mix of contradictory and often moderate ideas that make it
difficult to discern future transaction costs, will have a weaker negative
effect on entrepreneurship than either left-wing or right-wing populism. We
empirically test our model using a multi-level design and a dataset comprised
of more than 780,000 individuals in 33 countries over the period 2002-2016. Our
analysis largely supports our theory regarding the moderating role of ideology.
Still, surprisingly, our findings suggest that the negative effect of populism
on entrepreneurship is greater in nations with stronger checks and balances.",Populist Discourse and Entrepreneurship: The Role of Political Ideology and Institutions,2022-03-08 17:10:23,"Daniel L. Bennett, Christopher J. Boudreaux, Boris N. Nikolaev","http://arxiv.org/abs/2203.04101v1, http://arxiv.org/pdf/2203.04101v1",econ.GN
32465,gn,"We develop a labor demand model that encompasses pre-match hiring cost
arising from tight labor markets. Through the lens of the model, we study the
effect of labor market tightness on firms' labor demand by applying novel
Bartik instruments to the universe of administrative employment data on
Germany. In line with theory, the IV results suggest that a 10 percent increase
in labor market tightness reduces firms' employment by 0.5 percent. When
accounting for search externalities, we find that the individual-firm wage
elasticity of labor demand reduces from -0.7 to -0.5 at the aggregate level.
For the 2015 minimum wage introduction, the elasticities imply only modest
disemployment effects mirroring empirical ex-post evaluations. Moreover, the
doubling of tightness between 2012 and 2019 led to a significant slowdown in
employment growth by 1.1 million jobs.",Labor Demand on a Tight Leash,2022-03-10 22:09:40,"Mario Bossler, Martin Popp","http://arxiv.org/abs/2203.05593v6, http://arxiv.org/pdf/2203.05593v6",econ.GN
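The abstract above refers to Bartik instruments for labor market tightness. A minimal sketch of how a generic shift-share (Bartik) instrument is assembled follows: lagged local exposure shares are combined with aggregate "shift" components. The shares, shifts, and dimensions below are placeholders, and in practice the shifts are often computed leave-one-out; this is not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_industries = 200, 10

# Lagged employment shares of each industry in each region (rows sum to one).
shares = rng.dirichlet(np.ones(n_industries), size=n_regions)

# Aggregate growth of each industry (leave-one-out aggregation in practice).
national_shifts = rng.normal(0.02, 0.05, n_industries)

# Bartik instrument: predicted local growth from aggregate industry trends.
bartik = shares @ national_shifts
print(bartik[:5])
```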
32466,gn,"This paper studies the role of social networks in spatial mobility across
India. Using aggregated and de-identified data from the world's largest online
social network, we (i) document new descriptive findings on the structure of
social networks and spatial mobility in India; (ii) quantify the effects of
social networks on annual migration choice; and (iii) embed these estimates in
a spatial equilibrium model to study the wage implications of increasing social
connectedness. Across millions of individuals, we find that multiple measures
of social capital are concentrated among the rich and educated and among
migrants. Across destinations, both mobility patterns and social networks are
concentrated toward richer areas. A model of migration suggests individuals are
indifferent between a 10% increase in destination wages and a 12-16% increase
in destination social networks. Accounting for networks reduces the
migration-distance relationship by 19%. In equilibrium, equalizing social
networks across locations improves average wages by 3% (24% for the bottom
wage-quartile), a larger impact than removing the marginal cost of distance. We
find evidence of an economic support mechanism, with destination economic
improvements reducing the migration-network elasticity. We also find suggestive
evidence for an emotional support mechanism from qualitative surveys among
Facebook users. Difference-in-difference estimates suggest college attendance
delivers a 20% increase in network size and diversity. Taken together, our data
suggest that - by reducing effective moving costs - increasing social
connectedness across space may have considerable economic gains.",Social Networks and Spatial Mobility: Evidence from Facebook in India,2022-03-10 22:23:01,"Harshil Sahai, Michael Bailey","http://arxiv.org/abs/2203.05595v1, http://arxiv.org/pdf/2203.05595v1",econ.GN
32467,gn,"This paper proposes a simulation-based deep learning Bayesian procedure for
the estimation of macroeconomic models. This approach is able to derive
posteriors even when the likelihood function is not tractable. Because the
likelihood is not needed for Bayesian estimation, filtering is also not needed.
This allows Bayesian estimation of HANK models with upwards of 800 latent
states as well as estimation of representative agent models that are solved
with methods that don't yield a likelihood--for example, projection and value
function iteration approaches. I demonstrate the validity of the approach by
estimating a 10 parameter HANK model solved via the Reiter method that
generates 812 covariates per time step, where 810 are latent variables, showing
this can handle a large latent space without model reduction. I also apply
the algorithm to an 11-parameter model solved via value function iteration,
which cannot be estimated with Metropolis-Hastings or even conventional maximum
likelihood estimators. In addition, I show the posteriors estimated on
Smets-Wouters 2007 are higher quality and faster using simulation-based
inference compared to Metropolis-Hastings. This approach helps address the
computational expense of Metropolis-Hastings and allows solution methods which
don't yield a tractable likelihood to be estimated.",Fast Simulation-Based Bayesian Estimation of Heterogeneous and Representative Agent Models using Normalizing Flow Neural Networks,2022-03-13 02:59:32,Cameron Fen,"http://arxiv.org/abs/2203.06537v1, http://arxiv.org/pdf/2203.06537v1",econ.GN
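To illustrate the likelihood-free logic the preceding abstract builds on, here is a deliberately simplified stand-in: a rejection ABC estimate of an AR(1) persistence parameter, where a posterior is obtained purely by simulating from the model and comparing summary statistics. The paper's approach uses neural density estimators rather than rejection sampling; the toy model, prior, tolerance, and summary statistic below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ar1(rho, T=100):
    # Toy AR(1) "macro" series; stands in for a structural model solver.
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

def summary(x):
    # First-order autocorrelation as a (very coarse) summary statistic.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

true_rho = 0.7
observed = summary(simulate_ar1(true_rho))

# Likelihood-free posterior via rejection: draw from the prior, simulate, and
# keep draws whose simulated summary is close to the observed one.
prior_draws = rng.uniform(-0.95, 0.95, 5000)
accepted = [rho for rho in prior_draws
            if abs(summary(simulate_ar1(rho)) - observed) < 0.05]

print(f"ABC posterior mean of rho: {np.mean(accepted):.2f} "
      f"({len(accepted)} accepted draws)")
```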
32468,gn,"We show that pooling countries across a panel dimension to macroeconomic data
can improve by a statistically significant margin the generalization ability of
structural, reduced form, and machine learning (ML) methods to produce
state-of-the-art results. Using GDP forecasts evaluated on an out-of-sample
test set, this procedure reduces root mean squared error by 12\% across
horizons and models for certain reduced-form models and by 24\% across horizons
for dynamic structural general equilibrium models. Removing US data from the
training set and forecasting out-of-sample country-wise, we show that
reduced-form and structural models are more policy-invariant when trained on
pooled data, and outperform a baseline that uses US data only. Given the
comparative advantage of ML models in a data-rich regime, we demonstrate that
our recurrent neural network model and automated ML approach outperform all
tested baseline economic models. Robustness checks indicate that our
outperformance is reproducible, numerically stable, and generalizable across
models.","Improving Macroeconomic Model Validity and Forecasting Performance with Pooled Country Data using Structural, Reduced Form, and Neural Network Model",2022-03-13 03:37:09,"Cameron Fen, Samir Undavia","http://arxiv.org/abs/2203.06540v1, http://arxiv.org/pdf/2203.06540v1",econ.GN
32469,gn,"This paper presents evidence on the granular nature of firms' network of
foreign suppliers and studies its implications for the impact of supplier
shocks on domestic firms' performance. To demonstrate this, I use customs level
information on transactions between Argentinean firms and foreign firms. I
highlight two novel stylized facts: (i) the distribution of domestic firms'
number of foreign suppliers is highly skewed with the median firm reporting
linkages with only two, (ii) firms focus imported value on one top-supplier,
even when controlling for firm size. Motivated by these facts I construct a
theoretical framework of heterogeneous firms subject to search frictions in the
market for foreign suppliers. Through a calibration exercise I study the
framework's predictions and test them in the data using a shift-share
identification strategy. Results present evidence of significant frictions in
the market for foreign suppliers and strong import-export complementarities.","Granular Linkages, Supplier Cost Shocks & Export Performance",2022-03-14 19:55:16,Santiago Camara,"http://arxiv.org/abs/2203.07282v1, http://arxiv.org/pdf/2203.07282v1",econ.GN
32470,gn,"Vaccination against the coronavirus disease 2019 (COVID-19) is a key measure
to reduce the probability of getting infected with the disease. Accordingly,
this might significantly change an individual's perception and decision-making
in daily life. For instance, it is predicted that with widespread vaccination,
individuals will exhibit less rigid preventive behaviors, such as staying at
home, frequently washing hands, and wearing a mask. We observed the same
individuals on a monthly basis for 18 months, from March 2020 (the early stage
of the COVID-19 pandemic) to September 2021, in Japan to independently
construct large sample panel data (N=54,007). Using the data, we compare the
individuals' preventive behaviors before and after they got vaccinated;
additionally, we compare their behaviors with those individuals who did not get
vaccinated. Furthermore, we compare the effect of vaccination on the
individuals less than or equal to 40 years of age with those greater than 40
years old. The major findings determined after controlling for individual
characteristics using the fixed effects model and various factors are as
follows. First, as opposed to the prediction, based on the whole sample, the
vaccinated people were observed to stay at home and did not change their habits
of frequently washing hands and wearing a mask. Second, using the sub-sample of
individuals aged equal to or below 40, we find that the vaccinated people are
more likely to go out. Third, the results obtained using a sample comprising
people aged over 40 are similar to those obtained using the whole sample.
Preventive behaviors affect oneself and generate externalities on
others during this pandemic. Informal social norms motivate people to increase
or maintain preventive behaviors even after being vaccinated in societies where
such behaviors are not enforced.",Effect of the COVID-19 vaccine on preventive behaviors: Evidence from Japan,2022-03-15 08:47:18,"Eiji Yamamura, Youki Koska, Yoshiro Tsutsui, Fumio Ohtake","http://arxiv.org/abs/2203.07660v1, http://arxiv.org/pdf/2203.07660v1",econ.GN
32471,gn,"Vaccination has been promoted to mitigate the spread of the coronavirus
disease 2019 (COVID-19). Vaccination is expected to reduce the probability of
and alleviate the seriousness of COVID-19 infection. Accordingly, this might
significantly change an individual's subjective well-being and mental health.
However, it is unknown how vaccinated people perceive the effectiveness of
COVID-19 and how their subjective well-being and mental health change after
vaccination. We thus observed the same individuals on a monthly basis from
March 2020 to September 2021 in all parts of Japan. Then, large sample panel
data (N=54,007) were independently constructed. Using the data, we compared the
individuals' perceptions of COVID-19, subjective well-being, and mental health
before and after vaccination. Furthermore, we compared the effect of
vaccination on the perceptions of COVID-19 and mental health for females and
males. We used the fixed-effects model to control for individual time-invariant
characteristics. The major findings were as follows: First, the vaccinated
people perceived the probability of getting infected and the seriousness of
COVID-19 to be lower than before vaccination. This was observed not only when
we used the whole sample, but also when we used sub-samples. Second, using the
whole sample, subjective well-being and mental health improved. The same
results were also observed using the sub-sample of females, whereas the
improvements were not observed using a sub-sample of males.",Gender differences of the effect of vaccination on perceptions of COVID-19 and mental health in Japan,2022-03-15 08:57:01,"Eiji Yamamura, Youki Kosaka, Yoshiro Tsutsui, Fumio Ohtake","http://arxiv.org/abs/2203.07663v1, http://arxiv.org/pdf/2203.07663v1",econ.GN
32472,gn,"Organisations rely upon group formation to solve complex tasks, and groups
often adapt to the demands of the task they face by changing their composition
periodically. Previous research comes to ambiguous results regarding the
effects of group adaptation on task performance. This paper aims to understand
the impact of group adaptation, defined as a process of periodically changing a
group's composition, on complex task performance and considers the moderating
role of individual learning and task complexity in this relationship. We base
our analyses on an agent-based model of adaptive groups in a complex task
environment based on the NK-framework. The results indicate that reorganising
well-performing groups might be beneficial, but only if individual learning is
restricted. However, there are also cases in which group adaptation might
unfold adverse effects. We provide extensive analyses that shed additional
light on and, thereby, help explain the ambiguous results of previous research.",Dynamic groups in complex task environments: To change or not to change a winning team?,2022-03-17 11:26:00,"Darío Blanco-Fernández, Stephan Leitner, Alexandra Rausch","http://arxiv.org/abs/2203.09157v1, http://arxiv.org/pdf/2203.09157v1",econ.GN
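The abstract above grounds its agent-based model in the NK-framework. A minimal sketch of an NK-style fitness landscape with local search is given below; N, K, the payoff tables, and the one-bit-flip hill climber are illustrative choices and do not reproduce the paper's group-formation mechanics.

```python
import numpy as np

rng = np.random.default_rng(8)
N, K = 10, 3   # N binary decisions, each interacting with K others

# For each decision, pick K interacting decisions and draw a payoff table over
# the 2^(K+1) joint states of (own bit, interacting bits).
partners = [rng.choice([j for j in range(N) if j != i], K, replace=False)
            for i in range(N)]
tables = rng.random((N, 2 ** (K + 1)))

def fitness(bits):
    total = 0.0
    for i in range(N):
        state = [bits[i]] + [bits[j] for j in partners[i]]
        idx = int("".join(map(str, state)), 2)   # index into the payoff table
        total += tables[i, idx]
    return total / N

# One-bit-flip hill climbing as a stand-in for an agent's local search.
current = rng.integers(0, 2, N)
for _ in range(200):
    i = rng.integers(N)
    candidate = current.copy()
    candidate[i] ^= 1
    if fitness(candidate) >= fitness(current):
        current = candidate

print(f"final fitness after local search: {fitness(current):.3f}")
```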
32473,gn,"Previous research on organizations often focuses on either the individual,
team, or organizational level. There is a lack of multidimensional research on
emergent phenomena and interactions between the mechanisms at different levels.
This paper takes a multifaceted perspective on individual learning and
autonomous group formation and adaptation. To analyze interactions between the
two levels, we introduce an agent-based model that captures an organization
with a population of heterogeneous agents who learn and are limited in their
rationality. To solve a task, agents form a group that can be adapted from time
to time. We explore organizations that promote learning and group adaptation
either simultaneously or sequentially and analyze the interactions between the
activities and the effects on performance. We observe underproportional
interactions when tasks are interdependent and show that pushing learning and
group adaptation too far might backfire and decrease performance significantly.",Interactions between the individual and the group level in organizations: The case of learning and autonomous group adaptation,2022-03-17 11:31:23,"Dario Blanco-Fernandez, Stephan Leitner, Alexandra Rausch","http://arxiv.org/abs/2203.09162v1, http://arxiv.org/pdf/2203.09162v1",econ.GN
32474,gn,"This textbook is an introduction to economic networks, intended for students
and researchers in the fields of economics and applied mathematics. The
textbook emphasizes quantitative modeling, with the main underlying tools being
graph theory, linear algebra, fixed point theory and programming. The text is
suitable for a one-semester course, taught either to advanced undergraduate
students who are comfortable with linear algebra or to beginning graduate
students.",Economic Networks: Theory and Computation,2022-03-22 21:09:02,"Thomas J. Sargent, John Stachurski","http://arxiv.org/abs/2203.11972v5, http://arxiv.org/pdf/2203.11972v5",econ.GN
32475,gn,"Geopolitical conflicts have increasingly been a driver of trade policy. We
study the potential effects of global and persistent geopolitical conflicts on
trade, technological innovation, and economic growth. In conventional trade
models the welfare costs of such conflicts are modest. We build a multi-sector
multi-region general equilibrium model with dynamic sector-specific knowledge
diffusion, which magnifies welfare losses of trade conflicts. Idea diffusion is
mediated by the input-output structure of production, such that both sector
cost shares and import trade shares characterize the source distribution of
ideas. Using this framework, we explore the potential impact of a ""decoupling
of the global economy,"" a hypothetical scenario under which technology systems
would diverge in the global economy. We divide the global economy into two
geopolitical blocs -- East and West -- based on foreign policy similarity and
model decoupling through an increase in iceberg trade costs (full decoupling)
or tariffs (tariff decoupling). Results yield three main insights. First, the
projected welfare losses for the global economy of a decoupling scenario can be
drastic, as large as 15% in some regions and are largest in the lower income
regions as they would benefit less from technology spillovers from richer
areas. Second, the described size and pattern of welfare effects are specific
to the model with diffusion of ideas. Without diffusion of ideas the size and
variation across regions of the welfare losses would be substantially smaller.
Third, a multi-sector framework exacerbates diffusion inefficiencies induced by
trade costs relative to a single-sector one.","The Impact of Geopolitical Conflicts on Trade, Growth, and Innovation",2022-03-23 06:30:41,"Carlos Góes, Eddy Bekkers","http://arxiv.org/abs/2203.12173v2, http://arxiv.org/pdf/2203.12173v2",econ.GN
32476,gn,"Do school openings trigger Covid-19 diffusion when school-age vaccination is
available? We investigate this question using a unique geo-referenced high
frequency database on school openings, vaccinations, and Covid-19 cases from
the Italian region of Sicily. The analysis focuses on the change of Covid-19
diffusion after school opening in a homogeneous geographical territory. The
identification of causal effects derives from a comparison of the change in
cases before and after school opening in 2020/21, when vaccination was not
available, and in 2021/22, when the vaccination campaign targeted individuals
of age 12-19 and above 19. The results indicate that, while school opening
determined an increase in the growth rate of Covid-19 cases in 2020/2021, this
effect has been substantially reduced by school-age vaccination in 2021/2022.
In particular, we find that an increase of approximately 10% in the vaccination
rate of school-age population reduces the growth rate of Covid-19 cases after
school opening by approximately 1.4%. In addition, a counterfactual simulation
suggests that a permanent no vaccination scenario would have implied an
increase of 19% in ICU beds occupancy.","School-age Vaccination, School Openings and Covid-19 diffusion",2022-03-23 14:16:12,"Emanuele Amodio, Michele Battisti, Antonio Francesco Gravina, Andrea Mario Lavezzi, Giuseppe Maggio","http://arxiv.org/abs/2203.12331v1, http://arxiv.org/pdf/2203.12331v1",econ.GN
32477,gn,"This submission to the Irish Commission on Taxation and Welfare advocates the
introduction of a site value tax in Ireland. Ireland has high and volatile
property prices, constraining social and economic development. Site values are
the main driver of these phenomena. Taxing site values would reduce both the
level and volatility of property prices, and thus help to alleviate these
problems. Site value tax has many other beneficial features. For example, it
captures price gains due to the community and government rather than owners'
efforts and thus diminishes the incentive to buy land for speculative reasons.
Site value tax can be used to finance infrastructural investments, help
facilitate site assembly for development and as a support for the maintenance
of protected structures. Site value tax is also a tax on wealth.",Submission to the Commission on Taxation and Welfare on introducing a site value tax,2022-03-23 20:58:16,"Eóin Flaherty, Constantin Gurdgiev, Ronan Lyons, Emer Ó Siochrú, James Pike","http://arxiv.org/abs/2203.12611v1, http://arxiv.org/pdf/2203.12611v1",econ.GN
32478,gn,"This document details a dataset that contains all unconsolidated annual
financial statements of the universe of Norwegian private and public limited
liability companies. It also includes all financial statements of other company
types reported to the Norwegian authorities.",Financial statements of companies in Norway,2022-03-24 07:10:02,Ranik Raaen Wahlstrøm,"http://arxiv.org/abs/2203.12842v4, http://arxiv.org/pdf/2203.12842v4",econ.GN
32479,gn,"This study aims to identify the differences in SAPA user interest due to each
route, traffic volumes, and after COVID-19 by the daily feedback from them and
to help develop SAPA plans. Food was the most common opinion. However, for the
route, some showed interest in other options. For the traffic volume, the
difference of interest was also shown in some heavy traffic areas, On the other
hand, the changes in customer needs after the COVID-19 disaster were less
changed.","The differences in SAPA Needs by Route, Traffic Volume and after COVID-19",2022-03-24 08:41:52,"Katsunobu Okamoto, Takuji Takemoto, Yoshimi Kawamoto, Sachiyo Kamimura","http://arxiv.org/abs/2203.12858v2, http://arxiv.org/pdf/2203.12858v2",econ.GN
32480,gn,"In high-tech industries, where intellectual property plays a crucial role,
the acquisition of intangible assets and employees' tacit knowledge is an
integral part of the motivation for Mergers and Acquisitions (M&As). Following
the molecular biology revolution, the wave of takeovers in the biotechnology
industry in the Nineties is a well-known example of M&As to absorb new
knowledge. The retention of critical R&D employees embodying valuable knowledge
and potential future innovation is uncertain after an acquisition. While not
all employees might be relevant for the success of the takeover, inventors are
among the most valuable. This is especially true for the acquisition of an
innovative start-up. This paper estimates how likely an inventor working for an
acquired biotechnology company is to leave. Using a difference-in-differences
approach that matches both firms and inventors, we find that inventors affected
by acquisitions are 20\% more likely to leave the company.",The Impact of Acquisitions on Inventors' Turnover in the Biotechnology Industry,2022-03-24 13:16:24,"Luca Verginer, Federica Parisi, Jeroen van Lidth de Jeude, Massimo Riccaboni","http://arxiv.org/abs/2203.12968v1, http://arxiv.org/pdf/2203.12968v1",econ.GN
32510,gn,"We propose a deep learning approach to probabilistic forecasting of
macroeconomic and financial time series. Being able to learn complex patterns
from a data-rich environment, our approach is useful for decision making that
depends on the uncertainty of a large number of economic outcomes. Specifically, it
is informative to agents facing asymmetric dependence of their loss on outcomes
from possibly non-Gaussian and non-linear variables. We show the usefulness of
the proposed approach on two distinct datasets where a machine learns the
pattern from data. First, we construct macroeconomic fan charts that reflect
information from a high-dimensional data set. Second, we illustrate gains in
prediction of stock return distributions which are heavy tailed, asymmetric and
suffer from low signal-to-noise ratio.",Learning Probability Distributions in Macroeconomics and Finance,2022-04-14 12:48:54,"Jozef Barunik, Lubos Hanus","http://arxiv.org/abs/2204.06848v1, http://arxiv.org/pdf/2204.06848v1",econ.GN
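A sketch of how distributional forecasts of the fan-chart type mentioned above can be produced with a much simpler tool than the paper's deep learning model: linear quantile regressions on a lagged value of a simulated heavy-tailed series. The quantile levels, the data-generating process, and the single-lag specification are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.standard_t(df=4)   # heavy-tailed innovations

X = np.column_stack([np.ones(T - 1), y[:-1]])      # constant + lagged value
target = y[1:]
x_new = np.array([[1.0, y[-1]]])                   # forecast origin

# One quantile regression per quantile level gives the bands of a fan chart.
for q in [0.05, 0.25, 0.50, 0.75, 0.95]:
    fit = sm.QuantReg(target, X).fit(q=q)
    print(f"q={q:.2f}: one-step-ahead forecast {fit.predict(x_new)[0]: .3f}")
```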
32481,gn,"Across income groups and countries, individual citizens perceive economic
inequality spectacularly wrong. These misperceptions have far-reaching
consequences, as it is perceived inequality, not actual inequality, that informs
redistributive preferences. The prevalence of this phenomenon is independent of
social class and welfare regime, which suggests the existence of a common
mechanism behind public perceptions. The literature has identified several
stylised facts on how individual perceptions respond to actual inequality and
how these biases vary systematically along the income distribution. We propose
a network-based explanation of perceived inequality building on recent advances
in random geometric graph theory. The generating mechanism can replicate all of
the aforementioned stylised facts simultaneously. It also produces social networks
that exhibit salient features of real-world networks; namely, they cannot be
statistically distinguished from small-world networks, testifying to the
robustness of our approach. Our results, therefore, suggest that homophilic
segregation is a promising candidate to explain inequality perceptions with
strong implications for theories of consumption and voting behaviour.",A Network-Based Explanation of Inequality Perceptions,2022-03-27 12:16:52,"Jan Schulz, Daniel M. Mayerhoffer, Anna Gebhard","http://arxiv.org/abs/2203.14254v2, http://arxiv.org/pdf/2203.14254v2",econ.GN
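A minimal sketch of the network-based mechanism described above: agents embedded in a random geometric graph observe incomes only within their own neighbourhood, so the inequality they perceive locally differs from the actual population-wide inequality. The node count, radius, lognormal income draw, and Gini comparison are illustrative choices, not the paper's calibration.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n = 500

G = nx.random_geometric_graph(n, radius=0.08, seed=6)
income = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # actual income distribution

def gini(x):
    # Gini coefficient from the sorted cumulative income distribution.
    x = np.sort(np.asarray(x, dtype=float))
    m = len(x)
    cum = np.cumsum(x)
    return (m + 1 - 2 * np.sum(cum) / cum[-1]) / m

perceived = []
for node in G.nodes:
    neigh = list(G.neighbors(node)) + [node]
    perceived.append(gini(income[neigh]))   # inequality within the own network

print(f"actual Gini: {gini(income):.2f}, mean perceived Gini: {np.mean(perceived):.2f}")
```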
32482,gn,"The nexus between debt and inequality has attracted considerable scholarly
attention in the wake of the global financial crisis. One prominent candidate
to explain the striking co-evolution of income inequality and private debt in
this period has been the theory of upward-looking consumption externalities
leading to expenditure cascades. We propose a parsimonious model of
upward-looking consumption at the micro level mediated by perception networks
with empirically plausible topologies. This allows us to make sense of the
ambiguous empirical literature on the relevance of this channel. Up to our
knowledge, our approach is the first to make the reference group to which
conspicuous consumption relates explicit. Our model, based purely on current
income, replicates the major stylised facts regarding micro consumption
behaviour and is thus observationally equivalent to the workhorse permanent
income hypothesis, without facing its dual problem of `excess smoothness' and
`excess sensitivity'. We also demonstrate that the network topology and
segregation has a significant effect on consumption patterns which has so far
been neglected.",A Network Approach to Consumption,2022-03-27 12:50:34,"Jan Schulz, Daniel M. Mayerhoffer","http://arxiv.org/abs/2203.14259v2, http://arxiv.org/pdf/2203.14259v2",econ.GN
32483,gn,"Several key actors -- police, prosecutors, judges -- can alter the course of
individuals passing through the multi-staged criminal justice system. I use
linked arrest-sentencing data for federal courts from 1994-2010 to examine the
role that earlier stages play when estimating Black-white sentencing gaps. I
find no evidence of sample selection at play in the federal setting, suggesting
federal judges are largely responsible for racial sentencing disparities. In
contrast, I document substantial sample selection bias in two different state
courts systems. Estimates of racial and ethnic sentencing gaps that ignore
selection underestimate the true disparities by 15% and 13% respectively.",Racial Sentencing Disparities and Differential Progression Through the Criminal Justice System: Evidence From Linked Federal and State Court Data,2022-03-27 14:45:44,Brendon McConnell,"http://arxiv.org/abs/2203.14282v2, http://arxiv.org/pdf/2203.14282v2",econ.GN
32484,gn,"Banks play an intrinsic role in any modern economy, recycling capital from
savers to borrowers. They are heavily regulated and there have been a
significant number of well publicized compliance failings in recent years. This
is despite Business Process Compliance (BPC) being both a well researched
domain in academia and one where significant progress has been made. This study
seeks to determine why Australian banks find BPC so challenging. We interviewed
22 senior managers from a range of functions within the four major Australian
banks to identify the key challenges. Not every process in every bank is facing
the same issues, but in processes where a bank is particularly challenged to
meet its compliance requirements, the same themes emerge. The compliance
requirement load they bear is excessive, dynamic and complex. Fulfilling these
requirements relies on impenetrable spaghetti processes, and the case for
sustainable change remains elusive, locking banks into a fail-fix cycle that
increases the underlying complexity. This paper proposes a conceptual framework
that identifies and aggregates the challenges, and a circuit-breaker approach
as an ""off ramp"" to the fail-fix cycle.",Why Do Banks Find Business Process Compliance So Challenging? An Australian Case Study,2022-03-25 02:35:46,"Nigel Adams, Adriano Augusto, Michael Davern, Marcello La Rosa","http://arxiv.org/abs/2203.14904v1, http://arxiv.org/pdf/2203.14904v1",econ.GN
32485,gn,"We aim to contribute to the literature on product space and diversification
by proposing a number of extensions of the current literature: (1) we propose
that the alternative but related idea of a country space also has empirical and
theoretical appeal; (2) we argue that the loss of comparative advantage should
be an integral part of (testing the empirical relevance of) the product space
idea; (3) we propose several new indicators for measuring relatedness in
product space; and (4) we propose a non-parametric statistical test based on
bootstrapping to test the empirical relevance of the product space idea.",Some New Views on Product Space and Related Diversification,2022-03-30 16:56:09,"Önder Nomaler, Bart Verspagen","http://arxiv.org/abs/2203.16316v1, http://arxiv.org/pdf/2203.16316v1",econ.GN
32486,gn,"The 2021 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred
Nobel was awarded to David Card ""for his empirical contributions to labour
economics"" and to Joshua Angrist and Guido Imbens ""for their methodological
contributions to the analysis of causal relationships."" We survey these
contributions of the three laureates, and discuss how their empirical and
methodological insights transformed the modern practice of applied
microeconomics. By emphasizing research design and formalizing the causal
content of different econometric procedures, the laureates shed new light on
key questions in labour economics and advanced a robust toolkit for empirical
analyses across many fields.","Labour by Design: Contributions of David Card, Joshua Angrist, and Guido Imbens",2022-03-30 18:37:05,"Peter Hull, Michal Kolesár, Christopher Walters","http://dx.doi.org/10.1111/sjoe.12505, http://arxiv.org/abs/2203.16405v1, http://arxiv.org/pdf/2203.16405v1",econ.GN
32511,gn,"Will the Opportunity Zones (OZ) program, America's largest new place-based
policy in decades, generate neighborhood change? We compare single-family
housing price growth in OZs with price growth in areas that were eligible but
not included in the program. We also compare OZs to their nearest geographic
neighbors. Our most credible estimates rule out price impacts greater than 0.5
percentage points with 95% confidence, suggesting that, so far, home buyers
don't believe that this subsidy will generate major neighborhood change. OZ
status reduces prices in areas with little employment, perhaps because buyers
think that subsidizing new investment will increase housing supply. Mixed
evidence suggests that OZs may have increased residential permitting.",JUE Insight: The (Non-)Effect of Opportunity Zones on Housing Prices,2022-04-14 16:48:40,"Jiafeng Chen, Edward Glaeser, David Wessel","http://dx.doi.org/10.1016/j.jue.2022.103451, http://arxiv.org/abs/2204.06967v1, http://arxiv.org/pdf/2204.06967v1",econ.GN
32487,gn,"Global trends of fertility decline, population aging, and rural outmigration
are creating pressures to consolidate school systems, with the rationale that
economies of scale will enable higher quality education to be delivered in an
efficient manner, despite longer travel distances for students. Yet, few
studies have considered the implications of system consolidation for
educational access and inequality, outside of the context of developed
countries. We estimate the impact of educational infrastructure consolidation
on educational attainment using the case of China's rural primary school
closure policies in the early 2000s. We use data from a large household survey
covering 728 villages in 7 provinces, and exploit variation in villages' year
of school closure and children's ages at closure to identify the causal impact
of school closure. For girls exposed to closure during their primary school
ages, we find an average decrease of 0.60 years of schooling by 2011, when
children's mean age was 17 years old. Negative effects strengthen with time
since closure. For boys, there is no corresponding significant effect.
Different effects by gender may be related to greater sensitivity of girls'
enrollment to distance and greater responsiveness of boys' enrollment to
quality.",Estimating the Effects of Educational System Consolidation: The Case of China's Rural School Closure Initiative,2022-03-31 18:20:40,"Emily Hannum, Xiaoying Liu, Fan Wang","http://dx.doi.org/10.1086/711654, http://arxiv.org/abs/2203.17101v1, http://arxiv.org/pdf/2203.17101v1",econ.GN
32488,gn,"We examine the spillover effect of neighboring ports on regional industrial
diversification and their economic resilience using the export data of South
Korea from 2006 to 2020. First, we build two distinct product spaces of ports
and port regions, and provide direct estimates of the role of neighboring ports
as spatially linked spillover channels. This is in contrast to the previous
literature that mainly regarded ports as transport infrastructure per se.
Second, we confirm that the knowledge spillover effect from neighboring ports
had a non-negligible role in sustaining regional economies during the recovery
after the economic crisis but its power has weakened recently due to a loosened
global value chain.",The spillover effect of neighboring port on regional industrial diversification and regional economic resilience,2022-04-01 06:44:32,"Jung-In Yeon, Sojung Hwang, Bogang Jun","http://arxiv.org/abs/2204.00189v1, http://arxiv.org/pdf/2204.00189v1",econ.GN
32489,gn,"This paper investigates whether associations between birth weight and
prenatal ambient environmental conditions--pollution and extreme
temperatures--differ by 1) maternal education; 2) children's innate health; and
3) interactions between these two. We link birth records from Guangzhou, China,
during a period of high pollution, to ambient air pollution (PM10 and a
composite measure) and extreme temperature data. We first use mean regressions
to test whether, overall, maternal education is an ""effect modifier"" in the
relationships between ambient air pollution, extreme temperature, and birth
weight. We then use conditional quantile regressions to test for effect
heterogeneity according to the unobserved innate vulnerability of babies after
conditioning on other confounders. Results show that 1) the negative
association between ambient exposures and birth weight is twice as large at
lower conditional quantiles of birth weights as at the median; 2) the
protection associated with college-educated mothers with respect to pollution
and extreme heat is heterogeneous and potentially substantial: between 0.02 and
0.34 standard deviations of birth weights, depending on the conditional
quantiles; 3) this protection is amplified under more extreme ambient
conditions and for infants with greater unobserved innate vulnerabilities.","Same environment, stratified impacts? Air pollution, extreme temperatures, and birth weight in south China",2022-04-01 08:46:59,"Xiaoying Liu, Jere R. Behrman, Emily Hannum, Fan Wang, Qingguo Zhao","http://dx.doi.org/10.1016/j.ssresearch.2021.102691, http://arxiv.org/abs/2204.00219v1, http://arxiv.org/pdf/2204.00219v1",econ.GN
32490,gn,"This review seeks to present a comprehensive picture of recent discussions in
the social sciences of the anticipated impact of AI on the world of work.
Issues covered include technological unemployment, algorithmic management,
platform work, and the politics of AI work. The review identifies the major
disciplinary and methodological perspectives on AI's impact on work, and the
obstacles they face in making predictions. Two parameters influencing the
development and deployment of AI in the economy are highlighted: the capitalist
imperative and nationalistic pressures.",Artificial Intelligence and work: a critical review of recent research from the social sciences,2022-02-26 02:28:31,"Jean-Philippe Deranty, Thomas Corbin","http://arxiv.org/abs/2204.00419v1, http://arxiv.org/pdf/2204.00419v1",econ.GN
32491,gn,"Women agency defined as the ability to conceive of purposeful plan and to
carry out action consistent with such a plan can play an important role in
determining health status. Using data from female respondents conducted in a
survey in the city of Dhaka in Bangladesh, this paper explores how women agency
relates to their physical and mental health. The findings indicate women with
high agency to experience significantly lesser mental distress on average.
Counterintuitively, these women are more likely to report poor physical health.
As an explanation, we propose purposeful action among women with high agency as
a potential reason, wherein they conceive purpose in the future and formulate
action that is feasible today. Hence, these women prefer to report illness and
get the required treatment to ensure better future health. This illuminates our
understanding of sustainable development and emphasises the critical role of
women agency for sustainable human development.",Female Agency and its Implications on Mental and Physical Health: Evidence from the city of Dhaka,2022-04-01 20:11:25,"Upasak Das, Gindo Tampubolon","http://arxiv.org/abs/2204.00582v1, http://arxiv.org/pdf/2204.00582v1",econ.GN
32492,gn,"As the widely applied method for measuring matching assortativeness in a
transferable utility matching game, a matching maximum score estimation is
proposed by \cite{fox2010qe}. This article reveals that combining unmatched
agents, transfers, and individual rationality conditions with sufficiently
large penalty terms makes it possible to identify the coefficient parameter of
a single common constant, i.e., matching costs in the market.",Individual Rationality Conditions of Identifying Matching Costs in Transferable Utility Matching Games,2022-04-02 01:35:07,Suguru Otani,"http://arxiv.org/abs/2204.00713v3, http://arxiv.org/pdf/2204.00713v3",econ.GN
32672,gn,"This study analyses oil price movements through the lens of an agnostic
random forest model, which is based on 1,000 regression trees. It shows that
this highly disciplined, yet flexible computational model reduces in sample
root mean square errors by 65% relative to a standard linear least square model
that uses the same set of 11 explanatory factors. In forecasting exercises the
RMSE reduction ranges between 51% and 68%, highlighting the relevance of non
linearities in oil markets. The results underscore the importance of
incorporating financial factors into oil models: US interest rates, the dollar
and the VIX together account for 39% of the models RMSE reduction in the post
2010 sample, rising to 48% in the post 2020 sample. If Covid 19 is also
considered as a risk factor, these shares become even larger.","Quantifying the Role of Interest Rates, the Dollar and Covid in Oil Prices",2022-08-30 16:28:07,Emanuel Kohlscheen,"http://arxiv.org/abs/2208.14254v2, http://arxiv.org/pdf/2208.14254v2",econ.GN
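As an illustration of the comparison described in the abstract above, the sketch below contrasts the in-sample RMSE of a 1,000-tree random forest with a linear least-squares benchmark. It uses synthetic data and placeholder factors, not the paper's dataset or results.

```python
# Illustrative sketch only (synthetic data, not the paper's 11 factors):
# compare in-sample RMSE of a 1,000-tree random forest against OLS.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 11))                               # 11 hypothetical explanatory factors
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=500)   # nonlinear target

ols = LinearRegression().fit(X, y)
rf = RandomForestRegressor(n_estimators=1000, random_state=0).fit(X, y)

rmse_ols = np.sqrt(mean_squared_error(y, ols.predict(X)))
rmse_rf = np.sqrt(mean_squared_error(y, rf.predict(X)))
print(f"In-sample RMSE reduction: {1 - rmse_rf / rmse_ols:.0%}")
```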
32493,gn,"As the ageing population and childlessness are increasing in rural China,
social pensions will become the mainstream choice for farmers, and the level of
social pensions must be supported by better social insurance. The paper
compares the history of rural pension insurance system, outlines the current
situation and problems, analyses China Family Panel Studies data and explores
the key factors influencing farmers' participation through an empirical
approach. The paper shows that residents' social pension insurance is facing
problems in the rural areas such as low level of protection and weak management
capacity, which have contributed to the under-insured rate, and finds that
there is a significant impact on farmers' participation in insurance from
personal characteristics factors such as gender, age, health and (family)
financial factors such as savings, personal income, intergenerational mobility
of funds. And use of the Internet can help farmers enroll in pension insurance.
The paper argues for the need to continue to implement the rural revitalisation
strategy, with the government as the lead and the market as the support, in a
concerted effort to improve the protection and popularity of rural pension
insurance.",Rural Pension System and Farmers' Participation in Residents' Social Insurance,2022-04-02 10:12:01,Tao Xu,"http://arxiv.org/abs/2204.00785v1, http://arxiv.org/pdf/2204.00785v1",econ.GN
32494,gn,"We study the allocation of and compensation for occupational COVID-19 risk at
Auburn University, a large public university in the U.S. In Spring 2021,
approximately half of the face-to-face classes had enrollments above the legal
capacity allowed by a public health order, which followed CDC social distancing
guidelines. We find lower-ranked graduate student teaching assistants and
adjunct instructors were systematically recruited to deliver riskier classes.
Using an IV strategy in which teaching risk is shifted by classroom features
(geometry and furniture), we show instructors who taught at least one risky
class earned $7,400 more than those who did not.",The Price of COVID-19 Risk in a Public University,2022-04-02 19:23:20,"Duha Altindag, Samuel Cole, R. Alan Seals Jr","http://arxiv.org/abs/2204.00894v1, http://arxiv.org/pdf/2204.00894v1",econ.GN
32495,gn,"Primary school consolidation--the closure of small community schools or their
mergers into larger, better-resourced schools--is emerging as a significant
policy response to changing demographics in middle income countries with large
rural populations. In China, large-scale consolidation took place in the early
21st century. Because officially-recognized minority populations
disproportionately reside in rural and remote areas, minority students were
among those at elevated risk of experiencing school consolidation. We analyze
heterogeneous effects of consolidation on educational attainment and reported
national language ability in China by exploiting variations in closure timing
across villages and cohorts captured in a 2011 survey of provinces and
autonomous regions with substantial minority populations. We consider
heterogeneous treatment effects across groups defined at the intersections of
minority status, gender, and community ethnic composition and socioeconomic
status. Compared to villages with schools, villages whose schools had closed
reported that the schools students now attended were better resourced, less
likely to offer minority language of instruction, more likely to have Han
teachers, farther away, and more likely to require boarding. Much more than Han
youth, ethnic minority youth were negatively affected by closure, in terms of
its impact on both educational attainment and written Mandarin facility.
However, significant penalties accruing to minority youth occurred only in the
poorest villages. Penalties were generally heavier for girls, but in the most
ethnically segregated minority villages, boys from minority families were
highly vulnerable to closure effects on attainment and written Mandarin
facility. Results show that intersections of minority status, gender, and
community characteristics can delineate significant heterogeneities in policy
impacts.","Fewer, better pathways for all? Intersectional impacts of rural school consolidation in China's minority regions",2022-04-04 04:02:23,"Emily Hannum, Fan Wang","http://dx.doi.org/10.1016/j.worlddev.2021.105734, http://arxiv.org/abs/2204.01196v1, http://arxiv.org/pdf/2204.01196v1",econ.GN
32496,gn,"Does technological change destroy or create jobs? New technologies may
replace human workers, but can simultaneously create jobs if workers are needed
to use these technologies or if new economic activities emerge. Furthermore,
technology-driven productivity growth may increase disposable income,
stimulating a demand-induced expansion of employment. To synthesize the
existing knowledge on this question, we systematically review the empirical
literature on the past four decades of technological change and its impact on
employment, distinguishing between five broad technology categories (ICT,
Robots, Innovation, TFP-style, Other). Overall, we find across studies that the
labor-displacing effect of technology appears to be more than offset by
compensating mechanisms that create or reinstate labor. This holds for most
types of technology, suggesting that previous anxieties over widespread
technology-driven unemployment lack an empirical base, at least so far.
Nevertheless, low-skill, production, and manufacturing workers have been
adversely affected by technological change, and effective up- and reskilling
strategies should remain at the forefront of policy making along with targeted
social support systems.",Technology and jobs: A systematic literature review,2022-04-04 11:10:56,"Kerstin Hötte, Melline Somers, Angelos Theodorakopoulos","http://arxiv.org/abs/2204.01296v1, http://arxiv.org/pdf/2204.01296v1",econ.GN
32497,gn,"Recent estimates are that about 150 million children under five years of age
are stunted, with substantial negative consequences for their schooling,
cognitive skills, health, and economic productivity. Therefore, understanding
what determines such growth retardation is significant for designing public
policies that aim to address this issue. We build a model for nutritional
choices and health with reference-dependent preferences. Parents care about the
health of their children relative to some reference population. In our
empirical model, we use height as the health outcome that parents target.
Reference height is an equilibrium object determined by earlier cohorts'
parents' nutritional choices in the same village. We explore the exogenous
variation in reference height produced by a protein-supplementation experiment
in Guatemala to estimate our model's parameters. We use our model to decompose
the impact of the protein intervention on height into price and reference-point
effects. We find that the changes in reference points account for 65% of the
height difference between two-year-old children in experimental and control
villages in the sixth annual cohort born after the initiation of the
intervention.",You are what your parents expect: Height and local reference points,2022-04-05 05:04:29,"Fan Wang, Esteban Puentes, Jere R. Behrman, Flávio Cunha","http://dx.doi.org/10.1016/j.jeconom.2021.09.020, http://arxiv.org/abs/2204.01933v1, http://arxiv.org/pdf/2204.01933v1",econ.GN
32587,gn,"We present the first global analysis of the impact of the April 2022 cuts to
the future pensions of members of the Universities Superannuation Scheme. For
the 196,000 active members, if Consumer Price Inflation (CPI) remains at its
historic average of 2.5%, the distribution of the range of cuts peaks at
30%-35%. This peak shifts to cuts of 40%-45% if CPI averages 3.0%. The global
loss across current USS scheme members, in today's money, is calculated to be
16-18 billion GBP, with most of the 71,000 staff under the age of 40 losing
between 100k-200k GBP each, for CPI averaging 2.5%-3.0%. A repeated claim made
during the formal consultation by the body representing university management
(Universities UK) that those earning under 40k GBP would receive a ""headline""
cut of 12% to their future pension is shown to be a serious underestimate for
realistic CPI projections.",The distribution of loss to future USS pensions due to the UUK cuts of April 2022,2022-06-13 17:33:18,"Jackie Grant, Mark Hindmarsh, Sergey E. Koposov","http://arxiv.org/abs/2206.06201v1, http://arxiv.org/pdf/2206.06201v1",econ.GN
32498,gn,"This paper presents a conceptual model describing the medium and long-term
co-evolution of natural and socio-economic subsystems of Earth. An economy is
viewed as an out-of-equilibrium dissipative structure that can only be
maintained with a flow of energy and matter. The distinctive approach
emphasized here consists in capturing the economic impact of natural ecosystems
being depleted and destroyed by human activities via a pinch of thermodynamic
potentials. This viewpoint allows: (i) the full-blown integration of a limited
quantity of primary resources into a non-linear macrodynamics that is
stock-flow consistent both in terms of matter-energy as well as economic
transactions; (ii) the inclusion of natural and forced recycling; (iii) the
inclusion of a friction term which reflects the impossibility of producing
goods and services in high metabolising intensity without exuding energy and
matter wastes; (iv) the computation of the anthropically produced entropy as a
function of intensity and friction. Analysis and numerical computations confirm
the role played by intensity and friction as key factors for sustainability.
Our approach is flexible enough to allow for various economic models to be
embedded into our thermodynamic framework.",Macroeconomic Dynamics in a finite world: the Thermodynamic Potential Approach,2022-04-05 11:05:34,"Éric Herbert, and Gael Giraud, Aurélie Louis-Napoléon, Christophe Goupil","http://arxiv.org/abs/2204.02038v2, http://arxiv.org/pdf/2204.02038v2",econ.GN
32499,gn,"We examine effects of protein and energy intakes on height and weight growth
for children between 6 and 24 months old in Guatemala and the Philippines.
Using instrumental variables to control for endogeneity and estimating multiple
specifications, we find that protein intake plays an important and positive
role in height and weight growth in the 6-24 month period. Energy from other
macronutrients, however, does not have a robust relation with these two
anthropometric measures. Our estimates indicate that in contexts with
substantial child undernutrition, increases in protein-rich food intake in the
first 24 months can have important growth effects, which previous studies
indicate are related significantly to a range of outcomes over the life cycle.",Early life height and weight production functions with endogenous energy and protein inputs,2022-04-06 05:13:06,"Esteban Puentes, Fan Wang, Jere R. Behrman, Flávio Cunha, John Hoddinott, John A. Maluccio, Linda S. Adair, Judith B. Borja, Reynaldo Martorell, Aryeh D. Stein","http://dx.doi.org/10.1016/j.ehb.2016.03.002, http://arxiv.org/abs/2204.02542v1, http://arxiv.org/pdf/2204.02542v1",econ.GN
32500,gn,"Following the oil-price surge in the wake of Russia's invasion of Ukraine,
many countries in the EU are cutting taxes on petrol and diesel. Using standard
theory and empirical estimates, we assess how such tax cuts influence the oil
income in Russia. We find that a tax cut of 20 euro cents per liter increases
Russia's oil profits by around 11 million Euros per day in both the short run
and the long run. This is equivalent to 4,100 million Euros in a year, 0.3% of
Russia's GDP or 7% of its military spending. We show that a cash transfer to EU
citizens, with a fiscal burden equivalent to the tax cut, reduces these side
effects to a fraction.",What is the effect of EU's fuel-tax cuts on Russia's oil income?,2022-04-07 12:31:44,"Johan Gars, Daniel Spiro, Henrik Wachtmeister","http://arxiv.org/abs/2204.03318v3, http://arxiv.org/pdf/2204.03318v3",econ.GN
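A quick back-of-envelope check of the aggregation in the abstract above; the 11 million euro per day figure is taken as given from the paper, and the GDP denominator is a rough assumption used only for illustration.

```python
# Back-of-envelope check (illustrative): scale the abstract's daily figure to a year
# and relate it to an assumed GDP level. The daily gain is taken from the abstract.
daily_profit_gain_eur = 11e6                  # EUR per day, as reported in the abstract
annual_gain_eur = daily_profit_gain_eur * 365
print(f"Annual gain: {annual_gain_eur / 1e6:,.0f} million EUR")          # ~4,015, i.e. ~4,100 million

assumed_russia_gdp_eur = 1.4e12               # rough assumption, for illustration only
print(f"Share of GDP: {annual_gain_eur / assumed_russia_gdp_eur:.2%}")   # ~0.3%
```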
32501,gn,"We examine whether a company's corporate reputation gained from their CSR
activities and a company leader's reputation, one that is unrelated to his or
her business acumen, can impact economic action fairness appraisals. We provide
experimental evidence that good corporate reputation causally buffers
individuals' negative fairness judgment following the firm's decision to
profiteer from an increase in the demand. Bad corporate reputation does not
make the decision to profiteer as any less acceptable. However, there is
evidence that individuals judge as more unfair an ill-reputed firm's decision
to raise their product's price to protect against losses. Thus, our results
highlight the importance of a good reputation in protecting a firm against
severe negative judgments from making an economic decision that the public
deems unfair.",Reputation as insurance: how reputation moderates public backlash following a company's decision to profiteer,2022-04-07 16:52:53,"Danae Arroyos-Calvera, Nattavudh Powdthavee","http://arxiv.org/abs/2204.03450v1, http://arxiv.org/pdf/2204.03450v1",econ.GN
32502,gn,"A planner allocates discrete transfers of size $D_g$ to $N$ heterogeneous
groups labeled $g$ and has CES preferences over the resulting outcomes,
$H_g(D_g)$. We derive a closed-form solution for optimally allocating a fixed
budget subject to group-specific inequality constraints under the assumption
that increments in the $H_g$ functions are non-increasing. We illustrate our
method by studying allocations of ""support checks"" from the U.S. government to
households during both the Great Recession and the COVID-19 pandemic. We
compare the actual allocations to optimal ones under alternative constraints,
assuming the government focused on stimulating aggregate consumption during the
2008--2009 crisis and focused on welfare during the 2020--2021 crisis. The
inputs for this analysis are obtained from versions of a life-cycle model with
heterogeneous households, which predicts household-type-specific consumption
and welfare responses to tax rebates and cash transfers.",Optimal allocations to heterogeneous agents with an application to stimulus checks,2022-04-08 04:04:19,"Vegard M. Nygaard, Bent E. Sørensen, Fan Wang","http://dx.doi.org/10.1016/j.jedc.2022.104352, http://arxiv.org/abs/2204.03799v1, http://arxiv.org/pdf/2204.03799v1",econ.GN
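As a rough numerical analogue of the allocation problem described above, the sketch below greedily assigns discrete transfer units across groups with concave outcome functions and group-specific caps; with non-increasing increments, a greedy marginal-gain rule attains the optimum. The functions, caps, and budget are invented for illustration and are not the paper's closed-form solution or calibration.

```python
# Greedy allocation of a discrete budget across groups with concave outcomes H_g
# and group-specific caps. Illustrative only; not the paper's closed-form solution.
import heapq
import math

H = [lambda d: math.log(1 + d),        # group 0: diminishing returns
     lambda d: 2 * math.sqrt(d),       # group 1
     lambda d: min(0.5 * d, 2.0)]      # group 2: flat beyond 4 units
caps = [10, 6, 4]                      # group-specific constraints on D_g
budget = 12                            # total number of discrete transfer units

alloc = [0] * len(H)
heap = [(-(H[g](1) - H[g](0)), g) for g in range(len(H))]   # max-heap on marginal gain
heapq.heapify(heap)
for _ in range(budget):
    while heap:
        neg_gain, g = heapq.heappop(heap)
        if alloc[g] < caps[g]:
            alloc[g] += 1
            heapq.heappush(heap, (-(H[g](alloc[g] + 1) - H[g](alloc[g])), g))
            break
print(alloc)   # units allocated to each group
```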
32503,gn,"Despite rapidly-expanding academic and policy interest in the links between
natural resource wealth and development failures (commonly referred to as the
resource curse), little attention has been devoted to the psychology behind the
phenomenon. Rent-seeking and excessive reliance on mineral revenues can be
attributed largely to social psychology. Mineral booms (whether due to the
discovery of mineral reserves or to the drastic rise in commodity prices) start
as positive income shocks that can subsequently evolve into influential and
expectation-changing public and media narratives; these lead consecutively to
unrealistic demands that favor immediate consumption of accrued mineral
revenues and to the postponement of productive investment. To our knowledge,
this paper is the first empirical analysis that tests hypotheses regarding the
psychological underpinnings of resource mismanagement in mineral-rich states.
Our study relies on an extensive personal survey (of 1977 respondents) carried
out in Almaty, Kazakhstan, between May and August 2018. We find empirical
support for a positive link between exposure to news and inflated expectations
regarding mineral availability, as well as evidence that the latter can
generate preferences for excessive consumption, and hence, rent-seeking.",The Psychology of Mineral Wealth: Empirical Evidence from Kazakhstan,2022-04-08 12:15:40,"Elissaios Pappyrakis, Osiris Jorge Parcero","http://arxiv.org/abs/2204.03948v1, http://arxiv.org/pdf/2204.03948v1",econ.GN
32504,gn,"Environmental degradation is a major global problem. Its impacts are not just
environmental, but also economic, with degradation recognised as a key cause of
reduced agricultural productivity and rural poverty in the developing world.
The degradation literature typically emphasises common property or open access
natural resources, and how perverse incentives or missing institutions lead
optimising private actors to degrade them. By contrast, the present paper
considers degradation occurring on private farms in peasant communities. This
is a critical yet delicate issue, given the poverty of such areas and questions
about the role of farmers in either degrading or regenerating rural lands. The
paper examines natural resource management by peasant farmers in Tanzania. Its
key concern is how the local knowledge informing their management decisions
adapts to challenges associated with environmental degradation and market
liberalisation. Given their poverty, this question could have direct
implications for the capacity of households to meet their livelihood needs.
Based on fresh empirical data, the paper finds that differential farmer
knowledge helps explain the large differences in how households respond to the
degradation challenge. The implication is that some farmers adapt more
effectively to emerging challenges than others, despite all being rational,
optimising agents who follow the strategies they deem best. The paper thus
provides a critique of local knowledge, implying that some farmers experience
adaptation slippages while others race ahead with effective adaptations. The
paper speaks to the chronic poverty that plagues many rural communities in the
developing world. It helps explain the failure of proven sustainable
agriculture technologies to disseminate readily beyond early innovators. Its
key policy implication is to inform improved capacity building for such
communities.",Local Knowledge and Natural Resource Management in a Peasant Farming Community Facing Rapid Change: A Critical Examination,2022-04-09 08:42:17,Jules R. Siedenburg,"http://arxiv.org/abs/2204.04396v1, http://arxiv.org/pdf/2204.04396v1",econ.GN
32505,gn,"This chapter provides new evidence on educational inequality and reviews the
literature on the causes and consequences of unequal education. We document
large achievement gaps between children from different socio-economic
backgrounds, show how patterns of educational inequality vary across countries,
time, and generations, and establish a link between educational inequality and
social mobility. We interpret this evidence from the perspective of economic
models of skill acquisition and investment in human capital. The models account
for different channels underlying unequal education and highlight how
endogenous responses in parents' and children's educational investments
generate a close link between economic inequality and educational inequality.
Given concerns over the extended school closures during the Covid-19 pandemic,
we also summarize early evidence on the impact of the pandemic on children's
education and on possible long-run repercussions for educational inequality.",Educational Inequality,2022-04-10 17:44:37,"Jo Blanden, Matthias Doepke, Jan Stuhler","http://arxiv.org/abs/2204.04701v1, http://arxiv.org/pdf/2204.04701v1",econ.GN
32506,gn,"We study the effect of Chile's Employment Protection Law (Ley de Protecci\'on
del Empleo, EPL), a law which allowed temporal suspensions of job contracts in
exceptional circumstances during the COVID-19 pandemic, on the fulfillment of
firms' expectations regarding layoffs. We use monthly surveys directed at a
representative group of firms in the national territory. This panel data allows
to follow firms through time and analyze the match between their expectations
and the actual realization to model their expectation fulfilment. We model the
probability of expectation fulfilment through a logit model that allows for
moderation effects. Results suggest that for those firms that expected to fire
workers, for the firms that used the EPL, the odds they finally ended up with a
job separation are 50% of the odds for those that did not used the EPL. Small
firms increase their probability of expectation fulfilment in 11.9% when using
the EPL compared to large firms if they declared they were expecting to fire
workers.",Impact of an Employment Policy on Companies' Expectations Fulfilment,2022-04-12 06:16:53,"Javier Espinosa-Brito, Carlos Yevenes-Ortega, Gonzalo Franetovic-Guzman, Diana Ochoa-Diaz","http://arxiv.org/abs/2204.05500v1, http://arxiv.org/pdf/2204.05500v1",econ.GN
32507,gn,"Data collected through the National Expert Survey (NES) of the Global
Entrepreneurship Monitor (GEM) are widely used to assess the quality and impact
of national entrepreneurial ecosystems. By focusing on the measurement of the
National Entrepreneurship Context Index (NECI), we argue and show that the
subjective nature of the responses of the national experts precludes meaningful
cross-country analyses and cross-country rankings. Moreover, we show that the
limited precision of the NECI severely constrains the longitudinal assessment
of within-country trends. We provide recommendations for the current use of
NECI data and suggestions for future NES data collections.",A critical assessment of the National Entrepreneurship Context Index of the Global Entrepreneurship Monitor,2022-04-12 15:44:52,"Cornelius A. Rietveld, Pankaj C. Patel","http://arxiv.org/abs/2204.05749v1, http://arxiv.org/pdf/2204.05749v1",econ.GN
32508,gn,"The decarbonization of municipal and district energy systems requires
economic and ecologic efficient transformation strategies in a wide spectrum of
technical options. Especially under the consideration of multi-energy systems,
which connect energy domains such as heat and electricity supply, expansion and
operational planning of so-called decentral multi-energy systems (DMES) holds a
multiplicity of complexities. This motivates the use of optimization problems,
which reach their limitations with regard to computational feasibility in
combination with the required level of detail. With an increased focus on DMES
implementation, this problem is aggravated since, moving away from the
traditional system perspective, a user-centered, market-integrated perspective
is assumed. Besides technical concepts it requires the consideration of market
regimes, e.g. self-consumption and the broader energy sharing. This highlights
the need for DMES optimization models which cover a microeconomic perspective
under consideration of detailed technical options and energy regulation, in
order to understand the mutual technical, socio-economic, and ecological
interactions of energy policies. In this context we present a
stakeholder-oriented multi-criteria optimization model for DMES, which
addresses technical aspects, as well as market and services coverage towards a
real-world implementation. The current work bridges a gap between the required
modelling level of detail and computational feasibility of DMES expansion and
operation optimization. Model detail is achieved by the application of a hybrid
combination of mathematical methods in a nested multi-level decomposition
approach, including a Genetic Algorithm, Benders Decomposition and Lagrange
Relaxation. This also allows for distributed computation on multi-node high
performance computer clusters.",A stakeholder-oriented multi-criteria optimization model for decentral multi-energy systems,2022-04-13 20:43:22,"Nils Körber, Maximilian Röhrig, Andreas Ulbig","http://arxiv.org/abs/2204.06545v1, http://arxiv.org/pdf/2204.06545v1",econ.GN
32509,gn,"PM2.5 produced by freight trucks has adverse impacts on human health.
However, it is unknown to what extent freight trucking affects communities of
color and the total public health burden arising from the sector. Based on
spatially resolved US federal government data, we explore the geographic
distribution of freight trucking emissions and demonstrate that Black and
Hispanic populations are more likely to be exposed to elevated emissions from
freight trucks. Our results indicate that freight trucks contribute ~10% of NOx
and ~12% of CO2 emissions from all sources in the continental US. The annual
costs to human health and the environment due to NOx, PM2.5, SO2, and CO2 from
freight trucking in the US are estimated respectively to be $11B, $5.5B, $110M,
and $30B. Overall, the sector is responsible for nearly two-fifths (~$47B out
of $120B) of all transportation-related public health damages.",Environmental injustice in America: Racial disparities in exposure to air pollution health damages from freight trucking,2022-04-13 21:14:26,"Priyank Lathwal, Parth Vaishnav, M. Granger Morgan","http://arxiv.org/abs/2204.06588v2, http://arxiv.org/pdf/2204.06588v2",econ.GN
32512,gn,"Using Japanese professional chess (Shogi) players records in the novel
setting, this paper examines how and the extent to which the emergence of
technological changes influences the ageing and innate ability of players
winning probability. We gathered games of professional Shogi players from 1968
to 2019.
  The major findings are: (1) diffusion of artificial intelligence (AI) reduces
innate ability, which reduces the performance gap among same-age players; (2)
players winning rates declined consistently from 20 years and as they get
older; (3) AI accelerated the ageing declination of the probability of winning,
which increased the performance gap among different aged players; (4) the
effects of AI on the ageing declination and the probability of winning are
observed for high innate skill players but not for low innate skill ones. This
implies that the diffusion of AI hastens players retirement from active play,
especially for those with high innate abilities. Thus, AI is a substitute for
innate ability in brain-work productivity.","AI, Ageing and Brain-Work Productivity: Technological Change in Professional Japanese Chess",2022-04-17 03:12:45,"Eiji Yamamura, Ryohei Hayashi","http://arxiv.org/abs/2204.07888v1, http://arxiv.org/pdf/2204.07888v1",econ.GN
32513,gn,"Post-World War II , there was massive internal migration from rural to urban
areas in Japan. The location of Sumo stables was concentrated in Tokyo. Hence,
supply of Sumo wrestlers from rural areas to Tokyo was considered as migration.
Using a panel dataset covering forty years, specifically 1946-1985, this study
investigates how weather conditions and social networks influenced the labor
supply of Sumo wrestlers. Major findings are; (1) inclemency of the weather in
local areas increased supply of Sumo wrestlers in the period 1946-1965, (2) the
effect of the bad weather conditions is greater in the locality where large
number of Sumo wrestlers were supplied in the pre-war period, (3) neither the
occurrence of bad weather conditions nor their interactions with sumo-wrestlers
influenced the supply of Sumo wrestlers in the period 1966-1985. These findings
imply that the negative shock of bad weather conditions on agriculture in the
rural areas incentivized young individuals to be apprenticed in Sumo stables in
Tokyo. Additionally, in such situations, the social networks within Sumo
wrestler communities from the same locality are important. However, once the
share of workers in agricultural sectors became very low, this mechanism did
not work.","Bad Weather, Social Network, and Internal Migration; Case of Japanese Sumo Wrestlers 1946-1985",2022-04-17 03:20:18,Eiji Yamamura,"http://arxiv.org/abs/2204.07891v1, http://arxiv.org/pdf/2204.07891v1",econ.GN
32514,gn,"Technological innovation is one of the most important variables in the
evolution of the textile industry system. As the innovation process changes, so
does the degree of technological diffusion and the state of competitive
equilibrium in the textile industry system, and this leads to fluctuations in
the economic growth of the industry system. The fluctuations resulting from the
role of innovation are complex, irregular and imperfectly cyclical. Studying a
chaos model of innovation accumulation in the evolution of the textile industry
can provide theoretical guidance for technological innovation in the industry
and offer suggestions for the interaction between the government and textile
enterprises themselves. It
is found that reasonable government regulation parameters contribute to the
accelerated accumulation of innovation in the textile industry.",Research on the accumulation effect model of technological innovation in textile industry based on chaos theory,2022-04-13 11:38:00,Xiangtai Zuo,"http://arxiv.org/abs/2204.08340v1, http://arxiv.org/pdf/2204.08340v1",econ.GN
32515,gn,"How does the politician's reputation concern affect information provision
when the information is endogenously provided by a biased lobbyist? I develop a
model to study this problem and show that the answer depends on the
transparency design. When the lobbyist's preference is publicly known, the
politician's reputation concern induces the lobbyist to provide more
information. When the lobbyist's preference is unknown, the politician's
reputation concern may induce the lobbyist to provide less information. One
implication of the result is that given transparent preferences, the
transparency of decision consequences can impede information provision by
moderating the politician's reputational incentive.",Transparency and Policymaking with Endogenous Information Provision,2022-04-19 16:16:56,Hanzhe Li,"http://arxiv.org/abs/2204.08876v3, http://arxiv.org/pdf/2204.08876v3",econ.GN
32516,gn,"A main issue in improving public sector efficiency is to understand to what
extent public appointments are based on worker capability, instead of being
used to reward political supporters (patronage). I contribute to a recent
literature documenting patronage in public sector employment by establishing
what type of workers benefit the most from political connections. Under the
(empirically supported) assumption that in close elections the result of the
election is as good as random, I estimate a causal forest to identify
heterogeneity in the conditional average treatment effect of being affiliated
to the party of the winning mayor. Contrary to previous literature, for most
positions we find positive selection on education, but a negative selection on
(estimated) ability. Overall, unemployed workers or low tenure employees that
are newly affiliated to the winning candidate's party benefit the most from
political connections, suggesting that those are used for patronage.",Who Benefits from Political Connections in Brazilian Municipalities,2022-04-20 16:34:26,Pedro Forquesato,"http://arxiv.org/abs/2204.09450v1, http://arxiv.org/pdf/2204.09450v1",econ.GN
32517,gn,"Artificial Intelligence (AI) is often defined as the next general purpose
technology (GPT) with profound economic and societal consequences. We examine
how strongly four patent AI classification methods reproduce the GPT-like
features of (1) intrinsic growth, (2) generality, and (3) innovation
complementarities. Studying US patents from 1990-2019, we find that the four
methods (keywords, scientific citations, WIPO, and USPTO approach) vary in
classifying between 3-17% of all patents as AI. The keyword-based approach
demonstrates the strongest intrinsic growth and generality despite identifying
the smallest set of AI patents. The WIPO and science approaches generate each
GPT characteristic less strikingly, whilst the USPTO set with the largest
number of patents produces the weakest features. The lack of overlap and
heterogeneity between all four approaches emphasises that the evaluation of AI
innovation policies may be sensitive to the choice of classification method.",Exploring Artificial Intelligence as a General Purpose Technology with Patent Data -- A Systematic Comparison of Four Classification Approaches,2022-04-21 20:39:25,"Kerstin Hötte, Taheya Tarannum, Vilhelm Verendel, Lauren Bennett","http://arxiv.org/abs/2204.10304v1, http://arxiv.org/pdf/2204.10304v1",econ.GN
32518,gn,"This study examined the relationship between trade facilitation and economic
growth among the middle-income countries from 2010 to 2020 using 94 countries
made up of 48 lower-middle-income countries and 46 upper-middle-income
countries. The study utilized both difference and system Generalised Method of
Moments (GMM) since the cross-sections (N) were greater than the periods (T).
The study found that container port traffic and the quality of trade- and
transport-related infrastructure have a strong influence on imports and exports
of goods and on national income, while trade tariffs hurt the growth of these
countries. The study also found that most of the trade facilitation indicators
had a weak positive influence on trade flows and economic growth. Based on
these findings, the study recommends that reforms aimed at significantly
lowering the costs of trading across borders among middle-income countries be
highly prioritized in policy formulation, with a focus on the export side by
reducing at-the-border documentation, time, and real costs of trading across
borders. International organizations should continue to report the set of Trade
Facilitation Indicators (TFIs) that identify areas for action and enable the
potential impact of reforms to be assessed.",Trade Facilitation and Economic Growth Among Middle-Income Countries,2022-04-23 18:16:36,Victor Ushahemba Ijirshar,"http://arxiv.org/abs/2204.11088v1, http://arxiv.org/pdf/2204.11088v1",econ.GN
32519,gn,"This paper documents changes in retirement saving patterns at the onset of
the COVID-19 pandemic. We construct a large panel of U.S. tax data, including
tens of millions of person-year observations, and measure retirement savings
contributions and withdrawals. We use these data to document several important
changes in retirement savings patterns during the pandemic relative to the
years preceding the pandemic or the Great Recession. First, unlike during the
Great Recession, contributions to retirement savings vehicles did not
meaningfully decline. Second, driven by the suspension of required minimum
distribution rules, IRA withdrawals substantially declined in 2020 for those
older than age 72. Third, potentially driven by partial suspension of the early
withdrawal penalty, employer-plan withdrawals increased for those under age 60.",Changes in Retirement Savings During the COVID Pandemic,2022-04-26 17:58:11,"Elena Derby, Lucas Goodman, Kathleen Mackie, Jacob Mortenson","http://arxiv.org/abs/2204.12359v2, http://arxiv.org/pdf/2204.12359v2",econ.GN
32520,gn,"I develop and estimate a dynamic equilibrium model of risky entrepreneurs'
borrowing and savings decisions incorporating both formal and local-informal
credit markets. Households have access to an exogenous formal credit market and
to an informal credit market in which the interest rate is endogenously
determined by the local demand and supply of credit. I estimate the model via
Simulated Maximum Likelihood using Thai village data during an episode of
formal credit market expansion. My estimates suggest that a 49 percent
reduction in fixed costs increased the proportion of households borrowing
formally by 36 percent, and that a doubling of the collateralized borrowing
limits lowered informal interest rates by 24 percent. I find that more
productive households benefited from the policies that expanded borrowing
access, but less productive households lost in terms of welfare due to
diminished savings opportunities. Gains are overall smaller than would be
predicted by models that do not consider the informal credit market.",An empirical equilibrium model of formal and informal credit markets in developing countries,2022-04-26 18:13:04,Fan Wang,"http://dx.doi.org/10.1016/j.red.2021.09.001, http://arxiv.org/abs/2204.12374v1, http://arxiv.org/pdf/2204.12374v1",econ.GN
32521,gn,"Explaining changes in bitcoin's price and predicting its future have been the
foci of many research studies. In contrast, far less attention has been paid to
the relationship between bitcoin's mining costs and its price. One popular
notion is that the cost of bitcoin creation provides a support level below which
this cryptocurrency's price should never fall because if it did, mining would
become unprofitable and threaten the maintenance of bitcoin's public ledger.
Other research has used mining costs to explain or forecast bitcoin's price
movements. Competing econometric analyses have debunked this idea, showing that
changes in mining costs follow changes in bitcoin's price rather than preceding
them, but the reason for this behavior remains unexplained in these analyses.
This research aims to employ economic theory to explain why econometric studies
have failed to predict bitcoin prices and why mining costs follow movements in
bitcoin prices rather than precede them. We do so by explaining the chain of
causality connecting a bitcoin's price to its mining costs.",The Price and Cost of Bitcoin,2022-04-27 20:57:55,"John E. Marthinsen, Steven R. Gordon","http://dx.doi.org/10.1016/j.qref.2022.04.003, http://arxiv.org/abs/2204.13102v1, http://arxiv.org/pdf/2204.13102v1",econ.GN
32522,gn,"We characterize optimal policies in a multidimensional nonlinear taxation
model with bunching. We develop an empirically relevant model with cognitive
and manual skills, firm heterogeneity, and labor market sorting. The analysis
of optimal policy is based on two main results. We first derive an optimality
condition - a general ABC formula - that states that the entire schedule of
benefits of taxes second-order stochastically dominates the entire schedule of
tax distortions. Second, we use Legendre transforms to represent our problem as
a linear program. This linearization allows us to solve the model
quantitatively and to precisely characterize the regions and patterns of
bunching. At an optimum, 9.8 percent of workers are bunched both locally and
nonlocally. We introduce two notions of bunching - blunt bunching and targeted
bunching. Blunt bunching constitutes 30 percent of all bunching, occurs at the
lowest regions of cognitive and manual skills, and lumps the allocations of
these workers resulting in a significant distortion. Targeted bunching
constitutes 70 percent of all bunching and recognizes the workers' comparative
advantage. The planner separates workers on their dominant skill and bunches
them on their weaker skill, thus mitigating distortions along the dominant
skill dimension. Tax wedges are particularly high for low skilled workers who
are bluntly bunched and are also high along the dimension of comparative
disadvantage for somewhat more skilled workers who are targetedly bunched.",Bunching and Taxing Multidimensional Skills,2022-04-28 16:14:41,"Job Boerma, Aleh Tsyvinski, Alexander P. Zimin","http://arxiv.org/abs/2204.13481v1, http://arxiv.org/pdf/2204.13481v1",econ.GN
32523,gn,"This paper empirically analyzes how individual characteristics are associated
with risk aversion, loss aversion, time discounting, and present bias. To this
end, we conduct a large-scale demographically representative survey across
eight European countries. We elicit preferences using incentivized multiple
price lists and jointly estimate preference parameters to account for their
structural dependencies. Our findings suggest that preferences are linked to a
variety of individual characteristics such as age, gender, and income as well
as some personal values. We also report evidence on the relationship between
cognitive ability and preferences. Incentivization, stake size, and the order
of presentation of binary choices matter, underlining the importance of
controlling for these factors when eliciting economic preferences.",Individual characteristics associated with risk and time preferences: A multi country representative survey,2022-04-28 20:26:57,"Thomas Meissner, Xavier Gassmann, Corinne Faure, Joachim Schleich","http://arxiv.org/abs/2204.13664v2, http://arxiv.org/pdf/2204.13664v2",econ.GN
32524,gn,"With the global increase in online services, there is a paradigm shift from
service quality to e-service quality. In order to sustain this strategic
change, there is a need to measure and evaluate the quality of e-services.
Consequently, the paper seeks to determine the relevant e-service quality
dimensions for e-channels. The aim is to generate a concise set of dimensions
that managers can use to measure e-service quality. The paper proposed an
e-service quality model comprising seven e-service quality dimensions (website
appearance, ease of use, reliability, security, personalisation, fulfilment and
responsiveness) and overall e-service quality. The study employed a
cross-sectional research design and quantitative research approach. The data
were collected via a questionnaire from 400 e-channel users in Lagos State,
Nigeria. However, 318 copies of the questionnaire were found usable. The data
were analysed using mean, frequency, percentages, correlation and multiple
regression analysis. The results revealed that the relevant e-service quality
dimensions influencing overall e-service quality are reliability, security,
fulfilment, ease of use and responsiveness. These e-service quality dimensions
are expected to provide information for managers to evaluate and improve their
e-channel service delivery.","From Service Quality to E-Service Quality: Measurement, Dimensions and Model",2022-04-29 22:31:20,"Salome O. Ighomereho, Afolabi A. Ojo, Samuel O. Omoyele, Samuel O. Olabode","http://arxiv.org/abs/2205.00055v1, http://arxiv.org/pdf/2205.00055v1",econ.GN
32525,gn,"This paper aims to analyze the effect of Bitcoin on portfolio optimization
using mean-variance, conditional value-at-risk (CVaR), and Markov regime
switching approaches. I assessed each approach and developed the next based on
the prior approach's weaknesses until I ended with a high level of confidence
in the final approach. Though the results of the mean-variance and CVaR
frameworks indicate that Bitcoin improves the diversification of a
well-diversified international portfolio, they assume that asset returns are
linear and normally distributed. However, Bitcoin returns have neither of these
characteristics. Because of this, I developed a Markov regime switching
approach to analyze the effect of Bitcoin on international portfolio
performance. The results show that there are two regimes based on the assets'
returns: (1) a bear state, where returns have low means and high volatility,
and (2) a bull state, where returns have high means and low volatility.","Evaluating the Impact of Bitcoin on International Asset Allocation using Mean-Variance, Conditional Value-at-Risk (CVaR), and Markov Regime Switching Approaches",2022-04-30 22:46:14,Mohammadreza Mahmoudi,"http://arxiv.org/abs/2205.00335v1, http://arxiv.org/pdf/2205.00335v1",econ.GN
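A minimal sketch of the two-regime idea described above, fitting a Markov switching model with regime-specific mean and variance to a synthetic return series via statsmodels; the data and parameter values are invented and do not reproduce the paper's estimates.

```python
# Hedged sketch (synthetic returns): two-regime Markov switching model with
# switching mean and variance, in the spirit of a bull/bear classification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
bull = rng.normal(0.001, 0.01, 300)    # calm, higher-mean regime
bear = rng.normal(-0.002, 0.04, 100)   # volatile, lower-mean regime
returns = np.concatenate([bull, bear, bull])

mod = sm.tsa.MarkovRegression(returns, k_regimes=2, trend="c", switching_variance=True)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])  # regime probabilities for first dates
```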
32526,gn,"We document the age-race-gender intersectionality in the distribution of
occupational tasks in the United States. We also investigate how the task
content of work changed from the early-2000s to the late-2010s for different
age-race/ethnicity-gender groups. Using the Occupation Information Network
(O*NET) and pooled cross-sectional data from the American Community Survey
(ACS) we examine how the tasks that workers perform vary with age and over
time. We find that White men transition to occupations high in non-routine
cognitive tasks early in their careers, whereas Hispanic and Black men work
mostly in physically demanding jobs over their entire working lives. Routine
manual tasks increased dramatically for 55-67 year-old workers, except for
Asian men and women. Policymakers will soon be challenged by financial stress
on entitlement programs; reforms could have disproportionate effects on gender
and racial/ethnic groups due to inequality in the distribution of occupational
tasks.",The Distribution of Occupational Tasks in the United States: Implications for a Diverse and Aging Population,2022-05-01 19:02:23,"Samuel Cole, Zachary Cowell, John M. Nunley, R. Alan Seals Jr","http://arxiv.org/abs/2205.00497v1, http://arxiv.org/pdf/2205.00497v1",econ.GN
32527,gn,"In macroeconomics, an emerging discussion of alternative monetary systems
addresses the dimensions of systemic risk in advanced financial systems.
Monetary regime changes with the aim of achieving a more sustainable financial
system have already been discussed in several European parliaments and were the
subject of a referendum in Switzerland. However, their effectiveness and
efficacy concerning macro-financial stability are not well-known. This paper
introduces a macroeconomic agent-based model (MABM) in a novel simulation
environment to simulate the current monetary system, which may serve as a basis
to implement and analyze monetary regime shifts. In this context, the monetary
system affects the lending potential of banks and might impact the dynamics of
financial crises. MABMs are well suited to replicating emergent financial crisis
dynamics, analyze institutional changes within a financial system, and thus
measure macro-financial stability. The used simulation environment makes the
model more accessible and facilitates exploring the impact of different
hypotheses and mechanisms in a less complex way. The model replicates a wide
range of stylized economic facts, including simplifying assumptions to reduce
model complexity.",A basic macroeconomic agent-based model for analyzing monetary regime shifts,2022-05-02 11:58:03,"Florian Peters, Doris Neuberger, Oliver Reinhardt, Adelinde Uhrmacher","http://dx.doi.org/10.1371/journal.pone.0277615, http://arxiv.org/abs/2205.00752v1, http://arxiv.org/pdf/2205.00752v1",econ.GN
32528,gn,"I discuss various ways in which inference based on the estimation of the
parameters of statistical models (reduced-form estimation) can be combined with
inference based on the estimation of the parameters of economic models
(structural estimation). I discuss five basic categories of integration:
directly combining the two methods, using statistical models to simplify
structural estimation, using structural estimation to extend the validity of
reduced-form results, using reduced-form techniques to assess the external
validity of structural estimations, and using structural estimation as a sample
selection remedy. I illustrate each of these methods with examples from
corporate finance, banking, and personal finance.",Integrating Structural and Reduced-Form Methods in Empirical Finance,2022-05-02 22:35:42,Toni M. Whited,"http://arxiv.org/abs/2205.01175v1, http://arxiv.org/pdf/2205.01175v1",econ.GN
32534,gn,"I study the role of industries' position in supply chains in shaping the
transmission of final demand shocks. First, I use a shift-share design based on
destination-specific final demand shocks and destination shares to show that
shocks amplify upstream. Quantitatively, upstream industries respond to final
demand shocks up to three times as much as final goods producers. To organize
the reduced form results, I develop a tractable production network model with
inventories and study how the properties of the network and the cyclicality of
inventories interact to determine whether final demand shocks amplify or
dissipate upstream. I test the mechanism by directly estimating the
model-implied relationship between output growth and demand shocks, mediated by
network position and inventories. I find evidence of the role of inventories in
explaining heterogeneous output elasticities. Finally, I use the model to
quantitatively study how inventories and network properties shape the
volatility of the economy.","Inventories, Demand Shocks Propagation and Amplification in Supply Chains",2022-05-08 16:28:00,Alessandro Ferrari,"http://arxiv.org/abs/2205.03862v5, http://arxiv.org/pdf/2205.03862v5",econ.GN
32529,gn,"Government efforts to address child poverty commonly encompass economic
assistance programs that bolster household income. The Child Tax Credit (CTC)
is the most prominent example of this. Introduced by the United States Congress
in 1997, the program endeavors to help working parents via income
stabilization. Our work examines the extent to which the CTC has done so. Our
study, which documents clear, consistent, and compelling evidence of gender
inequity in benefits realization, yields four key findings. First, stringent
requisite income thresholds disproportionally disadvantage single mothers, a
reflection of the high concentration of this demographic in lower segments of
the income distribution. Second, married parents and, to a lesser extent,
single fathers, are the primary beneficiaries of the CTC program when benefits
are structured as credits rather than refunds. Third, making program benefits
more generous disproportionally reduces how many single mothers, relative to
married parents and single fathers, can claim this benefit. Fourth and finally,
increasing credit refundability can mitigate gender differences in relief
eligibility, although doing so imposes externalities of its own. Our findings
can inform public policy discourse surrounding the efficacy of programs like
the CTC and the effectiveness of programs aimed at alleviating child poverty.","Estimating beneficiaries of the child tax credit: past, present, and future",2022-05-03 00:23:29,"Ashley Nunes, Chung Yi See, Lucas Woodley, Nicole A. Divers, Audrey L. Cui","http://arxiv.org/abs/2205.01216v1, http://arxiv.org/pdf/2205.01216v1",econ.GN
32530,gn,"Housing expenditure tends to be sticky and costly to adjust, and makes up a
large proportion of household expenditure. Additionally, the loss of housing
can have catastrophic consequences. These specific features of housing
expenditure imply that housing stress could cause negative mental health
impacts. This research investigates the effects of housing stress on mental
health, contributing to the literature by nesting housing stress within a
measure of financial hardship, thus improving robustness to omitted variables
and creating a natural comparison group for matching. Fixed effects (FE)
regressions and a difference-in-differences (DID) methodology are estimated
utilising data from the Household Income and Labour Dynamics in Australia
(HILDA) Survey. The results show that renters who are in housing stress have a
significant decline in self-reported mental health, with those in prior
financial hardship being more severely affected. In contrast, there is little
to no evidence of housing stress impacting on owners with a mortgage. The
results also suggest that the mental health impact of housing stress is more
important than some, but not all, aspects of financial hardship.",Incorporating Financial Hardship in Measuring the Mental Health Impact of Housing Stress,2022-05-03 03:38:56,"Timothy Ludlow, Jonas Fooken, Christiern Rose, Kam Tang","http://arxiv.org/abs/2205.01255v1, http://arxiv.org/pdf/2205.01255v1",econ.GN
32531,gn,"Electrification of all economic sectors and solar photovoltaics (PV) becoming
the lowest-cost electricity generation technology in ever more regions give
rise to new potential gains of trade. We develop a stylized analytical model to
minimize unit energy cost in autarky, open it to different trade
configurations, and evaluate it empirically. We identify large potential gains
from interhemispheric and global electricity trade by combining complementary
seasonal and diurnal cycles. The corresponding high willingness to pay for
large-scale transmission suggests far-reaching political economy and regulatory
implications.",Potential gains of long-distance trade in electricity,2022-05-03 14:53:53,"Javier López Prol, Karl W. Steininger, Keith Williges, Wolf D. Grossmann, Iris Grossmann","http://arxiv.org/abs/2205.01436v1, http://arxiv.org/pdf/2205.01436v1",econ.GN
32532,gn,"How should we think of the preferences of citizens? Whereas self-optimal
policy is relatively straightforward to produce, socially optimal policy often
requires a more detailed examination. In this paper, we identify an issue that
has received far too little attention in welfarist modelling of public policy,
which we name the ""hidden assumptions"" problem. Hidden assumptions can be
deceptive because they are not expressed explicitly and the social planner
(e.g. a policy maker, a regulator, a legislator) may not give them the critical
attention they need. We argue that ethical expertise has a direct role to play
in public discourse because it is hard to adopt a position on major issues like
public health policy or healthcare prioritisation without making contentious
assumptions about population ethics. We then postulate that ethicists are best
situated to critically evaluate these hidden assumptions, and can therefore
play a vital role in public policy debates.","Utilitarianism on the front lines: COVID-19, public ethics, and the ""hidden assumption"" problem",2022-05-04 11:51:17,"Charles Shaw, Silvio Vanadia","http://dx.doi.org/10.2478/ebce-2022-0006, http://arxiv.org/abs/2205.01957v1, http://arxiv.org/pdf/2205.01957v1",econ.GN
32533,gn,"Measures of economic mobility represent aggregated values for how wealth
ranks of individuals change over time. Therefore, in certain circumstances
mobility measures may not describe the feasibility of the typical individual to
change their wealth ranking. To address this issue, we introduce mixing, a
concept from statistical physics, as a relevant phenomenon for quantifying the
ability of individuals to move across the whole wealth distribution. We display
the relationship between mixing and mobility by studying the relaxation time, a
statistical measure for the degree of mixing, in reallocating geometric
Brownian motion (RGBM). RGBM is an established model of wealth in a growing and
reallocating economy that distinguishes between a mixing and a non-mixing
wealth dynamics regime. We show that measures of mixing are inherently
connected to the concept of economic mobility: while certain individuals can
move across the distribution when wealth is a non-mixing observable, only in
the mixing case is every individual able to move across the whole wealth
distribution. There is then also a direct equivalence between measures of
mixing and the magnitude of the standard measures of economic mobility, but the
opposite is not true. Wealth dynamics are, however, best
modeled as non-mixing. Hence, measuring mobility using standard measures in a
non-mixing system may lead to misleading conclusions about the extent of
mobility across the whole distribution.",Measures of physical mixing evaluate the economic mobility of the typical individual,2022-05-05 20:24:52,Viktor Stojkoski,"http://arxiv.org/abs/2205.02800v1, http://arxiv.org/pdf/2205.02800v1",econ.GN
32552,gn,"With the rapid increase in the use of social media in the last decade, the
conspicuous consumption lifestyle within society has now been transferred to
social media. Along with this changing culture of consumption, consumers who
witness such portrayals on social media aspire to and desire the same
products and services. Given this situation, this study examines the
impact of the conspicuous consumption trend in social media on purchasing
intentions. Accordingly, the study aims to discover whether social media is
being used as a conspicuous consumption channel and whether these conspicuous
portrayals affect purchasing intentions.",The impact of conspicuous consumption in social Media on purchasing intentions,2022-05-24 15:23:50,İbrahim Halil Efendioğlu,"http://arxiv.org/abs/2205.12026v2, http://arxiv.org/pdf/2205.12026v2",econ.GN
32535,gn,"We study how firm heterogeneity and market power affect macroeconomic
fragility, defined as the probability of long slumps. We propose a theory in
which the positive interaction between firm entry, competition and factor
supply can give rise to multiple steady-states. We show that when firm
heterogeneity is large, even small temporary shocks can trigger firm exit and
make the economy spiral into a competition-driven poverty trap. We calibrate our
model to incorporate the well-documented trends of rising firm heterogeneity in
the US economy and show that they significantly increase the likelihood and
length of slow recoveries. We use our framework to study the 2008-09 recession
and show that the model can rationalize the persistent deviation of output and
most macroeconomic aggregates from trend, including the behavior of net entry,
markups and the labor share. Post-crisis cross-industry data corroborates our
proposed mechanism. We conclude by showing that firm subsidies can be powerful
in preventing long slumps and can lead to up to a 21% increase in welfare.","Firm Heterogeneity, Market Power and Macroeconomic Fragility",2022-05-08 19:19:48,"Alessandro Ferrari, Francisco Queirós","http://arxiv.org/abs/2205.03908v7, http://arxiv.org/pdf/2205.03908v7",econ.GN
32536,gn,"The article is devoted to the competitiveness analysis of Russian
institutions of higher education in international and local markets. The
methodology of research is based on generalized modified principal component
analysis. Principal components analysis has proven its efficiency in business
performance assessment. We apply a modification of this methodology to
construction of an aggregate index of university performance. The whole set of
principal components with weighting coefficients equal to the proportions of
the corresponding explained variance are utilized as an aggregate measure of
various aspects of higher education. This methodology allows us to reveal the
factors that exert a positive or negative influence on university
competitiveness. We construct a kind of objective ranking of universities in
order to estimate the current situation and prospects of higher education in
Russia. It is applicable for evaluation of public policy in higher education,
which, by inertia, aims to promote competition rather than cooperation among
universities.",Generalized modified principal components analysis of Russian universities competitiveness,2022-05-09 20:11:24,"Pavel Vashchenko, Alexei Verenikin, Anna Verenikina","http://dx.doi.org/10.18267/pr.2020.los.223.0, http://arxiv.org/abs/2205.04426v1, http://arxiv.org/pdf/2205.04426v1",econ.GN
32537,gn,"Launch behaviors are a key determinant of the orbital environment. Physical
and economic forces such as fragmentations and changing launch costs, or
policies like post-mission disposal (PMD) compliance requirements, will alter
the relative attractiveness of different orbits and lead operators to adjust
their launch behaviors. However, integrating models of adaptive launch behavior
with models of the debris environment remains an open challenge. We present a
statistical framework for integrating theoretically-grounded models of launch
behavior with evolutionary models of the low-Earth orbit (LEO) environment. We
implement this framework using data on satellite launches, the orbital
environment, launch vehicle prices, sectoral revenues, and government budgets
over 2007-2020. The data are combined with a multi-shell and multi-species
Particle-in-a-Box (PIB) model of the debris environment and a two-stage
budgeting model of commercial, civil government, and defense decisions to
allocate new launches across orbital shells. We demonstrate the framework's
capabilities in three counterfactual scenarios: unexpected fragmentation events
in highly-used regions, a sharp decrease in the cost of accessing lower parts
of LEO, and increasing compliance with 25-year PMD guidelines. Substitution
across orbits based on their evolving characteristics and the behavior of other
operators induces notable changes in the debris environment relative to models
without behavioral channels.",An integrated debris environment assessment model,2022-05-11 01:56:49,"Akhil Rao, Francesca Letizia","http://arxiv.org/abs/2205.05205v1, http://arxiv.org/pdf/2205.05205v1",econ.GN
32538,gn,"Using detailed Norwegian data on earnings and education histories, we
estimate a dynamic structural model of schooling and work decisions that
captures our data's rich patterns over the life-cycle. We validate the model
against variation in schooling choices induced by a compulsory schooling
reform. Our approach allows us to estimate the ex-ante returns to different
schooling tracks at different stages of the life-cycle and quantify the
contribution of option values. We find substantial heterogeneity in returns and
establish crucial roles for option values and re-enrollment in determining
schooling choices and the impact of schooling policies.","Sequential Choices, Option Values, and the Returns to Education",2022-05-11 15:47:44,"Manudeep Bhuller, Philipp Eisenhauer, Moritz Mendel","http://arxiv.org/abs/2205.05444v1, http://arxiv.org/pdf/2205.05444v1",econ.GN
32539,gn,"In transmission expansion planning, situations can arise in which an
expansion plan that is optimal for the system as a whole is detrimental to a
specific country in terms of its expected economic welfare. If this country is
one of the countries hosting the planned capacity expansion, it has the power
to veto the plan and thus, undermine the system-wide social optimum. To solve
this issue, welfare compensation mechanisms may be constructed that compensate
suffering countries and make them willing to participate in the expansion plan.
In the literature, welfare compensation mechanisms have been developed that
work in expectation. However, in a stochastic setting, even if the welfare
effect after compensation is positive in expectation, countries might still be
hesitant to accept the risk that the actual, realized welfare effect may be
negative in some scenarios.
  In this paper we analyze welfare compensation mechanisms in a stochastic
setting. We consider two existing mechanisms, lump-sum payments and power
purchase agreements, and we develop two novel mechanisms based on the flow
through the new transmission line and its economic value. Using a case study of
the Northern European power market, we investigate how well these mechanisms
succeed in mitigating risk for the countries involved. Using a theoretically
ideal model-based mechanism, we show that there is a significant potential for
mitigating risk through welfare compensation mechanisms. Out of the four
practical mechanisms we consider, our results indicate that a mechanism based
on the economic value of the new transmission line is most promising.",Welfare compensation in international transmission expansion planning under uncertainty,2022-05-12 12:32:04,"E. Ruben van Beesten, Ole Kristian Ådnanes, Håkon Morken Linde, Paolo Pisciella, Asgeir Tomasgard","http://arxiv.org/abs/2205.05978v1, http://arxiv.org/pdf/2205.05978v1",econ.GN
33659,gn,"In this article, we consider the problem of equilibrium price formation in an
incomplete securities market consisting of one major financial firm and a large
number of minor firms. They carry out continuous trading via the securities
exchange to minimize their cost while facing idiosyncratic and common noises as
well as stochastic order flows from their individual clients. The equilibrium
price process that balances demand and supply of the securities, including the
functional form of the price impact for the major firm, is derived endogenously
both in the market of finite population size and in the corresponding mean
field limit.",Equilibrium Price Formation with a Major Player and its Mean Field Limit,2021-02-22 06:43:34,"Masaaki Fujii, Akihiko Takahashi","http://arxiv.org/abs/2102.10756v3, http://arxiv.org/pdf/2102.10756v3",q-fin.MF
32540,gn,"Adverse conditions in early life can have consequential impacts on
individuals' health in older age. In one of the first papers on this topic,
Barker and Osmond (1986) show a strong positive relationship between infant
mortality rates in the 1920s and ischaemic heart disease in the 1970s. We go
'beyond Barker', first by showing that this relationship is robust to the
inclusion of local geographic area fixed effects, but not family fixed effects.
Second, we explore whether the average effects conceal underlying
heterogeneity: we examine if the infant mortality effect offsets or reinforces
one's genetic predisposition for heart disease. We find considerable
heterogeneity that is robust to within-area as well as within-family analyses.
Our findings show that the effects of one's early life environments mainly
affect individuals with the highest genetic risk for developing heart disease.
Put differently, in areas with the lowest infant mortality rates, the effect of
one's genetic predisposition effectively vanishes. These findings suggest that
advantageous environments can cushion one's genetic risk of developing heart
disease.",Beyond Barker: Infant Mortality at Birth and Ischaemic Heart Disease in Older Age,2022-05-12 18:35:36,"Samuel Baker, Pietro Biroli, Hans van Kippersluis, Stephanie von Hinke","http://arxiv.org/abs/2205.06161v1, http://arxiv.org/pdf/2205.06161v1",econ.GN
32541,gn,"Inventory management optimisation in a multi-period setting with dependent
demand periods requires the determination of replenishment order quantities in
a dynamic stochastic environment. Retailers are faced with uncertainty in
demand and supply for each demand period. In grocery retailing, perishable
goods without best-before-dates further amplify the degree of uncertainty due
to stochastic spoilage. Assuming a lead time of multiple days, the inventory at
the beginning of each demand period is determined jointly by the realisations
of these stochastic variables. While existing contributions in the literature
focus on the role of single components only, we propose to integrate all of
them into a joint framework, explicitly modelling demand, supply shortages, and
spoilage using suitable probability distributions learned from historic data.
As the resulting optimisation problem is analytically intractable in general,
we use a stochastic lookahead policy incorporating Monte Carlo techniques to
fully propagate the associated uncertainties in order to derive replenishment
order quantities. We develop a general inventory management framework and
analyse the benefit of modelling each source of uncertainty with an appropriate
probability distribution. Additionally, we conduct a sensitivity analysis with
respect to location and dispersion of these distributions. We illustrate the
practical feasibility of our framework using a case study on data from a
European e-grocery retailer. Our findings illustrate the importance of properly
modelling stochastic variables using suitable probability distributions for a
cost-effective inventory management process.",Dynamic Stochastic Inventory Management in E-Grocery Retailing: The Value of Probabilistic Information,2022-05-13 15:00:17,"David Winkelmann, Matthias Ulrich, Michael Römer, Roland Langrock, Hermann Jahnke","http://arxiv.org/abs/2205.06572v1, http://arxiv.org/pdf/2205.06572v1",econ.GN
32542,gn,"Two strategies for boreal forestry with goodwill in estate capitalization are
introduced. A strategy focusing on Real Estate (RE) is financially superior to
Timber Sales (TS). The feasibility of the RE requires the presence of forest
land end users in the real estate market, like insurance companies or
investment trusts, and the periodic boundary condition does not apply.
Commercial thinnings do not enter the RE strategy in a stand-level discussion.
However, they may appear in estates with a variable age structure and enable an
extension of stand rotation times.",Two strategies for boreal forestry with goodwill in capitalization,2022-05-13 19:26:03,Petri P. Karenlampi,"http://arxiv.org/abs/2205.06744v1, http://arxiv.org/pdf/2205.06744v1",econ.GN
32543,gn,"This paper empirically evaluates whether adopting a common currency has
changed the level of consumption smoothing of euro area member states. We
construct a counterfactual dataset of macroeconomic variables through the
synthetic control method. We then use the output variance decomposition of
Asdrubali, Sorensen and Yosha (1996) on both the actual and the synthetic data
to study if there has been a change in risk sharing and through which channels.
We find that the euro adoption has reduced risk sharing and consumption
smoothing. We further show that this reduction is mainly driven by the
periphery countries of the euro area who have experienced a decrease in risk
sharing through private credit.",Risk Sharing and the Adoption of the Euro,2022-05-14 11:48:43,"Alessandro Ferrari, Anna Rogantini Picco","http://arxiv.org/abs/2205.07009v1, http://arxiv.org/pdf/2205.07009v1",econ.GN
32544,gn,"Firms' innovation potential depends on their position in the R&D network. But
details on this relation remain unclear because measures to quantify network
embeddedness have been controversially discussed. We propose and validate a new
measure, coreness, obtained from the weighted k-core decomposition of the R&D
network. Using data on R&D alliances, we analyse the change of coreness for
14,000 firms over 25 years, together with their patenting activity. A regression analysis
demonstrates that coreness explains firms' R&D output by predicting future
patenting.",Network embeddedness indicates the innovation potential of firms,2022-05-16 16:41:42,"Giacomo Vaccario, Luca Verginer, Antonios Garas, Mario V. Tomasello, Frank Schweitzer","http://arxiv.org/abs/2205.07677v1, http://arxiv.org/pdf/2205.07677v1",econ.GN
32545,gn,"Emotional volatility is a human universal. Yet there has been no large-scale
scientific study of predictors of that phenomenon. Building from previous
works, which had been ad hoc and based on tiny samples, this paper reports the
first large-scale estimation of volatility in human emotional experiences. Our
study draws from a large sample of intrapersonal variation in moment-to-moment
happiness from over three million observations by 41,023 UK individuals.
Holding other things constant, we show that emotional volatility is highest
among women with children, the separated, the poor, and the young. Women
without children report substantially greater emotional volatility than men
with and without children. For any given rate of volatility, women with
children also experience more frequent extreme emotional lows than any other
socio-demographic group. Our results, which are robust to different
specification tests, enable researchers and policymakers to quantify and
prioritise different determinants of intrapersonal variability in human
emotions.","Predicting Emotional Volatility Using 41,000 Participants in the United Kingdom",2022-05-16 18:10:56,"George MacKerron, Nattavudh Powdthavee","http://arxiv.org/abs/2205.07742v2, http://arxiv.org/pdf/2205.07742v2",econ.GN
33797,th,"We study dynamic matching in exchange markets with easy- and hard-to-match
agents. A greedy policy, which attempts to match agents upon arrival, ignores
the positive externality that waiting agents generate by facilitating future
matchings. We prove that this trade-off between a ``thicker'' market and faster
matching vanishes in large markets; A greedy policy leads to shorter waiting
times, and more agents matched than any other policy. We empirically confirm
these findings in data from the National Kidney Registry. Greedy matching
achieves as many transplants as commonly-used policies (1.6% more than
monthly-batching), and shorter patient waiting times.",Matching in Dynamic Imbalanced Markets,2018-09-18 19:45:09,"Itai Ashlagi, Afshin Nikzad, Philipp Strack","http://arxiv.org/abs/1809.06824v2, http://arxiv.org/pdf/1809.06824v2",econ.TH
32546,gn,"We quantitatively explore the impact of social security reforms in Japan,
which is facing rapid aging and the highest government debt among developed
countries, using an overlapping generations model with four types of agents
distinguished by gender and employment type. We find that introducing social
security reforms without extending the retirement age raises the welfare of
future generations, while reforms with rising copayment rates for medical and
long-term care expenditures, in particular, significantly lower the welfare of
low-income groups (females and part-timers) of the current retired and working
generations. In contrast, reforms reducing the pension replacement rate lead to
a greater decline in the welfare of full-timers. The combination of these
reforms and the extension of the retirement age is expected to improve the
welfare of the current working generations by 2-9% over the level without
reforms.","The Impact of the Social Security Reforms on Welfare: Who benefits and Who loses across Generations, Gender, and Employment Type?",2022-05-17 04:17:55,"Hirokuni Iiboshi, Daisuke Ozaki","http://arxiv.org/abs/2205.08042v4, http://arxiv.org/pdf/2205.08042v4",econ.GN
32547,gn,"We elicit incomplete preferences over monetary gambles with subjective
uncertainty. Subjects rank gambles, and these rankings are used to estimate
preferences; payments are based on estimated preferences. About 40% of
subjects express incompleteness, but they do so infrequently. Incompleteness is
similar for individuals with precise and imprecise beliefs, and in an
environment with objective uncertainty, suggesting that it results from
imprecise tastes more than imprecise beliefs. When we force subjects to choose,
we observe more inconsistencies and preference reversals. Evidence suggests
there is incompleteness that is indirectly revealed -- in up to 98% of
subjects -- in addition to what we directly measure.",Revealed Incomplete Preferences,2022-05-17 21:58:47,"Kirby Nielsen, Luca Rigotti","http://arxiv.org/abs/2205.08584v3, http://arxiv.org/pdf/2205.08584v3",econ.GN
32548,gn,"The topology of production networks determines the propagation mechanisms of
local shocks and thus the co-movement of industries. As a result, we need a
more precisely defined production network to model economic growth accurately.
In this study, we analyse Leontief's input-output model from a network theory
perspective, aiming to construct a production network in such a way that it
allows the most accurate modelling of the propagation mechanisms of changes
that generate industry growth. We do this by revisiting a prevalent threshold
in the literature that determines industry-industry interdependence. Our
hypothesis is that changing the threshold changes the topological structure of
the network and the core industries to a large extent. This is significant,
because if the production network topology is not precisely defined, the
resulting internal propagation mechanisms will be distorted, and thus industry
growth modelling will not be accurate. We prove our hypothesis by examining the
network topology and centrality metrics under different thresholds on a
network derived from the US input-output accounts data for 2007 and 2012.",Topology-dependence of propagation mechanisms in the production network,2022-05-18 14:58:36,"Eszter Molnár, Dénes Csala","http://arxiv.org/abs/2205.08874v1, http://arxiv.org/pdf/2205.08874v1",econ.GN
32549,gn,"In this paper, we explore centralized and more decentral approaches to
succeed the energiewende in Germany, in the European context. We use the AnyMOD
framework to model a future renewable-based European energy system, based on a
techno-economic optimization, i.e. cost minimization with given demand,
including both investment and the subsequent dispatch of capacity. The model
includes 29 regions for European countries, and 38 NUTS-2 regions in Germany.
First, the entire energy system on the European level is optimized. Based on
these results, the electricity system for the German regions is optimized to
achieve great regional detail to analyse spatial effects. The model allows a
comparison between a stylized central scenario with high amounts of wind
offshore deployed, and a decentral scenario using mainly the existing grid, and
thus relying more on local capacities. The results reveal that the costs for the
second optimization of these two scenarios are about the same: the central
scenario is characterized by network expansion in order to transport the
electricity from the wind offshore sites, whereas the decentral scenario leads
to more photovoltaic and battery deployment closer to the areas with a high
demand for energy. Scenarios with higher energy efficiency and lower demand
projections lead to a significant reduction of investment requirements, and to
different localizations thereof.","Centralized and decentral approaches to succeed the 100% energiewende in Germany in the European context: A model-based analysis of generation, network, and storage investments",2022-05-18 19:49:57,"Mario Kendziorski, Leonard Göke, Christian von Hirschhausen, Claudia Kemfert, Elmar Zozmann","http://dx.doi.org/10.1016/j.enpol.2022.113039, http://arxiv.org/abs/2205.09066v1, http://arxiv.org/pdf/2205.09066v1",econ.GN
32550,gn,"Rejected job applicants seldom receive explanations from employers.
Techniques from Explainable AI (XAI) could provide explanations at scale.
Although XAI researchers have developed many different types of explanations,
we know little about the type of explanations job applicants want. We use a
survey of recent job applicants to fill this gap. Our survey generates three
main insights. First, the current norm of, at most, generic feedback frustrates
applicants. Second, applicants feel the employer has an obligation to provide
an explanation. Third, job applicants want to know why they were unsuccessful
and how to improve.",What Type of Explanation Do Rejected Job Applicants Want? Implications for Explainable AI,2022-05-18 08:00:44,"Matthew Olckers, Alicia Vidler, Toby Walsh","http://arxiv.org/abs/2205.09649v1, http://arxiv.org/pdf/2205.09649v1",econ.GN
32551,gn,"We study the productivity implications of R&D, capital accumulation, and
innovation output for entrants and incumbents in Estonia. First, in contrast to
developed economies, a small percentage of firms engage in formal R&D, but a
much larger percentage innovate. Second, while we find no difference in the R&D
elasticity of productivity for the entrants and incumbents, the impact of
innovation output - many of which are a result of 'doing, using and
interacting' (DUI) mode of innovation - is found to be higher for the entrants.
Entrants who innovate are 21% to 30% more productive than entrants who do not;
the corresponding figures for the incumbents are 10% to 13%. Third, despite the
adverse sectoral composition typical of catching-up economies, Estonian
incumbents, who are the primary carriers of 'scientific and
technologically-based innovative' (STI) activities, are comparable to their
counterparts in developed economies in translating STI activities into
productivity gains. Fourth, while embodied technological change through capital
accumulation is found to be more effective in generating productivity growth
than R&D, the effectiveness is higher for firms engaging in R&D. Finally, our
results suggest that certain policy recommendations for spurring productivity
growth in technologically advanced economies may not be applicable for
catching-up economies.","Productivity Implications of R&D, Innovation, and Capital Accumulation for Incumbents and Entrants: Perspectives from a Catching-up Economy",2022-05-21 12:12:18,"Jaan Masso, Amaresh K Tiwari","http://arxiv.org/abs/2205.10540v1, http://arxiv.org/pdf/2205.10540v1",econ.GN
32553,gn,"The National Institute of Health (NIH) sets postdoctoral (postdoc) trainee
stipend levels that many American institutions and investigators use as a basis
for postdoc salaries. Although salary standards are held constant across
universities, the cost of living in those universities' cities and towns vary
widely. Across non-postdoc jobs, more expensive cities pay workers higher wages
that scale with an increased cost of living. This work investigates the extent
to which postdoc wages account for cost-of-living differences. More than 27,000
postdoc salaries across all US universities are analyzed alongside measures of
regional differences in cost of living. We find that postdoc salaries do not
account for cost-of-living differences, in contrast with the broader labor
market in the same cities and towns. Despite a modest increase in income in
high cost of living areas, real (cost of living adjusted) postdoc salaries
differ by 29% ($15k 2021 USD) between the least and most expensive areas.
Cities that produce greater numbers of tenure-track faculty relative to
students, such as Boston, New York, and San Francisco, are among the most
impacted by this pay disparity. The postdoc pay gap is growing and is
well-positioned to incur a greater financial burden on economically
disadvantaged groups and contribute to faculty hiring disparities in women and
racial minorities.",American postdoctoral salaries do not account for growing disparities in cost of living,2022-05-25 19:22:38,Tim Sainburg,"http://arxiv.org/abs/2205.12892v1, http://arxiv.org/pdf/2205.12892v1",econ.GN
32554,gn,"The Reconstruction Finance Corporation and Public Works Administration loaned
50 U.S. railroads over $1.1 billion between 1932 and 1939. The government goal
was to decrease the likelihood of bond defaults and increase employment.
Bailouts had little effect on employment; instead, they increased the average
wage of their employees. Bailouts reduced leverage, but did not significantly
impact bond default. Overall, bailing out railroads had little effect on their
stock prices, but resulted in an increase in their bond prices and reduced the
likelihood of ratings downgrades. We find some evidence that manufacturing
firms located close to railroads benefited from bailout spillovers.",Railroad Bailouts in the Great Depression,2022-05-25 22:26:55,"Lyndon Moore, Gertjan Verdickt","http://arxiv.org/abs/2205.13025v4, http://arxiv.org/pdf/2205.13025v4",econ.GN
32555,gn,"The authors hypothesised that export develops in the network of business
collaborations that are embedded in migration status. Specifically, collaborative
networking positively affects export performance, and immigrant entrepreneurs
enjoy higher collaborative networking than native entrepreneurs due to their
advantage of being embedded in both the home and the host country. Moreover, the
advantage of being an immigrant promotes the benefits of collaborative
networking for export compared to those of native entrepreneurs. A total of
47,200 entrepreneurs starting, running and owning firms in 71 countries were
surveyed by Global Entrepreneurship Monitor and analysed through the
hierarchical linear modelling technique. Collaborative networking facilitated
export, and migration status influenced entrepreneur networking, in that
immigrant entrepreneurs had a higher level of collaborative networking than
native entrepreneurs. Consequently, immigrant entrepreneurs seemed to have
benefited from their network collaborations more than their native counterparts
did. This study sheds light on how immigrant entrepreneur network
collaborations can be effective for their exporting.",Immigrant and native export benefiting from business collaborations: a global study,2022-05-26 09:01:48,"Shayegheh Ashourizadeh, Mehrzad Saeedikiya","http://dx.doi.org/10.1504/EJIM.2020.10022504, http://arxiv.org/abs/2205.13171v1, http://arxiv.org/pdf/2205.13171v1",econ.GN
32556,gn,"We investigate the effect of technology adoption on competition by leveraging
a unique dataset on production, costs, and asset characteristics for North Sea
upstream oil & gas companies. Relying on heterogeneity in the geological
suitability of fields and a landmark decision of the Norwegian Supreme Court
that increased the returns of capital investment in Norway relative to the UK,
we show that technology adoption increases market concentration. Firms with
prior technology-specific know-how specialize more in fields suitable for the
same technology but also invest more in high-risk-high-return fields (e.g.,
ultra-deep recovery), diversifying their technology portfolio and ultimately
gaining larger shares of the North Sea market. Our analyses illustrate how
technology adoption can lead to market concentration both directly through
specialization and indirectly via experimentation.",Innovation Begets Innovation and Concentration: The Case of Upstream Oil & Gas in the North Sea,2022-05-26 09:49:57,"Michele Fioretti, Alessandro Iaria, Aljoscha Janssen, Clément Mazet-Sonilhac, Robert K. Perrons","http://arxiv.org/abs/2205.13186v2, http://arxiv.org/pdf/2205.13186v2",econ.GN
32557,gn,"The aim of the study is to determine the effect of the brand on purchasing
decision on generation Y. For this purpose, a face-to-face survey was conducted
with 231 people in the Y age range who have purchased mobile phones in the last
year. The study was conducted with young academicians and university students
working in Harran University Vocational High Schools. The collected data were
analysed with AMOS and SPSS statistical package programs using structural
equation modelling. According to the results of the research, the prestige,
name and reliability of the brand have a significant impact on the purchase
decision of Generation Y. The price, advertising campaigns and technical
services of the brand also affect this decision, but to a lesser extent. On the other hand, the
place where mobile phone brands are produced and the importance they attach to
social responsibility projects do not affect the purchasing decision.",The effect of the brand in the decision to purchase the mobile phone a research on Y generation consumers,2022-05-25 14:56:56,"İbrahim Halil Efendioğlu, Adnan Talha Mutlu, Yakup Durmaz","http://arxiv.org/abs/2205.13367v2, http://arxiv.org/pdf/2205.13367v2",econ.GN
32563,gn,"Recently enacted regulations aimed to enhance retail investors' understanding
about different types of investment accounts. Toward this goal, the Securities
and Exchange Commission (SEC) mandated that SEC-registered investment advisors
and broker-dealers provide a brief relationship summary (Form CRS) to retail
investors. The present study examines the impact of this regulation on
investors and considers its market implications. The effects of Form CRS were
evaluated based on three outcome variables: perceived helpfulness,
comprehension, and decision making. The study also examined whether personal
characteristics, such as investment experience, influenced the disclosure's
impact on decision making. Results indicated that participants perceived the
disclosure as helpful and it significantly enhanced comprehension about the two
types of investment accounts. Critically, participants also showed increased
preference and choice for broker-dealers after the disclosure. Increased
preference for broker-dealers was associated with greater investment
experience, greater comprehension gains, and access to more information from a
longer disclosure. These findings suggest that Form CRS may promote informed
decision making among retail investors while simultaneously increasing the
selection of broker-dealer accounts.",Disclosure of Investment Advisor and Broker-Dealer Relationships: Impact on Comprehension and Decision Making,2022-05-31 23:58:36,"Xiaoqing Wan, Nichole R. Lighthall","http://arxiv.org/abs/2206.00117v2, http://arxiv.org/pdf/2206.00117v2",econ.GN
32558,gn,"In the past decade, summer wildfires have become the norm in California, and
the United States of America. These wildfires have a variety of causes. The
state collects wildfire funds to help the impacted customers; however, the
funds are available only under certain conditions and are collected uniformly
throughout California. The overall idea of this project is therefore to obtain
quantitative results on how electrical corporations cause wildfires and how
they can collect wildfire funds or charge customers fairly so as to maximize
the social impact. The project proposes to quantify, in dollars, the wildfire
risk associated with vegetation and with power lines. It thereby addresses the
problem of collecting wildfire funds associated with each location and of
incorporating energy prices so that customers are charged according to the
wildfire risk of their location, maximizing the social surplus. The findings
help to calculate a location-specific risk premium for wildfire risk and to
incorporate that risk into pricing. The research also contributes towards
detecting utility-related wildfire risk in power lines, which can help prevent
wildfires by controlling the line flows of the system. Ultimately, the goal is
a social benefit: saving money for the electrical corporations and their
customers in California, who currently pay a flat Wildfire Fund charge of
$0.00580/kWh each month. The proposal therefore puts forward a new method to
collect the wildfire fund with maximum customer surplus for future
generations.",Wildfire Modeling: Designing a Market to Restore Assets,2022-05-27 09:18:04,"Ramandeep Kaur Bagri, Yihsu Chen","http://arxiv.org/abs/2205.13773v3, http://arxiv.org/pdf/2205.13773v3",econ.GN
32559,gn,"By comparing the historical patterns of currency development, this paper
pointed out the inevitability of the development of digital currency and the
relationship between digital currency and the digital economy. With the example
of China, this paper predicts the future development trend of digital currency.
In the context of the rapid development of private cryptocurrency, China
launched the digital currency based on serving the digital economy and
committed to the globalization of the digital renminbi (RMB) and the
globalization of the digital economy. The global economy in 2022 ushered in
stagnation, and China treats digital fiat currency and the digital economy
development as a breakthrough to pursue economic transformation and new growth.
It has become one of the forefront countries with numerous experiences that can
be learned by countries around the world.",The Relationship between Digital RMB and Digital Economy in China,2022-05-28 23:15:49,"Chang Su, Wenbo Lyu, Yueting Liu","http://arxiv.org/abs/2205.14517v1, http://arxiv.org/pdf/2205.14517v1",econ.GN
32560,gn,"Metaverse is a virtual universe that combines the physical world and the
digital world. People can socialize, play games and even shop with their
avatars created in this virtual environment. Metaverse, which is growing very
fast in terms of investment, is both a profitable and risky area for consumers.
In order to enter the Metaverse for investment purposes, it is necessary to do
some research and gather information. Accordingly, the aim of this
study is to determine, from the perspective of the information adoption model,
how the quality of the information consumers obtain about the Metaverse world,
the reliability of that information, and the perceived risk affect purchase
intention. For the research, data were collected online from
495 consumers who were interested in metaverse investment. AMOS and SPSS
package programs were used in the analysis. First, descriptive statistical
analyses were performed on the basic structure of the variables. Then the
reliability and validity of the model were tested. Finally, the structural
equation model was used to test the proposed model. According to the findings,
the reliability and quality of the information affect the purchase intention
positively and significantly, while the perceived risk affects the purchase
intention negatively and significantly.",Can I invest in Metaverse? The effect of obtained information and perceived risk on purchase intention by the perspective of the information adoption model,2022-05-30 22:32:54,İbrahim Halil Efendioğlu,"http://arxiv.org/abs/2205.15398v2, http://arxiv.org/pdf/2205.15398v2",econ.GN
32561,gn,"Studies have evaluated the economic feasibility of 100% renewable power
systems using the optimization approach, but the mechanisms determining the
results remain poorly understood. Based on a simple but essential model, this
study found that the bottleneck formed by the largest mismatch between demand
and power generation profiles determines the optimal capacities of generation
and storage and their trade-off relationship. Applying microeconomic theory,
particularly the duality of quantity and value, this study comprehensively
quantified the relationships among the factor cost of technologies, their
optimal capacities, and total system cost. Using actual profile data for
multiple years/regions in Japan, this study demonstrated that hybrid systems
comprising cost-competitive multiple renewable energy sources and different
types of storage are critical for the economic feasibility of any profile.",Economics of 100% renewable power systems,2022-05-31 01:34:40,Takuya Hara,"http://arxiv.org/abs/2205.15451v1, http://arxiv.org/pdf/2205.15451v1",econ.GN
32562,gn,"Sustainability reporting enables investors to make informed decisions and is
hoped to facilitate the transition to a green economy. The European Union's
taxonomy regulation enacts rules to discern sustainable activities and
determine the resulting green revenue, whose disclosure is mandatory for many
companies. In an experiment, we explore how this standardized metric is
received by investors relative to a sustainability rating. We find that green
revenue affects the investment probability more than the rating if the two
metrics disagree. If they agree, a strong rating has an incremental effect on
the investment probability. The effects are robust to variation in investors'
attitudes. Our findings imply that a mandatory standardized sustainability
metric is an effective means of channeling investment, which complements rather
than substitutes sustainability ratings.",Mandatory Disclosure of Standardized Sustainability Metrics: The Case of the EU Taxonomy Regulation,2022-05-31 10:31:23,"Marvin Nipper, Andreas Ostermaier, Jochen Theis","http://arxiv.org/abs/2205.15576v1, http://arxiv.org/pdf/2205.15576v1",econ.GN
32629,gn,"The Journal of Economic Literature codes classification system (JEL)
published by the American Economic Association (AEA) is the de facto standard
classification system for research literature in economics. The JEL
classification system is used to classify articles, dissertations, books, book
reviews, and working papers in EconLit, a database maintained by the AEA. Over
time, it has evolved and extended to a system with over 850 subclasses. This
paper reviews the history and development of the JEL classification system,
describes the current version, and provides a selective overview of its uses
and applications in research. The JEL codes classification system has been
adopted by several publishers, and their instructions are reviewed. There are
interesting avenues for future research as the JEL classification system has
been surprisingly little used in existing bibliometric and scientometric
research as well as in library classification systems.",Journal of Economic Literature codes classification system (JEL),2022-07-13 12:33:42,Jussi T. S. Heikkila,"http://arxiv.org/abs/2207.06076v1, http://arxiv.org/pdf/2207.06076v1",econ.GN
32564,gn,"Unemployment is one of the most important issues in every country. Tourism
industry is a dynamic sector which is labor augmented and can create jobs,
increase consumption expenditures and offer employment opportunities. In the
analysis of this work, the empirical literature on this issue is presented,
which indicates that tourism can play a vital and beneficial role to increase
employment. This paper uses meta-analysis techniques to investigate the effect
of tourism on employment, and finds that the vast majority of the studies
reveal a positive relationship. The mean effect of the 36 studies of the
meta-sample, using partial correlations, is 0.129. This effect is similar to
the regression results, which are around 0.9, positive and statistically
significant, clearly indicating that tourism is effective in the creation of
jobs and offers employment opportunities. Moreover, evidence of selection bias
in the literature in favor of studies which produce positive estimates is
found, and once this bias is corrected the true effect is 0.85-0.97.",Preliminary Results on the Employment Effect of Tourism. A meta-analysis,2022-06-01 04:33:17,Giotis Georgios,"http://arxiv.org/abs/2206.00174v1, http://arxiv.org/pdf/2206.00174v1",econ.GN
32565,gn,"Faced with increasingly severe environmental problems, carbon trading markets
and related financial activities aiming at limiting carbon dioxide emissions
are booming. Considering the complexity and urgency of the carbon market, it is
necessary to construct an effective evaluation index system. This paper
selected a carbon finance index as a composite indicator. Taking Beijing,
Shanghai, and Guangdong as examples, we adopted the classic method of multiple
criteria decision analysis (MCDA) to analyze the composite indicator. Potential
impact factors were screened extensively and calculated through normalization,
weighting by the coefficient of variation, and different aggregation methods.
Using the Shannon-Spearman measure, the method with the least loss of
information was used to obtain the carbon finance index (CFI) of the pilot
areas. Through panel model analysis, we found that company size, the number of
patents per 10,000 people and the proportion of new energy generation were the
factors with significant influence. Based on the research, corresponding
suggestions were put forward for different market entities. Hopefully, this
research will contribute to the steady development of the national carbon
market.",Measurement of carbon finance level and exploration of its influencing factors,2022-06-01 05:24:24,"Peng Zhang, Yuwei Zhang, Nuo Xu","http://arxiv.org/abs/2206.00189v1, http://arxiv.org/pdf/2206.00189v1",econ.GN
32566,gn,"In spring 2022, the German federal government agreed on a set of measures
that aim at reducing households' financial burden resulting from a recent price
increase, especially in energy and mobility. These measures include among
others, a nation-wide public transport ticket for 9 EUR per month and a fuel
tax cut that reduces fuel prices by more than 15%. In transportation research,
this is an almost unprecedented behavioral experiment. It allows us to study not
only behavioral responses in mode choice and induced demand but also to assess
the effectiveness of transport policy instruments. We observe this natural
experiment with a three-wave survey and an app-based travel diary on a sample
of hundreds of participants as well as an analysis of traffic counts. In this
first report, we inform about the study design, recruiting and initial
participation of study participants.","A nation-wide experiment: fuel tax cuts and almost free public transport for three months in Germany -- Report 1 Study design, recruiting and participation",2022-06-01 14:02:58,"Allister Loder, Fabienne Cantner, Lennart Adenaw, Markus Siewert, Sebastian Goerg, Markus Lienkamp, Klaus Bogenberger","http://arxiv.org/abs/2206.00396v1, http://arxiv.org/pdf/2206.00396v1",econ.GN
32567,gn,"Estimating the influence of religion on innovation is challenging because of
both complexity and endogeneity. In order to untangle these issues, we use
several measures of religiosity, adopt an individual-level approach to
innovation and employ the instrumental variables method. We analyse the effect
of religiosity on individual attitudes that are either favourable or
unfavourable to innovation, presenting an individual's propensity to innovate.
We instrument one's religiosity with the average religiosity of people of the
same sex, age range, and religious affiliation who live in countries with the
same dominant religious denomination. The results strongly suggest that each
measure of religiosity has a somewhat negative effect on innovation attitudes.
The diagnostic test results and sensitivity analyses support the main findings.
We propose three causality channels from religion to innovation: time
allocation, the fear of uncertainty, and conventional roles reinforced by
religion.",Religiosity and Innovation Attitudes: An Instrumental Variables Analysis,2022-06-01 17:10:09,"Duygu Buyukyazici, Francesco Serti","http://arxiv.org/abs/2206.00509v2, http://arxiv.org/pdf/2206.00509v2",econ.GN
32568,gn,"Recent studies have found evidence of a negative association between economic
complexity and inequality at the country level. Moreover, evidence suggests
that sophisticated economies tend to outsource products that are less desirable
(e.g. in terms of wage and inequality effects), and instead focus on complex
products requiring networks of skilled labor and more inclusive institutions.
Yet the negative association between economic complexity and inequality on a
coarse scale could hide important dynamics at a fine-grained level. Complex
economic activities are difficult to develop and tend to concentrate spatially,
leading to 'winner-take-most' effects that spur regional inequality in
countries. Large, complex cities tend to attract both high- and low-skills
activities and workers, and are also associated with higher levels of
hierarchies, competition, and skill premiums. As a result, the association
between complexity and inequality reverses at regional scales; in other words,
more complex regions tend to be more unequal. Ideas from polarization theories,
institutional changes, and urban scaling literature can help to understand this
paradox, while new methods from economic complexity and relatedness can help
identify inclusive growth constraints and opportunities.",Economic complexity and inequality at the national and regional level,2022-06-02 04:15:08,"Dominik Hartmann, Flavio L. Pinheiro","http://arxiv.org/abs/2206.00818v2, http://arxiv.org/pdf/2206.00818v2",econ.GN
32579,gn,"We study a behavioral SIR model with time-varying costs of distancing. The
two main causes of the variation in the cost of distancing we explore are
distancing fatigue and public policies (lockdowns). We show that for a second
wave of an epidemic to arise, a steep increase in distancing cost is necessary.
Distancing fatigue cannot increase the distancing cost sufficiently fast to
create a second wave. However, public policies that discontinuously affect the
distancing cost can create a second wave. With that in mind, we characterize
the largest change in the distancing cost (due to, for example, lifting a
public policy) that will not cause a second wave. Finally, we provide a
numerical analysis of public policies under distancing fatigue and show that a
strict lockdown at the beginning of an epidemic (as, for example, recently in
China) can lead to unintended adverse consequences. When the policy is lifted
the disease spreads very fast due to the accumulated distancing fatigue of the
individuals causing high prevalence levels.",Time-varying Cost of Distancing: Distancing Fatigue and Lockdowns,2022-06-08 15:41:23,"Christoph Carnehl, Satoshi Fukuda, Nenad Kos","http://arxiv.org/abs/2206.03847v2, http://arxiv.org/pdf/2206.03847v2",econ.GN
32569,gn,"Competition in the Information Technology Outsourcing (ITO) and Business
Process Outsourcing (BPO) industry is increasingly moving from being motivated
by cost savings towards strategic benefits that service providers can offer to
their clients. Innovation is one such benefit that is expected nowadays in
outsourcing engagements. The rising importance of innovation has been noticed
and acknowledged not only in the Information Systems (IS) literature, but also
in other management streams such as innovation and strategy. However, to date,
these individual strands of research remain largely isolated from each other.
Our theoretical review addresses this gap by consolidating and analyzing
research on strategic innovation in the ITO and BPO context. The article set
includes 95 papers published between 1998 and 2020 in outlets from the IS and
related management fields. We craft a four-phase framework that integrates
prior insights about (1) the antecedents of the decision to pursue strategic
innovation in outsourcing settings; (2) arrangement options that facilitate
strategic innovation in outsourcing relationships; (3) the generation of
strategic innovations; and (4) realized strategic innovation outcomes, as
assessed in the literature. We find that the research landscape to date is
skewed, with many studies focusing on the first two phases. The last two phases
remain relatively uncharted. We also discuss how innovation-oriented
outsourcing insights compare with established research on cost-oriented
outsourcing engagements. Finally, we offer directions for future research.",Strategic Innovation Through Outsourcing: A Theoretical Review,2022-06-02 13:55:47,"Marfri Gambal, Aleksandre Asatiani, Julia Kotlarsky","http://dx.doi.org/10.1016/j.jsis.2022.101718, http://arxiv.org/abs/2206.00982v1, http://arxiv.org/pdf/2206.00982v1",econ.GN
32570,gn,"This paper shows that the bunching of wages at round numbers is partly driven
by firm coarse wage-setting. Using data from over 200 million new hires in
Brazil, I first establish that contracted salaries tend to cluster at round
numbers. Then, I show that firms that tend to hire workers at round-numbered
salaries are less sophisticated and have worse market outcomes. Next, I develop
a wage-posting model in which optimization costs lead to the adoption of coarse
rounded wages and provide evidence supporting two model predictions using two
research designs. Finally, I examine some consequences of coarse wage-setting
for relevant economic outcomes.",Coarse Wage-Setting and Behavioral Firms,2022-06-02 18:56:31,Germán Reyes,"http://arxiv.org/abs/2206.01114v4, http://arxiv.org/pdf/2206.01114v4",econ.GN
32571,gn,"We estimate the causal effect of external debt (ED) on greenhouse gas (GHG)
emissions in a panel of 78 emerging market and developing economies (EMDEs)
over the 1990-2015 period. Unlike previous literature, we use external
instruments to address the potential endogeneity in the relationship between ED
and GHG emissions. Specifically, we use international liquidity shocks as
instrumental variables for ED. We find that dealing with the potential
endogeneity problem brings about a positive and statistically significant
effect of ED on GHG emissions: a 1 percentage point (pp.) rise in ED causes, on
average, a 0.7% increase in GHG emissions. Moreover, when disaggregating ED
between public and private indebtedness, instrumental variable estimates allow
us to find a positive and statistically significant effect of external public
debt on GHG emissions. Finally, disaggregation by type of creditor reveals that
the external public debt from bilateral (private) creditors causes, on average,
a 1.8% (1.2%) increase in GHG emissions. On the contrary, the instrument used
is not relevant in the case of multilateral lending so we have not found any
significant effect of external public debt from IMF creditors or other
multilateral creditors on environmental damage.",The Effect of External Debt on Greenhouse Gas Emissions,2022-06-04 01:22:39,"Jorge Carrera, Pablo de la Vega","http://arxiv.org/abs/2206.01840v1, http://arxiv.org/pdf/2206.01840v1",econ.GN
32572,gn,"Car-sharing services have been providing short-term car access to their
users, contributing to sustainable urban mobility and generating positive
societal and often environmental impacts. As car-sharing business models vary,
it is important to understand what features drive the attraction and retention
of its members in different contexts. For that, it is essential to examine
individuals' preferences for subscriptions to different business models and what
they perceive as most relevant, as well as understand what could be attractive
incentives. This study aims precisely to examine individuals' preferences for
the subscription of different car-sharing services in different cities. We
designed a stated preference experiment and collected data from three different
urban car-sharing settings, namely Copenhagen, Munich, and Tel Aviv-Yafo. Then
a mixed logit model was estimated to uncover car-sharing plan subscription and
incentives preferences. The results improve our understanding of how both the
features of the car-sharing business model and the provision of incentives can
maintain and attract members to the system. The achieved insights pave the road
for the actual design of car-sharing business models and incentives that can be
offered by existing and future car-sharing companies in the studied or similar
cities.","Car-Sharing Subscription Preferences: The Case of Copenhagen, Munich, and Tel Aviv-Yafo",2022-06-06 12:13:49,"Mayara Moraes Monteiro, Carlos M. Lima Azevedo, Maria Kamargianni, Yoram Shiftan, Ayelet Gal-Tzur, Sharon Shoshany Tavory, Constantinos Antoniou, Guido Cantelmo","http://arxiv.org/abs/2206.02448v1, http://arxiv.org/pdf/2206.02448v1",econ.GN
32573,gn,"The objective of the study is to explore the knowledge of consumers on
nutrition, food safety and hygiene of the sampled respondents in the Sylhet
City Corporation in Bangladesh in terms of different demographic
characteristics. This study includes the different types of consumers in the
life cycle viz. baby, child, adolescent, young and old so that an overall
awareness level can be measured in the urban area of Bangladesh. The study is
confined to SCC area in which all types of respondents has been included and
findings from this study will be used generally for Bangladesh in making policy
In conducting the study the population has been divided into six group as,
Baby, child, adolescent, parental, unmarried adult young and married adult
matured. We find that the average score of awareness of food nutrition and
hygiene of unmarried adult is higher than that of married adults. The study
suggested it is needed to increase awareness in of the parents for feeding
babies. The average awareness of parents to their childs eating behavior
between 5 and 9 years is 3.36 out of 5. The awareness is around 67 percent so
we should be more careful in this regard. The average awareness adolescent food
habit is 1.89 on three points scales which about 63 percent only. Therefore,
the consciousness of adolescent has been to increase in taking food. The
average feeding styles of parents is 4.24 out of 5 to their children up to 9
years and in percentage it is 84 percent.","Assessment of Consumers Awareness of Nutrition, Food Safety and Hygiene in the Life Cycle for Sustainable Development of Bangladesh- A Study on Sylhet City Corporation",2022-06-06 19:20:46,Nazrul Islam,"http://arxiv.org/abs/2206.02719v1, http://arxiv.org/pdf/2206.02719v1",econ.GN
32574,gn,"This research attempts to explore the key research questions about what are
the different microcredit programs in Haor area in Bangladesh? And do
microcredit programs have a positive impact on livelihoods of the clients in
terms of selected social indicators viz. income, consumption, assets, net
worth, education, access to finance, social capacity, food security and
handling socks etc. in Haor area in Bangladesh? Utilizing
difference-in-difference and factor analysis, we explore the nature and terms
of conditions of available formal and informal micro-creditss in Haor region of
Bangladesh; and investigate the impact of micro-creditss on the poverty
condition of Haor people in Bangladesh. The findings showed that total income
of borrowers has been increased over non-borrowers (z=6.75) significantly.
Among the components of income, non-agricultural income has been increased
significantly on the other hand income from labor sale has been decreased
significantly. Total consumption expenditure with its heads of food and
non-food consumption of both formal borrowers and informal borrowers have been
increased over the period 2016-2019 significantly. Most of the key informants
agreed that the findings are very much consistent with prevailing condition of
micro-credits in Haor region. However, some of them raised question about the
impacts of micro-credits. They argued that there is no straightforward positive
impact of micro-credits on poverty condition of the households.",Vicious Cycle of Poverty in Haor Region of Bangladesh- Impact of Formal and Informal Credits,2022-06-06 19:27:06,Nazrul Islam,"http://arxiv.org/abs/2206.02722v1, http://arxiv.org/pdf/2206.02722v1",econ.GN
32575,gn,"The objective of this paper is to assess the impact of micro credit on the
livelihoods of the clients in the haor area of Sunamganj district, Sylhet,
Bangladesh. The major findings of the study are that 66.2 percent respondents
of borrowers and 98.7 non-borrowers are head of the family and an average 76.6
percent and among the borrowers 32 percent is husband/wife while 1.3 percent of
non-borrowers and on average 22.2. In terms of sex 64.7 percent of borrowers
and 92.5 percent of non-borrowers are male while 35.3 percent of borrowers and
7.5 percent of non-borrowers are female. The impact of micro-credit in terms of
formal and informal credit receiving households based on DID method showed that
total income, total expenditure and investment have been increased 13.57
percent, 10.39 percent and 26.17 percent. All the elements of total income have
been increased except debt which has been decreased by 2.39 percent. But the
decrease in debt is the good sign of positive impact of debt. Consumption of
food has been increased but non-food has been decreased. All the elements of
investment have been increased except some factors. The savings has been
decreased due excess increase in investment. The study suggested that for
breaking vicious cycle of poverty by micro-credit the duration of loans should
be at least five year and the volume of loans must be minimum 500,000 and
repayment should at not be less than monthly. The rate of interest should not
be more than 5 percent.",Impact of micro-credit on the livelihoods of clients -- A study on Sunamganj District,2022-06-06 19:35:05,Nazrul Islam,"http://arxiv.org/abs/2206.02798v1, http://arxiv.org/pdf/2206.02798v1",econ.GN
32576,gn,"The largest 6,529 international corporations are accountable for almost 30%
of global CO2e emissions. A growing awareness of the role of the corporate
world in the path toward sustainability has led many shareholders and
stakeholders to pursue increasingly stringent and ambitious environmental
goals. However, how to assess the corporate environmental performance
objectively and efficiently remains an open question. This study reveals
underlying dynamics and structures that can be used to construct a unified
quantitative picture of the environmental impact of companies. This study shows
that the environmental impact (metabolism) of companies, namely CO2e emissions,
energy used, water withdrawal and waste production, scales with their size
according to a simple power law which is often sublinear, and can be used to
derive a sector-specific, size-dependent benchmark to assess unambiguously a
company's environmental performance. Enforcing such a benchmark would
potentially result in a 15% emissions reduction, but a fair and effective
environmental policy should consider the size of the corporation and the
super- or sublinear nature of the scaling relationship.",Scaling laws in global corporations as a benchmarking approach to assess environmental performance,2022-06-07 12:41:25,"Rossana Mastrandrea, Rob ter Burg, Yuli Shan, Klaus Hubacek, Franco Ruzzenenti","http://arxiv.org/abs/2206.03148v4, http://arxiv.org/pdf/2206.03148v4",econ.GN
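The scaling-law benchmark described above can be illustrated with a simple log-log fit. The sketch below, which assumes made-up firm sizes and impacts rather than the authors' data, estimates the exponent by least squares and scores each firm by its deviation from the size-dependent benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sector: impact ~ c * size^beta with beta < 1 (sublinear) plus noise.
size = rng.lognormal(mean=10, sigma=1.5, size=200)                       # e.g., revenue
true_beta, true_c = 0.8, 0.05
impact = true_c * size**true_beta * rng.lognormal(sigma=0.3, size=200)   # e.g., CO2e

# Fit log(impact) = log(c) + beta * log(size) by least squares.
X = np.column_stack([np.ones_like(size), np.log(size)])
coef, *_ = np.linalg.lstsq(X, np.log(impact), rcond=None)
log_c, beta = coef
print(f"estimated scaling exponent beta = {beta:.2f} (sublinear if < 1)")

# Size-dependent benchmark: expected impact for a firm of a given size.
benchmark = np.exp(log_c) * size**beta

# Performance score: log-deviation from the benchmark
# (negative = better than same-sized peers, positive = worse).
performance = np.log(impact) - np.log(benchmark)
print("share of firms above their size-specific benchmark:",
      round(float((performance > 0).mean()), 2))
```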
32577,gn,"This study provides new evidence regarding the extent to which medical care
mitigates the economic consequences of various health shocks for the individual
and a wider family. To obtain causal effects, I focus on the role of medical
scientific discoveries and leverage the longitudinal dimension of unique
administrative data for Sweden. The results indicate that medical innovations
strongly mitigate the negative economic consequences of a health shock for the
individual and create spillovers to relatives. Such mitigating effects are
highly heterogeneous across prognoses. These results suggest that medical
innovation substantially reduces the burden of welfare costs yet produces
income inequalities.",Household and individual economic responses to different health shocks: The role of medical innovations,2022-06-07 16:51:37,Volha Lazuka,"http://arxiv.org/abs/2206.03306v2, http://arxiv.org/pdf/2206.03306v2",econ.GN
32578,gn,"Multi-unit organizations such as retail chains are interested in the
diffusion of best practices throughout all divisions. However, the strict
guidelines or incentive schemes may not always be effective in promoting the
replication of a practice. In this paper we analyze how the individual belief
systems, namely the desire of individuals to conform, may be used to spread
knowledge between departments. We develop an agent-based simulation of an
organization with different network structures between divisions through which
the knowledge is shared, and observe the resulting synchrony. We find that the
effect of network structures on the diffusion of knowledge depends on the
interdependencies between divisions, and that peer-to-peer exchange of
information is more effective in reaching synchrony than unilateral sharing of
knowledge from one division. Moreover, we find that centralized network
structures lead to lower performance in organizations.",Controlling replication via the belief system in multi-unit organizations,2022-06-08 13:02:12,"Ravshanbek Khodzhimatov, Stephan Leitner, Friederike Wall","http://arxiv.org/abs/2206.03786v1, http://arxiv.org/pdf/2206.03786v1",econ.GN
32586,gn,"This paper constructs a third-step second-order numerical approach for
solving a mathematical model on the dynamic of corruption and poverty. The
stability and error estimates of the proposed technique are analyzed using the
$L^{2}$-norm. The developed algorithm is at least zero-stable and second-order
accurate. Furthermore, the new method is explicit, fast and more efficient than
a large class of numerical schemes applied to nonlinear systems of ordinary
differential equations and can serve as a robust tool for integrating general
systems of initial-value problems. Some numerical examples confirm the theory
and also consider the corruption and poverty in Cameroon.",A Fast Third-Step Second-Order Explicit Numerical Approach To Investigating and Forecasting The Dynamic of Corruption And Poverty In Cameroon,2022-06-03 12:39:52,Eric Ngondiep,"http://arxiv.org/abs/2206.05022v1, http://arxiv.org/pdf/2206.05022v1",econ.GN
32580,gn,"Gender norms may constrain the ability of women to develop their
entrepreneurial skills, particularly in rural areas. By bringing
entrepreneurial training to women rather than requiring extended time away from
home, mobile technology could open doors that would otherwise be closed. We
randomly selected Nepali women to be trained as veterinary service providers
known as community animal health workers. Half of the selected candidates were
randomly assigned to a traditional training course requiring 35 consecutive
days away from home, and half were assigned to a hybrid distance learning
course requiring two shorter stays plus a tablet-based curriculum to be
completed at home. Distance learning strongly increases women's ability to
complete training as compared to traditional training. Distance learning has a
larger effect than traditional training on boosting the number of livestock
responsibilities women carry out at home, while also raising aspirations. Both
training types increase women's control over income. Our results indicate that
if anything, distance learning produced more effective community animal health
workers.",Can Mobile Technology Improve Female Entrepreneurship? Evidence from Nepal,2022-06-08 17:26:48,"Conner Mullally, Sarah Janzen, Nicholas Magnan, Shruti Sharma, Bhola Shrestha","http://arxiv.org/abs/2206.03919v2, http://arxiv.org/pdf/2206.03919v2",econ.GN
32581,gn,"Determination of price of an artwork is a fundamental problem in cultural
economics. In this work we investigate what impact visual characteristics of a
painting have on its price. We construct a number of visual features measuring
complexity of the painting, its points of interest, segmentation-based
features, local color features, and features based on Itten and Kandinsky
theories, and utilize a mixed-effects model to study the impact of these
features on the painting's price. We analyze the influence of color using the
example of the most complex art style, abstractionism, created by Kandinsky,
for which color is the primary basis. We use Itten's theory, the most
recognized color theory in art history, from which the largest number of
subtheories was born; to this day it is taken as a basis for teaching artists.
We utilize a novel dataset of 3885 paintings collected from Christie's and
Sotheby's and find that color harmony has some explanatory power, color
complexity metrics are insignificant, and color diversity explains price
well.",The influence of color on prices of abstract paintings,2022-06-08 20:05:23,"Maksim Borisov, Valeria Kolycheva, Alexander Semenov, Dmitry Grigoriev","http://arxiv.org/abs/2206.04013v3, http://arxiv.org/pdf/2206.04013v3",econ.GN
32582,gn,"Accurately estimating income Pareto exponents is challenging due to
limitations in data availability and the applicability of statistical methods.
Using tabulated summaries of incomes from tax authorities and a recent
estimation method, we estimate income Pareto exponents in the U.S. for
1916-2019. We find that during the past three decades, the capital and labor
income Pareto exponents have been stable at around 1.2 and 2. Our findings
suggest that top-tail income and wealth inequality is higher, and that wealthy
agents have twice as large an impact on the aggregate economy as previously
thought, but there is no clear trend post-1985.","Capital and Labor Income Pareto Exponents in the United States, 1916-2019",2022-06-09 07:06:16,"Ji Hyung Lee, Yuya Sasaki, Alexis Akira Toda, Yulong Wang","http://arxiv.org/abs/2206.04257v1, http://arxiv.org/pdf/2206.04257v1",econ.GN
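The paper above estimates Pareto exponents from tabulated tax summaries with a recent estimator that is not reproduced here. As a rough illustration of what a Pareto exponent measures, the sketch below simulates a Pareto tail and recovers the exponent with a Hill-type estimator and a log-rank regression; the threshold, sample size, and exponent are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha_true = 1.5                      # Pareto exponent of the simulated tail
x_min = 50_000
incomes = x_min * (1 - rng.random(100_000)) ** (-1 / alpha_true)  # inverse-CDF draws

# Keep only the top tail, since Pareto behaviour is a tail property.
top = np.sort(incomes)[-5_000:]

# (a) Hill / maximum-likelihood estimator on the top observations.
alpha_hill = len(top) / np.sum(np.log(top / top.min()))

# (b) Log-rank regression: log(rank) is roughly linear in log(income) with slope -alpha.
ranks = np.arange(len(top), 0, -1)          # rank 1 = largest income
slope = np.polyfit(np.log(np.sort(top)), np.log(ranks), 1)[0]

print(f"Hill estimate:     {alpha_hill:.2f}")
print(f"log-rank estimate: {-slope:.2f}")
```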
32583,gn,"Despite the wide adoption of revenue management in many industries such as
airline, railway, and hospitality, there is still scarce empirical evidence on
the gains or losses of such strategies compared to uniform pricing or fully
flexible strategies. We quantify such gains and losses and identify their
underlying sources in the context of French railway transportation. The
identification of demand is complicated by censoring and the absence of
exogenous price variations. We develop an original identification strategy
combining temporal variations in relative prices, consumers' rationality and
weak optimality conditions on the firm's pricing strategy. Our results suggest
similar or better performance of the actual revenue management compared to
optimal uniform pricing, but also substantial losses of up to 16.7% compared to
the optimal pricing strategy. We also highlight the key role of revenue
management in acquiring information when demand is uncertain.",Estimating the Gains (and Losses) of Revenue Management,2022-06-09 14:25:47,"Xavier D'Haultfœuille, Ao Wang, Philippe Février, Lionel Wilner","http://arxiv.org/abs/2206.04424v3, http://arxiv.org/pdf/2206.04424v3",econ.GN
32584,gn,"Occasional social assistance prevents individuals from a range of social
ills, particularly unemployment and poverty. It remains unclear, however, how
and to what extent continued reliance on social assistance leads to individuals
becoming trapped in social assistance dependency. In this paper, we build on
the theory of cumulative disadvantage and examine whether the accumulated use
of social assistance over the life course is associated with an increased risk
of future social assistance recipiency. We also analyze the extent to which
living in disadvantaged neighborhoods constitutes an important mechanism in the
explanation of this association. Our analyses use Swedish population registers
for the full population of individuals born in 1981, and these individuals are
followed for approximately 17 years. While most studies are limited by a lack
of granular, life-history data, our granular individual-level data allow us to
apply causal-mediation analysis, and thereby quantify the extent to which the
likelihood of ending up in social assistance dependency is affected by residing
in disadvantaged neighborhoods. Our findings show the accumulation of social
assistance over the studied period is associated with a more than four-fold
increase on a risk ratio scale for future social assistance recipiency,
compared to never having received social assistance during the period examined.
Then, we examine how social assistance dependency is mediated by prolonged
exposure to disadvantaged neighborhoods. Our results suggest that the indirect
effect of disadvantaged neighborhoods is weak to moderate. Therefore, social
assistance dependency may be a multilevel process. Future research should
explore how the mediating effects of disadvantaged neighborhoods vary in
different contexts.",To What Extent Do Disadvantaged Neighborhoods Mediate Social Assistance Dependency? Evidence from Sweden,2022-06-10 00:36:06,"Cheng Lin, Adel Daoud, Maria Branden","http://arxiv.org/abs/2206.04773v3, http://arxiv.org/pdf/2206.04773v3",econ.GN
32585,gn,"We examine the contribution of institutional integration to the institutional
quality. To this end, we exploit the 2007 political crisis in Ukraine and
examine the effects of staying out of the European Union for 28 Ukrainian
provinces in the period 1996-2020. We construct novel subnational estimates of
institutional quality for Ukraine and central and eastern European countries
based on the latent residual component extraction of institutional quality from
the existing governance indicators by making use of Bayesian posterior analysis
under non-informative objective prior function. By comparing the residualized
institutional quality trajectories of Ukrainian provinces with their central
and eastern European peers that were admitted to the European Union in 2004 and
after, we assess the institutional quality cost of being under Russian
political influence and interference. Based on the large-scale synthetic
control analysis, we find evidence of large-scale negative institutional
quality effects of staying out of the European Union such as heightened
political instability and rampant deterioration of the rule of law and control
of corruption. Statistical significance of the estimated effects is evaluated
across a comprehensive placebo simulation with more than 34 billion placebo
averages for each institutional quality outcome.",The perils of Kremlin's influence: evidence from Ukraine,2022-06-10 11:56:27,"Chiara Natalie Focacci, Mitja Kovac, Rok Spruk","http://arxiv.org/abs/2206.04950v1, http://arxiv.org/pdf/2206.04950v1",econ.GN
32588,gn,"We study the empirical relationship between green technologies and industrial
production at very fine-grained levels by employing Economic Complexity
techniques. Firstly, we use patent data on green technology domains as a proxy
for competitive green innovation and data on exported products as a proxy for
competitive industrial production. Secondly, with the aim of observing how
green technological development trickles down into industrial production, we
build a bipartite directed network linking single green technologies at time
$t_1$ to single products at time $t_2 \ge t_1$ on the basis of their
time-lagged co-occurrences in the technological and industrial specialization
profiles of countries. Thirdly, we filter the links in the network by employing
a maximum entropy null-model. In particular, we find that the industrial
sectors most connected to green technologies are related to the processing of
raw materials, which we know to be crucial for the development of clean energy
innovations. Furthermore, by looking at the evolution of the network over time,
we observe that more complex green technological know-how requires more time to
be transmitted to industrial production, and is also linked to more complex
products.",The trickle down from environmental innovation to productive complexity,2022-06-15 16:46:34,"Francesco de Cunzo, Alberto Petri, Andrea Zaccaria, Angelica Sbardella","http://arxiv.org/abs/2206.07537v1, http://arxiv.org/pdf/2206.07537v1",econ.GN
32589,gn,"Reducing the size of the hypoxic zone in the Gulf of Mexico has proven to be
a challenging task. A variety of mitigation options have been proposed, each
likely to produce markedly different patterns of mitigation with widely varying
consequences for the economy. The general consensus is that no single measure
alone is sufficient to achieve the EPA Task Force goal for reducing the Gulf
hypoxic zone and it appears that a combination of management practices must be
employed. However, absent a highly resolved, multi-scale framework for
assessing these policy combinations, it has been unclear what pattern of
mitigation is likely to emerge from different policies and what the
consequences would be for local, regional and national land use, food prices
and farm returns. We address this research gap by utilizing a novel multi-scale
framework for evaluating alternative N loss management policies in the
Mississippi River basin. This combines fine-scale agro-ecosystem responses with
an economic model capturing domestic and international market and price
linkages. We find that wetland restoration combined with improved N use
efficiency, along with a leaching tax could reduce the Mississippi River N load
by 30-53% while only modestly increasing corn prices. This study underscores
the value of fine-resolution analysis and the potential of combined economic
and ecological instruments in tackling nonpoint source nitrate pollution.",Multi-scale Analysis of Nitrogen Loss Mitigation in the US Corn Belt,2022-06-15 18:24:39,"Jing Liu, Laura Bowling, Christopher Kucharik, Sadia Jame, Uris Baldos, Larissa Jarvis, Navin Ramankutty, Thomas Hertel","http://arxiv.org/abs/2206.07596v1, http://arxiv.org/pdf/2206.07596v1",econ.GN
32590,gn,"We study an energy market composed of producers who compete to supply energy
to different markets and want to maximize their profits. The energy market is
modeled by a graph representing a constrained power network where nodes
represent the markets and links are the physical lines with a finite capacity
connecting them. Producers play a networked Cournot game on such a network
together with a centralized authority, called market maker, that facilitates
the trade between geographically separate markets via the constrained power
network and aims to maximize a certain welfare function. We first prove a
general result that links the optimal action of the market maker with the
capacity constraint enforced on the power network. Under mild assumptions, we
study the existence and uniqueness of Nash equilibria and exploit our general
result to prove a connection between capacity bottlenecks in the power network
and the emergence of price differences between different markets that are
separated by saturated lines, a phenomenon that is often observed in real power
networks.",Equilibria in Network Constrained Energy Markets,2022-06-15 20:56:06,"Leonardo Massai, Giacomo Como, Fabio Fagnani","http://arxiv.org/abs/2206.08133v2, http://arxiv.org/pdf/2206.08133v2",econ.GN
32591,gn,"In this paper we propose a methodology suitable for a comprehensive analysis
of the global embodied energy flow trough a complex network approach. To this
end, we extend the existing literature, providing a multilayer framework based
on the environmentally extended input-output analysis. The multilayer
structure, with respect to the traditional approach, allows us to unveil the
different role of sectors and economies in the system. In order to identify key
sectors and economies, we make use of hub and authority scores, by adapting to
our framework an extension of the Kleinberg algorithm, called Multi-Dimensional
HITS (MD-HITS). A numerical analysis based on multi-region input-output tables
shows how the proposed approach provides meaningful insights.",Environmentally extended input-output analysis in complex networks: a multilayer approach,2022-06-17 15:59:59,"Alessandra Cornaro, Giorgio Rizzini","http://dx.doi.org/10.1007/s10479-022-05133-0, http://arxiv.org/abs/2206.08745v1, http://arxiv.org/pdf/2206.08745v1",econ.GN
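The multilayer MD-HITS algorithm used above is not reproduced here; as background, the sketch below shows standard single-layer Kleinberg HITS hub and authority scores computed by power iteration, with a random matrix standing in for embodied-energy flows between sectors.

```python
import numpy as np

def hits(A, n_iter=100, tol=1e-10):
    """Kleinberg's HITS hub/authority scores for a weighted adjacency matrix A
    (A[i, j] = flow from node i to node j), computed by power iteration."""
    n = A.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(n_iter):
        new_auths = A.T @ hubs
        new_auths /= np.linalg.norm(new_auths)
        new_hubs = A @ new_auths
        new_hubs /= np.linalg.norm(new_hubs)
        if np.allclose(new_hubs, hubs, atol=tol) and np.allclose(new_auths, auths, atol=tol):
            hubs, auths = new_hubs, new_auths
            break
        hubs, auths = new_hubs, new_auths
    return hubs, auths

# Placeholder: embodied-energy flows between 6 sector-country nodes.
rng = np.random.default_rng(4)
A = rng.random((6, 6)) * (rng.random((6, 6)) < 0.5)

hub_scores, authority_scores = hits(A)
print("hub scores       :", np.round(hub_scores, 3))
print("authority scores :", np.round(authority_scores, 3))
```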
32592,gn,"How should government and business solve big problems? In bold leaps or in
many smaller moves? We show that bespoke, one-off projects are prone to poorer
outcomes than projects built on a repeatable platform. Repeatable projects are
cheaper, faster, and scale at lower risk of failure. We compare evidence from
203 space missions at NASA and SpaceX, on cost, speed-to-market, schedule, and
scalability. We find that SpaceX's platform strategy was 10X cheaper and 2X
faster than NASA's bespoke strategy. Moreover, SpaceX's platform strategy was
financially less risky, virtually eliminating cost overruns. Finally, we show
that achieving platform repeatability is a strategically diligent process
involving experimental learning sequences. Sectors of the economy where
governments find it difficult to control spending or timeframes or to realize
planned benefits - e.g., health, education, climate, defence - are ripe for a
platform rethink.",How to Solve Big Problems: Bespoke Versus Platform Strategies,2022-06-09 14:50:19,"Atif Ansar, Bent Flyvbjerg","http://arxiv.org/abs/2206.08754v1, http://arxiv.org/pdf/2206.08754v1",econ.GN
32593,gn,"We analyze the sovereign bond issuance data of eight major emerging markets
(EMs) - Brazil, China, India, Indonesia, Mexico, Russia, South Africa and
Turkey from 1970 to 2018. Our analysis suggests that (i) EM local currency
bonds tend to be smaller in size, shorter in maturity, or lower in coupon rate
than foreign currency bonds; (ii) EMs are more likely to issue local-currency
sovereign bonds if their currencies appreciated before the global financial
crisis of 2008 (GFC); (iii) inflation-targeting monetary policy increases the
likelihood of issuing local-currency debt before GFC but not after; and (iv)
EMs that offer higher sovereign yields are more likely to issue local-currency
bonds after GFC. Future data will allow us to test and identify structural
changes associated with the COVID-19 pandemic and its aftermath.","Good-Bye Original Sin, Hello Risk On-Off, Financial Fragility, and Crises?",2022-06-18 17:47:28,"Joshua Aizenman, Yothin Jinjarak, Donghyun Park, Huanhuan Zheng","http://dx.doi.org/10.1016/j.jimonfin.2021.102442, http://arxiv.org/abs/2206.09218v1, http://arxiv.org/pdf/2206.09218v1",econ.GN
32594,gn,"This paper aims to identify and characterize the potential of green jobs in
Argentina, i.e., those that would benefit from a transition to a green economy,
using occupational green potential scores calculated in US O*NET data. We apply
the greenness scores to Argentine household survey data and estimate that 25%
of workers are in green jobs, i.e., have a high green potential. However, when
taking into account the informality dimension, we find that 15% of workers and
12% of wage earners are in formal green jobs. We then analyze the relationship
between the greenness scores (with emphasis on the nexus with decent work) and
various labor and demographic variables at the individual level. We find that
for the full sample of workers the green potential is relatively greater for
men, the elderly, those with very high qualifications, those in formal
positions, and those in specific sectors such as construction, transportation,
mining, and industry. These are the groups that are likely to be the most
benefited by the greening of the Argentine economy. When we restrict the sample
to wage earners, the green potential score is positively associated with
informality.",Going Green: Estimating the Potential of Green Jobs in Argentina,2022-06-18 23:37:50,"Natalia Porto, Pablo de la Vega, Manuela Cerimelo","http://arxiv.org/abs/2206.09279v2, http://arxiv.org/pdf/2206.09279v2",econ.GN
32595,gn,"During the course of the COVID-19 pandemic, a common strategy for public
health organizations around the world has been to launch interventions via
advertising campaigns on social media. Despite this ubiquity, little has been
known about their average effectiveness. We conduct a large-scale program
evaluation of campaigns from 174 public health organizations on Facebook and
Instagram that collectively reached 2.1 billion individuals and cost around
$40 million. We report the results of 819 randomized experiments that measured
the impact of these campaigns across standardized, survey-based outcomes. We
find on average these campaigns are effective at influencing self-reported
beliefs, shifting opinions close to 1% at baseline with a cost per influenced
person of about $3.41. There is further evidence that campaigns are especially
effective at influencing users' knowledge of how to get vaccines. Our results
represent, to the best of our knowledge, the largest set of online public
health interventions analyzed to date.",The Effectiveness of Digital Interventions on COVID-19 Attitudes and Beliefs,2022-06-21 12:32:56,"Susan Athey, Kristen Grabarz, Michael Luca, Nils Wernerfelt","http://arxiv.org/abs/2206.10214v1, http://arxiv.org/pdf/2206.10214v1",econ.GN
32596,gn,"We consider portfolio selection under nonparametric $\alpha$-maxmin ambiguity
in the neighbourhood of a reference distribution. We show strict concavity of
the portfolio problem under ambiguity aversion. Implied demand functions are
nondifferentiable, resemble observed bid-ask spreads, and are consistent with
existing parametric limiting participation results under ambiguity. Ambiguity
seekers exhibit a discontinuous demand function, implying an empty set of
reservation prices. If agents have identical, or sufficiently similar prior
beliefs, the first-best equilibrium is no trade. Simple conditions yield the
existence of a Pareto-efficient second-best equilibrium, implying that
heterogeneity in ambiguity preferences is sufficient for mutually beneficial
transactions among all else homogeneous traders. These equilibria reconcile
many observed phenomena in liquid high-information financial markets, such as
liquidity dry-ups, portfolio inertia, and negative risk premia.",Optimal Investment and Equilibrium Pricing under Ambiguity,2022-06-21 18:59:55,"Michail Anthropelos, Paul Schneider","http://arxiv.org/abs/2206.10489v1, http://arxiv.org/pdf/2206.10489v1",econ.GN
32597,gn,"In spring 2022, the German federal government agreed on a set of measures
that aim at reducing households' financial burden resulting from a recent price
increase, especially in energy and mobility. These measures include among
others, a nation-wide public transport ticket for 9 EUR per month and a fuel
tax cut that reduces fuel prices by more than 15%. In transportation research
this is an almost unprecedented behavioral experiment. It allows us to study not
only behavioral responses in mode choice and induced demand but also to assess
the effectiveness of transport policy instruments. We observe this natural
experiment with a three-wave survey and an app-based travel diary on a sample
of hundreds of participants as well as an analysis of traffic counts. In this
second report, we update the information on study participation, provide first
insights on the smartphone app usage as well as insights on the first wave
results, particularly on the 9 EUR-ticket purchase intention.",A nation-wide experiment: fuel tax cuts and almost free public transport for three months in Germany -- Report 2 First wave results,2022-06-21 19:32:17,"Fabienne Cantner, Nico Nachtigall, Lisa S. Hamm, Andrea Cadavid, Lennart Adenaw, Allister Loder, Markus B. Siewert, Sebastian Goerg, Markus Lienkamp, Klaus Bogenberger","http://arxiv.org/abs/2206.10510v2, http://arxiv.org/pdf/2206.10510v2",econ.GN
32598,gn,"In this study, two mathematical models have been developed for assigning
emergency vehicles, namely ambulances, to geographical areas. In the first
model, which is based on the assignment problem, ambulance transfer (moving
ambulances) between locations is not considered. As ambulance transfer can
improve system efficiency by decreasing response time as well as operational
cost, we include it in the second model, which is based on the transportation
problem. Both models assume that the demand of all geographical locations must
be met. The major contributions of this study are: ambulance transfer between
locations, splitting the day into several time slots, and the demand
distribution of the geographical zones. To the best of our knowledge, the
first two have not been studied before. These extensions allow us to build a
more realistic model of real-world operation. Although maximizing coverage has
been the main objective in previous studies, here minimizing operating costs
is the main objective function, because we have assumed that the demand of all
geographical areas must be met.",Providing a model for the issue of multi-period ambulance location,2022-06-17 15:01:48,"Hamed Kazemipoor, Mohammad Ebrahim Sadeghi, Agnieszka Szmelter-Jarosz, Mohadese Aghabozorgi","http://dx.doi.org/10.52547/ijie.1.2.13, http://arxiv.org/abs/2206.11811v1, http://arxiv.org/pdf/2206.11811v1",econ.GN
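The second model above builds on the transportation problem. The sketch below solves a generic transportation problem with scipy's linear programming routine, minimizing transfer cost subject to base capacities and zone demands; the cost matrix, supplies, and demands are placeholders, not the paper's multi-period formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder data: 3 ambulance bases (supply) and 4 demand zones.
supply = np.array([4, 6, 5])                 # ambulances available at each base
demand = np.array([3, 4, 2, 5])              # ambulances required in each zone
cost = np.array([[4., 6., 9., 3.],           # cost (e.g., travel time) base -> zone
                 [5., 4., 7., 8.],
                 [6., 3., 4., 5.]])

m, n = cost.shape
c = cost.ravel()                             # decision variables x[i, j], row-major

# Supply constraints: sum_j x[i, j] <= supply[i]
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

# Demand constraints: sum_i x[i, j] == demand[j]
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * (m * n), method="highs")

print("total cost:", res.fun)
print("assignment (bases x zones):\n", res.x.reshape(m, n).round(1))
```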
32635,gn,"Duflo (2001) exploits a 1970s schooling expansion to estimate the returns to
schooling in Indonesia. Under the study's difference-in-differences (DID)
design, two patterns in the data--shallower pay scales for younger workers and
negative selection in treatment--can violate the parallel trends assumption and
upward-bias results. In response, I follow up later, test for trend breaks
timed to the intervention, and perform changes-in-changes (CIC). I also correct
data errors, cluster variance estimates, incorporate survey weights to correct
for endogenous sampling, and test for (and detect) instrument weakness. Weak
identification-robust inference yields positive but imprecise estimates. CIC
estimates tilt slightly negative.",Schooling and Labor Market Consequences of School Construction in Indonesia: Comment,2022-07-19 06:07:38,David Roodman,"http://arxiv.org/abs/2207.09036v5, http://arxiv.org/pdf/2207.09036v5",econ.GN
32599,gn,"One of the emerging technologies, which is expected to have tremendous
effects on community development, is the Internet of Things (IoT). Given the
prospects for this technology and the country's efforts toward its
development, policymaking for it is crucial. The technological innovation
system is one of the most important dynamic approaches in the field of modern
technology policy: by analyzing the various functions that influence the
development of a technology, it explains the proper path to technological
advancement. Accordingly, ten major factors influencing the development of
emerging technologies were identified based on previous studies in this area,
the effects of these ten factors on each other were estimated using system
dynamics and the fuzzy DEMATEL method, and the interactions between these
functions were modeled. Market formation, resource mobilization, exploitation
of the regime, and policymaking and coordination have the strongest direct
effects on the other functions. Policymaking and coordination, market
formation, entrepreneurial activities, creating structure, and resource
mobilization have the strongest total impact on the other functions. Given
resource constraints in the system, policymakers should focus on the functions
with the strongest direct and total impacts on the others, which in this
research are market formation, entrepreneurial activities, resource
mobilization, and policymaking and coordination. Given the dynamic nature of
technology development, this model can help policymakers in the
decision-making process for the development of the Internet of Things.",Proposing Dynamic Model of Functional Interactions of IoT Technological Innovation System by Using System Dynamics and Fuzzy DEMATEL,2022-06-17 15:07:43,"Mohammad Mousakhani, Fatemeh Saghafi, Mohammad Hasanzadeh, Mohammad Ebrahim Sadeghi","http://arxiv.org/abs/2206.11847v1, http://arxiv.org/pdf/2206.11847v1",econ.GN
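The study above applies fuzzy DEMATEL to the interactions among innovation-system functions. As a hedged illustration of the core DEMATEL arithmetic only (crisp scores, four factors, an invented expert matrix, no fuzzy aggregation), the sketch below normalizes the direct-relation matrix, computes the total-relation matrix T = X(I - X)^{-1}, and derives prominence and net cause/effect indicators.

```python
import numpy as np

# Invented direct-relation matrix among four factors
# (e.g., averaged expert scores on a 0-4 influence scale).
D = np.array([[0, 3, 2, 1],
              [1, 0, 2, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# Normalize so that the largest row/column sum becomes 1.
s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
X = D / s

# Total-relation matrix T = X (I - X)^(-1), summing direct and indirect effects.
n = D.shape[0]
T = X @ np.linalg.inv(np.eye(n) - X)

row = T.sum(axis=1)     # D_i: total influence exerted by factor i
col = T.sum(axis=0)     # R_i: total influence received by factor i

print("prominence (D + R):", np.round(row + col, 2))   # overall importance
print("relation   (D - R):", np.round(row - col, 2))   # net cause (>0) or effect (<0)
```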
32600,gn,"Purpose: In addition to playing an important role in creating economic
security and investment development, insurance companies also invest. The
insurance industry, as one of the country's financial institutions, has a
special place in the investment process, and special attention to appropriate
investment policies in the insurance industry is essential so that the
efficiency of this industry in allocating the existing budget stimulates other
economic sectors. This study seeks to model investment in the performance of
dynamic networks of insurance companies.
  Methodology: In this paper, a new investment model is designed to examine
the dynamic network performance of insurance companies in Iran. The designed
model is implemented using GAMS software and the outputs of the model are
analyzed with regression methods. The required information has been collected
from the statistics of insurance companies in Iran between 1393 and 1398.
  Findings: Of the 15 companies evaluated, 6 had an efficiency score of one
and were identified as efficient. The average efficiency of the insurance
companies is 0.78 with a standard deviation of 0.2. The results show that the
increase in the value of investments is driven by a large reduction in costs,
and that the capital and net profit of the companies indicate clear and strong
potential for insurance companies.
  Originality/Value: In this paper, investment modeling is performed to examine
the performance of dynamic networks of insurance companies in Iran.",Neural network based human reliability analysis method in production systems,2022-06-17 14:55:05,"Rasoul Jamshidi, Mohammad Ebrahim Sadeghi","http://dx.doi.org/10.22105/JARIE.2021.277071.1274, http://arxiv.org/abs/2206.11850v1, http://arxiv.org/pdf/2206.11850v1",econ.GN
32601,gn,"As manufacturers and technologies become more complicated, manufacturing
errors such as machine failure and human error have also been considered more
over the past. Since machines and humans are not error-proof, managing the
machines and human errors is a significant challenge in manufacturing systems.
There are numerous methods for investigating human errors, fatigue, and
reliability that categorized under Human Reliability Analysis (HRA) methods.
HRA methods use some qualitative factors named Performance Shaping Factors
(PSFs) to estimate Human Error Probability (HEP). Since the PSFs can be
considered as the acceleration factors in Accelerated Life Test (ALT). We
developed a method for Accelerated Human Fatigue Test (AHFT) to calculate human
fatigue, according to fatigue rate and other effective factors. The proposed
method reduces the time and cost of human fatigue calculation. AHFT first
extracts the important factors affecting human fatigue using Principal
Component Analysis (PCA) and then uses the accelerated test to calculate the
effect of PSFs on human fatigue. The proposed method has been applied to a real
case, and the provided results show that human fatigue can be calculated more
effectively using the proposed method.",Application of accelerated life testing in human reliability analysis,2022-06-17 14:39:43,"Rasoul Jamshidi, Mohammad Ebrahim Sadeghi","http://dx.doi.org/10.22105/RIEJ.2021.288263.1227, http://arxiv.org/abs/2206.11853v1, http://arxiv.org/pdf/2206.11853v1",econ.GN
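The AHFT method above first screens performance shaping factors with principal component analysis before running the accelerated test. The sketch below shows only that PCA screening step with scikit-learn on invented PSF data; the accelerated life test itself is not modeled.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

# Invented data: 200 work sessions scored on 6 performance shaping factors
# (e.g., workload, noise, time pressure, training, temperature, shift length).
psf = rng.normal(size=(200, 6))
psf[:, 1] = 0.8 * psf[:, 0] + 0.2 * psf[:, 1]   # make two factors correlated

pca = PCA()
pca.fit(psf)

explained = pca.explained_variance_ratio_
print("explained variance by component:", np.round(explained, 2))

# Keep the components needed to explain roughly 90% of the variance,
# and inspect the loadings to see which PSFs dominate the first component.
k = int(np.searchsorted(np.cumsum(explained), 0.9)) + 1
print(f"components retained: {k}")
print("loadings of the first component:", np.round(pca.components_[0], 2))
```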
32602,gn,"Using an unbalanced panel data covering 75 countries from 1991 to 2019, we
explore how the political risk impacts on food reserve ratio. The empirical
findings show that an increasing political risk negatively affect food reserve
ratio, and same effects hold for both internal risk and external risk.
Moreover, we find that the increasing external or internal risks both
negatively affect production and exports, but external risk does not
significantly impact on imports and it positively impacts on consumption, while
internal risk negatively impacts on imports and consumption. The results
suggest that most of governments have difficulty to raise subsequent food
reserve ratio in face of an increasing political risk, no matter it is an
internal risk or an external risk although the mechanisms behind the impacts
are different.",Impact of the political risk on food reserve ratio: evidence across countries,2022-06-24 15:59:33,"Kai Xing, Shang Li, Xiaoguang Yang","http://arxiv.org/abs/2206.12264v1, http://arxiv.org/pdf/2206.12264v1",econ.GN
32608,gn,"This paper studies the effects of talk radio, specifically the Rush Limbaugh
Show, on electoral outcomes and attitude polarization in the U.S. We propose a
novel identification strategy that considers the radio space in each county as
a market where multiple stations are competing for listeners' attention. Our
measure of competition is a spatial Herfindahl-Hirschman Index (HHI) in radio
frequencies. To address endogeneity concerns, we exploit the variation in
competition based on accidental frequency overlaps in a county, conditional on
the overall level of radio frequency competition. We find that counties with
higher exposure to the Rush Limbaugh Show have a systematically higher vote
share for Donald Trump in the 2016 and 2020 U.S. presidential elections.
Combining our county-level Rush Limbaugh Show exposure measure with individual
survey data reveals that self-identifying Republicans in counties with higher
exposure to the Show express more conservative political views, while
self-identifying Democrats in these same counties express more moderate
political views. Taken together, these findings provide some of the first
insights on the effects of contemporary talk radio on political outcomes, both
at the aggregate and individual level.",Competing for Attention -- The Effect of Talk Radio on Elections and Political Polarization in the US,2022-06-28 03:45:52,"Ashani Amarasinghe, Paul A. Raschky","http://arxiv.org/abs/2206.13675v1, http://arxiv.org/pdf/2206.13675v1",econ.GN
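The identification above relies on a spatial Herfindahl-Hirschman Index of competition in each county's radio space. The sketch below shows the basic HHI arithmetic on invented station shares; it does not reproduce the paper's construction from accidental frequency overlaps.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares.
    Shares are normalized to sum to 1; HHI ranges from 1/n (even split) to 1 (monopoly)."""
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

# Invented example: share of a county's radio audience (or covered frequency
# space) held by each station.
county_a = [0.50, 0.30, 0.15, 0.05]     # one dominant station
county_b = [0.25, 0.25, 0.25, 0.25]     # evenly contested radio space

print("HHI, county A:", round(hhi(county_a), 3))   # higher = less competition
print("HHI, county B:", round(hhi(county_b), 3))
```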
32603,gn,"Over the past thirty years, logistics has undergone tremendous change from a
purely operational performance that led to sales or production and focused on
securing supply and delivery lines to customers, with the transformation of
intelligent technologies into a professional operator. In today's world, the
fourth generation of industry has forced companies to rethink how their supply
chain is designed. In this case, in addition to the need for adaptation, supply
chains also have the opportunity to reach operational horizons, use emerging
digital supply chain business models, and transform the company into a digital
supply chain. One of the transformational technologies that has had a
tremendous impact on the supply chain in this regard is IoT technology. This
technology, as one of the largest sources of data production, can facilitate
supply chain processes in all its dimensions. However, due to the presence of
the Internet and the location of supply chain components in the context of
information networks, this digital supply chain always faces major challenges.
Therefore, in this paper, an attempt was made to examine and prioritize the
most important challenges of implementing a supply chain 0/4 using a nonlinear
hierarchical analysis method. In order to investigate these challenges, the
opinions of experts active in the supply chain of Fast-moving consumer goods
industries (FMCG) were used as a case study as well as some academic
experts.The results show that the lack of technological infrastructure and
security challenges are the most important challenges of implementing supply
chain 0/4 in the era of digital developments, which should be given special
attention for a successful implementation.",Quantitative Analysis of Implementation Challenges of IoT-Based Digital Supply Chain (Supply Chain 0/4),2022-06-17 14:34:02,"Hamed Nozari, Mohammad Ebrahim Sadeghi, Javid Ghahremani nahr, Seyyed Esmaeil Najafi","http://dx.doi.org/10.22034/jsqm.2022.314683.1380, http://arxiv.org/abs/2206.12277v1, http://arxiv.org/pdf/2206.12277v1",econ.GN
32604,gn,"The unavoidable travel time variability in transportation networks, resulted
from the widespread supply side and demand side uncertainties, makes travel
time reliability (TTR) be a common and core interest of all the stakeholders in
transportation systems, including planners, travelers, service providers, and
managers. This common and core interest stimulates extensive studies on
modeling TTR. Researchers have developed a range of theories and models of TTR,
many of which have been incorporated into transportation models, policies, and
project appraisals. Adopting the network perspective, this paper aims to
provide an integrated framework for reviewing the methodological developments
of modeling TTR in transportation networks, including its characterization,
evaluation and valuation, and traffic assignment. Specifically, the TTR
characterization provides a whole picture of travel time distribution in
transportation networks. TTR evaluation and TTR valuation (known as the value
of reliability, VOR) simply and intuitively interpret abstract characterized
TTR to be well understood by different stakeholders of transportation systems.
TTR-based traffic assignment investigates the effects of TTR on individual
users' travel behavior and consequently on the collective network flow pattern. As
the above three topics are mainly separately studied in different disciplines
and research areas, the integrated framework allows us to better understand
their relationships and may contribute to developing possible combinations of
TTR modeling philosophy. Also, the network perspective enables a focus on
common challenges of modeling TTR, especially the uncertainty propagation from
the uncertainty sources to the TTR at spatial levels including link, route, and
the entire network. Some directions for future research are discussed in the
era of new data environment, applications, and emerging technologies.",Travel time reliability in transportation networks: A review of methodological developments,2022-06-25 20:05:26,"Zhaoqi Zang, Xiangdong Xu, Kai Qu, Ruiya Chen, Anthony Chen","http://dx.doi.org/10.1016/j.trc.2022.103866, http://arxiv.org/abs/2206.12696v2, http://arxiv.org/pdf/2206.12696v2",econ.GN
32605,gn,"Most governments are mandated to maintain their economies at full employment.
We propose that the best marker of full employment is the efficient
unemployment rate, $u^*$. We define $u^*$ as the unemployment rate that
minimizes the nonproductive use of labor -- both jobseeking and recruiting. The
nonproductive use of labor is well measured by the number of jobseekers and
vacancies, $u + v$. Through the Beveridge curve, the number of vacancies is
inversely related to the number of jobseekers. With such symmetry, the labor
market is efficient when there are as many jobseekers as vacancies ($u = v$),
too tight when there are more vacancies than jobseekers ($v > u$), and too
slack when there are more jobseekers than vacancies ($u > v$). Accordingly, the
efficient unemployment rate is the geometric average of the unemployment and
vacancy rates: $u^* = \sqrt{uv}$. We compute $u^*$ for the United States
between 1930 and 2022. We find for instance that the US labor market has been
over full employment ($u < u^*$) since May 2021.",$u^* = \sqrt{uv}$,2022-06-27 04:43:22,"Pascal Michaillat, Emmanuel Saez","http://arxiv.org/abs/2206.13012v1, http://arxiv.org/pdf/2206.13012v1",econ.GN
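The formula above, u* = sqrt(uv), is simple enough to compute directly. The sketch below evaluates it for one illustrative pair of unemployment and vacancy rates (not the paper's series) and classifies the labor market as too tight, too slack, or efficient.

```python
from math import sqrt

def efficient_unemployment_rate(u, v):
    """u* = sqrt(u * v): the unemployment rate that minimizes jobseeking plus
    recruiting, given a symmetric Beveridge curve (rates in decimals)."""
    return sqrt(u * v)

def labor_market_state(u, v):
    u_star = efficient_unemployment_rate(u, v)
    if u < u_star:       # equivalently v > u: too tight, over full employment
        return u_star, "too tight (over full employment)"
    if u > u_star:       # equivalently u > v: too slack
        return u_star, "too slack"
    return u_star, "efficient (u = v)"

# Illustrative numbers (not the paper's data): u = 3.6%, v = 7.0%.
u, v = 0.036, 0.070
u_star, state = labor_market_state(u, v)
print(f"u* = {u_star:.3f}, market is {state}")
```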
32606,gn,"Inbreeding homophily is a prevalent feature of human social networks with
important individual and group-level social, economic, and health consequences.
The literature has proposed an overwhelming number of dimensions along which
human relationships might sort, without proposing a unified
empirically-grounded framework for their categorization. We exploit rich data
on a sample of university freshmen with very similar characteristics (age, race
and education) and contrast the relative importance of observable vs.
unobservable characteristics in their friendship formation. We employ Bayesian
Model Averaging, a methodology explicitly designed to target model uncertainty
and to assess the robustness of each candidate attribute while predicting
friendships. We show that, while observable features such as assignment of
students to sections, gender, and smoking are robust key determinants of
whether two individuals befriend each other, unobservable attributes, such as
personality, cognitive abilities, economic preferences, or socio-economic
aspects, are largely sensitive to the model specification, and are not important
predictors of friendships.",The role of unobservable characteristics in friendship network formation,2022-06-28 00:47:15,"Pablo Brañas-Garza, Lorenzo Ductor, Jaromír Kovárík","http://arxiv.org/abs/2206.13641v1, http://arxiv.org/pdf/2206.13641v1",econ.GN
32607,gn,"We study individuals' willingness to engage with others who hold opposite
views on polarizing policies. A representative sample of 2,507 Americans are
given the opportunity to listen to recordings of fellow countrymen and women
expressing their views on immigration, abortion laws and gun ownership laws. We
find that most Americans (more than two-thirds) are willing to listen to a view
opposite to theirs, and a fraction (ten percent) reports changing their views
as a result. We also test whether emphasizing having common grounds with those
who think differently helps bridging views. We identify principles the vast
majority of people agree upon: (1) a set of fundamental human rights, and (2) a
set of simple behavioral etiquette rules. A random subsample of people are made
explicitly aware they share common views, either on human rights (one-third of
the sample) or etiquette rules (another one-third of the sample), before they
have the opportunity to listen to different views. We find that the treatments
induce people to adjust their views towards the center on abortion and
immigration, relative to a control group, thus reducing polarization.","Reducing Polarization on Abortion, Guns and Immigration: An Experimental Study",2022-06-28 01:21:13,"Michele Belot, Guglielmo Briscese","http://arxiv.org/abs/2206.13652v2, http://arxiv.org/pdf/2206.13652v2",econ.GN
32609,gn,"In this paper, I revisit Phillips, Wu and Yu's seminal 2011 paper on testing
for the dot-com bubble. I apply recent advancements of their methods to
individual Nasdaq stocks and use a novel specification for fundamentals. To
address a divide in the literature, I generate a detailed sectoral breakdown of
the dot-com bubble. I find that it comprised multiple overlapping episodes of
exuberance and that there were indeed two starting dates for internet
exuberance.",Dissecting the dot-com bubble in the 1990s NASDAQ,2022-06-28 19:44:36,Yuchao Fan,"http://arxiv.org/abs/2206.14130v2, http://arxiv.org/pdf/2206.14130v2",econ.GN
32610,gn,"Biobased energy, particularly corn starch-based ethanol and other liquid
renewable fuels, are a major element of federal and state energy policies in
the United States. These policies are motivated by energy security and climate
change mitigation objectives, but corn ethanol does not substantially reduce
greenhouse gas emissions when compared to petroleum-based fuels. Corn
production also imposes substantial negative externalities (e.g., nitrogen
leaching, higher food prices, water scarcity, and indirect land use change). In
this paper, we utilize a partial equilibrium model of corn-soy production and
trade to analyze the potential of reduced US demand for corn as a biobased
energy feedstock to mitigate increases in nitrogen leaching, crop production
and land use associated with growing global populations and income from 2020 to
2050. We estimate that a 23% demand reduction would sustain land use and
nitrogen leaching below 2020 levels through the year 2025, and a 41% reduction
would do so through 2030. Outcomes are similar across major watersheds where
corn and soy are intensively farmed.",Reducing US Biofuels Requirements Mitigates Short-term Impacts of Global Population and Income Growth on Agricultural Environmental Outcomes,2022-06-29 02:12:43,"David R. Johnson, Nathan B. Geldner, Jing Liu, Uris Lantz Baldos, Thomas Hertel","http://arxiv.org/abs/2206.14321v1, http://arxiv.org/pdf/2206.14321v1",econ.GN
32611,gn,"The emergence of new organizational forms--such as virtual teams--has brought
forward some challenges for teams. One of the most relevant challenges is
coordinating the decisions of team members who work from different time zones.
Intuition suggests that task performance should improve if the team members'
decisions are coordinated. However, previous research suggests that the effect
of coordination on task performance is ambiguous. Specifically, the effect of
coordination on task performance depends on aspects such as the team members'
learning and the changes in team composition over time. This paper aims to
understand how individual learning and team composition moderate the
relationship between coordination and task performance. We implement an
agent-based modeling approach based on the NK-framework to fulfill our research
objective. Our results suggest that both factors have moderating effects.
Specifically, we find that excessively increasing individual learning is
harmful for the task performance of fully autonomous teams, but less
detrimental for teams that coordinate their decisions. In addition, we find
that teams that coordinate their decisions benefit from changing their
composition in the short-term, but fully autonomous teams do not. In
conclusion, teams that coordinate their decisions benefit more from individual
learning and dynamic composition than teams that do not coordinate.
Nevertheless, we should note that the existence of moderating effects does not
imply that coordination improves task performance. Whether coordination
improves task performance depends on the interdependencies between the team
members' decisions.",The benefits of coordination in (over)adaptive virtual teams,2022-06-29 13:01:37,"Darío Blanco-Fernández, Stephan Leitner, Alexandra Rausch","http://arxiv.org/abs/2206.14508v1, http://arxiv.org/pdf/2206.14508v1",econ.GN
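The simulations above are built on the NK framework. The sketch below is a minimal single-agent NK landscape (random contribution tables, fitness evaluation, one hill-climbing step) with arbitrary N and K; the team, coordination, and turnover layers of the paper are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(7)

N, K = 6, 2   # N binary decisions; each interacts with K others

# For each decision, pick the K other decisions it depends on, and draw a random
# contribution table with one entry per configuration of its K+1 relevant bits.
neighbors = [rng.choice([j for j in range(N) if j != i], size=K, replace=False)
             for i in range(N)]
tables = [rng.random(2 ** (K + 1)) for _ in range(N)]

def fitness(x):
    """Task performance: mean contribution over the N decisions."""
    total = 0.0
    for i in range(N):
        bits = [x[i]] + [x[j] for j in neighbors[i]]
        idx = int("".join(map(str, bits)), 2)     # index into the contribution table
        total += tables[i][idx]
    return total / N

# One step of individual learning: flip the single decision that helps most.
x = list(rng.integers(0, 2, size=N))
best_x, best_f = x, fitness(x)
for i in range(N):
    y = x.copy()
    y[i] = 1 - y[i]
    if fitness(y) > best_f:
        best_x, best_f = y, fitness(y)

print("initial configuration:", x, "fitness:", round(fitness(x), 3))
print("after one local step :", best_x, "fitness:", round(best_f, 3))
```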
32612,gn,"Ongoing school closures and gradual reopenings have been occurring since the
beginning of the COVID-19 pandemic. One substantial cost of school closure is
breakdown in channels of reporting of violence against children, in which
schools play a considerable role. There is, however, little evidence
documenting how widespread such a breakdown in reporting of violence against
children has been, and scant evidence exists about potential recovery in
reporting as schools re-open. We study all formal criminal reports of violence
against children occurring in Chile up to December 2021, covering physical,
psychological, and sexual violence. This is combined with administrative
records of school re-opening, attendance, and epidemiological and public health
measures. We observe sharp declines in violence reporting at the moment of
school closure across all classes of violence studied. Estimated reporting
declines range from -17% (rape), to -43% (sexual abuse). While reports rise
with school re-opening, recovery of reporting rates is slow. Conservative
projections suggest that reporting gaps remained into the final quarter of
2021, nearly two years after initial school closures. Our estimates suggest
that school closure and incomplete re-opening resulted in around 2,800
`missing' reports of intra-family violence, 2,000 missing reports of sexual
assault, and 230 missing reports of rape against children, equivalent to
between 10-25 weeks of reporting in baseline periods. The immediate and longer
term impacts of school closures account for between 40-70% of `missing' reports
in the post-COVID period.",Schools as a Safety-net: The Impact of School Closures and Reopenings on Rates of Reporting of Violence Against Children,2022-06-29 16:10:06,"Damian Clarke, Pilar Larroulet, Daniel Pailañir, Daniela Quintana","http://arxiv.org/abs/2206.14612v1, http://arxiv.org/pdf/2206.14612v1",econ.GN
32613,gn,"Using images containing information on wealth, this research investigates
that pictures are capable of reliably predicting the economic prosperity of
households. Without surveys on wealth-related information and human-made
standard of wealth quality that the traditional wealth-based approach relied
on, this novel approach makes use of only images posted on Dollar Street as
input data on household wealth across 66 countries and predicts the consumption
or income level of each household using the Convolutional Neural Network (CNN)
method. The best result predicts the log of consumption level with root mean
squared error of 0.66 and R-squared of 0.80 in CNN regression problem. In
addition, this simple model also performs well in classifying extreme poverty
with an accuracy of 0.87 and F-beta score of 0.86. Since the model shows a
higher performance in the extreme poverty classification when I applied the
different threshold of poverty lines to countries by their income group, it is
suggested that the decision of the World Bank to define poverty lines
differently by income group was valid.",Predicting Economic Welfare with Images on Wealth,2022-06-29 11:53:39,Jeonggil Song,"http://arxiv.org/abs/2206.14810v1, http://arxiv.org/pdf/2206.14810v1",econ.GN
32614,gn,"A leading explanation for widespread replication failures is publication
bias. I show in a simple model of selective publication that, contrary to
common perceptions, the replication rate is unaffected by the suppression of
insignificant results in the publication process. I show further that the
expected replication rate falls below intended power owing to issues with
common power calculations. I empirically calibrate a model of selective
publication and find that power issues alone can explain the entirety of the
gap between the replication rate and intended power in experimental economics.
In psychology, these issues explain two-thirds of the gap.",Can the Replication Rate Tell Us About Publication Bias?,2022-06-30 08:23:08,Patrick Vu,"http://arxiv.org/abs/2206.15023v2, http://arxiv.org/pdf/2206.15023v2",econ.GN
32615,gn,"Information frictions can harm the welfare of participants in two-sided
matching markets. Consider a centralized admissions process, where colleges cannot
observe students' preparedness for success in a particular major or degree
program. Colleges choose between using simple, cheap admission criteria, e.g.,
high school grades as a proxy for preparedness, or screening all applications,
which is time-consuming for both students and colleges. To address issues of
fairness and welfare, we introduce two novel mechanisms that allow students to
disclose private information voluntarily and thus only require partial
screening. The mechanisms are based on Deferred Acceptance and preserve its
core strategic properties of credible preference revelation, particularly
ordinal strategy-proofness. In addition, we demonstrate conditions for which
cardinal welfare improves for market participants compared to not screening.
Intuitively, students and colleges benefit from voluntary information
disclosure if public information about students correlates weakly with
students' private information and the cost of processing disclosed information
is sufficiently low. Finally, we present empirical evidence from the Danish
higher education system that supports critical features of our model. Our work
has policy implications for the mechanism design of large two-sided markets
where information frictions are inherent.",Voluntary Information Disclosure in Centralized Matching: Efficiency Gains and Strategic Properties,2022-06-30 10:55:19,"Andreas Bjerre-Nielsen, Emil Chrisander","http://arxiv.org/abs/2206.15096v1, http://arxiv.org/pdf/2206.15096v1",econ.GN
32616,gn,"Most organizations rely on managers to identify talented workers for
promotions. However, managers who are evaluated on team performance have an
incentive to hoard workers. This study provides the first empirical evidence of
talent hoarding using novel personnel records from a large manufacturing firm.
Temporary reductions of talent hoarding increase workers' applications for
promotions by 123%. By reducing the quality and performance of promoted
workers, talent hoarding contributes to misallocation of talent and perpetuates
gender inequality in representation and pay at the firm.",Talent Hoarding in Organizations,2022-06-30 11:02:15,Ingrid Haegele,"http://arxiv.org/abs/2206.15098v1, http://arxiv.org/pdf/2206.15098v1",econ.GN
32617,gn,"This paper examined the economic consequences of the COVID-19 pandemic on
sub-Saharan Africa (SSA) using the historical approach and analysing the policy
responses of the region to past crises and their economic consequences. The
study employed the manufacturing-value-added share of GDP as a performance
indicator. The analysis shows that the wrong policy interventions to past
crises led the sub-Saharan African sub-region into its deplorable economic
situation. The study observed that the region leapfrogged prematurely to import
substitution, export promotion, and global value chains. Based on these
experiences, the region should adopt a gradual approach in responding to the
COVID-19 economic consequences. The sub-region should first address relevant
areas of sustainability, including proactive investment in research and
development to develop homegrown technology, upgrade essential infrastructural
facilities, develop security infrastructure, and strengthen the financial
sector.",Economic Consequences of the COVID-19 Pandemic on Sub-Saharan Africa: A historical perspective,2022-07-02 00:35:26,"Anthony Enisan Akinlo, Segun Michael Ojo","http://arxiv.org/abs/2207.00666v1, http://arxiv.org/pdf/2207.00666v1",econ.GN
32618,gn,"The study examined the relationship between capital market performance and
the macroeconomic dynamics in Nigeria, and it utilized secondary data spanning
1993 to 2020. The data was analyzed using vector error correction model (VECM)
technique. The result revealed a significant long run relationship between
capital market performance and macroeconomic dynamics in Nigeria. We observed
long run causality running from the exchange rate, inflation, money supply, and
unemployment rate to capital market performance indicator in Nigeria. The
result supports the Arbitrage Pricing Theory (APT) proposition in the Nigerian
context. The theory stipulates that the linear relationship between an asset's
expected returns and the macroeconomic factors whose dynamics affect the asset's
risk can forecast the asset's returns. In other words, the result of this study
supports the proposition that the dynamics in the exchange rate, inflation,
money supply, and unemployment rate influence the capital market performance.
The study validates the recommendations of Arbitrage Pricing Theory (APT) in
Nigeria.",Capital Market Performance and Macroeconomic Dynamics in Nigeria,2022-07-02 11:33:01,"Oladapo Fapetu, Segun Michael Ojo, Adekunle Alexander Balogun, Adeoba Adepoju Asaolu","http://arxiv.org/abs/2207.00773v1, http://arxiv.org/pdf/2207.00773v1",econ.GN
32619,gn,"Ridesourcing is popular in many cities. Despite its theoretical benefits, a
large body of studies has claimed that ridesourcing also brings (negative)
externalities (e.g., inducing trips and aggravating traffic congestion).
Therefore, many cities are planning to enact or have already enacted policies
to regulate its use. However, these policies' effectiveness or impact on
ridesourcing demand and traffic congestion is uncertain. To this end, this
study applies difference-in-differences (i.e., a regression-based causal
inference approach) to empirically evaluate the effects of the congestion tax
policy on ridesourcing demand and traffic congestion in Chicago. It shows that
this congestion tax policy significantly curtails overall ridesourcing demand
but marginally alleviates traffic congestion. The results are robust to the
choice of time windows and data sets, additional control variables, alternative
model specifications, alternative control groups, and alternative modeling
approaches (i.e., regression discontinuity in time). Moreover, considerable
heterogeneity exists. For example, the policy notably reduces ridesourcing
demand with short travel distances, but such an impact is gradually attenuated
as the distance increases.",The Short-term Impact of Congestion Taxes on Ridesourcing Demand and Traffic Congestion: Evidence from Chicago,2022-07-05 06:42:52,"Yuan Liang, Bingjie Yu, Xiaojian Zhang, Yi Lu, Linchuan Yang","http://arxiv.org/abs/2207.01793v3, http://arxiv.org/pdf/2207.01793v3",econ.GN
32620,gn,"With some of the world's most ambitious renewable energy (RE) growth targets,
especially when normalized for scale, India aims to more than quadruple wind and
solar by 2030. Simultaneously, coal dominates the electricity grid, providing
roughly three-quarters of electricity today. We present results from a
first-of-a-kind model to handle high uncertainty, which uses parametric analysis
instead of stochastic analysis for grid balancing based on economic despatch
through 2030, covering 30-minute resolution granularity at a national level.
The model assumes a range of growing demand, supply options, prices, and other
uncertain inputs. It calculates the lowest cost portfolio across a spectrum of
parametric uncertainty. We apply simplifications to handle the intersection of
capacity planning with optimized despatch. Our results indicate that very high
RE scenarios are cost-effective, even if a measurable fraction would be surplus
and thus discarded (""curtailed""). We find that high RE without storage as well
as existing slack in coal- and gas-powered capacity are insufficient to meet
rising demand on a real-time basis, especially when adding time-of-day balancing.
Storage technologies prove valuable but remain expensive compared to the 2019
portfolio mix, due to issues of duty cycling like seasonal variability, not
merely inherent high capital costs. However, examining alternatives to
batteries for future growth finds all solutions for peaking power are even more
expensive. For balancing at peak times, a smarter grid that applies demand
response may be cost-effective. We also find the need for more sophisticated
modelling with higher stochasticity across annual timeframes (especially year
on year changes in wind output, rainfall, and demand) along with uncertainty on
supply and load profiles (shapes).",Balancing India's 2030 Electricity Grid Needs Management of Time Granularity and Uncertainty: Insights from a Parametric Model,2022-07-01 12:00:39,Rahul Tongia,"http://dx.doi.org/10.1007/s41403-022-00350-2, http://arxiv.org/abs/2207.02151v1, http://arxiv.org/pdf/2207.02151v1",econ.GN
32621,gn,"When funding public goods, resources are often allocated via mechanisms that
resemble contests, especially in the case of scientific grants. A common
critique of these contests is that they induce ""too much"" effort from
participants working on applications. However, this paper emphasizes the
importance of understanding the externalities associated with participation in
these contests before drawing conclusions about the optimal mechanism design.
Survey-based estimates suggest that the social costs of time spent on
scientific grant applications may not be a first-order concern in
non-emergencies. Still, further research is required to better understand how
scientists compete in grant contests.",The potential benefits of costly applications in grant contests,2022-07-06 04:02:38,Kyle R. Myers,"http://arxiv.org/abs/2207.02379v1, http://arxiv.org/pdf/2207.02379v1",econ.GN
32622,gn,"In many models in economics or business a dominantly self-interested homo
economicus is assumed. Unfortunately (or fortunately), humans are in general
not homines economici, as e.g. the ultimatum game shows, which renders all
these models at least doubtful. Moreover, economists have started to assign
quantitative values to feelings such as social justice, altruism, or envy in
order to perform utilitarian calculations. Besides being ethically doubtful,
this delivers an explanation only in hindsight, with little predictive power.
We use examples from game theory to show its arbitrariness. It is even possible
that a stable Nash equilibrium can be calculated although it does not exist at
all, owing to the wide differences in human values. Finally, we show that
numbers assigned to envy, altruism, and the like do not form a field (in the
mathematical sense). As there is no homomorphism to the real numbers or a
subset of them, any calculation is generally invalid or arbitrary. There is no
(easy) way to fix the problem. One has to go back to ethical concepts like the
categorical imperative or use at most semi-quantitative approaches such as
considering knaves
and knights. Mathematically one can only speculate whether e.g. surreal numbers
can make ethics calculable.",Homo economicus to model human behavior is ethically doubtful and mathematically inconsistent,2022-07-05 10:21:37,"M. Lunkenheimer, A. Kracklauer, G. Klinkova, M. Grabinski","http://arxiv.org/abs/2207.02902v1, http://arxiv.org/pdf/2207.02902v1",econ.GN
32623,gn,"There is a rapid development and commercialization of new Energy Vehicles
(NEV) in recent years. Although traditional fuel vehicles (TFV) still occupy a
majority share of the market, it is generally believed that NEV is more
efficient, more environmental friendly, and has a greater potential of a
Schumpeterian ""creative destruction"" that may lead to a paradigm shift in auto
production and consumption. However, less is discussed regarding the potential
environmental impact of NEV production and future uncertainty in R&D bottleneck
of NEV technology and innovation. This paper aims to propose a modelling
framework based on Lux (1995) that investigates the long-term dynamics of TFV
and NEV, along with their associated environmental externality. We argue that
environmental and technological policies will play a critical role in
determining its future development. It is of vital importance to constantly
monitor the potential environmental impact of both sectors and support the R&D
of critical NEV technology, as well as curbing its negative externality in a
preemptive manner.",The Future of Traditional Fuel Vehicles (TFV) and New Energy Vehicles (NEV): Creative Destruction or Co-existence?,2022-07-08 06:19:54,"Zhaojia Huang, Liang Zhang, Tianhao Zhi","http://arxiv.org/abs/2207.03672v1, http://arxiv.org/pdf/2207.03672v1",econ.GN
32624,gn,"We generate a continuous measure of health to estimate a non-parametric model
of health dynamics, showing that adverse health shocks are highly persistent
when suffered by people in poor health. Canonical models cannot account for
this pattern. We incorporate this health dynamic into a life-cycle model of
consumption, savings, and labor force participation. After estimating the model
parameters, we simulate the effects of health shocks on economic outcomes. We
find that bad health shocks have long-term adverse economic effects that are
more extreme for those in poor health. Furthermore, bad health shocks also
increase the disparity of asset accumulation among this group of people. A
canonical model of health dynamics would not reveal these effects.",The welfare effects of nonlinear health dynamics,2022-07-08 13:50:27,"Chiara Dal Bianco, Andrea Moro","http://arxiv.org/abs/2207.03816v3, http://arxiv.org/pdf/2207.03816v3",econ.GN
32625,gn,"I construct the professor-student network for laureates of and candidates for
the Nobel Prize in Economics. I study the effect of proximity to previous
Nobelists on winning the Nobel Prize. Conditional on being Nobel-worthy,
students and grandstudents of Nobel laureates are significantly less likely to
win. Professors and fellow students of Nobel Prize winners, however, are
significantly more likely to win.",Nobel begets Nobel,2022-07-10 14:33:45,Richard S. J. Tol,"http://arxiv.org/abs/2207.04441v2, http://arxiv.org/pdf/2207.04441v2",econ.GN
32626,gn,"This paper provides a novel theory of research joint ventures for financially
constrained firms. When firms choose R&D portfolios, an RJV can help to
coordinate research efforts, reducing investments in duplicate projects. This
can free up resources, increase the variety of pursued projects and thereby
increase the probability of discovering the innovation. RJVs improve innovation
outcomes when market competition is weak and external financing conditions are
bad. An RJV may increase the innovation probability and nevertheless lower
total R&D costs. RJVs that increase innovation also increase consumer surplus
and tend to be profitable, but innovation-reducing RJVs also exist. Finally, we
compare RJVs to innovation-enhancing mergers.",Research Joint Ventures: The Role of Financial Constraints,2022-07-11 16:33:03,"Philipp Brunner, Igor Letina, Armin Schmutzler","http://arxiv.org/abs/2207.04856v3, http://arxiv.org/pdf/2207.04856v3",econ.GN
32627,gn,"This paper studies the short-term effects of ambient temperature on mental
health using data on nearly half a million helpline calls in Germany.
Leveraging location-based routing of helpline calls and random day-to-day
weather fluctuations, I find a negative effect of temperature extremes on
mental health as revealed by an increase in the demand for telephone counseling
services. On days with an average temperature above 25{\deg}C (77{\deg}F) and
below 0{\deg}C (32{\deg}F), call volume is 3.4 and 5.1 percent higher,
respectively, than on mid-temperature days. Mechanism analysis reveals
pronounced adverse effects of cold temperatures on social and psychological
well-being and of hot temperatures on psychological well-being and violence.
More broadly, the findings of this work contribute to our understanding of how
changing climatic conditions will affect population mental health and
associated social costs in the near future.",Temperature and Mental Health: Evidence from Helpline Calls,2022-07-11 19:34:45,Benedikt Janzen,"http://arxiv.org/abs/2207.04992v2, http://arxiv.org/pdf/2207.04992v2",econ.GN
32628,gn,"Despite the increasing integration of the global economic system,
anti-dumping measures are a common tool used by governments to protect their
national economy. In this paper, we propose a methodology to detect cases of
anti-dumping circumvention through re-routing trade via a third country. Based
on the observed full network of trade flows, we propose a measure to proxy the
evasion of an anti-dumping duty for a subset of trade flows directed to the
European Union, and look for possible cases of circumvention of an active
anti-dumping duty. Using panel regression, we are able to correctly classify 86%
of the trade flows, on which an investigation of anti-dumping circumvention has
been opened by the European authorities.",Detecting Anti-dumping Circumvention: A Network Approach,2022-07-12 11:56:44,"Luca Barbaglia, Christophe Croux, Ines Wilms","http://arxiv.org/abs/2207.05394v1, http://arxiv.org/pdf/2207.05394v1",econ.GN
32630,gn,"Extensive empirical studies show that the long distribution tail of travel
time and the corresponding unexpected delay can have much more serious
consequences than expected or moderate delay. However, the unexpected delay due
to the distribution tail of travel time has received limited attention in
recent studies of the valuation of travel time variability. As a complement to
current valuation research, this paper proposes the concept of the value of
travel time distribution tail, which quantifies the value that travelers place
on reducing the unexpected delay for hedging against travel time variability.
Methodologically, we define the summation of all unexpected delays as the
unreliability area to quantify travel time distribution tail and show that it
is a key element of two well-defined measures accounting for unreliable aspects
of travel time. We then formally derive the value of distribution tail, show
that it is distinct from the more established value of reliability (VOR), and
combine it and the VOR in an overall value of travel time variability (VOV). We
prove theoretically that the VOV exhibits diminishing marginal benefit in terms
of the traveler's punctuality requirements under a validity condition. This
implies that it may be economically inefficient for travelers to blindly pursue
a higher probability of not being late. We then proceed to develop the concept
of the travel time variability ratio, which gives the implicit cost of the
punctuality requirement imposed on any given trip. Numerical examples reveal
that the cost of travel time distribution tail can account for more than 10% of
the trip cost, such that its omission could introduce non-trivial bias into
route choice models and transportation appraisal more generally.",On the value of distribution tail in the valuation of travel time variability,2022-07-13 18:46:42,"Zhaoqi Zang, Richard Batley, Xiangdong Xu, David Z. W. Wang","http://arxiv.org/abs/2207.06293v3, http://arxiv.org/pdf/2207.06293v3",econ.GN
32631,gn,"The cost and affordability of least-cost healthy diets by time and place are
increasingly used as a proxy for access to nutrient-adequate diets. Recent work
has focused on the nutrient requirements of individuals, although most food and
anti-poverty programs target whole households. This raises the question of how
the cost of a nutrient-adequate diet can be measured for an entire household.
This study identifies upper and lower bounds on the feasibility, cost, and
affordability of meeting all household members' nutrient requirements using
2013-2017 survey data from Malawi. Findings show only a minority of households
can afford the nutrient-adequate diet at either bound, with 20% of households
able to afford the (upper bound) shared diets and 38% the individualized (lower
bound) diets. Individualized diets are more frequently feasible with locally
available foods (90% vs. 60% of the time) and exhibit more moderate seasonal
fluctuation. To meet all members' needs, a shared diet requires a more
nutrient-dense combination of foods that is more costly and exhibits more
seasonality in diet cost than any one food group or the individualized diets.
The findings further help adjudicate the extent to which nutritional behavioral
change programs versus broader agricultural and food policies can be relied
upon to improve individual access to healthy diets.",Assessing the Affordability of Nutrient-Adequate Diets,2022-07-15 03:15:30,"Kate R. Schneider, Luc Christiaensen, Patrick Webb, William A. Masters","http://arxiv.org/abs/2207.07240v1, http://arxiv.org/pdf/2207.07240v1",econ.GN
32632,gn,"Debt aversion can have severe adverse effects on financial decision-making.
We propose a model of debt aversion, and design an experiment involving real
debt and saving contracts, to elicit and jointly estimate debt aversion with
preferences over time, risk and losses. Structural estimations reveal that the
vast majority of participants (89%) are debt averse, and that this has a strong
impact on choice. We estimate the ""borrowing premium"", the compensation a
debt-averse person would require to accept getting into debt, to be around 16% of
the principal for our average participant.",Debt Aversion: Theory and Measurement,2022-07-15 18:30:28,"Thomas Meissner, David Albrecht","http://arxiv.org/abs/2207.07538v2, http://arxiv.org/pdf/2207.07538v2",econ.GN
32633,gn,"The purpose of this research was to identify commonly adopted SAPs and their
adoption among Kentucky farmers. The specific objectives were to explore
farmers' Perceptions about farm and farming practice sustainability, to
identify predictors of SAPs adoption using farm attributes, farmers' attitudes
and behaviors, socioeconomic and demographic factors, and knowledge, and to
evaluate adoption barriers of SAPs among Kentucky Farmers. Farmers generally
perceive that their farm and farming activities attain the objectives of
sustainable agriculture. Inadequate knowledge, perceived difficulty of
implementation, lack of market, negative attitude about technologies, and lack
of technologies were major adoption barriers of SAPs in Kentucky.",Adoption of Sustainable Agricultural Practices among Kentucky Farmers and Their Perception about Farm Sustainability,2022-07-17 05:01:21,Bijesh Mishra,"http://dx.doi.org/10.1007/s00267-018-1109-3, http://arxiv.org/abs/2207.08053v1, http://arxiv.org/pdf/2207.08053v1",econ.GN
32634,gn,"China's structural changes have brought new challenges to its regional
employment structures, entailing labour redistribution. By now Chinese research
on migration decisions with a forward-looking stance and on bilateral
longitudinal determinants at the prefecture-city level is almost non-existent.
This paper investigates the effects of sector-based job prospects on individual
migration decisions across prefecture boundaries. To this end, we created a
proxy variable for job prospects, compiled a unique quasi-panel of 66,427
individuals from 283 cities during 1997-2017, introduced reference-dependence
to the random utility maximisation model of migration in a sequential setting,
derived empirical specifications with theoretical micro-foundations, and
applied various monadic and dyadic fixed effects to address multilateral
resistance to migration. Multilevel logit models and two-step system GMM
estimation were adopted for the robustness check. Our primary findings are that
a 10% increase in the ratio of sector-based job prospects in cities of
destination to cities of origin raises the probability of migration by
1.281--2.185 percentage points, and the effects tend to be stronger when the
scale of the ratio is larger. Having a family migration network causes an
increase of approximately 6 percentage points in migratory probabilities.
Further, labour migrants are more likely to be male, unmarried, younger, or
more educated. Our results suggest that the ongoing industrial reform in China
influences labour mobility between cities, providing important insights for
regional policymakers to prevent brain drain and to attract relevant talent.",Job Prospects and Labour Mobility in China,2022-07-17 23:32:19,"Huaxin Wang-Lu, Octasiano Miguel Valerio Mendoza","http://dx.doi.org/10.1080/09638199.2022.2157463, http://arxiv.org/abs/2207.08282v2, http://arxiv.org/pdf/2207.08282v2",econ.GN
32636,gn,"Early-life environments can have long-lasting developmental effects.
Interestingly, research on how school reforms affect later-life study behavior
has hardly adopted this perspective. Therefore, we investigated a staggered
school reform that reduced the number of school years and increased weekly
instructional time for secondary school students in most German federal states.
We analyzed this quasi-experiment in a difference-in-differences framework
using representative large-scale survey data on 71,426 students who attended
university between 1998 and 2016. We found negative effects of reform exposure
on hours spent attending classes and on self-study, and a larger time gap
between school completion and higher education entry. Our results support the
view that research should examine unintended long-term effects of school
reforms on individual life courses.",Do school reforms shape study behavior at university? Evidence from an instructional time reform,2022-07-20 15:04:41,"Jakob Schwerter, Nicolai Netz, Nicolas Hübner","http://arxiv.org/abs/2207.09843v1, http://arxiv.org/pdf/2207.09843v1",econ.GN
32637,gn,"We study the impact of non-pharmaceutical interventions (NPIs) on mortality
and economic activity across U.S. cities during the 1918 Flu Pandemic. The
combination of fast and stringent NPIs reduced peak mortality by 50% and
cumulative excess mortality by 24% to 34%. However, while the pandemic itself
was associated with short-run economic disruptions, we find that these
disruptions were similar across cities with strict and lenient NPIs. NPIs also
did not worsen medium-run economic outcomes. Our findings indicate that NPIs
can reduce disease transmission without further depressing economic activity, a
finding also reflected in discussions in contemporary newspapers.","Pandemics Depress the Economy, Public Health Interventions Do Not: Evidence from the 1918 Flu",2022-07-24 05:02:46,"Sergio Correia, Stephan Luck, Emil Verner","http://dx.doi.org/10.1017/S0022050722000407, http://arxiv.org/abs/2207.11636v1, http://arxiv.org/pdf/2207.11636v1",econ.GN
32638,gn,"Earlier meta-analyses of the economic impact of climate change are updated
with more data, with three new results: (1) The central estimate of the
economic impact of global warming is always negative. (2) The confidence
interval about the estimates is much wider. (3) Elicitation methods are most
pessimistic, econometric studies most optimistic. Two previous results remain:
(4) The uncertainty about the impact is skewed towards negative surprises. (5)
Poorer countries are much more vulnerable than richer ones. A meta-analysis of
the impact of weather shocks reveals that studies that relate economic growth
to temperature levels cannot agree on the sign of the impact, whereas studies
that make economic growth a function of temperature change do agree on the
sign but differ by an order of magnitude in effect size. The former studies posit
that climate change has a permanent effect on economic growth, the latter that
the effect is transient. The impact on economic growth implied by studies of
the impact of climate change is close to the growth impact estimated as a
function of weather shocks. The social cost of carbon shows a similar pattern
to the total impact estimates, but with more emphasis on the impacts of
moderate warming in the near and medium term.",A meta-analysis of the total economic impact of climate change,2022-07-25 16:39:24,Richard S. J. Tol,"http://arxiv.org/abs/2207.12199v3, http://arxiv.org/pdf/2207.12199v3",econ.GN
32639,gn,"We present a deep learning solution to address the challenges of simulating
realistic synthetic first-price sealed-bid auction data. The complexities
encountered in this type of auction data include high-cardinality discrete
feature spaces and a multilevel structure arising from multiple bids associated
with a single auction instance. Our methodology combines deep generative
modeling (DGM) with an artificial learner that predicts the conditional bid
distribution based on auction characteristics, contributing to advancements in
simulation-based research. This approach lays the groundwork for creating
realistic auction environments suitable for agent-based learning and modeling
applications. Our contribution is twofold: we introduce a comprehensive
methodology for simulating multilevel discrete auction data, and we underscore
the potential of DGM as a powerful instrument for refining simulation
techniques and fostering the development of economic models grounded in
generative AI.",Implementing a Hierarchical Deep Learning Approach for Simulating Multi-Level Auction Data,2022-07-25 18:22:04,"Marcelin Joanis, Andrea Lodi, Igor Sadoune","http://arxiv.org/abs/2207.12255v3, http://arxiv.org/pdf/2207.12255v3",econ.GN
32640,gn,"The purpose of this paper is to investigate if a country quality of
governance moderates the effect of natural disasters on startup activity within
that country. We test our hypotheses using a panel of 95 countries from 2006 to
2016. Our findings suggest that natural disasters discourage startup activity
in countries that have low quality governance but encourage startup activity in
countries that have high quality governance. Moreover, our estimates reveal
that natural disasters' effects on startup activity persist for the short term
(1-3 years) but not the long term. Our findings provide new insights into how
natural disasters affect entrepreneurship activity and highlight the importance
of country governance during these events.","Natural Disasters, Entrepreneurship Activity, and the Moderating Role of Country Governance",2022-07-25 22:35:10,"Christopher Boudreaux, Anand Jha, Monica Escaleras","http://arxiv.org/abs/2207.12492v1, http://arxiv.org/pdf/2207.12492v1",econ.GN
32641,gn,"We evaluate the forecasting performance of a wide set of robust inflation
measures between 1960 and 2022, including official median and trimmed-mean
personal-consumption-expenditure inflation. When trimming out different
expenditure categories with the highest and lowest inflation rates, we find
that the optimal trim points vary widely across time and also depend on the
choice of target; optimal trims are higher when targeting future trend
inflation or for a 1970s-1980s subsample. Surprisingly, there are no grounds to
select a single series on the basis of forecasting performance. A wide range of
trims, including those of the official robust measures, have an average
prediction error that makes them statistically indistinguishable from the
best-performing trim. Despite indistinguishable average errors, these trims
imply different predictions for trend inflation in any given month, within a
range of 0.5 to 1 percentage points, suggesting the use of a set of
near-optimal trims.",Extending the Range of Robust PCE Inflation Measures,2022-07-25 22:46:55,"Sergio Ocampo, Raphael Schoenle, Dominic A. Smith","http://arxiv.org/abs/2207.12494v2, http://arxiv.org/pdf/2207.12494v2",econ.GN
32648,gn,"We measure bond and stock conditional return volatility as a function of
changes in sentiment, proxied by six indicators from the Tel Aviv Stock
Exchange. We find that changes in sentiment affect conditional volatilities at
different magnitudes and often in an opposite manner in the two markets,
subject to market states. We are the first to measure bonds' conditional
volatility in response to retail investors' sentiment, thanks to a unique dataset of
corporate bond returns from a limit-order-book with highly active retail
traders. This market structure differs from the prevalent OTC platforms, where
institutional investors are active yet less prone to sentiment.",The Impact of Retail Investors Sentiment on Conditional Volatility of Stocks and Bonds,2022-08-02 18:36:21,"Elroi Hadad, Haim Kedar-Levy","http://arxiv.org/abs/2208.01538v1, http://arxiv.org/pdf/2208.01538v1",econ.GN
32642,gn,"In this paper, we compare different methods to extract skill requirements
from job advertisements. We consider three top-down methods that are based on
expert-created dictionaries of keywords, and a bottom-up method of unsupervised
topic modeling, the Latent Dirichlet Allocation (LDA) model. We measure the
skill requirements based on these methods using a U.K. dataset of job
advertisements that contains over 1 million entries. We estimate the returns of
the identified skills using wage regressions. Finally, we compare the different
methods by the wage variation they can explain, assuming that better-identified
skills will explain a higher fraction of the wage variation in the labor
market. We find that the top-down methods perform worse than the LDA model, as
they can explain only about 20% of the wage variation, while the LDA model
explains about 45% of it.",Skill requirements in job advertisements: A comparison of skill-categorization methods based on explanatory power in wage regressions,2022-07-26 14:58:44,"Ziqiao Ao, Gergely Horvath, Chunyuan Sheng, Yifan Song, Yutong Sun","http://arxiv.org/abs/2207.12834v1, http://arxiv.org/pdf/2207.12834v1",econ.GN
32643,gn,"The IPCC started at a time when climate policy was an aspiration for the
future. The research assessed in the early IPCC reports was necessarily about
potential climate policies, always stylized and often optimized. The IPCC has
continued on this path, even though there is now a considerable literature
studying actual climate policy, in all its infuriating detail, warts and all.
Four case studies suggest that the IPCC, in its current form, will not be able
to successfully switch from ex ante to ex post policy evaluation. This
transition is key as AR7 will most likely have to confront the failure to meet
the 1.5K target. The four cases are as follows. (1) The scenarios first built
and later endorsed by the IPCC all project a peaceful future with steady if not
rapid economic growth everywhere, more closely resembling political manifestos
than facts on the ground. (2) Successive IPCC reports have studiously avoided
discussing the voluminous literature suggesting that political targets for
greenhouse gas emission reduction are far from optimal, although a central part
of that work was awarded the Nobel Prize in 2018. (3) IPCC AR5 found it
impossible to acknowledge that the international climate policy negotiations
from COP1 (Berlin) to COP19 (Warsaw) were bound to fail, just months before the
radical overhaul at COP20 (Lima) proved that point. (4) IPCC AR6 by and large
omitted the nascent literature on ex post climate policy evaluation.
Together, these cases suggest that the IPCC finds self-criticism difficult and
is too close to policy makers to criticize past and current policy mistakes.
One solution would be to move control over the IPCC to the national authorities
on research and higher education.",The IPCC and the challenge of ex post policy evaluation,2022-07-29 17:56:45,Richard S. J. Tol,"http://arxiv.org/abs/2207.14724v2, http://arxiv.org/pdf/2207.14724v2",econ.GN
32644,gn,"We examine the allocation of a limited pool of matching funds to public good
projects using Quadratic Funding. In particular, we consider a variation of the
Capital Constrained Quadratic Funding (CQF) mechanism proposed by Buterin,
Hitzig and Weyl (2019) where only funds in the matching pool are distributed
among projects. We show that this mechanism achieves a socially optimal
allocation of limited funds.",Optimal Allocation of Limited Funds in Quadratic Funding,2022-07-29 19:31:35,Ricardo A. Pasquini,"http://arxiv.org/abs/2207.14775v2, http://arxiv.org/pdf/2207.14775v2",econ.GN
32645,gn,"The notion of the ""adjacent possible"" has been advanced to theorize the
generation of novelty across many different research domains. This study is an
attempt to examine in what way the notion can be made empirically useful for
innovation studies. A theoretical framework is constructed based on the notion
of innovation as a search process of recombining knowledge to discover the ""adjacent
possible"". The framework makes testable predictions about the rate of
innovation, the distribution of innovations across organizations, and the rate
of diversification of product portfolios. The empirical section examines how
well this framework predicts long-run patterns of new product introductions in
Sweden, 1908-2016 and examines the long-run evolution of the product space of
Swedish organizations. The results suggest that, remarkably, the rate of
innovation depends linearly on cumulative innovations, which explains
advantages of incumbent firms, but excludes the emergence of ""winner takes all""
distributions. The results also suggest that the rate of development of new
types of products follows ""Heaps' law"", where the share of new product types
within organizations declines over time. The topology of the Swedish product
space carries information about future product diversifications, suggesting
that the adjacent possible is not altogether ""unprestatable"".",Long-run patterns in the discovery of the adjacent possible,2022-08-01 17:51:25,Josef Taalbi,"http://arxiv.org/abs/2208.00907v2, http://arxiv.org/pdf/2208.00907v2",econ.GN
32646,gn,"How accurately can behavioral scientists predict behavior? To answer this
question, we analyzed data from five studies in which 640 professional
behavioral scientists predicted the results of one or more behavioral science
experiments. We compared the behavioral scientists' predictions to random
chance, linear models, and simple heuristics like ""behavioral interventions
have no effect"" and ""all published psychology research is false."" We find that
behavioral scientists are consistently no better than - and often worse than -
these simple heuristics and models. Behavioral scientists' predictions are not
only noisy but also biased. They systematically overestimate how well
behavioral science ""works"": overestimating the effectiveness of behavioral
interventions, the impact of psychological phenomena like time discounting, and
the replicability of published psychology research.",Simple models predict behavior at least as well as behavioral scientists,2022-08-02 02:00:08,Dillon Bowen,"http://arxiv.org/abs/2208.01167v1, http://arxiv.org/pdf/2208.01167v1",econ.GN
32647,gn,"The paper contains the online supplementary materials for ""Data-Driven
Prediction and Evaluation on Future Impact of Energy Transition Policies in
Smart Regions"". We review the renewable energy development and policies in the
three metropolitan cities/regions over recent decades. Depending on the
geographic variations in the types and quantities of renewable energy resources
and the levels of policymakers' commitment to carbon neutrality, we classify
Singapore, London, and California as case studies at the primary, intermediate,
and advanced stages of the renewable energy transition, respectively.","Review of Energy Transition Policies in Singapore, London, and California",2022-08-02 16:10:10,"Chunmeng Yang, Siqi Bu, Yi Fan, Wayne Xinwei Wan, Ruoheng Wang, Aoife Foley","http://arxiv.org/abs/2208.01433v1, http://arxiv.org/pdf/2208.01433v1",econ.GN
33806,th,"We develop an axiomatic theory of information acquisition that captures the
idea of constant marginal costs in information production: the cost of
generating two independent signals is the sum of their costs, and generating a
signal with probability half costs half its original cost. Together with
Blackwell monotonicity and a continuity condition, these axioms determine the
cost of a signal up to a vector of parameters. These parameters have a clear
economic interpretation and determine the difficulty of distinguishing states.",The Cost of Information: The Case of Constant Marginal Costs,2018-12-11 06:57:50,"Luciano Pomatto, Philipp Strack, Omer Tamuz","http://arxiv.org/abs/1812.04211v4, http://arxiv.org/pdf/1812.04211v4",econ.TH
32649,gn,"The recent rise of sub-national minimum wage (MW) policies in the US has
resulted in significant dispersion of MW levels within urban areas. In this
paper, we study the spillover effects of these policies on local rental markets
through commuting. To do so, for each USPS ZIP code we construct a ""workplace""
MW measure based on the location of its resident's jobs, and use it to estimate
the effect of MW policies on rents. We use a novel identification strategy that
exploits the fine timing of differential changes in the workplace MW across ZIP
codes that share the same ""residence"" MW, defined as the same location's MW.
Our baseline results imply that a 10 percent increase in the workplace MW
increases rents at residence ZIP codes by 0.69 percent. To illustrate the
importance of commuting patterns, we use our estimates and a simple model to
simulate the impact of federal and city counterfactual MW policies. The
simulations suggest that landlords pocket approximately 10 cents of each dollar
generated by the MW across directly and indirectly affected areas, though the
incidence on landlords varies systematically across space.",From Workplace to Residence: The Spillover Effects of Minimum Wage Policies on Local Housing Markets,2022-08-03 03:12:40,"Gabriele Borg, Diego Gentile Passaro, Santiago Hermo","http://arxiv.org/abs/2208.01791v3, http://arxiv.org/pdf/2208.01791v3",econ.GN
32650,gn,"Regulation is a major driver of housing supply, yet often not easily
observed. Using only apartment prices and building heights, we estimate
$\textit{frontier costs}$, defined as housing production costs absent
regulation. Identification uses conditions on the support of supply and demand
shocks without recourse to instrumental variables. In an application to Israeli
residential construction, we find on average 43% of housing price ascribable to
regulation, but with substantial dispersion, and with higher rates in areas
that are higher priced, denser, and closer to city centers. We also find
economies of scale in frontier costs at low building heights. This estimation
takes into account measurement error, which includes random unobserved
structural quality. When allowing structural quality to vary with amenities
(locational quality), and assuming weak complementarity (the return in price on
structural quality is nondecreasing in amenities) among buildings within 1km,
we bound mean regulation from below by 19% of prices.",Regulation and Frontier Housing Supply,2022-08-03 13:45:41,"Dan Ben-Moshe, David Genesove","http://arxiv.org/abs/2208.01969v3, http://arxiv.org/pdf/2208.01969v3",econ.GN
32651,gn,"Standard rational expectations models with an occasionally binding zero lower
bound constraint either admit no solutions (incoherence) or multiple solutions
(incompleteness). This paper shows that deviations from full-information
rational expectations mitigate concerns about incoherence and incompleteness.
Models with no rational expectations equilibria admit self-confirming
equilibria involving the use of simple mis-specified forecasting models.
Completeness and coherence are restored if expectations are adaptive or if
agents are less forward-looking due to some information or behavioral friction.
In the case of incompleteness, the E-stability criterion selects an
equilibrium.",Coherence without Rationality at the Zero Lower Bound,2022-08-03 16:48:36,"Guido Ascari, Sophocles Mavroeidis, Nigel McClung","http://arxiv.org/abs/2208.02073v2, http://arxiv.org/pdf/2208.02073v2",econ.GN
32652,gn,"The aggregate ability of child care providers to meet local demand for child
care is linked to employment rates in many sectors of the economy. Amid growing
concern regarding child care provider sustainability due to the COVID-19
pandemic, state and local governments have received large amounts of new
funding to better support provider stability. In response to this new funding
aimed at bolstering the child care market in Florida, this study was devised as
an exploratory investigation into features of child care providers that lead to
business longevity. In this study we used optimal survival trees, a machine
learning technique designed to better understand which providers are expected
to remain operational for longer periods of time, supporting stabilization of
the child care market. This tree-based survival analysis detects and describes
complex interactions between provider characteristics that lead to differences
in expected business survival rates. Results show that small providers who are
religiously affiliated, and all providers who are serving children in Florida's
universal Prekindergarten program and/or children using child care subsidy, are
likely to have the longest expected survival rates.",Child Care Provider Survival Analysis,2022-08-03 18:42:53,"Phillip Sherlock, Herman T. Knopf, Robert Chapman, Maya Schreiber, Courtney K. Blackwell","http://arxiv.org/abs/2208.02154v1, http://arxiv.org/pdf/2208.02154v1",econ.GN
32653,gn,"Recent studies in psychology and neuroscience offer systematic evidence that
fictional works exert a surprisingly strong influence on readers and have the
power to shape their opinions and worldviews. Building on these findings, we
study what we term Potterian economics, the economic ideas, insights, and
structure, found in Harry Potter books, to assess how the books might affect
economic literacy. A conservative estimate suggests that more than 7.3 percent
of the world population has read the Harry Potter books, and millions more have
seen their movie adaptations. These extraordinary figures underscore the
importance of the messages the books convey. We explore the Potterian economic
model and compare it to professional economic models to assess the consistency
of the Potterian economic principles with the existing economic models. We find
that some of the principles of Potterian economics are consistent with
economists' models. Many other principles, however, are distorted and contain
numerous inaccuracies, contradicting professional economists' views and
insights. We conclude that Potterian economics can teach us about the formation
and dissemination of folk economics, the intuitive notions of naive individuals
who see market transactions as a zero-sum game, who care about distribution but
fail to understand incentives and efficiency, and who think of prices as
allocating wealth but not resources or their efficient use.",Potterian Economics,2022-08-06 21:47:56,"Daniel Levy, Avichai Snir","http://dx.doi.org/10.1093/ooec/odac004, http://arxiv.org/abs/2208.03564v1, http://arxiv.org/pdf/2208.03564v1",econ.GN
32654,gn,"Selection on moral hazard represents the tendency to select a specific health
insurance coverage depending on the heterogeneity in utilisation ""slopes"". I
use data from the Swiss Household Panel and from publicly available regulatory
data to explore the extent of selection on slopes in the Swiss managed
competition system. I estimate responses in terms of (log) doctor visits to
lowest and highest deductible levels using Roy-type models, identifying
marginal treatment effects with local instrumental variables. The response to
high coverage plans (i.e. plans with the lowest deductible level) among high
moral hazard types is 25-35 percent higher than average.",Selection on moral hazard in the Swiss market for mandatory health insurance: Empirical evidence from Swiss Household Panel data,2022-08-08 00:02:11,Francetic Igor,"http://arxiv.org/abs/2208.03815v2, http://arxiv.org/pdf/2208.03815v2",econ.GN
32655,gn,"On 11th Jan 2020, the first COVID-19 related death was confirmed in Wuhan,
Hubei. The Chinese government responded to the outbreak with a lockdown that
impacted most residents of Hubei province and lasted for almost three months.
At the time, the lockdown was the strictest both within China and worldwide.
Using an interactive web-based experiment conducted half a year after the
lockdown with participants from 11 Chinese provinces, we investigate the
behavioral effects of this `shock' event experienced by the population of
Hubei. We find that both one's place of residence and the strictness of
lockdown measures in their province are robust predictors of individual social
distancing behavior. Further, we observe that informational messages are
effective at increasing compliance with social distancing throughout China,
whereas fines for noncompliance work better within Hubei province relative to
the rest of the country. We also report that residents of Hubei increase their
propensity to social distance when exposed to social environments characterized
by the presence of a superspreader, while the effect is not present outside of
the province. Our results appear to be specific to the context of COVID-19, and
are not explained by general differences in risk attitudes and social
preferences.",Experience of the COVID-19 pandemic in Wuhan leads to a lasting increase in social distancing,2022-08-08 16:24:30,"Darija Barak, Edoardo Gallo, Ke Rong, Ke Tang, Wei Du","http://arxiv.org/abs/2208.04117v2, http://arxiv.org/pdf/2208.04117v2",econ.GN
32656,gn,"Currently, job satisfaction and turnover intentions are the significant
issues for oil and gas companies in the United Arab Emirates (UAE). These
issues need to be addressed soon for the performance of the oil and gas
companies. Thus, the aim related to the current study is to examine the impact
of job burnout, emotional intelligence, and job satisfaction on the turnover
intentions of the oil and gas companies in the UAE. The goals of this research
also include the examination of mediating the influence of job satisfaction
alongside the nexus of job burnout and turnover intentions of the oil and gas
companies in the UAE. The questionnaire method was adopted to collect the data
from the respondents, and Smart-PLS were employed to analyse the data. The
results show that job burnout, emotional intelligence, and job satisfaction
have a positive association with turnover intentions. In contrast, job
satisfaction positively mediates the nexus between job burnout and turnover
intentions. These results provide the guidelines to the policymakers that they
should enhance their focus on job satisfaction and turnover intentions of the
employees that improve the firm performance.",The Nexus between Job Burnout and Emotional Intelligence on Turnover Intention in Oil and Gas Companies in the UAE,2022-08-06 17:01:01,"Anas Abudaqa, Mohd Faiz Hilmi, Norziani Dahalan","http://arxiv.org/abs/2208.04843v1, http://arxiv.org/pdf/2208.04843v1",econ.GN
32657,gn,"This work has two intertwined components: first, as part of a research
programme it introduces a new methodology for identifying `power-centres' in
rural societies of developing countries in general and then applies that in the
specific context of contemporary rural India for addressing some debates on the
dynamics of power in rural India. We identify the nature of `local' rural
institutions based on primary data collected by ourselves (in 2013 and 2014).
We took 36 villages in the states of Maharashtra, Odisha and Uttar Pradesh - 12
in each of these states - as the sites for our observation and data collection.
We quantify nature of institutions from data on the day-to-day interactions of
households in the spheres of economy, society and politics. Our household
survey shows that there is substantial variation in power structure across
regions. We identified the presence of `local elites' in 22 villages out of 36
surveyed. We conducted a follow-up survey, called `elite survey', to get
detailed information about the identified elite households. We observe that
landlordism has considerably weakened, land has ceased to be the sole source of
power and new power-centres have emerged. Despite these changes, caste,
landownership and patron-client relation continue to be three important pillars
of rural power structure.",Patronage and power in rural India: a study based on interaction networks,2022-08-09 22:09:23,"Anindya Bhattacharya, Anirban Kar, Sunil Kumar, Alita Nandi","http://arxiv.org/abs/2208.05002v1, http://arxiv.org/pdf/2208.05002v1",econ.GN
32658,gn,"This article is an edited transcript of the session of the same name at the
38th Annual NABE Economic Policy Conference: Policy Options for Sustainable and
Inclusive Growth. The panelists are experts from government and private
research organizations.",Measuring Race in US Economic Statistics: What Do We Know?,2022-08-10 13:08:01,"Sonya Ravindranath Waddell, John M. Abowd, Camille Busette, Mark Hugo Lopez","http://dx.doi.org/10.1057/s11369-022-00274-3, http://arxiv.org/abs/2208.05252v1, http://arxiv.org/pdf/2208.05252v1",econ.GN
32659,gn,"Using administrative data on all induced abortions recorded in Spain in 2019,
we analyze the characteristics of women undergoing repeat abortions and the
spacing between these procedures. Our findings indicate that compared to women
experiencing their first abortion, those who undergo repeat abortions are more
likely to have lower education levels, have dependent children, live alone, or
be foreign-born, with a non-monotonic relationship with age. We also report
that being less educated, not employed, having dependent children, or being
foreign-born are all strongly related to a higher number of repeat abortions.
Lastly, we find that being less educated, foreign-born, or not employed is
correlated with a shorter time interval between the last two abortions.",Correlates of repeat abortions and their spacing: Evidence from registry data in Spain,2022-08-11 00:29:59,"Catia Nicodemo, Sonia Oreffice, Climent Quintana-Domeque","http://arxiv.org/abs/2208.05567v2, http://arxiv.org/pdf/2208.05567v2",econ.GN
32670,gn,"We study the impact of personalized content recommendations on the usage of
an educational app for children. In a randomized controlled trial, we show that
the introduction of personalized recommendations increases the consumption of
content in the personalized section of the app by approximately 60%. We further
show that the overall app usage increases by 14%, compared to the baseline
system where human content editors select stories for all students at a given
grade level. The magnitude of individual gains from personalized content
increases with the amount of data available about a student and with
preferences for niche content: heavy users with long histories of content
interactions who prefer niche content benefit more than infrequent, newer users
who like popular content. To facilitate the move to personalized recommendation
systems from a simpler system, we describe how we make important design
decisions, such as comparing alternative models using offline metrics and
choosing the right target audience.",Personalized Recommendations in EdTech: Evidence from a Randomized Controlled Trial,2022-08-30 03:35:53,"Keshav Agrawal, Susan Athey, Ayush Kanodia, Emil Palikot","http://arxiv.org/abs/2208.13940v2, http://arxiv.org/pdf/2208.13940v2",econ.GN
32660,gn,"Assessments such as standardized tests and teacher evaluations of students'
classroom participation are central elements of most educational systems.
Assessments inform the student, parent, teacher, and school about the student
learning progress. Individuals use the information to adjust their study
efforts and to make guide their course choice. Schools and teachers use the
information to evaluate effectiveness and inputs. Assessments are also used to
sort students into tracks, educational programmes, and on the labor market.
Policymakers use assessments to reward or penalise schools and parents use
assessment results to select schools. Consequently, assessments incentivize the
individual, the teacher, and the school to do well.
  Because assessments play an important role in individuals' educational
careers, either through the information or the incentive channel, they are also
important for efficiency, equity, and well-being. The information channel is
important for ensuring the most efficient human capital investments: students
learn about the returns and costs of effort investments and about their
abilities and comparative advantages. However, because students are sorted into
educational programs and on the labor market based on assessment results,
students' optimal educational investment might not equal their optimal human
capital investment because of the signaling value. Biases in assessments and
heterogeneity in access to assessments are sources of inequality in education
according to gender, origin, and socioeconomic background. These sources have
long-running implications for equality and opportunity. Finally, because
assessment results also carry important consequences for individuals'
educational opportunities and on the labor market, they are a source of stress
and reduced well-being.",Assessments in Education,2022-08-11 16:50:10,Hans Henrik Sievertsen,"http://arxiv.org/abs/2208.05826v1, http://arxiv.org/pdf/2208.05826v1",econ.GN
32661,gn,"Ambient air pollution is harmful to the fetus even in countries with
relatively low levels of pollution. In this paper, I examine the effects of
ambient air pollution on birth outcomes in Norway. I find that prenatal
exposure to ambient nitric oxide in the last trimester causes significant birth
weight and birth length loss under the same sub-postcode fixed effects and
calendar month fixed effects, whereas other ambient air pollutants such as
nitrogen dioxide and sulfur dioxide appear to be at safe levels for the fetus
in Norway. In addition, the marginal adverse effect of ambient nitric oxide is
larger for newborns with disadvantaged parents. Both average concentrations of
nitric oxide and occasional high concentration events can adversely affect
birth outcomes. The contributions of my work include: first, my finding that
prenatal exposure to environmental nitric oxide has an adverse effect on birth
outcomes fills a long-standing knowledge gap. Second, with the large sample
size and geographic division of sub-postal codes in Norway, I can control for a
rich set of spatio-temporal fixed effects to overcome most of the endogeneity
problems caused by the choice of residential area and date of delivery. In
addition, I study ambient air pollution in a low-pollution setting, which
provides new evidence on the health effects of low ambient air pollution.",The effect of ambient air pollution on birth outcomes in Norway,2022-08-12 16:37:20,Xiaoguang Ling,"http://arxiv.org/abs/2208.06271v6, http://arxiv.org/pdf/2208.06271v6",econ.GN
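As a rough illustration of the kind of fixed-effects specification described in the Norwegian birth-outcomes abstract above, the sketch below regresses birth weight on third-trimester pollutant exposure while absorbing sub-postcode and calendar-month fixed effects. The file name and all column names are hypothetical placeholders, not the author's actual data.

```python
# Minimal illustrative sketch (assumed variable names, not the paper's data):
# birth weight on third-trimester NO exposure with sub-postcode and
# calendar-month fixed effects, clustered by sub-postcode.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("births_pollution.csv")  # hypothetical input file

model = smf.ols(
    "birth_weight ~ no_trimester3 + no2_trimester3 + so2_trimester3 "
    "+ C(sub_postcode) + C(calendar_month)",
    data=df,
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["sub_postcode"]})
print(res.params.filter(like="trimester3"))  # pollutant coefficients only
```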
32662,gn,"Measuring beliefs about natural disasters is challenging. Deep
out-of-the-money options allow investors to hedge at a range of strikes and
time horizons, thus the 3-dimensional surface of firm-level option prices
provides information on (i) skewed and fat-tailed beliefs about the impact of
natural disaster risk across space and time dimensions at daily frequency; and
(ii) information on the covariance of wildfire-exposed stocks with investors'
marginal utility of wealth. Each publicly-traded company's daily surface of
option prices is matched with its network of establishments and wildfire
perimeters over two decades. First, wildfires affect investors' risk neutral
probabilities at short and long maturities; investors price asymmetric downward
tail risk and a probability of upward jumps. The volatility smile is more
pronounced. Second, comparing risk-neutral and physical distributions reveals
the option-implied risk aversion with respect to wildfire-exposed stock prices.
Investors' marginal utility of wealth is correlated with wildfire shocks.
Option-implied risk aversion identifies the wildfire-exposed share of
portfolios. For risk aversions consistent with Barro (2012), equity options
suggest (i) investors hold larger shares of wildfire-exposed stocks than the
market portfolio; or (ii) investors may have more pessimistic beliefs about
wildfires' impacts than what observed returns suggest, such as pricing
low-probability unrealized downward tail risk. We calibrate options with models
featuring both upward and downward risk. Results are consistent with a significant
pricing of downward jumps.",Do Investors Hedge Against Green Swans? Option-Implied Risk Aversion to Wildfires,2022-08-15 01:27:34,Amine Ouazad,"http://arxiv.org/abs/2208.06930v1, http://arxiv.org/pdf/2208.06930v1",econ.GN
32663,gn,"This study assesses the degree to which the social value of patents can be
connected to the private value of patents across discrete and complex
innovation. The underlying theory suggests that the social value of cumulative
patents is less related to the private value of patents. We use the patents
applied between 1995 to 2002 and granted on or before December 2018 from the
Indian Patent Office (IPO). Here the patent renewal information is utilized as
a proxy for the private value of the patent. We have used a variety of logit
regression models for the impact assessment analysis. The results reveal that
the technology classification (i.e., discrete versus complex innovations) plays
an important role in patent value assessment, and some technologies are
significantly different than the others even within the two broader
classifications. Moreover, the non-resident patents in India are more likely to
have a higher value than the resident patents. According to the conclusions of
this study, only a few technologies from the discrete and complex innovation
categories have some private value. There is no evidence that patent social
value indicators are less useful in complicated technical classes than in
discrete ones.",Assessing the Impact of Patent Attributes on the Value of Discrete and Complex Innovations,2022-08-10 14:11:01,"Mohd Shadab Danish, Pritam Ranjan, Ruchi Sharma","http://dx.doi.org/10.1142/S1363919622500165, http://arxiv.org/abs/2208.07222v1, http://arxiv.org/pdf/2208.07222v1",econ.GN
32671,gn,"Whereas there are recent papers on the effect of robot adoption on employment
and wages, there is no evidence on how robots affect non-monetary working
conditions. We explore the impact of robot adoption on several domains of
non-monetary working conditions in Europe over the period 1995-2005 combining
information from the World Robotics Survey and the European Working Conditions
Survey. In order to deal with the possible endogeneity of robot deployment, we
employ an instrumental variables strategy, using the robot exposure by sector
in other developed countries as an instrument. Our results indicate that
robotization has a negative impact on the quality of work in the dimension of
work intensity and no relevant impact on the domains of physical environment or
skills and discretion.",Does robotization affect job quality? Evidence from European regional labour markets,2022-08-30 16:23:20,"José-Ignacio Antón, Rudolf Winter-Ebmer, Enrique Fernández-Macías","http://arxiv.org/abs/2208.14248v1, http://arxiv.org/pdf/2208.14248v1",econ.GN
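The robot-adoption study above relies on an instrumental-variables strategy. A minimal sketch of such a two-stage least squares estimation, using the linearmodels package, is given below; the data file, variable names, and clustering level are assumptions for illustration only.

```python
# Illustrative 2SLS sketch (assumed data and variable names): work intensity on
# robot adoption, instrumented by robot exposure in the same sector in other
# developed countries, with country and year fixed effects.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("robots_working_conditions.csv")  # hypothetical input

res = IV2SLS.from_formula(
    "work_intensity ~ 1 + C(country) + C(year)"
    " + [robot_adoption ~ foreign_robot_exposure]",
    data=df,
).fit(cov_type="clustered", clusters=df["sector"])
print(res.summary)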
32664,gn,"Se repasa brevemente la historia y las finanzas islandesas de manera
diacr\'onica. Se presenta a Islandia como basti\'on del estallido de la crisis
financiera internacional que comienza a gestarse a principios del siglo XXI y
cuyo origen se hace evidente en la fecha simb\'olica del a\~no 2008. Se
analizan las razones fundamentales de esta crisis, centrandonos en las
particularidades de la estructura econ\'omica islandesa. Se consideran las
diferencias y parecidos de esta situaci\'on en relaci\'on a algunos otros
pa\'ises en similares circunstancias. Se estudia el caso del banco Icesave. Se
considera la repercusi\'on que la crisis experimentada por Islandia tiene en el
\'ambito internacional, especialmente en los inversores extranjeros y en los
conflictos jur\'idicos surgidos a ra\'iz de las medidas adoptadas por el
gobierno island\'es para sacar al pa\'is de la bancarrota.
  --
  Icelandic history and diachronically finances are briefly reviewed. Iceland
is presented as a bastion of the outbreak of the global financial crisis begins
to take shape in the early twenty-first century and whose origin is evident in
the symbolic date of 2008. The main reasons for this crisis are analyzed,
focusing on the particularities of Iceland's economic structure. The
differences and similarities of this in relation to some other countries in
similar circumstances are considered. Bank Icesave case is studied. The impact
of the crises experienced by Iceland has in the international arena, especially
foreign investors and legal disputes arising out of actions taken by the
Icelandic government to pull the country out of bankruptcy is considered.",Peculiaridades de la Economia islandesa en los albores del siglo XXI,2022-08-17 16:45:50,I. Martin-de-Santos,"http://dx.doi.org/10.5281/zenodo.4501211, http://arxiv.org/abs/2208.08442v1, http://arxiv.org/pdf/2208.08442v1",econ.GN
32665,gn,"Complex agricultural problems concern many countries, as the economic motives
are increasingly higher, and at the same time the consequences from the
irrational resources use and emissions are becoming more evident. In this work
we study three of the most common agricultural problems and model them through
optimization techniques, showing ways to assess conflicting objectives together
as a system and provide overall optimum solutions. The studied problems refer
to: i) a water-scarce area with overexploited surface and groundwater resources
due to over-pumping for irrigation (Central Greece), ii) a water-abundant area
with issues of water quality deterioration caused by agriculture (Southern
Ontario, Canada), iii) and a case of intensified agriculture based on animal
farming that causes water and soil quality degradation and increased
greenhouse gas emissions (Central Ireland). Linear, non-linear, and Goal
Programming optimization techniques have been developed and applied for each
case to maximize farmers' welfare, make less intensive use of environmental
resources, and control the emission of pollutants. The proposed approaches and
their solutions are novel applications for each case-study, compared to the
existing literature and practice. Furthermore, they provide useful insights for
most countries facing similar problems, they are easily applicable, and
developed and solved in publicly available tools such as Python.","Integrated modelling approaches for sustainable agri-economic growth and environmental improvement: Examples from Canada, Greece, and Ireland",2022-08-19 01:54:20,"Jorge A. Garcia, Angelos Alamanos","http://arxiv.org/abs/2208.09087v1, http://arxiv.org/pdf/2208.09087v1",econ.GN
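Since the abstract above notes that the optimization models were developed and solved in publicly available tools such as Python, a toy linear program in the same spirit is sketched below: choosing crop areas to maximize income subject to land and water limits. All coefficients are invented for illustration and are not taken from the paper.

```python
# Toy irrigation-planning LP (illustrative numbers only): maximize farm income
# subject to land and water availability. linprog minimizes, so profits are negated.
from scipy.optimize import linprog

profit = [-600.0, -450.0]            # profit per hectare of crop A and crop B
A_ub = [[1.0, 1.0],                  # land used per hectare
        [5000.0, 3000.0]]            # water used per hectare (m^3)
b_ub = [100.0, 400000.0]             # 100 ha of land, 400,000 m^3 of water

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)               # optimal crop areas and total profit
```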
32666,gn,"We study the evolution of interest about climate change between different
actors of the population, and how the interest of those actors affect one
another. We first document the evolution individually, and then provide a model
of cross influences between them, that we then estimate with a VAR. We find
large swings over time of said interest for the general public by creating a
Climate Change Index for Europe and the US (CCI) using news media mentions, and
little interest among economists (measured by publications in top journals of
the discipline). The general interest science journals and policymakers have a
more steady interest, although policymakers get interested much later.","The Interactions of Social Norms about Climate Change: Science, Institutions and Economics",2022-08-19 12:33:24,"Antonio Cabrales, Manu García, David Ramos Muñoz, Angel Sánchez","http://arxiv.org/abs/2208.09239v1, http://arxiv.org/pdf/2208.09239v1",econ.GN
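A bare-bones version of the VAR exercise described above could look like the following sketch; the CSV file, the column names for the interest series, and the lag choice are all assumptions.

```python
# Illustrative VAR sketch (assumed series names): cross-influences between
# media, economics, science and policy attention to climate change.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("climate_interest.csv", index_col="date", parse_dates=True)
series = ["cci_media", "econ_publications", "science_journals", "policy_attention"]

model = VAR(df[series])
res = model.fit(maxlags=4, ic="aic")  # lag length selected by AIC
print(res.summary())

irf = res.irf(12)                     # impulse responses over 12 periods
irf.plot(orth=True)                   # orthogonalized responses
```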
32667,gn,"Car dependence has been threatening transportation sustainability as it
contributes to congestion and associated externalities. In response, various
transport policies that restrict the use of private vehicle have been
implemented. However, empirical evaluations of such policies have been limited.
To assess these policies' benefits and costs, it is imperative to accurately
evaluate how such policies affect traffic conditions. In this study, we compile
a refined spatio-temporal resolution data set of the floating-vehicle-based
traffic performance index to examine the effects of a recent nonlocal vehicle
driving restriction policy in Shanghai, one of the most populous cities in the
world. Specifically, we explore whether and how the policy impacted traffic
speeds in the short term by employing a quasi-experimental
difference-in-differences modeling approach. We find that: (1) In the first
month, the policy led to an increase of the network-level traffic speed by
1.47% (0.352 km/h) during evening peak hours (17:00-19:00) but had no
significant effects during morning peak hours (7:00-9:00). (2) The policy also
helped improve the network-level traffic speed in some unrestricted hours
(6:00, 12:00, 14:00, and 20:00) although the impact was marginal. (3) The
short-term effects of the policy exhibited heterogeneity across traffic
analysis zones. The lower the metro station density, the greater the effects
were. We conclude that driving restrictions for non-local vehicles alone may
not significantly reduce congestion, and their effects can differ both
temporally and spatially. However, they can have potential side effects such as
increased purchase and usage of new energy vehicles, owners of which can obtain
a local license plate of Shanghai for free.",Panacea or Placebo? Exploring Causal Effects of Nonlocal Vehicle Driving Restriction Policies on Traffic Congestion Using Difference-in-differences Approach,2022-08-24 17:32:02,"Yuan Liang, Quan Yuan, Daoge Wang, Yong Feng, Pengfei Xu, Jiangping Zhou","http://arxiv.org/abs/2208.11577v2, http://arxiv.org/pdf/2208.11577v2",econ.GN
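The quasi-experimental design in the Shanghai study above is a difference-in-differences comparison. A minimal two-way fixed-effects version is sketched below; the panel structure and column names are hypothetical.

```python
# Illustrative two-way fixed-effects DiD sketch (assumed columns): traffic speed
# on the interaction of treated zone and post-policy period, with zone and
# date-hour fixed effects and clustering by zone.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("traffic_speed_panel.csv")      # hypothetical zone-hour panel
df["did"] = df["treated_zone"] * df["post_policy"]

res = smf.ols(
    "speed_kmh ~ did + C(zone_id) + C(date_hour)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["zone_id"]})
print(res.params["did"])   # estimated short-term policy effect on speed
```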
32668,gn,"Governments have implemented school closures and online learning as one of
the main tools to reduce the spread of Covid-19. Despite the potential benefits
in terms of reduction of cases, the educational costs of these policies may be
dramatic. This work identifies the educational costs, expressed as decrease in
test scores, for the whole universe of Italian students attending the 5th, 8th
and 13th grade of the school cycle during the 2021/22 school year. The analysis
relies on a difference-in-difference model in relative time, where the control
group is the closest generation before the Covid-19 pandemic. The results
suggest a national average loss between 1.6-4.1% and 0.5-2.4% in Mathematics
and Italian test scores, respectively. After collecting the precise number of
days of school closures for the universe of students in Sicily, we estimate
that 30 additional days of closure decrease the test score by 1%. However, the
impact is much larger for students from high schools (1.8%) compared to
students from low and middle schools (0.5%). This is likely explained by the
lower relevance of parental inputs and higher reliance on peer inputs, within
the educational production function, for higher grades. Findings are also
heterogeneous across class size and parental job conditions, pointing towards
potential growing inequalities driven by the lack of in-person teaching.",Will the last be the first? School closures and educational outcomes,2022-08-24 18:20:17,"Michele Battisti, Giuseppe Maggio","http://arxiv.org/abs/2208.11606v1, http://arxiv.org/pdf/2208.11606v1",econ.GN
32669,gn,"This paper examines the interplay between desegregation, institutional bias,
and individual behavior in education. Using a game-theoretic model that
considers race-heterogeneous social incentives, the study investigates the
effects of between-school desegregation on within-school disparities in
coursework. The analysis incorporates a segregation measure based on entropy
and proposes an optimization-based approach to evaluate the impact of student
reassignment policies. The results highlight that Black and Hispanic students
in predominantly White schools, despite receiving less encouragement to apply
to college, exhibit higher enrollment in college-prep coursework due to
stronger social incentives from their classmates' coursework decisions.",Can Desegregation Close the Racial Gap in High School Coursework?,2022-08-25 22:39:58,Ritika Sethi,"http://arxiv.org/abs/2208.12321v4, http://arxiv.org/pdf/2208.12321v4",econ.GN
32673,gn,"We analyse the drivers of European Power Exchange (EPEX) wholesale
electricity prices between 2012 and early 2022 using machine learning. The
agnostic random forest approach that we use is able to reduce in-sample root
mean square errors (RMSEs) by around 50% when compared to a standard linear
least squares model. This indicates that non-linearities and interaction effects
are key in wholesale electricity markets. Out-of-sample prediction errors using
machine learning are (slightly) lower than even in-sample least squares errors
using a least squares model. The effects of efforts to limit power consumption
and green the energy matrix on wholesale electricity prices are first order.
CO2 permit prices strongly impact electricity prices, as do the prices of
source energy commodities. The impact of carbon permit prices has clearly
increased post-2021 (particularly for baseload prices). Among energy sources,
natural gas has the largest effect on electricity prices. Importantly, the role
of wind energy feed-in has slowly risen over time, and its impact is now
roughly on par with that of coal.",Changing Electricity Markets: Quantifying the Price Effects of Greening the Energy Matrix,2022-08-31 09:18:12,"Emanuel Kohlscheen, Richhild Moessner","http://arxiv.org/abs/2208.14650v1, http://arxiv.org/pdf/2208.14650v1",econ.GN
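For the electricity-price study above, a stripped-down random-forest benchmark with an in-sample versus out-of-sample RMSE comparison might look as follows; the feature set and file name are placeholders.

```python
# Illustrative random-forest sketch (assumed features): wholesale electricity
# prices on fuel, carbon and renewables variables, comparing in-sample and
# out-of-sample root mean square errors.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("epex_prices.csv")   # hypothetical daily data
X = df[["gas_price", "coal_price", "co2_permit_price", "wind_feed_in", "load"]]
y = df["baseload_price"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

rmse_in = mean_squared_error(y_tr, rf.predict(X_tr)) ** 0.5
rmse_out = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print(rmse_in, rmse_out)
```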
32674,gn,"We provide novel systematic cross-country evidence that the link between
domestic labour markets and CPI inflation has weakened considerably in advanced
economies during recent decades. The central estimate is that the short-run
pass-through from domestic labour cost changes to core CPI inflation decreased
from 0.25 in the 1980s to just 0.02 in the 2010s, while the long-run
pass-through fell from 0.36 to 0.03. We show that the timing of the collapse in
the pass-through coincides with a steep increase in import penetration from a
group of 10 major manufacturing EMEs around the turn of the millennium. This
signals increased competition and market contestability. Besides the extent of
trade openness, we show that the intensity of the pass-through also depends in
a non-linear way on the average level of inflation.",Globalisation and the Decoupling of Inflation from Domestic Labour Costs,2022-08-31 09:23:44,"Emanuel Kohlscheen, Richhild Moessner","http://arxiv.org/abs/2208.14651v1, http://arxiv.org/pdf/2208.14651v1",econ.GN
32675,gn,"This paper examines the drivers of CPI inflation through the lens of a
simple, but computationally intensive machine learning technique. More
specifically, it predicts inflation across 20 advanced countries between 2000
and 2021, relying on 1,000 regression trees that are constructed based on six
key macroeconomic variables. This agnostic, purely data-driven method delivers
(relatively) good outcome prediction performance. Out-of-sample root mean
square errors (RMSE) systematically beat even the in-sample benchmark
econometric models. Partial effects of inflation expectations on CPI outcomes
are also elicited in the paper. Overall, the results highlight the role of
expectations for inflation outcomes in advanced economies, even though their
importance appears to have declined somewhat during the last 10 years.",What does machine learning say about the drivers of inflation?,2022-08-31 09:32:51,Emanuel Kohlscheen,"http://arxiv.org/abs/2208.14653v2, http://arxiv.org/pdf/2208.14653v2",econ.GN
32676,gn,"In spring 2022, the German federal government agreed on a set of measures
that aimed at reducing households' financial burden resulting from a recent
price increase, especially in energy and mobility. These measures included
among others, a nation-wide public transport ticket for 9 EUR per month and a
fuel tax cut that reduced fuel prices by more than 15%. In transportation
research this is an almost unprecedented behavioral experiment. It allows us to
study not only behavioral responses in mode choice and induced demand but also
to assess the effectiveness of transport policy instruments. We observe this
natural experiment with a three-wave survey and an app-based travel diary on a
sample of hundreds of participants as well as an analysis of traffic counts. In
this third report, we provide first findings from the second survey, conducted
during the experiment.",A nation-wide experiment: fuel tax cuts and almost free public transport for three months in Germany -- Report 3 Second wave results,2022-08-31 17:53:28,"Allister Loder, Fabienne Cantner, Andrea Cadavid, Markus B. Siewert, Stefan Wurster, Sebastian Goerg, Klaus Bogenberger","http://arxiv.org/abs/2208.14902v2, http://arxiv.org/pdf/2208.14902v2",econ.GN
32677,gn,"This paper studies how socioeconomically biased screening practices impact
access to elite firms and what policies might effectively reduce bias. Using
administrative data on job search from an elite Indian college, I document
large caste disparities in earnings. I show that these disparities arise
primarily in the final round of screening, comprising non-technical personal
interviews that inquire about characteristics correlated with socioeconomic
status. Other job search stages do not explain disparities, including: job
applications, application reading, written aptitude tests, large group debates
that test for socio-emotional skills, and job choices. Through a novel model of
the job placement process, I show that employer willingness to pay for an
advantaged caste is as large as that for a full standard deviation increase in
college GPA. A hiring subsidy that eliminates the caste penalty would be more
cost-effective in diversifying elite hiring than other policies, such as those
that equalize the caste distribution of pre-college test scores or enforce
hiring quotas.","Making the Elite: Top Jobs, Disparities, and Solutions",2022-08-31 20:15:26,Soumitra Shukla,"http://arxiv.org/abs/2208.14972v2, http://arxiv.org/pdf/2208.14972v2",econ.GN
32678,gn,"Reductions in the cost of genetic sequencing have enabled the construction of
large datasets including both genetic and phenotypic data. Based on these
datasets, polygenic scores (PGSs) summarizing an individual's genetic
propensity for educational attainment have been constructed. It is by now well
established that this PGS predicts wages, income, and occupational prestige and
occupational mobility across generations. It is unknown whether a PGS for
educational attainment can predict upward income and occupational mobility even
within the peak earning years of an individual. Using data from the Wisconsin
Longitudinal Study (WLS), I show that: (i) a PGS for educational attainment
predicts wage, income and occupational prestige mobility between 1974 (when
respondents were about 36 years of age) and 1992 (when respondents were about
53 years of age), conditional on 1974 values of these variables and a range of
covariates; (ii) the effect is not mediated by parental socioeconomic status,
is driven primarily by respondents with only a high school education, and is
replicated in a within sibling-pair design; (iii) conditional on 1974 outcomes,
higher PGS individuals surveyed in 1975 aspired to higher incomes and more
prestigious jobs 10 years hence, an effect driven primarily by respondents with
more than a high school education; (iv) throughout their employment history,
high PGS individuals were more likely to undertake on the job training, and
more likely to change job duties during tenure with an employer; and (v) though
no more likely to change employers or industries during their careers, high PGS
individuals were more likely in 1974 to be working in industries which would
experience high wage growth in subsequent decades. These results contribute to
our understanding of longitudinal inequality and shed light on the sources of
heterogeneity in responses to economic shocks and policy.",Molecular genetics and mid-career economic mobility,2022-08-31 21:25:08,Paul Minard,"http://arxiv.org/abs/2209.00057v1, http://arxiv.org/pdf/2209.00057v1",econ.GN
32679,gn,"This study assesses the effect of the #MeToo movement on the language used in
judicial opinions on sexual violence related cases from 51 U.S. state and
federal appellate courts. The study introduces various indicators to quantify
the extent to which actors in courtrooms employ language that implicitly shifts
responsibility away from the perpetrator and onto the victim. One indicator
measures how frequently the victim is mentioned as the grammatical subject, as
research in the field of psychology suggests that victims are assigned more
blame the more often they are referred to as the grammatical subject. The other
two indices designed to gauge the level of victim-blaming capture the sentiment
of and the context in sentences referencing the victim and/or perpetrator.
Additionally, judicial opinions are transformed into bag-of-words and tf-idf
vectors to facilitate the examination of the evolution of language over time.
The causal effect of the #MeToo movement is estimated by means of a
Difference-in-Differences approach comparing the development of the language in
opinions on sexual offenses and other crimes against persons as well as a Panel
Event Study approach. The results do not clearly identify a
#MeToo-movement-induced change in the language in court but suggest that the
movement may have accelerated the evolution of court language slightly, causing
the effect to materialize with a significant time lag. Additionally, the study
considers potential effect heterogeneity with respect to the judge's gender and
political affiliation. The study combines causal inference with text
quantification methods that are commonly used for classification as well as
with indicators that rely on sentiment analysis, word embedding models and
grammatical tagging.",The Impact of the #MeToo Movement on Language at Court -- A text-based causal inference approach,2022-09-01 15:34:36,Henrika Langen,"http://arxiv.org/abs/2209.00409v2, http://arxiv.org/pdf/2209.00409v2",econ.GN
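Two ingredients of the text-analysis pipeline described above, tf-idf vectorization and a grammatical-subject indicator, can be sketched as follows. The example texts, the victim-term list, and the dependency rule are illustrative assumptions; the spaCy model must be installed separately.

```python
# Illustrative sketch (assumed texts and rules): tf-idf vectors for judicial
# opinions and the share of sentences with a victim-referring grammatical subject.
from sklearn.feature_extraction.text import TfidfVectorizer
import spacy

opinions = ["The victim testified that ...", "The defendant argued that ..."]

tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
X = tfidf.fit_transform(opinions)     # bag-of-words / tf-idf representation

nlp = spacy.load("en_core_web_sm")    # requires: python -m spacy download en_core_web_sm

def victim_subject_share(text, victim_terms=("victim", "complainant")):
    """Share of sentences in which a victim term is the grammatical subject."""
    doc = nlp(text)
    sents = list(doc.sents)
    hits = sum(
        any(tok.dep_ in ("nsubj", "nsubjpass") and tok.lemma_.lower() in victim_terms
            for tok in sent)
        for sent in sents
    )
    return hits / max(len(sents), 1)

print([victim_subject_share(t) for t in opinions])
```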
32680,gn,"We examine how redistribution decisions respond to the source of luck when
there is uncertainty about its role in determining opportunities and outcomes.
We elicit redistribution decisions from a representative U.S. sample who
observe worker outcomes and whether luck could determine earnings directly
(``lucky outcomes'') or indirectly by providing one of the workers with a
relative advantage (``lucky opportunities''). We find that participants
redistribute less and are less responsive to changes in the importance of luck
in environments with lucky opportunities. We show that individuals rely on a
simple heuristic when assessing the impact of unequal opportunities, which
leads them to underappreciate the extent to which small differences in
opportunities can have a large impact on outcomes. These findings have
implications for models of redistribution attitudes and help explain the gap
between lab evidence on support for redistribution and inequality trends.",Inequality of Opportunity and Income Redistribution,2022-09-01 18:28:53,"Marcel Preuss, Germán Reyes, Jason Somerville, Joy Wu","http://arxiv.org/abs/2209.00534v4, http://arxiv.org/pdf/2209.00534v4",econ.GN
32681,gn,"We use natural-language-processing algorithms on a novel dataset of over 900
presidential speeches from ten Latin American countries spanning two centuries
to study the dynamics and determinants of presidential policy priorities. We
show that most speech content can be characterized by a compact set of policy
issues whose relative composition exhibited slow yet substantial shifts over
1819-2022. Presidential attention initially centered on military interventions
and the development of state capacity. Attention gradually evolved towards
building physical capital through investments in infrastructure and public
services and finally turned towards building human capital through investments
in education, health, and social safety nets. We characterize the way in which
president-level characteristics, like age and gender, predict the main policy
issues. Our findings offer novel insights into the dynamics of presidential
attention and the factors that shape it, expanding our understanding of
political agenda-setting.",The Shifting Attention of Political Leaders: Evidence from Two Centuries of Presidential Speeches,2022-09-01 18:32:48,"Oscar Calvo-González, Axel Eizmendi, Germán Reyes","http://arxiv.org/abs/2209.00540v3, http://arxiv.org/pdf/2209.00540v3",econ.GN
32682,gn,"The temperature targets in the Paris Agreement cannot be met without very
rapid reduction of greenhouse gas emissions and removal of carbon dioxide from
the atmosphere. The latter requires large, perhaps prohibitively large
subsidies. The central estimate of the costs of climate policy, unrealistically
assuming least-cost implementation, is 3.8-5.6% of GDP in 2100. The central
estimate of the benefits of climate policy, unrealistically assuming constant
vulnerability, is 2.8-3.2% of GDP. The uncertainty about the benefits is
larger than the uncertainty about the costs. The Paris targets do not pass the
cost-benefit test unless risk aversion is high and the discount rate is low.",Costs and Benefits of the Paris Climate Targets,2022-09-02 12:14:13,Richard S. J. Tol,"http://arxiv.org/abs/2209.00900v1, http://arxiv.org/pdf/2209.00900v1",econ.GN
32683,gn,"This paper discusses the relevance of information overload for explaining
environmental degradation. Our argument goes that information overload and
detachment from nature, caused by energy abundance, have made individuals
unaware of the unsustainable effects of their choices and lifestyles.",Information overload and environmental degradation: learning from H.A. Simon and W. Wenders,2022-09-02 16:18:53,"Tommaso Luzzati, Ilaria Tucci, Pietro Guarnieri","http://arxiv.org/abs/2209.01039v1, http://arxiv.org/pdf/2209.01039v1",econ.GN
32684,gn,"Online platforms often face challenges being both fair (i.e.,
non-discriminatory) and efficient (i.e., maximizing revenue). Using computer
vision algorithms and observational data from a micro-lending marketplace, we
find that choices made by borrowers creating online profiles impact both of
these objectives. We further support this conclusion with a web-based
randomized survey experiment. In the experiment, we create profile images using
Generative Adversarial Networks that differ in a specific feature and estimate
its impact on lender demand. We then counterfactually evaluate alternative
platform policies and identify particular approaches to influencing the
changeable profile photo features that can ameliorate the fairness-efficiency
tension.",Smiles in Profiles: Improving Fairness and Efficiency Using Estimates of User Preferences in Online Marketplaces,2022-09-02 21:46:56,"Susan Athey, Dean Karlan, Emil Palikot, Yuan Yuan","http://arxiv.org/abs/2209.01235v3, http://arxiv.org/pdf/2209.01235v3",econ.GN
32715,gn,"We present results of an experiment benchmarking a workforce training program
against cash transfers for underemployed young adults in Rwanda. 3.5 years
after treatment, the training program enhances productive time use and asset
investment, while the cash transfers drive productive assets, livestock values,
savings, and subjective well-being. Both interventions have powerful effects on
entrepreneurship. But while labor, sales, and profits all go up, the implied
wage rate in these businesses is low. Our results suggest that credit is a
major barrier to self-employment, but deeper reforms may be required to enable
entrepreneurship to provide a transformative pathway out of poverty.",Skills and Liquidity Barriers to Youth Employment: Medium-term Evidence from a Cash Benchmarking Experiment in Rwanda,2022-09-18 17:24:30,"Craig McIntosh, Andrew Zeitlin","http://arxiv.org/abs/2209.08574v1, http://arxiv.org/pdf/2209.08574v1",econ.GN
32685,gn,"Based on the field investigation of West Bengal, this paper investigates
whether the school-aged children of the marginal farmer households are
full-time paid labourers or unpaid domestic labourers along with schooling or
regular students. Probit Regression analysis is applied here to assess the
influencing factors for reducing the size of the child labour force in
practice. The results show that the higher the earnings of a household's adult
members, the lower the incidence of child labour. Moreover, the
credit accessibility of the mother from the Self-help group and more
person-days of the father in work in a reference year are also responsible for
reducing the possibility of a child turning into labour. The study further
suggests that a younger father, the father's education, and small operational
landholdings are positive and significant determinants of a child's schooling,
as they limit the child's excessive domestic work burden.","Child labour and schooling decision of the marginal farmer households: An empirical evidence from the East Medinipur district of West Bengal, India",2022-09-03 08:37:22,Sangita Das,"http://arxiv.org/abs/2209.01330v1, http://arxiv.org/pdf/2209.01330v1",econ.GN
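A minimal version of the probit analysis described in the West Bengal abstract above is sketched below with hypothetical variable names; the paper's actual covariates and data are not reproduced here.

```python
# Illustrative probit sketch (assumed variable names): probability that a child
# is in the labour force, with household covariates, reported as marginal effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("marginal_farmer_households.csv")  # hypothetical survey data

res = smf.probit(
    "child_labour ~ adult_earnings + mother_shg_credit + father_persondays "
    "+ father_age + father_education + operational_land",
    data=df,
).fit()
print(res.get_margeff().summary())   # average marginal effects
```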
32686,gn,"We study the influence of social messages that promote a digital public good,
a COVID-19 tracing app. We vary whether subjects receive a digital message from
another subject, and, if so, at what cost it came. Observed maximum willingness
to invest in sending varies from 1 cent up to 20 euros. Does this affect
receivers' sending behavior? Willingness to invest in sending increases when
previously receiving the message. Yet, cost signals have no impact. Thus,
grassroots movements can be started at virtually no cost. App-support matters
normatively as non-supporters are supposed to be punished in triage.",How To Start a Grassroots Movement,2022-09-03 10:04:46,"David Ehrlich, Nora Szech","http://arxiv.org/abs/2209.01345v1, http://arxiv.org/pdf/2209.01345v1",econ.GN
32687,gn,"The COVID-19 pandemic and the mitigation policies implemented in response to
it have resulted in economic losses worldwide. Attempts to understand the
relationship between economics and epidemiology has lead to a new generation of
integrated mathematical models. The data needs for these models transcend those
of the individual fields, especially where human interaction patterns are
closely linked with economic activity. In this article, we reflect upon
modelling efforts to date, discussing the data needs that they have identified,
both for understanding the consequences of the pandemic and policy responses to
it through analysis of historic data and for the further development of this
new and exciting interdisciplinary field.",Data needs for integrated economic-epidemiological models of pandemic mitigation policies,2022-09-03 23:06:12,"David J. Haw, Christian Morgenstern, Giovanni Forchini, Robert Johnson, Patrick Doohan, Peter C. Smith, Katharina D. Hauck","http://dx.doi.org/10.1016/j.epidem.2022.100644, http://arxiv.org/abs/2209.01487v1, http://arxiv.org/pdf/2209.01487v1",econ.GN
32688,gn,"This research aims to identify factors that affect the technological
transition of firms toward industry 4.0 technologies (I4Ts) focusing on firm
capabilities and policy impact using relatedness and complexity measures. For
the analysis, a unique dataset of Korean manufacturing firms' patent and their
financial and market information was used. Following the Principle of
Relatedness, which is a recently shaped empirical principle in the field of
economic complexity, economic geography, and regional studies, we build a
technology space and then trace each firm's footprint on the space. Using the
technology space of firms, we can identify firms that successfully develop a
new industry 4.0 technology and examine whether their accumulated capabilities
in their previous technology domains positively affect their technological
diversification and which factors play a critical role in their transition
towards industry 4.0. In addition, by combining data on whether the firms
received government support for R&D activities, we further analyzed the role of
government policy in supporting firms' knowledge activity in new industry 4.0
technologies. We found that firms with higher related technologies and more
government support are more likely to enter new I4Ts. We expect our research to
inform policymakers who aim to diversify firms' technological capabilities into
I4Ts.",Factors that affect the technological transition of firms toward the industry 4.0 technologies,2022-09-06 09:22:21,"Seung Hwan Kim, Jeong hwan Jeon, Anwar Aridi, Bogang Jun","http://arxiv.org/abs/2209.02239v1, http://arxiv.org/pdf/2209.02239v1",econ.GN
32689,gn,"Using a US nationally representative sample and a double list experiment
designed to elicit views free from social desirability bias, we find that
anti-transgender labor market attitudes are significantly underreported. After
correcting for this concealment, we report that 73 percent of people would be
comfortable with a transgender manager and 74 percent support employment
non-discrimination protection for transgender people. We also show that
respondents severely underestimate the population level of support for
transgender individuals in the workplace, and we find that labor market support
for transgender people is significantly lower than support for gay, lesbian,
and bisexual people. Our results provide timely evidence on workplace-related
views toward transgender people and help us better understand employment
discrimination against them.",Understanding Labor Market Discrimination Against Transgender People: Evidence from a Double List Experiment and a Survey,2022-09-06 12:47:18,"Billur Aksoy, Christopher S. Carpenter, Dario Sansone","http://arxiv.org/abs/2209.02335v1, http://arxiv.org/pdf/2209.02335v1",econ.GN
32690,gn,"Integrated assessment models (IAMs) are a central tool for the quantitative
analysis of climate change mitigation strategies. However, due to their global,
cross-sectoral and centennial scope, IAMs cannot explicitly represent the
spatio-temporal detail required to properly analyze the key role of variable
renewable electricity (VRE) for decarbonizing the power sector and end-use
electrification. In contrast, power sector models (PSMs) incorporate high
spatio-temporal resolutions, but tend to have narrower scopes and shorter time
horizons. To overcome these limitations, we present a novel methodology: an
iterative and fully automated soft-coupling framework that combines the
strengths of an IAM and a PSM. This framework uses the market values of power
generation as well as the capture prices of demand in the PSM as price signals
that change the capacity and power mix of the IAM. Hence, both models make
endogenous investment decisions, leading to a joint solution. We apply the
method to Germany in a proof-of-concept study using the IAM REMIND and the PSM
DIETER, and confirm the theoretical prediction of almost-full convergence both
in terms of decision variables and (shadow) prices. At the end of the iterative
process, the absolute model difference between the generation shares of any
generator type for any year is <5% for a simple configuration (no storage, no
flexible demand), and 6-7% for a more realistic and detailed configuration
(with storage and flexible demand). For the simple configuration, we
mathematically show that this coupling scheme corresponds uniquely to an
iterative mapping of the Lagrangians of two power sector optimization problems
of different time resolutions, which can lead to a comprehensive model
convergence of both decision variables and (shadow) prices. Since our approach
is based on fundamental economic principles, it is applicable also to other
IAM-PSM pairs.",Bidirectional coupling of a long-term integrated assessment model REMIND v3.0.0 with an hourly power sector model DIETER v1.0.2,2022-09-06 12:59:14,"Chen Chris Gong, Falko Ueckerdt, Robert Pietzcker, Adrian Odenweller, Wolf-Peter Schill, Martin Kittel, Gunnar Luderer","http://arxiv.org/abs/2209.02340v3, http://arxiv.org/pdf/2209.02340v3",econ.GN
32691,gn,"This paper proposes a comprehensive model for different Coordination Schemes
(CSs) for Transmission (TSO) and Distribution System Operators (DSO) in the
context of distributed flexibility procurement for balancing and congestion
management. The model proposed focuses on the coordination between the EHV
(TSO) and the HV (DSO) levels, exploring the meshed-to-meshed topology,
including multiple TSO-DSO interface substations. The model is then applied to
a realistic case study in which the Swedish power system is modeled for one
year, considering a representation of the transmission grid together with the
subtransmission grid of Uppsala city. The base case scenario is then subject to
different scalability and replication scenarios. The paper corroborates the
finding that the Common CS leads to the least overall cost of flexibility
procurement. Moreover, it shows the effectiveness of the Local Flexibility
Market (LFM) for the DSO in the Swedish context in reducing potential penalties
in a Multi-level CS.",TSO-DSO Coordination for the Procurement of Balancing and Congestion Management Services: Assessment of a meshed-to-meshed topology,2022-09-06 13:41:41,"Leandro Lind, Rafael Cossent, Pablo Frias","http://arxiv.org/abs/2209.02360v1, http://arxiv.org/pdf/2209.02360v1",econ.GN
32692,gn,"This paper characterizes equilibrium properties of a broad class of economic
models that allow multiple heterogeneous agents to interact in heterogeneous
manners across several markets. Our key contribution is a new theorem providing
sufficient conditions for uniqueness and stability of equilibria in this class
of models. To illustrate the applicability of our theorem, we characterize the
general equilibrium properties of two commonly used quantitative trade models.
Specifically, our analysis provides a first proof of uniqueness and stability
of the equilibrium in multi-country trade models featuring (i) multiple
sectors, or (ii) heterogeneity across countries in terms of their labor cost
shares. These examples also provide a practical toolkit for future research on
how our theorem can be applied to establish uniqueness and stability of
equilibria in a broad set of economic models.",Single and Attractive: Uniqueness and Stability of Economic Equilibria under Monotonicity Assumptions,2022-09-06 19:44:08,"Patrizio Bifulco, Jochen Glück, Oliver Krebs, Bohdan Kukharskyy","http://arxiv.org/abs/2209.02635v1, http://arxiv.org/pdf/2209.02635v1",econ.GN
32693,gn,"Labor unions' greatest potential for political influence likely arises from
their direct connection to millions of individuals at the workplace. There,
they may change the ideological positions of both unionizing workers and their
non-unionizing management. In this paper, we analyze the workplace-level impact
of unionization on workers' and managers' political campaign contributions over
the 1980-2016 period in the United States. To do so, we link
establishment-level union election data with transaction-level campaign
contributions to federal and local candidates. In a difference-in-differences
design that we validate with regression discontinuity tests and a novel
instrumental variables approach, we find that unionization leads to a leftward
shift of campaign contributions. Unionization increases the support for
Democrats relative to Republicans not only among workers but also among
managers, which speaks against an increase in political cleavages between the
two groups. We provide evidence that our results are not driven by
compositional changes of the workforce and are weaker in states with
Right-to-Work laws where unions can invest fewer resources in political
activities.",Do Unions Shape Political Ideologies at Work?,2022-09-06 19:45:51,"Johannes Matzat, Aiko Schmeißer","http://arxiv.org/abs/2209.02637v2, http://arxiv.org/pdf/2209.02637v2",econ.GN
32694,gn,"The recent coronavirus outbreak has made governments face an inconvenient
tradeoff choice, i.e. the choice between saving lives and saving the economy,
forcing them to make immensely consequential decisions among alternative
courses of actions without knowing what the ultimate results would be for the
society as a whole. This paper attempts to frame the coronavirus tradeoff
problem as an economic optimization problem and proposes mathematical
optimization methods to make rationally optimal decisions when faced with
trade-off situations such as those involved in managing through the recent
coronavirus pandemic. The framework introduced and the method proposed in this
paper are on the basis of the theory of rational choice at a societal level,
which assumes that the government is a rational, benevolent agent that
systematically and purposefully takes into account the social marginal costs
and social marginal benefits of its actions to its citizens and makes decisions
that maximize the society's well-being as a whole. We approach solving this
tradeoff problem from a static as well as a dynamic point of view. Finally, we
provide several numerical examples clarifying how the proposed framework and
methods can be applied in the real-world context.",The Coronavirus Tradeoff -- Life vs. Economy: Handling the Tradeoff Rationally and Optimally,2022-09-06 20:06:59,"Ali Zeytoon-Nejad, Tanzid Hasnain","http://dx.doi.org/10.1016/j.ssaho.2021.100215, http://arxiv.org/abs/2209.02651v1, http://arxiv.org/pdf/2209.02651v1",econ.GN
32695,gn,"Syllabus is essentially a concise outline of a course of study, and
conventionally a text document. In the past few decades, however, two novel
variations of syllabus have emerged, namely ""the Graphic Syllabus"" and ""the
Interactive Syllabus"". Each of these two variations of syllabus has its own
special advantages. The present paper argues that there could be devised a new
combined version of the two mentioned variations, called ""the Interactive
Graphic Syllabus"", which can potentially bring us the advantages of both at the
same time. Specifically, using a well-designed Interactive Graphic syllabus can
bring about many advantages such as clarifying complex relationships; causing a
better retention; needing less cognitive energy for interpretation; helping
instructors identify any snags in their course organization; capability of
being integrated easily into a course management system; and appealing to and
engaging students with different learning styles.",Using the Interactive Graphic Syllabus in the Teaching of Economics,2022-09-06 20:25:58,Seyyed Ali Zeytoon Nejad Moosavian,"http://arxiv.org/abs/2209.02665v1, http://arxiv.org/pdf/2209.02665v1",econ.GN
32701,gn,"This paper highlights the potential for negative dynamic consequences of
recent trends towards the formation of ""skill-hubs"". I first show evidence that
skill acquisition is biased towards skills which are in demand in local labor
markets. This fact along with large heterogeneity in outcomes by major and
recent reductions in migration rates implies a significant potential for
inefficient skill upgrading over time. To evaluate the impact of local bias in
education in the context of standard models which focus on agglomeration
effects, I develop a structural spatial model which includes educational
investment. The model focuses on two sources of externalities: productivity
through agglomeration and signaling. Both of these affect educational decisions
tilting the balance of aggregate skill composition. Signaling externalities can
provide a substantial wedge in the response to changes in skill demand and
skill concentration with the potential for substantial welfare gains from a
more equal distribution of skills.",What You See is What You Get: Local Labor Markets and Skill Acquisition,2022-09-08 18:55:16,Benjamin Niswonger,"http://arxiv.org/abs/2209.03892v1, http://arxiv.org/pdf/2209.03892v1",econ.GN
32696,gn,"Macroeconomics essentially discusses macroeconomic phenomena from the
perspectives of various schools of economic thought, each of which takes
different views on how macroeconomic agents make decisions and how the
corresponding markets operate. Therefore, developing a clear, comprehensive
understanding of how and in what ways these schools of economic thought differ
is a key and a prerequisite for economics students to prosper academically and
professionally in the discipline. This becomes even more crucial as economics
students pursue their studies toward higher levels of education and graduate
school, during which students are expected to attain higher levels of Bloom's
taxonomy, including analysis, synthesis, evaluation, and creation. Teaching the
distinctions and similarities of the two major schools of economic thought has
never been an easy task to undertake in the classroom. Although the reason for
such a hardship can be multi-fold, one reason has undoubtedly been students'
lack of a holistic view on how the two mainstream economic schools of thought
differ. There is strong evidence that students make smoother transition to
higher levels of education after building up such groundwork, on which they can
build further later on (e.g. Didia and Hasnat, 1998; Marcal and Roberts, 2001;
Islam, et al., 2008; Green, et al., 2009; White, 2016). The paper starts with a
visual spectrum of various schools of economic thought, and then narrows down
the scope to the classical and Keynesian schools, i.e. the backbone of modern
macroeconomics. Afterwards, a holistic table contrasts the two schools in terms
of 50 aspects. Not only does this table help economics students enhance their
comprehension, retention, and critical-thinking capability, it also benefits
macroeconomic instructors to ...",Classicals versus Keynesians: Fifty Distinctions between Two Major Schools of Economic Thought,2022-09-06 20:52:33,Seyyed Ali Zeytoon Nejad Moosavian,"http://dx.doi.org/10.1453/jest.v9i2.2325, http://arxiv.org/abs/2209.02683v1, http://arxiv.org/pdf/2209.02683v1",econ.GN
32697,gn,"Identifying the factors that influence labor force participation could
elucidate how individuals arrive at their labor supply decisions, whose
understanding is, in turn, of crucial importance in analyzing how the supply
side of the labor market functions. This paper investigates the effect of
parenthood status on Labor Force Participation (LFP) decisions using an
individual-level fixed-effects identification strategy. The differences across
individuals and over time in having or not having children as well as being or
not being in the labor force provide the variation needed to assess the
association between individuals' LFP behavior and parenthood. Parenthood could
have different impacts on mothers than it would on fathers. In order to look at
the causal effect of maternity and paternity on LFP separately, the data is
disaggregated by gender. To this end, the effect of a change in the parenthood
status can be measured using individual-level fixed-effects to account for
time-invariant characteristics of individuals becoming a parent. The primary
data source used is the National Longitudinal Surveys (NLS). Considering the
nature of LFP variable, this paper employs Binary Response Models (BRMs) to
estimate LFP equations using individual-level micro data. The findings of the
study show that parenthood has a negative overall effect on LFP. However,
paternity has a significant positive effect on the likelihood of being in the
labor force, whilst maternity has a significant negative impact of LFP. In
addition, the results imply that the effect of parenthood on LFP has been
fading away over time, regardless of the gender of parents. These two pieces of
evidence precisely map onto the theoretical predictions made by the related
mainstream economic theories (the traditional neoclassical theory of labor
supply as well as Becker's household production model). These results are ...",Identifying the Effect of Parenthood on Labor Force Participation: A Gender Comparison,2022-09-06 21:04:13,Seyyed Ali Zeytoon Nejad Moosavian,"http://dx.doi.org/10.1453/jepe.v8i3.2230, http://arxiv.org/abs/2209.02743v1, http://arxiv.org/pdf/2209.02743v1",econ.GN
32698,gn,"Duality is the heart of advanced microeconomics. It exists everywhere
throughout advanced microeconomics, from the beginning of consumer theory to
the end of production theory. The complex, circular relationships among various
theoretical microeconomic concepts involved in the setting of duality theory
have led it to be called the ""wheel of pain"" by many graduate economics
students. Put simply, the main aim of this paper is to turn this ""wheel of
pain"" into a ""wheel of joy"". To be more specific, the primary purpose of this
paper is to graphically decode the logical, complex relationships among a
quartet of dual functions which present preferences as well as a quartet of
demand-related functions in a visual manner.",The Visual Decoding of the Wheel of Duality in Consumer Theory in Modern Microeconomics,2022-09-07 01:35:31,Seyyed Ali Zeytoon Nejad Moosavian,"http://dx.doi.org/10.11114/aef.v3i3.1718, http://arxiv.org/abs/2209.02839v1, http://arxiv.org/pdf/2209.02839v1",econ.GN
32699,gn,"Production theory, defined as the study of the economic process of
transforming inputs into outputs, consists of two simultaneous economic forces:
cost minimization and profit maximization. The cost minimization problem
involves deriving conditional factor demand functions and the cost function.
The profit maximization problem involves deriving the output supply function,
the profit function, and unconditional factor demand functions. Nested within
the process are Shephard's lemma, Hotelling's lemmas, direct and indirect
mathematical relations, and other elements contributing to the dynamics of the
process. The intricacies and hidden underlying influences pose difficulties in
presenting the material for an instructor, and inhibit learning by students.
Simply put, the primary aim of this paper is to facilitate the teaching and
learning of the production theory realm of Economics through the use of a
conceptual visual model. This paper proposes a pedagogical tool in the form of
a detailed graphic illustrating t he relationship between profit maximization
and cost minimization under technical constraints, with an emphasis on the
similarities and differences between the perfect competition and monopoly
cases. The potential that such a visual has to enhance learning when
supplementing traditional context is discussed under the context of
contemporary learning literature. Embedded in the discussion is an example of
how we believe our model could be conceptualized and utilized in a real-world
setting to evaluate an industrial project from an economic point of view.",Clarifying Theoretical Intricacies through the Use of Conceptual Visualization: Case of Production Theory in Advanced Microeconomics,2022-09-07 01:42:42,"Alexandra Naumenko, Seyyed Ali Zeytoon Nejad Moosavian","http://dx.doi.org/10.11114/aef.v3i4.1781, http://arxiv.org/abs/2209.02841v1, http://arxiv.org/pdf/2209.02841v1",econ.GN
32700,gn,"We examine the effect of civil war in Syria on economic growth, human
development and institutional quality. Building on the synthetic control
method, we estimate the missing counterfactual scenario in the hypothetical
absence of the armed conflict that led to unprecedented humanitarian crisis and
population displacement in modern history. By matching Syrian growth and
development trajectories with the characteristics of the donor pool of 66
countries with no armed internal conflict in the period 1996-2021, we estimate
a series of growth and development gaps attributed to civil war. Syrian civil
war appears to have had a temporary negative effect on the trajectory of
economic growth that almost disappeared before the onset of the COVID-19 pandemic.
By contrast, the civil war led to unprecedented losses in human development,
rising infant mortality and rampantly deteriorating institutional quality. Down
to the present day, each year of the conflict led to 5,700 additional
under-five child deaths with permanently derailed negative effect on longevity.
The civil war led to unprecedent and permanent deterioration in institutional
quality indicated by pervasive weakening of the rule of law and deleterious
impacts on government effectiveness, civil liberties and widespread escalation
of corruption. The estimated effects survive a battery of placebo checks.",Estimating the Effects of Syrian Civil War,2022-09-07 13:22:50,"Aleksandar Keseljevic, Rok Spruk","http://arxiv.org/abs/2209.03046v1, http://arxiv.org/pdf/2209.03046v1",econ.GN
32702,gn,"In this paper, we deal with a storage assignment problem arising in a
fulfilment centre of a major European e-grocery retailer. The centre can be
characterised as a hybrid warehouse consisting of a highly efficient and
partially automated fast-picking area designed as a pick-and-pass system with
multiple stations, and a picker-to-parts area. The storage assignment problem
considered in this paper comprises the decisions to select the products to be
allocated to the fast-picking area, the assignment of the products to picking
stations and the determination of a shelf within the assigned station. The
objective is to achieve a high level of picking efficiency while respecting
station workload balancing and precedence order constraints. We propose to
solve this three-level problem using an integrated MILP model. In computational
experiments with real-world data, we show that using the proposed integrated
approach yields significantly better results than a sequential approach in
which the selection of products to be included in the fast-picking area is
solved before assigning station and shelf. Furthermore, we provide an extension
to the integrated storage assignment model that explicitly accounts for
within-week demand variation. In a set of experiments with
day-of-week-dependent demands we show that while a storage assignment that is
based on average demand figures tends to exhibit a highly imbalanced workload
on certain days of the week, the augmented model yields storage assignments
that are well balanced on each day of the week without compromising the quality
of the solutions in terms of picking efficiency.",Integrated storage assignment for an e-grocery fulfilment centre: Accounting for day-of-week demand patterns,2022-09-08 21:50:14,"David Winkelmann, Frederik Tolkmitt, Matthias Ulrich, Michael Römer","http://arxiv.org/abs/2209.03998v2, http://arxiv.org/pdf/2209.03998v2",econ.GN
32703,gn,"Introduction: Since inception, the United States (US) Food and Drug
Administration (FDA) has kept a robust record of regulated medical devices
(MDs). Based on these data, can we gain insight into the innovation dynamics of
the industry, including the potential for industrial transformation? Areas
Covered: Using Premarket Notifications (PMNs) and Approvals (PMAs) data, it is
shown that from 1976 to 2020 the total composite (PMN + PMA) metric follows a
single secular period: 20.5 years (applications peak-to-peak: 1992-2012;
trough: 2002) and 26.5 years (registrations peak-to-peak: 1992-2019; trough:
2003), with peak-to-trough relative percentage differences of 24% and 28%,
respectively. Importantly, PMNs and PMAs independently present as an inverse
structure. Expert Opinion: The evidence suggests that MD innovation is driven
by a singular secular Kuznets-like cyclic phenomenon (independent of economic
crises) derived from a fundamental shift from simple (PMNs) to complex (PMAs)
MDs. Portentously, while the COVID-19 crisis may not affect the overriding
dynamic, the anticipated yet significant (~25%) MD innovation drop may be
potentially attenuated with attentive measures by MD stakeholders. Limitations
of this approach and further thoughts complete this perspective.",Singular Secular Kuznets-like Period Realized Amid Industrial Transformation in US FDA Medical Devices: A Perspective on Innovation from 1976 to 2020,2022-07-11 21:03:06,Iraj Daizadeh,"http://dx.doi.org/10.1080/17434440.2022.2139919, http://arxiv.org/abs/2209.04431v2, http://arxiv.org/pdf/2209.04431v2",econ.GN
32704,gn,"In the backdrop of growing power transitions and rise of strategic high tide
in the world, the Indo Pacific Region (IPR) has emanated as an area which is
likely to witness increased development & enhanced security cooperation through
militarization. With China trying to be at the seat of the Global leadership,
US & its allies in the Indo pacific are aiming at working together, finding
mutually beneficial areas of functional cooperation and redefining the canvas
of security. This purpose of this paper is to analyze an informal alliance,
Quadrilateral Security dialogue (QUAD) in its present form with the
geostrategic landscape in Indo pacific and present recommendations to
strengthen collaboration & response capability.",Revisiting QUAD ambition in the Indo Pacific leveraging Space and Cyber Domain,2022-09-10 10:14:19,Mandeep Singh Rai,"http://arxiv.org/abs/2209.04609v1, http://arxiv.org/pdf/2209.04609v1",econ.GN
32705,gn,"The August 2022 Alaska Special Election for US House contained many
interesting features from the perspective of social choice theory. This
election used instant runoff voting (often referred to as ranked choice voting)
to elect a winner, and many of the weaknesses of this voting method were on
display in this election. For example, the Condorcet winner is different from
the instant runoff winner, and the election demonstrated a monotonicity
paradox. The election also demonstrated a no-show paradox; as far as we are
aware, this election represents the first documented American ranked choice
election to demonstrate this paradox.",A Mathematical Analysis of the 2022 Alaska Special Election for US House,2022-09-11 04:09:15,"Adam Graham-Squire, David McCune","http://arxiv.org/abs/2209.04764v3, http://arxiv.org/pdf/2209.04764v3",econ.GN
32706,gn,"Contact tracing and quarantine programs have been one of the leading
Non-Pharmaceutical Interventions against COVID-19. Some governments have relied
on mandatory programs, whereas others embrace a voluntary approach. However,
there is limited evidence on the relative effectiveness of these different
approaches. In an interactive online experiment conducted on 731 subjects
representative of the adult US population in terms of sex and region of
residence, we find there is a clear ranking. A fully mandatory program is
better than an optional one, and an optional system is better than no
intervention at all. The ranking is driven by reductions in infections, while
economic activity stays unchanged. We also find that political conservatives
have higher infections and levels of economic activity, and they are less
likely to participate in the contact tracing program.",The economic and health impacts of contact tracing and quarantine programs,2022-09-11 14:43:15,"Darija Barak, Edoardo Gallo, Alastair Langtry","http://arxiv.org/abs/2209.04843v1, http://arxiv.org/pdf/2209.04843v1",econ.GN
32707,gn,"I show that a class of Linear DSGE models with one endogenous state variable
can be represented as a three-state Markov chain. I develop a new analytical
solution method based on this representation, which amounts to solving for a
vector of Markov states and one transition probability. These two objects
constitute sufficient statistics to compute in closed form objects that have
routinely been computed numerically: impulse response function, cumulative sum,
present discount value multiplier. I apply the method to a standard New
Keynesian model that features optimal monetary policy with commitment.",Analyzing Linear DSGE models: the Method of Undetermined Markov States,2022-09-12 11:31:02,Jordan Roulleau-Pasdeloup,"http://arxiv.org/abs/2209.05081v2, http://arxiv.org/pdf/2209.05081v2",econ.GN
32708,gn,"Either saying that the market is structured to promote workforce exploitation
in some sections or the people who participate there benefit from the existing
structure and exploit, exploitation happens - systematically or
opportunistically. This research presents a perspective on how workforce
exploitation may occur when vulnerable groups voluntarily seek employment in
the labor market.",Exploitees vs. Exploiters: Dynamics of Exploitation,2022-09-12 13:32:45,Davood Qorbani,"http://arxiv.org/abs/2209.05129v1, http://arxiv.org/pdf/2209.05129v1",econ.GN
32709,gn,"This study puts forward a conceptual model linking interpersonal influences'
impact on Employee Engagement, Psychological contracts, and Human Resource
Practices. It builds on human and social capital, as well as the social
exchange theory (SET), projecting how interpersonal influences can impact the
psychological contract (PC) and employee engagement (EE) of employees. This
research analyzes the interpersonal influences of Wasta in the Middle East,
Guanxi in China, Jeitinho in Brazil, Blat in Russia, and Pulling Strings in
England. Interpersonal influences draw upon nepotism, favoritism, and
corruption in organizations in many countries. This paper draws on the
qualitative methods of analyzing previous theories. It uses the Model Paper
method of predicting relationships by examining the question of how
interpersonal influences impact employee engagement and the psychological
contract. It is vital to track the effects of interpersonal influences on PC
and EE, acknowledging that the employer can either empower or disengage our
human capital.","Impact of interpersonal influences on Employee engagement and Psychological contract: Effects of guanxi, wasta, jeitinho, blat and pulling strings",2022-09-12 23:17:31,Elizabeth Kassab Sfeir,"http://arxiv.org/abs/2209.05592v1, http://arxiv.org/pdf/2209.05592v1",econ.GN
32710,gn,"After the outbreak of COVID 19, firms appear to monitor Work From Home (WFH)
workers more than ever out of anxiety that workers may shirk at home or
implement moral hazard at home. Using the Survey of Working Arrangements and
Attitudes (SWAA, Barrero et al., 2021), the evidence of WFH workers' ex post
moral hazard as well as its specific aspects are examined. The results show
that the ex post moral hazard among the WFH workers is generally found.
Interestingly, however, the moral hazard on specific type of productivity,
efficiency, is not detected for the workers at firms with WFH friendly policy
for long term. Moreover, the advantages & challenges for the WFH culture report
that workers with health or disability issues improve their productivity,
whereas certain conditions specific to the WFH environment must be met.",Moral Hazard on Productivity Among Work-From-Home Workers Amid the COVID-19 Pandemic,2022-09-13 04:47:14,Jieun Lee,"http://dx.doi.org/10.13140/RG.2.2.13099.52007, http://arxiv.org/abs/2209.05684v1, http://arxiv.org/pdf/2209.05684v1",econ.GN
32711,gn,"This paper studies the effects of legislation mandating workplace
breastfeeding amenities on various labor market outcomes. Using the Panel Study
of Income Dynamics, I implement both a traditional fixed-effects event study
framework and the Interaction-Weighted event study technique proposed by Sun &
Abraham (2020). I find that workplace breastfeeding legislation increases the
likelihood of female labor force participation by 4.2 percentage points in the
two years directly following implementation. Female labor force participation
remains higher than before implementation in subsequent years, but the
significant effect does not persist. Using the CDC's Infant Feeding Practices
Survey II, I then show that breastfeeding women who do not live in states with
workplace breastfeeding supportive legislation are 3.9 percentage points less
likely to work, supporting the PSID findings. The legislation mainly impacts
white women. I find little effect on labor income or work intensity.",Workplace Breastfeeding Legislation and Labor Market Outcomes,2022-09-13 15:04:15,Julia Hatamyar,"http://arxiv.org/abs/2209.05916v1, http://arxiv.org/pdf/2209.05916v1",econ.GN
32712,gn,"Pakistan has one of the lowest rates of routine childhood immunization
worldwide, with only two-thirds of infants 2 years or younger being fully
immunized (Pakistan Demographic and Health Survey 2019). Government-led,
routine information campaigns have been disrupted over the last few years due
to the on-going COVID-19 pandemic. We use data from a mobile-based campaign
that involved sending out short audio dramas emphasizing the importance of
vaccines and parental responsibilities in Quetta, Pakistan. Five out of eleven
areas designated by the provincial government were randomly selected to receive
the audio calls with a lag of 3 months and form the comparison group in our
analysis. We conduct a difference-in-difference analysis on data collected by
the provincial Department of Health in the 3-month study and find a significant
30% increase over the comparison mean in the number of fully vaccinated
children in campaign areas on average. We find evidence that suggests
vaccination increased in UCs where vaccination centers were within a short
30-minute travel distance, and that the campaign was successful in changing
perceptions about vaccination and reliable sources of advice. Results highlight
the need for careful design and targeting of similar soft behavioral change
campaigns, catering to the constraints and abilities of the context.",Digital 'nudges' to increase childhood vaccination compliance: Evidence from Pakistan,2022-09-14 16:20:57,"Shehryar Munir, Farah Said, Umar Taj, Maida Zafar","http://arxiv.org/abs/2209.06624v1, http://arxiv.org/pdf/2209.06624v1",econ.GN
32713,gn,"Background/Purpose: The use of artificial intelligence (AI) models for
data-driven decision-making in different stages of employee lifecycle (EL)
management is increasing. However, there is no comprehensive study that
addresses contributions of AI in EL management. Therefore, the main goal of
this study was to address this theoretical gap and determine the contribution
of AI models to EL. Methods: This study applied the PRISMA method, a systematic
literature review model, to ensure that the maximum number of publications
related to the subject can be accessed. The output of the PRISMA model led to
the identification of 23 related articles, and the findings of this study were
presented based on the analysis of these articles. Results: The findings
revealed that AI algorithms were used in all stages of EL management (i.e.,
recruitment, on-boarding, employability and benefits, retention, and
off-boarding). It was also disclosed that Random Forest, Support Vector
Machines, Adaptive Boosting, Decision Tree, and Artificial Neural Network
algorithms outperform other algorithms and were the most used in the
literature. Conclusion: Although the use of AI models in solving EL problems is
increasing, research on this topic is still in its infancy, and more
research is necessary.",Artificial Intelligence Models and Employee Lifecycle Management: A Systematic Literature Review,2022-09-15 17:45:23,"Saeed Nosratabadi, Roya Khayer Zahed, Vadim Vitalievich Ponkratov, Evgeniy Vyacheslavovich Kostyrin","http://dx.doi.org/10.2478/orga-2022-0012, http://arxiv.org/abs/2209.07335v1, http://arxiv.org/pdf/2209.07335v1",econ.GN
32714,gn,"Continuing education beyond the compulsory years of schooling is one of the
most important choices an adolescent has to make. Higher education is
associated with a host of social and economic benefits both for the person and
its community. Today, there is ample evidence that educational aspirations are
an important determinant of said choice. We implement a multilevel, networked
experiment in 45 Mexican high schools and provide evidence of the malleability
of educational aspirations. We also show there exists an interdependence of
students' choices and the effect of our intervention with peer networks. We
find that a video intervention, which combines role models and information
about returns to education is successful in updating students' beliefs and
consequently educational aspirations.",Peer Networks and Malleability of Educational Aspirations,2022-09-17 17:28:15,"Michelle González Amador, Robin Cowan, Eleonora Nillesen","http://arxiv.org/abs/2209.08340v1, http://arxiv.org/pdf/2209.08340v1",econ.GN
32716,gn,"European Monetary Union continues to be characterised by significant
macroeconomic imbalances. Germany has shown increasing current account
surpluses at the expense of the other member states (especially the European
periphery). Since the creation of a single currency has implied the
impossibility of implementing competitive devaluations, trade imbalances within
a monetary union can be considered unfair behaviour. We have modelled Eurozone
trade flows in goods through a weighted network from 1995 to 2019. To the best
of our knowledge, this is the first work that applies this methodology to this
kind of data. Network analysis has allowed us to estimate a series of important
centrality measures. A polarisation phenomenon emerges in relation to the
growth of German dominance. The common currency has thus not been capable of
removing trade asymmetry, which has increased the distance between surplus and
deficit countries. This situation should be addressed with expansionary
policies on the demand side at the national and supranational levels.",Network analysis and Eurozone trade imbalances,2022-09-20 19:31:12,"Giovanni Carnazza, Pierluigi Vellucci","http://arxiv.org/abs/2209.09837v1, http://arxiv.org/pdf/2209.09837v1",econ.GN
32717,gn,"We propose a game-theoretic model to investigate how non-superpowers with
heterogeneous preferences and endowments shape the superpower competition for a
sphere of influence. Two superpowers play a Stackelberg game of providing club
goods while non-superpowers form coalitions to join a club in the presence of
externalities. The coalition formation, which depends on the characteristics of
non-superpowers, influences the behavior of superpowers and thus the club size.
In this sense, non-superpowers have a power over superpowers. We study the
subgame perfect Nash equilibrium and simulate the game to characterize how the
US and China form their clubs depending on other countries.",The Power of Non-Superpowers,2022-09-21 12:07:42,"Tomoo Kikuchi, Shuige Liu","http://arxiv.org/abs/2209.10206v3, http://arxiv.org/pdf/2209.10206v3",econ.GN
32718,gn,"It has long been assumed that inheritances, particularly large ones, have a
negative effect on the labor supply of inheritors. Using Norwegian registry
data, I examine the inheritance-induced decline in inheritors' wages and
occupational income. In contrast to prior research, my estimates allow the
dynamic effect of inheritances on labor supply to vary among inheritor cohorts.
The estimation approach adopted and the 25-year-long panel data make it
possible to trace the dynamics of the effect for at least 20 years, which is
twice as long as the study period in previous studies. Since all observations
in the sample are inheritors, I avoid the selection problem arising in studies
employing non-inheritors as controls. I find that large parental inheritances
(more than one million Norwegian kroner) reduce annual wage and occupational
income by, at most, 4.3%, which is about half the decrease previously
identified. The magnitude of the effect increases with the size of the
inheritance. Large inheritances also increase the probability of being
self-employed by more than 1%, although entrepreneurship may be dampened by
inheritances that are excessively large. The inheritance effect lasts for up to
10 years and is heterogeneous across sexes and age groups. Male heirs are more
likely to reduce their labor supply after receiving the transfer. Young heirs
are more likely to be self-employed, and their annual occupational income is,
therefore, less affected by inheritances in the long run; for the very young
inheriting large amounts of wealth from their grandparents, the probability of
their attaining a post-secondary education declines by 2%.",Heterogeneous earning responses to inheritance: new event-study evidence from Norway,2022-09-21 14:05:03,Xiaoguang Ling,"http://arxiv.org/abs/2209.10256v3, http://arxiv.org/pdf/2209.10256v3",econ.GN
32719,gn,"The work-from-home policy affected people of all demographics and
professions, including students and faculty at universities. After the onset of
the COVID-19 pandemic in 2020, institutions moved their operations online,
affecting the motivation levels, communication abilities, and mental health of
students and faculty around the world. This paper is based mainly on primary
data collected from students from around the world, and professors at
universities in Bengaluru, India. It explores the effects of work-from-home as
a policy in terms of how it changed learning during the pandemic and how it has
permanently altered it in a post-pandemic future. Further, it suggests and
evaluates policies on how certain negative effects of the work-from-home policy
can be mitigated.",Effects of Work-From-Home on University Students and Faculty,2022-09-13 10:21:18,Avni Singh,"http://arxiv.org/abs/2209.10405v1, http://arxiv.org/pdf/2209.10405v1",econ.GN
32720,gn,"The subject of international institutions and power politics continues to
occupy a central position in the field of International Relations and in world
politics. It revolves around key questions of how rising states, regional
powers and small states leverage international institutions to achieve social,
political and economic gains for themselves. Taking into account one of the
rising powers, China, and the role of international institutions in
contemporary international politics, this paper aims to demonstrate how, in
pursuit of power politics, various states (small, regional and great powers)
utilise international institutions by making them adapt to the new power
realities critical to world politics.",International institutions and power politics in the context of Chinese Belt and Road Initiative,2022-09-21 19:53:53,Mandeep Singh Rai,"http://arxiv.org/abs/2209.10498v1, http://arxiv.org/pdf/2209.10498v1",econ.GN
32721,gn,"This paper will explore a way forward for French-Afghan relations post United
Nations (U.N.) occupation. The summer of 2021 proved to be very tumultuous as
the Taliban lay in waiting for U.N. forces to withdraw in what many would call
a hasty, poorly thought-out egress operation. The Taliban effectively retook
all major cities, including the capital, Kabul, and found themselves isolated
as U.N. states closed embassies and severed diplomatic relations. Now,
Afghanistan finds itself without international aid and support and on the verge
of an economic crisis. France now has the opportunity to initiate a leadership
role in the establishment and recovery of the new Afghan government.",A French Connection? Recognition and Entente for the Taliban,2022-09-21 20:37:55,"Mandeep Singh Rai Misty Wyatt, Sydney Farrar, Kristina Alabado","http://arxiv.org/abs/2209.10527v1, http://arxiv.org/pdf/2209.10527v1",econ.GN
32722,gn,"We experimentally study a game in which success requires a sufficient total
contribution by members of a group. There are significant uncertainties
surrounding the chance and the total effort required for success. A theoretical
model with max-min preferences towards ambiguity predicts higher contributions
under ambiguity than under risk. However, in a large representative sample of
the Spanish population (1,500 participants) we find that the ATE of ambiguity
on contributions is zero. The main significant interaction with the personal
characteristics of the participants is with risk attitudes, and it increases
contributions. This suggests that policymakers concerned with ambiguous
problems (like climate change) do not need to worry excessively about
ambiguity.",The effect of ambiguity in strategic environments: an experiment,2022-09-22 18:09:35,"Pablo Brañas-Garza, Antonio Cabrales, María Paz Espinosa, Diego Jorrat","http://arxiv.org/abs/2209.11079v1, http://arxiv.org/pdf/2209.11079v1",econ.GN
32723,gn,"This paper studies the transmission of US monetary policy shocks into
Emerging Markets emphasizing the role of investment and financial
heterogeneity. First, we use a panel SVAR model to show that a US interest
tightening leads to a persistent recession in Emerging Markets driven by a
sharp reduction in aggregate investment. Second, we study the role of firms'
financial heterogeneity in the transmission of US interest rate shocks by
exploiting a detailed balance sheet dataset from Chile. We find that more
indebted firms experience greater drops in investment in response to a US
tightening shock than less indebted firms. This result is at odds with recent
evidence from US firms, even when using the same identification strategy and
econometric methods. Third, we rationalize this finding using a stylized model
of heterogeneous firms subject to a tightening leverage constraint. Finally, we
present evidence in support of this hypothesis as well as robustness checks to
our main results. Overall, our results suggest that the transmission channels
of US monetary policy shocks within and outside the US differ, a result novel
to the literature.",The Transmission of US Monetary Policy Shocks: The Role of Investment & Financial Heterogeneity,2022-09-22 19:51:52,"Santiago Camara, Sebastian Ramirez Venegas","http://arxiv.org/abs/2209.11150v1, http://arxiv.org/pdf/2209.11150v1",econ.GN
32724,gn,"Environmental degradation, global pandemic and severing natural resource
related problems cater to increase demand resulting from migration is nightmare
for all of us. Huge flocks of people are rushing towards to earn, to live and
to lead a better life. This they do for their own development often ignoring
the environmental cost. With existing model, this paper looks at out migration
(interstate) within India focusing on the various proximate and fundamental
causes relating to migration. The author deploys OLS to see those fundamental
causes. Obviously, these are not exhaustive cause, but definitely plays a role
in migration decision of individual. Finally, this paper advocates for some
policy prescription to cope with this problem.",Determinants Of Migration: Linear regression Analysis in Indian Context,2022-09-23 13:23:46,"Soumik Ghosh, Arpan Chakraborty","http://arxiv.org/abs/2209.11507v1, http://arxiv.org/pdf/2209.11507v1",econ.GN
32725,gn,"The effective medium approximation (EMA) method is commonly used to estimate
the effective conductivity development in composites containing two types of
materials: conductors and insulators. The effective conductivity is a global
parameter that measures how easily the composite conducts electric current.
Currently, financial transactions in society take place either in cash or in
cashless form, and in cashless transactions money flows faster than in cash
transactions. Therefore, to provide a cashless grading of countries, we
introduce a cashless transaction index (CTI), which is calculated using the
EMA method, in which individuals who make cash transactions are analogous to
the insulator element in the composite and individuals who make cashless
transactions are analogous to the conductor element. We define the CTI as the
logarithm of the effective conductivity of a country's transactions. We also
introduce a time-dependent equation for the cashless share. Estimates from the proposed
model can explain well the data in the last few years.",Introducing Cashless Transaction Index based on the Effective Medium Approximation,2022-06-14 16:53:15,Mikrajuddin Abdullah,"http://arxiv.org/abs/2209.13470v1, http://arxiv.org/pdf/2209.13470v1",econ.GN
32726,gn,"The aim of this paper is to examine how consumer perceptions of social media
advertisements affect advertising value and brand awareness. With a rapid
increase in the number of social media users over the last ten years, a new
advertising domain has become available for companies. Brands that manage
social media well in their advertising strategies can quickly influence
consumer decision-making and create awareness. However, in social media
advertising, which is different from traditional advertising, content must be
created and then perceived by consumers in a short time. To achieve this, it
is necessary to build rapport with consumers and
to present correctly what they wish to see in advertisements by creating
awareness. In view of the increasing importance of social media advertising,
the study examines how consumer perceptions of Instagram advertisements affect
advertising value and brand awareness. This study was conducted with Generation
Y consumers on the basis of their Instagram habits, a popular social media app.
For this purpose, surveys were held with 665 participants who use Instagram.
The collected data were analyzed using structural equation modeling. According
to the analysis results, Generation Y's perceptions of Instagram advertisements
have both positive and negative impacts on advertising value, brand awareness,
and brand associations.","The Impact of Perceptions of Social Media Advertisements on Advertising Value, Brand Awareness and Brand Associations: Research on Generation Y Instagram Users",2022-09-27 08:43:17,"Ibrahim Halil Efendioglu, Yakup Durmaz","http://dx.doi.org/10.33182/tmj.v10i2.1606, http://arxiv.org/abs/2209.13596v1, http://arxiv.org/pdf/2209.13596v1",econ.GN
32727,gn,"This study examines the roles of government spending and money supply on
alleviating poverty in Africa. The study used 48 Sub-Saharan Africa countries
from 2001 to 2017. The study employed one step and two-step system GMM and
found that both the procedures have similar results. Different specifications
were employed and the model selected was robust, with valid instruments and
absence of autocorrelation at the second order. The study revealed that
government spending and foreign direct investment have significant negative
influence on reducing poverty while money supply has positive influence on the
level of poverty in the region. The implication of the finding is that monetary
policy tool of money supply has no strong influence in combating the menace of
poverty. The study therefore recommends that emphasis should be placed on
increasing more of government spending that would impact on the quality of life
of the people in the region through multiplier effect, improving the financial
system for effective monetary policy and attracting foreign direct inflows
through enabling business environment in Africa.",Government Spending and Money Supply Roles in Alleviating Poverty in Africa,2022-09-29 00:57:10,"Gbatsoron Anjande, Simeon T Asom, Ngutsav Ayila, Bridget Ngodoo Mile, Victor Ushahemba Ijirshar","http://arxiv.org/abs/2209.14443v1, http://arxiv.org/pdf/2209.14443v1",econ.GN
32728,gn,"We investigate the impact of the Covid-19 pandemic on the transition between
high school and college in Brazil. Using microdata from the universe of
students that applied to a selective university, we document how the Covid-19
shock increased enrollment for students in the top 10% high-quality public and
private high schools. This increase comes at the expense of graduates from
relatively lower-quality schools. Furthermore, this effect is entirely driven
by applicants who were in high school during the Covid pandemic. The effect is
large and completely offsets the gains in student background diversity achieved
by a bold quota policy implemented years before Covid. These results suggest
that not only did students from underprivileged backgrounds endure larger
negative effects on learning during the pandemic, but they also experienced a
stall in their educational paths.",School closures and educational path: how the Covid-19 pandemic affected transitions to college,2022-10-01 02:39:21,"Fernanda Estevan, Lucas Finamor","http://arxiv.org/abs/2210.00138v1, http://arxiv.org/pdf/2210.00138v1",econ.GN
32729,gn,"SMEs remain a veritable tool that generates employment opportunities. This
study examined the impact of SMEs on employment creation in the Makurdi
metropolis of Benue state. A sample size of 340 entrepreneurs was chosen from
the population of entrepreneurs (SMEs) in the Makurdi metropolis. The study
used logistic regression to analyse the impact of SME activities on employment
creation or generation in the state and found that SMEs contribute
significantly to employment creation in the state but are often faced with the
challenges of lack of capital, absence of business planning, lack of confidence
in the face of competition, unfavorable environment for the development of
SMEs, high government taxes and inadequate technical knowledge. The study
therefore recommended that the government implement capital- or
credit-enhancing programmes and provide an enabling environment for the smooth
running of SMEs. Tax incentives should also be granted to infant enterprises, and tax
administration should be monitored to avoid excessive tax rates imposed by tax
collectors.",The impact of SMEs on employment creation in Makurdi metropolis of Benue state,2022-10-01 02:52:06,"Bridget Ngodoo Mile, Victor Ushahemba Ijirshar, Mlumun Queen Ijirshar","http://arxiv.org/abs/2210.00143v1, http://arxiv.org/pdf/2210.00143v1",econ.GN
32730,gn,"We assess the role of cognitive convenience in the popularity and rigidity of
0-ending prices in convenience settings. Studies show that 0-ending prices are
common at convenience stores because of the transaction convenience that they
offer. Using a large store-level retail CPI dataset, we find that 0-ending
prices are popular and rigid at convenience stores even when they offer
little transaction convenience. We corroborate these findings with two large
retail scanner price datasets from Dominick's and Nielsen. In the Dominick's
data, we find that there are more 0-endings in the prices of the items in the
front-end candies category than in any other category, even though these
prices have no effect on the convenience of the consumers' checkout
transaction. In addition, in both the Dominick's and Nielsen datasets, we find
that 0-ending prices have a positive effect on demand. Ruling out consumer
antagonism and retailers' use of heuristics in pricing, we conclude that
0-ending prices are popular and rigid, and that they increase demand in
convenience settings, not only for their transaction convenience, but also for
the cognitive convenience they offer.","Zero-Ending Prices, Cognitive Convenience, and Price Rigidity",2022-10-02 14:14:07,"Avichai Snir, Haipeng, Chen, Daniel Levy","http://arxiv.org/abs/2210.00488v1, http://arxiv.org/pdf/2210.00488v1",econ.GN
32731,gn,"Wetland quality is a critical factor in determining values of wetland goods
and services. However, in many studies on wetland valuation, wetland quality
has been ignored. While those studies might give useful information to the
local people for decision-making, their lack of wetland quality information
may make it difficult to integrate wetland quality into cross-study research
such as meta-analysis. In a meta-analysis, a statistical regression function
needs to be drawn from those individual studies for analysis and
prediction. This research introduces the wetland quality factor, a critical but
frequently missed factor, into a meta-analysis and simultaneously considers
other influential factors, such as study method, socio-economic state and other
wetland site characteristics, as well. Thus, a more accurate and valid
meta-regression function is expected. Since there is no explicit quality
information in the primary studies, we extract two kinds of wetland states
from the study context as relative but globally consistent quality
measurements to use in the analysis,
as the first step to explore the effect of wetland quality on values of
ecosystem services.",Wetland Quality as a Determinant of Economic Value of Ecosystem Services: an Exploration,2022-10-03 21:03:56,"Hongyan Chen, Pushpam Kumar, Tom Barker","http://arxiv.org/abs/2210.01153v2, http://arxiv.org/pdf/2210.01153v2",econ.GN
32732,gn,"Considering collaborative patent development, we provide micro-level evidence
for innovation through exchanges of differentiated knowledge. Knowledge
embodied in a patent is proxied by word pairs appearing in its abstract, while
novelty is measured by the frequency with which these word pairs have appeared
in past patents. Inventors are assumed to possess the knowledge associated with
patents in which they have previously participated. We find that collaboration
by inventors with more mutually differentiated knowledge sets is likely to
result in patents with higher novelty.",Collaborative knowledge exchange promotes innovation,2022-10-04 08:50:58,"Tomoya Mori, Jonathan Newton, Shosei Sakaguchi","http://arxiv.org/abs/2210.01392v5, http://arxiv.org/pdf/2210.01392v5",econ.GN
32733,gn,"Collusive practices of firms continue to be a major threat to competition and
consumer welfare. Academic research on this topic aims at understanding the
economic drivers and behavioral patterns of cartels, among others, to guide
competition authorities on how to tackle them. Utilizing topical machine
learning techniques in the domain of natural language processing enables me to
analyze the publications on this issue over more than 20 years in a novel way.
Coming from a stylized oligopoly-game theory focus, researchers recently turned
toward empirical case studies of bygone cartels. Uni- and multivariate time
series analyses reveal that the latter did not supersede the former but filled
a gap the decline in rule-based reasoning has left. Together with a tendency
towards monocultures in topics covered and an endogenous constriction of the
topic variety, the course of cartel research has changed notably: The variety
of subjects included has grown, but the pluralism in economic questions
addressed is in descent. It remains to be seen whether this will benefit or
harm the cartel detection capabilities of authorities in the future.",From Rules to Regs: A Structural Topic Model of Collusion Research,2022-10-06 17:49:55,W. Benedikt Schmal,"http://arxiv.org/abs/2210.02957v1, http://arxiv.org/pdf/2210.02957v1",econ.GN
32734,gn,"This paper investigates volumetric grid tariff designs under consideration of
different pricing mechanisms and resulting cost allocation across
socio-techno-economic consumer categories. In a case study of 1.56 million
Danish households divided into 90 socio-techno-economic categories, we compare
three alternative grid tariffs and investigate their impact on annual
electricity bills. The results of our design, consisting of a time-dependent
threshold penalizing individual peak consumption and a system peak tariff, show
(a) a range of different allocations that distribute the burden of additional
grid costs across both technologies and (b) strong positive outcomes, including
reduced expenses for lower-income groups and smaller households.",Grid tariff designs coping with the challenges of electrification and their socio-economic impacts,2022-10-07 16:00:33,"Philipp Andreas Gunkel, Claire-Marie Bergaentzlé, Dogan Keles, Fabian Scheller, Henrik Klinge Jacobsen","http://arxiv.org/abs/2210.03514v3, http://arxiv.org/pdf/2210.03514v3",econ.GN
32763,gn,"The prospect theory's value function is usually concave for gains, commonly
convex for losses, and generally steeper for losses than for gains. The neural
system is quite different from the loss and gains sides. Four new studies on
neurons related to this issue have examined neuronal responses to losses,
gains, and reference points, respectively. A value function with a neuronal
cusp could exhibit variations and behavioral cusps with catastrophe where a
trader closes her position.",A New Concept of the Value Function,2022-10-31 23:43:00,Kazuo Sano,"http://arxiv.org/abs/2211.00131v3, http://arxiv.org/pdf/2211.00131v3",econ.GN
32735,gn,"This paper provides the first comprehensive empirical analysis of the role of
natural gas for the domestic and international distribution of power. The
crucial role of pipelines for the trade of natural gas determines a set of
network effects that are absent for other natural resources such as oil and
minerals. Gas rents are not limited to producers but also accrue to key players
occupying central nodes in the gas network. Drawing on our new gas pipeline
data, this paper shows that the gas betweenness-centrality of a country
substantially increases the ruler's grip on power as measured by leader turnover. A main
mechanism at work is the reluctance of connected gas trade partners to impose
sanctions, meaning that bad behavior of gas-central leaders is tolerated for
longer before being sanctioned. Overall, this reinforces the notion that fossil
fuels are not just poison for the environment but also for political pluralism
and healthy regime turnover.",Power in the Pipeline,2022-10-07 17:15:13,"Quentin Gallea, Massimo Morelli, Dominic Rohner","http://arxiv.org/abs/2210.03572v1, http://arxiv.org/pdf/2210.03572v1",econ.GN
32736,gn,"We show that capital flow (CF) volatility exerts an adverse effect on
exchange rate (FX) volatility, regardless of whether capital controls have been
put in place. However, this effect can be significantly moderated by certain
macroeconomic fundamentals that reflect trade openness, foreign assets
holdings, monetary policy easing, fiscal sustainability, and financial
development. Once the threshold levels of these macroeconomic fundamentals are
passed, the adverse effect of CF volatility may be negligible. We further construct an
intuitive FX resilience measure, which provides an assessment of the strength
of a country's exchange rates.",FX Resilience around the World: Fighting Volatile Cross-Border Capital Flows,2022-10-10 15:49:21,"Louisa Chen, Estelle Xue Liu, Zijun Liu","http://arxiv.org/abs/2210.04648v1, http://arxiv.org/pdf/2210.04648v1",econ.GN
32737,gn,"Can proximity make friendships more diverse? To address this question, we
propose a learning-driven friendship formation model to study how proximity and
similarity influence the likelihood of forming social connections. The model
predicts that proximity affects more friendships between dissimilar than
similar individuals, in opposition to a preference-driven version of the model.
We use an experiment at selective boarding schools in Peru that generates
random variation in the physical proximity between students to test these
predictions. The empirical evidence is consistent with the learning model:
while social networks exhibit homophily by academic achievement and poverty,
proximity generates more diverse social connections.","Proximity, Similarity, and Friendship Formation: Theory and Evidence",2022-10-13 01:30:03,"A. Arda Gitmez, Román Andrés Zárate","http://arxiv.org/abs/2210.06611v1, http://arxiv.org/pdf/2210.06611v1",econ.GN
32738,gn,"We consider a zonal international power market and investigate potential
economic incentives for short-term reductions of transmission capacities on
existing interconnectors by the responsible transmission system operators
(TSOs). We show that if a TSO aims to maximize domestic total welfare, it often
has an incentive to reduce the capacity on the interconnectors to neighboring
countries.
  In contrast with the (limited) literature on this subject, which focuses on
incentives through the avoidance of future balancing costs, we show that
incentives can exist even if one ignores balancing and focuses solely on
welfare gains in the day-ahead market itself. Our analysis consists of two
parts. In the first part, we develop an analytical framework that explains why
these incentives exist. In particular, we distinguish two mechanisms: one based
on price differences with neighboring countries and one based on the domestic
electricity price. In the second part, we perform numerical experiments using a
model of the Northern-European power system, focusing on the Danish TSO. In 97%
of the historical hours tested, we indeed observe economic incentives for
capacity reductions, leading to significant welfare gains for Denmark and
welfare losses for the system as a whole. We show that the potential for
welfare gains greatly depends on the ability of the TSO to adapt interconnector
capacities to short-term market conditions. Finally, we explore the extent to
which the recently introduced European ""70%-rule"" can mitigate the incentives
for capacity reductions and their welfare effects.",Economic incentives for capacity reductions on interconnectors in the day-ahead market,2022-10-13 19:11:51,"E. Ruben van Beesten, Daan Hulshof","http://arxiv.org/abs/2210.07129v1, http://arxiv.org/pdf/2210.07129v1",econ.GN
32739,gn,"The present study uses domain experts to estimate welfare levels and
indicators from high-resolution satellite imagery. We use the wealth quintiles
from the 2015 Tanzania DHS dataset as ground truth data. We analyse the
performance of the visual estimation of relative wealth at the cluster level
and compare these with wealth rankings from the DHS survey of 2015 for that
country using correlations, ordinal regressions and multinomial logistic
regressions. Of the 608 clusters, 115 received the same ratings from human
experts and the independent DHS rankings. For 59 percent of the clusters, the
experts' ratings were slightly lower. On the one hand, significant positive
predictors of wealth are the presence of modern roofs and wider roads. For
instance, the log odds of receiving a rating in a higher quintile on the wealth
rankings is 0.917 points higher on average for clusters with buildings with
slate or tile roofing compared to those without. On the other hand, significant
negative predictors included poor road coverage, low to medium greenery
coverage, and low to medium building density. Other key predictors from the
multinomial regression model include settlement structure and farm sizes. These
findings are significant to the extent that these correlates of wealth and
poverty are visually readable from satellite imagery and can be used to train
machine learning models in poverty predictions. Using these features for
training will contribute to more transparent ML models and, consequently,
explainable AI.",Welfare estimations from imagery. A test of domain experts ability to rate poverty from visual inspection of satellite imagery,2022-10-17 10:00:07,"Wahab Ibrahim, Ola Hall","http://arxiv.org/abs/2210.08785v1, http://arxiv.org/pdf/2210.08785v1",econ.GN
32740,gn,"In this paper, I consider a simple heterogeneous agents model of a production
economy with uncertain climate change and examine constrained efficient carbon
taxation. If there are frictionless, complete financial markets, the simple
model predicts a unique Pareto-optimal level of carbon taxes and abatement. In
the presence of financial frictions, however, the optimal level of abatement
cannot be defined without taking a stand on how abatement costs are distributed
among individuals. I propose a simple linear cost-sharing scheme that has
several desirable normative properties. I use calibrated examples of economies
with incomplete financial markets and/or limited market participation to
demonstrate that different schemes to share abatement costs can have large
effects on optimal abatement levels and that the presence of financial
frictions can increase optimal abatement by a factor of three relative to the
case of frictionless financial markets.","Climate uncertainty, financial frictions and constrained efficient carbon taxation",2022-10-17 16:06:08,Felix Kübler,"http://arxiv.org/abs/2210.09066v1, http://arxiv.org/pdf/2210.09066v1",econ.GN
32741,gn,"Hand hygiene is one of the key low-cost measures proposed by the World Health
Organization (WHO) to contain the spread of COVID-19. In a field study
conducted during the pandemic in June and July 2020 in Switzerland, we captured
the hand disinfection behavior of customers in five stores of a large retail
chain (n = 8,245). The study reveals considerable differences with respect to
gender and age: Women were 8.3 percentage points more likely to disinfect their
hands compared to men. With respect to age, we identified a steep increase
across age groups, with people age 60 years and older disinfecting their hands
significantly more often than younger adults (>+16.7 percentage points) and
youth (>+31.7 percentage points). A validation study conducted in December
2020 (n = 1,918) confirmed the gender and age differences at a later point in
the pandemic. In sum, the differences between gender and age groups are
substantial and should be considered in the design of protective measures to
ensure clean and safe hands.",Large gender and age differences in hand disinfection behavior during the COVID-19 pandemic: Field data from Swiss retail stores,2022-10-17 16:40:10,"Frauke von Bieberstein, Anna-Corinna Kulle, Stefanie Schumacher","http://arxiv.org/abs/2210.09094v1, http://arxiv.org/pdf/2210.09094v1",econ.GN
32742,gn,"Recently, V. Laure van Bambeke used an original approach to solve the famous
problem of transformation of values into production prices by considering that
capital reallocation to each department (branch) was part of the problem. Here,
we confirm the validity of this consideration in relation with the satisfaction
of demand (social need which is able to pay for the given product). In contrast
to V. Laure van Bambeke's method of solving an overdetermined system of
equations (implying that compliance with Marx's fundamental equalities could
only be approached), we show that the transformation problem is solvable from a
determined (two-branch models) or an underdetermined system of equations,
enabling exact solutions to be obtained through an algorithm we provide, with
no approximation needed. For systems with three branches or more, the solution of
the transformation problem belongs to an infinite ensemble, accounting for the
observed high competition-driven market fluidity. Furthermore, we show that the
transformation problem is solvable in the absence of fixed capital, supporting
that dealing with the latter is not essential and cannot be seen as a potential
flaw of the approach. Our algorithm enables simulations illustrating how the
transient rise in the rate of profit predicted by the Okishio theorem is
consistent with the tendency of the rate of profit to fall (TRPF) subsequent to
capital reallocation, and how the TRPF is governed by the increase of organic
composition, in value. We establish that the long-standing transformation
problem is not such a problem since it is easily solved through our algorithm,
whatever the number of branches considered. This emphasizes the high coherence
of Marx's conception, and its impressive relevance regarding issues such as the
TRPF, which have remained intensely debated.",From Marx's fundamental equalities to the solving of the transformation problem -- Coherence of the model,2022-10-17 16:46:54,"Norbert Ankri, Païkan Marcaggi","http://arxiv.org/abs/2210.09097v1, http://arxiv.org/pdf/2210.09097v1",econ.GN
32743,gn,"A simple model is introduced to study the cooperative behavior of nations
regarding solar geoengineering. The results of this model are explored through
numerical methods. A general finding is that cooperation and coordination
between nations on solar geoengineering are very much incentivized. Furthermore,
the stability of solar geoengineering agreements between nations crucially
depends on the perceived riskiness of solar geoengineering. If solar
geoengineering is perceived as riskier, the stability of the most stable solar
geoengineering agreements is reduced. However, the stability of agreements is
completely independent of countries' preferences.",Exploring the stability of solar geoengineering agreements,2022-10-17 17:46:55,Niklas V. Lehmann,"http://arxiv.org/abs/2210.09145v3, http://arxiv.org/pdf/2210.09145v3",econ.GN
32744,gn,"Developing new electricity grid tariffs in the context of household
electrification raises old questions about who pays for what and to what
extent. When electric vehicles (EVs) and heat pumps (HPs) are owned primarily
by households with higher financial status than others, new tariff designs may
clash with the economic argument for efficiency and the political arguments for
fairness. This article combines tariff design and redistributive mechanisms to
strike a balance between time-differentiated signals, revenue stability for the
utility, limited grid costs for vulnerable households, and promoting
electrification. We simulate the impacts of this combination on 1.4 million
Danish households (about 50% of the country's population) and quantify the
cross-subsidization effects between groups. With its unique level of detail,
this study stresses the spillover effects of tariffs. We show that a
subscription-heavy tariff associated with a ToU rate and a low redistribution
factor tackles all the above goals.",Electricity grid tariffs for electrification in households: Bridging the gap between cross-subsidies and fairness,2022-10-18 12:02:41,"Claire-Marie Bergaentzlé, Philipp Andreas Gunkel, Mohammad Ansarin, Yashar Ghiassi-Farrokhfal, Henrik Klinge Jacobsen","http://arxiv.org/abs/2210.09690v1, http://arxiv.org/pdf/2210.09690v1",econ.GN
32745,gn,"The 2022 Russia Ukraine War has led to many sanctions being placed on Russia
and Ukraine. The paper will discuss the impact the 2022 Russian Sanctions have
on agricultural food prices and hunger. The paper also uses Instrumental
Variable Analysis to find how Cryptocurrency and Bitcoin can be used to hedge
against the impact of sanctions. The 6 different countries analyzed in this
study including Bangladesh, El Salvador, Iran, Nigeria, Philippines, and South
Africa, all of which are heavy importers of wheat and corn. The paper shows
that although Bitcoin may be volatile compared to other local currencies, it
might be a good investment to safeguard assets since it is not correlated with
commodity prices.Furthermore, the study demonstrates that transaction volume
has a strong relationship with prices.","Cryptocurrency, Sanctions and Agricultural Prices: An empirical study on the negative implications of sanctions and how decentralized technologies affect the agriculture futures market in developing countries",2022-10-18 21:38:03,Agni Rajinikanth,"http://arxiv.org/abs/2210.10087v1, http://arxiv.org/pdf/2210.10087v1",econ.GN
32746,gn,"Through the reinterpretation of housing data as candlesticks, we extend
Nature Scientific Reports' article by Liang and Unwin [LU22] on stock market
indicators for COVID-19 data, and utilize some of the most prominent technical
indicators from the stock market to estimate future changes in the housing
market, comparing the findings to those one would obtain from studying real
estate ETFs. By providing an analysis of MACD, RSI, and Candlestick indicators
(Bullish Engulfing, Bearish Engulfing, Hanging Man, and Hammer), we exhibit
their statistical significance in making predictions for USA data sets (using
Zillow Housing data) and also consider their applications within three
different scenarios: a stable housing market, a volatile housing market, and a
saturated market. In particular, we show that bearish indicators have a much
higher statistical significance than bullish indicators, and we further
illustrate how in less stable or more populated countries, bearish trends are
only slightly more statistically present compared to bullish trends.",Housing Forecasts via Stock Market Indicators,2022-10-18 23:23:43,"Varun Mittal, Laura P. Schaposnik","http://arxiv.org/abs/2210.10146v1, http://arxiv.org/pdf/2210.10146v1",econ.GN
32747,gn,"In a 3-2 split verdict, the Supreme Court approved the exclusion of India's
socially and economically backward classes from its affirmative action measures
to address economic deprivation. Dissenting justices, including the Chief
Justice of India, protested the majority opinion for sanctioning ``an avowedly
exclusionary and discriminatory principle.'' In order to justify their
controversial decision, majority justices rely on technical arguments which are
categorically false. The confusion of the majority justices is due to a
combination of two related but subtle technical aspects of the affirmative
action system in India. The first aspect is the significance of overlaps
between members of various protected groups, and the second one is the
significance of the processing sequence of protected groups in the presence of
such overlaps. Conventionally, protected classes were determined by the caste
system, which meant they did not overlap. Addition of a new protected class
defined by economic criteria alters this structure, unless it is artificially
enforced. The majority justices failed to appreciate the significance of these
changes in the system, and inaccurately argued that the controversial exclusion
is a technical necessity to provide benefits to previously-unprotected members
of a new class. We show that this case could have been resolved with three
competing policies that each avoids the controversial exclusion. One of these
policies is in line with the core arguments in the majority opinion, whereas a
second one is in line with those in the dissenting opinion.",Market Design for Social Justice: A Case Study on a Constitutional Crisis in India,2022-10-19 00:14:47,"Tayfun Sönmez, Utku Ünver","http://arxiv.org/abs/2210.10166v3, http://arxiv.org/pdf/2210.10166v3",econ.GN
32748,gn,"In spring 2022, the German federal government agreed on a set of measures
that aimed at reducing households' financial burden resulting from a recent
price increase, especially in energy and mobility. These measures included,
among others, a nation-wide public transport ticket for 9 EUR per month and a
fuel tax cut that reduced fuel prices by more than 15%. In transportation
policy and travel behavior research, this is an almost unprecedented behavioral
experiment. It allows us to study not only behavioral responses in mode choice and
induced demand but also to assess the effectiveness of transport policy
instruments. We observe this natural experiment with a three-wave survey and an
app-based travel diary on a sample of 2'263 individuals; for the Munich Study, 919
participants in the survey-and-app group and 425 in the survey-only group have
been successfully recruited, while 919 participants have been recruited through
a professional panel provider to obtain a representative nation-wide reference
group for the three-wave survey. In this fourth report we present the results
of the third wave. At the end of the study, all three surveys had been
completed by 1'484 participants, and 642 participants completed all three
surveys and used the travel diary throughout the entire study. Based on our
results, we conclude that when offering a 49 EUR-Ticket as a successor to the
9 EUR-Ticket and a local travel pass for 30 EUR/month, more than 60% of all
9 EUR-Ticket owners would buy one of the two new travel passes. In other words,
a substantial increase in travel pass ownership in Germany can be expected,
with our modest estimate being around 20%. With the announcement of the
introduction of a successor ticket in 2023 as well as with the prevailing high
inflation, this study will continue into the year 2023 to monitor the impact on
mobility and daily activities.",A nation-wide experiment: fuel tax cuts and almost free public transport for three months in Germany -- Report 4 Third wave results,2022-10-19 16:21:08,"Allister Loder, Fabienne Cantner, Andrea Cadavid, Markus B. Siewert, Stefan Wurster, Sebastian Goerg, Klaus Bogenberger","http://arxiv.org/abs/2210.10538v1, http://arxiv.org/pdf/2210.10538v1",econ.GN
32749,gn,"Americans across demographic groups tend to have low financial literacy, with
low-income people and minorities at highest risk. This opens the door to the
exploitation of unbanked low-income families through high-interest alternative
financial services. This paper studies the causes and effects of financial
illiteracy and exclusion in the most at-risk demographic groups, and solutions
proven to bring them into the financial mainstream. This paper finds that
immigrants, ethnic minorities, and low-income families are most likely to be
unbanked. Furthermore, the causes for being unbanked include the high fees of
bank accounts, the inability of Americans to maintain bank accounts due to low
financial assets or time, banking needs being met by alternative financial
services, and being provided minimal help while transitioning from welfare to
the workforce. The most effective solutions to financial illiteracy and
exclusion include partnerships between nonprofits, banks, and businesses that
use existing alternative financial service platforms to transition the unbanked
into using products that meet their needs, educating the unbanked in the use of
mobile banking, and providing profitable consumer credit products targeting
unbanked families with features that support their needs in addition to
targeted and properly implemented financial literacy programs.","Exploring Causes, Effects, and Solutions to Financial Illiteracy and Exclusion among Minority Demographic Groups",2022-10-20 19:49:42,Abhinav Shanbhag,"http://arxiv.org/abs/2210.11403v1, http://arxiv.org/pdf/2210.11403v1",econ.GN
32750,gn,"I offer reflections on adaptation to climate change, with emphasis on
developing areas.",On the Financing of Climate Change Adaptation in Developing Countries,2022-10-20 21:52:36,Francis X. Diebold,"http://arxiv.org/abs/2210.11525v1, http://arxiv.org/pdf/2210.11525v1",econ.GN
32751,gn,"Transforming the construction sector is key to reaching net-zero, and many
stakeholders expect its decarbonization through digitalization. But no
quantified evidence has been provided to date. We propose the first
environmental quantification of the impact of Building Information Modeling
(BIM) in the construction sector. Specifically, the direct and indirect
greenhouse gas (GHG) emissions generated by a monofunctional BIM to plan road
maintenance, a Pavement Management System (PMS), are evaluated using field data
from France. The related carbon footprints are calculated following a life
cycle approach, using different sources of data, including ecoinvent v3.6, and
the IPCC 2013 GWP 100a characterization factors. Three design-build-maintain
pavement alternatives are compared: scenario 1 relates to a massive design and
surface maintenance, scenario 2 to a progressive design and pre-planned
structural maintenance, and scenario 3 to a progressive design and tailored
structural maintenance supported by the PMS. First, results show negligible
direct emissions due to the PMS existence: 0.02% of the life cycle emissions of
scenario 3. Second, complementary sensitivity analyses show that using a PMS is
climate-positive over the life cycle when pavement subgrade bearing capacity
improves over time, and climate-neutral otherwise. The GHG emissions savings
using BIM can reach up to 30% of the life cycle emissions compared to other
scenarios, and 65% when restraining the scope to maintenance and rehabilitation
and excluding original pavement construction. Third, the neutral effect of BIM
in case of a deterioration of the bearing capacity of the subgrade may be
explained by design practices and safety margins, which could be enhanced using
BIM. Fourth, the decarbonization potential of a multifunctional BIM is
discussed, and research perspectives are presented.",BIM can help decarbonize the construction sector: life cycle evidence from Pavement Management Systems,2022-10-22 03:08:20,"Anne de Bortoli, Yacine Baouch, Mustapha Masdan","http://arxiv.org/abs/2210.12307v1, http://arxiv.org/pdf/2210.12307v1",econ.GN
32782,gn,"This paper proposes a brand-new measure of energy efficiency at household
level and explores how it is affected by access to credit. We calculate the
energy and carbon intensity of the related sectors, which experience a
substantial decline from 2005 to 2019. Although there is still high inequality
in energy use and carbon emissions among Chinese households, energy
efficiency appears to have improved in the long run. Our research further maps the
relationship between the financial market and energy. The results suggest that
broadened access to credit encourages households to improve energy efficiency,
with higher energy use and carbon emissions.
32752,gn,"The Fisher and GEKS are celebrated as ideal bilateral and multilateral
indexes due to their superior axiomatic and econ-theoretic properties. The
Fisher index is the main index used for constructing CPI by statistical
agencies and the GEKS is the index used for compiling PPPs in World Bank's
International Comparison Program (ICP). Despite such a high status and the
importance attached to these indexes, the stochastic approach to these indexes
is not well-developed and no measures of reliability exist for these important
economic statistics. The main objective of this paper is to fill this gap. We
show how appropriate reliability measures for the Fisher and GEKS indexes, and
other ideal price index numbers can be derived and how they should be
interpreted and used in practice. In an application to 2017 ICP data on a
sample of 173 countries, we estimate the Fisher and GEKS indexes along with
their reliability measures, make comparisons with several other notable indexes
and discuss the implications.",Reliability of Ideal Indexes,2022-10-25 04:03:39,Gholamreza Hajargasht,"http://arxiv.org/abs/2210.13684v1, http://arxiv.org/pdf/2210.13684v1",econ.GN
32753,gn,"The introduction of new payment methods has resulted in one of the most
significant changes in the way we consume goods and services. In this paper, I
present results of a field and a laboratory experiment designed to determine
the effect of payment method (cash vs. mobile payment) on spending, and a
meta-analysis of the previous literature on the payment method effect. In the field
experiment, I collected cashier receipts from Chinese supermarkets. Compared to
cash payment, mobile payments significantly increased the amount purchased and
the average amount spent on each item. This effect was found to be particularly
large for high price elasticity goods. In the laboratory experiment,
participants were randomly assigned to one of four groups that varied with
respect to the kind of payment and the kind of incentives, eliminating the
potential endogeneity problem from the field experiment. I found that compared
to cash, mobile payments led to a significantly higher willingness to pay
(WTP) for consumption. In contrast to previous studies, I found that the pain of paying does
not moderate the payment method effect; however, other psychological factors
were found to work as potential mechanisms for affecting WTP.",The Influence of Payment Method: Do Consumers Pay More with Mobile Payment?,2022-10-26 14:20:36,Yizhao Jiang,"http://arxiv.org/abs/2210.14631v1, http://arxiv.org/pdf/2210.14631v1",econ.GN
32754,gn,"This study aims to enrich the current literature by providing a new approach
to motivating Generation Z employees in Poland. Employees need to be motivated
in order to be efficient at doing a particular task at the workplace. As young
people born between 1995 and 2004, known as Generation Z, enter the labour
market, it is essential to consider how employees' motivation might be affected.
Traditionally identified motivators are known, but many reports indicate that
motivation continues to decline. This situation causes perturbations in
business and staff fluctuations. In order to prevent this, employers are
looking for new solutions to motivate employees. A
quantitative approach was used to collect new evidence from 200 Polish
respondents completing an online survey. The research was conducted before and
during the pandemic. We report and analyse the survey results collected in
Poland among representatives of Generation Z who had been employed for at least 6
months. We developed and validated a new approach to motivation using
factor analysis. Based on empirical verification, we propose a new tool, called
the Hygge star model, that connects employee motivation with selected areas of the
Hygge concept and has the same semantics before and during the Covid-19
pandemic.",Model of work motivation based on happiness: pandemic related study,2022-10-26 15:02:38,"Joanna Nieżurawska, Radosław A. Kycia, Iveta Ludviga, Agnieszka Niemczynowicz","http://arxiv.org/abs/2210.14655v1, http://arxiv.org/pdf/2210.14655v1",econ.GN
32755,gn,"Electric vehicle adoption is considered to be a promising pathway for
addressing climate change. However, the market for charging stations suffers
from a market failure: a lack of EV sales disincentivizes charging station
production, which in turn inhibits mass EV adoption. Charging station subsidies
are often discussed as policy levers that can stimulate charging station supply
and correct this market failure. Nonetheless, there is limited research
examining the extent to which such subsidies are successful in promoting charging
station supply. Using annual data on electric vehicle sales, charging station
counts, and subsidy amounts from 57 California counties and a staggered
difference-in-differences methodology, I find that charging station subsidies
are highly effective: counties that adopt subsidies experience a 36% increase
in charging station supply 2 years following subsidy adoption. This finding
suggests that governmental intervention can help correct the market failure in
the charging station market.",Powering Up a Slow Charging Market: How Do Government Subsidies Affect Charging Station Supply?,2022-10-26 06:00:34,Zunian Luo,"http://arxiv.org/abs/2210.14908v6, http://arxiv.org/pdf/2210.14908v6",econ.GN
32756,gn,"This paper provides a comprehensive examination of a Brazilian corporate tax
reform targeted at the sector and product level. Difference-in-differences
estimates instrumented by sector eligibility show that a 20 percentage point
cut on payroll tax rates caused a 9% employment increase at the firm level,
mostly driven by small firms. This expansion is not driven by formalization of
existing workers, and it is explained by a reduction in separations rather than
additional hires. In terms of earnings, there is a significant 4% earnings
increase in the long run, which is concentrated at leadership positions. The
unequal pass-through worsens within-firm wage inequality. I exploit the
exogenous variation in labor costs to document substantial labor market power in
Brazil, where wages are marked down by 36%. Consistent with the empirical
findings, I develop a model of factor demand with imperfect competition in the
goods and labor market to shed light on the mechanism through which imperfect
competition drives corporate tax incidence. The model is identified by the
reduced form elasticities, and allows me to structurally estimate the
capital-labor elasticity of substitution, which differs from the benchmark case
of perfect competition.",The Unequal Incidence of Payroll Taxes with Imperfect Competition: Theory and Evidence,2022-10-28 00:18:21,Felipe Lobel,"http://arxiv.org/abs/2210.15776v1, http://arxiv.org/pdf/2210.15776v1",econ.GN
32783,gn,"The motivation for focusing on economic sanctions is the mixed evidence of
their effectiveness. We assess the role of sanctions on the Russian
international trade flow of agricultural products after 2014. We use a
differences-in-differences model of trade flows data for imported and exported
agricultural products from 2010 to 2020 in Russia. The main expectation was
that the Russian economy would take a hit since it had lost its importers. We
assess the economic impact of the Russian food embargo on agricultural
commodities, questioning whether it has achieved its objective and resulted in
a window of opportunity for the development of the domestic agricultural
sector. Our results confirm that the sanctions have significantly impacted
foodstuff imports; they have almost halved in the first two years since the
sanctions were imposed. However, Russia has embarked on a path to reduce
dependence on food imports and has managed to achieve self-sufficient agricultural production.
32757,gn,"Did migrants make Paris a Mecca for the arts and Vienna a beacon of classical
music? Or was their rise a pure consequence of local actors? Here, we use data
on more than 22,000 historical individuals born between the years 1000 and 2000
to estimate the contribution of famous immigrants, emigrants, and locals to the
knowledge specializations of European regions. We find that the probability
that a region develops or keeps specialization in an activity (based on the
birth of famous physicists, painters, etc.) grows with both, the presence of
immigrants with knowledge on that activity and immigrants with knowledge in
related activities. In contrast, we do not find robust evidence that the
presence of locals with related knowledge explains entries and/or exits. We
address some endogeneity concerns using fixed-effects models considering any
location-period-activity specific factors (e.g. the presence of a new
university attracting scientists).","The Role of Immigrants, Emigrants, and Locals in the Historical Formation of European Knowledge Agglomerations",2022-10-28 08:50:05,"Philipp Koch, Viktor Stojkoski, César A. Hidalgo","http://dx.doi.org/10.1080/00343404.2023.2275571, http://arxiv.org/abs/2210.15914v6, http://arxiv.org/pdf/2210.15914v6",econ.GN
32758,gn,"Media around the world is disseminated at the national level as well as at
the local level. While the capacity of media to shape preferences and behavior
has been widely recognized, less is known about the differential impacts of
local media. Local media may have particularly important effects on social
norms due to the provision of locally relevant information that becomes common
knowledge in a community. I examine this possibility in a high-stakes context:
the Ebola epidemic in Guinea. I exploit quasi-random variation in access to
distinct media outlets and the timing of a public-health campaign on community
radio. I find that 13% of Ebola cases would have been prevented if places with
access to neighboring community radio stations had instead had their own. This is
driven by radio stations' locality, not ethno-linguistic boundaries, and by
coordination in social behaviors sanctioned locally.",Local Media and the Shaping of Social Norms: Evidence from the Ebola outbreak,2022-10-28 10:09:37,Ada Gonzalez-Torres,"http://arxiv.org/abs/2210.15946v2, http://arxiv.org/pdf/2210.15946v2",econ.GN
32759,gn,"Trade is one of the essential feature of human intelligence. The securities
market is the ultimate expression of it. The fundamental indicators of stocks
include information as well as the effects of noise and bias on the stock
prices; however, identifying the effects of noise and bias is generally
difficult. In this article, I present the true fundamentals hypothesis based on
rational expectations and detect the global bias components from the actual
fundamental indicators by using a log-normal distribution model based on the
true fundamentals hypothesis. The analysis results show that biases generally
exhibit the same characteristics, strongly supporting the true fundamentals
hypothesis. Notably, the positive price-to-cash flows from the investing
activities ratio is a proxy for the true fundamentals. Where do these biases
come from? The answer is extremely simple: ``Cash is a fact, profit is an
opinion.'' Namely, opinions of management and accounting are added to true
fundamentals. As a result, a Kesten process is realized and a Pareto
distribution is obtained. This means that the market knows this and
represents it as a stable global bias in the stock market.",Intelligence and Global Bias in the Stock Market,2022-10-28 16:17:05,Kazuo Sano,"http://arxiv.org/abs/2210.16113v1, http://arxiv.org/pdf/2210.16113v1",econ.GN
32760,gn,"In the United States, large post-industrial cites such as Detroit are
well-known for high levels of socioeconomic deprivation. But while Detroit is
an exceptional case, similar levels of deprivation can still be found in other
large cities, as well as in smaller towns and rural areas. This study
calculates a standardized deprivation measure for all block groups in the lower 48 states
and DC, before isolating ""high-deprivation"" areas that exceed Detroit's median
value. These block groups are investigated and mapped for the 83 cities with
populations above 250,000, as well as at the state level and for places of all
sizes. Detroit is shown to indeed be unique not only for its levels of
deprivation (which are higher than 95 percent of the country), but also for the
dispersion of highly-deprived block groups throughout the city. Smaller, more
concentrated pockets of high deprivation can be found in nearly every large
city, and some cities below 20,000 residents have an even larger share of
high-deprivation areas. Cities' percentages of high-deprivation areas are
positively related to overall poverty, population density, and the percentage
of White residents, and negatively related to the share of Black residents.","""Rust Belt"" Across America: An Application of a Nationwide, Block-Group-Level Deprivation Index",2022-10-28 17:32:43,Scott W Hegerty,"http://arxiv.org/abs/2210.16155v1, http://arxiv.org/pdf/2210.16155v1",econ.GN
32761,gn,"Many online retailers and search intermediaries present products on ranked
product lists. Because changing the ordering of products influences their
revenues and consumer welfare, these platforms can design rankings to maximize
either of these two metrics. In this paper, I study how rankings serving these
two objectives differ and what determines this difference. First, I highlight
that when position effects are heterogeneous, rankings can increase both
revenues and consumer welfare by increasing the overall conversion rate.
Second, I provide empirical evidence for this heterogeneity, highlighting that
cheaper alternatives have stronger position effects. Finally, I quantify
revenue and consumer welfare effects across the different rankings. To this
end, I develop an estimation procedure for the search and discovery model of
Greminger (2022) that yields a smooth likelihood function by construction. The
results from the model show that rankings designed to maximize revenues can
also increase consumer welfare. Moreover, these revenue-based rankings reduce
consumer welfare only to a limited extent relative to a
consumer-welfare-maximizing ranking.",Heterogeneous Position Effects and the Power of Rankings,2022-10-29 00:18:06,Rafael P. Greminger,"http://arxiv.org/abs/2210.16408v4, http://arxiv.org/pdf/2210.16408v4",econ.GN
32762,gn,"As of August 2022, blockchain-based assets boast a combined market
capitalisation exceeding one trillion USD, among which the most prominent are
the decentralised autonomous organisation (DAO) tokens associated with
decentralised finance (DeFi) protocols. In this work, we seek to value DeFi
tokens using the canonical multiples and Discount Cash Flow (DCF) approaches.
We examine a subset of DeFi services including decentralised exchanges (DEXs),
protocols for loanable funds (PLFs), and yield aggregators. We apply the same
analysis to some publicly traded firms and compare them with DeFi tokens of the
analogous category. Interestingly, despite the crypto bear market lasting for
more than one year as of August 2022, both approaches evidence overvaluation in
DeFi.",DeFi vs TradFi: Valuation Using Multiples and Discounted Cash Flow,2022-10-30 17:04:12,"Teng Andrea Xu, Jiahua Xu, Kristof Lommers","http://arxiv.org/abs/2210.16846v1, http://arxiv.org/pdf/2210.16846v1",econ.GN
32764,gn,"Models of consumer responsiveness to medical care prices are central in the
optimal design of health insurance. However, consumers are rarely given
spending information at the time they consume care, a delay which many models
do not account for and may create distortions in the choice of future care
consumption. We study household responses to scheduled medical services before
and after pricing information arrives, leveraging quasi-experimental variation
in the time it takes for an insurer to process a bill. Immediately after
scheduled services, households increase their spending by roughly 46%; however,
a bill's arrival causes households to reduce their spending by 7 percentage
points (nearly 20% of the initial increase). These corrections are concentrated
among households who are under-informed about their deductible or whose
expenditures fall just short of meeting the deductible, suggesting that a bill
provides useful pricing information. These corrections occur across many types
of health services, including high- and low-value care. We model household
beliefs and learning about expenditures and find that households overestimate
their expenditures by 10% prior to a bill's arrival. This leads to an
over-consumption of $842.80 ($480.59) for the average (median) affected
household member. There is evidence that households learn from repeated use of
medical services; however, household priors are estimated to be as high as 80%
larger than true prices, leaving low-spending households especially
under-informed. Our results suggest that long deductible periods coupled with
price non-transparency may lead to an over-consumption of medical care.",What's in a Bill? A Model of Imperfect Moral Hazard in Healthcare,2022-11-02 16:45:50,"Alex Hoagland, David M. Anderson, Ed Zhu","http://arxiv.org/abs/2211.01116v1, http://arxiv.org/pdf/2211.01116v1",econ.GN
32765,gn,"The negative demand shock due to the COVID-19 lockdown has reduced net demand
for electricity -- system demand less amount of energy produced by intermittent
renewables, hydroelectric units, and net imports -- that must be served by
controllable generation units. Under normal demand conditions, introducing
additional renewable generation capacity reduces net demand. Consequently, the
lockdown can provide insights about electricity market performance with a large
share of renewables. We find that although the lockdown reduced average
day-ahead prices in Italy by 45%, re-dispatch costs increased by 73%, both
relative to the averages for the same period in previous
years. We estimate a deep-learning model using data from 2017-2019 and find
that predicted re-dispatch costs during the lockdown period are only 26% higher
than the same period in previous years. We argue that the difference between
actual and predicted lockdown period re-dispatch costs is the result of
increased opportunities for suppliers with controllable units to exercise
market power in the re-dispatch market in these persistently low net demand
conditions. Our results imply that without grid investments and other
technologies to manage low net demand conditions, an increased share of
intermittent renewables is likely to increase costs of maintaining a reliable
grid.",(Machine) Learning from the COVID-19 Lockdown about Electricity Market Performance with a Large Share of Renewables,2022-11-04 03:33:20,"Christoph Graf, Federico Quaglia, Frank A. Wolak","http://dx.doi.org/10.1016/j.jeem.2020.102398, http://arxiv.org/abs/2211.02196v1, http://arxiv.org/pdf/2211.02196v1",econ.GN
32766,gn,"Existence theory in economics is usually in real domains such as the findings
of chaotic trajectories in models of economic growth, tatonnement, or
overlapping generations models. Computational examples, however, sometimes
converge rapidly to cyclic orbits when in theory they should be nonperiodic
almost surely. We explain this anomaly as the result of digital approximation
and conclude that both theoretical and numerical behavior can still illuminate
essential features of the real data.",Computing Economic Chaos,2022-11-04 16:29:33,"Richard H. Day, Oleg V. Pavlov","http://arxiv.org/abs/2211.02441v1, http://arxiv.org/pdf/2211.02441v1",econ.GN
32767,gn,"We develop an experimentally validated, short and easy-to-use survey module
for measuring individual debt aversion. To this end, we first estimate debt
aversion on an individual level, using choice data from Meissner and Albrecht
(2022). This data also contains responses to a large set of debt aversion
survey items, consisting of existing items from the literature and novel items
developed for this study. Out of these, we identify a survey module comprising
two qualitative survey items that best predict debt aversion in the incentivized
experiment.",The debt aversion survey module: An experimentally validated tool to measure individual debt aversion,2022-11-04 23:36:58,"David Albrecht, Thomas Meissner","http://arxiv.org/abs/2211.02742v1, http://arxiv.org/pdf/2211.02742v1",econ.GN
32768,gn,"Tourism is one of the world's fastest expanding businesses, as well as a
significant source of foreign exchange profits and jobs. The research is based
on secondary sources. The facts and information were primarily gathered and
analyzed from various published papers and articles. The study aims to
illustrate the current scenario of the tourism industry in South Asia, classify
the constraints, and recommend key developments to achieve sustainable
tourism. The study revealed that the major challenges to sustainable
tourism in the South Asian region are a lack of infrastructure, insufficient modern
recreation facilities, inadequate security and safety, a shortage of proper training and human resources,
weak government planning, limited marketing and information, underdeveloped products,
low tourism awareness, and political instability.
The study also suggests that, for the long-term
growth of regional tourism, governments should establish and implement
policies involving public and private investment and collaboration.",Prospects and Challenges for Sustainable Tourism: Evidence from South Asian Countries,2022-11-07 13:17:24,"Janifar Alam, Quazi Nur Alam, Abu Kalam","http://arxiv.org/abs/2211.03411v1, http://arxiv.org/pdf/2211.03411v1",econ.GN
32769,gn,"Shadow prices are well understood and are widely used in economic
applications. However, there are limits to where shadow prices can be applied
assuming their natural interpretation and the fact that they reflect the first
order optimality conditions (FOC). In this paper, we present a simple ad-hoc
example demonstrating that marginal cost associated with exercising an optimal
control may exceed the respective cost estimated from a ratio of shadow prices.
Moreover, such cost estimation through shadow prices is arbitrary and depends
on a particular (mathematically equivalent) formulation of the optimization
problem. These facts render a ratio of shadow prices irrelevant to estimation
of optimal marginal cost. The provided illustrative optimization problem links
to a similar approach of calculating the social cost of carbon (SCC) in the widely
used dynamic integrated model of climate and the economy (DICE).",Shadow prices and optimal cost in economic applications,2022-11-07 17:28:45,"Nikolay Khabarov, Alexey Smirnov, Michael Obersteiner","http://arxiv.org/abs/2211.03591v2, http://arxiv.org/pdf/2211.03591v2",econ.GN
32770,gn,"Almost every public pension system shares two attributes: earning deductions
to finance benefits, and benefits that depend on earnings. This paper analyzes
theoretically and empirically the trade-off between social insurance and
incentive provision faced by reforms to these two attributes. First, I combine
the social insurance and the optimal linear-income literature to build a model
with a flexible pension contribution rate and benefits' progressivity that
incorporates inter-temporal and inter-worker types of redistribution and
incentive distortion. The model is general, allowing workers to be
heterogeneous on productivity and retirement preparedness, and they exhibit
present-focused bias. I then estimate the model by leveraging three
quasi-experimental variations on the design of the Chilean pension system and
administrative data merged with a panel survey. I find that taxable earnings
respond to changes in the benefit-earnings link, future pension payments, and
net-of-tax rate, which increases the costs of reforms. I also find that
lifetime payroll earnings have a strong positive relationship with productivity
and retirement preparedness, and that pension transfers are effective in
increasing retirement consumption. Therefore, there is a large inter-worker
redistribution value through the pension system. Overall, there are significant
social gains from marginal reforms: a 1% increase in the contribution rate and
in the benefit progressivity generates social gains of 0.08% and 0.29% of the
GDP, respectively. The optimal design has a pension contribution rate of 17%
and focuses 42% of pension public spending on workers below the median of
lifetime earnings.",The Optimal Size and Progressivity of Old-Age Social Security,2022-11-08 02:44:41,Francisco Cabezon,"http://arxiv.org/abs/2211.03912v1, http://arxiv.org/pdf/2211.03912v1",econ.GN
32771,gn,"The purpose of this article is to explore the digital behavior of nature
tourism SMEs in the department of Magdalena-Colombia, hereinafter referred to
as the region. In this sense, the concept of endogenization as an evolutionary
mechanism refers to the application of the discrete choice model as an engine
of analysis for the variables to be studied within the model. The study
was quantitative and correlational; the sample comprised 386 agents of the
tourism chain, and a survey-type instrument with five factors and a Likert-type
scale was used for data collection. For the extraction of the factors, a
confirmatory factor analysis based on structural equations was used, followed by
a discrete choice model and then the analysis of the results. Among the main
findings are that SMEs in the tourism chain that incorporated Big
Data activities into their decision-making processes have greater chances of
success in the digital transformation. In addition, statistical evidence was
found that training staff in Data Science contributes significantly to
the marketing and commercialization processes within SMEs in this region.",Digital Transformation of Nature Tourism,2022-11-08 04:40:43,"Raul Enrique Rodriguez Luna, Jose Luis Rosenstiehl Martinez","http://dx.doi.org/10.22267/rtend.222301.185, http://arxiv.org/abs/2211.03945v1, http://arxiv.org/pdf/2211.03945v1",econ.GN
32772,gn,"We develop a quantitative general equilibrium model of multinational activity
embedding corporate taxation and profit shifting. In addition to trade and
investment frictions, our model shows that profit-shifting frictions shape the
geography of multinational production. Key to our model is the distinction
between the corporate tax elasticity of real activity and profit shifting. The
quantification of our model requires estimates of shifted profits flows. We
provide a new, model-consistent methodology to calibrate bilateral
profit-shifting frictions based on accounting identities. We simulate various
tax reforms aimed at curbing tax-dodging practices of multinationals and their
impact on a range of outcomes, including tax revenues and production. Our
results show that the effects of the international relocation of firms across
countries are of comparable magnitude to the direct gains in taxable income.",Profit Shifting Frictions and the Geography of Multinational Activity,2022-11-08 20:23:02,"Alessandro Ferrari, Sébastien Laffitte, Mathieu Parenti, Farid Toubal","http://arxiv.org/abs/2211.04388v2, http://arxiv.org/pdf/2211.04388v2",econ.GN
32773,gn,"This paper uncovers the evolution of cities and Islamist insurgencies, so
called jihad, in the process of the reversal of fortune over the centuries. In
West Africa, water access in ancient periods predicts the locations of the core
cities of inland trade routes -- the trans-Saharan caravan routes -- founded up
to the 1800s, when historical Islamic states played significant economic roles
before European colonization. In contrast, ancient water access does not have a
persistent influence on contemporary city formation and economic activities.
After European colonization and the invention of modern trading technologies,
along with the constant shrinking of water sources, landlocked pre-colonial
core cities contracted or became extinct. Employing an instrumental variable
strategy, we show that these deserted locations have today been replaced by
battlefields for jihadist organizations. We argue that the power relations
between Islamic states and the European military during the 19th century
colonial era shaped the persistence of jihadist ideology as a legacy of
colonization. Investigations into religious ideology related to jihadism, using
individual-level surveys from Muslims, support this mechanism. Moreover, the
concentration of jihadist violence in ""past-core-and-present-periphery"" areas
in West Africa is consistent with a global-scale phenomenon. Finally,
spillovers of violent events beyond these stylized locations are partly
explained by organizational heterogeneity among competing factions (Al Qaeda
and the Islamic State) over time.",The Golden City on the Edge: Economic Geography and Jihad over Centuries,2022-11-09 12:35:54,"Masahiro Kubo, Shunsuke Tsuda","http://arxiv.org/abs/2211.04763v1, http://arxiv.org/pdf/2211.04763v1",econ.GN
32774,gn,"Oil price fluctuations severely impact the economies of both oil-exporting
and importing countries. High oil prices can benefit oil exporters by
increasing foreign currency inflow; however, an economy can suffer from a
weakening of the manufacturing sectors and experience a significant downtrend
in the country's price competitiveness as the domestic currency appreciates. We
investigate the oil price fluctuations from Q1, 2004 to Q3, 2021 and their
impact on the Russian macroeconomic indicators, particularly industrial
production, exchange rate, inflation and interest rates. We assess whether and
how much the Russian macroeconomic variables have been responsive to the oil
price fluctuations in recent years. The outcomes from the VAR model confirm that
the monetary channel is more responsive to oil price shocks than the fiscal
one. Regarding the fiscal channel of the oil price impact, industrial production is
strongly pro-cyclical to oil price shocks. As for the monetary channel, higher
oil price volatility is pressuring the Russian ruble, inflation and interest
rates are substantially counter-cyclical to oil price shocks.",Macroeconomic performance of oil price shocks in Russia,2022-11-08 12:44:42,"Ayaz Zeynalov, Kristina Tiron","http://arxiv.org/abs/2211.04954v2, http://arxiv.org/pdf/2211.04954v2",econ.GN
32775,gn,"This paper identifies the causal effects of full-scale kremlin aggression on
socio-economic outcomes in Ukraine three months into the full-scale war. First,
forced migration after February 24th, 2022 is associated with an elevated risk
of becoming unemployed by 7.5 percentage points. Second, difference-in-differences
regressions show that in regions with fighting on the ground females without a
higher education face a 9.6-9.9% points higher risk of not having enough money
for food. Finally, in the regions subject to ground attack females with and
without a higher education, as well as males without a higher education are
more likely to become unemployed by 6.1-6.9, 4.2-4.7 and 6.5-6.6 percentage points,
respectively. This persistent gender gap in poverty and unemployment, when
even higher education is not protective for females, calls for policy action.
While more accurate results may be obtained with more comprehensive surveys, this
paper provides a remarkably robust initial estimate of the war's effects on
poverty, unemployment and internal migration.","Poverty, Unemployment and Displacement in Ukraine: three months into the war",2022-10-30 19:05:32,Maksym Obrizan,"http://arxiv.org/abs/2211.05628v1, http://arxiv.org/pdf/2211.05628v1",econ.GN
32776,gn,"This research is devoted to assessing regional economic disparities in
Ukraine, where regional economic inequality is a crucial issue the country
faces in its medium and long-term development, recently, even in the short
term. We analyze the determinants of regional economic growth, mainly
industrial and agricultural productions, population, human capital, fertility,
migration, and regional government expenditures. Using panel data estimations
from 2004 to 2020 for 27 regions of Ukraine, our results show that the gaps
between regions in Ukraine have widened over the last two decades. Natural resource
distribution, agricultural and industrial productions, government spending, and
migration can explain the disparities. We show that regional government
spending is highly concentrated in Kyiv, and the potential of the other
regions, especially the Western ones, is not being sufficiently utilized. Moreover,
despite its historical and economic opportunities, the East region achieved
little development during the last two decades. The inefficient and
inconsistent regional policies played a crucial role in these disparities.",Regional Disparities and Economic Growth in Ukraine,2022-11-10 19:02:11,"Khrystyna Huk, Ayaz Zeynalov","http://arxiv.org/abs/2211.05666v2, http://arxiv.org/pdf/2211.05666v2",econ.GN
32777,gn,"Sports betting markets are proven real-world laboratories to test theories of
asset pricing anomalies and risky behaviour. Using a high-frequency dataset
provided directly by a major bookmaker, containing the odds and amounts staked
throughout German Bundesliga football matches, we test for evidence of momentum
in the betting and pricing behaviour after equalising goals. We find that
bettors see value in teams that have the apparent momentum, staking about 40%
more on them than teams that just conceded an equaliser. Still, there is no
evidence that such perceived momentum matters on average for match outcomes or
is associated with the bookmaker offering favourable odds. We also confirm that
betting on the apparent momentum would lead to substantial losses for bettors.",Gambling on Momentum,2022-11-11 11:07:18,"Marius Ötting, Christian Deutscher, Carl Singleton, Luca De Angelis","http://arxiv.org/abs/2211.06052v1, http://arxiv.org/pdf/2211.06052v1",econ.GN
32778,gn,"Free-riding is widely perceived as a key obstacle for effective climate
policy. In the game-theoretic literature on non-cooperative climate policy and
on climate cooperation, the free-rider hypothesis is ubiquitous. Yet, the
free-rider hypothesis has not been tested empirically in the climate policy
context. With the help of a theoretical model, we demonstrate that if
free-riding were the main driver of lax climate policies around the globe, then
there should be a pronounced country-size effect: Countries with a larger share
of the world's population should, all else equal, internalize more climate
damages and thus set higher carbon prices. We use this theoretical prediction
for testing the free-rider hypothesis empirically. Drawing on data on
emission-weighted carbon prices from 2020, while controlling for a host of
other potential explanatory variables of carbon pricing, we find that the
free-rider hypothesis cannot be supported empirically, based on the criterion
that we propose. Hence, other issues may be more important for explaining
climate policy stringency or the lack thereof in many countries.",Testing the free-rider hypothesis in climate policy,2022-11-11 17:02:16,"Robert C. Schmidt, Moritz Drupp, Frikk Nesje, Hendrik Hoegen","http://arxiv.org/abs/2211.06209v1, http://arxiv.org/pdf/2211.06209v1",econ.GN
32779,gn,"In this paper, I develop an integrated approach to collective models and
matching models of the marriage market. In the collective framework, both
household formation and the intra-household allocation of bargaining power are
taken as given. This is no longer the case in the present contribution, where
both are endogenous to the determination of equilibrium on the marriage market.
I characterize a class of ""proper"" collective models which can be embedded into
a general matching framework with imperfectly transferable utility. In such
models, the bargaining sets are parametrized by an analytical device called
distance function, which plays a key role both for writing down the usual
stability conditions and for estimation. In general, however, distance
functions are not known in closed-form. I provide an efficient method for
computing distance functions, that works even with the most complex collective
models. Finally, I provide a fully-fledged application using PSID data. I
identify the sharing rule and its distribution and study the evolution of the
sharing rule and housework time sharing in the United States since 1969. In a
counterfactual experiment, I simulate the impact of closing the gender wage
gap.",Collective models and the marriage market,2022-11-14 17:42:31,Simon Weber,"http://arxiv.org/abs/2211.07416v1, http://arxiv.org/pdf/2211.07416v1",econ.GN
32780,gn,"Previous studies show that natural disasters decelerate economic growth, and
more so in countries with lower financial development. We confirm these results
with more recent data. We are the first to show that fiscal stability reduces
the negative economic impact of natural disasters in poorer countries, and that
catastrophe bonds have the same effect in richer countries.",Relevance of financial development and fiscal stability in dealing with disasters in Emerging Economies,2022-11-15 14:58:35,"Valeria Terrones, Richard S. J. Tol","http://arxiv.org/abs/2211.08078v1, http://arxiv.org/pdf/2211.08078v1",econ.GN
32781,gn,"We assess the market risk of the DeFi lending protocols using a multi-asset
agent-based model to simulate ensembles of users subject to price-driven
liquidation risk. Our multi-asset methodology shows that the protocol's
systemic risk is small under stress and that enough collateral is always
present to underwrite active loans. Our simulations use a wide variety of
historical data to model market volatility and run the agent-based simulation
to show that even if all the assets like ETH, BTC and MATIC increase their
hourly volatility by more than ten times, the protocol carries less than 0.1%
default risk given suggested protocol parameter values for liquidation
loan-to-value ratio and liquidation incentives.","A multi-asset, agent-based approach applied to DeFi lending protocol modelling",2022-11-16 15:25:03,"Amit Chaudhary, Daniele Pinna","http://arxiv.org/abs/2211.08870v2, http://arxiv.org/pdf/2211.08870v2",econ.GN
32785,gn,"We describe the design, implementation, and evaluation of a low-cost
(approximately $15 per person) and scalable program, called Challenges, aimed
at aiding women in Poland transition to technology-sector jobs. This program
helps participants develop portfolios demonstrating job-relevant competencies.
We conduct two independent evaluations, one of the Challenges program and the
other of a traditional mentoring program -- Mentoring -- where experienced tech
professionals work individually with mentees to support them in their job
search. Exploiting the fact that both programs were oversubscribed, we
randomized admissions and measured their impact on the probability of finding a
job in the technology sector. We estimate that Mentoring increases the
probability of finding a technology job within four months from 29% to 42% and
Challenges from 20% to 29%, and the treatment effects do not attenuate over 12
months. Since both programs are capacity constrained in practice (only 28% of
applicants can be accommodated), we evaluate the effectiveness of several
alternative prioritization rules based on applicant characteristics. We find
that a policy that selects applicants based on their predicted treatment
effects increases the average treatment effect across the two programs to 22
percentage points. We further analyze how alternative prioritization rules
compare to the selection that mentors used. We find that mentors selected
applicants who were more likely to get a tech job even without participating in
the program, and the treatment effect for applicants with similar
characteristics to those selected by mentors is about half of the effect
attainable when participants are prioritized optimally.",Effective and scalable programs to facilitate labor market transitions for women in technology,2022-11-18 04:25:19,"Susan Athey, Emil Palikot","http://arxiv.org/abs/2211.09968v3, http://arxiv.org/pdf/2211.09968v3",econ.GN
32786,gn,"In spring 2022, the German federal government agreed on a set of measures
that aim at reducing households' financial burden resulting from a recent price
increase, especially in energy and mobility. These measures include, among
others, a nation-wide public transport ticket for 9 EUR per month and a fuel
tax cut that reduces fuel prices by more than 15%. In transportation research
this is an almost unprecedented behavioral experiment. It allows us to study not
only behavioral responses in mode choice and induced demand but also to assess
the effectiveness of transport policy instruments. We observe this natural
experiment with a three-wave survey and an app-based travel diary on a sample
of hundreds of participants as well as an analysis of traffic counts. In this
fifth report, we present first analyses of the recorded tracking data. 910
participants completed the tracking until September 30th. First, an overview
of the socio-demographic characteristics of the participants within our
tracking sample is given. We observe an adequate representation of female and
male participants, a slight over-representation of young participants, and an
income distribution similar to the one known from the ""Mobilität in
Deutschland"" survey. Most participants of the tracking study live in Munich,
Germany. General transportation statistics are derived from the data for all
phases of the natural experiment - prior, during, and after the 9 EUR-Ticket -
to assess potential changes in the participants' travel behavior on an
aggregated level. A significant impact of the 9 EUR-Ticket on modal shares can
be seen. An analysis of the participants' mobility behavior considering trip
purposes, age, and income sheds light on how the 9 EUR-Ticket impacts different
social groups and activities. We find that age, income, and trip purpose
significantly influence the impact of the 9 EUR-Ticket on the observed modal
split.",A nation-wide experiment: fuel tax cuts and almost free public transport for three months in Germany -- Report 5 Insights into four months of mobility tracking,2022-11-18 19:39:30,"Lennart Adenaw, David Ziegler, Nico Nachtigall, Felix Gotzler, Allister Loder, Markus B. Siewert, Markus Lienkamp, Klaus Bogenberger","http://arxiv.org/abs/2211.10328v1, http://arxiv.org/pdf/2211.10328v1",econ.GN
32787,gn,"Borrowing constraints are a key component of modern international
macroeconomic models. The analysis of Emerging Markets (EM) economies generally
assumes collateral borrowing constraints, i.e., firms' access to debt is
constrained by the value of their collateralized assets. Using credit registry
data from Argentina for the period 1998-2020, we show that less than 15% of
firms' debt is based on the value of collateralized assets, with the remaining
85% based on firms' cash flows. Exploiting central bank regulations over banks'
capital requirements and credit policies, we argue that the most prevalent
borrowing constraint is defined in terms of the ratio of firms' interest
payments to a measure of their present and past cash flows, akin to the
interest coverage borrowing constraint studied by the corporate finance
literature. Lastly, we argue that EMs exhibit a greater share of
interest-sensitive borrowing constraints than the US and other Advanced Economies. From
a structural point of view, we show that in an otherwise standard small open
economy DSGE model, an interest coverage borrowing constraint leads to
significantly stronger amplification of foreign interest rate shocks compared
to the standard collateral constraint. This greater amplification provides a
solution to the Spillover Puzzle of US monetary policy rates by which EMs
experience greater negative effects than Advanced Economies after a US interest
rate hike. In terms of policy implications, this greater amplification leads to
managed exchange rate policy being more costly in the presence of an interest
coverage constraint, given their greater interest rate sensitivity, compared to
the standard collateral borrowing constraint.",Borrowing Constraints in Emerging Markets,2022-11-20 07:18:27,"Santiago Camara, Maximo Sangiacomo","http://arxiv.org/abs/2211.10864v1, http://arxiv.org/pdf/2211.10864v1",econ.GN
32788,gn,"In the present study, for the first time, an effort sharing approach based on
Inertia and Capability principles is proposed to assess European Union (EU27)
carbon budget distribution among the Member States. This is done within the
context of achieving the Green Deal objective and EU27 carbon neutrality by
2050. An in-depth analysis is carried out about the role of Economic Decoupling
embedded in the Capability principle to evaluate the correlation between the
expected increase of economic production and the level of carbon intensity in
the Member States. As decarbonization is a dynamic process, the study proposes
a simple mathematical model as a policy tool to assess and redistribute Member
States' carbon budgets as frequently as necessary to encourage progress or
overcome the difficulties each Member State may face during the decarbonization
pathways.",Influence of Economic Decoupling in assessing carbon budget quotas for the European Union,2022-11-21 13:08:26,"Ilaria Perissi, Aled Jones","http://dx.doi.org/10.1080/17583004.2023.2217423, http://arxiv.org/abs/2211.11322v1, http://arxiv.org/pdf/2211.11322v1",econ.GN
32789,gn,"More children in Shandong Province are stunted than any other province in
China. Data on more than 122,000 children show a dramatic increase in height
advantage with birth order in Shandong relative to the average of other
provinces. We suggest that the steep birth order gradient in Shandong is due to
a preference for the eldest child, which influences parental fertility
decisions and resource allocation to children. We show that within Shandong
province, the gradient is steeper for regions and cultures with a high
preference for the eldest child. As predicted, this gradient also varies with
the sex of the sibling. A back-calculation shows that the steeper birth order gradient
in Shandong Province explains more than half of the average height gap between
Shandong Province and the rest of China.",Birth Order and Son Preference to Determine the Children of Shandong Province So Tall,2022-11-22 05:59:56,"Zhu Xiaoxu, Fan kecai, He hai, Zhang Ziyu","http://arxiv.org/abs/2211.11968v2, http://arxiv.org/pdf/2211.11968v2",econ.GN
32790,gn,"The provision of essential urban infrastructure and services for the
expanding population is a persistent financial challenge for many of the
rapidly expanding cities in developing nations like Ethiopia. The land lease
system has received little academic attention as a means of financing urban
infrastructure in developing countries. Therefore, the main objective of this
study is to assess the contribution of land leasing in financing urban
infrastructure and services using evidence from Bahir Dar city, Ethiopia.
Primary and secondary data-gathering techniques have been used. Descriptive
statistics and qualitative analysis have been adopted. The results show land
lease revenue is a dominant source of extra-budgetary revenue for Bahir Dar
city. As evidenced by Bahir Dar city, a significant portion of urban
infrastructure expenditure is financed by revenues from land leasing. However,
despite the critical importance of land lease revenue to investments in urban
infrastructure, there is inefficiency in the collection of potential lease
revenue due to weak information exchange, inadequate land provision for various
uses, lack of transparency in tender committees, and poor
documentation. Our findings suggest that Bahir Dar City needs to manage lease
revenue more effectively to increase investment in urban infrastructure while
giving due consideration to availing more land for leasing.
  Keywords: urban, land, revenue, inefficiency, lease, financing, Bahir Dar
City","Financing Urban Infrastructure through Land Leasing: Evidence from Bahir Dar City, Ethiopia",2022-11-22 10:29:58,"Wudu Muluneh, Tadesse Amsalu","http://arxiv.org/abs/2211.12061v2, http://arxiv.org/pdf/2211.12061v2",econ.GN
32791,gn,"The United States, much like other countries around the world, faces
significant obstacles to achieving a rapid decarbonization of its economy.
Crucially, decarbonization disproportionately affects the communities that have
been historically, politically, and socially embedded in the nation's fossil
fuel production. However, this effect has rarely been quantified in the
literature. Using econometric estimation methods that control for unobserved
heterogeneity via two-way fixed effects, spatial effects, heterogeneous time
trends, and grouped fixed effects, we demonstrate that mine closures induce a
significant and consistent contemporaneous rise in the unemployment rate across
US counties. A single mine closure can raise a county's unemployment rate by
0.056 percentage points in a given year; this effect is amplified by a factor
of four when spatial econometric dynamics are considered. Although this
response in the unemployment rate fades within 2-3 years, it has far-reaching
effects in its immediate vicinity. Furthermore, we use cluster analysis to
build a novel typology of coal counties based on qualities that are thought to
facilitate a successful recovery in the face of local industrial decline. The
combined findings of the econometric analysis and typology point to the
importance of investing in alternative sectors in places with promising levels
of economic diversity, retraining job seekers in places with lower levels of
educational attainment, providing relocation (or telecommuting) support in
rural areas, and subsidizing childcare and after school programs in places with
low female labor force participation due to the gendered division of domestic
work.",Spatial-temporal dynamics of employment shocks in declining coal mining regions and potentialities of the 'just transition',2022-11-23 01:49:56,"Ebba Mark, Ryan Rafaty, Moritz Schwarz","http://arxiv.org/abs/2211.12619v1, http://arxiv.org/pdf/2211.12619v1",econ.GN
32792,gn,"Is the complexity of medical product (medicines and medical devices)
regulation impacting innovation in the US? If so, how? Here, this question is
investigated as follows: Various novel proxy metrics of regulation (FDA-issued
guidelines) and innovation (corresponding FDA-registrations) from 1976-2020 are
used to determine interdependence, a concept relying on strong correlation and
reciprocal causality (estimated via variable lag transfer entropy and wavelet
coherence). Based on this interdependence, a mapping of regulation onto
innovation is conducted, which finds that regulation seems to accelerate and
then support innovation until on or around 2015, at which time an inverted
U-curve emerged. If empirically corroborated, an important innovation-regulation
nexus in the US has been reached; as such, stakeholders should (re)consider the
complexity of the regulatory landscape to enhance US medical product
innovation. Study limitations, extensions, and further thoughts complete this
investigation.","The Impact of US Medical Product Regulatory Complexity on Innovation: Preliminary Evidence of Interdependence, Early Acceleration, and Subsequent Inversion",2022-11-21 20:41:54,Iraj Daizadeh,"http://dx.doi.org/10.1007/s11095-023-03512-1, http://arxiv.org/abs/2211.12998v1, http://arxiv.org/pdf/2211.12998v1",econ.GN
32793,gn,"Many treatment variables used in empirical applications nest multiple
unobserved versions of a treatment. I show that instrumental variable (IV)
estimands for the effect of a composite treatment are IV-specific weighted
averages of effects of unobserved component treatments. Differences between IVs
in unobserved component compliance produce differences in IV estimands even
without treatment effect heterogeneity. I describe a monotonicity condition
under which IV estimands are positively-weighted averages of unobserved
component treatment effects. Next, I develop a method that allows instruments
that violate this condition to contribute to estimation of treatment effects by
allowing them to place nonconvex, outcome-invariant weights on unobserved
component treatments across multiple outcomes. Finally, I apply the method to
estimate returns to college, finding wage returns that range from 7\% to 30\%
over the life cycle. My findings emphasize the importance of leveraging
instrumental variables that do not shift individuals between versions of
treatment, as well as the importance of policies that encourage students to
attend ""high-return college"" in addition to those that encourage ""high-return
students"" to attend college.",Interpreting Instrumental Variable Estimands with Unobserved Treatment Heterogeneity: The Effects of College Education,2022-11-23 20:05:33,Clint Harris,"http://arxiv.org/abs/2211.13132v1, http://arxiv.org/pdf/2211.13132v1",econ.GN
32794,gn,"The research is done in the context of the upcoming introduction of new
European legislation for the first time for regulation of minimum wage at
European level. Its purpose is to identify the direction and strength of the
correlation amongst changes of minimum wage and unemployment rate in the
context of conflicting findings of the scientific literature. There will be
used statistical instruments for the purposes of the analysis as it
incorporates data for Bulgaria for the period 1991-2021. The significance of
the research is related to the transition to digital economy and the necessity
for complex transformation of the minimum wage functions in the context of the
new socio-economic reality.",The Correlation: minimum wage - unemployment in the conditions of transition to digital economy,2022-11-23 22:46:09,"Shteryo Nozharov, Petya Koralova-Nozharova","http://arxiv.org/abs/2211.13278v1, http://arxiv.org/pdf/2211.13278v1",econ.GN
32796,gn,"The article examines how institutions, automation, unemployment and income
distribution interact in the context of a neoclassical growth model where
profits are interpreted as a surplus over costs of production. Adjusting the
model to the experience of the US economy, I show that joint variations in
labor institutions and technology are required to provide reasonable
explanations for the behavior of income shares, capital returns, unemployment,
and the big ratios in macroeconomics. The model offers new perspectives on
recent trends by showing that they can be analyzed by the interrelation between
the profit-making capacity of capitalist economies and the political
environment determining labor institutions.","Back to the Surplus: An Unorthodox Neoclassical Model of Growth, Distribution and Unemployment with Technical Change",2022-11-28 03:37:52,Juan E. Jacobo,"http://arxiv.org/abs/2211.14978v1, http://arxiv.org/pdf/2211.14978v1",econ.GN
32797,gn,"I introduce a new way of decomposing the evolution of the wealth distribution
using a simple continuous time stochastic model, which separates the effects of
mobility, savings, labor income, rates of return, demography, inheritance, and
assortative mating. Based on two results from stochastic calculus, I show that
this decomposition is nonparametrically identified and can be estimated based
solely on repeated cross-sections of the data. I estimate it in the United
States since 1962 using historical data on income, wealth, and demography. I
find that the main drivers of the rise of the top 1% wealth share since the
1980s have been, in decreasing level of importance, higher savings at the top,
higher rates of return on wealth (essentially in the form of capital gains),
and higher labor income inequality. I then use the model to study the effects
of wealth taxation. I derive simple formulas for how the tax base reacts to the
net-of-tax rate in the long run, which nest insights from several existing
models, and can be calibrated using estimable elasticities. In the benchmark
calibration, the revenue-maximizing wealth tax rate at the top is high (around
12%), but the revenue collected from the tax is much lower than in the static
case.",Uncovering the Dynamics of the Wealth Distribution,2022-11-24 18:43:51,Thomas Blanchet,"http://arxiv.org/abs/2211.15509v1, http://arxiv.org/pdf/2211.15509v1",econ.GN
32798,gn,"We construct a new unified panel dataset that combines route-year-level
freight rates with shipping quantities for the six major routes and
industry-year-level newbuilding, secondhand, and scrap prices from 1966 (the
beginning of the industry) to 2009. We offer detailed instructions on how to
merge various datasets and validate the data's consistency by industry experts
and former executives who have historical knowledge and experience. Using this
dataset, we provide a quantitative and descriptive analysis of the industry
dynamics known as the container crisis. Finally, we identify structural breaks
for each variable to demonstrate the impact of the shipping cartels' collapse.","Unified Container Shipping Industry Data From 1966: Freight Rate, Shipping Quantity, Newbuilding, Secondhand, and Scrap Price",2022-11-29 18:23:04,"Takuma Matsuda, Suguru Otani","http://arxiv.org/abs/2211.16292v3, http://arxiv.org/pdf/2211.16292v3",econ.GN
32799,gn,"To reduce greenhouse gas emissions, many countries plan to massively expand
wind power and solar photovoltaic capacities. These variable renewable energy
sources require additional flexibility in the power sector. Both geographical
balancing enabled by interconnection and electricity storage can provide such
flexibility. In a 100% renewable energy scenario of twelve central European
countries, we investigate how geographical balancing between countries reduces
the need for electricity storage. Our principal contribution is to separate and
quantify the different factors at play. Applying a capacity expansion model and
a factorization method, we disentangle the effect of interconnection on optimal
storage capacities through distinct factors: differences in countries' solar PV
and wind power availability patterns, load profiles, as well as hydropower and
bioenergy capacity portfolios. Results show that interconnection reduces
storage needs by around 30% in contrast to a scenario without interconnection.
Differences in wind power profiles between countries explain around 80% of that
effect.",Geographical balancing of wind power decreases storage needs in a 100% renewable European power sector,2022-11-29 20:45:58,"Alexander Roth, Wolf-Peter Schill","http://dx.doi.org/10.1016/j.isci.2023.107074, http://arxiv.org/abs/2211.16419v2, http://arxiv.org/pdf/2211.16419v2",econ.GN
32800,gn,"This paper explores the role of local knowledge spillover and human capital
as a driver of crowdfunding investment. The role of territory has already been
studied in terms of campaign success, but the impact of territory on the use of
financial sources like equity crowdfunding is not yet known. Using a sample of
435 equity crowdfunding campaigns in 20 Italian regions during a 4-year period
(from 2016 to 2019), this paper evaluates the impact of human capital flow on
the adoption of crowdfunding campaigns. Our results show that inbound knowledge
in the region, measured in terms of ability to attract national and
international students, has a significant effect on the adoption of
crowdfunding campaigns in the region itself.",Crowdfunding as Entrepreneurial Investment: The Role of Local Knowledge Spillover,2022-11-30 16:42:01,"Filippo Marchesani, Francesca Masciarelli","http://arxiv.org/abs/2211.16984v1, http://arxiv.org/pdf/2211.16984v1",econ.GN
32801,gn,"Large amounts of evidence suggest that trust levels in a country are an
important determinant of its macroeconomic growth. In this paper, we
investigate one channel through which trust might support economic performance:
through the levels of patience, also known as time preference in the economics
literature. Following Gabaix and Laibson (2017), we first argue that time
preference can be modelled as optimal Bayesian inference based on noisy signals
about the future, so that it is affected by the perceived certainty of future
outcomes. Drawing on neuroscience literature, we argue that the mechanism
linking trust and patience could be facilitated by the neurotransmitter
oxytocin. On the one hand, it is a neural correlate of trusting behavior. On
the other, it has an impact on the brain's encoding of prediction error, and
could therefore increase the perceived certainty of a neural representation of
a future event. The relationship between trust and time preference is tested
experimentally using the Trust Game. While the paper does not find a
significant effect of trust on time preference or the levels of certainty, it
proposes an experimental design that can successfully manipulate people's
short-term levels of trust for experimental purposes.",Trust and Time Preference: Measuring a Causal Effect in a Random-Assignment Experiment,2022-11-30 18:40:31,Linas Nasvytis,"http://arxiv.org/abs/2211.17080v1, http://arxiv.org/pdf/2211.17080v1",econ.GN
34727,th,"We study a Bayesian persuasion model with two-dimensional states of the
world, in which the sender (she) and receiver (he) have heterogeneous prior
beliefs and care about different dimensions. The receiver is a naive agent who
has a simplistic worldview: he ignores the dependency between the two
dimensions of the state. We provide a characterization for the sender's gain
from persuasion both when the receiver is naive and when he is rational. We
show that the receiver benefits from having a simplistic worldview if and only
if it makes him perceive the states in which his interest is aligned with the
sender as less likely.",Changing Simplistic Worldviews,2024-01-05 18:50:08,"Maxim Senkov, Toygar T. Kerman","http://arxiv.org/abs/2401.02867v1, http://arxiv.org/pdf/2401.02867v1",econ.TH
32802,gn,"With the advancement in the marketing channel, the use of e-commerce has
increased tremendously therefore the basic objective of this study is to
analyze the impact of business analytics and decision support systems on
e-commerce in small and medium enterprises. Small and medium enterprises are
becoming a priority for economies as by implementing some policies and
regulations these businesses could encourage gain development on an
international level. The objective of this study is to analyze the impact of
business analytics and decision support systems on e-commerce in small and
medium enterprises that investigate the relationship between business analytics
and decision support systems in e-commerce businesses. To evaluate the impact
of both on e-commerce the, descriptive analysis approach is adopted that
reviews the research of different scholars who adopted different plans and
strategies to predict the relationship between e-commerce and business
analytics. The study contributes to the literature by examining the impact of
business analytics in SMEs and provides a comprehensive understanding of its
relationship with the decision support system. After analyzing the impact of
business analytics and decision support system in SMEs, the research also
highlights some limitations and provide future recommendations that are helpful
to overcome these limitations.",Impact of Business Analytics and Decision Support Systems on e-commerce in SMEs,2022-11-30 03:49:28,Shah J Miah,"http://arxiv.org/abs/2212.00016v1, http://arxiv.org/pdf/2212.00016v1",econ.GN
32803,gn,"Inflation is painful, for firms, customers, employees, and society. But
careful study of periods of hyperinflation point to ways that firms can adapt.
In particular, companies need to think about how to change prices regularly and
cheaply, because constant price changes can ultimately be very, very expensive.
And they should consider how to communicate those price changes to customers.
Providing clarity and predictability can increase consumer trust and help firms
in the long run.",3 Lessons from Hyperinflationary Periods,2022-12-01 19:40:55,"Mark Bergen, Thomas Bergen, Daniel Levy, Rose Semenov","http://arxiv.org/abs/2212.00640v1, http://arxiv.org/pdf/2212.00640v1",econ.GN
32804,gn,"Following Russia's invasion of Ukraine, Western countries have looked for
ways to limit Russia's oil income. This paper considers, theoretically and
quantitatively, two such options: 1) an export-quantity restriction and 2) a
forced discount on Russian oil. We build a quantifiable model of the global oil
market and analyze how each of these policies affect: which Russian oil fields
fall out of production; the global oil supply; and the global oil price. By
these statics we derive the effects of the policies on Russian oil profits and
oil-importers' economic surplus. The effects on Russian oil profits are
substantial. In the short run (within the first year), a quantity restriction
of 20% yields Russian losses of 62 million USD per day, equivalent to 1.2% of
GDP and 32% of military spending. In the long run (beyond a year) new
investments become unprofitable. Losses rise to 100 million USD per day, 2% of
GDP and 56% of military spending. A price discount of 20% is even more harmful
to Russia, yielding losses of 152 million USD per day, equivalent to 3.1% of
GDP and 85% of military spending in the short run and long run. A price
discount puts generally more burden on Russia and less on importers compared to
a quantity restriction. In fact, a price discount implies net gains for oil
importers as it essentially redistributes oil rents from Russia to importers.
If the restrictions are expected to last for long, the burden on oil importers
decreases. Overall, both policies at all levels imply larger relative losses
for Russia than for oil importers (in shares of their GDP). The case for a
price discount on Russian oil is thus strong. However, Russia may choose not to
export at the discounted price, in which case the price-discount sanction
becomes a de facto supply restriction.",Quantity restrictions and price discounts on Russian oil,2022-12-01 20:30:28,"Henrik Wachtmeister, Johan Gars, Daniel Spiro","http://arxiv.org/abs/2212.00674v2, http://arxiv.org/pdf/2212.00674v2",econ.GN
32805,gn,"Although it has been suggested that the shift from on-site work to telework
will change the city structure, the mechanism of this change is not clear. This
study clarifies how the location of firms changes when the cost of teleworking
decreases and how this affects the urban economy. The two main results obtained
are as follows. (i) The expansion of teleworking causes firms to be located
closer to urban centers or closer to urban fringes. (ii) Teleworking makes
urban production more efficient and cities more compact. This is the first
paper to show that two empirical studies can be represented in a unified
theoretical model and that existing studies obtained by simulation can be
explained analytically.",Shifting to Telework and Firms' Location: Does Telework Make Our Society Efficient?,2022-12-02 05:42:15,Kazufumi Tsuboi,"http://arxiv.org/abs/2212.00934v1, http://arxiv.org/pdf/2212.00934v1",econ.GN
32806,gn,"Abrupt catastrophic events bring business risks into firms. The paper
introduces the Great Lushan Earthquake in 2013 in China as an unexpected shock
to explore the causal effects on public firms in both the long and short term.
DID-PSM methods are conducted to examine the robustness of causal inference.
The identifications and estimations indicate that catastrophic shock
significantly negatively impacts cash flow liquidity and profitability in the
short term. Besides, practical influences on firms' manufacturing and
operations emerge in the treated group. Firms increase non-business
expenditures and retained earnings as financial measures to resist a series of
risks during the shock period. As a long-term cost, the decline in production
factors, particularly in employment levels and the loss of fixed assets, is
permanent. The earthquake's comprehensive interactions are also reflected. The recovery
from the disaster would benefit the companies by raising the growth rate of
R\&D and enhancing competitiveness through increasing market share, though
these effects are temporary. PSM-DID and event study methods are implemented to
investigate the general effects of specific strong earthquakes on local public
firms nationwide. Consistent with the Lushan Earthquake, the ratio of cash flow
to sales dropped drastically and recovered in 3 subsequent semesters. The shock
on sales was transitory, only in the current semester.","Short-term shock, long-lasting payment: Evidence from the Lushan Earthquake",2022-12-03 09:52:01,Yujue Wang,"http://arxiv.org/abs/2212.01553v2, http://arxiv.org/pdf/2212.01553v2",econ.GN
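A minimal sketch of the PSM-DID approach mentioned above, assuming synthetic firm data and hypothetical variable names (size, leverage, cash_flow); it is not the paper's implementation. Treated firms are matched to controls on a logistic propensity score, and a standard difference-in-differences is then run on the matched panel.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)
    n = 400
    firms = pd.DataFrame({
        "firm": range(n),
        "size": rng.normal(0, 1, n),
        "leverage": rng.normal(0, 1, n),
        "treated": rng.integers(0, 2, n),   # located in the quake-affected area
    })

    # 1) Propensity scores and one-to-one nearest-neighbour matching.
    ps = LogisticRegression().fit(firms[["size", "leverage"]], firms["treated"])
    firms["pscore"] = ps.predict_proba(firms[["size", "leverage"]])[:, 1]
    treated = firms[firms.treated == 1]
    control = firms[firms.treated == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]])

    # 2) Build a pre/post panel for the matched sample and run the DiD.
    panel = pd.concat([matched.assign(post=0), matched.assign(post=1)],
                      ignore_index=True)
    effect = -0.3                            # assumed shock effect for the sketch
    panel["cash_flow"] = (rng.normal(0, 1, len(panel))
                          + effect * panel["treated"] * panel["post"])
    did = smf.ols("cash_flow ~ treated * post", data=panel).fit()
    print(did.params["treated:post"])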
32829,gn,"Since its emergence around 2010, deep learning has rapidly become the most
important technique in Artificial Intelligence (AI), producing an array of
scientific firsts in areas as diverse as protein folding, drug discovery,
integrated chip design, and weather prediction. As more scientists and
engineers adopt deep learning, it is important to consider what effect
widespread deployment would have on scientific progress and, ultimately,
economic growth. We assess this impact by estimating the idea production
function for AI in two computer vision tasks that are considered key test-beds
for deep learning and show that AI idea production is notably more
capital-intensive than traditional R&D. Because increasing the
capital-intensity of R&D accelerates the investments that make scientists and
engineers more productive, our work suggests that AI-augmented R&D has the
potential to speed up technological change and economic growth.",Economic impacts of AI-augmented R&D,2022-12-16 02:48:18,"Tamay Besiroglu, Nicholas Emery-Xu, Neil Thompson","http://arxiv.org/abs/2212.08198v2, http://arxiv.org/pdf/2212.08198v2",econ.GN
32807,gn,"The stock market's reaction to the external risk shock is closely related to
the cross-shareholding network structure. This paper takes the public
information of listed companies in the A-share securities market as the primary
sample to study the relationship between the stock return rate, market
performance, and network topology before and after China's stock market crash
in 2015. Data visualization and empirical analysis demonstrate that the return
rate of stocks is related to the company's traditional business ability and the
social capital brought by cross-holding. Several heteroscedasticity tests and
endogeneity tests with IV are conducted to support the robustness. The
structure of the cross-shareholding network experienced upheaval after the
shock, even distorting the effects of market value, and assets holding on the
return rate. The enterprises in the entire shareholding network are connected
more firmly to overcome systematic external risks. The number of enterprise
clusters is significantly reduced during the process. Besides, the number of
newly established cross-shareholding relationships shows an outbreak, which may
explain the rapid maintenance of stability in the financial system. When stable
clustering is formed before and after a stock crash (rather than when it
occurs), the clustering coefficient of clearly formed clusters still has an
apparent positive influence on the return rate of stocks. To sum up, a more
compact network may prevent firms from pursuing aggressive earnings before the
financial crisis, but it protects firms from suffering relatively high losses
during and after the shock.",Compacter networks as a defensive mechanism: How firms clustered during 2015 Financial Crisis in China,2022-12-03 09:58:18,Yujue Wang,"http://arxiv.org/abs/2212.01557v1, http://arxiv.org/pdf/2212.01557v1",econ.GN
32808,gn,"This paper proposes a new coherent model for a comprehensive study of the
cotton price using econometrics and Long-Short term memory neural network
(LSTM) methodologies. We call a simple cotton price trend and then assumed
conjectures in structural method (ARMA), Markov switching dynamic regression,
simultaneous equation system, GARCH families procedures, and Artificial Neural
Networks that determine the characteristics of cotton price trend duration
1990-2020. It is established that in the structural method, the best procedure
is AR (2) by Markov switching estimation. Based on the MS-AR procedure, it
concludes that tending to regime change from decreasing trend to an increasing
one is more significant than a reverse mode. The simultaneous equation system
investigates three procedures based on the acreage cotton, value-added, and
real cotton price. Finally, prediction with the GARCH families TARCH procedure
is the best-fitting model, and in the LSTM neural network, the results show an
accurate prediction by the training-testing method.",A comprehensive study of cotton price fluctuations using multiple Econometric and LSTM neural network models,2022-12-03 13:06:51,"Morteza Tahami Pour Zarandi, Mehdi Ghasemi Meymandi, Mohammad Hemami","http://arxiv.org/abs/2212.01584v2, http://arxiv.org/pdf/2212.01584v2",econ.GN
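For readers unfamiliar with the tools named above, the following sketch shows how a two-regime Markov-switching autoregression and a GARCH(1,1) model can be fitted in Python. The series is synthetic and merely stands in for the cotton price; this is not the authors' code, and an asymmetric (threshold) term could be added to the GARCH model via the `o` parameter of `arch_model`.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.regime_switching.markov_autoregression import MarkovAutoregression
    from arch import arch_model

    rng = np.random.default_rng(2)
    price = pd.Series(np.cumsum(rng.normal(0, 1, 400)) + 100.0)   # stand-in price series
    returns = 100 * price.pct_change().dropna()

    # Two-regime Markov-switching AR(2) with a regime-switching intercept.
    ms_ar = MarkovAutoregression(returns, k_regimes=2, order=2,
                                 switching_ar=False).fit()
    print(ms_ar.summary())

    # GARCH(1,1) on returns (a threshold term can be added with o=1).
    garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    print(garch.params)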
32809,gn,"I study a two-sided marriage market in which agents have incomplete
preferences -- i.e., they find some alternatives incomparable. The strong
(weak) core consists of matchings wherein no coalition wants to form a new
match between themselves, leaving some (all) agents better off without harming
anyone. The strong core may be empty, while the weak core can be too large. I
propose the concept of the ""compromise core"" -- a nonempty set that sits
between the weak and the strong cores. Similarly, I define the men-(women-)
optimal core and illustrate its benefit in an application to India's
engineering college admissions system.",Matching with Incomplete Preferences,2022-12-06 00:53:52,Aditya Kuvalekar,"http://arxiv.org/abs/2212.02613v3, http://arxiv.org/pdf/2212.02613v3",econ.GN
32810,gn,"Even as policymakers seek to encourage economic development by addressing
misallocation due to frictions in labor markets, the associated production
externalities - such as air pollution - remain unexplored. Using a regression
discontinuity design, we show access to rural roads increases agricultural
fires and particulate emissions. Farm labor exits are a likely mechanism
responsible for the increase in agricultural fires: rural roads cause movement
of workers out of agriculture and induce farmers to use fire - a labor-saving
but polluting technology - to clear agricultural residue or to make harvesting
less labor-intensive. Overall, the adoption of fires due to rural roads
increases infant mortality rate by 5.5% in downwind locations.",Structural transformation and environmental externalities,2022-12-06 02:49:19,"Teevrat Garg, Maulik Jagnani, Hemant K. Pullabhotla","http://arxiv.org/abs/2212.02664v1, http://arxiv.org/pdf/2212.02664v1",econ.GN
32811,gn,"Identifying factors that affect participation is key to a successful
insurance scheme. This study's challenges involve using many factors that could
affect insurance participation to make a better forecast.Huge numbers of
factors affect participation, making evaluation difficult. These interrelated
factors can mask the influence on adhesion predictions, making them
misleading.This study evaluated how 66 common characteristics affect insurance
participation choices. We relied on individual farm data from FADN from 2016 to
2019 with type 1 (Fieldcrops) farming with 10,926 observations.We use three
Machine Learning (ML) approaches (LASSO, Boosting, Random Forest) compare them
to the GLM model used in insurance modelling. ML methodologies can use a large
set of information efficiently by performing the variable selection. A highly
accurate parsimonious model helps us understand the factors affecting insurance
participation and design better products.ML predicts fairly well despite the
complexity of insurance participation problem. Our results suggest Boosting
performs better than the other two ML tools using a smaller set of regressors.
The proposed ML tools identify which variables explain participation choice.
This information includes the number of cases in which single variables are
selected and their relative importance in affecting participation.Focusing on
the subset of information that best explains insurance participation could
reduce the cost of designing insurance schemes.",Can Machine Learning discover the determining factors in participation in insurance schemes? A comparative analysis,2022-12-06 19:02:09,"Luigi Biagini, Simone Severini","http://arxiv.org/abs/2212.03092v3, http://arxiv.org/pdf/2212.03092v3",econ.GN
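A hedged sketch of the model comparison described above: a LASSO-penalized logit, boosting, and a random forest against an effectively unpenalized logit baseline, on synthetic data with 66 candidate determinants. Feature construction, tuning, and the FADN data themselves are omitted, so this is an illustration of the workflow rather than the authors' analysis.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # 66 candidate determinants, only a few of which are truly informative.
    X, y = make_classification(n_samples=2000, n_features=66, n_informative=10,
                               random_state=0)

    models = {
        "GLM (logit)": LogisticRegression(C=1e6, max_iter=1000),   # effectively unpenalized
        "LASSO": LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000),
        "Boosting": GradientBoostingClassifier(),
        "Random Forest": RandomForestClassifier(n_estimators=300),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: {acc:.3f}")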
32867,gn,"We investigate the elasticity of portfolio investment to geographical
distance in a gravity model utilizing a bilateral panel of 86 reporting and 241
counterparty countries/territories for 2007-2017. We find that the elasticity
is more negative for ASEAN than OECD members. The difference is larger if we
exclude Singapore. This indicates that Singapore's behavior is very different
from other ASEAN members. While Singapore tends to invest in faraway OECD
countries, other ASEAN members tend to invest in nearby countries. Our study
also shows the emergence of China as a significant investment destination for
ASEAN members.",ASEAN's Portfolio Investment in a Gravity Model,2023-01-13 11:57:37,"Tomoo Kikuchi, Satoshi Tobe","http://arxiv.org/abs/2301.05443v1, http://arxiv.org/pdf/2301.05443v1",econ.GN
32812,gn,"This paper evaluates Machine Learning (ML) in establishing ratemaking for new
insurance schemes. To make the evaluation feasible, we established expected
indemnities as premiums. Then, we use ML to forecast indemnities using a
minimum set of variables. The analysis simulates the introduction of an income
insurance scheme, the so-called Income Stabilization Tool (IST), in Italy as a
case study using farm-level data from the FADN from 2008-2018. We predicted the
expected IST indemnities using three ML tools, LASSO, Elastic Net, and
Boosting, that perform variable selection, comparing with the Generalized
Linear Model (baseline) usually adopted in insurance investigations.
Furthermore, the Tweedie distribution is implemented to account for the
peculiar shape of the indemnity function, which is zero-inflated, non-negative,
and asymmetric with a fat tail. The robustness of the results was evaluated by
comparing the econometric and economic performance of the models. Specifically,
the ML tools achieved better goodness-of-fit than the baseline, using a small
and stable selection of regressors and significantly reducing the cost of
gathering information. Boosting obtained the best economic performance,
balancing the most and least risky subjects optimally and achieving good
economic sustainability. These findings suggest that machine learning can be
successfully applied in agricultural insurance. This study is among the first
to use ML and the Tweedie distribution in agricultural insurance, demonstrating
their potential to overcome multiple issues.",Applications of Machine Learning for the Ratemaking in Agricultural Insurances,2022-12-06 19:24:53,Luigi Biagini,"http://arxiv.org/abs/2212.03114v3, http://arxiv.org/pdf/2212.03114v3",econ.GN
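A minimal sketch of Tweedie-based indemnity modelling, assuming a compound Poisson-gamma variance function (power 1.5) and synthetic farm-level data; fitted values can then serve as expected indemnities, i.e. risk-based premiums. This is illustrative only, not the paper's implementation.

    import numpy as np
    from sklearn.linear_model import TweedieRegressor

    rng = np.random.default_rng(4)
    n = 5000
    X = rng.normal(size=(n, 5))                    # hypothetical farm-level predictors
    mu = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1])
    claims = rng.poisson(0.3, n)                   # many zeros: no indemnity paid
    indemnity = np.where(claims > 0, rng.gamma(2.0, mu), 0.0)

    # Tweedie GLM with power between 1 (Poisson) and 2 (gamma), log link.
    model = TweedieRegressor(power=1.5, alpha=0.01, link="log", max_iter=1000)
    model.fit(X, indemnity)
    expected_indemnity = model.predict(X)          # usable as a risk-based premium
    print(expected_indemnity[:5])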
32813,gn,"In 1990, one in five U.S. workers were aged over 50 years whereas today it is
one in three. One possible explanation for this is that occupations have become
more accommodating to the preferences of older workers. We explore this by
constructing an ""age-friendliness"" index for occupations. We use Natural
Language Processing to measure the degree of overlap between textual
descriptions of occupations and characteristics which define age-friendliness.
Our index provides an approximation to rankings produced by survey participants
and has predictive power for the occupational share of older workers. We find
that between 1990 and 2020 around three quarters of occupations have seen their
age-friendliness increase and employment in above-average age-friendly
occupations has risen by 49 million. However, older workers have not benefited
disproportionately from this rise, with substantial gains going to younger
females and college graduates and with male non-college educated workers losing
out the most. These findings point to the need to frame the rise of
age-friendly jobs in the context of other labour market trends and
imperfections. Purely age-based policies are insufficient given both
heterogeneity amongst older workers as well as similarities between groups of
older and younger workers. The latter is especially apparent in the overlapping
appeal of specific occupational characteristics.",The Rise of Age-Friendly Jobs,2022-12-07 01:30:22,"Daron Acemoglu, Nicolaj Søndergaard Mühlbach, Andrew J. Scott","http://dx.doi.org/10.1016/j.jeoa.2022.100416, http://arxiv.org/abs/2212.03355v1, http://arxiv.org/pdf/2212.03355v1",econ.GN
32814,gn,"Total factor productivity (TFP) is a key determinant of farm development, a
sector that receives substantial public support. The issue has taken on great
importance today, where the conflict in Ukraine has led to repercussions on the
cereal markets. This paper investigates the effects of different subsidies on
the productivity of cereal farms, accounting that farms differ according to the
level of TFP. We relied on a three-step estimation strategy: i) estimation of
production functions, ii) evaluation of TFP, and iii) assessment of the
relationship between CAP subsidies and TFP. To overcome multiple endogeneity
problems, the System-GMM estimator is adopted. The investigation embraces farms
in France, Germany, Italy, Poland, Spain and the United Kingdom using the FADN
samples from 2008 to 2018. Adding to previous analyses, we compare results from
different countries and investigate three subsets of farms with varying levels
of TFP. The outcomes confirm that the CAP negatively impacts farm TFP, but the
extent differs according to the type of subsidy, across the six countries and,
within these, among farms in different productivity groups. Therefore, there is
room for policy improvements to foster the productivity of cereal
farms.",The impact of CAP subsidies on the productivity of cereal farms in six European countries,2022-12-07 11:07:40,"Luigi Biagini, Federico Antonioli, Simone Severini","http://arxiv.org/abs/2212.03503v1, http://arxiv.org/pdf/2212.03503v1",econ.GN
32815,gn,"Choice overload - by which larger choice sets are detrimental to a chooser's
wellbeing - is potentially of great importance to the design of economic
policy. Yet the current evidence on its prevalence is inconclusive. We argue
that existing tests are likely to be underpowered and hence that choice
overload may occur more often than the literature suggests. We propose more
powerful tests based on richer data and characterization theorems for the
Random Utility Model. These new approaches come with significant econometric
challenges, which we show how to address. We apply our tests to new
experimental data and find strong evidence of choice overload that would likely
be missed using current approaches.",A Better Test of Choice Overload,2022-12-07 22:54:59,"Mark Dean, Dilip Ravindran, Jörg Stoye","http://arxiv.org/abs/2212.03931v1, http://arxiv.org/pdf/2212.03931v1",econ.GN
32816,gn,"This paper proposes to estimate the returns-to-scale of production sets by
considering the individual return of each observed firm through the notion of
$\Lambda$-returns to scale assumption. Along this line, the global technology
is then constructed as the intersection of all the individual technologies.
Hence, an axiomatic foundation is proposed to present the notion of
$\Lambda$-returns to scale. This new characterization of the returns-to-scale
encompasses the definition of $\alpha$-returns to scale, as a special case as
well as the standard non-increasing and non-decreasing returns-to-scale models.
A non-parametric procedure based upon the goodness of fit approach is proposed
to assess these individual returns-to-scale. The notion of $\Lambda$-returns to
scale is illustrated empirically using a dataset of 63 industries constituting
the whole American economy over the period 1987-2018.",$Λ$-Returns to Scale and Individual Minimum Extrapolation Principle,2022-12-09 11:35:26,"Jean-Philippe Boussemart, Walter Briec, Raluca Parvulescu, Paola Ravelojaona","http://arxiv.org/abs/2212.04724v2, http://arxiv.org/pdf/2212.04724v2",econ.GN
32868,gn,"In the US airline industry, independent regional airlines fly passengers on
behalf of several national airlines across different markets, giving rise to
$\textit{common subcontracting}$. On the one hand, we find that subcontracting
is associated with lower prices, consistent with the notion that regional
airlines tend to fly passengers at lower costs than major airlines. On the
other hand, we find that $\textit{common}$ subcontracting is associated with
higher prices. These two countervailing effects suggest that the growth of
regional airlines can have anticompetitive implications for the industry.",Common Subcontracting and Airline Prices,2023-01-15 05:09:45,"Gaurab Aryal, Dennis J. Campbell, Federico Ciliberto, Ekaterina A. Khmelnitskaya","http://arxiv.org/abs/2301.05999v4, http://arxiv.org/pdf/2301.05999v4",econ.GN
32817,gn,"This research article explores potential influencing factors of solar
photovoltaic (PV) system adoption by municipal authorities in Germany in the
year 2019. We derive seven hypothesized relationships from the empirical
literature on residential PV adoption, organizational technology adoption, and
sustainability policy adoption by local governments, and apply a twofold
empirical approach to examine them. First, we explore the associations of a set
of explanatory variables on the installed capacity of adopter municipalities
(N=223) in an OLS model. Second, we use a logit model to analyze whether the
identified relationships are also apparent between adopter and non-adopter
municipalities (N=423). Our findings suggest that fiscal capacity (measured by
per capita debt and per capita tax revenue) and peer effects (measured by the
pre-existing installed capacity) are positively associated with both the
installed capacity and adoption. Furthermore, we find that institutional
capacity (measured by the presence of a municipal utility) and environmental
concern (measured by the share of green party votes) are positively associated
with municipal PV adoption. Economic factors (measured by solar irradiation)
show a significant positive but small effect in both regression models. No
evidence was found to support the influence of political will. Results for the
role of municipal characteristics are mixed, although the population size was
consistently positively associated with municipal PV adoption and installed
capacity. Our results support previous studies on PV system adoption
determinants and offer a starting point for additional research on
non-residential decision-making and PV adoption.",Exploring non-residential technology adoption: an empirical analysis of factors associated with the adoption of photovoltaic systems by municipal authorities in Germany,2022-12-10 14:49:33,"Maren Springsklee, Fabian Scheller","http://arxiv.org/abs/2212.05281v1, http://arxiv.org/pdf/2212.05281v1",econ.GN
32818,gn,"Based on the practical process of innovation and entrepreneurship education
for college students in the author's university, this study analyzes and
deconstructs the key concepts of AI knowledge-based crowdsourcing on the basis
of literature research, and analyzes the objective fitting needs of combining
AI knowledge-based crowdsourcing with college students' innovation and
entrepreneurship education practice through a survey and research of a random
sample of college students, and verifies that college students' knowledge and
application of AI knowledge-based crowdsourcing in the learning and practice of
innovation and entrepreneurship The study also verifies the awareness and
application of AI knowledge-based crowdsourcing knowledge by university
students in the learning and practice of innovation and entrepreneurship.",Research on College Students' Innovation and Entrepreneurship Education from The Perspective of Artificial Intelligence Knowledge-Based Crowdsourcing,2022-12-12 17:11:07,"Yufei Xie, Xiang Liu, Qizhong Yuan","http://arxiv.org/abs/2212.05906v1, http://arxiv.org/pdf/2212.05906v1",econ.GN
32819,gn,"We find that the early impact of defense news shocks on GDP is mainly due to
a rise in business inventories, as contractors ramp up production for new
defense contracts. These contracts do not affect government spending (G) until
payment-on-delivery, which occurs 2-3 quarters later. Novel data on defense
procurement obligations reveals that contract awards Granger-cause
VAR-identified shocks to G, but not defense news shocks. This implies that the
early GDP response relative to G is largely driven by the delayed accounting of
defense contracts, while VAR-identified shocks to G miss the impact on
inventories, resulting in lower multiplier estimates.",Why Does GDP Move Before G? It's all in the Measurement,2022-12-12 20:40:44,"Edoardo Briganti, Victor Sellemi","http://arxiv.org/abs/2212.06073v3, http://arxiv.org/pdf/2212.06073v3",econ.GN
32820,gn,"This paper utilizes wealth shocks from winning lottery prizes to examine the
causal effect of financial resources on fertility. We employ extensive panels
of administrative data encompassing over 0.4 million lottery winners in Taiwan
and implement a triple-differences design. Our analyses reveal that a
substantial lottery win can significantly increase fertility, the implied
wealth elasticity of which is around 0.06. Moreover, the primary channel
through which fertility increases is by prompting first births among previously
childless individuals. Finally, our analysis reveals that approximately 25% of
the total fertility effect stems from increased marriage rates following a
lottery win.",The Effect of Financial Resources on Fertility: Evidence from Administrative Data on Lottery Winners,2022-12-12 23:16:46,"Yung-Yu Tsai, Hsing-Wen Han, Kuang-Ta Lo, Tzu-Ting Yang","http://arxiv.org/abs/2212.06223v2, http://arxiv.org/pdf/2212.06223v2",econ.GN
32821,gn,"The study discusses the main features which affect the IT companies valuation
on the example of social networks. The relevance of the chosen topic is due to
the fact that people live in the information age now, and information
technologies surround us everywhere. Because of this, social networks have
become very popular. They assist people to communicate with each other despite
of the time and distance. Social networks are also companies that operate in
order to generate income therefore their owners need to know how promising and
profitable their business is. The social networks differ from traditional
companies in this case the purpose of the research is determining the features
of social networks that affect the accuracy and adequacy of the results of
company valuation. The paper reviews the definitions of information technology,
social networks, history, types of social networks, distinguishing features
based on domestic and foreign literature. There are analyzed methods of
assessing the value of Internet companies, their characteristics and methods of
application. There is the six social networks evaluation was assessed in the
practical part of the study: Facebook, Twitter, Pinterest, Snapchat, Sina Weibo
and Vkontakte on the basis of the literature studied and the methods for
evaluating the Internet companies which recommended in it, including the method
of discounting the cash flow of the company as part of the income approach and
the multiplier method as part of a comparative approach. Based on the analysis,
the features that affect the social networks valuation are identified.",IT companies: the specifics of social networks valuation,2022-12-12 16:10:25,"K. V. Yupatova, O. A. Malafeyev, V. S. Lipatnikov, V. Y. Bezrukikh","http://arxiv.org/abs/2212.06674v1, http://arxiv.org/pdf/2212.06674v1",econ.GN
32822,gn,"The spread of viral pathogens is inherently a spatial process. While the
temporal aspects of viral spread at the epidemiological level have been
increasingly well characterized, the spatial aspects of viral spread are still
understudied due to a striking absence of theoretical expectations of how
spatial dynamics may impact the temporal dynamics of viral populations.
Characterizing the spatial transmission and understanding the factors driving
it are important for anticipating local timing of disease incidence and for
guiding more informed control strategies. Using a unique data set from Nova
Scotia, the objective of this study is to apply a novel method that recovers a
spatial network of influenza-like viral spread in which dominant regions are
identified and ranked. We then focus on identifying regional predictors of
those dominant regions.",Identifying the regional drivers of influenza-like illness in Nova Scotia with dominance analysis,2022-12-13 18:59:56,"Yigit Aydede, Jan Ditzen","http://arxiv.org/abs/2212.06684v1, http://arxiv.org/pdf/2212.06684v1",econ.GN
34728,th,"There has been a remarkable increase in work at the interface of computer
science and game theory in the past decade. In this article I survey some of
the main themes of work in the area, with a focus on the work in computer
science. Given the length constraints, I make no attempt at being
comprehensive, especially since other surveys are also available, and a
comprehensive survey book will appear shortly.",Computer Science and Game Theory: A Brief Survey,2007-03-29 21:43:58,Joseph Y. Halpern,"http://arxiv.org/abs/cs/0703148v1, http://arxiv.org/pdf/cs/0703148v1",cs.GT
32823,gn,"Mental health disorders are particularly prevalent among those in the
criminal justice system and may be a contributing factor in recidivism. Using
North Carolina court cases from 1994 to 2009, this paper evaluates how mandated
mental health treatment as a term of probation impacts the likelihood that
individuals return to the criminal justice system. I use random variation in
judge assignment to compare those who were required to seek weekly mental
health counseling to those who were not. The main findings are that being
assigned to seek mental health treatment decreases the likelihood of three-year
recidivism by about 12 percentage points, or 36 percent. This effect persists
over time, and is similar among various types of individuals on probation. In
addition, I show that mental health treatment operates distinctly from drug
addiction interventions in a multiple-treatment framework. I provide evidence
that mental health treatment's longer-term effectiveness is strongest among
more financially-advantaged probationers, consistent with this setting, in
which the cost of mandated treatment is shouldered by offenders. Finally,
conservative calculations result in a 5:1 benefit-to-cost ratio which suggests
that the treatment-induced decrease in future crime would be more than
sufficient to offset the costs of treatment.",The Role of Mandated Mental Health Treatment in the Criminal Justice System,2022-12-13 20:16:16,Rachel Nesbit,"http://arxiv.org/abs/2212.06736v2, http://arxiv.org/pdf/2212.06736v2",econ.GN
32824,gn,"To meet widely recognised carbon neutrality targets, over the last decade
metropolitan regions around the world have implemented policies to promote the
generation and use of sustainable energy. Nevertheless, there is an
availability gap in formulating and evaluating these policies in a timely
manner, since sustainable energy capacity and generation are dynamically
determined by various factors along dimensions based on local economic
prosperity and societal green ambitions. We develop a novel data-driven
platform to predict and evaluate energy transition policies by applying an
artificial neural network and a technology diffusion model. Using Singapore,
London, and California as case studies of metropolitan regions at distinctive
stages of energy transition, we show that in addition to forecasting renewable
energy generation and capacity, the platform is particularly powerful in
formulating future policy scenarios. We recommend global application of the
proposed methodology to future sustainable energy transition in smart regions.",Data-Driven Prediction and Evaluation on Future Impact of Energy Transition Policies in Smart Regions,2022-12-14 07:19:34,"Chunmeng Yang, Siqi Bu, Yi Fan, Wayne Xinwei Wan, Ruoheng Wang, Aoife Foley","http://arxiv.org/abs/2212.07019v1, http://arxiv.org/pdf/2212.07019v1",econ.GN
32825,gn,"With the development of China's economy and society, the importance of
""craftsman's spirit"" has become more and more prominent. As the main
educational institution for training technical talents, higher vocational
colleges vigorously promote the exploration of the cultivation path of
craftsman spirit in higher vocational education, which provides new ideas and
directions for the reform and development of higher vocational education, and
is the fundamental need of the national innovation driven development strategy.
Based on the questionnaire survey of vocational students in a certain range,
this paper analyzes the problems existing in the cultivation path of craftsman
spirit in Higher Vocational Education from multiple levels and the
countermeasures.",Research on The Cultivation Path of Craftsman Spirit in Higher Vocational Education Based on Survey Data,2022-12-14 11:45:56,"Yufei Xie, Jing Cui, Mengdie Wang","http://arxiv.org/abs/2212.07099v1, http://arxiv.org/pdf/2212.07099v1",econ.GN
32826,gn,"On November 22nd 2022, the lending platform AAVE v2 (on Ethereum) incurred
bad debt resulting from a major liquidation event involving a single user who
had borrowed close to \$40M of CRV tokens using USDC as collateral. This
incident has prompted the Aave community to consider changes to its liquidation
threshold, and limitations on the number of illiquid coins that can be borrowed
on the platform. In this paper, we argue that the bad debt incurred by AAVE was
not due to excess volatility in CRV/USDC price activity on that day, but rather
a fundamental flaw in the liquidation logic which triggered a toxic liquidation
spiral on the platform. We note that this flaw, which is shared by a number of
major DeFi lending markets, can be easily overcome with simple changes to the
incentives driving liquidations. We claim that halting all liquidations once a
user's loan-to-value (LTV) ratio surpasses a certain threshold value can
prevent future toxic liquidation spirals and offer substantial improvement in
the bad debt that a lending market can expect to incur. Furthermore, we
strongly argue that protocols should enact dynamic liquidation incentives and
closing factor policies moving forward for optimal management of protocol risk.",Toxic Liquidation Spirals,2022-12-14 19:10:26,"Jakub Warmuz, Amit Chaudhary, Daniele Pinna","http://arxiv.org/abs/2212.07306v2, http://arxiv.org/pdf/2212.07306v2",econ.GN
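A toy simulation of the mechanism the abstract describes: once a loan's LTV exceeds 1/(1+liquidation bonus), each partial liquidation raises the LTV further, and halting liquidations above a threshold caps the resulting bad debt. All parameter values are illustrative assumptions, not AAVE's actual settings.

    def simulate(debt, collateral, bonus=0.08, close_factor=0.5,
                 liq_threshold=0.90, halt_above_ltv=None, max_rounds=50):
        """Repeatedly repay close_factor of the debt, seizing collateral worth
        (1 + bonus) times the repaid amount, and return the bad debt left over."""
        for _ in range(max_rounds):
            ltv = debt / collateral
            if ltv < liq_threshold:
                break                      # position healthy again, stop liquidating
            if halt_above_ltv is not None and ltv > halt_above_ltv:
                break                      # policy: halt liquidations on near-insolvent loans
            repaid = close_factor * debt
            seized = min(collateral, repaid * (1 + bonus))
            debt -= repaid
            collateral -= seized
            if collateral <= 0:
                break
        return max(debt - collateral, 0.0)  # bad debt left with the protocol

    # With LTV above 1/(1+bonus) the spiral grows bad debt; halting caps it.
    print("no halt  :", simulate(debt=100.0, collateral=103.0))
    print("halt@0.98:", simulate(debt=100.0, collateral=103.0, halt_above_ltv=0.98))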
32827,gn,"Using stock market responses to drug development announcements, we measure
the $\textit{value of pharmaceutical drug innovations}$ by estimating the
market values of drugs. We estimate the average value of successful drugs to be
\$1.62 billion and the average value at the discovery stage to be \$64.3
million. We also allow for heterogeneous expectations across several major
diseases, such as cancer and diabetes, and estimate their values separately. We
then apply our estimates to (i) determine the average costs of drug development
at the discovery stage and the three phases of clinical trials to be \$58.5,
\$0.6, \$30, and \$41 million, respectively, and (ii) investigate $\textit{drug
buyouts}$, among others, as policies to support drug development.",Valuing Pharmaceutical Drug Innovations,2022-12-14 21:09:32,"Gaurab Aryal, Federico Ciliberto, Leland E. Farmer, Ekaterina Khmelnitskaya","http://arxiv.org/abs/2212.07384v3, http://arxiv.org/pdf/2212.07384v3",econ.GN
32828,gn,"This paper examines the effect of the federal EV income tax subsidy on EV
sales. I find that reduction of the federal subsidy caused sales to decline by
$43.2 \%$. To arrive at this result, I employ historical time series data from
the Department of Energy Alternative Fuels Data Center. Using the fact that the
subsidy is available only for firms with fewer than 200,000 cumulative EV
sales, I separate EV models into two groups. The treatment group consists of
models receiving the full subsidy, and the control group consists of models
receiving the reduced subsidy. This allows for a difference in differences
(DiD) model structure. To examine the robustness of the results, I conduct
regression analyses. Due to a relatively small sample size, the regression
coefficients lack statistical significance. Above all, my results suggest that
federal incentives designed to promote EV consumption are successful in their
objectives.",Cap or No Cap? What Can Governments Do to Promote EV Sales?,2022-10-26 17:18:46,Zunian Luo,"http://arxiv.org/abs/2212.08137v1, http://arxiv.org/pdf/2212.08137v1",econ.GN
32830,gn,"This paper studies the effects of teachers' stereotypical assessments of boys
and girls on students' long-term outcomes, including high school graduation,
college attendance, and formal sector employment. I measure teachers' gender
stereotypical grading based on preconceptions about boys' aptitude in math and
science and girls' aptitude in communicating and verbal skills by analyzing
novel data on teachers' responses to the Implicit Association Test (IAT) and
differences in gender gaps between teacher-assigned and blindly graded tests.
To collect IAT scores on a national scale, I developed a large-scale
educational program accessible to teachers and students in Peruvian public
schools. This analysis provides evidence that teachers' gender stereotypes are
reflected in student evaluations: math teachers with stronger stereotypes
associating boys with scientific disciplines award male students with higher
test scores compared to blindly-graded test scores. In contrast, language arts
teachers who stereotypically associate females with humanities-based
disciplines give female students higher grades. Using graduation, college
enrollment, and matched employer-employee data on 1.7 million public high
school students expected to graduate between 2015 and 2019, I find that female
students who are placed with teachers who use more stereotypical grading
practices are less likely to graduate from high school and apply to college
than male students. In comparison to their male counterparts, female high
school graduates whose teachers employ more stereotypical grading practices are
less likely to be employed in the formal sector and have fewer paid working
hours. Furthermore, exposure to teachers with more stereotypical grading
practices reduces women's monthly earnings, thereby widening the gender pay
gap.",The Long-Term Effects of Teachers' Gender Stereotypes,2022-12-16 04:17:53,Joan Martinez,"http://arxiv.org/abs/2212.08220v2, http://arxiv.org/pdf/2212.08220v2",econ.GN
32831,gn,"Carbon dioxide removal (CDR) moves atmospheric carbon to geological or
land-based sinks. In a first-best setting, the optimal use of CDR is achieved
by a removal subsidy that equals the optimal carbon tax and marginal damages.
We derive second-best policy rules for CDR subsidies and carbon taxes when no
global carbon price exists but a national government implements a unilateral
climate policy. We find that the optimal carbon tax differs from an optimal CDR
subsidy because of carbon leakage and a balance of resource trade effect.
First, the optimal removal subsidy tends to be larger than the carbon tax
because of lower supply-side leakage on fossil resource markets. Second, net
carbon exporters exacerbate this wedge to increase producer surplus of their
carbon resource producers, implying even larger removal subsidies. Third, net
carbon importers may set their removal subsidy even below their carbon tax when
marginal environmental damages are small, to appropriate producer surplus from
carbon exporters.",Optimal pricing for carbon dioxide removal under inter-regional leakage,2022-12-19 11:33:43,"Max Franks, Matthias Kalkuhl, Kai Lessmann","http://dx.doi.org/10.1016/j.jeem.2022.102769, http://arxiv.org/abs/2212.09299v1, http://arxiv.org/pdf/2212.09299v1",econ.GN
32832,gn,"This article presents evidence based on a panel of 35 countries over the past
30 years that the Phillips curve relation holds for food inflation. That is,
broader economic overheating does push up the food component of the CPI in a
systematic way. Further, general inflation expectations from professional
forecasters clearly impact food price inflation. The analysis also quantifies
the extent to which higher food production and imports, or lower food exports,
reduce food inflation. Importantly, the link between domestic and global food
prices is typically weak, with passthroughs within a year ranging from 0.07 to
0.16, after exchange rate variations are taken into account.",Understanding the food component of inflation,2022-12-19 14:38:43,Emanuel Kohlscheen,"http://arxiv.org/abs/2212.09380v1, http://arxiv.org/pdf/2212.09380v1",econ.GN
32833,gn,"Liquid Democracy is a voting system touted as the golden medium between
representative and direct democracy: decisions are taken by referendum, but
voters can delegate their votes as they wish. The outcome can be superior to
simple majority voting, but even when experts are correctly identified,
delegation must be used sparingly. We ran two very different experiments: one
follows a tightly controlled lab design; the second is a perceptual task run
online where the precision of information is ambiguous. In both experiments,
delegation rates are high, and Liquid Democracy underperforms both universal
voting and the simpler option of allowing abstention.",Liquid Democracy. Two Experiments on Delegation in Voting,2022-12-19 21:44:13,"Joseph Campbell, Alessandra Casella, Lucas de Lara, Victoria Mooers, Dilip Ravindran","http://arxiv.org/abs/2212.09715v1, http://arxiv.org/pdf/2212.09715v1",econ.GN
32834,gn,"We revisit the results of a recent paper by Equipo Anova, who claim to find
evidence of an improvement in Venezuelan imports of food and medicines
associated with the adoption of U.S. financial sanctions towards Venezuela in
2017. We show that their results are a consequence of data coding errors and questionable methodological choices, including the use of an unreasonable functional form that implies a counterfactual of negative imports in the absence of sanctions, the omission of data accounting for four-fifths of the country's food imports at the time of sanctions, and the incorrect application of regression discontinuity methods. Once these errors are corrected, the evidence
of a significant improvement in the level and rate of change in imports of
essentials disappears.",Sanctions and Imports of Essential Goods: A Closer Look at the Equipo Anova (2021) Results,2022-12-20 01:54:10,Francisco Rodríguez,"http://arxiv.org/abs/2212.09904v1, http://arxiv.org/pdf/2212.09904v1",econ.GN
32835,gn,"Since several years, the fragility of global supply chains (GSCs) is at
historically high levels. In the same time, the landscape of hybrid threats is
expanding; new forms of hybrid threats create different types of uncertainties.
This paper aims to understand the potential consequences of uncertain events -
like natural disasters, pandemics, hybrid and/or military aggression - on GSC
resilience and robustness. Leveraging a parsimonious supply chain model, we
analyse how the organisational structure of GSCs interacts with uncertainty,
and how risk-aversion vs. ambiguity-aversion, vertical integration vs. upstream
outsourcing, resilience vs. efficiency trade-offs drive a wedge between
decentralised and centralised optimal GSC diversification strategies in
the presence of externalities. Parameterising the scalable data model with World Input-Output Tables, we simulate the survival probability of a GSC and
implications for supply chain robustness and resilience. The presented
model-based simulations provide an interoperable and directly comparable
conceptualisation of positive and normative effects of counterfactual
resilience and robustness policy choices under individually optimal
(decentralised) and socially optimal (centralised) GSC organisation structures.",Enhancing Resilience: Model-based Simulations,2022-12-21 18:47:45,d'Artis Kancs,"http://arxiv.org/abs/2212.11108v1, http://arxiv.org/pdf/2212.11108v1",econ.GN
32836,gn,"The energy consumption, the transfer of resources through the international
trade, the transition towards renewable energies and the environmental
sustainability appear as key drivers in order to evaluate the resilience of the
energy systems. Concerning the consumptions, in the literature a great
attention has been paid to direct energy, but the production of goods and
services also involves indirect energy. Hence, in this work we consider
different types of embodied energy sources and the time evolution of the
sectors' and countries' interactions. Flows are indeed used to construct a
directed and weighted temporal multilayer network based respectively on
renewable and non-renewable sources, where sectors are nodes and layers are
countries. We provide a methodological approach for analysing the network
reliability and resilience and for identifying critical sectors and economies
in the system by applying the Multi-Dimensional HITS algorithm. Then, we
evaluate central arcs in the network at each time period by proposing a novel
topological indicator based on the maximum flow problem. In this way, we
provide a full view of economies, sectors and connections that play a relevant
role over time in the network and whose removal could heavily affect the
stability of the system. We provide a numerical analysis based on the embodied
energy flows among countries and sectors in the period from 1990 to 2016.
Results show that the methods are effective in capturing the different patterns
between renewable and non-renewable energy sources.",Strategic energy flows in input-output relations: a temporal multilayer approach,2022-12-22 13:23:02,"Gian Paolo Clemente, Alessandra Cornaro, Rosanna Grassi, Giorgio Rizzini","http://arxiv.org/abs/2212.11585v1, http://arxiv.org/pdf/2212.11585v1",econ.GN
32837,gn,"Fooji Inc. is a social media engagement platform that has created a
proprietary ""Just-in-time"" delivery network to provide prizes to social media
marketing campaign participants in real-time. In this paper, we prove the
efficacy of the ""Just-in-time"" delivery network through a cluster analysis that
extracts and presents the underlying drivers of campaign engagement.
  We utilize a machine learning methodology with a principal component analysis (PCA) to organize Fooji campaigns in the resulting principal-component space. The arrangement
of data across the principal component space allows us to expose underlying
trends using a $K$-means clustering technique. The most important of these
trends is the demonstration of how the ""Just-in-time"" delivery network improves
social media engagement.",The Effects of Just-in-time Delivery on Social Engagement: A Cluster Analysis,2022-12-23 15:27:02,"Moisés Ramírez, Raziel Ruíz, Nathan Klarer","http://arxiv.org/abs/2212.12285v1, http://arxiv.org/pdf/2212.12285v1",econ.GN
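The abstract above describes a PCA-then-K-means workflow. Below is a minimal, hedged sketch of that kind of pipeline; the data, feature count, and cluster count are illustrative assumptions, not Fooji's actual setup.

```python
# Illustrative sketch of a PCA + K-means clustering pipeline (placeholder data;
# the number of features, components and clusters are assumptions, not Fooji's).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # 200 campaigns x 8 engagement metrics

X_std = StandardScaler().fit_transform(X)          # standardize before PCA
pcs = PCA(n_components=2).fit_transform(X_std)     # project into principal-component space
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print(labels[:10])                                 # cluster assignment of the first campaigns
```

Inspecting the cluster centroids in the principal-component space is then one way to surface the "underlying drivers" of engagement that the abstract refers to.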
32838,gn,"Recent global events emphasize the importance of a reliable energy supply.
One way to increase energy supply security is through decentralized off-grid
renewable energy systems, for which a growing number of case studies are
researched. This review gives a global overview of the levelized cost of
electricity (LCOE) for these autonomous energy systems, which range from 0.03
\$_{2021}/kWh to over 1.00 \$_{2021}/kWh worldwide. The average LCOEs for 100%
renewable energy systems have decreased by 9% annually between 2016 and 2021
from 0.54 \$_{2021}/kWh to 0.29 \$_{2021}/kWh, presumably due to cost
reductions in renewable energy and storage technologies. Furthermore, we
identify and discuss seven key reasons why LCOEs are frequently overestimated
or underestimated in literature, and how this can be prevented in the future.
Our overview can be employed to verify findings on off-grid systems, to assess
where these systems might be deployed and how costs evolve.",Global LCOEs of decentralized off-grid renewable energy systems,2022-12-24 17:50:59,"Jann Michael Weinand, Maximilian Hoffmann, Jan Göpfert, Tom Terlouw, Julian Schönau, Patrick Kuckertz, Russell McKenna, Leander Kotzur, Jochen Linßen, Detlef Stolten","http://arxiv.org/abs/2212.12742v2, http://arxiv.org/pdf/2212.12742v2",econ.GN
32840,gn,"This article estimates the impact of violence on emigration crossings from
Guatemala to Mexico as the final destination during 2009-2017. To identify causal
effects, we use as instruments the variation in deforestation in Guatemala, and
the seizing of cocaine in Colombia. We argue that criminal organizations
deforest land in Guatemala, fueling violence and leading to emigration,
particularly during exogenous supply shocks to cocaine. A one-point increase in
the homicide rate differential between Guatemalan municipalities and Mexico leads to 211 additional emigration crossings made by male adults. This rise in violence also leads to 20 extra emigration crossings made by children.",Violence in Guatemala pushes adults and children to seek work in Mexico,2022-12-24 21:21:33,Roxana Gutiérrez-Romero,"http://arxiv.org/abs/2212.12796v1, http://arxiv.org/pdf/2212.12796v1",econ.GN
32841,gn,"This paper evaluates the impact of the pandemic and enforcement at the US and
Mexican borders on the emigration of Guatemalans during 2017-2020. During this
period, the number of crossings from Guatemala fell by 10%, according to the
Survey of Migration to the Southern Border of Mexico. Yet, there was a rise of
nearly 30% in the number of emigration crossings of male adults travelling with
their children. This new trend was partly driven by the recent reduction in the
number of children deported from the US. For a one-point reduction in the
number of children deported from the US to Guatemalan municipalities, there was
an increase of nearly 14 in the number of crossings made by adult males leaving
from Guatemala for Mexico; and nearly 0.5 additional crossings made by male
adults travelling with their children. However, the surge of emigrants
travelling with their children was also driven by the acute economic shock that
Guatemala experienced during the pandemic. During this period, air pollution in
the analysed Guatemalan municipalities fell by 4%, night light per capita fell
by 15%, and homicide rates fell by 40%. Unlike in previous years, emigrants are
fleeing poverty rather than violence. Our findings suggest that a reduction in
violence alone will not be sufficient to reduce emigration flows from Central
America, but that economic recovery is needed.",New trends in South-South migration: The economic impact of COVID-19 and immigration enforcement,2022-12-24 21:27:30,"Roxana Gutiérrez-Romero, Nayeli Salgado","http://arxiv.org/abs/2212.12797v1, http://arxiv.org/pdf/2212.12797v1",econ.GN
32842,gn,"This study set out to determine what motivated SMEs in Semarang City to
undertake green supply chain management during the COVID-19 and New Normal
pandemics. The purposive sampling approach was used as the sampling methodology
in this investigation. There are 100 respondents in the research samples. The
AMOS 24.0 program's structural equation modelling (SEM) is used in this
research method. According to the study's findings, the Strategic Orientation
variable significantly and favourably affects the Green Supply Chain Management
variable expected to have a value of 0.945, and the Government Regulation
variable has a positive and strong influence on the variable Green Supply Chain
Management with an estimated value of 0.070, the Green Supply Chain Management
variable with an estimated value of has a positive and significant impact on
the environmental performance variable. 0.504, the Strategic Orientation
variable with an estimated value of has a positive and significant impact on
the environmental performance variable. 0.442, The Environmental Performance
variable is directly impacted positively and significantly by the Government
Regulation variable, with an estimated value of 0.041. This significant
positive influence is because SMEs in Semarang City have government
regulations, along with government support for facilities regarding efforts to
implement the concept of environmental concern, causing high environmental
performance caused by the optimal implementation of Green supply chain
management is built on a collaboration between the government and the supply
chain's participants.",Analysis of the Driving Factors of Implementing Green Supply Chain Management in SME in the City of Semarang,2022-12-25 14:17:17,"Nanang Adie Setyawan, Hadiahti Utami, Bayu Setyo Nugroho, Mellasanti Ayuwardani, Suharmanto","http://dx.doi.org/10.56472/25835238/IRJEMS-V1I2P107, http://arxiv.org/abs/2212.12891v1, http://arxiv.org/pdf/2212.12891v1",econ.GN
32843,gn,"The territory of La Mancha, its rural areas, and its landscapes suffer a kind
of atherosclerosis (""the silent killer"") because of the increase in artificial
surfaces, the fragmentation of the countryside by various infrastructures, the
abandonment of small and medium-sized farms and the loss of agricultural,
material, and intangible heritage. At the same time, agricultural
industrialization hides, behind a supposed productive efficiency, the
deterioration of the quantitative and qualitative ecological status of surface
and groundwater bodies, and causes air pollution, greenhouse gas emissions,
loss of soil fertility, drainage and plowing of wetlands, forgetfulness of the
ancestral environmental heritage, of the emergence of uses and customs of
collective self-government and reduction of the adaptive capacity of
traditional agroecosystems. This work aims, firstly, to shed light on the true costs of the main causes of environmental degradation in the territory of La Mancha, causes which also deteriorate relations between rural and urban areas and erode the territorial identity of the population of La Mancha. In
addition, drivers of change toward a more sustainable social, economic,
hydrological, environmental, and cultural production model are identified.",Hidden costs of La Mancha's production model and drivers of change,2022-12-27 23:53:48,"Máximo Florín, Rafael U. Gosálvez","http://arxiv.org/abs/2212.13611v1, http://arxiv.org/pdf/2212.13611v1",econ.GN
32844,gn,"There are significant differences in innovation performance between
countries. Additionally, the pharmaceutical sector is stronger in some
countries than in others. This suggests that the development of the
pharmaceutical industry can influence a country's innovation performance. Using
the Global Innovation Index and selected performance measures of the
pharmaceutical sector, this study examines how the pharmaceutical sector
influences the innovation performance of countries in the European context. The dataset of 27 European countries was analysed using simple and multiple linear regressions and Pearson correlation. Our findings show that only three indicators of the pharmaceutical industry, namely pharmaceutical Research and Development, pharmaceutical exports, and pharmaceutical employment, largely explain the innovation performance of a country. Pharmaceutical
Research and Development and exports have a significant positive impact on a
country's innovation performance, whereas employment in the pharmaceutical
industry has a slightly negative impact. Additionally, global innovation
performance has been found to positively influence life expectancy. We further
outline the implications and possible policy directions based on these
findings.",The Impact of the Pharmaceutical Industry on the Innovation Performance of European Countries,2022-12-26 09:45:11,"Szabolcs Nagy, Sergey U. Chernikov, Ekaterina Degtereva","http://dx.doi.org/10.15196/RS130105, http://arxiv.org/abs/2212.13839v1, http://arxiv.org/pdf/2212.13839v1",econ.GN
32845,gn,"The information age is also an era of escalating social problems. The digital
transformation of society and the economy is already underway in all countries,
although the progress in this transformation can vary widely. There are more
social innovation projects addressing global and local social problems in some
countries than in others. This suggests that different levels of digital
transformation might influence the social innovation potential. Using the
International Digital Economy and Society Index and the Social Innovation
Index, this study investigates how digital transformation of the economy and
society affects the capacity for social innovation. A dataset of 29 countries
was analysed using both simple and multiple linear regressions and Pearson's
correlation. Based on the research findings, it can be concluded that the
digital transformation of the economy and society has a significant positive
impact on the capacity for social innovation. It was also found that the
integration of digital technology plays a critical role in digital
transformation. Therefore, the progress in digital transformation is beneficial
to social innovation capacity. In line with the research findings, this study
outlines the implications and possible directions for policy.",The relationship between social innovation and digital economy and society,2022-12-26 11:43:04,"Szabolcs Nagy, Mariann Veresne Somosi","http://dx.doi.org/10.15196/RS120202, http://arxiv.org/abs/2212.13840v1, http://arxiv.org/pdf/2212.13840v1",econ.GN
32869,gn,"Newly-developed large language models (LLM) -- because of how they are
trained and designed -- are implicit computational models of humans -- a homo
silicus. These models can be used the same way economists use homo economicus:
they can be given endowments, information, preferences, and so on and then
their behavior can be explored in scenarios via simulation. I demonstrate this
approach using OpenAI's GPT3 with experiments derived from Charness and Rabin
(2002), Kahneman, Knetsch and Thaler (1986) and Samuelson and Zeckhauser
(1988). The findings are qualitatively similar to the original results, but it
is also trivially easy to try variations that offer fresh insights. Departing
from the traditional laboratory paradigm, I also create a hiring scenario where
an employer faces applicants that differ in experience and wage ask and then
analyze how a minimum wage affects realized wages and the extent of labor-labor
substitution.",Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?,2023-01-18 17:00:40,John J. Horton,"http://arxiv.org/abs/2301.07543v1, http://arxiv.org/pdf/2301.07543v1",econ.GN
32846,gn,"Considering the growing significance of Eurasian economic ties because of
South Korea's New Northern Policy and Russia's New Eastern Policy, this study
investigates the motivations and locational factors of South Korean foreign
direct investment (FDI) in three countries in the Commonwealth of Independent
States (CIS: Kazakhstan, Russia, and Uzbekistan) by employing panel analysis
(pooled ordinary least squares (OLS), fixed effects, random effects) using data
from 1993 to 2017. The results show the positive and significant coefficients
of GDP, resource endowments, and inflation. Unlike conventional South Korean
outward FDI, labour-seeking is not defined as a primary purpose. Exchange
rates, political rights, and civil liberties are identified as insignificant.
The authors conclude that South Korean FDI in Kazakhstan, Russia, and
Uzbekistan is associated with market-seeking (particularly in Kazakhstan and
Russia) and natural resource-seeking, especially the former. From a policy
perspective, our empirical evidence suggests that these countries' host governments could implement mechanisms to facilitate the movement of goods
across regions and countries to increase the attractiveness of small local
markets. The South Korean government could develop financial support and risk
sharing programmes to enhance natural resource-seeking investments and mutual
exchange programmes to overcome the red syndrome complex in South Korean
society.","Motivations and locational factors of FDI in CIS countries: Empirical evidence from South Korean FDI in Kazakhstan, Russia, and Uzbekistan",2022-12-26 11:53:26,"Han-Sol Lee, Sergey U. Chernikov, Szabolcs Nagy","http://dx.doi.org/10.15196/RS110404, http://arxiv.org/abs/2212.13841v1, http://arxiv.org/pdf/2212.13841v1",econ.GN
32847,gn,"Recently, China announced that its ""zero-covid"" policy would end, which will
bring serious challenges to the country's health system. Here we provide simple calculations that allow us to estimate the expected outcome in terms of fatalities, using the fact that this highly contagious disease will expose most of a highly vaccinated population to the virus. We use recent findings on how much vaccination reduces the risk of a severe outcome and arrive at an estimate of 1.1 million deaths, 60% of them males. In our model, 84% of deaths occur in individuals aged 55 years or older. In a scenario in which this protection is completely lost due to waning and the infection fatality rate of the prevalent strain reaches levels similar to those observed at the beginning of the epidemic, the death toll could reach 2.4 million, 93% among those aged 55 years or older.",What is expected for China's SARS-CoV-2 epidemic?,2022-12-28 20:19:24,"Carlos Hernandez-Suarez, Efren Murillo-Zamora","http://arxiv.org/abs/2212.13966v1, http://arxiv.org/pdf/2212.13966v1",econ.GN
32848,gn,"Did the Covid-19 pandemic have an impact on innovation? Past economic
disruptions, anecdotal evidence, and the previous literature suggest a decline
with substantial differences between industries. We leverage USPTO patent
application data to investigate and quantify the disturbance. We assess
differences by field of technology (at the CPC subclass level) as well as the
impact of direct and indirect relevance for the management of the pandemic.
Direct Covid-19 relevance is identified from a keyword search of the patent
application fulltexts; indirect Covid-19 relevance is derived from past CPC
subclass to subclass citation patterns. We find that direct Covid-19 relevance
is associated with a strong boost to the growth of the number of patent
applications in the first year of the pandemic at the same order of magnitude
(in percentage points) as the percentage of patents referencing Covid-19. We
find no effect for indirect Covid-19 relevance, indicating a focus on applied
research at the expense of more basic research. Fields of technology (CPC
mainsections) have an additional significant impact, with, e.g., mainsections A
(human necessities) and C (chemistry, metallurgy) having a strong performance.",Innovation in times of Covid-19,2022-12-29 05:44:01,"Torsten Heinrich, Jangho Yang","http://arxiv.org/abs/2212.14159v1, http://arxiv.org/pdf/2212.14159v1",econ.GN
32849,gn,"An agent may strategically employ a vague message to mislead an audience's
belief about the state of the world, but this may cause the agent to feel guilt
or negatively impact how the audience perceives the agent. Using a novel
experimental design that allows participants to be vague while at the same time
isolating the internal cost of lying from the social identity cost of appearing
dishonest, we explore the extent to which these two types of lying costs affect
communication. We find that participants exploit vagueness to be consistent
with the truth, while at the same time leveraging the imprecision to their own
benefit. More participants use vague messages in treatments where concern with
social identity is relevant. In addition, we find that social identity concerns
substantially affect the length and patterns of vague messages used across the
treatments.",Lying Aversion and Vague Communication: An Experimental Study,2023-01-01 11:48:57,"Keh-Kuan Sun, Stella Papadokonstantaki","http://dx.doi.org/10.1016/j.euroecorev.2023.104611, http://arxiv.org/abs/2301.00372v2, http://arxiv.org/pdf/2301.00372v2",econ.GN
32850,gn,"The aim of this research is to get a better understanding of the future of
large-scale 3D printing. By developing the market analysis, it will be clear
whether large-scale 3D printing is becoming more of a preferred way of printing
custom-made parts for production companies. Companies can then choose whether
to change their ways, for a more profitable less costly method, or stay on the
route they are on. By getting deep into this topic, a new world of technology
is then being discovered and familiarized. With a mix of theoretical and
practical relevance, a complete coverage could be made on large-scale 3D
printing. This paper could then cover all aspects of this topic, and the reader
could then make their own judgment if large-scale 3D printing would be the best
option.",Large-Scale 3D Printing -- Market Analysis,2022-12-28 20:01:20,Razan Abdelazim Idris Alzain,"http://arxiv.org/abs/2301.00680v1, http://arxiv.org/pdf/2301.00680v1",econ.GN
32870,gn,"There are many indicators of energy security. Few measure what really matters
-- affordable and reliable energy supply -- and the trade-offs between the two.
Reliability is physical, affordability is economic. Russia's latest invasion of
Ukraine highlights some of the problems with energy security, from long-term
contracts being broken to supposedly secure supplies being diverted to retired
power plants being recommissioned to spillovers to other markets. The
transition to carbon-free energy poses new challenges for energy security, from
a shift in dependence from some resources (coal, oil, gas) to others (rare
earths, wind, sunshine) to substantial redundancies in the energy capital stock
to undercapitalized energy companies, while regulatory uncertainty deters
investment. Renewables improve energy security in one dimension, but worsen it
in others, particularly long spells of little wind. Security problems with rare
earths and borrowed capital are less pronounced, as these are stocks rather than flows.",Navigating the energy trilemma during geopolitical and environmental crises,2023-01-18 20:32:46,Richard S. J. Tol,"http://arxiv.org/abs/2301.07671v1, http://arxiv.org/pdf/2301.07671v1",econ.GN
32851,gn,"Since supplanting Canada in 2014, Chinese investors have been the lead
foreign buyers of U.S. real estate, concentrating their purchases in urban
areas with higher Chinese populations like California. The reasons for
investment include prestige, freedom from capital confiscation, and safe,
diversified opportunities from abroad simply being more lucrative and available
than in their home country, where the market is eroding. Interestingly, since
2019, Chinese investors have sold a net 23.6 billion dollars of U.S. commercial
real estate, a stark contrast to past acquisitions between 2013 to 2018 where
they were net buyers of almost 52 billion dollars worth of properties. A
similar trend appears in the residential real estate segment too. In both 2017
and 2018, Chinese buyers purchased over 40,000 U.S. residential properties
which were halved in 2019 and steadily declined to only 6,700 in the past year.
This turnaround in Chinese investment can be attributed to a deteriorating
relationship between the U.S. and China during the Trump Presidency, financial
distress in China, and new Chinese government regulations prohibiting outbound
investments. Additionally, while Chinese investment is a small share of U.S.
real estate (~1.5% at its peak), it has outsized impacts on market valuations
of home prices in U.S. zip codes with higher populations of foreign-born
Chinese, increasing property prices and exacerbating the issue of housing
affordability in these areas. This paper investigates the rapid growth and
decline of Chinese investment in U.S. real estate and its effect on U.S. home
prices in certain demographics.",Historical Patterns and Recent Impacts of Chinese Investors in United States Real Estate,2022-12-28 10:57:38,Kevin Sun,"http://arxiv.org/abs/2301.00681v2, http://arxiv.org/pdf/2301.00681v2",econ.GN
32852,gn,"Despite recent evidence linking gender diversity in the firm with firm
innovativeness, we know little about the underlying mechanisms. Building on and
extending the Upper Echelon and entrepreneurship literature, we address two
lingering questions: why and how does gender diversity in firm ownership affect
firm innovativeness? We use survey data collected from 7,848 owner-managers of
SMEs across 29 emerging markets to test our hypotheses. Our findings
demonstrate that firms with higher gender diversity in ownership are more
likely to invest in R&D and rely upon a breadth of external capital, with such
differentials explaining sizeable proportions of the higher likelihood of
overall firm innovativeness, product and process, as well as organizational and
marketing innovations exhibited by their firms. Our findings are robust to
corrections for alternative measurement of focal variables, sensitivity to
outliers and subsamples, and endogenous self-selection concerns.",Gender Diversity in Ownership and Firm Innovativeness in Emerging Markets. The Mediating Roles of R&D Investments and External Capital,2023-01-03 17:47:11,"Vartuhi Tonoyan, Christopher Boudreaux","http://arxiv.org/abs/2301.01127v1, http://arxiv.org/pdf/2301.01127v1",econ.GN
32853,gn,"The rapid development of technology has drastically changed the way consumers
do their shopping. The volume of global online commerce has significantly been
increasing partly due to the recent COVID-19 crisis that has accelerated the
expansion of e-commerce. A growing number of webshops integrate Artificial
Intelligence (AI), state-of-the-art technology into their stores to improve
customer experience, satisfaction and loyalty. However, little research has
been done to verify the process of how consumers adopt and use AI-powered
webshops. Using the technology acceptance model (TAM) as a theoretical
background, this study addresses the question of trust and consumer acceptance
of Artificial Intelligence in online retail. An online survey in Hungary was
conducted to build a database of 439 respondents for this study. To analyse
data, structural equation modelling (SEM) was used. After the respecification
of the initial theoretical model, a nested model, which was also based on TAM,
was developed and tested. The widely used TAM was found to be a suitable
theoretical model for investigating consumer acceptance of the use of
Artificial Intelligence in online shopping. Trust was found to be one of the
key factors influencing consumer attitudes towards Artificial Intelligence.
Perceived usefulness as the other key factor in attitudes and behavioural
intention was found to be more important than the perceived ease of use. These
findings offer valuable implications for webshop owners to increase customer
acceptance",Consumer acceptance of the use of artificial intelligence in online shopping: evidence from Hungary,2022-12-26 12:03:28,"Szabolcs Nagy, Noemi Hajdu","http://dx.doi.org/10.24818/EA/2021/56/155, http://arxiv.org/abs/2301.01277v1, http://arxiv.org/pdf/2301.01277v1",econ.GN
32854,gn,"In order to succeed, universities are forced to respond to the new challenges
in the rapidly changing world. The recently emerging fourth-generation
universities should meet sustainability objectives to better serve their
students and their communities. It is essential for universities to measure
their sustainability performance to capitalise on their core strengths and to
overcome their weaknesses. In line with the stakeholder theory, the objective
of this study was to investigate students' perceptions of university
sustainability including their expectations about and satisfaction with the
efforts that universities make towards sustainability. This paper proposes a
new approach that combines the sustainable university scale, developed by the
authors, with the importance-performance analysis to identify key areas of
university sustainability. To collect data, an online survey was conducted in
Hungary in 2019. The sustainable university scale was found to be a reliable
construct to measure different aspects of university sustainability. Results of
the importance-performance analysis suggest that students consider Hungarian
universities unsustainable. Research findings indicate that Hungarian
universities perform poorly in sustainable purchasing and renewable energy use,
but their location and their efforts towards separate waste collection are
their major competitive advantages. The main domains of university
sustainability were also discussed. This study provides university
decision-makers and researchers with insightful results supporting the
transformation of traditional universities into sustainable, fourth-generation
higher education institutions.",Students Perceptions of Sustainable Universities in Hungary. An Importance-Performance Analysis,2022-12-26 12:14:01,"Szabolcs Nagy, Mariann Veresne Somosi","http://dx.doi.org/10.24818/EA/2020/54/496, http://arxiv.org/abs/2301.01278v1, http://arxiv.org/pdf/2301.01278v1",econ.GN
32855,gn,"Digitalization is making a significant impact on marketing. New marketing
approaches and tools are emerging which are not always clearly categorised.
This article seeks to investigate the relationship between one of the novel
marketing tools, content marketing, and the five elements of the traditional
marketing communication mix. Based on an extensive literature review, this
paper analyses the main differences and similarities between them. This article
aims to generate a debate on the status of content marketing. According to the
authors' opinion, content marketing can be considered as the sixth marketing
communication mix element. However, further research is needed to fill in the
existing knowledge gap.",The relationship between content marketing and the traditional marketing communication tools,2022-12-26 12:38:13,"Szabolcs Nagy, Gergo Hajdu","http://dx.doi.org/10.32976/stratfuz.2021.25, http://arxiv.org/abs/2301.01279v1, http://arxiv.org/pdf/2301.01279v1",econ.GN
32856,gn,"The last decades have seen a resurgence of armed conflict around the world,
renewing the need for durable peace agreements. In this paper, I evaluate the
economic effects of the peace agreement between the Colombian government and
the largest guerrilla group in the country, the FARC, putting an end to one of
the longest and most violent armed conflicts in recent history. Using a
difference-in-differences strategy comparing municipalities that historically
had FARC presence and those with presence of a similar, smaller guerrilla
group, the ELN, before and after the start of a unilateral ceasefire by the
FARC, I establish three sets of results. First, violence indicators
significantly and sizably decreased in historically FARC municipalities.
Second, despite this large reduction in violence, I find precisely-estimated
null effects across a variety of economic indicators, suggesting no effect of
the peace agreement on economic activity. Furthermore, I use a sharp
discontinuity in eligibility to the government's flagship business and job
creation program for conflict-affected areas to evaluate the policy's impact,
also finding precisely-estimated null effects on the same economic indicators.
Third, I present evidence that suggests the reason why historically FARC
municipalities could not reap the economic benefits from the reduction in
violence is a lack of state capacity, caused both by their low initial levels
of state capacity and the lack of state entry post-ceasefire. These results
indicate that peace agreements require complementary investments in state
capacity to yield an economic dividend.",Peace Dividends: The Economic Effects of Colombia's Peace Agreement,2023-01-05 02:00:58,Miguel Fajardo-Steinhäuser,"http://arxiv.org/abs/2301.01843v1, http://arxiv.org/pdf/2301.01843v1",econ.GN
32857,gn,"Cognitive endurance -- the ability to sustain performance on a
cognitively-demanding task over time -- is thought to be a crucial productivity
determinant. However, a lack of data on this variable has limited researchers'
ability to understand its role for success in college and the labor market.
This paper uses college-admission-exam records from 15 million Brazilian high
school students to measure cognitive endurance based on changes in performance
throughout the exam. By exploiting exogenous variation in the order of exam
questions, I show that students are 7.1 percentage points more likely to
correctly answer a given question when it appears at the beginning of the day
versus the end (relative to a sample mean of 34.3%). I develop a method to
decompose test scores into fatigue-adjusted ability and cognitive endurance. I
then merge these measures into a higher-education census and the earnings
records of the universe of Brazilian formal-sector workers to quantify the
association between endurance and long-run outcomes. I find that cognitive
endurance has a statistically and economically significant wage return.
Controlling for fatigue-adjusted ability and other student characteristics, a
one-standard-deviation higher endurance predicts a 5.4% wage increase. This
wage return to endurance is sizable, equivalent to a third of the wage return
to ability. I also document positive associations between endurance and college
attendance, college quality, college graduation, firm quality, and other
outcomes. Finally, I show how systematic differences in endurance across
students interact with the exam design to determine the sorting of students to
colleges. I discuss the implications of these findings for the use of cognitive
assessments for talent selection and investments in interventions that build
cognitive endurance.","Cognitive Endurance, Talent Selection, and the Labor Market Returns to Human Capital",2023-01-06 19:08:35,Germán Reyes,"http://arxiv.org/abs/2301.02575v1, http://arxiv.org/pdf/2301.02575v1",econ.GN
32858,gn,"Health wearables in combination with gamification enable interventions that
have the potential to increase physical activity -- a key determinant of
health. However, the extant literature does not provide conclusive evidence on
the benefits of gamification, and there are persistent concerns that
competition-based gamification approaches will only benefit those who are
highly active at the expense of those who are sedentary. We investigate the
effect of Fitbit leaderboards on the number of steps taken by the user. Using a
unique data set of Fitbit wearable users, some of whom participate in a
leaderboard, we find that leaderboards lead to a 370 (3.5%) step increase in
the users' daily physical activity. However, we find that the benefits of
leaderboards are highly heterogeneous. Surprisingly, we find that those who
were highly active prior to adoption are hurt by leaderboards and walk 630
fewer steps daily after adoption (a 5% relative decrease). In contrast, those
who were sedentary prior to adoption benefited substantially from leaderboards
and walked an additional 1,300 steps daily after adoption (a 15% relative
increase). We find that these effects emerge because sedentary individuals
benefit even when leaderboards are small and when they do not rank first on
them. In contrast, highly active individuals are harmed by smaller leaderboards
and only see benefit when they rank highly on large leaderboards. We posit that
this unexpected divergence in effects could be due to the underappreciated
potential of noncompetition dynamics (e.g., changes in expectations for
exercise) to benefit sedentary users, but harm more active ones.","Health Wearables, Gamification, and Healthful Activity",2023-01-07 05:26:50,"Muhammad Zia Hydari, Idris Adjerid, Aaron D. Striegel","http://dx.doi.org/10.1287/mnsc.2022.4581, http://arxiv.org/abs/2301.02767v1, http://arxiv.org/pdf/2301.02767v1",econ.GN
32859,gn,"The subject of this study is inflation, a problem that has plagued America
and the world over the last several decades. Despite a rich trove of scholarly
studies and a wide range of tools developed to deal with inflation, we are
nowhere near a solution to this problem. We are now in the middle of an inflation that threatens to become stagflation or even a full recession, and we have no idea how to prevent this outcome. This investigation explores the
real source of inflation. Tracing the problem of inflation to production, it
finds that inflation is not a phenomenon intrinsic to economy; rather, it is a
result of inefficiencies and waste in our economy. The investigation leads to a
conclusion that the solution of the problem of inflation is in achieving full
efficiency in production. Our economic production is a result of the evolution
that is propelled by the process of creation. In order to end economic
inefficiencies, we should model our economic practice on the process that
preceded production and has led to its emergence. In addition, the study will
outline ways in which our economic theory and practice must be changed to
achieve full efficiency of our production. Finally, the study provides a
critical overview of the current theories of inflation and remedies that are
proposed to deal with it.",Inflation and Value Creation: An Economic and Philosophic Investigation,2023-01-08 18:30:59,Gennady Shkliarevsky,"http://dx.doi.org/10.13140/RG.2.2.30512.23046, http://arxiv.org/abs/2301.03063v1, http://arxiv.org/pdf/2301.03063v1",econ.GN
32860,gn,"The COVID-19 vaccine reduces infection risk: even if one contracts COVID-19,
the probability of complications like death or hospitalization is lower.
However, vaccination may prompt people to decrease preventive behaviors, such
as staying indoors, handwashing, and wearing a mask. Thus, if vaccinated
people pursue only their self-interest, the vaccine's effect may be lower than
expected. However, if vaccinated people are pro-social (motivated toward
benefit for the whole society), they might maintain preventive behaviors to
reduce the spread of infection.","The COVID-19 vaccination, preventive behaviors and pro-social motivation: panel data analysis from Japan",2023-01-09 03:04:30,"Eiji Yamamura, Yoshiro Tsutsui, Fumio Ohtake","http://arxiv.org/abs/2301.03124v1, http://arxiv.org/pdf/2301.03124v1",econ.GN
32861,gn,"To end the COVID-19 pandemic, policymakers have relied on various public
health messages to boost vaccine take-up rates amongst people across wide
political spectra, backgrounds, and worldviews. However, much less is
understood about whether these messages affect different people in the same
way. One source of heterogeneity is the belief in a just world (BJW), which is
the belief that in general, good things happen to good people, and bad things
happen to bad people. This study investigates the effectiveness of two common
messages of the COVID-19 pandemic: vaccinate to protect yourself and vaccinate
to protect others in your community. We then examine whether BJW moderates the
effectiveness of these messages. We hypothesize that just-world believers react
negatively to the prosocial pro-vaccine message, as it charges individuals with
the responsibility to care for others around them. Using an unvaccinated sample
of UK residents before vaccines were made widely available (N=526), we
demonstrate that the individual-focused message significantly reduces overall vaccine skepticism, whereas the community-focused message does not, and that this effect is more robust for individuals with a low BJW. Our findings highlight
the importance of individual differences in the reception of public health
messages to reduce COVID-19 vaccine skepticism.",How effective are covid-19 vaccine health messages in reducing vaccine skepticism? Heterogeneity in messages effectiveness by just world beliefs,2023-01-09 15:47:30,"Juliane Wiese, Nattavudh Powdthavee","http://arxiv.org/abs/2301.03303v1, http://arxiv.org/pdf/2301.03303v1",econ.GN
32862,gn,"Carbon offsets from voluntarily avoided deforestation projects are generated
based on performance vis-\`a-vis ex-ante deforestation baselines. We examined
the impacts of 27 forest conservation projects in six countries on three
continents using synthetic control methods for causal inference. We compare the
project baselines with ex-post counterfactuals based on observed deforestation
in control sites. Our findings show that most projects have not reduced
deforestation. For projects that did, reductions were substantially lower than
claimed. Methodologies for constructing deforestation baselines for
carbon-offset interventions thus need urgent revisions in order to correctly
attribute reduced deforestation to the conservation interventions, thus
maintaining both incentives for forest conservation and the integrity of global
carbon accounting.",Action needed to make carbon offsets from tropical forest conservation work for climate change mitigation,2023-01-05 23:57:31,"Thales A. P. West, Sven Wunder, Erin O. Sills, Jan Börner, Sami W. Rifai, Alexandra N. Neidermeier, Andreas Kontoleon","http://arxiv.org/abs/2301.03354v1, http://arxiv.org/pdf/2301.03354v1",econ.GN
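The comparison of project baselines with ex-post counterfactuals rests on synthetic control methods. The snippet below is a rough sketch of one such comparison under simplifying assumptions (simulated deforestation series, non-negative least-squares weights normalized to sum to one); it is not the authors' implementation.

```python
# Rough synthetic-control sketch for one conservation project (simulated data;
# the NNLS-plus-normalization step is a simplification of standard SCM fitting).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
pre, post, n_controls = 8, 5, 20
controls = rng.uniform(0.5, 2.0, size=(pre + post, n_controls))   # deforestation in control sites
project = controls[:, :3].mean(axis=1) + rng.normal(0, 0.02, pre + post)

w, _ = nnls(controls[:pre], project[:pre])   # weights fitted on the pre-intervention period
w /= w.sum()                                 # normalize to a convex combination

synthetic = controls @ w
effect = project[pre:] - synthetic[pre:]     # observed minus counterfactual deforestation
print(effect.round(3))
```

A negative post-intervention "effect" would indicate less deforestation than the counterfactual; the paper's point is that project-specific ex-ante baselines typically overstate this gap.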
32863,gn,"The optimal age that a retiree claims social security retirement benefits is
in general a complicated function of many factors. However, if the
beneficiary's finances and health are not the constraining factors, it is
possible to formally derive mathematical models that maximize a well-defined
measure of his total benefits. A model that takes into account various factors
such as the increase in the benefits for delayed claims and the penalties for
early retirement, the advantages of investing some of the benefits in the
financial markets, and the effects of cost-of-living adjustments shows that not
waiting until age 70 is almost always the better option. The optimal claiming
age that maximizes the total benefits, however, depends on the expected market
returns and the rate of cost-of-living adjustments, with the higher market
rates in general pushing the optimal age lower. The models presented here can
be easily tailored to address the particular circumstances and goals of any
individual.",Optimal social security timing,2023-01-10 19:08:23,A. Y. Aydemir,"http://arxiv.org/abs/2301.04052v1, http://arxiv.org/pdf/2301.04052v1",econ.GN
32864,gn,"This empirical study investigates the impact of the Hofstede cultural
dimensions (HCD) on the Global Innovation Index (GII) scores in four different
years (2007, 2009, 2019 and 2021) to compare the impacts during the pre- and
post-crisis (financial and COVID-19) period by employing ordinary least square
(OLS) and robust least square (Robust) analyses. The purpose of this study is
to identify the impact of cultural factors on the innovation development for
different income groups during the pre- and post-crisis period. We found that,
in general, the same cultural properties were required for countries to enhance
innovation inputs and outputs regardless of pre- and post-crisis periods and
time variances. The significant cultural factors (driving forces) of the
innovation performance do not change over time. However, our empirical results
revealed that not the crisis itself but the income group (either developed or
developing) is the factor that influences the relationship between cultural
properties and innovation. It is also worth noting that cultural properties
have lost much of their impact on innovation, particularly in developing
countries, during recent periods. It is highly likely that in terms of
innovation, no cultural development or change can significantly impact the
innovation output of developing countries without the construction of the
appropriate systems.",The Impact of National Culture on Innovation A Comparative Analysis between Developed and Developing Nations during the Pre and Post Crisis Period 2007_2021,2022-12-26 13:20:29,"Han-Sol Lee, Sergey U. Chernikov, Szabolcs Nagy, Ekaterina A. Degtereva","http://dx.doi.org/10.3390/socsci11110522, http://arxiv.org/abs/2301.04607v1, http://arxiv.org/pdf/2301.04607v1",econ.GN
32865,gn,"The need for a more sustainable lifestyle is a key focus for several
countries. Using a questionnaire survey conducted in Hungary, this paper
examines how culture influences environmentally conscious behaviour. Having
investigated the direct impact of Hofstede's cultural dimensions on
pro-environmental behaviour, we found that the culture of a country hardly
affects actual environmentally conscious behaviour. The findings indicate that
only individualism and power distance have a significant but weak negative
impact on pro-environmental behaviour. Based on the findings, we can state that
a positive change in culture is a necessary but not sufficient condition for
making a country greener.",The Effects of Hofstede's Cultural Dimensions on Pro-Environmental Behaviour: How Culture Influences Environmentally Conscious Behaviour,2022-12-26 12:53:29,"Szabolcs Nagy, Csilla Konyha Molnarne","http://dx.doi.org/10.18096/TMP.2018.01.03, http://arxiv.org/abs/2301.04609v1, http://arxiv.org/pdf/2301.04609v1",econ.GN
32866,gn,"With the increasing pervasiveness of ICTs in the fabric of economic
activities, the corporate digital divide has emerged as a new crucial topic to
evaluate the IT competencies and the digital gap between firms and territories.
Given the scarcity of available granular data to measure the phenomenon, most
studies have used survey data. To bridge the empirical gap, we scrape the
website homepage of 182 705 Italian firms, extracting ten features related to
their digital footprint characteristics to develop a new corporate digital
assessment index. Our results highlight a significant digital divide across
dimensions, sectors and geographical locations of Italian firms, opening up new
perspectives on monitoring and near-real-time data-driven analysis.",Measuring Corporate Digital Divide with web scraping: Evidence from Italy,2023-01-12 13:41:45,"Mazzoni Leonardo, Pinelli Fabio, Riccaboni Massimo","http://arxiv.org/abs/2301.04925v1, http://arxiv.org/pdf/2301.04925v1",econ.GN
32871,gn,"In the last decade, the study of labour dynamics has led to the introduction
of labour flow networks (LFNs) as a way to conceptualise job-to-job
transitions, and to the development of mathematical models to explore the
dynamics of these networked flows. To date, LFN models have relied upon an
assumption of static network structure. However, as recent events (increasing
automation in the workplace, the COVID-19 pandemic, a surge in the demand for
programming skills, etc.) have shown, we are experiencing drastic shifts to the
job landscape that are altering the ways individuals navigate the labour
market. Here we develop a novel model in which LFNs emerge from agent-level
behaviour, removing the necessity of assuming that future job-to-job flows will
be along the same paths where they have been historically observed. This model,
informed by microdata for the United Kingdom, generates empirical LFNs with a
high level of accuracy. We use the model to explore how shocks impacting the
underlying distributions of jobs and wages alter the topology of the LFN. This
framework represents a crucial step towards the development of models that can
answer questions about the future of work in an ever-changing world.",Endogenous Labour Flow Networks,2023-01-19 13:15:01,"Kathyrn R. Fair, Omar A. Guerrero","http://arxiv.org/abs/2301.07979v2, http://arxiv.org/pdf/2301.07979v2",econ.GN
32872,gn,"There is a strong association between the quality of the writing in a resume
for new labor market entrants and whether those entrants are ultimately hired.
We show that this relationship is, at least partially, causal: a field
experiment in an online labor market was conducted with nearly half a million
jobseekers in which a treated group received algorithmic writing assistance.
Treated jobseekers experienced an 8% increase in the probability of getting
hired. Contrary to concerns that the assistance is taking away a valuable
signal, we find no evidence that employers were less satisfied. We present a
model in which better writing is not a signal of ability but helps employers
ascertain ability, which rationalizes our findings.",Algorithmic Writing Assistance on Jobseekers' Resumes Increases Hires,2023-01-19 17:02:53,"Emma van Inwegen, Zanele Munyikwa, John J. Horton","http://arxiv.org/abs/2301.08083v1, http://arxiv.org/pdf/2301.08083v1",econ.GN
32873,gn,"In this work, we propose a new lecture of input-output model reconciliation
Markov chain and the dominance theory, in the field of interindustrial poles
interactions. A deeper lecture of Leontieff table in term of Markov chain is
given, exploiting spectral properties and time to absorption to characterize
production processes, then the dualities local-global/dominance- Sensitivity
analysis are established, allowing a better understanding of economic poles
arrangement. An application to the Moroccan economy is given.",Input-Output Analysis: New Results From Markov Chain Theory,2023-01-13 21:49:50,"Nizar Riane, Claire David","http://arxiv.org/abs/2301.08136v2, http://arxiv.org/pdf/2301.08136v2",econ.GN
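The Markov-chain reading of a Leontief table can be illustrated with a few lines of linear algebra. The 3-sector coefficient matrix below is purely illustrative; the point is only that the fundamental matrix of the absorbing chain coincides with the Leontief inverse, and its row sums give expected times to absorption.

```python
# Minimal numerical sketch of the absorbing-Markov-chain reading of an
# input-output table (the 3-sector coefficient matrix is purely illustrative).
import numpy as np

A = np.array([[0.2, 0.3, 0.1],    # technical coefficients: intermediate use per unit of output
              [0.1, 0.1, 0.4],
              [0.3, 0.2, 0.2]])

I = np.eye(3)
leontief_inverse = np.linalg.inv(I - A)       # total (direct + indirect) requirements

# Reading A as the transient block Q of an absorbing chain (absorption = leaving the
# interindustry system), the fundamental matrix N = (I - Q)^{-1} equals the Leontief
# inverse, and N @ 1 gives the expected number of interindustry steps before absorption.
expected_steps = leontief_inverse @ np.ones(3)
print(leontief_inverse.round(3))
print(expected_steps.round(3))
```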
32874,gn,"Industries struggle to build robust environmental transition plans as they
lack the tools to quantify their ecological responsibility over their value
chain. Companies mostly turn to sole greenhouse gas (GHG) emissions reporting
or time-intensive Life Cycle Assessment (LCA), while Environmentally-Extended
Input-Output (EEIO) analysis is more efficient on a wider scale. We illustrate the usefulness of EEIO analysis for sketching transition plans with the example of Canada's road industry (estimation of national environmental contributions, the most important environmental issues, the main potential transition levers of the sector, and metrics prioritization for green purchase plans). To do so, openIO-Canada,
a new Canadian EEIO database, coupled with IMPACT World plus v1.30-1.48
characterization method, provides a multicriteria environmental diagnosis of
Canada's economy. The road industry generates a limited impact (0.5-1.8
percent) but must reduce the environmental burden from material purchases -
mainly concrete and asphalt products - through green purchase plans and
eco-design and invest in new machinery powered with cleaner energies such as
low-carbon electricity or bioenergies. EEIO analysis also captures impacts
often neglected in process-based pavement LCAs - amortization of capital goods,
staff consumptions, and services - and shows some substantial impacts
advocating for enlarging system boundaries in standard LCA. Yet, pavement
construction and maintenance only explain 5 percent of the life cycle carbon
footprint of Canada's road network, against 95 percent for the roads' usage.
Thereby, a carbon-neutral pathway for the road industry must first focus on
reducing vehicle consumption and wear through better design and maintenance of
roads (...)",Environmentally-Extended Input-Output analyses efficiently sketch large-scale environmental transition plans -- illustration by Canada's road industry,2023-01-19 23:24:29,"Anne de Bortoli, Maxime Agez","http://dx.doi.org/10.1016/j.jclepro.2023.136039, http://arxiv.org/abs/2301.08302v1, http://arxiv.org/pdf/2301.08302v1",econ.GN
32875,gn,"Nudging is a burgeoning topic in science and in policy, but evidence on the
effectiveness of nudges among differentially-incentivized groups is lacking.
This paper exploits regional variations in the roll-out of the Covid-19 vaccine
in Sweden to examine the effect of a nudge on groups whose intrinsic incentives
are different: 16-17-year-olds, for whom Covid-19 is not dangerous, and
50-59-year-olds, who face a substantial risk of death or severe disease. We
find a significantly stronger response in the younger group, consistent with
the theory that nudges are more effective for choices that are not meaningful
to the individual.",When do Default Nudges Work?,2023-01-20 23:45:00,"Carl Bonander, Mats Ekman, Niklas Jakobsson","http://dx.doi.org/10.1093/ooec/odad094, http://arxiv.org/abs/2301.08797v4, http://arxiv.org/pdf/2301.08797v4",econ.GN
32876,gn,"Using deep learning techniques, we introduce a novel measure for production
process heterogeneity across industries. For each pair of industries during
1990-2021, we estimate the functional distance between two industries'
production processes via a deep neural network. Our estimates uncover the
underlying factors and weights reflected in the multi-stage production decision
tree in each industry. We find that the greater the functional distance between
two industries' production processes, the lower are the number of M&As, deal
completion rates, announcement returns, and post-M&A survival likelihood. Our
results highlight the importance of structural heterogeneity in production
technology to firms' business integration decisions.",Learning Production Process Heterogeneity Across Industries: Implications of Deep Learning for Corporate M&A Decisions,2023-01-21 04:43:53,"Jongsub Lee, Hayong Yun","http://arxiv.org/abs/2301.08847v1, http://arxiv.org/pdf/2301.08847v1",econ.GN
32877,gn,"Gender segmentation in labor markets shapes the local effects of
international trade. We develop a theory that combines exports with
gender-segmented labor markets and show that, in this framework, foreign demand
shocks may either increase or decrease the female-to-male employment ratio. If
a foreign demand shock happens in a female-intensive (male-intensive) sector,
the model predicts that the female-to-male employment ratio should increase
(decrease). We then use plausibly exogenous variation in the exposure of
Tunisian local labor markets to foreign demand shocks and show that the
empirical results are consistent with the theoretical prediction. In Tunisia, a
developing country with a high degree of gender segmentation in labor markets,
foreign-demand shocks have been relatively larger in male-intensive sectors.
This induced a decrease in the female-to-male employment ratio, with households
likely substituting female for male labor supply.",Gender-Segmented Labor Markets and Foreign Demand Shocks,2023-01-23 06:31:13,"Carlos Góes, Gladys Lopez-Acevedo, Raymond Robertson","http://arxiv.org/abs/2301.09252v3, http://arxiv.org/pdf/2301.09252v3",econ.GN
32878,gn,"This paper is motivated by computational challenges arising in multi-period
valuation in insurance. Aggregate insurance liability cashflows typically
correspond to stochastic payments several years into the future. However,
insurance regulation requires that capital requirements are computed for a
one-year horizon, by considering cashflows during the year and end-of-year
liability values. This implies that liability values must be computed
recursively, backwards in time, starting from the year of the most distant
liability payments. Solving such backward recursions with paper and pen is
rarely possible, and numerical solutions give rise to major computational
challenges.
  The aim of this paper is to provide explicit and easily computable
expressions for multi-period valuations that appear as limit objects for a
sequence of multi-period models that converge in terms of conditional weak
convergence. Such convergence appears naturally if we consider large insurance
portfolios such that the liability cashflows, appropriately centered and
scaled, converge weakly as the size of the portfolio tends to infinity.",Approximations of multi-period liability values by simple formulas,2023-01-23 17:19:52,"Nils Engler, Filip Lindskog","http://arxiv.org/abs/2301.09450v1, http://arxiv.org/pdf/2301.09450v1",econ.GN
32879,gn,"Maternal sugar consumption in utero may have a variety of effects on
offspring. We exploit the abolishment of the rationing of sweet confectionery
in the UK on April 24, 1949, and its subsequent reintroduction some months
later, in an era of otherwise uninterrupted rationing of confectionery
(1942-1953), sugar (1940-1953) and many other foods, and we consider effects on
late-life cardiovascular disease, BMI, height, type-2 diabetes and the intake
of sugar, fat and carbohydrates, as well as cognitive outcomes and birth
weight. We use individual-level data from the UK Biobank for cohorts born
between April 1947 and May 1952. We also explore whether one's genetic
""predisposition"" to the outcome can moderate the effects of prenatal sugar
exposure. We find that prenatal exposure to derationing increases education and
reduces BMI and sugar consumption at higher ages, in line with the
""developmental origins"" explanatory framework, and that the sugar effects are
stronger for those who are genetically ""predisposed"" to sugar consumption.",Prenatal Sugar Consumption and Late-Life Human Capital and Health: Analyses Based on Postwar Rationing and Polygenic Scores,2023-01-24 16:30:40,"Gerard J. van den Berg, Stephanie von Hinke, R. Adele H. Wang","http://arxiv.org/abs/2301.09982v1, http://arxiv.org/pdf/2301.09982v1",econ.GN
32880,gn,"Does the national innovation city and smart city pilot policy, as an
important institutional design to promote the transformation of old and new
dynamics, have an important impact on the digital economy? What are the
intrinsic mechanisms? Based on the theoretical analysis of whether smart city
and national innovation city policies promote urban digital economy, this paper
constructs a multi-temporal double difference model based on a quasi-natural
experiment with urban dual pilot policies and systematically investigates the
impact of dual pilot policies on the development of digital economy. It is
found that both smart cities and national innovation cities can promote the
development of digital economy, while there is a synergistic effect between the
policies. The mechanism test shows that the smart city construction and
national innovation city construction mainly affect the digital economy through
talent agglomeration effect, technology agglomeration effect and financial
agglomeration effect.",Research on the Impact of Innovative City and Smart City Construction on Digital Economy: Evidence from China,2023-01-22 10:09:18,Zhanpeng Huang,"http://arxiv.org/abs/2301.10179v1, http://arxiv.org/pdf/2301.10179v1",econ.GN
32881,gn,"Until the mid 1960s, the UK experienced regular measles epidemics, with the
vast majority of children being infected in early childhood. The introduction
of a measles vaccine substantially reduced its incidence. The first part of
this paper examines the long-term human capital and health effects of this
change in the early childhood disease environment. The second part investigates
interactions between the vaccination campaign and individuals' endowments as
captured using molecular genetic data, shedding light on complementarities
between public health investments and individual endowments. We use two
identification approaches, based on the nationwide introduction of the vaccine
in 1968 and local vaccination trials in 1966. Our results show that exposure to
the vaccination in early childhood positively affects adult height, but only
among those with high genetic endowments for height. We find no effects on
years of education; neither a direct effect, nor evidence of complementarities.",Early life exposure to measles and later-life outcomes: Evidence from the introduction of a vaccine,2023-01-25 15:58:55,"Gerard J. van den Berg, Stephanie von Hinke, Nicolai Vitt","http://arxiv.org/abs/2301.10558v1, http://arxiv.org/pdf/2301.10558v1",econ.GN
32882,gn,"The rising adoption of industrial robots is radically changing the role of
workers in the production process. Robots can be used for some of the more
physically demanding and dangerous production work, thus reducing the
possibility of worker injury. On the other hand, robots may replace workers,
potentially increasing worker anxiety about their job safety. In this paper, we
investigate how individual physical health and mental health outcomes vary with
local exposure to robots for manufacturing workers in China. We find a link
between robot exposure and better physical health of workers, particularly for
younger workers and those with less education. However, we also find that robot
exposure is associated with more mental stress for Chinese workers,
particularly older and less educated workers.",Pain or Anxiety? The Health Consequences of Rising Robot Adoption in China,2023-01-25 19:27:02,"Qiren Liu, Sen Luo, Robert Seamans","http://arxiv.org/abs/2301.10675v1, http://arxiv.org/pdf/2301.10675v1",econ.GN
32883,gn,"We examine the effect of auditing on dividends in small private firms. We
hypothesize that auditing can constrain dividends by way of promoting
accounting conservatism. We use register data on private Norwegian firms and
random variation induced by the introduction of a policy allowing small private
firms to forgo the use of an auditor to estimate the effect of auditing on
dividend payout. Identification is obtained by a regression discontinuity
around the arbitrary thresholds for the policy. Propensity score matching is
used to create a balanced synthetic control. We consistently find that forgoing
auditing led to a significant increase in dividends in small private firms.",Adults in the room? The auditor and dividends in small firms: Evidence from a natural experiment,2023-01-26 16:17:23,"Hakim Lyngstadås, Johannes Mauritzen","http://arxiv.org/abs/2301.11079v1, http://arxiv.org/pdf/2301.11079v1",econ.GN
32884,gn,"The money supply is endogenous if the monetary policy strategy is the so
called Inflation and Interest Rate Targeting, IRT. With that and perfect
credibility, the theory of the price level and inflation only needs the Fisher
equation, but it interprets causality in a new sense: if the monetary authority
raises the policy rate, it will raise the inflation target, and vice versa,
given the natural interest rate. If credibility is not perfect or if
expectations are not completely rational, the theory needs something more. Here
I present a model corresponding to this theory that includes both the steady
state case and the recovery dynamics after a supply shock, with and without
policy reactions to such a shock. But, under the finite horizon assumption for
IRT, at some future point in time the money supply must become exogenous. This
creates the incentive for agents to examine, as of today, statistics on
monetary aggregates and form their forecasts of money supply growth and
inflation rates. Additionally, inflation models of the small open economy allow
us to deduce that the IRT in this case is much more powerful than otherwise,
and for the same degree of credibility. But things are not necessarily easier
for the monetary authority: it must monitor not only internal indicators, but
also external inflation and its determinants, and it must, in certain
circumstances, make more intense adjustments to the interest rate.",Inflation targeting strategy and its credibility,2023-01-26 19:33:08,Carlos Esteban Posada,"http://arxiv.org/abs/2301.11207v1, http://arxiv.org/pdf/2301.11207v1",econ.GN
32885,gn,"Education plays a critical role on promoting preventive behaviours against
the spread of pandemics. In Japan, hand-washing education in primary schools
was positively correlated with preventive behaviours against COVID-19
transmission for adults in 2020 during the early stages of COVID-19 [1]. The
following year, the Tokyo Olympics were held in Japan, and a state of emergency
was declared several times. Public perceptions of and risks associated with the
pandemic changed drastically with the emergence of COVID-19 vaccines. We
re-examine whether the effect of hand-washing education on preventive behaviours
persisted by covering a longer period of the COVID-19 pandemic than previous
studies. 26 surveys were conducted nearly once a month for 30 months from March
2020 (the early stage of COVID-19) to September 2022 in Japan. By corresponding
with the same individuals across surveys, we comprehensively gathered data on
preventive behaviours during this period. In addition, we asked about
hand-washing education they had received in their primary school. We used the
data to investigate how and the degree to which school education is associated
with pandemic-mitigating preventive behaviours. We found that hand-washing
education in primary school is positively associated with behaviours such as
hand washing and mask wearing as a COVID-19 preventive measure, but not related
to staying at home. We observed a statistically significant difference in hand
washing between adults who received childhood hand-washing education and those
who did not. This difference persisted throughout the study period. In
comparison, the difference in mask wearing between the two groups was smaller,
but still statistically significant. Furthermore, there was no difference in
staying at home between them.",The effect of primary school education on preventive behaviours during COVID-19 in Japan,2023-01-27 03:30:58,"Eiji Yamamura, Yoshiro Tsutsui, Fumio Ohtake","http://arxiv.org/abs/2301.11475v1, http://arxiv.org/pdf/2301.11475v1",econ.GN
32886,gn,"Extreme heat negatively impacts cognition, learning, and task performance.
With increasing global temperatures, workers may therefore be at increased risk
of work-related injuries and illness. This study estimates the effects of
temperature on worker health using records spanning 1985-2020 from an
Australian mandatory insurance scheme. High temperatures are found to cause
significantly more claims, particularly among manual workers in outdoor-based
industries. These adverse effects have not diminished across time, with the
largest effect observed for the 2015-2020 period, indicating increasing
vulnerability to heat. Within occupations, the workers most adversely affected
by heat are female, older-aged and higher-earning. Finally, results from
firm-level panel analyses show that the percentage increase in claims on hot
days is largest at ""safer"" firms.",Heat and Worker Health,2023-01-27 09:40:29,"Andrew Ireland, David Johnston, Rachel Knott","http://dx.doi.org/10.1016/j.jhealeco.2023.102800, http://arxiv.org/abs/2301.11554v2, http://arxiv.org/pdf/2301.11554v2",econ.GN
32887,gn,"We analyze the causal impact of positive and negative feedback on
professional performance. We exploit a unique data source in which
quasi-random, naturally occurring variations within subjective ratings serve as
positive and negative feedback. The analysis shows that receiving positive
feedback has a favorable impact on subsequent performance, while negative
feedback does not have an effect. These main results are found in two different
environments and for distinct cultural backgrounds, experiences, and gender of
the feedback recipients. The findings imply that managers should focus on
giving positive motivational feedback.",'Good job!' The impact of positive and negative feedback on performance,2023-01-27 18:29:39,"Daniel Goller, Maximilian Späth","http://arxiv.org/abs/2301.11776v1, http://arxiv.org/pdf/2301.11776v1",econ.GN
32888,gn,"From the perspective of social choice theory, ranked-choice voting (RCV) is
known to have many flaws. RCV can fail to elect a Condorcet winner and is
susceptible to monotonicity paradoxes and the spoiler effect, for example. We
use a database of 182 American ranked-choice elections for political office
from the years 2004-2022 to investigate empirically how frequently RCV's
deficiencies manifest in practice. Our general finding is that RCV's weaknesses
are rarely observed in real-world elections, with the exception that ballot
exhaustion frequently causes majoritarian failures.","An Examination of Ranked Choice Voting in the United States, 2004-2022",2023-01-28 06:17:08,"Adam Graham-Squire, David McCune","http://arxiv.org/abs/2301.12075v2, http://arxiv.org/pdf/2301.12075v2",econ.GN
32889,gn,"This paper focuses on specific investments under negotiated transfer pricing.
Reasons for transfer pricing studies are primarily to find conditions that
maximize the firm's overall profit, especially in cases with bilateral trading
problems with specific investments. However, the transfer pricing problem has
been developed in a context where managers are fully individually rational
utility maximizers. The underlying assumptions are rather heroic and,
particularly with respect to how managers process information under
uncertainty, do not match actual human decision-making behavior. Therefore, this paper
relaxes key assumptions and studies whether cognitively bounded agents achieve
the same results as fully rational utility maximizers and, in particular,
whether the recommendations on managerial-compensation arrangements and
bargaining infrastructures are designed to maximize headquarters' profit in
such a setting. Based on an agent-based simulation with fuzzy Q-learning
agents, it is shown that, in the case of symmetric marginal cost parameters, myopic
fuzzy Q-learning agents invest only as much as in the classic hold-up problem,
while non-myopic fuzzy Q-learning agents invest optimally. However, in
scenarios with non-symmetric marginal cost parameters, a deviation from the
previously recommended surplus sharing rules can lead to higher investment
decisions and, thus, to an increase in the firm's overall profit.",The impact of surplus sharing on the outcomes of specific investments under negotiated transfer pricing: An agent-based simulation with fuzzy Q-learning agents,2023-01-28 20:26:58,Christian Mitsch,"http://arxiv.org/abs/2301.12255v1, http://arxiv.org/pdf/2301.12255v1",econ.GN
32890,gn,"This paper discusses the broad challenges shared by e-commerce and the
process industries operating global supply chains. Specifically, we discuss how
process industries and e-commerce differ in many aspects but have similar
challenges ahead of them in order to remain competitive, keep up with the
ever-increasing requirements of customers and stakeholders, and gain
efficiency. While both industries have been early adopters of decision support
tools based on machine intelligence, both share unresolved challenges related
to scalability, integration of decision-making over different time horizons
(e.g. strategic, tactical and execution-level decisions) and across internal
business units, and orchestration of human and computer-based decision-makers.
We discuss future trends and research opportunities in the area of supply
chains, and suggest that multi-agent systems methods, supported by a rigorous
treatment of human decision-making in combination with machine intelligence,
are a strong contender to address these critical challenges.","Future of Supply Chain: Challenges, Trends, and Prospects",2023-01-30 21:42:19,"Cristiana L. Lara, John Wassick","http://arxiv.org/abs/2301.13174v1, http://arxiv.org/pdf/2301.13174v1",econ.GN
32917,gn,"The south central ecoregion was a mosiac ecoregion of forest and grassland
continnum which is transiting towards closed canopy forests and losing
ecosystem benefits. We studied role of active management, its economic benefit,
and landonwers atttitdue and behavior towards restoring ecosystem services in
this region. We further studed how the economic benefit varies in this region
with the change in rainfall.",Economics and human dimension of active managment of forest grassland ecotone in south-central USA under changing climate,2023-02-23 01:18:27,Bijesh Mishra,"http://arxiv.org/abs/2302.11675v1, http://arxiv.org/pdf/2302.11675v1",econ.GN
32891,gn,"I use the unanticipated and large additional tariffs the US imposed on
European Union products due to the Airbus-Boeing conflict to analyze how
exporters reacted to a change in trade policy. Using firm-level data for Spain
and applying a difference-in-differences methodology, I show that the export
revenue in the US of the firms affected by the tariff hike did not
significantly decrease relative to that of other Spanish exporters to the
US. I show that Spanish exporters were able to neutralize the increase in
tariffs by substituting Spanish products with products originating in countries
unaffected by tariffs and shifting to varieties not affected by tariffs. My
results show that tariff avoidance is another margin exporters can use to
counteract the effects of a tariff hike.",How exporters neutralized an increase in tariffs,2023-02-01 16:02:59,Asier Minondo,"http://arxiv.org/abs/2302.00417v1, http://arxiv.org/pdf/2302.00417v1",econ.GN
32892,gn,"Extreme events, exacerbated by climate change, pose significant risks to the
energy system and its consumers. However, there are natural limits to the degree
of protection that can be delivered from a centralised market architecture.
Distributed energy resources provide resilience to the energy system, but their
value remains inadequately recognized by regulatory frameworks. We propose an
insurance framework to align residual outage risk exposure with locational
incentives for distributed investment. We demonstrate that leveraging this
framework in large-scale electricity systems could improve consumer welfare
outcomes in the face of growing risks from extreme events via investment in
distributed energy.",An Insurance Paradigm for Improving Power System Resilience via Distributed Investment,2023-02-03 01:57:59,"Farhad Billimoria, Filiberto Fele, Iacopo Savelli, Thomas Morstyn, Malcolm McCulloch","http://arxiv.org/abs/2302.01456v1, http://arxiv.org/pdf/2302.01456v1",econ.GN
32893,gn,"The use of energy by cryptocurrency mining comes not just with an
environmental cost but also an economic one through increases in electricity
prices for other consumers. Here we investigate the increase in the wholesale
price on the Texas ERCOT grid due to energy consumption from cryptocurrency
mining. For every GW of cryptocurrency mining load on the grid, we find that
the wholesale price of electricity on the ERCOT grid increases by 2 percent.
Given that today's cryptocurrency mining load on the ERCOT grid is around 1 GW,
this suggests that wholesale prices have already risen by this amount. There are 27 GW of mining
load waiting to be hooked up to the ERCOT grid. If cryptocurrency mining
increases rapidly, the price of energy in Texas could skyrocket.",A quantification of how much crypto-miners are driving up the wholesale cost of energy in Texas,2023-02-04 22:12:09,"Jangho Lee, Lily Wu, Andrew E. Dessler","http://arxiv.org/abs/2302.02221v1, http://arxiv.org/pdf/2302.02221v1",econ.GN
32894,gn,"This paper assesses whether the higher capital maintenance drives up banks
cost of equity. We investigate the hypothesis using fixed effect panel
estimation with the data from a sample of 28 publicly listed commercial banks
over the 2013 to 2019 periods. We find a significant negative relationship
between banks capital and cost of equity. Empirically our baseline estimates
entail that a 10 percent increase in capital would reduce the cost of equity by
4.39 percent.",Does higher capital maintenance drive up banks cost of equity? Evidence from Bangladesh,2023-01-30 22:33:26,"Md Shah Naoaj, Mir Md Moyazzem Hosen","http://arxiv.org/abs/2302.02762v1, http://arxiv.org/pdf/2302.02762v1",econ.GN
32895,gn,"We propose a novel measure to investigate firms' product specialisation:
product coreness, which captures the centrality of exported products within the
firm's export basket. We study product coreness using firm-product level data
between 2018 and 2020 for Colombia, Ecuador, and Peru. Three main findings
emerge from our analysis. First, the composition of firms' export baskets
changes relatively little from one year to the other, and products far from the
firm's core competencies, with low coreness, are more likely to be dropped.
Second, higher coreness is associated with larger export flows at the firm
level. Third, such firm-level patterns also have implications at the aggregate
level: products that are, on average, exported with higher coreness have higher
export flows at the country level, which holds across all levels of product
complexity. Therefore, the paper shows that how closely a product fits within a
firm's capabilities is important for economic performance at both the firm and
country level. We explore these issues within an econometric framework, finding
robust evidence both across our three countries and for each country
separately.",Being at the core: firm product specialisation,2023-02-06 16:26:14,"Filippo Bontadini, Mercedes Campi, Marco Dueñas","http://arxiv.org/abs/2302.02767v2, http://arxiv.org/pdf/2302.02767v2",econ.GN
32896,gn,"The availability of data on economic uncertainty sparked a lot of interest in
models that can timely quantify episodes of international spillovers of
uncertainty. This challenging task involves trading off estimation accuracy for
more timely quantification. This paper develops a local vector autoregressive
model (VAR) that allows for adaptive estimation of the time-varying
multivariate dependency. By local, we mean that for each point in time, we
simultaneously estimate the longest interval on which the model is constant
with the model parameters. The simulation study shows that the model can handle
one or multiple sudden breaks as well as a smooth break in the data. The
empirical application is done using monthly Economic Policy Uncertainty data.
The local model highlights that the empirical data primarily consists of long
homogeneous episodes, interrupted by a small number of heterogeneous ones, that
correspond to crises. Based on this observation, we create a crisis index,
which reflects the homogeneity of the sample over time. Furthermore, the local
model shows superiority over the rolling window estimation.",Adaptive local VAR for dynamic economic policy uncertainty spillover,2023-02-06 17:34:03,"Niels Gillmann, Ostap Okhrin","http://arxiv.org/abs/2302.02808v1, http://arxiv.org/pdf/2302.02808v1",econ.GN
32897,gn,"This paper introduces a class of investment project's profitability metrics
that includes the net present value criterion (which labels a project as weakly
profitable if its NPV is nonnegative), internal rate of return (IRR),
profitability index (PI), payback period (PP), and discounted payback period
(DPP) as special cases. We develop an axiomatic characterization of this class,
as well as of the mentioned conventional metrics within the class. The proposed
approach offers several key contributions. First, it provides a unified
interpretation of profitability metrics as indicators of a project's financial
stability across various economic scenarios. Second, it reveals that, except
for an NPV criterion, a profitability metric is inherently undefined for some
projects. In particular, this implies that any extension of IRR to the space of
all projects does not meet a set of reasonable conditions. A similar conclusion
is valid for the other mentioned conventional metrics. For each of these
metrics, we offer a characterization of the pairs of comparable projects and
identify the largest set of projects to which the metric can be unequivocally
extended. Third, our study identifies conditions under which the application of
one metric is superior to others, helping to guide decision-makers in selecting
the most appropriate metric for specific investment contexts.","NPV, IRR, PI, PP, and DPP: a unified view",2023-02-06 18:43:22,Mikhail V. Sokolov,"http://arxiv.org/abs/2302.02875v6, http://arxiv.org/pdf/2302.02875v6",econ.GN
32898,gn,"The problem of determining the value of statistical life in Ukraine in order
to find ways to improve it is an urgent one now. The current level of value is
analyzed, which is a direct consequence of the poor quality of life of a
citizen, hence his low level. The description of the basic theoretical and
methodological approaches to the estimation of the cost of human life is given.
Based on the analysis, a number of hypotheses have been advanced about the use
of statistical calculations to achieve the modeling objectives. Model
calculations are based on the example of Zaporozhye Oblast statistics for
2018-2019. The article elaborates the approach to the estimation of the
economic equivalent of the cost of living on the basis of demographic
indicators and average per capita income, and also analyzes the possibilities
of their application in the realities of the national economy. Using
Statistica, the regression equation parameters were determined for statistical
data of population distribution of Zaporizhzhia region by age groups for 2018.
The calculation parameters were also found using the Excel office application,
using the Solution Finder option to justify the quantitative range of metric
values. It is proved that the proposed approach to modeling and calculation is
simpler and more efficient than the calculation methods proposed earlier.
The study concluded that the value of statistical life in Ukraine is
significantly undervalued.",The approach to modeling the value of statistical life using average per capita income,2023-02-07 08:14:52,"Stanislav Levytskyi, Oleksandr Gneushev, Vasyl Makhlinets","http://dx.doi.org/10.21511/dm.17(4).2019.01, http://arxiv.org/abs/2302.03261v1, http://arxiv.org/pdf/2302.03261v1",econ.GN
32899,gn,"The Becker DeGroot Marshak method is widely used to elicit the valuation that
an individual assigns to an object. Theoretically, the second-price structure
of the method gives individuals the incentive to state their true valuation.
Yet, the elicitation methods empirical accuracy is subject to debate. With this
paper, I provide a clear verification of the qualitative accuracy of the
method. Participants of an incentivized laboratory experiment can sell a
virtual object. The value of the object is publicly known and experimentally
varied in a between-subjects design. Replicating previous findings on the low
quantitative accuracy, I observe a very small share of individuals placing a
payoff-optimal stated valuation. However, the analysis shows that the stated
valuation increases with the value of the object. This result shows the
qualitative accuracy of the BDM method and suggests that the method can be
applied in comparative studies.",The qualitative accuracy of the Becker-DeGroot-Marshak method,2023-02-08 16:47:09,Maximilian Späth,"http://arxiv.org/abs/2302.04055v1, http://arxiv.org/pdf/2302.04055v1",econ.GN
32900,gn,"We use administrative panel data on the universe of Brazilian formal workers
to investigate the effects of the Venezuelan crisis on the Brazilian labor
market, focusing on the state of Roraima, where the crisis had a direct impact.
The results showed that the average monthly wage of Brazilians in Roraima
increased by about 3 percent during the early stages of the crisis compared to
the control states. The study found negligible job displacement and evidence of
Brazilians moving to positions with fewer immigrants. We also found that
immigrant presence in the formal sector potentially pushed wages downwards, but
the presence of immigrants in the informal sector offsets the substitution
effects. Overall, the study highlights the complex and multifaceted nature of
immigration on the labor market and the need for policies that consider the
welfare of immigrants and native workers.",The Effects of the Venezuelan Refugee Crisis on the Brazilian Labor Market,2023-02-08 20:13:07,"Hugo Sant'Anna, Samyam Shrestha","http://arxiv.org/abs/2302.04201v1, http://arxiv.org/pdf/2302.04201v1",econ.GN
32901,gn,"Across the legal profession, statistics related to the numbers of women and
other underrepresented groups in leadership roles continue to paint a bleak
picture of diversity and inclusion. Some approaches to closing this gap have
focused on the cause; some have devised and applied solutions. Questions about
the efficacy of many of these solutions remain essentially unanswered. This
empirical study represents one of the first of its kind. Studies of the legal
profession have not focused on the dynamics of supply and demand in the context
of leadership positions (counsel and partner (equity and non-equity)). Neither
have they examined the interrelationships of these dynamics to race and gender
demographic factors (white female and minorities (male and female)). This
research seeks to determine the supply-demand position of leadership in the
legal profession and establish market equilibrium for these counsel and partner
roles.",Why the Mansfield Rule can't work: a supply demand analysis,2023-02-08 22:47:10,Paola Cecchi Dimeglio,"http://arxiv.org/abs/2302.04307v1, http://arxiv.org/pdf/2302.04307v1",econ.GN
32902,gn,"We study the partial and full set-asides and their implication for changes in
bidding behavior in first-price sealed-bid auctions in the context of United
States Department of Agriculture (USDA) food procurement auctions. Using five
years of bid data on different beef products, we implement weighted least
squares regression models to show that partial set-aside predicts decreases in
both offer prices and winning prices among large and small business bidders.
Full set-aside predicts a small increase in offer prices and winning prices
among small businesses. With these predictions, we infer that net profit of
small businesses is unlikely to increase when set-asides are present.",Set-Asides in USDA Food Procurement Auctions,2023-02-11 23:34:16,"Ni Yan, WenTing Tao","http://arxiv.org/abs/2302.05772v1, http://arxiv.org/pdf/2302.05772v1",econ.GN
32903,gn,"We systematically study cornerstones that must be solved to define an air
traffic control benchmarking system based on a Data Envelopment Analysis.
Primarily, we examine the appropriate decision-making units, what to consider
and what to avoid when choosing inputs and outputs in the case that several
countries are included, and how we can identify and deal with outliers, like
the Maastricht Service Provider. We argue that Air Navigation Service Providers
would be a good choice of decision units within the European context. Based on
that, we discuss candidates for DEA inputs and outputs and emphasize that
monetary values should be excluded. We further suggest using super-efficiency
DEA to eliminate outliers. In this context, we compare different DEA
approaches and find that standard DEA performs well.","Efficiency in European Air Traffic Management -- A Fundamental Analysis of Data, Models, and Methods",2023-02-15 11:40:25,"Thomas Standfuss, Georg Hirte, Michael Schultz, Hartmut Fricke","http://arxiv.org/abs/2302.07525v1, http://arxiv.org/pdf/2302.07525v1",econ.GN
32918,gn,"With the prospect of next-generation automated mobility ecosystem, the
realization of the contended traffic efficiency and safety benefits are
contingent upon the demand landscape for automated vehicles (AVs). Focusing on
the public acceptance behavior of AVs, this empirical study addresses two gaps
in the plethora of travel behavior research on identifying the potential
determinants thereof. First, a clear behavioral understanding is lacking as to
the perceived concern about AV safety and the consequent effect on AV
acceptance behavior. Second, how people appraise the benefits of enhanced
automated mobility to meet their current (pre-AV era) travel behavior and
needs, along with the resulting impacts on AV acceptance and perceived safety
concern, remain equivocal. To fill these gaps, a recursive trivariate
econometric model with ordinal-continuous outcomes is employed, which jointly
estimates AV acceptance (ordinal), perceived AV safety concern (ordinal), and
current annual vehicle-miles traveled (VMT) approximating the current travel
behavior (continuous). Importantly, the co-estimation of the three endogenous
outcomes allows us to capture the true interdependencies among them, net of any
correlated unobserved factors that can have common impacts on these outcomes.
Besides the classical socio-economic characteristics, the outcome variables are
further explained by the latent preferences for vehicle attributes (including
vehicle cost, reliability, performance, and refueling) and for existing shared
mobility systems. The model estimation results on a stated preference survey in
the State of California provide insights into proactive policies that can
popularize AVs through gearing towards the most affected population groups,
particularly vehicle cost-conscious, safety-concerned, and lower-VMT (such as
travel-restrictive) individuals.",Behavioral acceptance of automated vehicles: The roles of perceived safety concern and current travel behavior,2023-02-23 21:43:39,"Fatemeh Nazari, Mohamadhossein Noruzoliaee, Abolfazl Mohammadian","http://arxiv.org/abs/2302.12225v2, http://arxiv.org/pdf/2302.12225v2",econ.GN
32904,gn,"Startup companies solve many of today's most complex and challenging
scientific, technical and social problems, such as the decarbonisation of the
economy, air pollution, and the development of novel life-saving vaccines.
Startups are a vital source of social, scientific and economic innovation, yet
the most innovative are also the least likely to survive. The probability of
success of startups has been shown to relate to several firm-level factors such
as industry, location and the economy of the day. Still, attention has
increasingly turned to internal factors relating to the firm's founding team,
including their previous experiences and failures, their centrality in a global
network of other founders and investors as well as the team's size. The effects
of founders' personalities on the success of new ventures are mainly unknown.
Here we show that founder personality traits are a significant feature of a
firm's ultimate success. We draw upon detailed data about the success of a
large-scale global sample of startups. We found that the Big 5 personality
traits of startup founders across 30 dimensions significantly differed from
those of the population at large. Key personality facets that distinguish
successful entrepreneurs include a preference for variety, novelty and starting
new things (openness to adventure), a liking for being the centre of attention
(lower levels of modesty), and exuberance (higher activity levels). However, we do
not find one ""Founder-type"" personality; instead, six different personality
types appear, with startups founded by a ""Hipster, Hacker and Hustler"" being
twice as likely to succeed. Our results also demonstrate the benefits of
larger, personality-diverse teams in startups, which has the potential to be
extended through further research into other team settings within business,
government and research.",The Science of Startups: The Impact of Founder Personalities on Company Success,2023-02-16 01:12:05,"Paul X. McCarthy, Xian Gong, Fabian Stephany, Fabian Braesemann, Marian-Andrei Rizoiu, Margaret L. Kern","http://dx.doi.org/10.1038/s41598-023-41980-y, http://arxiv.org/abs/2302.07968v1, http://arxiv.org/pdf/2302.07968v1",econ.GN
32905,gn,"Policymakers often want the very best data with which to make
decisions--particularly when concerned with questions of national and
international security. But what happens when this data is not available? In
those instances, analysts have come to rely on synthetic data-generating
processes--turning to modeling and simulation tools and survey experiments
among other methods. In the cyber domain, where empirical data at the strategic
level are limited, this is no different--cyber wargames are quickly becoming a
principal method for both exploring and analyzing the security challenges posed
by state and non-state actors in cyberspace. In this chapter, we examine the
design decisions associated with this method.",Wargames as Data: Addressing the Wargamer's Trilemma,2023-02-16 07:04:15,"Andrew W. Reddie, Ruby E. Booth, Bethany L. Goldblum, Kiran Lakkaraju, Jason Reinhardt","http://arxiv.org/abs/2302.08065v1, http://arxiv.org/pdf/2302.08065v1",econ.GN
32906,gn,"We present a revealed preference framework to study sharing of resources in
households with children. We explicitly model the impact of the presence of
children in the context of stable marriage markets under both potential types
of custody arrangement - joint custody and sole custody. Our models deliver
testable revealed preference conditions and allow for the identification of
intrahousehold allocation of resources. Empirical applications to household
data from the Netherlands (joint custody) and Russia (sole custody) show the
methods' potential to identify intrahousehold allocation.","Stable Marriage, Children, and Intrahousehold Allocations",2023-02-16 22:34:10,"Mikhail Freer, Khushboo Surana","http://arxiv.org/abs/2302.08541v1, http://arxiv.org/pdf/2302.08541v1",econ.GN
32907,gn,"We extend the existing growth-at-risk (GaR) literature by examining a long
time period of 130 years in a time-varying parameter regression model. We
identify several important insights for policymakers. First, both the level as
well as the determinants of GaR vary significantly over time. Second, the
stability of upside risks to GDP growth reported in earlier research is
specific to the period known as the Great Moderation, with the distribution of
risks being more balanced before the 1970s. Third, the distribution of GDP
growth has significantly narrowed since the end of the Bretton Woods system.
Fourth, financial stress is always linked to higher downside risks, but it does
not affect upside risks. Finally, other risk indicators, such as credit growth
and house prices, not only drive downside risks, but also contribute to
increased upside risks during boom periods. In this context, the paper also
adds to the financial cycle literature by completing the picture of drivers
(and risks) for both booms and recessions over time.",A tale of two tails: 130 years of growth-at-risk,2023-02-17 17:50:48,"Martin Gächter, Elias Hasler, Florian Huber","http://arxiv.org/abs/2302.08920v1, http://arxiv.org/pdf/2302.08920v1",econ.GN
32908,gn,"This report presents the results of an ex-ante impact assessment of several
scenarios related to the farmer targeting of the input subsidy programme
currently implemented in Senegal. This study has been achieved with the
agricultural household model FSSIM-Dev, calibrated on a sample of 2 278 farm
households from the ESPS-2 survey. The impacts on crop mix, fertilizer
application, farm income and on the government cost are presented and
discussed.",Subsidizing agricultural inputs in Senegal: Comparative analysis of three modes of intervention using a farm household model,2023-02-02 12:32:28,"Aymeric Ricome, Kamel Louhichi, Sergio Gomez y Paloma","http://arxiv.org/abs/2302.09297v1, http://arxiv.org/pdf/2302.09297v1",econ.GN
32909,gn,"This study examines the relationship between globalization and income
inequality, utilizing panel data spanning from 1992 to 2020. Globalization is
measured by the World Bank global-link indicators such as FDI, Remittance,
Trade Openness, and Migration while income inequality is measured by Gini
Coefficient and the median income of 50% of the population. The fixed effect
panel data analysis provides empirical evidence indicating that globalization
tends to reduce income inequality, though its impact varies between developed
and developing countries. The analysis reveals a strong negative correlation
between net foreign direct investment (FDI) inflows and inequality in
developing countries, while no such relationship was found for developed
countries. The relationship holds even if we consider an alternative measure of
inequality. However, when dividing countries by developed and developing
groups, no statistically significant relationship was observed. Policymakers
can use these findings to support efforts to increase FDI, trade, tourism, and
migration to promote growth and reduce income inequality.",The Globalization-Inequality Nexus: A Comparative Study of Developed and Developing Countries,2023-02-19 14:08:38,Md Shah Naoaj,"http://arxiv.org/abs/2302.09537v1, http://arxiv.org/pdf/2302.09537v1",econ.GN
32910,gn,"Air traffic control is considered to be a bottleneck in European air traffic
management. As a result, the performance of the air navigation service
providers is critically examined and also used for benchmarking. Using
quantitative methods, we investigate which endogenous and exogenous factors
affect the performance of air traffic control units on different levels. The
methodological discussion is complemented by an empirical analysis. Results may
be used to derive recommendations for operators, airspace users, and
policymakers. We find that efficiency depends significantly on traffic patterns
and the decisions of airspace users, but changes in the airspace structure
could also make a significant contribution to performance improvements.",Determinants of Performance in European ATM -- How to Analyze a Diverse Industry,2023-02-20 17:02:33,"Thomas Standfuss, Georg Hirte, Frank Fichert, Hartmut Fricke","http://arxiv.org/abs/2302.09986v1, http://arxiv.org/pdf/2302.09986v1",econ.GN
32911,gn,"We propose a new strategy to identify the impact of primary school class rank
while controlling for ability peer effects, by using grades on class exams to
construct the rank, and grades on national standardized tests to measure
students' ability. Leveraging data comprising nearly one million Italian
students, we show that rank significantly impacts subsequent test grades,
although the effect of class value-added is five-fold larger. Higher-ranked
students self-select into high schools with higher average student
achievements. Middle-school enrollment, constrained by residency criteria, is
not affected. Finally, exploiting an extensive survey, we identify
psychological mechanisms channeling the rank effect.",Rather First in a Village than Second in Rome? The Effect of Students' Class Rank in Primary School on Subsequent Academic Achievements,2023-02-20 18:17:08,"Francois-Xavier Ladant, Falco J. Bargagli-Stoffi, Julien Hedou, Paolo Sestito","http://arxiv.org/abs/2302.10026v3, http://arxiv.org/pdf/2302.10026v3",econ.GN
32912,gn,"Energy system models require a large amount of technical and economic data,
the quality of which significantly influences the reliability of the results.
Some of the variables on the important data source ENTSO-E transparency
platform, such as transmission system operators' day-ahead load forecasts, are
known to be biased. These biases and high errors affect the quality of energy
system models. We propose a simple time series model that does not require any
input variables other than the load forecast history to significantly improve
the transmission system operators' load forecast data on the ENTSO-E
transparency platform in real-time, i.e., we successively improve each incoming
data point. We further present an energy system model developed specifically
for the short-term day-ahead market. We show that the improved load data as
inputs reduce pricing errors of the model, with strong reductions particularly
in times when prices are high and the market is tight.",Enhancing Energy System Models Using Better Load Forecasts,2023-02-22 00:28:53,"Thomas Möbius, Mira Watermeyer, Oliver Grothe, Felix Müsgens","http://arxiv.org/abs/2302.11017v1, http://arxiv.org/pdf/2302.11017v1",econ.GN
32913,gn,"What are the effects of investing in public infrastructure? We answer this
question with a New Keynesian model. We recast the model as a Markov chain and
develop a general solution method that nests existing ones inside/outside the
zero lower bound as special cases. Our framework delivers a simple expression
for the contribution of public infrastructure. We show that it provides a
unified framework to study the effects of public investment in three scenarios:
$(i)$ normal times, $(ii)$ a short-lived liquidity trap, and $(iii)$ a
long-lived liquidity trap. We find that commonly used calibrations lead to multipliers
that diverge with the duration of the trap.",Simple Analytics of the Government Investment Multiplier,2023-02-22 11:52:41,"Chunbing Cai, Jordan Roulleau-Pasdeloup","http://arxiv.org/abs/2302.11212v2, http://arxiv.org/pdf/2302.11212v2",econ.GN
32914,gn,"Spatially targeted investment grant schemes are a common tool to support
firms in lagging regions. We exploit exogenous variations in Germany's main
regional policy instrument (GRW) arriving from institutional reforms to analyse
local employment effects of investment grants. Findings for reduced-form and IV
regressions point to a significant policy channel running from higher funding
rates to increased firm-level investments and newly created jobs. When we
contrast effects for regions with high but declining funding rates to those
with low but rising rates, we find that GRW reforms led to diminishing
employment increases. Especially small firms responded to changing funding
conditions.",Institutional reforms and the employment effects of spatially targeted investment grants: The case of Germany's GRW,2023-02-22 16:49:51,"Björn Alecke, Timo Mitze","http://arxiv.org/abs/2302.11376v1, http://arxiv.org/pdf/2302.11376v1",econ.GN
32915,gn,"Using a model in which agents compete to develop a potentially dangerous new
technology (AI), we study how changes in the pricing of factors of production
(computational resources) affect agents' strategies, particularly their
spending on safety meant to reduce the danger from the new technology. In the
model, agents split spending between safety and performance, with safety
determining the probability of a ``disaster"" outcome, and performance
determining the agents' competitiveness relative to their peers. For given
parameterizations, we determine the theoretically optimal spending strategies
by numerically computing Nash equilibria. Using this approach we find that (1)
in symmetric scenarios, compute price increases are safety-promoting if and
only if the production of performance scales faster than the production of
safety; (2) the probability of a disaster can be made arbitrarily low by
providing a sufficiently large subsidy to a single agent; (3) when agents
differ in productivity, providing a subsidy to the more productive agent is
often better for aggregate safety than providing the same subsidy to other
agent(s) (with some qualifications, which we discuss); (4) when one agent is
much more safety-conscious, in the sense of believing that safety is more
difficult to achieve, relative to his competitors, subsidizing that agent is
typically better for aggregate safety than subsidizing its competitors;
however, subsidizing an agent that is only somewhat more safety-conscious often
decreases safety. Thus, although subsidizing a much more safety-conscious, or
productive, agent often improves safety as intuition suggests, subsidizing a
somewhat more safety-conscious or productive agent can often be harmful.",Industrial Policy for Advanced AI: Compute Pricing and the Safety Tax,2023-02-22 18:18:12,"Mckay Jensen, Nicholas Emery-Xu, Robert Trager","http://arxiv.org/abs/2302.11436v1, http://arxiv.org/pdf/2302.11436v1",econ.GN
32916,gn,"In continuous-choice settings, consumers decide not only on whether to
purchase a product, but also on how much to purchase. Thus, firms optimize a
full price schedule rather than a single price point. This paper provides a
methodology to empirically estimate the optimal schedule under
multi-dimensional consumer heterogeneity. We apply our method to novel data
from an educational-services firm that contains purchase-size information not
only for deals that materialized, but also for potential deals that eventually
failed. We show that this data, combined with identifying assumptions, helps
infer how price sensitivity varies with ""customer size"". Using our estimated
model, we show that the optimal second-degree price discrimination (i.e.,
optimal nonlinear tariff) improves the firm's profit upon linear pricing by at
least 5.5%. That said, this second-degree price discrimination scheme only
recovers 5.1% of the gap between the profitability of linear pricing and that
of infeasible first-degree price discrimination. We also conduct several
further counterfactual analyses: (i) empirically quantifying the magnitude by
which incentive-compatibility constraints impact the optimal pricing and
profits, (ii) comparing the role of demand- vs. cost-side factors in shaping
the optimal price schedule, and (iii) studying the implications of fixed fees
for the optimal contract and profitability.",An Empirical Analysis of Optimal Nonlinear Pricing,2023-02-22 23:39:55,"Soheil Ghili, Russ Yoon","http://arxiv.org/abs/2302.11643v2, http://arxiv.org/pdf/2302.11643v2",econ.GN
32919,gn,"This article examines the intertwining relationship between informality and
education-occupation mismatch (EOM) and the consequent impact on wages. In
particular, we discuss two issues: first, the relative importance of
informality and education-occupation mismatch in determining wages, and second,
the relevance of EOM for formal and informal workers. The analysis reveals that
although both informality and EOM are significant determinants of wages, the
former is more crucial for a developing country like India. Further, we find
that EOM is one of the crucial determinants of wages for formal workers, but it
is not critical for informal workers. The study highlights the need for
considering the bifurcation of formal-informal workers to understand the
complete dynamics of EOM, especially for developing countries where informality
is predominant.","Informality, Education-Occupation Mismatch, and Wages: Evidence from India",2023-02-25 09:05:47,"Shweta Bahl, Ajay Sharma","http://arxiv.org/abs/2302.12999v1, http://arxiv.org/pdf/2302.12999v1",econ.GN
32920,gn,"Greece constitutes a coastal country with a lot of geomorphologic, climatic,
cultural and historic peculiarities favoring the development of many aspects of
tourism. Within this framework, this article examines what are the effects of
tourism in Greece and how determinative these effects are, by applying a
macroscopic analysis on empirical data for the estimation of the contribution
of tourism in the Greek Economy. The available data regard records of the
Balance of Payments in Greece and of the major components of the Balance of the
Invisible Revenues, where a measurable aspect of tourism, the Travel or Tourism
Exchange, is included. At the time period of the available data (2000-2012) two
events of the recent Greek history are distinguished as the most significant
(the Olympic Games in the year 2004 and the economic crisis initiated in the
year 2009) and their impact on the diachronic evolution in the tourism is
discussed. Under an overall assessment, the analysis illustrated that tourism
is a sector of the Greek economy, which is described by a significant
resilience, but it seems that it has not yet been submitted to an effective
developmental plan exploiting the endogenous tourism dynamics of the country,
suggesting currently a promising investment of low risk for the economic growth
of country and the exit of the economic crisis",The Contribution of Tourism in National Economies: Evidence of Greece,2023-02-25 20:03:28,"Olga Kalantzi, Dimitrios Tsiotas, Serafeim Polyzos","http://arxiv.org/abs/2302.13121v1, http://arxiv.org/pdf/2302.13121v1",econ.GN
32921,gn,"We consider price discovery across derivative markets in a general framework
where an agent has private information regarding state probabilities and trades
state-contingent claims. In an equivalent options formulation, the informed
agent has private information regarding arbitrary aspects of an underlying
asset's payoff distribution and trades option portfolios. We characterize the
informed demand, price impact, and information efficiency of prices. The
informed demand formula prescribes option strategies for trading on any given
aspect of the underlying payoff, thereby rationalizing and extending those used
in practice for trading on, e.g., volatility.",Price Discovery for Derivatives,2023-02-27 01:30:13,"Christian Keller, Michael C. Tseng","http://arxiv.org/abs/2302.13426v21, http://arxiv.org/pdf/2302.13426v21",econ.GN
32922,gn,"Existing literature at the nexus of firm productivity and export behavior
mostly focuses on ""learning by exporting,"" whereby firms can improve their
performance by engaging in exports. Whereas, the secondary channel of learning
via cross-firm spillovers from exporting peers, or ""learning from exporters,""
has largely been neglected. Omitting this important mechanism, which can
benefit both exporters and non-exporters, may provide an incomplete assessment
of the total productivity benefits of exporting. In this paper, we develop a
unified empirical framework for productivity measurement that explicitly
accommodates both channels. To do this, we formalize the evolution of firm
productivity as an export-controlled process, allowing future productivity to
be affected by both the firm's own export behavior as well as export behavior
of spatially proximate, same-industry peers. This facilitates a simultaneous,
""internally consistent"" identification of firm productivity and the
corresponding effects of exporting. We apply our methodology to a panel of
manufacturing plants in Chile in 1995-2007 and find significant evidence in
support of both direct and spillover effects of exporting that substantially
boost the productivity of domestic firms.",Detecting Learning by Exporting and from Exporters,2023-02-27 01:31:39,"Jingfang Zhang, Emir Malikov","http://arxiv.org/abs/2302.13427v1, http://arxiv.org/pdf/2302.13427v1",econ.GN
32923,gn,"There is growing empirical evidence that firm heterogeneity is
technologically non-neutral. This paper extends Gandhi et al.'s (2020) proxy
variable framework for structurally identifying production functions to a more
general case when latent firm productivity is multi-dimensional, with both
factor-neutral and (biased) factor-augmenting components. Unlike alternative
methodologies, our model can be identified under weaker data requirements,
notably, without relying on the typically unavailable cross-sectional variation
in input prices for instrumentation. When markets are perfectly competitive, we
achieve point identification by leveraging the information contained in static
optimality conditions, effectively adopting a system-of-equations approach. We
also show how one can partially identify the non-neutral production technology
in the traditional proxy variable framework when firms have market power.",A System Approach to Structural Identification of Production Functions with Multi-Dimensional Productivity,2023-02-27 01:39:48,"Emir Malikov, Shunan Zhao, Jingfang Zhang","http://arxiv.org/abs/2302.13429v1, http://arxiv.org/pdf/2302.13429v1",econ.GN
32924,gn,"Motivated by the long-standing interest in understanding the role of location
for firm performance, this paper provides a semiparametric methodology to
accommodate locational heterogeneity in production analysis. Our approach is
novel in that we explicitly model spatial variation in parameters in the
production-function estimation. We accomplish this by allowing both the
input-elasticity and productivity parameters to be unknown functions of the
firm's geographic location and estimate them via local kernel methods. This
allows the production technology to vary across space, thereby accommodating
neighborhood influences on firm production. In doing so, we are also able to
examine the role of cross-location differences in explaining the variation in
operational productivity among firms. Our model is superior to the alternative
spatial production-function formulations because it (i) explicitly estimates
the cross-locational variation in production functions, (ii) is readily
reconcilable with the conventional production axioms and, more importantly,
(iii) can be identified from the data by building on the popular proxy-variable
methods, which we extend to incorporate locational heterogeneity. Using our
methodology, we study China's chemicals manufacturing industry and find that
differences in technology (as opposed to in idiosyncratic firm heterogeneity)
are the main source of the cross-location differential in total productivity in
this industry.",Accounting for Cross-Location Technological Heterogeneity in the Measurement of Operations Efficiency and Productivity,2023-02-27 01:46:47,"Emir Malikov, Jingfang Zhang, Shunan Zhao, Subal C. Kumbhakar","http://arxiv.org/abs/2302.13430v1, http://arxiv.org/pdf/2302.13430v1",econ.GN
32925,gn,"A core aspect in market design is to encourage participants to truthfully
report their preferences to ensure efficiency and fairness. Our research paper
analyzes the factors that contribute to and the consequences of students
reporting non-truthfully in admissions applications. We survey college
applicants in Denmark about their perceptions of the admission process and
personality to examine recent theories of misreporting preferences. Our
analysis reveals that omissions in reports are largely driven by students'
pessimistic beliefs about their chances of admission. Moreover, such erroneous
beliefs largely account for whether an omission led to a missed opportunity for
admission. However, the low frequency of these errors suggests that most
non-truthful reports are ""white lies"" with minimal negative impact. We find a
novel role of personality and individual circumstances that co-determine the
extent of omissions. We also find that estimates of students' demand are biased
if it is assumed that students report truthfully, and demonstrate that this
bias can be reduced by making a less restrictive assumption. Our results have
implications for the modeling of preferences, information acquisition, and
subjective admission beliefs in strategy-proof mechanisms.",Why Do Students Lie and Should We Worry? An Analysis of Non-truthful Reporting,2023-02-27 15:27:29,"Emil Chrisander, Andreas Bjerre-Nielsen","http://arxiv.org/abs/2302.13718v2, http://arxiv.org/pdf/2302.13718v2",econ.GN
32926,gn,"Is the rapid adoption of Artificial Intelligence a sign that creative
destruction (a capitalist innovation process first theorised in 1942) is
occurring? Although its theory suggests that it is only visible over time in
aggregate, this paper devises three hypotheses to test its presence on a macro
level and research methods to produce the required data. This paper tests the
theory using news archives, questionnaires, and interviews with industry
professionals. It considers the risks of adopting Artificial Intelligence, its
current performance in the market and its general applicability to the role.
The results suggest that creative destruction is occurring in the AML industry
despite the activities of the regulators acting as natural blockers to
innovation. This is a pressurised situation where current-generation Artificial
Intelligence may offer more harm than benefit. For managers, this paper's
results suggest that safely pursuing AI in AML requires having realistic
expectations of Artificial Intelligence's benefits combined with using a
framework for AI Ethics.",The global economic impact of AI technologies in the fight against financial crime,2023-02-27 17:30:54,James Bell,"http://arxiv.org/abs/2302.13823v1, http://arxiv.org/pdf/2302.13823v1",econ.GN
32927,gn,"The analysis of the effects of monetary policy shocks using the common
econometric models (such as VAR or SVAR) poses several empirical anomalies.
However, it is known that in these econometric models the use of a large amount
of information is accompanied by dimensionality problems. In this context, the
approach in terms of FAVAR (Factor Augmented VAR) models tries to solve this
problem. Moreover, the information contained in the factors is important for
the correct identification of monetary policy shocks and it helps to correct
the empirical anomalies usually encountered in empirical work. Following the
procedure of Bernanke, Boivin and Eliasz (2005), we use the FAVAR model to
analyze the impact of monetary policy shocks on the Moroccan economy. The model
used allows us to obtain impulse response functions for all indicators in the
macroeconomic dataset used (117 quarterly series from 1985:Q1 to
2018:Q4) to have a more realistic and complete representation of the impact of
monetary policy shocks in Morocco.",Econometric assessment of the monetary policy shocks in Morocco: Evidence from a Bayesian Factor-Augmented VAR,2023-02-27 22:52:58,Marouane Daoui,"http://arxiv.org/abs/2302.14114v1, http://arxiv.org/pdf/2302.14114v1",econ.GN
32928,gn,"Understanding couple instability is a topic of social and economic relevance.
This paper investigates how the risk of dissolution relates to efforts to solve
disagreements. We study whether the prevalence of relationship instability in
the past among couples is associated with marital locus of control. This is a
noncognitive trait that captures individuals' perception of control over
problems within the couple. We implement a list experiment using the count-item
technique on a sample of current real-life couples to elicit truthful answers
about past couple break-up intentions at the individual level. We find
that around 44 per cent of our sample has considered ending their relationship
with their partner in the past. The intention to break-up is more prevalent
among those who score low in marital locus of control, males, low-income
earners, individuals with university studies and couples without children.",The association between Marital Locus of Control and break-up intentions,2023-02-27 23:44:48,"David Boto-García, Federico Perali","http://arxiv.org/abs/2302.14133v1, http://arxiv.org/pdf/2302.14133v1",econ.GN
32929,gn,"Using harmonized administrative data from Scandinavia, we find that
intergenerational rank associations in income have increased uniformly across
Sweden, Denmark, and Norway for cohorts born between 1951 and 1979. Splitting
these trends by gender, we find that father-son mobility has been stable, while
family correlations for mothers and daughters trend upward. Similar patterns
appear in US survey data, albeit with slightly different timing. Finally, based
on evidence from records on occupations and educational attainments, we argue
that the observed decline in intergenerational mobility is consistent with
female skills becoming increasingly valued in the labor market.",Intergenerational Mobility Trends and the Changing Role of Female Labor,2023-02-28 12:31:47,"Ulrika Ahrsjö, René Karadakic, Joachim Kahr Rasmussen","http://arxiv.org/abs/2302.14440v2, http://arxiv.org/pdf/2302.14440v2",econ.GN
32930,gn,"We develop a novel methodology for the proxy variable identification of firm
productivity in the presence of productivity-modifying learning and spillovers
which facilitates a unified ""internally consistent"" analysis of the spillover
effects between firms. Contrary to the popular two-step empirical approach,
ours does not postulate contradictory assumptions about firm productivity
across the estimation steps. Instead, we explicitly accommodate cross-sectional
dependence in productivity induced by spillovers, which facilitates the
simultaneous identification of both the productivity and the spillover effects
therein. We apply our model to study cross-firm spillovers in China's
electric machinery manufacturing, with a particular focus on productivity
effects of inbound FDI.",On the Estimation of Cross-Firm Productivity Spillovers with an Application to FDI,2023-02-27 01:52:40,"Emir Malikov, Shunan Zhao","http://arxiv.org/abs/2302.14602v1, http://arxiv.org/pdf/2302.14602v1",econ.GN
33829,th,"A buyer wishes to purchase a durable good from a seller who in each period
chooses a mechanism under limited commitment. The buyer's valuation is binary
and fully persistent. We show that posted prices implement all equilibrium
outcomes of an infinite-horizon, mechanism selection game. Despite being able
to choose mechanisms, the seller can do no better and no worse than if he chose
prices in each period, so that he is subject to Coase's conjecture. Our
analysis marries insights from information and mechanism design with those from
the literature on durable goods. We do so by relying on the revelation
principle in Doval and Skreta (2020).",Optimal mechanism for the sale of a durable good,2019-04-16 07:34:03,"Laura Doval, Vasiliki Skreta","http://arxiv.org/abs/1904.07456v7, http://arxiv.org/pdf/1904.07456v7",econ.TH
32931,gn,"Propelled by the recent financial product innovations involving derivatives,
securitization and mortgages, commercial banks are becoming more complex,
branching out into many ""nontraditional"" banking operations beyond issuance of
loans. This broadening of operational scope in a pursuit of revenue
diversification may be beneficial if banks exhibit scope economies. The
existing (two-decade-old) empirical evidence lends no support for such
product-scope-driven cost economies in banking, but it is greatly outdated and,
surprisingly, there has been little (if any) research on this subject despite
the drastic transformations that the U.S. banking industry has undergone over
the past two decades in the wake of technological advancements and regulatory
changes. Commercial banks have significantly shifted towards nontraditional
operations, making the portfolio of products offered by present-day banks very
different from that two decades ago. In this paper, we provide new and more
robust evidence about scope economies in U.S. commercial banking. We improve
upon the prior literature not only by analyzing the most recent data and
accounting for banks' nontraditional off-balance sheet operations, but also in
multiple methodological ways. To test for scope economies, we estimate a
flexible time-varying-coefficient panel-data quantile regression model which
accommodates three-way heterogeneity across banks. Our results provide strong
evidence in support of significantly positive scope economies across banks of
virtually all sizes. Contrary to earlier studies, we find no empirical
corroboration for scope diseconomies.",Off-Balance Sheet Activities and Scope Economies in U.S. Banking,2023-02-27 01:56:20,"Jingfang Zhang, Emir Malikov","http://arxiv.org/abs/2302.14603v1, http://arxiv.org/pdf/2302.14603v1",econ.GN
32932,gn,"We conducted regression discontinuity design models in order to evaluate
changes in access to healthcare services and financial protection, using as a
natural experiment the age required to retire in Argentina, the moment in which
people are able to enroll in the free social health insurance called PAMI. The
dependent variables were indicators of the population with health insurance,
out-of-pocket health expenditure, and use of health services. The results show
that PAMI causes a large increase in the population with health insurance and
marginal reductions in health expenditure. No effects on healthcare use were
found.",Evaluación del efecto del PAMI en la cobertura en salud de los adultos mayores en Argentina,2023-02-28 20:36:48,"Juan Marcelo Virdis, Fernando Delbianco, María Eugenia Elorza","http://arxiv.org/abs/2302.14784v1, http://arxiv.org/pdf/2302.14784v1",econ.GN
32933,gn,"Green hydrogen is expected to be traded globally in future greenhouse gas
neutral energy systems. However, there is still a lack of temporally- and
spatially-explicit cost-potentials for green hydrogen considering the full
process chain, which are necessary for creating effective global strategies.
Therefore, this study provides such detailed cost-potential-curves for 28
selected countries worldwide until 2050, using an optimizing energy systems
approach based on open-field photovoltaics (PV) and onshore wind. The results
reveal huge hydrogen potentials (>1,500 PWh_LHV/a), of which 79 PWh_LHV/a are available at costs
below 2.30 EUR/kg in 2050, dominated by solar-rich countries in Africa and the
Middle East. Decentralized PV-based hydrogen production, even in wind-rich
countries, is always preferred. Supplying sustainable water for hydrogen
production is needed while having only a minor impact on hydrogen cost. Additional
costs for imports from democratic regions are only 7% higher in total. Hence, such
regions could boost the geostrategic security of supply for greenhouse gas
neutral energy systems.",Green Hydrogen Cost-Potentials for Global Trade,2023-03-01 11:19:48,"David Franzmann, Heidi Heinrichs, Felix Lippkau, Thushara Addanki, Christoph Winkler, Patrick Buchenberg, Thomas Hamacher, Markus Blesl, Jochen Linßen, Detlef Stolten","http://dx.doi.org/10.1016/j.ijhydene.2023.05.012, http://arxiv.org/abs/2303.00314v2, http://arxiv.org/pdf/2303.00314v2",econ.GN
32934,gn,"Recent dramatic increases in AI language modeling capabilities has led to
many questions about the effect of these technologies on the economy. In this
paper we present a methodology to systematically assess the extent to which
occupations, industries and geographies are exposed to advances in AI language
modeling capabilities. We find that the top occupations exposed to language
modeling include telemarketers and a variety of post-secondary teachers such as
English language and literature, foreign language and literature, and history
teachers. We find the top industries exposed to advances in language modeling
are legal services and securities, commodities, and investments. We also find a
positive correlation between wages and exposure to AI language modeling.",How will Language Modelers like ChatGPT Affect Occupations and Industries?,2023-03-02 14:04:06,"Ed Felten, Manav Raj, Robert Seamans","http://arxiv.org/abs/2303.01157v2, http://arxiv.org/pdf/2303.01157v2",econ.GN
32935,gn,"Search frictions can impede the formation of optimal matches between consumer
and supplier, or employee and employer, and lead to inefficiencies. This paper
revisits the effect of search frictions on the firm size distribution when
challenging two common but strong assumptions: that all agents share the same
ranking of firms, and that agents meet all firms, whether small or large, at
the same rate. We build a random search model in which we relax those two
assumptions and show that the intensity of search frictions has a non-monotonic
effect on market concentration. An increase in friction intensity increases
market concentration up to a certain threshold of frictions, which depends on
the slope of the meeting rate with respect to firm size. We leverage unique
French customs data to estimate this slope. First, we find that in a range of
plausible scenarios, search friction intensity increases market concentration.
Second, we show that slopes have increased over time, which unambiguously
increases market concentration in our model. Overall, we shed light on the
importance of the structure of frictions, rather than their intensity, to
understand market concentration.",Revisiting the effect of search frictions on market concentration,2023-03-03 13:09:28,"Jules Depersin, Bérengère Patault","http://arxiv.org/abs/2303.01824v1, http://arxiv.org/pdf/2303.01824v1",econ.GN
32936,gn,"Efficient risk transfer is an important condition for ensuring the
sustainability of a market according to the established economics literature.
In an inefficient market, significant financial imbalances may develop and
potentially jeopardise the solvency of some market participants. The constantly
evolving nature of cyber-threats and lack of public data sharing mean that the
economic conditions required for quoted cyber-insurance premiums to be
considered efficient are highly unlikely to be met. This paper develops Monte
Carlo simulations of an artificial cyber-insurance market and compares the
efficient and inefficient outcomes based on the informational setup between the
market participants. The existence of diverse loss distributions is justified
by the dynamic nature of cyber-threats and the absence of any reliable and
centralised incident reporting. It is shown that the limited involvement of
reinsurers when loss expectations are not shared leads to increased premiums
and lower overall capacity. This suggests that the sustainability of the
cyber-insurance market requires both better data sharing and external sources
of risk tolerant capital.",The barriers to sustainable risk transfer in the cyber-insurance market,2023-03-03 19:29:16,"Henry Skeoch, Christos Ioannidis","http://arxiv.org/abs/2303.02061v2, http://arxiv.org/pdf/2303.02061v2",econ.GN
32938,gn,"This article conducts a literature review on the topic of monetary policy in
developing countries and focuses on the effectiveness of monetary policy in
promoting economic growth and the relationship between monetary policy and
economic growth. The literature review finds that the activities of central
banks in developing countries are often overlooked by economic models, but
recent studies have shown that there are many factors that can affect the
effectiveness of monetary policy in these countries. These factors include the
profitability of central banks and monetary unions, the independence of central
banks in their operations, and lags, rigidities, and disequilibrium analysis.
The literature review also finds that studies on the topic have produced mixed
results, with some studies finding that monetary policy has a limited or
non-existent impact on economic growth and others finding that it plays a
crucial role. The article aims to provide a comprehensive understanding of the
current state of research in this field and to identify areas for future study.",Monetary Policy and Economic Growth in Developing Countries: A Literature Review,2023-03-06 17:24:13,Marouane Daoui,"http://arxiv.org/abs/2303.03162v1, http://arxiv.org/pdf/2303.03162v1",econ.GN
32939,gn,"We propose four channels through which government guarantees affect banks'
incentives to smooth income. Empirically, we exploit two complementary settings
that represent plausible exogenous changes in government guarantees: the
increase in implicit guarantees following the creation of the Eurozone and the
removal of explicit guarantees granted to the Landesbanken. We show that
increases (decreases) in government guarantees are associated with significant
decreases (increases) in banks' income smoothing. Taken together, our results
largely corroborate the predominance of a tail-risk channel, wherein government
guarantees reduce banks' tail risk, thereby reducing managers' incentives to
engage in income smoothing.",Government Guarantees and Banks' Income Smoothing,2023-03-07 08:48:38,"Manuela M. Dantas, Kenneth J. Merkley, Felipe B. G. Silva","http://arxiv.org/abs/2303.03661v1, http://arxiv.org/pdf/2303.03661v1",econ.GN
32940,gn,"The vast literature on exchange rate fluctuations estimates the exchange rate
pass-through (ERPT). Most ERPT studies consider annually aggregated data for
developed or large developing countries for estimating ERPT. These estimates
vary widely depending on the type of country, data coverage, and frequency.
However, the ERPT estimation using firm-level high-frequency export data of a
small developing country is rare. In this paper, I estimate the pricing to
market and the exchange rate pass-through at a monthly, quarterly, and annual
level of data frequency to deal with aggregation bias. Furthermore, I
investigate how delivery time-based factors such as frequent shipments and
faster transport affect a firm's pricing-to-market behavior. Using
transaction-level export data of Bangladesh from 2005 to 2013 and the Poisson
Pseudo Maximum Likelihood (PPML) estimation method, I find very small
pricing-to-market responses to exchange rate changes in the exporter's price. As pass-through
shows how exporters respond to macro shocks, for Bangladesh, this low
export price response to the exchange rate changes indicates that currency
devaluation might not have a significant effect on the exporter. The minimal
price response and high pass-through contrast with the literature on incomplete
pass-through at the annual level. By considering the characteristics of the
firms, products, and destinations, I investigate the heterogeneity of the
pass-through. The findings remain consistent with several robustness checks.",Exchange Rate Pass-Through and Data Frequency: Firm-Level Evidence from Bangladesh,2023-03-07 21:06:20,Md Deluair Hossen,"http://arxiv.org/abs/2303.04101v1, http://arxiv.org/pdf/2303.04101v1",econ.GN
32941,gn,"In international trade, firms face lengthy ordering-producing-delivery times
and make shipping frequency decisions based on the per-shipment costs and
financing costs. In this paper, I develop a model of importer-exporter
procurement where the importer procures international inputs from exporting
firms in developing countries. The exporters are credit constrained for working
capital, incur the per-shipment fixed costs, and get paid after goods are
delivered to the importer. The model shows that the shipping frequency
increases with higher financing costs in the origin and destination countries. Furthermore,
longer delivery times increase shipping frequency as well as procurement costs.
The model also shows that the higher per-shipment fixed costs reduce the
shipping frequency, in line with previous literature. Reduced transaction costs
lower the exporter's demand for financial services through shipping frequency
adjustment, mitigating the financial frictions of the firm. Then, I empirically
investigate whether the conclusions regarding the effect of per-shipment fixed
costs on shipping frequency from the theoretical model and in the existing
literature extend to developing countries. My estimation method addresses
several biases. First, I deal with aggregation bias with the firm, product, and
country-level analysis. Second, I consider the Poisson Pseudo Maximum
Likelihood (PPML) estimation method to deal with heteroscedasticity bias from
the OLS estimation of log-linear models. Third, I account for the distance
non-linearity of Bangladeshi exports. Finally, I consider the effect of
financing cost on shipping frequency to address omitted variable bias. Using
transaction-level export data from Bangladesh, I find that 10% higher
per-shipment costs reduce the shipping frequency by 3.45%. The findings are
robust to different specifications and subsamples.","Financing Costs, Per-Shipment Costs and Shipping Frequency: Firm-Level Evidence from Bangladesh",2023-03-07 23:32:49,Md Deluair Hossen,"http://arxiv.org/abs/2303.04223v1, http://arxiv.org/pdf/2303.04223v1",econ.GN
32942,gn,"This paper aims to evaluate how changing patterns of sectoral gender
segregation play a role in accounting for women's employment contracts and
wages in the UK between 2005 and 2020. We then study wage differentials in
gender-specific dominated sectors. We found that the propensity of women to be
distributed differently across sectors is a major factor contributing to
explaining the differences in wages and contract opportunities. Hence, the
disproportion of women in female-dominated sectors implies contractual features
and lower wages typical of that sector, on average, for all workers. This
difference is primarily explained by ""persistent discriminatory constraints"",
while human capital-related characteristics play a minor role. However, wage
differentials would shrink if workers had the same potential and residual wages
as men in male-dominated sectors. Moreover, this does not happen at the top of
the wage distribution, where wage differentials among women working in
female-dominated sectors are always more pronounced than those of men.",Gender Segregation: Analysis across Sectoral-Dominance in the UK Labour Market,2023-03-08 15:23:37,"Riccardo Leoncini, Mariele Macaluso, Annalivia Polselli","http://arxiv.org/abs/2303.04539v3, http://arxiv.org/pdf/2303.04539v3",econ.GN
32943,gn,"Using the 2017 National Household Travel Survey (NHTS), this study analyzes
America's urban travel trends compared with earlier nationwide travel surveys,
and examines the variations in travel behaviors among a range of socioeconomic
groups. The most noticeable trend for the 2017 NHTS is that although private
automobiles continue to be the dominant travel mode in American cities, the
share of car trips has slightly and steadily decreased since its peak in 2001.
In contrast, the share of transit, non-motorized, and taxicab (including
ride-hailing) trips has steadily increased. Besides this overall trend, there
are important variations in travel behaviors across income, home ownership,
ethnicity, gender, age, and life-cycle stages. Although the trends in transit
development, shared mobility, e-commerce, and lifestyle changes offer optimism
about American cities becoming more multimodal, policymakers should consider
these differences in socioeconomic factors and try to provide more equitable
access to sustainable mobility across different socioeconomic groups.",Socioeconomics of Urban Travel in the U.S.: Evidence from the 2017 NHTS,2023-03-08 05:16:22,"Xize Wang, John L. Renne","http://dx.doi.org/10.1016/j.trd.2023.103622, http://arxiv.org/abs/2303.04812v1, http://arxiv.org/pdf/2303.04812v1",econ.GN
32945,gn,"Preliminary research indicated that an increasing number of young adults end
up in debt collection. Yet, debt collection agencies (DCAs) are still lacking
knowledge on how to approach these consumers. A large-scale mixed-methods
survey of consumers in Germany (N = 996) was conducted to investigate
preference shifts from traditional to digital payment and communication
channels, as well as attitude shifts towards financial institutions. Our results show
that, indeed, younger consumers are more likely to prefer digital payment
methods (e.g., Paypal, Apple Pay), while older consumers are more likely to
prefer traditional payment methods such as manual transfer. In the case of
communication channels, we found that older consumers were more likely to
prefer letters than younger consumers. Additional factors that had an influence
on payment and communication preferences include gender, income and living in
an urban area. Finally, we observed an attitude shift among younger consumers, who
exhibit more openness than older consumers when talking about their debt. In
summary, our findings show that consumers' preferences are influenced by
individual differences, specifically age, and we discuss how DCAs can leverage
these insights to optimize their processes.",Preferences and Attitudes towards Debt Collection: A Cross-Generational Investigation,2023-03-09 19:27:05,"Minou Goetze, Christina Herdt, Ricarda Conrad, Stephan Stricker","http://arxiv.org/abs/2303.05380v1, http://arxiv.org/pdf/2303.05380v1",econ.GN
32946,gn,"In this paper, we identify two different sets of problems. The first covers
the problems that the iterative proportional fitting (IPF) algorithm was
developed to solve. These concern completing a population table by using a
sample. The other set concerns constructing a counterfactual population table
with the purpose of comparing two populations. The IPF is commonly applied by
social scientists to solve problems not only in the first set, but also in the
second one. We show that while it is legitimate to use the IPF for the first
set of problems, it is not the right tool to address the problems of the second
kind. We promote an alternative to the IPF, the NM-method, for solving problems
in the second set. We provide both theoretical and empirical comparisons of
these methods.",The iterative proportional fitting algorithm and the NM-method: solutions for two different sets of problems,2023-03-09 01:23:07,Anna Naszodi,"http://arxiv.org/abs/2303.05515v1, http://arxiv.org/pdf/2303.05515v1",econ.GN
32947,gn,"In the past, the seed keywords for CPI prediction were often selected based
on empirical summaries of prior research and literature studies, an approach prone to
omitting relevant variables and selecting invalid ones. In this paper, we design a keyword
expansion technique for CPI prediction based on the cutting-edge NLP model,
PANGU. We improve the CPI prediction ability using the corresponding web search
index. Compared with the unsupervised pre-training and supervised downstream
fine-tuning natural language processing models such as BERT and NEZHA, the
PANGU model can be expanded to obtain more reliable generated CPI keywords through
its excellent zero-shot learning capability, without the limitation of the
downstream fine-tuning data set. Finally, this paper empirically tests the
keyword prediction ability obtained by this keyword expansion method with
historical CPI data.",Research on CPI Prediction Based on Natural Language Processing,2023-03-10 05:41:47,"Xiaobin Tang, Nuo Lei","http://arxiv.org/abs/2303.05666v1, http://arxiv.org/pdf/2303.05666v1",econ.GN
32948,gn,"We apply a pseudo panel analysis of survey data from the years 2010 and 2017
about Americans' self-reported marital preferences and perform some formal
tests on the sign and magnitude of the change in educational homophily from the
generation of the early Boomers to the late Boomers, as well as from the early
GenerationX to the late GenerationX. In the analysis, we control for changes in
preferences over the course of the survey respondents' lives. We use the test
results to decide whether the popular iterative proportional fitting (IPF)
algorithm, or its alternative, the NM-method is more suitable for analyzing
revealed marital preferences. These two methods construct different tables
representing counterfactual joint educational distributions of couples.
Thereby, they disagree on the trend of revealed preferences identified from the
prevalence of homogamy by counterfactual decompositions. By finding
self-reported homophily to display a U-shaped pattern, our tests reject the
hypothesis that the IPF is suitable for constructing counterfactuals in
general, while we cannot reject the applicability of the NM. The significance
of our survey-based method-selection is due to the fact that the choice between
the IPF and the NM makes a difference not only to the identified historical
trend of revealed homophily, but also to what future paths of social inequality
are believed to be possible.",What do surveys say about the historical trend of inequality and the applicability of two table-transformation methods?,2023-03-10 16:03:02,Anna Naszodi,"http://arxiv.org/abs/2303.05895v1, http://arxiv.org/pdf/2303.05895v1",econ.GN
32949,gn,"The November 2022 ranked choice election for District 4 School Director in
Oakland, CA, was very interesting from the perspective of social choice theory.
The election did not contain a Condorcet winner and exhibited downward and
upward monotonicity paradoxes, for example. Furthermore, an error in the
settings of the ranked choice tabulation software led to the wrong candidate
being declared the winner. This article explores the strange features of this
election and places it in the broader context of ranked choice elections in the
United States.",Ranked Choice Bedlam in a 2022 Oakland School Director Election,2023-03-10 18:35:42,David McCune,"http://arxiv.org/abs/2303.05985v1, http://arxiv.org/pdf/2303.05985v1",econ.GN
32950,gn,"The first-ever article published in Research Policy was Casimir's (1971)
advocacy of academic freedom in light of the industry's increasing influence on
research in universities. Half a century later, the literature attests to the
dearth of work on the role of academic freedom for innovation. To fill this
gap, we employ instrumental variable techniques to identify the impact of
academic freedom on the quantity (patent applications) and quality (patent
citations) of innovation output. The empirical evidence suggests that improving
academic freedom by one standard deviation increases patent applications and
forward citations by 41% and 29%, respectively. The results hold in a
representative sample of 157 countries over the 1900-2015 period. This research
note is also an alarming plea to policymakers: Global academic freedom has
declined over the past decade for the first time in the last century. Our
estimates suggest that the decline of academic freedom has resulted in a global
loss quantifiable as at least 4.0% fewer patents filed and 5.9% fewer patent
citations.",Academic Freedom and Innovation: A Research Note,2023-03-10 20:29:48,"David Audretsch, Christian Fisch, Chiara Franzoni, Paul P. Momtaz, Silvio Vismara","http://arxiv.org/abs/2303.06097v1, http://arxiv.org/pdf/2303.06097v1",econ.GN
33844,th,"The main aim of this paper is to give some clarifications to the recent paper
published in Computational and Applied Mathematics by Naz and Chaudhry.",Closed form solutions of Lucas Uzawa model with externalities via partial Hamiltonian approach. Some Clarifications,2019-07-29 23:09:22,Constantin Chilarescu,"http://arxiv.org/abs/1907.12623v1, http://arxiv.org/pdf/1907.12623v1",econ.TH
32953,gn,"The network economical sharing economy, with direct exchange as a core
characteristic, is implemented both, on a commons and platform economical
basis. This is due to a gain in importance of trust, collaborative consumption
and democratic management as well as technological progress, in the form of
near zero marginal costs, open source contributions and digital transformation.
Concurrent to these commons-based drivers, the grey area between commerce and
private exchange is used to exploit work, safety and tax regulations by central
platform economists. Instead of central intermediators, the blockchain
technology makes decentralized consensus finding, using Proof-of-Work (PoW)
within a self-sustaining Peer-to-Peer network, possible. Therefore, a
blockchain-based open source mediation seems to offer a commons-compatible
implementation of the sharing economy. This thesis is investigated through a
qualitative case study of Sardex and Interlace with their blockchain
application, based on expert interviews and a structured content analysis. To
detect the most commons-compatible implementation, the different implementation
options through conventional platform intermediators, an open source blockchain
with PoW as well as Interlaces' permissioned blockchain approach, are compared.
The following confrontation is based on deductive criteria, which illustrates
the inherent characteristics of a commons-based sharing economy.",A Commons-Compatible Implementation of the Sharing Economy: Blockchain-Based Open Source Mediation,2023-03-14 13:56:50,"Petra Tschuchnig, Manfred Mayr, Maximilian Tschuchnig, Peter Haber","http://dx.doi.org/10.5220/0009345600720079, http://arxiv.org/abs/2303.07786v1, http://arxiv.org/pdf/2303.07786v1",econ.GN
32954,gn,"This paper proposes a new measure of relative intergenerational mobility
along the educational trait as a proxy of inequality of opportunity. The new
measure is more suitable for controlling for the variations in the trait
distributions of individuals and their parents than the commonly used
intergenerational persistence coefficient. This point is illustrated by our
empirical analysis of US census data from the period between 1960 and 2015: we
show that controlling for the variations in the trait distributions adequately
is vital in assessing the part of intergenerational mobility which is not
caused by the educational expansion. Failing to do so can potentially reverse
the relative priority of various policies aiming at reducing the ""heritability""
of high school degrees and tertiary education diplomas.",Are high school degrees and university diplomas equally heritable in the US? A new measure of relative intergenerational mobility,2023-03-15 11:39:48,"Anna Naszodi, Liliana Cuccu","http://arxiv.org/abs/2303.08445v1, http://arxiv.org/pdf/2303.08445v1",econ.GN
32955,gn,"We explore firm-level markup and profit rates during the COVID-19 pandemic
for a panel of 3,611 publicly traded firms in Compustat and find increases for
the average firm. We offer conditions to give markups and profit rate forecasts
a causal interpretation of what would have happened had the pandemic not
occurred. Our estimations suggest that had the pandemic not occurred, markups
would have been 4% and 7% higher than observed in 2020 and 2021, respectively,
and profit rates would have been 2.1 and 6.4 percentage points lower. We
perform a battery of tests to assess the robustness of our approach. We further
show significant heterogeneity in the impact of the pandemic on firms by key
firm characteristics and industry. We find that firms with lower than
forecasted markups tend to have lower stock-exchange tenure and fewer
employees.",The Effects of the Pandemic on Market Power and Profitability,2023-03-15 20:03:53,"Juan Andres Espinosa-Torres, Jaime Ramirez-Cuellar","http://arxiv.org/abs/2303.08765v1, http://arxiv.org/pdf/2303.08765v1",econ.GN
32956,gn,"In recent years, European regulators have debated restricting the time an
online tracker can track a user to protect consumer privacy better. Despite the
significance of these debates, there has been a noticeable absence of any
comprehensive cost-benefit analysis. This article fills this gap on the cost
side by suggesting an approach to estimate the economic consequences of
lifetime restrictions on cookies for publishers. The empirical study on cookies
of 54,127 users who received 128 million ad impressions over 2.5 years yields
an average cookie lifetime of 279 days, with an average value of EUR 2.52 per
cookie. Only 13% of all cookies increase their daily value over time, but their
average value is about four times larger than the average value of all cookies.
Restricting cookies' lifetime to one year (two years) decreases their lifetime
value by 25% (19%), which represents a decrease in the value of all cookies of
9% (5%). In light of the EUR 10.60 billion cookie-based display ad revenue in
Europe, such restrictions would endanger EUR 904 million (EUR 576 million)
annually, equivalent to EUR 2.08 (EUR 1.33) per EU internet user. The article
discusses the marketing strategy challenges and opportunities these results imply for
advertisers and publishers.",Economic Consequences of Online Tracking Restrictions: Evidence from Cookies,2023-03-16 11:22:17,"Klaus M. Miller, Bernd Skiera","http://arxiv.org/abs/2303.09147v2, http://arxiv.org/pdf/2303.09147v2",econ.GN
32957,gn,"This book starts from the basic questions that had been raised by the
founders of Economic theory, Smith, Ricardo, and Marx: what makes the value of
commodities, what are production, exchange, money and incomes like profits,
wages and rents. The answers that these economists had provided were mostly
wrong, above all by defining the equivalence of commodities at the level of
exchange, but also because of a confusion made between values and prices, and
wrong views of what production really is and the role of fixed capital. Using
the mathematical theory of measurement and the physical theory of dimensional
analysis, this book provides a coherent theory of value based on an equivalence
relation not at the level of exchange, but of production. Indeed exchange is
considered here as an equivalence relation between money and a monetary price,
and not between commodities, modern monetary theory having demonstrated that
money is not a commodity. The book rejects the conception of production as a
surplus, which owes much to Sraffa's theory of production prices, and is shown
to be severely flawed. It founds the equivalence of commodities at the level of
a production process considered as a transformation process. It rehabilitates
the labor theory of value, based on the connection between money and labor due to
the monetary payment of wages, which allows the homogenization of various kinds
of concrete labor into abstract labor. It shows that value is then a dimension
of commodities and that this dimension is time, i.e. the time of physics. On
this background, the book shows that the calculation of values for all
commodities is always possible, even in the case of joint production, and that
there cannot be any commodity residue left by this calculation. As a further
step, this book provides a coherent theory of the realization of the product,
which occurs in the circulation process. Using an idea - the widow's cruse -
introduced by Keynes in his Treatise on Money, it brings to light the mechanism
behind the transformation of money values into money prices and of
surplus-value into profits and other transfer incomes, ensuring the formation
of monetary profits. The book sheds some light on the rate of profit, its
determinants and its evolution, showing in particular the paramount importance
of capitalist consumption as one of its main determinants. In passing it
explains the reasons why in the real world there is a multiplicity of profit
rates. Finally, it allows to solve in a precise and illustrated way the
problems raised by the Marxist law of the tendency of the rate of profit to
fall. Most of the results obtained translate into principles, the first ones
being truly basic, the following ones less basic, but all of them being
fundamental. All in all, this book might provide the first building blocks to
develop a full-fledged and scientific economic theory to many fellow
economists, critical of neo-classical theory, but who have not yet discovered
the bases of a complete and coherent alternative.","Main Concepts and Principles of Political Economy -- Production and Values, Distribution and Prices, Reproduction and Profits",2023-03-13 13:25:50,Christian Flamant,"http://arxiv.org/abs/2303.09399v1, http://arxiv.org/pdf/2303.09399v1",econ.GN
32958,gn,"The first observation of the paper is that methods for determining
proportional representation in electoral systems may be suitable as
alternatives to the pro-rata order matching algorithm used in stock exchanges.
The main part of our work is to comprehensively consider various well known
proportional representation methods and analyse in detail their suitability
for replacing the pro-rata algorithm. Our analysis consists of a theoretical
study as well as simulation studies based on data sampled from a distribution
which has been suggested in the literature as a model of limit orders. Based on
our analysis, we put forward the suggestion that the well known Hamilton's
method is a superior alternative to the pro-rata algorithm for order matching
applications.",On Using Proportional Representation Methods as Alternatives to Pro-Rata Based Order Matching Algorithms in Stock Exchanges,2023-03-17 00:07:45,"Sanjay Bhattacherjee, Palash Sarkar","http://arxiv.org/abs/2303.09652v4, http://arxiv.org/pdf/2303.09652v4",econ.GN
32980,gn,"When land becomes more connected, its value can change because of network
externalities. This idea is intuitive and appealing to developers and
policymakers, but documenting their importance is empirically challenging
because it is difficult to isolate the determinants of land value in practice.
We address this challenge with real estate in The Sandbox, a virtual economy
built on blockchain, which provides a series of natural experiments that can be
used to estimate the causal impact of land-based network externalities. Our
results show that when new land becomes available, the network value of
existing land increases, but there is a trade-off as new land also competes
with existing supply. Our work illustrates the benefits of using virtual worlds
to conduct policy experiments.",Capitalising the Network Externalities of New Land Supply in the Metaverse,2023-03-30 09:26:56,"Kanis Saengchote, Voraprapa Nakavachara, Yishuang Xu","http://arxiv.org/abs/2303.17180v1, http://arxiv.org/pdf/2303.17180v1",econ.GN
32959,gn,"A growing empirical literature finds that firms pass the cost of minimum wage
hikes onto consumers via higher retail prices. Yet, little is known about
minimum wage effects on wholesale prices and whether retailers face a wholesale
cost shock in addition to the labor cost shock. I exploit the vertically
disintegrated market structure of Washington state's legal recreational
cannabis industry to investigate minimum wage pass-through to wholesale and
retail prices. In a difference-in-differences with continuous treatment
framework, I utilize scanner data on $6 billion of transactions across the
supply chain and leverage geographic variation in firms' minimum wage exposure
across six minimum wage hikes between 2018 and 2021. When ignoring wholesale
cost effects, I find retail pass-through elasticities consistent with existing
literature -- yet retail pass-through elasticities more than double once
wholesale cost effects are accounted for. Retail markups do not adjust to the
wholesale cost shock, indicating a full pass-through of the wholesale cost
shock to retail prices. The results highlight the importance of analyzing the
entire supply chain when evaluating the product market effects of minimum wage
hikes.",Minimum Wage Pass-through to Wholesale and Retail Prices: Evidence from Cannabis Scanner Data,2023-03-18 11:55:54,Carl Hase,"http://arxiv.org/abs/2303.10367v3, http://arxiv.org/pdf/2303.10367v3",econ.GN
32960,gn,"In regulatory proceedings, few issues are more hotly debated than the cost of
capital. This article formalises the theoretical foundation of cost of capital
estimation for regulatory purposes. Several common regulatory practices lack a
solid foundation in the theory. For example, the common practice of estimating
a single cost of capital for the regulated firm suffers from a circularity
problem, especially in the context of a multi-year regulatory period. In
addition, the relevant cost of debt cannot be estimated using the
yield-to-maturity on a corporate bond. We suggest possible directions for
reform of cost of capital practices in regulatory proceedings.",A Re-Examination of the Foundations of Cost of Capital for Regulatory Purposes,2023-03-20 04:22:28,Darryl Biggar,"http://arxiv.org/abs/2303.10818v1, http://arxiv.org/pdf/2303.10818v1",econ.GN
32961,gn,"This study rigorously investigates the Keynesian cross model of a national
economy with a focus on the dynamic relationship between government spending
and economic equilibrium. The model consists of two ordinary differential
equations regarding the rate of change of national income and the rate of
consumer spending. Three dynamic relationships between national income and
government spending are studied. This study aims to classify the stability of
equilibrium states for the economy by discussing different cases of government
spending. Furthermore, the implication of government spending on the national
economy is investigated based on phase portraits and bifurcation analysis of
the dynamical system in each scenario.",Bifurcation analysis of the Keynesian cross model,2023-03-20 05:41:16,Xinyu Li,"http://arxiv.org/abs/2303.10835v1, http://arxiv.org/pdf/2303.10835v1",econ.GN
32962,gn,"Regulators and browsers increasingly restrict user tracking to protect users
privacy online. Such restrictions also have economic implications for
publishers that rely on selling advertising space to finance their business,
including their content. According to an analysis of 42 million ad impressions
related to 111 publishers, when user tracking is unavailable, the raw price
paid to publishers for ad impressions decreases by about -60%. After
controlling for differences in users, advertisers, and publishers, this
decrease remains substantial, at -18%. More than 90% of the publishers realize
lower prices when prevented from engaging in user tracking. Publishers offering
broad content, such as news websites, suffer more from user tracking
restrictions than publishers with thematically focused content. Collecting a
users browsing history, perceived as generally intrusive to most users,
generates negligible value for publishers. These results affirm the prediction
that ensuring user privacy online has substantial costs for online publishers;
this article offers suggestions to reduce these costs.",The Economic Value of User Tracking for Publishers,2023-03-20 09:50:35,"Rene Laub, Klaus M. Miller, Bernd Skiera","http://arxiv.org/abs/2303.10906v1, http://arxiv.org/pdf/2303.10906v1",econ.GN
32963,gn,"This paper combines the Copula-CoVaR approach with the ARMA-GARCH-skewed
Student-t model to investigate the tail dependence structure and extreme risk
spillover effects between the international agricultural futures and spot
markets, taking four main agricultural commodities, namely soybean, maize,
wheat, and rice as examples. The empirical results indicate that the tail
dependence structures for the four futures-spot pairs are quite different, and
each of them exhibits a certain degree of asymmetry. In addition, the futures
market for each agricultural commodity has significant and robust extreme
downside and upside risk spillover effects on the spot market, and the downside
risk spillover effects for both soybeans and maize are significantly stronger
than their corresponding upside risk spillover effects, while there is no
significant strength difference between the two risk spillover effects for
wheat and rice. This study provides a theoretical basis for strengthening
global food cooperation and maintaining global food security, and has practical
significance for investors to use agricultural commodities for risk management
and portfolio optimization.",Tail dependence structure and extreme risk spillover effects between the international agricultural futures and spot markets,2023-03-20 14:29:17,"Yun-Shi Dai, Peng-Fei Dai, Wei-Xing Zhou","http://dx.doi.org/10.1016/j.intfin.2023.101820, http://arxiv.org/abs/2303.11030v1, http://arxiv.org/pdf/2303.11030v1",econ.GN
32964,gn,"This paper reanalyzes Khanna (2023), which studies labor market effects of
schooling in India through regression discontinuity designs. Absent from the
data are four districts close to the discontinuity; restoring them cuts the
reduced-form impacts on schooling and log wages by 57% and 63%. Using
regression-specific optimal bandwidths and a robust variance estimator
clustered at the geographic unit of treatment makes impacts statistically
indistinguishable from 0. That finding is robust to varying the identifying
threshold and the bandwidth. The estimates of general equilibrium effects and
elasticities of substitution are not unbiased and have effectively infinite
first and second moments.",Large-Scale Education Reform in General Equilibrium: Regression Discontinuity Evidence from India: Comment,2023-03-21 18:45:53,David Roodman,"http://arxiv.org/abs/2303.11956v4, http://arxiv.org/pdf/2303.11956v4",econ.GN
32986,gn,"This paper investigates whether ideological indoctrination by living in a
communist regime relates to low economic performance in a market economy. We
recruit North Korean refugees and measure their implicit bias against South
Korea by using the Implicit Association Test. Conducting double auction and
bilateral bargaining market experiments, we find that North Korean refugees
with a larger bias against the capitalistic society have lower expectations
about their earning potential, exhibit trading behavior with lower target
profits, and earn less profits. These associations are robust to conditioning
on correlates of preferences, human capital, and assimilation experiences.",Implicit Bias against a Capitalistic Society Predicts Market Earnings,2023-04-03 02:31:38,"Syngjoo Choi, Kyu Sup Hahn, Byung-Yeon Kim, Eungik Lee, Jungmin Lee, Sokbae Lee","http://arxiv.org/abs/2304.00651v1, http://arxiv.org/pdf/2304.00651v1",econ.GN
32965,gn,"Immigrants are always accused of stealing people's jobs. Yet, by assumption,
standard immigration models -- the neoclassical model and
Diamond-Mortensen-Pissarides matching model -- rule out displacement of native
workers by immigrants. In these models, when immigrants enter the labor force,
they are absorbed by firms without taking jobs away from native jobseekers.
This paper develops a more general model of immigration, which allows for
displacement of native workers by immigrants. Such generalization seems crucial
to understand and study all the possible effects of immigration on labor
markets. The model blends a matching framework with job rationing. In it, the
entry of immigrants increases the unemployment rate of native workers.
Moreover, the reduction in employment rate is sharper when the labor market is
depressed because jobs are scarcer then. On the plus side, immigration makes it
easier for firms to recruit, which improves firm profits. The overall effect of
immigration on native welfare depends on the state of the labor market.
Immigration always reduces welfare when the labor market is inefficiently
slack, but some immigration improves welfare when the labor market is
inefficiently tight.",Modeling the Displacement of Native Workers by Immigrants,2023-03-23 17:52:48,Pascal Michaillat,"http://arxiv.org/abs/2303.13319v2, http://arxiv.org/pdf/2303.13319v2",econ.GN
32966,gn,"Transforming food systems is essential to bring about a healthier, equitable,
sustainable, and resilient future, including achieving global development and
sustainability goals. To date, no comprehensive framework exists to track food
systems transformation and their contributions to global goals. In 2021, the
Food Systems Countdown to 2030 Initiative (FSCI) articulated an architecture to
monitor food systems across five themes: 1 diets, nutrition, and health; 2
environment, natural resources, and production; 3 livelihoods, poverty, and
equity; 4 governance; and 5 resilience and sustainability. Each theme comprises
three-to-five indicator domains. This paper builds on that architecture,
presenting the inclusive, consultative process used to select indicators and an
application of the indicator framework using the latest available data,
constructing the first global food systems baseline to track transformation.
While data are available to cover most themes and domains, critical indicator
gaps exist such as off-farm livelihoods, food loss and waste, and governance.
Baseline results demonstrate every region or country can claim positive
outcomes in some parts of food systems, but none are optimal across all
domains, and some indicators are independent of national income. These results
underscore the need for dedicated monitoring and transformation agendas
specific to food systems. Tracking these indicators to 2030 and beyond will
allow for data-driven food systems governance at all scales and increase
accountability for urgently needed progress toward achieving global goals.",The State of Food Systems Worldwide: Counting Down to 2030,2023-03-23 23:56:21,"Kate Schneider, Jessica Fanzo, Lawrence Haddad, Mario Herrero, Jose Rosero Moncayo, Anna Herforth, Roseline Reman, Alejandro Guarin, Danielle Resnick, Namukolo Covic, Christophe Béné, Andrea Cattaneo, Nancy Aburto, Ramya Ambikapathi, Destan Aytekin, Simon Barquera, Jane Battersby-Lennard, Ty Beal, Paulina Bizzoto Molina, Carlo Cafiero, Christine Campeau, Patrick Caron, Piero Conforti, Kerstin Damerau, Michael DiGirolamo, Fabrice DeClerck, Deviana Dewi, Ismahane Elouafi, Carola Fabi, Pat Foley, Ty Frazier, Jessica Gephart, Christopher Golden, Carlos Gonzalez Fischer, Sheryl Hendriks, Maddalena Honorati, Jikun Huang, Gina Kennedy, Amos Laar, Rattan Lal, Preetmoninder Lidder, Brent Loken, Quinn Marshall, Yuta Masuda, Rebecca McLaren, Lais Miachon, Hernán Muñoz, Stella Nordhagen, Naina Qayyum, Michaela Saisana, Diana Suhardiman, Rashid Sumaila, Maximo Torrero Cullen, Francesco Tubiello, Jose-Luis Vivero-Pol, Patrick Webb, Keith Wiebe","http://arxiv.org/abs/2303.13669v2, http://arxiv.org/pdf/2303.13669v2",econ.GN
32967,gn,"In 2013, TSOs from the Central European Region complained to the Agency for
the Cooperation of Energy Regulators because of increasing unplanned flows that
were presumed to be caused by a joint German-Austrian bidding zone in the
European electricity market. This paper empirically analyses the effects of the
split of this bidding zone in 2018 on planned and unplanned cross-border flows
between Germany, Austria, Poland, the Czech Republic, Slovakia, and Hungary.
For all bidding zones, apart from the German-Austrian one, planned flows
increased. Further, I find that around the policy intervention, between 2017 and
2019, unplanned flows between Germany and Austria as well as between the Czech
Republic and Slovakia decreased. However, for Poland, increasing unplanned flows
are found.",The effect of the Austrian-German bidding zone split on unplanned cross-border flows,2023-03-23 11:06:07,Theresa Graefe,"http://arxiv.org/abs/2303.14182v1, http://arxiv.org/pdf/2303.14182v1",econ.GN
32968,gn,"I examine the impacts of extending residency training programs on the supply
and quality of physicians practicing primary care. I leverage mandated extended
residency lengths for primary care practitioners that were rolled out over 20
years in Canada on a province-by-province basis. I compare these primary care
specialties to other specialties that did not change residency length (first
difference) before and after the policy implementation (second difference) to
assess how physician supply evolved in response. To examine quality outcomes, I
use a set of scraped data and repeat this difference-in-differences
identification strategy for complaints resulting in censure against physicians
in Ontario.
  I find that the number of primary care providers declined by 5% for up to nine
years after the policy change. These declines are particularly pronounced among
new graduates and younger physicians, suggesting that the policy change dissuaded
these physicians from entering primary care residencies. I find no impact on
physician quality as measured by public censure. This suggests that
extending primary care training caused declines in physician supply without any
concomitant improvement in the quality of these physicians. This has
implications for current plans to extend residency training programs.",Effects of extending residencies on the supply and quality of family medicine practitioners; difference-in-differences evidence from the implementation of mandatory family medicine residencies in Canada,2023-03-24 21:56:55,Stephenson Strobel,"http://arxiv.org/abs/2303.14232v2, http://arxiv.org/pdf/2303.14232v2",econ.GN
32969,gn,"Despite the popularity of product recommendations on online investment
platforms, few studies have explored their impact on investor behaviors. Using
data from a global e-commerce platform, we apply regression discontinuity
design to causally examine the effects of product recommendations on online
investors' mutual fund investments. Our findings indicate that recommended
funds experience a significant rise in purchases, especially among low
socioeconomic status investors who are most influenced by these
recommendations. However, investors tend to suffer significantly worse
investment returns after purchasing recommended funds, and this negative impact
is also most significant for investors with low socioeconomic status. To
explain this disparity, we find investors tend to gather less information and
expend reduced effort in fund research when buying recommended funds.
Furthermore, investors' redemption timing for recommended funds is less optimal
than for non-recommended funds. We also find that recommended funds experience a
larger return reversal than non-recommended funds. In conclusion, product
recommendations make investors behave more irrationally and these negative
consequences are most significant for investors with low socioeconomic status,
which can amplify wealth inequality among investors in financial markets.",The Effect of Product Recommendations on Online Investor Behaviors,2023-03-24 23:19:43,"Ruiqi Rich Zhu, Cheng He, Yu Jeffrey Hu","http://arxiv.org/abs/2303.14263v2, http://arxiv.org/pdf/2303.14263v2",econ.GN
32987,gn,"We propose a generalization of the synthetic control method to a
multiple-outcome framework, which improves the reliability of treatment effect
estimation. This is done by supplementing the conventional pre-treatment time
dimension with the extra dimension of related outcomes in computing the
synthetic control weights. Our generalization can be particularly useful for
studies evaluating the effect of a treatment on multiple outcome variables. To
illustrate our method, we estimate the effects of non-pharmaceutical
interventions (NPIs) on various outcomes in Sweden in the first 3 quarters of
2020. Our results suggest that if Sweden had implemented stricter NPIs like the
other European countries by March, then there would have been about 70% fewer
cumulative COVID-19 infection cases and deaths by July, and 20% fewer deaths
from all causes in early May, whereas the impacts of the NPIs were relatively
mild on the labor market and economic outcomes.",Synthetic Controls with Multiple Outcomes: Estimating the Effects of Non-Pharmaceutical Interventions in the COVID-19 Pandemic,2023-04-05 10:37:35,"Wei Tian, Seojeong Lee, Valentyn Panchenko","http://arxiv.org/abs/2304.02272v1, http://arxiv.org/pdf/2304.02272v1",econ.GN
32970,gn,"The bidding is the Public Administration's administrative process and other
designated persons by law to select the best proposal, through objective and
impersonal criteria, for contracting services and purchasing goods. In times of
globalization, it is common for companies seeking to expand their business by
participating in biddings. Brazilian legislation allows the participation of
foreign suppliers in bids held in the country. Through a quantitative approach,
this article discusses the weight of foreign suppliers' involvement in federal
bidding between 2011 and 2018. To this end, a literature review was conducted
on public procurement and international biddings. Besides, an extensive data
search was achieved through the Federal Government Procurement Panel. The
results showed that between 2011 and 2018, more than R\$ 422.6 billion was
confirmed in public procurement processes, and of this total, about R\$ 28.9
billion was confirmed to foreign suppliers. The Ministry of Health accounted
for approximately 88.67% of these confirmations. The Invitation, Competition
and International Competition modalities accounted for 0.83% of the amounts
confirmed to foreign suppliers. Impossible Bidding, Waived Bidding, and Reverse
Auction modalities accounted for 99.17% of the confirmed quantities to foreign
suppliers. Based on the discussion of the results and the limitations found,
some directions for further studies and measures to increase public resources
expenditures' effectiveness and efficiency are suggested.",Foreign participation in federal biddings: A quantitative approach using the procurement panel,2023-03-25 14:51:46,Carlos Ferreira,"http://dx.doi.org/10.21874/rsp.v72.i4.4628, http://arxiv.org/abs/2303.14447v1, http://arxiv.org/pdf/2303.14447v1",econ.GN
32971,gn,"In this paper, we investigate the nature of the density metric, which is
employed in the literature on smart specialization and the product space. We
find that although density is supposed to capture relatedness between a
country's current specialization pattern and potential products that it may
diversify into, density is also strongly correlated with the level of
diversification of the country, and (less strongly) with the ubiquity of the
product. Together, diversity and ubiquity capture 93% of the variance of
density. We split density into a part that corresponds to related variety, and
a part that does not (i.e., unrelated variety). In regressions for predicting
gain or loss of specialization, both these parts are significant. The relative
influence of related variety increases with the level of diversification of the
country: only countries that are already diversified show a strong influence of
related variety. In our empirical analysis, we put equal emphasis on gains and
losses of specialization. Our data show that the specializations that were lost
by a country often represented higher product complexity than the
specializations that were gained over the same period. This suggests that
'smart' specialization should be aimed at preserving (some) existing
specializations in addition to gaining new ones. Our regressions indicate that
the relative roles of related and unrelated variety for explaining loss of
specialization are similar to the case of specialization gains. Finally, we
show that unrelated variety is also important in indicators that are
derived from density, such as the Economic Complexity Outlook Index.",Related or Unrelated Diversification: What is Smart Specialization?,2023-03-25 15:38:04,"Önder Nomaler, Bart Verspagen","http://arxiv.org/abs/2303.14458v1, http://arxiv.org/pdf/2303.14458v1",econ.GN
32972,gn,"With 70 million dead, World War II remains the most devastating conflict in
history. Of the survivors, millions were displaced, returned maimed from the
battlefield, or spent years in captivity. We examine the impact of such wartime
experiences on labor market careers and show that they often become apparent
only at certain life stages. While war injuries reduced employment in old age,
former prisoners of war postponed their retirement. Many displaced workers,
particularly women, never returned to employment. These responses are in line
with standard life-cycle theory and thus likely extend to other conflicts.",Exposure to War and Its Labor Market Consequences over the Life Cycle,2023-03-25 17:39:15,"Sebastian T. Braun, Jan Stuhler","http://arxiv.org/abs/2303.14486v1, http://arxiv.org/pdf/2303.14486v1",econ.GN
32973,gn,"This paper focuses on a decentralized profit-center firm that uses negotiated
transfer pricing as an instrument to coordinate the production process.
Moreover, the firm's headquarters gives its divisions full authority over
operating decisions and it is assumed that each division can additionally make
an upfront investment decision that enhances the value of internal trade.
Building on earlier work, the paper expands the number of divisions by one downstream
division and relaxes basic assumptions, such as the assumption of common
knowledge of rationality. Based on an agent-based simulation, it is examined
whether cognitively bounded individuals modeled by fuzzy Q-learning achieve the
same results as fully rational utility maximizers. In addition, the paper
investigates different constellations of bargaining power to see whether a
deviation from the recommended optimal bargaining power leads to a higher
managerial performance. The simulation results show that fuzzy Q-learning
agents perform at least as well as, or better than, fully rational
utility maximizers. The study also indicates that, in scenarios with different
marginal costs of divisions, a deviation from the recommended optimal
distribution ratio of the bargaining power of divisions can lead to higher
investment levels and, thus, to an increase in the headquarters' profit.",Specific investments under negotiated transfer pricing: effects of different surplus sharing parameters on managerial performance: An agent-based simulation with fuzzy Q-learning agents,2023-03-25 19:45:32,Christian Mitsch,"http://arxiv.org/abs/2303.14515v1, http://arxiv.org/pdf/2303.14515v1",econ.GN
32974,gn,"Contemporary deep learning based solution methods used to compute approximate
equilibria of high-dimensional dynamic stochastic economic models are often
faced with two pain points. The first problem is that the loss function
typically encodes a diverse set of equilibrium conditions, such as market
clearing and households' or firms' optimality conditions. Hence the training
algorithm trades off errors between those -- potentially very different --
equilibrium conditions. This renders the interpretation of the remaining errors
challenging. The second problem is that portfolio choice in models with
multiple assets is only pinned down for low errors in the corresponding
equilibrium conditions. In the beginning of training, this can lead to
fluctuating policies for different assets, which hampers the training process.
To alleviate these issues, we propose two complementary innovations. First, we
introduce Market Clearing Layers, a neural network architecture that
automatically enforces all the market clearing conditions and borrowing
constraints in the economy. Encoding economic constraints into the neural
network architecture reduces the number of terms in the loss function and
enhances the interpretability of the remaining equilibrium errors. Furthermore,
we present a homotopy algorithm for solving portfolio choice problems with
multiple assets, which ameliorates numerical instabilities arising in the
context of deep learning. To illustrate our method we solve an overlapping
generations model with two permanent risk aversion types, three distinct
assets, and aggregate shocks.",Economics-Inspired Neural Networks with Stabilizing Homotopies,2023-03-26 22:42:16,"Marlon Azinovic, Jan Žemlička","http://arxiv.org/abs/2303.14802v1, http://arxiv.org/pdf/2303.14802v1",econ.GN
32975,gn,"Digital platforms use recommendations to facilitate the exchange between
platform actors, such as trade between buyers and sellers. Platform actors
expect, and legislators increasingly require, that competition, including
recommendations, is fair - especially for a market-dominating platform on
which self-preferencing could occur. However, testing for fairness on platforms
is challenging because offers from competing platform actors usually differ in
their attributes, and many distinct fairness definitions exist. This article
considers these challenges, develops a five-step approach to measure fair
competition through recommendations on digital platforms, and illustrates this
approach by conducting two empirical studies. These studies examine Amazon's
search engine recommendations on the Amazon marketplace for more than a million
daily observations from three countries. They find no consistent evidence for
unfair competition through search engine recommendations. The article also
discusses applying the five-step approach in other settings to ensure
compliance with new regulations governing fair competition on digital
platforms, such as the Digital Markets Act in the European Union or the
proposed American Innovation and Choice Online Act in the United States.",Measuring Fair Competition on Digital Platforms,2023-03-27 10:10:28,"Lukas Jürgensmeier, Bernd Skiera","http://arxiv.org/abs/2303.14947v1, http://arxiv.org/pdf/2303.14947v1",econ.GN
32976,gn,"This paper proposes a general equilibrium model for multi-passenger
ridesharing systems, in which interactions between ridesharing drivers,
passengers, platforms, and transportation networks are endogenously captured.
Stable matching is modeled as an equilibrium problem in which no ridesharing
driver or passenger can reduce ridesharing disutility by unilaterally switching
to another matching sequence. This paper is one of the first studies that
explicitly integrates the ridesharing platform multi-passenger matching problem
into the model. By integrating matching sequence with hyper-network,
ridesharing-passenger transfers are avoided in a multi-passenger ridesharing
system. Moreover, the matching stability between the ridesharing drivers and
passengers is extended to address the multi-OD multi-passenger case in terms of
matching sequence. The paper provides a proof for the existence of the proposed
general equilibrium. A sequence-bush algorithm is developed for solving the
multi-passenger ridesharing equilibrium problem. This algorithm is capable of
handling complex ridesharing constraints implicitly. Results illustrate that the
proposed sequence-bush algorithm outperforms a general-purpose solver, and
provides insights into the equilibrium of the joint stable matching and route
choice problem. Numerical experiments indicate that ridesharing trips are
typically longer than average trip lengths. Sensitivity analysis suggests that
a properly designed ridesharing unit price is necessary to achieve network
benefits, and travelers with relatively lower values of time are more likely to
participate in ridesharing.",A general equilibrium model for multi-passenger ridesharing systems with stable matching,2023-03-29 14:09:31,"Rui Yao, Shlomo Bekhor","http://dx.doi.org/10.1016/j.trb.2023.05.012, http://arxiv.org/abs/2303.16595v2, http://arxiv.org/pdf/2303.16595v2",econ.GN
32977,gn,"In the passenger car segment, battery-electric vehicles (BEV) have emerged as
the most promising option to de-fossilize transportation. For heavy-duty
vehicles (HDV), the technology space still appears to be more open. Aside from
BEV, electric road systems (ERS) for dynamic power transfer are discussed, as
well as indirect electrification with trucks that use hydrogen fuel cells or
e-fuels. Here we investigate the power sector implications of these alternative
options. We apply an open-source capacity expansion model to future scenarios
of Germany with high renewable energy shares, drawing on detailed route-based
truck traffic data. Results show that power sector costs are lowest for
flexibly charged BEV that also carry out vehicle-to-grid operations, and
highest for HDV using e-fuels. If BEV and ERS-BEV are not charged in an
optimized way, power sector costs increase, but are still substantially lower
than in scenarios with hydrogen or e-fuels. This is a consequence of the
relatively poor energy efficiency of indirect HDV electrification, which
outweighs its temporal flexibility benefits. We further find a higher use of
solar photovoltaic energy for BEV and ERS-BEV, while hydrogen and e-fuel HDV
lead to a higher use of wind power and fossil electricity generation. Results
are qualitatively robust in sensitivity analyses without the European
interconnection or with capacity limits for wind power expansion.","Power sector effects of alternative options for de-fossilizing heavy-duty vehicles: go electric, and charge smartly",2023-03-29 15:35:27,"Carlos Gaete-Morales, Julius Jöhrens, Florian Heining, Wolf-Peter Schill","http://arxiv.org/abs/2303.16629v2, http://arxiv.org/pdf/2303.16629v2",econ.GN
32978,gn,"Using administrative data on Taiwanese lottery winners, this paper examines
the effects of cash windfalls on entrepreneurship. We compare the start-up
decisions of households winning more than 1.5 million NTD (50,000 USD) in the
lottery in a particular year with those of households winning less than 15,000
NTD (500 USD). Our results suggest that a substantial windfall increases the
likelihood of starting a business by 1.5 percentage points (125% from the
baseline mean). The startup wealth elasticity is between 0.25 and 0.36. Moreover, households
who tend to be liquidity-constrained drive the windfall-induced entrepreneurial
response. Finally, we examine how households with a business react to a cash
windfall and find that serial entrepreneurs are more likely to start a new
business but do not change their decision to continue the current business.","Liquidity Constraints, Cash Windfalls, and Entrepreneurship: Evidence from Administrative Data on Lottery Winners",2023-03-30 00:20:34,"Hsuan-Hua Huang, Hsing-Wen Han, Kuang-Ta Lo, Tzu-Ting Yang","http://arxiv.org/abs/2303.17029v1, http://arxiv.org/pdf/2303.17029v1",econ.GN
32979,gn,"The study was designed to determine the entrepreneurial capability and
engagement of persons with disabilities toward a framework for inclusive
entrepreneurship. The researcher used descriptive and correlational approaches
through purposive random sampling. The sample came from the City of General
Trias and the Municipality of Rosario, registered under their respective
Persons with Disabilities Affairs Offices (PDAO). The findings indicated that
the respondents are from the working class, are primarily female, are mostly
single, have college degrees, live in a medium-sized home, and earn the bare
minimum. Furthermore, the respondents rated their entrepreneurial capability as
somewhat capable, and the majority of engagement responses were somewhat
engaged. Notably, age and civil status have significant relationships with
most of the variables under study. Finally, the PWD respondents noted the
following perceived challenges: lack of financial capacity, lack of access to credit
and other financial institutions, absence of business information, absence of
access to data, lack of competent business skills, lack of family support, and
lack of personal motivation. As a result, the author proposed a framework that
emphasizes interaction and cooperation between national and local government
units in the formulation of policies promoting inclusive entrepreneurship for
people with disabilities.",Entrepreneurial Capability And Engagement Of Persons With Disabilities Toward A Framework For Inclusive Entrepreneurship,2023-03-30 06:26:36,Xavier Lawrence D. Mendoza,"http://dx.doi.org/10.5281/zenodo.7782073, http://arxiv.org/abs/2303.17130v1, http://arxiv.org/pdf/2303.17130v1",econ.GN
32981,gn,"Building energy retrofits have been identified as key to realizing climate
mitigation goals in Canada. This study aims to provide a roadmap for existing
mid-rise building retrofits in order to understand the required capital
investment, energy savings, energy cost savings, and carbon footprint for
mid-rise residential buildings in Canada. This study employed the energy
simulation software EnergyPlus to examine the energy performance of 11 energy
retrofit measures for a typical multi-unit residential building (MURB) in Metro
Vancouver, British Columbia, Canada, and to evaluate the pre- and post-retrofit
operational energy performance of the selected MURB. Two base building models,
powered by natural gas (NG-building) and electricity (E-building), were created
in SketchUp. The energy simulation
results were combined with cost and emission impact data to evaluate the
economic and environmental performance of the selected energy retrofit
measures. The results indicated that the NG-building can produce significant
GHG emission reductions (from 27.64 tCO2e to 3.77 tCO2e) by implementing these
energy retrofit measures. In terms of energy savings, solar PV, ASHP, water
heater HP, and HRV enhancement have great energy saving potential compared to
other energy retrofit measures. In addition, temperature setback, lighting, and
airtightness enhancement present the best economic performance from a life
cycle perspective. However, windows, ASHP, and solar PV are not economical
choices because of higher life cycle costs. While ASHP can increase life cycle
costs for the NG-building, with the financial incentives provided by
governments, ASHP could be the best choice to reduce GHG emissions when
stakeholders make decisions on implementing energy retrofits.",Life cycle costing analysis of deep energy retrofits of a mid-rise building to understand the impact of energy conservation measures,2023-04-02 08:31:08,Haonan Zhang,"http://arxiv.org/abs/2304.00456v1, http://arxiv.org/pdf/2304.00456v1",econ.GN
32982,gn,"Abundant evidence has tracked the labour market and health assimilation of
immigrants, including static analyses of differences in how foreign-born and
native-born residents consume health care services. However, we know much less
about how migrants' patterns of health care usage evolve with time of
residence, especially in countries providing universal or quasi-universal
coverage. We investigate this process in Spain by combining all the available
waves of the local health survey, which allows us to separately identify
period, cohort, and assimilation effects. We find that the evidence of health
assimilation is limited and solely applies to migrant females' visits to
general practitioners. Nevertheless, the differential effects of ageing on
health care use between foreign-born and native-born populations contributes to
the convergence of utilisation patterns in most health services after 20 years
in Spain. Substantial heterogeneity over time and by region of origin both
suggest that studies modelling future welfare state finances would benefit from
a more thorough assessment of migration.",Immigrant assimilation in health care utilisation in Spain,2023-04-02 11:21:56,"Zuleika Ferre, Patricia Triunfo, José-Ignacio Antón","http://arxiv.org/abs/2304.00482v1, http://arxiv.org/pdf/2304.00482v1",econ.GN
32983,gn,"Purpose: The objective of this research was to show the response of the
potential reduction of excess capacity in terms of capital intensity to the
growth rate of labor productivity in the manufacturing industrial sector.
Design/Methodology/Approach: The research was carried out in 2019 in 55 groups
of Indian manufacturing industry within six major Indian industrial states.
Mainly, the research used the modified VES (Variable Elasticity of
Substitution) estimation model and focused on the value of the additional
substitution parameter of capital intensity (mu > 0). Findings: Almost all
selected industry groups within the six states need capital-intensive
production. The results show that the additional parameter of capital intensity
(mu) is greater than zero for all industry groups, meaning that higher output
per worker can be obtained by increasing capital per worker. Practical
Implications: The research shows that the increasing need for capital investment
to achieve higher labor productivity is likely to induce manufacturing units to
use more of their existing capacity. It reveals that investors in these selected
six states can increase their capital investment. Originality/Value: The
analysis shows that capital intensity is an essential variable for the reduction
of excess capacity and cannot be ignored in
explaining productivity.",Reduction of Excess Capacity with Response of Capital Intensity,2023-04-02 11:56:50,Samidh Pal,"http://arxiv.org/abs/2304.00489v1, http://arxiv.org/pdf/2304.00489v1",econ.GN
32984,gn,"This paper examines the determinants of fertility among women at different
stages of their reproductive lives in Uruguay. To this end, we employ time
series analysis methods based on data from 1968 to 2021 and panel data
techniques based on department-level statistical information from 1984 to 2019.
The results of our first econometric exercise indicate a cointegration
relationship between fertility and economic performance, education and infant
mortality, with differences observed by reproductive stage. We find a negative
relationship between income and fertility for women aged 20-29 that persists
for women aged 30 and over. This result suggests that having children is
perceived as an opportunity cost for women in this age group. We also observe a
negative relationship between education and adolescent fertility, which has
implications for the design of public policies. A panel data analysis with
econometric techniques allowing us to control for unobserved heterogeneity
confirms that income is a relevant factor for all groups of women and
reinforces the crucial role of education in reducing teenage fertility. We also
identify a negative correlation between fertility and employment rates for
women aged 30 and above. We outline some possible explanations for these
findings in the context of work-life balance issues and argue for the
importance of implementing social policies to address them.",The short- and long-term determinants of fertility in Uruguay,2023-04-02 16:58:54,"Zuleika Ferre, Patricia Triunfo, José-Ignacio Antón","http://arxiv.org/abs/2304.00539v1, http://arxiv.org/pdf/2304.00539v1",econ.GN
32985,gn,"This paper studies the extent to which the cyclicality of occupational
mobility shapes that of aggregate unemployment and its duration distribution.
We document the relation between workers' occupational mobility and
unemployment duration over the long run and business cycle. To interpret this
evidence, we develop a multi-sector business cycle model with heterogeneous
agents. The model is quantitatively consistent with several important features
of the US labor market: procyclical gross and countercyclical net occupational
mobility, the large volatility of unemployment and the cyclical properties of
the unemployment duration distribution, among many others. Our analysis shows
that occupational mobility due to workers' changing career prospects, and not
occupation-wide differences, interacts with aggregate conditions to drive the
fluctuations of the unemployment duration distribution and the aggregate
unemployment rate.",Unemployment and Endogenous Reallocation over the Business Cycle,2023-04-02 17:40:17,"Carlos Carrillo-Tudela, Ludo Visschers","http://arxiv.org/abs/2304.00544v1, http://arxiv.org/pdf/2304.00544v1",econ.GN
33001,gn,"This study examines the relationship between automation and income inequality
across different countries, taking into account the varying levels of
technological adoption and labor market institutions. The research employs a
panel data analysis using data from the World Bank, the International Labour
Organization, and other reputable sources. The findings suggest that while
automation leads to an increase in productivity, its effect on income
inequality depends on the country's labor market institutions and social
policies.",The Impact of Automation on Income Inequality: A Cross-Country Analysis,2023-04-16 20:15:23,Asuna Gilfoyle,"http://arxiv.org/abs/2304.07835v1, http://arxiv.org/pdf/2304.07835v1",econ.GN
32988,gn,"This paper investigates the inter-regional intra-industry disparity within
selected Indian manufacturing industries and industrial states. The study uses
three measures - the Output-Capital Ratio, the Capital-Labor Ratio, and the
Output-Labor Ratio - to critically evaluate the level of disparity in average
efficiency of labor and capital, as well as capital intensity. Additionally,
the paper compares the rate of disparity of per capita income between six major
industrial states. The study finds that underutilization of capacity is driven
by an unequal distribution of high-skilled labor supply and upgraded
technologies. To address these disparities, the paper suggests that
policymakers campaign for labor training and technology promotion schemes
throughout all regions of India. By doing so, the study argues, the country can
reduce regional inequality and improve economic outcomes for all.",A Comparative Study of Inter-Regional Intra-Industry Disparity,2023-04-05 16:27:15,Samidh Pal,"http://arxiv.org/abs/2304.02430v1, http://arxiv.org/pdf/2304.02430v1",econ.GN
32989,gn,"March 2020 confinement has shot Portuguese savings to historic levels,
reaching 13.4% of gross disposable income in early 2021 (INE, 2023). To find
similar savings figures we need to go back to 1999. With consumption reduced to
a bare minimum, the Portuguese were forced to save. Households reduced spending
more because of a lack of alternatives to consumption than for any other
reason. The relationship between consumption, savings, and income has played an
important role in economic thought (Keynes, 1936, 1937; Friedman, 1957).
Traditionally, high levels of savings have been associated with benefits to the
economy, since financing capacity is enhanced (Singh, 2010). However, the
effects here can be twofold. On the one hand, it seems that Portugal faced the
so-called Savings Paradox (Keynes, 1936). If consumers decide to save a
considerable part of their income, there will be less demand for the goods
produced. Lower demand will lead to lower supply, production, income, and,
paradoxically, fewer savings. On the other hand, after having accumulated
savings at the peak of the pandemic, the Portuguese are now using them to carry
out postponed consumption and, hopefully, to better resist the escalating
inflation. This study aims to examine Portuguese households' savings evolution
during the most critical period of the pandemic, between March 2020 and April
2022. The methodology analyses the correlation between savings, consumption,
and GDP, as well as GDP's decomposition into its various components, and
concludes that these suddenly forced savings do not fit traditional economic
theories of savings.",Portuguese Households Savings in Times of Pandemic: A Way to Better Resist the Escalating Inflation?,2023-04-05 19:44:41,"Ana Lucia Luis, Natalia Teixeira, Rui Braz","http://arxiv.org/abs/2304.02573v1, http://arxiv.org/pdf/2304.02573v1",econ.GN
32990,gn,"The existing buildings and building construction sectors together are
responsible for over one-third of the total global energy consumption and
nearly 40% of total greenhouse gas (GHG) emissions. GHG emissions from the
building sector are made up of embodied emissions and operational emissions.
Recognizing the importance of reducing energy use and emissions associated with
the building sector, governments have introduced policies, standards, and
design guidelines to improve building energy performance and reduce GHG
emissions associated with operating buildings. However, policy initiatives that
reduce embodied emissions of the existing building sector are lacking. This
research aims to develop policy strategies to reduce embodied carbon emissions
in retrofits. To achieve this goal, this research conducted a literature review
and identified policies and financial incentives in
British Columbia (BC) for reducing overall GHG emissions from the existing
building sector. Then, this research analyzed worldwide policies and incentives
that reduce embodied carbon emissions in the existing building sector. After
reviewing the two categories of retrofit policies, the author identified links
and opportunities between existing BC strategies, tools, and incentives, and
global embodied emission strategies. Finally, this research compiled key
findings from all resources and provided policy recommendations for reducing
embodied carbon emissions in retrofits in BC.",Leveraging policy instruments and financial incentives to reduce embodied carbon in energy retrofits,2023-04-07 01:26:53,Haonan Zhang,"http://arxiv.org/abs/2304.03403v1, http://arxiv.org/pdf/2304.03403v1",econ.GN
32991,gn,"The scale and terms of aggregate borrowing in an economy depend on the manner
in which wealth is distributed across potential creditors with heterogeneous
beliefs about the future. This distribution evolves over time as uncertainty is
resolved, in favour of optimists if loans are repaid in full, and in favour of
pessimists if there is widespread default. We model this process in an economy
with two assets - risky bonds and risk-free cash. Within periods, given the
inherited distribution of wealth across belief types, the scale and terms of
borrowing are endogenously determined. Following good states, aggregate
borrowing and the face value of debt both rise, and the interest rate falls. In
the absence of noise, wealth converges to beliefs that differ systematically
from the objective probability governing state realisations, with greater
risk-aversion associated with greater optimism. In the presence of noise, the
economy exhibits periods of high performance, punctuated by periods of crisis
and stagnation.",The Dynamics of Leverage and the Belief Distribution of Wealth,2023-04-07 04:29:16,"Bikramaditya Datta, Rajiv Sethi","http://arxiv.org/abs/2304.03436v1, http://arxiv.org/pdf/2304.03436v1",econ.GN
32992,gn,"Using a combination of incentive modeling and empirical meta-analyses, this
paper provides a pointed critique of the incentive systems that drive venture
capital firms to optimize their practices towards activities that increase
General Partner utility yet are disjoint from improving the underlying asset of
startup equity. We propose a ""distributed venture firm"" powered by software
automations and governed by a set of functional teams called ""Pods"" that carry
out specific tasks with immediate and long-term payouts given on a deal-by-deal
basis. Avenues are provided for further research to validate this model and
discover likely paths to implementation.",Distributed VC Firms: The Next Iteration of Venture Capital,2023-04-07 10:43:00,"Mohib Jafri, Andy Wu","http://arxiv.org/abs/2304.03525v1, http://arxiv.org/pdf/2304.03525v1",econ.GN
32993,gn,"As the United States is witnessing elevated racial differences pertaining to
economic disparities, we have found a unique example contrary to the
traditional narrative. Idaho is the only US state where Blacks earn more than
Whites and all other races. In this paper, we examine how Idaho Blacks might
have achieved economic success and, more importantly, what factors might have
led to this achievement in reducing racial and economic disparities.
Preliminary research suggests that fewer barriers to land ownership, smaller
populations, well-knit communities, men's involvement in the family, and a
relatively less hostile environment have played a significant role. Further
research by historians can help the nation uncover the underlying factors to
see if some factors are transportable to other parts of the country.",Idaho Blacks: Quiet Economic Triumph of Enduring Champions,2023-04-07 17:48:58,"Rama K. Malladi, Phillip Thompson","http://arxiv.org/abs/2304.03676v1, http://arxiv.org/pdf/2304.03676v1",econ.GN
32994,gn,"In this paper we explore two intertwined issues. First, using primary data we
examine the impact of asymmetric networks, built on rich relational information
on several spheres of living, on access to workfare employment in rural India.
We find that unidirectional relations, as opposed to reciprocal relations, and
the concentration of such unidirectional relations increase access to workfare
jobs. Further in-depth exploration provides evidence that patron-client
relations are responsible for this differential access to such employment for
rural households. Complementary to our empirical exercises, we construct and
analyse a game-theoretical model supporting our findings.","Asymmetric networks, clientelism and their impacts: households' access to workfare employment in rural India",2023-04-09 16:34:50,"Anindya Bhattacharya, Anirban Kar, Alita Nandi","http://arxiv.org/abs/2304.04236v1, http://arxiv.org/pdf/2304.04236v1",econ.GN
32995,gn,"We show that the knowledge of an agent carrying non-trivial unawareness
violates the standard property of 'necessitation'; therefore, necessitation
cannot be used to refute the standard state-space model. A revised version of
necessitation preserves non-trivial unawareness and solves the classical
Dekel-Lipman-Rustichini result. We propose a generalised knowledge operator
consistent with the standard state-space model of unawareness, including the
model of infinite state-space.",On the state-space model of unawareness,2023-04-10 17:45:13,Alex A. T. Rathke,"http://arxiv.org/abs/2304.04626v2, http://arxiv.org/pdf/2304.04626v2",econ.GN
32996,gn,"As the first phase in the Business Process Management (BPM) lifecycle,
process identification addresses the problem of identifying which processes to
prioritize for improvement. Process selection plays a critical role in this
phase, but it is a step with known pitfalls. Decision makers rely frequently on
subjective criteria, and their knowledge of the alternative processes put
forward for selection is often inconsistent. This leads to poor quality
decision-making and wastes resources. In recent years, a rejection of a
one-size-fits-all approach to BPM in favor of a more context-aware approach has
gained significant academic attention. In this study, the role of context in
the process selection step is considered. The context is qualitative,
subjective, sensitive to decision-making bias and politically charged. We
applied a design-science approach and engaged industry decision makers through
a combination of research methods to assess how different configurations of
process inputs influence and ultimately improve the quality of the process
selection step. The study highlights the impact of framing effects on context
and provides five guidelines to improve effectiveness.",Five guidelines to improve context-aware process selection: an Australian banking perspective,2023-04-11 10:42:01,"Nigel Adams, Adriano Augusto, Michael Davern, Marcello La Rosa","http://arxiv.org/abs/2304.05033v1, http://arxiv.org/pdf/2304.05033v1",econ.GN
32997,gn,"We use algorithmic and network-based tools to build and analyze the bipartite
network connecting jobs with the skills they require. We quantify and represent
the relatedness between jobs and skills by using statistically validated
networks. Using the fitness and complexity algorithm, we compute a skill-based
complexity of jobs. This quantity is positively correlated with the average
salary, abstraction, and non-routinarity level of jobs. Furthermore, coherent
jobs - defined as the ones requiring closely related skills - have, on average,
lower wages. We find that salaries may not always reflect the intrinsic value
of a job, but rather other wage-setting dynamics that may not be directly
related to its skill composition. Our results provide valuable information for
policymakers, employers, and individuals to better understand the dynamics of
the labor market and make informed decisions about their careers.",Mapping job complexity and skills into wages,2023-04-11 17:39:21,"Sabrina Aufiero, Giordano De Marzo, Angelica Sbardella, Andrea Zaccaria","http://arxiv.org/abs/2304.05251v1, http://arxiv.org/pdf/2304.05251v1",econ.GN
32998,gn,"Economic models assume that payroll tax burdens fall fully on workers, but
where does tax incidence fall when taxes are firm-specific and time-varying?
Unemployment insurance in the United States has the key feature of varying both
across employers and over time, creating the potential for labor demand
responses if tax costs cannot be fully passed on to worker wages. Using state
policy changes and matched employer-employee job spells from the LEHD, I study
how employment and earnings respond to payroll tax increases for highly exposed
employers. I find significant drops in employment growth driven by lower
hiring, and minimal evidence of pass-through to earnings. The negative
employment effects are strongest for young and low-earning workers.",Payroll Tax Incidence: Evidence from Unemployment Insurance,2023-04-12 07:46:06,Audrey Guo,"http://arxiv.org/abs/2304.05605v1, http://arxiv.org/pdf/2304.05605v1",econ.GN
32999,gn,"Lithium is a critical material for the energy transition, but conventional
procurement methods have significant environmental impacts. In this study, we
utilize regional energy system optimizations to investigate the techno-economic
potential of the low-carbon alternative of direct lithium extraction in deep
geothermal plants. We show that geothermal plants will become cost-competitive
in conjunction with lithium extraction, even under unfavorable conditions and
partially displace photovoltaics, wind power, and storage from energy systems.
Our analysis indicates that if 10% of municipalities in the Upper Rhine Graben
area in Germany constructed deep geothermal plants, they could provide enough
lithium to produce about 1.2 million electric vehicle battery packs per year,
equivalent to 70% of today's annual electric vehicle registrations in the
European Union. This approach could offer significant environmental benefits
and has high potential for mass application also in other countries, such as
the United States, United Kingdom, France, and Italy, highlighting the
importance of further research and development of this technology.",Low-carbon Lithium Extraction Makes Deep Geothermal Plants Cost-competitive in Energy Systems,2023-04-14 12:25:36,"Jann Michael Weinand, Ganga Vandenberg, Stanley Risch, Johannes Behrens, Noah Pflugradt, Jochen Linßen, Detlef Stolten","http://arxiv.org/abs/2304.07019v1, http://arxiv.org/pdf/2304.07019v1",econ.GN
33000,gn,"In this paper, the interaction of geopolitical actors in the production and
sale of military equipment is studied. In Section 2, the production of military
equipment is considered as a two-person zero-sum game in which the strategies of
the players are defined by the information state of the actors. The optimal
strategy of the geopolitical actors is found. In Section 3, the conflict process
is considered and the optimal strategy is determined for each
geopolitical actor.",Game theoretical models of geopolitical processes. Part I,2023-04-15 17:12:19,"O. A. Malafeyev, N. D. Redinskikh, V. F. Bogachev","http://arxiv.org/abs/2304.07568v1, http://arxiv.org/pdf/2304.07568v1",econ.GN
33845,th,"The main aim of this paper is to prove the existence of a new production
function with variable elasticity of factor substitution. This production
function is a more general form which includes the Cobb-Douglas production
function and the CES production function as particular cases. The econometric
estimates presented in the paper confirm some other results and reinforce the
conclusion that sigma is well below the Cobb-Douglas value of one.",A Production Function with Variable Elasticity of Factor Substitution,2019-07-29 23:20:09,Constantin Chilarescu,"http://arxiv.org/abs/1907.12624v1, http://arxiv.org/pdf/1907.12624v1",econ.TH
33002,gn,"India's tea business has a long history and plays a significant role in the
economy of the nation. India is the world's second-largest producer of tea,
with Assam and Darjeeling being the most well-known tea-growing regions. Since
the British introduced tea cultivation to India in the 1820s, the nation has
produced tea. Millions of people are employed in the tea sector today, and it
contributes significantly to the Indian economy in terms of revenue. The
production of tea has changed significantly in India over the years, moving
more and more towards organic and sustainable practices. The industry has also
had to deal with difficulties like competition from other nations that produce
tea, varying tea prices, and labor-related problems. Despite these obstacles,
the Indian tea business is still growing and produces a wide variety of teas,
such as black tea, green tea, and chai tea. Additionally, the sector encourages
travel through ""tea tourism,"" which allows tourists to see how tea is made and
discover its origins in India. Overall, India's tea business continues to play
a significant role in its history, culture, and economy.",Study on the tea market in India,2023-04-16 21:19:52,"Adit Vinod Nair, Adarsh Damani, Devansh Khandelwal, Harshita Sachdev, Sreayans Jain","http://arxiv.org/abs/2304.07851v1, http://arxiv.org/pdf/2304.07851v1",econ.GN
33003,gn,"This chapter develops a feedback economic model that explains the rise of the
Sicilian mafia in the 19th century. Grounded in economic theory, the model
incorporates causal relationships between the mafia activities, predation, law
enforcement, and the profitability of local businesses. Using computational
experiments with the model, we explore how different factors and feedback
effects impact the mafia activity levels. The model explains important
historical observations such as the emergence of the mafia in wealthier regions
and its absence in the poorer districts despite the greater levels of banditry.",Economic Origins of the Sicilian Mafia: A Simulation Feedback Model,2023-04-17 06:36:42,"Oleg V. Pavlov, Jason M. Sardell","http://dx.doi.org/10.1007/978-3-030-67190-7_6, http://arxiv.org/abs/2304.07975v1, http://arxiv.org/pdf/2304.07975v1",econ.GN
33004,gn,"We study pre-vote interactions in a committee that enacts a welfare-improving
reform through voting. Committee members use decentralized promises contingent
on the reform enactment to influence the vote outcome. Equilibrium promises
prevent beneficial coalitional deviations and minimize total promises. We show
that multiple equilibria exist, involving promises from high- to low-intensity
members to enact the reform. Promises dissuade reform opponents from enticing
the least enthusiastic reform supporters to vote against the reform. We explore
whether some recipients of the promises can be supporters of the reform and
discuss the impact of polarization on the total promises.",Democratic Policy Decisions with Decentralized Promises Contingent on Vote Outcome,2023-04-17 09:17:26,"Ali Lazrak, Jianfeng Zhang","http://arxiv.org/abs/2304.08008v4, http://arxiv.org/pdf/2304.08008v4",econ.GN
33005,gn,"Damage functions in integrated assessment models (IAMs) map changes in
climate to economic impacts and form the basis for most of estimates of the
social cost of carbon. Implicit in these functions lies an unwarranted
assumption that restricts the spatial variation (Svar) and temporal variability
(Tvar) of changes in climate to be null. This could bias damage estimates and
the climate policy advice from IAMs. While the effects of Tvar have been
studied in the literature, those of Svar and their interactions with Tvar have
not. Here we present estimates of the economic costs of climate change that
account for both Tvar and Svar, as well as for the seasonality of damages
across sectors. Contrary to the results of recent studies which show little
effect of Tvar on expected losses, we reveal that ignoring Svar produces
large downward biases, as warming is highly heterogeneous over space. Using a
conservative calibration for the damage function, we show that previous
estimates are biased downwards by about 23-36%, which represents additional
losses of about US$1,400-US$2,300 billion by 2050 and US$17-US$28 trillion by
the end of the century, under a high emissions scenario. The present value of
losses during the period 2020-2100 would be larger than reported in previous
studies by $47-$66 trillion or about 1/2 to 3/4 of annual global GDP in 2020.
Our results imply that using global mean temperature change in IAMs as a
summary measure of warming is not adequate for estimating the costs of climate
change. Instead, IAMs should include a more complete description of climate
conditions.",Economic consequences of the spatial and temporal variability of climate change,2023-04-17 11:07:11,"Francisco Estrada, Richard S. J. Tol, Wouter Botzen","http://arxiv.org/abs/2304.08049v1, http://arxiv.org/pdf/2304.08049v1",econ.GN
33006,gn,"Income and wealth allocation are foundational components of how economies
operate. These are complex distributions, and it is hard to get a real sense
for their dynamics using simplifications like average or median. One metric
that characterizes such distributions better is the Gini Index, which on one
extreme is 0, a completely equitable distribution, and on the other extreme is
1, the most inequitable, where a single individual has all the resources. Most
experts agree that viable economies cannot exist at either extreme, but
identifying a preferred range has historically been a matter of conflicting
political philosophies and emotional appeals. This research explores instead
whether there might be a theoretical and empirical basis for a preferred Gini
Index. Specifically, I explore a simple question: Before financial systems
existed, how were natural assets allocated? Intrinsic human attributes such as
height, strength, & beauty were the original measures of value in which human
social groups traded. Each of these attributes is distributed in a diverse and
characteristic way, which I propose has gradually established the acceptable
bounds of inequality through evolutionary psychology. I collect data for a wide
array of such traits and calculate a Gini Index for them in a novel way
assuming their magnitudes to be analogous to levels of financial wealth. The
values fall into a surprisingly intuitive pattern ranging from Gini=0.02 to
0.51. Income distributions in many countries are within the top half of this
range after taxes and transfers are applied (0.2 to 0.4). Wealth distributions
on the other hand are mostly outside of this range, with the United States at a
very inequitable 0.82. Additional research is needed on the interconnections
between this range of ""natural"" Gini Indexes and human contentment; and whether
any explicit policy goals should be considered to target them.",The Pie: How Has Human Evolution Distributed Non-Financial Wealth?,2023-04-20 00:10:12,Dave Costenaro,"http://arxiv.org/abs/2304.09971v1, http://arxiv.org/pdf/2304.09971v1",econ.GN
33007,gn,"Agriculture and conflict are linked. The extent and the sign of the
relationship vary by the motives of actors and the forms of conflict. We
examine whether harvest-time agricultural shocks, associated with transitory
shifts in employment and income, lead to changes in social conflict. Using 13
years of data from eight Southeast Asian countries, we find a seven percent
increase in battles, and a 12 percent increase in violence against civilians in
the croplands during harvest season compared to the rest of the year. These
statistically significant effects plausibly link agricultural harvest with
conflict through the rapacity mechanism. We validate this mechanism by
comparing the changes in harvest-time violence during presumably good vs. bad
harvest years. We also observe a three percent decrease in protests and riots
during the same period that would align with the opportunity cost mechanism,
but these estimates are not statistically significant. The offsetting
resentment mechanism may be partly responsible for this, but we also cannot
rule out the possibility of the null effect. These findings, which contribute
to research on the agroclimatic and economic roots of conflict, offer valuable
insights to policymakers by suggesting the temporal displacement of conflict,
and specifically of violence against civilians, due to the seasonality of
agricultural harvest in rice-producing regions of Southeast Asia.",Agricultural Shocks and Social Conflict in Southeast Asia,2023-04-20 03:49:06,"Justin Hastings, David Ubilava","http://arxiv.org/abs/2304.10027v2, http://arxiv.org/pdf/2304.10027v2",econ.GN
33025,gn,"Social networks can sustain cooperation by amplifying the consequences of a
single defection through a cascade of relationship losses. Building on Jackson
et al. (2012), we introduce a novel robustness notion to characterize low
cognitive complexity (LCC) networks - a subset of equilibrium networks that
imposes a minimal cognitive burden to calculate and comprehend the consequences
of defection. We test our theory in a laboratory experiment and find that
cooperation is higher in equilibrium than in non-equilibrium networks. Within
equilibrium networks, LCC networks exhibit higher levels of cooperation than
non-LCC networks. Learning is essential for the emergence of equilibrium play.",Cooperation and Cognition in Social Networks,2023-05-02 08:39:57,"Edoardo Gallo, Joseph Lee, Yohanes Eko Riyanto, Erwin Wong","http://arxiv.org/abs/2305.01209v1, http://arxiv.org/pdf/2305.01209v1",econ.GN
33008,gn,"Increasing the adoption of alternative technologies is vital to ensure a
successful transition to net-zero emissions in the manufacturing sector. Yet
there is no model to analyse technology adoption and the impact of policy
interventions in generating sufficient demand to reduce cost. Such a model is
vital for assessing policy-instruments for the implementation of future energy
scenarios. The design of successful policies for technology uptake becomes
increasingly difficult when associated market forces/factors are uncertain,
such as energy prices or technology efficiencies. In this paper we formulate a
novel robust market potential assessment problem under uncertainty, resulting
in policies that are immune to uncertain factors. We demonstrate two case
studies: the potential use of carbon capture and storage for iron and steel
production across the EU, and the transition to hydrogen from natural gas in
steam boilers across the chemicals industry in the UK. Each robust optimisation
problem is solved using an iterative cutting planes algorithm which enables
existing models to be solved under uncertainty. By taking advantage of
parallelisation we are able to solve the nonlinear robust market assessment
problem for technology adoption in times within the same order of magnitude as
the nominal problem. Policy makers often wish to trade-off certainty with
effectiveness of a solution. Therefore, we apply an approximation to chance
constraints, varying the amount of uncertainty to locate less certain but more
effective solutions. Our results demonstrate the possibility of locating robust
policies for the implementation of low-carbon technologies, as well as
providing direct insights for policy-makers into the decrease in policy
effectiveness resulting from increasing robustness. The approach we present is
extensible to a large number of policy design and alternative technology
adoption problems.",Robust Market Potential Assessment: Designing optimal policies for low-carbon technology adoption in an increasingly uncertain world,2023-04-20 13:45:06,"Tom Savage, Antonio del Rio Chanona, Gbemi Oluleye","http://arxiv.org/abs/2304.10203v1, http://arxiv.org/pdf/2304.10203v1",econ.GN
33009,gn,"We study the quality of secondary school track assignment decisions in the
Netherlands, using a regression discontinuity design. In 6th grade, primary
school teachers assign each student to a secondary school track. If a student
scores above a track-specific cutoff on the standardized end-of-primary
education test, the teacher can upwardly revise this assignment. By comparing
students just left and right of these cutoffs, we find that between 50-90% of
the students are ""trapped in track"": these students are on the high track after
four years, only if they started on the high track in first year. The remaining
(minority of) students are ""always low"": they are always on the low track after
four years, independently of where they started. These proportions hold for
students near the cutoffs that shift from the low to the high track in first
year by scoring above the cutoff. Hence, for a majority of these students the
initial (unrevised) track assignment decision is too low. The results replicate
across most of the secondary school tracks, from the vocational to the academic
tracks, and stand out against an education system with a lot of upward and
downward track mobility.",The quality of school track assignment decisions by teachers,2023-04-20 23:29:27,"Joppe de Ree, Matthijs Oosterveen, Dinand Webbink","http://arxiv.org/abs/2304.10636v1, http://arxiv.org/pdf/2304.10636v1",econ.GN
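The regression discontinuity comparison of students just left and right of a test-score cutoff (arXiv:2304.10636) can be sketched as a local linear regression with a treatment jump at the cutoff. The simulated data, bandwidth, and variable names below are illustrative assumptions, not the authors' data or exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated example: a centred test score (running variable) and an outcome
# (e.g., being on the high track after four years) that jumps at the cutoff.
n = 2000
score = rng.normal(0.0, 10.0, n)
cutoff = 0.0
above = (score >= cutoff).astype(float)  # upward revision possible above cutoff
outcome = 0.2 + 0.01 * score + 0.35 * above + rng.normal(0, 0.3, n)

# Local linear RD: restrict to a bandwidth around the cutoff and allow
# separate slopes on each side; the coefficient on `above` is the RD estimate.
h = 5.0
w = np.abs(score - cutoff) <= h
X = np.column_stack([above, score - cutoff, above * (score - cutoff)])
X = sm.add_constant(X)
fit = sm.OLS(outcome[w], X[w]).fit(cov_type="HC1")
print("estimated jump at the cutoff:", fit.params[1])
```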
33010,gn,"Public infrastructure procurement is crucial as a prerequisite for public and
private investments and for economic and social capital growth. However, low
performance in execution severely hinders infrastructure provision and benefits
delivery. One of the most sensitive phases in public infrastructure procurement
is the design because of the strategic relationship that it potentially creates
between procurers and contractors in the execution stage, affecting the costs
and the duration of the contract. In this paper, using recent developments in
non-parametric frontiers and propensity score matching, we evaluate the
performance in the execution of public works in Italy. The analysis provides
robust evidence of significant improvement of performance where procurers opt
for design and build contracts, which lead to lower transaction costs,
allowing contractors to better accommodate the project in the execution. Our
findings bear considerable policy implications.",How 'one-size-fits-all' public works contract does it better? An assessment of infrastructure provision in Italy,2023-04-21 09:53:34,"Massimo Finocchiaro Castroa, Calogero Guccio, Ilde Rizzo","http://arxiv.org/abs/2304.10776v1, http://arxiv.org/pdf/2304.10776v1",econ.GN
33011,gn,"We consider situations where consumers are aware that a statistical model
determines the price of a product based on their observed behavior. Using a
novel experiment varying the context similarity between participant data and a
product, we find that participants manipulate their responses to a survey about
personal characteristics, and manipulation is more successful when the contexts
are similar. Moreover, participants demand less privacy, and make less optimal
privacy choices when the contexts are less similar. Our findings highlight the
importance of data privacy policies in the age of big data, where behavior in
seemingly unrelated contexts might affect prices.",Strategic Responses to Personalized Pricing and Demand for Privacy: An Experiment,2023-04-22 17:11:51,"Inácio Bó, Li Chen, Rustamdjan Hakimov","http://arxiv.org/abs/2304.11415v1, http://arxiv.org/pdf/2304.11415v1",econ.GN
33012,gn,"The study of US-China relations has always been a crucial topic in our
economic development [4][5][7], and the US presidential election plays an
integral role in shaping these relations. The presidential election is held
every four years, and it is crucial to assess the impact of the 2020 election
on China to prepare for the potential effects of the 2024 US presidential
election on the Chinese economy [8][16][20]. To achieve this, we have gathered
statistical data from nearly 70 years and analyzed data related to the US
economy. We have classified the collected data and utilized the analytic
hierarchy process [1][2][3] to evaluate the President's policy
implementation. This approach allowed us to obtain a comprehensive ranking of
the indicators [6][9][11][33]. We then quantified the index data and employed
the entropy weight method to calculate the weight of each index data. Finally,
we used the weighted total score calculation to evaluate the economic status of
the United States in a hierarchical manner after the election of Presidents
Trump and Biden [15][18]. We optimized the index system by incorporating
additional dimension indexes such as ""foreign policy"". We then crawled China's
specific development data from 1990-2020 and substituted it into the model for
analysis and evaluation. This enabled us to obtain detailed quantitative index
data of the degree of influence [10][12][14]. To address China's shortcomings
in science and technology innovation, we recommend strengthening economic
cooperation with developed countries, diversifying market development, and
actively expanding the domestic market through feasible solutions
[13][16][23][36].",Breaking the general election effect. The impact of the 2020 US presidential election on Chinese economy and counter strategies,2023-04-23 05:44:53,Junjie Zhao,"http://arxiv.org/abs/2304.11518v1, http://arxiv.org/pdf/2304.11518v1",econ.GN
33049,gn,Any finite conversation can be rationalized.,Rational Dialogues,2023-05-15 22:43:28,"John Geanakoplos, Herakles Polemarchakis","http://arxiv.org/abs/2305.10164v1, http://arxiv.org/pdf/2305.10164v1",econ.GN
33846,th,"In a recent paper, Naz and Chaudry provided two solutions for the model of
Lucas-Uzawa, via the Partial Hamiltonian Approach. The first one of these
solutions coincides exactly with that determined by Chilarescu. For the second
one, they claim that this is a new solution, fundamentally different from that
obtained by Chilarescu. We will prove in this paper, using the existence and
uniqueness theorem of nonlinear differential equations, that this is not at all
true.",On the Solutions of the Lucas-Uzawa Model,2019-07-30 00:24:57,Constantin Chilarescu,"http://arxiv.org/abs/1907.12658v1, http://arxiv.org/pdf/1907.12658v1",econ.TH
33013,gn,"This research investigates the impact of gender-related differences in
preferences and efficiency in household tasks on the time distribution
throughout marriages, aiming to uncover the underlying reasons behind
gender-based discrepancies in labor earnings, especially regarding childcare
duties. By utilizing aggregated data from Japan's ""Survey on Time Use and
Leisure Activities,"" this study enhances the life cycle model introduced
initially by Blundell et al. (2018) by integrating a heterogeneous range of
ages for a child, covering her growth from infancy to adulthood. The outcomes
derived from the model are then aligned with actual data through a fitting
process, followed by simulations of policies catering to married couples'
varying educational backgrounds. Our model's calculations indicate a reduction
in maternal earnings after childbirth, consistent with the findings of the
empirical investigation known as the ""child penalty."" However, a notable
disparity emerges between the projected outcomes of the model and the observed
data during the subsequent phase of maternal earnings recovery, with a
discrepancy of approximately 40 %. Furthermore, our calculations demonstrate
that a 25 % increase in the income replacement rate for parental leave results
in an almost 20 % increase in the utilization of parental leave. In contrast, a
steady 10 % rise in wages leads to a modest 2.5 % increase in the utilization
of parental leave.","Child Care, Time Allocation, and Life Cycle",2023-04-23 07:14:30,"Hirokuni Iiboshi, Daikuke Ozaki, Yui Yoshii","http://arxiv.org/abs/2304.11531v2, http://arxiv.org/pdf/2304.11531v2",econ.GN
33014,gn,"We investigate a linear quadratic stochastic zero-sum game where two players
lobby a political representative to invest in a wind turbine farm. Players are
time-inconsistent because they discount performance with a non-constant rate.
Our objective is to identify a consistent planning equilibrium in which the
players are aware of their inconsistency and cannot commit to a lobbying
policy. We analyze the equilibrium behavior in both single player and
two-player cases, and compare the behavior of the game under constant and
non-constant discount rates. The equilibrium behavior is provided in
closed-loop form, either analytically or via numerical approximation. Our
numerical analysis of the equilibrium reveals that strategic behavior leads to
more intense lobbying without resulting in overshooting.",Present-Biased Lobbyists in Linear Quadratic Stochastic Differential Games,2023-04-23 11:22:01,"Ali Lazrak, Hanxiao Wang, Jiongmin Yong","http://arxiv.org/abs/2304.11577v2, http://arxiv.org/pdf/2304.11577v2",econ.GN
33015,gn,"In railway infrastructure, construction and maintenance is typically procured
using competitive procedures such as auctions. However, these procedures only
fulfill their purpose - using (taxpayers') money efficiently - if bidders do
not collude. Employing a unique dataset of the Swiss Federal Railways, we
present two methods in order to detect potential collusion: First, we apply
machine learning to screen tender databases for suspicious patterns. Second, we
establish a novel category-managers' tool, which allows for sequential and
decentralized screening. To the best of our knowledge, we pioneer illustrating
the adaptation and application of machine-learning based price screens to a
railway-infrastructure market.",On suspicious tracks: machine-learning based approaches to detect cartels in railway-infrastructure procurement,2023-04-24 10:55:08,"Hannes Wallimann, Silvio Sticher","http://arxiv.org/abs/2304.11888v1, http://arxiv.org/pdf/2304.11888v1",econ.GN
33016,gn,"This paper investigates economic convergence in terms of real income per
capita among the autonomous regions of Spain. In order to converge, the series
should cointegrate. This necessary condition is checked using two testing
strategies recently proposed for fractional cointegration, finding no evidence
of cointegration, which rules out the possibility of convergence between all or
some of the Spanish regions. As an additional contribution, an extension of the
critical values of one of the tests of fractional cointegration is provided for
a different number of variables and sample sizes from those originally provided
by the author, fitting those considered in this paper.","Long memory, fractional integration and cointegration analysis of real convergence in Spain",2023-04-03 16:01:48,"Mariam Kamal, Josu Arteche","http://arxiv.org/abs/2304.12433v1, http://arxiv.org/pdf/2304.12433v1",econ.GN
33017,gn,"Choosing the right stock portfolio with the highest efficiencies has always
concerned accurate and legal investors. Investors have always been concerned
about the accuracy and legitimacy of choosing the right stock portfolio with
high efficiency. Therefore, this paper aims to determine the criteria for
selecting an optimal stock portfolio with a high-efficiency ratio in the
Toronto Stock Exchange using the integrated evaluation and decision-making
trial laboratory (DEMATEL) model and Multi-Criteria Fuzzy decision-making
approaches regarding the development of the Gordon model. In the current study,
the practical factors, namely the relative weight of dividends, the discount
rate, and the dividend growth rate, have been comprehensively illustrated using
combined multi-criteria fuzzy decision-making approaches. A group of 10 experts
with at least ten years of experience in the stock exchange field was formed
to review the different and new aspects of the subject (portfolio selection) to
decide the interaction between the group members and the exchange of attitudes
and ideas regarding the criteria. The sequence of influence and effectiveness
of the main criteria with DEMATEL has shown that the profitability criterion
interacts most with other criteria. The criteria of managing methods and
operations (MPO), market, risk, and growth criteria are ranked next in terms of
interaction with other criteria. This study concludes that regarding the
model's appropriate and reliable validity in choosing the optimal stock
portfolio, it is recommended that portfolio managers in companies, investment
funds, and capital owners use the model to select stocks in the Toronto Stock
Exchange optimally.",Selecting Sustainable Optimal Stock by Using Multi-Criteria Fuzzy Decision-Making Approaches Based on the Development of the Gordon Model: A case study of the Toronto Stock Exchange,2023-04-26 23:39:45,Mohsen Mortazavi,"http://arxiv.org/abs/2304.13818v1, http://arxiv.org/pdf/2304.13818v1",econ.GN
33018,gn,"This survey article provides insights regarding the future of affirmative
action by analyzing the implementation methods and the empirical evidence on
the use of placement quotas in the Brazilian higher education system. All
federal universities have required income and racial-based quotas in Brazil
since 2012. Affirmative action in federal universities is uniformly applied
across the country, which makes evaluating its effects particularly valuable.
Affirmative action improves the outcomes of targeted students. Specifically,
race-based quotas raise the share of black students in federal universities, an
effect not observed with income-based quotas alone. Affirmative action has
downstream positive consequences for labor market outcomes. The results suggest
that beneficiaries of income- and race-based quotas experience substantial
long-term welfare benefits. There is no evidence of mismatching or negative
consequences for targeted students' peers.",Racial and income-based affirmative action in higher education admissions: lessons from the Brazilian experience,2023-04-27 06:05:02,"Rodrigo Zeidan, Silvio Luiz de Almeida, Inácio Bó, Neil Lewis Jr","http://arxiv.org/abs/2304.13936v1, http://arxiv.org/pdf/2304.13936v1",econ.GN
33019,gn,"To achieve carbon emission targets worldwide, decarbonization of the freight
transport sector will be an important factor. To this end, national governments
must make plans that facilitate this transition. National freight transport
models are a useful tool to assess what the effects of various policies and
investments may be. The state of the art consists of very detailed, static
models. While useful for short-term policy assessment, these models are less
suitable for the long-term planning necessary to facilitate the transition to
low-carbon transportation in the upcoming decades.
  In this paper, we fill this gap by developing a framework for strategic
national freight transport modeling, which we call STraM, and which can be
characterized as a multi-period stochastic network design model, based on a
multimodal freight transport formulation. In STraM, we explicitly include
several aspects that are lacking in state-of-the art national freight transport
models: the dynamic nature of long-term planning, as well as new, low-carbon
fuel technologies and long-term uncertainties in the development of these
technologies. We illustrate our model using a case study of Norway and discuss
the resulting insights. In particular, we demonstrate the relevance of modeling
multiple time periods, the importance of including long-term uncertainty in
technology development, and the efficacy of carbon pricing.",STraM: a framework for strategic national freight transport modeling,2023-04-27 10:41:29,"Steffen Jaap Bakker, E. Ruben van Beesten, Ingvild Synnøve Brynildsen, Anette Sandvig, Marit Siqveland, Asgeir Tomasgard","http://arxiv.org/abs/2304.14001v1, http://arxiv.org/pdf/2304.14001v1",econ.GN
33020,gn,"This papers aims to establish the empirical relationship between income, net
wealth and their joint distribution in a selected group of euro area countries.
I estimate measures of dependence between income and net wealth using a
semiparametric copula approach and calculate a bivariate Gini coefficient. By
combining structural inference from vector autoregressions on the macroeconomic
level with a simulation using microeconomic data, I investigate how
conventional and unconventional monetary policy measures affect the joint
distribution. Results indicate that effects of monetary policy are highly
heterogeneous across different countries, both in terms of the dependence of
income and net wealth on each other, and in terms of inequality in both income
and net wealth.",Monetary policy and the joint distribution of income and wealth: The heterogeneous case of the euro area,2023-04-27 18:20:52,Anna Stelzer,"http://arxiv.org/abs/2304.14264v1, http://arxiv.org/pdf/2304.14264v1",econ.GN
33021,gn,"Laws that govern land acquisition can lock in old paradigms. We study one
such case, the Coal Bearing Areas Act of 1957 (CBAA) which provides minimal
social and environmental safeguards, and deviates in important ways from the
Right to Fair Compensation and Transparency in Land Acquisition, Rehabilitation
and Resettlement Act 2013 (LARR). The lack of due diligence protocol in the
CBAA confers an undue comparative advantage to coal development, which is
inconsistent with India's stance to phase down coal use, reduce air pollution,
and advance modern sources of energy. We argue that the premise under which the
CBAA was historically justified is no longer valid due to a significant change
in the local context. Namely, the environmental and social costs of coal energy
are far more salient and the market has cleaner energy alternatives that are
cost competitive. We recommend updating land acquisition laws to bring coal
under the general purview of LARR or, at minimum, amending the CBAA to ensure
adequate environmental and social safeguards are in place, both in letter and
practice.",Greening our Laws: Revising Land Acquisition Law for Coal Mining in India,2023-04-28 18:53:48,"Sugandha Srivastav, Tanmay Singh","http://arxiv.org/abs/2304.14941v1, http://arxiv.org/pdf/2304.14941v1",econ.GN
33022,gn,"Great socio-economic transitions see the demise of certain industries and the
rise of others. The losers of the transition tend to deploy a variety of
tactics to obstruct change. We develop a political-economy model of interest
group competition and garner evidence of tactics deployed in the global climate
movement. From this we deduce a set of strategies for how the climate movement
competes against entrenched hydrocarbon interests. Five strategies for
overcoming obstructionism emerge: (1) Appeasement, which involves compensating
the losers; (2) Co-optation, which seeks to instigate change by working with
incumbents; (3) Institutionalism, which involves changes to public institutions
to support decarbonization; (4) Antagonism, which creates reputational or
litigation costs to inaction; and (5) Countervailance, which makes low-carbon
alternatives more competitive. We argue that each strategy addresses the
problem of obstructionism through a different lens, reflecting a diversity of
actors and theories of change within the climate movement. The choice of which
strategy to pursue depends on the institutional context.",Political Strategies to Overcome Climate Policy Obstructionism,2023-04-28 19:24:03,"Sugandha Srivastav, Ryan Rafaty","http://dx.doi.org/10.1017/S1537592722002080, http://arxiv.org/abs/2304.14960v1, http://arxiv.org/pdf/2304.14960v1",econ.GN
33023,gn,"Measuring the extent to which educational marital homophily differs in two
consecutive generations is challenging when the educational distributions of
marriageable men and women are also generation-specific. We propose a set of
criteria that indicators may have to satisfy to be considered as suitable
measures of homophily. One of our analytical criteria concerns robustness to
the number of educational categories. Another analytical criterion defined by
us concerns the association between intergenerational mobility and homophily. A
third criterion is empirical and concerns the identified historical trend of
homophily, a comprehensive aspect of inequality, in the US between 1960 and
2015. While the ordinal Liu--Lu-indicator and the cardinal indicator
constructed with the Naszodi--Mendonca method satisfy all three criteria, most
indices commonly applied in the literature do not. Our analysis sheds light on
the link between the violation of certain criteria and the sensitivity of the
historical trend of homophily obtained in the empirical assortative mating
literature.","Historical trend of homophily: U-shaped or not U-shaped? Or, how would you set a criterion to decide which criterion is better to choose a criterion?",2023-04-29 13:55:57,Anna Naszodi,"http://arxiv.org/abs/2305.00231v1, http://arxiv.org/pdf/2305.00231v1",econ.GN
33024,gn,"Research shows that naturalization can improve the socio-economic integration
of immigrants, yet many immigrants do not seek to apply. We estimate a policy
rule for a letter-based information campaign encouraging newly eligible
immigrants in Zurich, Switzerland, to naturalize. The policy rule is a decision
tree assigning treatment letters for each individual based on observed
characteristics. We assess performance by fielding the policy rule to one-half
of 1,717 immigrants, while sending random treatment letters to the other half.
Despite only moderate levels of heterogeneity, the policy tree yields a larger,
albeit insignificant, increase in application rates than each individual
treatment.",Optimal multi-action treatment allocation: A two-phase field experiment to boost immigrant naturalization,2023-04-30 21:11:34,"Achim Ahrens, Alessandra Stampi-Bombelli, Selina Kurer, Dominik Hangartner","http://arxiv.org/abs/2305.00545v2, http://arxiv.org/pdf/2305.00545v2",econ.GN
33026,gn,"Building upon the theory and methodology of agricultural policy developed in
the previous chapter, in Chapter 2 we analyse and assess agricultural policy
making in Ukraine since the breakup of the Soviet Union until today. Going from
top to bottom, we begin by describing the evolution of state policy in the
agri-food sector. First, we describe the major milestones of agricultural
policy making since independence, paving the way to the political economy of
modern agricultural policy in Ukraine. Then we describe the role of the
agri-food sector in the national economy as well as its global role in ensuring
food security. After that, we dig deeper and focus on the detailed performance
of the agricultural sector by looking at farm structures, their land use, and
overall and sector-wise untapped productivity potential. Modern agricultural
policy and the institutional set-up are analyzed in detail in Section 2.4. A
review of the agricultural up- and downstream sectors wraps up this chapter.",Agricultural Policy in Ukraine,2023-04-29 17:15:33,"Oleg Nivievskyi, Pavlo Martyshev, Sergiy Kvasha","http://arxiv.org/abs/2305.01478v1, http://arxiv.org/pdf/2305.01478v1",econ.GN
33027,gn,"Based on the panel data of 283 prefecture-level cities in China from 2006 to
2019, this paper measures the extent and mechanism of the impact of RMB real
effective exchange rate fluctuations on carbon emission intensity. The results
show that: (1) For every 1% appreciation of the real effective exchange rate of
RMB, the carbon emission intensity decreases by an average of 0.463 tons/10000
yuan; (2) The ""carbon emission reduction effect"" of RMB real effective exchange
rate appreciation is more obvious in the eastern regions, coastal areas,
regions with high urbanization levels, and areas with open information; (3) The
appreciation of RMB real effective exchange rate can reduce carbon dioxide
emission intensity by improving regional R&D and innovation ability,
restraining foreign trade and foreign investment, promoting industrial
structure optimization and upgrading, and improving income inequality.",Carbon Emission Reduction Effect of RMB Appreciation: Empirical Evidence from 283 Prefecture-Level Cities of China,2023-04-28 15:48:32,"Chen Fengxian, Lv Xiaoyao","http://arxiv.org/abs/2305.01558v1, http://arxiv.org/pdf/2305.01558v1",econ.GN
33028,gn,"Exempting soybean and rapeseed exporters from VAT has a negative effect on
the economy of $\$$44.5-60.5 million per year. The implemented policy aimed to
increase the processing of soybeans and rapeseed by Ukrainian plants. As a
result, the processors received $\$$26 million and the state budget gained
$\$$2-18 million. However, soybean farmers, mostly small and medium-sized,
received $\$$88.5 million in losses, far outweighing the benefits of processors
and the state budget.",Non-refunding of VAT to soybean exporters or economic impact of Soybean amendments,2023-04-29 17:44:04,"Oleg Nivievskyi, Roman Neyter, Olha Halytsia, Pavlo Martyshev, Oleksandr Donchenko","http://arxiv.org/abs/2305.01559v1, http://arxiv.org/pdf/2305.01559v1",econ.GN
33029,gn,"For more than a decade, Bitcoin has gained as much adoption as it has
received criticism. Fundamentally, Bitcoin is under fire for the high carbon
footprint that results from the energy-intensive proof-of-work (PoW) consensus
algorithm. There is a trend however for Bitcoin mining to adopt a trajectory
toward achieving carbon-negative status, notably due to the adoption of
methane-based mining and mining-based flexible load response (FLR) to
complement variable renewable energy (VRE) generation. Miners and electricity
sellers may increase their profitability not only by taking advantage of excess
energy, but also by selling green tokens to buyers interested in greening their
portfolios. Nevertheless, a proper ''green Bitcoin'' accounting system requires
a standard framework for the accreditation of sustainable bitcoin holdings. The
proper way to build such a framework remains contested. In this paper, we
survey the different sustainable Bitcoin accounting systems. Analyzing the
various alternatives, we suggest a path forward.","Don't Trust, Verify: Towards a Framework for the Greening of Bitcoin",2023-05-03 01:52:10,"Juan Ignacio Ibañez, Alexander Freier","http://arxiv.org/abs/2305.01815v1, http://arxiv.org/pdf/2305.01815v1",econ.GN
33030,gn,"Since Reform and Opening-up 40 years ago, China has made remarkable
achievements in economic fields. And consumption activities, including
household consumption, have played an important role in it. Consumer activity
is the end of economic activity, because the ultimate aim of other economic
activities is to meet consumer demand; consumer activity is the starting point
of economic activity, because consumption can drive economic and social
development. This paper selects the economic data of more than 40 years since
Reform and Opening-up, and establishes the Vector Autoregressive (VAR) model
and Vector Error Correction (VEC) model, analyzing the influence of consumption
level and total consumption of urban and rural residents on economic growth.
The conclusion is that the increase of urban consumption and rural consumption
can lead to the increase of GDP, and in the long run, urban consumption can
promote economic growth more than rural consumption. According to this
conclusion, we analyze the reasons and put forward some policy suggestions.",The Relationship between Consumption and Economic Growth of Chinese Urban and Rural Residents since Reform and Opening-up -- An Empirical Analysis Based on Econometrics Models,2022-11-18 19:53:57,Zhiheng Yi,"http://arxiv.org/abs/2305.02138v1, http://arxiv.org/pdf/2305.02138v1",econ.GN
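A minimal sketch of the VAR/VEC estimation used in the preceding record (arXiv:2305.02138) is given below with statsmodels; the simulated series, lag choices, and deterministic terms are illustrative assumptions, not the paper's 40-year Chinese data or exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
T = 80  # e.g., annual observations

# Simulated log GDP and log urban/rural consumption sharing a common trend
# (illustrative stand-ins for the series used in the paper).
trend = np.cumsum(rng.normal(0.03, 0.02, T))
df = pd.DataFrame({
    "ln_gdp":   trend + rng.normal(0, 0.02, T),
    "ln_urban": 0.8 * trend + rng.normal(0, 0.02, T),
    "ln_rural": 0.5 * trend + rng.normal(0, 0.02, T),
})

# VAR in first differences (growth rates).
var_fit = VAR(df.diff().dropna()).fit(maxlags=2, ic="aic")
print(var_fit.summary())

# VECM in levels, allowing one cointegrating relation.
vecm_fit = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm_fit.beta)   # cointegrating vector linking consumption and GDP
```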
33031,gn,"The paper examines the effects of stringent land use regulations, measured
using the Wharton Residential Land Use Regulatory Index (WRLURI), on employment
growth during the period 2010-2020 in the Retail, Professional, and Information
sectors across 878 local jurisdictions in the United States. All the local
jurisdictions exist in both (2006 and 2018) waves of the WRLURI surveys and
hence constitute a unique panel data. We apply a mediation analytical framework
to decompose the direct and indirect effects of land use regulation stringency
on sectoral employment growth and specialization. Our analysis suggests a fully
mediated pattern in the relationship between excessive land use regulations and
employment growth, with housing cost burden as the mediator. Specifically, a
one standard deviation increase in the WRLURI index is associated with an
approximate increase of 0.8 percentage point in the proportion of cost burdened
renters. Relatedly, higher prevalence of cost-burdened renters has moderate
adverse effects on employment growth in two sectors. A one percentage point
increase in the proportion of cost burdened renters is associated with 0.04 and
0.017 percentage point decreases in the Professional and Information sectors,
respectively.",A Mediation Analysis of the Relationship Between Land Use Regulation Stringency and Employment Dynamics,2023-05-03 17:46:16,"Uche Oluku, Shaoming Cheng","http://arxiv.org/abs/2305.02159v1, http://arxiv.org/pdf/2305.02159v1",econ.GN
33032,gn,"The purpose of this research is to examine the relationship between the Dhaka
Stock exchange index return and macroeconomic variables such as exchange rate,
inflation, money supply etc. The long-term relationship between macroeconomic
variables and stock market returns has been analyzed by using the Johansen
cointegration test, Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests.
The results revealed the existence of a cointegrating relationship between stock
prices and the macroeconomic variables in the Dhaka stock exchange. The
consumer price index, money supply, and exchange rates proved to be strongly
associated with stock returns, while market capitalization was found to be
negatively associated with stock returns. The findings suggest that in the long
run, the Dhaka stock exchange is reactive to macroeconomic indicators.",Macroeconomic factors and Stock exchange return: A Statistical Analysis,2023-05-03 19:10:22,"Md. Fazlul Huq Khan, Md. Masum Billah","http://arxiv.org/abs/2305.02229v1, http://arxiv.org/pdf/2305.02229v1",econ.GN
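A minimal sketch of the unit-root and cointegration tests in the preceding record (arXiv:2305.02229) is shown below using statsmodels; the simulated series are illustrative assumptions, not the Dhaka Stock Exchange data. The Phillips-Perron test is not included in statsmodels; the `arch` package provides one.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(3)
T = 240  # e.g., 20 years of monthly data

# Simulated I(1) macro series and a stock index sharing a common stochastic trend.
common = np.cumsum(rng.normal(0, 1, T))
df = pd.DataFrame({
    "stock_index":  common + rng.normal(0, 0.5, T),
    "cpi":          0.6 * common + rng.normal(0, 0.5, T),
    "money_supply": 0.4 * common + rng.normal(0, 0.5, T),
})

# ADF test on each series in levels (null hypothesis: unit root).
for col in df:
    stat, pval = adfuller(df[col], regression="c")[:2]
    print(f"ADF {col}: stat={stat:.2f}, p={pval:.3f}")

# Johansen trace test for the number of cointegrating relations.
jres = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])
```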
33033,gn,"Understanding the relationship between emerging technology and research and
development has long been of interest to companies, policy makers and
researchers. In this paper new sources of data and tools are combined with a
novel technique to construct a model linking a defined set of emerging
technologies with the global leading R&D spending companies. The result is a
new map of this landscape. This map reveals the proximity of technologies and
companies in the knowledge embedded in their corresponding Wikipedia profiles,
enabling analysis of the closest associations between the companies and
emerging technologies. A significant positive correlation for a related set of
patent data validates the approach. Finally, a set of Circular Economy Emerging
Technologies are matched to their closest leading R&D spending company,
prompting future research ideas in broader or narrower application of the model
to specific technology themes, company competitor landscapes and national
interest concerns.",Informing Innovation Management: Linking Leading R&D Firms and Emerging Technologies,2023-05-04 03:52:36,"Xian Gong, Claire McFarland, Paul McCarthy, Colin Griffith, Marian-Andrei Rizoiu","http://arxiv.org/abs/2305.02476v1, http://arxiv.org/pdf/2305.02476v1",econ.GN
33034,gn,"We examine the effects of an affirmative action policy at an elite Brazilian
university that reserved 45 percent of admission slots for Black and low-income
students. We find that marginally-admitted students who enrolled through the
affirmative action tracks experienced a 14 percent increase in early-career
earnings. But the adoption of affirmative action also caused a large decrease
in earnings for the university's most highly-ranked students. We present
evidence that the negative spillover effects on highly-ranked students'
earnings were driven by both a reduction in human capital accumulation and a
decline in the value of networking.",The Direct and Spillover Effects of Large-scale Affirmative Action at an Elite Brazilian University,2023-05-04 05:48:55,"Cecilia Machado, Germán Reyes, Evan Riehl","http://arxiv.org/abs/2305.02513v2, http://arxiv.org/pdf/2305.02513v2",econ.GN
33035,gn,"Take up of microcredit by the poor for investment in businesses or human
capital turned out to be very low. We show that this could be explained by risk
aversion, without relying on fixed costs or other forms of non-convexity in the
technology, if the investment is aimed at increasing the probability of
success. Under this framework, rational risk-averse agents choose corner
solutions, unlike in the case of a risky investment with an exogenous
probability of success. Our online experiment confirms our theoretical
predictions about how agents' choices differ when facing the two types of
investments.","Why Not Borrow, Invest, and Escape Poverty?",2023-05-04 07:45:44,"Dagmara Celik Katreniak, Alexey Khazanov, Omer Moav, Zvika Neeman, Hosny Zoabi","http://arxiv.org/abs/2305.02546v1, http://arxiv.org/pdf/2305.02546v1",econ.GN
33036,gn,"How does employer reputation affect the labor market? We investigate this
question using a novel dataset combining reviews from Glassdoor.com and job
applications data from Dice.com. Labor market institutions such as
Glassdoor.com crowd-sources information about employers to alleviate
information problems faced by workers when choosing an employer. Raw
crowd-sourced employer ratings are rounded when displayed to job seekers. By
exploiting the rounding threshold, we identify the causal impact of Glassdoor
ratings using a regression discontinuity framework. We document the effects of
such ratings on both the demand and supply sides of the labor market. We find
that displayed employer reputation affects an employer's ability to attract
workers, especially when the displayed rating is ""sticky."" Employers respond to
having a rating above the rounding threshold by posting more new positions and
re-activating more job postings. The effects are the strongest for private,
smaller, and less established firms, suggesting that online reputation is a
substitute for other types of reputation.",Employer Reputation and the Labor Market: Evidence from Glassdoor.com and Dice.com,2023-05-04 09:43:01,"Ke, Ma, Sophie Yanying Sheng, Haitian Xie","http://arxiv.org/abs/2305.02587v2, http://arxiv.org/pdf/2305.02587v2",econ.GN
33037,gn,"We investigate whether preferences for objects received via a matching
mechanism are influenced by how highly agents rank them in their reported rank
order list. We hypothesize that all else equal, agents receive greater utility
for the same object when they rank it higher. The addition of
rankings-dependent utility implies that it may not be a dominant strategy to
submit truthful preferences to a strategyproof mechanism, and that
non-strategyproof mechanisms that give more agents objects they report as
higher ranked may increase market welfare. We test these hypotheses with a
matching experiment in a strategyproof mechanism, the random serial
dictatorship, and a non-strategyproof mechanism, the Boston mechanism. A novel
feature of our experimental design is that the objects allocated in the
matching markets are real goods, which allows us to directly measure
rankings-dependence by eliciting values for goods both inside and outside of
the mechanism. Our experimental results confirm that the elicited differences
in values do decrease for lower-ranked goods. We find no differences between
the two mechanisms for the rates of truth-telling and the final welfare.",Rankings-Dependent Preferences: A Real Goods Matching Experiment,2023-05-05 19:04:36,"Andrew Kloosterman, Peter Troyan","http://arxiv.org/abs/2305.03644v1, http://arxiv.org/pdf/2305.03644v1",econ.GN
33050,gn,"Purely affective interaction allows the welfare of an individual to depend on
her own actions and on the profile of welfare levels of others. Under an
assumption on the structure of mutual affection that we interpret as
""non-explosive mutual affection,"" we show that equilibria of simultaneous-move
affective interaction are Pareto optimal independently of whether or not an
induced standard game exists. Moreover, if purely affective interaction induces
a standard game, then an equilibrium profile of actions is a Nash equilibrium
of the game, and this Nash equilibrium and Pareto optimal profile of strategies
is locally dominant.",Affective interdependence and welfare,2023-05-12 20:48:13,"Aviad Heifetz, Enrico Minelli, Herakles Polemarchakis","http://arxiv.org/abs/2305.10165v1, http://arxiv.org/pdf/2305.10165v1",econ.GN
33038,gn,"In a response to the 2022 cost-of-living crisis in Europe, the German
government implemented a three-month fuel excise tax cut and a public transport
travel pass for 9 Euro per month valid on all local and regional services.
Following this period, a public debate immediately emerged on a successor to
the so-called ""9-Euro-Ticket"", leading to the political decision of introducing
a similar ticket priced at 49 Euro per month in May 2023, the so-called
""Deutschlandticket"". We observe this introduction of the new public transport
ticket with a sample of 818 participants using a smartphone-based travel diary
with passive tracking and a two-wave survey. The sample comprises 510 remaining
participants of our initial ""9-Euro-Ticket"" study from 2022 and 308 participants
recruited in March and early April 2023. In this report, we describe the status
of the panel before the introduction of the ""Deutschlandticket"".","A nation-wide experiment, part II: the introduction of a 49-Euro-per-month travel pass in Germany -- An empirical study on this fare innovation",2023-05-07 14:22:12,"Allister Loder, Fabienne Cantner, Lennart Adenaw, Markus B. Siewert, Sebastian Goerg, Klaus Bogenberger","http://arxiv.org/abs/2305.04248v1, http://arxiv.org/pdf/2305.04248v1",econ.GN
33039,gn,"Children's well-being of immigrants is facing several challenges related to
physical, mental, and educational risks, which may obstacle human capital
accumulation and further development. In rural China, due to the restriction of
the Hukou registration system, nearly 9 million left-behind children (LBC) lacked
parental care and supervision in 2020 when their parents internally
migrate out for work. Through the systematic scoping review, this study
provides a comprehensive literature summary and concludes the overall negative
effects of parental migration on LBC's physical, mental (especially for
left-behind girls), and educational outcomes (especially for left-behind boys).
Noticeably, both parents' and mother's migration may exacerbate LBC's
disadvantages. Furthermore, remittance from migrants and more family-level and
social support may help mitigate the negative influence. Finally, we put
forward theoretical and realistic implications which may shed light on
potential research directions. Further studies, especially quantitative
studies, are needed to conduct a longitudinal survey, combine the ongoing Hukou
reform in China, and simultaneously focus on left-behind children and migrant
children.",A Scoping Review of Internal Migration and Left-behind Children's Wellbeing in China,2023-05-07 21:01:20,Jinkai Li,"http://arxiv.org/abs/2305.04348v1, http://arxiv.org/pdf/2305.04348v1",econ.GN
33040,gn,"This paper explores the use of Generative Pre-trained Transformers (GPT) in
strategic game experiments, specifically the ultimatum game and the prisoner's
dilemma. I designed prompts and architectures to enable GPT to understand the
game rules and to generate both its choices and the reasoning behind decisions.
The key findings show that GPT exhibits behaviours similar to human responses,
such as making positive offers and rejecting unfair ones in the ultimatum game,
along with conditional cooperation in the prisoner's dilemma. The study
explores how prompting GPT with traits of fairness concern or selfishness
influences its decisions. Notably, the ""fair"" GPT in the ultimatum game tends
to make higher offers and reject offers more frequently compared to the
""selfish"" GPT. In the prisoner's dilemma, high cooperation rates are maintained
only when both GPT players are ""fair"". The reasoning statements GPT produces
during gameplay reveal the underlying logic of certain intriguing patterns
observed in the games. Overall, this research shows the potential of GPT as a
valuable tool in social science research, especially in experimental studies
and social simulations.",GPT in Game Theory Experiments,2023-05-09 18:11:13,Fulin Guo,"http://arxiv.org/abs/2305.05516v2, http://arxiv.org/pdf/2305.05516v2",econ.GN
33041,gn,"We pose the estimation and predictability of stock market performance. Three
cases are taken: US, Japan, Germany, the monthly index of the value of realized
investment in stocks, prices plus the value of dividend payments (OECD data).
Once deflated and trend removed, harmonic analysis is applied. The series are
taken with and without the periods with evidence of exogenous shocks. The
series are erratic and the random walk hypothesis is reasonably falsified. The
estimation reveals relevant hidden periodicities, which approximate stock value
movements. From July 2008 onwards, it is successfully analyzed whether the
subsequent fall in share value would have been predictable. Again, the data are
irregular and scattered, but the sum of the five most relevant harmonics
anticipates the fall in stock market values that followed.",A spectral approach to stock market performance,2023-05-09 23:42:05,"Ignacio Escanuela Romana, Clara Escanuela Nieves","http://arxiv.org/abs/2305.05762v1, http://arxiv.org/pdf/2305.05762v1",econ.GN
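The harmonic-analysis step in the preceding record (arXiv:2305.05762), where the most relevant hidden periodicities are extracted from the deflated, detrended index and used to approximate its movements, can be sketched with a discrete Fourier transform. The simulated series and the choice of five harmonics are illustrative assumptions, not the OECD data or the authors' exact estimation method.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 600  # monthly observations

# Simulated deflated, detrended stock-value series with hidden periodicities.
t = np.arange(T)
series = (np.sin(2 * np.pi * t / 120) + 0.5 * np.sin(2 * np.pi * t / 40)
          + rng.normal(0, 0.4, T))

# Discrete Fourier transform; keep the five harmonics with the largest amplitude.
coefs = np.fft.rfft(series)
amplitudes = np.abs(coefs)
keep = np.argsort(amplitudes[1:])[-5:] + 1   # skip the zero-frequency term
filtered = np.zeros_like(coefs)
filtered[keep] = coefs[keep]
approximation = np.fft.irfft(filtered, n=T)  # sum of the dominant harmonics

freqs = np.fft.rfftfreq(T, d=1.0)
print("dominant periods (months):", np.round(1.0 / freqs[keep], 1))
print("fit correlation:", np.corrcoef(series, approximation)[0, 1])
```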
33042,gn,"Sustainable water management has become an urgent challenge due to irregular
water availability patterns and water quality issues. The effect of climate
change exacerbates this phenomenon in water-scarce areas, such as the
Mediterranean region, stimulating the implementation of solutions aiming to
mitigate or improve environmental, social, and economic conditions. A novel
solution inspired by nature, technology-oriented, explored in the past years,
is constructed wetlands. Commonly applied for different types of wastewater due
to its low cost and simple maintenance, they are considered a promising
solution to remove pollutants while creating an improved ecosystem by
increasing biodiversity around them. This research aims to assess the
sustainability of two typologies of constructed wetlands in two Italian areas:
Sicily, with a vertical subsurface flow constructed wetland, and Emilia
Romagna, with a surface flow constructed wetland. The assessment is performed
by applying a cost-benefit analysis combining primary and secondary data
sources. The analysis considered the market and non-market values in both
proposed scenarios to establish the feasibility of the two options and identify
the most convenient one. Results show that both constructed wetlands bring more
benefits than costs (benefit-cost ratio, BCR, greater than 1). In the case of
Sicily, the BCR is lower (1) in the constructed wetland scenario, while in its
absence it is almost double. If other ecosystem services are included, the
constructed wetland scenario reaches a BCR of 4 and an ROI of 5, showing a
better performance from a costing perspective than the scenario without the
wetland. In Emilia Romagna, the constructed wetland scenario shows a high BCR
(10) and ROI (9), while the scenario without the wetland obtains a negative net
present value, indicating that the costs are not covered by the expected
benefits.",Cost-benefit of green infrastructures for water management: A sustainability assessment of full-scale constructed wetlands in Northern and Southern Italy,2023-05-10 19:19:15,"Laura Garcia-Herrero, Stevo Lavrnic, Valentina Guerrieri, Attilio Toscano, Mirco Milani, Giuseppe Luigi Cirelli, Matteo Vittuari","http://dx.doi.org/10.1016/j.ecoleng.2022.106797, http://arxiv.org/abs/2305.06284v2, http://arxiv.org/pdf/2305.06284v2",econ.GN
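The benefit-cost ratio and ROI figures in the preceding record (arXiv:2305.06284) follow from discounting the benefit and cost streams to present values; a minimal sketch of that arithmetic is below. The cash flows, horizon, and discount rate are illustrative assumptions, not the study's actual estimates.

```python
import numpy as np

def present_value(flows, rate):
    """Discount a stream of annual flows (year 0 first) to present value."""
    flows = np.asarray(flows, dtype=float)
    years = np.arange(len(flows))
    return float(np.sum(flows / (1.0 + rate) ** years))

# Illustrative constructed-wetland appraisal over a 20-year horizon.
rate = 0.03
costs = [250_000] + [12_000] * 20           # capital cost, then annual O&M
benefits = [0] + [45_000] * 20              # water treatment + ecosystem services

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)

bcr = pv_benefits / pv_costs                # benefits exceed costs when BCR > 1
roi = (pv_benefits - pv_costs) / pv_costs   # equivalently, BCR - 1
npv = pv_benefits - pv_costs
print(f"BCR={bcr:.2f}, ROI={roi:.2f}, NPV={npv:,.0f}")
```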
33068,gn,"Single Transferable Vote (STV) is a voting method used to elect multiple
candidates in ranked-choice elections. One weakness of STV is that it fails
multiple fairness criteria related to monotonicity and no show paradoxes. We
analyze 1,079 local government STV elections in Scotland to estimate the
frequency of such monotonicity anomalies in real-world elections, and compare
our results with prior empirical and theoretical research about the rates at
which such anomalies occur. In 62 of the 1079 elections we found some kind of
monotonicity anomaly. We generally find that the rates of anomalies are similar
to prior empirical research and much lower than what most theoretical research
has found. The STV anomalies we find are the first of their kind to be
documented in real-world multiwinner elections.",Monotonicity Anomalies in Scottish Local Government Elections,2023-05-28 17:49:05,"David McCune, Adam Graham-Squire","http://arxiv.org/abs/2305.17741v3, http://arxiv.org/pdf/2305.17741v3",econ.GN
33043,gn,"Climate change poses new risks for real estate assets. Given that the
majority of home buyers use a loan to pay for their homes and the majority of
these loans are purchased by the Government Sponsored Enterprises (GSEs), it is
important to understand how rising natural disaster risk affects the mortgage
finance market. The climate securitization hypothesis (CSH) posits that, in the
aftermath of natural disasters, lenders strategically react to the GSEs
conforming loan securitization rules that create incentives that foster both
moral hazard and adverse selection effects. The climate risks bundled into GSE
mortgage-backed securities emerge because of the complex securitization chain
that creates weak monitoring and screening incentives. We survey the recent
theoretical literature and empirical literature exploring screening incentive
effects. Using regression discontinuity methods, we test key hypotheses
presented in the securitization literature with a focus on securitization
dynamics immediately after major hurricanes. Our evidence supports the CSH. We
address the data construction issues posed by LaCour-Little et al. and show
that their concerns do not affect our main results. Under the current rules of
the game, climate risk exacerbates the established lemons problem commonly
found in loan securitization markets.",Mortgage Securitization Dynamics in the Aftermath of Natural Disasters: A Reply,2023-05-12 03:00:18,"Amine Ouazad, Matthew E. Kahn","http://arxiv.org/abs/2305.07179v1, http://arxiv.org/pdf/2305.07179v1",econ.GN
33044,gn,"The distributional impacts of congestion pricing have been widely studied in
the literature and the evidence on this is mixed. Some studies find that
pricing is regressive whereas others suggest that it can be progressive or
neutral depending on the specific spatial characteristics of the urban region,
existing activity and travel patterns, and the design of the pricing scheme.
Moreover, the welfare and distributional impacts of pricing have largely been
studied in the context of passenger travel whereas freight has received
relatively less attention. In this paper, we examine the impacts of several
third-best congestion pricing schemes on both passenger transport and freight
in an integrated manner using a large-scale microsimulator (SimMobility) that
explicitly simulates the behavioral decisions of the entire population of
individuals and business establishments, dynamic multimodal network
performance, and their interactions. Through simulations of a prototypical
North American city, we find that a distance-based pricing scheme yields the
largest welfare gains, although the gains are a modest fraction of toll
revenues (around 30\%). In the absence of revenue recycling or redistribution,
distance-based and cordon-based schemes are found to be particularly
regressive. On average, lower income individuals lose as a result of the
scheme, whereas higher income individuals gain. A similar trend is observed in
the context of shippers -- small establishments having lower shipment values
lose on average whereas larger establishments with higher shipment values gain.
We perform a detailed spatial analysis of distributional outcomes, and examine
the impacts on network performance, activity generation, mode and departure
time choices, and logistics operations.",Evaluating congestion pricing schemes using agent-based passenger and freight microsimulation,2023-05-12 11:45:22,"Peiyu Jing, Ravi Seshadri, Takanori Sakai, Ali Shamshiripour, Andre Romano Alho, Antonios Lentzakis, Moshe E. Ben-Akiva","http://arxiv.org/abs/2305.07318v1, http://arxiv.org/pdf/2305.07318v1",econ.GN
33045,gn,"In this paper we present a new method to trace the flows of phosphate from
the countries where it is mined to the countries where it is used in
agricultural production. We achieve this by combining data on phosphate rock
mining with data on fertilizer use and data on international trade of
phosphate-related products. We show that by making certain adjustments to data
on net exports we can derive the matrix of phosphate flows on the country level
to a large degree and thus contribute to the accuracy of material flow
analyses, a results that is important for improving environmental accounting,
not only for phosphorus but for many other resources.",The use of trade data in the analysis of global phosphate flows,2023-05-12 13:23:12,"Matthias Raddant, Martin Bertau, Gerald Steiner","http://arxiv.org/abs/2305.07362v2, http://arxiv.org/pdf/2305.07362v2",econ.GN
33046,gn,"We develop a simple game-theoretic model to determine the consequences of
explicitly including financial market stability in the central bank objective
function, when policymakers and the financial market are strategic players, and
market stability is negatively affected by policy surprises. We find that the
inclusion of financial sector stability among the policy objectives can induce
an inefficiency, whereby market anticipation of policymakers' goals biases
investment choices. When the central bank has private information about its
policy intentions, the equilibrium communication is vague, because fully
informative communication is not credible. The appointment of a ``kitish''
central banker, who puts little weight on market stability, reduces these
inefficiencies. If interactions are repeated, communication transparency and
overall efficiency can be improved if the central bank punishes any abuse of
market power by withholding forward guidance. At the same time, repeated
interaction also opens the doors to collusion between large investors, with
uncertain welfare consequences.",Kites and Quails: Monetary Policy and Communication with Strategic Financial Markets,2023-05-15 21:57:36,"Giampaolo Bonomi, Ali Uppal","http://arxiv.org/abs/2305.08958v1, http://arxiv.org/pdf/2305.08958v1",econ.GN
33047,gn,"This study is the first to investigate whether financial institutions for
low-income populations have contributed to the historical decline in mortality
rates. Using ward-level panel data from prewar Tokyo City, we found that public
pawn loans were associated with reductions in infant and fetal death rates,
potentially through improved nutrition and hygiene measures. Simple
calculations suggest that popularizing public pawnshops led to a 6% and 8%
decrease in infant mortality and fetal death rates, respectively, from 1927 to
1935. Contrarily, private pawnshops showed no significant association with
health improvements. Our findings enrich the expanding literature on
demographics and financial histories.",Health Impacts of Public Pawnshops in Industrializing Tokyo,2023-05-16 14:16:11,Tatsuki Inoue,"http://arxiv.org/abs/2305.09352v1, http://arxiv.org/pdf/2305.09352v1",econ.GN
33048,gn,"In my dissertation, I will analyze how the product market position of a
mobile app affects its pricing strategies, which in turn impacts an app's
monetization process. Using natural language processing and k-mean clustering
on apps' text descriptions, I created a new variable that measures the
distinctiveness of an app as compared to its peers. I created four pricing
variables: price, cumulative installs, and indicators of whether an app
contains in-app ads and in-app purchases. I found that the effect differs for
successful apps and less successful apps. I measure the success here using
cumulative installs and the firms that developed the apps. Based on third-party
rankings and the shape of the distribution of installs, I set two thresholds
and divided apps into market-leading and market-follower apps. The
market-leading sub-sample consists of apps with high cumulative installs or
developed by prestigious firms, and the market-follower sub-sample consists of
the rest of the apps. I found that the impact of being niche is smaller in the
market-leading apps because of their relatively higher heterogeneity. In
addition, being niche also impacts utility apps differently from hedonic apps or
apps with two-sided market characteristics. For the special gaming category,
being niche has some effect, but it is smaller than in the market-follower
sub-sample. My research provides novel empirical evidence of digital products
to various strands of theoretical research, including the optimal
distinctiveness theory, product differentiation, price discrimination in two or
multi-sided markets, and consumer psychology.",Dissertation on Applied Microeconomics of Freemium Pricing Strategies in Mobile App Market,2023-04-22 07:19:06,Naixin Zhu,"http://arxiv.org/abs/2305.09479v1, http://arxiv.org/pdf/2305.09479v1",econ.GN
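The distinctiveness measure in the preceding record (arXiv:2305.09479), built from natural language processing and k-means clustering of app descriptions, could be sketched as the distance of each app's text vector from its cluster centroid. The toy descriptions, the number of clusters, and the TF-IDF representation are illustrative assumptions about the dissertation's actual pipeline rather than its exact implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy app descriptions (illustrative stand-ins for the scraped data).
descriptions = [
    "photo editor with filters and collage maker",
    "photo collage maker with stickers and filters",
    "puzzle game with daily challenges and levels",
    "match three puzzle game with fun levels",
    "budget tracker for expenses savings and bills",
    "expense manager and budget planner for bills",
]

# Represent descriptions as TF-IDF vectors and cluster them with k-means.
X = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Distinctiveness: distance from each app to its own cluster centroid
# (a larger distance means the app is more niche relative to its peers).
dist_to_centers = km.transform(X)
niche_score = dist_to_centers[np.arange(len(descriptions)), km.labels_]
for d, s in zip(descriptions, niche_score):
    print(f"{s:.3f}  {d}")
```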
33051,gn,"We find it is common for consumers who are not in financial distress to make
credit card payments at or close to the minimum. This pattern is difficult to
reconcile with economic factors but can be explained by minimum payment
information presented to consumers acting as an anchor that weighs payments
down. Building on Stewart (2009), we conduct a hypothetical credit card payment
experiment to test an intervention to de-anchor payment choices. This
intervention effectively stops consumers selecting payments at the contractual
minimum. It also increases their average payments, as well as shifting the
distribution of payments. By de-anchoring choices from the minimum, consumers
increasingly choose the full payment amount - which potentially seems to act as
a target payment for consumers. We innovate by linking the experimental
responses to survey responses on financial distress and to actual credit card
payment behaviours. We find that the intervention largely increases payments
made by less financially-distressed consumers. We are also able to evaluate the
potential external validity of our experiment and find that hypothetical
responses are closely related to consumers' actual credit card payments.",Weighing Anchor on Credit Card Debt,2023-05-19 04:29:59,"Benedict Guttman-Kenney, Jesse Leary, Neil Stewart","http://arxiv.org/abs/2305.11375v1, http://arxiv.org/pdf/2305.11375v1",econ.GN
33052,gn,"As the development and use of artificial intelligence (AI) continues to grow,
policymakers are increasingly grappling with the question of how to regulate
this technology. The most far-reaching international initiative is the European
Union (EU) AI Act, which aims to establish the first comprehensive, binding
framework for regulating AI. In this article, we offer the first systematic
analysis of non-state actor preferences toward international regulation of AI,
focusing on the case of the EU AI Act. Theoretically, we develop an argument
about the regulatory preferences of business actors and other non-state actors
under varying conditions of AI sector competitiveness. Empirically, we test
these expectations using data from public consultations on European AI
regulation. Our findings are threefold. First, all types of non-state actors
express concerns about AI and support regulation in some form. Second, there
are nonetheless significant differences across actor types, with business
actors being less concerned about the downsides of AI and more in favor of lax
regulation than other non-state actors. Third, these differences are more
pronounced in countries with stronger commercial AI sectors. Our findings shed
new light on non-state actor preferences toward AI regulation and point to
challenges for policymakers balancing competing interests in society.",AI Regulation in the European Union: Examining Non-State Actor Preferences,2023-05-19 11:46:31,"Jonas Tallberg, Magnus Lundgren, Johannes Geith","http://arxiv.org/abs/2305.11523v2, http://arxiv.org/pdf/2305.11523v2",econ.GN
33053,gn,"Artificial intelligence (AI) represents a technological upheaval with the
potential to change human society. Because of its transformative potential, AI
is increasingly becoming subject to regulatory initiatives at the global level.
Yet, so far, scholarship in political science and international relations has
focused more on AI applications than on the emerging architecture of global AI
regulation. The purpose of this article is to outline an agenda for research
into the global governance of AI. The article distinguishes between two broad
perspectives: an empirical approach, aimed at mapping and explaining global AI
governance; and a normative approach, aimed at developing and applying
standards for appropriate global AI governance. The two approaches offer
questions, concepts, and theories that are helpful in gaining an understanding
of the emerging global governance of AI. Conversely, exploring AI as a
regulatory issue offers a critical opportunity to refine existing general
approaches to the study of global governance.",The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research,2023-05-19 11:51:44,"Jonas Tallberg, Eva Erman, Markus Furendal, Johannes Geith, Mark Klamberg, Magnus Lundgren","http://arxiv.org/abs/2305.11528v1, http://arxiv.org/pdf/2305.11528v1",econ.GN
33054,gn,"Central Banks interventions are frequent in response to exogenous events with
direct implications on financial market volatility. In this paper, we introduce
the Asymmetric Jump Multiplicative Error Model (AJM), which accounts for a
specific jump component of volatility within an intradaily framework. Taking
the Federal Reserve (Fed) as a reference, we propose a new model-based
classification of monetary announcements based on their impact on the jump
component of volatility. Focusing on a short window following each Fed's
communication, we isolate the impact of monetary announcements from any
contamination carried by relevant events that may occur within the same
announcement day.",Volatility jumps and the classification of monetary policy announcements,2023-05-20 16:36:06,"Giampiero M. Gallo, Demetrio Lacava, Edoardo Otranto","http://arxiv.org/abs/2305.12192v1, http://arxiv.org/pdf/2305.12192v1",econ.GN
33055,gn,"This paper examines the monetary policies the Federal Reserve implemented in
response to the Global Financial Crisis. More specifically, it analyzes the
Federal Reserve's quantitative easing (QE) programs, liquidity facilities, and
forward guidance operations conducted from 2007 to 2018. The essay's detailed
examination of these policies culminates in an interrupted time-series (ITS)
analysis of the long-term causal effects of the QE programs on U.S. inflation
and real GDP. The results of this formal design-based natural experimental
approach show that the QE operations positively affected U.S. real GDP but did
not significantly impact U.S. inflation. Specifically, it is found that, for
the 2011Q2-2018Q4 post-QE period, real GDP per capita in the U.S. increased by
an average of 231 dollars per quarter relative to how it would have changed had
the QE programs not been conducted. Moreover, the results show that, in 2018Q4,
ten years after the beginning of the QE programs, real GDP per capita in the
U.S. was 14% higher relative to what it would have been during that quarter had
there not been the QE programs. These findings contradict Williamson's (2017)
informal natural experimental evidence and confirm the conclusions of VARs and
new Keynesian DSGE models that the Federal Reserve's QE policies positively
affected U.S. real GDP. The results suggest that the current U.S. and worldwide
high inflation rates are likely not because of the QE programs implemented in
response to the financial crisis that accompanied the COVID-19 pandemic. They
are likely due to the unprecedentedly large fiscal stimulus packages used, the
peculiar nature of the financial downturn itself, the negative supply shocks
from the war in Ukraine, or a combination of these factors. This paper is the
first study to measure the macroeconomic effects of QE using a design-based
natural experimental approach.",The Federal Reserve's Response to the Global Financial Crisis and Its Long-Term Impact: An Interrupted Time-Series Natural Experimental Analysis,2023-05-21 05:09:12,Arnaud Cedric Kamkoum,"http://arxiv.org/abs/2305.12318v1, http://arxiv.org/pdf/2305.12318v1",econ.GN
33075,gn,"We test the international applicability of Friedman s famous plucking theory
of the business cycle in 12 advanced economies between 1970 and 2021. We find
that in countries where labour markets are flexible (Australia, Canada, United
Kingdom and United States), unemployment rates typically return to
pre-recession levels, in line with Friedman's theory. Elsewhere, unemployment
rates are less cyclical. Output recoveries differ less across countries, but
more across episodes: on average, half of the decline in GDP during a recession
persists. In terms of sectors, declines in manufacturing are typically fully
reversed. In contrast, construction-driven recessions, which are often
associated with bursting property price bubbles, tend to be persistent.",The shape of business cycles: a cross-country analysis of Friedman's plucking theory,2023-06-02 17:02:49,"Emanuel Kohlscheen, Richhild Moessner, Daniel Rees","http://arxiv.org/abs/2306.01552v1, http://arxiv.org/pdf/2306.01552v1",econ.GN
33056,gn,"As large language models (LLMs) like GPT become increasingly prevalent, it is
essential that we assess their capabilities beyond language processing. This
paper examines the economic rationality of GPT by instructing it to make
budgetary decisions in four domains: risk, time, social, and food preferences.
We measure economic rationality by assessing the consistency of GPT's decisions
with utility maximization in classic revealed preference theory. We find that
GPT's decisions are largely rational in each domain and demonstrate a higher
rationality score than that of human subjects in a parallel experiment and in
the literature. Moreover, the estimated preference parameters of GPT are
slightly different from human subjects and exhibit a lower degree of
heterogeneity. We also find that the rationality scores are robust to the
degree of randomness and demographic settings such as age and gender, but are
sensitive to contexts based on the language frames of the choice situations.
These results suggest the potential of LLMs to make good decisions and the need
to further understand their capabilities, limitations, and underlying
mechanisms.",The Emergence of Economic Rationality of GPT,2023-05-22 09:32:28,"Yiting Chen, Tracy Xiao Liu, You Shan, Songfa Zhong","http://arxiv.org/abs/2305.12763v3, http://arxiv.org/pdf/2305.12763v3",econ.GN
33057,gn,"In this contribution, we investigate the role of ownership chains developed
by multinational enterprises across different national borders. First, we
document that parent companies control a majority (58%) of foreign subsidiaries
through indirect control relationships involving at least two countries along
an ownership chain. Therefore, we hypothesize that locations along ownership
chains are driven by the existence of communication costs to transmit
management decisions. In line with motivating evidence, we develop a
theoretical model for competition on corporate control that considers the
possibility that parent companies in the origin countries can delegate their
monitoring activities in final subsidiaries to middlemen subsidiaries that are
located in intermediate jurisdictions. Our model yields a two-step empirical
strategy with two gravity equations: i) a triangular gravity equation for the
parent's decision to establish a middleman, conditional on the locations of
final investments; ii) a classical gravity equation for the location of final
investments. First estimates confirm the prediction that ease of communication at the country
level shapes the heterogeneous locations of subsidiaries along global ownership
chains.",Ownership Chains in Multinational Enterprises,2023-05-22 12:31:39,"Stefania Miricola, Armando Rungi, Gianluca Santoni","http://arxiv.org/abs/2305.12857v1, http://arxiv.org/pdf/2305.12857v1",econ.GN
33058,gn,"This study examines the impact of Total Quality Management (TQM) practices on
organizational outcomes. Results show a significant relationship between TQM
practices such as top executive commitment, education and teaching, process
control, and continuous progress, and how they can be leveraged to enhance
performance outcomes.",The Key to Organizational and construction Excellence: A Study of Total Quality Management,2023-05-22 18:07:14,"M. R. Ibrahim, D. U. Muhammad, B. Muhammad, J. O. Alaezi, J. Agidani","http://arxiv.org/abs/2305.13104v1, http://arxiv.org/pdf/2305.13104v1",econ.GN
33059,gn,"This study aimed to investigate how transformational leadership affects team
processes, mediated by change in team members. A self-administered
questionnaire was distributed to construction project team members in Abuja and
Kaduna, and statistical analysis revealed a significant positive relationship
between transformational leadership and team processes, transformational
leadership and change in team members, changes in team members and team
processes, and changes in team members mediating the relationship between
transformational leadership and team processes. Future studies should consider
cultural differences.",The Missing Link: Exploring the Relationship Between Transformational Leadership and Change in team members in Construction,2023-05-22 18:22:08,M. R. Ibrahim,"http://arxiv.org/abs/2305.13121v1, http://arxiv.org/pdf/2305.13121v1",econ.GN
33060,gn,"Agricultural production is the main source of greenhouse gas emissions, and
therefore it has a great influence on the dynamics of changes in global
warming. The article investigated the problems faced by Czech agricultural
producers on the way to reduce greenhouse gas emissions. The author analyzed
the dynamics of greenhouse gas emissions by various branches of agriculture for
the period 2000-2015. The author proposed the coefficient t -covariances to
determine the interdependence of the given tabular macroeconomic values. This
indicator allows you to analyze the interdependence of macroeconomic variables
that do not have a normal distribution. In the context of the globalization of
the economy and the need to combat global warming in each country, it makes
sense to produce primarily agricultural products that provide maximum added
value with maximum greenhouse gas emissions.",Study of the problem of reducing greenhouse gas emissions in agricultural production Czech Republic,2023-04-28 14:50:09,Yekimov Sergiy,"http://arxiv.org/abs/2305.13253v1, http://arxiv.org/pdf/2305.13253v1",econ.GN
33061,gn,"This article reviews recent advances in addressing empirical identification
issues in cross-country and country-level studies and their implications for
the identification of the effectiveness and consequences of economic sanctions.
I argue that, given the difficulties in assessing causal relationships in
cross-national data, country-level case studies can serve as a useful and
informative complement to cross-national regression studies. However, I also
warn that case studies pose a set of additional potential empirical pitfalls
which can obfuscate rather than clarify the identification of causal mechanisms
at work. Therefore, the most sensible way to read case study evidence is as a
complement rather than as a substitute to cross-national research.",Estimating causal effects of sanctions impacts: what role for country-level studies?,2023-05-24 04:00:42,Francisco Rodríguez,"http://arxiv.org/abs/2305.14605v1, http://arxiv.org/pdf/2305.14605v1",econ.GN
33062,gn,"Venezuela has suffered three economic catastrophes since independence: one
each in the nineteenth, twentieth, and twenty-first centuries. Prominent
explanations for this trilogy point to the interaction of class conflict and
resource dependence. We turn attention to intra-class conflict, arguing that
the most destructive policy choices stemmed not from the rich defending
themselves against the masses but rather from pitched battles among elites.
Others posit that Venezuelan political institutions failed to sustain growth
because they were insufficiently inclusive; we suggest in addition that they
inadequately mediated intra-elite conflict.",Political Conflict and Economic Growth in Post-Independence Venezuela,2023-05-24 07:05:56,"Dorothy Kronick, Francisco Rodríguez","http://arxiv.org/abs/2305.14698v1, http://arxiv.org/pdf/2305.14698v1",econ.GN
33063,gn,"In the EU-27 countries, the importance of social sustainability of digital
transformation (SOSDIT) is heightened by the need to balance economic growth
with social cohesion. By prioritizing SOSDIT, the EU can ensure that its
citizens are not left behind in the digital transformation process and that
technology serves the needs of all Europeans. Therefore, the current study
aimed firstly to evaluate the SOSDIT of EU-27 countries and then to model its
importance in reaching sustainable development goals (SDGs). The current study,
using structural equation modeling, provided quantitative empirical evidence
that digital transformation in Finland, the Netherlands, and Denmark are
respectively the most socially sustainable. It is also found that SOSDIT leads
countries to higher performance in reaching SDGs. Finally, the study provides
evidence implying an inverse relationship between the Gini
coefficient and reaching SDGs. In other words, the higher the Gini coefficient
of a country, the lower its performance in reaching SDGs. The findings of this
study contribute to the literature on sustainability and digitalization. It
also provides empirical evidence regarding the SOSDIT level of EU-27 countries
that can be a foundation for the development of policies to improve the
sustainability of digital transformation. According to the findings, this study
provides practical recommendations for countries to ensure that their digital
transformation is sustainable and has a positive impact on society.",Social Sustainability of Digital Transformation: Empirical Evidence from EU-27 Countries,2023-05-25 17:21:01,"Saeed Nosratabadi, Thabit Atobishi, Szilard Hegedus","http://dx.doi.org/10.3390/admsci13050126, http://arxiv.org/abs/2305.16088v1, http://arxiv.org/pdf/2305.16088v1",econ.GN
33064,gn,"The purpose of this study was to model the impact of mentoring on women's
work-life balance. Indeed, this study considered mentoring as a solution to
create a work-life balance of women. For this purpose, semi-structured
interviews with both mentors and mentees of Tehran Municipality were conducted
and the collected data were analyzed using constructivist grounded theory.
Findings provided a model of how mentoring affects women's work-life balance.
According to this model, role management is the key criterion for work-life
balancing among women. In this model, antecedents of role management and the
contextual factors affecting role management, the constraints of mentoring in
the organization, as well as the consequences of effective mentoring in the
organization are described. The findings of this research contribute to the
mentoring literature as well as to the role management literature and provide
recommendations for organizations and for future research.",Modeling the Impact of Mentoring on Women's Work-Life Balance: A Grounded Theory Approach,2023-05-25 17:27:48,"Parvaneh Bahrami, Saeed Nosratabadi, Khodayar Palouzian, Szilard Hegedus","http://dx.doi.org/10.3390/admsci13010006, http://arxiv.org/abs/2305.16095v1, http://arxiv.org/pdf/2305.16095v1",econ.GN
33065,gn,"We build a new measure of credit and financial market sentiment using Natural
Language Processing on Twitter data. We find that the Twitter Financial
Sentiment Index (TFSI) correlates highly with corporate bond spreads and other
price- and survey-based measures of financial conditions. We document that
overnight Twitter financial sentiment helps predict next day stock market
returns. Most notably, we show that the index contains information that helps
forecast changes in the U.S. monetary policy stance: a deterioration in Twitter
financial sentiment the day ahead of an FOMC statement release predicts the
size of restrictive monetary policy shocks. Finally, we document that sentiment
worsens in response to an unexpected tightening of monetary policy.",More than Words: Twitter Chatter and Financial Market Sentiment,2023-05-25 18:25:51,"Travis Adams, Andrea Ajello, Diego Silva, Francisco Vazquez-Grande","http://arxiv.org/abs/2305.16164v1, http://arxiv.org/pdf/2305.16164v1",econ.GN
33066,gn,"This paper presents a novel approach to distinguish the impact of
duration-dependent forces and adverse selection on the exit rate from
unemployment by leveraging variation in the length of layoff notices. I
formulate a Mixed Hazard model in discrete time and specify the conditions
under which variation in notice length enables the identification of structural
duration dependence while allowing for arbitrary heterogeneity across workers.
Utilizing data from the Displaced Worker Supplement (DWS), I employ the
Generalized Method of Moments (GMM) to estimate the model. According to the
estimates, the decline in the exit rate over the first 48 weeks of unemployment
is largely due to the worsening composition of surviving jobseekers.
Furthermore, I find that an individual's likelihood of exiting unemployment
decreases initially, then increases until unemployment benefits run out, and
remains steady thereafter. These findings are consistent with a standard search
model where returns to search decline early in the spell.",Duration Dependence and Heterogeneity: Learning from Early Notice of Layoff,2023-05-27 05:57:54,Div Bhagia,"http://arxiv.org/abs/2305.17344v1, http://arxiv.org/pdf/2305.17344v1",econ.GN
33067,gn,"This paper surveys the empirical literature of inflation targeting. The main
findings from our review are the following: there is robust empirical evidence
that larger and more developed countries are more likely to adopt the IT
regime; the introduction of this regime is conditional on previous
disinflation, greater exchange rate flexibility, central bank independence, and
higher level of financial development; the empirical evidence has failed to
provide convincing evidence that IT itself may serve as an effective tool for
stabilizing inflation expectations and for reducing inflation persistence; the
empirical research focused on advanced economies has failed to provide
convincing evidence on the beneficial effects of IT on inflation performance,
while there is some evidence that the gains from the IT regime may have been
more prevalent in the emerging market economies; there is no convincing
evidence that IT is associated with either higher output growth or lower output
variability; the empirical research suggests that IT may have differential
effects on exchange-rate volatility in advanced economies versus EMEs; although
the empirical evidence on the impact of IT on fiscal policy is quite limited,
it supports the idea that IT indeed improves fiscal discipline; the empirical
support to the proposition that IT is associated with lower disinflation costs
seems to be rather weak. Therefore, the accumulated empirical literature
implies that IT does not produce superior macroeconomic benefits compared with
alternative monetary strategies or, at most, that its benefits are quite modest.",Macroeconomic Effects of Inflation Targeting: A Survey of the Empirical Literature,2023-05-27 16:26:15,Goran Petrevski,"http://arxiv.org/abs/2305.17474v1, http://arxiv.org/pdf/2305.17474v1",econ.GN
33877,th,"We explore an application of all-pay auctions to model trade wars and
territorial annexation. Specifically, in the model we assume that the expected
resource, production, and aggressive (military/tariff) power are public
information, but actual resource levels are private knowledge. We consider the
resource transfer at the end of such a competition which deprives the weaker
country of some fraction of its original resources. In particular, we derive
the quasi-equilibria strategies for two country conflicts under different
scenarios. This work is relevant for the ongoing US-China trade war, and the
recent Russian capture of Crimea, as well as historical and future conflicts.",All-Pay Auctions as Models for Trade Wars and Military Annexation,2020-02-10 04:31:31,"Benjamin Kang, James Unwin","http://arxiv.org/abs/2002.03492v1, http://arxiv.org/pdf/2002.03492v1",econ.TH
33069,gn,"Strategic incentives may lead to inefficient and unequal provision of public
services. A prominent example is school admissions. Existing research shows
that applicants ""play the system"" by submitting school rankings strategically.
We investigate whether applicants also play the system by manipulating their
eligibility at schools. We analyze this applicant deception in a theoretical
model and provide testable predictions for commonly-used admission procedures.
We confirm these model predictions empirically by analyzing the implementation
of two reforms. First, we find that the introduction of a residence-based
school-admission criterion in Denmark caused address changes to increase by
more than 100% before the high-school application deadline. This increase
occurred only in areas where the incentive to manipulate is high-powered.
Second, to assess whether this behavior reflects actual address changes, we
study a second reform that required applicants to provide additional proof of
place of residence to approve an address change. The second reform
significantly reduced address changes around the school application deadline,
suggesting that the observed increase in address changes mainly reflects
manipulation. The manipulation is driven by applicants from more affluent
households and their behavior affects non-manipulating applicants.
Counter-factual simulations show that among students not enrolling in their
first listed school, more than 25% would have been offered a place in the
absence of address manipulation and their peer GPA is 0.2SD lower due to the
manipulative behavior of other applicants. Our findings show that popular
school choice systems give applicants the incentive to play the system with
real implications for non-strategic applicants.",Playing the system: address manipulation and access to schools,2023-05-30 14:27:40,"Andreas Bjerre-Nielsen, Lykke Sterll Christensen, Mikkel Høst Gandil, Hans Henrik Sievertsen","http://arxiv.org/abs/2305.18949v1, http://arxiv.org/pdf/2305.18949v1",econ.GN
33070,gn,"Ranked-choice voting anomalies such as monotonicity paradoxes have been
extensively studied through creating hypothetical examples and generating
elections under various models of voter behavior. However, very few real-world
examples of such voting paradoxes have been found and analyzed. We investigate
two single-transferable vote elections from Scotland that demonstrate upward
monotonicity, downward monotonicity, no-show, and committee size paradoxes.
These paradoxes are rarely observed in real-world elections, and this article
is the first case study of such paradoxes in multiwinner elections.",Paradoxical Oddities in Two Multiwinner Elections from Scotland,2023-05-31 20:53:24,"Adam Graham-Squire, David McCune","http://arxiv.org/abs/2305.20078v1, http://arxiv.org/pdf/2305.20078v1",econ.GN
33071,gn,"Urban decarbonization is one of the pillars for strategies to achieve carbon
neutrality around the world. However, the current speed of urban
decarbonization is insufficient to keep pace with efforts to achieve this goal.
Rooftop PVs integrated with electric vehicles (EVs) as batteries are a
promising technology capable of supplying CO2-free, affordable, and dispatchable electricity
in urban environments (SolarEV City Concept). Here, we evaluated Paris, France
for the decarbonization potentials of rooftop PV + EV in comparison to the
surrounding suburban area Ile-de-France and Kyoto, Japan. We assessed various
scenarios by calculating the energy sufficiency, self-consumption,
self-sufficiency, cost savings, and CO2 emission reduction of the PV + EV
system or PV-only system. The combination of EVs with PVs through V2H or V2B
systems at the city or region level was found to be more effective in
Ile-de-France than in Paris, suggesting that SolarEV City is more effective for
a geographically larger area that includes Paris. If implemented at a
significant scale, these systems can add substantial value to rooftop PV
economics and maintain high self-consumption and self-sufficiency, which also
allows bypassing classical battery storage, which is too expensive to be
profitable. Furthermore, the systems potentially allow rapid CO2 emissions
reduction; however, given France's already low-carbon electricity from nuclear
power, CO2 abatement by the PV + EV system is limited (a 0.020 kgCO2kWh-1
reduction from 0.063 kgCO2kWh-1), compared with that of Kyoto (a 0.270
kgCO2kWh-1 reduction from 0.352 kgCO2kWh-1), also because of Paris's low
insolation and high demand during the higher-latitude winter. While the
SolarEV City Concept can help Paris move one step closer
to the carbon neutrality goal, there are also implementation challenges for
installing PVs in Paris.",SolarEV City Concept for Paris: A promising idea?,2023-05-31 22:16:51,"Paul Deroubaix, Takuro Kobashi, Léna Gurriaran, Fouzi Benkhelifa, Philippe Ciais, Katsumasa Tanaka","http://arxiv.org/abs/2306.00132v1, http://arxiv.org/pdf/2306.00132v1",econ.GN
33072,gn,"The objective of this paper is to investigate a more efficient cross-border
payment and document handling process for the export of Indian goods to Brazil.
The paper is structured into two sections: first, to explain the problems
unique to the India-Brazil international trade corridor by highlighting the
obstacles of compliance, speed, and payments; and second, to propose a digital
solution for India-Brazil trade utilizing Supernets, focusing on the use case
of Indian exports. The solution assumes that stakeholders will be onboarded as
permissioned actors (i.e. nodes) on a Polygon Supernet. By engaging trade and
banking stakeholders, we ensure that the digital solution results in export
benefits for Indian exporters, and a lawful channel to receive hard currency
payments. The involvement of Brazilian and Indian banks ensures that Letter of
Credit (LC) processing time and document handling occur at the speed of
blockchain technology. The ultimate goal is to achieve a faster settlement and
negotiation period while maintaining a regulatory-compliant outcome, so that
the end result is faster and easier, yet otherwise identical to the real-world
process in terms of export benefits and compliance.",Examination of Supernets to Facilitate International Trade for Indian Exports to Brazil,2023-06-01 11:29:05,"Evan Winter, Anupam Shah, Ujjwal Gupta, Anshul Kumar, Deepayan Mohanty, Juan Carlos Uribe, Aishwary Gupta, Mini P. Thomas","http://arxiv.org/abs/2306.00439v1, http://arxiv.org/pdf/2306.00439v1",econ.GN
33073,gn,"We analyze the impact of soft credit default (i.e. a delinquency of 90+ days)
on individual trajectories. Using a proprietary dataset on about 2 million
individuals for the years 2004 to 2020, we find that a soft default has
substantial and long-lasting (i.e. up to ten years after the event) negative
effects on credit score, total credit limit, home-ownership status, and income.",Life after (Soft) Default,2023-06-01 14:45:09,"Giacomo De Giorgi, Costanza Naguib","http://arxiv.org/abs/2306.00574v1, http://arxiv.org/pdf/2306.00574v1",econ.GN
33074,gn,"The recent studies signify the growing concern of researchers towards
monitoring and measuring sustainability performance at various levels and in
many fields, including healthcare. However, there is no agreed approach to
assessing the sustainability of health systems. Moreover, social indicators are
less developed and less succinct. Therefore, the authors seek to map
sustainable reference values in healthcare and propose a conceptual and
structured framework that can guide the measurement of social
sustainability-oriented health systems. Based on a new multi-criteria method
called the Strong Sustainability Paradigm based Analytical Hierarchy Process
(SSP-AHP), the presented approach enables systems' comparison and
benchmarking. The Strong Sustainability Paradigm incorporated
into the multi-criteria evaluation method prevents the exchangeability of
criteria by promoting alternatives that achieve good performance values on all
criteria, implying sustainability. The research results offer insights into the
core domains, sub-domains, and indicators supporting a more comprehensive
assessment of the social sustainability of health systems. The framework
constructed in this study consists of five major areas: equity, quality,
responsiveness, financial coverage, and adaptability. The proposed set of
indicators can also serve as a reference instrument, providing transparency
about core aspects of performance to be measured and reported, as well as
supporting policy-makers in decisions regarding sectoral strategies in
healthcare. Our findings suggest that the most socially sustainable systems are
Nordic countries. They offer a high level of social and financial protection,
achieving very good health outcomes. On the other hand, the most unsustainable
systems are located in central and eastern European countries.",A Strong Sustainability Paradigm Based Analytical Hierarchy Process (SSP-AHP) Method to Evaluate Sustainable Healthcare Systems,2023-05-13 21:10:58,"Jarosław Wątróbski, Aleksandra Bączkiewicz, Iga Rudawska","http://arxiv.org/abs/2306.00718v1, http://arxiv.org/pdf/2306.00718v1",econ.GN
33076,gn,"Energy supply is mandatory for the production of economic value.
Nevertheless, tradition dictates that an enigmatic 'invisible hand' governs
economic valuation. Physical scientists have long proposed alternative but
testable energy cost theories of economic valuation, and have shown the gross
correlation between energy consumption and economic output at the national
level through input-output energy analysis. However, due to the difficulty of
precise energy analysis and highly complicated real markets, no decisive
evidence directly linking energy costs to the selling prices of individual
commodities has yet been found. Over the past century, the US metal market has
accumulated a huge body of price data, which for the first time ever provides
us with the opportunity to quantitatively examine the direct energy-value
correlation. Here, by analyzing the market price data of 65 purified chemical
elements (mainly metals) relative to the total energy consumption for refining
them from naturally occurring geochemical conditions, we found a clear
correlation between the energy cost and their market prices. The underlying
physics we propose is compatible with conventional economic concepts such as
the ratio between supply and demand or scarcity's role in economic valuation.
It demonstrates how energy cost serves as the 'invisible hand' governing
economic valuation. A thorough understanding of this energy connection between
the human economic metabolism and the Earth's biogeochemical metabolism is
essential for improving the overall energy efficiency and, furthermore, the
sustainability of the human society.",Physical energy cost serves as the ''invisible hand'' governing economic valuation: Direct evidence from biogeochemical data and the U.S. metal market,2023-06-04 14:00:34,Zhicen Liu,"http://arxiv.org/abs/2306.02328v1, http://arxiv.org/pdf/2306.02328v1",econ.GN
33077,gn,"A dynamic model is constructed that generalises the Hartwick and Van Long
(2020) endogenous discounting setup by introducing externalities and asks what
implications this has for optimal natural resource extraction with constant
consumption. It is shown that a modified form of the Hotelling and Hartwick
rule holds in which the externality component of price is a specific function
of the instantaneous user costs and cross price elasticities. It is
demonstrated that the externality adjusted marginal user cost of remaining
natural reserves is equal to the marginal user cost of extracted resources
invested in human-made reproducible capital. This lends itself to a discrete
form with a readily intuitive economic interpretation that illuminates the
stepwise impact of externality pricing on optimal extraction schedules.",Sustainability criterion implied externality pricing for resource extraction,2023-06-07 02:48:59,Daniel Grainger,"http://dx.doi.org/10.1016/j.econlet.2023.111448, http://arxiv.org/abs/2306.04065v1, http://arxiv.org/pdf/2306.04065v1",econ.GN
33078,gn,"The Supreme Court's federal preemption decisions are notoriously
unpredictable. Traditional left-right voting alignments break down in the face
of competing ideological pulls. The breakdown of predictable voting blocs
leaves the business interests most affected by federal preemption uncertain of
the scope of potential liability to injured third parties and unsure even of
whether state or federal law will be applied to future claims.
  This empirical analysis of the Court's decisions over the last fifteen years
sheds light on the Court's unique voting alignments in obstacle preemption
cases. A surprising anti-obstacle preemption coalition is forming as Justice
Thomas gradually positions himself alongside the Court's liberals to form a
five-justice voting bloc opposing obstacle preemption.",An Empirical Study of Obstacle Preemption in the Supreme Court,2023-06-05 17:04:57,Gregory M. Dickinson,"http://arxiv.org/abs/2306.04462v1, http://arxiv.org/pdf/2306.04462v1",econ.GN
33079,gn,"Now almost three decades since its seminal Chevron decision, the Supreme
Court has yet to articulate how that case's doctrine of deference to agency
statutory interpretations relates to one of the most compelling federalism
issues of our time: regulatory preemption of state law. Should courts defer to
preemptive agency interpretations under Chevron, or do preemption's federalism
implications demand a less deferential approach? Commentators have provided no
shortage of possible solutions, but thus far the Court has resisted all of
them.
  This Article makes two contributions to the debate. First, through a detailed
analysis of the Court's recent agency-preemption decisions, I trace its
hesitancy to adopt any of the various proposed rules to its high regard for
congressional intent where areas of traditional state sovereignty are at risk.
Recognizing that congressional intent to delegate preemptive authority varies
from case to case, the Court has hesitated to adopt an across-the-board rule.
Any such rule would constrain the Court and risk mismatch with congressional
intent -- a risk it accepts under Chevron generally but which it finds
particularly troublesome in the delicate area of federal preemption.
  Second, building on this previously underappreciated factor in the Court's
analysis, I suggest a novel solution of variable deference that avoids the
inflexibility inherent in an across-the-board rule while providing greater
predictability than the Court's current haphazard approach. The proposed rule
would grant full Chevron-style deference in those cases where congressional
delegative intent is most likely -- where Congress has expressly preempted some
state law and the agency interpretation merely resolves preemptive scope --
while withholding deference in those cases where Congress has remained
completely silent as to preemption and delegative intent is least likely.",Calibrating Chevron for Preemption,2023-06-05 17:00:14,Gregory M. Dickinson,"http://arxiv.org/abs/2306.04463v1, http://arxiv.org/pdf/2306.04463v1",econ.GN
33080,gn,"This paper shows that disregarding the information effects around the
European Central Bank monetary policy decision announcements biases its
international spillovers. Using data from 23 economies, both Emerging and
Advanced, I show that following an identification strategy that disentangles
pure monetary policy shocks from information effects leads to international
spillovers on industrial production, exchange rates, and equity indexes that
are between 2 and 3 times larger in magnitude than those arising from following
the standard high frequency identification strategy. This bias is driven by
pure monetary policy and information effects having intuitively opposite
international spillovers. Results hold for a battery of robustness checks: for
a sub-sample of ``close'' and ``further away'' countries, for both Emerging and
Advanced economies, using local projection techniques, and for alternative
methods that control for ``information effects''. I argue that this bias may
have led the previous literature to disregard or find little evidence of
international spillovers of ECB rates.",International Spillovers of ECB Interest Rates: Monetary Policy & Information Effects,2023-06-07 19:08:21,Santiago Camara,"http://arxiv.org/abs/2306.04562v1, http://arxiv.org/pdf/2306.04562v1",econ.GN
33113,gn,"We study the price rigidity of regular and sale prices, and how it is
affected by pricing formats (pricing strategies). We use data from three large
Canadian stores with different pricing formats (Every-Day-Low-Price, Hi-Lo, and
Hybrid) that are located within a 1 km radius of each other. Our data contains
both the actual transaction prices and actual regular prices as displayed on
the store shelves. We combine these data with two generated regular price
series (filtered prices and reference prices) and study their rigidity. Regular
price rigidity varies with store formats because different format stores treat
sale prices differently, and consequently define regular prices differently.
Correspondingly, the meanings of price cuts and sale prices vary across store
formats. To interpret the findings, we consider the store pricing format
distribution across the US.",Retail Pricing Format and Rigidity of Regular Prices,2023-06-30 00:35:32,"Sourav Ray, Avichai Snir, Daniel Levy","http://arxiv.org/abs/2306.17309v1, http://arxiv.org/pdf/2306.17309v1",econ.GN
33081,gn,"In Wyeth v. Levine the Supreme Court once again failed to reconcile the
interpretive presumption against preemption with the sometimes competing
Chevron doctrine of deference to agencies' reasonable statutory
interpretations. Rather than resolve the issue of which principle should govern
where the two principles point toward opposite results, the Court continued its
recent practice of applying both principles halfheartedly, carving exceptions,
and giving neither its proper weight.
  This analysis situates Wyeth within the larger framework of the Court's
recent preemption decisions in an effort to explain the Court's hesitancy to
resolve the conflict. The analysis concludes that the Court, motivated by its
strong respect for congressional intent and concern to protect federalism,
applies both the presumption against preemption and the Chevron doctrine on a
sliding scale. Where congressional intent to preempt is clear and vague only as
to scope, the Court is usually quite deferential to agency determinations, but
where congressional preemptive intent is unclear, agency views are accorded
less weight.
  The Court's variable approach to deference is defensible as necessary to
prevent unauthorized incursion into areas of traditional state sovereignty, but
its inherent unpredictability sows confusion among regulated parties, and the
need for flexibility prevents the Court from adopting any of the more
predictable across-the-board approaches to deference proposed by the Court's
critics. A superior approach would combine the Court's concern for federalism
with the certainty of a bright-line rule by granting deference to agency views
where Congress has spoken via a preemption clause of ambiguous scope and no
deference where Congress has remained silent.","Chevron's Sliding Scale in Wyeth v. Levine, 129 S. Ct. 1187 (2009)",2023-06-05 17:07:56,Gregory M. Dickinson,"http://arxiv.org/abs/2306.04771v1, http://arxiv.org/pdf/2306.04771v1",econ.GN
33082,gn,"Risk and uncertainty in each stage of CLSC have greatly increased the
complexity and reduced process efficiency of the closed-loop networks, impeding
the sustainable and resilient development of industries and the circular
economy. Recently, increasing interest in academia have been raised on the risk
and uncertainty analysis of closed-loop supply chain, yet there is no
comprehensive review paper focusing on closed-loop network design considering
risk and uncertainty. This paper examines previous research on the domain of
closed-loop network design under risk and uncertainties to provide constructive
prospects for future study. We selected 106 papers published in the Scopus
database from the year 2004 to 2022. We analyse the source of risk and
uncertainties of the CLSC network and identified appropriate methods for
handling uncertainties in addition to algorithms for solving uncertain CLSCND
problems. We also illustrate the evolution of objectives for designing a
closed-loop supply chain that is exposed to risk or uncertainty, and
investigate the application of uncertain network design models in practical
industry sectors. Finally, we identify research gaps for each category and
clarify some novel insights for future study. By considering the impacts of risk or
uncertainties of different sources on closed-loop supply chain network design,
we can approach the economical, sustainable, social, and resilient objectives
effectively and efficiently.",Perspectives in closed-loop supply chains network design considering risk and uncertainty factors,2023-06-08 01:53:00,Yang Hu,"http://arxiv.org/abs/2306.04819v1, http://arxiv.org/pdf/2306.04819v1",econ.GN
33083,gn,"Four rounds of surveys of slum dwellers in Dhaka city during the 2020-21
COVID-19 pandemic raise the question of whether slum dwellers possess some
form of immunity to the effects of COVID-19. If the working poor of Bangladesh
are practically immune to COVID-19, why has this question not been more
actively investigated? We shed light on some explanations for these pandemic
questions and draw attention to the role of intellectual elites and public
policy, suggesting modifications needed for pandemic research.",Losing a Gold Mine?,2023-06-08 08:38:12,"Syed Abul Basher, Salim Rashid, Mohammad Riad Uddin","http://arxiv.org/abs/2306.04946v1, http://arxiv.org/pdf/2306.04946v1",econ.GN
33084,gn,"This study explores the indirect impact of war damage on postwar fertility,
with a specific focus on Japan's air raids during World War II. Using 1935 and
1947 Kinki region town/village data and city air raid damage information, we
explored the ""far-reaching effects"" on fertility rates in nearby undamaged
areas. Our fixed-effects model estimates show that air raids influenced postwar
fertility within a 15-kilometer radius of the bombed cities. These impacts
varied based on bombing intensity, showing both positive and negative effects.
Moreover, a deeper analysis, using the Allied flight path as a natural
experiment, indicates that air raid threats and associated fear led to
increased postwar fertility, even in undamaged areas. This study highlights the
relatively unexplored indirect consequences of war on fertility rates in
neighboring regions and significantly contributes to the literature on the
relationship between wars and fertility.",The far-reaching effects of bombing on fertility in mid-20th century Japan,2023-06-09 12:18:08,Tatsuki Inoue and Erika Igarashi,"http://arxiv.org/abs/2306.05770v2, http://arxiv.org/pdf/2306.05770v2",econ.GN
33085,gn,"This study investigates the functioning of modern payment systems through the
lens of banks' maturity mismatch practices, and it examines the effects of
banks' refusal to roll over short-term interbank liabilities on financial
stability. Within an agent-based stock-flow consistent framework, banks can
engage in two segments of the interbank market that differ in maturity,
overnight and term. We compare two interbank matching scenarios to assess how
bank-specific maturity targets, dependent on the dictates of the Net Stable
Funding Ratio, impact the dynamics of the interbank market and the
effectiveness of conventional monetary policies. The findings reveal that
maturity misalignment between deficit and surplus banks compromises the
interbank market's efficiency and increases reliance on the central bank's
standing facilities. Monetary policy interest-rate steering practices also
become less effective. The study also uncovers a dual stability-based
configuration in the banking sector, resembling the segmented European
interbank structure. This paper suggests that heterogeneous maturity mismatches
between surplus and deficit banks may result in asymmetric funding frictions
that might precede credit- and sovereign-risk explanations of interbank
tensions. Also, a combined examination of macroprudential tools and
rollover-based interbank dynamics can enhance our understanding of how
regulatory changes impact the stability of heterogeneous banking sectors.",Interbank Decisions and Margins of Stability: an Agent-Based Stock-Flow Consistent Approach,2023-06-09 15:49:02,Jessica Reale,"http://arxiv.org/abs/2306.05860v1, http://arxiv.org/pdf/2306.05860v1",econ.GN
33878,th,"We give a structure theorem for all coalitionally strategy-proof social
choice functions whose range is a subset of cardinality two of a given larger
set of alternatives.
  We provide this in the case where the voters/agents are allowed to express
indifference and the domain consists of profiles of preferences over a society
of arbitrary cardinality. The theorem, that takes the form of a representation
formula, can be used to construct all functions under consideration.",The structure of two-valued strategy-proof social choice functions with indifference,2020-02-15 11:41:18,"Achille Basile, Surekha Rao, K. P. S. Bhaskara Rao","http://arxiv.org/abs/2002.06341v2, http://arxiv.org/pdf/2002.06341v2",econ.TH
33086,gn,"In this study, the relationship between burnout and family functions of the
Melli Iran Bank staff will be studied. A number of employees within the
organization using appropriate scientific methods as the samples were selected
by detailed questionnaire and the appropriate data is collected burnout and
family functions. The method used descriptive statistical population used for
this study consisted of 314 bank loan officers in branches of Melli Iran Bank
of Tehran province and all the officials at the bank for >5 years of service at
Melli Iran Bank branches in Tehran. They are married and men constitute the
study population. The Maslach Burnout Inventory had internal consistencies
(Cronbach's alpha) of 0.90 for emotional exhaustion, 0.79 for depersonalization
and low personal accomplishment, and 0.71 overall; for the family functioning
inventory, the alphas were 0.70 for problem solving, 0.51 for affective
responsiveness, 0.70 for communication, 0.69 for affective involvement, 0.59
for roles, and 0.68 for behavior control. The results indicate a significant
negative correlation between burnout and the six family functioning dimensions:
problem solving, communication, roles, affective responsiveness, affective
involvement, and behavior control. When burnout is high, family functioning
suffers.",The Relationship Between Burnout Operators with the Functions of Family Tehran Banking Melli Iran Bank in 2015,2023-06-09 16:01:57,"Mohammad Heydari, Matineh Moghaddam, Habibollah Danai","http://dx.doi.org/10.36478/sscience.2016.1168.1173, http://arxiv.org/abs/2306.05867v1, http://arxiv.org/pdf/2306.05867v1",econ.GN
33087,gn,"In ranked-choice elections voters cast preference ballots which provide a
voter's ranking of the candidates. The method of ranked-choice voting (RCV)
chooses a winner by using voter preferences to simulate a series of runoff
elections. Some jurisdictions which use RCV limit the number of candidates that
voters can rank on the ballot, imposing what we term a truncation level, which
is the number of candidates that voters are allowed to rank. Given fixed voter
preferences, the winner of the election can change if we impose different
truncation levels. We use a database of 1171 real-world ranked-choice elections
to empirically analyze the potential effects of imposing different truncation
levels in ranked-choice elections. Our general finding is that if the
truncation level is at least three then restricting the number of candidates
which can be ranked on the ballot rarely affects the election winner.",An Empirical Analysis of the Effect of Ballot Truncation on Ranked-Choice Electoral Outcomes,2023-06-09 18:33:30,"Mallory Dickerson, Erin Martin, David McCune","http://arxiv.org/abs/2306.05966v1, http://arxiv.org/pdf/2306.05966v1",econ.GN
33088,gn,"In this study, focusing on organizations in a rapid-response component model
(FRO), the relative importance of each one, from the point of view of customers
and their impact on the purchase of Shahrvand chain stores determined to
directors and managers of the shops, according to customer needs and their
priorities in order to satisfy the customers and take steps to strengthen their
competitiveness. For this purpose, all shahrvand chain stores in Tehran
currently have 10 stores in different parts of Tehran that have been studied
are that of the 10 branches; Five branches were selected. The sampling method
is used in this study population with a confidence level of 95% and 8% error;
150 are more specifically typically 30 were studied in each branch. In this
study, a standard questionnaire of 26 questions which is used FRO validity
using Cronbach's alpha values of ""0/95"" is obtained. The results showed that
each of the six factors on customer loyalty model FRO effective Shahrvand chain
stores. The effect of each of the six Foundation FRO customer loyalty model
shahrvand is different chain stores.",Investigation User Reviews FRO to Determine the Level of Customer Loyalty Model Shahrvand Chain Stores,2023-06-09 16:04:09,"Mohammad Heydari, Matineh Moghaddam, Khadijeh Gholami, Habibollah Danai","http://dx.doi.org/10.36478/ibm.2016.1914.1920, http://arxiv.org/abs/2306.06150v1, http://arxiv.org/pdf/2306.06150v1",econ.GN
33089,gn,"This study probes the effects of Japan's traditional alphabetical
surname-based call system on students' experiences and long-term behavior. It
reveals that early listed surnames enhance cognitive and non-cognitive skill
development. The adoption of mixed-gender lists since the 1980s has amplified
this effect, particularly for females. Furthermore, the study uncovers a strong
correlation between childhood surname order and individuals' intention for
COVID-19 revaccination, while changes in adulthood surnames do not exhibit the
same influence. The implications for societal behaviors and policy are
substantial and wide-ranging.",Surname Order and Revaccination Intentions: The Effect of Mixed-Gender Lists on Gender Differences during the COVID-19 Pandemic,2023-06-10 19:41:47,"Eiji Yamamura, Yoshiro Tsutsui, Fumio Ohtake","http://arxiv.org/abs/2306.06483v1, http://arxiv.org/pdf/2306.06483v1",econ.GN
33090,gn,"This study examines whether the structure of global value chains (GVCs)
affects international spillovers of research and development (R&D). Although
the presence of ``hub'' countries in GVCs has been confirmed by previous
studies, the role of these hub countries in the diffusion of the technology has
not been analyzed. Using a sample of 21 countries and 14 manufacturing
industries during the period 1995-2007, I explore the role of hubs as the
mediator of knowledge by classifying countries and industries based on a
``centrality'' measure. I find that R&D spillovers from exporters with High
centrality are the largest, suggesting that hub countries play an important
role in both gathering and diffusing knowledge. I also find that countries with
Middle centrality are becoming important in the diffusion of knowledge.
Finally, positive spillover effects from a country's own R&D are observed only in the G5 countries.",Centrality in Production Networks and International Technology Diffusion,2023-06-11 16:47:20,Rinki Ito,"http://arxiv.org/abs/2306.06680v2, http://arxiv.org/pdf/2306.06680v2",econ.GN
33091,gn,"This paper examines how risk and budget limits on investment mandates affect
the bidding strategy in a uniform-price auction for issuing corporate bonds. I
prove the existence of symmetric Bayesian Nash equilibrium and explore how the
risk limits imposed on the mandate may mitigate severe underpricing, as the
symmetric equilibrium's yield positively relates to the risk limit. Investment
mandates with low-risk acceptance inversely affect the equilibrium bid. The
equilibrium bid provides insights into the optimal mechanism for pricing
corporate bonds conveying information about the bond's valuation, market power,
and the number of bidders. These findings contribute to auction theory and have
implications for empirical research in the corporate bond market.",Auctioning Corporate Bonds: A Uniform-Price under Investment Mandates,2023-06-12 17:13:19,Labrini Zarpala,"http://arxiv.org/abs/2306.07134v1, http://arxiv.org/pdf/2306.07134v1",econ.GN
33092,gn,"Adjustments to public health policy are common. This paper investigates the
impact of COVID-19 policy ambiguity on specific groups' insurance consumption.
The results show that sensitive groups' willingness to pay (WTP) for insurance
is 12.2% above the benchmark. Groups that have experienced income disruptions
are more likely to suffer this. This paper offers fresh perspectives on the
effects of pandemic control shifts.",Response toward Public Health Policy Ambiguity and Insurance Decisions,2023-06-14 05:59:46,Qiang Li,"http://arxiv.org/abs/2306.08214v1, http://arxiv.org/pdf/2306.08214v1",econ.GN
33093,gn,"In spring 2022, the German federal government agreed on a set of policy
measures that aimed at reducing households' financial burden resulting from a
recent price increase, especially in energy and mobility. These included among
others, a nationwide public transport ticket for 9 Euro per month for three
months in June, July, and August 2022. In transport policy research this is an
almost unprecedented behavioral experiment. It allows us to study not only
behavioral responses in mode choice and induced demand but also to assess the
effectiveness of these instruments. We observe this natural experiment with a
three-wave survey and a smartphone-based travel diary with passive tracking on
an initial sample of 2,261 participants with a focus on the Munich metropolitan
region. This area is chosen as it offers a variety of mode options with a dense
and far-reaching public transport network that even provides good access to
many leisure destinations. The app has been providing data from 756
participants until the end of September, the three-wave survey by 1,402, and
the app and the three waves by 637 participants. In this paper, we report on
the study design, the recruitment and study participation as well as the
impacts of the policy measures on the self-reported and app-observed travel
behavior; we present results on consumer choices for a successor ticket to the
9-Euro-Ticket that started in May 2023. We find a substantial shift in the
modal share towards public transport from the car in our sample during the
9-Euro-Ticket period in travel distance (around 5 %) and in trip frequency
(around 7 %). The mobility outcomes of the 9-Euro-Ticket however provide
evidence that cheap public transport as a policy instrument does not suffice to
incentivize sustainable travel behavior choices and that other policy instruments
are required in addition.",Germany's nationwide travel experiment in 2022: public transport for 9 Euro per month -- First findings of an empirical study,2023-06-14 10:10:16,"Allister Loder, Fabienne Cantner, Lennart Adenaw, Nico Nachtigall, David Ziegler, Felix Gotzler, Markus B. Siewert, Stefan Wurster, Sebastian Goerg, Markus Lienkamp, Klaus Bogenberger","http://arxiv.org/abs/2306.08297v1, http://arxiv.org/pdf/2306.08297v1",econ.GN
33094,gn,"South Korea has become one of the most important economies in Asia. The
largest Korean multinational firms are affiliated with influential family-owned
business groups known as the chaebol. Despite the surging academic popularity
of the chaebol, there is a considerable knowledge gap in the bibliometric
analysis of business groups in Korea. In an attempt to fill this gap, the
article aims to provide a systematic review of the chaebol and the role that
business groups have played in the economy of Korea. Three distinct
bibliometric networks are analyzed, namely the scientific collaboration
network, bibliographic coupling network, and keyword co-occurrence network.",The rise of the chaebol: A bibliometric analysis of business groups in South Korea,2023-06-14 23:54:04,Artur F. Tomeczek,"http://arxiv.org/abs/2306.08743v1, http://arxiv.org/pdf/2306.08743v1",econ.GN
33095,gn,"This paper reevaluates the conventional approach to quantifying
within-industry resource misallocation, typically measured by the dispersion of
an input's marginal product. My findings suggest that this statistic incorporates
inherent productivity heterogeneity and idiosyncratic productivity shocks,
irrespective of the input under scrutiny. Using balance sheet data from
American and European manufacturing firms, I show that total factor
productivity (TFP) volatility accounts for 7% of the variance in the marginal
product of capital, 9% for labor, and 10% for material inputs. Consequently,
this index, taken at face value, fails to identify policy-induced misallocation
for any production input. To overcome this limitation, I propose a comparative
analysis strategy driven by an identified policy variation. This approach
allows the researcher to assess induced misallocation in relative terms whilst
controlling for differences in TFP volatility. I show that the financial crisis
had an uneven impact on the within-industry dispersion of the marginal product
of capital across European nations, reflecting their differing financial sector
maturity and suggesting the existence of financial misallocative frictions. The
crisis did not affect the dispersion of the marginal product for other inputs.","Productivity, Inputs Misallocation, and the Financial Crisis",2023-06-15 01:17:14,Davide Luparello,"http://arxiv.org/abs/2306.08760v1, http://arxiv.org/pdf/2306.08760v1",econ.GN
33096,gn,"I use matched employer-employee records merged with corporate tax information
from 2003 to 2017 to estimate labor market-wide effects of mergers and
acquisitions in Brazil. Labor markets are defined by pairs of commuting zone
and industry sector. In the year following a merger, market size falls by
10.8%. The employment adjustment is concentrated in merging firms. For the
firms not involved in M&As, I estimate a 1.07% decline in workers' earnings and
a positive, although not significant, change in their size. Most mergers have
a predicted impact of zero points in concentration, measured by the
Herfindahl-Hirschman Index (HHI). In spillover firms, earnings decline similarly
for mergers with high and low predicted changes in HHI. Contrary to the recent
literature on market concentration in developed economies, I find no evidence
of oligopsonistic behavior in Brazilian labor markets.",Local Labor Market Effects of Mergers and Acquisitions in Developing Countries: Evidence from Brazil,2023-06-15 03:56:03,Vitor Costa,"http://arxiv.org/abs/2306.08797v1, http://arxiv.org/pdf/2306.08797v1",econ.GN
33097,gn,"This paper examines the impact of different payment rules on efficiency when
algorithms learn to bid. We use a fully randomized experiment of 427 trials,
where Q-learning bidders participate in up to 250,000 auctions for a commonly
valued item. The findings reveal that the first price auction, where winners
pay the winning bid, is susceptible to coordinated bid suppression, with
winning bids averaging roughly 20% below the true values. In contrast, the
second price auction, where winners pay the second highest bid, aligns winning
bids with actual values, reduces the volatility during learning and speeds up
convergence. Regression analysis, incorporating design elements such as payment
rules, number of participants, algorithmic factors including the discount and
learning rate, asynchronous/synchronous updating, feedback, and exploration
strategies, reveals the critical role of payment rules in efficiency.
Furthermore, machine learning estimators find that payment rules matter even
more with few bidders, high discount factors, asynchronous learning, and coarse
bid spaces. This paper underscores the importance of auction design in
algorithmic bidding. It suggests that computerized auctions like Google
AdSense, which rely on the first price auction, can mitigate the risk of
algorithmic collusion by adopting the second price auction.",Designing Auctions when Algorithms Learn to Bid: The critical role of Payment Rules,2023-06-15 21:35:22,Pranjal Rawat,"http://arxiv.org/abs/2306.09437v1, http://arxiv.org/pdf/2306.09437v1",econ.GN
33114,gn,"The methods of single transferable vote (STV) and sequential ranked-choice
voting (RCV) are different methods for electing a set of winners in multiwinner
elections. STV is a classical voting method that has been widely used
internationally for many years. By contrast, sequential RCV has rarely been
used, and only recently has seen an increase in usage as several cities in Utah
have adopted the method to elect city council members. We use Monte Carlo
simulations and a large database of real-world ranked-choice elections to
investigate the behavior of sequential RCV by comparing it to STV. Our general
finding is that sequential RCV often produces different winner sets than STV.
Furthermore, sequential RCV is best understood as an excellence-based method
which will not produce proportional results, often at the expense of minority
interests.",A Comparison of Sequential Ranked-Choice Voting and Single Transferable Vote,2023-06-30 02:47:00,"David McCune, Erin Martin, Grant Latina, Kaitlyn Simms","http://arxiv.org/abs/2306.17341v1, http://arxiv.org/pdf/2306.17341v1",econ.GN
33098,gn,"The exploration of entrepreneurship has become a priority for scientific
research in recent years. Understanding this phenomenon is particularly
important for the transformation of entrepreneurship into action, which is a
key factor in early-stage entrepreneurial activity. This gains particular
relevance in the university environment, where, in addition to the conventional
teaching and research functions, the entrepreneurial university operation based
on open innovation, as well as the enhancement of entrepreneurial attitudes of
researchers and students, are receiving increased attention. This study is
based on a survey conducted among students attending a Hungarian university of
applied sciences in the Western Transdanubia Region who have demonstrated their
existing entrepreneurial commitment by joining a national startup training and
incubation programme. The main research question of the study is to what extent
students' entrepreneurial intention is influenced by the environment of the
entrepreneurial university ecosystem and the support services available at the
university. A further question is whether these factors are able to mitigate
the negative effects of internal cognitive and external barriers by enhancing
entrepreneurial attitudes and perceived behavioural control. The relatively
large number of students involved in the programme allows the data to be
analysed using SEM modelling. The results indicate a strong covariance between
the perceived university support and environment among students. Another
observation is the distinct effect of these institutional factors on perceived
behavioural control of students.",Perceived university support and environment as a factor of entrepreneurial intention: Evidence from Western Transdanubia Region,2023-06-16 11:18:05,"Attila Lajos Makai, Tibor Dőry","http://dx.doi.org/10.1371/journal.pone.0283850, http://arxiv.org/abs/2306.09678v1, http://arxiv.org/pdf/2306.09678v1",econ.GN
33099,gn,"Corporate Social Responsibility (CSR) has become an important topic that is
gaining academic interest. This research paper presents CSREU, a new dataset
with attributes of 115 European companies, which includes several performance
indicators and the respective CSR disclosure scores computed using the Global
Reporting Initiative (GRI) framework. We also examine the correlations between
some of the financial indicators and the CSR disclosure scores of the
companies. According to our results, these correlations are weak and deeper
analysis is required to draw convincing conclusions about the potential impact
of CSR disclosure on financial performance. We hope that the newly created data
and our preliminary results will help and foster research in this field.",CSREU: A Novel Dataset about Corporate Social Responsibility and Performance Indicators,2023-06-16 15:14:16,"Erion Çano, Xhesilda Vogli","http://arxiv.org/abs/2306.09798v1, http://arxiv.org/pdf/2306.09798v1",econ.GN
33100,gn,"Public support and political mobilization are two crucial factors for the
adoption of ambitious climate policies in line with the international
greenhouse gas reduction targets of the Paris Agreement. Despite their compound
importance, they are mainly studied separately. Using a random forest
machine-learning model, this article investigates the relative predictive power
of key established explanations for public support and mobilization for climate
policies. Predictive models may shape future research priorities and contribute
to theoretical advancement by showing which predictors are the most and least
important. The analysis is based on a pre-election conjoint survey experiment
on the Swiss CO2 Act in 2021. Results indicate that beliefs (such as the
perceived effectiveness of policies) and policy design preferences (such as for
subsidies or tax-related policies) are the most important predictors while
other established explanations, such as socio-demographics, issue salience (the
relative importance of issues) or political variables (such as the party
affiliation) have relatively weak predictive power. Thus, beliefs are an
essential factor to consider in addition to explanations that emphasize issue
salience and preferences driven by voters' cost-benefit considerations.",Key predictors for climate policy support and political mobilization: The role of beliefs and preferences,2023-06-16 22:22:33,Simon Montfort,"http://dx.doi.org/10.1371/journal.pclm.0000145, http://arxiv.org/abs/2306.10144v1, http://arxiv.org/pdf/2306.10144v1",econ.GN
33101,gn,"Despite tremendous growth in the volume of new scientific and technological
knowledge, the popular press has recently raised concerns that disruptive
innovative activity is slowing. These dire prognoses were mainly driven by Park
et al. (2023), a Nature publication that uses decades of data and millions of
observations coupled with a novel quantitative metric (the CD index) that
characterizes innovation in science and technology as either consolidating or
disruptive. We challenge the Park et al. (2023) methodology and findings,
principally around concerns of truncation bias and exclusion bias. We show that
88 percent of the decrease in disruptive patents over 1980-2010 reported by the
authors can be explained by their truncation of all backward citations before
1976. We also show that this truncation bias varies by technology class. We
update the analysis to 2016 and account for a change in U.S. patent law that
allows for citations to patent applications in addition to patent grants, which
is ignored by the authors in their analysis. We show that the number of highly
disruptive patents has increased since 1980 -- particularly in IT technologies.
Our results suggest caution in using the Park et al. (2023) methodology as a
basis for research and decision making in public policy, industry restructuring
or firm reorganization aimed at altering the current innovation landscape.",The Illusive Slump of Disruptive Patents,2023-06-19 11:42:31,"Jeffrey T. Macher, Christian Rutzer, Rolf Weder","http://arxiv.org/abs/2306.10774v1, http://arxiv.org/pdf/2306.10774v1",econ.GN
33102,gn,"Nowadays, it is thought that there are only two approaches to political
economy: public finance and public choice; however, this research aims to
introduce a new insight by investigating scholastic sources. We study the
relevant classic books from the thirteenth to the seventeenth centuries and
reevaluate the scholastic literature by doctrines of public finance and public
choice. The findings confirm that the government is the institution for
realizing the common good according to scholastic attitude. Therefore,
scholastic thinkers saw a common mission for the government based on their
essentialist attitude toward human happiness. Social conflicts and lack of
social consent are the product of diversification in ends and desires; hence,
if the end of humans were unified, there would be no conflict of interest.
Accordingly, if the government acts according to its assigned mission, the lack
of public consent is not significant. Based on the scholastic point of view,
this study introduces a third approach to political economy, which can be
considered an analytical synthesis of classical doctrines.",Public Finance or Public Choice? The Scholastic Political Economy As an Essentialist Synthesis,2023-06-19 19:26:52,"Mohammadhosein Bahmanpour-Khalesi, Mohammadjavad Sharifzadeh","http://arxiv.org/abs/2306.11049v1, http://arxiv.org/pdf/2306.11049v1",econ.GN
33103,gn,"Recent years have shown a rapid adoption of residential solar PV with
increased self-consumption and self-sufficiency levels in Europe. A major
driver for their economic viability is the electricity tax exemption for the
consumption of self-produced electricity. This leads to large residential PV
capacities and partially overburdened distribution grids. Furthermore, the tax
exemption that benefits wealthy households that can afford capital-intense
investments in solar panels in particular has sparked discussions about energy
equity and the appropriate taxation level for self-consumption. This study
investigates the implementation of uniform electricity taxes on all
consumption, irrespective of the origin of the production, by means of a case
study of 155,000 hypothetical Danish prosumers. The results show that the new
taxation policy redistributes costs progressively across household sizes. As
more consumption is taxed, the tax level can be reduced by 38%, leading to 61%
of all households seeing net savings of up to 23% off their yearly tax bill.
High-occupancy houses save an average of 116 Euro per year at the expense of
single households living in large dwellings who pay 55 Euro per year more.
Implementing a uniform electricity tax in combination with a reduced overall
tax level can (a) maintain overall tax revenues and (b) increase the
interaction of batteries with the grid at the expense of behind-the-meter
operations. In the end, the implicit cross-subsidy is removed by taxing
self-consumption uniformly, leading to a cost redistribution supporting
occupant-dense households and encouraging the flexible behavior of prosumers.
This policy measure improves economic efficiency and greater use of technology
with positive system-wide impacts.",Uniform taxation of electricity: incentives for flexibility and cost redistribution among household categories,2023-06-20 17:30:34,"Philipp Andreas Gunkel, Febin Kachirayil, Claire-Marie Bergaentzlé, Russell McKenna, Dogan Keles, Henrik Klinge Jacobsen","http://arxiv.org/abs/2306.11566v1, http://arxiv.org/pdf/2306.11566v1",econ.GN
33104,gn,"This paper investigates the significance of consumer opinions in relation to
value in China's A-share market. By analyzing a large dataset comprising over
18 million product reviews by customers on JD.com, we demonstrate that
sentiments expressed in consumer reviews can influence stock returns,
indicating that consumer opinions contain valuable information that can impact
the stock market. Our findings show that Customer Negative Sentiment Tendency
(CNST) and One-Star Tendency (OST) have a negative effect on expected stock
returns, even after controlling for firm characteristics such as market risk,
illiquidity, idiosyncratic volatility, and asset growth. Further analysis
reveals that the predictive power of CNST is stronger in firms with high
sentiment conditions, growth companies, and firms with lower accounting
transparency. We also find that CNST negatively predicts revenue surprises,
earnings surprises, and cash flow shocks. These results suggest that online
satisfaction derived from big data analysis of customer reviews contains novel
information about firms' fundamentals.",The Impact of Customer Online Satisfaction on Stock Returns: Evidence from the E-commerce Reviews in China,2023-06-21 12:02:21,"Zhi Su, Danni Wu, Zhenkun Zhou, Junran Wu, Libo Yin","http://arxiv.org/abs/2306.12119v1, http://arxiv.org/pdf/2306.12119v1",econ.GN
33105,gn,"Poland records one of the lowest gender wage gaps in Europe. At the same
time, it is a socially conservative country where women's rights have been on
the decline. We argue that, in the Polish context, the gender gap in income is
a more appropriate measure of gendered labour market outcomes than the gap in
the hourly wage. We analyse the gender gap in income in Poland in relation to
the parenthood status, using the placebo event history method, adjusted to low
resolution data, and the two waves of the Polish Generations and Gender Survey
(2010, 2014). Contrary to similar studies conducted in Western Europe, our
analysis uncovers a large degree of anticipatory behaviour in both women and
men who expect to become parents. We show that mothers' income decreases by
about 20% after birth, but converges to the income trajectory of non-mothers
after 15 years. In contrast, the income of eventual fathers is higher than that
of non-fathers both before and after birth, suggesting that the fatherhood
premium might be driven primarily by selection. We also demonstrate a
permanent increase in hours worked for fathers, as opposed to non-fathers and a
decrease in hours worked for mothers who converge to the trajectory of
non-mothers after 15 years from the birth. Finally, we compare the gender gaps
in income and wages of women and men in the sample with those of individuals in
a counterfactual scenario where the entire population is childless. We find no
statistically significant gender gaps in the counterfactual scenario, thereby
concluding that the gender gaps in income and wages in Poland are driven by
parenthood and most likely, by differences in labour market participation and
hours worked.",The Impact of Parenthood on Labour Market Outcomes of Women and Men in Poland,2023-06-22 17:36:53,"Radost Waszkiewicz, Honorata Bogusz","http://arxiv.org/abs/2306.12924v2, http://arxiv.org/pdf/2306.12924v2",econ.GN
33106,gn,"This paper examines the impact of the Anglophone Conflict in Cameroon on
human capital accumulation. Using high-quality individual-level data on test
scores and information on conflict-related violent events, a
difference-in-differences design is employed to estimate the conflict's causal
effects. The results show that an increase in violent events and
conflict-related deaths causes a significant decline in test scores in reading
and mathematics. The conflict also leads to higher rates of teacher absenteeism
and reduced access to electricity in schools. These findings highlight the
adverse consequences of conflict-related violence on human capital
accumulation, particularly within the Anglophone subsystem. The study
emphasizes the disproportionate burden faced by Anglophone pupils due to
language-rooted tensions and segregated educational systems.",Armed Conflict and Early Human Capital Accumulation: Evidence from Cameroon's Anglophone Conflict,2023-06-22 20:45:31,"Hector Galindo-Silva, Guy Tchuente","http://arxiv.org/abs/2306.13070v1, http://arxiv.org/pdf/2306.13070v1",econ.GN
33107,gn,"We measure the extent of consumption insurance to income shocks accounting
for high-order moments of the income distribution. We derive a nonlinear
consumption function, in which the extent of insurance varies with the sign and
magnitude of income shocks. Using PSID data, we estimate an asymmetric
pass-through of bad versus good permanent shocks -- 17% of a 3 sigma negative
shock transmits to consumption compared to 9% of an equal-sized positive shock
-- and the pass-through increases as the shock worsens. Our results are
consistent with surveys of consumption responses to hypothetical events and
suggest that tail income risk matters substantially for consumption.",Consumption Partial Insurance in the Presence of Tail Income Risk,2023-06-23 00:19:03,"Anisha Ghosh, Alexandros Theloudis","http://arxiv.org/abs/2306.13208v2, http://arxiv.org/pdf/2306.13208v2",econ.GN
33150,gn,"We study the dynamic effects of fires on county labor markets in the US using
a novel geophysical measure of fire exposure based on satellite imagery. We
find increased fire exposure causes lower employment growth in the short and
medium run, with medium-run effects being linked to migration. We also document
heterogeneous effects across counties by education and industrial concentration
levels, states of the business cycle, and fire size. By overcoming challenges
in measuring fire impacts, we identify vulnerable places and economic states,
offering guidance on tailoring relief efforts and contributing to a broader
understanding of natural disasters' economic impacts.",Fires and Local Labor Markets,2023-08-05 02:19:12,"Raphaelle G. Coulombe, Akhil Rao","http://arxiv.org/abs/2308.02739v1, http://arxiv.org/pdf/2308.02739v1",econ.GN
33108,gn,"The carbon-reducing effect of attention is scarcer than that of material
resources, and when the government focuses its attention on the environment,
resources will be allocated in a direction that is conducive to reducing
carbon. Using panel data from 30 Chinese provinces from 2007 to 2019, this
study revealed the impact of governments' environmental attention on carbon
emissions and the synergistic mechanism between governments' environmental
attention and informatization level. The findings suggested that (1) the
environmental attention index of local governments in China showed an overall
fluctuating upward trend; (2) governments' environmental attention had the
effect of reducing carbon emissions; (3) the emission-reducing effect of
governments' environmental attention is more significant in the western region
but not in the central and eastern regions; (4) informatization level plays a
positive moderating role in the relationship between governments' environmental
attention and carbon emissions; (5) there is a significant threshold effect on
the carbon reduction effect of governments' environmental attention. Based on
the findings, this study proposed policy implications from the perspectives of
promoting the sustainable enhancement of environmental attention, bringing
institutional functions into play, emphasizing the ecological benefits and
strengthening the disclosure of information.",Does Environmental Attention by Governments Promote Carbon Reductions,2023-06-23 14:01:30,Yichuan Tian,"http://arxiv.org/abs/2306.13436v1, http://arxiv.org/pdf/2306.13436v1",econ.GN
33109,gn,"Segregation on the basis of ethnic groups stands as a pervasive and
persistent social challenge in many cities across the globe. Public spaces
provide opportunities for diverse encounters but recent research suggests
individuals adjust their time spent in such places to cope with extreme
temperatures. We evaluate to what extent such adaptation affects racial
segregation and thus shed light on a yet unexplored channel through which
global warming might affect social welfare. We use large-scale foot traffic
data for millions of places in 315 US cities between 2018 and 2020 to estimate
an index of experienced isolation in daily visits between whites and other
ethnic groups. We find that heat increases segregation. Results from panel
regressions imply that a week with temperatures above 33°C in a city like
Los Angeles induces an upward shift of visit isolation by 0.7 percentage
points, which equals about 14% of the difference in the isolation index of Los
Angeles to the more segregated city of Atlanta. The segregation-increasing
effect is particularly strong for individuals living in lower-income areas and
at places associated with leisure activities. Combining our estimates with
climate model projections, we find that stringent mitigation policy can have
significant co-benefits in terms of cushioning increases in racial segregation
in the future.",Heat increases experienced racial segregation in the United States,2023-06-01 11:39:10,"Till Baldenius, Nicolas Koch, Hannah Klauber, Nadja Klein","http://arxiv.org/abs/2306.13772v1, http://arxiv.org/pdf/2306.13772v1",econ.GN
33110,gn,"As the two largest emerging emitters with the highest growth in operational
carbon from residential buildings, the historical emission patterns and
decarbonization efforts of China and India warrant further exploration. This
study aims to be the first to present a carbon intensity model considering
end-use performances, assessing the operational decarbonization progress of
residential buildings in India and China over the past two decades using the
improved decomposing structural decomposition approach. Results indicate (1)
the overall operational carbon intensity increased by 1.4% and 2.5% in China
and India, respectively, between 2000 and 2020. Household expenditure-related
energy intensity and emission factors were crucial in decarbonizing residential
buildings. (2) Building electrification played a significant role in
decarbonizing space cooling (-87.7 in China and -130.2 kilograms of carbon
dioxide (kgCO2) per household in India) and appliances (-169.7 in China and
-43.4 kgCO2 per household in India). (3) China and India collectively
decarbonized 1498.3 and 399.7 mega-tons of CO2 in residential building
operations, respectively. In terms of decarbonization intensity, India (164.8
kgCO2 per household) nearly caught up with China (182.5 kgCO2 per household) in
2020 and is expected to surpass China in the upcoming years, given the
country's robust annual growth rate of 7.3%. Overall, this study provides an
effective data-driven tool for investigating the building decarbonization
potential in China and India, and offers valuable insights for other emerging
economies seeking to decarbonize residential buildings in the forthcoming COP28
age.",Decarbonization patterns of residential building operations in China and India,2023-06-24 07:14:05,"Ran Yan, Nan Zhou, Wei Feng, Minda Ma, Xiwang Xiang, Chao Mao","http://arxiv.org/abs/2306.13858v1, http://arxiv.org/pdf/2306.13858v1",econ.GN
33111,gn,"This study examines the impact of GitHub Copilot on a large sample of Copilot
users (n=934,533). The analysis shows that users on average accept nearly 30%
of the suggested code, leading to increased productivity. Furthermore, our
research demonstrates that the acceptance rate rises over time and is
particularly high among less experienced developers, providing them with
substantial benefits. Additionally, our estimations indicate that the adoption
of generative AI productivity tools could potentially contribute to a $1.5
trillion increase in global GDP by 2030. Moreover, our investigation sheds
light on the diverse contributors in the generative AI landscape, including
major technology companies, startups, academia, and individual developers. The
findings suggest that the driving force behind generative AI software
innovation lies within the open-source ecosystem, particularly in the United
States. Remarkably, a majority of repositories on GitHub are led by individual
developers. As more developers embrace these tools and acquire proficiency in
the art of prompting with generative AI, it becomes evident that this novel
approach to software development has forged a unique inextricable link between
humans and artificial intelligence. This symbiotic relationship has the
potential to shape the construction of the world's software for future
generations.",Sea Change in Software Development: Economic and Productivity Analysis of the AI-Powered Developer Lifecycle,2023-06-26 22:48:17,"Thomas Dohmke, Marco Iansiti, Greg Richards","http://arxiv.org/abs/2306.15033v1, http://arxiv.org/pdf/2306.15033v1",econ.GN
33112,gn,"Motivated by the idea that lack of experience is a source of errors but that
experience should reduce them, we model agents' behavior using a stochastic
choice model, leaving endogenous the accuracy of their choice. In some games,
increased accuracy is conducive to unstable best-response dynamics. We define
the barrier to learning as the minimum level of noise which keeps the
best-response dynamic stable. Using logit Quantal Response, this defines a
limit QR equilibrium. We apply the concept to centipede, travelers' dilemma, and
11-20 money-request games and to first-price and all-pay auctions, and discuss
the role of strategy restrictions in reducing or amplifying barriers to
learning.",Endogenous Barriers to Learning,2023-06-29 15:49:26,Olivier Compte,"http://arxiv.org/abs/2306.16904v1, http://arxiv.org/pdf/2306.16904v1",econ.GN
33115,gn,"At various stages during the initial onset of the COVID-19 pandemic, various
US states and local municipalities enacted eviction moratoria. One of the main
aims of these moratoria was to slow the spread of COVID-19 infections. We
deploy a semiparametric difference-in-differences approach with an event study
specification to test whether the lifting of these local moratoria led to an
increase in COVID-19 cases and deaths. Our main findings, across a range of
specifications, are inconclusive regarding the impact of the moratoria -
especially after accounting for the number of actual evictions and conducting
the analysis at the county level. We argue that recently developed augmented
synthetic control (ASCM) methods are more appropriate in this setting. Our ASCM
results also suggest that the lifting of eviction moratoria had little to no
impact on COVID-19 cases and deaths. Thus, it seems that eviction moratoria had
little to no robust effect on reducing the spread of COVID-19, throwing into
question their use as a non-pharmaceutical intervention.",Local Eviction Moratoria and the Spread of COVID-19,2023-07-01 10:03:19,"Julia Hatamyar, Christopher F. Parmeter","http://arxiv.org/abs/2307.00251v1, http://arxiv.org/pdf/2307.00251v1",econ.GN
33116,gn,"The relevance of Adam Smith for understanding human morality and sociality is
recognized in the growing interest in his work on moral sentiments among
scholars of various academic backgrounds. But, paradoxically, Adam Smith's
theory of economic value enjoys a less prominent stature today among
economists, who, while they view him as the 'father of modern economics',
considered him more as having had the right intuitions about a market economy
than as having developed the right concepts and the technical tools for
studying it. Yet the neoclassical tradition, which replaced the classical
school around 1870, failed to provide a satisfactory theory of market price
formation. Adam Smith's sketch of market price formation (Ch. VII, Book I,
Wealth of Nations), and more generally the classical view of competition as a
collective higgling and bargaining process, as this paper argues, offers a
helpful foundation on which to build a modern theory of market price formation,
despite any shortcomings of the original classical formulation (notably its
insistence on long-run, natural value). Also, with hindsight, the experimental
market findings established the remarkable stability, efficiency, and
robustness of the old view of competition, suggesting a rehabilitation of
classical price discovery. This paper reappraises classical price theory as
Adam Smith articulated it; we explicate key propositions from his price theory
and derive them from a simple model, which is an elementary sketch of the
authors' more general theory of competitive market price formation.",Adam Smith's Theory of Value: A Reappraisal of Classical Price Discovery,2023-07-01 22:15:01,"Sabiou Inoua, Vernon Smith","http://arxiv.org/abs/2307.00412v1, http://arxiv.org/pdf/2307.00412v1",econ.GN
33117,gn,"This article examines the impact of China's delayed retirement announcement
on households' savings behavior using data from China Family Panel Studies
(CFPS). The article finds that treated households, on average, experience an 8%
increase in savings rates as a result of the policy announcement. This
estimation is both significant and robust. Different types of households
exhibit varying degrees of responsiveness to the policy announcement, with
higher-income households showing a greater impact. The increase in household
savings can be attributed to negative perceptions about future pension income.",Policy Expectation Counts? The Impact of China's Delayed Retirement Announcement on Urban Households Savings Rates,2023-07-05 20:29:57,Shun Zhang,"http://arxiv.org/abs/2307.02455v2, http://arxiv.org/pdf/2307.02455v2",econ.GN
33118,gn,"This article has one single purpose: introduce a new and simple, yet highly
insightful approach to capture, fully and quantitatively, the dynamics of the
circular flow of income in economies. The proposed approach relies mostly on
basic linear algebraic concepts and has deep implications for the disciplines
of economics, physics and econophysics.",A Simple Linear Algebraic Approach to Capture the Dynamics of the Circular Flow of Income,2023-07-06 04:26:25,"Aziz Guergachi, Javid Hakim","http://arxiv.org/abs/2307.02713v1, http://arxiv.org/pdf/2307.02713v1",econ.GN
33119,gn,"This paper examines whether personality influences the allocation of
resources within households. To do so, I model households as couples who make
Pareto-efficient allocations and divide resources according to a distribution
function. Using a sample of Dutch couples from the LISS survey with detailed
information on consumption, labor supply, and personality traits at the
individual level, I find that personality affects intrahousehold allocations
through two channels. Firstly, the levels of these traits act as preference
factors that shape individual tastes for consumed goods and leisure time.
Secondly, by testing distribution factor proportionality and the exclusion
restriction of a conditional demand system, I observe that differences in
personality between spouses act as distribution factors. Specifically, these
differences in personality impact the allocation of resources by affecting the
bargaining process within households. For example, women who are relatively
more conscientious, have higher self-esteem, and engage more cognitively than
their male partners receive a larger share of intrafamily resources.",Does personality affect the allocation of resources within households?,2023-07-06 14:13:03,Gastón P. Fernández,"http://arxiv.org/abs/2307.02918v1, http://arxiv.org/pdf/2307.02918v1",econ.GN
33120,gn,"Evidential cooperation in large worlds (ECL) refers to the idea that humans
and other agents can benefit by cooperating with similar agents with differing
values in causally disconnected parts of a large universe. Cooperating provides
agents with evidence that other similar agents are likely to cooperate too,
resulting in gains from trade for all. This could be a crucial consideration
for altruists.
  I develop a game-theoretic model of ECL as an incomplete information
bargaining problem. The model incorporates uncertainty about others' value
systems and empirical situations, and addresses the problem of selecting a
compromise outcome. Using the model, I investigate issues with ECL and outline
open technical and philosophical questions.
  I show that all cooperators must maximize the same weighted sum of utility
functions to reach a Pareto optimal outcome. However, I argue against selecting
a compromise outcome implicitly by normalizing utility functions. I review
bargaining theory and argue that the Nash bargaining solution could be a
relevant Schelling point. I introduce dependency equilibria (Spohn 2007), an
equilibrium concept suitable for ECL, and generalize a folk theorem showing
that the Nash bargaining solution is a dependency equilibrium. I discuss gains
from trade given uncertain beliefs about other agents and analyze how these
gains decrease in several toy examples as the belief in another agent
decreases.
  Finally, I discuss open issues in my model. First, the Nash bargaining
solution is sometimes not coalitionally stable, meaning that a subset of
cooperators can unilaterally improve payoffs by deviating from the compromise.
I investigate conditions under which stable payoff vectors exist. Second, I
discuss how to model agents' default actions without ECL.",Modeling evidential cooperation in large worlds,2023-07-10 22:59:54,Johannes Treutlein,"http://arxiv.org/abs/2307.04879v2, http://arxiv.org/pdf/2307.04879v2",econ.GN
33121,gn,"This paper presents a novel machine learning approach to GDP prediction that
incorporates volatility as a model weight. The proposed method is specifically
designed to identify and select the most relevant macroeconomic variables for
accurate GDP prediction, while taking into account unexpected shocks or events
that may impact the economy. The proposed method's effectiveness is tested on
real-world data and compared to previous techniques used for GDP forecasting,
such as Lasso and Adaptive Lasso. The findings show that the
Volatility-weighted Lasso method outperforms other methods in terms of accuracy
and robustness, providing policymakers and analysts with a valuable tool for
making informed decisions in a rapidly changing economic environment. This
study demonstrates how data-driven approaches can help us better understand
economic fluctuations and support more effective economic policymaking.
  Keywords: GDP prediction, Lasso, Volatility, Regularization, Macroeconomic
Variable Selection, Machine Learning. JEL codes: C22, C53, E37.",Harnessing the Potential of Volatility: Advancing GDP Prediction,2023-06-18 17:42:25,Ali Lashgari,"http://arxiv.org/abs/2307.05391v1, http://arxiv.org/pdf/2307.05391v1",econ.GN
33122,gn,"Workers separate from jobs, search for jobs, accept jobs, and fund
consumption with their wages. Firms recruit workers to fill vacancies. Search
frictions prevent firms from instantly hiring available workers. Unemployment
persists. These features are described by the Diamond-Mortensen-Pissarides
modeling framework. In this class of models, how unemployment responds to
productivity changes depends on resources that can be allocated to job
creation. Yet, this characterization has been made when matching is
parameterized by a Cobb-Douglas technology. For a canonical DMP model, I (1)
demonstrate that a unique steady-state equilibrium will exist as long as the
initial vacancy yields a positive surplus; (2) characterize responses of
unemployment to productivity changes for a general matching technology; and (3)
show how a matching technology that is not Cobb-Douglas implies unemployment
responds more to productivity changes, which is independent of resources
available for job creation, a feature that will be of interest to
business-cycle researchers.",Responses of Unemployment to Productivity Changes for a General Matching Technology,2023-07-12 02:35:00,Rich Ryan,"http://arxiv.org/abs/2307.05843v2, http://arxiv.org/pdf/2307.05843v2",econ.GN
33123,gn,"Using generalized random forests and rich Swedish administrative data, we
show that the earnings effects of job displacement due to establishment
closures are extremely heterogeneous across workers, establishments, and
markets. The decile of workers with the largest predicted effects lose 50
percent of annual earnings the year after displacement and accumulated losses
amount to 250 percent during a decade. In contrast, workers in the least
affected decile experience only marginal losses of less than 6 percent in the
year after displacement. Workers in the most affected decile tend to be lower
paid workers on negative earnings trajectories. This implies that the economic
value of (lost) jobs is greatest for workers with low earnings. The reason is
that many of these workers fail to find new employment after displacement.
Overall, the effects are heterogeneous both within and across establishments
and combinations of important individual characteristics such as age and
schooling. Adverse market conditions matter the most for already vulnerable
workers. The most effective way to target workers with large effects, without
using a complex model, is by focusing on older workers in routine-task
intensive jobs","The Heterogeneous Earnings Impact of Job Loss Across Workers, Establishments, and Markets",2023-07-13 14:11:38,"Susan Athey, Lisa K. Simon, Oskar N. Skans, Johan Vikstrom, Yaroslav Yakymovych","http://arxiv.org/abs/2307.06684v1, http://arxiv.org/pdf/2307.06684v1",econ.GN
33124,gn,"Direct buy advertisers procure advertising inventory at fixed rates from
publishers and ad networks. Such advertisers face the complex task of choosing
ads amongst myriad new publisher sites. We offer evidence that advertisers do
not excel at making these choices. Instead, they try many sites before settling
on a favored set, consistent with advertiser learning. We subsequently model
advertiser demand for publisher inventory wherein advertisers learn about
advertising efficacy across publishers' sites. Results suggest that advertisers
spend considerable resources advertising on sites they eventually abandon -- in
part because their prior beliefs about advertising efficacy on those sites are
too optimistic. The median advertiser's expected CTR at a new site is 0.23%,
five times higher than the true median CTR of 0.045%.
  We consider how pooling advertiser information remediates this problem.
Specifically, we show that ads with similar visual elements garner similar
CTRs, enabling advertisers to better predict ad performance at new sites.
Counterfactual analyses indicate that gains from pooling advertiser information
are substantial: over six months, we estimate a median advertiser welfare gain
of $2,756 (a 15.5% increase) and a median publisher revenue gain of $9,618 (a
63.9% increase).",Advertiser Learning in Direct Advertising Markets,2023-07-13 21:34:28,"Carl F. Mela, Jason M. T. Roos, Tulio Sousa","http://arxiv.org/abs/2307.07015v1, http://arxiv.org/pdf/2307.07015v1",econ.GN
33125,gn,"In this paper, we explore the economic, institutional, and
political/governmental factors in attracting Foreign Direct Investment (FDI)
inflows in the emerging twenty-four Asian economies. To examine the significant
determinants of FDI, the study uses panel data for a period of seventeen years
(2002-2018). The panel methodology enables us to deal with endogeneity and
other issues. Multiple regression models are estimated to provide empirical evidence. The
study focuses on a holistic approach and considers different variables under
three broad areas: economic, institutional, and political aspects. The
variables include Market Size, Trade Openness, Inflation, Natural Resource,
Lending Rate, Capital Formation as economic factors and Business Regulatory
Environment and Business Disclosure Index as institutional factors and
Political Stability, Government Effectiveness, and Rule of Law as political
factors. The empirical findings show most of the economic factors significantly
affect FDI inflows whereas Business Disclosure is the only important
institutional variable. Moreover, political stability has a significant
positive impact in attracting foreign capital flow though the impact of
government effectiveness is found insignificant. Overall, the economic factors
prevail strongly compared to institutional and political factors.",The Determinants of Foreign Direct Investment (FDI) A Panel Data Analysis for the Emerging Asian Economies,2023-07-13 22:42:06,ATM Omor Faruq,"http://arxiv.org/abs/2307.07037v1, http://arxiv.org/pdf/2307.07037v1",econ.GN
33174,gn,"We study the distribution of strike size, which we measure as lost person
days, for a long period in several countries of Europe and America. When we
consider the full samples, the mixtures of two or three lognormals arise as
very convenient models. When restricting to the upper tails, the Pareto power
law becomes almost indistinguishable from the truncated lognormal.",The Distribution of Strike Size: Empirical Evidence from Europe and North America in the 19th and 20th Centuries,2023-08-19 17:29:45,"Michele Campolieti, Arturo Ramos","http://dx.doi.org/10.1016/j.physa.2020.125424, http://arxiv.org/abs/2308.10030v1, http://arxiv.org/pdf/2308.10030v1",econ.GN
33126,gn,"This study investigates the impact of integrating gender equality into the
Colombian constitution of 1991 on attitudes towards gender equality,
experiences of gender-based discrimination, and labor market participation.
Using a difference-in-discontinuities framework, we compare individuals exposed
to mandatory high school courses on the Constitution with those who were not
exposed. Our findings show a significant increase in labor market
participation, primarily driven by women. Exposure to these courses also shapes
attitudes towards gender equality, with men demonstrating greater support.
Women report experiencing less gender-based discrimination. Importantly, our
results suggest that women's increased labor market participation is unlikely
due to reduced barriers from male partners. A disparity in opinions regarding
traditional gender norms concerning household domains is observed between men
and women, highlighting an ongoing power struggle within the home. However, the
presence of a younger woman in the household appears to influence men's more
positive view of gender equality, potentially indicating a desire to empower
younger women in their future lives. These findings highlight the crucial role
of cultural shocks and the constitutional inclusion of women's rights in
shaping labor market dynamics.","Culture, Gender, and Labor Force Participation: Evidence from Colombia",2023-07-18 00:46:55,"Hector Galindo-Silva, Paula Herrera-Idárraga","http://arxiv.org/abs/2307.08869v1, http://arxiv.org/pdf/2307.08869v1",econ.GN
33127,gn,"Recent highly cited research uses time-series evidence to argue the decline
in interest rates led to a large rise in economic profits and markups. We show
the size of these estimates is sensitive to the sample start date: The rise in
markups from 1984 to 2019 is 14% larger than from 1980 to 2019, a difference
amounting to a $3000 change in income per worker in 2019. The sensitivity comes
from a peak in interest rates in 1984, during a period of heightened
volatility. Our results imply researchers should justify their time-series
selection and incorporate sensitivity checks in their analysis.","The Beginning of the Trend: Interest Rates, Profits, and Markups",2023-07-18 07:48:49,"Anton Bobrov, James Traina","http://arxiv.org/abs/2307.08968v2, http://arxiv.org/pdf/2307.08968v2",econ.GN
33128,gn,"Research has investigated the impact of the COVID-19 pandemic on business
performance and survival, indicating particularly adverse effects for small and
midsize businesses (SMBs). Yet only limited work has examined whether and how
online advertising technology may have helped shape these outcomes,
particularly for SMBs. The aim of this study is to address this gap. By
constructing and analyzing a novel data set of more than 60,000 businesses in
49 countries, we examine the impact of government lockdowns on business
survival. Using discrete-time survival models with instrumental variables and
staggered difference-in-differences estimators, we find that government
lockdowns increased the likelihood of SMB closure around the world but that use
of online advertising technology attenuates this adverse effect. The findings
show heterogeneity in country, industry, and business size, consistent with
theoretical expectations.",COVID-19 Demand Shocks Revisited: Did Advertising Technology Help Mitigate Adverse Consequences for Small and Midsize Businesses?,2023-07-18 10:45:59,"Shun-Yang Lee, Julian Runge, Daniel Yoo, Yakov Bart, Anett Gyurak, J. W. Schneider","http://arxiv.org/abs/2307.09035v1, http://arxiv.org/pdf/2307.09035v1",econ.GN
33129,gn,"In spite of being one of the smallest and wealthiest countries in the
European Union in terms of GDP per capita, Luxembourg is facing socio-economic
challenges due to recent rapid urban transformations. This article contributes
by approaching this phenomenon at the most granular and rarely analysed
geographical level - the neighbourhoods of the capital, Luxembourg City. Based
on collected empirical data covering various socio-demographic dimensions for
2020-2021, an ascending hierarchical classification on principal components is
set out to establish neighbourhoods' socio-spatial patterns. In addition, Chi2
tests are carried out to examine residents' socio-demographic characteristics
and determine income inequalities in neighbourhoods. The results reveal a clear
socio-spatial divide along a north-west south-east axis. Moreover, classical
factors such as gender or citizenship differences are revealed to be poorly
determinant of income inequalities compared with the proportion of social
benefits recipients and single residents.","Socio-spatial Inequalities in a Context of ""Great Economic Wealth"". Case study of neighbourhoods of Luxembourg City",2023-07-18 16:33:58,Natalia Zdanowska,"http://arxiv.org/abs/2307.09251v1, http://arxiv.org/pdf/2307.09251v1",econ.GN
33130,gn,"Teens make life-changing decisions while constrained by the needs and
resources of the households they grow up in. Household behavior models
frequently delegate decision-making to the teen or their parents, ignoring
joint decision-making in the household. I show that teens and parents allocate
time and income jointly by using data from the Costa Rican Encuesta Nacional de
Hogares from 2011 to 2019 and a conditional cash transfer program. First, I
present gender differences in household responses to the transfer using a
marginal treatment effect framework. Second, I explain how the gender gap from
the results is due to the bargaining process between parents and teens. I
propose a collective household model and show that sons bargain cooperatively
with their parents while daughters do not. This result implies that sons have a
higher opportunity cost of attending school than daughters. Public policy
targeting teens must account for this gender disparity to be effective.",Power to the teens? A model of parents' and teens' collective labor supply,2023-06-30 10:02:27,José Alfonso Muñoz-Alvarado,"http://arxiv.org/abs/2307.09634v1, http://arxiv.org/pdf/2307.09634v1",econ.GN
33131,gn,"We obtain an elementary characterization of expected utility based on a
representation of choice in terms of psychological gambles, which requires no
assumption other than coherence between ex-ante and ex-post preferences. Weaker
versions of coherence are associated with various attitudes towards complexity
and lead to a characterization of minimax or Choquet expected utility.",Subjective Expected Utility and Psychological Gambles,2023-07-19 13:52:06,Gianluca Cassese,"http://arxiv.org/abs/2307.10328v2, http://arxiv.org/pdf/2307.10328v2",econ.GN
33132,gn,"The extent to which individuals commit to their partner for life has
important implications. This paper develops a lifecycle collective model of the
household, through which it characterizes behavior in three prominent
alternative types of commitment: full, limited, and no commitment. We propose a
test that distinguishes between all three types based on how contemporaneous
and historical news affect household behavior. Our test permits heterogeneity
in the degree of commitment across households. Using recent data from the Panel
Study of Income Dynamics, we reject full and no commitment, while we find
strong evidence for limited commitment.",Commitment and the Dynamics of Household Labor Supply,2023-07-20 19:13:25,"Alexandros Theloudis, Jorge Velilla, Pierre-André Chiappori, J. Ignacio Giménez-Nadal, José Alberto Molina","http://arxiv.org/abs/2307.10983v1, http://arxiv.org/pdf/2307.10983v1",econ.GN
33133,gn,"The main component of the NextGeneration EU (NGEU) program is the Recovery
and Resilience Facility (RRF), spanning an implementation period between 2021
and 2026. The RRF also includes a monitoring system: every six months, each
country is required to send an update on the progress of the plan against 14
common indicators, measured on specific quantitative scales. The aim of this
paper is to present the first empirical evidence on this system, while, at the
same time, emphasizing the potential of its integration with the sustainable
development framework (SDGs). We propose to develop a first linkage between the
14 common indicators and the SDGs which allows us to produce a composite index
(SDGs-RRF) for France, Germany, Italy, and Spain for the period 2014-2021. Over
this time, widespread improvements in the composite index across the four
countries led to a partial reduction of the divergence. The proposed approach
represents a first step towards a wider use of the SDGs for the assessment of
the RRF, in line with their use in the European Semester documents prepared by
the European Commission.",PNRR common indicators and the SDGs framework: a proposal for a composite indicator,2023-07-20 20:21:14,"Fabio Bacchini, Lorenzo Di Biagio, Giampiero M. Gallo, Vincenzo Spinelli","http://arxiv.org/abs/2307.11039v1, http://arxiv.org/pdf/2307.11039v1",econ.GN
33134,gn,"This paper process two robust models for site selection problems for one of
the major Hospitals in Hong Kong. Three parameters, namely, level of
uncertainty, infeasibility tolerance as well as the level of reliability, are
incorporated. Then, 2 kinds of uncertainty; that is, the symmetric and bounded
uncertainties have been investigated. Therefore, the issue of scheduling under
uncertainty has been considered wherein unknown problem factors could be
illustrated via a given probability distribution function. In this regard, Lin,
Janak, and Floudas (2004) introduced one of the newly developed strong
optimisation protocols. Hence, computers as well as the chemical engineering
[1069-1085] has been developed for considering uncertainty illustrated through
a given probability distribution. Finally, our accurate optimisation protocol
has been on the basis of a min-max framework and in a case of application to
the (MILP) problems it produced a precise solution that has immunity to
uncertain data.",A Robust Site Selection Model under uncertainty for Special Hospital Wards in Hong Kong,2023-07-21 14:34:17,"Mohammad Heydari, Yanan Fan, Kin Keung Lai","http://arxiv.org/abs/2307.11508v1, http://arxiv.org/pdf/2307.11508v1",econ.GN
33135,gn,"The Ministry of Economy has an interest and demand in exploring how to
increase the set of [legally registered] small family farmers in Ukraine and to
examine more in details measures that could reduce the scale of the shadow
agricultural market in Ukraine. Building upon the above political economy
background and demand, we will be undertaking the analysis along the two
separate but not totally independents streams of analysis, i.e. sustainable
small scale (family) farming development and exploring the scale and measures
for reducing the shadow agricultural market in Ukraine",Assessing the role of small farmers and households in agriculture and the rural economy and measures to support their sustainable development,2023-04-29 17:27:55,"Oleg Nivievskyi, Pavlo Iavorskyi, Oleksandr Donchenko","http://arxiv.org/abs/2307.11683v1, http://arxiv.org/pdf/2307.11683v1",econ.GN
33136,gn,"Nitrogen fertilization of boreal forests is investigated in terms of
microeconomics, as a tool for carbon sequestration. The effects of nitrogen
fertilization's timing on the return rate on capital and the expected value of
the timber stock are investigated within a set of semi-fertile,
spruce-dominated boreal stands, using an inventory-based growth model. Early
fertilization tends to shorten rotations, reducing timber stock and carbon
storage. The same applies to fertilization after the second thinning.
Fertilization applied ten years before stand maturity is profitable and
increases the timber stock, but the latter effect is small. Fertilization of
mature stands, extending any rotation by ten years, effectively increases the
carbon stock. Profitability varies but is increased by fertilization, instead
of merely extending the rotation.",Microeconomics of nitrogen fertilization in boreal carbon forestry,2023-07-23 19:03:24,Petri P. Karenlampi,"http://arxiv.org/abs/2307.12362v1, http://arxiv.org/pdf/2307.12362v1",econ.GN
33137,gn,"Smart roadside infrastructure sensors in the form of intelligent
transportation system stations (ITS-Ss) are increasingly deployed worldwide at
relevant traffic nodes. The resulting digital twins of the real environment are
suitable for developing and validating connected and automated driving
functions and for increasing the operational safety of intelligent vehicles by
providing ITS-S real-time data. However, ITS-Ss are very costly to establish
and operate. The choice of sensor technology also has an impact on the overall
costs as well as on the data quality. So far, there is insufficient knowledge
about the concrete expenses that arise from the construction of different
ITS-S setups. Within this work, multiple modular infrastructure
sensor setups are investigated with the help of a life cycle cost analysis
(LCCA). Their economic efficiency, different user requirements and sensor data
qualities are considered. Based on the static cost model, a Monte Carlo
simulation is performed to generate a range of possible project costs and to
quantify the financial risks of implementing ITS-S projects of different
scales. Due to its modularity, the calculation model is suitable for diverse
applications and outputs a distinctive evaluation of the underlying
cost-benefit ratio of investigated setups.",Economic Analysis of Smart Roadside Infrastructure Sensors for Connected and Automated Mobility,2023-07-24 18:42:40,"Laurent Kloeker, Gregor Joeken, Lutz Eckstein","http://arxiv.org/abs/2307.12893v1, http://arxiv.org/pdf/2307.12893v1",econ.GN
33138,gn,"Heat pumps are a key technology for reducing fossil fuel use in the heating
sector. A transition to heat pumps implies an increase in electricity demand,
especially in cold winter months. Using an open-source power sector model, we
examine the power sector impacts of a massive expansion of decentralized heat
pumps in Germany in 2030, combined with buffer heat storage of different sizes.
Assuming that the additional electricity used by heat pumps has to be fully
covered by renewable energies in a yearly balance, we quantify the required
additional investments in renewable energy sources. If wind power expansion
potentials are limited, the roll-out of heat pumps can also be accompanied by
solar PV with little additional costs, making use of the European
interconnection. The need for additional firm capacity and electricity storage
generally remains limited even in the case of temporally inflexible heat pumps.
We further find that relatively small heat storage capacities of 2 to 6 hours
can substantially reduce the need for short- and long-duration electricity
storage and other generation capacities, as well as power sector costs. We
further show that 5.8 million additional heat pumps save around 120 TWh of
natural gas and 24 million tonnes of CO$_2$ emissions per year.",Flexible heat pumps: must-have or nice to have in a power sector with renewables?,2023-07-24 19:18:34,"Alexander Roth, Dana Kirchem, Carlos Gaete-Morales, Wolf-Peter Schill","http://arxiv.org/abs/2307.12918v1, http://arxiv.org/pdf/2307.12918v1",econ.GN
33139,gn,"A central question in economics is whether automation will displace human
labor and diminish standards of living. Whilst prior works typically frame this
question as a competition between human labor and machines, we frame it as a
competition between human consumers and human suppliers. Specifically, we
observe that human needs favor long tail distributions, i.e., a long list of
niche items that are substantial in aggregate demand. In turn, the long tails
are reflected in the goods and services that fulfill those needs. With this
background, we propose a theoretical model of economic activity on a long tail
distribution, where innovation in demand for new niche outputs competes with
innovation in supply automation for mature outputs. Our model yields analytic
expressions and asymptotes for the shares of automation and labor in terms of
just four parameters: the rates of innovation in supply and demand, the
exponent of the long tail distribution and an initial value. We validate the
model via non-linear stochastic regression on historical US economic data with
surprising accuracy.","Long Tails, Automation and Labor",2023-07-27 01:06:30,B. N. Kausik,"http://arxiv.org/abs/2307.14525v1, http://arxiv.org/pdf/2307.14525v1",econ.GN
33140,gn,"The misuse of law by women in India is a serious issue that has been
receiving increased attention in recent years. In India, women are often
discriminated against and are not provided with equal rights and opportunities,
leading to a gender bias in many aspects of life. This gender bias is further
exacerbated by the misuse of law by women. There are numerous instances of
women using the law to their advantage, often at the expense of men. This
practice is not only unethical but also unconstitutional. The Indian
Constitution does not explicitly guarantee gender equality. However, several
amendments have been made to the Constitution to ensure that women are treated
equally in accordance with the law. The protection of women from all forms of
discrimination is considered a fundamental right. Despite this, women continue
to be discriminated against in various spheres of life, including marriage,
education, employment and other areas. The misuse of law by women in India is
primarily seen in cases of domestic violence and dowry-related issues, which
are punishable by law. However, women often file false dowry harassment cases
against their husbands or in-laws in order to gain an advantage in a divorce or
property dispute.",The misuse of law by Women in India -Constitutionality of Gender Bias,2023-07-27 09:56:30,"Negha Senthil, Jayanthi Vajiram, Nirmala. V","http://arxiv.org/abs/2307.14651v1, http://arxiv.org/pdf/2307.14651v1",econ.GN
33141,gn,"The rapid growth of air and space travel in recent years has resulted in an
increased demand for legal regulation in the aviation and aerospace fields.
This paper provides an overview of air and space law, including the topics of
aircraft accident investigations, air traffic control, international borders
and law, and the regulation of space activities. With the increasing complexity
of air and space travel, it is important to understand the legal implications
of these activities. This paper examines the various legal aspects of air and
space law, including the roles of national governments, international
organizations, and private entities. It also provides an overview of the legal
frameworks that govern these activities and the implications of international
law. Finally, it considers the potential for future developments in the field
of air and space law. This paper provides a comprehensive overview of the legal
aspects of air and space travel and their implications for international and
domestic travel, as well as for international business and other activities in
the air and space domains.",Exploration of legal implications of air and space travel for international and domestic travel and the Environment,2023-07-27 10:35:07,"Jayanthi Vajiram, Negha Senthil, Nean Adhith. P, Ritikaa. VN","http://arxiv.org/abs/2307.14661v1, http://arxiv.org/pdf/2307.14661v1",econ.GN
33142,gn,"We investigate, at the fundamental level, the questions of `why', `when' and
`how' one could or should reach out to poor and vulnerable people to support
them in the absence of governmental institutions. We provide a simple and new
approach that is rooted in linear algebra and basic graph theory to capture the
dynamics of income circulation among economic agents. A new linear algebraic
model for income circulation is introduced, based on which we are able to
categorize societies as fragmented or cohesive. We show that, in the case of
fragmented societies, convincing wealthy agents at the top of the social
hierarchy to support the poor and vulnerable will be very difficult. We also
highlight how linear-algebraic and simple graph-theoretic methods help explain,
from a fundamental point of view, some of the mechanics of class struggle in
fragmented societies. Then, we explain intuitively and prove mathematically
why, in cohesive societies, wealthy agents at the top of the social hierarchy
tend to benefit by supporting the vulnerable in their society. A number of new
concepts emerge naturally from our mathematical analysis to describe the level
of cohesiveness of the society, the number of degrees of separation in business
(as opposed to social) networks, and the level of generosity of the overall
economy, all of which tend to affect the rate at which the top wealthy class
recovers its support money. In the discussion of future perspectives, the
connections between the proposed matrix model and statistical physics concepts
are highlighted.",On the mathematics of the circular flow of economic activity with applications to the topic of caring for the vulnerable during pandemics,2023-07-28 00:01:52,"Aziz Guergachi, Javid Hakim","http://arxiv.org/abs/2307.15197v1, http://arxiv.org/pdf/2307.15197v1",econ.GN
33143,gn,"This study explores the marriage matching of only-child individuals and its
outcome. Specifically, we analyze two aspects. First, we investigate how
marital status (i.e., marriage with an only child, that with a non-only child
and remaining single) differs between only children and non-only children. This
analysis allows us to know whether people choose mates in a positive or a
negative assortative manner regarding only-child status, and to predict whether
only-child individuals benefit from marriage matching premiums or are subject
to penalties regarding partner attractiveness. Second, we measure the
premium/penalty by the size of the gap in the partner's socio-economic status
(SES, here, years of schooling) between only-child and non-only-child individuals.
The conventional economic theory and the observed marriage patterns of positive
assortative mating on only-child status predict that only-child individuals are
subject to a matching penalty in the marriage market, especially when their
partner is also an only child. Furthermore, our estimation confirms that among
especially women marrying an only-child husband, only children are penalized in
terms of 0.57-years-lower educational attainment on the part of the partner.",Only-child matching penalty in the marriage market,2023-07-28 09:27:25,"Keisuke Kawata, Mizuki Komura","http://arxiv.org/abs/2307.15336v1, http://arxiv.org/pdf/2307.15336v1",econ.GN
33144,gn,"Air pollution generates substantial health damages and economic costs
worldwide. Pollution exposure varies greatly, both between countries and within
them. However, the degree of air quality inequality and its trajectory over
time have not been quantified at a global level. Here I use economic inequality
indices to measure global inequality in exposure to ambient fine particles with
2.5 microns or less in diameter (PM2.5). I find high and rising levels of
global air quality inequality. The global PM2.5 Gini Index increased from 0.32
in 2000 to 0.36 in 2020, exceeding levels of income inequality in many
countries. Air quality inequality is mostly driven by differences between
countries and less so by variation within them, as decomposition analysis
shows. A large share of people facing the highest levels of PM2.5 exposure are
concentrated in only a few countries. The findings suggest that research and
policy efforts that focus only on differences within countries are overlooking
an important global dimension of environmental justice.",Global air quality inequality over 2000-2020,2023-07-28 19:59:16,Lutz Sager,"http://arxiv.org/abs/2307.15669v1, http://arxiv.org/pdf/2307.15669v1",econ.GN
33145,gn,"The article tries to compare urban and rural literacy of fifteen selected
Indian states during 1981 - 2011 and explores the instruments which can reduce
the disparity in urban and rural educational attainment. The study constructs
Sopher's urban-rural differential literacy index to analyze the trends of
literacy disparity across fifteen states in India over time. Although literacy
disparity has decreased over time, Sopher's index shows that the states of
Andhra Pradesh, Madhya Pradesh, Gujarat, Odisha, Maharashtra and even Karnataka
faced high inequality in education between urban and rural India in 2011.
Additionally, the Fixed Effect panel data regression technique has been applied
in the study to identify the factors which influence urban-rural inequality in
education. The model shows that the following factors can reduce literacy
disparity between urban and rural areas of India: a low fertility rate among
rural women, higher percentages of rural females marrying after the age of 21
years, and mothers' educational attainment and labour force participation rate
in rural areas.",Inequality in Educational Attainment: Urban-Rural Comparison in the Indian Context,2023-07-30 17:09:55,Sangita Das,"http://arxiv.org/abs/2307.16238v2, http://arxiv.org/pdf/2307.16238v2",econ.GN
33146,gn,"Stringent climate policy compatible with the targets of the 2015 Paris
Agreement would pose a substantial fiscal challenge. Reducing carbon dioxide
emissions by 95% or more by 2050 would raise 7% (1-17%) of GDP in carbon tax
revenue, half of current global tax revenue. Revenues are relatively larger in
poorer regions. Subsidies for carbon dioxide sequestration would amount to 6.6%
(0.3-7.1%) of GDP. These numbers are conservative as they were estimated using
models that assume first-best climate policy implementation and ignore the
costs of raising revenue. The fiscal challenge rapidly shrinks if emission
targets are relaxed.",The fiscal implications of stringent climate policy,2023-07-31 13:32:32,Richard S. J. Tol,"http://arxiv.org/abs/2307.16554v1, http://arxiv.org/pdf/2307.16554v1",econ.GN
33147,gn,"Gallice and Monz\'on (2019) present a natural environment that sustains full
co-operation in one-shot social dilemmas among a finite number of
self-interested agents. They demonstrate that in a sequential public goods
game, where agents lack knowledge of their position in the sequence but can
observe some predecessors' actions, full contribution emerges in equilibrium
due to agents' incentive to induce potential successors to follow suit. In this
study, we aim to test the theoretical predictions of this model through an
economic experiment. We conducted three treatments, varying the amount of
information about past actions that a subject can observe, as well as their
positional awareness. Through rigorous structural econometric analysis, we
found that approximately 25% of the subjects behaved in line with the
theoretical predictions. However, we also observed the presence of alternative
behavioural types among the remaining subjects. The majority were classified as
conditional co-operators, showing a willingness to cooperate based on others'
actions. Some subjects exhibited altruistic tendencies, while only a small
minority engaged in free-riding behaviour.",Position Uncertainty in a Sequential Public Goods Game: An Experiment,2023-08-01 01:20:42,"Chowdhury Mohammad Sakib Anwar, Konstantinos Georgalos","http://arxiv.org/abs/2308.00179v1, http://arxiv.org/pdf/2308.00179v1",econ.GN
33148,gn,"Various studies have been conducted in the fields of sustainable operations
management, optimization, and wastewater treatment, yielding unsubstantiated
recovery. In the context of Europe's climate neutrality vision, this paper
reviews effective decarbonization strategies and proposes sustainable
approaches to mitigate carbonization in various sectors such as building,
energy, industry, and transportation. The study also explores the role of
digitalization in decarbonization and reviews decarbonization policies that can
direct governments' action towards a climate-neutral society. The paper also
presents a review of optimization approaches applied in the fields of science
and technology, incorporating modern optimization techniques based on various
peer-reviewed published research papers. It emphasizes non-conventional energy
and distributed power generating systems along with the deregulated and
regulated environment. Additionally, this paper critically reviews the
performance and capability of micellar enhanced ultrafiltration (MEUF) process
in the treatment of dye wastewater. The review presents evidence of
simultaneous removal of co-existing pollutants and explores the feasibility and
efficiency of biosurfactants instead of chemical surfactants. Lastly, the paper
proposes a novel firm-regulator-consumer interaction framework to study
operations decisions and interactive cooperation considering the interactions
among three agents through a comprehensive literature review on sustainable
operations management. The framework provides support for exploring future
research opportunities.","Towards Climate Neutrality: A Comprehensive Overview of Sustainable Operations Management, Optimization, and Wastewater Treatment Strategies",2023-08-01 22:46:01,"Vasileios Alevizos, Ilias Georgousis, Anna-Maria Kapodistria","http://arxiv.org/abs/2308.00808v1, http://arxiv.org/pdf/2308.00808v1",econ.GN
33149,gn,"In economics, risk aversion is modeled via a concave Bernoulli utility within
the expected-utility paradigm. We propose a simple test of expected utility and
concavity. We find little support for either: only 30 percent of the choices
are consistent with a concave utility, only two out of 72 subjects are
consistent with expected utility, and only one of them fits the economic model
of risk aversion. Our findings contrast with the preponderance of seemingly
""risk-averse"" choices that have been elicited using the popular multiple-price
list methodology, a result we replicate in this paper. We demonstrate that this
methodology is unfit to measure risk aversion, and that the high prevalence of
risk aversion it produces is due to parametric misspecification.",A Non-Parametric Test of Risk Aversion,2023-08-04 02:51:40,"Jacob K Goeree, Bernardo Garcia-Pola","http://arxiv.org/abs/2308.02083v1, http://arxiv.org/pdf/2308.02083v1",econ.GN
33151,gn,"The updating required by students in the marketing area is of utmost
importance, since they cannot wait until they are professionals to be updated,
especially in entrepreneurship and around digital marketing. The objective of
this work was to develop a digital marketing specialty module that allows
changing the academic-entrepreneurial paradigm of the graduate
of the Centro Universitario de los Altos (CUAltos) in economic-administrative
sciences at the undergraduate, graduate and specialty levels. The module aims
to instruct students in the latest developments in digital marketing. As part
of this, two surveys were administered to the teachers of this center to assess
the viability of, and their willingness to participate in, this module. Among the results, it was
discovered that marketing research specialists who work as teachers at CUAltos
are willing to train other teachers and students of the University of
Guadalajara's system, as well as to advise students in their ventures so that
they can apply different components of this field.","El paradigma del marketing digital en la academia, el emprendimiento universitario y las empresas establecidas",2023-08-06 02:43:06,Guillermo Jose Navarro del Toro,"http://dx.doi.org/10.23913/ride.v13i25.1321, http://arxiv.org/abs/2308.02969v1, http://arxiv.org/pdf/2308.02969v1",econ.GN
33152,gn,"As the global economy continues to grow, ecosystem services tend to stagnate
or degrow. Economic theory has shown how such shifts in relative scarcities can
be reflected in the appraisal of public projects and environmental-economic
accounting, but empirical evidence has been lacking to put the theory into
practice. To estimate the relative price change in ecosystem services that can
be used to make such adjustments, we perform a global meta-analysis of
environmental valuation studies to derive income elasticities of willingness to
pay (WTP) for ecosystem services as a proxy for the degree of limited
substitutability. Based on 749 income-WTP pairs, we estimate an income
elasticity of WTP of around 0.78 (95-CI: 0.6 to 1.0). Combining these results
with a global data set on shifts in the relative scarcity of ecosystem
services, we estimate a relative price change of ecosystem services of around 2.2
percent per year. In an application to natural capital valuation of non-timber
forest ecosystem services by the World Bank, we show that their natural capital
value should be uplifted by more than 50 percent (95-CI: 32 to 78 percent),
materially elevating the role of public natural capital. We discuss
implications for relative price adjustments in policy appraisal and for
improving estimates of comprehensive national accounts.","Limited substitutability, relative price changes and the uplifting of public natural capital values",2023-08-08 20:01:49,"Moritz A. Drupp, Zachary M. Turk, Ben Groom, Jonas Heckenhahn","http://arxiv.org/abs/2308.04400v1, http://arxiv.org/pdf/2308.04400v1",econ.GN
33153,gn,"The Mobilit\""at.Leben study investigated travel behavior effects of a natural
experiment in Germany. In response to the 2022 cost-of-living crisis, two
policy measures to reduce travel costs for the population in June, July, and
August 2022 were introduced: a fuel excise tax cut and almost fare-free public
transport with the so-called 9-Euro-Ticket. The announcement of a successor
ticket to the 9-Euro-Ticket, the so-called Deutschlandticket, led to the
immediate decision to continue the study. The Mobilit\""at.Leben study has two
periods, the 9-Euro-Ticket period and the Deutschlandticket period, and
comprises two elements: several questionnaires and a smartphone-based passive
waypoint tracking. The entire duration of the study was almost thirteen months.
  In this paper, we report on the study design, the recruitment strategy, the
study participation in the survey, and the tracking parts, and we share our
experience in conducting such large-scale panel studies. Overall, 3,080 people
registered for our study, of whom 1,420 decided to use the smartphone tracking
app. While the relevant questionnaires in both phases have been completed by
818 participants, we have 170 study participants who completed the tracking in
both phases and all relevant questionnaires. We find that providing a study
compensation increases participation performance. It can be concluded that
conducting year-long panel studies is possible, providing rich information on
the heterogeneity in travel behavior between and within travelers.",The Mobilität.Leben Study: a Year-Long Mobility-Tracking Panel,2023-08-09 17:16:27,"Allister Loder, Fabienne Cantner, Victoria Dahmen, Klaus Bogenberger","http://arxiv.org/abs/2308.04973v1, http://arxiv.org/pdf/2308.04973v1",econ.GN
33154,gn,"This study examines the relationship between ambiguity and the ideological
positioning of political parties across the political spectrum. We identify a
strong non-monotonic (inverted U-shaped) relationship between party ideology
and ambiguity within a sample of 202 European political parties. This pattern
is observed across all ideological dimensions covered in the data. To explain
this pattern, we propose a novel theory that suggests centrist parties are
perceived as less risky by voters compared to extremist parties, giving them an
advantage in employing ambiguity to attract more voters at a lower cost. We
support our explanation with additional evidence from electoral outcomes and
economic indicators in the respective party countries.",Ideological Ambiguity and Political Spectrum,2023-08-11 05:30:22,Hector Galindo-Silva,"http://arxiv.org/abs/2308.05912v1, http://arxiv.org/pdf/2308.05912v1",econ.GN
33155,gn,"Cross-Impact Balance Analysis (CIB) is a widely used method to build
scenarios and help researchers to formulate policies in different fields, such
as management sciences and social sciences. During the development of the CIB
method over the years, some derivative methods were developed to expand its
application scope, including a method called dynamic CIB. However, the workflow
of dynamic CIB is relatively complex. In this article, we provide another
approach to extend CIB to multiple timespans, based on the concept of 'scenario
weight', and simplify the workflow to bring convenience to policy makers.",An approach to extend Cross-Impact Balance method in multiple timespans,2023-08-11 19:52:11,Chonghao Zhao,"http://arxiv.org/abs/2308.06223v1, http://arxiv.org/pdf/2308.06223v1",econ.GN
33156,gn,"We seek to gain more insight into the effect of the crowds on the Home
Advantage by analyzing the particular case of Argentinean football (also known
as soccer), where for more than ten years, the visiting team fans were not
allowed to attend the games. Additionally, during the COVID-19 lockdown, a
significant number of games were played without both away and home team fans.
The analysis of more than 20 years of matches of the Argentinean tournament
indicates that the absence of the away team crowds was beneficial for the Top 5
teams during the first two years after their attendance was forbidden. An
additional intriguing finding is that the lack of both crowds significantly
affects all the teams, to the point of turning the home advantage into a home
`disadvantage' for most of the teams.",Visitors Out! The Absence of Away Team Supporters as a Source of Home Advantage in Football,2023-08-03 16:22:30,"Federico Fioravanti, Fernando Delbianco, Fernando Tohmé","http://arxiv.org/abs/2308.06279v2, http://arxiv.org/pdf/2308.06279v2",econ.GN
33157,gn,"We study how disagreement influences the productive performance of a team in
a dynamic game with positive production externalities. Players can hold
different views about the productivity of the available production
technologies. These differences lead to different technology and effort choices
-- ""optimistic"" views induce higher effort than ""skeptical"" views. Views are
resilient, changed only if falsified by surprising evidence. We find that, with
a single technology, when the team disagrees about its productivity optimists
exert more effort early on than when the team is like-minded. With sufficiently
strong externalities, a disagreeing team produces on average more than any
like-minded team, on aggregate. When team members can choose production
technologies, disagreement over which technology works best motivates everyone
to exert more effort and seek early successes that make others adopt their
preferred technology. This always increases average output if the technologies
are similarly productive in reality.",The Disagreement Dividend,2023-08-12 19:44:58,Giampaolo Bonomi,"http://arxiv.org/abs/2308.06607v5, http://arxiv.org/pdf/2308.06607v5",econ.GN
33158,gn,"Stablecoins have gained significant popularity recently, with their market
cap rising to over $180 billion. However, recent events have raised concerns
about their stability. In this paper, we classify stablecoins into four types
based on the source and management of collateral and investigate the stability
of each type under different conditions. We highlight each type's potential
instabilities and underlying tradeoffs using agent-based simulations. The
results emphasize the importance of carefully evaluating the origin of a
stablecoin's collateral and its collateral management mechanism to ensure
stability and minimize risks. Enhanced understanding of stablecoins should be
informative to regulators, policymakers, and investors alike.",The four types of stablecoins: A comparative analysis,2023-08-14 13:03:48,"Matthias Hafner, Marco Henriques Pereira, Helmut Dietl, Juan Beccuti","http://arxiv.org/abs/2308.07041v1, http://arxiv.org/pdf/2308.07041v1",econ.GN
33159,gn,"This paper applies the Hotelling model to the context of exhaustible human
resources in China. We find that over-exploitation of human resources occurs
under conditions of restricted population mobility, rigid wage levels, and
increased foreign trade demand elasticity. Conversely, the existence of
technological replacements for human resources or improvements in the
utilization rate of human resources leads to conservation. Our analysis
provides practical insights for policy-making towards sustainable development.",Exploring the Nexus between Exhaustible Human Resources and Economic Development in China: An Application of the Hotelling Model,2023-06-21 08:07:33,Zhiwei Yang,"http://arxiv.org/abs/2308.07154v1, http://arxiv.org/pdf/2308.07154v1",econ.GN
33160,gn,"Economic Complexity (EC) methods have gained increasing popularity across
fields and disciplines. In particular, the EC toolbox has proved particularly
promising in the study of complex and interrelated phenomena, such as the
transition towards a greener economy. Using the EC approach, scholars have been
investigating the relationship between EC and sustainability, proposing to
identify the distinguishing characteristics of green products and to assess the
readiness of productive and technological structures for the sustainability
transition. This article proposes to review and summarize the data, methods,
and empirical literature that are relevant to the study of the sustainability
transition from an EC perspective. We review three distinct but connected
blocks of literature on EC and environmental sustainability. First, we survey
the evidence linking measures of EC to indicators related to environmental
sustainability. Second, we review articles that strive to assess the green
competitiveness of productive systems. Third, we examine evidence on green
technological development and its connection to non-green knowledge bases.
Finally, we summarize the findings for each block and identify avenues for
further research in this recent and growing body of empirical literature.","Economic complexity and the sustainability transition: A review of data, methods, and literature",2023-08-14 17:28:00,"Bernardo Caldarola, Dario Mazzilli, Lorenzo Napolitano, Aurelio Patelli, Angelica Sbardella","http://arxiv.org/abs/2308.07172v1, http://arxiv.org/pdf/2308.07172v1",econ.GN
33161,gn,"A representation of economic activity in the form of a law of conservation of
value is presented based on the definition of value as potential to act in an
environment. This allows the encapsulation of the term as a conserved quantity
throughout transactions. Marginal value and speed of marginal value are defined
as derivatives of value and marginal value, respectively. Traditional economic
statements are represented here as cycles of value where value is conserved.
Producer-consumer dyads, shortage and surplus, as well as the role of the value
in representing the market and the economy are explored. The role of the
government in the economy is also explained through the cycles of value the
government is involved in. Traditional economic statements and assumptions
produce existing hypotheses as outcomes of the law of conservation of value.",The Cycle of Value -- A Conservationist Approach to Economics,2023-08-08 15:27:17,Nick Harkiolakis,"http://arxiv.org/abs/2308.07185v1, http://arxiv.org/pdf/2308.07185v1",econ.GN
33162,gn,"PM-Gati-Shakti Initiative, integration of ministries, including railways,
ports, waterways, logistic infrastructure, mass transport, airports, and roads.
Aimed at enhancing connectivity and bolstering the competitiveness of Indian
businesses, the initiative focuses on six pivotal pillars known as
""Connectivity for Productivity"": comprehensiveness, prioritization,
optimization, synchronization, analytical, and dynamic. In this study, we
explore the application of these pillars to address the problem of ""Maximum
Demand Forecasting in Delhi."" Electricity forecasting plays a very significant
role in the power grid as it is required to maintain a balance between supply
and load demand at all times, to provide a quality electricity supply, for
Financial planning, generation reserve, and many more. Forecasting helps not
only in Production Planning but also in Scheduling like Import / Export which
is very often in India and mostly required by the rural areas and North Eastern
Regions of India. As Electrical Forecasting includes many factors which cannot
be detected by the models out there, We use Classical Forecasting Techniques to
extract the seasonal patterns from the daily data of Maximum Demand for the
Union Territory Delhi. This research contributes to the power supply industry
by helping to reduce the occurrence of disasters such as blackouts, power cuts,
and increased tariffs imposed by regulatory commissions. The forecasting
techniques can also help in reducing OD and UD of Power for different regions.
We use the Data provided by a department from the Ministry of Power and use
different forecast models including Seasonal forecasts for daily data.",PM-Gati Shakti: Advancing India's Energy Future through Demand Forecasting -- A Case Study,2023-07-30 12:36:42,"SujayKumar Reddy M, Gopakumar G","http://arxiv.org/abs/2308.07320v1, http://arxiv.org/pdf/2308.07320v1",econ.GN
33163,gn,"This paper aims to assess the impact of COVID-19 on the public finance of
Chinese local governments, with a particular focus on the effect of lockdown
measures on startups during the pandemic. The outbreak has placed significant
fiscal pressure on local governments, as containment measures have led to
declines in revenue and increased expenses related to public health and social
welfare. In tandem, startups have faced substantial challenges, including
reduced funding and profitability, due to the negative impact of lockdown
measures on entrepreneurship. Moreover, the pandemic has generated short- and
long-term economic shocks, affecting both employment and economic recovery. To
address these challenges, policymakers must balance health concerns with
economic development. In this regard, the government should consider
implementing more preferential policies that focus on startups to ensure their
survival and growth. Such policies may include financial assistance, tax
incentives, and regulatory flexibility to foster innovation and
entrepreneurship. By and large, the COVID-19 pandemic has had a profound impact
on both the public finance of Chinese local governments and the startup
ecosystem. Addressing the challenges faced by local governments and startups
will require a comprehensive approach that balances health and economic
considerations and includes targeted policies to support entrepreneurship and
innovation.",Impact of COVID-19 Lockdown Measures on Chinese Startups and Local Government Public Finance: Challenges and Policy Implications,2023-08-14 23:13:28,Xin Sun,"http://arxiv.org/abs/2308.07437v1, http://arxiv.org/pdf/2308.07437v1",econ.GN
33164,gn,"This paper examines the mode choice behaviour of people who may act as
occasional couriers to provide crowd-shipping (CS) deliveries. Given its recent
increase in popularity, online grocery services have become the main market for
crowd-shipping delivery providers. The study included a behavioural survey,
PTV Visum simulations and discrete choice behaviour modelling based on random
utility maximization theory. Mode choice behaviour was examined by considering
the gender heterogeneity of the occasional couriers in a multimodal urban
transport network. The behavioural dataset was collected in the city of
Kharkiv, Ukraine, at the beginning of 2021. The results indicated that women
were willing to provide CS service with 8% less remuneration than men. Women
were also more likely to make 10% longer detours by car and metro than men,
while male couriers were willing to implement 25% longer detours when
travelling by bike or walking. Considering the integration of CS detours into
the couriers' routine trip chains, women couriers were more likely to attach
the CS trip to the work-shopping trip chain whilst men would use the home-home
evening time trip chain. The estimated marginal probability effect indicated a
higher detour time sensitivity with respect to expected profit and the relative
detour costs of the couriers.",Does courier gender matter? Exploring mode choice behaviour for E-groceries crowd-shipping in developing economies,2023-08-15 21:54:54,"Oleksandr Rossolov, Anastasiia Botsman, Serhii Lyfenko, Yusak O. Susilo","http://arxiv.org/abs/2308.07993v1, http://arxiv.org/pdf/2308.07993v1",econ.GN
33165,gn,"In this study, we analyze the relationship between human population growth
and economic dynamics. To do so, we present a modified version of the Verhulst
model and the Solow model, which together simulate population dynamics and the
role of economic variables in capital accumulation. The model incorporates
support and foraging functions, which participate in the dynamic relationship
between population growth and the creation and destruction of carrying
capacity. The validity of the model is demonstrated using empirical data.",Modified Verhulst-Solow model for long-term population and economic growths,2023-08-16 15:21:30,"Iram Gleria, Sergio Da Silva, Leon Brenig, Tarcísio M. Rocha Filho, Annibal Figueiredo","http://arxiv.org/abs/2308.08315v1, http://arxiv.org/pdf/2308.08315v1",econ.GN
33166,gn,"This book presents detailed discussion on the role of higher education in
terms of serving basic knowledge creation, teaching, and doing applied research
for commercialization. The book presents an historical account on how this
challenge was addressed earlier in education history, the cases of successful
academic commercialization, the marriage between basic and applied science and
how universities develop economies of the regions and countries. This book also
discusses cultural and social challenges in research commercialization and
pathways to break the status quo.","Entrepreneurial Higher Education: Education, Knowledge and Wealth Creation",2023-08-17 09:36:25,"Rahmat Ullah, Rashid Aftab, Saeed Siyal, Kashif Zaheer","http://arxiv.org/abs/2308.08808v1, http://arxiv.org/pdf/2308.08808v1",econ.GN
33167,gn,"Academic research has shown significant interest in international student
mobility, with previous literature primarily focusing on the migration industry
from a political and public policy perspective. For many countries,
international student mobility plays a crucial role in bolstering their
economies through financial gains and attracting skilled immigrants. While
previous studies have explored the determinants of mobility and country
economic policies, only a few have examined the impact of policy changes on
mobility trends. In this study, the researchers investigate the influence of
immigration policy changes, particularly the optional practical training (OPT)
extension for STEM programs, on Asian students' preference for enrolling in STEM
majors at universities. The study utilizes observational data and employs a
quasi-experimental design, analysing the information using the
difference-in-differences technique. The findings of the research indicate that
the implementation of the STEM extension policy in 2008 had a significant
effect on Asian students' decisions to enroll in a STEM major. Additionally,
the study highlights the noteworthy role of individual factors such as the
specific STEM major, terminal degree pursued, and gender in influencing Asian
students' enrollment decisions.",Econometrics Modelling Approach to Examine the Effect of STEM Policy Changes on Asian Students Enrollment Decision in USA,2023-08-17 16:22:51,"Prathamesh Muzumdar, George Kurian, Ganga Prasad Basyal, Apoorva Muley","http://arxiv.org/abs/2308.08972v1, http://arxiv.org/pdf/2308.08972v1",econ.GN
33175,gn,"We have studied the parametric description of the distribution of the
log-growth rates of the sizes of cities of France, Germany, Italy, Spain and
the USA. We have considered several parametric distributions well known in the
literature as well as some others recently introduced. There are some models
that provide similarly excellent performance for all studied samples. The normal
distribution is not the one observed empirically.",On the parametric description of log-growth rates of cities' sizes of four European countries and the USA,2023-08-19 17:41:52,"Till Massing, Miguel Puente-Ajovín, Arturo Ramos","http://dx.doi.org/10.1016/j.physa.2020.124587, http://arxiv.org/abs/2308.10034v1, http://arxiv.org/pdf/2308.10034v1",econ.GN
33168,gn,"At the outbreak of the recent inflation surge, the public's attention to
inflation was low but increased rapidly once inflation started to rise. I
develop a framework where this behavior is optimal: agents pay little attention
to inflation when inflation is low and stable, but they increase their
attention once inflation exceeds a certain threshold. Using survey inflation
expectations, I estimate the attention threshold to be at an inflation rate of
about 4\%, with attention in the high-attention regime being twice as high as
in the low-attention regime. Embedding this into a general equilibrium monetary
model, I find that the inflation attention threshold gives rise to a dynamic
non-linearity in the Phillips Curve, rendering inflation more sensitive to
fluctuations in the output gap during periods of high attention. When
calibrated to match the empirical findings, the model generates inflation and
inflation expectation dynamics consistent with the recent inflation surge in
the US. The attention threshold induces a state dependency: cost-push shocks
become more inflationary in times of loose monetary policy. These
state-dependent effects are absent in the model with constant attention or
under rational expectations. Following simple Taylor rules triggers frequent
and prolonged episodes of heightened attention, thereby increasing the
volatility of inflation, and - due to the asymmetry of the attention threshold
- also the average level of inflation, which leads to substantial welfare
losses.",The Inflation Attention Threshold and Inflation Surges,2023-08-16 18:33:22,Oliver Pfäuti,"http://arxiv.org/abs/2308.09480v2, http://arxiv.org/pdf/2308.09480v2",econ.GN
33169,gn,"A trite yet fundamental question in economics is: What causes large asset
price fluctuations? A tenfold rise in the price of GameStop equity, between the
22nd and 28th of January 2021, demonstrated that herding behaviour among retail
investors is an important contributing factor. This paper presents a
data-driven guide to the forum that started the hype -- WallStreetBets (WSB).
Our initial experiments decompose the forum using a large language topic model
and network tools. The topic model describes the evolution of the forum over
time and shows the persistence of certain topics (such as the market / S\&P500
discussion), and the sporadic interest in others, such as COVID or crude oil.
Network analysis allows us to decompose the landscape of retail investors into
clusters based on their posting and discussion habits; several large,
correlated asset discussion clusters emerge, surrounded by smaller, niche ones.
A second set of experiments assesses the impact that WSB discussions have had
on the market. We show that forum activity has a Granger-causal relationship
with the returns of several assets, some of which are now commonly classified
as `meme stocks', while others have gone under the radar. The paper extracts a
set of short-term trade signals from posts and long-term (monthly and weekly)
trade signals from forum dynamics, and considers their predictive power at
different time horizons. In addition to the analysis, the paper presents the
dataset, as well as an interactive dashboard, in order to promote further
research.",Wisdom of the Crowds or Ignorance of the Masses? A data-driven guide to WSB,2023-08-18 14:39:21,"Valentina Semenova, Dragos Gorduza, William Wildi, Xiaowen Dong, Stefan Zohren","http://arxiv.org/abs/2308.09485v1, http://arxiv.org/pdf/2308.09485v1",econ.GN
33170,gn,"Unemployment insurance provides temporary cash benefits to eligible
unemployed workers. Benefits are sometimes extended by discretion during
economic slumps. In a model that features temporary benefits and sequential job
opportunities, a worker's reservation wages are studied when policymakers can
make discretionary extensions to benefits. A worker's optimal labor-supply
choice is characterized by a sequence of reservation wages that increases with
weeks of remaining benefits. The possibility of an extension raises the entire
sequence of reservation wages, meaning a worker is more selective when
accepting job offers throughout their spell of unemployment. The welfare
consequences of misperceiving the probability and length of an extension are
investigated. Properties of the model can help policymakers interpret data on
reservation wages, which may be important if extended benefits are used more
often in response to economic slumps, virus pandemics, extreme heat, and
natural disasters.",Discretionary Extensions to Unemployment-Insurance Compensation and Some Potential Costs for a McCall Worker,2023-08-18 22:16:07,Rich Ryan,"http://arxiv.org/abs/2308.09783v3, http://arxiv.org/pdf/2308.09783v3",econ.GN
33171,gn,"Aghamolla and Smith (2023) make a significant contribution to enhancing our
understanding of how managers choose financial reporting complexity. I outline
the key assumptions and implications of the theory, and discuss two empirical
implications: (1) a U-shaped relationship between complexity and returns, and
(2) a negative association between complexity and investor sophistication.
However, the robust equilibrium also implies a counterfactual positive market
response to complexity. I develop a simplified approach in which simple
disclosures indicate positive surprises, and show that this implies greater
investor skepticism toward complexity and a positive association between
investor sophistication and complexity. More work is needed to understand
complexity as an interaction of reporting and economic transactions, rather
than solely as a reporting phenomenon.",Managers' Choice of Disclosure Complexity,2023-08-18 22:21:23,Jeremy Bertomeu,"http://arxiv.org/abs/2308.09789v1, http://arxiv.org/pdf/2308.09789v1",econ.GN
33172,gn,"We study the parametric description of the city size distribution (CSD) of 70
different countries (developed and developing) using seven models, as follows:
the lognormal (LN), the loglogistic (LL), the double Pareto lognormal (dPLN),
the two-lognormal (2LN), the two-loglogistic (2LL), the three-lognormal (3LN)
and the three-loglogistic (3LL). Our results show that 3LN and 3LL are the best
densities in terms of non-rejections out of standard statistical tests.
Meanwhile, according to the information criteria AIC and BIC, there is no
systematically dominant distribution.",Is there a universal parametric city size distribution? Empirical evidence for 70 countries,2023-08-19 17:00:56,"Miguel Puente-Ajovín, Arturo Ramos, Fernando Sanz-Gracia","http://dx.doi.org/10.1007/s00168-020-01001-6, http://arxiv.org/abs/2308.10018v1, http://arxiv.org/pdf/2308.10018v1",econ.GN
33173,gn,"We perform a comparative study for multiple equity indices of different
countries using different models to determine the best fit using the
Kolmogorov-Smirnov statistic, the Anderson-Darling statistic, the Akaike
information criterion and the Bayesian information criteria as goodness-of-fit
measures. We fit models both to daily and to hourly log-returns. The main
result is the excellent performance of a mixture of three Student's $t$
distributions with the numbers of degrees of freedom fixed a priori (3St). In
addition, we find that the different components of the 3St mixture with
small/moderate/high degree of freedom parameter describe the
extreme/moderate/small log-returns of the studied equity indices.",Student's t mixture models for stock indices. A comparative study,2023-08-19 17:16:39,"Till Massing, Arturo Ramos","http://dx.doi.org/10.1016/j.physa.2021.126143, http://arxiv.org/abs/2308.10023v1, http://arxiv.org/pdf/2308.10023v1",econ.GN
33177,gn,"Transitioning to a net-zero economy requires a nuanced understanding of
homeowners' decision-making pathways when considering the adoption of Low Carbon
Technologies (LCTs). These LCTs present both personal and collective benefits,
with positive perceptions critically influencing attitudes and intentions. Our
study analyses the relationship between two primary benefits: the
household-level financial gain and the broader environmental advantage.
Focusing on the intention to adopt Rooftop Photovoltaic Systems, Energy
Efficient Appliances, and Green Electricity Tariffs, we employ Partial Least
Squares Structural Equation Modeling to demonstrate that the adoption intention
of the LCTs is underpinned by the Theory of Planned Behaviour. Attitudes toward
the LCTs are more strongly related to product-specific benefits than affective
constructs. In terms of evaluative benefits, environmental benefits exhibit a
higher positive association with attitude formation compared to financial
benefits. However, this relationship switches as homeowners move through the
decision process, with the financial benefits of selected LCTs having a
consistently higher association with adoption intention. At the same time,
financial benefits also positively affect attitudes. Observing this trend
across both low- and high-cost LCTs, we recommend that policymakers amplify
homeowners' recognition of the individual benefits intrinsic to LCTs and enact
measures that ensure these financial benefits.",Green or greedy: the relationship between perceived benefits and homeowners' intention to adopt residential low-carbon technologies,2023-08-19 23:26:56,"Fabian Scheller, Karyn Morrissey, Karsten Neuhoff, Dogan Keles","http://arxiv.org/abs/2308.10104v1, http://arxiv.org/pdf/2308.10104v1",econ.GN
33178,gn,"Based on a record of dissents on FOMC votes and transcripts of the meetings
from 1976 to 2017, we develop a deep learning model based on self-attention
modules to create a measure of disagreement for each member in each meeting.
While dissents are rare, we find that members often have reservations with the
policy decision. The level of disagreement is mostly driven by current or
predicted macroeconomic data at both the individual and meeting levels, while
personal characteristics of the members matter only at the individual level. We
also use our model to evaluate speeches made by members between meetings, and
we find a weak correlation between the level of disagreement revealed in them
and that of the following meeting. Finally, we find that the level of
disagreement increases whenever monetary policy action is more aggressive.",Agree to Disagree: Measuring Hidden Dissents in FOMC Meetings,2023-08-20 04:48:27,"Kwok Ping Tsang, Zichao Yang","http://arxiv.org/abs/2308.10131v3, http://arxiv.org/pdf/2308.10131v3",econ.GN
33179,gn,"A sustainable solution to negative externalities imposed by road
transportation is replacing internal combustion vehicles with electric vehicles
(EVs), especially plug-in EV (PEV) encompassing plug-in hybrid EV (PHEV) and
battery EV (BEV). However, EV market share is still low and is forecast to
remain low and uncertain. This shows a research need for an in-depth
understanding of EV adoption behavior with a focus on one of the main barriers
to mass EV adoption, which is the limited electric driving range. The
present study extends the existing literature in two directions. First, the
influence on EV adoption behavior of the psychological aspect of driving range,
which is referred to as range anxiety, is explored by presenting a nested logit
(NL) model with a latent construct. Second, the two-level NL model captures
individuals' decision on EV adoption behavior distinguished by vehicle
transaction type and EV type, where the upper level yields the vehicle
transaction type selected from the set of alternatives including
no-transaction, sell, trade, and add. The fuel type of the vehicles decided to
be acquired, either as traded-for or added vehicles, is simultaneously
determined at the lower level from a set including conventional vehicle, hybrid
EV, PHEV, and BEV. The model is empirically estimated using a stated
preferences dataset collected in the State of California. A notable finding is
that anxiety about driving range influences the preference for BEVs, especially
as an added rather than a traded-for vehicle, but not the decision on PHEV adoption.",Exploring the Role of Perceived Range Anxiety in Adoption Behavior of Plug-in Electric Vehicles,2023-08-20 19:24:18,"Fatemeh Nazari, Abolfazl Mohammadian, Thomas Stephens","http://arxiv.org/abs/2308.10313v1, http://arxiv.org/pdf/2308.10313v1",econ.GN
33180,gn,"""Economists miss the boat when they act as if Arrow and Debreu's general
equilibrium model accurately describes markets in the real world of constant
change. In contrast, the classical view on the market mechanism offers a
helpful foundation on which to add modern insights about how markets create and
coordinate information.""",Classical Economics: Lost and Found,2023-08-22 01:27:25,"Sabiou Inoua, Vernon Smith","http://arxiv.org/abs/2308.11069v1, http://arxiv.org/pdf/2308.11069v1",econ.GN
33181,gn,"Gender discrimination in the hiring process is one significant factor
contributing to labor market disparities. However, there is little evidence on
the extent to which gender bias by hiring managers is responsible for these
disparities. In this paper, I exploit a unique dataset of blind auditions of
The Voice television show as an experiment to identify own-gender bias in the
selection process. In the first televised audition stage, four noteworthy
recording artists acting as coaches listen to the contestants blindly (chairs
facing away from the stage) to avoid seeing the contestant. Using a
difference-in-differences estimation strategy, in which a coach (the hiring
person) is demonstrably exogenous with respect to the artist's gender, I find
that artists are 4.5 percentage points (11 percent) more likely to be selected
when they face an opposite-gender coach. I also utilize the machine-learning
approach in Athey et al. (2018) to include heterogeneity from team gender
composition, order of performance, and failure rates of the coaches. The
findings offer a new perspective to enrich past research on gender
discrimination, shedding light on the instances of gender bias variation by the
gender of the decision maker and team gender composition.",Discrimination and Constraints: Evidence from The Voice,2023-08-23 08:09:15,Anuar Assamidanov,"http://arxiv.org/abs/2308.11922v1, http://arxiv.org/pdf/2308.11922v1",econ.GN
33199,gn,"The paper examined the impact of agricultural credit on economic growth in
Bangladesh. The annual data of agriculture credit were collected from annual
reports of the Bangladesh Bank and other data were collected from the World
Development Indicators (WDI) of the World Bank. By employing the Johansen
cointegration test and vector error correction model (VECM), the study revealed
that there exists a long run relationship between the variables. The results of
the study showed that agriculture credit had a positive impact on GDP growth in
Bangladesh. The study also found that gross capital formation had a positive,
while inflation had a negative association with economic growth in Bangladesh.
Therefore, the government and policymakers should continue their effort to
increase the volume of agriculture credit to achieve sustainable economic
growth.",Agriculture Credit and Economic Growth in Bangladesh: A Time Series Analysis,2023-09-08 07:40:38,"Md. Toaha, Laboni Mondal","http://arxiv.org/abs/2309.04118v1, http://arxiv.org/pdf/2309.04118v1",econ.GN
33182,gn,"This paper examines the impact of government procurement in social welfare
programs on consumers, manufacturers, and the government. We analyze the U.S.
infant formula market, where over half of the total sales are purchased by the
Women, Infants, and Children (WIC) program. The WIC program utilizes
first-price auctions to solicit rebates from the three main formula
manufacturers, with the winner exclusively serving all WIC consumers in the
winning state. The manufacturers compete aggressively in providing rebates
which account for around 85% of the wholesale price. To rationalize and
disentangle the factors contributing to this phenomenon, we model
manufacturers' retail pricing competition by incorporating two unique features:
price inelastic WIC consumers and government regulation on WIC brand prices.
Our findings confirm three sizable benefits from winning the auction: a notable
spill-over effect on non-WIC demand, a significant marginal cost reduction, and
a higher retail price for the WIC brand due to the price inelasticity of WIC
consumers. Our counterfactual analysis shows that procurement auctions affect
manufacturers asymmetrically, with the smallest manufacturer harmed the most.
More importantly, by switching from the current mechanism to a predetermined
rebate procurement, the government can still contain the cost successfully,
consumers' surplus is greatly improved, and the smallest manufacturer benefits
from the switch, promoting market competition.",Procurement in welfare programs: Evidence and implications from WIC infant formula contracts,2023-08-24 03:39:37,"Yonghong An, David Davis, Yizao Liu, Ruli Xiao","http://arxiv.org/abs/2308.12479v1, http://arxiv.org/pdf/2308.12479v1",econ.GN
33183,gn,"The study analyzed the impact of financial inclusion on the effectiveness of
monetary policy in developing countries. By using a panel data set of 10
developing countries during 2004-2020, the study revealed that financial inclusion, measured by the number of ATMs per 100,000 adults, had a significant negative effect on monetary policy, whereas the other measure of financial inclusion, i.e. the number of bank accounts per 100,000 adults, had a positive but statistically insignificant impact on monetary policy. The study also revealed that foreign direct investment (FDI), lending rate and exchange rate had a positive impact on inflation, but only the effect of the lending rate was statistically significant. Therefore, the governments of these countries should make the necessary efforts to increase the level of financial inclusion, as it
stabilizes the price level by reducing the inflation in the economy.",Financial Inclusion and Monetary Policy: A Study on the Relationship between Financial Inclusion and Effectiveness of Monetary Policy in Developing Countries,2023-08-24 06:57:02,"Gautam Kumar Biswas, Faruque Ahamed","http://arxiv.org/abs/2308.12542v1, http://arxiv.org/pdf/2308.12542v1",econ.GN
33184,gn,"Despite increasing cognitive demands of jobs, knowledge about the role of
health in retirement has centered on its physical dimensions. This paper
estimates a dynamic programming model of retirement that incorporates multiple
health dimensions, allowing differential effects on labor supply across
occupations. Results show that the effect of cognitive health surges
exponentially after age 65, and it explains a notable share of employment
declines in cognitively demanding occupations. Under pension reforms, physical
constraint mainly impedes manual workers from delaying retirement, whereas
cognitive constraint dampens the response of clerical and professional workers.
Multidimensional health thus unevenly exacerbates welfare losses across
occupations.",Occupational Retirement and Pension Reform: The Roles of Physical and Cognitive Health,2023-08-25 06:13:44,Jiayi Wen,"http://arxiv.org/abs/2308.13153v1, http://arxiv.org/pdf/2308.13153v1",econ.GN
33185,gn,"This paper examines the long-term gender-specific impacts of parental health
shocks on adult children's employment in China. We build an intertemporal cooperative framework to analyze household work decisions in response to parental health deterioration. Then, employing an event-study approach, we
establish a causal link between parental health shocks and a notable decline in
female employment rates. Male employment, however, remains largely unaffected.
This negative impact shows no abatement up to eight years after the shock, the longest horizon observable in the sample. These findings indicate the consequence of ""growing old before
getting rich"" for developing countries.",Parental Health Penalty on Adult Children's Employment: Gender Difference and Long-Term Consequence,2023-08-25 06:29:38,"Jiayi Wen, Haili Huang","http://arxiv.org/abs/2308.13156v1, http://arxiv.org/pdf/2308.13156v1",econ.GN
33186,gn,"We study how choice architecture that companies deploy during data collection
influences consumers' privacy valuations. Further, we explore how this
influence affects the quality of data collected, including both volume and
representativeness. To this end, we run a large-scale choice experiment to
elicit consumers' valuation for their Facebook data while randomizing two
common choice frames: default and price anchor. An opt-out default decreases
valuations by 14-22% compared to opt-in, while a \$0-50 price anchor decreases
valuations by 37-53% compared to a \$50-100 anchor. Moreover, in some consumer
segments, the susceptibility to frame influence negatively correlates with
consumers' average valuation. We find that conventional frame optimization
practices that maximize the volume of data collected can have opposite effects
on its representativeness. A bias-exacerbating effect emerges when consumers'
privacy valuations and frame effects are negatively correlated. On the other
hand, a volume-maximizing frame may also mitigate the bias by getting a high
percentage of consumers into the sample data, thereby improving its coverage.
We demonstrate the magnitude of the volume-bias trade-off in our data and argue
that it should be a decision-making factor in choice architecture design.","Choice Architecture, Privacy Valuations, and Selection Bias in Consumer Data",2023-08-25 20:11:45,"Tesary Lin, Avner Strulov-Shlain","http://arxiv.org/abs/2308.13496v1, http://arxiv.org/pdf/2308.13496v1",econ.GN
33187,gn,"Thick two-sided matching platforms, such as the room-rental market, face the
challenge of showing relevant objects to users to reduce search costs. Many
platforms use ranking algorithms to determine the order in which alternatives
are shown to users. Ranking algorithms may depend on simple criteria, such as
how long a listing has been on the platform, or incorporate more sophisticated
aspects, such as personalized inferences about users' preferences. Using rich
data on a room rental platform, we show how ranking algorithms can be a source
of unnecessary congestion, especially when the ranking is invariant across
users. Invariant rankings induce users to view, click, and request the same
rooms in the platform we study, greatly limiting the number of matches it
creates. We estimate preferences and simulate counterfactuals under different
ranking algorithms varying the degree of user personalization and variation
across users. In our case, increased personalization raises both user match
utility and congestion, which leads to a trade-off. We find that the current
outcome is inefficient as it lies below the possibility frontier, and propose
alternatives that improve upon it.",Managing Congestion in Two-Sided Platforms: The Case of Online Rentals,2023-08-28 19:55:51,"Caterina Calsamiglia, Laura Doval, Alejandro Robinson-Cortés, Matthew Shum","http://arxiv.org/abs/2308.14703v1, http://arxiv.org/pdf/2308.14703v1",econ.GN
33188,gn,"Broadband connectivity is regarded as generally having a positive
macroeconomic effect, but we lack evidence as to how it affects key economic
activity metrics, such as firm creation, at a very local level. This analysis
models the impact of broadband Next Generation Access (NGA) on new business
creation at the local level over the 2011-2015 period in England, United
Kingdom, using high-resolution panel data. After controlling for a range of
factors, we find that faster broadband speeds brought by NGA technologies have
a positive effect on the rate of business growth. We find that in England
between 2011-2015, on average a one percent increase in download speeds is
associated with a 0.0574 percentage point increase in the annual growth rate of
business establishments. The primary hypothesised mechanism behind the
estimated relationship is the enabling effect that faster broadband speeds have
on innovative business models based on new digital technologies and services.
Entrepreneurs either sought appropriate locations that offer high quality
broadband infrastructure (contributing to new business establishment growth),
or potentially enjoyed a competitive advantage (resulting in a higher survival
rate). The findings of this study suggest that aspiring to reach universal high
capacity broadband connectivity is economically desirable, especially as the
costs of delivering such service decline.",Crowdsourced data indicates broadband has a positive impact on local business creation,2023-08-28 20:35:31,"Yifeng Philip Chen, Edward J. Oughton, Jakub Zagdanski, Maggie Mo Jia, Peter Tyler","http://dx.doi.org/10.1016/j.tele.2023.102035, http://arxiv.org/abs/2308.14734v1, http://arxiv.org/pdf/2308.14734v1",econ.GN
33189,gn,"Labor share, the fraction of economic output accrued as wages, is
inexplicably declining in industrialized countries. Whilst numerous prior works
attempt to explain the decline via economic factors, our novel approach links
the decline to biological factors. Specifically, we propose a theoretical
macroeconomic model where labor share reflects a dynamic equilibrium between
the workforce automating existing outputs, and consumers demanding new output
variants that require human labor. Industrialization leads to an aging
population, and while cognitive performance is stable in the working years it
drops sharply thereafter. Consequently, the declining cognitive performance of
aging consumers reduces the demand for new output variants, leading to a
decline in labor share. Our model expresses labor share as an algebraic
function of median age, and is validated with surprising accuracy on historical
data across industrialized economies via non-linear stochastic regression.",Cognitive Aging and Labor Share,2023-08-29 05:20:10,B. N. Kausik,"http://arxiv.org/abs/2308.14982v6, http://arxiv.org/pdf/2308.14982v6",econ.GN
33190,gn,"Quality information can improve individual judgments but nonetheless fail to
make group decisions more accurate; if individuals choose to attend to the same
information in the same way, the predictive diversity that enables crowd wisdom
may be lost. Decision support systems, from business intelligence software to
public search engines, present individuals with decision aids -- discrete
presentations of relevant information, interpretative frames, or heuristics --
to enhance the quality and speed of decision making, but have the potential to
bias judgments through the selective presentation of information and
interpretative frames. We redescribe the wisdom of the crowd as often having
two decisions, the choice of decision aids and then the primary decision. We
then define \emph{metawisdom of the crowd} as any pattern by which the
collective choice of aids leads to higher crowd accuracy than randomized
assignment to the same aids, a comparison that accounts for the information
content of the aids. While choice is ultimately constrained by the setting, in
two experiments -- the prediction of inflation (N=947, pre-registered) and a
tightly controlled estimation game (N=1198) -- we find strong evidence of
metawisdom. It comes about through diverse errors arising through the use of
diverse aids, not through widespread use of the aids that induce the most
accurate estimates. Thus the microfoundations of crowd wisdom appear in the
first choice, suggesting crowd wisdom can be robust in information choice
problems. Given the implications for collective decision making, more research
on the nature and use of decision aids is needed.",Metawisdom of the Crowd: How Choice Within Aided Decision Making Can Make Crowd Wisdom Robust,2023-08-29 20:22:28,"Jon Atwell, Marlon Twyman II","http://arxiv.org/abs/2308.15451v1, http://arxiv.org/pdf/2308.15451v1",econ.GN
33191,gn,"This paper studies how to accurately elicit quality for alternatives with
multiple attributes. Two multiple price lists (MPLs) are considered: (i) m-MPL
which asks subjects to compare an alternative to money, and (ii) p-MPL where
subjects are endowed with money and asked whether they would like to buy an
alternative or not. Theoretical results show that m-MPL requires fewer
assumptions for accurate quality elicitation compared to p-MPL. Experimental
evidence from a within-subject experiment using consumer products shows that
switch points between the two MPLs are different, which suggests that quality
measures are sensitive to the elicitation method.",Accurate Quality Elicitation in a Multi-Attribute Choice Setting,2023-08-31 23:11:06,Changkuk Im,"http://arxiv.org/abs/2309.00114v2, http://arxiv.org/pdf/2309.00114v2",econ.GN
33192,gn,"Online innovation communities are an important source of innovation for many
organizations. While contributions to such communities are typically made
without financial compensation, these contributions are often governed by
licenses such as Creative Commons that may prevent others from building upon
and commercializing them. While this can diminish the usefulness of
contributions, there is limited work analyzing what leads individuals to impose
restrictions on the use of their work. In this paper, we examine innovators
imposing restrictive licenses within the 3D-printable design community
Thingiverse. Our analyses suggest that innovators are more likely to restrict
commercialization of their contributions as their reputation increases and when
reusing contributions created by others. These findings contribute to
innovation communities and the growing literature on property rights in digital
markets.",Preventing Others from Commercializing Your Innovation: Evidence from Creative Commons Licenses,2023-09-01 18:43:48,"Erdem Dogukan Yilmaz, Tim Meyer, Milan Miric","http://arxiv.org/abs/2309.00536v1, http://arxiv.org/pdf/2309.00536v1",econ.GN
33200,gn,"This report addresses challenges and opportunities for innovation in the
social sciences at the University of Oxford. It summarises findings from two
focus group workshops with innovation experts from the University ecosystem.
Experts included successful social science entrepreneurs and professional
service staff from the University. The workshops focused on four different
dimensions related to innovative activities and commercialisation. The findings
show several challenges at the institutional and individual level, together
with features of the social scientific discipline that impede more innovation
in the social sciences. Based on identifying these challenges, we present
potential solutions and ways forward identified in the focus group discussions
to foster social science innovation. The report aims to illustrate the
potential of innovation and commercialisation of social scientific research for
both researchers and the university.",How to foster innovation in the social sciences? Qualitative evidence from focus group workshops at Oxford University,2023-09-13 13:38:39,"Fabian Braesemann, Moritz Marpe","http://arxiv.org/abs/2309.06875v1, http://arxiv.org/pdf/2309.06875v1",econ.GN
33193,gn,"I propose a new terminology, international trade strength, which is defined
as the ratio of a country's total international trade to its GDP. This
parameter represents a country's ability to generate international trade by
utilizing its GDP. This figure is analogous to GDP per capita, which
represents a country's ability to use its population to generate GDP. Trade
strength varies by country. The intriguing question is, what distribution
function does the trade strength fulfill? In this paper, a theoretical foundation for predicting the distribution of trade strength and of its rate of change was developed. These two quantities were found to
satisfy the Pareto distribution function. The equations were confirmed using
data from the World Integrated Trade Solution (WITS) and the World Bank by
comparing the Akaike Information Criterion (AIC) and Bayesian Information
Criterion (BIC) to five types of distribution functions (exponential,
lognormal, gamma, Pareto, and Weibull). I also discovered that the fitted Pareto power parameter is fairly close to the theoretical parameter. In
addition, a formula for forecasting a country's total international trade in
the following years was also developed.",Theoretical foundation for the Pareto distribution of international trade strength and introduction of an equation for international trade forecasting,2023-08-19 14:19:55,Mikrajuddin Abdullah,"http://arxiv.org/abs/2309.00635v1, http://arxiv.org/pdf/2309.00635v1",econ.GN
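The record above compares five candidate distributions (exponential, lognormal, gamma, Pareto, Weibull) for the trade-strength variable via AIC and BIC. A minimal sketch of such a comparison with scipy is shown below; the trade_strength sample is synthetic and stands in for the WITS/World Bank figures used in the paper.

```python
# Illustrative sketch: fit candidate distributions by maximum likelihood and
# compare them with AIC and BIC. The sample is synthetic, not the paper's data.
import numpy as np
from scipy import stats

trade_strength = stats.pareto.rvs(b=2.5, scale=0.2, size=150, random_state=0)  # synthetic

candidates = {
    "exponential": stats.expon,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "pareto": stats.pareto,
    "weibull": stats.weibull_min,
}

n = len(trade_strength)
for name, dist in candidates.items():
    params = dist.fit(trade_strength, floc=0)            # location fixed at 0 (positive data)
    loglik = np.sum(dist.logpdf(trade_strength, *params))
    k = len(params) - 1                                  # number of estimated parameters
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    print(f"{name:12s} AIC = {aic:9.2f}  BIC = {bic:9.2f}")
```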
33194,gn,"The article develops a general equilibrium model where power relations are
central in the determination of unemployment, profitability, and income
distribution. The paper contributes to the market forces versus institutions
debate by providing a unified model capable of identifying key interrelations
between technical and institutional changes in the economy. Empirically, the
model is used to gauge the relative roles of technology and institutions in the
behavior of the labor share, the unemployment rate, the capital-output ratio,
and business profitability and demonstrates how they complement each other in
providing an adequate narrative to the structural changes of the US economy.",There is power in general equilibrium,2023-09-02 14:14:35,Juan Jacobo,"http://arxiv.org/abs/2309.00909v1, http://arxiv.org/pdf/2309.00909v1",econ.GN
33195,gn,"The trade tension between the U.S. and China since 2018 has caused a steady
decoupling of the world's two largest economies. The pandemic outbreak in 2020
complicated this process and had numerous unanticipated repercussions. This
paper investigates how U.S. importers reacted to the trade war and worldwide
lockdowns due to the COVID-19 pandemic. We examine the effects of the two
incidents on U.S. imports separately and collectively, with various economic
scopes. Our findings uncover intricate trading dynamics among the U.S., China,
and Southeast Asia, through which businesses relocated portions of their global
supply chain away from China to avoid high tariffs. Our analysis indicates that
increased tariffs cause the U.S. to import less from China. Meanwhile,
Southeast Asian exporters have integrated more into value chains centered on
Chinese suppliers by participating more in assembling and completing products.
However, the worldwide lockdowns during the pandemic have reversed this trend as,
over this period, the U.S. effectively imported more goods directly from China
and indirectly through Southeast Asian exporters that imported from China.",Dual Effects of the US-China Trade War and COVID-19 on United States Imports: Transfer of China's industrial chain?,2023-09-05 17:37:14,"Wei Luo, Siyuan Kang, Sheng Hu, Lixian Su, Rui Dai","http://arxiv.org/abs/2309.02271v1, http://arxiv.org/pdf/2309.02271v1",econ.GN
33196,gn,"Transportation systems will be likely transformed by the emergence of
automated vehicles (AVs) promising for safe, convenient, and efficient
mobility, especially if used in shared systems (shared AV or SAV). However, the
potential tendency is observed towards owning AV as a private asset rather than
using SAV. This calls for a research on investigating individuals' attitude
towards AV in comparison with SAV to recognize the barriers to the public's
tendency towards SAV. To do so, the present study proposes a modeling framework
based on the theories in behavioral psychology to explain individuals'
preference for owning AV over using SAV, built as a latent (subjective)
psychometric construct, by three groups of explanatory latent constructs
including: (i) desire for searching for benefits, i.e., extrinsic motive
manifested in utilitarian beliefs; (ii) tendency towards seeking pleasure and
joy, i.e., intrinsic motive reflected in hedonic beliefs; and (iii) attitude
towards three configurations of shared mobility, i.e., experience with car and
ridesharing, bikesharing, and public transit. Estimated on a sample dataset
from the State of California, the findings can shed initial light on the psychological determinants of the public's attitude towards owning an AV versus using an SAV, which can furthermore provide policy implications of interest to policymakers and stakeholders. Of note, the findings reveal that the strongest factor influencing preference for an AV over an SAV is hedonic beliefs, reflected in perceived enjoyment. This preference is next affected by utilitarian beliefs, particularly perceived benefit and trust of strangers, followed by attitude towards car sharing and ridesharing.",Privately-Owned versus Shared Automated Vehicle: The Roles of Utilitarian and Hedonic Beliefs,2023-09-06 21:01:31,"Fatemeh Nazari, Yellitza Soto, Mohamadhossein Noruzoliaee","http://arxiv.org/abs/2309.03283v1, http://arxiv.org/pdf/2309.03283v1",econ.GN
33197,gn,"Data from national accounts show no effect of change in net saving or
consumption, in ratio to market-value capital, on change in growth rate of
market-value capital (capital acceleration). Thus it appears that capital
growth and acceleration arrive without help from net saving or consumption
restraint. We explore ways in which this is possible, and discuss implications
for economic teaching and public policy.",Sources of capital growth,2023-09-07 02:38:23,"Gordon Getty, Nikita Tkachenko","http://arxiv.org/abs/2309.03403v2, http://arxiv.org/pdf/2309.03403v2",econ.GN
33198,gn,"Why do investors delegate financial decisions to supposed experts? We report
a laboratory experiment designed to disentangle four possible motives. Almost
600 investors drawn from the Prolific subject pool choose whether or not to
delegate a real-stakes choice among lotteries to a previous investor (an
``expert'') after seeing information on the performance of several available
experts. We find that a surprisingly large fraction of investors delegate even
trivial choice tasks, suggesting a major role for the blame shifting motive. A
larger fraction of investors delegate our more complex tasks, suggesting that
decision costs play a role for some investors. Some investors who delegate
choose a low quality expert with high earnings, suggesting a role for chasing
past performance. We find no evidence for a fourth possible motive, that
delegation makes risk more acceptable.",Motives for Delegating Financial Decisions,2023-09-07 03:46:02,"Mikhail Freer, Daniel Friedman, Simon Weidenholzer","http://arxiv.org/abs/2309.03419v2, http://arxiv.org/pdf/2309.03419v2",econ.GN
33201,gn,"Research on politically motivated unrest and sovereign risk overlooks whether
and how unrest location matters for sovereign risk in geographically extensive
states. Intuitively, political violence in the capital or nearby would seem to
directly threaten the state's ability to pay its debts. However, it is possible
that the effect on a government could be more pronounced the farther away the
violence is, connected to the longer-term costs of suppressing rebellion. We
use Tsarist Russia to assess these differences in risk effects when unrest
occurs in Russian homeland territories versus more remote imperial territories.
Our analysis of unrest events across the Russian imperium from 1820 to 1914
suggests that unrest increases risk more in imperial territories. Echoing
current events, we find that unrest in Ukraine increases risk most. The price
of empire included higher costs in projecting force to repress unrest and
retain the confidence of the foreign investors financing those costs.",The Price of Empire: Unrest Location and Sovereign Risk in Tsarist Russia,2023-09-13 14:18:26,"Christopher A. Hartwell, Paul M. Vaaler","http://arxiv.org/abs/2309.06885v4, http://arxiv.org/pdf/2309.06885v4",econ.GN
33202,gn,"How can governments attract entrepreneurs and their businesses? The view that
new business creation grows with the optimal level of government investments
remains appealing to policymakers. In contrast with this active approach, we
build a model where governments may adopt a passive approach to stimulating
business creation. The insights from this model suggest new business creation
depends positively on factors beyond government investments--attracting
high-skilled migrants to the region and lower property prices, taxes, and fines
on firms in the informal sector. These findings suggest whether entrepreneurs
generate business creation in the region does not only depend on government
investments. It also depends on location and skilled migration. Our model also
provides methodological implications--the relationship between government
investments and new business creation is endogenously determined, so unless
adjustments are made, econometric estimates will be biased and inconsistent. We
conclude with policy and managerial implications.",Government Investments and Entrepreneurship,2023-09-13 16:31:34,"Joao Ricardo Faria, Laudo Ogura, Mauricio Prado, Christopher J. Boudreaux","http://dx.doi.org/10.1007/s11187-023-00743-9, http://arxiv.org/abs/2309.06949v1, http://arxiv.org/pdf/2309.06949v1",econ.GN
33203,gn,"In this study, the evolutionary development of labor has been tried to be
revealed based on theoretical analysis. Using the example of gdp, which is an
indicator of social welfare, the economic value of the labor of housewives was
tried to be measured with an empirical modeling. To this end; first of all, the
concept of labor was questioned in orthodox (mainstream) economic theories;
then, by abstracting from the labor-employment relationship, it was examined
what effect the labor of unpaid housewives who are unemployed in the capitalist
system could have on gdp. In theoretical analysis; It has been determined that
the changing human profile moves away from rationality and creates limited
rationality and, accordingly, a heterogeneous individual profile. Women were
defined as the new example of heterogeneous individuals, as those who best fit
the definition of limited rational individuals because they prefer to be
housewives. In the empirical analysis of the study, housewife labor was taken
into account as the main variable. In the empirical analysis of the study; In
the case of Turkiye, using turkstat employment data and the atkinson inequality
scale; the impact of housewife labor on gdp was calculated. The results of the
theoretical and empirical analysis were evaluated in the context of
labor-employment independence.",The effect of housewife labor on gdp calculations,2023-09-11 13:10:06,Saadet Yagmur Kumcu,"http://arxiv.org/abs/2309.07160v1, http://arxiv.org/pdf/2309.07160v1",econ.GN
33204,gn,"This paper investigates how the cost of public debt shapes fiscal policy and
its effect on the economy. Using U.S. historical data, I show that when
servicing the debt creates a fiscal burden, the government responds to spending
shocks by limiting debt issuance. As a result, the initial shock triggers only
a limited increase in public spending in the short run, and even leads to
spending reversal in the long run. Under these conditions, fiscal policy loses
its ability to stimulate economic activity. This outcome arises as the fiscal
authority limits its own ability to borrow to ensure public debt
sustainability. These findings are robust to several identification and
estimation strategies.",The Fiscal Cost of Public Debt and Government Spending Shocks,2023-09-14 04:13:16,Venance Riblier,"http://arxiv.org/abs/2309.07371v1, http://arxiv.org/pdf/2309.07371v1",econ.GN
33205,gn,"Determining an individual's strategic reasoning capability based solely on
choice data is a complex task. This complexity arises because sophisticated
players might have non-equilibrium beliefs about others, leading to
non-equilibrium actions. In our study, we pair human participants with computer
players known to be fully rational. This use of robot players allows us to
disentangle limited reasoning capacity from belief formation and social biases.
Our results show that, when paired with robots, subjects consistently
demonstrate higher levels of rationality and maintain stable rationality levels
across different games compared to when paired with humans. This suggests that
strategic reasoning might indeed be a consistent trait in individuals.
Furthermore, the identified rationality limits could serve as a measure for
evaluating an individual's strategic capacity when their beliefs about others
are adequately controlled.",Measuring Higher-Order Rationality with Belief Control,2023-09-14 07:50:35,"Wei James Chen, Meng-Jhang Fong, Po-Hsuan Lin","http://arxiv.org/abs/2309.07427v1, http://arxiv.org/pdf/2309.07427v1",econ.GN
33206,gn,"Large language models offer significant potential for optimising professional
activities, such as streamlining personnel selection procedures. However,
concerns exist about these models perpetuating systemic biases embedded into
their pre-training data. This study explores whether ChatGPT, a chatbot
producing human-like responses to language tasks, displays ethnic or gender
bias in job applicant screening. Using a correspondence audit approach, I
simulated a CV screening task in which I instructed the chatbot to rate
fictitious applicant profiles only differing in names, signalling ethnic and
gender identity. Comparing ratings of Arab, Asian, Black American, Central
African, Dutch, Eastern European, Hispanic, Turkish, and White American male
and female applicants, I show that ethnic and gender identity influence
ChatGPT's evaluations. The ethnic bias appears to arise partly from the
prompts' language and partly from ethnic identity cues in applicants' names.
Although ChatGPT produces no overall gender bias, I find some evidence for a
gender-ethnicity interaction effect. These findings underscore the importance
of addressing systemic bias in language model-driven applications to ensure
equitable treatment across demographic groups. Practitioners aspiring to adopt
these tools should practice caution, given the adverse impact they can produce,
especially when using them for selection decisions involving humans.",Computer says 'no': Exploring systemic hiring bias in ChatGPT using an audit approach,2023-09-14 15:28:55,Louis Lippens,"http://arxiv.org/abs/2309.07664v1, http://arxiv.org/pdf/2309.07664v1",econ.GN
33207,gn,"Remittances have become one of the driving forces of development for
countries all over the world, especially in lower-middle-income nations. This
paper empirically investigates the association between remittance flows and
financial development in 4 lower-middle-income countries of Latin America. By
using a panel data set from 1996 to 2019, the study revealed that remittances
and financial development are positively associated in these countries. The
study also discovered that foreign direct investment and inflation were
positively correlated with financial development while trade openness had a
negative association with financial development. Therefore, policymakers of
these countries should formulate and implement policies so that migrant workers have incentives to send money through formal channels, which
will augment the effect of remittances on the recipient country.",An Empirical Analysis on Remittances and Financial Development in Latin American Countries,2023-09-16 06:15:58,"Sumaiya Binta Islam, Laboni Mondal","http://arxiv.org/abs/2309.08855v1, http://arxiv.org/pdf/2309.08855v1",econ.GN
33208,gn,"In this paper, we study a standard exchange economy model with Cobb-Douglas
type consumers and give a necessary and sufficient condition for the existence
of an odd period cycle in the Walras-Samuelson (tatonnement) price adjustment
process. This is a new application of a recent result characterising the
existence of a topological chaos for a unimodal interval map by Deng, Khan,
Mitra (2022). Moreover, we show that starting from any positive initial price,
the price is eventually attracted to a chaotic region. Using our
characterisation, we obtain a numerical example of an exchange economy where
the price dynamics exhibit an odd period cycle but no period three cycle.",Characterising the existence of an odd period cycle in price dynamics for an exchange economy,2023-09-17 09:55:36,Tomohiro Uchiyama,"http://arxiv.org/abs/2309.09176v2, http://arxiv.org/pdf/2309.09176v2",econ.GN
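The record above concerns cycles of the Walras-Samuelson (tatonnement) price adjustment process in an exchange economy with Cobb-Douglas consumers. The sketch below iterates such a process numerically for a two-good economy and checks the tail of the orbit for a short cycle; the preference shares, endowments, and adjustment speed are invented for illustration and are not the paper's example.

```python
# Illustrative sketch: tatonnement price adjustment in a two-good exchange
# economy with Cobb-Douglas consumers. All parameter values are invented.
import numpy as np

alphas = np.array([0.2, 0.9])   # Cobb-Douglas expenditure share on good 1, per agent
endow1 = np.array([1.0, 9.0])   # endowments of good 1
endow2 = np.array([9.0, 1.0])   # endowments of good 2 (the numeraire, p2 = 1)

def excess_demand_good1(p1):
    income = p1 * endow1 + endow2
    demand1 = alphas * income / p1          # Cobb-Douglas demand for good 1
    return demand1.sum() - endow1.sum()

def tatonnement(p1, speed=2.0, steps=500):
    path = []
    for _ in range(steps):
        p1 = max(p1 + speed * excess_demand_good1(p1), 1e-8)  # keep the price positive
        path.append(p1)
    return np.array(path)

orbit = tatonnement(p1=0.5)
tail = np.round(orbit[-60:], 8)
print("last few prices:", tail[-6:])

# Crude check for a period-k cycle in the tail of the orbit
for k in range(1, 12):
    if np.allclose(tail[:-k], tail[k:], atol=1e-6):
        print("approximate period:", k)
        break
else:
    print("no short cycle detected in the tail")
```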
33209,gn,"Using a panel of 1,171 villages in rural India that were surveyed in the
India Human Development Surveys, I perform a difference-in-differences analysis
to find that improvements in electricity reliability have a negative effect on
the increase in casual agricultural labor wage rates. Changes in men's wage
rates are found to be affected more adversely than women's, resulting in a
smaller widening of the gender wage gap. I find that better electricity
reliability reduces the time spent by women in fuel collection substantially
which could potentially increase labor supply. The demand for labor remains
unaffected by reliability, which could lead the surplus in labor supply to cause wage rates to stagnate. However, I show that electrical appliances such as
groundwater pumps considerably increase labor demand indicating that
governments could target increasing the adoption of electric pumps along with improving the quality of electricity to absorb the surplus labor into
agriculture.",Does Reliable Electricity Mean Lesser Agricultural Labor Wages? Evidence from Indian Villages,2023-09-17 10:02:30,Suryadeepto Nag,"http://arxiv.org/abs/2309.09178v1, http://arxiv.org/pdf/2309.09178v1",econ.GN
33210,gn,"The psychology of science is the least developed member of the family of
science studies. It is, however, increasingly growing into a promising
discipline. After a very brief review of this emerging sub-field of psychology,
we call for it to be invited into the collection of social sciences that
constitute the interdisciplinary field of science policy. Discussing the
classic issue of resource allocation, this paper tries to indicate how prolific
a new psychological conceptualization of this problem would be. Further, from a
psychological perspective, this research will argue in favor of a more
realistic conception of science which would be a complement to the existing one
in science policy.",Examining psychology of science as a potential contributor to science policy,2023-09-17 11:07:25,"Arash Mousavi, Reza Hafezi, Hasan Ahmadi","http://arxiv.org/abs/2309.09202v1, http://arxiv.org/pdf/2309.09202v1",econ.GN
33211,gn,"We in this paper utilize P-GMM (Cheng and Liao, 2015) moment selection
procedure to select valid and relevant moments for estimating and testing
forecast rationality under the flexible loss proposed by Elliott et al. (2005).
We motivate the moment selection in a large dimensional setting, explain the
fundamental mechanism of P-GMM moment selection procedure, and elucidate how to
implement it in the context of forecast rationality by allowing the existence
of potentially invalid moment conditions. A set of Monte Carlo simulations is
conducted to examine the finite sample performance of P-GMM estimation in
integrating the information available in instruments into both the estimation
and testing, and a real data analysis using data from the Survey of
Professional Forecasters issued by the Federal Reserve Bank of Philadelphia is
presented to further illustrate the practical value of the suggested
methodology. The results indicate that the P-GMM post-selection estimator of
forecaster's attitude is comparable to the oracle estimator by using the
available information efficiently. The accompanying power of rationality and
symmetry tests utilizing P-GMM estimation would be substantially increased
through reducing the influence of uninformative instruments. When a forecast
user estimates and tests for rationality of forecasts that have been produced by others, such as the Greenbook, the P-GMM moment selection procedure can assist in
achieving consistent and more efficient outcomes.",Estimation and Testing of Forecast Rationality with Many Moments,2023-09-18 07:45:12,"Tae-Hwy Lee, Tao Wang","http://arxiv.org/abs/2309.09481v1, http://arxiv.org/pdf/2309.09481v1",econ.GN
33212,gn,"If checks and balances are aimed at protecting citizens from the government's
abuse of power, why do they sometimes weaken them? We address this question in
a laboratory experiment in which subjects choose between two decision rules:
with and without checks and balances. Voters may prefer an unchecked executive
if that enables a reform that, otherwise, is blocked by the legislature.
Consistent with our predictions, we find that subjects are more likely to
weaken checks and balances when there is political gridlock. However, subjects
weaken the controls not only when the reform is beneficial but also when it is
harmful.",Can political gridlock undermine checks and balances? A lab experiment,2023-09-18 21:48:01,"Alvaro Forteza, Irene Mussio, Juan S Pereyra","http://arxiv.org/abs/2309.10080v1, http://arxiv.org/pdf/2309.10080v1",econ.GN
33213,gn,"Most modern ticketing systems rely on a first-come-first-serve or randomized
allocation system to determine the allocation of tickets. Such systems have received considerable backlash in recent years due to their inequitable allotment and allocative inefficiency. We analyze a ticketing protocol based on a
variation of the marginal price auction system. Users submit bids to the
protocol based on their own utilities. The protocol awards tickets to the
highest bidders and determines the final ticket price paid by all bidders using
the lowest winning submitted bid. A game-theoretic proof is provided to show that the protocol more efficiently allocates the tickets to the bidders with the
highest utilities. We also prove that the protocol extracts more economic rents
for the event organizers and the non-optimality of ticket scalping under
time-invariant bidder utilities.",Increasing Ticketing Allocative Efficiency Using Marginal Price Auction Theory,2023-09-20 13:23:39,Boxiang Fu,"http://arxiv.org/abs/2309.11189v1, http://arxiv.org/pdf/2309.11189v1",econ.GN
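The record above describes an allocation rule in which the highest bidders win the available tickets and every winner pays the lowest winning bid. A minimal sketch of that rule is given below; the example bids are invented for illustration.

```python
# Illustrative sketch of the described rule: award tickets to the highest
# bidders and charge every winner the lowest winning bid (a uniform clearing
# price). Bids are invented for illustration.
def allocate_tickets(bids, num_tickets):
    """bids: dict mapping bidder id -> bid amount."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = ranked[:num_tickets]
    if not winners:
        return [], None
    clearing_price = winners[-1][1]      # lowest winning submitted bid
    return [bidder for bidder, _ in winners], clearing_price

bids = {"ann": 120.0, "bob": 95.0, "cat": 80.0, "dan": 60.0, "eve": 55.0}
winners, price = allocate_tickets(bids, num_tickets=3)
print(winners, price)   # ['ann', 'bob', 'cat'] 80.0
```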
33214,gn,"We propose new results for the existence and uniqueness of a general
nonparametric and nonseparable competitive equilibrium with substitutes. These
results ensure the invertibility of a general competitive system. The existing
literature has focused on the uniqueness of a competitive equilibrium assuming
that existence holds. We introduce three properties that our supply system must
satisfy: weak substitutes, pivotal substitutes, and responsiveness. These
properties are sufficient to ensure the existence of an equilibrium, thus
providing the existence counterpart to Berry, Gandhi, and Haile (2013)'s
uniqueness results. For two important classes of models, bipartite matching
models with full assignment and discrete choice models, we show that both
models can be reformulated as a competitive system such that our existence and
uniqueness results can be readily applied. We also provide an algorithm to
compute the unique competitive equilibrium. Furthermore, we argue that our
results are particularly useful for studying imperfectly transferable utility
matching models with full assignment and non-additive random utility models.","Existence of a Competitive Equilibrium with Substitutes, with Applications to Matching and Discrete Choice Models",2023-09-20 18:48:27,"Liang Chen, Eugene Choo, Alfred Galichon, Simon Weber","http://arxiv.org/abs/2309.11416v1, http://arxiv.org/pdf/2309.11416v1",econ.GN
33215,gn,"We examine whether substantial AI automation could accelerate global economic
growth by about an order of magnitude, akin to the economic growth effects of
the Industrial Revolution. We identify three primary drivers for such growth:
1) the scalability of an AI ``labor force"" restoring a regime of increasing
returns to scale, 2) the rapid expansion of an AI labor force, and 3) a massive
increase in output from rapid automation occurring over a brief period of time.
Against this backdrop, we evaluate nine counterarguments, including regulatory
hurdles, production bottlenecks, alignment issues, and the pace of automation.
We tentatively assess these arguments, finding that most are unlikely to be decisive. We
conclude that explosive growth seems plausible with AI capable of broadly
substituting for human labor, but high confidence in this claim seems currently
unwarranted. Key questions remain about the intensity of regulatory responses
to AI, physical bottlenecks in production, the economic value of superhuman
abilities, and the rate at which AI automation could occur.",Explosive growth from AI automation: A review of the arguments,2023-09-21 02:45:14,"Ege Erdil, Tamay Besiroglu","http://arxiv.org/abs/2309.11690v2, http://arxiv.org/pdf/2309.11690v2",econ.GN
33216,gn,"Low carbon synfuel can displace transport fossil fuels such as diesel and jet
fuel and help achieve the decarbonization of the transportation sector at a
global scale, but large-scale cost-effective production facilities are needed.
Meanwhile, nuclear power plants are closing due to economic difficulties:
electricity prices are too low and variable to cover their operational costs.
Using existing nuclear power plants to produce synfuels might prevent loss of
these low-carbon assets while producing synfuels at scale, but no technoeconomic analysis of this Integrated Energy System exists. We quantify the
technoeconomic potential of coupling a synthetic fuel production process with
five example nuclear power plants across the U.S. to explore the influence of
different electricity markets, access to carbon dioxide sources, and fuel
markets. Coupling synfuel production increases nuclear plant profitability by
up to 792 million USD(2020) in addition to a 10 percent rate of return on
investment over a 20 year period. Our analysis identifies drivers for the
economic profitability of the synfuel IES. The hydrogen production tax credit
from the 2022 Inflation Reduction Act is essential to its overall profitability
representing on average three quarters of its revenues. The carbon feedstock
transportation is the highest cost - more than a third on average - closely
followed by the synfuel production process capital costs. Those results show
the key role of incentive policies for the decarbonization of the
transportation sector and the economic importance of the geographic location of
Integrated Energy Systems.",Techno-Economic Analysis of Synthetic Fuel Production from Existing Nuclear Power Plants across the United States,2023-09-21 16:56:42,"Marisol Garrouste, Michael T. Craig, Daniel Wendt, Maria Herrera Diaz, William Jenson, Qian Zhang, Brendan Kochunas","http://arxiv.org/abs/2309.12085v1, http://arxiv.org/pdf/2309.12085v1",econ.GN
33217,gn,"To combat money laundering, banks raise and review alerts on transactions
that exceed confidential thresholds. This paper presents a data-driven approach
to detect smurfing, i.e., money launderers seeking to evade detection by
breaking up large transactions into amounts under the secret thresholds. The
approach utilizes the notion of a counterfactual distribution and relies on two
assumptions: (i) smurfing is unfeasible for the very largest financial
transactions and (ii) money launderers have incentives to make smurfed
transactions close to the thresholds. Simulations suggest that the approach can
detect smurfing when as little as 0.1-0.5\% of all bank transactions are
subject to smurfing. An application to real data from a systemically important
Danish bank finds no evidence of smurfing and, thus, no evidence of leaked
confidential thresholds. An implementation of our approach will be available
online, providing a free and easy-to-use tool for banks.",Searching for Smurfs: Testing if Money Launderers Know Alert Thresholds,2023-09-22 11:29:08,"Rasmus Ingemann Tuffveson Jensen, Joras Ferwerda, Christian Remi Wewer","http://arxiv.org/abs/2309.12704v1, http://arxiv.org/pdf/2309.12704v1",econ.GN
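The record above builds a counterfactual distribution from the very largest transactions, where smurfing is assumed infeasible, and looks for excess mass just below candidate thresholds. The sketch below uses a simpler bunching-style check around a single candidate threshold rather than the paper's counterfactual-distribution estimator; the synthetic amounts, the threshold, and the window width are all invented.

```python
# Illustrative bunching-style heuristic (a simpler stand-in for the paper's
# counterfactual-distribution approach): under a smooth transaction-amount
# distribution, roughly equal mass should fall just below and just above a
# narrow band around a candidate threshold; a large excess just below is
# suggestive of smurfing. The amounts are synthetic.
import numpy as np

rng = np.random.default_rng(1)
amounts = rng.lognormal(mean=8.0, sigma=1.0, size=50_000)   # synthetic transactions

threshold = 10_000.0    # candidate confidential alert threshold
window = 500.0          # width of the comparison bands

below = ((amounts > threshold - window) & (amounts <= threshold)).sum()
above = ((amounts > threshold) & (amounts <= threshold + window)).sum()

ratio = below / max(above, 1)
print(f"just below: {below}, just above: {above}, ratio: {ratio:.2f}")
# For a narrow window and a smooth distribution the ratio stays close to 1;
# ratios far above that would warrant closer inspection of this threshold.
```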
33218,gn,"How much has market power increased in the United States in the last fifty
years? And how did the rise in market power affect aggregate profits? Using
micro-level data from U.S. Compustat, we find that several indicators of market
power have steadily increased since 1970. In particular, the aggregate markup
has gone up from 10% of price over marginal cost in 1970 to 23% in 2020, and
aggregate returns to scale have risen from 1.00 to 1.13. We connect these
market-power indicators to profitability by showing that the aggregate profit
share can be expressed in terms of the aggregate markup, aggregate returns to
scale, and a sufficient statistic for production networks that captures double
marginalization in the economy. We find that despite the rise in market power,
the profit share has been constant at 18% of GDP because the increase in
monopoly rents has been completely offset by rising fixed costs and changes in
technology. Our empirical results have subtle implications for policymakers:
overly aggressive enforcement of antitrust law could decrease firm dynamism and
paradoxically lead to lower competition and higher market power.",The Micro-Aggregated Profit Share,2023-09-22 18:45:18,"Thomas Hasenzagl, Luis Perez","http://arxiv.org/abs/2309.12945v3, http://arxiv.org/pdf/2309.12945v3",econ.GN
33900,th,"A seller is selling a pair of divisible complementary goods to an agent. The
agent consumes the goods only in a specific ratio and freely disposes of excess
in either goods. The value of the bundle and the ratio are private information
of the agent. In this two-dimensional type space model, we characterize the
incentive constraints and show that the optimal (expected revenue-maximizing)
mechanism is a ratio-dependent posted price or a posted price mechanism for a
class of distributions. We also show that the optimal mechanism is a posted
price mechanism when the value and the ratio are independently distributed.",Selling two complementary goods,2020-11-11 18:08:38,"Komal Malik, Kolagani Paramahamsa","http://arxiv.org/abs/2011.05840v4, http://arxiv.org/pdf/2011.05840v4",econ.TH
33219,gn,"Asymmetric information in healthcare implies that patients could have
difficulty trading off non-health and health related information. I document
effects on patient demand when predicted wait time is disclosed to patients in
an emergency department (ED) system. I use a regression discontinuity where EDs
with similar predicted wait times display different online wait times to
patients. I use impulse response functions estimated by local projections to
demonstrate effects of the higher wait time. I find that an additional thirty
minutes of wait time results in 15% fewer waiting patients at urgent cares and
2% fewer waiting patients at EDs within 3 hours of display. I find that the
type of patient that stops using emergency care is triaged as having lower
acuity and would have used an urgent care. However, I find that at very high wait times there are declines among patients of all acuity levels, including sick patients.",Waiting for Dr. Godot: how much and who responds to predicted health care wait times?,2023-09-23 02:51:28,Stephenson Strobel,"http://arxiv.org/abs/2309.13219v1, http://arxiv.org/pdf/2309.13219v1",econ.GN
33220,gn,"This study analyses the tax-induced profit shifting behaviour of firms and
the impact of governments' anti-shifting rules. We derive a model of a firm
that combines internal sales and internal debt in a full profit shifting
strategy, and which is required to apply the arm's length principle and a
general thin capitalisation rule. We find several cases where the firm may
shift profits to low-tax countries while satisfying the usual arm's length
conditions in all countries. Internal sales and internal debt may be regarded
either as complementary or as substitute shifting channels, depending on how
the implicit concealment costs vary after changes in all transactions. We show
that the cross-effect between the shifting channels facilitates profit shifting
by means of accepted transfer prices and interest rates.",Profit shifting under the arm's length principle,2023-09-23 21:33:38,Alex A. T. Rathke,"http://arxiv.org/abs/2309.13449v1, http://arxiv.org/pdf/2309.13449v1",econ.GN
33221,gn,"Understanding how people actually trade off time for money is perhaps the
major question in the field of time discounting. There is indeed a vast body of work devoted to exploring the underlying mechanisms of the individual decision
making process in an intertemporal context. This paper presents a family of new
discount functions whereof we derive a formal axiomatization. Applying the
framework proposed by Bleichrodt, Rohde and Wakker, we further extend their
formulation of CADI and CRDI functions, making discounting a function not only
of time delay but, simultaneously, also of time distortion. Our main purpose
is, in practice, to provide a tractable setting within which individual
intertemporal preferences can be outlined. Furthermore, we apply our models to
study the relation between individual time preferences and personality traits.
For the CADI-CADI, results show that the habit of smoking is heavily related to both impatience and time perception. Within the Big-Five framework, conscientiousness, agreeableness and openness are positively related to
patience (low r, initial discount rate).",Discounting and Impatience,2023-09-25 13:20:23,"Salvatore Greco, Diego Rago","http://arxiv.org/abs/2309.14009v1, http://arxiv.org/pdf/2309.14009v1",econ.GN
33222,gn,"Job seekers' misperceptions about the labor market can distort their
decision-making and increase the risk of long-term unemployment. Our study
establishes objective benchmarks for the subjective wage expectations of
unemployed workers. This enables us to provide novel insights into the accuracy
of job seekers' wage expectations. First, especially workers with low objective
earnings potential tend to display excessively optimistic beliefs about their
future wages and anchor their wage expectations too strongly to their
pre-unemployment wages. Second, among long-term unemployed workers,
overoptimism remains persistent throughout the unemployment spell. Third,
higher extrinsic incentives to search more intensively lead job seekers to hold
more optimistic wage expectations, yet this does not translate into higher
realized wages for them. Lastly, we document a connection between
overoptimistic wage expectations and job seekers' tendency to overestimate
their reemployment chances. We discuss the role of information frictions and
motivated beliefs as potential sources of job seekers' optimism and the
heterogeneity in their beliefs.",The Accuracy of Job Seekers' Wage Expectations,2023-09-25 14:21:14,"Marco Caliendo, Robert Mahlstedt, Aiko Schmeißer, Sophie Wagner","http://arxiv.org/abs/2309.14044v1, http://arxiv.org/pdf/2309.14044v1",econ.GN
33223,gn,"Transformative changes in our production and consumption habits are needed to
enable the sustainability transition towards carbon neutrality, no net loss of
biodiversity, and planetary well-being. Organizations are the way we humans
have organized our everyday life, and much of our negative environmental
impacts, also called carbon and biodiversity footprints, are caused by
organizations. Here we show how the financial accounts of any organization can
be exploited to develop an integrated carbon and biodiversity footprint
account. As a metric we utilize spatially explicit potential global loss of
species which, we argue, can be understood as the biodiversity equivalent, the
utility of which for biodiversity is similar to what carbon dioxide equivalent
is for climate. We provide a global Biodiversity Footprint Database that
organizations, experts and researchers can use to assess consumption-based
biodiversity footprints. We also argue that the current integration of
financial and environmental accounting is superficial, and provide a framework
for a more robust financial value-transforming accounting model. To test the
methodologies, we utilized a Finnish university as a living lab. Assigning an
offsetting cost to the footprints significantly altered the financial value of
the organization. We believe such value-transforming accounting is needed in
order to draw the attention of senior executives and investors to the negative
environmental impacts of their organizations.","Value-transforming financial, carbon and biodiversity footprint accounting",2023-09-25 17:47:28,"S. El Geneidy, S. Baumeister, M. Peura, J. S. Kotiaho","http://arxiv.org/abs/2309.14186v1, http://arxiv.org/pdf/2309.14186v1",econ.GN
33224,gn,"Inferring applicant preferences is fundamental in many analyses of
school-choice data. Application mistakes make this task challenging. We propose
a novel approach to deal with the mistakes in a deferred-acceptance matching
environment. The key insight is that the uncertainties faced by applicants,
e.g., due to tie-breaking lotteries, render some mistakes costly, allowing us
to reliably infer relevant preferences. Our approach extracts all information
on preferences robustly to payoff-insignificant mistakes. We apply it to
school-choice data from Staten Island, NYC. Counterfactual analysis suggests
that we underestimate the effects of proposed desegregation reforms when
applicants' mistakes are not accounted for in preference inference and
estimation.",Leveraging Uncertainties to Infer Preferences: Robust Analysis of School Choice,2023-09-25 20:11:17,"Yeon-Koo Che, Dong Woo Hahm, YingHua He","http://arxiv.org/abs/2309.14297v1, http://arxiv.org/pdf/2309.14297v1",econ.GN
33225,gn,"Poland is currently undergoing substantial transformation in its energy
sector, and gaining public support is pivotal for the success of its energy
policies. We conducted a study with 338 Polish participants to investigate
societal attitudes towards various energy sources, including nuclear energy and
renewables. Applying a novel network approach, we identified a multitude of
factors influencing energy acceptance. Political ideology is the central factor
in shaping public acceptance, however we also found that environmental
attitudes, risk perception, safety concerns, and economic variables play
substantial roles. Considering the long-term commitment associated with nuclear
energy and its role in Poland's energy transformation, our findings provide a
foundation for improving energy policy in Poland. Our research underscores the
importance of policies that resonate with the diverse values, beliefs, and
preferences of the population. While the risk-risk trade-off and
technology-focused strategies are effective to a degree, we advocate for a more
comprehensive approach. The framing strategy, which tailors messages to
distinct societal values, shows particular promise.",Nuclear Energy Acceptance in Poland: From Societal Attitudes to Effective Policy Strategies -- Network Modeling Approach,2023-09-26 14:58:51,"Pawel Robert Smolinski, Joseph Januszewicz, Barbara Pawlowska, Jacek Winiarski","http://arxiv.org/abs/2309.14869v1, http://arxiv.org/pdf/2309.14869v1",econ.GN
33226,gn,"Social tipping points are promising levers to achieve net-zero greenhouse gas
emission targets. They describe how social, political, economic or
technological systems can move rapidly into a new state if cascading positive
feedback mechanisms are triggered. Analysing the potential of social tipping
for rapid decarbonization requires considering the inherent complexity of
social systems. Here, we identify that existing scientific literature is
inclined to a narrative-based account of social tipping, lacks a broad
empirical framework and a multi-systems view. We subsequently outline a dynamic
systems approach that entails (i) a systems outlook involving interconnected
feedback mechanisms alongside cross-system and cross-scale interactions, and
including a socioeconomic and environmental injustice perspective (ii) directed
data collection efforts to provide empirical evidence for and monitor social
tipping dynamics, (iii) global, integrated, descriptive modelling to project
future dynamics and provide ex-ante evidence for interventions. Research on
social tipping must be accordingly solidified for climate policy relevance.",A dynamic systems approach to harness the potential of social tipping,2023-09-26 17:33:51,"Sibel Eker, Charlie Wilson, Niklas Höhne, Mark S. McCaffrey, Irene Monasterolo, Leila Niamir, Caroline Zimm","http://arxiv.org/abs/2309.14964v1, http://arxiv.org/pdf/2309.14964v1",econ.GN
33227,gn,"After 2009 many governments implemented austerity measures, often restricting
science funding. Did such restrictions further skew grant income towards elite
scientists and universities? And did increased competition for funding
undermine participation? UK science funding agencies significantly reduced
numbers of grants and total grant funding in response to austerity, but
surprisingly restrictions of science funding were relaxed after the 2015
general election. Exploiting this natural experiment, we show that conventional
measures of university competitiveness are poor proxies for competitiveness. An
alternative measure of university competitiveness, drawn from complexity
science, captures the highly dynamical way in which universities engage in
scientific subjects. Building on a data set of 43,430 UK funded grants between
2006 and 2020, we analyse rankings of UK universities and investigate the
effect of research competitiveness on grant income. When austerity was relaxed
in 2015, the elasticity of grant income with respect to research competitiveness
fell, reflecting increased effort by researchers at less competitive universities.
These scientists increased the number and size of their grant applications, increasing
grant income. The study reveals how funding agencies, facing heterogeneous
competitiveness in the population of scientists, affect research effort across
the distribution of competitiveness.",The importance of quality in austere times: University competitiveness and grant income,2023-09-27 02:41:19,"Ye Sun, Athen Ma, Georg von Graevenitz, Vito Latora","http://arxiv.org/abs/2309.15309v1, http://arxiv.org/pdf/2309.15309v1",econ.GN
33228,gn,"Recently proposed tailpipe emissions standards aim to significant increases
in electric vehicle (EV) sales in the United States. Our work examines whether
this increase is achievable given potential constraints in EV mineral supply
chains. We estimate a model that reflects international sourcing rules,
heterogeneity in the mineral intensity of predominant battery chemistries, and
long-run grid decarbonization efforts. Our efforts yield five key findings.
First, compliance with the proposed standard necessitates replacing at least
10.21 million new ICEVs with EVs between 2027 and 2032. Second, based on
economically viable and geologically available mineral reserves, manufacturing
sufficient EVs is plausible across most battery chemistries and could, subject
to the chemistry leveraged, reduce up to 457.3 million total tons of CO2e.
Third, mineral production capacities of the US and its allies constrain battery
production to a total of 5.09 million EV batteries between 2027 and 2032, well
short of deployment requirements to meet EPA standards even if battery
manufacturing is optimized to exclusively produce materials-efficient NMC
811 batteries. Fourth, disequilibrium between mineral supply and demand results
in at least 59.54 million tons of CO2e in total lost lifecycle emissions
benefits. Fifth, limited present-day production of battery-grade graphite and
to a lesser extent, cobalt, constrain US electric vehicle battery pack
manufacturing under strict sourcing rules. We demonstrate that should mineral
supply bottlenecks persist, hybrid electric vehicles may offer lifecycle
emissions benefits equivalent to those of EVs while relaxing mineral production demands,
though this represents a tradeoff of long-term momentum in electric vehicle
deployment in favor of near-term carbon dioxide emissions reductions.",Enumerating the climate impact of disequilibrium in critical mineral supply,2023-09-27 05:43:02,"Lucas Woodley, Chung Yi See, Peter Cook, Megan Yeo, Daniel S. Palmer, Laurena Huh, Seaver Wang, Ashley Nunes","http://arxiv.org/abs/2309.15368v1, http://arxiv.org/pdf/2309.15368v1",econ.GN
33229,gn,"Realized ecosystem services (ES) are the actual use of ES by societies, which
is more directly linked to human well-being than potential ES. However, there
is a lack of a general analysis framework to understand how much ES was
realized. In this study, we first proposed a Supply-Demand-Flow-Use (SDFU)
framework that integrates the supply, demand, flow, and use of ES and
differentiates these concepts into different aspects (e.g., potential vs.
actual ES demand, export and import flows of supply, etc.). Then, we applied
the framework to three examples of ES that can be found in typical urban green
parks (i.e., wild berry supply, pollination, and recreation). We showed how the
framework could assess the actual use of ES and identify the supply-limited,
demand-limited, and supply-demand-balanced types of realized ES. We also
discussed the scaling features, temporal dynamics, and spatial characteristics
of realized ES, as well as some critical questions for future studies. Although
facing challenges, we believe that the applications of the SDFU framework can
provide a systematic way to accurately assess the actual use of ES and better
inform management and policy-making for sustainable use of nature's benefits.
Therefore, we hope that our study will stimulate more research on realized ES
and contribute to a deeper understanding of their roles in enhancing human
well-being.","To better understand realized ecosystem services: An integrated analysis framework of supply, demand, flow and use",2023-09-27 14:11:24,"Shuyao Wu, Kai-Di Liu, Wentao Zhang, Yuehan Dou, Yuqing Chen, Delong Li","http://arxiv.org/abs/2309.15574v1, http://arxiv.org/pdf/2309.15574v1",econ.GN
33230,gn,"New scientific ideas fuel economic progress, yet their identification and
measurement remain challenging. In this paper, we use natural language
processing to identify the origin and impact of new scientific ideas in the
text of scientific publications. To validate the new techniques and their
improvement over traditional metrics based on citations, we first leverage
Nobel prize papers that likely pioneered new scientific ideas with a major
impact on scientific progress. Second, we use literature review papers that
typically summarize existing knowledge rather than pioneer new scientific
ideas. Finally, we demonstrate that papers introducing new scientific ideas are
more likely to become highly cited by both publications and patents. We provide
open access to code and data for all scientific papers up to December 2020.",Beyond Citations: Measuring Novel Scientific Ideas and their Impact in Publication Text,2023-09-28 16:41:50,"Sam Arts, Nicola Melluso, Reinhilde Veugelers","http://arxiv.org/abs/2309.16437v2, http://arxiv.org/pdf/2309.16437v2",econ.GN
33231,gn,"The spread of the coronavirus pandemic had negative repercussions on the
majority of transport systems in virtually all countries. After the lockdown
period, travel restriction policies are now frequently adapted almost in real time
according to observed trends in the spread of the disease, resulting in a
rapidly changing transport market situation. Shared micromobility operators,
whose revenues entirely come from their customers, need to understand how the
demand is affected to adapt their operations. Within this framework, the
present paper investigates how different COVID-19 restriction levels have
affected the usage patterns of shared micromobility.
  Usage data of two dockless micromobility services (bike and e-scooters)
operating in Turin (Italy) are analyzed between October 2020 and March 2021, a
period characterized by different travel restriction levels. The average number
of daily trips, trip distances and trip duration are retrieved for both
services, and then compared to identify significant differences in trends as
restriction levels change. Additionally, related impacts on the spatial
dimension of the services are studied through hotspot maps.
  Results show that usage of both services decreased during restrictions;
however, e-scooters experienced larger variability in demand and recovered
more quickly when travel restrictions were loosened. Shared bikes, in
general, suffered less from travel restriction levels, suggesting their larger
usage for work and study-related trip purposes, which is also confirmed by the
analysis of hotspots. E-scooters are both substituting and complementing public
transport according to restriction levels, while usage patterns of shared bikes
are more independent.",The effect of COVID restriction levels on shared micromobility travel patterns: A comparison between dockless bike sharing and e-scooter services,2023-09-28 16:47:33,"Marco Diana, Andrea Chicco","http://arxiv.org/abs/2309.16440v1, http://arxiv.org/pdf/2309.16440v1",econ.GN
33232,gn,"Despite the popularity of conditional cash transfers in low- and
middle-income countries, evidence on their long-term effects remains scarce.
This paper assesses the impact of Ecuador's Human Development Grant on the
formal sector labour market outcomes of children in eligible households. This
grant -- one of the first of its kind -- is characterised by weak enforcement
of its eligibility criteria. By means of a regression discontinuity design, we
find that this programme increased formal employment rates and labour income
around a decade after exposure, thereby curbing the intergenerational
transmission of poverty. We discuss possible mediating mechanisms based on
findings from previous literature and, in particular, provide evidence on how
the programme contributed to persistence in school in the medium run.",The long-term impact of (un)conditional cash transfers on labour market outcomes in Ecuador,2023-09-29 16:15:07,"Juan Ponce, José-Ignacio Antón, Mercedes Onofa, Roberto Castillo","http://arxiv.org/abs/2309.17216v1, http://arxiv.org/pdf/2309.17216v1",econ.GN
33233,gn,"This paper developed a microsimulation model to simulate the distributional
impact of price changes using Household Budget Survey data, Income Survey data
and an Input-Output Model. We use the model to assess the distributional and
welfare impact of recent price changes in Pakistan. Particular attention is
paid to price changes in energy goods and food. We first assessed the
distributional pattern of expenditures, with domestic energy fuels concentrated
at the bottom of the distribution and motor fuels at the top. The budget share
of electricity and motor fuels is particularly high, while that of domestic fuels
is relatively low. While the distributional pattern of domestic fuel and
electricity consumption is similar to other countries, there is a particularly
high budget elasticity for motor fuels. The analysis shows that despite large
increases in energy prices, the importance of energy prices for the welfare
losses due to inflation is limited. The overall distributional impact of recent
price changes is mildly progressive, but household welfare is impacted
significantly irrespective of households' position along the income
distribution. The biggest driver of the welfare loss at the bottom was food
price inflation, while other goods and services were the biggest driver at the
top of the distribution. To compensate households for increased living costs,
transfers would need to be on average 40 percent of disposable income.
Behavioural responses to price changes have a negligible impact on the overall
welfare cost to households.","The Distributional Impact of Price Inflation in Pakistan: A Case Study of a New Price Focused Microsimulation Framework, PRICES",2023-09-30 05:30:28,"Cathal ODonoghue, Beenish Amjad, Jules Linden, Nora Lustig, Denisa Sologon, Yang Wang","http://arxiv.org/abs/2310.00231v1, http://arxiv.org/pdf/2310.00231v1",econ.GN
33234,gn,"Network reconstruction is a well-developed sub-field of network science, but
it has only recently been applied to production networks, where nodes are firms
and edges represent customer-supplier relationships. We review the literature
that has flourished to infer the topology of these networks by partial,
aggregate, or indirect observation of the data. We discuss why this is an
important endeavour, what needs to be reconstructed, what makes it different
from other network reconstruction problems, and how different researchers have
approached the problem. We conclude with a research agenda.",Reconstructing supply networks,2023-09-30 20:45:03,"Luca Mungo, Alexandra Brintrup, Diego Garlaschelli, François Lafond","http://arxiv.org/abs/2310.00446v1, http://arxiv.org/pdf/2310.00446v1",econ.GN
33235,gn,"This review identifies challenges and effective strategies to decarbonize
China's rapidly growing transportation sector, currently the third largest
carbon emitter, considering China's commitment to peak carbon emissions by 2030
and achieve carbon neutrality by 2060. Key challenges include rising travel
demand, unreached peak car ownership, declining bus ridership, gaps between
energy technology research and practical application, and limited institutional
capacity for decarbonization. This review categorizes current decarbonization
measures, strategies, and policies in China's transportation sector using the
""Avoid, Shift, Improve"" framework, complemented by a novel strategic vector of
""Institutional Capacity & Technology Development"" to capture broader
development perspectives. This comprehensive analysis aims to facilitate
informed decision-making and promote collaborative strategies for China's
transition to a sustainable transportation future.","Review on Decarbonizing the Transportation Sector in China: Overview, Analysis, and Perspectives",2023-10-01 11:25:37,"Jiewei Li, Ling Jin, Han Deng, Lin Yang","http://arxiv.org/abs/2310.00613v1, http://arxiv.org/pdf/2310.00613v1",econ.GN
33236,gn,"This literature review elucidates the implications of behavioral biases,
particularly those stemming from overconfidence and framing, on the
intertemporal choices made by students regarding their underlying demand preferences
for student loans. A secondary objective is to understand the potential utility
of social media to assist students and young borrowers with the debt repayment
process and management of their loan tenures. A close examination of the
literature reveals a substantial influence of these behavioral and cognitive
principles on the intertemporal choices made by students towards debt
repayments. This affects not only the magnitude of loans they acquire but also
the anchoring of the terms of loan conditions associated with repayment.
Furthermore, I establish that harnessing social media has the potential to
cultivate financial literacy and an enhanced understanding of loan terms,
thereby expediting the process of debt redemption. This review could serve as a valuable
repository for students, scholars, and policymakers alike, in order to expound
on the cognitive biases that students and consumers often face when applying
for and entering into loan contracts.",Student debt and behavioral bias: a trillion dollar problem,2023-10-03 17:25:27,Praful Raj,"http://arxiv.org/abs/2310.02081v1, http://arxiv.org/pdf/2310.02081v1",econ.GN
33237,gn,"Despite global efforts to harmonize international trade statistics, our
understanding about trade in digital products and its implications remains
elusive. Here, we study five key aspects of trade in digital products by
introducing a novel dataset on the exports and imports of digital products.
First, we show that compared to trade in physical goods, the origin of digital
products exports is more spatially concentrated. Second, we show that between
2016 and 2021 trade in digital products grew faster than physical trade, at an
annualized growth rate of 20% compared to 6% for physical trade. Third, we show
that trade in digital products is large enough to partly offset some important
trade balance estimates, like the physical trade deficit of the United States.
Fourth, we show that countries that have decoupled economic growth from
greenhouse gas emissions have larger and faster growing exports in digital
product sectors. Finally, we add digital products to measures of economic
complexity, showing that digital products tend to rank high in terms of
sophistication, contributing positively to the complexity of digital exporters.
These findings provide a novel lens to understand the impact of the digital
sector in the evolving geography of global trade.","The Growth, Geography, and Implications of Trade in Digital Products",2023-10-03 20:55:34,"Viktor Stojkoski, Philipp Koch, Eva Coll, Cesar A. Hidalgo","http://arxiv.org/abs/2310.02253v2, http://arxiv.org/pdf/2310.02253v2",econ.GN
33238,gn,"We propose a novel machine learning approach to probabilistic forecasting of
hourly day-ahead electricity prices. In contrast to recent advances in
data-rich probabilistic forecasting that approximate the distributions with
some features such as moments, our method is non-parametric and selects the
best distribution from all possible empirical distributions learned from the
data. The model we propose is a multiple output neural network with a
monotonicity adjusting penalty. Such a distributional neural network can learn
complex patterns in electricity prices from data-rich environments and it
outperforms state-of-the-art benchmarks.",Learning Probability Distributions of Day-Ahead Electricity Prices,2023-10-04 18:00:26,"Jozef Barunik, Lubos Hanus","http://arxiv.org/abs/2310.02867v2, http://arxiv.org/pdf/2310.02867v2",econ.GN
33239,gn,"Background: Here we investigate whether releasing COVID-19 vaccines in
limited quantities and at limited times boosted Italy's vaccination campaign in
2021. This strategy exploits insights from psychology and consumer marketing.
Methods: We built an original dataset covering 200 days of vaccination data in
Italy, including 'open day' events. Open-day events (in short: open days) are
instances where COVID-19 vaccines were released in limited quantities and only
for a specific day at a specified location (usually, a large pavilion or a
public building). Our dependent variables are the number of total and first
doses administered in proportion to the eligible population. Our key
independent variable is the presence of open-day events in a given region on a
specific day. We analyzed the data using regression with fixed effects for time
and region. The analysis was robust to alternative model specifications.
Findings: We find that when an open day event was organized, in proportion to
the eligible population, there was an average 0.39-0.44 percentage point
increase in total doses administered and a 0.30-0.33 percentage point increase
in first doses administered. These figures correspond to an average increase of
10,455-11,796 in total doses administered and 8,043-8,847 in the first doses
administered. Interpretation: Releasing vaccines in limited quantities and at
limited times by organizing open-day events was associated with an increase in
COVID-19 vaccinations in most Italian regions. These results call for wider
adoption of vaccination strategies based on the limited release of vaccines for
other infectious diseases or future pandemics.",Initiatives Based on the Psychology of Scarcity Can Increase Covid-19 Vaccinations,2023-10-05 20:10:41,"Alessandro Del Ponte, Audrey De Dominicis, Paolo Canofari","http://arxiv.org/abs/2310.03689v1, http://arxiv.org/pdf/2310.03689v1",econ.GN
33240,gn,"Roughly 3 billion citizens remain offline, equating to approximately 40
percent of the global population. Therefore, providing Internet connectivity is
an essential part of the Sustainable Development Goals (SDGs) (Goal 9). In this
paper a high-resolution global model is developed to evaluate the necessary
investment requirements to achieve affordable universal broadband. The results
indicate that approximately $418 billion needs to be mobilized to connect all
unconnected citizens globally (targeting 40-50 GB/Month per user with 95
percent reliability). The bulk of additional investment is for emerging market
economies (73 percent) and low-income developing countries (24 percent). To our
knowledge, the paper contributes the first high-resolution global assessment
which quantifies universal broadband investment at the sub-national level to
achieve SDG Goal 9.",What would it cost to connect the unconnected? Estimating global universal broadband infrastructure investment,2023-10-05 20:12:34,"Edward J. Oughton, David Amaglobeli, Marian Moszoro","http://arxiv.org/abs/2310.03694v1, http://arxiv.org/pdf/2310.03694v1",econ.GN
33241,gn,"Scanner big data has potential to construct Consumer Price Index (CPI). The
study introduces a new weighted price index called S-FCPIw, which is
constructed using scanner big data from retail sales in China. We address the
limitations of China's CPI, especially its high cost and untimely release,
and demonstrate the reliability of S-FCPIw by comparing it with existing price
indices. S-FCPIw can not only reflect the changes of goods prices in higher
frequency and richer dimension, and the analysis results show that S-FCPIw has
a significant and strong relationship with CPI and Food CPI. The findings
suggest that scanner big data can supplement traditional CPI calculations in
China and provide new insights into macroeconomic trends and inflation
prediction. We have made S-FCPIw publicly available and update it on a weekly
basis to facilitate further study in this field.",A New Weighted Food CPI from Scanner Big Data in China,2023-10-06 16:22:39,"Zhenkun Zhou, Zikun Song, Tao Ren","http://arxiv.org/abs/2310.04242v2, http://arxiv.org/pdf/2310.04242v2",econ.GN
33242,gn,"Scholars have not asked why so many governments created ad hoc scientific
advisory bodies (ahSABs) to address the Covid-19 pandemic instead of relying on
existing public health infrastructure. We address this neglected question with
an exploratory study of the US, UK, Sweden, Italy, Poland, and Uganda. Drawing
on our case studies and the blame-avoidance literature, we find that ahSABs are
created to excuse unpopular policies and take the blame should things go wrong.
Thus, membership typically represents a narrow range of perspectives. An ahSAB
is a good scapegoat because it does little to reduce government discretion and
has limited ability to deflect blame back to government. Our explanation of our
deviant case of Sweden, which did not create an ahSAB, reinforces our general
principles. We draw the policy inference that ahSAB membership should be vetted
by the legislature to ensure broad membership.",Bespoke scapegoats: Scientific advisory bodies and blame avoidance in the Covid-19 pandemic and beyond,2023-10-06 18:19:30,"Roger Koppl, Kira Pronin, Nick Cowen, Marta Podemska-Mikluch, Pablo Paniagua Prieto","http://arxiv.org/abs/2310.04312v1, http://arxiv.org/pdf/2310.04312v1",econ.GN
33243,gn,"The objective of the paper is to understand the role of workers bargaining
for the labor share in transition economies. We rely on a share-capital
schedule, whereby workers' bargaining power is represented as a move off the
schedule. Quantitative indicators of bargaining power are supplemented with
our own qualitative indices constructed from textual information describing the
legal enabling environment for bargaining in each country. Multiple data
constraints impose reliance on a cross-sectional empirical model estimated with
IV methods, whereby former unionization rates and the time since the adoption
of the ILO Collective Bargaining Convention are used as exogenous instruments.
The sample is composed of 23 industrial branches in 69 countries, of which 28
transition ones. In general, we find that stronger bargaining power is associated
with a higher labor share, whether the former is measured quantitatively or
qualitatively. On the contrary, higher bargaining power results in lower labor
share in transition economies. This is likely a matter of delayed response to
wage pushes, reconciled with the increasing role of MNCs, which did not confront
the rise in workers' power per se but introduced automation and changed market
structure amid labor-market flexibilization, eventually deferring bargaining
power's positive effect on the labor share.","Bargain your share: The role of workers bargaining power for labor share, with reference to transition economies",2023-10-07 22:48:36,"Marjan Petreski, Stefan Tanevski","http://arxiv.org/abs/2310.04904v1, http://arxiv.org/pdf/2310.04904v1",econ.GN
33244,gn,"The debate on whether young Americans are becoming less reliant on
automobiles is still ongoing. This research compares driver's license
acquisition patterns between Millennials and their succeeding Generation Z
during late adolescence. It also examines factors influencing teenagers'
decisions to obtain driver's licenses. The findings suggest that the decline in
licensing rates may be attributed in part to generational shifts in attitudes
and cultural changes, such as Generation Z's inclination toward educational
trips and their digital upbringing. This research underscores the implications
for planners, practitioners, and policymakers in adapting to potential shifts
in American car culture.",Are Generation Z Less Car-centric Than Millennials? A Nationwide Analysis Through the Lens of Youth Licensing,2023-10-07 22:56:22,Kailai Wang,"http://arxiv.org/abs/2310.04906v1, http://arxiv.org/pdf/2310.04906v1",econ.GN
33245,gn,"The objective of this paper is to estimate the expected effects of the
pandemic of Covid-19 for child poverty in North Macedonia. We rely on MK-MOD
Tax & Benefit Microsimulation Model for North Macedonia based on the Survey on
Income and Living Conditions 2019. The simulation takes into account the
development of income, as per the observed developments in the first three
quarters of 2020, derived from the Labor Force Survey, which incorporates the
raw effect of the pandemic and the government response. In North Macedonia,
almost no government measure directly targeted the income of children; however,
the three key and largest measures addressed household income: the wage subsidy of
14.500 MKD per worker in the hardest hit companies, relaxation of the criteria
for obtaining the guaranteed minimum income, and one-off support to vulnerable
groups of the population on two occasions. Results suggest that the relative
child poverty rate is estimated to increase from 27.8 percent before the
pandemic to 32.4 percent during the pandemic. This increase puts additional
19,000 children below the relative poverty threshold. Results further suggest
that absolute poverty is likely to reduce primarily because of the automatic
stabilizers in the case of social assistance and because of the one-time cash
assistance.",The impact of the pandemic of Covid-19 on child poverty in North Macedonia: Simulation-based estimates,2023-10-08 13:57:28,Marjan Petreski,"http://arxiv.org/abs/2310.05110v1, http://arxiv.org/pdf/2310.05110v1",econ.GN
33246,gn,"In this paper we simulate the poverty effect of the Covid-19 pandemic in
North Macedonia and we analyze the income-saving power of three key government
measures: the employment-retention scheme, the relaxed Guaranteed Minimum
Income support, and one-off cash allowances. In this attempt, the
counterfactual scenarios are simulated by using MK-MOD, the Macedonian Tax and
Benefit Microsimulation Model, incorporating actual data on the shock's
magnitude from the second quarter of 2020. The results suggest that without the
government interventions, of the country's two million citizens, an additional
120,000 people would have been pushed into poverty by COVID-19, where 340,000
were already poor before the pandemic. Of the 120,000 newly poor about 16,000
would have been pushed into destitute poverty. The government's automatic
stabilizers worked to shield the poorest people, though these were clearly
pro-feminine. In all, the analyzed government measures recovered more than half
of the income loss, which curbed the poverty-increasing effect and pulled an
additional 34,000 people out of extreme poverty. The employment-retention
measure was regressive and pro-masculine; the Guaranteed Minimum Income
relaxation (including automatic stabilizers) was progressive and pro-feminine;
and the one-off support has been pro-youth.",Poverty during Covid-19 in North Macedonia: Analysis of the distributional impact of the crisis and government response,2023-10-08 14:04:48,Marjan Petreski,"http://arxiv.org/abs/2310.05114v1, http://arxiv.org/pdf/2310.05114v1",econ.GN
33309,gn,"The biotechnology industry poses challenges and possibilities for startups
and small businesses. It is characterized by high costs and complex regulations,
making it difficult for such firms to establish themselves. However, it
also offers avenues for innovation and growth. This paper delves into
effective strategies that can aid in managing biotechnology
innovation, including partnerships, intellectual property development, digital
technologies, customer engagement, and government investment. These strategies
are important for success in an industry that is constantly evolving. By
embracing agility and a niche focus, startups and small companies can
successfully compete in this dynamic field.",Managing Biotechnology and Healthcare Innovation Challenges and Opportunities for Startups and Small Companies,2023-11-15 06:34:23,"Narges Ramezani, Erfan Mohammadi","http://arxiv.org/abs/2311.08671v2, http://arxiv.org/pdf/2311.08671v2",econ.GN
33247,gn,"The objective of the paper is to understand if the minimum wage plays a role
for the labor share of manufacturing workers in North Macedonia. We decompose
labor share movements on those along a share-capital curve, shifts of this
locus, and deviations from it. We use the capital-output ratio, total factor
productivity and prices of inputs to capture these factors, while the minimum
wage is introduced as an element that shifts the curve. We estimate a panel
of 20 manufacturing branches over the 2012-2019 period with FE, IV and
system-GMM estimators. We find that the role of the minimum wage for the labor
share is industry-specific. For industrial branches which are labor-intensive
and low-pay, it increases workers' labor share, along a complementarity between
capital and labor. For capital-intensive branches, it reduces labor share,
likely through the job loss channel and along a substitutability between labor
and capital. This applies to branches where both foreign investment and heavy
industry are concentrated.",Minimum wage and manufacturing labor share: Evidence from North Macedonia,2023-10-08 14:12:58,"Marjan Petreski, Jaakko Pehkonen","http://arxiv.org/abs/2310.05117v1, http://arxiv.org/pdf/2310.05117v1",econ.GN
33248,gn,"This paper empirically assesses predictions of Goodwin's model of cyclical
growth regarding demand and distributive regimes when integrating the real and
financial sectors. In addition, it evaluates how financial and employment
shocks affect the labor market and monetary policy variables over six different
U.S. business-cycle peaks. It identifies a parsimonious Time-Varying Vector
Autoregressive model with Stochastic Volatility (TVP-VAR-SV) with the labor
share of income, the employment rate, residential investment, and the interest
rate spread as endogenous variables. Using Bayesian inference methods, key
results suggest (i) a combination of profit-led demand and profit-squeeze
distribution; (ii) weakening of these regimes during the Great Moderation; and
(iii) significant connections between the standard Goodwinian variables and
residential investment as well as term spreads. Findings presented here broadly
conform to the transition to increasingly deregulated financial and labor
markets initiated in the 1980s.",A time-varying finance-led model for U.S. business cycles,2023-10-08 16:01:36,Marcio Santetti,"http://arxiv.org/abs/2310.05153v1, http://arxiv.org/pdf/2310.05153v1",econ.GN
33249,gn,"Private donors contributed more than $350 million to local election officials
to support the administration of the 2020 election. Supporters argue these
grants were neutral and necessary to maintain normal election operations during
the pandemic, while critics worry these grants mostly went to Democratic
strongholds and tilted election outcomes. These concerns have led twenty-four
states to restrict private election grants. How much did these grants shape the
2020 presidential election? To answer this question, we collect administrative
data on private election administration grants and election outcomes. We then
use new advances in synthetic control methods to compare presidential election
results and turnout in counties that received grants to counties with identical
average presidential election results and turnout before 2020. While counties
that favor Democrats were much more likely to apply for a grant, we find that
the grants did not have a noticeable effect on the presidential election. Our
estimates of the average effect of receiving a grant on Democratic vote share
range from 0.02 percentage points to 0.36 percentage points. Our estimates of
the average effect of receiving a grant on turnout range from -0.03 percentage
points to 0.13 percentage points. Across specifications, our 95% confidence
intervals typically include negative effects, and our confidence intervals from
all specifications fail to include effects on Democratic vote share larger than
0.58 percentage points and effects on turnout larger than 0.40 percentage
points. We characterize the magnitude of our effects by asking how large they
are compared to the margin by which Biden won the 2020 election. In simple
benchmarking exercises, we find that the effects of the grants were likely too
small to have changed the outcome of the 2020 presidential election.",Did Private Election Administration Funding Advantage Democrats in 2020?,2023-10-08 23:32:57,"Apoorva Lal, Daniel M Thompson","http://arxiv.org/abs/2310.05275v1, http://arxiv.org/pdf/2310.05275v1",econ.GN
33250,gn,"Cryptocurrencies, enabling secure digital asset transfers without a central
authority, are experiencing increasing interest. With the increasing number of
global and Turkish investors, it is evident that interest in digital assets
will continue to rise sustainably, even in the face of financial fluctuations.
However, it remains uncertain whether consumers perceive blockchain
technology's ease of use and usefulness when purchasing cryptocurrencies. This
study aims to explain blockchain technology's perceived ease of use and
usefulness in cryptocurrency purchases by considering factors such as quality
customer service, reduced costs, efficiency, and reliability. To achieve this
goal, data were obtained from 463 participants interested in cryptocurrencies
in different regions of Turkey. The data were analyzed using SPSS Process Macro
programs. The analysis results indicate that perceived ease of use and
usefulness mediate the effects of customer service, reduced costs,
efficiency, and security on purchase intention.",The Mediating Effect of Blockchain Technology on the Cryptocurrency Purchase Intention,2023-09-28 20:51:40,"İbrahim Halil Efendioğlu, Gökhan Akel, Bekir Değirmenci, Dilek Aydoğdu, Kamile Elmasoğlu, Hande Begüm Bumin Doyduk, Arzu Şeker, Hatice Bahçe","http://arxiv.org/abs/2310.05970v1, http://arxiv.org/pdf/2310.05970v1",econ.GN
33251,gn,"Media hype and technological breakthroughs are fuelling the race to adopt
Artificial Intelligence amongst the business community, but is there evidence
to suggest this will increase productivity? This paper uses 2015-2019 microdata
from the UK Office for National Statistics to identify if the adoption of
Artificial Intelligence techniques increases labour productivity in UK
businesses. Using fixed effects estimation (Within Group) with a log-linear
regression specification the paper concludes that there is no statistically
significant impact of AI adoption on labour productivity.",Does Artificial Intelligence benefit UK businesses? An empirical study of the impact of AI on productivity,2023-10-06 16:52:23,Sam Hainsworth,"http://arxiv.org/abs/2310.05985v2, http://arxiv.org/pdf/2310.05985v2",econ.GN
33252,gn,"Objectives: Lung cancer remains a significant global public health challenge
and is still one of the leading causes of cancer-related death in Argentina.
This study aims to assess the disease and economic burden of lung cancer in the
country.
  Study design: Burden of disease study
  Methods: A mathematical model was developed to estimate the disease burden
and direct medical cost attributable to lung cancer. Epidemiological parameters
were obtained from local statistics, the Global Cancer Observatory, the Global
Burden of Disease databases, and a literature review. Direct medical costs were
estimated through micro-costing. Costs were expressed in US dollars (US$),
April 2023 (1 US$ = 216.38 Argentine pesos). A second-order Monte Carlo
simulation was performed to estimate the uncertainty.
  Results: Considering approximately 10,000 deaths, 12,000 incident cases, and
14,000 5-year prevalent cases, the economic burden of lung cancer in Argentina
in 2023 was estimated to be US$ 556.20 million (396.96 -718.20), approximately
1.4% of the total healthcare expenditure for the country. The cost increased
with a higher stage of the disease and the main driver was the drug acquisition
(80%). 179,046 Disability-adjusted life years could be attributable to lung
cancer representing the 10% of the total cancer.
  Conclusion: The disease and economic burden of lung cancer in Argentina
implies a high cost for the health system and would represent 19% of the
previously estimated economic burden for 29 cancers in Argentina.",Lung Cancer in Argentina: A Modelling Study of Disease and Economic Burden,2023-10-10 23:31:28,"Andrea Alcaraz, Federico Rodriguez Cairoli, Carla Colaci, Constanza Silvestrini, Carolina Gabay, Natalia Espinola","http://arxiv.org/abs/2310.06999v1, http://arxiv.org/pdf/2310.06999v1",econ.GN
33253,gn,"This paper takes the development of China's Central bank digital currencies
as a perspective, theoretically analyses the impact mechanism of the issuance
and circulation of Central bank digital currencies on China's monetary policy
and various variables of the money multiplier; at the same time, it selects the
quarterly data from 2010 to 2022, and examines the impact of the Central bank
digital currencies on the money supply multiplier through the establishment of
the VECM model. The results show that the issuance of China's Central
bank digital currencies will have an impact on the effectiveness of monetary
policy and its intermediary indicators, and a certain positive impact on both
the narrow and broad money multipliers. Based on theoretical
analyses and empirical tests, this paper proposes that China should explore a
more effective monetary policy in the context of Central bank digital
currencies in the future on the premise of steadily promoting the development
of Central bank digital currencies.",Empirical Analysis of the Impact of Legal Tender Digital Currency on Monetary Policy -Based on China's Data,2023-10-11 12:14:48,"Ruimin Song, TIntian Zhao, Chunhui Zhou","http://arxiv.org/abs/2310.07326v1, http://arxiv.org/pdf/2310.07326v1",econ.GN
33254,gn,"Using as a central instrument a new database, resulting from a compilation of
historical administrative records covering the period 1974-2010, we provide
new evidence on how industrial companies used tax benefits and argue that
these were decisive for the investment decisions of Uruguayan industrial
companies during that period. These findings also support the claim that
incentives to increase investment positively influence the level of economic
activity and exports, and negatively influence the unemployment rate.",Incentives for Private Industrial Investment in historical perspective: the case of industrial promotion and investment promotion in Uruguay (1974-2010),2023-10-10 11:14:39,Diego Vallarino,"http://arxiv.org/abs/2310.07738v1, http://arxiv.org/pdf/2310.07738v1",econ.GN
33255,gn,"Youth unemployment is a major socioeconomic problem in Nigeria, and several
youth-employment programs have been initiated and implemented to address the
challenge. While detailed analyses of the impacts of some of these programs
have been conducted, empirical analysis of implementation challenges and of the
influence of limited political inclusivity on distribution of program benefits
is rare. Using mixed research methods and primary data collected through
focus-group discussion and key-informant interviews, this paper turns to that
analysis. We found that, although there are several youth-employment programs
in Nigeria, they have not yielded a marked reduction in youth-unemployment
rates. The programs are challenged by factors such as lack of framework for
proper governance and coordination, inadequate funding, lack of institutional
implementation capacity, inadequate oversight of implementation, limited
political inclusivity, lack of prioritization of vulnerable and marginalized
groups, and focus on stand-alone programs that are not tied to long-term
development plans. These issues need to be addressed to ensure that
youth-employment programs yield better outcomes and that youth unemployment is
significantly reduced.",Empirical Review of Youth-Employment Policies in Nigeria,2023-10-11 21:22:34,"Oluwasola E. Omoju, Emily E. Ikhide, Iyabo A. Olanrele, Lucy E. Abeng, Marjan Petreski, Francis O. Adebayo, Itohan Odigie, Amina M. Muhammed","http://arxiv.org/abs/2310.07789v1, http://arxiv.org/pdf/2310.07789v1",econ.GN
33256,gn,"Using a firm-level dataset from the Spanish Technological Innovation Panel
(2003-2016), this study explores the characteristics of environmentally
innovative firms and quantifies the effects of pursuing different types of
environmental innovation strategies (resource-saving, pollution-reducing, and
regulation-driven innovations) on sales, employment, and productivity dynamics.",There are different shades of green: heterogeneous environmental innovations and their effects on firm performance,2023-10-12 17:20:21,"Gianluca Biggi, Andrea Mina, Federico Tamagni","http://arxiv.org/abs/2310.08353v1, http://arxiv.org/pdf/2310.08353v1",econ.GN
33257,gn,"This study explores the relationship between political stability and
inflation in four South Asian countries, employing panel data spanning from
2001 to 2021. To analyze this relationship, the study utilizes the dynamic
ordinary least square (DOLS) and fully modified ordinary least square (FMOLS)
methods, which account for cross-sectional dependence and slope homogeneity in
panel data analysis. The findings consistently reveal that increased political
stability is associated with lower inflation, while reduced political stability
is linked to higher inflation.",The Connection Between Political Stability and Inflation: Insights from Four South Asian Nations,2023-10-12 18:28:45,"Ummya Salma, Md. Fazlul Huq Khan","http://arxiv.org/abs/2310.08415v1, http://arxiv.org/pdf/2310.08415v1",econ.GN
33258,gn,"We study how humans learn from AI, exploiting an introduction of an
AI-powered Go program (APG) that unexpectedly outperformed the best
professional player. We compare the move quality of professional players to
that of APG's superior solutions around its public release. Our analysis of
749,190 moves demonstrates significant improvements in players' move quality,
accompanied by a decreased number and magnitude of errors. The effect is
pronounced in the early stages of the game where uncertainty is highest. In
addition, younger players and those in AI-exposed countries experience greater
improvement, suggesting potential inequality in learning from AI. Further,
while players of all levels learn, less skilled players derive higher marginal
benefits. These findings have implications for managers seeking to adopt and
utilize AI effectively within their organizations.",How Does Artificial Intelligence Improve Human Decision-Making? Evidence from the AI-Powered Go Program,2023-10-12 23:28:05,"Sukwoong Choi, Hyo Kang, Namil Kim, Junsik Kim","http://arxiv.org/abs/2310.08704v1, http://arxiv.org/pdf/2310.08704v1",econ.GN
33259,gn,"The pursuit of excellence seems to be the True North of academia. What is
meant by excellence? Can excellence be measured? This article discusses the
concept of excellence in the context of research and competition.",Über wissenschaftliche Exzellenz und Wettbewerb,2023-10-14 17:10:47,Gerald Schweiger,"http://arxiv.org/abs/2310.09588v3, http://arxiv.org/pdf/2310.09588v3",econ.GN
33260,gn,"This research paper presents a thorough economic analysis of Bitcoin and its
impact. We delve into Bitcoin's fundamental principles and its technological
evolution into a prominent decentralized digital currency. Analysing Bitcoin's economic
dynamics, we explore aspects such as transaction volume, market capitalization,
mining activities, and macro trends. Moreover, we investigate Bitcoin's role in
the economic ecosystem, considering its implications for traditional financial
systems, monetary policies, and financial inclusivity. We utilize statistical
and analytical tools to assess equilibrium, market behaviour, and economic dynamics.
Insights from this analysis provide a comprehensive understanding of Bitcoin's
economic significance and its transformative potential in shaping the future of
global finance. This research contributes to informed decision-making for
individuals, institutions, and policymakers navigating the evolving landscape
of decentralized finance.","Economics unchained: Investigating the role of cryptocurrency, blockchain and intricacies of Bitcoin price fluctuations",2023-10-15 08:57:40,Ishmeet Matharoo,"http://arxiv.org/abs/2310.09745v1, http://arxiv.org/pdf/2310.09745v1",econ.GN
33261,gn,"We construct a novel unified merger list in the global container shipping
industry between 1966 (the beginning of the industry) and 2022. Combining the
list with proprietary data, we construct a structural matching model to
describe the historical transition of the importance of a firm's age, size, and
geographical proximity on merger decisions. We find that, as a positive factor,
a firm's size is more important than a firm's age by 9.974 times as a merger
incentive between 1991 and 2005. However, between 2006 and 2022, as a negative
factor, a firm's size is more important than a firm's age by 0.026-0.630 times,
that is, a firm's size works as a disincentive. We also find that the distance
between buyer and seller firms works as a disincentive for the whole period,
but the importance has dwindled to economic insignificance in recent years. In
counterfactual simulations, we observe that the prohibition of mergers between
firms in the same country would affect the merger configuration of not only the
firms involved in prohibited mergers but also those involved in permitted
mergers. Finally, we present interview-based evidence of the consistency
between our merger lists, estimations, and counterfactual simulations with the
industry experts' historical experiences.","Unified Merger List in the Container Shipping Industry from 1966: A Structural Estimation of the Transition of Importance of a Firm's Age, Tonnage Capacity, and Geographical Proximity on Merger Decision",2023-10-15 23:16:18,"Suguru Otani, Takuma Matsuda","http://arxiv.org/abs/2310.09938v2, http://arxiv.org/pdf/2310.09938v2",econ.GN
33262,gn,"This study re-examines the impact of participative budgeting on managerial
performance and budgetary slack, addressing gaps in current research. A revised
conceptual model is developed, considering the conditioning roles of leadership
style and leader-member exchange. The sample includes 408 employees with
managerial experience in manufacturing companies. Hypotheses are tested using a
combination of PLS-SEM and Necessary Condition Analysis (NCA). The results
demonstrate that participative budgeting negatively affects budgetary slack and
directly influences managerial performance, leadership style, and leader-member
exchange. Moreover, leadership style and leader-member exchange moderate these
relationships. The integration of NCA in management accounting research
provides valuable insights for decision-makers, allowing for more effective
measures. This study contributes by encouraging a complementary PLS-SEM and NCA
approach to examine conditional effects. It also enhances understanding of
budgetary control by highlighting the importance of leadership in influencing
participative budgeting outcomes.",A SEM-NCA approach towards the impact of participative budgeting on budgetary slack and managerial performance: The mediating role of leadership style and leader-member exchange,2023-10-16 03:39:22,"Khalid Hasan Al Jasimee, Francisco Javier Blanco-Encomienda","http://arxiv.org/abs/2310.09993v1, http://arxiv.org/pdf/2310.09993v1",econ.GN
33263,gn,"As online educational technology products have become increasingly prevalent,
rich evidence indicates that learners often find it challenging to establish
regular learning habits and complete their programs. Concurrently, online
products geared towards entertainment and social interactions are sometimes so
effective in increasing user engagement and creating frequent usage habits that
they inadvertently lead to digital addiction, especially among youth. In this
project, we carry out a contest-based intervention, common in the entertainment
context, on an educational app for Indian children learning English.
Approximately ten thousand randomly selected learners entered a 100-day reading
contest. They would win a set of physical books if they ranked sufficiently
high on a leaderboard based on the amount of educational content consumed.
Twelve weeks after the end of the contest, when the treatment group had no
additional incentives to use the app, they continued their engagement with it
at a rate 75% higher than the control group, indicating a successful formation
of a reading habit. In addition, we observed a 6% increase in retention within
the treatment group. These results underscore the potential of digital
interventions in fostering positive engagement habits with educational
technology products, ultimately enhancing users' long-term learning outcomes.",Digital interventions and habit formation in educational technology,2023-10-17 00:50:56,"Keshav Agrawal, Susan Athey, Ayush Kanodia, Emil Palikot","http://arxiv.org/abs/2310.10850v2, http://arxiv.org/pdf/2310.10850v2",econ.GN
33264,gn,"Sex education can impact pupils sexual activity and convey the social norms
regarding family formation and responsibility, which can have significant
consequences to their future. To investigate the life-cycle effects of social
norm transmission, this study draws on the introduction of comprehensive sex
education in the curriculum of Swedish primary schools during the 1940s to the
1950s. Inspired by social-democratic values, sex education during this period
taught students about abstinence, rational family planning choices, and the
importance of taking social responsibility for their personal decisions. The
study applies a state-of-the-art estimator of the difference-in-differences
method to various outcomes of men and women throughout the life cycle. The
results show that the reform affected most intended outcomes for men and women,
ultimately decreasing gender inequality in earnings. The effects of the reform
also extended to the succeeding generation of girls. Both generations created a
critical mass that altered social norms in favor of collective engagement and
democracy. The findings suggest that social norms, internalized through
school-based sex education, persistently affect people's outcomes in significant
ways.",Life-Cycle Effects of Comprehensive Sex Education,2023-10-17 14:17:07,"Volha Lazuka, Annika Elwert","http://arxiv.org/abs/2310.11151v2, http://arxiv.org/pdf/2310.11151v2",econ.GN
33265,gn,"As the financial authorities increase their use of artificial intelligence
(AI), micro regulations, such as consumer protection and routine banking
regulations, will benefit because of ample data, short time horizons, clear
objectives, and repeated decisions, leaving plenty of data for AI to train on.
It is different from macro, focused on the stability of the entire financial
system, where AI can potentially undermine financial stability. Infrequent and
mostly unique events frustrate AI learning and hence its use for macro
regulations. Using distributed decision making, humans retain the advantage
over AI for advising on and making decisions in times of extreme stress. Even
if the authorities prefer a conservative approach to AI adoption, it will
likely become widely used by stealth, taking over increasingly high level
functions, driven by significant cost efficiencies, robustness and accuracy. We
propose six criteria against which to judge the suitability of AI use by the
private sector and financial regulation and crisis resolution.",On the use of artificial intelligence in financial regulations and the impact on financial stability,2023-10-17 17:16:23,"Jon Danielsson, Andreas Uthemann","http://arxiv.org/abs/2310.11293v2, http://arxiv.org/pdf/2310.11293v2",econ.GN
33266,gn,"This article examines water disputes through an integrated framework
combining normative and positive perspectives. John Rawls' theory of justice
provides moral guidance, upholding rights to reasonable access for all riparian
states. However, positive analysis using cake-cutting models reveals real-world
strategic constraints. While Rawls defines desired ends, cake-cutting offers
algorithmic means grounded in actual behaviors. The Nile River basin dispute
illustrates this synthesis. Rawls suggests inherent rights to water, but
unrestricted competition could enable monopoly. His principles alone cannot
prevent unfavorable outcomes, given limitations like self-interest. This is
where cake-cutting provides value despite biased claims. Its models identify
arrangements aligning with Rawlsian fairness while incorporating strategic
considerations. The article details the cake-cutting theory, reviews water
conflicts literature, examines the Nile case, explores cooperative vs.
non-cooperative games, and showcases algorithmic solutions. The integrated
framework assesses pathways for implementing Rawlsian ideals given real-world
dynamics. This novel synthesis of normative and positive lenses enriches the
study of water disputes and resource allocation more broadly.",The Sponge Cake Dilemma over the Nile: Achieving Fairness in Resource Allocation through Rawlsian Theory and Algorithms,2023-10-17 00:07:09,Dwayne Woods,"http://arxiv.org/abs/2310.11472v1, http://arxiv.org/pdf/2310.11472v1",econ.GN
33267,gn,"We show that fluctuations in the ratio of non-core to core funding in the
banking systems of advanced economies are driven by a handful of global factors
of both real and financial natures, with country-specific factors playing no
significant roles. Exchange rate flexibility helps insulate the non-core to
core ratio from such global factors but only significantly so outside periods
of major global financial disruptions, as in 2008-2009.",Global Factors in Non-core Bank Funding and Exchange Rate Flexibility,2023-10-17 22:51:47,"Luís A. V. Catão, Jan Ditzen, Daniel Marcel te Kaat","http://arxiv.org/abs/2310.11552v3, http://arxiv.org/pdf/2310.11552v3",econ.GN
33268,gn,"We develop a general model of discrete choice that incorporates peer effects
in preferences and consideration sets. We characterize the equilibrium behavior
and establish conditions under which all parts of the model can be recovered
from a sequence of choices. We allow peers to affect only preferences, only
consideration, or both. We exploit different types of variations to separate
the peer effects in preferences and consideration sets. This allows us to
recover the set (and type) of connections between the agents in the network. We
then use this information to recover the random preferences and the attention
mechanisms of each agent. These nonparametric identification results allow
unrestricted heterogeneity across agents and do not rely on the variation of
either covariates or the set of available options (or menus). We apply our
results to model expansion decisions by coffee chains and find evidence of
limited consideration. We simulate counterfactual predictions and show how
limited consideration slows down competition.",Peer Effects in Consideration and Preferences,2023-10-18 22:12:24,"Nail Kashaev, Natalia Lazzati, Ruli Xiao","http://arxiv.org/abs/2310.12272v1, http://arxiv.org/pdf/2310.12272v1",econ.GN
33269,gn,"This paper studies the widespread price dispersion of homogeneous products
across different online platforms, even when consumers can easily access price
information from comparison websites. We collect data for the 200 most popular
hotels in London (UK) and document that prices vary widely across booking sites
while making reservations for a hotel room. Additionally, we find that prices
listed across different platforms tend to converge as the booking date gets
closer to the date of stay. However, the price dispersion persists until the
date of stay, implying that the ""law of one price"" does not hold. We present a
simple theoretical model to explain this and show that in the presence of
aggregate demand uncertainty and capacity constraints, price dispersion could
exist even when products are homogeneous, consumers are homogeneous, all agents
have perfect information about the market structure, and consumers face no
search costs to acquire information about the products. Our theoretical
intuition and robust empirical evidence provide additional insights into price
dispersion across online platforms in different institutional settings. Our
study complements the existing literature that relies on consumer search costs
to explain the price dispersion phenomenon.",Price dispersion across online platforms: Evidence from hotel room prices in London (UK),2023-10-19 00:40:08,"Debashrita Mohapatra, Debi Prasad Mohapatra, Ram Sewak Dubey","http://arxiv.org/abs/2310.12341v1, http://arxiv.org/pdf/2310.12341v1",econ.GN
33270,gn,"This study investigates the impact of loss-framing and individual risk
attitude on willingness to purchase insurance products, utilizing a game-like
interface as choice architecture. The application presents events as
experienced in real life. Both financial and emotional loss-framing events are
followed by choices to purchase insurance. The participant cohorts considered
were undergraduate students and older participants; the latter group was
further subdivided by income and education. The within-subject analysis reveals
that the loss framing effect on insurance consumption is higher in the younger
population, though contingent on the insurance product type. Health and
accident insurance shows a negative correlation with risk attitudes for younger
participants and a positive correlation with accident insurance for older
participants. Risk attitude and life insurance products showed no dependency.
The findings elucidate the role of age, income, family responsibilities, and
risk attitude in purchasing insurance products. Importantly, it confirms the
heuristics of framing/nudging.",Impact of Loss-Framing and Risk Attitudes on Insurance Purchase: Insights from a Game-like Interface Study,2023-10-20 09:27:06,"Kunal Rajesh Lahoti, Shivani Hanji, Pratik Kamble, Kavita Vemuri","http://arxiv.org/abs/2310.13300v1, http://arxiv.org/pdf/2310.13300v1",econ.GN
33271,gn,"Most health economic analyses are undertaken with the aid of computers.
However, the ethical dimensions of implementing health economic models as
software (or computational health economic models (CHEMs)) are poorly
understood. We propose that developers and funders of CHEMs share ethical
responsibilities to (i) establish socially acceptable user requirements and
design specifications; (ii) ensure fitness for purpose; and (iii) support
socially beneficial use. We further propose that a transparent (T), reusable
(R) and updatable (U) CHEM is suggestive of a project team that has largely
fulfilled these responsibilities. We propose six criteria for assessing CHEMs:
(T1) software files are open access; (T2) project team contributions and
judgments are easily identified; (R1) programming practices promote
generalisability and transferability; (R2) licenses restrict only unethical
reuse; (U1) maintenance infrastructure is in place; and (U2) new releases are
systematically retested and appropriately deprecated. To facilitate CHEMs that
meet TRU criteria, we have developed a prototype software framework in the
open-source programming language R. The framework comprises six code libraries
for authoring CHEMs, supplying CHEMs with data and undertaking analyses with
CHEMs. The prototype software framework integrates with services for software
development and research data archiving. We determine that an initial set of
youth mental health CHEMs we developed with the prototype software framework
wholly meet criteria T1-2, R1-2 and U1 and partially meet criterion U2. Our
assessment criteria and prototype software framework can help inform and
improve ethical implementation of CHEMs. Resource barriers to ethical CHEM
practice should be addressed by research funders.","A prototype software framework for transparent, reusable and updatable computational health economic models",2023-10-22 03:10:53,"Matthew P Hamilton, Caroline X Gao, Glen Wiesner, Kate M Filia, Jana M Menssink, Petra Plencnerova, David Baker, Patrick D McGorry, Alexandra Parker, Jonathan Karnon, Sue M Cotton, Cathrine Mihalopoulos","http://arxiv.org/abs/2310.14138v1, http://arxiv.org/pdf/2310.14138v1",econ.GN
33272,gn,"Climate changes lead to more frequent and intense weather events, posing
escalating risks to road traffic. Crowdsourced data offer new opportunities to
monitor and investigate changes in road traffic flow during extreme weather.
This study utilizes diverse crowdsourced data from mobile devices and the
community-driven navigation app, Waze, to examine the impact of three weather
events (i.e., floods, winter storms, and fog) on road traffic. Three metrics,
speed change, event duration, and area under the curve (AUC), are employed to
assess link-level traffic change and recovery. In addition, a user's perceived
severity is computed to evaluate link-level weather impact based on
crowdsourced reports. This study evaluates a range of new data sources, and
provides insights into the resilience of road traffic to extreme weather, which
are crucial for disaster preparedness, response, and recovery in road
transportation systems.",Modeling Link-level Road Traffic Resilience to Extreme Weather Events Using Crowdsourced Data,2023-10-22 21:26:36,"Songhua Hu, Kailai Wang, Lingyao Li, Yingrui Zhao, Zhenbing He, Yunpeng, Zhang","http://arxiv.org/abs/2310.14380v1, http://arxiv.org/pdf/2310.14380v1",econ.GN
33273,gn,"Using high-frequency donation records from a major medical crowdfunding site
and careful difference-in-difference analysis, we demonstrate that the 2020 BLM
surge decreased the fundraising gap between Black and non-Black beneficiaries
by around 50\%. The reduction is largely attributed to non-Black donors. Those
beneficiaries in counties with moderate BLM activities were most impacted. We
construct innovative instrumental variable approaches that utilize weekends and
rainfall to identify the global and local effects of BLM protests. Results
suggest a broad social movement has a greater influence on charitable-giving
behavior than a local event. Social media significantly magnifies the impact of
protests.",Can the Black Lives Matter Movement Reduce Racial Disparities? Evidence from Medical Crowdfunding,2023-10-23 08:55:28,"Kaixin Liu, Jiwei Zhou, Junda Wang","http://arxiv.org/abs/2310.14590v2, http://arxiv.org/pdf/2310.14590v2",econ.GN
33274,gn,"In this paper, we compare different mitigation policies when housing
investments are irreversible. We use a general equilibrium model with
non-homothetic preferences and an elaborate setup of the residential housing
and energy production sector. In the first-best transition, the energy demand
plays only a secondary role. However, this changes when optimal carbon taxes
are not available. While providing subsidies for retrofits results in the
lowest direct costs for households, it ultimately leads to the highest
aggregate costs and proves to be an ineffective way to decarbonize the economy.
In the second-best context, a phased-in carbon price outperforms the
subsidy-based transition.",Reducing residential emissions: carbon pricing vs. subsidizing retrofits,2023-10-24 12:59:53,"Alkis Blanz, Beatriz Gaitan","http://arxiv.org/abs/2310.15687v1, http://arxiv.org/pdf/2310.15687v1",econ.GN
33275,gn,"In this paper, we study economic dynamics in a standard overlapping
generations model without production. In particular, using numerical methods,
we obtain a necessary and sufficient condition for the existence of a
topological chaos. This is a new application of a recent result characterising
the existence of a topological chaos for a unimodal interval map by Deng, Khan,
Mitra (2022).",A necessary and sufficient condition for the existence of chaotic dynamics in an overlapping generations model,2023-10-24 14:59:01,Tomohiro Uchiyama,"http://arxiv.org/abs/2310.15755v1, http://arxiv.org/pdf/2310.15755v1",econ.GN
33276,gn,"We study the long-term effects of the 2015 German minimum wage introduction
and its subsequent increases on regional employment. Using data from two waves
of the Structure of Earnings Survey allows us to estimate models that account
for changes in the minimum wage bite over time. While the introduction mainly
affected the labour market in East Germany, the raises are also increasingly
affecting low-wage regions in West Germany, such that around one third of
regions have changed their (binary) treatment status over time. We apply
different specifications and extensions of the classic
difference-in-differences approach as well as a set of new estimators that
enable unbiased effect estimation under staggered treatment adoption and
heterogeneous treatment effects. Our results indicate a small negative effect
on dependent employment of 0.5 percent, no significant effect on employment
subject to social security contributions, and a significant negative effect of
about 2.4 percent on marginal employment until the first quarter of 2022. The
extended specifications suggest additional effects of the minimum wage
increases, as well as stronger negative effects on total dependent and marginal
employment for those regions that were strongly affected by the minimum wage in
2015 and 2019.",Long-Term Employment Effects of the Minimum Wage in Germany: New Data and Estimators,2023-10-24 19:05:42,"Marco Caliendo, Nico Pestel, Rebecca Olthaus","http://arxiv.org/abs/2310.15964v1, http://arxiv.org/pdf/2310.15964v1",econ.GN
33277,gn,"In today's rapidly evolving workplace, the dynamics of job satisfaction and
its determinants have become a focal point of organizational studies. This
research offers a comprehensive examination of the nexus between spiritual
intelligence and job satisfaction among female employees, with particular
emphasis on the moderating role of ethical work environments. Beginning with an
exploration of the multifaceted nature of human needs, the study delves deep
into the psychological underpinnings that drive job satisfaction. It elucidates
how various tangible and intangible motivators, such as salary benefits and
recognition, play pivotal roles in shaping employee attitudes and behaviors.
Moreover, the research spotlights the unique challenges and experiences of
female employees, advocating for a more inclusive understanding of their needs.
An extensive review of the literature and empirical analysis culminates in the
pivotal finding that integrating spiritual intelligence and ethical
considerations within organizational practices can significantly enhance job
satisfaction. Such a holistic approach, the paper posits, not only bolsters the
well-being and contentment of female employees but also augments overall
organizational productivity, retention rates, and morale.",Elevating Women in the Workplace: The Dual Influence of Spiritual Intelligence and Ethical Environments on Job Satisfaction,2023-10-25 06:54:02,"Ali Bai, Morteza Vahedian, Rashin Ghahreman, Hasan Piri","http://arxiv.org/abs/2310.16341v1, http://arxiv.org/pdf/2310.16341v1",econ.GN
33278,gn,"In this paper, we analyze the long-term distributive impact of climate change
through rising food prices. We use a standard incomplete markets model and
account for non-linear Engel curves for food consumption. For the calibration
of our model, we rely on household data from 92 developing countries,
representing 4.5 billion people. The results indicate that the short-term and
long-term distributive impact of climate change differs. Including general
equilibrium effects changes the welfare outcome, especially for the poorest
quintile. In the presence of idiosyncratic risk, higher food prices increase
precautionary savings, which through general equilibrium affect labor income of
all agents. Furthermore, this paper studies the impact on inequality for
different allocations of productivity losses across sectors. When climate
impacts affect total factor productivity in both sectors of the economy, they
also increase wealth inequality.",Climate-related Agricultural Productivity Losses through a Poverty Lens,2023-10-25 12:19:33,Alkis Blanz,"http://arxiv.org/abs/2310.16490v2, http://arxiv.org/pdf/2310.16490v2",econ.GN
33279,gn,"This paper casts within a unified economic framework some key challenges for
the global economic order: de-globalization; the rising impracticability of
global cooperation; and the increasingly confrontational nature of Great Power
competition. In these, economics has been weaponised in the service of national
interest. This need be no bad thing. History provides examples where greater
openness and freer trade emerge from nations seeking only to advance their own
self-interests. But the cases described in the paper provide mixed signals. We
find that some developments do draw on a growing zero-sum perception of
economic and political engagement. That zero-sum explanation alone, however, is
crucially inadequate. Self-serving nations, even when believing the world
zero-sum, have under certain circumstances produced outcomes that have
benefited all. In other circumstances, perfectly-predicted losses have instead
resulted on all sides. Such lose-lose outcomes -- epic fail equilibria --
generalize the Prisoner's Dilemma game and are strictly worse than zero-sum. In
our analysis, Third Nations -- those not frontline in Great Power rivalry --
can serve an essential role in averting epic fail outcomes. The policy
implication is that Third Nations need to provide platforms that will gently
and unobtrusively nudge Great Powers away from epic-fail equilibria and towards
inadvertent cooperation.",Economics for the Global Economic Order: The Tragedy of Epic Fail Equilibria,2023-10-27 14:00:44,"Shiro Armstrong, Danny Quah","http://arxiv.org/abs/2310.18052v1, http://arxiv.org/pdf/2310.18052v1",econ.GN
33280,gn,"We investigate the short- and long-term impacts of the Federal Reserve's
large-scale asset purchases (LSAPs) on non-financial firms' capital structure
using a threshold panel ARDL model. To isolate the effects of LSAPs from other
macroeconomic conditions, we interact firm- and industry-specific indicators of
debt capacity with measures of LSAPs. We find that LSAPs facilitated firms'
access to external financing, with both Treasury and MBS purchases having
positive effects. Our model also allows us to estimate the time profile of the
effects of LSAPs on firm leverage providing robust evidence that they are
long-lasting. These effects have a half-life of 4-5 quarters and a mean lag
length of about six quarters. Nevertheless, the magnitudes are small,
suggesting that LSAPs have contributed only marginally to the rise in U.S.
corporate debt ratios of the past decade.",Causal effects of the Fed's large-scale asset purchases on firms' capital structure,2023-10-28 11:47:31,"Andrea Nocera, M. Hashem Pesaran","http://arxiv.org/abs/2310.18638v1, http://arxiv.org/pdf/2310.18638v1",econ.GN
33281,gn,"This paper studies the action dynamics of network coordination games with
bounded-rational agents. I apply the experience-weighted attraction (EWA) model
to the analysis as the EWA model has several free parameters that can capture
different aspects of agents' behavioural features. I show that the set of
possible long-term action patterns can be largely different when the
behavioural parameters vary, ranging from a unique possibility in which all
agents favour the risk-dominant option to some set of outcomes richer than the
collection of Nash equilibria. Monotonicity and non-monotonicity in the
relationship between the number of possible long-term action profiles and the
behavioural parameters are explored. I also study the question of influential
agents in terms of whose initial predispositions are important to the actions
of the whole network. The importance of agents can be represented by a left
eigenvector of a Jacobian matrix provided that agents' initial attractions are
close to some neutral level. Numerical calculations examine the predictive
power of the eigenvector for the long-run action profile and how agents'
influences are impacted by their behavioural features and network positions.",Experience-weighted attraction learning in network coordination games,2023-10-29 01:35:05,Fulin Guo,"http://arxiv.org/abs/2310.18835v1, http://arxiv.org/pdf/2310.18835v1",econ.GN
33282,gn,"This paper examines how investment in environmentally sustainable practices
impacts employment and labor productivity growth of firms in transition
economies. The study considers labor skill composition and geographical
differences, shedding light on sustainability dynamics. The empirical analysis
relies on the World Bank's Enterprise Survey 2019 for 24 transition economies,
constructing an environmental sustainability index from various indicators
through a Principal Components Analysis. To address endogeneity, a battery of
fixed effects and instrumental variables are employed. Results reveal the
relevance of environmental sustainability for both employment and labor
productivity growth. However, the significance diminishes when addressing
endogeneity comprehensively, suggesting that any relation between environmentally
sustainable practices and job growth is more complex and needs time to work.
The decelerating job-creation effect of sustainability investments is, however,
confirmed for high-skill firms, while low-skill firms benefit from labor
productivity gains spurred by such investment. Geographically, Central Europe
sees more pronounced labor productivity impacts, possibly due to its higher
development and sustainability-awareness levels as compared to Southeast Europe
and the Commonwealth of Independent States.","Employment, labor productivity and environmental sustainability: Firm-level evidence from transition",2023-10-29 15:14:16,"Marjan Petreski, Stefan Tanevski, Irena Stojmenovska","http://arxiv.org/abs/2310.18989v1, http://arxiv.org/pdf/2310.18989v1",econ.GN
33283,gn,"Coordination development across various subsystems, particularly economic,
social, cultural, and human resources subsystems, is a key aspect of urban
sustainability that has a direct impact on the quality of urbanization.
Hangzhou Metropolitan Circle, comprising Hangzhou, Huzhou, Jiaxing, and Shaoxing, was
the first metropolitan circle approved by National Development and Reform
Commission (NDRC) as a demonstration of economic transformation. To evaluate
the coupling degree of the four cities and to analyze the coordinative
development in the three systems (Digital Economy System, Regional Innovation
System, and Talent Employment System), panel data of these four cities during
the period 2015-2022 were collected. The development level of these three
systems was evaluated using standard deviation and comprehensive development
index evaluation. The results are as follows: (1) the coupling coordination
degree of the four cities in Hangzhou Metropolitan Circle has significant
regional differences, with Hangzhou being a leader while Huzhou, Jiaxing,
Shaoxing have shown steady but slow progress in the coupling development of the
three systems; and (2) the development of digital economy and talent employment
are the breakthrough points for construction in Huzhou, Jiaxing, Shaoxing.
Related suggestions are made based on the coupling coordination results of the
Hangzhou Metropolitan Circle.","Coupling Coordinated Development among Digital Economy, Regional Innovation and Talent Employment A case study of Hangzhou Metropolitan Circle, China",2023-10-29 16:47:50,Luyi Qiu,"http://arxiv.org/abs/2310.19008v1, http://arxiv.org/pdf/2310.19008v1",econ.GN
33284,gn,"Electric mobility hubs (eHUBS) are locations where multiple shared electric
modes including electric cars and e-bikes are available. To assess their
potential to reduce private car use, it is important to investigate to what
extent people would switch to eHUBS modes after their introduction. Moreover,
people may adapt their behaviour differently depending on their current travel
mode. This study is based on stated preference data collected in Amsterdam. We
analysed the data using mixed logit models. We found users of different modes
not only have a varied general preference for different shared modes, but also
have different sensitivity for attributes such as travel time and cost.
Compared to car users, public transport users are more likely to switch towards
the eHUBS modes. People who bike and walk have strong inertia, but the
percentage choosing eHUBS modes doubles when the trip distance is longer (5 or
10 km).",Mode substitution induced by electric mobility hubs: results from Amsterdam,2023-10-29 18:01:12,"Fanchao Liao, Jaap Vleugel, Gustav Bösehans, Dilum Dissanayake, Neil Thorpe, Margaret Bell, Bart van Arem, Gonçalo Homem de Almeida Correia","http://arxiv.org/abs/2310.19036v1, http://arxiv.org/pdf/2310.19036v1",econ.GN
33285,gn,"Green ammonia is poised to be a key part in the hydrogen economy. This paper
discusses green ammonia supply chains from a higher-level industry perspective
with a focus on market structures. The architecture of upstream and downstream
supply chains are explored. Potential ways to accelerate market emergence are
discussed. Market structure is explored based on transaction cost economics and
lessons from the oil and gas industry. Three market structure prototypes are
developed for different phases. In the industry's infancy, a highly vertically integrated
structure is proposed to reduce risks and ensure capital recovery. A
restructuring towards a disintegrated structure is necessary in the next stage
to improve the efficiency. In the late stage, a competitive structure
characterized by a separation between asset ownership and production activities
and further development of short-term and spot markets is proposed towards a
market-driven industry. Finally, a multi-linear regression model is developed
to evaluate the developed structures using a case in the gas industry. Results
indicate that high asset specificity and uncertainty and low frequency lead to
a more disintegrated market structure, and vice versa, thus supporting the
structures designed. We expect these findings and results to contribute to
developing green ammonia supply chains and the hydrogen economy.",Green ammonia supply chain and associated market structure: an analysis based on transaction cost economics,2023-10-30 15:39:53,Hanxin Zhao,"http://arxiv.org/abs/2310.19498v1, http://arxiv.org/pdf/2310.19498v1",econ.GN
33286,gn,"This paper develops a new data-driven approach to characterizing latent
worker skill and job task heterogeneity by applying an empirical tool from
network theory to large-scale Brazilian administrative data on worker--job
matching. We microfound this tool using a standard equilibrium model of workers
matching with jobs according to comparative advantage. Our classifications
identify important dimensions of worker and job heterogeneity that standard
classifications based on occupations and sectors miss. The equilibrium model
based on our classifications more accurately predicts wage changes in response
to the 2016 Olympics than a model based on occupations and sectors.
Additionally, for a large simulated shock to demand for workers, we show that
reduced form estimates of the effects of labor market shock exposure on
workers' earnings are nearly 4 times larger when workers and jobs are
classified using our classifications as opposed to occupations and sectors.",What is a Labor Market? Classifying Workers and Jobs Using Network Theory,2023-11-01 21:44:57,"Jamie Fogel, Bernardo Modenesi","http://arxiv.org/abs/2311.00777v1, http://arxiv.org/pdf/2311.00777v1",econ.GN
33287,gn,"Portfolio underdiversification is one of the most costly losses accumulated
over a household's life cycle. We provide new evidence on the impact of
financial inclusion services on households' portfolio choice and investment
efficiency using 2015, 2017, and 2019 survey data for Chinese households. We
hypothesize that higher financial inclusion penetration encourages households
to participate in the financial market, leading to better portfolio
diversification and investment efficiency. The results of the baseline model
are consistent with our proposed hypothesis that higher accessibility to
financial inclusion encourages households to invest in risky assets and
increases investment efficiency. We further estimate a dynamic double machine
learning model to quantitatively investigate the non-linear causal effects and
track the dynamic change of those effects over time. We observe that the
marginal effect increases over time, and those effects are more pronounced
among low-asset, less-educated households and those located in non-rural areas,
except for investment efficiency for high-asset households.",How Does China's Household Portfolio Selection Vary with Financial Inclusion?,2023-11-02 16:00:08,"Yong Bian, Xiqian Wang, Qin Zhang","http://arxiv.org/abs/2311.01206v1, http://arxiv.org/pdf/2311.01206v1",econ.GN
33288,gn,"Connected and Automated Vehicles (CAVs) can have a profound influence on
transport systems. However, most levels of automation and connectivity require
support from the road infrastructure. Additional support such as Cooperative
Intelligent Transport Systems (C-ITS) services can facilitate safe and
efficient traffic, and alleviate the environmental impacts of road surface
vehicles. However, due to the rapidly evolving technology, C-ITS service
deployment requirements are not always clear. Furthermore, the costs and
benefits of infrastructure investments are subject to tremendous uncertainty.
This study articulates the requirements using a structured approach to propose
a CAV-Readiness Framework (CRF). The main purpose of the CRF is to allow road
authorities to assess their physical and digital infrastructure readiness,
define requirements for C-ITS services, and identify future development paths
to reach higher levels of readiness to support CAVs by enabling C-ITS services.
The CRF is intended to guide and support road authorities' investment decisions
on infrastructure.",A connected and automated vehicle readiness framework to support road authorities for C-ITS services,2023-11-02 17:30:05,"Bahman Madadi, Ary P. Silvano, Kevin McPherson, John McCarthy, Risto Öörni, Gonçalo Homem de Almeida Correiaa","http://arxiv.org/abs/2311.01268v1, http://arxiv.org/pdf/2311.01268v1",econ.GN
33299,gn,"Research on the role of Large Language Models (LLMs) in business models and
services is limited. Previous studies have utilized econometric models,
technical showcases, and literature reviews. However, this research is
pioneering in its empirical examination of the influence of LLMs at the firm
level. The study introduces a detailed taxonomy that can guide further research
on the criteria for successful LLM-based business model implementation and
deepen understanding of LLM-driven business transformations. Existing knowledge
on this subject is sparse and general. This research offers a more detailed
business model design framework based on LLM-driven transformations. This
taxonomy is not only beneficial for academic research but also has practical
implications. It can act as a strategic tool for businesses, offering insights
and best practices. Businesses can leverage this taxonomy to make informed
decisions about LLM initiatives, ensuring that technology investments align
with strategic goals.",Towards a Taxonomy of Large Language Model based Business Model Transformations,2023-11-09 14:32:37,"Jochen Wulf, Juerg Meierhofer","http://arxiv.org/abs/2311.05288v1, http://arxiv.org/pdf/2311.05288v1",econ.GN
33289,gn,"Historians and political economists have long debated the processes that led
land in frontier regions, managed commons, and a variety of customary
landholding regimes to be enclosed and transformed into more exclusive forms of
private property. Using the framework of aggregative games, we examine
land-holding regimes where access to land is established via possession and
use, and then explore the factors that may initiate decentralized privatization
processes. Factors including population density, potential for technology
improvement, enclosure costs, shifts in group cohesion and bargaining power, or
the policy and institutional environment determine the equilibrium mix of
property regimes. While decentralized processes yield efficient enclosure and
technological transformation in some circumstances, in others, the outcomes
fall short of second-best. This stems from the interaction of different
spillover effects, leading to inefficiently low rates of enclosure and
technological transformation in some cases and excessive enclosure in others.
Implementing policies to strengthen customary governance, compensate displaced
stakeholders, or subsidize/tax enclosure can realign incentives. However,
addressing one market failure while overlooking others can worsen outcomes. Our
analysis offers a unified framework for evaluating claimed mechanisms and
processes across Neoclassical, neo-institutional, and Marxian interpretations
of enclosure processes.","A Model of Enclosures: Coordination, Conflict, and Efficiency in the Transformation of Land Property Rights",2023-11-02 23:58:58,"Matthew J. Baker, Jonathan Conning","http://arxiv.org/abs/2311.01592v1, http://arxiv.org/pdf/2311.01592v1",econ.GN
33290,gn,"The manufacturing industry sector was expected to generate new employment
opportunities and take on labour. Gradually, however, it emerged as a menace to
the sustenance of its workers. According to the findings of this study, 24
two-digit ISIC manufacturing subsectors in Indonesia exhibited regressive
and abnormal patterns in the period 2012-2020. This suggests that, to a great
extent, labour absorption has been limited and, in some cases, even shown a
decline. Anomalous occurrences were observed in three subsectors: ISIC 12
(tobacco products), ISIC 26 (computer, electronic and optical products), and
ISIC 31 (furniture). In contrast, regressive phenomena were present in the
remaining 21 ISIC subsectors. Furthermore, the manufacturing industry displayed
a negative correlation between employment and efficiency index, demonstrating
this anomalous and regressive phenomenon. This implies that as the efficiency
index of the manufacturing industry increases, the index of labour absorption
decreases",Labour Absorption In Manufacturing Industry In Indonesia: Anomalous And Regressive Phenomena,2023-11-03 11:58:00,"Tongam Sihol Nababan, Elvis Fresly Purba","http://arxiv.org/abs/2311.01787v1, http://arxiv.org/pdf/2311.01787v1",econ.GN
33291,gn,"Design/methodology/approach We compared the awareness of 813 people in Wuhan
city from January to March 2023 (Wuhan 2023) and 2,973 people in East and South
China from February to May 2020 (China 2020) using responses to questionnaires
conducted at Japanese local subsidiaries during each period. Purpose As the
coronavirus pandemic becomes less terrifying than before, there is a trend in
countries around the world to abolish strict behavioral restrictions imposed by
governments. How should overseas subsidiaries change the way they manage human
resources in response to these system changes? To find an answer to this
question, this paper examines what changes occurred in the mindset of employees
working at local subsidiaries after the government's strict behavioral
restrictions were introduced and lifted during the COVID-19 pandemic. Findings
The results showed that the analytical model based on conservation of resources
(COR) theory can be applied to both China 2020 and Wuhan 2023. However, the
relationship between anxiety, fatigue, compliance, turnover intention, and
psychological and social resources of employees working at local subsidiaries
changed after the initiation and removal of government behavioral restrictions
during the pandemic, indicating that managers need to adjust their human
resource management practices in response to these changes. Originality/value
This is the first study that compares data after the start of government
regulations and data after the regulations were lifted. Therefore, this
research proposes a new analytical framework that companies, especially
foreign-affiliated companies that lack local information, can refer to in order
to respond appropriately to disasters, which expand their damage while changing in
nature and influence, while anticipating changes in employee awareness.",How to maintain compliance among host country employees who are less anxious after strict government regulations are lifted: An attempt to apply conservation of resources theory to the workplace amid the still-unending COVID-19 pandemic,2023-11-03 17:58:55,"Keisuke Kokubun, Yoshiaki Ino, Kazuyoshi Ishimura","http://arxiv.org/abs/2311.01963v1, http://arxiv.org/pdf/2311.01963v1",econ.GN
33292,gn,"This research explores the intricate relationship between organizational
commitment and nomophobia, illuminating the mediating influence of the ethical
environment. Utilizing Meyer and Allen's three-component model, the study finds
a significant inverse correlation between organizational commitment and
nomophobia, highlighting how strong organizational ties can alleviate the
anxiety of digital disconnection. The ethical environment further emerges as a
significant mediator, indicating its dual role in promoting ethical behavior
and mitigating nomophobia's psychological effects.
  The study's theoretical advancement lies in its empirical evidence on the
seldom-explored nexus between organizational commitment and technology-induced
stress. By integrating organizational ethics and technological impact, the
research offers a novel perspective on managing digital dependence in the
workplace. From a practical standpoint, this study serves as a catalyst for
organizational leaders to reinforce affective and normative commitment, thereby
reducing nomophobia. The findings underscore the necessity of ethical
leadership and comprehensive ethical policies as foundations for employee
well-being in the digital age.
  Conclusively, this study delineates the protective role of organizational
commitment and the significance of ethical environments, guiding organizations
to foster cultures that balance technological efficiency with employee welfare.
As a contribution to both academic discourse and practical application, it
emphasizes the importance of nurturing a supportive and ethically sound
workplace in an era of pervasive digital integration.",Beyond the Screen: Safeguarding Mental Health in the Digital Workplace Through Organizational Commitment and Ethical Environment,2023-11-04 17:46:01,"Ali Bai, Morteza Vahedian","http://arxiv.org/abs/2311.02422v1, http://arxiv.org/pdf/2311.02422v1",econ.GN
33300,gn,"This article concerns the optimal choice of flat taxes on labor and capital
income, and on consumption, in a tractable economic model. Agents manage a
portfolio of bonds and physical capital while subject to idiosyncratic
investment risk and random mortality. We identify the tax rates which maximize
welfare in stationary equilibrium while preserving tax revenue, finding that a
very large increase in welfare can be achieved by only taxing capital income
and consumption. The optimal rate of capital income taxation is zero if the
natural borrowing constraint is strictly binding on entrepreneurs, but may
otherwise be positive and potentially large. The Domar-Musgrave effect, whereby
capital income taxation with full offset provisions encourages risky investment
through loss sharing, explains cases where it is optimal to tax capital income.
In further analysis we study the dynamic response to the substitution of
consumption taxation for labor income taxation. We find that consumption
immediately drops before rising rapidly to the new stationary equilibrium,
which is higher on average than initial consumption for workers but lower for
entrepreneurs.",Optimal taxation and the Domar-Musgrave effect,2023-11-10 04:45:46,"Brendan K. Beare, Alexis Akira Toda","http://arxiv.org/abs/2311.05822v1, http://arxiv.org/pdf/2311.05822v1",econ.GN
33293,gn,"More than one-fifth of the US population does not subscribe to a fixed
broadband service, despite broadband being a recognized merit good. For example, less than
4% of citizens earning more than US \$70k annually do not have broadband,
compared to 26% of those earning below US \$20k annually. To address this, the
Biden Administration has undertaken one of the largest broadband investment
programs ever via The Bipartisan Infrastructure Law, with the aim of addressing
this disparity and expanding broadband connectivity to all citizens. Proponents
state this will reduce the US digital divide once-and-for-all. However,
detractors say the program leads to unprecedented borrowing at a late stage of
the economic cycle, leaving little fiscal headroom. Subsequently, in this
paper, we examine broadband availability, adoption, and need and then construct
an input-output model to explore the macroeconomic impacts of broadband
spending in Gross Domestic Product (GDP) terms. Finally, we quantify
inter-sectoral macroeconomic supply chain linkages from this investment. The
results indicate that federal broadband investment of US \$42 billion has the
potential to increase GDP by up to US \$216 billion, equating to 0.2% of annual
US GDP over the next five years, with an estimated Keynesian investment
multiplier of 2.89. To our knowledge, we contribute the first economic impact
assessment of the US Bipartisan Infrastructure Law to the literature.",The contribution of US broadband infrastructure subsidy and investment programs to GDP using input-output modeling,2023-11-04 18:21:55,"Matthew Sprintson, Edward Oughton","http://arxiv.org/abs/2311.02431v1, http://arxiv.org/pdf/2311.02431v1",econ.GN
33294,gn,"This paper studies a preference evolution model in which a population of
agents are matched to play a sequential prisoner's dilemma in an incomplete
information environment. An institution can design an incentive-compatible
screening scheme, such as a special zone that requires an entry fee, or a
costly label for purchase, to segregate the conditional cooperators from the
non-cooperators. We show that institutional intervention of this sort can help
the conditional cooperators to prevail when the psychological benefit of
cooperating for them is sufficiently strong and the membership of the special
zone or the label is inheritable with a sufficiently high probability.",Institutional Screening and the Sustainability of Conditional Cooperation,2023-11-06 04:15:33,"Ethan Holdahl, Jiabin Wu","http://arxiv.org/abs/2311.02813v1, http://arxiv.org/pdf/2311.02813v1",econ.GN
33295,gn,"With the onset of climate change and the increasing need for effective
policies, a multilateral approach is needed to make an impact on the growing
threats facing the environment. Through the use of systematic analysis by way
of C-ROADS and En-ROADS, numerous scenarios have been simulated to shed light
on the most imperative policy factors to mitigate climate change. Within
C-ROADS, it was determined that the impacts of the shrinking ice-albedo effect
on global temperatures are significant; however, differential sea ice melting
between the poles may not impact human dwellings, as all regions are impacted
by sea ice melt. Flood risks are also becoming more imminent, specifically in
high population density areas. In terms of afforestation, China is the emerging
leader, and if other countries follow suit, this can yield substantial
dividends. Upon conducting a comprehensive analysis of global trends through
En-ROADS, intriguing patterns appear between the length of a policy initiative,
and its effectiveness. Quick policies with gradual increases in taxation proved
successful. Government intervention was also favorable, however an optimized
model is presented, with moderate subsidization of renewable energy. Through
this systematic analysis of assumptions and policy for effective climate change
mitigation efforts, an optimized, economically-favorable solution arises.",Optimizing Climate Policy through C-ROADS and En-ROADS Analysis,2023-11-07 00:30:58,Iveena Mukherjee,"http://arxiv.org/abs/2311.03546v1, http://arxiv.org/pdf/2311.03546v1",econ.GN
33296,gn,"In this paper, we study a neoclassical growth model with a (productivity
inhibiting) pollution effect. In particular, we obtain a necessary and
sufficient condition for the existence of a topological chaos. We investigate
how the condition changes as the strength of the pollution effect changes. This
is a new application of a recent result characterising the existence of a
topological chaos for a unimodal interval map by Deng, Khan, Mitra (2022).",A necessary and sufficient condition for the existence of chaotic dynamics in a neoclassical growth model with a pollution effect,2023-11-07 01:54:57,Tomohiro Uchiyama,"http://arxiv.org/abs/2311.03594v1, http://arxiv.org/pdf/2311.03594v1",econ.GN
33297,gn,"This article provides a self-contained overview of the theory of rational
asset price bubbles. We cover topics from basic definitions, properties, and
classical results to frontier research, with an emphasis on bubbles attached to
real assets such as stocks, housing, and land. The main message is that bubbles
attached to real assets are fundamentally nonstationary phenomena related to
unbalanced growth. We present a bare-bones model and draw three new insights:
(i) the emergence of asset price bubbles is a necessity, instead of a
possibility; (ii) asset pricing implications are markedly different between
balanced growth of stationary nature and unbalanced growth of nonstationary
nature; and (iii) asset price bubbles occur within larger historical trends
involving shifts in industrial structure driven by technological innovation,
including the transition from the Malthusian economy to the modern economy.",Bubble Economics,2023-11-07 03:54:56,"Tomohiro Hirano, Alexis Akira Toda","http://arxiv.org/abs/2311.03638v2, http://arxiv.org/pdf/2311.03638v2",econ.GN
33298,gn,"Over the last 10 years, there has been a growing interest in diversity in
human capital. Fueled by the business case for diversity, there is an
increasing interest in understanding how the combination of people with
different backgrounds fosters the innovation performance of firms. Studies have
measured diversity on a wide range of personal-level characteristics, at
different levels of the organization, and in particular kinds of settings.
Innovation performance has been measured using an arsenal of indicators, often
drawing on a large range of databases. This paper takes stock of this research,
identifying the current state of affairs and proposing future research
trajectories in the field of diversity and innovation",Workplace diversity and innovation performance: current state of affairs and future directions,2023-11-09 12:02:43,"Christian R. Østergaard, Bram Timmermans","http://arxiv.org/abs/2311.05219v1, http://arxiv.org/pdf/2311.05219v1",econ.GN
33308,gn,"This paper shows the universal representation of symmetric functions with
multidimensional variable-size variables, which justifies the use of
approximation methods aggregating the information of each variable. I then
discuss how the results give insights into economic applications, including
two-step policy function estimation, moment-based Markov equilibrium, and
aggregative games.",The Use of Symmetry for Models with Variable-size Variables,2023-11-15 05:07:18,Takeshi Fukasawa,"http://arxiv.org/abs/2311.08650v2, http://arxiv.org/pdf/2311.08650v2",econ.GN
33301,gn,"Ghana-s current youth unemployment rate is 19.7%, and the country faces a
significant youth unemployment problem. While a range of youth-employment
programs have been created over the years, no systematic documentation and
evaluation of the impacts of these public initiatives has been undertaken.
Clarifying which interventions work would guide policy makers in creating
strategies and programs to address the youth-employment challenge. By
complementing desk reviews with qualitative data gathered from focus-group
discussions and key informant interviews, we observe that most youth-employment
programs implemented in Ghana cover a broad spectrum that includes skills
training, job placement matching, seed capital, and subsidies. Duplication of
initiatives, lack of coordination, and few to non-existent impact evaluations
of programs are the main challenges that plague these programs. For better
coordination and effective policy making, a more centralized and coordinated
system is needed for program design and implementation. Along the same lines,
ensuring rigorous evaluation of existing youth-employment programs is necessary
to provide empirical evidence of the effectiveness and efficiency of these
programs.",Empirical Review of Youth-Employment Programs in Ghana,2023-11-10 16:19:43,"Monica Lambon-Quayefio, Thomas Yeboah, Nkechi S. Owoo, Marjan Petreski, Catherine Koranchie, Edward Asiedu, Mohammed Zakaria, Ernest Berko, Yaw Nsiah Agyemang","http://arxiv.org/abs/2311.06048v1, http://arxiv.org/pdf/2311.06048v1",econ.GN
33302,gn,"Central Bank Digital Currency (CBDC) may assist New Zealand accomplish SDG 8.
I aim to evaluate if SDGs could be achieved together because of mutual
interactions between SDG 8 and other SDGs. The SDGs are categorized by their
shared qualities to affect and effect SDG 8. Also, additional SDGs may help
each other achieve. Considering the CBDC as a fundamental stimulus to achieving
decent work and economic growth, detailed study and analysis of mutual
interactions suggests that SDG 8 and other SDGs can be achieved.",Sustainable Development Goals (SDGs): New Zealand Outlook with Central Bank Digital Currency and SDG 8 Realization on the Horizon,2023-11-12 06:12:51,Qionghua Chu,"http://arxiv.org/abs/2311.06716v3, http://arxiv.org/pdf/2311.06716v3",econ.GN
33303,gn,"In the inverted yield curve environment, I intend to assess the feasibility
of fulfilling Sustainable Development Goal (SDG) 8, decent work and economic
growth, of the United Nations by 2030 in New Zealand. Central Bank Digital
Currency (CBDC) issuance supports SDG 8, based on the Cobb-Douglas production
function, the growth accounting relation, and the Theory of Aggregate Demand.
Bright prospects exist for New Zealand.",Sustainable Development Goal (SDG) 8: New Zealand Prospects while Yield Curve Inverts in Central Bank Digital Currency (CBDC) Era,2023-11-12 06:15:33,Qionghua Chu,"http://arxiv.org/abs/2311.06718v4, http://arxiv.org/pdf/2311.06718v4",econ.GN
33304,gn,"We provide four novel results for nonhomothetic Constant Elasticity of
Substitution preferences (Hanoch, 1975). First, we derive a closed-form
representation of the expenditure function of nonhomothetic CES under
relatively flexible distributional assumptions of demand and price distribution
parameters. Second, we characterize aggregate demand from heterogeneous
households in closed-form, assuming that household total expenditures follow an
empirically plausible distribution. Third, we leverage these results to study
the Euler equation arising from standard intertemporal consumption-saving
problems featuring within-period nonhomothetic CES preferences. Finally, we
show that nonhomothetic CES expenditure shares arise as the solution of a
discrete-choice logit problem.",Aggregation and Closed-Form Results for Nonhomothetic CES Preferences,2023-11-12 08:30:54,"Clement E. Bohr, Martí Mestieri, Emre Enes Yavuz","http://arxiv.org/abs/2311.06740v1, http://arxiv.org/pdf/2311.06740v1",econ.GN
33305,gn,"This study delves into the increasingly pertinent issue of technostress in
the workplace and its multifaceted impact on job performance. Technostress,
emerging from the rapid integration of technology in professional settings, is
identified as a significant stressor affecting employees across various
industries. The research primarily focuses on the ways in which technostress
influences job performance, both negatively and positively, depending on the
context and individual coping mechanisms. Through a blend of qualitative and
quantitative methodologies, including surveys and in-depth interviews, the
study examines the experiences of employees from diverse sectors. It highlights
how technostress manifests in different forms: from anxiety and frustration due
to constant connectivity to the pressure of adapting to new technologies. The
paper also explores the dual role of technology as both a facilitator and a
hindrance in the workplace.
  Significant findings indicate that technostress adversely impacts job
performance, leading to decreased productivity, diminished job satisfaction,
and increased turnover intentions. However, the study also uncovers that
strategic interventions, such as training programs, supportive leadership, and
fostering a positive technological culture, can mitigate these negative
effects. These interventions not only help in managing technostress but also in
harnessing the potential of technology for enhanced job performance.
  Furthermore, the research proposes a model outlining the relationship between
technostress, coping mechanisms, and job performance. This model serves as a
framework for organizations to understand and address the challenges posed by
technostress. The study concludes with recommendations for future research,
particularly in exploring the long-term effects of technostress and the
efficacy of various coping strategies.",Technostress and Job Performance: Understanding the Negative Impacts and Strategic Responses in the Workplace,2023-11-13 07:38:16,"Armita Atrian, Saleh Ghobbeh","http://arxiv.org/abs/2311.07072v1, http://arxiv.org/pdf/2311.07072v1",econ.GN
33306,gn,"This study investigates the strategic behavior of applicants in the Japanese
daycare market, where waitlisted applicants are granted additional priority
points in subsequent application rounds. Utilizing data from Tokyo's Bunkyo
municipality, this paper provides evidence of considerable manipulation, with
parents strategically choosing to be waitlisted to enhance the likelihood of
their child's admission into more selective daycare centers. I extend the
static framework of school choice posited by Agarwal and Somaini (2018) to
incorporate dynamic incentives and estimate a structural model that allows for
reapplication if waitlisted. Empirical findings indicate that approximately 30%
of applicants forgo listing safer options in their initial application, a
behavior significantly pronounced among those who stand to benefit from the
waitlist prioritization. Counterfactual simulations, conducted under the
scenario of no additional waitlist priority, predict a 17.7% decrease in the
number of waitlisted applicants and a 1.2% increase in overall welfare. These
findings highlight the profound influence of dynamic incentives on applicant
behavior and underscore the necessity for reevaluating current priority
mechanisms.",Dynamic Incentives in Centralized Matching: The Case of Japanese Daycare,2023-11-14 08:38:01,Kan Kuno,"http://arxiv.org/abs/2311.07920v1, http://arxiv.org/pdf/2311.07920v1",econ.GN
33307,gn,"We explore a model of non-Bayesian information aggregation in networks.
Agents non-cooperatively choose among Friedkin-Johnsen type aggregation rules
to maximize payoffs. The DeGroot rule is chosen in equilibrium if and only if
there is noiseless information transmission, leading to consensus. With noisy
transmission, while some disagreement is inevitable, the optimal choice of rule
amplifies the disagreement: even with little noise, individuals place
substantial weight on their own initial opinion in every period, exacerbating
the disagreement. We use this framework to think about equilibrium versus
socially efficient choice of rules and its connection to polarization of
opinions across groups.",Consensus and Disagreement: Information Aggregation under (not so) Naive Learning,2023-11-14 18:52:23,"Abhijit Banerjee, Olivier Compte","http://arxiv.org/abs/2311.08256v1, http://arxiv.org/pdf/2311.08256v1",econ.GN
33310,gn,"This paper reviews the literature on biodegradable waste management in
Germany, a multifaceted endeavor that reflects its commitment to sustainability
and environmental responsibility. It examines the processes and benefits of
separate collection, recycling, and eco-friendly utilization of biodegradable
materials, which produce valuable byproducts such as compost, digestate, and
biogas. These byproducts serve as organic fertilizers, peat substitutes, and
renewable energy sources, contributing to ecological preservation and economic
prudence. The paper also discusses the global implications of biodegradable
waste management, such as preventing methane emissions from landfills, a major
source of greenhouse gas, and reusing organic matter and essential nutrients.
Moreover, the paper explores how biodegradable waste management reduces waste
volume, facilitates waste sorting and incineration, and sets a global example
for addressing climate change and working toward a sustainable future. The
paper highlights the importance of a comprehensive and holistic approach to
sustainability that encompasses waste management, renewable energy,
transportation, agriculture, waste reduction, and urban development.",The Future of Sustainability in Germany: Areas for Improvement and Innovation,2023-11-15 06:57:16,"Mehrnaz Kouhihabibi, Erfan Mohammadi","http://arxiv.org/abs/2311.08678v1, http://arxiv.org/pdf/2311.08678v1",econ.GN
33311,gn,"This study contributes to the ongoing debate regarding the balance between
quality and quantity in research productivity and publication activity. Using
empirical regional knowledge production functions, we establish a significant
correlation between R&D spending and research output, specifically publication
productivity, while controlling for patenting activity and socioeconomic
factors. Our focus is on the dilemma of research quantity versus quality, which
is analysed in the context of regional thematic specialization using spatial
lags. When designing policies and making forecasts, it is important to consider
the quality of research measured by established indicators. In this study, we
examine the dual effect of research quality on publication activity. We
identify two groups of quality factors: those related to the quality of
journals and those related to the impact of publications. On average, these
factors have different influences on quantitative measures. The quality of
journals shows a negative relationship with quantity, indicating that as
journal quality increases, the number of publications decreases. On the other
hand, the impact of publications can be approximated by an inverse parabolic
shape, with a positive decreasing slope within a common range of values. This
duality in the relationship between quality factors and quantitative measures
may explain some of the significant variations in conclusions found in the
literature. We compare several models that explore factors influencing
publication activity using a balanced panel dataset of Russian regions from
2009 to 2021. Additionally, we propose a novel approach using thematic
scientometric parameters as a special type of proximity measure between regions
in thematic space. Incorporating spatial spillovers in thematic space allows us
to account for potential cross-sectional dependence in regional data.",Quantity versus quality in publication activity: knowledge production at the regional level,2023-11-15 13:18:56,"Timur Gareev, Irina Peker","http://arxiv.org/abs/2311.08830v1, http://arxiv.org/pdf/2311.08830v1",econ.GN
33312,gn,"This paper discusses the impact of electricity blackouts and poor
infrastructure on the livelihood of residents and the local economy of
Johannesburg, South Africa. A stable electricity grid plays a vital role in the
effective functioning of urban infrastructure and the economy. The importance of
electricity in present-day South Africa has not
been emphasized enough to be prioritized at all levels of government,
especially at the local level, as it is where all socio-economic activities
take place. The new South Africa needs to redefine the importance of
electricity by ensuring that it is accessible, affordable, and produced
sustainably, and most of all, by ensuring that the energy transition
initiatives to green energy take place in a planned manner without causing harm
to the economy, which might deepen the plight of South Africans. Currently, the
City of Johannesburg is a growing spatial entity in both demographic and
urbanization terms, and growing urban spaces require a stable supply of
electricity for the proper functioning of urban systems and the growth of the
local economy. The growth of the city brings about a massive demand for
electricity that outstrips the current supply of electricity available on the
local grid. The imbalance in the current supply and growing demand for
electricity results in energy blackouts in the city, which have ripple effects
on the economy and livelihoods of the people of Johannesburg. This paper
examines the impact of electricity blackouts and poor infrastructure on the
livelihood of residents and the local economy of Johannesburg, South Africa.","The impact of Electricity Blackouts and poor infrastructure on the livelihood of residents and the local economy of City of Johannesburg, South Africa",2023-11-15 16:01:25,"Nkosingizwile Mazwi Mchunu, George Okechukwu Onatu, Trynos Gumbo","http://arxiv.org/abs/2311.08929v1, http://arxiv.org/pdf/2311.08929v1",econ.GN
33313,gn,"The paper seeks to interpret the interface between innovation, industry and
urban business agglomeration, by trying to understand how local/regional
entrepreneurs and social agents use the forces of agglomeration to organize
productive activities with a view to achieving greater competitiveness and
strategic advantages. There are many territorial manifestations that
materialize from this new reality and the text seeks to propose a reading of
its characteristics based on what we call ""productive spatial configurations"",
which represent the specific functioning of a certain innovative production
process and its territorial impact. To illustrate this approach, we take as an
example a case study showing how productive spatial configurations
manifest themselves through the revitalization of an industrial economy that
incorporates innovative efforts, whether technological, process or
organizational. This is the localized industrial system (LIS) of clothing and
apparel in Fortaleza, Cear\'a state, Brazil, which reveals an industrial
experience of resistance with significant organizational innovation in the
production and distribution processes of clothing. The main objective of the
proposal is to organize theoretical and empirical tools that will allow us to
read the combination of economic, social and political variables in spaces
where collaborative networks of companies, service centers and public
institutions flourish, in the context of various industrial production
processes. This could point to the progress we need to overcome the many false
controversies on the subject.",New configurations of the interface between innovation and urban spatial agglomeration: the localized industrial systems (lis) of the clothing in Fortaleza/Brazil,2023-11-16 01:57:55,Edilson Pereira Junior,"http://arxiv.org/abs/2311.09429v1, http://arxiv.org/pdf/2311.09429v1",econ.GN
33314,gn,"The focus of this text is to discuss how geographical science can contribute
to an understanding of international migration in the 21st century. To this
end, in the introduction we present the central ideas, as well as the internal
structure of the text. Then, we present the theoretical and methodological
approach that guides the research and, in section three, we show the results
through texts and cartograms based on secondary data and empirical information.
Finally, in the final remarks, we summarize the text with a view to
contributing to the proposed debate.",Urban economics of migration in the cities of the northeast region of Brazil,2023-11-16 02:07:22,Denise Cristina Bomtempo,"http://arxiv.org/abs/2311.09432v1, http://arxiv.org/pdf/2311.09432v1",econ.GN
33315,gn,"In the present study, we use an experimental setting to explore the effects
of sugar-free labels on the willingness to pay for food products. In our
experiment, participants placed bids for sugar-containing and analogous
sugar-free products in a Becker-deGroot-Marschak auction to determine the
willingness to pay. Additionally, they rated each product on the level of
perceived healthiness, sweetness, tastiness and familiarity with the product.
We then used structural equation modelling to estimate the direct, indirect and
total effect of the label on the willingness to pay. The results suggest that
sugar-free labels significantly increase the willingness to pay due to the
perception of sugar-free products as healthier than sugar-containing ones.
However, this positive effect is overridden by a significant decrease in
perceived sweetness (and hence, tastiness) of products labelled as sugar-free
compared to sugar-containing products. Since, in our sample, healthiness and
tastiness are positively related while healthiness and sweetness are negatively
related, these results suggest that it is the health-sweetness rather than the
health-tastiness tradeoff that reduces the efficacy of the sugar-free
labelling in nudging consumers towards healthier options.",The efficacy of the sugar-free labels is reduced by the health-sweetness tradeoff,2023-11-16 16:26:37,"Ksenia Panidi, Yaroslava Grebenschikova, Vasily Klucharev","http://arxiv.org/abs/2311.09885v1, http://arxiv.org/pdf/2311.09885v1",econ.GN
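The Becker-DeGroot-Marschak (BDM) mechanism referenced in the preceding entry can be illustrated with a short sketch: a random price is drawn, and the participant buys only if their bid is at least that price, which makes bidding one's true willingness to pay optimal. The price range and bid below are hypothetical, not values from the study.

```python
import random

def bdm_outcome(bid: float, price_low: float, price_high: float, seed=None):
    """Resolve one Becker-DeGroot-Marschak auction round.

    A random price is drawn uniformly from [price_low, price_high]; the
    participant buys at that price if their bid is at least the price."""
    rng = random.Random(seed)
    price = rng.uniform(price_low, price_high)
    buys = bid >= price
    return {"price": round(price, 2), "buys": buys,
            "paid": round(price, 2) if buys else 0.0}

# Example: a participant bids 120 (currency units) for a sugar-free product
print(bdm_outcome(bid=120.0, price_low=0.0, price_high=300.0, seed=42))
```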
33316,gn,"This paper introduces an innovative travel survey methodology that utilizes
Google Location History (GLH) data to generate travel diaries for
transportation demand analysis. By leveraging the accuracy of GLH and its
omnipresence among smartphone users, the proposed methodology avoids the need for
proprietary GPS tracking applications to collect smartphone-based GPS data.
This research enhanced an existing travel survey designer, Travel Activity
Internet Survey Interface (TRAISI), to make it capable of deriving travel
diaries from the respondents' GLH. The feasibility of this data collection
approach is showcased through the Google Timeline Travel Survey (GTTS)
conducted in the Greater Toronto Area, Canada. The resultant dataset from the
GTTS is demographically representative and offers detailed and accurate travel
behavioural insights.",Deriving Weeklong Activity-Travel Dairy from Google Location History: Survey Tool Development and A Field Test in Toronto,2023-11-17 00:56:03,"Melvyn Li, Kaili Wang, Yicong Liu, Khandker Nurul Habib","http://arxiv.org/abs/2311.10210v1, http://arxiv.org/pdf/2311.10210v1",econ.GN
33317,gn,"We study the evolution of population density across Italian municipalities on
the basis of their trajectories in the Moran space. We find evidence of spatial
dynamical patterns of concentrated urban growth, urban sprawl, agglomeration,
and depopulation. Over the long run, three distinct settlement systems emerge:
urban, suburban, and rural. We discuss how estimating these demographic trends
at the municipal level can help the design and validation of policies
countering the socio-economic decline in specific Italian areas, as in the
case of the Italian National Strategy for Inner Areas (Strategia Nazionale per
le Aree Interne, SNAI).",Unveiling spatial patterns of population in Italian municipalities,2023-11-17 16:42:08,"Davide Fiaschi, Angela Parenti, Cristiano Ricci","http://arxiv.org/abs/2311.10520v1, http://arxiv.org/pdf/2311.10520v1",econ.GN
33318,gn,"This paper studies how religious competition, as measured by the emergence of
religious organizations with innovative worship styles and cultural practices,
impacts domestic violence. Using data from Colombia, the study estimates a
two-way fixed-effects model and reveals that the establishment of the first
non-Catholic church in a predominantly Catholic municipality leads to a
significant decrease in reported cases of domestic violence. This effect
persists in the long run, indicating that religious competition introduces
values and practices that discourage domestic violence, such as household
stability and reduced male dominance. Additionally, the effect is more
pronounced in municipalities with less clustered social networks, suggesting
the diffusion of these values and practices through social connections. This
research contributes to the understanding of how culture influences domestic
violence, emphasizing the role of religious competition as a catalyst for
cultural change.","Religious Competition, Culture and Domestic Violence: Evidence from Colombia",2023-11-17 22:21:18,"Hector Galindo-Silva, Guy Tchuente","http://arxiv.org/abs/2311.10831v1, http://arxiv.org/pdf/2311.10831v1",econ.GN
33319,gn,"The ethical imperative for technology should be first, do no harm. But
digital innovations like AI and social media increasingly enable societal
harms, from bias to misinformation. As these technologies grow ubiquitous, we
need solutions to address unintended consequences. This report proposes a model
to incentivize developers to prevent foreseeable algorithmic harms. It does
this by expanding negligence and product liability laws. Digital product
developers would be incentivized to mitigate potential algorithmic risks before
deployment to protect themselves and investors. Standards and penalties would
be set proportional to harm. Insurers would require harm mitigation during
development in order to obtain coverage. This shifts tech ethics from move fast
and break things to first, do no harm. Details would need careful refinement
between stakeholders to enact reasonable guardrails without stifling
innovation. Policy and harm prevention frameworks would likely evolve over
time. Similar accountability schemes have helped address workplace,
environmental, and product safety. Introducing algorithmic harm negligence
liability would acknowledge the real societal costs of unethical tech. The
timing is right for reform. This proposal provides a model to steer the digital
revolution toward human rights and dignity. Harm prevention must be prioritized
over reckless growth. Vigorous liability policies are essential to stop
technologists from breaking things.","First, Do No Harm: Algorithms, AI, and Digital Product Liability",2023-11-17 23:47:41,Marc J. Pfeiffer,"http://arxiv.org/abs/2311.10861v1, http://arxiv.org/pdf/2311.10861v1",econ.GN
33320,gn,"Energy security is the guarantee for achieving the goal of carbon peaking and
carbon neutrality, and exploring energy resilience is one of the important ways
to promote energy security transition and adapt to changes in international and
domestic energy markets. This paper applies the combined dynamic evaluation
method to measure China's energy resilience level from 2004-2021, analyses the
spatio-temporal dynamic evolution of China's energy resilience through the
center of gravity-standard deviation ellipse and kernel density estimation, and
employs geo-detectors to detect the main influencing factors and interactions
of China's energy resilience. The study finds that: (1) China's energy
resilience level generally shows a zigzagging forward development trend, and
the spatial imbalance of China's energy resilience is pronounced. (2) The
spatial dynamics of China's energy resilience level evolve in a
northeast-southwest direction, with the whole moving towards the southwest and
a constant counterclockwise offset overall. (3) When the energy resilience
level of neighboring provinces is too low or too high, it has little effect on
improving the province's own energy resilience; when the energy resilience
level of neighboring provinces is 1-1.4, it is positively spatially correlated
with the province's energy resilience, and synergistic development among
provinces can jointly improve energy resilience. (4) GDP, the number of
employees, the number of
employees enrolled in basic pension and medical insurance, and the number of
patent applications in high-tech industries have a more significant impact on
China's energy resilience, while China's energy resilience is affected by the
interaction of multiple factors.",Research on the Dynamic Evolution and Influencing Factors of Energy Resilience in China,2023-11-18 09:45:10,"Tie Wei, Youqi Chen, Zhicheng Duan","http://arxiv.org/abs/2311.10987v1, http://arxiv.org/pdf/2311.10987v1",econ.GN
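The center-of-gravity and standard deviational ellipse analysis mentioned in the preceding entry can be approximated with a short numerical sketch. The planar-coordinate treatment and the toy inputs below are assumptions for illustration, not the study's actual computation.

```python
import numpy as np

def sde(lon, lat, weight):
    """Weighted mean centre and standard deviational ellipse parameters.

    lon, lat: province coordinates; weight: energy-resilience scores.
    Returns centre, semi-axis lengths and rotation angle (radians),
    using a planar approximation purely for illustration."""
    w = np.asarray(weight, dtype=float)
    x, y = np.asarray(lon, float), np.asarray(lat, float)
    cx, cy = np.average(x, weights=w), np.average(y, weights=w)
    dx, dy = x - cx, y - cy
    # Orientation of the major axis from the weighted second moments
    theta = 0.5 * np.arctan2(2 * (w * dx * dy).sum(),
                             (w * dx**2).sum() - (w * dy**2).sum())
    xr = dx * np.cos(theta) + dy * np.sin(theta)
    yr = -dx * np.sin(theta) + dy * np.cos(theta)
    sx = np.sqrt((w * xr**2).sum() / w.sum())
    sy = np.sqrt((w * yr**2).sum() / w.sum())
    return (cx, cy), (sx, sy), theta

# Toy example: five provinces with made-up coordinates and resilience scores
centre, axes, angle = sde([102, 110, 116, 121, 106],
                          [25, 34, 40, 31, 29],
                          [0.8, 1.1, 1.3, 1.0, 0.9])
print(centre, axes, angle)
```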
33321,gn,"Ranking pertaining to the human-centered tasks -- underscoring their
paramount significance in these domains such as evaluation and hiring process
-- exhibits widespread prevalence across various industries. Consequently,
decision-makers are taking proactive measurements to promote diversity,
underscore equity, and advance inclusion. Their unwavering commitment to these
ideals emanates from the following convictions: (i) Diversity encompasses a
broad spectrum of differences; (ii) Equity involves the assurance of equitable
opportunities; and (iii) Inclusion revolves around the cultivation of a sense
of value and impartiality, concurrently empowering individuals. Data-driven AI
tools have been used for screening and ranking processes. However, there is a
growing concern that the presence of pre-existing biases in databases may be
exacerbated, particularly in the context of imbalanced datasets or
black-box models. In this research, we propose a model-driven recruitment
decision support tool that addresses fairness together with equity in the
screening phase. We introduce the term ``pDEI"" to represent the output-input
oriented production efficiency adjusted by socioeconomic disparity. Taking into
account various aspects of interpreting socioeconomic disparity, our goals are
(i) maximizing the relative efficiency of underrepresented groups and (ii)
understanding how socioeconomic disparity affects the cultivation of a
DEI-positive workplace.",Workforce pDEI: Productivity Coupled with DEI,2023-11-19 08:15:15,"Lanqing Du, Jinwook Lee","http://arxiv.org/abs/2311.11231v2, http://arxiv.org/pdf/2311.11231v2",econ.GN
33322,gn,"Some people did not get the COVID-19 vaccine even though it was offered at no
cost. A monetary incentive might lead people to vaccinate, although existing
studies have provided different findings about this effect. We investigate how
monetary incentives differ according to individual characteristics. Using panel
data with online experiments, we found that (1) subsidies reduced vaccine
intention but increased it after controlling heterogeneity; (2) the stronger
the social image against the vaccination, the lower the monetary incentive; and
(3) persistently unvaccinated people would intend to be vaccinated only if a
large subsidy was provided.",Would Monetary Incentives to COVID-19 vaccination reduce motivation?,2023-11-20 18:04:26,"Eiji Yamamura, Yoshiro Tsutsui, Fumio Ohtake","http://arxiv.org/abs/2311.11828v1, http://arxiv.org/pdf/2311.11828v1",econ.GN
33323,gn,"The focus on Renewable Energy Communities (REC) is fastly growing after the
European Union (EU) has introduced a dedicated regulation in 2018. The idea of
creating local groups of citizens, small- and medium-sized companies, and
public institutions, which self-produce and self-consume energy from renewable
sources is at the same time a way to save money for the participants, increase
efficiency of the energy system, and reduce CO$_2$ emissions. Member states
within the EU are setting more detailed regulations that describe how public
incentives are measured. A natural objective for the incentive policies is of
course to promote the self-consumption of a REC. A sophisticated incentive
policy is that based on the so called 'virtual framework'. Under this framework
all the energy produced by a REC is sold to the market, and all the energy
consumed must be paid to retailers: self-consumption occurs only 'virtually',
thanks to a monetary compensation (paid by a central authority) for every MWh
produced and consumed by the REC in the same hour. In this context, two
problems have to be solved: the optimal investment in new technologies and a
fair division of the incentive among the community members. We address these
problems by considering a particular type of REC, composed of a representative
household and a biogas producer, where the potential demand of the community is
given by the household's demand, while both members produce renewable energy.
We set the problem as a leader-follower problem: the leader decides how to share
the incentive for the self-consumed energy, while the followers decide their
own optimal installation strategy. We solve the leader's problem by searching
for a Nash bargaining solution for the incentive's fair division, while the
followers' problem is solved by finding the Nash equilibria of a static
competitive game between the members.",Optimal Investment and Fair Sharing Rules of the Incentives for Renewable Energy Communities,2023-11-18 15:46:05,"Almendra Awerkin, Paolo Falbo, Tiziano Vargiolu","http://arxiv.org/abs/2311.12055v1, http://arxiv.org/pdf/2311.12055v1",econ.GN
33324,gn,"The study of work-family conflict (WFC) and work-family policies (WFP) and
their impact on the well-being of employees in the tourism sector is
increasingly attracting the attention of researchers. To overcome the adverse
effects of WFC, managers should promote WFP, which contribute to increased
well-being at work and employees' commitment. This paper aims to analyze the
impact of WFP accessibility and organizational support on well-being directly
and by mediating the organizational commitment that these policies might
encourage. In addition, we also study whether these relationships vary
according to gender and employee seniority. To test the hypotheses derived from
this objective, we collected 530 valid and completed questionnaires from
workers in the tourism sector in Spain, which we analyzed using structural
equation modeling based on the PLS-SEM approach. The results show that human
resource management must consider the importance of organizational support for
workers to make WFP accessible and generate organizational commitment and
well-being at work.",Organizational support for work-family life balance as an antecedent to the well-being of tourism employees in Spain,2023-11-23 16:53:13,"Jose Aurelio Medina-Garrido, Jose Maria Biedma-Ferrer, Maria Bogren","http://dx.doi.org/10.1016/j.jhtm.2023.08.018, http://arxiv.org/abs/2311.14009v1, http://arxiv.org/pdf/2311.14009v1",econ.GN
33325,gn,"I analyze the risk-coping behaviors among factory worker households in early
20th-century Tokyo. I digitize a unique daily longitudinal survey dataset on
household budgets to examine the extent to which consumption is affected by
idiosyncratic shocks. I find that while the households were so vulnerable that
the shocks impacted their consumption levels, the income elasticity for food
consumption is relatively low from a short-run perspective. The results from the
mechanism analysis suggest that credit purchases played a role in smoothing
the short-run food consumption. The event-study analysis using the adverse
health shock as the idiosyncratic income shock confirms the robustness of the
results. I also find evidence that the misassignment of payday in data
aggregation results in a systematic attenuation bias due to measurement error.",Consumption Smoothing in Metropolis: Evidence from the Working-class Households in Prewar Tokyo,2023-11-24 10:43:38,Kota Ogasawara,"http://arxiv.org/abs/2311.14320v2, http://arxiv.org/pdf/2311.14320v2",econ.GN
33326,gn,"For decades, entrepreneurship has been promoted in academia and tourism
sector and has it been seen as an opportunity for new business ventures. In
terms of entrepreneurial behavior, effectual logic shows how the individual
uses his or her resources to create new opportunities. In this context, this
paper aims to determine effectual propensity as an antecedent of
entrepreneurial intentions. For this purpose, and based on the TPB model, we
conducted our research with tourism students from Cadiz and Seville (Spain)
universities with Smart PLS 3. The results show that effectual propensity
influences entrepreneurial intentions and that attitude and perceived
behavioral control mediate between subjective norms and intentions. Our
research adds substantial value since we study, for the first time, effectual
propensity as an antecedent of intentions in people who have never
been entrepreneurs.",Impact of effectual propensity on entrepreneurial intention,2023-11-24 11:34:19,"Alicia Martin-Navarro, Felix Velicia-Martin, Jose Aurelio Medina-Garrido, Pedro R. Palos-Sanchez","http://dx.doi.org/10.1016/j.jbusres.2022.113604, http://arxiv.org/abs/2311.14340v1, http://arxiv.org/pdf/2311.14340v1",econ.GN
33327,gn,"BPMS (Business Process Management System) represents a type of software that
automates organizational processes in pursuit of efficiency. Since the
knowledge of organizations lies in their processes, it seems probable that a
BPMS can be used to manage the knowledge applied in these processes. Through
the BPMS-KM Support Model, this study aims to determine the reliability and
validity of a 65-item instrument to measure the utility and the use of a BPMS
for knowledge management (KM). A questionnaire was sent to 242 BPMS users and
to determine its validity, a factor analysis was conducted. The results
showed that the measuring instrument is reliable and valid. This has
implications for research, since it provides a validated instrument for
research on the success of a BPMS for KM. There would also be practical
implications, since managers can evaluate the use of BPMS, in addition to
automating processes to manage knowledge.",Testing an instrument to measure the BPMS-KM Support Model,2023-11-24 11:48:04,"Alicia Martin-Navarro, Maria Paula Lechuga Sancho, Jose Aurelio Medina-Garrido","http://dx.doi.org/10.1016/j.eswa.2021.115005, http://arxiv.org/abs/2311.14348v1, http://arxiv.org/pdf/2311.14348v1",econ.GN
33328,gn,"Purpose: The objective of this work is to analyze the impact of implementing
work-family reconciliation measures on workers' perception and how this can
influence their behavior, especially in their organizational performance.
Design/methodology/approach: A literature review and the main research works
related to the work-family conflict and reconciliation measures to overcome
this conflict have been conducted to draw conclusions about their impact on
worker performance. Contributions and results: This work proposes an
integrative model that shows the existing relationships between work-family
reconciliation and perceptual variables on one side, and those related to the
worker's organizational behavior on the other. Perceptual variables such as
stress, job satisfaction, and motivation are analyzed. Regarding variables
related to the worker's organizational behavior, absenteeism, turnover, and
performance are analyzed. The results of the analysis provide evidence that the
existence of work-family reconciliation is perceived favorably by workers and
improves their organizational behavior, especially their performance.
Originality/Added value: This study integrates different perspectives related
to the conflict and work-family reconciliation, from an eclectic vision. Thus,
it contributes to existing literature with a more comprehensive approach to the
investigated topic. Additionally, the proposed integrative model allows for
useful conclusions for management from both a purely human resources management
perspective and organizational productivity improvement. Keywords: Work-family
conflict, work-family reconciliation, perceptual variables, organizational
performance, human resources management",Impact of family-friendly HRM policies in organizational performance,2023-11-24 12:00:41,"Jose Maria Biedma Ferrer, Jose Aurelio Medina Garrido","http://dx.doi.org/10.3926/ic.506, http://arxiv.org/abs/2311.14358v2, http://arxiv.org/pdf/2311.14358v2",econ.GN
33329,gn,"We consider a regulator driving individual choices towards increasing social
welfare by providing personal incentives. We formalise and solve this problem
by maximising social welfare under a budget constraint. The personalised
incentives depend on the alternatives available to each individual and on her
preferences. A polynomial time approximation algorithm computes a policy within
a few seconds. We analytically prove that it is boundedly close to the optimum.
We efficiently calculate the curve of social welfare achievable for each value
of budget within a given range. This curve can be useful for the regulator to
decide the appropriate amount of budget to invest. We extend our formulation to
enforcement, taxation and non-personalised-incentive policies. We analytically
show that our personalised-incentive policy is also optimal within this class
of policies and construct close-to-optimal enforcement and proportional
tax-subsidy policies. We then compare analytically and numerically our policy
with other state-of-the-art policies. Finally, we simulate a large-scale
application to mode choice to reduce CO2 emissions.",Personalised incentives with constrained regulator's budget,2023-11-24 14:28:25,"Lucas Javaudin, Andrea Araldo, André de Palma","http://dx.doi.org/10.1080/23249935.2023.2284353, http://arxiv.org/abs/2311.14417v1, http://arxiv.org/pdf/2311.14417v1",econ.GN
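The budget-constrained incentive allocation in the preceding entry can be illustrated with a generic budgeted-greedy heuristic: fund the switches with the best welfare gain per unit of incentive until the budget runs out. This is a sketch in the same spirit, not the authors' polynomial-time approximation algorithm; all names and figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    individual: int
    alternative: str
    welfare_gain: float    # social-welfare improvement if this individual switches
    incentive_cost: float  # personalised payment needed to induce the switch

def greedy_incentive_policy(candidates, budget):
    """Greedy budgeted allocation: sort by welfare gain per unit cost and fund
    at most one alternative per individual while the budget allows."""
    chosen, spent, served = [], 0.0, set()
    for c in sorted(candidates,
                    key=lambda c: c.welfare_gain / c.incentive_cost,
                    reverse=True):
        if c.individual in served or spent + c.incentive_cost > budget:
            continue
        chosen.append(c)
        spent += c.incentive_cost
        served.add(c.individual)
    return chosen, spent

cands = [Candidate(1, "bus", 4.0, 2.0), Candidate(1, "bike", 6.0, 5.0),
         Candidate(2, "bus", 3.0, 1.0), Candidate(3, "carpool", 2.5, 2.5)]
policy, cost = greedy_incentive_policy(cands, budget=4.0)
print([(c.individual, c.alternative) for c in policy], cost)
```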
33330,gn,"Process mining has become one of the best programs that can outline the event
logs of production processes in visualized detail. We have addressed the
important problem that easily occurs in the industrial process called
Bottleneck. The analysis process was focused on extracting the bottlenecks in
the production line to improve the flow of production. Given enough stored
history logs, the field of process mining can provide a suitable answer to
optimize production flow by mitigating bottlenecks in the production stream.
Process mining diagnoses the productivity processes by mining event logs, this
can help to expose the opportunities to optimize critical production processes.
We found that there is a considerable bottleneck in the process because of the
weaving activities. Through discussions with specialists, it was agreed that
the main problem in the weaving processes, especially machines that were
exhausted in overloading processes. The improvement in the system has measured
by teamwork; the cycle time for process has improved to 91%, the worker's
performance has improved to 96%,product quality has improved by 85%, and lead
time has optimized from days and weeks to hours.",Application of Process Mining and Sequence Clustering in Recognizing an Industrial Issue,2023-11-26 20:31:55,Hamza Saad,"http://arxiv.org/abs/2311.15362v1, http://arxiv.org/pdf/2311.15362v1",econ.GN
33331,gn,"Privatization and commercialization of airports in recent years are drawing a
different picture in the aeronautical industry. Airport benchmarking shows how
airports adapt and perform as the market evolves and as they face new
requirements. AENA manages a wide and
heterogeneous network of airports. There are 46 airports divided into three
categories and with particularities due to their geographical location or the
competitive environment where they are located. This paper analyzes the
technical efficiency and its determinants of the 39 commercial airports of the
AENA network between the years 2011-2014. To do this, two benchmarking
techniques, SFA and DEA, are used, with a two-stage analysis. The average
efficiency of the network is between 75-79%. The results with the two
techniques are similar with a correlation of 0.67. With regard to the
commercial part of the network, AENA has a high margin for improvement because
it is below the world and European average. AENA should focus mainly on
developing the commercial area and introducing competition within the network
to improve the technical efficiency of regional airports.",An efficiency analysis of Spanish airports,2023-11-08 17:56:54,Adrian Nerja,"http://dx.doi.org/10.19272/202106701002, http://arxiv.org/abs/2311.16156v1, http://arxiv.org/pdf/2311.16156v1",econ.GN
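The DEA side of the two-stage benchmarking in the preceding entry can be sketched as a textbook input-oriented CCR linear program solved unit by unit. The envelopment form below and the toy airport data are illustrative assumptions, not AENA's actual inputs, outputs, or the authors' model.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta in (0, 1],
    where 1 means the unit lies on the efficient frontier."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector z = [theta, lambda_1..lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  X.T @ lambda - theta * X[j0] <= 0
    A_in = np.hstack([-X[j0].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # Outputs: -Y.T @ lambda <= -Y[j0]  (i.e. Y.T @ lambda >= Y[j0])
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Toy data: 5 airports, inputs = [staff, runway capacity], output = [passengers]
X = np.array([[10, 2], [12, 2], [8, 1], [20, 3], [15, 2]], dtype=float)
Y = np.array([[100], [110], [70], [160], [150]], dtype=float)
print([round(dea_ccr_input(X, Y, j), 3) for j in range(len(X))])
```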
33332,gn,"I present new evidence of the effects of climate shocks on political violence
and social unrest. Using granular conflict and weather data covering the entire
continent of Africa from 1997 to 2023, I find that exposure to El Nino events
during the crop-growing season decreases political violence during the early
postharvest season. A one degree warming of sea surface temperature in the
tropical Pacific Ocean, a usual proxy for a moderate-strength El Nino event,
results in at least a three percent reduction in political violence with civilian
targeting. This main finding, backed by a series of robustness checks, supports
the idea that agriculture is the key channel and rapacity is the key motive
connecting climatic shocks and political violence. Reassuringly, the magnitude
of the estimated effect nearly doubles when I use subsets of data that are
better suited to unveiling the proposed mechanism, and the effect only
manifests during and after the harvest, but not prior to it, when I apply an
event study model. This study advances the knowledge of the relationship
between climate and conflict. And because El Nino events can be predicted
several months in advance, these findings can contribute to creating a platform
for early warnings of political violence, specifically in predominantly
agrarian societies in Africa.","Climate, Crops, and Postharvest Conflict",2023-11-28 02:29:29,David Ubilava,"http://arxiv.org/abs/2311.16370v2, http://arxiv.org/pdf/2311.16370v2",econ.GN
33333,gn,"The objective of this research was to determine the degree of knowledge that
the inhabitants of the Guadalajara Metropolitan Area (made up of the
municipalities Guadalajara, Tlajomulco de Z\'u\~niga, Tlaquepaque, Zapopan and
Tonal\'a) had regarding tequila and the brands produced in Los Altos of
Jalisco. For this, a survey consisting of five questions was designed, which
was applied in the central square, center or z\'ocalo of each municipality. The
results show that the big brands, once acquired by international companies,
focused on capturing consumers in international markets, since the prices of
the same products they export have become unaffordable for those who enjoy
that drink. It can therefore be considered that the big brands have left the
national market somewhat behind; they did not abandon it completely, but it
stopped being their main objective. Therefore, it can be concluded that the
national market is a window of opportunity for small and still unknown
producers to work together as a group: by standardizing a series of products
of the same quality and packaging, they can cover the national market and
perhaps, in the future, become a large company distributed throughout the
territory and begin exporting with national capital alone.",El tequila para consumo nacional como una ventana de oportunidades para el pequeño productor agavero,2023-11-28 22:54:11,Guillermo José Navarro del Toro,"http://dx.doi.org/10.23913/ricea.v10i19.159, http://arxiv.org/abs/2311.17193v1, http://arxiv.org/pdf/2311.17193v1",econ.GN
33334,gn,"Honey consumption in Russia has been actively growing in recent years due to
the increasing interest in healthy and environment-friendly food products.
However, it remains an open question which characteristics of honey are the
most significant for consumers and, more importantly, from an economic point of
view, for which of them consumers are willing to pay. The purpose of this study
was to investigate the role of sensory characteristics in assessing consumers'
willingness to pay for honey and to determine which properties and
characteristics ""natural"" honey should have to encourage repeated purchases by
target consumers. The study involved a behavioral experiment that included a
pre-test questionnaire, blind tasting of honey samples, an in-room test to
assess perceived quality, and a closed auction using the
Becker-DeGroot-Marschak method. As a result, it was revealed that the
correspondence of the expected sensations to the actual taste, taste intensity,
duration of the aftertaste and the sensations of tickling in the throat had a
positive effect on both the perceived quality of the product and the
willingness to pay for it, while perception of off-flavors or added sugar had a
negative impact. Using factor analysis, we have combined 21 sensory
characteristics of honey into eight components that were sufficient to obtain
the flavor portrait of honey by Russian consumers.",The impact of sensory characteristics on the willingness to pay for honey,2023-11-30 09:12:38,"Julia Zaripova, Ksenia Chuprianova, Irina Polyakova, Daria Semenova, Sofya Kulikova","http://arxiv.org/abs/2311.18269v1, http://arxiv.org/pdf/2311.18269v1",econ.GN
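The factor-analysis step in the preceding entry, which condenses 21 sensory ratings into eight components, can be sketched with standard tooling. The random ratings matrix and variable names below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: 100 tasting observations x 21 sensory ratings
rng = np.random.default_rng(1)
ratings = rng.normal(size=(100, 21))

# Standardize, then extract eight latent components, mirroring the idea of
# combining 21 sensory characteristics into eight factors.
fa = FactorAnalysis(n_components=8, random_state=0)
scores = fa.fit_transform(StandardScaler().fit_transform(ratings))
loadings = fa.components_            # (8 factors x 21 characteristics)
print(scores.shape, loadings.shape)  # (100, 8) (8, 21)
```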
33335,gn,"Recently, environmental, social, and governance (ESG) has become an important
factor in companies' sustainable development. Artificial intelligence (AI) is
also a core digital technology that can create innovative, sustainable,
comprehensive, and resilient environments. ESG- and AI-based digital
transformation is a relevant strategy for managing business value and
sustainability in corporate green management operations. Therefore, this study
examines how corporate sustainability relates to ESG- and AI-based digital
transformation. Furthermore, it confirms the moderating effect of green
innovation on the process of increasing sustainability. To achieve the purpose
of this study, 359 data points collected for hypothesis testing were used for
statistical analysis and for mobile business platform users. The following
conclusions are drawn. (1) ESG activities have become key variables that enable
sustainable corporate growth. Companies can implement eco-friendly operating
processes through ESG activities. (2) This study verifies the relationship
between AI-based digital transformation and corporate sustainability and
confirms that digital transformation positively affects corporate
sustainability. In addition, societal problems can be identified and
environmental accidents prevented through technological innovation. (3) This
study does not verify the positive moderating effect of green innovation;
however, it emphasizes its necessity and importance. Although green innovation
improves performance only in the long term, it is a key factor for companies
pursuing sustainable growth. This study reveals that ESG- and AI-based digital
transformation is an important tool for promoting corporate sustainability,
broadening the literature in related fields and providing insights for
corporate management and government policymakers to advance corporate
sustainability.",Does ESG and Digital Transformation affects Corporate Sustainability? The Moderating role of Green Innovation,2023-11-30 11:44:28,"Chenglin Qing, Shanyue Jin","http://arxiv.org/abs/2311.18351v1, http://arxiv.org/pdf/2311.18351v1",econ.GN
33336,gn,"The aim of the research paper is to understand the sustainability challenges
faced by resorts mainly luxury in Maldives and to implement the sustainable
tourism practices. The Maldives economy is dependent mostly on the fishing,
boat building, boat repairing and tourism. Over recent years there is a drastic
change that has took place in Maldives in tourism industry. Maldives has
progressed to be the upper middle-income country and luxury resorts are the
reason for increased GDP in the country. Although there are some practices
associated with the luxury resorts to follow in terms of environmental
concerns. Present study focuses on the triple bottom line approach and the 12
major Sustainable Tourism Principles as a framework for sustainability
practices and its implementation including the challenges associated in
Maldives. The paper suggests some recommendations on several paradigm of
enforcing laws and regulations, waste management facilities, fostering
collaboration along with promoting local agriculture. The study also
contemplates on several other areas such as on the impact of sustainability
initiatives, coral restoration, and the use of sustainable supply chains. The
intent of the current research is to suggest methods to promote the sustainable
practices in luxury resort in Maldives.",Implementing Sustainable Tourism practices in luxury resorts of Maldives: Sustainability principles & Tripple Bottomline Approach,2023-11-30 13:55:33,"Dr Mir Hasan Naqvi, Asnan Ahmed, Dr Asif Pervez","http://arxiv.org/abs/2311.18453v1, http://arxiv.org/pdf/2311.18453v1",econ.GN
33337,gn,"The aim of this paper is to carry out a systematic analysis of the literature
to show the state of the art of Business Processes Management Systems (BPMS).
BPMS represents a technology that automates business processes connecting users
with their tasks. For this, a systematic review of the literature of the last
ten years was carried out, using scientific papers indexed in the main
databases of the knowledge area. The papers generated by the search were later
analysed and filtered. The findings of this study include the academic interest
in the subject and its multidisciplinary nature, as studies of this type have
been identified in different areas of knowledge. Our research is a starting
point for future work aiming to develop a more robust theory and broaden
interest in the subject, given its economic impact on process management.",BPMS for management: a systematic literature review,2023-12-01 12:19:50,"Alicia Martin-Navarro, Maria Paula Lechuga Sancho, Jose Aurelio Medina-Garrido","http://dx.doi.org/10.3989/redc.2018.3.1532, http://arxiv.org/abs/2312.00442v1, http://arxiv.org/pdf/2312.00442v1",econ.GN
33338,gn,"Business Process Management Systems (BPMS) represent a technology that
automates business processes, connecting users to their tasks. There are many
business processes within the port activity that can be improved through the
use of more efficient technologies and BPMS in particular, which can help to
coordinate and automate critical processes such as cargo manifests, customs
declarations, the management of port calls, or dangerous goods, traditionally
supported by EDI technologies. These technologies could be integrated with
BPMS, modernizing port logistics management. The aim of this work is to
demonstrate, through a systematic analysis of the literature, the state of the
art in BPMS research in the port industry. For this, a systematic review of the
literature of the last ten years was carried out. The works generated by the
search were subsequently analysed and filtered. After the investigation, it is
discovered that the relationship between BPMS and the port sector is
practically non-existent, which represents an important gap to be covered and a
future line of research.",Business process management systems in port processes: a systematic literature review,2023-12-01 12:20:17,"Alicia Martin-Navarro, Maria Paula Lechuga Sancho, Jose Aurelio Medina-Garrido","http://dx.doi.org/10.1504/ijasm.2020.109245, http://arxiv.org/abs/2312.00443v1, http://arxiv.org/pdf/2312.00443v1",econ.GN
33339,gn,"The tourism sector is a sector with many opportunities for business
development. Entrepreneurship in this sector promotes economic growth and job
creation. Knowing how entrepreneurial intention develops facilitates its
transformation into entrepreneurial behaviour. Entrepreneurial behaviour can
adopt a causal logic, an effectual logic or a combination of both. Considering
the causal logic, decision-making is done through prediction. In this way,
entrepreneurs try to increase their market share by planning strategies and
analysing possible deviations from their plans. Previous literature studies
causal entrepreneurial behaviour, as well as variables such as creative
innovation, proactive decisions and entrepreneurship training when the
entrepreneur has already created his or her firm. However, there is an obvious
gap at a stage prior to the start of entrepreneurial activity when the
entrepreneurial intention is formed. This paper analyses how creativity,
proactivity, entrepreneurship education and the propensity for causal behaviour
influence entrepreneurial intentions. To achieve the research objective, we
analysed a sample of 464 undergraduate tourism students from two universities
in southern Spain. We used SmartPLS 3 software to apply a structural equation
methodology to the measurement model composed of nine hypotheses. The results
show, among other relationships, that causal propensity, entrepreneurship
learning programmes and proactivity are antecedents of entrepreneurial
intentions. These findings have implications for theory, as they fill a gap in
the field of entrepreneurial intentions. Considering propensity towards causal
behaviour before setting up the firm is unprecedented. Furthermore, the results
of this study have practical implications for the design of public education
policies and the promotion of business creation in the tourism sector.",Causal propensity as an antecedent of entrepreneurial intentions in tourism students,2023-12-01 14:46:22,"Alicia Martin-Navarro, Felix Velicia-Martin, Jose Aurelio Medina-Garrido, Ricardo Gouveia Rodrigues","http://dx.doi.org/10.1007/s11365-022-00826-1, http://arxiv.org/abs/2312.00517v1, http://arxiv.org/pdf/2312.00517v1",econ.GN
33340,gn,"Open government and open (government) data are seen as tools to create new
opportunities, eliminate or at least reduce information inequalities and
improve public services. More than a decade of these efforts has provided much
experience, practices, and perspectives to learn how to better deal with them.
This paper focuses on benchmarking of open data initiatives over the years and
attempts to identify patterns observed among European countries that could lead
to disparities in the development, growth, and sustainability of open data
ecosystems. To do this, we studied benchmarks and indices published over the
last years (57 editions of 8 artifacts) and conducted a comparative case study
of eight European countries, identifying patterns among them considering
different potentially relevant contexts such as e-government, open government
data, open data indices and rankings, and others relevant for the country under
consideration. Using a Delphi method, we reached a consensus within a panel of
experts and validated a final list of 94 patterns, including their frequency of
occurrence among studied countries and their effects on the respective
countries. Finally, we took a closer look at the developments in identified
contexts over the years and defined 21 recommendations for more resilient and
sustainable open government data initiatives and ecosystems and future steps in
this area.",Identifying patterns and recommendations of and for sustainable open data initiatives: a benchmarking-driven analysis of open government data initiatives among European countries,2023-12-01 15:58:17,"Martin Lnenicka, Anastasija Nikiforova, Mariusz Luterek, Petar Milic, Daniel Rudmark, Sebastian Neumaier, Caterina Santoro, Cesar Casiano Flores, Marijn Janssen, Manuel Pedro Rodríguez Bolívar","http://arxiv.org/abs/2312.00551v2, http://arxiv.org/pdf/2312.00551v2",econ.GN
33341,gn,"The literature on effectual theory offers validated scales to measure
effectual or causal logic in entrepreneurs' decision-making. However, there are
no adequate scales to assess in advance the effectual or causal propensity of
people with an entrepreneurial intention before the creation of their
companies. We aim to determine the validity and reliability of an instrument to
measure that propensity by first analysing those works that provide recognised
validated scales with which to measure the effectual or causal logic in people
who have already started up companies. Then, considering these scales, we
designed a scale to evaluate the effectual or causal propensity in people who
had not yet started up companies using a sample of 230 final-year business
administration students to verify its reliability and validity. The validated
scale has theoretical implications for the literature on potential
entrepreneurship and entrepreneurial intention and practical implications for
promoters of entrepreneurship who need to orient the behaviour of
entrepreneurs, entrepreneurs of established businesses who want to implement a
specific strategic orientation, entrepreneurs who want to evaluate the
effectual propensity of their potential partners and workers, and academic
institutions interested in orienting the entrepreneurial potential of their
students.",How effectual will you be? Development and validation of a scale in higher education,2023-12-01 23:38:59,"Alicia Martin-Navarro, Jose Aurelio Medina-Garrido, Felix Velicia-Martin","http://dx.doi.org/10.1016/j.ijme.2021.100547, http://arxiv.org/abs/2312.00916v1, http://arxiv.org/pdf/2312.00916v1",econ.GN
33342,gn,"We introduce a new survey of professors at roughly 150 of the most
research-intensive institutions of higher education in the US. We document
seven new features of how research-active professors are compensated, how they
spend their time, and how they perceive their research pursuits: (1) there is
more inequality in earnings within fields than there is across fields; (2)
institutions, ranks, tasks, and sources of earnings can account for roughly
half of the total variation in earnings; (3) there is significant variation
across fields in the correlations between earnings and different kinds of
research output, but these account for a small amount of earnings variation;
(4) measuring professors' productivity in terms of output-per-year versus
output-per-research-hour can yield substantial differences; (5) professors'
beliefs about the riskiness of their research are best predicted by their
fundraising intensity, their risk-aversion in their personal lives, and the
degree to which their research involves generating new hypotheses; (6) older
and younger professors have very different research outputs and time
allocations, but their intended audiences are quite similar; (7) personal
risk-taking is highly predictive of professors' orientation towards applied,
commercially-relevant research.",New Facts and Data about Professors and their Research,2023-12-03 19:20:38,"Kyle R. Myers, Wei Yang Tham, Jerry Thursby, Marie Thursby, Nina Cohodes, Karim Lakhani, Rachel Mural, Yilun Xu","http://arxiv.org/abs/2312.01442v1, http://arxiv.org/pdf/2312.01442v1",econ.GN
33343,gn,"The well-being of individuals in a crowd is interpreted as a product of the
crossover of individuals from heterogeneous communities, which may occur via
interactions with other crowds. The index moving-direction entropy
corresponding to the diversity of the moving directions of individuals is
introduced to represent such an inter-community crossover. Multi-scale moving
direction entropies, composed of various geographical mesh sizes to compute the
index values, are used to capture the information flow owing to human movements
from/to various crowds. The generated map of high values of multiscale moving
direction entropy is shown to coincide significantly with the preference of
people to live in each region.",Generating a Map of Well-being Regions using Multiscale Moving Direction Entropy on Mobile Sensors,2023-12-05 08:40:17,"Yukio Ohsawa, Sae Kondo, Yi Sun, Kaira Sekiguchi","http://arxiv.org/abs/2312.02516v1, http://arxiv.org/pdf/2312.02516v1",econ.GN
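The moving-direction entropy in the preceding entry can be illustrated as the Shannon entropy of binned movement directions within one geographical mesh cell. The eight-direction binning and random displacements below are assumptions for illustration, not the authors' exact definition or data.

```python
import numpy as np

def moving_direction_entropy(dx, dy, n_bins=8):
    """Shannon entropy of the distribution of movement directions in one mesh cell.

    dx, dy: displacement components of individual moves observed in the cell;
    higher entropy means more diverse directions, read here as stronger
    inter-community crossover."""
    angles = np.arctan2(dy, dx)                    # directions in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    counts, _ = np.histogram(angles, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Multi-scale use would recompute this on meshes of several sizes and combine them.
rng = np.random.default_rng(0)
moves = rng.normal(size=(200, 2))
print(moving_direction_entropy(moves[:, 0], moves[:, 1]))
```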
33344,gn,"This research report journal aims to investigate the feasibility of
establishing a nuclear presence at Western Michigan University. The report will
analyze the potential benefits and drawbacks of introducing nuclear technology
to WMUs campus. The study will also examine the current state of nuclear
technology and its applications in higher education. The report will conclude
with a recommendation on whether WMU should pursue the establishment of a
nuclear presence on its campus.",The Feasibility of Establishing A Nuclear Presence at Western Michigan University,2023-12-06 05:48:19,"Andrew Thomas Goheen, Zayon Deshon Mobley-Wright, Rayne Symone Strother, Ryan Thomas Guthrie","http://arxiv.org/abs/2312.03249v1, http://arxiv.org/pdf/2312.03249v1",econ.GN
33345,gn,"This paper investigates a two-echelon inventory system with a central
warehouse and N (N > 2) retailers managed by a centralized information-sharing
mechanism. In particular, the paper mathematically models an easy-to-implement
inventory control system that facilitates making use of information. Some
assumptions of the paper include: a) constant delivery time for retailers and
the central warehouse and b) Poisson demand with identical rates for retailers.
The inventory policy comprises continuous review (R,Q)-policy on the part of
retailers and triggering the system with m batches (of a given size Q) at the
central warehouse. Besides, the central warehouse monitors retailers' inventory
and may order batches sooner than retailers' reorder point, when their
inventory position reaches R + s. An earlier study proposed the policy and its
cost function approximation. This paper derives an exact mathematical model of
the cost function for the aforementioned problem.",The Cost Function of a Two-Level Inventory System with identical retailers benefitting from Information Sharing,2023-12-06 23:46:06,"Amir Hosein Afshar Sedigh, Rasoul Haji, Seyed Mehdi Sajadifar","http://arxiv.org/abs/2312.03898v1, http://arxiv.org/pdf/2312.03898v1",econ.GN
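The continuous-review (R, Q) policy with Poisson demand described in the preceding entry can be explored numerically. The single-retailer, lost-sales simulation below is only a sketch of the policy's mechanics; it does not reproduce the paper's two-echelon exact cost model, and all parameter values are hypothetical.

```python
import numpy as np

def simulate_rq_retailer(R, Q, lam, lead_time, horizon, seed=0):
    """Simulate one retailer under a continuous-review (R, Q) policy with
    Poisson demand (rate lam per period) and a constant delivery lead time.
    Returns average on-hand inventory and fill rate."""
    rng = np.random.default_rng(seed)
    on_hand, pipeline = R + Q, []          # pipeline: list of (arrival_time, Q)
    served = demanded = inv_sum = 0
    for t in range(horizon):
        arrived = sum(q for a, q in pipeline if a <= t)
        pipeline = [p for p in pipeline if p[0] > t]
        on_hand += arrived
        d = rng.poisson(lam)
        demanded += d
        served += min(d, on_hand)
        on_hand = max(on_hand - d, 0)
        position = on_hand + sum(q for _, q in pipeline)
        while position <= R:               # order batches of size Q when position falls to R
            pipeline.append((t + lead_time, Q))
            position += Q
        inv_sum += on_hand
    return inv_sum / horizon, served / demanded

print(simulate_rq_retailer(R=5, Q=10, lam=2.0, lead_time=3, horizon=10_000))
```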
33346,gn,"Decarbonizing China's energy system requires both greening the power supply
and end-use electrification. While the latter speeds up with electric
vehicle adoption, a rapid power sector transformation can be technologically
and institutionally challenging. Using an integrated assessment model, we
analyze the synergy between power sector decarbonization and end-use
electrification in China's net-zero pathway from a system perspective. We show
that even with a slower coal power phase-out, reaching a high electrification
rate of 60% by 2050 is a robust optimal strategy. Comparing emission intensity
of typical end-use applications, we find most have reached parity with
incumbent fossil fuel technologies even under China's current power mix due to
efficiency gains. Since a 10-year delay in coal power phase-out can result in
an additional cumulative emission of 28% (4%) of the global 1.5{\deg}C
(2{\deg}C) CO2 budget, policy measures should be undertaken today to ensure a
power sector transition without unexpected delays.",Robust CO2-abatement from early end-use electrification under uncertain power transition speed in China's netzero transition,2023-12-07 17:50:13,"Chen Chris Gong, Falko Ueckerdt, Christoph Bertram, Yuxin Yin, David Bantje, Robert Pietzcker, Johanna Hoppe, Michaja Pehl, Gunnar Luderer","http://arxiv.org/abs/2312.04332v1, http://arxiv.org/pdf/2312.04332v1",econ.GN
33347,gn,"Grandparents were anticipated to participated in grand-rearing. The COVID-19
pandemic had detached grandparents from rearing grandchildren. The research
questions of this study were as follows: How does the change in family
relations impact the well-being (SWB) of grandparents and parents? We examined
how family structure influenced subjective SWB before and after COVID-19. We
focused on the effects of children, grandchildren, and their gender on
grandparents and parents. We found that compared with the happiness level
before COVID-19, (1) granddaughters increased their grandmothers SWB after
COVID-19, (2) both daughters and sons reduced their fathers SWB after COVID-19,
whereas neither daughters nor sons changed their mothers SWB, and (3) the
negative effect of sons reduced substantially if their fathers had younger
brothers. Learning from interactions with younger brothers in childhood,
fathers could avoid the deterioration of relationships with their sons, even
when unexpected events possibly changed the lifestyle of the family and their
relationship.","Family Structure, Gender and Subjective Well-being: Effect of Child ren before and after COVID 19 in Japan",2023-12-07 19:29:18,"Eiji Yamamura, Fumio Ohtake","http://arxiv.org/abs/2312.04411v1, http://arxiv.org/pdf/2312.04411v1",econ.GN
33348,gn,"This study aims to examine the relationship between Foreign Direct Investment
(FDI), personal remittances received, and official development assistance (ODA)
in the economic growth of Bangladesh. The study utilizes time series data on
Bangladesh from 1976 to 2021. Additionally, this research contributes to the
existing literature by introducing the Foreign Capital Depthless Index (FCDI)
and exploring its impact on Bangladesh's economic growth. The results of the
Vector Error Correction Model (VECM) suggest that the economic growth of
Bangladesh depends on FDI, remittances, and aid in the long run. However, these
variables do not exhibit a causal relationship with GDP in the short run. The
relationship between FCDI and economic growth is positive in the long run.
Nevertheless, the presence of these three variables has a more significant
impact on the economic growth of Bangladesh.",Foreign Capital and Economic Growth: Evidence from Bangladesh,2023-12-08 00:08:17,"Ummya Salma, Md. Fazlul Huq Khan, Md. Masum Billah","http://arxiv.org/abs/2312.04695v1, http://arxiv.org/pdf/2312.04695v1",econ.GN
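The VECM estimation mentioned in the preceding entry can be set up with statsmodels. The random-walk data, the single cointegrating rank, one lagged difference, and the deterministic term below are assumptions for illustration, not the study's Bangladesh series or specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Illustrative series standing in for log GDP, FDI, remittances and ODA
# (46 synthetic observations, echoing the 1976-2021 annual sample length).
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(46, 4)).cumsum(axis=0),
                    columns=["lgdp", "lfdi", "lremit", "loda"])

# One cointegrating relation, one lagged difference, constant inside the
# cointegration relation -- assumed settings, not necessarily the paper's.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.alpha)  # adjustment (error-correction) coefficients
print(res.beta)   # long-run cointegrating vector
```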
33349,gn,"Leveraged developers facing rollover risk are more likely to engage in fire
sales. Using COVID-19 as a natural experiment, we find evidence of fire sale
externalities in the Thai condominium market. Resales in properties whose
developers have higher leverage ratios have lower listing prices for listed
developers (who have access to capital market financing) but not unlisted
developers (who primarily use bank financing). We attribute this difference to
the flexibility of bank loan renegotiation versus the rigidity of debt capital
market repayments and highlight the role of commercial banks in financial
intermediation in the presence of information asymmetry.","Developers' Leverage, Capital Market Financing, and Fire Sale Externalities Evidence from the Thai Condominium Market",2023-12-08 15:49:02,Kanis Saengchote,"http://arxiv.org/abs/2312.05013v1, http://arxiv.org/pdf/2312.05013v1",econ.GN
33350,gn,"This paper describes an econometric model of the Brazilian domestic carrier
Azul Airlines' network construction. We employed a discrete-choice framework of
airline route entry to examine the effects of the merger of another regional
carrier, Trip Airlines, with Azul in 2012, especially on its entry decisions.
We contrasted the estimated entry determinants before and after the merger with
the benchmarks of the US carriers JetBlue Airways and Southwest Airlines
obtained from the literature, and proposed a methodology for comparing
different airline entry patterns by utilizing the kappa statistic for
interrater agreement. Our empirical results indicate a statistically
significant agreement between raters of Azul and JetBlue, but not Southwest,
and only for entries on previously existing routes during the pre-merger
period. The results suggest that post-merger, Azul has adopted a more
idiosyncratic entry pattern, focusing on the regional flights segment to
conquer many monopoly positions across the country, and strengthening its
profitability without compromising its distinguished expansion pace in the
industry.",An empirical analysis of the determinants of network construction for Azul Airlines,2023-12-09 21:35:14,"B. F. Oliveira, A. V. M Oliveira","http://arxiv.org/abs/2312.05630v1, http://arxiv.org/pdf/2312.05630v1",econ.GN
33351,gn,"This study investigates the implications of algorithmic pricing in digital
marketplaces, focusing on Airbnb's pricing dynamics. With the advent of
Airbnb's new pricing tool, this research explores how digital tools influence
hosts' pricing strategies, potentially leading to market dynamics that straddle
the line between efficiency and collusion. Utilizing a Regression Discontinuity
Design (RDD) and Propensity Score Matching (PSM), the study examines the causal
effects of the pricing tool on pricing behavior among hosts with different
operational strategies. The findings aim to provide insights into the evolving
landscape of digital economies, examining the balance between competitive
market practices and the risk of tacit collusion facilitated by algorithmic
pricing. This study contributes to the discourse on digital market regulation,
offering a nuanced understanding of the implications of AI-driven tools in
market dynamics and antitrust analysis.",The New Age of Collusion? An Empirical Study into Airbnb's Pricing Dynamics and Market Behavior,2023-12-09 21:41:55,Richeng Piao,"http://arxiv.org/abs/2312.05633v1, http://arxiv.org/pdf/2312.05633v1",econ.GN
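The propensity score matching component mentioned in the preceding entry can be sketched with a generic 1:1 nearest-neighbour matching estimator. Covariates, treatment (adoption of the pricing tool), and outcome below are simulated placeholders, not the study's Airbnb data or exact specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(X, treated, outcome):
    """1:1 nearest-neighbour propensity score matching (with replacement).

    X: host/listing covariates; treated: 1 if the host adopted the pricing tool;
    outcome: e.g. log nightly price. Returns the average treatment effect on
    the treated."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t, c = np.where(treated == 1)[0], np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[t].reshape(-1, 1))
    return float(np.mean(outcome[t] - outcome[c[idx.ravel()]]))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
outcome = 0.5 * treated + X[:, 1] + rng.normal(size=500)
print(psm_att(X, treated, outcome))
```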
33352,gn,"This study examines the role of human capital investment in driving
sustainable socio-economic growth within the energy industry. The fuel and
energy sector undeniably forms the backbone of contemporary economies,
supplying vital resources that underpin industrial activities, transportation,
and broader societal operations. In the context of the global shift toward
sustainability, it is crucial to focus not just on technological innovation but
also on cultivating human capital within this sector. This is particularly
relevant considering the recent shift towards green and renewable energy
solutions. In this study, we utilize bibliometric analysis, drawing from a
dataset of 1933 documents (represented by research papers, conference
proceedings, and book chapters) indexed in the Web of Science (WoS) database.
We conduct a network cluster analysis of the textual and bibliometric data
using VOSViewer software. The findings stemming from our analysis indicate that
investments in human capital are perceived as important in achieving long-term
sustainable economic growth in the energy companies both in Russia and
worldwide. In addition, it appears that the role of human capital in the energy
sector is gaining more popularity both among Russian and international
researchers and academics.",Human capital in the sustainable economic development of the energy sector,2023-12-11 18:35:04,"Evgeny Kuzmin, Maksim Vlasov, Wadim Strielkowski, Marina Faminskaya, Konstantin Kharchenko","http://arxiv.org/abs/2312.06450v1, http://arxiv.org/pdf/2312.06450v1",econ.GN
33353,gn,"The design of research grants has been hypothesized to be a useful tool for
influencing researchers and their science. We test this by conducting two
thought experiments in a nationally representative survey of academic
researchers. First, we offer participants a hypothetical grant with randomized
attributes and ask how the grant would influence their research strategy.
Longer grants increase researchers' willingness to take risks, but only among
tenured professors, which suggests that job security and grant duration are
complements. Both longer and larger grants reduce researchers' focus on speed,
which suggests a significant amount of racing in science is in pursuit of
resources. But along these and other strategic dimensions, the effect of grant
design is small. Second, we identify researchers' indifference between the two
grant design parameters and find they are very unwilling to trade off the
amount of funding a grant provides in order to extend the duration of the grant
$\unicode{x2014}$ money is much more valuable than time. Heterogeneity in this
preference can be explained with a straightforward model of researchers'
utility. Overall, our results suggest that the design of research grants is
more relevant to selection effects on the composition of researchers pursuing
funding, as opposed to having large treatment effects on the strategies of
researchers that receive funding.","Money, Time, and Grant Design",2023-12-11 19:06:24,"Kyle Myers, Wei Yang Tham","http://arxiv.org/abs/2312.06479v1, http://arxiv.org/pdf/2312.06479v1",econ.GN
33354,gn,"The decarbonization of buildings requires the phase-out of fossil fuel
heating systems. Heat pumps are considered a crucial technology to supply a
substantial part of heating energy for buildings. Yet, their introduction is
not without challenges, as heat pumps generate additional electricity demand as
well as peak loads. To better understand these challenges, I study an ambitious
simultaneous heat pump rollout in several central European countries with an
hourly-resolved capacity expansion model of the power sector. I
assess the structure of hours and periods of peak heat demands and their
concurrence with hours and periods of peak residual load. In a 2030 scenario, I
find that meeting 25% of total heat demand in buildings with heat pumps would
be covered best with additional wind power generation capacities. I also
identify the important role of small thermal energy storage that could reduce
the need for additional firm generation capacity. Due to the co-occurrence of
heat demand, interconnection between countries does not substantially reduce
the additional generation capacities needed for heat pump deployment. Based on
six different weather years, my analysis cautions against relying on results
based on a single weather year.",Power sector impacts of a simultaneous European heat pump rollout,2023-12-11 21:24:26,Alexander Roth,"http://arxiv.org/abs/2312.06589v1, http://arxiv.org/pdf/2312.06589v1",econ.GN
33355,gn,"As AI adoption accelerates, research on its economic impacts becomes a
salient source to consider for stakeholders of AI policy. Such research is
however still in its infancy, and one in need of review. This paper aims to
accomplish just that and is structured around two main themes. Firstly, the
path towards transformative AI, and secondly the wealth created by it. It is
found that sectors most embedded into global value chains will drive economic
impacts, hence special attention is paid to the international trade
perspective. When it comes to the path towards transformative AI, research is
heterogeneous in its predictions, with some predicting rapid, unhindered
adoption, and others taking a more conservative view based on potential
bottlenecks and comparisons to past disruptive technologies. As for wealth
creation, while some agreement is to be found in AI's growth boosting
abilities, predictions on timelines are lacking. Consensus exists however
around the dispersion of AI induced wealth, which is heavily biased towards
developed countries due to phenomena such as anchoring and reduced bargaining
power of developing countries. Finally, a shortcoming of economic growth models
in failing to consider AI risk is discovered. Based on the review, a
calculated, and slower adoption rate of AI technologies is recommended.",The Transformative Effects of AI on International Economics,2023-12-09 03:03:48,Rafael Andersson Lipcsey,"http://arxiv.org/abs/2312.06679v1, http://arxiv.org/pdf/2312.06679v1",econ.GN
33356,gn,"This thesis aims to assess the importance of the air transport sector for
Sweden's economy in the context of the COVID-19 pandemic. Two complementary
research goals are formulated. Firstly, investigating economic linkages of the
Swedish air transport sector. Secondly, estimating the effects of the pandemic
on Swedish air transport, and the spin-off effects on the economy as a whole.
An overview of the literature in the field reveals that while a fair amount of
research exists on the importance of air transport, pandemic
effects have, unsurprisingly, yet to be investigated. The methodological framework chosen is
input-output analysis, thus, the backbone of data materials used is the Swedish
input-output table, complemented by additional datasets. For measuring economic
linkages, basic input-output analysis is applied. Meanwhile, to capture the
pandemic's impacts, a combination of inoperability analysis and the partial
hypothetical extraction model is implemented. It is found that while Swedish
air transport plays an important role in turning the cogs of the Swedish
economy, when compared to all other sectors, it ranks on the lower end.
Furthermore, while the COVID-19 pandemic has a detrimental short-term impact on
Swedish air transport, the spin-off effects for the economy as a whole are
milder. It is concluded that, from a value-added perspective, the Swedish
government, and by extension policymakers globally, need not support failing
airlines and other air transport actors. Nonetheless, some aspects could not
be captured through the methods used, such as the possible importance of
airlines to national security at times of unreliable global cooperation. Lastly, to
accurately capture not only short-term, but also long-term effects of the
pandemic, future research using alternative, causality-based frameworks and
dynamic input-output models will be highly beneficial.",An Inquiry Into The Economic Linkages Between The Swedish Air Transport Sector And The Economy As A Whole In The Context Of The Covid-19 Pandemic,2023-12-09 19:10:38,Rafael Andersson Lipcsey,"http://arxiv.org/abs/2312.06694v1, http://arxiv.org/pdf/2312.06694v1",econ.GN
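A minimal sketch of the basic input-output machinery referred to in the abstract above (Leontief inverse, backward-linkage multipliers, and a crude partial hypothetical extraction of one sector), using a toy 3-sector coefficient matrix rather than the actual Swedish input-output table.

```python
# Toy input-output example: Leontief inverse, output multipliers, and a crude
# partial hypothetical extraction of sector 2 (e.g., "air transport").
# The 3x3 technical-coefficient matrix A and final demand f are hypothetical.
import numpy as np

A = np.array([[0.10, 0.05, 0.02],
              [0.08, 0.15, 0.10],
              [0.03, 0.07, 0.20]])   # a_ij: input from i per unit output of j
f = np.array([100.0, 50.0, 80.0])    # final demand by sector

I = np.eye(3)
L = np.linalg.inv(I - A)             # Leontief inverse
x = L @ f                            # gross output needed to satisfy f
multipliers = L.sum(axis=0)          # column sums: backward-linkage multipliers

# Partial hypothetical extraction: scale down sector 2's intermediate sales by
# 50% and recompute output; the gap is a rough measure of the sector's importance.
A_ext = A.copy()
A_ext[1, :] *= 0.5
x_ext = np.linalg.inv(I - A_ext) @ f

print("gross output:", x.round(1))
print("output multipliers:", multipliers.round(2))
print("output loss from extraction:", (x - x_ext).round(1))
```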
33357,gn,"Independent retrospective analyses of the effectiveness of reducing
deforestation and forest degradation (REDD) projects are vital to ensure
climate change benefits are being delivered. A recent study in Science by West
et al. (1) appeared therefore to be a timely alert that the majority of
projects operating in the 2010s failed to reduce deforestation rates.
Unfortunately, their analysis suffered from major flaws in the choice of
underlying data, resulting in poorly matched and unstable counterfactual
scenarios. These were compounded by calculation errors, biasing the study
against finding that projects significantly reduced deforestation. This flawed
analysis of 24 projects unfairly condemned all 100+ REDD projects, and risks
cutting off finance for protecting vulnerable tropical forests from destruction
at a time when funding needs to grow rapidly.",Serious errors impair an assessment of forest carbon projects: A rebuttal of West et al. (2023),2023-12-11 22:10:04,"Edward T. A. Mitchard, Harry Carstairs, Riccardo Cosenza, Sassan S. Saatchi, Jason Funk, Paula Nieto Quintano, Thom Brade, Iain M. McNicol, Patrick Meir, Murray B. Collins, Eric Nowak","http://arxiv.org/abs/2312.06793v1, http://arxiv.org/pdf/2312.06793v1",econ.GN
33358,gn,"One of the main stages for achieving success is the adoption of new
technology by its users. Several studies show that Property technology is
advantageous for real estate stakeholders. Hence, the purpose of this paper is
to investigate the users' engagement behavior to adopt Property technology in
the Vietnamese real estate market. To that end, a purposive sample of 142
participants was recruited to complete an online quantitative survey. The survey consisted of a modified and previously validated measure of
acceptance based on the extended demographic version of the unified theory of
acceptance and use of technology (UTAUT), as well as usage scale items. The empirical
findings confirm that participants were generally accepting of Property
technology in the Vietnamese real estate market. The highest mean values were
associated with the subscales of effort expectancy and performance expectancy,
while we can easily identify the lowest mean value in the social influence
subscale. The usage of Property technology was slightly more concerned with the
gathering of information on properties and markets than transactions or
portfolio management. This study provides an in-depth understanding of Property
technology for firms' managers and marketers. Online social interactions might
be either harmful or fruitful for firms depending on the type of interaction
and engagement behavior. This is especially true of property portals and social
media forums that would help investors to connect, communicate and learn.
Findings can help users to improve their strategies for digital marketing. By
providing robust findings by addressing issues like omitted variables and
endogeneity, the findings of this study are promising for developing new
hypotheses and theoretical models in the context of the Vietnamese real estate
market.",The behavioral intention to adopt Proptech services in Vietnam real estate market,2023-12-12 08:48:08,Le Tung Bach,"http://dx.doi.org/10.5281/zenodo.10360520, http://arxiv.org/abs/2312.06994v1, http://arxiv.org/pdf/2312.06994v1",econ.GN
33360,gn,"This paper explores the critical question of the sustainability of Russian
solar energy initiatives in the absence of governmental financial support. The
study aims to determine if Russian energy companies can maintain operations in
the solar energy sector without relying on direct state subsidies.
Methodologically, the analysis utilizes established investment metrics such as
Net Present Value (NPV), Internal Rate of Return (IRR), and Discounted Payback
Period (DPP), tailored to reflect the unique technical and economic aspects of
Russian solar energy facilities and to evaluate the influence of
sector-specific risks on project efficiency, using a rating approach. We
examined eleven solar energy projects under ten different scenarios to
understand the dynamics of direct state support, exploring variations in
support cessation, reductions in financial assistance, and the projects'
resilience to external risk factors. Our multi-criteria scenario assessment
indicates that, under the prevailing market conditions, the Russian solar
energy sector is not yet equipped to operate efficiently without ongoing state
financial subsidies. Interestingly, our findings also suggest that the solar
energy sector in Russia has a greater potential to reduce its dependence on
state support compared to the wind energy sector. Based on these insights, we
propose energy policy recommendations aimed at gradually minimizing direct
government funding in the Russian renewable energy market. This strategy is
designed to foster self-sufficiency and growth in the solar energy sector.",Would Russian solar energy projects be feasible independent of state subsidies?,2023-12-12 16:12:33,"Gordon Rausser, Galina Chebotareva, Wadim Strielkowski, Lubos Smutka","http://arxiv.org/abs/2312.07240v1, http://arxiv.org/pdf/2312.07240v1",econ.GN
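A small sketch of the investment metrics named in the abstract above (NPV, IRR, and discounted payback period) for a hypothetical project cash-flow stream; the paper's risk-adjusted, scenario-specific calibration is not reproduced.

```python
# NPV, IRR (via bisection) and discounted payback period for a hypothetical
# project: an initial outlay followed by constant annual net cash flows.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Bisection on NPV; assumes exactly one sign change of NPV over [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def discounted_payback(rate, cashflows):
    """First period in which the cumulative discounted cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        cumulative += cf / (1 + rate) ** t
        if cumulative >= 0:
            return t
    return None  # never paid back within the horizon

flows = [-1000.0] + [180.0] * 10   # hypothetical solar project, 10-year horizon
rate = 0.08
print("NPV :", round(npv(rate, flows), 1))
print("IRR :", round(irr(flows), 4))
print("DPP :", discounted_payback(rate, flows), "years")
```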
33361,gn,"The article discusses the basic concepts of strategic planning in the Russian
Federation, highlights the legal, financial and resource features that act as
restrictions in decision making in the field of socio-economic development of
municipalities. The analysis concluded that designing an adequate model of
socio-economic development for municipalities is a very difficult task,
particularly when traditional approaches are applied. To solve this task, we
proposed using semantic modeling as well as cognitive maps, which are able
to point out the set of dependencies that arise between factors having a direct
impact on socio-economic development.",Knowledge Management in Socio-Economic Development of Municipal Units: Basic Concepts,2023-12-12 17:44:16,"Maria Shishanina, Anatoly Sidorov","http://arxiv.org/abs/2312.07328v1, http://arxiv.org/pdf/2312.07328v1",econ.GN
33362,gn,"Research on productive structures has shown that economic complexity
conditions economic growth. However, little is known about which type of
complexity, e.g., export or industrial complexity, matters more for regional
economic growth in a large emerging country like Brazil. Brazil exports natural
resources and agricultural goods, but a large share of the employment derives
from services, non-tradables, and within-country manufacturing trade. Here, we
use a large dataset on Brazil's formal labor market, including approximately
100 million workers and 581 industries, to reveal the patterns of export
complexity, industrial complexity, and economic growth of 558 micro-regions
between 2003 and 2019. Our results show that export complexity is more evenly
spread than industrial complexity. Only a few -- mainly developed urban places
-- have comparative advantages in sophisticated services. Regressions show that
a region's industrial complexity is a significant predictor for 3-year growth
prospects, but export complexity is not. Moreover, economic complexity in
neighboring regions is significantly associated with economic growth. The
results show export complexity does not appropriately depict Brazil's knowledge
base and growth opportunities. Instead, promoting the sophistication of the
heterogeneous regional industrial structures and development spillovers is a
key to growth.","Export complexity, industrial complexity and regional economic growth in Brazil",2023-12-12 20:50:28,"Ben-Hur Francisco Cardoso, Eva Yamila da Silva Catela, Guilherme Viegas, Flávio L. Pinheiro, Dominik Hartmann","http://arxiv.org/abs/2312.07469v1, http://arxiv.org/pdf/2312.07469v1",econ.GN
33363,gn,"The rise of new modes of interaction with AI skyrocketed the popularity,
applicability, and amount of use cases. Despite this evolution, conceptual
integration is falling behind. Studies suggest that there is hardly a
systematization in using AI in organizations. Thus, by taking a
service-dominant logic perspective, specifically, the concept of resource
integration patterns, the most potent application of AI for organizational use
- namely information retrieval - is analyzed. In doing so, we propose a
systematization that can be applied to deepen understanding of core technical
concepts, further investigate AI in contexts, and help explore research
directions guided by SDL.",New Kids on the Block: On the impact of information retrieval on contextual resource integration patterns,2023-12-13 06:38:09,"Martin Semmann, Mahei Manhei Li","http://arxiv.org/abs/2312.07878v1, http://arxiv.org/pdf/2312.07878v1",econ.GN
33364,gn,"Open online crowd-prediction platforms are increasingly used to forecast
trends and complex events. Despite the large body of research on
crowd-prediction and forecasting tournaments, online crowd-prediction platforms
have never been directly compared to other forecasting methods. In this
analysis, exchange rate crowd-predictions made on Metaculus are compared to
predictions made by the random-walk, a statistical model considered extremely
hard-to-beat. The random-walk provides less erroneous forecasts, but the
crowd-prediction does very well. By using the random-walk as a benchmark, this
analysis provides a rare glimpse into the forecasting skill displayed on open
online crowd-prediction platforms.",Forecasting skill of a crowd-prediction platform: A comparison of exchange rate forecasts,2023-12-14 19:15:16,Niklas Valentin Lehmann,"http://arxiv.org/abs/2312.09081v1, http://arxiv.org/pdf/2312.09081v1",econ.GN
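A minimal sketch of the benchmark comparison described in the abstract above: a no-change (random-walk) forecast of an exchange rate evaluated against an external forecast source (e.g., crowd predictions) by root mean squared error, on made-up numbers rather than the Metaculus data.

```python
# Compare a naive random-walk ("no change") forecast against another forecast
# source using RMSE. All series below are hypothetical.
import numpy as np

def rmse(forecasts, actuals):
    forecasts, actuals = np.asarray(forecasts), np.asarray(actuals)
    return float(np.sqrt(np.mean((forecasts - actuals) ** 2)))

actual = np.array([1.10, 1.12, 1.09, 1.11, 1.15, 1.13])   # realized exchange rates
target = actual[1:]                                        # values to be forecast
rw_forecast = actual[:-1]                                  # forecast for t = value at t-1
crowd = np.array([1.11, 1.10, 1.12, 1.13, 1.14])           # hypothetical crowd forecasts

print("random-walk RMSE:", round(rmse(rw_forecast, target), 4))
print("crowd RMSE      :", round(rmse(crowd, target), 4))
```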
33365,gn,"Borda Count Method is an important theory in the field of voting theory. The
basic idea and implementation methodology behind the approach are simple and
straightforward. Borda Count Method has been used in sports award evaluations
and many other scenarios, and therefore is an important aspect of our society.
An often ignored ground truth is that online cultural rating platforms such as
Douban.com and Goodreads.com often adopt integer rating values for a large-scale
public audience, therefore leading to Poisson/Pareto behavior. In this
paper, we rely on the theory developed by Wang from 2021 to 2023 to demonstrate
that online cultural rating platform rating data often evolve into
Poisson/Pareto behavior, and individualistic voting preferences are predictable
without any data input, so the Borda Count Method (or Range Voting Method) has an
intrinsic fallacy and should not be used as a voting theory method.",The Fallacy of Borda Count Method -- Why it is Useless with Group Intelligence and Shouldn't be Used with Big Data including Banking Customer Services,2023-12-16 13:08:03,Hao Wang,"http://dx.doi.org/10.1051/shsconf/202317904008, http://arxiv.org/abs/2312.10405v1, http://arxiv.org/pdf/2312.10405v1",econ.GN
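A minimal sketch of the Borda count aggregation the abstract above criticizes, applied to hypothetical ranked ballots; it illustrates the mechanism only, not the paper's argument about Poisson/Pareto rating behavior.

```python
# Borda count over ranked ballots: each ballot ranks the candidates, and a
# candidate receives (n_candidates - position - 1) points per ballot.
from collections import defaultdict

def borda(ballots):
    """ballots: list of rankings, each a list of candidate names, best first."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - position - 1
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Hypothetical ballots, e.g., viewers ranking three films.
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "C", "A"],
    ["C", "B", "A"],
]
print(borda(ballots))  # {'A': 4, 'B': 4, 'C': 4}: a perfect tie on these ballots
```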
33366,gn,"Carbon abatement decisions are usually based on the implausible assumption of
constant social preference. This paper focuses on a specific case of market and
non-market goods, and investigates the optimal climate policy when social
preference for them is also changed by climate policy in the DICE model. The
relative price of non-market goods grows over time due to increases in both
relative scarcity and appreciation of it. Therefore, the climbing relative price
pushes up the social cost of carbon denominated in terms of market goods.
Because the abatement decision affects the valuation of non-market goods in the
utility function, unlike previous climate-economy models, we solve the model
iteratively by taking the obtained abatement rates from the last run as inputs
in the current run. The results in baseline calibration advocate a more
stringent climate policy, where endogenous social preference to climate policy
raises the social cost of carbon further by roughly 12%-18% this century.
Moreover, neglecting changing social preference leads to an underestimate of
non-market goods damages by 15%. Our results suggest that climate policy is
self-reinforcing if it favors the more expensive consumption type.",Endogenous preference for non-market goods in carbon abatement decision,2023-12-18 11:15:32,"Fangzhi Wang, Hua Liao, Richard S. J. Tol, Changjing Ji","http://arxiv.org/abs/2312.11010v1, http://arxiv.org/pdf/2312.11010v1",econ.GN
33367,gn,"In October 2011, Denmark introduced the world's first and, to date, only tax
targeting saturated fat. However, this tax was subsequently abolished in
January 2013. Leveraging exogenous variation from untaxed Northern-German
consumers, we employ a difference-in-differences approach to estimate the
causal effects of both the implementation and repeal of the tax on consumption
and expenditure behavior across eight product categories targeted by the tax.
Our findings reveal significant heterogeneity in the tax's impact across these
products. During the taxed period, there was a notable decline in consumption
of bacon, liver sausage, salami, and cheese, particularly among low-income
households. In contrast, expenditure on butter, cream, margarine, and sour
cream increased as prices rose. Interestingly, we do not observe any difference
in expenditure increases between high and low-income households, suggesting
that the latter were disproportionately affected by the tax. After the repeal
of the tax, we do not observe any significant decline in consumption. On the
contrary, there was an overall increase in consumption for certain products,
prompting concerns about unintended consequences resulting from the brief
implementation of the tax.","Nudging Nutrition: Lessons from the Danish ""Fat Tax""",2023-11-29 13:49:58,"Christian Møller Dahl, Nadja van 't Hoff, Giovanni Mellace, Sinne Smed","http://arxiv.org/abs/2312.11481v1, http://arxiv.org/pdf/2312.11481v1",econ.GN
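A minimal sketch of the two-group, two-period difference-in-differences regression underlying a design like the one described in the abstract above, run on simulated household expenditure data; the Danish and North-German consumer panels are obviously not reproduced here.

```python
# Two-group, two-period difference-in-differences on simulated expenditure data.
# 'treated' marks taxed households, 'post' the taxed period; the DiD effect is
# the coefficient on treated:post. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
true_effect = 0.15                                  # effect built into the simulation
y = (2.0 + 0.3 * treated + 0.1 * post
     + true_effect * treated * post + rng.normal(0, 0.5, n))
df = pd.DataFrame({"expenditure": y, "treated": treated, "post": post})

model = smf.ols("expenditure ~ treated * post", data=df).fit(cov_type="HC1")
print("DiD estimate:", round(model.params["treated:post"], 3),
      "SE:", round(model.bse["treated:post"], 3))
```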
33368,gn,"This paper examines the effects of daily exercise time on the academic
performance of junior high school students in China, with an attempt to figure
out the most appropriate daily exercise time for students from the perspective
of improving students' test scores. By dividing the daily exercise time into
five sections to construct a categorical variable in a linear regression model
as well as using another model to draw intuitive figures, I find that spending
both too little and too much time on physical activity every day would have
adverse impacts on students' academic performance, with differences existing in
the impacts by gender, grade, city scale, and location type of the school. The
findings of this paper carry implications for research, school health and
education policy and physical and general education practice. The results also
provide recommendations for students, parents and teachers.",Effects of Daily Exercise Time on the Academic Performance of Students: An Empirical Analysis Based on CEPS Data,2023-11-30 09:55:17,Ningyi Li,"http://arxiv.org/abs/2312.11484v1, http://arxiv.org/pdf/2312.11484v1",econ.GN
33369,gn,"This paper explores the role of energy-saving technologies and energy
efficiency in the post-COVID era. The pandemic meant major rethinking of the
entrenched patterns in energy saving and efficiency. It also provided
opportunities for reevaluating energy consumption for households and
industries. In addition, it highlighted the importance of employing digital
tools and technologies in energy networks and smart grids (e.g. Internet of
Energy (IoE), peer-to-peer (P2P) prosumer networks, or the AI-powered
autonomous power systems (APS)). In addition, the pandemic added novel legal
aspects to the energy efficiency and energy saving and enhanced international
collaborations and partnerships. The paper highlights the importance of energy
efficiency measures and examines various technologies that can contribute to a
sustainable and resilient energy future. Using the bibliometric network
analysis of 12960 publications indexed in Web of Science databases, it
demonstrates the potential benefits and challenges associated with implementing
energy-saving technologies and autonomic power systems in a post-COVID world.
Our findings emphasize the need for robust policies, technological
advancements, and public engagement to foster energy efficiency and mitigate
the environmental impacts of energy consumption.",Energy-saving technologies and energy efficiency in the post-pandemic world,2023-12-19 00:15:07,"Wadim Strielkowski, Larisa Gorina, Elena Korneeva, Olga Kovaleva","http://arxiv.org/abs/2312.11711v2, http://arxiv.org/pdf/2312.11711v2",econ.GN
33370,gn,"The least-cost diet problem introduces students to optimization and linear
programming, using the health consequences of food choice. We provide a
graphical example, Excel workbook and Word template using actual data on item
prices, food composition and nutrient requirements for a brief exercise in
which students guess at and then solve for nutrient adequacy at lowest cost,
before comparing modeled diets to actual consumption which has varying degrees
of nutrient adequacy. The graphical example is a 'three sisters' diet of corn,
beans and squash, and the full multidimensional model is compared to current
food consumption in Ethiopia. This updated Stigler diet shows how cost
minimization relates to utility maximization, and links to ongoing research and
policy debates about the affordability of healthy diets worldwide.","Least-cost diets to teach optimization and consumer behavior, with applications to health equity, poverty measurement and international development",2023-12-19 03:56:44,"Jessica K. Wallingford, William A. Masters","http://arxiv.org/abs/2312.11767v1, http://arxiv.org/pdf/2312.11767v1",econ.GN
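A minimal sketch of the least-cost diet exercise described in the abstract above as a linear program (minimize cost subject to nutrient lower bounds), with made-up prices, nutrient contents and requirements rather than the exercise's actual data.

```python
# Stigler-style least-cost diet: minimize price'q subject to N'q >= requirements,
# q >= 0. Prices, nutrient contents and requirements below are made up.
import numpy as np
from scipy.optimize import linprog

foods = ["corn", "beans", "squash"]
price = np.array([0.30, 0.80, 0.50])           # cost per 100 g

# Rows: foods; columns: nutrients (energy kcal, protein g, vitamin A mcg) per 100 g.
nutrients = np.array([
    [365.0,  9.4,   0.0],   # corn
    [347.0, 21.0,   0.0],   # beans
    [ 45.0,  1.0, 430.0],   # squash
])
requirement = np.array([2000.0, 50.0, 700.0])  # daily requirements

# linprog solves min c'x s.t. A_ub x <= b_ub, so nutrient minimums are negated.
res = linprog(c=price,
              A_ub=-nutrients.T, b_ub=-requirement,
              bounds=[(0, None)] * len(foods), method="highs")

for name, qty in zip(foods, res.x):
    print(f"{name:7s}: {qty:6.2f} x 100 g")
print("daily cost:", round(res.fun, 2))
```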
33371,gn,"This article presents an analysis of China's economic evolution amidst
demographic changes from 1990 to 2050, offering valuable insights for academia
and policymakers. It uniquely intertwines various economic theories with
empirical data, examining the impact of an aging population, urbanization, and
family dynamics on labor, demand, and productivity. The study's novelty lies in
its integration of Classical, Neoclassical, and Endogenous Growth theories,
alongside models like Barro and Sala-i-Martin, to contextualize China's
economic trajectory. It provides a forward-looking perspective, utilizing
econometric methods to predict future trends, and suggests practical policy
implications. This comprehensive approach sheds light on managing demographic
transitions in a global context, making it a significant contribution to the
field of demographic economics.",Managing Demographic Transitions: A Comprehensive Analysis of China's Path to Economic Sustainability,2023-12-19 05:40:37,Yuxin Hu,"http://arxiv.org/abs/2312.11806v1, http://arxiv.org/pdf/2312.11806v1",econ.GN
33373,gn,"Purpose: The aim of this study was to explore the feasibility of natural
capital accounting for the purpose of strengthening sustainability claims by
reporting entities. The study linked riparian land improvement to ecosystem
services and tested options for incorporating natural capital into financial
accounting practices, specifically on the balance sheet. Methodology: To test
the approach, the study used a public asset manager (a water utility) with
accountabilities to protect the environment including maintaining and enhancing
riparian land assets. Research activities included stakeholder engagement,
physical asset measurement, monetary valuation and financial recognition of
natural capital income and assets. Natural capital income was estimated by
modelling and valuing ecosystem services relating to stormwater filtration and
carbon storage. Findings: This research described how a water utility could
disclose changes in the natural capital assets they manage either through
voluntary disclosures, in notes to the financial statements or as balance sheet
items. We found that current accounting standards allowed the recognition of
some types of environmental income and assets where ecosystem services were
associated with cost savings. The proof-of-concept employed to estimate
environmental income through ecosystem service modelling proved useful to
strengthen sustainability claims or report financial returns on natural capital
investment. Originality/value: This study applied financial accounting
processes and principles to a realistic public asset management scenario with
direct participation by the asset manager working together with academic
researchers and a sub-national government environment management agency.
Importantly it established that natural assets could be included in financial
statements, proposing a new approach to measuring and reporting on natural
capital.",Recognising natural capital on the balance sheet: options for water utilities,2023-12-21 04:34:09,"Marie-Chantale Pelletier, Claire Horner, Mathew Vickers, Aliya Gul, Eren Turak, Christine Turner","http://arxiv.org/abs/2312.13515v1, http://arxiv.org/pdf/2312.13515v1",econ.GN
33374,gn,"This paper studies the effect of antitrust enforcement on venture capital
(VC) investments and VC-backed companies. To establish causality, I exploit the
DOJ's decision to close several antitrust field offices in 2013, which reduced
the antitrust enforcement in areas near the closed offices. I find that the
reduction in antitrust enforcement causes a significant decrease in VC
investments in startups located in the affected areas. Furthermore, these
affected VC-backed startups exhibit a reduced likelihood of successful exits
and diminished innovation performance. These negative results are mainly driven
by startups in concentrated industries, where incumbents tend to engage in
anticompetitive behaviors more frequently. To mitigate the adverse effect,
startups should innovate more to differentiate their products. This paper sheds
light on the importance of local antitrust enforcement in fostering competition
and innovation.",The Effect of Antitrust Enforcement on Venture Capital Investments,2023-12-21 07:19:00,Wentian Zhang,"http://arxiv.org/abs/2312.13564v1, http://arxiv.org/pdf/2312.13564v1",econ.GN
33375,gn,"In Japanese primary and secondary schools, an alphabetical name list is used
in various situations. Generally, students are called on by the teacher during
class and in the cer-emony if their family name is early on the list.
Therefore, students whose surname ini-tials are earlier in the Japanese
alphabet acquire more experience. Surname advantages are considered to have a
long-term positive effect on life in adulthood. This study ex-amines the
surname effect. The data set is constructed by gathering lists of
representative figures from various fields. Based on the list, we calculate the
proportion of surname groups according to Japanese alphabetical column lists.
The major findings are as follows: (1) people whose surnames are in the
A-column (the first column among 10 Japanese name col-umns) are 20% more likely
to appear among the ruling elite but are less likely to ap-pear in
entertainment and sports lists. (2) This tendency is rarely observed in the
Uni-versity of Tokyo entrance examination pass rate. Consequently, the A-column
sur-names are advantageous in helping students succeed as part of the elite
after graduating from universities but not when gaining entry into
universities. The surname helps form non-cognitive skills that help students
become part of the ruling elite instead of specif-ic cognitive skills that help
students enter elite universities. Keywords: Surname, Education, Name order,
Hidden curriculum, Cognitive skill, Non-cognitive skill, Elite, Vice Minister,
Academic, Prestigious university, Enter-tainment, Sports.",Name order and the top elite: Long-term effects of a hidden cur-riculum,2023-12-21 21:42:55,Eiji Yamamura,"http://arxiv.org/abs/2312.14120v1, http://arxiv.org/pdf/2312.14120v1",econ.GN
33376,gn,"Scientific and technological progress has historically been very beneficial
to humanity but this does not always need to be true. Going forward, science
may enable bad actors to cause genetically engineered pandemics that are more
frequent and deadly than prior pandemics. I develop a quantitative economic
model to assess the social returns to science, taking into account benefits to
health and income, and forecast damages from new biological capabilities
enabled by science. I set parameters for this model based on historical trends
and forecasts from a large forecasting tournament of domain experts and
superforecasters, which included forecasts about genetically engineered
pandemic events. The results depend on the forecast likelihood that new
scientific capabilities might lead to the end of our advanced civilization -
there is substantial disagreement about this probability from participants in
the forecasting tournament I use. If I set aside this remote possibility, I
find the expected future social returns to science are strongly positive.
Otherwise, the desirability of accelerating science depends on the value placed
on the long-run future, in addition to which set of (quite different) forecasts
of extinction risk are preferred. I also explore the sensitivity of these
conclusions to a range of alternative assumptions.",The Returns to Science in the Presence of Technological Risk,2023-12-21 23:45:20,Matt Clancy,"http://arxiv.org/abs/2312.14289v1, http://arxiv.org/pdf/2312.14289v1",econ.GN
33377,gn,"A retiree's appetite for risk is a common input into the lifetime utility
models that are traditionally used to find optimal strategies for the
decumulation of retirement savings.
  In this work, we consider a retiree with potentially differing appetites for
the key financial risks of decumulation. We set out to determine whether these
differing risk appetites have a significant impact on the retiree's optimal
choice of strategy. To do so, we design and implement a framework which selects
the optimal decumulation strategy from a general set of admissible strategies
in line with a retiree's goals, and under differing appetites for the key risks
of decumulation.
  Overall, we find significant evidence to suggest that a retiree's differing
appetites for different decumulation risks will impact their optimal choice of
strategy at retirement. Through an illustrative example calibrated to the
Australian context, we find results which are consistent with actual behaviours
in this jurisdiction (in particular, a shallow market for annuities), which
lends support to our framework and may provide some new insight into the
so-called annuity puzzle.",Optimal Strategies for the Decumulation of Retirement Savings under Differing Appetites for Liquidity and Investment Risks,2023-12-22 04:06:47,"Benjamin Avanzi, Lewis de Felice","http://arxiv.org/abs/2312.14355v1, http://arxiv.org/pdf/2312.14355v1",econ.GN
33378,gn,"Using a systematic review and meta-analysis, this study investigates the
impact of the COVID-19 pandemic on job burnout among nurses. We review
healthcare articles following the PRISMA 2020 guidelines and identify the main
aspects and factors of burnout among nurses during the pandemic. Using the
Maslach Burnout questionnaire, we searched PubMed, ScienceDirect, and Google
Scholar, three open-access databases, for relevant sources measuring emotional
burnout, personal failure, and nurse depersonalization. Two reviewers extract
and screen data from the sources and evaluate the risk of bias. The analysis
reveals that 2.75% of nurses experienced job burnout during the pandemic
(95% confidence interval: 1.87% to 7.75%). These findings
emphasize the need for interventions to address the pandemic's effect on job
burnout among nurses and enhance their well-being and healthcare quality. We
recommend considering individual, organizational, and contextual factors
influencing healthcare workers' burnout. Future research should focus on
identifying effective interventions to lower burnout in nurses and other
healthcare professionals during pandemics and high-stress situations.",In the Line of Fire: A Systematic Review and Meta-Analysis of Job Burnout Among Nurses,2023-12-22 20:25:55,"Zahra Ghasemi Kooktapeh, Hakimeh Dustmohammadloo, Hooman Mehrdoost, Farivar Fatehi","http://arxiv.org/abs/2312.14853v1, http://arxiv.org/pdf/2312.14853v1",econ.GN
33379,gn,"Prevalence of open defecation is associated with adverse health effects,
detrimental not only for the individual but also the community. Therefore,
neighborhood characteristics can influence collective progressive behavior such
as improved sanitation practices. This paper uses primary data collected from
rural and urban areas of Bihar to study the relationship between jati
(sub-castes) level fractionalization within the community and toilet ownership
and its usage for defecation. The findings indicate a diversity dividend
wherein jati fractionalization is found to improve toilet ownership and usage
significantly. While exploring the channels, we find social expectations to
play an important role, where individuals from diverse communities tend to
believe that there is a higher prevalence of toilet usage within the community.
To assess the reasons for the existence of these social expectations, we use
data from an egocentric network survey on a sub-sample of the households. The
findings reveal that in fractionalized communities, the neighbors with whom our
respondents interacted are more likely to be from different jatis. They are
also more likely to use toilets and approve of its usage due to health reasons.
Discussions about toilets are more common among neighbors from fractionalized
communities, which underscores the discernible role of social learning. The
inferences drawn from the paper have significant implications for community
level behavioral change interventions that aim at reducing open defecation.","Learning from diversity: jati fractionalization, social expectations and improved sanitation practices in India",2023-12-23 14:04:02,"Sania Ashraf, Cristina Bicchieri, Upasak Das, Tanu Gupta, Alex Shpenev","http://arxiv.org/abs/2312.15221v1, http://arxiv.org/pdf/2312.15221v1",econ.GN
33380,gn,"The present study aimed to forecast the exports of a select group of
Organization for Economic Co-operation and Development (OECD) countries and
Iran using neural networks. The data concerning the exports of the above
countries from 1970 to 2019 were collected. The collected data were then used
to forecast the exports of the investigated countries for 2021 to 2025. The
analysis was performed using the Multi-Layer-Perceptron (MLP) neural network in
Python. Out of the total number, 75 percent were used as training data, and 25
percent were used as the test data. The findings of the study were evaluated
with 99% accuracy, which indicated the reliability of the output of the
network. The results show that COVID-19 has affected exports over time.
However, long-term export contracts are less affected by tensions and crises;
given the effect of exports on economic growth and per capita income, it is
better for countries' economic policies to rely on long-term export contracts.",Forecasting exports in selected OECD countries and Iran using MLP Artificial Neural Network,2023-12-24 21:38:51,"Soheila Khajoui, Saeid Dehyadegari, Sayyed Abdolmajid Jalaee","http://arxiv.org/abs/2312.15535v1, http://arxiv.org/pdf/2312.15535v1",econ.GN
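A minimal sketch of the kind of MLP regression the abstract above describes, using scikit-learn rather than the authors' exact Python implementation, on a synthetic annual export series; the 75/25 train/test split mirrors the abstract's setup.

```python
# Fit a multi-layer perceptron to a synthetic annual export series using lagged
# values as features, with a 75/25 train/test split. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
years = np.arange(1970, 2020)
exports = (50 + 2.0 * (years - 1970)
           + 10 * np.sin((years - 1970) / 5) + rng.normal(0, 3, years.size))

# Build lag features: predict exports[t] from the previous three years.
lags = 3
X = np.column_stack([exports[i:-(lags - i)] for i in range(lags)])
y = exports[lags:]

split = int(0.75 * len(y))
scaler = StandardScaler().fit(X[:split])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(scaler.transform(X[:split]), y[:split])

pred = model.predict(scaler.transform(X[split:]))
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print("test RMSE:", round(rmse, 2))
```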
33381,gn,"We build a dynamic multi-region model of climate and economy with emission
permit trading among 12 aggregated regions in the world. We solve for the
dynamic Nash equilibrium under noncooperation, wherein each region adheres to
the emission cap constraints following commitments outlined in the 2015 Paris
Agreement. Our model shows that the emission permit price reaches $749 per ton
of carbon by 2050. We demonstrate that a regional carbon tax is complementary
to the global cap-and-trade system, and the optimal regional carbon tax is
equal to the difference between the regional marginal abatement cost and the
permit price.",Dynamics of Global Emission Permit Prices and Regional Social Cost of Carbon under Noncooperation,2023-12-25 02:10:30,"Yongyang Cai, Khyati Malik, Hyeseon Shin","http://arxiv.org/abs/2312.15563v1, http://arxiv.org/pdf/2312.15563v1",econ.GN
33382,gn,"The environmental conservation issue has led consumers to rethink the
products they purchase. Nowadays, many consumers are willing to pay more for
products that genuinely adhere to environmental standards to support the
environment. Consequently, concepts like green marketing have gradually
infiltrated marketing literature, making environmental considerations one of
the most important activities for companies. Accordingly, this research
investigates the impacts of green marketing strategy on perceived brand quality
(case study: food exporting companies). The study population comprises 345
employees and managers from companies such as Kalleh, Solico, Pemina, Sorbon,
Mac, Pol, and Casel. Using Cochran's formula, a sample of 182 individuals was
randomly selected. This research is applied in nature; the required data were collected
through surveys and questionnaires. The findings indicate that (1) green
marketing strategy has a significant positive effect on perceived brand
quality, (2) green products have a significant positive effect on perceived
brand quality, (3) green promotion has a significant positive effect on
perceived brand quality, (4) green distribution has a significant positive
effect on perceived brand quality, and (5) green pricing has a significant
positive effect on perceived brand quality.",Mental Perception of Quality: Green Marketing as a Catalyst for Brand Quality Enhancement,2023-12-26 06:10:43,"Saleh Ghobbe, Mahdi Nohekhan","http://arxiv.org/abs/2312.15865v1, http://arxiv.org/pdf/2312.15865v1",econ.GN
33446,gn,"This is the first paper that estimates the price determinants of BitCoin in a
Generalised Autoregressive Conditional Heteroscedasticity framework using high
frequency data. Derived from a theoretical model, we estimate BitCoin
transaction demand and speculative demand equations in a GARCH framework using
hourly data for the period 2013-2018. In line with the theoretical model, our
empirical results confirm that both the BitCoin transaction demand and
speculative demand have a statistically significant impact on the BitCoin price
formation. The BitCoin price responds negatively to the BitCoin velocity,
whereas positive shocks to the BitCoin stock, interest rate and the size of the
BitCoin economy exercise an upward pressure on the BitCoin price.",The Price of BitCoin: GARCH Evidence from High Frequency Data,2018-12-22 08:25:15,"Pavel Ciaian, d'Artis Kancs, Miroslava Rajcaniova","http://arxiv.org/abs/1812.09452v1, http://arxiv.org/pdf/1812.09452v1",q-fin.ST
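A minimal sketch of a GARCH(1,1) estimation in the spirit of the abstract above, using the `arch` package on simulated hourly returns; the paper's mean equation additionally includes transaction- and speculative-demand regressors, which this constant-mean sketch omits.

```python
# GARCH(1,1) on simulated hourly returns using the `arch` package. The data
# are hypothetical; the authors' demand-side regressors are not included here.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
returns = 0.5 * rng.standard_t(df=5, size=5000)   # hypothetical hourly returns, in percent

model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.params)   # mu, omega, alpha[1], beta[1]
```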
33383,gn,"This paper estimates how electricity price divergence within Sweden has
affected incentives to invest in photovoltaic (PV) generation between 2016 and
2022 based on a synthetic control approach. Sweden is chosen as the research
subject since it is, together with Italy, the only EU country with multiple
bidding zones and has been facing a dramatic divergence in electricity prices between
low-tariff bidding zones in Northern and high-tariff bidding zones in Southern
Sweden since 2020. The results indicate that PV uptake in municipalities
located north of the bidding zone border is reduced by 40.9-48% compared to
their Southern counterparts. Based on these results, the creation of separate
bidding zones within countries poses a threat to the expansion of PV generation
and other renewables since it disincentivizes investment in areas with low
electricity prices.",Can the creation of separate bidding zones within countries create imbalances in PV uptake? Evidence from Sweden,2023-12-26 21:41:24,Johanna Fink,"http://arxiv.org/abs/2312.16161v1, http://arxiv.org/pdf/2312.16161v1",econ.GN
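A minimal sketch of a synthetic control weighting step of the kind the abstract above relies on: non-negative donor weights summing to one, chosen to match a treated unit's pre-treatment outcome path; all series are simulated, not the Swedish municipal PV data.

```python
# Synthetic control weights: choose non-negative weights over donor units,
# summing to one, that best reproduce the treated unit's pre-treatment path.
# The treated and donor series below are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T_pre, n_donors = 20, 8
donors_pre = rng.normal(10, 2, size=(T_pre, n_donors)).cumsum(axis=0)   # donor paths
true_w = np.array([0.5, 0.3, 0.2] + [0.0] * (n_donors - 3))
treated_pre = donors_pre @ true_w + rng.normal(0, 0.5, T_pre)

def loss(w):
    return np.sum((treated_pre - donors_pre @ w) ** 2)

res = minimize(loss,
               x0=np.full(n_donors, 1.0 / n_donors),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * n_donors,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("estimated weights:", res.x.round(2))
```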
33384,gn,"Open defecation, which is linked with poor health outcomes, lower cognitive
ability and productivity, has been widespread in India. This paper assesses the
impact of a randomized norm-centric intervention implemented in peri-urban
areas of Tamil Nadu in India on raising the value attached to residence in
areas with a lower prevalence of open defecation, measured through Willingness
to Pay (WTP). The intervention aimed to change social expectations about toilet
usage through audio announcements, wall paintings, household visits, and
community meetings. The findings indicate a significant increase in the WTP for
relocating to areas with lower prevalence of open defecation. The results are
consistent when using local average treatment effect estimations wherein the
possibility of spillovers in the control areas is accounted for. They are also
robust to potential bias due to local socio-political events during the study
period and COVID-led attrition. We further observe a significant increase in
toilet ownership and usage. While assessing the mechanism, we find that change
in empirical expectations through the intervention (what one believes about the
prevalence of toilet usage in the community) is one of the primary mediating
channels. Normative expectations (what one believes about community approval of
toilet usage) are found to have limited effect. The findings underscore the
need for norm-centric interventions to propel change in beliefs and achieve
long-term and sustainable sanitation behavior.",Valuing Open Defecation Free Surroundings: Experimental Evidence from a Norm-Based Intervention in India,2023-12-23 14:20:47,"Sania Ashraf, Cristina Bicchieri, Upasak Das, Alex Shpenev","http://arxiv.org/abs/2312.16205v1, http://arxiv.org/pdf/2312.16205v1",econ.GN
33385,gn,"The idea that marketing, in addition to profitability and sales, should also
consider the consumer's health is not, and has never been, a far-fetched concept.
It can be stated that there is no longer a way back to producing
environmentally harmful products, and gradually, governmental pressures,
competition, and changing customer attitudes are obliging companies to adopt
and implement a green marketing approach. Over time, concepts such as green
marketing have penetrated marketing literature, making environmental
considerations one of the most important activities of companies. For this
purpose, this research examines the effects of green marketing strategy on
brand loyalty (case study: food exporting companies). The population of this
study consists of 345 employees and managers of companies like Kalleh, Solico,
Pemina, Sorben, Mac, Pol, and Casel, out of which 182 were randomly selected as
a sample using Cochran's formula. This research is applied in nature; the required data
were collected through a survey and questionnaire. The research results
indicate that (1) green marketing strategy significantly affects brand loyalty.
(2) Green products have a significant positive effect on brand loyalty. (3)
Green promotion has a significant positive effect on brand loyalty. (4) Green
distribution has a significant positive effect on brand loyalty. (5) Green
pricing has a significant positive effect on brand loyalty.",The Green Advantage: Analyzing the Effects of Eco-Friendly Marketing on Consumer Loyalty,2023-12-27 22:30:58,"Erfan Mohammadi, MohammadMahdi Barzegar, Mahdi Nohekhan","http://arxiv.org/abs/2312.16698v1, http://arxiv.org/pdf/2312.16698v1",econ.GN
33386,gn,"Objective: About 12,000 people are diagnosed with lung cancer (LC) each year
in Argentina, and the diagnosis has a significant personal and family impact. The
objective of this study was to characterize the Health-Related Quality of Life
(HRQoL) and the economic impact in patients with LC and in their households.
Methods: Observational cross-sectional study, through validated structured
questionnaires to patients with a diagnosis of Non-Small Cell Lung Cancer
(NSCLC) in two referral public health care centers in Argentina. Questionnaires
used: Health-Related Quality of life (EuroQol EQ-5D-3L questionnaire);
financial toxicity (COST questionnaire); productivity loss (WPAI-GH, Work
Productivity and Activity Impairment Questionnaire: General Health), and
out-of-pocket expenses. Results: We included 101 consecutive patients (mean age
67.5 years; 55.4% men; 57.4% with advanced disease -stage III/IV-). The mean
EQ-VAS was 68.8 (SD:18.3), with 82.2% describing fair or poor health. The most
affected dimensions were anxiety/depression, pain, and activities of daily
living. Patients reported an average 59% decrease in their ability to perform
regular daily activities as measured by WPAI-GH. 54.5% reported a reduction in
income due to the disease, and 19.8% lost their jobs. The annual economic
productivity loss was estimated at USD 2,465 per person. 70.3% of patients
reported financial toxicity. The average out-of-pocket expenditure was USD
100.38 per person per month, which represented 18.5% of household income.
Catastrophic expenditures were present in 37.1% of households. When performing
subgroup analysis by disease severity, all outcomes were worse in the
subpopulation with advanced disease. Conclusions: Patients with NSCLC treated in
public hospitals in Argentina experience a significant health-related quality-of-life
and economic impact, worsening in patients with advanced disease.","Health-related Quality of life, Financial Toxicity, Productivity Loss and Catastrophic Health Expenditures After Lung Cancer Diagnosis in Argentina",2023-12-27 23:17:57,"Lucas Gonzalez, Andrea Alcaraz, Carolina Gabay, Monica Castro, Silvina Vigo, Eduardo Carinci, Federico Augustovski","http://arxiv.org/abs/2312.16710v1, http://arxiv.org/pdf/2312.16710v1",econ.GN
33387,gn,"Evidence on the effectiveness of retraining U.S. unemployed workers primarily
comes from evaluations of training programs, which represent one narrow avenue
for skill acquisition. We use high-quality records from Ohio and a matching
method to estimate the effects of retraining, broadly defined as enrollment in
postsecondary institutions. Our simple method bridges two strands of the
dynamic treatment effect literature that estimate the
treatment-now-versus-later and treatment-versus-no-treatment effects. We find
that enrollees experience earnings gains of six percent three to four years
after enrolling, after depressed earnings during the first two years. The
earnings effects are driven by industry-switchers, particularly to healthcare.",Further Education During Unemployment,2023-12-28 20:00:13,"Pauline Leung, Zhuan Pei","http://arxiv.org/abs/2312.17123v2, http://arxiv.org/pdf/2312.17123v2",econ.GN
33470,gn,"The application of Markov chains to modelling refugee crises is explored,
focusing on local migration of individuals at the level of cities and days. As
an explicit example we apply the Markov chains migration model developed here
to UNHCR data on the Burundi refugee crisis. We compare our method to a
state-of-the-art `agent-based' model of Burundi refugee movements, and
highlight that Markov chain approaches presented here can improve the match to
data while simultaneously being more algorithmically efficient.",Markov Chain Models of Refugee Migration Data,2019-03-19 23:58:02,"Vincent Huang, James Unwin","http://arxiv.org/abs/1903.08255v1, http://arxiv.org/pdf/1903.08255v1",physics.soc-ph
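A minimal sketch of the city-level Markov chain idea described in the abstract above: a row-stochastic daily transition matrix between locations, iterated forward to project how a displaced population distributes over time; the matrix and population below are hypothetical, not the UNHCR Burundi data.

```python
# Daily Markov-chain migration between four locations (e.g., origin region,
# two camps, and a city). P[i, j] is the probability of moving from i to j in
# one day; rows sum to 1. The matrix and population vector are hypothetical.
import numpy as np

P = np.array([
    [0.90, 0.05, 0.03, 0.02],   # origin
    [0.01, 0.93, 0.04, 0.02],   # camp A
    [0.01, 0.03, 0.94, 0.02],   # camp B
    [0.00, 0.01, 0.01, 0.98],   # city
])
assert np.allclose(P.sum(axis=1), 1.0)

population = np.array([100_000.0, 5_000.0, 3_000.0, 2_000.0])

# Project the population distribution forward 30 and 180 days.
for days in (30, 180):
    projected = population @ np.linalg.matrix_power(P, days)
    print(days, "days:", projected.round(0))
```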
33388,gn,"This paper investigates whether foreign investment (FDI) into Africa is at
least partially responsive to World Bank-measured market friendliness.
Specifically, I conducted analyses of four countries between 2009 and 2017,
using cases that represent two of the highest scorers on the bank's Doing
Business index as of 2008 (Mauritius and South Africa) and the two lowest
scorers (DRC and CAR), and subsequently traced all four for growths or declines
in FDI in relation to their scores in the index. The findings show that there
is a moderate association between decreased costs of starting a business and
growth of FDI. Mauritius, South Africa and the DRC reduced their total cost of
starting a business by 71.7%, 143.7% and 122.9% for the entire period, and saw
inward FDI increases of 167.6%, 79.8% and 152.21%, respectively. The CAR
increased the cost of starting businesses but still saw increases in FDI.
However, the country also saw the least amount of growth in FDI at only 13.3%.",Does the World Bank's Ease of Doing Business Index Matter for FDI? Findings from Africa,2023-12-30 16:27:31,Bhaso Ndzendze,"http://arxiv.org/abs/2401.00227v1, http://arxiv.org/pdf/2401.00227v1",econ.GN
33389,gn,"Earlier in my career, prevalent approaches in the emerging field of market
design largely represented the experiences and perspectives of leaders who were
commissioned to design or reform various institutions. Since being commissioned
for a similar task seemed unlikely for me as an aspiring design economist, I
developed my own minimalist approach to market design. Using the policy
objectives of stakeholders, my approach creates a new institution from the
existing one with minimal interference with its elements that compromise the
objectives. Minimalist market design initially evolved through my integrated
research and policy efforts in school choice from 1997 to 2005 and in kidney
exchange from 2003 to 2007. Given its success in school choice and kidney
exchange, I systematically followed this approach in many other, often unusual
real-world settings. In recent years, my efforts in minimalist market design
led to the 2021 reform of the US Army's branching system for its cadets to
military specialties, the adoption of reserve systems during the Covid-19
pandemic for vaccine allocation in 15 states and therapies in 2 states, and the
deployment of a highly efficient liver exchange system in T\""urkiye. This same
methodology also predicted the rescission of a 1995 Supreme Court judgment in
India, resulting in countless litigations and interruptions of public
recruitment for 25 years, as well as the mandates of its replacement. In this
monograph, I describe the philosophy, evolution, and successful applications of
minimalist market design, contrasting it with the mainstream paradigm for the
field. In doing so, I also provide a paradigm for economists who want to
influence policy and change institutions through their research.",Minimalist Market Design: A Framework for Economists with Policy Aspirations,2023-12-30 22:44:53,Tayfun Sönmez,"http://arxiv.org/abs/2401.00307v1, http://arxiv.org/pdf/2401.00307v1",econ.GN
33390,gn,"This study investigates the long-term economic impact of sea-level rise (SLR)
on coastal regions in Europe, focusing on Gross Domestic Product (GDP). Using a
novel dataset covering regional SLR and economic growth from 1900 to 2020, we
quantify the relationships between SLR and regional GDP per capita across 79
coastal EU & UK regions. Our results reveal that the current SLR has already
negatively influenced GDP of coastal regions, leading to a cumulative 4.7% loss
at 39 cm of SLR. Over the 120 year period studied, the actualised impact of SLR
on the annual growth rate is between -0.02% and 0.04%. Extrapolating these
findings to future climate and socio-economic scenarios, we show that in the
absence of additional adaptation measures, GDP losses by 2100 could range
between -6.3% and -20.8% under the most extreme SLR scenario (SSP5-RCP8.5
High-end Ice, or -4.0% to -14.1% in SSP5-RCP8.5 High Ice). This statistical
analysis, utilising a century-long dataset, provides an empirical foundation for
designing region-specific climate adaptation strategies to mitigate economic
damages caused by SLR. Our evidence supports the argument for strategically
relocating assets and establishing coastal setback zones when it is
economically preferable and socially agreeable, given that protection
investments have an economic impact.",Actualised and future changes in regional economic growth through sea level rise,2023-12-31 19:42:45,"Theodoros Chatzivasileiadis, Ignasi Cortes Arbues, Jochen Hinkel, Daniel Lincke, Richard S. J. Tol","http://arxiv.org/abs/2401.00535v1, http://arxiv.org/pdf/2401.00535v1",econ.GN
33391,gn,"The social cost of carbon (SCC) serves as a concise gauge of climate change's
economic impact, often reported at the global and country level. SCC values are
disproportionately high for less-developed, populous countries. Assessing the
contributions of urban and non-urban areas to the SCC can provide additional
insights for climate policy. Cities are essential for defining global
emissions, influencing warming levels and associated damages. High exposure and
concurrent socioenvironmental problems exacerbate climate change risks in
cities. Using a spatially explicit integrated assessment model, the SCC is
estimated at USD$137-USD$579/tCO2, rising to USD$262-USD$1,075/tCO2 when
including urban heat island (UHI) warming. Urban SCC dominates, with both urban
exposure and the UHI contributing significantly. A permanent 1% reduction of
the UHI in urban areas yields net present benefits of USD$484-USD$1,562 per
urban dweller. Global cities have significant leverage and incentives for a
swift transition to a low-carbon economy, and for reducing local warming.",Urban and non-urban contributions to the social cost of carbon,2024-01-01 11:35:11,"Francisco Estrada, Veronica Lupi, Wouter Botzen, Richard S. J. Tol","http://arxiv.org/abs/2401.00919v1, http://arxiv.org/pdf/2401.00919v1",econ.GN
33392,gn,"Most of the world still lacks access to sufficient quantities of all food
groups needed for an active and healthy life. This study traces historical and
projected changes in global food systems toward alignment with the new Healthy
Diet Basket (HDB) used by UN agencies and the World Bank to monitor the cost
and affordability of healthy diets worldwide. We use HDB as a standard to
measure adequacy of national, regional and global supply-demand balances,
finding substantial but inconsistent progress toward closer alignment with
dietary guidelines, with large global shortfalls in fruits, vegetables, and
legumes, nuts, and seeds, and large disparities among regions in use of animal
source foods. Projections show that additional investments in the supply of
agricultural products would modestly accelerate improvements in adequacy where
shortfalls are greatest, revealing the need for complementary investments to
increase purchasing power and demand for under-consumed food groups especially
in low-income countries.","How and where global food supplies fall short of healthy diets: Past trends and future projections, 1961-2020 and 2010-2050",2024-01-02 10:52:05,"Leah Costlow, Anna Herforth, Timothy B. Sulser, Nicola Cenacchi, William A. Masters","http://arxiv.org/abs/2401.01080v1, http://arxiv.org/pdf/2401.01080v1",econ.GN
33393,gn,"Given the move towards industrialization in societies, the increase in
dynamism and competition among companies to capture market share, raising
concerns about the environment, government, and international regulations and
obligations, increased consumer awareness, pressure from nature-loving groups,
etc., organizations have become more attentive to issues related to
environmental management. Over time, concepts such as green marketing have
permeated marketing literature, making environmental considerations one of the
most important activities of companies. To this end, this research examines the
impact of green marketing strategy on brand awareness (case study: food
exporting companies). The population of this research consists of 345 employees
and managers of companies like Kalleh, Solico, Pemina, Sorbon, Mac, Pol, and
Castle, from which 182 individuals were randomly selected as the sample using
Cochran's formula. This is applied research, and the required data have been
collected through a survey and a questionnaire. The research results indicate
that (1) green marketing strategy significantly affects brand awareness. (2)
Green products have a significant positive effect on brand awareness. (3) Green
promotions have a significant positive effect on brand awareness. (4) Green
distribution has a significant positive effect on brand awareness. (5) Green
pricing has a significant positive effect on brand awareness.","Impact of Green Marketing Strategy on Brand Awareness: Business, Management, and Human Resources Aspects",2024-01-04 06:03:29,"Mahdi Nohekhan, Mohammadmahdi Barzegar","http://arxiv.org/abs/2401.02042v1, http://arxiv.org/pdf/2401.02042v1",econ.GN
33394,gn,"This study investigates the impact of female leadership on the financial
constraints of firms, which are publicly listed entrepreneurial enterprises in
China. Utilizing data from 938 companies on the China Growth Enterprise Market
(GEM) over a period of 2013-2022, this paper explores how the female presence
in CEO positions, senior management, and board membership influences a firm's
ability to manage financial constraints. Our analysis employs the
Kaplan-Zingales (KZ) Index to measure these constraints, encompassing some key
financial factors such as cash flow, dividends, and leverage. The findings
reveal that companies with female CEOs or a higher proportion of women in top
management are associated with reduced financial constraints. However, the
influence of female board members is less clear-cut. Our study also delves into
the variances of these effects between high-tech and low-tech industry sectors,
emphasizing how internal gender biases in high-tech industries may impede the
alleviation of financing constraints on firms. This research contributes to a
nuanced understanding of the role of gender dynamics in corporate financial
management, especially in the context of China's evolving economic landscape.
It underscores the importance of promoting female leadership not only for
gender equity but also for enhancing corporate financial resilience.",Female Entrepreneur on Board: Assessing the Effect of Gender on Corporate Financial Constraints,2024-01-04 11:35:43,Ruiying Xiao,"http://arxiv.org/abs/2401.02134v1, http://arxiv.org/pdf/2401.02134v1",econ.GN
33395,gn,"The service quality of a passenger transport operator can be measured through
face-to-face surveys at the terminals or on board. However, the resulting
responses may suffer from the influence of the intrinsic aspects of the
respondent's personality and emotional context at the time of the interview.
This study proposes a methodology to generate and select control variables for
these latent psychosituational traits, thus mitigating the risk of omitted
variable bias. We developed an econometric model of the determinants of
passenger satisfaction in a survey conducted at the largest airport in Latin
America, São Paulo GRU Airport. Our focus was on the role of flight delays in
the perception of quality. The results of this study confirm the existence of a
relationship between flight delays and the global satisfaction of passengers
with airports. In addition, favorable evaluations regarding airports'
food/beverage concessions and Wi-Fi services, but not their retail options,
have a relevant moderating effect on that relationship. Furthermore,
dissatisfaction arising from passengers' interaction with the airline can have
negative spillover effects on their satisfaction with the airport. We also
found evidence of blame-attribution behavior, in which only delays of internal
origin, such as failures in flight management, are significant, indicating that
passengers overlook weather-related flight delays. Finally, the results suggest
that an empirical specification that does not consider the latent
psychosituational traits of passengers produces a relevant overestimation of
the absolute effect of flight delays on passenger satisfaction.",Airport service quality perception and flight delays: examining the influence of psychosituational latent traits of respondents in passenger satisfaction surveys,2024-01-04 11:44:05,"Alessandro V. M. Oliveira, Bruno F. Oliveira, Moises D. Vassallo","http://dx.doi.org/10.1016/j.retrec.2023.101371, http://arxiv.org/abs/2401.02139v1, http://arxiv.org/pdf/2401.02139v1",econ.GN
33396,gn,"It is known that a player in a noncooperative game can benefit by publicly
restricting his possible moves before play begins. We show that, more
generally, a player may benefit by publicly committing to pay an external party
an amount that is contingent on the game's outcome. We explore what happens
when external parties -- who we call ``game miners'' -- discover this fact and
seek to profit from it by entering an outcome-contingent contract with the
players. We analyze various structured bargaining games between miners and
players for determining such an outcome-contingent contract. These bargaining
games include playing the players against one another, as well as allowing the
players to pay the miner(s) for exclusivity and first-mover advantage. We
establish restrictions on the strategic settings in which a game miner can
profit and bounds on the game miner's profit. We also find that game miners can
lead to both efficient and inefficient equilibria.",Game Mining: How to Make Money from those about to Play a Game,2024-01-04 19:52:01,"James W. Bono, David H. Wolpert","http://arxiv.org/abs/2401.02353v1, http://arxiv.org/pdf/2401.02353v1",econ.GN
33397,gn,"During the Great Recession, Democrats in the United States argued that
government spending could be utilized to ""grease the wheels"" of the economy in
order to create wealth and to increase employment; Republicans, on the other
hand, contended that government spending is wasteful and discouraged
investment, thereby increasing unemployment. Today, in 2020, we find ourselves
in the midst of another crisis where government spending and fiscal stimulus are
again being considered as a solution. In the present paper, we address this
question by formulating an optimal control problem generalizing the model of
Radner & Shepp (1996). The model allows for the company to borrow continuously
from the government. We prove that there exists an optimal strategy; rigorous
verification proofs for its optimality are provided. We proceed to prove that
government loans increase the expected net value of a company. We also examine
the consequences of different profit-taking behaviors among firms who receive
fiscal stimulus.",Fiscal stimulus as an optimal control problem,2014-10-22 18:50:20,"Philip A. Ernst, Michael B. Imerman, Larry Shepp, Quan Zhou","http://arxiv.org/abs/1410.6084v3, http://arxiv.org/pdf/1410.6084v3",econ.GN
33398,gn,"Using Gretl, I apply ARMA, Vector ARMA, VAR, state-space model with a Kalman
filter, transfer-function and intervention models, unit root tests,
cointegration test, volatility models (ARCH, GARCH, ARCH-M, GARCH-M,
Taylor-Schwert GARCH, GJR, TARCH, NARCH, APARCH, EGARCH) to analyze quarterly
time series of GDP and Government Consumption Expenditures & Gross Investment
(GCEGI) from 1980 to 2013. The article is organized as: (I) Definition; (II)
Regression Models; (III) Discussion. Additionally, I discovered a unique
interaction between GDP and GCEGI in both the short-run and the long-run and
provided policy makers with some suggestions. For example in the short run, GDP
responded positively and very significantly (0.00248) to GCEGI, while GCEGI
reacted positively but not too significantly (0.08051) to GDP. In the long run,
current GDP responded negatively and permanently (0.09229) to a shock in past
GCEGI, while current GCEGI reacted negatively yet temporarily (0.29821) to a
shock in past GDP. Therefore, policy makers should not adjust current GCEGI
based merely on the condition of current and past GDP. Although increasing
GCEGI does help GDP in the short term, a significantly abrupt increase in GCEGI
might not be good for the long-term health of GDP. Instead, a balanced,
sustainable, and economically viable solution is recommended, so that the
short-term benefits to the current economy from increasing GCEGI, often largely
secured by long-term loans, outweigh or at least equal the negative effect
on the future economy from the long-term debt incurred by the loan. Finally, I
found that non-normally distributed volatility models generally perform better
than normally distributed ones. More specifically, TARCH-GED performs the best
in the group of non-normally distributed, while GARCH-M does the best in the
group of normally distributed.",Comprehensive Time-Series Regression Models Using GRETL -- U.S. GDP and Government Consumption Expenditures & Gross Investment from 1980 to 2013,2014-12-17 16:03:38,Juehui Shi,"http://dx.doi.org/10.2139/ssrn.2540535, http://arxiv.org/abs/1412.5397v3, http://arxiv.org/pdf/1412.5397v3",econ.GN
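The comparison above is carried out in Gretl; as a rough, hypothetical Python analogue (not the paper's scripts, and with a placeholder series standing in for the GDP/GCEGI data), one could compare a normal GARCH(1,1) against an asymmetric GJR/TARCH-style model with GED errors using the `arch` package and information criteria:

```python
# Hypothetical sketch, not the paper's Gretl scripts: compare a normal GARCH(1,1)
# against a GJR/TARCH-style model with GED errors on a placeholder series.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=500)   # placeholder for the quarterly growth data

garch_norm = arch_model(returns, vol="GARCH", p=1, q=1, dist="normal").fit(disp="off")
tarch_ged = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist="ged").fit(disp="off")

# A lower BIC points to the better-fitting volatility specification.
print("GARCH-normal BIC:", garch_norm.bic)
print("GJR/TARCH-GED BIC:", tarch_ged.bic)
```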
33399,gn,"The Machina thought experiments pose to major non-expected utility models
challenges that are similar to those posed by the Ellsberg thought experiments
to subjective expected utility theory (SEUT). We test human choices in the
`Ellsberg three-color example', confirming typical ambiguity aversion patterns,
and the `Machina 50/51 and reflection examples', partially confirming the
preferences hypothesized by Machina. Then, we show that a quantum-theoretic
framework for decision-making under uncertainty recently elaborated by some of
us allows faithful modeling of all data on the Ellsberg and Machina paradox
situations. In the quantum-theoretic framework subjective probabilities are
represented by quantum probabilities, while quantum state transformations
enable representations of ambiguity aversion and subjective attitudes toward
it.",Testing Ambiguity and Machina Preferences Within a Quantum-theoretic Framework for Decision-making,2017-06-06 11:48:14,"Diederik Aerts, Suzette Geriente, Catarina Moreira, Sandro Sozzo","http://arxiv.org/abs/1706.02168v1, http://arxiv.org/pdf/1706.02168v1",q-fin.EC
33400,gn,"We show that the space in which scientific, technological and economic
developments interplay with each other can be mathematically shaped using
pioneering multilayer network and complexity techniques. We build the
tri-layered network of human activities (scientific production, patenting, and
industrial production) and study the interactions among them, also taking into
account the possible time delays. Within this construction we can identify
which capabilities and prerequisites are needed to be competitive in a given
activity, and even measure how much time is needed to transform, for instance,
the technological know-how into economic wealth and scientific innovation,
being able to make predictions with a very long time horizon. Quite
unexpectedly, we find empirical evidence that the naive knowledge flow from
science, to patents, to products is not supported by the data; instead,
technology is the best predictor of industrial and scientific production for the
next decades.","Unfolding the innovation system for the development of countries: co-evolution of Science, Technology and Production",2017-07-17 16:31:31,"Emanuele Pugliese, Giulio Cimini, Aurelio Patelli, Andrea Zaccaria, Luciano Pietronero, Andrea Gabrielli","http://dx.doi.org/10.1038/s41598-019-52767-5, http://arxiv.org/abs/1707.05146v3, http://arxiv.org/pdf/1707.05146v3",econ.GN
33401,gn,"We analyse the autocatalytic structure of technological networks and evaluate
its significance for the dynamics of innovation patenting. To this aim, we
define a directed network of technological fields based on the International
Patents Classification, in which a source node is connected to a receiver node
via a link if patenting activity in the source field anticipates patents in the
receiver field in the same region more frequently than we would expect at
random. We show that the evolution of the technology network is compatible with
the presence of a growing autocatalytic structure, i.e. a portion of the
network in which technological fields mutually benefit from being connected to
one another. We further show that technological fields in the core of the
autocatalytic set display greater fitness, i.e. they tend to appear in a
greater number of patents, thus suggesting the presence of positive spillovers
as well as positive reinforcement. Finally, we observe that core shifts take
place whereby different groups of technology fields alternate within the
autocatalytic structure; this points to the importance of recombinant
innovation taking place between close as well as distant fields of the
hierarchical classification of technological fields.",Technology networks: the autocatalytic origins of innovation,2017-08-11 15:02:02,"Lorenzo Napolitano, Evangelos Evangelou, Emanuele Pugliese, Paolo Zeppini, Graham Room","http://arxiv.org/abs/1708.03511v3, http://arxiv.org/pdf/1708.03511v3",econ.GN
33402,gn,"This paper presents an intertemporal bimodal network to analyze the evolution
of the semantic content of a scientific field within the framework of topic
modeling, namely using the Latent Dirichlet Allocation (LDA). The main
contribution is the conceptualization of the topic dynamics and its
formalization and codification into an algorithm. To benchmark the
effectiveness of this approach, we propose three indexes which track the
transformation of topics over time, their rate of birth and death, and the
novelty of their content. Applying the LDA, we test the algorithm both on a
controlled experiment and on a corpus of several thousands of scientific papers
over a period of more than 100 years, which covers the history of
economic thought.",A Bimodal Network Approach to Model Topic Dynamics,2017-09-27 10:49:03,"Luigi Di Caro, Marco Guerzoni, Massimiliano Nuccio, Giovanni Siragusa","http://arxiv.org/abs/1709.09373v1, http://arxiv.org/pdf/1709.09373v1",cs.CL
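As a minimal illustration of the kind of topic-dynamics bookkeeping described above (my own sketch, not the authors' algorithm), one could fit LDA separately on two time slices over a shared vocabulary and score how strongly early topics persist later via cosine similarity of topic-word distributions; the documents below are placeholders:

```python
# Minimal sketch, assuming sklearn and placeholder documents: fit LDA per time
# slice on a shared vocabulary and track topic persistence across slices.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs_early = ["marginal utility of labour and capital", "gold standard and international trade"]
docs_late = ["rational expectations and monetary policy", "game theory and market equilibrium"]

# Shared vocabulary so topic-word vectors from the two slices are comparable.
vec = CountVectorizer(stop_words="english")
vec.fit(docs_early + docs_late)

def fit_topics(docs, n_topics=2):
    X = vec.transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

topics_early = fit_topics(docs_early)
topics_late = fit_topics(docs_late)

# One possible "topic transformation" index: each early topic's best later match.
sim = cosine_similarity(topics_early, topics_late)
print("mean best-match similarity:", sim.max(axis=1).mean())
```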
33403,gn,"I show that firms price almost competitively and consumers can infer product
quality from prices in markets where firms differ in quality and production
cost, and learning prices is costly. Bankruptcy risk or regulation links higher
quality to lower cost. If high-quality firms have lower cost, then they can
signal quality by cutting prices. Then the low-quality firms must cut prices to
retain customers. This price-cutting race to the bottom ends in a separating
equilibrium in which the low-quality firms charge their competitive price and
the high-quality firms charge slightly less.",Competitive pricing despite search costs if lower price signals quality,2018-06-04 02:52:22,Sander Heinsalu,"http://arxiv.org/abs/1806.00898v1, http://arxiv.org/pdf/1806.00898v1",econ.GN
33404,gn,"Methods for predicting the likely upper economic limit for the wind fleet in
the United Kingdom should be simple to use whilst being able to cope with
evolving technologies, costs and grid management strategies. This paper presents
two such models, both of which use data on historical wind patterns but apply
different approaches to estimating the extent of wind shedding as a function of
the size of the wind fleet. It is clear from the models that as the wind fleet
increases in size, wind shedding will progressively increase, and as a result
the overall economic efficiency of the wind fleet will be reduced. The models
provide almost identical predictions of the efficiency loss and suggest that
the future upper economic limit of the wind fleet will be mainly determined by
the wind fleet Headroom, a concept described in some detail in the paper. The
results, which should have general applicability, are presented in graphical
form, and should obviate the need for further modelling using the primary data.
The paper also discusses the effectiveness of the wind fleet in decarbonising
the grid, and the growing competition between wind and solar fleets as sources
of electrical energy for the United Kingdom.",Two Different Methods for Modelling the Likely Upper Economic Limit of the Future United Kingdom Wind Fleet,2018-06-19 22:29:55,"Anthony D Stephens, David R Walwyn","http://arxiv.org/abs/1806.07436v1, http://arxiv.org/pdf/1806.07436v1",physics.soc-ph
33405,gn,"This study analyzes public debts and deficits between European countries. The
statistical evidence here seems in general to reveal that sovereign debts and
government deficits of countries within the European Monetary Unification are,
on average, getting worse than those of countries outside the European Monetary
Unification, in particular after the introduction of the euro currency. This
socioeconomic issue might be due to Maastricht Treaty, the Stability and Growth
Pact, the new Fiscal Compact, strict Balanced-Budget Rules, etc. In fact, this
economic policy of European Union, in phases of economic recession, may
generate delay and rigidity in the application of prompt counter-cycle (or
acyclical) interventions to stimulate the economy when it is in a downturn
within countries. Some implications of economic policy are discussed.",National debts and government deficits within European Monetary Union: Statistical evidence of economic issues,2018-06-05 11:18:34,Mario Coccia,"http://arxiv.org/abs/1806.07830v1, http://arxiv.org/pdf/1806.07830v1",econ.GN
33406,gn,"I analyze factory worker households in the early 1920s in Osaka to examine
idiosyncratic income shocks and consumption. Using the household-level monthly
panel dataset, I find that while households could not fully cope with
idiosyncratic income shocks at that time, they mitigated fluctuations in
indispensable consumption during economic hardship. In terms of risk-coping
mechanisms, I find suggestive evidence that savings institutions helped
mitigate vulnerabilities and that both using borrowing institutions and
adjusting labor supply served as risk-coping strategies among households with
less savings.",Consumption smoothing in the working-class households of interwar Japan,2018-07-16 12:02:14,Kota Ogasawara,"http://arxiv.org/abs/1807.05737v13, http://arxiv.org/pdf/1807.05737v13",econ.GN
33407,gn,"We show how increasing returns to scale in urban scaling can artificially
emerge, systematically and predictably, without any sorting or positive
externalities. We employ a model where individual productivities are
independent and identically distributed lognormal random variables across all
cities. We use extreme value theory to demonstrate analytically the paradoxical
emergence of increasing returns to scale when the variance of log-productivity
is larger than twice the log-size of the population size of the smallest city
in a cross-sectional regression. Our contributions are to derive an analytical
prediction for the artificial scaling exponent arising from this mechanism and
to develop a simple statistical test to try to tell whether a given estimate is
real or an artifact. Our analytical results are validated by analyzing simulations
and real microdata of wages across municipalities in Colombia. We show how an
artificial scaling exponent emerges in the Colombian data when the sizes of
random samples of workers per municipality are $1\%$ or less of their total
size.",Artificial Increasing Returns to Scale and the Problem of Sampling from Lognormals,2018-07-25 06:16:57,"Andres Gomez-Lievano, Vladislav Vysotsky, Jose Lobo","http://dx.doi.org/10.1177/2399808320942366, http://arxiv.org/abs/1807.09424v3, http://arxiv.org/pdf/1807.09424v3",econ.GN
33408,gn,"Foundations of equilibrium thermodynamics are the equation of state (EoS) and
four postulated laws of thermodynamics. We use equilibrium thermodynamics
paradigms in constructing the EoS for a microeconomic system, namely a market.
We hope this speculation is a first step towards a complete picture of a
thermodynamical paradigm of economics.",Towards equation of state for a market: A thermodynamical paradigm of economics,2018-07-22 22:55:33,Burin Gumjudpai,"http://arxiv.org/abs/1807.09595v2, http://arxiv.org/pdf/1807.09595v2",econ.GN
33409,gn,"We present a new metric estimating fitness of countries and complexity of
products by exploiting a non-linear non-homogeneous map applied to the publicly
available information on the goods exported by a country. The non-homogeneous
terms guarantee both convergence and stability. After a suitable rescaling of
the relevant quantities, the non-homogeneous terms are eventually set to zero
so that this new metric is parameter free. This new map almost reproduces the
results of the original homogeneous metrics already defined in literature and
allows for an approximate analytic solution in case of actual binarized
matrices based on the Revealed Comparative Advantage (RCA) indicator. This
solution is connected with a new quantity describing the neighborhood of nodes
in bipartite graphs, representing in this work the relations between countries
and exported products. Moreover, we define the new indicator of country
net-efficiency quantifying how a country efficiently invests in capabilities
able to generate innovative complex high quality products. Eventually, we
demonstrate analytically the local convergence of the algorithm involved.",A new and stable estimation method of country economic fitness and product complexity,2018-07-23 17:07:29,"Vito D. P. Servedio, Paolo Buttà, Dario Mazzilli, Andrea Tacchella, Luciano Pietronero","http://dx.doi.org/10.3390/e20100783, http://arxiv.org/abs/1807.10276v2, http://arxiv.org/pdf/1807.10276v2",econ.GN
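For context, the homogeneous fitness-complexity map that the abstract's non-homogeneous variant builds on can be iterated on a binary country-product matrix as below; the matrix is a toy example, and the per-step rescaling stands in for the paper's more careful treatment of convergence:

```python
# Sketch of the *standard* (homogeneous) fitness-complexity iteration that the
# abstract's non-homogeneous map generalizes; M is a toy binary country-product
# (RCA) matrix, not real export data.
import numpy as np

M = np.array([[1, 1, 1, 0],     # country 0 exports products 0, 1, 2
              [1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 1]], dtype=float)

F = np.ones(M.shape[0])          # country fitness
Q = np.ones(M.shape[1])          # product complexity

for _ in range(200):
    F_new = M @ Q                           # fitness: sum of complexities exported
    Q_new = 1.0 / (M.T @ (1.0 / F))         # complexity: penalized by low-fitness exporters
    F = F_new / F_new.mean()                # rescale each step to keep the map bounded
    Q = Q_new / Q_new.mean()

print("country fitness:   ", np.round(F, 3))
print("product complexity:", np.round(Q, 3))
```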
33410,gn,"Social and political polarization is a significant source of conflict and
poor governance in many societies. Thus, understanding its causes has become a
priority of scholars across many disciplines. Here we demonstrate that shifts
in socialization strategies analogous to political polarization and identity
politics can arise as a locally-beneficial response to both rising wealth
inequality and economic decline. Adopting a perspective of cultural evolution,
we develop a framework to study the emergence of polarization under shifting
economic environments. In many contexts, interacting with diverse out-groups
confers benefits from innovation and exploration greater than those that arise
from interacting exclusively with a homogeneous in-group. However, when the
economic environment favors risk-aversion, a strategy of seeking low-risk
interactions can be important to maintaining individual solvency. To capture
this dynamic, we assume that in-group interactions have a lower expected
outcome, but a more certain one. Thus in-group interactions are less risky than
out-group interactions. Our model shows that under conditions of economic
decline or increasing wealth inequality, some members of the population benefit
from adopting a risk-averse, in-group favoring strategy. Moreover, we show that
such in-group polarization can spread rapidly to the whole population and
persist even when the conditions that produced it have reversed. Finally we
offer empirical support for the role of income inequality as a driver of
affective polarization in the United States, mirroring findings on a panel of
developed democracies. Our work provides a framework for studying how disparate
forces interplay, via cultural evolution, to shape patterns of identity, and
unifies what are often seen as conflicting explanations for political
polarization: identity threat versus economic anxiety.",Polarization under rising inequality and economic decline,2018-07-28 19:21:02,"Alexander J. Stewart, Nolan McCarty, Joanna J. Bryson","http://dx.doi.org/10.1126/sciadv.abd4201, http://arxiv.org/abs/1807.11477v2, http://arxiv.org/pdf/1807.11477v2",econ.GN
33412,gn,"Today's age of data holds high potential to enhance the way we pursue and
monitor progress in the fields of development and humanitarian action. We study
the relation between data utility and privacy risk in large-scale behavioral
data, focusing on mobile phone metadata as paradigmatic domain. To measure
utility, we survey experts about the value of mobile phone metadata at various
spatial and temporal granularity levels. To measure privacy, we propose a
formal and intuitive measure of reidentification risk, the information ratio,
and compute it at each granularity level. Our
results confirm the existence of a stark tradeoff between data utility and
reidentifiability, where the most valuable datasets are also most prone to
reidentification. When data is specified at ZIP-code and hourly levels, outside
knowledge of only 7% of a person's data suffices for reidentification and
retrieval of the remaining 93%. In contrast, in the least valuable dataset,
specified at municipality and daily levels, reidentification requires on
average outside knowledge of 51%, or 31 data points, of a person's data to
retrieve the remaining 49%. Overall, our findings show that coarsening data
directly erodes its value, and highlight the need for using data-coarsening,
not as stand-alone mechanism, but in combination with data-sharing models that
provide adjustable degrees of accountability and security.",Mapping the Privacy-Utility Tradeoff in Mobile Phone Data for Development,2018-08-01 07:19:50,"Alejandro Noriega-Campero, Alex Rutherford, Oren Lederman, Yves A. de Montjoye, Alex Pentland","http://arxiv.org/abs/1808.00160v1, http://arxiv.org/pdf/1808.00160v1",cs.CY
33413,gn,"We propose three novel gerrymandering algorithms which incorporate the
spatial distribution of voters with the aim of constructing gerrymandered,
equal-population, connected districts. Moreover, we develop lattice models of
voter distributions, based on analogies to electrostatic potentials, in order
to compare different gerrymandering strategies. Due to the probabilistic
population fluctuations inherent to our voter models, Monte Carlo methods can
be applied to the districts constructed via our gerrymandering algorithms.
Through Monte Carlo studies we quantify the effectiveness of each of our
gerrymandering algorithms and we also argue that gerrymandering strategies
which do not include spatial data lead to (legally prohibited) highly
disconnected districts. Of the three algorithms we propose, two are based on
different strategies for packing opposition voters, and the third is a new
approach to algorithmic gerrymandering based on genetic algorithms, which
automatically guarantees that all districts are connected. Furthermore, we use
our lattice voter model to examine the effectiveness of isoperimetric quotient
tests and our results provide further quantitative support for implementing
compactness tests in real-world political redistricting.",Lattice Studies of Gerrymandering Strategies,2018-08-08 18:33:07,"Kyle Gatesman, James Unwin","http://dx.doi.org/10.1017/pan.2020.22, http://arxiv.org/abs/1808.02826v1, http://arxiv.org/pdf/1808.02826v1",physics.soc-ph
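The abstract refers to isoperimetric quotient (compactness) tests; a minimal, generic version of such a test, not the authors' lattice implementation, scores a district by 4πA/P²:

```python
# Minimal illustration of an isoperimetric-quotient compactness test of the kind
# referenced in the abstract (not the authors' lattice implementation).
import math

def isoperimetric_quotient(area: float, perimeter: float) -> float:
    """4*pi*A / P^2: equals 1 for a circle, approaches 0 for highly contorted shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

# A compact, roughly square district vs. a long snaking one of equal area.
print(isoperimetric_quotient(area=100.0, perimeter=40.0))   # ~0.785, passes
print(isoperimetric_quotient(area=100.0, perimeter=202.0))  # ~0.031, would be flagged
```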
33427,gn,"The paper studies pricing of insurance products focusing on the pricing of
annuities under uncertainty. This pricing problem is crucial for financial
decision making and has been studied intensively; however, many open questions still
remain. In particular, there is a so-called ""annuity puzzle"" related to certain
inconsistency of existing financial theory with the empirical observations for
the annuities market. The paper suggests a pricing method based on the risk
minimization such that both producer and customer seek to minimize the mean
square hedging error accepted as a measure of risk. This leads to two different
versions of the pricing problem: the selection of the annuity price given the
rate of regular payments, and the selection of the rate of payments given the
annuity price. It appears that solutions of these two problems are different.
This can contribute to explanation for the ""annuity puzzle"".",On a gap between rational annuitization price for producer and price for customer,2018-09-24 17:16:48,Nikolai Dokuchaev,"http://dx.doi.org/10.1057/s41272-018-00163-5, http://arxiv.org/abs/1809.08960v1, http://arxiv.org/pdf/1809.08960v1",econ.GN
33414,gn,"Crowdfunding is gradually becoming a modern marketing pattern. By noting that
the success of crowdfunding depends on network externalities, our research aims
to utilize them to provide an applicable referral mechanism in a
crowdfunding-based marketing pattern. In the context of network externalities,
measuring the value of leading customers is chosen as the key to coping with
the research problem by considering that leading customers take a critical
stance in forming a referral network. Accordingly, two sequential-move game
models (i.e., basic model and extended model) were established to measure the
value of leading customers, and a skill of matrix transformation was adopted to
solve the model by transforming a complicated multi-sequence game into a simple
simultaneous-move game. Based on the defined value of leading customers, a
network-based referral mechanism was proposed by exploring exactly how many
awards are allocated along the customer sequence to encourage the leading
customers' actions of successful recommendation and by demonstrating two
general rules of awarding the referrals in our model setting. Moreover, the
proposed solution approach helps deepen an understanding of the effect of the
leading position, which is meaningful for designing more numerous referral
approaches.",Network-based Referral Mechanism in a Crowdfunding-based Marketing Pattern,2018-08-09 12:41:12,"Yongli Li, Zhi-Ping Fan, Wei Zhang","http://arxiv.org/abs/1808.03070v1, http://arxiv.org/pdf/1808.03070v1",econ.TH
33415,gn,"Many-body systems can have multiple equilibria. Though the energy of
equilibria might be the same, systems may still resist switching from an
unfavored equilibrium to a favored one. In this paper we investigate the
occurrence of such a phenomenon in economic networks. In times of crisis when
governments intend to stimulate the economy, a relevant question is the proper
size of the stimulus bill. To address this question, we emphasize the role of hysteresis in
economic networks. In times of crises, firms and corporations cut their
productions; now since their level of activity is correlated, metastable
features in the network become prominent. This means that economic networks
resist against the recovery actions. To measure the size of resistance in the
network against recovery, we deploy the XY model. Though theoretically the XY
model has no hysteresis, when it comes to the kinetic behavior in the
deterministic regimes, we observe a dynamic hysteresis. We find that to
overcome the hysteresis of the network, a minimum size of stimulation is needed
for success. Our simulations show that as long as the networks are
Watts-Strogatz, such minimum is independent of the characteristics of the
networks.",Hysteresis of economic networks in an XY model,2018-08-10 06:55:20,"Ali Hosseiny, Mohammadreza Absalan, Mohammad Sherafati, Mauro Gallegati","http://dx.doi.org/10.1016/j.physa.2018.08.064, http://arxiv.org/abs/1808.03404v1, http://arxiv.org/pdf/1808.03404v1",physics.soc-ph
33416,gn,"Price stability has often been cited as a key reason that cryptocurrencies
have not gained widespread adoption as a medium of exchange and continue to
prove incapable of powering the economy of decentralized applications (DApps)
efficiently. Exeum proposes a novel method to provide price stable digital
tokens whose values are pegged to real world assets, serving as a bridge
between the real world and the decentralized economy.
  Pegged tokens issued by Exeum - for example, USDE refers to a stable token
issued by the system whose value is pegged to USD - are backed by virtual
assets in a virtual asset exchange where users can deposit the base token of
the system and take long or short positions. Guaranteeing the stability of the
pegged tokens boils down to the problem of maintaining the peg of the virtual
assets to real world assets, and the main mechanism used by Exeum is
controlling the swap rate of assets. If the swap rate is fully controlled by
the system, arbitrageurs can be incentivized enough to restore a broken peg;
Exeum distributes statistical arbitrage trading software to decentralize this
type of market making activity. The last major component of the system is a
central bank equivalent that determines the long term interest rate of the base
token, pays interest on the deposit by inflating the supply if necessary, and
removes the need for stability fees on pegged tokens, improving their
usability.
  To the best of our knowledge, Exeum is the first to propose a truly
decentralized method for developing a stablecoin that enables 1:1 value
conversion between the base token and pegged assets, completely removing the
mismatch between supply and demand. In this paper, we will also discuss its
applications, such as improving staking based DApp token models, price stable
gas fees, pegging to an index of DApp tokens, and performing cross-chain asset
transfer of legacy crypto assets.",Exeum: A Decentralized Financial Platform for Price-Stable Cryptocurrencies,2018-08-10 13:45:58,"Jaehyung Lee, Minhyung Cho","http://arxiv.org/abs/1808.03482v1, http://arxiv.org/pdf/1808.03482v1",cs.CR
33417,gn,"The binomial system is an electoral system unique in the world. It was used
to elect the senators and deputies of Chile for 27 years, from the return of
democracy in 1990 until 2017. In this paper we study the real voting power of
the different political parties in the Senate of Chile during the whole
binomial period. We not only consider the different legislative periods, but
also any party changes between one period and the next. The real voting power
is measured by considering power indices from cooperative game theory, which
are based on the capability of the political parties to form winning
coalitions. With this approach, we can do an analysis that goes beyond the
simple count of parliamentary seats.",Voting power of political parties in the Senate of Chile during the whole binomial system period: 1990-2017,2018-08-14 22:32:11,"Fabián Riquelme, Pablo González-Cantergiani, Gabriel Godoy","http://arxiv.org/abs/1808.07854v2, http://arxiv.org/pdf/1808.07854v2",econ.GN
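One standard power index from cooperative game theory that could be used for this kind of analysis is the normalized Banzhaf index; the sketch below computes it for a seat-weighted majority game with hypothetical seat counts (not the actual composition of the Chilean Senate):

```python
# Illustrative computation of the normalized Banzhaf power index for a
# seat-weighted majority game; the seat counts below are hypothetical.
from itertools import combinations

seats = {"A": 13, "B": 11, "C": 9, "D": 5}    # hypothetical party seat counts
quota = sum(seats.values()) // 2 + 1           # simple majority

def is_winning(coalition):
    return sum(seats[p] for p in coalition) >= quota

swings = {p: 0 for p in seats}
parties = list(seats)
for r in range(len(parties) + 1):
    for coalition in combinations(parties, r):
        for p in coalition:
            rest = tuple(q for q in coalition if q != p)
            if is_winning(coalition) and not is_winning(rest):
                swings[p] += 1                 # p is critical in this winning coalition

total = sum(swings.values())
print({p: round(swings[p] / total, 3) for p in seats})   # power shares need not match seat shares
```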
33418,gn,"We argue that a negative interest rate policy (NIRP) can be an effect tool
for macroeconomic stabilization. We first discuss how implementing negative
rates on reserves held at a central bank does not pose any theoretical
difficulty, with a reduction in rates operating in exactly the same way when
rates are positive or negative, and show that this is compatible with an
endogenous money point of view. We then propose a simplified stock-flow
consistent macroeconomic model where rates are allowed to become arbitrarily
negative and present simulation evidence for their stabilizing effects. In
practice, the existence of physical cash imposes a lower bound for interest
rates, which in our view is the main reason for the lack of effectiveness of
negative interest rates in the countries that adopted them as part of their
monetary policy. We conclude by discussing alternative ways to overcome this
lower bound, in particular the use of central bank digital currencies.",On the Normality of Negative Interest Rates,2018-08-23 22:13:42,"Matheus R. Grasselli, Alexander Lipton","http://arxiv.org/abs/1808.07909v1, http://arxiv.org/pdf/1808.07909v1",econ.GN
33419,gn,"We relate models based on costs of switching beliefs (e.g. due to
inattention) to hypothesis tests. Specifically, for an inference problem with a
penalty for mistakes and for switching the inferred value, a band of inaction
is optimal. We show this band is equivalent to a confidence interval, and
therefore to a two-sided hypothesis test.",Switching Cost Models as Hypothesis Tests,2018-08-29 11:49:50,"Samuel N. Cohen, Timo Henckel, Gordon D. Menzies, Johannes Muhle-Karbe, Daniel J. Zizzo","http://arxiv.org/abs/1808.09686v1, http://arxiv.org/pdf/1808.09686v1",econ.GN
33421,gn,"The increasing digitization of political speech has opened the door to
studying a new dimension of political behavior using text analysis. This work
investigates the value of word-level statistical data from the US Congressional
Record--which contains the full text of all speeches made in the US
Congress--for studying the ideological positions and behavior of senators.
Applying machine learning techniques, we use this data to automatically
classify senators according to party, obtaining accuracy in the 70-95% range
depending on the specific method used. We also show that using text to predict
DW-NOMINATE scores, a common proxy for ideology, does not improve upon these
already-successful results. This classification deteriorates when applied to
text from sessions of Congress that are four or more years removed from the
training set, pointing to a need on the part of voters to dynamically update
the heuristics they use to evaluate party based on political speech. Text-based
predictions are less accurate than those based on voting behavior, supporting
the theory that roll-call votes represent greater commitment on the part of
politicians and are thus a more accurate reflection of their ideological
preferences. However, the overall success of the machine learning approaches
studied here demonstrates that political speeches are highly predictive of
partisan affiliation. In addition to these findings, this work also introduces
the computational tools and methods relevant to the use of political speech
data.","""Read My Lips"": Using Automatic Text Analysis to Classify Politicians by Party and Ideology",2018-09-04 02:13:00,Eitan Sapiro-Gheiler,"http://arxiv.org/abs/1809.00741v1, http://arxiv.org/pdf/1809.00741v1",econ.GN
33422,gn,"The dual crises of the sub-prime mortgage crisis and the global financial
crisis has prompted a call for explanations of non-equilibrium market dynamics.
Recently a promising approach has been the use of agent based models (ABMs) to
simulate aggregate market dynamics. A key aspect of these models is the
endogenous emergence of critical transitions between equilibria, i.e. market
collapses, caused by multiple equilibria and changing market parameters.
Several research themes have developed microeconomic-based models that include
multiple equilibria: social decision theory (Brock and Durlauf), quantal
response models (McKelvey and Palfrey), and strategic complementarities
(Goldstein). A gap that needs to be filled in the literature is a unified
analysis of the relationship between these models and how aggregate criticality
emerges from the individual agent level. This article reviews the agent-based
foundations of markets starting with the individual agent perspective of
McFadden and the aggregate perspective of catastrophe theory emphasising
connections between the different approaches. It is shown that changes in the
uncertainty agents have in the value of their interactions with one another,
even if these changes are one-sided, plays a central role in systemic market
risks such as market instability and the twin crises effect. These interactions
can endogenously cause crises that are an emergent phenomena of markets.",Multi-agent Economics and the Emergence of Critical Markets,2018-09-05 08:25:07,Michael S. Harré,"http://arxiv.org/abs/1809.01332v1, http://arxiv.org/pdf/1809.01332v1",econ.GN
33423,gn,"The world of cryptocurrency is not transparent enough though it was
established for innately transparent tracking of capital flows. The most
significant contributing factor is the violation of securities laws and scams in
Initial Coin Offerings (ICOs), which are used to raise capital through crowdfunding.
There is a lack of proper regulation and appreciation from governments around the
world, which is a serious problem for the integrity of the cryptocurrency market. We
present a hypothetical case study of a new cryptocurrency to establish the
transparency and equal right for every citizen to be part of a global system
through the collaboration between people and government. The possible outcome
is a model of a regulated and trusted cryptocurrency infrastructure that can be
further tailored to different sectors with a different scheme.",Worldcoin: A Hypothetical Cryptocurrency for the People and its Government,2018-09-08 10:19:30,Sheikh Rabiul Islam,"http://arxiv.org/abs/1809.02769v1, http://arxiv.org/pdf/1809.02769v1",q-fin.GN
33425,gn,"We measure trends in the diffusion of misinformation on Facebook and Twitter
between January 2015 and July 2018. We focus on stories from 570 sites that
have been identified as producers of false stories. Interactions with these
sites on both Facebook and Twitter rose steadily through the end of 2016.
Interactions then fell sharply on Facebook while they continued to rise on
Twitter, with the ratio of Facebook engagements to Twitter shares falling by
approximately 60 percent. We see no similar pattern for other news, business,
or culture sites, where interactions have been relatively stable over time and
have followed similar trends on the two platforms both before and after the
election.",Trends in the Diffusion of Misinformation on Social Media,2018-09-16 18:49:14,"Hunt Allcott, Matthew Gentzkow, Chuan Yu","http://arxiv.org/abs/1809.05901v1, http://arxiv.org/pdf/1809.05901v1",cs.SI
33426,gn,"Communication is now a standard tool in the central bank's monetary policy
toolkit. Theoretically, communication provides the central bank an opportunity
to guide public expectations, and it has been shown empirically that central
bank communication can lead to financial market fluctuations. However, there
has been little research into which dimensions or topics of information are
most important in causing these fluctuations. We develop a semi-automatic
methodology that summarizes the FOMC statements into its main themes,
automatically selects the best model based on coherency, and assesses whether
there is a significant impact of these themes on the shape of the U.S. Treasury
yield curve using topic modeling methods from the machine learning literature.
Our findings suggest that the FOMC statements can be decomposed into three
topics: (i) information related to the economic conditions and the mandates,
(ii) information related to monetary policy tools and intermediate targets, and
(iii) information related to financial markets and the financial crisis. We
find that statements are most influential during the financial crisis and the
effects are mostly present in the curvature of the yield curve through
information related to the financial theme.",Central Bank Communication and the Yield Curve: A Semi-Automatic Approach using Non-Negative Matrix Factorization,2018-09-24 04:46:05,Ancil Crayton,"http://arxiv.org/abs/1809.08718v1, http://arxiv.org/pdf/1809.08718v1",econ.GN
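A minimal sketch of the theme-extraction step, assuming sklearn's NMF on TF-IDF features and placeholder statement texts; the paper's coherence-based model selection is omitted:

```python
# Hedged sketch: extract a few themes from FOMC-style statements with NMF on
# TF-IDF features; the statements below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

statements = [
    "labor market conditions improved and inflation moved toward the objective",
    "the committee decided to maintain the target range for the federal funds rate",
    "strains in financial markets pose downside risks to economic activity",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(statements)
nmf = NMF(n_components=3, init="nndsvda", random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, row in enumerate(nmf.components_):
    top = [terms[i] for i in row.argsort()[::-1][:5]]   # top words per theme
    print(f"theme {k}: " + ", ".join(top))
```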
33429,gn,"It is well-known that demand response can improve the system efficiency as
well as lower consumers' (prosumers') electricity bills. However, it is not
clear how we can either qualitatively identify the prosumer with the most
impact potential or quantitatively estimate each prosumer's contribution to the
total social welfare improvement when additional resource capacity/flexibility
is introduced to the system with demand response, such as allowing net-selling
behavior. In this work, we build upon existing literature on the electricity
market, which consists of price-taking prosumers each with various appliances,
an electric utility company and a social welfare optimizing distribution system
operator, to design a general sensitivity analysis approach (GSAA) that can
estimate the potential of each consumer's contribution to the social welfare
when given more resource capacity. GSAA is based on existence of an efficient
competitive equilibrium, which we establish in the paper. When prosumers'
utility functions are quadratic, GSAA can give closed-form characterizations of
social welfare improvement based on duality analysis. Furthermore, we extend
GSAA to general convex settings, i.e., utility functions with strong
convexity and Lipschitz continuous gradient. Even without knowing the specific
forms of the utility functions, we can derive upper and lower bounds of the social
welfare improvement potential of each prosumer, when extra resource is
introduced. For both settings, several applications and numerical examples are
provided: including extending AC comfort zone, ability of EV to discharge and
net selling. The estimation results show that GSAA can be used to decide how to
allocate potentially limited market resources in the most impactful way.",A General Sensitivity Analysis Approach for Demand Response Optimizations,2018-10-07 10:03:46,"Ding Xiang, Ermin Wei","http://arxiv.org/abs/1810.02815v1, http://arxiv.org/pdf/1810.02815v1",cs.CE
33431,gn,"We provide an exact analytical solution of the Nash equilibrium for $k$-
price auctions. We also introduce a new type of auction and demonstrate that it
has fair solutions other than the second price auctions, therefore paving the
way for replacing second price auctions.",k-price auctions and Combination auctions,2018-09-26 18:33:48,"Martin Mihelich, Yan Shu","http://arxiv.org/abs/1810.03494v3, http://arxiv.org/pdf/1810.03494v3",q-fin.MF
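The paper's exact analytical solution is not reproduced here; as a hedged point of reference, under i.i.d. uniform values the textbook symmetric equilibrium of a k-price auction is b(v) = v(n-1)/(n-k+1), and the Monte Carlo below checks that expected revenue under that bid rule matches the revenue-equivalence benchmark (n-1)/(n+1):

```python
# Hedged illustration (not the paper's general solution): Monte Carlo check that
# the textbook uniform-value equilibrium bid b(v) = v*(n-1)/(n-k+1) yields the
# revenue-equivalence benchmark E[second-highest value] = (n-1)/(n+1).
import numpy as np

def simulate_revenue(n=5, k=3, rounds=200_000, seed=0):
    rng = np.random.default_rng(seed)
    values = rng.uniform(size=(rounds, n))
    bids = values * (n - 1) / (n - k + 1)
    bids_sorted = np.sort(bids, axis=1)[:, ::-1]   # descending bids
    return bids_sorted[:, k - 1].mean()            # winner pays the k-th highest bid

for k in (1, 2, 3):
    print(f"k={k}: simulated revenue {simulate_revenue(k=k):.4f}")
print("revenue-equivalence benchmark (n-1)/(n+1) =", (5 - 1) / (5 + 1))
```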
33432,gn,"A methodology is presented to rank universities on the basis of the lists of
programmes the students applied for. We exploit a crucial feature of the
centralised assignment system to higher education in Hungary: a student is
admitted to the first programme where the score limit is achieved. This makes
it possible to derive a partial preference order of each applicant. Our
approach integrates the information from all students participating in the
system, is free of multicollinearity among the indicators, and contains few ad
hoc parameters. The procedure is implemented to rank faculties in the Hungarian
higher education between 2001 and 2016. We demonstrate that the ranking given
by the least squares method has favourable theoretical properties, is robust
with respect to the aggregation of preferences, and performs well in practice.
The suggested ranking is worth considering as a reasonable alternative to the
standard composite indices.",University rankings from the revealed preferences of the applicants,2018-10-09 18:44:53,"László Csató, Csaba Tóth","http://dx.doi.org/10.1016/j.ejor.2020.03.008, http://arxiv.org/abs/1810.04087v6, http://arxiv.org/pdf/1810.04087v6",stat.AP
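A generic least-squares ranking from pairwise preference counts, of the kind that can be derived from application lists (the counts below are made up, and this is not the authors' exact estimator):

```python
# Hedged sketch of a least-squares (Massey-type) ranking from pairwise
# preferences; the comparison counts below are fabricated for illustration.
import numpy as np

items = ["FacA", "FacB", "FacC"]
# wins[i, j] = number of applicants revealed to prefer item i over item j
wins = np.array([[0, 30, 45],
                 [20, 0, 40],
                 [ 5, 10, 0]], dtype=float)

comparisons = wins + wins.T
L = np.diag(comparisons.sum(axis=1)) - comparisons     # Laplacian of the comparison graph
d = wins.sum(axis=1) - wins.sum(axis=0)                # net wins per item

# L is singular; pin the ratings to sum to zero by replacing one equation.
L[-1, :] = 1.0
d[-1] = 0.0
ratings = np.linalg.solve(L, d)
for name, r in sorted(zip(items, ratings), key=lambda t: -t[1]):
    print(f"{name}: {r:+.3f}")
```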
33433,gn,"We investigate the macroeconomic consequences of narrow banking in the
context of stock-flow consistent models. We begin with an extension of the
Goodwin-Keen model incorporating time deposits, government bills, cash, and
central bank reserves to the base model with loans and demand deposits and use
it to describe a fractional reserve banking system. We then characterize narrow
banking by a full reserve requirement on demand deposits and describe the
resulting separation between the payment system and lending functions of the
resulting banking sector. By way of numerical examples, we explore the
properties of fractional and full reserve versions of the model and compare
their asymptotic properties. We find that narrow banking does not lead to any
loss in economic growth when the models converge to a finite equilibrium, while
allowing for more direct monitoring and prevention of financial breakdowns in
the case of explosive asymptotic behaviour.",The Broad Consequences of Narrow Banking,2018-10-12 22:23:40,"Matheus R Grasselli, Alexander Lipton","http://arxiv.org/abs/1810.05689v1, http://arxiv.org/pdf/1810.05689v1",econ.GN
33434,gn,"Ranking algorithms are the information gatekeepers of the Internet era. We
develop a stylized model to study the effects of ranking algorithms on opinion
dynamics. We consider a search engine that uses an algorithm based on
popularity and on personalization. We find that popularity-based rankings
generate an advantage of the fewer effect: fewer websites reporting a given
signal attract relatively more traffic overall. This highlights a novel,
ranking-driven channel that explains the diffusion of misinformation, as
websites reporting incorrect information may attract an amplified amount of
traffic precisely because they are few. Furthermore, when individuals provide
sufficiently positive feedback to the ranking algorithm, popularity-based
rankings tend to aggregate information while personalization acts in the
opposite direction.",Opinion Dynamics via Search Engines (and other Algorithmic Gatekeepers),2018-10-16 16:18:22,"Fabrizio Germano, Francesco Sobbrio","http://arxiv.org/abs/1810.06973v2, http://arxiv.org/pdf/1810.06973v2",cs.SI
33498,gn,"Behavioral economics changed the way we think about market participants and
revolutionized policy-making by introducing the concept of choice architecture.
However, even though effective on the level of a population, interventions from
behavioral economics, nudges, are often characterized by weak generalisation as
they struggle on the level of individuals. Recent developments in data science,
artificial intelligence (AI) and machine learning (ML) have shown ability to
alleviate some of the problems of weak generalisation by providing tools and
methods that result in models with stronger predictive power. This paper aims
to describe how ML and AI can work with behavioral economics to support and
augment decision-making and inform policy decisions by designing personalized
interventions, assuming that enough personalized traits and psychological
variables can be sampled.",Machine learning and behavioral economics for personalized choice architecture,2019-07-03 21:53:59,"Emir Hrnjic, Nikodem Tomczak","http://arxiv.org/abs/1907.02100v1, http://arxiv.org/pdf/1907.02100v1",econ.GN
33435,gn,"This paper presents an analytical treatment of economic systems with an
arbitrary number of agents that keeps track of the systems' interactions and
agents' complexity. This formalism does not seek to aggregate agents. It rather
replaces the standard optimization approach by a probabilistic description of
both the entire system and agents' behaviors. This is done in two distinct
steps. A first step considers an interacting system involving an arbitrary
number of agents, where each agent's utility function is subject to
unpredictable shocks. In such a setting, individual optimization problems need
not be resolved. Each agent is described by a time-dependent probability
distribution centered around his utility optimum. The entire system of agents
is thus defined by a composite probability depending on time, agents'
interactions and forward-looking behaviors. This dynamic system is described by
a path integral formalism in an abstract space (the space of the agents'
actions) and is very similar to a statistical physics or quantum mechanics
system. We show that this description, applied to the space of agents' actions,
reduces to the usual optimization results in simple cases. Compared to a
standard optimization, such a description markedly eases the treatment of
systems with a small number of agents. It becomes, however, useless for a large
number of agents. In a second step therefore, we show that for a large number
of agents, the previous description is equivalent to a more compact description
in terms of field theory. This yields an analytical though approximate
treatment of the system. This field theory does not model the aggregation of a
microeconomic system in the usual sense. It rather describes an environment of
a large number of interacting agents. From this description, various phases or
equilibria may be retrieved, along with individual agents' behaviors and their
interactions with the environment. For illustrative purposes, this paper
studies a Business Cycle model with a large number of agents.",A Path Integral Approach to Business Cycle Models with Large Number of Agents,2018-10-16 13:03:35,"Aïleen Lotz, Pierre Gosselin, Marc Wambst","http://arxiv.org/abs/1810.07178v1, http://arxiv.org/pdf/1810.07178v1",econ.GN
33436,gn,"We study the value and the optimal strategies for a two-player zero-sum
optimal stopping game with incomplete and asymmetric information. In our
Bayesian set-up, the drift of the underlying diffusion process is unknown to
one player (incomplete information feature), but known to the other one
(asymmetric information feature). We formulate the problem and reduce it to a
fully Markovian setup where the uninformed player optimises over stopping times
and the informed one uses randomised stopping times in order to hide their
informational advantage. Then we provide a general verification result which
allows us to find the value of the game and players' optimal strategies by
solving suitable quasi-variational inequalities with some non-standard
constraints. Finally, we study an example with linear payoffs, in which an
explicit solution of the corresponding quasi-variational inequalities can be
obtained.",Dynkin games with incomplete and asymmetric information,2018-10-17 20:21:01,"Tiziano De Angelis, Erik Ekström, Kristoffer Glover","http://arxiv.org/abs/1810.07674v5, http://arxiv.org/pdf/1810.07674v5",math.PR
33437,gn,"Despite the success of demand response programs in retail electricity markets
in reducing average consumption, the random responsiveness of consumers to
price event makes their efficiency questionable to achieve the flexibility
needed for electric systems with a large share of renewable energy. The
variance of consumers' responses depreciates the value of these mechanisms and
makes them weakly reliable. This paper aims at designing demand response
contracts which allow to act on both the average consumption and its variance.
The interaction between a risk-averse producer and a risk-averse consumer is
modelled through a Principal-Agent problem, thus accounting for the moral
hazard underlying demand response contracts. We provide a closed-form solution
for the optimal contract in the case of constant marginal costs of energy and
volatility for the producer and constant marginal value of energy for the
consumer. We show that the optimal contract has a rebate form where the initial
condition of the consumption serves as a baseline. Further, the consumer cannot
manipulate the baseline to his own advantage. The second-best prices for energy
and volatility are non-constant and non-increasing in time. The price for
energy is lower (resp. higher) than the marginal cost of energy during
peak-load (resp. off-peak) periods. We illustrate the potential benefit
arising from the implementation of an incentive mechanism on the responsiveness
of the consumer by calibrating our model with publicly available data. We
predict a significant increase of responsiveness under our optimal contract and
a significant increase of the producer satisfaction.",Optimal electricity demand response contracting with responsiveness incentives,2018-10-22 05:41:56,"René Aïd, Dylan Possamaï, Nizar Touzi","http://arxiv.org/abs/1810.09063v3, http://arxiv.org/pdf/1810.09063v3",math.OC
33438,gn,"This work fits in the context of community microgrids, where members of a
community can exchange energy and services among themselves, without going
through the usual channels of the public electricity grid. We introduce and
analyze a framework to operate a community microgrid, and to share the
resulting revenues and costs among its members. A market-oriented pricing of
energy exchanges within the community is obtained by implementing an internal
local market based on the marginal pricing scheme. The market aims at
maximizing the social welfare of the community, thanks to the more efficient
allocation of resources, the reduction of the peak power to be paid, and the
increased amount of reserve, achieved at an aggregate level. A community
microgrid operator, acting as a benevolent planner, redistributes revenues and
costs among the members, in such a way that the solution achieved by each
member within the community is not worse than the solution it would achieve by
acting individually. In this way, each member is incentivized to participate in
the community on a voluntary basis. The overall framework is formulated in the
form of a bilevel model, where the lower level problem clears the market, while
the upper level problem plays the role of the community microgrid operator.
Numerical results obtained on a real test case implemented in Belgium show
around 54% cost savings on a yearly scale for the community, as compared to the
case when its members act individually.",A Community Microgrid Architecture with an Internal Local Market,2018-10-23 15:05:51,"Bertrand Cornélusse, Iacopo Savelli, Simone Paoletti, Antonio Giannitrapani, Antonio Vicino","http://dx.doi.org/10.1016/j.apenergy.2019.03.109, http://arxiv.org/abs/1810.09803v3, http://arxiv.org/pdf/1810.09803v3",cs.SY
33439,gn,"We develop an equilibrium theory of attention and politics. In a spatial
model of electoral competition where candidates have varying policy
preferences, we examine what kinds of political behaviors capture voters'
limited attention and how this concern affects the overall political outcomes.
Following the seminal works of Downs (1957) and Sims (1998), we assume that
voters are rationally inattentive and can process information about the
policies at a cost proportional to entropy reduction. The main finding is an
equilibrium phenomenon called attention- and media-driven extremism, namely as
we increase the attention cost or garble the news technology, a truncated set
of the equilibria captures voters' attention through enlarging the policy
differentials between the varying types of the candidates. We supplement our
analysis with historical accounts, and discuss its relevance in the new era
featuring greater media choices and distractions, as well as the rise of
partisan media and fake news.",The Politics of Attention,2018-10-26 20:59:56,"Li Hu, Anqi Li","http://arxiv.org/abs/1810.11449v3, http://arxiv.org/pdf/1810.11449v3",econ.GN
33440,gn,"Recent technology advances have enabled firms to flexibly process and analyze
sophisticated employee performance data at a reduced and yet significant cost.
We develop a theory of optimal incentive contracting where the monitoring
technology that governs the above procedure is part of the designer's strategic
planning. In otherwise standard principal-agent models with moral hazard, we
allow the principal to partition agents' performance data into any finite
categories and to pay for the amount of information the output signal carries.
Through analysis of the trade-off between giving incentives to agents and
saving the monitoring cost, we obtain characterizations of optimal monitoring
technologies such as information aggregation, strict MLRP, likelihood
ratio-convex performance classification, group evaluation in response to rising
monitoring costs, and assessing multiple task performances according to agents'
endogenous tendencies to shirk. We examine the implications of these results
for workforce management and firms' internal organizations.",Optimal Incentive Contract with Endogenous Monitoring Technology,2018-10-26 21:06:53,"Anqi Li, Ming Yang","http://arxiv.org/abs/1810.11471v6, http://arxiv.org/pdf/1810.11471v6",econ.TH
33441,gn,"We examine problems of ``intermediated implementation,'' in which a single
principal can only regulate limited aspects of the consumption bundles traded
between intermediaries and agents with hidden characteristics. An example is
sales, in which retailers offer menus of consumption bundles to customers with
hidden tastes, whereas a manufacturer with a potentially different goal from
retailers' is limited to regulating sold consumption goods but not retail
prices by legal barriers. We study how the principal can implement through
intermediaries any social choice rule that is incentive compatible and
individually rational for agents. We demonstrate the effectiveness of per-unit
fee schedules and distribution regulations, which hinges on whether
intermediaries have private or interdependent values. We give further
applications to healthcare regulation and income redistribution.",Intermediated Implementation,2018-10-26 21:09:37,"Anqi Li, Yiqing Xing","http://arxiv.org/abs/1810.11475v7, http://arxiv.org/pdf/1810.11475v7",econ.TH
33443,gn,"We develop a machine-learning-based method, Principal Smooth-Dynamics
Analysis (PriSDA), to identify patterns in economic development and to automate
the development of new theories of economic dynamics. Traditionally, economic
growth is modeled with a few aggregate quantities derived from simplified
theoretical models. Here, PriSDA identifies important quantities. Applied to 55
years of data on countries' exports, PriSDA finds that what most distinguishes
countries' export baskets is their diversity, with extra weight assigned to
more sophisticated products. The weights are consistent with previous measures
of product complexity in the literature. The second dimension of variation is a
proficiency in machinery relative to agriculture. PriSDA then couples these
quantities with per-capita income and infers the dynamics of the system over
time. According to PriSDA, the pattern of economic development of countries is
dominated by a tendency toward increased diversification. Moreover, economies
appear to become richer after they diversify (i.e., diversity precedes growth).
The model predicts that middle-income countries with diverse export baskets
will grow the fastest in the coming decades, and that countries will converge
onto intermediate levels of income and specialization. PriSDA is generalizable
and may illuminate dynamics of elusive quantities such as diversity and
complexity in other natural and social systems.",Machine-learned patterns suggest that diversification drives economic development,2018-12-09 21:00:46,"Charles D. Brummitt, Andres Gomez-Lievano, Ricardo Hausmann, Matthew H. Bonds","http://arxiv.org/abs/1812.03534v1, http://arxiv.org/pdf/1812.03534v1",physics.soc-ph
33444,gn,"While deep neural networks (DNNs) have been increasingly applied to choice
analysis showing high predictive power, it is unclear to what extent
researchers can interpret economic information from DNNs. This paper
demonstrates that DNNs can provide economic information as complete as
classical discrete choice models (DCMs). The economic information includes
choice predictions, choice probabilities, market shares, substitution patterns
of alternatives, social welfare, probability derivatives, elasticities,
marginal rates of substitution (MRS), and heterogeneous values of time (VOT).
Unlike DCMs, DNNs can automatically learn the utility function and reveal
behavioral patterns that are not prespecified by domain experts. However, the
economic information obtained from DNNs can be unreliable because of the three
challenges associated with the automatic learning capacity: high sensitivity to
hyperparameters, model non-identification, and local irregularity. To
demonstrate the strength and challenges of DNNs, we estimated the DNNs using a
stated preference survey, extracted the full list of economic information from
the DNNs, and compared them with those from the DCMs. We found that the
economic information aggregated over trainings or over the population is more
reliable than the disaggregated information for individual observations or
trainings, and that even simple hyperparameter searching can significantly
improve the reliability of the economic information extracted from the DNNs.
Future studies should investigate other regularizations and DNN architectures,
better optimization algorithms, and robust DNN training methods to address
DNNs' three challenges, to provide more reliable economic information from
DNN-based choice models.",Deep Neural Networks for Choice Analysis: Extracting Complete Economic Information for Interpretation,2018-12-11 19:40:29,"Shenhao Wang, Qingyi Wang, Jinhua Zhao","http://arxiv.org/abs/1812.04528v3, http://arxiv.org/pdf/1812.04528v3",econ.GN
33445,gn,"We construct two examples of shareholder networks in which shareholders are
connected if they have shares in the same company. We do this for the
shareholders in Turkish companies and we compare this against the network
formed from the shareholdings in Dutch companies. We analyse the properties of
these two networks in terms of the different types of shareholder. We create a
suitable randomised version of these networks to enable us to identify
significant features. On this basis we determine the roles played by different
types of shareholder in these networks, and show how these roles differ in the
two countries we study.",How the network properties of shareholders vary with investor type and country,2018-12-17 14:00:02,"Qing Yao, Tim Evans, Kim Christensen","http://dx.doi.org/10.1371/journal.pone.0220965, http://arxiv.org/abs/1812.06694v2, http://arxiv.org/pdf/1812.06694v2",q-fin.GN
33448,gn,"It is an enduring question how to combine revealed preference (RP) and stated
preference (SP) data to analyze travel behavior. This study presents a
framework of multitask learning deep neural networks (MTLDNNs) for this
question, and demonstrates that MTLDNNs are more generic than the traditional
nested logit (NL) method, due to its capacity of automatic feature learning and
soft constraints. About 1,500 MTLDNN models are designed and applied to the
survey data that was collected in Singapore and focused on the RP of four
current travel modes and the SP with autonomous vehicles (AV) as the one new
travel mode in addition to those in RP. We found that MTLDNNs consistently
outperform six benchmark models and particularly the classical NL models by
about 5% prediction accuracy in both RP and SP datasets. This performance
improvement can be mainly attributed to the soft constraints specific to
MTLDNNs, including its innovative architectural design and regularization
methods, but not much to the generic capacity of automatic feature learning
endowed by a standard feedforward DNN architecture. Besides prediction, MTLDNNs
are also interpretable. The empirical results show that AV is mainly a
substitute for driving, and that AV alternative-specific variables are more
important than socio-economic variables in determining AV adoption. Overall, this
study introduces a new MTLDNN framework to combine RP and SP, and demonstrates
its theoretical flexibility and empirical power for prediction and
interpretation. Future studies can design new MTLDNN architectures to reflect
the specific characteristics of RP and SP and extend this work to other
behavioral analyses.",Multitask Learning Deep Neural Networks to Combine Revealed and Stated Preference Data,2019-01-02 04:02:00,"Shenhao Wang, Qingyi Wang, Jinhua Zhao","http://arxiv.org/abs/1901.00227v2, http://arxiv.org/pdf/1901.00227v2",econ.GN
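A minimal sketch of the shared-trunk, two-head multitask architecture the abstract above describes for combining RP and SP data. The layer sizes, the use of PyTorch, and the toy batches are assumptions for illustration; this does not reproduce the authors' 1,500-model design or their regularization schemes.

```python
import torch
import torch.nn as nn

class MTLDNN(nn.Module):
    """Shared-trunk multitask network: one head for RP choices, one for SP choices."""
    def __init__(self, n_features, n_rp_modes=4, n_sp_modes=5, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rp_head = nn.Linear(hidden, n_rp_modes)   # existing travel modes
        self.sp_head = nn.Linear(hidden, n_sp_modes)   # existing modes plus AV

    def forward(self, x):
        h = self.shared(x)
        return self.rp_head(h), self.sp_head(h)

model = MTLDNN(n_features=10)
loss_fn = nn.CrossEntropyLoss()
x_rp, y_rp = torch.randn(32, 10), torch.randint(0, 4, (32,))   # toy RP batch
x_sp, y_sp = torch.randn(32, 10), torch.randint(0, 5, (32,))   # toy SP batch

rp_logits, _ = model(x_rp)
_, sp_logits = model(x_sp)
loss = loss_fn(rp_logits, y_rp) + loss_fn(sp_logits, y_sp)     # joint RP + SP objective
loss.backward()
```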
33449,gn,"Even in the face of deteriorating and highly volatile demand, firms often
invest in, rather than discard, aging technologies. In order to study this
phenomenon, we model the firm's profit stream as a Brownian motion with
negative drift. At each point in time, the firm can continue operations, or it
can stop and exit the project. In addition, there is a one-time option to make
an investment which boosts the project's profit rate. Using stochastic
analysis, we show that the optimal policy always exists and that it is
characterized by three thresholds. There are investment and exit thresholds
before investment, and there is a threshold for exit after investment. We also
perform a comparative statics analysis of the thresholds with respect to the
drift and the volatility of the Brownian motion. When the profit boost upon
investment is sufficiently large, we find a novel result: the investment
threshold decreases in volatility.",Invest or Exit? Optimal Decisions in the Face of a Declining Profit Stream,2019-01-06 05:03:52,H. Dharma Kwon,"http://dx.doi.org/10.1287/opre.1090.0740, http://arxiv.org/abs/1901.01486v1, http://arxiv.org/pdf/1901.01486v1",math.OC
33450,gn,"The increasing integration of world economies, which organize in complex
multilayer networks of interactions, is one of the critical factors for the
global propagation of economic crises. We adopt the network science approach to
quantify shock propagation on the global trade-investment multiplex network. To
this aim, we propose a model that couples a Susceptible-Infected-Recovered
epidemic spreading dynamics, describing how economic distress propagates
between connected countries, with an internal contagion mechanism, describing
the spreading of such economic distress within a given country. At the local
level, we find that the interplay between trade and financial interactions
influences the vulnerabilities of countries to shocks. At the large scale, we
find a simple linear relation between the relative magnitude of a shock in a
country and its global impact on the whole economic system, although the strength
of internal contagion is country-dependent and the intercountry propagation
dynamics is non-linear. Interestingly, this systemic impact can be predicted on
the basis of intra-layer and inter-layer scale factors that we name network
multipliers, which are independent of the magnitude of the initial shock. Our
model sets up a quantitative framework to stress-test the robustness of
individual countries and of the world economy to propagating crashes.",The interconnected wealth of nations: Shock propagation on global trade-investment multiplex networks,2019-01-08 18:16:02,"Michele Starnini, Marián Boguñá, M. Ángeles Serrano","http://arxiv.org/abs/1901.01976v1, http://arxiv.org/pdf/1901.01976v1",physics.soc-ph
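To make the coupling described above more concrete, here is a toy sketch of SIR-type distress propagation on a single random "trade" network. The network, the transmission and recovery rates, and the single-layer simplification are all assumptions; the paper's calibrated trade-investment multiplex and its internal-contagion mechanism are not reproduced here.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)      # toy single-layer "trade" network

beta, gamma = 0.3, 0.1        # assumed transmission and recovery rates
state = np.zeros(G.number_of_nodes(), dtype=int)   # 0 = susceptible, 1 = distressed, 2 = recovered
state[0] = 1                  # initial shock hits country 0

for t in range(100):
    new_state = state.copy()
    for i in np.where(state == 1)[0]:
        for j in G.neighbors(i):
            if state[j] == 0 and rng.random() < beta:
                new_state[j] = 1              # distress propagates along a trade link
        if rng.random() < gamma:
            new_state[i] = 2                  # country recovers
    state = new_state

print("countries ever affected:", int((state > 0).sum()))
```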
33451,gn,"Brazilian executive body has consistently vetoed legislative initiatives
easing creation and emancipation of municipalities. The literature lists
evidence of the negative results of municipal fragmentation, especially so for
metropolitan regions. In order to provide evidences for the argument of
metropolitan union, this paper quantifies the quality of life of metropolitan
citizens in the face of four alternative rules of distribution of municipal tax
collection. Methodologically, a validated agent-based spatial model is
simulated. On top of that, econometric models are tested using real exogenous
variables and simulated data. Results suggest two central conclusions. First,
the progressiveness of the Municipal Participation Fund and its relevance to a
better quality of life in metropolitan municipalities are confirmed. Second,
municipal financial merging would improve citizens' quality of life, compared
to the status quo for 23 Brazilian metropolises. Further, the paper presents
quantitative evidence that allows comparing alternative tax distributions for
each of the 40 simulated metropolises, identifying more efficient forms of
fiscal distribution and contributing to the literature and to contemporary
parliamentary debate.",Modeling tax distribution in metropolitan regions with PolicySpace,2018-12-31 22:01:03,Bernardo Alves Furtado,"http://arxiv.org/abs/1901.02391v1, http://arxiv.org/pdf/1901.02391v1",econ.GN
33452,gn,"Blockchain in supply chain management is expected to boom over the next five
years. It is estimated that the global blockchain supply chain market would
grow at a compound annual growth rate of 87% and increase from $45 million in
2018 to $3,314.6 million by 2023. Blockchain will improve business for all
global supply chain stakeholders by providing enhanced traceability,
facilitating digitisation, and securing chain-of-custody. This paper provides a
synthesis of the existing challenges in global supply chain and trade
operations, as well as the relevant capabilities and potential of blockchain.
We further present leading pilot initiatives on applying blockchains to supply
chains and the logistics industry to fulfill a range of needs. Finally, we
discuss the implications of blockchain on customs and governmental agencies,
summarize challenges in enabling the wide scale deployment of blockchain in
global supply chain management, and identify future research directions.","Blockchain in Global Supply Chains and Cross Border Trade: A Critical Synthesis of the State-of-the-Art, Challenges and Opportunities",2019-01-05 23:33:36,"Yanling Chang, Eleftherios Iakovou, Weidong Shi","http://arxiv.org/abs/1901.02715v1, http://arxiv.org/pdf/1901.02715v1",cs.CY
33454,gn,"Our computational economic analysis investigates the relationship between
inequality, mobility and the financial accumulation process. Extending the
baseline model by Levy et al., we characterise the economic process through
stylised return structures generating alternative evolutions of income and
wealth through time. First, we explore the limited heuristic contribution of
one and two factors models comprising one single stock (capital wealth) and one
single flow factor (labour) as pure drivers of income and wealth generation and
allocation over time. Second, we introduce heuristic modes of taxation in line
with the baseline approach. Our computational economic analysis corroborates
that the financial accumulation process featuring compound returns plays a
significant role as source of inequality, while institutional arrangements
including taxation play a significant role in framing and shaping the aggregate
economic process that evolves over socioeconomic space and time.","Inequality, mobility and the financial accumulation process: A computational economic analysis",2019-01-13 12:14:52,"Simone Righi, Yuri Biondi","http://dx.doi.org/10.1007/s11403-019-00236-7, http://arxiv.org/abs/1901.03951v1, http://arxiv.org/pdf/1901.03951v1",econ.GN
33455,gn,"Metrics can be used by businesses to make more objective decisions based on
data. Software startups in particular are characterized by the uncertain or
even chaotic nature of the contexts in which they operate. Using data in the
form of metrics can help software startups to make the right decisions amidst
uncertainty and limited resources. However, whereas conventional business
metrics and software metrics have been studied in the past, metrics in the
specific context of software startups are not widely covered within academic
literature. To promote research in this area and to create a starting point for
it, we have conducted a multi-vocal literature review focusing on practitioner
literature in order to compile a list of metrics used by software startups.
Said list is intended to serve as a basis for further research in the area, as
the metrics in it are based on suggestions made by practitioners and not
empirically verified.",100+ Metrics for Software Startups - A Multi-Vocal Literature Review,2019-01-15 16:47:06,"Kai-Kristian Kemell, Xiaofeng Wang, Anh Nguyen-Duc, Jason Grendus, Tuure Tuunanen, Pekka Abrahamsson","http://arxiv.org/abs/1901.04819v1, http://arxiv.org/pdf/1901.04819v1",cs.GL
33456,gn,"In almost every election cycle, the validity of the United States Electoral
College is brought into question. The 2016 Presidential Election again brought
up the issue of a candidate winning the popular vote but not winning the
Electoral College, with Hillary Clinton receiving close to three million more
votes than Donald Trump. However, did the popular vote actually determine the
most liked candidate in the election? In this paper, we demonstrate that
different voting policies can alter which candidate is elected. Additionally,
we explore the trade-offs between each of these mechanisms. Finally, we
introduce two novel mechanisms with the intent of electing the least polarizing
candidate.",Dancing with Donald: Polarity in the 2016 Presidential Election,2019-01-22 05:43:28,"Robert Chuchro, Kyle D'Souza, Darren Mei","http://arxiv.org/abs/1901.07542v1, http://arxiv.org/pdf/1901.07542v1",econ.GN
33458,gn,"We consider a government that aims at reducing the debt-to-gross domestic
product (GDP) ratio of a country. The government observes the level of the
debt-to-GDP ratio and an indicator of the state of the economy, but does not
directly observe the development of the underlying macroeconomic conditions.
The government's criterion is to minimize the sum of the total expected costs
of holding debt and of debt reduction policies. We model this problem as a
singular stochastic control problem under partial observation. The contribution
of the paper is twofold. Firstly, we provide a general formulation of the model
in which the level of debt-to-GDP ratio and the value of the macroeconomic
indicator evolve as a diffusion and a jump-diffusion, respectively, with
coefficients depending on the regimes of the economy. These are described
through a finite-state continuous-time Markov chain. We reduce via filtering
techniques the original problem to an equivalent one with full information (the
so-called separated problem), and we provide a general verification result in
terms of a related optimal stopping problem under full information. Secondly,
we specialize to a case study in which the economy faces only two regimes, and
the macroeconomic indicator has a suitable diffusive dynamics. In this setting
we provide the optimal debt reduction policy. This is given in terms of the
continuous free boundary arising in an auxiliary fully two-dimensional optimal
stopping problem.",Optimal Reduction of Public Debt under Partial Observation of the Economic Growth,2019-01-24 14:25:58,"Giorgia Callegaro, Claudia Ceci, Giorgio Ferrari","http://arxiv.org/abs/1901.08356v2, http://arxiv.org/pdf/1901.08356v2",math.OC
33459,gn,"Will a large economy be stable? Building on Robert May's original argument
for large ecosystems, we conjecture that evolutionary and behavioural forces
conspire to drive the economy towards marginal stability. We study networks of
firms in which inputs for production are not easily substitutable, as in
several real-world supply chains. Relying on results from Random Matrix Theory,
we argue that such networks generically become dysfunctional when their size
increases, when the heterogeneity between firms becomes too strong or when
substitutability of their production inputs is reduced. At marginal stability
and for large heterogeneities, we find that the distribution of firm sizes
develops a power-law tail, as observed empirically. Crises can be triggered by
small idiosyncratic shocks, which lead to ""avalanches"" of defaults
characterized by a power-law distribution of total output losses. This scenario
would naturally explain the well-known ""small shocks, large business cycles""
puzzle, as anticipated long ago by Bak, Chen, Scheinkman and Woodford.",May's Instability in Large Economies,2019-01-28 15:37:50,"José Moran, Jean-Philippe Bouchaud","http://dx.doi.org/10.1103/PhysRevE.100.032307, http://arxiv.org/abs/1901.09629v4, http://arxiv.org/pdf/1901.09629v4",physics.soc-ph
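The abstract above builds on May's random-matrix stability argument. The small sketch below illustrates that argument numerically by computing the largest real part of the eigenvalues of a random interaction matrix with a self-regulating diagonal as the system size grows; the parameter values are illustrative assumptions and this is not the paper's firm-network model with non-substitutable inputs.

```python
import numpy as np

rng = np.random.default_rng(5)

def max_real_eigenvalue(n, sigma, connectance=0.2):
    """Largest real part of the eigenvalues of a sparse random interaction matrix
    with self-regulation -1 on the diagonal (May's classic stability setup)."""
    A = rng.normal(0.0, sigma, size=(n, n)) * (rng.random((n, n)) < connectance)
    np.fill_diagonal(A, -1.0)
    return np.linalg.eigvals(A).real.max()

# Stability is lost once the largest real eigenvalue crosses zero, which happens
# as the system grows at fixed interaction strength and connectance.
for n in (50, 200, 800):
    print(n, round(max_real_eigenvalue(n, sigma=0.1), 3))
```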
33460,gn,"We examine the novel problem of the estimation of transaction arrival
processes in the intraday electricity markets. We model the inter-arrivals
using multiple time-varying parametric densities based on the generalized F
distribution estimated by maximum likelihood. We analyse both the in-sample
characteristics and the probabilistic forecasting performance. In a rolling
window forecasting study, we simulate many trajectories to evaluate the
forecasts and gain significant insights into the model fit. The prediction
accuracy is evaluated by a functional version of the MAE (mean absolute error),
RMSE (root mean squared error) and CRPS (continuous ranked probability score)
for the simulated count processes. This paper fills the gap in the literature
regarding the intensity estimation of transaction arrivals and is a major
contribution to the topic, yet leaves much of the field for further
development. The study presented in this paper is based on data from the German
Intraday Continuous electricity market, but the method can easily be applied to
any other continuous intraday electricity market. For the German market, a
specific generalized gamma distribution setup best explains the overall
behaviour by a significant margin, especially as the tail behaviour of the
process is well captured.",Estimation and simulation of the transaction arrival process in intraday electricity markets,2019-01-28 18:17:45,"Michał Narajewski, Florian Ziel","http://dx.doi.org/10.3390/en12234518, http://arxiv.org/abs/1901.09729v4, http://arxiv.org/pdf/1901.09729v4",econ.GN
33461,gn,"Connected and automated vehicles (CAVs) are poised to reshape transportation
and mobility by replacing humans as the driver and service provider. While the
primary stated motivation for vehicle automation is to improve safety and
convenience of road mobility, this transformation also provides a valuable
opportunity to improve vehicle energy efficiency and reduce emissions in the
transportation sector. Progress in vehicle efficiency and functionality,
however, does not necessarily translate to net positive environmental outcomes.
Here we examine the interactions between CAV technology and the environment at
four levels of increasing complexity: vehicle, transportation system, urban
system, and society. We find that environmental impacts come from
CAV-facilitated transformations at all four levels, rather than from CAV
technology directly. We anticipate net positive environmental impacts at the
vehicle, transportation system, and urban system levels, but expect greater
vehicle utilization and shifts in travel patterns at the society level to
offset some of these benefits. Focusing on the vehicle-level improvements
associated with CAV technology is likely to yield excessively optimistic
estimates of environmental benefits. Future research and policy efforts should
strive to clarify the extent and possible synergetic effects from a systems
level in order to envisage and address concerns regarding the short- and
long-term sustainable adoption of CAV technology.","A Review on Energy, Environmental, and Sustainability Implications of Connected and Automated Vehicles",2019-01-23 06:20:24,"Morteza Taiebat, Austin L. Brown, Hannah R. Safford, Shen Qu, Ming Xu","http://dx.doi.org/10.1021/acs.est.8b00127, http://arxiv.org/abs/1901.10581v2, http://arxiv.org/pdf/1901.10581v2",cs.CY
33462,gn,"We investigate an optimal investment-consumption and optimal level of
insurance on durable consumption goods with a positive loading in a
continuous-time economy. We assume that the economic agent invests in the
financial market and in durable as well as perishable consumption goods to
derive utilities from consumption over time in a jump-diffusion market.
Assuming that the financial assets and durable consumption goods can be traded
without transaction costs, we provide a semi-explicit solution for the optimal
insurance coverage for durable goods and financial asset. With transaction
costs for trading the durable good proportional to the total value of the
durable good, we formulate the agent's optimization problem as a combined
stochastic and impulse control problem, with an implicit intervention value
function. We solve this problem numerically using stopping time iteration, and
analyze the numerical results using illustrative examples.",Optimal Investment-Consumption-Insurance with Durable and Perishable Consumption Goods in a Jump Diffusion Market,2019-03-02 08:51:52,"Jin Sun, Ryle S. Perera, Pavel V. Shevchenko","http://arxiv.org/abs/1903.00631v1, http://arxiv.org/pdf/1903.00631v1",econ.GN
33463,gn,"This paper proposes a novel trading system which plays the role of an
artificial counselor for stock investment. In this paper, future stock
prices (technical features) are predicted using Support Vector Regression.
Thereafter, the predicted prices are used to recommend which portions of the
budget an investor should allocate to different stocks in order to achieve an
optimal expected profit given their level of risk tolerance. Two different
methods are used to suggest the best portions: Markowitz portfolio theory and a
fuzzy investment counselor. The first approach is an
optimization-based method which considers merely technical features, while the
second approach is based on Fuzzy Logic taking into account both technical and
fundamental features of the stock market. The experimental results on New York
Stock Exchange (NYSE) show the effectiveness of the proposed system.",Artificial Counselor System for Stock Investment,2019-03-03 21:18:37,"Hadi NekoeiQachkanloo, Benyamin Ghojogh, Ali Saheb Pasand, Mark Crowley","http://dx.doi.org/10.1609/aaai.v33i01.33019558, http://arxiv.org/abs/1903.00955v1, http://arxiv.org/pdf/1903.00955v1",q-fin.GN
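A toy sketch of the two ingredients named in the abstract above, SVR price prediction followed by Markowitz-style mean-variance allocation. The data, lag features, long-only normalisation, and hyperparameters are assumptions for illustration only, and the fuzzy investment counselor is omitted.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(0.1, 1.0, size=(250, 3)), axis=0) + 100  # toy paths, 3 stocks

# 1) Predict next-day prices with an SVR on lagged prices (toy features)
lags, X, Y = 5, [], []
for t in range(lags, len(prices) - 1):
    X.append(prices[t - lags:t].ravel())
    Y.append(prices[t + 1])
X, Y = np.array(X), np.array(Y)
preds = np.array([SVR(kernel="rbf", C=10.0).fit(X[:-1], Y[:-1, k]).predict(X[-1:])[0]
                  for k in range(3)])
exp_returns = preds / prices[-1] - 1.0

# 2) Mean-variance (Markowitz-style) weights: w proportional to Sigma^{-1} mu,
#    then clipped to long-only and normalized
returns = np.diff(prices, axis=0) / prices[:-1]
cov = np.cov(returns.T)
raw_w = np.linalg.solve(cov, exp_returns)
w = np.clip(raw_w, 0, None)
w = w / w.sum() if w.sum() > 0 else np.ones(3) / 3
print("suggested portfolio weights:", w.round(3))
```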
33464,gn,"Are there positive or negative externalities in knowledge production? Do
current contributions to knowledge production increase or decrease the future
growth of knowledge? We use a randomized field experiment, which added relevant
content to some pages in Wikipedia while leaving similar pages unchanged. We
find that the addition of content has a negligible impact on the subsequent
long-run growth of content. Our results have implications for information
seeding and incentivizing contributions, implying that additional content does
not generate sizable externalities by either inspiring or discouraging future
contributions.",Externalities in Knowledge Production: Evidence from a Randomized Field Experiment,2019-03-02 15:17:10,"Marit Hinnosaar, Toomas Hinnosaar, Michael Kummer, Olga Slivko","http://arxiv.org/abs/1903.01861v1, http://arxiv.org/pdf/1903.01861v1",econ.GN
33465,gn,"Modern macroeconomic theories were unable to foresee the last Great Recession
and could neither predict its prolonged duration nor the recovery rate. They
are based on supply-demand equilibria that do not exist during recessionary
shocks. Here we focus on resilience as a nonequilibrium property of networked
production systems and develop a linear response theory for input-output
economics. By calibrating the framework to data from 56 industrial sectors in
43 countries between 2000 and 2014, we find that the susceptibility of
individual industrial sectors to economic shocks varies greatly across
countries, sectors, and time. We show that susceptibility-based predictions
that take sector- and country-specific recovery into account outperform, by
far, standard econometric growth models. Our results are analytically rigorous,
empirically testable, and flexible enough to address policy-relevant scenarios.
We illustrate the latter by estimating the impact of recently imposed tariffs
on US imports (steel and aluminum) on specific sectors across European
countries.",Economic resilience from input-output susceptibility improves predictions of economic growth and recovery,2019-02-27 16:02:17,"Peter Klimek, Sebastian Poledna, Stefan Thurner","http://dx.doi.org/10.1038/s41467-019-09357-w, http://arxiv.org/abs/1903.03203v1, http://arxiv.org/pdf/1903.03203v1",econ.GN
33466,gn,"The standard model of sequential capacity choices is the Stackelberg quantity
leadership model with linear demand. I show that under the standard
assumptions, leaders' actions are informative about market conditions and
independent of leaders' beliefs about the arrivals of followers. However, this
Stackelberg independence property relies on all standard assumptions being
satisfied. It fails to hold whenever the demand function is non-linear,
marginal cost is not constant, goods are differentiated, firms are
non-identical, or there are any externalities. I show that small deviations
from the linear demand assumption may make the leaders' choices completely
uninformative.",Stackelberg Independence,2019-03-11 00:39:06,Toomas Hinnosaar,"http://arxiv.org/abs/1903.04060v1, http://arxiv.org/pdf/1903.04060v1",cs.GT
33467,gn,"We develop an alternative theory to the aggregate matching function in which
workers search for jobs through a network of firms: the labor flow network. The
lack of an edge between two companies indicates the impossibility of labor
flows between them due to high frictions. In equilibrium, firms' hiring
behavior correlates through the network, generating highly disaggregated local
unemployment. Hence, aggregation depends on the topology of the network in
non-trivial ways. This theory provides new micro-foundations for the Beveridge
curve, wage dispersion, and the employer-size premium. We apply our model to
employer-employee matched records and find that network topologies with
Pareto-distributed connections cause disproportionately large changes in
aggregate unemployment under high labor supply elasticity.",Frictional Unemployment on Labor Flow Networks,2019-03-01 16:30:46,"Robert L. Axtell, Omar A. Guerrero, Eduardo López","http://arxiv.org/abs/1903.04954v1, http://arxiv.org/pdf/1903.04954v1",econ.GN
33468,gn,"Despite decades of research, the relationship between the quality of science
and the value of inventions has remained unclear. We present the result of a
large-scale matching exercise between 4.8 million patent families and 43
million publication records. We find a strong positive relationship between
quality of scientific contributions referenced in patents and the value of the
respective inventions. We rank patents by the quality of the science they are
linked to. Strikingly, high-rank patents are twice as valuable as low-rank
patents, which in turn are about as valuable as patents without direct science
link. We show this core result for various science quality and patent value
measures. The effect of science quality on patent value remains relevant even
when science is linked indirectly through other patents. Our findings imply
that what is considered ""excellent"" within the science sector also leads to
outstanding outcomes in the technological or commercial realm.",Science Quality and the Value of Inventions,2019-03-12 19:19:24,"Felix Poege, Dietmar Harhoff, Fabian Gaessler, Stefano Baruffaldi","http://dx.doi.org/10.1126/sciadv.aay7323, http://arxiv.org/abs/1903.05020v2, http://arxiv.org/pdf/1903.05020v2",econ.GN
33471,gn,"This paper studies the measurement of advertising effects on online platforms
when parallel experimentation occurs, that is, when multiple advertisers
experiment concurrently. It provides a framework that makes precise how
parallel experimentation affects this measurement problem: while ignoring
parallel experimentation yields an estimate of the average effect of
advertising in place, this estimate has limited value for decision-making in
an environment with advertising competition; accounting for parallel
experimentation, by contrast, provides a richer set of advertising effects that capture the
true uncertainty advertisers face due to competition. It then provides an
experimental design that yields data that allow advertisers to estimate these
effects and implements this design on JD.com, a large e-commerce platform that
is also a publisher of digital ads. Using traditional and kernel-based
estimators, it obtains results that empirically illustrate how these effects
can crucially affect advertisers' decisions. Finally, it shows how competitive
interference can be summarized via simple metrics that can assist
decision-making.",Parallel Experimentation on Advertising Platforms,2019-03-27 03:26:25,"Caio Waisman, Navdeep S. Sahni, Harikesh S. Nair, Xiliang Lin","http://arxiv.org/abs/1903.11198v3, http://arxiv.org/pdf/1903.11198v3",econ.GN
33472,gn,"Objective: To estimate the relationship between a hospital data breach and
hospital quality outcomes.
  Materials and Methods: Hospital data breaches reported to the U.S. Department
of Health and Human Services breach portal and the Privacy Rights Clearinghouse
database were merged with the Medicare Hospital Compare data to assemble a
panel of non-federal acute-care inpatient hospitals for the years 2011 to 2015. The
study panel included 2,619 hospitals. Changes in 30-day AMI mortality rate
following a hospital data breach were estimated using a multivariate regression
model based on a difference-in-differences approach.
  Results: A data breach was associated with a 0.338[95% CI, 0.101-0.576]
percentage point increase in the 30-day AMI mortality rate in the year
following the breach and a 0.446[95% CI, 0.164-0.729] percentage point increase
two years after the breach. For comparison, the median 30-day AMI mortality
rate has been decreasing about 0.4 percentage points annually since 2011 due to
progress in care. The magnitude of the breach impact on hospitals' AMI
mortality rates was comparable to a year's worth of historical progress in
reducing AMI mortality rates.
  Conclusion: Hospital data breaches significantly increased the 30-day
mortality rate for AMI. Data breaches may disrupt the processes of care that
rely on health information technology. Financial costs to repair a breach may
also divert resources away from patient care. Thus breached hospitals should
carefully focus investments in security procedures, processes, and health
information technology that jointly lead to better data security and improved
patient outcomes.",Do Hospital Data Breaches Reduce Patient Care Quality?,2019-04-03 18:26:12,"Sung J. Choi, M. Eric Johnson","http://arxiv.org/abs/1904.02058v1, http://arxiv.org/pdf/1904.02058v1",econ.GN
33473,gn,"In multi-period stochastic optimization problems, the future optimal decision
is a random variable whose distribution depends on the parameters of the
optimization problem. We analyze how the expected value of this random variable
changes as a function of the dynamic optimization parameters in the context of
Markov decision processes. We call this analysis \emph{stochastic comparative
statics}. We derive both \emph{comparative statics} results and
\emph{stochastic comparative statics} results showing how the current and
future optimal decisions change in response to changes in the single-period
payoff function, the discount factor, the initial state of the system, and the
transition probability function. We apply our results to various models from
the economics and operations research literature, including investment theory,
dynamic pricing models, controlled random walks, and comparisons of stationary
distributions.",Stochastic Comparative Statics in Markov Decision Processes,2019-04-11 02:56:03,Bar Light,"http://arxiv.org/abs/1904.05481v2, http://arxiv.org/pdf/1904.05481v2",math.OC
33474,gn,"This paper proposes a theory of pricing premised upon the assumptions that
customers dislike unfair prices (those marked up steeply over cost) and that
firms take these concerns into account when setting prices. Since they do not
observe firms' costs, customers must extract costs from prices. The theory
assumes that customers infer less than rationally: when a price rises due to a
cost increase, customers partially misattribute the higher price to a higher
markup, which they find unfair. Firms anticipate this response and trim their
price increases, which drives the passthrough of costs into prices below one:
prices are somewhat rigid. Embedded in a New Keynesian model as a replacement
for the usual pricing frictions, our theory produces monetary nonneutrality:
when monetary policy loosens and inflation rises, customers misperceive markups
as higher and feel unfairly treated; firms mitigate this perceived unfairness
by reducing their markups; in general equilibrium, employment rises. The theory
also features a hybrid short-run Phillips curve, realistic impulse responses of
output and employment to monetary and technology shocks, and an upward-sloping
long-run Phillips curve.",Pricing under Fairness Concerns,2019-04-11 15:13:54,"Erik Eyster, Kristof Madarasz, Pascal Michaillat","http://dx.doi.org/10.1093/jeea/jvaa041, http://arxiv.org/abs/1904.05656v4, http://arxiv.org/pdf/1904.05656v4",econ.TH
33475,gn,"SREC markets are a relatively novel market-based system to incentivize the
production of energy from solar means. A regulator imposes a floor on the
amount of energy each regulated firm must generate from solar power in a given
period and provides them with certificates for each generated MWh. Firms offset
these certificates against the floor and pay a penalty for any lacking
certificates. Certificates are tradable assets, allowing firms to purchase/sell
them freely. In this work, we formulate a stochastic control problem for
generating and trading in SREC markets from a regulated firm's perspective. We
account for generation and trading costs, the impact both have on SREC prices,
provide a characterization of the optimal strategy, and develop a numerical
algorithm to solve this control problem. Through numerical experiments, we
explore how a firm who acts optimally behaves under various conditions. We find
that an optimal firm's generation and trading behaviour can be separated into
various regimes, based on the marginal benefit of obtaining an additional SREC,
and validate our theoretical characterization of the optimal strategy. We also
conduct parameter sensitivity experiments and conduct comparisons of the
optimal strategy to other candidate strategies.",Optimal Behaviour in Solar Renewable Energy Certificate (SREC) Markets,2019-04-12 20:39:17,"Arvind Shrivats, Sebastian Jaimungal","http://arxiv.org/abs/1904.06337v6, http://arxiv.org/pdf/1904.06337v6",q-fin.MF
33476,gn,"Paid crowdsourcing platforms suffer from low-quality work and unfair
rejections, but paradoxically, most workers and requesters have high reputation
scores. These inflated scores, which make high-quality work and workers
difficult to find, stem from social pressure to avoid giving negative feedback.
We introduce Boomerang, a reputation system for crowdsourcing that elicits more
accurate feedback by rebounding the consequences of feedback directly back onto
the person who gave it. With Boomerang, requesters find that their highly-rated
workers gain earliest access to their future tasks, and workers find tasks from
their highly-rated requesters at the top of their task feed. Field experiments
verify that Boomerang causes both workers and requesters to provide feedback
that is more closely aligned with their private opinions. Inspired by a
game-theoretic notion of incentive-compatibility, Boomerang opens opportunities
for interaction design to incentivize honest reporting over strategic
dishonesty.",Boomerang: Rebounding the Consequences of Reputation Feedback on Crowdsourcing Platforms,2019-04-14 19:40:01,"Snehalkumar, S. Gaikwad, Durim Morina, Adam Ginzberg, Catherine Mullings, Shirish Goyal, Dilrukshi Gamage, Christopher Diemert, Mathias Burton, Sharon Zhou, Mark Whiting, Karolina Ziulkoski, Alipta Ballav, Aaron Gilbee, Senadhipathige S. Niranga, Vibhor Sehgal, Jasmine Lin, Leonardy Kristianto, Angela Richmond-Fuller, Jeff Regino, Nalin Chhibber, Dinesh Majeti, Sachin Sharma, Kamila Mananova, Dinesh Dhakal, William Dai, Victoria Purynova, Samarth Sandeep, Varshine Chandrakanthan, Tejas Sarma, Sekandar Matin, Ahmed Nasser, Rohit Nistala, Alexander Stolzoff, Kristy Milland, Vinayak Mathur, Rajan Vaish, Michael S. Bernstein","http://dx.doi.org/10.1145/2984511.2984542, http://arxiv.org/abs/1904.06722v1, http://arxiv.org/pdf/1904.06722v1",cs.CY
33477,gn,"Most products are produced and sold by supply chain networks, where an
interconnected network of producers and intermediaries set prices to maximize
their profits. I show that there exists a unique equilibrium in a price-setting
game on a network. The key distortion reducing both total profits and social
welfare is multiple-marginalization, which is magnified by strategic
interactions. Individual profits are proportional to influentiality, which is a
new measure of network centrality defined by the equilibrium characterization.
The results emphasize the importance of the network structure when considering
policy questions such as mergers or trade tariffs.",Price Setting on a Network,2019-04-14 23:49:43,Toomas Hinnosaar,"http://arxiv.org/abs/1904.06757v1, http://arxiv.org/pdf/1904.06757v1",cs.GT
33478,gn,"We examine the implications of consumer privacy when preferences today depend
upon past consumption choices, and consumers shop from different sellers in
each period. Although consumers are ex ante identical, their initial
consumption choices cannot be deterministic. Thus ex post heterogeneity in
preferences arises endogenously. Consumer privacy improves social welfare,
consumer surplus and the profits of the second-period seller, while reducing
the profits of the first period seller, relative to the situation where
consumption choices are observed by the later seller.",Consumer Privacy and Serial Monopoly,2019-04-16 16:15:37,"V. Bhaskar, Nikita Roketskiy","http://arxiv.org/abs/1904.07644v1, http://arxiv.org/pdf/1904.07644v1",econ.TH
33479,gn,"In recent years, there has been a proliferation of online gambling sites,
which made gambling more accessible with a consequent rise in related problems,
such as addiction. Hence, the analysis of the gambling behaviour at both the
individual and the aggregate levels has become the object of several
investigations. In this paper, resorting to classical methods of the kinetic
theory, we describe the behaviour of a multi-agent system of gamblers
participating in lottery-type games on a virtual-item gambling market. The
comparison with previous, often empirical, results highlights the ability of
the kinetic approach to explain how the simple microscopic rules of a
gambling-type game produce complex collective trends, which might be difficult
to interpret precisely by looking only at the available data.",Multiple-interaction kinetic modelling of a virtual-item gambling economy,2019-04-16 16:38:27,"Giuseppe Toscani, Andrea Tosin, Mattia Zanella","http://dx.doi.org/10.1103/PhysRevE.100.012308, http://arxiv.org/abs/1904.07660v1, http://arxiv.org/pdf/1904.07660v1",physics.soc-ph
33480,gn,"There are various types of pyramid schemes which have inflicted or are
inflicting losses on many people in the world. We propose a pyramid scheme
model which has the principal characteristics of many pyramid schemes that have
appeared in recent years: promising high returns, rewarding participants who
recruit the next generation of participants, and having the organizer take all
the money away once the money from new participants is no longer enough to pay
the previous participants' interest and rewards. We assume the pyramid scheme
operates on a tree network, an ER random network, a SW small-world network, or
a BA scale-free network, and give analytical results for how many generations
the pyramid scheme can last in each case. We also use our model
to analyse a pyramid scheme in the real world and we find the connections
between participants in the pyramid scheme may constitute a SW small-world
network.","A Pyramid Scheme Model Based on ""Consumption Rebate"" Frauds",2019-04-17 11:43:17,"Yong Shi, Bo Li, Wen Long","http://dx.doi.org/10.1088/1674-1056/28/7/078901, http://arxiv.org/abs/1904.08136v3, http://arxiv.org/pdf/1904.08136v3",physics.soc-ph
33482,gn,"As demand for electric vehicles (EVs) is expanding, meeting the need for
charging infrastructure, especially in urban areas, has become a critical
issue. One method of adding charging stations is to install them at parking
spots. This increases the value of these spots to EV drivers needing to charge
their vehicles. However, there is a cost to constructing these spots and such
spots may preclude drivers not needing to charge from using them, reducing the
parking options for such drivers. We look at two models for how
decisions surrounding investment in charging stations on existing parking spots
may be undertaken. First, we analyze two firms who compete over installing
stations under government set mandates or subsidies. Given the cost of
constructing spots and the competitiveness of the markets, we find it is
ambiguous whether setting higher mandates or higher subsidies for spot
construction leads to better aggregate outcomes. Second, we look at a system
operator who faces uncertainty on the size of the EV market. If they are risk
neutral, we find a relatively small change in the uncertainty of the EV market
can lead to large changes in the optimal charging capacity.",Investment in EV charging spots for parking,2019-04-22 20:38:21,"Brendan Badia, Randall Berry, Ermin Wei","http://arxiv.org/abs/1904.09967v1, http://arxiv.org/pdf/1904.09967v1",cs.GT
33483,gn,"We consider infinite dimensional optimization problems motivated by the
financial model called Arbitrage Pricing Theory. Using probabilistic and
functional analytic tools, we provide a dual characterization of the
super-replication cost. Then, we show the existence of optimal strategies for
investors maximizing their expected utility and the convergence of their
reservation prices to the super-replication cost as their risk-aversion tends
to infinity.",Risk-neutral pricing for APT,2019-04-25 13:34:30,"Laurence Carassus, Miklos Rasonyi","http://dx.doi.org/10.1007/s10957-020-01699-6, http://arxiv.org/abs/1904.11252v2, http://arxiv.org/pdf/1904.11252v2",econ.GN
33484,gn,"From both global and local perspectives, there are strong reasons to promote
energy efficiency. These reasons have prompted leaders in the European Union
(EU) and countries of the Middle East and North Africa (MENA) to adopt policies
to move their citizenry toward more efficient energy consumption. Energy
efficiency policy is typically framed at the national, or transnational level.
Policy makers then aim to incentivize microeconomic actors to align their
decisions with macroeconomic policy. We suggest another path towards greater
energy efficiency: highlighting individual benefits at the microeconomic level. By
simulating lighting, heating and cooling operations in a model single-family
home equipped with modest automation, we show that individual actors can be led
to pursue energy efficiency out of enlightened self-interest. We apply
simple-to-use, easily scalable impact indicators that can be made available to
homeowners and serve as intrinsic economic, environmental and social motivators
for pursuing energy efficiency. The indicators reveal tangible homeowner
benefits realizable under both the market-based pricing structure for energy in
Germany and the state-subsidized pricing structure in Algeria. Benefits accrue
under both the continental climate regime of Germany and the Mediterranean
regime of Algeria, notably in the case that cooling energy needs are
considered. Our findings show that smart home technology provides an attractive
path for advancing energy efficiency goals. The indicators we assemble can help
policy makers both to promote tangible benefits of energy efficiency to
individual homeowners, and to identify those investments of public funds that
best support individual pursuit of national and transnational energy goals.",Multiple Benefits through Smart Home Energy Management Solutions -- A Simulation-Based Case Study of a Single-Family House in Algeria and Germany,2019-04-25 09:59:41,"Marc Ringel, Roufaida Laidi, Djamel Djenouri","http://dx.doi.org/10.3390/en12081537, http://arxiv.org/abs/1904.11496v1, http://arxiv.org/pdf/1904.11496v1",econ.GN
33485,gn,"The Artificial Intelligence paradigm (hereinafter referred to as ""AI"") builds
on the analysis of data that can, among other things, capture snapshots of
individuals' behaviors and preferences. Such data represent the most valuable
currency in the digital ecosystem, where their value derives from their being a
fundamental asset in order to train machines with a view to developing AI
applications. In this environment, online providers attract users by offering
them services for free and getting in exchange data generated right through the
usage of such services. This swap, characterized by an implicit nature,
constitutes the focus of the present paper, in the light of the disequilibria,
as well as market failures, that it may bring about. We use mobile apps and the
related permission system as an ideal environment to explore, via econometric
tools, those issues. The results, stemming from a dataset of over one million
observations, show that both buyers and sellers are aware that access to
digital services implicitly implies an exchange of data, although this has no
considerable impact either on the level of downloads (demand) or on the level
of prices (supply). In other words, the implicit nature of this
exchange does not allow market indicators to work efficiently. We conclude that
current policies (e.g. transparency rules) may be inherently biased and we put
forward suggestions for a new approach.",Regulating AI: do we need new tools?,2019-04-27 12:30:13,"Otello Ardovino, Jacopo Arpetti, Marco Delmastro","http://arxiv.org/abs/1904.12134v1, http://arxiv.org/pdf/1904.12134v1",econ.GN
33486,gn,"An increasing concern in power systems is how to elicit flexibilities in
demand, which leads to nontraditional electricity products for accommodating
loads of different flexibility levels. We have proposed Multiple-Arrival
Multiple-Deadline (MAMD) differentiated energy services for the flexible loads
which require constant power for specified durations. Such loads are
indifferent to the actual power delivery time as long as the duration
requirements are satisfied between the specified arrival times and deadlines.
The focus of this paper is the market implementation of such services. In a
forward market, we establish the existence of an efficient competitive
equilibrium to verify the economic feasibility, which implies that selfish
market participants can attain the maximum social welfare in a distributed
manner. We also show the strengths of the MAMD services by simulation.",Market Implementation of Multiple-Arrival Multiple-Deadline Differentiated Energy Services,2019-06-07 08:39:02,"Yanfang Mo, Wei Chen, Li Qiu, Pravin Varaiya","http://dx.doi.org/10.1016/j.automatica.2020.108933, http://arxiv.org/abs/1906.02904v2, http://arxiv.org/pdf/1906.02904v2",cs.SY
33497,gn,"This topic review communicates working experiences regarding interaction of a
multiplicity of processes. Our experiences come from climate change modelling,
materials science, cell physiology and public health, and macroeconomic
modelling. We look at the astonishing advances of recent years in broad-band
temporal frequency sampling, multiscale modelling and fast large-scale
numerical simulation of complex systems, but also the continuing uncertainty of
many science-based results.
  We describe and analyse properties that depend on the time scale of the
measurement; structural instability; tipping points; thresholds; hysteresis;
feedback mechanisms with runaways or stabilizations or delays. We point to
grave disorientation in statistical sampling, the interpretation of
observations and the design of control when neglecting the presence or
emergence of multiple characteristic times.
  We explain what these working experiences can demonstrate for environmental
research.","Multiplicity of time scales in climate, matter, life, and economy",2019-07-01 12:21:09,"Bernhelm Booss-Bavnbek, Rasmus Kristoffer Pedersen, Ulf Rørbæk Pedersen","http://arxiv.org/abs/1907.01902v2, http://arxiv.org/pdf/1907.01902v2",econ.GN
33487,gn,"The potential impact of automation on the labor market is a topic that has
generated significant interest and concern amongst scholars, policymakers, and
the broader public. A number of studies have estimated occupation-specific risk
profiles by examining the automatability of associated skills and tasks.
However, relatively little work has sought to take a more holistic view on the
process of labor reallocation and how employment prospects are impacted as
displaced workers transition into new jobs. In this paper, we develop a new
data-driven model to analyze how workers move through an empirically derived
occupational mobility network in response to automation scenarios which
increase labor demand for some occupations and decrease it for others. At the
macro level, our model reproduces a key stylized fact in the labor market known
as the Beveridge curve and provides new insights for explaining the curve's
counter-clockwise cyclicality. At the micro level, our model provides
occupation-specific estimates of changes in short and long-term unemployment
corresponding to a given automation shock. We find that the network structure
plays an important role in determining unemployment levels, with occupations in
particular areas of the network having very few job transition opportunities.
Such insights could be fruitfully applied to help design more efficient and
effective policies aimed at helping workers adapt to the changing nature of the
labor market.",Automation and occupational mobility: A data-driven network model,2019-06-10 18:59:27,"R. Maria del Rio-Chanona, Penny Mealy, Mariano Beguerisse-Díaz, Francois Lafond, J. Doyne Farmer","http://arxiv.org/abs/1906.04086v3, http://arxiv.org/pdf/1906.04086v3",econ.GN
33488,gn,"Recent advances in computing power and the potential to make more realistic
assumptions due to increased flexibility have led to the increased prevalence
of simulation models in economics. While models of this class, and particularly
agent-based models, are able to replicate a number of empirically-observed
stylised facts not easily recovered by more traditional alternatives, such
models remain notoriously difficult to estimate due to their lack of tractable
likelihood functions. While the estimation literature continues to grow,
existing attempts have approached the problem primarily from a frequentist
perspective, with the Bayesian estimation literature remaining comparatively
less developed. For this reason, we introduce a Bayesian estimation protocol
that makes use of deep neural networks to construct an approximation to the
likelihood, which we then benchmark against a prominent alternative from the
existing literature. Overall, we find that our proposed methodology
consistently results in more accurate estimates in a variety of settings,
including the estimation of financial heterogeneous agent models and the
identification of changes in dynamics occurring in models incorporating
structural breaks.",Bayesian Estimation of Economic Simulation Models using Neural Networks,2019-06-11 15:17:34,Donovan Platt,"http://arxiv.org/abs/1906.04522v1, http://arxiv.org/pdf/1906.04522v1",econ.GN
33490,gn,"A crucial goal of funding research and development has always been to advance
economic development. On this basis, a consider-able body of research
undertaken with the purpose of determining what exactly constitutes economic
impact and how to accurately measure that impact has been published. Numerous
indicators have been used to measure economic impact, although no single
indicator has been widely adapted. Based on patent data collected from
Altmetric we predict patent citations through various social media features
using several classification models. Patents citing a research paper implies
the potential it has for direct application inits field. These predictions can
be utilized by researchers in deter-mining the practical applications for their
work when applying for patents.",Predicting Patent Citations to measure Economic Impact of Scholarly Research,2019-06-07 21:25:32,"Abdul Rahman Shaikh, Hamed Alhoori","http://arxiv.org/abs/1906.08244v1, http://arxiv.org/pdf/1906.08244v1",cs.DL
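The record above describes classification models that predict whether a paper will be cited in a patent from altmetric-style features. As a minimal, hedged sketch of that kind of pipeline (not the authors' models; the feature names and data below are placeholders):

```python
# Illustrative sketch only: a generic binary classifier for "paper is cited by
# a patent" from altmetric-style counts. Features and labels are synthetic
# placeholders, not the Altmetric fields or models used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# placeholder features: e.g. tweet count, news mentions, reader count
X = rng.poisson(lam=[5.0, 1.0, 20.0], size=(n, 3)).astype(float)
# placeholder label: papers with more attention are more likely patent-cited
logit = 0.05 * X[:, 0] + 0.3 * X[:, 1] + 0.02 * X[:, 2] - 1.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```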
33491,gn,"Since the 2007-2009 financial crisis, substantial academic effort has been
dedicated to improving our understanding of interbank lending networks (ILNs).
Because of data limitations or by choice, the literature largely lacks multiple
loan maturities. We employ a complete interbank loan contract dataset to
investigate whether maturity details are informative of the network structure.
Applying the layered stochastic block model of Peixoto (2015) and other tools
from network science on a time series of bilateral loans with multiple maturity
layers in the Russian ILN, we find that collapsing all such layers consistently
obscures mesoscale structure. The optimal maturity granularity lies between
completely collapsing and completely separating the maturity layers and depends
on the development phase of the interbank market, with a more developed market
requiring more layers for optimal description. Closer inspection of the
inferred maturity bins associated with the optimal maturity granularity reveals
specific economic functions, from liquidity intermediation to financing.
Collapsing a network with multiple underlying maturity layers or extracting one
such layer, common in economic research, is therefore not only an incomplete
representation of the ILN's mesoscale structure, but also conceals existing
economic functions. This holds important insights and opportunities for
theoretical and empirical studies on interbank market functioning, contagion,
stability, and on the desirable level of regulatory data disclosure.",Loan maturity aggregation in interbank lending networks obscures mesoscale structure and economic functions,2019-06-12 17:22:35,"Marnix Van Soom, Milan van den Heuvel, Jan Ryckebusch, Koen Schoors","http://dx.doi.org/10.1038/s41598-019-48924-5, http://arxiv.org/abs/1906.08617v1, http://arxiv.org/pdf/1906.08617v1",q-fin.GN
33506,gn,"This paper proposes the governance framework of a gamified social network for
charity crowdfunding fueled by public computing. It introduces optimal scarce
resource allocation model, technological configuration of the FIRE consensus
protocol, and multi-layer incentivization structure that maximizes value
creation within the network.",Powershare Mechanics,2019-07-18 13:39:02,"Beka Dalakishvili, Ana Mikatadze","http://arxiv.org/abs/1907.07975v1, http://arxiv.org/pdf/1907.07975v1",econ.GN
33492,gn,"Competing firms can increase profits by setting prices collectively, imposing
significant costs on consumers. Such groups of firms are known as cartels and
because this behavior is illegal, their operations are secretive and difficult
to detect. Cartels face a significant internal obstacle: members face short-run
incentives to cheat. Here we present a network-based framework to detect
potential cartels in bidding markets based on the idea that the chance a group
of firms can overcome this obstacle and sustain cooperation depends on the
patterns of its interactions. We create a network of firms based on their
co-bidding behavior, detect interacting groups, and measure their cohesion and
exclusivity, two group-level features of their collective behavior. Applied to
a market for school milk, our method detects a known cartel and calculates that
it has high cohesion and exclusivity. In a comprehensive set of nearly 150,000
public contracts awarded by the Republic of Georgia from 2011 to 2016, detected
groups with high cohesion and exclusivity are significantly more likely to
display traditional markers of cartel behavior. We replicate this relationship
between group topology and the emergence of cooperation in a simulation model.
Our method presents a scalable, unsupervised method to find groups of firms in
bidding markets ideally positioned to form lasting cartels.",A network approach to cartel detection in public auction markets,2019-06-20 17:44:46,"Johannes Wachs, János Kertész","http://dx.doi.org/10.1038/s41598-019-47198-1, http://arxiv.org/abs/1906.08667v1, http://arxiv.org/pdf/1906.08667v1",physics.soc-ph
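The cartel-detection abstract above builds a co-bidding network, detects interacting groups, and scores their cohesion and exclusivity. A minimal sketch of that idea, assuming networkx, using placeholder tender data and one plausible definition of the two group scores (not necessarily the authors' exact measures):

```python
# Illustrative sketch, not the authors' exact pipeline: link firms that bid on
# the same tender, detect groups by modularity, and score each group's cohesion
# (internal edge density) and exclusivity (share of members' ties kept inside).
import itertools
import networkx as nx
from networkx.algorithms import community

# placeholder data: tender id -> firms that bid on it
bids = {
    "t1": ["A", "B", "C"],
    "t2": ["A", "B"],
    "t3": ["B", "C"],
    "t4": ["D", "E"],
    "t5": ["D", "E", "F"],
}

G = nx.Graph()
for firms in bids.values():
    for u, v in itertools.combinations(firms, 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

for group in community.greedy_modularity_communities(G):
    sub = G.subgraph(group)
    cohesion = nx.density(sub)                  # how tightly the group co-bids
    stubs = sum(d for _, d in G.degree(group))  # all ties touching the group
    exclusivity = 2 * sub.number_of_edges() / stubs if stubs else 0.0
    print(sorted(group), f"cohesion={cohesion:.2f}", f"exclusivity={exclusivity:.2f}")
```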
33493,gn,"Cost Surfaces are a quantitative means of assigning social, environmental,
and engineering costs that impact movement across landscapes. Cost surfaces are
a crucial aspect of route optimization and least cost path (LCP) calculations
and are used in a wide range of disciplines including computer science,
landscape ecology, and energy infrastructure modeling. Linear features present
a key weakness for traditional routing calculations along cost surfaces because
they cannot identify whether moving from a cell to its adjacent neighbors
constitutes crossing a linear barrier (increased cost) or following a corridor
(reduced cost). Following and avoiding linear features can drastically change
predicted routes. In this paper, we introduce an approach to address this
""adjacency"" issue using a search kernel that identifies these critical barriers
and corridors. We have built this approach into a new Java-based open-source
software package called CostMAP (cost surface multi-layer aggregation program),
which calculates cost surfaces and cost networks using the search kernel.
CostMAP not only includes the new adjacency capability, it is also a versatile
multi-platform package that allows users to input multiple GIS data layers and
to set weights and rules for developing a weighted-cost network. We compare
CostMAP performance with traditional cost surface approaches and show
significant performance gains, both following corridors and avoiding barriers,
using examples in a movement ecology framework and pipeline routing for carbon
capture and storage (CCS). We also demonstrate that the new software can
straightforwardly calculate cost surfaces on a national scale.",CostMAP: An open-source software package for developing cost surfaces,2019-06-17 23:30:13,"Brendan Hoover, Richard S. Middleton, Sean Yaw","http://arxiv.org/abs/1906.08872v1, http://arxiv.org/pdf/1906.08872v1",physics.soc-ph
33494,gn,"Gifts are important instruments for forming bonds in interpersonal
relationships. Our study analyzes the phenomenon of gift contagion in online
groups. Gift contagion encourages social bonds by prompting further gifts; it
may also promote group interaction and solidarity. Using data on 36 million
online red packet gifts on a large social site in East Asia, we leverage a
natural experimental design to identify the social contagion of gift giving in
online groups. Our natural experiment is enabled by the randomization of the
gift amount allocation algorithm on the platform, which addresses the common
challenge of causal identifications in observational data. Our study provides
evidence of gift contagion: on average, receiving one additional dollar causes
a recipient to send 18 cents back to the group within the subsequent 24 hours.
Decomposing this effect, we find that it is mainly driven by the extensive
margin -- more recipients are triggered to send red packets. Moreover, we find
that this effect is stronger for ""luckiest draw"" recipients, suggesting the
presence of a group norm regarding the next red packet sender. Finally, we
investigate the moderating effects of group- and individual-level social
network characteristics on gift contagion as well as the causal impact of
receiving gifts on group network structure. Our study has implications for
promoting group dynamics and designing marketing strategies for product
adoption.",Gift Contagion in Online Groups: Evidence From Virtual Red Packets,2019-06-24 06:08:13,"Yuan Yuan, Tracy Liu, Chenhao Tan, Qian Chen, Alex Pentland, Jie Tang","http://arxiv.org/abs/1906.09698v5, http://arxiv.org/pdf/1906.09698v5",econ.GN
33495,gn,"Are cryptocurrency traders driven by a desire to invest in a new asset class
to diversify their portfolio or are they merely seeking to increase their
levels of risk? To answer this question, we use individual-level brokerage data
and study their behavior in stock trading around the time they engage in their
first cryptocurrency trade. We find that when engaging in cryptocurrency
trading investors simultaneously increase their risk-seeking behavior in stock
trading as they increase their trading intensity and use of leverage. The
increase in risk-seeking in stocks is particularly pronounced when volatility
in cryptocurrency returns is low, suggesting that their overall behavior is
driven by excitement-seeking.",Are cryptocurrency traders pioneers or just risk-seekers? Evidence from brokerage accounts,2019-06-28 00:17:34,"Matthias Pelster, Bastian Breitmayer, Tim Hasso","http://dx.doi.org/10.1016/j.econlet.2019.06.013, http://arxiv.org/abs/1906.11968v1, http://arxiv.org/pdf/1906.11968v1",q-fin.GN
33496,gn,"The possibility of re-switching of techniques in Piero Sraffa's intersectoral
model, namely the return of capital-intensive techniques as the profit rate
changes monotonically, is traditionally considered a paradox that puts at stake
the viability of the neoclassical theory of production. It is argued here that
this phenomenon can be rationalized within the neoclassical paradigm. Sectoral
interdependencies can give rise to non-monotonic effects of progressive
variations in income distribution on relative prices. The re-switching of
techniques is, therefore, the result of cost-minimizing technical choices
facing returning ranks of relative input prices in full consistency with the
neoclassical perspective.",Solving the Reswitching Paradox in the Sraffian Theory of Capital,2019-07-02 09:36:54,Carlo Milana,"http://arxiv.org/abs/1907.01189v5, http://arxiv.org/pdf/1907.01189v5",econ.TH
33520,gn,"We develop a model to predict consumer default based on deep learning. We
show that the model consistently outperforms standard credit scoring models,
even though it uses the same data. Our model is interpretable and is able to
provide a score to a larger class of borrowers relative to standard credit
scoring models while accurately tracking variations in systemic risk. We argue
that these properties can provide valuable insights for the design of policies
targeted at reducing consumer default and alleviating its burden on borrowers
and lenders, as well as macroprudential regulation.",Predicting Consumer Default: A Deep Learning Approach,2019-08-30 04:06:02,"Stefania Albanesi, Domonkos F. Vamossy","http://arxiv.org/abs/1908.11498v2, http://arxiv.org/pdf/1908.11498v2",econ.GN
33501,gn,"We develop an agent-based simulation of the catastrophe insurance and
reinsurance industry and use it to study the problem of risk model homogeneity.
The model simulates the balance sheets of insurance firms, who collect premiums
from clients in return for insuring them against intermittent, heavy-tailed
risks. Firms manage their capital and pay dividends to their investors, and use
either reinsurance contracts or cat bonds to hedge their tail risk. The model
generates plausible time series of profits and losses and recovers stylized
facts, such as the insurance cycle and the emergence of asymmetric, long tailed
firm size distributions. We use the model to investigate the problem of risk
model homogeneity. Under Solvency II, insurance companies are required to use
only certified risk models. This has led to a situation in which only a few
firms provide risk models, creating a systemic fragility to the errors in these
models. We demonstrate that using too few models increases the risk of
nonpayment and default while lowering profits for the industry as a whole. The
presence of the reinsurance industry ameliorates the problem but does not
remove it. Our results suggest that it would be valuable for regulators to
incentivize model diversity. The framework we develop here provides a first
step toward a simulation model of the insurance industry for testing policies
and strategies for better capital management.",A simulation of the insurance industry: The problem of risk model homogeneity,2019-07-12 23:50:08,"Torsten Heinrich, Juan Sabuco, J. Doyne Farmer","http://arxiv.org/abs/1907.05954v2, http://arxiv.org/pdf/1907.05954v2",econ.GN
33502,gn,"As the rental housing market moves online, the Internet offers divergent
possible futures: either the promise of more-equal access to information for
previously marginalized homeseekers, or a reproduction of longstanding
information inequalities. Biases in online listings' representativeness could
impact different communities' access to housing search information, reinforcing
traditional information segregation patterns through a digital divide. They
could also circumscribe housing practitioners' and researchers' ability to draw
broad market insights from listings to understand rental supply and
affordability. This study examines millions of Craigslist rental listings
across the US and finds that they spatially concentrate and over-represent
whiter, wealthier, and better-educated communities. Other significant
demographic differences exist in age, language, college enrollment, rent,
poverty rate, and household size. Most cities' online housing markets are
digitally segregated by race and class, and we discuss various implications for
residential mobility, community legibility, gentrification, housing voucher
utilization, and automated monitoring and analytics in the smart cities
paradigm. While Craigslist contains valuable crowdsourced data to better
understand affordability and available rental supply in real-time, it does not
evenly represent all market segments. The Internet promises information
democratization, and online listings can reduce housing search costs and
increase choice sets. However, technology access/preferences and information
channel segregation can concentrate such information-broadcasting benefits in
already-advantaged communities, reproducing traditional inequalities and
reinforcing residential sorting and segregation dynamics. Technology platforms
like Craigslist construct new institutions with the power to shape spatial
economies.",Online Rental Housing Market Representation and the Digital Reproduction of Urban Inequality,2019-07-13 21:56:57,Geoff Boeing,"http://dx.doi.org/10.1177/0308518X19869678, http://arxiv.org/abs/1907.06118v2, http://arxiv.org/pdf/1907.06118v2",econ.GN
33503,gn,"Data providers such as government statistical agencies perform a balancing
act: maximising information published to inform decision-making and research,
while simultaneously protecting privacy. The emergence of identified
administrative datasets with the potential for sharing (and thus linking)
offers huge potential benefits but significant additional risks. This article
introduces the principles and methods of linking data across different sources
and points in time, focusing on potential areas of risk. We then consider
confidentiality risk, focusing in particular on the ""intruder"" problem central
to the area, and looking at both risks from data producer outputs and from the
release of micro-data for further analysis. Finally, we briefly consider
potential solutions to micro-data release, both the statistical solutions
considered in other contributed articles and non-statistical solutions.",Confidentiality and linked data,2019-07-15 15:31:03,"Felix Ritchie, Jim Smith","http://arxiv.org/abs/1907.06465v1, http://arxiv.org/pdf/1907.06465v1",cs.CR
33504,gn,"This work critics on nature of thermodynamics coordinates and on roles of the
variables in the equation of state (EoS). Coordinate variables in the EoS are
analyzed so that central concepts are noticed and are used to lay a foundation
in building of a new EoS or in testing EoS status of a newly constructed
empirical equation. With these concepts, we classify EoS into two classes. We
find that the EoS of market with unitary price demand and linear
price-dependent supply function proposed by \cite{GumjMarket}, is not an EoS
because it has only one degree of freedom.",Nature of thermodynamics equation of state towards economics equation of state,2019-07-10 22:24:33,"Burin Gumjudpai, Yuthana Sethapramote","http://arxiv.org/abs/1907.07108v1, http://arxiv.org/pdf/1907.07108v1",physics.soc-ph
33505,gn,"This paper brings together divergent approaches to time inconsistency from
macroeconomic policy and behavioural economics. Behavioural discount functions
from behavioural microeconomics are embedded into a game-theoretic analysis of
temptation versus enforcement to construct an encompassing model, nesting
combinations of time consistent and time inconsistent preferences. The analysis
presented in this paper shows that, with hyperbolic/quasihyperbolic
discounting, the enforceable range of inflation targets is narrowed. This
suggests limits to the effectiveness of monetary targets, under certain
conditions. The paper concludes with a discussion of monetary policy
implications, explored specifically in the light of current macroeconomic
policy debates.",Behavioural Macroeconomic Policy: New perspectives on time inconsistency,2019-07-18 06:31:06,Michelle Baddeley,"http://arxiv.org/abs/1907.07858v1, http://arxiv.org/pdf/1907.07858v1",econ.TH
33507,gn,"We analyze how tariff design incentivizes households to invest in residential
photovoltaic and battery systems, and explore selected power sector effects. To
this end, we apply an open-source power system model featuring prosumage agents
to German 2030 scenarios. Results show that lower feed-in tariffs substantially
reduce investments in photovoltaics, yet optimal battery sizing and
self-generation are relatively robust. With increasing fixed parts of retail
tariffs, optimal battery capacities and self-generation are smaller, and
households contribute more to non-energy power sector costs. When choosing
tariff designs, policy makers should not aim to (dis-)incentivize prosumage as
such, but balance effects on renewable capacity expansion and system cost
contribution.","Prosumage of solar electricity: tariff design, capacity investments, and power system effects",2019-07-23 16:08:21,"Claudia Günther, Wolf-Peter Schill, Alexander Zerrahn","http://arxiv.org/abs/1907.09855v1, http://arxiv.org/pdf/1907.09855v1",econ.GN
33508,gn,"We propose an extended spatial evolutionary public goods game (SEPGG) model
to study the dynamics of individual career choice and the corresponding social
output. Based on social value orientation theory, we categorize two classes of
work: public work, if it serves public interests, and private work, if it
serves personal interests. In the context of SEPGG, choosing public work
corresponds to cooperating and choosing private work to defecting. We
then investigate the effects of employee productivity, human capital and
external subsidies on individual career choices of the two work types, as well
as the overall social welfare. From simulation results, we found that when
employee productivity of public work is low, people are more willing to enter
the private sector. Although this will make both the effort level and human
capital of individuals doing private work higher than those engaging in public
work, the total outcome of the private sector is still lower than that of the
public sector provided a low level of public subsidies. When the employee
productivity is higher for public work, a certain amount of subsidy can greatly
improve system output. On the contrary, when the employee productivity of
public work is low, provisions of subsidy to the public sector can result in a
decline in social output.",Career Choice as an Extended Spatial Evolutionary Public Goods Game,2019-07-31 06:13:10,"Yuan Cheng, Yanbo Xue, Meng Chang","http://dx.doi.org/10.1016/j.chaos.2020.109856, http://arxiv.org/abs/1907.13296v1, http://arxiv.org/pdf/1907.13296v1",physics.soc-ph
33509,gn,"This paper is part of the research on the interlinkages between insurers and
their contribution to systemic risk on the insurance market. Its main purpose
is to present the results of the analysis of linkage dynamics and systemic risk
in the European insurance sector which are obtained using correlation networks.
These networks are based on dynamic dependence structures modelled using a
copula. Then, we determine minimum spanning trees (MST). Finally, the linkage
dynamics is described by means of selected topological network measures.",Linkages and systemic risk in the European insurance sector: Some new evidence based on dynamic spanning trees,2019-08-03 13:02:47,"Anna Denkowska, Stanisław Wanat","http://arxiv.org/abs/1908.01142v2, http://arxiv.org/pdf/1908.01142v2",q-fin.ST
33510,gn,"Foreign power interference in domestic elections is an existential threat to
societies. Manifested through myriad methods from war to words, such
interference is a timely example of strategic interaction between economic and
political agents. We model this interaction between rational game players as a
continuous-time differential game, constructing an analytical model of this
competition with a variety of payoff structures. All-or-nothing attitudes by
only one player regarding the outcome of the game lead to an arms race in which
both countries spend increasing amounts on interference and
counter-interference operations. We then confront our model with data
pertaining to the Russian interference in the 2016 United States presidential
election contest. We introduce and estimate a Bayesian structural time series
model of election polls and social media posts by Russian Twitter troll
accounts. Our analytical model, while purposefully abstract and simple,
adequately captures many temporal characteristics of the election and social
media activity. We close with a discussion of our model's shortcomings and
suggestions for future research.",Noncooperative dynamics in election interference,2019-08-07 21:42:08,"David Rushing Dewhurst, Christopher M. Danforth, Peter Sheridan Dodds","http://dx.doi.org/10.1103/PhysRevE.101.022307, http://arxiv.org/abs/1908.02793v4, http://arxiv.org/pdf/1908.02793v4",physics.soc-ph
33511,gn,"I propose a novel method, the Wasserstein Index Generation model (WIG), to
generate a public sentiment index automatically. To test the model's
effectiveness, an application generating an Economic Policy Uncertainty (EPU)
index is showcased.",Wasserstein Index Generation Model: Automatic Generation of Time-series Index with Application to Economic Policy Uncertainty,2019-08-12 23:25:41,Fangzhou Xie,"http://dx.doi.org/10.1016/j.econlet.2019.108874, http://arxiv.org/abs/1908.04369v4, http://arxiv.org/pdf/1908.04369v4",econ.GN
33512,gn,"In the article we describe an enhancement to the Demand for Labour (DL)
survey conducted by Statistics Poland, which involves the inclusion of skills
obtained from online job advertisements. The main goal is to provide estimates
of the demand for skills (competences), which is missing in the DL survey. To
achieve this, we apply a data integration approach combining traditional
calibration with the LASSO-assisted approach to correct representation error in
the online data. Faced with the lack of access to unit-level data from the DL
survey, we use estimated population totals and propose a bootstrap approach
that accounts for the uncertainty of totals reported by Statistics Poland. We
show that the calibration estimator assisted with LASSO outperforms traditional
calibration in terms of standard errors and reduces representation bias in
skills observed in online job ads. Our empirical results show that online data
significantly overestimate interpersonal, managerial and self-organization
skills while underestimating technical and physical skills. This is mainly due
to the under-representation of occupations categorised as Craft and Related
Trades Workers and Plant and Machine Operators and Assemblers.",Enhancing the Demand for Labour survey by including skills from online job advertisements using model-assisted calibration,2019-08-08 13:43:33,"Maciej Beręsewicz, Greta Białkowska, Krzysztof Marcinkowski, Magdalena Maślak, Piotr Opiela, Robert Pater, Katarzyna Zadroga","http://dx.doi.org/10.18148/srm/2021.v15i2.7670, http://arxiv.org/abs/1908.06731v1, http://arxiv.org/pdf/1908.06731v1",econ.GN
33513,gn,"The concept of a blockchain has given way to the development of
cryptocurrencies, enabled smart contracts, and unlocked a plethora of other
disruptive technologies. But, beyond its use case in cryptocurrencies, and in
network coordination and automation, blockchain technology may have serious
sociotechnical implications in the future co-existence of robots and humans.
Motivated by the recent explosion of interest around blockchains, and our
extensive work on open-source blockchain technology and its integration into
robotics, this paper provides insights into ways in which blockchains and other
decentralized technologies can impact our interactions with robot agents and
the social integration of robots into human society.",Robonomics: The Study of Robot-Human Peer-to-Peer Financial Transactions and Agreements,2019-08-18 11:04:41,"Irvin Steve Cardenas, Jong-Hoon Kim","http://arxiv.org/abs/1908.07393v1, http://arxiv.org/pdf/1908.07393v1",cs.CY
33514,gn,"The hidden-action model captures a fundamental problem of principal-agent
theory and provides an optimal sharing rule when only the outcome but not the
effort can be observed. However, the hidden-action model builds on various
explicit and also implicit assumptions about the information of the contracting
parties. This paper relaxes key assumptions regarding the availability of
information included in the hidden-action model in order to study whether and,
if so, how fast the optimal sharing rule is achieved and how this is affected
by the various types of information employed in the principal-agent relation.
Our analysis particularly focuses on information about the environment and
about feasible actions for the agent. We follow an approach to transfer
closed-form mathematical models into agent-based computational models and show
that the extent of information about feasible options to carry out a task only
has an impact on performance if decision makers are well informed about the
environment, and that the decision whether to perform exploration or
exploitation when searching for new feasible options only affects performance
in specific situations. Having good information about the environment, on the
contrary, appears to be crucial in almost all situations.",Decision-facilitating information in hidden-action setups: An agent-based approach,2019-08-21 20:23:31,"Stephan Leitner, Friederike Wall","http://arxiv.org/abs/1908.07998v2, http://arxiv.org/pdf/1908.07998v2",econ.GN
33515,gn,"An e-scooter trip model is estimated from four U.S. cities: Portland, Austin,
Chicago and New York City. A log-log regression model is estimated for
e-scooter trips based on user age, population, land area, and the number of
scooters. The model predicts 75K daily e-scooter trips in Manhattan for a
deployment of 2000 scooters, which translates to 77 million USD in annual
revenue. We propose a novel nonlinear, multifactor model to break down the
number of daily trips by the alternative modes of transportation that they
would likely substitute based on statistical similarity. The model parameters
reveal a relationship with direct trips of bike, walk, carpool, automobile and
taxi as well as access/egress trips with public transit in Manhattan. Our model
estimates that e-scooters could replace 32% of carpool; 13% of bike; and 7.2%
of taxi trips. The distance structure of revenue from access/egress trips is
found to differ from that of other substituted trips.",Forecasting e-scooter substitution of direct and access trips by mode and distance,2019-08-22 00:58:25,"Mina Lee, Joseph Y. J. Chow, Gyugeun Yoon, Brian Yueshuai He","http://dx.doi.org/10.1016/j.trd.2021.102892, http://arxiv.org/abs/1908.08127v3, http://arxiv.org/pdf/1908.08127v3",econ.GN
33517,gn,"Revenue sharing contracts between Content Providers (CPs) and Internet
Service Providers (ISPs) can act as leverage for enhancing the infrastructure
of the Internet. ISPs can be incentivized to make investments in network
infrastructure that improve Quality of Service (QoS) for users if attractive
contracts are negotiated between them and CPs. The idea here is that part of
the net profit gained by CPs is given to ISPs to invest in the network. The
Moral Hazard economic framework is used to model such an interaction, in which
a principal determines a contract, and an agent reacts by adapting her effort.
In our setting, several competitive CPs interact through one common ISP. Two
cases are studied: (i) the ISP differentiates between the CPs and makes a
(potentially) different investment to improve the QoS of each CP, and (ii) the
ISP does not differentiate between CPs and makes a common investment for both.
The last scenario can be viewed as \emph{network neutral behavior} on the part
of the ISP. We analyse the optimal contracts and show that the CP that can
better monetize its demand always prefers the non-neutral regime.
Interestingly, ISP revenue, as well as social utility, are also found to be
higher under the non-neutral regime.",Revenue Sharing in the Internet: A Moral Hazard Approach and a Net-neutrality Perspective,2019-08-26 13:12:10,"Fehmina Malik, Manjesh K. Hanawal, Yezekael Hayel, Jayakrishnan Nair","http://arxiv.org/abs/1908.09580v1, http://arxiv.org/pdf/1908.09580v1",econ.GN
33518,gn,"Results of a convincing causal statistical inference related to
socio-economic phenomena are treated as especially desired background for
conducting various socio-economic programs or government interventions.
Unfortunately, quite often real socio-economic issues do not fulfill
restrictive assumptions of procedures of causal analysis proposed in the
literature. This paper indicates certain empirical challenges and conceptual
opportunities related to applications of procedures of data depth concept into
a process of causal inference as to socio-economic phenomena. We show, how to
apply a statistical functional depths in order to indicate factual and
counterfactual distributions commonly used within procedures of causal
inference. Thus a modification of Rubin causality concept is proposed, i.e., a
centrality-oriented causality concept. The presented framework is especially
useful in a context of conducting causal inference basing on official
statistics, i.e., basing on already existing databases. Methodological
considerations related to extremal depth, modified band depth, Fraiman-Muniz
depth, and multivariate Wilcoxon sum rank statistic are illustrated by means of
example related to a study of an impact of EU direct agricultural subsidies on
a digital development in Poland in a period of 2012-2019.",Centrality-oriented Causality -- A Study of EU Agricultural Subsidies and Digital Developement in Poland,2019-08-29 11:36:09,"Kosiorowski Daniel, Jerzy P. Rydlewski","http://dx.doi.org/10.37190/ord200303, http://arxiv.org/abs/1908.11099v2, http://arxiv.org/pdf/1908.11099v2",stat.AP
33519,gn,"We study the relationship between national culture and the disposition effect
by investigating international differences in the degree of investors'
disposition effect. We utilize brokerage data of 387,993 traders from 83
countries and find great variation in the degree of the disposition effect
across the world. We find that the cultural dimensions of long-term orientation
and indulgence help to explain why certain nationalities are more prone to the
disposition effect. We also find support on an international level for the role
of age and gender in explaining the disposition effect.",Culture and the disposition effect,2019-08-30 03:21:31,"Bastian Breitmayer, Tim Hasso, Matthias Pelster","http://dx.doi.org/10.1016/j.econlet.2019.108653, http://arxiv.org/abs/1908.11492v1, http://arxiv.org/pdf/1908.11492v1",q-fin.GN
33521,gn,"Equal access to voting is a core feature of democratic government. Using data
from millions of smartphone users, we quantify a racial disparity in voting
wait times across a nationwide sample of polling places during the 2016 U.S.
presidential election. Relative to entirely-white neighborhoods, residents of
entirely-black neighborhoods waited 29% longer to vote and were 74% more likely
to spend more than 30 minutes at their polling place. This disparity holds when
comparing predominantly white and black polling places within the same states
and counties, and survives numerous robustness and placebo tests. We shed light
on the mechanism for these results and discuss how geospatial data can be an
effective tool to both measure and monitor these disparities going forward.",Racial Disparities in Voting Wait Times: Evidence from Smartphone Data,2019-08-30 21:24:17,"M. Keith Chen, Kareem Haggag, Devin G. Pope, Ryne Rohla","http://arxiv.org/abs/1909.00024v2, http://arxiv.org/pdf/1909.00024v2",econ.GN
33522,gn,"We conduct a sequential social-learning experiment where subjects each guess
a hidden state based on private signals and the guesses of a subset of their
predecessors. A network determines the observable predecessors, and we compare
subjects' accuracy on sparse and dense networks. Accuracy gains from social
learning are twice as large on sparse networks compared to dense networks.
Models of naive inference where agents ignore correlation between observations
predict this comparative static in network density, while the finding is
difficult to reconcile with rational-learning models.",An Experiment on Network Density and Sequential Learning,2019-09-05 09:05:21,"Krishna Dasaratha, Kevin He","http://dx.doi.org/10.1016/j.geb.2021.04.004, http://arxiv.org/abs/1909.02220v3, http://arxiv.org/pdf/1909.02220v3",econ.TH
33523,gn,"We consider a non-zero-sum linear quadratic Gaussian (LQG) dynamic game with
asymmetric information. Each player observes privately a noisy version of a
(hidden) state of the world $V$, resulting in dependent private observations.
We study perfect Bayesian equilibria (PBE) for this game with equilibrium
strategies that are linear in players' private estimates of $V$. The main
difficulty arises from the fact that players need to construct estimates on
other players' estimate on $V$, which in turn would imply that an infinite
hierarchy of estimates on estimates needs to be constructed, rendering the
problem unsolvable. We show that this is not the case: each player's estimate
on other players' estimates on $V$ can be summarized into her own estimate on
$V$ and some appropriately defined public information. Based on this finding we
characterize the PBE through a backward/forward algorithm akin to dynamic
programming for the standard LQG control problem. Unlike the standard LQG
problem, however, Kalman filter covariance matrices, as well as some other
required quantities, are observation-dependent and thus cannot be evaluated
off-line through a forward recursion.",Linear Equilibria for Dynamic LQG Games with Asymmetric Information and Dependent Types,2019-09-11 05:55:28,"Nasimeh Heydaribeni, Achilleas Anastasopoulos","http://arxiv.org/abs/1909.04834v1, http://arxiv.org/pdf/1909.04834v1",econ.GN
33526,gn,"This systemic risk paper introduces inhomogeneous random financial networks
(IRFNs). Such models are intended to describe parts, or the entirety, of a
highly heterogeneous network of banks and their interconnections, in the global
financial system. Both the balance sheets and the stylized crisis behaviour of
banks are ingredients of the network model. A systemic crisis is pictured as
triggered by a shock to banks' balance sheets, which then leads to the
propagation of damaging shocks and the potential for amplification of the
crisis, ending with the system in a cascade equilibrium. Under some conditions
the model has ``locally tree-like independence (LTI)'', where a general
percolation theoretic argument leads to an analytic fixed point equation
describing the cascade equilibrium when the number of banks $N$ in the system
is taken to infinity. This paper focusses on mathematical properties of the
framework in the context of Eisenberg-Noe solvency cascades generalized to
account for fractional bankruptcy charges. New results including a definition
and proof of the ``LTI property'' of the Eisenberg-Noe solvency cascade
mechanism lead to explicit $N=\infty$ fixed point equations that arise under
very general model specifications. The essential formulas are shown to be
implementable via well-defined approximation schemes, but numerical exploration
of some of the wide range of potential applications of the method is left for
future work.",Systemic Cascades On Inhomogeneous Random Financial Networks,2019-09-20 00:18:48,T. R. Hurd,"http://arxiv.org/abs/1909.09239v1, http://arxiv.org/pdf/1909.09239v1",q-fin.GN
33527,gn,"We consider the problem of designing a derivatives exchange aiming at
addressing clients' needs in terms of listed options and providing suitable
liquidity. We proceed in two steps. First, we use a quantization method to
select the options that should be displayed by the exchange. Then, using a
principal-agent approach, we design a make-take fees contract between the
exchange and the market maker. The role of this contract is to provide
incentives to the market maker so that he offers small spreads for the whole
range of listed options, hence attracting transactions and meeting the
commercial requirements of the exchange.",How to design a derivatives market?,2019-09-20 02:30:11,"Bastien Baldacci, Paul Jusselin, Mathieu Rosenbaum","http://arxiv.org/abs/1909.09257v1, http://arxiv.org/pdf/1909.09257v1",q-fin.TR
33529,gn,"The paper uses diffusion models to understand the main determinants of
diffusion of solar photovoltaic panels (SPP) worldwide, focusing on the role of
public incentives. We applied the generalized Bass model (GBM) to adoption data
of 26 countries between 1992 and 2016. The SPP market appears to be a frail and
complicated one, lacking public media support. Even the major shocks in
adoption curves, following state incentives implemented after 2006, failed to
go beyond short-term effects and were therefore unable to provide sustained momentum to
the market. This suggests that further barriers to adoption should be removed.",What do adoption patterns of solar panels observed so far tell about governments' incentive? insight from diffusion models,2019-09-22 17:38:31,"Anita M. Bunea, Pietro Manfredi, Pompeo Della Posta, Mariangela Guidolin","http://arxiv.org/abs/1909.10017v1, http://arxiv.org/pdf/1909.10017v1",stat.AP
33530,gn,"A model is proposed to allocate Formula One World Championship prize money
among the constructors. The methodology is based on pairwise comparison
matrices, allows for the use of any weighting method, and makes it possible to
tune the level of inequality. We introduce an axiom called scale invariance,
which requires the ranking of the teams to be independent of the parameter
controlling inequality. The eigenvector method is revealed to violate this
condition in our dataset, while the row geometric mean method always satisfies
it. The revenue allocation is not influenced by the arbitrary valuation given
to the race prizes in the official points scoring system of Formula One and
takes the intensity of pairwise preferences into account, contrary to the
standard Condorcet method. Our approach can be used to share revenues among
groups when group members are ranked several times.",Revenue allocation in Formula One: a pairwise comparison approach,2019-09-25 20:05:32,"Dóra Gréta Petróczy, László Csató","http://dx.doi.org/10.1080/03081079.2020.1870224, http://arxiv.org/abs/1909.12931v4, http://arxiv.org/pdf/1909.12931v4",econ.GN
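The revenue-allocation abstract above contrasts the eigenvector method with the row geometric mean method for deriving weights from a pairwise comparison matrix. A small illustrative computation of both weight vectors on a placeholder matrix (not the paper's Formula One data):

```python
# Illustrative sketch of the two weighting methods mentioned in the abstract,
# applied to a small placeholder pairwise comparison matrix.
import numpy as np

A = np.array([
    [1.0,  2.0,   4.0],
    [0.5,  1.0,   3.0],
    [0.25, 1/3.0, 1.0],
])  # A[i, j] = how strongly team i is preferred to team j

# Row geometric mean method: weight_i proportional to the geometric mean of row i.
gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
w_gm = gm / gm.sum()

# Eigenvector method: principal right eigenvector of A (real for positive A).
vals, vecs = np.linalg.eig(A)
v = np.real(vecs[:, np.argmax(np.real(vals))])
w_ev = np.abs(v) / np.abs(v).sum()

print("row geometric mean weights:", np.round(w_gm, 3))
print("eigenvector weights:       ", np.round(w_ev, 3))
```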
33531,gn,"An important question in economics is how people choose between different
payments in the future. The classical normative model predicts that a decision
maker discounts a later payment relative to an earlier one by an exponential
function of the time between them. Descriptive models use non-exponential
functions to fit observed behavioral phenomena, such as preference reversal.
Here we propose a model of discounting, consistent with standard axioms of
choice, in which decision makers maximize the growth rate of their wealth. Four
specifications of the model produce four forms of discounting -- no
discounting, exponential, hyperbolic, and a hybrid of exponential and
hyperbolic -- two of which predict preference reversal. Our model requires no
assumption of behavioral bias or payment risk.",Microfoundations of Discounting,2019-10-04 23:30:45,"Alexander T. I. Adamou, Yonatan Berman, Diomides P. Mavroyiannis, Ole B. Peters","http://arxiv.org/abs/1910.02137v3, http://arxiv.org/pdf/1910.02137v3",econ.TH
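The discounting abstract above mentions exponential and hyperbolic specifications and preference reversal. A toy numeric check of how hyperbolic discounting reverses a choice while exponential discounting does not; the discount functions are the textbook forms and the parameters are arbitrary, not the paper's growth-rate derivation:

```python
# Toy check of preference reversal: a small-sooner vs large-later reward pair,
# evaluated now and after pushing both rewards 10 periods into the future.
# Discount functions are the textbook forms; parameter values are arbitrary.
from math import exp

def d_exp(t, r=0.10):
    return exp(-r * t)          # exponential: D(t) = exp(-r t)

def d_hyp(t, k=1.0):
    return 1.0 / (1.0 + k * t)  # hyperbolic: D(t) = 1 / (1 + k t)

small, large = 100.0, 150.0     # reward sizes
for delay in (0.0, 10.0):
    t_s, t_l = delay + 1.0, delay + 3.0
    hyp = "small-sooner" if small * d_hyp(t_s) > large * d_hyp(t_l) else "large-later"
    expo = "small-sooner" if small * d_exp(t_s) > large * d_exp(t_l) else "large-later"
    print(f"delay={delay:4.1f}: hyperbolic prefers {hyp}, exponential prefers {expo}")
```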
33532,gn,"We analyze the optimal control of disease prevention and treatment in a basic
SIS model. We develop a simple macroeconomic setup in which the social planner
determines how to optimally intervene, through income taxation, in order to
minimize the social cost, inclusive of infection and economic costs, of the
spread of an epidemic disease. The disease lowers economic production and thus
income by reducing the size of the labor force employed in productive
activities, tightening thus the economy's overall resources constraint. We
consider a framework in which the planner uses the collected tax revenue to
intervene in either prevention (aimed at reducing the rate of infection) or
treatment (aimed at increasing the speed of recovery). Both optimal prevention
and treatment policies allow the economy to achieve a disease-free equilibrium
in the long run but their associated costs are substantially different along
the transitional dynamic path. By quantifying the social costs associated with
prevention and treatment we determine which policy is most cost-effective under
different circumstances, showing that prevention (treatment) is desirable
whenever the infectivity rate is low (high).",Optimal Control of Prevention and Treatment in a Basic Macroeconomic-Epidemiological Model,2019-10-08 16:28:11,"Davide La Torre, Tufail Malik, Simone Marsiglio","http://arxiv.org/abs/1910.03383v1, http://arxiv.org/pdf/1910.03383v1",econ.TH
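The SIS control abstract compares prevention (reducing infection) with treatment (speeding recovery). A minimal sketch of the underlying SIS dynamics, with illustrative parameters and functional forms rather than the paper's optimal-control solution:

```python
# Minimal sketch of the SIS dynamics behind the abstract's policy comparison:
# prevention scales down the infection rate, treatment scales up recovery.
# Parameters and functional forms are illustrative, not the paper's model.
def simulate_sis(beta, gamma, prevention=0.0, treatment=0.0,
                 i0=0.05, T=200.0, dt=0.1):
    """Final infected share under dI/dt = beta_eff*(1-I)*I - gamma_eff*I."""
    beta_eff = beta * (1.0 - prevention)   # prevention reduces transmission
    gamma_eff = gamma * (1.0 + treatment)  # treatment speeds up recovery
    i = i0
    for _ in range(int(T / dt)):
        i += dt * (beta_eff * (1.0 - i) * i - gamma_eff * i)
    return i

beta, gamma = 0.4, 0.2   # basic reproduction number R0 = beta/gamma = 2
print("no intervention      :", round(simulate_sis(beta, gamma), 4))
print("prevention (60% cut) :", round(simulate_sis(beta, gamma, prevention=0.6), 4))
print("treatment (2x faster):", round(simulate_sis(beta, gamma, treatment=1.0), 4))
```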
33533,gn,"This paper offers new mathematical models to measure the most productive
scale size (MPSS) of production systems with mixed structure networks (mixed of
series and parallel). In the first property, we deal with a general multi-stage
network which can be transformed, using dummy processes, into a series of
parallel networks. In the second property, we consider a direct network
combined with series and parallel structure. In this paper, we propose new
models to measure the overall MPSS of the production systems and their internal
processes. MPSS decomposition is discussed and examined. As a real-life
application, this study measures the efficiency and MPSS of research and
development (R&D) activities of Chinese provinces within an R&D value chain
network. In the R&D value chain, profitability and marketability stages are
connected in series, where the profitability stage is composed of operation and
R&D efforts connected in parallel. The MPSS network model provides not only the
MPSS measurement but also values that indicate the appropriate degree of
intermediate measures for the two stages. Improvement strategy is given for
each region based on the gap between the current and the appropriate level of
intermediate measures. Our findings show that the marketability efficiency
values of Chinese R&D regions were low, and no regions are operated under the
MPSS. As a result, most Chinese regions performed inefficiently regarding both
profitability and marketability. This finding provides initial evidence that
the generally lower profitability and marketability efficiency of Chinese
regions is a severe problem that may be due to wasted resources on production
and R&D.",Most productive scale size of China's regional R&D value chain: A mixed structure network,2019-10-09 09:20:50,"Saeed Assani, Jianlin Jiang, Ahmad Assani, Feng Yang","http://arxiv.org/abs/1910.03805v1, http://arxiv.org/pdf/1910.03805v1",econ.GN
33534,gn,"Due to superstition, license plates with desirable combinations of characters
are highly sought after in China, fetching prices that can reach into the
millions in government-held auctions. Despite the high stakes involved, there
has been essentially no attempt to provide price estimates for license plates.
We present an end-to-end neural network model that simultaneously predicts the
auction price, gives the distribution of prices, and produces latent feature
vectors. While both types of neural network architectures we consider
outperform simpler machine learning methods, convolutional networks outperform
recurrent networks for comparable training time or model complexity. The
resulting model powers our online price estimator and search engine.",Predicting Auction Price of Vehicle License Plate with Deep Residual Learning,2019-10-08 19:38:06,Vinci Chow,"http://dx.doi.org/10.1007/978-3-030-26142-9_16, http://arxiv.org/abs/1910.04879v1, http://arxiv.org/pdf/1910.04879v1",cs.CV
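The license-plate abstract reports that convolutional architectures outperform recurrent ones for predicting auction prices from plate characters. Below is a small character-level convolutional regressor in that spirit, assuming tf.keras is available; the layer sizes, vocabulary, and random training data are placeholders, not the paper's model or prices:

```python
# Sketch of a character-level convolutional price regressor in the spirit of
# the abstract. Architecture, vocabulary, and the random placeholder data are
# illustrative only.
import numpy as np
import tensorflow as tf

PLATE_LEN, VOCAB = 6, 36          # e.g. 6 characters drawn from [A-Z0-9]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(PLATE_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(input_dim=VOCAB, output_dim=16),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),     # predicted log auction price
])
model.compile(optimizer="adam", loss="mse")

# placeholder training data: random plates and random log prices
rng = np.random.default_rng(0)
x = rng.integers(0, VOCAB, size=(512, PLATE_LEN))
y = rng.normal(loc=10.0, scale=1.0, size=(512, 1))
model.fit(x, y, epochs=2, batch_size=64, verbose=0)
print(model.predict(x[:3], verbose=0).ravel())
```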
33535,gn,"It is well-known that value added per worker is extremely heterogeneous among
firms, but relatively little has been done to characterize this heterogeneity
more precisely. Here we show that the distribution of value-added per worker
exhibits heavy tails, a very large support, and consistently features a
proportion of negative values, which prevents log transformation. We propose to
model the distribution of value added per worker using the four parameter
Lévy stable distribution, a natural candidate deriving from the Generalised
Central Limit Theorem, and we show that it is a better fit than key
alternatives. Fitting a distribution allows us to capture dispersion through
the tail exponent and scale parameters separately. We show that these
parametric measures of dispersion are at least as useful as interquantile
ratios, through case studies on the evolution of dispersion in recent years and
the correlation between dispersion and intangible capital intensity.",Measuring productivity dispersion: a parametric approach using the Lévy alpha-stable distribution,2019-10-11 17:41:02,"Jangho Yang, Torsten Heinrich, Julian Winkler, François Lafond, Pantelis Koutroumpis, J. Doyne Farmer","http://arxiv.org/abs/1910.05219v2, http://arxiv.org/pdf/1910.05219v2",econ.GN
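The productivity-dispersion abstract fits a four-parameter Lévy alpha-stable distribution and reads dispersion off the tail and scale parameters. A hedged sketch with SciPy's levy_stable on simulated data (the fit can be slow, and the sample here is synthetic, not firm-level value added per worker):

```python
# Hedged sketch of fitting the four-parameter stable distribution with SciPy.
# The sample is simulated here, and levy_stable.fit can take a while because it
# runs maximum likelihood on a costly density.
from scipy.stats import levy_stable

sample = levy_stable.rvs(alpha=1.6, beta=0.3, loc=50.0, scale=20.0,
                         size=1000, random_state=0)

alpha, beta, loc, scale = levy_stable.fit(sample)
print(f"tail exponent alpha={alpha:.2f}, skewness beta={beta:.2f}, "
      f"location={loc:.1f}, scale (dispersion)={scale:.1f}")
```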
33536,gn,"People and companies move money with every financial transaction they make.
We aim to understand how such activity gives rise to large-scale patterns of
monetary flow. In this work, we trace the movement of e-money through the
accounts of a mobile money system using the provider's own transaction records.
The resulting transaction sequences---balance-respecting trajectories---are
data objects that represent observed monetary flows. Common sequential motifs
correspond to known use-cases of mobile money: digital payments, digital
transfers, and money storage. We find that each activity creates a distinct
network structure within the system, and we uncover coordinated gaming of the
mobile money provider's commission schedule. Moreover, we find that e-money
passes through the system in anywhere from minutes to months. This pronounced
heterogeneity, even within the same use-case, can inform the modeling of
turnover in money supply. Our methodology relates economic activity at the
transaction level to large-scale patterns of monetary flow, broadening the
scope of empirical study about the network and temporal structure of the
economy.",Networks of monetary flow at native resolution,2019-10-12 19:39:07,Carolina Mattsson,"http://arxiv.org/abs/1910.05596v1, http://arxiv.org/pdf/1910.05596v1",physics.soc-ph
33537,gn,"Charles Cobb and Paul Douglas in 1928 used data from the US manufacturing
sector for 1899-1922 to introduce what is known today as the Cobb-Douglas
production function that has been widely used in economic theory for decades.
We employ the R programming language to fit the formulas for the parameters of
the Cobb-Douglas production function generated by the authors recently via the
bi-Hamiltonian approach to the same data set utilized by Cobb and Douglas. We
conclude that the formulas for the output elasticities and total factor
productivity are compatible with the original 1928 data.",The Cobb-Douglas production function revisited,2019-10-12 01:41:45,"Roman G. Smirnov, Kunpeng Wang","http://arxiv.org/abs/1910.06739v2, http://arxiv.org/pdf/1910.06739v2",econ.GN
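The Cobb-Douglas abstract fits the elasticities to the 1899-1922 data in R via formulas from a bi-Hamiltonian approach. For orientation, the standard way to estimate the same parameters is ordinary least squares on logs; the sketch below uses synthetic data and is not the authors' method:

```python
# Sketch of the textbook estimation of Cobb-Douglas parameters by OLS on logs:
# log P = log A + alpha*log L + beta*log K. Synthetic data, not the 1928 series
# or the paper's bi-Hamiltonian formulas.
import numpy as np

rng = np.random.default_rng(1)
n = 100
L = rng.uniform(50, 150, n)            # labor index
K = rng.uniform(50, 300, n)            # capital index
A_true, alpha_true, beta_true = 1.01, 0.75, 0.25
P = A_true * L**alpha_true * K**beta_true * np.exp(rng.normal(0, 0.02, n))

X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
logA, alpha, beta = coef
print(f"A={np.exp(logA):.3f}, alpha={alpha:.3f}, beta={beta:.3f}")
```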
33538,gn,"Inefficient markets allow investors to consistently outperform the market. To
demonstrate that inefficiencies exist in sports betting markets, we created a
betting algorithm that generates above market returns for the NFL, NBA, NCAAF,
NCAAB, and WNBA betting markets. To formulate our betting strategy, we
collected and examined a novel dataset of bets, and created a non-parametric
win probability model to find positive expected value situations. As the United
States Supreme Court has recently repealed the federal ban on sports betting,
research on sports betting markets is increasingly relevant for the growing
sports betting industry.",Beating the House: Identifying Inefficiencies in Sports Betting Markets,2019-10-20 02:16:17,"Sathya Ramesh, Ragib Mostofa, Marco Bornstein, John Dobelman","http://arxiv.org/abs/1910.08858v2, http://arxiv.org/pdf/1910.08858v2",econ.GN
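The sports-betting abstract builds a win-probability model and bets only in positive expected value situations. The core filter can be illustrated as follows; the odds conversion is standard, while the probabilities and odds listed are placeholders rather than the paper's model output:

```python
# Sketch of the positive-expected-value filter behind a strategy like the one
# described: bet only when the modeled win probability times the payout exceeds
# the stake. The probabilities and odds below are placeholders.
def american_to_decimal(odds):
    """Convert American odds to decimal payout per 1 unit staked."""
    return 1 + (odds / 100.0 if odds > 0 else 100.0 / -odds)

def expected_value(p_win, american_odds, stake=1.0):
    payout = american_to_decimal(american_odds) * stake
    return p_win * payout - stake

for p, odds in [(0.55, -110), (0.50, -120), (0.35, 250)]:
    ev = expected_value(p, odds)
    print(f"p_win={p:.2f}, odds={odds:+d}: EV per unit = {ev:+.3f}"
          f" -> {'bet' if ev > 0 else 'pass'}")
```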
33539,gn,"The famous Saint Petersburg Paradox (St. Petersburg Paradox) shows that the
theory of expected value does not capture the real-world economics of
decision-making problems. Over the years, many economic theories were developed
to resolve the paradox and explain gaps in the economic value theory in the
evaluation of economic decisions, the subjective utility of the expected
outcomes, and risk aversion as observed in the game of the St. Petersburg
Paradox. In this paper, we use the concept of the relative net utility to
resolve the St. Petersburg Paradox. Because the net utility concept is able to
explain both behavioral economics and the St. Petersburg Paradox, it is deemed
to be a universal approach to handling utility. This paper shows how the
information content of the notion of net utility value allows us to capture a
broader context of the impact of a decision's possible achievements. It
discusses the necessary conditions that the utility function has to satisfy in
order to avoid the paradox. Combining these necessary conditions allows us to define the
theorem of indifference in the evaluation of economic decisions and to present
the role of the relative net utility and net utility polarity in a value
rational decision-making process.",Relative Net Utility and the Saint Petersburg Paradox,2019-10-21 10:01:14,"Daniel Muller, Tshilidzi Marwala","http://arxiv.org/abs/1910.09544v3, http://arxiv.org/pdf/1910.09544v3",econ.GN
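The St. Petersburg abstract revolves around the gap between the game's divergent expected payoff and the finite value implied by a utility function. A small numeric illustration of that gap using the classic truncated expectation and log utility (the textbook calculation, not the paper's relative net utility framework):

```python
# Numeric illustration of the paradox itself: the truncated expected payoff of
# the St. Petersburg game grows without bound, while expected log-utility stays
# finite. This is the classic textbook computation.
import math

def truncated_expected_payoff(max_rounds):
    # round k (k = 1, 2, ...) pays 2**k with probability 2**(-k), so each term is 1
    return sum((2 ** -k) * (2 ** k) for k in range(1, max_rounds + 1))

def expected_log_utility():
    # sum_k 2**(-k) * log(2**k) = log(2) * sum_k k/2**k = 2*log(2)
    return sum((2 ** -k) * math.log(2 ** k) for k in range(1, 200))

for n in (10, 100, 1000):
    print(f"expected payoff truncated at {n} rounds: {truncated_expected_payoff(n):.0f}")
eu = expected_log_utility()
print("expected log utility:", round(eu, 4),
      "-> certainty equivalent", round(math.exp(eu), 2))
```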
33552,gn,"We compare the profit of the optimal third-degree price discrimination policy
against a uniform pricing policy. A uniform pricing policy offers the same
price to all segments of the market. Our main result establishes that for a
broad class of third-degree price discrimination problems with concave profit
functions (in the price space) and common support, a uniform price is
guaranteed to achieve one half of the optimal monopoly profits. This profit
bound holds for any number of segments and prices that the seller might use
under third-degree price discrimination. We establish that these conditions are
tight and that weakening either common support or concavity can lead to
arbitrarily poor profit comparisons even for regular or monotone hazard rate
distributions.",Third-Degree Price Discrimination Versus Uniform Pricing,2019-12-11 11:15:49,"Dirk Bergemann, Francisco Castro, Gabriel Weintraub","http://arxiv.org/abs/1912.05164v3, http://arxiv.org/pdf/1912.05164v3",econ.GN
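The pricing abstract above states that, under concavity and common support, a uniform price guarantees at least half of the optimal third-degree discrimination profit. A toy numeric check with two linear-demand segments (the demand parameters are placeholders; the example merely illustrates the comparison, not the proof):

```python
# Toy comparison of third-degree price discrimination against the best uniform
# price for two linear-demand segments with zero marginal cost. Parameters are
# placeholders; in this example the uniform price earns well over half of the
# discriminating profit, consistent with the paper's bound.
import numpy as np

segments = [(10.0, 1.0), (6.0, 0.25)]   # (a_i, b_i) in q_i(p) = a_i - b_i * p

def profit(p, a, b):
    return p * max(a - b * p, 0.0)

# discrimination: each segment gets its own monopoly price a/(2b)
pi_disc = sum(profit(a / (2 * b), a, b) for a, b in segments)

# best uniform price found by grid search
grid = np.linspace(0.0, 25.0, 2501)
pi_unif = max(sum(profit(p, a, b) for a, b in segments) for p in grid)

print(f"discriminating profit: {pi_disc:.2f}")
print(f"best uniform profit  : {pi_unif:.2f}  (ratio {pi_unif / pi_disc:.2f} >= 0.5)")
```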
33540,gn,"Recent advances in the fields of machine learning and neurofinance have
yielded new exciting research perspectives in practical inference of
behavioural economy in financial markets and microstructure study. We here
present the latest results from a recently published stock market simulator
built around a multi-agent system architecture, in which each agent is an
autonomous investor trading stocks by reinforcement learning (RL) via a
centralised double-auction limit order book. The RL framework allows for the
implementation of specific behavioural and cognitive traits known to trader
psychology, and thus to study the impact of these traits on the whole stock
market at the mesoscale. More precisely, we narrowed our agent design to three
such psychological biases known to have a direct correspondence with RL theory,
namely delay discounting, greed, and fear. We compared ensuing simulated data
to real stock market data over the past decade or so, and find that market
stability benefits from larger populations of agents prone to delay discounting
and most astonishingly, to greed.",Mesoscale impact of trader psychology on stock markets: a multi-agent AI approach,2019-10-10 18:15:06,"J. Lussange, S. Palminteri, S. Bourgeois-Gironde, B. Gutkin","http://arxiv.org/abs/1910.10099v1, http://arxiv.org/pdf/1910.10099v1",q-fin.GN
33541,gn,"We analyze pricing mechanisms in electricity markets with AC power flow
equations that define a nonconvex feasible set for the economic dispatch
problem. Specifically, we consider two possible pricing schemes. The first
among these prices are derived from Lagrange multipliers that satisfy
Karush-Kuhn-Tucker conditions for local optimality of the nonconvex market
clearing problem. The second is derived from optimal dual multipliers of the
convex semidefinite programming (SDP) based relaxation of the market clearing
problem. Relationships between these prices, their revenue adequacy and market
equilibrium properties are derived and compared. The SDP prices are shown to
equal distribution locational marginal prices derived with second-order conic
relaxations of power flow equations over radial distribution networks. We
illustrate our theoretical findings through numerical experiments.",Pricing Economic Dispatch with AC Power Flow via Local Multipliers and Conic Relaxation,2019-10-23 20:14:44,"Mariola Ndrio, Anna Winnicki, Subhonmesh Bose","http://arxiv.org/abs/1910.10673v3, http://arxiv.org/pdf/1910.10673v3",eess.SY
33542,gn,"While the benefits of common and public goods are shared, they tend to be
scarce when contributions are provided voluntarily. Failure to cooperate in the
provision or preservation of these goods is fundamental to sustainability
challenges, ranging from local fisheries to global climate change. In the real
world, such cooperative dilemmas occur in multiple interactions with complex
strategic interests and frequently without full information. We argue that
voluntary cooperation enabled across multiple coalitions (akin to
polycentricity) not only facilitates greater generation of non-excludable
public goods, but may also allow evolution toward a more cooperative, stable,
and inclusive approach to governance. Unlike any previous study, we show
that these merits of multi-coalition governance are far more general than the
singular examples occurring in the literature, and are robust under diverse
conditions of excludability, congestability of the non-excludable public good,
and arbitrary shapes of the return-to-contribution function. We first confirm
the intuition that a single coalition without enforcement and with players
pursuing their self-interest without knowledge of returns to contribution is
prone to cooperative failure. Next, we demonstrate that the same pessimistic
model but with a multi-coalition structure of governance experiences relatively
higher cooperation by enabling recognition of marginal gains of cooperation in
the game at stake. In the absence of enforcement, public-goods regimes that
evolve through a proliferation of voluntary cooperative forums can maintain and
increase cooperation more successfully than singular, inclusive regimes.",Coalition-structured governance improves cooperation to provide public goods,2019-10-24 05:13:43,"Vítor V. Vasconcelos, Phillip M. Hannam, Simon A. Levin, Jorge M. Pacheco","http://arxiv.org/abs/1910.11337v1, http://arxiv.org/pdf/1910.11337v1",econ.GN
33543,gn,"We study how personalized news aggregation for rationally inattentive voters
(NARI) affects policy polarization and public opinion. In a two-candidate
electoral competition model, an attention-maximizing infomediary aggregates
source data about candidates' valence into easy-to-digest news. Voters decide
whether to consume news, trading off the expected gain from improved expressive
voting against the attention cost. NARI generates policy polarization even if
candidates are office-motivated. Personalized news aggregation makes extreme
voters the disciplining entity of policy polarization, and the skewness of
their signals is crucial for sustaining a high degree of policy polarization in
equilibrium. Analysis of disciplining voters yields insights into the
equilibrium and welfare consequences of regulating infomediaries.",The Politics of Personalized News Aggregation,2019-10-24 23:04:16,"Lin Hu, Anqi Li, Ilya Segal","http://arxiv.org/abs/1910.11405v15, http://arxiv.org/pdf/1910.11405v15",econ.GN
33544,gn,"Input Output (IO) tables provide a standardised way of looking at monetary
flows between all industries in an economy. IO tables can be thought of as
networks - with the nodes being different industries and the edges being the
flows between them. We develop a network-based analysis to consider a
multi-regional IO network at regional and subregional level within a country.
We calculate both traditional matrix-based IO measures (e.g. 'multipliers') and
new network theory-based measures at this higher spatial resolution. We
contrast these methods with the results of a disruption model applied to the
same IO data in order to demonstrate that betweenness centrality gives a good
indication of flow-on economic disruption, while eigenvector-type centrality
measures give results comparable to traditional IO multipliers. We also show the
effects of treating IO networks at different levels of spatial aggregation.",Using network science to quantify economic disruptions in regional input-output networks,2019-10-28 11:35:03,"Emily P. Harvey, Dion R. J. O'Neale","http://arxiv.org/abs/1910.12498v1, http://arxiv.org/pdf/1910.12498v1",physics.soc-ph
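The record above describes computing both traditional IO multipliers and network centrality measures on the same inter-industry flow data. As a rough illustration only (not the authors' code), the following sketch contrasts Leontief type-I output multipliers with betweenness and eigenvector centrality on a hypothetical three-industry flow matrix; the matrix, output vector, and networkx calls are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): compare a traditional IO output
# multiplier with network centralities on a toy inter-industry flow matrix.
import numpy as np
import networkx as nx

# Hypothetical monetary flows Z[i, j] from industry i to industry j, and
# total output x for three illustrative industries.
Z = np.array([[10.0,  5.0,  2.0],
              [ 4.0, 20.0,  6.0],
              [ 1.0,  3.0, 15.0]])
x = np.array([50.0, 80.0, 40.0])

# Technical coefficients a_ij = z_ij / x_j and Leontief inverse; column sums
# give the classic type-I output multipliers.
A = Z / x
L = np.linalg.inv(np.eye(3) - A)
multipliers = L.sum(axis=0)

# Treat the same flows as a weighted directed network and compute
# betweenness (weights treated as distances here, a simplification) and
# eigenvector centrality for comparison.
G = nx.from_numpy_array(Z, create_using=nx.DiGraph)
betweenness = nx.betweenness_centrality(G, weight="weight")
eigenvector = nx.eigenvector_centrality_numpy(G, weight="weight")

print("IO multipliers:", np.round(multipliers, 3))
print("betweenness:   ", betweenness)
print("eigenvector:   ", eigenvector)
```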
33545,gn,"Cultural trends and popularity cycles can be observed all around us, yet our
theories of social influence and identity expression do not explain what
perpetuates these complex, often unpredictable social dynamics. We propose a
theory of social identity expression based on the opposing, but not mutually
exclusive, motives to conform and to be unique among one's neighbors in a
social network. We then model the social dynamics that arise from these
motives. We find that the dynamics typically enter random walks or stochastic
limit cycles rather than converging to a static equilibrium. We also prove that
without social network structure or, alternatively, without the uniqueness
motive, reasonable adaptive dynamics would necessarily converge to equilibrium.
Thus, we show that nuanced psychological assumptions (recognizing preferences
for uniqueness along with conformity) and realistic social network structure
are both necessary for explaining how complex, unpredictable cultural trends
emerge.","Hipsters and the Cool: A Game Theoretic Analysis of Social Identity, Trends and Fads",2019-10-29 19:52:44,"Russell Golman, Aditi Jain, Sonica Saraf","http://arxiv.org/abs/1910.13385v1, http://arxiv.org/pdf/1910.13385v1",econ.TH
33546,gn,"What is motivation and how does it work? Where do goals come from and how do
they vary within and between species and individuals? Why do we prefer some
things over others? MEDO is a theoretical framework for understanding these
questions in abstract terms, as well as for generating and evaluating specific
hypotheses that seek to explain goal-oriented behavior. MEDO views preferences
as selective pressures influencing the likelihood of particular outcomes. With
respect to biological organisms, these patterns must compete and cooperate in
shaping system evolution. To the extent that shaping processes are themselves
altered by experience, this enables feedback relationships where histories of
reward and punishment can impact future motivation. In this way, various biases
can undergo either amplification or attenuation, resulting in preferences and
behavioral orientations of varying degrees of inter-temporal and
inter-situational stability. MEDO specifically models all shaping dynamics in
terms of natural selection operating on multiple levels--genetic, neural, and
cultural--and even considers aspects of development to themselves be
evolutionary processes. Thus, MEDO reflects a kind of generalized Darwinism, in
that it assumes that natural selection provides a common principle for
understanding the emergence of complexity within all dynamical systems in which
replication, variation, and selection occur. However, MEDO combines this
evolutionary perspective with economic decision theory, which describes both
the preferences underlying individual choices, as well as the preferences
underlying choices made by engineers in designing optimized systems. In this
way, MEDO uses economic decision theory to describe goal-oriented behaviors as
well as the interacting evolutionary optimization processes from which they
emerge. (Please note: this manuscript was written and finalized in 2012.)",Multilevel evolutionary developmental optimization (MEDO): A theoretical framework for understanding preferences and selection dynamics,2019-10-28 01:58:17,Adam Safron,"http://arxiv.org/abs/1910.13443v2, http://arxiv.org/pdf/1910.13443v2",q-bio.NC
33547,gn,"As the decade turns, we reflect on nearly thirty years of successful
manipulation of the world's public equity markets. This reflection highlights a
few of the key enabling ingredients and lessons learned along the way. A
quantitative understanding of market impact and its decay, which we cover
briefly, lets you move long-term market prices to your advantage at acceptable
cost. Hiding your footprints turns out to be less important than moving prices
in the direction most people want them to move. Widespread (if misplaced) trust
of market prices -- buttressed by overestimates of the cost of manipulation and
underestimates of the benefits to certain market participants -- makes price
manipulation a particularly valuable and profitable tool. Of the many recent
stories heralding the dawn of the present golden age of misinformation, the
manipulation leading to the remarkable increase in the market capitalization of
the world's publicly traded companies over the past three decades is among the
best.",Celebrating Three Decades of Worldwide Stock Market Manipulation,2019-11-21 02:32:59,Bruce Knuteson,"http://arxiv.org/abs/1912.01708v1, http://arxiv.org/pdf/1912.01708v1",q-fin.GN
33548,gn,"The steel industry has great impacts on the economy and the environment of
both developed and underdeveloped countries. The importance of this industry
and these impacts have led many researchers to investigate the relationship
between a country's steel consumption and its economic activity resulting in
the so-called intensity of use model. This paper investigates the validity of
the intensity of use model for the case of Iran's steel consumption and extends
this hypothesis by using the indexes of economic activity to model the steel
consumption. We use the proposed model to train support vector machines and
predict the future values for Iran's steel consumption. The paper provides
detailed correlation tests for the factors used in the model to check for their
relationships with steel consumption. The results indicate that Iran's
steel consumption is strongly correlated with its economic activity, following
the same pattern as the economy over the last four decades.",Modeling and Prediction of Iran's Steel Consumption Based on Economic Activity Using Support Vector Machines,2019-12-05 07:13:35,"Hossein Kamalzadeh, Saeid Nassim Sobhan, Azam Boskabadi, Mohsen Hatami, Amin Gharehyakheh","http://arxiv.org/abs/1912.02373v1, http://arxiv.org/pdf/1912.02373v1",econ.GN
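The abstract above reports training support vector machines on economic-activity indexes to predict steel consumption. A minimal sketch of such a workflow, assuming synthetic data and an off-the-shelf scikit-learn SVR rather than the authors' specification:

```python
# Minimal sketch (assumed workflow, not the authors' code): support vector
# regression of steel consumption on economic-activity indexes.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical yearly data: columns = two economic-activity indexes,
# target = steel consumption (synthetic, for illustration only).
X = rng.normal(size=(40, 2))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.3, size=40)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:-5], y[:-5])          # train on the first 35 "years"
print(model.predict(X[-5:]))       # out-of-sample forecast for the last 5
```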
33549,gn,"A consumer who wants to consume a good in a particular period may
nevertheless attempt to buy it earlier if he is concerned that in delaying he
would find the good already sold. This paper considers a model in which the
good may be offered in two periods; the period in which all consumers most
value the good (period 2), and an earlier period (period 1). Examining the
profit-maximizing strategy of the firm under unbounded demand, we find that
even with no cost of making the product available early, the firm does not
profit, and usually loses, by making the product available early.
Interestingly, the price that maximizes profits induces all arrivals to occur
early, or all arrivals to occur late, depending on the parameters. The firm
would not set a price which induces consumers to arrive in both periods. In
particular, if the firm controls the penalty for arriving early, then it should
set a high penalty so that no one arrives early. The Nash equilibrium behavior
of consumers, when deciding if and when to arrive, is more complicated than one
may suppose, and can generate some unexpected behavior. For example, when there
is unbounded demand, most potential consumers decide not to arrive at all.
Additionally, the arrival rate may decline with the surplus a person gets from
buying the good. Surprisingly, we find that an increase in the number of units
for sale increases the number of consumers who arrive early. Moreover, we find
that the profit-maximizing price increases with the number of units offered for
sale. This too is unexpected as an increase in supply often results in price
reduction. In our case, an increase in the number of units on sale also
increases demand, and the seller may profit by increasing the price. In the
single-unit case, we give closed-form solutions for the equilibrium customer
behavior and profit-maximizing firm strategy and conduct sensitivity analysis.","How Advance Sales can Reduce Profits: When to Buy, When to Sell, and What Price to Charge",2019-12-05 23:49:29,"Amihai Glazer, Refael Hassin, Irit Nowik","http://arxiv.org/abs/1912.02869v2, http://arxiv.org/pdf/1912.02869v2",econ.GN
33551,gn,"Multimodal agglomerations, in the form of the existence of many cities,
dominate modern economic geography. We focus on the mechanism by which
multimodal agglomerations realize endogenously. In a spatial model with
agglomeration and dispersion forces, the spatial scale (local or global) of the
dispersion force determines whether endogenous spatial distributions become
multimodal. Multimodal patterns can emerge only under a global dispersion
force, such as competition effects, which induce deviations to locations
distant from an existing agglomeration and result in a separate agglomeration.
A local dispersion force, such as the local scarcity of land, causes the
flattening of existing agglomerations. The resulting spatial configuration is
unimodal if such a force is the only source of dispersion. This view allows us
to categorize extant models into three prototypical classes: those with only
global, only local, and local and global dispersion forces. The taxonomy
facilitates model choice depending on each study's objective.",Multimodal agglomeration in economic geography,2019-12-11 07:39:51,"Takashi Akamatsu, Tomoya Mori, Minoru Osawa, Yuki Takayama","http://arxiv.org/abs/1912.05113v5, http://arxiv.org/pdf/1912.05113v5",econ.GN
33553,gn,"This study provides the first systematic, international, large-scale evidence
on the extent and nature of multiple institutional affiliations on journal
publications. Studying more than 15 million authors and 22 million articles
from 40 countries we document that: In 2019, almost one in three articles was
(co-)authored by authors with multiple affiliations and the share of authors
with multiple affiliations increased from around 10% to 16% since 1996. The
growth of multiple affiliations is prevalent in all fields and it is stronger
in high impact journals. About 60% of multiple affiliations are between
institutions from within the academic sector. International co-affiliations,
which account for about a quarter of multiple affiliations, most often involve
institutions from the United States, China, Germany and the United Kingdom,
suggesting a core-periphery network. Network analysis also reveals a number of
communities of countries that are more likely to share affiliations. We discuss
potential causes and show that the timing of the rise in multiple affiliations
can be linked to the introduction of more competitive funding structures such
as 'excellence initiatives' in a number of countries. We discuss implications
for science and science policy.",The Rise of Multiple Institutional Affiliations in Academia,2019-12-11 22:08:01,"Hanna Hottenrott, Michael Rose, Cornelia Lawson","http://arxiv.org/abs/1912.05576v3, http://arxiv.org/pdf/1912.05576v3",econ.GN
33554,gn,"This paper develops a new model of business cycles. The model is economical
in that it is solved with an aggregate demand-aggregate supply diagram, and the
effects of shocks and policies are obtained by comparative statics. The model
builds on two unconventional assumptions. First, producers and consumers meet
through a matching function. Thus, the model features unemployment, which
fluctuates in response to aggregate demand and supply shocks. Second, wealth
enters the utility function, so the model allows for permanent zero-lower-bound
episodes. In the model, the optimal monetary policy is to set the interest rate
at the level that eliminates the unemployment gap. This optimal interest rate
is computed from the prevailing unemployment gap and monetary multiplier (the
effect of the nominal interest rate on the unemployment rate). If the
unemployment gap is exceedingly large, monetary policy cannot eliminate it
before reaching the zero lower bound, but a wealth tax can.",An Economical Business-Cycle Model,2019-12-16 05:32:56,"Pascal Michaillat, Emmanuel Saez","http://dx.doi.org/10.1093/oep/gpab021, http://arxiv.org/abs/1912.07163v2, http://arxiv.org/pdf/1912.07163v2",econ.TH
33555,gn,"The digital transformation is driving revolutionary innovations and new
market entrants threaten established sectors of the economy such as the
automotive industry. In response to the need to monitor shifting industries, we
present a network-centred analysis of car manufacturer web pages. Solely
exploiting publicly-available information, we construct large networks from web
pages and hyperlinks. The network properties disclose the internal corporate
positioning of the three largest automotive manufacturers, Toyota, Volkswagen
and Hyundai with respect to innovative trends and their international outlook.
We tag web pages concerned with topics like e-mobility and environment or
autonomous driving, and investigate their relevance in the network. Sentiment
analysis on individual web pages uncovers a relationship between page linking
and use of positive language, particularly with respect to innovative trends.
Web pages of the same country domain form clusters of different size in the
network that reveal strong correlations with sales market orientation. Our
approach maintains the web content's hierarchical structure imposed by the web
page networks. It thus presents a method to reveal hierarchical structures of
unstructured text content obtained from web scraping. It is highly transparent,
reproducible and data driven, and could be used to gain complementary insights
into innovative strategies of firms and competitive landscapes, which would not
be detectable by the analysis of web content alone.",Mining the Automotive Industry: A Network Analysis of Corporate Positioning and Technological Trends,2019-12-20 23:55:21,"Niklas Stoehr, Fabian Braesemann, Michael Frommelt, Shi Zhou","http://arxiv.org/abs/1912.10097v2, http://arxiv.org/pdf/1912.10097v2",cs.SI
33556,gn,"We propose a new method for conducting Bayesian prediction that delivers
accurate predictions without correctly specifying the unknown true data
generating process. A prior is defined over a class of plausible predictive
models. After observing data, we update the prior to a posterior over these
models, via a criterion that captures a user-specified measure of predictive
accuracy. Under regularity, this update yields posterior concentration onto the
element of the predictive class that maximizes the expectation of the accuracy
measure. In a series of simulation experiments and empirical examples we find
notable gains in predictive accuracy relative to conventional likelihood-based
prediction.",Focused Bayesian Prediction,2019-12-29 05:38:13,"Ruben Loaiza-Maya, Gael M. Martin, David T. Frazier","http://arxiv.org/abs/1912.12571v2, http://arxiv.org/pdf/1912.12571v2",stat.ME
33558,gn,"At the initial stages of this research, the assumption was that the
franchised businesses perhaps should not be affected much by recession as there
are multiple cash pools available inherent to the franchised business model.
However, the available data indicate otherwise: the stock
price performance discussed here follows a different pattern. The stock price
data are analyzed with an unconventional tool, the Weibull distribution, and the
observations confirm that franchised businesses either show a trend opposite to
that observed for non-franchised businesses, or that franchised stocks track
large food suppliers. There is a layered ownership and cash flow structure in a
franchised business model. The parent company run by franchiser depends on the
performance of child companies run by franchisees. Both parent and child
companies are run as independent businesses but only the parent company is
listed as a stock ticker in stock exchange. Does this double layer of vertical
operation, cash reserve, and cash flow protect them better in recession? The
data analyzed in this paper indicate that the recession effect can be more
severe; and if a franchised stock dives with the average market, a slower
recovery of its price should be expected. This paper characterizes the
differences and explains the natural experiment with available financial data.",Effect of Franchised Business models on Fast Food Company Stock Prices in Recession and Recovery with Weibull Analysis,2019-12-30 17:29:24,"Sandip Dutta, Vignesh Prabhu","http://arxiv.org/abs/1912.12940v1, http://arxiv.org/pdf/1912.12940v1",econ.GN
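The abstract above applies a Weibull analysis to stock price data. As an illustration only, and under the assumption that one fits a two-parameter Weibull to the magnitudes of daily price declines (the paper's exact procedure is not reproduced here), a short scipy sketch:

```python
# Minimal sketch (illustrative only, not the paper's procedure): fit a
# two-parameter Weibull to the magnitude of daily price declines and compare
# the estimated shape parameter across two hypothetical tickers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical daily returns for a franchised and a non-franchised stock.
returns = {"franchised": rng.normal(0.0003, 0.02, 1000),
           "non_franchised": rng.normal(0.0003, 0.03, 1000)}

for name, r in returns.items():
    declines = -r[r < 0]                      # magnitudes of down days
    shape, loc, scale = stats.weibull_min.fit(declines, floc=0)
    print(f"{name}: Weibull shape={shape:.2f}, scale={scale:.4f}")
```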
33560,gn,"Traditional US rental housing data sources such as the American Community
Survey and the American Housing Survey report on the transacted market - what
existing renters pay each month. They do not explicitly tell us about the spot
market - i.e., the asking rents that current homeseekers must pay to acquire
housing - though they are routinely used as a proxy. This study compares
governmental data to millions of contemporaneous rental listings and finds that
asking rents diverge substantially from these most recent estimates.
Conventional housing data understate current market conditions and
affordability challenges, especially in cities with tight and expensive rental
markets.",Rental Housing Spot Markets: How Online Information Exchanges Can Supplement Transacted-Rents Data,2020-02-05 02:23:35,"Geoff Boeing, Jake Wegmann, Junfeng Jiao","http://dx.doi.org/10.1177/0739456X20904435, http://arxiv.org/abs/2002.01578v1, http://arxiv.org/pdf/2002.01578v1",econ.GN
33561,gn,"We consider a frictionless constant endowment economy based on Leeper (1991).
In this economy, it is shown that, under an ad-hoc monetary rule and an ad-hoc
fiscal rule, there are two equilibria. One has active monetary policy and
passive fiscal policy, while the other has passive monetary policy and active
fiscal policy. We consider an extended setup in which the policy maker
minimizes a loss function under quasi-commitment, as in Schaumburg and
Tambalotti (2007). Under this formulation there exists a unique Ramsey
equilibrium, with an interest rate peg and a passive fiscal policy. We thank
John P. Conley, Luis de Araujo and one referee for their very helpful
comments.",Ramsey Optimal Policy versus Multiple Equilibria with Fiscal and Monetary Interactions,2020-02-11 19:10:47,"Jean-Bernard Chatelain, Kirsten Ralf","http://arxiv.org/abs/2002.04508v1, http://arxiv.org/pdf/2002.04508v1",econ.GN
33562,gn,"We consider the model of economic growth with time delayed investment
function. Assuming the investment is time distributed we can use the linear
chain trick technique to transform delay differential equation system to
equivalent system of ordinary differential system (ODE). The time delay
parameter is a mean time delay of gamma distribution. We reduce the system with
distribution delay to both three and four-dimensional ODEs. We study the Hopf
bifurcation in these systems with respect to two parameters: the time delay
parameter and the rate of growth parameter. We derive the results from the
analytical as well as numerical investigations. From the former we obtain the
sufficient criteria on the existence and stability of a limit cycle solution
through the Hopf bifurcation. In numerical studies with the Dana and Malgrange
investment function we found two Hopf bifurcations with respect to the rate
growth parameter and detect the existence of stable long-period cycles in the
economy. We find that depending on the time delay and adjustment speed
parameters the range of admissible values of the rate of growth parameter
breaks down into three intervals. First we have stable focus, then the limit
cycle and again the stable solution with two Hopf bifurcations. Such behaviour
appears for some middle interval of admissible range of values of the rate of
growth parameter.",Bifurcations in economic growth model with distributed time delay transformed to ODE,2020-02-12 17:25:28,"Luca Guerrini, Adam Krawiec, Marek Szydlowski","http://arxiv.org/abs/2002.05016v1, http://arxiv.org/pdf/2002.05016v1",econ.TH
33563,gn,"Although recent studies have shown that electricity systems with shares of
wind and solar above 80% can be affordable, economists have raised concerns
about market integration. Correlated generation from variable renewable sources
depresses market prices, which can cause wind and solar to cannibalise their
own revenues and prevent them from covering their costs from the market. This
cannibalisation appears to set limits on the integration of wind and solar, and
thus to contradict studies that show that high shares are cost effective. Here
we show from theory and with simulation examples how market incentives interact
with prices, revenue and costs for renewable electricity systems. The decline
in average revenue seen in some recent literature is due to an implicit policy
assumption that technologies are forced into the system, whether it be with
subsidies or quotas. This decline is mathematically guaranteed regardless of
whether the subsidised technology is variable or not. If instead the driving
policy is a carbon dioxide cap or tax, wind and solar shares can rise without
cannibalising their own market revenue, even at penetrations of wind and solar
above 80%. The strong dependence of market value on the policy regime means
that market value needs to be used with caution as a measure of market
integration. Declining market value is not necessarily a sign of integration
problems, but rather a result of policy choices.",Decreasing market value of variable renewables can be avoided by policy action,2020-02-07 14:34:05,"T. Brown, L. Reichenberg","http://dx.doi.org/10.1016/j.eneco.2021.105354, http://arxiv.org/abs/2002.05209v3, http://arxiv.org/pdf/2002.05209v3",q-fin.GN
33591,gn,"We use the Newcomb-Benford law to test if countries have manipulated reported
data during the COVID-19 pandemic. We find that democratic countries, countries
with higher gross domestic product (GDP) per capita, higher healthcare
expenditures, and better universal healthcare coverage are less likely to
deviate from the Newcomb-Benford law. The relationship holds for the cumulative
number of reported deaths and total cases but is more pronounced for the death
toll. The findings are robust for second-digit tests, for a sub-sample of
countries with regional data, and in relation to the previous swine flu (H1N1)
2009-2010 pandemic. The paper further highlights the importance of independent
surveillance data verification projects.",Who Manipulates Data During Pandemics? Evidence from Newcomb-Benford Law,2020-07-29 16:58:46,"Vadim S. Balashov, Yuxing Yan, Xiaodi Zhu","http://arxiv.org/abs/2007.14841v4, http://arxiv.org/pdf/2007.14841v4",econ.GN
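The abstract above tests reported pandemic counts against the Newcomb-Benford law. A minimal first-digit chi-square check on a hypothetical series of cumulative counts (not the authors' exact test, which also covers second digits and other robustness checks):

```python
# Minimal sketch (not the authors' exact test): first-digit Newcomb-Benford
# chi-square check on a hypothetical series of cumulative counts.
import numpy as np
from scipy import stats

counts = np.array([12, 27, 41, 73, 118, 190, 265, 410, 598, 804, 1203, 1781])

first_digits = np.array([int(str(c)[0]) for c in counts])
observed = np.bincount(first_digits, minlength=10)[1:10]
# Benford probability of first digit d is log10(1 + 1/d), d = 1..9.
expected = np.log10(1 + 1 / np.arange(1, 10)) * len(first_digits)

chi2, pval = stats.chisquare(observed, expected)
print(f"chi2 = {chi2:.2f}, p-value = {pval:.3f}")
```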
33564,gn,"Business cycles are positively correlated (``comove'') across countries.
However, standard models that attribute comovement to propagation of exogenous
shocks struggle to generate a level of comovement that is as high as in the
data. In this paper, we consider models that produce business cycles
endogenously, through some form of non-linear dynamics -- limit cycles or
chaos. These models generate stronger comovement, because they combine shock
propagation with synchronization of endogenous dynamics. In particular, we
study a demand-driven model in which business cycles emerge from strategic
complementarities within countries, synchronizing their oscillations through
international trade linkages. We develop an eigendecomposition that explores
the interplay between non-linear dynamics, shock propagation and network
structure, and use this theory to understand the mechanisms of synchronization.
Next, we calibrate the model to data on 24 countries and show that the
empirical level of comovement can only be matched by combining endogenous
business cycles with exogenous shocks. Our results lend support to the
hypothesis that business cycles are at least in part caused by underlying
non-linear dynamics.",Synchronization of endogenous business cycles,2020-02-16 14:26:09,Marco Pangallo,"http://arxiv.org/abs/2002.06555v3, http://arxiv.org/pdf/2002.06555v3",econ.GN
33565,gn,"The start up costs in many kinds of generators lead to complex cost
structures, which in turn yield severe market loopholes in the locational
marginal price (LMP) scheme. Convex hull pricing (a.k.a. extended LMP) is
proposed to improve the market efficiency by providing the minimal uplift
payment to the generators. In this letter, we consider a stylized model where
all generators share the same generation capacity. We analyze the generators'
possible strategic behaviors in such a setting, and then propose an index for
market power quantification in the convex hull pricing schemes.",Market Power in Convex Hull Pricing,2020-02-15 12:10:06,"Jian Sun, Chenye Wu","http://arxiv.org/abs/2002.07595v1, http://arxiv.org/pdf/2002.07595v1",math.OC
33566,gn,"The speeches stated by influential politicians can have a decisive impact on
the future of a country. In particular, the economic content of such speeches
affects the economy of countries and their financial markets. For this reason,
we examine a novel dataset containing the economic content of 951 speeches
stated by 45 US Presidents from George Washington (April 1789) to Donald Trump
(February 2017). In doing so, we use an economic glossary constructed by means
of text mining techniques. The goal of our study is to examine the structure of
significant interconnections within a network obtained from the economic
content of presidential speeches. In such a network, nodes are represented by
talks and links by values of cosine similarity, the latter computed using the
occurrences of the economic terms in the speeches. The resulting network
displays a peculiar structure made up of a core (i.e. a set of highly central
and densely connected nodes) and a periphery (i.e. a set of non-central and
sparsely connected nodes). The presence of different economic dictionaries
employed by the Presidents characterizes the core-periphery structure. The
Presidents' talks belonging to the network's core share the usage of generic
(non-technical) economic locutions like ""interest"" or ""trade"", while the use of
more technical and less frequent terms (e.g. ""yield"") characterizes the
periphery. Furthermore, the speeches close in time share a common economic
dictionary. These results together with the economics glossary usages during
the US periods of boom and crisis provide unique insights on the economic
content relationships among Presidents' speeches.",The interconnectedness of the economic content in the speeches of the US Presidents,2020-02-19 00:10:55,"Matteo Cinelli, Valerio Ficcadenti, Jessica Riccioni","http://dx.doi.org/10.1007/s10479-019-03372-2, http://arxiv.org/abs/2002.07880v1, http://arxiv.org/pdf/2002.07880v1",econ.GN
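The abstract above builds a network of speeches linked by cosine similarity over an economic glossary. A rough sketch of that kind of pipeline, with a made-up five-word glossary, toy speeches, and an arbitrary threshold (the paper's glossary and linking rule are assumptions here):

```python
# Minimal sketch (not the paper's pipeline): cosine-similarity network from
# term counts restricted to a small, hypothetical economic glossary.
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

glossary = ["interest", "trade", "tax", "yield", "tariff"]   # assumed glossary
speeches = [
    "We will lower the interest rate and expand trade.",
    "Trade and tariff policy protect our interest.",
    "Bond yield and tax policy matter.",
]

vec = CountVectorizer(vocabulary=glossary)
X = vec.fit_transform(speeches)          # glossary-term counts per speech
S = cosine_similarity(X)

# Link two speeches when their glossary-based similarity exceeds a threshold.
G = nx.Graph()
G.add_nodes_from(range(len(speeches)))
threshold = 0.2
for i in range(len(speeches)):
    for j in range(i + 1, len(speeches)):
        if S[i, j] > threshold:
            G.add_edge(i, j, weight=S[i, j])

print(nx.degree_centrality(G))
```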
33567,gn,"We consider a large population dynamic game in discrete time. The peculiarity
of the game is that players are characterized by time-evolving types, and so
reasonably their actions should not anticipate the future values of their
types. When interactions between players are of mean-field kind, we relate Nash
equilibria for such games to an asymptotic notion of dynamic Cournot-Nash
equilibria. Inspired by the works of Blanchet and Carlier for the static
situation, we interpret dynamic Cournot-Nash equilibria in the light of causal
optimal transport theory. Further specializing to games of potential type, we
establish existence, uniqueness and characterization of equilibria. Moreover we
develop, for the first time, a numerical scheme for causal optimal transport,
which is then leveraged in order to compute dynamic Cournot-Nash equilibria.
This is illustrated in a detailed case study of a congestion game.",Cournot-Nash equilibrium and optimal transport in a dynamic setting,2020-02-20 18:09:05,"Beatrice Acciaio, Julio Backhoff-Veraguas, Junchao Jia","http://arxiv.org/abs/2002.08786v2, http://arxiv.org/pdf/2002.08786v2",math.OC
33568,gn,"The Levy-Levy-Solomon model (A microscopic model of the stock market: cycles,
booms, and crashes, Economics Letters 45 (1)) is one of the most influential
agent-based economic market models. In several publications this model has been
discussed and analyzed. In particular, Lux and Zschischang (Some new results on the
Levy, Levy and Solomon microscopic stock market model, Physica A, 291(1-4))
have shown that the model exhibits finite-size effects. In this study we extend
existing work in several directions. First, we show simulations which reveal
finite-size effects of the model. Secondly, we shed light on the origin of
these finite-size effects. Furthermore, we demonstrate the sensitivity of the
Levy-Levy-Solomon model with respect to random numbers. In particular, we
conclude that a low-quality pseudo-random number generator has a huge impact on
the simulation results. Finally, we study the impact of the stopping criteria
in the market clearance mechanism of the Levy-Levy-Solomon model.",Novel Insights in the Levy-Levy-Solomon Agent-Based Economic Market Model,2020-02-13 22:00:11,"Maximilian Beikirch, Torsten Trimborn","http://arxiv.org/abs/2002.10222v1, http://arxiv.org/pdf/2002.10222v1",q-fin.TR
33569,gn,"We consider a general formulation of the random horizon Principal-Agent
problem with a continuous payment and a lump-sum payment at termination. In the
European version of the problem, the random horizon is chosen solely by the
principal with no other possible action from the agent than exerting effort on
the dynamics of the output process. We also consider the American version of
the contract, which covers the seminal Sannikov's model, where the agent can
also quit by optimally choosing the termination time of the contract. Our main
result reduces such non-zero-sum stochastic differential games to appropriate
stochastic control problems which may be solved by standard methods of
stochastic control theory. This reduction is obtained by following Sannikov's
approach, further developed by Cvitanic, Possamai, and Touzi. We first
introduce an appropriate class of contracts for which the agent's optimal
effort is immediately characterized by the standard verification argument in
stochastic control theory. We then show that this class of contracts is dense
in an appropriate sense so that the optimization over this restricted family of
contracts represents no loss of generality. The result is obtained by using the
recent well-posedness result of random horizon second-order backward SDE.",Random horizon principal-agent problems,2020-02-25 18:46:54,"Yiqing Lin, Zhenjie Ren, Nizar Touzi, Junjian Yang","http://arxiv.org/abs/2002.10982v2, http://arxiv.org/pdf/2002.10982v2",math.OC
33571,gn,"Electrical energy storage is considered essential for the future energy
systems. Among all the energy storage technologies, battery systems may provide
flexibility to the power grid in a more distributed and decentralized way. In
countries with deregulated electricity markets, grid-connected battery systems
should be operated under the specific market design of the country. In this
work, using the Spanish electricity market as an example, the barriers to
grid-connected battery systems are investigated using utilization analysis. The
concept of ""potentially profitable utilization time"" is proposed and introduced
to identify and evaluate future potential grid applications for battery
systems. The numerical and empirical analysis suggests that the high cycle cost
for battery systems is still the main barrier for grid-connected battery
systems. In Spain, for energy arbitrage within the day-ahead market, it is
required that the battery wear cost decreases to 15 Euro/MWh to make the
potentially profitable utilization rate higher than 20%. Nevertheless, the
potentially profitable utilization of batteries is much higher in the
applications when higher flexibility is demanded. The minimum required battery
wear cost corresponding to 20% potentially profitable utilization time
increases to 35 Euro/MWh for energy arbitrage within the day-ahead market and
ancillary services, and 50 Euro/MWh for upward secondary reserve. The results
of this study contribute to the awareness of battery storage technology and its
flexibility in grid applications. The findings also have significant
implications for policy makers and market operators interested in promoting
grid-connected battery storage under a deregulated power market.",Barriers to grid-connected battery systems: Evidence from the Spanish electricity market,2020-06-28 16:08:05,"Yu Hu, David Soler Soneira, María Jesús Sánchez","http://dx.doi.org/10.1016/j.est.2021.102262, http://arxiv.org/abs/2007.00486v2, http://arxiv.org/pdf/2007.00486v2",q-fin.GN
33572,gn,"We estimate the effect of the Coronavirus (Covid-19) pandemic on racial
animus, as measured by Google searches and Twitter posts including a commonly
used anti-Asian racial slur. Our empirical strategy exploits the plausibly
exogenous variation in the timing of the first Covid-19 diagnosis across
regions in the United States. We find that the first local diagnosis leads to
an immediate increase in racist Google searches and Twitter posts, with the
latter mainly coming from existing Twitter users posting the slur for the first
time. This increase could indicate a rise in future hate crimes, as we document
a strong correlation between the use of the slur and anti-Asian hate crimes
using historic data. Moreover, we find that the rise in the animosity is
directed at Asians rather than other minority groups and is stronger on days
when the connection between the disease and Asians is more salient, as proxied
by President Trump's tweets mentioning China and Covid-19 at the same time. In
contrast, the negative economic impact of the pandemic plays little role in the
initial increase in racial animus. Our results suggest that de-emphasizing the
connection between the disease and a particular racial group can be effective
in curbing current and future racial animus.",From Fear to Hate: How the Covid-19 Pandemic Sparks Racial Animus in the United States,2020-07-03 04:21:44,"Runjing Lu, Yanying Sheng","http://arxiv.org/abs/2007.01448v1, http://arxiv.org/pdf/2007.01448v1",econ.GN
33573,gn,"The airline industry was severely hit by the COVID-19 crisis with an average
demand decrease of about $64\%$ (IATA, April 2020), which has already triggered
several bankruptcies of airline companies all over the world. While the
robustness of the world airline network (WAN) was mostly studied as an
homogeneous network, we introduce a new tool for analyzing the impact of a
company failure: the `airline company network' where two airlines are connected
if they share at least one route segment. Using this tool, we observe that the
failure of companies well connected with others has the largest impact on the
connectivity of the WAN. We then explore how the global demand reduction
affects airlines differently, and provide an analysis of different scenarios if
it stays low and does not come back to its pre-crisis level. Using traffic
data from the Official Aviation Guide (OAG) and simple assumptions about
customer's airline choice strategies, we find that the local effective demand
can be much lower than the average one, especially for companies that are not
monopolistic and share their segments with larger companies. Even if the
average demand comes back to $60\%$ of the total capacity, we find that between
$46\%$ and $59\%$ of the companies could experience a reduction of more than
$50\%$ of their traffic, depending on the type of competitive advantage that
drives customer's airline choice. These results highlight how the complex
competitive structure of the WAN weakens its robustness when facing such a
large crisis.",Scenarios for a post-COVID-19 world airline network,2020-07-04 17:31:12,"Jiachen Ye, Peng Ji, Marc Barthelemy","http://arxiv.org/abs/2007.02109v1, http://arxiv.org/pdf/2007.02109v1",physics.soc-ph
33579,gn,"The purpose of this study is to investigate the effects of the COVID-19
pandemic on economic policy uncertainty in the US and the UK. The impact of the
increase in COVID-19 cases and deaths in the country, and the increase in the
number of cases and deaths outside the country may vary. To examine this, the
study employs the bootstrap ARDL cointegration approach from March 8, 2020 to May
24, 2020. According to the bootstrap ARDL results, a long-run equilibrium
relationship is confirmed for five out of the 10 models. The long-term
coefficients obtained from the ARDL models suggest that an increase in COVID-19
cases and deaths outside of the UK and the US has a significant effect on
economic policy uncertainty. The US is more affected by the increase in the
number of COVID-19 cases. The UK, on the other hand, is more negatively
affected by the increase in the number of COVID-19 deaths outside the country
than the increase in the number of cases. Moreover, another important finding
from the study demonstrates that COVID-19 is a factor of great uncertainty for
both countries in the short-term.",COVID-19 Induced Economic Uncertainty: A Comparison between the United Kingdom and the United States,2020-06-29 03:22:07,Ugur Korkut Pata,"http://arxiv.org/abs/2007.07839v1, http://arxiv.org/pdf/2007.07839v1",econ.GN
33574,gn,"A popular analogue used in the space domain is that of historical building
projects, notably cathedrals that took decades and in some cases centuries to
complete. Cathedrals are often taken as archetypes for long-term projects. In
this article, I will explore the cathedral from the point of view of project
management and systems architecting and draw implications for long-term
projects in the space domain, notably developing a starship. I will show that
the popular image of a cathedral as a continuous long-term project is in
contradiction to the current state of research. More specifically, I will show
that for the following propositions: The cathedrals were built based on an
initial detailed master plan; Building was a continuous process that adhered to
the master plan; Investments were continuously provided for the building
process. Although initial plans might have existed, the construction process
took often place in multiple campaigns, sometimes separated by decades. Such
interruptions made knowledge-preservation very challenging. The reason for the
long stretches of inactivity was mostly due to a lack of funding. Hence, the
availability of funding coincided with construction activity. These findings
paint a much more relevant picture of cathedral building for long-duration
projects today: How can a project be completed despite a range of uncertainties
regarding loss in skills, shortage in funding, and interruptions? It is
concluded that long-term projects such as an interstellar exploration program
can take inspiration from cathedrals by developing a modular architecture,
allowing for extensibility and flexibility, thinking about value delivery at an
early point, and establishing mechanisms and an organization for stable
funding.",The Cathedral and the Starship: Learning from the Middle Ages for Future Long-Duration Projects,2020-07-07 20:43:49,Andreas M. Hein,"http://arxiv.org/abs/2007.03654v1, http://arxiv.org/pdf/2007.03654v1",econ.GN
33575,gn,"We investigate the networks of Japanese corporate boards and its influence on
the appointments of female board members. We find that corporate boards with
women show homophily with respect to gender. The corresponding firms often have
above average profitability. We also find that new appointments of women are
more likely at boards which observe female board members at other firms to
which they are tied by either ownership relations or corporate board
interlocks.",Interdependencies of female board member appointments,2020-07-08 12:37:07,"Matthias Raddant, Hiroshi Takahashi","http://dx.doi.org/10.1016/j.irfa.2022.102080, http://arxiv.org/abs/2007.03980v4, http://arxiv.org/pdf/2007.03980v4",econ.GN
33576,gn,"Economists have mainly focused on human capital accumulation rather than on
the causes and consequences of human capital depreciation in late adulthood. To
investigate how human capital depreciates over the life cycle, we examine how a
newly introduced pension program, the National Rural Pension Scheme, affects
cognitive performance in rural China. We find significant adverse effects of
access to pension benefits on cognitive functioning among the elderly. We
detect the most substantial impact of the program on delayed recall, a
cognition measure linked to the onset of dementia. In terms of mechanisms,
cognitive deterioration in late adulthood is mediated by a substantial
reduction in social engagement, volunteering, and activities fostering mental
acuity.",Do Pension Benefits Accelerate Cognitive Decline in Late Adulthood? Evidence from Rural China,2020-07-12 04:31:13,"Plamen Nikolov, Md Shahadath Hossain","http://arxiv.org/abs/2007.05884v9, http://arxiv.org/pdf/2007.05884v9",econ.GN
33577,gn,"The rapid rise of antibiotic resistance is a serious threat to global public
health. Without further incentives, pharmaceutical companies have little
interest in developing antibiotics, since the success probability is low and
development costs are huge. The situation is exacerbated by the ""antibiotics
dilemma"": Developing narrow-spectrum antibiotics against resistant bacteria is
most beneficial for society, but least attractive for companies since their
usage is more limited than for broad-spectrum drugs and thus sales are low.
Starting from a general mathematical framework for the study of
antibiotic-resistance dynamics with an arbitrary number of antibiotics, we
identify efficient treatment protocols and introduce a market-based refunding
scheme that incentivizes pharmaceutical companies to develop narrow-spectrum
antibiotics: Successful companies can claim a refund from a newly established
antibiotics fund that partially covers their development costs. The proposed
refund involves a fixed and variable part. The latter (i) increases with the
use of the new antibiotic for currently resistant strains in comparison with
other newly developed antibiotics for this purpose---the resistance
premium---and (ii) decreases with the use of this antibiotic for non-resistant
bacteria. We outline how such a refunding scheme can solve the antibiotics
dilemma and cope with various sources of uncertainty inherent in antibiotic
R\&D. Finally, connecting our refunding approach to the recently established
antimicrobial resistance (AMR) action fund, we discuss how the antibiotics fund
can be financed.",Incentivizing Narrow-Spectrum Antibiotic Development with Refunding,2020-07-13 19:00:13,"Lucas Böttcher, Hans Gersbach","http://dx.doi.org/10.1007/s11538-022-01013-7, http://arxiv.org/abs/2007.06468v1, http://arxiv.org/pdf/2007.06468v1",q-bio.PE
33578,gn,"The Pakistan competition policy, as in many other countries, was originally
designed to regulate business conduct in traditional markets and for tangible
goods and services. However, the development and proliferation of the internet
have led to the emergence of digital companies which have disrupted many sectors
of the economy. These platforms provide digital infrastructure for a range of
services including search engines, marketplaces, and social networking sites.
The digital economy poses a myriad of challenges for competition authorities
worldwide, especially with regard to digital mergers and acquisitions (M&As).
While some jurisdictions such as the European Union and the United States have
taken significant strides in regulating technological M&As, there is an
increasing need for developing countries such as Pakistan to rethink their
competition policy tools. This paper investigates whether merger reviews in the
Pakistan digital market are informed by the same explanatory variables as in
the traditional market, by performing an empirical comparative analysis of the
Competition Commission of Pakistan's (CCP's) M&A decisions between 2014 and
2019. The findings indicate the CCP applies the same decision factors in
reviewing both traditional and digital M&As. As such, this paper establishes a
basis for igniting the policy and economic debate of regulating the digital
platform industry in Pakistan.",The New Digital Platforms: Merger Control in Pakistan,2020-06-22 15:06:46,"Shahzada Aamir Mushtaq, Wang Yuhui","http://arxiv.org/abs/2007.06535v1, http://arxiv.org/pdf/2007.06535v1",q-fin.GN
33580,gn,"We interpret attitudes towards science and pseudosciences as cultural traits
that diffuse in society through communication efforts exerted by agents. We
present a tractable model that allows us to study the interaction among the
diffusion of an epidemic, vaccination choices, and the dynamics of cultural
traits. We apply it to study the impact of homophily between pro-vaxxers and
anti-vaxxers on the total number of cases (the cumulative infection). We show
that, during the outbreak of a disease, homophily has the direct effect of
decreasing the speed of recovery. Hence, it may increase the number of cases
and make the disease endemic. The dynamics of the shares of the two cultural
traits in the population is crucial in determining the sign of the total effect
on the cumulative infection: more homophily is beneficial if agents are not too
flexible in changing their cultural trait, and is detrimental otherwise.","Epidemic dynamics with homophily, vaccination choices, and pseudoscience attitudes",2020-07-16 12:23:02,"Matteo Bizzarri, Fabrizio Panebianco, Paolo Pin","http://arxiv.org/abs/2007.08523v3, http://arxiv.org/pdf/2007.08523v3",q-bio.PE
33581,gn,"High air pollution levels are associated with school absences. However, low
level pollution impact on individual school absences are under-studied. We
modelled PM2.5 and ozone concentrations at 36 schools from July 2015 to June
2018 using data from a dense, research grade regulatory sensor network. We
determined exposures and daily absences at each school. We used generalized
estimating equations model to retrospectively estimate rate ratios for
association between outdoor pollutant concentrations and school absences. We
estimated lost school revenue, productivity, and family economic burden. PM2.5
and ozone concentrations and absence rates vary across the School District.
Pollution exposure was associated with rate ratios as high as 1.02 absences
per ug/m$^3$ and 1.01 per ppb increase for PM2.5 and ozone, respectively.
Significantly, even PM2.5 and ozone exposure below regulatory standards (<12.1
ug/m$^3$ and <55 ppb) was associated with positive rate ratios of absences:
1.04 per ug/m$^3$ and 1.01 per ppb increase, respectively. Granular local
measurements enabled demonstration of air pollution impacts that varied between
schools undetectable with averaged pollution levels. Reducing pollution by 50%
would save $452,000 per year districtwide. Pollution reduction benefits would
be greatest in schools located in socioeconomically disadvantaged areas.
Exposures to air pollution, even at low levels, are associated with increased
school absences. Heterogeneity in exposure, disproportionately affecting
socioeconomically disadvantaged schools, points to the need for fine resolution
exposure estimation. The economic cost of absences associated with air
pollution is substantial even excluding indirect costs such as hospital visits
and medication. These findings may help inform decisions about recess during
severe pollution events and regulatory considerations for localized pollution
sources.",Absentee and Economic Impact of Low-Level Fine Particulate Matter and Ozone Exposure in K-12 Students,2020-07-16 04:27:27,"Daniel L. Mendoza, Cheryl S. Pirozzi, Erik T. Crosman, Theodore G. Liou, Yue Zhang, Jessica J. Cleeves, Stephen C. Bannister, William R. L. Anderegg, Robert Paine III","http://dx.doi.org/10.13140/RG.2.2.12720.17925, http://arxiv.org/abs/2007.09230v1, http://arxiv.org/pdf/2007.09230v1",econ.GN
33582,gn,"The threat of the new coronavirus (COVID-19) is increasing. Regarding the
difference in the infection rate observed in each region, in addition to
studies seeking the cause due to differences in the social distance (population
density), there is an increasing trend toward studies seeking the cause due to
differences in social capital. However, studies have not yet been conducted on
whether social capital could influence the infection rate even if it controls
the effect of population density. Therefore, in this paper, we analyzed the
relationship between infection rate, population density, and social capital
using statistical data for each prefecture. Statistical analysis showed that
social capital not only correlates with infection rates and population
densities but still has a negative correlation with infection rates controlling
for the effects of population density. Moreover, when controlling the relationship
between variables for mean age, social capital showed a greater
correlation with the infection rate than population density did. In other words, social
capital mediates the correlation between population density and infection
rates. This means that social distance alone is not enough to deter coronavirus
infection, and social capital needs to be recharged.",Social capital may mediate the relationship between social distance and COVID-19 prevalence,2020-07-20 11:36:30,Keisuke Kokubun,"http://dx.doi.org/10.1177/00469580211005189, http://arxiv.org/abs/2007.09939v2, http://arxiv.org/pdf/2007.09939v2",econ.GN
33584,gn,"Nursing homes and other long term-care facilities account for a
disproportionate share of COVID-19 cases and fatalities worldwide. Outbreaks in
U.S. nursing homes have persisted despite nationwide visitor restrictions
beginning in mid-March. An early report issued by the Centers for Disease
Control and Prevention identified staff members working in multiple nursing
homes as a likely source of spread from the Life Care Center in Kirkland,
Washington to other skilled nursing facilities. The full extent of staff
connections between nursing homes---and the crucial role these connections
serve in spreading a highly contagious respiratory infection---is currently
unknown given the lack of centralized data on cross-facility nursing home
employment. In this paper, we perform the first large-scale analysis of nursing
home connections via shared staff using device-level geolocation data from 30
million smartphones, and find that 7 percent of smartphones appearing in a
nursing home also appeared in at least one other facility---even after visitor
restrictions were imposed. We construct network measures of nursing home
connectedness and estimate that nursing homes have, on average, connections
with 15 other facilities. Controlling for demographic and other factors, a
home's staff-network connections and its centrality within the greater network
strongly predict COVID-19 cases. Traditional federal regulatory metrics of
nursing home quality are unimportant in predicting outbreaks, consistent with
recent research. Results suggest that eliminating staff linkages between
nursing homes could reduce COVID-19 infections in nursing homes by 44 percent.",Nursing Home Staff Networks and COVID-19,2020-07-23 08:04:12,"M. Keith Chen, Judith A. Chevalier, Elisa F. Long","http://dx.doi.org/10.1073/pnas.2015455118, http://arxiv.org/abs/2007.11789v2, http://arxiv.org/pdf/2007.11789v2",econ.GN
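The abstract above constructs facility-level network measures from device-level geolocation pings shared across nursing homes. A toy sketch of the underlying bipartite-projection idea (hypothetical device-facility visits; not the paper's data or pipeline):

```python
# Minimal sketch (illustrative, not the paper's pipeline): project
# device-to-facility visits onto a facility network, where an edge means at
# least one shared device, then report each home's number of connections.
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical (device, nursing-home) observations from geolocation pings.
visits = [("dev1", "homeA"), ("dev1", "homeB"),
          ("dev2", "homeB"), ("dev2", "homeC"),
          ("dev3", "homeA")]

B = nx.Graph()
B.add_nodes_from({d for d, _ in visits}, bipartite=0)
B.add_nodes_from({h for _, h in visits}, bipartite=1)
B.add_edges_from(visits)

homes = {h for _, h in visits}
H = bipartite.weighted_projected_graph(B, homes)   # weight = shared devices
print(dict(H.degree()))                            # connections per facility
print(nx.eigenvector_centrality_numpy(H, weight="weight"))
```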
33585,gn,"In the first half of 2020, several countries have responded to the challenges
posed by the Covid-19 pandemic by restricting their export of medical supplies.
Such measures are meant to increase the domestic availability of critical
goods, and are commonly used in times of crisis. Yet, not much is known about
their impact, especially on countries imposing them. Here we show that export
bans are, by and large, counterproductive. Using a model of shock diffusion
through the network of international trade, we simulate the impact of
restrictions under different scenarios. We observe that while they would be
beneficial to a country implementing them in isolation, their generalized use
makes most countries worse off relative to a no-ban scenario. As a corollary,
we estimate that prices increase in many countries imposing the restrictions.
We also find that the cost of restraining from export bans is small, even when
others continue to implement them. Finally, we document a change in countries'
position within the international trade network, suggesting that export bans
have geopolitical implications.",(Unintended) Consequences of export restrictions on medical goods during the Covid-19 pandemic,2020-07-23 14:43:09,"Marco Grassia, Giuseppe Mangioni, Stefano Schiavo, Silvio Traverso","http://dx.doi.org/10.1093/comnet/cnab045, http://arxiv.org/abs/2007.11941v1, http://arxiv.org/pdf/2007.11941v1",physics.soc-ph
33586,gn,"In this paper, we provide a philosophical account of the value of creative
systems for individuals and society. We characterize creativity in very broad
philosophical terms, encompassing natural, existential, and social creative
processes, such as natural evolution and entrepreneurship, and explain why
creativity understood in this way is instrumental for advancing human
well-being in the long term. We then explain why current mainstream AI tends to
be anti-creative, which means that there are moral costs of employing this type
of AI in human endeavors, although computational systems that involve
creativity are on the rise. In conclusion, there is an argument for ethics to
be more hospitable to creativity-enabling AI, even though such AI may trade off
against other values promoted in AI ethics, such as explainability and
accuracy.",The societal and ethical relevance of computational creativity,2020-07-23 15:39:10,"Michele Loi, Eleonora Viganò, Lonneke van der Plas","http://arxiv.org/abs/2007.11973v1, http://arxiv.org/pdf/2007.11973v1",cs.AI
33587,gn,"In this online appendix we provide additional information and analyses to
support ""The Determinants of Social Connectedness in Europe."" We include a
number of case studies illustrating how language, history, and other factors
have shaped European social networks. We also look at the effects of social
connectedness. Our results provide empirical support for theoretical models
that suggest social networks play an important role in individuals' travel
decisions. We study variation in the degree of connectedness of regions to
other European countries, finding a negative correlation between Euroscepticism
and greater levels of international connection.",Online Appendix & Additional Results for The Determinants of Social Connectedness in Europe,2020-07-23 20:05:53,"Michael Bailey, Drew Johnston, Theresa Kuchler, Dominic Russel, Bogdan State, Johannes Stroebel","http://arxiv.org/abs/2007.12177v1, http://arxiv.org/pdf/2007.12177v1",econ.GN
33588,gn,"Uncertainty plays an important role in the global economy. In this paper, the
economic policy uncertainty (EPU) indices of the United States and China are
selected as the proxy variable corresponding to the uncertainty of national
economic policy. By adopting the visibility graph algorithm, the four economic
policy uncertainty indices of the United States and China are mapped into
complex networks, and the topological properties of the corresponding networks
are studied. The Hurst exponents of all the four indices are within
$\left[0.5,1\right]$, which implies that the economic policy uncertainty is
persistent. The degree distributions of the EPU networks have power-law tails
and are thus scale-free. The average clustering coefficients of the four EPU
networks are high and close to each other, while these networks exhibit weak
assortative mixing. We also find that the EPU network in the United States based on
daily data shows the small-world feature since the average shortest path length
increases logarithmically with the network size such that
$L\left(N\right)=0.626\ln N+0.405$. Our research highlights the possibility of
studying EPU from the perspective of complex networks.",Visibility graph analysis of economy policy uncertainty indices,2020-07-25 11:12:17,"Peng-Fei Dai, Xiong Xiong, Wei-Xing Zhou","http://dx.doi.org/10.1016/j.physa.2019.121748, http://arxiv.org/abs/2007.12880v1, http://arxiv.org/pdf/2007.12880v1",q-fin.ST
33589,gn,"Recent attempts at cooperating on climate change mitigation highlight the
limited efficacy of large-scale agreements, when commitment to mitigation is
costly and initially rare. Bottom-up approaches using region-specific
mitigation agreements promise greater success, at the cost of slowing global
adoption. Here, we show that a well-timed switch from regional to global
negotiations dramatically accelerates climate mitigation compared to using only
local, only global, or both agreement types simultaneously. This highlights the
scale-specific roles of mitigation incentives: local incentives capitalize on
regional differences (e.g., where recent disasters incentivize mitigation) by
committing early-adopting regions, after which global agreements draw in
late-adopting regions. We conclude that global agreements are key to overcoming
the expenses of mitigation and economic rivalry among regions but should be
attempted once regional agreements are common. Gradually up-scaling efforts
could likewise accelerate mitigation at smaller scales, for instance when
costly ecosystem restoration initially faces limited public and legislative
support.",A well-timed switch from local to global agreements accelerates climate change mitigation,2020-07-27 02:06:22,"Vadim A. Karatayev, Vítor V. Vasconcelos, Anne-Sophie Lafuite, Simon A. Levin, Chris T. Bauch, Madhur Anand","http://dx.doi.org/10.1038/s41467-021-23056-5, http://arxiv.org/abs/2007.13238v1, http://arxiv.org/pdf/2007.13238v1",nlin.AO
33590,gn,"We introduce a theoretical framework that highlights the impact of physical
distancing variables such as human mobility and physical proximity on the
evolution of epidemics and, crucially, on the reproduction number. In
particular, in response to the coronavirus disease (CoViD-19) pandemic,
countries have introduced various levels of 'lockdown' to reduce the number of
new infections. Specifically we use a collisional approach to an infection-age
structured model described by a renewal equation for the time homogeneous
evolution of epidemics. As a result, we show how various contributions of the
lockdown policies, namely physical proximity and human mobility, reduce the
impact of SARS-CoV-2 and mitigate the risk of disease resurgence. We check our
theoretical framework using real-world data on physical distancing with two
different data repositories, obtaining consistent results. Finally, we propose
an equation for the effective reproduction number which takes into account
types of interactions among people, which may help policy makers to improve
remote-working organizational structure.",Epidemic response to physical distancing policies and their impact on the outbreak risk,2020-07-29 09:12:25,"Fabio Vanni, David Lambert, Luigi Palatella","http://dx.doi.org/10.1038/s41598-021-02760-8, http://arxiv.org/abs/2007.14620v2, http://arxiv.org/pdf/2007.14620v2",physics.soc-ph
33592,gn,"Equilibrium models for energy markets under uncertain demand and supply have
attracted considerable attention. This paper focuses on modelling crude oil
market share under the COVID-19 pandemic using two-stage stochastic
equilibrium. We describe the uncertainties in the demand and supply by random
variables and provide two types of production decisions (here-and-now and
wait-and-see). The here-and-now decision in the first stage does not depend on
the outcome of random events to be revealed in the future and the wait-and-see
decision in the second stage is allowed to depend on the random events in the
future and adjust the feasibility of the here-and-now decision in rare
unexpected scenarios such as those observed during the COVID-19 pandemic. We
develop a fast algorithm to find a solution of the two-stage stochastic
equilibrium. We show the robustness of the two-stage stochastic equilibrium
model for forecasting the oil market share using the real market data from
January 2019 to May 2020.",Equilibrium Oil Market Share under the COVID-19 Pandemic,2020-07-30 10:08:39,"Xiaojun Chen, Yun Shi, Xiaozhou Wang","http://arxiv.org/abs/2007.15265v1, http://arxiv.org/pdf/2007.15265v1",math.OC
33593,gn,"The International Monetary Fund (IMF) provides financial assistance to its
member-countries in economic turmoil, but requires at the same time that these
countries reform their public policies. In several contexts, these reforms are
at odds with population health. While researchers have empirically analyzed the
consequences of these reforms on health, no analysis exists that identifies fair
tradeoffs between consequences on population health and economic outcomes. Our
article analyzes and identifies the principles governing these tradeoffs.
First, this article reviews existing policy-evaluation studies, which show, on
balance, that IMF policies frequently cause adverse effects on child health and
material standards in the pursuit of macroeconomic improvement. Second, this
article discusses four theories in distributive ethics (maximization,
egalitarianism, prioritarianism, and sufficientarianism) to identify which
is the most compatible with the core mission of the IMF, that is, improved
macroeconomics (Articles of Agreement) while at the same time balancing
consequences on health. Using a distributive-ethics analysis of IMF policies, we
argue that sufficientarianism is the most compatible theory. Third, this
article offers a qualitative rearticulation of the Articles of Agreement and
formalizes sufficientarian principles in the language of causal inference. We
also offer a framework for empirically measuring, from observational data,
the extent to which IMF policies trade off fairly between population health and
economic outcomes. We conclude with policy recommendations and suggestions for
future research.",Combining distributive ethics and causal Inference to make trade-offs between austerity and population health,2020-07-30 18:54:40,"Adel Daoud, Anders Herlitz, SV Subramanian","http://arxiv.org/abs/2007.15550v2, http://arxiv.org/pdf/2007.15550v2",econ.GN
33594,gn,"In recent years online social networks have become increasingly prominent in
political campaigns and, concurrently, several countries have experienced shock
election outcomes. This paper proposes a model that links these two phenomena.
In our set-up, the process of learning from others on a network is influenced
by confirmation bias, i.e. the tendency to ignore contrary evidence and
interpret it as consistent with one's own belief. When agents pay enough
attention to themselves, confirmation bias leads to slower learning in any
symmetric network, and it increases polarization in society. We identify a
subset of agents that become more/less influential with confirmation bias. The
socially optimal network structure depends critically on the information
available to the social planner. When she cannot observe agents' beliefs, the
optimal network is symmetric, vertex-transitive and has no self-loops. We
explore the implications of these results for electoral outcomes and media
markets. Confirmation bias increases the likelihood of shock elections, and it
pushes fringe media to take a more extreme ideology.","Social networks, confirmation bias and shock elections",2020-11-01 18:01:45,"Edoardo Gallo, Alastair Langtry","http://arxiv.org/abs/2011.00520v1, http://arxiv.org/pdf/2011.00520v1",econ.TH
33595,gn,"Campbell-Goodhart's law relates to the causal inference error whereby
decision-making agents aim to influence variables which are correlated to their
goal objective but do not reliably cause it. This is a well known error in
Economics and Political Science but not widely labelled in Artificial
Intelligence research. Through a simple example, we show how off-the-shelf deep
Reinforcement Learning (RL) algorithms are not necessarily immune to this
cognitive error. The off-policy learning method is tricked, whilst the
on-policy method is not. The practical implication is that naive application of
RL to complex real life problems can result in the same types of policy errors
that humans make. Great care should be taken around understanding the causal
model that underpins a solution derived from Reinforcement Learning.",Causal Campbell-Goodhart's law and Reinforcement Learning,2020-11-02 17:42:20,Hal Ashton,"http://arxiv.org/abs/2011.01010v2, http://arxiv.org/pdf/2011.01010v2",cs.LG
33596,gn,"The COVID-19 pandemic constitutes one of the largest threats in recent
decades to the health and economic welfare of populations globally. In this
paper, we analyze different types of policy measures designed to fight the
spread of the virus and minimize economic losses. Our analysis builds on a
multi-group SEIR model, which extends the multi-group SIR model introduced by
Acemoglu et al.~(2020). We adjust the underlying social interaction patterns
and consider an extended set of policy measures. The model is calibrated for
Germany. Despite the trade-off between COVID-19 prevention and economic
activity that is inherent to shielding policies, our results show that
efficiency gains can be achieved by targeting such policies towards different
age groups. Alternative policies such as physical distancing can be employed to
reduce the degree of targeting and the intensity and duration of shielding. Our
results show that a comprehensive approach that combines multiple policy
measures simultaneously can effectively mitigate population mortality and
economic harm.",Insights from Optimal Pandemic Shielding in a Multi-Group SEIR Framework,2020-11-02 19:23:41,"Philipp Bach, Victor Chernozhukov, Martin Spindler","http://arxiv.org/abs/2011.01092v1, http://arxiv.org/pdf/2011.01092v1",econ.GN
33597,gn,"As more tech companies engage in rigorous economic analyses, we are
confronted with a data problem: in-house papers cannot be replicated due to use
of sensitive, proprietary, or private data. Readers are left to assume that the
obscured true data (e.g., internal Google information) indeed produced the
results given, or they must seek out comparable public-facing data (e.g.,
Google Trends) that yield similar results. One way to ameliorate this
reproducibility issue is to have researchers release synthetic datasets based
on their true data; this allows external parties to replicate an internal
researcher's methodology. In this brief overview, we explore synthetic data
generation at a high level for economic analyses.",Synthetic Data Generation for Economists,2020-11-03 02:05:55,"Allison Koenecke, Hal Varian","http://arxiv.org/abs/2011.01374v2, http://arxiv.org/pdf/2011.01374v2",econ.GN
33598,gn,"In this work of speculative science, scientists from a distant star system
explain the emergence and consequences of triparentalism, when three
individuals are required for sexual reproduction, which is the standard form of
mating on their home world. The report details the evolution of their
reproductive system--that is, the conditions under which triparentalism and
three self-avoiding mating types emerged as advantageous strategies for sexual
reproduction. It also provides an overview of the biological consequences of
triparental reproduction with three mating types, including the genetic
mechanisms of triparental reproduction, asymmetries between the three mating
types, and infection dynamics arising from their different mode of sexual
reproduction. The report finishes by discussing how central aspects of their
society, such as short-lasting unions among individuals and the rise of a
monoculture, might have arisen as a result of their triparental system.",Greetings from a Triparental Planet,2020-11-03 09:56:21,"Gizem Bacaksizlar, Stefani Crabtree, Joshua Garland, Natalie Grefenstette, Albert Kao, David Kinney, Artemy Kolchinsky, Tyler Marghetis, Michael Price, Maria Riolo, Hajime Shimao, Ashley Teufel, Tamara van der Does, Vicky Chuqiao Yang","http://arxiv.org/abs/2011.01508v1, http://arxiv.org/pdf/2011.01508v1",q-bio.PE
33599,gn,"Washing hands, social distancing and staying at home are the preventive
measures set in place to contain the spread of COVID-19, a disease caused
by SARS-CoV-2. These measures, although straightforward to follow, highlight
the tip of an imbalanced socio-economic and socio-technological iceberg. Here,
a System Dynamics (SD) model of COVID-19 preventive measures and their
correlation with the 17 Sustainable Development Goals (SDGs) is presented. The
result demonstrates a better-informed view of the COVID-19 vulnerability
landscape. This novel qualitative approach refreshes debates on the future of
the SDGs amid the crisis and provides a powerful mental representation for decision
makers to find leverage points that aid in preventing long-term disruptive
impacts of this health crisis on people, planet and economy. There is a need
for further tailor-made and real-time qualitative and quantitative scientific
research to calibrate the criticality of meeting the SDG targets in different
countries according to ongoing lessons learned from this health crisis.",How do the Covid-19 Prevention Measures Interact with Sustainable Development Goals?,2020-10-13 19:10:03,Shima Beigi,"http://dx.doi.org/10.20944/preprints202010.0279.v1, http://arxiv.org/abs/2011.02290v1, http://arxiv.org/pdf/2011.02290v1",physics.soc-ph
33600,gn,"Political orientation polarizes the attitudes of more educated individuals on
controversial issues. A highly controversial issue in Europe is immigration. We
found the same polarizing pattern for opinion toward immigration in a
representative sample of citizens of a southern European middle-size city.
Citizens with higher numeracy, scientific and economic literacy presented a
more polarized view of immigration, depending on their worldview orientation.
Highly knowledgeable individuals endorsing an egalitarian-communitarian
worldview were more in favor of immigration, whereas highly knowledgeable
individuals with a hierarchical-individualist worldview were less in favor of
immigration. Those low in numerical, economic, and scientific literacy did not
show a polarized attitude. Results highlight the central role of
socio-political orientation over information theories in shaping attitudes
toward immigration.","The polarizing impact of numeracy, economic literacy, and science literacy on attitudes toward immigration",2020-11-04 18:44:33,"Lucia Savadori, Giuseppe Espa, Maria Michela Dickson","http://arxiv.org/abs/2011.02362v1, http://arxiv.org/pdf/2011.02362v1",econ.GN
33601,gn,"The carbon footprint of Bitcoin has drawn wide attention, but Bitcoin's
long-term impact on the climate remains uncertain. Here we present a framework
to overcome uncertainties in previous estimates and project Bitcoin's
electricity consumption and carbon footprint in the long term. If we assume
Bitcoin's market capitalization grows in line with that of gold, we find
that the annual electricity consumption of Bitcoin may increase from 60 to 400
TWh between 2020 and 2100. The future carbon footprint of Bitcoin strongly
depends on the decarbonization pathway of the electricity sector. If the
electricity sector achieves carbon neutrality by 2050, Bitcoin's carbon
footprint has peaked already. However, in the business-as-usual scenario,
emissions sum up to 2 gigatons until 2100, an amount comparable to 7% of global
emissions in 2019. The Bitcoin price spike at the end of 2020 shows, however,
that progressive development of market capitalization could yield an
electricity consumption of more than 100 TWh already in 2021, and lead to
cumulative emissions of over 5 gigatons by 2100. Therefore, we also discuss
policy instruments to reduce Bitcoin's future carbon footprint.",Bitcoin's future carbon footprint,2020-11-05 04:58:16,"Shize Qin, Lena Klaaßen, Ulrich Gallersdörfer, Christian Stoll, Da Zhang","http://arxiv.org/abs/2011.02612v2, http://arxiv.org/pdf/2011.02612v2",econ.GN
33602,gn,"Political campaigns are among the most sophisticated marketing exercises in
the United States. As part of their marketing communication strategy, an
increasing number of politicians adopt social media to inform their
constituencies. This study documents the returns from adopting a new
technology, namely Twitter, for politicians running for Congress by focusing on
the change in campaign contributions received. We compare weekly donations
received just before and just after a politician opens a Twitter account in
regions with high and low levels of Twitter penetration, controlling for
politician-month fixed effects. Specifically, over the course of a political
campaign, we estimate that the differential effect of opening a Twitter account
in regions with high vs low levels of Twitter penetration amounts to an
increase of 0.7-2% in donations for all politicians and 1-3.1% for new
politicians, who were never elected to the Congress before. In contrast, the
effect of joining Twitter for experienced politicians remains negligibly small.
We find some evidence consistent with the explanation that the effect is driven
by new information about the candidates, e.g., the effect is primarily driven
by new donors rather than past donors, by candidates without Facebook accounts
and by those tweeting more informatively. Overall, our findings imply that social media
can intensify political competition by lowering costs of disseminating
information for new entrants to their constituents and thus may reduce the
barriers to enter politics.",Social Media and Political Contributions: The Impact of New Technology on Political Competition,2020-11-05 18:48:25,"Maria Petrova, Ananya Sen, Pinar Yildirim","http://arxiv.org/abs/2011.02924v1, http://arxiv.org/pdf/2011.02924v1",econ.GN
33608,gn,"In tis paper we consider approaches for time series forecasting based on deep
neural networks and neuro-fuzzy nets. Also, we make short review of researches
in forecasting based on various models of ANFIS models. Deep Learning has
proven to be an effective method for making highly accurate predictions from
complex data sources. Also, we propose our models of DL and Neuro-Fuzzy
Networks for this task. Finally, we show possibility of using these models for
data science tasks. This paper presents also an overview of approaches for
incorporating rule-based methodology into deep learning neural networks.",Deep Neural Networks and Neuro-Fuzzy Networks for Intellectual Analysis of Economic Systems,2020-11-11 09:21:08,"Alexey Averkin, Sergey Yarushev","http://arxiv.org/abs/2011.05588v1, http://arxiv.org/pdf/2011.05588v1",cs.NE
33603,gn,"In a recent article in the American Economic Review, Tatyana Deryugina and
David Molitor (DM) analyzed the effect of Hurricane Katrina on the mortality of
elderly and disabled residents of New Orleans. The authors concluded that
Hurricane Katrina improved the eight-year survival rate of elderly and disabled
residents of New Orleans by 3% and that most of this decline in mortality was
due to declines in mortality among those who moved to places with lower
mortality. In this article, I provide a critical assessment of the evidence
provided by DM to support their conclusions. There are three main problems.
First, DM generally fail to account for the fact that people of different ages,
races or sex will have different probabilities of dying as time goes by, and
when they do allow for this, results change markedly. Second, DM do not account
for the fact that residents in New Orleans are likely to be selected
non-randomly on the basis of health because of the relatively high mortality
rate in New Orleans compared to the rest of the country. Third, there is
considerable evidence that among those who moved from New Orleans, the
destination chosen was non-random. Finally, DM never directly assessed changes
in mortality of those who moved, or stayed, in New Orleans before and after
Hurricane Katrina. These problems lead me to conclude that the evidence
presented by DM does not support their inferences.",Did Hurricane Katrina Reduce Mortality?,2020-11-06 17:49:13,Robert Kaestner,"http://arxiv.org/abs/2011.03392v2, http://arxiv.org/pdf/2011.03392v2",econ.GN
33604,gn,"This paper extends Xing's (2023abcd) optimal growth models of catching-up
economies from the case of production function switching to that of economic
structure switching and argues how a country develops its economy by endogenous
structural transformation and efficient resource allocation in a market
mechanism. To achieve this goal, the paper first summarizes three attributes of
economic structures from the literature, namely, structurality, durationality,
and transformality, and discusses their implications for methods of economic
modeling. Then, with the common knowledge assumption, the paper extends Xing's
(2023a) optimal growth model that is based on production function switching and
considers an extended Ramsey model with endogenous structural transformation in
which the social planner chooses the optimal industrial structure, resource
allocation with the chosen structure, and consumption to maximize the
representative household's total utility subject to the resource constraint.
The paper next establishes the mathematical underpinning of the static,
dynamic, and switching equilibria. The Ramsey growth model and its equilibria
are then extended to economies with complicated economic structures consisting
of hierarchical production, technology adoption and innovation, infrastructure,
and economic and political institutions. The paper concludes with a brief
discussion of applications of the proposed methodology to economic development
problems in other scenarios.",Endogenous structural transformation in economic development,2020-11-07 07:56:55,"Justin Y. F. Lin, Haipeng Xing","http://arxiv.org/abs/2011.03695v3, http://arxiv.org/pdf/2011.03695v3",econ.TH
33605,gn,"Motivated by recent applications of sequential decision making in matching
markets, in this paper we attempt to formulate and abstract market designs
for P2P lending. We describe a paradigm to set the stage for how peer to peer
investments can be conceived from a matching market perspective, especially
when both borrower and lender preferences are respected. We model these
specialized markets as an optimization problem and consider different utilities
for agents on both sides of the market while also understanding the impact of
equitable allocations to borrowers. We devise a technique based on sequential
decision making that allows the lenders to adjust their choices based on the
dynamics of uncertainty from competition over time and that also impacts the
rewards in return for their investments. Using simulated experiments we show
the dynamics of the regret based on the optimal borrower-lender matching and
find that the lender regret depends on the initial preferences set by the
lenders which could affect their learning over decision making steps.",Bandits in Matching Markets: Ideas and Proposals for Peer Lending,2020-10-30 23:12:26,Soumajyoti Sarkar,"http://arxiv.org/abs/2011.04400v5, http://arxiv.org/pdf/2011.04400v5",cs.GT
33606,gn,"In this study we arrive at a closed form expression for measuring vector
assortativity in networks motivated by our use-case which is to observe
patterns of social mobility in a society. Based on existing works on social
mobility within economics literature, and social reproduction within sociology
literature, we motivate the construction of an occupational network structure
to observe mobility patterns. Based on existing literature, over this
structure, we define mobility as assortativity of occupations attributed by the
representation of categories such as gender, geography or social groups. We
compare the results from our vector assortativity measure and averaged scalar
assortativity in the Indian context, relying on NSSO 68th round on employment
and unemployment. Our findings indicate that the trends indicated by our vector
assortativity measure are very similar to those indicated by the averaged
scalar assortativity index. We discuss some implications of this work and
suggest future directions.",Occupational Network Structure and Vector Assortativity for illustrating patterns of social mobility,2020-11-06 11:31:03,Vinay Reddy Venumuddala,"http://arxiv.org/abs/2011.04466v1, http://arxiv.org/pdf/2011.04466v1",econ.GN
33607,gn,"The electricity market, which was initially designed for dispatchable power
plants and inflexible demand, is being increasingly challenged by new trends,
such as the high penetration of intermittent renewables and the transformation
of the consumers' energy space. To accommodate these new trends and improve the
performance of the market, several modifications to current market designs have
been proposed in the literature. Given the vast variety of these proposals,
this paper provides a comprehensive investigation of the modifications proposed
in the literature as well as a detailed assessment of their suitability for
improving market performance under the continuously evolving electricity
landscape. To this end, first, a set of criteria for an ideal market design is
proposed, and the barriers present in current market designs hindering the
fulfillment of these criteria are identified. Then, the different market
solutions proposed in the literature, which could potentially mitigate these
barriers, are extensively explored. Finally, a taxonomy of the proposed
solutions is presented, highlighting the barriers addressed by each proposal
and the associated implementation challenges. The outcomes of this analysis
show that even though each barrier is addressed by at least one proposed
solution, no single proposal is able to address all the barriers
simultaneously. In this regard, a future-proof market design must combine
different elements of proposed solutions to comprehensively mitigate market
barriers and overcome the identified implementation challenges. Thus, by
thoroughly reviewing this rich body of literature, this paper introduces key
contributions enabling the advancement of the state-of-the-art towards
increasingly efficient electricity markets.",Short Term Electricity Market Designs: Identified Challenges and Promising Solutions,2020-11-09 20:32:12,"Lina Silva-Rodriguez, Anibal Sanjab, Elena Fumagalli, Ana Virag, Madeleine Gibescu","http://arxiv.org/abs/2011.04587v1, http://arxiv.org/pdf/2011.04587v1",econ.GN
33609,gn,"Recent studies exploiting city-level time series have shown that, around the
world, several crimes declined after COVID-19 containment policies have been
put in place. Using data at the community-level in Chicago, this work aims to
advance our understanding on how public interventions affected criminal
activities at a finer spatial scale. The analysis relies on a two-step
methodology. First, it estimates the community-wise causal impact of social
distancing and shelter-in-place policies adopted in Chicago via Structural
Bayesian Time-Series across four crime categories (i.e., burglary, assault,
narcotics-related offenses, and robbery). Once the models detected the
direction, magnitude and significance of the trend changes, Firth's Logistic
Regression is used to investigate the factors associated with the statistically
significant crime reduction found in the first step of the analyses.
Statistical results first show that changes in crime trends differ across
communities and crime types. This suggests that beyond the results of aggregate
models lies a complex picture characterized by diverging patterns. Second,
regression models provide mixed findings regarding the correlates associated
with significant crime reduction: several relations have opposite directions
across crimes with population being the only factor that is stably and
positively associated with significant crime reduction.",Disentangling Community-level Changes in Crime Trends During the COVID-19 Pandemic in Chicago,2020-11-11 12:34:49,"Gian Maria Campedelli, Serena Favarin, Alberto Aziani, Alex R. Piquero","http://dx.doi.org/10.1186/s40163-020-00131-8, http://arxiv.org/abs/2011.05658v1, http://arxiv.org/pdf/2011.05658v1",econ.GN
33610,gn,"Community electricity storage systems for multiple applications promise
benefits over household electricity storage systems. More economical
flexibility options such as demand response and sector coupling might reduce
the market size for storage facilities. This paper assesses the economic
performance of community electricity storage systems by taking competitive
flexibility options into account. For this purpose, an actor-related,
scenario-based optimization framework is applied. The results are in line with
the literature and show that community storage systems are economically more
efficient than household storage systems. Relative storage capacity reductions
of community storage systems over household storage systems are possible, as
the demand and generation profiles are balanced out among end users. On
average, storage capacity reductions of 9% per household are possible in the
base case, resulting in lower specific investments. The simultaneous
application of demand-side flexibility options such as sector coupling and
demand response enable a further capacity reduction of the community storage
size by up to 23%. At the same time, the competition between flexibility
options leads to smaller benefits regarding the community storage flexibility
potential, which reduces the market viability for these applications. In the
worst case, the cannibalization effects reach up to 38% between the flexibility
measures. The losses of the flexibility benefits outweigh the savings of the
capacity reduction whereby sector coupling constitutes a far greater
influencing factor than demand response. Overall, in consideration of the
stated cost trends, the economies of scale, and the reduction possibilities, a
profitable community storage model might be reached between 2025 and 2035.
Future work should focus on the analysis of policy frameworks.",Competition between simultaneous demand-side flexibility options: The case of community electricity storage systems,2020-11-11 17:19:05,"Fabian Scheller, Robert Burkhardt, Robert Schwarzeit, Russell McKenna, Thomas Bruckner","http://dx.doi.org/10.1016/j.apenergy.2020.114969, http://arxiv.org/abs/2011.05809v1, http://arxiv.org/pdf/2011.05809v1",eess.SY
33611,gn,"The advancement in the field of statistical methodologies to economic data
has paved its path towards the dire need for designing efficient military
management policies. India is ranked as the third largest country in terms of
military spender for the year 2019. Therefore, this study aims at utilizing the
Box-Jenkins ARIMA model for time series forecasting of the military expenditure
of India in forthcoming times. The model was generated on the SIPRI dataset of
Indian military expenditure of 60 years from the year 1960 to 2019. The trend
was analysed for the generation of the model that best fitted the forecasting.
The study highlights the minimum AIC value and involves ADF testing (Augmented
Dickey-Fuller) to transform expenditure data into stationary form for model
generation. It also focused on plotting the residual error distribution for
efficient forecasting. This research proposed an ARIMA (0,1,6) model for
optimal forecasting of military expenditure of India with an accuracy of 95.7%.
The model, thus, acts as a Moving Average (MA) model and predicts the
steady-state exponential growth of 36.94% in military expenditure of India by
2024.",Forecasting and Analyzing the Military Expenditure of India Using Box-Jenkins ARIMA Model,2020-11-10 14:41:11,"Deepanshu Sharma, Kritika Phulli","http://arxiv.org/abs/2011.06060v1, http://arxiv.org/pdf/2011.06060v1",econ.GN
33612,gn,"Cities are centers for the integration of capital and incubators of
invention, and attracting venture capital (VC) is of great importance for
cities to advance in innovative technology and business models towards a
sustainable and prosperous future. Yet we still lack a quantitative
understanding of the relationship between urban characteristics and VC
activities. In this paper, we find a clear nonlinear scaling relationship
between VC activities and the urban population of Chinese cities. In such
nonlinear systems, the widely applied linear per capita indicators would be
either biased to larger cities or smaller cities depends on whether it is
superlinear or sublinear, while the residual of cities relative to the
prediction of scaling law is a more objective and scale-invariant metric.
%(i.e., independent of the city size). Such a metric can distinguish the
effects of local dynamics and scaled growth induced by the change of population
size. The spatiotemporal evolution of such metrics on VC activities reveals
three distinct groups of cities, two of which stand out with increasing and
decreasing trends, respectively. And the taxonomy results together with spatial
analysis also signify different development modes between large urban
agglomeration regions. Besides, we notice the evolution of scaling exponents on
VC activities are of much larger fluctuations than on socioeconomic output of
cities, and a conceptual model that focuses on the growth dynamics of different
sized cities can well explain it, which we assume would be general to other
scenarios.",Assessing the attraction of cities on venture capital from a scaling law perspective,2020-11-12 12:57:26,"Ruiqi Li, Lingyun Lu, Weiwei Gu, Shaodong Ma, Gang Xu, H. Eugene Stanley","http://dx.doi.org/10.1109/ACCESS.2021.3068317, http://arxiv.org/abs/2011.06287v1, http://arxiv.org/pdf/2011.06287v1",physics.soc-ph
33614,gn,"Digital platforms offer vast potential for increased value creation and
innovation, especially through cross-organizational data sharing. It appears
that SMEs in Germany are currently hesitant or unable to create their own
platforms. To get a holistic overview of the structure of the German
SME-focused platform landscape (that is, platforms led by or targeting
SMEs), we applied a mixed method approach of traditional desk research and a
quantitative analysis. The study identified large geographical disparity along
the borders of the new and old German federal states, and overall fewer
platform ventures by SMEs than by large companies and startups. Platform
ventures for SMEs are more likely set up as partnerships. We indicate that high
capital intensity might be a reason for that.",A Mixed-Method Landscape Analysis of SME-focused B2B Platforms in Germany,2020-11-13 14:08:11,"Tina Krell, Fabian Braesemann, Fabian Stephany, Nicolas Friederici, Philip Meier","http://dx.doi.org/10.2139/ssrn.3614485, http://arxiv.org/abs/2011.06859v1, http://arxiv.org/pdf/2011.06859v1",econ.GN
33615,gn,"Ambient temperatures are rising globally, with the greatest increases
recorded at night. Concurrently, the prevalence of insufficient sleep is
increasing in many populations, with substantial costs to human health and
well-being. Even though nearly a third of the human lifespan is spent asleep,
it remains unknown whether temperature and weather impact objective measures of
sleep in real-world settings, globally. Here we link billions of sleep
measurements from wearable devices comprising over 7 million nighttime sleep
records across 68 countries to local daily meteorological data from 2015 to
2017. Rising nighttime temperatures shorten within-person sleep duration
primarily through delayed onset, increasing the probability of insufficient
sleep. The effect of temperature on sleep loss is substantially larger for
residents from lower income countries and older adults, and females are
affected more than are males. Nighttime temperature increases inflict the
greatest sleep loss during summer and fall months, and we do not find evidence
of short-term acclimatization. Coupling historical behavioral measurements with
output from climate models, we project that climate change will further erode
human sleep, producing substantial geographic inequalities. Our findings have
significant implications for adaptation planning and illuminate a pathway
through which rising temperatures may globally impact public health.",Ambient heat and human sleep,2020-11-14 02:04:42,"Kelton Minor, Andreas Bjerre-Nielsen, Sigga Svala Jonasdottir, Sune Lehmann, Nick Obradovich","http://arxiv.org/abs/2011.07161v1, http://arxiv.org/pdf/2011.07161v1",cs.CY
33616,gn,"COVID-19 has had a much larger impact on the financial markets compared to
previous epidemics because the news information is transferred over the social
networks at a speed of light. Using Twitter's API, we compiled a unique dataset
with more than 26 million COVID-19 related Tweets collected from February 2nd
until May 1st, 2020. We find that more frequent use of the word ""stock"" in
daily Tweets is associated with a substantial decline in log returns of three
key US indices - Dow Jones Industrial Average, S&P500, and NASDAQ. The results
remain virtually unchanged in multiple robustness checks.",COVID-19 and the stock market: evidence from Twitter,2020-11-13 22:59:50,"Rahul Goel, Lucas Javier Ford, Maksym Obrizan, Rajesh Sharma","http://arxiv.org/abs/2011.08717v1, http://arxiv.org/pdf/2011.08717v1",econ.GN
33617,gn,"This paper experimentally studies whether individuals hold a first-order
belief that others apply Bayes' rule to incorporate private information into
their beliefs, which is a fundamental assumption in many Bayesian and
non-Bayesian social learning models. We design a novel experimental setting in
which the first-order belief assumption implies that social information is
equivalent to private information. Our main finding is that participants'
reported reservation prices of social information are significantly lower than
those of private information, which provides evidence that casts doubt on the
first-order belief assumption. We also build a novel belief error model in
which participants form a random posterior belief with a Bayesian posterior
belief kernel to explain the experimental findings. A structural estimation of
the model suggests that participants' sophisticated consideration of others'
belief error and their exaggeration of the error both contribute to the
difference in reservation prices.",Belief Error and Non-Bayesian Social Learning: Experimental Evidence,2020-11-19 07:06:32,"Boğaçhan Çelen, Sen Geng, Huihui Li","http://arxiv.org/abs/2011.09640v1, http://arxiv.org/pdf/2011.09640v1",econ.GN
33618,gn,"Job security can never be taken for granted, especially in times of rapid,
widespread and unexpected social and economic change. These changes can force
workers to transition to new jobs. This may be because new technologies emerge
or production is moved abroad. Perhaps it is a global crisis, such as COVID-19,
which shutters industries and displaces labor en masse. Regardless of the
impetus, people are faced with the challenge of moving between jobs to find new
work. Successful transitions typically occur when workers leverage their
existing skills in the new occupation. Here, we propose a novel method to
measure the similarity between occupations using their underlying skills. We
then build a recommender system for identifying optimal transition pathways
between occupations using job advertisements (ads) data and a longitudinal
household survey. Our results show that not only can we accurately predict
occupational transitions (Accuracy = 76%), but we account for the asymmetric
difficulties of moving between jobs (it is easier to move in one direction than
the other). We also build an early warning indicator for new technology
adoption (showcasing Artificial Intelligence), a major driver of rising job
transitions. By using real-time data, our systems can respond to labor demand
shifts as they occur (such as those caused by COVID-19). They can be leveraged
by policy-makers, educators, and job seekers who are forced to confront the
often distressing challenges of finding new jobs.",Skill-driven Recommendations for Job Transition Pathways,2020-11-24 02:58:26,"Nikolas Dawson, Mary-Anne Williams, Marian-Andrei Rizoiu","http://dx.doi.org/10.1371/journal.pone.0254722, http://arxiv.org/abs/2011.11801v2, http://arxiv.org/pdf/2011.11801v2",econ.GN
33639,gn,"We examine probabilistic forecasts for battleground states in the 2020 US
presidential election, using daily data from two sources over seven months: a
model published by The Economist, and prices from the PredictIt exchange. We
find systematic differences in accuracy over time, with markets performing
better several months before the election, and the model performing better as
the election approached. A simple average of the two forecasts performs better
than either one of them overall, even though no average can outperform both
component forecasts for any given state-date pair. This effect arises because
the model and the market make different kinds of errors in different states:
the model was confidently wrong in some cases, while the market was excessively
uncertain in others. We conclude that there is value in using hybrid
forecasting methods, and propose a market design that incorporates model
forecasts via a trading bot to generate synthetic predictions. We also propose
and conduct a profitability test that can be used as a novel criterion for the
evaluation of forecasting performance.","Models, Markets, and the Forecasting of Elections",2021-02-06 22:05:07,"Rajiv Sethi, Julie Seager, Emily Cai, Daniel M. Benjamin, Fred Morstatter","http://arxiv.org/abs/2102.04936v4, http://arxiv.org/pdf/2102.04936v4",econ.GN
33619,gn,"Traditional statistical and measurements are unable to solve all industrial
data in the right way and appropriate time. Open markets mean the customers are
increased, and production must increase to provide all customer requirements.
Nowadays, large data generated daily from different production processes and
traditional statistical or limited measurements are not enough to handle all
daily data. Improve production and quality need to analyze data and extract the
important information about the process how to improve. Data mining applied
successfully in the industrial processes and some algorithms such as mining
association rules, and decision tree recorded high professional results in
different industrial and production fields. The study applied seven algorithms
to analyze production data and extract the best result and algorithm in the
industry field. KNN, Tree, SVM, Random Forests, ANN, Na\""ive Bayes, and
AdaBoost applied to classify data based on three attributes without neglect any
variables whether this variable is numerical or categorical. The best results
of accuracy and area under the curve (ROC) obtained from Decision tree and its
ensemble algorithms (Random Forest and AdaBoost). Thus, a decision tree is an
appropriate algorithm to handle manufacturing and production data especially
this algorithm can handle numerical and categorical data.",The Application of Data Mining in the Production Processes,2020-11-24 23:00:59,Hamza Saad,"http://dx.doi.org/10.11648/j.ie.20180201.14, http://arxiv.org/abs/2011.12348v1, http://arxiv.org/pdf/2011.12348v1",econ.GN
33621,gn,"This paper extends the sequential search model of Wolinsky (1986) by allowing
firms to choose how much match value information to disclose to visiting
consumers. This restores the Diamond paradox (Diamond 1971): there exist no
symmetric equilibria in which consumers engage in active search, so consumers
obtain zero surplus and firms obtain monopoly profits. Modifying the scenario
to one in which prices are advertised, we discover that the no-active-search
result persists, although the resulting symmetric equilibria are ones in which
firms price at marginal cost.",Persuasion Produces the (Diamond) Paradox,2020-11-27 21:42:03,Mark Whitmeyer,"http://arxiv.org/abs/2011.13900v4, http://arxiv.org/pdf/2011.13900v4",econ.TH
33622,gn,"Expanding multi-country emissions trading system is considered as crucial to
fill the existing mitigation gap for the 2\degree C climate target. Trustworthy
emissions accounting is the cornerstone of such a system encompassing different
jurisdictions. However, traditional emissions measuring, reporting, and
verification practices that support data authenticity might not be applicable
as detailed data from large utilities and production facilities to be covered
in the multi-country emissions trading system are usually highly sensitive and
of severe national security concern. In this study, we propose a cryptographic
framework for an authenticated and secure emissions accounting system that can
resolve this data dilemma. We demonstrate that integrating a sequence of
cryptographic protocols can preserve data authenticity and security for a
stylized multi-country emissions trading system. We call for more research to
promote applications of modern cryptography in future international climate
governance to build trust and strengthen collaboration.",An authenticated and secure accounting system for international emissions trading,2020-11-27 22:00:44,"Chenxing Li, Yang Yu, Andrew Chi-Chih Yao, Da Zhang, Xiliang Zhang","http://arxiv.org/abs/2011.13954v1, http://arxiv.org/pdf/2011.13954v1",econ.GN
33623,gn,"The financial turmoil surrounding the Great Recession called for
unprecedented intervention by Central Banks: unconventional policies affected
various areas in the economy, including stock market volatility. In order to
evaluate such effects, by including Markov Switching dynamics within a recent
Multiplicative Error Model, we propose a model--based classification of the
dates of a Central Bank's announcements to distinguish the cases where the
announcement implies an increase or a decrease in volatility, or no effect. In
detail, we propose two smoothed probability--based classification methods,
obtained as a by--product of the model estimation, which provide very similar
results to those coming from a classical k--means clustering procedure. The
application on four Eurozone market volatility series shows a successful
classification of 144 European Central Bank announcements.",On Classifying the Effects of Policy Announcements on Volatility,2020-11-28 12:23:15,"Giampiero M. Gallo, Demetrio Lacava, Edoardo Otranto","http://arxiv.org/abs/2011.14094v2, http://arxiv.org/pdf/2011.14094v2",q-fin.GN
33624,gn,"Covid-19 has rapidly redefined the agenda of technological research and
development both for academics and practitioners. While the medical scientific
publication system has promptly reacted to this new situation, other domains,
particularly in new technologies, struggle to map what is happening in their
contexts. The pandemic has created the need for a rapid detection of
technological convergence phenomena, but at the same time it has made clear
that this task is impossible on the basis of traditional patent and publication
indicators. This paper presents a novel methodology to perform a rapid
detection of the fast technological convergence phenomenon that is occurring
under the pressure of the Covid-19 pandemic. The fast detection has been
performed thanks to the use of a novel source: the online blogging platform
Medium. We demonstrate that the hybrid structure of this social journalism
platform allows a rapid detection of innovation phenomena, unlike other
traditional sources. The technological convergence phenomenon has been modelled
through a network-based approach, analysing the differences of networks
computed during two time periods (pre and post COVID-19). The results led us to
discuss the repurposing of technologies regarding ""Remote Control"", ""Remote
Working"", ""Health"" and ""Remote Learning"".",Rapid detection of fast innovation under the pressure of COVID-19,2021-01-30 12:37:40,"Nicola Melluso, Andrea Bonaccorsi, Filippo Chiarello, Gualtiero Fantoni","http://dx.doi.org/10.1371/journal.pone.0244175, http://arxiv.org/abs/2102.00197v1, http://arxiv.org/pdf/2102.00197v1",cs.IR
33625,gn,"This paper draws upon the evolutionary concepts of technological relatedness
and knowledge complexity to enhance our understanding of the long-term
evolution of Artificial Intelligence (AI). We reveal corresponding patterns in
the emergence of AI - globally and in the context of specific geographies of
the US, Japan, South Korea, and China. We argue that AI emergence is associated
with increasing related variety due to knowledge commonalities as well as
increasing complexity. We use patent-based indicators for the period between
1974 and 2018 to analyse the evolution of AI's global technological space, to
identify its technological core as well as changes to its overall relatedness
and knowledge complexity. At the national level, we also measure countries'
overall specialisations against AI-specific ones. At the global level, we find
increasing overall relatedness and complexity of AI. However, for the
technological core of AI, which has been stable over time, we find decreasing
related variety and increasing complexity. This evidence points out that AI
innovations related to core technologies are becoming increasingly distinct
from each other. At the country level, we find that the US and Japan have been
increasing the overall relatedness of their innovations. The opposite is the
case for China and South Korea, which we associate with the fact that these
countries are overall less technologically developed than the US and Japan.
Finally, we observe a stable increasing overall complexity for all countries
apart from China, which we explain by the focus of this country in technologies
not strongly linked to AI.",An evolutionary view on the emergence of Artificial Intelligence,2021-01-30 17:46:23,"Matheus E. Leusin, Bjoern Jindra, Daniel S. Hain","http://arxiv.org/abs/2102.00233v1, http://arxiv.org/pdf/2102.00233v1",econ.GN
33626,gn,"Rapid rise in income inequality in India is a serious concern. While the
emphasis is on inclusive growth, it seems difficult to tackle the problem
without looking at the intricacies of the problem. Social mobility is one such
important tool which helps in reaching the cause of the problem and focuses on
bringing long term equality in the country. The purpose of this study is to
examine the role of social background and education attainment in generating
occupation mobility in the country. By applying an extended version of the RC
association model to the 68th round (2011-12) of the Employment and Unemployment
Survey by the National Sample Survey Office of India, we found that the role of
education is not important in generating occupation mobility in India, while
social background plays a critical role in determining one's occupation. This
study successfully highlights the strong intergenerational occupation
immobility in the country and also the need to focus on education. In this
regard, further studies are needed to uncover other crucial factors limiting
the growth of individuals in the country.",Social Mobility in India,2021-01-31 16:18:36,"A. Singh, A. Forcina, K. Muniyoor","http://arxiv.org/abs/2102.00447v1, http://arxiv.org/pdf/2102.00447v1",econ.GN
33627,gn,"This study uses the unprecedented changes in the sex ratio due to the losses
of men during World War II to identify the impacts of the gender imbalance on
marriage market and birth outcomes in Japan. Using newly digitized census-based
historical statistics, we find evidence that men had a stronger bargaining
position in the marriage market and intra-household fertility decisions than
women. Under relative male scarcity, while people, especially younger people,
were more likely to marry and divorce, widowed women were less likely to
remarry than widowed men. We also find that women's bargaining position in the
marriage market might not have improved throughout the 1950s. Given the
institutional changes in the abortion law after the war, marital fertility and
stillbirth rates increased in the areas that suffered relative male scarcity.
Our result on out-of-wedlock births indicates that the theoretical prediction
of intra-household bargaining is considered to be robust in an economy in which
marital fertility is dominant.",The Impacts of the Gender Imbalance on Marriage and Birth: Evidence from World War II in Japan,2021-02-01 11:03:28,"Kota Ogasawara, Erika Igarashi","http://arxiv.org/abs/2102.00687v2, http://arxiv.org/pdf/2102.00687v2",econ.GN
33628,gn,"This paper focuses on social cloud formation, where agents are involved in a
closeness-based conditional resource sharing and build their resource sharing
network themselves. The objectives of this paper are: (1) to investigate the
impact of agents' decisions of link addition and deletion on their local and
global resource availability, (2) to analyze spillover effects in terms of the
impact of link addition between a pair of agents on others' utility, (3) to
study the role of agents' closeness in determining what type of spillover
effects these agents experience in the network, and (4) to model the choices of
agents that suggest with whom they want to add links in the social cloud. The
findings include the following. Firstly, agents' decision of link addition
(deletion) increases (decreases) their local resource availability. However,
these observations do not hold in the case of global resource availability.
Secondly, in a connected network, agents experience either positive or negative
spillover effect and there is no case with no spillover effects. Agents observe
no spillover effects if and only if the network is disconnected and consists of
more than two components (sub-networks). Furthermore, if there is no change in
the closeness of an agent (not involved in link addition) due to a newly added
link, then the agent experiences negative spillover effect. Although an
increase in the closeness of agents is necessary in order to experience
positive spillover effects, the condition is not sufficient. By focusing on
parameters such as closeness and shortest distances, we provide conditions
under which agents choose to add links so as to maximise their resource
availability.",Resource Availability in the Social Cloud: An Economics Perspective,2021-01-30 09:00:03,"Pramod C. Mane, Nagarajan Krishnamurthy, Kapil Ahuja","http://arxiv.org/abs/2102.01071v1, http://arxiv.org/pdf/2102.01071v1",cs.GT
33640,gn,"We investigate the changes in the self-citation behavior of Italian
professors following the introduction of a citation-based incentive scheme, for
national accreditation to academic appointments. Previous contributions on
self-citation behavior have either focused on small samples or relied on simple
models, not controlling for all confounding factors. The present work adopts a
complex statistical model implemented on individual bibliometric data for over
15,000 Italian professors. Controlling for a number of covariates (number of
citable papers published by the author; presence of international authors;
number of co-authors; degree of the professor's specialization), the average
increase in self-citation rates following the introduction of the ASN is 9.5%.
The increase is common to all disciplines and academic ranks, albeit with
diverse magnitude. Moreover, the increase is sensitive to the relative
incentive, depending on the status of the scholar with respect to the
scientific accreditation. A further analysis shows that there is much
heterogeneity in the individual patterns of self-citing behavior, albeit with
very few outliers.",The effects of citation-based research evaluation schemes on self-citation behavior,2021-02-10 12:58:45,"Giovanni Abramo, Ciriaco Andrea D'Angelo, Leonardo Grilli","http://arxiv.org/abs/2102.05358v1, http://arxiv.org/pdf/2102.05358v1",cs.DL
33629,gn,"The increasing penetration of intermittent renewables, storage devices, and
flexible loads is introducing operational challenges in distribution grids. The
proper coordination and scheduling of these resources using a distributed
approach is warranted, and can only be achieved through local retail markets
employing transactive energy schemes. To this end, we propose a
distribution-level retail market operated by a Distribution System Operator
(DSO), which schedules DERs and determines the real-time distribution-level
Locational Marginal Price (d-LMP). The retail market is built using a
distributed Proximal Atomic Coordination (PAC) algorithm, which solves the
optimal power flow model while accounting for network physics, rendering
locationally and temporally varying d-LMPs. A numerical study of the market
structure is carried out via simulations of the IEEE-123 node network using
data from ISO-NE and Eversource in Massachusetts, US. The market performance is
compared to existing retail practices, including demand response (DR) with
no-export rules and net metering. The DSO-centric market increases DER
utilization, permits continual market participation for DR, lowers electricity
rates for customers, and eliminates the subsidies inherent to net metering
programs. The resulting lower revenue stream for the DSO highlights the
evolving business model of the modern utility, moving from commoditized markets
towards performance-based ratemaking.",Reinventing the Utility for DERs: A Proposal for a DSO-Centric Retail Electricity Market,2021-02-02 05:57:30,"Rabab Haider, David D'Achiardi, Venkatesh Venkataramanan, Anurag Srivastava, Anjan Bose, Anuradha M. Annaswamy","http://dx.doi.org/10.1016/j.adapen.2021.100026, http://arxiv.org/abs/2102.01269v1, http://arxiv.org/pdf/2102.01269v1",econ.GN
33630,gn,"Based on some analytic structural properties of the Gini and Kolkata indices
for social inequality, as obtained from a generic form of the Lorenz function,
we make a conjecture that the limiting (effective saturation) value of the
above-mentioned indices is about 0.865. This, together with some more new
observations on the citation statistics of individual authors (including Nobel
laureates), suggests that about $14\%$ of people or papers or social conflicts
tend to earn or attract or cause about $86\%$ of wealth or citations or deaths
respectively in very competitive situations in markets, universities or wars.
This is a modified form of the (more than a) century old $80-20$ law of Pareto
in economics (not visible today because of various welfare and other strategies)
and gives a universal value ($0.86$) of a social (inequality) constant or
number.",Limiting Value of the Kolkata Index for Social Inequality and a Possible Social Constant,2021-02-02 17:51:43,"Asim Ghosh, Bikas K Chakrabarti","http://dx.doi.org/10.1016/j.physa.2021.125944, http://arxiv.org/abs/2102.01527v5, http://arxiv.org/pdf/2102.01527v5",physics.soc-ph
33631,gn,"We use a five percent sample of Americans' credit bureau data, combined with
a regression discontinuity approach, to estimate the effect of universal health
insurance at age 65 (when most Americans become eligible for Medicare) at the
national, state, and local level. We find a 30 percent reduction in debt
collections, and a two-thirds reduction in the geographic variation in
collections, with limited effects on other financial outcomes. The areas that
experienced larger reductions in collections debt at age 65 were concentrated
in the Southern United States, and had higher shares of black residents, people
with disabilities, and for-profit hospitals.",The Great Equalizer: Medicare and the Geography of Consumer Financial Strain,2021-02-03 19:57:23,"Paul Goldsmith-Pinkham, Maxim Pinkovskiy, Jacob Wallace","http://arxiv.org/abs/2102.02142v1, http://arxiv.org/pdf/2102.02142v1",econ.GN
33632,gn,"The COVID-19 pandemic has disrupted human activities, leading to
unprecedented decreases in both global energy demand and GHG emissions. Yet it
is little known that there was also a low-carbon shift of the global energy
system in 2020. Here, using near-real-time data on energy-related GHG emissions
from 30 countries (about 70% of global power generation), we show that the
pandemic caused an unprecedented de-carbonization of the global power system,
represented by a dramatic decrease in the carbon intensity of the power sector
that reached a historical low of 414.9 tCO2eq/GWh in 2020. Moreover, the share
of energy derived from renewable and low-carbon sources (nuclear, hydro-energy,
wind, solar, geothermal, and biomass) exceeded that from coal and oil for the
first time in history in May of 2020. The decrease in global net energy demand
(-1.3% in the first half of 2020 relative to the average of the period in
2016-2019) masks a large down-regulation of fossil-fuel-burning power plants
supply (-6.1%) coincident with a surge of low-carbon sources (+6.2%).
Concomitant changes in the diurnal cycle of electricity demand also favored
low-carbon generators, including a flattening of the morning ramp, a lower
midday peak, and delays in both the morning and midday load peaks in most
countries. However, emission intensities in the power sector have since
rebounded in many countries, and a key question for climate mitigation is thus
to what extent countries can achieve and maintain lower, pandemic-level carbon
intensities of electricity as part of a green recovery.",De-carbonization of global energy use during the COVID-19 pandemic,2021-02-05 18:37:57,"Zhu Liu, Biqing Zhu, Philippe Ciais, Steven J. Davis, Chenxi Lu, Haiwang Zhong, Piyu Ke, Yanan Cui, Zhu Deng, Duo Cui, Taochun Sun, Xinyu Dou, Jianguang Tan, Rui Guo, Bo Zheng, Katsumasa Tanaka, Wenli Zhao, Pierre Gentine","http://arxiv.org/abs/2102.03240v1, http://arxiv.org/pdf/2102.03240v1",physics.ao-ph
33633,gn,"Following the risk-taking model of Seel and Strack, $n$ players decide when
to stop privately observed Brownian motions with drift and absorption at zero.
They are then ranked according to their level of stopping and paid a
rank-dependent reward. We study the problem of a principal who aims to induce a
desirable equilibrium performance of the players by choosing how much reward is
attributed to each rank. Specifically, we determine optimal reward schemes for
principals interested in the average performance and the performance at a given
rank. While the former can be related to reward inequality in the Lorenz sense,
the latter can have a surprising shape.",Reward Design in Risk-Taking Contests,2021-02-05 23:44:03,"Marcel Nutz, Yuchong Zhang","http://arxiv.org/abs/2102.03417v2, http://arxiv.org/pdf/2102.03417v2",math.OC
33641,gn,"The massive shock of the COVID-19 pandemic is already showing its negative
effects on economies around the world, unprecedented in recent history.
COVID-19 infections and containment measures have caused a general slowdown in
research and new knowledge production. Because of the link between R&D spending
and economic growth, it is to be expected that a slowdown in research
activities will in turn slow the global recovery from the pandemic. Many recent
studies also claim an uneven impact on scientific production across genders. In
this paper, we investigate the phenomenon across countries, analysing preprint
depositions. Unlike other works, which compare the number of preprint
depositions before and after the pandemic outbreak, we analyse deposition
trends across geographical areas and contrast post-pandemic depositions with
expected ones. Contrary to common belief and initial evidence, in a few
countries female scientists increased their scientific output while that of males
plunged.",Gendered impact of COVID-19 pandemic on research production: a cross-country analysis,2021-02-10 13:00:58,"Giovanni Abramo, Ciriaco Andrea D'Angelo, Ida Mele","http://arxiv.org/abs/2102.05360v1, http://arxiv.org/pdf/2102.05360v1",cs.DL
33634,gn,"Internet access is essential for economic development and helping to deliver
the Sustainable Development Goals, especially as even basic broadband can
revolutionize available economic opportunities. Yet, more than one billion
people still live without internet access. Governments must make strategic
choices to connect these citizens, but currently have few independent,
transparent and scientifically reproducible assessments to rely on. This paper
develops open-source software to test broadband universal service strategies
which meet the 10 Mbps target being considered by the UN Broadband Commission.
The private and government costs of different infrastructure decisions are
quantified in six East and West African countries (Côte d'Ivoire, Mali,
Senegal, Kenya, Tanzania and Uganda). The results provide strong evidence that
`leapfrogging` straight to 4G in unconnected areas is the least-cost option for
providing broadband universal service, with savings between 13-51% over 3G. The
results also demonstrate how the extraction of spectrum and tax revenues in
unviable markets provide no net benefit, as for every $1 taken in revenue, a $1
infrastructure subsidy is required from government to achieve broadband
universal service. Importantly, the use of a Shared Rural Network in unviable
locations provides impressive cost savings (up to 78%), while retaining the
benefits of dynamic infrastructure competition in viable urban and suburban
areas. This paper provides evidence to design national and international
policies aimed at broadband universal service.",Policy options for digital infrastructure strategies: A simulation model for broadband universal service in Africa,2021-02-06 14:09:14,Edward Oughton,"http://arxiv.org/abs/2102.03561v1, http://arxiv.org/pdf/2102.03561v1",cs.CY
33635,gn,"Public organizations need innovative approaches for managing common goods and
to explain the dynamics linking the (re)generation of common goods and
organizational performance. Although system dynamics is recognised as a useful
approach for managing common goods, public organizations rarely adopt the
system dynamics for this goal. The paper aims to review the literature on the
system dynamics and its recent application, known as dynamic performance
management, to highlight the state of the art and future opportunities on the
management of common goods. The authors analyzed 144 documents using a
systematic literature review. The results outline a fair number of
documents, countries and journals involved in the study of system dynamics, but
reveal insufficient research on the link between the (re)generation of
common goods and organizational performance. This paper outlines academic and
practical contributions. Firstly, it contributes to the theory of common goods.
It provides insight for linking the management of common goods and
organizational performance through the use of dynamic performance management
approach. Furthermore, it shows scholars the main research opportunities.
Secondly, it indicates to practitioners the documents providing useful ideas on
the adoption of system dynamics for managing common goods.",Dynamic Performance Management: An Approach for Managing the Common Goods,2021-02-08 12:53:19,"A. Sardi, E. Sorano","http://dx.doi.org/10.3390/su11226435, http://arxiv.org/abs/2102.04090v1, http://arxiv.org/pdf/2102.04090v1",econ.GN
33636,gn,"The current world challenges include issues such as infectious disease
pandemics, environmental health risks, food safety, and crime prevention.
Through this article, a special emphasis is given to one of the main challenges
in the healthcare sector during the COVID-19 pandemic, the cyber risk. Since
the beginning of the Covid-19 pandemic, the World Health Organization has
detected a dramatic increase in the number of cyber-attacks. For instance, in
Italy the COVID-19 emergency has heavily affected cybersecurity; from January
to April 2020, the total of attacks, accidents, and violations of privacy to
the detriment of companies and individuals has doubled. Using a systematic and
rigorous approach, this paper aims to analyze the literature on the cyber risk
in the healthcare sector to understand the real knowledge on this topic. The
findings highlight the poor attention of the scientific community on this
topic, except in the United States. The literature lacks research contributions
to support cyber risk management in subject areas such as Business, Management
and Accounting; Social Science; and Mathematics. This research outlines the
need to empirically investigate cyber risk and to provide practical solutions to
health facilities. Keywords: cyber risk; cyber-attack; cybersecurity; computer
security; COVID-19; coronavirus; information technology risk; risk management;
risk assessment; health facilities; healthcare sector; systematic literature
review; insurance",Cyber Risk in Health Facilities: A Systematic Literature Review,2021-02-08 12:59:39,"Alberto Sardi, Alessandro Rizzi, Enrico Sorano, Anna Guerrieri","http://dx.doi.org/10.3390/su12177002, http://arxiv.org/abs/2102.04093v1, http://arxiv.org/pdf/2102.04093v1",econ.GN
33637,gn,"Machine learning models are increasingly used in a wide variety of financial
settings. The difficulty of understanding the inner workings of these systems,
combined with their wide applicability, has the potential to lead to
significant new risks for users; these risks need to be understood and
quantified. In this sub-chapter, we will focus on a well-studied application of
machine learning techniques, to pricing and hedging of financial options. Our
aim will be to highlight the various sources of risk that the introduction of
machine learning emphasises or de-emphasises, and the possible risk mitigation
and management strategies that are available.",Black-box model risk in finance,2021-02-09 14:10:51,"Samuel N. Cohen, Derek Snow, Lukasz Szpruch","http://arxiv.org/abs/2102.04757v1, http://arxiv.org/pdf/2102.04757v1",q-fin.CP
33638,gn,"Analysis of policies for managing epidemics require simultaneously an
economic and epidemiological perspective. We adopt a cost-of-policy framework
to model both the virus spread and the cost of handling the pandemic. Because
it is harder and more costly to fight the pandemic when the circulation is
higher, we find that the optimal policy is to go to zero or near-zero case
numbers. Without imported cases, if a region is willing to implement measures
to prevent spread at one level in number of cases, it must also be willing to
prevent the spread at a lower level, since it will be cheaper to do so and
has only positive additional effects. With imported cases, if a region is not
coordinating with other regions, we show the cheapest policy is continually low
but nonzero cases due to decreasing cost of halting imported cases. When it is
coordinating, zero is cost-optimal. Our analysis indicates that within Europe
cooperation targeting a reduction of both within country transmission, and
between country importation risk, should help achieve lower transmission and
reduced costs.",Lowest-cost virus suppression,2021-02-09 14:18:30,"Jacob Janssen, Yaneer Bar-Yam","http://arxiv.org/abs/2102.04758v1, http://arxiv.org/pdf/2102.04758v1",econ.GN
33642,gn,"This study is about public-private research collaboration. In particular, we
want to measure how the propensity of academics to collaborate with their
colleagues from private firms varies over time and whether the typical profile
of such academics changes. Furthermore, we investigate changes in the weights
of the main drivers underlying the academics' propensity to collaborate with
industry. In order to achieve such goals, we apply an inferential model on a
dataset of professors working in Italian universities in two subsequent
periods, 2010-2013 and 2014-2017. Results can be useful for supporting the
definition of policies aimed at fostering public-private research
collaborations, and should be taken into account when assessing their
effectiveness afterwards.",Do the propensity and drivers of academics' engagement in research collaboration with industry vary over time?,2021-02-10 13:05:07,"Giovanni Abramo, Francesca Apponi, Ciriaco Andrea D'Angelo","http://arxiv.org/abs/2102.05364v1, http://arxiv.org/pdf/2102.05364v1",econ.GN
33643,gn,"We propose a novel approach to the statistical analysis of stochastic
simulation models and, especially, agent-based models (ABMs). Our main goal is
to provide fully automated, model-independent and tool-supported techniques and
algorithms to inspect simulations and perform counterfactual analysis. Our
approach: (i) is easy-to-use by the modeller, (ii) improves reproducibility of
results, (iii) optimizes running time given the modeller's machine, (iv)
automatically chooses the number of required simulations and simulation steps
to reach user-specified statistical confidence, and (v) automates a variety of
statistical tests. In particular, our techniques are designed to distinguish
the transient dynamics of the model from its steady-state behaviour (if any),
estimate properties in both 'phases', and provide indications on the
(non-)ergodic nature of the simulated processes - which, in turn, allows one to
gauge the reliability of a steady-state analysis. Estimates are equipped with
statistical guarantees, allowing for robust comparisons across computational
experiments. To demonstrate the effectiveness of our approach, we apply it to
two models from the literature: a large-scale macro-financial ABM and a small
scale prediction market model. Compared to prior analyses of these models, we
obtain new insights and we are able to identify and fix some erroneous
conclusions.",Automated and Distributed Statistical Analysis of Economic Agent-Based Models,2021-02-10 15:39:34,"Andrea Vandin, Daniele Giachini, Francesco Lamperti, Francesca Chiaromonte","http://dx.doi.org/10.1016/j.jedc.2022.104458, http://arxiv.org/abs/2102.05405v2, http://arxiv.org/pdf/2102.05405v2",econ.GN
33644,gn,"With today's technological advancements, mobile phones and wearable devices
have become extensions of an increasingly diffused and smart digital
infrastructure. In this paper, we examine mobile health (mHealth) platforms and
their health and economic impacts on the outcomes of chronic disease patients.
We partnered with a major mHealth firm that provides one of the largest mHealth
apps in Asia specializing in diabetes care. We designed a randomized field
experiment based on detailed patient health activities (e.g., exercises, sleep,
food intake) and blood glucose values from 1,070 diabetes patients over several
months. We find the adoption of the mHealth app leads to an improvement in
health behavior, which leads to both short term metrics (reduction in patients'
blood glucose and glycated hemoglobin levels) and longer-term metrics (hospital
visits and medical expenses). Patients who adopted the mHealth app undertook
more exercise, consumed healthier food, walked more steps and slept for longer
times. They also were more likely to substitute offline visits with telehealth.
A comparison of mobile vs. PC version of the same app demonstrates that mobile
has a stronger effect than PC in helping patients make these behavioral
modifications with respect to diet, exercise and lifestyle, which leads to an
improvement in their healthcare outcomes. We also compared outcomes when the
platform facilitates personalized health reminders to patients vs. generic
reminders. Surprisingly, we find personalized mobile messages with
patient-specific guidance can have an inadvertent (smaller) effect on patient
app engagement and lifestyle changes, leading to a lower health improvement.
However, they are more likely to encourage a substitution of offline visits by
telehealth. Overall, our findings indicate the massive potential of mHealth
technologies and platform design in achieving better healthcare outcomes.",Empowering Patients Using Smart Mobile Health Platforms: Evidence From A Randomized Field Experiment,2021-02-10 18:48:09,"Anindya Ghose, Xitong Guo, Beibei Li, Yuanyuan Dang","http://arxiv.org/abs/2102.05506v2, http://arxiv.org/pdf/2102.05506v2",econ.GN
33645,gn,"COVID-19 has impacted the economy of almost every country in the world. Of
particular interest are the responses of the economic indicators of developing
nations (such as BRICS) to the COVID-19 shock. As an extension to our earlier
work on the dynamic associations of pandemic growth, exchange rate, and stock
market indices in the context of India, we look at the same question with
respect to the BRICS nations. We use structural vector autoregression (SVAR)
to identify the dynamic underlying associations across the normalized growth
measurements of the COVID-19 cumulative case, recovery, and death counts, and
those of the exchange rate, and stock market indices, using data over 203 days
(March 12 - September 30, 2020). Using impulse response analyses, the COVID-19
shock to exchange rate growth was seen to persist for around 10+ days,
and that for the stock exchange around 15 days. The models capture
the contemporaneous nature of these shocks and the subsequent responses,
potentially helping to inform policy decisions at a national level. Further,
causal inference-based analyses would allow us to infer relationships that are
stronger than mere associations.",Dynamic Structural Impact of the COVID-19 Outbreak on the Stock Market and the Exchange Rate: A Cross-country Analysis Among BRICS Nations,2021-02-10 19:37:07,"Rupam Bhattacharyya, Sheo Rama, Atul Kumar, Indrajit Banerjee","http://arxiv.org/abs/2102.05554v1, http://arxiv.org/pdf/2102.05554v1",econ.GN
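A minimal sketch of how such impulse responses could be computed with statsmodels, using a reduced-form VAR with orthogonalized (Cholesky) shocks as a simpler stand-in for the paper's SVAR; the file and column names below are assumptions, not the authors' data.

# Reduced-form VAR with orthogonalized impulse responses (illustrative only).
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("brics_series.csv", parse_dates=["date"], index_col="date")
series = df[["covid_case_growth", "fx_growth", "stock_index_growth"]]  # assumed columns

model = VAR(series)
results = model.fit(maxlags=10, ic="aic")      # lag order chosen by AIC

# Trace a COVID-19 case-growth shock through exchange-rate and stock-index growth.
irf = results.irf(15)                          # 15-day horizon
irf.plot(orth=True, impulse="covid_case_growth")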
33646,gn,"Cryptoassets such as cryptocurrencies and tokens are increasingly traded on
decentralized exchanges. The advantage for users is that the funds are not in
custody of a centralized external entity. However, these exchanges are prone to
manipulative behavior. In this paper, we illustrate how wash trading activity
can be identified on two of the first popular limit order book-based
decentralized exchanges on the Ethereum blockchain, IDEX and EtherDelta. We
identify a lower bound of accounts and trading structures that meet the legal
definitions of wash trading, discovering that they are responsible for a wash
trading volume equivalent to 159 million U.S. dollars. While self-trades and
two-account structures are predominant, complex forms also occur. We quantify
these activities, finding that on both exchanges, more than 30\% of all traded
tokens have been subject to wash trading activity. On EtherDelta, 10% of the
tokens have almost exclusively been wash traded. All data is made available for
future research. Our findings underpin the need for countermeasures that are
applicable in decentralized systems.",Detecting and Quantifying Wash Trading on Decentralized Cryptocurrency Exchanges,2021-02-13 23:58:48,"Friedhelm Victor, Andrea Marie Weintraud","http://dx.doi.org/10.1145/3442381.3449824, http://arxiv.org/abs/2102.07001v1, http://arxiv.org/pdf/2102.07001v1",cs.CR
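As a rough illustration of the simplest pattern the paper describes (self-trades, i.e. the same address on both sides of a trade), a short sketch with pandas; the trade table and its columns are assumptions, not the authors' schema, and real detection also needs the multi-account structures mentioned above.

# Flag self-trades and aggregate their volume per token (illustrative only).
import pandas as pd

trades = pd.read_csv("dex_trades.csv")        # assumed columns: token, buyer, seller, amount_usd
self_trades = trades[trades["buyer"] == trades["seller"]]

volume_by_token = (self_trades.groupby("token")["amount_usd"]
                   .sum().sort_values(ascending=False))
print("total self-trade volume (USD):", self_trades["amount_usd"].sum())
print(volume_by_token.head(10))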
33647,gn,"We compare two jury selection procedures meant to safeguard against the
inclusion of biased jurors that are perceived as causing minorities to be
under-represented. The Strike and Replace procedure presents potential jurors
one-by-one to the parties, while the Struck procedure presents all potential
jurors before the parties exercise their challenges. Struck more effectively
excludes extreme jurors but leads to a worse representation of minorities. The
advantage of Struck in terms of excluding extremes is sizable in a wide range
of cases. In contrast, Strike and Replace better represents minorities only if
the minority and majority are polarized. Results are robust to assuming the
parties statistically discriminate against jurors based on group identity.",Exclusion of Extreme Jurors and Minority Representation: The Effect of Jury Selection Procedures,2021-02-14 21:56:46,"Andrea Moro, Martin Van der Linden","http://arxiv.org/abs/2102.07222v7, http://arxiv.org/pdf/2102.07222v7",econ.GN
33648,gn,"Artificial Intelligence (AI) is increasingly becoming a trusted advisor in
people's lives. A new concern arises if AI persuades people to break ethical
rules for profit. Employing a large-scale behavioural experiment (N = 1,572),
we test whether AI-generated advice can corrupt people. We further test whether
transparency about AI presence, a commonly proposed policy, mitigates potential
harm of AI-generated advice. Using the natural language processing model
GPT-2, we generated honesty-promoting and dishonesty-promoting advice.
Participants read one type of advice before engaging in a task in which they
could lie for profit. Testing human behaviour in interaction with actual AI
outputs, we provide the first behavioural insights into the role of AI as an
advisor. Results reveal that AI-generated advice corrupts people, even when
they know the source of the advice. In fact, AI's corrupting force is as strong
as humans'.",The corruptive force of AI-generated advice,2021-02-15 16:15:12,"Margarita Leib, Nils C. Köbis, Rainer Michael Rilke, Marloes Hagens, Bernd Irlenbusch","http://arxiv.org/abs/2102.07536v1, http://arxiv.org/pdf/2102.07536v1",cs.AI
33649,gn,"Work has now begun on the sixth generation of cellular technologies (`6G`)
and cost-efficient global broadband coverage is already becoming a key pillar.
Indeed, we are still far from providing universal and affordable broadband
connectivity, despite this being a key part of the Sustainable Development
Goals (Target 9.c). Currently, both Mobile Network Operators and governments
still lack independent analysis of the strategies that can help achieve this
target with the cellular technologies available (4G and 5G). Therefore, this
paper undertakes a quantitative assessment demonstrating how current 5G policies
affect universal broadband, as well as drawing conclusions over how decisions
made now affect future evolution to 6G. Using a method based on an open-source
techno-economic codebase, combining remote sensing with least-cost network
algorithms, performance analytics are provided for different 4G and 5G
universal broadband strategies. As an example, the assessment approach is
applied to India, the world's second-largest mobile market and a country with
very high spectrum prices. The results demonstrate the trade-offs between
technological decisions. This includes demonstrating how important current
infrastructure policy is, particularly given fiber backhaul will be essential
for delivering 6G quality of service. We find that by eliminating the spectrum
licensing costs, 100% 5G population coverage can viably be achieved using fiber
backhaul. Therefore, supportive infrastructure policies are essential in
providing a superior foundation for evolution to future cellular generation,
such as 6G.",Supportive 5G Infrastructure Policies are Essential for Universal 6G: Assessment using an Open-source Techno-economic Simulation Model utilizing Remote Sensing,2021-02-16 14:15:46,"Edward J. Oughton, Ashutosh Jha","http://arxiv.org/abs/2102.08086v3, http://arxiv.org/pdf/2102.08086v3",econ.GN
33650,gn,"Online real estate platforms have become significant marketplaces
facilitating users' search for an apartment or a house. Yet it remains
challenging to accurately appraise a property's value. Prior works have
primarily studied real estate valuation based on hedonic price models that take
structured data into account while accompanying unstructured data is typically
ignored. In this study, we investigate to what extent an automated visual
analysis of apartment floor plans on online real estate platforms can enhance
hedonic rent price appraisal. We propose a tailored two-staged deep learning
approach to learn price-relevant designs of floor plans from historical price
data. Subsequently, we integrate the floor plan predictions into hedonic rent
price models that account for both structural and locational characteristics of
an apartment. Our empirical analysis based on a unique dataset of 9174 real
estate listings suggests that current hedonic models underutilize the available
data. We find that (1) the visual design of floor plans has significant
explanatory power regarding rent prices - even after controlling for structural
and locational apartment characteristics, and (2) harnessing floor plans
results in an up to 10.56% lower out-of-sample prediction error. We further
find that floor plans yield a particularly high gain in prediction performance
for older and smaller apartments. Altogether, our empirical findings contribute
to the existing research body by establishing the link between the visual
design of floor plans and real estate prices. Moreover, our approach has
important implications for online real estate platforms, which can use our
findings to enhance user experience in their real estate listings.",Integrating Floor Plans into Hedonic Models for Rent Price Appraisal,2021-02-16 17:05:33,"Kirill Solovev, Nicolas Pröllochs","http://dx.doi.org/10.1145/3442381.3449967, http://arxiv.org/abs/2102.08162v1, http://arxiv.org/pdf/2102.08162v1",cs.LG
33652,gn,"We study to what extent the Bitcoin blockchain security permanently depends
on the underlying distribution of cryptocurrency market outcomes. We use daily
blockchain and Bitcoin data for 2014-2019 and employ the ARDL approach. We test
three equilibrium hypotheses: (i) sensitivity of the Bitcoin blockchain to
mining reward; (ii) security outcomes of the Bitcoin blockchain and the
proof-of-work cost; and (iii) the speed of adjustment of the Bitcoin blockchain
security to deviations from the equilibrium path. Our results suggest that the
Bitcoin price and mining rewards are intrinsically linked to Bitcoin security
outcomes. The Bitcoin blockchain security's dependency on mining costs is
geographically differentiated: it is more significant for the global mining
leader, China, than for other world regions. After input or output price shocks,
the Bitcoin blockchain security reverts to its equilibrium security level.",The economic dependency of the Bitcoin security,2021-02-16 15:36:21,"Pavel Ciaian, d'Artis Kancs, Miroslava Rajcaniova","http://arxiv.org/abs/2102.08378v1, http://arxiv.org/pdf/2102.08378v1",econ.GN
33653,gn,"The internet's rapid growth stimulates the emergence of start-up companies
based on information technology and telecommunication (ICT) in Indonesia and
Singapore. As the numbers of start-ups and their investors grow, the network of
their relationships becomes larger and more complex, yet in another sense feels small.
Everyone in the ICT start-up investment network can be reached in a few steps,
leading to the so-called small-world phenomenon, the principle that we are all
connected by a short chain of relationships. We investigate the pattern of the
relationships between start-ups and their investors and the small-world
characteristics using network analysis methodology. The research is conducted
by creating an ICT start-up investment network model for each country and
calculating its small-world network properties to characterize the
networks. We then compare and analyze the results for each network model. This
research provides knowledge about the current state of ICT
start-up investment in Indonesia and Singapore, and is beneficial for
business intelligence purposes to support decision-making related to ICT
start-up investment.",The Small World Phenomenon and Network Analysis of ICT Startup Investment in Indonesia and Singapore,2021-02-18 04:20:54,"Farid Naufal Aslam, Andry Alamsyah","http://arxiv.org/abs/2102.09102v1, http://arxiv.org/pdf/2102.09102v1",cs.CY
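A minimal sketch of the small-world diagnostics such an analysis typically computes, using networkx; the edge-list file is an assumption, and the comparison against a same-density random graph follows the usual small-world criterion rather than the authors' exact procedure.

# Compare clustering and path length with a same-size random graph (illustrative only).
import networkx as nx

G = nx.read_edgelist("startup_investor_edges.txt")          # assumed input
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

n, m = G.number_of_nodes(), G.number_of_edges()
R = nx.gnm_random_graph(n, m, seed=0)
R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

print(f"clustering: {C:.3f} (random: {nx.average_clustering(R):.3f})")
print(f"avg path:   {L:.3f} (random: {nx.average_shortest_path_length(R):.3f})")
# Small-world networks show clustering well above the random graph
# while the average path length stays comparably short.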
33654,gn,"E-commerce provides an efficient and effective way to exchange goods between
sellers and customers. E-commerce has been a popular method for doing business,
because of its simplicity in making commerce activity transparently available,
including customers' voices and opinions about their own experiences. These
experiences can be of great benefit for understanding the customer experience
comprehensively, both for sellers and future customers. This paper focuses on
e-commerce and customers in Indonesia. Many Indonesian customers express
their views on open social network services such as Twitter and Facebook, where
a large proportion of the data takes the form of conversations. By
understanding customer behavior through open social network services, we can
describe the level of e-commerce service in Indonesia. This is
related to the government's effort to improve the Indonesian digital economy
ecosystem. A method for finding core topics in large-scale, unstructured
internet text data is needed; the method should be fast yet
sufficiently accurate. Processing large-scale data is not a straightforward
job; it often requires specialized skills and complex software and hardware
systems. We propose a fast text mining methodology based on
frequently occurring words and their associations, forming a network text
methodology. This method is adapted from social network analysis by modeling
relationships between words instead of actors.",A Core of E-Commerce Customer Experience based on Conversational Data using Network Text Methodology,2021-02-18 04:33:14,"Andry Alamsyah, Nurlisa Laksmiani, Lies Anisa Rahimi","http://arxiv.org/abs/2102.09107v1, http://arxiv.org/pdf/2102.09107v1",econ.GN
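A rough sketch of the network-text idea described above (a co-occurrence network over frequent words, analysed with social-network tools); the toy documents and thresholds are illustrative, not the authors' pipeline.

# Build a word co-occurrence network and rank words by centrality (illustrative only).
from collections import Counter
from itertools import combinations
import networkx as nx

docs = ["fast delivery item as described", "great item fast delivery",
        "slow response item damaged"]                     # toy conversational texts
tokens = [d.split() for d in docs]

freq = Counter(w for doc in tokens for w in doc)
top_words = {w for w, _ in freq.most_common(100)}         # keep frequent words only

G = nx.Graph()
for doc in tokens:
    for a, b in combinations(sorted(set(doc) & top_words), 2):
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        G.add_edge(a, b, weight=weight)

# Core topics surface as the most central words in the network.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])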
33655,gn,"In an infinitely repeated pricing game, pricing algorithms based on
artificial intelligence (Q-learning) may consistently learn to charge
supra-competitive prices even without communication. Although concerns about
algorithmic collusion have arisen, little is known about the underlying factors. In
this work, we experimentally analyze the dynamics of algorithms with three
variants of experience replay. Algorithmic collusion still has roots in human
preferences. Randomizing experience yields prices close to the static Bertrand
equilibrium and higher prices are easily restored by favoring the latest
experience. Moreover, relative performance concerns also stabilize the
collusion. Finally, we investigate the scenarios with heterogeneous agents and
test robustness on various factors.",Understanding algorithmic collusion with experience replay,2021-02-18 06:28:41,Bingyan Han,"http://arxiv.org/abs/2102.09139v2, http://arxiv.org/pdf/2102.09139v2",econ.GN
33656,gn,"The vast majority of existing studies that estimate the average unexplained
gender pay gap use unnecessarily restrictive linear versions of the
Blinder-Oaxaca decomposition. Using a notably rich and large data set of 1.7
million employees in Switzerland, we investigate how the methodological
improvements made possible by such big data affect estimates of the unexplained
gender pay gap. We study the sensitivity of the estimates with regard to i) the
availability of observationally comparable men and women, ii) model flexibility
when controlling for wage determinants, and iii) the choice of different
parametric and semi-parametric estimators, including variants that make use of
machine learning methods. We find that these three factors matter greatly.
Blinder-Oaxaca estimates of the unexplained gender pay gap decline by up to 39%
when we enforce comparability between men and women and use a more flexible
specification of the wage equation. Semi-parametric matching yields estimates
that when compared with the Blinder-Oaxaca estimates, are up to 50% smaller and
also less sensitive to the way wage determinants are included.",The Gender Pay Gap Revisited with Big Data: Do Methodological Choices Matter?,2021-02-18 11:12:32,"Anthony Strittmatter, Conny Wunsch","http://arxiv.org/abs/2102.09207v2, http://arxiv.org/pdf/2102.09207v2",econ.GN
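For reference, a minimal sketch of the conventional linear twofold Blinder-Oaxaca decomposition that the paper argues is overly restrictive; the data file and covariates are assumptions.

# Twofold Blinder-Oaxaca decomposition, male coefficients as reference (illustrative only).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("wages.csv")                 # assumed columns: log_wage, female, educ, exper, tenure
covariates = ["educ", "exper", "tenure"]

men, women = df[df["female"] == 0], df[df["female"] == 1]
Xm, Xw = sm.add_constant(men[covariates]), sm.add_constant(women[covariates])
beta_m = sm.OLS(men["log_wage"], Xm).fit().params
beta_w = sm.OLS(women["log_wage"], Xw).fit().params

gap = men["log_wage"].mean() - women["log_wage"].mean()
explained = (Xm.mean() - Xw.mean()) @ beta_m   # differences in characteristics
unexplained = Xw.mean() @ (beta_m - beta_w)    # differences in returns ("unexplained" gap)
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")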
33657,gn,"Decentralised government levels are often entrusted with the management of
public works and required to ensure well-timed infrastructure delivery to their
communities. We investigate whether monitoring the activity of local procuring
authorities during the execution phase of the works they manage may expedite
the infrastructure delivery process. Focussing on an Italian regional law which
imposes monitoring by the regional government on ""strategic"" works carried out
by local buyers, we draw causal claims using a regression-discontinuity
approach, made unusual by the presence of multiple assignment variables.
Estimation is performed through discrete-time survival analysis techniques.
Results show that monitoring does expedite infrastructure delivery.",The Expediting Effect of Monitoring on Infrastructural Works. A Regression-Discontinuity Approach with Multiple Assignment Variables,2021-02-16 18:14:41,"Giuseppe Francesco Gori, Patrizia Lattarulo, Marco Mariani","http://arxiv.org/abs/2102.09625v1, http://arxiv.org/pdf/2102.09625v1",econ.GN
33658,gn,"We simulate a spatial behavioral model of the diffusion of an infection to
understand the role of geographic characteristics: the number and distribution
of outbreaks, population size, density, and agents' movements. We show that
several invariance properties of the SIR model concerning these variables do
not hold when agents interact with neighbors in a (two dimensional)
geographical space. Indeed, the spatial model's local interactions generate
matching frictions and local herd immunity effects, which play a fundamental
role in the infection dynamics. We also show that geographical factors influence
how behavioral responses affect the epidemic. We derive relevant implications
for estimating the effects of the epidemics and policy interventions that use
panel data from several geographical units.",Learning Epidemiology by Doing: The Empirical Implications of a Spatial-SIR Model with Behavioral Responses,2021-02-19 23:16:20,"Alberto Bisin, Andrea Moro","http://dx.doi.org/10.1016/j.jue.2021.103368, http://arxiv.org/abs/2102.10145v2, http://arxiv.org/pdf/2102.10145v2",econ.GN
33660,gn,"We study the general problem of Bayesian persuasion (optimal information
design) with continuous actions and continuous state space in arbitrary
dimensions. First, we show that with a finite signal space, the optimal
information design is always given by a partition. Second, we take the limit of
an infinite signal space and characterize the solution in terms of a
Monge-Kantorovich optimal transport problem with an endogenous information
transport cost. We use our novel approach to: 1. Derive necessary and
sufficient conditions for optimality based on Bregman divergences for
non-convex functions. 2. Compute exact bounds for the Hausdorff dimension of
the support of an optimal policy. 3. Derive a non-linear, second-order partial
differential equation whose solutions correspond to regular optimal policies.
We illustrate the power of our approach by providing explicit solutions to
several non-linear, multidimensional Bayesian persuasion problems.",Optimal Transport of Information,2021-02-22 14:28:36,"Semyon Malamud, Anna Cieslak, Andreas Schrimpf","http://arxiv.org/abs/2102.10909v4, http://arxiv.org/pdf/2102.10909v4",econ.GN
33661,gn,"In this paper, we use a variety of machine learning methods to quantify the
extent to which economic and technological factors are predictive of the
progression of Central Bank Digital Currencies (CBDC) within a country, using
as our measure of this progression the CBDC project index (CBDCPI). We find
that a financial development index is the most important feature for our model,
followed by the GDP per capita and an index of the voice and accountability of
the country's population. Our results are consistent with previous qualitative
research which finds that countries with a high degree of financial development
or digital infrastructure have more developed CBDC projects. Further, we obtain
robust results when predicting the CBDCPI at different points in time.",Data-driven analysis of central bank digital currency (CBDC) projects drivers,2021-02-23 20:15:56,"Toshiko Matsui, Daniel Perez","http://arxiv.org/abs/2102.11807v1, http://arxiv.org/pdf/2102.11807v1",cs.LG
33662,gn,"Policymakers decide on alternative policies facing restricted budgets and
uncertain, ever-changing future. Designing public policies is further difficult
due to the need to decide on priorities and handle effects across policies.
Housing policies, specifically, involve heterogeneous characteristics of
properties themselves and the intricacy of housing markets and the spatial
context of cities. We propose PolicySpace2 (PS2) as an adapted and extended
version of the open source PolicySpace agent-based model. PS2 is a computer
simulation that relies on empirically detailed spatial data to model real
estate, along with labor, credit, and goods and services markets. Interaction
among workers, firms, a bank, households and municipalities follow the
literature benchmarks to integrate economic, spatial and transport scholarship.
PS2 is applied to a comparison among three competing public policies aimed at
reducing inequality and alleviating poverty: (a) house acquisition by the
government and distribution to lower income households, (b) rental vouchers,
and (c) monetary aid. Within the model context, the monetary aid, that is,
smaller amounts of help for a larger number of households, makes the economy
perform better in terms of production, consumption, reduction of inequality,
and maintenance of financial duties. PS2 as such is also a framework that may
be further adapted to a number of related research questions.",PolicySpace2: modeling markets and endogenous public policies,2021-02-23 23:29:59,Bernardo Alves Furtado,"http://arxiv.org/abs/2102.11929v4, http://arxiv.org/pdf/2102.11929v4",cs.MA
33663,gn,"Online property business or known as e-commerce is currently experiencing an
increase in home sales. Indonesia's e-commerce property business has positive
trending shown by the increasing sales of more than 500% from 2011 to 2015. A
prediction of the property price is important to help investors or the public
to have accurate information before buying property. One of the methods for
prediction is a classification based on several distinctive property industry
attributes, such as building size, land size, number of rooms, and location.
Today, data is easily obtained, there are many open data from E-commerce sites.
E-commerce contains information about homes and other properties advertised to
sell. People also regularly visit the site to find the right property or to
sell the property using price information which collectively available as open
data. To predict the property sales, this research employed two different
classification methods in Data Mining which are Decision Tree and k-NN
classification. We compare which model classification is better to predict
property price and their attributes. We use Indonesia's biggest property-based
e-commerce site Rumah123.com as our open data source, and choose location
Bandung in our experiment. The accuracy result of the decision tree is 75% and
KNN is 71%, other than that k-NN can explore more data patterns than the
Decision Tree.",Property Business Classification Model Based on Indonesia E-Commerce Data,2021-02-24 17:29:34,"Andry Alamsyah, Fariz Denada Sudrajat, Herry Irawan","http://arxiv.org/abs/2102.12300v1, http://arxiv.org/pdf/2102.12300v1",econ.GN
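A minimal sketch of the kind of comparison described above, using scikit-learn; the listing file, feature columns, and the price-class target are assumptions, not the authors' data.

# Compare decision-tree and k-NN accuracy on a held-out split (illustrative only).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("bandung_listings.csv")              # assumed columns below
X = df[["building_size", "land_size", "rooms", "latitude", "longitude"]]
y = df["price_class"]                                 # e.g. binned price categories

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=42)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))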
33664,gn,"We consider the problem of reducing the carbon emissions of a set of firms
over a finite horizon. A regulator dynamically allocates emission allowances to
each firm. Firms face idiosyncratic as well as common economic shocks on
emissions, and have linear quadratic abatement costs. Firms can trade
allowances so as to minimise total expected costs from abatement and trading plus
a quadratic terminal penalty. Using variational methods, we exhibit in
closed form the market equilibrium as a function of the regulator's dynamic
allocation. We then solve the Stackelberg game between the regulator and the
firms. Again, we obtain a closed-form expression of the dynamic allocation
policies that allow a desired expected emission reduction. Optimal policies are
not unique but share common properties. Surprisingly, all optimal policies
induce a constant abatement effort and a constant price of allowances. Dynamic
allocations outperform static ones because of adjustment costs and uncertainty,
in particular given the presence of common shocks. Our results are robust to
some extensions, like risk aversion of firms or different penalty functions.",Optimal dynamic regulation of carbon emissions market: A variational approach,2021-02-24 20:32:36,"René Aïd, Sara Biagini","http://arxiv.org/abs/2102.12423v1, http://arxiv.org/pdf/2102.12423v1",econ.GN
33680,gn,"We introduce a novel machine learning approach to leverage historical and
contemporary maps and systematically predict economic statistics. Our simple
algorithm extracts meaningful features from the maps based on their color
compositions for predictions. We apply our method to grid-level population
levels in Sub-Saharan Africa in the 1950s and South Korea in 1930, 1970, and
2015. Our results show that maps can reliably predict population density in
mid-20th-century Sub-Saharan Africa using 9,886 map grids (5 km by 5 km).
Similarly, contemporary South Korean maps can generate robust predictions on
income, consumption, employment, population density, and electric consumption.
In addition, our method is capable of predicting historical South Korean
population growth over a century.",Using maps to predict economic activity,2021-12-27 17:13:20,"Imryoung Jeong, Hyunjoo Yang","http://arxiv.org/abs/2112.13850v2, http://arxiv.org/pdf/2112.13850v2",econ.GN
33665,gn,"A multi-regional input-output table (MRIOT) containing the transactions among
the region-sectors in an economy defines a weighted and directed network. Using
network analysis tools, we analyze the regional and sectoral structure of the
Chinese economy and their temporal dynamics from 2007 to 2012 via the MRIOTs of
China. Global analyses are done with network topology measures. Growth-driving
province-sector clusters are identified with community detection methods.
Influential province-sectors are ranked by weighted PageRank scores. The
results revealed a few interesting and telling insights. The level of
inter-province-sector activities increased with the rapid growth of the
national economy, but not as fast as that of intra-province economic
activities. Regional community structures were deeply associated with
geographical factors. The community heterogeneity across the regions was high
and the regional fragmentation increased during the study period. Quantified
metrics assessing the relative importance of the province-sectors in the
national economy echo the national and regional economic development policies
to a certain extent.",Regional and Sectoral Structures and Their Dynamics of Chinese Economy: A Network Perspective from Multi-Regional Input-Output Tables,2021-02-24 21:38:03,"Tao Wang, Shiying Xiao, Jun Yan, Panpan Zhang","http://dx.doi.org/10.1016/j.physa.2021.126196, http://arxiv.org/abs/2102.12454v1, http://arxiv.org/pdf/2102.12454v1",physics.soc-ph
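A minimal sketch of the weighted-PageRank ranking step described above, using networkx on an edge list of inter-province-sector flows; the file and column names are assumptions.

# Rank province-sectors in a directed, weighted input-output network (illustrative only).
import pandas as pd
import networkx as nx

flows = pd.read_csv("mriot_2012_flows.csv")           # assumed columns: source, target, value
G = nx.from_pandas_edgelist(flows, source="source", target="target",
                            edge_attr="value", create_using=nx.DiGraph)

scores = nx.pagerank(G, alpha=0.85, weight="value")   # weighted PageRank
for node, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(node, round(score, 4))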
33666,gn,"For centuries, national economies created wealth by engaging in international
trade and production. The resulting international supply networks not only
increase wealth for countries, but also create systemic risk: economic shocks,
triggered by company failures in one country, may propagate to other countries.
Using global supply network data on the firm-level, we present a method to
estimate a country's exposure to direct and indirect economic losses caused by
the failure of a company in another country. We show the network of systemic
risk-flows across the world. We find that rich countries expose poor countries
much more to systemic risk than the other way round. We demonstrate that higher
systemic risk levels are not compensated with a risk premium in GDP, nor do
they correlate with economic growth. Systemic risk around the globe appears to
be distributed more unequally than wealth. These findings put the often praised
benefits for developing countries from globalized production in a new light,
since they relate them to the risks involved in the production processes.
Exposure risks present a new dimension of global inequality, one that most affects
the poor in supply shock crises. It becomes fully quantifiable with the
proposed method.",Inequality in economic shock exposures across the global firm-level supply network,2021-12-01 14:01:20,"Abhijit Chakraborty, Tobias Reisch, Christian Diem, Stefan Thurner","http://arxiv.org/abs/2112.00415v1, http://arxiv.org/pdf/2112.00415v1",econ.GN
33667,gn,"Know your customer (KYC) processes place a great burden on banks, because
they are costly, inefficient, and inconvenient for customers. While blockchain
technology is often mentioned as a potential solution, it is not clear how to
use the technology's advantages without violating data protection regulations
and customer privacy. We demonstrate how blockchain-based self-sovereign
identity (SSI) can solve the challenges of KYC. We follow a rigorous design
science research approach to create a framework that utilizes SSI in the KYC
process, deriving nascent design principles that theorize on blockchain's role
for SSI.",Designing a Framework for Digital KYC Processes Built on Blockchain-Based Self-Sovereign Identity,2021-11-11 14:05:06,"Vincent Schlatt, Johannes Sedlmeir, Simon Feulner, Nils Urbach","http://dx.doi.org/10.1016/j.im.2021.103553, http://arxiv.org/abs/2112.01237v1, http://arxiv.org/pdf/2112.01237v1",cs.CY
33668,gn,"There is growing interest in the role of sentiment in economic
decision-making. However, most research on the subject has focused on positive
and negative valence. Conviction Narrative Theory (CNT) places Approach and
Avoidance sentiment (that which drives action) at the heart of real-world
decision-making, and argues that it better captures emotion in financial
markets. This research, bringing together psychology and machine learning,
introduces new techniques to differentiate Approach and Avoidance from positive
and negative sentiment on a fundamental level of meaning. It does this by
comparing word-lists, previously constructed to capture these concepts in text
data, across a large range of semantic features. The results demonstrate that
Avoidance in particular is well defined as a separate type of emotion, which is
evaluative/cognitive and action-orientated in nature. Refining the Avoidance
word-list according to these features improves macroeconomic models, suggesting
that they capture the essence of Avoidance and that it plays a crucial role in
driving real-world economic decision-making.",Differentiating Approach and Avoidance from Traditional Notions of Sentiment in Economic Contexts,2021-12-05 19:05:16,"Jacob Turton, Ali Kabiri, David Tuckett, Robert Elliott Smith, David P. Vinson","http://arxiv.org/abs/2112.02607v1, http://arxiv.org/pdf/2112.02607v1",cs.CL
33669,gn,"This study measures the tendency to publish in international scientific
journals. For each of nearly 35 thousands Scopus-indexed journals, we derive
seven globalization indicators based on the composition of authors by country
of origin and other characteristics. These are subsequently scaled up to the
level of 174 countries and 27 disciplines between 2005 and 2017. The results
indicate that advanced countries maintain high globalization of scientific
communication that is not varying across disciplines. Social sciences and
health sciences are less globalized than physical and life sciences. Countries
of the former Soviet bloc score far lower on the globalization measures,
especially in social sciences or health sciences. Russia remains among the
least globalized during the whole period, with no upward trend. In contrast, China
has profoundly globalized its science system, gradually moving from the lowest
globalization figures to the world average. The paper concludes with
reflections on measurement issues and policy implications.",Globalization of Scientific Communication: Evidence from authors in academic journals by country of origin,2021-12-05 23:12:15,Vít Macháček,"http://arxiv.org/abs/2112.02672v1, http://arxiv.org/pdf/2112.02672v1",cs.DL
33681,gn,"An efficient, reliable, and interpretable global solution method, the Deep
learning-based algorithm for Heterogeneous Agent Models (DeepHAM), is proposed
for solving high dimensional heterogeneous agent models with aggregate shocks.
The state distribution is approximately represented by a set of optimal
generalized moments. Deep neural networks are used to approximate the value and
policy functions, and the objective is optimized over directly simulated paths.
In addition to being an accurate global solver, this method has three
additional features. First, it is computationally efficient in solving complex
heterogeneous agent models, and it does not suffer from the curse of
dimensionality. Second, it provides a general and interpretable representation
of the distribution over individual states, which is crucial in addressing the
classical question of whether and how heterogeneity matters in macroeconomics.
Third, it solves the constrained efficiency problem as easily as it solves the
competitive equilibrium, which opens up new possibilities for studying optimal
monetary and fiscal policies in heterogeneous agent models with aggregate
shocks.",DeepHAM: A Global Solution Method for Heterogeneous Agent Models with Aggregate Shocks,2021-12-29 06:09:19,"Jiequn Han, Yucheng Yang, Weinan E","http://arxiv.org/abs/2112.14377v2, http://arxiv.org/pdf/2112.14377v2",econ.GN
33670,gn,"In allusion to some contradicting results in existing research, this paper
selects China's latest stock data from 2005 to 2020 for empirical analysis. By
choosing this periods' data, we avoid the periods of China's significant stock
market reforms to reduce the impact of the government's policy on the factor
effect. In this paper, the redundant factors (HML, CMA) are orthogonalized, and
the regression analysis of 5*5 portfolio of Size-B/M and Size-Inv is carried
out with these two orthogonalized factors. It found that the HML and the CMA
are still significant in many portfolios, indicating that they have a strong
explanatory ability, which is also consistent with the results of GRS test. All
these show that the five-factor model has a better ability to explain the
excess return rate. In the concrete analysis, this paper uses the methods of
the five-factor 25-group portfolio returns calculation, the five-factor
regression analysis, the orthogonal treatment, the five-factor 25-group
regression and the GRS test to more comprehensively explain the excellent
explanatory ability of the five-factor model to the excess return. Then, we
analyze the possible reasons for the strong explanatory ability of the HML, CMA
and RMW from the aspects of price to book ratio, turnover rate and correlation
coefficient. We also give a detailed explanation of the results, and analyze
the changes in China's stock market policy and investors' investment styles in
recent years. Finally, this paper attempts to put forward some useful
suggestions on the development of asset pricing models and China's stock market.","A revised comparison between FF five-factor model and three-factor model,based on China's A-share market",2021-10-16 07:46:46,"Zhijing Zhang, Yue Yu, Qinghua Ma, Haixiang Yao","http://arxiv.org/abs/2112.03170v1, http://arxiv.org/pdf/2112.03170v1",q-fin.GN
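The five-factor analysis described above can be sketched as a standard time-series regression of a portfolio's excess returns on the five factors. The sketch below uses synthetic data and placeholder column names (MKT, SMB, HML, RMW, CMA); it is not the authors' code and omits the orthogonalization and GRS steps.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly data: a portfolio's excess returns and the five factors
# (MKT, SMB, HML, RMW, CMA). Values and column names are illustrative only.
rng = np.random.default_rng(0)
n = 180
factors = pd.DataFrame(rng.normal(0.0, 0.03, size=(n, 5)),
                       columns=["MKT", "SMB", "HML", "RMW", "CMA"])
excess_ret = 0.002 + factors @ np.array([1.0, 0.4, 0.3, 0.1, 0.1]) \
             + rng.normal(0.0, 0.02, n)

# Time-series regression of excess returns on the five factors; a small,
# statistically insignificant intercept (alpha) indicates the factors
# explain the portfolio's excess return.
fit = sm.OLS(excess_ret, sm.add_constant(factors)).fit()
print(fit.params)            # alpha and the five factor loadings
print(fit.tvalues["const"])  # t-statistic of the pricing error (alpha)
```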
33671,gn,"This work was partially supported by the Program of Fundamental Research of
the Department of Physics and Astronomy of the National Academy of Sciences of
Ukraine ""Mathematical models of non equilibrium processes in open systems"" N
0120U100857.",Mathematical Model of International Trade and Global Economy,2021-12-08 17:12:05,"N. S. Gonchar, O. P. Dovzhyk, A. S. Zhokhin, W. H. Kozyrski, A. P. Makhort","http://arxiv.org/abs/2112.04297v2, http://arxiv.org/pdf/2112.04297v2",econ.GN
33672,gn,"This paper focuses on the operation of an electricity market that accounts
for participants that bid at a sub-minute timescale. To that end, we model the
market-clearing process as a dynamical system, called market dynamics, which is
temporally coupled with the grid frequency dynamics and is thus required to
guarantee system-wide stability while meeting the system operational
constraints. We characterize participants as price-takers who rationally update
their bids to maximize their utility in response to real-time schedules of
prices and dispatch. For two common bidding mechanisms, based on quantity and
price, we identify a notion of alignment between participants' behavior and
planners' goals that leads to a saddle-based design of the market that
guarantees convergence to a point meeting all operational constraints. We
further explore cases where this alignment property does not hold and observe
that misaligned participants' bidding can destabilize the closed-loop system.
We thus design a regularized version of the market dynamics that recovers all
the desirable stability and steady-state performance guarantees. Numerical
tests validate our results on the IEEE 39-bus system.","On the Stability, Economic Efficiency and Incentive Compatibility of Electricity Market Dynamics",2021-12-10 23:04:44,"Pengcheng You, Yan Jiang, Enoch Yeung, Dennice F. Gayme, Enrique Mallada","http://arxiv.org/abs/2112.05811v1, http://arxiv.org/pdf/2112.05811v1",math.OC
33673,gn,"This paper is part of the Global Income Dynamics Project cross-country
comparison of earnings inequality, volatility, and mobility. Using data from
the U.S. Census Bureau's Longitudinal Employer-Household Dynamics (LEHD)
infrastructure files we produce a uniform set of earnings statistics for the
U.S. From 1998 to 2019, we find U.S. earnings inequality has increased and
volatility has decreased. The combination of increased inequality and reduced
volatility suggest earnings growth differs substantially across different
demographic groups. We explore this further by estimating 12-year average
earnings for a single cohort of age 25-54 eligible workers. Differences in
labor supply (hours paid and quarters worked) are found to explain almost 90%
of the variation in worker earnings, although even after controlling for labor
supply substantial earnings differences across demographic groups remain
unexplained. Using a quantile regression approach, we estimate counterfactual
earnings distributions for each demographic group. We find that at the bottom
of the earnings distribution differences in characteristics such as hours paid,
geographic division, industry, and education explain almost all the earnings
gap; above the median, however, the contribution of the differences in the
returns to characteristics becomes the dominant component.","U.S. Long-Term Earnings Outcomes by Sex, Race, Ethnicity, and Place of Birth",2021-12-10 23:41:26,"Kevin L. McKinney, John M. Abowd, Hubert P. Janicki","http://arxiv.org/abs/2112.05822v1, http://arxiv.org/pdf/2112.05822v1",econ.GN
33674,gn,"This paper investigates the equity impacts of autonomous vehicles (AV) on
for-hire human drivers and passengers in a ride-hailing market, and examines
regulation policies that protect human drivers and improve transport equity for
ride-hailing passengers. We consider a transportation network company (TNC)
that employs a mixture of AVs and human drivers to provide ride-hailing
services. The TNC platform determines the spatial prices, fleet size, human
driver payments, and vehicle relocation strategies to maximize its profit,
while individual passengers choose between different transport modes to
minimize their travel costs. A market equilibrium model is proposed to capture
the interactions among passengers, human drivers, AVs, and the TNC over the
transportation network. The overall problem is formulated as a non-concave
program, and an algorithm is developed to derive its approximate solution with
a theoretical performance guarantee. Our study shows that TNC prioritizes AV
deployment in higher-demand areas to make a higher profit. As AVs flood into
these higher-demand areas, they compete with human drivers in the urban core
and push them to relocate to suburbs. This leads to reduced earning
opportunities for human drivers and increased spatial inequity for passengers.
To mitigate these concerns, we consider: (a) a minimum wage for human drivers;
and (b) a restrictive pickup policy that prohibits AVs from picking up
passengers in higher-demand areas. In the former case, we show that a minimum
wage for human drivers will protect them from the negative impact of AVs with
negligible impacts on passengers. However, there exists a threshold beyond
which the minimum wage will trigger the platform to replace the majority of
human drivers with AVs.",Regulating Transportation Network Companies with a Mixture of Autonomous Vehicles and For-Hire Human Drivers,2021-12-14 11:10:33,"Di Ao, Jing Gao, Zhijie Lai, Sen Li","http://arxiv.org/abs/2112.07218v2, http://arxiv.org/pdf/2112.07218v2",math.OC
33718,gn,"We study a class of optimal stopping games (Dynkin games) of preemption type,
with uncertainty about the existence of competitors. The set-up is well-suited
to model, for example, real options in the context of investors who do not want
to publicly reveal their interest in a certain business opportunity. We show
that there exists a Nash equilibrium in randomized stopping times which is
described explicitly in terms of the corresponding one-player game.",Playing with ghosts in a Dynkin game,2019-05-16 10:17:31,"Tiziano De Angelis, Erik Ekström","http://arxiv.org/abs/1905.06564v1, http://arxiv.org/pdf/1905.06564v1",math.PR
33675,gn,"Climate change mitigation is a global challenge that, however, needs to be
resolved by national-level authorities, resembling a tragedy of the commons.
This paradox is reflected at European scale, as climate commitments are made by
the EU collectively, but implementation is the responsibility of individual
Member States. Here, we investigate 30.000 near-optimal effort-sharing
scenarios where the European electricity sector is decarbonized by at least 55%
relative to 1990, in line with 2030 ambitions. Using a highly detailed
brownfield electricity system optimization model, the optimal electricity
system is simulated for a suite of effort-sharing scenarios. Results reveal
large inequalities in the efforts required to decarbonize national electricity
sectors, with some countries facing cost-optimal pathways to reach 55% emission
reductions, while others are confronted with relatively high abatement costs.
Specifically, we find that several countries with modest or low levels of GDP
per capita will experience high abatement costs, and when passed on into
electricity prices, this may lead to increased energy poverty in certain parts
of Europe.",30.000 ways to reach 55% decarbonization of the European electricity sector,2021-12-14 12:20:34,"Tim T. Pedersen, Mikael Skou Andersen, Marta Victoria, Gorm B. Andresen","http://dx.doi.org/10.1016/j.isci.2023.106677, http://arxiv.org/abs/2112.07247v3, http://arxiv.org/pdf/2112.07247v3",econ.GN
33676,gn,"The literature about tradable credit schemes (TCS) as a demand management
system alleviating congestion flourished in the past decade. Most proposed
formulations are based on static models and thus do not account for the
congestion dynamics. This paper considers elastic demand and implements a TCS
to foster modal shift by restricting the number of cars allowed in the network
over the day. A trip-based Macroscopic Fundamental Diagram (MFD) model
represents the traffic dynamics at the whole urban scale. We assume the users
have different OD pairs and choose between driving their car or riding the
transit following a logit model. We aim to compute the modal shares and credit
price at equilibrium under TCS. The travel times are linearized with respect to
the modal shares to improve the convergence. We then present a method to find
the credit charge minimizing the total travel time alone or combined with the
carbon emission. The proposed methodology is illustrated with a typical demand
profile from 7:00 to 10:00 for Lyon Metropolis. We show that traffic dynamics
and trip heterogeneity matter when deriving the modal equilibrium under a TCS.
A method is described to compute the linearization of the travel times and
compared against a classical descent method (MSA). The proposed linearization
is a promising tool to circumvent the complexity of the implicit formulation of
the trip-based MFD. Under an optimized TCS, the total travel time decreases by
17% and the carbon emission by 45% by increasing the PT share by 24 points.",Modal equilibrium of a tradable credit scheme with a trip-based MFD and logit-based decision-making,2021-12-14 13:30:40,"Louis Balzer, Ludovic Leclercq","http://dx.doi.org/10.1016/j.trc.2022.103642, http://arxiv.org/abs/2112.07277v2, http://arxiv.org/pdf/2112.07277v2",econ.GN
33677,gn,"We test the hypothesis that protest participation decisions in an adult
population of potential climate protesters are interdependent. Subjects
(n=1,510) from the four largest German cities were recruited two weeks before
the protest date. We measured participation (ex post) and beliefs about the other
subjects' participation (ex ante) in an online survey, used a randomized
informational intervention to induce exogenous variance in beliefs, and
estimated the causal effect of a change in belief on the probability of
participation using a control function approach. Participation decisions are
found to be strategic substitutes: a one percentage-point increase of belief
causes a .67 percentage-point decrease in the probability of participation in
the average subject.",Free-Riding for Future: Field Experimental Evidence of Strategic Substitutability in Climate Protest,2021-12-17 15:40:51,"Johannes Jarke-Neuert, Grischa Perino, Henrike Schwickert","http://arxiv.org/abs/2112.09478v1, http://arxiv.org/pdf/2112.09478v1",econ.GN
33678,gn,"The number, importance, and popularity of rankings measuring innovation
performance and the strength and resources of ecosystems that provide its
spatial framework are on an increasing trend globally. In addition to
influencing the specific decisions taken by economic actors, these rankings
significantly impact the development of innovation-related policies at
regional, national, and international levels. The importance of startup
ecosystems is proven by the growing scientific interest, which is demonstrated
by the increasing number of related scientific articles. The concept of the
startup ecosystem is a relatively new category, the application of which in
everyday and scientific life has been gaining ground since the end of the
2000s. In parallel, of course, the demand for measurability and comparability
has emerged among decision-makers and scholars. This demand is met by startup
ecosystem rankings, which now measure and rank the performance of individual
ecosystems on a continental and global scale. However, while the number of
scientific publications examining rankings related to higher education,
economic performance, or even innovation, can be measured in the order of
thousands, scientific research has so far rarely or tangentially addressed the
rankings of startup ecosystems. This study and the related research intend to
fill this gap by presenting and analysing the characteristics of global
rankings and identifying possible future research directions.",Startup Ecosystem Rankings,2021-12-21 20:22:19,Attila Lajos Makai,"http://dx.doi.org/10.35618/hsr2021.02.en070, http://arxiv.org/abs/2112.11931v1, http://arxiv.org/pdf/2112.11931v1",cs.DL
33679,gn,"Many modern organisations employ methods which involve monitoring of
employees' actions in order to encourage teamwork in the workplace. While
monitoring promotes a transparent working environment, the effects of making
monitoring itself transparent may be ambiguous and have received surprisingly
little attention in the literature. Using a novel laboratory experiment, we
create a working environment in which first movers can (or cannot) observe
the second mover's monitoring at the end of a round. Our framework consists of a
standard repeated sequential Prisoner's Dilemma, where the second mover can
observe the choices made by first movers either exogenously or endogenously. We
show that mutual cooperation occurs significantly more frequently when
monitoring is made transparent. Additionally, our results highlight the key
role of conditional cooperators (who are more likely to monitor) in promoting
teamwork. Overall, the observed cooperation enhancing effects are due to
monitoring actions that carry information about first movers who use it to
better screen the type of their co-player and thereby reduce the risk of being
exploited.",Should transparency be (in-)transparent? On monitoring aversion and cooperation in teams,2021-12-23 18:01:39,"Michalis Drouvelis, Johannes Jarke-Neuert, Johannes Lohse","http://arxiv.org/abs/2112.12621v1, http://arxiv.org/pdf/2112.12621v1",econ.GN
33682,gn,"We study how the financial literature has evolved in scale, research team
composition, and article topicality across 32 finance-focused academic journals
from 1992 to 2021. We document that the field has vastly expanded regarding
outlets and published articles. Teams have become larger, and the proportion of
women participating in research has increased significantly. Using the
Structural Topic Model, we identify 45 topics discussed in the literature. We
investigate the topic coverage of individual journals and can identify highly
specialized and generalist outlets, but our analyses reveal that most journals
have covered more topics over time, thus becoming more generalist. Finally, we
find that articles with at least one woman author focus more on topics related
to social and governance aspects of corporate finance. We also find that teams
with at least one top-tier institution scholar tend to focus more on
theoretical aspects of finance.",Thirty Years of Academic Finance,2021-12-30 06:04:53,"David Ardia, Keven Bluteau, Mohammad Abbas Meghani","http://dx.doi.org/10.1111/joes.12571, http://arxiv.org/abs/2112.14902v2, http://arxiv.org/pdf/2112.14902v2",q-fin.GN
33683,gn,"To simultaneously overcome the limitation of the Gini index in that it is
less sensitive to inequality at the tails of income distribution and the
limitation of the inter-decile ratios that ignore inequality in the middle of
income distribution, an inequality index is introduced. It comprises three
indicators, namely, the Gini index, the income share held by the top 10%, and
the income share held by the bottom 10%. The data from the World Bank database
and the Organization for Economic Co-operation and Development Income
Distribution Database between 2005 and 2015 are used to demonstrate how the
inequality index works. The results show that it can distinguish income
inequality among countries that share the same Gini index but have different
income gaps between the top 10% and the bottom 10%. It could also distinguish
income inequality among countries that have the same ratio of income share held
by the top 10% to income share held by the bottom 10% but differ in the values
of the Gini index. In addition, the inequality index could capture the dynamics
where the Gini index of a country is stable over time but the ratio of income
share of the top 10% to income share of the bottom 10% is increasing.
Furthermore, the inequality index could be applied to other scientific
disciplines as a measure of statistical heterogeneity and for size
distributions of any non-negative quantities.",A simple method for measuring inequality,2021-12-31 06:53:54,"Thitithep Sitthiyot, Kanyarat Holasut","http://dx.doi.org/10.1057/s41599-020-0484-6, http://arxiv.org/abs/2112.15284v1, http://arxiv.org/pdf/2112.15284v1",econ.GN
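A minimal sketch of computing the three ingredients named in the abstract (the Gini index and the income shares of the top 10% and bottom 10%) from an income vector. How the paper combines them into a single index is not specified in the abstract, so only the indicators are computed here; the income data are synthetic.

```python
import numpy as np

def gini(income):
    """Gini index of a non-negative income vector."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard formula based on the area under the Lorenz curve.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def decile_shares(income):
    """Income shares held by the bottom 10% and the top 10% of earners."""
    x = np.sort(np.asarray(income, dtype=float))
    k = max(1, x.size // 10)
    total = x.sum()
    return x[:k].sum() / total, x[-k:].sum() / total

incomes = np.random.default_rng(1).lognormal(mean=10, sigma=0.8, size=10_000)
g = gini(incomes)
bottom10, top10 = decile_shares(incomes)
print(f"Gini={g:.3f}, bottom 10% share={bottom10:.3f}, top 10% share={top10:.3f}")
```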
33684,gn,"Given many popular functional forms for the Lorenz curve do not have a
closed-form expression for the Gini index and no study has utilized the
observed Gini index to estimate parameter(s) associated with the corresponding
parametric functional form, a simple method for estimating the Lorenz curve is
introduced. It utilizes 3 indicators, namely, the Gini index and the income
shares of the bottom and the top in order to calculate the values of parameters
associated with the specified functional form which has a closed-form
expression for the Gini index. No error minimization technique is required in
order to estimate the Lorenz curve. The data on the Gini index and the income
shares of 4 countries that have different levels of income inequality, economic,
sociological, and regional backgrounds from the United Nations University-World
Income Inequality Database are used to illustrate how the simple method works.
The overall results indicate that the estimated Lorenz curves fit the actual
observations practically well. This simple method could be useful in the
situation where the availability of data on income distribution is low.
However, if more data on income distribution are available, this study shows
that the specified functional form could be used to directly estimate the
Lorenz curve. Moreover, the estimated values of the Gini index calculated based
on the specified functional form are virtually identical to their actual
observations.",A simple method for estimating the Lorenz curve,2021-12-31 07:05:10,"Thitithep Sitthiyot, Kanyarat Holasut","http://dx.doi.org/10.1057/s41599-021-00948-x, http://arxiv.org/abs/2112.15291v1, http://arxiv.org/pdf/2112.15291v1",econ.GN
33685,gn,"Correlation and cluster analyses (k-Means, Gaussian Mixture Models) were
performed on Generation Z engagement surveys at the workplace. The clustering
indicates relations between various factors that describe the engagement of
employees. The most noticeable factors are a clear statement about the
responsibilities at work, and challenging work. These factors are essential in
practice. The results of this paper can be used in preparing better
motivational systems aimed at Generation Z employees.",Towards the global vision of engagement of Generation Z at the workplace: Mathematical modeling,2021-12-31 15:04:44,"Radosław A. Kycia, Agnieszka Niemczynowicz, Joanna Nieżurawska-Zając","http://arxiv.org/abs/2112.15401v1, http://arxiv.org/pdf/2112.15401v1",econ.GN
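A minimal sketch of the clustering step described in the abstract (k-means and a Gaussian mixture model fitted to engagement survey responses), using scikit-learn with synthetic Likert-style data; the number of clusters and the features are placeholders, not the study's actual survey items.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for Likert-scale engagement survey responses
# (rows = respondents, columns = engagement items); values are placeholders.
rng = np.random.default_rng(3)
X = rng.integers(1, 6, size=(200, 8)).astype(float)
X = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

print("k-means cluster sizes:", np.bincount(kmeans.labels_))
print("GMM cluster sizes   :", np.bincount(gmm.predict(X)))
```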
33686,gn,"Tax analysis and forecasting of revenues are of paramount importance to
ensure fiscal policy's viability and sustainability. However, the measures
taken to contain the spread of the recent pandemic pose an unprecedented
challenge to established models and approaches. This paper proposes a model to
forecast tax revenues in Bulgaria for the fiscal years 2020-2022 built in
accordance with the International Monetary Fund's recommendations on a dataset
covering the period between 1995 and 2019. The study further discusses the
actual trustworthiness of official Bulgarian forecasts, contrasting those
figures with the model previously estimated. This study's quantitative results
both confirm the pandemic's assumed negative impact on tax revenues and prove
that econometrics can be tweaked to produce consistent revenue forecasts even
in the relatively unexplored case of Bulgaria, offering new insights to
policymakers and advocates.","Forecasting pandemic tax revenues in a small, open economy",2021-12-22 13:39:59,Fabio Ashtar Telarico,"http://dx.doi.org/10.7251/NOEEN2129018T, http://arxiv.org/abs/2112.15431v1, http://arxiv.org/pdf/2112.15431v1",econ.GN
33687,gn,"We present an analytical model to study the role of expectation feedbacks and
overlapping portfolios on systemic stability of financial systems. Building on
[Corsi et al., 2016], we model a set of financial institutions having Value at
Risk capital requirements and investing in a portfolio of risky assets, whose
prices evolve stochastically in time and are endogenously driven by the trading
decisions of financial institutions. Assuming that they use adaptive
expectations of risk, we show that the evolution of the system is described by
a slow-fast random dynamical system, which can be studied analytically in some
regimes. The model shows how the risk expectations play a central role in
determining the systemic stability of the financial system and how wrong risk
expectations may create panic-induced reduction or over-optimistic expansion of
balance sheets. Specifically, when investors are myopic in estimating the risk,
the fixed point equilibrium of the system breaks into leverage cycles and
financial variables display a bifurcation cascade eventually leading to chaos.
We discuss the role of financial policy and the effects of some market
frictions, as the cost of diversification and financial transaction taxes, in
determining the stability of the system in the presence of adaptive
expectations of risk.",When panic makes you blind: a chaotic route to systemic risk,2018-05-02 16:19:25,"Piero Mazzarisi, Fabrizio Lillo, Stefano Marmi","http://arxiv.org/abs/1805.00785v1, http://arxiv.org/pdf/1805.00785v1",econ.GN
33688,gn,"Two critical questions about intergenerational outcomes are: one, whether
significant barriers or traps exist between different social or economic
strata; and two, the extent to which intergenerational outcomes do (or can be
used to) affect individual investment and consumption decisions. We develop a
model to explicitly relate these two questions, and prove the first such `rat
race' theorem, showing that a fundamental relationship exists between high
levels of individual investment and the existence of a wealth trap, which traps
otherwise identical agents at a lower level of wealth. Our simple model of
intergenerational wealth dynamics involves agents which balance current
consumption with investment in a single descendant. Investments then determine
descendant wealth via a potentially nonlinear and discontinuous competitiveness
function about which we do not make concavity assumptions. From this model we
demonstrate how to infer such a competitiveness function from investments,
along with geometric criteria to determine individual decisions. Additionally
we investigate the stability of a wealth distribution, both to local
perturbations and to the introduction of new agents with no wealth.",When a `rat race' implies an intergenerational wealth trap,2018-04-30 21:46:19,Joel Nishimura,"http://arxiv.org/abs/1805.01019v1, http://arxiv.org/pdf/1805.01019v1",econ.GN
33689,gn,"A fundamental question in the field of social studies of science is how
research fields emerge, grow and decline over time and space. This study
confronts this question here by developing an inductive analysis of emerging
research fields represented by the human microbiome, evolutionary robotics and
astrobiology. In particular, the number of papers in each emerging research
field, from its starting year to 2017, is analyzed considering the subject
areas (i.e., disciplines) of the authors. Findings suggest some empirical laws of the evolution
of research fields: the first law states that the evolution of a specific
research field is driven by a few scientific disciplines (3-5) that generate
more than 80% of documents (concentration of the scientific production); the
second law states that the evolution of research fields is path-dependent on a
critical discipline (it can be a native discipline that has originated the
research field or a new discipline emerged during the social dynamics of
science); the third law states that a research field can be driven during its
evolution by a new discipline originated by a process of specialization within
science. The findings here can explain and generalize, whenever possible, some
properties of the evolution of scientific fields that are due to interaction
between disciplines, convergence between basic and applied research fields, and
interdisciplinarity in scientific research. Overall, then, this study begins the
process of clarifying and generalizing, as far as possible, the properties of
the social construction and evolution of science to lay a foundation for the
development of sophisticated theories.",The laws of the evolution of research fields,2018-05-08 16:09:31,Mario Coccia,"http://arxiv.org/abs/1805.03492v1, http://arxiv.org/pdf/1805.03492v1",econ.GN
33690,gn,"We demonstrate using multi-layered networks, the existence of an empirical
linkage between the dynamics of the financial network constructed from the
market indices and the macroeconomic networks constructed from macroeconomic
variables such as trade, foreign direct investments, etc. for several countries
across the globe. The temporal scales of the dynamics of the financial
variables and the macroeconomic fundamentals are very different, which makes the
empirical linkage even more interesting and significant. Also, we find that
there exist in the respective networks, core-periphery structures (determined
through centrality measures) that are composed of the similar set of countries
-- a result that may be related through the `gravity model' of the
country-level macroeconomic networks. Thus, from a multi-lateral openness
perspective, we elucidate that for individual countries, larger trade
connectivity is positively associated with higher financial return
correlations. Furthermore, we show that the Economic Complexity Index and the
equity markets have a positive relationship among themselves, as is the case
for Gross Domestic Product. The data science methodology using network theory,
coupled with standard econometric techniques constitute a new approach to
studying multi-level economic phenomena in a comprehensive manner.",Multi-layered Network Structure: Relationship Between Financial and Macroeconomic Dynamics,2018-05-17 18:42:51,"Kiran Sharma, Anindya S. Chakrabarti, Anirban Chakraborti","http://arxiv.org/abs/1805.06829v2, http://arxiv.org/pdf/1805.06829v2",econ.GN
33691,gn,"This paper analyzes the impact of air transport connectivity and
accessibility on scientific collaboration.",The impact of air transport availability on research collaboration: A case study of four universities,2018-11-06 04:05:38,"Adam Ploszaj, Xiaoran Yan, Katy Borner","http://arxiv.org/abs/1811.02106v1, http://arxiv.org/pdf/1811.02106v1",econ.GN
33693,gn,"Over the past five decades a number of multilateral index number systems have
been proposed for spatial and cross-country price comparisons. These
multilateral indexes are usually expressed as solutions to systems of linear or
nonlinear equations. In this paper, we provide general theorems that can be
used to establish necessary and sufficient conditions for the existence and
uniqueness of the Geary-Khamis, IDB, Neary and Rao indexes as well as potential
new systems including two generalized systems of index numbers. One of our main
results is that the necessary and sufficient conditions for existence and
uniqueness of solutions can often be stated in terms of graph-theoretic
concepts and a verifiable condition based on observed quantities of
commodities.","Multilateral Index Number Systems for International Price Comparisons: Properties, Existence and Uniqueness",2018-11-10 08:24:08,"Gholamreza Hajargasht, Prasada Rao","http://arxiv.org/abs/1811.04197v2, http://arxiv.org/pdf/1811.04197v2",econ.TH
34537,th,"I study a model of information acquisition and transmission in which the
sender's ability to misreport her findings is limited. In equilibrium, the
sender only influences the receiver by choosing to remain selectively ignorant,
rather than by deceiving her about the discoveries. Although deception does not
occur, I highlight how deception possibilities determine what information the
sender chooses to acquire and transmit. I then turn to comparative statics,
characterizing in which sense the sender benefits from her claims being more
verifiable, showing this is akin to increasing her commitment power. Finally, I
characterize sender- and receiver-optimal falsification environments.",Covert learning and disclosure,2023-04-06 13:49:58,Matteo Escudé,"http://arxiv.org/abs/2304.02989v3, http://arxiv.org/pdf/2304.02989v3",econ.TH
33695,gn,"We point out a simple equities trading strategy that allows a sufficiently
large, market-neutral, quantitative hedge fund to achieve outsized returns
while simultaneously contributing significantly to increasing global wealth
inequality. Overnight and intraday return distributions in major equity indices
in the United States, Canada, France, Germany, and Japan suggest a few such
firms have been implementing this strategy successfully for more than
twenty-five years.",How to Increase Global Wealth Inequality for Fun and Profit,2018-11-12 23:34:26,Bruce Knuteson,"http://arxiv.org/abs/1811.04994v1, http://arxiv.org/pdf/1811.04994v1",econ.GN
33696,gn,"In recent years, artificial intelligence (AI) decision-making and autonomous
systems have become an integral part of the economy, industry, and society. The
evolving economy of the human-AI ecosystem raises concerns regarding the risks
and values inherent in AI systems. This paper investigates the dynamics of
creation and exchange of values and points out gaps in the perception of
cost-value, knowledge, space and time dimensions. It shows aspects of value
bias in human perception of achievements and costs that are encoded in AI systems.
It also proposes rethinking hard goal definitions and cost-optimal
problem-solving principles through the lens of effectiveness and efficiency in the
development of trusted machines. The paper suggests a value-driven, cost-aware
strategy and principles for problem-solving and planning of effective
research progress to address real-world problems that involve diverse forms of
achievements, investments, and survival scenarios.",Economics of Human-AI Ecosystem: Value Bias and Lost Utility in Multi-Dimensional Gaps,2018-11-16 00:59:41,Daniel Muller,"http://arxiv.org/abs/1811.06606v2, http://arxiv.org/pdf/1811.06606v2",cs.AI
33697,gn,"We consider a stochastic game-theoretic model of an investment market in
continuous time with short-lived assets and study strategies, called survival,
which guarantee that the relative wealth of an investor who uses such a
strategy remains bounded away from zero. The main results consist in obtaining
a sufficient condition for a strategy to be a survival strategy and showing that all
survival strategies are asymptotically close to each other. It is also proved
that a survival strategy allows an investor to accumulate wealth in a certain
sense faster than competitors.",Survival investment strategies in a continuous-time market model with competition,2018-11-30 00:14:38,Mikhail Zhitlukhin,"http://arxiv.org/abs/1811.12491v2, http://arxiv.org/pdf/1811.12491v2",q-fin.MF
33698,gn,"Connected and automated vehicles (CAVs) are expected to yield significant
improvements in safety, energy efficiency, and time utilization. However, their
net effect on energy and environmental outcomes is unclear. Higher fuel economy
reduces the energy required per mile of travel, but it also reduces the fuel
cost of travel, incentivizing more travel and causing an energy ""rebound
effect."" Moreover, CAVs are predicted to vastly reduce the time cost of travel,
inducing further increases in travel and energy use. In this paper, we forecast
the induced travel and rebound from CAVs using data on existing travel
behavior. We develop a microeconomic model of vehicle miles traveled (VMT)
choice under income and time constraints; then we use it to estimate
elasticities of VMT demand with respect to fuel and time costs, with fuel cost
data from the 2017 United States National Household Travel Survey (NHTS) and
wage-derived predictions of travel time cost. Our central estimate of the
combined price elasticity of VMT demand is -0.4, which differs substantially
from previous estimates. We also find evidence that wealthier households have
more elastic demand, and that households at all income levels are more
sensitive to time costs than to fuel costs. We use our estimated elasticities
to simulate VMT and energy use impacts of full, private CAV adoption under a
range of possible changes to the fuel and time costs of travel. We forecast a
2-47% increase in travel demand for an average household. Our results indicate
that backfire - i.e., a net rise in energy use - is a possibility, especially
in higher income groups. This presents a stiff challenge to policy goals for
reductions in not only energy use but also traffic congestion and local and
global air pollution, as CAV use increases.",Forecasting the Impact of Connected and Automated Vehicles on Energy Use A Microeconomic Study of Induced Travel and Energy Rebound,2019-01-31 20:19:33,"Morteza Taiebat, Samuel Stolper, Ming Xu","http://dx.doi.org/10.1016/j.apenergy.2019.03.174, http://arxiv.org/abs/1902.00382v4, http://arxiv.org/pdf/1902.00382v4",econ.GN
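A worked example of how the paper's central combined price elasticity of VMT demand (-0.4) maps a cost reduction into induced travel under a constant-elasticity demand form; the 60% cost-reduction scenario below is illustrative, not a figure from the paper.

```python
# Constant-elasticity demand: VMT_new / VMT_old = (cost_new / cost_old) ** eps.
# eps = -0.4 is the paper's central combined price elasticity of VMT demand;
# the 60% reduction in generalized cost is an illustrative scenario only.
eps = -0.4
cost_ratio = 0.40                  # per-mile generalized cost falls by 60%
vmt_ratio = cost_ratio ** eps
print(f"Induced travel: {100 * (vmt_ratio - 1):.0f}% more VMT")  # roughly +44%
```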
33699,gn,"Humanity has been fascinated by the pursuit of fortune since time immemorial,
and many successful outcomes benefit from strokes of luck. But success is
subject to complexity, uncertainty, and change - and at times becoming
increasingly unequally distributed. This leads to tension and confusion over to
what extent people actually get what they deserve (i.e., fairness/meritocracy).
Moreover, in many fields, humans are over-confident and pervasively confuse
luck for skill (I win, it's skill; I lose, it's bad luck). In some fields,
there is too much risk taking; in others, not enough. Where success derives in
large part from luck - and especially where bailouts skew the incentives
(heads, I win; tails, you lose) - it follows that luck is rewarded too much.
This incentivizes a culture of gambling, while downplaying the importance of
productive effort. And short-term success is often rewarded irrespective of, and
potentially to the detriment of, long-term system fitness. However, much
success is truly meritocratic, and the problem is to discern and reward based
on merit. We call this the fair reward problem. To address this, we propose
three different measures to assess merit: (i) raw outcome; (ii) risk adjusted
outcome, and (iii) prospective. We emphasize the need, in many cases, for the
deductive prospective approach, which considers the potential of a system to
adapt and mutate in novel futures. This is formalized within an evolutionary
system, comprised of five processes, inter alia handling the
exploration-exploitation trade-off. Several human endeavors - including
finance, politics, and science - are analyzed through these lenses, and concrete
solutions are proposed to support a prosperous and meritocratic society.",The fair reward problem: the illusion of success and how to solve it,2019-02-03 13:22:28,"Didier Sornette, Spencer Wheatley, Peter Cauwels","http://arxiv.org/abs/1902.04940v2, http://arxiv.org/pdf/1902.04940v2",econ.GN
33700,gn,"Interest in agent-based models of financial markets and the wider economy has
increased consistently over the last few decades, in no small part due to their
ability to reproduce a number of empirically-observed stylised facts that are
not easily recovered by more traditional modelling approaches. Nevertheless,
the agent-based modelling paradigm faces mounting criticism, focused
particularly on the rigour of current validation and calibration practices,
most of which remain qualitative and stylised fact-driven. While the literature
on quantitative and data-driven approaches has seen significant expansion in
recent years, most studies have focused on the introduction of new calibration
methods that are neither benchmarked against existing alternatives nor
rigorously tested in terms of the quality of the estimates they produce. We
therefore compare a number of prominent ABM calibration methods, both
established and novel, through a series of computational experiments in an
attempt to determine the respective strengths and weaknesses of each approach
and the overall quality of the resultant parameter estimates. We find that
Bayesian estimation, though less popular in the literature, consistently
outperforms frequentist, objective function-based approaches and results in
reasonable parameter estimates in many contexts. Despite this, we also find
that agent-based model calibration techniques require further development in
order to definitively calibrate large-scale models. We therefore make
suggestions for future research.",A Comparison of Economic Agent-Based Model Calibration Methods,2019-02-15 21:45:48,Donovan Platt,"http://arxiv.org/abs/1902.05938v1, http://arxiv.org/pdf/1902.05938v1",q-fin.CP
33701,gn,"Technological progress is leading to proliferation and diversification of
trading venues, thus increasing the relevance of the long-standing question of
market fragmentation versus consolidation. To address this issue
quantitatively, we analyse systems of adaptive traders that choose where to
trade based on their previous experience. We demonstrate that based only on
aggregate parameters about trading venues, such as the demand to supply ratio,
we can assess whether a population of traders will prefer fragmentation or
specialization towards a single venue. We investigate what conditions lead to
market fragmentation for populations with a long memory and analyse the
stability and other properties of both fragmented and consolidated steady
states. Finally we investigate the dynamics of populations with finite memory;
when this memory is long the true long-time steady states are consolidated but
fragmented states are strongly metastable, dominating the behaviour out to long
times.",Market fragmentation and market consolidation: Multiple steady states in systems of adaptive traders choosing where to trade,2019-02-18 15:56:09,"Aleksandra Alorić, Peter Sollich","http://dx.doi.org/10.1103/PhysRevE.99.062309, http://arxiv.org/abs/1902.06549v2, http://arxiv.org/pdf/1902.06549v2",q-fin.TR
33702,gn,"We introduce a constrained priority mechanism that combines outcome-based
matching from machine-learning with preference-based allocation schemes common
in market design. Using real-world data, we illustrate how our mechanism could
be applied to the assignment of refugee families to host country locations, and
kindergarteners to schools. Our mechanism allows a planner to first specify a
threshold $\bar g$ for the minimum acceptable average outcome score that should
be achieved by the assignment. In the refugee matching context, this score
corresponds to the predicted probability of employment, while in the student
assignment context it corresponds to standardized test scores. The mechanism is
a priority mechanism that considers both outcomes and preferences by assigning
agents (refugee families, students) based on their preferences, but subject to
meeting the planner's specified threshold. The mechanism is both strategy-proof
and constrained efficient in that it always generates a matching that is not
Pareto dominated by any other matching that respects the planner's threshold.",Combining Outcome-Based and Preference-Based Matching: A Constrained Priority Mechanism,2019-02-20 03:17:33,"Avidit Acharya, Kirk Bansak, Jens Hainmueller","http://arxiv.org/abs/1902.07355v3, http://arxiv.org/pdf/1902.07355v3",econ.GN
33703,gn,"We propose a heterogeneous agent market model (HAM) in continuous time. The
market is populated by fundamental traders and chartists, who both use simple
linear trading rules. Most of the related literature explores stability, price
dynamics and profitability either within deterministic models or by simulation.
Our novel formulation lends itself to analytic treatment even in the stochastic
case. We prove conditions for the (stochastic) stability of the price process,
and also for the price to mean-revert to the fundamental value. Assuming
stability, we derive analytic formulae on how the population ratios influence
price dynamics and the profitability of the strategies. Our results suggest
that whichever trader type is more present in the market will achieve higher
returns.",Analytic solutions in a continuous-time financial market model,2019-02-26 18:26:29,"Zsolt Bihary, Attila András Víg","http://arxiv.org/abs/1902.09999v1, http://arxiv.org/pdf/1902.09999v1",econ.GN
33704,gn,"Space conditioning, and cooling in particular, is a key factor in human
productivity and well-being across the globe. During the 21st century, global
cooling demand is expected to grow significantly due to the increase in wealth
and population in sunny nations across the globe and the advance of global
warming. The same locations that see high demand for cooling are also ideal for
electricity generation via photovoltaics (PV). Despite the apparent synergy
between cooling demand and PV generation, the potential of the cooling sector
to sustain PV generation has not been assessed on a global scale. Here, we
perform a global assessment of increased PV electricity adoption enabled by the
residential cooling sector during the 21st century. Already today, utilizing PV
production for cooling could facilitate an additional installed PV capacity of
approximately 540 GW, more than the global PV capacity of today. Using
established scenarios of population and income growth, as well as accounting
for future global warming, we further project that the global residential
cooling sector could sustain an added PV capacity of between 20 and 200 GW each year
for most of the 21st century, on par with the current global manufacturing
capacity of 100 GW. Furthermore, we find that without storage, PV could
directly power approximately 50% of cooling demand, and that this fraction is
set to increase from 49% to 56% during the 21st century, as cooling demand
grows in locations where PV and cooling have a higher synergy. With this
geographic shift in demand, the potential of distributed storage also grows. We
simulate that with a 1 m$^3$ water-based latent thermal storage per household,
the fraction of cooling demand met with PV would increase from 55% to 70%
during the century. These results show that the synergy between cooling and PV
is notable and could significantly accelerate the growth of the global PV
industry.",Meeting Global Cooling Demand with Photovoltaics during the 21st Century,2019-02-24 13:52:59,"Hannu S. Laine, Jyri Salpakari, Erin E. Looney, Hele Savin, Ian Marius Peters, Tonio Buonassisi","http://dx.doi.org/10.1039/C9EE00002J, http://arxiv.org/abs/1902.10080v1, http://arxiv.org/pdf/1902.10080v1",econ.GN
33725,gn,"At the zero lower bound, the New Keynesian model predicts that output and
inflation collapse to implausibly low levels, and that government spending and
forward guidance have implausibly large effects. To resolve these anomalies, we
introduce wealth into the utility function; the justification is that wealth is
a marker of social status, and people value status. Since people partly save to
accrue social status, the Euler equation is modified. As a result, when the
marginal utility of wealth is sufficiently large, the dynamical system
representing the zero-lower-bound equilibrium transforms from a saddle to a
source---which resolves all the anomalies.",Resolving New Keynesian Anomalies with Wealth in the Utility Function,2019-05-31 17:45:11,"Pascal Michaillat, Emmanuel Saez","http://dx.doi.org/10.1162/rest_a_00893, http://arxiv.org/abs/1905.13645v3, http://arxiv.org/pdf/1905.13645v3",econ.TH
33705,gn,"We study the problem of demand response contracts in electricity markets by
quantifying the impact of considering a mean-field of consumers, whose
consumption is impacted by a common noise. We formulate the problem as a
Principal-Agent problem with moral hazard in which the Principal - she - is an
electricity producer who observes continuously the consumption of a continuum
of risk-averse consumers, and designs contracts in order to reduce her
production costs. More precisely, the producer incentivises the consumers to
reduce the average and the volatility of their consumption in different usages,
without observing the efforts they make. We prove that the producer can benefit
from considering the mean-field of consumers by indexing contracts on the
consumption of one Agent and aggregate consumption statistics from the
distribution of the entire population of consumers. In the case of linear
energy valuation, we provide closed-form expression for this new type of
optimal contracts that maximises the utility of the producer. In most cases, we
show that this new type of contracts allows the Principal to choose the risks
she wants to bear, and to reduce the problem at hand to an uncorrelated one.",Mean-field moral hazard for optimal energy demand response management,2019-02-27 12:16:34,"Romuald Elie, Emma Hubert, Thibaut Mastrolia, Dylan Possamaï","http://arxiv.org/abs/1902.10405v3, http://arxiv.org/pdf/1902.10405v3",math.PR
33706,gn,"Pairwise comparisons are used in a wide variety of decision situations where
the importance of alternatives should be measured on a numerical scale. One
popular method to derive the priorities is based on the right eigenvector of a
multiplicative pairwise comparison matrix. We consider two monotonicity axioms
in this setting. First, increasing an arbitrary entry of a pairwise comparison
matrix is not allowed to result in a counter-intuitive rank reversal, that is,
the favoured alternative in the corresponding row cannot be ranked lower than
any other alternative if this was not the case before the change (rank
monotonicity). Second, the same modification should not decrease the normalised
weight of the favoured alternative (weight monotonicity). Both properties are
satisfied by the geometric mean method but violated by the eigenvector method.
The axioms do not uniquely determine the geometric mean. The relationship
between the two monotonicity properties and the Saaty inconsistency index are
investigated for the eigenvector method via simulations. Even though their
violation turns out not to be a usual problem even for heavily inconsistent
matrices, all decision-makers should be informed about the possible occurrence
of such unexpected consequences of increasing a matrix entry.",On the monotonicity of the eigenvector method,2019-02-28 00:19:22,"László Csató, Dóra Gréta Petróczy","http://dx.doi.org/10.1016/j.ejor.2020.10.020, http://arxiv.org/abs/1902.10790v5, http://arxiv.org/pdf/1902.10790v5",math.OC
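The two weighting methods compared in the abstract can be sketched on a small multiplicative pairwise comparison matrix: the eigenvector method takes the normalized principal right eigenvector, while the geometric mean method normalizes the row geometric means. The matrix entries below are made up for illustration.

```python
import numpy as np

# Illustrative 3x3 multiplicative (reciprocal) pairwise comparison matrix.
A = np.array([[1.0, 2.0, 6.0],
              [0.5, 1.0, 3.0],
              [1/6, 1/3, 1.0]])

# Eigenvector method: normalized principal right eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
w_ev = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w_ev = w_ev / w_ev.sum()          # normalization also fixes the sign

# Geometric mean (row) method: normalized geometric means of the rows.
gm = np.prod(A, axis=1) ** (1 / A.shape[0])
w_gm = gm / gm.sum()

print("eigenvector weights   :", np.round(w_ev, 3))
print("geometric mean weights:", np.round(w_gm, 3))
```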
33707,gn,"This paper considers an often forgotten relationship, the time delay between
a cause and its effect in economies and finance. We treat the case of Foreign
Direct Investment (FDI) and economic growth, - measured through a country Gross
Domestic Product (GDP). The pertinent data refers to 43 countries, over
1970-2015, - for a total of 4278 observations. When countries are grouped
  according to the Inequality-Adjusted Human Development Index (IHDI), it is
found that a time lag dependence effect exists in FDI-GDP correlations.
  This is established through a time-dependent Pearson 's product-moment
correlation coefficient matrix.
  Moreover, such a Pearson correlation coefficient is observed to evolve from
positive
  to negative values depending on the IHDI, from low to high. It is
""politically and policy
  ""relevant"" that
  the correlation is statistically significant providing the time lag is less
than 3 years. A ""rank-size"" law is demonstrated.
  It is recommended to reconsider such a time lag effect when discussing
previous analyses whence conclusions on international business, and thereafter
on forecasting.",Evidence for Gross Domestic Product growth time delay dependence over Foreign Direct Investment. A time-lag dependent correlation study,2019-05-05 09:32:36,"Marcel Ausloos, Ali Eskandary, Parmjit Kaur, Gurjeet Dhesi","http://dx.doi.org/10.1016/j.physa.2019.121181, http://arxiv.org/abs/1905.01617v1, http://arxiv.org/pdf/1905.01617v1",q-fin.GN
33708,gn,"The fast pace of artificial intelligence (AI) and automation is propelling
strategists to reshape their business models. This is fostering the integration
of AI in business processes, but the consequences of this adoption are
underexplored and need attention. This paper focuses on the overall impact of
AI on businesses - from research, innovation, market deployment to future
shifts in business models. To assess this overall impact, we design a
three-dimensional research model, based upon the Neo-Schumpeterian economics
and its three forces viz. innovation, knowledge, and entrepreneurship. The
first dimension deals with research and innovation in AI. In the second
dimension, we explore the influence of AI on the global market and the
strategic objectives of the businesses and finally, the third dimension
examines how AI is shaping business contexts. Additionally, the paper explores
AI implications on actors and its dark sides.","Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models",2019-05-03 15:05:08,"Neha Soni, Enakshi Khular Sharma, Narotam Singh, Amita Kapoor","http://arxiv.org/abs/1905.02092v1, http://arxiv.org/pdf/1905.02092v1",econ.GN
33709,gn,"Encouraging sustainable mobility patterns is at the forefront of policymaking
at all scales of governance as the collective consciousness surrounding climate
change continues to expand. Not every community, however, possesses the
necessary economic or socio-cultural capital to encourage modal shifts away
from private motorized vehicles towards active modes. The current literature on
`soft' policy emphasizes the importance of tailoring behavior change campaigns
to individual or geographic context. Yet, there is a lack of insight and
appropriate tools to promote active mobility and overcome transport
disadvantage from the local community perspective. The current study
investigates the promotion of walking and cycling adoption using a series of
focus groups with local residents in two geographic communities, namely
Chicago's (1) Humboldt Park neighborhood and (2) suburb of Evanston. The
research approach combines traditional qualitative discourse analysis with
quantitative text-mining tools, namely topic modeling and sentiment analysis.
The analysis uncovers the local mobility culture, embedded norms and values
associated with acceptance of active travel modes in different communities. We
observe that underserved populations within diverse communities view active
mobility simultaneously as a necessity and as a symbol of privilege that is
sometimes at odds with the local culture. The mixed methods approach to
analyzing community member discourses is translated into policy findings that
are either tailored to local context or broadly applicable to curbing
automobile dominance. Overall, residents of both Humboldt Park and Evanston
envision a society in which multimodalism replaces car-centrism, but
differences in the local physical and social environments would and should
influence the manner in which overarching policy objectives are met.",Where does active travel fit within local community narratives of mobility space and place?,2019-05-07 19:25:12,"Alec Biehl, Ying Chen, Karla Sanabria-Veaz, David Uttal, Amanda Stathopoulos","http://dx.doi.org/10.1016/j.tra.2018.10.023, http://arxiv.org/abs/1905.02674v1, http://arxiv.org/pdf/1905.02674v1",econ.GN
33710,gn,"In this feasibility study, the impact of academic research from social
sciences and humanities on technological innovation is explored through a study
of citations patterns of journal articles in patents. Specifically we focus on
citations of journals from the field of environmental economics in patents
included in an American patent database (USPTO). Three decades of patents have
led to a small set of journal articles (85) that are being cited from the field
of environmental economics. While this route of measuring how academic research
is validated through its role in stimulating technological progress may be
rather limited (based on this first exploration), it may still point to a
valuable and interesting topic for further research.",Does Environmental Economics lead to patentable research?,2019-05-07 10:17:24,"Xiaojun Hu, Ronald Rousseau, Sandra Rousseau","http://arxiv.org/abs/1905.02875v1, http://arxiv.org/pdf/1905.02875v1",cs.DL
33711,gn,"Inspired by Gambetta's theory on the origins of the mafia in Sicily, we
report a geo-concentrating phenomenon of scams in China, and propose a novel
economic explanation. Our analysis has some policy implications.","From Sicilian mafia to Chinese ""scam villages""",2019-05-06 13:23:15,Jeff Yan,"http://arxiv.org/abs/1905.03108v1, http://arxiv.org/pdf/1905.03108v1",cs.CR
33712,gn,"Online reviews are feedback voluntarily posted by consumers about their
consumption experiences. This feedback indicates customer attitudes such as
affection, awareness and faith towards a brand or a firm and demonstrates
inherent connections with a company's future sales, cash flow and stock
pricing. However, the predictive power of online reviews for long-term returns
on stocks, especially at the individual level, has received little research
attention, making a comprehensive exploration necessary to resolve existing
debates. In this paper, which is based exclusively on online reviews, a
methodology framework for predicting long-term returns of individual stocks
with competent performance is established. Specifically, 6,246 features of 13
categories inferred from more than 18 million product reviews are selected to
build the prediction models. With the best classifier selected from
cross-validation tests, a satisfactory increase in accuracy, 13.94%, was
achieved compared to the cutting-edge solution that uses 10 technical indicators
as features, representing an 18.28% improvement relative to the random
value. The robustness of our model is further evaluated and testified in
realistic scenarios. It is thus confirmed for the first time that long-term
returns of individual stocks can be predicted by online reviews. This study
provides new opportunities for investors with respect to long-term investments
in individual stocks.",Online reviews can predict long-term returns of individual stocks,2019-04-30 05:53:46,"Junran Wu, Ke Xu, Jichang Zhao","http://arxiv.org/abs/1905.03189v1, http://arxiv.org/pdf/1905.03189v1",econ.GN
33713,gn,"The subject of the present article is the study of correlations between large
insurance companies and their contribution to systemic risk in the insurance
sector. Our main goal is to analyze the conditional structure of the
correlation on the European insurance market and to compare systemic risk in
different regimes of this market. These regimes are identified by monitoring
the weekly rates of returns of eight of the largest insurers (five from Europe
and the biggest insurers from the USA, Canada and China) during the period
January 2005 to December 2018. To this aim we use statistical clustering
methods for time units (weeks) to which we assigned the conditional variances
obtained from the estimated copula-DCC-GARCH model. The advantage of such an
approach is that there is no need to assume a priori a number of market
regimes, since this number has been identified by means of clustering quality
validation. In each of the identified market regimes we determined the now
commonly used CoVaR systemic risk measure. From the analysis we conclude
that all the considered insurance companies are positively correlated and that
this correlation is stronger in times of turbulence on global markets, which
shows an increased exposure of the European insurance sector to systemic risk
during crises. Moreover, in times of turbulence on global markets the level of
the CoVaR systemic risk index is much higher than in ""normal conditions"".",Dependencies and systemic risk in the European insurance sector: Some new evidence based on copula-DCC-GARCH model and selected clustering methods,2019-05-08 21:01:40,"Anna Denkowska, Stanisław Wanat","http://arxiv.org/abs/1905.03273v1, http://arxiv.org/pdf/1905.03273v1",econ.GN
33716,gn,"Uncovering the structure of socioeconomic systems and timely estimation of
socioeconomic status are significant for economic development. The
understanding of socioeconomic processes provides foundations to quantify
global economic development, to map regional industrial structure, and to infer
individual socioeconomic status. In this review, we will make a brief manifesto
about a new interdisciplinary research field named Computational
Socioeconomics, followed by detailed introduction about data resources,
computational tools, data-driven methods, theoretical models and novel
applications at multiple resolutions, including the quantification of global
economic inequality and complexity, the map of regional industrial structure
and urban perception, the estimation of individual socioeconomic status and
demographics, and the real-time monitoring of emergent events. This review,
together with pioneering works we have highlighted, will draw increasing
interdisciplinary attention and induce a methodological shift in future
socioeconomic studies.",Computational Socioeconomics,2019-05-15 16:28:40,"Jian Gao, Yi-Cheng Zhang, Tao Zhou","http://dx.doi.org/10.1016/j.physrep.2019.05.002, http://arxiv.org/abs/1905.06166v1, http://arxiv.org/pdf/1905.06166v1",physics.soc-ph
33717,gn,"This paper introduces a new correction scheme to a conventional
regression-based event study method: a topological machine-learning approach
with a self-organizing map (SOM). We use this new scheme to analyze a major
market event in Japan and find that the factors of abnormal stock returns can
be easily identified and the event-cluster can be depicted. We also find that a
conventional event study method involves an empirical analysis mechanism that
tends to produce bias, typically in an event-clustered market situation. We
explain our new correction scheme and
apply it to an event in the Japanese market --- the holding disclosure of the
Government Pension Investment Fund (GPIF) on July 31, 2015.",Improving Regression-based Event Study Analysis Using a Topological Machine-learning Method,2019-05-16 08:44:07,"Takashi Yamashita, Ryozo Miura","http://arxiv.org/abs/1905.06536v1, http://arxiv.org/pdf/1905.06536v1",econ.GN
33720,gn,"Ride-hailing marketplaces like Uber and Lyft use dynamic pricing, often
called surge, to balance the supply of available drivers with the demand for
rides. We study driver-side payment mechanisms for such marketplaces,
presenting the theoretical foundation that has informed the design of Uber's
new additive driver surge mechanism. We present a dynamic stochastic model to
capture the impact of surge pricing on driver earnings and their strategies to
maximize such earnings. In this setting, some time periods (surge) are more
valuable than others (non-surge), and so trips of different time lengths vary
in the induced driver opportunity cost.
  First, we show that multiplicative surge, historically the standard on
ride-hailing platforms, is not incentive compatible in a dynamic setting. We
then propose a structured, incentive-compatible pricing mechanism. This
closed-form mechanism has a simple form and is well-approximated by Uber's new
additive surge mechanism. Finally, through both numerical analysis and real
data from a ride-hailing marketplace, we show that additive surge is more
incentive compatible in practice than is multiplicative surge.",Driver Surge Pricing,2019-05-18 10:23:52,"Nikhil Garg, Hamid Nazerzadeh","http://arxiv.org/abs/1905.07544v4, http://arxiv.org/pdf/1905.07544v4",cs.GT
33721,gn,"Rinne et al. conduct an interesting analysis of the impact of wind turbine
technology and land-use on wind power potentials, which allows profound
insights into each factor's contribution to overall potentials. The paper
presents a detailed model of site-specific wind turbine investment cost (i.e.
road- and grid access costs) complemented by a model used to estimate
site-independent costs. We believe that they propose a cutting-edge model of
site-specific investment costs. However, the site-independent cost model is
flawed in our opinion. This flaw most likely does not impact the results
presented in the paper, although we expect a considerable generalization error.
Thus the application of the wind turbine cost model in other contexts may lead
to unreasonable results. More generally, the derivation of the wind turbine
cost model serves as an example of how applications of automated regression
analysis can go wrong.",The perils of automated fitting of datasets: the case of a wind turbine cost model,2019-05-22 00:01:15,"Claude Klöckl, Katharina Gruber, Peter Regner, Sebastian Wehrle, Johannes Schmidt","http://arxiv.org/abs/1905.08870v2, http://arxiv.org/pdf/1905.08870v2",stat.AP
33722,gn,"This paper contributes to the literature on the impact of Big Science Centres
on technological innovation. We exploit a unique dataset with information on
CERN's procurement orders to study the collaborative innovation process between
CERN and its industrial partners. After a qualitative discussion of case
studies, survival and count data models are estimated; the impact of CERN
procurement on suppliers' innovation is captured by the number of patent
applications. The fact that firms in our sample received their first order over
a long time span (1995-2008) delivers a natural partition of industrial
partners into ""suppliers"" and ""not yet suppliers"". This allows estimating the
impact of CERN on the hazard to file a patent for the first time and on the
number of patent applications, as well as the time needed for these effects to
show up. We find that a ""CERN effect"" does exist: being an industrial partner
of CERN is associated with an increase in the hazard to file a patent for the
first time and in the number of patent applications. These effects require a
significant ""gestation lag"" in the range of five to eight years, pointing to a
relatively slow process of absorption of new ideas.",Technological Learning and Innovation Gestation Lags at the Frontier of Science: from CERN Procurement to Patent,2019-05-23 12:36:25,"Andrea Bastianin, Paolo Castelnovo, Massimo Florio, Anna Giunta","http://arxiv.org/abs/1905.09552v1, http://arxiv.org/pdf/1905.09552v1",econ.GN
33724,gn,"I study optimal disclosure policies in sequential contests. A contest
designer chooses at which periods to publicly disclose the efforts of previous
contestants. I provide results for a wide range of possible objectives for the
contest designer. While different objectives involve different trade-offs, I
show that under many circumstances the optimal contest is one of the three
basic contest structures widely studied in the literature: simultaneous,
first-mover, or sequential contest.",Contest Architecture with Public Disclosures,2019-05-27 10:05:17,Toomas Hinnosaar,"http://arxiv.org/abs/1905.11004v2, http://arxiv.org/pdf/1905.11004v2",econ.TH
33727,gn,"This research develops a socioeconomic health index for nations through a
model-based approach which incorporates spatial dependence and examines the
impact of a policy through a causal modeling framework. As the gross domestic
product (GDP) has been regarded as a dated measure and tool for benchmarking a
nation's economic performance, there has been a growing consensus for an
alternative measure---such as a composite `wellbeing' index---to holistically
capture a country's socioeconomic health performance. Many conventional ways of
constructing wellbeing/health indices involve combining different observable
metrics, such as life expectancy and education level, to form an index.
However, health is inherently latent with metrics actually being observable
indicators of health. In contrast to the GDP or other conventional health
indices, our approach provides a holistic quantification of the overall
`health' of a nation. We build upon the latent health factor index (LHFI)
approach that has been used to assess the unobservable ecological/ecosystem
health. This framework integratively models the relationship between metrics,
the latent health, and the covariates that drive the notion of health. In this
paper, the LHFI structure is integrated with spatial modeling and statistical
causal modeling, so as to evaluate the impact of a policy variable (mandatory
maternity leave days) on a nation's socioeconomic health, while formally
accounting for spatial dependency among the nations. We apply our model to
countries around the world using data on various metrics and potential
covariates pertaining to different aspects of societal health. The approach is
structured in a Bayesian hierarchical framework and results are obtained by
Markov chain Monte Carlo techniques.",Modeling National Latent Socioeconomic Health and Examination of Policy Effects via Causal Inference,2019-11-01 21:13:30,"F. Swen Kuh, Grace S. Chiu, Anton H. Westveld","http://arxiv.org/abs/1911.00512v1, http://arxiv.org/pdf/1911.00512v1",stat.AP
33729,gn,"We discuss the idea of a purely algorithmic universal world iCurrency set
forth in [Kakushadze and Liew, 2014] (https://ssrn.com/abstract=2542541) and
expanded in [Kakushadze and Liew, 2017] (https://ssrn.com/abstract=3059330) in
light of recent developments, including Libra. Is Libra a contender to become
iCurrency? Among other things, we analyze the Libra proposal, including the
stability and volatility aspects, and discuss various issues that must be
addressed. For instance, one cannot expect a cryptocurrency such as Libra to
trade in a narrow band without a robust monetary policy. The presentation in
the main text of the paper is intentionally nontechnical. It is followed by an
extensive appendix with a mathematical description of the dynamics of
(crypto)currency exchange rates in target zones, mechanisms for keeping the
exchange rate from breaching the band, the role of volatility, etc.",iCurrency?,2019-10-29 00:12:11,"Zura Kakushadze, Willie Yu","http://arxiv.org/abs/1911.01272v2, http://arxiv.org/pdf/1911.01272v2",q-fin.GN
33730,gn,"Different shares of distinct commodity sectors in production, trade, and
consumption illustrate how resources and capital are allocated and invested.
Economic progress has been claimed to change the share distribution in a
universal manner, as exemplified by Engel's law for household expenditure and
the shift from the primary to the manufacturing and service sectors in the
three-sector model. Searching for large-scale quantitative evidence of such a
correlation, we analyze the gross domestic product (GDP) and international
trade data based on the standard international trade classification (SITC) in
the period 1962 to 2000. Three categories, among ten in the SITC, are found to
have their export shares significantly correlated with the GDP over countries
and time: the machinery category has a positive correlation while food and crude materials have
negative correlations. The export shares of commodity categories of a country
are related to its GDP by a power-law with the exponents characterizing the
GDP-elasticity of their export shares. The distance between two countries in
terms of their export portfolios is measured to identify several clusters of
countries sharing similar portfolios in 1962 and 2000. We show that the
countries whose GDP is increased significantly in the period are likely to
transit to the clusters displaying large share of the machinery category.",Engel's law in the commodity composition of exports,2019-11-05 05:04:34,"Sung-Gook Choi, Deok-Sun Lee","http://dx.doi.org/10.1038/s41598-019-52281-8, http://arxiv.org/abs/1911.01568v1, http://arxiv.org/pdf/1911.01568v1",q-fin.GN
33731,gn,"We demonstrate that the tail dependence should always be taken into account
as a proxy for the systematic risk of loss on investments. We provide clear
statistical evidence that the structure of investment portfolios on a
regulated market should be adjusted to the price of gold. Our finding suggests
that the active bartering of oil for goods would prevent the collapse of the
national market facing international sanctions.","A Regulated Market Under Sanctions: On Tail Dependence Between Oil, Gold, and Tehran Stock Exchange Index",2019-11-02 05:05:32,"Abootaleb Shirvani, Dimitri Volchenkov","http://dx.doi.org/10.5890/JVTSD.2019.09.004, http://arxiv.org/abs/1911.01826v1, http://arxiv.org/pdf/1911.01826v1",q-fin.ST
33901,th,"We analyze situations in which players build reputations for honesty rather
than for playing particular actions. A patient player facing a sequence of
short-run opponents makes an announcement about their intended action after
observing an idiosyncratic shock, and before players act. The patient player is
either an honest type whose action coincides with their announcement, or an
opportunistic type who can freely choose their actions. We show that the
patient player can secure a high payoff by building a reputation for being
honest when the short-run players face uncertainty about which of the patient
player's actions are currently feasible, but may receive a low payoff when
there is no such uncertainty.",A Reputation for Honesty,2020-11-14 01:51:31,"Drew Fudenberg, Ying Gao, Harry Pei","http://arxiv.org/abs/2011.07159v1, http://arxiv.org/pdf/2011.07159v1",econ.TH
33732,gn,"This study explores the time-varying structure of market efficiency for the
prewar Japanese stock market using a new market capitalization-weighted stock
price index based on the adaptive market hypothesis (AMH). First, we find that
the degree of market efficiency in the prewar Japanese stock market varies over
time and with major historical events. Second, the AMH is supported in this
market. Third, this study concludes that market efficiency was maintained
throughout the period, whereas previous studies did not come to the same
conclusion due to differences in the calculation methods of stock indices.
Finally, as government intervention in the market intensified throughout the
1930s, market efficiency declined and the market rapidly incorporated the war
risk premium, especially from the time when the Pacific War became
inevitable.","Measuring the Time-Varying Market Efficiency in the Prewar Japanese Stock Market, 1924-1943",2019-11-11 06:28:44,"Kenichi Hirayama, Akihiko Noda","http://arxiv.org/abs/1911.04059v5, http://arxiv.org/pdf/1911.04059v5",q-fin.ST
33733,gn,"This paper aims to explore the mechanical effect of a company's share
repurchase on earnings per share (EPS). In particular, while a share repurchase
scheme will reduce the overall number of shares, suggesting that the EPS may
increase, clearly the expenditure will reduce the net earnings of a company,
introducing a trade-off between these competing effects. We first of all review
accretive share repurchases, then characterise the increase in EPS as a
function of the price paid by the company. Subsequently, we analyse and
quantify the estimated difference in earnings growth between a company's
natural growth in the absence of a buyback scheme and that with its earnings
altered as a result of the buybacks. We conclude with an examination of the
effect of share repurchases in two case studies in the US stock market.",Quantitative earnings enhancement from share buybacks,2019-11-11 14:57:01,"Lawrence Middleton, James Dodd, Graham Baird","http://arxiv.org/abs/1911.04199v1, http://arxiv.org/pdf/1911.04199v1",q-fin.GN
33734,gn,"As Mobility as a Service (MaaS) systems become increasingly popular, travel
is changing from unimodal trips to personalized services offered by a platform
of mobility operators. Evaluation of MaaS platforms depends on modeling both
user route decisions as well as operator service and pricing decisions. We
adopt a new paradigm for traffic assignment in a MaaS network of multiple
operators using the concept of stable matching to allocate costs and determine
prices offered by operators corresponding to user route choices and operator
service choices without resorting to nonconvex bilevel programming
formulations. Unlike our prior work, the proposed model allows travelers to
make multimodal, multi-operator trips, resulting in stable cost allocations
between competing network operators to provide MaaS for users. An algorithm is
proposed to efficiently generate stability conditions for the stable outcome
model. Extensive computational experiments demonstrate the use of the model in
handling the pricing responses of MaaS operators under technological and
capacity changes, government acquisition, consolidation, and firm entry, using
the classic Sioux Falls network. The proposed algorithm replicates the same
stability conditions as explicit path enumeration while taking only 17 seconds,
whereas explicit path enumeration times out after more than 2 hours.",A many-to-many assignment game and stable outcome algorithm to evaluate collaborative Mobility-as-a-Service platforms,2019-11-11 21:20:25,"Theodoros P. Pantelidis, Joseph Y. J. Chow, Saeid Rasulkhani","http://dx.doi.org/10.1016/j.trb.2020.08.002, http://arxiv.org/abs/1911.04435v3, http://arxiv.org/pdf/1911.04435v3",cs.CY
33735,gn,"The rank-size plots of a large number of different physical and
socio-economic systems are usually said to follow Zipf's law, but a unique
framework for the comprehension of this ubiquitous scaling law is still
lacking. Here we show that a dynamical approach is crucial: during their
evolution, some systems are attracted towards Zipf's law, while others present
Zipf's law only temporarily and, therefore, spuriously. A truly Zipfian
dynamics is characterized by a dynamical constraint, or coherence, among the
parameters of the generating PDF, and the number of elements in the system. A
clear-cut example of such coherence is natural language. Our framework allows
us to derive some quantitative results that go well beyond the usual Zipf's
law: i) earthquakes can evolve only incoherently and thus show Zipf's law
spuriously; this allows an assessment of the largest possible magnitude of an
earthquake occurring in a geographical region. ii) We prove that Zipfian
dynamics are not additive, explaining analytically why US cities evolve
coherently, while world cities do not. iii) Our concept of coherence can be
used for model selection, for example, the Yule-Simon process can describe the
dynamics of world countries' GDP. iv) World cities present spurious Zipf's law
and we use this property for estimating the maximal population of an urban
agglomeration.",Dynamical approach to Zipf's law,2019-11-12 16:42:09,"Giordano De Marzo, Andrea Gabrielli, Andrea Zaccaria, Luciano Pietronero","http://dx.doi.org/10.1103/PhysRevResearch.3.013084, http://arxiv.org/abs/1911.04844v3, http://arxiv.org/pdf/1911.04844v3",physics.soc-ph
33736,gn,"We provide an exact analytical solution of the Nash equilibrium for the $k$th
price auction by using inverses of distribution functions. As applications, we
identify the unique symmetric equilibrium where the valuations have polynomial,
fat-tailed, or exponential distributions.",Analytical solution of $k$th price auction,2019-11-11 11:09:02,"Martin Mihelich, Yan Shu","http://arxiv.org/abs/1911.04865v2, http://arxiv.org/pdf/1911.04865v2",econ.TH
33738,gn,"This paper studies a search problem where a consumer is initially aware of
only a few products. At every point in time, the consumer then decides between
searching among alternatives he is already aware of and discovering more
products. I show that the optimal policy for this search and discovery problem
is fully characterized by tractable reservation values. Moreover, I prove that
a predetermined index fully specifies the purchase decision of a consumer
following the optimal search policy. Finally, a comparison highlights
differences to classical random and directed search.",Optimal Search and Discovery,2019-11-18 20:16:33,Rafael P. Greminger,"http://arxiv.org/abs/1911.07773v5, http://arxiv.org/pdf/1911.07773v5",econ.TH
33739,gn,"We analyze a macroeconomic model with intergenerational equity considerations
and spatial spillovers, which gives rise to a multicriteria optimization
problem. Intergenerational equity requires adding to the definition of social
welfare a long-run sustainability criterion alongside the traditional
discounted utilitarian criterion. The spatial structure allows for the
possibility of heterogeneity, and spatial diffusion implies that all locations
within the spatial domain are interconnected via spatial spillovers. We rely on
different techniques (scalarization, the $\epsilon$-constraint method and goal
programming) to analyze such a spatial multicriteria problem, using numerical
approaches to illustrate the nature of the trade-off between the discounted
utilitarian and the sustainability criteria.",A Multicriteria Macroeconomic Model with Intertemporal Equity and Spatial Spillovers,2019-11-19 16:13:08,"Herb Kunze, Davide La Torre, Simone Marsiglio","http://arxiv.org/abs/1911.08247v1, http://arxiv.org/pdf/1911.08247v1",econ.TH
33740,gn,"A speculative agent with Prospect Theory preference chooses the optimal time
to purchase and then to sell an indivisible risky asset to maximize the
expected utility of the round-trip profit net of transaction costs. The
optimization problem is formulated as a sequential optimal stopping problem and
we provide a complete characterization of the solution. Depending on the
preference and market parameters, the optimal strategy can be ""buy and hold"",
""buy low sell high"", ""buy high sell higher"" or ""no trading"". Behavioral
preference and market friction interact in a subtle way which yields surprising
implications for the agent's trading patterns. For example, increasing the
market entry fee does not necessarily curb speculative trading, but instead it
may induce a higher reference point under which the agent becomes more
risk-seeking and in turn is more likely to trade.","Speculative Trading, Prospect Theory and Transaction Costs",2019-11-22 19:08:46,"Alex S. L. Tse, Harry Zheng","http://arxiv.org/abs/1911.10106v4, http://arxiv.org/pdf/1911.10106v4",q-fin.MF
33741,gn,"When individuals in a social network learn about an unknown state from
private signals and neighbors' actions, the network structure often causes
information loss. We consider rational agents and Gaussian signals in the
canonical sequential social-learning problem and ask how the network changes
the efficiency of signal aggregation. Rational actions in our model are
log-linear functions of observations and admit a signal-counting interpretation
of accuracy. Networks where agents observe multiple neighbors but not their
common predecessors confound information, and even a small amount of
confounding can lead to much lower accuracy. In a class of networks where
agents move in generations and observe the previous generation, we quantify the
information loss with an aggregative efficiency index. Aggregative efficiency
is a simple function of network parameters: increasing in observations and
decreasing in confounding. Later generations contribute little additional
information, even with arbitrarily large generations.",Aggregative Efficiency of Bayesian Learning in Networks,2019-11-22 19:17:42,"Krishna Dasaratha, Kevin He","http://arxiv.org/abs/1911.10116v8, http://arxiv.org/pdf/1911.10116v8",econ.TH
33742,gn,"A mathematical model of measurement of the perception of well-being for
groups with increasing incomes, but proportionally unequal is proposed.
Assuming that welfare grows with own income and decreases with relative
inequality (income of the other concerning one's own), possible scenarios for
long-term behavior in welfare functions are concluded. Also, it is proved that
a high relative inequality (parametric definition) always implies the loss of
the self-perception of the well-being of the most disadvantaged group.",Income growth with high inequality implies loss of well-being: A mathematical model,2019-11-25 23:15:27,Fernando Córdova-Lepe,"http://arxiv.org/abs/1911.11205v1, http://arxiv.org/pdf/1911.11205v1",econ.GN
33743,gn,"Composite development indicators used in policy making often subjectively
aggregate a restricted set of indicators. We show, using dimensionality
reduction techniques, including Principal Component Analysis (PCA) and for the
first time information filtering and hierarchical clustering, that these
composite indicators miss key information on the relationship between different
indicators. In particular, the grouping of indicators via topics is not
reflected in the data at a global and local level. We overcome these issues by
using the clustering of indicators to build a new set of cluster driven
composite development indicators that are objective, data driven, comparable
between countries, and retain interpretability. We discuss their consequences for
informing policy makers about country development, comparing them with the top
PageRank indicators as a benchmark. Finally, we demonstrate that our new set of
composite development indicators outperforms the benchmark on a dataset
reconstruction task.",A new set of cluster driven composite development indicators,2019-11-25 23:43:25,"Anshul Verma, Orazio Angelini, Tiziana Di Matteo","http://arxiv.org/abs/1911.11226v3, http://arxiv.org/pdf/1911.11226v3",econ.GN
33744,gn,"We propose a new model for regulation to achieve AI safety: global regulatory
markets. We first sketch the model in general terms and provide an overview of
the costs and benefits of this approach. We then demonstrate how the model
might work in practice: responding to the risk of adversarial attacks on AI
models employed in commercial drones.",Regulatory Markets for AI Safety,2019-12-11 22:21:54,"Jack Clark, Gillian K. Hadfield","http://arxiv.org/abs/2001.00078v1, http://arxiv.org/pdf/2001.00078v1",cs.CY
33745,gn,"Using results from neurobiology on perceptual decision making and value-based
decision making, the problem of decision making between lotteries is
reformulated in an abstract space where uncertain prospects are mapped to
corresponding active neuronal representations. This mapping allows us to
maximize non-extensive entropy in the new space with some constraints instead
of a utility function. To achieve good agreements with behavioral data, the
constraints must include at least constraints on the weighted average of the
stimulus and on its variance. Both constraints are supported by the
adaptability of neuronal responses to an external stimulus. By analogy with
thermodynamic and information engines, we discuss the dynamics of choice
between two lotteries as they are being processed simultaneously in the brain
by rate equations that describe the transfer of attention between lotteries and
within the various prospects of each lottery. This model is able to give new
insights on risk aversion and on behavioral anomalies not accounted for by
Prospect Theory.",Entropic Decision Making,2020-01-01 04:37:20,Adnan Rebei,"http://arxiv.org/abs/2001.00122v1, http://arxiv.org/pdf/2001.00122v1",q-bio.NC
33917,th,"We present a necessary and sufficient condition for Alt's system to be
represented by a continuous utility function. Moreover, we present a necessary
and sufficient condition for this utility function to be concave. The latter
condition can be seen as an extension of Gossen's first law, and thus has an
economic interpretation. Together with the above results, we provide a
necessary and sufficient condition for Alt's utility to be continuously
differentiable.",An Axiom for Concavifiable Preferences in View of Alt's Theory,2021-02-14 23:27:05,Yuhki Hosoya,"http://dx.doi.org/10.1016/j.jmateco.2021.102583, http://arxiv.org/abs/2102.07237v3, http://arxiv.org/pdf/2102.07237v3",econ.TH
33746,gn,"Multiple studies have documented racial, gender, political ideology, or
ethnic biases in comparative judicial systems. Supplementing this literature,
we investigate whether judges rule cases differently when one of the litigants
is a politician. We suggest a theory of power collusion, according to which
judges might use rulings to buy cooperation or threaten members of the other
branches of government. We test this theory using a sample of small claims
cases in the state of S\~ao Paulo, Brazil, where no collusion should exist. The
results show a negative bias of 3.7 percentage points against litigant
politicians, indicating that judges punish, rather than favor, politicians in
court. This punishment in low-salience cases serves as a warning sign for
politicians not to cross the judiciary when exercising checks and balances,
suggesting yet another barrier to judicial independence in development
settings.",Judicial Favoritism of Politicians: Evidence from Small Claims Court,2020-01-03 19:57:27,"Andre Assumpcao, Julio Trecenti","http://arxiv.org/abs/2001.00889v2, http://arxiv.org/pdf/2001.00889v2",econ.GN
33747,gn,"Nyman and Ormerod (2017) show that the machine learning technique of random
forests has the potential to give early warning of recessions. Applying the
approach to a small set of financial variables and replicating as far as
possible a genuine ex ante forecasting situation, over the period since 1990
the accuracy of the four-step-ahead predictions is distinctly superior to that
of the predictions actually made by the professional forecasters. Here we extend the analysis by
examining the contributions made to the Great Recession of the late 2000s by
each of the explanatory variables. We disaggregate private sector debt into its
household and non-financial corporate components. We find that both household
and non-financial corporate debt were key determinants of the Great Recession.
We find a considerable degree of non-linearity in the explanatory models. In
contrast, the public sector debt to GDP ratio appears to have made very little
contribution. It did rise sharply during the Great Recession, but this was as a
consequence of the sharp fall in economic activity rather than it being a
cause. We obtain similar results for both the United States and the United
Kingdom.",Understanding the Great Recession Using Machine Learning Algorithms,2020-01-02 18:30:52,"Rickard Nyman, Paul Ormerod","http://arxiv.org/abs/2001.02115v1, http://arxiv.org/pdf/2001.02115v1",econ.GN
33748,gn,"This paper uses stop-level passenger count data in four cities to understand
the nation-wide bus ridership decline between 2012 and 2018. The local
characteristics associated with ridership change are evaluated in Portland,
Miami, Minneapolis/St-Paul, and Atlanta. Poisson models explain ridership as a
cross-section and the change thereof as a panel. While controlling for the
change in frequency, jobs, and population, the correlation with local
socio-demographic characteristics are investigated using data from the American
Community Survey. The effect of changing neighborhood demographics on bus
ridership are modeled using Longitudinal Employer-Household Dynamics data. At a
point in time, neighborhoods with high proportions of non-white, carless, and
most significantly, high-school-educated residents are the most likely to have
high ridership. Over time, white neighborhoods are losing the most ridership
across all four cities. In Miami and Atlanta, places with high concentrations
of residents with college education and without access to a car also lose
ridership at a faster rate. In Minneapolis/St-Paul, the proportion of
college-educated residents is linked to ridership gain. The sign and
significance of these results remain consistent even when controlling for
intra-urban migration. Although bus ridership is declining across neighborhood
characteristics, these results suggest that the underlying cause of bus
ridership decline must be primarily affecting the travel behavior of white bus
riders.",Whos Ditching the Bus?,2020-01-07 20:56:43,"Simon J. Berrebi, Kari E. Watkins","http://dx.doi.org/10.1016/j.tra.2020.02.016, http://arxiv.org/abs/2001.02200v3, http://arxiv.org/pdf/2001.02200v3",physics.soc-ph
33749,gn,"Emerging autonomous vehicles (AV) can either supplement the public
transportation (PT) system or compete with it. This study examines the
competitive perspective where both AV and PT operators are profit-oriented with
dynamic adjustable supply strategies under five regulatory structures regarding
whether the AV operator is allowed to change the fleet size and whether the PT
operator is allowed to adjust headway. Four out of the five scenarios are
constrained competition while the other one focuses on unconstrained
competition to find the Nash Equilibrium. We evaluate the competition process
as well as the system performance from the standpoints of four stakeholders --
the AV operator, the PT operator, passengers, and the transport authority. We
also examine the impact of PT subsidies on the competition results including
both demand-based and supply-based subsidies. A heuristic algorithm is proposed
to update supply strategies for AV and PT based on the operators' historical
actions and profits. An agent-based simulation model is implemented in the
first-mile scenario in Tampines, Singapore. We find that the competition can
result in higher profits and higher system efficiency for both operators
compared to the status quo. After the supply updates, the PT services are
spatially concentrated on shorter routes feeding directly to the subway station
and temporally concentrated in peak hours. On average, the competition reduces
the travel time of passengers but increases their travel costs. Nonetheless,
the generalized travel cost is reduced when incorporating the value of time.
With respect to the system efficiency, the bus supply adjustment increases the
average vehicle load and reduces the total vehicle kilometer traveled measured
by the passenger car equivalent (PCE), while the AV supply adjustment does the
opposite.",Competition between shared autonomous vehicles and public transit: A case study in Singapore,2020-01-09 22:34:54,"Baichuan Mo, Zhejing Cao, Hongmou Zhang, Yu Shen, Jinhua Zhao","http://dx.doi.org/10.1016/j.trc.2021.103058, http://arxiv.org/abs/2001.03197v3, http://arxiv.org/pdf/2001.03197v3",physics.soc-ph
33750,gn,"What resources and technologies are strategic? This question is often the
focus of policy and theoretical debates, where the label ""strategic"" designates
those assets that warrant the attention of the highest levels of the state. But
these conversations are plagued by analytical confusion, flawed heuristics, and
the rhetorical use of ""strategic"" to advance particular agendas. We aim to
improve these conversations through conceptual clarification, introducing a
theory based on important rivalrous externalities for which socially optimal
behavior will not be produced alone by markets or individual national security
entities. We distill and theorize the most important three forms of these
externalities, which involve cumulative-, infrastructure-, and
dependency-strategic logics. We then employ these logics to clarify three
important cases: the Avon 2 engine in the 1950s, the U.S.-Japan technology
rivalry in the late 1980s, and contemporary conversations about artificial
intelligence.",The Logic of Strategic Assets: From Oil to Artificial Intelligence,2020-01-10 01:16:05,"Jeffrey Ding, Allan Dafoe","http://dx.doi.org/10.1080/09636412.2021.1915583, http://arxiv.org/abs/2001.03246v2, http://arxiv.org/pdf/2001.03246v2",econ.GN
33927,th,"We provide an analysis of the recent work by Malaney-Weinstein on ""Economics
as Gauge Theory"" presented on November 10, 2021 at the Money and Banking
Workshop hosted by University of Chicago. In particular, we distill the
technical mathematics used in their work into a form more suitable to a wider
audience. Furthermore, we resolve the conjectures posed by Malaney-Weinstein,
revealing that they provide no discernible value for the calculation of index
numbers or rates of inflation. Our conclusion is that the main contribution of
the Malaney-Weinstein work is that it provides a striking example of how to
obscure simple concepts through an uneconomical use of gauge theory.",A Response to Economics as Gauge Theory,2021-12-07 05:52:28,Timothy Nguyen,"http://arxiv.org/abs/2112.03460v2, http://arxiv.org/pdf/2112.03460v2",econ.TH
33751,gn,"Some consumers, particularly households, are unwilling to face volatile
electricity prices, and they can perceive price differentiation within the same
local area as unfair. For these reasons, nodal prices in distribution networks
are rarely employed. However, the increasing availability of renewable
resources and emerging price-elastic behaviours pave the way for the effective
introduction of marginal nodal pricing schemes in distribution networks. The
aim of the proposed framework is to show how traditional non-flexible consumers
can coexist with flexible users in a local distribution area. Flexible users
will pay nodal prices, whereas non-flexible consumers will be charged a fixed
price derived from the underlying nodal prices. Moreover, the developed
approach shows how a distribution system operator should manage the local grid
by optimally determining the lines to be expanded, and the collected network
tariff levied on grid users, while accounting for both congestion rent and
investment costs. The proposed model is formulated as a non-linear integer
bilevel program, which is then recast as an equivalent single optimization
problem, by using integer algebra and complementarity relations. The power
flows in the distribution area are modelled by resorting to a second-order cone
relaxation, whose solution is exact for radial networks under mild assumptions.
The final model results in a mixed-integer quadratically constrained program,
which can be solved with off-the-shelf solvers. Numerical test cases based on
both 5-bus and 33-bus networks are reported to show the effectiveness of the
proposed method.",Electricity prices and tariffs to keep everyone happy: a framework for fixed and nodal prices coexistence in distribution grids with optimal tariffs for investment cost recovery,2020-01-09 00:17:22,"Iacopo Savelli, Thomas Morstyn","http://dx.doi.org/10.1016/j.omega.2021.102450, http://arxiv.org/abs/2001.04283v2, http://arxiv.org/pdf/2001.04283v2",econ.GN
33752,gn,"We consider an economic geography model with two inter-regional proximity
structures: one governing goods trade and the other governing production
externalities across regions. We investigate how the introduction of the latter
affects the timing of endogenous agglomeration and the spatial distribution of
workers across regions. As transportation costs decline, the economy undergoes
a progressive dispersion process. Mono-centric agglomeration emerges when
inter-regional trade and/or production externalities incur high transportation
costs, while uniform dispersion occurs when these costs become negligibly small
(i.e., when distance dies). In multi-regional geography, the network structure
of production externalities can determine the geographical distribution of
workers as economic integration increases. If production externalities are
governed solely by geographical distance, a mono-centric spatial distribution
emerges in the form of suburbanization. However, if geographically distant
pairs of regions are connected through tight production linkages, multi-centric
spatial distribution can be sustainable.",Production externalities and dispersion process in a multi-region economy,2020-01-15 04:18:39,"Minoru Osawa, José M. Gaspar","http://arxiv.org/abs/2001.05095v2, http://arxiv.org/pdf/2001.05095v2",econ.GN
33754,gn,"Recently dozens of school districts and college admissions systems around the
world have reformed their admission rules. As a main motivation for these
reforms the policymakers cited strategic flaws of the rules: students had
strong incentives to game the system, which caused dramatic consequences for
non-strategic students. However, almost none of the new rules were
strategy-proof. We explain this puzzle. We show that after the reforms the
rules became more immune to strategic admissions: each student received a
smaller set of schools that he can get into by using a strategy, weakening
incentives to manipulate. Simultaneously, the admission to each school became
strategy-proof to a larger set of students, making the schools more available
for non-strategic students. We also show that the existing explanation of the
puzzle due to Pathak and S\""onmez (2013) is incomplete.",Comparing School Choice and College Admission Mechanisms By Their Immunity to Strategic Admissions,2020-01-17 09:38:51,"Somouaoga Bonkoungou, Alexander S. Nesterov","http://arxiv.org/abs/2001.06166v2, http://arxiv.org/pdf/2001.06166v2",econ.TH
33755,gn,"The purpose of this paper is to use the votes cast at the 2019 European
elections held in the United Kingdom to re-visit the analysis conducted subsequent
to its 2016 European Union referendum vote. This exercise provides a staging
post on public opinion as the United Kingdom moves to leave the European Union
during 2020. A compositional data analysis in a seemingly unrelated regression
framework is adopted that respects the compositional nature of the vote
outcome; each outcome is a share, the shares add up to 100%, and each outcome is
related to the alternatives. Contemporary explanatory data for each counting
area is sourced from the themes of socio-demographics, employment, life
satisfaction and place. The study finds that there are still strong and stark
divisions in the United Kingdom, defined by age, qualifications, employment and
place. The use of a compositional analysis approach produces challenges with
regard to the interpretation of these models, but marginal plots are seen to
aid the interpretation somewhat.",Who voted for a No Deal Brexit? A Composition Model of Great Britain's 2019 European Parliamentary Elections,2020-01-16 14:38:44,Stephen Clark,"http://arxiv.org/abs/2001.06548v1, http://arxiv.org/pdf/2001.06548v1",physics.soc-ph
33779,th,"In this paper, we build a computational model for the analysis of
international wheat spot price formation, its dynamics and the dynamics of
internationally exchanged quantities. The model has been calibrated using
FAOSTAT data to evaluate its in-sample predictive power. The model is able to
generate wheat prices in twelve international markets and quantities of wheat used
in twenty-four world regions. The time span considered goes from 1992 to 2013.
In our study, particular attention was paid to the impact of the Russian
Federation's 2010 grain export ban on the wheat price and internationally
traded quantities. Among other results, we find that the weighted average world
wheat price in 2013 would have been 3.55\% lower than the observed one had the
Russian Federation not imposed the export ban in 2010.",Investigating Wheat Price with a Multi-Agent Model,2018-07-27 14:18:19,"Gianfranco Giulioni, Edmondo Di Giuseppe, Massimiliano Pasqui, Piero Toscano, Francesco Miglietta","http://arxiv.org/abs/1807.10537v1, http://arxiv.org/pdf/1807.10537v1",econ.TH
33756,gn,"Smart cities that make broad use of digital technologies have been touted as
possible solutions for the population pressures faced by many cities in
developing countries and may help meet the rising demand for services and
infrastructure. Nevertheless, the high financial cost involved in
infrastructure maintenance, the substantial size of the informal economies, and
various governance challenges are curtailing government idealism regarding
smart cities. This review examines the state of smart city development in
developing countries, which includes understanding the conceptualisations,
motivations, and unique drivers behind (and barriers to) smart city
development. A total of 56 studies were identified through a systematic
literature review, drawn from an initial pool of 3928 social sciences
publications retrieved from two academic databases. Data were analysed using
thematic synthesis and
thematic analysis. The review found that technology-enabled smart cities in
developing countries can only be realised when concurrent socioeconomic, human,
legal, and regulatory reforms are instituted. Governments need to step up their
efforts to fulfil the basic infrastructure needs of citizens, raise more
revenue, construct clear regulatory frameworks to mitigate the technological
risks involved, develop human capital, ensure digital inclusivity, and promote
environmental sustainability. A supportive ecosystem that encourages citizen
participation, nurtures start-ups, and promotes public-private partnerships
needs to be created to realise their smart city vision.",Smart City Governance in Developing Countries: A Systematic Literature Review,2020-01-28 08:24:38,"Si Ying Tan, Araz Taeihagh","http://dx.doi.org/10.3390/su12030899, http://arxiv.org/abs/2001.10173v1, http://arxiv.org/pdf/2001.10173v1",cs.CY
33757,gn,"Housing scholars stress the importance of the information environment in
shaping housing search behavior and outcomes. Rental listings have increasingly
moved online over the past two decades and, in turn, online platforms like
Craigslist are now central to the search process. Do these technology platforms
serve as information equalizers or do they reflect traditional information
inequalities that correlate with neighborhood sociodemographics? We synthesize
and extend analyses of millions of US Craigslist rental listings and find they
supply significantly different volumes, quality, and types of information in
different communities. Technology platforms have the potential to broaden,
diversify, and equalize housing search information, but they rely on landlord
behavior and, in turn, likely will not reach this potential without a
significant redesign or policy intervention. Smart cities advocates hoping to
build better cities through technology must critically interrogate technology
platforms and big data for systematic biases.",Housing Search in the Age of Big Data: Smarter Cities or the Same Old Blind Spots?,2020-01-31 01:10:29,"Geoff Boeing, Max Besbris, Ariela Schachter, John Kuk","http://dx.doi.org/10.1080/10511482.2019.1684336, http://arxiv.org/abs/2001.11585v1, http://arxiv.org/pdf/2001.11585v1",stat.AP
33758,gn,"Most national economies are linked by international trade. Consequently,
economic globalization forms a massive and complex economic network with strong
links, that is, interactions arising from increasing trade. Various interesting
collective motions are expected to emerge from strong economic interactions in
a global economy under trade liberalization. Among the various economic
collective motions, economic crises are our most intriguing problem. In our
previous studies, we have revealed that Kuramoto's coupled limit-cycle
oscillator model and the Ising-like spin model on networks are invaluable tools
for characterizing the economic crises. In this study, we develop a
mathematical theory to describe an interacting agent model that derives the
Kuramoto model and the Ising-like spin model by using appropriate
approximations. Our interacting agent model suggests phase synchronization and
spin ordering during economic crises. We confirm the emergence of the phase
synchronization and spin ordering during economic crises by analyzing various
economic time series data. We also develop a network reconstruction model based
on entropy maximization that considers the sparsity of the network. Here
network reconstruction means estimating a network's adjacency matrix from a
node's local information. The interbank network is reconstructed using the
developed model, and a comparison is made of the reconstructed network with the
actual data. We successfully reproduce the interbank network and the known
stylized facts. In addition, the exogenous shocks acting on an industry
community in a supply chain network and on the financial sector are estimated.
Estimating the exogenous shocks acting on communities in the real economy
within the supply chain network provides evidence of the channels of distress
propagating from the financial sector to the real economy through the supply
chain network.",An Interacting Agent Model of Economic Crisis,2020-01-30 07:53:32,Yuichi Ikeda,"http://arxiv.org/abs/2001.11843v1, http://arxiv.org/pdf/2001.11843v1",physics.soc-ph
33759,gn,"We propose a simple model where the innovation rate of a technological domain
depends on the innovation rate of the technological domains it relies on. Using
data on US patents from 1836 to 2017, we make out-of-sample predictions and
find that the predictability of innovation rates can be boosted substantially
when network effects are taken into account. In the case where a technology's
neighborhood's future innovation rates are known, the average predictability
gain is 28$\%$ compared to simpler time series models which do not incorporate
network effects. Even when nothing is known about the future, we find positive
average predictability gains of 20$\%$. The results have important policy
implications, suggesting that the effective support of a given technology must
take into account the technological ecosystem surrounding the targeted
technology.",Technological interdependencies predict innovation dynamics,2020-03-01 23:44:38,"Anton Pichler, François Lafond, J. Doyne Farmer","http://arxiv.org/abs/2003.00580v1, http://arxiv.org/pdf/2003.00580v1",physics.soc-ph
33760,gn,"We discuss various limits of a simple random exchange model that can be used
for the distribution of wealth. We start from a discrete state space - discrete
time version of this model and, under suitable scaling, we show its functional
convergence to a continuous space - discrete time model. Then, we show a
thermodynamic limit of the empirical distribution to the solution of a kinetic
equation of Boltzmann type. We solve this equation and we show that the
solutions coincide with the appropriate limits of the invariant measure for the
Markov chain. In this way we complete Boltzmann's program of deriving kinetic
equations from random dynamics for this simple model. Three families of
invariant measures for the mean field limit are discovered and we show that
only two of those families can be obtained as limits of the discrete system and
the third is extraneous. Finally, we cast our results in the framework of
integer partitions and strengthen some results already available in the
literature.",Continuum and thermodynamic limits for a simple random-exchange model,2020-03-02 17:22:46,"Bertram Düring, Nicos Georgiou, Sara Merino-Aceituno, Enrico Scalas","http://arxiv.org/abs/2003.00930v1, http://arxiv.org/pdf/2003.00930v1",math.PR
33761,gn,"We consider a market in which both suppliers and consumers compete for a
product via scalar-parameterized supply offers and demand bids.
Scalar-parameterized offers/bids are appealing due to their modeling simplicity
and desirable mathematical properties with the most prominent being bounded
efficiency loss and price markup under strategic interactions. Our model
incorporates production capacity constraints and minimum inelastic demand
requirements. Under perfect competition, the market mechanism yields
allocations that maximize social welfare. When market participants are
price-anticipating, we show that there exists a unique Nash equilibrium, and
provide an efficient way to compute the resulting market allocation. Moreover,
we explicitly characterize the bounds on the welfare loss and prices observed
at the Nash equilibrium.",A Scalar Parameterized Mechanism for Two-Sided Markets,2020-03-03 00:50:00,"Mariola Ndrio, Khaled Alshehri, Subhonmesh Bose","http://arxiv.org/abs/2003.01206v1, http://arxiv.org/pdf/2003.01206v1",econ.GN
33762,gn,"We study on topological properties of global supply chain network in terms of
degree distribution, hierarchical structure, and degree-degree correlation in
the global supply chain network. The global supply chain data is constructed by
collecting various company data from the web site of Standard & Poor's Capital
IQ platform in 2018. The in- and out-degree distributions are characterized by
a power law with in-degree exponent = 2.42 and out-degree exponent = 2.11. The
clustering coefficient decays as power law with an exponent = 0.46. The nodal
degree-degree correlation indicates the absence of assortativity. The Bow-tie
structure of the GWCC reveals that the OUT component is the largest and it
comprises 41.1% of total firms. The GSCC component comprises 16.4% of total
firms. We observe that the firms on the upstream or downstream sides are mostly
located a
few steps away from the GSCC. Furthermore, we uncover the community structure
of the network and characterize them according to their location and industry
classification. We observe that the largest community consists of consumer
discretionary sector mainly based in the US. These firms belong to the OUT
component in the bow-tie structure of the global supply chain network. Finally,
we confirm the validity for propositions S1 (short path length), S2 (power-law
degree distribution), S3 (high clustering coefficient), S4 (""fit-gets-richer""
growth mechanism), S5 (truncation of power-law degree distribution), and S7
(community structure with overlapping boundaries) in the global supply chain
network.",Bow-tie structure and community identification of global supply chain network,2020-03-05 00:56:12,"Abhijit Chakraborty, Yuichi Ikeda","http://dx.doi.org/10.1371/journal.pone.0239669, http://arxiv.org/abs/2003.02343v1, http://arxiv.org/pdf/2003.02343v1",physics.soc-ph
33763,gn,"Skill verification is a central problem in workforce hiring. Companies and
academia often face the difficulty of ascertaining the skills of an applicant
since the certifications of the skills claimed by a candidate are generally not
immediately verifiable and costly to test. Blockchains have been proposed in
the literature for skill verification and tamper-proof information storage in a
decentralized manner. However, most of these approaches deal with storing the
certificates issued by traditional universities on the blockchain. Among the
few techniques that consider the certification procedure itself, questions like
(a) scalability with limited staff, (b) uniformity of grades over multiple
evaluators, or (c) honest effort extraction from the evaluators are usually not
addressed. We propose a blockchain-based platform named SkillCheck, which
considers the questions above and ensures several desirable properties. The
platform incentivizes effort in grading via payments with tokens which it
generates from the payments of the users of the platform, e.g., the recruiters
and test-takers. We provide a detailed description of the design of the
platform along with the provable properties of the algorithm.",SkillCheck: An Incentive-based Certification System using Blockchains,2020-03-07 12:12:46,"Jay Gupta, Swaprava Nath","http://arxiv.org/abs/2003.03540v2, http://arxiv.org/pdf/2003.03540v2",cs.CR
33764,gn,"Empirical distributions of wealth and income can be reproduced using
simplified agent-based models of economic interactions, analogous to
microscopic collisions of gas particles. Building upon these models of freely
interacting agents, we explore the effect of a segregated economic network in
which interactions are restricted to those between agents of similar wealth.
Agents on a 2D lattice undergo kinetic exchanges with their nearest neighbours,
while continuously switching places to minimize local wealth differences. A
spatial concentration of wealth leads to a steady state with increased global
inequality and a magnified distinction between local and global measures of
combatting poverty. Individual saving propensity proves ineffective in the
segregated economy, while redistributive taxation transcends the spatial
inhomogeneity and greatly reduces inequality. Adding fluctuations to the
segregation dynamics, we observe a sharp phase transition to lower inequality
at a critical temperature, accompanied by a sudden change in the distribution
of the wealthy elite.",Effect of segregation on inequality in kinetic models of wealth exchange,2020-03-04 19:23:56,"Lennart Fernandes, Jacques Tempere","http://dx.doi.org/10.1140/epjb/e2020-100534-7, http://arxiv.org/abs/2003.04129v1, http://arxiv.org/pdf/2003.04129v1",physics.soc-ph
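A minimal sketch of the baseline kinetic wealth-exchange process that this abstract builds on: random pairs of agents pool a fraction of their wealth and split it at random, optionally after retaining a saving propensity. The lattice segregation and place-switching dynamics of the paper are not reproduced, and the parameter values are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
N, T, lam = 1000, 200_000, 0.0      # agents, pairwise exchanges, saving propensity
w = np.ones(N)                      # everyone starts with equal wealth

for _ in range(T):
    i, j = rng.choice(N, size=2, replace=False)
    eps = rng.random()
    pot = (1 - lam) * (w[i] + w[j])                 # wealth put at stake by the pair
    w[i], w[j] = lam * w[i] + eps * pot, lam * w[j] + (1 - eps) * pot

ws = np.sort(w)                                     # Gini coefficient of the steady state
gini = (2 * np.arange(1, N + 1) - N - 1) @ ws / (N * ws.sum())
print(round(gini, 3))

With lam = 0 the stationary distribution is approximately exponential (Gini near 0.5); a positive saving propensity lowers measured inequality in this unsegregated benchmark.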
33765,gn,"We develop a simple theoretic game a model to analyze the relationship
between electoral sys tems and governments' choice in trade policies. We show
that existence of international pressure or foreign lobby changes a
government's final decision on trade policy, and trade policy in countries with
proportional electoral system is more protectionist than in countries with
majoritarian electoral system. Moreover, lobbies pay more to affect the trade
policy outcomes in countries with proportional representation systems.",Electoral systems and international trade policy,2020-03-12 15:12:17,"Serkan Kucuksenel, Osman Gulseven","http://arxiv.org/abs/2003.05725v1, http://arxiv.org/pdf/2003.05725v1",econ.GN
33766,gn,"An economic model of crime is used to explore the consistent estimation of a
simultaneous linear equation without recourse to instrumental variables. A
maximum-likelihood procedure (NISE) is introduced, and its results are compared
to ordinary least squares and two-stage least squares. The paper is motivated
by previous research on the crime model and by the well-known practical problem
that valid instruments are frequently unavailable.",NISE Estimation of an Economic Model of Crime,2020-03-17 17:10:57,Eric Blankmeyer,"http://arxiv.org/abs/2003.07860v1, http://arxiv.org/pdf/2003.07860v1",econ.GN
33767,gn,"This article studies the interregional Greek road network (GRN) by applying
complex network analysis (CNA) and an empirical approach. The study aims to
extract the socioeconomic information immanent to the GRN's topology and to
interpret the way in which this road network serves and promotes the regional
development. The analysis shows that the topology of the GRN is subject to
spatial constraints, having lattice-like characteristics. Also, the GRN's
structure is described by a gravity pattern, where places of higher population
enjoy greater functionality, and its interpretation in regional terms
illustrates the elementary pattern expressed by regional development through
road construction. The study also reveals some interesting contradictions
in the comparison between metropolitan and non-metropolitan regions (excluding
Attica and Thessaloniki). Overall, the article highlights the effectiveness of
using complex network analysis in the modeling of spatial networks and in
particular of transportation systems and promotes the use of the network
paradigm in spatial and regional research.",Modeling of the Greek road transportation network using complex network analysis,2020-03-18 11:37:15,Dimitrios Tsiotas,"http://arxiv.org/abs/2003.08091v1, http://arxiv.org/pdf/2003.08091v1",physics.soc-ph
33795,th,"Many online shops offer functionality that help their customers navigate the
available alternatives. For instance, options to filter and to sort goods are
widespread. In this paper we show that sorting and filtering can be used by
rational consumers to find their most preferred choice -- quickly. We
characterize the preferences which can be expressed through filtering and
sorting and show that these preferences exhibit a simple and intuitive logical
structure.",Sorting and filtering as effective rational choice procedures,2018-09-18 17:24:59,"Paulo Oliva, Philipp Zahn","http://arxiv.org/abs/1809.06766v3, http://arxiv.org/pdf/1809.06766v3",econ.TH
33768,gn,"This article attempts to highlight the importance that transportation has in
the economic development of Greece and in particular the importance of the
transportation infrastructure and transportation networks, which constitute a
fixed, structured capital covering the whole of the country. For this purpose,
longitudinal and cross-sectoral statistical data are examined over a set of
fundamental macroeconomic measures and metrics. Furthermore, the study attempts
to highlight the structural and functional aspects composing the concept of
transportation networks and to highlight the necessity of their joint
consideration on the relevant research. The transportation networks that are
examined in this paper are the Greek road (GRN), rail (GRAN), maritime (GMN)
and air transport network (GAN), which are studied both in terms of their
geometry and technical characteristics, as well as of their historical, traffic
and political framework. For the empirical assessment of the transportation
networks' importance in Greece, an econometric model is constructed, expressing
the welfare level of the Greek regions as a multivariate function of their
transportation infrastructure and of their socioeconomic environment. The
further purpose of the article is to highlight, macroscopically, all the
aspects related to the study of transportation infrastructure and networks.",Transportation networks and their significance to economic development,2020-03-18 11:43:11,"Dimitrios Tsiotas, Martha Geraki, Spyros Niavis","http://arxiv.org/abs/2003.08094v1, http://arxiv.org/pdf/2003.08094v1",physics.soc-ph
33769,gn,"This article studies the Greek interregional commuting network (GRN) by using
measures and methods of complex network analysis and empirical techniques. The
study aims to detect structural characteristics of the commuting phenomenon,
which are configured by the functionality of the land transport
infrastructures, and to interpret how this network serves and promotes the
regional development. In the empirical analysis, a multiple linear regression
model for the number of commuters is constructed, which is based on the
conceptual framework of the term network, in an effort to promote the
interdisciplinary dialogue. The analysis highlights the effect of the spatial
constraints on the network's structure, provides information on the major road
transport infrastructure projects that were constructed recently and influenced the
country's capacity, and outlines a gravity pattern describing the commuting
phenomenon, which expresses that cities of high population attract large
volumes of commuting activity within their boundaries, a fact that contributes
to the reduction of their outgoing commuting and consequently to the increase
of their inbound productivity. Overall, this paper highlights the effectiveness
of complex network analysis in the modeling of spatial and particularly of
transportation networks and promotes the use of the network paradigm in
spatial and regional research.",The commuting phenomenon as a complex network: The case of Greece,2020-03-18 11:45:39,"Dimitrios Tsiotas, Konstantinos Raptopoulos","http://arxiv.org/abs/2003.08096v1, http://arxiv.org/pdf/2003.08096v1",physics.soc-ph
33770,gn,"The Erasmus Program (EuRopean community Action Scheme for the Mobility of
University Students), the most important student exchange program in the world,
financed by the European Union and started in 1987, is characterized by a
strong gender bias. Girls participate in the program more than boys. This work
quantifies the gender bias in the Erasmus program between 2008 and 2013, using
novel data at the university level. It describes the structure of the program
in great detail, carrying out the analysis across fields of study, and
identifies key universities as senders and receivers. In addition, it tests the
difference in the degree distribution of the Erasmus network over time and
between genders, giving evidence of a greater density in the female Erasmus
network with respect to that of the male Erasmus network.",Gender bias in the Erasmus students network,2020-03-20 12:57:24,"Luca De Benedictis, Silvia Leoni","http://dx.doi.org/10.1007/s41109-020-00297-9, http://arxiv.org/abs/2003.09167v1, http://arxiv.org/pdf/2003.09167v1",physics.soc-ph
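A minimal sketch of one of the comparisons mentioned above: a two-sample Kolmogorov-Smirnov test on degree sequences. The two random graphs below merely stand in for the female and male Erasmus networks; they are assumptions, not the paper's data.

import networkx as nx
from scipy.stats import ks_2samp

G_female = nx.gnm_random_graph(200, 900, seed=1)   # stand-in for the female Erasmus network
G_male = nx.gnm_random_graph(200, 700, seed=2)     # stand-in for the male Erasmus network

deg_f = [d for _, d in G_female.degree()]
deg_m = [d for _, d in G_male.degree()]
stat, pval = ks_2samp(deg_f, deg_m)                # reject equality of distributions if pval is small
print(stat, pval)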
33771,gn,"A kernel density is an aggregate of kernel functions, which are itself
densities and could be kernel densities. This is used to decompose a kernel
into its constituent parts. Pearson's test for equality of proportions is
applied to quantiles to test whether the component distributions differ from
one another. The proposed methods are illustrated with a meta-analysis of the
social cost of carbon. Different discount rates lead to significantly different
Pigou taxes, but not different growth rates. Estimates have not varied over
time. Different authors have contributed different estimates, but these
differences are insignificant. Kernel decomposition can be applied in many
other fields with discrete explanatory variables.",Kernel density decomposition with an application to the social cost of carbon,2020-03-20 16:44:53,Richard S. J. Tol,"http://arxiv.org/abs/2003.09276v1, http://arxiv.org/pdf/2003.09276v1",stat.ME
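A minimal sketch (not the paper's code) of the decomposition idea: the pooled kernel density is the observation-share-weighted sum of group-wise kernel densities, and a Pearson chi-squared test compares the groups' shares around a quantile. The synthetic low- and high-discount-rate groups are illustrative assumptions, and the weighted components sum exactly to the pooled density only when a common bandwidth is used.

import numpy as np
from scipy.stats import gaussian_kde, chi2_contingency

rng = np.random.default_rng(1)
est_low = rng.normal(50, 15, size=40)      # stand-in estimates under a low discount rate
est_high = rng.normal(150, 40, size=60)    # stand-in estimates under a high discount rate
all_est = np.concatenate([est_low, est_high])

grid = np.linspace(all_est.min(), all_est.max(), 400)
kde_all = gaussian_kde(all_est)(grid)                       # pooled kernel density

w_low = len(est_low) / len(all_est)                         # observation shares
w_high = len(est_high) / len(all_est)
comp_low = w_low * gaussian_kde(est_low)(grid)              # component densities
comp_high = w_high * gaussian_kde(est_high)(grid)

med = np.median(all_est)                                    # Pearson test at the pooled median
table = [[np.sum(est_low < med), np.sum(est_low >= med)],
         [np.sum(est_high < med), np.sum(est_high >= med)]]
print(chi2_contingency(table)[1])                           # p-value for equal proportions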
33772,gn,"Given the importance of public support for policy change and implementation,
public policymakers and researchers have attempted to understand the factors
associated with this support for climate change mitigation policy. In this
article, we compare the feasibility of using different supervised learning
methods for regression using a novel socio-economic data set which measures
public support for potential climate change mitigation policies. Following this
model selection, we utilize gradient boosting regression, a well-known
technique in the machine learning community, but relatively uncommon in public
policy and public opinion research, and seek to understand what factors among
the several examined in previous studies are most central to shaping public
support for mitigation policies in climate change studies. The use of this
method provides novel insights into the most important factors for public
support for climate change mitigation policies. Using national survey data, we
find that the perceived risks associated with climate change are more decisive
for shaping public support for policy options promoting renewable energy and
regulating pollutants. However, we observe a very different behavior related to
public support for increasing the use of nuclear energy where climate change
risk perception is no longer the sole decisive feature. Our findings indicate
that public support for renewable energy is inherently different from that for
nuclear energy reliance, with the risk perception of climate change, dominant
for the former, playing a subdued role for the latter.",Determining feature importance for actionable climate change mitigation policies,2020-03-19 04:06:52,"Romit Maulik, Junghwa Choi, Wesley Wehde, Prasanna Balaprakash","http://arxiv.org/abs/2003.10234v1, http://arxiv.org/pdf/2003.10234v1",stat.OT
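A minimal sketch of the modelling step described above, not the authors' pipeline: fit a gradient boosting regressor on survey features and rank its impurity-based feature importances. The feature names and synthetic data are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(size=n),   # assumed feature: climate change risk perception
    rng.normal(size=n),   # assumed feature: political ideology
    rng.normal(size=n),   # assumed feature: household income
])
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)   # stand-in policy support score

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X, y)
for name, imp in zip(["risk_perception", "ideology", "income"], model.feature_importances_):
    print(f"{name:16s} {imp:.3f}")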
33796,th,"I consider the monopolistic pricing of informational good. A buyer's
willingness to pay for information is from inferring the unknown payoffs of
actions in decision making. A monopolistic seller and the buyer each observes a
private signal about the payoffs. The seller's signal is binary and she can
commit to sell any statistical experiment of her signal to the buyer. Assuming
that buyer's decision problem involves rich actions, I characterize the profit
maximizing menu. It contains a continuum of experiments, each containing a
different amount of information. I also find a complementarity between the buyer's
private information and information provision: when the buyer's private signal is
more informative, the optimal menu contains more informative experiments.",Selling Information,2018-09-18 17:35:49,Weijie Zhong,"http://arxiv.org/abs/1809.06770v2, http://arxiv.org/pdf/1809.06770v2",econ.TH
33773,gn,"The recent emergence of a new coronavirus, COVID-19, has gained extensive
coverage in public media and global news. As of 24 March 2020, the virus has
caused viral pneumonia in tens of thousands of people in Wuhan, China, and
thousands of cases in 184 other countries and territories. This study explores
the potential use of Google Trends (GT) to monitor worldwide interest in this
COVID-19 epidemic. GT was chosen as a source of reverse engineering data, given
the interest in the topic. Current data on COVID-19 is retrieved from GT
using one main search topic: Coronavirus. Geographical settings for GT are
worldwide, China, South Korea, Italy and Iran. The reported period is 15
January 2020 to 24 March 2020. The results show that the highest worldwide peak
in the first wave of demand for information was on 31 January 2020. After the
first peak, the number of new cases reported daily rose for 6 days. A second
wave started on 21 February 2020 after the outbreaks were reported in Italy,
with the highest peak on 16 March 2020. The second wave is six times as big as
the first wave. The number of new cases reported daily is rising day by day.
This short communication gives a brief introduction to how the demand for
information on coronavirus epidemic is reported through GT.","The Second Worldwide Wave of Interest in Coronavirus since the COVID-19 Outbreaks in South Korea, Italy and Iran: A Google Trends Study",2020-03-24 20:33:59,Artur Strzelecki,"http://dx.doi.org/10.1016/j.bbi.2020.04.042, http://arxiv.org/abs/2003.10998v3, http://arxiv.org/pdf/2003.10998v3",cs.CY
33774,gn,"This work investigates whether and how COVID-19 containment policies had an
immediate impact on crime trends in Los Angeles. The analysis is conducted
using Bayesian structural time-series and focuses on nine crime categories and
on the overall crime count, daily monitored from January 1st 2017 to March 28th
2020. We concentrate on two post-intervention time windows - from March 4th to
March 16th and from March 4th to March 28th 2020 - to dynamically assess the
short-term effects of mild and strict policies. In Los Angeles, overall crime
has significantly decreased, as well as robbery, shoplifting, theft, and
battery. No significant effect has been detected for vehicle theft, burglary,
assault with a deadly weapon, intimate partner assault, and homicide. Results
suggest that, in the first weeks after the interventions are put in place,
social distancing impacts more directly on instrumental and less serious
crimes. Policy implications are also discussed.",Exploring the Effects of COVID-19 Containment Policies on Crime: An Empirical Analysis of the Short-term Aftermath in Los Angeles,2020-03-24 01:36:21,"Gian Maria Campedelli, Alberto Aziani, Serena Favarin","http://dx.doi.org/10.1007/s12103-020-09578-6, http://arxiv.org/abs/2003.11021v3, http://arxiv.org/pdf/2003.11021v3",stat.OT
33775,gn,"I estimate the Susceptible-Infected-Recovered (SIR) epidemic model for
Coronavirus Disease 2019 (COVID-19). The transmission rate is heterogeneous
across countries and far exceeds the recovery rate, which enables a fast
spread. In the benchmark model, 28% of the population may be simultaneously
infected at the peak, potentially overwhelming the healthcare system. The peak
reduces to 6.2% under the optimal mitigation policy that controls the timing
and intensity of social distancing. A stylized asset pricing model suggests
that the stock price temporarily decreases by 50% in the benchmark case but
shows a W-shaped, moderate but longer bear market under the optimal policy.",Susceptible-Infected-Recovered (SIR) Dynamics of COVID-19 and Economic Impact,2020-03-25 08:00:17,Alexis Akira Toda,"http://arxiv.org/abs/2003.11221v2, http://arxiv.org/pdf/2003.11221v2",q-bio.PE
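A minimal sketch of the SIR dynamics the abstract estimates, using a forward-Euler discretization; the transmission rate, recovery rate and initial conditions below are illustrative assumptions, not the paper's estimates.

def sir_peak(beta, gamma, s0=0.999, i0=0.001, dt=0.1, days=365):
    # Track the peak share of the population simultaneously infected
    s, i = s0, i0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt     # S -> I flow
        new_rec = gamma * i * dt        # I -> R flow
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

print(sir_peak(beta=0.5, gamma=0.1))    # fast spread: large simultaneous infection peak
print(sir_peak(beta=0.2, gamma=0.1))    # mitigated transmission: much flatter peak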
33776,gn,"When multiple innovations compete for adoption, historical chance leading to
early advantage can generate lock-in effects that allow suboptimal innovations
to succeed at the expense of superior alternatives. Research on the diffusion
of innovation has identified many possible sources of early advantage, but
these mechanisms can benefit both optimal and suboptimal innovations. This
paper moves beyond chance-as-explanation to identify structural principles that
systematically impact the likelihood that the optimal strategy will spread. A
formal model of innovation diffusion shows that the network structure of
organizational relationships can systematically impact the likelihood that
widely adopted innovations will be payoff optimal. Building on prior diffusion
research, this paper focuses on the role of central actors, i.e. well-connected
people or firms. While contagion models of diffusion highlight the benefits of
central actors for spreading innovations further and faster, the present
analysis reveals a dark side to this influence: the mere presence of central
actors in a network increases rates of adoption but also increases the
likelihood of suboptimal outcomes. This effect, however, does not represent a
speed-optimality tradeoff, as dense networks are both fast and optimal. This
finding is consistent with related research showing that network centralization
undermines collective intelligence.",Network Structure and Collective Intelligence in the Diffusion of Innovation,2020-03-26 22:05:17,Joshua Becker,"http://arxiv.org/abs/2003.12112v4, http://arxiv.org/pdf/2003.12112v4",physics.soc-ph
33777,th,"We add the assumption that players know their opponents' payoff functions and
rationality to a model of non-equilibrium learning in signaling games. Agents
are born into player roles and play against random opponents every period.
Inexperienced agents are uncertain about the prevailing distribution of
opponents' play, but believe that opponents never choose conditionally
dominated strategies. Agents engage in active learning and update beliefs based
on personal observations. Payoff information can refine or expand learning
predictions, since patient young senders' experimentation incentives depend on
which receiver responses they deem plausible. We show that with payoff
knowledge, the limiting set of long-run learning outcomes is bounded above by
rationality-compatible equilibria (RCE), and bounded below by uniform RCE. RCE
refine the Intuitive Criterion (Cho and Kreps, 1987) and include all divine
equilibria (Banks and Sobel, 1987). Uniform RCE sometimes but not always
exists, and implies universally divine equilibrium.",Payoff Information and Learning in Signaling Games,2017-08-31 23:49:22,"Drew Fudenberg, Kevin He","http://dx.doi.org/10.1016/j.geb.2019.11.011, http://arxiv.org/abs/1709.01024v5, http://arxiv.org/pdf/1709.01024v5",econ.TH
33778,th,"Player-Compatible Equilibrium (PCE) imposes cross-player restrictions on the
magnitudes of the players' ""trembles"" onto different strategies. These
restrictions capture the idea that trembles correspond to deliberate
experiments by agents who are unsure of the prevailing distribution of play.
PCE selects intuitive equilibria in a number of examples where trembling-hand
perfect equilibrium (Selten, 1975) and proper equilibrium (Myerson, 1978) have
no bite. We show that rational learning and weighted fictitious play imply our
compatibility restrictions in a steady-state setting.",Player-Compatible Learning and Player-Compatible Equilibrium,2017-12-24 21:24:19,"Drew Fudenberg, Kevin He","http://dx.doi.org/10.1016/j.jet.2021.105238, http://arxiv.org/abs/1712.08954v8, http://arxiv.org/pdf/1712.08954v8",econ.TH
33780,th,"The principle that rational agents should maximize expected utility or
choiceworthiness is intuitively plausible in many ordinary cases of
decision-making under uncertainty. But it is less plausible in cases of
extreme, low-probability risk (like Pascal's Mugging), and intolerably
paradoxical in cases like the St. Petersburg and Pasadena games. In this paper
I show that, under certain conditions, stochastic dominance reasoning can
capture most of the plausible implications of expectational reasoning while
avoiding most of its pitfalls. Specifically, given sufficient background
uncertainty about the choiceworthiness of one's options, many
expectation-maximizing gambles that do not stochastically dominate their
alternatives ""in a vacuum"" become stochastically dominant in virtue of that
background uncertainty. But, even under these conditions, stochastic dominance
will not require agents to accept options whose expectational superiority
depends on sufficiently small probabilities of extreme payoffs. The sort of
background uncertainty on which these results depend looks unavoidable for any
agent who measures the choiceworthiness of her options in part by the total
amount of value in the resulting world. At least for such agents, then,
stochastic dominance offers a plausible general principle of choice under
uncertainty that can explain more of the apparent rational constraints on such
choices than has previously been recognized.",Exceeding Expectations: Stochastic Dominance as a General Decision Theory,2018-07-28 08:46:34,Christian Tarsney,"http://arxiv.org/abs/1807.10895v5, http://arxiv.org/pdf/1807.10895v5",econ.TH
33781,th,"The purpose is to compare the perfect Stochastic Return (SR) model like
Islamic banks to the Fixed Return (FR) model as in conventional banks by
measuring up their impacts at the macroeconomic level. We prove that if the
optimal choice of investor share in SR model {\alpha}* realizes the
indifference of the financial institution toward SR and FR models, there exists
{\alpha} less than {\alpha}* such that the banks strictly prefers the SR model.
Also, there exists {\alpha}, {\gamma} and {\lambda} verifying the conditions of
{\alpha}-sharing such that each party in economy can be better under the SR
model and the economic welfare could be improved in a Pareto-efficient way.",Banking Stability System: Does it Matter if the Rate of Return is Fixed or Stochastic?,2018-07-29 22:35:24,Hassan Ghassan,"http://arxiv.org/abs/1807.11102v1, http://arxiv.org/pdf/1807.11102v1",econ.TH
33782,th,"An experimenter seeks to learn a subject's preference relation. The
experimenter produces pairs of alternatives. For each pair, the subject is
asked to choose. We argue that, in general, large but finite data do not give
close approximations of the subject's preference, even when the limiting
(countably infinite) data are enough to infer the preference perfectly. We
provide sufficient conditions on the set of alternatives, preferences, and
sequences of pairs so that the observation of finitely many choices allows the
experimenter to learn the subject's preference with arbitrary precision. While
preferences can be identified under our sufficient conditions, we show that it
is harder to identify utility functions. We illustrate our results with several
examples, including consumer choice, expected utility, and preferences in the
Anscombe-Aumann model.",Preference Identification,2018-07-31 00:14:36,"Christopher P. Chambers, Federico Echenique, Nicolas S. Lambert","http://arxiv.org/abs/1807.11585v1, http://arxiv.org/pdf/1807.11585v1",econ.TH
33783,th,"A strictly strategy-proof mechanism is one that asks agents to use strictly
dominant strategies. In the canonical one-dimensional mechanism design setting
with private values, we show that strict strategy-proofness is equivalent to
strict monotonicity plus the envelope formula, echoing a well-known
characterisation of (weak) strategy-proofness. A consequence is that
strategy-proofness can be made strict by an arbitrarily small modification, so
that strictness is 'essentially for free'.",Strictly strategy-proof auctions,2018-07-31 18:25:47,"Matteo Escudé, Ludvig Sinander","http://dx.doi.org/10.1016/j.mathsocsci.2020.07.002, http://arxiv.org/abs/1807.11864v4, http://arxiv.org/pdf/1807.11864v4",econ.TH
33784,th,"Dynamic Random Subjective Expected Utility (DR-SEU) allows to model choice
data observed from an agent or a population of agents whose beliefs about
objective payoff-relevant states and tastes can both evolve stochastically. Our
observable, the augmented Stochastic Choice Function (aSCF) allows, in contrast
to previous work in decision theory, for a direct test of whether the agent's
beliefs reflect the true data-generating process conditional on their private
information as well as identification of the possibly incorrect beliefs. We
give an axiomatic characterization of when an agent satisfies the model, both
in a static as well as in a dynamic setting. We look at the case when the agent
has correct beliefs about the evolution of objective states as well as at the
case when her beliefs are incorrect but unforeseen contingencies are
impossible.
  We also distinguish two subvariants of the dynamic model which coincide in
the static setting: Evolving SEU, where a sophisticated agent's utility evolves
according to a Bellman equation and Gradual Learning, where the agent is
learning about her taste. We prove easy and natural comparative statics results
on the degree of belief incorrectness as well as on the speed of learning about
taste.
  Auxiliary results contained in the online appendix extend previous decision
theory work in the menu choice and stochastic choice literature from a
technical as well as a conceptual perspective.",Dynamic Random Subjective Expected Utility,2018-08-01 15:29:15,Jetlir Duraj,"http://arxiv.org/abs/1808.00296v1, http://arxiv.org/pdf/1808.00296v1",econ.TH
33785,th,"We establish that statistical discrimination is possible if and only if it is
impossible to uniquely identify the signal structure observed by an employer
from a realized empirical distribution of skills. The impossibility of
statistical discrimination is shown to be equivalent to the existence of a
fair, skill-dependent, remuneration for workers. Finally, we connect the
statistical discrimination literature to Bayesian persuasion, establishing that
if discrimination is absent, then the optimal signaling problem results in a
linear payoff function (as well as a kind of converse).","A characterization of ""Phelpsian"" statistical discrimination",2018-08-03 23:52:23,"Christopher P. Chambers, Federico Echenique","http://arxiv.org/abs/1808.01351v1, http://arxiv.org/pdf/1808.01351v1",econ.TH
33786,th,"Under the same assumptions made by Mas-Colell et al. (1995), I develop a
short, simple, and complete proof of existence of equilibrium prices based on
excess demand functions. The result is obtained by applying the Brouwer fixed
point theorem to a trimmed simplex which does not contain prices equal to zero.
The mathematical techniques are based on some results obtained in Neuefeind
(1980) and Geanakoplos (2003).",Existence of Equilibrium Prices: A Pedagogical Proof,2018-08-09 15:53:09,Simone Tonin,"http://arxiv.org/abs/1808.03129v2, http://arxiv.org/pdf/1808.03129v2",econ.TH
33787,th,"News utility is the idea that the utility of an agent depends on changes in
her beliefs over consumption and money. We introduce news utility into
otherwise classical static Bayesian mechanism design models. We show that a key
role is played by the timeline of the mechanism, i.e. whether there are delays
between the announcement stage, the participation stage, the play stage and the
realization stage of a mechanism. Depending on the timing, agents with news
utility can experience two additional news utility effects: a surprise effect
derived from comparing to pre-mechanism beliefs, as well as a realization
effect derived from comparing post-play beliefs with the actual outcome of the
mechanism.
  We look at two distinct mechanism design settings reflecting the two main
strands of the classical literature. In the first model, a monopolist screens
an agent according to the magnitude of her loss aversion. In the second model,
we consider a general multi-agent Bayesian mechanism design setting where the
uncertainty of each player stems from not knowing the intrinsic types of the
other agents. We give applications to auctions and public good provision which
illustrate how news utility changes classical results.
  For both models we characterize the optimal design of the timeline. A
timeline featuring no delay between participation and play but a delay in
realization is never optimal in either model. In the screening model the
optimal timeline is one without delays. In auction settings, under fairly
natural assumptions the optimal timeline has delays between all three stages of
the mechanism.",Mechanism Design with News Utility,2018-08-13 02:28:31,Jetlir Duraj,"http://arxiv.org/abs/1808.04020v1, http://arxiv.org/pdf/1808.04020v1",econ.TH
33788,th,"This paper establishes an interesting link between $k$th price auctions and
Catalan numbers by showing that for distributions that have linear density, the
bid function at any symmetric, increasing equilibrium of a $k$th price auction
with $k\geq 3$ can be represented as a finite series of $k-2$ terms whose
$\ell$th term involves the $\ell$th Catalan number. Using an integral
representation of Catalan numbers, together with some classical combinatorial
identities, we derive the closed form of the unique symmetric, increasing
equilibrium of a $k$th price auction for a non-uniform distribution.",$k$th price auctions and Catalan numbers,2018-08-17 23:59:41,"Abdel-Hameed Nawar, Debapriya Sen","http://dx.doi.org/10.1016/j.econlet.2018.08.018, http://arxiv.org/abs/1808.05996v1, http://arxiv.org/pdf/1808.05996v1",econ.TH
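A minimal sketch of the combinatorial ingredient, the Catalan numbers C_n = binom(2n, n)/(n + 1); the closed-form bid function itself depends on the particular linear density and is not reproduced here.

from math import comb

def catalan(n: int) -> int:
    # n-th Catalan number
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(8)])   # [1, 1, 2, 5, 14, 42, 132, 429]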
33789,th,"Several structural results for the set of competitive equilibria in trading
networks with frictions are established: The lattice theorem, the rural
hospitals theorem, the existence of side-optimal equilibria, and a
group-incentive-compatibility result hold with imperfectly transferable utility
and in the presence of frictions. While our results are developed in a trading
network model, they also imply analogous (and new) results for exchange
economies with combinatorial demand and for two-sided matching markets with
transfers.",The Structure of Equilibria in Trading Networks with Frictions,2018-08-23 23:04:22,Jan Christoph Schlegel,"http://arxiv.org/abs/1808.07924v6, http://arxiv.org/pdf/1808.07924v6",econ.TH
33790,th,"We study a repeated game with payoff externalities and observable actions
where two players receive information over time about an underlying
payoff-relevant state, and strategically coordinate their actions. Players
learn about the true state from private signals, as well as the actions of
others. They commonly learn the true state (Cripps et al., 2008), but do not
coordinate in every equilibrium. We show that there exist stable equilibria in
which players can overcome unfavorable signal realizations and eventually
coordinate on the correct action, for any discount factor. For high discount
factors, we show that in addition players can also achieve efficient payoffs.",Repeated Coordination with Private Learning,2018-08-31 23:39:11,"Pathikrit Basu, Kalyan Chatterjee, Tetsuya Hoshino, Omer Tamuz","http://arxiv.org/abs/1809.00051v1, http://arxiv.org/pdf/1809.00051v1",econ.TH
33791,th,"We study the indirect cost of information from sequential information cost
minimization. A key sub-additivity condition, together with monotonicity
equivalently characterizes the class of indirect cost functions generated from
any direct information cost. Adding an extra (uniform) posterior separability
condition equivalently characterizes the indirect cost generated from any
direct cost favoring incremental evidences. We also provide the necessary and
sufficient condition when prior independent direct cost generates posterior
separable indirect cost.",The Indirect Cost of Information,2018-09-03 22:34:23,Weijie Zhong,"http://arxiv.org/abs/1809.00697v2, http://arxiv.org/pdf/1809.00697v2",econ.TH
33792,th,"This paper considers the core of a competitive market economy with an
endogenous social division of labour. The theory is founded on the notion of a
""consumer-producer"", who consumes as well as produces commodities. First, we
show that the Core of such an economy with an endogenous social division of
labour can be founded on deviations of coalitions of arbitrary size, extending
the seminal insights of Vind and Schmeidler for pure exchange economies.
Furthermore, we establish the equivalence between the Core and the set of
competitive equilibria for continuum economies with an endogenous social
division of labour. Our analysis also concludes that self-organisation in a
social division of labour can be incorporated into the Edgeworthian barter
process directly. This is formulated as a Core equivalence result stated for a
Structured Core concept based on renegotiations among fully specialised
economic agents, i.e., coalitions that use only fully developed internal
divisions of labour. Our approach bridges the gap between standard economies
with social production and coalition production economies. Therefore, a more
straightforward and natural interpretation of coalitional improvement and the
Core can be developed than for coalition production economies.",The Core of an Economy with an Endogenous Social Division of Labour,2018-09-05 16:10:18,Robert P. Gilles,"http://arxiv.org/abs/1809.01470v1, http://arxiv.org/pdf/1809.01470v1",econ.TH
33793,th,"We consider a symmetric two-player contest, in which the choice set of effort
is constrained. We apply a fundamental property of the payoff function to show
that, under standard assumptions, there exists a unique Nash equilibrium in
pure strategies. It is shown that all equilibria are near the unconstrained
equilibrium. Perhaps surprisingly, this is not the case when players have
different prize evaluations.",A note on contests with a constrained choice set of effort,2018-09-12 16:48:21,"Doron Klunover, John Morgan","http://arxiv.org/abs/1809.04436v11, http://arxiv.org/pdf/1809.04436v11",econ.TH
33794,th,"I consider the sequential implementation of a target information structure. I
characterize the set of decision time distributions induced by all signal
processes that satisfy a per-period learning capacity constraint. I find that
all decision time distributions have the same expectation, and the maximal and
minimal elements in the mean-preserving spread order are the deterministic distribution
and the exponential distribution. The result implies that when time preference is
risk loving (e.g. standard or hyperbolic discounting), Poisson signal is
optimal since it induces the most risky exponential decision time distribution.
When time preference is risk neutral (e.g. constant delay cost), all signal
processes are equally optimal.",Time preference and information acquisition,2018-09-13 21:13:08,Weijie Zhong,"http://arxiv.org/abs/1809.05120v2, http://arxiv.org/pdf/1809.05120v2",econ.TH
33798,th,"In this paper, we show that the presence of the Archimedean and the
mixture-continuity properties of a binary relation, both empirically
non-falsifiable in principle, foreclose the possibility of consistency
(transitivity) without decisiveness (completeness), or decisiveness without
consistency, or in the presence of a weak consistency condition, neither. The
basic result can be sharpened when specialized from the context of a
generalized mixture set to that of a mixture set in the sense of
Herstein-Milnor (1953). We relate the results to the antecedent literature, and
view them as part of an investigation into the interplay of the structure of
the choice space and the behavioral assumptions on the binary relation defined
on it; the ES research program due to Eilenberg (1941) and Sonnenschein (1965),
and one to which Schmeidler (1971) is an especially influential contribution.",Completeness and Transitivity of Preferences on Mixture Sets,2018-10-05 02:20:53,"Tsogbadral Galaabaatar, M. Ali Khan, Metin Uyanık","http://dx.doi.org/10.1016/j.mathsocsci.2019.03.004, http://arxiv.org/abs/1810.02454v1, http://arxiv.org/pdf/1810.02454v1",econ.TH
33799,th,"A ""statistician"" takes an action on behalf of an agent, based on the agent's
self-reported personal data and a sample involving other people. The action
that he takes is an estimated function of the agent's report. The estimation
procedure involves model selection. We ask the following question: Is
truth-telling optimal for the agent given the statistician's procedure? We
analyze this question in the context of a simple example that highlights the
role of model selection. We suggest that our simple exercise may have
implications for the broader issue of human interaction with ""machine learning""
algorithms.",The Model Selection Curse,2018-10-06 00:03:33,"Kfir Eliaz, Ran Spiegler","http://arxiv.org/abs/1810.02888v1, http://arxiv.org/pdf/1810.02888v1",econ.TH
33800,th,"Healthy nutrition promotions and regulations have long been regarded as a
tool for increasing social welfare. One of the avenues taken in the past decade
is sugar consumption regulation by introducing a sugar tax. Such a tax
increases the price of products with high sugar content, such as soft
drinks. In this article we consider a typical problem of optimal regulatory
policy design, where the task is to determine the sugar tax rate maximizing the
social welfare. We model the problem as a sequential game represented by the
three-level mathematical program. On the upper level, the government decides
upon the tax rate. On the middle level, producers decide on the product
pricing. On the lower level, consumers decide upon their preferences towards
the products. While the general problem is computationally intractable, the
problem with a few product types is polynomially solvable, even for an
arbitrary number of heterogeneous consumers. This paper presents a simple,
intuitive and easily implementable framework for computing optimal sugar tax in
a market with a few products. This resembles reality, as soft drinks,
for instance, are typically categorized as either regular or no-sugar drinks,
e.g. Coca-Cola and Coca-Cola Zero. We illustrate the algorithm using an example
based on real data and draw conclusions for a specific local market.",Optimal policy design for the sugar tax,2018-10-16 22:23:40,"Kelly Geyskens, Alexander Grigoriev, Niels Holtrop, Anastasia Nedelko","http://arxiv.org/abs/1810.07243v1, http://arxiv.org/pdf/1810.07243v1",econ.TH
33801,th,"Although the integration of two-sided matching markets using stable
mechanisms generates expected gains from integration, I show that there are
worst-case scenarios in which these are negative. The losses from integration
can be large enough that the average rank of an agent's spouse decreases by
37.5% of the length of their preference list in any stable matching mechanism.",The Losses from Integration in Matching Markets can be Large,2018-10-24 13:54:41,Josué Ortega,"http://dx.doi.org/10.1016/j.econlet.2018.10.028, http://arxiv.org/abs/1810.10287v1, http://arxiv.org/pdf/1810.10287v1",econ.TH
33802,th,"McFadden and Richter (1991) and later McFadden (2005) show that the Axiom of
Revealed Stochastic Preference characterizes rationalizability of choice
probabilities through random utility models on finite universal choice spaces.
This note proves the result in one short, elementary paragraph and extends it
to set valued choice. The latter requires a different axiom than is reported in
McFadden (2005).",Revealed Stochastic Preference: A One-Paragraph Proof and Generalization,2018-10-24 23:19:00,Jörg Stoye,"http://dx.doi.org/10.1016/j.econlet.2018.12.023, http://arxiv.org/abs/1810.10604v2, http://arxiv.org/pdf/1810.10604v2",econ.TH
33803,th,"We define and investigate a property of mechanisms that we call ""strategic
simplicity,"" and that is meant to capture the idea that, in strategically
simple mechanisms, strategic choices require limited strategic sophistication.
We define a mechanism to be strategically simple if choices can be based on
first-order beliefs about the other agents' preferences and first-order
certainty about the other agents' rationality alone, and there is no need for
agents to form higher-order beliefs, because such beliefs are irrelevant to the
optimal strategies. All dominant strategy mechanisms are strategically simple.
But many more mechanisms are strategically simple. In particular, strategically
simple mechanisms may be more flexible than dominant strategy mechanisms in the
bilateral trade problem and the voting problem.",Strategically Simple Mechanisms,2018-12-03 18:47:19,"Tilman Borgers, Jiangtao Li","http://arxiv.org/abs/1812.00849v1, http://arxiv.org/pdf/1812.00849v1",econ.TH
33804,th,"This paper studies the income fluctuation problem with capital income risk
(i.e., dispersion in the rate of return to wealth). Wealth returns and labor
earnings are allowed to be serially correlated and mutually dependent. Rewards
can be bounded or unbounded. Under rather general conditions, we develop a set
of new results on the existence and uniqueness of solutions, stochastic
stability of the model economy, as well as efficient computation of the ergodic
wealth distribution. A variety of applications are discussed. Quantitative
analysis shows that both stochastic volatility and mean persistence in wealth
returns have nontrivial impact on wealth inequality.",The Income Fluctuation Problem with Capital Income Risk: Optimality and Stability,2018-12-04 13:36:47,"Qingyin Ma, John Stachurski, Alexis Akira Toda","http://arxiv.org/abs/1812.01320v1, http://arxiv.org/pdf/1812.01320v1",econ.TH
33805,th,"In group decision making, the preference map and Cook-Seiford vector are two
concepts as ways of describing ties-permitted ordinal rankings. This paper
shows that they are equivalent for representing ties-permitted ordinal
rankings. Transformation formulas from one to the other are given and the
inherent consistency of the mutual conversion is discussed. The proposed
methods are illustrated by some examples. Some possible future applications of
the proposed formulas are also pointed out.",Mutual Conversion Between Preference Maps And Cook-Seiford Vectors,2018-12-10 01:29:14,Fujun Hou,"http://arxiv.org/abs/1812.03566v1, http://arxiv.org/pdf/1812.03566v1",econ.TH
33807,th,"Growth theory has rarely considered energy despite its invisible hand in all
physical systems. We develop a theoretical framework that places energy
transfers at centerstage of growth theory based on two principles: (1) goods
are material rearrangements and (2) such rearrangements are done by energy
transferred by prime movers (e.g. workers, engines). We derive the implications
of these principles for an autarkic agent that maximizes utility subject to an
energy budget constraint and maximizes energy surplus to relax such constraint.
The solution to these problems shows that growth is driven by positive marginal
energy surplus of energy goods (e.g. rice, oil), yet materializes through prime
mover accumulation. This perspective brings under one framework several results
from previous attempts to insert energy within growth theory, reconciles
economics with natural sciences, and provides a basis for a general
reinterpretation of economics and growth as the interplay between human desires
and thermodynamic processes.",A theoretical framework to consider energy transfers within growth theory,2018-12-12 21:59:33,"Benjamin Leiva, Octavio Ramirez, John R. Schramski","http://arxiv.org/abs/1812.05091v1, http://arxiv.org/pdf/1812.05091v1",econ.TH
33808,th,"We propose a decision-theoretic model akin to Savage (1972) that is useful
for defining causal effects. Within this framework, we define what it means for
a decision maker (DM) to act as if the relation between the two variables is
causal. Next, we provide axioms on preferences and show that these axioms are
equivalent to the existence of a (unique) Directed Acyclic Graph (DAG) that
represents the DM's preference. The notion of representation has two
components: the graph factorizes the conditional independence properties of the
DM's subjective beliefs, and arrows point from cause to effect. Finally, we
explore the connection between our representation and models used in the
statistical causality literature (for example, Pearl (1995)).",Causality: a decision theoretic approach,2018-12-18 17:55:54,Pablo Schenone,"http://arxiv.org/abs/1812.07414v5, http://arxiv.org/pdf/1812.07414v5",econ.TH
33809,th,"This paper investigates the consumption and risk taking decision of an
economic agent with partial irreversibility of consumption decision by
formalizing the theory proposed by Duesenberry (1949). The optimal policies
exhibit a type of the (s, S) policy: there are two wealth thresholds within
which consumption stays constant. Consumption increases or decreases at the
thresholds and after the adjustment new thresholds are set. The share of risky
investment in the agent's total investment is inversely U-shaped within the (s,
S) band, which generates time-varying risk aversion that can fluctuate widely
over time. This property can explain puzzles and questions on asset pricing and
households' portfolio choices, e.g., why aggregate consumption is so smooth
whereas the equity premium is high and the equity return has high
volatility, why the risky share is so low whereas the estimated risk aversion
by the micro-level data is small, and whether and when an increase in wealth
has an impact on the risky share. Also, the partial irreversibility model can
explain both the excess sensitivity and the excess smoothness of consumption.","Duesenberry's Theory of Consumption: Habit, Learning, and Ratcheting",2018-12-25 08:41:08,"Kyoung Jin Choi, Junkee Jeon, Hyeng Keun Koo","http://arxiv.org/abs/1812.10038v1, http://arxiv.org/pdf/1812.10038v1",econ.TH
33810,th,"This note considers cartel stability when the cartelized products are
vertically differentiated. If market shares are maintained at pre-collusive
levels, then the firm with the lowest competitive price-cost margin has the
strongest incentive to deviate from the collusive agreement. The lowest-quality
supplier has the tightest incentive constraint when the difference in unit
production costs is sufficiently small.",Cartel Stability under Quality Differentiation,2018-12-26 15:24:51,"Iwan Bos, Marco Marini","http://dx.doi.org/10.1016/j.econlet.2018.10.024, http://arxiv.org/abs/1812.10293v1, http://arxiv.org/pdf/1812.10293v1",econ.TH
33811,th,"We study conditions for the existence of stable and group-strategy-proof
mechanisms in a many-to-one matching model with contracts if students'
preferences are monotone in contract terms. We show that ""equivalence"",
properly defined, to a choice profile under which contracts are substitutes and
the law of aggregate holds is a necessary and sufficient condition for the
existence of a stable and group-strategy-proof mechanism.
  Our result can be interpreted as a (weak) embedding result for choice
functions under which contracts are observable substitutes and the observable
law of aggregate demand holds.",Equivalent Choice Functions and Stable Mechanisms,2018-12-26 17:26:57,Jan Christoph Schlegel,"http://arxiv.org/abs/1812.10326v3, http://arxiv.org/pdf/1812.10326v3",econ.TH
33812,th,"Interdistrict school choice programs-where a student can be assigned to a
school outside of her district-are widespread in the US, yet the market-design
literature has not considered such programs. We introduce a model of
interdistrict school choice and present two mechanisms that produce stable or
efficient assignments. We consider three categories of policy goals on
assignments and identify when the mechanisms can achieve them. By introducing a
novel framework of interdistrict school choice, we provide a new avenue of
research in market design.",Interdistrict School Choice: A Theory of Student Assignment,2018-12-29 09:35:36,"Isa E. Hafalir, Fuhito Kojima, M. Bumin Yenmez","http://arxiv.org/abs/1812.11297v2, http://arxiv.org/pdf/1812.11297v2",econ.TH
33813,th,"We study a finite horizon optimal contracting problem of a risk-neutral
principal and a risk-averse agent who receives a stochastic income stream when
the agent is unable to make commitments. The problem involves an infinite
number of constraints at each time and each state of the world. Miao and Zhang
(2015) have developed a dual approach to the problem by considering a
Lagrangian and derived a Hamilton-Jacobi-Bellman equation in an infinite
horizon. We consider a similar Lagrangian in a finite horizon, but transform
the dual problem into an infinite series of optimal stopping problems. For each
optimal stopping problem we provide an analytic solution by providing an
integral equation representation for the free boundary. We provide a
verification theorem that the value function of the original principal's
problem is the Legendre-Fenchel transform of the integral of the value
functions of the optimal stopping problems. We also provide some numerical
simulation results of optimal contracting strategies.",Optimal Insurance with Limited Commitment in a Finite Horizon,2018-12-31 05:10:19,"Junkee Jeon, Hyeng Keun Koo, Kyunghyun Park","http://arxiv.org/abs/1812.11669v3, http://arxiv.org/pdf/1812.11669v3",econ.TH
33973,th,"Under very general conditions, we construct a micro-macro model for closed
economy with a large number of heterogeneous agents. By introducing both
financial capital (i.e. valued capital, the equities of firms) and physical
capital (i.e. capital goods), our framework gives a logically consistent,
complete factor income distribution theory with micro-foundation. The model
shows factor incomes obey different distribution rules at the micro and macro
levels, while marginal distribution theory and the no-arbitrage principle are
unified into a common framework. Our efforts solve the main problems of
Cambridge capital controversy, and reasonably explain the equity premium
puzzle. Strong empirical evidence supports our results.","A Contribution to Theory of Factor Income Distribution, Cambridge Capital Controversy and Equity Premium Puzzle",2019-11-28 05:21:57,Xiaofeng Liu,"http://arxiv.org/abs/1911.12490v2, http://arxiv.org/pdf/1911.12490v2",econ.TH
33814,th,"We present a limits-to-arbitrage model to study the impact of securitization,
leverage and credit risk protection on the cyclicity of bank credit. In a
stable bank credit situation, no cycles of credit expansion or contraction
appear. Unlevered securitization together with mis-pricing of securitized
assets increases lending cyclicality, favoring credit booms and busts. Leverage
changes the state of affairs with respect to the simple securitization. First,
the volume of real activity and banking profits increases. Second, banks sell
securities when markets decline. This selling puts further pressure on falling
prices. The mis-pricing of credit risk protection or securitized assets
influences the real economy. Trading in these contracts reduces the amount of
funding available to entrepreneurs, particularly to high-credit-risk borrowers.
This trading decreases the liquidity of the securitized assets, and especially
those based on investments with high credit risk.","Credit Cycles, Securitization, and Credit Default Swaps",2019-01-01 19:46:59,Juan Ignacio Peña,"http://arxiv.org/abs/1901.00177v1, http://arxiv.org/pdf/1901.00177v1",econ.TH
33815,th,"We are studying the Gately point, an established solution concept for
cooperative games. We point out that there are superadditive games for which
the Gately point is not unique, i.e. in general the concept is set-valued
rather than a single point. We derive conditions under which the Gately
point is guaranteed to be a unique imputation and provide a geometric
interpretation. The Gately point can be understood as the intersection of a
line defined by two points with the set of imputations. Our uniqueness
conditions guarantee that these two points do not coincide. We provide
demonstrative interpretations for negative propensities to disrupt. We briefly
show that our uniqueness conditions for the Gately point include quasibalanced
games and discuss the relation of the Gately point to the $\tau$-value in this
context. Finally, we point out relations to cost games and the ACA method and
end with a few remarks on the implementation of the Gately point and an
upcoming software package for cooperative game theory.",Conditions for the uniqueness of the Gately point for cooperative games,2019-01-06 04:48:26,"Jochen Staudacher, Johannes Anwander","http://arxiv.org/abs/1901.01485v1, http://arxiv.org/pdf/1901.01485v1",econ.TH
33816,th,"We consider a model for decision making based on an adaptive, k-period,
learning process where the priors are selected according to Von
Neumann-Morgenstern expected utility principle. A preference relation between
two prospects is introduced, defined by the condition which prospect is
selected more often. We show that the new preferences have similarities with
the preferences obtained by Kahneman and Tversky (1979) in the context of the
prospect theory. Additionally, we establish that in the limit of large learning
period, the new preferences coincide with the expected utility principle.",RPS(1) Preferences,2019-01-15 06:16:30,Misha Perepelitsa,"http://arxiv.org/abs/1901.04995v3, http://arxiv.org/pdf/1901.04995v3",econ.TH
33817,th,"We study a communication game between an informed sender and an uninformed
receiver with repeated interactions and voluntary transfers. Transfers motivate
the receiver's decision-making and signal the sender's information. Although
full separation can always be supported in equilibrium, partial or complete
pooling is optimal if the receiver's decision-making is highly responsive to
information. In this case, the receiver's decision-making is disciplined by
pooling extreme states where she is most tempted to defect.",Relational Communication,2019-01-17 09:45:55,"Anton Kolotilin, Hongyi Li","http://arxiv.org/abs/1901.05645v3, http://arxiv.org/pdf/1901.05645v3",econ.TH
33818,th,"In this paper we study a model of weighted network formation. The bilateral
interaction is modeled as a Tullock contest game with the possibility of a
draw. We describe stable networks under different concepts of stability. We
show that a Nash stable network is either the empty network or the complete
network. The complete network is not immune to bilateral deviations. When we
allow for limited farsightedness, stable networks immune to bilateral
deviations must be complete $M$-partite networks, with partitions of different
sizes. The empty network is the efficient network. We provide several
comparative statics results illustrating the importance of network structure in
mediating the effects of shocks and interventions. In particular, we show that
an increase in the likelihood of a draw has a non-monotonic effect on the level
of wasteful contest spending in the society. To the best of our knowledge, this
paper is the first attempt to model weighted network formation when the actions
of individuals are neither strategic complements nor strategic substitutes.",A Noncooperative Model of Contest Network Formation,2019-01-22 23:18:44,Kenan Huremovic,"http://arxiv.org/abs/1901.07605v4, http://arxiv.org/pdf/1901.07605v4",econ.TH
33819,th,"Nowadays, we are surrounded by a large number of complex phenomena ranging
from rumor spreading and social norm formation to the rise of new economic trends and
the disruption of traditional businesses. To deal with such phenomena, the Complex
Adaptive System (CAS) framework has been found very influential among social
scientists, especially economists. As the most powerful methodology of CAS
modeling, Agent-based modeling (ABM) has gained a growing application among
academicians and practitioners. ABMs show how simple behavioral rules of agents
and local interactions among them at micro-scale can generate surprisingly
complex patterns at macro-scale. Despite a growing number of ABM publications,
those researchers unfamiliar with this methodology have to study a number of
works to understand (1) the why and what of ABMs and (2) the ways they are
rigorously developed. Therefore, the major focus of this paper is to help
social science researchers, especially economists, get a big picture of ABMs and
know how to develop them both systematically and rigorously.",Theories and Practice of Agent based Modeling: Some practical Implications for Economic Planners,2019-01-24 00:09:16,"Hossein Sabzian, Mohammad Ali Shafia, Ali Maleki, Seyeed Mostapha Seyeed Hashemi, Ali Baghaei, Hossein Gharib","http://arxiv.org/abs/1901.08932v1, http://arxiv.org/pdf/1901.08932v1",econ.TH
33820,th,"Existing cooperative game theoretic studies of bargaining power in gas
pipeline systems are based on the so called characteristic function form (CFF).
This approach is potentially misleading if some pipelines fall under regulated
third party access (TPA). TPA, which is by now the norm in the EU, obliges the
owner of a pipeline to transport gas for others, provided they pay a regulated
transport fee. From a game theoretic perspective, this institutional setting
creates so called ""externalities,"" the description of which requires partition
function form (PFF) games. In this paper we propose a method to compute
payoffs, reflecting the power structure, for a pipeline system with regulated
TPA. The method is based on an iterative flow mechanism to determine gas flows
and transport fees for individual players and uses the recursive core and the
minimal claim function to convert the PFF game back into a CFF game, which can
be solved by standard methods. We illustrate the approach with a simple
stylized numerical example of the gas network in Central Eastern Europe with a
focus on Ukraine's power index as a major transit country.",Modelling transfer profits as externalities in a cooperative game-theoretic model of natural gas networks,2019-01-31 18:41:54,"Dávid Csercsik, Franz Hubert, Balázs R. Sziklai, László Á. Kóczy","http://dx.doi.org/10.1016/j.eneco.2019.01.013, http://arxiv.org/abs/1901.11435v2, http://arxiv.org/pdf/1901.11435v2",econ.TH
33821,th,"I propose a cheap-talk model in which the sender can use private messages and
only cares about persuading a subset of her audience. For example, a candidate
only needs to persuade a majority of the electorate in order to win an
election. I find that senders can gain credibility by speaking truthfully to
some receivers while lying to others. In general settings, the model admits
information transmission in equilibrium for some prior beliefs. The sender can
approximate her preferred outcome when the fraction of the audience she needs
to persuade is sufficiently small. I characterize the sender-optimal
equilibrium and the benefit of not having to persuade your whole audience in
separable environments. I also analyze different applications and verify that
the results are robust to some perturbations of the model, including
non-transparent motives as in Crawford and Sobel (1982), and full commitment as
in Kamenica and Gentzkow (2011).",Persuading part of an audience,2019-03-01 05:21:13,Bruno Salcedo,"http://arxiv.org/abs/1903.00129v1, http://arxiv.org/pdf/1903.00129v1",econ.TH
33822,th,"The potential benefits of portfolio diversification have been known to
investors for a long time. Markowitz (1952) suggested the seminal approach for
optimizing the portfolio problem based on finding the weights as budget shares
that minimize the variance of the underlying portfolio. Hatemi-J and El-Khatib
(2015) suggested finding the weights that will result in maximizing the risk
adjusted return of the portfolio. This approach seems to be preferred by
rational investors since it combines risk and return when the optimal budget
shares are sought. The current paper provides a general solution for this
risk adjusted return problem that can be utilized for any potential number of
assets that are included in the portfolio.",Exact Solution for the Portfolio Diversification Problem Based on Maximizing the Risk Adjusted Return,2019-03-04 08:49:43,"Abdulnasser Hatemi-J, Mohamed Ali Hajji, Youssef El-Khatib","http://arxiv.org/abs/1903.01082v1, http://arxiv.org/pdf/1903.01082v1",econ.TH
33823,th,"Consumers in many markets are uncertain about firms' qualities and costs, so
they buy based on both the price and the quality inferred from it. Optimal pricing
depends on consumer heterogeneity only when firms with higher quality have
higher costs, regardless of whether costs and qualities are private or public.
If better quality firms have lower costs, then good quality is sold cheaper
than bad under private costs and qualities, but not under public. However, if
higher quality is costlier, then price weakly increases in quality under both
informational environments.",Price competition with uncertain quality and cost,2019-03-10 16:20:23,Sander Heinsalu,"http://arxiv.org/abs/1903.03987v2, http://arxiv.org/pdf/1903.03987v2",econ.TH
33824,th,"The broad concept of an individual's welfare is actually a cluster of related
specific concepts that bear a ""family resemblance"" to one another. One might
care about how a policy will affect people both in terms of their subjective
preferences and also in terms of some notion of their objective interests. This
paper provides a framework for evaluation of policies in terms of welfare
criteria that combine these two considerations. Sufficient conditions are
provided for such a criterion to imply the same ranking of social states as
does Pareto's unanimity criterion. Sufficiency is proved via study of a
community of agents with interdependent ordinal preferences.",J. S. Mill's Liberal Principle and Unanimity,2019-03-19 02:49:31,Edward J. Green,"http://arxiv.org/abs/1903.07769v1, http://arxiv.org/pdf/1903.07769v1",econ.TH
33825,th,"What are the value and form of optimal persuasion when information can be
generated only slowly? We study this question in a dynamic model in which a
'sender' provides public information over time subject to a graduality
constraint, and a decision-maker takes an action in each period. Using a novel
'viscosity' dynamic programming principle, we characterise the sender's
equilibrium value function and information provision. We show that the
graduality constraint inhibits information provision relative to unconstrained
persuasion. The gap can be substantial, but closes as the constraint slackens.
Contrary to unconstrained persuasion, less-than-full information may be
provided even if players have aligned preferences but different prior beliefs.",Slow persuasion,2019-03-21 18:24:33,"Matteo Escudé, Ludvig Sinander","http://dx.doi.org/10.3982/TE5175, http://arxiv.org/abs/1903.09055v6, http://arxiv.org/pdf/1903.09055v6",econ.TH
33826,th,"We study the core of normal form games with a continuum of players and
without side payments. We consider the weak-core concept, which is an
approximation of the core, introduced by Weber, Shapley and Shubik. For payoffs
depending on the players' strategy profile, we prove that the weak-core is
nonempty. The existence result establishes a weak-core element as a limit of
elements in weak-cores of appropriate finite games. We establish by examples
that our regularity hypotheses are relevant in the continuum case and the
weak-core can be strictly larger than Aumann's $\alpha$-core. For games
where payoffs depend on the distribution of players' strategy profile, we prove
that analogous regularity conditions ensuring the existence of pure strategy
Nash equilibria are irrelevant for the non-vacuity of the weak-core.",On the core of normal form games with a continuum of players : a correction,2019-03-23 16:23:30,Youcef Askoura,"http://dx.doi.org/10.1016/j.mathsocsci.2017.06.001, http://arxiv.org/abs/1903.09819v1, http://arxiv.org/pdf/1903.09819v1",econ.TH
33827,th,"We consider the interim core of normal form cooperative games and exchange
economies with incomplete information based on the partition model. We develop
a solution concept that we can situate roughly between Wilson's coarse core and
Yannelis's private core. We investigate the interim negotiation of contracts
and address the two situations of contract delivery: interim and ex post. Our
solution differs from Wilson's concept because the measurability of strategies
in our solution is postponed until the consumption date (assumed with respect
to the information that will be known by the players at the consumption date).
For interim consumption, our concept differs from Yannelis's private core
because players can negotiate conditional on proper common knowledge events in
our solution, which strengthens the interim aspect of the game, as we will
illustrate with examples.",An interim core for normal form games and exchange economies with incomplete information: a correction,2019-03-23 22:00:00,Youcef Askoura,"http://dx.doi.org/10.1016/j.jmateco.2015.03.005, http://arxiv.org/abs/1903.09867v1, http://arxiv.org/pdf/1903.09867v1",econ.TH
33828,th,"Observational learning often involves congestion: an agent gets lower payoff
from an action when more predecessors have taken that action. This preference
to act differently from previous agents may paradoxically increase all but one
agent's probability of matching the actions of the predecessors. The reason is
that when previous agents conform to their predecessors despite the preference
to differ, their actions become more informative. The desire to match
predecessors' actions may reduce herding by a similar reasoning.",Herding driven by the desire to differ,2019-03-31 21:00:49,Sander Heinsalu,"http://arxiv.org/abs/1904.00454v1, http://arxiv.org/pdf/1904.00454v1",econ.TH
33830,th,"Central to the official ""green growth"" discourse is the conjecture that
absolute decoupling can be achieved with certain market instruments. This paper
evaluates this claim focusing on the role of technology, while changes in GDP
composition are treated elsewhere. Some fundamental difficulties for absolute
decoupling, referring specifically to thermodynamic costs, are identified
through a stylized model based on empirical knowledge on innovation and
learning. Normally, monetary costs decrease more slowly than production grows,
and this is unlikely to change should monetary costs align with thermodynamic
costs, except, potentially, in the transition after the price reform.
Furthermore, thermodynamic efficiency must eventually saturate for physical
reasons. While this model, as usual, introduces technological innovation just
as a source of efficiency, innovation also creates challenges: therefore,
attempts to sustain growth by ever-accelerating innovation also collide with
the limited reaction capacity of people and institutions. Information
technology could disrupt innovation dynamics in the future, permitting quicker
gains in eco-efficiency, but only up to saturation and exacerbating the
downsides of innovation. These observations suggest that long-term
sustainability requires much deeper transformations than the green growth
discourse presumes, exposing the need to rethink scales, tempos and
institutions, in line with ecological economics and the degrowth literature.",Limits to green growth and the dynamics of innovation,2019-04-21 15:17:39,Salvador Pueyo,"http://arxiv.org/abs/1904.09586v2, http://arxiv.org/pdf/1904.09586v2",econ.TH
33831,th,"We demonstrate how a static optimal income taxation problem can be analyzed
using dynamical methods. Specifically, we show that the taxation problem is
intimately connected to the heat equation. Our first result is a new property
of the optimal tax which we call the fairness principle. The optimal tax at any
income is invariant under a family of properly adjusted Gaussian averages (the
heat kernel) of the optimal taxes at other incomes. That is, the optimal tax at
a given income is equal to the weighted by the heat kernels average of optimal
taxes at other incomes and income densities. Moreover, this averaging happens
at every scale tightly linked to each other providing a unified weighting
scheme at all income ranges. The fairness principle arises not due to equality
considerations but rather it represents an efficient way to smooth the burden
of taxes and generated revenues across incomes. Just as nature wants to
distribute heat evenly, the optimal way for a government to raise revenues is
to distribute the tax burden and raised revenues evenly among individuals. We
then construct a gradient flow of taxes -- a dynamic process changing the
existing tax system in the direction of the increase in tax revenues -- and
show that it takes the form of a heat equation. The fairness principle holds
also for the short-term asymptotics of the gradient flow, where the averaging
is done over the current taxes. The gradient flow we consider can be viewed as
a continuous process of a reform of the nonlinear income tax schedule and thus
unifies the variational approach to taxation and optimal taxation. We present
several other characteristics of the gradient flow focusing on its smoothing
properties.",Tax Mechanisms and Gradient Flows,2019-04-30 17:27:53,"Stefan Steinerberger, Aleh Tsyvinski","http://arxiv.org/abs/1904.13276v1, http://arxiv.org/pdf/1904.13276v1",econ.TH
33832,th,"We develop a theory of repeated interaction for coalitional behavior. We
consider stage games where both individuals and coalitions may deviate.
However, coalition members cannot commit to long-run behavior, and anticipate
that today's actions influence tomorrow's behavior. We evaluate the degree to
which history-dependence can deter coalitional deviations. If monitoring is
perfect, every feasible and strictly individually rational payoff can be
supported by history-dependent conventions. By contrast, if players can make
secret side-payments to each other, every coalition achieves a coalitional
minmax value, potentially reducing the set of supportable payoffs to the core
of the stage game.",Conventions and Coalitions in Repeated Games,2019-06-01 22:57:09,"S. Nageeb Ali, Ce Liu","http://arxiv.org/abs/1906.00280v2, http://arxiv.org/pdf/1906.00280v2",econ.TH
33833,th,"We study the migrants' assimilation, which we conceptualize as forming human
capital productive on the labor market of a developed host country, and we link
the observed frequent lack of assimilation with the relative deprivation that
the migrants start to feel when they move in social space towards the natives.
In turn, we presume that the native population is heterogeneous and consists of
high-skill and low-skill workers. The presence of assimilated migrants might
shape the comparison group of the natives, influencing the relative deprivation
of the low-skill workers and, in consequence, the choice to form human capital
and become highly skilled. To analyse this interrelation between assimilation
choices of migrants and skill formation of natives, we construct a
coevolutionary model of the open-to-migration economy. Showing that the economy
might end up in a non-assimilation equilibrium, we discuss welfare consequences
of an assimilation policy funded from a tax levied on the native population. We
identify conditions under which such costly policy can bring the migrants to
assimilation and at the same time increase the welfare of the natives, even
though the incomes of the former take a beating.",The interplay between migrants and natives as a determinant of migrants' assimilation: A coevolutionary approach,2019-06-06 18:49:37,"Jakub Bielawski, Marcin Jakubek","http://dx.doi.org/10.24425/cejeme.2021.139797, http://arxiv.org/abs/1906.02657v1, http://arxiv.org/pdf/1906.02657v1",econ.TH
33834,th,"Auction theories are believed to provide a better selling opportunity for the
resources to be allocated. Various organizations have taken measures to
increase trust among participants towards their auction system, but trust alone
cannot ensure a high level of participation. We propose a new type of auction
system which takes advantage of lucky draw and gambling addictions to increase
the engagement level of candidates in an auction. Our system makes use of
security features present in existing auction systems for ensuring fairness and
maintaining trust among participants.",Addictive Auctions: using lucky-draw and gambling addiction to increase participation during auctioning,2019-06-07 19:51:07,Ravin Kumar,"http://dx.doi.org/10.51483/IJMRE.1.1.2021.68-74, http://arxiv.org/abs/1906.03237v1, http://arxiv.org/pdf/1906.03237v1",econ.TH
33835,th,"We revisit the linear Cournot model with uncertain demand that is studied in
Lagerl\""of (2006)* and provide sufficient conditions for equilibrium uniqueness
that complement the existing results. We show that if the distribution of the
demand intercept has the decreasing mean residual demand (DMRD) or the
increasing generalized failure rate (IGFR) property, then uniqueness of
equilibrium is guaranteed. The DMRD condition implies log-concavity of the
expected profits per unit of output without additional assumptions on the
existence or the shape of the density of the demand intercept and, hence,
answers in the affirmative the conjecture of Lagerl\""of (2006)* that such
conditions may not be necessary.
  *Johan Lagerl\""of, Equilibrium uniqueness in a Cournot model with demand
uncertainty. The B.E. Journal of Theoretical Economics, Vol. 6: Iss. 1.
(Topics), Article 19:1--6, 2006.",On the Equilibrium Uniqueness in Cournot Competition with Demand Uncertainty,2019-06-09 06:55:34,"Stefanos Leonardos, Costis Melolidakis","http://dx.doi.org/10.1515/bejte-2019-0131, http://arxiv.org/abs/1906.03558v3, http://arxiv.org/pdf/1906.03558v3",econ.TH
33836,th,"We study a pure-exchange incomplete-market economy with heterogeneous agents.
In each period, the agents choose how much to save (i.e., invest in a risk-free
bond), how much to consume, and which bundle of goods to consume while their
endowments are fluctuating. We focus on a competitive stationary equilibrium
(CSE) in which the wealth distribution is invariant, the agents maximize their
expected discounted utility, and both the prices of consumption goods and the
interest rate are market-clearing. Our main contribution is to extend some
general equilibrium results to an incomplete-market Bewley-type economy with
many consumption goods. Under mild conditions on the agents' preferences, we
show that the aggregate demand for goods depends only on their relative prices
and that the aggregate demand for savings is homogeneous of degree in prices,
and we prove the existence of a CSE. When the agents' preferences can be
represented by a CES (constant elasticity of substitution) utility function
with an elasticity of substitution that is higher than or equal to one, we
prove that the CSE is unique. Under the same preferences, we show that a higher
inequality of endowments does not change the equilibrium prices of goods, and
decreases the equilibrium interest rate. Our results shed light on the impact
of market incompleteness on the properties of general equilibrium models.",General equilibrium in a heterogeneous-agent incomplete-market economy with many consumption goods and a risk-free bond,2019-06-17 04:16:27,Bar Light,"http://arxiv.org/abs/1906.06810v2, http://arxiv.org/pdf/1906.06810v2",econ.TH
33837,th,"We study bilateral trade with interdependent values as an informed-principal
problem. The mechanism-selection game has multiple equilibria that differ with
respect to principal's payoff and trading surplus. We characterize the
equilibrium that is worst for every type of principal, and characterize the
conditions under which there are no equilibria with different payoffs for the
principal. We also show that this is the unique equilibrium that survives the
intuitive criterion.",Informed Principal Problems in Bilateral Trading,2019-06-25 06:33:54,Takeshi Nishimura,"http://arxiv.org/abs/1906.10311v6, http://arxiv.org/pdf/1906.10311v6",econ.TH
33838,th,"I introduce a stability notion, dynamic stability, for two-sided dynamic
matching markets where (i) matching opportunities arrive over time, (ii)
matching is one-to-one, and (iii) matching is irreversible. The definition
addresses two conceptual issues. First, since not all agents are available to
match at the same time, one must establish which agents are allowed to form
blocking pairs. Second, dynamic matching markets exhibit a form of externality
that is not present in static markets: an agent's payoff from remaining
unmatched cannot be defined independently of what other contemporaneous agents'
outcomes are. Dynamically stable matchings always exist. Dynamic stability is a
necessary condition to ensure timely participation in the economy by ensuring
that agents do not strategically delay the time at which they are available to
match.",Dynamically Stable Matching,2019-06-27 02:59:29,Laura Doval,"http://arxiv.org/abs/1906.11391v5, http://arxiv.org/pdf/1906.11391v5",econ.TH
33839,th,"This paper introduces an evolutionary dynamics based on imitate the better
realization (IBR) rule. Under this rule, agents in a population game imitate
the strategy of a randomly chosen opponent whenever the opponent`s realized
payoff is higher than their own. Such behavior generates an ordinal mean
dynamics which is polynomial in strategy utilization frequencies. We
demonstrate that while the dynamics does not possess Nash stationarity or
payoff monotonicity, under it pure strategies that are iteratively strictly dominated by
pure strategies are eliminated and strict equilibria are locally stable. We
investigate the relationship between the dynamics based on the IBR rule and the
replicator dynamics. In trivial cases, the two dynamics are topologically
equivalent. In Rock-Paper-Scissors games we conjecture that both dynamics
exhibit the same types of behavior, but the partitions of the game set do not
coincide. In other cases, the IBR dynamics exhibits behaviors that are
impossible under the replicator dynamics.",Ordinal Imitative Dynamics,2019-07-09 19:19:21,George Loginov,"http://arxiv.org/abs/1907.04272v1, http://arxiv.org/pdf/1907.04272v1",econ.TH
33840,th,"In this paper we develop a general framework to analyze stochastic dynamic
problems with unbounded utility functions and correlated and unbounded shocks.
We obtain new results of the existence and uniqueness of solutions to the
Bellman equation through a general fixed point theorem that generalizes known
results for Banach contractions and local contractions. We study an endogenous
growth model as well as the Lucas asset pricing model in an exchange economy,
significantly expanding their range of applicability.",Existence and Uniqueness of Solutions to the Stochastic Bellman Equation with Unbounded Shock,2019-07-17 08:52:13,Juan Pablo Rincón-Zapatero,"http://arxiv.org/abs/1907.07343v1, http://arxiv.org/pdf/1907.07343v1",econ.TH
33841,th,"In this note, we consider the pricing problem of a profit-maximizing
monopolist who faces naive consumers with convex self-control preferences.",Contract Design with Costly Convex Self-Control,2019-07-17 19:35:31,"Yusufcan Masatlioglu, Daisuke Nakajima, Emre Ozdenoren","http://arxiv.org/abs/1907.07628v1, http://arxiv.org/pdf/1907.07628v1",econ.TH
33842,th,"Firms strategically disclose product information in order to attract
consumers, but recipients often find it costly to process all of it, especially
when products have complex features. We study a model of competitive
information disclosure by two senders, in which the receiver may garble each
sender's experiment, subject to a cost increasing in the informativeness of the
garbling. For a large class of parameters, it is an equilibrium for the senders
to provide the receiver's first best level of information - i.e. as much as she
would learn if she herself controlled information provision. Information on one
sender substitutes for information on the other, which nullifies the
profitability of a unilateral provision of less information. Thus, we provide a
novel channel through which competition with attention costs encourages
information disclosure.",Competing to Persuade a Rationally Inattentive Agent,2019-07-22 15:10:19,"Vasudha Jain, Mark Whitmeyer","http://arxiv.org/abs/1907.09255v2, http://arxiv.org/pdf/1907.09255v2",econ.TH
33843,th,"We study a method for proportional representation that was proposed at the
turn from the nineteenth to the twentieth century by Gustav Enestr\""om and
Edvard Phragm\'en. As in Phragm\'en's better-known iterative minimax method,
the voters are assumed to express themselves by means of approval voting. In
contrast to the iterative minimax method, however, here one starts by fixing a
quota, i.e. the number of votes that give the right to a seat. As a matter of
fact, the method of Enestr\""om and Phragm\'en can be seen as an extension of
the method of largest remainders from closed lists to open lists, or also as an
adaptation of the single transferable vote to approval rather than preferential
voting. The properties of this method are studied and compared with those of
other methods of the same kind.",The method of Eneström and Phragmén for parliamentary elections by means of approval voting,2019-07-23 11:59:31,"Rosa Camps, Xavier Mora, Laia Saumell","http://arxiv.org/abs/1907.10590v1, http://arxiv.org/pdf/1907.10590v1",econ.TH
33847,th,"A Bayesian agent experiences gain-loss utility each period over changes in
belief about future consumption (""news utility""), with diminishing sensitivity
over the magnitude of news. Diminishing sensitivity induces a preference over
news skewness: gradual bad news, one-shot good news is worse than one-shot
resolution, which is in turn worse than gradual good news, one-shot bad news.
So, the agent's preference between gradual information and one-shot resolution
can depend on his consumption ranking of different states. In a dynamic
cheap-talk framework where a benevolent sender communicates the state over
multiple periods, the babbling equilibrium is essentially unique without loss
aversion. More loss-averse agents may enjoy higher news utility in equilibrium,
contrary to the commitment case. We characterize the family of gradual good
news equilibria that exist with high enough loss aversion, and find the sender
conveys progressively larger pieces of good news. We discuss applications to
media competition and game shows.",Dynamic Information Design with Diminishing Sensitivity Over News,2019-07-31 23:33:02,"Jetlir Duraj, Kevin He","http://arxiv.org/abs/1908.00084v5, http://arxiv.org/pdf/1908.00084v5",econ.TH
33848,th,"In the paper there is studied an optimal saving model in which the
interest-rate risk for saving is a fuzzy number. The total utility of
consumption is defined by using a concept of possibilistic expected utility. A
notion of possibilistic precautionary saving is introduced as a measure of the
variation of optimal saving level when moving from a sure saving model to a
possibilistic risk model. A first result establishes a necessary and sufficient
condition for the presence of a possibilistic interest-rate risk to generate
extra saving. This result can be seen as a possibilistic version of a
Rothschild and Stiglitz theorem on a probabilistic model of saving. A second
result of the paper studies the variation of the optimal saving level when
moving from a probabilistic model (the interest-rate risk is a random variable)
to a possibilistic model (the interest-rate risk is a fuzzy number).",The interest rate for saving as a possibilistic risk,2019-07-22 09:57:21,"Irina Georgescu, Jani Kinnunen","http://dx.doi.org/10.1016/j.physa.2020.124460, http://arxiv.org/abs/1908.00445v1, http://arxiv.org/pdf/1908.00445v1",econ.TH
33849,th,"We theoretically study the effect of a third person enforcement on a one-shot
prisoner's dilemma game played by two persons, with whom the third person plays
repeated prisoner's dilemma games. We find that the possibility of the third
person's future punishment causes them to cooperate in the one-shot game.",Third person enforcement in a prisoner's dilemma game,2019-08-14 09:13:54,Tatsuhiro Shichijo,"http://arxiv.org/abs/1908.04971v1, http://arxiv.org/pdf/1908.04971v1",econ.TH
33850,th,"In this paper, we extend and improve the production chain model introduced by
Kikuchi et al. (2018). Utilizing the theory of monotone concave operators, we
prove the existence, uniqueness, and global stability of equilibrium price,
hence improving their results on production networks with multiple upstream
partners. We propose an algorithm for computing the equilibrium price function
that is more than ten times faster than successive evaluations of the operator.
The model is then generalized to a stochastic setting that offers richer
implications for the distribution of firms in a production network.",Equilibrium in Production Chains with Multiple Upstream Partners,2019-08-22 08:35:46,"Meng Yu, Junnan Zhang","http://dx.doi.org/10.1016/j.jmateco.2019.04.002, http://arxiv.org/abs/1908.08208v1, http://arxiv.org/pdf/1908.08208v1",econ.TH
33851,th,"Data-based decisionmaking must account for the manipulation of data by agents
who are aware of how decisions are being made and want to affect their
allocations. We study a framework in which, due to such manipulation, data
becomes less informative when decisions depend more strongly on data. We
formalize why and how a decisionmaker should commit to underutilizing data.
Doing so attenuates information loss and thereby improves allocation accuracy.",Improving Information from Manipulable Data,2019-08-27 20:02:31,"Alex Frankel, Navin Kartik","http://dx.doi.org/10.1093/jeea/jvab017, http://arxiv.org/abs/1908.10330v5, http://arxiv.org/pdf/1908.10330v5",econ.TH
33852,th,"In various situations, decision makers face experts that may provide
conflicting advice. This advice may be in the form of probabilistic forecasts
over critical future events. We consider a setting where the two forecasters
provide their advice repeatedly and ask whether the decision maker can learn to
compare and rank the two forecasters based on past performance. We take an
axiomatic approach and propose three natural axioms that a comparison test
should comply with. We propose a test that complies with our axioms. Perhaps
not surprisingly, this test is closely related to the likelihood ratio of the
two forecasts over the realized sequence of events. More surprisingly, this
test is essentially unique. Furthermore, using results on the rate of
convergence of supermartingales, we show that whenever the two experts' advice
is sufficiently distinct, the proposed test will detect the informed expert to
any desired degree of precision in some
fixed finite time.",A Cardinal Comparison of Experts,2019-08-28 14:26:38,"Itay Kavaler, Rann Smorodinsky","http://arxiv.org/abs/1908.10649v3, http://arxiv.org/pdf/1908.10649v3",econ.TH
33853,th,"This paper uses an axiomatic foundation to create a new measure for the cost
of learning that allows for multiple perceptual distances in a single choice
environment so that some events can be harder to differentiate between than
others. The new measure maintains the tractability of Shannon's classic measure
but produces richer choice predictions and identifies a new form of
informational bias significant for welfare and counterfactual analysis.",Rational Inattention and Perceptual Distance,2019-09-03 02:26:24,David Walker-Jones,"http://arxiv.org/abs/1909.00888v2, http://arxiv.org/pdf/1909.00888v2",econ.TH
33854,th,"I introduce a model of predictive scoring. A receiver wants to predict a
sender's quality. An intermediary observes multiple features of the sender and
aggregates them into a score. Based on the score, the receiver makes a
decision. The sender wants the most favorable decision, and she can distort
each feature at a privately known cost. I characterize the most accurate
scoring rule. This rule underweights some features to deter sender distortion,
and overweights other features so that the score is correct on average. The
receiver prefers this scoring rule to full disclosure because information
aggregation mitigates his commitment problem.",Scoring Strategic Agents,2019-09-04 18:34:09,Ian Ball,"http://arxiv.org/abs/1909.01888v5, http://arxiv.org/pdf/1909.01888v5",econ.TH
33975,th,"This paper explores the gain maximization problem of two nations engaging in
non-cooperative bilateral trade. A probabilistic model of an exchange of
commodities under different price systems is considered. The volume of
commodities exchanged determines the demand each nation has for the
counterparty's currency. However, each nation can manipulate this quantity by
imposing a
tariff on imported commodities. As long as the gain from trade is determined by
the balance between imported and exported commodities, such a scenario results
in a two-party game where Nash equilibrium tariffs are determined for various
foreign currency demand functions and ultimately, the exchange rate based on
optimal tariffs is obtained.",Bilateral Tariffs Under International Competition,2020-01-08 12:48:32,"Tsotne Kutalia, Revaz Tevzadze","http://arxiv.org/abs/2001.02426v1, http://arxiv.org/pdf/2001.02426v1",econ.TH
33855,th,"This paper analyzes the role of money in asset markets characterized by
search frictions. We develop a dynamic framework that brings together a model
for illiquid financial assets \`a la Duffie, Garleanu, and Pedersen, and a
search-theoretic model of monetary exchange \`a la Lagos and Wright. The
presence of decentralized financial markets generates an essential role for
money, which helps investors re-balance their portfolios. We provide conditions
that guarantee the existence of a monetary equilibrium. In this case, asset
prices are always above their fundamental value, and this differential
represents a liquidity premium. We are able to derive an asset pricing theory
that delivers an explicit connection between monetary policy, asset prices, and
welfare. We obtain a negative relationship between inflation and equilibrium
asset prices. This key result stems from the complementarity between money and
assets in our framework.",Illiquid Financial Markets and Monetary Policy,2019-09-04 18:39:10,"Athanasios Geromichalos, Juan M. Licari, Jose Suarez-Lledo","http://arxiv.org/abs/1909.01889v1, http://arxiv.org/pdf/1909.01889v1",econ.TH
33856,th,"We propose a pseudo-market solution to resource allocation problems subject
to constraints. Our treatment of constraints is general: including
bihierarchical constraints due to considerations of diversity in school choice,
or scheduling in course allocation; and other forms of constraints needed to
model, for example, the market for roommates, and combinatorial assignment
problems. Constraints give rise to pecuniary externalities, which are
internalized via prices. Agents pay to the extent that their purchases affect
the value of relevant constraints at equilibrium prices. The result is a
constrained efficient market equilibrium outcome. The outcome is fair whenever
the constraints do not single out individual agents. Our result can be extended
to economies with endowments, and address participation constraints.",Constrained Pseudo-market Equilibrium,2019-09-13 02:23:03,"Federico Echenique, Antonio Miralles, Jun Zhang","http://arxiv.org/abs/1909.05986v4, http://arxiv.org/pdf/1909.05986v4",econ.TH
33857,th,"We explore conclusions a person draws from observing society when he allows
for the possibility that individuals' outcomes are affected by group-level
discrimination. Injecting a single non-classical assumption, that the agent is
overconfident about himself, we explain key observed patterns in social
beliefs, and make a number of additional predictions. First, the agent believes
in discrimination against any group he is in more than an outsider does,
capturing widely observed self-centered views of discrimination. Second, the
more group memberships the agent shares with an individual, the more positively
he evaluates the individual. This explains one of the most basic facts about
social judgments, in-group bias, as well as ""legitimizing myths"" that justify
an arbitrary social hierarchy through the perceived superiority of the
privileged group. Third, biases are sensitive to how the agent divides society
into groups when evaluating outcomes. This provides a reason why some
ethnically charged questions should not be asked, as well as a potential
channel for why nation-building policies might be effective. Fourth, giving the
agent more accurate information about himself increases all his biases. Fifth,
the agent is prone to substitute biases, implying that the introduction of a
new outsider group to focus on creates biases against the new group but lowers
biases vis-a-vis other groups. Sixth, there is a tendency for the agent to
agree more with those in the same groups. As a microfoundation for our model,
we provide an explanation for why an overconfident agent might allow for
potential discrimination in evaluating outcomes, even when he initially did not
conceive of this possibility.",Overconfidence and Prejudice,2019-09-18 18:20:35,"Paul Heidhues, Botond Kőszegi, Philipp Strack","http://arxiv.org/abs/1909.08497v1, http://arxiv.org/pdf/1909.08497v1",econ.TH
33858,th,"Under some initial conditions, it is shown that time consistency requirements
prevent rational expectation equilibrium (REE) existence for dynamic stochastic
general equilibrium models induced by consumer heterogeneity, in contrast to
static models. However, one can consider REE-prohibiting initial conditions as
limits of other initial conditions. The REE existence issue then is overcome by
using a limit of economies. This shows that significant care must be taken
when dealing with rational expectation equilibria.",Time-consistent decisions and rational expectation equilibrium existence in DSGE models,2019-09-23 13:06:12,Minseong Kim,"http://arxiv.org/abs/1909.10915v4, http://arxiv.org/pdf/1909.10915v4",econ.TH
33859,th,"I prove an envelope theorem with a converse: the envelope formula is
equivalent to a first-order condition. Like Milgrom and Segal's (2002) envelope
theorem, my result requires no structure on the choice set. I use the converse
envelope theorem to extend to general outcomes and preferences the canonical
result in mechanism design that any increasing allocation is implementable, and
apply this to selling information.",The converse envelope theorem,2019-09-25 02:00:49,Ludvig Sinander,"http://dx.doi.org/10.3982/ECTA18119, http://arxiv.org/abs/1909.11219v5, http://arxiv.org/pdf/1909.11219v5",econ.TH
33860,th,"We analyze undiscounted continuous-time games of strategic experimentation
with two-armed bandits. The risky arm generates payoffs according to a L\'{e}vy
process with an unknown average payoff per unit of time which nature draws from
an arbitrary finite set. Observing all actions and realized payoffs, plus a
free background signal, players use Markov strategies with the common posterior
belief about the unknown parameter as the state variable. We show that the
unique symmetric Markov perfect equilibrium can be computed in a simple closed
form involving only the payoff of the safe arm, the expected current payoff of
the risky arm, and the expected full-information payoff, given the current
belief. In particular, the equilibrium does not depend on the precise
specification of the payoff-generating processes.",Undiscounted Bandit Games,2019-09-29 20:25:31,"Godfrey Keller, Sven Rady","http://dx.doi.org/10.1016/j.geb.2020.08.003, http://arxiv.org/abs/1909.13323v4, http://arxiv.org/pdf/1909.13323v4",econ.TH
33861,th,"This note strengthens the main result of Lagziel and Lehrer (2019) (LL) ""A
bias in screening"" using Chambers and Healy (2011) (CH) ""Reversals of
signal-posterior monotonicity for any bounded prior"". LL show that the
conditional expectation of an unobserved variable of interest, given that a
noisy signal of it exceeds a cutoff, may decrease in the cutoff. CH prove that
the distribution of a variable conditional on a lower signal may first order
stochastically dominate the distribution conditional on a higher signal.
  The nonmonotonicity result is also extended to the empirically relevant
exponential and Pareto distributions, and to a wide range of signals.",Reversals of signal-posterior monotonicity imply a bias of screening,2019-10-08 01:32:24,Sander Heinsalu,"http://arxiv.org/abs/1910.03117v6, http://arxiv.org/pdf/1910.03117v6",econ.TH
33862,th,"We analyze the implications of transboundary pollution externalities on
environmental policymaking in a spatial and finite time horizon setting. We
focus on a simple regional optimal pollution control problem in order to
compare the global and local solutions in which, respectively, the
transboundary externality is and is not taken into account in the determination
of the optimal policy by individual local policymakers. We show that the local
solution is suboptimal and as such a global approach to environmental problems
is effectively needed. Our conclusions hold true in different frameworks,
including situations in which the spatial domain is either bounded or
unbounded, and situations in which macroeconomic-environmental feedback effects
are taken into account. We also show that if every local economy implements an
environmental policy stringent enough, then the global average level of
pollution will fall. If this is the case, over the long run the entire global
economy will be able to achieve a completely pollution-free status.","Transboundary Pollution Externalities: Think Globally, Act Locally?",2019-10-10 13:21:09,"Davide La Torre, Danilo Liuzzi, Simone Marsiglio","http://arxiv.org/abs/1910.04469v1, http://arxiv.org/pdf/1910.04469v1",econ.TH
33863,th,"We study a model of vendors competing to sell a homogeneous product to
customers spread evenly along a circular city. This model is based on
Hotelling's celebrated paper in 1929. Our aim in this paper is to present a
necessary and sufficient condition for the equilibrium. This yields a
representation for the equilibrium. To achieve this, we first formulate the
model mathematically. Next, we prove that the condition holds if and only if
the vendors are in equilibrium.",Necessary and sufficient condition for equilibrium of the Hotelling model on a circle,2019-10-23 13:50:05,"Satoshi Hayashi, Naoki Tsuge","http://arxiv.org/abs/1910.11154v2, http://arxiv.org/pdf/1910.11154v2",econ.TH
33864,th,"We present a unified duality approach to Bayesian persuasion. The optimal
dual variable, interpreted as a price function on the state space, is shown to
be a supergradient of the concave closure of the objective function at the
prior belief. Strong duality holds when the objective function is Lipschitz
continuous.
  When the objective depends on the posterior belief through a set of moments,
the price function induces prices for posterior moments that solve the
corresponding dual problem. Thus, our general approach unifies known results
for one-dimensional moment persuasion, while yielding new results for the
multi-dimensional case. In particular, we provide a necessary and sufficient
condition for the optimality of convex-partitional signals, derive structural
properties of solutions, and characterize the optimal persuasion scheme in the
case when the state is two-dimensional and the objective is quadratic.",The Persuasion Duality,2019-10-24 22:34:46,"Piotr Dworczak, Anton Kolotilin","http://arxiv.org/abs/1910.11392v2, http://arxiv.org/pdf/1910.11392v2",econ.TH
33865,th,"In several matching markets, in order to achieve diversity, agents'
priorities are allowed to vary across an institution's available seats, and the
institution is allowed to choose agents in a lexicographic fashion based on a
predetermined ordering of the seats, called a (capacity-constrained)
lexicographic choice rule. We provide a characterization of lexicographic
choice rules and a characterization of deferred acceptance mechanisms that
operate based on a lexicographic choice structure under variable capacity
constraints. We discuss some implications for the Boston school choice system
and show that our analysis can be helpful in applications to select among
plausible choice rules.",Lexicographic Choice Under Variable Capacity Constraints,2019-10-29 15:59:16,"Battal Dogan, Serhat Dogan, Kemal Yildiz","http://arxiv.org/abs/1910.13237v1, http://arxiv.org/pdf/1910.13237v1",econ.TH
33866,th,"How does an expert's ability persuade change with the availability of
messages? We study games of Bayesian persuasion the sender is unable to fully
describe every state of the world or recommend all possible actions. We
characterize the set of attainable payoffs. The sender always does worse with
coarse communication and values additional signals. We show that there exists
an upper bound on the marginal value of a signal for the sender. In a special
class of games, the marginal value of a signal is increasing when the receiver
is difficult to persuade. We show that an additional signal does not directly
translate into more information and the receiver might prefer coarse
communication. Finally, we study the geometric properties of optimal
information structures. Using these properties, we show that the sender's
optimization problem can be solved by searching within a finite set.",Persuasion with Coarse Communication,2019-10-30 00:36:17,"Yunus C. Aybas, Eray Turkel","http://arxiv.org/abs/1910.13547v6, http://arxiv.org/pdf/1910.13547v6",econ.TH
33867,th,"We study a disclosure game with a large evidence space. There is an unknown
binary state. A sender observes a sequence of binary signals about the state
and discloses a left truncation of the sequence to a receiver in order to
convince him that the state is good. We focus on truth-leaning equilibria (cf.
Hart et al. (2017)), where the sender discloses truthfully when doing so is
optimal, and the receiver takes off-path disclosure at face value. In
equilibrium, seemingly sub-optimal truncations are disclosed, and the
disclosure contains the longest truncation that yields the maximal difference
between the number of good and bad signals. We also study a general framework
of disclosure games which is compatible with large evidence spaces, a wide
range of disclosure technologies, and finitely many states. We characterize the
unique equilibrium value function of the sender and propose a method to
construct equilibria for a broad class of games.",Disclosure Games with Large Evidence Spaces,2019-10-30 05:45:47,Shaofei Jiang,"http://arxiv.org/abs/1910.13633v3, http://arxiv.org/pdf/1910.13633v3",econ.TH
33868,th,"We study how a principal should optimally choose between implementing a new
policy and maintaining the status quo when information relevant for the
decision is privately held by agents. Agents are strategic in revealing their
information; the principal cannot use monetary transfers to elicit this
information, but can verify an agent's claim at a cost. We characterize the
mechanism that maximizes the expected utility of the principal. This mechanism
can be implemented as a cardinal voting rule, in which agents can either cast a
baseline vote, indicating only whether they are in favor of the new policy, or
they make specific claims about their type. The principal gives more weight to
specific claims and verifies a claim whenever it is decisive.",Costly Verification in Collective Decisions,2019-10-30 19:50:51,"Albin Erlanson, Andreas Kleiner","http://arxiv.org/abs/1910.13979v2, http://arxiv.org/pdf/1910.13979v2",econ.TH
33985,th,"We offer a search-theoretic model of statistical discrimination, in which
firms treat identical groups unequally based on their occupational choices. The
model admits symmetric equilibria in which the group characteristic is ignored,
but also asymmetric equilibria in which a group is statistically discriminated
against, even when symmetric equilibria are unique. Moreover, a robust
possibility is that symmetric equilibria become unstable when the group
characteristic is introduced. Unlike most previous literature, our model can
justify affirmative action since it eliminates asymmetric equilibria without
distorting incentives.",A Search Model of Statistical Discrimination,2020-04-14 19:41:52,"Jiadong Gu, Peter Norman","http://arxiv.org/abs/2004.06645v2, http://arxiv.org/pdf/2004.06645v2",econ.TH
33869,th,"Since the 1990s, artificial intelligence (AI) systems have achieved
'superhuman performance' in major zero-sum games, where winning has an
unambiguous definition. However, most economic and social interactions are
non-zero-sum, where measuring 'performance' is a non-trivial task. In this
paper, I introduce a novel benchmark, super-Nash performance, and a solution
concept, optimin, whereby every player maximizes their minimal payoff under
unilateral profitable deviations of the others. Optimin achieves super-Nash
performance in that, for every Nash equilibrium, there exists an optimin where
each player not only receives but also guarantees super-Nash payoffs, even if
other players deviate unilaterally and profitably from the optimin. Further,
optimin generalizes and unifies several key results across domains: it
coincides with (i) the maximin strategies in zero-sum games, and (ii) the core
in cooperative games when the core is nonempty, though it exists even if the
core is empty; additionally, optimin generalizes (iii) Nash equilibrium in
$n$-person constant-sum games. Finally, optimin is consistent with the
direction of non-Nash deviations in games in which cooperation has been
extensively studied, including the finitely repeated prisoner's dilemma, the
centipede game, the traveler's dilemma, and the finitely repeated public goods
game.",Super-Nash performance in games,2019-11-30 17:31:59,Mehmet S. Ismail,"http://arxiv.org/abs/1912.00211v5, http://arxiv.org/pdf/1912.00211v5",econ.TH
33870,th,"Paul A. Samuelson's (1966) capitulation during the so-called Cambridge
controversy on the re-switching of techniques in capital theory had
implications not only in pointing at a supposed internal contradiction of the
marginal theory of production and distribution, but also in preserving vested
interests in the academic and political world. Based on a new non-switching
theorem, the present paper demonstrates that Samuelson's capitulation was
logically groundless from the point of view of the economic theory of
production.",Refuting Samuelson's Capitulation on the Re-switching of Techniques in the Cambridge Capital Controversy,2019-12-03 12:12:10,Carlo Milana,"http://arxiv.org/abs/1912.01250v3, http://arxiv.org/pdf/1912.01250v3",econ.TH
33871,th,"We study collusion in a second-price auction with two bidders in a dynamic
environment. One bidder can make a take-it-or-leave-it collusion proposal,
which consists of both an offer and a request of bribes, to the opponent. We
show that there always exists a robust equilibrium in which the collusion
success probability is one. In the equilibrium, for each type of initiator the
expected payoff is generally higher than the counterpart in any robust
equilibria of the single-option model (Es\""{o} and Schummer (2004)) and any
other separating equilibria in our model.",Perfect bidder collusion through bribe and request,2019-12-08 06:35:28,"Jingfeng Lu, Zongwei Lu, Christian Riis","http://arxiv.org/abs/1912.03607v2, http://arxiv.org/pdf/1912.03607v2",econ.TH
33872,th,"Central to privacy concerns is that firms may use consumer data to price
discriminate. A common policy response is that consumers should be given
control over which firms access their data and how. Since firms learn about a
consumer's preferences based on the data seen and the consumer's disclosure
choices, the equilibrium implications of consumer control are unclear. We study
whether such measures improve consumer welfare in monopolistic and competitive
markets. We find that consumer control can improve consumer welfare relative to
both perfect price discrimination and no personalized pricing. First, consumers
can use disclosure to amplify competitive forces. Second, consumers can
disclose information to induce even a monopolist to lower prices. Whether
consumer control improves welfare depends on the disclosure technology and
market competitiveness. Simple disclosure technologies suffice in competitive
markets. When facing a monopolist, a consumer needs partial disclosure
possibilities to obtain any welfare gains.",Voluntary Disclosure and Personalized Pricing,2019-12-10 18:41:10,"S. Nageeb Ali, Greg Lewis, Shoshana Vasserman","http://arxiv.org/abs/1912.04774v2, http://arxiv.org/pdf/1912.04774v2",econ.TH
33873,th,"We present a fuzzy version of the Group Identification Problem (""Who is a
J?"") introduced by Kasher and Rubinstein (1997). We consider a class $N =
\{1,2,\ldots,n\}$ of agents, each one with an opinion about the membership in a
group J of the members of the society, consisting of a function $\pi : N \to
[0; 1]$, indicating for each agent, including herself, the degree of membership
to J. We consider the problem of aggregating those functions, satisfying
different sets of axioms and characterizing different aggregators. While some
results are analogous to those of the original crisp model, the fuzzy version
is able to overcome some of the main impossibility results of Kasher and
Rubinstein.",Fuzzy Group Identification Problems,2019-12-11 17:48:59,"Federico Fioravanti, Fernando Tohmé","http://dx.doi.org/10.1016/j.fss.2021.07.006, http://arxiv.org/abs/1912.05540v2, http://arxiv.org/pdf/1912.05540v2",econ.TH
33874,th,"This article is addressing the problem of river sharing between two agents
along a river in the presence of negative externalities. Where, each agent
claims river water based on the hydrological characteristics of the
territories. The claims can be characterized by some international framework
(principles) of entitlement. These international principles are appears to be
inequitable by the other agents in the presence of negative externalities. The
negotiated treaties address sharing water along with the issue of negative
externalities imposed by the upstream agent on the downstream agents. The
market-based bargaining mechanism is used for modeling and for the characterization
of agreement points.",A Bilateral River Bargaining Problem with Negative Externality,2019-12-12 12:28:10,"Shivshanker Singh Patel, Parthasarathy Ramachandran","http://arxiv.org/abs/1912.05844v1, http://arxiv.org/pdf/1912.05844v1",econ.TH
33875,th,"Kasher and Rubinstein (1997) introduced the problem of classifying the
members of a group in terms of the opinions of their potential members. This
involves a finite set of agents $N = \{1,2,\ldots,n\}$, each one having an
opinion about which agents should be classified as belonging to a specific
subgroup J. A Collective Identity Function (CIF) aggregates those opinions
yielding the class of members deemed $J$. Kasher and Rubinstein postulate
axioms, intended to ensure fair and socially desirable outcomes, characterizing
different CIFs. We follow their lead by replacing their liberal axiom by other
axioms, constraining the spheres of influence of the agents. We show that some
of them lead to different CIFs while in another instance we find an
impossibility result.",Alternative Axioms in Group Identification Problems,2019-12-11 17:57:48,"Federico Fioravanti, Fernando Tohmé","http://arxiv.org/abs/1912.05961v1, http://arxiv.org/pdf/1912.05961v1",econ.TH
33876,th,"In an auction each party bids a certain amount and the one which bids the
highest is the winner. Interestingly, auctions can also be used as models for
other real-world systems. In an all pay auction all parties must pay a forfeit
for bidding. In the most commonly studied all pay auction, parties forfeit
their entire bid, and this has been considered as a model for expenditure on
political campaigns. Here we consider a number of alternative forfeits which
might be used as models for different real-world competitions, such as
preparing bids for defense or infrastructure contracts.",All-Pay Auctions with Different Forfeits,2020-02-07 06:07:47,"Benjamin Kang, James Unwin","http://arxiv.org/abs/2002.02599v1, http://arxiv.org/pdf/2002.02599v1",econ.TH
33879,th,"In this paper we propose a mechanism for the allocation of pipeline
capacities, assuming that the participants bidding for capacities do have
subjective evaluation of various network routes. The proposed mechanism is
based on the concept of bidding for route-quantity pairs. Each participant
defines a limited number of routes and places multiple bids, corresponding to
various quantities, on each of these routes. The proposed mechanism assigns a
convex combination of the submitted bids to each participant; thus it is called
a convex combinatorial auction. The capacity payments in the proposed model are
determined according to the Vickrey-Clarke-Groves principle. We compare the
efficiency of the proposed algorithm with a simplified model of the method
currently used for pipeline capacity allocation in the EU (simultaneous
ascending clock auction of pipeline capacities) via simulation, according to
various measures, such as resulting utility of players, utilization of network
capacities, total income of the auctioneer and fairness.",Convex Combinatorial Auction of Pipeline Network Capacities,2020-02-16 14:19:41,Dávid Csercsik,"http://dx.doi.org/10.1016/j.eneco.2022.106084, http://arxiv.org/abs/2002.06554v1, http://arxiv.org/pdf/2002.06554v1",econ.TH
33880,th,"In this work I clarify VAT evasion incentives through a game theoretical
approach. Traditionally, evasion has been linked to decreasing risk aversion at
higher revenues (Allingham and Sandmo (1972), Cowell (1985) (1990)). I claim
tax evasion to be a rational choice when compliance is stochastically more
expensive than evading, even in the absence of controls and
sanctions. I create a framework able to measure the incentives for taxpayers to
comply. The incentives here are deductions of specific VAT-documented expenses
from the income tax. The issue is very well known, and deduction policies are
at work in many countries. The aim is to compute the right parameters for each
precise class of taxpayers. VAT evasion is collusive conduct between the two
counterparts of the transaction. I therefore first explore the convenience for
the two private counterparts to agree on the joint evasion and to form a
coalition. Crucial is that compliance incentives break the agreement among the
transaction participants' coalition about evading. The game solution leads to
boundaries for marginal tax rates or deduction percentages, depending on
parameters, able to create incentives to comply The stylized example presented
here for VAT policies, already in use in many countries, is an attempt to
establish a more general method for tax design, able to make compliance the
""dominant strategy"", satisfying the ""outside option"" constraint represented by
evasion, even in absence of audit and sanctions. The theoretical results
derived here can be easily applied to real data for precise tax design
engineering.",VAT Compliance Incentives,2020-02-18 23:09:37,Maria-Augusta Miceli,"http://arxiv.org/abs/2002.07862v3, http://arxiv.org/pdf/2002.07862v3",econ.TH
33881,th,"For a many-to-many matching market, we study the lattice structure of the set
of random stable matchings. We define a partial order on the random stable set
and present two intuitive binary operations to compute the least upper bound
and the greatest lower bound for each side of the matching market. Then, we
prove that with these binary operations the set of random stable matchings
forms two dual lattices.",Lattice structure of the random stable set in many-to-many matching market,2020-02-19 16:10:31,"Noelia Juarez, Pablo A. Neme, Jorge Oviedo","http://arxiv.org/abs/2002.08156v5, http://arxiv.org/pdf/2002.08156v5",econ.TH
33882,th,"We consider a principal agent project selection problem with asymmetric
information. There are $N$ projects and the principal must select exactly one
of them. Each project provides some profit to the principal and some payoff to
the agent and these profits and payoffs are the agent's private information. We
consider the principal's problem of finding an optimal mechanism for two
different objectives: maximizing expected profit and maximizing the probability
of choosing the most profitable project. Importantly, we assume partial
verifiability so that the agent cannot report a project to be more profitable
to the principal than it actually is. Under this no-overselling constraint, we
characterize the set of implementable mechanisms. Using this characterization,
we find that in the case of two projects, the optimal mechanism under both
objectives takes the form of a simple cutoff mechanism. The simple structure of
the optimal mechanism also allows us to find evidence in support of the
well-known ally principle, which says that the principal delegates more authority to
an agent who shares their preferences.",Project selection with partially verifiable information,2020-07-02 09:29:38,"Sumit Goel, Wade Hann-Caruthers","http://arxiv.org/abs/2007.00907v2, http://arxiv.org/pdf/2007.00907v2",econ.TH
33883,th,"Let V be a finite society whose members express weak orderings (hence also
indifference, possibly) about two alternatives. We show a simple representation
formula that is valid for all, and only, anonymous, non-manipulable, binary
social choice functions on V . The number of such functions is $2^{n+1}$ if V
contains $n$ agents.","Anonymous, non-manipulable, binary social choice",2020-07-03 11:36:33,"Achille Basile, Surekha Rao, K. P. S. Bhaskara Rao","http://arxiv.org/abs/2007.01552v1, http://arxiv.org/pdf/2007.01552v1",econ.TH
33884,th,"We consider a dynamic moral hazard problem between a principal and an agent,
where the sole instrument the principal has to incentivize the agent is the
disclosure of information. The principal aims at maximizing the (discounted)
number of times the agent chooses a particular action, e.g., to work hard. We
show that there exists an optimal contract, where the principal stops
disclosing information as soon as its most preferred action is a static best
reply for the agent or else continues disclosing information until the agent
perfectly learns the principal's private information. If the agent perfectly
learns the state, he learns it in finite time with probability one; the more
patient the agent, the later he learns it.",Contracting over persistent information,2020-07-12 16:19:27,"Wei Zhao, Claudio Mezzetti, Ludovic Renou, Tristan Tomala","http://arxiv.org/abs/2007.05983v3, http://arxiv.org/pdf/2007.05983v3",econ.TH
33885,th,"Evidence games study situations where a sender persuades a receiver by
selectively disclosing hard evidence about an unknown state of the world.
Evidence games often have multiple equilibria. Hart et al. (2017) propose to
focus on truth-leaning equilibria, i.e., perfect Bayesian equilibria where the
sender discloses truthfully when indifferent, and the receiver takes off-path
disclosure at face value. They show that a truth-leaning equilibrium is an
equilibrium of a perturbed game where the sender has an infinitesimal reward
for truth-telling. We show that, when the receiver's action space is finite,
truth-leaning equilibrium may fail to exist, and it is not equivalent to
equilibrium of the perturbed game. To restore existence, we introduce a
disturbed game with a small uncertainty about the receiver's payoff. A
purifiable truthful equilibrium is the limit of a sequence of truth-leaning
equilibria in the disturbed games as the disturbances converge to zero. It
exists and features a simple characterization. A truth-leaning equilibrium that
is also purifiable truthful is an equilibrium of the perturbed game. Moreover,
purifiable truthful equilibria are receiver optimal and give the receiver the
same payoff as the optimal deterministic mechanism.",Equilibrium Refinement in Finite Action Evidence Games,2020-07-13 17:28:25,Shaofei Jiang,"http://arxiv.org/abs/2007.06403v2, http://arxiv.org/pdf/2007.06403v2",econ.TH
33886,th,"Dynamic tolls present an opportunity for municipalities to eliminate
congestion and fund infrastructure. Imposing tolls that regulate travel along a
public highway through monetary fees raises concerns of inequity. In this
article, we introduce the concept of time poverty, emphasize its value in
policy-making in the same way income poverty is already considered, and argue
that time-varying tolls that produce time poverty pose a potential equity
concern. We also compare the cost burdens of a no-toll, system-optimal toll,
and a proposed ``time-equitable"" toll on heterogeneous traveler groups using an
analytical Vickrey bottleneck model where travelers make departure time
decisions to arrive at their destination at a fixed time. We show that the
time-equitable toll is able to eliminate congestion while creating equitable
travel patterns amongst traveler groups.",Time-Equitable Dynamic Tolling Scheme For Single Bottlenecks,2020-07-11 22:19:57,"John W Helsel, Venktesh Pandey, Stephen D. Boyles","http://arxiv.org/abs/2007.07091v1, http://arxiv.org/pdf/2007.07091v1",econ.TH
33887,th,"We study a dynamic stopping game between a principal and an agent. The agent
is privately informed about his type. The principal learns about the agent's
type from a noisy performance measure, which can be manipulated by the agent
via a costly and hidden action. We fully characterize the unique Markov
equilibrium of this game. We find that terminations/market crashes are often
preceded by a spike in (expected) performance. Our model also predicts that,
due to endogenous signal manipulation, too much transparency can inhibit
learning. As the players get arbitrarily patient, the principal elicits no
useful information from the observed signal.",Learning from Manipulable Signals,2020-07-17 08:15:23,"Mehmet Ekmekci, Leandro Gorno, Lucas Maestri, Jian Sun, Dong Wei","http://arxiv.org/abs/2007.08762v4, http://arxiv.org/pdf/2007.08762v4",econ.TH
33888,th,"This paper characterizes informational outcomes in a model of dynamic
signaling with vanishing commitment power. It shows that contrary to popular
belief, informative equilibria with payoff-relevant signaling can exist without
requiring unreasonable off-path beliefs. The paper provides a sharp
characterization of possible separating equilibria: all signaling must take
place through attrition, when the weakest type mixes between revealing its own type
and pooling with the stronger types. The framework explored in the paper is
general, imposing only minimal assumptions on payoff monotonicity and
single-crossing. Applications to bargaining, monopoly price signaling, and
labor market signaling are developed to demonstrate the results in specific
contexts.",Only Time Will Tell: Credible Dynamic Signaling,2020-07-19 06:02:25,Egor Starkov,"http://arxiv.org/abs/2007.09568v3, http://arxiv.org/pdf/2007.09568v3",econ.TH
33889,th,"We prove that a random choice rule satisfies Luce's Choice Axiom if and only
if its support is a choice correspondence that satisfies the Weak Axiom of
Revealed Preference, thus it consists of alternatives that are optimal
according to some preference, and random choice then occurs according to a tie
breaking among such alternatives that satisfies Renyi's Conditioning Axiom. Our
result shows that the Choice Axiom is, in a precise formal sense, a
probabilistic version of the Weak Axiom. It thus supports Luce's view of his
own axiom as a ""canon of probabilistic rationality.""",A Canon of Probabilistic Rationality,2020-07-18 00:43:23,"Simone Cerreia-Vioglio, Per Olov Lindberg, Fabio Maccheroni, Massimo Marinacci, Aldo Rustichini","http://arxiv.org/abs/2007.11386v2, http://arxiv.org/pdf/2007.11386v2",econ.TH
33890,th,"Data analytics allows an analyst to gain insight into underlying populations
through the use of various computational approaches, including Monte Carlo
methods. This paper discusses an approach to apply Monte Carlo methods to
hedonic games. Hedonic games have gained popularity over the last two decades,
leading to several research articles concerned with necessary and/or sufficient
conditions for the existence of a core partition. Researchers have used
analytical methods for this work. We propose that using a numerical approach
will give insights that might not be available through current analytical
methods. In this paper, we describe an approach to representing hedonic games,
with strict preferences, in a matrix form that can easily be generated; that
is, a hedonic game with randomly generated preferences for each player. Using
this generative approach, we were able to create and solve, i.e., find any core
partitions of, millions of hedonic games. Our Monte Carlo experiment generated
games with up to thirteen players. The results discuss the distribution of core
sizes for games with a given number of players. We also discuss computational
considerations. Our numerical
study of hedonic games gives insight into the underlying properties of hedonic
games.",Generating Empirical Core Size Distributions of Hedonic Games using a Monte Carlo Method,2020-07-23 19:54:41,"Andrew J. Collins, Sheida Etemadidavan, Wael Khallouli","http://arxiv.org/abs/2007.12127v1, http://arxiv.org/pdf/2007.12127v1",econ.TH
33891,th,"We examine a patient player's behavior when he can build reputations in front
of a sequence of myopic opponents. With positive probability, the patient
player is a commitment type who plays his Stackelberg action in every period.
We characterize the patient player's action frequencies in equilibrium. Our
results clarify the extent to which reputations can refine the patient player's
behavior and provide new insights into entry deterrence, business transactions,
and capital taxation. Our proof makes a methodological contribution by
establishing a new concentration inequality.",Equilibrium Behaviors in Repeated Games,2020-07-28 08:58:15,"Yingkai Li, Harry Pei","http://arxiv.org/abs/2007.14002v4, http://arxiv.org/pdf/2007.14002v4",econ.TH
33892,th,"We study dynamic signaling when the informed party does not observe the
signals generated by her actions. A long-run player signals her type
continuously over time to a myopic second player who privately monitors her
behavior; in turn, the myopic player transmits his private inferences back
through an imperfect public signal of his actions. Preferences are
linear-quadratic and the information structure is Gaussian. We construct linear
Markov equilibria using belief states up to the long-run player's
$\textit{second-order belief}$. Because of the private monitoring, this state
is an explicit function of the long-run player's past play. A novel separation
effect then emerges through this second-order belief channel, altering the
traditional signaling that arises when beliefs are public. Applications to
models of leadership, reputation, and trading are examined.",Signaling with Private Monitoring,2020-07-30 18:14:03,"Gonzalo Cisternas, Aaron Kolb","http://arxiv.org/abs/2007.15514v1, http://arxiv.org/pdf/2007.15514v1",econ.TH
33986,th,"The basic concepts of the differential geometry are shortly reviewed and
applied to the study of VES production function in the spirit of the works of
V\^ilcu and collaborators. A similar characterization is given for a more
general production function, namely the Kadiyala production function, in the
case of developable surfaces.",A geometric characterization of VES and Kadiyala-type production functions,2020-04-20 23:32:04,"Nicolò Cangiotti, Mattia Sensi","http://dx.doi.org/10.2298/FIL2105661C, http://arxiv.org/abs/2004.09617v1, http://arxiv.org/pdf/2004.09617v1",econ.TH
33893,th,"In this paper I discuss truthful equilibria in common agency models.
Specifically, I provide general conditions under which truthful equilibria are
plausible, easy to calculate and efficient. These conditions generalize similar
results in the literature and allow the use of truthful equilibria in novel
economic applications. Moreover, I provide two such applications. The first
application is a market game in which multiple sellers sell a uniform good to a
single buyer. The second application is a lobbying model in which there are
externalities in contributions between lobbies. This last example indicates
that externalities between principals do not necessarily prevent efficient
equilibria. In this regard, this paper provides a set of conditions under
which truthful equilibria in common agency models with externalities are
efficient.",Truthful Equilibria in Generalized Common Agency Models,2020-07-31 13:25:28,Ilias Boultzis,"http://arxiv.org/abs/2007.15942v1, http://arxiv.org/pdf/2007.15942v1",econ.TH
33894,th,"Generating payoff matrices of normal-form games at random, we calculate the
frequency of games with a unique pure strategy Nash equilibrium in the ensemble
of $n$-player, $m$-strategy games. These are perfectly predictable as they must
converge to the Nash equilibrium. We then consider a wider class of games that
converge under a best-response dynamic, in which each player chooses their
optimal pure strategy successively. We show that the frequency of convergent
games goes to zero as the number of players or the number of strategies goes to
infinity. In the $2$-player case, we show that for large games with at least
$10$ strategies, convergent games with multiple pure strategy Nash equilibria
are more likely than games with a unique Nash equilibrium. Our novel approach
uses an $n$-partite graph to describe games.",The Frequency of Convergent Games under Best-Response Dynamics,2020-11-02 18:33:19,"Samuel C. Wiese, Torsten Heinrich","http://arxiv.org/abs/2011.01052v1, http://arxiv.org/pdf/2011.01052v1",econ.TH
33895,th,"We study the problem of assigning objects to agents in the presence of
arbitrary linear constraints when agents are allowed to be indifferent between
objects. Our main contribution is the generalization of the (Extended)
Probabilistic Serial mechanism via a new mechanism called the Constrained
Serial Rule. This mechanism is computationally efficient and maintains
desirable efficiency and fairness properties, namely constrained ordinal
efficiency and envy-freeness among agents of the same type. Our mechanism is
based on a linear programming approach that accounts for all constraints and
provides a re-interpretation of the bottleneck set of agents that form a
crucial part of the Extended Probabilistic Serial mechanism.",Constrained Serial Rule on the Full Preference Domain,2020-11-02 21:16:57,Priyanka Shende,"http://arxiv.org/abs/2011.01178v1, http://arxiv.org/pdf/2011.01178v1",econ.TH
33896,th,"This paper analyses how risk-taking behaviour and preferences over
consumption rank can emerge as a neutrally stable equilibrium when individuals
face an anti-coordination task. If in an otherwise homogeneous society
information about relative consumption becomes available, this cannot be
ignored. Despite concavity in the objective function, stable types must be
willing to accept risky gambles to differentiate themselves, and thus allow for
coordination. Relative consumption acts as a form of costly communication. This
suggests that status preferences are salient in settings where miscoordination is
particularly costly.",Evolution of Risk-Taking Behaviour and Status Preferences in Anti-Coordination Games,2020-11-05 13:26:28,Manuel Staab,"http://arxiv.org/abs/2011.02740v2, http://arxiv.org/pdf/2011.02740v2",econ.TH
33897,th,"Cross-group externalities and network effects in two-sided platform markets
shape market structure and competition policy, and are the subject of extensive
study. Less understood are the within-group externalities that arise when the
platform designs many-to-many matchings: the value to agent $i$ of matching
with agent $j$ may depend on the set of agents with which $j$ is matched. These
effects are present in a wide range of settings in which firms compete for
individuals' custom or attention. I characterize platform-optimal matchings in
a general model of many-to-many matching with within-group externalities. I
prove a set of comparative statics results for optimal matchings, and show how
these can be used to analyze the welfare effects of various changes, including
vertical integration by the platform, horizontal mergers between firms on one
side of the market, and changes in the platform's information structure. I then
explore market structure and regulation in two in-depth applications. The first
is monopolistic competition between firms on a retail platform such as Amazon.
The second is a multi-channel video program distributor (MVPD) negotiating
transfer fees with television channels and bundling these to sell to
individuals.",Platform-Mediated Competition,2020-11-08 03:57:05,Quitzé Valenzuela-Stookey,"http://arxiv.org/abs/2011.03879v1, http://arxiv.org/pdf/2011.03879v1",econ.TH
33898,th,"In many settings, multiple uninformed agents bargain simultaneously with a
single informed agent in each of multiple periods. For example, workers and
firms negotiate each year over salaries, and the firm has private information
about the value of workers' output. I study the effects of transparency in
these settings; uninformed agents may observe others' past bargaining outcomes,
e.g. wages. I show that in equilibrium, each uninformed agent will choose in
each period whether to try to separate the informed agent's types (screen) or
receive the same outcome regardless of type (pool). In other words, the agents
engage in a form of experimentation via their bargaining strategies. There are
two main theoretical insights. First, there is a complementary screening
effect: the more agents screen in equilibrium, the lower the information rents
that each will have to pay. Second, the payoff of the informed agent will have
a certain supermodularity property, which implies that equilibria with
screening are ""fragile"" to deviations by uninformed agents. I apply the results
to study pay-secrecy regulations and anti-discrimination policy. I show that,
surprisingly, penalties for pay discrimination have no impact on bargaining
outcomes. I discuss how this result depends on the legal framework for
discrimination cases, and suggest changes to enhance the efficacy of
anti-discrimination regulations. In particular, anti-discrimination law should
preclude the so-called ""salary negotiation defense"".",Screening and Information-Sharing Externalities,2020-11-08 19:25:42,Quitzé Valenzuela-Stookey,"http://arxiv.org/abs/2011.04013v1, http://arxiv.org/pdf/2011.04013v1",econ.TH
33899,th,"We study the problem of allocating $n$ indivisible objects to $n$ agents when
the latter can express strict and purely ordinal preferences and preference
intensities. We suggest a rank-based criterion to make ordinal interpersonal
comparisons of preference intensities in such an environment without assuming
interpersonally comparable utilities. We then define an allocation to be
""intensity-efficient"" if it is Pareto efficient and also such that, whenever
another allocation assigns the same pairs of objects to the same pairs of
agents but in a ""flipped"" way, then the former assigns the commonly preferred
alternative within every such pair to the agent who prefers it more. We show
that an intensity-efficient allocation exists for all 1,728 profiles when
$n=3$.",Intensity-Efficient Allocations,2020-11-09 13:23:22,Georgios Gerasimou,"http://arxiv.org/abs/2011.04306v1, http://arxiv.org/pdf/2011.04306v1",econ.TH
33902,th,"Despite the tremendous successes of science in providing knowledge and
technologies, the Replication Crisis has highlighted that scientific
institutions have much room for improvement. Peer-review is one target of
criticism and suggested reforms. However, despite numerous controversies about
peer-review systems, plus the obvious complexity of the incentives affecting the
decisions of authors and reviewers, there is very little systematic and
strategic analysis of peer-review systems. In this paper, we begin to address
this feature of the peer-review literature by applying the tools of game
theory. We use simulations to develop an evolutionary model based around a game
played by authors and reviewers, before exploring some of its tendencies. In
particular, we examine the relative impact of double-blind peer-review and open
review on incentivising reviewer effort under a variety of parameters. We also
compare (a) the impact of one review system versus another with (b) other
alterations, such as higher costs of reviewing. We find that there is no reliable
difference between peer-review systems in our model. Furthermore, under some
conditions, higher payoffs for good reviewing can lead to less (rather than
more) author effort under open review. Finally, compared to the other
parameters that we vary, it is the exogenous utility of author effort that
makes an important and reliable difference in our model, which raises the
possibility that peer-review might not be an important target for institutional
reforms.",Double blind vs. open review: an evolutionary game logit-simulating the behavior of authors and reviewers,2020-11-16 11:59:30,"Mantas Radzvilas, Francesco De Pretis, William Peden, Daniele Tortoli, Barbara Osimani","http://arxiv.org/abs/2011.07797v1, http://arxiv.org/pdf/2011.07797v1",econ.TH
33903,th,"We identify a new dynamic agency problem: that of incentivising the prompt
disclosure of productive information. To study it, we introduce a general model
in which a technological breakthrough occurs at an uncertain time and is
privately observed by an agent, and a principal must incentivise disclosure via
her control of a payoff-relevant physical allocation. We uncover a deadline
structure of optimal mechanisms: they have a simple deadline form in an
important special case, and a graduated deadline structure in general. We apply
our results to the design of unemployment insurance schemes.",Screening for breakthroughs,2020-11-19 23:14:22,"Gregorio Curello, Ludvig Sinander","http://arxiv.org/abs/2011.10090v7, http://arxiv.org/pdf/2011.10090v7",econ.TH
33904,th,"We study a persuasion problem in which a sender designs an information
structure to induce a non-Bayesian receiver to take a particular action. The
receiver, who is privately informed about his preferences, is a wishful
thinker: he is systematically overoptimistic about the most favorable outcomes.
We show that wishful thinking can lead to a qualitative shift in the structure
of optimal persuasion compared to the Bayesian case, whenever the sender is
uncertain about what the receiver perceives as the best-case outcome in his
decision problem.",Persuading a Wishful Thinker,2020-11-27 20:25:01,"Victor Augias, Daniel M. A. Barreto","http://arxiv.org/abs/2011.13846v5, http://arxiv.org/pdf/2011.13846v5",econ.TH
33905,th,"Based on an axiomatic approach we propose two related novel one-parameter
families of indicators of change which relate classical indicators of change
such as absolute change, relative change, and the log-ratio.",On Absolute and Relative Change,2020-11-30 17:05:41,"Silvan Brauen, Philipp Erpf, Micha Wasem","http://arxiv.org/abs/2011.14807v1, http://arxiv.org/pdf/2011.14807v1",econ.TH
33906,th,"I study dynamic random utility with finite choice sets and exogenous total
menu variation, which I refer to as stochastic utility (SU). First, I
characterize SU when each choice set has three elements. Next, I prove several
mathematical identities for joint, marginal, and conditional Block--Marschak
sums, which I use to obtain two characterizations of SU when each choice set
but the last has three elements. As a corollary under the same cardinality
restrictions, I sharpen an axiom to obtain a characterization of SU with full
support over preference tuples. I conclude by characterizing SU without
cardinality restrictions. All of my results hold over an arbitrary finite
discrete time horizon.",Dynamic Random Choice,2021-01-30 06:32:56,Ricky Li,"http://arxiv.org/abs/2102.00143v2, http://arxiv.org/pdf/2102.00143v2",econ.TH
33907,th,"This paper provides a behavioral analysis of conservatism in beliefs. I
introduce a new axiom, Dynamic Conservatism, that relaxes Dynamic Consistency
when information and prior beliefs ""conflict."" When the agent is a subjective
expected utility maximizer, Dynamic Conservatism implies that conditional
beliefs are a convex combination of the prior and the Bayesian posterior.
Conservatism may result in belief dynamics consistent with confirmation bias,
representativeness, and the good news-bad news effect, suggesting a deeper
behavioral connection between these biases. An index of conservatism and a
notion of comparative conservatism are characterized. Finally, I extend
conservatism to the case of an agent with incomplete preferences that admit a
multiple priors representation.",Conservative Updating,2021-01-30 08:13:23,Matthew Kovach,"http://arxiv.org/abs/2102.00152v1, http://arxiv.org/pdf/2102.00152v1",econ.TH
33908,th,"We study the robust double auction mechanisms, that is, the double auction
mechanisms that satisfy dominant strategy incentive compatibility, ex-post
individual rationality and ex-post budget balance. We first establish that the
price in any robust mechanism does not depend on the valuations of the trading
players. We next establish that, with a non-bossiness assumption, the price in
any robust mechanism does not depend on players' valuations at all, whether
trading or non-trading. Our main result is the characterization that, with the
non-bossiness assumption along with other assumptions on the properties of the
mechanism, the generalized posted price mechanism, in which a constant price is
posted for each possible set of traders, is the only robust double auction
mechanism. We also show that, even without the non-bossiness assumption, it is
quite difficult to find a reasonable robust double auction mechanism other than
the generalized posted price mechanism.",Robust double auction mechanisms,2021-02-01 10:12:24,Kiho Yoon,"http://arxiv.org/abs/2102.00669v3, http://arxiv.org/pdf/2102.00669v3",econ.TH
33909,th,"We propose a multivariate extension of Yaari's dual theory of choice under
risk. We show that a decision maker with a preference relation on
multidimensional prospects that preserves first order stochastic dominance and
satisfies comonotonic independence behaves as if evaluating prospects using a
weighted sum of quantiles. Both the notions of quantiles and of comonotonicity
are extended to the multivariate framework using optimal transportation maps.
Finally, risk averse decision makers are characterized within this framework
and their local utility functions are derived. Applications to the measurement
of multi-attribute inequality are also discussed.",Dual theory of choice with multivariate risks,2021-02-04 15:38:18,"Alfred Galichon, Marc Henry","http://dx.doi.org/10.1016/j.jet.2011.06.002, http://arxiv.org/abs/2102.02578v2, http://arxiv.org/pdf/2102.02578v2",econ.TH
33910,th,"We introduce the refined assortment optimization problem where a firm may
decide to make some of its products harder to get instead of making them
unavailable as in the traditional assortment optimization problem. Airlines,
for example, offer fares with severe restrictions rather than making them
unavailable. This is a more subtle way of handling the trade-off between demand
induction and demand cannibalization. For the latent class MNL model, a firm
that engages in refined assortment optimization can make up to $\min(n,m)$
times more than one that insists on traditional assortment optimization, where
$n$ is the number of products and $m$ the number of customer types.
Surprisingly, the revenue-ordered assortment heuristic has the same performance
guarantees relative to {\em personalized} refined assortment optimization as it
does to traditional assortment optimization. Based on this finding, we
construct refinements of the revenue-order heuristic and measure their improved
performance relative to the revenue-ordered assortment and the optimal
traditional assortment optimization problem. We also provide tight bounds on
the ratio of the expected revenues for the refined versus the traditional
assortment optimization for some well known discrete choice models.",The Refined Assortment Optimization Problem,2021-02-05 10:59:34,"Gerardo Berbeglia, Alvaro Flores, Guillermo Gallego","http://arxiv.org/abs/2102.03043v1, http://arxiv.org/pdf/2102.03043v1",econ.TH
33911,th,"Sanctioned by its constitution, India is home to the world's most
comprehensive affirmative action program, where historically discriminated
groups are protected with vertical reservations implemented as ""set asides,""
and other disadvantaged groups are protected with horizontal reservations
implemented as ""minimum guarantees."" A mechanism mandated by the Supreme Court
in 1995 suffers from important anomalies, triggering countless litigations in
India. Foretelling a recent reform correcting the flawed mechanism, we propose
the 2SMG mechanism that resolves all anomalies, and characterize it with
desiderata reflecting laws of India. Subsequently rediscovered with a high
court judgment and enforced in Gujarat, 2SMG is also endorsed by Saurav Yadav
v. State of UP (2020), in a Supreme Court ruling that rescinded the flawed
mechanism. While not explicitly enforced, 2SMG is indirectly enforced for an
important subclass of applications in India, because no other mechanism
satisfies the new mandates of the Supreme Court.",Can Economic Theory Be Informative for the Judiciary? Affirmative Action in India via Vertical and Horizontal Reservations,2021-02-05 17:09:33,"Tayfun Sönmez, M. Bumin Yenmez","http://arxiv.org/abs/2102.03186v2, http://arxiv.org/pdf/2102.03186v2",econ.TH
33912,th,"Experimental work regularly finds that individual choices are not
deterministically rationalized by well-defined preferences. Nonetheless, recent
work shows that data collected from many individuals can be stochastically
rationalized by a distribution of well-defined preferences. We study the
relationship between deterministic and stochastic rationalizability. We show
that a population can be stochastically rationalized even when half of the
individuals in the population cannot be deterministically rationalized. We also
find that the ability to detect individuals who are not deterministically
rationalized from population level data can decrease as the number of
observations increases.","Non-rationalizable Individuals, Stochastic Rationalizability, and Sampling",2021-02-06 01:17:14,"Changkuk Im, John Rehbeck","http://arxiv.org/abs/2102.03436v3, http://arxiv.org/pdf/2102.03436v3",econ.TH
33913,th,"We propose a multivariate extension of a well-known characterization by S.
Kusuoka of regular and coherent risk measures as maximal correlation
functionals. This involves an extension of the notion of comonotonicity to
random vectors through generalized quantile functions. Moreover, we propose to
replace the current law invariance, subadditivity and comonotonicity axioms by
an equivalent property we call strong coherence and that we argue has more
natural economic interpretation. Finally, we reformulate the computation of
regular and coherent risk measures as an optimal transportation problem, for
which we provide an algorithm and implementation.",Comonotonic measures of multivariate risks,2021-02-08 16:06:37,"Ivar Ekeland, Alfred Galichon, Marc Henry","http://dx.doi.org/10.1111/j.1467-9965.2010.00453.x, http://arxiv.org/abs/2102.04175v1, http://arxiv.org/pdf/2102.04175v1",econ.TH
33914,th,"This paper provides closed-form formulas for a multidimensional two-sided
matching problem with transferable utility and heterogeneity in tastes. When
the matching surplus is quadratic, the marginal distributions of the
characteristics are normal, and when the heterogeneity in tastes is of the
continuous logit type, as in Choo and Siow (J Polit Econ 114:172-201, 2006), we
show that the optimal matching distribution is also jointly normal and can be
computed in closed form from the model primitives. Conversely, the quadratic
surplus function can be identified from the optimal matching distribution, also
in closed-form. The closed-form formulas make it computationally easy to solve
problems with even a very large number of matches and allow for quantitative
predictions about the evolution of the solution as the technology and the
characteristics of the matching populations change.","Matching in Closed-Form: Equilibrium, Identification, and Comparative Statics",2021-02-08 19:02:13,"Raicho Bojilov, Alfred Galichon","http://dx.doi.org/10.1007/s00199-016-0961-8, http://arxiv.org/abs/2102.04295v2, http://arxiv.org/pdf/2102.04295v2",econ.TH
33915,th,"The random utility model is known to be unidentified, but there are times
when the model admits a unique representation. We offer two characterizations
for the existence of a unique random utility representation. Our first
characterization puts conditions on a graphical representation of the data set.
Non-uniqueness arises when multiple inflows can be assigned to multiple
outflows on this graph. Our second characterization provides a direct test for
uniqueness given a random utility representation. We also show that the support
of a random utility representation is identified if and only if the
representation itself is identified.",Identification in the Random Utility Model,2021-02-10 20:04:31,Christopher Turansick,"http://dx.doi.org/10.1016/j.jet.2022.105489, http://arxiv.org/abs/2102.05570v5, http://arxiv.org/pdf/2102.05570v5",econ.TH
33916,th,"We revisit Machina's local utility as a tool to analyze attitudes to
multivariate risks. We show that for non-expected utility maximizers choosing
between multivariate prospects, aversion to multivariate mean preserving
increases in risk is equivalent to the concavity of the local utility
functions, thereby generalizing Machina's result in Machina (1982). To analyze
comparative risk attitudes within the multivariate extension of rank dependent
expected utility of Galichon and Henry (2011), we extend Quiggin's monotone
mean and utility preserving increases in risk and show that the useful
characterization given in Landsberger and Meilijson (1994) still holds in the
multivariate case.",Local Utility and Multivariate Risk Aversion,2021-02-08 18:35:54,"Arthur Charpentier, Alfred Galichon, Marc Henry","http://dx.doi.org/10.1287/moor.2015.0736, http://arxiv.org/abs/2102.06075v2, http://arxiv.org/pdf/2102.06075v2",econ.TH
33918,th,"Aggregating risks from multiple sources can be complex and demanding, and
decision makers usually adopt heuristics to simplify the evaluation process.
This paper axiomatizes two closely related and yet different heuristics, narrow
bracketing and correlation neglect, by relaxing the independence axiom in the
expected utility theory. The flexibility of our framework allows for
applications in various economic problems. First, our model can explain the
experimental evidence of narrow bracketing over monetary gambles. Second, when
one source represents background risk, we can accommodate Rabin (2000)'s
critique and explain risk aversion over small gambles. Finally, when different
sources represent consumptions in different periods, we unify three seemingly
distinct models of time preferences and propose a novel model that
simultaneously satisfies indifference to temporal resolution of uncertainty,
separation of time and risk preferences, and recursivity in the domain of
lotteries. As a direct application to macroeconomics and finance, we provide an
alternative to Epstein and Zin (1989) which avoids the unreasonably high timing
premium discussed in Epstein, Farhi, and Strzalecki (2014).",A Theory of Choice Bracketing under Risk,2021-02-15 03:41:34,Mu Zhang,"http://arxiv.org/abs/2102.07286v2, http://arxiv.org/pdf/2102.07286v2",econ.TH
33919,th,"We consider the classical Ramsey-Cass-Koopmans capital accumulation model and
present three examples in which the Hamilton-Jacobi-Bellman (HJB) equation is
neither necessary nor sufficient for a function to be the value function. Next,
we present assumptions under which the HJB equation becomes a necessary and
sufficient condition for a function to be the value function, and using this
result, we propose a new method for solving the original problem using the
solution to the HJB equation. Our assumptions are so mild that many
macroeconomic growth models satisfy them. Therefore, our results ensure that
the solution to the HJB equation is rigorously the value function in many
macroeconomic models, and present a new solving method for these models.",On the Basis of the Hamilton-Jacobi-Bellman Equation in Economic Dynamics,2021-02-15 13:11:18,Yuhki Hosoya,"http://dx.doi.org/10.1016/j.physd.2023.133684, http://arxiv.org/abs/2102.07431v6, http://arxiv.org/pdf/2102.07431v6",econ.TH
33920,th,"This paper concerns Saez and Stantcheva's (2016) generalized social marginal
welfare weights (GSMWW), which aggregate losses and gains due to tax policies,
while incorporating non-utilitarian ethical considerations. The approach
evaluates local tax changes without a global social objective. I show that
local tax policy comparisons implicitly entail global comparisons. Moreover,
whenever welfare weights do not have a utilitarian structure, these implied
global comparisons are inconsistent. I argue that broader ethical values cannot
in general be represented simply by modifying the weights placed on benefits to
different people, and a more thoroughgoing modification of the utilitarian
approach is required.",Generalized Social Marginal Welfare Weights Imply Inconsistent Comparisons of Tax Policies,2021-02-15 20:54:15,Itai Sher,"http://arxiv.org/abs/2102.07702v3, http://arxiv.org/pdf/2102.07702v3",econ.TH
33921,th,"A monopolist wants to sell one item per period to a consumer with evolving
and persistent private information. The seller sets a price each period
depending on the history so far, but cannot commit to future prices. We show
that, regardless of the degree of persistence, any equilibrium under a D1-style
refinement gives the seller revenue no higher than what she would get from
posting all prices in advance.",Dynamic Pricing with Limited Commitment,2021-02-15 21:37:54,"Martino Banchio, Frank Yang","http://arxiv.org/abs/2102.07742v3, http://arxiv.org/pdf/2102.07742v3",econ.TH
33922,th,"We introduce uncertainty into a pure exchange economy and establish a
connection between Shannon's differential entropy and uniqueness of price
equilibria. The following conjecture is proposed under the assumption of a
uniform probability distribution: entropy is minimal if and only if the price
is unique for every economy. We show the validity of this conjecture for an
arbitrary number of goods and two consumers and, under certain conditions, for
an arbitrary number of consumers and two goods.",Minimal entropy and uniqueness of price equilibria in a pure exchange economy,2021-02-19 12:38:59,"Andrea Loi, Stefano Matta","http://arxiv.org/abs/2102.09827v1, http://arxiv.org/pdf/2102.09827v1",econ.TH
33923,th,"Models of updating a set of priors either do not allow a decision maker to
make inferences about her priors (full Bayesian updating or FB) or require an
extreme degree of selection (maximum likelihood updating or ML). I characterize
a general method for updating a set of priors, partial Bayesian updating (PB),
in which the decision maker (i) utilizes an event-dependent threshold to
determine whether a prior is likely enough, conditional on observed
information, and then (ii) applies Bayes' rule to the sufficiently likely
priors. I show that PB nests FB and ML and explore its behavioral properties.",Ambiguity and Partial Bayesian Updating,2021-02-23 03:13:55,Matthew Kovach,"http://arxiv.org/abs/2102.11429v3, http://arxiv.org/pdf/2102.11429v3",econ.TH
33924,th,"Is more novel research always desirable? We develop a model in which
knowledge shapes society's policies and guides the search for discoveries.
Researchers select a question and how intensely to study it. The novelty of a
question determines both the value and difficulty of discovering its answer. We
show that the benefits of discoveries are nonmonotone in novelty. Knowledge
expands endogenously step-by-step over time. Through a dynamic externality,
moonshots -- research on questions more novel than what is myopically optimal
-- can improve the evolution of knowledge. Moonshots induce research cycles in
which subsequent researchers connect the moonshot to previous knowledge.",A Quest for Knowledge,2021-02-26 15:47:00,"Christoph Carnehl, Johannes Schneider","http://arxiv.org/abs/2102.13434v7, http://arxiv.org/pdf/2102.13434v7",econ.TH
33925,th,"This paper generalizes the concept of Bayes correlated equilibrium (Bergemann
and Morris, 2016) to multi-stage games. We demonstrate the power of our
characterization results by applying them to a number of illustrative examples
and applications.",Information Design in Multi-stage Games,2021-02-26 16:58:01,"Miltiadis Makris, Ludovic Renou","http://arxiv.org/abs/2102.13482v2, http://arxiv.org/pdf/2102.13482v2",econ.TH
33926,th,"We study a cheap-talk game where two experts first choose what information to
acquire and then offer advice to a decision-maker whose actions affect the
welfare of all. The experts cannot commit to reporting strategies. Yet, we show
that the decision-maker's ability to cross-verify the experts' advice acts as a
commitment device for the experts. We prove the existence of an equilibrium,
where an expert's equilibrium payoff is equal to what he would obtain if he
could commit to truthfully revealing his information.",Cross-verification and Persuasive Cheap Talk,2021-02-26 19:04:44,"Alp Atakan, Mehmet Ekmekci, Ludovic Renou","http://arxiv.org/abs/2102.13562v2, http://arxiv.org/pdf/2102.13562v2",econ.TH
33928,th,"The development of energy systems is not a technocratic process but equally
shaped by societal and cultural forces. Key instruments in this process are
model-based scenarios describing a future energy system. Applying the concept
of fictional expectations from social economics, we show how energy scenarios
are tools to channel political, economic, and academic efforts into a common
direction. To impact decision-making, scenarios do not have to be accurate --
but credible and evoke coherent expectations in diverse stakeholders. To gain
credibility, authors of scenarios engage with stakeholders and appeal to the
authority of institutions or quantitative methods.
  From these insights on energy scenarios, we draw consequences for developing
and applying planning models, the quantitative tool energy scenarios build on.
Planning models should be open and accessible to facilitate stakeholder
participation, avoid needlessly complex methods to minimize expert bias and aim
for a large scope to be policy relevant. Rather than trying to simulate social
preferences and convictions within engineering models, scenario development
should pursue broad and active participation of all stakeholders, including
citizens.","A collective blueprint, not a crystal ball: How expectations and participation shape long-term energy scenarios",2021-12-09 13:30:09,"Leonard Göke, Jens Weibezahn, Christian von Hirschhausen","http://dx.doi.org/10.1016/j.erss.2023.102957, http://arxiv.org/abs/2112.04821v2, http://arxiv.org/pdf/2112.04821v2",econ.TH
33929,th,"We study whether a planner can robustly implement a state-contingent social
choice function when (i) agents must incur a cost to learn the state and (ii)
the planner faces uncertainty regarding agents' preferences over outcomes,
information costs, and beliefs and higher-order beliefs about one another's
payoffs. We propose mechanisms that can approximately implement any desired
social choice function when the perturbations concerning agents' payoffs have
small ex ante probability. The mechanism is also robust to trembles in agents'
strategies and when agents receive noisy information about the state.",Robust Implementation with Costly Information,2021-12-11 20:23:28,"Harry Pei, Bruno Strulovici","http://arxiv.org/abs/2112.06032v1, http://arxiv.org/pdf/2112.06032v1",econ.TH
33930,th,"We examine the trade-off between the provision of incentives to exert costly
effort (ex-ante moral hazard) and the incentives needed to prevent the agent
from manipulating the profit observed by the principal (ex-post moral hazard).
Formally, we build a model of two-stage hidden actions where the agent can both
influence the expected revenue of a business and manipulate its observed
profit. We show that manipulation-proofness is sensitive to the interaction
between the manipulation technology and the probability distribution of the
stochastic output. The optimal contract is manipulation-proof whenever the
manipulation technology is linear. However, a convex manipulation technology
sometimes leads to contracts with manipulations in equilibrium. Whenever the
distribution satisfies the monotone likelihood ratio property, we can always
find a manipulation technology for which the optimal contract is not
manipulation-proof.",Ex-post moral hazard and manipulation-proof contracts,2021-12-13 20:13:14,Jean-Gabriel Lauzier,"http://arxiv.org/abs/2112.06811v1, http://arxiv.org/pdf/2112.06811v1",econ.TH
33931,th,"This article examines differentiability properties of the value function of
positioning choice problems, a class of optimisation problems in
finite-dimensional Euclidean spaces. We show that positioning choice problems'
value function is always almost everywhere differentiable even when the
objective function is discontinuous. To obtain this result we first show that
the Dini superdifferential is always well-defined for the maxima of positioning
choice problems. This last property allows us to state first-order necessary
conditions in terms of Dini supergradients. We then prove our main result,
which is an ad-hoc envelope theorem for positioning choice problems. Lastly,
after discussing necessity of some key assumptions, we conjecture that similar
theorems might hold in other spaces as well.",Envelope theorem and discontinuous optimisation: the case of positioning choice problems,2021-12-13 20:20:55,Jean-Gabriel Lauzier,"http://arxiv.org/abs/2112.06815v1, http://arxiv.org/pdf/2112.06815v1",econ.TH
33932,th,"We design the insurance contract when the insurer faces arson-type risks. The
optimal contract must be manipulation-proof. It is therefore continuous, it has
a bounded slope, and it satisfies the no-sabotage condition when arson-type
actions are free. Any contract that mixes a deductible, coinsurance and an
upper limit is manipulation-proof. We also show that the ability to perform
arson-type actions reduces the insured's welfare as less coverage is offered in
equilibrium.",Insurance design and arson-type risks,2021-12-13 20:24:39,Jean-Gabriel Lauzier,"http://arxiv.org/abs/2112.06817v1, http://arxiv.org/pdf/2112.06817v1",econ.TH
33933,th,"Suppliers of differentiated goods make simultaneous pricing decisions, which
are strategically linked. Because of market power, the equilibrium is
inefficient. We study how a policymaker should target a budget-balanced
tax-and-subsidy policy to increase welfare. A key tool is a certain basis for
the goods space, determined by the network of interactions among suppliers. It
consists of eigenbundles -- orthogonal in the sense that a tax on any
eigenbundle passes through only to its own price -- with pass-through
coefficients determined by associated eigenvalues. Our basis permits a simple
characterization of optimal interventions. A planner maximizing consumer
surplus should tax eigenbundles with low pass-through and subsidize ones with
high pass-through. The Pigouvian leverage of the system -- the gain in consumer
surplus achievable by an optimal tax scheme -- depends only on the dispersion
of the eigenvalues of the matrix of strategic interactions. We interpret these
results in terms of the network structure of the market.",Taxes and Market Power: A Principal Components Approach,2021-12-15 17:21:27,"Andrea Galeotti, Benjamin Golub, Sanjeev Goyal, Eduard Talamàs, Omer Tamuz","http://arxiv.org/abs/2112.08153v2, http://arxiv.org/pdf/2112.08153v2",econ.TH
33934,th,"This paper introduces in production theory a large class of efficiency
measures that can be derived from the notion of utility function. This article
also establishes a relation between these distance functions and Stone-Geary
utility functions. More specifically, the paper focuses on a new distance
function that generalizes several existing efficiency measures. The new
distance function is inspired by the Atkinson inequality index and maximizes
the sum of the netput expansions required to reach an efficient point. A
generalized duality theorem is proved and a duality result linking the new
distance functions and the profit function is obtained. For all feasible
production vectors, it includes as special cases most of the dual
correspondences previously established in the literature. Finally, we identify
a large class of measures for which these duality results can be obtained
without convexity.",Distance Functions and Generalized Means: Duality and Taxonomy,2021-12-17 14:24:05,Walter Briec,"http://arxiv.org/abs/2112.09443v2, http://arxiv.org/pdf/2112.09443v2",econ.TH
33935,th,"Algorithm designers increasingly optimize not only for accuracy, but also for
the fairness of the algorithm across pre-defined groups. We study the tradeoff
between fairness and accuracy for any given set of inputs to the algorithm. We
propose and characterize a fairness-accuracy frontier, which consists of the
optimal points across a broad range of preferences over fairness and accuracy.
Our results identify a simple property of the inputs, group-balance, which
qualitatively determines the shape of the frontier. We further study an
information-design problem where the designer flexibly regulates the inputs
(e.g., by coarsening an input or banning its use) but the algorithm is chosen
by another agent. Whether it is optimal to ban an input generally depends on
the designer's preferences. But when inputs are group-balanced, then excluding
group identity is strictly suboptimal for all designers, and when the designer
has access to group identity, then it is strictly suboptimal to exclude any
informative input.",Algorithm Design: A Fairness-Accuracy Frontier,2021-12-18 20:43:41,"Annie Liang, Jay Lu, Xiaosheng Mu","http://arxiv.org/abs/2112.09975v4, http://arxiv.org/pdf/2112.09975v4",econ.TH
33936,th,"Consider a mechanism designer who implements a choice rule through a dynamic
protocol, gradually learning agents' private information through yes-or-no
questions addressed to one agent at a time. A protocol is contextually private
for a choice rule if the designer learns only the subsets of private
information needed to determine the outcome. While the serial dictatorship and
the first-price auction have contextually private implementations, the
second-price auction does not, nor does any stable matching rule. When the
designer can additionally ask about the number of agents whose private
information satisfies a certain property, the second-price auction, the uniform
$k$th price auction, and the Walrasian double auction have contextually private
t\^atonnement implementations.",Contextually Private Mechanisms,2021-12-20 22:16:32,"Andreas Haupt, Zoë Hitzig","http://arxiv.org/abs/2112.10812v5, http://arxiv.org/pdf/2112.10812v5",econ.TH
33937,th,"We study multi-unit auctions in which bidders have limited knowledge of
opponent strategies and values. We characterize optimal prior-free bids; these
bids minimize the maximal loss in expected utility resulting from uncertainty
surrounding opponent behavior. Optimal bids are readily computable despite
bidders having multi-dimensional private information, and in certain cases
admit closed-form solutions. In the pay-as-bid auction the minimax-loss bid is
unique; in the uniform-price auction the minimax-loss bid is unique if the
bidder is allowed to determine the quantities for which they bid, as in many
practical applications. We compare minimax-loss bids and auction outcomes
across auction formats, and derive testable predictions.",Bidding in Multi-Unit Auctions under Limited Information,2021-12-21 19:13:50,"Bernhard Kasberger, Kyle Woodward","http://arxiv.org/abs/2112.11320v2, http://arxiv.org/pdf/2112.11320v2",econ.TH
33938,th,"It is well-known that subjective beliefs cannot be identified with
traditional choice data unless we impose the strong assumption that preferences
are state-independent. This is seen as one of the biggest pitfalls of
incentivized belief elicitation. The two common approaches are either to
exogenously assume that preferences are state-independent, or to use
intractable elicitation mechanisms that require an awful lot of hard-to-get
non-traditional choice data. In this paper we use a third approach, introducing
a novel methodology that retains the simplicity of standard elicitation
mechanisms without imposing the awkward state-independence assumption. The cost
is that instead of insisting on full identification of beliefs, we seek
identification of misreporting. That is, we elicit beliefs with a standard
simple elicitation mechanism, and then by means of a single additional
observation we can tell whether the reported beliefs deviate from the actual
beliefs, and if so, in which direction they do.",Identification of misreported beliefs,2021-12-24 10:32:19,Elias Tsakas,"http://arxiv.org/abs/2112.12975v1, http://arxiv.org/pdf/2112.12975v1",econ.TH
33939,th,"We consider dynamic stochastic economies with heterogeneous agents and
introduce the concept of uniformly self-justified equilibria (USJE) --
temporary equilibria for which forecasts are best uniform approximations to a
selection of the equilibrium correspondence. In a USJE, individuals'
forecasting functions for the next period's endogenous variables are assumed to
lie in a compact, finite-dimensional set of functions, and the forecasts
constitute the best approximation within this set. We show that USJE always
exist and develop a simple algorithm to compute them. Therefore, they are more
tractable than rational expectations equilibria that do not always exist. As an
application, we discuss a stochastic overlapping generations exchange economy
and provide numerical examples to illustrate the concept of USJE and the
computational method.",Uniformly Self-Justified Equilibria,2021-12-28 12:08:57,"Felix Kubler, Simon Scheidegger","http://arxiv.org/abs/2112.14054v1, http://arxiv.org/pdf/2112.14054v1",econ.TH
33940,th,"This note analyzes the outcome equivalence conditions of two popular
affirmative action policies, majority quota and minority reserve, under the
student optimal stable mechanism. These two affirmative actions generate an
identical matching outcome, if the market either is effectively competitive or
contains a sufficiently large number of schools.",On the Equivalence of Two Competing Affirmative Actions in School Choice,2021-12-28 13:13:10,Yun Liu,"http://arxiv.org/abs/2112.14074v3, http://arxiv.org/pdf/2112.14074v3",econ.TH
33941,th,"We study a game where households convert paper assets, such as money, into
consumption goods, to preempt inflation. The game features a unique equilibrium
with high (low) inflation, if money supply is high (low). For intermediate
levels of money supply, there exist multiple equilibria with either high or low
inflation. Equilibria with moderate inflation, however, do not exist, and can
thus not be targeted by a central bank. That is, depending on agents'
equilibrium play, money supply is always either too high or too low for
moderate inflation. We also show that inflation rates of long-lived goods, such
as houses, cars, expensive watches, furniture, or paintings, are a leading
indicator for broader, economy-wide inflation.",The Inflation Game,2021-12-26 16:52:25,Wolfgang Kuhle,"http://arxiv.org/abs/2112.14697v1, http://arxiv.org/pdf/2112.14697v1",econ.TH
33942,th,"We present the equivalence between the fuzzy core and the core under minimal
assumptions. Due to the exact version of the Lyapunov convexity theorem in
Banach spaces, we clarify that the additional structure of commodity spaces and
preferences is unnecessary whenever the measure space of agents is ""saturated"".
As a spin-off of the above equivalence, we obtain the coincidence of the core,
the fuzzy core, and Schmeidler's restricted core under minimal assumptions.
The coincidence of the fuzzy core and the restricted core has not been
articulated anywhere.",Fuzzy Core Equivalence in Large Economies: A Role for the Infinite-Dimensional Lyapunov Theorem,2021-12-31 19:31:58,"M. Ali Khan, Nobusumi Sagara","http://arxiv.org/abs/2112.15539v1, http://arxiv.org/pdf/2112.15539v1",econ.TH
33943,th,"This paper corrects some mathematical errors in Holmstr\""om (1999) and
clarifies the assumptions that are sufficient for the results of Holmstr\""om
(1999). The results remain qualitatively the same.","Corrigendum to ""Managerial Incentive Problems: A Dynamic Perspective""",2018-10-30 10:18:48,Sander Heinsalu,"http://arxiv.org/abs/1811.00455v2, http://arxiv.org/pdf/1811.00455v2",econ.TH
33944,th,"This paper studies a robust version of the classic surplus extraction
problem, in which the designer knows only that the beliefs of each type belong
to some set, and designs mechanisms that are suitable for all possible beliefs
in that set. We derive necessary and sufficient conditions for full extraction
in this setting, and show that these are natural set-valued analogues of the
classic convex independence condition identified by Cremer and McLean (1985,
1988). We show that full extraction is neither generically possible nor
generically impossible, in contrast to the standard setting in which full
extraction is generic. When full extraction fails, we show that natural
additional conditions can restrict both the nature of the contracts a designer
can offer and the surplus the designer can obtain.",Uncertainty and Robustness of Surplus Extraction,2018-11-04 07:11:25,"Giuseppe Lopomo, Luca Rigotti, Chris Shannon","http://arxiv.org/abs/1811.01320v2, http://arxiv.org/pdf/1811.01320v2",econ.TH
33945,th,"We characterize three interrelated concepts in epistemic game theory:
permissibility, proper rationalizability, and iterated admissibility. We define
the lexicographic epistemic model for a game with incomplete information. Based
on it, we give two groups of characterizations. The first group characterizes
permissibility and proper rationalizability. The second group characterizes
permissibility in an alternative way and iterated admissibility. In each group,
the conditions for the latter are stronger than those for the former, which
corresponds to the fact that proper rationalizability and iterated
admissibility are two (compatible) refinements of permissibility within the
complete information framework. The intrinsic difference between the two groups
is the role of rationality: the first group does not need it, while the second
group does.","Characterizing Permissibility, Proper Rationalizability, and Iterated Admissibility by Incomplete Information",2018-11-04 17:10:48,Shuige Liu,"http://arxiv.org/abs/1811.01933v1, http://arxiv.org/pdf/1811.01933v1",econ.TH
33946,th,"We develop a tool akin to the revelation principle for dynamic
mechanism-selection games in which the designer can only commit to short-term
mechanisms. We identify a canonical class of mechanisms rich enough to
replicate the outcomes of any equilibrium in a mechanism-selection game between
an uninformed designer and a privately informed agent. A cornerstone of our
methodology is the idea that a mechanism should encode not only the rules that
determine the allocation, but also the information the designer obtains from
the interaction with the agent. Therefore, how much the designer learns, which
is the key tension in design with limited commitment, becomes an explicit part
of the design. Our result simplifies the search for the designer-optimal
outcome by reducing the agent's behavior to a series of participation,
truthtelling, and Bayes' plausibility constraints the mechanisms must satisfy.",Mechanism Design with Limited Commitment,2018-11-08 21:02:20,"Laura Doval, Vasiliki Skreta","http://arxiv.org/abs/1811.03579v6, http://arxiv.org/pdf/1811.03579v6",econ.TH
33947,th,"We provide tools to analyze information design problems subject to
constraints. We do so by extending the insight in Le Treust and Tomala (2019)
to the case of multiple inequality and equality constraints. Namely, that an
information design problem subject to constraints can be represented as an
unconstrained information design problem with additional states, one for each
constraint. Thus, without loss of generality, optimal solutions induce as many
posteriors as the number of states and constraints. We provide results that
refine this upper bound. Furthermore, we provide conditions under which there
is no duality gap in constrained information design, thus validating a
Lagrangian approach. We illustrate our results with applications to mechanism
design with limited commitment (Doval and Skreta, 2022a) and persuasion of a
privately informed receiver (Kolotilin et al., 2017).",Constrained Information Design,2018-11-08 21:21:56,"Laura Doval, Vasiliki Skreta","http://arxiv.org/abs/1811.03588v3, http://arxiv.org/pdf/1811.03588v3",econ.TH
33948,th,"We formalize the argument that political disagreements can be traced to a
""clash of narratives"". Drawing on the ""Bayesian Networks"" literature, we model
a narrative as a causal model that maps actions into consequences, weaving a
selection of other random variables into the story. An equilibrium is defined
as a probability distribution over narrative-policy pairs that maximizes a
representative agent's anticipatory utility, capturing the idea that public
opinion favors hopeful narratives. Our equilibrium analysis sheds light on the
structure of prevailing narratives, the variables they involve, the policies
they sustain and their contribution to political polarization.",A Model of Competing Narratives,2018-11-10 14:08:15,"Kfir Eliaz, Ran Spiegler","http://arxiv.org/abs/1811.04232v1, http://arxiv.org/pdf/1811.04232v1",econ.TH
33949,th,"People employ their knowledge to recognize things. This paper is concerned
with how to measure people's knowledge for recognition and how it changes. The
discussion is based on three assumptions. Firstly, we construct two evolution
process equations, of which one is for uncertainty and knowledge, and the other
for uncertainty and ignorance. Secondly, by solving the equations, formulas for
measuring the levels of knowledge and the levels of ignorance are obtained in
two particular cases. Thirdly, a new concept of knowledge entropy is
introduced. Its similarity to Boltzmann's entropy and its difference from
Shannon's entropy are examined. Finally, it is pointed out that the obtained
formulas of knowledge and knowledge entropy reflect two fundamental principles:
(1) The knowledge level of a group is not necessarily a simple sum of the
individuals' knowledge levels; and (2) An individual's knowledge entropy never
increases if the individual's thirst for knowledge never decreases.",Measuring Knowledge for Recognition and Knowledge Entropy,2018-11-15 04:41:54,Fujun Hou,"http://arxiv.org/abs/1811.06135v1, http://arxiv.org/pdf/1811.06135v1",econ.TH
33950,th,"We investigate whether fairness is compatible with efficiency in economies
with multi-self agents, who may not be able to integrate their multiple
objectives into a single complete and transitive ranking. We adapt
envy-freeness, egalitarian-equivalence and the fair-share guarantee in two
different ways. An allocation is unambiguously-fair if it satisfies the chosen
criterion of fairness according to every objective of any agent; it is
aggregate-fair if it satisfies the criterion for some aggregation of each
agent's objectives.
  While efficiency is always compatible with the unambiguous fair-share
guarantee, it is incompatible with unambiguous envy-freeness in economies with
at least three agents. Two agents are enough for efficiency and unambiguous
egalitarian-equivalence to clash. Efficiency and the unambiguous fair-share
guarantee can be attained together with aggregate envy-freeness, or aggregate
egalitarian-equivalence.",Fairness for Multi-Self Agents,2018-11-16 09:00:45,"Sophie Bade, Erel Segal-Halevi","http://arxiv.org/abs/1811.06684v4, http://arxiv.org/pdf/1811.06684v4",econ.TH
33951,th,"The observed proportionality between nominal prices and average embodied
energies cannot be interpreted with conventional economic theory. A model is
presented that places energy transfers as the focal point of scarcity based on
the idea that (1) goods are material rearrangements, and (2) humans can only
rearrange matter with energy transfers. Modified consumer and producer problems
for an autarkic agent show that the opportunity costs of goods are given by
their marginal energy transfers, which depend on subjective and objective
factors (e.g. consumer preferences and direct energy transfers). Allowing for
exchange and under perfect competition, nominal prices arise as social
manifestations of goods' marginal energy transfers. The proportionality between
nominal prices and average embodied energy follows given the relation between
the latter and marginal energy transfers.",Why are prices proportional to embodied energies?,2018-11-30 00:47:27,Benjamin Leiva,"http://arxiv.org/abs/1811.12502v1, http://arxiv.org/pdf/1811.12502v1",econ.TH
33952,th,"We suggest that one individual holds multiple degrees of belief about an
outcome, given the evidence. We then investigate the implications of such noisy
probabilities for a buyer and a seller of binary options and find that the odds
agreed upon to ensure zero-expectation betting differ from those consistent
with the relative frequency of outcomes. More precisely, the buyer and the
seller agree to odds that are higher (lower) than the reciprocal of their
averaged unbiased probabilities when this average indicates the outcome is more
(less) likely to occur than chance. The favorite-longshot bias thereby emerges
to establish the foundation of an equitable market. As corollaries, our work
suggests the old-established way of revealing someone's degree of belief
through wagers may be more problematic than previously thought, and implies
that betting markets cannot generally promise to support rational decisions.",Fair Odds for Noisy Probabilities,2018-11-30 01:20:09,Ulrik W. Nash,"http://arxiv.org/abs/1811.12516v1, http://arxiv.org/pdf/1811.12516v1",econ.TH
33953,th,"How can a receiver design an information structure in order to elicit
information from a sender? We study how a decision-maker can acquire more
information from an agent by reducing her own ability to observe what the agent
transmits. Intuitively, when the two parties' preferences are not perfectly
aligned, this garbling relaxes the sender's concern that the receiver will use
her information to the sender's disadvantage. We characterize the optimal
information structure for the receiver. The main result is that under broad
conditions, the receiver can do just as well as if she could commit to a rule
mapping the sender's message to actions: information design is just as good as
full commitment. Similarly, we show that these conditions guarantee that ex
ante information acquisition always benefits the receiver, even though this
learning might actually lower the receiver's expected payoff in the absence of
garbling. We illustrate these effects in a range of economically relevant
examples.",Bayesian Elicitation,2019-02-04 01:08:48,Mark Whitmeyer,"http://arxiv.org/abs/1902.00976v2, http://arxiv.org/pdf/1902.00976v2",econ.TH
33954,th,"A principal can restrict an agent's information (the persuasion problem) or
restrict an agent's discretion (the delegation problem). We show that these
problems are generally equivalent - solving one solves the other. We use tools
from the persuasion literature to generalize and extend many results in the
delegation literature, as well as to address novel delegation problems, such as
monopoly regulation with a participation constraint.",Persuasion Meets Delegation,2019-02-07 17:22:31,"Anton Kolotilin, Andriy Zapechelnyuk","http://arxiv.org/abs/1902.02628v1, http://arxiv.org/pdf/1902.02628v1",econ.TH
33955,th,"Most comparisons of preferences are instances of single-crossing dominance.
We examine the lattice structure of single-crossing dominance, proving
characterisation, existence and uniqueness results for minimum upper bounds of
arbitrary sets of preferences. We apply these theorems to derive new
comparative statics theorems for collective choice and under analyst
uncertainty, to characterise a general 'maxmin' class of uncertainty-averse
preferences over Savage acts, and to revisit the tension between liberalism and
Pareto-efficiency in social choice.",The preference lattice,2019-02-19 22:56:54,"Gregorio Curello, Ludvig Sinander","http://arxiv.org/abs/1902.07260v4, http://arxiv.org/pdf/1902.07260v4",econ.TH
33956,th,"We study the relationship between invariant transformations on extensive game
structures and backward dominance procedure (BD), a generalization of the
classical backward induction introduced in Perea (2014). We show that
behavioral equivalence with unambiguous orderings of information sets, a
critical property that guarantees BD's applicability, can be characterized by
the classical Coalescing and a modified Interchange/Simultanizing in Battigalli
et al. (2020). We also give conditions on transformations that improve BD's
efficiency. In addition, we discuss the relationship between transformations
and Bonanno (2014)'s generalized backward induction.",Compactification of Extensive Game Structures and Backward Dominance Procedure,2019-05-01 19:06:02,Shuige Liu,"http://arxiv.org/abs/1905.00355v3, http://arxiv.org/pdf/1905.00355v3",econ.TH
33957,th,"This paper considers incentives to provide goods that are partially shareable
along social links. We introduce a model in which each individual in a social
network not only decides how much of a shareable good to provide, but also
decides which subset of neighbours to nominate as co-beneficiaries. An outcome
of the model specifies an endogenously generated subnetwork of the original
network and a public goods game occurring over the realised subnetwork. We
prove the existence of specialised pure strategy Nash equilibria: those in
which some individuals contribute while the remaining individuals free ride. We
then consider how the set of efficient specialised equilibria vary as the
constraints on sharing are relaxed and we show that, paradoxically, an increase
in shareability may decrease efficiency.",Public goods in networks with constraints on sharing,2019-05-05 17:38:03,"Stefanie Gerke, Gregory Gutin, Sung-Ha Hwang, Philip Neary","http://arxiv.org/abs/1905.01693v4, http://arxiv.org/pdf/1905.01693v4",econ.TH
33958,th,"In the pool of people seeking partners, a uniformly greater preference for
abstinence increases the prevalence of infection and worsens everyone's
welfare. In contrast, prevention and treatment reduce prevalence and improve
payoffs. The results are driven by adverse selection: people who prefer more
partners are likelier disease carriers. A given decrease in the number of
matches is a smaller proportional reduction for people with many partners, thus
increasing the fraction of infected in the pool. The greater disease risk
further decreases partner-seeking and payoffs.",When abstinence increases prevalence,2019-05-06 17:49:58,Sander Heinsalu,"http://arxiv.org/abs/1905.02073v1, http://arxiv.org/pdf/1905.02073v1",econ.TH
33959,th,"We introduce and study the property of orthogonal independence, a restricted
additivity axiom applying when alternatives are orthogonal. The axiom requires
that the preference for one marginal change over another should be maintained
after each marginal change has been shifted in a direction that is orthogonal
to both.
  We show that continuous preferences satisfy orthogonal independence if and
only if they are spherical: their indifference curves are spheres with the same
center, with preference being ""monotone"" either away from or towards the center.
Spherical preferences include linear preferences as a special (limiting) case.
We discuss different applications to economic and political environments. Our
result delivers Euclidean preferences in models of spatial voting, quadratic
welfare aggregation in social choice, and expected utility in models of choice
under uncertainty.",Spherical Preferences,2019-05-08 08:12:18,"Christopher P. Chambers, Federico Echenique","http://arxiv.org/abs/1905.02917v3, http://arxiv.org/pdf/1905.02917v3",econ.TH
33960,th,"McKelvey and Palfrey (1995)'s monotone structural Quantal Response
Equilibrium theory may be misspecified for the study of monotone behavior.",The paradox of monotone structural QRE,2019-05-14 22:54:02,"Rodrigo A. Velez, Alexander L. Brown","http://arxiv.org/abs/1905.05814v2, http://arxiv.org/pdf/1905.05814v2",econ.TH
33961,th,"We advance empirical equilibrium analysis (Velez and Brown, 2020,
arXiv:1907.12408) of the winner-bid and loser-bid auctions for the dissolution
of a partnership. We show, in a complete information environment, that even
though these auctions are essentially equivalent for the Nash equilibrium
prediction, they can be expected to differ in fundamental ways when they are
operated. Besides the direct policy implications, two general consequences
follow. First, a mechanism designer who accounts for the empirical plausibility
of equilibria may not be constrained by Maskin invariance. Second, a mechanism
designer who does not account for the empirical plausibility of equilibria may
inadvertently design biased mechanisms.",Empirical bias of extreme-price auctions: analysis,2019-05-13 05:24:13,"Rodrigo A. Velez, Alexander L. Brown","http://arxiv.org/abs/1905.08234v2, http://arxiv.org/pdf/1905.08234v2",econ.TH
33962,th,"In this paper, the credit scoring problem is studied by incorporating
networked information, where the advantages of such incorporation are
investigated theoretically in two scenarios. Firstly, a Bayesian optimal filter
is proposed to provide risk prediction for lenders assuming that published
credit scores are estimated merely from structured financial data. Such
prediction can then be used as a monitoring indicator for the risk management
in lenders' future decisions. Secondly, a recursive Bayes estimator is further
proposed to improve the precision of credit scoring by incorporating the
dynamic interaction topology of clients. It is shown that under the proposed
evolution framework, the designed estimator has a higher precision than any
efficient estimator, and the mean square errors are strictly smaller than the
Cram\'er-Rao lower bound for clients within a certain range of scores. Finally,
simulation results for a special case illustrate the feasibility and
effectiveness of the proposed algorithms.",Credit Scoring by Incorporating Dynamic Networked Information,2019-05-28 16:21:48,"Yibei Li, Ximei Wang, Boualem Djehiche, Xiaoming Hu","http://arxiv.org/abs/1905.11795v2, http://arxiv.org/pdf/1905.11795v2",econ.TH
33963,th,"For a many-to-one matching market where firms have strict and
$\boldsymbol{q}$-responsive preferences, we give a characterization of the set
of strongly stable fractional matchings as the union of the convex hull of all
connected sets of stable matchings. Also, we prove that a strongly stable
fractional matching is represented as a convex combination of stable matchings
that are ordered in the common preferences of all firms.",On the many-to-one strongly stable fractional matching set,2019-05-29 17:36:34,"Pablo A. Neme, Jorge Oviedo","http://arxiv.org/abs/1905.12500v2, http://arxiv.org/pdf/1905.12500v2",econ.TH
33964,th,"We study surplus extraction in the general environment of McAfee and Reny
(1992), and provide two alternative proofs of their main theorem. The first is
an analogue of the classic argument of Cremer and McLean (1985, 1988), using
geometric features of the set of agents' beliefs to construct a menu of
contracts extracting the desired surplus. This argument, which requires a
finite state space, also leads to a counterexample showing that full extraction
is not possible without further significant conditions on agents' beliefs or
surplus, even if the designer offers an infinite menu of contracts. The second
argument uses duality and applies for an infinite state space, thus yielding
the general result of McAfee and Reny (1992). Both arguments suggest methods
for studying surplus extraction in settings beyond the standard model, in which
the designer or agents might have objectives other than risk neutral expected
value maximization.","Detectability, Duality, and Surplus Extraction",2019-05-30 02:45:27,"Giuseppe Lopomo, Luca Rigotti, Chris Shannon","http://arxiv.org/abs/1905.12788v2, http://arxiv.org/pdf/1905.12788v2",econ.TH
33965,th,"In this paper, a relation between shadow price and the Lagrangian multiplier
for nonsmooth problem is explored. It is shown that the Lagrangian Multiplier
is the upper bound of shadow price for convex optimization and a class of
Lipschtzian optimizations. This work can be used in shadow pricing for
nonsmooth situation. The several nonsmooth functions involved in this class of
Lipschtzian optimizations is listed. Finally, an application to electricity
pricing is discussed.",Characterizing Shadow Price via Lagrangian Multiplier for Nonsmooth Problem,2019-05-31 16:57:14,Yan Gao,"http://arxiv.org/abs/1905.13622v2, http://arxiv.org/pdf/1905.13622v2",econ.TH
33966,th,"We study intertemporal decision making under uncertainty. We fully
characterize discounted expected utility in a framework \`a la Savage. Despite
the popularity of this model, no characterization is available in this setting.
The concept of stationarity, introduced by Koopmans for deterministic
discounted utility, plays a central role for both attitudes towards time and
towards uncertainty. We show that a strong stationarity axiom characterizes
discounted expected utility. When hedging considerations are taken into
account, a weaker stationarity axiom generalizes discounted expected utility to
Choquet discounted expected utility, allowing for non-neutral attitudes towards
uncertainty.",Time discounting under uncertainty,2019-11-01 16:36:05,"Lorenzo Bastianello, José Heleno Faro","http://arxiv.org/abs/1911.00370v2, http://arxiv.org/pdf/1911.00370v2",econ.TH
33974,th,"This paper proposes a simple descriptive model of discrete-time double
auction markets for divisible assets. As in the classical models of exchange
economies, we consider a finite set of agents described by their initial
endowments and preferences. Instead of the classical Walrasian-type market
models, however, we assume that all trades take place in a centralized double
auction where the agents communicate through sealed limit orders for buying and
selling. We find that, under nonstrategic bidding, the double auction clears
with zero trades precisely when the agents' current holdings are on the Pareto
frontier. More interestingly, the double auctions implement Adam Smith's
""invisible hand"" in the sense that, when starting from disequilibrium, repeated
double auctions lead to a sequence of allocations that converges to
individually rational Pareto allocations.",Efficient allocations in double auction markets,2020-01-05 19:12:28,Teemu Pennanen,"http://arxiv.org/abs/2001.02071v2, http://arxiv.org/pdf/2001.02071v2",econ.TH
33967,th,"We present an abstract social aggregation theorem. Society, and each
individual, has a preorder that may be interpreted as expressing values or
beliefs. The preorders are allowed to violate both completeness and continuity,
and the population is allowed to be infinite. The preorders are only assumed to
be represented by functions with values in partially ordered vector spaces, and
whose product has convex range. This includes all preorders that satisfy strong
independence. Any Pareto indifferent social preorder is then shown to be
represented by a linear transformation of the representations of the individual
preorders. Further Pareto conditions on the social preorder correspond to
positivity conditions on the transformation. When all the Pareto conditions
hold and the population is finite, the social preorder is represented by a sum
of individual preorder representations. We provide two applications. The first
yields an extremely general version of Harsanyi's social aggregation theorem.
The second generalizes a classic result about linear opinion pooling.",Aggregation for potentially infinite populations without continuity or completeness,2019-11-03 14:24:12,"David McCarthy, Kalle Mikkola, Teruji Thomas","http://arxiv.org/abs/1911.00872v1, http://arxiv.org/pdf/1911.00872v1",econ.TH
33968,th,"This paper proposes and axiomatizes a new updating rule: Relative Maximum
Likelihood (RML) for ambiguous beliefs represented by a set of priors (C). This
rule takes the form of applying Bayes' rule to a subset of C. This subset is a
linear contraction of C towards its subset ascribing a maximal probability to
the observed event. The degree of contraction captures the extent of
willingness to discard priors based on likelihood when updating. Two well-known
updating rules of multiple priors, Full Bayesian (FB) and Maximum Likelihood
(ML), are included as special cases of RML. An axiomatic characterization of
conditional preferences generated by RML updating is provided when the
preferences admit Maxmin Expected Utility representations. The axiomatization
relies on weakening the axioms characterizing FB and ML. The axiom
characterizing ML is identified for the first time in this paper, addressing a
long-standing open question in the literature.",Relative Maximum Likelihood Updating of Ambiguous Beliefs,2019-11-07 02:34:55,Xiaoyu Cheng,"http://arxiv.org/abs/1911.02678v6, http://arxiv.org/pdf/1911.02678v6",econ.TH
33969,th,"Two extensive game structures with imperfect information are said to be
behaviorally equivalent if they share the same map (up to relabelings) from
profiles of structurally reduced strategies to induced terminal paths. We show
that this is the case if and only if one can be transformed into the other
through a composition of two elementary transformations, commonly known as
""Interchanging of Simultaneous Moves"" and ""Coalescing Moves/Sequential Agent
Splitting.""",Behavioral Equivalence of Extensive Game Structures,2019-11-07 17:11:44,"Pierpaolo Battigalli, Paolo Leonetti, Fabio Maccheroni","http://arxiv.org/abs/1911.02918v1, http://arxiv.org/pdf/1911.02918v1",econ.TH
33970,th,"We study a seller who sells a single good to multiple bidders with
uncertainty over the joint distribution of bidders' valuations, as well as
bidders' higher-order beliefs about their opponents. The seller only knows the
(possibly asymmetric) means of the marginal distributions of each bidder's
valuation and the range. An adversarial nature chooses the worst-case
distribution within this ambiguity set along with the worst-case information
structure. We find that a second-price auction with a symmetric, random reserve
price obtains the optimal revenue guarantee within a broad class of mechanisms
we refer to as competitive mechanisms, which include standard auction formats,
including the first-price auction, with or without reserve prices. The optimal
mechanism possesses two notable characteristics. First, the mechanism treats
all bidders identically even in the presence of ex-ante asymmetries. Second,
when bidders are identical and the number of bidders $n$ grows large, the
seller's optimal reserve price converges in probability to a non-binding
reserve price and the revenue guarantee converges to the best possible revenue
guarantee at rate $O(1/n)$.",Distributionally Robust Optimal Auction Design under Mean Constraints,2019-11-17 00:38:44,Ethan Che,"http://arxiv.org/abs/1911.07103v3, http://arxiv.org/pdf/1911.07103v3",econ.TH
33971,th,"We consider a platform facilitating trade between sellers and buyers with the
objective of maximizing consumer surplus. Even though in many such marketplaces
prices are set by revenue-maximizing sellers, platforms can influence prices
through (i) price-dependent promotion policies that can increase demand for a
product by featuring it in a prominent position on the webpage and (ii) the
information revealed to sellers about the value of being promoted. Identifying
effective joint information design and promotion policies is a challenging
dynamic problem as sellers can sequentially learn the promotion value from
sales observations and update prices accordingly. We introduce the notion of
confounding promotion policies, which are designed to prevent a Bayesian seller
from learning the promotion value (at the expense of the short-run loss of
diverting some consumers from the best product offering). Leveraging these
policies, we characterize the maximum long-run average consumer surplus that is
achievable through joint information design and promotion policies when the
seller sets prices myopically. We then construct a Bayesian Nash equilibrium in
which the seller's best response to the platform's optimal policy is to price
myopically in every period. Moreover, the equilibrium we identify is
platform-optimal within the class of horizon-maximin equilibria, in which
strategies are not predicated on precise knowledge of the horizon length, and
are designed to maximize payoff over the worst-case horizon. Our analysis
allows one to identify practical long-run average optimal platform policies in
a broad range of demand models.",Information Disclosure and Promotion Policy Design for Platforms,2019-11-21 05:52:44,"Yonatan Gur, Gregory Macnamara, Ilan Morgenstern, Daniela Saban","http://arxiv.org/abs/1911.09256v5, http://arxiv.org/pdf/1911.09256v5",econ.TH
33972,th,"To divide a ""manna"" {\Omega} of private items (commodities, workloads, land,
time intervals) between n agents, the worst case measure of fairness is the
welfare guaranteed to each agent, irrespective of others' preferences. If the
manna is non-atomic and utilities are continuous (not necessarily monotone or
convex), we can guarantee the minMax utility: that of our agent's best share in
her worst partition of the manna; and implement it by Kuhn's generalisation of
Divide and Choose. The larger Maxmin utility -- of her worst share in her best
partition -- cannot be guaranteed, even for two agents. If for all agents more
manna is better than less (or less is better than more), our Bid & Choose rules
implement guarantees between minMax and Maxmin by letting agents bid for the
smallest (or largest) size of a share they find acceptable.",Guarantees in Fair Division: general or monotone preferences,2019-11-22 15:52:34,"Anna Bogomolnaia, Herve Moulin","http://arxiv.org/abs/1911.10009v3, http://arxiv.org/pdf/1911.10009v3",econ.TH
33976,th,"This paper studies whether a planner who only has information about the
network topology can discriminate among agents according to their network
position. The planner proposes a simple menu of contracts, one for each
location, in order to maximize total welfare, and agents choose among the menu.
This mechanism is immune to deviations by single agents, and to deviations by
groups of agents of sizes 2, 3 and 4 if side-payments are ruled out. However,
if compensations are allowed, groups of agents may have an incentive to jointly
deviate from the optimal contract in order to exploit other agents. We identify
network topologies for which the optimal contract is group incentive compatible
with transfers: undirected networks and regular oriented trees, and network
topologies for which the planner must assign uniform quantities: single root
and nested neighborhoods directed networks.",Targeting in social networks with anonymized information,2020-01-09 20:33:58,"Francis Bloch, Shaden Shabayek","http://arxiv.org/abs/2001.03122v1, http://arxiv.org/pdf/2001.03122v1",econ.TH
33977,th,"Always, if the number of states is equal to two; or if the number of receiver
actions is equal to two and i. The number of states is three or fewer, or ii.
The game is cheap talk, or iii. There are just two available messages for the
sender. A counterexample is provided for each failure of these conditions.","In Simple Communication Games, When Does Ex Ante Fact-Finding Benefit the Receiver?",2020-01-26 05:24:23,Mark Whitmeyer,"http://arxiv.org/abs/2001.09387v1, http://arxiv.org/pdf/2001.09387v1",econ.TH
33978,th,"We study the susceptibility of committee governance (e.g. by boards of
directors), modelled as the collective determination of a ranking of a set of
alternatives, to manipulation of the order in which pairs of alternatives are
voted on -- agenda-manipulation. We exhibit an agenda strategy called insertion
sort that allows a self-interested committee chair with no knowledge of how
votes will be cast to do as well as if she had complete knowledge. Strategies
with this 'regret-freeness' property are characterised by their efficiency, and
by their avoidance of two intuitive errors. What distinguishes regret-free
strategies from each other is how they prioritise among alternatives; insertion
sort prioritises lexicographically.",Agenda-manipulation in ranking,2020-01-30 17:21:05,"Gregorio Curello, Ludvig Sinander","http://dx.doi.org/10.1093/restud/rdac071, http://arxiv.org/abs/2001.11341v5, http://arxiv.org/pdf/2001.11341v5",econ.TH
33979,th,"We introduce a solution concept for extensive-form games of incomplete
information in which players need not assign likelihoods to what they do not
know about the game. This is embedded in a model in which players can hold
multiple priors. Players make choices by looking for compromises that yield a
good performance under each of their updated priors. Our solution concept is
called perfect compromise equilibrium. It generalizes perfect Bayesian
equilibrium. We show how it deals with ambiguity in Cournot and Bertrand
markets, public good provision, Spence's job market signaling, bilateral trade
with common value, and forecasting.","Compromise, Don't Optimize: Generalizing Perfect Bayesian Equilibrium to Allow for Ambiguity",2020-03-05 14:34:01,"Karl Schlag, Andriy Zapechelnyuk","http://arxiv.org/abs/2003.02539v2, http://arxiv.org/pdf/2003.02539v2",econ.TH
33980,th,"I consider decision-making constrained by considerations of morality,
rationality, or other virtues. The decision maker (DM) has a true preference
over outcomes, but feels compelled to choose among outcomes that are top-ranked
by some preference that he considers ""justifiable."" This model unites a broad
class of empirical work on distributional preferences, charitable donations,
prejudice/discrimination, and corruption/bribery. I provide a behavioral
characterization of the model. I also show that the set of justifications can
be identified from choice behavior when the true preference is known, and that
choice behavior substantially restricts both the true preference and
justifications when neither is known. I argue that the justifiability model
represents an advancement over existing models of rationalization because the
structure it places on possible ""rationales"" improves tractability,
interpretation and identification.",A Model of Justification,2020-03-15 18:04:15,Sarah Ridout,"http://arxiv.org/abs/2003.06844v1, http://arxiv.org/pdf/2003.06844v1",econ.TH
33981,th,"The resident matching algorithm, Gale-Shapley, currently used by SF Match and
the National Residency Match Program (NRMP), has been in use for over 50 years
without fundamental alteration. The algorithm is a 'stable-marriage' method
that favors applicant outcomes. However, in these 50 years, there has been a
big shift in the supply and demand of applicants and programs. These changes
along with the way the Match is implemented have induced a costly race among
applicants to apply and interview at as many programs as possible. Meanwhile,
programs also incur high costs as they maximize their probability of matching
by interviewing as many candidates as possible.",Game Theoretic Consequences of Resident Matching,2020-03-12 18:58:44,Yue Wu,"http://arxiv.org/abs/2003.07205v2, http://arxiv.org/pdf/2003.07205v2",econ.TH
33982,th,"We consider a dynamic model of Bayesian persuasion in which information takes
time and is costly for the sender to generate and for the receiver to process,
and neither player can commit to their future actions. Persuasion may totally
collapse in a Markov perfect equilibrium (MPE) of this game. However, for
persuasion costs sufficiently small, a version of a folk theorem holds:
outcomes that approximate Kamenica and Gentzkow (2011)'s sender-optimal
persuasion as well as full revelation and everything in between are obtained in
MPE, as the cost vanishes.",Keeping the Listener Engaged: a Dynamic Model of Bayesian Persuasion,2020-03-16 20:17:56,"Yeon-Koo Che, Kyungmin Kim, Konrad Mierendorff","http://arxiv.org/abs/2003.07338v5, http://arxiv.org/pdf/2003.07338v5",econ.TH
33983,th,"The simple pairwise comparison is a method to provide different criteria with
weights. We show that the values of those weights (in particular the maximum)
depend just on the number of criteria. Additionally, it is shown that the
distance between the weights is always the same.
  -----
  The simple pairwise comparison is a procedure for assigning weights to
different criteria. We show that the values of these weights (in particular the
maximum value) depend solely on the number of criteria. Furthermore, it is
shown that the distance between the weights is always the same.",Bemerkungen zum paarweisen Vergleich,2020-03-24 20:17:03,Stefan Lörcks,"http://arxiv.org/abs/2003.10978v1, http://arxiv.org/pdf/2003.10978v1",econ.TH
33984,th,"We say a model is continuous in utilities (resp., preferences) if small
perturbations of utility functions (resp., preferences) generate small changes
in the model's outputs. While similar, these two concepts are equivalent only
when the topology satisfies the following universal property: for each
continuous mapping from preferences to model's outputs there is a unique
mapping from utilities to model's outputs that is faithful to the preference
map and is continuous. The topologies that satisfy such a universal property
are called final topologies. In this paper we analyze the properties of the
final topology for preference sets. This is of practical importance since most
of the analysis on continuity is done via utility functions and not the
primitive preference space. Our results allow the researcher to extrapolate
continuity in utility to continuity in the underlying preferences.",Final Topology for Preference Spaces,2020-04-06 03:17:10,Pablo Schenone,"http://arxiv.org/abs/2004.02357v2, http://arxiv.org/pdf/2004.02357v2",econ.TH
33987,th,"A group of experts, for instance climate scientists, is to choose among two
policies $f$ and $g$. Consider the following decision rule. If all experts
agree that the expected utility of $f$ is higher than the expected utility of
$g$, the unanimity rule applies, and $f$ is chosen. Otherwise the precautionary
principle is implemented and the policy yielding the highest minimal expected
utility is chosen.
  This decision rule may lead to time inconsistencies when an intermediate
period of partial resolution of uncertainty is added. We provide axioms that
enlarge the initial group of experts with veto power, which leads to a set of
probabilistic beliefs that is ""rectangular"" in a minimal sense. This makes this
decision rule dynamically consistent and provides, as a byproduct, a novel
behavioral characterization of rectangularity.",Dynamically Consistent Objective and Subjective Rationality,2020-04-26 13:52:27,"Lorenzo Bastianello, José Heleno Faro, Ana Santos","http://arxiv.org/abs/2004.12347v1, http://arxiv.org/pdf/2004.12347v1",econ.TH
33988,th,"Motivated by the need for real-world matching problems, this paper formulates
a large class of practical choice rules, Generalized Lexicographic Choice Rules
(GLCR), for institutions that consist of multiple divisions. Institutions fill
their divisions sequentially, and each division is endowed with a sub-choice
rule that satisfies classical substitutability and size monotonicity in
conjunction with a new property that we introduce, quota monotonicity. We allow
rich interactions between divisions in the form of capacity transfers. The
overall choice rule of an institution is defined as the union of the
sub-choices of its divisions. The cumulative offer mechanism (COM) with respect
to GLCR is the unique stable and strategy-proof mechanism. We define a
choice-based improvement notion and show that the COM respects improvements. We
employ the theory developed in this paper in our companion paper, Ayg\""un and
Turhan (2020), to design satisfactory matching mechanisms for India with
comprehensive affirmative action constraints.",Matching with Generalized Lexicographic Choice Rules,2020-04-28 06:33:30,"Orhan Aygün, Bertan Turhan","http://arxiv.org/abs/2004.13261v2, http://arxiv.org/pdf/2004.13261v2",econ.TH
33989,th,"Since 1950, India has been implementing the most comprehensive affirmative
action program in the world. Vertical reservations are provided to members of
historically discriminated Scheduled Castes (SC), Scheduled Tribes (ST), and
Other Backward Classes (OBC). Horizontal reservations are provided for other
disadvantaged groups, such as women and disabled people, within each vertical
category. There is no well-defined procedure to implement horizontal
reservations jointly with vertical reservation and OBC de-reservations.
Sequential processes currently in use for OBC de-reservations and meritorious
reserve candidates lead to severe shortcomings. Most importantly, indirect
mechanisms currently used in practice do not allow reserve category applicants
to fully express their preferences. To overcome these and other related issues,
we design several different choice rules for institutions that take
meritocracy, vertical and horizontal reservations, and OBC de-reservations into
account. We propose a centralized mechanism to satisfactorily clear matching
markets in India.",Designing Direct Matching Mechanism for India with Comprehensive Affirmative Action,2020-04-28 06:37:08,"Orhan Aygün, Bertan Turhan","http://arxiv.org/abs/2004.13264v5, http://arxiv.org/pdf/2004.13264v5",econ.TH
33990,th,"In many real-world matching applications, there are restrictions for
institutions either on priorities of their slots or on the transferability of
unfilled slots over others (or both). Motivated by the need in such real-life
matching problems, this paper formulates a family of practical choice rules,
slot-specific priorities with capacity transfers (SSPwCT). These practical
rules invoke both slot-specific priorities structure and transferability of
vacant slots. We show that the cumulative offer mechanism (COM) is stable,
strategy-proof, and respects improvements with respect to SSPwCT choice rules.
Transferring the capacity of one more unfilled slot, while all else is
constant, leads to strategy-proof Pareto improvement of the COM. Following
Kominers' (2020) formulation, we also provide comparative static results for
expansion of branch capacity and addition of new contracts in the SSPwCT
framework. Our results have implications for resource allocation problems with
diversity considerations.",Slot-specific Priorities with Capacity Transfers,2020-04-28 06:40:36,"Michelle Avataneo, Bertan Turhan","http://arxiv.org/abs/2004.13265v2, http://arxiv.org/pdf/2004.13265v2",econ.TH
33991,th,"Behavioural economics provides labels for patterns in human economic
behaviour. Probability weighting is one such label. It expresses a mismatch
between probabilities used in a formal model of a decision (i.e. model
parameters) and probabilities inferred from real people's decisions (the same
parameters estimated empirically). The inferred probabilities are called
""decision weights."" It is considered a robust experimental finding that
decision weights are higher than probabilities for rare events, and
(necessarily, through normalisation) lower than probabilities for common
events. Typically this is presented as a cognitive bias, i.e. an error of
judgement by the person. Here we point out that the same observation can be
described differently: broadly speaking, probability weighting means that a
decision maker has greater uncertainty about the world than the observer. We
offer a plausible mechanism whereby such differences in uncertainty arise
naturally: when a decision maker must estimate probabilities as frequencies in
a time series while the observer knows them a priori. This suggests an
alternative presentation of probability weighting as a principled response by a
decision maker to uncertainties unaccounted for in an observer's model.",What are we weighting for? A mechanistic model for probability weighting,2020-04-30 22:09:23,"Ole Peters, Alexander Adamou, Mark Kirstein, Yonatan Berman","http://arxiv.org/abs/2005.00056v1, http://arxiv.org/pdf/2005.00056v1",econ.TH
33992,th,"This paper identifies the mathematical equivalence between economic networks
of Cobb-Douglas agents and Artificial Neural Networks. It explores two
implications of this equivalence under general conditions. First, a burgeoning
literature has established that network propagation can transform microeconomic
perturbations into large aggregate shocks. Neural network equivalence amplifies
the magnitude and complexity of this phenomenon. Second, if economic agents
adjust their production and utility functions in optimal response to local
conditions, market pricing is a sufficient and robust channel for information
feedback leading to macro learning.",On the Equivalence of Neural and Production Networks,2020-05-01 20:28:50,"Roy Gernhardt, Bjorn Persson","http://arxiv.org/abs/2005.00510v2, http://arxiv.org/pdf/2005.00510v2",econ.TH
34054,th,"By analyzing the duopoly market of computer graphics cards, we categorized
the effects of enterprises' technological progress into two types, namely, cost
reduction and product diversification. Our model proved that technological
progress is the most effective means for enterprises in this industry to
increase profits. Due to the technology-intensive nature of this industry,
monopolistic enterprises face more intense competition compared with
traditional manufacturing. Therefore, they have more motivation for
technological innovation. Enterprises aiming at maximizing profits have
incentives to reduce costs and achieve a higher degree of product
differentiation through technological innovation.",The Duopoly Analysis of Graphics Card Market,2020-10-18 22:29:37,Nan Miles Xi,"http://arxiv.org/abs/2010.09074v1, http://arxiv.org/pdf/2010.09074v1",econ.TH
33993,th,"We study a school choice problem under affirmative action policies where
authorities reserve a certain fraction of the slots at each school for specific
student groups, and where students have preferences not only over the schools
they are matched to but also the type of slots they receive. Such reservation
policies might cause waste in instances of low demand from some student groups.
To propose a solution to this issue, we construct a family of choice functions,
dynamic reserves choice functions, for schools that respect within-group
fairness and allow the transfer of otherwise vacant slots from low-demand
groups to high-demand groups. We propose the cumulative offer mechanism (COM)
as an allocation rule where each school uses a dynamic reserves choice function
and show that it is stable with respect to schools' choice functions, is
strategy-proof, and respects improvements. Furthermore, we show that
transferring more of the otherwise vacant slots leads to strategy-proof Pareto
improvement under the COM.",Dynamic Reserves in Matching Markets,2020-05-03 17:42:41,"Orhan Aygün, Bertan Turhan","http://arxiv.org/abs/2005.01103v1, http://arxiv.org/pdf/2005.01103v1",econ.TH
33994,th,"Empirical evidence suggests that the rich have higher propensity to save than
do the poor. While this observation may appear to contradict the homotheticity
of preferences, we theoretically show that that is not the case. Specifically,
we consider an income fluctuation problem with homothetic preferences and
general shocks and prove that consumption functions are asymptotically linear,
with an exact analytical characterization of asymptotic marginal propensities
to consume (MPC). We provide necessary and sufficient conditions for the
asymptotic MPCs to be zero. We calibrate a model with standard constant
relative risk aversion utility and show that zero asymptotic MPCs are
empirically plausible, implying that our mechanism has the potential to
accommodate a large saving rate of the rich and high wealth inequality (small
Pareto exponent) as observed in the data.",A Theory of the Saving Rate of the Rich,2020-05-05 06:20:13,"Qingyin Ma, Alexis Akira Toda","http://dx.doi.org/10.1016/j.jet.2021.105193, http://arxiv.org/abs/2005.02379v5, http://arxiv.org/pdf/2005.02379v5",econ.TH
33995,th,"We propose and axiomatize the categorical thinking model (CTM) in which the
framing of the decision problem affects how agents categorize alternatives,
that in turn affects their evaluation of it. Prominent models of salience,
status quo bias, loss-aversion, inequality aversion, and present bias all fit
under the umbrella of CTM. This suggests categorization is an underlying
mechanism of key departures from the neoclassical model of choice. We
specialize CTM to provide a behavioral foundation for the salient thinking
model of Bordalo et al. (2013) that highlights its strong predictions and
distinctions from other models.",Choice with Endogenous Categorization,2020-05-11 18:39:47,"Andrew Ellis, Yusufcan Masatlioglu","http://arxiv.org/abs/2005.05196v3, http://arxiv.org/pdf/2005.05196v3",econ.TH
33996,th,"An equilibrium is communication-proof if it is unaffected by new
opportunities to communicate and renegotiate. We characterize the set of
equilibria of coordination games with pre-play communication in which players
have private preferences over the coordinated outcomes. The set of
communication-proof equilibria is a small and relatively homogeneous subset of
the set of qualitatively diverse Bayesian Nash equilibria. Under a
communication-proof equilibrium, players never miscoordinate, play their
jointly preferred outcome whenever there is one, and communicate only the
ordinal part of their preferences. Moreover, such equilibria are robust to
changes in players' beliefs and are interim Pareto efficient.","Communication, Renegotiation and Coordination with Private Values",2020-05-12 15:16:34,"Yuval Heller, Christoph Kuzmics","http://arxiv.org/abs/2005.05713v4, http://arxiv.org/pdf/2005.05713v4",econ.TH
33997,th,"We develop a result on expected posteriors for Bayesians with heterogenous
priors, dubbed information validates the prior (IVP). Under familiar ordering
requirements, Anne expects a (Blackwell) more informative experiment to bring
Bob's posterior mean closer to Anne's prior mean. We apply the result in two
contexts of games of asymmetric information: voluntary testing or
certification, and costly signaling or falsification. IVP can be used to
determine how an agent's behavior responds to additional exogenous or
endogenous information. We discuss economic implications.",Information Validates the Prior: A Theorem on Bayesian Updating and Applications,2020-05-12 15:20:05,"Navin Kartik, Frances Lee, Wing Suen","http://dx.doi.org/10.1257/aeri.20200284, http://arxiv.org/abs/2005.05714v3, http://arxiv.org/pdf/2005.05714v3",econ.TH
33998,th,"We study population dynamics under which each revising agent tests each
strategy k times, with each trial being against a newly drawn opponent, and
chooses the strategy whose mean payoff was highest. When k = 1, defection is
globally stable in the prisoner`s dilemma. By contrast, when k > 1 we show that
there exists a globally stable state in which agents cooperate with probability
between 28% and 50%. Next, we characterize stability of strict equilibria in
general games. Our results demonstrate that the empirically plausible case of k
> 1 can yield qualitatively different predictions than the case of k = 1 that
is commonly studied in the literature.",Instability of Defection in the Prisoner's Dilemma Under Best Experienced Payoff Dynamics,2020-05-12 17:01:05,"Srinivas Arigapudi, Yuval Heller, Igal Milchtaich","http://arxiv.org/abs/2005.05779v3, http://arxiv.org/pdf/2005.05779v3",econ.TH
33999,th,"We propose a new method to define trading algorithms in market design
environments. Dropping the traditional idea of clearing cycles in generated
graphs, we use parameterized linear equations to define trading algorithms. Our
method has two advantages. First, our method avoids discussing the details of
who trades with whom and how, which can be a difficult question in complex
environments. Second, by controlling parameter values in our equations, our
method is flexible and transparent to satisfy various fairness criteria. We
apply our method to several models and obtain new trading algorithms that are
efficient and fair.",Efficient and fair trading algorithms in market design environments,2020-05-14 14:36:11,"Jingsheng Yu, Jun Zhang","http://arxiv.org/abs/2005.06878v3, http://arxiv.org/pdf/2005.06878v3",econ.TH
34013,th,"I study repeated communication games between a patient sender and a sequence
of receivers. The sender has persistent private information about his
psychological cost of lying, and in every period, can privately observe the
realization of an i.i.d. state before communication takes place. I characterize
every type of sender's highest equilibrium payoff. When the highest lying cost
in the support of the receivers' prior belief approaches the sender's benefit
from lying, every type's highest equilibrium payoff in the repeated
communication game converges to his equilibrium payoff in a one-shot Bayesian
persuasion game. I also show that in every sender-optimal equilibrium, no type
of sender mixes between telling the truth and lying at every history. When
there exist ethical types whose lying costs outweigh their benefits, I provide
necessary and sufficient conditions for all non-ethical type senders to attain
their optimal commitment payoffs. I identify an outside option effect through
which the possibility of being ethical decreases every non-ethical type's
payoff.",Repeated Communication with Private Lying Cost,2020-06-15 04:08:33,Harry Pei,"http://arxiv.org/abs/2006.08069v1, http://arxiv.org/pdf/2006.08069v1",econ.TH
34000,th,"Voting is the aggregation of individual preferences in order to select a
winning alternative. Selection of a winner is accomplished via a voting rule,
e.g., rank-order voting, majority rule, plurality rule, approval voting. Which
voting rule should be used? In social choice theory, desirable properties of
voting rules are expressed as axioms to be satisfied. This thesis focuses on
axioms concerning strategic manipulation by voters. Sometimes, voters may
intentionally misstate their true preferences in order to alter the outcome for
their own advantage. For example, in plurality rule, if a voter knows that
their top-choice candidate will lose, then they might instead vote for their
second-choice candidate just to avoid an even less desirable result. When no
coalition of voters can strategically manipulate, then the voting rule is said
to satisfy the axiom of Strategy-Proofness. A less restrictive axiom is Weak
Strategy-Proofness (as defined by Dasgupta and Maskin (2019)), which allows for
strategic manipulation by all but the smallest coalitions. Under certain
intuitive conditions, Dasgupta and Maskin (2019) proved that the only voting
rules satisfying Strategy-Proofness are rank-order voting and majority rule. In
my thesis, I generalize their result by proving that rank-order voting and
majority rule are surprisingly still the only voting rules satisfying Weak
Strategy-Proofness.",Exploring Weak Strategy-Proofness in Voting Theory,2020-05-13 22:53:08,Anne Carlstein,"http://arxiv.org/abs/2005.07521v1, http://arxiv.org/pdf/2005.07521v1",econ.TH
34001,th,"This paper considers a simple model where a social planner can influence the
spread-intensity of an infection wave, and, consequently, also the economic
activity and population health, through a single parameter. Population health
is assumed to only be negatively affected when the number of simultaneously
infected exceeds health care capacity. The main finding is that if (i) the
planner attaches a positive weight to economic activity and (ii) it is more
harmful for the economy to be locked down for longer rather than shorter time periods,
then the optimal policy is to (weakly) exceed health care capacity at some
time.",Optimal Trade-Off Between Economic Activity and Health During an Epidemic,2020-05-15 18:12:08,"Tommy Andersson, Albin Erlanson, Daniel Spiro, Robert Östling","http://arxiv.org/abs/2005.07590v1, http://arxiv.org/pdf/2005.07590v1",econ.TH
34002,th,"Efficiency and fairness are two desiderata in market design. Fairness
requires randomization in many environments. Observing the inadequacy of Top
Trading Cycle (TTC) to incorporate randomization, Yu and Zhang (2020) propose
the class of Fractional TTC mechanisms to solve random allocation problems
efficiently and fairly. The assumption of strict preferences in the paper
restricts its scope of application. This paper extends Fractional TTC to the full
preference domain in which agents can be indifferent between objects.
Efficiency and fairness of Fractional TTC are preserved. As a corollary, we
obtain an extension of the probabilistic serial mechanism in the house
allocation model to the full preference domain. Our extension does not require
any knowledge beyond elementary computation.",Fractional Top Trading Cycle on the Full Preference Domain,2020-05-19 13:06:01,"Jingsheng Yu, Jun Zhang","http://arxiv.org/abs/2005.09340v1, http://arxiv.org/pdf/2005.09340v1",econ.TH
34003,th,"Many markets rely on traders truthfully communicating who has cheated in the
past and ostracizing those traders from future trade. This paper investigates
when truthful communication is incentive compatible. We find that if each side
has a myopic incentive to deviate, then communication incentives are satisfied
only when the volume of trade is low. By contrast, if only one side has a
myopic incentive to deviate, then communication incentives do not constrain the
volume of supportable trade. Accordingly, there are strong gains from
structuring trade so that one side either moves first or has its cooperation
guaranteed by external enforcement.",Communication and Cooperation in Markets,2020-05-20 07:03:57,"S. Nageeb Ali, David A. Miller","http://arxiv.org/abs/2005.09839v1, http://arxiv.org/pdf/2005.09839v1",econ.TH
34004,th,"Following the ideas laid out in Myerson (1996), Hofbauer (2000) defined a
Nash equilibrium of a finite game as sustainable if it can be made the unique
Nash equilibrium of a game obtained by deleting/adding a subset of the
strategies that are inferior replies to it. This paper proves two results about
sustainable equilibria. The first concerns the Hofbauer-Myerson conjecture
about the relationship between the sustainability of an equilibrium and its
index: for a generic class of games, an equilibrium is sustainable iff its
index is $+1$. Von Schemde and von Stengel (2008) proved this conjecture for
bimatrix games; we show that the conjecture is true for all finite games. More
precisely, we prove that an isolated equilibrium has index +1 if and only if it
can be made unique in a larger game obtained by adding finitely many strategies
that are inferior replies to that equilibrium. Our second result gives an
axiomatic extension of sustainability to all games and shows that only the Nash
components with positive index can be sustainable.",On Sustainable Equilibria,2020-05-28 18:40:15,"Srihari Govindan, Rida Laraki, Lucas Pahl","http://arxiv.org/abs/2005.14094v2, http://arxiv.org/pdf/2005.14094v2",econ.TH
34005,th,"Plurality and approval voting are two well-known voting systems with
different strengths and weaknesses. In this paper we consider a new voting
system we call beta(k) which allows voters to select a single first-choice
candidate and approve of any other number of candidates, where k denotes the
relative weight given to a first choice; this system is essentially a hybrid of
plurality and approval. Our primary goal is to characterize the behavior of
beta(k) for any value of k. Under certain reasonable assumptions, beta(k) can
be made to mimic plurality or approval voting in the event of a single winner
while potentially breaking ties otherwise. Under the assumption that voters are
honest, we show that it is possible to find the values of k for which a given
candidate will win the election if the respective approval and plurality votes
are known. Finally, we show how some of the commonly used voting system
criteria are satisfied by beta(k).",Evaluating the Properties of a First Choice Weighted Approval Voting System,2020-05-31 00:12:52,"Peter Butler, Jerry Lin","http://arxiv.org/abs/2006.00368v1, http://arxiv.org/pdf/2006.00368v1",econ.TH
34006,th,"We consider contest success functions (CSFs) that extract contestants' prize
values. In the common-value case, there exists a CSF extractive in any
equilibrium. In the observable-private-value case, there exists a CSF
extractive in some equilibrium; there exists a CSF extractive in any
equilibrium if and only if the number of contestants is greater than or equal
to three or the values are homogeneous. In the unobservable-private-value case,
there exists no CSF extractive in some equilibrium. When extractive CSFs exist,
we explicitly present one of them.",Extractive contest design,2020-06-02 20:45:06,Tomohiko Kawamori,"http://arxiv.org/abs/2006.01808v3, http://arxiv.org/pdf/2006.01808v3",econ.TH
34030,th,"The Solow-Swan model is shortly reviewed from a mathematical point of view.
By considering non-constant returns to scale, we obtain a general solution
strategy. We then compute the exact solution for the Cobb-Douglas production
function, for both the classical model and the von Bertalanffy model. Numerical
simulations are provided.",Exact solutions for a Solow-Swan model with non-constant returns to scale,2020-08-13 16:16:15,"Nicolò Cangiotti, Mattia Sensi","http://dx.doi.org/10.1007/s13226-022-00341-7, http://arxiv.org/abs/2008.05875v1, http://arxiv.org/pdf/2008.05875v1",econ.TH
34007,th,"While auction theory views bids and valuations as continuous variables,
real-world auctions are necessarily discrete. In this paper, we use a
combination of analytical and computational methods to investigate whether
incorporating discreteness substantially changes the predictions of auction
theory, focusing on the case of uniformly distributed valuations so that our
results bear on the majority of auction experiments. In some cases, we find
that introducing discreteness changes little. For example, the first-price
auction with two bidders and an even number of values has a symmetric
equilibrium that closely resembles its continuous counterpart and converges to
its continuous counterpart as the discretisation goes to zero. In others,
however, we uncover discontinuity results. For instance, introducing an
arbitrarily small amount of discreteness into the all-pay auction makes its
symmetric, pure-strategy equilibrium disappear; and appears (based on
computational experiments) to rob the game of pure-strategy equilibria
altogether. These results raise questions about the continuity approximations
on which auction theory is based and prompt a re-evaluation of the experimental
literature.",The importance of being discrete: on the inaccuracy of continuous approximations in auction theory,2020-06-04 20:06:50,"Itzhak Rasooly, Carlos Gavidia-Calderon","http://arxiv.org/abs/2006.03016v3, http://arxiv.org/pdf/2006.03016v3",econ.TH
34008,th,"We study several models of growth driven by innovation and imitation by a
continuum of firms, focusing on the interaction between the two. We first
investigate a model on a technology ladder where innovation and imitation
combine to generate a balanced growth path (BGP) with compact support, and with
productivity distributions for firms that are truncated power-laws. We start
with a simple model where firms can adopt technologies of other firms with
higher productivities according to exogenous probabilities. We then study the
case where the adoption probabilities depend on the probability distribution of
productivities at each time. We finally consider models with a finite number of
firms, which by construction have firm productivity distributions with bounded
support. Stochastic imitation and innovation can make the distance of the
productivity frontier to the lowest productivity level fluctuate, and this
distance can occasionally become large. Alternatively, if we fix the length of
the support of the productivity distribution because firms too far from the
frontier cannot survive, the number of firms can fluctuate randomly.",Innovation and imitation,2020-06-11 13:29:36,"Jess Benhabib, Éric Brunet, Mildred Hager","http://arxiv.org/abs/2006.06315v2, http://arxiv.org/pdf/2006.06315v2",econ.TH
34009,th,"A proposer requires the approval of a veto player to change a status quo.
Preferences are single peaked. Proposer is uncertain about Vetoer's ideal
point. We study Proposer's optimal mechanism without transfers. Vetoer is given
a menu, or a delegation set, to choose from. The optimal delegation set
balances the extent of Proposer's compromise with the risk of a veto. Under
reasonable conditions, ""full delegation"" is optimal: Vetoer can choose any
action between the status quo and Proposer's ideal action. This outcome largely
nullifies Proposer's bargaining power; Vetoer frequently obtains her ideal
point, and there is Pareto efficiency despite asymmetric information. More
generally, we identify when ""interval delegation"" is optimal. Optimal interval
delegation can be a Pareto improvement over cheap talk. We derive comparative
statics. Vetoer receives less discretion when preferences are more likely to be
aligned, by contrast to expertise-based delegation. Methodologically, our
analysis handles stochastic mechanisms.",Delegation in Veto Bargaining,2020-06-11 22:50:47,"Navin Kartik, Andreas Kleiner, Richard Van Weelden","http://dx.doi.org/10.1257/aer.20201817, http://arxiv.org/abs/2006.06773v3, http://arxiv.org/pdf/2006.06773v3",econ.TH
34010,th,"We study private-good allocation under general constraints. Several prominent
examples are special cases, including house allocation, roommate matching,
social choice, and multiple assignment. Every individually strategy-proof and
Pareto efficient two-agent mechanism is an ""adapted local dictatorship."" Every
group strategy-proof N-agent mechanism has two-agent marginal mechanisms that
are adapted local dictatorships. These results yield new characterizations and
unifying insights for known characterizations. We find all group strategy-proof
and Pareto efficient mechanisms for the roommates problem. We give a related
result for multiple assignment. We prove the Gibbard--Satterthwaite Theorem and
give a partial converse.",Incentives and Efficiency in Constrained Allocation Mechanisms,2020-06-11 22:56:44,"Joseph Root, David S. Ahn","http://arxiv.org/abs/2006.06776v2, http://arxiv.org/pdf/2006.06776v2",econ.TH
34011,th,"A well-intentioned principal provides information to a rationally inattentive
agent without internalizing the agent's cost of processing information.
Whatever information the principal makes available, the agent may choose to
ignore some. We study optimal information provision in a tractable model with
quadratic payoffs where full disclosure is not optimal. We characterize
incentive-compatible information policies, that is, those to which the agent
willingly pays full attention. In a leading example with three states, optimal
disclosure involves information distortion at intermediate costs of attention.
As the cost increases, optimal information abruptly changes from downplaying
the state to exaggerating the state.",Optimal Attention Management: A Tractable Framework,2020-06-14 01:11:31,"Elliot Lipnowski, Laurent Mathevet, Dong Wei","http://arxiv.org/abs/2006.07729v2, http://arxiv.org/pdf/2006.07729v2",econ.TH
34012,th,"I study a social learning model in which the object to learn is a strategic
player's endogenous actions rather than an exogenous state. A patient seller
faces a sequence of buyers and decides whether to build a reputation for
supplying high quality products. Each buyer does not have access to the
seller's complete records, but can observe all previous buyers' actions, and
some informative private signal about the seller's actions. I examine how the
buyers' private signals affect the speed of social learning and the seller's
incentives to establish reputations. When each buyer privately observes a
bounded subset of the seller's past actions, the speed of learning is strictly
positive but can vanish to zero as the seller becomes patient. As a result,
reputation building can lead to low payoff for the patient seller and low
social welfare. When each buyer observes an unboundedly informative private
signal about the seller's current-period action, the speed of learning is
uniformly bounded from below and a patient seller can secure high returns from
building reputations. My results shed light on the effectiveness of various
policies in accelerating social learning and encouraging sellers to establish
good reputations.",Reputation Building under Observational Learning,2020-06-15 04:06:33,Harry Pei,"http://arxiv.org/abs/2006.08068v6, http://arxiv.org/pdf/2006.08068v6",econ.TH
34031,th,"This is a general competitive analysis paper. A model is presented that
describes how an individual with a physical disability, or mobility impairment,
would go about utility maximization. These results are then generalized.
Subsequently, a selection of disability policies from Canada and the United
States are compared to the insights of the model, and it is shown that there
are sources of inefficiency in many North American disability support systems.",Mobility and Social Efficiency,2020-08-18 01:23:32,Ryan Steven Kostiuk,"http://arxiv.org/abs/2008.07650v1, http://arxiv.org/pdf/2008.07650v1",econ.TH
34014,th,"I study a repeated game in which a patient player (e.g., a seller) wants to
win the trust of some myopic opponents (e.g., buyers) but can strictly benefit
from betraying them. Her benefit from betrayal is strictly positive and is her
persistent private information. I characterize every type of patient player's
highest equilibrium payoff. Her persistent private information affects this
payoff only through the lowest benefit in the support of her opponents' prior
belief. I also show that in every equilibrium which is optimal for the patient
player, her on-path behavior is nonstationary, and her long-run action
frequencies are pinned down for all except two types. Conceptually, my
payoff-type approach incorporates a realistic concern that no type of
reputation-building player is immune to reneging temptations. Compared to
commitment-type models, the incentive constraints for all types of patient
player lead to a sharp characterization of her highest attainable payoff and
novel predictions on her behaviors.",Trust and Betrayals: Reputational Payoffs and Behaviors without Commitment,2020-06-15 04:11:13,Harry Pei,"http://arxiv.org/abs/2006.08071v1, http://arxiv.org/pdf/2006.08071v1",econ.TH
34015,th,"This paper develops a Nash-equilibrium extension of the classic SIR model of
infectious-disease epidemiology (""Nash SIR""), endogenizing people's decisions
whether to engage in economic activity during a viral epidemic and allowing for
complementarity in social-economic activity. An equilibrium epidemic is one in
which Nash equilibrium behavior during the epidemic generates the epidemic.
There may be multiple equilibrium epidemics, in which case the epidemic
trajectory can be shaped through the coordination of expectations, in addition
to other sorts of interventions such as stay-at-home orders and accelerated
vaccine development. An algorithm is provided to compute all equilibrium
epidemics.",Nash SIR: An Economic-Epidemiological Model of Strategic Behavior During a Viral Epidemic,2020-06-17 22:21:18,David McAdams,"http://arxiv.org/abs/2006.10109v1, http://arxiv.org/pdf/2006.10109v1",econ.TH
34016,th,"Economic theory has provided an estimable intuition in understanding the
perplexing ideologies in law, in the areas of economic law, tort law, contract
law, procedural law and many others. Most legal systems require the parties
involved in a legal dispute to exchange information through a process called
discovery. The purpose is to reduce the relative optimisms developed by
asymmetric information between the parties. Like a head or tail phenomenon in
stochastic processes, uncertainty in adjudication affects the decisions of
the parties in a legal negotiation. This paper therefore applies the principles
of aleatory analysis to determine how negotiations fail in the legal process,
introduces the axiological concept of optimal transaction cost, and formulates a
numerical methodology based on backwards induction and stochastic options
pricing economics in estimating the reasonable and fair bargain in order to
induce settlements thereby increasing efficiency and reducing social costs.",Mechanism of Instrumental Game Theory in The Legal Process via Stochastic Options Pricing Induction,2020-06-19 13:39:08,"Kwadwo Osei Bonsu, Shoucan Chen","http://dx.doi.org/10.9734/arjom/2020/v16i830215, http://arxiv.org/abs/2006.11061v1, http://arxiv.org/pdf/2006.11061v1",econ.TH
34017,th,"This paper examines necessary and sufficient conditions for the uniqueness of
dynamic Groves mechanisms when the domain of valuations is restricted. Our
approach is to appropriately define the total valuation function, which is the
expected discounted sum of each period's valuation function from the allocation
and thus a dynamic counterpart of the static valuation function, and then to
port the results for static Groves mechanisms to the dynamic setting.",The uniqueness of dynamic Groves mechanisms on restricted domains,2020-06-25 08:43:19,Kiho Yoon,"http://arxiv.org/abs/2006.14190v1, http://arxiv.org/pdf/2006.14190v1",econ.TH
34018,th,"McCall (1970) examines the search behaviour of an infinitely-lived and
risk-neutral job seeker maximizing her lifetime earnings by accepting or
rejecting real-valued scalar wage offers. In practice, job offers have multiple
attributes, and job seekers solve a multicriteria search problem. This paper
presents a multicriteria search model and new comparative statics results.",Comparative Statics in Multicriteria Search Models,2020-06-25 17:44:46,Veli Safak,"http://arxiv.org/abs/2006.14452v1, http://arxiv.org/pdf/2006.14452v1",econ.TH
34019,th,"We consider games in which players search for a hidden prize, and they have
asymmetric information about the prize location. We study the social payoff in
equilibria of these games. We present sufficient conditions for the existence
of an equilibrium that yields the first-best payoff (i.e., the highest social
payoff under any strategy profile), and we characterize the first-best payoff.
The results have interesting implications for innovation contests and R&D
races.",Social Welfare in Search Games with Asymmetric Information,2020-06-26 11:35:09,"Gilad Bavly, Yuval Heller, Amnon Schreiber","http://arxiv.org/abs/2006.14860v2, http://arxiv.org/pdf/2006.14860v2",econ.TH
34020,th,"In a decision problem comprised of multiple choices, a person may fail to
take into account the interdependencies between their choices. To understand
how people make decisions in such problems, we design a novel experiment and
revealed preference tests that determine how each subject brackets their
choices. In separate portfolio allocation under risk, social allocation, and
induced-value function shopping experiments, we find that 40-43% of our
subjects are consistent with narrow bracketing while 0-15% are consistent with
broad bracketing. Adjusting for each model's predictive precision, 74% of
subjects are best described by narrow bracketing, 13% by broad bracketing, and
6% by intermediate cases.",Revealing Choice Bracketing,2020-06-26 12:04:44,"Andrew Ellis, David J. Freeman","http://arxiv.org/abs/2006.14869v3, http://arxiv.org/pdf/2006.14869v3",econ.TH
34021,th,"We investigate how distorted, yet structured, beliefs can persist in
strategic situations. Specifically, we study two-player games in which each
player is endowed with a biased-belief function that represents the discrepancy
between a player's beliefs about the opponent's strategy and the actual
strategy. Our equilibrium condition requires that (i) each player choose a
best-response strategy to his distorted belief about the opponent's strategy,
and (ii) the distortion functions form best responses to one another. We obtain
sharp predictions and novel insights into the set of stable outcomes and their
supporting stable biases in various classes of games.",Biased-Belief Equilibrium,2020-06-27 10:29:08,"Yuval Heller, Eyal Winter","http://dx.doi.org/10.1257/mic.20170400, http://arxiv.org/abs/2006.15306v1, http://arxiv.org/pdf/2006.15306v1",econ.TH
34032,th,"We study Bayesian Persuasion with multiple senders who have access to
conditionally independent experiments (and possibly others). Senders have
zero-sum preferences over information revealed. We characterize when any set of
states can be pooled in equilibrium and when all equilibria are fully
revealing. The state is fully revealed in every equilibrium if and only if
sender utility functions are `globally nonlinear'. With two states, this is
equivalent to some sender having nontrivial preferences. The upshot is that
`most' zero-sum sender preferences result in full revelation. We explore what
conditions are important for competition to result in such stark information
revelation.",Competing Persuaders in Zero-Sum Games,2020-08-19 18:52:16,"Dilip Ravindran, Zhihan Cui","http://arxiv.org/abs/2008.08517v2, http://arxiv.org/pdf/2008.08517v2",econ.TH
34022,th,"We develop a framework in which individuals' preferences coevolve with their
abilities to deceive others about their preferences and intentions.
Specifically, individuals are characterised by (i) a level of cognitive
sophistication and (ii) a subjective utility function. Increased cognition is
costly, but higher-level individuals have the advantage of being able to
deceive lower-level opponents about their preferences and intentions in some of
the matches. In the remaining matches, the individuals observe each other's
preferences. Our main result shows that, essentially, only efficient outcomes
can be stable. Moreover, under additional mild assumptions, we show that an
efficient outcome is stable if and only if the gain from unilateral deviation
is smaller than the effective cost of deception in the environment.",Coevolution of deception and preferences: Darwin and Nash meet Machiavelli,2020-06-27 10:37:36,"Yuval Heller, Erik Mohlin","http://dx.doi.org/10.1016/j.geb.2018.09.011, http://arxiv.org/abs/2006.15308v1, http://arxiv.org/pdf/2006.15308v1",econ.TH
34023,th,"Black and Cox (1976) claim that the value of junior debt is increasing in
asset risk when the firm's value is low. We show, using a closed-form solution,
that the junior debt's value is hump-shaped. This has interesting implications
for the market-discipline role of banks' junior debt.",A closed-form solution to the risk-taking motivation of subordinated debtholders,2020-06-27 10:45:16,"Yuval Heller, Sharon Peleg-Lazar, Alon Raviv","http://dx.doi.org/10.1016/j.econlet.2019.05.003, http://arxiv.org/abs/2006.15309v1, http://arxiv.org/pdf/2006.15309v1",econ.TH
34024,th,"We study environments in which agents are randomly matched to play a
Prisoner's Dilemma, and each player observes a few of the partner's past
actions against previous opponents. We depart from the existing related
literature by allowing a small fraction of the population to be commitment
types. The presence of committed agents destabilizes previously proposed
mechanisms for sustaining cooperation. We present a novel intuitive combination
of strategies that sustains cooperation in various environments. Moreover, we
show that under an additional assumption of stationarity, this combination of
strategies is essentially the unique mechanism to support full cooperation, and
it is robust to various perturbations. Finally, we extend the results to a
setup in which agents also observe actions played by past opponents against the
current partner, and we characterize which observation structure is optimal for
sustaining cooperation.",Observations on Cooperation,2020-06-27 10:59:16,"Yuval Heller, Erik Mohlin","http://dx.doi.org/10.1093/restud/rdx076, http://arxiv.org/abs/2006.15310v1, http://arxiv.org/pdf/2006.15310v1",econ.TH
34025,th,"A patient player privately observes a persistent state that directly affects
his myopic opponents' payoffs, and can be one of the several commitment types
that plays the same mixed action in every period. I characterize the set of
environments under which the patient player obtains at least his commitment
payoff in all equilibria regardless of his stage-game payoff function. Due to
interdependent values, the patient player cannot guarantee his mixed commitment
payoff by imitating the mixed-strategy commitment type, and small perturbations
to a pure commitment action can significantly reduce the patient player's
guaranteed equilibrium payoff.",Reputation for Playing Mixed Actions: A Characterization Theorem,2020-06-29 20:19:58,Harry Pei,"http://arxiv.org/abs/2006.16206v2, http://arxiv.org/pdf/2006.16206v2",econ.TH
34026,th,"A priority system has traditionally been the protocol of choice for the
allocation of scarce life-saving resources during public health emergencies.
Covid-19 revealed the limitations of this allocation rule. Many argue that
priority systems abandon ethical values such as equity by discriminating
against disadvantaged communities. We show that a restrictive feature of the
traditional priority system largely drives these limitations. Following
minimalist market design, an institution design paradigm that integrates
research and policy efforts, we formulate pandemic allocation of scarce
life-saving resources as a new application of market design. Interfering only
with the restrictive feature of the priority system to address its
shortcomings, we formulate a reserve system as an alternative allocation rule.
Our theoretical analysis develops a general theory of reserve design. We relate
our analysis to debates during Covid-19 and describe the impact of our paper on
policy and practice.","Fair Allocation of Vaccines, Ventilators and Antiviral Treatments: Leaving No Ethical Value Behind in Health Care Rationing",2020-08-02 04:13:23,"Parag A. Pathak, Tayfun Sönmez, M. Utku Ünver, M. Bumin Yenmez","http://arxiv.org/abs/2008.00374v3, http://arxiv.org/pdf/2008.00374v3",econ.TH
34027,th,"We study sequential search without priors. Our interest lies in decision
rules that are close to being optimal under each prior and after each history.
We call these rules dynamically robust. The search literature employs optimal
rules based on cutoff strategies that are not dynamically robust. We derive
dynamically robust rules and show that their performance exceeds 1/2 of the
optimum against binary environments and 1/4 of the optimum against all
environments. This performance improves substantially with the outside option
value; for instance, it exceeds 2/3 of the optimum if the outside option
exceeds 1/6 of the highest possible alternative.",Robust Sequential Search,2020-08-02 18:26:08,"Karl H. Schlag, Andriy Zapechelnyuk","http://arxiv.org/abs/2008.00502v1, http://arxiv.org/pdf/2008.00502v1",econ.TH
34028,th,"This paper derives primitive, easily verifiable sufficient conditions for
existence and uniqueness of (stochastic) recursive utilities for several
important classes of preferences. In order to accommodate models commonly used
in practice, we allow both the state-space and per-period utilities to be
unbounded. For many of the models we study, existence and uniqueness is
established under a single, primitive ""thin tail"" condition on the distribution
of growth in per-period utilities. We present several applications to robust
preferences, models of ambiguity aversion and learning about hidden states, and
Epstein-Zin preferences.",Existence and uniqueness of recursive utilities without boundedness,2020-07-30 23:24:56,Timothy M. Christensen,"http://dx.doi.org/10.1016/j.jet.2022.105413, http://arxiv.org/abs/2008.00963v3, http://arxiv.org/pdf/2008.00963v3",econ.TH
34029,th,"The standard model of choice in economics is the maximization of a complete
and transitive preference relation over a fixed set of alternatives. While
completeness of preferences is usually regarded as a strong assumption,
weakening it requires care to ensure that the resulting model still has enough
structure to yield interesting results. This paper takes a step in this
direction by studying the class of ""connected preferences"", that is,
preferences that may fail to be complete but have connected maximal domains of
comparability. We offer four new results. Theorem 1 identifies a basic
necessary condition for a continuous preference to be connected in the sense
above, while Theorem 2 provides sufficient conditions. Building on the latter,
Theorem 3 characterizes the maximal domains of comparability. Finally, Theorem
4 presents conditions that ensure that maximal domains are arc-connected.",Connected Incomplete Preferences,2020-08-10 23:16:51,"Leandro Gorno, Alessandro Rivello","http://arxiv.org/abs/2008.04401v1, http://arxiv.org/pdf/2008.04401v1",econ.TH
34033,th,"Electre Tri is a set of methods designed to sort alternatives evaluated on
several criteria into ordered categories. In these methods, alternatives are
assigned to categories by comparing them with reference profiles that represent
either the boundary or central elements of the category. The original Electre
Tri-B method uses one limiting profile for separating a category from the
category below. A more recent method, Electre Tri-nB, allows one to use several
limiting profiles for the same purpose. We investigate the properties of
Electre Tri-nB using a conjoint measurement framework. When the number of
limiting profiles used to define each category is not restricted, Electre
Tri-nB is easy to characterize axiomatically and is found to be equivalent to
several other methods proposed in the literature. We extend this result in
various directions.",A theoretical look at ELECTRE TRI-nB and related sorting models,2020-08-20 13:44:34,"Denis Bouyssou, Thierry Marchant, Marc Pirlot","http://arxiv.org/abs/2008.09484v3, http://arxiv.org/pdf/2008.09484v3",econ.TH
34034,th,"We examine the design of optimal rating systems in the presence of moral
hazard. First, an intermediary commits to a rating scheme. Then, a
decision-maker chooses an action that generates value for the buyer. The
intermediary then observes a noisy signal of the decision-maker's choice and
sends the buyer a signal consistent with the rating scheme. Here we fully
characterize the set of allocations that can arise in equilibrium under any
arbitrary rating system. We use this characterization to study various design
aspects of optimal rating systems. Specifically, we study the properties of
optimal ratings when the decision-maker's effort is productive and when the
decision-maker can manipulate the intermediary's signal with noise. With
manipulation, rating uncertainty is a fairly robust feature of optimal rating
systems.",Optimal Rating Design under Moral Hazard,2020-08-21 18:11:22,"Maryam Saeedi, Ali Shourideh","http://arxiv.org/abs/2008.09529v3, http://arxiv.org/pdf/2008.09529v3",econ.TH
34035,th,"A collective choice problem is a finite set of social alternatives and a
finite set of economic agents with vNM utility functions. We associate a public
goods economy with each collective choice problem and establish the existence
and efficiency of (equal income) Lindahl equilibrium allocations. We interpret
collective choice problems as cooperative bargaining problems and define a
set-valued solution concept, {\it the equitable solution} (ES). We provide
axioms that characterize ES and show that ES contains the Nash bargaining
solution. Our main result shows that the set of ES payoffs is the same as the
set of Lindahl equilibrium payoffs. We consider two applications: in the first,
we show that in a large class of matching problems without transfers the set of
Lindahl equilibrium payoffs is the same as the set of (equal income) Walrasian
equilibrium payoffs. In our second application, we show that in any discrete
exchange economy without transfers every Walrasian equilibrium payoff is a
Lindahl equilibrium payoff of the corresponding collective choice market.
Moreover, for any cooperative bargaining problem, it is possible to define a
set of commodities so that the resulting economy's utility possibility set is
that bargaining problem {\it and} the resulting economy's set of Walrasian
equilibrium payoffs is the same as the set of Lindahl equilibrium payoffs of
the corresponding collective choice market.",Lindahl Equilibrium as a Collective Choice Rule,2020-08-23 03:37:13,"Faruk Gul, Wolfgang Pesendorfer","http://arxiv.org/abs/2008.09932v2, http://arxiv.org/pdf/2008.09932v2",econ.TH
34036,th,"The aim of this article is to propose a core game theory model of transaction
costs wherein it is indicated how direct costs determine the probability of
loss and subsequent transaction costs. The existence of an optimum is proven, and
the way in which exposure influences the location of the optimum is
demonstrated. The decisions are described as a two-player game and it is
discussed how the transaction cost sharing rule determines whether the optimum
point of transaction costs is the same as the equilibrium of the game. A game
modelling dispute between actors regarding changing the share of transaction
costs to be paid by each party is also presented. Requirements of efficient
transaction cost sharing rules are defined, and it is posited that a solution
exists which is not unique. Policy conclusions are also devised based on
principles of design of institutions to influence the nature of transaction
costs.","Transaction Costs: Economies of Scale, Optimum, Equilibrium and Efficiency",2020-08-24 15:07:58,"László Kállay, Tibor Takács, László Trautmann","http://arxiv.org/abs/2008.10348v1, http://arxiv.org/pdf/2008.10348v1",econ.TH
34037,th,"We characterize Pareto optimality via ""near"" weighted utilitarian welfare
maximization. One characterization sequentially maximizes utilitarian welfare
functions using a finite sequence of nonnegative and eventually positive
welfare weights. The other maximizes a utilitarian welfare function with a
certain class of positive hyperreal weights. The social welfare ordering
represented by these ""near"" weighted utilitarian welfare criteria is
characterized by the standard axioms for weighted utilitarianism under a
suitable weakening of the continuity axiom.","""Near"" Weighted Utilitarian Characterizations of Pareto Optima",2020-08-25 07:50:38,"Yeon-Koo Che, Jinwoo Kim, Fuhito Kojima, Christopher Thomas Ryan","http://arxiv.org/abs/2008.10819v2, http://arxiv.org/pdf/2008.10819v2",econ.TH
34038,th,"We study a model of electoral accountability and selection whereby
heterogeneous voters aggregate incumbent politician's performance data into
personalized signals through paying limited attention. Extreme voters' signals
exhibit an own-party bias, which hampers their ability to discern the good and
bad performances of the incumbent. While this effect alone would undermine
electoral accountability and selection, there is a countervailing effect
stemming from partisan disagreement, which makes the centrist voter more likely
to be pivotal. When the latter's unbiased signal is very informative about
the incumbent's performance, the combined effect on electoral accountability
and selection can actually be a positive one. For this reason, factors that
carry a negative connotation in every political discourse -- such as increasing
mass polarization and shrinking attention span -- have ambiguous accountability
and selection effects in general. Correlating voters' signals, if done
appropriately, unambiguously improves electoral accountability and selection
and, hence, voter welfare.",Electoral Accountability and Selection with Personalized Information Aggregation,2020-09-03 18:59:54,"Anqi Li, Lin Hu","http://arxiv.org/abs/2009.03761v7, http://arxiv.org/pdf/2009.03761v7",econ.TH
34046,th,"A variety of social, economic, and political interactions have long been
modelled after Blotto games. In this paper, we introduce a general model of
dynamic $n$-player Blotto contests. The players have asymmetric resources, and
the battlefield prizes are not necessarily homogeneous. Each player's
probability of winning the prize in a battlefield is governed by a contest
success function and players' resource allocation on that battlefield. We show
that there exists a subgame perfect equilibrium in which players allocate their
resources proportional to the battlefield prizes for every history. This result
is robust to exogenous resource shocks throughout the game.",Proportional resource allocation in dynamic n-player Blotto games,2020-10-10 23:37:10,"Nejat Anbarcı, Kutay Cingiz, Mehmet S. Ismail","http://arxiv.org/abs/2010.05087v2, http://arxiv.org/pdf/2010.05087v2",econ.TH
34039,th,"In random expected utility (Gul and Pesendorfer, 2006), the distribution of
preferences is uniquely recoverable from random choice. This paper shows
through two examples that such uniqueness fails in general if risk preferences
are random but do not conform to expected utility theory. In the first,
non-uniqueness obtains even if all preferences are confined to the betweenness
class (Dekel, 1986) and are suitably monotone. The second example illustrates
random choice behavior consistent with random expected utility that is also
consistent with random non-expected utility. On the other hand, we find that if
risk preferences conform to weighted utility theory (Chew, 1983) and are
monotone in first-order stochastic dominance, random choice again uniquely
identifies the distribution of preferences. Finally, we argue that, depending
on the domain of risk preferences, uniqueness may be restored if joint
distributions of choice across a limited number of feasible sets are available.",Random Non-Expected Utility: Non-Uniqueness,2020-09-09 12:16:22,Yi-Hsuan Lin,"http://arxiv.org/abs/2009.04173v1, http://arxiv.org/pdf/2009.04173v1",econ.TH
34040,th,"Recently, many matching systems around the world have been reformed. These
reforms responded to objections that the matching mechanisms in use were unfair
and manipulable. Surprisingly, the mechanisms remained unfair even after the
reforms: the new mechanisms may induce an outcome with a blocking student who
desires and deserves a school which she did not receive. However, as we show in
this paper, the reforms introduced matching mechanisms which are more fair
compared to the counterfactuals. First, most of the reforms introduced
mechanisms that are more fair by stability: whenever the old mechanism does not
have a blocking student, the new mechanism does not have a blocking student
either. Second, some reforms introduced mechanisms that are more fair by
counting: the old mechanism always has at least as many blocking students as
the new mechanism. These findings give a novel rationale to the reforms and
complement the recent literature showing that the same reforms have introduced
less manipulable matching mechanisms. We further show that the fairness and
manipulability of the mechanisms are strongly logically related.",Reforms meet fairness concerns in school and college admissions,2020-09-11 09:16:31,"Somouaoga Bonkoungou, Alexander Nesterov","http://arxiv.org/abs/2009.05245v2, http://arxiv.org/pdf/2009.05245v2",econ.TH
34041,th,"Strategy-proof mechanisms are widely used in market design. In an abstract
allocation framework where outside options are available to agents, we obtain
two results for strategy-proof mechanisms. They provide a unified foundation
for several existing results in distinct models and imply new results in some
models. The first result proves that, for individually rational and
strategy-proof mechanisms, pinning down every agent's probability of choosing
his outside option is equivalent to pinning down a mechanism. The second result
provides a sufficient condition for two strategy-proof mechanisms to be
equivalent when the number of possible allocations is finite.",Strategy-proof allocation with outside option,2020-09-11 12:51:25,Jun Zhang,"http://arxiv.org/abs/2009.05311v2, http://arxiv.org/pdf/2009.05311v2",econ.TH
34042,th,"We introduce and study a model of long-run convention formation for rare
interactions. Players in this model form beliefs by observing a
recency-weighted sample of past interactions, to which they noisily best
respond. We propose a continuous state Markov model, well-suited for our
setting, and develop a methodology that is relevant for a larger class of
similar learning models. We show that the model admits a unique asymptotic
distribution which concentrates its mass on some minimal CURB block
configuration. In contrast to the existing literature on long-run convention
formation, we focus on behavior inside minimal CURB blocks and provide
conditions for convergence to (approximate) mixed equilibria conventions inside
minimal CURB blocks.",Stochastic Stability of a Recency Weighted Sampling Dynamic,2020-09-27 20:58:39,"Alexander Aurell, Gustav Karreskog","http://arxiv.org/abs/2009.12910v2, http://arxiv.org/pdf/2009.12910v2",econ.TH
34043,th,"Carroll and Kimball (1996) have shown that, in the class of utility functions
that are strictly increasing, strictly concave, and have nonnegative third
derivatives, hyperbolic absolute risk aversion (HARA) is sufficient for the
concavity of consumption functions in general consumption-saving problems. This
paper shows that HARA is necessary, implying the concavity of consumption is
not a robust prediction outside the HARA class.",Necessity of Hyperbolic Absolute Risk Aversion for the Concavity of Consumption Functions,2020-09-28 21:25:37,Alexis Akira Toda,"http://dx.doi.org/10.1016/j.jmateco.2020.102460, http://arxiv.org/abs/2009.13564v2, http://arxiv.org/pdf/2009.13564v2",econ.TH
34044,th,"When learning from others, people tend to focus their attention on those with
similar views. This is often attributed to flawed reasoning, and thought to
slow learning and polarize beliefs. However, we show that echo chambers are a
rational response to uncertainty about the accuracy of information sources, and
can improve learning and reduce disagreement. Furthermore, overextending the
range of views someone is exposed to can backfire, slowing their learning by
making them less responsive to information from others. We model a Bayesian
decision maker who chooses a set of information sources and then observes a
signal from one. With uncertainty about which sources are accurate, focusing
attention on signals close to one's own expectation can be beneficial, as their
expected accuracy is higher. The optimal echo chamber balances the credibility
of views similar to one's own against the usefulness of those further away.",Optimal Echo Chambers,2020-10-03 04:41:06,"Gabriel Martinez, Nicholas H. Tenev","http://arxiv.org/abs/2010.01249v9, http://arxiv.org/pdf/2010.01249v9",econ.TH
34045,th,"The problem of finding a (continuous) utility function for a semiorder has
been studied since R.D. Luce introduced the notion in \emph{Econometrica} in
1956. There were almost no results on the continuity of the representation. A
result analogous to Debreu's Lemma, but for semiorders, was never achieved.
Recently, some necessary conditions for the existence of a continuous
representation as well as some conjectures were presented by A. Estevan. In the
present paper we prove these conjectures, achieving the desired version of
Debreu's Open Gap Lemma for bounded semiorders. This result allows one to remove
the open-closed and closed-open gaps of a subset $S\subseteq \mathbb{R}$, but
now keeping the constant threshold, so that $x+1<y$ if and only if $g(x)+1<g(y)
\, (x,y\in S)$. Therefore, the continuous representation (in the sense of
Scott-Suppes) of bounded semiorders is characterized. These results are
achieved thanks to the key notion of $\epsilon$-continuity, which generalizes
the idea of continuity for semiorders.",Debreu's open gap lemma for semiorders,2020-10-02 01:30:52,A. Estevan,"http://dx.doi.org/10.1016/j.jmp.2023.102754, http://arxiv.org/abs/2010.04265v1, http://arxiv.org/pdf/2010.04265v1",econ.TH
34089,th,"In this note, we prove the existence of an equilibrium concept, dubbed
conditional strategy equilibrium, for non-cooperative games in which a strategy
of a player is a function from the other players' actions to her own actions.
We study the properties of efficiency and coalition-proofness of the
conditional strategy equilibrium in $n$-person games.",Conditional strategy equilibrium,2021-03-11 22:59:57,"Lorenzo Bastianello, Mehmet S. Ismail","http://arxiv.org/abs/2103.06928v3, http://arxiv.org/pdf/2103.06928v3",econ.TH
34047,th,"Consider a persuasion game where both the sender and receiver are ambiguity
averse with maxmin expected utility (MEU) preferences and the sender can choose
to design an ambiguous information structure. This paper studies the game with
an ex-ante formulation: The sender first commits to a (possibly ambiguous)
information structure and then the receiver best responds by choosing an
ex-ante message-contingent action plan. Under this formulation, I show it is
never strictly beneficial for the sender to use an ambiguous information
structure as opposed to a standard (unambiguous) information structure. This
result is shown to be robust to the receiver having non-MEU Uncertainty Averse
preferences but not to the sender having non-MEU preferences.",Ambiguous Persuasion: An Ex-Ante Formulation,2020-10-12 03:23:54,Xiaoyu Cheng,"http://arxiv.org/abs/2010.05376v3, http://arxiv.org/pdf/2010.05376v3",econ.TH
34048,th,"Is the overall value of a world just the sum of values contributed by each
value-bearing entity in that world? Additively separable axiologies (like total
utilitarianism, prioritarianism, and critical level views) say 'yes', but
non-additive axiologies (like average utilitarianism, rank-discounted
utilitarianism, and variable value views) say 'no'. This distinction is
practically important: additive axiologies support 'arguments from astronomical
scale' which suggest (among other things) that it is overwhelmingly important
for humanity to avoid premature extinction and ensure the existence of a large
future population, while non-additive axiologies need not. We show, however,
that when there is a large enough 'background population' unaffected by our
choices, a wide range of non-additive axiologies converge in their implications
with some additive axiology -- for instance, average utilitarianism converges
to critical-level utilitarianism and various egalitarian theories converge to
prioritarianism. We further argue that real-world background populations may
be large enough to make these limit results practically significant. This means
that arguments from astronomical scale, and other arguments in practical ethics
that seem to presuppose additive separability, may be truth-preserving in
practice whether or not we accept additive separability as a basic axiological
principle.",Non-Additive Axiologies in Large Worlds,2020-10-14 10:02:23,"Christian Tarsney, Teruji Thomas","http://arxiv.org/abs/2010.06842v1, http://arxiv.org/pdf/2010.06842v1",econ.TH
34049,th,"We show that under plausible levels of background risk, no theory of choice
under risk -- such as expected utility theory, prospect theory, or rank
dependent utility -- can simultaneously satisfy the following three economic
postulates: (i) Decision makers are risk-averse over small gambles, (ii) they
respect stochastic dominance, and (iii) they account for background risk.",Background risk and small-stakes risk aversion,2020-10-16 00:42:47,"Xiaosheng Mu, Luciano Pomatto, Philipp Strack, Omer Tamuz","http://arxiv.org/abs/2010.08033v2, http://arxiv.org/pdf/2010.08033v2",econ.TH
34050,th,"We use H\""older's inequality to get simple derivations of certain economic
formulas involving CES, Armington, or $n$-stage Armington functions.",An Application of Hölder's Inequality to Economics,2020-10-16 05:52:59,James Otterson,"http://arxiv.org/abs/2010.08122v1, http://arxiv.org/pdf/2010.08122v1",econ.TH
34051,th,"We study stable allocations in college admissions markets where students can
attend the same college under different financial terms. The deferred
acceptance algorithm identifies a stable allocation where funding is allocated
based on merit. While merit-based stable allocations assign the same students
to college, non-merit-based stable allocations may differ in the number of
students assigned to college. In large markets, this possibility requires
heterogeneity in applicants' sensitivity to financial terms. In Hungary, where
such heterogeneity is present, a non-merit-based stable allocation would
increase the number of assigned applicants by 1.9%, and affect 8.3% of the
applicants relative to any merit-based stable allocation. These findings
contrast sharply with findings from the matching (without contracts)
literature.",The Large Core of College Admission Markets: Theory and Evidence,2020-10-17 00:17:25,"Péter Biró, Avinatan Hassidim, Assaf Romm, Ran I. Shorrer, Sándor Sóvágó","http://arxiv.org/abs/2010.08631v2, http://arxiv.org/pdf/2010.08631v2",econ.TH
34052,th,"I formulate and characterize the following two-stage choice behavior. The
decision maker is endowed with two preferences. She shortlists all maximal
alternatives according to the first preference. If the first preference is
decisive, in the sense that it shortlists a unique alternative, then that
alternative is the choice. If multiple alternatives are shortlisted, then, in a
second stage, the second preference vetoes its minimal alternative in the
shortlist, and the remaining members of the shortlist form the choice set. Only
the final choice set is observable. I assume that the first preference is a
weak order and the second is a linear order. Hence the shortlist is fully
rationalizable but one of its members can drop out in the second stage, leading
to boundedly rational behavior. Given the asymmetric roles played by the
underlying binary relations, the consequent behavior exhibits a minimal
compromise between two preferences. To our knowledge, it is the first choice
function that satisfies Sen's $\beta$ axiom of choice, but not $\alpha$.",A Model of Choice with Minimal Compromise,2020-10-17 14:39:50,Mario Vazquez Corte,"http://arxiv.org/abs/2010.08771v3, http://arxiv.org/pdf/2010.08771v3",econ.TH
34053,th,"We study the information design problem in a single-unit auction setting. The
information designer controls independent private signals according to which
the buyers infer their binary private values. Assuming that the seller adopts
the optimal auction due to Myerson (1981) in response, we characterize both the
buyer-optimal information structure, which maximizes the buyers' surplus, and
the seller-worst information structure, which minimizes the seller's revenue. We
translate both information design problems into finite-dimensional, constrained
optimization problems in which one can explicitly solve for the optimal
information structures. In contrast to the case with one buyer (Roesler and
Szentes, 2017), we show that with two or more buyers, the symmetric
buyer-optimal information structure is different from the symmetric
seller-worst information structure. The good is always sold under the
seller-worst information structure but not under the buyer-optimal information
structure. Nevertheless, as the number of buyers goes to infinity, both
symmetric information structures converge to no disclosure. We also show that
in an ex ante symmetric setting, an asymmetric information structure is never
seller-worst but can generate a strictly higher surplus for the buyers than the
symmetric buyer-optimal information structure.",Information Design in Optimal Auctions,2020-10-18 17:06:09,"Yi-Chun Chen, Xiangqian Yang","http://arxiv.org/abs/2010.08990v2, http://arxiv.org/pdf/2010.08990v2",econ.TH
34055,th,"We study interactions with uncertainty about demand sensitivity. In our
solution concept (1) firms choose seemingly-optimal strategies given the level
of sophistication of their data analytics, and (2) the levels of sophistication
form best responses to one another. Under the ensuing equilibrium firms
underestimate price elasticities and overestimate advertising effectiveness, as
observed empirically. The misestimates cause firms to set prices too high and
to over-advertise. In games with strategic complements (substitutes), profits
Pareto dominate (are dominated by) those of the Nash equilibrium. Applying the
model to team production games explains the prevalence of overconfidence among
entrepreneurs and salespeople.",Naive analytics equilibrium,2020-10-29 20:46:00,"Ron Berman, Yuval Heller","http://arxiv.org/abs/2010.15810v2, http://arxiv.org/pdf/2010.15810v2",econ.TH
34056,th,"We consider the problem of allocating indivisible objects to agents when
agents have strict preferences over objects. There are inherent trade-offs
between competing notions of efficiency, fairness and incentives in assignment
mechanisms. It is, therefore, natural to consider mechanisms that satisfy two
of these three properties in their strongest notions, while trying to improve
on the third dimension. In this paper, we are motivated by the following
question: Is there a strategy-proof and envy-free random assignment mechanism
more efficient than equal division?
  Our contributions in this paper are twofold. First, we further explore the
incompatibility between efficiency and envy-freeness in the class of
strategy-proof mechanisms. We define a new notion of efficiency that is weaker
than ex-post efficiency and prove that any strategy-proof and envy-free
mechanism must sacrifice efficiency even in this very weak sense. Next, we
introduce a new family of mechanisms called Pairwise Exchange mechanisms and
make the surprising observation that strategy-proofness is equivalent to
envy-freeness within this class. We characterize the set of all neutral and
strategy-proof (and hence, also envy-free) mechanisms in this family and show
that they admit a very simple linear representation.",Strategy-proof and Envy-free Mechanisms for House Allocation,2020-10-30 20:28:30,"Priyanka Shende, Manish Purohit","http://arxiv.org/abs/2010.16384v1, http://arxiv.org/pdf/2010.16384v1",econ.TH
34057,th,"We study a game of strategic information design between a sender, who chooses
state-dependent information structures, a mediator who can then garble the
signals generated from these structures, and a receiver who takes an action
after observing the signal generated by the first two players. We characterize
sufficient conditions for information revelation, compare outcomes with and
without a mediator and provide comparative statics with regard to the
preferences of the sender and the mediator. We also provide novel conceptual
and computational insights about the set of feasible posterior beliefs that the
sender can induce, and use these results to obtain insights about equilibrium
outcomes. The sender never benefits from mediation, while the receiver might.
Strikingly, the receiver benefits when the mediator's preferences are not
perfectly aligned with hers; rather the mediator should prefer more information
revelation than the sender, but less than perfect revelation.",Mediated Persuasion,2020-12-01 00:20:01,Andrew Kosenko,"http://arxiv.org/abs/2012.00098v2, http://arxiv.org/pdf/2012.00098v2",econ.TH
34058,th,"We consider the allocation of indivisible objects when agents have
preferences over their own allocations, but share the ownership of the
resources to be distributed. Examples might include seats in public schools,
faculty offices, and time slots in public tennis courts. Given an allocation,
groups of agents who would prefer an alternative allocation might challenge it.
An assignment is popular if it is not challenged by another one. By assuming
that agents' ability to challenge allocations can be represented by weighted
votes, we characterize the conditions under which popular allocations might
exist and when these can be implemented via strategy-proof mechanisms. Serial
dictatorships that use orderings consistent with the agents' weights are not
only strategy-proof and Pareto efficient, but also popular, whenever these
assignments exist. We also provide a new characterization for serial
dictatorships as the only mechanisms that are popular, strategy-proof,
non-wasteful, and satisfy a consistency condition.",Strategy-proof Popular Mechanisms,2020-12-02 10:44:43,"Mustafa Oğuz Afacan, Inácio Bó","http://arxiv.org/abs/2012.01004v2, http://arxiv.org/pdf/2012.01004v2",econ.TH
34059,th,"We evaluate the goal of maximizing the number of individuals matched to
acceptable outcomes. We show that it implies incentive, fairness, and
implementation impossibilities. Despite that, we present two classes of
mechanisms that maximize assignments. The first are Pareto efficient and
undominated -- in terms of number of assignments -- in equilibrium. The second
are fair for unassigned students and assign weakly more students than stable
mechanisms in equilibrium.",Assignment Maximization,2020-12-02 11:03:57,"Mustafa Oğuz Afacan, Inácio Bó, Bertan Turhan","http://arxiv.org/abs/2012.01011v1, http://arxiv.org/pdf/2012.01011v1",econ.TH
34060,th,"We introduce a new family of mechanisms for one-sided matching markets,
denoted pick-an-object (PAO) mechanisms. When implementing an allocation rule
via PAO, agents are asked to pick an object from individualized menus. These
choices may be rejected later on, and these agents are presented with new
menus. When the procedure ends, agents are assigned the last object they
picked. We characterize the allocation rules that can be sequentialized by PAO
mechanisms, as well as the ones that can be implemented in a robust truthful
equilibrium. We justify the use of PAO as opposed to direct mechanisms by
showing that its equilibrium behavior is closely related to the one in
obviously strategy-proof (OSP) mechanisms, but implements commonly used rules,
such as Gale-Shapley DA and top trading cycles, which are not
OSP-implementable. We run laboratory experiments comparing truthful behavior
when using PAO, OSP, and direct mechanisms to implement different rules. These
indicate that agents are more likely to behave in line with the theoretical
prediction under PAO and OSP implementations than their direct counterparts.",Pick-an-object Mechanisms,2020-12-02 11:37:16,"Inácio Bó, Rustamdjan Hakimov","http://arxiv.org/abs/2012.01025v4, http://arxiv.org/pdf/2012.01025v4",econ.TH
34061,th,"We propose an extended version of Gini index defined on the set of infinite
utility streams, $X=Y^\mathbb{N}$ where $Y\subset \mathbb{R}$. For $Y$
containing at most finitely many elements, the index satisfies the generalized
Pigou-Dalton transfer principles in addition to the anonymity axiom.",Extended Gini Index,2020-12-08 03:59:51,"Ram Sewak Dubey, Giorgio Laguzzi","http://arxiv.org/abs/2012.04141v2, http://arxiv.org/pdf/2012.04141v2",econ.TH
34242,th,"We consider social learning in a changing world. Society can remain
responsive to state changes only if agents regularly act upon fresh
information, which limits the value of social learning. When the state is close
to persistent, a consensus whereby most agents choose the same action typically
emerges. The consensus action is not perfectly correlated with the state
though, because the society exhibits inertia following state changes. Phases of
inertia may be longer when signals are more precise, even if agents draw large
samples of past actions, as actions then become too correlated within samples,
thereby reducing informativeness and welfare.",Stationary social learning in a changing environment,2022-01-06 19:05:45,"Raphaël Lévy, Marcin Pęski, Nicolas Vieille","http://arxiv.org/abs/2201.02122v1, http://arxiv.org/pdf/2201.02122v1",econ.TH
34062,th,"During its history, the ultimate goal of economics has been to develop
frameworks for modeling economic behavior similar to those invented in physics. This
has not been successful, however, and the current state of the process is the
neoclassical framework, which is based on static optimization. By using a static
framework, however, we cannot model and forecast the time paths of economic
quantities because for a growing firm or a firm going into bankruptcy, a
positive profit maximizing flow of production does not exist. Due to these
problems, we present a dynamic theory for the production of a profit-seeking
firm where the adjustment may be stable or unstable. This is important,
currently, because we should be able to forecast the possible future
bankruptcies of firms due to the Covid-19 pandemic. By using the model, we can
solve the time moment of bankruptcy of a firm as a function of several
parameters. The proposed model is mathematically identical to the Newtonian
model of a particle moving in a resisting medium, and so the model also explains
what stops the motion. The frameworks for modeling dynamic events in
physics are thus applicable in economics, and we give reasons why physics is
more important for the development of economics than pure mathematics. (JEL
D21, O12)
  Keywords: Limitations of neoclassical framework, Dynamics of production,
Economic force, Connections between economics and physics.",How Covid-19 Pandemic Changes the Theory of Economics?,2020-12-08 20:16:34,Matti Estola,"http://arxiv.org/abs/2012.04571v1, http://arxiv.org/pdf/2012.04571v1",econ.TH
34063,th,"How to guarantee that firms perform due diligence before launching
potentially dangerous products? We study the design of liability rules when (i)
limited liability prevents firms from internalizing the full damage they may
cause, (ii) penalties are paid only if damage occurs, regardless of the
product's inherent riskiness, (iii) firms have private information about their
products' riskiness before performing due diligence. We show that (i) any
liability mechanism can be implemented by a tariff that depends only on the
evidence acquired by the firm if damage occurs, not on any initial report by
the firm about its private information, (ii) firms that assign a higher prior
to product riskiness always perform more due diligence but less than is
socially optimal, and (iii) under a simple and intuitive condition, any
type-specific launch thresholds can be implemented by a monotonic tariff.",Liability Design with Information Acquisition,2020-12-09 17:16:07,"Francisco Poggi, Bruno Strulovici","http://arxiv.org/abs/2012.05066v1, http://arxiv.org/pdf/2012.05066v1",econ.TH
34064,th,"In many countries and institutions around the world, the hiring of workers is
made through open competitions. In them, candidates take tests and are ranked
based on scores in exams and other predetermined criteria. Those who satisfy
some eligibility criteria are made available for hiring from a ""pool of
workers."" In each of an ex-ante unknown number of rounds, vacancies are
announced, and workers are then hired from that pool. When the scores are the
only criterion for selection, the procedure satisfies desired fairness and
independence properties. We show that when affirmative action policies are
introduced, the established methods of reserves and procedures used in Brazil,
France, and Australia, fail to satisfy those properties. We then present a new
rule, which we show to be the unique rule that extends static notions of
fairness to problems with multiple rounds while satisfying aggregation
independence, a consistency requirement. Finally, we show that if multiple
institutions hire workers from a single pool, even minor consistency
requirements are incompatible with variations in the institutions' rules.",Hiring from a pool of workers,2020-12-17 15:27:42,"Azar Abizada, Inácio Bó","http://arxiv.org/abs/2012.09541v1, http://arxiv.org/pdf/2012.09541v1",econ.TH
34065,th,"Along with the energy transition, the energy markets change their
organization toward more decentralized and self-organized structures, striving
for locally optimal profits. These tendencies may endanger the physical grid
stability. One realistic option is the exhaustion of reserve energy due to an
abuse by arbitrageurs. We map the energy market to different versions of a
minority game and determine the expected amount of arbitrage as well as its
fluctuations as a function of the model parameters. Of particular interest are
the impact of heterogeneous contributions of arbitrageurs, the interplay
between external stochastic events and nonlinear price functions of reserve
power, and the effect of risk aversion due to suspected penalties. The
non-monotonic dependence of arbitrage on the control parameters reveals an
underlying phase transition that is the counterpart to replica symmetry
breaking in spin glasses. As conclusions from our results we propose economic
and statutory measures to counteract a detrimental effect of arbitrage.",Minority games played by arbitrageurs on the energy market,2020-12-18 22:29:47,"Tim Ritmeester, Hildegard Meyer-Ortmanns","http://dx.doi.org/10.1016/j.physa.2021.125927, http://arxiv.org/abs/2012.10475v2, http://arxiv.org/pdf/2012.10475v2",econ.TH
34066,th,"The role of specific cognitive processes in deviations from constant
discounting in intertemporal choice is not well understood. We evaluated
decreased impatience in intertemporal choice tasks independent of discounting
rate and non-linearity in long-scale time representation; nonlinear time
representation was expected to explain inconsistencies in discounting rate.
Participants performed temporal magnitude estimation and intertemporal choice
tasks. Psychophysical functions for time intervals were estimated by fitting
linear and power functions, while discounting functions were estimated by
fitting exponential and hyperbolic functions. The temporal magnitude estimates
of 65% of the participants were better fit with power functions (mostly
compression). 63% of the participants had intertemporal choice patterns
corresponding best to hyperbolic functions. Even when the perceptual bias in
the temporal magnitude estimations was compensated in the discounting rate
computation, the data of 8 out of 14 participants continued exhibiting temporal
inconsistency. The results suggest that temporal inconsistency in discounting
rate can be explained to different degrees by the bias in temporal
representations. Non-linearity in temporal representation and discounting rate
should be evaluated on an individual basis. Keywords: Intertemporal choice,
temporal magnitude, model comparison, impatience, time inconsistency",The role of time estimation in decreased impatience in Intertemporal Choice,2020-12-19 19:33:07,"Camila S. Agostino, Peter M. E. Claessens, Fuat Balci, Yossi Zana","http://arxiv.org/abs/2012.10735v1, http://arxiv.org/pdf/2012.10735v1",econ.TH
34097,th,"Motivated by data on coauthorships in scientific publications, we analyze a
team formation process that generalizes matching models and network formation
models, allowing for overlapping teams of heterogeneous size. We apply
different notions of stability: myopic team-wise stability, which extends to
our setup the concept of pair-wise stability, coalitional stability, where
agents are perfectly rational and able to coordinate, and stochastic stability,
where agents are myopic and errors occur with vanishing probability. We find
that, in many cases, coalitional stability in no way refines myopic team-wise
stability, while stochastically stable states are feasible states that maximize
the overall number of activities performed by teams.",Efficiency and Stability in a Process of Teams Formation,2021-03-25 12:38:06,"Leonardo Boncinelli, Alessio Muscillo, Paolo Pin","http://dx.doi.org/10.1007/s13235-022-00438-y, http://arxiv.org/abs/2103.13712v2, http://arxiv.org/pdf/2103.13712v2",econ.TH
34067,th,"We seek to take a different approach in deriving the optimal search policy
for the repeated consumer search model found in Fishman and Rob (1995) with the
main motivation of dropping the assumption of prior knowledge of the price
distribution $F(p)$ in each period. We will do this by incorporating the famous
multi-armed bandit problem (MAB). We start by modifying the MAB framework to
fit the setting of the repeated consumer search model and formulate the
objective as a dynamic optimization problem. Then, given any sequence of
exploration, we assign a value to each store in that sequence using Bellman
equations. We then proceed to break down the problem into individual optimal
stopping problems for each period which incidentally coincides with the
framework of the famous secretary problem where we proceed to derive the
optimal stopping policy. We will see that implementing the optimal stopping
policy in each period solves the original dynamic optimization by `forward
induction' reasoning.",Expanding on Repeated Consumer Search Using Multi-Armed Bandits and Secretaries,2020-12-22 12:53:55,Tung Yu Marco Chan,"http://arxiv.org/abs/2012.11900v2, http://arxiv.org/pdf/2012.11900v2",econ.TH
34068,th,"We introduce a new updating rule, the conditional maximum likelihood rule
(CML) for updating ambiguous information. The CML formula replaces the
likelihood term in Bayes' rule with the maximal likelihood of the given signal
conditional on the state. We show that CML satisfies a new axiom, increased
sensitivity after updating, while other updating rules do not. With CML, a
decision maker's posterior is unaffected by the order in which independent
signals arrive. CML also accommodates recent experimental findings on updating
signals of unknown accuracy and has simple predictions on learning with such
signals. We show that an information designer can almost achieve her maximal
payoff with a suitable ambiguous information structure whenever the agent
updates according to CML.",A Theory of Updating Ambiguous Information,2020-12-26 03:12:59,Rui Tang,"http://arxiv.org/abs/2012.13650v1, http://arxiv.org/pdf/2012.13650v1",econ.TH
34069,th,"We define notions of dominance between two actions in a dynamic game. Local
dominance considers players who have a blurred view of the future and compare
the two actions by first focusing on the outcomes that may realize at the
current stage. When considering the possibility that the game may continue,
they can only check that the local comparison is not overturned under the
assumption of ""continuing in the same way"" after the two actions (in a newly
defined sense). Despite the lack of forward planning, local dominance solves
dynamic mechanisms that were found easy to play and implements social choice
functions that cannot be implemented in obviously-dominant strategies.",Local Dominance,2020-12-28 18:34:55,"Emiliano Catonini, Jingyi Xue","http://arxiv.org/abs/2012.14432v5, http://arxiv.org/pdf/2012.14432v5",econ.TH
34070,th,"We propose a model of incomplete \textit{twofold multiprior preferences}, in
which an act $f$ is ranked above an act $g$ only when $f$ provides higher
utility in a worst-case scenario than what $g$ provides in a best-case
scenario. The model explains failures of contingent reasoning, captured through
a weakening of the state-by-state monotonicity (or dominance) axiom. Our model
gives rise to rich comparative statics results, as well as extension exercises,
and connections to choice theory. We present an application to second-price
auctions.",Twofold Multiprior Preferences and Failures of Contingent Reasoning,2020-12-29 04:36:26,"Federico Echenique, Masaki Miyashita, Yuta Nakamura, Luciano Pomatto, Jamie Vinson","http://arxiv.org/abs/2012.14557v3, http://arxiv.org/pdf/2012.14557v3",econ.TH
34071,th,"We present a directed variant of Salop (1979) model to analyze bus transport
dynamics. The players are operators competing in cooperative and
non-cooperative games. Utility, like in most bus concession schemes in emerging
countries, is proportional to the total fare collection. Competition for
picking up passengers leads to well documented and dangerous driving practices
that cause road accidents, traffic congestion and pollution. We obtain
theoretical results that support the existence and implementation of such
practices, and give a qualitative description of how they come to occur. In
addition, our results allow us to compare the current or base transport system
with a more cooperative one.",Bus operators in competition: a directed location approach,2021-01-04 21:35:28,"Fernanda Herrera, Sergio I. López","http://arxiv.org/abs/2101.01155v2, http://arxiv.org/pdf/2101.01155v2",econ.TH
34072,th,"This study examines the mechanism design problem for public goods provision
in a large economy with $n$ independent agents. We propose a class of
dominant-strategy incentive compatible and ex-post individually rational
mechanisms, which we call the adjusted mean-thresholding (AMT) mechanisms. We
show that when the cost of provision grows slower than the $\sqrt{n}$-rate, the
AMT mechanisms are both eventually ex-ante budget balanced and asymptotically
efficient. When the cost grows faster than the $\sqrt{n}$-rate, in contrast, we
show that any incentive compatible, individually rational, and eventually
ex-ante budget balanced mechanism must have provision probability converging to
zero and hence cannot be asymptotically efficient. The AMT mechanisms have a
simple form and are more informationally robust when compared to, for example,
the second-best mechanism. This is because the construction of an AMT mechanism
depends only on the first moment of the valuation distribution.",Strength in Numbers: Robust Mechanisms for Public Goods with Many Agents,2021-01-07 11:12:58,"Jin Xi, Haitian Xie","http://arxiv.org/abs/2101.02423v4, http://arxiv.org/pdf/2101.02423v4",econ.TH
34073,th,"I study costly information acquisition in a two-sided matching problem, such
as matching applicants to schools. An applicant's utility is a sum of common
and idiosyncratic components. The idiosyncratic component is unknown to the
applicant but can be learned at a cost. When applicants are assigned using an
ordinal strategy-proof mechanism, too few acquire information, generating a
significant welfare loss. Affirmative action and other realistic policies may
lead to a Pareto improvement. As incentives to acquire information differ
across mechanisms, ignoring such incentives may lead to incorrect welfare
assessments, for example, in comparing a popular Immediate Assignment and an
ordinal strategy-proof mechanism.",Assignment mechanisms: common preferences and information acquisition,2021-01-18 08:32:15,Georgy Artemov,"http://arxiv.org/abs/2101.06885v2, http://arxiv.org/pdf/2101.06885v2",econ.TH
34098,th,"We investigate the formation of Free Trade Agreement (FTA) in a competing
importers framework with $n$ countries. We show that (i) FTA formation causes a
negative externality to non-participants, (ii) a non-participant is willing to
join an FTA, and (iii) new participation may decrease the welfare of incumbent
participants. A unique subgame perfect equilibrium of a sequential FTA
formation game does not achieve global free trade under an open-access rule
where a new applicant needs consent of members for accession, currently
employed by many open regionalism agreements including APEC. We further show
that global FTA is a unique subgame perfect equilibrium under an open-access
rule without consent.",The Formation of Global Free Trade Agreement,2021-03-30 10:10:02,"Akira Okada, Yasuhiro Shirata","http://arxiv.org/abs/2103.16118v2, http://arxiv.org/pdf/2103.16118v2",econ.TH
34074,th,"The Kolkata Paise Restaurant Problem is a challenging game, in which $n$
agents must decide where to have lunch during their lunch break. The game is
very interesting because there are exactly $n$ restaurants and each restaurant
can accommodate only one agent. If two or more agents happen to choose the same
restaurant, only one gets served and the others have to return back to work
hungry. In this paper we tackle this problem from an entirely new angle. We
abolish certain implicit assumptions, which allows us to propose a novel
strategy that results in greater utilization for the restaurants. We emphasize
the spatially distributed nature of our approach, which, for the first time,
perceives the locations of the restaurants as uniformly distributed in the
entire city area. This critical change in perspective has profound
ramifications in the topological layout of the restaurants, which now makes it
completely realistic to assume that every agent has a second chance. Every
agent now may visit, in case of failure, more than one restaurant, within the
predefined time constraints.",DKPRG or how to succeed in the Kolkata Paise Restaurant game via TSP,2021-01-16 22:14:50,"Kalliopi Kastampolidou, Christos Papalitsas, Theodore Andronikos","http://arxiv.org/abs/2101.07760v1, http://arxiv.org/pdf/2101.07760v1",econ.TH
34075,th,"Large-scale institutional changes require strong commitment and involvement
of all stakeholders. We use the standard framework of cooperative game theory
developed by Ichiishi (1983, pp. 78-149) to: (i) establish analytically the
difference between policy maker and political leader; (ii) formally study
interactions between a policy maker and his followers; (iii) examine the role
of leadership in the implementation of structural reforms. We show that a
policy maker can be both partisan and non-partisan, while a political leader
can only be non-partisan. Following this distinction, we derive the probability
of success of an institutional change, as well as the nature of the gain that
such a change would generate on the beneficiary population. Based on the
restrictions of this simple mathematical model and using some evidence from the
Congolese experience between 2012 and 2016, we show that institutional changes
can indeed benefit the majority of the population, when policy makers are truly
partisan.",Leadership and Institutional Reforms,2021-01-21 19:28:31,"Matata Ponyo Mapon, Jean-Paul K. Tsasa","http://arxiv.org/abs/2101.08702v1, http://arxiv.org/pdf/2101.08702v1",econ.TH
34076,th,"We study interactions between strategic players and markets whose behavior is
guided by an algorithm. Algorithms use data from prior interactions and a
limited set of decision rules to prescribe actions. While as-if rational play
need not emerge if the algorithm is constrained, it is possible to guide
behavior across a rich set of possible environments using limited details.
Provided a condition known as weak learnability holds, Adaptive Boosting
algorithms can be specified to induce behavior that is (approximately) as-if
rational. Our analysis provides a statistical perspective on the study of
endogenous model misspecification.",Machine Learning for Strategic Inference,2021-01-24 03:30:17,"In-Koo Cho, Jonathan Libgober","http://arxiv.org/abs/2101.09613v1, http://arxiv.org/pdf/2101.09613v1",econ.TH
34077,th,"We study information design settings where the designer controls information
about a state, and there are multiple agents interacting in a game who are
privately informed about their types. Each agent's utility depends on all
agents' types and actions, as well as (linearly) on the state. To optimally
screen the agents, the designer first asks agents to report their types and
then sends a private action recommendation to each agent whose distribution
depends on all reported types and the state. We show that there always exists
an optimal mechanism which is laminar partitional. Such a mechanism partitions
the state space for each type profile and recommends the same action profile
for states that belong to the same partition element. Furthermore, the convex
hulls of any two partition elements are such that either one contains the other
or they have an empty intersection. In the single-agent case, each state is
either perfectly revealed or lies in an interval in which the number of
different signal realizations is at most the number of different types of the
agent plus two. A similar result is established for the multi-agent case.
  We also highlight the value of screening: without screening the best
achievable payoff could be as low as one over the number of types fraction of
the optimal payoff. Along the way, we shed light on the solutions of
optimization problems over distributions subject to a mean-preserving
contraction constraint and additional side constraints, which might be of
independent interest.",Optimal Disclosure of Information to a Privately Informed Receiver,2021-01-26 00:42:31,"Ozan Candogan, Philipp Strack","http://arxiv.org/abs/2101.10431v5, http://arxiv.org/pdf/2101.10431v5",econ.TH
34078,th,"Two types of interventions are commonly implemented in networks:
characteristic intervention, which influences individuals' intrinsic
incentives, and structural intervention, which targets the social links among
individuals. In this paper we provide a general framework to evaluate the
distinct equilibrium effects of both types of interventions. We identify a
hidden equivalence between a structural intervention and an endogenously
determined characteristic intervention. Compared with existing approaches in
the literature, the perspective from such an equivalence provides several
advantages in the analysis of interventions that target network structure. We
present a wide range of applications of our theory, including identifying the
most wanted criminal(s) in delinquent networks and targeting the key connector
for isolated communities.",Structural Interventions in Networks,2021-01-29 09:13:21,"Yang Sun, Wei Zhao, Junjie Zhou","http://arxiv.org/abs/2101.12420v2, http://arxiv.org/pdf/2101.12420v2",econ.TH
34079,th,"In this note, I explore the implications of informational robustness under
the assumption of common belief in rationality. That is, predictions for
incomplete-information games which are valid across all possible information
structures. First, I address this question from a global perspective and then
generalize the analysis to allow for localized informational robustness.",Informational Robustness of Common Belief in Rationality,2021-03-03 16:52:30,Gabriel Ziegler,"http://dx.doi.org/10.1016/j.geb.2022.01.025, http://arxiv.org/abs/2103.02402v2, http://arxiv.org/pdf/2103.02402v2",econ.TH
34080,th,"We characterize decreasing impatience, a common behavioral phenomenon in
intertemporal choice. Discount factors that display decreasing impatience are
characterized through a convexity axiom for investments at fixed interest
rates. Then we show that they are equivalent to a geometric average of
generalized quasi-hyperbolic discount rates. Finally, they emerge through
parimutuel preference aggregation of exponential discount factors.",Decreasing Impatience,2021-03-04 23:00:35,"Christopher P. Chambers, Federico Echenique, Alan D. Miller","http://arxiv.org/abs/2103.03290v4, http://arxiv.org/pdf/2103.03290v4",econ.TH
34081,th,"A fundamental assumption in classical mechanism design is that buyers are
perfect optimizers. However, in practice, buyers may be limited by their
computational capabilities or a lack of information, and may not be able to
perfectly optimize. This has motivated the introduction of approximate
incentive compatibility (IC) as an appealing solution concept for practical
mechanism design. While most of the literature focuses on the analysis of
particular approximate IC mechanisms, this paper is the first to study the
design of optimal mechanisms in the space of approximate IC mechanisms and to
explore how much revenue can be garnered by moving from exact to approximate
incentive constraints. We study the problem of a seller facing one buyer with
private values and analyze optimal selling mechanisms under
$\varepsilon$-incentive compatibility. We establish that the gains that can be
garnered depend on the local curvature of the seller's revenue function around
the optimal posted price when the buyer is a perfect optimizer. If the revenue
function behaves locally like an $\alpha$-power for $\alpha \in (1,\infty)$,
then no mechanism can garner gains higher than order
$\varepsilon^{\alpha/(2\alpha-1)}$. This improves upon state-of-the-art results
which imply maximum gains of $\varepsilon^{1/2}$ by providing the first
parametric bounds that capture the impact of revenue function's curvature on
revenue gains. Furthermore, we establish that an optimal mechanism needs to
randomize as soon as $\varepsilon>0$ and construct a randomized mechanism that
is guaranteed to achieve order $\varepsilon^{\alpha/(2\alpha-1)}$ additional
revenues, leading to a tight characterization of the revenue implications of
approximate IC constraints. Our work brings forward the need to optimize not
only over allocations and payments but also over best responses, and we develop
a new framework to address this challenge.",Mechanism Design under Approximate Incentive Compatibility,2021-03-05 03:38:06,"Santiago Balseiro, Omar Besbes, Francisco Castro","http://arxiv.org/abs/2103.03403v2, http://arxiv.org/pdf/2103.03403v2",econ.TH
34082,th,"This paper develops an integer programming approach to two-sided many-to-one
matching by investigating stable integral matchings of a fictitious market
where each worker is divisible. We show that stable matchings exist in a
discrete matching market when firms' preference profile satisfies a total
unimodularity condition that is compatible with various forms of
complementarities. We provide a class of firms' preference profiles that
satisfy this condition.",Stable matching: an integer programming approach,2021-03-05 04:33:39,Chao Huang,"http://dx.doi.org/10.3982/TE4830, http://arxiv.org/abs/2103.03418v2, http://arxiv.org/pdf/2103.03418v2",econ.TH
34083,th,"This paper studies the provision of incentives for information acquisition.
Information is costly for an agent to acquire and unobservable to a principal.
We show that any Pareto optimal contract has a decomposition into a fraction of
output, a state-dependent transfer, and an optimal distortion. Under this
decomposition: 1) the fraction of output paid is increasing in the set of
experiments available to the agent, 2) the state-dependent transfer indexes
contract payments to account for differences in output between states, 3) the
optimal distortion exploits complementarities in the cost of information
acquisition: experiment probabilities unalterable via contract payments stuck
against liability limits are substituted for, the substitution occurring
according to complementarities in the cost of information acquisition, and 4)
if and only if the agent's cost of experimentation is mutual information, the
optimal distortion takes the form of a decision-dependent transfer.",Contracts for acquiring information,2021-03-05 22:38:07,"Aubrey Clark, Giovanni Reggiani","http://arxiv.org/abs/2103.03911v1, http://arxiv.org/pdf/2103.03911v1",econ.TH
34084,th,"The motivation of this note is to show how singular values affect local
uniqueness. More precisely, Theorem 3.1 shows how to construct a neighborhood
(a ball) of a regular equilibrium whose diameter represents an estimate of
local uniqueness, hence providing a measure of how isolated a (local) unique
equilibrium can be. The result, whose relevance in terms of comparative statics
is evident, is based on reasonable and natural assumptions and hence is
applicable in many different settings, ranging from pure exchange economies to
non-cooperative games.",A note on local uniqueness of equilibria: How isolated is a local equilibrium?,2021-03-08 21:40:21,Stefano Matta,"http://arxiv.org/abs/2103.04968v1, http://arxiv.org/pdf/2103.04968v1",econ.TH
34085,th,"We study random joint choice rules, allowing for interdependence of choice
across agents. These capture random choice by multiple agents, or a single
agent across goods or time periods. Our interest is in separable choice rules,
where each agent can be thought of as acting independently of the other. A
random joint choice rule satisfies marginality if for every individual choice
set, we can determine the individual's choice probabilities over alternatives
independently of the other individual's choice set. We offer two
characterizations of random joint choice rules satisfying marginality in terms
of separable choice rules. While marginality is a necessary condition for
separability, we show that it fails to be sufficient. We provide an additional
condition on the marginal choice rules which, along with marginality, is
sufficient for separability.",Correlated Choice,2021-03-09 00:28:18,"Christopher P. Chambers, Yusufcan Masatlioglu, Christopher Turansick","http://arxiv.org/abs/2103.05084v5, http://arxiv.org/pdf/2103.05084v5",econ.TH
34086,th,"This paper studies the relationship between core and competitive equilibira
in economies that consist of a continuum of agents and some large agents. We
construct a class of these economies in which the core and competitive
allocations do not coincide.",Core equivalence with large agents,2021-03-09 02:26:06,Aubrey Clark,"http://arxiv.org/abs/2103.05136v1, http://arxiv.org/pdf/2103.05136v1",econ.TH
34087,th,"This paper studies a communication game between an uninformed decision maker
and two perfectly informed senders with conflicting interests. Senders can
misreport information at a cost that increases with the size of the
misrepresentation. The main results show that equilibria where the decision
maker obtains the complete-information payoff hinge on beliefs with undesirable
properties. The imposition of a minimal and sensible belief structure is
sufficient to generate a robust and essentially unique equilibrium with partial
information transmission. A complete characterization of this equilibrium
unveils the language senders use to communicate.",Competition in Costly Talk,2021-03-09 12:47:59,Federico Vaccari,"http://arxiv.org/abs/2103.05317v2, http://arxiv.org/pdf/2103.05317v2",econ.TH
34088,th,"We study joint implementation of reservation and de-reservation policies in
India that has been enforcing a comprehensive affirmative action since 1950.
The landmark judgement of the Supreme Court of India in 2008 mandated that
whenever the OBC category (with 27 percent reservation) has unfilled positions, they
must be reverted to general category applicants in admissions to public schools
without specifying how to implement it. We disclose the drawbacks of recently
reformed allocation procedure in admissions to technical colleges and offer a
solution through de-reservation via choice rules. We propose a novel priority
design, Backward Transfers (BT) choice rule, for institutions and the deferred
acceptance mechanism under these rules (DA-BT) for centralized clearinghouses.
We show that DA-BT corrects the shortcomings of existing mechanisms. By
formulating the legal requirements and policy goals in India as formal axioms,
we show that the DA-BT mechanism is the unique mechanism for concurrent
implementation of reservation and de-reservation policies.",How to De-Reserve Reserves: Admissions to Technical Colleges in India,2021-03-10 09:58:54,"Orhan Aygün, Bertan Turhan","http://arxiv.org/abs/2103.05899v5, http://arxiv.org/pdf/2103.05899v5",econ.TH
34090,th,"Omnichannel retailing, a new form of distribution system, seamlessly
integrates the Internet and physical stores. This study considers the pricing
and fulfillment strategies of a retailer that has two sales channels: online
and one physical store. The retailer offers consumers three purchasing options:
delivery from the fulfillment center, buy online and pick up in-store (BOPS),
and purchasing at the store. Consumers choose one of these options to maximize
their utility, dividing them into several segments. Given the retailer can
induce consumers to the profitable segment by adjusting the online and store
prices, our analysis shows that it has three optimal strategies: (1) The
retailer excludes consumers far from the physical store from the market and
lets the others choose BOPS or purchasing at the store. (2) It lets consumers
far from the physical store choose delivery from the fulfillment center and the
others choose BOPS or purchasing at the store. (3) It lets all consumers choose
delivery from the fulfillment center. Finally, we present simple dynamic
simulations that consider how the retailer's optimal strategy changes as
consumers' subjective probability of believing the product is in stock
decreases. The results show that the retailer should offer BOPS in later
periods of the selling season to maximize its profit as the subjective
probability decreases.",Price and Fulfillment Strategies in Omnichannel Retailing,2021-03-12 14:14:11,Yasuyuki Kusuda,"http://arxiv.org/abs/2103.07214v1, http://arxiv.org/pdf/2103.07214v1",econ.TH
34091,th,"I study a model of advisors with hidden motives: a seller discloses
information about an object's value to a potential buyer, who doesn't know the
object's value or how profitable the object's sale is to the seller (the
seller's motives). I characterize optimal disclosure rules, used by the seller
to steer sales from lower- to higher-profitability objects. I investigate the
effects of a mandated transparency policy, which reveals the seller's motives
to the buyer. I show that, by removing the seller's steering incentive,
transparency can dissuade the seller from disclosing information about the
object's value, and from acquiring that information in the first place. This
result refines our understanding of effective regulation in advice markets, and
links it to the commitment protocol in the advisor-advisee relation.",Advisors with Hidden Motives,2021-03-12 21:20:54,Paula Onuchic,"http://arxiv.org/abs/2103.07446v4, http://arxiv.org/pdf/2103.07446v4",econ.TH
34092,th,"We consider a game in which the action set of each player is uncountable, and
show that, from weak assumptions on the common prior, any mixed strategy has an
approximately equivalent pure strategy. The assumption of this result can be
further weakened if we consider the purification of a Nash equilibrium.
Combined with the existence theorem for a Nash equilibrium, we derive an
existence theorem for a pure strategy approximated Nash equilibrium under
sufficiently weak assumptions. All of the pure strategies we derive in this
paper can take a finite number of possible actions.",On the Approximate Purification of Mixed Strategies in Games with Infinite Action Sets,2021-03-13 18:46:30,"Yuhki Hosoya, Chaowen Yu","http://dx.doi.org/10.1007/s40505-022-00219-1, http://arxiv.org/abs/2103.07736v4, http://arxiv.org/pdf/2103.07736v4",econ.TH
34093,th,"We study players interacting under the veil of ignorance, who have -- coarse
-- beliefs represented as subsets of opponents' actions. We analyze when these
players follow $\max \min$ or $\max\max$ decision criteria, which we identify
with pessimistic or optimistic attitudes, respectively. Explicitly formalizing
these attitudes and how players reason interactively under ignorance, we
characterize the behavioral implications related to common belief in these
events: while optimism is related to Point Rationalizability, a new algorithm
-- Wald Rationalizability -- captures pessimism. Our characterizations allow us
to uncover novel results: ($i$) regarding optimism, we relate it to wishful
thinking \'a la Yildiz (2007) and we prove that dropping the (implicit)
""belief-implies-truth"" assumption reverses an existence failure described
therein; ($ii$) we shed light on the notion of rationality in ordinal games;
($iii$) we clarify the conceptual underpinnings behind a discontinuity in
Rationalizability hinted in the analysis of Weinstein (2016).",Optimism and Pessimism in Strategic Interactions under Ignorance,2021-03-15 15:14:42,"Pierfrancesco Guarino, Gabriel Ziegler","http://dx.doi.org/10.1016/j.geb.2022.10.012, http://arxiv.org/abs/2103.08319v4, http://arxiv.org/pdf/2103.08319v4",econ.TH
34094,th,"The idea of this paper comes from the famous remark of Piketty and Zuckman:
""It is natural to imagine that $\sigma$ was much less than one in the
eighteenth and nineteenth centuries and became larger than one in the twentieth
and twenty-first centuries. One expects a higher elasticity of substitution in
high-tech economies where there are lots of alternative uses and forms for
capital."" The main aim of this paper is to prove the existence of a production
function of variable elasticity of substitution with values greater than one.",A production function with variable elasticity of substitution greater than one,2021-03-15 22:50:11,Constantin Chilarescu,"http://arxiv.org/abs/2103.08679v1, http://arxiv.org/pdf/2103.08679v1",econ.TH
34095,th,"We show that adding noise before publishing datasets effectively screens
p-hacked findings: spurious explanations of the outcome variable produced by
attempting multiple econometric specifications. Noise creates ""baits"" that
affect two types of researchers differently. Uninformed p-hackers, who are
fully ignorant of the true mechanism and engage in data mining, often fall for
baits. Informed researchers who start with an ex-ante hypothesis are minimally
affected. We characterize the optimal noise level and highlight the relevant
trade-offs. Dissemination noise is a tool that statistical agencies currently
use to protect privacy. We argue this existing practice can be repurposed to
improve research credibility.",Screening $p$-Hackers: Dissemination Noise as Bait,2021-03-16 19:05:19,"Federico Echenique, Kevin He","http://arxiv.org/abs/2103.09164v5, http://arxiv.org/pdf/2103.09164v5",econ.TH
34096,th,"A sender sells an object of unknown quality to a receiver who pays his
expected value for it. Sender and receiver might hold different priors over
quality. The sender commits to a monotonic categorization of quality. We
characterize the sender's optimal monotonic categorization. Using our
characterization, we study the optimality of full pooling or full separation,
the alternation of pooling and separation, and make precise a sense in which
pooling is dominant relative to separation. We discuss applications, extensions
and generalizations, among them the design of a grading scheme by a
profit-maximizing school which seeks to signal student qualities and
simultaneously incentivize students to learn. Such incentive constraints force
monotonicity, and can also be embedded as a distortion of the school's prior
over student qualities, generating a categorization problem with distinct
sender and receiver priors.",Conveying Value via Categories,2021-03-23 22:21:43,"Paula Onuchic, Debraj Ray","http://arxiv.org/abs/2103.12804v1, http://arxiv.org/pdf/2103.12804v1",econ.TH
34100,th,"The electricity industry has been one of the first to face technological
changes motivated by sustainability concerns. Whilst efficiency aspects of
market design have tended to focus upon market power concerns, the new policy
challenges emphasise sustainability. We argue that market designs need to
develop remedies for market conduct integrated with regard to environmental
externalities. Accordingly, we develop an incentive-based market clearing
mechanism using a power network representation with a distinctive feature of
incomplete information regarding generation costs. The shortcomings of price
caps to mitigate market power, in this context, are overcome with the proposed
mechanism.",A Pricing Mechanism to Jointly Mitigate Market Power and Environmental Externalities in Electricity Markets,2021-04-01 19:00:22,"Lamia Varawala, Mohammad Reza Hesamzadeh, György Dán, Derek Bunn, Juan Rosellón","http://dx.doi.org/10.1016/j.eneco.2023.106646, http://arxiv.org/abs/2104.00578v2, http://arxiv.org/pdf/2104.00578v2",econ.TH
34101,th,"This document contains all proofs omitted from our working paper 'Screening
for breakthroughs'; specifically, the June 2023 version of the paper
(arXiv:2011.10090v7).",Screening for breakthroughs: Omitted proofs,2021-04-05 20:46:25,"Gregorio Curello, Ludvig Sinander","http://arxiv.org/abs/2104.02044v4, http://arxiv.org/pdf/2104.02044v4",econ.TH
34102,th,"The guarantee of an anonymous mechanism is the worst case welfare an agent
can secure against unanimously adversarial others. How high can such a
guarantee be, and what type of mechanism achieves it? We address the worst case
design question in the n-person probabilistic voting/bargaining model with p
deterministic outcomes. If n is no less than p the uniform lottery is the only
maximal (unimprovable) guarantee; there are many more if p>n, in particular the
ones inspired by the random dictator mechanism and by voting by veto. If n=2
the maximal set M(n,p) is a simple polytope where each vertex combines a round
of vetoes with one of random dictatorship. For p>n>2, we show that the dual
veto and random dictator guarantees, together with the uniform one, are the
building blocks of 2 to the power d simplices of dimension d in M(n,p), where d
is the quotient of p-1 by n. Their vertices are guarantees easy to interpret
and implement. The set M(n,p) may contain other guarantees as well; what we can
say in full generality is that it is a finite union of polytopes, all sharing
the uniform guarantee.",Worst Case in Voting and Bargaining,2021-04-06 09:43:57,"Anna Bogomolnaia, Ron Holzman, Herve Moulin","http://arxiv.org/abs/2104.02316v1, http://arxiv.org/pdf/2104.02316v1",econ.TH
34103,th,"I examine global dynamics in a monetary model with overlapping generations of
finite-horizon agents and a binding lower bound on nominal interest rates. Debt
targeting rules exacerbate the possibility of self-fulfilling liquidity traps,
for agents expect austerity following deflationary slumps. Conversely, activist
but sustainable fiscal policy regimes - implementing intertemporally balanced
tax cuts and/or transfer increases in response to disinflationary trajectories
- are capable of escaping liquidity traps and embarking inflation into a
globally stable path that converges to the target. Should fiscal stimulus of
last resort be overly aggressive, however, spiral dynamics around the
liquidity-trap steady state exist, causing global indeterminacy.",Fiscal Stimulus of Last Resort,2021-04-06 22:27:33,Alessandro Piergallini,"http://arxiv.org/abs/2104.02753v1, http://arxiv.org/pdf/2104.02753v1",econ.TH
34104,th,"Working becomes harder as we grow tired or bored. I model individuals who
underestimate these changes in marginal disutility -- as implied by ""projection
bias"" -- when deciding whether or not to continue working. This bias causes
people's plans to change: early in the day when they are rested, they plan to
work more than late in the day when they are tired. Despite initially
overestimating how much they will work, people facing a single task with
decreasing returns to effort work optimally. However, when facing multiple
tasks, they misprioritize urgent but unimportant over important but non-urgent
tasks. And when they face a single task with all-or-nothing rewards (such as
being promoted) they start, and repeatedly work on, some overly ambitious tasks
that they later abandon. Each day they stop working once they have grown tired,
which can lead to large daily welfare losses. Finally, when they have either
increasing or decreasing productivity, people work less each day than
previously planned. This moves people closer to optimal effort for decreasing,
and further away from optimal effort for increasing productivity.",Projection Bias in Effort Choices,2021-04-09 15:21:00,Marc Kaufmann,"http://arxiv.org/abs/2104.04327v1, http://arxiv.org/pdf/2104.04327v1",econ.TH
34105,th,"I study how strategic communication among voters shapes both political
outcomes and parties' advertising strategies in a model of informative campaign
advertising. Two main results are derived. First, echo chambers arise
endogenously. Surprisingly, a small ideological distance between voters is not
sufficient to guarantee that a chamber is created, bias direction plays a
crucial role. Second, when voters' network entails a significant waste of
information, parties tailor their advertising to the opponent's supporters
rather than to their own.",Echo Chambers: Voter-to-Voter Communication and Political Competition,2021-04-10 10:47:44,Monica Anna Giovanniello,"http://arxiv.org/abs/2104.04703v1, http://arxiv.org/pdf/2104.04703v1",econ.TH
34106,th,"A vast majority of the school choice literature focuses on designing
mechanisms to simultaneously assign students to many schools, and employs a
""make it up as you go along"" approach when it comes to each school's admissions
policy. An alternative approach is to focus on the admissions policy for one
school. This is especially relevant for effectively communicating policy
objectives such as achieving a diverse student body or implementing affirmative
action. I argue that the latter approach is relatively under-examined and
deserves more attention in the future.",Mechanism Design Approach to School Choice: One versus Many,2021-04-17 11:53:49,Battal Dogan,"http://arxiv.org/abs/2104.08485v1, http://arxiv.org/pdf/2104.08485v1",econ.TH
34107,th,"Consumers only discover at the first seller which product best fits their
needs, then check its price online, then decide on buying. Switching sellers is
costly. Equilibrium prices fall in the switching cost, eventually to the
monopoly level, despite the exit of lower-value consumers when changing sellers
becomes costlier. More expensive switching makes some buyers exit the market,
leaving fewer inframarginal buyers to the sellers. Marginal buyers may change
in either direction, so for a range of parameters, all firms cut prices.",Costlier switching strengthens competition even without advertising,2021-04-18 22:02:30,Sander Heinsalu,"http://arxiv.org/abs/2104.08934v1, http://arxiv.org/pdf/2104.08934v1",econ.TH
34108,th,"This paper approaches the problem of understanding collective agency from a
logical and game-theoretical perspective. Instead of collective intentionality,
our analysis highlights the role of Pareto optimality. To facilitate the
analysis, we propose a logic of preference and functional dependence by
extending the logic of functional dependence. In this logic, we can express
Pareto optimality and thus reason about collective agency.","Pareto Optimality, Functional Dependence and Collective Agency",2021-04-19 11:10:15,"Chenwei Shi, Yiyang Wang","http://arxiv.org/abs/2104.09112v1, http://arxiv.org/pdf/2104.09112v1",econ.TH
34109,th,"School choice is of great importance both in theory and practice. This paper
studies the (student-optimal) top trading cycles mechanism (TTCM) in an
axiomatic way. We introduce two new axioms: MBG (mutual best
group)-quota-rationality and MBG-robust efficiency. While stability implies
MBG-quota-rationality, MBG-robust efficiency is weaker than robust efficiency,
which is stronger than the combination of efficiency and group
strategy-proofness. The TTCM is characterized by MBG-quota-rationality and
MBG-robust efficiency. Our results construct a new basis to compare the TTCM
with the other school choice mechanisms, especially the student-optimal stable
mechanism under Ergin but not Kesten-acyclic priority structures.",New axioms for top trading cycles,2021-04-19 12:36:12,"Siwei Chen, Yajing Chen, Chia-Ling Hsu","http://arxiv.org/abs/2104.09157v3, http://arxiv.org/pdf/2104.09157v3",econ.TH
34110,th,"This paper considers the problem of randomly assigning a set of objects to a
set of agents based on the ordinal preferences of agents. We generalize the
well-known immediate acceptance algorithm to the afore-mentioned random
environments and define the probabilistic rank rule (PR rule). We introduce two
new axioms: sd-rank-fairness, and equal-rank envy-freeness. Sd-rank-fairness
implies sd-efficiency. Equal-rank envy-freeness implies equal treatment of
equals. Sd-rank-fairness and equal-rank envy-freeness are enough to
characterize the PR rule.",The probabilistic rank random assignment rule and its axiomatic characterization,2021-04-19 12:41:32,"Yajing Chen, Patrick Harless, Zhenhua Jiao","http://arxiv.org/abs/2104.09165v1, http://arxiv.org/pdf/2104.09165v1",econ.TH
34111,th,"We analyze the relation between strategy-proofness and preference reversal in
the case that agents may declare indifference. Interestingly, Berga and Moreno
(2020) have recently derived preference reversal from group strategy-proofness
of social choice functions on strict preferences domains if the range has no
more than three elements. We extend this result and at the same time simplify
it. Our analysis points out the role of individual strategy-proofness in
deriving the preference reversal property, giving back to the latter its
original individual nature (cfr. Eliaz, 2004).
  Moreover, we show that the difficulties Berga and Moreno highlighted relaxing
the assumption on the cardinality of the range, disappear under a proper
assumption on the domain. We introduce the concept of complete sets of
preferences and show that individual strategy-proofness is sufficient to obtain
the preference reversal property when the agents' feasible set of orderings is
complete. This covers interesting cases like single peaked preferences, rich
domains admitting regular social choice functions, and universal domains. The
fact that we use individual rather than group strategy-proofness allows us to get
immediately some of the known, and some new, equivalences between individual
and group strategy-proofness. Finally, we show that group strategy-proofness is
only really needed to obtain preference reversal if there are infinitely many
voters.",On the relation between Preference Reversal and Strategy-Proofness,2021-04-20 21:50:28,"K. P. S. Bhaskara Rao, Achille Basile, Surekha Rao","http://arxiv.org/abs/2104.10205v2, http://arxiv.org/pdf/2104.10205v2",econ.TH
34112,th,"The paper develops a general methodology for analyzing policies with
path-dependency (hysteresis) in stochastic models with forward looking
optimizing agents. Our main application is a macro-climate model with a
path-dependent climate externality. We derive in closed form the dynamics of
the optimal Pigouvian tax, that is, its drift and diffusion coefficients. The
dynamics of the present marginal damages is given by the recently developed
functional It\^o formula. The dynamics of the conditional expectation process
of the future marginal damages is given by a new total derivative formula that
we prove. The total derivative formula represents the evolution of the
conditional expectation process as a sum of the expected dynamics of hysteresis
with respect to time, a form of a time derivative, and the expected dynamics of
hysteresis with the shocks to the trajectory of the stochastic process, a form
of a stochastic derivative. We then generalize the results. First, we propose a
general class of hysteresis functionals that permits significant tractability.
Second, we characterize in closed form the dynamics of the stochastic
hysteresis elasticity that represents the change in the whole optimal policy
process with an introduction of small hysteresis effects. Third, we determine
the optimal policy process.",Policy with stochastic hysteresis,2021-04-20 22:52:26,"Georgii Riabov, Aleh Tsyvinski","http://arxiv.org/abs/2104.10225v1, http://arxiv.org/pdf/2104.10225v1",econ.TH
34113,th,"One of the consequences of persistent technological change is that it force
individuals to make decisions under extreme uncertainty. This means that
traditional decision-making frameworks cannot be applied. To address this issue
we introduce a variant of Case-Based Decision Theory, in which the solution to
a problem obtains in terms of the distance to previous problems. We formalize
this by defining a space based on an orthogonal basis of features of problems.
We show how this framework evolves upon the acquisition of new information,
namely new features, or values of existing ones, arising in new problems. We
discuss how this can be useful for evaluating decisions based on data that does
not yet exist.",A Graph-based Similarity Function for CBDT: Acquiring and Using New Information,2021-04-29 14:36:33,"Federico E. Contiggiani, Fernando Delbianco, Fernando Tohmé","http://arxiv.org/abs/2104.14268v1, http://arxiv.org/pdf/2104.14268v1",econ.TH
34114,th,"The European Union Emission Trading Scheme (EU ETS) is a cornerstone of the
EU's strategy to fight climate change and an important device for reducing
greenhouse gas (GHG) emissions in an economically efficient manner. The power
industry has switched to an auction-based allocation system at the onset of
Phase III of the EU ETS to bring economic efficiency by negating windfall
profits that resulted from the grandfathered allocation of allowances in
the previous phases. In this work, we analyze and simulate the interaction of
oligopolistic generators in an electricity market with a game-theoretical
framework where the electricity and the emissions markets interact in a
two-stage electricity market. For analytical simplicity, we assume a single
futures market where the electricity is committed at the futures price, and the
emissions allowance is contracted in advance, prior to a spot market where the
energy and allowances delivery takes place. Moreover, a coherent risk measure
is applied (Conditional Value at Risk) to model both risk averse and risk
neutral generators and a two-stage stochastic optimization setting is
introduced to deal with the uncertainty of renewable capacity, demand,
generation, and emission costs. The performance of the proposed equilibrium
model and its main properties are examined through realistic numerical
simulations. Our results show that renewable generators are expanding and
substituting for conventional generators without compromising social welfare.
Hence, both renewable deployment and emission allowance auctioning are
effectively reducing GHG emissions and promoting a low-carbon economic path.",Contracts in Electricity Markets under EU ETS: A Stochastic Programming Approach,2021-04-30 18:31:32,"Arega Getaneh Abate, Rossana Riccardi, Carlos Ruiz","http://dx.doi.org/10.1016/j.eneco.2021.105309, http://arxiv.org/abs/2104.15062v1, http://arxiv.org/pdf/2104.15062v1",econ.TH
34115,th,"Consumers often resort to third-party information such as word of mouth,
testimonials and reviews to learn more about the quality of a new product.
However, it may be difficult for consumers to assess the precision of such
information. We use a monopoly setting to investigate how the precision of
third-party information and consumers' ability to recognize precision impact
firm profits. Conventional wisdom suggests that when a firm is high quality, it
should prefer a market where consumers are better at recognizing precise
signals. Yet in a broad range of conditions, we show that when the firm is high
quality, it is more profitable to sell to consumers who do not recognize
precise signals. Given the ability of consumers to assess precision, we show a
low quality firm always suffers from more precise information. However, a high
quality firm can also suffer from more precise information. The precision range
in which a high quality firm gains or suffers from better information depends
on how skilled consumers are at recognizing precision.",Is More Precise Word of Mouth Better for a High Quality Firm? ... Not Always,2021-05-03 20:38:07,"Mohsen Foroughifar, David Soberman","http://arxiv.org/abs/2105.01040v2, http://arxiv.org/pdf/2105.01040v2",econ.TH
34116,th,"In a commodity market, revenue adequate prices refer to compensations that
ensure that a market participant has a non-negative profit. In this article, we
study the problem of deriving revenue adequate prices for an electricity
market-clearing model with uncertainties resulting from the use of variable
renewable energy sources (VRES). To handle the uncertain nature of the problem,
we use a chance-constrained optimization (CCO) approach, which has recently
become a very popular choice when constructing electricity dispatch models with
penetration of VRES (or other sources of uncertainty). Then, we show how prices
that satisfy revenue adequacy in expectation for the market administrator, and
cost recovery in expectation for all conventional and VRES generators, can be
obtained from the optimal dual variables associated with the deterministic
equivalent of the CCO market-clearing model. These results constitute a novel
contribution to the research on revenue adequate, equilibrium, and
other types of pricing schemes that have been derived in the literature when
the market uncertainties are modeled using stochastic or robust optimization
approaches. Unlike in the stochastic approach, the CCO market-clearing model
studied here produces real-time prices that are uniform in the uncertainty and do not depend
on the real-time realization of the VRES generation outcomes. To illustrate our
results, we consider a case study electricity market, and contrast the market
prices obtained using a revenue adequate stochastic approach and the proposed
revenue adequate CCO approach.",Revenue Adequate Prices for Chance-Constrained Electricity Markets with Variable Renewable Energy Sources,2021-05-04 03:54:07,"Xin Shi, Alberto J. Lamadrid L., Luis F. Zuluaga","http://arxiv.org/abs/2105.01233v1, http://arxiv.org/pdf/2105.01233v1",econ.TH
34117,th,"We provide an original analysis of historical documents to describe the
assignment procedures used to allocate entry-level civil service jobs in China
from the tenth to the early twentieth century. The procedures tried to take
different objectives into account through trial and error. By constructing a
formal model that combines these procedures into a common framework, we compare
their effectiveness in minimizing unfilled jobs and prioritizing high-level
posts. We show that the problem was inherently complex, so that changes made
to improve the outcome could have the opposite effect. Based on a small
modification of the last procedure used, we provide a new mechanism for
producing maximum matchings under constraints in a transparent and public way.",Designing Heaven's Will: The job assignment in the Chinese imperial civil service,2021-05-06 09:32:10,"Inácio Bó, Li Chen","http://arxiv.org/abs/2105.02457v2, http://arxiv.org/pdf/2105.02457v2",econ.TH
34118,th,"We study robustly optimal mechanisms for selling multiple items. The seller
maximizes revenue robustly against a worst-case distribution of a buyer's
valuations within a set of distributions, called an ``ambiguity'' set. We
identify the exact forms of robustly optimal selling mechanisms and the
worst-case distributions when the ambiguity set satisfies a variety of moment
conditions on the values of subsets of goods. We also identify general
properties of the ambiguity set that lead to the robust optimality of partial
bundling which includes separate sales and pure bundling as special cases.",Robustly Optimal Mechanisms for Selling Multiple Goods,2021-05-06 20:16:30,"Yeon-Koo Che, Weijie Zhong","http://arxiv.org/abs/2105.02828v3, http://arxiv.org/pdf/2105.02828v3",econ.TH
34119,th,"A researcher observes a finite sequence of choices made by multiple agents in
a binary-state environment. Agents maximize expected utilities that depend on
their chosen alternative and the unknown underlying state. Agents learn about
the time-varying state from the same information and their actions change
because of the evolving common belief. The researcher does not observe agents'
preferences, the prior, the common information and the stochastic process for
the state. We characterize the set of choices that are rationalized by this
model and generalize the information environments to allow for private
information. We discuss the implications of our results for uncovering
discrimination and committee decision making.",Dynamic Choices and Common Learning,2021-05-08 15:07:55,"Rahul Deb, Ludovic Renou","http://arxiv.org/abs/2105.03683v1, http://arxiv.org/pdf/2105.03683v1",econ.TH
34120,th,"I study the design of auctions in which the auctioneer is assumed to have
information only about the marginal distribution of a generic bidder's
valuation, but does not know the correlation structure of the joint
distribution of bidders' valuations. I assume that a generic bidder's valuation
is bounded and $\bar{v}$ is the maximum valuation of a generic bidder. The
performance of a mechanism is evaluated in the worst case over the uncertainty
of joint distributions that are consistent with the marginal distribution. For
the two-bidder case, the second-price auction with the uniformly distributed
random reserve maximizes the worst-case expected revenue across all
dominant-strategy mechanisms under certain regularity conditions. For the
$N$-bidder ($N\ge3$) case, the second-price auction with the $\bar{v}-$scaled
$Beta (\frac{1}{N-1},1)$ distributed random reserve maximizes the worst-case
expected revenue across standard (a bidder whose bid is not the highest will
never be allocated) dominant-strategy mechanisms under certain regularity
conditions. When the probability mass condition (part of the regularity
conditions) does not hold, the second-price auction with the $s^*-$scaled $Beta
(\frac{1}{N-1},1)$ distributed random reserve maximizes the worst-case expected
revenue across standard dominant-strategy mechanisms, where $s^*\in
(0,\bar{v})$.",Correlation-Robust Optimal Auctions,2021-05-11 01:30:52,Wanchang Zhang,"http://arxiv.org/abs/2105.04697v2, http://arxiv.org/pdf/2105.04697v2",econ.TH
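Editorial note: the $N\ge3$ mechanism described in the preceding abstract is easy to simulate. The sketch below is a minimal Monte Carlo illustration, not code from the paper; the joint valuation distribution, sample sizes, and all identifiers are assumptions chosen for the example. It estimates the expected revenue of a second-price auction whose reserve is drawn as $\bar{v}$ times a Beta(1/(N-1), 1) random variable, assuming truthful (dominant-strategy) bidding.

import numpy as np

def revenue_with_random_reserve(values, vbar, rng):
    """Second-price auction with reserve r ~ vbar * Beta(1/(N-1), 1).

    values: array of shape (num_auctions, N) of valuations, assumed to be
    bid truthfully (a dominant strategy in this auction).
    """
    num_auctions, n = values.shape
    reserves = vbar * rng.beta(1.0 / (n - 1), 1.0, size=num_auctions)
    sorted_vals = np.sort(values, axis=1)
    highest, second = sorted_vals[:, -1], sorted_vals[:, -2]
    sold = highest >= reserves                 # the winner must clear the reserve
    payment = np.maximum(second, reserves)     # winner pays max(second bid, reserve)
    return np.where(sold, payment, 0.0).mean()

rng = np.random.default_rng(0)
vbar, n_bidders = 1.0, 3
# One assumed joint distribution consistent with a Uniform[0, vbar] marginal:
# perfectly correlated valuations.
common = rng.uniform(0.0, vbar, size=(100_000, 1))
values = np.repeat(common, n_bidders, axis=1)
print(revenue_with_random_reserve(values, vbar, rng))

Swapping in any other joint distribution with the same marginal lets one probe the worst-case behavior over correlation structures that the paper studies.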
34257,th,"In this paper, we show that if every consumer in an economy has a
quasi-linear utility function, then the normalized equilibrium price is unique,
and it is locally stable with respect to the t\^atonnement process. If the
dimension of the consumption space is two, then this result can be expressed by
the corresponding partial equilibrium model. Our study can be seen as one that
extends the results of partial equilibrium theory to economies with a
consumption space of more than two dimensions.",On the Uniqueness and Stability of the Equilibrium Price in Quasi-Linear Economies,2022-02-09 20:13:43,Yuhki Hosoya,"http://arxiv.org/abs/2202.04573v3, http://arxiv.org/pdf/2202.04573v3",econ.TH
34121,th,"I construct a novel random double auction as a robust bilateral trading
mechanism for a profit-maximizing intermediary who facilitates trade between a
buyer and a seller. It works as follows. The intermediary publicly commits to
charging a fixed commission fee and randomly drawing a spread from a uniform
distribution. Then the buyer submits a bid price and the seller submits an ask
price simultaneously. If the difference between the bid price and the ask price
is greater than the realized spread, then the asset is transacted at the
midpoint price, and each pays the intermediary half of the fixed commission
fee. Otherwise, no trade takes place, and no one pays or receives anything. I
show that the random double auction maximizes the worst-case expected profit
across all dominant-strategy incentive compatible and ex-post individually
rational mechanisms for the symmetric case. I also construct a robust trading
mechanism with similar properties for the asymmetric case.",Random Double Auction: A Robust Bilateral Trading Mechanism,2021-05-12 07:46:28,Wanchang Zhang,"http://arxiv.org/abs/2105.05427v2, http://arxiv.org/pdf/2105.05427v2",econ.TH
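Editorial note: the trading rule in the preceding abstract can be stated in a few lines. The sketch below is an illustrative implementation with arbitrary example numbers, not the paper's code; the function name and parameters are assumptions.

import random

def random_double_auction(bid, ask, commission, spread_upper, rng=random):
    """Return (trade, price, buyer_fee, seller_fee) for one buyer and one seller.

    The intermediary commits to a fixed commission and draws a spread uniformly
    on [0, spread_upper]; trade occurs at the midpoint price iff bid - ask
    exceeds the realized spread.
    """
    spread = rng.uniform(0.0, spread_upper)
    if bid - ask > spread:
        price = (bid + ask) / 2.0
        return True, price, commission / 2.0, commission / 2.0
    return False, None, 0.0, 0.0

random.seed(0)
print(random_double_auction(bid=0.8, ask=0.3, commission=0.05, spread_upper=1.0))

The worst-case optimality claims of the paper concern the choice of the commission and the spread distribution; here they are simply taken as inputs.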
34122,th,"We study how a consensus emerges in a finite population of like-minded
individuals who are asymmetrically informed about the realization of the true
state of the world. Agents observe a private signal about the state and then
start exchanging messages. Generalizing the classical model of rational
dialogues of Geanakoplos and Polemarchakis (1982) and its subsequent
extensions, we dispense with the standard assumption that the state space is a
probability space and we do not put any bound on the cardinality of the state
space itself or the information partitions. We show that a class of rational
dialogues can be found that always lead to consensus provided that three main
conditions are met. First, everybody must be able to send messages to everybody
else, either directly or indirectly. Second, communication must be reciprocal.
Finally, agents need to have the opportunity to engage in dialogues of
transfinite length.",Learning to agree over large state spaces,2021-05-13 17:10:35,Michele Crescenzi,"http://dx.doi.org/10.1016/j.jmateco.2022.102654, http://arxiv.org/abs/2105.06313v2, http://arxiv.org/pdf/2105.06313v2",econ.TH
34123,th,"In dynamic settings each economic agent's choices can be revealing of her
private information. This elicitation via the rationalization of observable
behavior depends on each agent's perception of which payoff-relevant contingencies
other agents persistently deem as impossible. We formalize the potential
heterogeneity of these perceptions as disagreements at higher-orders about the
set of payoff states of a dynamic game. We find that apparently negligible
disagreements greatly affect how agents interpret information and assess the
optimality of subsequent behavior: When knowledge of the state space is only
'almost common', strategic uncertainty may be greater when choices are
rationalized than when they are not--forward and backward induction
predictions, respectively, and while backward induction predictions are robust
to small disagreements about the state space, forward induction predictions are
not. We also prove that forward induction predictions always admit unique
selections a la Weinstein and Yildiz (2007) (also for spaces not satisfying
richness) and backward induction predictions do not.","Heterogeneously Perceived Incentives in Dynamic Environments: Rationalization, Robustness and Unique Selections",2021-05-14 14:32:13,"Evan Piermont, Peio Zuazo-Garin","http://arxiv.org/abs/2105.06772v1, http://arxiv.org/pdf/2105.06772v1",econ.TH
34124,th,"Experts in a population hold (a) beliefs over a state (call these state
beliefs), as well as (b) beliefs over the distribution of beliefs in the
population (call these hypothetical beliefs). If these are generated via
updating a common prior using a fixed information structure, then the
information structure can (generically) be derived by regressing hypothetical
beliefs on state beliefs, provided there are at least as many signals as
states. In addition, the prior solves an eigenvector equation derived from a
matrix determined by the state beliefs and the hypothetical beliefs. Thus, the
ex-ante informational environment (i.e., how signals are generated) can be
determined using ex-post data (i.e., the beliefs in the population). I discuss
implications of this finding, as well as what is identified when there are more
states than signals.",Identifying Wisdom (of the Crowd): A Regression Approach,2021-05-15 02:40:49,Jonathan Libgober,"http://arxiv.org/abs/2105.07097v2, http://arxiv.org/pdf/2105.07097v2",econ.TH
34125,th,"We study the problem of when to sell an indivisible object. There is a
monopolistic seller who owns an indivisible object and plans to sell it over a
given span of time to the set of potential buyers whose valuations for the
object evolve over time. We formulate the seller's problem as a dynamic
mechanism design problem. We provide the procedure for finding the optimal
solution and show how to check incentive compatibility. We also examine
sufficient conditions for the optimality of the myopic stopping rule and ex-post
individual rationality. In addition, we present some comparative static results
regarding the seller's revenue and the selling time.",When to sell an indivisible object: Optimal timing with Markovian buyers,2021-05-17 10:33:34,Kiho Yoon,"http://arxiv.org/abs/2105.07649v4, http://arxiv.org/pdf/2105.07649v4",econ.TH
34126,th,"With the increase of variable renewable energy sources (VRES) share in
electricity systems, manystudies were developed in order to determine their
optimal technological and spatial mix. Modern PortfolioTheory (MPT) has been
frequently applied in this context. However, some crucial aspects, important
inenergy planning, are not addressed by these analyses. We, therefore, propose
several improvements andevaluate how each change in formulation impacts
results. More specifically, we address generation costs, system demand, and
firm energy output, present a formal model and apply it to the case of Brazil.
Wefound that, after including our proposed modifications, the resulting
efficient frontier differs strongly fromthe one obtained in the original
formulation. Portfolios with high output standard deviation are not ableto
provide a firm output level at competitive costs. Furthermore, we show that
diversification plays animportant role in smoothing output from VRES portfolios",Improvements to Modern Portfolio Theory based models applied to electricity systems,2021-05-18 01:16:14,"Gabriel Malta Castro, Claude Klöckl, Peter Regner, Johannes Schmidt, Amaro Olimpio Pereira Jr","http://arxiv.org/abs/2105.08182v1, http://arxiv.org/pdf/2105.08182v1",econ.TH
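Editorial note: as a generic reminder of the Modern Portfolio Theory machinery the preceding abstract builds on (and not of the paper's extended formulation), the sketch below traces an approximate expected-output versus output-standard-deviation frontier from random portfolios of hypothetical generation data; all data, dimensions, and names are assumptions.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hourly capacity factors for three technologies (e.g. wind, solar, hydro).
generation = rng.uniform(0.0, 1.0, size=(8_760, 3))
mean_out = generation.mean(axis=0)
cov_out = np.cov(generation, rowvar=False)

weights = rng.dirichlet(np.ones(3), size=5_000)                 # random portfolio mixes
exp_output = weights @ mean_out                                 # expected portfolio output
std_output = np.sqrt(np.einsum('ij,jk,ik->i', weights, cov_out, weights))

# Approximate efficient frontier: highest expected output attainable at or
# below each level of output standard deviation.
order = np.argsort(std_output)
frontier = np.maximum.accumulate(exp_output[order])
print(std_output[order][:5], frontier[:5])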
34127,th,"We present a model of optimal training of a rational, sluggish agent. A
trainer commits to a discrete-time, finite-state Markov process that governs
the evolution of training intensity. Subsequently, the agent monitors the state
and adjusts his capacity at every period. Adjustments are incremental: the
agent's capacity can only change by one unit at a time. The trainer's objective
is to maximize the agent's capacity - evaluated according to its lowest value
under the invariant distribution - subject to an upper bound on average
training intensity. We characterize the trainer's optimal policy, and show how
stochastic, time-varying training intensity can dramatically increase the
long-run capacity of a rational, sluggish agent. We relate our theoretical
findings to ""periodization"" training techniques in exercise physiology.",Anabolic Persuasion,2021-05-18 22:15:34,"Kfir Eliaz, Ran Spiegler","http://arxiv.org/abs/2105.08786v1, http://arxiv.org/pdf/2105.08786v1",econ.TH
34128,th,"I study a static textbook model of monetary policy and relax the conventional
assumption that the private sector has rational expectations. Instead, the
private sector forms inflation forecasts according to a misspecified subjective
model that disagrees with the central bank's (true) model over the causal
underpinnings of the Phillips Curve. Following the AI/Statistics literature on
Bayesian Networks, I represent the private sector's model by a directed acyclic
graph (DAG). I show that when the private sector's model reverses the direction
of causality between inflation and output, the central bank's optimal policy
can exhibit an attenuation effect that is sensitive to the noisiness of the
true inflation-output equations.",A Simple Model of Monetary Policy under Phillips-Curve Causal Disagreements,2021-05-19 11:34:55,Ran Spiegler,"http://arxiv.org/abs/2105.08988v1, http://arxiv.org/pdf/2105.08988v1",econ.TH
34129,th,"We analyze the optimal strategy for a government to promote large-scale
investment projects under information frictions. Specifically, we propose a
model where the government collects information on the probability of each
investment and discloses it to private investors a la Kamenica and Gentzkow
(2011). We derive the government's optimal information policy, which is
characterized by threshold values for the unknown probability of the projects
released to the private investors, and study how the underlying features of the
economy affect the optimal policies. We find that when multiple projects are
available, the government promotes the project with a bigger spillover effect
by fully revealing the true state of the economy only when its probability is
substantially high. Moreover, the development of the financial/information
market also affects the optimal rule.",State-Promoted Investment for Industrial Reforms: an Information Design Approach,2021-05-20 11:06:10,"Keeyoung Rhee, Myungkyu Shim, Ji Zhang","http://arxiv.org/abs/2105.09576v1, http://arxiv.org/pdf/2105.09576v1",econ.TH
34130,th,"We study a class of preference domains that satisfies the familiar properties
of minimal richness, diversity and no-restoration. We show that a specific
preference restriction, hybridness, has been embedded in these domains so that
the preferences are single-peaked at the ""extremes"" and unrestricted in the
""middle"". We also study the structure of strategy-proof and unanimous Random
Social Choice Functions on these domains. We show them to be special cases of
probabilistic fixed ballot rules (introduced by Ehlers, Peters, and Storcken
(2002)).",Probabilistic Fixed Ballot Rules and Hybrid Domains,2021-05-22 13:12:17,"Shurojit Chatterji, Souvik Roy, Soumyarup Sadhukhan, Arunava Sen, Huaxia Zeng","http://arxiv.org/abs/2105.10677v2, http://arxiv.org/pdf/2105.10677v2",econ.TH
34131,th,"We review the theory of information cascades and social learning. Our goal is
to describe in a relatively integrated and accessible way the more important
themes, insights and applications of the literature as it has developed over
the last thirty years. We also highlight open questions and promising
directions for further theoretical and empirical exploration.",Information Cascades and Social Learning,2021-05-24 02:36:16,"Sushil Bikhchandani, David Hirshleifer, Omer Tamuz, Ivo Welch","http://arxiv.org/abs/2105.11044v1, http://arxiv.org/pdf/2105.11044v1",econ.TH
34132,th,"We generalize the stochastic revealed preference methodology of McFadden and
Richter (1990) for finite choice sets to settings with limited consideration.
Our approach is nonparametric and requires partial choice set variation. We
impose a monotonicity condition on attention first proposed by Cattaneo et al.
(2020) and a stability condition on the marginal distribution of preferences.
Our framework is amenable to statistical testing. These new restrictions extend
widely known parametric models of consideration with heterogeneous preferences.",A Random Attention and Utility Model,2021-05-24 16:34:15,"Nail Kashaev, Victor H. Aguiar","http://arxiv.org/abs/2105.11268v3, http://arxiv.org/pdf/2105.11268v3",econ.TH
34133,th,"This paper describes a basic model of a gift economy in the shape of a Giving
Game and reveals the fundamental structure of such a game. The main result is
that the game exhibits a community effect in that a small subgroup of players
eventually keeps all circulating goods for themselves. Example applications
include computers sharing processing power for complex calculations, or
commodity traders making transactions within a professional community. The
Giving Game may equally well be viewed as a basic model of clientelism or
corruption. Keywords in this paper are giving, gift economy, community effect,
stabilization, computational complexity, corruption, micro-economics, game
theory, stock trading, distributed computing, crypto currency, blockchain.",The Giving Game,2021-05-25 11:55:06,Peter Weijland,"http://arxiv.org/abs/2105.11761v1, http://arxiv.org/pdf/2105.11761v1",econ.TH
34134,th,"We study full implementation with evidence in an environment with bounded
utilities. We show that a social choice function is Nash implementable in a
direct revelation mechanism if and only if it satisfies the measurability
condition proposed by BL2012. Building on a novel classification
of lies according to their refutability with evidence, the mechanism requires
only two agents, accounts for mixed-strategy equilibria and accommodates
evidentiary costs. While monetary transfers are used, they are off the
equilibrium and can be balanced with three or more agents. In a richer model of
evidence due to KT2012, we establish pure-strategy implementation
with two or more agents in a direct revelation mechanism. We also obtain a
necessary and sufficient condition on the evidence structure for
renegotiation-proof bilateral contracts, based on the classification of lies.",Direct Implementation with Evidence,2021-05-26 05:12:34,"Soumen Banerjee, Yi-Chun Chen, Yifei Sun","http://arxiv.org/abs/2105.12298v6, http://arxiv.org/pdf/2105.12298v6",econ.TH
34135,th,"In many choice problems, the interaction between several distinct variables
determines the payoff of each alternative. I propose and axiomatize a model of
a decision maker who recognizes that she may not accurately perceive the
correlation between these variables, and who takes this into account when
making her decision. She chooses as if she calculates each alternative's
expected outcome under multiple possible correlation structures, and then
evaluates it according to the worst expected outcome.",Correlation Concern,2021-05-27 20:48:30,Andrew Ellis,"http://arxiv.org/abs/2105.13341v1, http://arxiv.org/pdf/2105.13341v1",econ.TH
34136,th,"We study rotation programs within the standard implementation frame-work
under complete information. A rotation program is a myopic stableset whose
states are arranged circularly, and agents can effectively moveonly between two
consecutive states. We provide characterizing conditionsfor the implementation
of efficient rules in rotation programs. Moreover,we show that the conditions
fully characterize the class of implementablemulti-valued and efficient rules",An Implementation Approach to Rotation Programs,2021-05-30 17:50:27,"Ville Korpela, Michele Lombardi, Riccardo D. Saulle","http://arxiv.org/abs/2105.14560v1, http://arxiv.org/pdf/2105.14560v1",econ.TH
34137,th,"When multiple sources of information are available, any decision must take
into account their correlation. If information about this correlation is
lacking, an agent may find it desirable to make a decision that is robust to
possible correlations. Our main results characterize the strategies that are
robust to possible hidden correlations. In particular, with two states and two
actions, the robustly optimal strategy pays attention to a single information
source, ignoring all others. More generally, the robustly optimal strategy may
need to combine multiple information sources, but can be constructed quite
simply by using a decomposition of the original problem into separate decision
problems, each requiring attention to only one information source. An
implication is that an information source generates value to the agent if and
only if it is best for at least one of these decomposed problems.",Robust Merging of Information,2021-05-31 23:25:15,"Henrique de Oliveira, Yuhta Ishii, Xiao Lin","http://arxiv.org/abs/2106.00088v1, http://arxiv.org/pdf/2106.00088v1",econ.TH
34138,th,"The question of ""Justice"" still divides social research and moral philosophy.
Several Theories of Justice and conceptual approaches compete here, and
distributive justice remains a major societal controversy. From an evolutionary
point of view, fair and just exchange can be nothing but ""equivalent"", and this
makes ""strict"" reciprocity (merit, equity) the foundational principle of
justice, both theoretically and empirically. But besides being just, justice
must be effective, efficient, and communicable. Moral reasoning is a
communicative strategy for resolving conflict, enhancing status, and
maintaining cooperation, thereby making justice rather a social bargain and an
optimization problem. Social psychology (intuitions, rules of thumb,
self-bindings) can inform us when and why the two auxiliary principles equality
and need are more likely to succeed than merit would. Nevertheless, both
equality and need are governed by reciprocal considerations, and self-bindings
help to interpret altruism as ""very generalized reciprocity"". The Meritocratic
Principle can be implemented, and its controversy avoided, by concentrating on
""non-merit"", i.e., institutionally draining the wellsprings of undeserved
incomes (economic rents). Avoiding or taxing away economic rents is an
effective implementation of justice in liberal democracies. This would enable
market economies to bring economic achievement and income much more in line,
thus becoming more just.",Justice as a Social Bargain and Optimization Problem,2021-06-02 01:18:43,Andreas Siemoneit,"http://arxiv.org/abs/2106.00830v1, http://arxiv.org/pdf/2106.00830v1",econ.TH
34139,th,"How does the control power of corporate shareholder arise? On the fundamental
principles of economics, we discover that the probability of top1 shareholder
possessing optimal control power evolves in Fibonacci series pattern and
emerges as the wave between 1/2 and 2/3 along with time in period of 12h (h is
the time distance between the state and state of the evolution). This novel
feature suggests the efficiency of the allocation of corporate shareholders'
right and power. Data on the Chinese stock market support this prediction.",The Origin of Corporate Control Power,2021-06-03 11:30:03,"Jie He, Min Wang","http://arxiv.org/abs/2106.01681v12, http://arxiv.org/pdf/2106.01681v12",econ.TH
34140,th,"This paper introduces dynamic mechanism design in an elementary fashion. We
first examine optimal dynamic mechanisms: We find necessary and sufficient
conditions for perfect Bayesian incentive compatibility and formulate the
optimal dynamic mechanism problem. We next examine efficient dynamic
mechanisms: We establish the uniqueness of Groves mechanism and investigate
budget balance of the dynamic pivot mechanism in some detail for a bilateral
trading environment. This introduction reveals that many results and techniques
of static mechanism design can be straightforwardly extended and adapted to the
analysis of dynamic settings.",Dynamic mechanism design: An elementary introduction,2021-06-09 10:16:04,Kiho Yoon,"http://arxiv.org/abs/2106.04850v1, http://arxiv.org/pdf/2106.04850v1",econ.TH
34141,th,"When making a decision based on observational data, a person's choice depends
on her beliefs about which correlations reflect causality and which do not. We
model an agent who predicts the outcome of each available action from
observational data using a subjective causal model represented by a directed
acyclic graph (DAG). An analyst can identify the agent's DAG from her random
choice rule. Her choices reveal the chains of causal reasoning that she
undertakes and the confounding variables she adjusts for, and these objects pin
down her model. When her choices determine the data available, her behavior
affects her inferences, which in turn affect her choices. We provide necessary
and sufficient conditions for testing whether such an agent's behavior is
compatible with the model.",Subjective Causality in Choice,2021-06-10 20:54:00,"Andrew Ellis, Heidi Christina Thysen","http://arxiv.org/abs/2106.05957v3, http://arxiv.org/pdf/2106.05957v3",econ.TH
34142,th,"Army cadets obtain occupations through a centralized process. Three
objectives -- increasing retention, aligning talent, and enhancing trust --
have guided reforms to this process since 2006. West Point's mechanism for the
Class of 2020 exacerbated challenges implementing Army policy aims. We
formulate these desiderata as axioms and study their implications theoretically
and with administrative data. We show that the Army's objectives not only
determine an allocation mechanism, but also a specific priority policy, a
uniqueness result that integrates mechanism and priority design. These results
led to a re-design of the mechanism, now adopted at both West Point and ROTC.",Mechanism Design meets Priority Design: Redesigning the US Army's Branching Process,2021-06-11 22:08:18,"Kyle Greenberg, Parag A. Pathak, Tayfun Sonmez","http://arxiv.org/abs/2106.06582v1, http://arxiv.org/pdf/2106.06582v1",econ.TH
34143,th,"We consider an observational learning model with exogenous public payoff
shock. We show that confounded learning doesn't arise for almost all private
signals and almost all shocks, even if players have sufficiently divergent
preferences.",Fragility of Confounded Learning,2021-06-14 22:06:04,Xuanye Wang,"http://arxiv.org/abs/2106.07712v1, http://arxiv.org/pdf/2106.07712v1",econ.TH
34144,th,"The paper presents a model of two-speed evolution in which the payoffs in the
population game (or, alternatively, the individual preferences) slowly adjust
to changes in the aggregate behavior of the population. The model investigates
how, for a population of myopic agents with homogeneous preferences, changes in
the environment caused by current aggregate behavior may affect future payoffs
and hence alter future behavior. The interaction between the agents is based on
a symmetric two-strategy game with positive externalities and negative feedback
from aggregate behavior to payoffs, so that at every point in time the
population has an incentive to coordinate, whereas over time the more popular
strategy becomes less appealing. Under the best response dynamics and the logit
dynamics with small noise levels the joint trajectories of preferences and
behavior converge to closed orbits around the unique steady state, whereas for
large noise levels the steady state of the logit dynamics becomes a sink. Under
the replicator dynamics the unique steady state of the system is repelling and
the trajectories are unbounded unstable spirals.",Cyclical behavior of evolutionary dynamics in coordination games with changing payoffs,2021-06-15 18:33:10,George Loginov,"http://dx.doi.org/10.1007/s00182-021-00783-z, http://arxiv.org/abs/2106.08224v1, http://arxiv.org/pdf/2106.08224v1",econ.TH
34146,th,"The COVID-19 pandemic has forced changes in production and especially in
human interaction, with ""social distancing"" a standard prescription for slowing
transmission of the disease. This paper examines the economic effects of social
distancing at the aggregate level, weighing both the benefits and costs of
prolonged distancing. Specifically we fashion a model of economic recovery when
the productive capacity of factors of production is restricted by social
distancing, building a system of equations where output growth and social
distance changes are interdependent. The model attempts to show the complex
interactions between output levels and social distancing, developing cycle
paths for both variables. Ultimately, however, the model shows that defying
gravity via prolonged social distancing makes a lower growth path inevitable.",Defying Gravity: The Economic Effects of Social Distancing,2021-06-17 13:27:46,"Alfredo D. Garcia, Christopher A. Hartwell, Martín Andrés Szybisz","http://arxiv.org/abs/2106.09361v1, http://arxiv.org/pdf/2106.09361v1",econ.TH
34147,th,"Using a general network model with multiple activities, we analyse a
planner's welfare maximising interventions taking into account within-activity
network spillovers and cross-activity interdependence. We show that the
direction of the optimal intervention, under sufficiently large budgets,
critically depends on the spectral properties of two matrices: the first matrix
depicts the social connections among agents, while the second one quantifies
the strategic interdependence among different activities. In particular, the
first principal component of the interdependence matrix determines budget
resource allocation across different activities, while the first (last)
principal component of the network matrix shapes the resource allocation across
different agents when network effects are strategic complements (substitutes).
We explore some comparative statics analysis with respect to the model
primitives and discuss several applications and extensions.",Multi-activity Influence and Intervention,2021-06-17 14:46:06,"Ryan Kor, Junjie Zhou","http://arxiv.org/abs/2106.09410v2, http://arxiv.org/pdf/2106.09410v2",econ.TH
34148,th,"This note shows that the value of ambiguous persuasion characterized in
Beauchene, Li and Li (2019) can be given by a concavification program as in
Bayesian persuasion (Kamenica and Gentzkow, 2011). In addition, it implies that
an ambiguous persuasion game can be equivalently formalized as a Bayesian
persuasion game by distorting the utility functions. This result is obtained
under a novel construction of ambiguous persuasion.",A Concavification Approach to Ambiguous Persuasion,2021-06-21 20:16:40,Xiaoyu Cheng,"http://arxiv.org/abs/2106.11270v2, http://arxiv.org/pdf/2106.11270v2",econ.TH
34149,th,"Individuals use models to guide decisions, but many models are wrong. This
paper studies which misspecified models are likely to persist when individuals
also entertain alternative models. Consider an agent who uses her model to
learn the relationship between action choices and outcomes. The agent exhibits
sticky model switching, captured by a threshold rule such that she switches to
an alternative model when it is a sufficiently better fit for the data she
observes. The main result provides a characterization of whether a model
persists based on two key features that are straightforward to derive from the
primitives of the learning environment, namely, the model's asymptotic accuracy
in predicting the equilibrium pattern of observed outcomes and the 'tightness'
of the prior around this equilibrium. I show that misspecified models can be
robust in that they persist against a wide range of competing models --
including the correct model -- despite individuals observing an infinite amount
of data. Moreover, simple misspecified models with entrenched priors can be
even more robust than correctly specified models. I use this characterization
to provide a learning foundation for the persistence of systemic biases in two
applications. First, in an effort-choice problem, I show that overconfidence in
one's ability is more robust than underconfidence. Second, a simplistic binary
view of politics is more robust than the more complex correct view when
individuals consume media without fully recognizing the reporting bias.",Robust Misspecified Models and Paradigm Shifts,2021-06-24 05:18:22,Cuimin Ba,"http://arxiv.org/abs/2106.12727v3, http://arxiv.org/pdf/2106.12727v3",econ.TH
34150,th,"In this paper we consider the issue of a unique prediction in one to one two
sided matching markets, as defined by Gale and Shapley (1962), and we prove the
following. Theorem. Let P be a one-to-one two-sided matching market and let P
be its associated normal form, a (weakly) smaller matching market with the same
set of stable matchings, that can be obtained using procedures introduced in
Irving and Leather (1986) and Balinski and Ratier (1997). The following three
statements are equivalent (a) P has a unique stable matching. (b) Preferences
on P* are acyclic, as defined by Chung (2000). (c) In P* every market
participant's preference list is a singleton.",Unique Stable Matchings,2021-06-24 15:53:53,"Gregory Z. Gutin, Philip R. Neary, Anders Yeo","http://arxiv.org/abs/2106.12977v4, http://arxiv.org/pdf/2106.12977v4",econ.TH
34151,th,"We explore the ways that a reference point may direct attention. Utilizing a
stochastic choice framework, we provide behavioral foundations for the
Reference-Dependent Random Attention Model (RD-RAM). Our characterization
result shows that preferences may be uniquely identified even when the
attention process depends arbitrarily on both the menu and the reference point.
The RD-RAM is able to capture rich behavioral patterns, including frequency
reversals among non-status quo alternatives and choice overload. We also
analyze specific attention processes, characterizing reference-dependent
versions of several prominent models of stochastic consideration.",Reference Dependence and Random Attention,2021-06-25 02:15:08,"Matthew Kovach, Elchin Suleymanov","http://arxiv.org/abs/2106.13350v3, http://arxiv.org/pdf/2106.13350v3",econ.TH
34152,th,"This paper considers truncation strategies in housing markets. First, we
characterize the top trading cycles rule in housing markets by the following
two groups of axioms: individual rationality, Pareto efficiency,
truncation-invariance; individual rationality, endowments-swapping-proofness,
truncation-invariance. The new characterizations refine Ma (1994), Fujinaka and
Wakayama (2018), and Chen and Zhao (2021) simultaneously, because
truncation-invariance is weaker than both strategy-proofness and rank
monotonicity. Second, we show that the characterization result of the current
paper can no longer be obtained by further weakening truncation-invariance to
truncation-proofness.",Truncation strategies in housing markets,2021-06-28 11:15:34,"Yajing Chen, Zhenhua Jiao, Chenfeng Zhang","http://arxiv.org/abs/2106.14456v1, http://arxiv.org/pdf/2106.14456v1",econ.TH
34322,th,"In social choice theory, Sen's value restriction and Pattanaik's not-strict
value restriction are both attractive conditions for testing social preference
transitivity and/or non-empty social choice set existence. This article
introduces a novel mathematical representation tool, called possibility
preference map (PPM), for weak orderings, and then reformulates the value
restriction and the not-strict value restriction in terms of PPM. The
reformulations all appear elegant since they take the form of minmax.",Reformulating the Value Restriction and the Not-Strict Value Restriction in Terms of Possibility Preference Map,2022-05-16 02:22:18,Fujun Hou,"http://arxiv.org/abs/2205.07400v1, http://arxiv.org/pdf/2205.07400v1",econ.TH
34153,th,"This study investigates the effect of behavioral mistakes on the evolutionary
stability of the cooperative equilibrium in a repeated public goods game. Many
studies show that behavioral mistakes have detrimental effects on cooperation
because they reduce the expected length of mutual cooperation by triggering the
conditional retaliation of the cooperators. However, this study shows that
behavioral mistakes could have positive effects. Conditional cooperative
strategies are either neutrally stable or are unstable in a mistake-free
environment, but we show that behavioral mistakes can make \textit{all} of the
conditional cooperative strategies evolutionarily stable. We show that
behavioral mistakes stabilize the cooperative equilibrium based on the most
intolerant cooperative strategy by eliminating the behavioral
indistinguishability between conditional cooperators in the cooperative
equilibrium. We also show that mistakes make the tolerant conditional
cooperative strategies evolutionarily stable by preventing the defectors from
accumulating the free-rider's advantages. Lastly, we show that the behavioral
mistakes could serve as a criterion for the equilibrium selection among
cooperative equilibria.",Behavioral Mistakes Support Cooperation in an N-Person Repeated Public Goods Game,2021-06-30 14:36:08,"Jung-Kyoo Choi, Jun Sok Huhh","http://arxiv.org/abs/2106.15994v1, http://arxiv.org/pdf/2106.15994v1",econ.TH
34154,th,"This paper studies the strategic reasoning in a static game played between
heterogeneous populations. Each population has representative preferences with
a random variable indicating individuals' idiosyncrasies. By assuming
rationality, common belief in rationality, and the transparency that players'
beliefs are consistent with the distribution p of the idiosyncrasies, we apply
$\Delta$-rationalizability (Battigalli and Siniscalchi, 2003) and deduce an
empirically testable property of population behavior called
$\Delta^{p}$-rationalizability. We show that every action distribution
generated from a quantal response equilibrium (QRE, McKelvey and Palfrey, 1995)
is $\Delta^{p}$-rationalizable. We also give conditions under which the
converse is true, i.e., when the assumptions provide an epistemic foundation
for QRE and make it the robust prediction of population behavior.","Rationalization, Quantal Response Equilibrium, and Robust Action Distributions in Populations",2021-06-30 17:12:06,"Shuige Liu, Fabio Maccheroni","http://arxiv.org/abs/2106.16081v2, http://arxiv.org/pdf/2106.16081v2",econ.TH
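Editorial note: the preceding abstract relates its predictions to quantal response equilibrium (McKelvey and Palfrey, 1995). The sketch below computes a logit QRE for a symmetric 2x2 game by damped fixed-point iteration; it is a generic illustration of the QRE concept, not of the paper's $\Delta^{p}$-rationalizability construction, and the game and parameters are assumptions.

import numpy as np

def logit_qre(payoff, lam, iters=5_000, damping=0.5):
    """payoff[a, b]: payoff of playing a against an opponent who plays b."""
    p = np.full(payoff.shape[0], 1.0 / payoff.shape[0])   # start from uniform play
    for _ in range(iters):
        u = payoff @ p                                     # expected payoff of each action
        z = np.exp(lam * (u - u.max()))                    # numerically stable logit weights
        p = (1 - damping) * p + damping * z / z.sum()      # damped fixed-point update
    return p

coordination = np.array([[2.0, 0.0],
                         [0.0, 1.0]])
print(logit_qre(coordination, lam=4.0))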
34155,th,"We study a model of collective search by teams. Discoveries beget discoveries
and correlated search results are governed by a Brownian path. Search results'
variation at any point -- the search scope -- is jointly controlled. Agents
individually choose when to cease search and implement their best discovery. We
characterize equilibrium and optimal policies. Search scope is constant and
independent of search outcomes as long as no member leaves. It declines after
departures. A simple drawdown stopping boundary governs each agent's search
termination. We show the emergence of endogenous exit waves, whereby possibly
heterogeneous agents cease search all at once.",Collective Progress: Dynamics of Exit Waves,2021-07-01 15:30:02,"Doruk Cetemen, Can Urgun, Leeat Yariv","http://arxiv.org/abs/2107.00406v1, http://arxiv.org/pdf/2107.00406v1",econ.TH
34156,th,"We investigate the dynamics of wealth inequality in an economy where
households have positional preferences, with the strength of the positional
concern determined endogenously by inequality of wealth distribution in the
society. We demonstrate that in the long run such an economy converges to a
unique egalitarian steady-state equilibrium, with all households holding equal
positive wealth, when the initial inequality is sufficiently low. Otherwise,
the steady state is characterised by polarisation of households into rich, who
own all the wealth, and poor, whose wealth is zero. A fiscal policy with
government consumption funded by taxes on labour income and wealth can move the
economy from any initial state towards an egalitarian equilibrium with a higher
aggregate wealth.",Fiscal policy and inequality in a model with endogenous positional concerns,2021-07-01 15:36:17,"Kirill Borissov, Nigar Hashimzade","http://arxiv.org/abs/2107.00410v1, http://arxiv.org/pdf/2107.00410v1",econ.TH
34157,th,"We consider auction environments in which at the time of the auction bidders
observe signals about their ex-post value. We introduce a model of novice
bidders who do not know the joint distribution of signals and instead
build a statistical model relating others' bids to their own ex post value from
the data sets accessible from past similar auctions. Crucially, we assume that
only ex post values and bids are accessible while signals observed by bidders
in past auctions remain private. We consider steady-states in such
environments, and importantly we allow for correlation in the signal
distribution. We first observe that data-driven bidders may behave suboptimally
in classical auctions such as the second-price or first-price auctions whenever
there are correlations. Allowing for a mix of rational (or experienced) and
data-driven (novice) bidders results in inefficiencies in such auctions, and we
show the inefficiency extends to all auction-like mechanisms in which bidders
are restricted to submit one-dimensional (real-valued) bids.",Auction Design with Data-Driven Misspecifications,2021-07-01 20:54:41,"Philippe Jehiel, Konrad Mierendorff","http://arxiv.org/abs/2107.00640v1, http://arxiv.org/pdf/2107.00640v1",econ.TH
34158,th,"In this paper, we investigate how the COVID-19 pandemics and more precisely
the lockdown of a sector of the economy may have changed our habits and,
therefore, altered the demand for some goods even after the re-opening. In a
two-sector infinite horizon economy, we show that the demand for the goods
produced by the sector closed during the lockdown could shrink or expand with
respect to their pre-pandemic level depending on the length of the lockdown and
the relative strength of the satiation effect and the substitutability effect.
We also provide conditions under which this sector could remain inactive even
after the lockdown as well as an insight on the policy which should be adopted
to avoid this outcome.",Habits and demand changes after COVID-19,2021-07-02 11:57:34,"Mauro Bambi, Daria Ghilli, Fausto Gozzi, Marta Leocata","http://arxiv.org/abs/2107.00909v2, http://arxiv.org/pdf/2107.00909v2",econ.TH
34159,th,"This agent-based model contributes to a theory of corporate culture in which
company performance and employees' behaviour result from the interaction
between financial incentives, motivational factors and endogenous social norms.
Employees' personal values are the main drivers of behaviour. They shape
agents' decisions about how much of their working time to devote to individual
tasks, cooperative, and shirking activities. The model incorporates two aspects
of the management style, analysed both in isolation and combination: (i)
monitoring efforts affecting intrinsic motivation, i.e. the firm is either
trusting or controlling, and (ii) remuneration schemes affecting extrinsic
motivation, i.e. individual or group rewards. The simulations show that
financial incentives can (i) lead to inefficient levels of cooperation, and
(ii) reinforce value-driven behaviours, amplified by emergent social norms. The
company achieves the highest output with a flat wage and a trusting management.
Employees that value self-direction highly are pivotal, since they are strongly
(de-)motivated by the management style.","The effects of incentives, social norms, and employees' values on work performance",2021-07-02 18:34:44,"Michael Roos, Jessica Reale, Frederik Banning","http://dx.doi.org/10.1371/journal.pone.0262430, http://arxiv.org/abs/2107.01139v3, http://arxiv.org/pdf/2107.01139v3",econ.TH
34160,th,"This article proposes a characterization of admissions markets that can
predict the distribution of students at each school or college under both
centralized and decentralized admissions paradigms. The characterization builds
on recent research in stable assignment, which models students as a probability
distribution over the set of ordinal preferences and scores. Although stable
assignment mechanisms presuppose a centralized admissions process, I show that
stable assignments coincide with equilibria of a decentralized, iterative
market in which schools adjust their admissions standards in pursuit of a
target class size. Moreover, deferred acceptance algorithms for stable
assignment are a special case of a well-understood price dynamic called
t\^{a}tonnement. The second half of the article turns to a parametric
distribution of student types that enables explicit computation of the
equilibrium and is invertible in the schools' preferability parameters.
Applying this model to a public dataset produces an intuitive ranking of the
popularity of American universities and a realistic estimate of each school's
demand curve, and does so without imposing an equilibrium assumption or
requiring the granular student information used in conventional logistic
regressions.",Characterizing nonatomic admissions markets,2021-07-03 07:25:50,Max Kapur,"http://arxiv.org/abs/2107.01340v1, http://arxiv.org/pdf/2107.01340v1",econ.TH
34161,th,"We study the connection between risk aversion, number of consumers and
uniqueness of equilibrium. We consider an economy with two goods and $c$
impatience types, where each type has additive separable preferences with HARA
Bernoulli utility function,
$u_H(x):=\frac{\gamma}{1-\gamma}\left(b+\frac{a}{\gamma}x\right)^{1-\gamma}$.
We show that if $\gamma\in \left(1, \frac{c}{c-1}\right]$, the equilibrium is
unique. Moreover, the methods used, involving Newton's symmetric polynomials
and Descartes' rule of signs, enable us to offer new sufficient conditions for
uniqueness in a closed-form expression highlighting the role played by
endowments, patience and specific HARA parameters. Finally, new necessary and
sufficient conditions in ensuring uniqueness are derived for the particular
case of CRRA Bernoulli utility functions with $\gamma =3$.",Risk aversion and uniqueness of equilibrium: a polynomial approach,2021-07-05 14:28:36,"Andrea Loi, Stefano Matta","http://arxiv.org/abs/2107.01947v3, http://arxiv.org/pdf/2107.01947v3",econ.TH
34162,th,"We present a framework for analyzing the near miss effect in lotteries. A
decision maker (DM) facing a lottery falsely interprets losing outcomes that
are close to winning ones as a sign that success is within reach. As a result
of this false belief, the DM will prefer lotteries that induce a higher
frequency of near misses, even if the underlying probability of winning is
constant. We define a near miss index that measures the near miss effect
induced by a given lottery and analyze the optimal lottery design in terms of
near miss. This analysis leads us to establish a fruitful connection between
our near miss framework and the field of coding theory. Building on this
connection we compare different lottery frames and the near miss effect they
induce. Analyzing an interaction between a seller and a buyer of lotteries
allows us to gain further insight into the optimal framing of lotteries and
might offer a potential explanation as to why lotteries with a very small
probability of winning are commonplace and attractive.",The Near Miss Effect and the Framing of Lotteries,2021-07-06 11:46:45,Michael Crystal,"http://arxiv.org/abs/2107.02478v2, http://arxiv.org/pdf/2107.02478v2",econ.TH
34163,th,"This paper addresses the general problem of designing ordinal classification
methods based on comparing actions with limiting boundaries of ordered classes
(categories). The fundamental requirement of the method consists of setting a
relational system (D,S), where S is a reflexive relation and D is a transitive
relation; S should be compatible with the order of the set of classes, and
D is a subset of S. An asymmetric preference relation P is defined from S.
Other requirements are imposed on the actions which compose the limiting
boundaries between adjacent classes, in such a way that each class is closed
below and above. The paper proposes S-based and P-based assignment procedures.
Each of them is composed of two complementary assignment procedures, which
correspond through the transposition operation and should be used conjointly.
The methods work under several basic conditions on the set of limiting
boundaries. Under other more demanding separability requirements, each
procedure fulfills the set of structural properties established for other
outranking-based ordinal classification methods. Our proposal avoids the
conflict between the required correspondence through the transposition
operation and the assignment of limiting actions to the classes to which they
belong. We thus propose very diverse S and P-based ordinal classification
approaches with desirable properties, which can be designed by using decision
models with the capacity to build preference relations fulfilling the basic
requirements on S and D.",a theoretical look at ordinal classification methods based on comparing actions with limiting boundaries between adjacent classes,2021-07-07 22:01:03,"Eduardo Fernandez, Jose Rui Figueira, Jorge Navarro","http://arxiv.org/abs/2107.03440v1, http://arxiv.org/pdf/2107.03440v1",econ.TH
34164,th,"From a theoretical view, this paper addresses the general problem of
designing ordinal classification methods based on comparing actions with subsets
of actions which are representative of their classes (categories). The basic
requirement of the proposal consists of setting a relational system (D, S), where S
is a reflexive relation compatible with the preferential order of the set of
classes, and D is a transitive relation such that D is a subset of S. Different
ordinal classification methods can be derived from diverse model of preferences
fulfilling the basic conditions on S and D. Two complementary assignment
procedures compose each method, which correspond through the transposition
operation and should be used complementarily. The methods work under relatively
mild conditions on the representative actions and satisfy several fundamental
properties. ELECTRE TRI-nC, INTERCLASS-nC, and the hierarchical ELECTRE TRI-nC
with interacting criteria, can be considered as particular cases of this
general framework.",A theoretical look at ordinal classification methods based on reference sets composed of characteristic actions,2021-07-09 23:06:55,"Eduardo Fernandez, Jorge Navarro, Efrain Solares","http://arxiv.org/abs/2107.04656v1, http://arxiv.org/pdf/2107.04656v1",econ.TH
34188,th,"The Expected Shortfall (ES) is one of the most important regulatory risk
measures in finance, insurance, and statistics, which has recently been
characterized via sets of axioms from perspectives of portfolio risk management
and statistics. Meanwhile, there is a large literature on insurance design with
ES as an objective or a constraint. A visible gap is to justify the special
role of ES in insurance and actuarial science. To fill this gap, we study
characterization of risk measures induced by efficient insurance contracts,
i.e., those that are Pareto optimal for the insured and the insurer. One of our
major results is that we characterize a mixture of the mean and ES as the risk
measure of the insured and the insurer, when contracts with deductibles are
efficient. Characterization results of other risk measures, including the mean
and distortion risk measures, are also presented by linking them to different
sets of contracts.",Risk measures induced by efficient insurance contracts,2021-09-01 14:40:12,"Qiuqi Wang, Ruodu Wang, Ricardas Zitikis","http://arxiv.org/abs/2109.00314v2, http://arxiv.org/pdf/2109.00314v2",econ.TH
34165,th,"In a legal dispute, parties engage in a series of negotiations so as to
arrive at a reasonable settlement. The parties need to present a fair and
reasonable bargain in order to induce the WTA (willingness to accept) of the
plaintiff and the WTP (willingness to pay) of the defendant. Cooperation can
only be attained when the WTA of the plaintiff is less than or equal to the WTP
of the defendant. From an economic perspective, the legal process can be
considered as a marketplace for buying and selling claims. High transaction costs
decrease the reasonable bargain, thereby making cooperation more appealing to
the defendant. On the other hand, low or zero transaction costs mean the
reasonable bargain is only dependent on the expected gain from winning at trial
and the settlement benefit, thereby making cooperation more appealing to the
plaintiff. Hence, we need to find a way of adjusting the number of settlements
with the number of trials in order to maximize the social utility value. This
paper uses Cobb-Douglas optimization to formulate an optimal transaction cost
algorithm considering the confinement of a generalized legal framework.",Axiomatic Formulation of the Optimal Transaction Cost Theory in the Legal Process through Cobb-Douglas Optimization,2021-07-15 10:35:22,Kwadwo Osei Bonsu,"http://dx.doi.org/10.2478/ers-2021-0027, http://arxiv.org/abs/2107.07168v1, http://arxiv.org/pdf/2107.07168v1",econ.TH
34166,th,"We develop conditions under which individual choices and Walrasian
equilibrium prices and allocations can be exactly inferred from finite market
data. First, we consider market data that consist of individual demands as
prices and incomes change. Second, we show that finitely many observations of
individual endowments and associated Walrasian equilibrium prices, and only
prices, suffice to identify individual demands and, as a consequence,
equilibrium comparative statics.",Exact inference from finite market data,2021-07-15 15:59:50,"Felix Kübler, Raghav Malhotra, Herakles Polemarchakis","http://arxiv.org/abs/2107.07294v1, http://arxiv.org/pdf/2107.07294v1",econ.TH
34167,th,"People rationalize their past choices, even those that were mistakes in
hindsight. We propose a formal theory of this behavior. The theory predicts
that sunk costs affect later choices. Its model primitives are identified by
choice behavior and it yields tractable comparative statics.",A Theory of Ex Post Rationalization,2021-07-15 20:44:38,"Erik Eyster, Shengwu Li, Sarah Ridout","http://arxiv.org/abs/2107.07491v9, http://arxiv.org/pdf/2107.07491v9",econ.TH
34168,th,"In this paper, we show that when policy-motivated parties can commit to a
particular platform during a uni-dimensional electoral contest where valence
issues do not arise, there must be a positive association between the policies
preferred by candidates and the policies adopted in expectation in the lowest
and the highest equilibria of the electoral contest. We also show that this
need not be so if the parties cannot commit to a particular policy. The
implication is that evidence of a negative relationship between enacted and
preferred policies is suggestive of parties that hold positions from which they
would like to move yet are unable to do so.",Monotone Comparative Statics in the Calvert-Wittman Model,2021-07-16 17:05:07,"Francisco Rodríguez, Eduardo Zambrano","http://arxiv.org/abs/2107.07910v2, http://arxiv.org/pdf/2107.07910v2",econ.TH
34169,th,"The hawk-dove game admits two types of equilibria: an asymmetric pure
equilibrium in which players in one population play hawk and players in the
other population play dove, and an inefficient symmetric mixed equilibrium, in
which hawks are frequently matched against each other. The existing literature
shows that populations will converge to playing one of the pure equilibria from
almost any initial state. By contrast, we show that plausible sampling
dynamics, in which agents occasionally revise their actions by observing either
opponents' behavior or payoffs in a few past interactions, can induce the
opposite result: global convergence to one of the inefficient mixed stationary
states.",Sampling dynamics and stable mixing in hawk-dove games,2021-07-18 15:32:24,"Srinivas Arigapudi, Yuval Heller, Amnon Schreiber","http://arxiv.org/abs/2107.08423v4, http://arxiv.org/pdf/2107.08423v4",econ.TH
34170,th,"We present mathematical definitions for rights structures, government and
non-attenuation in a generalized N-person game (Debreu abstract economy), thus
providing a formal basis for property rights theory.",A mathematical definition of property rights in a Debreu economy,2021-07-14 19:51:55,Abhimanyu Pallavi Sudhir,"http://arxiv.org/abs/2107.09651v1, http://arxiv.org/pdf/2107.09651v1",econ.TH
34171,th,"Frequent violations of fair principles in real-life settings raise the
fundamental question of whether such principles can guarantee the existence of
a self-enforcing equilibrium in a free economy. We show that elementary
principles of distributive justice guarantee that a pure-strategy Nash
equilibrium exists in a finite economy where agents freely (and
non-cooperatively) choose their inputs and derive utility from their pay. Chief
among these principles are: 1) your pay should not depend on your name, and
2) a more productive agent should not earn less. When these principles are
violated, an equilibrium may not exist. Moreover, we uncover an intuitive
condition -- technological monotonicity -- that guarantees equilibrium
uniqueness and efficiency. We generalize our findings to economies with social
justice and inclusion, implemented in the form of progressive taxation and
redistribution, and guaranteeing a basic income to unproductive agents. Our
analysis uncovers a new class of strategic form games by incorporating
normative principles into non-cooperative game theory. Our results rely on no
particular assumptions, and our setup is entirely non-parametric. Illustrations
of the theory include applications to exchange economies, surplus distribution
in a firm, contagion and self-enforcing lockdown in a networked economy, and
bias in the academic peer-review system.
  Keywords: Market justice; Social justice; Inclusion; Ethics; Discrimination;
Self-enforcing contracts; Fairness in non-cooperative games; Pure strategy Nash
equilibrium; Efficiency.
  JEL Codes: C72, D30, D63, J71, J38",A Free and Fair Economy: A Game of Justice and Inclusion,2021-07-27 18:11:22,"Ghislain H. Demeze-Jouatsa, Roland Pongou, Jean-Baptiste Tondji","http://arxiv.org/abs/2107.12870v1, http://arxiv.org/pdf/2107.12870v1",econ.TH
34172,th,"The Condorcet Jury Theorem or the Miracle of Aggregation are frequently
invoked to ensure the competence of some aggregate decision-making processes.
In this article we explore an estimation of the prior probability of the thesis
predicted by the theorem (if there are enough voters, majority rule is a
competent decision procedure). We use tools from measure theory to conclude
that, prima facie, it will fail almost surely. To update this prior, either more
evidence in favor of competence or a modification of the decision rule would be
needed. Following the latter, we investigate how to obtain an almost
sure competent information aggregation mechanism for almost any evidence on
voter competence (including the less favorable ones). To do so, we replace
simple majority rule with a weighted majority rule based on weights correlated
with epistemic rationality such that every voter is guaranteed a minimal weight
equal to one.",On the probability of the Condorcet Jury Theorem or the Miracle of Aggregation,2021-08-02 12:05:53,Álvaro Romaniega,"http://dx.doi.org/10.1016/j.mathsocsci.2022.06.002, http://arxiv.org/abs/2108.00733v4, http://arxiv.org/pdf/2108.00733v4",econ.TH
34173,th,"It is common to encounter the situation with uncertainty for decision makers
(DMs) in dealing with a complex decision making problem. The existing evidence
shows that people usually fear the extreme uncertainty named as the unknown.
This paper reports the modified version of the typical regret theory by
considering the fear experienced by DMs for the unknown. Based on the responses
of undergraduate students to the hypothetical choice problems with an unknown
outcome, some experimental evidences are observed and analyzed. The framework
of the modified regret theory is established by considering the effects of an
unknown outcome. A fear function is equipped and some implications are proved.
The behavioral foundation of the modified regret theory is further developed by
modifying the axiomatic properties of the existing one as those based on the
utility function; and it is recalled as the utility-based behavioral foundation
for convenience. The application to the medical decision making with an unknown
risk is studied and the effects of the fear function are investigated. The
observations reveal that the existence of an unknown outcome could enhance,
impede or reverse the preference relation of people in a choice problem, which
can be predicted by the developed regret theory.",Regret theory under fear of the unknown,2021-08-04 06:11:45,Fang Liu,"http://arxiv.org/abs/2108.01825v1, http://arxiv.org/pdf/2108.01825v1",econ.TH
34174,th,"Welfare economics relies on access to agents' utility functions: we revisit
classical questions in welfare economics, assuming access to data on agents'
past choices instead of their utilities. Our main result considers the
existence of utilities that render a given allocation Pareto optimal. We show
that a candidate allocation is efficient for some utilities consistent with the
choice data if and only if it is efficient for an incomplete relation derived
from the revealed preference relations and convexity. Similar ideas are used to
make counterfactual choices for a single consumer, to compare policies by the
Kaldor criterion, and to determine which allocations, and which prices, may be
part of a Walrasian equilibrium.",Empirical Welfare Economics,2021-08-06 22:18:43,"Christopher P Chambers, Federico Echenique","http://arxiv.org/abs/2108.03277v4, http://arxiv.org/pdf/2108.03277v4",econ.TH
34175,th,"This paper characterizes lexicographic preferences over alternatives that are
identified by a finite number of attributes. Our characterization is based on
two key concepts: a weaker notion of continuity called 'mild continuity'
(strict preference order between any two alternatives that are different with
respect to every attribute is preserved around their small neighborhoods) and
an 'unhappy set' (any alternative outside such a set is preferred to all
alternatives inside). Three key aspects of our characterization are: (i) use of
continuity arguments, (ii) the stepwise approach of looking at two attributes
at a time, and (iii) in contrast with the previous literature, we do not impose
noncompensation on the preference and consider an alternative weaker condition.",A characterization of lexicographic preferences,2021-08-06 22:22:44,"Mridu Prabal Goswami, Manipushpak Mitra, Debapriya Sen","http://arxiv.org/abs/2108.03280v1, http://arxiv.org/pdf/2108.03280v1",econ.TH
34176,th,"We highlight the tension between stability and equality in non transferable
utility matching. We consider many to one matchings and refer to the two sides
of the market as students and schools. The latter have aligned preferences,
which in this context means that a school's utility is the sum of its students'
utilities. We show that the unique stable allocation displays extreme
inequality between matched pairs.",Stable and extremely unequal,2021-08-14 19:58:41,"Alfred Galichon, Octavia Ghelfi, Marc Henry","http://arxiv.org/abs/2108.06587v2, http://arxiv.org/pdf/2108.06587v2",econ.TH
34177,th,"This paper introduces a framework to study innovation in a strategic setting,
in which innovators allocate their resources between exploration and
exploitation in continuous time. Exploration creates public knowledge, while
exploitation delivers private benefits. Through the analysis of a class of
Markov equilibria, we demonstrate that knowledge spillovers accelerate
knowledge creation and expedite its availability, thereby encouraging
innovators to increase exploration. The prospect of the ensuing superior
long-term innovations further motivates exploration, giving rise to a positive
feedback loop. This novel feedback loop can substantially mitigate the
free-riding problem arising from knowledge spillovers.",Strategic Exploration for Innovation,2021-08-16 19:54:49,Shangen Li,"http://arxiv.org/abs/2108.07218v3, http://arxiv.org/pdf/2108.07218v3",econ.TH
34178,th,"We propose a boundedly rational model of choice where agents eliminate
dominated alternatives using a transitive rationale before making a choice
using a complete rationale. This model is related to the seminal two-stage
model of Manzini and Mariotti (2007), the Rational Shortlist Method (RSM). We
analyze the model through reversals in choice and provide its behavioral
characterization. The procedure satisfies a weaker version of the Weak Axiom of
Revealed Preference (WARP) allowing for at most two reversals in choice in
terms of set inclusion for any pair of alternatives. We show that the
underlying rationales can be identified from the observable reversals in the
choice. We also characterize a variant of this model in which both the
rationales are transitive.",Choice by Rejection,2021-08-17 06:43:49,"Bhavook Bhardwaj, Kriti Manocha","http://arxiv.org/abs/2108.07424v1, http://arxiv.org/pdf/2108.07424v1",econ.TH
34179,th,"We propose a theoretical framework under which preference profiles can be
meaningfully compared. Specifically, given a finite set of feasible allocations
and a preference profile, we first define a ranking vector of an allocation as
the vector of all individuals' rankings of this allocation. We then define a
partial order on preference profiles and write ""$P \geq P^{'}$"", if there
exists an onto mapping $\psi$ from the Pareto frontier of $P^{'}$ onto the
Pareto frontier of $P$, such that the ranking vector of any Pareto efficient
allocation $x$ under $P^{'}$ is weakly dominated by the ranking vector of the
image allocation $\psi(x)$ under $P$. We provide a characterization of the
maximal and minimal elements under the partial order. In particular, we
illustrate how an individualistic form of social preferences can be maximal in
a specific setting. We also discuss how the framework can be further
generalized to incorporate additional economic ingredients.",A Partial Order on Preference Profiles,2021-08-19 06:17:34,Wayne Yuan Gao,"http://arxiv.org/abs/2108.08465v2, http://arxiv.org/pdf/2108.08465v2",econ.TH
34180,th,"We characterize ex-post implementable allocation rules for single object
auctions under quasi-linear preferences with convex interdependent value
functions. We show that requiring ex-post implementability is equivalent to
requiring that the allocation rule must satisfy a condition that we call
eventual monotonicity (EM), which is a weakening of monotonicity, a familiar
condition used to characterize dominant strategy implementation.",Ex-post implementation with interdependent values,2021-08-21 23:29:16,"Saurav Goyal, Aroon Narayanan","http://arxiv.org/abs/2108.09580v1, http://arxiv.org/pdf/2108.09580v1",econ.TH
34181,th,"We show the role that an important equation first studied by Fritz John plays
in mechanism design.",Fritz John's equation in mechanism design,2021-08-24 09:26:35,Alfred Galichon,"http://arxiv.org/abs/2108.10538v1, http://arxiv.org/pdf/2108.10538v1",econ.TH
34182,th,"This paper studies single-peaked domains where the designer is uncertain
about the underlying alignment according to which the domain is single-peaked.
The underlying alignment is common knowledge amongst agents, but preferences
are private knowledge. Thus, the state of the world has both a public and
private element, with the designer uninformed of both. I first posit a relevant
solution concept called implementation in mixed information equilibria, which
requires Nash implementation in the public information and dominant strategy
implementation in the private information given the public information. I then
identify necessary and sufficient conditions for social rules to be
implementable. The characterization is used to identify unanimous and anonymous
implementable social rules for various belief structures of the designer, which
basically boils down to picking the right rules from the large class of median
rules identified by Moulin (1980), and hence this result can be seen as
identifying which median rules are robust to designer uncertainty.",Single-peaked domains with designer uncertainty,2021-08-25 17:48:38,Aroon Narayanan,"http://arxiv.org/abs/2108.11268v1, http://arxiv.org/pdf/2108.11268v1",econ.TH
34183,th,"Goods and services -- public housing, medical appointments, schools -- are
often allocated to individuals who rank them similarly but differ in their
preference intensities. We characterize optimal allocation rules when
individual preferences are known and when they are not. Several insights
emerge. First-best allocations may involve assigning some agents ""lotteries""
between high- and low-ranked goods. When preference intensities are private
information, second-best allocations always involve such lotteries and,
crucially, may coincide with first-best allocations. Furthermore, second-best
allocations may entail disposal of services. We discuss a market-based
alternative and show how it differs.",Who Cares More? Allocation with Diverse Preference Intensities,2021-08-26 23:30:19,"Pietro Ortoleva, Evgenii Safonov, Leeat Yariv","http://arxiv.org/abs/2108.12025v1, http://arxiv.org/pdf/2108.12025v1",econ.TH
34184,th,"When making decisions under risk, people often exhibit behaviors that
classical economic theories cannot explain. Newer models that attempt to
account for these irrational behaviors often lack a neuroscientific basis and
require the introduction of subjective and problem-specific constructs. Here,
we present a decision-making model inspired by the prediction error signals and
introspective neuronal replay reported in the brain. In the model, decisions
are chosen based on anticipated surprise, defined by a nonlinear average of the
differences between individual outcomes and a reference point. The reference
point is determined by the expected value of the possible outcomes, which can
dynamically change during the mental simulation of decision-making problems
involving sequential stages. Our model elucidates the contribution of each
stage to the appeal of available options in a decision-making problem. This
allows us to explain several economic paradoxes and gambling behaviors. Our
work could help bridge the gap between decision-making theories in economics
and neurosciences.",An economic decision-making model of anticipated surprise with dynamic expectation,2021-08-27 18:23:07,"Ho Ka Chan, Taro Toyoizumi","http://arxiv.org/abs/2108.12347v2, http://arxiv.org/pdf/2108.12347v2",econ.TH
34185,th,"In discrete matching markets, substitutes and complements can be
unidirectional between two groups of workers when members of one group are more
important or competent than those of the other group for firms. We show that a
stable matching exists and can be found by a two-stage Deferred Acceptance
mechanism when firms' preferences satisfy a unidirectional substitutes and
complements condition. This result applies to both firm-worker matching and
controlled school choice. Under the framework of matching with continuous
monetary transfers and quasi-linear utilities, we show that substitutes and
complements are bidirectional for a pair of workers.",Unidirectional substitutes and complements,2021-08-28 07:59:31,Chao Huang,"http://arxiv.org/abs/2108.12572v1, http://arxiv.org/pdf/2108.12572v1",econ.TH
34186,th,"We study binary coordination games with random utility played in networks. A
typical equilibrium is fuzzy -- it has positive fractions of agents playing
each action. The set of average behaviors that may arise in an equilibrium
typically depends on the network. The largest set (in the set inclusion sense)
is achieved by a network that consists of a large number of copies of a large
complete graph. The smallest set (in the set inclusion sense) is achieved on a
lattice-type network. It consists of a single outcome that corresponds to a
novel version of risk dominance that is appropriate for games with random
utility.",Fuzzy Conventions,2021-08-30 21:53:27,Marcin Pęski,"http://arxiv.org/abs/2108.13474v1, http://arxiv.org/pdf/2108.13474v1",econ.TH
34187,th,"A great deal of empirical research has examined who falls for misinformation
and why. Here, we introduce a formal game-theoretic model of engagement with
news stories that captures the strategic interplay between (mis)information
consumers and producers. A key insight from the model is that observed patterns
of engagement do not necessarily reflect the preferences of consumers. This is
because producers seeking to promote misinformation can use strategies that
lead moderately inattentive readers to engage more with false stories than true
ones -- even when readers prefer more accurate over less accurate information.
We then empirically test people's preferences for accuracy in the news. In
three studies, we find that people strongly prefer to click and share news they
perceive as more accurate -- both in a general population sample, and in a
sample of users recruited through Twitter who had actually shared links to
misinformation sites online. Despite this preference for accurate news -- and
consistent with the predictions of our model -- we find markedly different
engagement patterns for articles from misinformation versus mainstream news
sites. Using 1,000 headlines from 20 misinformation and 20 mainstream news
sites, we compare Facebook engagement data with 20,000 accuracy ratings
collected in a survey experiment. Engagement with a headline is negatively
correlated with perceived accuracy for misinformation sites, but positively
correlated with perceived accuracy for mainstream sites. Taken together, these
theoretical and empirical results suggest that consumer preferences cannot be
straightforwardly inferred from empirical patterns of engagement.",The Game Theory of Fake News,2021-08-31 11:55:47,"Alexander J. Stewart, Antonio A. Arechar, David G. Rand, Joshua B. Plotkin","http://arxiv.org/abs/2108.13687v6, http://arxiv.org/pdf/2108.13687v6",econ.TH
34189,th,"A screening instrument is costly if it is socially wasteful and productive
otherwise. A principal screens an agent with multidimensional private
information and quasilinear preferences that are additively separable across
two components: a one-dimensional productive component and a multidimensional
costly component. Can the principal improve upon simple one-dimensional
mechanisms by also using the costly instruments? We show that if the agent has
preferences between the two components that are positively correlated in a
suitably defined sense, then simply screening the productive component is
optimal. The result holds for general type and allocation spaces, and allows
for nonlinear and interdependent valuations. We discuss applications to
multiproduct pricing (including bundling, quality discrimination, and upgrade
pricing), intertemporal price discrimination, and labor market screening.",Costly Multidimensional Screening,2021-09-01 19:58:23,Frank Yang,"http://arxiv.org/abs/2109.00487v3, http://arxiv.org/pdf/2109.00487v3",econ.TH
34190,th,"We present a deliberation model where a group of individuals with
heterogeneous preferences iteratively forms expert committees whose members are
tasked with the updating of an exogenously given status quo change proposal.
Every individual holds some initial voting power that is represented by a
finite amount of indivisible units with some underlying value. Iterations
happen in three stages. In the first stage, everyone decides which units to
keep for themselves and where to distribute the rest. With every ownership
mutation, a unit's underlying value diminishes by some exogenously given
amount. In the second stage, the deliberative committee is formed by the
individuals with the most accumulated voting power. These experts can author
corrections to the proposal which are proportional to their accumulated power.
In the third stage, if an individual outside of the committee disagrees with a
correction, she can vote against it with her remaining voting power. A
correction is discarded if more than half of the total voting power outside of
the committee is against it. If either the committee or the proposal remains
unchanged for two consecutive iterations, the process stops. We show that this
will happen in finite time.",Deliberative Democracy with Dilutive Voting Power Sharing,2021-09-03 13:56:34,Dimitrios Karoukis,"http://arxiv.org/abs/2109.01436v2, http://arxiv.org/pdf/2109.01436v2",econ.TH
34191,th,"The locations problem in infinite ethics concerns the relative moral status
of different categories of potential bearers of value, the primary examples of
which are people and points in time. The challenge is to determine which
category of value bearers is of ultimate moral significance: the ultimate
locations, for short. This paper defends the view that the ultimate locations
are 'people at times'. A person at a time is not a specific person, but the
person born at a specific point in time (de dicto). The main conclusion of the
paper is that the unsettling implications of the time- and person-centered
approaches to infinite ethics can be avoided. Most notably, a broad class of
worlds that person-centered views deem incomparable can be strictly ranked.",Infinite utility: counterparts and ultimate locations,2021-09-04 15:08:11,Adam Jonsson,"http://arxiv.org/abs/2109.01852v2, http://arxiv.org/pdf/2109.01852v2",econ.TH
34192,th,"Agents exert hidden effort to produce randomly-sized innovations in a
technology they share. Returns from using the technology grow as it develops,
but so does the opportunity cost of effort, due to an
'exploration-exploitation' trade-off. As monitoring is imperfect, there exists
a unique (strongly) symmetric equilibrium, and effort in any equilibrium ceases
no later than in the single-agent problem. Small innovations may hurt all
agents in the symmetric equilibrium, as they severely reduce effort. Allowing
agents to discard innovations increases effort and payoffs, preserving
uniqueness. Under natural conditions, payoffs rise above those of all
equilibria with forced disclosure.",Incentives for Collective Innovation,2021-09-04 18:01:05,Gregorio Curello,"http://arxiv.org/abs/2109.01885v7, http://arxiv.org/pdf/2109.01885v7",econ.TH
34193,th,"Cheng(2021) proposes and characterizes Relative Maximum Likelihood (RML)
updating rule when the ambiguous beliefs are represented by a set of priors.
Relatedly, this note proposes and characterizes Extended RML updating rule when
the ambiguous beliefs are represented by a convex capacity. Two classical
updating rules for convex capacities, Dempster-Shafer (Shafer, 1976) and
Fagin-Halpern rules (Fagin and Halpern, 1990) are included as special cases of
Extended RML.",Extended Relative Maximum Likelihood Updating of Choquet Beliefs,2021-09-06 19:48:29,Xiaoyu Cheng,"http://arxiv.org/abs/2109.02597v1, http://arxiv.org/pdf/2109.02597v1",econ.TH
34194,th,"We introduce a notion of competitive signaling equilibrium (CSE) in
one-to-one matching markets with a continuum of heterogeneous senders and
receivers. We then study monotone CSE where equilibrium outcomes - sender
actions, receiver reactions, beliefs, and matching - are all monotone in the
stronger set order. We show that if the sender utility is monotone-supermodular
and the receiver's utility is weakly monotone-supermodular, a CSE is stronger
monotone if and only if it passes Criterion D1 (Cho and Kreps (1987), Banks and
Sobel (1987)). Given any interval of feasible reactions that receivers can
take, we fully characterize a unique stronger monotone CSE and establish its
existence with quasilinear utility functions.",Monotone Equilibrium in Matching Markets with Signaling,2021-09-08 02:36:09,"Seungjin Han, Alex Sam, Youngki Shin","http://arxiv.org/abs/2109.03370v4, http://arxiv.org/pdf/2109.03370v4",econ.TH
34195,th,"We introduce a novel framework that considers how a firm could fairly
compensate its workers. A firm has a group of workers, each of whom has varying
productivities over a set of tasks. After assigning workers to tasks, the firm
must then decide how to distribute its output to the workers. We first consider
three compensation rules and various fairness properties they may satisfy. We
show that among efficient and symmetric rules: the Egalitarian rule is the only
rule that does not decrease a worker's compensation when every worker becomes
weakly more productive (Group Productivity Monotonicity); the Shapley Value
rule is the only rule that, for any two workers, equalizes the impact one
worker has on the other worker's compensation (Balanced Impact); and the
Individual Contribution rule is the only rule that is invariant to the removal
of workers and their assigned tasks (Consistency). We introduce other rules and
axioms, and relate each rule to each axiom.",Fair Compensation,2021-09-10 02:11:43,John E. Stovall,"http://arxiv.org/abs/2109.04583v3, http://arxiv.org/pdf/2109.04583v3",econ.TH
34196,th,"This note provides a critical discussion of the \textit{Critical
Cost-Efficiency Index} (CCEI) as used to assess deviations from
utility-maximizing behavior. I argue that the CCEI is hard to interpret, and
that it can disagree with other plausible measures of ""irrational"" behavior.
The common interpretation of CCEI as wasted income is questionable. Moreover, I
show that one agent may have more unstable preferences than another, but seem
more rational according to the CCEI. This calls into question the (now common)
use of CCEI as an ordinal and cardinal measure of degrees of rationality.",On the meaning of the Critical Cost Efficiency Index,2021-09-14 01:57:51,Federico Echenique,"http://arxiv.org/abs/2109.06354v4, http://arxiv.org/pdf/2109.06354v4",econ.TH
34197,th,"What are the testable implications of the Bayesian rationality hypothesis?
This paper argues that the absolute continuity of posteriors with respect to
priors constitutes the entirety of the empirical content of this hypothesis. I
consider a decision-maker who chooses a sequence of actions and an
econometrician who observes the decision-maker's actions, but not her signals.
The econometrician is interested in testing the hypothesis that the
decision-maker follows Bayes' rule to update her belief. I show that without a
priori knowledge of the set of models considered by the decision-maker, there
are almost no observations that would lead the econometrician to conclude that
the decision-maker is not Bayesian. The absolute continuity of posteriors with
respect to priors remains the only implication of Bayesian rationality, even if
the set of actions is sufficiently rich that the decision-maker's actions fully
reveal her beliefs, and even if the econometrician observes a large number of
ex ante identical agents who observe i.i.d. signals and face the same sequence
of decision problems.",Tests of Bayesian Rationality,2021-09-15 02:04:11,Pooya Molavi,"http://arxiv.org/abs/2109.07007v3, http://arxiv.org/pdf/2109.07007v3",econ.TH
34198,th,"We model a dynamic data economy with fully endogenous growth where agents
generate data from consumption and share them with innovation and production
firms. Different from other productive factors such as labor or capital, data
are nonrival in their uses across sectors which affect both the level and
growth of economic outputs. Despite the vertical nonrivalry, the innovation
sector dominates the production sector in data usage and contribution to growth
because (i) data are dynamically nonrival and add to knowledge accumulation,
and (ii) innovations ""desensitize"" raw data and enter production as knowledge,
which allays consumers' privacy concerns. Data uses in both sectors interact to
generate spillover of allocative distortion and exhibit an apparent
substitutability due to labor's rivalry and complementarity with data.
Consequently, growth rates under a social planner and a decentralized
equilibrium differ, which is novel in the literature and has policy
implications. Specifically, consumers' failure to fully internalize knowledge
spillover when bearing privacy costs, combined with firms' market power,
underprices data and inefficiently limits their supply, leading to
underemployment in the innovation sector and suboptimal long-run growth.
Improving data usage efficiency is ineffective in mitigating the
underutilization of data, but interventions in the data market and direct
subsidies hold promise.",Endogenous Growth Under Multiple Uses of Data,2021-09-21 11:43:06,"Lin William Cong, Wenshi Wei, Danxia Xie, Longtian Zhang","http://arxiv.org/abs/2109.10027v1, http://arxiv.org/pdf/2109.10027v1",econ.TH
34199,th,"We build an endogenous growth model with consumer-generated data as a new key
factor for knowledge accumulation. Consumers balance between providing data for
profit and potential privacy infringement. Intermediate good producers use data
to innovate and contribute to the final good production, which fuels economic
growth. Data are dynamically nonrival with flexible ownership while their
production is endogenous and policy-dependent. Although a decentralized economy
can grow at the same rate (but at different levels) as the social optimum
on the Balanced Growth Path, the R&D sector underemploys labor and overuses
data -- an inefficiency mitigated by subsidizing innovators instead of direct
data regulation. As a data economy emerges and matures, consumers' data
provision endogenously declines after a transitional acceleration, allaying
long-run privacy concerns but portending initial growth traps that call for
interventions.","Knowledge Accumulation, Privacy, and Growth in a Data Economy",2021-09-21 11:44:34,"Lin William Cong, Danxia Xie, Longtian Zhang","http://dx.doi.org/10.1287/mnsc.2021.3986, http://arxiv.org/abs/2109.10028v1, http://arxiv.org/pdf/2109.10028v1",econ.TH
34200,th,"I describe a Bayesian persuasion problem where Receiver has a private type
representing a cutoff for choosing Sender's preferred action, and Sender has
maxmin preferences over all Receiver type distributions with known mean and
bounds. This problem can be represented as a zero-sum game where Sender chooses
a distribution of posterior mean beliefs that is a mean-preserving contraction
of the prior over states, and an adversarial Nature chooses a Receiver type
distribution with the known mean; the player with the higher realization from
their chosen distribution wins. I formalize the connection between maxmin
persuasion and similar games used to model political spending, all-pay
auctions, and competitive persuasion. In both a standard binary-state setting
and a new continuous-state setting, Sender optimally linearizes the prior
distribution over states to create a distribution of posterior means that is
uniform on a known interval with an atom at the lower bound of its support.",Persuasion with Ambiguous Receiver Preferences,2021-09-23 20:57:14,Eitan Sapiro-Gheiler,"http://arxiv.org/abs/2109.11536v5, http://arxiv.org/pdf/2109.11536v5",econ.TH
34201,th,"This paper proposes the notion of robust PBE in a general competing mechanism
game of incomplete information where a mechanism allows its designer to send a
message to himself at the same time agents send messages. It identifies the
utility environments where the notion of robust PBE coincides with that of
strongly robust PBE (Epstein and Peters (1999), Han (2007)) and with that of
robust PBE respectively. If each agent's utility function is additively
separable with respect to principals' actions, it is possible to provide the
full characterization of equilibrium allocations under the notion of robust PBE
and its variations, in terms of Bayesian incentive compatible (BIC) direct
mechanisms, without reference to the set of arbitrary general mechanisms
allowed in the game. However, in the standard competing mechanism game, the
adoption of robust PBE as the solution concept does not lead to the full
characterization of equilibrium allocations in terms of BIC direct mechanisms
even with agents' separable utility functions.",Robust Equilibria in General Competing Mechanism Games,2021-09-27 19:41:45,Seungjin Han,"http://arxiv.org/abs/2109.13177v3, http://arxiv.org/pdf/2109.13177v3",econ.TH
34202,th,"This paper proposes Hodge Potential Choice (HPC), a new solution for abstract
games with irreflexive dominance relations. The solution is formulated by
applying geometric tools, such as differential forms and Hodge decomposition, to
abstract games. We provide a workable algorithm for the proposed solution along
with a new data structure for abstract games. From a game-theoretic viewpoint,
HPC overcomes several weaknesses of conventional solutions. HPC coincides with
Copeland Choice in complete cases and can be extended to solve games with
marginal strengths. We prove that the Hodge Potential Choice possesses the
following axiomatic properties: neutrality, strong monotonicity,
dominance-cycle-reversing independence, and sensitivity to mutual dominance. To
compare HPC with Copeland Choice on large samples of games, we design digital
experiments with randomly generated abstract games of different sizes and
degrees of completeness. The experimental results demonstrate the advantage of
HPC in a statistical sense.",New Solution based on Hodge Decomposition for Abstract Games,2021-09-29 19:37:51,"Yihao Luo, Jinhui Pang, Weibin Han, Huafei Sun","http://arxiv.org/abs/2109.14539v3, http://arxiv.org/pdf/2109.14539v3",econ.TH
34210,th,"We study a model of moral hazard with heterogeneous beliefs where each of
agent's actions gives rise to a pair of probability distributions over output
levels, one representing the beliefs of the agent and the other those of the
principal. The agent's relative optimism or pessimism dictates whether the
contract is high-powered (i.e. with high variability between wage levels) or
low-powered. When the agent is sufficiently more optimistic than the principal,
the trade-off between risk-sharing and incentive provision may be eliminated.
Using Monotone Likelihood Ratio ranking to model disagreement in the parties'
beliefs, we show that incentives move in the direction of increasing
disagreement. In general, the shape of the wage scheme is sensitive to the
differences in beliefs. Thereby, key features of optimal incentive contracts
under common beliefs do not readily generalize to the case of belief
heterogeneity.",Moral Hazard with Heterogeneous Beliefs,2021-10-08 23:24:40,"Martin Dumav, Urmee Khan, Luca Rigotti","http://arxiv.org/abs/2110.04368v1, http://arxiv.org/pdf/2110.04368v1",econ.TH
34203,th,"We introduce odds-ratios in discrete choice models and utilize them to
formulate bounds instrumental to the development of heuristics for the
assortment optimization problem subject to totally unimodular constraints, and
to assess the benefit of personalized assortments. These heuristics, which
only require the first and last-choice probabilities of the underlying discrete
choice model, are broadly applicable, efficient, and come with worst-case
performance guarantees. We propose a clairvoyant firm model to assess, in the
limit, the potential benefits of personalized assortments. Our numerical study
indicates that when the mean utilities of the products are heterogeneous among
the consumer types, and the variance of the utilities is small, then firms can
gain substantial benefits from personalized assortments. We support these
observations, and others, with theoretical findings. For regular DCMs, we show
that a clairvoyant firm can generate up to $n$ times more in expected revenues
than a traditional firm. For discrete choice models with independent value
gaps, we demonstrate that the clairvoyant firm can earn at most twice as much
as a traditional firm. Prophet inequalities are also shown to hold for a
variety of DCMs with dependent value gaps, including the MNL and GAM. While the
consumers' surplus can potentially be larger under personalized assortments,
clairvoyant firms with pricing power can extract all surplus, and earn
arbitrarily more than traditional firms that optimize over prices but do not
personalize them. For the price-aware MNL, however, a clairvoyant firm can earn
at most $\exp(1)$ more than a traditional firm.","Bounds, Heuristics, and Prophet Inequalities for Assortment Optimization",2021-09-30 08:57:57,"Guillermo Gallego, Gerardo Berbeglia","http://arxiv.org/abs/2109.14861v4, http://arxiv.org/pdf/2109.14861v4",econ.TH
34204,th,"This paper develops a monetary-search model where the money multiplier is
endogenously determined. I show that when the central bank pays interest on
reserves, the money multiplier and the quantity of the reserve can depend on
the nominal interest rate and the interest on reserves. The calibrated model
can explain the evolution of the money multiplier and the excess
reserve-deposit ratio in the pre-2008 and post-2008 periods. The quantitative
analysis suggests that the dramatic changes in the money multiplier after 2008
are driven by the introduction of the interest on reserves with a low nominal
interest rate.",Money Creation and Banking: Theory and Evidence,2021-09-30 16:05:12,Heon Lee,"http://arxiv.org/abs/2109.15096v1, http://arxiv.org/pdf/2109.15096v1",econ.TH
34205,th,"We analyze competition on nonlinear prices in homogeneous goods markets with
consumer search. In equilibrium, firms offer two-part tariffs consisting of a
linear price and a lump-sum fee. The equilibrium production is socially
efficient, as the linear price of equilibrium two-part tariffs equals the
marginal cost of production. Firms thus compete in lump-sum fees, which are
dispersed in equilibrium. We show that sellers enjoy higher profits, whereas
consumers are worse off with two-part tariffs than with linear prices. Competition
softens because with two-part tariffs firms can make effective per-consumer
demand less elastic than the actual demand.","Nonlinear Prices, Homogeneous Goods, Search",2021-09-30 18:16:05,Atabek Atayev,"http://arxiv.org/abs/2109.15198v1, http://arxiv.org/pdf/2109.15198v1",econ.TH
34206,th,"In many markets buyers are poorly informed about which firms sell the product
(product availability) and prices, and therefore have to spend time to obtain
this information. In contrast, sellers typically have a better idea about which
rivals offer the product. Information asymmetry between buyers and sellers on
product availability, rather than just prices, has not been scrutinized in the
literature on consumer search. We propose a theoretical model that incorporates
this kind of information asymmetry into a simultaneous search model. Our key
finding is that greater product availability may harm buyers by mitigating
their willingness to search and, thus, softening competition.",Uncertain Product Availability in Search Markets,2021-09-30 18:33:24,Atabek Atayev,"http://arxiv.org/abs/2109.15211v1, http://arxiv.org/pdf/2109.15211v1",econ.TH
34207,th,"Consumers can acquire information through their own search efforts or through
their social network. Information diffusion via word-of-mouth communication
leads to some consumers free-riding on their ""friends"" and less information
acquisition via active search. Free-riding also has an important positive
effect, however, in that consumers that do not actively search themselves are
more likely to be able to compare prices before purchase, imposing competitive
pressure on firms. We show how market prices depend on the characteristics of
the network and on search cost. For example, if the search cost becomes small,
price dispersion disappears, while the price level converges to the monopoly
level, implying that expected prices are decreasing for small enough search
cost. More connected societies have lower market prices, while price dispersion
remains even in fully connected societies.",Information Acquisition and Diffusion in Markets,2021-09-30 20:33:36,"Atabek Atayev, Maarten Janssen","http://arxiv.org/abs/2109.15288v1, http://arxiv.org/pdf/2109.15288v1",econ.TH
34208,th,"In markets with search frictions, consumers can acquire information about
goods either through costly search or from friends via word-of-mouth (WOM)
communication. How does sellers' market power react to a very large increase in
the number of consumers' friends with whom they engage in WOM? The answer to
the question depends on whether consumers are freely endowed with price
information. If acquiring price quotes is costly, equilibrium prices are
dispersed and the expected price is higher than the marginal cost of
production. This implies that firms retain market power even if price
information is disseminated among a very large number of consumers due to
technological progress, such as social networking websites.",Truly Costly Search and Word-of-Mouth Communication,2021-09-30 21:03:12,Atabek Atayev,"http://arxiv.org/abs/2110.00032v1, http://arxiv.org/pdf/2110.00032v1",econ.TH
34209,th,"The sustainable use of common-pool resources (CPRs) is a major environmental
governance challenge because of their possible over-exploitation. Research in
this field has overlooked the feedback between user decisions and resource
dynamics. Here we develop an online game to perform a set of experiments in
which users of the same CPR decide on their individual harvesting rates, which
in turn depend on the resource dynamics. We show that, if users share common
goals, a high level of self-organized cooperation emerges, leading to long-term
resource sustainability. Otherwise, selfish/individualistic behaviors lead to
resource depletion (""Tragedy of the Commons""). To explain these results, we
develop an analytical model of coupled resource-decision dynamics based on
optimal control theory and show how this framework reproduces the empirical
results.",The emergence of cooperation from shared goals in the Systemic Sustainability Game of common pool resources,2021-10-01 18:06:07,"Chengyi Tu, Paolo DOdorico, Zhe Li, Samir Suweis","http://arxiv.org/abs/2110.00474v1, http://arxiv.org/pdf/2110.00474v1",econ.TH
34211,th,"This paper studies the optimal mechanism to motivate effort in a dynamic
principal-agent model without transfers. An agent is engaged in a task with
uncertain future rewards and can shirk irreversibly at any time. The principal
knows the reward of the task and provides information to the agent over time in
order to motivate effort. We derive the optimal information policy in closed
form and thus identify two conditions, each of which guarantees that delayed
disclosure is valuable. First, if the principal is impatient compared to the
agent, she prefers the front-loaded effort schedule induced by delayed
disclosure. In a stationary environment, delayed disclosure is beneficial if
and only if the principal is less patient than the agent. Second, if the
environment makes the agent become pessimistic over time in absence of any
information disclosure, then providing delayed news can counteract this
downward trend in the agent's belief and encourage the agent to work longer.
Notably, the level of patience remains a crucial determinant of the optimal
policy structure.",Motivating Effort with Information about Future Rewards,2021-10-12 02:19:42,Chang Liu,"http://arxiv.org/abs/2110.05643v3, http://arxiv.org/pdf/2110.05643v3",econ.TH
34212,th,"The partition of society into groups, polarization, and social networks are
part of most conversations today. How do they influence price competition? We
discuss Bertrand duopoly equilibria with demand subject to network effects.
Contrary to models where network effects depend on one aggregate variable
(demand for each choice), partitioning the dependence into groups creates a
wealth of pure price equilibria with profit for both price setters, even if
positive network effects are the dominant element of the game. If there is some
asymmetry in how groups interact, two groups are sufficient. If network effects
are based on undirected and unweighted graphs, at least five groups are
required but, without other differentiation, outcomes are symmetric.",Group network effects in price competition,2021-10-12 14:05:29,"Renato Soeiro, Alberto Pinto","http://arxiv.org/abs/2110.05891v1, http://arxiv.org/pdf/2110.05891v1",econ.TH
34213,th,"The theory of full implementation has been criticized for using
integer/modulo games which admit no equilibrium (Jackson (1992)). To address
the critique, we revisit the classical Nash implementation problem due to
Maskin (1999) but allow for the use of lotteries and monetary transfers as in
Abreu and Matsushima (1992, 1994). We unify the two well-established but
somewhat orthogonal approaches in full implementation theory. We show that
Maskin monotonicity is a necessary and sufficient condition for (exact)
mixed-strategy Nash implementation by a finite mechanism. In contrast to
previous papers, our approach possesses the following features: finite
mechanisms (with no integer or modulo game) are used; mixed strategies are
handled explicitly; neither undesirable outcomes nor transfers occur in
equilibrium; the size of transfers can be made arbitrarily small; and our
mechanism is robust to information perturbations. Finally, our result can be
extended to infinite/continuous settings and ordinal settings.",Maskin Meets Abreu and Matsushima,2021-10-13 11:04:59,"Yi-Chun Chen, Takashi Kunimoto, Yifei Sun, Siyang Xiong","http://arxiv.org/abs/2110.06551v3, http://arxiv.org/pdf/2110.06551v3",econ.TH
34214,th,"In this paper, we study opinion dynamics in a balanced social structure
consisting of two groups. Agents learn the true state of the world by naively
learning from their neighbors and from an unbiased source of information.
Agents want to agree with others of the same group (in-group identity) but to
disagree with those of the opposite group (out-group conflict). We characterize
the long-run opinions, and show that agents' influence depends on their
Bonacich centrality in the signed network of opinion exchange. Finally, we
study the effect of group size, the weight given to unbiased information and
homophily when agents in the same group are homogeneous.","Group Identity, Social Learning and Opinion Dynamics",2021-10-14 11:42:27,"Sebastiano Della Lena, Luca Paolo Merlino","http://arxiv.org/abs/2110.07226v2, http://arxiv.org/pdf/2110.07226v2",econ.TH
34215,th,"We study the optimal auction design problem when bidders' preferences follow
the maxmin expected utility model. We suppose that each bidder's set of priors
consists of beliefs close to the seller's belief, where ""closeness"" is defined
by a divergence. For a given allocation rule, we identify a class of optimal
transfer candidates, named the win-lose dependent transfers, with the following
property: each type of bidder's transfer conditional on winning or losing is
independent of the competitor's type report. Our result reduces the
infinite-dimensional optimal transfer problem to a two-dimensional optimization
problem. By solving the reduced problem, we find that: (i) among efficient
mechanisms with no premiums for losers, the first-price auction is optimal;
and, (ii) among efficient winner-favored mechanisms where each bidder pays
smaller amounts when she wins than when she loses, the all-pay auction is optimal. Under
a simplifying assumption, these two auctions remain optimal under the
endogenous allocation rule.",Auction design with ambiguity: Optimality of the first-price and all-pay auctions,2021-10-16 15:45:59,"Sosung Baik, Sung-Ha Hwang","http://arxiv.org/abs/2110.08563v1, http://arxiv.org/pdf/2110.08563v1",econ.TH
34216,th,"This study develops an economic model for a social planner who prioritizes
health over short-term wealth accumulation during a pandemic. Agents are
connected through a weighted undirected network of contacts, and the planner's
objective is to determine the policy that contains the spread of infection
below a tolerable incidence level, and that maximizes the present discounted
value of real income, in that order of priority. The optimal unique policy
depends both on the configuration of the contact network and the tolerable
infection incidence. Comparative statics analyses are conducted: (i) they
reveal the tradeoff between the economic cost of the pandemic and the infection
incidence allowed; and (ii) they suggest a correlation between different
measures of network centrality and individual lockdown probability with the
correlation increasing with the tolerable infection incidence level. Using
unique data on the networks of nursing and long-term homes in the U.S., we
calibrate our model at the state level and estimate the tolerable COVID-19
infection incidence level. We find that laissez-faire (more tolerance to the
virus spread) pandemic policy is associated with an increased number of deaths
in nursing homes and higher state GDP growth. In terms of the death count,
laissez-faire is more harmful to nursing homes that are more peripheral in the
networks, those located in deprived counties, and those that operate for profit.
We also find that U.S. states with a Republican governor have a higher level of
tolerable incidence, but policies tend to converge with high death count.",Optimally Targeting Interventions in Networks during a Pandemic: Theory and Evidence from the Networks of Nursing Homes in the United States,2021-10-19 22:56:50,"Roland Pongou, Guy Tchuente, Jean-Baptiste Tondji","http://arxiv.org/abs/2110.10230v1, http://arxiv.org/pdf/2110.10230v1",econ.TH
34217,th,"I establish a translation invariance property of the Blackwell order over
experiments, show that garbling experiments bring them closer together, and use
these facts to define a cardinal measure of informativeness. Experiment $A$ is
inf-norm more informative (INMI) than experiment $B$ if the infinity norm of
the difference between a perfectly informative structure and $A$ is less than
the corresponding difference for $B$. The better experiment is ""closer"" to the
fully revealing experiment; distance from the identity matrix is interpreted as
a measure of informativeness. This measure coincides with Blackwell's order
whenever possible, is complete, order invariant, and prior-independent, making
it an attractive and computationally simple extension of the Blackwell order to
economic contexts.",Algebraic Properties of Blackwell's Order and A Cardinal Measure of Informativeness,2021-10-21 21:27:21,Andrew Kosenko,"http://arxiv.org/abs/2110.11399v1, http://arxiv.org/pdf/2110.11399v1",econ.TH
34218,th,"Players allocate their budget to links, a local public good and a private
good. A player links to free ride on others' public good provision. We derive
sufficient conditions for the existence of a Nash equilibrium. In equilibrium,
large contributors link to each other, while others link to them. Poorer
players can be larger contributors if linking costs are sufficiently high. In
large societies, free riding reduces inequality only in networks in which it is
initially low; otherwise, richer players benefit more, as they can afford more
links. Finally, we study the policy implications, deriving income
redistribution that increases welfare and personalized prices that implement
the efficient solution.",Free Riding in Networks,2021-10-22 11:28:10,"Markus Kinateder, Luca Paolo Merlino","http://dx.doi.org/10.1016/j.euroecorev.2023.104378, http://arxiv.org/abs/2110.11651v1, http://arxiv.org/pdf/2110.11651v1",econ.TH
34219,th,"In a many-to-many matching model in which agents' preferences satisfy
substitutability and the law of aggregate demand, we present an algorithm to
compute the full set of stable matchings. This algorithm relies on the idea of
""cycles in preferences"" and generalizes the algorithm presented in Roth and
Sotomayor (1990) for the one-to-one model.",Cycles to compute the full set of many-to-many stable matchings,2021-10-22 18:25:53,"Agustin G. Bonifacio, Noelia Juarez, Pablo Neme, Jorge Oviedo","http://arxiv.org/abs/2110.11846v2, http://arxiv.org/pdf/2110.11846v2",econ.TH
34220,th,"Reverse causality is a common causal misperception that distorts the
evaluation of private actions and public policies. This paper explores the
implications of this error when a decision maker acts on it and therefore
affects the very statistical regularities from which he draws faulty
inferences. Using a quadratic-normal parameterization and applying the
Bayesian-network approach of Spiegler (2016), I demonstrate the subtle
equilibrium effects of a certain class of reverse-causality errors, with
illustrations in diverse areas: development psychology, social policy, monetary
economics and IO. In particular, the decision context may protect the decision
maker from his own reverse-causality causal error. That is, the cost of
reverse-causality errors can be lower for everyday decision makers than for an
outside observer who evaluates their choices.",On the Behavioral Consequences of Reverse Causality,2021-10-23 16:32:05,Ran Spiegler,"http://arxiv.org/abs/2110.12218v1, http://arxiv.org/pdf/2110.12218v1",econ.TH
34221,th,"A seller with one unit of a good faces N\geq3 buyers and a single competitor
who sells one other identical unit in a second-price auction with a reserve
price. Buyers who do not get the seller's good will compete in the competitor's
subsequent auction. We characterize the optimal mechanism for the seller in
this setting. The first-order approach typically fails, so we develop new
techniques. The optimal mechanism features transfers from buyers with the two
highest valuations, allocation to the buyer with the second-highest valuation,
and a withholding rule that depends on the highest two or three valuations. It
can be implemented by a modified third-price auction or a pay-your-bid auction
with a rebate. This optimal withholding rule raises significantly more revenue
than would a standard reserve price. Our analysis also applies to procurement
auctions. Our results have implications for sequential competition in
mechanisms.",How To Sell (or Procure) in a Sequential Auction,2021-10-25 20:29:19,"Kenneth Hendricks, Thomas Wiseman","http://arxiv.org/abs/2110.13121v1, http://arxiv.org/pdf/2110.13121v1",econ.TH
34222,th,"This paper studies the advance$-$purchase problem when a consumer has
reference$-$dependent preferences in the form of Koszegi and Rabin (2009), in
which planning affects reference formation. When the consumer exhibits plan
revisability, loss aversion increases the price at which she is willing to
pre$-$purchase. This implies that loss aversion can lead to the selection of a
riskier option in certain situations. Moreover, I endogenize the seller$'$s
price-commitment behavior in the advance$-$purchase problem. The result shows
that the seller commits to his spot price even though he is not obliged to,
which was treated as a given assumption in previous literature.","Buy It Now, or Later, or Not: Loss Aversion in Advance Purchasing",2021-10-28 10:39:14,Senran Lin,"http://arxiv.org/abs/2110.14929v3, http://arxiv.org/pdf/2110.14929v3",econ.TH
34223,th,"A textbook chapter on modeling choice behavior and designing institutional
choice functions for matching and market design. The chapter is to appear in:
Online and Matching-Based Market Design. Federico Echenique, Nicole Immorlica
and Vijay V. Vazirani, Editors. Cambridge University Press. 2021",Choice and Market Design,2021-10-29 00:48:17,"Samson Alva, Battal Doğan","http://arxiv.org/abs/2110.15446v2, http://arxiv.org/pdf/2110.15446v2",econ.TH
34224,th,"We examine the surplus extraction problem in a mechanism design setting with
behavioral types. In our model behavioral types always perfectly reveal their
private information. We characterize the sufficient conditions that guarantee
full extraction in a finite version of the reduced form environment of McAfee
and Reny (1992). We find that the standard convex independence condition
identified in Cremer and McLean (1988) is required only among the beliefs of
strategic types, while a weaker condition is required for the beliefs of
behavioral types.",Surplus Extraction with Behavioral Types,2021-10-29 22:45:57,Nicolas Pastrian,"http://arxiv.org/abs/2111.00061v2, http://arxiv.org/pdf/2111.00061v2",econ.TH
34225,th,"In this paper we study the aggregation of fuzzy preferences on
non-necessarily finite societies. We characterize in terms of possibility and
impossibility a family of models of complete preferences in which the
transitivity is defined for any t-norm. For that purpose, we have described
each model by means of some crisp binary relations and we have applied the
results obtained by Kirman and Sondermann.",Fuzzy Arrovian Theorems when preferences are complete,2021-11-04 20:10:24,Armajac Raventós-Pujol,"http://dx.doi.org/10.22111/ijfs.2022.7085, http://arxiv.org/abs/2111.03010v1, http://arxiv.org/pdf/2111.03010v1",econ.TH
34227,th,"We show that the expectation of the $k^{\mathrm{th}}$-order statistic of an
i.i.d. sample of size $n$ from a monotone reverse hazard rate (MRHR)
distribution is convex in $n$ and that the expectation of the
$(n-k+1)^{\mathrm{th}}$-order statistic from a monotone hazard rate (MHR)
distribution is concave in $n$ for $n\ge k$. We apply this result to the
analysis of independent private value auctions in which the auctioneer faces a
convex cost of attracting bidders. In this setting, MHR valuation distributions
lead to concavity of the auctioneer's objective. We extend this analysis to
auctions with reserve values, in which concavity is assured for sufficiently
small reserves or for a sufficiently large number of bidders.",Concavity and Convexity of Order Statistics in Sample Size,2021-11-08 21:32:36,Mitchell Watt,"http://arxiv.org/abs/2111.04702v2, http://arxiv.org/pdf/2111.04702v2",econ.TH
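The convexity/concavity claims in the preceding abstract can be checked numerically against two textbook distributions with closed-form order-statistic means. The sketch below is purely illustrative and not taken from the paper: it assumes Uniform(0,1) as the monotone-reverse-hazard-rate example and Exponential(1) as the monotone-hazard-rate example, and inspects discrete second differences in n.

```python
# Illustrative check (not from the paper): closed-form expectations of order
# statistics for two standard distributions.
# - Uniform(0,1) has a monotone (decreasing) reverse hazard rate;
#   E[X_(k:n)] = k/(n+1), which should be convex in n.
# - Exponential(1) has a monotone (constant) hazard rate;
#   E[X_(n-k+1:n)] = sum_{i=k}^{n} 1/i, which should be concave in n.

def uniform_kth_mean(k: int, n: int) -> float:
    """E[k-th order statistic] of n i.i.d. Uniform(0,1) draws."""
    return k / (n + 1)

def exponential_top_kth_mean(k: int, n: int) -> float:
    """E[(n-k+1)-th order statistic] of n i.i.d. Exponential(1) draws."""
    return sum(1.0 / i for i in range(k, n + 1))

def second_difference(f, k: int, n: int) -> float:
    """Discrete second difference in n: f(n+1) - 2 f(n) + f(n-1)."""
    return f(k, n + 1) - 2 * f(k, n) + f(k, n - 1)

k = 2
for n in range(k + 1, 10):
    d_unif = second_difference(uniform_kth_mean, k, n)           # expected >= 0 (convex)
    d_expo = second_difference(exponential_top_kth_mean, k, n)   # expected <= 0 (concave)
    print(f"n={n}: uniform 2nd diff={d_unif:+.4f}, exponential 2nd diff={d_expo:+.4f}")
```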
34228,th,"This paper demonstrates the additive and multiplicative version of a long-run
law of unexpected shocks for any economic variable. We derive these long-run
laws by the martingale theory without relying on the stationary and ergodic
conditions. We apply these long-run laws to asset return, risk-adjusted asset
return, and the pricing kernel process and derive new asset pricing
implications. Moreover, we introduce several dynamic long-term measures on the
pricing kernel process, which relies on the sample data of asset return.
Finally, we use these long-term measures to diagnose leading asset pricing
models.",Long Run Law and Entropy,2021-11-11 17:35:58,Weidong Tian,"http://arxiv.org/abs/2111.06238v1, http://arxiv.org/pdf/2111.06238v1",econ.TH
34229,th,"We explore the possibility of designing matching mechanisms that can
accommodate non-standard choice behavior. We pin down the necessary and
sufficient conditions on participants' choice behavior for the existence of
stable and incentive compatible mechanisms. Our results imply that
well-functioning matching markets can be designed to adequately accommodate a
plethora of choice behaviors, including the standard behavior that is
consistent with preference maximization. To illustrate the significance of our
results in practice, we show that a simple modification in a commonly used
matching mechanism enables it to accommodate non-standard choice behavior.",Non-Standard Choice in Matching Markets,2021-11-12 20:01:13,"Gian Caspari, Manshu Khanna","http://arxiv.org/abs/2111.06815v1, http://arxiv.org/pdf/2111.06815v1",econ.TH
34230,th,"We study the tradeoff between fundamental risk and time. A time-constrained
agent has to solve a problem. She dynamically allocates effort between
implementing a risky initial idea and exploring alternatives. Discovering an
alternative implies progress that has to be converted to a solution. As time
runs out, the chances of converting it in time shrink. We show that the agent
may return to the initial idea after having left it in the past to explore
alternatives. Our model helps explain so-called false starts. To finish fast,
the agent delays exploring alternatives, reducing the overall success
probability.",On Risk and Time Pressure: When to Think and When to Do,2021-11-15 00:20:31,"Christoph Carnehl, Johannes Schneider","http://dx.doi.org/10.1093/jeea/jvac027, http://arxiv.org/abs/2111.07451v2, http://arxiv.org/pdf/2111.07451v2",econ.TH
34231,th,"Dworczak et al. (2021) study when certain market structures are optimal in
the presence of heterogeneous preferences. A key assumption is that the social
planner knows the joint distribution of the value of the good and marginal
value of money. This paper studies whether relevant features of this
distribution are identified from choice data. We show that the features of the
distribution needed to characterize optimal market structure cannot be
identified when demand is known for all prices. While this is a negative
result, we show that the distribution of good value and marginal utility of
money is fully identified when there is an observed measure of quality that
varies. Thus, while Dworczak et al. (2021) abstract from quality, we show how
including quality is crucial for potential applications.",Obstacles to Redistribution Through Markets and One Solution,2021-11-18 22:14:12,"Roy Allen, John Rehbeck","http://arxiv.org/abs/2111.09910v1, http://arxiv.org/pdf/2111.09910v1",econ.TH
34232,th,"Law enforcement acquires costly evidence with the aim of securing the
conviction of a defendant, who is convicted if a decision-maker's belief
exceeds a certain threshold. Either law enforcement or the decision-maker is
biased and is initially overconfident that the defendant is guilty. Although an
innocent defendant always prefers an unbiased decision-maker, he may prefer
that law enforcement have some bias to none. Nevertheless, fixing the level of
bias, an innocent defendant may prefer that the decision-maker, not law
enforcement, is biased.",Whose Bias?,2021-11-19 20:31:50,"Vasudha Jain, Mark Whitmeyer","http://arxiv.org/abs/2111.10335v1, http://arxiv.org/pdf/2111.10335v1",econ.TH
34233,th,"In many settings affirmative action policies apply at two levels
simultaneously, for instance, at university as well as at its departments. We
show that commonly used methods in reserving positions for beneficiaries of
affirmative action are often inadequate in such settings. We present a
comprehensive evaluation of existing procedures to formally document their
shortcomings. We propose a new solution with appealing theoretical properties
and quantify the benefits of adopting it using recruitment advertisement data
from India.",Affirmative Action in Two Dimensions: A Multi-Period Apportionment Problem,2021-11-23 18:55:10,"Haydar Evren, Manshu Khanna","http://arxiv.org/abs/2111.11963v1, http://arxiv.org/pdf/2111.11963v1",econ.TH
34234,th,"In this paper, I introduce a profit-maximizing centralized marketplace into a
decentralized market with search frictions. Agents choose between the
centralized marketplace and the decentralized bilateral trade. I characterize
the optimal marketplace in this market choice game using a mechanism design
approach. In the unique equilibrium, the centralized marketplace and the
decentralized trade coexist. The profit of the marketplace decreases as the
search frictions in the decentralized market are reduced. However, it is always
higher than the half of the profit when the frictions are prohibitively high
for decentralized trade. I also show that the ratio of the reduction in the
profit depends only on the degree of search frictions and not on the
distribution of valuations. The thickness of the centralized marketplace does
not depend on the search frictions. I derive conditions under which this
equilibrium results in higher welfare than either institution on its own.",Coexistence of Centralized and Decentralized Markets,2021-11-24 22:56:53,Berk Idem,"http://arxiv.org/abs/2111.12767v1, http://arxiv.org/pdf/2111.12767v1",econ.TH
34241,th,"A buyer and a seller bargain over the price of an object. Both players can
build reputations for being obstinate by offering the same price over time.
Before players bargain, the seller decides whether to adopt a new technology
that can lower his cost of production. We show that even when the buyer cannot
observe the seller's adoption decision, players' reputational incentives can
lead to inefficient under-adoption and significant delays in reaching
agreement, and that these inefficiencies arise in equilibrium if and only if
the social benefit from adoption is large enough. Our result implies that an
increase in the benefit from adoption may lower the probability of adoption and
that the seller's opportunity to adopt a cost-saving technology may lower
social welfare.",Reputational Bargaining and Inefficient Technology Adoption,2022-01-06 00:14:45,"Harry Pei, Maren Vairo","http://arxiv.org/abs/2201.01827v6, http://arxiv.org/pdf/2201.01827v6",econ.TH
34235,th,"In the classical Bayesian persuasion model an informed player and an
uninformed one engage in a static interaction. The informed player, the sender,
knows the state of nature, while the uninformed one, the receiver, does not.
The informed player partially shares his private information with the receiver
and the latter then, based on her belief about the state, takes an action. This
action determines, together with the state of nature, the utility of both
players. We consider a dynamic Bayesian persuasion situation where the state of
nature evolves according to a Markovian law. In this repeated persuasion model
an optimal disclosure strategy of the sender should, at any period, balance
between getting high stage payoff and future implications on the receivers'
beliefs. We discuss optimal strategies under different discount factors and
characterize when the asymptotic value achieves the maximal value possible.",Markovian Persuasion,2021-11-29 10:46:19,"Ehud Lehrer, Dimitry Shaiderman","http://arxiv.org/abs/2111.14365v1, http://arxiv.org/pdf/2111.14365v1",econ.TH
34236,th,"Recovering and distinguishing between the potentially meaningful indifference
and/or indecisiveness parts of a decision maker's preferences from their
observable choice behaviour is important for testing theory and conducting
welfare analysis. This paper contributes a way forward in two ways. First, it
reports on a new experimental design with a forced and a non-forced general
choice treatment where the 282 subjects in the sample were allowed to choose
*multiple* gift-card bundles from each menu they saw. Second, it analyses the
collected data with a new non-parametric combinatorial-optimization method that
allows one to compare the goodness-of-fit of utility maximization to that of models
of undominated or dominant choice with incomplete preferences, in each case
with or without indifferences. Most subjects in the sample are
well-approximated by some indifference-permitting instance of those three
models. Furthermore, the model-specific preference recovery typically elicits a
unique preference relation with a non-trivial indifference part and, where
relevant, a distinct indecisiveness part. The two kinds of distinctions between
indifference and indecisiveness uncovered herein are theory-guided and
documented empirically for the first time. Our findings suggest that accounting
for the decision makers' possible indifference or inability to compare some
alternatives can increase the descriptive accuracy of theoretical models and
their empirical tests, while it can also aid attempts to recover
people's stable but possibly weak or incomplete preferences.",Towards Eliciting Weak or Incomplete Preferences in the Lab: A Model-Rich Approach,2021-11-29 13:23:05,Georgios Gerasimou,"http://arxiv.org/abs/2111.14431v4, http://arxiv.org/pdf/2111.14431v4",econ.TH
34237,th,"A preference domain is called a non-dictatorial domain if it allows the
design of unanimous social choice functions (henceforth, rules) that are
non-dictatorial and strategy-proof. We study a class of preference domains
called unidimensional domains and establish that the unique seconds property
(introduced by Aswal, Chatterji, and Sen (2003)) characterizes all
non-dictatorial domains. The principal contribution is the subsequent
exhaustive classification of all non-dictatorial, unidimensional domains and
canonical strategy-proof rules on these domains, based on a simple property of
two-voter rules called invariance. The preference domains that constitute the
classification are semi-single-peaked domains (introduced by Chatterji, Sanver,
and Sen (2013)) and semi-hybrid domains (introduced here) which are two
appropriate weakenings of single-peaked domains and are shown to allow
strategy-proof rules to depend on non-peak information of voters' preferences;
the canonical rules for these domains are the projection rule and the hybrid
rule respectively. As a refinement of the classification, single-peaked domains
and hybrid domains emerge as the only unidimensional domains that force
strategy-proof rules to be determined completely by voters' preference peaks.",A Taxonomy of Non-dictatorial Unidimensional Domains,2022-01-03 09:37:34,"Shurojit Chatterji, Huaxia Zeng","http://arxiv.org/abs/2201.00496v3, http://arxiv.org/pdf/2201.00496v3",econ.TH
34238,th,"Learning models do not in general imply that weakly dominated strategies are
irrelevant or justify the related concept of ""forward induction,"" because
rational agents may use dominated strategies as experiments to learn how
opponents play, and may not have enough data to rule out a strategy that
opponents never use. Learning models also do not support the idea that the
selected equilibria should only depend on a game's normal form, even though two
games with the same normal form present players with the same decision problems
given fixed beliefs about how others play. However, playing the extensive form
of a game is equivalent to playing the normal form augmented with the
appropriate terminal node partitions so that two games are information
equivalent, i.e., the players receive the same feedback about others'
strategies.","Observability, Dominance, and Induction in Learning Models",2022-01-03 20:45:51,"Daniel Clark, Drew Fudenberg, Kevin He","http://dx.doi.org/10.1016/j.jet.2022.105569, http://arxiv.org/abs/2201.00776v1, http://arxiv.org/pdf/2201.00776v1",econ.TH
34239,th,"We study the mechanism design problem of selling a public good to a group of
agents by a principal in the correlated private value environment. We assume
the principal only knows the expectations of the agents' values, but does not
know the joint distribution of the values. The principal evaluates a mechanism
by the worst-case expected revenue over joint distributions that are consistent
with the known expectations. We characterize maxmin public good mechanisms
among dominant-strategy incentive compatible and ex-post individually rational
mechanisms for the two-agent case and for a special $N$-agent ($N>2$) case.",Robust Private Supply of a Public Good,2022-01-04 03:58:33,Wanchang Zhang,"http://arxiv.org/abs/2201.00923v2, http://arxiv.org/pdf/2201.00923v2",econ.TH
34240,th,"I introduce a dynamic model of learning and random meetings between a
long-lived agent with unknown ability and heterogeneous projects with
observable qualities. The outcomes of the agent's matches with the projects
determine her posterior belief about her ability (i.e., her reputation). In a
self-type learning framework with endogenous outside option, I find the optimal
project selection strategy of the agent, that determines what types of projects
the agent with a certain level of reputation will accept. Sections of the
optimal matching set become increasing intervals, with different cutoffs across
different types of the projects. Increasing the meeting rate has asymmetric
effects on the sections of the matching sets: it unambiguously expands the
section for the high type projects, while on some regions, it initially expands
and then shrinks the section of the low type projects.","Reputation, Learning and Project Choice in Frictional Economies",2022-01-05 23:50:26,Farzad Pourbabaee,"http://arxiv.org/abs/2201.01813v3, http://arxiv.org/pdf/2201.01813v3",econ.TH
34243,th,"We introduce the component-wise egalitarian Myerson value for network games.
This new value being a convex combination of the Myerson value and the
component-wise equal division rule is a player-based allocation rule. In
network games under the cooperative framework, the Myerson value is an extreme
example of marginalism, while the equal division rule signifies egalitarianism.
In the proposed component-wise egalitarian Myerson value, a convexity parameter
combines these two attributes and determines the degree of solidarity to the
players. Here, by solidarity, we mean the mutual support or compensation among
the players in a network. We provide three axiomatic characterizations of the
value. Further, we propose an implementation mechanism for the component-wise
egalitarian Myerson value under subgame perfect Nash equilibrium.",The component-wise egalitarian Myerson value for Network Games,2022-01-08 11:54:40,"Surajit Borkotokey, Sujata Goala, Niharika Kakoty, Parishmita Boruah","http://arxiv.org/abs/2201.02793v1, http://arxiv.org/pdf/2201.02793v1",econ.TH
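As a rough illustration of how the convex combination described above can be computed, here is a hedged Python sketch; the three-player line network, the size-based characteristic function, and the weight alpha are assumptions made for the example, and the two ingredient rules follow their standard textbook definitions rather than anything stated in the abstract.

```python
# Hedged sketch (example data are assumptions, not from the paper): the
# component-wise egalitarian Myerson value as the convex combination
#   alpha * (Myerson value) + (1 - alpha) * (component-wise equal division).
from itertools import permutations
from math import factorial

def components(players, links):
    """Connected components of the subgraph induced by `players`."""
    links = {frozenset(l) for l in links}
    remaining, comps = set(players), []
    while remaining:
        stack, comp = [next(iter(remaining))], set()
        while stack:
            q = stack.pop()
            if q in comp:
                continue
            comp.add(q)
            stack.extend(r for r in remaining if frozenset({q, r}) in links)
        remaining -= comp
        comps.append(frozenset(comp))
    return comps

def myerson_value(players, links, v):
    """Shapley value of the graph-restricted game v^g(S) = sum_{C component of S} v(C)."""
    def vg(S):
        return sum(v(C) for C in components(S, links))
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition, worth = set(), 0.0
        for p in order:
            coalition.add(p)
            new_worth = vg(coalition)
            phi[p] += new_worth - worth
            worth = new_worth
    return {p: phi[p] / factorial(len(players)) for p in players}

def equal_division(players, links, v):
    """Each player gets an equal share of her own component's worth."""
    return {p: v(comp) / len(comp)
            for comp in components(players, links) for p in comp}

def cew_myerson(players, links, v, alpha):
    my, ed = myerson_value(players, links, v), equal_division(players, links, v)
    return {p: alpha * my[p] + (1 - alpha) * ed[p] for p in players}

# Illustrative 3-player line network 1-2-3; worth depends only on coalition size.
players, links = [1, 2, 3], [(1, 2), (2, 3)]
v = lambda C: {1: 0, 2: 2, 3: 6}[len(C)]
print(cew_myerson(players, links, v, alpha=0.5))
```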
34244,th,"The paper focuses on the bribery network emphasizing harassment bribery. A
bribery network ends with the police officer whose utility from the bribe is
positive and the approving officer in the network. The persistent nature of
corruption is due to colluding behavior of the bribery networks. The
probability of detecting bribery incidents will help in controlling corruption
in society. An asymmetric form of punishment, and an award equivalent to the
amount of punishment to the network, can enhance the
probability of detecting harassment bribery $(p_{h})$ and thus increase
the probability of detection of overall bribery $(p_{h} \in p)$.",A study on bribery networks with a focus on harassment bribery and ways to control corruption,2022-01-08 13:46:50,Chanchal Pramanik,"http://arxiv.org/abs/2201.02804v1, http://arxiv.org/pdf/2201.02804v1",econ.TH
34245,th,"We explore heterogenous prices as a source of heterogenous or stochastic
demand. Heterogenous prices could arise either because there is actual price
variation among consumers or because consumers (mis)perceive prices
differently. Our main result says the following: if heterogenous prices have a
distribution among consumers that is (in a sense) stable across observations,
then a model where consumers have a common utility function but face
heterogenous prices has precisely the same implications as a heterogenous
preference/random utility model (with no price heterogeneity).",Price Heterogeneity as a source of Heterogenous Demand,2022-01-11 08:07:05,"John K. -H. Quah, Gerelt Tserenjigmid","http://arxiv.org/abs/2201.03784v2, http://arxiv.org/pdf/2201.03784v2",econ.TH
34246,th,"It is well known that differences in the average number of friends among
social groups can cause inequality in the average wage and/or unemployment
rate. However, the impact of social network structure on inequality is not
evident. In this paper, we show that not only the average number of friends but
also the heterogeneity of degree distribution can affect inter-group
inequality. A worker group with a scale-free network tends to be disadvantaged
in the labor market compared to a group with an Erd\H{o}s-R\'{e}nyi network
structure. This feature becomes strengthened as the skewness of the degree
distribution increases in scale-free networks. We show that the government's
policy of discouraging referral hiring worsens social welfare and can
exacerbate inequality.",Referral Hiring and Social Network Structure,2022-01-16 14:11:54,Yoshitaka Ogisu,"http://arxiv.org/abs/2201.06020v2, http://arxiv.org/pdf/2201.06020v2",econ.TH
34247,th,"In cooperative games with transferable utilities, the Shapley value is an
extreme case of marginalism while the Equal Division rule is an extreme case of
egalitarianism. The Shapley value does not assign anything to the
non-productive players, and the Equal Division rule does not concern itself with
the relative efficiency of the players in generating a resource. However, in
real life situations neither of them is a good fit for the fair distribution of
resources, as the society is neither devoid of solidarity nor can it be
indifferent to rewarding the relatively more productive players. Thus a
trade-off between these two extreme cases has caught attention from many
researchers. In this paper, we obtain a new value for cooperative games with
transferable utilities that adopts egalitarianism in smaller coalitions on one
hand and on the other hand takes care of the players' marginal productivity in
sufficiently large coalitions. Our value is identical with the Shapley value on
one extreme and the Equal Division rule on the other extreme. We provide four
characterizations of the value using variants of standard axioms in the
literature. We also develop a strategic implementation mechanism of our
value in sub-game perfect Nash equilibrium.",Consolidating Marginalism and Egalitarianism: A New Value for Transferable Utility Games,2022-01-23 08:16:41,"D. Choudhury, S. Borkotokey, Rajnish Kumar, Sudipta Sarangi","http://arxiv.org/abs/2201.09182v1, http://arxiv.org/pdf/2201.09182v1",econ.TH
34248,th,"We study the strategic advantages of coarsening one's utility by clustering
nearby payoffs together (i.e., classifying them the same way). Our solution
concept, coarse-utility equilibrium (CUE) requires that (1) each player
maximizes her coarse utility, given the opponent's strategy, and (2) the
classifications form best replies to one another. We characterize CUEs in
various games. In particular, we show that there is a qualitative difference
between CUEs in which only one of the players clusters payoffs, and those in
which all players cluster their payoffs, and that the latter type induce
players to treat co-players better than in Nash equilibria in the large class
of games with monotone externalities.",The Benefits of Coarse Preferences,2022-01-25 10:35:43,"Joseph Y. Halpern, Yuval Heller, Eyal Winter","http://arxiv.org/abs/2201.10141v5, http://arxiv.org/pdf/2201.10141v5",econ.TH
34249,th,"Social decision schemes (SDSs) map the preferences of a group of voters over
some set of $m$ alternatives to a probability distribution over the
alternatives. A seminal characterization of strategyproof SDSs by Gibbard
implies that there are no strategyproof Condorcet extensions and that only
random dictatorships satisfy ex post efficiency and strategyproofness. The
latter is known as the random dictatorship theorem. We relax
Condorcet-consistency and ex post efficiency by introducing a lower bound on
the probability of Condorcet winners and an upper bound on the probability of
Pareto-dominated alternatives, respectively. We then show that the SDS that
assigns probabilities proportional to Copeland scores is the only anonymous,
neutral, and strategyproof SDS that can guarantee the Condorcet winner a
probability of at least 2/m. Moreover, no strategyproof SDS can exceed this
bound, even when dropping anonymity and neutrality. Secondly, we prove a
continuous strengthening of Gibbard's random dictatorship theorem: the less
probability we put on Pareto-dominated alternatives, the closer to a random
dictatorship is the resulting SDS. Finally, we show that the only anonymous,
neutral, and strategyproof SDSs that maximize the probability of Condorcet
winners while minimizing the probability of Pareto-dominated alternatives are
mixtures of the uniform random dictatorship and the randomized Copeland rule.",Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes,2022-01-25 19:06:29,"Felix Brandt, Patrick Lederer, René Romen","http://arxiv.org/abs/2201.10418v1, http://arxiv.org/pdf/2201.10418v1",econ.TH
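To make the randomized Copeland rule mentioned above concrete, here is a hedged sketch; the tie-handling convention (half a point to each alternative) and the toy preference profile are assumptions, not details taken from the paper. On the example profile the Condorcet winner receives probability exactly 2/m, matching the bound stated in the abstract.

```python
# Hedged sketch: a "randomized Copeland" lottery assigning each alternative
# probability proportional to its Copeland score (ties assumed to count 1/2).
from itertools import combinations

def randomized_copeland(profile):
    """profile: list of voters' strict rankings (best first). Returns a lottery."""
    alternatives = profile[0]
    scores = {a: 0.0 for a in alternatives}
    for a, b in combinations(alternatives, 2):
        a_wins = sum(r.index(a) < r.index(b) for r in profile)
        b_wins = len(profile) - a_wins
        if a_wins > b_wins:
            scores[a] += 1.0
        elif b_wins > a_wins:
            scores[b] += 1.0
        else:
            scores[a] += 0.5
            scores[b] += 0.5
    total = sum(scores.values())  # equals m*(m-1)/2
    return {a: s / total for a, s in scores.items()}

# "x" is a Condorcet winner: it beats every other alternative pairwise, so its
# score is m-1 out of m*(m-1)/2, i.e. probability 2/m (here 2/3).
profile = [["x", "y", "z"], ["x", "z", "y"], ["y", "x", "z"]]
print(randomized_copeland(profile))  # ~ {'x': 0.667, 'y': 0.333, 'z': 0.0}
```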
34250,th,"We study a general class of consumption-savings problems with recursive
preferences. We characterize the sign of the consumption response to arbitrary
shocks in terms of the product of two sufficient statistics: the elasticity of
intertemporal substitution between contemporaneous consumption and continuation
utility (EIS), and the relative elasticity of the marginal value of wealth
(REMV). Under homotheticity, the REMV always equals one, so the propensity of
the agent to save or dis-save is always signed by the relationship of the EIS
with unity. We apply our results to derive comparative statics in classical
problems of portfolio allocation, consumption-savings with income risk, and
entrepreneurial investment. Our results suggest empirical identification
strategies for both the value of the EIS and its relationship with unity.",Robust Comparative Statics for the Elasticity of Intertemporal Substitution,2022-01-26 02:30:22,"Joel P. Flynn, Lawrence D. W. Schmidt, Alexis Akira Toda","http://arxiv.org/abs/2201.10673v1, http://arxiv.org/pdf/2201.10673v1",econ.TH
34251,th,"We propose a stochastic model of opinion exchange in networks. A finite set
of agents is organized in a fixed network structure. There is a binary state of
the world and each agent receives a private signal on the state. We model
beliefs as urns where red balls represent one possible value of the state and
blue balls the other value. The model revolves purely around communication and
belief dynamics. Communication happens in discrete time and, at each period,
agents draw and display one ball from their urn with replacement. Then, they
reinforce their urns by adding balls of the colors drawn by their neighbors. We
show that for any network structure, this process converges almost surely to a
stable state. Further, we show that if the communication network is connected,
this stable state is such that all urns have the same proportion of balls. This
result strengthens the main convergence properties of non-Bayesian learning
models. Yet, contrary to those models, we show that this limit proportion is a
full-support random variable. This implies that an arbitrarily small proportion
of misinformed agents can substantially change the value of the limit
consensus. We propose a set of conjectures on the distribution of this limit
proportion based on simulations. In particular, we show evidence that the limit
belief follows a beta distribution and that its average value is independent
from the network structure.",Stochastic Consensus and the Shadow of Doubt,2022-01-28 16:14:20,Emilien Macault,"http://arxiv.org/abs/2201.12100v1, http://arxiv.org/pdf/2201.12100v1",econ.TH
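A minimal simulation sketch of the urn-reinforcement dynamics described above; the ring network, initial urn compositions, and horizon are illustrative assumptions rather than parameters taken from the paper.

```python
# Hedged sketch of the urn dynamics: each period every agent draws one ball
# (with replacement) and displays its color, then adds one ball of the color
# drawn by each of its neighbors.
import random

def simulate(adjacency, red, blue, periods=2000, seed=0):
    """adjacency[i] = list of i's neighbors; red/blue[i] = initial ball counts."""
    rng = random.Random(seed)
    n = len(adjacency)
    red, blue = list(red), list(blue)
    for _ in range(periods):
        # Each agent draws and displays one ball from its urn.
        draws = [rng.random() < red[i] / (red[i] + blue[i]) for i in range(n)]
        # Each agent adds one ball of the color drawn by each neighbor.
        add_red = [sum(draws[j] for j in adjacency[i]) for i in range(n)]
        for i in range(n):
            red[i] += add_red[i]
            blue[i] += len(adjacency[i]) - add_red[i]
    return [red[i] / (red[i] + blue[i]) for i in range(n)]

# A connected 4-agent ring; agents 0-1 start mostly "red", agents 2-3 mostly "blue".
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
shares = simulate(ring, red=[9, 9, 1, 1], blue=[1, 1, 9, 9])
print([round(s, 3) for s in shares])  # proportions typically end up close together
```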
34252,th,"We study a social bandit problem featuring production and diffusion of
knowledge. While higher connectivity enhances knowledge diffusion, it may
reduce knowledge production as agents shy away from experimentation with new
ideas and free ride on the observation of other agents. As a result, under some
conditions, greater connectivity can lead to homogeneity and lower social
welfare.",The Impact of Connectivity on the Production and Diffusion of Knowledge,2022-02-01 22:57:01,"Gustavo Manso, Farzad Pourbabaee","http://arxiv.org/abs/2202.00729v1, http://arxiv.org/pdf/2202.00729v1",econ.TH
34253,th,"We provide a complete characterization of optimal extinction in a two-sector
model of economic growth through three results, surprising in both their
simplicity and intricacy. (i) When the discount factor is below a threshold
identified by the well-known $\delta$-normality condition for the existence of
a stationary optimal stock, the economy's capital becomes extinct in the long
run. (ii) This extinction may be staggered if and only if the investment-good
sector is capital intensive. (iii) We uncover a sequence of thresholds of the
discount factor, identified by a family of rational functions, that represent
bifurcations for optimal postponements on the path to extinction. We also
report various special cases of the model having to do with unsustainable
technologies and equal capital intensities that showcase long-term optimal
growth, all of topical interest and all neglected in the antecedent literature.",On Sustainability and Survivability in the Matchbox Two-Sector Model: A Complete Characterization of Optimal Extinction,2022-02-04 19:06:14,"Liuchun Deng, Minako Fujio, M. Ali Khan","http://arxiv.org/abs/2202.02209v1, http://arxiv.org/pdf/2202.02209v1",econ.TH
34254,th,"We study sequential bargaining between a proposer and a veto player. Both
have single-peaked preferences, but the proposer is uncertain about the veto
player's ideal point. The proposer cannot commit to future proposals. When
players are patient, there can be equilibria with Coasian dynamics: the veto
player's private information can largely nullify proposer's bargaining power.
Our main result, however, is that under some conditions there are also
equilibria in which the proposer obtains the high payoff that he would with
commitment power. The driving force is that the veto player's single-peaked
preferences give the proposer an option to ""leapfrog"", i.e., to secure
agreement from only low-surplus types early on to credibly extract surplus from
high types later. Methodologically, we exploit the connection between
sequential bargaining and static mechanism design.",Sequential Veto Bargaining with Incomplete Information,2022-02-05 04:50:03,"S. Nageeb Ali, Navin Kartik, Andreas Kleiner","http://arxiv.org/abs/2202.02462v3, http://arxiv.org/pdf/2202.02462v3",econ.TH
34255,th,"This paper analyzes the asymptotic performance of two popular affirmative
action policies, majority quota and minority reserve, under the immediate
acceptance mechanism (IAM) and the top trading cycles mechanism (TTCM) in the
context of school choice. The matching outcomes of these two affirmative
actions are asymptotically equivalent under the IAM when all students are
sincere. Given the possible preference manipulations under the IAM, we
characterize the asymptotically equivalent sets of Nash equilibrium outcomes of
the IAM with these two affirmative actions. However, these two affirmative
actions induce different matching outcomes under the TTCM with non-negligible
probability even in large markets.",On the Asymptotic Performance of Affirmative Actions in School Choice,2022-02-08 18:22:30,"Di Feng, Yun Liu","http://arxiv.org/abs/2202.03927v5, http://arxiv.org/pdf/2202.03927v5",econ.TH
34256,th,"This paper analyzes repeated version of the bilateral trade model where the
independent payoff relevant private information of the buyer and the seller is
correlated across time. Using this setup it makes the following five
contributions. First, it derives necessary and sufficient conditions on the
primitives of the model as to when efficiency can be attained under ex post
budget balance and participation constraints. Second, in doing so, it
introduces an intermediate notion of budget balance called interim budget
balance that allows for the extension of liquidity but with participation
constraints for the issuing authority interpreted here as an intermediary.
Third, it pins down the class of all possible mechanisms that can implement the
efficient allocation with and without an intermediary. Fourth, it provides a
foundation for the role of an intermediary in a dynamic mechanism design model
under informational constraints. And, fifth, it argues for a careful
interpretation of the ""folk proposition"" that less information is better for
efficiency in dynamic mechanisms under ex post budget balance and observability
of transfers.",Efficiency with(out) intermediation in repeated bilateral trade,2022-02-09 02:55:35,Rohit Lamba,"http://arxiv.org/abs/2202.04201v1, http://arxiv.org/pdf/2202.04201v1",econ.TH
34258,th,"This paper studies durable goods monopoly without commitment under an
informationally robust objective. A seller cannot commit to future prices and
does not know the information arrival process according to which a
representative buyer learns about her valuation. To avoid known conceptual
difficulties associated with formulating a dynamically-consistent maxmin
objective, we posit the seller's uncertainty is resolved by an explicit player
(nature) who chooses the information arrival process adversarially and
sequentially. Under a simple transformation of the buyer's value distribution,
the solution (in the gap case) is payoff-equivalent to a classic environment
where the buyer knows her valuation at the beginning. This result immediately
delivers a sharp characterization of the equilibrium price path. Furthermore,
we provide a (simple to check and frequently satisfied) sufficient condition
which guarantees that no arbitrary (even dynamically-inconsistent) information
arrival process can lower the seller's profit against this equilibrium price
path. We call a price path with this property a reinforcing solution, and
suggest this concept may be of independent interest as a way of tractably
analyzing limited commitment robust objectives. We consider alternative ways of
specifying the robust objective, and also show that the analogy to known-values
in the no-gap case need not hold in general.",Coasian Dynamics under Informational Robustness,2022-02-09 21:22:21,"Jonathan Libgober, Xiaosheng Mu","http://arxiv.org/abs/2202.04616v2, http://arxiv.org/pdf/2202.04616v2",econ.TH
34259,th,"We study discrete allocation problems, as in the textbook notion of an
exchange economy, but with indivisible goods. The problem is well-known to be
difficult. The model is rich enough to encode some of the most pathological
bargaining configurations in game theory, like the roommate problem. Our
contribution is to show the existence of stable allocations (outcomes in the
weak core, or in the bargaining set) under different sets of assumptions.
Specifically, we consider dichotomous preferences, categorical economies, and
discrete TU markets. The paper uses varied techniques, from Scarf's balanced
games to a generalization of the TTC algorithm by means of Tarski fixed points.",Stable allocations in discrete economies,2022-02-09 23:04:05,"Federico Echenique, Sumit Goel, SangMok Lee","http://arxiv.org/abs/2202.04706v2, http://arxiv.org/pdf/2202.04706v2",econ.TH
34260,th,"We provide a necessary and sufficient condition for rationalizable
implementation of social choice functions, i.e., we offer a complete answer
regarding what social choice functions can be rationalizably implemented.",Rationalizable Implementation of Social Choice Functions: Complete Characterization,2022-02-10 10:57:36,Siyang Xiong,"http://arxiv.org/abs/2202.04885v1, http://arxiv.org/pdf/2202.04885v1",econ.TH
34261,th,"Knoblauch (2014) and Knoblauch (2015) investigate the relative size of the
collection of binary relations with desirable features as compared to the set
of all binary relations using symmetric difference metric (Cantor) topology and
Hausdorff metric topology. We consider Ellentuck and doughnut topologies to
further this line of investigation. We report the differences among the size of
the useful binary relations in Cantor, Ellentuck and doughnut topologies. It
turns out that the doughnut topology admits binary relations with more general
properties in contrast to the other two. We further prove that among the
induced Cantor and Ellentuck topologies, the latter captures the relative size
of partial orders among the collection of all quasi-orders. Finally we show
that the class of ethical binary relations is small in Ellentuck (and therefore
in Cantor) topology but is not small in doughnut topology. In essence, the
Ellentuck topology fares better compared to Cantor topology in capturing the
relative size of collections of binary relations.",How rare are the properties of binary relations?,2022-02-10 21:36:38,"Ram Sewak Dubey, Giorgio Laguzzi","http://arxiv.org/abs/2202.05229v1, http://arxiv.org/pdf/2202.05229v1",econ.TH
34262,th,"A principal hires an agent to acquire soft information about an unknown
state. Even though neither how the agent learns nor what the agent discovers
are contractible, we show the principal is unconstrained as to what information
the agent can be induced to acquire and report honestly. When the agent is risk
neutral, and a) is not asked to learn too much, b) can acquire information
sufficiently cheaply, or c) can face sufficiently large penalties, the
principal can attain the first-best outcome. We discuss the effect of risk
aversion (on the part of the agent) and characterize the second-best contracts.",Buying Opinions,2022-02-10 21:54:45,"Mark Whitmeyer, Kun Zhang","http://arxiv.org/abs/2202.05249v5, http://arxiv.org/pdf/2202.05249v5",econ.TH
34263,th,"Will people eventually learn the value of an asset through observable
information? This paper studies observational learning in a market with
competitive prices. Comparing a market with public signals and a market with
private signals in a sequential trading model, we find that Pairwise
Informativeness (PI) is the sufficient and necessary learning condition for a
market with public signals; and Avery and Zemsky Condition (AZC) is the
sufficient and necessary learning condition for a market with private signals.
Moreover, when the number of states is 2 or 3, PI and AZC are equivalent. And
when the number of states is greater than 3, PI and Monotonic Likelihood Ratio
Property (MLRP) together imply asymptotic learning in the private signal case.",Observational Learning with Competitive Prices,2022-02-14 01:10:39,Zikai Xu,"http://arxiv.org/abs/2202.06425v3, http://arxiv.org/pdf/2202.06425v3",econ.TH
34264,th,"The commitment power of senders distinguishes Bayesian persuasion problems
from other games with (strategic) communication. Persuasion games with multiple
senders have largely studied simultaneous commitment and signalling settings.
However, many real-world instances with multiple senders have sequential
signalling. In such contexts, commitments can also be made sequentially, and
then the order of commitment by the senders -- the sender signalling last
committing first or last -- could significantly impact the equilibrium payoffs
and strategies. For a two-sender persuasion game where the senders are
partially aware of the state of the world, we find necessary and sufficient
conditions to determine when different commitment orders yield different payoff
profiles. In particular, for the two-sender setting, we show that different
payoff profiles arise if two properties hold: 1) the two senders are willing to
collaborate in persuading the receiver in some state(s); and 2) the sender
signalling second can carry out a credible threat when committing first such
that the other sender's room to design signals gets constrained.",Order of Commitments in Bayesian Persuasion with Partial-informed Senders,2022-02-14 07:25:28,"Shih-Tang Su, Vijay G. Subramanian","http://arxiv.org/abs/2202.06479v1, http://arxiv.org/pdf/2202.06479v1",econ.TH
35241,th,"We study the costs and benefits of selling data to a competitor. Although
selling all consumers' data may decrease total firm profits, there exist other
selling mechanisms -- in which only some consumers' data is sold -- that render
both firms better off. We identify the profit-maximizing mechanism, and show
that the benefit to firms comes at a cost to consumers. We then construct
Pareto-improving mechanisms, in which each consumers' welfare, as well as both
firms' profits, increase. Finally, we show that consumer opt-in can serve as an
instrument to induce firms to choose a Pareto-improving mechanism over a
profit-maximizing one.",Selling Data to a Competitor,2023-02-01 10:33:18,"Ronen Gradwohl, Moshe Tennenholtz","http://arxiv.org/abs/2302.00285v1, http://arxiv.org/pdf/2302.00285v1",cs.GT
34265,th,"This paper proposes a framework in which agents are constrained to use simple
models to forecast economic variables and characterizes the resulting biases.
It considers agents who can only entertain state-space models with no more than
$d$ states, where $d$ measures the intertemporal complexity of a model. Agents
are boundedly rational in that they can only consider models that are too
simple to nest the true process, yet they use the best model among those
considered. I show that using simple models adds persistence to forward-looking
decisions and increases the comovement among them. I then explain how this
insight can bring the predictions of three workhorse macroeconomic models
closer to data. In the new-Keynesian model, forward guidance becomes less
powerful. In the real business cycle model, consumption responds more
sluggishly to productivity shocks. The Diamond--Mortensen--Pissarides model
exhibits more internal propagation and more realistic comovement in response to
productivity and separation shocks.",Simple Models and Biased Forecasts,2022-02-14 21:26:31,Pooya Molavi,"http://arxiv.org/abs/2202.06921v4, http://arxiv.org/pdf/2202.06921v4",econ.TH
34266,th,"A privately-informed sender can commit to any disclosure policy towards a
receiver. We show that full disclosure is optimal under a sufficient condition
with some desirable properties. First, it speaks directly to the utility
functions of the parties, as opposed to the indirect utility function of the
sender; this makes it easily interpretable and verifiable. Second, it does not
require the sender's payoff to be a function of the posterior mean. Third, it
is weaker than the known conditions for some special cases. With this, we show
that full disclosure is optimal under modeling assumptions commonly used in
principal-agent papers.",On the optimality of full disclosure,2022-02-16 12:17:15,"Emiliano Catonini, Sergey Stepanov","http://arxiv.org/abs/2202.07944v4, http://arxiv.org/pdf/2202.07944v4",econ.TH
34267,th,"We must divide a finite number of indivisible goods and cash transfers
between agents with quasi-linear but otherwise arbitrary utilities over the
subsets of goods. We compare two division rules with cognitively feasible and
privacy preserving individual messages. In Sell&Buy agents bid for the role of
Seller or Buyer: with two agents the smallest bid defines the Seller who then
charges a price constrained only by her winning bid. In Divide&Choose
agents bid for the role of Divider, then everyone bids on the shares of the
Divider's partition. S&B dominates D&C on two counts: its guaranteed utility in
the worst case rewards (resp. penalises) more subadditive (resp. superadditive)
utilities; playing safe is never ambiguous and is also better placed to collect
a larger share of the efficient surplus.",Fair Division with Money and Prices,2022-02-16 17:51:44,"Anna Bogomolnaia, Herve Moulin","http://arxiv.org/abs/2202.08117v1, http://arxiv.org/pdf/2202.08117v1",econ.TH
34268,th,"In school choice, students make decisions based on their expectations of
particular schools' suitability, and the decision to gather information about
schools is influenced by the acceptance odds determined by the mechanism in
place. We study a school choice model where students can obtain information
about their preferences by incurring a cost. We demonstrate greater homogeneity
in rank-order reports and reduced information acquisition under the
Deferred-Acceptance (DA) mechanism, resulting in an increased reliance on
random tie-breaking and ultimately inefficient outcomes. Thus, easy access to
school information is critical for the DA mechanism to
maintain its efficiency.",Preference Learning in School Choice Problems,2022-02-17 01:48:21,SangMok Lee,"http://arxiv.org/abs/2202.08366v2, http://arxiv.org/pdf/2202.08366v2",econ.TH
34269,th,"This paper presents four theorems that connect continuity postulates in
mathematical economics to solvability axioms in mathematical psychology, and
ranks them under alternative supplementary assumptions. Theorem 1 connects
notions of continuity (full, separate, Wold, weak Wold, Archimedean, mixture)
with those of solvability (restricted, unrestricted) under the completeness and
transitivity of a binary relation. Theorem 2 uses the primitive notion of a
separately-continuous function to answer the question when an analogous
property on a relation is fully continuous. Theorem 3 provides a portmanteau
theorem on the equivalence between restricted solvability and various notions
of continuity under weak monotonicity. Finally, Theorem 4 presents a variant of
Theorem 3 that follows Theorem 1 in dispensing with the dimensionality
requirement and in providing partial equivalences between solvability and
continuity notions. These theorems are motivated for their potential use in
representation theorems.",Continuity Postulates and Solvability Axioms in Economic Theory and in Mathematical Psychology: A Consolidation of the Theory of Individual Choice,2022-02-17 05:33:12,"Aniruddha Ghosh, M. Ali Khan, Metin Uyanik","http://arxiv.org/abs/2202.08415v2, http://arxiv.org/pdf/2202.08415v2",econ.TH
34270,th,"A seller offers a buyer a schedule of transfers and associated product
qualities, as in Mussa and Rosen (1978). After observing this schedule, the
buyer chooses a flexible costly signal about his type. We show it is without
loss to focus on a class of mechanisms that compensate the buyer for his
learning costs. Using these mechanisms, we prove the quality always lies
strictly below the efficient level. This strict downward distortion holds even
if the buyer acquires no information or when the buyer's posterior type is the
highest possible given his signal, reversing the ""no distortion at the top""
feature that holds when information is exogenous.","Monopoly, Product Quality, and Flexible Learning",2022-02-21 07:26:23,"Jeffrey Mensch, Doron Ravid","http://arxiv.org/abs/2202.09985v4, http://arxiv.org/pdf/2202.09985v4",econ.TH
34271,th,"Attempts at predatory capture may provoke a defensive response that reduces
the very value of the predated resource. We provide a game-theoretic analysis
of simultaneous-move, two-player Attacker-Defender games that model such
interactions. When initial endowments are equal, Attackers win about a third of
such games in equilibrium. Under power disparities, Attackers become
particularly aggressive when they are approximately one-third poorer than
Defenders. With non-conflictual outside options Attackers become exceptionally
aggressive when their opponent has access to high-benefit, low-cost production,
and refrain from attack most when they are unilaterally provided with a
high-benefit, high-cost production option.",Equilibria of Attacker-Defender Games,2022-02-21 12:34:32,"Zsombor Z. Méder, Carsten K. W. de Dreu, Jörg Gross","http://arxiv.org/abs/2202.10072v2, http://arxiv.org/pdf/2202.10072v2",econ.TH
34272,th,"An agent progressively learns about a state of the world. A bookmaker is
ready to offer one bet after every new discovery. I say that the agent is
Dutch-booked when she is willing to accept every single bet, but her expected
payoff is negative under each state, where the expected payoff is computed with
the objective probabilities of different discoveries conditional on the state.
I introduce a rule of coherence among beliefs after counterfactual discoveries
that is necessary and sufficient to avoid being Dutch-booked. This rule
characterizes an agent who derives all her beliefs with Bayes rule from the
same Lexicographic Conditional Probability System (Blume, Brandenburger and
Dekel, 1991).",A Dutch book argument for belief consistency,2022-02-21 14:13:22,Emiliano Catonini,"http://arxiv.org/abs/2202.10121v1, http://arxiv.org/pdf/2202.10121v1",econ.TH
34273,th,"We study information design in games in which each player has a continuum of
actions. We show that an information structure is designer-optimal whenever the
equilibrium play it induces can also be implemented in a principal-agent
contracting problem. We use this observation to solve three novel applications.
In an investment game, the optimal structure fully informs a single investor
while providing no information to others. In an expectation polarization game,
the optimal structure fully informs half of the players while providing no
information to the other half. In a price competition game, the optimal
structure is noise-free Gaussian and recommends prices linearly in the states.
Our analysis further informs on the robustness and uniqueness of the optimal
information structures.",Information Design in Smooth Games,2022-02-22 16:32:47,"Alex Smolin, Takuro Yamashita","http://arxiv.org/abs/2202.10883v4, http://arxiv.org/pdf/2202.10883v4",econ.TH
34274,th,"For a many-to-one matching model, we study the matchings obtained through the
restabilization of stable matchings that had been disrupted by a change in the
population. We include a simple representation of the stable matching obtained
in terms of the initial stable matching (i.e., before being disrupted by
changes in the population) and the firm-optimal stable matching. (We used
Lattice Theory to characterize the outcome of the restabilization process.) We
also describe the connection between the original stable matching and the one
obtained after the restabilization process in the new market.",The outcome of the restabilization process in matching markets,2022-02-25 04:30:28,Millán Guerra Beatriz Alejandra,"http://arxiv.org/abs/2202.12452v1, http://arxiv.org/pdf/2202.12452v1",econ.TH
34275,th,"Individuals increasingly rely on social networking platforms to form
opinions. However, these platforms typically aim to maximize engagement, which
may not align with social good. In this paper, we introduce an opinion dynamics
model where agents are connected in a social network, and update their opinions
based on their neighbors' opinions and on the content shown to them by the
platform. We focus on a stochastic block model with two blocks, where the
initial opinions of the individuals in different blocks are different. We prove
that for large and dense enough networks the trajectory of opinion dynamics in
such networks can be approximated well by a simple two-agent system. The latter
admits tractable analysis, which we leverage to provide interesting
insights into the platform's impact on the social learning outcome in our
original two-block model. Specifically, by using our approximation result, we
show that agents' opinions approximately converge to some limiting opinion,
which is either: consensus, where all agents agree, or persistent disagreement,
where agents' opinions differ. We find that when the platform is weak and there
is a high number of connections between agents with different initial opinions,
a consensus equilibrium is likely. In this case, even if a persistent
disagreement equilibrium arises, the polarization in this equilibrium, i.e.,
the degree of disagreement, is low. When the platform is strong, a persistent
disagreement equilibrium is likely and the equilibrium polarization is high. A
moderate platform typically leads to a persistent disagreement equilibrium with
moderate polarization. We analyze the effect of initial polarization on
consensus and numerically explore various extensions, including a three-block
stochastic model and a correlation between initial opinions and agents'
connection probabilities.",Social Learning under Platform Influence: Consensus and Persistent Disagreement,2022-02-25 04:35:07,"Ozan Candogan, Nicole Immorlica, Bar Light, Jerry Anunrojwong","http://arxiv.org/abs/2202.12453v2, http://arxiv.org/pdf/2202.12453v2",econ.TH
34276,th,"We study how governments promote social welfare through the design of
contracting environments. We model the regulation of contracting as default
delegation: the government chooses a delegation set of contract terms it is
willing to enforce, and influences the default terms that serve as outside
options in parties' negotiations. Our analysis shows that limiting the
delegation set principally mitigates externalities, while default terms
primarily achieve distributional objectives. Applying our model to the
regulation of labor contracts, we derive comparative statics on the optimal
default delegation policy. As equity concerns or externalities increase,
in-kind support for workers increases (e.g. through benefits requirements and
public health insurance). Meanwhile, when worker bargaining power decreases
away from parity, support for workers increases in cash (e.g. through cash
transfers and minimum wage laws).","Optimal Defaults, Limited Enforcement and the Regulation of Contracts",2022-03-02 19:56:49,"Zoë Hitzig, Benjamin Niswonger","http://arxiv.org/abs/2203.01233v2, http://arxiv.org/pdf/2203.01233v2",econ.TH
34277,th,"India is home to a comprehensive affirmative action program that reserves a
fraction of positions at governmental institutions for various disadvantaged
groups. While there is a Supreme Court-endorsed mechanism to implement these
reservation policies when all positions are identical, courts have refrained
from endorsing explicit mechanisms when positions are heterogeneous. This
lacuna has resulted in widespread adoption of unconstitutional mechanisms,
countless lawsuits, and inconsistent court rulings. Formulating mandates in the
landmark Supreme Court judgment Saurav Yadav (2020) as technical axioms, we
show that the 2SMH-DA mechanism is uniquely suited to overcome these
challenges.",Constitutional Implementation of Affirmative Action Policies in India,2022-03-03 05:02:26,"Tayfun Sonmez, M. Bumin Yenmez","http://arxiv.org/abs/2203.01483v2, http://arxiv.org/pdf/2203.01483v2",econ.TH
34278,th,"A speculator can take advantage of a procurement auction by acquiring items
for sale before the auction. The accumulated market power can then be exercised
in the auction and may lead to a large enough gain to cover the acquisition
costs. I show that speculation always generates a positive expected profit in
second-price auctions but could be unprofitable in first-price auctions. In the
case where speculation is profitable in first-price auctions, it is more
profitable in second-price auctions. This comparison in profitability is driven
by different competition patterns in the two auction mechanisms. In terms of
welfare, speculation causes private value destruction and harms efficiency.
Sellers benefit from the acquisition offer made by the speculator. Therefore,
speculation comes at the expense of the auctioneer.",Speculation in Procurement Auctions,2022-03-06 23:35:54,Shanglyu Deng,"http://dx.doi.org/10.1016/j.jet.2023.105692, http://arxiv.org/abs/2203.03044v3, http://arxiv.org/pdf/2203.03044v3",econ.TH
34279,th,"I study how past choices affect future choices in the framework of attention.
Limited consideration causes a failure of ""rationality"", where better options
are not chosen because the DM has failed to consider them. I innovate and
consider choice sequences, where past choices are necessarily considered in
future choice problems. This provides a link between two kinds of rationality
violations: those that occur in a cross-section of one-shot decisions and those
that occur along a sequence of realized choices. In my setting, the former
helps identify attention whereas the latter pins down true preferences. Both
types of violations vanish over time and furnish a novel notion of rationality.
A series of results shows that a DM may suffer from limited attention even
though it cannot be detected when choice sequences are not considered.
Moreover, attention across time is inherently related to models of attention at
a given time, and a full characterization of compatible models is provided.",Choice and Attention Across Time,2022-03-07 12:56:14,Xi Zhi Lim,"http://arxiv.org/abs/2203.03243v2, http://arxiv.org/pdf/2203.03243v2",econ.TH
34280,th,"We study a method for calculating the utility function from a candidate of a
demand function that is not differentiable, but is locally Lipschitz. Using
this method, we obtain two new necessary and sufficient conditions for a
candidate of a demand function to be a demand function. The first is conditions
for the Slutsky matrix, and the second is the existence of a concave solution
to a partial differential equation. Moreover, we show that the upper
semi-continuous weak order that corresponds to the demand function is unique,
and that this weak order is represented by our calculated utility function. We
provide applications of these results to econometric theory. First, we show
that, under several requirements, if a sequence of demand functions converges
to some function with respect to the metric of compact convergence, then the
limit is also a demand function. Second, the space of demand functions that
have uniform Lipschitz constants on any compact set is compact under the above
metric. Third, the mapping from a demand function to the calculated utility
function becomes continuous. We also show a similar result on the topology of
pointwise convergence.",Non-Smooth Integrability Theory,2022-03-09 17:37:46,Yuhki Hosoya,"http://arxiv.org/abs/2203.04770v3, http://arxiv.org/pdf/2203.04770v3",econ.TH
34281,th,"This paper analyzes a model of innovation diffusion with case-based
individuals a la Gilboa and Schmeidler (1995,1996,1997), who decide whether to
consume an incumbent or a new product based on their and their social
neighbors' previous consumption experiences. I analyze how the diffusion pattern
changes with individual characteristics, innovation characteristics, and the
social network. In particular, radical innovation leads to higher initial speed
but lower acceleration compared to incremental innovation. Social networks with
stronger overall social ties, a lower degree of homophily, or higher exposure to
reviews from early adopters speed up the diffusion of innovation.",Innovation Diffusion among Case-based Decision-makers,2022-03-11 10:56:11,Benson Tsz Kin Leung,"http://arxiv.org/abs/2203.05785v2, http://arxiv.org/pdf/2203.05785v2",econ.TH
34282,th,"Keeping up with ""The Joneses"" matters. This paper examines a model of
reference dependent choice where reference points are determined by social
comparisons. An increase in the strength of social comparisons, even by only a
few agents, increases consumption and decreases welfare for everyone.
Strikingly, a higher marginal cost of consumption can increase welfare. In a
labour market, social comparisons with co-workers create a big fish in a small
pond effect, inducing incomplete labour market sorting. Further, it is the
skilled workers with the weakest social networks who are induced to give up
income to become the big fish.","Keeping up with ""The Joneses"": reference dependent choice with social comparisons",2022-03-19 14:57:15,Alastair Langtry,"http://arxiv.org/abs/2203.10305v2, http://arxiv.org/pdf/2203.10305v2",econ.TH
34283,th,"It is well known that individual beliefs cannot be identified using
traditional choice data, unless we impose the practically restrictive and
conceptually awkward assumption that utilities are state-independent. In this
paper, we propose a novel methodology that solves this long-standing
identification problem in a simple way, using a variant of the strategy method.
Our method relies on the concept of a suitable proxy. The crucial property is
that the agent does not have any stakes in the proxy conditional on the
realization of the original state space. Then, instead of trying to identify
directly the agent's beliefs about the state space, we elicit her conditional
beliefs about the proxy given each state realization. The latter can be easily
done with existing elicitation tools and without worrying about the
identification problem. It turns out that this is enough to uniquely identify
the agent's beliefs. We present different classes of proxies that one can
reasonably use, and we show that it is almost always possible to find one which
is easy to implement. Such flexibility makes our method not only
theoretically sound but also empirically appealing. We also show how our new
method allows us to provide a novel well-founded definition of a utility
function over states. Last but not least, it also allows us to cleanly identify
motivated beliefs, freed from confounding distortions caused by the
identification problem.",Belief identification with state-dependent utilities,2022-03-20 12:34:37,Elias Tsakas,"http://arxiv.org/abs/2203.10505v3, http://arxiv.org/pdf/2203.10505v3",econ.TH
34284,th,"This paper presents a theoretical model of an autocrat who controls the media
in an attempt to persuade society of his competence. We base our analysis on a
Bayesian persuasion framework in which citizens have heterogeneous preferences
and beliefs about the autocrat. We characterize the autocrat's information
manipulation strategy when society is monolithic and when it is divided. When
the preferences and beliefs in society are more diverse, the autocrat engages
in less information manipulation. Our findings thus suggest that the diversity
of attitudes and opinions can act as a bulwark against information manipulation
by hostile actors.","Informational Autocrats, Diverse Societies",2022-03-23 22:47:40,"A. Arda Gitmez, Pooya Molavi","http://arxiv.org/abs/2203.12698v3, http://arxiv.org/pdf/2203.12698v3",econ.TH
34285,th,"If prices of assets traded in a financial market are determined by non-linear
pricing rules, different versions of the Call-Put Parity have been considered.
We show that, under monotonicity, parities between call and put options and
discount certificates characterize ambiguity-sensitive (Choquet and/or Sipos)
pricing rules, i.e., pricing rules that can be represented via discounted
expectations with respect to non-additive probability measures. We analyze how
non-additivity relates to arbitrage opportunities and we give necessary and
sufficient conditions for Choquet and Sipos pricing rules to be arbitrage-free.
Finally, we identify violations of the Call-Put Parity with the presence of
bid-ask spreads.","Put-Call Parities, absence of arbitrage opportunities and non-linear pricing rules",2022-03-30 16:33:55,"Lorenzo Bastianello, Alain Chateauneuf, Bernard Cornet","http://arxiv.org/abs/2203.16292v1, http://arxiv.org/pdf/2203.16292v1",econ.TH
34286,th,"In a many-to-one matchingmodel with responsive preferences in which
indifferences are allowed, we study three notions of core, three notions of
stability, and their relationships. We show that (i) the core contains the
stable set, (ii) the strong core coincides with the strongly stable set, and
(iii) the super core coincides with the super stable set. We also show how the
core and the strong core in markets with indifferences relate to the stable
matchings of their associated tie-breaking strict markets.",Core and stability notions in many-to-one matching markets with indifferences,2022-03-30 16:35:29,"Agustín G. Bonifacio, Noelia Juarez, Pablo Neme, Jorge Oviedo","http://arxiv.org/abs/2203.16293v1, http://arxiv.org/pdf/2203.16293v1",econ.TH
35242,th,"If in a signaling game the receiver expects to gain no information by
monitoring the signal of the sender, then when a cost to monitor is implemented
he will never pay that cost regardless of his off-path beliefs. This is the
argument of a recent paper by T. Denti (2021). However, at which pooling
equilibrium does a receiver anticipate gaining no information through
monitoring? This paper seeks to prove that, given a sufficiently small cost to
monitor, any pooling equilibrium with a non-zero index will survive close to the
original equilibrium.",Signaling Games with Costly Monitoring,2023-02-02 17:21:33,Reuben Bearman,"http://arxiv.org/abs/2302.01116v1, http://arxiv.org/pdf/2302.01116v1",econ.TH
34287,th,"A policymaker discloses public information to interacting agents who also
acquire costly private information. More precise public information reduces the
precision and cost of acquired private information. Considering this effect,
what disclosure rule should the policymaker adopt? We address this question
under two alternative assumptions using a linear quadratic Gaussian game with
arbitrary quadratic material welfare and convex information costs. First, the
policymaker knows the cost of private information and adopts an optimal
disclosure rule to maximize the expected welfare. Second, the policymaker is
uncertain about the cost and adopts a robust disclosure rule to maximize the
worst-case welfare. Depending on the elasticity of marginal cost, an optimal
rule is qualitatively the same as that in the case of either a linear
information cost or exogenous private information. The worst-case welfare is
strictly increasing if and only if full disclosure is optimal under some
information costs, which provides a new rationale for central bank
transparency.",Optimal and Robust Disclosure of Public Information,2022-03-31 08:32:43,Takashi Ui,"http://arxiv.org/abs/2203.16809v2, http://arxiv.org/pdf/2203.16809v2",econ.TH
34288,th,"This paper presents a method to avoid the ominous prediction of the previous
work: if insured incomes of infinitely lived individuals are unobservable, then
efficient allocations lead to permanent welfare inequality between individuals
and immiseration of society. The proposed mechanism (i) promises within-period
full insurance by postponing risks; (ii) these insurance contracts are
actuarially fair; (iii) it does not lead to social ranking and is sustainable.",Insuring uninsurable income,2022-04-01 13:49:28,Michiko Ogaku,"http://arxiv.org/abs/2204.00347v7, http://arxiv.org/pdf/2204.00347v7",econ.TH
34289,th,"We study the implementation of fixed priority top trading cycles (FPTTC)
rules via simply dominant mechanisms (Pycia and Troyan, 2019) in the context of
assignment problems, where agents are to be assigned at most one indivisible
object and monetary transfers are not allowed. We consider both models, with
and without outside options, and characterize all simply dominant FPTTC rules
in both models. We further introduce the notion of simple strategy-proofness to
resolve the issue with agents being concerned about having time-inconsistent
preferences, and discuss its relation with simple dominance.",Simple dominance of fixed priority top trading cycles,2022-04-05 15:24:34,Pinaki Mandal,"http://arxiv.org/abs/2204.02154v3, http://arxiv.org/pdf/2204.02154v3",econ.TH
34290,th,"This paper presents a set of tests and an algorithm for agnostic, data-driven
selection among macroeconomic DSGE models inspired by structure learning
methods for DAGs. As the log-linear state-space solution to any DSGE model is
also a DAG, it is possible to use associated concepts to identify a unique
ground-truth state-space model which is compatible with an underlying DGP,
based on the conditional independence relationships which are present in that
DGP. In order to operationalise the search for this ground-truth model, the
algorithm tests feasible analogues of these conditional independence criteria
against the set of combinatorially possible state-space models over observed
variables. This process is consistent in large samples. In small samples the
result may not be unique, so conditional independence tests can be combined
with likelihood maximisation in order to select a single optimal model. The
efficacy of this algorithm is demonstrated for simulated data, and results for
real data are also provided and discussed.",Causal Discovery of Macroeconomic State-Space Models,2022-04-05 20:27:33,Emmet Hall-Hoffarth,"http://arxiv.org/abs/2204.02374v1, http://arxiv.org/pdf/2204.02374v1",econ.TH
34291,th,"In this paper we consider stable matchings that are subject to assignment
constraints. These are matchings that require certain assigned pairs to be
included, insist that some other assigned pairs are not, and, importantly, are
stable. Our main contribution is an algorithm that determines when assignment
constraints are compatible with stability. Whenever a stable matching
consistent with the assignment constraints exists, our algorithm will output
all of them (each in polynomial time per solution). This provides market
designers with (i) a tool to test the feasibility of stable matchings with
assignment constraints, and (ii) a separate tool to implement them.",Finding all stable matchings with assignment constraints,2022-04-08 13:43:55,"Gregory Gutin, Philip R. Neary, Anders Yeo","http://arxiv.org/abs/2204.03989v3, http://arxiv.org/pdf/2204.03989v3",econ.TH
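A minimal brute-force sketch of the feasibility question posed in this abstract: find stable matchings that respect forced and forbidden pairs. This is not the paper's polynomial-time algorithm; it simply enumerates all one-to-one matchings on a toy instance, and the preference data below are illustrative assumptions.

```python
# Brute-force sketch (not the paper's algorithm): enumerate all one-to-one matchings,
# keep those that are stable and respect the assignment constraints
# (forced pairs must appear, forbidden pairs must not).
from itertools import permutations

men_prefs = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women_prefs = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
forced, forbidden = {("m1", "w2")}, set()           # illustrative constraints

def is_stable(match):
    for m, w in match.items():
        for w2 in men_prefs[m]:
            if men_prefs[m].index(w2) < men_prefs[m].index(w):      # m prefers w2 to w
                m2 = next(k for k, v in match.items() if v == w2)    # w2's current partner
                if women_prefs[w2].index(m) < women_prefs[w2].index(m2):
                    return False                                     # (m, w2) blocks
    return True

for perm in permutations(list(women_prefs)):
    match = dict(zip(men_prefs, perm))
    pairs = set(match.items())
    if forced <= pairs and not (forbidden & pairs) and is_stable(match):
        print("stable matching satisfying the constraints:", match)
```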
34292,th,"We introduce a new notion of rationalizability where the rationalizing
relation may depend on the set of feasible alternatives. More precisely, we say
that a choice function is locally rationalizable if it is rationalized by a
family of rationalizing relations such that a strict preference between two
alternatives in some feasible set is preserved when removing other
alternatives. We then show that a choice function is locally rationalizable if
and only if it satisfies Sen's $\gamma$ and give similar characterizations for
local rationalizability via transitive, PIP-transitive, and quasi-transitive
relations. Local rationalizability is realized via families of revealed
preference relations that are sandwiched in between the base relation and the
revealed preference relation of a choice function. We demonstrate the adequacy
of our results for analyzing and constructing consistent social choice
functions by giving simple characterizations of the top cycle and the uncovered
set using transitive and quasi-transitive local rationalizability.",Local Rationalizability and Choice Consistency,2022-04-11 16:03:59,"Felix Brandt, Chris Dong","http://arxiv.org/abs/2204.05062v2, http://arxiv.org/pdf/2204.05062v2",econ.TH
34293,th,"We consider the disclosure problem of a sender with a large data set of hard
evidence who wants to persuade a receiver to take higher actions. Because the
receiver will make inferences based on the distribution of the data they see,
the sender has an incentive to drop observations to mimic the distributions
that would be observed under better states. We predict which observations the
sender discloses using a model that approximates large datasets with a
continuum of data. It is optimal for the sender to play an imitation strategy,
under which they submit evidence that imitates the natural distribution under
some more desirable target state. We characterize the partial-pooling outcomes
under these imitation strategies, and show that they are supported by data on
the outcomes that maximally distinguish higher states. Relative to full
information, the equilibrium with voluntary disclosure reduces the welfare of
senders with little data or a favorable state, who fully disclose their data
but suffer the receiver's skepticism, and benefits senders with access to large
datasets, who can profitably drop observations under low states.",Inference from Selectively Disclosed Data,2022-04-14 21:54:22,Ying Gao,"http://arxiv.org/abs/2204.07191v3, http://arxiv.org/pdf/2204.07191v3",econ.TH
34294,th,"We study a dynamic generalization of stochastic rationality in consumer
behavior, the Dynamic Random Utility Model (DRUM). Under DRUM, a consumer draws
a utility function from a stochastic utility process and maximizes this utility
subject to her budget constraint in each time period. Utility is random, with
unrestricted correlation across time periods and unrestricted heterogeneity in
a cross-section. We provide a revealed preference characterization of DRUM when
we observe a panel of choices from budgets. This characterization is amenable
to statistical testing. Our result unifies Afriat's (1967) theorem that works
with time-series data and the static random utility framework of
McFadden-Richter (1990) that works with cross-sections of choice.",Nonparametric Analysis of Dynamic Random Utility Models,2022-04-14 23:50:42,"Nail Kashaev, Victor H. Aguiar","http://arxiv.org/abs/2204.07220v1, http://arxiv.org/pdf/2204.07220v1",econ.TH
34295,th,"We compare the outcomes of the most prominent strategy-proof and stable
algorithm (Deferred Acceptance, DA) and the most prominent strategy-proof and
Pareto optimal algorithm (Top Trading Cycles, TTC) to the allocation generated
by the rank-minimizing mechanism (RM). While one would expect that RM improves
upon both DA and TTC in terms of rank efficiency, the size of the improvement
is nonetheless surprising. Moreover, while it is not explicitly designed to do
so, RM also significantly improves the placement of the worst-off student.
Furthermore, RM generates less justified envy than TTC. We corroborate our
findings using data on school admissions in Budapest.",The cost of strategy-proofness in school choice,2022-04-15 01:35:43,"Josue Ortega, Thilo Klein","http://dx.doi.org/10.1016/j.geb.2023.07.008, http://arxiv.org/abs/2204.07255v6, http://arxiv.org/pdf/2204.07255v6",econ.TH
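A minimal sketch of the rank-minimizing (RM) benchmark mentioned above: choose the assignment that minimizes the sum of students' ranks, computed here with the Hungarian algorithm. It assumes one seat per school for simplicity (capacities could be handled by duplicating school columns); the preference lists are illustrative.

```python
# Minimal sketch of a rank-minimizing assignment: minimize the total rank students
# assign to their allotted school. One seat per school is a simplifying assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

# preferences[i] lists school indices from student i's most to least preferred
preferences = [
    [0, 1, 2],
    [0, 2, 1],
    [1, 0, 2],
]

n = len(preferences)
# rank_matrix[i, s] = rank (1 = top choice) that student i gives to school s
rank_matrix = np.empty((n, n), dtype=int)
for i, prefs in enumerate(preferences):
    for rank, school in enumerate(prefs, start=1):
        rank_matrix[i, school] = rank

students, schools = linear_sum_assignment(rank_matrix)   # minimizes total rank
for i, s in zip(students, schools):
    print(f"student {i} -> school {s} (rank {rank_matrix[i, s]})")
print("average rank:", rank_matrix[students, schools].mean())
```

Comparing this average rank against the outcome of DA or TTC on the same profile is the kind of rank-efficiency comparison the abstract reports.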
34296,th,"In the canonical persuasion model, comparative statics has been an open
question. We answer it, delineating which shifts of the sender's interim payoff
lead her optimally to choose a more informative signal. Our first theorem
identifies a coarse notion of 'increased convexity' that we show characterises
those shifts of the sender's interim payoff that lead her optimally to choose
no less informative signals. To strengthen this conclusion to 'more
informative' requires further assumptions: our second theorem identifies the
necessary and sufficient condition on the sender's interim payoff, which
strictly generalises the 'S' shape commonly imposed in the literature. We
identify conditions under which increased alignment of interests between sender
and receiver leads to comparative statics, and study applications.",The comparative statics of persuasion,2022-04-14 17:38:53,"Gregorio Curello, Ludvig Sinander","http://arxiv.org/abs/2204.07474v3, http://arxiv.org/pdf/2204.07474v3",econ.TH
34297,th,"Eigen mode selection ought to be a practical issue in some real game systems,
as it is a practical issue in the dynamics behaviour of a building, bridge, or
molecular, because of the mathematical similarity in theory. However, its
reality and accuracy have not been known in real games. We design a 5-strategy
game which, in the replicator dynamics theory, is predicted to exist two eigen
modes. Further, in behaviour game theory, the game is predicted that the mode
selection should depends on the game parameter. We conduct human subject game
experiments by controlling the parameter. The data confirm that, the
predictions on the mode existence as well as the mode selection are
significantly supported. This finding suggests that, like the equilibrium
selection concept in classical game theory, eigen mode selection is an issue in
game dynamics theory.",Eigen mode selection in human subject game experiment,2022-04-17 22:10:55,"Zhijian Wang, Qinmei Yao, Yijia Wang","http://arxiv.org/abs/2204.08071v1, http://arxiv.org/pdf/2204.08071v1",econ.TH
34298,th,"Drafts are sequential allocation procedures for distributing heterogeneous
and indivisible objects among agents subject to some priority order (e.g.,
allocating players' contract rights to teams in professional sports leagues).
Agents reveal ordinal preferences over objects and bundles are partially
ordered by pairwise comparison. We provide a simple characterization of draft
rules: they are the only allocation rules satisfying respectfulness of a
priority (RP), envy-freeness up to one object (EF1), non-wastefulness (NW) and
resource-monotonicity (RM). RP and EF1 are crucial for competitive balance in
sports leagues. We also prove three related impossibility theorems: (i) weak
strategy-proofness (WSP) is incompatible with RP, EF1, and NW; (ii) WSP is
incompatible with EF1 and (Pareto) efficiency (EFF); and (iii) when there are
two agents, strategy-proofness (SP) is incompatible with EF1 and NW. However,
draft rules satisfy the competitive-balance properties, RP and EF1, together
with EFF and maxmin strategy-proofness. Intuitively, draft rules are
strategy-proof when agents face uncertainty about other agents' preference
reports and they are maxmin utility maximizers. If agents may declare some
objects unacceptable, then draft rules are characterized by RP, EF1, NW, and
RM, together with individual rationality and truncation-invariance.",An Axiomatic Characterization of Draft Rules,2022-04-18 16:13:34,"Jacob Coreno, Ivan Balbuzanov","http://arxiv.org/abs/2204.08300v4, http://arxiv.org/pdf/2204.08300v4",econ.TH
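A minimal sketch of the draft procedure described in this abstract: in each round, agents pick their most-preferred remaining acceptable object following a fixed priority order. The straight (non-snake) picking order and the treatment of unacceptable objects are illustrative choices, not details taken from the paper.

```python
# Minimal draft-rule sketch: round-by-round picking in priority order.
def draft(priority, preferences, objects):
    """priority: agent names in picking order.
    preferences: dict agent -> list of acceptable objects, most preferred first.
    objects: set of available objects."""
    remaining = set(objects)
    allocation = {agent: [] for agent in priority}
    while remaining:
        picked_this_round = False
        for agent in priority:
            choice = next((o for o in preferences[agent] if o in remaining), None)
            if choice is not None:
                allocation[agent].append(choice)
                remaining.discard(choice)
                picked_this_round = True
        if not picked_this_round:       # everything left is unacceptable to everyone
            break
    return allocation

prefs = {"A": ["x", "y", "z"], "B": ["x", "z", "y"], "C": ["y", "x"]}
print(draft(["A", "B", "C"], prefs, {"x", "y", "z"}))
```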
34299,th,"In the classical Bayesian persuasion model an informed player and an
uninformed one engage in a static interaction. The informed player, the sender,
knows the state of nature, while the uninformed one, the receiver, does not.
The informed player partially shares his private information with the receiver
and the latter then, based on her belief about the state, takes an action. This
action, together with the state of nature, determines the utility of both
players. This paper analyzes a dynamic Bayesian persuasion model where the
state of nature evolves according to a Markovian law. Here, the sender always
knows the realized state, while the receiver randomly gets to know it. We
discuss the value of the sender when he becomes more and more patient and its
relation to the revelation rate, namely the probability at which the true state
is revealed to the receiver at any stage.",Markovian Persuasion with Stochastic Revelations,2022-04-19 07:35:20,"Ehud Lehrer, Dimitry Shaiderman","http://arxiv.org/abs/2204.08659v2, http://arxiv.org/pdf/2204.08659v2",econ.TH
34300,th,"A monopoly seller is privately and imperfectly informed about the buyer's
value of the product. The seller uses information to price discriminate the
buyer. A designer offers a mechanism that provides the seller with additional
information based on the seller's report about her type. We establish the
impossibility of screening for welfare purposes, i.e., the designer can attain
any implementable combination of buyer surplus and seller profit by providing
the same signal to all seller types. We use this result to characterize the set
of implementable welfare outcomes, study the seller's incentive to acquire
third-party data, and demonstrate the trade-off between buyer surplus and
efficiency.",Data Provision to an Informed Seller,2022-04-19 10:53:41,"Shota Ichihashi, Alex Smolin","http://arxiv.org/abs/2204.08723v2, http://arxiv.org/pdf/2204.08723v2",econ.TH
34301,th,"We describe a context-sensitive model of choice, in which the selection
process is shaped not only by the attractiveness of items but also by their
semantics ('salience'). All items are ranked according to a relation of
salience, and a linear order is associated to each item. The selection of a
single element from a menu is justified by one of the linear orders indexed by
the most salient items in the menu. The general model provides a structured
explanation for any observed behavior, and allows us to model the
'moodiness' of a decision maker, which is typical of choices requiring as many
distinct rationales as items. Asymptotically all choices are moody. We single
out a model of linear salience, in which the salience order is transitive and
complete, and characterize it by a behavioral property, called WARP(S). Choices
rationalizable by linear salience can only exhibit non-conflicting violations
of WARP. We also provide numerical estimates, which show the high selectivity
of this testable model of bounded rationality.",Semantics meets attractiveness: Choice by salience,2022-04-19 13:42:24,"Alfio Giarlotta, Angelo Petralia, Stephen Watson","http://arxiv.org/abs/2204.08798v2, http://arxiv.org/pdf/2204.08798v2",econ.TH
34302,th,"Classical Bayesian mechanism design relies on the common prior assumption,
but such prior is often not available in practice. We study the design of
prior-independent mechanisms that relax this assumption: the seller is selling
an indivisible item to $n$ buyers such that the buyers' valuations are drawn
from a joint distribution that is unknown to both the buyers and the seller;
buyers do not need to form beliefs about competitors, and the seller assumes
the distribution is adversarially chosen from a specified class. We measure
performance through the worst-case regret, or the difference between the
expected revenue achievable with perfect knowledge of buyers' valuations and
the actual mechanism revenue.
  We study a broad set of classes of valuation distributions that capture a
wide spectrum of possible dependencies: independent and identically distributed
(i.i.d.) distributions, mixtures of i.i.d. distributions, affiliated and
exchangeable distributions, exchangeable distributions, and all joint
distributions. We derive in quasi closed form the minimax values and the
associated optimal mechanism. In particular, we show that the first three
classes admit the same minimax regret value, which is decreasing with the
number of competitors, while the last two have the same minimax regret equal to
that of the single buyer case. Furthermore, we show that the minimax optimal
mechanisms have a simple form across all settings: a second-price auction with
random reserve prices, which shows its robustness in prior-independent
mechanism design. En route to our results, we also develop a principled
methodology to determine the form of the optimal mechanism and worst-case
distribution via first-order conditions that should be of independent interest
in other minimax problems.",On the Robustness of Second-Price Auctions in Prior-Independent Mechanism Design,2022-04-22 06:18:32,"Jerry Anunrojwong, Santiago R. Balseiro, Omar Besbes","http://arxiv.org/abs/2204.10478v3, http://arxiv.org/pdf/2204.10478v3",econ.TH
34303,th,"We study dynamic screening problems where elements are subjected to noisy
evaluations and, in every stage, some of the elements are rejected while the
remaining ones are independently re-evaluated in subsequent stages. We prove
that, ceteris paribus, the quality of a dynamic screening process is not
monotonic in the number of stages. Specifically, we examine the accepted
elements' values and show that adding a single stage to a screening process may
produce inferior results, in terms of stochastic dominance, whereas increasing
the number of stages substantially leads to a first-best outcome.",Dynamic screening,2022-04-28 13:19:27,"David Lagziel, Ehud Lehrer","http://arxiv.org/abs/2204.13392v1, http://arxiv.org/pdf/2204.13392v1",econ.TH
34304,th,"I consider a mechanism design problem of selling multiple goods to multiple
bidders when the designer has a minimal amount of information. I assume that the
designer only knows the upper bounds of bidders' values for each good and has
no additional distributional information. The designer takes a minimax regret
approach. The expected regret from a mechanism given a joint distribution over
value profiles and an equilibrium is defined as the difference between the full
surplus and the expected revenue. The designer seeks a mechanism, referred to
as a minimax regret mechanism, that minimizes her worst-case expected regret
across all possible joint distributions over value profiles and all equilibria.
I find that a separate second-price auction with random reserves is a minimax
regret mechanism for general upper bounds. Under this mechanism, the designer
holds a separate auction for each good; the formats of these auctions are
second-price auctions with random reserves.",Auctioning Multiple Goods without Priors,2022-04-28 21:11:07,Wanchang Zhang,"http://arxiv.org/abs/2204.13726v1, http://arxiv.org/pdf/2204.13726v1",econ.TH
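A minimal sketch of the mechanism named in this abstract, a separate second-price auction with a random reserve for each good: the highest bidder above the reserve wins and pays the maximum of the second-highest bid and the reserve. The uniform reserve draw below is only a placeholder, not the minimax-optimal reserve distribution derived in the paper, and the bids are illustrative.

```python
# Minimal sketch: one second-price auction per good, each with a randomly drawn reserve.
import random

def second_price_with_random_reserve(bids, upper_bound, rng):
    reserve = rng.uniform(0, upper_bound)          # placeholder reserve distribution
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    if top_bid < reserve:
        return None, 0.0                           # good stays unsold
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, max(runner_up, reserve)

rng = random.Random(1)
bids_per_good = {"good1": {"b1": 0.8, "b2": 0.5}, "good2": {"b1": 0.2, "b2": 0.9}}
upper_bounds = {"good1": 1.0, "good2": 1.0}        # the designer's known value upper bounds
for good, bids in bids_per_good.items():
    winner, price = second_price_with_random_reserve(bids, upper_bounds[good], rng)
    print(good, "->", winner, round(price, 3))
```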
34305,th,"In school choice problems, the motivation for students' welfare (efficiency)
is restrained by concerns to respect schools' priorities (fairness). Among the
fair matchings, even the best one in terms of welfare (SOSM) is inefficient.
Moreover, any mechanism that improves welfare over the SOSM is manipulable by
the students. First, we characterize the ""least manipulable"" mechanisms in this
class: monotonically-promoting transformation proofness ensures that no student
is better off by promoting their assigned school under the true preferences.
Second, we use the notion that a matching is less unfair if it yields a smaller
set of students whose priorities are violated, and define minimal unfairness
accordingly. We then show that the Efficiency Adjusted Deferred Acceptance
(EADA) mechanism is minimally unfair in the class of efficient and
monotonically-promoting transformation proof mechanisms. When the objective is
to improve students' welfare over the SOSM, this characterization implies an
important insight into the frontier of the main axioms in school choice.",Improving the Deferred Acceptance with Minimal Compromise,2022-04-29 21:30:05,"Mustafa Oguz Afacan, Umut Dur, A. Arda Gitmez, Özgür Yılmaz","http://arxiv.org/abs/2205.00032v3, http://arxiv.org/pdf/2205.00032v3",econ.TH
34306,th,"We study games in which the set of strategies is multi-dimensional, and new
agents might learn various strategic dimensions from different mentors. We
introduce a new family of dynamics, the recombinator dynamics, which is
characterised by a single parameter, the recombination rate r in [0,1]. The
case of r = 0 coincides with the standard replicator dynamics. The opposite
case of r = 1 corresponds to a setup in which each new agent learns each new
strategic dimension from a different mentor, and combines these dimensions into
her adopted strategy. We fully characterise stationary states and stable states
under these dynamics, and we show that they predict novel behaviour in various
applications.",Mentors and Recombinators: Multi-Dimensional Social Learning,2022-04-30 17:11:10,"Srinivas Arigapudi, Omer Edhan, Yuval Heller, Ziv Hellman","http://arxiv.org/abs/2205.00278v2, http://arxiv.org/pdf/2205.00278v2",econ.TH
34338,th,"We develop a theory of monotone comparative statics for models with
adjustment costs. We show that comparative-statics conclusions may be drawn
under the usual ordinal complementarity assumptions on the objective function,
assuming very little about costs: only a mild monotonicity condition is
required. We use this insight to prove a general le Chatelier principle: under
the ordinal complementarity assumptions, if short-run adjustment is subject to
a monotone cost, then the long-run response to a shock is greater than the
short-run response. We extend these results to a fully dynamic model of
adjustment over time: the le Chatelier principle remains valid, and under
slightly stronger assumptions, optimal adjustment follows a monotone path. We
apply our results to models of capital investment and of sticky prices.",Comparative statics with adjustment costs and the le Chatelier principle,2022-06-01 12:27:10,"Eddie Dekel, John K. -H. Quah, Ludvig Sinander","http://arxiv.org/abs/2206.00347v2, http://arxiv.org/pdf/2206.00347v2",econ.TH
34307,th,"We study the behavioral implications of Rationality and Common Strong Belief
in Rationality (RCSBR) with contextual assumptions allowing players to
entertain misaligned beliefs, i.e., players can hold beliefs concerning their
opponents' beliefs where there is no opponent holding those very beliefs.
Taking the analysts' perspective, we distinguish the infinite hierarchies of
beliefs actually held by players (""real types"") from those that are a byproduct
of players' hierarchies (""imaginary types"") by introducing the notion of
separating type structure. We characterize the behavioral implications of RCSBR
for the real types across all separating type structures via a family of
subsets of Full Strong Best-Reply Sets of Battigalli & Friedenberg (2012). By
allowing misalignment, in dynamic games we can obtain behavioral predictions
inconsistent with RCSBR (in the standard framework), contrary to the case of
belief-based analyses for static games--a difference due to the dichotomy
""non-monotonic vs. monotonic"" reasoning.",Strategic Behavior under Context Misalignment,2022-05-02 00:23:13,"Pierfrancesco Guarino, Gabriel Ziegler","http://arxiv.org/abs/2205.00564v1, http://arxiv.org/pdf/2205.00564v1",econ.TH
34308,th,"This study investigates the prevention of market manipulation using a
price-impact model of financial market trading as a linear system. First, I
define a trading game between speculators such that they implement a
manipulation trading strategy that exploits momentum traders. Second, I
identify market intervention by a controller (e.g., a central bank) with a
control of the system. The main result shows that there is a control strategy
that prevents market manipulation as a subgame perfect equilibrium outcome of
the trading game. On the equilibrium path, no intervention is realized. This
study also characterizes the set of manipulation-proof linear pricing rules of
the system. The set is very restrictive if there is no control, while the
presence of control drastically expands the set.",A Model of Financial Market Control,2022-05-03 04:01:33,Yoshihiro Ohashi,"http://arxiv.org/abs/2205.01260v1, http://arxiv.org/pdf/2205.01260v1",econ.TH
34309,th,"This paper presents a sandbox example of how the integration of models
borrowed from Behavioral Economics (specifically Protection-Motivation Theory)
into ML algorithms (specifically Bayesian Networks) can improve the performance
and interpretability of ML algorithms when applied to behavioral data. The
integration of Behavioral Economics knowledge to define the architecture of the
Bayesian Network increases the accuracy of the predictions by 11 percentage
points. Moreover, it simplifies the training process, making it unnecessary to
spend computational effort on identifying the optimal structure of the
Bayesian Network. Finally, it improves the explicability of the algorithm,
avoiding illogical relations among variables that are not supported by the
previous behavioral cybersecurity literature. Although preliminary and limited
to one simple model trained with a small dataset, our results suggest that the
integration of behavioral economics and complex ML models may open a promising
strategy to improve the predictive power, training costs, and explicability of
complex ML models. This integration will contribute to solving the problem of
ML exhaustion and to creating a new ML technology with relevant scientific,
technological, and market implications.",Integration of Behavioral Economic Models to Optimize ML performance and interpretability: a sandbox example,2022-05-03 12:29:36,"Emilio Soria-Olivas, José E. Vila Gisbert, Regino Barranquero Cardeñosa, Yolanda Gomez","http://arxiv.org/abs/2205.01387v1, http://arxiv.org/pdf/2205.01387v1",econ.TH
34310,th,"I present an example in which the individuals' preferences are strict
orderings, and under the majority rule, a transitive social ordering can be
obtained and thus a non-empty choice set can also be obtained. However, the
individuals' preferences in that example do not satisfy any conditions
(restrictions) of which at least one is required by Inada (1969) for social
preference transitivity under the majority rule. Moreover, the considered
individuals' preferences satisfy none of the conditions of value restriction
(VR), extremal restriction (ER), or limited agreement (LA), at least one of which is
required by Sen and Pattanaik (1969) for the existence of a non-empty social
choice set. Therefore, the example is an exception to a number of theorems of
social preference transitivity and social choice set existence under the
majority rule. This observation indicates that the collection of the conditions
listed by Inada (1969) is not as complete as might be supposed. This is also
the case for the collection of conditions VR, ER and LA considered by Sen and
Pattanaik (1969). This observation is a challenge to some necessary conditions
in the current social choice theory. In addition to seeking new conditions, one
possible way to deal with this challenge may be, from a theoretical
perspective, to represent the identified conditions (such as VR, ER and LA)
in terms of a common mathematical tool, and then, people may find more.",A counter example to the theorems of social preference transitivity and social choice set existence under the majority rule,2022-05-04 15:57:18,Fujun Hou,"http://arxiv.org/abs/2205.02040v2, http://arxiv.org/pdf/2205.02040v2",econ.TH
34311,th,"What would you do if you were asked to ""add"" knowledge? Would you say that
""one plus one knowledge"" is two ""knowledges""? Less than that? More? Or
something in between? Adding knowledge sounds strange, but it brings to the
forefront questions that are as fundamental as they are eclectic. These are
questions about the nature of knowledge and about the use of mathematics to
model reality. In this chapter, I explore the mathematics of adding knowledge
starting from what I believe is an overlooked but key observation: the idea
that knowledge is non-fungible.",Knowledge is non-fungible,2022-05-04 19:35:59,César A. Hidalgo,"http://arxiv.org/abs/2205.02167v1, http://arxiv.org/pdf/2205.02167v1",econ.TH
34312,th,"We correct a bound in the definition of approximate truthfulness used in the
body of the paper of Jackson and Sonnenschein (2007). The proof of their main
theorem uses a different permutation-based definition, implicitly claiming that
the permutation-version implies the bound-based version. We show that this
claim holds only if the bound is loosened. The new bound is still strong enough
to guarantee that the fraction of lies vanishes as the number of problems
grows, so the theorem is correct as stated once the bound is loosened.","Comment on Jackson and Sonnenschein (2007) ""Overcoming Incentive Constraints by Linking Decisions""",2022-05-06 19:38:06,"Ian Ball, Matt O. Jackson, Deniz Kattwinkel","http://arxiv.org/abs/2205.03352v1, http://arxiv.org/pdf/2205.03352v1",econ.TH
34313,th,"We propose a new notion of credibility for Bayesian persuasion problems. A
disclosure policy is credible if the sender cannot profit from tampering with
her messages while keeping the message distribution unchanged. We show that the
credibility of a disclosure policy is equivalent to a cyclical monotonicity
condition on its induced distribution over states and actions. We also
characterize how credibility restricts the Sender's ability to persuade under
different payoff structures. In particular, when the sender's payoff is
state-independent, all disclosure policies are credible. We apply our results
to the market for lemons, and show that no useful information can be credibly
disclosed by the seller, even though a seller who can commit to her disclosure
policy would perfectly reveal her private information to maximize profit.",Credible Persuasion,2022-05-07 01:51:25,"Xiao Lin, Ce Liu","http://arxiv.org/abs/2205.03495v1, http://arxiv.org/pdf/2205.03495v1",econ.TH
34314,th,"Orbit-use management efforts can be structured as binding national regulatory
policies or as self-enforcing international treaties. New treaties to control
space debris growth appear unlikely in the near future. Spacefaring nations can
pursue national regulatory policies, though regulatory competition and open
access to orbit make their effectiveness unclear. We develop a game-theoretic
model of national regulatory policies and self-enforcing international treaties
for orbit-use management in the face of open access, regulatory competition,
and catastrophe. While open access limits the effectiveness of national
policies, market-access control ensures the policies can improve environmental
quality. A large enough stock of legacy debris ensures existence of a global
regulatory equilibrium where all nations choose to levy environmental
regulations on all satellites. The global regulatory equilibrium supports a
self-enforcing treaty to avert catastrophe by making it costlier to leave the
treaty and free ride.",International cooperation and competition in orbit-use management,2022-05-08 21:11:29,"Aditya Jain, Akhil Rao","http://arxiv.org/abs/2205.03926v2, http://arxiv.org/pdf/2205.03926v2",econ.TH
34315,th,"A single unit of a good is sold to one of two bidders. Each bidder has either
a high prior valuation or a low prior valuation for the good. Their prior
valuations are independently and identically distributed. Each bidder may
observe an independently and identically distributed signal about her prior
valuation. The seller knows the distribution of the prior valuation profile and
knows that signals are independently and identically distributed, but does not
know the signal distribution. In addition, the seller knows that bidders play
undominated strategies. I find that a second-price auction with a random
reserve maximizes the worst-case expected revenue over all possible signal
distributions and all equilibria in undominated strategies.",Information-Robust Optimal Auctions,2022-05-09 12:25:54,Wanchang Zhang,"http://arxiv.org/abs/2205.04137v1, http://arxiv.org/pdf/2205.04137v1",econ.TH
34316,th,"This paper studies Markov perfect equilibria in a repeated duopoly model
where sellers choose algorithms. An algorithm is a mapping from the
competitor's price to own price. Once set, algorithms respond quickly.
Customers arrive randomly and so do opportunities to revise the algorithm. In
the simple game with two possible prices, the monopoly outcome is the unique
equilibrium for standard functional forms of the profit function. More
generally, with multiple prices, exercise of market power is the rule -- in all
equilibria, the expected payoff of both sellers is above the competitive
outcome, and that of at least one seller is close to or above the monopoly
outcome. Sustenance of such collusion seems outside the scope of standard
antitrust laws for it does not involve any direct communication.",Pricing with algorithms,2022-05-10 07:15:21,"Rohit Lamba, Sergey Zhuk","http://arxiv.org/abs/2205.04661v2, http://arxiv.org/pdf/2205.04661v2",econ.TH
34317,th,"We study partial commitment in leader-follower games. A collection of subsets
covering the leader's action space determines her commitment opportunities. We
characterize the outcomes resulting from all possible commitment structures of
this kind. If the commitment structure is an interval partition, then the
leader's payoff is bounded by the payoffs she obtains under the full and
no-commitment benchmarks. We apply our results to study new design problems.",The Limits of Commitment,2022-05-11 17:46:27,"Jacopo Bizzotto, Toomas Hinnosaar, Adrien Vigier","http://arxiv.org/abs/2205.05546v2, http://arxiv.org/pdf/2205.05546v2",econ.TH
34318,th,"This paper studies two-sided many-to-one matching in which firms have
complementary preferences. We show that stable matchings exist under a
balancedness condition that rules out a specific type of odd-length cycles
formed by firms' acceptable sets. We also provide a class of preference
profiles that satisfy this condition. Our results indicate that stable matching
is compatible with a wide range of firms' complementary preferences.",Two-sided matching with firms' complementary preferences,2022-05-11 19:16:07,Chao Huang,"http://arxiv.org/abs/2205.05599v2, http://arxiv.org/pdf/2205.05599v2",econ.TH
34319,th,"This paper surveys the literature on theories of discrimination, focusing
mainly on new contributions. Recent theories expand on the traditional
taste-based and statistical discrimination frameworks by considering specific
features of learning and signaling environments, often using novel information-
and mechanism-design language; analyzing learning and decision making by
algorithms; and introducing agents with behavioral biases and misspecified
beliefs. This survey also attempts to narrow the gap between the economic
perspective on ``theories of discrimination'' and the broader study of
discrimination in the social science literature. In that respect, I first
contribute by identifying a class of models of discriminatory institutions,
made up of theories of discriminatory social norms and discriminatory
institutional design. Second, I discuss issues relating to the measurement of
discrimination, and the classification of discrimination as bias or
statistical, direct or systemic, and accurate or inaccurate.",Recent Contributions to Theories of Discrimination,2022-05-12 13:07:43,Paula Onuchic,"http://arxiv.org/abs/2205.05994v3, http://arxiv.org/pdf/2205.05994v3",econ.TH
34320,th,"We consider stopping problems in which a decision maker (DM) faces an unknown
state of nature and decides sequentially whether to stop and take an
irreversible action; pay a fee and obtain additional information; or wait
without acquiring information. We discuss the value and quality of information.
The former is the maximal discounted expected revenue the DM can generate. We
show that among all history-dependent fee schemes, the upfront scheme (as
opposed, for instance, to pay-for-use) is optimal: it generates the highest
possible value of information. The effects on the optimal strategy of obtaining
information from a more accurate source and of having a higher discount factor
are distinct, as far as expected stopping time and its distribution are
concerned. However, these factors have a similar effect in that they both
enlarge the set of cases in which the optimal strategy prescribes waiting.",The Value of Information in Stopping Problems,2022-05-13 15:14:32,"Ehud Lehrer, Tao Wang","http://arxiv.org/abs/2205.06583v1, http://arxiv.org/pdf/2205.06583v1",econ.TH
34321,th,"We study the link between Phelps-Aigner-Cain-type statistical discrimination
and familiar notions of statistical informativeness. Our central insight is
that Blackwell's Theorem, suitably relabeled, characterizes statistical
discrimination in terms of statistical informativeness. This delivers one-half
of Chambers and Echenique's (2021) characterization of statistical
discrimination as a corollary, and suggests a different interpretation of it:
that discrimination is inevitable. In addition, Blackwell's Theorem delivers a
number of finer-grained insights into the nature of statistical discrimination.
We argue that the discrimination-informativeness link is quite general,
illustrating with an informativeness characterization of a different type of
discrimination.",Statistical discrimination and statistical informativeness,2022-05-14 23:36:41,"Matteo Escudé, Paula Onuchic, Ludvig Sinander, Quitzé Valenzuela-Stookey","http://arxiv.org/abs/2205.07128v2, http://arxiv.org/pdf/2205.07128v2",econ.TH
34323,th,"The literature on centralized matching markets often assumes that a true
preference of each player is known to herself and fixed, but empirical evidence
casts doubt on its plausibility. To circumvent the problem, we consider
evolutionary dynamics of preference revelation games in marriage problems. We
formulate the asymptotic stability of a matching, indicating the dynamical
robustness against sufficiently small changes in players' preference reporting
strategies, and show that asymptotically stable matchings are stable when they
exist. The simulation results of replicator dynamics are presented to
demonstrate the asymptotic stability. We contribute a practical insight for
market designers that a stable matching may be realized by introducing a
learning period in which participants find appropriate reporting strategies
through trial and error. We also open doors to a novel area of research by
demonstrating ways to employ evolutionary game theory in studies on centralized
markets.",Asymptotically stable matchings and evolutionary dynamics of preference revelation games in marriage problems,2022-05-17 07:06:08,"Hidemasa Ishii, Nariaki Nishino","http://arxiv.org/abs/2205.08079v1, http://arxiv.org/pdf/2205.08079v1",econ.TH
34324,th,"We present some conditions for social preference transitivity under the
majority rule when the individual preferences include cycles. First, our
concern is with the restriction on the preference orderings of individuals
except those (called cycle members) whose preferences constitute the cycles,
but the considered transitivity is, of course, of the society as a whole. In
our discussion, the individual preferences are assumed to be concerned and the
cycle members' preferences are assumed to be strict orderings. Particularly,
for an alternative triple, when one cycle is involved and the society is
sufficiently large (at least 5 individuals in the society), we present a sufficient
condition for social transitivity; when two antagonistic cycles are involved
and the society has at least 9 individuals, necessary and sufficient conditions
are presented which restrict only the preferences of the individuals other than
the cycle members. Based on the work due to Slutsky (1977)
and Gaertner \& Heinecke (1978), we then outline a conceptual
$\hat{O}\mbox{-}\hat{I}$ framework of social transitivity in an axiomatic
manner. Connections between some already identified conditions and the
$\hat{O}\mbox{-}\hat{I}$ framework are examined.",Conditions for Social Preference Transitivity When Cycle Involved and A $\hat{O}\mbox{-}\hat{I}$ Framework,2022-05-17 13:50:15,Fujun Hou,"http://arxiv.org/abs/2205.08223v2, http://arxiv.org/pdf/2205.08223v2",econ.TH
34325,th,"A proposed measure of voting power should satisfy two conditions to be
plausible: first, it must be conceptually justified, capturing the intuitive
meaning of what voting power is; second, it must satisfy reasonable postulates.
This paper studies a set of postulates, appropriate for a priori voting power,
concerning blockers (or vetoers) in a binary voting game. We specify and
motivate five such postulates, namely, two subadditivity blocker postulates,
two minimum-power blocker postulates, each in weak and strong versions, and the
added-blocker postulate. We then test whether three measures of voting power,
namely the classic Penrose-Banzhaf measure, the classic Shapley-Shubik index,
and the newly proposed Recursive Measure, satisfy these postulates. We find
that the first measure fails four of the postulates, the second fails two,
while the third alone satisfies all five postulates. This work consequently
adds to the plausibility of the Recursive Measure as a reasonable measure of
voting power.",The Blocker Postulates for Measures of Voting Power,2022-05-17 16:57:15,"Arash Abizadeh, Adrian Vetta","http://arxiv.org/abs/2205.08368v1, http://arxiv.org/pdf/2205.08368v1",econ.TH
34326,th,"Can cost-reducing technical change lead to a fall in the long run rate of
profit if class struggle manages to keep the rate of exploitation constant? In
a general circulating capital model, we derive sufficient conditions for
cost-reducing technical change to both keep the rate of exploitation constant
and lead to a fall in the equilibrium rate of profit. Further, if the real wage
bundle is such that the maximum price-value ratio is larger than 1 plus the
rate of exploitation, then starting from any configuration of technology and
real wage, we can always find a viable, CU-LS technical change that satisfies
the sufficient conditions for the previous result. Taken together, these
results vindicate Marx's claim in Volume III of Capital, that if the rate of
exploitation remains unchanged then viable, CU-LS technical change in
capitalist economies can lead to a fall in the long run rate of profit.",Marx after Okishio: Falling Rate of Profit with Constant Rate of Exploitation,2022-05-18 17:31:40,"Deepankar Basu, Oscar Orellana","http://arxiv.org/abs/2205.08956v2, http://arxiv.org/pdf/2205.08956v2",econ.TH
34327,th,"An agent acquires a costly flexible signal before making a decision. We
explore to what degree knowledge of the agent's information costs helps predict
her behavior. We establish an impossibility result: learning costs alone
generate no testable restrictions on choice without also imposing constraints
on actions' state-dependent utilities. By contrast, choices from a menu often
uniquely pin down the agent's decisions in all submenus. To prove the latter
result, we define iteratively differentiable cost functions, a tractable class
amenable to first-order techniques. Finally, we construct tight tests for a
multi-menu data set to be consistent with a given cost.",Predicting Choice from Information Costs,2022-05-20 23:49:40,"Elliot Lipnowski, Doron Ravid","http://arxiv.org/abs/2205.10434v2, http://arxiv.org/pdf/2205.10434v2",econ.TH
34328,th,"We prove that supply correspondences are characterized by two properties: the
law of supply and being homogeneous of degree zero.",A Simple Characterization of Supply Correspondences,2022-05-21 03:46:37,"Alexey Kushnir, Vinod Krishnamoorthy","http://arxiv.org/abs/2205.10472v1, http://arxiv.org/pdf/2205.10472v1",econ.TH
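A minimal numerical check of the two properties named in this abstract for a textbook profit-maximizing firm: the law of supply, (p - p')·(s(p) - s(p')) ≥ 0, and homogeneity of degree zero, s(tp) = s(p). The quadratic-cost technology below is an illustrative assumption, not an example from the paper.

```python
# Worked check of the law of supply and homogeneity of degree zero for an
# illustrative single-output firm with cost(q) = q^2 / 2 in input units.
import numpy as np

def supply(p):
    # prices p = (output price, input price); profit max gives q = p[0] / p[1]
    output = p[0] / p[1]
    input_used = 0.5 * output ** 2
    return np.array([output, -input_used])   # netput vector: output positive, input negative

p, p2 = np.array([2.0, 1.0]), np.array([3.0, 2.0])
print("law of supply holds:", (p - p2) @ (supply(p) - supply(p2)) >= 0)
print("homogeneous of degree zero:", np.allclose(supply(2.5 * p), supply(p)))
```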
34329,th,"Two dynamic game forms are said to be behaviorally equivalent if they share
the ""same"" profiles of structurally reduced strategies (Battigalli et al.,
2020). In the context of dynamic implementation, behaviorally equivalent game
forms are interchangeable under a wide range of solution concepts for the
purpose of implementing a social choice function. A gradual mechanism (Chew and
Wang, 2022), which designs a procedure of information transmission mediated by
a central administrator, enables a formal definition of information flow. We
provide a characterization of behavioral equivalence between gradual mechanisms
in terms of their informational equivalence -- each agent is designed the
""same"" information flow. Information flow also helps in defining an intuitive
notion of immediacy for gradual mechanisms which is equivalent to their game
structures being minimal. Given that the class of gradual mechanisms serves as
a revelation principle for dynamic implementation (Li, 2017; Akbarpour and Li,
2020; Mackenzie, 2020; Chew and Wang, 2022), the class of immediate gradual
mechanisms provides a refined revelation principle.",Information Design of Dynamic Mechanisms,2022-05-22 18:02:26,"Soo Hong Chew, Wenqian Wang","http://arxiv.org/abs/2205.10844v1, http://arxiv.org/pdf/2205.10844v1",econ.TH
34330,th,"A principal must decide between two options. Which one she prefers depends on
the private information of two agents. One agent always prefers the first
option; the other always prefers the second. Transfers are infeasible. One
application of this setting is the efficient division of a fixed budget between
two competing departments. We first characterize all implementable mechanisms
under arbitrary correlation. Second, we study when there exists a mechanism
that yields the principal a higher payoff than she could receive by choosing
the ex-ante optimal decision without consulting the agents. In the budget
example, such a profitable mechanism exists if and only if the information of
one department is also relevant for the expected returns of the other
department. We generalize this insight to derive necessary and sufficient
conditions for the existence of a profitable mechanism in the n-agent
allocation problem with independent types.",Mechanisms without transfers for fully biased agents,2022-05-22 22:28:01,"Deniz Kattwinkel, Axel Niemeyer, Justus Preusser, Alexander Winter","http://arxiv.org/abs/2205.10910v1, http://arxiv.org/pdf/2205.10910v1",econ.TH
34331,th,"We consider the problem of aggregating individual preferences over
alternatives into a social ranking. A key feature of the problems that we
consider, and the one that allows us to obtain positive results, in contrast to
negative results such as Arrow's Impossibility Theorem, is that the
alternatives to be ranked are outcomes of a competitive process. Examples
include rankings of colleges or academic journals. The foundation of our
ranking method is that alternatives that an agent desires but that they have
been rejected by should be ranked higher than the one they receive. We provide
a mechanism to produce a social ranking given any preference profile and
outcome assignment, and characterize this ranking as the unique one that
satisfies certain desirable axioms.",Desirable Rankings: A New Method for Ranking Outcomes of a Competitive Process,2022-05-24 03:50:46,"Thayer Morrill, Peter Troyan","http://arxiv.org/abs/2205.11684v1, http://arxiv.org/pdf/2205.11684v1",econ.TH
34332,th,"We consider the problem of designing prices for public transport where
payment enforcement is done through random inspection of passengers' tickets as
opposed to physically blocking their access. Passengers are fully strategic
such that they may choose different routes or buy partial tickets in their
optimizing decision. We derive expressions for the prices that make every
passenger choose to buy the full ticket. Using travel and pricing data from the
Washington DC metro, we show that a switch to a random inspection method for
ticketing while keeping current prices could lead to more than 59% of revenue
loss due to fare evasion, while adjusting prices to take incentives into
consideration would reduce that loss to less than 20%, without any increase in
prices.",Incentive-compatible public transportation fares with random inspection,2022-05-24 10:33:37,"Inácio Bó, Chiu Yu Ko","http://arxiv.org/abs/2205.11858v1, http://arxiv.org/pdf/2205.11858v1",econ.TH
34333,th,"We investigate how individuals form expectations about population behavior
using statistical inference based on observations of their social relations.
Misperceptions about others' connectedness and behavior arise from sampling
bias stemming from the friendship paradox and uncertainty from small samples.
In a game where actions are strategic complements, we characterize the
equilibrium and analyze equilibrium behavior. We allow for agent sophistication
to account for the sampling bias and demonstrate how sophistication affects the
equilibrium. We show how population behavior depends on both sources of
misperceptions and illustrate when sampling uncertainty plays a critical role
compared to sampling bias.",Statistical inference in social networks: how sampling bias and uncertainty shape decisions,2022-05-25 23:37:37,"Andreas Bjerre-Nielsen, Martin Benedikt Busch","http://arxiv.org/abs/2205.13046v1, http://arxiv.org/pdf/2205.13046v1",econ.TH
34334,th,"We propose a finite automaton-style solution concept for supergames. In our
model, we define an equilibrium to be a cycle of state switches and a supergame
to be an infinite walk on states of a finite stage game. We show that if the
stage game is locally non-cooperative, and the utility function is monotonically
decreasing as the number of defective agents increases, the symmetric
multiagent prisoners' dilemma supergame must contain one symmetric equilibrium
and can contain asymmetric equilibria.",Asymmetric Equilibria in Symmetric Multiplayer Prisoners Dilemma Supergames,2022-05-27 09:13:14,Davidson Cheng,"http://arxiv.org/abs/2205.13772v1, http://arxiv.org/pdf/2205.13772v1",econ.TH
34335,th,"We develop a model of content filtering as a game between the filter and the
content consumer, where the latter incurs information costs for examining the
content. Motivating examples include censoring misinformation, spam/phish
filtering, and recommender systems. When the attacker is exogenous, we show
that improving the filter's quality is weakly Pareto improving, but has no
impact on equilibrium payoffs until the filter becomes sufficiently accurate.
Further, if the filter does not internalize the information costs, its lack of
commitment power may render it useless and lead to inefficient outcomes. When
the attacker is also strategic, improvements to filter quality may sometimes
decrease equilibrium payoffs.",Content Filtering with Inattentive Information Consumers,2022-05-27 18:47:27,"Ian Ball, James Bono, Justin Grana, Nicole Immorlica, Brendan Lucier, Aleksandrs Slivkins","http://arxiv.org/abs/2205.14060v4, http://arxiv.org/pdf/2205.14060v4",econ.TH
34336,th,"In pairwise interactions assortativity in cognition means that pairs where
both decision-makers use the same cognitive process are more likely to occur
than under random matching. In this paper we study both the
mechanisms determining assortativity in cognition and its effects. In
particular, we analyze an applied model where assortativity in cognition helps
explain the emergence of cooperation and the degree of prosociality of
intuition and deliberation, which are the typical cognitive processes
postulated by the dual process theory in psychology. Our findings rely on
agent-based simulations, but analytical results are also obtained in a special
case. We conclude with examples showing that assortativity in cognition can
have different implications in terms of its societal desirability.",Assortativity in cognition,2022-05-30 17:02:33,"Ennio Bilancini, Leonardo Boncinelli, Eugenio Vicario","http://arxiv.org/abs/2205.15114v1, http://arxiv.org/pdf/2205.15114v1",econ.TH
34337,th,"This paper extends the unified network model, proposed by Acemoglu et al.
(2016b), such that interaction functions can be heterogeneous and the
sensitivity matrix has spectral radius less than or equal to one. We show the
existence and (almost surely) uniqueness of equilibrium under both eventually
contracting and noncontracting assumptions. Applying the equilibrium in the
study of systemic risk, we provide a measure to determine the key player who
causes the most significant impact if removed from the network.",Uniqueness of Equilibria in Interactive Networks,2022-06-01 03:14:50,Chien-Hsiang Yeh,"http://arxiv.org/abs/2206.00158v1, http://arxiv.org/pdf/2206.00158v1",econ.TH
34339,th,"We study a planner's optimal interventions in both the standalone marginal
utilities of players on a network and weights on the links that connect
players. The welfare-maximizing joint intervention exhibits the following
properties: (a) when the planner's budget is moderate (so that optimal
interventions are interior), the change in weight on any link connecting a pair
of players is proportional to the product of eigen-centralities of the pair;
(b) when the budget is sufficiently large, the optimal network takes a simple
form: It is either a complete network under strategic complements or a complete
balanced bipartite network under strategic substitutes. We show that the
welfare effect of joint intervention is shaped by the principal eigenvalue,
while the distributional effect is captured by the dispersion of the
corresponding eigenvalues, i.e., the eigen-centralities. Comparing joint
intervention in our setting with single intervention solely on the standalone
marginal utilities, as studied by Galeotti et al. (2020), we find that joint
intervention always yields a higher aggregate welfare, but may lead to greater
inequality, which highlights a possible trade-off between the welfare and
distributional impacts of joint intervention.",Welfare and Distributional Effects of Joint Intervention in Networks,2022-06-08 15:56:49,"Ryan Kor, Junjie Zhou","http://arxiv.org/abs/2206.03863v1, http://arxiv.org/pdf/2206.03863v1",econ.TH
34340,th,"The existing studies on consumer search agree that consumers are worse-off
when they do not observe sellers' production marginal cost than when they do.
In this paper we challenge this conclusion. Employing a canonical model of
simultaneous search, we show that it may be favorable for consumer to not
observe the production marginal cost. The reason is that, in expectation,
consumer search more intensely when they do not know sellers' production
marginal cost than when they know it. More intense search imposes higher
competitive pressure on sellers, which in turn benefits consumers through lower
prices.",Information Asymmetry and Search Intensity,2022-06-09 18:44:11,Atabek Atayev,"http://arxiv.org/abs/2206.04576v1, http://arxiv.org/pdf/2206.04576v1",econ.TH
34341,th,"In this paper, we consider coordination and anti-coordination heterogeneous
games played by a finite population formed by different types of individuals
who fail to recognize their own type but do observe the type of their opponent.
We show that there exist symmetric Nash equilibria in which players
discriminate by acting differently according to the type of opponent that they
face in anti-coordination games, while no such equilibrium exists in
coordination games. Moreover, discrimination has a limit: the maximum number of
groups where the treatment differs is three. We then discuss the theoretical
results in light of the observed behavior of people in some specific
psychological contexts.",Discrimination in Heterogeneous Games,2022-06-10 16:27:49,"Annick Laruelle, André Rocha","http://arxiv.org/abs/2206.05087v1, http://arxiv.org/pdf/2206.05087v1",econ.TH
34342,th,"We study a class of {\em aggregation rules} that could be applied to ethical
AI decision-making. These rules yield the decisions to be made by automated
systems based on the information of profiles of preferences over possible
choices. We consider two different but very intuitive notions of preferences of
an alternative over another one, namely {\it pairwise majority} and {\it
position} dominance. Preferences are represented by permutation processes over
alternatives and aggregation rules are applied to obtain results that are
socially considered to be ethically correct. In this setting, we find many
aggregation rules that satisfy desirable properties for an autonomous system.
We also address the problem of the stability of the aggregation process, which
is important when the information is variable. These results are a contribution
for an AI designer who wants to justify the decisions made by an autonomous
system. \textit{Keywords:} Aggregation Operators; Permutation Process;
Decision Analysis.",Classes of Aggregation Rules for Ethical Decision Making in Automated Systems,2022-06-10 17:59:13,"Federico Fioravanti, Iyad Rahwan, Fernando Abel Tohmé","http://arxiv.org/abs/2206.05160v3, http://arxiv.org/pdf/2206.05160v3",econ.TH
34343,th,"We use a combinatorial approach to compute the number of non-isomorphic
choices on four elements that can be explained by models of bounded
rationality.",On the number of non-isomorphic choices on four elements,2022-06-14 16:33:56,"Alfio Giarlotta, Angelo Petralia, Stephen Watson","http://arxiv.org/abs/2206.06840v1, http://arxiv.org/pdf/2206.06840v1",econ.TH
34344,th,"Model misspecification is a critical issue in many areas of theoretical and
empirical economics. In the specific context of misspecified Markov Decision
Processes, Esponda and Pouzo (2021) defined the notion of Berk-Nash equilibrium
and established its existence in the setting of finite state and action spaces.
However, many substantive applications (including two of the three motivating
examples presented by Esponda and Pouzo, as well as Gaussian and log-normal
distributions, and CARA, CRRA and mean-variance preferences) involve continuous
state or action spaces, and are thus not covered by the Esponda-Pouzo existence
theorem. We extend the existence of Berk-Nash equilibrium to compact action
spaces and sigma-compact state spaces, with possibly unbounded payoff
functions. A complication arises because the Berk-Nash equilibrium notion
depends critically on Radon-Nikodym derivatives, which are necessarily bounded
in the finite case but typically unbounded in misspecified continuous models.
The proofs rely on nonstandard analysis and, relative to previous applications
of nonstandard analysis in economic theory, draw on novel argumentation
traceable to work of the second author on nonstandard representations of Markov
processes.",On Existence of Berk-Nash Equilibria in Misspecified Markov Decision Processes with Infinite Spaces,2022-06-16 23:41:54,"Robert M. Anderson, Haosui Duanmu, Aniruddha Ghosh, M. Ali Khan","http://arxiv.org/abs/2206.08437v3, http://arxiv.org/pdf/2206.08437v3",econ.TH
34345,th,"In persuasion problems where the receiver's action is one-dimensional and his
utility is single-peaked, optimal signals are characterized by duality, based
on a first-order approach to the receiver's problem. A signal is optimal iff
the induced joint distribution over states and actions is supported on a
compact set (the contact set) where the dual constraint binds. A signal that
pools at most two states in each realization is always optimal, and such
pairwise signals are the only solutions under a non-singularity condition on
utilities (the twist condition). We provide conditions under which higher
actions are induced at more or less extreme pairs of states. Finally, we
provide conditions for the optimality of either full disclosure or negative
assortative disclosure, where signal realizations can be ordered from least to
most extreme. Optimal negative assortative disclosure is characterized as the
solution to a pair of ordinary differential equations.",Persuasion with Non-Linear Preferences,2022-06-18 13:12:23,"Anton Kolotilin, Roberto Corrao, Alexander Wolitzky","http://arxiv.org/abs/2206.09164v2, http://arxiv.org/pdf/2206.09164v2",econ.TH
34347,th,"Selection shapes all kinds of behaviors, including how we make decisions
under uncertainty. The risk attitude reflected in it should be simple,
flexible, yet consistent. In this paper we employed evolutionary dynamics to
find the decision making rule concerning risk that is evolutionarily superior,
and developed the theory of evolutionary rationality. We highlight the
importance of selection intensity and fitness, as well as their equivalents in
the human mind, termed attention degree and meta-fitness, in the decision
making process. Evolutionary rationality targets the maximization of the
geometric mean of meta-fitness (or fitness), and attention degree (or selection
intensity) holds the key in the change of attitude of the same individual
towards different events and under varied situations. Then as an example, the
Allais paradox is presented to show the application of evolutionary
rationality, in which the anomalous choices made by the majority of people can
be well justified by a simple change in the attention degree.",Evolutionary rationality of risk preference,2022-06-20 17:43:27,"Songjia Fan, Yi Tao, Cong Li","http://arxiv.org/abs/2206.09813v1, http://arxiv.org/pdf/2206.09813v1",econ.TH
34348,th,"I study a class of verifiable disclosure games where the sender's payoff is
state independent and the receiver's optimal action only depends on the
expected state. The sender's messages are verifiable in the sense that they can
be vague but can never be wrong. What is the sender's preferred equilibrium?
When does the sender gain nothing from having commitment power? I identify
conditions for an information design outcome to be an equilibrium outcome of
the verifiable disclosure game, and give simple sufficient conditions under
which the sender does not benefit from commitment power. These results help in
characterizing the sender's preferred equilibria and her equilibrium payoff set
in a class of verifiable disclosure games. I apply these insights to study
influencing voters and selling with quality disclosure.",Withholding Verifiable Information,2022-06-20 20:57:56,Kun Zhang,"http://arxiv.org/abs/2206.09918v3, http://arxiv.org/pdf/2206.09918v3",econ.TH
34349,th,"Blockchain consensus is a state whereby each node in a network agrees on the
current state of the blockchain. Existing protocols achieve consensus via a
contest or voting procedure to select one node as a dictator to propose new
blocks. However, this procedure can still lead to potential attacks that make
consensus harder to achieve or lead to coordination issues if multiple,
competing chains (i.e., forks) are created with the potential that an
untruthful fork might be selected. We explore the potential for mechanisms to
be used to achieve consensus that are triggered when there is a dispute
impeding consensus. Using the feature that nodes stake tokens in proof of stake
(POS) protocols, we construct revelation mechanisms in which the unique
(subgame perfect) equilibrium involves validating nodes proposing truthful blocks
using only the information that exists amongst all nodes. We construct
operationally and computationally simple mechanisms under both Byzantine Fault
Tolerance and a Longest Chain Rule, and discuss their robustness to attacks.
Our perspective is that the use of simple mechanisms is an unexplored area of
blockchain consensus and has the potential to mitigate known trade-offs and
enhance scalability.",Mechanism Design Approaches to Blockchain Consensus,2022-06-21 04:22:58,"Joshua S. Gans, Richard Holden","http://arxiv.org/abs/2206.10065v1, http://arxiv.org/pdf/2206.10065v1",econ.TH
34350,th,"We study envy-free allocations in a many-to-many matching model with
contracts in which agents on one side of the market (doctors) are endowed with
substitutable choice functions and agents on the other side of the market
(hospitals) are endowed with responsive preferences. Envy-freeness is a
weakening of stability that allows blocking contracts involving a hospital with
a vacant position and a doctor that does not envy any of the doctors that the
hospital currently employs. We show that the set of envy-free allocations has a
lattice structure. Furthermore, we define a Tarski operator on this lattice and
use it to model a vacancy chain dynamic process by which, starting from any
envy-free allocation, a stable one is reached.",The lattice of envy-free many-to-many matchings with contracts,2022-06-22 01:23:17,"Agustin G. Bonifacio, Nadia Guinazu, Noelia Juarez, Pablo Neme, Jorge Oviedo","http://arxiv.org/abs/2206.10758v1, http://arxiv.org/pdf/2206.10758v1",econ.TH
34351,th,"In assignment problems, the average rank of the objects received is often
used as a key measure of the quality of the resulting match. The rank-minimizing
(RM) mechanism optimizes directly for this objective, by returning an
assignment that minimizes the average rank. While a naturally appealing
mechanism for practical market design, one shortcoming is that this mechanism
fails to be strategyproof. In this paper, I show that, while the RM mechanism
is manipulable, it is not obviously manipulable.",Non-Obvious Manipulability of the Rank-Minimizing Mechanism,2022-06-22 23:12:08,Peter Troyan,"http://arxiv.org/abs/2206.11359v1, http://arxiv.org/pdf/2206.11359v1",econ.TH
34352,th,"We introduce the ""local-global"" approach for a divisible portfolio and
perform an equilibrium analysis for two variants of core-selecting auctions.
Our main novelty is extending the Nearest-VCG pricing rule in a dynamic
two-round setup, mitigating bidders' free-riding incentives and further
reducing the sellers' costs. The two-round setup admits an
information-revelation mechanism that may offset the ""winner's curse"", and it
is in accord with the existing iterative procedure of combinatorial auctions.
With portfolio trading becoming an increasingly important part of investment
strategies, our mechanism contributes to increasing interest in portfolio
auction protocols.",A core-selecting auction for portfolio's packages,2022-06-23 10:52:23,"Lamprirni Zarpala, Dimitris Voliotis","http://arxiv.org/abs/2206.11516v2, http://arxiv.org/pdf/2206.11516v2",econ.TH
34353,th,"We present an equilibrium model of politics in which political platforms
compete over public opinion. A platform consists of a policy, a coalition of
social groups with diverse intrinsic attitudes to policies, and a narrative. We
conceptualize narratives as subjective models that attribute a commonly valued
outcome to (potentially spurious) postulated causes. When quantified against
empirical observations, these models generate a shared belief among coalition
members over the outcome as a function of its postulated causes. The intensity
of this belief and the members' intrinsic attitudes to the policy determine the
strength of the coalition's mobilization. Only platforms that generate maximal
mobilization prevail in equilibrium. Our equilibrium characterization
demonstrates how false narratives can be detrimental for the common good, and
how political fragmentation leads to their proliferation. The false narratives
that emerge in equilibrium attribute good outcomes to the exclusion of social
groups from ruling coalitions.",False Narratives and Political Mobilization,2022-06-25 14:01:06,"Kfir Eliaz, Simone Galperti, Ran Spiegler","http://arxiv.org/abs/2206.12621v1, http://arxiv.org/pdf/2206.12621v1",econ.TH
34354,th,"The paper describes a funding mechanism called Quadratic Finance (QF) and
deploys a bit of calculus to show that within a very clean and simple linear
model QF maximizes social utility. Its authors differentiate the social utility
function. The mathematical content of this note is that by taking one further
derivative, one may also deduce that QF is the unique solution.","Spinoza, Leibniz, Kant, and Weyl",2022-06-29 18:20:28,Michael H. Freedman,"http://arxiv.org/abs/2206.14711v2, http://arxiv.org/pdf/2206.14711v2",econ.TH
34355,th,"We consider the problem of a Bayesian agent receiving signals over time and
then taking an action. The agent chooses when to stop and take an action based
on her current beliefs, and prefers (all else equal) to act sooner rather than
later. The signals received by the agent are determined by a principal, whose
objective is to maximize engagement (the total attention paid by the agent to
the signals). We show that engagement maximization by the principal minimizes
the agent's welfare; the agent does no better than if she gathered no
information. Relative to a benchmark in which the agent chooses the signals,
engagement maximization induces excessive information acquisition and extreme
beliefs. An optimal strategy for the principal involves ""suspensive signals""
that lead the agent's belief to become ""less certain than the prior"" and
""decisive signals"" that lead the agent's belief to jump to the stopping region.",Engagement Maximization,2022-07-02 02:22:40,"Benjamin Hébert, Weijie Zhong","http://arxiv.org/abs/2207.00685v3, http://arxiv.org/pdf/2207.00685v3",econ.TH
34356,th,"When a government makes many different policy decisions, lobbying can be
viewed as a contest between the government and many different special interest
groups. The government fights lobbying by interest groups with its own
political capital. In this world, we find that a government wants to `sell
protection' -- give favourable treatment in exchange for contributions -- to
certain interest groups. It does this in order to build its own `war chest' of
political capital, which improves its position in fights with other interest
groups. And it does so until it wins all remaining contests with certainty.
This stands in contrast to existing models that often view lobbying as driven
by information or agency problems.",Inside the West Wing: Lobbying as a contest,2022-07-02 14:26:27,Alastair Langtry,"http://arxiv.org/abs/2207.00800v2, http://arxiv.org/pdf/2207.00800v2",econ.TH
34357,th,"I analyze a novel reputation game between a patient seller and a sequence of
myopic consumers, in which the consumers have limited memories and do not know
the exact sequence of the seller's actions. I focus on the case where each
consumer only observes the number of times that the seller took each of his
actions in the last K periods. When payoffs are monotone-supermodular, I show
that the patient seller can approximately secure his commitment payoff in all
equilibria as long as K is at least one. I also show that the consumers can
approximately attain their first-best welfare in all equilibria if and only if
their memory length K is lower than some cutoff. Although a longer memory
enables more consumers to punish the seller once the seller shirks, it weakens
their incentives to punish the seller once they observe him shirking.",Reputation Effects under Short Memories,2022-07-06 18:27:02,Harry Pei,"http://arxiv.org/abs/2207.02744v4, http://arxiv.org/pdf/2207.02744v4",econ.TH
34358,th,"This paper studies a dynamic information acquisition model with payoff
externalities. Two players can acquire costly information about an unknown
state before taking a safe or risky action. Both information and the action
taken are private. The first player to take the risky action has an advantage
but whether the risky action is profitable depends on the state. The players
face the tradeoff between being first and being right. In equilibrium, for
different priors, there exist three kinds of randomisation: when the players
are pessimistic, they enter the competition randomly; when the players are less
pessimistic, they acquire information and then randomly stop; when the players
are relatively optimistic, they randomly take an action without acquiring
information.",Private Information Acquisition and Preemption: a Strategic Wald Problem,2022-07-06 21:18:31,Guo Bai,"http://arxiv.org/abs/2207.02898v1, http://arxiv.org/pdf/2207.02898v1",econ.TH
34359,th,"We study the assignment of indivisible goods to individuals when monetary
transfers are not allowed. Previous literature has mainly focused on efficiency
(from both ex-ante and ex-post perspectives) and individually fair assignments.
As a result, egalitarian concerns have been overlooked. We draw inspiration
from the assignment of apartments in housing cooperatives, where families
consider egalitarianism of assignments as a first-order requirement.
Specifically, they aim to avoid situations where some families receive their
most preferred apartments while others are assigned options ranked very low in
their preferences. Building on Rawls' concept of fairness, we introduce the
notion of Rawlsian assignment. We prove that a unique Rawlsian assignment
always exists. Furthermore, the Rawlsian rule is efficient and anonymous. To
illustrate our analysis, we use preference data from housing cooperatives. We
show that the Rawlsian rule substantially improves, from an egalitarian
perspective, both the probabilistic serial rule, and the rule currently used to
assign apartments in the housing cooperatives.",Rawlsian Assignments,2022-07-06 22:15:02,"Tom Demeulemeester, Juan S. Pereyra","http://arxiv.org/abs/2207.02930v5, http://arxiv.org/pdf/2207.02930v5",econ.TH
34360,th,"The continuous time model of dynamic asset trading is the central model of
modern finance. Because trading cannot in fact take place at every moment of
time, it would seem desirable to show that the continuous time model can be
viewed as the limit of models in which trading can occur only at (many)
discrete moments of time. This paper demonstrates that, if we take terminal
wealth constraints and self-financing constraints as seriously in the discrete
model as in the continuous model, then the continuous trading model need not be
the limit of discrete trading models. This raises serious foundational
questions about the continuous time model.",Asset Trading in Continuous Time: A Cautionary Tale,2022-07-07 19:04:55,William R. Zame,"http://arxiv.org/abs/2207.03397v1, http://arxiv.org/pdf/2207.03397v1",econ.TH
34361,th,"The benefits of using complex network analysis (CNA) to study complex
systems, such as an economy, have become increasingly evident in recent years.
However, the lack of a single comparative index that encompasses the overall
wellness of a structure can hinder the simultaneous analysis of multiple
ecosystems. A formula to evaluate the structure of an economic ecosystem is
proposed here, implementing a mathematical approach based on CNA metrics to
construct a comparative measure that reflects the collaboration dynamics and
its resultant structure. This measure provides the relevant actors with an
enhanced sense of the social dynamics of an economic ecosystem, whether related
to business, innovation, or entrepreneurship. Available graph metrics were
analysed, and 14 different formulas were developed. The efficiency of these
formulas was evaluated on real networks from 11 different innovation-driven
entrepreneurial economic ecosystems in six countries from Latin America and
Europe and on 800 random graphs simulating similarly constructed networks.",A proposal for measuring the structure of economic ecosystems: a mathematical and complex network analysis approach,2022-07-10 01:41:19,"M. S. Tedesco, M. A. Nunez-Ochoa, F. Ramos, O. Medrano, K Beuchot","http://arxiv.org/abs/2207.04346v1, http://arxiv.org/pdf/2207.04346v1",econ.TH
34363,th,"We introduce a notion of substitutability for correspondences and establish a
monotone comparative static result, unifying results such as the inverse
isotonicity of M-matrices, Berry, Gandhi and Haile's identification of demand
systems, monotone comparative statics, and results on the structure of the core
of matching games without transfers (Gale and Shapley) and with transfers
(Demange and Gale). More specifically, we introduce the notions of 'unified
gross substitutes' and 'nonreversingness' and show that if Q is a supply
correspondence defined on a set of prices P which is a sublattice of R^N, and Q
satisfies these two properties, then the set of prices yielding supply vector q
is increasing (in the strong set order) in q; and it is a sublattice of P.",Monotone Comparative Statics for Equilibrium Problems,2022-07-14 11:32:11,"Alfred Galichon, Larry Samuelson, Lucas Vernet","http://arxiv.org/abs/2207.06731v1, http://arxiv.org/pdf/2207.06731v1",econ.TH
34364,th,"In a digital age where companies face rapid changes in technology, consumer
trends, and business environments, there is a critical need for continual
revision of the business model in response to disruptive innovation. A pillar
of innovation in business practices is the adoption of novel pricing schemes,
such as Pay-What-You-Want (PWYW). In this paper, we employed game theory and
behavioral economics to model consumers' behavior in response to a PWYW pricing
strategy where there is an information asymmetry between the consumer and
supplier. In an effort to minimize the information asymmetry, we incorporated
the supplier's cost and the consumer's reference prices as two parameters that
might influence the consumer's payment decision. Our model shows that
consumers' behavior varies depending on the available information. As a result,
when an external reference point is provided, the consumer tends to pay higher
amounts to follow the social herd or respect her self-image. However, the
external reference price can also decrease her demand when, in the interest of
fairness, she forgoes the purchase because the amount she is willing to pay is
less than what she recognizes to be an unrecoverable cost to the supplier.",A Game-theoretic Model of the Consumer Behavior Under Pay-What-You-Want Pricing Strategy,2022-07-18 23:20:22,"Vahid Ashrafimoghari, Jordan W. Suchow","http://dx.doi.org/10.13140/RG.2.2.20271.61601, http://arxiv.org/abs/2207.08923v1, http://arxiv.org/pdf/2207.08923v1",econ.TH
34365,th,"We study a model of voting with two alternatives in a symmetric environment.
We characterize the interim allocation probabilities that can be implemented by
a symmetric voting rule. We show that all such interim allocation
probabilities can be implemented as a convex combination of two families of
deterministic voting rules: qualified majority and qualified anti-majority. We
also provide analogous results by requiring implementation by a symmetric
monotone (strategy-proof) voting rule and by a symmetric unanimous voting rule.
We apply our results to show that an ex-ante Rawlsian rule is a convex
combination of a pair of qualified majority rules.",Symmetric reduced form voting,2022-07-19 16:09:40,"Xu Lang, Debasis Mishra","http://arxiv.org/abs/2207.09253v4, http://arxiv.org/pdf/2207.09253v4",econ.TH
34366,th,"I provide a novel approach to characterizing the set of interim realizable
allocations, in the spirit of Matthews (1984) and Border (1991). The approach
allows me to identify precisely why exact characterizations are difficult to
obtain in some settings. The main results of the paper then show how to adapt
the approach in order to obtain approximate characterizations of the interim
realizable set in such settings.
  As an application, I study multi-item allocation problems when agents have
capacity constraints. I identify necessary conditions for interim
realizability, and show that these conditions are sufficient for realizability
when the interim allocation in question is scaled by 1/2. I then characterize a
subset of the realizable polytope which contains all such scaled allocations.
This polytope is generated by a majorization relationship between the scaled
interim allocations and allocations induced by a certain ``greedy algorithm''.
I use these results to study mechanism design with equity concerns and model
ambiguity. I also relate optimal mechanisms to the commonly used deferred
acceptance and serial dictatorship matching algorithms. For example, I provide
conditions on the principal's objective such that by carefully choosing school
priorities and running deferred acceptance, the principal can guarantee at
least half of the optimal (full information) payoff.",Greedy Allocations and Equitable Matchings,2022-07-22 23:29:25,Quitzé Valenzuela-Stookey,"http://arxiv.org/abs/2207.11322v2, http://arxiv.org/pdf/2207.11322v2",econ.TH
34367,th,"Recent literature shows that dynamic matching mechanisms may outperform the
standard mechanisms to deliver desirable results. We highlight an
under-explored design dimension, the time constraints that students face under
such a dynamic mechanism. First, we theoretically explore the effect of time
constraints and show that the outcome can be worse than the outcome produced by
the student-proposing deferred acceptance mechanism. Second, we present
evidence from the Inner Mongolian university admissions that time constraints
can prevent dynamic mechanisms from achieving stable outcomes, creating losers
and winners among students.",Time-constrained Dynamic Mechanisms for College Admissions,2022-07-25 16:17:31,"Li Chen, Juan S. Pereyra, Min Zhu","http://arxiv.org/abs/2207.12179v1, http://arxiv.org/pdf/2207.12179v1",econ.TH
34368,th,"Evidence suggests that participants in strategy-proof matching mechanisms
play dominated strategies. To explain the data, we introduce expectation-based
loss aversion into a school-choice setting and characterize choice-acclimating
personal equilibria. We find that non-truthful preference submissions can be
strictly optimal if and only if they are top-rank monotone. In equilibrium,
inefficiency or justified envy may arise in seemingly stable or efficient
mechanisms. Specifically, students who are more loss averse or less confident
than their peers obtain suboptimal allocations.",Loss aversion in strategy-proof school-choice mechanisms,2022-07-29 16:20:10,"Vincent Meisner, Jonas von Wangenheim","http://arxiv.org/abs/2207.14666v1, http://arxiv.org/pdf/2207.14666v1",econ.TH
34369,th,"Recent developments in the theory of production networks offer interesting
applications and revival of input-output analysis. Some recent papers have
studied the propagation of a temporary, negative shock through an input-output
network. Such analyses of shock propagation rely on eigendecomposition of
relevant input-output matrices. It is well known that only diagonalizable
matrices can be eigendecomposed; those that are not diagonalizable are known
as defective matrices. In this paper, we provide necessary and sufficient
conditions for diagonalizability of any square matrix using its rank and
eigenvalues. To apply our results, we offer examples of input-output tables
from India in the 1950s that were not diagonalizable and were hence defective.",Input-Output Tables and Some Theory of Defective Matrices,2022-07-30 17:08:53,"Mohit Arora, Deepankar Basu","http://arxiv.org/abs/2208.00226v1, http://arxiv.org/pdf/2208.00226v1",econ.TH
34370,th,"We consider classes of non-manipulable two-valued social choice functions,
i.e., social choice functions with range of cardinality two within a larger set
of alternatives. Corresponding to the different classes, the functional forms
are described. Further we show that they, and some others from previous
literature, can all be unified using a common structure. Such a structure
relies on the concept of character function we introduce. This is a function
from the domain of the admissible profiles, which notably we do not assume to be
universal, to a partially ordered set. The proper choice of the character
function marks the distinction among different classes and permits the
description of all social choice functions of a class in simple terms.",The character of non-manipulable collective choices between two alternatives,2022-08-02 20:07:33,"Achille Basile, K. P. S. Bhaskara Rao, Surekha Rao","http://arxiv.org/abs/2208.01594v2, http://arxiv.org/pdf/2208.01594v2",econ.TH
34371,th,"In this paper we provide three new results axiomatizing the core of games in
characteristic function form (not necessarily having transferable utility)
obeying an innocuous condition (that the set of individually rational pay-off
vectors is bounded). One novelty of this exercise is that our domain is the
{\em entire} class of such games: i.e., restrictions like ""non-levelness"" or
""balancedness"" are not required.",A look back at the core of games in characteristic function form: some new axiomatization results,2022-08-02 21:33:14,Anindya Bhattacharya,"http://arxiv.org/abs/2208.01690v1, http://arxiv.org/pdf/2208.01690v1",econ.TH
34372,th,"Two spatial equilibria, agglomeration and dispersion, in a continuous space
core-periphery model are examined to discuss which equilibrium is socially
preferred. It is shown that when transport cost is lower than a critical value,
the agglomeration equilibrium is preferable in the sense of Scitovszky, while
when the transport cost is above the critical value, the two equilibria cannot
be ordered in the sense of Scitovszky.",Agglomeration and welfare of the Krugman model in a continuous space,2022-08-03 13:48:37,Kensuke Ohtake,"http://dx.doi.org/10.1016/j.mathsocsci.2023.04.001, http://arxiv.org/abs/2208.01972v3, http://arxiv.org/pdf/2208.01972v3",econ.TH
34373,th,"We study conditioning on null events, or surprises, and behaviorally
characterize the Ordered Surprises (OS) representation of beliefs. For feasible
events, our Decision Maker (DM) is Bayesian. For null events, our DM considers
a hierarchy of beliefs until one is consistent with the surprise. The DM adopts
this prior and applies Bayes' rule. Unlike Bayesian updating, OS is a complete
updating rule: conditional beliefs are well-defined for any event. OS is
(behaviorally) equivalent to the Conditional Probability System (Myerson,
1986b) and is a special case of Hypothesis Testing (Ortoleva, 2012), clarifying
the relationships between the various approaches to null events.",Ordered Surprises and Conditional Probability Systems,2022-08-04 11:56:45,"Adam Dominiak, Matthew Kovach, Gerelt Tserenjigmid","http://arxiv.org/abs/2208.02533v2, http://arxiv.org/pdf/2208.02533v2",econ.TH
34374,th,"Having fixed capacities, homogeneous products and price sensitive customer
purchase decision are primary distinguishing characteristics of numerous
revenue management systems. Even with two or three rivals, competition is still
highly fierce. This paper studies sub-game perfect Nash equilibrium of a price
competition in an oligopoly market with perishable assets. Sellers each has one
unit of a good that cannot be replenished, and they compete in setting prices
to sell their good over a finite sales horizon. Each period, buyers desire one
unit of the good and the number of buyers coming to the market in each period
is random. All sellers' prices are accessible to buyers, and search is
costless. Using stochastic dynamic programming methods, the best response of
sellers can be obtained from a one-shot price competition game regarding the
remaining periods and the current-time demand structure. Assuming a binary
demand model, we demonstrate that the duopoly model has a unique Nash
equilibrium and the oligopoly model does not reveal price dispersion with
respect to a particular metric. We illustrate that, when considering a
generalized demand model, the duopoly model has a unique mixed strategy Nash
equilibrium while the oligopoly model has a unique symmetric mixed strategy
Nash equilibrium.",Subgame perfect Nash equilibrium for dynamic pricing competition with finite planning horizon,2022-08-04 21:31:52,Niloofar Fadavi,"http://arxiv.org/abs/2208.02842v1, http://arxiv.org/pdf/2208.02842v1",econ.TH
34375,th,"In this paper we study a rational inattention model in environments where the
decision maker faces uncertainty about the true prior distribution over states.
The decision maker seeks to select a stochastic choice rule over a finite set
of alternatives that is robust to prior ambiguity. We fully characterize the
distributional robustness of the rational inattention model in terms of a
tractable concave program. We establish necessary and sufficient conditions to
construct robust consideration sets. Finally, we quantify the impact of prior
uncertainty, by introducing the notion of \emph{Worst-Case Sensitivity}.",On the Distributional Robustness of Finite Rational Inattention Models,2022-08-05 22:41:40,Emerson Melo,"http://arxiv.org/abs/2208.03370v2, http://arxiv.org/pdf/2208.03370v2",econ.TH
34376,th,"I study the optimal pricing process for selling a unit good to a buyer with
prospect theory preferences. In the presence of probability weighting, the
buyer is dynamically inconsistent and can be either sophisticated or naive
about her own inconsistency. If the buyer is naive, the uniquely optimal
mechanism is to sell a ``loot box'' that delivers the good with some constant
probability in each period. In contrast, if the buyer is sophisticated, the
uniquely optimal mechanism introduces worst-case insurance: after successive
failures in obtaining the good from all previous loot boxes, the buyer can
purchase the good at full price.",Gacha Game: When Prospect Theory Meets Optimal Pricing,2022-08-07 03:12:05,Tan Gan,"http://arxiv.org/abs/2208.03602v3, http://arxiv.org/pdf/2208.03602v3",econ.TH
34377,th,"We study a buyer-seller problem of a novel good for which the seller does not
yet know the production cost. A contract can be agreed upon at either the
ex-ante stage, before learning the cost, or at the ex-post stage, when both
parties will incur a costly delay, but the seller knows the production cost. We
show that the optimal ex-ante contract for a profit-maximizing seller is a
fixed price contract with an ""at-will"" clause: the seller can choose to cancel
the contract upon discovering her production cost. However, sometimes the
seller can do better by offering a guaranteed-delivery price at the ex-ante
stage and a second price at the ex-post stage if the buyer rejects the first
offer. Such a ""limited commitment"" mechanism can raise profits, allowing the
seller to make the allocation partially dependent on the cost while not
requiring it to be embedded in the contract terms. Analogous results hold in a
model where the buyer does not know her valuation ex-ante and offers a
procurement contract to a seller.",Pricing Novel Goods,2022-08-09 21:26:14,"Francesco Giovannoni, Toomas Hinnosaar","http://arxiv.org/abs/2208.04985v1, http://arxiv.org/pdf/2208.04985v1",econ.TH
34378,th,"`Rank and Yank' is practiced in many organizations. This paper is concerned
with the conditions for none to be whipped by `Rank and Yank' when the
evaluation data under each criterion are assumed to be ordinal rankings and the
majority rule is used. Two sufficient conditions are set forth of which the
first one formulates the alternatives indifference definition in terms of the
election matrix, while the second one specifies a certain balance in the
probabilities of alternatives being ranked at positions. In a sense, `none to
be whipped' means that the organization is stable. Thus the second
sufficient condition indicates an intrinsic relation between balance and
organizational stability. In addition, directions for future research are put
forward.",Conditions for none to be whipped by `Rank and Yank' under the majority rule,2022-08-10 03:55:37,Fujun Hou,"http://arxiv.org/abs/2208.05093v1, http://arxiv.org/pdf/2208.05093v1",econ.TH
34379,th,"This paper studies the allocation of voting weights in a committee
representing groups of different sizes. We introduce a partial ordering of
weight allocations based on stochastic comparison of social welfare. We show
that when the number of groups is sufficiently large, this ordering
asymptotically coincides with the total ordering induced by the cosine
proportionality between the weights and the group sizes. A corollary is that a
class of expectation-form objective functions, including expected welfare, the
mean majority deficit and the probability of inversions, are asymptotically
monotone in the cosine proportionality.",Welfare ordering of voting weight allocations,2022-08-10 15:51:26,Kazuya Kikuchi,"http://arxiv.org/abs/2208.05316v2, http://arxiv.org/pdf/2208.05316v2",econ.TH
34380,th,"We study how office-seeking parties use direct democracy to shape elections.
A party with a strong electoral base can benefit from using a binding
referendum to resolve issues that divide its core supporters. When referendums
do not bind, however, an electorally disadvantaged party may initiate a
referendum to elevate new issues in order to divide the supporters of its
stronger opponent. We identify conditions under which direct democracy improves
congruence between policy outcomes and voter preferences, but also show that it
can lead to greater misalignment both on issues subject to direct democracy and
those that are not.",Pandora's Ballot Box: Electoral Politics of Direct Democracy,2022-08-10 22:43:10,"Peter Buisseret, Richard Van Weelden","http://arxiv.org/abs/2208.05535v1, http://arxiv.org/pdf/2208.05535v1",econ.TH
34381,th,"In this note we consider situations of (multidimensional) spatial majority
voting. We show that under some assumptions usual in this literature, with an
even number of voters if the core of the voting situation is singleton (and in
the interior of the policy space) then the element in the core is never a
Condorcet winner. This is in sharp contrast with what happens with an odd
number of voters: in that case, under identical assumptions, it is well known
that if the core of the voting situation is non-empty then the single element
in the core is the Condorcet winner as well.",On spatial majority voting with an even (vis-a-vis odd) number of voters: a note,2022-08-14 16:11:55,"Anindya Bhattacharya, Francesco Ciardiello","http://arxiv.org/abs/2208.06849v1, http://arxiv.org/pdf/2208.06849v1",econ.TH
34382,th,"An observer wants to understand a decision-maker's welfare from her choice.
She believes that decisions are made under limited attention. We argue that the
standard model of limited attention cannot help the observer greatly. To
address this issue, we study a family of models of choice under limited
attention by imposing an attention floor in the decision process. We construct
an algorithm that recovers the revealed preference relation given an incomplete
data set in these models. Next, we take these models to the experimental data.
We first show that assuming that subjects make at least one comparison before
finalizing decisions (that is, an attention floor of 2) is almost costless in
terms of describing the behavior when compared to the standard model of limited
attention. In terms of revealed preferences, on the other hand, the amended
model does significantly better. We cannot recover any preferences for 63% of
the subjects in the standard model, while the amended model reveals some
preferences for all subjects. In total, the amended model allows us to recover
one-third of the preferences that would be recovered under full attention.",Revealed Preference Analysis Under Limited Attention,2022-08-16 13:41:36,"Mikhail Freer, Hassan Nosratabadi","http://arxiv.org/abs/2208.07659v3, http://arxiv.org/pdf/2208.07659v3",econ.TH
34383,th,"We investigate Gale's important paper published in 1960. This paper contains
an example of a candidate of the demand function that satisfies the weak axiom
of revealed preference and for which it is doubtful that it is a demand function of
some weak order. We examine this paper and first scrutinize what Gale proved.
Then we identify a gap in Gale's proof and show that he failed to show that
this candidate of the demand function is not a demand function. Next, we
present three complete proofs of Gale's claim. First, we construct a proof that
was constructible in 1960 by a fact that Gale himself demonstrated. Second, we
construct a modern and simple proof using Shephard's lemma. Third, we construct
a proof that follows the direction that Gale originally conceived. Our
conclusion is as follows: although, in 1960, Gale was not able to prove that
the candidate of the demand function that he constructed is not a demand
function, he substantially proved it, and therefore it is fair to say that the
credit for finding a candidate of the demand function that satisfies the weak
axiom but is not a demand function is attributed to Gale.",On Gale's Contribution in Revealed Preference Theory,2022-08-17 01:01:20,Yuhki Hosoya,"http://arxiv.org/abs/2208.07970v2, http://arxiv.org/pdf/2208.07970v2",econ.TH
34384,th,"Models of stochastic choice typically use conditional choice probabilities
given menus as the primitive for analysis, but in the field these are often
hard to observe. Moreover, studying preferences over menus is not possible with
this data. We assume that an analyst can observe marginal frequencies of choice
and availability, but not conditional choice frequencies, and study the
testable implications of some prominent models of stochastic choice for this
dataset. We also analyze whether parameters of these models can be identified.
Finally, we characterize the marginal distributions that can arise under
two-stage models in the spirit of Gul and Pesendorfer [2001] and of Kreps
[1979] where agents select the menu before choosing an alternative.",Marginal stochastic choice,2022-08-17 22:24:03,"Yaron Azrieli, John Rehbeck","http://arxiv.org/abs/2208.08492v1, http://arxiv.org/pdf/2208.08492v1",econ.TH
34436,th,"We provide a syntactic construction of correlated equilibrium. For any finite
game, we study how players coordinate their play on a signal by means of a
public strategy whose instructions are expressed in some natural language.
Language can be ambiguous in that different players may assign different truth
values to the very same formula in the same state of the world. We model
ambiguity using the player-dependent logic of Halpern and Kets (2015). We show
that, absent any ambiguity, self-enforcing coordination always induces a
correlated equilibrium of the underlying game. When language ambiguity is
allowed, self-enforcing coordination strategies induce subjective correlated
equilibria.",Coordination through ambiguous language,2022-11-07 13:31:04,Michele Crescenzi,"http://arxiv.org/abs/2211.03426v1, http://arxiv.org/pdf/2211.03426v1",econ.TH
34385,th,"Current electric vehicle market trends indicate an increasing adoption rate
across several countries. To meet the expected growing charging demand, it is
necessary to scale up the current charging infrastructure and to mitigate
current reliability deficiencies, e.g., due to broken connectors or misreported
charging station availability status. However, even within a properly
dimensioned charging infrastructure, a risk for local bottlenecks remains if
several drivers cannot coordinate their charging station visit decisions. Here,
navigation service platforms can optimally balance charging demand over
available stations to reduce possible station visit conflicts and increase user
satisfaction. While such fleet-optimized charging station visit recommendations
may alleviate local bottlenecks, they can also harm the system if
self-interested navigation service platforms seek to maximize their own
customers' satisfaction. To study these dynamics, we model fleet-optimized
charging station allocation as a resource allocation game in which navigation
platforms constitute players and assign potentially free charging stations to
drivers. We show that no pure Nash equilibrium guarantee exists for this game,
which motivates us to study VCG mechanisms both in offline and online settings,
to coordinate players' strategies toward a better social outcome. Extensive
numerical studies for the city of Berlin show that when coordinating players
through VCG mechanisms, the social cost decreases on average by 42 % in the
online setting and by 52 % in the offline setting.",Coordinating charging request allocation between self-interested navigation service platforms,2022-08-19 22:44:58,"Marianne Guillet, Maximilian Schiffer","http://arxiv.org/abs/2208.09530v1, http://arxiv.org/pdf/2208.09530v1",econ.TH
34386,th,"We study some problems of collective choice when individuals can have
expressive preferences, that is, where a decision-maker may care not only about
the material benefit from choosing an action but also about some intrinsic
morality of the action or whether the action conforms to some identity-marker
of the decision-maker. We construct a simple framework for analyzing mechanism
design problems with such preferences and present some results focussing on the
phenomenon we call ""Brexit anomaly"". The main findings are that while
deterministic mechanisms are quite susceptible to Brexit anomaly, even with
stringent domain restriction, random mechanisms assure more positive results.",On mechanism design with expressive preferences: an aspect of the social choice of Brexit,2022-08-21 12:07:58,"Anindya Bhattacharya, Debapriya Sen","http://arxiv.org/abs/2208.09851v1, http://arxiv.org/pdf/2208.09851v1",econ.TH
34387,th,"A range of empirical puzzles in finance has been explained as a consequence
of traders being averse to ambiguity. Ambiguity averse traders can behave in
financial portfolio problems in ways that cannot be rationalized as maximizing
subjective expected utility. However, this paper shows that when traders have
access to limit orders, all investment behavior of an ambiguity-averse
decision-maker is observationally equivalent to the behavior of a subjective
expected utility maximizer with the same risk preferences; ambiguity aversion
has no additional explanatory power.",Limit Orders and Knightian Uncertainty,2022-08-23 11:26:25,"Michael Greinecker, Christoph Kuzmics","http://arxiv.org/abs/2208.10804v1, http://arxiv.org/pdf/2208.10804v1",econ.TH
34388,th,"In this paper we propose a geometric approach to the selection of the equi-
librium price. After a perturbation of the parameters, the new price is
selected thorough the composition of two maps: the projection on the
linearization of the equilibrium man- ifold, a method that underlies
econometric modeling, and the exponential map, that associates a tangent vector
with a geodesic on the manifold. As a corollary of our main result, we prove
the equivalence between zero curvature and uniqueness of equilibrium in the
case of an arbitrary number of goods and two consumers, thus extending the
previous result by [6].",Equilibrium selection: a geometric approach,2022-08-23 13:39:53,"Andrea Loi, Stefano Matta, Daria Uccheddu","http://arxiv.org/abs/2208.10860v1, http://arxiv.org/pdf/2208.10860v1",econ.TH
34389,th,"We study statistical discrimination based on the school which an individual
attended. Employers face uncertainty regarding an individual's productive
value. Knowing which school an individual went to is useful for two reasons:
firstly, average student ability may differ across schools; secondly, different
schools may provide different incentives to exert effort. We examine the
optimal way of grouping students in the face of school-based statistical
discrimination. We argue that an optimal school system exhibits coarse
stratification with respect to ability, and more lenient grading at the
top-tier schools than at the bottom-tier schools. Our paper contributes to the
ongoing policy debate on school tracking.",School-Based Statistical Discrimination,2022-08-23 14:59:32,"Jacopo Bizzotto, Adrien Vigier","http://arxiv.org/abs/2208.10894v2, http://arxiv.org/pdf/2208.10894v2",econ.TH
34390,th,"Sequential equilibrium is the conventional approach for analyzing multi-stage
games of incomplete information. It relies on mutual consistency of beliefs. To
relax mutual consistency, I theoretically and experimentally explore the
dynamic cognitive hierarchy (DCH) solution. One property of DCH is that the
solution can vary between two different games sharing the same reduced normal
form, i.e., violation of invariance under strategic equivalence. I test this
prediction in a laboratory experiment using two strategically equivalent
versions of the dirty-faces game. The game parameters are calibrated to
maximize the expected difference in behavior between the two versions, as
predicted by DCH. The experimental results indicate significant differences in
behavior between the two versions, and more importantly, the observed
differences align with DCH. This suggests that implementing a dynamic game
experiment in reduced normal form (using the ""strategy method"") could lead to
distortions in behavior.",Cognitive Hierarchies in Multi-Stage Games of Incomplete Information: Theory and Experiment,2022-08-23 23:52:38,Po-Hsuan Lin,"http://arxiv.org/abs/2208.11190v3, http://arxiv.org/pdf/2208.11190v3",econ.TH
34391,th,"We study a model of delegation in which a principal takes a multidimensional
action and an agent has private information about a multidimensional state of
the world. The principal can design any direct mechanism, including stochastic
ones. We provide necessary and sufficient conditions for an arbitrary mechanism
to maximize the principal's expected payoff. We also discuss simple conditions
which ensure that some convex delegation set is optimal. A key step of our
analysis shows that a mechanism is incentive compatible if and only if its
induced indirect utility is convex and lies below the agent's first-best
payoff.",Optimal Delegation in a Multidimensional World,2022-08-25 05:52:30,Andreas Kleiner,"http://arxiv.org/abs/2208.11835v1, http://arxiv.org/pdf/2208.11835v1",econ.TH
34437,th,"We study Nash implementation by stochastic mechanisms, and provide a
surprisingly simple full characterization, which is in sharp contrast to the
classical, albeit complicated, full characterization in Moore and Repullo
(1990).",Nash implementation by stochastic mechanisms: a simple full characterization,2022-11-10 12:14:51,Siyang Xiong,"http://arxiv.org/abs/2211.05431v1, http://arxiv.org/pdf/2211.05431v1",econ.TH
34392,th,"Two characterizations of the whole class of strategy-proof aggregation rules
on rich domains of locally unimodal preorders in finite median
join-semilattices are provided. In particular, it is shown that such a class
consists precisely of generalized weak sponsorship rules induced by certain
families of order filters of the coalition poset. It follows that the
co-majority rule and many other inclusive aggregation rules belong to that
class. The co-majority rule for an odd number of agents is characterized and
shown to be equivalent to a Condorcet-Kemeny median rule. Applications to
preference aggregation rules including Arrowian social welfare functions are
also considered. The existence of strategy-proof anonymous, weakly neutral and
unanimity-respecting social welfare functions which are defined on arbitrary
profiles of total preorders and satisfy a suitably relaxed independence
condition is shown to follow from our characterizations.",Strategy-proof aggregation rules in median semilattices with applications to preference aggregation,2022-08-26 18:40:58,"Ernesto Savaglio, Stefano Vannucci","http://arxiv.org/abs/2208.12732v1, http://arxiv.org/pdf/2208.12732v1",econ.TH
34393,th,"We study the ability of different classes of voting rules to induce agents to
report their preferences truthfully, if agents want to avoid regret. First, we
show that regret-free truth-telling is equivalent to strategy-proofness among
tops-only rules. Then, we focus on three important families of (non-tops-only)
voting methods: maxmin, scoring, and Condorcet consistent ones. We prove
positive and negative results for both neutral and anonymous versions of maxmin
and scoring rules. In several instances we provide necessary and sufficient
conditions. We also show that Condorcet consistent rules that satisfy a mild
monotonicity requirement are not regret-free truth-telling. Successive
elimination rules fail to be regret-free truth-telling despite not satisfying
the monotonicity condition. Lastly, we provide two characterizations for the
case of three alternatives and two agents.",Regret-free truth-telling voting rules,2022-08-29 22:43:12,"R. Pablo Arribillaga, Agustín G. Bonifacio, Marcelo Ariel Fernandez","http://arxiv.org/abs/2208.13853v2, http://arxiv.org/pdf/2208.13853v2",econ.TH
34394,th,"This paper studies sequential information acquisition by an ambiguity-averse
decision maker (DM), who decides how long to collect information before taking
an irreversible action. The agent optimizes against the worst-case belief and
updates prior by prior. We show that the consideration of ambiguity gives rise
to rich dynamics: compared to the Bayesian DM, the DM here tends to experiment
excessively when facing modest uncertainty and, to counteract it, may stop
experimenting prematurely when facing high uncertainty. In the latter case, the
DM's stopping rule is non-monotonic in beliefs and features randomized
stopping.",Prolonged Learning and Hasty Stopping: the Wald Problem with Ambiguity,2022-08-30 13:09:29,"Sarah Auster, Yeon-Koo Che, Konrad Mierendorff","http://arxiv.org/abs/2208.14121v3, http://arxiv.org/pdf/2208.14121v3",econ.TH
34395,th,"I analyze long-term contracting in insurance markets with asymmetric
information. The buyer privately observes her risk type, which evolves
stochastically over time. A long-term contract specifies a menu of insurance
policies, contingent on the history of type reports and contractable accident
information. The optimal contract offers the consumer in each period a choice
between a perpetual complete coverage policy with fixed premium and a risky
continuation contract in which current period's accidents may affect not only
within-period consumption (partial coverage) but also future policies.
  The model allows for arbitrary restrictions to the extent to which firms can
use accident information in pricing. In the absence of pricing restrictions,
accidents as well as choices of partial coverage are used in the efficient
provision of incentives. If firms are unable to use accident history, longer
periods of partial coverage choices are rewarded, leading to menus with cheaper
full-coverage options and more attractive partial-coverage options; and
allocative inefficiency decreases along all histories.
  These results are used to study a model of perfect competition, where the
equilibrium is unique whenever it exists, as well as the monopoly problem,
where necessary and sufficient conditions for the presence of information rents
are given.",Optimal dynamic insurance contracts,2022-08-31 01:56:39,Vitor Farinha Luz,"http://arxiv.org/abs/2208.14560v1, http://arxiv.org/pdf/2208.14560v1",econ.TH
34396,th,"We study how fads emerge from social learning in a changing environment. We
consider a sequential learning model in which rational agents arrive in order,
each acting only once, and the underlying unknown state is constantly evolving.
Each agent receives a private signal, observes all past actions of others, and
chooses an action to match the current state. Since the state changes over
time, cascades cannot last forever, and actions fluctuate too. We show that in
the long run, actions change more often than the state. This describes many
real-life faddish behaviors in which people often change their actions more
frequently than is necessary.",The Emergence of Fads in a Changing World,2022-08-31 02:55:03,Wanying Huang,"http://arxiv.org/abs/2208.14570v1, http://arxiv.org/pdf/2208.14570v1",econ.TH
34397,th,"We analyze the optimal delegation problem between a principal and an agent,
assuming that the latter has state-independent preferences. Among all
incentive-compatible direct mechanisms, the veto mechanisms -- in which the
principal commits to mixing between the status quo option and another
state-dependent option -- yield the highest expected payoffs for the principal.
In the optimal veto mechanism, the principal uses veto (i.e., choosing the
status quo option) only when the state is above some threshold, and both the
veto probability and the state-dependent option increase as the state gets more
extreme. Our model captures the aspect of many real-world scenarios that the
agent only cares about the principal's final decision, and the result provides
grounds for the veto delegation pervasive in various organizations.",The optimality of (stochastic) veto delegation,2022-08-31 15:59:22,"Xiaoxiao Hu, Haoran Lei","http://arxiv.org/abs/2208.14829v2, http://arxiv.org/pdf/2208.14829v2",econ.TH
34398,th,"We study the revenue-maximizing mechanism when a buyer's value evolves
endogenously because of learning-by-consuming. A seller sells one unit of a
divisible good, while the buyer relies on his private, rough valuation to
choose his first-stage consumption level. Consuming more leads to a more
precise valuation estimate, after which the buyer determines the second-stage
consumption level. The optimum is a menu of try-and-decide contracts,
consisting of a first-stage price-quantity pair and a second-stage per-unit
price for the remaining quantity. In equilibrium, a higher first-stage
valuation buyer pays more for higher first-stage consumption and enjoys a lower
second-stage per-unit price. Methodologically, we deal with the difficulty that
due to the failure of the single-crossing condition, monotonicity in allocation
plus the envelope condition is insufficient for incentive compatibility. Our
results help to understand contracts about sequential consumption with the
learning feature; e.g., leasing contracts for experience goods and trial
sessions for certain courses.",Learning by Consuming: Optimal Pricing with Endogenous Information Provision,2022-09-03 18:55:08,"Huiyi Guo, Wei He, Bin Liu","http://arxiv.org/abs/2209.01453v1, http://arxiv.org/pdf/2209.01453v1",econ.TH
34399,th,"This paper studies relations among axioms on individuals' intertemporal
choices under risk. The focus is on Risk Averse over Time Lotteries (RATL),
meaning that a fixed prize is preferred to a lottery with the same monetary
prize but a random delivery time. Though surveys and lab experiments documented
RATL choices, Expected Discounted Utility cannot accommodate any RATL. This
paper's contribution is two-fold. First, under a very weak form of
Independence, we generalize the incompatibility of RATL with two axioms about
intertemporal choices: Stochastic Impatience (SI) and No Future Bias. Next, we
prove a representation theorem that gives a class of models satisfying RATL and
SI everywhere. This illustrates that there is no fundamental conflict between
RATL and SI, and leaves open the possibility that RATL behavior is caused by Future
Bias.",Risk and Intertemporal Preferences over Time Lotteries,2022-09-05 09:57:40,Minghao Pan,"http://arxiv.org/abs/2209.01790v1, http://arxiv.org/pdf/2209.01790v1",econ.TH
34400,th,"Two type structures are hierarchy-equivalent if they induce the same set of
hierarchies of beliefs. This note shows that the behavioral implications of
""cautious rationality and common cautious belief in cautious rationality""
(Catonini and De Vito 2021) do not vary across hierarchy-equivalent type
structures.",Invariance and hierarchy-equivalence,2022-09-05 15:11:17,Nicodemo De Vito,"http://arxiv.org/abs/2209.01926v1, http://arxiv.org/pdf/2209.01926v1",econ.TH
34401,th,"Galichon, Samuelson and Vernet (2022) introduced a class of problems,
equilibrium flow problems, that nests several classical economic models such as
bipartite matching models, minimum-cost flow problems and hedonic pricing
models. We establish conditions for the existence of equilibrium prices in the
equilibrium flow problem, in the process generalizing Hall's theorem.",The Existence of Equilibrium Flows,2022-09-09 20:40:24,"Alfred Galichon, Larry Samuelson, Lucas Vernet","http://arxiv.org/abs/2209.04426v1, http://arxiv.org/pdf/2209.04426v1",econ.TH
34402,th,"We study the extent to which information can be used to extract attention
from a decision maker (DM) for whom waiting is costly, but information is
instrumentally valuable. We show all feasible joint distributions over stopping
times and actions are implementable by inducing either an extremal stopping
belief or a unique continuation belief. For a designer with arbitrary
preferences over DM's stopping times and actions, there is no commitment gap --
optimal information structures can always be modified to be
sequentially-optimal while preserving the joint distribution over times and
actions. Moreover, all designer-optimal structures leave DM with no surplus. We
then solve the problem of a designer with arbitrary increasing preferences over
DM's stopping times: optimal structures gradually steer DM's continuation
beliefs toward a basin of uncertainty and have a ``block structure''. We
further solve the problem of a designer with preferences over both DM's
stopping times and actions for binary states and actions: when the designer's
preferences over times and actions are (i) additively separable, optimal
structures take a ``bang-bang structure''; (ii) supermodular, optimal
structures take a ``pure bad news structure''. Our results speak directly to
the attention economy.",Attention Capture,2022-09-12 22:49:17,"Andrew Koh, Sivakorn Sanguanmoo","http://arxiv.org/abs/2209.05570v4, http://arxiv.org/pdf/2209.05570v4",econ.TH
34403,th,"This paper addresses the question of how to best communicate information over
time in order to influence an agent's belief and induced actions in a model
with a binary state of the world that evolves according to a Markov process,
and with a finite number of actions. We characterize the sender's optimal
message strategy in the limit, as the length of each period decreases to zero.
The optimal strategy is not myopic. Depending on the agent's beliefs, sometimes
no information is revealed, and sometimes the agent's belief is split into two
well-chosen posterior beliefs.",Markovian Persuasion with Two States,2022-09-14 13:28:40,"Galit Ashkenazi-Golan, Penélope Hernández, Zvika Neeman, Eilon Solan","http://arxiv.org/abs/2209.06536v1, http://arxiv.org/pdf/2209.06536v1",econ.TH
34404,th,"We introduce a method to derive from a characterization of institutional
choice rules (or priority rules), a characterization of the Gale-Shapley
deferred-acceptance (DA) matching rule based on these choice rules. We apply
our method to school choice in Chile, where we design choice rules for schools
that are uniquely compatible with the School Inclusion Law and derive a set of
matching properties, compatible with the law, that characterizes the DA rule
based on the designed choice rules. Our method provides a recipe for
establishing such results and can help policymakers decide on which allocation
rule to use in practice.",Market Design with Deferred Acceptance: A Recipe for Policymaking,2022-09-14 19:53:42,"Battal Doğan, Kenzo Imamura, M. Bumin Yenmez","http://arxiv.org/abs/2209.06777v1, http://arxiv.org/pdf/2209.06777v1",econ.TH
34405,th,"In this paper, we revisit the common claim that double auctions necessarily
generate competitive equilibria. We begin by observing that competitive
equilibrium has some counterintuitive implications: specifically, it predicts
that monotone shifts in the value distribution can leave prices unchanged.
Using experiments, we then test whether these implications are borne out by the
data. We find that in double auctions with stationary value distributions, the
resulting prices can be far from competitive equilibria. We also show that the
effectiveness of our counterexamples is blunted when traders can leave without
replacement as time progresses. Taken together, these findings suggest that the
`Marshallian path' is crucial for generating equilibrium prices in double
auctions.",Competitive equilibrium and the double auction,2022-09-15 15:22:21,Itzhak Rasooly,"http://arxiv.org/abs/2209.07532v1, http://arxiv.org/pdf/2209.07532v1",econ.TH
34406,th,"We illustrate some formal symmetries between Quadratic Funding (Buterin et
al., 2019), a mechanism for the (approximately optimal) determination of public
good funding levels, and the Born (1926) rule in Quantum Mechanics, which
converts the wave representation into a probability distribution, through a
bridging formulation we call ""Quantum Quartic Finance"". We suggest further
directions for investigating the practical utility of these symmetries. We
discuss potential interpretations in greater depth in a companion blog post.",Prospecting a Possible Quadratic Wormhole Between Quantum Mechanics and Plurality,2022-09-16 22:32:34,"Michal Fabinger, Michael H. Freedman, E. Glen Weyl","http://arxiv.org/abs/2209.08144v1, http://arxiv.org/pdf/2209.08144v1",econ.TH
34407,th,"This paper explores how ambiguity affects communication. We consider a cheap
talk model in which the receiver evaluates the sender's message with respect to
its worst-case expected payoff generated by multiplier preferences. We
characterize the receiver's optimal strategy and show that the receiver's
posterior action is consistent with his ex-ante action. We find that in some
situations, ambiguity improves communication by shifting the receiver's optimal
action upwards, and these situations are not rare.",Ambiguous Cheap Talk,2022-09-18 10:26:02,Longjian Li,"http://arxiv.org/abs/2209.08494v1, http://arxiv.org/pdf/2209.08494v1",econ.TH
34409,th,"We study a full implementation problem with hard evidence where the state is
common knowledge but agents face uncertainty about the evidence endowments of
other agents. We identify a necessary and sufficient condition for
implementation in mixed-strategy Bayesian Nash equilibria called No Perfect
Deceptions. The implementing mechanism requires only two agents and a finite
message space, imposes transfers only off the equilibrium, and invokes no device
with ""...questionable features..."" such as integer or modulo games. Requiring
only implementation in pure-strategy equilibria weakens the necessary and
sufficient condition to No Pure-Perfect Deceptions. In general type spaces
where the state is not common knowledge, a condition called higher-order
measurability is necessary and sufficient for rationalizable implementation
with arbitrarily small transfers alongside.",Implementation with Uncertain Evidence,2022-09-22 05:33:08,"Soumen Banerjee, Yi-Chun Chen","http://arxiv.org/abs/2209.10741v1, http://arxiv.org/pdf/2209.10741v1",econ.TH
34410,th,"An expert tells an advisee whether to take an action that may be good or bad.
He may provide a condition under which to take the action. This condition
predicts whether the action is good if and only if the expert is competent.
Providing the condition exposes the expert to reputational risk by allowing the
advisee to learn about his competence. He trades off the accuracy benefit and
reputational risk induced by providing the condition. He prefers not to provide
it -- i.e., to give ""simple advice"" -- when his payoff is sufficiently concave
in the posterior belief about his competence.",Why do experts give simple advice?,2022-09-23 19:37:20,Benjamin Davies,"http://arxiv.org/abs/2209.11710v1, http://arxiv.org/pdf/2209.11710v1",econ.TH
34411,th,"How important are leaders' actions in facilitating coordination? In this
paper, we investigate their signaling role in a global games framework. A
perfectly informed leader and a team of followers face a coordination problem.
Despite the endogenous information generated by the leader's action, we provide
a necessary and sufficient condition that makes the monotone equilibrium
strategy profile uniquely $\Delta$-rationalizable and hence guarantees
equilibrium uniqueness. Moreover, the unique equilibrium is fully efficient.
This result remains valid when the leader observes a noisy signal about the
true state, except that full efficiency may not be obtained. We discuss the
implications of our results for a broad class of phenomena such as adoption of
green technology, currency attacks and revolutions.",The Signaling Role of Leaders in Global Games,2022-09-26 08:23:58,"Panagiotis Kyriazis, Edmund Lou","http://arxiv.org/abs/2209.12426v2, http://arxiv.org/pdf/2209.12426v2",econ.TH
34412,th,"This paper shows that the principal can strictly benefit from delegating a
decision to an agent whose opinion differs from that of the principal. We
consider a ""delegated expertise"" problem, in which the agent has an advantage
in information acquisition relative to the principal, as opposed to having
preexisting private information. When the principal is ex ante predisposed
towards some action, it is optimal for her to hire an agent who is predisposed
towards the same action, but to a smaller extent, since such an agent would
acquire more information, which outweighs the bias stemming from misalignment.
We show that belief misalignment between an agent and a principal is a viable
instrument in delegation, performing on par with contracting and communication
in a class of problems.",Optimally Biased Expertise,2022-09-28 00:03:58,"Pavel Ilinov, Andrei Matveenko, Maxim Senkov, Egor Starkov","http://arxiv.org/abs/2209.13689v1, http://arxiv.org/pdf/2209.13689v1",econ.TH
34413,th,"This paper provides a general framework to explore the possibility of agenda
manipulation-proof and proper consensus-based preference aggregation rules, so
powerfully called in doubt by a disputable if widely shared understanding of
Arrow's `general possibility theorem'. We consider two alternative versions of
agenda manipulation-proofness for social welfare functions, that are
distinguished by `parallel' vs. `sequential' execution of agenda formation and
preference elicitation, respectively. Under the `parallel' version, it is shown
that a large class of anonymous and idempotent social welfare functions that
satisfy both agenda manipulation-proofness and strategy-proofness on a natural
domain of single-peaked `meta-preferences' induced by arbitrary total
preference preorders are indeed available. It is only under the second,
`sequential' version that agenda manipulation-proofness on the same natural
domain of single-peaked `meta-preferences' is in fact shown to be tightly
related to the classic Arrowian `independence of irrelevant alternatives' (IIA)
for social welfare functions. In particular, it is shown that using IIA to
secure such `sequential' version of agenda manipulation-proofness and combining
it with a very minimal requirement of distributed responsiveness results in a
characterization of the `global stalemate' social welfare function, the
constant function which invariably selects universal social indifference. It is
also argued that, altogether, the foregoing results provide new significant
insights concerning the actual content and the constructive implications of
Arrow's `general possibility theorem' from a mechanism-design perspective.","Agenda manipulation-proofness, stalemates, and redundant elicitation in preference aggregation. Exposing the bright side of Arrow's theorem",2022-10-06 23:41:55,Stefano Vannucci,"http://arxiv.org/abs/2210.03200v1, http://arxiv.org/pdf/2210.03200v1",econ.TH
34414,th,"This article aims to launch light on the limitations of the Coase and Pigou
approach in the solution of externalities. After contextualizing the need for
integration of ecological and economic approaches, we are introducing a new
conceptual proposal complementary to conventional economic approaches. Whose
process is guaranteed by a set of diffuse agents in the economy that partially
reverses entropy formation and marginal external costs generated by also
diffuse agents? The approach differs in six fundamentals from traditional
theory and proposes a new way of examining the actions of agents capable of
reducing entropy and containing part of external costs in the market economy.",A solution for external costs beyond negotiation and taxation,2022-10-08 18:16:11,"Alexandre Magno de Melo Faria, Helde A. D. Hdom","http://arxiv.org/abs/2210.04049v2, http://arxiv.org/pdf/2210.04049v2",econ.TH
34415,th,"We study what changes to an agent's decision problem increase her value for
information. We prove that information becomes more valuable if and only if the
agent's reduced-form payoff in her belief becomes more convex. When the
transformation corresponds to the addition of an action, the requisite increase
in convexity occurs if and only if a simple geometric condition holds, which
extends in a natural way to the addition of multiple actions. We apply these
findings to two scenarios: a monopolistic screening problem in which the good
is information and delegation with information acquisition.",Making Information More Valuable,2022-10-10 06:29:19,Mark Whitmeyer,"http://arxiv.org/abs/2210.04418v4, http://arxiv.org/pdf/2210.04418v4",econ.TH
34416,th,"Truck platooning is a promising transportation mode in which several trucks
drive together and thus save fuel consumption by suffering less air resistance.
In this paper, we consider a truck platooning system for which we jointly
optimize the truck routes and schedules from the perspective of a central
platform. We improve an existing decomposition-based heuristic by Luo and
Larson (2022), which iteratively solves a routing and scheduling problem, with
a cost modification step after each scheduling run. We propose different
formulations for the routing and the scheduling problem and embed these into
Luo and Larson's framework, and we examine ways to improve their iterative
process. In addition, we propose another scheduling heuristic to deal with
large instances. The computational results show that our procedure achieves
better performance than the existing one under certain realistic settings.",An improved decomposition-based heuristic for truck platooning,2022-10-11 18:58:05,"Boshuai Zhao, Roel Leus","http://arxiv.org/abs/2210.05562v3, http://arxiv.org/pdf/2210.05562v3",econ.TH
34417,th,"In a many-to-many matching model in which agents' preferences satisfy
substitutability and the law of aggregate demand, we proof the General
Manipulability Theorem. We result generalizes the presented in Sotomayor (1996
and 2012) for the many-to-one model. In addition, we show General
Manipulability Theorem fail when agents' preferences satisfy only
substitutability.",General Manipulability Theorem for a Matching Model,2022-10-12 22:36:08,"Paola B. Manasero, Jorge Oviedo","http://arxiv.org/abs/2210.06549v1, http://arxiv.org/pdf/2210.06549v1",econ.TH
34418,th,"A principal funds a multistage project and retains the right to cut the
funding if it stagnates at some point. An agent wants to convince the principal
to fund the project as long as possible, and can design the flow of information
about the progress of the project in order to persuade the principal. If the
project is sufficiently promising ex ante, then the agent commits to providing
only the good news that the project is accomplished. If the project is not
promising enough ex ante, the agent persuades the principal to start the
funding by committing to provide not only good news but also the bad news that
a project milestone has not been reached by an interim deadline. I demonstrate
that the outlined structure of optimal information disclosure holds
irrespective of the agent's profit share, benefit from the flow of funding, and
the common discount rate.",Setting Interim Deadlines to Persuade,2022-10-15 16:35:51,Maxim Senkov,"http://arxiv.org/abs/2210.08294v3, http://arxiv.org/pdf/2210.08294v3",econ.TH
34419,th,"We model a dynamic public good contribution game, where players are
(naturally) formed into groups. The groups are exogenously placed in a
sequence, with limited information available to players about their groups'
position in the sequence. Contribution decisions are made by players
simultaneously and independently, and the groups' total contribution is made
sequentially. We try to capture both inter- and intra-group behaviors and
analyze different situations where players observe partial history about total
contributions of their predecessor groups. Given this framework, we show that
even when players observe a history of defection (no contribution), a
cooperative outcome is achievable. This is particularly interesting in the
situation when players observe only their immediate predecessor groups'
contribution, where we observe that players play an important role in
motivating others to contribute.",A Group Public Goods Game with Position Uncertainty,2022-10-15 19:11:57,"Chowdhury Mohammad Sakib Anwar, Jorge Bruno, Sonali SenGupta","http://arxiv.org/abs/2210.08328v1, http://arxiv.org/pdf/2210.08328v1",econ.TH
34420,th,"We present a model of public good provision with a distributor. Our main
result describes a symmetric mixed-strategy equilibrium, where all agents
contribute to a common fund with probability $p$ and the distributor provides
either a particular amount of public goods or nothing. A corollary of this
finding is the efficient public good provision equilibrium where all agents
contribute to the common fund, all agents are expected to contribute, and the
distributor spends the entire common fund for the public good provision.",Public Good Provision with a Distributor,2022-10-19 18:15:38,"Chowdhury Mohammad Sakib Anwar, Alexander Matros, Sonali SenGupta","http://arxiv.org/abs/2210.10642v1, http://arxiv.org/pdf/2210.10642v1",econ.TH
34421,th,"We study reserve prices in auctions with independent private values when
bidders are expectations-based loss averse. We find that the optimal public
reserve price excludes fewer bidder types than under risk neutrality. Moreover,
we show that public reserve prices are not optimal as the seller can earn a
higher revenue with mechanisms that better leverage the ``attachment effect''.
We discuss two such mechanisms: i) an auction with a secrete and random reserve
price, and ii) a two-stage mechanism where an auction with a public reserve
price is followed by a negotiation if the reserve price is not met. Both of
these mechanisms expose more bidders to the attachment effect, thereby
increasing bids and ultimately revenue.",Never Say Never: Optimal Exclusion and Reserve Prices with Expectations-Based Loss-Averse Buyers,2022-10-20 03:32:23,"Benjamin Balzer, Antonio Rosato","http://arxiv.org/abs/2210.10938v2, http://arxiv.org/pdf/2210.10938v2",econ.TH
34422,th,"In a voting problem with a finite set of alternatives to choose from, we
study the manipulation of tops-only rules. Since all non-dictatorial (onto)
voting rules are manipulable when there are more than two alternatives and all
preferences are allowed, we look for rules in which manipulations are not
obvious. First, we show that a rule does not have obvious manipulations if and
only if when an agent vetoes an alternative it can do so with any preference
that does not have such alternative in the top. Second, we focus on two classes
of tops-only rules: (i) (generalized) median voter schemes, and (ii) voting by
committees. For each class, we identify which rules do not have obvious
manipulations on the universal domain of preferences.",Obvious manipulations of tops-only voting rules,2022-10-21 02:07:53,"R. Pablo Arribillaga, Agustin G. Bonifacio","http://arxiv.org/abs/2210.11627v1, http://arxiv.org/pdf/2210.11627v1",econ.TH
34423,th,"In a one-commodity economy with single-peaked preferences and individual
endowments, we study different ways in which reallocation rules can be
strategically distorted by affecting the set of active agents. We introduce and
characterize the family of monotonic reallocation rules and show that each rule
in this class is withdrawal-proof and endowments-merging-proof, at least one is
endowments-splitting-proof and that no such rule is pre-delivery-proof.",Variable population manipulations of reallocation rules in economies with single-peaked preferences,2022-10-23 20:31:31,Agustin G. Bonifacio,"http://arxiv.org/abs/2210.12794v2, http://arxiv.org/pdf/2210.12794v2",econ.TH
34424,th,"We consider a monopolistic seller in a market that may be segmented. The
surplus of each consumer in a segment depends on the price that the seller
optimally charges, which depends on the set of consumers in the segment. We
study which segmentations may result from the interaction among consumers and
the seller. Instead of studying the interaction as a non-cooperative game, we
take a reduced-form approach and introduce a notion of stability that any
resulting segmentation must satisfy. A stable segmentation is one that, for any
alternative segmentation, contains a segment of consumers that prefers the
original segmentation to the alternative one. Our main result characterizes
stable segmentations as efficient and saturated. A segmentation is saturated if
no consumers can be shifted from a segment with a high price to a segment with
a low price without the seller optimally increasing the low price. We use this
characterization to constructively show that stable segmentations always exist.
Even though stable segmentations are efficient, they need not maximize average
consumer surplus, and segmentations that maximize average consumer surplus need
not be stable. Finally, we relate our notion of stability to solution concepts
from cooperative game theory and show that stable segmentations satisfy many of
them.",A Theory of Stable Market Segmentations,2022-10-24 16:13:32,"Nima Haghpanah, Ron Siegel","http://arxiv.org/abs/2210.13194v1, http://arxiv.org/pdf/2210.13194v1",econ.TH
34425,th,"Consider the object allocation (one-sided matching) model of Shapley and
Scarf (1974). When final allocations are observed but agents' preferences are
unknown, when might the allocation be in the core? This is a one-sided analogue
of the model in Echenique, Lee, Shum, and Yenmez (2013). I build a model in
which the strict core is testable -- an allocation is ""rationalizable"" if there
is a preference profile putting it in the core. In this manner, I develop a
theory of the revealed preferences of one-sided matching. I study
rationalizability in both non-transferrable and transferrable utility settings.
In the non-transferrable utility setting, an allocation is rationalizable if
and only if: whenever agents with the same preferences are in the same
potential trading cycle, they receive the same allocation. In the transferrable
utility setting, an allocation is rationalizable if and only if: there exists a
price vector supporting the allocation as a competitive equilibrium; or
equivalently, it satisfies a cyclic monotonicity condition. The proofs leverage
simple graph theory and combinatorial optimization and tie together classic
theories of consumer demand revealed preferences and competitive equilibrium.",Revealed Preferences of One-Sided Matching,2022-10-26 02:35:15,Andrew Tai,"http://arxiv.org/abs/2210.14388v2, http://arxiv.org/pdf/2210.14388v2",econ.TH
34426,th,"I introduce a concave function of allocations and prices -- the economy's
potential -- which measures the difference between utilitarian social welfare
and its dual. I show that Walrasian equilibria correspond to roots of the
potential: allocations maximize weighted utility and prices minimize weighted
indirect utility. Walrasian prices are ""utility clearing"" in the sense that the
utilities consumers expect at Walrasian prices are just feasible. I discuss the
implications of this simple duality for equilibrium existence, the welfare
theorems, and the interpretation of Walrasian prices.",The Economy's Potential: Duality and Equilibrium,2022-10-26 06:04:40,Jacob K Goeree,"http://arxiv.org/abs/2210.14437v1, http://arxiv.org/pdf/2210.14437v1",econ.TH
34427,th,"We study equilibria in an Electric Vehicle (EV) charging game, a cost
minimization game inherent to decentralized charging control strategy for EV
power demand management. In our model, each user optimizes its total cost which
is sum of direct power cost and the indirect dissatisfaction cost. We show
that, taking player specific price independent dissatisfaction cost in to
account, contrary to popular belief, herding only happens at lower EV uptake.
Moreover, this is true for both linear and logistic dissatisfaction functions.
We study the question of existence of price profiles to induce a desired
equilibrium. We define two types of equilibria, distributed and non-distributed
equilibria, and show that under logistic dissatisfaction, only non-distributed
equilibria are possible by feasibly setting prices. In the linear case, both types
of equilibria are possible, but price discrimination is necessary to induce
distributed equilibria. Finally, we show that in the case of symmetric EV
users, mediation cannot improve upon Nash equilibria.",Pricing and Electric Vehicle Charging Equilibria,2022-10-27 00:07:05,"Trivikram Dokka, Jorge Bruno, Sonali SenGupta, Chowdhury Mohammad Sakib Anwar","http://arxiv.org/abs/2210.15035v2, http://arxiv.org/pdf/2210.15035v2",econ.TH
34428,th,"In robust decision making under uncertainty, a natural choice is to go with
safety (aka security) level strategies. However, in many important cases, most
notably auctions, there is a large multitude of safety level strategies, thus
making the choice unclear. We consider two refined notions:
  (i) a term we call DSL (distinguishable safety level), which is based on the
notion of ``discrimin'', which uses a pairwise comparison of actions while
removing trivial equivalencies. This captures the fact that when comparing two
actions an agent should not care about payoffs in situations where they lead to
identical payoffs.
  (ii) The well-known Leximin notion from social choice theory, which we apply
for robust decision-making. In particular, the leximin is always DSL but not
vice-versa.
  We study the relations of these notions to other robust notions, and
illustrate the results of their use in auctions and other settings. Economic
design aims to maximize social welfare when facing self-motivated participants.
In online environments, such as the Web, participants' incentives take a novel
form originating from the lack of clear agent identity -- the ability to create
Sybil attacks, i.e., the ability of each participant to act using multiple
identities. It is well-known that Sybil attacks are a major obstacle for
welfare-maximization. Our main result proves that when DSL attackers face
uncertainty over the auction's bids, the celebrated VCG mechanism is
welfare-maximizing even under Sybil attacks. Altogether, our work shows a
successful fundamental synergy between robustness under uncertainty, economic
design, and agents' strategic manipulations in online multi-agent systems.",Optimal Mechanism Design for Agents with DSL Strategies: The Case of Sybil Attacks in Combinatorial Auctions,2022-10-27 08:18:00,"Yotam Gafni, Moshe Tennenholtz","http://dx.doi.org/10.4204/EPTCS.379.20, http://arxiv.org/abs/2210.15181v3, http://arxiv.org/pdf/2210.15181v3",econ.TH
34438,th,"We consider two-person bargaining problems in which (only) the players'
disagreement payoffs are private information and it is common knowledge that
disagreement is inefficient. We show that if the Pareto frontier is linear, or
the utility functions are quasi-linear, the outcome of an ex post efficient
mechanism must be independent of the players' disagreement values. Hence, in
this case, a bargaining solution must be ordinal: the players' interim expected
utilities cannot depend on the intensity of their preferences. For a non-linear
frontier, the result continues to hold if disagreement payoffs are independent
or if one of the players only has a few types. We discuss implications of these
results for axiomatic bargaining theory and for full surplus extraction in
mechanism design.","Efficiency, durability and ordinality in 2-person bargaining with incomplete information",2022-11-13 09:12:39,"Eric van Damme, Xu Lang","http://arxiv.org/abs/2211.06830v1, http://arxiv.org/pdf/2211.06830v1",econ.TH
34429,th,"A principal who values an object allocates it to one or more agents. Agents
learn private information (signals) from an information designer about the
allocation payoff to the principal. Monetary transfer is not available but the
principal can costly verify agents' private signals. The information designer
can influence the agents' signal distributions, based upon which the principal
maximizes the allocation surplus. An agent's utility is simply the probability
of obtaining the good. With a single agent, we characterize (i) the
agent-optimal information, (ii) the principal-worst information, and (iii) the
principal-optimal information. Even though the objectives of the principal and
the agent are not directly comparable, we find that any agent-optimal
information is principal-worst. Moreover, there exists a robust mechanism that
achieves the principal's payoff under (ii), which is therefore an optimal
robust mechanism. Many of our results extend to the multiple-agent case; if
not, we provide counterexamples.",Information Design in Allocation with Costly Verification,2022-10-28 12:07:27,"Yi-Chun Chen, Gaoji Hu, Xiangqian Yang","http://arxiv.org/abs/2210.16001v1, http://arxiv.org/pdf/2210.16001v1",econ.TH
34430,th,"We describe a model that explains possibly indecisive choice behavior, that
is, quasi-choices (choice correspondences that may be empty on some menus). The
justification is here provided by a proportion of ballots, which are
quasi-choices rationalizable by an arbitrary binary relation. We call a
quasi-choice $s$-majoritarian if all options selected from a menu are endorsed
by a share of ballots larger than $s$. We prove that all forms of
majoritarianism are equivalent to a well-known behavioral property, namely
Chernoff axiom. Then we focus on two paradigms of majoritarianism, whereby
either a simple majority of ballots justifies a quasi-choice, or the
endorsement by a single ballot suffices - a liberal justification. These
benchmark explanations typically require a different minimum number of ballots.
We determine the asymptotic minimum size of a liberal justification.",Rationalization of indecisive choice behavior by majoritarian ballots,2022-10-30 19:50:24,"José Carlos R. Alcantud, Domenico Cantone, Alfio Giarlotta, Stephen Watson","http://arxiv.org/abs/2210.16885v1, http://arxiv.org/pdf/2210.16885v1",econ.TH
34431,th,"The risk of a financial position shines through by means of the fluctuation
of its market price. The factors affecting the price of a financial position
include not only market internal factors, but also other various market
external factors. The latter can be understood as sorts of environments to
which financial positions have to expose. Motivated by this observation, this
paper aims to design a novel axiomatic approach to risk measures in random
environments. We construct a new distortion-type risk measure, which can
appropriately evaluate the risk of financial positions in the presence of
environments. After having studied its fundamental properties, we also
axiomatically characterize it by proposing a novel set of axioms. Furthermore,
its coherence and dual representation are investigated. The new class of risk
measures in random environments is rich enough, for example, it not only can
recover some known risk measures such as the common weighted value at risk and
range value at risk, but also can induce other new specific risk measures such
as risk measures in the presence of background risk. Examples are given to
illustrate the new framework of risk measures. This paper gives some
theoretical results about risk measures in random environments, offering insight
into the potential impact of environments on the risk measures
of positions.",Distortion risk measures in random environments: construction and axiomatic characterization,2022-11-01 18:11:34,"Shuo Gong, Yijun Hu, Linxiao Wei","http://arxiv.org/abs/2211.00520v3, http://arxiv.org/pdf/2211.00520v3",econ.TH
34432,th,"A principal and an agent face symmetric uncertainty about the value of two
correlated projects for the agent. The principal chooses which project values
to publicly discover and makes a proposal to the agent, who accepts if and only
if the expected sum of values is positive. We characterize optimal discovery
for various principal preferences: maximizing the probability of the grand
bundle, of having at least one project approved, and of a weighted combination
of projects. Our results highlight the usefulness of trial balloons: projects
which are ex-ante disfavored but have higher variance than a more favored
alternative. Discovering disfavored projects may be optimal even when their
variance is lower than that of the alternative, so long as their
disfavorability is neither too large nor too small. These conclusions
rationalize the inclusion of controversial policies in omnibus bills and the
presence of moonshot projects in organizations.",Discovery through Trial Balloons,2022-11-04 23:37:35,Eitan Sapiro-Gheiler,"http://arxiv.org/abs/2211.02743v2, http://arxiv.org/pdf/2211.02743v2",econ.TH
34433,th,"This gives two existence results of alpha-core solutions by introducing
P-open conditions and strong P-open conditions into games without ordered
preferences. The existence of alpha-core solutions is obtained for games with
infinite-players. Secondly, it provides a short proof of Kajii's (Journal of
Economic Theory 56, 194-205, 1992) existence theorem for alpha-core solutions,
further, the Kajii's theorem is equivalent to the Browder fixed point theorem.
In addition, the obtained existence results can include many typical results
for alpha-core solutions and some recent existence results as special cases.",On Existence of alpha-Core Solutions for Games with Finite or Infinite Players,2022-11-06 16:25:33,"Qi-Qing Song, Min Guo","http://arxiv.org/abs/2211.03112v3, http://arxiv.org/pdf/2211.03112v3",econ.TH
34434,th,"This paper builds a model of interactive belief hierarchies to derive the
conditions under which judging an arbitrage opportunity requires Bayesian
market participants to exercise their higher-order beliefs. As a Bayesian, an
agent must carry a complete recursion of priors over the uncertainty about
future asset payouts, the strategies employed by other market participants that
are aggregated in the price, other market participants' beliefs about the
agent's strategy, other market participants beliefs about what the agent
believes their strategies to be, and so on ad infinitum. Defining this infinite
recursion of priors -- the belief hierarchy so to speak -- along with how they
update gives the Bayesian decision problem equivalent to the standard asset
pricing formulation of the question. The main results of the paper show that an
arbitrage trade arises only when an agent updates his recursion of priors about
the strategies and beliefs employed by other market participants. The paper
thus connects the foundations of finance to the foundations of game theory by
identifying a bridge from market arbitrage to market participant belief
hierarchies.",Arbitrage from a Bayesian's Perspective,2022-11-07 03:33:28,Ayan Bhattacharya,"http://arxiv.org/abs/2211.03244v1, http://arxiv.org/pdf/2211.03244v1",econ.TH
34435,th,"We consider deterministic totally-ordered-time games. We present three axioms
for strategies. We show that for any tuple of strategies that satisfy the
axioms, there exists a unique complete history that is consistent with the
strategy tuple.",Strategies in deterministic totally-ordered-time games,2022-11-07 13:01:21,Tomohiko Kawamori,"http://arxiv.org/abs/2211.03398v1, http://arxiv.org/pdf/2211.03398v1",econ.TH
34440,th,"A monopoly platform sells either a risky product (with unknown utility) or a
safe product (with known utility) to agents who sequentially arrive and learn
the utility of the risky product by the reporting of previous agents. It is
costly for agents to report utility; hence the platform has to design both the
prices and the reporting bonus to motivate the agents to explore and generate
new information. By allowing sellers to set bonuses, we are essentially
enabling them to dynamically control the supply of learning signals without
significantly affecting the demand for the product. We characterize the optimal
bonus and pricing schemes offered by the profit-maximizing platform. It turns
out that the optimal scheme falls into one of four types: Full Coverage,
Partial Coverage, Immediate Revelation, and Non-Bonus. In a model of
exponential bandit, we find that there is a dynamic switch of the types along
the learning trajectory. Although learning stops efficiently, information is
revealed too slowly compared with the planner's optimal solution.",Optimal Pricing Schemes in the Presence of Social Learning and Costly Reporting,2022-11-14 16:58:35,"Kaiwei Zhang, Xi Weng, Xienan Cheng","http://arxiv.org/abs/2211.07362v4, http://arxiv.org/pdf/2211.07362v4",econ.TH
34441,th,"We investigate the allocation of a co-owned company to a single owner using
the Texas Shoot-Out mechanism with private valuations. We identify Knightian
Uncertainty about the peer's distribution as a reason for its deterrent effect
on premature dissolution. Modeling uncertainty by a distribution band around a
reference distribution $F$, we derive the optimal price announcement for an
ambiguity averse divider. The divider hedges against uncertainty for valuations
close to the median of $F$, while extracting expected surplus for high and low
valuations. The outcome of the mechanism is efficient for valuations around the
median. A risk neutral co-owner prefers to be the chooser, even strictly so for
any valuation under low levels of uncertainty and for extreme valuations under
high levels of uncertainty. If valuations are believed to be close, less
uncertainty is required for the mechanism to always be efficient and to reduce
premature dissolutions.",The Texas Shootout under Uncertainty,2022-11-18 11:35:11,"Gerrit Bauch, Frank Riedel","http://arxiv.org/abs/2211.10089v1, http://arxiv.org/pdf/2211.10089v1",econ.TH
34442,th,"We study the revenue comparison problem of auctions when the seller has a
maxmin expected utility preference. The seller holds a set of priors around
some reference belief, interpreted as an approximating model of the true
probability law or the focal point distribution. We develop a methodology for
comparing the revenue performances of auctions: the seller prefers auction X to
auction Y if their transfer functions satisfy a weak form of the
single-crossing condition. Intuitively, this condition means that a bidder's
payment is more negatively associated with the competitor's type in X than in
Y. Applying this methodology, we show that when the reference belief is
independent and identically distributed (IID) and the bidders are ambiguity
neutral, (i) the first-price auction outperforms the second-price and all-pay
auctions, and (ii) the second-price and all-pay auctions outperform the war of
attrition. Our methodology yields results opposite to those of the Linkage
Principle.",Revenue Comparisons of Auctions with Ambiguity Averse Sellers,2022-11-23 05:44:32,"Sosung Baik, Sung-Ha Hwang","http://arxiv.org/abs/2211.12669v1, http://arxiv.org/pdf/2211.12669v1",econ.TH
34443,th,"Organizations design their communication structures to improve
decision-making while limiting wasteful influence activities. An efficient
communication protocol grants complete-information payoffs to all organization
members, thereby overcoming asymmetric information problems at no cost. This
paper characterizes efficient protocols assuming that: (i) some agents within
the organization have the knowledge required for optimal decision-making; (ii)
both the organization and consulted agents incur losses proportional to the
exerted influence activities; and (iii) informed agents can discuss their
strategies before being consulted. Under these assumptions, ""public advocacy""
is the unique efficient communication protocol. This result provides a novel
rationale for public advocacy.",Efficient Communication in Organizations,2022-11-24 16:46:08,Federico Vaccari,"http://arxiv.org/abs/2211.13605v2, http://arxiv.org/pdf/2211.13605v2",econ.TH
34444,th,"We consider the mechanism design problem of a principal allocating a single
good to one of several agents without monetary transfers. Each agent desires
the good and uses it to create value for the principal. We designate this value
as the agent's private type. Even though the principal does not know the
agents' types, she can verify them at a cost. The allocation of the good thus
depends on the agents' self-declared types and the results of any verification
performed, and the principal's payoff matches her value of the allocation minus
the costs of verification. It is known that if the agents' types are
independent, then a favored-agent mechanism maximizes her expected payoff.
However, this result relies on the unrealistic assumptions that the agents'
types follow known independent probability distributions. In contrast, we
assume here that the agents' types are governed by an ambiguous joint
probability distribution belonging to a commonly known ambiguity set and that
the principal maximizes her worst-case expected payoff. We study support-only
ambiguity sets, which contain all distributions supported on a rectangle,
Markov ambiguity sets, which contain all distributions in a support-only
ambiguity set satisfying some first-order moment bounds, and Markov ambiguity
sets with independent types, which contain all distributions in a Markov
ambiguity set under which the agents' types are mutually independent. In all
cases we construct explicit favored-agent mechanisms that are not only optimal
but also Pareto-robustly optimal.",Distributionally Robust Optimal Allocation with Costly Verification,2022-11-28 11:25:44,"Halil İbrahim Bayrak, Çağıl Koçyiğit, Daniel Kuhn, Mustafa Çelebi Pınar","http://arxiv.org/abs/2211.15122v2, http://arxiv.org/pdf/2211.15122v2",econ.TH
34445,th,"A ""repeat voting"" procedure is proposed, whereby voting is carried out in two
identical rounds. Every voter can vote in each round, the results of the first
round are made public before the second round, and the final result is
determined by adding up all the votes in both rounds. It is argued that this
simple modification of election procedures may well increase voter
participation and result in more accurate and representative outcomes.",Repeat Voting: Two-Vote May Lead More People To Vote,2022-11-29 18:10:34,Sergiu Hart,"http://arxiv.org/abs/2211.16282v1, http://arxiv.org/pdf/2211.16282v1",econ.TH
34446,th,"We study a two-period moral hazard problem; there are two agents, with
identical action sets that are unknown to the principal. The principal
contracts with each agent sequentially, and seeks to maximize the worst-case
discounted sum of payoffs, where the worst case is over the possible action
sets. The principal observes the action chosen by the first agent, and then
offers a new contract to the second agent based on this knowledge, thus having
the opportunity to explore in the first period. We characterize the principal's
optimal payoff guarantee. Following nonlinear first-period contracts, optimal
second-period contracts may also be nonlinear in some cases. Nonetheless, we
find that linear contracts are optimal in both periods.",Robust Contracts with Exploration,2022-12-01 02:02:27,Chang Liu,"http://arxiv.org/abs/2212.00157v1, http://arxiv.org/pdf/2212.00157v1",econ.TH
34447,th,"This paper models legislative decision-making with an agenda setter who can
propose policies sequentially, tailoring each proposal to the status quo that
prevails after prior votes. Voters are sophisticated and the agenda setter
cannot commit to her future proposals. Nevertheless, the agenda setter obtains
her favorite outcome in every equilibrium regardless of the initial default
policy. Central to our results is a new condition on preferences,
manipulability, that holds in rich policy spaces, including spatial settings
and distribution problems. Our results overturn the conventional wisdom that
voter sophistication alone constrains an agenda setter's power.",Who Controls the Agenda Controls the Polity,2022-12-02 19:00:16,"S. Nageeb Ali, B. Douglas Bernheim, Alexander W. Bloedel, Silvia Console Battilana","http://arxiv.org/abs/2212.01263v1, http://arxiv.org/pdf/2212.01263v1",econ.TH
34448,th,"The process of market digitization at the world level and the increasing and
extended usage of digital devices reshaped the way consumers employ their
leisure time, with the emergence of what can be called digital leisure. This
new type of leisure produces data that firms can use, with no explicit cost
paid by consumers. At the same time, the global digitalization process has
allowed workers to allocate part (or all) of their working time to the Gig
Economy sector, which strongly relies on data as a production factor. In this
paper, we develop a two-sector growth model to study how the above mechanism
can shape the dynamics of growth, also assessing how shocks in either the
traditional or the Gig Economy sector can modify the equilibrium of the overall
economy. We find that shocks to TFP can crowd out working time from one
sector to the other, while shocks to the elasticity of production with respect
to data lead to a change in the time allocated to digital leisure.",Digital leisure and the gig economy: a two-sector model of growth,2022-12-05 12:23:22,"Francesco Angelini, Luca V. Ballestra, Massimiliano Castellani","http://arxiv.org/abs/2212.02119v1, http://arxiv.org/pdf/2212.02119v1",econ.TH
34449,th,"We establish the equivalence between a principle of almost absence of
arbitrage opportunities and nearly rational decision-making. The implications
of such principle are considered in the context of the aggregation of
probabilistic opinions and of stochastic choice functions. In the former a
bounded arbitrage principle and its equivalent form as an approximate Pareto
condition are shown to bound the difference between the collective
probabilistic assessment of a set of states and a linear aggregation rule on
the individual assessments. In the latter we show that our general principle of
limited arbitrage opportunities translates into a weakening of the
McFadden-Richter axiom of stochastic rationality, and gives an upper bound for
the minimum distance of a stochastic choice function to another in the class of
random utility maximization models.",Bounded arbitrage and nearly rational behavior,2022-12-06 03:44:45,Leandro Nascimento,"http://arxiv.org/abs/2212.02680v2, http://arxiv.org/pdf/2212.02680v2",econ.TH
34450,th,"A classic trade-off that school districts face when deciding which matching
algorithm to use is that it is not possible to always respect both priorities
and preferences. The student-proposing deferred acceptance algorithm (DA)
respects priorities but can lead to inefficient allocations. The top trading
cycle algorithm (TTC) respects preferences but may violate priorities. We
identify a new condition on school choice markets under which DA is efficient
and there is a unique allocation that respects priorities. Our condition
generalizes earlier conditions by placing restrictions on how preferences and
priorities relate to one another only on the parts that are relevant for the
assignment. We discuss how our condition sheds light on existing empirical
findings. We show through simulations that our condition significantly expands
the range of known environments for which DA is efficient.",Respecting priorities versus respecting preferences in school choice: When is there a trade-off?,2022-12-06 13:55:35,"Estelle Cantillon, Li Chen, Juan S. Pereyra","http://arxiv.org/abs/2212.02881v1, http://arxiv.org/pdf/2212.02881v1",econ.TH
34451,th,"We consider a general nonlinear pricing environment with private information.
The seller can control both the signal that the buyers receive about their
value and the selling mechanism. We characterize the optimal menu and
information structure that jointly maximize the seller's profits. The optimal
screening mechanism has finitely many items even with a continuum of values. We
identify sufficient conditions under which the optimal mechanism has a single
item. Thus the seller decreases the variety of items below the efficient level
as a by-product of reducing the information rents of the buyer.",Screening with Persuasion,2022-12-07 02:02:43,"Dirk Bergemann, Tibor Heumann, Stephen Morris","http://arxiv.org/abs/2212.03360v1, http://arxiv.org/pdf/2212.03360v1",econ.TH
34452,th,"The 1961 Ellsberg paradox is typically seen as an empirical challenge to the
subjective expected utility framework. Experiments based on Ellsberg's design
have spawned a variety of new approaches, culminating in a new paradigm
represented by, now classical, models of ambiguity aversion. We design and
implement a decision-theoretic lab experiment that is extremely close to the
original Ellsberg design and in which, empirically, subjects make choices very
similar to those in the Ellsberg experiments. In our environment, however,
these choices cannot be rationalized by any of the classical models of
ambiguity aversion.",An Ellsberg paradox for ambiguity aversion,2022-12-07 15:39:09,"Christoph Kuzmics, Brian W. Rogers, Xiannong Zhang","http://arxiv.org/abs/2212.03603v2, http://arxiv.org/pdf/2212.03603v2",econ.TH
34453,th,"Results from the communication complexity literature have demonstrated that
stable matching requires communication: one cannot find or verify a stable
match without having access to essentially all of the ordinal preference
information held privately by the agents in the market. Stated differently,
these results show that stable matching mechanisms are not robust to even a
small number of labeled inaccuracies in the input preferences. In practice,
these results indicate that agents must go through the time-intensive process
of accurately ranking each and every potential match candidate if they want a
guarantee that the resulting match is stable. Thus, in large markets,
communication requirements for stable matching may be impractically high.
  A natural question to ask, given this result, is whether some higher-order
structure in the market can indicate which large markets have steeper
communication requirements. In this paper, we perform such an analysis in a
regime where agents have a utility-based notion of preference. We consider a
dynamic model where agents only have access to an approximation of their
utility that satisfies a universal multiplicative error bound. We apply
guarantees from the theoretical computer science literature on low-distortion
embeddings of finite metric spaces to understand the communication requirements
of stable matching in large markets in terms of their structural properties.
Our results show that for a broad family of markets, the error bound may not
grow faster than $n^2\log(n)$ while maintaining a deterministic guarantee on
the behavior of stable matching mechanisms in the limit. We also show that a
stronger probabilistic guarantee may be made so long as the bound grows at most
logarithmically in the underlying topological complexity of the market.",Utility-Based Communication Requirements for Stable Matching in Large Markets,2022-12-08 04:16:21,Naveen Durvasula,"http://arxiv.org/abs/2212.04024v1, http://arxiv.org/pdf/2212.04024v1",econ.TH
34456,th,"We establish the existence of the universal type structure in presence of
conditioning events without any topological assumption, namely, a type
structure that is terminal, belief-complete, and non-redundant, by performing a
construction \`a la Heifetz & Samet (1998). In doing so, we affirmatively
answer a longstanding conjecture made by Battigalli & Siniscalchi
(1999) concerning the possibility of performing such a construction with
conditioning events. In particular, we obtain the result by exploiting
arguments from category theory and the theory of coalgebras, thus making
explicit the mathematical structure underlying all the constructions of large
interactive structures and obtaining the belief-completeness of the structure
as an immediate corollary of known results from these fields.",Topology-Free Type Structures with Conditioning Events,2022-12-14 17:30:05,Pierfrancesco Guarino,"http://arxiv.org/abs/2212.07246v2, http://arxiv.org/pdf/2212.07246v2",econ.TH
34457,th,"These lecture notes accompany a one-semester graduate course on information
and learning in economic theory. Topics include common knowledge, Bayesian
updating, monotone-likelihood ratio properties, affiliation, the Blackwell
order, cost of information, learning and merging of beliefs, model uncertainty,
model misspecification, and information design.",Information and Learning in Economic Theory,2022-12-15 00:55:10,Annie Liang,"http://arxiv.org/abs/2212.07521v2, http://arxiv.org/pdf/2212.07521v2",econ.TH
34458,th,"This paper introduces an equilibrium framework based on sequential sampling
in which players face strategic uncertainty over their opponents' behavior and
acquire informative signals to resolve it. Sequential sampling equilibrium
delivers a disciplined model featuring an endogenous distribution of choices,
beliefs, and decision times, that not only rationalizes well-known deviations
from Nash equilibrium, but also makes novel predictions supported by existing
data. It grounds a relationship between empirical learning and strategic
sophistication, and generates stochastic choice through randomness inherent to
sampling, without relying on indifference or choice mistakes. Further, it
provides a rationale for Nash equilibrium when sampling costs vanish.",Sequential Sampling Equilibrium,2022-12-15 14:07:44,Duarte Gonçalves,"http://arxiv.org/abs/2212.07725v2, http://arxiv.org/pdf/2212.07725v2",econ.TH
34459,th,"When information acquisition is costly but flexible, a principal may
rationally acquire information that favors ""majorities"" over ""minorities.""
Majorities therefore face incentives to invest in becoming productive, whereas
minorities are discouraged from such investments. The principal, in turn,
rationally ignores minorities unless they surprise him with a genuinely
outstanding outcome, precisely because they are less likely to invest. We give
conditions under which the resulting discriminatory equilibrium is most
preferred by the principal, even though all groups are ex-ante identical. Our
results add to the discussions of affirmative action, implicit bias, and
occupational segregation and stereotypes.",Rationally Inattentive Statistical Discrimination: Arrow Meets Phelps,2022-12-16 04:14:37,"Federico Echenique, Anqi Li","http://arxiv.org/abs/2212.08219v3, http://arxiv.org/pdf/2212.08219v3",econ.TH
34460,th,"I provide necessary and sufficient conditions for an agent's preferences to
be represented by a unique ergodic transformation. Put differently, if an agent
seeks to maximize the time average growth of their wealth, what axioms must
their preferences obey? By answering this, I provide economic theorists a clear
view of where ""Ergodicity Economics"" deviates from established models.",Expected Growth Criterion: An Axiomatization,2022-12-19 19:56:34,Joshua Lawson,"http://arxiv.org/abs/2212.09617v1, http://arxiv.org/pdf/2212.09617v1",econ.TH
34461,th,"We study learning by privately informed forward-looking agents in a simple
repeated-action setting of social learning. Under a symmetric signal structure,
forward-looking agents behave myopically for any degree of patience. Myopic
equilibrium is unique in the class of symmetric threshold strategies, and the
simplest symmetric non-monotonic strategies. If the signal structure is
asymmetric and the game is infinite, there is no equilibrium in myopic
strategies, for any positive degree of patience.",Strategic Observational Learning,2022-12-20 01:19:26,Dimitri Migrow,"http://arxiv.org/abs/2212.09889v2, http://arxiv.org/pdf/2212.09889v2",econ.TH
34462,th,"An agent's preferences depend on an ordered parameter or type. We
characterize the set of utility functions with single-crossing differences
(SCD) in convex environments. These include preferences over lotteries, both in
expected utility and rank-dependent utility frameworks, and preferences over
bundles of goods and over consumption streams. Our notion of SCD does not
presume an order on the choice space. This unordered SCD is necessary and
sufficient for ''interval choice'' comparative statics. We present applications
to cheap talk, observational learning, and collective choice, showing how
convex environments arise in these problems and how SCD/interval choice are
useful. Methodologically, our main characterization stems from a result on
linear aggregations of single-crossing functions.",Single-Crossing Differences in Convex Environments,2022-12-22 22:35:51,"Navin Kartik, SangMok Lee, Daniel Rappoport","http://arxiv.org/abs/2212.12009v2, http://arxiv.org/pdf/2212.12009v2",econ.TH
34463,th,"We study optimal bundling when consumers differ in one dimension. We
introduce a partial order on the set of bundles defined by (i) set inclusion
and (ii) sales volumes (if sold alone and priced optimally). We show that if
the undominated bundles with respect to this partial order are nested, then
nested bundling (tiered pricing) is optimal. We characterize which nested menu
is optimal: Selling a given menu of nested bundles is optimal if a smaller
bundle in (out of) the menu sells more (less) than a bigger bundle in the menu.
We present three applications of these insights: the first two connect optimal
bundling and quality design to price elasticities and cost structures; the last
one establishes a necessary and sufficient condition for costly screening to be
optimal when a principal can use both price and nonprice screening instruments.",The Simple Economics of Optimal Bundling,2022-12-24 04:00:15,Frank Yang,"http://arxiv.org/abs/2212.12623v2, http://arxiv.org/pdf/2212.12623v2",econ.TH
34471,th,"This paper extends the findings of Liu et al. (2015, Strategic environmental
corporate social responsibility in a differentiated duopoly market, Economics
Letters), along two dimensions. First, we consider the case of endogenous
market structure a la Vives and Singh (1984, Price and quantity competition in
a differentiated duopoly, The Rand Journal of Economics). Second, we refine the
ECSR certification standards in differentiated duopoly with rankings. We find
that optimal ECSR certification standards by NGO are the highest in Bertrand
competition, followed by mixed markets and the lowest in Cournot competition.
Next, NGO certifier will set the ECSR standards below the optimal level. Also,
we show that given the ECSR certification standards, there is a possibility of
both price and quantity contracts choices by the firms in endogenous market
structure.",Strategic Environmental Corporate Social Responsibility (ECSR) Certification and Endogenous Market Structure,2023-01-09 15:12:25,"Ajay Sharma, Siddhartha Rastogi","http://arxiv.org/abs/2301.03291v1, http://arxiv.org/pdf/2301.03291v1",econ.TH
34464,th,"We consider a model of bilateral trade with private values. The value of the
buyer and the cost of the seller are jointly distributed. The true joint
distribution is unknown to the designer, however, the marginal distributions of
the value and the cost are known to the designer. The designer wants to find a
trading mechanism that is robustly Bayesian incentive compatible, robustly
individually rational, budget-balanced and maximizes the expected gains from
trade over all such mechanisms. We refer to such a mechanism as an optimal
robust mechanism. We establish equivalence between Bayesian incentive
compatible (BIC) mechanisms and dominant strategy incentive compatible (DSIC) mechanisms. We
characterise the worst distribution for a given mechanism and use this
characterisation to find an optimal robust mechanism. We show that there is an
optimal robust mechanism that is deterministic (posted-price), dominant
strategy incentive compatible, and ex-post individually rational. We also
derive an explicit expression of the posted-price of such an optimal robust
mechanism. We also show the equivalence between the efficiency gains from the
optimal robust mechanism (max-min problem) and guaranteed efficiency gains if
the designer could choose the mechanism after observing the true joint
distribution (min-max problem).",Optimal Robust Mechanism in Bilateral Trading,2022-12-29 19:44:12,Komal Malik,"http://arxiv.org/abs/2212.14367v1, http://arxiv.org/pdf/2212.14367v1",econ.TH
34465,th,"This paper analyses a two-region model with vertical innovations that enhance
the quality of varieties of the horizontally differentiated manufactures
produced in each of the two regions. We look at how the creation and diffusion
of knowledge and increasing returns in manufacturing interact to shape the
spatial economy. Innovations occur with a probability that depends on the
inter-regional interaction between researchers (mobile workers). We find that,
if the weight of interaction with foreign scientists is relatively more
important for the success of innovation, the model accounts for re-dispersion
of economic activities after an initial stage of progressive agglomeration as
transport costs decrease from a high level.",Innovation through intra and inter-regional interaction in economic geography,2022-12-30 01:23:50,"José M. Gaspar, Minoru Osawa","http://arxiv.org/abs/2212.14475v5, http://arxiv.org/pdf/2212.14475v5",econ.TH
34466,th,"Given an initial matching and a policy objective on the distribution of agent
types to institutions, we study the existence of a mechanism that weakly
improves the distributional objective and satisfies constrained efficiency,
individual rationality, and strategy-proofness. We show that such a mechanism
need not exist in general. We introduce a new notion of discrete concavity,
which we call pseudo M$^{\natural}$-concavity, and construct a mechanism with
the desirable properties when the distributional objective satisfies this
notion. We provide several practically relevant distributional objectives that
are pseudo M$^{\natural}$-concave.",Efficient Market Design with Distributional Objectives,2022-12-31 18:52:17,"Isa E. Hafalir, Fuhito Kojima, M. Bumin Yenmez","http://arxiv.org/abs/2301.00232v2, http://arxiv.org/pdf/2301.00232v2",econ.TH
34467,th,"We provide optimal solutions to an institution that has dual goals of
diversity and meritocracy when choosing from a set of applications. For
example, in college admissions, administrators may want to admit a diverse
class in addition to choosing students with the highest qualifications. We
provide a class of choice rules that maximize merit subject to attaining a
diversity level. Using this class, we find all subsets of applications on the
diversity-merit Pareto frontier. In addition, we provide two novel
characterizations of matroids.",Design on Matroids: Diversity vs. Meritocracy,2022-12-31 19:05:19,"Isa E. Hafalir, Fuhito Kojima, M. Bumin Yenmez, Koji Yokote","http://arxiv.org/abs/2301.00237v1, http://arxiv.org/pdf/2301.00237v1",econ.TH
34468,th,"In this paper, we explore a dynamic Bertrand duopoly game with differentiated
products, where firms are boundedly rational and consumers are assumed to
possess an underlying CES utility function. We mainly focus on two distinct
degrees of product substitutability. Several tools based on symbolic
computations such as the triangular decomposition method and the PCAD method
are employed in the analytical investigation of the model. The uniqueness of
the non-vanishing equilibrium is proved and rigorous conditions for the local
stability of this equilibrium are established for the first time. Most
importantly, we find that increasing the substitutability degree or decreasing
the product differentiation has an effect of destabilization for our Bertrand
model, which is in contrast with the corresponding conclusions for the Cournot
models. This finding helps reveal the essential
difference between dynamic Cournot and Bertrand oligopolies with differentiated
goods. In the special case of identical marginal costs, we derive that lower
degrees of product differentiation mean lower prices, higher supplies, lower
profits, and lower social welfare. Furthermore, complex dynamics such as
periodic orbits and chaos are reported through our numerical simulations.",A Bertrand duopoly game with differentiated products reconsidered,2023-01-03 11:56:59,"Xiaoliang Li, Bo Li","http://arxiv.org/abs/2301.01007v1, http://arxiv.org/pdf/2301.01007v1",econ.TH
34469,th,"This paper formalizes the lattice structure of the ballot voters cast in a
ranked-choice election and the preferences that this structure induces. These
preferences are shown to run counter to previous assumptions about voters'
preferences, indicating that ranked-choice elections require
different considerations for voters and candidates alike. While the model
assumes that voters vote sincerely, the framework of ranked-choice elections this
paper presents allows for considerations of strategic voting in future work.",Preferences on Ranked-Choice Ballots,2023-01-06 22:34:49,Brian Duricy,"http://arxiv.org/abs/2301.02697v1, http://arxiv.org/pdf/2301.02697v1",econ.TH
34470,th,"There is an increasing need to hold players responsible for negative or
positive impacts that take place elsewhere in a value chain or a network. For
example, countries or companies are held more and more responsible for their
indirect carbon emissions. We introduce a responsibility value that allocates
the total impact of the value chain among the players, taking into account
their direct impact and their indirect impact through the underlying graph.
Moreover, we show that the responsibility value satisfies a set of natural yet
important properties.",A responsibility value for digraphs,2023-01-07 01:17:10,"Rosa van den Ende, Dylan Laplace Mermoud","http://arxiv.org/abs/2301.02728v1, http://arxiv.org/pdf/2301.02728v1",econ.TH
34472,th,"We design and implement lab experiments to evaluate the normative appeal of
behavior arising from models of ambiguity-averse preferences. We report two
main empirical findings. First, we demonstrate that behavior reflects an
incomplete understanding of the problem, providing evidence that subjects do
not act on the basis of preferences alone. Second, additional clarification of
the decision making environment pushes subjects' choices in the direction of
ambiguity aversion models, regardless of whether or not the choices are also
consistent with subjective expected utility, supporting the position that
subjects find such behavior normatively appealing.",Randomization advice and ambiguity aversion,2023-01-09 15:49:03,"Christoph Kuzmics, Brian W. Rogers, Xiannong Zhang","http://arxiv.org/abs/2301.03304v1, http://arxiv.org/pdf/2301.03304v1",econ.TH
34473,th,"The standard rational choice model describes individuals as making choices by
selecting the best option from a menu. A wealth of evidence instead suggests
that individuals often filter menus into smaller sets - consideration sets -
from which choices are then made. I provide a theoretical foundation for this
phenomenon, developing a formal language of axioms to characterize how
consideration sets are formed from menus. I posit that consideration filters -
mappings that translate a menu into one of its subsets - capture this process,
and I introduce several properties that consideration filters can have. I then
extend this core model to provide linkages with the sequential choice and
rational attention literatures. Finally, I explore whether utility
representation is feasible under this consideration model, conjecturing
necessary and sufficient conditions for consideration-mediated choices to be
rationalizable.",Filtering Down to Size: A Theory of Consideration,2023-01-13 19:43:18,Tonna Emenuga,"http://arxiv.org/abs/2301.05649v1, http://arxiv.org/pdf/2301.05649v1",econ.TH
34474,th,"Consider the following collective choice problem: a group of budget
constrained agents must choose one of several alternatives. Is there a budget
balanced mechanism that: i) does not depend on the specific characteristics of
the group, ii) does not require unaffordable transfers, and iii) implements
utilitarianism if the agents' preferences are quasilinear and their private
information? We study the following procedure: every agent can express any
intensity of support or opposition to each alternative, by transferring to the
rest of the agents wealth equal to the square of the intensity expressed; and
the outcome is determined by the sums of the expressed intensities. We prove
that as the group grows large, in every equilibrium of this quadratic-transfers
mechanism, each agent's transfer converges to zero, and the probability that
the efficient outcome is chosen converges to one.",Efficiency in Collective Decision-Making via Quadratic Transfers,2023-01-16 01:45:19,"Jon X. Eguia, Nicole Immorlica, Steven P. Lalley, Katrina Ligett, Glen Weyl, Dimitrios Xefteris","http://arxiv.org/abs/2301.06206v1, http://arxiv.org/pdf/2301.06206v1",econ.TH
34475,th,"The thing about inflation is that it ravages your income if you don not keep
up with it and you do not know when it will stop.","Turkish Inflation, Private Debt & how to overcome it",2023-01-17 21:28:51,Mahmood Abdullah,"http://arxiv.org/abs/2301.07064v1, http://arxiv.org/pdf/2301.07064v1",econ.TH
34476,th,"We introduce a generalization of the concept of sufficientarianism, intended
to rank allocations involving multiple consumption goods. In ranking
allocations of goods for a fixed society of agents, sufficientarianism posits
that allocations are compared according to the number of individuals whose
consumption is deemed sufficient. We base our analysis on a novel ethical
concept, which we term sufficientarian judgment. Sufficientarian judgment
asserts that if in starting from an allocation in which all agents have
identical consumption, a change in one agent's consumption hurts society, then
there is no change in any other agent's consumption which could subsequently
benefit society. Sufficientarianism is shown to be equivalent to
sufficientarian judgment, symmetry, and separability. We investigate our axioms
in an abstract environment, and in specific economic environments. Finally, we
argue formally that sufficientarian judgment is closely related to the leximin
principle.",Haves and Have-Nots: A Theory of Economic Sufficientarianism,2023-01-20 19:35:44,"Christopher P. Chambers, Siming Ye","http://arxiv.org/abs/2301.08666v2, http://arxiv.org/pdf/2301.08666v2",econ.TH
34477,th,"Complex organizations accomplish tasks through many steps of collaboration
among workers. Corporate culture supports collaborations by establishing norms
and reducing misunderstandings. Because a strong corporate culture relies on
costly, voluntary investments by many workers, we model it as an organizational
public good, subject to standard free-riding problems, which become severe in
large organizations. Our main finding is that voluntary contributions to
culture can nevertheless be sustained, because an organization's equilibrium
productivity is endogenously highly sensitive to individual contributions.
However, the completion of complex tasks is then necessarily fragile to small
shocks that damage the organization's culture.",Corporate Culture and Organizational Fragility,2023-01-21 09:31:08,"Matthew Elliott, Benjamin Golub, Matthieu V. Leduc","http://arxiv.org/abs/2301.08907v1, http://arxiv.org/pdf/2301.08907v1",econ.TH
34478,th,"We provide sufficient conditions under which a utility function may be
recovered from a finite choice experiment. Identification, as is commonly
understood in decision theory, is not enough. We provide a general
recoverability result that is widely applicable to modern theories of choice
under uncertainty. Key is to allow for a monetary environment, in which an
objective notion of monotonicity is meaningful. In such environments, we show
that subjective expected utility, as well as variational preferences, and other
parametrizations of utilities over uncertain acts are recoverable. We also
consider utility recovery in a statistical model with noise and random
deviations from utility maximization.",Recovering utility,2023-01-27 04:56:06,"Christopher P. Chambers, Federico Echenique, Nicolas S. Lambert","http://arxiv.org/abs/2301.11492v1, http://arxiv.org/pdf/2301.11492v1",econ.TH
34479,th,"This paper develops a framework to extend the strategic form analysis of
cursed equilibrium (CE) developed by Eyster and Rabin (2005) to multi-stage
games. The approach uses behavioral strategies rather than normal form mixed
strategies, and imposes sequential rationality. We define cursed sequential
equilibrium (CSE) and compare it to sequential equilibrium and standard
normal-form CE. We provide a general characterization of CSE and establish its
properties. We apply CSE to five applications in economics and political
science. These applications illustrate a wide range of differences between CSE
and Bayesian Nash equilibrium or CE: in signaling games; games with preplay
communication; reputation building; sequential voting; and the dirty faces game
where higher order beliefs play a key role. A common theme in several of these
applications is showing how and why CSE implies systematically different
behavior than Bayesian Nash equilibrium in dynamic games of incomplete
information with private values, while CE coincides with Bayesian Nash
equilibrium for such games.",Cursed Sequential Equilibrium,2023-01-27 23:08:19,"Meng-Jhang Fong, Po-Hsuan Lin, Thomas R. Palfrey","http://arxiv.org/abs/2301.11971v2, http://arxiv.org/pdf/2301.11971v2",econ.TH
34487,th,"The random utility model is one of the most fundamental models in economics.
Falmagne (1978) provides an axiomatization but his axioms can be applied only
when choice frequencies of all alternatives from all subsets are observable. In
reality, however, it is often the case that we do not observe choice
frequencies of some alternatives. For such a dataset, we obtain a finite system
of linear inequalities that is necessary and sufficient for the dataset to be
rationalized by a random utility model. Moreover, the necessary and sufficient
condition is tight in the sense that none of the inequalities is implied by the
other inequalities, and dropping any one of the inequalities makes the
condition not sufficient.",Axiomatization of Random Utility Model with Unobservable Alternatives,2023-02-08 10:11:19,"Haruki Kono, Kota Saito, Alec Sandroni","http://arxiv.org/abs/2302.03913v3, http://arxiv.org/pdf/2302.03913v3",econ.TH
34480,th,"We propose a fair and efficient solution for assigning agents to m posts
subject to congestion, when agents care about both their post and its
congestion. Examples include assigning jobs to busy servers, students to
crowded schools or crowded classes, commuters to congested routes, workers to
crowded office spaces or to team projects, etc. Congestion is anonymous (it
only depends on the number n of agents in a given post). A canonical
interpretation of ex ante fairness allows each agent to choose m post-specific
caps on the congestion they tolerate: these requests are mutually feasible if
and only if the sum of the caps is n. For ex post fairness we impose a
competitive requirement close to envy freeness: taking the congestion profile
as given each agent is assigned to one of her best posts. If a competitive
assignment exists, it delivers unique congestion and welfare profiles and is
also efficient and ex ante fair. In a fractional (randomised or time sharing)
version of our model, a unique competitive congestion profile always exists. It
is approximately implemented by a mixture of ex post deterministic assignments:
with an approximation factor equal to the largest utility loss from one more
unit of congestion, the latter deliver identical welfare profiles and are
weakly efficient. Our approach to ex ante fairness generalises to the model
where each agent's congestion is weighted. Now the caps on posts depend only
upon own weight and total congestion, not on the number of other agents
contributing to it. Remarkably in both models these caps are feasible if and
only if they give to each agent the right to veto all but (1/m) of their
feasible allocations.",The congested assignment problem,2023-01-28 14:21:04,"Anna Bogomolnaia, Herve Moulin","http://arxiv.org/abs/2301.12163v3, http://arxiv.org/pdf/2301.12163v3",econ.TH
34481,th,"We study the set of incentive compatible and efficient two-sided matching
mechanisms. We classify all such mechanisms under an additional assumption --
""gender-neutrality"" -- which guarantees that the two sides be treated
symmetrically. All group strategy-proof, efficient and gender-neutral
mechanisms are recursive and the outcome is decided in a sequence of rounds. In
each round, two agents are selected, one from each side. These agents are
either ""matched-by-default"" or ""unmatched-by-default."" In the former case
either of the selected agents can unilaterally force the other to match with
them while in the latter case they may only match together if both agree. In
either case, if this pair of agents is not matched together, each gets their
top choice among the set of remaining agents. As an important step in the
characterization, we first show that in one-sided matching all group
strategy-proof and efficient mechanisms are sequential dictatorships. An
immediate corollary is that there are no individually rational, group
strategy-proof and efficient one-sided matching mechanisms.","Royal Processions: Incentives, Efficiency and Fairness in Two-sided Matching",2023-01-30 19:28:42,"Sophie Bade, Joseph Root","http://arxiv.org/abs/2301.13037v1, http://arxiv.org/pdf/2301.13037v1",econ.TH
34482,th,"Firms have access to abundant data on market participants. They use these
data to target contracts to agents with specific characteristics, and describe
these contracts in opaque terms. In response to such practices, recent proposed
regulations aim to increase transparency, especially in digital markets. In
order to understand when opacity arises in contracting and the potential
effects of proposed regulations, we study a moral hazard model in which a
risk-neutral principal faces a continuum of weakly risk-averse agents. The
agents differ in an observable characteristic that affects the payoff of the
principal. In a described contract, the principal sorts the agents into groups,
and to each group communicates a distribution of output-contingent payments.
Within each group, the realized distribution of payments must be consistent
with the communicated contract. A described contract is transparent if the
principal communicates the realized contract to the agent ex-ante, and
otherwise it is opaque. We provide a geometric characterization of the
principal's optimal described contract as well as conditions under which the
optimal described mechanism is transparent and opaque. We apply our results to
the design and description of driver payment schemes on ride-hailing platforms.",Opaque Contracts,2023-01-31 07:45:57,"Andreas Haupt, Zoe Hitzig","http://arxiv.org/abs/2301.13404v1, http://arxiv.org/pdf/2301.13404v1",econ.TH
34483,th,"We consider a nonlinear pricing environment with private information. We
provide profit guarantees (and associated mechanisms) that the seller can
achieve across all possible distributions of willingness to pay of the buyers.
With a constant elasticity cost function, constant markup pricing provides the
optimal revenue guarantee across all possible distributions of willingness to
pay and the lower bound is attained under a Pareto distribution. We
characterize how profits and consumer surplus vary with the distribution of
values and show that Pareto distributions are extremal. We also provide a
revenue guarantee for general cost functions. We establish equivalent results
for optimal procurement policies that support maximal surplus guarantees for
the buyer given all possible cost distributions of the sellers.",The Optimality of Constant Mark-Up Pricing,2023-01-31 21:11:07,"Dirk Bergemann, Tibor Heumann, Stephen Morris","http://arxiv.org/abs/2301.13827v1, http://arxiv.org/pdf/2301.13827v1",econ.TH
34484,th,"Many bounded rationality approaches discussed in the literature are models of
limited consideration. We provide a novel representation and data
interpretation for some of the analyzed behavioral patterns. Moreover, we
characterize a testable choice procedure that allows the experimenter to
uniquely infer limited consideration from irrational features of the observed
behavior.",From bounded rationality to limited consideration: Representation and behavioral analysis,2023-02-02 12:56:54,"Davide Carpentiere, Angelo Petralia","http://arxiv.org/abs/2302.00978v5, http://arxiv.org/pdf/2302.00978v5",econ.TH
34485,th,"We characterize the extreme points of first-order stochastic dominance (FOSD)
intervals and show how these intervals are at the heart of many topics in
economics. Using these extreme points, we characterize the distributions of
posterior quantiles, leading to an analog of a classical result regarding the
distributions of posterior means. We apply this analog to various subjects,
including the psychology of judgement, political economy, and Bayesian
persuasion. In addition, FOSD intervals provide a common structure to security
design. We use the extreme points to unify and generalize seminal results in
that literature when either adverse selection or moral hazard pertains.",Extreme Points and First-Order Stochastic Dominance: Theory and Applications,2023-02-07 00:47:56,"Kai Hao Yang, Alexander K. Zentefis","http://arxiv.org/abs/2302.03135v2, http://arxiv.org/pdf/2302.03135v2",econ.TH
34486,th,"We consider the problem of how to regulate an oligopoly when firms have
private information about their costs. In the environment, consumers make
discrete choices over goods, and minimal structure is placed on the manner in
which firms compete. In the optimal regulatory policy, the regulator need only
solicit prices from firms, and based on those prices, charge them taxes or give
them subsidies, and impose on each firm a ``yardstick'' price cap that depends
on the posted prices of competing firms.",Regulating Oligopolistic Competition,2023-02-07 04:34:05,"Kai Hao Yang, Alexander K. Zentefis","http://arxiv.org/abs/2302.03185v2, http://arxiv.org/pdf/2302.03185v2",econ.TH
34488,th,"The (static) utility maximization model of Afriat (1967), which is the
standard in analysing choice behavior, is under scrutiny. We propose the
Dynamic Random Utility Model (DRUM) that is more flexible than the framework of
Afriat (1967) and more informative than the static Random Utility Model (RUM)
framework of McFadden and Richter (1990). Under DRUM, each decision-maker
randomly draws a utility function in each period and maximizes it subject to a
menu. DRUM allows for unrestricted time correlation and cross-section
heterogeneity in preferences. We characterize DRUM for situations when panel
data on choices and menus are available. DRUM is linked to a finite mixture of
deterministic behaviors that can be represented as a product of static
rationalizable behaviors. This link allows us to convert the characterizations
of the static RUM to its dynamic form. In an application, we find that although
the static utility maximization model fails to explain population behavior,
DRUM can explain it.",Dynamic and Stochastic Rational Behavior,2023-02-09 06:04:38,"Nail Kashaev, Victor H. Aguiar, Martin Plávala, Charles Gauthier","http://arxiv.org/abs/2302.04417v3, http://arxiv.org/pdf/2302.04417v3",econ.TH
34489,th,"We study consumption dependence in the context of random utility and repeated
choice. We show that, in the presence of consumption dependence, the random
utility model is a misspecified model of repeated rational choice. This
misspecification leads to biased estimators and failures of standard random
utility axioms. We characterize exactly when and by how much the random utility
model is misspecified when utilities are consumption dependent. As one possible
solution to this problem, we consider time disaggregated data. We offer a
characterization of consumption dependent random utility when we observe time
disaggregated data. Using this characterization, we develop a hypothesis test
for consumption dependent random utility that offers computational improvements
over the natural extension of Kitamura and Stoye (2018) to our setting.","Random Utility, Repeated Choice, and Consumption Dependence",2023-02-12 01:58:32,Christopher Turansick,"http://arxiv.org/abs/2302.05806v5, http://arxiv.org/pdf/2302.05806v5",econ.TH
34490,th,"We explore a model of duopolistic competition in which consumers learn about
the fit of each competitor's product. In equilibrium, consumers comparison
shop: they learn only about the relative values of the products. When
information is cheap, increasing the cost of information decreases consumer
welfare; but when information is expensive, this relationship flips. As
information frictions vanish, there is a limiting equilibrium that is ex post
efficient.",Comparison Shopping: Learning Before Buying From Duopolists,2023-02-13 21:33:28,"Brian C. Albrecht, Mark Whitmeyer","http://arxiv.org/abs/2302.06580v2, http://arxiv.org/pdf/2302.06580v2",econ.TH
34491,th,"Agents may form coalitions. Each coalition shares its endowment among its
agents by applying a sharing rule. The sharing rule induces a coalition
formation problem by assuming that agents rank coalitions according to the
allocation they obtain in the corresponding sharing problem. We characterize
the sharing rules that induce a class of stable coalition formation problems as
those that satisfy a natural axiom that formalizes the principle of solidarity.
Thus, solidarity becomes a sufficient condition to achieve stability.",Solidarity to achieve stability,2023-02-15 15:19:01,"Jorge Alcalde-Unzu, Oihane Gallo, Elena Inarra, Juan D. Moreno-Ternero","http://arxiv.org/abs/2302.07618v2, http://arxiv.org/pdf/2302.07618v2",econ.TH
34492,th,"Real-world observed contests often take the form of multi-task contests
rather than single-task contests, and existing theories are insufficient to
explain the incentive for extending the task dimension. This paper proposes a
new effect of multi-task contests compared to single-task contests: the
specialization effect (SE). By establishing a multi-task contest model with
heterogeneous competitor costs, this paper shows that after the new
competition dimension is added, competitors will choose the dimension with the
greater relative comparative advantage rather than absolute advantage and exert
more effort, which eventually leads to competitors choosing higher effort levels in
both the original dimension and the extended dimension. The paper then uses a
staggered Difference-in-Differences (DID) method on China's county officials'
promotion assessments from 2001 to 2022 as an entry point to discuss the
empirical evidence for the specialization effect. Through models and empirical
studies, the specialization effect studied in this paper does exist in promotion
assessments, and may also explain many other real-world scenarios, such as
sports events, competition between corporate compensation and employee benefits,
and competition for R&D expenses.",Contest in Multitasking: An Evidence from Chinese County Officials' Promotion Assessment,2023-02-17 07:48:37,Yuanhao Zhang,"http://arxiv.org/abs/2302.08691v1, http://arxiv.org/pdf/2302.08691v1",econ.TH
34493,th,"This paper builds a rule for decisionmaking from the physical behavior of
single neurons, the well established neural circuitry of mutual inhibition, and
the evolutionary principle of natural selection. No axioms are used in the
derivation of this rule. The paper provides a microfoundation to both Economics
Choice Theory and Cognitive Psychologys Response Times Theory. The paper finds
how classical expected utility should be modified to account for much
neuroscientific evidence, and how neuroscientific correlates of choice should
be linked to utility. In particular, the model implies the concept of utility
is a network property and cannot be calculated as a function of frequencies in
one layer of neurons alone; it requires understanding how different layers work
together. The resulting rule is simple enough to model markets and games as is
customary in the social sciences. Utility maximization and inaction are
endogenous to the model, cardinality and independence of irrelevant
alternatives, properties present in classical and random utility theories, are
only met in limiting situations.",Microfoundations of Expected Utility and Response Times,2023-02-18 23:56:30,"Valdes Salvador, Gonzalo ValdesEdwards","http://arxiv.org/abs/2302.09421v1, http://arxiv.org/pdf/2302.09421v1",econ.TH
34494,th,"This paper analyzes a multiplayer war of attrition game with asymmetric type
distributions in the setting of private provision of an indivisible public
good. In the unique equilibrium, asymmetry leads to a stratified behavior
pattern where one agent provides the good instantly with a positive
probability, while each of the others has no probability of provision before a
certain moment and this moment is idiosyncratic for different agents.
Comparative statics show that an agent with less patience, lower cost of
provision, and higher reputation in valuation provides the good uniformly
faster. The cost of delay is mainly determined by the strongest type of the
instant-exit agent. This paper investigates two types of variations of
asymmetry: raising the strongest type tends to improve efficiency, whereas
controlling the strongest type associates the effect of asymmetry with the sign
of an intuitive measure of ""the cost of symmetry"".",Multiplayer War of Attrition with Asymmetric Private Information,2023-02-19 00:42:08,"Hongcheng Li, Jialu Zhang","http://arxiv.org/abs/2302.09427v2, http://arxiv.org/pdf/2302.09427v2",econ.TH
34496,th,"This paper studies how the firm designs its internal information system when
facing an adverse selection problem arising from unobservable managerial
abilities. While more precise information allows the firm to make ex-post more
efficient investment decisions, noisier information has an ex-ante screening
effect that allows the firm to attract on-average better managers. The tradeoff
between more effective screening of managers and more informed investment
implies a non-monotonic relationship between firm value and information
quality, and a marginal improvement of information quality does not necessarily
lead to an overall improvement of firm value.",Ignorance Is Bliss: The Screening Effect of (Noisy) Information,2023-02-22 06:49:25,"Felix Zhiyu Feng, Wenyu Wang, Yufeng Wu, Gaoqing Zhang","http://arxiv.org/abs/2302.11128v2, http://arxiv.org/pdf/2302.11128v2",econ.TH
34497,th,"All possible types of deterministic choice behavior are classified by their
degree of irrationality. This classification is performed in three steps: (1)
select a benchmark of rationality, for which this degree is zero; (2) endow the
set of choices with a metric to measure deviations from rationality; and (3)
compute the distance of any choice behavior from the selected benchmark. The
natural candidate for step 1 is the family of all rationalizable behaviors. A
possible candidate for step 2 is a suitable variation of the metric described
by Klamler (2008), which displays a sharp discerning power among different
types of choice behaviors. In step 3 we use this new metric to establish the
minimum distance of any choice behavior from the benchmark of rationality.
Finally we describe a measure of stochastic irrationality, which employs the
random utility model as a benchmark of rationality, and the Block-Marschak
polynomials to measure deviations from it.",A rational measure of irrationality,2023-02-27 13:49:00,"Davide Carpentiere, Alfio Giarlotta, Stephen Watson","http://arxiv.org/abs/2302.13656v2, http://arxiv.org/pdf/2302.13656v2",econ.TH
34498,th,"There are updating rules other than Bayes' law that render the value of
information positive. I find all of them.","Bayes = Blackwell, Almost",2023-02-27 19:56:39,Mark Whitmeyer,"http://arxiv.org/abs/2302.13956v4, http://arxiv.org/pdf/2302.13956v4",econ.TH
34499,th,"This paper presents a model with the aim to follow, as closely as possible,
the rationale of the macroeconomic model advanced by J.M. Keynes in his famous
""The General Theory of Employment, Interest and Money"", in order to provide a
viable tool for macroeconomic research. Keynes' main result will be shown,
i.e., to determine the level of income and employment starting from the
marginal efficiency of capital and the marginal propensity to consume, given
the interest rate. Elements of the model will be described by referring to the
original text. The sequentiality in model operation will prove quintessential
in order to describe the complex nature of macroeconomic systems.","Mr.Keynes and the... Complexity! A suggested agent-based version of the General Theory of Employment, Interest and Money",2023-03-02 04:13:18,Alessio Emanuele Biondo,"http://arxiv.org/abs/2303.00889v3, http://arxiv.org/pdf/2303.00889v3",econ.TH
34500,th,"Path independence is arguably one of the most important choice rule
properties in economic theory. We show that a choice rule is path independent
if and only if it is rationalizable by a utility function satisfying ordinal
concavity, a concept closely related to concavity notions in discrete
mathematics. We also provide a representation result for choice rules that
satisfy path independence and the law of aggregate demand.",Representation Theorems for Path-Independent Choice Rules,2023-03-02 04:20:04,"Koji Yokote, Isa E. Hafalir, Fuhito Kojima, M. Bumin Yenmez","http://arxiv.org/abs/2303.00892v1, http://arxiv.org/pdf/2303.00892v1",econ.TH
34501,th,"Attention has recently been focused on the possibility of artificially
intelligent sellers on platforms colluding to limit output and raise prices.
Such arrangements (cartels), however, feature an incentive for individual
sellers to deviate to a lower price (cheat) to increase their own profits.
Stabilizing such cartels therefore requires credible threats of punishments,
such as price wars. In this paper, I propose a mechanism to destabilize cartels
by protecting any cheaters from a price war by guaranteeing a stream of profits
which is unaffected by arbitrary punishments, only if such punishments actually
occur. Equilibrium analysis of the induced game predicts a reversion to
repeated static Nash pricing. When implemented in a reinforcement learning
framework, it provides substantial reductions in prices (reducing markups by
40% or more), without affecting product variety or requiring the platform to
make any payments on path. This mechanism applies to both the sale of
differentiated goods on platforms, and the sale of homogeneous goods through
direct sales. The mechanism operates purely off-path, thereby inducing no
welfare losses in practice, and does not depend on the choice of discount
factors.",Combating Algorithmic Collusion: A Mechanism Design Approach,2023-03-05 07:11:28,Soumen Banerjee,"http://arxiv.org/abs/2303.02576v2, http://arxiv.org/pdf/2303.02576v2",econ.TH
34502,th,"Empirical welfare analyses often impose stringent parametric assumptions on
individuals' preferences and neglect unobserved preference heterogeneity. In
this paper, we develop a framework to conduct individual and social welfare
analysis for discrete choice that does not suffer from these drawbacks. We
first adapt the class of individual welfare measures introduced by Fleurbaey
(2009) to settings where individual choice is discrete. Allowing for
unrestricted, unobserved preference heterogeneity, these measures become random
variables. We then show that the distribution of these objects can be derived
from choice probabilities, which can be estimated nonparametrically from
cross-sectional data. In addition, we derive nonparametric results for the
joint distribution of welfare and welfare differences, as well as for social
welfare. The former is an important tool in determining whether the winners of
a price change belong disproportionately to those groups who were initially
well-off.",Identifying the Distribution of Welfare from Discrete Choice,2023-03-05 14:22:24,"Bart Capéau, Liebrecht De Sadeleer, Sebastiaan Maes","http://arxiv.org/abs/2303.02645v1, http://arxiv.org/pdf/2303.02645v1",econ.TH
34503,th,"We consider the problem of extending an acyclic binary relation that is
invariant under a given family of transformations into an invariant preference.
We show that when a family of transformations is commutative, every acyclic
invariant binary relation extends. We find that, in general, the set of
extensions agree on the ranking of many pairs that (i) are unranked by the
original relation, and (ii) cannot be ranked by invariance or transitivity
considerations alone. We interpret these additional implications as the
out-of-sample predictions generated by invariance, and study their structure.",A Note on Invariant Extensions of Preorders,2023-03-08 14:40:49,"Peter Caradonna, Christopher P. Chambers","http://arxiv.org/abs/2303.04522v1, http://arxiv.org/pdf/2303.04522v1",econ.TH
34504,th,"Decision to participate in education depends on the circumstances individual
inherits and on the returns to education she expects as well. If one person
from any socio-economically disadvantaged social group inherits poor
circumstances measured in terms of family background, then she is having poor
opportunities vis-\`a-vis her capability set becomes confined. Accordingly, her
freedom to choose the best alternative from many is also less, and she fails to
expect the potential returns from educational participation. Consequently, a
complementary relationship between the circumstances one inherits and the
returns to education she expects can be observed. This paper is an attempt to
look at this complementarity on the basis of theoretical logic and empirical
investigation, which enables us to unearth the origin of inter-group disparity
in educational participation, as is existed across the groups defined by taking
caste and gender together in Indian society. Furthermore, in the second piece
of analysis, we assess the discrimination in the likelihood of educational
participation by invoking the method of decomposition of disparity in the
likelihood of educational participation applicable in the logistic regression
models, which enables us to re-establish the earlier mentioned complementary
relationship.",Complementarity in Demand-side Variables and Educational Participation,2023-02-20 16:36:08,"Anjan Ray Chaudhury, Dipankar Das, Sreemanta Sarkar","http://arxiv.org/abs/2303.04647v2, http://arxiv.org/pdf/2303.04647v2",econ.TH
34505,th,"Existing models of rational pure bubble models feature multiple (and often a
continuum of) equilibria, which makes model predictions and policy analysis
non-robust. We show that when the interest rate in the fundamental equilibrium
is below the economic growth rate ($R<G$), a bubbly equilibrium with $R=G$
exists. By injecting dividends that are vanishingly small relative to aggregate
income to the bubble asset, we can eliminate the fundamental steady state and
resolve equilibrium indeterminacy. We show the general applicability of
dividend injection through examples in overlapping generations and
infinite-horizon models with or without production or financial frictions.",Unique Equilibria in Models of Rational Asset Price Bubbles,2023-03-10 03:48:32,"Tomohiro Hirano, Alexis Akira Toda","http://arxiv.org/abs/2303.05636v1, http://arxiv.org/pdf/2303.05636v1",econ.TH
34506,th,"We analyze the problem of locating a public facility in a domain of
single-peaked and single-dipped preferences when the social planner knows the
type of preference (single-peaked or single-dipped) of each agent. Our main
result characterizes all strategy-proof rules and shows that they can be
decomposed into two steps. In the first step, the agents with single-peaked
preferences are asked about their peaks and, for each profile of reported
peaks, at most two alternatives are preselected. In the second step, the agents
with single-dipped preferences are asked to reveal their dips to complete the
decision between the preselected alternatives. Our result generalizes the
findings of Moulin (1980) and Barber\`a and Jackson (1994) for single-peaked
and of Manjunath (2014) for single-dipped preferences. Finally, we show that
all strategy-proof rules are also group strategy-proof and analyze the
implications of Pareto efficiency.",Strategy-proofness with single-peaked and single-dipped preferences,2023-03-10 11:32:35,"Jorge Alcalde-Unzu, Oihane Gallo, Marc Vorsatz","http://arxiv.org/abs/2303.05781v1, http://arxiv.org/pdf/2303.05781v1",econ.TH
34507,th,"This paper studies a general class of social choice problems in which agents'
payoff functions (or types) are privately observable random variables, and
monetary transfers are not available. We consider cardinal social choice
functions which may respond to agents' preference intensities as well as
preference rankings. We show that a social choice function is ex ante Pareto
efficient and Bayesian incentive compatible if and only if it is dictatorial.
The result holds for arbitrary numbers of agents and alternatives, and under a
fairly weak assumption on the joint distribution of types, which allows for
arbitrary correlations and asymmetries.",A General Impossibility Theorem on Pareto Efficiency and Bayesian Incentive Compatibility,2023-03-10 18:00:43,"Kazuya Kikuchi, Yukio Koriyama","http://arxiv.org/abs/2303.05968v1, http://arxiv.org/pdf/2303.05968v1",econ.TH
34508,th,"We study the bounds of mediated communication in sender-receiver games in
which the sender's payoff is state-independent. We show that the feasible
distributions over the receiver's beliefs under mediation are those that induce
zero correlation, but not necessarily independence, between the sender's payoff
and the receiver's belief. Mediation attains the upper bound on the sender's
value, i.e., the Bayesian persuasion value, if and only if this value is
attainable under unmediated communication, i.e., cheap talk. The lower bound is
given by the cheap talk payoff. We provide a geometric characterization of when
mediation strictly improves on cheap talk, using the quasiconcave and quasiconvex
envelopes of the sender's value function. In canonical environments, mediation
is strictly valuable when the sender has countervailing incentives in the space
of the receiver's belief. We apply our results to asymmetric-information
settings such as bilateral trade and lobbying and explicitly construct
mediation policies that increase the surplus of the informed and uninformed
parties with respect to unmediated communication.",The Bounds of Mediated Communication,2023-03-11 02:33:17,"Roberto Corrao, Yifan Dai","http://arxiv.org/abs/2303.06244v2, http://arxiv.org/pdf/2303.06244v2",econ.TH
34509,th,"We introduce and characterize inertial updating of beliefs. Under inertial
updating, a decision maker (DM) chooses a belief that minimizes the subjective
distance between their prior belief and the set of beliefs consistent with the
observed event. Importantly, by varying the subjective notion of distance,
inertial updating provides a unifying framework that nests three different
types of belief updating: (i) Bayesian updating, (ii) non-Bayesian updating
rules, and (iii) updating rules for events with zero probability, including the
conditional probability system (CPS) of Myerson (1986a,b). We demonstrate that
our model is behaviorally equivalent to the Hypothesis Testing model (HT) of
Ortoleva (2012), clarifying the connection between HT and CPS. We apply our
model to a persuasion game.",Inertial Updating,2023-03-11 10:47:54,"Adam Dominiak, Matthew Kovach, Gerelt Tserenjigmid","http://arxiv.org/abs/2303.06336v2, http://arxiv.org/pdf/2303.06336v2",econ.TH
34510,th,"We present the proof-of-concept for minimalist market design (S\""{o}nmez,
2023) as an effective methodology to enhance an institution based on the
desiderata of stakeholders with minimal interference. Four
objectives (respecting merit, increasing retention, aligning talent, and
enhancing trust) guided reforms to the US Army's centralized branching process
of cadets to military specialties since 2006. USMA's mechanism for the Class of
2020 exacerbated challenges implementing these objectives. Formulating the
Army's desiderata as rigorous axioms, we analyze their implications. Under our
minimalist approach to institution redesign, the Army's objectives uniquely
identify a branching mechanism. Our design is now adopted at USMA and ROTC.",Redesigning the US Army's Branching Process: A Case Study in Minimalist Market Design,2023-03-12 07:10:12,"Kyle Greenberg, Parag A. Pathak, Tayfun Sönmez","http://arxiv.org/abs/2303.06564v2, http://arxiv.org/pdf/2303.06564v2",econ.TH
34511,th,"This paper adapts ideas from social identity theory to set out a new
framework for modelling conspicuous consumption. Notably, this approach can
explain two stylised facts about conspicuous consumption that initially seem at
odds with one another, and to date have required different families of models
to explain each: (1) people consume more visible goods when their neighbours'
incomes rise, but (2) consume less visible goods when incomes of those with the
same race in a wider geographic area rise. The first fact is typically
explained by `Keeping up with the Joneses' models, and the second by signalling
models. Our model also explains related features of conspicuous consumption:
that the rich are more sensitive to others' incomes than the poor, and that the
effect of income inequality on consumption differs qualitatively across groups.
Importantly, it explains this fourth stylised fact without falling back on
differences in preferences across groups, as required in other models. In
addition, our model delivers new testable predictions regarding the role of
network structure and income inequality for conspicuous consumption.",Status substitution and conspicuous consumption,2023-03-13 14:06:46,"Alastair Langtry, Christian Ghinglino","http://arxiv.org/abs/2303.07008v2, http://arxiv.org/pdf/2303.07008v2",econ.TH
34512,th,"We study a model in which two players with opposing interests try to alter a
status quo through instability-generating actions. We show that instability can
be used to secure longer-term durable changes, even if it is costly to generate
and does not generate short-term gains. In equilibrium, instability generated
by a player decreases when the status quo favors them more. Equilibrium always
exhibits a region of stable states in which the status quo persists. As
players' threat power increases, this region shrinks, ultimately collapsing to
a single stable state that is supported via a deterrence mechanism. There is
long-run path-dependency and inequity: although instability eventually leads to
a stable state, it typically selects the least favorable one for the initially
disadvantaged player.",The Dynamics of Instability,2023-03-13 20:02:31,"César Barilla, Duarte Gonçalves","http://arxiv.org/abs/2303.07285v1, http://arxiv.org/pdf/2303.07285v1",econ.TH
34513,th,"This note presents a unified theorem of the alternative that explicitly
allows for any combination of equality, componentwise inequality, weak
dominance, strict dominance, and nonnegativity relations. The theorem nests 60
special cases, some of which have been stated as separate theorems.",A Unified Theorem of the Alternative,2023-03-14 00:15:24,Ian Ball,"http://arxiv.org/abs/2303.07471v1, http://arxiv.org/pdf/2303.07471v1",econ.TH
34514,th,"The axiomatic foundations of Bentham and Rawls solutions are discussed within
the broader domain of cardinal preferences. It is unveiled that both solution
concepts share all four of the following axioms: Nonemptiness, Anonymity,
Unanimity, and Continuity. In order to fully characterize the Bentham and Rawls
solutions, three variations of a consistency criterion are introduced and their
compatibility with the other axioms is assessed. Each expression of consistency
can be interpreted as a property of decision-making in uncertain environments.",A Story of Consistency: Bridging the Gap between Bentham and Rawls Foundations,2023-03-14 00:48:30,"Stéphane Gonzalez, Nikolaos Pnevmatikos","http://arxiv.org/abs/2303.07488v2, http://arxiv.org/pdf/2303.07488v2",econ.TH
34515,th,"Our goal is to develop a partial ordering method for comparing stochastic
choice functions on the basis of their individual rationality. To this end, we
assign to any stochastic choice function a one-parameter class of deterministic
choice correspondences, and then check for the rationality (in the sense of
revealed preference) of these correspondences for each parameter value. The
resulting ordering detects violations of (stochastic) transitivity as well as
inconsistencies between choices from nested menus. We obtain a parameter-free
characterization and apply it to some popular stochastic choice models. We also
provide empirical applications in terms of two well-known choice experiments.",Measuring Stochastic Rationality,2023-03-14 22:42:44,"Efe A. Ok, Gerelt Tserenjigmid","http://arxiv.org/abs/2303.08202v2, http://arxiv.org/pdf/2303.08202v2",econ.TH
34516,th,"This paper studies a delegation problem faced by the planner who wants to
regulate receivers' reaction choices in markets for matching between receivers
and senders with signaling. We provide a novel insight into the planner's
willingness to delegate and the design of optimal (reaction) interval
delegation as a solution to the planner's general mechanism design problem. The
relative heterogeneity of receiver types and the productivity of the sender's
signal are crucial in deriving optimal interval delegation in the presence of
the trade-off between matching efficiency and signaling costs.",Optimal Delegation in Markets for Matching with Signaling,2023-03-16 18:48:22,"Seungjin Han, Alex Sam, Youngki Shin","http://arxiv.org/abs/2303.09415v1, http://arxiv.org/pdf/2303.09415v1",econ.TH
34517,th,"I study the optimal provision of information in a long-term relationship
between a sender and a receiver. The sender observes a persistent, evolving
state and commits to send signals over time to the receiver, who sequentially
chooses public actions that affect the welfare of both players. I solve for the
sender's optimal policy in closed form: the sender reports the value of the
state with a delay that shrinks over time and eventually vanishes. Even when
the receiver knows the current state, the sender retains leverage by
threatening to conceal the future evolution of the state.",Dynamic Information Provision: Rewarding the Past and Guiding the Future,2023-03-17 01:41:30,Ian Ball,"http://arxiv.org/abs/2303.09675v1, http://arxiv.org/pdf/2303.09675v1",econ.TH
34518,th,"We model a public goods game with groups, position uncertainty, and
observational learning. Contributions are simultaneous within groups, but
groups play sequentially based on their observation of an incomplete sample of
past contributions. We show that full cooperation between and within groups is
possible with self-interested players on a fixed horizon. Position uncertainty
implies the existence of an equilibrium where groups of players conditionally
cooperate in the hope of influencing further groups. Conditional cooperation
implies that each group member is pivotal, so that efficient simultaneous
provision within groups is an equilibrium.",Efficient Public Good Provision in a Multipolar World,2023-03-19 02:01:52,"Chowdhury Mohammad Sakib Anwar, Jorge Bruno, Renaud Foucart, Sonali SenGupta","http://arxiv.org/abs/2303.10514v2, http://arxiv.org/pdf/2303.10514v2",econ.TH
34519,th,"We introduce the notion of a multidimensional hybrid preference domain on a
(finite) set of alternatives that is a Cartesian product of finitely many
components. We demonstrate that in a model of public goods provision,
multidimensional hybrid preferences arise naturally through assembling marginal
preferences under the condition of semi-separability - a weakening of
separability. The main result shows that under a suitable ""richness"" condition,
every strategy-proof rule on this domain can be decomposed into component-wise
strategy-proof rules, and more importantly every domain of preferences that
reconciles decomposability of rules with strategy-proofness must be a
multidimensional hybrid domain.",Decomposability and Strategy-proofness in Multidimensional Models,2023-03-20 09:13:41,"Shurojit Chatterji, Huaxia Zeng","http://arxiv.org/abs/2303.10889v4, http://arxiv.org/pdf/2303.10889v4",econ.TH
34520,th,"We analyze equilibrium housing prices in an overlapping generations model
with perfect housing and rental markets. We prove that the economy exhibits a
two-stage phase transition: as the income of home buyers rises, the equilibrium
regime changes from fundamental only to coexistence of fundamental and bubbly
equilibria. With even higher incomes, fundamental equilibria disappear and
housing bubbles become inevitable. Even with low current incomes, housing
bubbles may emerge if home buyers have access to credit or have high future
income expectations. Contrary to widely-held beliefs, fundamental equilibria in
the coexistence region are inefficient despite housing being a productive
non-reproducible asset.",A Theory of Rational Housing Bubbles with Phase Transitions,2023-03-20 21:07:22,"Tomohiro Hirano, Alexis Akira Toda","http://arxiv.org/abs/2303.11365v2, http://arxiv.org/pdf/2303.11365v2",econ.TH
34521,th,"The unit value of a commodity that Michio Morishima's method and its
variations enable to determine correctly, is the sum of the value of the
commodities it contains (inputs) and the quantity of labor required for its
production. However, goods are sold at their market production price only when
they meet a solvent social need that involves the entire economy with its
interconnections between the different industrial sectors. This condition gives
full meaning to Marx's fundamental equalities, which derive from the law of
value and constitute invariants that apply to the economy as a whole. These
equalities are necessary to determine market production prices. We demonstrate
that they also enable us to solve the transformation problem for a simple
reproduction system without fixed capital by starting from Morishima's
formalism and returning to a formalism closer to that used by Marx.",Both invariant principles implied by Marx's law of value are necessary and sufficient to solve the transformation problem through Morishima's formalism,2023-03-21 00:55:29,"Norbert Ankri, Païkan Marcaggi","http://arxiv.org/abs/2303.11471v1, http://arxiv.org/pdf/2303.11471v1",econ.TH
34522,th,"We study how to optimally design selection mechanisms, accounting for agents'
investment incentives. A principal wishes to allocate a resource of homogeneous
quality to a heterogeneous population of agents. The principal commits to a
possibly random selection rule that depends on a one-dimensional characteristic
of the agents she intrinsically values. Agents have a strict preference for
being selected by the principal and may undertake a costly investment to
improve their characteristic before it is revealed to the principal. We show
that even if random selection rules foster agents' investments, especially at
the top of the characteristic distribution, deterministic ""pass-fail"" selection
rules are in fact optimal.",Non-Market Allocation Mechanisms: Optimal Design and Investment Incentives,2023-03-21 15:40:06,"Victor Augias, Eduardo Perez-Richet","http://arxiv.org/abs/2303.11805v1, http://arxiv.org/pdf/2303.11805v1",econ.TH
34523,th,"Random serial dictatorship (RSD) is a randomized assignment rule that - given
a set of $n$ agents with strict preferences over $n$ houses - satisfies equal
treatment of equals, ex post efficiency, and strategyproofness. For $n \le 3$,
Bogomolnaia and Moulin (2001) have shown that RSD is characterized by these
axioms. Extending this characterization to arbitrary $n$ is a long-standing
open problem. By weakening ex post efficiency and strategyproofness, we reduce
the question of whether RSD is characterized by these axioms for fixed $n$ to
determining whether a matrix has rank $n^2 n!^n$. We leverage this insight to
prove the characterization for $n \le 5$ with the help of a computer. We also
provide computer-generated counterexamples to show that two other approaches
for proving the characterization (using deterministic extreme points or
restricted domains of preferences) are inadequate.",Towards a Characterization of Random Serial Dictatorship,2023-03-21 19:08:32,"Felix Brandt, Matthias Greger, René Romen","http://arxiv.org/abs/2303.11976v3, http://arxiv.org/pdf/2303.11976v3",econ.TH
34524,th,"In games with incomplete and ambiguous information, rational behavior depends
not only on fundamental ambiguity (ambiguity about states) but also on
strategic ambiguity (ambiguity about others' actions). We study the impacts of
strategic ambiguity in global games and demonstrate the distinct effects of
ambiguous-quality and low-quality information. Ambiguous-quality information
makes more players choose an action yielding a constant payoff, whereas
(unambiguous) low-quality information makes more players choose an ex-ante best
response to the uniform belief over the opponents' actions. If the ex-ante
best-response action yields a constant payoff, sufficiently ambiguous-quality
information induces a unique equilibrium, whereas sufficiently low-quality
information generates multiple equilibria. In applications to financial crises,
we show that news of more ambiguous quality triggers a debt rollover crisis,
whereas news of less ambiguous quality triggers a currency crisis.",Strategic Ambiguity in Global Games,2023-03-22 05:14:04,Takashi Ui,"http://arxiv.org/abs/2303.12263v2, http://arxiv.org/pdf/2303.12263v2",econ.TH
34525,th,"We study the set of feasible payoffs of OLG repeated games. We first provide
a complete characterization of the feasible payoffs. Second, we provide a novel
comparative statics of the feasible payoff set with respect to players'
discount factor and the length of interaction. Perhaps surprisingly, the
feasible payoff set becomes smaller as the players' discount factor approaches
to one.",Characterizing the Feasible Payoff Set of OLG Repeated Games,2023-03-23 04:43:19,"Daehyun Kim, Chihiro Morooka","http://arxiv.org/abs/2303.12988v2, http://arxiv.org/pdf/2303.12988v2",econ.TH
34526,th,"We study a general credence goods model with N problem types and N
treatments. Communication between the expert seller and the client is modeled
as cheap talk. We find that the expert's equilibrium payoffs admit a geometric
characterization, described by the quasiconcave envelope of his belief-based
profit function under discriminatory pricing. We establish the existence of
client-worst equilibria, apply the geometric characterization to previous
research on credence goods, and provide a necessary and sufficient condition
for when communication benefits the expert. For the binary case, we solve for
all equilibria and characterize the client's possible welfare among all equilibria.",Information transmission in monopolistic credence goods markets,2023-03-23 17:26:58,"Xiaoxiao Hu, Haoran Lei","http://arxiv.org/abs/2303.13295v2, http://arxiv.org/pdf/2303.13295v2",econ.TH
34527,th,"We consider sequential search by an agent who cannot observe the quality of
goods but can acquire information by buying signals from a profit-maximizing
principal with limited commitment power. The principal can charge higher prices
for more informative signals in any period, but high prices in the future
discourage continued search by the agent, thereby reducing the principal's
future profits. A unique stationary equilibrium outcome exists, and we show
that the principal $(i)$ induces the socially efficient stopping rule, $(ii)$
extracts the full surplus, and $(iii)$ persuades the agent against settling for
marginal goods, extending the duration of surplus extraction. However,
introducing an additional, free source of information can lead to inefficiency
in equilibrium.",Persuaded Search,2023-03-23 19:29:52,"Teddy Mekonnen, Zeky Murra-Anton, Bobak Pakzad-Hurson","http://arxiv.org/abs/2303.13409v4, http://arxiv.org/pdf/2303.13409v4",econ.TH
34530,th,"Government-run (Government-led) restoration has become a common and effective
approach to the mitigation of financial risks triggered by corporation credit
defaults. However, in practice, it is often challenging to come up with the
optimal plan of those restorations, due to the massive search space associated
with defaulted corporation networks (DCNs), as well as the dynamic and looped
interdependence among the recovery of those individual corporations. To address
such a challenge, this paper proposes an array of viable heuristics of the
decision-making that drives those restoration campaigns. To examine their
applicability and measure their performance, those heuristics have been applied
to two real-work DCNs that consists of 100 listed Chinese A-share companies,
whose restoration has been modelled based on the 2021 financial data, in the
wake of randomly generated default scenarios. The corresponding simulation
outcome of the case-study shows that the restoration of the DCNs would be
significantly influenced by the different heuristics adopted, and in
particular, the system-oriented heuristic is revealed to be significantly
outperforming those individual corporation-oriented ones. Therefore, such a
research has further highlighted that the interdependence-induced risk
propagation shall be accounted for by the decision-makers, whereby a prompt and
effective restoration campaign of DCNs could be shaped.",Study on the risk-informed heuristic of decision-making on the restoration of defaulted corporation networks,2023-03-28 13:12:12,Jiajia Xia,"http://arxiv.org/abs/2303.15863v2, http://arxiv.org/pdf/2303.15863v2",econ.TH
34531,th,"In a stylized voting model, we establish that increasing the share of
critical thinkers -- individuals who are aware of the ambivalent nature of a
certain issue -- in the population increases the efficiency of surveys
(elections) but might increase surveys' bias. In an incentivized online social
media experiment on a representative US population (N = 706), we show that
different digital storytelling formats -- different designs to present the same
set of facts -- affect the intensity at which individuals become critical
thinkers. Intermediate-length designs (Facebook posts) are most effective at
triggering individuals into critical thinking. Individuals with a high need for
cognition mostly drive the differential effects of the treatments.",Critical Thinking Via Storytelling: Theory and Social Media Experiment,2023-03-29 06:06:07,"Brian Jabarian, Elia Sartori","http://arxiv.org/abs/2303.16422v1, http://arxiv.org/pdf/2303.16422v1",econ.TH
34532,th,"We study the problem of sharing the revenues from broadcasting sports leagues
axiomatically. Our key axiom is anonymity, the classical impartiality axiom.
Other impartiality axioms already studied in these problems are equal treatment
of equals, weak equal treatment of equals and symmetry. We study the
relationship between all impartiality axioms. In addition, we combine anonymity with
other existing axioms in the literature. Some combinations give rise to new
characterizations of well-known rules. The family of generalized split rules is
characterized with anonymity, additivity and null team. The concede-and-divide
rule is characterized with anonymity, additivity and essential team. Other
combinations characterize new rules that had not been considered before. We
provide three characterizations in which three axioms are the same (anonymity,
additivity, and order preservation) while the fourth one is different (maximum
aspirations, weak upper bound, and non-negativity). Depending on the fourth
axiom we obtain three different families of rules. In all of them
concede-and-divide plays a central role.",Anonymity in sharing the revenues from broadcasting sports leagues,2023-03-31 11:56:32,"Gustavo Bergantinos, Juan D. Moreno-Ternero","http://arxiv.org/abs/2303.17897v1, http://arxiv.org/pdf/2303.17897v1",econ.TH
34533,th,"How and why do incentive levels affect strategic behavior? This paper
examines an experiment designed to identify the causal effect of scaling up
incentives on choices and beliefs in strategic settings by holding fixed
opponents' actions. In dominance-solvable games, higher incentives increase
action sophistication and best-response rates and decrease mistake propensity.
Beliefs tend to become more accurate with higher own incentives in simple
games. However, opponents with higher incentive levels are harder to predict:
while beliefs track opponents' behavior when they have higher incentive levels,
beliefs about opponents also become more biased. We provide evidence that
incentives affect cognitive effort and that greater effort increases
performance and predicts choice and belief sophistication. Overall, the data
lends support to combining both payoff-dependent mistakes and costly reasoning.",The Effects of Incentives on Choices and Beliefs in Games: An Experiment,2023-04-02 03:05:08,"Teresa Esteban-Casanelles, Duarte Gonçalves","http://arxiv.org/abs/2304.00412v1, http://arxiv.org/pdf/2304.00412v1",econ.TH
34534,th,"We propose a generalization of Quantal Response Equilibrium (QRE) built on a
simple premise: some actions are more focal than others. In our model, which we
call the Focal Quantal Response Equilibrium (Focal QRE), each player plays a
stochastic version of Nash equilibrium as in the QRE, but some strategies are
focal and thus are chosen relatively more frequently than other strategies
after accounting for expected utilities. The Focal QRE is able to systematically
account for various forms of bounded rationality of players, especially
regret-aversion, salience, or limited consideration. The Focal QRE is also
useful to explain observed heterogeneity of bounded rationality of players
across different games. We show that regret-based focal sets perform relatively
well at predicting strategies that are chosen more frequently relative to their
expected utilities.",The Focal Quantal Response Equilibrium,2023-04-02 06:20:54,"Matthew Kovach, Gerelt Tserenjigmid","http://arxiv.org/abs/2304.00438v1, http://arxiv.org/pdf/2304.00438v1",econ.TH
34535,th,"This paper models a two-agent economy with production and appropriation as a
noncooperative dynamic game, and determines its closed-form Markovian Nash
equilibrium. The analysis highlights the parametric conditions that tip the
economy from a nonaggressive or ""co-operative"" equilibrium to outright
distributional conflict. The model includes parameters that capture the role of
appropriation technology and destructiveness. The full dynamic implications of
the game are yet to be explored, but the model offers a promising general
framework for thinking about different technological and economic conditions as
more or less conducive to cooperation or distributional conflict.",Inequality and Growth: A Two-Player Dynamic Game with Production and Appropriation,2023-04-04 18:03:58,Julio Huato,"http://arxiv.org/abs/2304.01855v1, http://arxiv.org/pdf/2304.01855v1",econ.TH
34536,th,"This study delves into the origins of excess capacity by examining the
reactions of capital, labor, and capital intensity. To achieve this, we have
employed a novel three-layered production function model, estimating the
elasticity of substitution between capital and labor as a nested layer,
alongside capital intensity, for all industry groups. We have then selectively
analyzed a few industry groups for comparative purposes, taking into account
the current government policies and manufacturing plant realities. Ultimately,
we recommend that policymakers address the issue of excess capacity by
stimulating the expansion of manufacturing plants with cutting-edge machinery.
Our findings and recommendations are intended to appeal to academics and
policymakers alike.",Causes of Excess Capacity,2023-04-05 00:40:28,Samidh Pal,"http://arxiv.org/abs/2304.02137v1, http://arxiv.org/pdf/2304.02137v1",econ.TH
34538,th,"This paper studies a dynamic model of information acquisition, in which
information might be secretly manipulated. A principal must choose between a
safe action with known payoff and a risky action with uncertain payoff,
favoring the safe action under the prior belief. She may delay her decision to
acquire additional news that reveals the risky action's payoff, without knowing
exactly when such news will arrive. An uninformed agent with a misaligned
preference may have the capability to generate a false arrival of news, which
is indistinguishable from a real one, distorting the information content of
news and the principal's search. The analysis characterizes the positive and
normative distortions in the search for news arising from such manipulation,
and it considers three remedies that increase the principal's payoff: a
commitment to naive search, transfer of authority to the agent, and delegation
to an intermediary who is biased in the agent's favor.",Waiting for Fake News,2023-04-08 18:44:52,Raphael Boleslavsky,"http://arxiv.org/abs/2304.04053v2, http://arxiv.org/pdf/2304.04053v2",econ.TH
34539,th,"This paper provides a micro-economic foundation for an argument that direct
employment by a government is more desirable than government purchase of
private goods to eliminate unemployment. A general equilibrium model with
monopolistic competition is devised, and the effects of policy parameters
(government purchase, government employment, and tax rate) on macroeconomic
variables (consumption, price, and profit) are investigated. It is shown that
1) government purchases are inflationary in the sense that additional
effective demand by the government not only increases private employment but
also raises prices; 2) government employment can achieve full employment without
causing a rise in prices.",A micro-founded comparison of fiscal policies: indirect and direct job creation,2023-04-10 13:51:51,Kensuke Ohtake,"http://arxiv.org/abs/2304.04506v2, http://arxiv.org/pdf/2304.04506v2",econ.TH
34540,th,"This paper investigates a novel behavioral feature exhibited by recursive
preferences: aversion to risks that are persistent through time. I introduce a
formal notion of correlation aversion to capture this phenomenon and provide a
characterization based on risk attitudes. Furthermore, correlation averse
preferences admit a specific variational representation, which connects
correlation aversion to fear of model misspecification. These findings imply
that correlation aversion is the main driver in many applications of recursive
preferences such as asset pricing, climate policy, and optimal fiscal policy.","Recursive Preferences, Correlation Aversion, and the Temporal Resolution of Uncertainty",2023-04-10 17:13:44,Lorenzo Maria Stanca,"http://arxiv.org/abs/2304.04599v2, http://arxiv.org/pdf/2304.04599v2",econ.TH
34541,th,"This paper examines how a decision maker might incentive an expert, who is
more aware than himself, to reveal novel contingencies. In particular, the
decision maker will take an action that determines, along with the resolution
of uncertainty, the payoffs to both players. I delineate the exact level of
commitment needed to ensure full revelation: any fully revealing incentive
scheme is equivalent to a dynamic game wherein each round the decision maker
proposes a plan of action and the expert chooses to reveal some novel
contingencies. The game ends when nothing novel is revealed and the expert
chooses which of the proposed plans gets implemented. That is, the decision
maker must be able to commit that once a plan of action is proposed, it cannot
be taken off the table.
  I then consider the set of robust strategies -- those strategies that
maximize the worst case outcome across any possible awareness type of the
expert -- and show they are characterized by a principle of myopic optimality:
at each round, the decision maker maximizes his payoff as if the expert had
nothing further to reveal. It turns out, this myopic principle also solves the
mechanism designer's problem: the set of outcomes that are achievable by any
incentive compatible and Pareto efficient mechanism are delineated by the set
of robust strategies in the above dynamic game.",Eliciting Awareness,2023-04-11 14:17:28,Evan Piermont,"http://arxiv.org/abs/2304.05142v1, http://arxiv.org/pdf/2304.05142v1",econ.TH
34542,th,"In this short note, we compare the cursed sequential equilibrium (CSE) by
Fong et al. (2023) and the sequential cursed equilibrium (SCE) by Cohen and Li
(2023). We identify eight main differences between CSE and SCE with respect to
the following features: (1) the family of applicable games, (2) the number of
free parameters, (3) the belief updating process, (4) the treatment of public
histories, (5) effects in games of complete information, (6) violations of
subgame perfection and sequential rationality, (7) re-labeling of actions, and
(8) effects in one-stage simultaneous-move games.",A Note on Cursed Sequential Equilibrium and Sequential Cursed Equilibrium,2023-04-12 00:48:23,"Meng-Jhang Fong, Po-Hsuan Lin, Thomas R. Palfrey","http://arxiv.org/abs/2304.05515v1, http://arxiv.org/pdf/2304.05515v1",econ.TH
34543,th,We correct a gap in the proof of Theorem 2 in Matsushima et al. (2010).,"Comment on Matsushima, Miyazaki, and Yagi (2010) ""Role of Linking Mechanisms in Multitask Agency with Hidden Information""",2023-04-12 18:58:19,"Ian Ball, Deniz Kattwinkel","http://arxiv.org/abs/2304.05936v1, http://arxiv.org/pdf/2304.05936v1",econ.TH
34544,th,"We illustrate the strong implications of recursivity, a standard assumption
in dynamic environments, for attitudes toward uncertainty. We show that in
intertemporal consumption choice problems, recursivity always implies constant
absolute ambiguity aversion (CAAA) when applying the standard dynamic extension
of monotonicity. Our analysis yields a functional equation called ""generalized
rectangularity"", as it generalizes the standard notion of rectangularity for
recursive multiple priors. Our results highlight that if uncertainty aversion
is modeled as a form of convexity, recursivity limits us to recursive
variational preferences. We propose a novel notion of monotonicity that enables
us to overcome this limitation.",Recursive Preferences and Ambiguity Attitudes,2023-04-14 00:45:48,"Massimo Marinacci, Giulio Principi, Lorenzo Stanca","http://arxiv.org/abs/2304.06830v4, http://arxiv.org/pdf/2304.06830v4",econ.TH
34545,th,"We consider the social welfare function a la Arrow, where some voters are not
qualified to evaluate some alternatives. Thus, the inputs of the social welfare
function are the preferences of voters on the alternatives that they are
qualified to evaluate only. Our model is a generalization of the peer rating
model, where each voter evaluates the other voters (except for
himself/herself). We demonstrate the following three impossibility results.
First, if a transitive valued social welfare function satisfies independence of
irrelevant alternatives and the Pareto principle, then a dictator who is
qualified to evaluate all alternatives exists. Second, a transitive valued
function satisfying the Pareto principle exists if and only if at least one
voter is qualified to evaluate all alternatives. Finally, if no voter is
qualified to evaluate all alternatives, then under a transitive valued social
welfare function satisfying the weak Pareto principle and independence of
irrelevant alternatives, all alternatives are indifferent for any preference
profile of voters.",Social Welfare Functions with Voters Qualifications: Impossibility Results,2023-04-14 10:06:15,Yasunori Okumura,"http://arxiv.org/abs/2304.06961v1, http://arxiv.org/pdf/2304.06961v1",econ.TH
34546,th,"The Covid-19 pandemic has accelerated the trend of many colleges moving to
test-optional, and in some cases test-blind, admissions policies. A frequent
claim is that by not seeing standardized test scores, a college is able to
admit a student body that it prefers, such as one with more diversity. But how
can observing less information allow a college to improve its decisions? We
argue that test-optional policies may be driven by social pressure on colleges'
admission decisions. We propose a model of college admissions in which a
college disagrees with society on which students should be admitted. We show
how the college can use a test-optional policy to reduce its ""disagreement
cost"" with society, regardless of whether this results in a preferred student
pool. We discuss which students either benefit from or are harmed by a
test-optional policy. In an application, we study how a ban on using race in
admissions may result in more colleges going test optional or test blind.",Test-Optional Admissions,2023-04-15 16:05:57,"Wouter Dessein, Alex Frankel, Navin Kartik","http://arxiv.org/abs/2304.07551v2, http://arxiv.org/pdf/2304.07551v2",econ.TH
34547,th,"We analyze digital markets where a monopolist platform uses data to match
multiproduct sellers with heterogeneous consumers who can purchase both on and
off the platform. The platform sells targeted ads to sellers that recommend
their products to consumers and reveals information to consumers about their
values. The revenue-optimal mechanism is a managed advertising campaign that
matches products and preferences efficiently. In equilibrium, sellers offer
higher qualities at lower unit prices on than off the platform.
Privacy-respecting data-governance rules such as organic search results or
federated learning can lead to welfare gains for consumers.","Data, Competition, and Digital Platforms",2023-04-16 02:18:24,"Dirk Bergemann, Alessandro Bonatti","http://arxiv.org/abs/2304.07653v1, http://arxiv.org/pdf/2304.07653v1",econ.TH
34548,th,"Although portfolio diversification is the typical strategy followed by
risk-averse investors, extreme portfolios that allocate all funds to a single
asset/state of the world are common too. Such asset-demand behavior is
compatible with risk-averse subjective expected utility maximization under
beliefs that assign a strictly positive probability to every state. We show
that whenever finitely many extreme asset demands are rationalizable in this
way under such beliefs, they are simultaneously rationalizable under the
\textit{same} beliefs by: (i) constant absolute risk aversion; increasing
absolute risk aversion (IARA); risk-neutral; and risk-seeking utility indices
at all wealth levels; (ii) a class of power DARA/IRRA utility indices at some
strictly positive fixed initial wealth; and (iii) quadratic IARA utility
indices under bounded wealth. We also show that, in such situations, the
observable data allow for sharp bounds to be given for the relevant parameters
in each of the above classes of risk-averse preferences.",Non-diversified portfolios with subjective expected utility,2023-04-17 11:22:21,"Christopher P. Chambers, Georgios Gerasimou","http://arxiv.org/abs/2304.08059v2, http://arxiv.org/pdf/2304.08059v2",econ.TH
34549,th,"I revisit the standard moral-hazard model, in which an agent's preference
over contracts is rooted in costly effort choice. I characterise the
behavioural content of the model in terms of empirically testable axioms, and
show that the model's parameters are identified. I propose general behavioural
definitions of relative (over)confidence and optimism, and characterise these
in terms of the parameters of the moral-hazard model. My formal results are
rooted in a simple but powerful insight: that the moral-hazard model is closely
related to the well-known 'variational' model of choice under uncertainty.",Optimism and overconfidence,2023-04-17 18:05:44,Ludvig Sinander,"http://arxiv.org/abs/2304.08343v1, http://arxiv.org/pdf/2304.08343v1",econ.TH
34550,th,"We examine how uncertain veracity of external news influences investor
beliefs, market prices and corporate disclosures. Despite assuming independence
between the news' veracity and the firm's endowment with private information,
we find that favorable news is taken ``with a grain of salt'' in equilibrium --
more precisely, perceived as less likely veracious -- which reinforces investor
beliefs that nondisclosing managers are hiding disadvantageous information.
Hence more favorable external news could paradoxically lead to lower market
valuation. That is, amid management silence, stock prices may be non-monotonic
in the positivity of external news. In line with mounting empirical evidence,
our analysis implies asymmetric price reactions to news and price declines
following firm disclosures. We further predict that external news that is more
likely veracious may increase or decrease the probability of disclosure and
link these effects to empirically observable characteristics.",With a Grain of Salt: Uncertain Veracity of External News and Firm Disclosures,2023-04-18 22:51:59,"Jonathan Libgober, Beatrice Michaeli, Elyashiv Wiedman","http://arxiv.org/abs/2304.09262v3, http://arxiv.org/pdf/2304.09262v3",econ.TH
34551,th,"We study the problem of a partisan gerrymanderer who assigns voters to
equipopulous districts so as to maximize his party's expected seat share. The
designer faces both aggregate uncertainty (how many votes his party will
receive) and idiosyncratic, voter-level uncertainty (which voters will vote for
his party). We argue that pack-and-pair districting, where weaker districts are
``packed'' with a single type of voter, while stronger districts contain two
voter types, is typically optimal for the gerrymanderer. The optimal form of
pack-and-pair districting depends on the relative amounts of aggregate and
idiosyncratic uncertainty. When idiosyncratic uncertainty dominates, it is
optimal to pack opposing voters and pair more favorable voters; this plan
resembles traditional ``packing-and-cracking.'' When aggregate uncertainty
dominates, it is optimal to pack moderate voters and pair extreme voters; this
``matching slices'' plan has received some attention in the literature.
Estimating the model using precinct-level returns from recent US House
elections indicates that, in practice, idiosyncratic uncertainty dominates and
packing opponents is optimal; moreover, traditional pack-and-crack districting
is approximately optimal. We discuss implications for redistricting reform and
political polarization. Methodologically, we exploit a formal connection
between gerrymandering -- partitioning voters into districts -- and information
design -- partitioning states of the world into signals.",The Economics of Partisan Gerrymandering,2023-04-19 05:29:17,"Anton Kolotilin, Alexander Wolitzky","http://arxiv.org/abs/2304.09381v1, http://arxiv.org/pdf/2304.09381v1",econ.TH
34559,th,"We analyze the effect of homophily in an epidemics between two groups of
agents that differ in vaccination (rates vaxxers and anti--vaxxers). The steady
state infection rate is hump-shaped in homophily, whereas the cumulative number
of agents infected during an outbreak is u-shaped. If vaccination rates are
endogenous, homophily has the opposite impact on the two groups, but the
qualitative behavior of the aggregate is unchanged. However, the sign of the
group-level impact is the opposite if vaccination is motivated by infection
risk or by peer pressure. If motivations are group-specific, homophily can be
harmful for both groups.",Homophily and infections: static and dynamic effects,2023-04-24 12:18:11,"Matteo Bizzarri, Fabrizio Panebianco, Paolo Pin","http://arxiv.org/abs/2304.11934v1, http://arxiv.org/pdf/2304.11934v1",econ.TH
34552,th,"We consider linear orders of finite alternatives that are constructed by
aggregating the preferences of individuals. We focus on a linear order that is
consistent with the collective preference relation, which is constructed by one
of the supermajority rules and modified using two procedures if there exist
some cycles. One modification procedure uses the transitive closure, and the
other uses the Suzumura consistent closure. We derive two sets of linear orders
that are consistent with the (modified) collective preference relations formed
by any of the supermajority rules and show that these sets are generally not
empty. These sets of linear orders are closely related to those obtained
through the ranked pairs method and the Schulze method. Finally, we consider
two social choice correspondences whose output is one of the sets introduced
above, and show that the correspondences satisfy the four properties: the
extended Condorcet principle, the Pareto principle, the independence of clones,
and the reversal symmetry.",Consistent Linear Orders for Supermajority Rules,2023-04-19 07:38:47,Yasunori Okumura,"http://arxiv.org/abs/2304.09419v3, http://arxiv.org/pdf/2304.09419v3",econ.TH
34553,th,"In this paper, we consider an environment in which the utilitarian theorem
for the NM utility function derived by Harsanyi and the utilitarian theorem for
Alt's utility function derived by Harvey hold simultaneously, and prove that
the NM utility function coincides with Alt's utility function under this setup.
This result is so paradoxical that we must presume that at least one of the
utilitarian theorems contains a strong assumption. We examine the assumptions
one by one and conclude that one of Harsanyi's axioms is strong.",Utilitarian Theorems and Equivalence of Utility Theories,2023-04-20 00:21:37,Yuhki Hosoya,"http://arxiv.org/abs/2304.09973v4, http://arxiv.org/pdf/2304.09973v4",econ.TH
34554,th,"We are concerned with the stability of a coalitional game, i.e., a
transferable-utility (TU) cooperative game. First, the concept of core can be
weakened so that the blocking of changes is limited to only those with
multilateral backings. This principle of consensual blocking, as well as the
traditional core-defining principle of unilateral blocking and one straddling
in between, can all be applied to partition-allocation pairs. Each such pair is
made up of a partition of the grand coalition and a corresponding allocation
vector whose components are individually rational and efficient for the various
constituent coalitions of the given partition. For the resulting strong,
medium, and weak stability concepts, the first is core-compatible in that the
traditional core exactly contains those allocations that are associated through
this strong stability concept with the all-consolidated partition consisting of
only the grand coalition. Probably more importantly, the latter medium and weak
stability concepts are universal. By this, we mean that any game, no matter how
``poor'' it is, has its fair share of stable solutions. There is also a
steepest ascent method to guide the convergence process to a mediumly stable
partition-allocation pair from any starting partition.",Partition-based Stability of Coalitional Games,2023-04-21 00:26:07,Jian Yang,"http://arxiv.org/abs/2304.10651v1, http://arxiv.org/pdf/2304.10651v1",econ.TH
34555,th,"We deal with coalitional games possessing strictly positive values.
Individually rational allocations of such a game have clear fractional
interpretations. Many concepts, including the long-existing core and other
stability notions more recently proposed by Yang \cite{Y22}, can all be re-cast
in this fractional mode. The latter allows a certain ranking between games,
which we interpret in the sense of ``centripetality'', to imply a clearly
describable shift in the games' stable solutions. When coalitions' values are
built on both random outcomes and a common
positively homogeneous reward function characterizing players' enjoyments from
their shares, the above link could help explain why aversion to risk often
promotes cooperation.",A Partial Order for Strictly Positive Coalitional Games and a Link from Risk Aversion to Cooperation,2023-04-21 00:30:12,Jian Yang,"http://arxiv.org/abs/2304.10652v1, http://arxiv.org/pdf/2304.10652v1",econ.TH
34556,th,"This paper studies the stability and strategy-proofness aspect of the
two-sided one-to-one matching market. Agents have single-peaked preferences on
trees. In this setting, we characterize all rich anonymous tree-single-peaked
domains where a stable and (weakly group) strategy-proof matching rule exists.
We also show that whenever there exists a stable and strategy-proof matching
rule on a rich anonymous tree-single-peaked domain, one or both of the deferred
acceptance rules (Gale and Shapley, 1962) satisfy stability and weak group
strategy-proofness on that domain. Finally, we show that for markets with a
size of at least five, there is no rich anonymous domain where a stable and
non-bossy matching rule exists. As a corollary, we show incompatibility between
stability and group strategy-proofness on rich anonymous tree-single-peaked
domains for markets with a size of at least five.",Compatibility between stability and strategy-proofness with single-peaked preferences on trees,2023-04-23 02:10:24,Pinaki Mandal,"http://arxiv.org/abs/2304.11494v1, http://arxiv.org/pdf/2304.11494v1",econ.TH
34557,th,"We present a model that investigates preference evolution with endogenous
matching. In the short run, individuals' subjective preferences simultaneously
determine who they choose to match with and how they behave in the social
interactions with their matched partners, which result in material payoffs for
them. Material payoffs in turn affect how preferences evolve in the long run.
To properly model the ""match-to-interact"" process, we combine stable matching
and equilibrium concepts. Our analysis unveils that endogenous matching gives
rise to the ""we is greater than me"" moral perspective. This perspective is
underpinned by a preference that exhibits both homophily and efficiency, which
enables individuals to reach a consensus of a collective ``we"" that transcends
the boundaries of the individual ""I"" and ""you."" Such a preference stands out in
the evolutionary process because it is able to force positive assortative
matching and efficient play among individuals carrying the same preference
type. Under incomplete information, a strong form of homophily, which we call
parochialism, is necessary for a preference to prevail in evolution, because
stronger incentives are required to engage in self-sorting with information
friction.",Partner Choice and Morality: Preference Evolution under Stable Matching,2023-04-23 03:49:04,"Ziwei Wang, Jiabin Wu","http://arxiv.org/abs/2304.11504v2, http://arxiv.org/pdf/2304.11504v2",econ.TH
34558,th,"Following the decision-theoretic approach to game theory, we extend the
analysis of Epstein & Wang and of Di Tillio from hierarchies of preference
relations to hierarchies of choice functions. We then construct the universal
choice structure containing all these choice hierarchies, and show how the
universal preference structure of Di Tillio is embedded in it.",Choice Structures in Games,2023-04-23 11:19:39,"Paolo Galeazzi, Johannes Marti","http://arxiv.org/abs/2304.11575v1, http://arxiv.org/pdf/2304.11575v1",econ.TH
34560,th,"We propose monotone comparative statics results for maximizers of submodular
functions, as opposed to maximizers of supermodular functions as in the
classical theory put forth by Veinott, Topkis, Milgrom, and Shannon among
others. We introduce matrons, a natural structure that is dual to sublattices
and generalizes existing structures such as matroids and polymatroids in
combinatorial optimization and M-sets in discrete convex analysis. Our monotone
comparative statics result is based on a natural order on matrons, which is
dual in some sense to Veinott's strong set order on sublattices. As an
application, we propose a deferred acceptance algorithm that operates in the
case of divisible goods, and we study its convergence properties.","Monotone comparative statics for submodular functions, with an application to aggregated deferred acceptance",2023-04-24 18:29:59,"Alfred Galichon, Yu-Wei Hsieh, Maxime Sylvestre","http://arxiv.org/abs/2304.12171v1, http://arxiv.org/pdf/2304.12171v1",econ.TH
34561,th,"So far, the theory of equilibrium selection in the infinitely repeated
prisoner's dilemma is insensitive to communication possibilities. To address
this issue, we incorporate the assumption that communication reduces -- but
does not entirely eliminate -- an agent's uncertainty that the other agent
follows a cooperative strategy into the theory. Because of this, agents still
worry about the payoff from cooperating when the other one defects, i.e. the
sucker's payoff S, and games with communication are more conducive to
cooperation than games without communication. This theory is supported by data
from laboratory experiments, and by machine learning based evaluation of the
communication content.",Communication in the Infinitely Repeated Prisoner's Dilemma: Theory and Experiments,2023-04-24 20:46:46,Maximilian Andres,"http://arxiv.org/abs/2304.12297v1, http://arxiv.org/pdf/2304.12297v1",econ.TH
34562,th,"We establish that all strategy-proof social choice rules in strict preference
domains follow necessarily a two-step procedure. In the first step, agents are
asked to reveal some specific information about their preferences. Afterwards,
a subrule that is dictatorial or strategy-proof of range 2 must be applied, and
the selected subrule may differ depending on the answers of the first step. As
a consequence, the strategy-proof rules that have been identified in the
literature for some domains can be reinterpreted in terms of our procedure and,
more importantly, this procedure serves as a guide for determining the
structure of the strategy-proof rules in domains that have not been explored
yet.",The structure of strategy-proof rules,2023-04-25 17:17:08,"Jorge Alcalde-Unzu, Marc Vorsatz","http://arxiv.org/abs/2304.12843v1, http://arxiv.org/pdf/2304.12843v1",econ.TH
34563,th,"We consider two-player contests with the possibility of ties and study the
effect of different tie-breaking rules on effort. For ratio-form and
difference-form contests that admit pure-strategy Nash equilibrium, we find
that the effort of both players is monotone decreasing in the probability that
ties are broken in favor of the stronger player. Thus, the effort-maximizing
tie-breaking rule commits to breaking ties in favor of the weaker agent. With
symmetric agents, we find that the equilibrium is generally symmetric and
independent of the tie-breaking rule. We also study the design of random
tie-breaking rules that are ex-ante fair and identify sufficient conditions
under which breaking ties before the contest actually leads to greater expected
effort than the more commonly observed practice of breaking ties after the
contest.",Optimal tie-breaking rules,2023-04-27 02:19:16,"Sumit Goel, Amit Goyal","http://dx.doi.org/10.1016/j.jmateco.2023.102872, http://arxiv.org/abs/2304.13866v2, http://arxiv.org/pdf/2304.13866v2",econ.TH
34564,th,"Nontransitive choices have long been an area of curiosity within economics.
However, determining whether nontransitive choices represent an individual's
preference is a difficult task since choice data is inherently stochastic. This
paper shows that behavior from nontransitive preferences under a monotonicity
assumption is equivalent to a transitive stochastic choice model. In
particular, nontransitive preferences are regularly interpreted as a strength
of preference, so we assume alternatives are chosen proportionally to the
nontransitive preference. One implication of this result is that one cannot
distinguish ``complementarity in attention"" and ``complementarity in demand.""",Nontransitive Preferences and Stochastic Rationalizability: A Behavioral Equivalence,2023-04-28 08:21:37,"Mogens Fosgerau, John Rehbeck","http://arxiv.org/abs/2304.14631v1, http://arxiv.org/pdf/2304.14631v1",econ.TH
34565,th,"Two acts are comonotonic if they yield high payoffs in the same states of
nature. The main purpose of this paper is to derive a new characterization of
Cumulative Prospect Theory (CPT) through simple properties involving
comonotonicity. The main novelty is a concept dubbed gain-loss hedging: mixing
positive and negative acts creates hedging possibilities even when acts are
comonotonic. This allows us to clarify in which sense CPT differs from Choquet
expected utility. Our analysis is performed under the simpler case of
(piece-wise) constant marginal utility which allows us to clearly separate the
perception of uncertainty from the evaluation of outcomes.",Gain-Loss Hedging and Cumulative Prospect Theory,2023-04-28 16:30:43,"Lorenzo Bastianello, Alain Chateauneuf, Bernard Cornet","http://arxiv.org/abs/2304.14843v1, http://arxiv.org/pdf/2304.14843v1",econ.TH
34566,th,"This study proposes a tractable stochastic choice model to identify
motivations for prosocial behavior, and to explore alternative motivations of
deliberate randomization beyond ex-ante fairness concerns. To represent social
preferences, we employ an additively perturbed utility model consisting of the
sum of expected utility and a nonlinear cost function, where the utility
function is purely selfish while the cost function depends on social
preferences. Using the cost function, we study stochastic choice patterns to
distinguish between stochastic inequity-averse behavior and stochastic
shame-mitigating behavior. Moreover, we discuss how our model can complement
recent experimental evidence of ex-post and ex-ante fairness concerns.",Social Preferences and Deliberately Stochastic Behavior,2023-04-28 20:01:30,"Yosuke Hashidate, Keisuke Yoshihara","http://arxiv.org/abs/2304.14977v1, http://arxiv.org/pdf/2304.14977v1",econ.TH
34567,th,"We consider a school choice matching model where the priorities for schools
are represented by binary relations that may not be weak orders. We focus on the
(total order) extensions of the binary relations. We introduce a class of
algorithms to derive one of the extensions of a binary relation and
characterize them by using the class. We show that if the binary relations are
partial orders, then for each stable matching for the profile of the binary
relations, there is an extension for which it is also stable. Moreover, if
there are multiple stable matchings for the profile of the binary relations
that are ranked by Pareto dominance, there is an extension for which all of
those matchings are stable. We provide several applications of these results.",On extensions of partial priorities in school choice,2023-05-01 06:17:33,"Minoru Kitahara, Yasunori Okumura","http://arxiv.org/abs/2305.00641v2, http://arxiv.org/pdf/2305.00641v2",econ.TH
34569,th,"In this paper, we take a mechanism design approach to optimal assignment
problems with asymmetrically informed buyers. In addition, the surplus
generated by an assignment of a buyer to a seller may be adversely affected by
externalities generated by other assignments. The problem is complicated by
several factors. Buyers know their own valuations and externality costs but do
not know this same information for other buyers. Buyers also receive private
signals correlated with the state and, consequently, the implementation problem
exhibits interdependent valuations. This precludes a naive application of the
VCG mechanism and to overcome this interdependency problem, we construct a
two-stage mechanism. In the first stage, we exploit correlation in the firms'
signals about the state to induce truthful reporting of observed signals. Given
that buyers are honest in stage 1, we then use a VCG-like mechanism in stage 2
that induces honest reporting of valuation and externality functions.",An Assignment Problem with Interdependent Valuations and Externalities,2023-05-02 17:59:04,"Tatiana Daddario, Richard P. McLean, Andrew Postlewaite","http://arxiv.org/abs/2305.01477v1, http://arxiv.org/pdf/2305.01477v1",econ.TH
34570,th,"A sender with state-independent preferences (i.e., transparent motives)
privately observes a signal about the state of the world before sending a
message to a receiver, who subsequently takes an action. Regardless of whether
the receiver can mediate--and commit to a garbling of the sender's message--or
delegate--commit to a stochastic decision rule as a function of the
message--and understanding the statement ``the receiver is better off as a
result of an improvement of the sender's information'' to mean that her maximal
and minimal equilibrium payoffs (weakly) increase as the sender's signal
improves (in a Blackwell sense), we find that if the sender is more informed,
the receiver is better off.",A More Informed Sender Benefits the Receiver When the Sender Has Transparent Motives,2023-03-30 02:56:17,Mark Whitmeyer,"http://arxiv.org/abs/2305.01512v1, http://arxiv.org/pdf/2305.01512v1",econ.TH
34571,th,"The indoctrination game is a complete-information contest over public
opinion. The players exert costly effort to manifest their private opinions in
public in order to control the discussion, so that the governing opinion is
similar to theirs. Our analysis provides a theoretical foundation for the
silent majority and vocal minority phenomena, i.e., we show that all moderate
opinions remain mute in equilibrium while allowing extremists full control of
the discussion. Moreover, we prove that elevated exposure to others' opinions
increases the observed polarization among individuals. Using these results, we
formulate a new social-learning framework, referred to as an indoctrination
process.",The Indoctrination Game,2023-05-04 10:23:10,"Lotem Ikan, David Lagziel","http://arxiv.org/abs/2305.02604v1, http://arxiv.org/pdf/2305.02604v1",econ.TH
34572,th,"A seller posts a price for a single object. The seller's and buyer's values
may be interdependent. We characterize the set of payoff vectors across all
information structures. Simple feasibility and individual-rationality
constraints identify the payoff set. The buyer can obtain the entire surplus;
often, other mechanisms cannot enlarge the payoff set. We also study payoffs
when the buyer is more informed than the seller, and when the buyer is fully
informed. All three payoff sets coincide (only) in notable special cases -- in
particular, when there is complete breakdown in a ``lemons market'' with an
uninformed seller and fully-informed buyer.",Lemonade from Lemons: Information Design and Adverse Selection,2023-05-04 19:59:46,"Navin Kartik, Weijie Zhong","http://arxiv.org/abs/2305.02994v1, http://arxiv.org/pdf/2305.02994v1",econ.TH
34573,th,"We consider an incomplete information network game in which agents'
information is restricted only to the identity of their immediate neighbors.
Agents form beliefs about the adjacency pattern of others and play a
linear-quadratic effort game to maximize interim payoffs. We establish the
existence and uniqueness of Bayesian-Nash equilibria in pure strategies. In
this equilibrium, agents use local information, i.e., knowledge of their direct
connections, to make inferences about the complementarity strength of their
actions with those of other agents, which is given by their updated beliefs
regarding the number of walks they have in the network. Our model clearly
demonstrates how asymmetric information based on network position and the
identity of agents affect strategic behavior in such network games. We also
characterize agent behavior in equilibria under different forms of ex-ante
prior beliefs such as uniform priors over the set of all networks, Erdos-Renyi
network generation, and homophilic linkage.",Games Under Network Uncertainty,2023-05-04 22:50:43,"Promit K. Chaudhuri, Sudipta Sarangi, Hector Tzavellas","http://arxiv.org/abs/2305.03124v4, http://arxiv.org/pdf/2305.03124v4",econ.TH
34574,th,"We consider a team-production environment where all participants are
motivated by career concerns, and where a team's joint productive outcome may
have different reputational implications for different team members. In this
context, we characterize equilibrium disclosure of team-outcomes when
team-disclosure choices aggregate individual decisions through some
deliberation protocol. In contrast with individual disclosure problems, we show
that equilibria often involve partial disclosure. Furthermore, we study the
effort-incentive properties of equilibrium disclosure strategies implied by
different deliberation protocols; and show that the partial disclosure of team
outcomes may improve individuals' incentives to contribute to the team.
Finally, we study the design of deliberation protocols, and characterize
productive environments where effort-incentives are maximized by unilateral
decision protocols or more consensual deliberation procedures.",Disclosure and Incentives in Teams,2023-05-05 18:45:33,"Paula Onuchic, João Ramos","http://arxiv.org/abs/2305.03633v1, http://arxiv.org/pdf/2305.03633v1",econ.TH
34575,th,"We develop a full-fledged analysis of an algorithmic decision process that,
in a multialternative choice problem, produces computable choice probabilities
and expected decision times.",Algorithmic Decision Processes,2023-05-05 19:05:36,"Carlo Baldassi, Fabio Maccheroni, Massimo Marinacci, Marco Pirazzini","http://arxiv.org/abs/2305.03645v1, http://arxiv.org/pdf/2305.03645v1",econ.TH
34576,th,"A sender seeks to persuade a receiver by presenting evidence obtained through
a sequence of private experiments. The sender has complete flexibility in his
choice of experiments, contingent on the private experimentation history. The
sender can disclose each experiment outcome credibly, but cannot prove whether
he has disclosed everything. By requiring `continuous disclosure', I first show
that the private sequential experimentation problem can be reformulated into a
static one, in which the sender chooses a single signal to learn about the
state. Using this observation, I derive necessary conditions for a signal to be
chosen in equilibrium, and then identify the set of beliefs induced by such
signals. Finally, I characterize sender-optimal signals from the
concavification of his value function constrained to this set.","Private Experimentation, Data Truncation, and Verifiable Disclosure",2023-05-07 12:38:12,Yichuan Lou,"http://arxiv.org/abs/2305.04231v1, http://arxiv.org/pdf/2305.04231v1",econ.TH
34578,th,"General equilibrium, the cornerstone of modern economics and finance, rests
on assumptions many markets do not meet. Spectrum auctions, electricity
markets, and cap-and-trade programs for resource rights often feature
non-convexities in preferences or production that can cause non-existence of
Walrasian equilibrium and render general equilibrium vacuous. Yquilibrium
complements general equilibrium with an optimization approach to (non-) convex
economies that does not require perfect competition. Yquilibrium coincides with
Walrasian equilibrium when the latter exists. Yquilibrium exists even when
Walrasian equilibrium does not, and produces optimal allocations subject to
linear, anonymous, and (approximately) utility-clearing prices.",Yquilibrium: A Theory for (Non-) Convex Economies,2023-05-10 18:40:59,Jacob K Goeree,"http://arxiv.org/abs/2305.06256v1, http://arxiv.org/pdf/2305.06256v1",econ.TH
34579,th,"We cardinally and ordinally rank distribution functions (CDFs). We present a
new class of statistics, maximal adjusted quantiles, and show that a statistic
is invariant with respect to cardinal shifts, preserves least upper bounds with
respect to the first order stochastic dominance relation, and is lower
semicontinuous if and only if it is a maximal adjusted quantile. A dual result
is provided, as are ordinal results. Preservation of least upper bounds is
given several interpretations, including one that relates to changes in tax
brackets, and one that relates to valuing options composed of two assets.",Multiple Adjusted Quantiles,2023-05-10 20:58:03,"Christopher P. Chambers, Alan D. Miller","http://arxiv.org/abs/2305.06354v1, http://arxiv.org/pdf/2305.06354v1",econ.TH
34580,th,"This paper studies the equilibrium behavior in contests with stochastic
progress. Participants have access to a safe action that makes progress
deterministically, but they can also take risky moves that stochastically
influence their progress towards the goal and thus their relative position. In
the unique well-behaved Markov perfect equilibrium of this dynamic contest, the
follower drops out if the leader establishes a substantial lead, but resorts to
""Hail Marys"" beforehand: no matter how low the return of the risky move is, the
follower undertakes it in an attempt to catch up. Moreover, if the risky move has
a medium return (between high and low), the leader will also adopt it when the
follower is close to dropping out - an interesting preemptive motive. We also
examine the impact of such equilibrium behavior on the optimal prize
allocation.",Hail Mary Pass: Contests with Stochastic Progress,2023-05-12 06:15:31,Chang Liu,"http://arxiv.org/abs/2305.07218v1, http://arxiv.org/pdf/2305.07218v1",econ.TH
34581,th,"Is incentive compatibility still necessary for implementation if we relax the
rational expectations assumption? This paper proposes a generalized model of
implementation that does not assume agents hold rational expectations and
characterizes the class of solution concepts requiring Bayesian Incentive
Compatibility (BIC) for full implementation. Surprisingly, for a broad class of
solution concepts, full implementation of functions still requires BIC even if
rational expectations do not hold. This finding implies that some classical
results, such as the impossibility of efficient bilateral trade (Myerson &
Satterthwaite, 1983), hold for a broader range of non-equilibrium solution
concepts, confirming their relevance even in boundedly rational setups.",Mechanism Design without Rational Expectations,2023-05-12 16:36:55,Giacomo Rubbini,"http://arxiv.org/abs/2305.07472v2, http://arxiv.org/pdf/2305.07472v2",econ.TH
34582,th,"We study group knowledge taking into account that members of a group may not
be perfect reasoners. Our focus is on distributed knowledge, which is what
privately informed agents can come to know by communicating freely with one
another and sharing everything they know. In descending order of reasoning
skills, agents can be fully introspective, positively introspective or
non-introspective. A fully introspective agent satisfies both positive and
negative introspection: When she knows a thing, she knows that she knows it;
when she does not know a thing, she knows that she does not know it. A
positively introspective agent may fail to satisfy negative introspection. A
non-introspective agent may fail to satisfy both properties.
  We introduce revision operators and revision types to model the inference
making process that leads to distributed knowledge. Since group members may be
differently introspective, inference making turns out to be order dependent.
Then we produce two equivalent characterizations of distributed knowledge: one
in terms of knowledge operators and the other in terms of possibility
relations, i.e., binary relations. We show that there are two qualitatively
different cases of how distributed knowledge is attained. In the first,
distributed knowledge is determined by any group member who can replicate all
the inferences that anyone else in the group makes. In the second case, no
member can replicate all the inferences that are made in the group. This is due
to different levels of introspection across group members and to the order
dependence of inference making. As a result, distributed knowledge is
determined by any two group members who can jointly replicate what anyone else
infers. This case is a type of wisdom-of-the-crowd effect, in which the group knows
more than what any of its members can conceivably know by having access to all
the information available in the group.",Group knowledge and individual introspection,2023-05-15 18:43:45,Michele Crescenzi,"http://arxiv.org/abs/2305.08729v2, http://arxiv.org/pdf/2305.08729v2",econ.TH
34583,th,"This paper describes a new method of proportional representation that uses
approval voting, known as COWPEA (Candidates Optimally Weighted in Proportional
Election using Approval voting). COWPEA optimally elects an unlimited number of
candidates with potentially different weights to a body, rather than giving a
fixed number equal weight. A version that elects a fixed number of candidates
with equal weight does exist, but it is non-deterministic, and is known as
COWPEA Lottery. This is the only proportional method known to pass
monotonicity, Independence of Irrelevant Ballots, and Independence of
Universally Approved Candidates. There are also ways to convert COWPEA and
COWPEA Lottery to score or graded voting methods.",COWPEA (Candidates Optimally Weighted in Proportional Election using Approval voting),2023-04-03 23:13:09,Toby Pereira,"http://arxiv.org/abs/2305.08857v3, http://arxiv.org/pdf/2305.08857v3",econ.TH
34584,th,"Hierarchies of conditional beliefs (Battigalli and Siniscalchi 1999) play a
central role for the epistemic analysis of solution concepts in sequential
games. They are modelled by type structures, which allow the analyst to
represent the players' hierarchies without specifying an infinite sequence of
conditional beliefs. Here, we study type structures that satisfy a ""richness""
property, called completeness. This property is defined on the type structure
alone, without explicit reference to hierarchies of beliefs or other type
structures. We provide sufficient conditions under which a complete type
structure represents all hierarchies of conditional beliefs. In particular, we
present an extension of the main result in Friedenberg (2010) to type
structures with conditional beliefs. KEYWORDS: Conditional probability systems,
hierarchies of beliefs, type structures, completeness, terminality. JEL: C72,
D80",Complete Conditional Type Structures,2023-05-15 21:22:09,Nicodemo De Vito,"http://arxiv.org/abs/2305.08940v2, http://arxiv.org/pdf/2305.08940v2",econ.TH
34591,th,"We study the testable implications of models of dynamically inconsistent
choices when planned choices are unobservable, and thus only ""on path"" data is
available. First, we discuss the approach in Blow, Browning and Crawford
(2021), who characterize first-order rationalizability of the model of
quasi-hyperbolic discounting. We show that the first-order approach does not
guarantee rationalizability by means of the quasi-hyperbolic model. This
motivates consideration of an abstract model of intertemporal choice, under
which we provide a characterization of different behavioral models -- including
the naive and sophisticated paradigms of dynamically inconsistent choice.",Revealed preferences for dynamically inconsistent models,2023-05-23 17:52:09,"Federico Echenique, Gerelt Tserenjigmid","http://arxiv.org/abs/2305.14125v2, http://arxiv.org/pdf/2305.14125v2",econ.TH
34585,th,"In centralized mechanisms and platforms, participants do not fully observe
each others' type reports. Hence, the central authority (the designer) may
deviate from the promised mechanism without the participants being able to
detect these deviations. In this paper, we develop a theory of auditability for
allocation and social choice problems. Namely, we measure a mechanism's
auditability as the size of information needed to detect deviations from it. We
apply our theory to study auditability in a variety of settings. For
priority-based allocation and auction problems, we reveal stark contrasts
between the auditabilities of some prominent mechanisms. For house allocation
problems, we characterize a class of dictatorial rules that are maximally
auditable. For voting problems, we provide simple characterizations of
dictatorial and majority voting rules through auditability. Finally, for choice
with affirmative action problems, we study the auditability properties of
different reserves rules.",A Theory of Auditability for Allocation and Social Choice Mechanisms,2023-05-16 12:41:56,"Aram Grigoryan, Markus Möller","http://arxiv.org/abs/2305.09314v2, http://arxiv.org/pdf/2305.09314v2",econ.TH
34586,th,"Coordination games admit two types of equilibria: pure equilibria, where all
players successfully coordinate their actions, and mixed equilibria, where
players frequently experience miscoordination. The existing literature shows
that under many evolutionary dynamics, populations converge to a pure
equilibrium from almost any initial distribution of actions. By contrast, we
show that under plausible learning dynamics, where agents observe the actions
of a random sample of their opponents and adjust their strategies accordingly,
stable miscoordination can arise when there is heterogeneity in the sample
sizes. This occurs when some agents make decisions based on small samples
(anecdotal evidence) while others rely on large samples. Finally, we
demonstrate the empirical relevance of our results in a bargaining application.",Heterogeneous Noise and Stable Miscoordination,2023-05-17 18:38:52,"Srinivas Arigapudi, Yuval Heller, Amnon Schreiber","http://arxiv.org/abs/2305.10301v1, http://arxiv.org/pdf/2305.10301v1",econ.TH
34587,th,"In many centralized labor markets candidates interview with potential
employers before matches are formed through a clearinghouse. One prominent
example is the market for medical residencies and fellowships, which in recent
years has had a large increase in the number of interviews. There have been
numerous efforts to reduce the cost of interviewing in these markets using a
variety of signalling mechanisms; however, the theoretical properties of these
mechanisms have not been systematically studied in models with rich
preferences. In this paper we give theoretical guarantees for a variety of
mechanisms, finding that these mechanisms must properly balance competition.
  We consider a random market in which agents' latent preferences are based on
observed qualities, personal taste and (ex post) interview shocks and assume
that following an interview mechanism a final stable match is generated with
respect to preferences over interview partners. We study a novel many-to-many
interview match mechanism to coordinate interviews and show that, when suitably
designed, the interview match yields desirable properties with relatively few
interviews.
  We find that under the interview matching mechanism with a limit of $k$
interviews per candidate and per position, the fraction of positions that are
unfilled vanishes quickly with $k$. Moreover the ex post efficiency grows
rapidly with $k$, and reporting sincere pre-interview preferences to this
mechanism is an $\epsilon$-Bayes Nash equilibrium. Finally, we compare the
performance of the interview match to other signalling and coordination
mechanisms from the literature.",Interviewing Matching in Random Markets,2023-05-19 02:56:47,"Maxwell Allman, Itai Ashlagi","http://arxiv.org/abs/2305.11350v2, http://arxiv.org/pdf/2305.11350v2",econ.TH
34588,th,"The over-and-above choice rule is the prominent selection procedure to
implement affirmative action. In India, it is legally mandated to allocate
public school seats and government job positions. This paper presents an
axiomatic characterization of the over-and-above choice rule by rigorously
stating policy goals as formal axioms. Moreover, we characterize the deferred
acceptance mechanism coupled with the over-and-above choice rules for
centralized marketplaces.",The Over-and-Above Implementation of Reserve Policy in India,2023-05-19 18:45:05,"Orhan Aygün, Bertan Turhan","http://arxiv.org/abs/2305.11758v2, http://arxiv.org/pdf/2305.11758v2",econ.TH
34589,th,"In a many-to-one matching market with substitutable preferences, we analyze
the game induced by a stable matching rule. We show that any stable matching
rule implements the individually rational correspondence in Nash equilibrium
when both sides of the market play strategically. We also show that when only
workers play strategically in Nash equilibrium and when firms' preferences
satisfy the law of aggregate demand, any stable matching rule implements the
stable correspondence in Nash equilibrium.",Nash implementation in a many-to-one matching market,2023-05-23 14:32:18,"Noelia Juarez, Paola B. Manasero, Jorge Oviedo","http://arxiv.org/abs/2305.13956v2, http://arxiv.org/pdf/2305.13956v2",econ.TH
34590,th,"This paper proposes an addition to the firm-based perspective on
intra-industry profitability differentials by modelling a business organisation
as a complex adaptive system. The presented agent-based model introduces an
endogenous similarity-based social network and employees' reactions to dynamic
management strategies informed by key company benchmarks. The value-based
decision-making of employees shapes the behaviour of others through their
perception of social norms from which a corporate culture emerges. These
elements induce intertwined feedback mechanisms which lead to unforeseen
profitability outcomes. The simulations reveal that variants of extreme
adaptation of management style yield higher profitability in the long run than
the more moderate alternatives. Furthermore, we observe convergence towards a
dominant management strategy with low intensity in monitoring efforts as well
as high monetary incentivisation of cooperative behaviour. The results suggest
that measures increasing the connectedness of the workforce across all four
value groups might be advisable to escape potential lock-in situations and thus
raise profitability. A further positive impact on profitability can be achieved
through knowledge about the distribution of personal values among a firm's
employees. Choosing appropriate and enabling management strategies, and
sticking to them in the long run, can support the realisation of the inherent
self-organisational capacities of the workforce, ultimately leading to higher
profitability through cultural stability.",The Complexity of Corporate Culture as a Potential Source of Firm Profit Differentials,2023-05-23 16:01:15,"Frederik Banning, Jessica Reale, Michael Roos","http://arxiv.org/abs/2305.14029v2, http://arxiv.org/pdf/2305.14029v2",econ.TH
34593,th,"We define notions of cautiousness and cautious belief to provide epistemic
conditions for iterated admissibility in finite games. We show that iterated
admissibility characterizes the behavioral implications of ""cautious
rationality and common cautious belief in cautious rationality"" in a terminal
lexicographic type structure. For arbitrary type structures, the behavioral
implications of these epistemic assumptions are characterized by the solution
concept of self-admissible set (Brandenburger, Friedenberg and Keisler 2008).
We also show that analogous conclusions hold under alternative epistemic
assumptions, in particular if cautiousness is ""transparent"" to the players.
  KEYWORDS: Epistemic game theory, iterated admissibility, weak dominance,
lexicographic probability systems. JEL: C72.",Cautious Belief and Iterated Admissibility,2023-05-24 19:42:32,"Emiliano Catonini, Nicodemo De Vito","http://arxiv.org/abs/2305.15330v1, http://arxiv.org/pdf/2305.15330v1",econ.TH
34594,th,"We study firm-quasi-stability in the framework of many-to-many matching with
contracts under substitutable preferences. We establish various links between
firm-quasi-stability and stability, and give new insights into the existence
and lattice property of stable allocations. In addition, we show that
firm-quasi-stable allocations appear naturally when the stability of the market
is disrupted by the entry of new firms or the retirement of some workers, and
introduce a generalized deferred acceptance algorithm to show that the market
can regain stability from firm-quasi-stable allocations by a decentralized
process of offers and acceptances. Moreover, it is shown that the entry of new
firms or the retirement of workers cannot be bad for any of the incumbent
workers and cannot be good for any of the original firms, while each new firm
gets its optimal outcome under stable allocations whenever the law of aggregate
demand holds.",Firm-quasi-stability and re-equilibration in matching markets with contracts,2023-05-29 11:13:11,Yi-You Yang,"http://arxiv.org/abs/2305.17948v3, http://arxiv.org/pdf/2305.17948v3",econ.TH
34595,th,"We study an information design problem with continuous state and discrete
signal space. Under convex value functions, the optimal information structure
is interval-partitional and exhibits a dual expectations property: each induced
signal is the conditional mean (taken under the prior density) of each
interval; each interval cutoff is the conditional mean (taken under the value
function curvature) of the interval formed by neighboring signals. This
property enables examination into which part of the state space is more finely
partitioned and facilitates comparative statics analysis. The analysis can be
extended to general value functions and adapted to study coarse mechanism
design.",Coarse Information Design,2023-05-29 14:25:27,"Qianjun Lyu, Wing Suen, Yimeng Zhang","http://arxiv.org/abs/2305.18020v3, http://arxiv.org/pdf/2305.18020v3",econ.TH
34596,th,"When inferring the causal effect of one variable on another from
correlational data, a common practice by professional researchers as well as
lay decision makers is to control for some set of exogenous confounding
variables. Choosing an inappropriate set of control variables can lead to
erroneous causal inferences. This paper presents a model of lay decision makers
who use long-run observational data to learn the causal effect of their actions
on a payoff-relevant outcome. Different types of decision makers use different
sets of control variables. I obtain upper bounds on the equilibrium welfare
loss due to wrong causal inferences, for various families of data-generating
processes. The bounds depend on the structure of the type space. When types are
""ordered"" in a certain sense, the equilibrium condition greatly reduces the
cost of wrong causal inference due to poor controls.",Behavioral Causal Inference,2023-05-30 13:10:39,Ran Spiegler,"http://arxiv.org/abs/2305.18916v1, http://arxiv.org/pdf/2305.18916v1",econ.TH
34597,th,"Kin selection and direct reciprocity are two most basic mechanisms for
promoting cooperation in human society. Generalizing the standard models of the
multi-player Prisoner's Dilemma and the Public Goods games for heterogeneous
populations, we study the effects of genetic relatedness on cooperation in the
context of repeated interactions. Two sets of interrelated results are
established: a set of analytical results focusing on the subgame perfect
equilibrium and a set of agent-based simulation results based on an
evolutionary game model. We show that in both cases increasing genetic
relatedness does not always facilitate cooperation. Specifically, kinship can
hinder the effectiveness of reciprocity in two ways. First, the condition for
sustaining cooperation through direct reciprocity is harder to satisfy when
relatedness increases in an intermediate range. Second, full cooperation is
impossible to sustain for a medium-high range of relatedness values. Moreover,
individuals with low cost-benefit ratios can end up with lower payoffs than
their groupmates with high cost-benefit ratios. Our results point to the
importance of explicitly accounting for within-population heterogeneity when
studying the evolution of cooperation.",Kinship can hinder cooperation in heterogeneous populations,2023-05-30 16:37:49,"Boyu Zhang, Yali Dong, Cheng-Zhong Qin, Sergey Gavrilets","http://arxiv.org/abs/2305.19026v2, http://arxiv.org/pdf/2305.19026v2",econ.TH
34598,th,"The current Proposer-Builder Separation (PBS) equilibrium has several
builders with different backgrounds winning blocks consistently. This paper
considers how that equilibrium will shift when transactions are sold privately
via order flow auctions (OFAs) rather than forwarded directly to the public
mempool. We discuss a novel model that highlights the augmented value of
private order flow for integrated builder searchers. We show that private order
flow is complementary to top-of-block opportunities, and therefore integrated
builder-searchers are more likely to participate in OFAs and outbid
non-integrated builders. They will then parlay access to these private transactions
into an advantage in the PBS auction, winning blocks more often and extracting
higher profits than non-integrated builders. To validate our main assumptions,
we construct a novel dataset pairing post-merge PBS outcomes with realized
12-second volatility on a leading CEX (Binance). Our results show that
integrated builder-searchers are more likely to win in the PBS auction when
realized volatility is high, suggesting that indeed such builders have an
advantage in extracting top-of-block opportunities. Our findings suggest that
modifying PBS to disentangle the intertwined dynamics between top-of-block
extraction and private order flow would pave the way for a fairer and more
decentralized Ethereum.",The Centralizing Effects of Private Order Flow on Proposer-Builder Separation,2023-05-30 18:54:07,"Tivas Gupta, Mallesh M Pai, Max Resnick","http://arxiv.org/abs/2305.19150v2, http://arxiv.org/pdf/2305.19150v2",econ.TH
34643,th,"The classic two-sided many-to-one job matching model assumes that firms treat
workers as substitutes and workers ignore colleagues when choosing where to
work. Relaxing these assumptions may lead to nonexistence of stable matchings.
However, matching is often not a static allocation, but an ongoing process with
long-lived firms and short-lived workers. We show that stability is always
guaranteed dynamically when firms are patient, even with complementarities in
firm technologies and peer effects in worker preferences. While no-poaching
agreements are anti-competitive, they can maintain dynamic stability in markets
that are otherwise unstable, which may contribute to their prevalence in labor
markets.",Self-Enforced Job Matching,2023-08-26 17:58:27,"Ce Liu, Ziwei Wang, Hanzhe Zhang","http://arxiv.org/abs/2308.13899v1, http://arxiv.org/pdf/2308.13899v1",econ.TH
34599,th,"Liquid democracy is a system that combines aspects of direct democracy and
representative democracy by allowing voters to either vote directly themselves,
or delegate their votes to others. In this paper we study the information
aggregation properties of liquid democracy in a setting with heterogeneously
informed truth-seeking voters -- who want the election outcome to match an
underlying state of the world -- and partisan voters. We establish that liquid
democracy admits equilibria which improve welfare and information aggregation
over direct and representative democracy when voters' preferences and
information precisions are publicly or privately known. Liquid democracy also
admits equilibria which do worse than the other two systems. We discuss
features of efficient and inefficient equilibria and provide conditions under
which voters can more easily coordinate on the efficient equilibria in liquid
democracy than the other two systems.",Information aggregation with delegation of votes,2023-06-06 21:44:21,"Amrita Dhillon, Grammateia Kotsialou, Dilip Ravindran, Dimitrios Xefteris","http://arxiv.org/abs/2306.03960v1, http://arxiv.org/pdf/2306.03960v1",econ.TH
34600,th,"By endowing the class of tops-only and efficient social choice rules with a
dual order structure that exploits the trade-off between the different degrees
of manipulability and dictatorial power that rules allow agents to have, we provide a
proof of the Gibbard-Satterthwaite Theorem.",Trade-off between manipulability and dictatorial power: a proof of the Gibbard-Satterthwaite Theorem,2023-06-07 19:37:39,Agustin G. Bonifacio,"http://arxiv.org/abs/2306.04587v2, http://arxiv.org/pdf/2306.04587v2",econ.TH
34601,th,"We derive robust predictions in games involving flexible information
acquisition, also known as rational inattention (Sims 2003). These predictions
remain accurate regardless of the specific methods players employ to gather
information. Compared to scenarios where information is predetermined, rational
inattention reduces welfare and introduces additional constraints on behavior.
We show these constraints generically do not bind; the two knowledge regimes
are behaviorally indistinguishable in most environments. Yet, we demonstrate
the welfare difference they generate is substantial: optimal policy depends on
whether one assumes information is given or acquired. We provide the necessary
tools for policy analysis in this context.",Robust Predictions in Games with Rational Inattention,2023-06-16 19:46:36,"Tommaso Denti, Doron Ravid","http://arxiv.org/abs/2306.09964v1, http://arxiv.org/pdf/2306.09964v1",econ.TH
34602,th,"The weak axiom of revealed preference (WARP) ensures that the revealed
preference (i) is a preference relation (i.e., it is complete and transitive)
and (ii) rationalizes the choices. However, when WARP fails, either one of
these two properties is violated, but it is unclear which one it is. We provide
an alternative characterization of WARP by showing that WARP is equivalent to
the conjunction of two axioms each of which separately guarantees (i) and (ii).",Disentangling Revealed Preference From Rationalization by a Preference,2023-06-21 01:15:28,Pablo Schenone,"http://arxiv.org/abs/2306.11923v2, http://arxiv.org/pdf/2306.11923v2",econ.TH
34603,th,"We distinguished between the expected and actual profit of a firm. We
proposed that, beyond maximizing profit, a firm's goal also encompasses
minimizing the gap between expected and actual profit. Firms strive to enhance
their capability to transform projects into reality through a process of trial
and error, evident as a cyclical iterative optimization process. To
characterize this iterative mechanism, we developed the Skill-Task Matching
Model, extending the task approach in both multidimensional and iterative
manners. We vectorized jobs and employees into task and skill vector spaces,
respectively, while treating production techniques as a skill-task matching
matrix and business strategy as a task value vector. In our model, the process
of stabilizing production techniques and optimizing business strategies
corresponds to the recalibration of parameters within the skill-task matching
matrix and the task value vector. We constructed a feed-forward neural network
algorithm to run this model and demonstrated how it can augment operational
efficiency.","The Skill-Task Matching Model: Mechanism, Model Structure, and Algorithm",2023-06-21 14:10:42,"Da Xie, WeiGuo Yang","http://arxiv.org/abs/2306.12176v5, http://arxiv.org/pdf/2306.12176v5",econ.TH
34604,th,"I consider a package assignment problem where multiple units of indivisible
items are allocated among individuals. The seller specifies allocation
preferences as cost savings on packages. I propose a social welfare maximising,
sealed-bid auction with a novel cost function graph to express seller
preferences. It facilitates the use of linear programming to find anonymous,
competitive, package-linear prices. If agents bid truthfully, these prices
support a Walrasian equilibrium. I provide necessary and sufficient conditions,
and additional sufficient conditions, for the existence of Walrasian
equilibria. The auction guarantees fair and transparent pricing and admits
preferences over the market concentration.",Selling Multiple Complements with Packaging Costs,2023-06-25 16:44:42,Simon Finster,"http://arxiv.org/abs/2306.14247v1, http://arxiv.org/pdf/2306.14247v1",econ.TH
34605,th,"Equal pay laws increasingly require that workers doing ""similar"" work are
paid equal wages within firm. We study such ""equal pay for similar work"" (EPSW)
policies theoretically and test our model's predictions empirically using
evidence from a 2009 Chilean EPSW. When EPSW only binds across protected class
(e.g., no woman can be paid less than any similar man, and vice versa), firms
segregate their workforce by gender. When there are more men than women in a
labor market, EPSW increases the gender wage gap. By contrast, EPSW that is not
based on protected class can decrease the gender wage gap.",Equal Pay for Similar Work,2023-06-29 20:14:47,"Diego Gentile Passaro, Fuhito Kojima, Bobak Pakzad-Hurson","http://arxiv.org/abs/2306.17111v1, http://arxiv.org/pdf/2306.17111v1",econ.TH
34606,th,"Recurring auctions are ubiquitous for selling durable assets like artworks
and homes, with follow-up auctions held for unsold items. We investigate such
auctions theoretically and empirically. Theoretical analysis demonstrates that
recurring auctions outperform single-round auctions when buyers face entry
costs, enhancing efficiency and revenue due to sorted entry of potential
buyers. Optimal reserve price sequences are characterized. Empirical findings
from home foreclosure auctions in China reveal significant annual gains in
efficiency (3.40 billion USD, 16.60%) and revenue (2.97 billion USD, 15.92%)
using recurring auctions compared to single-round auctions. Implementing
optimal reserve prices can further improve efficiency (3.35%) and revenue
(3.06%).",Recurring Auctions with Costly Entry: Theory and Evidence,2023-06-30 04:23:26,"Shanglyu Deng, Qiyao Zhou","http://arxiv.org/abs/2306.17355v2, http://arxiv.org/pdf/2306.17355v2",econ.TH
34658,th,"The decisive-set and pivotal-voter approaches have been used to prove Arrow's
impossibility theorem. This study presents a proof using a proof calculus in
logic. A valid deductive inference between the premises, the axioms and
conditions of the theorem, and the conclusion, dictatorship, guarantees that
every profile of all possible social welfare functions is examined, thereby
establishing the theorem.",A Reexamination of Proof Approaches for the Impossibility Theorem,2023-09-13 09:58:07,Kazuya Yamamoto,"http://arxiv.org/abs/2309.06753v3, http://arxiv.org/pdf/2309.06753v3",econ.TH
34607,th,"In this paper, we have considered the dense rank for assigning positions to
alternatives in weak orders. If we arrange the alternatives in tiers (i.e.,
indifference classes), the dense rank assigns position 1 to all the
alternatives in the top tier, 2 to all the alternatives in the second tier, and
so on. We have proposed a formal framework to analyze the dense rank when
compared to other well-known position operators such as the standard, modified
and fractional ranks. As the main results, we have provided two different
axiomatic characterizations which determine the dense rank by considering
position invariance conditions along horizontal extensions (duplication), as
well as through vertical reductions and movements (truncation, and upwards or
downwards independency).",Two characterizations of the dense rank,2023-06-30 14:04:17,"José Luis García-Lapresta, Miguel Martínez-Panero","http://arxiv.org/abs/2306.17546v1, http://arxiv.org/pdf/2306.17546v1",econ.TH
34608,th,"In a many-to-one matching model, with or without contracts, where doctors'
preferences are private information and hospitals' preferences are
substitutable and public information, any stable matching rule could be
manipulated for doctors. Since manipulations can not be completely avoided, we
consider the concept of \textit{obvious manipulations} and look for stable
matching rules that prevent at least such manipulations (for doctors). For the
model with contracts, we prove that: \textit{(i)} the doctor-optimal matching
rule is non-obviously manipulable and \textit{(ii)} the hospital-optimal
matching rule is obviously manipulable, even in the one-to-one model. In
contrast to \textit{(ii)}, for a many-to-one model without contracts, we prove
that the hospital-optimal matching rule is not obviously manipulable.
Furthermore, if we focus on quantile
stable rules, then we prove that the doctor-optimal matching rule is the only
non-obviously manipulable quantile stable rule.",Obvious Manipulations in Matching with and without Contracts,2023-06-30 19:25:47,"R. Pablo Arribillaga, E. Pepa Risma","http://arxiv.org/abs/2306.17773v1, http://arxiv.org/pdf/2306.17773v1",econ.TH
34609,th,"We study the long-run behavior of land prices when land plays the dual role
of factor of production and store of value. In modern economies where
technological progress is faster in non-land sectors, when the elasticity of
substitution in production exceeds 1 at high input levels (which always holds
if non-land factors do not fully depreciate), unbalanced growth occurs and land
becomes overvalued on the long-run trend relative to the fundamental value
defined by the present value of land rents. Around the trend, land prices
exhibit recurrent stochastic fluctuations, with expansions and contractions in
the size of land overvaluation.","Unbalanced Growth, Elasticity of Substitution, and Land Overvaluation",2023-07-01 17:23:44,"Tomohiro Hirano, Alexis Akira Toda","http://arxiv.org/abs/2307.00349v2, http://arxiv.org/pdf/2307.00349v2",econ.TH
34610,th,"This paper introduces and formalizes the classical view on supply and demand,
which, we argue, has an integrity independent and distinct from the
neoclassical theory. Demand and supply, before the marginal revolution, are
defined not by an unobservable criterion such as a utility function, but by an
observable monetary variable, the reservation price: the buyer's (maximum)
willingness to pay (WTP) value (a potential price) and the seller's (minimum)
willingness to accept (WTA) value (a potential price) at the marketplace.
Market demand and supply are the cumulative distribution of the buyers' and
sellers' reservation prices, respectively. This WTP-WTA classical view of
supply and demand formed the means whereby market participants were motivated
in experimental economics although experimentalists (trained in neoclassical
economics) were not cognizant of their link to the past. On this foundation was
erected a vast literature on the rules of trading for a host of institutions,
modern and ancient. This paper documents textually this reappraisal of
classical economics and then formalizes it mathematically. A follow-up paper
will articulate a theory of market price formation rooted in this classical
view on supply and demand and in experimental findings on market behavior.",The Classical Theory of Supply and Demand,2023-07-01 22:27:00,"Sabiou Inoua, Vernon Smith","http://arxiv.org/abs/2307.00413v1, http://arxiv.org/pdf/2307.00413v1",econ.TH
34611,th,"We study the problem of sharing the revenue obtained by selling museum passes
from the axiomatic perspective. In this setting, we propose replacing the usual
dummy axiom with a milder requirement: order preservation with dummies. This
new axiom formalizes the philosophical idea that even null agents/museums may
have the right to receive a minimum allocation in a sharing situation. By
replacing dummy with order preservation with dummies, we characterize several
families of rules, which are convex combinations of the uniform and Shapley
approaches. Our findings generalize several existing results in the literature.
Also, we consider a domain of problems that is richer than the domain proposed
by Ginsburgh and Zang (2003) in their seminal paper on the museum pass problem.",Order preservation with dummies in the museum pass problem,2023-07-02 20:29:32,"Ricardo Martínez, Joaquín Sánchez-Soriano","http://arxiv.org/abs/2307.00622v1, http://arxiv.org/pdf/2307.00622v1",econ.TH
34612,th,"We develop a model of wishful thinking that incorporates the costs and
benefits of biased beliefs. We establish the connection between distorted
beliefs and risk, revealing how wishful thinking can be understood in terms of
risk measures. Our model accommodates extreme beliefs, allowing
wishful-thinking decision-makers to assign zero probability to undesirable
states and positive probability to otherwise impossible states. Furthermore, we
establish that wishful thinking behavior is equivalent to quantile-utility
maximization for the class of threshold beliefs distortion cost functions.
Finally, exploiting this equivalence, we derive conditions under which an
optimistic decision-maker prefers skewed and riskier choices.",Wishful Thinking is Risky Thinking: A Statistical-Distance Based Approach,2023-07-05 19:44:27,"Jarrod Burgh, Emerson Melo","http://arxiv.org/abs/2307.02422v1, http://arxiv.org/pdf/2307.02422v1",econ.TH
34613,th,"We study games of chance (e.g., pokers, dices, horse races) in the form of
agents' first-order posterior beliefs about game outcomes. We ask for any
profile of agents' posterior beliefs, is there a game that can generate these
beliefs? We completely characterize all feasible joint posterior beliefs from
these games. The characterization enables us to find a new variant of Border's
inequalities (Border, 1991), which we call a belief-based characterization of
Border's inequalities. It also leads to a generalization of Aumann's Agreement
Theorem. We show that the characterization results are powerful in bounding the
correlation of agents' joint posterior beliefs.",A Belief-Based Characterization of Reduced-Form Auctions,2023-07-09 03:35:34,Xu Lang,"http://arxiv.org/abs/2307.04070v1, http://arxiv.org/pdf/2307.04070v1",econ.TH
34614,th,"In the pursuit of understanding collective intelligence, the Hong-Page
Theorems have been presented as cornerstones of the interplay between diversity
and ability. However, upon rigorous examination, there seem to be inherent
flaws and misinterpretations within these theorems. Hélène Landemore's
application of these theorems in her epistemic argument and her political
proposal showcases a rather unsettling misuse of mathematical principles. This
paper critically dissects the Hong-Page Theorems, revealing significant
inconsistencies and oversights, and underscores the indispensable role of
'ability' in group problem-solving contexts. This paper aims not to undermine
the importance of diversity, but rather to highlight the dangers of misusing
mathematical principles and the necessity for a more nuanced comprehension of
mathematical results when applying them to social sciences.",Fatal errors and misuse of mathematics in the Hong-Page Theorem and Landemore's epistemic argument,2023-07-10 20:17:22,Álvaro Romaniega,"http://arxiv.org/abs/2307.04709v2, http://arxiv.org/pdf/2307.04709v2",econ.TH
34615,th,"$S$ equilibrium synthesizes a century of game-theoretic modeling. $S$-beliefs
determine choices as in the refinement literature and level-$k$, without
anchoring on Nash equilibrium or imposing ad hoc belief formation. $S$-choices
allow for mistakes as in QRE, without imposing rational expectations. $S$
equilibrium is explicitly set-valued to avoid the common practice of selecting
the best prediction from an implicitly defined set of unknown, and unaccounted
for, size. $S$-equilibrium sets vary with a complexity parameter, offering a
trade-off between accuracy and precision unlike in $M$ equilibrium. Simple
""areametrics"" determine the model's parameter and show that choice sets with a
relative size of 5 percent capture 58 percent of the data.
Goodness-of-fit tests applied to data from a broad array of experimental games
confirm $S$ equilibrium's ability to predict behavior in and out of sample. In
contrast, choice (belief) predictions of level-$k$ and QRE are rejected in most
(all) games.",S Equilibrium: A Synthesis of (Behavioral) Game Theory,2023-07-12 20:13:21,"Jacob K Goeree, Bernardo Garcia-Pola","http://arxiv.org/abs/2307.06309v1, http://arxiv.org/pdf/2307.06309v1",econ.TH
34616,th,"We study the design of contracts that incentivize a researcher to conduct a
costly experiment, extending the work of Yoder (2022) from binary states to a
general state space. The cost is private information of the researcher. When
the experiment is observable, we find the optimal contract and show that higher
types choose more costly experiments, but not necessarily more Blackwell
informative ones. When only the experiment result is observable, the principal
can still achieve the same optimal outcome if and only if a certain
monotonicity condition with respect to types holds. Our analysis demonstrates
that the general case is qualitatively different than the binary one, but that
the contracting problem remains tractable.",Contracting with Heterogeneous Researchers,2023-07-14 23:59:49,Han Wang,"http://arxiv.org/abs/2307.07629v1, http://arxiv.org/pdf/2307.07629v1",econ.TH
34617,th,"We study the optimal method for rationing scarce resources through a queue
system. The designer controls agents' entry into a queue and their exit, their
service priority -- or queueing discipline -- as well as their information
about queue priorities, while providing them with the incentive to join the
queue and, importantly, to stay in the queue, when recommended by the designer.
Under a mild condition, the optimal mechanism induces agents to enter up to a
certain queue length and never removes any agents from the queue; serves them
according to a first-come-first-served (FCFS) rule; and provides them with no
information throughout the process beyond the recommendations they receive.
FCFS is also necessary for optimality in a rich domain. We identify a novel
role for queueing disciplines in regulating agents' beliefs and their dynamic
incentives and uncover a hitherto unrecognized virtue of FCFS in this regard.",Optimal Queue Design,2023-07-15 11:55:37,"Yeon-Koo Che, Olivier Tercieux","http://arxiv.org/abs/2307.07746v1, http://arxiv.org/pdf/2307.07746v1",econ.TH
34618,th,"Quantal response equilibrium (QRE), a statistical generalization of Nash
equilibrium, is a standard benchmark in the analysis of experimental data.
Despite its influence, nonparametric characterizations and tests of QRE are
unavailable beyond the case of finite games. We address this gap by completely
characterizing the set of QRE in a class of binary-action games with a
continuum of types. Our characterization provides sharp predictions in settings
such as global games, the volunteer's dilemma, and the compromise game.
Further, we leverage our results to develop nonparametric tests of QRE. As an
empirical application, we revisit the experimental data from Carrillo and
Palfrey (2009) on the compromise game.",Quantal Response Equilibrium with a Continuum of Types: Characterization and Nonparametric Identification,2023-07-16 14:40:06,"Evan Friedman, Duarte Gonçalves","http://arxiv.org/abs/2307.08011v1, http://arxiv.org/pdf/2307.08011v1",econ.TH
34619,th,"Comonotonicity (""same variation"") of random variables minimizes hedging
possibilities and has been widely used in many fields. Comonotonic restrictions
of traditional axioms have led to impactful inventions in decision models,
including Gilboa and Schmeidler's ambiguity models. This paper investigates
antimonotonicity (""opposite variation""), the natural counterpart to
comonotonicity, minimizing leveraging possibilities. Surprisingly,
antimonotonic restrictions of traditional axioms often do not give new models
but, instead, give generalized axiomatizations of existing ones. We, thus,
generalize: (a) classical axiomatizations of linear functionals through
Cauchy's equation; (b) as-if-risk-neutral pricing through no-arbitrage; (c)
subjective probabilities through bookmaking; (d) Anscombe-Aumann expected
utility; (e) risk aversion in Savage's subjective expected utility. In each
case, our generalizations show where the most critical tests of classical
axioms lie: in the antimonotonic cases (maximal hedges). We, finally, present
cases where antimonotonic restrictions do weaken axioms and lead to new models,
primarily for ambiguity aversion in nonexpected utility.",Antimonotonicity for Preference Axioms: The Natural Counterpart to Comonotonicity,2023-07-17 18:03:41,"Giulio Principi, Peter P. Wakker, Ruodu Wang","http://arxiv.org/abs/2307.08542v1, http://arxiv.org/pdf/2307.08542v1",econ.TH
34620,th,"Strategic uncertainty complicates policy design in coordination games. To
rein in strategic uncertainty, the Planner in this paper connects the problem
of policy design to that of equilibrium selection. We characterize the subsidy
scheme that induces coordination on a given outcome of the game as its unique
equilibrium. Optimal subsidies are unique, symmetric for identical players,
continuous functions of model parameters, and do not make the targeted
strategies strictly dominant for any one player; these properties differ
starkly from canonical results in the literature. Uncertainty about payoffs
impels policy moderation as overly aggressive intervention might itself induce
coordination failure.
  JEL codes: D81, D82, D83, D86, H20.
  Keywords: mechanism design, global games, contracting with externalities,
unique implementation.",Unraveling Coordination Problems,2023-07-17 18:15:33,Roweno J. R. K. Heijmans,"http://arxiv.org/abs/2307.08557v6, http://arxiv.org/pdf/2307.08557v6",econ.TH
34621,th,"Studying intra-industry trade involves theoretical explanations and empirical
methods to measure the phenomenon. Indicators have been developed to measure
the intensity of intra-industry trade, leading to theoretical models explaining
its determinants. It is essential to distinguish between horizontal and
vertical differentiation in empirical analyses. The determinants and
consequences of intra-industry trade depend on whether the traded products
differ in quality. A method for distinguishing between vertical and horizontal
differentiation involves comparing the unit value of exports with that of imports
within each industry's intra-industry trade. This approach has limitations, leading to the
need for an alternative method.",Horizontal and Vertical Differentiation: Approaching Endogenous Measurement in Intra-industry Trade,2023-07-20 10:44:24,Sourish Dutta,"http://arxiv.org/abs/2307.10660v2, http://arxiv.org/pdf/2307.10660v2",econ.TH
34622,th,"I consider a model of strategic experimentation where agents partake in risky
research with payoff externalities; a breakthrough grants the discoverer
(winner) a different payoff than the non-discoverers (losers). I characterize
the first-best solution and show that the noncooperative game is generically
inefficient. Simple contracts sharing payoffs between winner and losers restore
efficiency, even when actions are unobserved. Alternatively, if the winner's
identity is not contractible, contracting on effort only at the time of
breakthrough also restores efficiency. These results suggest that although
strategic experimentation generically entails inefficiency, sharing credit is a
robust and effective remedy.",Sharing Credit for Joint Research,2023-07-22 18:34:13,Nicholas Wu,"http://arxiv.org/abs/2307.12104v1, http://arxiv.org/pdf/2307.12104v1",econ.TH
34623,th,"We study the classic principal-agent model when the signal observed by the
principal is chosen by the agent. We fully characterize the optimal information
structure from an agent's perspective in a general moral hazard setting with
limited liability. Due to endogeneity of the contract chosen by the principal,
the agent's choice of information is non-trivial. We show that the agent's
problem can be mapped into a geometrical game between the principal and the
agent in the space of likelihood ratios. We use this representation result to
show that coarse contracts are sufficient: The agent can achieve her best with
binary signals. Additionally, we can characterize conditions under which the
agent is able to extract the entire surplus and implement the first-best
efficient allocation. Finally, we show that when effort and performance are
one-dimensional, under a general class of models, threshold signals are
optimal. Our theory can thus provide a rationale for coarseness of contracts
based on the bargaining power of the agent in negotiations.",Indicator Choice in Pay-for-Performance,2023-07-24 02:43:35,"Majid Mahzoon, Ali Shourideh, Ariel Zetlin-Jones","http://arxiv.org/abs/2307.12457v1, http://arxiv.org/pdf/2307.12457v1",econ.TH
34624,th,"Coordination facilitation and efficient decision-making are two essential
components of successful leadership. In this paper, we take an informational
approach and investigate how followers' information impacts coordination and
efficient leadership in a model featuring a leader and a team of followers. We
show that efficiency is achieved as the unique rationalizable outcome of the
game when followers possess sufficiently imprecise information. In contrast, if
followers have accurate information, the leader may fail to coordinate them
toward the desired outcome or even take an inefficient action herself. We
discuss the implications of the results for the role of leaders in the context
of financial fragility and crises.",It's Not Always the Leader's Fault: How Informed Followers Can Undermine Efficient Leadership,2023-07-26 01:31:09,"Panagiotis Kyriazis, Edmund Lou","http://arxiv.org/abs/2307.13841v3, http://arxiv.org/pdf/2307.13841v3",econ.TH
34625,th,"The concept of power among players can be expressed as a combination of their
utilities. A player who obeys another takes into account the utility of the
dominant one. Technically it is a matter of superimposing some weighted sum or
product function onto the individual utility function, where the weights can be
represented through directed graphs that reflect a situation of power among the
players. It is then possible to define some global indices of the system, such
as the level of hierarchy, mutualism and freedom, and measure their effects on
game equilibria.",Power relations in Game Theory,2023-07-26 15:51:20,Daniele De Luca,"http://arxiv.org/abs/2307.14170v1, http://arxiv.org/pdf/2307.14170v1",econ.TH
34626,th,"We study proliferation of an action in binary action network coordination
games that are generalized to include global effects. This captures important
aspects of proliferation of a particular action or narrative in online social
networks, providing a basis to understand their impact on societal outcomes.
Our model naturally captures complementarities among starting sets, network
resilience, and global effects, and highlights interdependence in channels
through which contagion spreads. We present new, natural, and computationally
tractable algorithms to define and compute equilibrium objects that facilitate
the general study of contagion in networks and prove their theoretical
properties. Our algorithms are easy to implement and help to quantify
relationships previously inaccessible due to computational intractability.
Using these algorithms, we study the spread of contagion in scale-free networks
with 1,000 players using millions of Monte Carlo simulations. Our analysis
provides quantitative and qualitative insight into the design of policies to
control or spread contagion in networks. The scope of application is enlarged
given the many other situations across different fields that may be modeled
using this framework.",Control and Spread of Contagion in Networks,2023-07-31 21:35:14,"John Higgins, Tarun Sabarwal","http://arxiv.org/abs/2308.00062v1, http://arxiv.org/pdf/2308.00062v1",econ.TH
34627,th,"We consider a repeated auction where the buyer's utility for an item depends
on the time that elapsed since his last purchase. We present an algorithm to
build the optimal bidding policy, and then, because optimal might be
impractical, we discuss the cost for the buyer of limiting himself to shading
policies.",Repeated Bidding with Dynamic Value,2023-08-03 16:36:31,"Benjamin Heymann, Alexandre Gilotte, Rémi Chan-Renous","http://arxiv.org/abs/2308.01755v1, http://arxiv.org/pdf/2308.01755v1",econ.TH
34628,th,"Tournament solutions play an important role within social choice theory and
the mathematical social sciences at large. We construct a tournament of order
36 for which the Banks set and the bipartisan set are disjoint. This implies
that refinements of the Banks set, such as the minimal extending set and the
tournament equilibrium set, can also be disjoint from the bipartisan set.",The Banks Set and the Bipartisan Set May be Disjoint,2023-08-03 20:15:34,"Felix Brandt, Florian Grundbacher","http://arxiv.org/abs/2308.01881v1, http://arxiv.org/pdf/2308.01881v1",econ.TH
34629,th,"We study a class of probabilistic cooperative games which can be treated as
an extension of the classical cooperative games with transferable utilities.
The coalitions have an exogenous probability of being realized. This
probability distribution is known beforehand, and the expected worth must be
distributed before the state is realized. We obtain
a value for this class of games and present three characterizations of this
value using natural extensions of the axioms used in the seminal axiomatization
of the Shapley value. The value, which we call the Expected Shapley value,
allocates the players their expected worth with respect to a probability
distribution.",The Expected Shapley value on a class of probabilistic games,2023-08-07 14:31:31,"Surajit Borkotokey, Sujata Gowala, Rajnish Kumar","http://arxiv.org/abs/2308.03489v1, http://arxiv.org/pdf/2308.03489v1",econ.TH
34630,th,"In Network games under cooperative framework, the position value is a link
based allocation rule. It is obtained from the Shapley value of an associated
cooperative game where the links of the network are considered players. The
Shapley value of each of the links is then divided equally among the players
who form those links. The inherent assumption is that the value is indifferent
to the weights of the players in the network. Depending on how much central a
player is in the network, or the ability of making links with other players
etc., for example, players can be considered to have weights. Thus, in such
situations, dividing the Shapley value equally among the players can be an
over-simplistic notion. We propose a generalised version of the position value:
the weighted position value that allocates the Shapley shares proportional to
the players' weights. These weights of the players are exogenously given. We
provide two axiomatic characterizations of our value. Finally, a bidding
mechanism is formulated to show that any sub-game perfect equilibrium (SPE) of
this mechanism coincides with the weighted position value.",Weighted position value for Network games,2023-08-07 14:40:46,"Niharika Kakoty, Surajit Borkotokey, Rajnish Kumar, Abhijit Bora","http://arxiv.org/abs/2308.03494v1, http://arxiv.org/pdf/2308.03494v1",econ.TH
34631,th,"Electing a committee of size k from m alternatives (k < m) is an interesting
problem under the multi-winner voting rules. However, very few committee
selection rules found in the literature consider the coalitional possibilities
among the alternatives that the voters believe that certain coalitions are more
effective and can more efficiently deliver desired outputs. To include such
possibilities, in this present study, we consider a committee selection problem
(or multi-winner voting problem) where voters are able to express their opinion
regarding interdependencies among alternatives. Using a dichotomous preference
scale termed generalized approval evaluation we construct an $m$-person
coalitional game which is more commonly called a cooperative game with
transferable utilities. To identify each alternative's score we use the Shapley
value (Shapley, 1953) of the cooperative game we construct for the purpose. Our
approach to the committee selection problem emphasizes an important issue
called the compatibility principle. Further, we show that the properties of the
Shapley value are also well suited to the committee selection context. We
explore several properties of the proposed committee selection rule.",How to choose a Compatible Committee?,2023-08-07 15:02:21,"Ritu Dutta, Rajnish Kumnar, Surajit Borkotokey","http://arxiv.org/abs/2308.03507v1, http://arxiv.org/pdf/2308.03507v1",econ.TH
34632,th,"This paper examines the relationship between resource reallocation,
uniqueness of equilibrium and efficiency in economics. We explore the
implications of reallocation policies for stability, conflict, and
decision-making by analysing the existence of geodesic coordinate functions in
the equilibrium manifold. Our main result shows that in an economy with M = 2
consumers and L goods, if L coordinate functions, representing policies, are
geodesics on the equilibrium manifold (a property that we call the finite
geodesic property), then the equilibrium is globally unique. The presence of
geodesic variables indicates optimization and efficiency in the economy, while
non-geodesic variables add complexity. Finally, we establish a link between the
existing results on curvature, minimal entropy, geodesics and uniqueness in
smooth exchange economies. This study contributes to the understanding of the
geometric and economic properties of equilibria and offers potential
applications in policy considerations. Keywords: Uniqueness of equilibrium,
redistributive policies, geodesics, equilibrium manifold, equilibrium
selection, curvature.",Uniqueness of equilibrium and redistributive policies: a geometric approach to efficiency,2023-08-07 19:23:21,"Andrea Loi, Stefano Matta, Daria Uccheddu","http://arxiv.org/abs/2308.03706v1, http://arxiv.org/pdf/2308.03706v1",econ.TH
34633,th,"We establish the Subgradient Theorem for monotone correspondences -- a
monotone correspondence is equal to the subdifferential of a potential if and
only if it is conservative, i.e. its integral along a closed path vanishes
irrespective of the selection from the correspondence along the path. We prove
two attendant results: the Potential Theorem, whereby a conservative monotone
correspondence can be integrated up to a potential, and the Duality Theorem,
whereby the potential has a Fenchel dual whose subdifferential is another
conservative monotone correspondence. We use these results to reinterpret and
extend Baldwin and Klemperer's (2019) characterization of demand in economies
with indivisible goods. We introduce a simple test for existence of Walrasian
equilibrium in quasi-linear economies. Fenchel's Duality Theorem implies this
test is met when the aggregate utility is concave, which is not necessarily the
case with indivisible goods even if all consumers have concave utilities.",Tropical Analysis: With an Application to Indivisible Goods,2023-08-09 00:29:09,"Nicholas C. Bedard, Jacob K. Goeree","http://arxiv.org/abs/2308.04593v1, http://arxiv.org/pdf/2308.04593v1",econ.TH
34634,th,"This study considers a model where schools may have multiple priority orders
on students, which may be inconsistent with each other. For example, in school
choice systems, since the sibling priority and the walk zone priority coexist,
the priority orders based on them would be conflicting. In that case, there may
be no matching that respects all priority orders. We introduce a novel
fairness notion called M-fairness to examine such markets. Further, we focus on
a more specific situation where all schools have two priority orders, and for a
certain group of students, a priority order of each school is an improvement of
the other priority order of the school. An illustrative example is the school
choice matching market with a priority-based affirmative action policy. We
introduce a mechanism that utilizes the efficiency adjusted deferred acceptance
algorithm and show that the mechanism is student optimally M-stable,
improved-group optimally M-stable and responsive to improvements.",School Choice with Multiple Priorities,2023-08-09 11:11:10,"Minoru Kitahara, Yasunori Okumura","http://arxiv.org/abs/2308.04780v2, http://arxiv.org/pdf/2308.04780v2",econ.TH
34635,th,"I study how organizations assign tasks to identify the best candidate to
promote among a pool of workers. Task allocation and workers' motivation
interact through the organization's promotion decisions. The organization
designs the workers' careers to both screen and develop talent. When only
non-routine tasks are informative about a worker's type and non-routine tasks
are scarce, the organization's preferred promotion system is an index contest.
Each worker is assigned a number that depends only on his own type. The
principal delegates the non-routine task to the worker whose current index is
the highest and promotes the first worker whose type exceeds a threshold. Each
worker's threshold is independent of the other workers' types. Competition is
mediated by the allocation of tasks: who gets the opportunity to prove
themselves is a determining factor in promotions. Finally, features of the
optimal promotion contest rationalize the prevalence of fast-track promotions,
the role of seniority, and the circumstances under which a group of workers is systemically advantaged.",Dynamic delegation in promotion contests,2023-08-10 19:13:10,Théo Durandard,"http://arxiv.org/abs/2308.05668v1, http://arxiv.org/pdf/2308.05668v1",econ.TH
34636,th,"In a New Keynesian model where the trade-off between stabilising the
aggregate inflation rate and the output gap arises from sectoral asymmetries,
the gains from commitment are either zero or negligible. Thus, to the extent
that economic fluctuations are caused by sectoral shocks, policies designed to
overcome the stabilisation bias are aiming to correct an unimportant problem.",On the unimportance of commitment for monetary policy,2023-08-16 00:17:12,Juan Paez-Farrell,"http://arxiv.org/abs/2308.08044v2, http://arxiv.org/pdf/2308.08044v2",econ.TH
34637,th,"We present the concept of ordered majority rule, a property of Instant Runoff
Voting, and compare it to the familiar concept of pairwise majority rule of
Condorcet methods. Ordered majority rule establishes a social order among the
candidates such that that relative order between any two candidates is
determined by voters who do not prefer another major candidate. It ensures the
election of a candidate from the majority party or coalition while preventing
an antagonistic opposition party or coalition from influencing which candidate
that may be. We show that IRV is the only voting method to satisfy ordered
majority rule, for a self-consistently determined distinction between major and
minor candidates, and that ordered majority rule is incompatible with the
properties of Condorcet compliance, independence of irrelevant alternatives,
and monotonicity. Finally, we present some arguments as to why ordered majority
rule may be preferable to pairwise majority rule, using the 2022 Alaska special
congressional election as a case study.",A Majority Rule Philosophy for Instant Runoff Voting,2023-08-16 18:20:30,"Ross Hyman, Deb Otis, Seamus Allen, Greg Dennis","http://arxiv.org/abs/2308.08430v1, http://arxiv.org/pdf/2308.08430v1",econ.TH
34638,th,"This paper studies how improved monitoring affects the limit equilibrium
payoff set for stochastic games with imperfect public monitoring. We introduce
a simple generalization of Blackwell garbling called weighted garbling in order
to compare different information structures for this class of games. Our main
result is the monotonicity of the limit perfect public equilibrium (PPE) payoff
set with respect to this information order: we show that the limit PPE payoff
set with one information structure is larger than the limit PPE payoff set
with another information structure, state by state, if the latter information
structure is a weighted garbling of the former. We show that this monotonicity
result also holds for the class of strongly symmetric equilibria. Finally, we
introduce and discuss another weaker sufficient condition for the expansion of
limit PPE payoff set. It is more complex and difficult to verify, but useful in
some special cases.",On the Value of Information Structures in Stochastic Games,2023-08-18 02:37:53,"Daehyun Kim, Ichiro Obara","http://arxiv.org/abs/2308.09211v1, http://arxiv.org/pdf/2308.09211v1",econ.TH
34639,th,"This paper establishes a link between endowments, patience types, and the
parameters of the HARA Bernoulli utility function that ensure equilibrium
uniqueness in an economy with two goods and two impatience types with
additively separable preferences. We provide sufficient conditions that guarantee
uniqueness of equilibrium for any possible value of $\gamma$ in the HARA
utility function
$\frac{\gamma}{1-\gamma}\left(b+\frac{a}{\gamma}x\right)^{1-\gamma}$. The
analysis contributes to the literature on uniqueness in pure exchange economies
with two goods and two agent types and extends the result in [4].","Endowments, patience types, and uniqueness in two-good HARA utility economies",2023-08-18 10:10:03,"Andrea Loi, Stefano Matta","http://arxiv.org/abs/2308.09347v1, http://arxiv.org/pdf/2308.09347v1",econ.TH
34640,th,"We analyze the risks to financial stability following the introduction of a
central bank digital currency (CBDC). CBDC competes with commercial bank
deposits as the household's source of liquidity. We revisit the result in the
existing literature regarding the equivalence of payment systems by considering
different degrees of substitutability between payment instruments and
introducing a collateral constraint that banks must respect when borrowing from
the central bank. When CBDC and deposits are perfect substitutes, the central
bank can offer loans to banks that render the introduction of CBDC neutral to
the real economy. Hence, there is no effect on financial stability. However,
when CBDC and deposits are imperfect substitutes, the central bank cannot make
the bank indifferent to the competition from CBDC. It follows that the
introduction of CBDC has real effects on the economy. Our dynamic analysis
shows that an increase in CBDC demand leads to a drop in bank profits but does
not raise significant concerns about CBDC crowding out deposits.",Unveiling the Interplay between Central Bank Digital Currency and Bank Deposits,2023-08-20 23:24:49,"Hanfeng Chen, Maria Elena Filippin","http://arxiv.org/abs/2308.10359v2, http://arxiv.org/pdf/2308.10359v2",econ.TH
34641,th,"In this paper, I prove the existence of a pure-strategy Nash equilibrium for
a large class of games with nonconvex strategy spaces. Specifically, if each
player's strategies form a compact, connected Euclidean neighborhood retract
and if all best-response correspondences are null-homotopic, then the game has
a pure-strategy Nash equilibrium. As an application, I show how this result can
prove the fundamental theorem of algebra.",Nash Equilibrium Existence without Convexity,2023-08-22 20:46:40,Conrad Kosowsky,"http://arxiv.org/abs/2308.11597v1, http://arxiv.org/pdf/2308.11597v1",econ.TH
34642,th,"We introduce a definition of multivariate majorization that is new to the
economics literature. Our majorization technique allows us to generalize Mussa
and Rosen's (1978) ""ironing"" to a broad class of multivariate principal-agent
problems. Specifically, we consider adverse selection problems in which agents'
types are one dimensional but informational externalities create a
multidimensional ironing problem. Our majorization technique applies to
discrete and continuous type spaces alike and we demonstrate its usefulness for
contract theory and mechanism design. We further show that multivariate
majorization yields a natural extension of second-order stochastic dominance to
multiple dimensions and derive its implications for decision making under
multivariate risk.",Multivariate Majorization in Principal-Agents Models,2023-08-26 11:06:09,"Nicholas C Bedard, Jacob K Goeree, Ningyi Sun","http://arxiv.org/abs/2308.13804v1, http://arxiv.org/pdf/2308.13804v1",econ.TH
34644,th,"A patient firm interacts with a sequence of consumers. The firm is either an
honest type who supplies high quality and never erases its records, or an
opportunistic type who chooses what quality to supply and may erase its records
at a low cost. We show that in every equilibrium, the firm has an incentive to
build a reputation for supplying high quality until its continuation value
exceeds its commitment payoff, but its ex ante payoff must be close to its
minmax value when it has a sufficiently long lifespan. Therefore, even a small
fraction of opportunistic types can wipe out the firm's returns from building
reputations. Even if the honest type can commit to reveal information about its
history according to any disclosure policy, the opportunistic type's payoff
cannot exceed its equilibrium payoff when the consumers receive no information.",Reputation Effects with Endogenous Records,2023-08-26 23:45:15,Harry Pei,"http://arxiv.org/abs/2308.13956v2, http://arxiv.org/pdf/2308.13956v2",econ.TH
34645,th,"A group of agents each exert effort to produce a joint output, with the
complementarities between their efforts represented by a (weighted) network.
Under equity compensation, a principal motivates the agents to work by giving
them shares of the output. We describe the optimal equity allocation. It is
characterized by a neighborhood balance condition: any two agents receiving
equity have the same (weighted) total equity assigned to their neighbors. We
also study the problem of selecting the team of agents who receive positive
equity, and show this team must form a tight-knit subset of the complementarity
network, with any pair being complementary to one another or jointly to another
team member. Finally, we give conditions under which the amount of equity used
for compensation is increasing in the strength of a team's complementarities
and discuss several other applications.",Equity Pay In Networked Teams,2023-08-28 20:17:57,"Krishna Dasaratha, Benjamin Golub, Anant Shah","http://arxiv.org/abs/2308.14717v1, http://arxiv.org/pdf/2308.14717v1",econ.TH
34646,th,"We consider multiple-type housing markets (Moulin, 1995), which extend
Shapley-Scarf housing markets (Shapley and Scarf, 1974) from one dimension to
higher dimensions. In this model, Pareto efficiency is incompatible with
individual rationality and strategy-proofness (Konishi et al., 2001).
Therefore, we consider two weaker efficiency properties: coordinatewise
efficiency and pairwise efficiency. We show that these two properties both (i)
are compatible with individual rationality and strategy-proofness, and (ii)
help us to identify two specific mechanisms. To be more precise, on various
domains of preference profiles, together with other well-studied properties
(individual rationality, strategy-proofness, and non-bossiness), coordinatewise
efficiency and pairwise efficiency respectively characterize two extensions of
the top-trading-cycles mechanism (TTC): the coordinatewise top-trading-cycles
mechanism (cTTC) and the bundle top-trading-cycles mechanism (bTTC). Moreover,
we propose several variations of our efficiency properties, and we find that
each of them is either satisfied by cTTC or bTTC, or leads to an impossibility
result (together with individual rationality and strategy-proofness).
Therefore, our characterizations can be primarily interpreted as a
compatibility test: any reasonable efficiency property that is not satisfied by
cTTC or bTTC could be considered incompatible with individual rationality and
strategy-proofness. The external validity of our results in the context of
general environments is also discussed. For multiple-type housing markets with
strict preferences, our characterization of bTTC constitutes the first
characterization of an extension of the prominent TTC mechanism.",Efficiency in Multiple-Type Housing Markets,2023-08-29 05:38:23,Di Feng,"http://arxiv.org/abs/2308.14989v2, http://arxiv.org/pdf/2308.14989v2",econ.TH
34647,th,"An agent observes the set of available projects and proposes some, but not
necessarily all, of them. A principal chooses one or none from the proposed
set. We solve for a mechanism that minimizes the principal's worst-case regret.
We compare the single-project environment in which the agent can propose only
one project with the multiproject environment in which he can propose many. In
both environments, if the agent proposes one project, it is chosen for sure if
the principal's payoff is sufficiently high; otherwise, the probability that it
is chosen decreases in the agent's payoff. In the multiproject environment, the
agent's payoff from proposing multiple projects equals his maximal payoff from
proposing each project alone. The multiproject environment outperforms the
single-project one by providing better fallback options than rejection and by
delivering this payoff to the agent more efficiently.",Regret-Minimizing Project Choice,2023-09-01 05:17:32,"Yingni Guo, Eran Shmaya","http://arxiv.org/abs/2309.00214v1, http://arxiv.org/pdf/2309.00214v1",econ.TH
34648,th,"This work studies the effect of incentives (in the form of punishment and
reward) on the equilibrium fraction of cooperators and defectors in an iterated
n-person prisoners' dilemma game. With a finite population of players employing
a strategy of nice tit-for-tat or universal defect, an equilibrium fraction of
each player-type can be identified from linearized payoff functions. Incentives
take the form of targeted and general punishment, and targeted and general
reward. The primary contribution of this work is in clearly articulating the
design and marginal effect of these incentives on cooperation. Generalizable
results indicate that while targeted incentives have the potential to
substantially reduce but never entirely eliminate defection, they exhibit
diminishing marginal effectiveness. General incentives on the other hand have
the potential to eliminate all defection from the population of players.
Applications to policy are briefly considered.",The Effect of Punishment and Reward on Cooperation in a Prisoners' Dilemma Game,2023-09-01 19:07:56,Alexander Kangas,"http://arxiv.org/abs/2309.00556v1, http://arxiv.org/pdf/2309.00556v1",econ.TH
34649,th,"In mechanism design theory, agents' types are described as their private
information, and the designer may reveal some public information to affect
agents' types in order to obtain more payoffs. Traditionally, each agent's
private type and the public information are represented as a random variable
respectively. In this paper, we propose a type-adjustable mechanism where each
agent's private type is represented as a function of two parameters,
\emph{i.e.}, his intrinsic factor and an external factor. Each agent's
intrinsic factor is modeled as a private random variable, and the external
factor is modeled as a solution of the designer's optimization problem. If the
designer chooses an optimal value of external factor as public information, the
type-adjustable mechanism may yield Pareto-optimal outcomes, which let the
designer and each agent obtain higher expected payoffs than the most they could
obtain under traditional optimal mechanisms. As a comparison, in an
auction with interdependent values, only the seller will benefit from public
information which is represented as a random variable. We propose a revised
version of the revelation principle for type-adjustable Bayesian equilibrium.
Finally, we compare the type-adjustable mechanism with other relevant models.",Constructing a type-adjustable mechanism to yield Pareto-optimal outcomes,2023-09-03 09:24:48,Haoyang Wu,"http://arxiv.org/abs/2309.01096v2, http://arxiv.org/pdf/2309.01096v2",econ.TH
34650,th,"Despite its importance, relatively little attention has been devoted to
studying the effects of exposing individuals to digital choice interfaces. In
two pre-registered lottery-choice experiments, we administer three
information-search technologies that are based on well-known heuristics: in the
ABS (alternative-based search) treatment, subjects explore outcomes and
corresponding probabilities within lotteries; in the CBS (characteristic-based
search) treatment, subjects explore outcomes and corresponding probabilities
across lotteries; in the Baseline treatment, subjects view outcomes and
corresponding probabilities all at once. We find that (i) when lottery outcomes
comprise gains and losses (experiment 1), exposing subjects to the CBS
technology systematically makes them choose safer lotteries, compared to the
subjects that are exposed to the other technologies, and (ii) when lottery
outcomes comprise gains only (experiment 2), the above results are reversed:
exposing subjects to the CBS technology systematically makes them choose
riskier lotteries. By combining the information-search and choice analysis, we
offer an interpretation of our results that is based on prospect theory,
whereby the information-search technology subjects are exposed to contributes
to determining the level of attention that the lottery attributes receive, which
in turn has an effect on the reference point.",Do Losses Matter? The Effect of Information-Search Technologies on Risky Choices,2023-09-04 13:06:31,"Luigi Mittone, Mauro Papi","http://arxiv.org/abs/2309.01495v1, http://arxiv.org/pdf/2309.01495v1",econ.TH
34651,th,"In our previous paper we proved that every affine economy has a competitive
equilibrium. In order to find a situation in which it is possible to compute
it, we define a simplex economy as a variation with repetition of the
commodities, taken the number of consumers at a time (representing the
preferences), together with a transition matrix (defining the initial
endowments). We show that a
competitive equilibrium can be intrinsically determined in any minimal simplex
economy.",On the minimal simplex economy,2023-08-20 17:53:04,Antonio Pulgarín,"http://arxiv.org/abs/2309.03784v2, http://arxiv.org/pdf/2309.03784v2",econ.TH
34652,th,"We introduce a novel family of mechanisms for constrained allocation problems
which we call local priority mechanisms. These mechanisms are parameterized by
a function which assigns a set of agents -- the local compromisers -- to every
infeasible allocation. The mechanism then greedily attempts to match agents
with their top choices. Whenever it reaches an infeasible allocation, the local
compromisers move to their next favorite alternative. Local priority mechanisms
exist for any constraint, so this provides a method of constructing new designs
for any constrained allocation problem. We give axioms which characterize local
priority mechanisms. Since constrained object allocation includes many
canonical problems as special cases, we apply this characterization to
show that several well-known mechanisms, including deferred acceptance for
school choice, top trading cycles for house allocation, and serial dictatorship
can be understood as instances of local priority mechanisms. Other mechanisms,
including the Boston mechanism, are not local priority mechanisms. We give
necessary and sufficient conditions which characterize the local priority
mechanisms that are group strategy-proof. As an application, we construct novel
mechanisms for a natural variation of the house allocation problem where no
existing class of mechanisms besides serial dictatorship would be applicable.",Local Priority Mechanisms,2023-09-08 00:13:11,"Joseph Root, David S. Ahn","http://arxiv.org/abs/2309.04020v1, http://arxiv.org/pdf/2309.04020v1",econ.TH
34653,th,"We propose a notion of concavity in two-sided many-to-one matching, which is
an analogue to the balancedness condition in cooperative games. A stable
matching exists when the market is concave. We provide a class of concave
markets. In the proof of the existence theorem, we use Scarf's algorithm to
find a stable schedule matching, which is of independent interest.",Concave many-to-one matching,2023-09-08 10:53:07,Chao Huang,"http://arxiv.org/abs/2309.04181v1, http://arxiv.org/pdf/2309.04181v1",econ.TH
34654,th,"We show that sender-optimal equilibria in cheap-talk games with a binary
state space and state-independent preferences are robust to perturbations of
the sender's preferences that allow for slight state-dependence. Other
equilibria generally fail to be robust.",Robust equilibria in cheap-talk games with fairly transparent motives,2023-09-08 11:10:38,"Jan-Henrik Steg, Elshan Garashli, Michael Greinecker, Christoph Kuzmics","http://arxiv.org/abs/2309.04193v1, http://arxiv.org/pdf/2309.04193v1",econ.TH
34655,th,"In this paper, we establish a mathematical duality between utility transforms
and probability distortions. These transforms play a central role in decision
under risk by forming the foundation for the classic theories of expected
utility, dual utility, and rank-dependent utility. Our main results establish
that probability distortions are characterized by commutation with utility
transforms, and utility transforms are characterized by commutation with
probability distortions. These results require no additional conditions, and
hence each class can be axiomatized with only one property. Moreover, under
monotonicity, rank-dependent utility transforms can be characterized by set
commutation with either utility transforms or probability distortions.",A duality between utility transforms and probability distortions,2023-08-25 20:48:12,"Peng Liu, Ruodu Wang","http://arxiv.org/abs/2309.05816v1, http://arxiv.org/pdf/2309.05816v1",econ.TH
34656,th,"If Economics is understood as the study of the interactions among intentional
agents, with rationality being the main source of intentional behavior, the
mathematical tools that it requires must be extended to capture systemic
effects. Here we choose an alternative toolbox based on Category Theory. We
examine potential {\em level-agnostic} formalisms, presenting three categories,
$\mathcal{PR}$, $\mathcal{G}$ and an encompassing one, $\mathcal{PR-G}$. The
latter allows for representing dynamic rearrangements of the interactions among
different agents.",Dynamic Arrangements in Economic Theory: Level-Agnostic Representations,2023-09-12 19:48:33,Fernando Tohmé,"http://arxiv.org/abs/2309.06383v1, http://arxiv.org/pdf/2309.06383v1",econ.TH
34657,th,"In the problem of allocating a single non-disposable commodity among agents
whose preferences are single-peaked, we study a weakening of strategy-proofness
called not obvious manipulability (NOM). If agents are cognitively limited,
then NOM is sufficient to describe their strategic behavior. We characterize a
large family of own-peak-only rules that satisfy efficiency, NOM, and a minimal
fairness condition. We call these rules ""simple"". In economies with excess
demand, simple rules fully satiate agents whose peak amount is less than or
equal to equal division and assign, to each remaining agent, an amount between
equal division and his peak. In economies with excess supply, simple rules are
defined symmetrically. We also show that the single-plateaued domain is maximal
for the characterizing properties of simple rules. Therefore, even though
replacing strategy-proofness with NOM greatly expands the family of admissible
rules, the maximal domain of preferences involved remains basically unaltered.",Not obviously manipulable allotment rules,2023-09-12 22:40:43,"R. Pablo Arribillaga, Agustin G. Bonifacio","http://arxiv.org/abs/2309.06546v1, http://arxiv.org/pdf/2309.06546v1",econ.TH
34659,th,"Quota mechanisms are commonly used to elicit private information when agents
face multiple decisions and monetary transfers are infeasible. As the number of
decisions grows large, quotas asymptotically implement the same set of social
choice functions as do separate mechanisms with transfers. We analyze the
robustness of quota mechanisms. To set the correct quota, the designer must
have precise knowledge of the environment. We show that, without transfers,
only trivial social choice rules can be implemented in a prior-independent way.
We obtain a tight bound on the decision error that results when the quota does
not match the true type distribution. Finally, we show that in a multi-agent
setting, quotas are robust to agents' beliefs about each other. Crucially,
quotas make the distribution of reports common knowledge.",Linking Mechanisms: Limits and Robustness,2023-09-14 03:37:31,"Ian Ball, Deniz Kattwinkel","http://arxiv.org/abs/2309.07363v1, http://arxiv.org/pdf/2309.07363v1",econ.TH
34660,th,"We study how a decision maker (DM) learns about the bias of unfamiliar news
sources. Absent any frictions, a rational DM uses known sources as a yardstick
to discern the true bias of a source. If a DM has misspecified beliefs, this
process fails. We derive long-run beliefs, behavior, welfare, and corresponding
comparative statics, when the DM has dogmatic, incorrect beliefs about the bias
of known sources. The distortion due to misspecified learning is succinctly
captured by a single-dimensional metric we introduce. Our model generates the
hostile media effect and false polarization, and has implications for
fact-checking and misperception recalibration.",Learning News Bias: Misspecifications and Consequences,2023-09-15 23:02:21,"Lin Hu, Matthew Kovach, Anqi Li","http://arxiv.org/abs/2309.08740v1, http://arxiv.org/pdf/2309.08740v1",econ.TH
34661,th,"I introduce a favor exchange model where favors are substitutable and study
bilateral enforcement of cooperation. Without substitutability, the value of a
relationship does not depend on the rest of the network, and in equilibrium
there is either no cooperation or universal cooperation. When favors are
substitutable, each additional relationship is less valuable than the previous,
and intermediate levels of cooperation are observed. I extend the model to
allow for transfers, heterogeneous players, and multilateral enforcement. My
results can explain the stratification of social networks in post-Soviet states
and the adoption of different enforcement mechanisms by different groups of
medieval traders.",Substitutability in Favor Exchange,2023-09-19 19:50:39,Oguzhan Celebi,"http://arxiv.org/abs/2309.10749v1, http://arxiv.org/pdf/2309.10749v1",econ.TH
34662,th,"In classical contract theory, we usually impose two assumptions: delegated
contracts and perfect commitment. While the second assumption is demanding, the
first one suffers no loss of generality. Following this tradition, current
common-agency models impose delegated contracts and perfect commitment. We
first show that non-delegated contracts expand the set of equilibrium outcomes
under common agency. Furthermore, the powerful menu theorem for common agency
(Peters (2001) and Martimort and Stole (2002)) fails for either non-delegated
contracts or imperfect commitment. We identify canonical contracts in such
environments, and re-establish generalized menu theorems. Given imperfect
commitment, our results for common-agency models are analogous to those in
Bester and Strausz (2001) and Doval and Skreta (2012) for the classical
contract theory, which re-establish the revelation principle.",Common Agency with Non-Delegation or Imperfect Commitment,2023-09-20 22:17:37,"Seungjin Han, Siyang Xiong","http://arxiv.org/abs/2309.11595v1, http://arxiv.org/pdf/2309.11595v1",econ.TH
34663,th,"We analyze a bilateral trade model in which the buyer's value for the product
and the seller's costs are uncertain, the seller chooses the product price, and
the product is recommended by an algorithm based on its value and price. We
characterize an algorithm that maximizes the buyer's expected payoff and show
that the optimal algorithm underrecommends the product at high prices and
overrecommends at low prices. Higher algorithm precision increases the maximal
equilibrium price and may increase prices across all of the seller's costs,
whereas informing the seller about the buyer's value results in a
mean-preserving spread of equilibrium prices and a mean-preserving contraction
of the buyer's payoff.",Buyer-Optimal Algorithmic Consumption,2023-09-21 17:44:04,"Shota Ichihashi, Alex Smolin","http://arxiv.org/abs/2309.12122v2, http://arxiv.org/pdf/2309.12122v2",econ.TH
34664,th,"We prove that when individual firms employ constant-returns-to-scale
production functions, the aggregate production function defined by the maximum
achievable total output given total inputs is always linear on some part of the
domain. Our result provides a microfoundation for the linear production
function.",Linearity of Aggregate Production Functions,2023-09-27 19:14:58,"Christopher P. Chambers, Alexis Akira Toda","http://arxiv.org/abs/2309.15760v1, http://arxiv.org/pdf/2309.15760v1",econ.TH
34665,th,"How should authorities that care about match quality and diversity allocate
resources when they are uncertain about the market? We introduce adaptive
priority mechanisms (APM) that prioritize agents based on both their scores and
characteristics. We derive an APM that is optimal and show that the ubiquitous
priority and quota mechanisms are optimal if and only if the authority is
risk-neutral or extremely risk-averse over diversity, respectively. With many
authorities, each authority using the optimal APM is dominant and implements
the unique stable matching. Using Chicago Public Schools data, we find that the
gains from adopting APM may be considerable.",Adaptive Priority Mechanisms,2023-09-27 23:25:43,"Oguzhan Celebi, Joel Flynn","http://arxiv.org/abs/2309.15997v1, http://arxiv.org/pdf/2309.15997v1",econ.TH
34666,th,"We provide necessary and sufficient conditions for a correspondence taking
values in a finite-dimensional Euclidean space to be open so as to revisit the
pioneering work of Schmeidler (1969), Shafer (1974), Shafer-Sonnenschein (1975)
and Bergstrom-Rader-Parks (1976) to answer several questions they and their
followers left open. We introduce the notion of separate convexity for a
correspondence and use it to relate to classical notions of continuity while
giving salience to the notion of separateness as in the interplay of separate
continuity and separate convexity of binary relations. As such, we provide a
consolidation of the convexity-continuity postulates from a broad
inter-disciplinary perspective and comment on how the qualified notions
proposed here have implications of substantive interest for choice theory.","Separately Convex and Separately Continuous Preferences: On Results of Schmeidler, Shafer, and Bergstrom-Parks-Rader",2023-10-01 03:43:45,"Metin Uyanik, Aniruddha Ghosh, M. Ali Khan","http://arxiv.org/abs/2310.00531v1, http://arxiv.org/pdf/2310.00531v1",econ.TH
34667,th,"We present a new proof for the existence of a Nash equilibrium, which
involves no fixed point theorem. The self-contained proof consists of two
parts. The first part introduces the notions of root function and
pre-equilibrium. The second part shows the existence of pre-equilibria and Nash
equilibria.",A new proof for the existence of Nash equilibrium,2023-10-02 21:16:31,"Davide Carpentiere, Stephen Watson","http://arxiv.org/abs/2310.01528v1, http://arxiv.org/pdf/2310.01528v1",econ.TH
34668,th,"Since 1950, India has instituted an intricate affirmative action program
through a meticulously designed reservation system. This system incorporates
vertical and horizontal reservations to address historically marginalized
groups' socioeconomic imbalances. Vertical reservations designate specific
quotas of available positions in publicly funded educational institutions and
government employment for Scheduled Castes, Scheduled Tribes, Other Backward
Classes, and Economically Weaker Sections. Concurrently, horizontal
reservations are employed within each vertical category to allocate positions
for additional subgroups, such as women and individuals with disabilities. In
educational admissions, the legal framework recommended that unfilled positions
reserved for the OBC category revert to unreserved status. Moreover, we
document that individuals from vertically reserved categories have more
complicated preferences over institution-vertical category position pairs, even
though authorities only elicit their preferences over institutions. To address
these challenges, the present paper proposes a novel class of choice rules,
termed the Generalized Lexicographic (GL) choice rules. This class is
comprehensive, subsuming the most salient priority structures discussed in the
extant matching literature. Utilizing the GL choice rules and the deferred
acceptance mechanism, we present a robust framework that generates equitable
and effective solutions for resource allocation problems in the Indian context.","Affirmative Action in India: Restricted Strategy Space, Complex Constraints, and Direct Mechanism Design",2023-10-04 11:35:54,"Orhan Aygün, Bertan Turhan","http://arxiv.org/abs/2310.02660v1, http://arxiv.org/pdf/2310.02660v1",econ.TH
34669,th,"Ecosystems enjoy increasing attention due to their flexibility and innovative
power. It is well known, however, that this type of network-based economic
governance structure occupies a potentially unstable position between the two
stable (governance) endpoints, namely the firm (i.e., hierarchical governance)
and the (open) market (i.e., coordination through the monetary system).
  This paper develops a formal (mathematical) theory of the economic value of
(generic) ecosystems by extending transaction cost economics with certain
elements from service-dominant logic.
  Within a first-best setting of rational actors, we derive analytical
solutions for a hub-and-spoke and generic ecosystem configuration under some
uniformity assumptions on ecosystem participants.
  Relinquishing a first-best rational actors approach, we additionally derive
several general theorems on (i) necessary conditions for the economic
feasibility of ecosystem-based transactions, (ii) scaling requirements for
ecosystem stability, and (iii) a generic feasibility condition for arbitrary
provider-consumer ecosystems.
  Finally, we present an algebraic definition of business ecosystems and relate
it to existing informal definition attempts. Thereby we demonstrate that the
property of ""being an ecosystem"" of a network of transacting actors cannot be
decided on structural grounds alone.",A Formal Transaction Cost-Based Analysis of the Economic Feasibility Ecosystems,2023-10-04 23:52:51,Christoph F. Strnadl,"http://arxiv.org/abs/2310.03157v2, http://arxiv.org/pdf/2310.03157v2",econ.TH
34670,th,"We propose two general equilibrium models, quota equilibrium and emission tax
equilibrium. The government specifies quotas or taxes on emissions, then
refrains from further action. Quota equilibrium exists; the allocation of
emission property rights strongly impacts the distribution of welfare. If the
only externality arises from total net emissions, quota equilibrium is
constrained Pareto Optimal. Every quota equilibrium can be realized as an
emission tax equilibrium and vice versa. However, for certain tax rates,
emission tax equilibrium may not exist, or may exhibit high multiplicity. Full
Pareto Optimality of quota equilibrium can often be achieved by setting the
right quota.",General Equilibrium Theory for Climate Change,2023-10-05 19:26:44,"Robert M. Anderson, Haosui Duanmu","http://arxiv.org/abs/2310.03650v1, http://arxiv.org/pdf/2310.03650v1",econ.TH
34671,th,"We critique the formulation of Arrow's no-dictator condition to show that it
does not correspond to the accepted informal/intuitive interpretation. This has
implications for the theorem's scope of applicability.",Is Arrow's Dictator a Drinker?,2023-10-08 00:11:38,Jeffrey Uhlmann,"http://arxiv.org/abs/2310.04917v1, http://arxiv.org/pdf/2310.04917v1",econ.TH
34672,th,"This paper studies the (group) strategy-proofness aspect of two-sided
matching markets under stability. For a one-to-one matching market, we show an
equivalence between individual and group strategy-proofness under stability. We
obtain this equivalence assuming the domain satisfies a richness condition.
However, the result cannot be extended to the many-to-one matching markets. We
further consider a setting with single-peaked preferences and characterize all
domains compatible with stability and (group) strategy-proofness.",Equivalence between individual and group strategy-proofness under stability,2023-10-08 21:02:06,Pinaki Mandal,"http://arxiv.org/abs/2310.05252v1, http://arxiv.org/pdf/2310.05252v1",econ.TH
34673,th,"This paper studies the spatial agglomeration of workers and income in a
continuous space and time framework. Production and consumption are decided in
local markets, characterized by the presence of spatial spillovers and
amenities. Workers move across locations maximizing their instantaneous
utility, subject to mobility costs. We prove the existence of a short-run
Cournot-Nash equilibrium, and that, in the limit of an infinite number of
workers, the sequence of short-run equilibria can be expressed by a partial
differential equation. We characterize the conditions under which the long-run
equilibrium displays spatial agglomerations. Social welfare is non-decreasing
over time, and in the long-run equilibrium the expected utility of a
representative worker is equalized over space and, therefore, the spatial
allocation is efficient. The model can reproduce several stylized effects, such
as the emergence of spatial agglomerations (cities) with different sizes and
shapes; the dependence by history of spatial pattern of economic activities; a
non-linear out-of-equilibrium dynamics; and finally, the phenomenon of
metastability, where a long period of apparent stability in the spatial
distribution is followed by a sharp transition to a new equilibrium.",The spatial evolution of economic activities and the emergence of cities,2023-10-11 23:46:05,"Davide Fiaschi, Cristiano Ricci","http://arxiv.org/abs/2310.07883v1, http://arxiv.org/pdf/2310.07883v1",econ.TH
34674,th,"An expert seller chooses an experiment to influence a client's purchasing
decision, but may manipulate the experiment result for personal gain. When
credibility surpasses a critical threshold, the expert chooses a
fully-revealing experiment and, if possible, manipulates the unfavorable
result. In this case, a higher credibility strictly benefits the expert,
whereas the client never benefits from the expert's services. We also discuss
policies regarding monitoring the expert's disclosure and price regulation. When
prices are imposed exogenously, monitoring disclosure does not affect the
client's highest equilibrium value. A lower price may harm the client when it
discourages the expert from disclosing information.",Credibility in Credence Goods Markets,2023-10-14 12:37:59,"Xiaoxiao Hu, Haoran Lei","http://arxiv.org/abs/2310.09544v1, http://arxiv.org/pdf/2310.09544v1",econ.TH
34675,th,"Many models of economics assume that individuals distort objective
probabilities. We propose a simple consistency condition on distortion
functions, which we term distortion coherence, that ensures that the function
commutes with conditioning on an event. We show that distortion coherence
restricts belief distortions to have a particular function form: power-weighted
distortions, where distorted beliefs are proportional to the original beliefs
raised to a power and weighted by a state-specific value. We generalize our
findings to allow for distortions of the probabilities assigned to both states
and signals, which nests the functional forms widely used in studying
probabilistic biases (e.g., Grether, 1980 and Benjamin, 2019). We show how
coherent distorted beliefs are tightly related to several extant models of
motivated beliefs: they are the outcome of maximizing anticipated expected
utility subject to a generalized Kullback-Leibler cost of distortion. Moreover,
in the domain of lottery choice, we link coherent distortions to explanations
of non-expected utility like the Allais paradox: individuals who maximize
subjective expected utility conditional on coherent distorted
beliefs are equivalent to the weighted utility maximizers studied by Chew
[1983].",Coherent Distorted Beliefs,2023-10-15 19:31:33,"Christopher P. Chambers, Yusufcan Masatlioglu, Collin Raymond","http://arxiv.org/abs/2310.09879v1, http://arxiv.org/pdf/2310.09879v1",econ.TH
34676,th,"This paper presents new conditions under which the stationary distribution of
a stochastic network formation process can be characterized in terms of a
generating function. These conditions are given in terms of a transforming
function between networks: if the total transformation between two networks is
independent of how these networks transform into each other (by adding or
deleting links), then the process is reversible and a generating function can
be constructed. When the network formation process is given by discrete choices
of link formation, this procedure is equivalent to proving that the game with
the associated utilities is a potential game. This implies that the potential
game characterization is related to reversibility and tractability of network
formation processes. I then use the characterized stationary distribution to
study long-run properties of simple models of homophilic dynamics and
international trade. The effects of adding forward-looking agents and switching
costs are also discussed.",Tractable Aggregation in Endogenous Network Formation Models,2023-10-16 21:59:11,Jose M. Betancourt,"http://arxiv.org/abs/2310.10764v1, http://arxiv.org/pdf/2310.10764v1",econ.TH
34677,th,"We study the problem of sharing the revenues raised from subscriptions to
music streaming platforms among content providers. We provide direct, axiomatic
and game-theoretical foundations for two focal (and somewhat polar) methods
widely used in practice: pro-rata and user-centric. The former rewards artists
proportionally to their number of total streams. With the latter, each user's
subscription fee is proportionally divided among the artists streamed by that
user. We also provide foundations for a family of methods compromising among
the previous two, which addresses the rising concern in the music industry to
explore new streaming models that better align the interests of artists, fans
and streaming services.",Revenue sharing at music streaming platforms,2023-10-18 13:23:12,"Gustavo Bergantiños, Juan D. Moreno-Ternero","http://arxiv.org/abs/2310.11861v1, http://arxiv.org/pdf/2310.11861v1",econ.TH
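To fix ideas, here is a minimal Python sketch of how pro-rata and user-centric allocations could be computed from stream counts and subscription fees; the data shapes and variable names are assumptions of this illustration, and the paper's formal definitions may differ in detail.

```python
from collections import defaultdict

def pro_rata(streams, fees):
    """Split total subscription revenue in proportion to each artist's total streams.

    streams: dict user -> dict artist -> number of streams
    fees:    dict user -> subscription fee paid by that user
    """
    total_revenue = sum(fees.values())
    totals = defaultdict(int)
    for user_streams in streams.values():
        for artist, n in user_streams.items():
            totals[artist] += n
    grand_total = sum(totals.values())
    return {a: total_revenue * n / grand_total for a, n in totals.items()}

def user_centric(streams, fees):
    """Split each user's own fee among the artists that user streamed,
    in proportion to that user's streams, then sum across users."""
    payout = defaultdict(float)
    for user, user_streams in streams.items():
        user_total = sum(user_streams.values())
        for artist, n in user_streams.items():
            payout[artist] += fees[user] * n / user_total
    return dict(payout)

# Toy example: a heavy listener dominates the pro-rata payout to artist A,
# while the user-centric split caps each listener's influence at her own fee.
streams = {"u1": {"A": 99, "B": 1}, "u2": {"B": 10}}
fees = {"u1": 10.0, "u2": 10.0}
print(pro_rata(streams, fees))      # {'A': 18.0, 'B': 2.0}
print(user_centric(streams, fees))  # {'A': 9.9, 'B': 10.1}
```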
34678,th,"The ""Hotelling rule"" (HR) called to be ""the fundamental principle of the
economics of exhaustible resources"" has a logical deficiency which was never
paid a sufficient attention to. This deficiency should be taken into account
before attempting to explain discrepancies between the price prediction
provided by the HR and historically observed prices. Our analysis is focused on
the HR in its original form, we do not touch upon other aspects such as varying
extraction costs and other ways of upgrading the original model underlying the
Hotelling rule. We conclude that HR can not be derived from the simple models
as it was claimed, therefore it should be understood as an assumption on its
own, and not as a rule derived from other more basic assumptions.",A note on the logical inconsistency of the Hotelling Rule: A Revisit from the System's Analysis Perspective,2023-10-19 18:02:16,"Nikolay Khabarov, Alexey Smirnov, Michael Obersteiner","http://arxiv.org/abs/2310.12807v1, http://arxiv.org/pdf/2310.12807v1",econ.TH
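For reference, the rule in its original form (zero extraction costs) states that the price of the exhaustible resource must grow at the rate of interest along an equilibrium extraction path,

\[
p_t = p_0 e^{rt}, \qquad \text{equivalently} \qquad \frac{\dot{p}_t}{p_t} = r,
\]

where $r$ is the interest rate; the abstract's point is that this prediction should be read as an assumption in its own right rather than as a theorem derived from the simple underlying model.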
34679,th,"May's classical theorem states that in a single-winner choose-one voting
system with just two candidates, majority rule is the only social choice
function satisfying the anonymity, neutrality and positive responsiveness axioms.
Anonymity and neutrality are usually regarded as very natural constraints on
the social choice function. Positive responsiveness, on the other hand, is
sometimes deemed too strong of an axiom, which stimulates further search for
less stringent conditions. One viable substitute is Gerhard J. Woeginger's
""reducibility to subsocieties"". We demonstrate that the condition generalizes
to more than two candidates and, consequently, characterizes majority rule for
elections with multiple candidates.",Majority rule as a unique voting method in elections with multiple candidates,2023-08-10 22:20:32,Mateusz Krukowski,"http://arxiv.org/abs/2310.12983v1, http://arxiv.org/pdf/2310.12983v1",econ.TH
34680,th,"Ranked choice voting is vulnerable to monotonicity failure - a voting failure
where a candidate is cost an election due to losing voter preference or granted
an election due to gaining voter preference. Despite increasing use of ranked
choice voting at the time of writing of this paper, the frequency of
monotonicity failure is still a very open question. This paper builds on
previous work to develop conditions which can be used to test if it's possible
that monotonicity failure has happened in a 3-candidate ranked choice voting
election.",Monotonicity Failure in Ranked Choice Voting -- Necessary and Sufficient Conditions for 3-Candidate Elections,2023-09-18 09:24:43,Rylie Weaver,"http://arxiv.org/abs/2310.12988v1, http://arxiv.org/pdf/2310.12988v1",econ.TH
34681,th,"We consider the classic veto bargaining model but allow the agenda setter to
engage in persuasion to convince the veto player to approve her proposal. We
fully characterize the optimal proposal and experiment when Vetoer has
quadratic loss, and show that the proposer-optimal can be achieved either by
providing no information or with a simple binary experiment. Proposer chooses
to reveal partial information when there is sufficient expected misalignment
with Vetoer. In this case the opportunity to engage in persuasion strictly
benefits Proposer and increases the scope to exercise agenda power.",Persuasion in Veto Bargaining,2023-10-19 23:44:28,"Jenny S Kim, Kyungmin Kim, Richard Van Weelden","http://arxiv.org/abs/2310.13148v1, http://arxiv.org/pdf/2310.13148v1",econ.TH
35491,th,"We study how long-lived, rational agents learn in a social network. In every
period, after observing the past actions of his neighbors, each agent receives
a private signal, and chooses an action whose payoff depends only on the state.
Since equilibrium actions depend on higher order beliefs, it is difficult to
characterize behavior. Nevertheless, we show that regardless of the size and
shape of the network, the utility function, and the patience of the agents, the
speed of learning in any equilibrium is bounded from above by a constant that
only depends on the private signal distribution.",Learning in Repeated Interactions on Networks,2021-12-28 22:04:10,"Wanying Huang, Philipp Strack, Omer Tamuz","http://arxiv.org/abs/2112.14265v4, http://arxiv.org/pdf/2112.14265v4",econ.TH
34682,th,"Recently a number of papers have suggested using neural-networks in order to
approximate policy functions in DSGE models, while avoiding the curse of
dimensionality, which for example arises when solving many HANK models, and
while preserving non-linearity. One important step of this method is to
represent the constraints of the economic model in question in the outputs of
the neural-network. I propose, and demonstrate the advantages of, a novel
approach to handling these constraints which involves directly constraining the
neural-network outputs, such that the economic constraints are satisfied by
construction. This is achieved by a combination of re-scaling operations that
are differentiable and therefore compatible with the standard gradient descent
approach used when fitting neural-networks. This has a number of attractive
properties, and is shown to out-perform the penalty-based approach suggested by
the existing literature, which, while theoretically sound, can be poorly behaved in
practice for a number of reasons that I identify.",Non-linear approximations of DSGE models with neural-networks and hard-constraints,2023-10-20 14:49:56,Emmet Hall-Hoffarth,"http://arxiv.org/abs/2310.13436v1, http://arxiv.org/pdf/2310.13436v1",econ.TH
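The following toy sketch illustrates the general idea of satisfying an economic constraint by construction through differentiable rescaling, here for a one-period budget constraint; it is an illustration of the approach described above under assumed names and functional forms, not code from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def constrained_outputs(raw_output, income):
    """Map an unconstrained network output to a consumption/saving pair satisfying
        0 <= consumption <= income  and  consumption + saving = income
    using only smooth, differentiable operations, so the constraint holds exactly
    at every gradient-descent iterate instead of being enforced via a penalty term."""
    share = sigmoid(raw_output)    # differentiable map into (0, 1)
    consumption = share * income   # rescale into the feasible interval
    saving = income - consumption  # residual budget, non-negative by construction
    return consumption, saving

c, s = constrained_outputs(raw_output=0.3, income=2.0)
print(c, s, c + s)  # c + s equals income exactly, for any raw_output
```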
34683,th,"I study the relationship between diversity preferences and the choice rules
implemented by institutions, with a particular focus on the affirmative action
policies. I characterize the choice rules that can be rationalized by diversity
preferences and demonstrate that the recently rescinded affirmative action
mechanism used to allocate government positions in India cannot be
rationalized. I show that if institutions evaluate diversity without
considering intersectionality of identities, their choices cannot satisfy the
crucial substitutes condition. I characterize choice rules that satisfy the
substitutes condition and are rationalizable by preferences that are separable
in diversity and match quality domains.","Diversity Preferences, Affirmative Action and Choice Rules",2023-10-23 01:55:45,Oguzhan Celebi,"http://arxiv.org/abs/2310.14442v1, http://arxiv.org/pdf/2310.14442v1",econ.TH
34684,th,"Why do agents adopt a particular general behavioral rule among a collection
of possible alternatives? To address this question, we introduce a dynamic
social learning framework, where agents rely on general rules of thumb and
imitate the behavioral rules of successful peers. We find the social learning
outcome can be characterized independently of the initial rule distribution. When
one dominant general rule consistently yields superior problem-specific
outcomes, social learning almost surely leads all agents to adopt this dominant
rule; otherwise, provided the population is sufficiently large, the better rule
for the more frequent problem becomes the consensus rule with arbitrarily high
probability. As a result, the behavioral rule selected by the social learning
process need not maximize social welfare. We complement our theoretical
analysis with an application to the market sentiment selection in a stochastic
production market.",Social Learning of General Rules,2023-10-24 17:21:52,"Enrique Urbano Arellano, Xinyang Wang","http://arxiv.org/abs/2310.15861v1, http://arxiv.org/pdf/2310.15861v1",econ.TH
34685,th,"In this paper, we study the continuity of expected utility functions, and
derive a necessary and sufficient condition for a weak order on the space of
simple probabilities to have a continuous expected utility function. We also
verify that almost the same condition is necessary and sufficient for a weak
order on the space of probabilities with compact-support to have a continuous
expected utility function.",A Note on the Continuity of Expected Utility Functions,2023-10-25 20:37:27,Yuhki Hosoya,"http://arxiv.org/abs/2310.16806v2, http://arxiv.org/pdf/2310.16806v2",econ.TH
34686,th,"We introduce a way to compare actions in decision problems. An action is
safer than another if the set of beliefs at which the decision-maker prefers
the safer action increases in size (in the set inclusion sense) as the
decision-maker becomes more risk averse. We provide a full characterization of
this relation and discuss applications to robust belief elicitation,
contracting, Bayesian persuasion, game theory, and investment hedging.","Safety, in Numbers",2023-10-26 19:12:09,"Marilyn Pease, Mark Whitmeyer","http://arxiv.org/abs/2310.17517v1, http://arxiv.org/pdf/2310.17517v1",econ.TH
34687,th,"This paper studies multilateral matching in which any set of agents can
negotiate contracts. We assume scale economies in the sense that an agent
substitutes some contracts with some new contracts only if the newly signed
contracts involve a weakly larger set of partners. We show that a weakly
setwise stable outcome exists in a market with scale economies and a setwise
stable outcome exists under a stronger scale economies condition. Our
conditions apply to environments in which more partners bring advantages, and
allow agents to bargain over contracts signed by them.",Multilateral matching with scale economies,2023-10-30 15:09:33,Chao Huang,"http://arxiv.org/abs/2310.19479v1, http://arxiv.org/pdf/2310.19479v1",econ.TH
34688,th,"An informed seller designs a dynamic mechanism to sell an experience good.
The seller has partial information about the product match, which affects the
buyer's private consumption experience. We characterize equilibrium mechanisms
of this dynamic informed principal problem. The belief gap between the informed
seller and the uninformed buyer, coupled with the buyer's learning, gives rise
to mechanisms that provide the skeptical buyer with limited access to the
product and an option to upgrade if the buyer is swayed by a good experience.
Depending on the seller's screening technology, this takes the form of
free/discounted trials or tiered pricing, which are prevalent in digital
markets. In contrast to static environments, having consumer data can reduce
sellers' revenue in equilibrium, as they fine-tune the dynamic design with
their data forecasting the buyer's learning process.",From Doubt to Devotion: Trials and Learning-Based Pricing,2023-11-01 23:52:14,"Tan Gan, Nicholas Wu","http://arxiv.org/abs/2311.00846v1, http://arxiv.org/pdf/2311.00846v1",econ.TH
34689,th,"We consider general Bayesian persuasion problems where the receiver's utility
is single-peaked in a one-dimensional action. We show that a signal that pools
at most two states in each realization is always optimal, and that such
pairwise signals are the only solutions under a non-singularity condition (the
twist condition). Our core results provide conditions under which riskier
prospects induce higher or lower actions, so that the induced action is
single-dipped or single-peaked on each set of nested prospects. We also provide
conditions for the optimality of either full disclosure or negative assortative
disclosure, where all prospects are nested. Methodologically, our results rely
on novel duality and complementary slackness theorems. Our analysis extends to
a general problem of assigning one-dimensional inputs to productive units,
which we call optimal productive transport. This problem covers additional
applications including club economies (assigning workers to firms, or students
to schools), robust option pricing (assigning future asset prices to price
distributions), and partisan gerrymandering (assigning voters to districts).",Persuasion and Matching: Optimal Productive Transport,2023-11-06 08:46:39,"Anton Kolotilin, Roberto Corrao, Alexander Wolitzky","http://arxiv.org/abs/2311.02889v1, http://arxiv.org/pdf/2311.02889v1",econ.TH
34690,th,"We consider a set of agents who have claims on an endowment that is not large
enough to cover all claims. Agents can form coalitions but a minimal coalition
size $\theta$ is required to have positive coalitional funding that is
proportional to the sum of the claims of its members. We analyze the structure
of stable partitions when coalition members use well-behaved rules to allocate
coalitional endowments, e.g., the well-known constrained equal awards rule
(CEA) or the constrained equal losses rule (CEL). For continuous, (strictly)
resource monotonic, and consistent rules, stable partitions with (mostly)
$\theta$-size coalitions emerge. For CEA and CEL we provide algorithms to
construct such a stable partition formed by $\theta$-size coalitions.",Stable partitions for proportional generalized claims problems,2023-11-07 15:44:27,"Oihane Gallo, Bettina Klaus","http://arxiv.org/abs/2311.03950v1, http://arxiv.org/pdf/2311.03950v1",econ.TH
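As background, the two rules named above can be computed by bisection on a common award (or loss) parameter; the sketch below is a standard textbook implementation included only to fix ideas, not the algorithm proposed in the paper.

```python
def constrained_equal_awards(claims, endowment, tol=1e-9):
    """CEA: claimant i receives min(claim_i, lam), with lam chosen so awards sum to the endowment."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < endowment:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [min(c, lam) for c in claims]

def constrained_equal_losses(claims, endowment, tol=1e-9):
    """CEL: claimant i receives max(claim_i - lam, 0), with lam chosen so awards sum to the endowment."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(max(c - lam, 0.0) for c in claims) > endowment:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [max(c - lam, 0.0) for c in claims]

print(constrained_equal_awards([10, 20, 30], 36))  # approximately [10, 13, 13]
print(constrained_equal_losses([10, 20, 30], 36))  # approximately [2, 12, 22]
```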
34691,th,"I study collective dynamic information acquisition. Players determine when to
end sequential sampling via a collective choice rule. My analysis focuses on
the case of two players, but extends to many players. With two players,
collective stopping is determined either unilaterally or unanimously. I develop
a methodology to characterize equilibrium outcomes using an ex ante perspective
on posterior distributions. Under unilateral stopping, each player chooses a
mean-preserving contraction of the other's posterior distribution; under
unanimous stopping, they choose mean-preserving spreads. Equilibrium outcomes
can be determined via concavification. Players learn Pareto inefficiently: too
little under unilateral stopping and too much under unanimous stopping;
these learning inefficiencies are amplified when players' preferences become
less aligned. I demonstrate the value of my methodological approach in three
applications: committee search, dynamic persuasion, and competition in
persuasion.",Collective Sampling: An Ex Ante Perspective,2023-11-10 00:52:51,Yangfan Zhou,"http://arxiv.org/abs/2311.05758v1, http://arxiv.org/pdf/2311.05758v1",econ.TH
34692,th,"We revisit the problem of fairly allocating a sequence of time slots when
agents may have different levels of patience (Mackenzie and Komornik, 2023).
For each number of agents, we provide a lower threshold and an upper threshold
on the level of patience such that (i) if each agent is at least as patient as
the lower threshold, then there is a proportional allocation, and (ii) if each
agent is at least as patient as the upper threshold and moreover has weak
preference for earlier time slots, then there is an envy-free allocation. In
both cases, the proof is constructive.",Patience ensures fairness,2023-11-10 17:46:45,"Florian Brandl, Andrew Mackenzie","http://arxiv.org/abs/2311.06092v1, http://arxiv.org/pdf/2311.06092v1",econ.TH
34693,th,"This paper deals with the positivity condition of an infinite-dimensional
evolutionary equation, associated with a control problem for the optimal
consumption over space. We consider a spatial growth model for capital, with
production generating endogenous growth and technology of the form AK. We show
that for certain initial data, even in the case of heterogeneous spatial
distribution of technology and population, the solution to an auxiliary control
problem that is commonly used as a candidate for the original problem is not
admissible. In particular, we show that initial conditions that are
non-negative, under the auxiliary optimal consumption strategy, may lead to
negative capital allocations over time.",A non-invariance result for the spatial AK model,2023-11-12 14:19:54,Cristiano Ricci,"http://arxiv.org/abs/2311.06811v1, http://arxiv.org/pdf/2311.06811v1",econ.TH
34694,th,"Opinions are influenced by neighbors, with varying degrees of emphasis based
on their connections. Some may value more connected neighbors' views due to
authority respect, while others might lean towards grassroots perspectives. The
emergence of ChatGPT could signify a new ``opinion leader'' whose views people
put a lot of weight on. This study introduces a degree-weighted DeGroot
learning model to examine the effects of such belief updates on learning
outcomes, especially the speed of belief convergence. We find that greater
respect for authority does not guarantee faster convergence. The influence of
authority respect is non-monotonic. The convergence speed, influenced by
increased authority-respect or grassroots dissent, hinges on the unity of elite
and grassroots factions. This research sheds light on the growing skepticism
towards public figures and the ensuing dissonance in public debate.",From Authority-Respect to Grassroots-Dissent: Degree-Weighted Social Learning and Convergence Speed,2023-11-13 04:44:05,"Chen Cheng, Xiao Han, Xin Tong, Yusheng Wu, Yiqing Xing","http://arxiv.org/abs/2311.07010v1, http://arxiv.org/pdf/2311.07010v1",econ.TH
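A minimal numerical sketch of a degree-weighted DeGroot update may help fix ideas; the particular parameterization here (neighbor j weighted in proportion to deg(j)**alpha, plus a self-weight to avoid oscillation) is an assumption of this illustration and need not match the paper's model.

```python
import numpy as np

def degree_weighted_degroot(adj, x0, alpha, self_weight=1.0, steps=200):
    """Degree-weighted DeGroot updating on an undirected network.
    alpha > 0 tilts weight toward high-degree ('authority') neighbors,
    alpha < 0 toward low-degree ('grassroots') neighbors."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    raw = adj * (deg[np.newaxis, :] ** alpha)   # weight on neighbor j scales with deg(j)^alpha
    raw = raw + self_weight * np.eye(len(deg))  # keep some weight on one's own current belief
    W = raw / raw.sum(axis=1, keepdims=True)    # row-stochastic weighting matrix
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = W @ x                               # one round of belief averaging
    return x

# Star network: agent 0 is the hub; beliefs converge to a consensus whose value
# and speed depend on how much weight the hub's opinion receives.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]])
print(degree_weighted_degroot(adj, [1.0, 0.0, 0.0, 0.0], alpha=2.0))
```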
34695,th,"Motivated by the analysis of a general optimal portfolio selection problem,
which encompasses as special cases an optimal consumption and an optimal
debt-arrangement problem, we are concerned with the questions of how a
personality trait like risk-perception can be formalized and whether the two
objectives of utility-maximization and risk-minimization can be both achieved
simultaneously. We address these questions by developing an axiomatic
foundation of preferences for which utility-maximization is equivalent to
minimizing a utility-based shortfall risk measure. Our axiomatization hinges on
a novel axiom in decision theory, namely the risk-perception axiom.",Decision-making under risk: when is utility maximization equivalent to risk minimization?,2023-11-13 15:08:21,"Francesco Ruscitti, Ram Sewak Dubey, Giorgio Laguzzi","http://arxiv.org/abs/2311.07269v1, http://arxiv.org/pdf/2311.07269v1",econ.TH
34696,th,"This paper presents a method for incorporating risk aversion into existing
decision tree models used in economic evaluations. The method involves applying
a probability weighting function based on rank dependent utility theory to
reduced lotteries in the decision tree model. This adaptation embodies the fact
that different decision makers can observe the same decision tree model
structure but come to different conclusions about the optimal treatment. The
proposed solution to this problem is to compensate risk-averse decision makers
to use the efficient technology that they are reluctant to adopt.",Considering Risk Aversion in Economic Evaluation: A Rank Dependent Approach,2023-11-14 07:54:49,Jacob Smith,"http://arxiv.org/abs/2311.07905v1, http://arxiv.org/pdf/2311.07905v1",econ.TH
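For reference, the standard rank-dependent evaluation of a reduced lottery with outcomes $x_1 \ge x_2 \ge \dots \ge x_n$ and probabilities $p_1,\dots,p_n$, under a probability weighting function $w$ with $w(0)=0$ and $w(1)=1$, is

\[
RDU = \sum_{i=1}^{n}\left[w\!\left(\sum_{j \le i} p_j\right) - w\!\left(\sum_{j < i} p_j\right)\right] u(x_i),
\]

which reduces to expected utility when $w$ is the identity; the abstract's proposal is to apply such a weighting to the reduced lotteries implied by a decision tree model.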
34697,th,"This paper introduces the axiom of Negative Dominance, stating that if a
lottery $f$ is strictly preferred to a lottery $g$, then some outcome in the
support of $f$ is strictly preferred to some outcome in the support of $g$. It
is shown that if preferences are incomplete on a sufficiently rich domain, then
this plausible axiom, which holds for complete preferences, is incompatible
with an array of otherwise plausible axioms for choice under uncertainty. In
particular, in this setting, Negative Dominance conflicts with the standard
Independence axiom. A novel theory, which includes Negative Dominance, and
rejects Independence, is developed and shown to be consistent.","Incompleteness, Independence, and Negative Dominance",2023-11-14 22:01:57,Harvey Lederman,"http://arxiv.org/abs/2311.08471v1, http://arxiv.org/pdf/2311.08471v1",econ.TH
34698,th,"Currently, over 90% of Ethereum blocks are built using MEV-Boost, an auction
that allows validators to sell their block-building power to builders who
compete in an open English auction in each slot. Shortly after the merge, when
MEV-Boost was in its infancy, most block builders were neutral, meaning they
did not trade themselves but rather aggregated transactions from other traders.
Over time, integrated builders, operated by trading firms, began to overtake
many of the neutral builders. Outside of the integrated builder teams, little
is known about which advantages integration confers beyond latency and how
latency advantages distort on-chain trading.
  This paper explores these poorly understood advantages. We make two
contributions. First, we point out that integrated builders are able to bid
truthfully in their own bundle merge and then decide how much profit to take
later in the final stages of the PBS auction when more information is
available, making the auction for them look closer to a second-price auction
while independent searchers are stuck in a first-price auction. Second, we find
that latency disadvantages impose a winner's curse on slow bidders when
underlying values depend on a stochastic price process that changes as bids are
submitted.",Structural Advantages for Integrated Builders in MEV-Boost,2023-11-15 19:25:33,"Mallesh Pai, Max Resnick","http://arxiv.org/abs/2311.09083v1, http://arxiv.org/pdf/2311.09083v1",econ.TH
34699,th,"We analyze a problem of revealed preference given state-dependent stochastic
choice data in which the payoff to a decision maker (DM) only depends on their
beliefs about posterior means. Often, the DM must also learn about or pay
attention to the state; in applied work on this subject, a convenient
assumption is that the costs of such learning are linear in the distribution
over posterior means. We provide testable conditions to identify
whether this assumption holds. This allows for the use of information design
techniques to solve the DM's problem.",Posterior-Mean Separable Costs of Information Acquisition,2023-11-16 04:28:25,"Jeffrey Mensch, Komal Malik","http://arxiv.org/abs/2311.09496v3, http://arxiv.org/pdf/2311.09496v3",econ.TH
34700,th,"We studied the behavior and variation of utility between the two conflicting
players in a closed Nash-equilibrium loop. Our modeling approach also captured
the nexus between optimal premium strategizing and firm performance using the
Lotka-Volterra competition model. Our model robustly captured the two main cases,
insurer-insurer and insurer-policyholder, which we accompanied with numerical
examples of premium movements and their relationship to the market equilibrium
point. We found that insurers with high claim exposures tend to set high
premiums. The other competitors either set a competitive premium or adopt the
fixed premium charge to remain in the game; otherwise, they will operate below
the optimal point. We also noted an inverse link between trading premiums and
claims in general insurance games due to self-interest and utility
indifferences. We concluded that while an insurer aims to charge high premiums
to enjoy more, policyholders are willing to avoid these charges by paying less.",Modeling trading games in a stochastic non-life insurance market,2023-11-18 03:03:55,"Leonard Mushunje, David Edmund Allen","http://arxiv.org/abs/2311.10917v1, http://arxiv.org/pdf/2311.10917v1",econ.TH
34701,th,"A principal delegates decisions to a biased agent. Payoffs depend on a state
that the principal cannot observe. Initially, the agent does not observe the
state, but he can acquire information about it at a cost. We characterize the
principal's optimal delegation set. This set features a cap on high decisions
and a gap around the agent's ex ante favorite decision. It may even induce
ex-post Pareto-dominated decisions. Under certain conditions on the cost of
information acquisition, we show that the principal prefers delegating to an
agent with a small bias rather than to an unbiased agent.",Benefiting from Bias: Delegating to Encourage Information Acquisition,2023-11-20 07:14:56,"Ian Ball, Xin Gao","http://arxiv.org/abs/2311.11526v1, http://arxiv.org/pdf/2311.11526v1",econ.TH
34702,th,"Cooperation through repetition is an important theme in game theory. In this
regard, various celebrated ``folk theorems'' have been proposed for repeated
games in increasingly more complex environments. There has, however, been
insufficient attention paid to the robustness of a large set of equilibria that
is needed for such folk theorems. Taking perfect public equilibrium as our
starting point, we study uniformly strict equilibria in repeated games with
private monitoring and direct communication (cheap talk). We characterize the
limit equilibrium payoff set and identify the conditions for the folk theorem
to hold with uniformly strict equilibrium.",Uniformly Strict Equilibrium for Repeated Games with Private Monitoring and Communication,2023-11-21 02:47:26,"Richard McLean, Ichiro Obara, Andrew Postlewaite","http://arxiv.org/abs/2311.12242v1, http://arxiv.org/pdf/2311.12242v1",econ.TH
34703,th,"We study the design of optimal incentives in sequential processes. To do so,
we consider a basic and fundamental model in which an agent initiates a
value-creating sequential process through costly investment with random
success. If unsuccessful, the process stops. If successful, a new agent
thereafter faces a similar investment decision, and so forth. For any outcome
of the process, the total value is distributed among the agents using a reward
rule. Reward rules thus induce a game among the agents. By design, the reward
rule may lead to an asymmetric game, yet we are able to show equilibrium
existence with optimal symmetric equilibria. We characterize optimal reward
rules that yield the highest possible welfare created by the process, and the
highest possible expected payoff for the initiator of the process. Our findings
show that simple reward rules invoking short-run incentives are sufficient to
meet long-run objectives.",Successive Incentives,2023-11-21 13:10:49,"Jens Gudmundsson, Jens Leth Hougaard, Juan D. Moreno-Ternero, Lars Peter Østerdal","http://arxiv.org/abs/2311.12494v1, http://arxiv.org/pdf/2311.12494v1",econ.TH
34704,th,"Communication is rarely perfect, but rather prone to error of transmission
and reception. Often the origin of these errors cannot be properly quantified
and is thus imprecisely known. We analyze the impact of an ambiguous noise
which may alter the received message on a communication game of common
interest. The noise is ambiguous in the sense that the parameters of the
error-generating process, and thus the likelihood of receiving a message by
mistake, are unknown in the Knightian sense. Ex-ante and interim responses are
characterized under maxmin preferences. While the sender can disregard
ambiguity, the receiver reveals a dynamically inconsistent, but astonishing
behavior under a quadratic loss. Their interim actions will be closer to the
pooling action than their ex-ante ones, as if facing a higher likelihood of an
error occurring.",Underreaction and dynamic inconsistency in communication games under noise,2023-11-21 13:14:05,Gerrit Bauch,"http://arxiv.org/abs/2311.12496v1, http://arxiv.org/pdf/2311.12496v1",econ.TH
34705,th,"It is well known that individual beliefs cannot be identified using
traditional choice data, unless we exogenously assume state-independent
utilities. In this paper, I propose a novel methodology that solves this
long-standing identification problem in a simple way. This method relies on
extending the state space by introducing a proxy, for which the agent has no
stakes conditional on the original state space. The latter allows us to
identify the agent's conditional beliefs about the proxy given each state
realization, which in turn suffices for indirectly identifying her beliefs
about the original state space. This approach is analogous to the one of
instrumental variables in econometrics. Similarly to instrumental variables,
the appeal of this method comes from the flexibility in selecting a proxy.",Belief identification by proxy,2023-11-22 16:44:10,Elias Tsakas,"http://arxiv.org/abs/2311.13394v1, http://arxiv.org/pdf/2311.13394v1",econ.TH
34706,th,"This review paper focuses on enhancing organizational economic sustainability
through process optimization and effective human capital management utilizing
the soft systems methodology (SSM), which offers a holistic approach
for understanding complex real-world challenges. By emphasizing systems
thinking and engaging diverse stakeholders in problem-solving, SSM provides a
comprehensive understanding of the problem's context and potential solutions.
The approach guides a systematic process of inquiry that leads to feasible and
desirable changes in tackling complex problems effectively. Our paper employs
the bibliometric analysis based on the sample of 5171 research articles,
proceedings papers, and book chapters indexed in the Web of Science (WoS) database.
We carry out the network cluster analysis using the text data and the
bibliometric data with the help of VOSViewer software. Our results confirm that
as the real-world situations are becoming more complex and the new challenges
such as global warming and climate change are threatening many economic and
social processes, the SSM approach is currently returning to the forefront of
academic research related to such topics as organizational management and
sustainable human capital efficiency.",Organizational economic sustainability via process optimization and human capital: a Soft Systems Methodology (SSM) approach,2023-11-29 21:28:19,"Wadim Strielkowski, Evgeny Kuzmin, Arina Suvorova, Natalya Nikitina, Olga Gorlova","http://arxiv.org/abs/2311.17882v2, http://arxiv.org/pdf/2311.17882v2",econ.TH
34707,th,"We investigate optimal carbon abatement in a dynamic general equilibrium
climate-economy model with endogenous structural change. By differentiating the
production of investment from consumption, we show that social cost of carbon
can be conceived as a reduction in physical capital. In addition, we
distinguish two final sectors in terms of productivity growth and climate
vulnerability. We theoretically show that heterogeneous climate vulnerability
results in a climate-induced version of Baumol's cost disease. Further, if
climate-vulnerable sectors have high (low) productivity growth, climate impact
can either ameliorate (aggravate) Baumol's cost disease, calling for less
(more) stringent climate policy. We conclude that carbon abatement should not
only factor in unpriced climate capital, but also be tailored to Baumol's cost
and climate diseases.",Baumol's Climate Disease,2023-11-30 22:45:13,"Fangzhi Wang, Hua Liao, Richard S. J. Tol","http://arxiv.org/abs/2312.00160v1, http://arxiv.org/pdf/2312.00160v1",econ.TH
34708,th,"In this paper, players contribute to two local public goods for which they
have different tastes and sponsor costly links to enjoy the provision of
others. In equilibrium, either there are several contributors specialized in
public good provision or only two contributors who are not entirely
specialized. Higher linking costs have a non-monotonic impact on welfare and
polarization, as they affect who specializes in public good provision. When the
available budget is small, subsidies should be given to players who already
specialize in public good provision; otherwise, they should target only one
player who specializes in public good provision.",Homophily and Specialization in Networks,2023-12-01 12:45:58,"Patrick Allmis, Luca Paolo Merlino","http://arxiv.org/abs/2312.00457v1, http://arxiv.org/pdf/2312.00457v1",econ.TH
34709,th,"How does receiver commitment affect incentives for information revelation in
Bayesian persuasion? We study many-sender persuasion games where a single
receiver commits to a posterior-dependent action profile, or allocation, before
senders design the informational environment. We develop a novel
revelation-like principle for ex-ante mechanism design settings where sender
reports are Blackwell experiments and use it to characterize the set of
implementable allocations in our model. We show global incentive constraints
are pinned down by ""worst-case"" punishments at finitely many posterior beliefs,
whose values are independent of the allocation. Moreover, the receiver will
generically benefit from the ability to randomize over deterministic outcomes
when solving for the constrained optimal allocation, in contrast to standard
mechanism design models. Finally, we apply our results to analyze efficiency in
multi-good allocation problems, full surplus extraction in auctions with
allocation externalities, and optimal audit design, highlighting the role that
monotone mechanisms play in these settings.",Ex-Ante Design of Persuasion Games,2023-12-05 06:34:48,"Eric Gao, Daniel Luo","http://arxiv.org/abs/2312.02465v2, http://arxiv.org/pdf/2312.02465v2",econ.TH
34710,th,"Most contributions in the algorithmic collusion literature only consider
symmetric algorithms interacting with each other. We study a simple model of
algorithmic collusion in which Q-learning algorithms repeatedly play a
prisoner's dilemma and allow players to choose different exploration policies.
We characterize behavior of such algorithms with asymmetric policies for
extreme values and prove that any Nash equilibrium features some cooperative
behavior. We further investigate the dynamics for general profiles of
exploration policy by running extensive numerical simulations which indicate
symmetry of equilibria, and give insight into their distribution.",Spontaneous Coupling of Q-Learning Algorithms in Equilibrium,2023-12-05 13:35:41,Ivan Conjeaud,"http://arxiv.org/abs/2312.02644v1, http://arxiv.org/pdf/2312.02644v1",econ.TH
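A minimal simulation of the environment described above, with two Q-learning players using possibly asymmetric epsilon-greedy exploration in a repeated prisoner's dilemma, might look as follows; the state representation, payoffs, and learning parameters are assumptions of this sketch rather than the paper's exact specification.

```python
import numpy as np

# Row player's stage-game payoffs (columns index the opponent's action).
# Actions: 0 = cooperate, 1 = defect.
PAYOFF = np.array([[3.0, 0.0],
                   [4.0, 1.0]])

def run(eps=(0.10, 0.05), alpha=0.1, gamma=0.95, periods=50_000, seed=0):
    """Two Q-learning players with asymmetric exploration rates eps[0], eps[1].
    Each conditions on the previous action profile (4 states), a common
    simplification in the algorithmic-collusion literature."""
    rng = np.random.default_rng(seed)
    Q = [np.zeros((4, 2)), np.zeros((4, 2))]
    state = 0
    for _ in range(periods):
        acts = []
        for i in range(2):
            if rng.random() < eps[i]:
                acts.append(int(rng.integers(2)))       # explore
            else:
                acts.append(int(Q[i][state].argmax()))  # exploit
        next_state = 2 * acts[0] + acts[1]
        rewards = [PAYOFF[acts[0], acts[1]], PAYOFF[acts[1], acts[0]]]
        for i in range(2):
            target = rewards[i] + gamma * Q[i][next_state].max()
            Q[i][state, acts[i]] += alpha * (target - Q[i][state, acts[i]])
        state = next_state
    return Q

Q = run()
print(Q[0].argmax(axis=1), Q[1].argmax(axis=1))  # each player's greedy action in the 4 states
```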
34711,th,"We discuss price competition when positive network effects are the only other
factor in consumption choices. We show that partitioning consumers into two
groups creates a rich enough interaction structure to induce negative marginal
demand and produce pure price equilibria where both firms profit. The crucial
condition is one group has centripetal influence while the other has
centrifugal influence. The result contrasts with the case in which positive
network effects depend on a single aggregate variable and challenges the prevalent assumption
that demand must be micro-founded on a distribution of consumer characteristics
with specific properties, highlighting the importance of interaction structures
in shaping market outcomes.",Two is enough: a flip on Bertrand through positive network effects,2023-12-05 19:22:47,"Renato Soeiro, Alberto Pinto","http://arxiv.org/abs/2312.02865v1, http://arxiv.org/pdf/2312.02865v1",econ.TH
34712,th,"Given a dynamic ordinal game, we deem a strategy sequentially rational if
there exist a Bernoulli utility function and a conditional probability system
with respect to which the strategy is a maximizer. We establish a complete
class theorem by characterizing sequential rationality via the new Conditional
B-Dominance. Building on this notion, we introduce Iterative Conditional
B-Dominance, which is an iterative elimination procedure that characterizes the
implications of forward induction in the class of games under scrutiny and
selects the unique backward induction outcome in dynamic ordinal games with
perfect information satisfying a genericity condition. Additionally, we show
that Iterative Conditional B-Dominance, as a `forward induction reasoning'
solution concept, captures: $(i)$ the unique backward induction outcome
obtained via sophisticated voting in binary agendas with sequential majority
voting; $(ii)$ farsightedness in dynamic ordinal games derived from social
environments; $(iii)$ a unique outcome in ordinal Money-Burning Games.",Revealing Sequential Rationality and Forward Induction,2023-12-06 17:59:52,Pierfrancesco Guarino,"http://arxiv.org/abs/2312.03536v1, http://arxiv.org/pdf/2312.03536v1",econ.TH
34713,th,"We investigate inherent stochasticity in individual choice behavior across
diverse decisions. Each decision is modeled as a menu of actions with outcomes,
and a stochastic choice rule assigns probabilities to actions based on the
outcome profile. Outcomes can be monetary values, lotteries, or elements of an
abstract outcome space. We characterize decomposable rules: those that predict
independent choices across decisions not affecting each other. For monetary
outcomes, such rules form the one-parametric family of multinomial logit rules.
For general outcomes, there exists a universal utility function on the set of
outcomes, such that choice follows multinomial logit with respect to this
utility. The conclusions are robust to replacing strict decomposability with an
approximate version or allowing minor dependencies on the actions' labels.
Applications include choice over time, under risk, and with ambiguity.",Decomposable Stochastic Choice,2023-12-08 07:52:52,"Fedor Sandomirskiy, Omer Tamuz","http://arxiv.org/abs/2312.04827v1, http://arxiv.org/pdf/2312.04827v1",econ.TH
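For monetary outcomes, the one-parameter family referred to above is the multinomial logit; the snippet below uses the standard textbook parameterization purely for illustration.

```python
import numpy as np

def multinomial_logit(outcomes, lam=1.0):
    """Choice probabilities over a menu of monetary outcomes:
    P(action a) is proportional to exp(lam * outcome_a)."""
    z = lam * np.asarray(outcomes, dtype=float)
    z -= z.max()               # shift by the maximum for numerical stability
    p = np.exp(z)
    return p / p.sum()

print(multinomial_logit([10.0, 12.0, 8.0], lam=0.5))
```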
34714,th,"The benefits of investments in sustainability are often public goods that
accrue at broad scales and to many people. Urban forests exemplify this; trees
supply ecosystem service benefits from local (in the case of shade) to global
(in the case of carbon sequestration) scales. The complex mosaic of public and
private ownership that typically defines an urban forest makes the public goods
problem of investing in forest sustainability especially acute. This results in
incentives for private tree owners to invest in tree care that typically fall
short of those of a public forest manager aiming for the social optimum. The
management of a forest pest, such as emerald ash borer, provides a salient
focus area because pests threaten the provision of public goods from urban
forests and pest management generates feedback that alters pest spread and
shapes future risks. We study how managers can design policies to address
forest pest outbreaks and achieve uniform management across a mosaic of
ownership types. We develop a game theoretic model to derive optimal subsidies
for the treatment of forest pests and evaluate the efficacy of these policies
in mitigating the spread of forest pests with a dynamic epidemiological model.
Our results suggest that a combination of optimal treatment subsidies for
privately owned trees and targeted treatment of public trees can be far more
effective at reducing pest-induced tree mortality than either approach in
isolation. While we focus on the management of urban forests, designing
programs that align private and public incentives for investment in public
goods could advance sustainability in a wide range of systems.",Public policy for management of forest pests within an ownership mosaic,2023-12-09 02:00:15,"Andrew R. Tilman, Robert G. Haight","http://arxiv.org/abs/2312.05403v1, http://arxiv.org/pdf/2312.05403v1",econ.TH
34715,th,"How does Artificial Intelligence (AI) affect the organization of work and the
structure of wages? We study this question in a model where heterogeneous
agents in terms of knowledge--humans and machines--endogenously sort into
hierarchical teams: Less knowledgeable agents become ""workers"" (i.e., execute
routine tasks), while more knowledgeable agents become ""managers"" (i.e.,
specialize in problem solving). When AI's knowledge is equivalent to that of a
pre-AI worker, AI displaces humans from routine work into managerial work
compared to the pre-AI outcome. In contrast, when AI's knowledge is that of a
pre-AI manager, it shifts humans from managerial work to routine work. AI
increases total human labor income, but it necessarily creates winners and
losers: When AI's knowledge is low, only the most knowledgeable humans
experience income gains. In contrast, when AI's knowledge is high, both
extremes of the knowledge distribution benefit. In any case, the introduction
of AI harms the middle class.",Artificial Intelligence in the Knowledge Economy,2023-12-09 09:59:55,"Enrique Ide, Eduard Talamas","http://arxiv.org/abs/2312.05481v1, http://arxiv.org/pdf/2312.05481v1",econ.TH
34716,th,"It is well known that ex ante social preferences and expected utility are not
always compatible. In this note, we introduce a novel framework that naturally
separates social preferences from selfish preferences to answer the following
question: What specific forms of social preferences can be accommodated within
the expected utility paradigm? In a departure from existing frameworks, our
framework reveals that ex ante social preferences are not inherently in
conflict with expected utility in games, provided a decision-maker's aversion
to randomization in selfish utility ""counterbalances"" her social preference for
randomization. We also show that when a player's preferences in both the game
(against another player) and the associated decision problem (against Nature)
conform to expected utility axioms, the permissible range of social preferences
becomes notably restricted. Only under this condition do we reaffirm the
existing literature's key insight regarding the incompatibility of ex ante
inequality aversion with expected utility.",Social preferences and expected utility,2023-12-11 03:35:33,"Mehmet S. Ismail, Ronald Peeters","http://arxiv.org/abs/2312.06048v1, http://arxiv.org/pdf/2312.06048v1",econ.TH
34717,th,"This note gives a simpler proof of the main representation theorem in Gilboa
and Schmeidler (1989).",A Simpler Proof of Gilboa and Schmeidler's (1989) Maxmin Expected Utility Representation,2023-12-11 07:25:46,Ian Ball,"http://arxiv.org/abs/2312.06107v1, http://arxiv.org/pdf/2312.06107v1",econ.TH
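For context, the representation in question evaluates an act $f$ by its worst-case expected utility over a closed, convex set $C$ of priors,

\[
V(f) \;=\; \min_{p \in C} \int u\big(f(\omega)\big)\, dp(\omega),
\]

which is the standard statement of the Gilboa-Schmeidler maxmin expected utility representation; the axioms delivering it are in the cited papers.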
34725,th,"I study repeated games with anonymous random matching where players can erase
signals from their records. When players are sufficiently long-lived and have
strictly dominant actions, they will play their dominant actions with
probability close to one in all equilibria. When players' expected lifespans
are intermediate, there exist purifiable equilibria with a positive level of
cooperation in the submodular prisoner's dilemma but not in the supermodular
prisoner's dilemma. Therefore, the maximal level of cooperation a community can
sustain is not monotone with respect to players' expected lifespans and the
complementarity in players' actions can undermine their abilities to sustain
cooperation.",Community Enforcement with Endogenous Records,2024-01-01 21:39:01,Harry Pei,"http://arxiv.org/abs/2401.00839v1, http://arxiv.org/pdf/2401.00839v1",econ.TH
34718,th,"We present a model of a forecaster who must predict the future value of a
variable that depends on an exogenous state and on the intervention of a
policymaker. Our focus is on the incentives of the forecaster to acquire costly
private information to use in his forecasting exercise. We show that the
policy-making environment plays a crucial role in determining the incentives of
the forecaster to acquire information. Key parameters are the expected strength
of policy intervention, the precision of the policymaker's private information,
and the precision of public information. We identify conditions, which are
plausible in applications, under which the forecaster optimally acquires little
or no private information, and instead bases his forecast exclusively on
information publicly known at the time the forecast is made. Furthermore we
show that, also under plausible conditions, stronger policy intervention and
more precise policymaker's information crowd out the forecaster's information
acquisition.",Optimal Information Acquisition Under Intervention,2023-12-13 00:54:55,Augusto Nieto-Barthaburu,"http://arxiv.org/abs/2312.07757v1, http://arxiv.org/pdf/2312.07757v1",econ.TH
34719,th,"The Market for Lemons is a classic model of asymmetric information first
studied by Nobel Prize economist George Akerlof. It shows that information
asymmetry between the seller and buyer may result in market collapse or some
sellers leaving the market. ""Lemons"" in the used car market are cars of poor
quality. The information asymmetry present is that the buyer is uncertain of
the cars' true quality. I first offer a simple baseline model that illustrates
the market collapse, and then examine what happens when regulation, i.e., a DMV,
is introduced to reveal (signal) the true car quality to the buyer. The effect
on the market varies based on the assumptions about the regulator. The central
focus is on the DMV's signal structure, which can have interesting effects on
the market and the information asymmetry. I show that surprisingly, when the
DMV actually decreases the quality of their signal in a well constructed way,
it can substantially increase their profit. On the other hand, this negatively
affects overall welfare.",The Market for Lemons and the Regulator's Signalling Problem,2023-12-18 05:55:01,Roy Long,"http://arxiv.org/abs/2312.10896v1, http://arxiv.org/pdf/2312.10896v1",econ.TH
34720,th,"Ethereum is undergoing significant changes to its architecture as it evolves.
These changes include its switch to PoS consensus and the introduction of
significant infrastructural changes that do not require a change to the core
protocol, but that fundamentally affect the way users interact with the
network. These changes represent an evolution toward a more modular
architecture, in which there exist new exogenous vectors for centralization.
This paper builds on previous studies of decentralization of Ethereum to
reflect these recent significant changes, and Ethereum's new modular paradigm.",Measuring the Concentration of Control in Contemporary Ethereum,2023-12-22 12:47:52,Simon Brown,"http://arxiv.org/abs/2312.14562v1, http://arxiv.org/pdf/2312.14562v1",econ.TH
34721,th,"India has enacted an intricate affirmative action program through a
reservation system since the 1950s. Notably, in 2008, a historic judgment by
the Supreme Court of India (SCI) in the case of Ashoka Kumar Thakur vs. Union
of India mandated a 27 percent reservation to the Other Backward Classes (OBC).
The SCI's ruling suggested implementing the OBC reservation as a soft reserve
without defining a procedural framework. The SCI recommended a maximum of 10
points difference between the cutoff scores of the open-category and OBC
positions. We show that this directive conflicts with India's fundamental
Supreme Court mandates on reservation policy. Moreover, we show that the
score-elevated reserve policy proposed by Sönmez and Yenmez (2022) is
inconsistent with this directive.",Inconsistency of Score-Elevated Reserve Policy for Indian Affirmative Action,2023-12-22 15:34:22,"Orhan Aygün, Bertan Turhan","http://arxiv.org/abs/2312.14648v2, http://arxiv.org/pdf/2312.14648v2",econ.TH
34722,th,"Industries learn productivity improvements from their suppliers. The observed
empirical importance of these interactions, often omitted by input-output
models, mandates larger attention. This article embeds interdependent total
factor productivity (TFP) growth into a general non-parametric input-output
model. TFP growth is assumed to be Cobb-Douglas in TFP-stocks of adjacent
sectors, where elasticities are the input-output coefficients. Studying how the
steady state of the system reacts to changes in research effort yields insights
for policy and the input-output literature. First, industries higher in the
supply chain see a greater multiplication of their productivity gains. Second,
the presence of `laggard' industries can bottleneck the rest of the
economy. By deriving these insights formally, we review a canonical method for
aggregating TFP -- Hulten's Theorem -- and show the potential importance of
backward linkages.",Interdependent Total Factor Productivity in an Input-Output model,2023-12-24 01:10:05,"Thomas M. Bombarde, Andrew L. Krause","http://arxiv.org/abs/2312.15362v1, http://arxiv.org/pdf/2312.15362v1",econ.TH
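In symbols, the assumed law of motion can be written as follows (illustrative notation transcribing the verbal description above, not the paper's exact equation): with $A_i$ the TFP stock of industry $i$, $r_i$ its research effort, and $\omega_{ij}$ the input-output coefficient on supplier $j$,

\[
\text{TFP growth of industry } i \;\propto\; r_i \prod_{j} A_j^{\,\omega_{ij}},
\]

so that productivity gains propagate downstream through the input-output network.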
34723,th,"In the context of the urgent need to establish sustainable food systems,
Community Supported Agriculture (CSA), in which consumers share risks with
producers, has gained increasing attention. Understanding the factors that
influence consumer participation in CSA is crucial, yet the complete picture
and interrelations of these factors remain unclear in existing studies. This
research adopts a scoping review and the KJ method to elucidate the factors
influencing consumer participation in CSA and to theorize the consumer
participation. In particular, we focus on the dynamics of individual
decision-making for participation, under the premise that individuals are
embedded in socio-cultural environments. We examine the decision-making process
based on the seesaw of expected gains and losses from participation, along with
the reflexivity to the individual and the process of updating decision-making
post-participation. Our study highlights how individual decision-making for
participation is influenced by relationships with others within the embedded
socio-cultural environment, as well as by attachment and connection to the
community. It also shows that discrepancies between expectations and
experiences post-participation, and the transformation of social capital,
promote the updating of decision-making processes. Although there are
limitations, the insights gained from this study offer profound implications
for stakeholders and provide valuable insights for more sustainable and
efficient CSA practices. This research establishes a robust foundation for
future studies on CSA.",Consumer Participation in Community-Supported Agriculture: Modeling Decision-Making Dynamics through a Scoping Review and the KJ Method,2023-12-29 12:24:23,"Sota Takagi, Yusuke Numazawa, Kentaro Katsube, Wataru Omukai, Miki Saijo, Takumi Ohashi","http://arxiv.org/abs/2312.17529v1, http://arxiv.org/pdf/2312.17529v1",econ.TH
34724,th,"This paper studies a discrete-time version of the Lucas-Uzawa endogenous
growth model with physical and human capital. Equilibrium existence is proved
applying tools of dynamic programming with unbounded returns. The proofs rely
on properties of homogeneous functions and also apply well-known inequalities
in real analysis, seldom used in the literature, which significantly simplifies
the task of verifying certain assumptions that are rather technical in nature.",Equilibrium existence in a discrete-time endogenous growth model with physical and human capital,2023-12-31 01:52:59,Luis Alcala,"http://arxiv.org/abs/2401.00342v1, http://arxiv.org/pdf/2401.00342v1",econ.TH
34729,th,"Arrow's theorem implies that a social choice function satisfying
Transitivity, the Pareto Principle (Unanimity) and Independence of Irrelevant
Alternatives (IIA) must be dictatorial. When non-strict preferences are
allowed, a dictatorial social choice function is defined as a function for
which there exists a single voter whose strict preferences are followed. This
definition allows for many different dictatorial functions. In particular, we
construct examples of dictatorial functions which do not satisfy Transitivity
and IIA. Thus Arrow's theorem, in the case of non-strict preferences, does not
provide a complete characterization of all social choice functions satisfying
Transitivity, the Pareto Principle, and IIA.
  The main results of this article provide such a characterization for Arrow's
theorem, as well as for follow-up results by Wilson. In particular, we
strengthen Arrow's and Wilson's result by giving an exact if and only if
condition for a function to satisfy Transitivity and IIA (and the Pareto
Principle). Additionally, we derive formulas for the number of functions
satisfying these conditions.",Complete Characterization of Functions Satisfying the Conditions of Arrow's Theorem,2009-10-13 23:01:35,"Elchanan Mossel, Omer Tamuz","http://dx.doi.org/10.1007/s00355-011-0547-0, http://arxiv.org/abs/0910.2465v2, http://arxiv.org/pdf/0910.2465v2",math.CO
34730,th,"Maximizing the revenue from selling _more than one_ good (or item) to a
single buyer is a notoriously difficult problem, in stark contrast to the
one-good case. For two goods, we show that simple ""one-dimensional"" mechanisms,
such as selling the goods separately, _guarantee_ at least 73% of the optimal
revenue when the valuations of the two goods are independent and identically
distributed, and at least $50\%$ when they are independent. For the case of
$k>2$ independent goods, we show that selling them separately guarantees at
least a $c/\log^2 k$ fraction of the optimal revenue; and, for independent and
identically distributed goods, we show that selling them as one bundle
guarantees at least a $c/\log k$ fraction of the optimal revenue. Additional
results compare the revenues from the two simple mechanisms of selling the
goods separately and bundled, identify situations where bundling is optimal,
and extend the analysis to multiple buyers.",Approximate Revenue Maximization with Multiple Items,2012-04-09 13:08:10,"Sergiu Hart, Noam Nisan","http://dx.doi.org/10.1016/j.jet.2017.09.001, http://arxiv.org/abs/1204.1846v3, http://arxiv.org/pdf/1204.1846v3",cs.GT
34731,th,"We consider the well known, and notoriously difficult, problem of a single
revenue-maximizing seller selling two or more heterogeneous goods to a single
buyer whose private values for the goods are drawn from a (possibly correlated)
known distribution, and whose valuation is additive over the goods. We show
that when there are two (or more) goods, _simple mechanisms_ -- such as selling
the goods separately or as a bundle -- _may yield only a negligible fraction of
the optimal revenue_. This resolves the open problem of Briest, Chawla,
Kleinberg, and Weinberg (JET 2015) who prove the result for at least three
goods in the related setup of a unit-demand buyer. We also introduce the menu
size as a simple measure of the complexity of mechanisms, and show that the
revenue may increase polynomially with _menu size_ and that no bounded menu
size can ensure any positive fraction of the optimal revenue. The menu size
also turns out to ""pin down"" the revenue properties of deterministic
mechanisms.","Selling Multiple Correlated Goods: Revenue Maximization and Menu-Size Complexity (old title: ""The Menu-Size Complexity of Auctions"")",2013-04-23 01:02:01,"Sergiu Hart, Noam Nisan","http://dx.doi.org/10.1016/j.jet.2019.07.006, http://arxiv.org/abs/1304.6116v3, http://arxiv.org/pdf/1304.6116v3",cs.GT
34732,th,"For many application areas A/B testing, which partitions users of a system
into an A (control) and B (treatment) group to experiment between several
application designs, enables Internet companies to optimize their services to
the behavioral patterns of their users. Unfortunately, the A/B testing
framework cannot be applied in a straightforward manner to applications like
auctions where the users (a.k.a., bidders) submit bids before the partitioning
into the A and B groups is made. This paper combines auction theoretic modeling
with the A/B testing framework to develop methodology for A/B testing auctions.
The accuracy of our method is directly comparable to ideal A/B testing where
there is no interference between A and B. Our results
are based on an extension and improved analysis of the inference method of
Chawla et al. (2014).",A/B Testing of Auctions,2016-06-03 00:47:39,"Shuchi Chawla, Jason D. Hartline, Denis Nekipelov","http://arxiv.org/abs/1606.00908v1, http://arxiv.org/pdf/1606.00908v1",cs.GT
34733,th,"Which equilibria will arise in signaling games depends on how the receiver
interprets deviations from the path of play. We develop a micro-foundation for
these off-path beliefs, and an associated equilibrium refinement, in a model
where equilibrium arises through non-equilibrium learning by populations of
patient and long-lived senders and receivers. In our model, young senders are
uncertain about the prevailing distribution of play, so they rationally send
out-of-equilibrium signals as experiments to learn about the behavior of the
population of receivers. Differences in the payoff functions of the types of
senders generate different incentives for these experiments. Using the Gittins
index (Gittins, 1979), we characterize which sender types use each signal more
often, leading to a constraint on the receiver's off-path beliefs based on
""type compatibility"" and hence a learning-based equilibrium selection.",Learning and Type Compatibility in Signaling Games,2017-02-07 02:05:56,"Drew Fudenberg, Kevin He","http://dx.doi.org/10.3982/ECTA15085, http://arxiv.org/abs/1702.01819v3, http://arxiv.org/pdf/1702.01819v3",q-fin.EC
34734,th,"In the classical herding literature, agents receive a private signal
regarding a binary state of nature, and sequentially choose an action, after
observing the actions of their predecessors. When the informativeness of
private signals is unbounded, it is known that agents converge to the correct
action and correct belief. We study how quickly convergence occurs, and show
that it happens more slowly than it does when agents observe signals. However,
we also show that the speed of learning from actions can be arbitrarily close
to the speed of learning from signals. In particular, the expected time until
the agents stop taking the wrong action can be either finite or infinite,
depending on the private signal distribution. In the canonical case of Gaussian
private signals we calculate the speed of convergence precisely, and show
explicitly that, in this case, learning from actions is significantly slower
than learning from signals.",The speed of sequential asymptotic learning,2017-07-10 06:57:37,"Wade Hann-Caruthers, Vadim V. Martynov, Omer Tamuz","http://dx.doi.org/10.1016/j.jet.2017.11.009, http://arxiv.org/abs/1707.02689v3, http://arxiv.org/pdf/1707.02689v3",math.PR
35492,th,"A private private information structure delivers information about an unknown
state while preserving privacy: An agent's signal contains information about
the state but remains independent of others' sensitive or private information.
We study how informative such structures can be, and characterize those that
are optimal in the sense that they cannot be made more informative without
violating privacy. We connect our results to fairness in recommendation systems
and explore a number of further applications.",Private Private Information,2021-12-29 04:30:39,"Kevin He, Fedor Sandomirskiy, Omer Tamuz","http://arxiv.org/abs/2112.14356v3, http://arxiv.org/pdf/2112.14356v3",econ.TH
34735,th,"This paper examines signalling when the sender exerts effort and receives
benefits over time. Receivers only observe a noisy public signal about the
effort, which has no intrinsic value.
  The modelling of signalling in a dynamic context gives rise to novel
equilibrium outcomes. In some equilibria, a sender with a higher cost of effort
exerts strictly more effort than his low-cost counterpart. The low-cost type
can compensate later for initial low effort, but this is not worthwhile for a
high-cost type. The interpretation of a given signal switches endogenously over
time, depending on which type the receivers expect to send it.
  JEL classification: D82, D83, C73.
  Keywords: Dynamic games, signalling, incomplete information",Good signals gone bad: dynamic signalling with switching efforts,2017-07-15 10:18:27,Sander Heinsalu,"http://arxiv.org/abs/1707.04699v1, http://arxiv.org/pdf/1707.04699v1",q-fin.EC
34737,th,"Separate selling of two independent goods is shown to yield at least 62% of
the optimal revenue, and at least 73% when the goods satisfy the Myerson
regularity condition. This improves the 50% result of Hart and Nisan (2017,
originally circulated in 2012).",The Better Half of Selling Separately,2017-12-25 02:38:29,"Sergiu Hart, Philip J. Reny","http://dx.doi.org/10.1145/3369927, http://arxiv.org/abs/1712.08973v2, http://arxiv.org/pdf/1712.08973v2",cs.GT
34739,th,"In a continuous-time setting where a risk-averse agent controls the drift of
an output process driven by a Brownian motion, optimal contracts are linear in
the terminal output; this result is well-known in a setting with moral hazard
and, under stronger assumptions, adverse selection. We show that this result
continues to hold when in addition reservation utilities are type-dependent.
This type of problem occurs in the study of optimal compensation problems
involving competing principals.",Optimal contracts under competition when uncertainty from adverse selection and moral hazard are present,2018-01-12 11:10:20,N. Packham,"http://dx.doi.org/10.1016/j.spl.2018.01.014, http://arxiv.org/abs/1801.04080v1, http://arxiv.org/pdf/1801.04080v1",q-fin.PM
34740,th,"We study the problem of fairly allocating a set of indivisible goods among
agents with additive valuations. The extent of fairness of an allocation is
measured by its Nash social welfare, which is the geometric mean of the
valuations of the agents for their bundles. While the problem of maximizing
Nash social welfare is known to be APX-hard in general, we study the
effectiveness of simple, greedy algorithms in solving this problem in two
interesting special cases.
  First, we show that a simple, greedy algorithm provides a 1.061-approximation
guarantee when agents have identical valuations, even though the problem of
maximizing Nash social welfare remains NP-hard for this setting. Second, we
show that when agents have binary valuations over the goods, an exact solution
(i.e., a Nash optimal allocation) can be found in polynomial time via a greedy
algorithm. Our results in the binary setting extend to provide novel, exact
algorithms for optimizing Nash social welfare under concave valuations.
Notably, for the above mentioned scenarios, our techniques provide a simple
alternative to several of the existing, more sophisticated techniques for this
problem such as constructing equilibria of Fisher markets or using real stable
polynomials.",Greedy Algorithms for Maximizing Nash Social Welfare,2018-01-27 09:49:38,"Siddharth Barman, Sanath Kumar Krishnamurthy, Rohit Vaish","http://arxiv.org/abs/1801.09046v1, http://arxiv.org/pdf/1801.09046v1",cs.GT
34741,th,"In previous studies of spatial public goods game, each player is able to
establish a group. However, in real life, some players cannot successfully
organize groups for various reasons. In this paper, we propose a mechanism of
reputation-driven group formation, in which groups can only be organized by
players whose reputation reaches or exceeds a threshold. We define a player's
reputation as the frequency of cooperation in the last $T$ time steps. We find
that the highest cooperation level can be obtained when groups are only
established by pure cooperators who always cooperate in the last $T$ time
steps. Effects of the memory length $T$ on cooperation are also studied.",Promoting cooperation by reputation-driven group formation,2018-02-05 06:32:35,"Han-Xin Yang, Zhen Wang","http://arxiv.org/abs/1802.01253v1, http://arxiv.org/pdf/1802.01253v1",physics.soc-ph
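The mechanism described above lends itself to a short simulation. The sketch below is only a rough illustration under simplifying assumptions (a ring of players instead of the paper's spatial lattice, groups formed by an organizer and its two neighbours, a Fermi imitation rule, and toy values for the threshold, enhancement factor, and memory length T); it is not the authors' model or parametrization.

```python
import math
import random

# Toy parameters (assumptions): ring size N, memory length T, reputation
# threshold, public-goods enhancement factor, and Fermi noise K.
N, T, R_THRESHOLD, ENHANCEMENT, K = 100, 10, 0.8, 3.0, 0.1

random.seed(0)
strategy = [random.randint(0, 1) for _ in range(N)]   # 1 = cooperate, 0 = defect
history = [[strategy[i]] for i in range(N)]           # recent actions per player

def reputation(i):
    recent = history[i][-T:]                           # cooperation frequency over last T steps
    return sum(recent) / len(recent)

def group_of(organizer):
    return [(organizer - 1) % N, organizer, (organizer + 1) % N]

def payoff(i):
    total = 0.0
    for organizer in group_of(i):                      # i joins groups organized by itself or a neighbour
        if reputation(organizer) < R_THRESHOLD:        # only high-reputation players may organize
            continue
        members = group_of(organizer)
        pot = ENHANCEMENT * sum(strategy[j] for j in members)
        total += pot / len(members) - strategy[i]      # equal share of the pot minus own contribution
    return total

for _ in range(200):
    pay = [payoff(i) for i in range(N)]
    nxt = strategy[:]
    for i in range(N):
        j = random.choice([(i - 1) % N, (i + 1) % N])
        # Fermi rule: imitate neighbour j with probability increasing in the payoff gap
        if random.random() < 1.0 / (1.0 + math.exp(-(pay[j] - pay[i]) / K)):
            nxt[i] = strategy[j]
    strategy = nxt
    for i in range(N):
        history[i].append(strategy[i])

print("final cooperation level:", sum(strategy) / N)
```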
34916,th,"Increasing the infection risk early in an epidemic is individually and
socially optimal under some parameter values. The reason is that the early
patients recover or die before the peak of the epidemic, which flattens the
peak. This improves welfare if the peak exceeds the capacity of the healthcare
system and the social loss rises rapidly enough in the number infected. The
individual incentive to get infected early comes from the greater likelihood of
receiving treatment than at the peak when the disease has overwhelmed
healthcare capacity. Calibration to the Covid-19 pandemic data suggests that
catching the infection at the start was individually optimal and for some loss
functions would have reduced the aggregate loss.",Infection arbitrage,2020-04-18 23:39:46,Sander Heinsalu,"http://arxiv.org/abs/2004.08701v2, http://arxiv.org/pdf/2004.08701v2",q-bio.PE
34742,th,"We study the problem of fairly dividing a heterogeneous resource, commonly
known as cake cutting and chore division, in the presence of strategic agents.
While a number of results in this setting have been established in previous
works, they rely crucially on the free disposal assumption, meaning that the
mechanism is allowed to throw away part of the resource at no cost. In the
present work, we remove this assumption and focus on mechanisms that always
allocate the entire resource. We exhibit a truthful and envy-free mechanism for
cake cutting and chore division for two agents with piecewise uniform
valuations, and we complement our result by showing that such a mechanism does
not exist when certain additional constraints are imposed on the mechanisms.
Moreover, we provide bounds on the efficiency of mechanisms satisfying various
properties, and give truthful mechanisms for multiple agents with restricted
classes of valuations.",Truthful Fair Division without Free Disposal,2018-04-19 00:38:48,"Xiaohui Bei, Guangda Huzhang, Warut Suksompong","http://dx.doi.org/10.1007/s00355-020-01256-0, http://arxiv.org/abs/1804.06923v2, http://arxiv.org/pdf/1804.06923v2",cs.GT
34743,th,"We study the allocation of divisible goods to competing agents via a market
mechanism, focusing on agents with Leontief utilities. The majority of the
economics and mechanism design literature has focused on \emph{linear} prices,
meaning that the cost of a good is proportional to the quantity purchased.
Equilibria for linear prices are known to be exactly the maximum Nash welfare
allocations.
  \emph{Price curves} allow the cost of a good to be any (increasing) function
of the quantity purchased. We show that price curve equilibria are not limited
to maximum Nash welfare allocations with two main results. First, we show that
an allocation can be supported by strictly increasing price curves if and only
if it is \emph{group-domination-free}. A similar characterization holds for
weakly increasing price curves. We use this to show that given any allocation,
we can compute strictly (or weakly) increasing price curves that support it (or
show that none exist) in polynomial time. These results involve a connection to
the \emph{agent-order matrix} of an allocation, which may have other
applications. Second, we use duality to show that in the bandwidth allocation
setting, any allocation maximizing a CES welfare function can be supported by
price curves.",Markets Beyond Nash Welfare for Leontief Utilities,2018-07-14 01:08:41,"Ashish Goel, Reyna Hulett, Benjamin Plaut","http://arxiv.org/abs/1807.05293v3, http://arxiv.org/pdf/1807.05293v3",cs.GT
34744,th,"Stochastic dominance is a crucial tool for the analysis of choice under risk.
It is typically analyzed as a property of two gambles that are taken in
isolation. We study how additional independent sources of risk (e.g.
uninsurable labor risk, house price risk, etc.) can affect the ordering of
gambles. We show that, perhaps surprisingly, background risk can be strong
enough to render lotteries that are ranked by their expectation ranked in terms
of first-order stochastic dominance. We extend our results to second order
stochastic dominance, and show how they lead to a novel, and elementary,
axiomatization of mean-variance preferences.",Stochastic Dominance Under Independent Noise,2018-07-18 16:49:05,"Luciano Pomatto, Philipp Strack, Omer Tamuz","http://dx.doi.org/10.1086/705555, http://arxiv.org/abs/1807.06927v5, http://arxiv.org/pdf/1807.06927v5",math.PR
34745,th,"Before purchase, a buyer of an experience good learns about the product's fit
using various information sources, including some of which the seller may be
unaware of. The buyer, however, can conclusively learn the fit only after
purchasing and trying out the product. We show that the seller can use a simple
mechanism to best take advantage of the buyer's post-purchase learning to
maximize his guaranteed-profit. We show that this mechanism combines a generous
refund, which performs well when the buyer is relatively informed, with
non-refundable random discounts, which work well when the buyer is relatively
uninformed.",Robust Pricing with Refunds,2018-08-07 10:07:50,"Toomas Hinnosaar, Keiichi Kawai","http://arxiv.org/abs/1808.02233v3, http://arxiv.org/pdf/1808.02233v3",cs.GT
34746,th,"Service sports include two-player contests such as volleyball, badminton, and
squash. We analyze four rules, including the Standard Rule (SR), in which a
player continues to serve until he or she loses. The Catch-Up Rule (CR) gives
the serve to the player who has lost the previous point - as opposed to the
player who won the previous point, as under SR. We also consider two Trailing
Rules that make the server the player who trails in total score. Surprisingly,
compared with SR, only CR gives the players the same probability of winning a
game while increasing its expected length, thereby making it more competitive
and exciting to watch. Unlike one of the Trailing Rules, CR is strategy-proof.
By contrast, the rules of tennis fix who serves and when; its tiebreaker,
however, keeps play competitive by being fair - not favoring either the player
who serves first or who serves second.",Catch-Up: A Rule that Makes Service Sports More Competitive,2018-08-17 19:33:08,"Steven J. Brams, Mehmet S. Ismail, D. Marc Kilgour, Walter Stromquist","http://arxiv.org/abs/1808.06922v1, http://arxiv.org/pdf/1808.06922v1",math.HO
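The comparison claimed above is easy to explore by Monte Carlo. The sketch below assumes rally scoring to a target of 11 points and fixed serve-win probabilities (both choices are illustrative assumptions, not the paper's setup), and estimates player 0's win probability and the expected game length under the Standard Rule (SR) and the Catch-Up Rule (CR).

```python
import random

def play_game(rule, p, q, target=11, first_server=0):
    """Simulate one game under rally scoring (every rally scores a point).
    p, q: probability that player 0 / player 1 wins a rally when serving.
    'SR': the rally winner serves next (server keeps serving until losing).
    'CR': the rally loser serves next."""
    score, server = [0, 0], first_server
    while max(score) < target:
        server_wins = random.random() < (p if server == 0 else q)
        winner = server if server_wins else 1 - server
        score[winner] += 1
        server = winner if rule == "SR" else 1 - winner
    return (0 if score[0] >= target else 1), sum(score)

def estimate(rule, p=0.6, q=0.6, trials=100_000):
    wins0 = total = 0
    for _ in range(trials):
        w, length = play_game(rule, p, q)
        wins0 += (w == 0)
        total += length
    return wins0 / trials, total / trials

# Toy parameters: equal serve-win probabilities of 0.6 (an assumption).
print("SR:", estimate("SR"))
print("CR:", estimate("CR"))
```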
34747,th,"We derive Nash equilibria for a class of quadratic multi-leader-follower
games using the nonsmooth best response function. To overcome the challenge of
nonsmoothness, we pursue a smoothing approach resulting in a reformulation as a
smooth Nash equilibrium problem. The existence and uniqueness of solutions are
proven for all smoothing parameters. Accumulation points of Nash equilibria
exist for a decreasing sequence of these smoothing parameters and we show that
these candidates fulfill the conditions of s-stationarity and are Nash
equilibria to the multi-leader-follower game. Finally, we propose an update on
the leader variables for efficient computation and numerically compare
nonsmooth Newton and subgradient methods.",Solving Quadratic Multi-Leader-Follower Games by Smoothing the Follower's Best Response,2018-08-23 23:38:57,"Michael Herty, Sonja Steffensen, Anna Thünen","http://arxiv.org/abs/1808.07941v4, http://arxiv.org/pdf/1808.07941v4",math.OC
34748,th,"We study the existence of allocations of indivisible goods that are envy-free
up to one good (EF1), under the additional constraint that each bundle needs to
be connected in an underlying item graph. If the graph is a path and the
utility functions are monotonic over bundles, we show the existence of EF1
allocations for at most four agents, and the existence of EF2 allocations for
any number of agents; our proofs involve discrete analogues of Stromquist's
moving-knife protocol and of the Su--Simmons argument based on Sperner's lemma.
For identical utilities, we provide a polynomial-time algorithm that computes
an EF1 allocation for any number of agents. For the case of two agents, we
characterize the class of graphs that guarantee the existence of EF1
allocations as those whose biconnected components are arranged in a path; this
property can be checked in linear time.",Almost Envy-Free Allocations with Connected Bundles,2018-08-28 19:57:17,"Vittorio Bilò, Ioannis Caragiannis, Michele Flammini, Ayumi Igarashi, Gianpiero Monaco, Dominik Peters, Cosimo Vinci, William S. Zwicker","http://dx.doi.org/10.1016/j.geb.2021.11.006, http://arxiv.org/abs/1808.09406v2, http://arxiv.org/pdf/1808.09406v2",cs.GT
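For readers who want the fairness notion above spelled out, the following Python check states EF1 for additive valuations: any envy of agent i toward agent j must vanish after removing some single good from j's bundle. The connectivity constraint and item graph of the paper are not modeled here, and the instance at the end is an assumed toy example.

```python
def is_ef1(bundles, valuations):
    """EF1 check for additive valuations.
    bundles[i]: iterable of goods held by agent i;
    valuations[i][g]: agent i's value for good g."""
    for i, own in enumerate(bundles):
        u_own = sum(valuations[i][g] for g in own)
        for j, other in enumerate(bundles):
            if i == j:
                continue
            u_other = sum(valuations[i][g] for g in other)
            # envy must vanish after removing i's most valued good from j's bundle
            best_removal = max((valuations[i][g] for g in other), default=0)
            if u_own < u_other - best_removal:
                return False
    return True

# Toy instance (an assumption): 2 agents, 3 goods.
vals = [[2, 2, 2], [1, 1, 1]]
print(is_ef1([[0], [1, 2]], vals))    # True: agent 0 envies, but removing one good fixes it
print(is_ef1([[], [0, 1, 2]], vals))  # False: agent 0's envy survives any single removal
```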
34749,th,"Following the work of Lloyd Shapley on the Shapley value, and tangentially
the work of Guillermo Owen, we offer an alternative non-probabilistic
formulation of part of the work of Robert J. Weber in his 1978 paper
""Probabilistic values for games."" Specifically, we focus upon efficient but not
symmetric allocations of value for cooperative games. We retain standard
efficiency and linearity, and offer an alternative condition, ""reasonableness,""
to replace the other usual axioms. In the pursuit of the result, we discover
properties of the linear maps that describe the allocations. This culminates in
a special class of games for which any other map that is ""reasonable,
efficient"" can be written as a convex combination of members of this special
class of allocations, via an application of the Krein-Milman theorem.",Shapley-like values without symmetry,2018-09-20 20:31:52,"Jacob North Clark, Stephen Montgomery-Smith","http://arxiv.org/abs/1809.07747v2, http://arxiv.org/pdf/1809.07747v2",econ.TH
34750,th,"This paper offers a comprehensive treatment of the question as to whether a
binary relation can be consistent (transitive) without being decisive
(complete), or decisive without being consistent, or simultaneously
inconsistent or indecisive, in the presence of a continuity hypothesis that is,
in principle, non-testable. It identifies topological connectedness of the
(choice) set over which the continuous binary relation is defined as being
crucial to this question. Referring to the two-way relationship as the
Eilenberg-Sonnenschein (ES) research program, it presents four synthetic, and
complete, characterizations of connectedness, and its natural extensions; and
two consequences that only stem from it. The six theorems are novel to both the
economic and the mathematical literature: they generalize pioneering results of
Eilenberg (1941), Sonnenschein (1965), Schmeidler (1971) and Sen (1969), and
are relevant to several applied contexts, as well as to ongoing theoretical
work.",Topological Connectedness and Behavioral Assumptions on Preferences: A Two-Way Relationship,2018-10-04 02:20:45,"M. Ali Khan, Metin Uyanık","http://dx.doi.org/10.1007/s00199-019-01206-7, http://arxiv.org/abs/1810.02004v2, http://arxiv.org/pdf/1810.02004v2",econ.TH
34751,th,"Using a lab experiment, we investigate the real-life performance of envy-free
and proportional cake-cutting procedures with respect to fairness and
preference manipulation. We find that envy-free procedures, in particular
Selfridge-Conway, are fairer and also are perceived as fairer than their
proportional counterparts, despite the fact that agents very often manipulate
them. Our results support the practical use of the celebrated Selfridge-Conway
procedure, and more generally, of envy-free cake-cutting mechanisms.
  We also find that subjects learn their opponents' preferences after repeated
interaction and use this knowledge to improve their allocated share of the
cake. Learning reduces truth-telling behavior, but also reduces envy.",Fair Cake-Cutting in Practice,2018-10-18 22:10:38,"Maria Kyropoulou, Josué Ortega, Erel Segal-Halevi","http://dx.doi.org/10.1016/j.geb.2022.01.027, http://arxiv.org/abs/1810.08243v5, http://arxiv.org/pdf/1810.08243v5",cs.GT
34752,th,"Since the 1960s, the question whether markets are efficient or not is
controversially discussed. One reason for the difficulty to overcome the
controversy is the lack of a universal, but also precise, quantitative
definition of efficiency that is able to graduate between different states of
efficiency. The main purpose of this article is to fill this gap by developing
a measure for the efficiency of markets that fulfill all the stated
requirements. It is shown that the new definition of efficiency, based on
informational-entropy, is equivalent to the two most used definitions of
efficiency from Fama and Jensen. The new measure therefore enables steps to
settle the dispute over the state of efficiency in markets. Moreover, it is
shown that inefficiency in a market can either arise from the possibility to
use information to predict an event with higher than chance level, or can
emerge from wrong pricing or quotes that do not reflect the true probabilities
of possible events. Finally, the calculation of efficiency is demonstrated on a
simple game (of coin tossing), to show how one could exactly quantify the
efficiency in any market-like system, if all probabilities are known.",Quantification of market efficiency based on informational-entropy,2018-12-06 09:56:43,Roland Rothenstein,"http://arxiv.org/abs/1812.02371v1, http://arxiv.org/pdf/1812.02371v1",q-fin.GN
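As a toy companion to the coin-tossing demonstration mentioned above, the snippet below computes an entropy-based efficiency score as the Shannon entropy of the outcome distribution divided by its maximum. This normalization is an illustrative assumption, not the paper's exact measure; it only conveys the idea that predictability (entropy below the maximum) signals inefficiency.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_efficiency(outcome_probs):
    """Toy score: entropy divided by the maximum possible entropy.
    1.0 = outcomes are maximally unpredictable (no exploitable information);
    values below 1.0 indicate some remaining predictability."""
    return shannon_entropy(outcome_probs) / math.log2(len(outcome_probs))

# Fair coin vs. biased coin (toy probabilities, an assumption).
print(entropy_efficiency([0.5, 0.5]))   # 1.0
print(entropy_efficiency([0.7, 0.3]))   # ~0.881
```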
34753,th,"We are interested in the setting where a seller sells sequentially arriving
items, one per period, via a dynamic auction. At the beginning of each period,
each buyer draws a private valuation for the item to be sold in that period and
this valuation is independent across buyers and periods. The auction can be
dynamic in the sense that the auction at period $t$ can be conditional on the
bids in that period and all previous periods, subject to certain appropriately
defined incentive compatible and individually rational conditions. Perhaps not
surprisingly, the revenue optimal dynamic auctions are computationally hard to
find and existing literatures that aim to approximate the optimal auctions are
all based on solving complex dynamic programs. The structural interpretability
of the optimal dynamic auctions remains largely open.
  In this paper, we show that any optimal dynamic auction is a virtual welfare
maximizer subject to some monotone allocation constraints. In particular, the
explicit definition of the virtual value function above arises naturally from
the primal-dual analysis by relaxing the monotone constraints. We further
develop an ironing technique that gets rid of the monotone allocation
constraints. Quite different from Myerson's ironing approach, our technique is
more technically involved due to the interdependence of the virtual value
functions across buyers. We nevertheless show that ironing can be done
approximately and efficiently, which in turn leads to a Fully Polynomial Time
Approximation Scheme of the optimal dynamic auction.",Optimal Dynamic Auctions are Virtual Welfare Maximizers,2018-12-07 15:11:24,"Vahab Mirrokni, Renato Paes Leme, Pingzhong Tang, Song Zuo","http://arxiv.org/abs/1812.02993v1, http://arxiv.org/pdf/1812.02993v1",cs.GT
34754,th,"We consider a decision maker (DM) who, before taking an action, seeks
information by allocating her limited attention dynamically over different news
sources that are biased toward alternative actions. Endogenous choice of
information generates rich dynamics: The chosen news source either reinforces
or weakens the prior, shaping subsequent attention choices, belief updating,
and the final action. The DM adopts a learning strategy biased toward the
current belief when the belief is extreme and against that belief when it is
moderate. Applied to consumption of news media, observed behavior exhibits an
`echo-chamber' effect for partisan voters and a novel `anti echo-chamber'
effect for moderates.",Optimal Dynamic Allocation of Attention,2018-12-16 00:48:12,"Yeon-Koo Che, Konrad Mierendorff","http://arxiv.org/abs/1812.06967v1, http://arxiv.org/pdf/1812.06967v1",math.OC
34755,th,"Consider a set of agents who play a network game repeatedly. Agents may not
know the network. They may even be unaware that they are interacting with other
agents in a network. Possibly, they just understand that their payoffs depend
on an unknown state that is, actually, an aggregate of the actions of their
neighbors. Each time, every agent chooses an action that maximizes her
instantaneous subjective expected payoff and then updates her beliefs according
to what she observes. In particular, we assume that each agent only observes
her realized payoff. A steady state of the resulting dynamic is a
selfconfirming equilibrium given the assumed feedback. We characterize the
structure of the set of selfconfirming equilibria in the given class of network
games, we relate selfconfirming and Nash equilibria, and we analyze simple
conjectural best-reply paths whose limit points are selfconfirming equilibria.",Learning and Selfconfirming Equilibria in Network Games,2018-12-31 15:27:01,"Pierpaolo Battigalli, Fabrizio Panebianco, Paolo Pin","http://arxiv.org/abs/1812.11775v3, http://arxiv.org/pdf/1812.11775v3",econ.TH
34756,th,"The present authors have put forward a quantum game theory based model of
market price movements. By using Fisher information, we present a construction
of an equation of Schr\""{o}dinger type for probability distributions for the
relationship between demand and supply. Various analogies between quantum
physics and market phenomena can be found.",Schrödinger type equation for subjective identification of supply and demand,2018-12-21 01:53:01,"Marcin Makowski, Edward W. Piotrowski, Jan Sładkowski","http://arxiv.org/abs/1812.11824v1, http://arxiv.org/pdf/1812.11824v1",econ.TH
34757,th,"In this study, we consider traveler coupon redemption behavior from the
perspective of an urban mobility service. Assuming traveler behavior is in
accordance with the principle of utility maximization, we first formulate a
baseline dynamical model for a traveler's expected future trip sequence under the
framework of Markov decision processes, from which we derive approximations
of the optimal coupon redemption policy. However, we find that this baseline
model cannot fully explain the observed coupon redemption behavior of travelers
using a car-sharing service. To resolve this deviation from utility-maximizing
behavior, we suggest a hypothesis that travelers may not be aware of all
coupons available to them. Based on this hypothesis, we formulate an
inattention model on unawareness, which is complementary to the existing models
of inattention, and incorporate it into the baseline model. Estimation results
show that the proposed model better explains the coupon redemption dataset than
the baseline model. We also conduct a simulation experiment to quantify the
negative impact of unawareness on coupons' promotional effects. These results
can be used by mobility service operators to design effective coupon
distribution schemes in practice.",An Inattention Model for Traveler Behavior with e-Coupons,2018-12-28 16:10:11,Han Qiu,"http://arxiv.org/abs/1901.05070v1, http://arxiv.org/pdf/1901.05070v1",econ.TH
34758,th,"We consider the problem of welfare maximization in two-sided markets using
simple mechanisms that are prior-independent. The Myerson-Satterthwaite
impossibility theorem shows that even for bilateral trade, there is no feasible
(IR, truthful, budget balanced) mechanism that has welfare as high as the
optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a
deficit. On the other hand, the optimal feasible mechanism needs to be
carefully tailored to the Bayesian prior, and is extremely complex, eluding a
precise description.
  We present Bulow-Klemperer-style results to circumvent these hurdles in
double-auction markets. We suggest using the Buyer Trade Reduction (BTR)
mechanism, a variant of McAfee's mechanism, which is feasible and simple (in
particular, deterministic, truthful, prior-independent, anonymous). First, in
the setting where buyers' and sellers' values are sampled i.i.d. from the same
distribution, we show that for any such market of any size, BTR with one
additional buyer whose value is sampled from the same distribution has expected
welfare at least as high as the optimal in the original market.
  We then move to a more general setting where buyers' values are sampled from
one distribution and sellers' from another, focusing on the case where the
buyers' distribution first-order stochastically dominates the sellers'. We
present bounds on the number of buyers that, when added, guarantees that BTR in
the augmented market has welfare at least as high as the optimal in the
original market. Our lower bounds extend to a large class of mechanisms, and
all of our results extend to adding sellers instead of buyers. In addition, we
present positive results about the usefulness of pricing at a sample for
welfare maximization in two-sided markets under the above two settings, which
to the best of our knowledge are the first sampling results in this context.",Bulow-Klemperer-Style Results for Welfare Maximization in Two-Sided Markets,2019-03-15 20:46:20,"Moshe Babaioff, Kira Goldner, Yannai A. Gonczarowski","http://arxiv.org/abs/1903.06696v2, http://arxiv.org/pdf/1903.06696v2",cs.GT
34759,th,"Two-stage electricity market clearing is designed to maintain market
efficiency under ideal conditions, e.g., perfect forecast and nonstrategic
generation. This work demonstrates that the individual strategic behavior of
inelastic load participants in a two-stage settlement electricity market can
deteriorate efficiency. Our analysis further implies that virtual bidding can
play a role in alleviating this loss of efficiency by mitigating the market
power of strategic load participants. We use real-world market data from New
York ISO to validate our theory.",The Role of Strategic Load Participants in Two-Stage Settlement Electricity Markets,2019-03-20 07:33:59,"Pengcheng You, Dennice F. Gayme, Enrique Mallada","http://arxiv.org/abs/1903.08341v2, http://arxiv.org/pdf/1903.08341v2",math.OC
34760,th,"Ann likes oranges much more than apples; Bob likes apples much more than
oranges. Tomorrow they will receive one fruit that will be an orange or an
apple with equal probability. Giving one half to each agent is fair for each
realization of the fruit. However, agreeing that whatever fruit appears will go
to the agent who likes it more gives a higher expected utility to each agent
and is fair in the average sense: in expectation, each agent prefers his
allocation to the equal division of the fruit, i.e., he gets a fair share.
  We turn this familiar observation into an economic design problem: upon
drawing a random object (the fruit), we learn the realized utility of each
agent and can compare it to the mean of his distribution of utilities; no other
statistical information about the distribution is available. We fully
characterize the division rules using only this sparse information in the most
efficient possible way, while giving everyone a fair share. Although the
probability distribution of individual utilities is arbitrary and mostly
unknown to the manager, these rules perform in the same range as the best rule
when the manager has full access to this distribution.",On the fair division of a random object,2019-03-25 17:22:19,"Anna Bogomolnaia, Herve Moulin, Fedor Sandomirskiy","http://arxiv.org/abs/1903.10361v4, http://arxiv.org/pdf/1903.10361v4",cs.GT
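The opening observation of the abstract above can be verified with a few lines of arithmetic. The valuations below (10 for the preferred fruit, 2 for the other) are illustrative assumptions: under the equal split each agent's expected utility is 3, while giving the realized fruit to whoever likes it more raises each expected utility to 5, so both agents still receive at least their fair share in expectation.

```python
# Illustrative valuations (assumptions, not taken from the paper).
ann = {"orange": 10, "apple": 2}
bob = {"orange": 2, "apple": 10}

# Equal division: each agent gets half of whichever fruit arrives.
ann_equal = 0.5 * (0.5 * ann["orange"] + 0.5 * ann["apple"])   # = 3.0
bob_equal = 0.5 * (0.5 * bob["orange"] + 0.5 * bob["apple"])   # = 3.0

# 'Favorite takes all': the realized fruit goes to whoever values it more.
ann_fav = 0.5 * ann["orange"]                                  # Ann only gets the orange: 5.0
bob_fav = 0.5 * bob["apple"]                                   # Bob only gets the apple: 5.0

print(ann_equal, bob_equal, ann_fav, bob_fav)                  # both agents gain in expectation
```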
34761,th,"When we enumerate numbers up to some specific value, or, even if we do not
specify the number, we know at the same time that there are much greater
numbers which should be reachable by the same enumeration, but indeed we also
congnize them without practical enumeration. Namely, if we deem enumeration to
be a way of reaching a number without any ""jump"", there is a ""jump'' in our way
of cognition of such greater numbers. In this article, making use of a set
theoretical framework by Vop\v{e}nka (1979) (alternative set theory) which
describes such structure, we attempt to shed light on an analogous sturucture
in human and social phenomenon. As an example, we examine a problem of common
knowledge in electronic mail game presented by Rubinstein (1989). We show an
event comes to common knowledge by a ""cognitive jump"".",An Alternative Set Model of Cognitive Jump,2019-04-01 10:45:55,"Kiri Sakahara, Takashi Sato","http://arxiv.org/abs/1904.00613v1, http://arxiv.org/pdf/1904.00613v1",cs.GT
34763,th,"We prove the existence of a competitive equilibrium in a production economy
with infinitely many commodities and a measure space of agents whose
preferences are price dependent. We employ a saturated measure space for the
set of agents and apply recent results for an infinite dimensional separable
Banach space such as Lyapunov's convexity theorem and an exact Fatou's lemma to
obtain the result.",Equilibria in a large production economy with an infinite dimensional commodity space and price dependent preferences,2019-04-16 06:57:58,"Hyo Seok Jang, Sangjik Lee","http://arxiv.org/abs/1904.07444v2, http://arxiv.org/pdf/1904.07444v2",econ.TH
34764,th,"Penney's game is a two player zero-sum game in which each player chooses a
three-flip pattern of heads and tails and the winner is the player whose
pattern occurs first in repeated tosses of a fair coin. Because the players
choose sequentially, the second mover has the advantage. In fact, for any
three-flip pattern, there is another three-flip pattern that is strictly more
likely to occur first. This paper provides a novel no-arbitrage argument that
generates the winning odds corresponding to any pair of distinct patterns. The
resulting odds formula is equivalent to that generated by Conway's ""leading
number"" algorithm. The accompanying betting odds intuition adds insight into
why Conway's algorithm works. The proof is simple and easy to generalize to
games involving more than two outcomes, unequal probabilities, and competing
patterns of various length. Additional results on the expected duration of
Penney's game are presented. Code implementing and cross-validating the
algorithms is included.",Penney's Game Odds From No-Arbitrage,2019-03-28 21:01:15,Joshua B. Miller,"http://arxiv.org/abs/1904.09888v2, http://arxiv.org/pdf/1904.09888v2",math.OC
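The abstract notes that its no-arbitrage odds formula is equivalent to Conway's "leading number" algorithm, which is compact enough to state in a few lines of Python. The sketch below covers the fair-coin, equal-length case only; the function names are mine, not the paper's.

```python
def leading_number(x, y):
    """Conway's leading number L(x, y): sum of 2**(k-1) over all k such that
    the last k symbols of x equal the first k symbols of y."""
    return sum(2 ** (k - 1) for k in range(1, len(x) + 1) if x[-k:] == y[:k])

def odds_b_beats_a(a, b):
    """Odds (as a pair) that pattern b appears before pattern a in fair coin
    tosses, via Conway's formula: (L(a,a) - L(a,b)) : (L(b,b) - L(b,a))."""
    return (leading_number(a, a) - leading_number(a, b),
            leading_number(b, b) - leading_number(b, a))

print(odds_b_beats_a("HHH", "THH"))  # (7, 1): THH beats HHH with odds 7:1
```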
34765,th,"In this paper, we study efficiency in truthful auctions via a social network,
where a seller can only spread the information of an auction to the buyers
through the buyers' network. In single-item auctions, we show that no mechanism
is strategy-proof, individually rational, efficient, and weakly budget
balanced. In addition, we propose $\alpha$-APG mechanisms, a class of
mechanisms which operate a trade-off between efficiency and weakly budget
balancedness. In multi-item auctions, there already exists a strategy-proof
mechanism when all buyers need only one item. However, we give a
counter-example to strategy-proofness in this mechanism, and to the best of our
knowledge, the question of finding a strategy-proof mechanism remains open. We
assume that all buyers have decreasing marginal utility and propose a
generalized APG mechanism that is strategy-proof and individually rational but
not efficient. Importantly, we show that this mechanism achieves the largest
efficiency measure among all strategy-proof mechanisms.",Efficiency in Truthful Auctions via a Social Network,2019-04-29 05:17:38,"Seiji Takanashi, Takehiro Kawasaki, Taiki Todo, Makoto Yokoo","http://arxiv.org/abs/1904.12422v1, http://arxiv.org/pdf/1904.12422v1",econ.TH
34767,th,"An election over a finite set of candidates is called single-crossing if, as
we sweep through the list of voters from left to right, the relative order of
every pair of candidates changes at most once. Such elections have many
attractive properties: e.g., their majority relation is transitive and they
admit efficient algorithms for problems that are NP-hard in general. If a given
election is not single-crossing, it is important to understand what are the
obstacles that prevent it from having this property. In this paper, we propose
a mapping between elections and graphs that provides us with a convenient
encoding of such obstacles. This mapping enables us to use the toolbox of graph
theory in order to analyze the complexity of detecting nearly single-crossing
elections, i.e., elections that can be made single-crossing by a small number
of modifications.",Single-crossing Implementation,2019-06-24 02:36:21,"Nathann Cohenn, Edith Elkind, Foram Lakhani","http://arxiv.org/abs/1906.09671v1, http://arxiv.org/pdf/1906.09671v1",cs.GT
34768,th,"Many economic theory models incorporate finiteness assumptions that, while
introduced for simplicity, play a real role in the analysis. We provide a
principled framework for scaling results from such models by removing these
finiteness assumptions. Our sufficient conditions are on the theorem statement
only, and not on its proof. This results in short proofs, and even allows the
same argument to be used to scale similar theorems that were proven using
distinctly different tools. We demonstrate the versatility of our approach via
examples from both revealed-preference theory and matching theory.",To Infinity and Beyond: A General Framework for Scaling Economic Theories,2019-06-25 08:59:42,"Yannai A. Gonczarowski, Scott Duke Kominers, Ran I. Shorrer","http://arxiv.org/abs/1906.10333v5, http://arxiv.org/pdf/1906.10333v5",cs.GT
34769,th,"We introduce a general Hamiltonian framework that appears to be a natural
setting for the derivation of various production functions in economic growth
theory, starting with the celebrated Cobb-Douglas function. Employing our
method, we investigate some existing models and propose a new one as special
cases of the general $n$-dimensional Lotka-Volterra system of eco-dynamics.",The Hamiltonian approach to the problem of derivation of production functions in economic growth theory,2019-06-26 20:34:22,"Roman G. Smirnov, Kunpeng Wang","http://arxiv.org/abs/1906.11224v1, http://arxiv.org/pdf/1906.11224v1",econ.TH
34770,th,"This article shows a new focus of mathematic analysis for the Solow-Swan
economic growth model, using the generalized conformal derivative Katugampola
(KGCD). For this, under the same Solow-Swan model assumptions, the Inada
conditions are extended, which, for the new model shown here, depending on the
order of the KGCD. This order plays an important role in the speed of
convergence of the closed solutions obtained with this derivative for capital
(k) and for per-capita production (y) in the cases without migration and with
negative migration. Our approach to the model with the KGCD adds a new
parameter to the Solow-Swan model, the order of the KGCD and not a new state
variable. In addition, we propose several possible economic interpretations for
that parameter.",Katugampola Generalized Conformal Derivative Approach to Inada Conditions and Solow-Swan Economic Growth Model,2019-06-29 05:17:18,"G. Fernández-Anaya, L. A. Quezada-Téllez, B. Nuñez-Zavala, D. Brun-Battistini","http://arxiv.org/abs/1907.00130v1, http://arxiv.org/pdf/1907.00130v1",econ.TH
34771,th,"In 1998 a long-lost proposal for an election law by Gottlob Frege
(1848--1925) was rediscovered in the Th\""uringer Universit\""ats- und
Landesbibliothek in Jena, Germany. The method that Frege proposed for the
election of representatives of a constituency features a remarkable concern for
the representation of minorities. Its core idea is that votes cast for
unelected candidates are carried over to the next election, while elected
candidates incur a cost of winning. We prove that this sensitivity to past
elections guarantees a proportional representation of political opinions in the
long run. We find that through a slight modification of Frege's original method
even stronger proportionality guarantees can be achieved. This modified version
of Frege's method also provides a novel solution to the apportionment problem,
which is distinct from all of the best-known apportionment methods, while still
possessing noteworthy proportionality properties.",A Mathematical Analysis of an Election System Proposed by Gottlob Frege,2019-07-08 17:22:36,"Paul Harrenstein, Marie-Louise Lackner, Martin Lackner","http://arxiv.org/abs/1907.03643v4, http://arxiv.org/pdf/1907.03643v4",cs.GT
34773,th,"We study the Proportional Response dynamic in exchange economies, where each
player starts with some amount of money and a good. Every day, the players
bring one unit of their good and submit bids on goods they like, each good gets
allocated in proportion to the bid amounts, and each seller collects the bids
received. Then every player updates the bids proportionally to the contribution
of each good in their utility. This dynamic models a process of learning how to
bid and has been studied in a series of papers on Fisher and production
markets, but not in exchange economies. Our main results are as follows:
  - For linear utilities, the dynamic converges to market equilibrium utilities
and allocations, while the bids and prices may cycle. We give a combinatorial
characterization of limit cycles for prices and bids.
  - We introduce a lazy version of the dynamic, where players may save money
for later, and show this converges in everything: utilities, allocations, and
prices.
  - For CES utilities in the substitute range $[0,1)$, the dynamic converges
for all parameters.
  This answers an open question about exchange economies with linear utilities,
where tatonnement does not converge to market equilibria, and no natural
process leading to equilibria was known. We also note that proportional
response is a process where the players exchange goods throughout time (in
out-of-equilibrium states), while tatonnement only explains how exchange
happens in the limit.",Proportional Dynamics in Exchange Economies,2019-07-11 11:10:06,"Simina Brânzei, Nikhil R. Devanur, Yuval Rabani","http://arxiv.org/abs/1907.05037v3, http://arxiv.org/pdf/1907.05037v3",cs.GT
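The verbal description of the dynamic above translates directly into a short simulation. The sketch below assumes linear utilities with strictly positive valuations (so every good always receives a positive bid) and follows the description in the abstract; it is a rough illustration, not the paper's formal specification, and the toy economy at the end is an assumption.

```python
import numpy as np

def proportional_response(values, money0, rounds=500):
    """values[i, j]: player i's (linear) value for good j; player i brings one
    unit of good i each day and starts with cash money0[i].
    Each round: bids are placed, good j is allocated in proportion to the bids
    on it, seller i collects the bids on good i, and next-round bids split the
    collected cash in proportion to each good's contribution to utility."""
    values = np.asarray(values, dtype=float)
    money = np.asarray(money0, dtype=float)
    bids = values / values.sum(axis=1, keepdims=True) * money[:, None]  # initial bids
    for _ in range(rounds):
        prices = bids.sum(axis=0)                 # total bid on each good
        alloc = bids / prices                     # alloc[i, j]: share of good j held by player i
        contrib = values * alloc                  # utility contribution of each good
        money = prices                            # player i collects the bids on good i
        bids = contrib / contrib.sum(axis=1, keepdims=True) * money[:, None]
    return alloc, prices

# Toy 2-player economy (assumed values): utilities and allocations settle down,
# while bids and prices may keep cycling, as the abstract notes.
alloc, prices = proportional_response([[1.0, 2.0], [3.0, 1.0]], [1.0, 1.0])
print(alloc, prices)
```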
34774,th,"I propose a normative updating rule, extended Bayesianism, for the
incorporation of probabilistic information arising from the process of becoming
more aware. Extended Bayesianism generalizes standard Bayesian updating to
allow the posterior to reside on richer probability space than the prior. I
then provide an observable criterion on prior and posterior beliefs such that
they were consistent with extended Bayesianism.",Unforeseen Evidence,2019-07-16 17:10:59,Evan Piermont,"http://dx.doi.org/10.1016/j.jet.2021.105235, http://arxiv.org/abs/1907.07019v3, http://arxiv.org/pdf/1907.07019v3",econ.TH
34775,th,"Unlike any other field, the science of morality has drawn attention from an
extraordinarily diverse set of disciplines. An interdisciplinary research
program has formed in which economists, biologists, neuroscientists,
psychologists, and even philosophers have been eager to provide answers to
puzzling questions raised by the existence of human morality. Models and
simulations, for a variety of reasons, have played various important roles in
this endeavor. Their use, however, has sometimes been deemed useless,
trivial, and inadequate. The role of models in the science of morality has been
vastly underappreciated. This omission shall be remedied here, offering a much
more positive picture of the contributions modelers have made to our understanding
of morality.",Modeling Morality,2019-07-19 22:29:12,Walter Veit,"http://arxiv.org/abs/1907.08659v1, http://arxiv.org/pdf/1907.08659v1",q-bio.PE
34776,th,"We present a proof of Arrow's theorem from social choice theory that uses a
fixpoint argument. Specifically, we use Banach's result on the existence of a
fixpoint of a contractive map defined on a complete metric space. Conceptually,
our approach shows that dictatorships can be seen as fixpoints of a certain
process.",Arrow's Theorem Through a Fixpoint Argument,2019-07-22 06:15:46,"Frank M. V. Feys, Helle Hvid Hansen","http://dx.doi.org/10.4204/EPTCS.297.12, http://arxiv.org/abs/1907.10381v1, http://arxiv.org/pdf/1907.10381v1",econ.TH
35046,th,"In this paper, we study an optimal insurance problem for a risk-averse
individual who seeks to maximize the rank-dependent expected utility (RDEU) of
her terminal wealth, and insurance is priced via a general distortion-deviation
premium principle. We prove necessary and sufficient conditions satisfied by
the optimal solution and consider three ambiguity orders to further determine
the optimal indemnity. Finally, we analyze examples under three
distortion-deviation premium principles to explore the specific conditions
under which no insurance or deductible insurance is optimal.",Optimal Insurance to Maximize RDEU Under a Distortion-Deviation Premium Principle,2021-07-06 17:50:14,"Xiaoqing Liang, Ruodu Wang, Virginia Young","http://arxiv.org/abs/2107.02656v2, http://arxiv.org/pdf/2107.02656v2",q-fin.RM
34777,th,"We discuss the problem of setting prices in an electronic market that has
more than one buyer. We assume that there are self-interested sellers each
selling a distinct item that has an associated cost. Each buyer has a
submodular valuation for purchasing any subset of items. The goal of the
sellers is to set a price for their item such that their profit from possibly
selling their item to the buyers is maximized. Our most comprehensive results
concern a multi-copy setting where each seller has m copies of their item and
there are m buyers. In this setting, we give a necessary and sufficient
condition for the existence of a market-clearing pure Nash equilibrium. We also
show that, even when this condition is satisfied, not all equilibria are market
clearing, contrary to what was shown in the case of a single buyer in [2].
Finally, we investigate the pricing problem for multiple buyers in the limited
supply setting when each seller only has a single copy of their item.",Electronic markets with multiple submodular buyers,2019-07-27 16:58:49,"Allan Borodin, Akash Rakheja","http://arxiv.org/abs/1907.11915v2, http://arxiv.org/pdf/1907.11915v2",cs.GT
34778,th,"We study the plausibility of sub-optimal Nash equilibria of the direct
revelation mechanism associated with a strategy-proof social choice function.
By using the recently introduced empirical equilibrium analysis (Velez and
Brown, 2019, arXiv:1804.07986) we determine that this behavior is plausible
only when the social choice function violates a non-bossiness condition and
information is not interior. Analysis of the accumulated experimental and
empirical evidence on these games supports our findings.",Empirical strategy-proofness,2019-07-29 16:17:29,"Rodrigo A. Velez, Alexander L. Brown","http://arxiv.org/abs/1907.12408v5, http://arxiv.org/pdf/1907.12408v5",econ.TH
34779,th,"This paper takes a game theoretical approach to open shop scheduling problems
with unit execution times to minimize the sum of completion times. By supposing
an initial schedule and associating each job (consisting in a number of
operations) to a different player, we can construct a cooperative TU-game
associated with any open shop scheduling problem. We assign to each coalition
the maximal cost savings it can obtain through admissible rearrangements of
jobs' operations. By providing a core allocation, we show that the associated
games are balanced. Finally, we relax the definition of admissible
rearrangements for a coalition to study to what extent balancedness still
holds.",Open shop scheduling games,2019-07-30 16:37:02,"Ata Atay, Pedro Calleja, Sergio Soteras","http://arxiv.org/abs/1907.12909v1, http://arxiv.org/pdf/1907.12909v1",cs.GT
34780,th,"We analyze the relative price change of assets starting from basic
supply/demand considerations subject to arbitrary motivations. The resulting
stochastic differential equation has coefficients that are functions of supply
and demand. We derive these rigorously. The variance in the relative price
change is then also dependent on the supply and demand, and is closely
connected to the expected return. An important consequence for risk assessment
and options pricing is the implication that variance is highest when the
magnitude of price change is greatest, and lowest near market extrema. This
occurs even if supply and demand are not dependent on price trend. The
stochastic equation differs from the standard equation in mathematical finance
in which the expected return and variance are decoupled. The methodology has
implications for the basic framework for risk assessment, suggesting that
volatility should be measured in the context of regimes of price change. The
model we propose shows how investors are often misled by the apparent calm of
markets near a market peak. Risk assessment methods utilizing volatility can be
improved using this formulation.",Derivation of non-classical stochastic price dynamics equations,2019-08-03 04:28:09,"Carey Caginalp, Gunduz Caginalp","http://arxiv.org/abs/1908.01103v2, http://arxiv.org/pdf/1908.01103v2",econ.TH
34781,th,"A collection of objects, some of which are good and some are bad, is to be
divided fairly among agents with different tastes, modeled by additive utility
functions. If the objects cannot be shared, so that each of them must be
entirely allocated to a single agent, then a fair division may not exist. What
is the smallest number of objects that must be shared between two or more
agents in order to attain a fair and efficient division?
  In this paper, fairness is understood as proportionality or envy-freeness,
and efficiency, as fractional Pareto-optimality. We show that, for a generic
instance of the problem (all instances except a zero-measure set of degenerate
problems), a fair fractionally Pareto-optimal division with the smallest
possible number of shared objects can be found in polynomial time, assuming
that the number of agents is fixed. The problem becomes computationally hard
for degenerate instances, where agents' valuations are aligned for many
objects.",Efficient Fair Division with Minimal Sharing,2019-08-05 17:57:25,"Fedor Sandomirskiy, Erel Segal-Halevi","http://dx.doi.org/10.1287/opre.2022.2279, http://arxiv.org/abs/1908.01669v3, http://arxiv.org/pdf/1908.01669v3",cs.GT
34782,th,"In cake-cutting, strategy-proofness is a very costly requirement in terms of
fairness: for n=2 it implies a dictatorial allocation, whereas for n > 2 it
requires that one agent receives no cake. We show that a weaker version of this
property recently suggested by Troyan and Morrill, called non-obvious
manipulability, is compatible with the strong fairness property of
proportionality, which guarantees that each agent receives 1/n of the cake.
Both properties are satisfied by the leftmost leaves mechanism, an adaptation
of the Dubins - Spanier moving knife procedure. Most other classical
proportional mechanisms in literature are obviously manipulable, including the
original moving knife mechanism. Non-obvious manipulability explains why
leftmost leaves is manipulated less often in practice than other proportional
mechanisms.",Obvious Manipulations in Cake-Cutting,2019-08-08 12:44:04,"Josue Ortega, Erel Segal-Halevi","http://arxiv.org/abs/1908.02988v2, http://arxiv.org/pdf/1908.02988v2",cs.GT
34783,th,"We propose a notion of fairness for allocation problems in which different
agents may have different reservation utilities, stemming from different
outside options, or property rights. Fairness is usually understood as the
absence of envy, but this can be incompatible with reservation utilities. It is
possible that Alice's envy of Bob's assignment cannot be remedied without
violating Bob's participation constraint. Instead, we seek to rule out {\em
justified envy}, defined as envy for which a remedy would not violate any
agent's participation constraint. We show that fairness, meaning the absence of
justified envy, can be achieved together with efficiency and individual
rationality. We introduce a competitive equilibrium approach with
price-dependent incomes obtaining the desired properties.",Fairness and efficiency for probabilistic allocations with participation constraints,2019-08-12 21:52:11,"Federico Echenique, Antonio Miralles, Jun Zhang","http://arxiv.org/abs/1908.04336v3, http://arxiv.org/pdf/1908.04336v3",econ.TH
35047,th,"Choice functions over a set $X$ that satisfy the Outcast, a.k.a. Aizerman,
property are exactly those that attach to any set its maximal subset relative
to some total order of ${2}^{X}$.",Representing choice functions by a total hyper-order,2021-07-06 14:21:38,Daniel Lehmann,"http://arxiv.org/abs/2107.02798v1, http://arxiv.org/pdf/2107.02798v1",econ.TH
34784,th,"Bitcoin and Ethereum, whose miners arguably collectively comprise the most
powerful computational resource in the history of mankind, offer no more power
for processing and verifying transactions than a typical smart phone. The
system described herein bypasses this bottleneck and brings scalable
computation to Ethereum. Our new system consists of a financial incentive layer
atop a dispute resolution layer where the latter takes the form of a versatile
""verification game."" In addition to secure outsourced computation, immediate
applications include decentralized mining pools whose operator is an Ethereum
smart contract, a cryptocurrency with scalable transaction throughput, and a
trustless means for transferring currency between disjoint cryptocurrency
systems.",A scalable verification solution for blockchains,2019-08-12 07:02:10,"Jason Teutsch, Christian Reitwießner","http://arxiv.org/abs/1908.04756v1, http://arxiv.org/pdf/1908.04756v1",cs.CR
34785,th,"We introduce a model of probabilistic verification in mechanism design. The
principal elicits a message from the agent and then selects a test to give the
agent. The agent's true type determines the probability with which he can pass
each test. We characterize whether each type has an associated test that best
screens out all other types. If this condition holds, then the testing
technology can be represented in a tractable reduced form. We use this reduced
form to solve for profit-maximizing mechanisms with verification. As the
verification technology varies, the solution continuously interpolates between
the no-verification solution and full surplus extraction.",Probabilistic Verification in Mechanism Design,2019-08-15 17:13:59,"Ian Ball, Deniz Kattwinkel","http://arxiv.org/abs/1908.05556v3, http://arxiv.org/pdf/1908.05556v3",econ.TH
34787,th,"In this paper we provide a novel family of stochastic orders that generalizes
second order stochastic dominance, which we call the $\alpha,[a,b]$-concave
stochastic orders.
  These stochastic orders are generated by a novel set of ""very"" concave
functions where $\alpha$ parameterizes the degree of concavity. The
$\alpha,[a,b]$-concave stochastic orders allow us to derive novel comparative
statics results for important applications in economics that cannot be derived
using previous stochastic orders. In particular, our comparative statics
results are useful when an increase in a lottery's riskiness changes the
agent's optimal action in the opposite direction to an increase in the
lottery's expected value. For this kind of situation, we provide a tool to
determine which of these two forces dominates -- riskiness or expected value.
We apply our results in consumption-savings problems, self-protection problems,
and in a Bayesian game.","The Family of Alpha,[a,b] Stochastic Orders: Risk vs. Expected Value",2019-08-18 11:32:01,"Bar Light, Andres Perlroth","http://arxiv.org/abs/1908.06398v5, http://arxiv.org/pdf/1908.06398v5",math.PR
34788,th,"Online platforms, such as Airbnb, hotels.com, Amazon, Uber and Lyft, can
control and optimize many aspects of product search to improve the efficiency
of marketplaces. Here we focus on a common model, called the discriminatory
control model, where the platform chooses to display a subset of sellers who
sell products at prices determined by the market and a buyer is interested in
buying a single product from one of the sellers. Under the commonly-used model
for single product selection by a buyer, called the multinomial logit model,
and the Bertrand game model for competition among sellers, we show the
following result: to maximize social welfare, the optimal strategy for the
platform is to display all products; however, to maximize revenue, the optimal
strategy is to only display a subset of the products whose qualities are above
a certain threshold. We extend our results to Cournot competition model, and
show that the optimal search segmentation mechanisms for both social welfare
maximization and revenue maximization also have simple threshold structures.
The threshold in each case depends on the quality of all products, the
platform's objective and seller's competition model, and can be computed in
linear time in the number of products.",Optimal Search Segmentation Mechanisms for Online Platform Markets,2019-08-20 19:54:06,"Zhenzhe Zheng, R. Srikant","http://arxiv.org/abs/1908.07489v1, http://arxiv.org/pdf/1908.07489v1",cs.GT
34789,th,"The study of Rational Secret Sharing initiated by Halpern and Teague regards
the reconstruction of the secret in secret sharing as a game. It was shown that
participants (parties) may refuse to reveal their shares and so the
reconstruction may fail. Moreover, a refusal to reveal the share may be a
dominant strategy of a party.
  In this paper we consider secret sharing as a sub-action or subgame of a
larger action/game where the secret opens a possibility of consumption of a
certain common good. We claim that utilities of participants will be dependent
on the nature of this common good. In particular, Halpern and Teague scenario
corresponds to a rivalrous and excludable common good. We consider the case
when this common good is non-rivalrous and non-excludable and find many natural
Nash equilibria. We list several applications of secret sharing to demonstrate
our claim and give corresponding scenarios. In such circumstances the secret
sharing scheme facilitates a power sharing agreement in the society. We also
state that non-reconstruction may be beneficial for this society and give
several examples.",Realistic versus Rational Secret Sharing,2019-08-20 22:34:45,"Yvo Desmedt, Arkadii Slinko","http://arxiv.org/abs/1908.07581v1, http://arxiv.org/pdf/1908.07581v1",cs.CR
34803,th,"We introduce a simple, geometric model of opinion polarization. It is a model
of political persuasion, as well as marketing and advertising, utilizing social
values. It focuses on the interplay between different topics and persuasion
efforts. We demonstrate that societal opinion polarization often arises as an
unintended byproduct of influencers attempting to promote a product or idea. We
discuss a number of mechanisms for the emergence of polarization involving one
or more influencers, sending messages strategically, heuristically, or
randomly. We also examine some computational aspects of choosing the most
effective means of influencing agents, and the effects of those strategic
considerations on polarization.",A Geometric Model of Opinion Polarization,2019-10-11 19:06:24,"Jan Hązła, Yan Jin, Elchanan Mossel, Govind Ramnarayan","http://arxiv.org/abs/1910.05274v4, http://arxiv.org/pdf/1910.05274v4",cs.SI
34790,th,"Ingroup favoritism, the tendency to favor ingroup over outgroup, is often
explained as a product of intergroup conflict, or correlations between group
tags and behavior. Such accounts assume that group membership is meaningful,
whereas human data show that ingroup favoritism occurs even when it confers no
advantage and groups are transparently arbitrary. Another possibility is that
ingroup favoritism arises due to perceptual biases like outgroup homogeneity,
the tendency for humans to have greater difficulty distinguishing outgroup
members than ingroup ones. We present a prisoner's dilemma model, where
individuals use Bayesian inference to learn how likely others are to cooperate,
and then act rationally to maximize expected utility. We show that, when such
individuals exhibit outgroup homogeneity bias, ingroup favoritism between
arbitrary groups arises through direct reciprocity. However, this outcome may
be mitigated by: (1) raising the benefits of cooperation, (2) increasing
population diversity, and (3) imposing a more restrictive social structure.",Outgroup Homogeneity Bias Causes Ingroup Favoritism,2019-08-22 08:13:26,"Marcel Montrey, Thomas R. Shultz","http://arxiv.org/abs/1908.08203v1, http://arxiv.org/pdf/1908.08203v1",econ.TH
34791,th,"Many-to-many matching with contracts is studied in the framework of revealed
preferences. All preferences are described by choice functions that satisfy
natural conditions. Under a no-externality assumption individual preferences
can be aggregated into a single choice function expressing a collective
preference. In this framework, a two-sided matching problem may be described as
an agreement problem between two parties: the two parties must find a stable
agreement, i.e., a set of contracts from which no party will want to take away
any contract and to which the two parties cannot agree to add any contract. On
such stable agreements each party's preference relation is a partial order and
the two parties have inverse preferences. An algorithm is presented that
generalizes algorithms previously proposed in less general situations. This
algorithm provides a stable agreement that is preferred to all stable
agreements by one of the parties and therefore less preferred than all stable
agreements by the other party. The number of steps of the algorithm is linear
in the size of the set of contracts, i.e., polynomial in the size of the
problem. The algorithm provides a proof that stable agreements form a lattice
under the two inverse preference relations. Under additional assumptions on the
role of money in preferences, agreement problems can describe general two-sided
markets in which goods are exchanged for money. Stable agreements provide a
solution concept, including prices, that is more general than competitive
equilibria. They satisfy an almost one price law for identical items.",Revealed Preferences for Matching with Contracts,2019-08-23 16:46:43,Daniel Lehmann,"http://arxiv.org/abs/1908.08823v2, http://arxiv.org/pdf/1908.08823v2",cs.GT
34793,th,"We derive new prox-functions on the simplex from additive random utility
models of discrete choice. They are convex conjugates of the corresponding
surplus functions. In particular, we explicitly derive the convexity parameter
of discrete choice prox-functions associated with generalized extreme value
models, and specifically with generalized nested logit models. Incorporated
into subgradient schemes, discrete choice prox-functions lead to natural
probabilistic interpretations of the iteration steps. As illustration we
discuss an economic application of discrete choice prox-functions in consumer
theory. The dual averaging scheme from convex programming naturally adjusts
demand within a consumption cycle.",Discrete choice prox-functions on the simplex,2019-09-12 15:15:37,"David Müller, Yurii Nesterov, Vladimir Shikhman","http://arxiv.org/abs/1909.05591v1, http://arxiv.org/pdf/1909.05591v1",math.OC
34794,th,"Arrow's `impossibility' theorem asserts that there are no satisfactory
methods of aggregating individual preferences into collective preferences in
many complex situations. This result has ramifications in economics, politics,
i.e., the theory of voting, and the structure of tournaments. By identifying
the objects of choice with mathematical sets, and preferences with Hausdorff
measures of the distances between sets, it is possible to extend Arrow's
arguments from a sociological to a mathematical setting. One consequence is
that notions of reversibility can be expressed in terms of the relative
configurations of patterns of sets.","Arrow, Hausdorff, and Ambiguities in the Choice of Preferred States in Complex Systems",2019-09-11 00:50:09,"T. Erber, M. J. Frank","http://arxiv.org/abs/1909.07771v1, http://arxiv.org/pdf/1909.07771v1",econ.TH
34796,th,"We devise an optimal allocation strategy for the execution of a predefined
number of stocks in a given time frame using the technique of discrete-time
Stochastic Control Theory for a defined market model. This market structure
allows an instant execution of the market orders and has been analyzed based on
the assumption of discretized geometric movement of the stock prices. We
consider two different cost functions where the first function involves just
the fiscal cost while the cost function of the second kind incorporates the
risks of non-strategic constrained investments along with fiscal costs.
Precisely, the strategic development of constrained execution of K stocks
within a stipulated time frame of T units is established mathematically using a
well-defined stochastic behaviour of stock prices and the same is compared with
some of the commonly-used execution strategies using the historical stock price
data.",Optimizing Execution Cost Using Stochastic Control,2019-09-24 11:54:46,"Akshay Bansal, Diganta Mukherjee","http://dx.doi.org/10.1007/978-3-030-11364-3_4, http://arxiv.org/abs/1909.10762v1, http://arxiv.org/pdf/1909.10762v1",q-fin.MF
34797,th,"We consider transferable-utility profit-sharing games that arise from
settings in which agents need to jointly choose one of several alternatives,
and may use transfers to redistribute the welfare generated by the chosen
alternative. One such setting is the Shared-Rental problem, in which students
jointly rent an apartment and need to decide which bedroom to allocate to each
student, depending on the student's preferences. Many solution concepts have
been proposed for such settings, ranging from mechanisms without transfers,
such as Random Priority and the Eating mechanism, to mechanisms with transfers,
such as envy free solutions, the Shapley value, and the Kalai-Smorodinsky
bargaining solution. We seek a solution concept that satisfies three natural
properties, concerning efficiency, fairness and decomposition. We observe that
every solution concept known (to us) fails to satisfy at least one of the three
properties. We present a new solution concept, designed so as to satisfy the
three properties. A certain submodularity condition (which holds in interesting
special cases such as the Shared-Rental setting) implies both existence and
uniqueness of our solution concept.",A New Approach to Fair Distribution of Welfare,2019-09-25 11:57:00,"Moshe Babaioff, Uriel Feige","http://arxiv.org/abs/1909.11346v1, http://arxiv.org/pdf/1909.11346v1",econ.TH
34798,th,"In this paper, we propose a statistical aggregation method for agent-based
models with heterogeneous agents that interact both locally on a complex
adaptive network and globally on a market. The method combines three approaches
from statistical physics: (a) moment closure, (b) pair approximation of
adaptive network processes, and (c) thermodynamic limit of the resulting
stochastic process. As an example of use, we develop a stochastic agent-based
model with heterogeneous households that invest in either a fossil-fuel or
renewables-based sector while allocating labor on a competitive market. Using
the adaptive voter model, the model describes agents as social learners that
interact on a dynamic network. We apply the approximation methods to derive a
set of ordinary differential equations that approximate the macro-dynamics of
the model. A comparison of the reduced analytical model with numerical
simulations shows that the approximation fits well for a wide range of
parameters. The proposed method makes it possible to use analytical tools to
better understand the dynamical properties of models with heterogeneous agents
on adaptive networks. We showcase this with a bifurcation analysis that
identifies parameter ranges with multi-stabilities. The method can thus help to
explain emergent phenomena from network interactions and make them
mathematically traceable.",Macroscopic approximation methods for the analysis of adaptive networked agent-based models: The example of a two-sector investment model,2019-09-30 17:47:47,"Jakob J. Kolb, Finn Müller-Hansen, Jürgen Kurths, Jobst Heitzig","http://dx.doi.org/10.1103/PhysRevE.102.042311, http://arxiv.org/abs/1909.13758v2, http://arxiv.org/pdf/1909.13758v2",econ.TH
34799,th,"The long-term impact of algorithmic decision making is shaped by the dynamics
between the deployed decision rule and individuals' response. Focusing on
settings where each individual desires a positive classification---including
many important applications such as hiring and school admissions---we study a
dynamic learning setting where individuals invest in a positive outcome based
on their group's expected gain and the decision rule is updated to maximize
institutional benefit. By characterizing the equilibria of these dynamics, we
show that natural challenges to desirable long-term outcomes arise due to
heterogeneity across groups and the lack of realizability. We consider two
interventions, decoupling the decision rule by group and subsidizing the cost
of investment. We show that decoupling achieves optimal outcomes in the
realizable case but has discrepant effects that may depend on the initial
conditions otherwise. In contrast, subsidizing the cost of investment is shown
to create better equilibria for the disadvantaged group even in the absence of
realizability.",The Disparate Equilibria of Algorithmic Decision Making when Individuals Invest Rationally,2019-10-04 23:18:49,"Lydia T. Liu, Ashia Wilson, Nika Haghtalab, Adam Tauman Kalai, Christian Borgs, Jennifer Chayes","http://arxiv.org/abs/1910.04123v1, http://arxiv.org/pdf/1910.04123v1",cs.GT
34800,th,"We study the regulation of a monopolistic firm using a robust-design
approach. We solve for the policy that minimizes the regulator's worst-case
regret, where the regret is the difference between his complete-information
payoff and his realized payoff. When the regulator's payoff is consumers'
surplus, it is optimal to impose a price cap. The optimal cap balances the
benefit from more surplus for consumers and the loss from underproduction. When
his payoff is consumers' surplus plus the firm's profit, he offers a piece-rate
subsidy in order to mitigate underproduction, but caps the total subsidy so as
not to incentivize severe overproduction.",Robust Monopoly Regulation,2019-10-10 00:25:50,"Yingni Guo, Eran Shmaya","http://arxiv.org/abs/1910.04260v1, http://arxiv.org/pdf/1910.04260v1",econ.TH
34801,th,"The seminal book of Gusfield and Irving [GI89] provides a compact and
algorithmically useful way to represent the collection of stable matches
corresponding to a given set of preferences. In this paper, we reinterpret the
main results of [GI89], giving a new proof of the characterization which is
able to bypass a lot of the ""theory building"" of the original works. We also
provide a streamlined and efficient way to compute this representation. Our
proofs and algorithms emphasize the connection to well-known properties of the
deferred acceptance algorithm.",Representing All Stable Matchings by Walking a Maximal Chain,2019-10-10 10:30:07,"Linda Cai, Clayton Thomas","http://arxiv.org/abs/1910.04401v1, http://arxiv.org/pdf/1910.04401v1",cs.GT
34802,th,"In a strategic form game a strategy profile is an equilibrium if no viable
coalition of agents (or players) benefits (in the Pareto sense) from jointly
changing their strategies. Weaker or stronger equilibrium notions can be
defined by considering various restrictions on coalition formation. In a Nash
equilibrium, for instance, the assumption is that viable coalitions are
singletons, and in a super strong equilibrium, every coalition is viable.
Restrictions on coalition formation can be justified by communication
limitations, coordination problems or institutional constraints. In this paper,
inspired by social structures in various real-life scenarios, we introduce
certain restrictions on coalition formation, and on their basis we introduce a
number of equilibrium notions. As an application we study our equilibrium
notions in resource selection games (RSGs), and we present a complete set of
existence and non-existence results for general RSGs and their important
special cases.",On Existence of Equilibrium Under Social Coalition Structures,2019-10-10 18:30:02,"Bugra Caskurlu, Ozgun Ekici, Fatih Erdem Kizilkaya","http://arxiv.org/abs/1910.04648v1, http://arxiv.org/pdf/1910.04648v1",cs.GT
34804,th,"An agent has access to multiple information sources, each of which provides
information about a different attribute of an unknown state. Information is
acquired continuously -- where the agent chooses both which sources to sample
from, and also how to allocate attention across them -- until an endogenously
chosen time, at which point a decision is taken. We provide an exact
characterization of the optimal information acquisition strategy under weak
conditions on the agent's prior belief about the different attributes. We then
apply this characterization to derive new results regarding: (1) endogenous
information acquisition for binary choice, (2) strategic information provision
by biased news sources, and (3) the dynamic consequences of attention
manipulation.",Dynamically Aggregating Diverse Information,2019-10-15 22:34:28,"Annie Liang, Xiaosheng Mu, Vasilis Syrgkanis","http://arxiv.org/abs/1910.07015v3, http://arxiv.org/pdf/1910.07015v3",econ.TH
34805,th,"Players are statistical learners who learn about payoffs from data. They may
interpret the same data differently, but have common knowledge of a class of
learning procedures. I propose a metric for the analyst's ""confidence"" in a
strategic prediction, based on the probability that the prediction is
consistent with the realized data. The main results characterize the analyst's
confidence in a given prediction as the quantity of data grows large, and
provide bounds for small datasets. The approach generates new predictions, e.g.
that speculative trade is more likely given high-dimensional data, and that
coordination is less likely given noisy data.",Games of Incomplete Information Played By Statisticians,2019-10-15 22:39:01,Annie Liang,"http://arxiv.org/abs/1910.07018v2, http://arxiv.org/pdf/1910.07018v2",econ.TH
34806,th,"This paper considers a class of experimentation games with L\'{e}vy bandits
encompassing those of Bolton and Harris (1999) and Keller, Rady and Cripps
(2005). Its main result is that efficient (perfect Bayesian) equilibria exist
whenever players' payoffs have a diffusion component. Hence, the trade-offs
emphasized in the literature do not rely on the intrinsic nature of bandit
models but on the commonly adopted solution concept (MPE). This is not an
artifact of continuous time: we prove that efficient equilibria arise as limits
of equilibria in the discrete-time game. Furthermore, it suffices to relax the
solution concept to strongly symmetric equilibrium.",Overcoming Free-Riding in Bandit Games,2019-10-20 14:47:21,"Johannes Hörner, Nicolas Klein, Sven Rady","http://arxiv.org/abs/1910.08953v5, http://arxiv.org/pdf/1910.08953v5",econ.TH
34807,th,"This work deals with the implementation of social choice rules using dominant
strategies for unrestricted preferences. The seminal Gibbard-Satterthwaite
theorem shows that only few unappealing social choice rules can be implemented
unless we assume some restrictions on the preferences or allow monetary
transfers. When monetary transfers are allowed and quasi-linear utilities
w.r.t. money are assumed, Vickrey-Clarke-Groves (VCG) mechanisms were shown to
implement any affine-maximizer, and by the work of Roberts, only
affine-maximizers can be implemented whenever the type sets of the agents are
rich enough.
  In this work, we generalize these results and define a new class of
preferences: Preferences which are positive-represented by a quasi-linear
utility. That is, agents whose preference on a subspace of the outcomes can be
modeled using a quasi-linear utility. We show that the characterization of VCG
mechanisms as the incentive-compatible mechanisms extends naturally to this
domain. Our result follows from a simple reduction to the characterization of
VCG mechanisms. Hence, we see our result as a fuller, more correct version
of the VCG characterization.
  This work also highlights a common misconception in the community attributing
the VCG result to the usage of transferable utility. Our result shows that the
incentive-compatibility of the VCG mechanisms does not rely on money being a
common denominator, but rather on the ability of the designer to fine the
agents on a continuous (maybe agent-specific) scale.
  We think these two insights, considering the utility as a representation and
not as the preference itself (which is common in the economic community) and
considering utilities which represent the preference only for the relevant
domain, would turn out to be fruitful in other domains as well.",Almost Quasi-linear Utilities in Disguise: Positive-representation An Extension of Roberts' Theorem,2019-10-26 23:26:01,Ilan Nehama,"http://arxiv.org/abs/1910.12131v1, http://arxiv.org/pdf/1910.12131v1",econ.TH
34808,th,"A set of objects is to be divided fairly among agents with different tastes,
modeled by additive value functions. If the objects cannot be shared, so that
each of them must be entirely allocated to a single agent, then fair division
may not exist. How many objects must be shared between two or more agents in
order to attain a fair division? The paper studies various notions of fairness,
such as proportionality, envy-freeness and equitability. It also studies
consensus division, in which each agent assigns the same value to all bundles
--- a notion that is useful in truthful fair division mechanisms. It proves
upper bounds on the number of required sharings. However, it shows that finding
the minimum number of sharings is, in general, NP-hard even for generic
instances. Many problems remain open.",Fair Division with Bounded Sharing,2019-12-01 21:11:13,Erel Segal-Halevi,"http://arxiv.org/abs/1912.00459v1, http://arxiv.org/pdf/1912.00459v1",cs.GT
34809,th,"Online platforms collect rich information about participants and then share
some of this information back with them to improve market outcomes. In this
paper we study the following information disclosure problem in two-sided
markets: If a platform wants to maximize revenue, which sellers should the
platform allow to participate, and how much of its available information about
participating sellers' quality should the platform share with buyers? We study
this information disclosure problem in the context of two distinct two-sided
market models: one in which the platform chooses prices and the sellers choose
quantities (similar to ride-sharing), and one in which the sellers choose
prices (similar to e-commerce). Our main results provide conditions under which
simple information structures commonly observed in practice, such as banning
certain sellers from the platform while not distinguishing between
participating sellers, maximize the platform's revenue. The platform's
information disclosure problem naturally transforms into a constrained price
discrimination problem where the constraints are determined by the equilibrium
outcomes of the specific two-sided market model being studied. We analyze this
constrained price discrimination problem to obtain our structural results.",Quality Selection in Two-Sided Markets: A Constrained Price Discrimination Approach,2019-12-05 00:21:51,"Bar Light, Ramesh Johari, Gabriel Weintraub","http://arxiv.org/abs/1912.02251v7, http://arxiv.org/pdf/1912.02251v7",econ.TH
34810,th,"We study the fair division of a collection of $m$ indivisible goods amongst a
set of $n$ agents. Whilst envy-free allocations typically do not exist in the
indivisible goods setting, envy-freeness can be achieved if some amount of a
divisible good (money) is introduced. Specifically, Halpern and Shah (SAGT
2019, pp.374-389) showed that, given additive valuation functions where the
marginal value of each item is at most one dollar for each agent, there always
exists an envy-free allocation requiring a subsidy of at most $(n-1)\cdot m$
dollars. The authors also conjectured that a subsidy of $n-1$ dollars is
sufficient for additive valuations. We prove this conjecture. In fact, a
subsidy of at most one dollar per agent is sufficient to guarantee the
existence of an envy-free allocation. Further, we prove that for general
monotonic valuation functions an envy-free allocation always exists with a
subsidy of at most $2(n-1)$ dollars per agent. In particular, the total subsidy
required for monotonic valuations is independent of the number of items.",One Dollar Each Eliminates Envy,2019-12-05 21:43:30,"Johannes Brustle, Jack Dippel, Vishnu V. Narayan, Mashbat Suzuki, Adrian Vetta","http://arxiv.org/abs/1912.02797v1, http://arxiv.org/pdf/1912.02797v1",cs.GT
34811,th,"Mutually beneficial cooperation is a common part of economic systems as firms
in partial cooperation with others can often make a higher sustainable profit.
Though cooperative games were popular in the 1950s, recent interest in
non-cooperative games is prevalent despite the fact that cooperative bargaining
seems to be more useful in economic and political applications. In this paper
we assume that the strategy space and time are inseparable with respect to a
contract. Under this assumption we show that the strategy spacetime is a
dynamic curved Liouville-like 2-brane quantum gravity surface under asymmetric
information and that traditional Euclidean geometry fails to give a proper
feedback Nash equilibrium. Cooperation occurs when two firms' strategies fall
into each other's influence curvature in this strategy spacetime. Small firms
in an economy dominated by large firms are subject to the influence of large
firms. We determine an optimal feedback semi-cooperation of the small firm in
this case using a Liouville-Feynman path integral method.",Semicooperation under curved strategy spacetime,2019-12-27 18:13:22,"Paramahansa Pramanik, Alan M. Polansky","http://arxiv.org/abs/1912.12146v1, http://arxiv.org/pdf/1912.12146v1",math.OC
34812,th,"On taking a non-trivial and semi-transitive bi-relation constituted by two
(hard and soft) binary relations, we report (i) a p-continuity assumption that
guarantees the completeness and transitivity of its soft part, and (ii) a
characterization of a connected topological space in terms of its attendant
properties on the space. Our work generalizes antecedent results in applied
mathematics, all following Eilenberg (1941), and now framed in the context of a
parametrized-topological space. This re-framing is directly inspired by the
continuity assumption in Wold (1943-44) and the mixture-space structure
proposed in Herstein and Milnor (1953), and the unifying synthesis of these
pioneering but neglected papers that it affords may have independent interest.",On an Extension of a Theorem of Eilenberg and a Characterization of Topological Connectedness,2019-12-30 05:40:47,"M. Ali Khan, Metin Uyanik","http://dx.doi.org/10.1016/j.topol.2020.107117, http://arxiv.org/abs/1912.12787v1, http://arxiv.org/pdf/1912.12787v1",econ.TH
34813,th,"This paper proposes a new equilibrium concept ""robust perfect equilibrium""
for non-cooperative games with a continuum of players, incorporating three
types of perturbations. Such an equilibrium is shown to exist (in symmetric
mixed strategies and in pure strategies) and satisfy the important properties
of admissibility, aggregate robustness, and ex post robust perfection. These
properties strengthen relevant equilibrium results in an extensive literature
on strategic interactions among a large number of agents. Illustrative
applications to congestion games and potential games are presented. In the
particular case of a congestion game with strictly increasing cost functions,
we show that there is a unique symmetric robust perfect equilibrium.",Robust perfect equilibrium in large games,2019-12-30 15:51:34,"Enxian Chen, Lei Qiao, Xiang Sun, Yeneng Sun","http://arxiv.org/abs/1912.12908v3, http://arxiv.org/pdf/1912.12908v3",econ.TH
34814,th,"Rental Harmony is the problem of assigning rooms in a rented house to tenants
with different preferences, and simultaneously splitting the rent among them,
such that no tenant envies the bundle (room+price) given to another tenant.
Different papers have studied this problem under two incompatible assumptions:
the miserly tenants assumption is that each tenant prefers a free room to a
non-free room; the quasilinear tenants assumption is that each tenant
attributes a monetary value to each room, and prefers a room of which the
difference between value and price is maximum. This note shows how to adapt the
main technique used for rental harmony with miserly tenants, using Sperner's
lemma, to a much more general class of preferences, that contains both miserly
and quasilinear tenants as special cases. This implies that some recent results
derived for miserly tenants apply to this more general preference class too.",Generalized Rental Harmony,2019-12-31 13:01:55,Erel Segal-Halevi,"http://dx.doi.org/10.1080/00029890.2022.2037988, http://arxiv.org/abs/1912.13249v2, http://arxiv.org/pdf/1912.13249v2",cs.GT
34815,th,"A robust game is a distribution-free model to handle ambiguity generated by a
bounded set of possible realizations of the values of players' payoff
functions. The players are worst-case optimizers and a solution, called
robust-optimization equilibrium, is guaranteed by standard regularity
conditions. The paper investigates the sensitivity to the level of uncertainty
of this equilibrium. Specifically, we prove that it is an epsilon-Nash
equilibrium of the nominal counterpart game, where the epsilon-approximation
measures the extra profit that a player would obtain by reducing his level of
uncertainty. Moreover, given an epsilon-Nash equilibrium of a nominal game, we
prove that it is always possible to introduce uncertainty such that the
epsilon-Nash equilibrium is a robust-optimization equilibrium. An example shows
that a robust Cournot duopoly model can admit multiple and asymmetric
robust-optimization equilibria even though only a symmetric Nash equilibrium exists
for the nominal counterpart game.",Insights on the Theory of Robust Games,2020-02-01 18:00:17,"Giovanni Paolo Crespi, Davide Radi, Matteo Rocca","http://arxiv.org/abs/2002.00225v1, http://arxiv.org/pdf/2002.00225v1",econ.TH
34816,th,"The concept of Owen point, introduced in Guardiola et al. (2009), is an
appealing solution concept that for Production-Inventory games (PI-games)
always belongs to their core. The Owen point allows all the players in the game
to operate at minimum cost but it does not take into account the cost reduction
induced by essential players over their followers (fans). Thus, it may be seen
as an altruistic allocation for essential players, which can be criticized. The
aim of this paper is two-fold: to study the structure and complexity of the core
of PI-games and to introduce new core allocations for PI-games improving the
weaknesses of the Owen point. Regarding the first goal, we advance further on
the analysis of PI-games and we analyze its core structure and algorithmic
complexity. Specifically, we prove that the number of extreme points of the
core of PI-games is exponential in the number of players. On the other hand, we
propose and characterize a new core-allocation, the Omega point, which
compensates the essential players for their role on reducing the costs of their
fans. Moreover, we define another solution concept, the Quid Pro Quo set
(QPQ-set) of allocations, which is based on the Owen and Omega points. Among
all the allocations in this set, we emphasize what we call the Solomonic QPQ
allocation and we provide some necessary conditions for the coincidence of that
allocation with the Shapley value and the Nucleolus.",Quid Pro Quo allocations in Production-Inventory games,2020-02-03 20:11:33,"Luis Guardiola, Ana Meca, Justo Puerto","http://arxiv.org/abs/2002.00953v1, http://arxiv.org/pdf/2002.00953v1",cs.GT
34817,th,"The current practice of envy-free rent division, lead by the fair allocation
website Spliddit, is based on quasi-linear preferences. These preferences rule
out agents' well documented financial constraints. To resolve this issue we
consider piece-wise linear budget constrained preferences. These preferences
admit differences in agents' marginal disutility of paying rent below and above
a given reference, i.e., a soft budget. We construct a polynomial algorithm to
calculate a maxmin utility envy-free allocation, and other related solutions,
in this domain.",A polynomial algorithm for maxmin and minmax envy-free rent division on a soft budget,2020-02-07 19:02:30,Rodrigo A. Velez,"http://arxiv.org/abs/2002.02966v1, http://arxiv.org/pdf/2002.02966v1",cs.GT
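For contrast with the soft-budget domain studied above, the following sketch implements the standard quasi-linear benchmark only: a welfare-maximizing room assignment followed by a small linear program that selects maxmin-utility envy-free prices. The values, total rent, and function name are made-up illustrative inputs, and the paper's piecewise-linear budget-constrained preferences are not modeled.

```python
# Sketch of the quasi-linear benchmark (not the paper's algorithm):
# assignment problem + LP for maxmin-utility envy-free prices.
import numpy as np
from scipy.optimize import linear_sum_assignment, linprog

def maxmin_envy_free_rent(values, total_rent):
    """values[i][r]: agent i's value for room r (quasi-linear utilities)."""
    values = np.asarray(values, dtype=float)
    n = values.shape[0]
    # 1) Welfare-maximizing assignment (required for envy-free prices to exist).
    rows, cols = linear_sum_assignment(values, maximize=True)
    assign = dict(zip(rows, cols))
    # 2) LP over prices p_0..p_{n-1} and the minimum utility t: maximize t
    #    subject to envy-freeness and prices summing to the total rent.
    c = np.zeros(n + 1); c[-1] = -1.0            # linprog minimizes, so use -t
    A_ub, b_ub = [], []
    for i in range(n):
        ri = assign[i]
        # t <= v[i][ri] - p[ri]  (everyone's utility is at least t)
        row = np.zeros(n + 1); row[ri] = 1.0; row[-1] = 1.0
        A_ub.append(row); b_ub.append(values[i, ri])
        for j in range(n):
            rj = assign[j]
            if rj == ri:
                continue
            # envy-freeness: v[i][ri] - p[ri] >= v[i][rj] - p[rj]
            row = np.zeros(n + 1); row[ri] = 1.0; row[rj] = -1.0
            A_ub.append(row); b_ub.append(values[i, ri] - values[i, rj])
    A_eq = [np.concatenate([np.ones(n), [0.0]])]  # prices sum to the rent
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[total_rent],
                  bounds=[(None, None)] * (n + 1))
    assert res.success, res.message
    return assign, res.x[:n]

if __name__ == "__main__":
    assign, prices = maxmin_envy_free_rent([[400, 250, 150],
                                            [350, 300, 200],
                                            [300, 280, 120]], total_rent=600)
    print(assign, np.round(prices, 2))
```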
34819,th,"This note presents a simple overlapping-generations (OLG) model of the
transmission of a trait, such as a culture. Initially, some fraction of agents
carry the trait. In each time period, young agents are ""born"" and are
influenced by some older agents. Agents adopt the trait only if at least a
certain number of their influencers have the trait. This influence may occur
due to rational choice (e.g., because the young agents are playing a
coordination game with old agents who are already committed to a strategy), or
for some other reason. Our interest is in how the process of social influence
unfolds over time, and whether a trait will persist or die out. We characterize
the dynamics of the fraction of active agents and relate the analysis to
classic results on branching processes and random graphs.",Notes on a Social Transmission Model with a Continuum of Agents,2020-02-10 09:04:59,Benjamin Golub,"http://arxiv.org/abs/2002.03569v2, http://arxiv.org/pdf/2002.03569v2",physics.soc-ph
34820,th,"We initiate the study of the natural multiplayer generalization of the
classic continuous Colonel Blotto game. The two-player Blotto game, introduced
by Borel as a model of resource competition across $n$ simultaneous fronts, has
been studied extensively for a century and seen numerous applications
throughout the social sciences. Our work defines the multiplayer Colonel Blotto
game and derives Nash equilibria for various settings of $k$ (number of
players) and $n$. We also introduce a ""Boolean"" version of Blotto that becomes
interesting in the multiplayer setting. The main technical difficulty of our
work, as in the two-player theoretical literature, is the challenge of coupling
various marginal distributions into a joint distribution satisfying a strict
sum constraint. In contrast to previous works in the continuous setting, we
derive our couplings algorithmically in the form of efficient sampling
algorithms.",The Multiplayer Colonel Blotto Game,2020-02-13 00:22:33,"Enric Boix-Adserà, Benjamin L. Edelman, Siddhartha Jayanti","http://arxiv.org/abs/2002.05240v3, http://arxiv.org/pdf/2002.05240v3",cs.GT
34821,th,"We derive a revenue-maximizing scheme that charges customers who are
homogeneous with respect to their waiting cost parameter for a random fee in
order to become premium customers. This scheme incentivizes all customers to
purchase priority, each at his/her drawn price. We also design a
revenue-maximizing scheme for the case where customers are heterogeneous with
respect to their waiting cost parameter. Now lower cost parameter customers are
encouraged to join the premium class at a low price: Given that, those with
high cost parameter would be willing to pay even more for this privilege.",An optimal mechanism charging for priority in a queue,2020-02-16 11:13:17,"Moshe Haviv, Eyal Winter","http://arxiv.org/abs/2002.06533v1, http://arxiv.org/pdf/2002.06533v1",econ.TH
34822,th,"We present a general framework for designing approximately revenue-optimal
mechanisms for multi-item additive auctions, which applies to both truthful and
non-truthful auctions. Given a (not necessarily truthful) single-item auction
format $A$ satisfying certain technical conditions, we run simultaneous item
auctions augmented with a personalized entry fee for each bidder that must be
paid before the auction can be accessed. These entry fees depend only on the
prior distribution of bidder types, and in particular are independent of
realized bids. We bound the revenue of the resulting two-part tariff mechanism
using a novel geometric technique that enables revenue guarantees for many
common non-truthful auctions that previously had none. Our approach adapts and
extends the duality framework of Cai et al [CDW16] beyond truthful auctions.
  Our framework can be used with many common auction formats, such as
simultaneous first-price, simultaneous second-price, and simultaneous all-pay
auctions. Our results for first price and all-pay are the first revenue
guarantees of non-truthful mechanisms in multi-dimensional environments,
addressing an open question in the literature [RST17]. If all-pay auctions are
used, we prove that the resulting mechanism is also credible in the sense that
the auctioneer cannot benefit by deviating from the stated mechanism after
observing agent bids. This is the first static credible mechanism for
multi-item additive auctions that achieves a constant factor of the optimal
revenue. If second-price auctions are used, we obtain a truthful
$O(1)$-approximate mechanism with fixed entry fees that are amenable to tuning
via online learning techniques.",Multi-item Non-truthful Auctions Achieve Good Revenue,2020-02-17 02:04:53,"Constantinos Daskalakis, Maxwell Fishelson, Brendan Lucier, Vasilis Syrgkanis, Santhoshini Velusamy","http://arxiv.org/abs/2002.06702v6, http://arxiv.org/pdf/2002.06702v6",cs.GT
34823,th,"A large fraction of online advertisement is sold via repeated second price
auctions. In these auctions, the reserve price is the main tool for the
auctioneer to boost revenues. In this work, we investigate the following
question: Can changing the reserve prices based on the previous bids improve
the revenue of the auction, taking into account the long-term incentives and
strategic behavior of the bidders? We show that if the distribution of the
valuations is known and satisfies the standard regularity assumptions, then the
optimal mechanism has a constant reserve. However, when there is uncertainty in
the distribution of the valuations, previous bids can be used to learn the
distribution of the valuations and to update the reserve price. We present a
simple, approximately incentive-compatible, and asymptotically optimal dynamic
reserve mechanism that can significantly improve the revenue over the best
static reserve.
  The paper is from July 2014 (our submission to WINE 2014), posted later here
on the arxiv to complement the 1-page abstract in the WINE 2014 proceedings.",Dynamic Reserve Prices for Repeated Auctions: Learning from Bids,2020-02-18 04:53:01,"Yash Kanoria, Hamid Nazerzadeh","http://dx.doi.org/10.1007/978-3-319-13129-0_17, http://arxiv.org/abs/2002.07331v1, http://arxiv.org/pdf/2002.07331v1",cs.GT
34824,th,"When selling information products, the seller can provide some free partial
information to change people's valuations so that the overall revenue can
possibly be increased. We study the general problem of advertising information
products by revealing partial information. We consider buyers who are
decision-makers. The outcomes of the decision problems depend on the state of
the world that is unknown to the buyers. The buyers can make their own
observations and thus can hold different personal beliefs about the state of
the world. There is an information seller who has access to the state of the
world. The seller can promote the information by revealing some partial
information. We assume that the seller chooses a long-term advertising strategy
and then commits to it. The seller's goal is to maximize the expected revenue.
We study the problem in two settings. (1) The seller targets buyers of a
certain type. In this case, finding the optimal advertising strategy is
equivalent to finding the concave closure of a simple function. The function is
a product of two quantities, the likelihood ratio and the cost of uncertainty.
Based on this observation, we prove some properties of the optimal mechanism,
which allow us to solve for the optimal mechanism by a finite-size convex
program. The convex program will have polynomial size if the state of the
world has a constant number of possible realizations or the buyers face a
decision problem with a constant number of options. For the general problem, we
prove that it is NP-hard to find the optimal mechanism. (2) When the seller
faces buyers of different types and only knows the distribution of their types,
we provide an approximation algorithm when it is not too hard to predict the
possible type of buyers who will make the purchase. For the general problem, we
prove that it is NP-hard to find a constant-factor approximation.",Optimal Advertising for Information Products,2020-02-24 05:36:22,"Shuran Zheng, Yiling Chen","http://dx.doi.org/10.1145/3465456.3467649, http://arxiv.org/abs/2002.10045v5, http://arxiv.org/pdf/2002.10045v5",cs.GT
34825,th,"We consider sender-receiver games, where the sender and the receiver are two
distinct nodes in a communication network. Communication between the sender and
the receiver is thus indirect. We ask when it is possible to robustly implement
the equilibrium outcomes of the direct communication game as equilibrium
outcomes of indirect communication games on the network. Robust implementation
requires that: (i) the implementation is independent of the preferences of the
intermediaries and (ii) the implementation is guaranteed at all histories
consistent with unilateral deviations by the intermediaries. Robust
implementation of direct communication is possible if and only if either the
sender and receiver are directly connected or there exist two disjoint paths
between the sender and the receiver.",Robust communication on networks,2020-07-01 15:59:47,"Marie Laclau, Ludovic Renou, Xavier Venel","http://arxiv.org/abs/2007.00457v2, http://arxiv.org/pdf/2007.00457v2",econ.TH
34826,th,"This chapter examines how positivity and order play out in two important
questions in mathematical economics, and in so doing, subjects the postulates
of continuity, additivity and monotonicity to closer scrutiny. Two sets of
results are offered: the first departs from Eilenberg's (1941) necessary and
sufficient conditions on the topology under which an anti-symmetric, complete,
transitive and continuous binary relation exists on a topologically connected
space; and the second, from DeGroot's (1970) result concerning an additivity
postulate that ensures a complete binary relation on a {\sigma}-algebra to be
transitive. These results are framed in the registers of order, topology,
algebra and measure-theory; and also beyond mathematics in economics: the
exploitation of Villegas' notion of monotonic continuity by Arrow-Chichilnisky
in the context of Savage's theorem in decision theory, and the extension of
Diamond's impossibility result in social choice theory by Basu-Mitra. As such,
this chapter has a synthetic and expository motivation, and can be read as a
plea for inter-disciplinary conversations, connections and collaboration.","Binary Relations in Mathematical Economics: On the Continuity, Additivity and Monotonicity Postulates in Eilenberg, Villegas and DeGroot",2020-07-04 01:13:19,"M. Ali Khan, Metin Uyanik","http://dx.doi.org/10.1007/978-3-030-70974-7_12, http://arxiv.org/abs/2007.01952v1, http://arxiv.org/pdf/2007.01952v1",econ.TH
34827,th,"This paper develops a framework for repeated matching markets. The model
departs from the Gale-Shapley matching model by having a fixed set of
long-lived hospitals match with a new generation of short-lived residents in
every period. I show that there are two kinds of hospitals in this repeated
environment: some hospitals can be motivated dynamically to voluntarily reduce
their hiring capacity, potentially making more residents available to rural
hospitals; the others, however, are untouchable even with repeated interaction
and must obtain the same match as they do in a static matching. In large
matching markets with correlated preferences, at most a vanishingly small
fraction of the hospitals are untouchable. The vast majority of hospitals can
be motivated using dynamic incentives.",Stability in Repeated Matching Markets,2020-07-08 00:25:45,Ce Liu,"http://arxiv.org/abs/2007.03794v2, http://arxiv.org/pdf/2007.03794v2",econ.TH
34828,th,"We introduce the model of line-up elections which captures parallel or
sequential single-winner elections with a shared candidate pool. The goal of a
line-up election is to find a high-quality assignment of a set of candidates to
a set of positions such that each position is filled by exactly one candidate
and each candidate fills at most one position. A score for each
candidate-position pair is given as part of the input, which expresses the
qualification of the candidate to fill the position. We propose several voting
rules for line-up elections and analyze them from an axiomatic and an empirical
perspective using real-world data from the popular video game FIFA.",Line-Up Elections: Parallel Voting with Shared Candidate Pool,2020-07-09 20:48:55,"Niclas Boehmer, Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk, Rolf Niedermeier","http://arxiv.org/abs/2007.04960v1, http://arxiv.org/pdf/2007.04960v1",cs.GT
34829,th,"We introduce a model of polarization in networks as a unifying framework for
the measurement of polarization that covers a wide range of applications. We
consider a sufficiently general setup for this purpose: node- and
edge-weighted, undirected, and connected networks. We generalize the axiomatic
characterization of Esteban and Ray (1994) and show that only a particular
instance within this class can be used justifiably to measure polarization in
networks.",Polarization in Networks: Identification-alienation Framework,2020-07-14 17:39:48,"Kenan Huremovic, Ali Ozkes","http://arxiv.org/abs/2007.07061v5, http://arxiv.org/pdf/2007.07061v5",econ.TH
34830,th,"Many real-world situations of ethical relevance, in particular those of
large-scale social choice such as mitigating climate change, involve not only
many agents whose decisions interact in complicated ways, but also various
forms of uncertainty, including quantifiable risk and unquantifiable ambiguity.
In such problems, an assessment of individual and groupwise moral
responsibility for ethically undesired outcomes or their responsibility to
avoid such is challenging and prone to the risk of under- or overdetermination
of responsibility. In contrast to existing approaches based on strict causation
or certain deontic logics that focus on a binary classification of
`responsible' vs `not responsible', we here present several different
quantitative responsibility metrics that assess responsibility degrees in units
of probability. For this, we use a framework based on an adapted version of
extensive-form game trees and an axiomatic approach that specifies a number of
potentially desirable properties of such metrics, and then test the developed
candidate metrics by their application to a number of paradigmatic social
choice situations. We find that while most properties one might desire of such
responsibility metrics can be fulfilled by some variant, an optimal metric that
clearly outperforms others has yet to be found.","Degrees of individual and groupwise backward and forward responsibility in extensive-form games with ambiguity, and their application to social choice problems",2020-07-09 16:19:13,"Jobst Heitzig, Sarah Hiller","http://arxiv.org/abs/2007.07352v1, http://arxiv.org/pdf/2007.07352v1",econ.TH
34837,th,"Vicarious learning is a vital component of organizational learning. We
theorize and model two fundamental processes underlying vicarious learning:
observation of actions (learning what they do) vs. belief sharing (learning
what they think). The analysis of our model points to three key insights.
First, vicarious learning through either process is beneficial even when no
agent in a system of vicarious learners begins with a knowledge advantage.
Second, vicarious learning through belief sharing is not universally better
than mutual observation of actions and outcomes. Specifically, enabling mutual
observability of actions and outcomes is superior to sharing of beliefs when
the task environment features few alternatives with large differences in their
value and there are no time pressures. Third, symmetry in vicarious learning in
fact adversely affects belief sharing but improves observational learning. All
three results are shown to be the consequence of how vicarious learning affects
self-confirming biased beliefs.",Learning what they think vs. learning what they do: The micro-foundations of vicarious learning,2020-07-30 10:06:01,"Sanghyun Park, Phanish Puranam","http://dx.doi.org/10.1287/mnsc.2023.4842, http://arxiv.org/abs/2007.15264v2, http://arxiv.org/pdf/2007.15264v2",econ.TH
34831,th,"We investigate the containment of epidemic spreading in networks from a
normative point of view. We consider a susceptible/infected model in which
agents can invest in order to reduce the contagiousness of network links. In
this setting, we study the relationships between social efficiency, individual
behaviours and network structure. First, we exhibit an upper bound on the Price
of Anarchy and prove that the level of inefficiency can scale up to linearly
with the number of agents. Second, we prove that policies of uniform reduction
of interactions satisfy some optimality conditions in a vast range of networks.
In settings where no central authority can enforce such stringent policies, we
consider as a type of second-best policy the shift from a local to a global
game by allowing agents to subsidise investments in contagiousness reduction in
the global rather than in the local network. We then characterise the scope for
Pareto improvement opened by such policies through a notion of Price of
Autarky, measuring the ratio between social welfare at a global and a local
equilibrium. Overall, our results show that individual behaviours can be
extremely inefficient in the face of epidemic propagation but that policy can
take advantage of the network structure to design efficient containment
policies.",Prophylaxis of Epidemic Spreading with Transient Dynamics,2020-07-15 12:54:54,"Geraldine Bouveret, Antoine Mandel","http://arxiv.org/abs/2007.07580v1, http://arxiv.org/pdf/2007.07580v1",econ.TH
34832,th,"In this paper, we provide a theoretical framework to analyze an agent who
misinterprets or misperceives the true decision problem she faces. We show that
a wide range of behavior observed in experimental settings manifest as failures
to perceive implications, in other words, to properly account for the logical
relationships between various payoff relevant contingencies. We present a
behavioral definition of perceived implication, thereby providing an
elicitation technique, and show that an agent's account of implication
identifies a subjective state-space that underlies her behavior. By analyzing
this state-space, we characterize distinct benchmarks of logical sophistication
that drive empirical phenomena. We disentangle static and dynamic rationality.
Thus, our framework delivers both a methodology for assessing an agent's level
of contingent thinking and a strategy for identifying her beliefs in the
absence of full rationality.",Failures of Contingent Thinking,2020-07-15 17:21:16,"Evan Piermont, Peio Zuazo-Garin","http://arxiv.org/abs/2007.07703v3, http://arxiv.org/pdf/2007.07703v3",cs.AI
34833,th,"We extend Berge's Maximum Theorem to allow for incomplete preferences. We
first provide a simple version of the Maximum Theorem for convex feasible sets
and a fixed preference. Then, we show that if, in addition to the traditional
continuity assumptions, a new continuity property for the domains of
comparability holds, the limits of maximal elements along a sequence of
decision problems are maximal elements in the limit problem. While this new
continuity property for the domains of comparability is not generally necessary
for optimality to be preserved by limits, we provide conditions under which it
is necessary and sufficient.",A Maximum Theorem for Incomplete Preferences,2020-07-20 00:04:46,"Leandro Gorno, Alessandro Rivello","http://arxiv.org/abs/2007.09781v6, http://arxiv.org/pdf/2007.09781v6",econ.TH
34834,th,"Inspired by the recent COVID-19 pandemic, we study a generalization of the
multi-resource allocation problem with heterogeneous demands and Leontief
utilities. Unlike existing settings, we allow each agent to specify
requirements to only accept allocations from a subset of the total supply for
each resource. These requirements can take the form of location constraints (e.g., a
hospital can only accept volunteers who live nearby due to commute
limitations). This can also model a type of substitution effect where some
agents need 1 unit of resource A \emph{or} B, both belonging to the same
meta-type. But some agents specifically want A, and others specifically want B.
We propose a new mechanism called Dominant Resource Fairness with Meta Types
which determines the allocations by solving a small number of linear programs.
The proposed method satisfies Pareto optimality, envy-freeness,
strategy-proofness, and a notion of sharing incentive for our setting. To the
best of our knowledge, we are the first to study this problem formulation,
which improves upon existing work by capturing more constraints that often
arise in real-life situations. Finally, we show numerically that our method
scales better to large problems than alternative approaches.",Dominant Resource Fairness with Meta-Types,2020-07-22 01:22:48,"Steven Yin, Shatian Wang, Lingyi Zhang, Christian Kroer","http://arxiv.org/abs/2007.11961v3, http://arxiv.org/pdf/2007.11961v3",econ.TH
34835,th,"For a set-valued function $F$ on a compact subset $W$ of a manifold, spanning
is a topological property that implies that $F(x) \ne 0$ for interior points
$x$ of $W$. A myopic equilibrium applies when for each action there is a payoff
whose functional value is not necessarily affine in the strategy space. We show
that if the payoffs satisfy the spanning property, then there exists a myopic
equilibrium (though not necessarily a Nash equilibrium). Furthermore, given a
parametrized collection of games whose payoff structure satisfies the spanning
property, the resulting myopic equilibria and their payoffs have the spanning
property with respect to that parametrization. This is a far-reaching extension
of the Kohlberg-Mertens Structure Theorem. There are at least four useful
applications: when payoffs are exogenous to a finite game tree (for example a
finitely repeated game followed by an infinitely repeated game), when one wants
to understand a game strategically entirely with behaviour strategies, when one
wants to extend the subgame concept to subsets of a game tree that are known in
common, and for evolutionary game theory. The proofs
involve new topological results asserting that spanning is preserved by
relevant operations on set-valued functions.","Myopic equilibria, the spanning property, and subgame bundles",2020-07-25 10:51:48,"Robert Simon, Stanislaw Spiez, Henryk Torunczyk","http://arxiv.org/abs/2007.12876v1, http://arxiv.org/pdf/2007.12876v1",econ.TH
34836,th,"This paper provides a comprehensive convergence analysis of the PoA of both
pure and mixed Nash equilibria in atomic congestion games with unsplittable
demands.",A convergence analysis of the price of anarchy in atomic congestion games,2020-07-28 07:31:43,"Zijun Wu, Rolf H. Moehring, Chunying Ren, Dachuan Xu","http://dx.doi.org/10.1007/s10107-022-01853-0, http://arxiv.org/abs/2007.14769v3, http://arxiv.org/pdf/2007.14769v3",cs.GT
34893,th,"We develop a theory of monotone comparative statics based on weak set order
-- in short, weak monotone comparative statics -- and identify the enabling
conditions in the context of individual choices, Pareto optimal choices for a
coalition of agents, Nash equilibria of games, and matching theory. Compared
with the existing theory based on strong set order, the conditions for weak
monotone comparative statics are weaker, sometimes considerably, in terms of
the structure of the choice environments and underlying preferences of agents.
We apply the theory to establish existence and monotone comparative statics of
Nash equilibria in games with strategic complementarities and of stable
many-to-one matchings in two-sided matching problems, allowing for general
preferences that accommodate indifferences and incompleteness.",Weak Monotone Comparative Statics,2019-11-15 04:44:20,"Yeon-Koo Che, Jinwoo Kim, Fuhito Kojima","http://arxiv.org/abs/1911.06442v4, http://arxiv.org/pdf/1911.06442v4",econ.TH
34838,th,"Attributes provide critical information about the alternatives that a
decision-maker is considering. When their magnitudes are uncertain, the
decision-maker may be unsure about which alternative is truly the best, so
measuring the attributes may help the decision-maker make a better decision.
This paper considers settings in which each measurement yields one sample of
one attribute for one alternative. When given a fixed number of samples to
collect, the decision-maker must determine which samples to obtain, make the
measurements, update prior beliefs about the attribute magnitudes, and then
select an alternative. This paper presents the sample allocation problem for
multiple attribute selection decisions and proposes two sequential, lookahead
procedures for the case in which discrete distributions are used to model the
uncertain attribute magnitudes. The two procedures are similar but reflect
different quality measures (and loss functions), which motivate different
decision rules: (1) select the alternative with the greatest expected utility
and (2) select the alternative that is most likely to be the truly best
alternative. We conducted a simulation study to evaluate the performance of the
sequential procedures and hybrid procedures that first allocate some samples
using a uniform allocation procedure and then use the sequential, lookahead
procedure. The results indicate that the hybrid procedures are effective;
allocating many (but not all) of the initial samples with the uniform
allocation procedure not only reduces overall computational effort but also
selects alternatives that have lower average opportunity cost and are more
often truly best.",Lookahead and Hybrid Sample Allocation Procedures for Multiple Attribute Selection Decisions,2020-07-31 18:04:49,"Jeffrey W. Herrmann, Kunal Mehta","http://arxiv.org/abs/2007.16119v1, http://arxiv.org/pdf/2007.16119v1",econ.TH
34839,th,"We consider two-player normal form games where each player has the same
finite strategy set.
  The payoffs of each player are assumed to be i.i.d. random variables with a
continuous distribution.
  We show that, with high probability, the better-response dynamics converge to
a pure Nash equilibrium whenever there is one, whereas best-response dynamics
fails to converge, as it is trapped.","When ""Better"" is better than ""Best""",2020-10-31 13:58:02,"Ben Amiet, Andrea Collevecchio, Kais Hamza","http://arxiv.org/abs/2011.00239v1, http://arxiv.org/pdf/2011.00239v1",econ.TH
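A quick way to see the contrast claimed in this abstract is to simulate both dynamics on random games. The sketch below, with arbitrary parameter choices (n, trials, max_steps), draws i.i.d. Uniform(0,1) payoffs and estimates how often better-response and best-response dynamics reach a pure Nash equilibrium when one exists; it is an illustration, not the paper's argument.

```python
# Illustrative simulation: better-response vs best-response dynamics on random
# two-player games with i.i.d. Uniform(0,1) payoffs. All parameters are demo
# choices, not values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def has_pure_nash(A, B):
    best_rows = A == A.max(axis=0, keepdims=True)   # player 1 best responses
    best_cols = B == B.max(axis=1, keepdims=True)   # player 2 best responses
    return np.any(best_rows & best_cols)

def is_nash(A, B, i, j):
    return A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()

def run_dynamics(A, B, kind, max_steps=2_000):
    n = A.shape[0]
    i, j = rng.integers(n), rng.integers(n)
    for step in range(max_steps):
        if is_nash(A, B, i, j):
            return True
        if step % 2 == 0:                            # player 1 moves
            if kind == "best":
                i = int(A[:, j].argmax())
            else:                                    # a random strict improvement
                better = np.flatnonzero(A[:, j] > A[i, j])
                if better.size:
                    i = int(rng.choice(better))
        else:                                        # player 2 moves
            if kind == "best":
                j = int(B[i, :].argmax())
            else:
                better = np.flatnonzero(B[i, :] > B[i, j])
                if better.size:
                    j = int(rng.choice(better))
    return is_nash(A, B, i, j)

def experiment(n=50, trials=100):
    counts = {"better": 0, "best": 0, "games_with_pne": 0}
    for _ in range(trials):
        A, B = rng.random((n, n)), rng.random((n, n))
        if not has_pure_nash(A, B):
            continue
        counts["games_with_pne"] += 1
        for kind in ("better", "best"):
            counts[kind] += run_dynamics(A, B, kind)
    return counts

if __name__ == "__main__":
    print(experiment())
```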
34840,th,"We expand the literature on the price of anarchy (PoA) of simultaneous item
auctions by considering settings with correlated values; we do this via the
fundamental economic model of interdependent values (IDV). It is well-known
that in multi-item settings with private values, correlated values can lead to
bad PoA, which can be polynomially large in the number of agents $n$. In the
more general model of IDV, we show that the PoA can be polynomially large even
in single-item settings. On the positive side, we identify a natural condition
on information dispersion in the market, termed $\gamma$-heterogeneity, which
enables good PoA guarantees. Under this condition, we show that for single-item
settings, the PoA of standard mechanisms degrades gracefully with $\gamma$. For
settings with $m>1$ items we show a separation between two domains: If $n \geq
m$, we devise a new simultaneous item auction with good PoA (with respect to
$\gamma$), under limited information asymmetry. To the best of our knowledge,
this is the first positive PoA result for correlated values in multi-item
settings. The main technical difficulty in establishing this result is that the
standard tool for establishing PoA results -- the smoothness framework -- is
unsuitable for IDV settings, and so we must introduce new techniques to address
the unique challenges imposed by such settings. In the domain of $n \ll m$, we
establish impossibility results even for surprisingly simple scenarios.",Price of Anarchy of Simple Auctions with Interdependent Values,2020-11-01 16:38:55,"Alon Eden, Michal Feldman, Inbal Talgam-Cohen, Ori Zviran","http://arxiv.org/abs/2011.00498v2, http://arxiv.org/pdf/2011.00498v2",cs.GT
34841,th,"It is well understood that the structure of a social network is critical to
whether or not agents can aggregate information correctly. In this paper, we
study social networks that support information aggregation when rational agents
act sequentially and irrevocably. Whether or not information is aggregated
depends, inter alia, on the order in which agents decide. Thus, to decouple the
order and the topology, our model studies a random arrival order.
  Unlike the case of a fixed arrival order, in our model, the decision of an
agent is unlikely to be affected by those who are far from him in the network.
This observation allows us to identify a local learning requirement, a natural
condition on the agent's neighborhood that guarantees that this agent makes the
correct decision (with high probability) no matter how well other agents
perform. Roughly speaking, the agent should belong to a multitude of mutually
exclusive social circles.
  We illustrate the power of the local learning requirement by constructing a
family of social networks that guarantee information aggregation even though
no agent is a social hub (in other words, there are no opinion leaders).
Although the common wisdom of the social learning literature suggests that
information aggregation is very fragile, another application of the local
learning requirement demonstrates the existence of networks where learning
prevails even if a substantial fraction of the agents are not involved in the
learning process. On a technical level, the networks we construct rely on the
theory of expander graphs, i.e., highly connected sparse graphs with a wide
range of applications from pure mathematics to error-correcting codes.",On social networks that support learning,2020-11-10 20:22:45,"Itai Arieli, Fedor Sandomirskiy, Rann Smorodinsky","http://arxiv.org/abs/2011.05255v1, http://arxiv.org/pdf/2011.05255v1",econ.TH
34842,th,"I develop a novel macroeconomic epidemiological agent-based model to study
the impact of the COVID-19 pandemic under varying policy scenarios. Agents
differ with regard to their profession, family status and age and interact with
other agents at home, at work, or during leisure activities. The model makes it
possible to implement and test actually used or counterfactual policies such as closing
schools or the leisure industry explicitly in the model in order to explore
their impact on the spread of the virus, and their economic consequences. The
model is calibrated with German statistical data on time use, demography,
households, firm demography, employment, company profits and wages. I set up a
baseline scenario based on the German containment policies and fit the
epidemiological parameters of the simulation to the observed German death curve
and an estimated infection curve of the first COVID-19 wave. My model suggests
that by acting one week later, the death toll of the first wave in Germany
would have been 180% higher, whereas it would have been 60% lower if the
policies had been enacted a week earlier. I finally discuss two stylized fiscal
policy scenarios: procyclical (zero-deficit) and anticyclical fiscal policy. In
the zero-deficit scenario a vicious circle emerges, in which the economic
recession spreads from the high-interaction leisure industry to the rest of the
economy. Even after eliminating the virus and lifting the restrictions, the
economic recovery is incomplete. Anticyclical fiscal policy, on the other hand,
limits the economic losses and allows for a V-shaped recovery, but does not
increase the number of deaths. These results suggest that an optimal response
to the pandemic aiming at containment or holding out for a vaccine combines
early introduction of containment measures to keep the number of infected low
with expansionary fiscal policy to keep output in lower risk sectors high.",COVID-Town: An Integrated Economic-Epidemiological Agent-Based Model,2020-11-12 12:59:29,Patrick Mellacher,"http://arxiv.org/abs/2011.06289v1, http://arxiv.org/pdf/2011.06289v1",econ.TH
34843,th,"Income inequality is known to have negative impacts on an economic system,
and thus has been debated for a century or more. Numerous ideas have
been proposed to quantify income inequality, and the Gini coefficient is a
prevalent index. However, the concept of perfect equality in the Gini
coefficient is rather idealistic and cannot provide realistic guidance on
whether government interventions are needed to adjust income inequality. In
this paper, we first propose the concept of a more realistic and feasible
income equality that maximizes total social welfare. Then we show that an
optimal income distribution representing the feasible equality could be modeled
using the sigmoid welfare function and the Boltzmann income distribution.
Finally, we carry out an empirical analysis of four countries and demonstrate
how optimal income distributions could be evaluated. Our results show that the
feasible income equality could be used as a practical guideline for government
policies and interventions.",Getting to a feasible income equality,2020-11-18 10:03:34,"Ji-Won Park, Chae Un Kim","http://dx.doi.org/10.1371/journal.pone.0249204, http://arxiv.org/abs/2011.09119v2, http://arxiv.org/pdf/2011.09119v2",econ.TH
34844,th,"We study the strategic simplicity of stable matching mechanisms where one
side has fixed preferences, termed priorities. Specifically, we ask which
priorities are such that the strategyproofness of deferred acceptance (DA) can
be recognized by agents unable to perform contingency reasoning, that is,
\emph{when is DA obviously strategyproof} (Li, 2017)?
  We answer this question by completely characterizing those priorities which
make DA obviously strategyproof (OSP). This solves an open problem of Ashlagi
and Gonczarowski (2018). We find that when DA is OSP, priorities are either
acyclic (Ergin, 2002), a restrictive condition which allows priorities to
differ on only two agents at a time, or contain an extremely limited cyclic
pattern where all priority lists are identical except for exactly two. We
conclude that, for stable matching mechanisms, the tension between
understandability (in the sense of OSP) and expressiveness of priorities is
very high.",Classification of Priorities Such That Deferred Acceptance is Obviously Strategyproof,2020-11-24 23:39:21,Clayton Thomas,"http://arxiv.org/abs/2011.12367v2, http://arxiv.org/pdf/2011.12367v2",econ.TH
34845,th,"In a housing market of Shapley and Scarf, each agent is endowed with one
indivisible object and has preferences over all objects. An allocation of the
objects is in the (strong) core if there exists no (weakly) blocking coalition.
In this paper we show that in the case of strict preferences the unique strong
core allocation (or competitive allocation) respects improvement: if an agent's
object becomes more attractive for some other agents, then the agent's
allotment in the unique strong core allocation weakly improves. We obtain a
general result in the case of ties in the preferences and provide new integer
programming formulations for computing (strong) core and competitive
allocations. Finally, we conduct computer simulations to compare the
game-theoretical solutions with maximum size and maximum weight exchanges for
markets that resemble the pools of kidney exchange programmes.","Shapley-Scarf Housing Markets: Respecting Improvement, Integer Programming, and Kidney Exchange",2021-01-30 09:50:42,"Péter Biró, Flip Klijn, Xenia Klimentova, Ana Viana","http://arxiv.org/abs/2102.00167v1, http://arxiv.org/pdf/2102.00167v1",econ.TH
34846,th,"Social choice functions (SCFs) map the preferences of a group of agents over
some set of alternatives to a non-empty subset of alternatives. The
Gibbard-Satterthwaite theorem has shown that only extremely restrictive SCFs
are strategyproof when there are more than two alternatives. For set-valued
SCFs, or so-called social choice correspondences, the situation is less clear.
There are miscellaneous - mostly negative - results using a variety of
strategyproofness notions and additional requirements. The simple and intuitive
notion of Kelly-strategyproofness has turned out to be particularly compelling
because it is weak enough to still allow for positive results. For example, the
Pareto rule is strategyproof even when preferences are weak, and a number of
attractive SCFs (such as the top cycle, the uncovered set, and the essential
set) are strategyproof for strict preferences. In this paper, we show that, for
weak preferences, only indecisive SCFs can satisfy strategyproofness. In
particular, (i) every strategyproof rank-based SCF violates Pareto-optimality,
(ii) every strategyproof support-based SCF (which generalize Fishburn's C2
SCFs) that satisfies Pareto-optimality returns at least one most preferred
alternative of every voter, and (iii) every strategyproof non-imposing SCF
returns the Condorcet loser in at least one profile. We also discuss the
consequences of these results for randomized social choice.",On the Indecisiveness of Kelly-Strategyproof Social Choice Functions,2021-01-31 20:41:41,"Felix Brandt, Martin Bullinger, Patrick Lederer","http://arxiv.org/abs/2102.00499v2, http://arxiv.org/pdf/2102.00499v2",cs.GT
34847,th,"The classic wisdom-of-the-crowd problem asks how a principal can ""aggregate""
information about the unknown state of the world from agents without
understanding the information structure among them. We propose a new simple
procedure called Population-Mean-Based Aggregation to achieve this goal. The
procedure only requires eliciting agents' beliefs about the state, and also
eliciting some agents' expectations of the average belief in the population. We
show that this procedure fully aggregates information: in an infinite
population, it always infers the true state of the world. The procedure can
accommodate correlations in agents' information, misspecified beliefs, any
finite number of possible states of the world, and only requires very weak
assumptions on the information structure.",The Wisdom of the Crowd and Higher-Order Beliefs,2021-02-04 18:05:18,"Yi-Chun Chen, Manuel Mueller-Frank, Mallesh M Pai","http://arxiv.org/abs/2102.02666v2, http://arxiv.org/pdf/2102.02666v2",econ.TH
34853,th,"A mixture preorder is a preorder on a mixture space (such as a convex set)
that is compatible with the mixing operation. In decision theoretic terms, it
satisfies the central expected utility axiom of strong independence. We
consider when a mixture preorder has a multi-representation that consists of
real-valued, mixture-preserving functions. If it does, it must satisfy the
mixture continuity axiom of Herstein and Milnor (1953). Mixture continuity is
sufficient for a mixture-preserving multi-representation when the dimension of
the mixture space is countable, but not when it is uncountable. Our strongest
positive result is that mixture continuity is sufficient in conjunction with a
novel axiom we call countable domination, which constrains the order complexity
of the mixture preorder in terms of its Archimedean structure. We also consider
what happens when the mixture space is given its natural weak topology.
Continuity (having closed upper and lower sets) and closedness (having a closed
graph) are stronger than mixture continuity. We show that continuity is
necessary but not sufficient for a mixture preorder to have a
mixture-preserving multi-representation. Closedness is also necessary; we leave
it as an open question whether it is sufficient. We end with results concerning
the existence of mixture-preserving multi-representations that consist entirely
of strictly increasing functions, and a uniqueness result.",Expected utility theory on mixture spaces without the completeness axiom,2021-02-13 13:48:54,"David McCarthy, Kalle Mikkola, Teruji Thomas","http://arxiv.org/abs/2102.06898v1, http://arxiv.org/pdf/2102.06898v1",econ.TH
34848,th,"We present tight bounds and heuristics for personalized, multi-product
pricing problems. Under mild conditions we show that the best price in the
direction of a positive vector results in profits that are guaranteed to be at
least as large as a fraction of the profits from optimal personalized pricing.
For unconstrained problems, the fraction depends on the factor and on optimal
price vectors for the different customer types. For constrained problems the
fraction depends on the factor and a ratio of the constraints. Using a factor
vector with equal components results in uniform pricing and has exceedingly
mild sufficient conditions for the bound to hold. A robust factor is presented
that achieves the best possible performance guarantee. As an application, our
model yields a tight lower bound on the performance of linear pricing relative
to optimal personalized non-linear pricing, and suggests effective non-linear
price heuristics relative to personalized solutions. Additionally, our model
provides guarantees for simple strategies such as bundle-size pricing and
component-pricing with respect to optimal personalized mixed bundle pricing.
Heuristics to cluster customer types are also developed with the goal of
improving performance by allowing each cluster to price along its own factor.
Numerical results are presented for a variety of demand models that illustrate
the tradeoffs between using the economic factor and the robust factor for each
cluster, as well as the tradeoffs between using a clustering heuristic with a
worst case performance of two and a machine learning clustering algorithm. In
our experiments economically motivated factors coupled with machine learning
clustering heuristics performed best.",Bounds and Heuristics for Multi-Product Personalized Pricing,2021-02-05 10:52:56,"Guillermo Gallego, Gerardo Berbeglia","http://arxiv.org/abs/2102.03038v2, http://arxiv.org/pdf/2102.03038v2",econ.TH
34849,th,"Rationing of healthcare resources has emerged as an important issue, which
has been discussed by medical experts, policy-makers, and the general public.
We consider a rationing problem where medical units are to be allocated to
patients. Each unit is reserved for one of several categories and each category
has a priority ranking of the patients. We present an allocation rule that
respects the priorities, complies with the eligibility requirements, allocates
the largest feasible number of units, and does not incentivize agents to hide
that they qualify through a category. The rule characterizes all possible
allocations that satisfy the first three properties and is polynomial-time
computable.","Efficient, Fair, and Incentive-Compatible Healthcare Rationing",2021-02-08 20:45:52,"Haris Aziz, Florian Brandl","http://arxiv.org/abs/2102.04384v2, http://arxiv.org/pdf/2102.04384v2",cs.GT
34850,th,"Are deep convolutional neural networks (CNNs) for image classification
explainable by utility maximization with information acquisition costs? We
demonstrate that deep CNNs behave equivalently (in terms of necessary and
sufficient conditions) to rationally inattentive utility maximizers, a
generative model used extensively in economics for human decision making. Our
claim is based on extensive experiments on 200 deep CNNs from 5 popular
architectures. The parameters of our interpretable model are computed
efficiently via convex feasibility algorithms. As an application, we show that
our economics-based interpretable model can predict the classification
performance of deep CNNs trained with arbitrary parameters with accuracy
exceeding 94%. This eliminates the need to re-train the deep CNNs for image
classification. The theoretical foundation of our approach lies in Bayesian
revealed preference studied in micro-economics. All our results are on GitHub
and completely reproducible.",Rationally Inattentive Utility Maximization for Interpretable Deep Image Classification,2021-02-09 04:14:24,"Kunal Pattanayak, Vikram Krishnamurthy","http://arxiv.org/abs/2102.04594v3, http://arxiv.org/pdf/2102.04594v3",cs.LG
34851,th,"Many centralized matching markets are preceded by interviews between
participants. We study the impact on the final match of an increase in the
number of interviews for one side of the market. Our motivation is the match
between residents and hospitals where, due to the COVID-19 pandemic, interviews
for the 2020-21 season of the National Residency Matching Program were switched
to a virtual format. This drastically reduced the cost to applicants of
accepting interview invitations. However, the reduction in cost was not
symmetric since applicants, not programs, previously bore most of the costs of
in-person interviews. We show that if doctors can accept more interviews, but
the hospitals do not increase the number of interviews they offer, then no
previously matched doctor is better off and many are potentially harmed. This
adverse consequence is the result of what we call interview hoarding. We prove
this analytically and characterize optimal mitigation strategies for special
cases. We use simulations to extend these insights to more general settings.",Interview Hoarding,2021-02-12 14:06:18,"Vikram Manjunath, Thayer Morrill","http://arxiv.org/abs/2102.06440v4, http://arxiv.org/pdf/2102.06440v4",econ.TH
34852,th,"After the closure of the schools in Hungary from March 2020 due to the
pandemic, many students were left at home with no or not enough parental help
for studying, and in the meantime some people had more free time and
willingness to help others in need during the lockdown. In this paper we
describe the optimisation aspects of a joint NGO project for allocating
voluntary mentors to students using a web-based coordination mechanism. The
goal of the project has been to form optimal pairs and study groups by taking
into account the preferences and the constraints of the participants. In this paper we
present the optimisation concept, and the integer programming techniques used
for solving the allocation problems. Furthermore, we conducted computational
simulations on real and generated data to evaluate the performance of this
dynamic matching scheme under different parameter settings.",Online voluntary mentoring: Optimising the assignment of students and mentors,2021-02-12 21:14:30,"Péter Biró, Márton Gyetvai","http://arxiv.org/abs/2102.06671v1, http://arxiv.org/pdf/2102.06671v1",cs.GT
34854,th,"Given a set of agents with approval preferences over each other, we study the
task of finding $k$ matchings fairly representing everyone's preferences. We
model the problem as an approval-based multiwinner election where the set of
candidates consists of all possible matchings and agents' preferences over each
other are lifted to preferences over matchings. Due to the exponential number
of candidates in such elections, standard algorithms for classical sequential
voting rules (such as those proposed by Thiele and Phragmén) are rendered
inefficient. We show that the computational tractability of these rules can be
regained by exploiting the structure of the approval preferences. Moreover, we
establish algorithmic results and axiomatic guarantees that go beyond those
obtainable in the general multiwinner setting. Assuming that approvals are
symmetric, we show that proportional approval voting (PAV), a well-established
but computationally intractable voting rule, becomes polynomial-time
computable, and its sequential variant (seq-PAV), which does not provide any
proportionality guarantees in general, fulfills a rather strong guarantee known
as extended justified representation. Some of our positive computational
results extend to other types of compactly representable elections with an
exponential candidate space.",Selecting Matchings via Multiwinner Voting: How Structure Defeats a Large Candidate Space,2021-02-15 13:25:11,"Niclas Boehmer, Markus Brill, Ulrike Schmidt-Kraepelin","http://arxiv.org/abs/2102.07441v1, http://arxiv.org/pdf/2102.07441v1",cs.GT
34855,th,"We study vote delegation with ""well-behaving"" and ""misbehaving"" agents and
compare it with conventional voting. Typical examples for vote delegation are
validation or governance tasks on blockchains. There is a majority of
well-behaving agents, but they may abstain or delegate their vote to other
agents since voting is costly. Misbehaving agents always vote. We compare
conventional voting allowing for abstention with vote delegation. Preferences
of voters are private information and a positive outcome is achieved if
well-behaving agents win. We illustrate that vote delegation leads to quite
different outcomes than conventional voting with abstention. In particular, we
obtain three insights: First, if the number of misbehaving voters, denoted by
f, is high, both voting methods fail to deliver a positive outcome. Second, if f
takes an intermediate value, conventional voting delivers a positive outcome,
while vote delegation fails with probability one. Third, if f is low,
delegation delivers a positive outcome with higher probability than
conventional voting. Finally, our results characterize worst-case outcomes that
can happen in a liquid democracy.",Vote Delegation and Misbehavior,2021-02-17 18:32:32,"Hans Gersbach, Akaki Mamageishvili, Manvir Schneider","http://arxiv.org/abs/2102.08823v2, http://arxiv.org/pdf/2102.08823v2",cs.GT
34856,th,"We examine vote delegation when preferences of agents are private
information. One group of agents (delegators) does not want to participate in
voting and abstains under conventional voting or can delegate its votes to the
other group (voters) who decide between two alternatives. We show that free
delegation favors minorities, that is, alternatives that have a lower chance of
winning ex-ante. The same occurs if the number of voting rights that actual
voters can have is capped. When the number of delegators increases, the
probability that the ex-ante minority wins under free and capped delegation
converges to the one under conventional voting--albeit non-monotonically. Our
results are obtained in a private value setting but can be readily translated
into an information aggregation setting when voters receive a signal about the
''correct"" alternative with some probability.",Vote Delegation with Unknown Preferences,2021-02-17 18:44:38,"Hans Gersbach, Akaki Mamageishvili, Manvir Schneider","http://arxiv.org/abs/2102.08835v3, http://arxiv.org/pdf/2102.08835v3",cs.GT
34857,th,"We study how loyalty behavior of customers and differing costs to produce
undifferentiated products by firms can influence market outcomes. In prior
works that study such markets, firm costs have generally been assumed
negligible or equal, and loyalty is modeled as an additive bias in customer
valuations. We extend these previous treatments by explicitly considering cost
asymmetry and richer customer loyalty behavior in a game-theoretic model. Thus,
in the setting where firms incur different non-negligible product costs, and
customers have firm-specific loyalty levels, we comprehensively characterize
the effects of loyalty and product cost difference on market outcomes such as
prices, market shares, and profits. Our analysis and numerical simulations
provide new insights into how firms can price, how they can survive competition
even with higher product costs, and how they can control these costs and/or
increase customer loyalty to change their market position.",Price Discrimination in the Presence of Customer Loyalty and Differing Firm Costs,2021-02-17 02:34:29,"Theja Tulabandhula, Aris Ouksel, Son Nguyen","http://arxiv.org/abs/2102.09620v2, http://arxiv.org/pdf/2102.09620v2",econ.TH
34858,th,"Delays in the availability of vaccines are costly as the pandemic continues.
However, in the presence of adjustment costs firms have an incentive to
increase production capacity only gradually. The existing contracts specify
only a fixed quantity to be supplied over a certain period and thus provide no
incentive for an accelerated buildup in capacity. A high price does not change
this. The optimal contract would specify a decreasing price schedule over time
which can replicate the social optimum.",Incentives for accelerating the production of Covid-19 vaccines in the presence of adjustment costs,2021-02-19 11:38:54,"Claudius Gros, Daniel Gros","http://arxiv.org/abs/2102.09807v1, http://arxiv.org/pdf/2102.09807v1",econ.TH
34859,th,"Faced with huge market potential and increasing competition in emerging
industries, product manufacturers with key technologies tend to consider
whether to implement a component open supply strategy. This study focuses on a
pricing game induced by the component open supply strategy between a vertically
integrated manufacturer (who produces key components and end products) and an
exterior product manufacturer (who produces end products using purchased key
components) with different customer perceived value and different cost
structure. This study first establishes a three-stage pricing game model and
proposes demand functions by incorporating relative customer perceived value.
Based on the demand functions, we obtain feasible regions of the exterior
manufacturer's sourcing decision and the optimal price decision in each region.
Then the effects of relative customer perceived value, cost structure, and
market structure on price decisions and optimal profits of the vertically
integrated manufacturer are demonstrated. Finally, as for the optimal component
supply strategy, we present a generalized closed supply Pareto zone and
establish supply strategy Pareto zones under several specific configurations.",Pricing decisions under manufacturer's component open-supply strategy,2021-02-20 10:52:25,"Peiya Zhu, Xiaofei Qian, Xinbao Liu, Shaojun Lu","http://arxiv.org/abs/2102.10280v2, http://arxiv.org/pdf/2102.10280v2",econ.TH
34860,th,"Mechanism design has traditionally assumed that the set of participants are
fixed and known to the mechanism (the market owner) in advance. However, in
practice, the market owner can only directly reach a small number of
participants (her neighbours). Hence the owner often needs costly promotions to
recruit more participants in order to get desirable outcomes such as social
welfare or revenue maximization. In this paper, we propose to incentivize
existing participants to invite their neighbours to attract more participants.
However, they would not invite each other if they are competitors. We discuss
how to utilize the conflict of interest between the participants to incentivize
them to invite each other to form larger markets. We will highlight the early
solutions and open the floor for discussing the fundamental open questions in
the settings of auctions, coalitional games, matching and voting.",Mechanism Design Powered by Social Interactions,2021-02-20 16:46:05,Dengji Zhao,"http://arxiv.org/abs/2102.10347v1, http://arxiv.org/pdf/2102.10347v1",cs.GT
34900,th,"We propose a new approach to solving dynamic decision problems with rewards
that are unbounded below. The approach involves transforming the Bellman
equation in order to convert an unbounded problem into a bounded one. The major
advantage is that, when the conditions stated below are satisfied, the
transformed problem can be solved by iterating with a contraction mapping.
While the method is not universal, we show by example that many common decision
problems do satisfy our conditions.",Dynamic Optimal Choice When Rewards are Unbounded Below,2019-11-29 12:54:25,"Qingyin Ma, John Stachurski","http://arxiv.org/abs/1911.13025v1, http://arxiv.org/pdf/1911.13025v1",econ.TH
34861,th,"The internet advertising market is a multi-billion dollar industry, in which
advertisers buy thousands of ad placements every day by repeatedly
participating in auctions. An important and ubiquitous feature of these
auctions is the presence of campaign budgets, which specify the maximum amount
the advertisers are willing to pay over a specified time period. In this paper,
we present a new model to study the equilibrium bidding strategies in standard
auctions, a large class of auctions that includes first- and second-price
auctions, for advertisers who satisfy budget constraints on average. Our model
dispenses with the common, yet unrealistic assumption that advertisers' values
are independent and instead assumes a contextual model in which advertisers
determine their values using a common feature vector. We show the existence of
a natural value-pacing-based Bayes-Nash equilibrium under very mild
assumptions. Furthermore, we prove a revenue equivalence showing that all
standard auctions yield the same revenue even in the presence of budget
constraints. Leveraging this equivalence, we prove Price of Anarchy bounds for
liquid welfare and structural properties of pacing-based equilibria that hold
for all standard auctions. In recent years, the internet advertising market has
adopted first-price auctions as the preferred paradigm for selling advertising
slots. Our work thus takes an important step toward understanding the
implications of the shift to first-price auctions in internet advertising
markets by studying how the choice of the selling mechanism impacts revenues,
welfare, and advertisers' bidding strategies.",Contextual Standard Auctions with Budgets: Revenue Equivalence and Efficiency Guarantees,2021-02-21 02:41:25,"Santiago Balseiro, Christian Kroer, Rachitesh Kumar","http://arxiv.org/abs/2102.10476v3, http://arxiv.org/pdf/2102.10476v3",cs.GT
34862,th,"We provide novel simple representations of strategy-proof voting rules when
voters have uni-dimensional single-peaked preferences (as well as
multi-dimensional separable preferences). The analysis recovers, links and
unifies existing results in the literature such as Moulin's classic
characterization in terms of phantom voters and Barber\`a, Gul and Stacchetti's
in terms of winning coalitions (""generalized median voter schemes""). First, we
compare the computational properties of the various representations and show
that the grading curve representation is superior in terms of computational
complexity. Moreover, the new approach allows us to obtain new
characterizations when strategy-proofness is combined with other desirable
properties such as anonymity, responsiveness, ordinality, participation,
consistency, or proportionality. In the anonymous case, two methods are singled
out: the -- well-known -- ordinal median and the -- most recent -- linear
median.",New Characterizations of Strategy-Proofness under Single-Peakedness,2021-02-23 16:29:57,"Andrew Jennings, Rida Laraki, Clemens Puppe, Estelle Varloot","http://arxiv.org/abs/2102.11686v2, http://arxiv.org/pdf/2102.11686v2",cs.GT
34863,th,"We extend the mathematical model proposed by Ottaviano-Tabuchi-Thisse (2002)
to a multi-regional case and investigate the stability of the homogeneous
stationary solution of the model in a one-dimensional periodic space. When the
number of regions is two or three, the homogeneous stationary solution is
stable under sufficiently high transport cost. On the other hand, when the
number of regions is a multiple of four, the homogeneous stationary solution is
unstable for any value of the transport cost.",Agglomeration triggered by the effect of the number of regions: A model in NEG with a quadratic subutility,2021-12-06 13:38:51,Kensuke Ohtake,"http://dx.doi.org/10.1007/s40505-022-00222-6, http://arxiv.org/abs/2112.02920v3, http://arxiv.org/pdf/2112.02920v3",econ.TH
34864,th,"We revisit the setting of fairly allocating indivisible items when agents
have different weights representing their entitlements. First, we propose a
parameterized family of relaxations for weighted envy-freeness and the same for
weighted proportionality; the parameters indicate whether smaller-weight or
larger-weight agents should be given a higher priority. We show that each
notion in these families can always be satisfied, but any two cannot
necessarily be fulfilled simultaneously. We then introduce an intuitive
weighted generalization of maximin share fairness and establish the optimal
approximation of it that can be guaranteed. Furthermore, we characterize the
implication relations between the various weighted fairness notions introduced
in this and prior work, and relate them to the lower and upper quota axioms
from apportionment.",Weighted Fairness Notions for Indivisible Items Revisited,2021-12-08 11:23:39,"Mithun Chakraborty, Erel Segal-Halevi, Warut Suksompong","http://arxiv.org/abs/2112.04166v1, http://arxiv.org/pdf/2112.04166v1",cs.GT
34865,th,"Cyber-security breaches inflict significant costs on organizations. Hence,
the development of an information-systems defense capability through
cyber-security investment is a prerequisite. The question of how to determine
the optimal amount to invest in cyber-security has been widely investigated in
the literature. In this respect, the Gordon-Loeb model and its extensions
received wide-scale acceptance. However, such models predominantly rely on
restrictive assumptions that are not adapted for analyzing dynamic aspects of
cyber-security investment. Yet, understanding such dynamic aspects is a key
feature for studying cyber-security investment in the context of a fast-paced
and continuously evolving technological landscape. We propose an extension of
the Gordon-Loeb model by considering a multi-period setting and relaxing the
assumption of a continuous security-breach probability function. Such theoretical
adaptations make it possible to capture dynamic aspects of cyber-security investment, such
as the advent of a disruptive technology and its investment consequences. Such
a proposed extension of the Gordon-Loeb model gives room for a hypothetical
decrease of the optimal level of cyber-security investment, due to a potential
technological shift. While we believe our framework should be generalizable
across the cyber-security milieu, we illustrate our approach in the context of
critical-infrastructure protection, where security-cost reductions related to
risk events are of paramount importance as potential losses reach unaffordable
proportions. Moreover, despite the fact that some technologies are considered
disruptive and thus promising for critical-infrastructure protection, their
effects on cyber-security investment have received little discussion.",Cyber-Security Investment in the Context of Disruptive Technologies: Extension of the Gordon-Loeb Model,2021-12-08 17:38:23,"Dimitri Percia David, Alain Mermoud, Sébastien Gillard","http://arxiv.org/abs/2112.04310v1, http://arxiv.org/pdf/2112.04310v1",cs.CR
34872,th,"An evolutionarily stable strategy (ESS) is an equilibrium strategy that is
immune to invasions by rare alternative (``mutant'') strategies. Unlike Nash
equilibria, ESS do not always exist in finite games. In this paper we address
the question of what happens when the size of the game increases: does an ESS
exist for ``almost every large'' game? Letting the entries in the $n\times n$
game matrix be independently randomly chosen according to a distribution $F$,
we study the number of ESS with support of size $2.$ In particular, we show
that, as $n\to \infty$, the probability of having such an ESS: (i) converges to
1 for distributions $F$ with ``exponential and faster decreasing tails'' (e.g.,
uniform, normal, exponential); and (ii) converges to $1-1/\sqrt{e}$ for
distributions $F$ with ``slower than exponential decreasing tails'' (e.g.,
lognormal, Pareto, Cauchy). Our results also imply that the expected number of
vertices of the convex hull of $n$ random points in the plane converges to
infinity for the distributions in (i), and to 4 for the distributions in (ii).","Evolutionarily stable strategies of random games, and the vertices of random polygons",2008-01-22 14:37:16,"Sergiu Hart, Yosef Rinott, Benjamin Weiss","http://dx.doi.org/10.1214/07-AAP455, http://arxiv.org/abs/0801.3353v1, http://arxiv.org/pdf/0801.3353v1",math.PR
34866,th,"We consider economic agents, agent's variables, agent's trades and deals with
other agents and agent's expectations as ground for theoretical description of
economic and financial processes. Macroeconomic and financial variables are
composed by agent's variables. In turn, sums of agent's trade values or volumes
determine evolution of agent's variables. In conclusion, agent's expectations
govern agent's trade decisions. We consider that trinity - agent's variables,
trades and expectations as simple bricks for theoretical description of
economics. We note models that describe variables determined by sums of market
trades during certain time interval {\Delta} as the first-order economic
theories. Most current economic models belong to the first-order economic
theories. However, we show that these models are insufficient for adequate
economic description. Trade decisions substantially depend on market price
forecasting. We show that reasonable predictions of market price volatility
equal descriptions of sums of squares of trade values and volumes during
{\Delta}. We call modeling variables composed by sums of squares of market
trades as the second-order economic theories. If forecast of price probability
uses 3-d price statistical moment and price skewness then it equals description
of sums of 3-d power of market trades - the third-order economic theory. Exact
prediction of market price probability equals description of sums of n-th power
of market trades for all n. That limits accuracy of price probability
forecasting and confines forecast validity of economic theories.",Theoretical Economics and the Second-Order Economic Theory. What is it?,2021-12-01 19:42:09,Victor Olkhov,"http://arxiv.org/abs/2112.04566v1, http://arxiv.org/pdf/2112.04566v1",econ.TH
34867,th,"The classic cake cutting problem concerns the fair allocation of a
heterogeneous resource among interested agents. In this paper, we study a
public goods variant of the problem, where instead of competing with one
another for the cake, the agents all share the same subset of the cake which
must be chosen subject to a length constraint. We focus on the design of
truthful and fair mechanisms in the presence of strategic agents who have
piecewise uniform utilities over the cake. On the one hand, we show that the
leximin solution is truthful and moreover maximizes an egalitarian welfare
measure among all truthful and position oblivious mechanisms. On the other
hand, we demonstrate that the maximum Nash welfare solution is truthful for two
agents but not in general. Our results assume that mechanisms can block each
agent from accessing parts that the agent does not claim to desire; we provide
an impossibility result when blocking is not allowed.",Truthful Cake Sharing,2021-12-10 19:04:42,"Xiaohui Bei, Xinhang Lu, Warut Suksompong","http://arxiv.org/abs/2112.05632v3, http://arxiv.org/pdf/2112.05632v3",cs.GT
34869,th,"We characterize the shape of spatial externalities in a continuous time and
space differential game with transboundary pollution. We posit a realistic
spatiotemporal law of motion for pollution (diffusion and advection), and
tackle spatiotemporal non-cooperative (and cooperative) differential games.
Precisely, we consider a circle partitioned into several states where a local
authority decides autonomously about its investment, production and depollution
strategies over time knowing that investment/production generates pollution,
and pollution is transboundary. The time horizon is infinite. We allow for a
rich set of geographic heterogeneities across states. We solve analytically the
induced non-cooperative differential game and characterize its long-term
spatial distributions. In particular, we prove that there exists a Markov
Perfect Equilibrium, unique within the class of affine feedbacks. We further
provide a full exploration of the free-riding problem and the associated
border effect.",A dynamic theory of spatial externalities,2021-12-20 18:02:33,"Raouf Boucekkine, Giorgio Fabbri, Salvatore Federico, Fausto Gozzi","http://arxiv.org/abs/2112.10584v1, http://arxiv.org/pdf/2112.10584v1",econ.TH
34870,th,"This paper studies the Random Utility Model (RUM) in a repeated stochastic
choice situation, in which the decision maker is imperfectly informed about the
payoffs of each available alternative. We develop a gradient-based learning
algorithm by embedding the RUM into an online decision problem. We show that a
large class of RUMs are Hannan consistent (Hannan, 1957); that is, the
average difference between the expected payoffs generated by a RUM and that of
the best fixed policy in hindsight goes to zero as the number of periods
increases. In addition, we show that our gradient-based algorithm is equivalent
to the Follow the Regularized Leader (FTRL) algorithm, which is widely used in
the machine learning literature to model learning in repeated stochastic choice
problems. Thus, we provide an economically grounded optimization framework for
the FTRL algorithm. Finally, we apply our framework to study recency bias,
no-regret learning in normal form games, and prediction markets.",Learning in Random Utility Models Via Online Decision Problems,2021-12-21 08:32:02,Emerson Melo,"http://arxiv.org/abs/2112.10993v3, http://arxiv.org/pdf/2112.10993v3",econ.TH
34892,th,"We consider tit-for-tat dynamics in production markets, where there is a set
of $n$ players connected via a weighted graph. Each player $i$ can produce an
eponymous good using its linear production function, given as input various
amounts of goods in the system. In the tit-for-tat dynamic, each player $i$
shares its good with its neighbors in fractions proportional to how much they
helped player $i$'s production in the last round. This dynamic has been studied
before in exchange markets by Wu and Zhang.
  We analyze the long term behavior of the dynamic and characterize which
players grow in the long term as a function of the graph structure. At a high
level, we find that a player grows in the long term if and only if it has a
good self loop (i.e. is productive alone) or works well with at least one other
player.
  We also consider a generalized damped update, where the players may update
their strategies with different speeds, and obtain a lower bound on their rate
of growth by finding a function that gives insight into the behavior of the
dynamical system.",Tit-for-Tat Dynamics and Market Volatility,2019-11-09 10:07:58,Simina Brânzei,"http://arxiv.org/abs/1911.03629v2, http://arxiv.org/pdf/1911.03629v2",cs.GT
34873,th,"We introduce a new solution concept, called periodicity, for selecting
optimal strategies in strategic form games. This periodicity solution concept
yields new insight into non-trivial games. In mixed strategy strategic form
games, periodic solutions yield values for the utility function of each player
that are equal to the Nash equilibrium ones. In contrast to the Nash
strategies, here the payoffs of each player are robust against what the
opponent plays. Sometimes, periodicity strategies yield higher utilities, and
sometimes the Nash strategies do, but often the utilities of these two
strategies coincide. We formally define and study periodic strategies in two
player perfect information strategic form games with pure strategies and we
prove that every non-trivial finite game has at least one periodic strategy,
where non-trivial means non-degenerate payoffs. In some classes of games where
mixed strategies are used, we identify quantitative features. Particularly
interesting are the implications for collective action games, since there the
collective action strategy can be incorporated in a purely non-cooperative
context. Moreover, we address the periodicity issue when the players have a
continuum set of strategies available.",Periodic Strategies: A New Solution Concept and an Algorithm for NonTrivial Strategic Form Games,2013-07-08 14:06:16,"V. K. Oikonomou, J. Jost","http://dx.doi.org/10.1142/S0219525917500096, http://arxiv.org/abs/1307.2035v4, http://arxiv.org/pdf/1307.2035v4",cs.GT
34874,th,"In the presence of persistent payoff heterogeneity, the evolution of the
aggregate strategy depends heavily on the underlying strategy composition under
many evolutionary dynamics, while the aggregate dynamic under the standard BRD
reduces to a homogenized smooth BRD, where persistent payoff heterogeneity
averages to homogeneous transitory payoff shocks. In this paper, we consider
deterministic evolutionary dynamics in a heterogeneous population and develop a
stronger concept of local stability by imposing robustness to persistent payoff
heterogeneity. It is known that nonaggregability holds generically if the
switching rate in a given evolutionary dynamic correlates with the payoff gain
from a switch. To parameterize the payoff sensitivity of an evolutionary
dynamic, we propose to use tempered best response dynamics with bounded support
of switching costs.",Distributional stability and deterministic equilibrium selection under heterogeneous evolutionary dynamics,2018-05-13 17:33:15,Dai Zusai,"http://arxiv.org/abs/1805.04895v1, http://arxiv.org/pdf/1805.04895v1",cs.GT
34875,th,"A general framework of evolutionary dynamics under heterogeneous populations
is presented. The framework allows for continuously many types of heterogeneous
agents, for heterogeneity both in payoff functions and in revision protocols, and
for the entire joint distribution of strategies and types to influence the payoffs
of agents. We clarify regularity conditions for the unique existence of a
solution trajectory and for the existence of equilibrium. We confirm that
equilibrium stationarity in general and equilibrium stability in potential
games are extended from the homogeneous setting to the heterogeneous setting.
In particular, a wide class of admissible dynamics share the same set of
locally stable equilibria in a potential game through local maximization of the
potential.",Evolutionary dynamics in heterogeneous populations: a general framework for an arbitrary type distribution,2018-05-13 18:00:06,Dai Zusai,"http://dx.doi.org/10.1007/s00182-023-00867-y, http://arxiv.org/abs/1805.04897v2, http://arxiv.org/pdf/1805.04897v2",cs.GT
34876,th,"May's Theorem (1952), a celebrated result in social choice, provides the
foundation for majority rule. May's crucial assumption of symmetry, often
thought of as a procedural equity requirement, is violated by many choice
procedures that grant voters identical roles. We show that a weakening of May's
symmetry assumption allows for a far richer set of rules that still treat
voters equally. We show that such rules can have minimal winning coalitions
comprising a vanishing fraction of the population, but not less than the square
root of the population size. Methodologically, we introduce techniques from
group theory and illustrate their usefulness for the analysis of social choice
questions.",Equitable voting rules,2018-11-03 17:37:52,"Laurent Bartholdi, Wade Hann-Caruthers, Maya Josyula, Omer Tamuz, Leeat Yariv","http://arxiv.org/abs/1811.01227v4, http://arxiv.org/pdf/1811.01227v4",econ.TH
34877,th,"Liquid democracy allows members of an electorate to either directly vote over
alternatives, or delegate their voting rights to someone they trust. Most of
the liquid democracy literature and implementations allow each voter to
nominate only one delegate per election. However, if that delegate abstains,
the voting rights assigned to her are left unused. To minimise the number of
unused delegations, it has been suggested that each voter should declare a
personal ranking over voters she trusts. In this paper, we show that even if
personal rankings over voters are declared, the standard delegation method of
liquid democracy remains problematic. More specifically, we show that when
personal rankings over voters are declared, it could be undesirable to receive
delegated voting rights, which is contrary to what liquid democracy
fundamentally relies on. To solve this issue, we propose a new method to
delegate voting rights in an election, called breadth-first delegation.
Additionally, the proposed method prioritises assigning voting rights to
individuals closely connected to the voters who delegate.",Incentivising Participation in Liquid Democracy with Breadth-First Delegation,2018-11-09 02:09:15,"Grammateia Kotsialou, Luke Riley","http://arxiv.org/abs/1811.03710v2, http://arxiv.org/pdf/1811.03710v2",econ.TH
34878,th,"We introduce a set-valued solution concept, M equilibrium, to capture
empirical regularities from over half a century of game-theory experiments. We
show M equilibrium serves as a meta theory for various models that hitherto
were considered unrelated. M equilibrium is empirically robust and, despite
being set-valued, falsifiable. We report results from a series of experiments
comparing M equilibrium to leading behavioral-game-theory models and
demonstrate its virtues in predicting observed choices and stated beliefs. Data
from experimental games with a unique pure-strategy Nash equilibrium and
multiple M equilibria exhibit coordination problems that could not be
anticipated through the lens of existing models.",M Equilibrium: A theory of beliefs and choices in games,2018-11-13 10:32:43,"Jacob K. Goeree, Philippos Louis","http://arxiv.org/abs/1811.05138v2, http://arxiv.org/pdf/1811.05138v2",econ.TH
34879,th,"The purpose of this paper is to study the time average behavior of Markov
chains with transition probabilities being kernels of completely continuous
operators, and therefore to provide a sufficient condition for a class of
Markov chains that are frequently used in dynamic economic models to be
ergodic. The paper reviews the time average convergence of the quasi-weakly
complete continuity Markov operators to a unique projection operator. Also, it
shows that a further assumption of quasi-strongly complete continuity reduces
the dependence of the unique invariant measure on its corresponding initial
distribution through ergodic decomposition, and therefore guarantees the Markov
chain to be ergodic up to multiplication of constant coefficients. Moreover, a
sufficient and practical condition is provided for the ergodicity in economic
state Markov chains that are induced by exogenous random shocks and a
correspondence between the exogenous space and the state space.",Operator-Theoretical Treatment of Ergodic Theorem and Its Application to Dynamic Models in Economics,2018-11-15 02:03:43,Shizhou Xu,"http://arxiv.org/abs/1811.06107v1, http://arxiv.org/pdf/1811.06107v1",math.PR
34880,th,"The attribution problem, that is the problem of attributing a model's
prediction to its base features, is well-studied. We extend the notion of
attribution to also apply to feature interactions.
  The Shapley value is a commonly used method to attribute a model's prediction
to its base features. We propose a generalization of the Shapley value called
Shapley-Taylor index that attributes the model's prediction to interactions of
subsets of features up to some size k. The method is analogous to how the
truncated Taylor Series decomposes the function value at a certain point using
its derivatives at a different point. In fact, we show that the Shapley Taylor
index is equal to the Taylor Series of the multilinear extension of the
set-theoretic behavior of the model.
  We axiomatize this method using the standard Shapley axioms -- linearity,
dummy, symmetry and efficiency -- and an additional axiom that we call the
interaction distribution axiom. This new axiom explicitly characterizes how
interactions are distributed for a class of functions that model pure
interaction.
  We contrast the Shapley-Taylor index against the previously proposed Shapley
Interaction index (cf. [9]) from the cooperative game theory literature. We
also apply the Shapley Taylor index to three models and identify interesting
qualitative insights.",The Shapley Taylor Interaction Index,2019-02-15 01:09:49,"Kedar Dhamdhere, Ashish Agarwal, Mukund Sundararajan","http://arxiv.org/abs/1902.05622v2, http://arxiv.org/pdf/1902.05622v2",cs.GT
34881,th,"We prove an existence result for the principal-agent problem with adverse
selection under general assumptions on preferences and allocation spaces.
Instead of assuming that the allocation space is finite-dimensional or compact,
we consider a more general coercivity condition which takes into account the
principal's cost and the agents' preferences. Our existence proof is simple and
flexible enough to adapt to partial participation models as well as to the case
of type-dependent budget constraints.",Existence of solutions to principal-agent problems with adverse selection under minimal assumptions,2019-02-18 16:01:37,"Guillaume Carlier, Kelvin Shuangjian Zhang","http://arxiv.org/abs/1902.06552v2, http://arxiv.org/pdf/1902.06552v2",math.OC
34882,th,"We describe our experience with designing and running a matching market for
the Israeli ""Mechinot"" gap-year programs. The main conceptual challenge in the
design of this market was the rich set of diversity considerations, which
necessitated the development of an appropriate preference-specification
language along with corresponding choice-function semantics, which we also
theoretically analyze. Our contribution extends the existing toolbox for
two-sided matching with soft constraints. This market was run for the first
time in January 2018 and matched 1,607 candidates (out of a total of 3,120
candidates) to 35 different programs, has been run twice more since, and has
been adopted by the Joint Council of the ""Mechinot"" gap-year programs for the
foreseeable future.","Matching for the Israeli ""Mechinot"" Gap-Year Programs: Handling Rich Diversity Requirements",2019-05-01 19:28:22,"Yannai A. Gonczarowski, Lior Kovalio, Noam Nisan, Assaf Romm","http://arxiv.org/abs/1905.00364v2, http://arxiv.org/pdf/1905.00364v2",cs.GT
34883,th,"The declining price anomaly states that the price weakly decreases when
multiple copies of an item are sold sequentially over time. The anomaly has
been observed in a plethora of practical applications. On the theoretical side,
Gale and Stegeman proved that the anomaly is guaranteed to hold in full
information sequential auctions with exactly two buyers. We prove that the
declining price anomaly is not guaranteed in full information sequential
auctions with three or more buyers. This result applies to both first-price and
second-price sequential auctions. Moreover, it applies regardless of the
tie-breaking rule used to generate equilibria in these sequential auctions. To
prove this result we provide a refined treatment of subgame perfect equilibria
that survive the iterative deletion of weakly dominated strategies and use this
framework to experimentally generate a very large number of random sequential
auction instances. In particular, our experiments produce an instance with
three bidders and eight items that, for a specific tie-breaking rule, induces a
non-monotonic price trajectory. Theoretical analyses are then applied to show
that this instance can be used to prove that for every possible tie-breaking
rule there is a sequential auction on which it induces a non-monotonic price
trajectory. On the other hand, our experiments show that non-monotonic price
trajectories are extremely rare. In over six million experiments only a
0.000183 proportion of the instances violated the declining price anomaly.",The Declining Price Anomaly is not Universal in Multi-Buyer Sequential Auctions (but almost is),2019-05-02 20:01:58,"Vishnu V. Narayan, Enguerrand Prebet, Adrian Vetta","http://arxiv.org/abs/1905.00853v1, http://arxiv.org/pdf/1905.00853v1",cs.GT
34884,th,"We study the implications of endogenous pricing for learning and welfare in
the classic herding model. When prices are determined exogenously, it is known
that learning occurs if and only if signals are unbounded. By contrast, we show
that learning can occur when signals are bounded as long as non-conformism
among consumers is scarce. More formally, learning happens if and only if
signals exhibit the vanishing likelihood property introduced below. We discuss
the implications of our results for potential market failure in the context of
Schumpeterian growth with uncertainty over the value of innovations.",The Implications of Pricing on Social Learning,2019-05-09 09:26:14,"Itai Arieli, Moran Koren, Rann Smorodinsky","http://dx.doi.org/10.1145/3328526.3329554, http://arxiv.org/abs/1905.03452v1, http://arxiv.org/pdf/1905.03452v1",econ.TH
34885,th,"We study the mathematical and economic structure of the Kolkata (k) index of
income inequality. We show that the k-index always exists and is a unique fixed
point of the complementary Lorenz function, where the Lorenz function itself
gives the fraction of cumulative income possessed by the cumulative fraction of
population (when arranged from poorer to richer). We show that the k-index
generalizes Pareto's 80/20 rule. Although the k and Pietra indices both split
the society into two groups, we show that k-index is a more intensive measure
for the poor-rich split. We compare the normalized k-index with the Gini
coefficient and the Pietra index and discuss when they coincide. We establish
that for any income distribution the value of Gini coefficient is no less than
that of the Pietra index and the value of the Pietra index is no less than that
of the normalized k-index. While the Gini coefficient and the Pietra index are
affected by transfers exclusively among the rich or among the poor, the k-index
is only affected by transfers across the two groups.",On the Kolkata index as a measure of income inequality,2019-04-30 18:30:24,"Suchismita Banerjee, Bikas K. Chakrabarti, Manipushpak Mitra, Suresh Mutuswami","http://dx.doi.org/10.1016/j.physa.2019.123178, http://arxiv.org/abs/1905.03615v2, http://arxiv.org/pdf/1905.03615v2",econ.TH
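The fixed-point characterization in the abstract above lends itself to a short numerical illustration. The sketch below (my own, not from the paper) estimates the k-index of an income sample as the point where the empirical Lorenz curve L satisfies L(k) = 1 - k; the Pareto sample, the interpolation scheme, and the bisection tolerance are all illustrative choices.

```python
import numpy as np

def kolkata_index(incomes, tol=1e-9):
    """Approximate the Kolkata k-index of a sample of incomes.

    The k-index is the fixed point of the complementary Lorenz function:
    the fraction k of the population (ordered from poorer to richer) that
    holds exactly the fraction 1 - k of total income, i.e. L(k) = 1 - k.
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    cum = np.concatenate(([0.0], np.cumsum(x))) / x.sum()   # Lorenz ordinates
    pop = np.linspace(0.0, 1.0, len(x) + 1)                  # population shares

    def lorenz(p):
        return np.interp(p, pop, cum)  # piecewise-linear empirical Lorenz curve

    lo, hi = 0.0, 1.0
    while hi - lo > tol:               # bisection on g(k) = L(k) - (1 - k), increasing in k
        mid = 0.5 * (lo + hi)
        if lorenz(mid) - (1.0 - mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.pareto(2.0, size=100_000) + 1.0   # heavy-tailed incomes (illustrative)
    print(f"k-index ≈ {kolkata_index(sample):.3f}")
```

By construction the estimate lies in [1/2, 1], with 1/2 corresponding to perfect equality, which gives a quick sanity check on the routine.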
34901,th,"We propose and study a strategic model of hiding in a network, where the
network designer chooses the links and his position in the network facing the
seeker who inspects and disrupts the network. We characterize optimal networks
for the hider, as well as equilibrium hiding and seeking strategies on these
networks. We show that optimal networks are either equivalent to cycles or
variants of core-periphery networks where every node in the periphery is
connected to a single node in the core.",A game of hide and seek in networks,2020-01-09 20:49:30,"Francis Bloch, Bhaskar Dutta, Marcin Dziubinski","http://arxiv.org/abs/2001.03132v1, http://arxiv.org/pdf/2001.03132v1",econ.TH
34887,th,"Given a purely atomic probability measure with support on n points, P, any
mean-preserving contraction (mpc) of P, Q, with support on m > n points is a
mixture of mpcs of P, each with support on at most n points. We illustrate an
application of this result in economics.",Mixtures of Mean-Preserving Contractions,2019-05-13 20:32:00,"Joseph Whitmeyer, Mark Whitmeyer","http://arxiv.org/abs/1905.05157v3, http://arxiv.org/pdf/1905.05157v3",econ.TH
34888,th,"Consider an application sold on an on-line platform, with the app paying a
commission fee and, henceforth, offered for sale on the platform. The ability
to sell the application depends on its customer ranking. Therefore, developers
may have an incentive to promote their applications ranking in a dishonest
manner. One way to do this is by faking positive customer reviews. However, the
platform is able to detect dishonest behavior (cheating) with some probability
and then proceeds to decide whether to ban the application. We provide an
analysis and find the equilibrium behaviors of both the application developers
(cheat or not) and the platform (setting of the commission fee). We provide
initial insights into how the platform's detection accuracy affects the
incentives of the app developers.",Cheating in Ranking Systems,2019-05-22 16:10:24,"Lihi Dery, Dror Hermel, Artyom Jelnov","http://arxiv.org/abs/1905.09116v1, http://arxiv.org/pdf/1905.09116v1",econ.TH
34890,th,"In the setting where participants are asked multiple similar possibly
subjective multi-choice questions (e.g. Do you like Panda Express? Y/N; do you
like Chick-fil-A? Y/N), a series of peer prediction mechanisms are designed to
incentivize honest reports and some of them achieve dominant truthfulness:
truth-telling is a dominant strategy and strictly dominates any other
"non-permutation strategy" under some mild conditions. However, a major issue
hinders the practical usage of those mechanisms: they require the participants
to perform an infinite number of tasks. When the participants perform a finite
number of tasks, these mechanisms only achieve approximate dominant
truthfulness. Whether a dominantly truthful multi-task peer prediction
mechanism that only requires a finite number of tasks exists remains an open
question, and the answer may well be negative, even with full prior knowledge.
  This paper answers this open question by proposing a new mechanism,
Determinant based Mutual Information Mechanism (DMI-Mechanism), that is
dominantly truthful when the number of tasks is at least 2C and the number of
participants is at least 2. C is the number of choices for each question (C=2
for binary-choice questions). In addition to incentivizing honest reports,
DMI-Mechanism can also be transferred into an information evaluation rule that
identifies high-quality information without verification when there are at
least 3 participants. To the best of our knowledge, DMI-Mechanism is the first
dominantly truthful mechanism that works for a finite number of tasks, let
alone a small constant number of tasks.",Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks,2019-11-01 12:15:46,Yuqing Kong,"http://arxiv.org/abs/1911.00272v1, http://arxiv.org/pdf/1911.00272v1",cs.GT
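As a rough sketch of the determinant-based construction named above (my reading of the abstract, not the paper's exact specification): pair two agents, split their shared tasks into two disjoint batches of at least C tasks each, build the C-by-C joint answer-count matrix on each batch, and score the pair by the product of the two determinants. Normalization, pairing, and payment details of the actual DMI-Mechanism follow the paper.

```python
import numpy as np
from itertools import combinations

def dmi_score(reports_i, reports_j, num_choices):
    """Pairwise determinant-based score for two agents' reports (a sketch).

    Splits the shared tasks into two disjoint batches, builds the C x C
    joint answer-count matrix on each batch, and returns the product of
    the two determinants.
    """
    n = len(reports_i)
    assert n >= 2 * num_choices, "needs at least 2C shared tasks"
    half = n // 2
    dets = []
    for batch in (slice(0, half), slice(half, n)):
        m = np.zeros((num_choices, num_choices))
        for a_i, a_j in zip(reports_i[batch], reports_j[batch]):
            m[a_i, a_j] += 1
        dets.append(np.linalg.det(m))
    return dets[0] * dets[1]

def dmi_mechanism(all_reports, num_choices):
    """Pay each agent the average pairwise score against the other agents."""
    n_agents = len(all_reports)
    pay = np.zeros(n_agents)
    for i, j in combinations(range(n_agents), 2):
        s = dmi_score(all_reports[i], all_reports[j], num_choices)
        pay[i] += s / (n_agents - 1)
        pay[j] += s / (n_agents - 1)
    return pay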
34908,th,"Many facts are learned through the intermediation of individuals with special
access to information, such as law enforcement officers, officials with a
security clearance, or experts with specific knowledge. This paper considers
whether societies can learn about such facts when information is cheap to
manipulate, produced sequentially, and these individuals are devoid of ethical
motive. The answer depends on an ""information attrition"" condition pertaining
to the amount of evidence available which distinguishes, for example, between
reproducible scientific evidence and the evidence generated in a crime.
Applications to institution enforcement, social cohesion, scientific progress,
and historical revisionism are discussed.",Can Society Function Without Ethical Agents? An Informational Perspective,2020-03-11 05:08:44,Bruno Strulovici,"http://arxiv.org/abs/2003.05441v1, http://arxiv.org/pdf/2003.05441v1",cs.GT
34894,th,"We study a model of innovation with a large number of firms that create new
technologies by combining several discrete ideas. These ideas are created via
private investment and spread between firms. Firms face a choice between
secrecy, which protects existing intellectual property, and openness, which
facilitates learning from others. Their decisions determine interaction rates
between firms, and these interaction rates enter our model as link
probabilities in a learning network. Higher interaction rates impose both
positive and negative externalities, as there is more learning but also more
competition. We show that the equilibrium learning network is at a critical
threshold between sparse and dense networks. At equilibrium, the positive
externality from interaction dominates: the innovation rate and welfare would
be dramatically higher if the network were denser. So there are large returns
to increasing interaction rates above the critical threshold. Nevertheless,
several natural types of interventions fail to move the equilibrium away from
criticality. One effective policy solution is to introduce informational
intermediaries, such as public innovators who do not have incentives to be
secretive. These intermediaries can facilitate a high-innovation equilibrium by
transmitting ideas from one private firm to another.",Innovation and Strategic Network Formation,2019-11-16 00:01:31,Krishna Dasaratha,"http://arxiv.org/abs/1911.06872v4, http://arxiv.org/pdf/1911.06872v4",econ.TH
34895,th,"When network users are satisficing decision-makers, the resulting traffic
pattern attains a satisficing user equilibrium, which may deviate from the
(perfectly rational) user equilibrium. In a satisficing user equilibrium
traffic pattern, the total system travel time can be worse than in the case of
the PRUE. We show how bad the worst-case satisficing user equilibrium traffic
pattern can be, compared to the perfectly rational user equilibrium. We call
the ratio between the total system travel times of the two traffic patterns the
price of satisficing, for which we provide an analytical bound. We compare the
analytical bound with numerical bounds for several transportation networks.",On the Price of Satisficing in Network User Equilibria,2019-11-18 23:27:04,"Mahdi Takalloo, Changhyun Kwon","http://arxiv.org/abs/1911.07914v1, http://arxiv.org/pdf/1911.07914v1",cs.GT
34896,th,"In two-sided markets, Myerson and Satterthwaite's impossibility theorem
states that one can not maximize the gain-from-trade while also satisfying
truthfulness, individual-rationality and no deficit. Attempts have been made to
circumvent Myerson and Satterthwaite's result by attaining
approximately-maximum gain-from-trade: the double-sided auctions of McAfee
(1992) is truthful and has no deficit, and the one by Segal-Halevi et al.
(2016) additionally has no surplus --- it is strongly-budget-balanced. They
consider two categories of agents --- buyers and sellers, where each trade set
is composed of a single buyer and a single seller. The practical complexity of
applications such as supply chains requires one to look beyond two-sided markets.
Common requirements are for: buyers trading with multiple sellers of different
or identical items, buyers trading with sellers through transporters and
mediators, and sellers trading with multiple buyers. We attempt to address
these settings. We generalize Segal-Halevi et al. (2016)'s
strongly-budget-balanced double-sided auction setting to a multilateral market
where each trade set is composed of any number of agent categories. Our
generalization refines the notion of competition in multi-sided auctions by
introducing the concepts of external competition and trade reduction. We also
show an obviously-truthful implementation of our auction using multiple
ascending prices.",Strongly Budget Balanced Auctions for Multi-Sided Markets,2019-11-19 07:50:39,"Rica Gonen, Erel Segal-Halevi","http://arxiv.org/abs/1911.08094v2, http://arxiv.org/pdf/1911.08094v2",cs.GT
34897,th,"Auctions via social network, pioneered by Li et al. (2017), have been
attracting considerable attention in the literature of mechanism design for
auctions. However, no known mechanism has satisfied strategy-proofness,
non-deficit, non-wastefulness, and individual rationality for the multi-unit
unit-demand auction, except for some naive ones. In this paper, we first
propose a mechanism that satisfies all the above properties. We then make a
comprehensive comparison with two naive mechanisms, showing that the proposed
mechanism dominates them in social surplus, seller's revenue, and incentive of
buyers for truth-telling. We also analyze the characteristics of the social
surplus and the revenue achieved by the proposed mechanism, including the
constant approximability of the worst-case efficiency loss and the complexity
of optimizing revenue from the seller's perspective.",Strategy-Proof and Non-Wasteful Multi-Unit Auction via Social Network,2019-11-20 13:40:57,"Takehiro Kawasaki, Nathanael Barrot, Seiji Takanashi, Taiki Todo, Makoto Yokoo","http://arxiv.org/abs/1911.08809v1, http://arxiv.org/pdf/1911.08809v1",cs.GT
34898,th,"Coalitional manipulation in voting is considered to be any scenario in which
a group of voters decide to misrepresent their vote in order to secure an
outcome they all prefer to the first outcome of the election when they vote
honestly. The present paper is devoted to study coalitional manipulability
within the class of scoring voting rules. For any such rule and any number of
alternatives, we introduce a new approach that allows us to characterize all the
outcomes that can be manipulated by a coalition of voters. This makes it
possible to find the probability of manipulable outcomes for some
well-studied scoring voting rules in the case of a small number of alternatives and
large electorates, under a well-known assumption on individual preference
profiles.",Manipulable outcomes within the class of scoring voting rules,2019-11-02 01:13:35,"Mostapha Diss, Boris Tsvelikhovskiy","http://arxiv.org/abs/1911.09173v3, http://arxiv.org/pdf/1911.09173v3",econ.TH
34899,th,"We study two influential voting rules proposed in the 1890s by Phragm\'en and
Thiele, which elect a committee or parliament of k candidates which
proportionally represents the voters. Voters provide their preferences by
approving an arbitrary number of candidates. Previous work has proposed
proportionality axioms satisfied by Thiele's rule (now known as Proportional
Approval Voting, PAV) but not by Phragm\'en's rule. By proposing two new
proportionality axioms (laminar proportionality and priceability) satisfied by
Phragm\'en but not Thiele, we show that the two rules achieve two distinct
forms of proportional representation. Phragm\'en's rule ensures that all voters
have a similar amount of influence on the committee, and Thiele's rule ensures
a fair utility distribution.
  Thiele's rule is a welfarist voting rule (one that maximizes a function of
voter utilities). We show that no welfarist rule can satisfy our new axioms,
and we prove that no such rule can satisfy the core. Conversely, some welfarist
fairness properties cannot be guaranteed by Phragm\'en-type rules. This
formalizes the difference between the two types of proportionality. We then
introduce an attractive committee rule, the Method of Equal Shares, which
satisfies a property intermediate between the core and extended justified
representation (EJR). It satisfies laminar proportionality, priceability, and
is computable in polynomial time. We show that our new rule provides a
logarithmic approximation to the core. On the other hand, PAV provides a
factor-2 approximation to the core, and this factor is optimal for rules that
are fair in the sense of the Pigou--Dalton principle.",Proportionality and the Limits of Welfarism,2019-11-26 21:31:41,"Dominik Peters, Piotr Skowron","http://arxiv.org/abs/1911.11747v3, http://arxiv.org/pdf/1911.11747v3",cs.GT
34902,th,"Diffusion auction is a new model in auction design. It can incentivize the
buyers who have already joined in the auction to further diffuse the sale
information to others via social relations, whereby both the seller's revenue
and the social welfare can be improved. Diffusion auctions are essentially
non-typical multidimensional mechanism design problems in which agents' social
relations are intricately intertwined with their bids. In such auctions,
incentive-compatibility (IC) means it is best for every agent to honestly
report her valuation and fully diffuse the sale information to all her
neighbors. Existing work identified some specific mechanisms for diffusion
auctions, while a general theory characterizing all incentive-compatible
diffusion auctions is still missing. In this work, we identify a sufficient and
necessary condition for all dominant-strategy incentive-compatible (DSIC)
diffusion auctions. We formulate the monotonic allocation policies in such
multidimensional problems and show that any monotonic allocation policy can be
implemented in a DSIC diffusion auction mechanism. Moreover, given any
monotonic allocation policy, we obtain the optimal payment policy to maximize
the seller's revenue.",Incentive-Compatible Diffusion Auctions,2020-01-20 08:11:04,"Bin Li, Dong Hao, Dengji Zhao","http://arxiv.org/abs/2001.06975v2, http://arxiv.org/pdf/2001.06975v2",cs.GT
34903,th,"In this work we present a strategic network formation model predicting the
emergence of multigroup structures. Individuals decide to form or remove links
based on the benefits and costs those connections carry; we focus on bilateral
consent for link formation. An exogenous system specifies the frequency of
coordination issues arising among the groups. We are interested in structures
that arise to resolve coordination issues and, specifically, structures in
which groups are linked through bridging, redundant, and co-membership
interconnections. We characterize the conditions under which certain structures
are stable and study their efficiency as well as the convergence of formation
dynamics.",Stable and Efficient Structures in Multigroup Network Formation,2020-01-29 01:51:45,"Shadi Mohagheghi, Jingying Ma, Francesco Bullo","http://arxiv.org/abs/2001.10627v1, http://arxiv.org/pdf/2001.10627v1",cs.SI
34904,th,"This paper studies a spatial competition game between two firms that sell a
homogeneous good at some pre-determined fixed price. A population of consumers
is spread out over the real line, and the two firms simultaneously choose
location in this same space. When buying from one of the firms, consumers incur
the fixed price plus some transportation costs, which are increasing with their
distance to the firm. Under the assumption that each consumer is ready to buy
one unit of the good whatever the locations of the firms, firms converge to the
median location: there is ""minimal differentiation"". In this article, we relax
this assumption and assume that there is an upper limit to the distance a
consumer is ready to cover to buy the good. We show that the game always has at
least one Nash equilibrium in pure strategies. Under this more general
assumption, the ""minimal differentiation principle"" no longer holds in general.
At equilibrium, firms choose ""minimal"", ""intermediate"" or ""full""
differentiation, depending on this critical distance a consumer is ready to
cover and on the shape of the distribution of consumers' locations.",Spatial competition with unit-demand functions,2020-01-30 19:04:00,"Gaëtan Fournier, Karine Van Der Straeten, Jörgen Weibull","http://arxiv.org/abs/2001.11422v1, http://arxiv.org/pdf/2001.11422v1",math.OC
34905,th,"We show that economic conclusions derived from Bulow and Roberts (1989) for
linear utility models approximately extend to non-linear utility models.
Specifically, we quantify the extent to which agents with non-linear utilities
resemble agents with linear utilities, and we show that the approximation of
mechanisms for agents with linear utilities approximately extend for agents
with non-linear utilities.
  We illustrate the framework for the objectives of revenue and welfare on
non-linear models that include agents with budget constraints, agents with risk
aversion, and agents with endogenous valuations. We derive bounds on how much
these models resemble the linear utility model and combine these bounds with
well-studied approximation results for linear utility models. We conclude that
simple mechanisms are approximately optimal for these non-linear agent models.",Simple Mechanisms for Agents with Non-linear Utilities,2020-03-01 21:32:21,"Yiding Feng, Jason Hartline, Yingkai Li","http://arxiv.org/abs/2003.00545v2, http://arxiv.org/pdf/2003.00545v2",cs.GT
34906,th,"Discretely-constrained Nash-Cournot games have attracted attention as they
arise in various competitive energy production settings in which players must
make one or more discrete decisions. Gabriel et al. [""Solving
discretely-constrained Nash-Cournot games with an application to power
markets."" Networks and Spatial Economics 13(3), 2013] claim that the set of
equilibria to a discretely-constrained Nash-Cournot game coincides with the set
of solutions to a corresponding discretely-constrained mixed complementarity
problem. We show that this claim is false.",A Note on Solving Discretely-Constrained Nash-Cournot Games via Complementarity,2020-02-29 06:30:50,"Dimitri J. Papageorgiou, Francisco Trespalacios, Stuart Harwood","http://dx.doi.org/10.1007/s11067-021-09524-x, http://arxiv.org/abs/2003.01536v1, http://arxiv.org/pdf/2003.01536v1",econ.TH
34907,th,"Network utility maximization (NUM) is a general framework for designing
distributed optimization algorithms for large-scale networks. An economic
challenge arises in the presence of strategic agents' private information.
Existing studies proposed (economic) mechanisms but largely neglected the issue
of large-scale implementation. Specifically, they require certain modifications
to the deployed algorithms, which may bring significant costs. To tackle
this challenge, we present the large-scale Vickrey-Clarke-Groves (VCG) Mechanism
for NUM, with a simpler payment rule characterized by the shadow prices. The
Large-Scale VCG Mechanism maximizes the network utility and achieves individual
rationality and budget balance. With infinitely many agents, agents' truthful
reports of their types are their dominant strategies; for the finite case, each
agent's incentive to misreport converges quadratically to zero. For practical
implementation, we introduce a modified mechanism that possesses an additional
important technical property, superimposability, which allows it to be
built upon any (potentially distributed) algorithm that optimally solves the
NUM problem and ensures that all agents obey the algorithm. We then extend this
idea to the dynamic case, when agents' types are dynamically evolving as a
controlled Markov process. In this case, the mechanism leads to
incentive-compatible actions of agents in each time slot.",Mechanism Design for Large Scale Network Utility Maximization,2020-03-09 20:06:25,"Meng Zhang, Deepanshu Vasal","http://arxiv.org/abs/2003.04263v2, http://arxiv.org/pdf/2003.04263v2",cs.GT
34909,th,"In this paper, we study the egalitarian solution for games with discrete side
payment, where the characteristic function is integer-valued and payoffs of
players are integral vectors. The egalitarian solution, introduced by Dutta and
Ray in 1989, is a solution concept for transferable utility cooperative games
in characteristic form, which combines commitment for egalitarianism and
promotion of individual interests in a consistent manner. We first point out
that the nice properties of the egalitarian solution (in the continuous case)
do not extend to games with discrete side payment. Then we show that the Lorenz
stable set, which may be regarded as a variant of the egalitarian solution, has
nice properties such as the Davis and Maschler reduced game property and the
converse reduced game property. For the proofs we utilize recent results in
discrete convex analysis on decreasing minimization on an M-convex set
investigated by Frank and Murota.",Egalitarian solution for games with discrete side payment,2020-03-23 05:46:11,Takafumi Otsuka,"http://arxiv.org/abs/2003.10059v1, http://arxiv.org/pdf/2003.10059v1",cs.GT
34910,th,"In recent years, many scholars praised the seemingly endless possibilities of
using machine learning (ML) techniques in and for agent-based simulation models
(ABM). To get a more comprehensive understanding of these possibilities, we
conduct a systematic literature review (SLR) and classify the literature on the
application of ML in and for ABM according to a theoretically derived
classification scheme. We do so to investigate how exactly machine learning has
been utilized in and for agent-based models so far and to critically discuss
the combination of these two promising methods. We find that, indeed, there is
a broad range of possible applications of ML to support and complement ABMs in
many different ways, already applied in many different disciplines. We see
that, so far, ML is mainly used in ABM for two broad cases: First, the
modelling of adaptive agents equipped with experience learning and, second, the
analysis of outcomes produced by a given ABM. While these are the most
frequent, there also exist a variety of many more interesting applications.
This being the case, researchers should dive deeper into the analysis of when,
how, and which kinds of ML techniques can support ABM, e.g. by conducting a more
in-depth analysis and comparison of different use cases. Nonetheless, as the
application of ML in and for ABM comes at certain costs, researchers should not
use ML for ABMs just for the sake of doing it.",Is the Juice Worth the Squeeze? Machine Learning (ML) In and For Agent-Based Modelling (ABM),2020-03-26 18:49:01,"Johannes Dahlke, Kristina Bogner, Matthias Mueller, Thomas Berger, Andreas Pyka, Bernd Ebersberger","http://arxiv.org/abs/2003.11985v1, http://arxiv.org/pdf/2003.11985v1",econ.TH
34911,th,"There are several aspects of data markets that distinguish them from a
typical commodity market: asymmetric information, the non-rivalrous nature of
data, and informational externalities. Formally, this gives rise to a new class
of games which we call multiple-principal, multiple-agent problem with
non-rivalrous goods. Under the assumption that the principal's payoff is
quasilinear in the payments given to agents, we show that there is a
fundamental degeneracy in the market of non-rivalrous goods. This multiplicity
of equilibria also affects common refinements of equilibrium definitions
intended to uniquely select an equilibrium: both variational equilibria and
normalized equilibria will be non-unique in general. This implies that most
existing equilibrium concepts cannot provide predictions on the outcomes of
data markets emerging today. The results support the idea that modifications to
payment contracts themselves are unlikely to yield a unique equilibrium, and
either changes to the models of study or new equilibrium concepts will be
required to determine unique equilibria in settings with multiple principals
and a non-rivalrous good.","Equilibrium Selection in Data Markets: Multiple-Principal, Multiple-Agent Problems with Non-Rivalrous Goods",2020-04-01 05:07:19,"Samir Wadhwa, Roy Dong","http://arxiv.org/abs/2004.00196v2, http://arxiv.org/pdf/2004.00196v2",cs.GT
34912,th,"The optimal price of each firm falls in the search cost of consumers, in the
limit to the monopoly price, despite the exit of lower-value consumers in
response to costlier search. Exit means that fewer inframarginal consumers
remain. The decrease in marginal buyers is smaller, because part of demand is
composed of customers coming from rival firms. These buyers can be held up and
are not marginal. Higher search cost reduces the fraction of incoming switchers
among buyers, which decreases the hold-up motive, thus the price.",Greater search cost reduces prices,2020-04-02 22:38:09,Sander Heinsalu,"http://arxiv.org/abs/2004.01238v1, http://arxiv.org/pdf/2004.01238v1",econ.TH
34913,th,"Information is replicable in that it can be simultaneously consumed and sold
to others. We study how resale affects a decentralized market for information.
We show that even if the initial seller is an informational monopolist, she
captures non-trivial rents from at most a single buyer: her payoffs converge to
0 as soon as a single buyer has bought information. By contrast, if the seller
can also sell valueless tokens, there exists a ``prepay equilibrium'' where
payment is extracted from all buyers before the information good is released.
By exploiting resale possibilities, this prepay equilibrium gives the seller as
high a payoff as she would achieve if resale were prohibited.",Reselling Information,2020-04-04 00:50:33,"S. Nageeb Ali, Ayal Chen-Zion, Erik Lillethun","http://arxiv.org/abs/2004.01788v2, http://arxiv.org/pdf/2004.01788v2",cs.GT
34914,th,"We introduce a model of sender-receiver stopping games, where the state of
the world follows an i.i.d. process throughout the game. At each period, the
sender observes the current state, and sends a message to the receiver,
suggesting either to stop or to continue. The receiver, only seeing the message
but not the state, decides either to stop the game, or to continue which takes
the game to the next period. The payoff to each player is a function of the
state when the receiver quits, with higher states leading to better payoffs.
The horizon of the game can be finite or infinite.
  We prove existence and uniqueness of responsive (i.e. non-babbling) Perfect
Bayesian Equilibrium (PBE) under mild conditions on the game primitives in the
case where the players are sufficiently patient. The responsive PBE has a
remarkably simple structure, which builds on the identification of an
easy-to-implement and compute class of threshold strategies for the sender.
With the help of these threshold strategies, we derive simple expressions
describing this PBE. It turns out that in this PBE the receiver obediently
follows the recommendations of the sender. Hence, surprisingly, the sender
alone plays the decisive role, and regardless of the payoff function of the
receiver the sender always obtains the best possible payoff for himself.",Incentive compatibility in sender-receiver stopping games,2020-04-04 14:15:20,"Aditya Aradhye, János Flesch, Mathias Staudigl, Dries Vermeulen","http://arxiv.org/abs/2004.01910v1, http://arxiv.org/pdf/2004.01910v1",cs.GT
34917,th,"In their article, ""Egalitarianism under Severe Uncertainty"", Philosophy and
Public Affairs, 46:3, 2018, Thomas Rowe and Alex Voorhoeve develop an original
moral decision theory for cases under uncertainty, called ""pluralist
egalitarianism under uncertainty"". In this paper, I firstly sketch their views
and arguments. I then elaborate on their moral decision theory by discussing
how it applies to choice scenarios in health ethics. Finally, I suggest a new
two-stage Ellsberg thought experiment challenging the core of the principle of
their theory. In such an experiment, pluralist egalitarianism seems to suggest
the wrong course of action, morally and rationally speaking -- no matter
whether I consider my thought experiment in a simultaneous or a sequential
setting.",The Moral Burden of Ambiguity Aversion,2020-04-19 19:08:12,Brian Jabarian,"http://arxiv.org/abs/2004.08892v2, http://arxiv.org/pdf/2004.08892v2",econ.TH
34918,th,"We study the welfare consequences of merging Shapley--Scarf markets. Market
integration can lead to large welfare losses and make the vast majority of
agents worse-off, but is on average welfare-enhancing and makes all agents
better off ex-ante. The number of agents harmed by integration is a minority
when all markets are small or agents' preferences are highly correlated.",On the integration of Shapley-Scarf housing markets,2020-04-20 09:09:55,"Rajnish Kunar, Kriti Manocha, Josue Ortega","http://arxiv.org/abs/2004.09075v3, http://arxiv.org/pdf/2004.09075v3",econ.TH
34919,th,"We propose an equilibrium interaction model of occupational segregation and
labor market inequality between two social groups, generated exclusively
through the documented tendency to refer informal job seekers of identical
""social color"". The expected social color homophily in job referrals
strategically induces distinct career choices for individuals from different
social groups, which further translates into stable partial occupational
segregation equilibria with sustained wage and employment inequality -- in line
with observed patterns of racial or gender labor market disparities. Supporting
the qualitative analysis with a calibration and simulation exercise, we
furthermore show that both first and second best utilitarian social optima
entail segregation, any integration policy requiring explicit distributional
concerns. Our framework highlights that the mere social interaction through
homophilous contact networks can be a pivotal channel for the propagation and
persistence of gender and racial labor market gaps, complementary to long
studied mechanisms such as taste or statistical discrimination.",A Social Network Analysis of Occupational Segregation,2020-04-20 16:48:38,"I. Sebastian Buhai, Marco J. van der Leij","http://arxiv.org/abs/2004.09293v4, http://arxiv.org/pdf/2004.09293v4",econ.TH
34920,th,"The betweenness property of preference relations states that a probability
mixture of two lotteries should lie between them in preference. It is a
weakened form of the independence property and hence satisfied in expected
utility theory (EUT). Experimental violations of betweenness are
well-documented and several preference theories, notably cumulative prospect
theory (CPT), do not satisfy betweenness. We prove that CPT preferences satisfy
betweenness if and only if they conform with EUT preferences. In game theory,
lack of betweenness in the players' preference relations makes it essential to
distinguish between the two interpretations of a mixed action by a player -
conscious randomizations by the player and the uncertainty in the beliefs of
the opponents. We elaborate on this distinction and study its implication for
the definition of Nash equilibrium. This results in four different notions of
equilibrium, with pure and mixed action Nash equilibrium being two of them. We
dub the other two pure and mixed black-box strategy Nash equilibrium
respectively. We resolve the issue of existence of such equilibria and examine
how these different notions of equilibrium compare with each other.",Black-Box Strategies and Equilibrium for Games with Cumulative Prospect Theoretic Players,2020-04-20 22:36:43,"Soham R. Phade, Venkat Anantharam","http://arxiv.org/abs/2004.09592v1, http://arxiv.org/pdf/2004.09592v1",econ.TH
34921,th,"We study statistical discrimination of individuals based on payoff-irrelevant
social identities in markets where ratings/recommendations facilitate social
learning among users. Despite the potential promise and guarantee for the
ratings/recommendation algorithms to be fair and free of human bias and
prejudice, we identify the possible vulnerability of the ratings-based social
learning to discriminatory inferences on social groups. In our model, users'
equilibrium attention decisions may lead data to be sampled differentially
across different groups so that differential inferences on individuals may
emerge based on their group identities. We explore policy implications in terms
of regulating trading relationships as well as algorithm design.",Statistical Discrimination in Ratings-Guided Markets,2020-04-24 07:52:53,"Yeon-Koo Che, Kyungmin Kim, Weijie Zhong","http://arxiv.org/abs/2004.11531v1, http://arxiv.org/pdf/2004.11531v1",cs.GT
34922,th,"We provide two characterizations, one axiomatic and the other
neuro-computational, of the dependence of choice probabilities on deadlines,
within the widely used softmax representation
\[
p_{t}(a,A)=\frac{e^{\frac{u(a)}{\lambda(t)}+\alpha(a)}}{\sum_{b\in A}e^{\frac{u(b)}{\lambda(t)}+\alpha(b)}}
\]
where $p_{t}(a,A)$ is the
probability that alternative $a$ is selected from the set $A$ of feasible
alternatives if $t$ is the time available to decide, $\lambda$ is a time
dependent noise parameter measuring the unit cost of information, $u$ is a time
independent utility function, and $\alpha$ is an alternative-specific bias that
determines the initial choice probabilities reflecting prior information and
memory anchoring.
  Our axiomatic analysis provides a behavioral foundation of softmax (also
known as Multinomial Logit Model when $\alpha$ is constant). Our
neuro-computational derivation provides a biologically inspired algorithm that
may explain the emergence of softmax in choice behavior. Jointly, the two
approaches provide a thorough understanding of soft-maximization in terms of
internal causes (neurophysiological mechanisms) and external effects (testable
implications).",Multinomial logit processes and preference discovery: inside and outside the black box,2020-04-28 12:10:40,"Simone Cerreia-Vioglio, Fabio Maccheroni, Massimo Marinacci, Aldo Rustichini","http://arxiv.org/abs/2004.13376v3, http://arxiv.org/pdf/2004.13376v3",econ.TH
34923,th,"Motivated by epidemics such as COVID-19, we study the spread of a contagious
disease when behavior responds to the disease's prevalence. We extend the SIR
epidemiological model to include endogenous meeting rates. Individuals benefit
from economic activity, but activity involves interactions with potentially
infected individuals. The main focus is a theoretical analysis of contagion
dynamics and behavioral responses to changes in risk. We obtain a simple
condition for when public-health interventions or variants of a disease will
have paradoxical effects on infection rates due to risk compensation.
Behavioral responses are most likely to undermine public-health interventions
near the peak of severe diseases.",Virus Dynamics with Behavioral Responses,2020-04-30 04:16:31,Krishna Dasaratha,"http://arxiv.org/abs/2004.14533v5, http://arxiv.org/pdf/2004.14533v5",q-bio.PE
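A deliberately crude simulation sketch of the kind of model described above, with all functional forms my own assumptions rather than the paper's: a discrete-time SIR recursion in which the meeting rate declines with current prevalence, so that behavior responds endogenously to risk.

```python
import numpy as np

def sir_with_behavior(beta0=0.4, gamma=0.1, k=20.0, days=300, i0=1e-4):
    """Discrete-time SIR in which activity falls with prevalence.

    New infections are beta0 * a(I)^2 * S * I, with the illustrative
    activity rule a(I) = 1 / (1 + k * I): individuals cut back on
    interactions as the infected share I rises.  All forms are placeholders.
    """
    S, I, R = 1.0 - i0, i0, 0.0
    path = []
    for _ in range(days):
        a = 1.0 / (1.0 + k * I)             # endogenous activity level
        new_inf = beta0 * (a ** 2) * S * I  # both sides of a meeting scale their activity
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        path.append((S, I, R))
    return np.array(path)

if __name__ == "__main__":
    path = sir_with_behavior()
    print("peak prevalence:", path[:, 1].max())
```

Setting k = 0 recovers the standard SIR dynamics, so the parameter k isolates the behavioral-response channel in this toy version.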
34924,th,"In the multidimensional stable roommate problem, agents have to be allocated
to rooms and have preferences over sets of potential roommates. We study the
complexity of finding good allocations of agents to rooms under the assumption
that agents have diversity preferences [Bredereck et al., 2019]: each agent
belongs to one of the two types (e.g., juniors and seniors, artists and
engineers), and agents' preferences over rooms depend solely on the fraction of
agents of their own type among their potential roommates. We consider various
solution concepts for this setting, such as core and exchange stability, Pareto
optimality and envy-freeness. On the negative side, we prove that envy-free,
core stable or (strongly) exchange stable outcomes may fail to exist and that
the associated decision problems are NP-complete. On the positive side, we show
that these problems are in FPT with respect to the room size, which is not the
case for the general stable roommate problem. Moreover, for the classic setting
with rooms of size two, we present a linear-time algorithm that computes an
outcome that is core and exchange stable as well as Pareto optimal. Many of our
results for the stable roommate problem extend to the stable marriage problem.",Stable Roommate Problem with Diversity Preferences,2020-04-30 11:59:27,"Niclas Boehmer, Edith Elkind","http://arxiv.org/abs/2004.14640v1, http://arxiv.org/pdf/2004.14640v1",cs.GT
34925,th,"The behavior of complex systems is one of the most intriguing phenomena
investigated by recent science; natural and artificial systems offer a wide
opportunity for this kind of analysis. The energy conversion is both a process
based on important physical laws and one of the most important economic
sectors; the interaction between these two aspects of energy production
suggests the possibility to apply some of the approaches of the dynamic
systems' analysis. In particular, a phase plot, which is one of the methods to
detect a correlation between quantities in a complex system, provides a good
way to establish qualitative analogies between the ecological systems and the
economic ones and may shed light on the processes governing the evolution of
the system. The aim of this paper is to highlight the analogies between some
peculiar characteristics of the oil production vs. price and show in which way
such characteristics are similar to some behavioral mechanisms found in Nature.",Spruce budworm and oil price: a biophysical analogy,2020-04-30 18:56:59,"Luciano Celi, Claudio Della Volpe, Luca Pardi, Stefano Siboni","http://arxiv.org/abs/2004.14898v1, http://arxiv.org/pdf/2004.14898v1",econ.TH
34926,th,"How much and when should we limit economic and social activity to ensure that
the health-care system is not overwhelmed during an epidemic? We study a
setting where ICU resources are constrained while suppression is costly (e.g.,
limiting economic interaction). Providing a fully analytical solution we show
that the common wisdom of ""flattening the curve"", where suppression measures
are continuously taken to hold down the spread throughout the epidemic, is
suboptimal. Instead, the optimal suppression is discontinuous. The epidemic
should be left unregulated in a first phase, and when the ICU constraint is
being approached, society should quickly lock down (a discontinuity). After the
lockdown, regulation should gradually be lifted, holding the rate of infected
constant thus respecting the ICU resources while not unnecessarily limiting
economic activity. In a final phase, regulation is lifted. We call this
strategy ""filling the box"".",Optimal epidemic suppression under an ICU constraint,2020-05-04 11:55:30,"Laurent Miclo, Daniel Spiro, Jörgen Weibull","http://arxiv.org/abs/2005.01327v1, http://arxiv.org/pdf/2005.01327v1",econ.TH
34927,th,"We add here another layer to the literature on nonatomic anonymous games
started with the 1973 paper by Schmeidler. More specifically, we define a new
notion of equilibrium which we call $\varepsilon$-estimated equilibrium and
prove its existence for any positive $\varepsilon$. This notion encompasses and
brings to nonatomic games recent concepts of equilibrium such as
self-confirming, peer-confirming, and Berk--Nash. This augmented scope is our
main motivation. At the same time, our approach also resolves some conceptual
problems present in Schmeidler (1973), pointed out by Shapley. In that paper
the existence of pure-strategy Nash equilibria has been proved for any
nonatomic game with a continuum of players, endowed with an atomless countably
additive probability. But, requiring Borel measurability of strategy profiles
may impose some limitation on players' choices and introduce an exogenous
dependence among players' actions, which clashes with the nature of
noncooperative game theory. Our suggested solution is to consider every subset
of players as measurable. This leads to a nontrivial purely finitely additive
component which might prevent the existence of equilibria and requires a novel
mathematical approach to prove the existence of $\varepsilon$-equilibria.",Equilibria of nonatomic anonymous games,2020-05-04 23:45:24,"Simone Cerreia-Vioglio, Fabio Maccheroni, David Schmeidler","http://arxiv.org/abs/2005.01839v1, http://arxiv.org/pdf/2005.01839v1",econ.TH
34928,th,"We consider social welfare functions when the preferences of individual
agents and society maximize subjective expected utility in the tradition of
Savage. A system of axioms is introduced whose unique solution is the social
welfare function that averages the agents' beliefs and sums up their utility
functions, normalized to have the same range. The first distinguishing axiom
requires positive association of society's preferences with the agents'
preferences for acts about which beliefs agree. The second is a weakening of
Arrow's independence of irrelevant alternatives that only applies to
non-redundant acts.",Belief-Averaged Relative Utilitarianism,2020-05-07 21:39:28,Florian Brandl,"http://arxiv.org/abs/2005.03693v3, http://arxiv.org/pdf/2005.03693v3",econ.TH
34929,th,"Functional decision theory (FDT) is a fairly new mode of decision theory and
a normative viewpoint on how an agent should maximize expected utility. The
current standard in decision theory and computer science is causal decision
theory (CDT), largely seen as superior to the main alternative evidential
decision theory (EDT). These theories prescribe three distinct methods for
maximizing utility. We explore how FDT differs from CDT and EDT, and what
implications it has on the behavior of FDT agents and humans. It has been shown
in previous research how FDT can outperform CDT and EDT. We additionally show
FDT performing well on more classical game theory problems and argue for its
extension to human problems to show that its potential for superiority is
robust. We also make FDT more concrete by displaying it in an evolutionary
environment, competing directly against other theories.",Functional Decision Theory in an Evolutionary Environment,2020-05-06 22:38:54,Noah Topper,"http://arxiv.org/abs/2005.05154v2, http://arxiv.org/pdf/2005.05154v2",econ.TH
35363,th,"We study the class of potential games that are also graphical games with
respect to a given graph $G$ of connections between the players. We show that,
up to strategic equivalence, this class of games can be identified with the set
of Markov random fields on $G$.
  From this characterization, and from the Hammersley-Clifford theorem, it
follows that the potentials of such games can be decomposed to local
potentials. We use this decomposition to strongly bound the number of strategy
changes of a single player along a better response path. This result extends to
generalized graphical potential games, which are played on infinite graphs.",Graphical potential games,2014-05-07 04:00:07,"Yakov Babichenko, Omer Tamuz","http://dx.doi.org/10.1016/j.jet.2016.03.010, http://arxiv.org/abs/1405.1481v2, http://arxiv.org/pdf/1405.1481v2",math.PR
34930,th,"Known results are reviewed about the bounded and convex bounded variants, bT
and cbT, of a topology T on a real Banach space. The focus is on the cases of T
= w(P*, P) and of T = m(P*, P), which are the weak* and the Mackey topologies
on a dual Banach space P*. The convex bounded Mackey topology, cbm(P*, P), is
known to be identical to m(P*, P). As for bm(P*, P), it is conjectured to be
strictly stronger than m(P*, P) or, equivalently, not to be a vector topology
(except when P is reflexive). Some uses of the bounded Mackey and the bounded
weak* topologies in economic theory and its applications are pointed to. Also
reviewed are the bounded weak and the compact weak topologies, bw(Y, Y*) and
kw(Y, Y*), on a general Banach space Y, as well as their convex variants (cbw
and ckw).",Bounded topologies on Banach spaces and some of their uses in economic theory: a review,2020-05-11 18:47:04,Andrew J. Wrobel,"http://arxiv.org/abs/2005.05202v4, http://arxiv.org/pdf/2005.05202v4",math.FA
34931,th,"Our understanding of risk preferences can be sharpened by considering their
evolutionary basis. The existing literature has focused on two sources of risk:
idiosyncratic risk and aggregate risk. We introduce a new source of risk,
heritable risk, in which there is a positive correlation between the fitness of
a newborn agent and the fitness of her parent. Heritable risk was plausibly
common in our evolutionary past and it leads to a strictly higher growth rate
than the other sources of risk. We show that the presence of heritable risk in
the evolutionary past may explain the tendency of people to exhibit skewness
loving today.","Evolution, Heritable Risk, and Skewness Loving",2020-05-12 16:50:26,"Yuval Heller, Arthur Robson","http://dx.doi.org/10.3982/TE3949, http://arxiv.org/abs/2005.05772v3, http://arxiv.org/pdf/2005.05772v3",econ.TH
34932,th,"We study various decision problems regarding short-term investments in risky
assets whose returns evolve continuously in time. We show that in each problem,
all risk-averse decision makers have the same (problem-dependent) ranking over
short-term risky assets. Moreover, in each problem, the ranking is represented
by the same risk index as in the case of CARA utility agents and normally
distributed risky assets.",Short-Term Investments and Indices of Risk,2020-05-13 23:35:39,"Yuval Heller, Amnon Schreiber","http://arxiv.org/abs/2005.06576v1, http://arxiv.org/pdf/2005.06576v1",q-fin.PM
34933,th,"The notion of fault tolerant Nash equilibria has been introduced as a way of
studying the robustness of Nash equilibria. Under this notion, a fixed number
of players are allowed to exhibit faulty behavior in which they may deviate
arbitrarily from an equilibrium strategy. A Nash equilibrium in a game with $N$
players is said to be $\alpha$-tolerant if no non-faulty user wants to deviate
from an equilibrium strategy as long as $N-\alpha-1$ other players are playing
the equilibrium strategies, i.e., it is robust to deviations from rationality
by $\alpha$ faulty players. In prior work, $\alpha$-tolerance has been largely
viewed as a property of a given Nash equilibria. Here, instead we consider
following Nash's approach for showing the existence of equilibria, namely,
through the use of best response correspondences and fixed-point arguments. In
this manner, we provide sufficient conditions for the existence of an
$\alpha$-tolerant equilibrium. This involves first defining an
$\alpha$-tolerant best response correspondence. Given a strategy profile of
non-faulty agents, this correspondence contains strategies for a non-faulty
player that are a best response given any strategy profile of the faulty
players. We prove that if this correspondence is non-empty, then it is
upper-hemi-continuous. This enables us to apply Kakutani's fixed-point theorem
and argue that if this correspondence is non-empty for every strategy profile
of the non-faulty players then there exists an $\alpha$-tolerant equilibrium.
However, we also illustrate by examples, that in many games this best response
correspondence will be empty for some strategy profiles even though
$\alpha$-tolerant equilibria still exist.",Fault Tolerant Equilibria in Anonymous Games: best response correspondences and fixed points,2020-05-14 11:44:40,"Deepanshu Vasal, Randall Berry","http://arxiv.org/abs/2005.06812v2, http://arxiv.org/pdf/2005.06812v2",econ.TH
34934,th,"Recursive preferences, of the sort developed by Epstein and Zin (1989), play
an integral role in modern macroeconomics and asset pricing theory.
Unfortunately, it is non-trivial to establish the unique existence of a
solution to recursive utility models. We show that the tightest known existence
and uniqueness conditions can be extended to (i) Schorfheide, Song and Yaron
(2018) recursive utilities and (ii) recursive utilities with `narrow framing'.
Further, we sharpen the solution space of Borovicka and Stachurski (2019) from
$L_1$ to $L_p$ so that the results apply to a broader class of modern asset
pricing models. For example, using $L_2$ Hilbert space theory, we find the
class of parameters which generate a unique $L_2$ solution to the Bansal and
Yaron (2004) and Schorfheide, Song and Yaron (2018) models.",Existence and Uniqueness of Recursive Utility Models in $L_p$,2020-05-13 11:29:36,Flint O'Neil,"http://arxiv.org/abs/2005.07067v1, http://arxiv.org/pdf/2005.07067v1",econ.TH
34935,th,"Shortlisting is the task of reducing a long list of alternatives to a
(smaller) set of best or most suitable alternatives. Shortlisting is often used
in the nomination process of awards or in recommender systems to display
featured objects. In this paper, we analyze shortlisting methods that are based
on approval data, a common type of preferences. Furthermore, we assume that the
size of the shortlist, i.e., the number of best or most suitable alternatives,
is not fixed but determined by the shortlisting method. We axiomatically
analyze established and new shortlisting methods and complement this analysis
with an experimental evaluation based on synthetic and real-world data. Our
results lead to recommendations on which shortlisting methods to use, depending on
the desired properties.",Approval-Based Shortlisting,2020-05-14 19:03:50,"Martin Lackner, Jan Maly","http://arxiv.org/abs/2005.07094v2, http://arxiv.org/pdf/2005.07094v2",cs.GT
34942,th,"At a mixed Nash equilibrium, the payoff of a player does not depend on her
own action, as long as her opponent sticks to his. In a periodic strategy, a
concept developed in a previous paper (arXiv:1307.2035v4), in contrast, the own
payoff does not depend on the opponent's action. Here, we generalize this to
multi-player simultaneous perfect information strategic form games. We show
that also in this class of games, there always exists at least one periodic
strategy, and we investigate the mathematical properties of such periodic
strategies. In addition, we demonstrate that periodic strategies may exist in
games with incomplete information; we shall focus on Bayesian games. Moreover
we discuss the differences between the periodic strategies formalism and
cooperative game theory. In fact, the periodic strategies are obtained in a
purely non-cooperative way, and periodic strategies are as cooperative as the
Nash equilibria are. Finally, we incorporate the periodic strategies in an
epistemic game theory framework, and discuss several features of this approach.",Periodic Strategies II: Generalizations and Extensions,2020-05-26 19:12:04,"V. K. Oikonomou, J. Jost","http://arxiv.org/abs/2005.12832v1, http://arxiv.org/pdf/2005.12832v1",cs.GT
34936,th,"We study the effectiveness of information design in reducing congestion in
social services catering to users with varied levels of need. In the absence of
price discrimination and centralized admission, the provider relies on sharing
information about wait times to improve welfare. We consider a stylized model
with heterogeneous users who differ in their private outside options: low-need
users have an acceptable outside option to the social service, whereas
high-need users have no viable outside option. Upon arrival, a user decides to
wait for the service by joining an unobservable first-come-first-serve queue,
or leave and seek her outside option. To reduce congestion and improve social
outcomes, the service provider seeks to persuade more low-need users to avail
their outside option, and thus better serve high-need users. We characterize
the Pareto-optimal signaling mechanisms and compare their welfare outcomes
against several benchmarks. We show that if either type is the overwhelming
majority of the population, information design does not provide improvement
over sharing full information or no information. On the other hand, when the
population is a mixture of the two types, information design not only Pareto
dominates full-information and no-information mechanisms, in some regimes it
also achieves the same welfare as the ""first-best"", i.e., the Pareto-optimal
centralized admission policy with knowledge of users' types.",Information Design for Congested Social Services: Optimal Need-Based Persuasion,2020-05-14 23:47:10,"Jerry Anunrojwong, Krishnamurthy Iyer, Vahideh Manshadi","http://arxiv.org/abs/2005.07253v3, http://arxiv.org/pdf/2005.07253v3",cs.GT
34937,th,"We study a mechanism design problem where a community of agents wishes to
fund public projects via voluntary monetary contributions by the community
members. This serves as a model for public expenditure without an exogenously
available budget, such as participatory budgeting or voluntary tax programs, as
well as donor coordination when interpreting charities as public projects and
donations as contributions. Our aim is to identify a mutually beneficial
distribution of the individual contributions. In the preference aggregation
problem that we study, agents report linear utility functions over projects
together with the amount of their contributions, and the mechanism determines a
socially optimal distribution of the money. We identify a specific mechanism --
the Nash product rule -- which picks the distribution that maximizes the
product of the agents' utilities. This rule is Pareto efficient, and we prove
that it satisfies attractive incentive properties: it spends each agent's
contribution only on projects the agent finds acceptable, and agents are
strongly incentivized to participate.",Funding Public Projects: A Case for the Nash Product Rule,2020-05-16 17:17:00,"Florian Brandl, Felix Brandt, Matthias Greger, Dominik Peters, Christian Stricker, Warut Suksompong","http://dx.doi.org/10.1016/j.jmateco.2021.102585, http://arxiv.org/abs/2005.07997v2, http://arxiv.org/pdf/2005.07997v2",cs.GT
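A minimal sketch of the Nash product rule described above, cast as a concave program: maximize the sum of log-utilities over nonnegative distributions of the pooled contributions. Weighting each agent's log-utility by her contribution is my reading of the rule; the paper's exact objective, tie-breaking, and incentive analysis are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def nash_product_rule(utilities, contributions):
    """Distribute the pooled contributions across projects (a sketch).

    utilities:     (n_agents, n_projects) nonnegative linear utility weights
    contributions: length-n_agents vector of monetary contributions

    Maximizes sum_i c_i * log(u_i . d) over d >= 0 with sum(d) equal to
    the total budget, i.e. a contribution-weighted Nash product.
    """
    U = np.asarray(utilities, float)
    c = np.asarray(contributions, float)
    budget = c.sum()
    n_projects = U.shape[1]

    def neg_log_nash(d):
        vals = U @ d
        return -np.dot(c, np.log(np.maximum(vals, 1e-12)))  # guard against log(0)

    x0 = np.full(n_projects, budget / n_projects)
    res = minimize(
        neg_log_nash, x0, method="SLSQP",
        bounds=[(0.0, budget)] * n_projects,
        constraints=[{"type": "eq", "fun": lambda d: d.sum() - budget}],
    )
    return res.x

if __name__ == "__main__":
    # two agents with opposed interests and equal contributions split the budget evenly
    d = nash_product_rule([[1.0, 0.0], [0.0, 1.0]], [50.0, 50.0])
    print(np.round(d, 2))
```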
34938,th,"The core is a traditional and useful solution concept in economic theory. But
in discrete exchange economies without transfers, when endowments are complex,
the core may be empty. This motivates Balbuzanov and Kotowski (2019) to
interpret endowments as exclusion rights and propose a new concept called
exclusion core. Our contribution is twofold. First, we propose a rectification
of the core to solve its problem under complex endowments. Second, we propose a
refinement of Balbuzanov and Kotowski's exclusion core to improve its
performance. Our two core concepts share a common idea of correcting the
misused altruism of unaffected agents in blocking coalitions. We propose a
mechanism to find allocations in the two cores.",Cores in discrete exchange economies with complex endowments,2020-05-19 13:22:14,Jun Zhang,"http://arxiv.org/abs/2005.09351v2, http://arxiv.org/pdf/2005.09351v2",econ.TH
34939,th,"This paper studies cooperative data-sharing between competitors vying to
predict a consumer's tastes. We design optimal data-sharing schemes both for
when they compete only with each other, and for when they additionally compete
with an Amazon -- a company with more, better data. We show that simple schemes
-- threshold rules that probabilistically induce either full data-sharing
between competitors, or the full transfer of data from one competitor to
another -- are either optimal or approximately optimal, depending on properties
of the information structure. We also provide conditions under which firms
share more data when they face stronger outside competition, and describe
situations in which this conclusion is reversed.",Coopetition Against an Amazon,2020-05-20 16:33:03,"Ronen Gradwohl, Moshe Tennenholtz","http://arxiv.org/abs/2005.10038v3, http://arxiv.org/pdf/2005.10038v3",cs.GT
34940,th,"Seed fundraising for ventures often takes place by sequentially approaching
potential contributors, who make observable decisions. The fundraising succeeds
when a target number of investments is reached. Though resembling classic
information cascades models, its behavior is radically different, exhibiting
surprising complexities. Assuming a common distribution for contributors'
levels of information, we show that participants rely on {\em mutual
insurance}, i.e., invest despite unfavorable information, trusting future
player strategies to protect them from loss. {\em Delegation} occurs when
contributors invest unconditionally, empowering the decision to future players.
Often, all early contributors delegate, in effect empowering the last few
contributors to decide the outcome. Similar dynamics hold in sequential voting,
as in voting in committees.",Sequential Fundraising and Mutual Insurance,2020-05-21 18:08:58,"Amir Ban, Moran Koren","http://arxiv.org/abs/2005.10711v3, http://arxiv.org/pdf/2005.10711v3",cs.GT
34941,th,"Volunteer crowdsourcing platforms match volunteers with tasks which are often
recurring. To ensure completion of such tasks, platforms frequently use a lever
known as ""adoption,"" which amounts to a commitment by the volunteer to
repeatedly perform the task. Despite reducing match uncertainty, high levels of
adoption can decrease the probability of forming new matches, which in turn can
suppress growth. We study how platforms should manage this trade-off. Our
research is motivated by a collaboration with Food Rescue U.S. (FRUS), a
volunteer-based food recovery organization active in over 30 locations. For
platforms such as FRUS, success crucially depends on volunteer engagement.
Consequently, effectively utilizing non-monetary levers, such as adoption, is
critical. Motivated by the volunteer management literature and our analysis of
FRUS data, we develop a model for two-sided markets which repeatedly match
volunteers with tasks. Our model incorporates match uncertainty as well as the
negative impact of failing to match on future engagement. We study the
platform's optimal policy for setting the adoption level to maximize the total
discounted number of matches. We fully characterize the optimal myopic policy
and show that it takes a simple form: depending on volunteer characteristics
and market thickness, either allow for full adoption or disallow adoption. In
the long run, we show that such a policy is either optimal or achieves a
constant-factor approximation. Our finding is robust to incorporating
heterogeneity in volunteer behavior. Our work sheds light on how two-sided
platforms need to carefully control the double-edged impacts that commitment
levers have on growth and engagement. A one-size-fits-all solution may not be
effective, as the optimal design crucially depends on the characteristics of
the volunteer population.",Commitment on Volunteer Crowdsourcing Platforms: Implications for Growth and Engagement,2020-05-21 18:41:05,"Irene Lo, Vahideh Manshadi, Scott Rodilitz, Ali Shameli","http://arxiv.org/abs/2005.10731v3, http://arxiv.org/pdf/2005.10731v3",cs.GT
34943,th,"The present notes summarise the oligopoly dynamics lectures professor Lu\'is
Cabral gave at the Bank of Portugal in September and October 2017. The lectures
discuss a set of industrial organisation problems in a dynamic environment, namely
learning by doing, switching costs, price wars, networks and platforms, and
ladder models of innovation. Methodologically, the materials cover analytical
solutions of known points (e.g., $\delta = 0$), the discussion of firms'
strategies based on intuitions derived directly from their value functions with
no model solving, and the combination of analytical and numerical procedures to
reach model solutions. State space analysis is done for both continuous and
discrete cases. All errors are my own.",Oligopoly Dynamics,2020-05-27 11:11:58,Bernardo Melo Pimentel,"http://arxiv.org/abs/2005.13228v1, http://arxiv.org/pdf/2005.13228v1",cs.GT
34944,th,"It is well-known that for infinitely repeated games, there are computable
strategies that have best responses, but no computable best responses. These
results were originally proved for either specific games (e.g., Prisoner's
dilemma), or for classes of games satisfying certain conditions not known to be
both necessary and sufficient.
  We derive a complete characterization in the form of simple necessary and
sufficient conditions for the existence of a computable strategy without a
computable best response under limit-of-means payoff. We further refine the
characterization by requiring the strategy profiles to be Nash equilibria or
subgame-perfect equilibria, and we show how the characterizations entail that
it is efficiently decidable whether an infinitely repeated game has a
computable strategy without a computable best response.",A Complete Characterization of Infinitely Repeated Two-Player Games having Computable Strategies with no Computable Best Response under Limit-of-Means Payoff,2020-05-28 14:30:17,"Jakub Dargaj, Jakob Grue Simonsen","http://arxiv.org/abs/2005.13921v2, http://arxiv.org/pdf/2005.13921v2",cs.GT
34945,th,"The concept of shared value was introduced by Porter and Kramer as a new
conception of capitalism. Shared value describes the strategy of organizations
that simultaneously enhance their competitiveness and the social conditions of
related stakeholders such as employees, suppliers and the natural environment.
The idea has generated strong interest, but also some controversy due to the lack
of a precise definition and measurement techniques, and due to difficulties in connecting
theory to practice. We overcome these drawbacks by proposing an economic
framework based on three key aspects: coalition formation, sustainability and
consistency, meaning that conclusions can be tested by means of logical
deductions and empirical applications. The presence of multiple agents to
create shared value and the optimization of both social and economic criteria
in decision making represent the core of our quantitative definition of shared
value. We also show how economic models can be characterized as shared value
models by means of logical deductions. Summarizing, our proposal builds on the
foundations of shared value to improve its understanding and to facilitate the
suggestion of economic hypotheses, hence accommodating the concept of shared
value within modern economic theory.",Shared value economics: an axiomatic approach,2020-05-31 21:25:31,"Francisco Salas-Molina, Juan Antonio Rodríguez Aguilar, Filippo Bistaffa","http://arxiv.org/abs/2006.00581v1, http://arxiv.org/pdf/2006.00581v1",cs.GT
34946,th,"We suggest a new approach to creation of general market equilibrium models
involving economic agents with local and partial knowledge about the system and
under different restrictions. The market equilibrium problem is then formulated
as a quasi-variational inequality that enables us to establish existence
results for the model in different settings. We also describe dynamic
processes, which fall into information exchange schemes of the proposed market
model. In particular, we propose an iterative solution method for
quasi-variational inequalities, which is based on evaluations of the proper
market information only in a neighborhood of the current market state without
knowledge of the whole feasible set and prove its convergence.",Variational Inequality Type Formulations of General Market Equilibrium Problems with Local Information,2020-06-01 21:12:06,Igor Konnov,"http://arxiv.org/abs/2006.01178v2, http://arxiv.org/pdf/2006.01178v2",math.OC
34947,th,"This study investigates a potential mechanism to promote coordination. With
theoretical guidance using a belief-based learning model, we conduct a
multi-period, binary-choice, and weakest-link laboratory coordination
experiment to study the effect of gradualism - increasing the required levels
(stakes) of contributions slowly over time rather than requiring a high level
of contribution immediately - on group coordination performance. We randomly
assign subjects to three treatments: starting and continuing at a high stake,
starting at a low stake but jumping to a high stake after a few periods, and
starting at a low stake while gradually increasing the stakes over time (the
Gradualism treatment). We find that relative to the other two treatments,
groups coordinate most successfully at high stakes in the Gradualism treatment.
We also find evidence that supports the belief-based learning model. These
findings point to a simple mechanism for promoting successful voluntary
coordination.",One Step at a Time: Does Gradualism Build Coordination?,2020-06-02 07:33:28,"Maoliang Ye, Jie Zheng, Plamen Nikolov, Sam Asher","http://dx.doi.org/10.1287/mnsc.2018.3210, http://arxiv.org/abs/2006.01386v1, http://arxiv.org/pdf/2006.01386v1",econ.TH
34949,th,"The Bayesian persuasion model studies communication between an informed
sender and a receiver with a payoff-relevant action, emphasizing the ability of
a sender to extract maximal surplus from his informational advantage. In this
paper we study a setting with multiple senders, but in which the receiver
interacts with only one sender of his choice: senders commit to signals and the
receiver then chooses, at the interim stage, with which sender to interact. Our
main result is that whenever senders are even slightly uncertain about each
other's preferences, the receiver receives all the informational surplus in all
equilibria of this game.",Reaping the Informational Surplus in Bayesian Persuasion,2020-06-03 08:14:05,"Ronen Gradwohl, Niklas Hahn, Martin Hoefer, Rann Smorodinsky","http://arxiv.org/abs/2006.02048v1, http://arxiv.org/pdf/2006.02048v1",cs.GT
34950,th,"We interpret multi-product supply chains (SCs) as coordinated markets; under
this interpretation, a SC optimization problem is a market clearing problem
that allocates resources and associated economic values (prices) to different
stakeholders that bid into the market (suppliers, consumers, transportation,
and processing technologies). The market interpretation allows us to establish
fundamental properties that explain how physical resources (primal variables)
and associated economic values (dual variables) flow in the SC. We use duality
theory to explain why incentivizing markets by forcing stakeholder
participation (e.g., by imposing demand satisfaction or service provision
constraints) yields artificial price behavior, inefficient allocations, and
economic losses. To overcome these issues, we explore market incentive
mechanisms that use bids; here, we introduce the concept of a stakeholder graph
(a product-based representation of a supply chain) and show that this
representation allows us to naturally determine minimum bids that activate the
market. These results provide guidelines to design SC formulations that
properly remunerate stakeholders and to design policies that foster market
transactions. The results are illustrated using an urban waste management
problem for a city of 100,000 residents.",Economic Properties of Multi-Product Supply Chains,2020-06-04 20:02:16,"Philip A. Tominac, Victor M. Zavala","http://arxiv.org/abs/2006.03467v2, http://arxiv.org/pdf/2006.03467v2",math.OC
34951,th,"An indivisible object may be sold to one of $n$ agents who know their
valuations of the object. The seller would like to use a revenue-maximizing
mechanism but her knowledge of the valuations' distribution is scarce: she
knows only the means (which may be different) and an upper bound for
valuations. Valuations may be correlated.
  Using a constructive approach based on duality, we prove that a mechanism
that maximizes the worst-case expected revenue among all deterministic
dominant-strategy incentive compatible, ex post individually rational
mechanisms is such that the object should be awarded to the agent with the
highest linear score provided it is nonnegative. Linear scores are
bidder-specific linear functions of bids. The set of optimal mechanisms
includes other mechanisms but all those have to be close to the optimal linear
score auction in a certain sense. When means are high, all optimal mechanisms
share the linearity property. Second-price auction without a reserve is an
optimal mechanism when the number of symmetric bidders is sufficiently high.",An Optimal Distributionally Robust Auction,2020-06-09 14:37:13,Alex Suzdaltsev,"http://arxiv.org/abs/2006.05192v2, http://arxiv.org/pdf/2006.05192v2",econ.TH
34952,th,"Binary yes-no decisions in a legislative committee or a shareholder meeting
are commonly modeled as a weighted game. However, there are noteworthy
exceptions. E.g., the voting rules of the European Council according to the
Treaty of Lisbon use a more complicated construction. Here we want to study the
question whether we lose much from a practical point of view if we restrict
ourselves to weighted games. To this end, we invoke power indices that measure
the influence of a member in binary decision committees. More precisely, we
compare the achievable power distributions of weighted games with those from a
reasonable superset of weighted games. It turns out that the deviation is
relatively small.",Are weighted games sufficiently good for binary voting?,2020-06-09 18:10:57,Sascha Kurz,"http://dx.doi.org/10.1007/s41412-021-00111-6, http://arxiv.org/abs/2006.05330v3, http://arxiv.org/pdf/2006.05330v3",cs.GT
34953,th,"We develop a model of bailout stigma where accepting a bailout signals a
firm's balance-sheet weakness and worsens its funding prospect. To avoid
stigma, high-quality firms either withdraw from subsequent financing after
receiving bailouts or refuse bailouts altogether to send a favorable signal.
The former leads to a short-lived stimulation with a subsequent market freeze
even worse than if there were no bailouts. The latter revives the funding
market, albeit with delay, to the level achievable without any stigma, and
implements a constrained optimal outcome. A menu of multiple bailout programs
also compounds bailout stigma and worsens market freeze.",Bailout Stigma,2020-06-10 06:41:57,"Yeon-Koo Che, Chongwoo Choe, Keeyoung Rhee","http://arxiv.org/abs/2006.05640v4, http://arxiv.org/pdf/2006.05640v4",q-fin.GN
34954,th,"""Big data"" gives markets access to previously unmeasured characteristics of
individual agents. Policymakers must decide whether and how to regulate the use
of this data. We study how new data affects incentives for agents to exert
effort in settings such as the labor market, where an agent's quality is
initially unknown but is forecast from an observable outcome. We show that
measurement of a new covariate has a systematic effect on the average effort
exerted by agents, with the direction of the effect determined by whether the
covariate is informative about long-run quality or about a shock to short-run
outcomes. For a class of covariates satisfying a statistical property we call
strong homoskedasticity, this effect is uniform across agents. More generally,
new measurements can impact agents unequally, and we show that these
distributional effects have a first-order impact on social welfare.",Data and Incentives,2020-06-11 18:57:49,"Annie Liang, Erik Madsen","http://arxiv.org/abs/2006.06543v2, http://arxiv.org/pdf/2006.06543v2",econ.TH
34955,th,"Pandemic response is a complex affair. Most governments employ a set of
quasi-standard measures to fight COVID-19 including wearing masks, social
distancing, virus testing and contact tracing. We argue that some non-trivial
factors behind the varying effectiveness of these measures are selfish
decision-making and the differing national implementations of the response
mechanism. In this paper, through simple games, we show the effect of
individual incentives on the decisions made with respect to wearing masks and
social distancing, and how these may result in a sub-optimal outcome. We also
demonstrate the responsibility of national authorities in designing these games
properly regarding the chosen policies and their influence on the preferred
outcome. We promote a mechanism design approach: it is in the best interest of
every government to carefully balance social good and response costs when
implementing their respective pandemic response mechanism.","Corona Games: Masks, Social Distancing and Mechanism Design",2020-06-11 10:18:03,"Balazs Pejo, Gergely Biczok","http://arxiv.org/abs/2006.06674v4, http://arxiv.org/pdf/2006.06674v4",econ.TH
34958,th,"We study the competition for partners in two-sided matching markets with
heterogeneous agent preferences, with a focus on how the equilibrium outcomes
depend on the connectivity in the market. We model random partially connected
markets, with each agent having an average degree $d$ in a random (undirected)
graph, and a uniformly random preference ranking over their neighbors in the
graph. We formally characterize stable matchings in large random markets with
small imbalance and find a threshold in the connectivity $d$ at $\log^2 n$
(where $n$ is the number of agents on one side of the market) which separates a
``weak competition'' regime, where agents on both sides of the market do
equally well, from a ``strong competition'' regime, where agents on the short
(long) side of the market enjoy a significant advantage (disadvantage).
Numerical simulations confirm and sharpen our theoretical predictions, and
demonstrate robustness to our assumptions. We leverage our characterizations in
two ways: First, we derive prescriptive insights into how to design the
connectivity of the market to trade off optimally between the average agent
welfare achieved and the number of agents who remain unmatched in the market.
For most market primitives, we find that the optimal connectivity should lie in
the weak competition regime or at the threshold between the regimes. Second,
our analysis uncovers a new conceptual principle governing whether the short
side enjoys a significant advantage in a given matching market, which can
moreover be applied as a diagnostic tool given only basic summary statistics
for the market. Counterfactual analyses using data on centralized high school
admissions in a major USA city show the practical value of both our design
insights and our diagnostic principle.",The Competition for Partners in Matching Markets,2020-06-25 21:28:30,"Yash Kanoria, Seungki Min, Pengyu Qian","http://arxiv.org/abs/2006.14653v2, http://arxiv.org/pdf/2006.14653v2",cs.GT
34959,th,"We show that, with indivisible goods, the existence of competitive
equilibrium fundamentally depends on agents' substitution effects, not their
income effects. Our Equilibrium Existence Duality allows us to transport
results on the existence of competitive equilibrium from settings with
transferable utility to settings with income effects. One consequence is that
net substitutability---which is a strictly weaker condition than gross
substitutability---is sufficient for the existence of competitive equilibrium.
We also extend the ``demand types'' classification of valuations to settings
with income effects and give necessary and sufficient conditions for a pattern
of substitution effects to guarantee the existence of competitive equilibrium.",The Equilibrium Existence Duality: Equilibrium with Indivisibilities & Income Effects,2020-06-30 19:21:12,"Elizabeth Baldwin, Omer Edhan, Ravi Jagadeesan, Paul Klemperer, Alexander Teytelboym","http://arxiv.org/abs/2006.16939v1, http://arxiv.org/pdf/2006.16939v1",econ.TH
34960,th,"We use decision theory to confront uncertainty that is sufficiently broad to
incorporate ""models as approximations."" We presume the existence of a featured
collection of what we call ""structured models"" that have explicit substantive
motivations. The decision maker confronts uncertainty through the lens of these
models, but also views these models as simplifications, and hence, as
misspecified. We extend the max-min analysis under model ambiguity to
incorporate the uncertainty induced by acknowledging that the models used in
decision-making are simplified approximations. Formally, we provide an
axiomatic rationale for a decision criterion that incorporates model
misspecification concerns.",Making Decisions under Model Misspecification,2020-08-01 13:09:07,"Simone Cerreia-Vioglio, Lars Peter Hansen, Fabio Maccheroni, Massimo Marinacci","http://arxiv.org/abs/2008.01071v4, http://arxiv.org/pdf/2008.01071v4",econ.TH
34961,th,"A seller chooses a reserve price in a second-price auction to maximize
worst-case expected revenue when she knows only the mean of the value distribution
and an upper bound on either values themselves or variance. Values are private
and iid. Using an indirect technique, we prove that it is always optimal to set
the reserve price to the seller's own valuation. However, the maxmin reserve
price may not be unique. If the number of bidders is sufficiently high, all
prices below the seller's valuation, including zero, are also optimal. A
second-price auction with the reserve equal to the seller's value (or zero) is an
asymptotically optimal mechanism (among all ex post individually rational
mechanisms) as the number of bidders grows without bound.",Distributionally Robust Pricing in Independent Private Value Auctions,2020-08-04 17:59:45,Alex Suzdaltsev,"http://arxiv.org/abs/2008.01618v2, http://arxiv.org/pdf/2008.01618v2",econ.TH
34981,th,"We consider a discrete-time nonatomic routing game with variable demand and
uncertain costs. Given a routing network with a single origin and destination,
the cost function of each edge depends on some uncertain persistent state
parameter. At every period, a random traffic demand is routed through the
network according to a Wardrop equilibrium. The realized costs are publicly
observed and the public Bayesian belief about the state parameter is updated.
We say that there is strong learning when beliefs converge to the truth and
weak learning when the equilibrium flow converges to the complete-information
flow. We characterize the networks for which learning occurs. We prove that
these networks have a series-parallel structure and provide a counterexample to
show that learning may fail in non-series-parallel networks.",Social Learning in Nonatomic Routing Games,2020-09-24 13:13:05,"Emilien Macault, Marco Scarsini, Tristan Tomala","http://arxiv.org/abs/2009.11580v3, http://arxiv.org/pdf/2009.11580v3",econ.TH
34962,th,"Let $V$ be society whose members express preferences about two alternatives,
indifference included. Identifying anonymous binary social choice functions
with binary functions $f=f(k,m)$ defined over the integer triangular grid
G=\{(k,m)\in \mathbb{N}_0\times\mathbb{N}_0 : k+m\le |V|\}$, we show that
every strategy-proof, anonymous social choice function can be described
geometrically by listing, in a sequential manner, groups of segments of G, of
equal (maximum possible) length, alternately horizontal and vertical,
representative of preference profiles that determine the collective choice of
one of the two alternatives. Indeed, we show that every function which is
anonymous and strategy-proof can be described in terms of a sequence of
nonnegative integers $(q_1, q_2, \cdots, q_s)$ corresponding to the
cardinalities of the mentioned groups of segments. We also analyze the
connections between our present representation and another of our earlier
representations involving sequences of majority quotas.
  A Python code is available with the authors for the implementation of any
such social choice function.",Geometry of anonymous binary social choices that are strategy-proof,2020-08-05 13:47:53,"Achille Basile, Surekha Rao, K. P. S. Bhaskara Rao","http://arxiv.org/abs/2008.02041v1, http://arxiv.org/pdf/2008.02041v1",econ.TH
34963,th,"We consider a model where agents differ in their `types' which determines
their voluntary contribution towards a public good. We analyze what the
equilibrium composition of groups are under centralized and centralized choice.
We show that there exists a top-down sorting equilibrium i.e. an equilibrium
where there exists a set of prices which leads to groups that can be ordered by
level of types, with the first k types in the group with the highest price and
so on. This exists both under decentralized and centralized choosing. We also
analyze the model with endogenous group size and examine under what conditions
is top-down sorting socially efficient. We illustrate when integration (i.e.
mixing types so that each group's average type if the same) is socially better
than top-down sorting. Finally, we show that top down sorting is efficient even
when groups compete among themselves.",Pricing group membership,2020-08-05 20:24:55,"Siddhartha Bandyopadhyay, Antonio Cabrales","http://arxiv.org/abs/2008.03102v1, http://arxiv.org/pdf/2008.03102v1",econ.TH
34965,th,"This paper proposes a careful separation between an entity's epistemic system
and their decision system. Crucially, Bayesian counterfactuals are estimated by
the epistemic system; not by the decision system. Based on this remark, I prove
the existence of Newcomb-like problems for which an epistemic system
necessarily expects the entity to make a counterfactually bad decision. I then
address (a slight generalization of) Newcomb's paradox. I solve the specific
case where the player believes that the predictor applies Bayes rule with a
superset of all the data available to the player. I prove that the counterfactual
optimality of the 1-Box strategy depends on the player's prior on the
predictor's additional data. If these additional data are not expected to
reduce sufficiently the predictor's uncertainty on the player's decision, then
the player's epistemic system will counterfactually prefer to 2-Box. But if the
predictor's data is believed to make them quasi-omniscient, then 1-Box will be
counterfactually preferred. Implications of the analysis are then discussed.
More generally, I argue that, to better understand or design an entity, it is
useful to clearly separate the entity's epistemic, decision, but also data
collection, reward and maintenance systems, whether the entity is human,
algorithmic or institutional.",Purely Bayesian counterfactuals versus Newcomb's paradox,2020-08-10 19:56:48,Lê Nguyên Hoang,"http://arxiv.org/abs/2008.04256v1, http://arxiv.org/pdf/2008.04256v1",econ.TH
34966,th,"When making important decisions such as choosing health insurance or a
school, people are often uncertain what levels of attributes will suit their
true preference. After choice, they might realize that their uncertainty
resulted in a mismatch: choosing a sub-optimal alternative, while another
available alternative better matches their needs.
  We study here the overall impact, from a central planner's perspective, of
decisions under such uncertainty. We use the representation of Voronoi
tessellations to locate all individuals and alternatives in an attribute space.
We provide an expression for the probability of correct match, and calculate,
analytically and numerically, the average percentage of matches. We test
dependence on the level of uncertainty and location.
  We find overall considerable mismatch even for low uncertainty - a possible
concern for policy makers. We further explore a commonly used practice -
allocating service representatives to assist individuals' decisions. We show
that within a given budget and uncertainty level, the effective allocation is
for individuals who are close to the boundary between several Voronoi cells,
but are not right on the boundary.",Modelling the expected probability of correct assignment under uncertainty,2020-08-12 16:08:17,"Tom Dvir, Renana Peres, Zeév Rudnick","http://dx.doi.org/10.1038/s41598-020-71558-x, http://arxiv.org/abs/2008.05878v1, http://arxiv.org/pdf/2008.05878v1",physics.soc-ph
34967,th,"We study the nature (i.e., constructive as opposed to non-constructive) of
social welfare orders on infinite utility streams, and their representability
by means of real-valued functions. We assume finite anonymity and introduce a
new efficiency concept we refer to as asymptotic density-one Pareto. We
characterize the existence of representable and constructive social welfare
orders (satisfying the above properties) in terms of easily verifiable
conditions on the feasible set of one-period utilities.",On social welfare orders satisfying anonymity and asymptotic density-one Pareto,2020-08-11 23:23:36,"Ram Sewak Dubey, Giorgio Laguzzi, Francesco Ruscitti","http://arxiv.org/abs/2008.05879v2, http://arxiv.org/pdf/2008.05879v2",econ.TH
34968,th,"In this paper we propose a macro-dynamic age-structured set-up for the
analysis of epidemics/economic dynamics in continuous time. The resulting
optimal control problem is reformulated in an infinite dimensional Hilbert
space framework where we perform the basic steps of dynamic programming
approach. Our main result is a verification theorem which allows us to guess the
feedback form of optimal strategies. This will be a departure point to discuss
the behavior of the models of the family we introduce and their policy
implications.",Verification Results for Age-Structured Models of Economic-Epidemics Dynamics,2020-08-17 17:06:15,"Giorgio Fabbri, Fausto Gozzi, Giovanni Zanco","http://arxiv.org/abs/2008.07335v1, http://arxiv.org/pdf/2008.07335v1",econ.TH
34969,th,"We consider the problem of probabilistic allocation of objects under ordinal
preferences. We devise an allocation mechanism, called the vigilant eating rule
(VER), that applies to nearly arbitrary feasibility constraints. It is
constrained ordinally efficient, can be computed efficiently for a large class
of constraints, and treats agents equally if they have the same preferences and
are subject to the same constraints. When the set of feasible allocations is
convex, we also present a characterization of our rule based on ordinal
egalitarianism. Our results about VER do not just apply to allocation problems
but to all collective choice problems in which agents have ordinal preferences
over discrete outcomes. As a case study, we assume objects have priorities for
agents and apply VER to sets of probabilistic allocations that are constrained
by stability. VER coincides with the (extended) probabilistic serial rule when
priorities are flat and the agent-proposing deterministic deferred acceptance
algorithm when preferences and priorities are strict. While VER always returns
a stable and constrained efficient allocation, it fails to be strategyproof,
unconstrained efficient, and envy-free. We show, however, that each of these
three properties is incompatible with stability and constrained efficiency.",The Vigilant Eating Rule: A General Approach for Probabilistic Economic Design with Constraints,2020-08-20 17:24:34,"Haris Aziz, Florian Brandl","http://arxiv.org/abs/2008.08991v3, http://arxiv.org/pdf/2008.08991v3",econ.TH
34970,th,"Trades based on bilateral (indivisible) contracts can be represented by a
network. Vertices correspond to agents while arcs represent the non-price
elements of a bilateral contract. Given prices for each arc, agents choose the
incident arcs that maximize their utility. We enlarge the model to allow for
polymatroidal constraints on the set of contracts that may be traded, which can
be interpreted as modeling limited one-for-one substitution. We show that for
two-sided markets there exists a competitive equilibrium; however, for
multi-sided markets this may not be possible.",Constrained Trading Networks,2020-08-22 08:11:20,"Can Kizilkale, Rakesh Vohra","http://arxiv.org/abs/2008.09757v1, http://arxiv.org/pdf/2008.09757v1",econ.TH
34972,th,"We study voting rules for participatory budgeting, where a group of voters
collectively decides which projects should be funded using a common budget. We
allow the projects to have arbitrary costs, and the voters to have arbitrary
additive valuations over the projects. We formulate an axiom (Extended
Justified Representation, EJR) that guarantees proportional representation to
groups of voters with common interests. We propose a simple and attractive
voting rule called the Method of Equal Shares that satisfies this axiom for
arbitrary costs and approval utilities, and that satisfies the axiom up to one
project for arbitrary additive valuations. This method can be computed in
polynomial time. In contrast, we show that the standard method for achieving
proportionality in committee elections, Proportional Approval Voting (PAV),
cannot be extended to work with arbitrary costs. Finally, we introduce a
strengthened axiom (Full Justified Representation, FJR) and show that it is
also satisfiable, though by a computationally more expensive and less natural
voting rule.",Proportional Participatory Budgeting with Additive Utilities,2020-08-31 00:04:11,"Dominik Peters, Grzegorz Pierczyński, Piotr Skowron","http://arxiv.org/abs/2008.13276v2, http://arxiv.org/pdf/2008.13276v2",cs.GT
34973,th,"In this paper, we introduce a natural learning rule for mean field games with
finite state and action space, the so-called myopic adjustment process. The
main motivation for these considerations is the complex computations necessary
to determine dynamic mean-field equilibria, which make it seem questionable
whether agents are indeed able to play these equilibria. We prove that the
myopic adjustment process converges locally towards stationary equilibria with
deterministic equilibrium strategies under rather broad conditions. Moreover,
for a two-strategy setting, we also obtain a global convergence result under
stronger, yet intuitive conditions.",A Myopic Adjustment Process for Mean Field Games with Finite State and Action Space,2020-08-31 11:25:00,Berenice Anne Neumann,"http://arxiv.org/abs/2008.13420v1, http://arxiv.org/pdf/2008.13420v1",math.OC
34974,th,"In an election in which each voter ranks all of the candidates, we consider
the head-to-head results between each pair of candidates and form a labeled
directed graph, called the margin graph, which contains the margin of victory
of each candidate over each of the other candidates. A central issue in
developing voting methods is that there can be cycles in this graph, where
candidate $\mathsf{A}$ defeats candidate $\mathsf{B}$, $\mathsf{B}$ defeats
$\mathsf{C}$, and $\mathsf{C}$ defeats $\mathsf{A}$. In this paper we apply the
central limit theorem, graph homology, and linear algebra to analyze how likely
such situations are to occur for large numbers of voters. There is a large
literature on analyzing the probability of having a majority winner; our
analysis is more fine-grained. The result of our analysis is that in elections
with the number of voters going to infinity, margin graphs that are more cyclic
in a certain precise sense are less likely to occur.",An Analysis of Random Elections with Large Numbers of Voters,2020-09-07 12:46:34,Matthew Harrison-Trainor,"http://arxiv.org/abs/2009.02979v1, http://arxiv.org/pdf/2009.02979v1",econ.TH
34982,th,"It is known that a coalition formation game may not have a stable coalition
structure. In this study we propose a new solution concept for these games,
which we call ""stable decomposition"", and show that each game has at least one.
This solution consists of a collection of coalitions organized in sets that
""protect"" each other in a stable way. When sets of this collection are
singletons, the stable decomposition can be identified with a stable coalition
structure. As an application, we study convergence to stability in coalition
formation games.",Stable decompositions of coalition formation games,2020-09-24 16:48:18,"Agustín G. Bonifacio, Elena Inarra, Pablo Neme","http://arxiv.org/abs/2009.11689v2, http://arxiv.org/pdf/2009.11689v2",econ.TH
34975,th,"We study the implications of selling through a voice-based virtual assistant
(VA). The seller has a set of products available and the VA decides which
product to offer and at what price, seeking to maximize its revenue, consumer-
or total-surplus. The consumer is impatient and rational, seeking to maximize
her expected utility given the information available to her. The VA selects
products based on the consumer's request and other information available to it
and then presents them sequentially. Once a product is presented and priced,
the consumer evaluates it and decides whether to make a purchase. The
consumer's valuation of each product comprises a pre-evaluation value, which is
common knowledge, and a post-evaluation component which is private to the
consumer. We solve for the equilibria and develop efficient algorithms for
implementing the solution. We examine the effects of information asymmetry on
the outcomes and study how incentive misalignment depends on the distribution
of private valuations. We find that monotone rankings are optimal in the cases
of a highly patient or impatient consumer and provide a good approximation for
other levels of patience. The relationship between products' expected
valuations and prices depends on the consumer's patience level and is monotone
increasing (decreasing) when the consumer is highly impatient (patient). Also,
the seller's share of total surplus decreases in the amount of private
information. We compare the VA to a traditional web-based interface, where
multiple products are presented simultaneously on each page. We find that
within a page, the higher-value products are priced lower than the lower-value
products when the private valuations are exponentially distributed. Finally,
the web-based interface generally achieves higher profits for the seller than a
VA due to the greater commitment power inherent in its presentation.",Sales Policies for a Virtual Assistant,2020-09-03 01:05:23,"Wenjia Ba, Haim Mendelson, Mingxi Zhu","http://arxiv.org/abs/2009.03719v1, http://arxiv.org/pdf/2009.03719v1",econ.TH
34976,th,"We study the stable marriage problem in two-sided markets with randomly
generated preferences. We consider agents on each side divided into a constant
number of ""soft tiers"", which intuitively indicate the quality of the agent.
Specifically, every agent within a tier has the same public score, and agents
on each side have preferences independently generated proportionally to the
public scores of the other side.
  We compute the expected average rank which agents in each tier have for their
partners in the men-optimal stable matching, and prove concentration results
for the average rank in asymptotically large markets. Furthermore, we show that
despite having a significant effect on ranks, public scores do not strongly
influence the probability of an agent matching to a given tier of the other
side. This generalizes results of [Pittel 1989] which correspond to uniform
preferences. The results quantitatively demonstrate the effect of competition
due to the heterogeneous attractiveness of agents in the market, and we give
the first explicit calculations of rank beyond uniform markets.",Tiered Random Matching Markets: Rank is Proportional to Popularity,2020-09-10 22:49:51,"Itai Ashlagi, Mark Braverman, Amin Saberi, Clayton Thomas, Geng Zhao","http://arxiv.org/abs/2009.05124v2, http://arxiv.org/pdf/2009.05124v2",cs.GT
34977,th,"A rich class of mechanism design problems can be understood as
incomplete-information games between a principal who commits to a policy and an
agent who responds, with payoffs determined by an unknown state of the world.
Traditionally, these models require strong and often-impractical assumptions
about beliefs (a common prior over the state). In this paper, we dispense with
the common prior. Instead, we consider a repeated interaction where both the
principal and the agent may learn over time from the state history. We
reformulate mechanism design as a reinforcement learning problem and develop
mechanisms that attain natural benchmarks without any assumptions on the
state-generating process. Our results make use of novel behavioral assumptions
for the agent -- centered around counterfactual internal regret -- that capture
the spirit of rationality without relying on beliefs.",Mechanisms for a No-Regret Agent: Beyond the Common Prior,2020-09-11 19:41:20,"Modibo Camara, Jason Hartline, Aleck Johnsen","http://arxiv.org/abs/2009.05518v1, http://arxiv.org/pdf/2009.05518v1",econ.TH
34978,th,"This article concerns the tail probabilities of a light-tailed
Markov-modulated L\'evy process stopped at a state-dependent Poisson rate. The
tails are shown to decay exponentially at rates given by the unique positive
and negative roots of the spectral abscissa of a certain matrix-valued
function. We illustrate the use of our results with an application to the
stationary distribution of wealth in a simple economic model in which agents
with constant absolute risk aversion are subject to random mortality and income
fluctuation.",Tail behavior of stopped Lévy processes with Markov modulation,2020-09-17 04:50:28,"Brendan K. Beare, Won-Ki Seo, Alexis Akira Toda","http://dx.doi.org/10.1017/S0266466621000268, http://arxiv.org/abs/2009.08010v1, http://arxiv.org/pdf/2009.08010v1",math.PR
34979,th,"The Arrow-Debreu extension of the classic Hylland-Zeckhauser scheme for a
one-sided matching market -- called ADHZ in this paper -- has natural
applications but has instances which do not admit equilibria. By introducing
approximation, we define the $\epsilon$-approximate ADHZ model, and we give the
following results.
  * Existence of equilibrium under linear utility functions. We prove that the
equilibrium satisfies Pareto optimality, approximate envy-freeness, and
approximate weak core stability.
  * A combinatorial polynomial-time algorithm for an $\epsilon$-approximate
ADHZ equilibrium for the case of dichotomous, and more generally bi-valued,
utilities.
  * An instance of ADHZ, with dichotomous utilities and a strongly connected
demand graph, which does not admit an equilibrium.
  Since computing an equilibrium for HZ is likely to be highly intractable and
because of the difficulty of extending HZ to more general utility functions,
Hosseini and Vazirani proposed (a rich collection of) Nash-bargaining-based
matching market models. For the dichotomous-utilities case of their model,
linear Arrow-Debreu Nash bargaining one-sided matching market (1LAD), we give a
combinatorial, strongly polynomial-time algorithm and show that it admits a
rational convex program.",One-Sided Matching Markets with Endowments: Equilibria and Algorithms,2020-09-22 08:06:43,"Jugal Garg, Thorben Tröbst, Vijay V. Vazirani","http://arxiv.org/abs/2009.10320v3, http://arxiv.org/pdf/2009.10320v3",cs.GT
34980,th,"It is well-known that optimal (i.e., revenue-maximizing) selling mechanisms
in multidimensional type spaces may involve randomization. We obtain conditions
under which deterministic mechanisms are optimal for selling two identical,
indivisible objects to a single buyer. We analyze two settings: (i) decreasing
marginal values (DMV) and (ii) increasing marginal values (IMV). Thus, the
values of the buyer for the two units are not independent.
We show that under a well-known condition on distributions (due to McAfee and
McMillan (1988)), (a) it is optimal to sell the first unit deterministically in
the DMV model and (b) it is optimal to bundle (which is a deterministic
mechanism) in the IMV model. Under a stronger sufficient condition on
distributions, a deterministic mechanism is optimal in the DMV model.
  Our results apply to heterogeneous objects when there is a specified sequence
in which the two objects must be sold.",Selling Two Identical Objects,2020-09-24 11:24:30,"Sushil Bikhchandani, Debasis Mishra","http://dx.doi.org/10.1016/j.jet.2021.105397, http://arxiv.org/abs/2009.11545v5, http://arxiv.org/pdf/2009.11545v5",econ.TH
34997,th,"We propose generalized versions of strong equity and Pigou-Dalton transfer
principle. We study the existence and the real valued representation of social
welfare relations satisfying these two generalized equity principles. Our
results characterize the restrictions on one period utility domains for the
equitable social welfare relation (i) to exist; and (ii) to admit real-valued
representations.",Equitable preference relations on infinite utility streams,2020-11-12 23:00:19,"Ram S. Dubey, Giorgio Laguzzi","http://arxiv.org/abs/2012.06481v2, http://arxiv.org/pdf/2012.06481v2",econ.TH
34983,th,"We consider a combinatorial auction model where preferences of agents over
bundles of objects and payments need not be quasilinear. However, we restrict
the preferences of agents to be dichotomous. An agent with dichotomous
preference partitions the set of bundles of objects as acceptable and
unacceptable, and at the same payment level, she is indifferent between bundles
in each class but strictly prefers acceptable to unacceptable bundles. We show
that there is no Pareto efficient, dominant strategy incentive compatible
(DSIC), individually rational (IR) mechanism satisfying no subsidy if the
domain of preferences includes all dichotomous preferences. However, a
generalization of the VCG mechanism is Pareto efficient, DSIC, IR and satisfies
no subsidy if the domain of preferences contains only positive income effect
dichotomous preferences. We show the tightness of this result: adding any
non-dichotomous preference (satisfying some natural properties) to the domain
of quasilinear dichotomous preferences brings back the impossibility result.",Pareto efficient combinatorial auctions: dichotomous preferences without quasilinearity,2020-09-25 13:27:08,"Komal Malik, Debasis Mishra","http://dx.doi.org/10.1016/j.jet.2020.105128, http://arxiv.org/abs/2009.12114v1, http://arxiv.org/pdf/2009.12114v1",econ.TH
34984,th,"We investigate traffic routing both from the perspective of theory as well as
real world data. First, we introduce a new type of games: $\theta$-free flow
games. Here, commuters only consider, in their strategy sets, paths whose
free-flow costs (informally their lengths) are within a small multiplicative
$(1+\theta)$ constant of the optimal free-flow cost path connecting their
source and destination, where $\theta\geq0$. We provide an exhaustive analysis
of tight bounds on PoA($\theta$) for arbitrary classes of cost functions, both
in the case of general congestion/routing games as well as in the special case
of path-disjoint networks. Second, by using a large mobility dataset in
Singapore, we inspect minute-by-minute decision-making of thousands of
commuters, and find that $\theta=1$ is a good estimate of agents' route
(pre)selection mechanism. In contrast, in Pigou networks, the ratio of the
free-flow costs of the routes, and thus $\theta$, is \textit{infinite}; so,
although such worst case networks are mathematically simple, they correspond to
artificial routing scenarios with little resemblance to real world conditions,
opening the possibility of proving much stronger Price of Anarchy guarantees by
explicitly studying their dependency on $\theta$. For example, in the case of
the standard Bureau of Public Roads (BPR) cost model, where $c_e(x)= a_e
x^4+b_e$, and for quartic cost functions in general, the standard PoA bound for
$\theta=\infty$ is $2.1505$, and this is tight both for general networks as
well as path-disjoint and even parallel-edge networks. In comparison, for
$\theta=1$, the PoA in the case of general networks is only $1.6994$, whereas
for path-disjoint/parallel-edge networks is even smaller ($1.3652$), showing
that both the route geometries as captured by the parameter $\theta$ as well as
the network topology have significant effects on PoA.",Data-Driven Models of Selfish Routing: Why Price of Anarchy Does Depend on Network Topology,2020-09-27 18:22:33,"Francisco Benita, Vittorio Bilò, Barnabé Monnot, Georgios Piliouras, Cosimo Vinci","http://arxiv.org/abs/2009.12871v2, http://arxiv.org/pdf/2009.12871v2",cs.GT
34985,th,"We explore the consequences of weakening the notion of incentive
compatibility from strategy-proofness to ordinal Bayesian incentive
compatibility (OBIC) in the random assignment model. If the common prior of the
agents is a uniform prior, then a large class of random mechanisms are OBIC
with respect to this prior -- this includes the probabilistic serial mechanism.
We then introduce a robust version of OBIC: a mechanism is locally robust OBIC
if it is OBIC with respect to all independent priors in some neighborhood of a
given independent prior. We show that every locally robust OBIC mechanism
satisfying a mild property called elementary monotonicity is strategy-proof.
This leads to a strengthening of the impossibility result in Bogomolnaia and
Moulin (2001): if there are at least four agents, there is no locally robust
OBIC and ordinally efficient mechanism satisfying equal treatment of equals.",Ordinal Bayesian incentive compatibility in random assignment model,2020-09-28 10:24:49,"Sulagna Dasgupta, Debasis Mishra","http://dx.doi.org/10.1007/s10058-022-00289-4, http://arxiv.org/abs/2009.13104v2, http://arxiv.org/pdf/2009.13104v2",econ.TH
34986,th,"In coordination games and speculative over-the-counter financial markets,
solutions depend on higher-order average expectations: agents' expectations
about what counterparties, on average, expect their counterparties to think,
etc. We offer a unified analysis of these objects and their limits, for general
information structures, priors, and networks of counterparty relationships. Our
key device is an interaction structure combining the network and agents'
beliefs, which we analyze using Markov methods. This device allows us to nest
classical beauty contests and network games within one model and unify their
results. Two applications illustrate the techniques: The first characterizes
when slight optimism about counterparties' average expectations leads to
contagion of optimism and extreme asset prices. The second describes the
tyranny of the least-informed: agents coordinating on the prior expectations of
the one with the worst private information, despite all having nearly common
certainty, based on precise private signals, of the ex post optimal action.","Expectations, Networks, and Conventions",2020-09-29 09:21:16,"Benjamin Golub, Stephen Morris","http://arxiv.org/abs/2009.13802v1, http://arxiv.org/pdf/2009.13802v1",econ.TH
34987,th,"We study the diffusion of a true and a false message (the rumor) in a social
network. Upon hearing a message, individuals may believe it, disbelieve it, or
debunk it through costly verification. Whenever the truth survives in steady
state, so does the rumor. Communication intensity in itself is irrelevant for
relative rumor prevalence, and the effect of homophily depends on the exact
verification process and equilibrium verification rates. Our model highlights
that successful policies in the fight against rumors increase individuals'
incentives to verify.",Debunking Rumors in Networks,2020-10-02 17:10:29,"Luca P. Merlino, Paolo Pin, Nicole Tabasso","http://arxiv.org/abs/2010.01018v3, http://arxiv.org/pdf/2010.01018v3",econ.TH
34988,th,"We prove that a fractional perfect matching in a non-bipartite graph can be
written, in polynomial time, as a convex combination of perfect matchings. This
extends the Birkhoff-von Neumann Theorem from bipartite to non-bipartite
graphs.
  The algorithm of Birkhoff and von Neumann is greedy; it starts with the given
fractional perfect matching and successively ""removes"" from it perfect
matchings, with appropriate coefficients. This fails in non-bipartite graphs --
removing perfect matchings arbitrarily can lead to a graph that is non-empty
but has no perfect matchings. Using odd cuts appropriately saves the day.",An Extension of the Birkhoff-von Neumann Theorem to Non-Bipartite Graphs,2020-10-12 22:20:46,Vijay V. Vazirani,"http://arxiv.org/abs/2010.05984v2, http://arxiv.org/pdf/2010.05984v2",cs.DS
34998,th,"In this paper we characterize the niveloidal preferences that satisfy the
Weak Order, Monotonicity, Archimedean, and Weak C-Independence Axioms from the
point of view of an intra-personal, leader-follower game. We also show that the
leader's strategy space can serve as an ambiguity aversion index.",Decision Making under Uncertainty: A Game of Two Selves,2020-12-14 16:42:47,Jianming Xia,"http://arxiv.org/abs/2012.07509v1, http://arxiv.org/pdf/2012.07509v1",econ.TH
34989,th,"The seller of an asset has the option to buy hard information about the value
of the asset from an intermediary. The seller can then disclose the acquired
information before selling the asset in a competitive market. We study how the
intermediary designs and sells hard information to robustly maximize her
revenue across all equilibria. Even though the intermediary could use an
accurate test that reveals the asset's value, we show that robust revenue
maximization leads to a noisy test with a continuum of possible scores that are
distributed exponentially. In addition, the intermediary always charges the
seller for disclosing the test score to the market, but not necessarily for
running the test. This enables the intermediary to robustly appropriate a
significant share of the surplus resulting from the asset sale even though the
information generated by the test provides no social value.",How to Sell Hard Information,2020-10-16 00:48:35,"S. Nageeb Ali, Nima Haghpanah, Xiao Lin, Ron Siegel","http://arxiv.org/abs/2010.08037v1, http://arxiv.org/pdf/2010.08037v1",econ.TH
34990,th,"We examine how the randomness of behavior and the flow of information between
agents affect the formation of opinions. Our main research involves the process
of opinion evolution, the formation of opinion clusters, and the probability
of sustaining an opinion. The results show that opinion formation (clustering of
opinions) is influenced by both the flow of information between agents (interactions
outside the closest neighbors) and randomness in adopting opinions.",Are randomness of behavior and information flow important to opinion forming in organization?,2020-10-29 19:33:09,"Agnieszka Kowalska-Styczeń, Krzysztof Malarz","http://arxiv.org/abs/2010.15736v1, http://arxiv.org/pdf/2010.15736v1",econ.TH
34991,th,"This paper expands the work on distributionally robust newsvendor to
incorporate moment constraints. The use of Wasserstein distance as the
ambiguity measure is preserved. The infinite dimensional primal problem is
formulated; problem of moments duality is invoked to derive the simpler finite
dimensional dual problem. An important research question is: How does
distributional ambiguity affect the optimal order quantity and the
corresponding profits/costs? To investigate this, some theory is developed and
a case study in auto sales is performed. We conclude with some comments on
directions for further research.",Distributionally Robust Newsvendor with Moment Constraints,2020-10-30 19:57:51,"Derek Singh, Shuzhong Zhang","http://arxiv.org/abs/2010.16369v1, http://arxiv.org/pdf/2010.16369v1",q-fin.MF
34992,th,"We propose a new approach to solving dynamic decision problems with unbounded
rewards based on the transformations used in Q-learning. In our case, the
objective of the transform is to convert an unbounded dynamic program into a
bounded one. The approach is general enough to handle problems for which
existing methods struggle, and yet simple relative to other techniques and
accessible for applied work. We show by example that many common decision
problems satisfy our conditions.",Unbounded Dynamic Programming via the Q-Transform,2020-12-01 05:26:07,"Qingyin Ma, John Stachurski, Alexis Akira Toda","http://dx.doi.org/10.1016/j.jmateco.2022.102652, http://arxiv.org/abs/2012.00219v2, http://arxiv.org/pdf/2012.00219v2",math.OC
34993,th,"In this paper, we propose and analyse two game theoretical models useful to
design marketing channels attribution mechanisms based on cooperative TU games
and bankruptcy problems, respectively. First, we analyse the Sum Game, a
coalitional game introduced by Morales (2016). We extend the ideas introduced
in Zhao et al. (2018) and Cano-Berlanga et al. (2017) to the case in which the
order and the repetition of channels on the paths to conversion are taken into
account. In all studied cases, the Shapley value is proposed as the attribution
mechanism. Second, a bankruptcy problem approach is proposed, and a similar
analysis is developed relying on the Constrained Equal Loss (CEL) and
Proportional (PROP) rules as attribution mechanisms. In particular, it is
relevant to note that the class of attribution bankruptcy problems is a proper
subclass of bankruptcy problems.",Some game theoretic marketing attribution models,2020-11-02 16:18:04,"Elisenda Molina, Juan Tejada, Tom Weiss","http://arxiv.org/abs/2012.00812v1, http://arxiv.org/pdf/2012.00812v1",econ.TH
34994,th,"These lecture notes attempt a mathematical treatment of game theory akin to
mathematical physics. A game instance is defined as a sequence of states of an
underlying system. This viewpoint unifies classical mathematical models for
2-person and, in particular, combinatorial and zero-sum games as well as models
for investing and betting. n-person games are studied with emphasis on notions
of utilities, potentials and equilibria, which allows cooperative games to be
subsumed as special cases. The representation of a game theoretic system in a
Hilbert space furthermore establishes a link to the mathematical model of
quantum mechanics and general interaction systems.
  The notes sketch an outline of the theory. Details are available as a
textbook elsewhere.",Mathematical Game Theory: A New Approach,2020-12-03 14:50:00,Ulrich Faigle,"http://arxiv.org/abs/2012.01850v3, http://arxiv.org/pdf/2012.01850v3",econ.TH
34995,th,"This paper investigates the reality and accuracy of evolutionary game
dynamics theory in human game behavior experiments. In classical game theory,
the central concept is the Nash equilibrium, whose reality and accuracy have
been well established since its first experimental illustration in the O'Neill
game experiment of 1987. In game dynamics theory, the central approach is the
dynamics equations; however, their reality and accuracy are rarely examined,
especially in high dimensional games. By developing a new approach, namely the
eigencycle approach, based on the eigenvectors of the game dynamics equations,
we uncover the fine structure of the cycles in the same experiments. We show
that the eigencycle approach can increase accuracy by an order of magnitude on
the human dynamic behavior data. As eigenvectors are fundamental in dynamical
systems theory, with applications in natural, social, and virtual worlds, the
power of eigencycles is to be expected. Inspired by the high dimensional
eigencycles, we suggest that the mathematical concept of 'invariant manifolds'
could be a candidate for the central concept of game dynamics theory, just as
the fixed point is for classical game theory.",Human Social Cycling Spectrum,2020-12-06 19:39:42,"Wang Zhijian, Yao Qingmei","http://arxiv.org/abs/2012.03315v2, http://arxiv.org/pdf/2012.03315v2",econ.TH
34999,th,"Envy-freeness up to any good (EFX) provides a strong and intuitive guarantee
of fairness in the allocation of indivisible goods. But whether such
allocations always exist or whether they can be efficiently computed remains an
important open question. We study the existence and computation of EFX in
conjunction with various other economic properties under lexicographic
preferences--a well-studied preference model in artificial intelligence and
economics. In sharp contrast to the known results for additive valuations, we
not only prove the existence of EFX and Pareto optimal allocations, but in fact
provide an algorithmic characterization of these two properties. We also
characterize the mechanisms that are, in addition, strategyproof, non-bossy,
and neutral. When the efficiency notion is strengthened to rank-maximality, we
obtain non-existence and computational hardness results, and show that
tractability can be restored when EFX is relaxed to another well-studied
fairness notion called maximin share guarantee (MMS).",Fair and Efficient Allocations under Lexicographic Preferences,2020-12-14 19:27:26,"Hadi Hosseini, Sujoy Sikdar, Rohit Vaish, Lirong Xia","http://arxiv.org/abs/2012.07680v1, http://arxiv.org/pdf/2012.07680v1",cs.GT
35000,th,"We analyse 'stop-and-go' containment policies that produce infection cycles
as periods of tight lockdowns are followed by periods of falling infection
rates. The subsequent relaxation of containment measures allows cases to
increase again until another lockdown is imposed and the cycle repeats. The
policies followed by several European countries during the Covid-19 pandemic
seem to fit this pattern. We show that 'stop-and-go' should lead to lower
medical costs than keeping infections at the midpoint between the highs and
lows produced by 'stop-and-go'. Increasing the upper and reducing the lower
limits of a stop-and-go policy by the same amount would lower the average
medical load. But increasing the upper and lowering the lower limit while
keeping the geometric average constant would have the opposite effect. We also
show that with economic costs proportional to containment, any path that brings
infections back to the original level (technically a closed cycle) has the same
overall economic cost.",The economics of stop-and-go epidemic control,2020-12-14 20:38:50,"Claudius Gros, Daniel Gros","http://arxiv.org/abs/2012.07739v2, http://arxiv.org/pdf/2012.07739v2",econ.TH
35001,th,"We construct a model of an exchange economy in which agents trade assets
contingent on an observable signal, the probability of which depends on public
opinion. The agents in our model are replaced occasionally and each person
updates beliefs in response to observed outcomes. We show that the distribution
of the observed signal is described by a quasi-non-ergodic process and that
people continue to disagree with each other forever. These disagreements
generate large wealth inequalities that arise from the multiplicative nature of
wealth dynamics which make successful bold bets highly profitable.","Self-Fulfilling Prophecies, Quasi Non-Ergodicity and Wealth Inequality",2020-12-17 11:40:26,"Jean-Philippe Bouchaud, Roger Farmer","http://arxiv.org/abs/2012.09445v3, http://arxiv.org/pdf/2012.09445v3",econ.TH
35002,th,"Consider the problem of assigning indivisible objects to agents with strict
ordinal preferences over objects, where each agent is interested in consuming
at most one object, and objects have integer minimum and maximum quotas. We
define an assignment to be feasible if it satisfies all quotas and assume such
an assignment always exists. The Probabilistic Serial (PS) and Random Priority
(RP) mechanisms are generalised based on the same intuitive idea: Allow agents
to consume their most preferred available object until the total mass of agents
yet to be allocated is exactly equal to the remaining amount of unfilled lower
quotas; in this case, we restrict agents' menus to objects which are yet to
fill their minimum quotas. We show the mechanisms satisfy the same criteria as
their classical counterparts: PS is ordinally efficient, envy-free and weakly
strategy-proof; RP is strategy-proof, weakly envy-free but not ordinally
efficient.",The Probabilistic Serial and Random Priority Mechanisms with Minimum Quotas,2020-12-21 00:38:24,Marek Bojko,"http://arxiv.org/abs/2012.11028v1, http://arxiv.org/pdf/2012.11028v1",econ.TH
35003,th,"We consider the problem of implementing a fixed social choice function
between multiple players (which takes as input a type $t_i$ from each player
$i$ and outputs an outcome $f(t_1,\ldots, t_n)$), in which each player must be
incentivized to follow the protocol. In particular, we study the communication
requirements of a protocol which: (a) implements $f$, (b) implements $f$ and
computes payments that make it ex-post incentive compatible (EPIC) to follow
the protocol, and (c) implements $f$ and computes payments in a way that makes
it dominant-strategy incentive compatible (DSIC) to follow the protocol.
  We show exponential separations between all three of these quantities,
already for just two players. That is, we first construct an $f$ such that $f$
can be implemented in communication $c$, but any EPIC implementation of $f$
(with any choice of payments) requires communication $\exp(c)$. This answers an
open question of [FS09, BBS13]. Second, we construct an $f$ such that an EPIC
protocol implements $f$ with communication $C$, but all DSIC implementations of
$f$ require communication $\exp(C)$.",Exponential Communication Separations between Notions of Selfishness,2020-12-29 21:41:44,"Aviad Rubinstein, Raghuvansh R. Saxena, Clayton Thomas, S. Mathew Weinberg, Junyao Zhao","http://arxiv.org/abs/2012.14898v2, http://arxiv.org/pdf/2012.14898v2",cs.GT
35004,th,"We study public goods games played on networks with possibly non-reciprocal
relationships between players. Examples for this type of interactions include
one-sided relationships, mutual but unequal relationships, and parasitism. It
is well known that many simple learning processes converge to a Nash
equilibrium if interactions are reciprocal, but this is not true in general for
directed networks. However, by a simple tool of rescaling the strategy space,
we generalize the convergence result for a class of directed networks and show
that it is characterized by transitive weight matrices. Additionally, we show
convergence in a second class of networks; those rescalable into networks with
weak externalities. We characterize the latter class by the spectral properties
of the absolute value of the network's weight matrix and show that it includes
all directed acyclic networks.",Best-response dynamics in directed network games,2021-01-11 16:24:58,"Péter Bayer, György Kozics, Nóra Gabriella Szőke","http://arxiv.org/abs/2101.03863v1, http://arxiv.org/pdf/2101.03863v1",cs.GT
35012,th,"I introduce novel preference formulations which capture aversion to ambiguity
about unknown and potentially time-varying volatility. I compare these
preferences with Gilboa and Schmeidler's maxmin expected utility as well as
variational formulations of ambiguity aversion. The impact of ambiguity
aversion is illustrated in a simple static model of portfolio choice, as well
as a dynamic model of optimal contracting under repeated moral hazard.
Implications for investor beliefs, optimal design of corporate securities, and
asset pricing are explored.",New Formulations of Ambiguous Volatility with an Application to Optimal Dynamic Contracting,2021-01-29 01:37:03,Peter G. Hansen,"http://arxiv.org/abs/2101.12306v1, http://arxiv.org/pdf/2101.12306v1",econ.TH
35005,th,"Recently, there has been growing interest in, and need for, dynamic
pricing algorithms, especially in online marketplaces offering smart pricing
options for big online stores. We present an approach to adjust prices based on
the observed online market data. The key idea is to characterize optimal prices
as minimizers of a total expected revenue function, which turns out to be
convex. We assume that consumers face information processing costs, hence,
follow a discrete choice demand model, and suppliers are equipped with quantity
adjustment costs. We prove the strong smoothness of the total expected revenue
function by deriving the strong convexity modulus of its dual. Our
gradient-based pricing schemes outbalance supply and demand at the convergence
rates of $\mathcal{O}(\frac{1}{t})$ and $\mathcal{O}(\frac{1}{t^2})$,
respectively. This suggests that the imperfect behavior of consumers and
suppliers helps to stabilize the market.",Dynamic pricing under nested logit demand,2021-01-12 17:08:55,"David Müller, Yurii Nesterov, Vladimir Shikhman","http://arxiv.org/abs/2101.04486v1, http://arxiv.org/pdf/2101.04486v1",math.OC
35006,th,"We present a polynomial-time algorithm that determines, given some choice
rule, whether there exists an obviously strategy-proof mechanism for that
choice rule.",On the Computational Properties of Obviously Strategy-Proof Mechanisms,2021-01-13 18:48:11,"Louis Golowich, Shengwu Li","http://arxiv.org/abs/2101.05149v3, http://arxiv.org/pdf/2101.05149v3",econ.TH
35007,th,"Optimization problems with discrete decisions are nonconvex and thus lack
strong duality, which limits the usefulness of tools such as shadow prices and
the KKT conditions. It was shown in Burer(2009) that mixed-binary quadratic
programs can be written as completely positive programs, which are convex.
Completely positive reformulations of discrete optimization problems therefore
have strong duality if a constraint qualification is satisfied. We apply this
perspective in two ways. First, we write unit commitment in power systems as a
completely positive program, and use the dual copositive program to design a
new pricing mechanism. Second, we reformulate integer programming games in
terms of completely positive programming, and use the KKT conditions to solve
for pure strategy Nash equilibria. To facilitate implementation, we also design
a cutting plane algorithm for solving copositive programs exactly.",Copositive Duality for Discrete Markets and Games,2021-01-14 01:48:40,"Cheng Guo, Merve Bodur, Joshua A. Taylor","http://arxiv.org/abs/2101.05379v2, http://arxiv.org/pdf/2101.05379v2",math.OC
35008,th,"Network games provide a natural machinery to compactly represent strategic
interactions among agents whose payoffs exhibit sparsity in their dependence on
the actions of others. Besides encoding interaction sparsity, however, real
networks often exhibit a multi-scale structure, in which agents can be grouped
into communities, those communities further grouped, and so on, and where
interactions among such groups may also exhibit sparsity. We present a general
model of multi-scale network games that encodes such multi-level structure. We
then develop several algorithmic approaches that leverage this multi-scale
structure, and derive sufficient conditions for convergence of these to a Nash
equilibrium. Our numerical experiments demonstrate that the proposed approaches
enable orders of magnitude improvements in scalability when computing Nash
equilibria in such games. For example, we can solve previously intractable
instances involving up to 1 million agents in under 15 minutes.",Multi-Scale Games: Representing and Solving Games on Networks with Group Structure,2021-01-20 23:46:44,"Kun Jin, Yevgeniy Vorobeychik, Mingyan Liu","http://arxiv.org/abs/2101.08314v1, http://arxiv.org/pdf/2101.08314v1",cs.CE
35009,th,"This paper initiates a discussion of mechanism design when the participating
agents exhibit preferences that deviate from expected utility theory (EUT). In
particular, we consider mechanism design for systems where the agents are
modeled as having cumulative prospect theory (CPT) preferences, which is a
generalization of EUT preferences. We point out some of the key modifications
needed in the theory of mechanism design that arise from agents having CPT
preferences and some of the shortcomings of the classical mechanism design
framework. In particular, we show that the revelation principle, which has
traditionally played a fundamental role in mechanism design, does not continue
to hold under CPT. We develop an appropriate framework that we call mediated
mechanism design which allows us to recover the revelation principle for CPT
agents. We conclude with some interesting directions for future work.",Mechanism Design for Cumulative Prospect Theoretic Agents: A General Framework and the Revelation Principle,2021-01-21 20:03:27,"Soham R. Phade, Venkat Anantharam","http://arxiv.org/abs/2101.08722v1, http://arxiv.org/pdf/2101.08722v1",cs.GT
35010,th,"Mill's classic argument for liberty requires that people's exercise of
freedom should be governed by a no-harm principle (NHP). In this paper, we
develop the concept of a no-harm equilibrium in $n$-person games where players
maximize utility subject to the constraint of the NHP. Our main result is in
the spirit of the fundamental theorems of welfare economics. We show that for
every initial `reference point' in a game the associated no-harm equilibrium is
Pareto efficient and, conversely, every Pareto efficient point can be supported
as a no-harm equilibrium for some initial reference point.","No-harm principle, rationality, and Pareto optimality in games",2021-01-26 14:34:51,"Shaun Hargreaves Heap, Mehmet S. Ismail","http://arxiv.org/abs/2101.10723v4, http://arxiv.org/pdf/2101.10723v4",econ.TH
35011,th,"Human societies are characterized, among other features, by three constituent
features. (A) Options, as for jobs and societal positions, differ with respect
to their associated monetary and non-monetary payoffs. (B) Competition leads to
reduced payoffs when individuals compete for the same option with others. (C)
People care how they are doing relatively to others. The latter trait, the
propensity to compare one's own success with that of others, expresses itself
as envy.
  It is shown that the combination of (A)-(C) leads to spontaneous class
stratification. Societies of agents split endogenously into two social classes,
an upper and a lower class, when envy becomes relevant. A comprehensive
analysis of the Nash equilibria characterizing a basic reference game is
presented. Class separation is due to the condensation of the strategies of
lower-class agents, which play an identical mixed strategy. Upper class agents
do not condense, following individualist pure strategies.
  Model and results are size-consistent, holding for arbitrarily large numbers of
agents and options. Analytic results are confirmed by extensive numerical
simulations. An analogy to interacting confined classical particles is
discussed.",Collective strategy condensation: When envy splits societies,2021-01-22 13:51:35,Claudius Gros,"http://dx.doi.org/10.3390/e23020157, http://arxiv.org/abs/2101.10824v1, http://arxiv.org/pdf/2101.10824v1",physics.soc-ph
35364,th,"We study how long-lived rational agents learn from repeatedly observing a
private signal and each others' actions. With normal signals, a group of any
size learns more slowly than just four agents who directly observe each others'
private signals in each period. Similar results apply to general signal
structures. We identify rational groupthink---in which agents ignore their
private signals and choose the same action for long periods of time---as the
cause of this failure of information aggregation.",Rational Groupthink,2014-12-22 23:24:11,"Matan Harel, Elchanan Mossel, Philipp Strack, Omer Tamuz","http://arxiv.org/abs/1412.7172v7, http://arxiv.org/pdf/1412.7172v7",cs.GT
35013,th,"We study zero-sum repeated games where the minimizing player has to pay a
certain cost each time he changes his action. Our contribution is twofold.
First, we show that the value of the game exists in stationary strategies,
depending solely on the previous action of the minimizing player, not the
entire history. We provide a full characterization of the value and the optimal
strategies. The strategies exhibit a robustness property and typically do not
change with a small perturbation of the switching costs. Second, we consider a
case where the minimizing player is limited to playing simpler strategies that
are completely history-independent. Here too, we provide a full
characterization of the (minimax) value and the strategies for obtaining it.
Moreover, we present several bounds on the loss due to this limitation.",Repeated Games with Switching Costs: Stationary vs History Independent Strategies,2021-02-26 23:39:32,"Yevgeny Tsodikovich, Xavier Venel, Anna Zseleva","http://arxiv.org/abs/2103.00045v3, http://arxiv.org/pdf/2103.00045v3",math.OC
35014,th,"The notion of distortion in social choice problems has been defined to
measure the loss in efficiency -- typically measured by the utilitarian social
welfare, the sum of utilities of the participating agents -- due to having
access only to limited information about the preferences of the agents. We
survey the most significant results of the literature on distortion from the
past 15 years, and highlight important open problems and the most promising
avenues of ongoing and future work.",Distortion in Social Choice Problems: The First 15 Years and Beyond,2021-03-01 14:00:12,"Elliot Anshelevich, Aris Filos-Ratsikas, Nisarg Shah, Alexandros A. Voudouris","http://arxiv.org/abs/2103.00911v1, http://arxiv.org/pdf/2103.00911v1",cs.GT
35015,th,"We study the problem of repeatedly auctioning off an item to one of $k$
bidders where: a) bidders have a per-round individual rationality constraint,
b) bidders may leave the mechanism at any point, and c) the bidders' valuations
are adversarially chosen (the prior-free setting). Without these constraints,
the auctioneer can run a second-price auction to ""sell the business"" and
receive the second highest total value for the entire stream of items. We show
that under these constraints, the auctioneer can attain a constant fraction of
the ""sell the business"" benchmark, but no more than $2/e$ of this benchmark.
  In the course of doing so, we design mechanisms for a single bidder problem
of independent interest: how should you repeatedly sell an item to a (per-round
IR) buyer with adversarial valuations if you know their total value over all
rounds is $V$ but not how their value changes over time? We demonstrate a
mechanism that achieves revenue $V/e$ and show that this is tight.",Prior-free Dynamic Mechanism Design With Limited Liability,2021-03-02 20:06:15,"Mark Braverman, Jon Schneider, S. Matthew Weinberg","http://arxiv.org/abs/2103.01868v1, http://arxiv.org/pdf/2103.01868v1",cs.GT
35016,th,"We study mechanisms for selling a single item when buyers have private costs
for participating in the mechanism. An agent's participation cost can also be
interpreted as an outside option value that she must forego to participate.
This substantially changes the revenue maximization problem, which becomes
non-convex in the presence of participation costs. For multiple buyers, we show
how to construct a $(2+\epsilon)$-approximately revenue-optimal mechanism in
polynomial time. Our approach makes use of a many-buyers-to-single-buyer
reduction, and in the single-buyer case our mechanism improves to an FPTAS. We
also bound the menu size and the sample complexity for the optimal single-buyer
mechanism. Moreover, we show that posting a single price in the single-buyer
case is in fact optimal under the assumption that either (1) the participation
cost is independent of the value, and the value distribution has decreasing
marginal revenue or monotone hazard rate; or (2) the participation cost is a
concave function of the value. When there are multiple buyers, we show that
sequential posted pricing guarantees a large fraction of the optimal revenue
under similar conditions.",Revenue Maximization for Buyers with Costly Participation,2021-03-06 02:37:18,"Yannai A. Gonczarowski, Nicole Immorlica, Yingkai Li, Brendan Lucier","http://arxiv.org/abs/2103.03980v4, http://arxiv.org/pdf/2103.03980v4",cs.GT
35017,th,"We formulate a mean field game where each player stops a privately observed
Brownian motion with absorption. Players are ranked according to their level of
stopping and rewarded as a function of their relative rank. There is a unique
mean field equilibrium and it is shown to be the limit of associated $n$-player
games. Conversely, the mean field strategy induces $n$-player
$\varepsilon$-Nash equilibria for any continuous reward function -- but not for
discontinuous ones. In a second part, we study the problem of a principal who
can choose how to distribute a reward budget over the ranks and aims to
maximize the performance of the median player. The optimal reward design
(contract) is found in closed form, complementing the merely partial results
available in the $n$-player case. We then analyze the quality of the mean field
design when used as a proxy for the optimizer in the $n$-player game.
Surprisingly, the quality deteriorates dramatically as $n$ grows. We explain
this with an asymptotic singularity in the induced $n$-player equilibrium
distributions.",Mean Field Contest with Singularity,2021-03-07 03:33:32,"Marcel Nutz, Yuchong Zhang","http://arxiv.org/abs/2103.04219v1, http://arxiv.org/pdf/2103.04219v1",math.OC
35018,th,"We consider a model of a data broker selling information to a single agent to
maximize his revenue. The agent has a private valuation of the additional
information, and upon receiving the signal from the data broker, the agent can
conduct her own experiment to refine her posterior belief on the states with
additional costs. To maximize expected revenue, only offering full information
in general is suboptimal, and the optimal mechanism may contain a continuum of
menu options with partial information to prevent the agent from having
incentives to acquire additional information from other sources. However, our
main result shows that the additional benefit from price discrimination is
limited, i.e., posting a deterministic price for revealing full information
obtains at least half of the optimal revenue for arbitrary prior and cost
functions.",Selling Data to an Agent with Endogenous Information,2021-03-10 02:51:39,Yingkai Li,"http://arxiv.org/abs/2103.05788v4, http://arxiv.org/pdf/2103.05788v4",econ.TH
35019,th,"We propose a novel model to explain the mechanisms underlying dominance
hierarchical structures. Guided by a predetermined social convention, agents
with limited cognitive abilities optimize their strategies in a Hawk-Dove game.
We find that several commonly observed hierarchical structures in nature such
as linear hierarchy and despotism, emerge as the total fitness-maximizing
social structures given different levels of cognitive abilities.",Limited Cognitive Abilities and Dominance Hierarchy,2021-03-20 05:45:13,"Hanyuan Huang, Jiabin Wu","http://dx.doi.org/10.1007/s10441-022-09442-6, http://arxiv.org/abs/2103.11075v2, http://arxiv.org/pdf/2103.11075v2",q-bio.PE
35020,th,"We present a fascinating model that has lately caught attention among
physicists working in complexity related fields. Though it originated from
mathematics and later from economics, the model is very enlightening in many
aspects that we shall highlight in this review. It is called The Stable
Marriage Problem (though the marriage metaphor can be generalized to many other
contexts), and it consists of matching men and women, considering
preference-lists where individuals express their preference over the members of
the opposite gender. This problem appeared for the first time in 1962 in the
seminal paper of Gale and Shapley and has aroused interest in many fields of
science, including economics, game theory, computer science, etc. Recently it
has also attracted many physicists who, using the powerful tools of statistical
mechanics, have also approached it as an optimization problem. Here we present
a complete overview of the Stable Marriage Problem emphasizing its
multidisciplinary aspect, and reviewing the key results in the disciplines that
it has influenced most. We focus, in particular, on the old and recent results
achieved by physicists, finally introducing two new promising models inspired
by the philosophy of the Stable Marriage Problem. Moreover, we present an
innovative reinterpretation of the problem, useful to highlight the
revolutionary role of information in the contemporary economy.",The Stable Marriage Problem: an Interdisciplinary Review from the Physicist's Perspective,2021-03-21 21:34:45,"Enrico Maria Fenoaltea, Izat B. Baybusinov, Jianyang Zhao, Lei Zhou, Yi-Cheng Zhang","http://dx.doi.org/10.1016/j.physrep.2021.03.001, http://arxiv.org/abs/2103.11458v1, http://arxiv.org/pdf/2103.11458v1",physics.soc-ph
35021,th,"Computing market equilibria is a problem of both theoretical and applied
interest. Much research to date focuses on the case of static Fisher markets
with full information on buyers' utility functions and item supplies. Motivated
by real-world markets, we consider an online setting: individuals have linear,
additive utility functions; items arrive sequentially and must be allocated and
priced irrevocably. We define the notion of an online market equilibrium in
such a market as time-indexed allocations and prices which guarantee buyer
optimality and market clearance in hindsight. We propose a simple, scalable and
interpretable allocation and pricing dynamics termed as PACE. When items are
drawn i.i.d. from an unknown distribution (with a possibly continuous support),
we show that PACE leads to an online market equilibrium asymptotically. In
particular, PACE ensures that buyers' time-averaged utilities converge to the
equilibrium utilities w.r.t. a static market with item supplies being the
unknown distribution and that buyers' time-averaged expenditures converge to
their per-period budget. Hence, many desirable properties of market
equilibrium-based fair division such as no envy, Pareto optimality, and the
proportional-share guarantee are also attained asymptotically in the online
setting. Next, we extend the dynamics to handle quasilinear buyer utilities,
which gives the first online algorithm for computing first-price pacing
equilibria. Finally, numerical experiments on real and synthetic datasets show
that the dynamics converges quickly under various metrics.",Online Market Equilibrium with Application to Fair Division,2021-03-24 05:21:41,"Yuan Gao, Christian Kroer, Alex Peysakhovich","http://arxiv.org/abs/2103.12936v2, http://arxiv.org/pdf/2103.12936v2",cs.GT
35022,th,"Consider an urn filled with balls, each labeled with one of several possible
collective decisions. Now, let a random voter draw two balls from the urn and
pick her more preferred as the collective decision. Relabel the losing ball
with the collective decision, put both balls back into the urn, and repeat.
Once in a while, relabel a randomly drawn ball with a random collective
decision. We prove that the empirical distribution of collective decisions
produced by this process approximates a maximal lottery, a celebrated
probabilistic voting rule proposed by Peter C. Fishburn (Rev. Econ. Stud.,
51(4), 1984). In fact, the probability that the collective decision in round
$n$ is made according to a maximal lottery increases exponentially in $n$. The
proposed procedure is more flexible than traditional voting rules and bears
strong similarities to natural processes studied in biology, physics, and
chemistry as well as algorithms proposed in machine learning.",A Natural Adaptive Process for Collective Decision-Making,2021-03-26 12:43:50,"Florian Brandl, Felix Brandt","http://arxiv.org/abs/2103.14351v5, http://arxiv.org/pdf/2103.14351v5",econ.TH
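The record above (2103.14351) spells out the adaptive urn procedure step by step, so a minimal Python sketch of it may help; the ball count, mutation rate, round count, and the three-voter Condorcet-cycle profile below are illustrative assumptions rather than values from the paper. For that cyclic profile the maximal lottery is the uniform lottery, so the empirical decision frequencies should come out roughly equal.

```python
import random
from collections import Counter

def adaptive_process(preferences, alternatives, n_balls=100,
                     rounds=100_000, mutation_rate=0.01, seed=0):
    """Simulate the urn process: a random voter draws two balls, her preferred
    label becomes the collective decision, the losing ball is relabeled with
    that decision, and occasionally a random ball gets a random label."""
    rng = random.Random(seed)
    urn = [rng.choice(alternatives) for _ in range(n_balls)]
    decisions = Counter()
    for _ in range(rounds):
        voter = rng.choice(preferences)          # a ranking, best first
        i, j = rng.sample(range(n_balls), 2)
        a, b = urn[i], urn[j]
        winner = a if voter.index(a) <= voter.index(b) else b
        decisions[winner] += 1
        urn[i] = urn[j] = winner                 # relabel the losing ball
        if rng.random() < mutation_rate:         # occasional random relabeling
            urn[rng.randrange(n_balls)] = rng.choice(alternatives)
    total = sum(decisions.values())
    return {x: decisions[x] / total for x in alternatives}

# Hypothetical example: a Condorcet cycle among three alternatives.
alts = ["a", "b", "c"]
prefs = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(adaptive_process(prefs, alts))
```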
35023,th,"Second-price auctions with deposits are frequently used in blockchain
environments. An auction takes place on-chain: bidders deposit an amount that
fully covers their bid (but possibly exceeds it) in a smart contract. The
deposit is used as insurance against bidders not honoring their bid if they
win. The deposit, but not the bid, is publicly observed during the bidding
phase of the auction.
  The visibility of deposits can fundamentally change the strategic structure
of the auction if bidding happens sequentially: Bidding is costly since deposits
are costly to make. Thus, deposits can be used as a costly signal for a high
valuation. This is the source of multiple inefficiencies: To engage in costly
signalling, a bidder who bids first and has a high valuation will generally
over-deposit in equilibrium, i.e., deposit more than he will bid. If high
valuations are likely there can, moreover, be entry deterrence through high
deposits: a bidder who bids first can deter subsequent bidders from entering
the auction. Pooling can happen in equilibrium, where bidders of different
valuations deposit the same amount. The auction fails to allocate the item to
the bidder with the highest valuation.",On-Chain Auctions with Deposits,2021-03-31 00:05:29,"Jan Christoph Schlegel, Akaki Mamageishvili","http://arxiv.org/abs/2103.16681v4, http://arxiv.org/pdf/2103.16681v4",cs.GT
35024,th,"We provide a comprehensive analysis of the two-parameter Beta distributions
seen from the perspective of second-order stochastic dominance. By changing its
parameters through a bijective mapping, we work with a bounded subset D instead
of an unbounded plane. We show that a mean-preserving spread is equivalent to
an increase of the variance, which means that higher moments are irrelevant to
compare the riskiness of Beta distributions. We then derive the lattice
structure induced by second-order stochastic dominance, which is feasible
thanks to the topological closure of D. Finally, we consider a standard
(expected-utility based) portfolio optimization problem in which its inputs are
the parameters of the Beta distribution. We explicitly characterize the subset
of D for which the optimal solution consists of investing 100% of the wealth in
the risky asset and we provide an exhaustive numerical analysis of this optimal
solution through (color-coded) graphs.",A lattice approach to the Beta distribution induced by stochastic dominance: Theory and applications,2021-04-03 17:01:32,"Yann Braouezec, John Cagnol","http://dx.doi.org/10.1080/01605682.2022.2096500, http://arxiv.org/abs/2104.01412v1, http://arxiv.org/pdf/2104.01412v1",math.PR
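The claim in the record above, that for Beta distributions a mean-preserving spread is just a variance increase, can be probed numerically. The sketch below uses the hypothetical pair Beta(2, 2) and Beta(5, 5), which share mean 1/2 but differ in variance, and checks that the lower-variance distribution yields a higher expected utility for a few concave utilities, as second-order stochastic dominance would require.

```python
import numpy as np
from scipy.stats import beta

def expected_utility(a, b, u, grid=20001):
    """Numerically integrate E[u(X)] for X ~ Beta(a, b) on [0, 1]."""
    x = np.linspace(0.0, 1.0, grid)
    return np.trapz(u(x) * beta.pdf(x, a, b), x)

risky, safe = (2.0, 2.0), (5.0, 5.0)   # same mean 1/2; Beta(2,2) has larger variance
for name, u in [("sqrt(x)", np.sqrt),
                ("log(1+x)", np.log1p),
                ("-exp(-3x)", lambda x: -np.exp(-3.0 * x))]:
    print(f"{name:10s}  E[u] spread={expected_utility(*risky, u):.5f}  "
          f"dominant={expected_utility(*safe, u):.5f}")
```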
35025,th,"We study dynamic matching in a spatial setting. Drivers are distributed at
random on some interval. Riders arrive in some (possibly adversarial) order at
randomly drawn points. The platform observes the location of the drivers, and
can match newly arrived riders immediately, or can wait for more riders to
arrive. Unmatched riders incur a waiting cost $c$ per period. The platform can
match riders and drivers, irrevocably. The cost of matching a driver to a rider
is equal to the distance between them. We quantify the value of slightly
increasing supply. We prove that when there are $(1+\epsilon)$ drivers per
rider (for any $\epsilon > 0$), the cost of matching returned by a simple
greedy algorithm which pairs each arriving rider to the closest available
driver is $O(\log^3(n))$, where $n$ is the number of riders. On the other hand,
with equal number of drivers and riders, even the \emph{ex post} optimal
matching does not have a cost less than $\Theta(\sqrt{n})$. Our results shed
light on the important role of (small) excess supply in spatial matching
markets.",The Value of Excess Supply in Spatial Matching Markets,2021-04-07 19:15:54,"Mohammad Akbarpour, Yeganeh Alimohammadi, Shengwu Li, Amin Saberi","http://arxiv.org/abs/2104.03219v1, http://arxiv.org/pdf/2104.03219v1",cs.DS
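To make the greedy rule in the record above concrete (each arriving rider is paired with the closest still-available driver), here is a small simulation sketch; the counts, uniform locations on [0, 1], and the 5% excess-supply comparison are assumptions for illustration, and waiting costs and the platform's option to delay matches are left out.

```python
import random

def greedy_matching_cost(n_riders, n_drivers, seed=0):
    """Total distance cost when each arriving rider is greedily matched,
    irrevocably, to the closest still-unmatched driver on the unit interval."""
    rng = random.Random(seed)
    drivers = [rng.random() for _ in range(n_drivers)]
    available = set(range(n_drivers))
    total = 0.0
    for _ in range(n_riders):
        r = rng.random()
        best = min(available, key=lambda i: abs(drivers[i] - r))  # greedy step
        total += abs(drivers[best] - r)
        available.remove(best)
    return total

n = 2000
print("equal supply:     ", greedy_matching_cost(n, n))
print("5% excess supply: ", greedy_matching_cost(n, int(1.05 * n)))
```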
35026,th,"We analyze equilibrium behavior and optimal policy within a
Susceptible-Infected-Recovered epidemic model augmented with potentially
undiagnosed agents who infer their health status and a social planner with
imperfect enforcement of social distancing. We define and prove the existence
of a perfect Bayesian Markov competitive equilibrium and contrast it with the
efficient allocation subject to the same informational constraints. We identify
two externalities, static (individual actions affect current risk of infection)
and dynamic (individual actions affect future disease prevalence), and study
how they are affected by limitations on testing and enforcement. We prove that
a planner with imperfect enforcement will always wish to curtail activity, but
that its incentives to do so vanish as testing becomes perfect. When a vaccine
arrives far into the future, the planner with perfect enforcement may encourage
activity before herd immunity. We find that lockdown policies have modest
welfare gains, whereas quarantine policies are effective even with imperfect
testing.",Optimal Epidemic Control in Equilibrium with Imperfect Testing and Enforcement,2021-04-09 19:06:15,"Thomas Phelan, Alexis Akira Toda","http://dx.doi.org/10.1016/j.jet.2022.105570, http://arxiv.org/abs/2104.04455v2, http://arxiv.org/pdf/2104.04455v2",econ.TH
35028,th,"We study the problem of fairly allocating indivisible items to agents with
different entitlements, which captures, for example, the distribution of
ministries among political parties in a coalition government. Our focus is on
picking sequences derived from common apportionment methods, including five
traditional divisor methods and the quota method. We paint a complete picture
of these methods in relation to known envy-freeness and proportionality
relaxations for indivisible items as well as monotonicity properties with
respect to the resource, population, and weights. In addition, we provide
characterizations of picking sequences satisfying each of the fairness notions,
and show that the well-studied maximum Nash welfare solution fails resource-
and population-monotonicity even in the unweighted setting. Our results serve
as an argument in favor of using picking sequences in weighted fair division
problems.",Picking Sequences and Monotonicity in Weighted Fair Division,2021-04-29 16:52:09,"Mithun Chakraborty, Ulrike Schmidt-Kraepelin, Warut Suksompong","http://dx.doi.org/10.1016/j.artint.2021.103578, http://arxiv.org/abs/2104.14347v2, http://arxiv.org/pdf/2104.14347v2",cs.GT
35029,th,"These lecture notes have been developed for the course Computational Social
Choice of the Artificial Intelligence MSc programme at the University of
Groningen. They cover mathematical and algorithmic aspects of voting theory.",Lecture Notes on Voting Theory,2021-05-01 14:19:44,Davide Grossi,"http://arxiv.org/abs/2105.00216v1, http://arxiv.org/pdf/2105.00216v1",cs.MA
35030,th,"A decision maker's utility depends on her action $a\in A \subset
\mathbb{R}^d$ and the payoff relevant state of the world $\theta\in \Theta$.
One can define the value of acquiring new information as the difference between
the maximum expected utility pre- and post information acquisition. In this
paper, I find asymptotic results on the expected value of information as $d \to
\infty$, by using tools from the theory of (sub)-Gaussian processes and generic
chaining.","High Dimensional Decision Making, Upper and Lower Bounds",2021-05-02 23:12:05,Farzad Pourbabaee,"http://arxiv.org/abs/2105.00545v1, http://arxiv.org/pdf/2105.00545v1",econ.TH
35031,th,"We design a recursive measure of voting power based on partial as well as
full voting efficacy. Classical measures, by contrast, incorporate solely full
efficacy. We motivate our design by representing voting games using a division
lattice and via the notion of random walks in stochastic processes, and show
the viability of our recursive measure by proving it satisfies a plethora of
postulates that any reasonable voting measure should satisfy. These include the
iso-invariance, dummy, dominance, donation, minimum-power bloc, and quarrel
postulates.",A Recursive Measure of Voting Power that Satisfies Reasonable Postulates,2021-05-07 02:46:15,"Arash Abizadeh, Adrian Vetta","http://arxiv.org/abs/2105.03006v2, http://arxiv.org/pdf/2105.03006v2",econ.TH
35032,th,"We study wealth rank correlations in a simple model of macro-economy. To
quantify rank correlations between wealth rankings at different times, we use
Kendall's $\tau$ and Spearman's $\rho$, Goodman--Kruskal's $\gamma$, and the
lists' overlap ratio. We show that the dynamics of wealth flow and the speed of
reshuffling in the ranking list depend on parameters of the model controlling
the wealth exchange rate and the wealth growth volatility. As an example of the
rheology of wealth in real data, we analyze the lists of the richest people in
Poland, Germany, the USA and the world.",Wealth rheology,2021-05-17 20:54:31,"Zdzislaw Burda, Malgorzata J. Krawczyk, Krzysztof Malarz, Malgorzata Snarska","http://dx.doi.org/10.3390/e23070842, http://arxiv.org/abs/2105.08048v2, http://arxiv.org/pdf/2105.08048v2",econ.TH
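A minimal sketch of the rank-correlation diagnostics mentioned in the record above, for readers who want to apply them to their own wealth snapshots; the lognormal toy data and the top-100 overlap window are illustrative assumptions, and Goodman-Kruskal's gamma is omitted.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def rank_correlations(wealth_t0, wealth_t1, top_k=100):
    """Kendall's tau, Spearman's rho, and the overlap ratio of the top-k lists
    between two wealth snapshots of the same population."""
    tau, _ = kendalltau(wealth_t0, wealth_t1)
    rho, _ = spearmanr(wealth_t0, wealth_t1)
    top0 = set(np.argsort(wealth_t0)[-top_k:])
    top1 = set(np.argsort(wealth_t1)[-top_k:])
    return tau, rho, len(top0 & top1) / top_k

# Toy snapshots: multiplicative growth with noise gradually reshuffles ranks.
rng = np.random.default_rng(0)
w0 = rng.lognormal(mean=10.0, sigma=1.0, size=5000)
w1 = w0 * rng.lognormal(mean=0.02, sigma=0.3, size=5000)
print(rank_correlations(w0, w1))
```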
35033,th,"We analyze the optimal information design in a click-through auction with
fixed valuations per click, but stochastic click-through rates. While the
auctioneer takes as given the auction rule of the click-through auction, namely
the generalized second-price auction, the auctioneer can design the information
flow regarding the click-through rates among the bidders. A natural requirement
in this context is to ask for the information structure to be calibrated in the
learning sense. With this constraint, the auction needs to rank the ads by a
product of the bid and an unbiased estimator of the click-through rates, and
the task of designing an optimal information structure is thus reduced to the
task of designing an optimal unbiased estimator.
  We show that in a symmetric setting with uncertainty about the click-through
rates, the optimal information structure attains both social efficiency and
surplus extraction. The optimal information structure requires private (rather
than public) signals to the bidders. It also requires correlated (rather than
independent) signals, even when the underlying uncertainty regarding the
click-through rates is independent. Beyond symmetric settings, we show that the
optimal information structure requires partial information disclosure.",Calibrated Click-Through Auctions: An Information Design Approach,2021-05-19 22:40:25,"Dirk Bergemann, Paul Duetting, Renato Paes Leme, Song Zuo","http://arxiv.org/abs/2105.09375v1, http://arxiv.org/pdf/2105.09375v1",econ.TH
35034,th,"We study the effectiveness of iterated elimination of strictly-dominated
actions in random games. We show that the probability that a game is dominance
solvable is vanishingly small as the number of at least one player's actions grows.
Furthermore, conditional on dominance solvability, the number of iterations
required to converge to Nash equilibrium grows rapidly as action sets grow.
Nonetheless, when games are highly imbalanced, iterated elimination simplifies
the game substantially by ruling out a sizable fraction of actions.
Technically, we illustrate the usefulness of recent combinatorial methods for
the analysis of general games.",Dominance Solvability in Random Games,2021-05-22 18:00:19,"Noga Alon, Kirill Rudov, Leeat Yariv","http://arxiv.org/abs/2105.10743v1, http://arxiv.org/pdf/2105.10743v1",econ.TH
35035,th,"The Institutional Analysis and Development (IAD) framework is a conceptual
toolbox put forward by Elinor Ostrom and colleagues in an effort to identify
and delineate the universal common variables that structure the immense variety
of human interactions. The framework identifies rules as one of the core
concepts to determine the structure of interactions, and acknowledges their
potential to steer a community towards more beneficial and socially desirable
outcomes. This work presents the first attempt to turn the IAD framework into a
computational model to allow communities of agents to formally perform what-if
analysis on a given rule configuration. To do so, we define the Action
Situation Language -- or ASL -- whose syntax is highly tailored to the
components of the IAD framework and that we use to write descriptions of social
interactions. ASL is complemented by a game engine that generates its semantics
as an extensive-form game. These models, then, can be analyzed with the
standard tools of game theory to predict which outcomes are being most
incentivized, and evaluated according to their socially relevant properties.",A Computational Model of the Institutional Analysis and Development Framework,2021-05-27 16:53:56,Nieves Montes,"http://dx.doi.org/10.1016/j.artint.2022.103756, http://arxiv.org/abs/2105.13151v1, http://arxiv.org/pdf/2105.13151v1",cs.AI
35036,th,"In the area of matching-based market design, existing models using cardinal
utilities suffer from two deficiencies, which restrict applicability: First,
the Hylland-Zeckhauser (HZ) mechanism, which has remained a classic in
economics for one-sided matching markets, is intractable; computation of even
an approximate equilibrium is PPAD-complete [Vazirani, Yannakakis 2021], [Chen
et al 2022]. Second, there is an extreme paucity of such models. This led
[Hosseini and Vazirani 2021] to define a rich collection of
Nash-bargaining-based models for one-sided and two-sided matching markets, in
both Fisher and Arrow-Debreu settings, together with implementations using
available solvers and very encouraging experimental results.
  [Hosseini and Vazirani 2021] raised the question of finding efficient
combinatorial algorithms, with proven running times, for these models. In this
paper, we address this question by giving algorithms based on the techniques of
multiplicative weights update (MWU) and conditional gradient descent (CGD).
Additionally, we make the following conceptual contributions to the proposal of
[Hosseini and Vazirani 2021] in order to set it on a more firm foundation:
  1) We establish a connection between HZ and Nash-bargaining-based models via
the celebrated Eisenberg-Gale convex program, thereby providing a theoretical
ratification.
  2) Whereas HZ satisfies envy-freeness, due to the presence of demand
constraints, the Nash-bargaining-based models do not. We rectify this to the
extent possible by showing that these models satisfy approximate equal-share
fairness notions.
  3) We define, for the first time, a model for non-bipartite matching markets
under cardinal utilities. It is also Nash-bargaining-based and we solve it
using CGD.","Combinatorial Algorithms for Matching Markets via Nash Bargaining: One-Sided, Two-Sided and Non-Bipartite",2021-06-03 20:48:40,"Ioannis Panageas, Thorben Tröbst, Vijay V. Vazirani","http://arxiv.org/abs/2106.02024v4, http://arxiv.org/pdf/2106.02024v4",cs.GT
35037,th,"We define a model of interactive communication where two agents with private
types can exchange information before a game is played. The model contains
Bayesian persuasion as a special case of a one-round communication protocol. We
define message complexity corresponding to the minimum number of interactive
rounds necessary to achieve the best possible outcome. Our main result is that
for bilateral trade, agents don't stop talking until they reach an efficient
outcome: Either agents achieve an efficient allocation in finitely many rounds
of communication; or the optimal communication protocol has an infinite number of
rounds. We show an important class of bilateral trade settings where efficient
allocation is achievable with a small number of rounds of communication.",Interactive Communication in Bilateral Trade,2021-06-04 00:56:54,"Jieming Mao, Renato Paes Leme, Kangning Wang","http://arxiv.org/abs/2106.02150v1, http://arxiv.org/pdf/2106.02150v1",cs.GT
35038,th,"Recent advances in multi-task peer prediction have greatly expanded our
knowledge about the power of multi-task peer prediction mechanisms. Various
mechanisms have been proposed in different settings to elicit different types
of information. But we still lack understanding about when desirable mechanisms
will exist for a multi-task peer prediction problem. In this work, we study the
elicitability of multi-task peer prediction problems. We consider a designer
who has certain knowledge about the underlying information structure and wants
to elicit certain information from a group of participants. Our goal is to
infer the possibility of having a desirable mechanism based on the primitives
of the problem.
  Our contribution is twofold. First, we provide a characterization of the
elicitable multi-task peer prediction problems, assuming that the designer only
uses scoring mechanisms. Scoring mechanisms are the mechanisms that reward
participants' reports for different tasks separately. The characterization uses
a geometric approach based on the power diagram characterization in the
single-task setting ([Lambert and Shoham, 2009, Frongillo and Witkowski,
2017]). For general mechanisms, we also give a necessary condition for a
multi-task problem to be elicitable.
  Second, we consider the case when the designer aims to elicit some properties
that are linear in the participant's posterior about the state of the world. We
first show that in some cases, the designer basically can only elicit the
posterior itself. We then look into the case when the designer aims to elicit
the participants' posteriors. We give a necessary condition for the posterior
to be elicitable. This condition implies that the mechanisms proposed by Kong
and Schoenebeck are already the best we can hope for in their setting, in the
sense that their mechanisms can solve any problem instance that can possibly be
elicitable.",The Limits of Multi-task Peer Prediction,2021-06-06 19:48:42,"Shuran Zheng, Fang-Yi Yu, Yiling Chen","http://dx.doi.org/10.1145/3465456.3467642, http://arxiv.org/abs/2106.03176v1, http://arxiv.org/pdf/2106.03176v1",cs.GT
35039,th,"This paper proposes a model to explain the potential role of inter-group
conflicts in determining the rise and fall of signaling norms. Individuals in a
population are characterized by high and low productivity types and they are
matched in pairs to form social relationships such as mating or foraging
relationships. In each relationship, an individual's payoff is increasing in
its own type and its partner's type. Hence, the payoff structure of a
relationship does not resemble a dilemma situation. Assume that types are not
observable. In one population, assortative matching according to types is
sustained by signaling. In the other population, individuals do not signal and
they are randomly matched. Types evolve within each population. At the same
time, the two populations may engage in conflicts. Due to assortative matching,
high types grow faster in the population with signaling, yet they bear the cost
of signaling, which lowers their population's fitness in the long run. Through
simulations, we show that the survival of the signaling population depends
crucially on the timing and the efficiency of the weapons used in inter-group
conflicts.","Conflicts, Assortative Matching, and the Evolution of Signaling Norms",2021-06-21 01:47:37,"Ethan Holdahl, Jiabin Wu","http://arxiv.org/abs/2106.10772v4, http://arxiv.org/pdf/2106.10772v4",q-bio.PE
35040,th,"Motivated by kidney exchange, we study the following mechanism-design
problem: On a directed graph (of transplant compatibilities among patient-donor
pairs), the mechanism must select a simple path (a chain of transplantations)
starting at a distinguished vertex (an altruistic donor) such that the total
length of this path is as large as possible (a maximum number of patients
receive a kidney). However, the mechanism does not have direct access to the
graph. Instead, the vertices are partitioned over multiple players (hospitals),
and each player reports a subset of her vertices to the mechanism. In
particular, a player may strategically omit vertices to increase how many of
her vertices lie on the path returned by the mechanism.
  Our objective is to find mechanisms that limit incentives for such
manipulation while producing long paths. Unfortunately, in worst-case
instances, competing with the overall longest path is impossible while
incentivizing (approximate) truthfulness, i.e., requiring that hiding nodes
cannot increase a player's utility by more than a factor of $1 + o(1)$. We
therefore adopt a semi-random model where a small ($o(n)$) number of random
edges are added to worst-case instances. While it remains impossible for
truthful mechanisms to compete with the overall longest path, we give a
truthful mechanism that competes with a weaker but non-trivial benchmark: the
length of any path whose subpaths within each player have a minimum average
length. In fact, our mechanism satisfies even a stronger notion of
truthfulness, which we call matching-time incentive compatibility. This notion
of truthfulness requires that each player not only reports her nodes truthfully
but also does not stop the returned path at any of her nodes in order to divert
it to a continuation inside her own subgraph.",Incentive-Compatible Kidney Exchange in a Slightly Semi-Random Model,2021-06-21 22:41:09,"Avrim Blum, Paul Gölz","http://dx.doi.org/10.1145/3465456.3467622, http://arxiv.org/abs/2106.11387v1, http://arxiv.org/pdf/2106.11387v1",cs.GT
35041,th,"Most governments employ a set of quasi-standard measures to fight COVID-19
including wearing masks, social distancing, virus testing, contact tracing, and
vaccination. However, combining these measures into an efficient holistic
pandemic response instrument is even more involved than anticipated. We argue
that some non-trivial factors behind the varying effectiveness of these
measures are selfish decision making and the differing national implementations
of the response mechanism. In this paper, through simple games, we show the
effect of individual incentives on the decisions made with respect to mask
wearing, social distancing and vaccination, and how these may result in
sub-optimal outcomes. We also demonstrate the responsibility of national
authorities in designing these games properly regarding data transparency, the
chosen policies and their influence on the preferred outcome. We promote a
mechanism design approach: it is in the best interest of every government to
carefully balance social good and response costs when implementing their
respective pandemic response mechanism; moreover, there is no one-size-fits-all
solution when designing an effective solution.",Games in the Time of COVID-19: Promoting Mechanism Design for Pandemic Response,2021-06-23 14:51:44,"Balázs Pejó, Gergely Biczók","http://arxiv.org/abs/2106.12329v2, http://arxiv.org/pdf/2106.12329v2",econ.TH
35042,th,"Kakade, Kearns, and Ortiz (KKO) introduce a graph-theoretic generalization of
the classic Arrow--Debreu (AD) exchange economy. Despite its appeal as a
networked version of AD, we argue that the KKO model is too local, in the sense
that goods cannot travel more than one hop through the network. We introduce an
alternative model in which agents may purchase goods on credit in order to
resell them. In contrast to KKO, our model allows for long-range trade, and
yields equilibria in more settings than KKO, including sparse endowments. Our
model smoothly interpolates between the KKO and AD equilibrium concepts: we
recover KKO when the resale capacity is zero, and recover AD when it is
sufficiently large. We give general equilibrium existence results, and an
auction-based algorithm to compute approximate equilibria when agent utilities
satisfy the weak gross-substitutes property.",Graphical Economies with Resale,2021-06-28 08:07:58,"Gabriel P. Andrade, Rafael Frongillo, Elliot Gorokhovsky, Sharadha Srinivasan","http://dx.doi.org/10.1145/3465456.3467628, http://arxiv.org/abs/2106.14397v1, http://arxiv.org/pdf/2106.14397v1",econ.TH
35043,th,"This paper provides a model to analyze and identify a decision maker's (DM's)
hypothetical reasoning. Using this model, I show that a DM's propensity to
engage in hypothetical thinking is captured exactly by her ability to recognize
implications (i.e., to identify that one hypothesis implies another) and that
this later relation is encoded by a DM's observable behavior. Thus, this
characterization both provides a concrete definition of (flawed) hypothetical
reasoning and, importantly, yields a methodology to identify these judgments
from standard economic data.",Hypothetical Expected Utility,2021-06-30 13:56:24,Evan Piermont,"http://arxiv.org/abs/2106.15979v2, http://arxiv.org/pdf/2106.15979v2",econ.TH
35044,th,"The matching game is a cooperative game where the value of every coalition is
the maximum revenue that players in the coalition can make by forming pairwise
disjoint partnerships. The multiple partners matching game generalizes the matching
game by allowing each player to have more than one possibly repeated partner.
In this paper, we study profit-sharing in multiple partners matching games. A
central concept for profit-sharing is the core which consists of all possible
ways of distributing the profit among individual players such that the grand
coalition remains intact. The core of multiple partners matching games may be
empty [Deng et al., Algorithmic aspects of the core of combinatorial
optimization games, Math. Oper. Res., 1999.]; even when the core is non-empty,
the core membership problem is intractable in general [Biro et al., The stable
fixtures problem with payments, Games Econ. Behav., 2018]. Thus we study
approximate core allocations upon which a coalition may be paid less than the
profit it makes by seceding from the grand coalition. We provide an LP-based
mechanism guaranteeing that no coalition is paid less than $2/3$ times the
profit it makes on its own. We also show that $2/3$ is the best possible factor
relative to the underlying LP-relaxation. Our result generalizes the work of
Vazirani [Vazirani, The general graph matching game: approximate core, arXiv,
2021] from matching games to multiple partners matching games.",Approximate Core Allocations for Multiple Partners Matching Games,2021-07-03 17:20:07,"Han Xiao, Tianhang Lu, Qizhi Fang","http://arxiv.org/abs/2107.01442v2, http://arxiv.org/pdf/2107.01442v2",cs.GT
35045,th,"We study the wealth distribution of UK households through a detailed analysis
of data from wealth surveys and rich lists, and propose a non-linear Kesten
process to model the dynamics of household wealth. The main features of our
model are that we focus on wealth growth and disregard exchange, and that the
rate of return on wealth is increasing with wealth. The linear case with
wealth-independent return rate has been well studied, leading to a log-normal
wealth distribution in the long time limit which is essentially independent of
initial conditions. We find through theoretical analysis and simulations that
the non-linearity in our model leads to more realistic power-law tails, and can
explain an apparent two-tailed structure in the empirical wealth distribution
of the UK and other countries. Other realistic features of our model include an
increase in inequality over time, and a stronger dependence on initial
conditions compared to linear models.",A Study of UK Household Wealth through Empirical Analysis and a Non-linear Kesten Process,2021-07-05 20:53:54,"Samuel Forbes, Stefan Grosskinsky","http://dx.doi.org/10.1371/journal.pone.0272864, http://arxiv.org/abs/2107.02169v1, http://arxiv.org/pdf/2107.02169v1",econ.TH
35048,th,"In this paper, we present a network manipulation algorithm based on an
alternating minimization scheme from (Nesterov 2020). In our context, the
latter mimics the natural behavior of agents and organizations operating on a
network. By selecting starting distributions, the organizations determine the
short-term dynamics of the network. While choosing an organization in
accordance with their manipulation goals, agents are prone to errors. This
rational inattentive behavior leads to discrete choice probabilities. We extend
the analysis of our algorithm to the inexact case, where the corresponding
subproblems can only be solved with numerical inaccuracies. The parameters
reflecting the imperfect behavior of agents and the credibility of
organizations, as well as the condition number of the network transition matrix
have a significant impact on the convergence of our algorithm. Namely, they
turn out not only to improve the rate of convergence, but also to reduce the
accumulated errors. From the mathematical perspective, this is due to the
induced strong convexity of an appropriate potential function.",Network manipulation algorithm based on inexact alternating minimization,2021-07-08 14:03:33,"David Müller, Vladimir Shikhman","http://arxiv.org/abs/2107.03754v2, http://arxiv.org/pdf/2107.03754v2",math.OC
35049,th,"Suppose agents can exert costly effort that creates nonrival, heterogeneous
benefits for each other. At each possible outcome, a weighted, directed network
describing marginal externalities is defined. We show that Pareto efficient
outcomes are those at which the largest eigenvalue of the network is 1. An
important set of efficient solutions, Lindahl outcomes, are characterized by
contributions being proportional to agents' eigenvector centralities in the
network. The outcomes we focus on are motivated by negotiations. We apply the
results to identify who is essential for Pareto improvements, how to
efficiently subdivide negotiations, and whom to optimally add to a team.",A Network Approach to Public Goods: A Short Summary,2021-07-09 06:18:05,"Matthew Elliott, Benjamin Golub","http://arxiv.org/abs/2107.04185v1, http://arxiv.org/pdf/2107.04185v1",econ.TH
35050,th,"This paper proposes a conceptual framework for the analysis of reward sharing
schemes in mining pools, such as those associated with Bitcoin. The framework
is centered around the reported shares in a pool instead of agents and results
in two new fairness criteria, absolute and relative redistribution. These
criteria impose that the addition of a share to the pool affects all previous
shares in the same way, either in absolute amount or in relative ratio. We
characterize two large classes of economically viable reward sharing schemes
corresponding to each of these fairness criteria in turn. We further show that
the intersection of these classes brings about a generalization of the
well-known proportional scheme, which also leads to a new characterization of
the proportional scheme as a corollary.",On Reward Sharing in Blockchain Mining Pools,2021-07-12 13:25:23,"Burak Can, Jens Leth Hougaard, Mohsen Pourpouneh","http://dx.doi.org/10.26481/umagsb.2021009, http://arxiv.org/abs/2107.05302v1, http://arxiv.org/pdf/2107.05302v1",cs.GT
35051,th,"A prevalent assumption in auction theory is that the auctioneer has full
control over the market and that the allocation she dictates is final. In
practice, however, agents might be able to resell acquired items in an
aftermarket. A prominent example is the market for carbon emission allowances.
These allowances are commonly allocated by the government using uniform-price
auctions, and firms can typically trade these allowances among themselves in an
aftermarket that may not be fully under the auctioneer's control. While the
uniform-price auction is approximately efficient in isolation, we show that
speculation and resale in aftermarkets might result in a significant welfare
loss. Motivated by this issue, we consider three approaches, each ensuring high
equilibrium welfare in the combined market. The first approach is to adopt
smooth auctions such as discriminatory auctions. This approach is robust to
correlated valuations and to participants acquiring information about others'
types. However, discriminatory auctions have several downsides, notably that of
charging bidders different prices for identical items, resulting in fairness
concerns that make the format unpopular. Two other approaches we suggest are
either using posted-pricing mechanisms, or using uniform-price auctions with
anonymous reserves. We show that when using balanced prices, both these
approaches ensure high equilibrium welfare in the combined market. The latter
also inherits many of the benefits from uniform-price auctions such as price
discovery, and can be introduced with a minor modification to auctions
currently in use to sell carbon emission allowances.",Making Auctions Robust to Aftermarkets,2021-07-13 08:19:34,"Moshe Babaioff, Nicole Immorlica, Yingkai Li, Brendan Lucier","http://arxiv.org/abs/2107.05853v2, http://arxiv.org/pdf/2107.05853v2",econ.TH
35052,th,"We consider multi-population Bayesian games with a large number of players.
Each player aims at minimizing a cost function that depends on this player's
own action, the distribution of players' actions in all populations, and an
unknown state parameter. We study the nonatomic limit versions of these games
and introduce the concept of Bayes correlated Wardrop equilibrium, which
extends the concept of Bayes correlated equilibrium to nonatomic games. We
prove that Bayes correlated Wardrop equilibria are limits of action flows
induced by Bayes correlated equilibria of the game with a large finite set of
small players. For nonatomic games with complete information admitting a convex
potential, we prove that the set of correlated and of coarse correlated Wardrop
equilibria coincide with the set of probability distributions over Wardrop
equilibria, and that all equilibrium outcomes have the same costs. We get the
following consequences. First, all flow distributions of (coarse) correlated
equilibria in convex potential games with finitely many players converge to
Wardrop equilibria when the weight of each player tends to zero. Second, for
any sequence of flows satisfying a no-regret property, its empirical
distribution converges to the set of distributions over Wardrop equilibria and
the average cost converges to the unique Wardrop cost.",Correlated Equilibria in Large Anonymous Bayesian Games,2021-07-13 21:13:40,"Frederic Koessler, Marco Scarsini, Tristan Tomala","http://arxiv.org/abs/2107.06312v4, http://arxiv.org/pdf/2107.06312v4",cs.GT
35059,th,"Gibbard and Satterthwaite have shown that the only single-valued social
choice functions (SCFs) that satisfy non-imposition (i.e., the function's range
coincides with its codomain) and strategyproofness (i.e., voters are never
better off by misrepresenting their preferences) are dictatorships. In this
paper, we consider set-valued social choice correspondences (SCCs) that are
strategyproof according to Fishburn's preference extension and, in particular,
the top cycle, an attractive SCC that returns the maximal elements of the
transitive closure of the weak majority relation. Our main theorem implies
that, under mild conditions, the top cycle is the only non-imposing
strategyproof SCC whose outcome only depends on the quantified pairwise
comparisons between alternatives. This result effectively turns the
Gibbard-Satterthwaite impossibility into a complete characterization of the top
cycle by moving from SCFs to SCCs. It is obtained as a corollary of a more
general characterization of strategyproof SCCs.",Characterizing the Top Cycle via Strategyproofness,2021-08-10 15:03:56,"Felix Brandt, Patrick Lederer","http://arxiv.org/abs/2108.04622v2, http://arxiv.org/pdf/2108.04622v2",econ.TH
35053,th,"In this paper, we bring consumer theory to bear in the analysis of Fisher
markets whose buyers have arbitrary continuous, concave, homogeneous (CCH)
utility functions representing locally non-satiated preferences. The main tools
we use are the dual concepts of expenditure minimization and indirect utility
maximization. First, we use expenditure functions to construct a new convex
program whose dual, like the dual of the Eisenberg-Gale program, characterizes
the equilibrium prices of CCH Fisher markets. We then prove that the
subdifferential of the dual of our convex program is equal to the negative
excess demand in the associated market, which makes generalized gradient
descent equivalent to computing equilibrium prices via t\^atonnement. Finally,
we run a series of experiments which suggest that t\^atonnement may converge at
a rate of $O\left(\frac{(1+E)}{t^2}\right)$ in CCH Fisher markets that comprise
buyers with elasticity of demand bounded by $E$. Our novel characterization of
equilibrium prices may provide a path to proving the convergence of
tâtonnement in Fisher markets beyond those in which buyers' utilities exhibit
constant elasticity of substitution.",A Consumer-Theoretic Characterization of Fisher Market Equilibria,2021-07-17 04:04:34,"Denizalp Goktas, Enrique Areyan Viqueira, Amy Greenwald","http://arxiv.org/abs/2107.08153v2, http://arxiv.org/pdf/2107.08153v2",cs.GT
35054,th,"We consider a multiproduct monopoly pricing model. We provide sufficient
conditions under which the optimal mechanism can be implemented via upgrade
pricing -- a menu of product bundles that are nested in the strong set order.
Our approach exploits duality methods to identify conditions on the
distribution of consumer types under which (a) each product is purchased by the
same set of buyers as under separate monopoly pricing (though the transfers can
be different), and (b) these sets are nested.
  We exhibit two distinct sets of sufficient conditions. The first set of
conditions is given by a weak version of monotonicity of types and virtual
values, while maintaining a regularity assumption, i.e., that the
product-by-product revenue curves are single-peaked. The second set of
conditions establishes the optimality of upgrade pricing for type spaces with
monotone marginal rates of substitution (MRS) -- the relative preference ratios
for any two products are monotone across types. The monotone MRS condition
allows us to relax the earlier regularity assumption.
  Under both sets of conditions, we fully characterize the product bundles and
prices that form the optimal upgrade pricing menu. Finally, we show that, if
the consumer's types are monotone, the seller can equivalently post a vector of
single-item prices: upgrade pricing and separate pricing are equivalent.",The Optimality of Upgrade Pricing,2021-07-21 22:32:03,"Dirk Bergemann, Alessandro Bonatti, Andreas Haupt, Alex Smolin","http://arxiv.org/abs/2107.10323v2, http://arxiv.org/pdf/2107.10323v2",cs.GT
35055,th,"Hysteresis is treated as a history dependent branching, and the use of the
classical Preisach model for the analysis of macroeconomic hysteresis is first
discussed. Then, a new Preisach-type model is introduced as a macroeconomic
aggregation of more realistic microeconomic hysteresis than in the case of the
classical Preisach model. It is demonstrated that this model is endowed with a
more general mechanism of branching and may account for the continuous
evolution of the economy and its effect on hysteresis. Furthermore, it is shown
that the sluggishness of economic recovery is an intrinsic manifestation of
hysteresis branching.",Economic Hysteresis and Its Mathematical Modeling,2021-07-17 09:42:08,"Isaak D. Mayergoyz, Can E. Korman","http://arxiv.org/abs/2107.10639v2, http://arxiv.org/pdf/2107.10639v2",econ.TH
35056,th,"In the early $20^{th}$ century, Pigou observed that imposing a marginal cost
tax on the usage of a public good induces a socially efficient level of use as
an equilibrium. Unfortunately, such a ""Pigouvian"" tax may also induce other,
socially inefficient, equilibria. We observe that this social inefficiency may
be unbounded, and study whether alternative tax structures may lead to milder
losses in the worst case, i.e. to a lower price of anarchy. We show that no tax
structure leads to bounded losses in the worst case. However, we do find a tax
scheme that has a lower price of anarchy than the Pigouvian tax, obtaining
tight lower and upper bounds in terms of a crucial parameter that we identify.
We generalize our results to various scenarios that each offers an alternative
to the use of a public road by private cars, such as ride sharing, or using a
bus or a train.",Beyond Pigouvian Taxes: A Worst Case Analysis,2021-07-26 11:24:44,"Moshe Babaioff, Ruty Mundel, Noam Nisan","http://arxiv.org/abs/2107.12023v2, http://arxiv.org/pdf/2107.12023v2",econ.TH
35057,th,"We study competition among contests in a general model that allows for an
arbitrary and heterogeneous space of contest design, where the goal of the
contest designers is to maximize the contestants' sum of efforts. Our main
result shows that optimal contests in the monopolistic setting (i.e., those
that maximize the sum of efforts in a model with a single contest) form an
equilibrium in the model with competition among contests. Under a very natural
assumption these contests are in fact dominant, and the equilibria that they
form are unique. Moreover, equilibria with the optimal contests are
Pareto-optimal even in cases where other equilibria emerge. In many natural
cases, they also maximize the social welfare.",From Monopoly to Competition: Optimal Contests Prevail,2021-07-28 16:50:18,"Xiaotie Deng, Yotam Gafni, Ron Lavi, Tao Lin, Hongyi Ling","http://dx.doi.org/10.13140/RG.2.2.11070.20807, http://arxiv.org/abs/2107.13363v1, http://arxiv.org/pdf/2107.13363v1",cs.GT
35058,th,"There is an extensive literature in social choice theory studying the
consequences of weakening the assumptions of Arrow's Impossibility Theorem.
Much of this literature suggests that there is no escape from Arrow-style
impossibility theorems, while remaining in an ordinal preference setting,
unless one drastically violates the Independence of Irrelevant Alternatives
(IIA). In this paper, we present a more positive outlook. We propose a model of
comparing candidates in elections, which we call the Advantage-Standard (AS)
model. The requirement that a collective choice rule (CCR) be rationalizable by
the AS model is in the spirit of but weaker than IIA; yet it is stronger than
what is known in the literature as weak IIA (two profiles alike on $x, y$
cannot have opposite strict social preferences on $x$ and $y$). In addition to
motivating violations of IIA, the AS model makes intelligible violations of
another Arrovian assumption: the negative transitivity of the strict social
preference relation $P$. While previous literature shows that only weakening
IIA to weak IIA or only weakening negative transitivity of $P$ to acyclicity
still leads to impossibility theorems, we show that jointly weakening IIA to AS
rationalizability and weakening negative transitivity of $P$ leads to no such
impossibility theorems. Indeed, we show that several appealing CCRs are AS
rationalizable, including even transitive CCRs.",Escaping Arrow's Theorem: The Advantage-Standard Model,2021-08-02 22:24:16,"Wesley H. Holliday, Mikayla Kelley","http://arxiv.org/abs/2108.01134v3, http://arxiv.org/pdf/2108.01134v3",econ.TH
35060,th,"In the problem of aggregating experts' probabilistic predictions over an
ordered set of outcomes, we introduce the axiom of level-strategyproofness
(level-SP) and prove that it is a natural notion with several applications.
Moreover, it is a robust concept as it implies incentive compatibility in a
rich domain of single-peakedness over the space of cumulative distribution
functions (CDFs). This contrasts with the literature which assumes
single-peaked preferences over the space of probability distributions. Our main
results are: (1) a reduction of our problem to the aggregation of CDFs; (2) the
axiomatic characterization of level-SP probability aggregation functions with
and without the addition of other axioms; (3) impossibility results which
provide bounds for our characterization; (4) the axiomatic characterization of
two new and practical level-SP methods: the proportional-cumulative method and
the middlemost-cumulative method; and (5) the application of
proportional-cumulative to extend approval voting, majority rule, and majority
judgment methods to situations where voters/experts are uncertain about how to
grade the candidates/alternatives to be ranked.
  Keywords: Probability Aggregation Functions, Ordered Set of Alternatives, Level Strategy-Proofness, Proportional-Cumulative, Middlemost-Cumulative.",Level-strategyproof Belief Aggregation Mechanisms,2021-08-10 17:05:10,"Rida Laraki, Estelle Varloot","http://arxiv.org/abs/2108.04705v4, http://arxiv.org/pdf/2108.04705v4",econ.TH
35062,th,"We consider the extension of a tractable NEG model with a quasi-linear log
utility to continuous space, and investigate the behavior of its solution
mathematically. The model is a system of nonlinear integral and differential
equations describing the market equilibrium and the time evolution of the
spatial distribution of population density. A unique global solution is
constructed and a homogeneous stationary solution with evenly distributed
population is shown to be unstable. Furthermore, it is shown numerically that
the destabilized homogeneous stationary solution eventually forms spiky spatial
distributions. The number of the spikes decreases as the preference for variety
increases or the transport cost decreases.",A continuous space model of new economic geography with a quasi-linear log utility function,2021-08-27 13:38:26,Kensuke Ohtake,"http://dx.doi.org/10.1007/s11067-023-09604-0, http://arxiv.org/abs/2108.12217v8, http://arxiv.org/pdf/2108.12217v8",econ.TH
35063,th,"We study learning dynamics induced by strategic agents who repeatedly play a
game with an unknown payoff-relevant parameter. In this dynamics, a belief
estimate of the parameter is repeatedly updated given players' strategies and
realized payoffs using Bayes's rule. Players adjust their strategies by
accounting for best response strategies given the belief. We show that, with
probability 1, beliefs and strategies converge to a fixed point, where the
belief consistently estimates the payoff distribution for the strategy, and the
strategy is an equilibrium corresponding to the belief. However, learning may
not always identify the unknown parameter because the belief estimate relies on
the game outcomes that are endogenously generated by players' strategies. We
obtain sufficient and necessary conditions, under which learning leads to a
globally stable fixed point that is a complete information Nash equilibrium. We
also provide sufficient conditions that guarantee local stability of fixed
point beliefs and strategies.",Multi-agent Bayesian Learning with Best Response Dynamics: Convergence and Stability,2021-09-02 08:30:57,"Manxi Wu, Saurabh Amin, Asuman Ozdaglar","http://arxiv.org/abs/2109.00719v1, http://arxiv.org/pdf/2109.00719v1",cs.GT
35064,th,"We introduce balancedness a fairness axiom in house allocation problems. It
requires a mechanism to assign the top choice, the second top choice, and so
on, on the same number of profiles for each agent. This axiom guarantees equal
treatment of all agents at the stage in which the mechanism is announced when
all preference profiles are equally likely. We show that, with an interesting
exception for the three-agent case, Top Trading Cycles from individual
endowments is the only mechanism that is balanced, efficient, and group
strategy-proof.",Balanced House Allocation,2021-09-05 07:53:31,"Xinghua Long, Rodrigo A. Velez","http://arxiv.org/abs/2109.01992v1, http://arxiv.org/pdf/2109.01992v1",econ.TH
35065,th,"We study contests where the designer's objective is an extension of the
widely studied objective of maximizing the total output: The designer gets zero
marginal utility from a player's output if the output of the player is very low
or very high. We model this using two objective functions: binary threshold,
where a player's contribution to the designer's utility is 1 if her output is
above a certain threshold, and 0 otherwise; and linear threshold, where a
player's contribution is linear if her output is between a lower and an upper
threshold, and becomes constant below the lower and above the upper threshold.
For both of these objectives, we study (1) rank-order allocation contests that
use only the ranking of the players to assign prizes and (2) general contests
that may use the numerical values of the players' outputs to assign prizes. We
characterize the contests that maximize the designer's objective and indicate
techniques to efficiently compute them. We also prove that for the linear
threshold objective, a contest that distributes the prize equally among a fixed
number of top-ranked players offers a factor-2 approximation to the optimal
rank-order allocation contest.",Contest Design with Threshold Objectives,2021-09-07 19:18:59,"Edith Elkind, Abheek Ghosh, Paul Goldberg","http://arxiv.org/abs/2109.03179v2, http://arxiv.org/pdf/2109.03179v2",cs.GT
35084,th,"This paper considers an infinitely repeated three-player Bayesian game with
lack of information on two sides, in which an informed player plays two
zero-sum games simultaneously at each stage against two uninformed players.
This is a generalization of the Aumann et al. [1] two-player zero-sum one-sided
incomplete information model. Under a correlated prior, the informed player
faces the problem of how to optimally disclose information among two uninformed
players in order to maximize his long-term average payoffs. Our objective is to
understand the adverse effects of ""information spillover"" from one game to the
other in the equilibrium payoff set of the informed player. We provide
conditions under which the informed player can fully overcome such adverse
effects and characterize equilibrium payoffs. In a second result, we show how
the effects of information spillover on the equilibrium payoff set of the
informed player might be severe.",Information Spillover in Multiple Zero-sum Games,2021-11-02 18:05:11,Lucas Pahl,"http://arxiv.org/abs/2111.01647v3, http://arxiv.org/pdf/2111.01647v3",econ.TH
35066,th,"Electronic countermeasures (ECM) against a radar are actions taken by an
adversarial jammer to mitigate effective utilization of the electromagnetic
spectrum by the radar. On the other hand, electronic counter-countermeasures
(ECCM) are actions taken by the radar to mitigate the impact of electronic
countermeasures (ECM) so that the radar can continue to operate effectively.
The main idea of this paper is to show that ECCM involving a radar and a jammer
can be formulated as a principal-agent problem (PAP) - a problem widely studied
in microeconomics. With the radar as the principal and the jammer as the agent,
we design a PAP to optimize the radar's ECCM strategy in the presence of a
jammer. The radar seeks to optimally trade off the signal-to-noise ratio (SNR) of
the target measurement with the measurement cost: cost for generating radiation
power for the pulse to probe the target. We show that for a suitable choice of
utility functions, PAP is a convex optimization problem. Further, we analyze
the structure of the PAP and provide sufficient conditions under which the
optimal solution is an increasing function of the jamming power observed by the
radar; this enables computation of the radar's optimal ECCM within the class of
increasing affine functions at a low computation cost. Finally, we illustrate
the PAP formulation of the radar's ECCM problem via numerical simulations. We
also use simulations to study a radar's ECCM problem wherein the radar and the
jammer have mismatched information.",Principal Agent Problem as a Principled Approach to Electronic Counter-Countermeasures in Radar,2021-09-08 14:11:11,"Anurag Gupta, Vikram Krishnamurthy","http://arxiv.org/abs/2109.03546v2, http://arxiv.org/pdf/2109.03546v2",eess.SP
35067,th,"We propose and study a novel mechanism design setup where each bidder holds
two kinds of private information: (1) type variable, which can be misreported;
(2) information variable, which the bidder may want to conceal or partially
reveal, but importantly, not to misreport. We refer to bidders with such
behaviors as strategically reticent bidders. Among others, one direct
motivation of our model is the ad auction in which many ad platforms today
elicit from each bidder not only their private value per conversion but also
their private information about Internet users (e.g., user activities on the
advertiser's websites) in order to improve the platform's estimation of
conversion rates.
  We show that in this new setup, it is still possible to design mechanisms
that are both Incentive and Information Compatible (IIC). We develop two
different black-box transformations, which convert any mechanism $\mathcal{M}$
for classic bidders to a mechanism $\bar{\mathcal{M}}$ for strategically
reticent bidders, based on either outcome of expectation or expectation of
outcome, respectively. We identify properties of the original mechanism
$\mathcal{M}$ under which the transformation leads to IIC mechanisms
$\bar{\mathcal{M}}$. Interestingly, as corollaries of these results, we show
that running VCG with bidders' expected values maximizes welfare, whereas the
mechanism using expected outcome of Myerson's auction maximizes revenue.
Finally, we study how regulation on the auctioneer's usage of information can
lead to more robust mechanisms.",Auctioning with Strategically Reticent Bidders,2021-09-10 17:01:01,"Jibang Wu, Ashwinkumar Badanidiyuru, Haifeng Xu","http://arxiv.org/abs/2109.04888v2, http://arxiv.org/pdf/2109.04888v2",cs.GT
35068,th,"This paper studies matching markets in the presence of middlemen. In our
framework, a buyer-seller pair may either trade directly or use the services of
a middleman; and a middleman may serve multiple buyer-seller pairs. Direct
trade between a buyer and a seller is costlier than a trade mediated by a
middleman. For each such market, we examine an associated cooperative game with
transferable utility. First, we show that an optimal matching for a matching
market with middlemen can be obtained by considering the two-sided assignment
market where each buyer-seller pair is allowed to use the mediation service of
the middlemen free of charge and attain the maximum surplus. Second, we prove
that the core of a matching market with middlemen is always non-empty. Third,
we show the existence of a buyer-optimal core allocation and a seller-optimal
core allocation. In general, the core does not exhibit a middleman-optimal
matching. Finally, we establish the coincidence between the core and the set of
competitive equilibrium payoff vectors.",Matching markets with middlemen under transferable utility,2021-09-12 11:02:58,"Ata Atay, Eric Bahel, Tamás Solymosi","http://arxiv.org/abs/2109.05456v2, http://arxiv.org/pdf/2109.05456v2",econ.TH
35069,th,"We study a class of non-cooperative aggregative games -- denoted as
\emph{social purpose games} -- in which the payoffs depend separately on a
player's own strategy (individual benefits) and on a function of the strategy
profile which is common to all players (social benefits) weighted by an
individual benefit parameter. This structure allows for an asymmetric
assessment of the social benefit across players.
  We show that these games have a potential and we investigate its properties.
We investigate the payoff structure and the uniqueness of Nash equilibria and
social optima. Furthermore, following the literature on partial cooperation, we
investigate the leadership of a single coalition of cooperators while the rest
of players act as non-cooperative followers. In particular, we show that social
purpose games admit the emergence of a stable coalition of cooperators for the
subclass of \emph{strict} social purpose games. Due to the nature of the
partial cooperative leadership equilibrium, stable coalitions of cooperators
reflect a limited form of farsightedness in their formation.
  As a particular application, we study the tragedy of the commons game. We
show that there emerges a single stable coalition of cooperators to curb the
over-exploitation of the resource.",Emergent Collaboration in Social Purpose Games,2021-09-17 14:21:27,"Robert P. Gilles, Lina Mallozzi, Roberta Messalli","http://arxiv.org/abs/2109.08471v1, http://arxiv.org/pdf/2109.08471v1",cs.GT
35070,th,"School choice is the two-sided matching market where students (on one side)
are to be matched with schools (on the other side) based on their mutual
preferences. The classical algorithm to solve this problem is the celebrated
deferred acceptance procedure, proposed by Gale and Shapley. After both sides
have revealed their mutual preferences, the algorithm computes an optimal
stable matching. Most often in practice, notably when the process is
implemented by a national clearinghouse and thousands of schools enter the
market, there is a quota on the number of applications that a student can
submit: students have to perform a partial revelation of their preferences,
based on partial information on the market. We model this situation by drawing
each student type from a publicly known distribution and study Nash equilibria
of the corresponding Bayesian game. We focus on symmetric equilibria, in which
all students play the same strategy. We show existence of these equilibria in
the general case, and provide two algorithms to compute such equilibria under
additional assumptions, including the case where schools have identical
preferences over students.",Constrained School Choice with Incomplete Information,2021-09-19 11:58:46,"Hugo Gimbert, Claire Mathieu, Simon Mauras","http://arxiv.org/abs/2109.09089v1, http://arxiv.org/pdf/2109.09089v1",cs.GT
35071,th,"There are different interpretations of the terms ""tokens"" and ""token-based
systems"" in the literature around blockchain and digital currencies although
the distinction between token-based and account-based systems is well
entrenched in economics. Despite the wide use of the terminologies of tokens
and tokenisation in the cryptocurrency community, the underlying concept
sometimes does not square well with the economic notions, or is even contrary
to them. The UTXO design of Bitcoin partially exhibits characteristics of a
token-based system and partially those of an account-based system. A
discussion on the difficulty to implement the economic notion of tokens in the
digital domain, along with an exposition of the design of UTXO, is given in
order to discuss why UTXO-based systems should be viewed as account-based
according to the classical economic notion. Besides, a detailed comparison
between UTXO-based systems and account-based systems is presented. Using the
data structure of the system state representation as the defining feature to
distinguish digital token-based and account-based systems is therefore
suggested. This extended definition of token-based systems covers both physical
and digital tokens while neatly distinguishing token-based and account-based
systems.",UTXO in Digital Currencies: Account-based or Token-based? Or Both?,2021-09-20 07:35:28,Aldar C-F. Chan,"http://arxiv.org/abs/2109.09294v1, http://arxiv.org/pdf/2109.09294v1",econ.TH
35072,th,"We solve the binary decision model of Brock and Durlauf in time using a
method reliant on the resolvent of the master operator of the stochastic
process. Our solution is valid when not at equilibrium and can be used to
exemplify path-dependent behaviours of the binary decision model. The solution
is computationally fast and is indistinguishable from Monte Carlo simulation.
Well-known metastable effects are observed in regions of the model's parameter
space where agent rationality is above a critical value, and we calculate the
time scale at which equilibrium is reached using first-passage-time theory, to a
much greater accuracy than previously achieved. In addition to
considering selfish agents, who only care to maximise their own utility, we
consider altruistic agents who make decisions on the basis of maximising global
utility. Curiously, we find that although altruistic agents coalesce more
strongly on a particular decision, thereby increasing their utility in the
short-term, they are also more prone to being subject to non-optimal metastable
regimes as compared to selfish agents. The method used for this solution can be
easily extended to other binary decision models, including Kirman's ant model,
and under reinterpretation also provides a time-dependent solution to the
mean-field Ising model. Finally, we use our time-dependent solution to
construct a likelihood function that can be used on non-equilibrium data for
model calibration. This is a rare finding, since calibration in economic
agent-based models must often be done without an explicit likelihood function. From
simulated data, we show that even with a well-defined likelihood function,
model calibration is difficult unless one has access to data representative of
the underlying model.",Non-equilibrium time-dependent solution to discrete choice with social interactions,2021-09-20 18:32:59,"James Holehouse, Hector Pollitt","http://dx.doi.org/10.1371/journal.pone.0267083, http://arxiv.org/abs/2109.09633v3, http://arxiv.org/pdf/2109.09633v3",econ.TH
35073,th,"How and to what extent will new activities spread through social ties? Here,
we develop a more sophisticated framework than the standard mean-field approach
to describe the diffusion dynamics of multiple activities on complex networks.
We show that the diffusion of multiple activities follows a saddle path and can
be highly unstable. In particular, when the two activities are sufficiently
substitutable, either of them would dominate the other by chance even if they
are equally attractive ex ante. When such symmetry-breaking occurs, any
average-based approach cannot correctly calculate the Nash equilibrium - the
steady state of an actual diffusion process.",Unstable diffusion in social networks,2021-09-28 06:14:01,"Teruyoshi Kobayashi, Yoshitaka Ogisu, Tomokatsu Onaga","http://dx.doi.org/10.1016/j.jedc.2022.104561, http://arxiv.org/abs/2109.14560v1, http://arxiv.org/pdf/2109.14560v1",econ.TH
35074,th,"I model the belief formation and decision making processes of economic agents
during a monetary policy regime change (an acceleration in the money supply)
with a deep reinforcement learning algorithm in the AI literature. I show that
when the money supply accelerates, the learning agents only adjust their
actions, which include consumption and demand for real balance, after gathering
learning experience for many periods. This delayed adjustment leads to low
returns during transition periods. Once they start adjusting to the new
environment, their welfare improves. Their changes in beliefs and actions lead
to temporary inflation volatility. I also show that, 1. the AI agents who
explore their environment more adapt to the policy regime change more quickly,
which leads to welfare improvements and less inflation volatility, and 2. the
AI agents who have experienced a structural change adjust their beliefs and
behaviours more quickly than an inexperienced learning agent.",Can an AI agent hit a moving target?,2021-10-06 06:16:54,"Rui, Shi","http://arxiv.org/abs/2110.02474v3, http://arxiv.org/pdf/2110.02474v3",econ.TH
35076,th,"I explain how faculty members could exploit a method to allocate travel funds
and how to use game theory to design a method that cannot be manipulated.",A Mechanism Design Approach to Allocating Travel Funds,2021-10-08 17:36:55,Michael A. Jones,"http://arxiv.org/abs/2110.04161v1, http://arxiv.org/pdf/2110.04161v1",econ.TH
35077,th,"This paper establishes non-asymptotic convergence of the cutoffs in Random
serial dictatorship in an environment with many students, many schools, and
arbitrary student preferences. Convergence is shown to hold when the number of
schools, $m$, and the number of students, $n$, satisfy the relation $m \ln m
\ll n$, and we provide an example showing that this result is sharp.
  We differ significantly from prior work in the mechanism design literature in
our use of analytic tools from randomized algorithms and discrete probability,
which allow us to show concentration of the RSD lottery probabilities and
cutoffs even against adversarial student preferences.",Stability and Efficiency of Random Serial Dictatorship,2021-10-13 23:50:23,Suhas Vijaykumar,"http://arxiv.org/abs/2110.07024v1, http://arxiv.org/pdf/2110.07024v1",econ.TH
35078,th,"We consider a Bayesian persuasion or information design problem where the
sender tries to persuade the receiver to take a particular action via a
sequence of signals. This we model by considering multi-phase trials with
different experiments conducted based on the outcomes of prior experiments. In
contrast to most of the literature, we consider the problem with constraints on
signals imposed on the sender. This we achieve by fixing some of the
experiments in an exogenous manner; these are called determined experiments.
This modeling helps us understand real-world situations where this occurs:
e.g., multi-phase drug trials where the FDA determines some of the experiments,
funding of a startup by a venture capital firm, start-up acquisition by big
firms where late-stage assessments are determined by the potential acquirer,
multi-round job interviews where the candidates signal initially by presenting
their qualifications but the rest of the screening procedures are determined by
the interviewer. The non-determined experiments (signals) in the multi-phase
trial are to be chosen by the sender in order to persuade the receiver best.
With a binary state of the world, we start by deriving the optimal signaling
policy in the only non-trivial configuration of a two-phase trial with
binary-outcome experiments. We then generalize to multi-phase trials with
binary-outcome experiments where the determined experiments can be placed at
any chosen node in the trial tree. Here we present a dynamic programming
algorithm to derive the optimal signaling policy that uses the two-phase trial
solution's structural insights. We also contrast the optimal signaling policy
structure with classical Bayesian persuasion strategies to highlight the impact
of the signaling constraints on the sender.",Bayesian Persuasion in Sequential Trials,2021-10-18 22:31:10,"Shih-Tang Su, Vijay G. Subramanian, Grant Schoenebeck","http://arxiv.org/abs/2110.09594v3, http://arxiv.org/pdf/2110.09594v3",econ.TH
35080,th,"We examine the evolutionary basis for risk aversion with respect to aggregate
risk. We study populations in which agents face choices between alternatives
with different levels of aggregate risk. We show that the choices that maximize
the long-run growth rate are induced by a heterogeneous population in which the
least and most risk-averse agents are indifferent between facing an aggregate
risk and obtaining its linear and harmonic mean for sure, respectively.
Moreover, approximately optimal behavior can be induced by a simple
distribution according to which all agents have constant relative risk
aversion, and the coefficient of relative risk aversion is uniformly
distributed between zero and two.",Evolutionary Foundation for Heterogeneity in Risk Aversion,2021-10-21 19:15:17,"Yuval Heller, Ilan Nehama","http://dx.doi.org/10.1016/j.jet.2023.105617, http://arxiv.org/abs/2110.11245v3, http://arxiv.org/pdf/2110.11245v3",econ.TH
35081,th,"Athey and Segal introduced an efficient budget-balanced mechanism for a
dynamic stochastic model with quasilinear payoffs and private values, using the
solution concept of perfect Bayesian equilibrium. We show that this
implementation is not robust in multiple senses, especially for at least 3
agents. For example, we will show a generic setup where all efficient strategy
profiles can be eliminated by iterative elimination of weakly dominated
strategies. Furthermore, this model used strong assumptions about the
information of the agents, and the mechanism was not robust to the relaxation
of these assumptions. In this paper, we will show a different mechanism that
implements efficiency under weaker assumptions and uses the stronger solution
concept of ""efficient Nash equilibrium with guaranteed expected payoffs"".",A Robust Efficient Dynamic Mechanism,2021-10-20 18:27:33,Endre Csóka,"http://arxiv.org/abs/2110.15219v2, http://arxiv.org/pdf/2110.15219v2",econ.TH
35082,th,"This paper proposes a spatial model with a realistic geography where a
continuous distribution of agents (e.g., farmers) engages in economic
interactions with one location from a finite set (e.g., cities). The spatial
structure of the equilibrium consists of a tessellation, i.e., a partition of
space into a collection of mutually exclusive market areas. After proving the
existence of a unique equilibrium, we characterize how the location of borders
and, in the case with mobile labor, the set of inhabited cities change in
response to economic shocks. To deal with a two-dimensional space, we draw on
tools from computational geometry and from the theory of shape optimization.
Finally, we provide an empirical application to illustrate the usefulness of
the framework for applied work.",Market Areas in General Equilibrium,2021-10-29 18:14:24,"Gianandrea Lanzara, Matteo Santacesaria","http://dx.doi.org/10.1016/j.jet.2023.105675, http://arxiv.org/abs/2110.15849v2, http://arxiv.org/pdf/2110.15849v2",econ.TH
35083,th,"In the literature on simultaneous non-cooperative games, it is a widely used
fact that a positive affine (linear) transformation of the utility payoffs
neither changes the best response sets nor the Nash equilibrium set. We
investigate which other game transformations also possess one of these two
properties when being applied to an arbitrary N-player game (N >= 2):
  (i) The Nash equilibrium set stays the same.
  (ii) The best response sets stay the same.
  For game transformations that operate player-wise and strategy-wise, we prove
that (i) implies (ii) and that transformations with property (ii) must be
positive affine. The resulting equivalence chain gives an explicit description
of all those game transformations that always preserve the Nash equilibrium set
(or, respectively, the best response sets). Simultaneously, we obtain two new
characterizations of the class of positive affine transformations.",Game Transformations That Preserve Nash Equilibria or Best Response Sets,2021-10-29 23:36:57,Emanuel Tewolde,"http://arxiv.org/abs/2111.00076v2, http://arxiv.org/pdf/2111.00076v2",cs.GT
35085,th,"Many real-world situations of ethical and economic relevance, such as
collective (in)action with respect to the climate crisis, involve not only
diverse agents whose decisions interact in complicated ways, but also various
forms of uncertainty, including both quantifiable risk and unquantifiable
ambiguity. In such cases, an assessment of moral responsibility for ethically
undesired outcomes or of the responsibility to avoid these is challenging and
prone to the risk of under- or over-determination. In contrast to existing
approaches that employ notions of causation based on combinations of necessity
and sufficiency or certain logics that focus on a binary classification of
`responsible' vs `not responsible', we present a set of quantitative metrics
that assess responsibility degrees in units of probability. To this end, we
adapt extensive-form game trees as the framework for representing decision
scenarios and evaluate the proposed responsibility functions based on the
correct representation of a set of analytically assessed paradigmatic example
scenarios. We test the best performing metrics on a reduced representation of a
real-world decision scenario and are able to compute meaningful responsibility
scores.",Quantifying Responsibility with Probabilistic Causation -- The Case of Climate Action,2021-11-03 18:42:37,"Sarah Hiller, Jobst Heitzig","http://arxiv.org/abs/2111.02304v1, http://arxiv.org/pdf/2111.02304v1",physics.soc-ph
35086,th,"In blockchains such as Bitcoin and Ethereum, users compete in a transaction
fee auction to get their transactions confirmed in the next block. A line of
recent works set forth the desiderata for a ""dream"" transaction fee mechanism
(TFM), and explored whether such a mechanism existed. A dream TFM should
satisfy 1) user incentive compatibility (UIC), i.e., truthful bidding should be
a user's dominant strategy; 2) miner incentive compatibility (MIC), i.e., the
miner's dominant strategy is to faithfully implement the prescribed mechanism;
and 3) miner-user side contract proofness (SCP), i.e., no coalition of the
miner and one or more user(s) can increase their joint utility by deviating
from the honest behavior. The weakest form of SCP is called 1-SCP, where we
only aim to provide resilience against the collusion of the miner and a single
user. Sadly, despite various attempts, to the best of our knowledge, no
existing mechanism can satisfy all three properties in all situations.
  Since the TFM departs from classical mechanism design in modeling and
assumptions, to date, our understanding of the design space is relatively
little. In this paper, we further unravel the mathematical structure of
transaction fee mechanism design by investigating the following:
  - Can we have a dream TFM?
  - Rethinking the incentive compatibility notions.
  - Do the new design elements make a difference?",Foundations of Transaction Fee Mechanism Design,2021-11-04 23:43:24,"Hao Chung, Elaine Shi","http://arxiv.org/abs/2111.03151v3, http://arxiv.org/pdf/2111.03151v3",cs.GT
35087,th,"We propose an incentive-based traffic demand management policy to alleviate
traffic congestion on a road stretch that creates a bottleneck for the
commuters. The incentive targets electric vehicles owners by proposing a
discount on the energy price they use to charge their vehicles if they are
flexible in their departure time. We show that, with a sufficient monetary
budget, it is possible to completely eliminate the traffic congestion and we
compute the optimal discount. We also analyse the case of a limited budget, when
the congestion cannot be completely eliminated. We compute analytically the
policy minimising the congestion and estimate the level of inefficiency for
different budgets. We corroborate our theoretical findings with numerical
simulations that allow us to highlight the power of the proposed method in
providing practical advice for the design of policies.",Incentive-Based Electric Vehicle Charging for Managing Bottleneck Congestion,2021-11-10 12:46:14,"Carlo Cenedese, Patrick Stokkink, Nikolas Gerolimins, John Lygeros","http://arxiv.org/abs/2111.05600v1, http://arxiv.org/pdf/2111.05600v1",math.OC
35089,th,"We study the mechanism design problem of selling $k$ items to unit-demand
buyers with private valuations for the items. A buyer either participates
directly in the auction or is represented by an intermediary, who represents a
subset of buyers. Our goal is to design robust mechanisms that are independent
of the demand structure (i.e. how the buyers are partitioned across
intermediaries), and perform well under a wide variety of possible contracts
between intermediaries and buyers.
  We first study the case of $k$ identical items where each buyer draws its
private valuation for an item i.i.d. from a known $\lambda$-regular
distribution. We construct a robust mechanism that, independent of the demand
structure and under certain conditions on the contracts between intermediaries
and buyers, obtains a constant factor of the revenue that the mechanism
designer could obtain had she known the buyers' valuations. In other words, our
mechanism's expected revenue achieves a constant factor of the optimal welfare,
regardless of the demand structure. Our mechanism is a simple posted-price
mechanism that sets a take-it-or-leave-it per-item price that depends on $k$
and the total number of buyers, but does not depend on the demand structure or
the downstream contracts.
  Next we generalize our result to the case when the items are not identical.
We assume that the item valuations are separable. For this case, we design a
mechanism that obtains at least a constant fraction of the optimal welfare, by
using a menu of posted prices. This mechanism is also independent of the demand
structure, but makes a relatively stronger assumption on the contracts between
intermediaries and buyers, namely that each intermediary prefers outcomes with
a higher sum of utilities of the subset of buyers represented by it.",Maximizing revenue in the presence of intermediaries,2021-11-20 02:02:24,"Gagan Aggarwal, Kshipra Bhawalkar, Guru Guruganesh, Andres Perlroth","http://arxiv.org/abs/2111.10472v1, http://arxiv.org/pdf/2111.10472v1",cs.GT
35096,th,"We provide a generic method to find full dynamical solutions to binary
decision models with interactions. In these models, agents follow a stochastic
evolution where they must choose between two possible choices by taking into
account the choices of their peers. We illustrate our method by solving Kirman
and F\""ollmer's ant recruitment model for any number $N$ of agents and for any
choice of parameters, recovering past results found in the limit $N\to \infty$.
We then solve extensions of the ant recruitment model for increasing asymmetry
between the two choices. Finally, we provide an analytical time-dependent
solution to the standard voter model and a semi-analytical solution to the
vacillating voter model.",Exact time-dependent dynamics of discrete binary choice models,2022-01-24 13:32:08,"James Holehouse, José Moran","http://arxiv.org/abs/2201.09573v3, http://arxiv.org/pdf/2201.09573v3",cond-mat.stat-mech
35090,th,"Convertible instruments are contracts, used in venture financing, which give
investors the right to receive shares in the venture in certain circumstances.
In liquidity events, investors may have the option to either receive back their
principal investment, or to receive a proportional payment after conversion of
the contract to a shareholding. In each case, the value of the payment may
depend on the choices made by other investors who hold such convertible
contracts. A liquidity event therefore sets up a game theoretic optimization
problem. The paper defines a general model for such games, which is shown to
cover all instances of the Y Combinator Simple Agreement for Future Equity
(SAFE) contracts, a type of convertible instrument that is commonly used to
finance startup ventures. The paper shows that, in general, pure strategy Nash
equilibria do not necessarily exist in this model, and there may not exist an
optimum pure strategy Nash equilibrium in cases where pure strategy Nash
equilibria do exist. However, it is shown that when all contracts are uniformly of one
of the SAFE contract types, an optimum pure strategy Nash equilibrium exists.
Polynomial time algorithms for computing (optimum) pure strategy Nash
equilibria in these cases are developed.",A Game Theoretic Analysis of Liquidity Events in Convertible Instruments,2021-11-24 05:56:06,Ron van der Meyden,"http://arxiv.org/abs/2111.12237v1, http://arxiv.org/pdf/2111.12237v1",econ.TH
35091,th,"To profit from price oscillations, investors frequently use threshold-type
strategies where changes in the portfolio position are triggered by some
indicators reaching prescribed levels. In this paper, we investigate
threshold-type strategies in the context of ergodic control. We make the first
steps towards their optimization by proving the ergodic properties of related
functionals. Assuming Markovian price increments satisfying a minorization
condition and (one-sided) boundedness we show, in particular, that for given
thresholds, the distribution of the gains converges in the long run. We also
extend recent results on the stability of overshoots of random walks from the
i.i.d. increment case to Markovian increments, under suitable conditions.",Ergodic aspects of trading with threshold strategies,2021-11-29 20:07:07,"Attila Lovas, Miklós Rásonyi","http://arxiv.org/abs/2111.14708v2, http://arxiv.org/pdf/2111.14708v2",math.PR
35092,th,"We introduce \emph{informational punishment} to the design of mechanisms that
compete with an exogenous status quo mechanism: Players can send garbled public
messages with some delay, and others cannot commit to ignoring them. Optimal
informational punishment ensures that full participation is without loss, even
if any single player can publicly enforce the status quo mechanism.
Informational punishment permits using a standard revelation principle, is
independent of the mechanism designer's objective, and operates exclusively off
the equilibrium path. It is robust to refinements and applies in
informed-principal settings. We provide conditions that make it robust to
opportunistic signal designers.",Mechanism Design with Informational Punishment,2022-01-04 17:18:33,"Benjamin Balzer, Johannes Schneider","http://dx.doi.org/10.1016/j.geb.2023.03.012, http://arxiv.org/abs/2201.01149v2, http://arxiv.org/pdf/2201.01149v2",econ.TH
35093,th,"We propose a general framework of mass transport between vector-valued
measures, which will be called simultaneous optimal transport (SOT). The new
framework is motivated by the need to transport resources of different types
simultaneously, i.e., in single trips, from specified origins to destinations;
similarly, in economic matching, one needs to couple two groups, e.g., buyers
and sellers, by equating supplies and demands of different goods at the same
time. The mathematical structure of simultaneous transport is very different
from the classic setting of optimal transport, leading to many new challenges.
The Monge and Kantorovich formulations are contrasted and connected. Existence
conditions and duality formulas are established. More interestingly, by
connecting SOT to a natural relaxation of martingale optimal transport (MOT),
we introduce the MOT-SOT parity, which allows for explicit solutions of SOT in
many interesting cases.",Simultaneous Optimal Transport,2022-01-10 20:26:55,"Ruodu Wang, Zhenyuan Zhang","http://arxiv.org/abs/2201.03483v2, http://arxiv.org/pdf/2201.03483v2",econ.TH
35094,th,"Indirect reciprocity is a mechanism by which individuals cooperate with those
who have cooperated with others. This creates a regime in which repeated
interactions are not necessary to incent cooperation (as would be required for
direct reciprocity). However, indirect reciprocity creates a new problem: how
do agents know who has cooperated with others? To know this, agents would need
to access some form of reputation information. Perhaps there is a communication
system to disseminate reputation information, but how does it remain truthful
and informative? Most papers assume the existence of a truthful, forthcoming,
and informative communication system; in this paper, we seek to explain how
such a communication system could remain evolutionarily stable in the absence
of exogenous pressures. Specifically, we present three conditions that together
maintain both the truthfulness of the communication system and the prevalence
of cooperation: individuals (1) use a norm that rewards the behaviors that it
prescribes (an aligned norm), (2) can signal not only about the actions of
other agents, but also about their truthfulness (by acting as third party
observers to an interaction), and (3) make occasional mistakes, demonstrating
how error can create stability by introducing diversity.","Tit for Tattling: Cooperation, communication, and how each could stabilize the other",2022-01-18 10:44:11,"Victor Vikram Odouard, Michael Holton Price","http://arxiv.org/abs/2201.06792v4, http://arxiv.org/pdf/2201.06792v4",q-bio.PE
35095,th,"This article clarifies the relationship between pricing kernel monotonicity
and the existence of opportunities for stochastic arbitrage in a complete and
frictionless market of derivative securities written on a market portfolio. The
relationship depends on whether the payoff distribution of the market portfolio
satisfies a technical condition called adequacy, meaning that it is atomless or
is comprised of finitely many equally probable atoms. Under adequacy, pricing
kernel nonmonotonicity is equivalent to the existence of a strong form of
stochastic arbitrage involving distributional replication of the market
portfolio at a lower price. If the adequacy condition is dropped then this
equivalence no longer holds, but pricing kernel nonmonotonicity remains
equivalent to the existence of a weaker form of stochastic arbitrage involving
second-order stochastic dominance of the market portfolio at a lower price. A
generalization of the optimal measure preserving derivative is obtained which
achieves distributional replication at the minimum cost of all second-order
stochastically dominant securities under adequacy.",Optimal measure preserving derivatives revisited,2022-01-22 21:17:29,Brendan K. Beare,"http://arxiv.org/abs/2201.09108v4, http://arxiv.org/pdf/2201.09108v4",q-fin.MF
35097,th,"Periodic double auctions (PDA) have applications in many areas such as in
e-commerce, intra-day equity markets, and day-ahead energy markets in
smart-grids. While the trades accomplished using PDAs are worth trillions of
dollars, finding a reliable bidding strategy in such auctions is still a
challenge as it requires the consideration of future auctions. A participating
buyer in a PDA has to design its bidding strategy by planning for current and
future auctions. Many equilibrium-based bidding strategies that have been proposed
are too complex to use in real time. In the current exposition, we propose a scale-based
bidding strategy for buyers participating in PDA. We first present an
equilibrium analysis for single-buyer single-seller multi-unit single-shot
k-Double auctions. Specifically, we analyze the situation when a seller and a
buyer trade two identical units of quantity in a double auction where both the
buyer and the seller deploy a simple, scale-based bidding strategy. The
equilibrium analysis becomes intractable as the number of participants
increases. To be useful in more complex settings such as wholesale markets in
smart-grids, we model equilibrium bidding strategy as a learning problem. We
develop a deep deterministic policy gradient (DDPG) based learning strategy,
DDPGBBS, for a participating agent in PDAs to suggest an action at any auction
instance. DDPGBBS, which empirically follows the obtained theoretical
equilibrium, is easily extendable when the number of buyers/sellers increases.
We take Power Trading Agent Competition's (PowerTAC) wholesale market PDA as a
testbed to evaluate our novel bidding strategy. We benchmark our DDPG-based
strategy against several baselines and state-of-the-art bidding strategies of
the PowerTAC wholesale market PDA and demonstrate the efficacy of DDPGBBS
against several benchmarked strategies.",Multi-unit Double Auctions: Equilibrium Analysis and Bidding Strategy using DDPG in Smart-grids,2022-01-25 09:50:37,"Sanjay Chandlekar, Easwar Subramanian, Sanjay Bhat, Praveen Paruchuri, Sujit Gujar","http://arxiv.org/abs/2201.10127v2, http://arxiv.org/pdf/2201.10127v2",cs.GT
35100,th,"We study the problem of fairly allocating a multiset $M$ of $m$ indivisible
items among $n$ agents with additive valuations. Specifically, we introduce a
parameter $t$ for the number of distinct types of items and study fair
allocations of multisets that contain only items of these $t$ types, under two
standard notions of fairness:
  1. Envy-freeness (EF): For arbitrary $n$, $t$, we show that a complete EF
allocation exists when at least one agent has a unique valuation and the number
of items of each type exceeds a particular finite threshold. We give explicit
upper and lower bounds on this threshold in some special cases.
  2. Envy-freeness up to any good (EFX): For arbitrary $n$, $m$, and for $t\le
2$, we show that a complete EFX allocation always exists. We give two different
proofs of this result. One proof is constructive and runs in polynomial time;
the other is geometrically inspired.",Fair allocation of a multiset of indivisible items,2022-02-10 20:40:58,"Pranay Gorantla, Kunal Marwaha, Santhoshini Velusamy","http://dx.doi.org/10.1137/1.9781611977554.ch13, http://arxiv.org/abs/2202.05186v5, http://arxiv.org/pdf/2202.05186v5",cs.GT
35101,th,"We study two-sided many-to-one matching markets with transferable utilities,
e.g., labor and rental housing markets, in which money can exchange hands
between agents, subject to distributional constraints on the set of feasible
allocations. In such markets, we establish the efficiency of equilibrium
arrangements, specified by an assignment and transfers between agents on the
two sides of the market, and study the conditions on the distributional
constraints and agent preferences under which equilibria exist and can be
computed efficiently. To this end, we first consider the setting when the
number of institutions (e.g., firms in a labor market) is one and show that
equilibrium arrangements exist irrespective of the nature of the constraint
structure or the agents' preferences. However, equilibrium arrangements may not
exist in markets with multiple institutions even when agents on each side have
linear (or additively separable) preferences over agents on the other side.
Thus, for markets with linear preferences, we study sufficient conditions on
the constraint structure that guarantee the existence of equilibria using
linear programming duality. Our linear programming approach not only
generalizes that of Shapley and Shubik (1971) in the one-to-one matching
setting to the many-to-one matching setting under distributional constraints
but also provides a method to compute market equilibria efficiently.",Matching with Transfers under Distributional Constraints,2022-02-10 21:39:22,"Devansh Jalota, Michael Ostrovsky, Marco Pavone","http://arxiv.org/abs/2202.05232v2, http://arxiv.org/pdf/2202.05232v2",econ.TH
35102,th,"We study the complexity of closure operators, with applications to machine
learning and decision theory. In machine learning, closure operators emerge
naturally in data classification and clustering. In decision theory, they can
model equivalence of choice menus, and therefore situations with a preference
for flexibility. Our contribution is to formulate a notion of complexity of
closure operators, which translates into the complexity of a classifier in ML,
or of a utility function in decision theory.",Closure operators: Complexity and applications to classification and decision-making,2022-02-11 00:39:42,"Hamed Hamze Bajgiran, Federico Echenique","http://arxiv.org/abs/2202.05339v2, http://arxiv.org/pdf/2202.05339v2",econ.TH
35103,th,"Firms and statistical agencies must protect the privacy of the individuals
whose data they collect, analyze, and publish. Increasingly, these
organizations do so by using publication mechanisms that satisfy differential
privacy. We consider the problem of choosing such a mechanism so as to maximize
the value of its output to end users. We show that this is a constrained
information design problem, and characterize its solution. When the underlying
database is drawn from a symmetric distribution -- for instance, if
individuals' data are i.i.d. -- we show that the problem's dimensionality can
be reduced, and that its solution belongs to a simpler class of mechanisms.
When, in addition, data users have supermodular payoffs, we show that the
simple geometric mechanism is always optimal by using a novel comparative
static that ranks information structures according to their usefulness in
supermodular decision problems.",Information Design for Differential Privacy,2022-02-11 08:17:05,"Ian M. Schmutte, Nathan Yoder","http://arxiv.org/abs/2202.05452v5, http://arxiv.org/pdf/2202.05452v5",econ.TH
35104,th,"In a multiple partners matching problem the agents can have multiple partners
up to their capacities. In this paper we consider both the two-sided
many-to-many stable matching problem and the one-sided stable fixtures problem
under lexicographic preferences. We study strong core and Pareto-optimal
solutions for this setting from a computational point of view. First we provide
an example to show that the strong core can be empty even under these severe
restrictions for many-to-many problems, and that deciding the non-emptiness of
the strong core is NP-hard. We also show that, for a given matching, checking
Pareto-optimality and the strong core property are co-NP-complete problems
for the many-to-many problem, and deciding the existence of a complete
Pareto-optimal matching is also NP-hard for the fixtures problem. On the
positive side, we give efficient algorithms for finding a near feasible strong
core solution, where the capacities are only violated by at most one unit for
each agent, and also for finding a half-matching in the strong core of
fractional matchings. These polynomial time algorithms are based on the Top
Trading Cycle algorithm. Finally, we also show that finding a maximum size
matching that is Pareto-optimal can be done efficiently for many-to-many
problems, which is in contrast with the hardness result for the fixtures
problem.",Strong core and Pareto-optimal solutions for the multiple partners matching problem under lexicographic preferences,2022-02-11 10:17:02,"Péter Biró, Gergely Csáji","http://arxiv.org/abs/2202.05484v1, http://arxiv.org/pdf/2202.05484v1",cs.GT
35105,th,"In the context of a large class of stochastic processes used to describe the
dynamics of wealth growth, we prove a set of inequalities establishing
necessary and sufficient conditions in order to avoid infinite wealth
concentration. These inequalities generalize results previously found only in
the context of particular models, or with more restrictive sets of hypotheses.
In particular, we emphasize the role of the additive component of growth -
usually representing labor incomes - in limiting the growth of inequality. Our
main result is a proof that in an economy with random wealth growth, with
returns non-negatively correlated with wealth, an average labor income growing
at least proportionally to the average wealth is necessary to avoid a runaway
concentration. One of the main advantages of this result with respect to the
standard economics literature is the independence from the concept of an
equilibrium wealth distribution, which does not always exist in random growth
models. We analyze in this light three toy models, widely studied in the
economics and econophysics literature.",A constraint on the dynamics of wealth concentration,2022-02-11 20:35:09,Valerio Astuti,"http://arxiv.org/abs/2202.05789v5, http://arxiv.org/pdf/2202.05789v5",econ.TH
35106,th,"In dynamic capital structure models with an investor break-even condition,
the firm's Bellman equation may not generate a contraction mapping, so the
standard existence and uniqueness conditions do not apply. First, we provide an
example showing the problem in a classical trade-off model. The firm can issue
one-period defaultable debt, invest in capital and pay a dividend. If the firm
cannot meet the required debt payment, it is liquidated. Second, we show how to
use a dual to the original problem and a change of measure, such that existence
and uniqueness can be proved. In the unique Markov-perfect equilibrium, firm
decisions reflect state-dependent capital and debt targets. Our approach may be
useful for other dynamic firm models that have an investor break-even
condition.",Equilibrium Defaultable Corporate Debt and Investment,2022-02-11 22:56:54,"Hong Chen, Murray Zed Frank","http://arxiv.org/abs/2202.05885v1, http://arxiv.org/pdf/2202.05885v1",q-fin.GN
35107,th,"The difficulty of recruiting patients is a well-known issue in clinical
trials which inhibits or sometimes precludes them in practice. We interpret
this issue as a non-standard version of exploration-exploitation tradeoff:
here, the clinical trial would like to explore as uniformly as possible,
whereas each patient prefers ``exploitation"", i.e. treatments that seem best.
We study how to incentivize participation by leveraging information asymmetry
between the trial and the patients. We measure statistical performance via
worst-case estimation error under adversarially generated outcomes, a standard
objective for clinical trials. We obtain a near-optimal solution in terms of
this objective. Namely, we provide an incentive-compatible mechanism with a
particular guarantee, and a nearly matching impossibility result for any
incentive-compatible mechanism. Our results extend to agents with heterogeneous
public and private types.",Incentivizing Participation in Clinical Trials,2022-02-13 06:23:55,"Yingkai Li, Aleksandrs Slivkins","http://arxiv.org/abs/2202.06191v4, http://arxiv.org/pdf/2202.06191v4",cs.GT
35108,th,"We investigate the implementation of reduced-form allocation probabilities in
a two-person bargaining problem without side payments, where the agents have to
select one alternative from a finite set of social alternatives. We provide a
necessary and sufficient condition for the implementability. We find that the
implementability condition in bargaining has some new feature compared to
Border's theorem. Our results have applications in compromise problems and
package exchange problems where the agents barter indivisible objects and the
agents value the objects as complements.",Reduced-Form Allocations with Complementarity: A 2-Person Case,2022-02-13 10:52:06,Xu Lang,"http://arxiv.org/abs/2202.06245v3, http://arxiv.org/pdf/2202.06245v3",econ.TH
35110,th,"Many modern Internet applications, like content moderation and recommendation
on social media, require reviewing and scoring a large number of alternatives. In
such a context, voting can only be sparse, as the number of alternatives is
too large for any individual to review a significant fraction of all of them.
Moreover, in critical applications, malicious players might seek to hack the
voting process by entering dishonest reviews or creating fake accounts.
Classical voting methods are unfit for this task, as they usually (a) require
each reviewer to assess all available alternatives and (b) can be easily
manipulated by malicious players.
  This paper defines precisely the problem of robust sparse voting, highlights
its underlying technical challenges, and presents Mehestan, a novel voting
mechanism that solves the problem. Namely, we prove that by using Mehestan, no
(malicious) voter can have more than a small parametrizable effect on each
alternative's score, and we identify conditions on voter comparability under
which any unanimous preferences can be recovered, even when these preferences
are expressed by voters on very different scales.",Robust Sparse Voting,2022-02-17 16:40:33,"Youssef Allouah, Rachid Guerraoui, Lê-Nguyên Hoang, Oscar Villemaud","http://arxiv.org/abs/2202.08656v1, http://arxiv.org/pdf/2202.08656v1",cs.GT
35111,th,"We analyze a model of selling a single object to a principal-agent pair who
want to acquire the object for a firm. The principal and the agent have
different assessments of the object's value to the firm. The agent is
budget-constrained while the principal is not. The agent participates in the
mechanism, but she can (strategically) delegate decision-making to the
principal. We derive the revenue-maximizing mechanism in a two-dimensional type
space (values of the agent and the principal). We show that below a threshold
budget, a mechanism involving two posted prices and three outcomes (one of
which involves randomization) is the optimal mechanism for the seller.
Otherwise, a single posted price mechanism is optimal.",Selling to a principal and a budget-constrained agent,2022-02-21 20:15:10,"Debasis Mishra, Kolagani Paramahamsa","http://arxiv.org/abs/2202.10378v2, http://arxiv.org/pdf/2202.10378v2",econ.TH
35112,th,"We consider a hypergraph (I,C), with possible multiple (hyper)edges and
loops, in which the vertices $i\in I$ are interpreted as agents, and the edges
$c\in C$ as contracts that can be concluded between agents. The preferences of
each agent i concerning the contracts in which i takes part are given by a
choice function $f_i$ possessing the so-called path independent property. In
this general setup we introduce the notion of a stable network of contracts.
  The paper contains two main results. The first one is that a general problem
on stable systems of contracts for (I,C,f) is reduced to a set of special ones
in which the preferences of agents are described by so-called weak orders, or
utility functions. However, for a special case of this sort, a stable system may
not exist. To overcome this difficulty in such special cases, we introduce a
weaker notion of metastability for systems of contracts.
Our second result is that a metastable system always exists.",Stable and metastable contract networks,2022-02-26 10:48:24,"Vladimir I. Danilov, Alexander V. Karzanov","http://arxiv.org/abs/2202.13089v2, http://arxiv.org/pdf/2202.13089v2",math.CO
35113,th,"This paper introduces a class of objects called decision rules that map
infinite sequences of alternatives to a decision space. These objects can be
used to model situations where a decision maker encounters alternatives in a
sequence such as receiving recommendations. Within the class of decision rules,
we study natural subclasses: stopping and uniform stopping rules. Our main
result establishes the equivalence of these two subclasses of decision rules.
Next, we introduce the notion of computability of decision rules using Turing
machines and show that computable rules can be implemented using a simpler
computational device: a finite automaton. We further show that computability of
choice rules -- an important subclass of decision rules -- is implied by their
continuity with respect to a natural topology. Finally, we introduce some
natural heuristics in this framework and provide their behavioral
characterization.",Decisions over Sequences,2022-02-28 23:16:24,"Bhavook Bhardwaj, Siddharth Chatterjee","http://arxiv.org/abs/2203.00070v2, http://arxiv.org/pdf/2203.00070v2",econ.TH
35114,th,"Discovery program (DISC) is a policy used by the New York City Department of
Education (NYC DOE) to increase the number of admissions of students from low
socio-economic background to specialized high schools. This policy has been
instrumental in increasing the number of disadvantaged students attending these
schools. However, assuming that students care more about the school they are
assigned to rather than the type of seat they occupy (\emph{school-over-seat
hypothesis}), our empirical analysis using data from 12 recent academic years
shows that DISC creates about 950 in-group blocking pairs each year amongst
disadvantaged students, impacting about 650 disadvantaged students every year.
Moreover, we find that this program does not respect improvements, thus
unintentionally creating an incentive to under-perform. These experimental
results are confirmed by our theoretical analysis.
  In order to alleviate the concerns caused by DISC, we explore two alternative
policies: the minority reserve (MR) and the joint-seat allocation (JSA)
mechanisms. As our main theoretical contribution, we introduce a feature of
markets, that we term high competitiveness, and we show that under this
condition, JSA dominates MR for all disadvantaged students. We give sufficient
conditions under which high competitiveness is verified. Data from NYC DOE
satisfies the high competitiveness condition, and for this dataset our
empirical results corroborate our theoretical predictions, showing the
superiority of JSA. Given that JSA can be implemented by a simple modification
of the classical deferred acceptance algorithm with responsive preference
lists, we believe that, when the school-over-seat hypothesis holds, the
discovery program can be changed for the better by implementing the JSA
mechanism, leading in particular to aligned incentives for the top-performing
disadvantaged students.",Discovering Opportunities in New York City's Discovery Program: Disadvantaged Students in Highly Competitive Markets,2022-03-01 18:29:09,"Yuri Faenza, Swati Gupta, Xuan Zhang","http://arxiv.org/abs/2203.00544v2, http://arxiv.org/pdf/2203.00544v2",cs.GT
35115,th,"An informed sender communicates with an uninformed receiver through a
sequence of uninformed mediators; agents' utilities depend on receiver's action
and the state. For any number of mediators, the sender's optimal value is
characterized. For one mediator, the characterization has a geometric meaning
of constrained concavification of sender's utility, optimal persuasion requires
the same number of signals as without mediators, and the presence of the
mediator is never profitable for the sender. Surprisingly, the second mediator
may improve the value but optimal persuasion may require more signals.",Bayesian Persuasion with Mediators,2022-03-08 21:57:42,"Itai Arieli, Yakov Babichenko, Fedor Sandomirskiy","http://arxiv.org/abs/2203.04285v2, http://arxiv.org/pdf/2203.04285v2",econ.TH
35123,th,"The Combinatorial Multi-Round Auction (CMRA) is a new auction format which
has already been used in several recent European spectrum auctions. We
characterize equilibria in the CMRA that feature auction-specific forms of
truthful bidding, demand expansion, and demand reduction for settings in which
bidders have either decreasing or non-decreasing marginal values. In
particular, we establish sufficient conditions for riskless collusion. Overall,
our results suggest that the CMRA might be an attractive auction design in the
presence of highly complementary goods on sale. We discuss to what extent our
theory is consistent with outcomes data in Danish spectrum auctions and how our
predictions can be tested using bidding data.",The Combinatorial Multi-Round Ascending Auction,2022-03-22 17:45:24,"Bernhard Kasberger, Alexander Teytelboym","http://arxiv.org/abs/2203.11783v1, http://arxiv.org/pdf/2203.11783v1",econ.TH
35116,th,"We propose a class of semimetrics for preference relations any one of which
is an alternative to the classical Kemeny-Snell-Bogart metric. (We take a
fairly general viewpoint about what constitutes a preference relation, allowing
for any acyclic order to act as one.) These semimetrics are based solely on the
implications of preferences for choice behavior, and thus appear more suitable
in economic contexts and choice experiments. In our main result, we obtain a
fairly simple axiomatic characterization for the class we propose. The
apparently most important member of this class (at least in the case of finite
alternative spaces), which we dub the top-difference semimetric, is
characterized separately. We also obtain alternative formulae for it, and
relative to this metric, compute the diameter of the space of complete
preferences, as well as the best transitive extension of a given acyclic
preference relation. Finally, we prove that our preference metric spaces cannot
be isometrically embedded in a Euclidean space.",A class of dissimilarity semimetrics for preference relations,2022-03-09 00:48:43,"Hiroki Nishimura, Efe A. Ok","http://arxiv.org/abs/2203.04418v1, http://arxiv.org/pdf/2203.04418v1",math.CO
35117,th,"Game dynamics structure (e.g., endogenous cycle motion) in human subjects
game experiments can be predicted by game dynamics theory. However, whether the
structure can be controlled by mechanism design to a desired goal is not known.
Here, using the pole assignment approach in modern control theory, we
demonstrate how to control the structure in two steps: (1) Illustrate a
theoretical workflow for designing a state-dependent feedback controller for the
desired structure; (2) Evaluate the controller through laboratory human subject
game experiments and agent-based evolutionary dynamics simulations. To our
knowledge, this is the first realisation of controlling the structure of human
social game dynamics in both theory and experiment.",Game Dynamics Structure Control by Design: an Example from Experimental Economics,2022-03-11 20:12:14,Wang Zhijian,"http://arxiv.org/abs/2203.06088v1, http://arxiv.org/pdf/2203.06088v1",econ.TH
35118,th,"We study efficiency in general collective choice problems where agents have
ordinal preferences and randomization is allowed. We explore the structure of
preference profiles where ex-ante and ex-post efficiency coincide, offer a
unifying perspective on the known results, and give several new
characterizations. The results have implications for well-studied mechanisms
including random serial dictatorship and a number of specific environments,
including the dichotomous, single-peaked, and social choice domains.",Efficiency in Random Resource Allocation and Social Choice,2022-03-12 09:18:28,"Federico Echenique, Joseph Root, Fedor Sandomirskiy","http://arxiv.org/abs/2203.06353v3, http://arxiv.org/pdf/2203.06353v3",econ.TH
35119,th,"We study fair allocation of indivisible goods and chores among agents with
\emph{lexicographic} preferences -- a subclass of additive valuations. In sharp
contrast to the goods-only setting, we show that an allocation satisfying
\emph{envy-freeness up to any item} (EFX) could fail to exist for a mixture of
\emph{objective} goods and chores. To our knowledge, this negative result
provides the \emph{first} counterexample for EFX over (any subdomain of)
additive valuations. To complement this non-existence result, we identify a
class of instances with (possibly subjective) mixed items where an EFX and
Pareto optimal allocation always exists and can be efficiently computed. When
the fairness requirement is relaxed to \emph{maximin share} (MMS), we show
positive existence and computation for \emph{any} mixed instance. More broadly,
our work examines the existence and computation of fair and efficient
allocations both for mixed items as well as chores-only instances, and
highlights the additional difficulty of these problems vis-{\`a}-vis their
goods-only counterparts.",Fairly Dividing Mixtures of Goods and Chores under Lexicographic Preferences,2022-03-14 19:54:09,"Hadi Hosseini, Sujoy Sikdar, Rohit Vaish, Lirong Xia","http://arxiv.org/abs/2203.07279v1, http://arxiv.org/pdf/2203.07279v1",cs.GT
35120,th,"In this discussion draft, we investigate five different models of duopoly
games, where the market is assumed to have an isoelastic demand function.
Moreover, quadratic cost functions reflecting decreasing returns to scale are
considered. The games in this draft are formulated with systems of two
nonlinear difference equations. Existing equilibria and their local stability
are analyzed by symbolic computations. In the model where a gradient-adjusting
player and a rational (or a boundedly rational) player compete with each other,
diseconomies of scale are proved to have a stabilizing effect, which is
consistent with similar results found by Fisher for
homogeneous oligopolies with linear demand functions.",Cournot duopoly games with isoelastic demands and diseconomies of scale,2022-03-18 17:07:47,Xiaoliang Li,"http://arxiv.org/abs/2203.09972v1, http://arxiv.org/pdf/2203.09972v1",econ.TH
35121,th,"We address the problem of mechanism design for two-stage repeated stochastic
games -- a novel setting with which many emerging problems in next-generation
electricity markets can be readily modeled. Repeated play affords the
players a large class of strategies that adapt a player's actions to all past
observations and inferences obtained therefrom. In other settings such as
iterative auctions or dynamic games where a large strategy space of this sort
manifests, it typically has an important implication for mechanism design: It
may be impossible to obtain truth-telling as a dominant strategy equilibrium.
Consequently, in such scenarios, it is common to settle for mechanisms that
render truth-telling only a Nash equilibrium, or variants thereof, even though
Nash equilibria are known to be poor models of real-world behavior. This is
owing to each player having to make overly specific assumptions about the
behaviors of the other players to employ their Nash equilibrium strategy, which
they may not make. In general, the lesser the burden of speculation in an
equilibrium, the more plausible it is that it models real-world behavior.
Guided by this maxim, we introduce a new notion of equilibrium called Dominant
Strategy Non-Bankrupting Equilibrium (DNBE) which requires the players to make
very few assumptions about the behavior of the other players to employ their
equilibrium strategy. Consequently, a mechanism that renders truth-telling a
DNBE as opposed to only a Nash equilibrium could be quite effective in molding
real-world behavior along truthful lines. We present a mechanism for two-stage
repeated stochastic games that renders truth-telling a Dominant Strategy
Non-Bankrupting Equilibrium. The mechanism also guarantees individual
rationality and maximizes social welfare. Finally, we describe an application
of the mechanism to design demand response markets.",Incentive Compatibility in Two-Stage Repeated Stochastic Games,2022-03-19 04:09:37,"Bharadwaj Satchidanandan, Munther A. Dahleh","http://arxiv.org/abs/2203.10206v2, http://arxiv.org/pdf/2203.10206v2",econ.TH
35122,th,"In this paper, we provide an example of the optimal growth model in which
there exist infinitely many solutions to the Hamilton-Jacobi-Bellman equation
but the value function does not satisfy this equation. We consider the cause of
this phenomenon, and find that the lack of a solution to the original problem
is crucial. We show that under several conditions, there exists a solution to
the original problem if and only if the value function solves the
Hamilton-Jacobi-Bellman equation. Moreover, in this case, the value function is
the unique nondecreasing concave solution to the Hamilton-Jacobi-Bellman
equation. We also show that without our conditions, this uniqueness result does
not hold.",On the Fragility of the Basis on the Hamilton-Jacobi-Bellman Equation in Economic Dynamics,2022-03-20 19:41:07,Yuhki Hosoya,"http://arxiv.org/abs/2203.10595v3, http://arxiv.org/pdf/2203.10595v3",econ.TH
35130,th,"We present a rotating proposer mechanism for team formation, which implements
a Pareto efficient subgame perfect Nash equilibrium of an extensive-form team
formation game.",A Rotating Proposer Mechanism for Team Formation,2022-04-08 21:54:39,"Jian Low, Chen Hajaj, Yevgeniy Vorobeychik","http://arxiv.org/abs/2204.04251v1, http://arxiv.org/pdf/2204.04251v1",cs.GT
35124,th,"The online bipartite matching problem has offline buyers desiring to be
matched to online items. The analysis of online bipartite matching of Eden et
al. (2021) is a smoothness proof (Syrgkanis and Tardos, 2013). Moreover, it can
be interpreted as combining a $\lambda = 1-1/e$ value covering (which holds for
single-dimensional agents and randomized auctions) and $\mu = 1$ revenue
covering (Hartline et al., 2014). Note that value covering is a fact about
single-dimensional agents and has nothing to do with the underlying feasibility
setting. Thus, the essential new result from Eden et al. (2021) is that online
bipartite matching is $\mu=1$ revenue covered. A number of old and new
observations follow from this perspective.",Online Bipartite Matching via Smoothness,2022-03-24 18:54:14,Jason Hartline,"http://arxiv.org/abs/2203.13140v2, http://arxiv.org/pdf/2203.13140v2",cs.GT
35125,th,"Consider $2k-1$ voters, each of which has a preference ranking between $n$
given alternatives. An alternative $A$ is called a Condorcet winner, if it wins
against every other alternative $B$ in majority voting (meaning that for every
other alternative $B$ there are at least $k$ voters who prefer $A$ over $B$).
The notion of Condorcet winners has been studied intensively for many decades,
yet some basic questions remain open. In this paper, we consider a model where
each voter chooses their ranking randomly according to some probability
distribution among all rankings. One may then ask about the probability to have
a Condorcet winner with these randomly chosen rankings (which, of course,
depends on $n$ and $k$, and the underlying probability distribution on the set
of rankings). In the case of the uniform probability distribution over all
rankings, which has received a lot of attention and is often referred to as the
setting of an ""impartial culture"", we asymptotically determine the probability
of having a Condorcet winner for a fixed number $2k-1$ of voters and $n$
alternatives with $n\to \infty$. This question has been open for around fifty
years. While some authors suggested that the impartial culture should exhibit
the lowest possible probability of having a Condorcet winner, in fact the
probability can be much smaller for other distributions. We determine, for all
values of $n$ and $k$, the smallest possible probability of having a Condorcet
winner (and give an example of a probability distribution over all rankings
which achieves this minimum possible probability).",On the probability of a Condorcet winner among a large number of alternatives,2022-03-25 18:32:08,Lisa Sauermann,"http://arxiv.org/abs/2203.13713v1, http://arxiv.org/pdf/2203.13713v1",econ.TH
35126,th,"Game dynamics theory, as a field of science, the consistency of theory and
experiment is essential. In the past 10 years, important progress has been made
in the merging of the theory and experiment in this field, in which dynamics
cycle is the presentation. However, the merging works have not got rid of the
constraints of Euclidean two-dimensional cycle so far. This paper uses a
classic four-strategy game to study the dynamic structure (non-Euclidean
superplane cycle). The consistency is in significant between the three ways:
(1) the analytical results from evolutionary dynamics equations, (2)
agent-based simulation results from learning models and (3) laboratory results
from human subjects game experiments. The consistency suggests that, game
dynamic structure could be quantitatively predictable, observable and
controllable in general.",Dynamic Structure in Four-strategy Game: Theory and Experiment,2022-03-28 15:02:13,"Zhijian Wang, Shujie Zhou, Qinmei Yao, Yijia Wang","http://dx.doi.org/10.21638/11701/spbu31.2022.26, http://arxiv.org/abs/2203.14669v1, http://arxiv.org/pdf/2203.14669v1",econ.TH
35127,th,"In today's online advertising markets, it is common for an advertiser to set
a long-period budget. Correspondingly, advertising platforms adopt budget
control methods to ensure that an advertiser's payment is within her budget.
Most budget control methods rely on the value distributions of advertisers.
However, the platform hardly learns their true priors due to the complex
environment advertisers stand in and privacy issues. Therefore, it is essential
to understand how budget control auction mechanisms perform under unassured
priors.
  This work answers this problem from multiple aspects. Specifically, we
discuss five budget-constrained parameterized mechanisms: bid-discount/pacing
first-price auctions, the Bayesian revenue-optimal auction and
bid-discount/pacing second-price auctions. We consider the game among the
seller and all buyers induced by these five mechanisms in the stochastic model.
We restrict the discussion to efficient mechanisms in which the seller earns
buyers' budgets sufficiently, and thus, leads to his high revenue. Our main
result shows the strategic equivalence between the Bayesian revenue-optimal
mechanism and the efficient bid-discount first-price mechanism. A broad
equivalence among all these (efficient) mechanisms is also given in the
symmetric case. We further dig into the structural properties of stochastic
budget-constrained mechanisms. We characterize sufficient and necessary
conditions on the efficient parameter tuple for bid-discount/pacing first-price
auctions. Meanwhile, when buyers do not behave strategically, we explore
the dominance relationships of these mechanisms by revealing their intrinsic
structures.",Budget-Constrained Auctions with Unassured Priors: Strategic Equivalence and Structural Properties,2022-03-31 08:53:36,"Zhaohua Chen, Xiaotie Deng, Jicheng Li, Chang Wang, Mingwei Yang, Zheng Cai, Yukun Ren, Zhihua Zhu","http://arxiv.org/abs/2203.16816v5, http://arxiv.org/pdf/2203.16816v5",cs.GT
35128,th,"We incorporate group fairness into the algorithmic centroid clustering
problem, where $k$ centers are to be located to serve $n$ agents distributed in
a metric space. We refine the notion of proportional fairness proposed in [Chen
et al., ICML 2019] as {\em core fairness}, and $k$-clustering is in the core if
no coalition containing at least $n/k$ agents can strictly decrease their total
distance by deviating to a new center together. Our solution concept is
motivated by the situation where agents are able to coordinate and utilities
are transferable. A string of existence, hardness and approximability results
is provided. Particularly, we propose two dimensions to relax core
requirements: one is on the degree of distance improvement, and the other is on
the size of deviating coalition. For both relaxations and their combination, we
study the extent to which relaxed core fairness can be satisfied in metric
spaces including line, tree and general metric space, and design approximation
algorithms accordingly.",Approximate Group Fairness for Clustering,2022-03-31 19:14:46,"Bo Li, Lijun Li, Ankang Sun, Chenhao Wang, Yingfan Wang","http://arxiv.org/abs/2203.17146v1, http://arxiv.org/pdf/2203.17146v1",cs.GT
35129,th,"We consider the problem of fairly allocating indivisible goods to agents with
weights representing their entitlements. A natural rule in this setting is the
maximum weighted Nash welfare (MWNW) rule, which selects an allocation
maximizing the weighted product of the agents' utilities. We show that when
agents have binary valuations, a specific version of MWNW is resource- and
population-monotone, satisfies group-strategyproofness, and can be implemented
in polynomial time.",On Maximum Weighted Nash Welfare for Binary Valuations,2022-04-08 04:35:16,"Warut Suksompong, Nicholas Teh","http://dx.doi.org/10.1016/j.mathsocsci.2022.03.004, http://arxiv.org/abs/2204.03803v2, http://arxiv.org/pdf/2204.03803v2",econ.TH
35131,th,"We study strategic information transmission in a hierarchical setting where
information gets transmitted through a chain of agents up to a decision maker
whose action is of importance to every agent. This situation could arise
whenever an agent can communicate to the decision maker only through a chain of
intermediaries, for example, an entry-level worker and the CEO in a firm, or an
official at the bottom of the chain of command and the president in a
government. Each agent can decide to conceal part or all of the information she
receives. After proving that we can focus on simple equilibria, where the only player who
conceals information is the first one, we provide a tractable recursive
characterization of the equilibrium outcome, and show that it could be
inefficient. Interestingly, in the binary-action case, regardless of the number
of intermediaries, there are a few pivotal ones who determine the amount of
information communicated to the decision maker. In this case, our results
underscore the importance of choosing a pivotal vice president for maximizing
the payoff of the CEO or president.",Hierarchical Bayesian Persuasion: Importance of Vice Presidents,2022-04-11 20:57:53,Majid Mahzoon,"http://arxiv.org/abs/2204.05304v2, http://arxiv.org/pdf/2204.05304v2",econ.TH
35132,th,"We consider item allocation to individual agents who have additive
valuations, in settings in which there are protected groups, and the allocation
needs to give each protected group its ""fair"" share of the total welfare.
Informally, within each protected group we consider the total welfare that the
allocation gives the members of the group, and compare it to the maximum
possible welfare that an allocation can give to the group members. An
allocation is fair towards the group if the ratio between these two values is
no worse than the relative size of the group. For divisible items, our formal
definition of fairness is based on the proportional share, whereas for
indivisible items, it is based on the anyprice share.
  We present examples in which there are no fair allocations, and not even
allocations that approximate the fairness requirement within a constant
multiplicative factor. We then attempt to identify sufficient conditions for
fair or approximately fair allocations to exist. For example, for indivisible
items, when agents have identical valuations and the family of protected groups
is laminar, we show that if the items are chores, then an allocation that
satisfies every fairness requirement within a multiplicative factor no worse
than two exists and can be found efficiently, whereas if the items are goods,
no constant approximation can be guaranteed.",On allocations that give intersecting groups their fair share,2022-04-14 11:43:21,"Uriel Feige, Yehonatan Tahan","http://arxiv.org/abs/2204.06820v1, http://arxiv.org/pdf/2204.06820v1",cs.GT
35133,th,"We propose a model of Pareto optimization (multi-objective programming) in
the context of a categorical theory of resources. We describe how to adapt
multi-objective swarm intelligence algorithms to this categorical formulation.",Pareto Optimization in Categories,2022-04-25 22:13:08,Matilde Marcolli,"http://arxiv.org/abs/2204.11931v1, http://arxiv.org/pdf/2204.11931v1",math.CT
35134,th,"Social decision schemes (SDSs) map the ordinal preferences of individual
voters over multiple alternatives to a probability distribution over the
alternatives. In order to study the axiomatic properties of SDSs, we lift
preferences over alternatives to preferences over lotteries using the natural
-- but little understood -- pairwise comparison (PC) preference extension. This
extension postulates that one lottery is preferred to another if the former is
more likely to return a preferred outcome. We settle three open questions
raised by Brandt (2017): (i) there is no Condorcet-consistent SDS that
satisfies PC-strategyproofness; (ii) there is no anonymous and neutral SDS that
satisfies PC-efficiency and PC-strategyproofness; and (iii) there is no
anonymous and neutral SDS that satisfies PC-efficiency and strict
PC-participation. All three impossibilities require $m\geq 4$ alternatives and
turn into possibilities when $m\leq 3$. We furthermore settle an open problem
raised by Aziz et al. (2015) by showing that no path of PC-improvements
originating from an in",Incentives in Social Decision Schemes with Pairwise Comparison Preferences,2022-04-26 19:55:34,"Felix Brandt, Patrick Lederer, Warut Suksompong","http://arxiv.org/abs/2204.12436v2, http://arxiv.org/pdf/2204.12436v2",cs.GT
35135,th,"We study how to incentivize agents in a target group to produce a higher
output in the context of incomplete information, by means of rank-order
allocation contests. We describe a symmetric Bayes--Nash equilibrium for
contests that have two types of rank-based prizes: prizes that are accessible
only to the agents in the target group; prizes that are accessible to everyone.
We also specialize this equilibrium characterization to two important
sub-cases: (i) contests that do not discriminate while awarding the prizes,
i.e., only have prizes that are accessible to everyone; (ii) contests that have
prize quotas for the groups, and each group can compete only for prizes in
their share. For these models, we also study the properties of the contest that
maximizes the expected total output by the agents in the target group.",Contests to Incentivize a Target Group,2022-04-29 15:48:24,"Edith Elkind, Abheek Ghosh, Paul Goldberg","http://arxiv.org/abs/2204.14051v1, http://arxiv.org/pdf/2204.14051v1",cs.GT
35136,th,"In this paper, we analyze the problem of how to adapt the concept of priority
to situations where several perfectly divisible resources have to be allocated
among a certain set of agents, each with exactly one claim that is used for all
resources. In particular, we introduce constrained sequential priority rules
and two constrained random arrival rules, which extend the classical sequential
priority rules and the random arrival rule to these situations. Moreover, we
provide an axiomatic analysis of these rules.",On priority in multi-issue bankruptcy problems with crossed claims,2022-05-01 14:43:41,"Rick K. Acosta-Vega, Encarnación Algaba, Joaquín Sánchez-Soriano","http://arxiv.org/abs/2205.00450v2, http://arxiv.org/pdf/2205.00450v2",math.OC
35137,th,"Motivated by putting empirical work based on (synthetic) election data on a
more solid mathematical basis, we analyze six distances among elections,
including, e.g., the challenging-to-compute but very precise swap distance and
the distance used to form the so-called map of elections. Among the six, the
latter seems to strike the best balance between its computational complexity
and expressiveness.",Understanding Distance Measures Among Elections,2022-05-01 18:22:46,"Niclas Boehmer, Piotr Faliszewski, Rolf Niedermeier, Stanisław Szufa, Tomasz Wąs","http://arxiv.org/abs/2205.00492v1, http://arxiv.org/pdf/2205.00492v1",cs.GT
35140,th,"A much studied issue is the extent to which the confidence scores provided by
machine learning algorithms are calibrated to ground truth probabilities. Our
starting point is that calibration is seemingly incompatible with class
weighting, a technique often employed when one class is less common (class
imbalance) or with the hope of achieving some external objective
(cost-sensitive learning). We provide a model-based explanation for this
incompatibility and use our anthropomorphic model to generate a simple method
of recovering likelihoods from an algorithm that is miscalibrated due to class
weighting. We validate this approach in the binary pneumonia detection task of
Rajpurkar, Irvin, Zhu, et al. (2017).",Calibrating for Class Weights by Modeling Machine Learning,2022-05-10 04:20:11,"Andrew Caplin, Daniel Martin, Philip Marx","http://arxiv.org/abs/2205.04613v2, http://arxiv.org/pdf/2205.04613v2",cs.LG
35141,th,"We study the design of grading contests between agents with private
information about their abilities under the assumption that the value of a
grade is determined by the information it reveals about the agent's
productivity. Towards the goal of identifying the effort-maximizing grading
contest, we study the effect of increasing prizes and increasing competition on
effort and find that the effects depend qualitatively on the distribution of
abilities in the population. Consequently, while the optimal grading contest
always uniquely identifies the best performing agent, it may want to pool or
separate the remaining agents depending upon the distribution. We identify
sufficient conditions under which a rank-revealing grading contest, a
leaderboard-with-cutoff type grading contest, and a coarse grading contest with
at most three grades are optimal. In the process, we also identify
distributions under which there is a monotonic relationship between the
informativeness of a grading scheme and the effort induced by it.",Optimal grading contests,2022-05-11 02:11:09,Sumit Goel,"http://dx.doi.org/10.1145/3580507.3597670, http://arxiv.org/abs/2205.05207v5, http://arxiv.org/pdf/2205.05207v5",cs.GT
35142,th,"We study markets where a set of indivisible items is sold to bidders with
unit-demand valuations, subject to a hard budget limit. Without financial
constraints and with purely quasilinear bidders, this assignment model allows for a
simple ascending auction format that maximizes welfare and is
incentive-compatible and core-stable. Introducing budget constraints, the
ascending auction requires strong additional conditions on the unit-demand
preferences to maintain its properties. We show that, without these conditions,
we cannot hope for an incentive-compatible and core-stable mechanism. We design
an iterative algorithm that depends solely on a trivially verifiable ex-post
condition and demand queries, and with appropriate decisions made by an
auctioneer, always yields a welfare-maximizing and core-stable outcome. If
these conditions do not hold, we cannot hope for incentive-compatibility and
computing welfare-maximizing assignments and core-stable prices is hard: Even
in the presence of value queries, where bidders reveal their valuations and
budgets truthfully, we prove that the problem becomes NP-complete for the
assignment market model. The analysis complements complexity results for
markets with more complex valuations and shows that even with simple
unit-demand bidders the problem becomes intractable. This raises doubts on the
efficiency of simple auction designs as they are used in high-stakes markets,
where budget constraints typically play a role.",Core-Stability in Assignment Markets with Financially Constrained Buyers,2022-05-12 17:50:19,"Eleni Batziou, Martin Bichler, Maximilian Fichtl","http://arxiv.org/abs/2205.06132v1, http://arxiv.org/pdf/2205.06132v1",cs.GT
35143,th,"We consider fair allocation of a set $M$ of indivisible goods to $n$
equally-entitled agents, with no monetary transfers. Every agent $i$ has a
valuation $v_i$ from some given class of valuation functions. A share $s$ is a
function that maps a pair $(v_i,n)$ to a value, with the interpretation that if
an allocation of $M$ to $n$ agents fails to give agent $i$ a bundle of value at
least equal to $s(v_i,n)$, this serves as evidence that the allocation is not
fair towards $i$. For such an interpretation to make sense, we would like the
share to be feasible, meaning that for any valuations in the class, there is an
allocation that gives every agent at least her share. The maximin share was a
natural candidate for a feasible share for additive valuations. However,
Kurokawa, Procaccia and Wang [2018] show that it is not feasible.
  We initiate a systematic study of the family of feasible shares. We say that
a share is \emph{self maximizing} if truth-telling maximizes the implied
guarantee. We show that every feasible share is dominated by some
self-maximizing and feasible share. We seek to identify those self-maximizing
feasible shares that are polynomial time computable, and offer the highest
share values. We show that a SM-dominating feasible share -- one that dominates
every self-maximizing (SM) feasible share -- does not exist for additive
valuations (and beyond). Consequently, we relax the domination property to that
of domination up to a multiplicative factor of $\rho$ (called
$\rho$-dominating). For additive valuations we present shares that are
feasible, self-maximizing and polynomial-time computable. For $n$ agents we
present such a share that is $\frac{2n}{3n-1}$-dominating. For two agents we
present such a share that is $(1 - \epsilon)$-dominating. Moreover, for these
shares we present poly-time algorithms that compute allocations that give every
agent at least her share.","Fair Shares: Feasibility, Domination and Incentives",2022-05-16 11:52:42,"Moshe Babaioff, Uriel Feige","http://arxiv.org/abs/2205.07519v1, http://arxiv.org/pdf/2205.07519v1",econ.TH
35144,th,"If a measure of voting power assigns greater voting power to a player because
it no longer effectively cooperates with another, then the measure displays the
quarrelling paradox and violates the quarrel postulate. We provide formal
criteria by which to judge whether a given conception of quarrelling is (a)
reasonable and (b) fit to serve as the basis for a reasonable quarrel
postulate. To achieve this, we formalize a general framework distinguishing
between three degrees of quarrelling (weak, strong, cataclysmic), symmetric vs.
asymmetrical quarrels, and reciprocal vs. non-reciprocal quarrels, and which
thereby yields twelve conceptions of quarrelling, which encompasses the two
conceptions proposed by Felsenthal and Machover and by Laruelle and Valenciano,
respectively. We argue that the two existing formulations of the quarrel
postulate based on these conceptions are unreasonable. In contrast, we prove
that the symmetric, weak conception of quarrelling identified by our framework
-- whether reciprocal or not -- is fit to serve as the basis for a reasonable
quarrel postulate. Furthermore, the classic Shapley-Shubik index and
Penrose-Banzhaf measure both satisfy the quarrel postulate based on a symmetric
weak quarrel.",A General Framework for a Class of Quarrels: The Quarrelling Paradox Revisited,2022-05-17 16:47:36,"Arash Abizadeh, Adrian Vetta","http://arxiv.org/abs/2205.08353v1, http://arxiv.org/pdf/2205.08353v1",econ.TH
35145,th,"Social choice becomes easier on restricted preference domains such as
single-peaked, single-crossing, and Euclidean preferences. Many impossibility
theorems disappear, the structure makes it easier to reason about preferences,
and computational problems can be solved more efficiently. In this survey, we
give a thorough overview of many classic and modern restricted preference
domains and explore their properties and applications. We do this from the
viewpoint of computational social choice, letting computational problems drive
our interest, but we include a comprehensive discussion of the economics and
social choice literatures as well. Particular focus areas of our survey include
algorithms for recognizing whether preferences belong to a particular
preference domain, and algorithms for winner determination of voting rules that
are hard to compute if preferences are unrestricted.",Preference Restrictions in Computational Social Choice: A Survey,2022-05-18 20:38:00,"Edith Elkind, Martin Lackner, Dominik Peters","http://arxiv.org/abs/2205.09092v1, http://arxiv.org/pdf/2205.09092v1",cs.GT
35146,th,"We study the effects of data sharing between firms on prices, profits, and
consumer welfare. Although indiscriminate sharing of consumer data decreases
firm profits due to the subsequent increase in competition, selective sharing
can be beneficial. We show that there are data-sharing mechanisms that are
strictly Pareto-improving, simultaneously increasing firm profits and consumer
welfare. Within the class of Pareto-improving mechanisms, we identify one that
maximizes firm profits and one that maximizes consumer welfare.",Pareto-Improving Data-Sharing,2022-05-23 16:32:22,"Ronen Gradwohl, Moshe Tennenholtz","http://arxiv.org/abs/2205.11295v1, http://arxiv.org/pdf/2205.11295v1",cs.GT
35147,th,"Hedonic games are a prominent model of coalition formation, in which each
agent's utility only depends on the coalition in which she resides. The subclass of
hedonic games that models the formation of general partnerships, where output
is shared equally among affiliates, is referred to as hedonic games with common
ranking property (HGCRP). Aside from their economic motivation, HGCRP came into
prominence since they are guaranteed to have core stable solutions that can be
found efficiently. We improve upon existing results by proving that every
instance of HGCRP has a solution that is Pareto optimal, core stable and
individually stable. The economic significance of this result is that
efficiency is not to be totally sacrificed for the sake of stability in HGCRP.
We establish that finding such a solution is {\bf NP-hard} even if the sizes of
the coalitions are bounded above by $3$; however, it is polynomial time
solvable if the sizes of the coalitions are bounded above by $2$. We show that
the gap between the total utility of a core stable solution and that of the
socially-optimal solution (OPT) is bounded above by $n$, where $n$ is the
number of agents, and that this bound is tight. Our investigations reveal that
computing OPT is inapproximable within better than $O(n^{1-\epsilon})$ for any
fixed $\epsilon > 0$, and that this inapproximability lower bound is
polynomially tight. However, OPT can be computed in polynomial time if the
sizes of the coalitions are bounded above by $2$.",On Hedonic Games with Common Ranking Property,2022-05-24 13:10:40,"Bugra Caskurlu, Fatih Erdem Kizilkaya","http://arxiv.org/abs/2205.11939v1, http://arxiv.org/pdf/2205.11939v1",cs.GT
35148,th,"This paper introduces a unified framework for stable matching, which nests
the traditional definition of stable matching in finite markets and the
continuum definition of stable matching from Azevedo and Leshno (2016) as
special cases. Within this framework, I identify a novel continuum model, which
makes individual-level probabilistic predictions.
  This new model always has a unique stable outcome, which can be found using
an analog of the Deferred Acceptance algorithm. The crucial difference between
this model and that of Azevedo and Leshno (2016) is that they assume that the
amount of student interest at each school is deterministic, whereas my proposed
alternative assumes that it follows a Poisson distribution. As a result, this
new model accurately predicts the simulated distribution of cutoffs, even for
markets with only ten schools and twenty students.
  This model generates new insights about the number and quality of matches.
When schools are homogeneous, it provides upper and lower bounds on students'
average rank, which match results from Ashlagi, Kanoria and Leshno (2017) but
apply to more general settings. This model also provides clean analytical
expressions for the number of matches in a platform pricing setting considered
by Marx and Schummer (2021).",A Continuum Model of Stable Matching With Finite Capacities,2022-05-25 19:05:41,Nick Arnosti,"http://arxiv.org/abs/2205.12881v1, http://arxiv.org/pdf/2205.12881v1",econ.TH
35149,th,"We study a communication game between a sender and receiver where the sender
has access to a set of informative signals about a state of the world. The
sender chooses one of her signals, called an ``anecdote'' and communicates it
to the receiver. The receiver takes an action, yielding a utility for both
players. Sender and receiver both care about the state of the world but are
also influenced by a personal preference so that their ideal actions differ. We
characterize perfect Bayesian equilibria when the sender cannot commit to a
particular communication scheme. In this setting the sender faces ``persuasion
temptation'': she is tempted to select a more biased anecdote to influence the
receiver's action. Anecdotes are still informative to the receiver but
persuasion comes at the cost of precision. This gives rise to ``informational
homophily'' where the receiver prefers to listen to like-minded senders because
they provide higher-precision signals. In particular, we show that a sender
with access to many anecdotes will essentially send the minimum or maximum
anecdote even though with high probability she has access to an anecdote close
to the state of the world that would almost perfectly reveal it to the
receiver. In contrast to the classic Crawford-Sobel model, full revelation is a
knife-edge equilibrium and even small differences in personal preferences will
induce highly polarized communication and a loss in utility for any
equilibrium. We show that for fat-tailed anecdote distributions the receiver
might even prefer to talk to poorly informed senders with aligned preferences
rather than a knowledgeable expert whose preferences may differ from her own.
We also show that, under commitment, differences in personal preferences no
longer affect communication and the sender will generally report the most
representative anecdote closest to the posterior mean for common distributions.",Communicating with Anecdotes,2022-05-26 19:07:45,"Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Markus Mobius, Divyarthi Mohan","http://arxiv.org/abs/2205.13461v1, http://arxiv.org/pdf/2205.13461v1",econ.TH
35161,th,"Clearing functions (CFs), which express a mathematical relationship between
the expected throughput of a production facility in a planning period and its
workload (or work-in-progress, WIP) in that period have shown considerable
promise for modeling WIP-dependent cycle times in production planning. While
steady-state queueing models are commonly used to derive analytic expressions
for CFs, the finite length of planning periods calls their validity into
question. We apply a different approach to propose a mechanistic model of a
one-resource, one-product factory shop based on the analogy between the
operation of a machine and that of an enzyme molecule. The model is reduced to a singularly
perturbed system of two differential equations for slow (WIP) and fast (busy
machines) variables, respectively. The analysis of this slow-fast system finds
that the CF is nothing but a result of the asymptotic expansion of the slow
invariant manifold. The validity of the CF is ultimately determined by how small
the parameter multiplying the derivative of the fast variable is. It is shown
that a sufficiently small characteristic ratio 'working machines : WIP'
guarantees the applicability of the CF approximation in unsteady-state
operation.",Clearing function in the context of the invariant manifold method,2022-06-22 19:44:27,"A. Mustafin, A. Kantarbayeva","http://arxiv.org/abs/2206.11205v1, http://arxiv.org/pdf/2206.11205v1",nlin.AO
35150,th,"The tendency for individuals to form social ties with others who are similar
to themselves, known as homophily, is one of the most robust sociological
principles. Since this phenomenon can lead to patterns of interactions that
segregate people along different demographic dimensions, it can also lead to
inequalities in access to information, resources, and opportunities. As we
consider potential interventions that might alleviate the effects of
segregation, we face the challenge that homophily constitutes a pervasive and
organic force that is difficult to push back against. Designing effective
interventions can therefore benefit from identifying counterbalancing social
processes that might be harnessed to work in opposition to segregation.
  In this work, we show that triadic closure -- another common phenomenon that
posits that individuals with a mutual connection are more likely to be
connected to one another -- can be one such process. In doing so, we challenge
a long-held belief that triadic closure and homophily work in tandem. By
analyzing several fundamental network models using popular integration
measures, we demonstrate the desegregating potential of triadic closure. We
further empirically investigate this effect on real-world dynamic networks,
surfacing observations that mirror our theoretical findings. We leverage these
insights to discuss simple interventions that can help reduce segregation in
settings that exhibit an interplay between triadic closure and homophily. We
conclude with a discussion on qualitative implications for the design of
interventions in settings where individuals arrive in an online fashion, and
the designer can influence the initial set of connections.",On the Effect of Triadic Closure on Network Segregation,2022-05-27 01:42:07,"Rediet Abebe, Nicole Immorlica, Jon Kleinberg, Brendan Lucier, Ali Shirali","http://dx.doi.org/10.1145/3490486.3538322, http://arxiv.org/abs/2205.13658v1, http://arxiv.org/pdf/2205.13658v1",cs.SI
35151,th,"For generalized Nash equilibrium problems (GNEP) with shared constraints we
focus on the notion of normalized Nash equilibrium in the nonconvex setting.
The property of nondegeneracy for normalized Nash equilibria is introduced.
Nondegeneracy refers to GNEP-tailored versions of linear independence
constraint qualification, strict complementarity and second-order regularity.
Surprisingly enough, nondegeneracy of a normalized Nash equilibrium does not
preclude degeneracies at the individual players' level. We show that
generically all normalized Nash equilibria are nondegenerate. Moreover,
nondegeneracy turns out to be a sufficient condition for the local uniqueness
of normalized Nash equilibria. We emphasize that even in the convex setting the
proposed notion of nondegeneracy differs from the sufficient condition for
(global) uniqueness of normalized Nash equilibria, which is known from the
literature.",On local uniqueness of normalized Nash equilibria,2022-05-27 13:18:11,Vladimir Shikhman,"http://arxiv.org/abs/2205.13878v1, http://arxiv.org/pdf/2205.13878v1",math.OC
35152,th,"We make a detailed analysis of three key algorithms (Serial Dictatorship and
the naive and adaptive variants of the Boston algorithm) for the housing
allocation problem, under the assumption that agent preferences are chosen iid
uniformly from linear orders on the items. We compute limiting distributions
(with respect to some common utility functions) as $n\to \infty$ of both the
utilitarian welfare and the order bias. To do this, we compute limiting
distributions of the outcomes for an arbitrary agent whose initial relative
position in the tiebreak order is $\theta\in[0,1]$, as a function of $\theta$.
The results for the Boston algorithms are all new, and we expect that these
fundamental results on the stochastic processes underlying these algorithms
will have wider applicability in future. Overall our results show that the
differences in utilitarian welfare performance of the three algorithms are
fairly small but still important. However, the differences in order bias are
much greater. Also, Naive Boston beats Adaptive Boston, which beats Serial
Dictatorship, on both utilitarian welfare and order bias.",Asymptotic welfare performance of Boston assignment algorithms,2022-05-30 23:32:37,"Geoffrey Pritchard, Mark C. Wilson","http://arxiv.org/abs/2205.15418v1, http://arxiv.org/pdf/2205.15418v1",econ.TH
35153,th,"A voting rule decides on a probability distribution over a set of $m$
alternatives, based on rankings of those alternatives provided by agents. We
assume that agents have cardinal utility functions over the alternatives, but
voting rules have access to only the rankings induced by these utilities. We
evaluate how well voting rules do on measures of social welfare and of
proportional fairness, computed based on the hidden utility functions.
  In particular, we study the distortion of voting rules, which is a worst-case
measure. It is an approximation ratio comparing the utilitarian social welfare
of the optimum outcome to the welfare of the outcome selected by the voting
rule, in the worst case over possible input profiles and utility functions that
are consistent with the input. The literature has studied distortion with
unit-sum utility functions, and left a small asymptotic gap in the best
possible distortion. Using tools from the theory of fair multi-winner
elections, we propose the first voting rule which achieves the optimal
distortion $\Theta(\sqrt{m})$ for unit-sum utilities. Our voting rule also
achieves optimum $\Theta(\sqrt{m})$ distortion for unit-range and approval
utilities.
  We then take a similar worst-case approach to a quantitative measure of the
fairness of a voting rule, called proportional fairness. Informally, it
measures whether the influence of cohesive groups of agents on the voting
outcome is proportional to the group size. We show that there is a voting rule
which, without knowledge of the utilities, can achieve an $O(\log
m)$-approximation to proportional fairness, the best possible approximation. As
a consequence of its proportional fairness, we show that this voting rule
achieves $O(\log m)$ distortion with respect to Nash welfare, and provides an
$O(\log m)$-approximation to the core, making it interesting for applications
in participatory budgeting.",Optimized Distortion and Proportional Fairness in Voting,2022-05-31 15:57:48,"Soroush Ebadian, Anson Kahng, Dominik Peters, Nisarg Shah","http://dx.doi.org/10.1145/3490486.3538339, http://arxiv.org/abs/2205.15760v2, http://arxiv.org/pdf/2205.15760v2",cs.GT
35154,th,"The core-periphery model with transport cost of differentiated agriculture is
extended to continuously multi-regional space, and the stability of a
homogeneous stationary solution of the model on a periodic space is
investigated. When a spatial disturbance around the homogeneous stationary
solution is decomposed into eigenfunctions, an unstable area, defined as the
area of parameters where each eigenfunction is unstable, is observed, in
general, to expand in size as the frequency number of each eigenfunction
increases.",Spatial frequency of unstable eigenfunction of the core-periphery model with transport cost of differentiated agriculture,2022-06-02 16:23:51,Kensuke Ohtake,"http://arxiv.org/abs/2206.01040v4, http://arxiv.org/pdf/2206.01040v4",econ.TH
35162,th,"We study the ability of a social media platform with a political agenda to
influence voting outcomes. Our benchmark is Condorcet's jury theorem, which
states that the likelihood of a correct decision under majority voting
increases with the number of voters. We show how information manipulation by a
social media platform can overturn the jury theorem, thereby undermining
democracy. We also show that sometimes the platform can do so only by providing
information that is biased in the opposite direction of its preferred outcome.
Finally, we compare manipulation of voting outcomes through social media to
manipulation through traditional media.",Social Media and Democracy,2022-06-29 10:06:59,"Ronen Gradwohl, Yuval Heller, Arye Hillman","http://arxiv.org/abs/2206.14430v1, http://arxiv.org/pdf/2206.14430v1",econ.TH
35155,th,"Sellers in online markets face the challenge of determining the right time to
sell in view of uncertain future offers. Classical stopping theory assumes that
sellers have full knowledge of the value distributions, and leverage this
knowledge to determine stopping rules that maximize expected welfare. In
practice, however, stopping rules must often be determined under partial
information, based on scarce data or expert predictions. Consider a seller that
has one item for sale and receives successive offers drawn from some value
distributions. The decision on whether or not to accept an offer is
irrevocable, and the value distributions are only partially known. We therefore
let the seller adopt a robust maximin strategy, assuming that value
distributions are chosen adversarially by nature to minimize the value of the
accepted offer. We provide a general maximin solution to this stopping problem
that identifies the optimal (threshold-based) stopping rule for the seller for
all possible statistical information structures. We then perform a detailed
analysis for various ambiguity sets relying on knowledge about the common mean,
dispersion (variance or mean absolute deviation) and support of the
distributions. We show for these information structures that the seller's
stopping rule consists of decreasing thresholds converging to the common mean,
and that nature's adversarial response, in the long run, is to always create an
all-or-nothing scenario. The maximin solutions also reveal what happens as
dispersion or the number of offers grows large.",Optimal Stopping Theory for a Distributionally Robust Seller,2022-06-06 13:27:32,"Pieter Kleer, Johan van Leeuwaarden","http://arxiv.org/abs/2206.02477v2, http://arxiv.org/pdf/2206.02477v2",econ.TH
35156,th,"We study a setting where Bayesian agents with a common prior have private
information related to an event's outcome and sequentially make public
announcements relating to their information. Our main result shows that when
agents' private information is independent conditional on the event's outcome,
then whenever agents have similar beliefs about the outcome, their information is
aggregated. That is, there is no false consensus.
  Our main result has a short proof based on a natural information theoretic
framework. A key ingredient of the framework is the equivalence between the
sign of the ``interaction information'' and a super/sub-additive property of
the value of people's information. This provides an intuitive interpretation
and an interesting application of the interaction information, which measures
the amount of information shared by three random variables.
  We illustrate the power of this information theoretic framework by reproving
two additional results within it: 1) that agents quickly agree when announcing
(summaries of) beliefs in round robin fashion [Aaronson 2005]; and 2) results
from [Chen et al 2010] on when prediction market agents should release
information to maximize their payment. We also interpret the information
theoretic framework and the above results in prediction markets by proving that
the expected reward of revealing information is the conditional mutual
information of the information revealed.","False Consensus, Information Theory, and Prediction Markets",2022-06-07 06:46:11,"Yuqing Kong, Grant Schoenebeck","http://arxiv.org/abs/2206.02993v2, http://arxiv.org/pdf/2206.02993v2",cs.GT
35158,th,"Credible equilibrium is a solution concept that imposes a stronger
credibility notion than subgame perfect equilibrium. A credible equilibrium is
a refinement of subgame perfect equilibrium such that if a threat in a subgame
g is ""credible,"" then it must also be credible in every subgame g' that is
""equivalent"" to g. I show that (i) a credible equilibrium exists in multi-stage
games, and (ii) if every stage game has a unique Nash equilibrium, then the
credible equilibrium is unique even in infinite horizon multi-stage games.
Moreover, in perfect information games, credible equilibrium is equivalent to
subgame perfect equilibrium.",Credible equilibrium,2022-06-10 20:30:39,Mehmet S. Ismail,"http://arxiv.org/abs/2206.05241v1, http://arxiv.org/pdf/2206.05241v1",econ.TH
35159,th,"We study a fair division setting in which a number of players are to be
fairly distributed among a set of teams. In our model, not only do the teams
have preferences over the players as in the canonical fair division setting,
but the players also have preferences over the teams. We focus on guaranteeing
envy-freeness up to one player (EF1) for the teams together with a stability
condition for both sides. We show that an allocation satisfying EF1, swap
stability, and individual stability always exists and can be computed in
polynomial time, even when teams may have positive or negative values for
players. Similarly, a balanced and swap stable allocation that satisfies a
relaxation of EF1 can be computed efficiently. When teams have nonnegative
values for players, we prove that an EF1 and Pareto optimal allocation exists
and, if the valuations are binary, can be found in polynomial time. We also
examine the compatibility between EF1 and justified envy-freeness.",Fair Division with Two-Sided Preferences,2022-06-13 05:14:33,"Ayumi Igarashi, Yasushi Kawase, Warut Suksompong, Hanna Sumita","http://arxiv.org/abs/2206.05879v2, http://arxiv.org/pdf/2206.05879v2",cs.GT
35160,th,"We study a dynamic matching procedure where homogeneous agents arrive at
random according to a Poisson process and form edges at random yielding a
sparse market. Agents leave according to a certain departure distribution and
may leave early by forming a pair with a compatible agent. The primary
objective is to maximize the number of matched agents. Our main result is to
show that a mild condition on the departure distribution suffices to get almost
optimal performance of instantaneous matching, despite operating in a thin
market. We are thus the first to provide a natural condition under which
instantaneous decisions are superior in a market that is both sparse and thin.
This result is surprising because similar results in the previous literature
are based on market thickness. In addition, instantaneous matching performs
well with respect to further objectives such as minimizing waiting times and
avoiding the risk of market congestion. We develop new techniques for proving
our results going beyond commonly adopted methods for Markov processes.",Superiority of Instantaneous Decisions in Thin Dynamic Matching Markets,2022-06-21 15:13:14,"Johannes Bäumler, Martin Bullinger, Stefan Kober, Donghao Zhu","http://arxiv.org/abs/2206.10287v2, http://arxiv.org/pdf/2206.10287v2",cs.DS
35164,th,"We study the classic divide-and-choose method for equitably allocating
divisible goods between two players who are rational, self-interested Bayesian
agents. The players have additive private values for the goods. The prior
distributions on those values are independent and common knowledge.
  We characterize the structure of optimal divisions in the divide-and-choose
game and show how to efficiently compute equilibria. We identify several
striking differences between optimal strategies in the cases of known versus
unknown preferences. Most notably, the divider has a compelling
""diversification"" incentive in creating the chooser's two options. This
incentive, hitherto unnoticed, leads to multiple goods being divided at
equilibrium, quite contrary to the divider's optimal strategy when preferences
are known.
  In many contexts, such as buy-and-sell provisions between partners, or in
judging fairness, it is important to assess the relative expected utilities of
the divider and chooser. Those utilities, we show, depend on the players'
uncertainties about each other's values, the number of goods being divided, and
whether the divider can offer multiple alternative divisions. We prove that,
when values are independently and identically distributed across players and
goods, the chooser is strictly better off for a small number of goods, while
the divider is strictly better off for a large number of goods.",Playing Divide-and-Choose Given Uncertain Preferences,2022-07-07 07:15:26,"Jamie Tucker-Foltz, Richard Zeckhauser","http://arxiv.org/abs/2207.03076v1, http://arxiv.org/pdf/2207.03076v1",cs.GT
35165,th,"We define a notion of the criticality of a player for simple monotone games
based on cooperation with other players, either to form a winning coalition or
to break a winning one, with an essential role for all the players involved. We
compare it with the notion of differential criticality given by Beisbart that
measures power as the opportunity left by other players. We prove that our
proposal satisfies an extension of the strong monotonicity introduced by Young,
assigns no power to dummy players and free riders, and can easily be computed
from the minimal winning and blocking coalitions. An application to the Italian
elections is presented. Our analysis shows that the measures of group
criticality defined so far cannot weigh essential players while remaining only
opportunity measures. We propose a group opportunity test to reconcile the
two views.",With a little help from my friends: essentiality vs opportunity in group criticality,2022-07-07 23:42:50,"Michele Aleandri, Marco Dall'Aglio","http://arxiv.org/abs/2207.03565v3, http://arxiv.org/pdf/2207.03565v3",econ.TH
35166,th,"The Multiple Choice Polytope (MCP) is the prediction range of a random
utility model due to Block and Marschak (1960). Fishburn (1998) offers a nice
survey of the findings on random utility models at the time. A complete
characterization of the MCP is a remarkable achievement of Falmagne (1978).
Apart from a recognition of the facets by Suck (2002), the geometric structure
of the MCP was apparently not much investigated. Recently, Chang, Narita and
Saito (2022) refer to the adjacency of vertices while Turansick (2022) uses a
condition which we show to be equivalent to the non-adjacency of two vertices.
We characterize the adjacency of vertices and the adjacency of facets. To
derive a more enlightening proof of Falmagne's theorem and of Suck's result,
Fiorini (2004) assimilates the MCP with the flow polytope of some acyclic
network. Our results on adjacencies also hold for the flow polytope of any
acyclic network. In particular, they apply not only to the MCP, but also to
three polytopes which Davis-Stober, Doignon, Fiorini, Glineur and Regenwetter
(2018) introduced as extended formulations of the weak order polytope, interval
order polytope and semiorder polytope (the prediction ranges of other models,
see for instance Fishburn and Falmagne, 1989, and Marley and Regenwetter,
2017).",Adjacencies on random ordering polytopes and flow polytopes,2022-07-13 00:51:52,"Jean-Paul Doignon, Kota Saito","http://arxiv.org/abs/2207.06925v1, http://arxiv.org/pdf/2207.06925v1",math.CO
35167,th,"This paper studies queueing problems with an endogenous number of machines
with and without an initial queue, the novelty being that coalitions not only
choose how to queue, but also on how many machines. For a given problem, agents
can (de)activate as many machines as they want, at a cost. After minimizing the
total cost (processing costs and machine costs), we use a game theoretical
approach to share the proceeds of this cooperation, and study the existence of
stable allocations. First, we study queueing problems with an endogenous number
of machines, and examine how to share the total cost. We provide an upper bound
and a lower bound on the cost of a machine to guarantee the non-emptiness of
the core (the set of stable allocations). Next, we study requeueing problems
with an endogenous number of machines, where there is an existing queue. We
examine how to share the cost savings compared to the initial situation, when
optimally requeueing/changing the number of machines. Although, in general,
a stable allocation may not exist, we guarantee the existence of stable
allocations when all machines are considered public goods, and we start with an
initial schedule that might not have the optimal number of machines, but in
which agents with large waiting costs are processed first.",Queueing games with an endogenous number of machines,2022-07-14 23:25:03,"Ata Atay, Christian Trudeau","http://arxiv.org/abs/2207.07190v3, http://arxiv.org/pdf/2207.07190v3",econ.TH
35168,th,"Instant runoff voting (IRV) is an increasingly-popular alternative to
traditional plurality voting in which voters submit rankings over the
candidates rather than single votes. In practice, elections using IRV often
restrict the ballot length, the number of candidates a voter is allowed to rank
on their ballot. We theoretically and empirically analyze how ballot length can
influence the outcome of an election, given fixed voter preferences. We show
that there exist preference profiles over $k$ candidates such that up to $k-1$
different candidates win at different ballot lengths. We derive exact lower
bounds on the number of voters required for such profiles and provide a
construction matching the lower bound for unrestricted voter preferences.
Additionally, we characterize which sequences of winners are possible over
ballot lengths and provide explicit profile constructions achieving any
feasible winner sequence. We also examine how classic preference restrictions
influence our results--for instance, single-peakedness makes $k-1$ different
winners impossible but still allows at least $\Omega(\sqrt k)$. Finally, we
analyze a collection of 168 real-world elections, where we truncate rankings to
simulate shorter ballots. We find that shorter ballots could have changed the
outcome in one quarter of these elections. Our results highlight ballot length
as a consequential degree of freedom in the design of IRV elections.",Ballot Length in Instant Runoff Voting,2022-07-19 01:01:13,"Kiran Tomlinson, Johan Ugander, Jon Kleinberg","http://arxiv.org/abs/2207.08958v3, http://arxiv.org/pdf/2207.08958v3",cs.MA
35169,th,"In our time, cybersecurity has grown to be a topic of massive proportions at
the national and enterprise levels. Our thesis is that the economic perspective
and investment decision-making are vital factors in determining the outcome of
the struggle. To build our economic framework, we borrow from the pioneering
work of Gordon and Loeb in which the Defender optimally trades off investments
for a lower likelihood of a breach of its system. Our two-sided model additionally has
an Attacker, assumed to be rational and also guided by economic considerations
in its decision-making, to which the Defender responds. Our model is a
simplified adaptation of a model proposed during the Cold War for weapons
deployment in the US. Our model may also be viewed as a Stackelberg game and,
from an analytic perspective, as a Max-Min problem, the analysis of which is
known to have to contend with discontinuous behavior. The complexity of our
simple model is rooted in its inherent nonlinearity and, more consequentially,
non-convexity of the objective function in the optimization. The possibilities
of the Attacker's actions add substantially to the risk to the Defender, and
the Defender's rational, risk-neutral optimal investments in general
substantially exceed the optimal investments predicted by the one-sided
Gordon-Loeb model. We obtain a succinct set of three decision types that
categorize all of the Defender's optimal investment decisions. Also, the
Defender's optimal decisions exhibit discontinuous behavior as the initial
vulnerability of its system is varied. The analysis is supplemented by
extensive numerical illustrations. The results from our model open several
major avenues for future work.",Economics and Optimal Investment Policies of Attackers and Defenders in Cybersecurity,2022-07-19 21:09:50,"Austin Ebel, Debasis Mitra","http://arxiv.org/abs/2207.09497v1, http://arxiv.org/pdf/2207.09497v1",cs.CR
35172,th,"We consider a persuasion problem between a sender and a receiver whose
utility may be nonlinear in her belief; we call such receivers risk-conscious.
Such utility models arise when the receiver exhibits systematic biases away
from expected-utility-maximization, such as uncertainty aversion (e.g., from
sensitivity to the variance of the waiting time for a service). Due to this
nonlinearity, the standard approach to finding the optimal persuasion mechanism
using the revelation principle fails. To overcome this difficulty, we use the
underlying geometry of the problem to develop a convex optimization framework
to find the optimal persuasion mechanism. We define the notion of full
persuasion and use our framework to characterize conditions under which full
persuasion can be achieved. We use our approach to study binary persuasion,
where the receiver has two actions and the sender strictly prefers one of them
at every state. Under a convexity assumption, we show that the binary
persuasion problem reduces to a linear program, and establish a canonical set
of signals where each signal either reveals the state or induces in the
receiver uncertainty between two states. Finally, we discuss the broader
applicability of our methods to more general contexts, and illustrate our
methodology by studying information sharing of waiting times in service
systems.",Persuading Risk-Conscious Agents: A Geometric Approach,2022-08-07 19:08:42,"Jerry Anunrojwong, Krishnamurthy Iyer, David Lingenbrink","http://arxiv.org/abs/2208.03758v3, http://arxiv.org/pdf/2208.03758v3",econ.TH
35173,th,"A sender flexibly acquires evidence--which she may pay a third party to
certify--to disclose to a receiver. When evidence acquisition is overt, the
receiver observes the evidence gathering process irrespective of whether its
outcome is certified. When acquisition is covert, the receiver does not. In
contrast to the case with exogenous evidence, the receiver prefers a strictly
positive certification cost. As acquisition costs vanish, equilibria converge
to the Pareto-worst free-learning equilibrium. The receiver always prefers
covert to overt evidence acquisition.",Costly Evidence and Discretionary Disclosure,2022-08-09 20:37:21,"Mark Whitmeyer, Kun Zhang","http://arxiv.org/abs/2208.04922v1, http://arxiv.org/pdf/2208.04922v1",econ.TH
35179,th,"The auction of a single indivisible item is one of the most celebrated
problems in mechanism design with transfers. Despite its simplicity, it
provides arguably the cleanest and most insightful results in the literature.
When the information that the auction is running is available to every
participant, Myerson [20] provided a seminal result to characterize the
incentive-compatible auctions along with revenue optimality. However, such a
result does not hold in an auction on a network, where the information of the
auction is spread via the agents, and they need incentives to forward the
information. In recent times, a few auctions (e.g., [13, 18]) were designed
that appropriately incentivized the intermediate nodes on the network to
promulgate the information to potentially more valuable bidders. In this paper,
we provide a Myerson-like characterization of incentive-compatible auctions on
a network and show that the currently known auctions fall within this class of
randomized auctions. We then consider a special class called the referral
auctions that are inspired by the multi-level marketing mechanisms [1, 6, 7]
and obtain the structure of a revenue optimal referral auction for i.i.d.
bidders. Through experiments, we show that even for non-i.i.d. bidders there
exist auctions following this characterization that can provide a higher
revenue than the currently known auctions on networks.",Optimal Referral Auction Design,2022-08-19 16:10:31,"Rangeet Bhattacharyya, Parvik Dave, Palash Dey, Swaprava Nath","http://arxiv.org/abs/2208.09326v2, http://arxiv.org/pdf/2208.09326v2",cs.GT
35174,th,"A fundamental principle of individual rational choice is Sen's $\gamma$
axiom, also known as expansion consistency, stating that any alternative chosen
from each of two menus must be chosen from the union of the menus. Expansion
consistency can also be formulated in the setting of social choice. In voting
theory, it states that any candidate chosen from two fields of candidates must
be chosen from the combined field of candidates. An important special case of
the axiom is binary expansion consistency, which states that any candidate
chosen from an initial field of candidates and chosen in a head-to-head match
with a new candidate must also be chosen when the new candidate is added to the
field, thereby ruling out spoiler effects. In this paper, we study the tension
between this weakening of expansion consistency and weakenings of resoluteness,
an axiom demanding the choice of a single candidate in any election. As is well
known, resoluteness is inconsistent with basic fairness conditions on social
choice, namely anonymity and neutrality. Here we prove that even significant
weakenings of resoluteness, which are consistent with anonymity and neutrality,
are inconsistent with binary expansion consistency. The proofs make use of SAT
solving, with the correctness of a SAT encoding formally verified in the Lean
Theorem Prover, as well as a strategy for generalizing impossibility theorems
obtained for special types of voting methods (namely majoritarian and pairwise
voting methods) to impossibility theorems for arbitrary voting methods. This
proof strategy may be of independent interest for its potential applicability
to other impossibility theorems in social choice.",Impossibility theorems involving weakenings of expansion consistency and resoluteness in voting,2022-08-14 23:12:19,"Wesley H. Holliday, Chase Norman, Eric Pacuit, Saam Zahedian","http://arxiv.org/abs/2208.06907v3, http://arxiv.org/pdf/2208.06907v3",econ.TH
35175,th,"We study an axiomatic framework for anonymized risk sharing. In contrast to
traditional risk sharing settings, our framework requires no information on
preferences, identities, private operations and realized losses from the
individual agents, and thereby it is useful for modeling risk sharing in
decentralized systems. Four axioms natural in such a framework -- actuarial
fairness, risk fairness, risk anonymity, and operational anonymity -- are put
forward and discussed. We establish the remarkable fact that the four axioms
characterize the conditional mean risk sharing rule, revealing the unique and
prominent role of this popular risk sharing rule among all others in relevant
applications of anonymized risk sharing. Several other properties and their
relations to the four axioms are studied, as well as their implications in
rationalizing the design of some sharing mechanisms in practice.",An axiomatic theory for anonymized risk sharing,2022-08-16 07:37:00,"Zhanyi Jiao, Steven Kou, Yang Liu, Ruodu Wang","http://arxiv.org/abs/2208.07533v4, http://arxiv.org/pdf/2208.07533v4",econ.TH
35176,th,"We consider the problem of random assignment of indivisible goods, in which
each agent has an ordinal preference and a constraint. Our goal is to
characterize the conditions under which there exists a random assignment that
simultaneously achieves efficiency and envy-freeness. The probabilistic serial
mechanism ensures the existence of such an assignment for the unconstrained
setting. In this paper, we consider a more general setting in which each agent
can consume a set of items only if the set satisfies her feasibility
constraint. Such constraints must be taken into account in student course
placements, employee shift assignments, and so on. We demonstrate that an
efficient and envy-free assignment may not exist even for the simple case of
partition matroid constraints, where the items are categorized and each agent
demands one item from each category. We then identify special cases in which an
efficient and envy-free assignment always exists. For these cases, the
probabilistic serial cannot be naturally extended; therefore, we provide
mechanisms to find the desired assignment using various approaches.",Random Assignment of Indivisible Goods under Constraints,2022-08-16 14:08:29,"Yasushi Kawase, Hanna Sumita, Yu Yokoi","http://arxiv.org/abs/2208.07666v1, http://arxiv.org/pdf/2208.07666v1",cs.GT
35177,th,"Algorithmic fairness is a new interdisciplinary field of study focused on how
to measure whether a process, or algorithm, may unintentionally produce unfair
outcomes, as well as whether or how the potential unfairness of such processes
can be mitigated. Statistical discrimination describes a set of informational
issues that can induce rational (i.e., Bayesian) decision-making to lead to
unfair outcomes even in the absence of discriminatory intent. In this article,
we provide overviews of these two related literatures and draw connections
between them. The comparison illustrates both the conflict between rationality
and fairness and the importance of endogeneity (e.g., ""rational expectations""
and ""self-fulfilling prophecies"") in defining and pursuing fairness. Taken in
concert, we argue that the two traditions suggest a value for considering new
fairness notions that explicitly account for how the individual characteristics
an algorithm intends to measure may change in response to the algorithm.",Algorithmic Fairness and Statistical Discrimination,2022-08-17 18:07:05,"John W. Patty, Elizabeth Maggie Penn","http://arxiv.org/abs/2208.08341v1, http://arxiv.org/pdf/2208.08341v1",econ.TH
35178,th,"""Banning the Box"" refers to a policy campaign aimed at prohibiting employers
from soliciting applicant information that could be used to statistically
discriminate against categories of applicants (in particular, those with
criminal records). In this article, we examine how the concealing or revealing
of informative features about an applicant's identity affects hiring both
directly and, in equilibrium, by possibly changing applicants' incentives to
invest in human capital. We show that there exist situations in which an
employer and an applicant are in agreement about whether to ban the box.
Specifically, depending on the structure of the labor market, banning the box
can be (1) Pareto dominant, (2) Pareto dominated, (3) benefit the applicant
while harming the employer, or (4) benefit the employer while harming the
applicant. Our results have policy implications spanning beyond employment
decisions, including the use of credit checks by landlords and standardized
tests in college admissions.","Ban The Box? Information, Incentives, and Statistical Discrimination",2022-08-17 18:22:14,"John W. Patty, Elizabeth Maggie Penn","http://arxiv.org/abs/2208.08348v1, http://arxiv.org/pdf/2208.08348v1",econ.TH
35180,th,"Min-max optimization problems (i.e., min-max games) have attracted a great
deal of attention recently as their applicability to a wide range of machine
learning problems has become evident. In this paper, we study min-max games
with dependent strategy sets, where the strategy of the first player constrains
the behavior of the second. Such games are best understood as sequential, i.e.,
Stackelberg, games, for which the relevant solution concept is Stackelberg
equilibrium, a generalization of Nash. One of the most popular algorithms for
solving min-max games is gradient descent ascent (GDA). We present a
straightforward generalization of GDA to min-max Stackelberg games with
dependent strategy sets, but show that it may not converge to a Stackelberg
equilibrium. We then introduce two variants of GDA, which assume access to a
solution oracle for the optimal Karush-Kuhn-Tucker (KKT) multipliers of the
games' constraints. We show that such an oracle exists for a large class of
convex-concave min-max Stackelberg games, and provide proof that our GDA
variants with such an oracle converge in $O(\frac{1}{\varepsilon^2})$
iterations to an $\varepsilon$-Stackelberg equilibrium, improving on the most
efficient algorithms currently known which converge in
$O(\frac{1}{\varepsilon^3})$ iterations. We then show that solving Fisher
markets, a canonical example of a min-max Stackelberg game, using our novel
algorithm, corresponds to buyers and sellers using myopic best-response
dynamics in a repeated market, allowing us to prove the convergence of these
dynamics in $O(\frac{1}{\varepsilon^2})$ iterations in Fisher markets. We close
by describing experiments on Fisher markets which suggest potential ways to
extend our theoretical results, by demonstrating how different properties of
the objective function can affect the convergence and convergence rate of our
algorithms.",Gradient Descent Ascent in Min-Max Stackelberg Games,2022-08-20 17:19:43,"Denizalp Goktas, Amy Greenwald","http://arxiv.org/abs/2208.09690v1, http://arxiv.org/pdf/2208.09690v1",cs.GT
35181,th,"In the game of investment in the common good, the free rider problem can
delay the stakeholders' actions in the form of a mixed strategy equilibrium.
However, it has been recently shown that the mixed strategy equilibria of the
stochastic war of attrition are destabilized by even the slightest degree of
asymmetry between the players. Such extreme instability is contrary to the
widely accepted notion that a mixed strategy equilibrium is the hallmark of the
war of attrition. Motivated by this quandary, we search for a mixed strategy
equilibrium in a stochastic game of investment in the common good. Our results
show that, despite asymmetry, a mixed strategy equilibrium exists if the model
takes into account the repeated investment opportunities. The mixed strategy
equilibrium disappears only if the asymmetry is sufficiently high. Since the
mixed strategy equilibrium is less efficient than pure strategy equilibria, it
behooves policymakers to prevent it by promoting a sufficiently high degree of
asymmetry between the stakeholders through, for example, asymmetric subsidy.",Investment in the common good: free rider effect and the stability of mixed strategy equilibria,2022-08-24 01:25:39,"Youngsoo Kim, H. Dharma Kwon","http://arxiv.org/abs/2208.11217v1, http://arxiv.org/pdf/2208.11217v1",math.OC
35182,th,"We consider a many-to-one variant of the stable matching problem. More
concretely, we consider the variant of the stable matching problem where one
side has a matroid constraint. Furthermore, we consider the situation where the
preference of each agent may contain ties. In this setting, we consider the
problem of checking the existence of a strongly stable matching, and finding a
strongly stable matching if a strongly stable matching exists. We propose a
polynomial-time algorithm for this problem.",Strongly Stable Matchings under Matroid Constraints,2022-08-24 05:31:59,Naoyuki Kamiyama,"http://arxiv.org/abs/2208.11272v3, http://arxiv.org/pdf/2208.11272v3",cs.GT
35183,th,"Contributing to the toolbox for interpreting election results, we evaluate
the robustness of election winners to random noise. We compare the robustness
of different voting rules and evaluate the robustness of real-world election
winners from the Formula 1 World Championship and some variant of political
elections. We find many instances of elections that have very non-robust
winners and numerous delicate robustness patterns that cannot be identified
using classical and simpler approaches.",A Quantitative and Qualitative Analysis of the Robustness of (Real-World) Election Winners,2022-08-29 20:49:44,"Niclas Boehmer, Robert Bredereck, Piotr Faliszewski, Rolf Niedermeier","http://arxiv.org/abs/2208.13760v1, http://arxiv.org/pdf/2208.13760v1",cs.GT
35184,th,"This paper studies one emerging procurement auction scenario where the market
is constructed over the social networks. In a social network composed of many
agents, smartphones or computers, one requester releases her requirement for
goods or tasks to suppliers, then suppliers who have entered the market are
also encouraged to invite some other suppliers to join and all the suppliers in
the network could compete for the business. The key problem for this networked
auction is about how to incentivize each node who have entered the sell not
only to truthfully use her full ability, but also to forward the task to her
neighbours. Auctions conducting over social networks have attracted
considerable interests in recent years. However, most of the existing works
focus on classic forward auctions. Moreover, there is no existing valid
networked auction considering multiple goods/tasks. This work is the first to
explore procurement auction for both homogeneous and heterogeneous goods or
tasks in social networks. From both theoretical proof and experimental
simulation, we proved that the proposed mechanisms are proved to be
individual-rational and incentive-compatible, also both the cost of the system
and the requester could get decreased.",Combinatorial Procurement Auction in Social Networks,2022-08-31 04:42:11,"Yuhang Guo, Dong Hao, Bin Li","http://arxiv.org/abs/2208.14591v1, http://arxiv.org/pdf/2208.14591v1",cs.GT
35185,th,"In order to understand if and how strategic resource allocation can constrain
the structure of pair-wise competition outcomes in competitive human
competitions, we introduce a new multiplayer resource allocation game, the
Population Lotto Game. This new game allows agents to allocate their resources
across a continuum of possible specializations. While this game allows
non-transitive cycles between players, we show that the Nash equilibrium of the
game also forms a hierarchical structure between discrete `leagues' based on
their different resource budgets, with potential sub-league structure and/or
non-transitive cycles inside individual leagues. We provide an algorithm that
can find a particular Nash equilibrium for any finite set of discrete
sub-population sizes and budgets. Further, our algorithm finds the unique Nash
equilibrium that remains stable for the subset of players with budgets below
any threshold.",The Emergence of League and Sub-League Structure in the Population Lotto Game,2022-09-01 01:27:23,"Giovanni Artiglio, Aiden Youkhana, Joel Nishimura","http://arxiv.org/abs/2209.00143v1, http://arxiv.org/pdf/2209.00143v1",cs.GT
35194,th,"There are two well-known sufficient conditions for Nash equilibrium in
two-player games: mutual knowledge of rationality (MKR) and mutual knowledge of
conjectures. MKR assumes that the concept of rationality is mutually known. In
contrast, mutual knowledge of conjectures assumes that a given profile of
conjectures is mutually known, which has long been recognized as a strong
assumption. In this note, we introduce a notion of ""mutual assumption of
rationality and correctness"" (MARC), which conceptually aligns more closely
with the MKR assumption. We present two main results. Our first result
establishes that MARC holds in every two-person zero-sum game. In our second
theorem, we show that MARC does not in general hold in n-player games.",Rationality and correctness in n-player games,2022-09-20 19:46:22,"Lorenzo Bastianello, Mehmet S. Ismail","http://arxiv.org/abs/2209.09847v2, http://arxiv.org/pdf/2209.09847v2",econ.TH
35186,th,"A lottery is a popular form of gambling between a seller and multiple buyers,
and its profitable design is of primary interest to the seller. Designing a
lottery requires modeling the buyer decision-making process for uncertain
outcomes. One of the most promising descriptive models of such decision-making
is the cumulative prospect theory (CPT), which represents people's different
attitudes towards gain and loss, and their overestimation of extreme events. In
this study, we design a lottery that maximizes the seller's profit when the
buyers follow CPT. The derived problem is nonconvex and constrained, and hence,
it is challenging to directly characterize its optimal solution. We overcome
this difficulty by reformulating the problem as a three-level optimization
problem. The reformulation enables us to characterize the optimal solution.
Based on this characterization, we propose an algorithm that computes the
optimal lottery in linear time with respect to the number of lottery tickets.
In addition, we provide an efficient algorithm for a more general setting in
which the ticket price is constrained. To the best of the authors' knowledge,
this is the first study that employs the CPT framework for designing an optimal
lottery.",Optimal design of lottery with cumulative prospect theory,2022-09-02 08:10:12,"Shunta Akiyama, Mitsuaki Obara, Yasushi Kawase","http://arxiv.org/abs/2209.00822v1, http://arxiv.org/pdf/2209.00822v1",cs.GT
35187,th,"In the principal-agent problem formulated in [Myerson 1982], agents have
private information (type) and make private decisions (action), both of which
are unobservable to the principal. Myerson pointed out an elegant solution that
relies on the revelation principle, which states that without loss of
generality optimal coordination mechanisms of this problem can be assumed to be
truthful and direct. Consequently, the problem can be solved by a linear
program when the support sets of the action and type spaces are finite. In this
paper, we extend Myerson's results to the setting where the principal's action
space might be infinite and subject to additional design constraints. This
generalized principal-agent model unifies several important design problems --
including contract design, information design, and Bayesian Stackelberg games
-- and encompasses them as special cases. We present a revelation principle for
this general model, based on which a polynomial-time algorithm is derived for
computing the optimal coordination mechanism. This algorithm not only implies
new efficient algorithms simultaneously for all the aforementioned special
cases but also significantly simplifies previous approaches in the literature.",Optimal Coordination in Generalized Principal-Agent Problems: A Revisit and Extensions,2022-09-02 18:56:08,"Jiarui Gan, Minbiao Han, Jibang Wu, Haifeng Xu","http://arxiv.org/abs/2209.01146v1, http://arxiv.org/pdf/2209.01146v1",cs.GT
35188,th,"In 1964 Shapley devised a family of games for which fictitious play fails to
converge to Nash equilibrium. The games are two-player non-zero-sum with 3 pure
strategies per player. Shapley assumed that each player played a specific pure
strategy in the first round. We show that if we use random (mixed) strategy
profile initializations we are able to converge to Nash equilibrium
approximately 1/3 of the time for a representative game in this class.",Random Initialization Solves Shapley's Fictitious Play Counterexample,2022-09-06 01:12:09,Sam Ganzfried,"http://arxiv.org/abs/2209.02154v3, http://arxiv.org/pdf/2209.02154v3",cs.GT
35189,th,"Quasi-convexity in probabilistic mixtures is a common and useful property in
decision analysis. We study a general class of non-monotone mappings, called
the generalized rank-dependent functions, which include the preference models
of expected utilities, dual utilities, and rank-dependent utilities as special
cases, as well as signed Choquet integrals used in risk management. As one of
our main results, quasi-convex (in mixtures) signed Choquet integrals precisely
include two parts: those that are convex (in mixtures) and the class of scaled
quantile-spread mixtures, and this result leads to a full characterization of
quasi-convexity for generalized rank-dependent functions. Seven equivalent
conditions for quasi-convexity in mixtures are obtained for dual utilities and
signed Choquet integrals. We also illustrate a conflict between convexity in
mixtures and convexity in risk pooling among constant-additive mappings.",Quasi-convexity in mixtures for generalized rank-dependent functions,2022-09-07 21:56:10,"Ruodu Wang, Qinyu Wu","http://arxiv.org/abs/2209.03425v1, http://arxiv.org/pdf/2209.03425v1",econ.TH
35190,th,"We investigate the fair allocation of indivisible goods to agents with
possibly different entitlements represented by weights. Previous work has shown
that guarantees for additive valuations with existing envy-based notions cannot
be extended to the case where agents have matroid-rank (i.e., binary
submodular) valuations. We propose two families of envy-based notions for
matroid-rank and general submodular valuations, one based on the idea of
transferability and the other on marginal values. We show that our notions can
be satisfied via generalizations of rules such as picking sequences and maximum
weighted Nash welfare. In addition, we introduce welfare measures based on
harmonic numbers, and show that variants of maximum weighted harmonic welfare
offer stronger fairness guarantees than maximum weighted Nash welfare under
matroid-rank valuations.",Weighted Envy-Freeness for Submodular Valuations,2022-09-14 09:17:41,"Luisa Montanari, Ulrike Schmidt-Kraepelin, Warut Suksompong, Nicholas Teh","http://arxiv.org/abs/2209.06437v1, http://arxiv.org/pdf/2209.06437v1",cs.GT
35191,th,"The paper studies complementary choice functions, i.e. monotonic and
consistent choice functions. Such choice functions were introduced and used in
the work \cite{RY} for the investigation of matchings with complementary contracts.
Three (universal) ways of constructing such functions are given: through
pre-topologies, as direct images of completely complementary (or pre-ordered)
choice functions, and with the help of supermodular set-functions.",Complementary choice functions,2022-09-14 12:37:29,Vladimir Danilov,"http://arxiv.org/abs/2209.06514v1, http://arxiv.org/pdf/2209.06514v1",math.CO
35192,th,"This paper presents a logic of preference and functional dependence (LPFD)
and its hybrid extension (HLPFD), for both of which sound and strongly complete
axiomatizations are provided. The decidability of LPFD is also proved. The
application of LPFD and HLPFD to modelling cooperative games in strategic and
coalitional forms is explored. The resulting framework provides a unified view
on Nash equilibrium, Pareto optimality and the core. The philosophical
relevance of these game-theoretical notions to discussions of collective agency
is made explicit. Some key connections with other logics are also revealed, for
example, the coalition logic, the logic functional dependence and the logic of
ceteris paribus preference.","Reasoning about Dependence, Preference and Coalitional Power",2022-09-17 04:52:27,"Qian Chen, Chenwei Shi, Yiyan Wang","http://arxiv.org/abs/2209.08213v1, http://arxiv.org/pdf/2209.08213v1",cs.GT
35195,th,"We show that the mechanism-design problem for a monopolist selling multiple,
heterogeneous objects to a buyer with ex ante symmetric and additive values is
equivalent to the mechanism-design problem for a monopolist selling identical
objects to a buyer with decreasing marginal values. The equivalence is
facilitated by the rank-preserving property, which states that higher-valued
objects are assigned with a higher probability. In the heterogeneous-objects
model, every symmetric and incentive-compatible mechanism is rank preserving.
In the identical-objects model, every feasible mechanism is rank preserving. We
provide three applications in which we derive new results for the
identical-objects model and use our equivalence result to establish
corresponding results in the heterogeneous-objects model.",Rank-Preserving Multidimensional Mechanisms,2022-09-21 09:03:49,"Sushil Bikhchandani, Debasis Mishra","http://arxiv.org/abs/2209.10137v4, http://arxiv.org/pdf/2209.10137v4",econ.TH
35196,th,"Lately, Non-Fungible Tokens (NFTs), i.e., uniquely discernible assets on a
blockchain, have skyrocketed in popularity by addressing a broad audience.
However, the typical NFT auctioning procedures are conducted in various, ad hoc
ways, while mostly ignoring the context that the blockchain provides. One of
the main targets of this work is to shed light on the vastly unexplored design
space of NFT Auction Mechanisms, especially in those characteristics that
fundamentally differ from traditional and more contemporaneous forms of
auctions. We focus on the case that bidders have a valuation for the auctioned
NFT, i.e., what we term the single-item NFT auction case. In this setting, we
formally define an NFT Auction Mechanism, give the properties that we would
ideally like a perfect mechanism to satisfy (broadly known as incentive
compatibility and collusion resistance) and prove that it is impossible to have
such a perfect mechanism. Even though we cannot have an all-powerful protocol
like that, we move on to consider relaxed notions of those properties that we
may desire the protocol to satisfy, as a trade-off between implementability and
economic guarantees. Specifically, we define the notion of an
equilibrium-truthful auction, where neither the seller nor the bidders can
improve their utility by acting non-truthfully, so long as the counter-party
acts truthfully. We also define asymptotically second-price auctions, in which
the seller does not lose asymptotically any revenue in comparison to the
theoretically-optimal (static) second-price sealed-bid auction, in the case
that the bidders' valuations are drawn independently from some distribution. We
showcase why these two are very desirable properties for an auction mechanism
to enjoy, and construct the first known NFT Auction Mechanism which provably
possesses such formal guarantees.",A Framework for Single-Item NFT Auction Mechanism Design,2022-09-22 23:12:16,"Jason Milionis, Dean Hirsch, Andy Arditi, Pranav Garimidi","http://dx.doi.org/10.1145/3560832.3563436, http://arxiv.org/abs/2209.11293v1, http://arxiv.org/pdf/2209.11293v1",cs.GT
35197,th,"The emergence of increasingly sophisticated artificial intelligence (AI)
systems has sparked intense debate among researchers, policymakers, and the
public due to their potential to surpass human intelligence and capabilities in
all domains. In this paper, I propose a game-theoretic framework that captures
the strategic interactions between a human agent and a potential superhuman
machine agent. I identify four key assumptions: Strategic Unpredictability,
Access to Machine's Strategy, Rationality, and Superhuman Machine. The main
result of this paper is an impossibility theorem: these four assumptions are
inconsistent when taken together, but relaxing any one of them results in a
consistent set of assumptions. Two straightforward policy recommendations
follow: first, policymakers should control access to specific human data to
maintain Strategic Unpredictability; and second, they should grant select AI
researchers access to superhuman machine research to ensure Access to Machine's
Strategy holds. My analysis contributes to a better understanding of the
context that can shape the theoretical development of superhuman AI.",Exploring the Constraints on Artificial General Intelligence: A Game-Theoretic No-Go Theorem,2022-09-26 02:17:20,Mehmet S. Ismail,"http://arxiv.org/abs/2209.12346v2, http://arxiv.org/pdf/2209.12346v2",econ.TH
35198,th,"We study axiomatic foundations for different classes of constant-function
automated market makers (CFMMs). We focus particularly on separability and on
different invariance properties under scaling. Our main results are an
axiomatic characterization of a natural generalization of constant product
market makers (CPMMs), popular in decentralized finance, on the one hand, and a
characterization of the Logarithmic Scoring Rule Market Makers (LMSR), popular
in prediction markets, on the other hand. The first class is characterized by
the combination of independence and scale invariance, whereas the second is
characterized by the combination of independence and translation invariance.
The two classes are therefore distinguished by a different invariance property
that is motivated by different interpretations of the num\'eraire in the two
applications.
  However, both are pinned down by the same separability property.
  Moreover, we characterize the CPMM as an extremal point within the class of
scale invariant, independent, symmetric AMMs with non-concentrated liquidity
provision. Our results add to a formal analysis of mechanisms that are
currently used for decentralized exchanges and connect the most popular class
of DeFi AMMs to the most popular class of prediction market AMMs.",Axioms for Constant Function Market Makers,2022-09-30 22:22:47,"Jan Christoph Schlegel, Mateusz Kwaśnicki, Akaki Mamageishvili","http://arxiv.org/abs/2210.00048v4, http://arxiv.org/pdf/2210.00048v4",cs.GT
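As an illustration of the two families contrasted in the abstract above, here is a minimal Python sketch using their textbook forms: the constant-product invariant for CPMMs and the logarithmic scoring rule cost function for LMSR. The formulas are the standard ones from the literature rather than the paper's axiomatic derivation, and the function names and the liquidity parameter b are illustrative assumptions.

    import math

    def cpmm_swap_out(reserve_in, reserve_out, amount_in):
        # Constant-product invariant: reserve_in * reserve_out is held fixed by every trade.
        k = reserve_in * reserve_out
        new_reserve_in = reserve_in + amount_in
        return reserve_out - k / new_reserve_in  # units of the other asset paid out

    def lmsr_cost(quantities, b=100.0):
        # Standard LMSR cost function: C(q) = b * log(sum_i exp(q_i / b)).
        return b * math.log(sum(math.exp(q / b) for q in quantities))

    def lmsr_trade_cost(q_before, q_after, b=100.0):
        # Price of moving outstanding shares from q_before to q_after is the cost difference.
        return lmsr_cost(q_after, b) - lmsr_cost(q_before, b)

    # Example: swap 10 units into a 1000/1000 pool; buy 5 shares of outcome 0 in a 2-outcome market.
    print(cpmm_swap_out(1000.0, 1000.0, 10.0))      # about 9.901
    print(lmsr_trade_cost([0.0, 0.0], [5.0, 0.0]))  # about 2.53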
35199,th,"Since the 1990s, AI systems have achieved superhuman performance in major
zero-sum games where ""winning"" has an unambiguous definition. However, most
social interactions are mixed-motive games, where measuring the performance of
AI systems is a non-trivial task. In this paper, I propose a novel benchmark
called super-Nash performance to assess the performance of AI systems in
mixed-motive settings. I show that a solution concept called optimin achieves
super-Nash performance in every n-person game, i.e., for every Nash equilibrium
there exists an optimin where every player not only receives but also
guarantees super-Nash payoffs even if the others deviate unilaterally and
profitably from the optimin.",Optimin achieves super-Nash performance,2022-10-02 23:53:52,Mehmet S. Ismail,"http://arxiv.org/abs/2210.00625v1, http://arxiv.org/pdf/2210.00625v1",cs.GT
35200,th,"We study learning on social media with an equilibrium model of users
interacting with shared news stories. Rational users arrive sequentially,
observe an original story (i.e., a private signal) and a sample of
predecessors' stories in a news feed, and then decide which stories to share.
The observed sample of stories depends on what predecessors share as well as
the sampling algorithm generating news feeds. We focus on how often this
algorithm selects more viral (i.e., widely shared) stories. Showing users viral
stories can increase information aggregation, but it can also generate steady
states where most shared stories are wrong. These misleading steady states
self-perpetuate, as users who observe wrong stories develop wrong beliefs, and
thus rationally continue to share them. Finally, we describe several
consequences for platform design and robustness.",Learning from Viral Content,2022-10-04 02:17:10,"Krishna Dasaratha, Kevin He","http://arxiv.org/abs/2210.01267v2, http://arxiv.org/pdf/2210.01267v2",econ.TH
35201,th,"Optimizing strategic decisions (a.k.a. computing equilibrium) is key to the
success of many non-cooperative multi-agent applications. However, in many
real-world situations, we may face the exact opposite of this game-theoretic
problem -- instead of prescribing equilibrium of a given game, we may directly
observe the agents' equilibrium behaviors but want to infer the underlying
parameters of an unknown game. This research question, also known as inverse
game theory, has been studied in multiple recent works in the context of
Stackelberg games. Unfortunately, existing works exhibit quite negative
results, showing statistical hardness and computational hardness under the
assumption that the follower behaves with perfect rationality. Our work relaxes
this perfect-rationality assumption to the classic quantal response model, a more
realistic model of bounded rationality. Interestingly, we show that
the smoothness brought by such a bounded-rationality model actually leads to
provably more efficient learning of the follower utility parameters in general
Stackelberg games. Systematic empirical experiments on synthesized games
confirm our theoretical results and further suggest its robustness beyond the
strict quantal response model.",Inverse Game Theory for Stackelberg Games: the Blessing of Bounded Rationality,2022-10-04 07:42:42,"Jibang Wu, Weiran Shen, Fei Fang, Haifeng Xu","http://arxiv.org/abs/2210.01380v1, http://arxiv.org/pdf/2210.01380v1",cs.GT
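The quantal response model invoked above is usually instantiated in its logit form, in which the follower plays each action with probability proportional to the exponential of its scaled utility. A minimal sketch, assuming the standard logit specification with an illustrative rationality parameter lam (this is the textbook form, not code from the paper):

    import math

    def logit_quantal_response(utilities, lam=1.0):
        # Logit quantal response: P(a) proportional to exp(lam * u(a)); lam -> infinity recovers best response.
        m = max(utilities)  # subtract the max for numerical stability
        weights = [math.exp(lam * (u - m)) for u in utilities]
        total = sum(weights)
        return [w / total for w in weights]

    # Example: a follower with three actions worth 1.0, 0.5 and 0.0 to her.
    print(logit_quantal_response([1.0, 0.5, 0.0], lam=2.0))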
35202,th,"We study an online resource allocation problem under uncertainty about demand
and about the reward of each type of demand (agents) for the resource. Even
though dealing with demand uncertainty in resource allocation problems has been
the topic of many papers in the literature, the challenge of not knowing
rewards has been barely explored. The lack of knowledge about agents' rewards
is inspired by the problem of allocating units of a new resource (e.g., newly
developed vaccines or drugs) with unknown effectiveness/value. For such
settings, we assume that we can \emph{test} the market before the allocation
period starts. During the test period, we sample each agent in the market with
probability $p$. We study how to optimally exploit the \emph{sample
information} in our online resource allocation problem under adversarial
arrival processes. We present an asymptotically optimal algorithm that achieves
$1-\Theta(1/(p\sqrt{m}))$ competitive ratio, where $m$ is the number of
available units of the resource. By characterizing an upper bound on the
competitive ratio of any randomized and deterministic algorithm, we show that
our competitive ratio of $1-\Theta(1/(p\sqrt{m}))$ is tight for any $p
=\omega(1/\sqrt{m})$. That asymptotic optimality is possible with sample
information highlights the significant advantage of running a test period for
new resources. We demonstrate the efficacy of our proposed algorithm using a
dataset that contains the number of COVID-19 related hospitalized patients
across different age groups.",Online Resource Allocation with Samples,2022-10-10 18:35:28,"Negin Gorlezaei, Patrick Jaillet, Zijie Zhou","http://arxiv.org/abs/2210.04774v1, http://arxiv.org/pdf/2210.04774v1",math.OC
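To make the asymptotic guarantee above concrete (with the constant hidden in the Theta left unspecified, since the abstract does not report it), a worked instance of the loss term:
\[
1 - \Theta\!\left(\tfrac{1}{p\sqrt{m}}\right): \qquad \text{with } m = 10{,}000 \text{ units and } p = 0.5, \quad \frac{1}{p\sqrt{m}} = \frac{1}{0.5 \times 100} = 0.02,
\]
so the competitive-ratio loss is on the order of a few percent (up to the unspecified constant), and doubling the sampling probability $p$ or quadrupling the supply $m$ halves it.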
35203,th,"Decentralized cryptocurrencies are payment systems that rely on aligning the
incentives of users and miners to operate correctly and offer a high quality of
service to users. Recent literature studies the mechanism design problem of the
auction serving as a cryptocurrency's transaction fee mechanism (TFM). We
present a general framework that captures both myopic and non-myopic settings,
as well as different possible strategic models for users. Within this general
framework, when restricted to the myopic case, we show that while the mechanism
that requires a user to ""pay-as-bid"", and greedily chooses among available
transactions based on their fees, is not dominant strategy incentive-compatible
for users, it has a Bayesian-Nash equilibrium where bids are slightly shaded.
Relaxing this incentive compatibility requirement circumvents the impossibility
results proven by previous works, and allows for an approximately revenue and
welfare optimal, myopic miner incentive-compatible (MMIC), and
off-chain-agreement (OCA)-proof mechanism. We prove these guarantees using
different benchmarks, and show that the pay-as-bid greedy auction is the
revenue optimal Bayesian incentive-compatible, MMIC and 1-OCA-proof mechanism
among a large class of mechanisms. We move beyond the myopic setting explored
in the literature, to one where users offer transaction fees for their
transaction to be accepted, as well as report their urgency level by specifying
the time to live of the transaction, after which it expires. We analyze
pay-as-bid mechanisms in this setting, and show the competitive ratio
guarantees provided by the greedy allocation rule. We then present a
better-performing non-myopic rule, and analyze its competitive ratio. The above
analysis is stated in terms of a cryptocurrency TFM, but applies to other
settings, such as cloud computing and decentralized ""gig"" economy, as well.",Greedy Transaction Fee Mechanisms for (Non-)myopic Miners,2022-10-14 16:24:32,"Yotam Gafni, Aviv Yaish","http://arxiv.org/abs/2210.07793v3, http://arxiv.org/pdf/2210.07793v3",cs.GT
35204,th,"This paper studies a sequential decision problem where payoff distributions
are known and where the riskiness of payoffs matters. Equivalently, it studies
sequential choice from a repeated set of independent lotteries. The
decision-maker is assumed to pursue strategies that are approximately optimal
for large horizons. By exploiting the tractability afforded by asymptotics,
conditions are derived characterizing when specialization in one action or
lottery throughout is asymptotically optimal and when optimality requires
intertemporal diversification. The key is the constancy or variability of risk
attitude. The main technical tool is a new central limit theorem.",Approximate optimality and the risk/reward tradeoff in a class of bandit problems,2022-10-14 22:52:30,"Zengjing Chen, Larry G. Epstein, Guodong Zhang","http://arxiv.org/abs/2210.08077v2, http://arxiv.org/pdf/2210.08077v2",econ.TH
35205,th,"Recently, Artificial Intelligence (AI) technology use has been rising in
sports. For example, to reduce staff during the COVID-19 pandemic, major tennis
tournaments replaced human line judges with Hawk-Eye Live technology. AI is now
ready to move beyond such mundane tasks, however. A case in point and a perfect
application ground is chess. To reduce the growing incidence of draws, many
elite tournaments have resorted to fast chess tiebreakers. However, these
tiebreakers are vulnerable to strategic manipulation, e.g., in the last game of
the 2018 World Chess Championship, Magnus Carlsen -- in a significantly
advantageous position -- offered a draw to Fabiano Caruana (who accepted the
offer) to proceed to fast chess tiebreaks in which Carlsen had even better odds
of winning the championship. By contrast, we prove that our AI-based method can
serve as a judge to break ties without being vulnerable to such manipulation.
It relies on measuring the difference between the evaluations of a player's
actual move and the best move as deemed by a powerful chess engine. If there is
a tie, the player with the higher quality measure wins the tiebreak. We
generalize our method to all competitive sports and games in which AI's
superiority is -- or can be -- established.",AI-powered mechanisms as judges: Breaking ties in chess and beyond,2022-10-15 16:27:49,"Nejat Anbarci, Mehmet S. Ismail","http://arxiv.org/abs/2210.08289v2, http://arxiv.org/pdf/2210.08289v2",econ.TH
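A minimal sketch of the quality measure the abstract describes: the gap between a chess engine's evaluation of the move actually played and of the engine's preferred move, aggregated per player, with the lower aggregate gap (i.e., higher-quality play) winning the tiebreak. The use of centipawn units, the averaging step, and the function names are illustrative assumptions, not taken from the paper.

    def move_gap(best_eval, played_eval):
        # Per-move quality loss: engine evaluation of the best move minus that of the move played,
        # both from the mover's perspective, so 0 means the player matched the engine's choice.
        return best_eval - played_eval

    def average_gap(move_pairs):
        # Aggregate a player's per-move gaps; averaging is an illustrative choice of aggregation.
        return sum(move_gap(best, played) for best, played in move_pairs) / len(move_pairs)

    def break_tie(moves_white, moves_black):
        # The player with the smaller average gap played higher-quality chess and wins the tiebreak.
        gw, gb = average_gap(moves_white), average_gap(moves_black)
        return "white" if gw < gb else "black" if gb < gw else "still tied"

    # Example with made-up (best_eval, played_eval) centipawn pairs for each move.
    print(break_tie([(30, 25), (10, 10)], [(30, 5), (10, 10)]))  # "white"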
35220,th,"We consider priority-based matching problems with limited farsightedness. We
show that, once agents are sufficiently farsighted, the matching obtained from
the Top Trading Cycles (TTC) algorithm becomes stable: a singleton set
consisting of the TTC matching is a horizon-$k$ vNM stable set if the degree of
farsightedness is greater than three times the number of agents in the largest
cycle of the TTC. On the contrary, the matching obtained from the Deferred
Acceptance (DA) algorithm may not belong to any horizon-$k$ vNM stable set for
$k$ large enough.",Limited Farsightedness in Priority-Based Matching,2022-12-14 12:02:41,"Ata Atay, Ana Mauleon, Vincent Vannetelbosch","http://arxiv.org/abs/2212.07427v1, http://arxiv.org/pdf/2212.07427v1",econ.TH
35206,th,"The hierarchical nature of corporate information processing is a topic of
great interest in economic and management literature. Firms are characterised
by a need to make complex decisions, often aggregating partial and uncertain
information, which greatly exceeds the attention capacity of constituent
individuals. However, the efficient transmission of these signals is still not
fully understood. Recently, the information bottleneck principle has emerged as
a powerful tool for understanding the transmission of relevant information
through intermediate levels in a hierarchical structure. In this paper we note
that the information bottleneck principle may similarly be applied directly to
corporate hierarchies. In doing so we provide a bridge between organisation
theory and that of rapidly expanding work in deep neural networks (DNNs),
including the use of skip connections as a means of more efficient transmission
of information in hierarchical organisations.",The Information Bottleneck Principle in Corporate Hierarchies,2022-10-25 09:18:38,Cameron Gordon,"http://arxiv.org/abs/2210.14861v1, http://arxiv.org/pdf/2210.14861v1",cs.SI
35207,th,"Maximizing the revenue from selling two or more goods has been shown to
require the use of $nonmonotonic$ mechanisms, where a higher-valuation buyer
may pay less than a lower-valuation one. Here we show that the restriction to
$monotonic$ mechanisms may not just lower the revenue, but may in fact yield
only a $negligible$ $fraction$ of the maximal revenue; more precisely, the
revenue from monotonic mechanisms is no more than k times the simple revenue
obtainable by selling the goods separately, or bundled (where k is the number
of goods), whereas the maximal revenue may be arbitrarily larger. We then study
the class of monotonic mechanisms and its subclass of allocation-monotonic
mechanisms, and obtain useful characterizations and revenue bounds.",Monotonic Mechanisms for Selling Multiple Goods,2022-10-31 12:01:59,"Ran Ben-Moshe, Sergiu Hart, Noam Nisan","http://arxiv.org/abs/2210.17150v1, http://arxiv.org/pdf/2210.17150v1",cs.GT
35208,th,"We characterize the assignment games which admit a population monotonic
allocation scheme (PMAS) in terms of efficiently verifiable structural
properties of the nonnegative matrix that induces the game. We prove that an
assignment game is PMAS-admissible if and only if the positive elements of the
underlying nonnegative matrix form orthogonal submatrices of three special
types. In game theoretic terms it means that an assignment game is
PMAS-admissible if and only if it contains a veto player or a dominant veto
mixed pair or is composed of these two types of special assignment games.
We also show that in a PMAS-admissible assignment game all core allocations can
be extended to a PMAS, and the nucleolus coincides with the tau-value.",Assignment games with population monotonic allocation schemes,2022-10-31 17:56:56,Tamás Solymosi,"http://arxiv.org/abs/2210.17373v1, http://arxiv.org/pdf/2210.17373v1",cs.GT
35209,th,"This paper develops a framework for the design of scoring rules to optimally
incentivize an agent to exert a multi-dimensional effort. This framework is a
generalization to strategic agents of the classical knapsack problem (cf.
Briest, Krysta, and V\""ocking, 2005, Singer, 2010) and it is foundational to
applying algorithmic mechanism design to the classroom. The paper identifies
two simple families of scoring rules that guarantee constant approximations to
the optimal scoring rule. The truncated separate scoring rule is the sum of
single dimensional scoring rules that is truncated to the bounded range of
feasible scores. The threshold scoring rule gives the maximum score if reports
exceed a threshold and zero otherwise. Approximate optimality of one or the
other of these rules is similar to the bundling or selling separately result of
Babaioff, Immorlica, Lucier, and Weinberg (2014). Finally, we show that the
approximate optimality of the best of those two simple scoring rules is robust
when the agent's choice of effort is made sequentially.",Optimal Scoring Rules for Multi-dimensional Effort,2022-11-07 07:36:49,"Jason D. Hartline, Liren Shan, Yingkai Li, Yifan Wu","http://arxiv.org/abs/2211.03302v2, http://arxiv.org/pdf/2211.03302v2",cs.GT
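A minimal sketch of the two simple scoring-rule families named above, assuming additive per-dimension rules, a bounded score range of [0, max_score], and per-dimension thresholds; these modeling choices are illustrative and not taken from the paper.

    def truncated_separate_score(reports, per_dim_rules, max_score=1.0):
        # Sum of single-dimensional scoring rules, truncated to the bounded range of feasible scores.
        return min(sum(rule(r) for rule, r in zip(per_dim_rules, reports)), max_score)

    def threshold_score(reports, thresholds, max_score=1.0):
        # Maximum score if the reports exceed the threshold (here: in every dimension), zero otherwise.
        return max_score if all(r >= t for r, t in zip(reports, thresholds)) else 0.0

    # Example with two effort dimensions; the linear per-dimension rules and thresholds are made up.
    rules = [lambda r: 0.6 * r, lambda r: 0.6 * r]
    print(truncated_separate_score([0.9, 0.9], rules))  # 1.08 truncated to 1.0
    print(threshold_score([0.9, 0.9], [0.8, 0.8]))      # 1.0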
35210,th,"We provide a justification for the prevalence of linear (commission-based)
contracts in practice under the Bayesian framework. We consider a hidden-action
principal-agent model, in which actions require different amounts of effort,
and the agent's cost per-unit-of-effort is private. We show that linear
contracts are near-optimal whenever there is sufficient uncertainty in the
principal-agent setting.",Bayesian Analysis of Linear Contracts,2022-11-13 11:38:51,"Tal Alon, Paul Dütting, Yingkai Li, Inbal Talgam-Cohen","http://arxiv.org/abs/2211.06850v2, http://arxiv.org/pdf/2211.06850v2",cs.GT
35211,th,"In solving today's social issues, it is necessary to determine solutions that
are acceptable to all stakeholders and collaborate to apply them. The
conventional technology of ""permissible meeting analysis"" derives a
consensusable choice that falls within everyone's permissible range through
mathematical analyses; however, it tends to be biased toward the majority in a
group, making it difficult to reach a consensus when a conflict arises. To
support consensus building (defined here as an acceptable compromise that not
everyone rejects), we developed a composite consensus-building process. The
developed process addresses this issue by combining permissible meeting
analysis with a new ""compromise choice-exploration"" technology, which presents
a consensusable choice that emphasizes fairness and equality among everyone
when permissible meeting analysis fails to do so. When both permissible meeting
analysis and compromise choice exploration do not arrive at a consensus, a
facility is provided to create a sublated choice among those provided by them.
The trial experimental results confirmed that permissible meeting analysis and
compromise choice exploration are sufficiently useful for deriving
consensusable choices. Furthermore, we found that compromise choice exploration
is characterized by its ability to derive choices that control the balance
between compromise and fairness. Our proposed composite consensus-building
approach could be applied in a wide range of situations, from local issues in
municipalities and communities to international issues such as environmental
protection and human rights issues. It could also aid in developing digital
democracy and platform cooperativism.",Composite Consensus-Building Process: Permissible Meeting Analysis and Compromise Choice Exploration,2022-11-16 03:49:01,"Yasuhiro Asa, Takeshi Kato, Ryuji Mine","http://arxiv.org/abs/2211.08593v1, http://arxiv.org/pdf/2211.08593v1",cs.GT
35212,th,"We present a new theoretical framework to think about asset price bubbles in
dividend-paying assets. We study a general equilibrium macro-finance model with
a positive feedback loop between capital investment and land price, whose
magnitude is affected by financial leverage. As leverage is relaxed beyond a
critical value, a phase transition occurs from balanced growth of a stationary
nature where land prices reflect fundamentals (present value of rents) to
unbalanced growth of a nonstationary nature where land prices grow faster than
rents, generating a land price bubble. Unbalanced growth dynamics and bubbles
are associated with financial deregulation and technological progress.","Leverage, Endogenous Unbalanced Growth, and Asset Price Bubbles",2022-11-23 19:40:09,"Tomohiro Hirano, Ryo Jinnai, Alexis Akira Toda","http://arxiv.org/abs/2211.13100v6, http://arxiv.org/pdf/2211.13100v6",econ.TH
35213,th,"In party-approval multiwinner elections the goal is to allocate the seats of
a fixed-size committee to parties based on the approval ballots of the voters
over the parties. In particular, each voter can approve multiple parties and
each party can be assigned multiple seats. Two central requirements in this
setting are proportional representation and strategyproofness. Intuitively,
proportional representation requires that every sufficiently large group of
voters with similar preferences is represented in the committee.
Strategyproofness demands that no voter can benefit by misreporting her true
preferences. We show that these two axioms are incompatible for anonymous
party-approval multiwinner voting rules, thus proving a far-reaching
impossibility theorem. The proof of this result is obtained by formulating the
problem in propositional logic and then letting a SAT solver show that the
formula is unsatisfiable. Additionally, we demonstrate how to circumvent this
impossibility by considering a weakening of strategyproofness which requires
that only voters who do not approve any elected party cannot manipulate. While
most common voting rules fail even this weak notion of strategyproofness, we
characterize Chamberlin--Courant approval voting within the class of Thiele
rules based on this strategyproofness notion.",Strategyproofness and Proportionality in Party-Approval Multiwinner Elections,2022-11-24 15:35:15,"Théo Delemazure, Tom Demeulemeester, Manuel Eberl, Jonas Israel, Patrick Lederer","http://arxiv.org/abs/2211.13567v1, http://arxiv.org/pdf/2211.13567v1",cs.GT
35214,th,"Zero-sum stochastic games have found important applications in a variety of
fields, from machine learning to economics. Work on this model has primarily
focused on the computation of Nash equilibrium due to its effectiveness in
solving adversarial board and video games. Unfortunately, a Nash equilibrium is
not guaranteed to exist in zero-sum stochastic games when the payoffs at each
state are not convex-concave in the players' actions. A Stackelberg
equilibrium, however, is guaranteed to exist. Consequently, in this paper, we
study zero-sum stochastic Stackelberg games. Going beyond known existence
results for (non-stationary) Stackelberg equilibria, we prove the existence of
recursive (i.e., Markov perfect) Stackelberg equilibria (recSE) in these games,
provide necessary and sufficient conditions for a policy profile to be a recSE,
and show that recSE can be computed in (weakly) polynomial time via value
iteration. Finally, we show that zero-sum stochastic Stackelberg games can
model the problem of pricing and allocating goods across agents and time. More
specifically, we propose a zero-sum stochastic Stackelberg game whose recSE
correspond to the recursive competitive equilibria of a large class of
stochastic Fisher markets. We close with a series of experiments that showcase
how our methodology can be used to solve the consumption-savings problem in
stochastic Fisher markets.",Zero-Sum Stochastic Stackelberg Games,2022-11-25 04:09:56,"Denizalp Goktas, Jiayi Zhao, Amy Greenwald","http://arxiv.org/abs/2211.13847v1, http://arxiv.org/pdf/2211.13847v1",cs.GT
35215,th,"This paper provides a rigorous and gap-free proof of the index theorem used
in the theory of regular economy. In the index theorem that is the subject of
this paper, the only assumptions on the excess demand function are several
standard ones together with continuous differentiability around any equilibrium
price, and thus the theorem applies to many economies. However,
the textbooks on this theme contain only abbreviated proofs and there is no
known monograph that contains a rigorous proof of this theorem. Hence, the
purpose of this paper is to make this theorem available to more economists by
constructing a readable proof.",A Rigorous Proof of the Index Theorem for Economists,2022-11-25 21:10:38,Yuhki Hosoya,"http://dx.doi.org/10.50906/cems.2.0_11, http://arxiv.org/abs/2211.14272v3, http://arxiv.org/pdf/2211.14272v3",econ.TH
35216,th,"We investigate the level of success a firm achieves depending on which of two
common scoring algorithms is used to screen qualified applicants belonging to a
disadvantaged group. Both algorithms are trained on data generated by a
prejudiced decision-maker independently of the firm. One algorithm favors
disadvantaged individuals, while the other algorithm exemplifies prejudice in
the training data. We deliver sharp guarantees for when the firm finds more
success with one algorithm over the other, depending on the prejudice level of
the decision-maker.",Average Profits of Prejudiced Algorithms,2022-12-01 18:21:37,David J. Jin,"http://arxiv.org/abs/2212.00578v2, http://arxiv.org/pdf/2212.00578v2",econ.TH
35217,th,"In repeated games, strategies are often evaluated by their ability to
guarantee the performance of the single best action that is selected in
hindsight, a property referred to as \emph{Hannan consistency}, or
\emph{no-regret}. However, the effectiveness of the single best action as a
yardstick to evaluate strategies is limited, as any static action may perform
poorly in common dynamic settings. Our work therefore turns to a more ambitious
notion of \emph{dynamic benchmark consistency}, which guarantees the
performance of the best \emph{dynamic} sequence of actions, selected in
hindsight subject to a constraint on the allowable number of action changes.
Our main result establishes that for any joint empirical distribution of play
that may arise when all players deploy no-regret strategies, there exist
dynamic benchmark consistent strategies such that if all players deploy these
strategies the same empirical distribution emerges when the horizon is large
enough. This result demonstrates that although dynamic benchmark consistent
strategies have a different algorithmic structure and provide significantly
enhanced individual assurances, they lead to the same equilibrium set as
no-regret strategies. Moreover, the proof of our main result uncovers the
capacity of independent algorithms with strong individual guarantees to foster
a strong form of coordination.",Equilibria in Repeated Games under No-Regret with Dynamic Benchmarks,2022-12-06 20:26:01,"Ludovico Crippa, Yonatan Gur, Bar Light","http://arxiv.org/abs/2212.03152v2, http://arxiv.org/pdf/2212.03152v2",cs.GT
35218,th,"In the allocation of indivisible goods, the maximum Nash welfare (MNW) rule,
which chooses an allocation maximizing the product of the agents' utilities,
has received substantial attention for its fairness. We characterize MNW as the
only additive welfarist rule that satisfies envy-freeness up to one good. Our
characterization holds even in the simplest setting of two agents.",A Characterization of Maximum Nash Welfare for Indivisible Goods,2022-12-08 14:39:22,Warut Suksompong,"http://dx.doi.org/10.1016/j.econlet.2022.110956, http://arxiv.org/abs/2212.04203v2, http://arxiv.org/pdf/2212.04203v2",econ.TH
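For a concrete reading of the rule being characterized, here is a brute-force sketch of maximum Nash welfare for a tiny instance with additive valuations (the exhaustive search and the omission of the usual tie-breaking for zero-product allocations are simplifications for illustration):

    from itertools import product

    def max_nash_welfare(valuations):
        # valuations[i][g]: additive value of agent i for good g.
        # Enumerate every assignment of goods to agents and keep the one maximizing the product of utilities.
        n, m = len(valuations), len(valuations[0])
        best_assignment, best_product = None, -1.0
        for assignment in product(range(n), repeat=m):
            utilities = [sum(valuations[i][g] for g in range(m) if assignment[g] == i) for i in range(n)]
            nash_product = 1.0
            for u in utilities:
                nash_product *= u
            if nash_product > best_product:
                best_product, best_assignment = nash_product, assignment
        return best_assignment, best_product

    # Example: two agents, three goods; good g goes to agent best_assignment[g].
    print(max_nash_welfare([[3, 1, 1], [1, 2, 2]]))  # ((0, 1, 1), 12.0)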
35219,th,"We consider priority-based school choice problems with farsighted students.
We show that a singleton set consisting of the matching obtained from the Top
Trading Cycles (TTC) mechanism is a farsighted stable set. However, the
matching obtained from the Deferred Acceptance (DA) mechanism may not belong to
any farsighted stable set. Hence, the TTC mechanism provides an assignment that
is not only Pareto efficient but also farsightedly stable. Moreover, looking
forward three steps ahead is already sufficient for stabilizing the matching
obtained from the TTC.",School Choice with Farsighted Students,2022-12-14 11:57:28,"Ata Atay, Ana Mauleon, Vincent Vannetelbosch","http://arxiv.org/abs/2212.07108v1, http://arxiv.org/pdf/2212.07108v1",econ.TH
35221,th,"We explore the implications of a preference ordering for an investor-consumer
with a strong preference for keeping consumption above an exogenous social
norm, but who is willing to tolerate occasional dips below it. We do this by
splicing two CRRA preference orderings, one with high curvature below the norm
and the other with low curvature at or above it. We find this formulation
appealing for many endowment funds and sovereign wealth funds, including the
Norwegian Government Pension Fund Global, which inspired our research. We solve
this model analytically as well as numerically and find that annual spending
should not only be significantly lower than the expected financial return, but
mostly also procyclical. In particular, financial losses should, as a rule, be
followed by larger than proportional spending cuts, except when some smoothing
is needed to keep spending from falling too far below the social norm. Yet, at
very low wealth levels, spending should be kept particularly low in order to
build sufficient wealth to raise consumption above the social norm. Financial
risk taking should also be modest and procyclical, so that the investor
sometimes may want to ""buy at the top"" and ""sell at the bottom"". Many of these
features are shared by habit-formation models and other models with some lower
bound for consumption. However, our specification is more flexible and thus
more easily adaptable to actual fund management. The nonlinearity of the policy
functions may present challenges regarding delegation to professional managers.
However, simpler rules of thumb with constant or slowly moving equity share and
consumption-wealth ratio can reach almost the same expected discounted utility.
Even so, the constant levels will then look very different from the
implications of expected CRRA utility or Epstein-Zin preferences in that
consumption is much lower.",Dynamic spending and portfolio decisions with a soft social norm,2022-12-20 11:00:11,"Knut Anton Mork, Fabian Andsem Harang, Haakon Andreas Trønnes, Vegard Skonseng Bjerketvedt","http://arxiv.org/abs/2212.10053v1, http://arxiv.org/pdf/2212.10053v1",econ.TH
35222,th,"We design efficient and robust algorithms for the batch posting of rollup
chain calldata on the base layer chain, using tools from operations research.
We relate the costs of posting and delaying, by converting them to the same
units and adding them up. The algorithm that keeps the average and maximum
queued number of batches tolerable enough improves the posting costs of the
trivial algorithm, which posts batches immediately when they are created, by
8%. On the other hand, the algorithm that only cares moderately about the batch
queue length can improve the trivial algorithm's posting costs by 29%. Our
findings can be used by layer two projects that post data to the base layer at
some regular rate.",Efficient Rollup Batch Posting Strategy on Base Layer,2022-12-20 18:24:16,"Akaki Mamageishvili, Edward W. Felten","http://arxiv.org/abs/2212.10337v2, http://arxiv.org/pdf/2212.10337v2",cs.CR
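A toy illustration of the cost accounting described above: posting and delaying costs are converted to the same units and added up, and a queue-length threshold policy is compared with the trivial post-immediately policy. The simulation, the threshold policy, and the fee numbers are illustrative assumptions, not the paper's algorithm or data.

    def simulate(batch_arrivals, threshold, post_fee, delay_fee):
        # batch_arrivals[t]: batches created at step t; post_fee: fixed cost per posting transaction;
        # delay_fee: cost per queued batch per step, expressed in the same units as post_fee.
        queue, cost = 0, 0.0
        for arrivals in batch_arrivals:
            queue += arrivals
            cost += queue * delay_fee      # every batch still in the queue accrues delay cost
            if queue >= threshold:         # illustrative threshold policy
                cost += post_fee           # one posting transaction clears the whole queue
                queue = 0
        return cost

    arrivals = [1, 0, 1, 1, 0, 1, 1, 1]
    print(simulate(arrivals, threshold=1, post_fee=5.0, delay_fee=1.0))  # trivial policy: 36.0
    print(simulate(arrivals, threshold=3, post_fee=5.0, delay_fee=1.0))  # batched policy: 23.0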
35223,th,"It is well known that Sperner lemma is equivalent to Brouwer fixed-point
theorem. Tanaka [12] proved that Brouwer theorem is equivalent to Arrow
theorem, hence Arrow theorem is equivalent to Sperner lemma. In this paper we
will prove this result directly. Moreover, we describe a number of other
statements equivalent to Arrow theorem.",The connection between Arrow theorem and Sperner lemma,2022-12-23 13:54:40,Nikita Miku,"http://arxiv.org/abs/2212.12251v1, http://arxiv.org/pdf/2212.12251v1",math.CO
35224,th,"Consider a population of heterogenous agents whose choice behaviors are
partially comparable according to given primitive orderings. The set of choice
functions admissible in the population specifies a choice model. A choice model
is self-progressive if each aggregate choice behavior consistent with the model
is uniquely representable as a probability distribution over admissible choice
functions that are comparable. We establish an equivalence between
self-progressive choice models and well-known algebraic structures called
lattices. This equivalence provides for a precise recipe to restrict or extend
any choice model for unique orderly representation. To illustrate, we
characterize the minimal self-progressive extension of rational choice
functions, explaining why agents might exhibit choice overload. We provide
necessary and sufficient conditions for the identification of a (unique)
primitive ordering that renders our choice overload representation to a choice
model.",Foundations of self-progressive choice models,2022-12-27 14:12:18,Kemal Yildiz,"http://arxiv.org/abs/2212.13449v8, http://arxiv.org/pdf/2212.13449v8",econ.TH
35225,th,"In this paper, we explore the dynamics of two monopoly models with
knowledgeable players. The first model was initially introduced by Naimzada and
Ricchiuti, while the second one is simplified from a famous monopoly introduced
by Puu. We employ several tools based on symbolic computations to analyze the
local stability and bifurcations of the two models. To the best of our
knowledge, the complete stability conditions of the second model are obtained
for the first time. We also investigate periodic solutions as well as their
stability. Most importantly, we discover that the topological structure of the
parameter space of the second model is much more complex than that of the first
one. Specifically, in the first model, the parameter region for the stability
of any periodic orbit with a fixed order constitutes a connected set. In the
second model, however, the stability regions for the 3-cycle, 4-cycle, and
5-cycle orbits are disconnected sets formed by many disjoint portions.
Furthermore, we find that the basins of the two stable equilibria in the second
model are disconnected and also have complicated topological structures. In
addition, the existence of chaos in the sense of Li-Yorke is rigorously proved
by finding snapback repellers and 3-cycle orbits in the two models,
respectively.",Complex dynamics of knowledgeable monopoly models with gradient mechanisms,2023-01-04 12:05:18,"Xiaoliang Li, Jiacheng Fu, Wei Niu","http://arxiv.org/abs/2301.01497v1, http://arxiv.org/pdf/2301.01497v1",econ.TH
35226,th,"In the allocation of indivisible goods, the maximum Nash welfare rule has
recently been characterized as the only rule within the class of additive
welfarist rules that satisfies envy-freeness up to one good. We extend this
characterization to the class of all welfarist rules.",Extending the Characterization of Maximum Nash Welfare,2023-01-10 08:44:07,"Sheung Man Yuen, Warut Suksompong","http://dx.doi.org/10.1016/j.econlet.2023.111030, http://arxiv.org/abs/2301.03798v2, http://arxiv.org/pdf/2301.03798v2",econ.TH
35227,th,"We study mechanisms that select a subset of the vertex set of a directed
graph in order to maximize the minimum indegree of any selected vertex, subject
to an impartiality constraint that the selection of a particular vertex is
independent of the outgoing edges of that vertex. For graphs with maximum
outdegree $d$, we give a mechanism that selects at most $d+1$ vertices and only
selects vertices whose indegree is at least the maximum indegree in the graph
minus one. We then show that this is best possible in the sense that no
impartial mechanism can only select vertices with maximum indegree, even without
any restriction on the number of selected vertices. We finally obtain the
following trade-off between the maximum number of vertices selected and the
minimum indegree of any selected vertex: when selecting at most $k$ vertices
out of $n$, it is possible to only select vertices whose indegree is at least
the maximum indegree minus $\lfloor(n-2)/(k-1)\rfloor+1$.",Optimal Impartial Correspondences,2023-01-11 19:13:15,"Javier Cembrano, Felix Fischer, Max Klimm","http://dx.doi.org/10.1007/978-3-031-22832-2_11, http://arxiv.org/abs/2301.04544v1, http://arxiv.org/pdf/2301.04544v1",cs.GT
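A worked instance of the stated trade-off (numbers chosen for illustration):
\[
n = 20,\; k = 4:\qquad \left\lfloor \frac{n-2}{k-1} \right\rfloor + 1 = \left\lfloor \frac{18}{3} \right\rfloor + 1 = 7,
\]
so with a budget of four selected vertices out of twenty, every selected vertex can be guaranteed an indegree within 7 of the maximum; raising the budget to $k = 10$ tightens this to $\lfloor 18/9 \rfloor + 1 = 3$.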
35228,th,"The course of an epidemic can be drastically altered by changes in human
behavior. Incorporating the dynamics of individual decision-making during an
outbreak represents a key challenge of epidemiology, faced by several modeling
approaches siloed by different disciplines. Here, we propose an epi-economic
model including adaptive, forward-looking behavioral response on a
heterogeneous networked substrate, where individuals tune their social activity
based on future health expectations. Under basic assumptions, we show that it
is possible to derive an analytical expression of the optimal value of the
social activity that matches the traditional assumptions of classic epidemic
models. Through numerical simulations, we contrast the settings of global
awareness -- individuals only know the prevalence of the disease in the
population -- with local awareness, where individuals explicitly know which of
their contacts are infected. We show that behavior change can flatten the
epidemic curve by lowering the peak prevalence, but local awareness is much
more effective in curbing the disease early with respect to global awareness.
Our work bridges classical epidemic modeling with the epi-economic approach,
and sheds light on the effects of heterogeneous behavioral responses in curbing
the epidemic spread.",Modeling adaptive forward-looking behavior in epidemics on networks,2023-01-12 14:36:22,"Lorenzo Amir Nemati Fard, Michele Starnini, Michele Tizzoni","http://arxiv.org/abs/2301.04947v1, http://arxiv.org/pdf/2301.04947v1",physics.soc-ph
35229,th,"We study mechanism design for public-good provision under a noisy
privacy-preserving transformation of individual agents' reported preferences.
The setting is a standard binary model with transfers and quasi-linear utility.
Agents report their preferences for the public good, which are randomly
``flipped,'' so that any individual report may be explained away as the outcome
of noise. We study the tradeoffs between preserving the public decisions made
in the presence of noise (noise sensitivity), pursuing efficiency, and
mitigating the effect of noise on revenue.",Binary Mechanisms under Privacy-Preserving Noise,2023-01-17 18:38:50,"Farzad Pourbabaee, Federico Echenique","http://arxiv.org/abs/2301.06967v2, http://arxiv.org/pdf/2301.06967v2",econ.TH
35230,th,"Advertisers in online ad auctions are increasingly using auto-bidding
mechanisms to bid into auctions instead of directly bidding their value
manually. One prominent auto-bidding format is the target cost-per-acquisition
(tCPA) which maximizes the volume of conversions subject to a
return-of-investment constraint. From an auction theoretic perspective however,
this trend seems to go against foundational results that postulate that for
profit-maximizing bidders, it is optimal to use a classic bidding system like
marginal CPA (mCPA) bidding rather than using strategies like tCPA.
  In this paper we rationalize the adoption of such seemingly sub-optimal
bidding within the canonical quasi-linear framework. The crux of the argument
lies in the notion of commitment. We consider a multi-stage game where first
the auctioneer declares the auction rules; then bidders select either the tCPA
or mCPA bidding format and then, if the auctioneer lacks commitment, it can
revisit the rules of the auction (e.g., may readjust reserve prices depending
on the observed bids). Our main result is that so long as a bidder believes
that the auctioneer lacks commitment to follow the rule of the declared auction
then the bidder will make a higher profit by choosing the tCPA format over the
mCPA format.
  We then explore the commitment consequences for the auctioneer. In a
simplified version of the model where there is only one bidder, we show that
the tCPA subgame admits a credible equilibrium while the mCPA format does not.
That is, when the bidder chooses the tCPA format the auctioneer can credibly
implement the auction rules announced at the beginning of the game. We also
show that, under some mild conditions, the auctioneer's revenue is larger when
the bidder uses the tCPA format rather than mCPA. We further quantify the value
for the auctioneer to be able to commit to the declared auction rules.",Auctions without commitment in the auto-bidding world,2023-01-18 08:23:17,"Aranyak Mehta, Andres Perlroth","http://arxiv.org/abs/2301.07312v2, http://arxiv.org/pdf/2301.07312v2",econ.TH
35231,th,"Adverse selection is a version of the principal-agent problem that includes
monopolist nonlinear pricing, where a monopolist with known costs seeks a
profit-maximizing price menu facing a population of potential consumers whose
preferences are known only in the aggregate. For multidimensional spaces of
agents and products, Rochet and Chon\'e (1998) reformulated this problem to a
concave maximization over the set of convex functions, by assuming agent
preferences combine bilinearity in the product and agent parameters with a
quasilinear sensitivity to prices. We characterize solutions to this problem by
identifying a dual minimization problem. This duality allows us to reduce the
solution of the square example of Rochet-Chon\'e to a novel free boundary
problem, giving the first analytical description of an overlooked market
segment.",A duality and free boundary approach to adverse selection,2023-01-18 20:07:24,"Robert J. McCann, Kelvin Shuangjian Zhang","http://arxiv.org/abs/2301.07660v2, http://arxiv.org/pdf/2301.07660v2",math.OC
35232,th,"In a misspecified social learning setting, agents are condescending if they
perceive their peers as having private information that is of lower quality
than it is in reality. Applying this to a standard sequential model, we show
that outcomes improve when agents are mildly condescending. In contrast, too
much condescension leads to worse outcomes, as does anti-condescension.",The Hazards and Benefits of Condescension in Social Learning,2023-01-26 20:19:42,"Itai Arieli, Yakov Babichenko, Stephan Müller, Farzad Pourbabaee, Omer Tamuz","http://arxiv.org/abs/2301.11237v2, http://arxiv.org/pdf/2301.11237v2",econ.TH
35233,th,"In this paper, we investigate the equilibria and their stability in an
asymmetric duopoly model of Kopel by using several tools based on symbolic
computations. We explore the possible positions of the equilibria in Kopel's
model. We discuss the possibility of the existence of multiple positive
equilibria and establish a necessary and sufficient condition for a given
number of equilibria to exist. Furthermore, if the two duopolists adopt the
best response reactions or homogeneous adaptive expectations, we establish
rigorous conditions for the existence of distinct numbers of positive
equilibria for the first time.",Equilibria and their stability in an asymmetric duopoly model of Kopel,2023-01-30 06:10:53,"Xiaoliang Li, Kongyan Chen","http://arxiv.org/abs/2301.12628v1, http://arxiv.org/pdf/2301.12628v1",econ.TH
35234,th,"Modern blockchains guarantee that submitted transactions will be included
eventually; a property formally known as liveness. But financial activity
requires transactions to be included in a timely manner. Unfortunately,
classical liveness is not strong enough to guarantee this, particularly in the
presence of a motivated adversary who benefits from censoring transactions. We
define censorship resistance as the amount it would cost the adversary to
censor a transaction for a fixed interval of time as a function of the
associated tip. This definition has two advantages, first it captures the fact
that transactions with a higher miner tip can be more costly to censor, and
therefore are more likely to swiftly make their way onto the chain. Second, it
applies to a finite time window, so it can be used to assess whether a
blockchain is capable of hosting financial activity that relies on timely
inclusion.
  We apply this definition in the context of auctions. Auctions are a building
block for many financial applications, and censoring competing bids offers an
easy-to-model motivation for our adversary. Traditional proof-of-stake
blockchains have poor enough censorship resistance that it is difficult to
retain the integrity of an auction when bids can only be submitted in a single
block. As the number of bidders $n$ in a single block auction increases, the
probability that the winner is not the adversary, and the economic efficiency
of the auction, both decrease faster than $1/n$. Running the auction over
multiple blocks, each with a different proposer, alleviates the problem only if
the number of blocks grows faster than the number of bidders. We argue that
blockchains with more than one concurrent proposer can have strong
censorship resistance. We achieve this by setting up a prisoner's dilemma among
the proposers using conditional tips.",Censorship Resistance in On-Chain Auctions,2023-01-31 02:05:01,"Elijah Fox, Mallesh Pai, Max Resnick","http://arxiv.org/abs/2301.13321v2, http://arxiv.org/pdf/2301.13321v2",econ.TH
35235,th,"Over the past few years, more and more Internet advertisers have started
using automated bidding for optimizing their advertising campaigns. Such
advertisers have an optimization goal (e.g. to maximize conversions), and some
constraints (e.g. a budget or an upper bound on average cost per conversion),
and the automated bidding system optimizes their auction bids on their behalf.
Often, these advertisers participate on multiple advertising channels and try
to optimize across these channels. A central question that remains unexplored
is how automated bidding affects optimal auction design in the multi-channel
setting.
  In this paper, we study the problem of setting auction reserve prices in the
multi-channel setting. In particular, we shed light on the revenue implications
of whether each channel optimizes its reserve price locally, or whether the
channels optimize them globally to maximize total revenue. Motivated by
practice, we consider two models: one in which the channels have full freedom
to set reserve prices, and another in which the channels have to respect floor
prices set by the publisher. We show that in the first model, welfare and
revenue loss from local optimization is bounded by a function of the
advertisers' inputs, but is independent of the number of channels and bidders.
In stark contrast, we show that the revenue from local optimization could be
arbitrarily smaller than those from global optimization in the second model.",Multi-Channel Auction Design in the Autobidding World,2023-01-31 07:57:59,"Gagan Aggarwal, Andres Perlroth, Junyao Zhao","http://arxiv.org/abs/2301.13410v1, http://arxiv.org/pdf/2301.13410v1",cs.GT
35236,th,"Auto-bidding has recently become a popular feature in ad auctions. This
feature enables advertisers to simply provide high-level constraints and goals
to an automated agent, which optimizes their auction bids on their behalf. In
this paper, we examine the effect of different auctions on the incentives of
advertisers to report their constraints to the auto-bidder intermediaries. More
precisely, we study whether canonical auctions such as first price auction
(FPA) and second price auction (SPA) are auto-bidding incentive compatible
(AIC): whether an advertiser can gain by misreporting their constraints to the
autobidder.
  We consider value-maximizing advertisers in two important settings: when they
have a budget constraint and when they have a target cost-per-acquisition
constraint. The main result of our work is that for both settings, FPA and SPA
are not AIC. This contrasts with FPA being AIC when auto-bidders are
constrained to bid using a (sub-optimal) uniform bidding policy. We further
extend our main result and show that any (possibly randomized) auction that is
truthful (in the classic profit-maximizing sense), scalar invariant and
symmetric is not AIC. Finally, to complement our findings, we provide
sufficient market conditions for FPA and SPA to become AIC for two advertisers.
These conditions require advertisers' valuations to be well-aligned. This
suggests that when the competition is intense for all queries, advertisers have
less incentive to misreport their constraints.
  From a methodological standpoint, we develop a novel continuous model of
queries. This model provides tractability to study equilibrium with
auto-bidders, which contrasts with the standard discrete query model, which is
known to be hard. Through the analysis of this model, we uncover a surprising
result: in auto-bidding with two advertisers, FPA and SPA are auction
equivalent.",Incentive Compatibility in the Auto-bidding World,2023-01-31 08:08:37,"Yeganeh Alimohammadi, Aranyak Mehta, Andres Perlroth","http://arxiv.org/abs/2301.13414v1, http://arxiv.org/pdf/2301.13414v1",econ.TH
35237,th,"Motivated by applications such as voluntary carbon markets and educational
testing, we consider a market for goods with varying but hidden levels of
quality in the presence of a third-party certifier. The certifier can provide
informative signals about the quality of products, and can charge for this
service. Sellers choose both the quality of the product they produce and a
certification. Prices are then determined in a competitive market. Under a
single-crossing condition, we show that the levels of certification chosen by
producers are uniquely determined at equilibrium. We then show how to reduce a
revenue-maximizing certifier's problem to a monopolistic pricing problem with
non-linear valuations, and design an FPTAS for computing the optimal slate of
certificates and their prices. In general, both the welfare-optimal and
revenue-optimal slate of certificates can be arbitrarily large.",Certification Design for a Competitive Market,2023-01-31 10:04:04,"Andreas A. Haupt, Nicole Immorlica, Brendan Lucier","http://arxiv.org/abs/2301.13449v1, http://arxiv.org/pdf/2301.13449v1",cs.GT
35238,th,"We provide a game-theoretic analysis of the problem of front-running attacks.
We use it to distinguish attacks from legitimate competition among honest users
for having their transactions included earlier in the block. We also use it to
introduce an intuitive notion of the severity of front-running attacks. We then
study a simple commit-reveal protocol and discuss its properties. This protocol
has costs because it requires two messages and imposes a delay. However, we
show that it prevents the most severe front-running attacks while preserving
legitimate competition between users, guaranteeing that the earliest
transaction in a block belongs to the honest user who values it the most. When
the protocol does not fully eliminate attacks, it nonetheless benefits honest
users because it reduces competition among attackers (and overall expenditure
by attackers).",Commitment Against Front Running Attacks,2023-01-31 20:33:09,"Andrea Canidio, Vincent Danos","http://arxiv.org/abs/2301.13785v3, http://arxiv.org/pdf/2301.13785v3",econ.TH
35239,th,"I study mechanism design with blockchain-based tokens, that is, tokens that
can be used within a mechanism but can also be saved and traded outside of the
mechanism. I do so by considering a repeated, private-value auction, in which
the auctioneer accepts payments in a blockchain-based token he creates and
initially owns. I show that the present-discounted value of the expected
revenues is the same as in a standard auction with dollars, but these revenues
accrue earlier and are less variable. The optimal monetary policy involves the
burning of tokens used in the auction, a common feature of many
blockchain-based auctions. I then introduce non-contractible effort and the
possibility of misappropriating revenues. I compare the auction with tokens to
an auction with dollars in which the auctioneer can also issue financial
securities. An auction with tokens is preferred when there are sufficiently
severe contracting frictions, while the opposite is true when contracting
frictions are low.",Auctions with Tokens: Monetary Policy as a Mechanism Design Choice,2023-01-31 20:37:48,Andrea Canidio,"http://arxiv.org/abs/2301.13794v2, http://arxiv.org/pdf/2301.13794v2",econ.TH
35240,th,"We study the robustness of cheap-talk equilibria to infinitesimal private
information of the receiver in a model with a binary state-space and
state-independent sender-preferences. We show that the sender-optimal
equilibrium is robust if and only if this equilibrium either reveals no
information to the receiver or fully reveals one of the states with positive
probability. We then characterize the actions that can be played with positive
probability in any robust equilibrium. Finally, we fully characterize the
optimal sender-utility under binary receiver's private information, and provide
bounds for the optimal sender-utility under general private information.",Informationally Robust Cheap-Talk,2023-02-01 10:22:58,"Itai Arieli, Ronen Gradwohl, Rann Smorodinsky","http://arxiv.org/abs/2302.00281v1, http://arxiv.org/pdf/2302.00281v1",econ.TH
35243,th,"Bolletta (2021, Math. Soc. Sci. 114:1-10) studies a model in which a network
is strategically formed and then agents play a linear best-response investment
game in it. The model is motivated by an application in which people choose
both their study partners and their levels of educational effort. Agents have
different one-dimensional types -- private returns to effort. A
main result claims that pairwise Nash stable networks have a locally complete
structure consisting of possibly overlapping cliques: if two agents are linked,
they are part of a clique composed of all agents with types between theirs. We
offer a counterexample showing that the claimed characterization is incorrect,
highlight where the analysis errs, and discuss implications for network
formation models.",On the Difficulty of Characterizing Network Formation with Endogenous Behavior,2023-02-12 04:19:29,"Benjamin Golub, Yu-Chi Hsieh, Evan Sadler","http://arxiv.org/abs/2302.05831v2, http://arxiv.org/pdf/2302.05831v2",econ.TH
35244,th,"Fair allocation of indivisible goods is a well-explored problem.
Traditionally, research focused on individual fairness - are individual agents
satisfied with their allotted share? - and group fairness - are groups of
agents treated fairly? In this paper, we explore the coexistence of individual
envy-freeness (i-EF) and its group counterpart, group weighted envy-freeness
(g-WEF), in the allocation of indivisible goods. We propose several
polynomial-time algorithms that provably achieve i-EF and g-WEF simultaneously
in various degrees of approximation under three different conditions on the agents' valuation functions: (i) when agents have identical additive valuation functions, i-EFX and
i-WEF1 can be achieved simultaneously; (ii) when agents within a group share a
common valuation function, an allocation satisfying both i-EF1 and g-WEF1
exists; and (iii) when agents' valuations for goods within a group differ, we
show that while maintaining i-EF1, we can achieve a 1/3-approximation to
ex-ante g-WEF1. Our results thus provide a first step towards connecting
individual and group fairness in the allocation of indivisible goods, in hopes
of its useful application to domains requiring the reconciliation of diversity
with individual demands.",For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods,2023-02-14 13:36:18,"Jonathan Scarlett, Nicholas Teh, Yair Zick","http://arxiv.org/abs/2302.06958v1, http://arxiv.org/pdf/2302.06958v1",cs.GT
35245,th,"We study the welfare structure in two-sided large random matching markets. In
the model, each agent has a latent personal score for every agent on the other
side of the market and her preferences follow a logit model based on these
scores. Under a contiguity condition, we provide a tight description of stable
outcomes. First, we identify an intrinsic fitness for each agent that
represents her relative competitiveness in the market, independent of the
realized stable outcome. The intrinsic fitness values correspond to scaling
coefficients needed to make a latent mutual matrix bi-stochastic, where the
latent scores can be interpreted as a-priori probabilities of a pair being
matched. Second, in every stable (or even approximately stable) matching, the
welfare or the ranks of the agents on each side of the market, when scaled by
their intrinsic fitness, have an approximately exponential empirical
distribution. Moreover, the average welfare of agents on one side of the market
is sufficient to determine the average on the other side. Overall, each agent's
welfare is determined by a global parameter, her intrinsic fitness, and an
extrinsic factor with exponential distribution across the population.",Welfare Distribution in Two-sided Random Matching Markets,2023-02-17 00:51:09,"Itai Ashlagi, Mark Braverman, Geng Zhao","http://arxiv.org/abs/2302.08599v1, http://arxiv.org/pdf/2302.08599v1",econ.TH
35246,th,"People choose their strategies through a trial-and-error learning process in
which they gradually discover that some strategies work better than others. The
process can be modelled as an evolutionary game dynamics system, which may be
controllable. In modern control theory, eigenvalue (pole) assignment is a basic
approach to designing a full-state feedback controller, which can influence the
outcome of a game. This study shows that, in a game with two Nash equilibria,
the long-run strategy distribution can be controlled by pole assignment. We
illustrate a theoretical workflow to design and evaluate the controller. To our
knowledge, this is the first realisation of the control of equilibrium
selection by design in the game dynamics theory paradigm. We hope the
controller can be verified in a laboratory human subject game experiment.",Nash equilibrium selection by eigenvalue control,2023-02-17 23:46:32,Wang Zhijian,"http://arxiv.org/abs/2302.09131v1, http://arxiv.org/pdf/2302.09131v1",econ.TH
35247,th,"We study the distribution of multiple homogeneous items to multiple agents
with unit demand. Monetary transfer is not allowed and the allocation of the
items can only depend on the informative signals that are manipulable by costly
and wasteful efforts. Examples of such scenarios include grant allocation,
college admission, lobbying and affordable housing. We show that the
welfare-maximizing mechanism takes the form of a contest and characterize it.
We apply our characterizations to study large contests. When the number of
agents is large relative to the number of items, the format of the optimal contest converges to winner-takes-all, but the principal's payoff does not. When both the number of items and the number of agents are large, the optimal contest randomizes the allocation to middle types so that they exert no effort, which weakly decreases effort for all higher types.",Contests as Optimal Mechanisms under Signal Manipulation,2023-02-18 01:46:30,"Yingkai Li, Xiaoyun Qiu","http://arxiv.org/abs/2302.09168v1, http://arxiv.org/pdf/2302.09168v1",econ.TH
35248,th,"The collapse process is a constitutional sub-process in the full finding Nash
equilibrium process. We conducted laboratory game experiments with human
subjects to study this process. We observed significant pulse signals in the
collapse process. The observations from the data support the completeness and
the consistency of the game dynamics paradigm.",Pulse in collapse: a game dynamics experiment,2023-02-18 17:04:53,"Wang Yijia, Wang Zhijian","http://arxiv.org/abs/2302.09336v1, http://arxiv.org/pdf/2302.09336v1",econ.TH
35258,th,"We study the problem of an agent continuously faced with the decision of
placing or not placing trust in an institution. The agent makes use of Bayesian
learning in order to estimate the institution's true trustworthiness and makes
the decision to place trust based on myopic rationality. Using elements from
random walk theory, we explicitly derive the probability that such an agent
ceases placing trust at some point in the relationship, as well as the expected time spent placing trust, conditional on eventually ceasing to do so.
  We then continue by modelling two truster agents, each in their own
relationship to the institution. We consider two natural models of
communication between them. In the first (``observable rewards'') agents
disclose their experiences with the institution with one another, while in the
second (``observable actions'') agents merely witness the actions of their
neighbour, i.e., placing or not placing trust. Under the same assumptions as in
the single agent case, we describe the evolution of the beliefs of agents under
these two different communication models. Both the probability of ceasing to
place trust and the expected time in the system elude explicit expressions,
despite there being only two agents. We therefore conduct a simulation study in
order to compare the effect of the different kinds of communication on the
trust dynamics.
  We find that a pair of agents in both communication models has a greater
chance of learning the true trustworthiness of an institution than a single
agent. Communication between agents promotes the formation of long term trust
with a trustworthy institution as well as the timely exit from a trust
relationship with an untrustworthy institution. Contrary to what one might
expect, we find that having less information (observing each other's actions
instead of experiences) can sometimes be beneficial to the agents.",Trusting: Alone and together,2023-03-03 16:46:04,"Benedikt V. Meylahn, Arnoud V. den Boer, Michel Mandjes","http://arxiv.org/abs/2303.01921v1, http://arxiv.org/pdf/2303.01921v1",physics.soc-ph
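As a concrete illustration of the single-agent case described above, the sketch below instantiates the Bayesian learner with a Beta-Bernoulli model and a myopic trust threshold; the prior, the threshold, and the simulation parameters are assumptions made for illustration, not the paper's specification.
```python
import random

def trust_episode(p_true, a=1.0, b=1.0, threshold=0.5, horizon=10_000, seed=None):
    """One myopic Bayesian truster with a Beta(a, b) prior over the institution's
    trustworthiness. Each period: place trust iff the posterior mean is at least
    the threshold; if trust is placed, observe honour (1) or betrayal (0) and
    update. Once the agent stops trusting it learns nothing more, so stopping
    is absorbing. Returns the period at which trust ceases, or None."""
    rng = random.Random(seed)
    for t in range(horizon):
        if a / (a + b) < threshold:        # myopic rule: cease placing trust
            return t
        if rng.random() < p_true:          # institution honours the trust
            a += 1
        else:                              # institution betrays
            b += 1
    return None

stops = [trust_episode(p_true=0.7, seed=s) for s in range(2000)]
print("share of runs that eventually cease trusting:",
      sum(s is not None for s in stops) / len(stops))
```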
35249,th,"Machine learning algorithms are increasingly employed to price or value homes
for sale, properties for rent, rides for hire, and various other goods and
services. Machine learning-based prices are typically generated by complex
algorithms trained on historical sales data. However, displaying these prices
to consumers anchors the realized sales prices, which will in turn become
training samples for future iterations of the algorithms. The economic
implications of this machine learning ""feedback loop"" - an indirect
human-algorithm interaction - remain relatively unexplored. In this work, we
develop an analytical model of machine learning feedback loops in the context
of the housing market. We show that feedback loops lead machine learning
algorithms to become overconfident in their own accuracy (by underestimating their error) and lead home sellers to over-rely on possibly erroneous algorithmic prices. As a consequence, at the feedback loop equilibrium, sale prices can become entirely erratic (relative to true consumer preferences in the absence of ML price interference). We then identify conditions (choice of ML models, seller characteristics, and market characteristics) under which the economic payoffs for home sellers at the feedback loop equilibrium are worse than with no
machine learning. We also empirically validate primitive building blocks of our
analytical model using housing market data from Zillow. We conclude by
prescribing algorithmic corrective strategies to mitigate the effects of
machine learning feedback loops, discuss the incentives for platforms to adopt
these strategies, and discuss the role of policymakers in regulating the same.",Does Machine Learning Amplify Pricing Errors in the Housing Market? -- The Economics of Machine Learning Feedback Loops,2023-02-19 02:20:57,"Nikhil Malik, Emaad Manzoor","http://arxiv.org/abs/2302.09438v1, http://arxiv.org/pdf/2302.09438v1",econ.TH
35250,th,"In a model of interconnected conflicts on a network, we compare the
equilibrium effort profiles and payoffs under two scenarios: uniform effort
(UE) in which each contestant is restricted to exert the same effort across all
the battles she participates, and discriminatory effort (DE) in which such a
restriction is lifted. When the contest technology in each battle is of Tullock
form, a surprising neutrality result holds within the class of semi-symmetric
conflict network structures: both the aggregate actions and equilibrium payoffs
under two regimes are the same. We also show that, in some sense, the Tullock
form is necessary for such a neutrality result. Moving beyond the Tullock
family, we further demonstrate how the curvature of contest technology shapes
the welfare and effort effects.",Effort Discrimination and Curvature of Contest Technology in Conflict Networks,2023-02-20 12:49:13,"Xiang Sun, Jin Xu, Junjie Zhou","http://arxiv.org/abs/2302.09861v2, http://arxiv.org/pdf/2302.09861v2",econ.TH
35251,th,"Throughout the food supply chain, between production, transportation,
packaging, and green employment, a plethora of indicators cover the
environmental footprint and resource use. By defining and tracking the more
inefficient practices of the food supply chain and their effects, we can better
understand how to improve agricultural performance, track nutrition values, and
focus on the reduction of a major risk to the environment while contributing to
food security. Our aim is to propose a framework for a food supply chain,
devoted to the vision of zero waste and zero emissions, and at the same time,
fulfilling the broad commitment on inclusive green economy within the climate
action. To set the groundwork for a smart city solution which achieves this
vision, main indicators and evaluation frameworks are introduced, followed by
the drill down into most crucial problems, both globally and locally, in a case
study in north Italy. Methane is on the rise in the climate agenda, and
specifically in Italy emission mitigation is difficult to achieve in the
farming sector. Accordingly, going from the generic frameworks towards a
federation deployment, we provide the reasoning for a cost-effective use case
in the domain of food, to create a valuable digital twin. A Bayesian approach
to assess use cases and select preferred scenarios is proposed, realizing the
potential of the digital twin flexibility with FAIR data, while understanding
and acting to achieve environmental and social goals, i.e., coping with uncertainties, and combining green employment and food security. The proposed
framework can be adjusted to organizational, financial, and political
considerations in different locations worldwide, rethinking the value of
information in the context of FAIR data in digital twins.",Goal oriented indicators for food systems based on FAIR data,2023-02-20 14:20:44,Ronit Purian,"http://arxiv.org/abs/2302.09916v1, http://arxiv.org/pdf/2302.09916v1",cs.CY
35252,th,"Approval-based committee (ABC) voting rules elect a fixed size subset of the
candidates, a so-called committee, based on the voters' approval ballots over
the candidates. While these rules have recently attracted significant
attention, axiomatic characterizations are largely missing so far. We address
this problem by characterizing ABC voting rules within the broad and intuitive
class of sequential valuation rules. These rules compute the winning committees
by sequentially adding candidates that increase the score of the chosen
committee the most. In more detail, we first characterize almost the full class
of sequential valuation rules based on mild standard conditions and a new axiom
called consistent committee monotonicity. This axiom postulates that the
winning committees of size k can be derived from those of size k-1 by only
adding candidates and that these new candidates are chosen consistently. By
requiring additional conditions, we derive from this result also a
characterization of the prominent class of sequential Thiele rules. Finally, we
refine our results to characterize three well-known ABC voting rules, namely
sequential approval voting, sequential proportional approval voting, and
sequential Chamberlin-Courant approval voting.",Characterizations of Sequential Valuation Rules,2023-02-23 12:53:21,"Chris Dong, Patrick Lederer","http://arxiv.org/abs/2302.11890v1, http://arxiv.org/pdf/2302.11890v1",cs.GT
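The abstract characterizes sequential valuation rules as greedily adding the candidate that most increases the committee's score. The sketch below shows the special case of sequential Thiele rules under that greedy reading; the example ballots, the tie-breaking by iteration order, and the particular weight functions are illustrative assumptions.
```python
def seq_thiele(approvals, k, weight=lambda j: 1 / j):
    """Sequential Thiele rule: start from the empty committee and repeatedly add
    the candidate with the largest marginal score. weight(j) = 1/j gives
    sequential proportional approval voting; weight(j) = 1 gives sequential
    approval voting."""
    candidates = set().union(*approvals)
    committee = []
    for _ in range(k):
        def marginal(c):
            # a voter approving c gains weight(j) where j is the voter's new
            # number of approved committee members
            return sum(weight(sum(x in ballot for x in committee) + 1)
                       for ballot in approvals if c in ballot)
        best = max(candidates - set(committee), key=marginal)
        committee.append(best)
    return committee

ballots = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"c", "d"}]
print(seq_thiele(ballots, k=2))   # ['a', 'c'] under sequential PAV
```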
35259,th,"The best-of-both-worlds paradigm advocates an approach that achieves
desirable properties both ex-ante and ex-post. We launch a best-of-both-worlds
fairness perspective for the important social choice setting of approval-based
committee voting. To this end, we initiate work on ex-ante proportional
representation properties in this domain and formalize a hierarchy of notions
including Individual Fair Share (IFS), Unanimous Fair Share (UFS), Group Fair
Share (GFS), and their stronger variants. We establish their compatibility with
well-studied ex-post concepts such as extended justified representation (EJR)
and fully justified representation (FJR). Our first main result is a
polynomial-time algorithm that simultaneously satisfies ex-post EJR, ex-ante
GFS and ex-ante Strong UFS. Subsequently, we strengthen our ex-post guarantee
to FJR and present an algorithm that outputs a lottery which is ex-post FJR and
ex-ante Strong UFS, but does not run in polynomial time.",Best-of-Both-Worlds Fairness in Committee Voting,2023-03-07 07:19:47,"Haris Aziz, Xinhang Lu, Mashbat Suzuki, Jeremy Vollen, Toby Walsh","http://arxiv.org/abs/2303.03642v3, http://arxiv.org/pdf/2303.03642v3",cs.GT
35253,th,"One of the central economic paradigms in multi-agent systems is that agents
should not be better off by acting dishonestly. In the context of collective
decision-making, this axiom is known as strategyproofness and turns out to be
rather prohibitive, even when allowing for randomization. In particular,
Gibbard's random dictatorship theorem shows that only rather unattractive
social decision schemes (SDSs) satisfy strategyproofness on the full domain of
preferences. In this paper, we obtain more positive results by investigating
strategyproof SDSs on the Condorcet domain, which consists of all preference
profiles that admit a Condorcet winner. In more detail, we show that, if the
number of voters $n$ is odd, every strategyproof and non-imposing SDS on the
Condorcet domain can be represented as a mixture of dictatorial SDSs and the
Condorcet rule (which chooses the Condorcet winner with probability $1$).
Moreover, we prove that the Condorcet domain is a maximal connected domain that
allows for attractive strategyproof SDSs if $n$ is odd as only random
dictatorships are strategyproof and non-imposing on any sufficiently connected
superset of it. We also derive analogous results for even $n$ by slightly
extending the Condorcet domain. Finally, we also characterize the set of
group-strategyproof and non-imposing SDSs on the Condorcet domain and its
supersets. These characterizations strengthen Gibbard's random dictatorship
theorem and establish that the Condorcet domain is essentially a maximal domain
that allows for attractive strategyproof SDSs.",Strategyproof Social Decision Schemes on Super Condorcet Domains,2023-02-23 19:27:16,"Felix Brand, Patrick Lederer, Sascha Tausch","http://arxiv.org/abs/2302.12140v1, http://arxiv.org/pdf/2302.12140v1",cs.GT
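Since the Condorcet domain above consists of all preference profiles that admit a Condorcet winner, a small helper that finds this winner (if any) may clarify the domain restriction; the example profile and the list-of-rankings encoding are illustrative assumptions.
```python
def condorcet_winner(profile):
    """Return the Condorcet winner of a profile of strict rankings (each a list
    of candidates from best to worst), or None if no such candidate exists.
    Profiles with a winner are exactly those in the Condorcet domain."""
    candidates = set(profile[0])
    def beats(a, b):
        # a beats b if a strict majority of voters rank a above b
        return sum(r.index(a) < r.index(b) for r in profile) > len(profile) / 2
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(condorcet_winner(profile))   # 'a' wins every pairwise majority contest
```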
35254,th,"We consider games of incomplete information in which the players' payoffs
depend both on a privately observed type and an unknown but common ""state of
nature"". External to the game, a data provider knows the state of nature and
sells information to the players, thus solving a joint information and
mechanism design problem: deciding which information to sell while eliciting
the players' types and collecting payments. We restrict ourselves to a general
class of symmetric games with quadratic payoffs that includes games of both
strategic substitutes (e.g. Cournot competition) and strategic complements
(e.g. Bertrand competition, Keynesian beauty contest). By the Revelation Principle, the seller's problem reduces to designing a mechanism that truthfully elicits the players' types and sends action recommendations that
constitute a Bayes Correlated Equilibrium of the game. We fully characterize
the class of all such Gaussian mechanisms (where the joint distribution of
actions and private signals is a multivariate normal distribution) as well as
the welfare- and revenue-optimal mechanisms within this class. For games of
strategic complements, the optimal mechanisms maximally correlate the players'
actions, and conversely maximally anticorrelate them for games of strategic
substitutes. In both cases, for sufficiently large uncertainty over the
players' types, the recommendations are deterministic (and linear) conditional
on the state and the type reports, but they are not fully revealing.",Coordination via Selling Information,2023-02-23 21:42:34,"Alessandro Bonatti, Munther Dahleh, Thibaut Horel, Amir Nouripour","http://arxiv.org/abs/2302.12223v1, http://arxiv.org/pdf/2302.12223v1",cs.GT
35255,th,"In this note, I introduce a new framework called n-person games with partial
knowledge, in which players have only limited knowledge about the aspects of
the game -- including actions, outcomes, and other players. For example,
playing an actual game of chess is a game of partial knowledge. To analyze
these games, I introduce a set of new concepts and mechanisms for measuring the
intelligence of players, with a focus on the interplay between human- and
machine-based decision-making. Specifically, I introduce two main concepts:
firstly, the Game Intelligence (GI) mechanism, which quantifies a player's
demonstrated intelligence in a game by considering not only the game's outcome
but also the ""mistakes"" made during the game according to the reference
machine's intelligence. Secondly, I define gaming-proofness, a practical and
computational concept of strategy-proofness. The GI mechanism provides a
practicable way to assess players and can potentially be applied to a wide
range of games, from chess and backgammon to AI systems. To illustrate the GI
mechanism, I apply it to an extensive dataset comprising over a million moves
made by top chess Grandmasters.",Human and Machine Intelligence in n-Person Games with Partial Knowledge,2023-02-27 19:40:08,Mehmet S. Ismail,"http://arxiv.org/abs/2302.13937v3, http://arxiv.org/pdf/2302.13937v3",econ.TH
35256,th,"We develop a versatile new methodology for multidimensional mechanism design
that incorporates side information about agent types with the bicriteria goal
of generating high social welfare and high revenue simultaneously. Side
information can come from a variety of sources -- examples include advice from
a domain expert, predictions from a machine-learning model trained on
historical agent data, or even the mechanism designer's own gut instinct -- and
in practice such sources are abundant. In this paper we adopt a prior-free
perspective that makes no assumptions on the correctness, accuracy, or source
of the side information. First, we design a meta-mechanism that integrates
input side information with an improvement of the classical VCG mechanism. The
welfare, revenue, and incentive properties of our meta-mechanism are
characterized by a number of novel constructions we introduce based on the
notion of a weakest competitor, which is an agent that has the smallest impact
on welfare. We then show that our meta-mechanism -- when carefully instantiated
-- simultaneously achieves strong welfare and revenue guarantees that are
parameterized by errors in the side information. When the side information is
highly informative and accurate, our mechanism achieves welfare and revenue
competitive with the total social surplus, and its performance decays
continuously and gradually as the quality of the side information decreases.
Finally, we apply our meta-mechanism to a setting where each agent's type is
determined by a constant number of parameters. Specifically, agent types lie on
constant-dimensional subspaces (of the potentially high-dimensional ambient
type space) that are known to the mechanism designer. We use our meta-mechanism
to obtain the first known welfare and revenue guarantees in this setting.",Bicriteria Multidimensional Mechanism Design with Side Information,2023-02-28 04:33:14,"Maria-Florina Balcan, Siddharth Prasad, Tuomas Sandholm","http://arxiv.org/abs/2302.14234v3, http://arxiv.org/pdf/2302.14234v3",cs.GT
35260,th,"After a decade of on-demand mobility services that change spatial behaviors
in metropolitan areas, the Shared Autonomous Vehicle (SAV) service is expected
to increase traffic congestion and unequal access to transport services. A
paradigm of scheduled supply that is aware of demand but not on-demand is
proposed, introducing coordination and social and behavioral understanding,
urban cognition and empowerment of agents, into a novel informational
framework. Daily routines and other patterns of spatial behaviors outline a
fundamental demand layer in a supply-oriented paradigm that captures urban
dynamics and spatial-temporal behaviors, mostly in groups. Rather than
real-time requests and instant responses that reward unplanned actions, and
beyond just reservation of travels in timetables, the intention is to capture
mobility flows in scheduled travels throughout the day, considering time of day, places, passengers, etc. Regulating goal-directed behaviors and caring for
service resources and the overall system welfare is proposed to minimize
uncertainty, considering the capacity of mobility interactions to hold value,
i.e., Motility as a Service (MaaS). The principal-agent problem in the smart
city is a problem of collective action among service providers and users that
create expectations based on previous actions and reactions in mutual systems.
Planned behavior that accounts for service coordination is expected to
stabilize excessive rides and traffic load, and to induce a cognitive gain,
thus balancing information load and facilitating cognitive effort.","Spatial, Social and Data Gaps in On-Demand Mobility Services: Towards a Supply-Oriented MaaS",2023-02-20 13:04:41,"Ronit Purian, Daniel Polani","http://arxiv.org/abs/2303.03881v1, http://arxiv.org/pdf/2303.03881v1",cs.CY
35261,th,"Winning the coin toss at the end of a tied soccer game gives a team the right
to choose whether to kick either first or second on all five rounds of penalty
kicks, when each team is allowed one kick per round. There is considerable
evidence that the right to make this choice, which is usually to kick first,
gives a team a significant advantage. To make the outcome of a tied game
fairer, we suggest a rule that handicaps the team that kicks first (A),
requiring it to succeed on one more penalty kick than the team that kicks
second (B). We call this the $m - n$ rule and, more specifically, propose $(m,
n)$ = (5, 4): For A to win, it must successfully kick 5 goals before the end of
the round in which B kicks its 4th; for B to win, it must succeed on 4 penalty
kicks before A succeeds on 5. If both teams reach (5, 4) on the same round --
when they both kick successfully at (4, 3) -- then the game is decided by
round-by-round ""sudden death,"" whereby the winner is the first team to score in
a subsequent round when the other team does not. We show that this rule is fair
in tending to equalize the ability of each team to win a tied game in a penalty
shootout. We also discuss a related rule that precludes the teams from reaching
(5, 4) at the same time, obviating the need for sudden death and extra rounds.",Fairer Shootouts in Soccer: The $m-n$ Rule,2023-02-11 22:20:15,"Steven J. Brams, Mehmet S. Ismail, D. Marc Kilgour","http://arxiv.org/abs/2303.04807v1, http://arxiv.org/pdf/2303.04807v1",econ.TH
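A quick Monte Carlo sketch of one reading of the (m, n) = (5, 4) rule is given below, assuming both teams convert each kick independently with the same probability p and that reaching both targets in the same round triggers round-by-round sudden death; these modelling choices are simplifications for illustration, not the paper's analysis.
```python
import random

def shootout(p=0.75, m=5, n=4, rng=random):
    """One shootout under a simplified reading of the m-n rule: A and B kick
    once per round (A first), each scoring with probability p. A needs m goals,
    B needs n; reaching the targets in the same round triggers sudden death."""
    a = b = 0
    while True:
        a += int(rng.random() < p)
        b += int(rng.random() < p)
        a_done, b_done = a >= m, b >= n
        if a_done and not b_done:
            return "A"
        if b_done and not a_done:
            return "B"
        if a_done and b_done:              # both targets reached this round
            while True:                    # round-by-round sudden death
                sa, sb = rng.random() < p, rng.random() < p
                if sa != sb:
                    return "A" if sa else "B"

wins = [shootout() for _ in range(100_000)]
print("team A win rate:", wins.count("A") / len(wins))
```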
35262,th,"This paper introduces the distributionally robust random utility model
(DRO-RUM), which allows the preference shock (unobserved heterogeneity)
distribution to be misspecified or unknown. We make three contributions using
tools from the literature on robust optimization. First, by exploiting the
notion of distributionally robust social surplus function, we show that the
DRO-RUM endogenously generates a shock distribution that incorporates a
correlation between the utilities of the different alternatives. Second, we
show that the gradient of the distributionally robust social surplus yields the
choice probability vector. This result generalizes the celebrated
Williams-Daly-Zachary theorem to environments where the shock distribution is
unknown. Third, we show how the DRO-RUM allows us to nonparametrically identify
the mean utility vector associated with choice market data. This result extends
the demand inversion approach to environments where the shock distribution is
unknown or misspecified. We carry out several numerical experiments comparing
the performance of the DRO-RUM with the traditional multinomial logit and
probit models.",A Distributionally Robust Random Utility Model,2023-03-10 15:46:34,"David Müller, Emerson Melo, Ruben Schlotter","http://arxiv.org/abs/2303.05888v1, http://arxiv.org/pdf/2303.05888v1",econ.TH
35263,th,"We propose a distributionally robust principal-agent formulation, which
generalizes some common variants of worst-case and Bayesian principal-agent
problems. With this formulation, we first construct a theoretical framework to
certify whether any surjective contract menu is optimal, and quantify its
sub-optimality. The framework and its proofs rely on a novel construction and
comparison of several related optimization problems. Roughly speaking, our
framework transforms the question of contract menu optimality into the study of
duality gaps. We then operationalize the framework to study the optimality of
simple -- affine and linear -- contract menus. By our framework, the optimality
gap of a contract menu is broken down into an adjustability gap and an
information rent. We show that, with strong geometric intuition, these simple
contract menus tend to be close to optimal when the social value of production
is convex in production cost and the conversion ratio is large at the agent's
highest effort level. We also provide succinct and closed-form expressions to
quantify the optimality gap when the social value function is concave. With
this systematic analysis, our results shed light on the technical root of a
higher-level question -- why are there more positive results in the literature
in terms of a simple menu's optimality in a \emph{robust} (worst-case) setting
rather than in a \emph{stochastic} (Bayesian) setting? Our results demonstrate
that the answer relates to two facts: the sum of quasi-concave functions is not
quasi-concave in general; the maximization operator and the expectation
operator do not commute in general. Overall, our study is among the first to
cross-pollinate the contract theory and optimization literature.",Distributionally Robust Principal-Agent Problems and Optimality of Contract Menus,2023-03-14 00:09:41,Peter Zhang,"http://arxiv.org/abs/2303.07468v1, http://arxiv.org/pdf/2303.07468v1",econ.TH
35270,th,"Systems and blockchains often have security vulnerabilities and can be
attacked by adversaries, with potentially significant negative consequences.
Therefore, infrastructure providers increasingly rely on bug bounty programs,
where external individuals probe the system and report any vulnerabilities
(bugs) in exchange for rewards (bounty). We develop a simple contest model of
bug bounty. A group of individuals of arbitrary size is invited to undertake a
costly search for bugs. The individuals differ with regard to their abilities,
which we capture by different costs to achieve a certain probability of finding
bugs if any exist. Costs are private information. We study equilibria of the
contest and characterize the optimal design of bug bounty schemes. In
particular, the designer can vary the size of the group of individuals invited
to search, add a paid expert, insert an artificial bug with some probability,
and pay multiple prizes.",Decentralized Attack Search and the Design of Bug Bounty Schemes,2023-03-31 22:00:30,"Hans Gersbach, Akaki Mamageishvili, Fikri Pitsuwan","http://arxiv.org/abs/2304.00077v2, http://arxiv.org/pdf/2304.00077v2",econ.TH
35264,th,"This paper proves a constructive version of the expected utility theorem. The
word constructive is construed here in two senses - first, as in constructive
mathematics, whereby the logic underlying proofs is intuitionistic. In a second
sense of the word, constructive is taken to mean built up from simpler
components. Preferences are defined over lotteries that vary continuously over
some topological space and are themselves assumed to vary depending upon some
underlying process of measurement, deliberation, or observation. The open sets
of the topology serve as the possible truth values of assertions about
preference and constrain what can be measured, deduced, or observed. Replacing
an open set by an open covering of smaller sets serves as a notion of
refinement of information, and deductions made for these smaller sets can be
combined or collated to yield corresponding deductions over the larger set. The
two notions of constructive turn out to be related because the underlying logic
of the objects within the system is generally intuitionistic. The classical
expected utility theorem is shown to be non-constructive, and a constructive
version is proved for the present setting by narrowing the domain of
preferences. The representation of preferences is via comparisons of continuous
real-valued functions. A version of the classical theorem is obtained by
imposing a restriction on the collection of open sets of the topology, which
has the effect of making the logic classical. The proof here illustrates a
robustness property of constructive results. They can be carried over virtually
unchanged to a variety of mathematical settings. Here the set of variable
lotteries replaces the sets of lotteries from the classical setting, but there
are other possibilities.",Expected Utility from a Constructive Viewpoint,2023-03-14 01:59:28,Kislaya Prasad,"http://arxiv.org/abs/2303.08633v1, http://arxiv.org/pdf/2303.08633v1",econ.TH
35265,th,"Classical law and economics is foundational to the American legal system.
Centered at the University of Chicago, its assumptions, most especially that
humans act both rationally and selfishly, informs the thinking of legislatures,
judges, and government lawyers, and has shaped nearly every aspect of the way
commercial transactions are conducted. But what if the Chicago School, as I
refer to this line of thinking, is wrong? Alternative approaches such as
behavioral law and economics or law and political economy contend that human
decisionmaking is based on emotions or should not be regulated as a social
geometry of bargains. This Article proposes a different and wholly novel reason
that the Chicago School is wrong: a fundamental assumption central to many of
its game theory models has been disproven. This Article shows that a 2012
breakthrough from the world-famous physicist Freeman Dyson shocked the world of
game theory. This Article shows that Chicago School game theorists are wrong on
their own terms because these 2 x 2 games such as the Prisoner's Dilemma,
Chicken, and Snowdrift, ostensibly based on mutual defection and corrective
justice, in fact yield to an insight of pure cooperation. These new game theory
solutions can be scaled to design whole institutions and systems that honor the
pure cooperation insight, holding out the possibility of cracking large scale
social dilemmas like the tragedy of the commons. It demonstrates that, in such
systems, pure cooperation is the best answer in the right environment and in
the long run. It ends by calling for a new legal field to redesign the
structures based on the outdated assumptions of the Chicago School game
theorists.",Physics Breakthrough Disproves Fundamental Assumptions of the Chicago School,2023-03-14 22:48:46,Cortelyou C. Kenney,"http://arxiv.org/abs/2303.09321v1, http://arxiv.org/pdf/2303.09321v1",econ.TH
35266,th,"We study a model of dynamic combinatorial assignment of indivisible objects
without money. We introduce a new solution concept called ``dynamic approximate
competitive equilibrium from equal incomes'' (DACEEI), which stipulates that
markets must approximately clear in almost all time periods. A naive repeated
application of approximate competitive equilibrium from equal incomes (Budish,
2011) does not yield a desirable outcome because the approximation error in
market-clearing compounds quickly over time. We therefore develop a new version
of the static approximate competitive equilibrium from carefully constructed
random budgets which ensures that, in expectation, markets clear exactly. We
then use it to design the ``online combinatorial assignment mechanism'' (OCAM)
which implements a DACEEI with high probability. The OCAM is (i)
group-strategyproof up to one object (ii) envy-free up to one object for almost
all agents (iii) approximately market-clearing in almost all periods with high
probability when the market is large and arrivals are random. Applications
include refugee resettlement, daycare assignment, and airport slot allocation.",Dynamic Combinatorial Assignment,2023-03-24 15:38:17,"Thành Nguyen, Alexander Teytelboym, Shai Vardi","http://arxiv.org/abs/2303.13967v1, http://arxiv.org/pdf/2303.13967v1",econ.TH
35267,th,"In recent years there has been an influx of papers which use graph theoretic
tools to study stochastic choice. Fiorini (2004) serves as a base for this
literature by providing a graphical representation of choice probabilities and
showing that every interior node of this graph must satisfy inflow equals
outflow. We show that this inflow equals outflow property is almost
characteristic of choice probabilities. In doing so, we characterize choice
probabilities through graph theoretic tools. We offer two applications of this
result to random utility. First, we offer a novel axiomatization of random
utility on an incomplete domain. Second, we construct a new hypothesis test for
random utility which offers computational improvements over the hypothesis test
of Kitamura and Stoye (2018).",On Graphical Methods in Stochastic Choice,2023-03-24 22:28:05,Christopher Turansick,"http://arxiv.org/abs/2303.14249v3, http://arxiv.org/pdf/2303.14249v3",econ.TH
35268,th,"We study the problem of fairly allocating indivisible goods to agents with
weights corresponding to their entitlements. Previous work has shown that, when
agents have binary additive valuations, the maximum weighted Nash welfare rule
is resource-, population-, and weight-monotone, satisfies
group-strategyproofness, and can be implemented in polynomial time. We
generalize these results to the class of weighted additive welfarist rules with
concave functions and agents with matroid-rank (also known as binary
submodular) valuations.",Weighted Fair Division with Matroid-Rank Valuations: Monotonicity and Strategyproofness,2023-03-25 15:16:45,"Warut Suksompong, Nicholas Teh","http://dx.doi.org/10.1016/j.mathsocsci.2023.09.004, http://arxiv.org/abs/2303.14454v2, http://arxiv.org/pdf/2303.14454v2",econ.TH
35269,th,"We present a universal concept for the Value of Information (VoI), based on
the works of Claude Shannon's and Ruslan Stratonovich that can take into
account very general preferences of the agents and results in a single number.
As such it is convenient for applications and also has desirable properties for
decision theory and demand analysis. The Shannon/Stratonovich VoI concept is
compared to alternatives and applied in examples. In particular we apply the
concept to a circular spatial structure well known from many economic models
and allow for various economic transport costs.",The Value of Information and Circular Settings,2023-03-28 19:47:07,"Stefan Behringer, Roman V. Belavkin","http://arxiv.org/abs/2303.16126v2, http://arxiv.org/pdf/2303.16126v2",econ.TH
35271,th,"""Egyptian Ratscrew"" (ERS) is a modern American card game enjoyed by millions
of players worldwide. A game of ERS is won by collecting all of the cards in
the deck. Typically this game is won by the player with the fastest reflexes,
since the most common strategy for collecting cards is being the first to slap
the pile in the center whenever legal combinations of cards are placed down.
Most players assume that the dominant strategy is to develop a faster reaction
time than your opponents, and no academic inquiry has been levied against this
assumption. This thesis investigates the hypothesis that a ""risk slapping""
strategist who relies on practical economic decision making will win an
overwhelming majority of games against players who rely on quick reflexes
alone. It is theorized that this can be done by exploiting the ""burn rule,"" a
penalty that is too low-cost to effectively dissuade players from slapping
illegally when it benefits them. Using the Ruby programming language, we
construct an Egyptian Ratscrew simulator from scratch. Our model allows us to
simulate the behavior of 8 strategically unique players within easily
adjustable parameters including simulation type, player count, and burn amount.
We simulate 100k iterations of 67 different ERS games, totaling 6.7 million
games of ERS, and use win percentage data in order to determine which
strategies are dominant under each set of parameters. We then confirm our
hypothesis that risk slapping is a dominant strategy, discover that there is no
strictly dominant approach to risk slapping, and elucidate a deeper
understanding of different ERS mechanics such as the burn rule. Finally, we
assess the implications of our findings and suggest potential improvements to
the rules of the game. We also touch on the real-world applications of our
research and make recommendations for the future of Egyptian Ratscrew modeling.",Egyptian Ratscrew: Discovering Dominant Strategies with Computational Game Theory,2023-03-30 06:35:19,"Justin Diamond, Ben Garcia","http://arxiv.org/abs/2304.01007v1, http://arxiv.org/pdf/2304.01007v1",cs.GT
35272,th,"A principal hires an agent to work on a long-term project that culminates in
a breakthrough or a breakdown. At each time, the agent privately chooses to
work or shirk. Working increases the arrival rate of breakthroughs and
decreases the arrival rate of breakdowns. To motivate the agent to work, the
principal conducts costly inspections. She fires the agent if shirking is
detected. We characterize the principal's optimal inspection policy. Periodic
inspections are optimal if work primarily speeds up breakthroughs. Random
inspections are optimal if work primarily delays breakdowns. Crucially, the
agent's actions determine his risk-attitude over the timing of punishments.",Should the Timing of Inspections be Predictable?,2023-04-04 00:20:25,"Ian Ball, Jan Knoepfle","http://arxiv.org/abs/2304.01385v2, http://arxiv.org/pdf/2304.01385v2",econ.TH
35273,th,"As one of the most fundamental concepts in transportation science, Wardrop
equilibrium (WE) has always had a relatively weak behavioral underpinning. To
strengthen this foundation, one must reckon with bounded rationality in human
decision-making processes, such as the lack of accurate information, limited
computing power, and sub-optimal choices. This retreat from behavioral
perfectionism in the literature, however, was typically accompanied by a
conceptual modification of WE. Here we show that giving up perfect rationality
need not force a departure from WE. On the contrary, WE can be reached with
global stability in a routing game played by boundedly rational travelers. We
achieve this result by developing a day-to-day (DTD) dynamical model that
mimics how travelers gradually adjust their route valuations, hence choice
probabilities, based on past experiences. Our model, called cumulative logit
(CULO), resembles the classical DTD models but makes a crucial change: whereas
the classical models assume routes are valued based on the cost averaged over
historical data, ours values the routes based on the cost accumulated. To
describe route choice behaviors, the CULO model only uses two parameters, one
accounting for the rate at which the future route cost is discounted in the
valuation relative to the past ones and the other describing the sensitivity of
route choice probabilities to valuation differences. We prove that the CULO
model always converges to WE, regardless of the initial point, as long as the
behavioral parameters satisfy certain mild conditions. Our theory thus upholds
WE's role as a benchmark in transportation systems analysis. It also resolves
the theoretical challenge posed by Harsanyi's instability problem by explaining
why equally good routes at WE are selected with different probabilities.",Wardrop Equilibrium Can Be Boundedly Rational: A New Behavioral Theory of Route Choice,2023-04-05 18:17:06,"Jiayang Li, Zhaoran Wang, Yu Marco Nie","http://arxiv.org/abs/2304.02500v1, http://arxiv.org/pdf/2304.02500v1",econ.TH
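A small day-to-day sketch in the spirit of the cumulative valuation idea above is given below, on a two-route network with linear cost functions; the specific cost functions, the 1/t weighting of newly experienced costs, and the parameter values are assumptions for illustration and need not match the paper's exact CULO specification.
```python
import numpy as np

def culo_day_to_day(days=5000, demand=10.0, theta=0.5, w=1.0):
    """Two parallel routes with linear costs c_k(x) = a_k + b_k * x. Each day
    the demand splits by a logit rule on accumulated valuations; newly
    experienced costs are added with weight w/t (an assumed discounting),
    and theta is the sensitivity of choice to valuation differences."""
    a = np.array([10.0, 15.0])
    b = np.array([2.0, 1.0])
    v = np.array([0.0, 10.0])            # accumulated valuations (arbitrary start)
    p = np.exp(-theta * v)
    p /= p.sum()
    c = a + b * demand * p
    for t in range(1, days + 1):
        x = demand * p                    # today's route flows
        c = a + b * x                     # experienced route costs
        v = v + (w / t) * c               # accumulate costs into valuations
        u = -theta * v
        p = np.exp(u - u.max())
        p /= p.sum()
    return p, c

p, c = culo_day_to_day()
print("final split:", p)                  # tends toward the 50/50 Wardrop split here
print("final costs:", c)                  # route costs approximately equalize
```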
35274,th,"Many early order flow auction designs handle the payment for orders when they
execute on the chain rather than when they are won in the auction. Payments in
these auctions only take place when the orders are executed, creating a free
option for whoever wins the order. Bids in these auctions set the strike price
of this option rather than the option premium. This paper develops a simple
model of an order flow auction and compares contingent fees with upfront
payments as well as mixtures of the two. Results suggest that auctions with a
greater share of the payment contingent on execution have lower execution
probability, lower revenue, and increased effective spreads in equilibrium. A
reputation system can act as a negative contingent fee, partially mitigating
the downsides; however, unless the system is calibrated perfectly, some of the
undesirable qualities of the contingent fees remain. Results suggest that
designers of order flow auctions should avoid contingent fees whenever
possible.",Contingent Fees in Order Flow Auctions,2023-04-11 07:54:48,Max Resnick,"http://arxiv.org/abs/2304.04981v1, http://arxiv.org/pdf/2304.04981v1",econ.TH
35275,th,"We ask how the advertising mechanisms of digital platforms impact product
prices. We present a model that integrates three fundamental features of
digital advertising markets: (i) advertisers can reach customers on and
off-platform, (ii) additional data enhances the value of matching advertisers
and consumers, and (iii) bidding follows auction-like mechanisms. We compare
data-augmented auctions, which leverage the platform's data advantage to
improve match quality, with managed campaign mechanisms, where advertisers'
budgets are transformed into personalized matches and prices through
auto-bidding algorithms. In data-augmented second-price auctions, advertisers
increase off-platform product prices to boost their competitiveness
on-platform. This leads to socially efficient allocations on-platform, but
inefficient allocations off-platform due to high product prices. The
platform-optimal mechanism is a sophisticated managed campaign that conditions
on-platform prices for sponsored products on off-platform prices set by all
advertisers. Relative to auctions, the optimal managed campaign raises
off-platform product prices and further reduces consumer surplus.",How Do Digital Advertising Auctions Impact Product Prices?,2023-04-17 19:52:50,"Dirk Bergemann, Alessandro Bonatti, Nicholas Wu","http://arxiv.org/abs/2304.08432v2, http://arxiv.org/pdf/2304.08432v2",econ.TH
35276,th,"Firms compete for clients, creating distributions of market shares ranging
from domination by a few giant companies to markets in which there are many
small firms. These market structures evolve in time, and may remain stable for
many years before a new firm emerges and rapidly obtains a large market share.
We seek the simplest realistic model giving rise to such diverse market
structures and dynamics. We focus on markets in which every client adopts a
single firm, and can, from time to time, switch to a different firm. Examples
include markets of cell phone and Internet service providers, and of consumer
products with strong brand identification. In the model, the size of a
particular firm, labelled $i$, is equal to its current number of clients,
$n_i$. In every step of the simulation, a client is chosen at random, and then
selects a firm from among the full set of firms with probability $p_i =
(n_i^\alpha + \beta)/K$, where $K$ is the normalization factor. Our model thus
has two parameters: $\alpha$ represents the degree to which firm size is an
advantage ($\alpha$ > 1) or disadvantage ($\alpha$ < 1), relative to strict
proportionality to size ($\alpha$ = 1), and $\beta$ represents the degree to
which small firms are viable despite their small size. We postulate that
$\alpha$ and $\beta$ are determined by the regulatory, technology, business
culture and social environments. The model exhibits a phase diagram in the
parameter space, with different regions of behaviour. At the large $\alpha$
extreme of the phase diagram, a single dominant firm emerges, whose market
share depends on the value of $\beta$. At the small $\alpha$ extreme, many
firms with small market shares coexist, and no dominant firm emerges. In the
intermediate region, markets are divided among a relatively small number of
firms, each with sizeable market share but with distinct rankings, which can
persist for long [...]",Simple model of market share dynamics based on clients' firm-switching decisions,2023-04-18 07:46:18,Joseph Hickey,"http://arxiv.org/abs/2304.08727v2, http://arxiv.org/pdf/2304.08727v2",physics.soc-ph
35277,th,"The principle of DEPENDENCY LENGTH MINIMIZATION, which seeks to keep
syntactically related words close in a sentence, is thought to universally
shape the structure of human languages for effective communication. However,
the extent to which dependency length minimization is applied in human language
systems is not yet fully understood. Preverbally, the placement of
long-before-short constituents and postverbally, short-before-long constituents
are known to minimize overall dependency length of a sentence. In this study,
we test the hypothesis that placing only the shortest preverbal constituent
next to the main verb explains word order preferences in Hindi (an SOV language)
as opposed to the global minimization of dependency length. We characterize
this approach as a least-effort strategy because it is a cost-effective way to
shorten all dependencies between the verb and its preverbal dependents. As
such, this approach is consistent with the bounded-rationality perspective
according to which decision making is governed by ""fast but frugal"" heuristics
rather than by a search for optimal solutions. Consistent with this idea, our
results indicate that actual corpus sentences in the Hindi-Urdu Treebank corpus
are better explained by the least effort strategy than by global minimization
of dependency lengths. Additionally, for the task of distinguishing corpus
sentences from counterfactual variants, we find that the dependency length and
constituent length of the constituent closest to the main verb are much better
predictors of whether a sentence appeared in the corpus than total dependency
length. Overall, our findings suggest that cognitive resource constraints play
a crucial role in shaping natural languages.",A bounded rationality account of dependency length minimization in Hindi,2023-04-22 16:53:50,"Sidharth Ranjan, Titus von der Malsburg","http://arxiv.org/abs/2304.11410v1, http://arxiv.org/pdf/2304.11410v1",cs.CL
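To make the contrast above concrete, the sketch below compares total verb-to-head dependency length under exhaustive reordering of preverbal constituents with the least-effort heuristic that only places the shortest constituent next to the verb; the head-final assumption (each constituent's head is its verb-adjacent word), the constituent lengths, and the restriction to verb dependencies are illustrative assumptions.
```python
from itertools import permutations

def verb_dependency_length(order):
    """Total distance from a clause-final verb to the head of each preverbal
    constituent, assuming each constituent is head-final (its head is the word
    closest to the verb). `order` lists constituent lengths from the verb
    outward (index 0 is adjacent to the verb)."""
    total, offset = 0, 0
    for length in order:
        total += offset + 1          # distance from the verb to this head
        offset += length             # words crossed before the next constituent
    return total

constituents = [4, 1, 3, 2]          # hypothetical preverbal constituent lengths

# Global minimization: search every preverbal ordering.
best = min(permutations(constituents), key=verb_dependency_length)

# Least-effort heuristic: only the shortest constituent moves next to the verb.
shortest = min(constituents)
rest = constituents.copy()
rest.remove(shortest)
least_effort = [shortest] + rest

print("global optimum :", best, verb_dependency_length(best))
print("least effort   :", least_effort, verb_dependency_length(least_effort))
```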
35278,th,"We introduce a spatial economic growth model where space is described as a
network of interconnected geographic locations and we study a corresponding
finite-dimensional optimal control problem on a graph with state constraints.
Economic growth models on networks are motivated by the nature of spatial
economic data, which naturally possess a graph-like structure: this fact makes
these models well-suited for numerical implementation and calibration. The
network setting is different from the one adopted in the related literature,
where space is modeled as a subset of a Euclidean space, which gives rise to
infinite dimensional optimal control problems. After introducing the model and
the related control problem, we prove existence and uniqueness of an optimal
control and a regularity result for the value function, which sets up the basis
for a deeper study of the optimal strategies. Then, we focus on specific cases
where it is possible to find, under suitable assumptions, an explicit solution
of the control problem. Finally, we discuss the cases of networks of two and
three geographic locations.",An optimal control problem with state constraints in a spatio-temporal economic growth model on networks,2023-04-23 10:39:44,"Alessandro Calvia, Fausto Gozzi, Marta Leocata, Georgios I. Papayiannis, Anastasios Xepapadeas, Athanasios N. Yannacopoulos","http://arxiv.org/abs/2304.11568v1, http://arxiv.org/pdf/2304.11568v1",math.OC
35279,th,"We adopt the notion of the farsighted stable set to determine which matchings
are stable when agents are farsighted in matching markets with couples. We show
that a singleton matching is a farsighted stable set if and only if the
matching is stable. Thus, matchings that are stable with myopic agents remain
stable when agents become farsighted. Examples of farsighted stable sets
containing multiple non-stable matchings are provided for markets with and
without stable matchings. For couples markets where the farsighted stable set
does not exist, we propose the DEM farsighted stable set to predict the
matchings that are stable when agents are farsighted.",Matching markets with farsighted couples,2023-04-24 20:16:43,"Ata Atay, Sylvain Funck, Ana Mauleon, Vincent Vannetelbosch","http://arxiv.org/abs/2304.12276v2, http://arxiv.org/pdf/2304.12276v2",econ.TH
35280,th,"Adaptation to dynamic conditions requires a certain degree of diversity. If
all agents take the best current action, learning that the underlying state has
changed and behavior should adapt will be slower. Diversity is harder to
maintain when there is fast communication between agents, because they tend to
find out and pursue the best action rapidly. We explore these issues using a
model of (Bayesian) learning over a social network. Agents learn rapidly from
and may also have incentives to coordinate with others to whom they are
connected via strong links. We show, however, that when the underlying
environment changes sufficiently rapidly, any network consisting of just strong
links will do only a little better than random choice in the long run. In
contrast, networks combining strong and weak links, whereby the latter type of
links transmit information only slowly, can achieve much higher long-run
average payoffs. The best social networks are those that combine a large
fraction of agents into a strongly-connected component, while still maintaining
a sufficient number of smaller communities that make diverse choices and
communicate with this component via weak links.","Learning, Diversity and Adaptation in Changing Environments: The Role of Weak Links",2023-04-30 16:17:59,"Daron Acemoglu, Asuman Ozdaglar, Sarath Pattathil","http://arxiv.org/abs/2305.00474v1, http://arxiv.org/pdf/2305.00474v1",cs.SI
35281,th,"This paper motivates and develops a framework for understanding how the
socio-technical systems surrounding AI development interact with social
welfare. It introduces the concept of ``signaling'' from evolutionary game
theory and demonstrates how it can enhance existing theory and practice
surrounding the evaluation and governance of AI systems.",Beneficence Signaling in AI Development Dynamics,2023-05-04 08:27:51,Sarita Rosenstock,"http://arxiv.org/abs/2305.02561v1, http://arxiv.org/pdf/2305.02561v1",cs.CY
35282,th,"We study candidates' positioning when adjustments are possible in response to
new information about voters' preferences. Re-positioning allows candidates to
get closer to the median voter but is costly both financially and electorally.
We examine the occurrence and the direction of the adjustments depending on the
ex-ante positions and the new information. In the unique subgame perfect
equilibrium, candidates anticipate the possibility to adjust in response to
future information and diverge ex-ante in order to secure a cost-less victory
when the new information is favorable.",Strategic flip-flopping in political competition,2023-05-04 16:50:22,"Gaëtan Fournier, Alberto Grillo, Yevgeny Tsodikovich","http://arxiv.org/abs/2305.02834v1, http://arxiv.org/pdf/2305.02834v1",cs.GT
35308,th,"We study the approximate core for edge cover games, which are cooperative
games stemming from edge cover problems. In these games, each player controls a
vertex on a network $G = (V, E; w)$, and the cost of a coalition $S\subseteq V$
is equivalent to the minimum weight of edge covers in the subgraph induced by
$S$. We prove that the 3/4-core of edge cover games is always non-empty and can
be computed in polynomial time by using a linear programming duality approach. This
ratio is the best possible, as it represents the integrality gap of the natural
LP for edge cover problems. Moreover, our analysis reveals that the ratio of the approximate core corresponds to the length of the shortest odd cycle of the underlying graph.",Approximate Core Allocations for Edge Cover Games,2023-08-22 09:27:33,"Tianhang Lu, Han Xian, Qizhi Fang","http://arxiv.org/abs/2308.11222v1, http://arxiv.org/pdf/2308.11222v1",math.CO
35283,th,"We consider a multi-agent delegation mechanism without money. In our model,
given a set of agents, each agent has a fixed number of solutions which is
exogenous to the mechanism, and privately sends a signal, e.g., a subset of
solutions, to the principal. Then, the principal selects a final solution based
on the agents' signals. In stark contrast to the single-agent setting of Kleinberg and Kleinberg (EC'18) with an approximate Bayesian mechanism, we show that there exist efficient approximate prior-independent mechanisms with both information and performance gains, thanks to the competitive tension between the agents. Interestingly, however, the strength of this competitive effect varies significantly with the information available to the agents,
and the degree of correlation between the principal's and the agent's utility.
Technically, we conduct a comprehensive study on the multi-agent delegation
problem and derive several results on the approximation factors of
Bayesian/prior-independent mechanisms in complete/incomplete information
settings. As a special case of independent interest, we obtain comparative
statics regarding the number of agents, which imply the dominance of the
multi-agent setting ($n \ge 2$) over the single-agent setting ($n=1$) in terms
of the principal's utility. We further extend our problem by considering an
examination cost of the mechanism and derive some analogous results in the
complete information setting.",Delegating to Multiple Agents,2023-05-05 02:26:06,"MohammadTaghi Hajiaghayi, Keivan Rezaei, Suho Shin","http://arxiv.org/abs/2305.03203v2, http://arxiv.org/pdf/2305.03203v2",cs.GT
35284,th,"This note shows that in Bauer's maximum principle, the assumed convexity of
the objective function can be relaxed to quasiconvexity.",Bauer's Maximum Principle for Quasiconvex Functions,2023-05-08 20:34:56,Ian Ball,"http://arxiv.org/abs/2305.04893v1, http://arxiv.org/pdf/2305.04893v1",econ.TH
35285,th,"Myerson's regularity condition of a distribution is a standard assumption in
economics. In this paper, we study the complexity of describing a regular
distribution within a small statistical distance. Our main result is that
$\tilde{\Theta}{(\epsilon^{-0.5})}$ bits are necessary and sufficient to
describe a regular distribution with support $[0,1]$ within $\epsilon$ Levy
distance. We prove this by showing that we can learn the regular distribution
approximately with $\tilde{O}(\epsilon^{-0.5})$ queries to the cumulative
distribution function. As a corollary, we show that the pricing query complexity to
learn the class of regular distributions with support $[0,1]$ within $\epsilon$
Levy distance is $\tilde{\Theta}{(\epsilon^{-2.5})}$. To learn the mixture of
two regular distributions, $\tilde{\Theta}(\epsilon^{-3})$ pricing queries are
required.",Description Complexity of Regular Distributions,2023-05-09 19:25:59,"Renato Paes Leme, Balasubramanian Sivan, Yifeng Teng, Pratik Worah","http://arxiv.org/abs/2305.05590v1, http://arxiv.org/pdf/2305.05590v1",cs.GT
35286,th,"We study the robustness of approval-based participatory budgeting (PB) rules
to random noise in the votes. Our contributions are twofold. First, we study
the computational complexity of the #Flip-Bribery problem, where given a PB
instance we ask for the number of ways in which we can flip a given number of
approvals in the votes, so that a specific project is selected. The idea is
that #Flip-Bribery captures the problem of computing the funding probabilities
of projects in case random noise is added. Unfortunately, the problem is
intractable even for the simplest PB rules. Second, we analyze the robustness
of several prominent PB rules (including the basic greedy rule and the Method
of Equal Shares) on real-life instances from Pabulib. Since #Flip-Bribery is
intractable, we resort to sampling to obtain our results. We quantify the
extent to which simple, greedy PB rules are more robust than proportional ones,
and we identify three types of (very) non-robust projects in real-life PB
instances.",Robustness of Participatory Budgeting Outcomes: Complexity and Experiments,2023-05-14 14:08:10,"Niclas Boehmer, Piotr Faliszewski, Łukasz Janeczko, Andrzej Kaczmarczyk","http://arxiv.org/abs/2305.08125v1, http://arxiv.org/pdf/2305.08125v1",cs.GT
35287,th,"Asset price bubbles are situations where asset prices exceed the fundamental
values defined by the present value of dividends. This paper presents a
conceptually new perspective: the necessity of bubbles. We establish the Bubble
Necessity Theorem in a plausible general class of economic models: with faster
long-run economic growth ($G$) than dividend growth ($G_d$) and counterfactual
long-run autarky interest rate ($R$) below dividend growth, all equilibria are
bubbly with non-negligible bubble sizes relative to the economy. This bubble
necessity condition naturally arises in economies with sufficiently strong
savings motives and multiple factors or sectors with uneven productivity
growth.",Bubble Necessity Theorem,2023-05-15 01:23:05,"Tomohiro Hirano, Alexis Akira Toda","http://arxiv.org/abs/2305.08268v4, http://arxiv.org/pdf/2305.08268v4",econ.TH
35288,th,"We give new bounds for the single-nomination model of impartial selection, a
problem proposed by Holzman and Moulin (Econometrica, 2013). A selection
mechanism, which may be randomized, selects one individual from a group of $n$
based on nominations among members of the group; a mechanism is impartial if
the selection of an individual is independent of nominations cast by that
individual, and $\alpha$-optimal if under any circumstance the expected number
of nominations received by the selected individual is at least $\alpha$ times
that received by any individual. In a many-nominations model, where individuals
may cast an arbitrary number of nominations, the so-called permutation
mechanism is $1/2$-optimal, and this is best possible. In the single-nomination
model, where each individual casts exactly one nomination, the permutation
mechanism does better and prior to this work was known to be $67/108$-optimal
but no better than $2/3$-optimal. We show that it is in fact $2/3$-optimal for
all $n$. This result is obtained via tight bounds on the performance of the
mechanism for graphs with maximum degree $\Delta$, for any $\Delta$, which we
prove using an adversarial argument. We then show that the permutation
mechanism is not best possible; indeed, by combining the permutation mechanism,
another mechanism called plurality with runner-up, and some new ideas,
$2105/3147$-optimality can be achieved for all $n$. We finally give new upper
bounds on $\alpha$ for any $\alpha$-optimal impartial mechanism. They improve
on the existing upper bounds for all $n\geq 7$ and imply that no impartial
mechanism can be better than $76/105$-optimal for all $n$; they do not preclude
the existence of a $(3/4-\varepsilon)$-optimal impartial mechanism for
arbitrary $\varepsilon>0$ if $n$ is large.",Improved Bounds for Single-Nomination Impartial Selection,2023-05-17 09:43:22,"Javier Cembrano, Felix Fischer, Max Klimm","http://arxiv.org/abs/2305.09998v1, http://arxiv.org/pdf/2305.09998v1",cs.GT
35289,th,"Charity is typically done either by individual donors, who donate money to
the charities that they support, or by centralized organizations such as
governments or municipalities, which collect the individual contributions and
distribute them among a set of charities. On the one hand, individual charity
respects the will of the donors but may be inefficient due to a lack of
coordination. On the other hand, centralized charity is potentially more
efficient but may ignore the will of individual donors. We present a mechanism
that combines the advantages of both methods by distributing the contribution
of each donor in an efficient way such that no subset of donors has an
incentive to redistribute their donations. Assuming Leontief utilities (i.e.,
each donor is interested in maximizing an individually weighted minimum of all
contributions across the charities), our mechanism is group-strategyproof,
preference-monotonic, contribution-monotonic, maximizes Nash welfare, and can
be computed using convex programming.",Balanced Donor Coordination,2023-05-17 18:22:02,"Felix Brandt, Matthias Greger, Erel Segal-Halevi, Warut Suksompong","http://arxiv.org/abs/2305.10286v1, http://arxiv.org/pdf/2305.10286v1",econ.TH
35290,th,"We study the convergence of best-response dynamics in lottery contests. We
show that best-response dynamics rapidly converges to the (unique) equilibrium
for homogeneous agents but may not converge for non-homogeneous agents, even
for two non-homogeneous agents. For $2$ homogeneous agents, we show convergence
to an $\epsilon$-approximate equilibrium in $\Theta(\log\log(1/\epsilon))$
steps. For $n \ge 3$ agents, the dynamics is not unique because at each step
$n-1 \ge 2$ agents can make non-trivial moves. We consider a model where the
agent making the move is randomly selected at each time step. We show
convergence to an $\epsilon$-approximate equilibrium in $O(\beta
\log(n/(\epsilon\delta)))$ steps with probability $1-\delta$, where $\beta$ is
a parameter of the agent selection process, e.g., $\beta = n$ if agents are
selected uniformly at random at each time step. Our simulations indicate that
this bound is tight.",Best-Response Dynamics in Lottery Contests,2023-05-18 14:21:31,"Abheek Ghosh, Paul W. Goldberg","http://arxiv.org/abs/2305.10881v1, http://arxiv.org/pdf/2305.10881v1",cs.GT
35291,th,"The Walras approach to equilibrium focuses on the existence of market prices
at which the total demands for goods are matched by the total supplies. Trading
activities that might identify such prices by bringing agents together as
potential buyers and sellers of a good are characteristically absent, however.
Moreover, there is no money to pass from one to the other as ordinarily
envisioned in buying and selling. Here a different approach to equilibrium --
what it should mean and how it may be achieved -- is offered as a constructive
alternative.
  Agents operate in an economic environment where adjustments to holdings have
been needed in the past, will be needed again in a changed future, and money is
familiar for its role in facilitating that. Marginal utility provides relative
values of goods for guidance in making incremental adjustments, and with money
incorporated into utility and taken as num\'eraire, those values give money
price thresholds at which an agent will be willing to buy or sell. Agents in
pairs can then look at such individualized thresholds to see whether a trade of
some amount of a good for some amount of money may be mutually advantageous in
leading to higher levels of utility. Iterative bilateral trades in this most
basic sense, if they keep bringing all goods and agents into play, are
guaranteed in the limit to reach an equilibrium state in which the agents all
agree on prices and, under those prices, have no interest in further adjusting
their holdings. The results of computer simulations are provided to illustrate
how this works.",Reaching an equilibrium of prices and holdings of goods through direct buying and selling,2023-05-27 23:32:04,"J. Deride, A. Jofré, R. T. Rockafellar","http://arxiv.org/abs/2305.17577v1, http://arxiv.org/pdf/2305.17577v1",math.OC
35292,th,"In this paper, I prove that existence of pure-strategy Nash equilibrium in
games with infinitely many players is equivalent to the axiom of choice.",Nash Equilibrium and Axiom of Choice Are Equivalent,2023-05-31 23:14:47,Conrad Kosowsky,"http://arxiv.org/abs/2306.01790v1, http://arxiv.org/pdf/2306.01790v1",math.LO
35293,th,"Prediction markets elicit and aggregate beliefs by paying agents based on how
close their predictions are to a verifiable future outcome. However, outcomes
of many important questions are difficult to verify or unverifiable, in that
the ground truth may be hard or impossible to access. Examples include
questions about causal effects where it is infeasible or unethical to run
randomized trials; crowdsourcing and content moderation tasks where it is
prohibitively expensive to verify ground truth; and questions asked over long
time horizons, where the delay until the realization of the outcome skews
agents' incentives to report their true beliefs. We present a novel and
unintuitive result showing that it is possible to run an
$\varepsilon$-incentive compatible prediction market to elicit and efficiently
aggregate information from a pool of agents without observing the outcome by
paying agents the negative cross-entropy between their prediction and that of a
carefully chosen reference agent. Our key insight is that a reference agent
with access to more information can serve as a reasonable proxy for the ground
truth. We use this insight to propose self-resolving prediction markets that
terminate with some probability after every report and pay all but a few agents
based on the final prediction. We show that it is an $\varepsilon$-Perfect
Bayesian Equilibrium for all agents to report truthfully in our mechanism and
to believe that all other agents report truthfully. Although primarily of
interest for unverifiable outcomes, this design is also applicable for
verifiable outcomes.",Self-Resolving Prediction Markets for Unverifiable Outcomes,2023-06-07 13:09:22,"Siddarth Srinivasan, Ezra Karger, Yiling Chen","http://arxiv.org/abs/2306.04305v1, http://arxiv.org/pdf/2306.04305v1",cs.GT
35306,th,"We consider two-player non-zero-sum linear-quadratic Gaussian games in which
both players aim to minimize a quadratic cost function while controlling a
linear and stochastic state process using linear policies. The system is
partially observable with asymmetric information available to the players. In
particular, each player has a private and noisy measurement of the state
process but can see the history of their opponent's actions. The challenge of
this asymmetry is that it introduces correlations into the players' belief
processes for the state and leads to circularity in their beliefs about their
opponent's beliefs. We show that by leveraging the information available through
their opponent's actions, both players can enhance their state estimates and
improve their overall outcomes. In addition, we provide a closed-form solution
for the Bayesian updating rule of their belief process. We show that there is a
Nash equilibrium which is linear in the state estimate and whose
value function incorporates terms that arise due to errors in the state
estimation. We illustrate the results through an application to bargaining
which demonstrates the value of these information corrections.",Linear-quadratic Gaussian Games with Asymmetric Information: Belief Corrections Using the Opponents Actions,2023-07-29 03:07:50,"Ben Hambly, Renyuan Xu, Huining Yang","http://arxiv.org/abs/2307.15842v1, http://arxiv.org/pdf/2307.15842v1",math.OC
35294,th,"A prevalent theme in the economics and computation literature is to identify
natural price-adjustment processes by which sellers and buyers in a market can
discover equilibrium prices. An example of such a process is t\^atonnement, an
auction-like algorithm first proposed in 1874 by French economist Walras in
which sellers adjust prices based on the Marshallian demands of buyers. A dual
concept in consumer theory is a buyer's Hicksian demand. In this paper, we
identify the maximum of the absolute value of the elasticity of the Hicksian
demand as an economic parameter sufficient to capture and explain a range of
convergent and non-convergent t\^atonnement behaviors in a broad class of
markets. In particular, we prove the convergence of t\^atonnement at a rate of
$O((1+\varepsilon^2)/T)$, in homothetic Fisher markets with bounded price
elasticity of Hicksian demand, i.e., Fisher markets in which consumers have
preferences represented by homogeneous utility functions and the price
elasticity of their Hicksian demand is bounded, where $\varepsilon \geq 0$ is
the maximum absolute value of the price elasticity of Hicksian demand across
all buyers. Our result not only generalizes known convergence results for CES
Fisher markets, but extends them to mixed nested CES markets and Fisher markets
with continuous, possibly non-concave, homogeneous utility functions. Our
convergence rate covers the full spectrum of nested CES utilities, including
Leontief and linear utilities, unifying previously existing disparate
convergence and non-convergence results. In particular, for $\varepsilon = 0$,
i.e., Leontief markets, we recover the best-known convergence rate of $O(1/T)$,
and as $\varepsilon \to \infty$, e.g., linear Fisher markets, we obtain
non-convergent behavior, as expected.",Tâtonnement in Homothetic Fisher Markets,2023-06-08 05:38:15,"Denizalp Goktas, Jiayi Zhao, Amy Greenwald","http://arxiv.org/abs/2306.04890v1, http://arxiv.org/pdf/2306.04890v1",cs.GT
35295,th,"The Consumer Financial Protection Bureau defines the notion of payoff amount
as the amount that has to be paid at a particular time in order to completely
pay off the debt, in case the borrower intends to pay off the loan early, well
for loans at compound interest, but much less so when simple interest is used.
  Recently, Aretusi and Mari (2018) have proposed a formula for the payoff
amount for loans at simple interest. We assume that the payoff amounts are
established contractually at time zero, whence the requirement that no
arbitrage may arise this way.
  The first goal of this paper is to study this new formula and derive it
within a model of a loan market in which loans are bought and sold at simple
interest, interest rates change over time, and no arbitrage opportunities
exist.
  The second goal is to show that this formula exhibits a behaviour rather
different from the one which occurs when compound interest is used. Indeed, we
show that, if the installments are constant and if the interest rate is greater
than a critical value (which depends on the number of installments), then the
sequence of the payoff amounts is increasing before a certain critical time,
and will start decreasing only thereafter. We also show that the critical value
is decreasing as a function of the number of installments. For two
installments, the critical value is equal to the golden section.
  The third goal is to introduce a more efficient polynomial notation, which
encodes a basic tenet of the subject: Each amount of money is embedded in a
time position (to wit: The time when it is due). The model of a loan market we
propose is naturally linked to this new notation.",On the Behavior of the Payoff Amounts in Simple Interest Loans in Arbitrage-Free Markets,2023-06-30 11:23:50,"Fausto Di Biase, Stefano Di Rocco, Alessandra Ortolano, Maurizio Parton","http://arxiv.org/abs/2306.17467v1, http://arxiv.org/pdf/2306.17467v1",q-fin.MF
35296,th,"In retrospect, the experimental findings on competitive market behavior
called for a revival of the old, classical, view of competition as a collective
higgling and bargaining process (as opposed to price-taking behaviors) founded
on reservation prices (in place of the utility function). In this paper, we
specialize the classical methodology to deal with speculation, an important
impediment to price stability. The model involves typical features of a field
or lab asset market setup and lends itself to an experimental test of its
specific predictions; here we use the model to explain three general stylized
facts, well established both empirically and experimentally: the excess,
fat-tailed, and clustered volatility of speculative asset prices. The fat tails
emerge in the model from the amplifying nature of speculation, leading to a
random-coefficient autoregressive return process (and power-law tails); the
volatility clustering is due to the traders' long memory of news; bubbles are a
persistent phenomenon in the model, and, assuming the standard lab present
value pattern, the bubble size increases with the proportion of speculators and
decreases with the trading horizon.",A Classical Model of Speculative Asset Price Dynamics,2023-07-01 22:02:17,"Sabiou Inoua, Vernon Smith","http://arxiv.org/abs/2307.00410v1, http://arxiv.org/pdf/2307.00410v1",q-fin.GN
35297,th,"We propose a new type of automated market maker (AMM) in which all trades are
batched and executed at a price equal to the marginal price (i.e., the price of
an arbitrarily small trade) after the batch trades. Our proposed AMM is
function maximizing (or FM-AMM) because, for given prices, it trades to reach
the highest possible value of a given function. Competition between
arbitrageurs guarantees that an FM-AMM always trades at a fair, equilibrium
price, and arbitrage profits (also known as LVR) are eliminated. Sandwich
attacks are also eliminated because all trades occur at the exogenously
determined equilibrium price. We use Binance price data to simulate the lower
bound to the return of providing liquidity to an FM-AMM and show that this
bound is close to the empirical returns of providing liquidity on Uniswap v3
(at least for the token pairs and the period we consider).","Arbitrageurs' profits, LVR, and sandwich attacks: batch trading as an AMM design response",2023-07-05 10:31:27,"Andrea Canidio, Robin Fritsch","http://arxiv.org/abs/2307.02074v3, http://arxiv.org/pdf/2307.02074v3",cs.DC
35298,th,"We give a robust characterization of Nash equilibrium by postulating coherent
behavior across varying games: Nash equilibrium is the only solution concept
that satisfies consequentialism, consistency, and rationality. As a
consequence, every equilibrium refinement violates at least one of these
properties. We moreover show that every solution concept that approximately
satisfies consequentialism, consistency, and rationality returns approximate
Nash equilibria. The latter approximation can be made arbitrarily good by
increasing the approximation of the axioms. This result extends to various
natural subclasses of games such as two-player zero-sum games, potential games,
and graphical games.",A Robust Characterization of Nash Equilibrium,2023-07-06 18:47:40,"Florian Brandl, Felix Brandt","http://arxiv.org/abs/2307.03079v1, http://arxiv.org/pdf/2307.03079v1",econ.TH
35307,th,"In this paper, we introduce set-alternating schemes to construct Condorcet
domains. These schemes lead to domains which are copious, connected, and
peak-pit. The resulting family of domains includes some of Arrow's
single-peaked domains of size $2^{n-1}$, which we prove to be the smallest
possible domains. Still, schemes of this type lead to domains larger than the
domains of Fishburn's alternating scheme. Thanks to the concise form of our
schemes, we can analyze the growth of our fastest-growing domains. We show that
the domain size for sufficiently high $n$ exceeds $2.1973^n$, improving the
previous record $2.1890^n$ from \cite{karpov2023constructing}. To perform this
analysis, a bijection between suborders and Dyck words, counted by the Catalan
numbers, is developed.",Set-alternating schemes: A new class of large Condorcet domains,2023-08-05 11:21:45,"Alexander Karpov, Klas Markström, Søren Riis, Bei Zhou","http://arxiv.org/abs/2308.02817v1, http://arxiv.org/pdf/2308.02817v1",cs.DM
35299,th,"We consider a dynamic Bayesian persuasion setting where a single long-lived
sender persuades a stream of ``short-lived'' agents (receivers) by sharing
information about a payoff-relevant state. The state transitions are Markovian
and the sender seeks to maximize the long-run average reward by committing to a
(possibly history-dependent) signaling mechanism. While most previous studies
of Markov persuasion consider exogenous agent beliefs that are independent of
the chain, we study a more natural variant with endogenous agent beliefs that
depend on the chain's realized history. A key challenge to analyze such
settings is to model the agents' partial knowledge about the history
information. We analyze a Markov persuasion process (MPP) under various
information models that differ in the amount of information the receivers have
about the history of the process. Specifically, we formulate a general
partial-information model where each receiver observes the history with an
$\ell$ period lag. Our technical contributions start with analyzing two
benchmark models, i.e., the full-history information model and the no-history
information model. We establish an ordering of the sender's payoff as a
function of the informativeness of agent's information model (with no-history
as the least informative), and develop efficient algorithms to compute optimal
solutions for these two benchmarks. For general $\ell$, we present the
technical challenges in finding an optimal signaling mechanism, where even
determining the right dependency on the history becomes difficult. To bypass
the difficulties, we use a robustness framework to design a ""simple""
\emph{history-independent} signaling mechanism that approximately achieves
optimal payoff when $\ell$ is reasonably large.",Markov Persuasion Processes with Endogenous Agent Beliefs,2023-07-06 20:58:01,"Krishnamurthy Iyer, Haifeng Xu, You Zu","http://arxiv.org/abs/2307.03181v2, http://arxiv.org/pdf/2307.03181v2",cs.GT
35300,th,"We study a market mechanism that sets edge prices to incentivize strategic
agents to organize trips that efficiently share limited network capacity. This
market allows agents to form groups to share trips, make decisions on departure
times and route choices, and make payments to cover edge prices and other
costs. We develop a new approach to analyze the existence and computation of
market equilibrium, building on theories of combinatorial auctions and dynamic
network flows. Our approach tackles the challenges in market equilibrium
characterization arising from: (a) integer and network constraints on the
dynamic flow of trips in sharing limited edge capacity; (b) heterogeneous and
private preferences of strategic agents. We provide sufficient conditions on
the network topology and agents' preferences that ensure the existence and
polynomial-time computation of market equilibrium. We identify a particular
market equilibrium that achieves maximum utilities for all agents, and is
equivalent to the outcome of the classical Vickrey-Clarke-Groves mechanism.
Finally, we extend our results to general networks with multiple populations
and apply them to compute dynamic tolls for efficient carpooling in the San
Francisco Bay Area.",Market Design for Dynamic Pricing and Pooling in Capacitated Networks,2023-07-08 18:07:03,"Saurabh Amin, Patrick Jaillet, Haripriya Pulyassary, Manxi Wu","http://arxiv.org/abs/2307.03994v2, http://arxiv.org/pdf/2307.03994v2",cs.GT
35302,th,"Hierarchies of conditional beliefs (Battigalli and Siniscalchi 1999) play a
central role for the epistemic analysis of solution concepts in sequential
games. They are practically modelled by type structures, which allow the
analyst to represent the players' hierarchies without specifying an infinite
sequence of conditional beliefs. Here, we study type structures that satisfy a
""richness"" property, called completeness. This property is defined on the type
structure alone, without explicit reference to hierarchies of beliefs or other
type structures. We provide sufficient conditions under which a complete type
structure represents all hierarchies of conditional beliefs. In particular, we
present an extension of the main result in Friedenberg (2010) to type
structures with conditional beliefs.",Complete Conditional Type Structures (Extended Abstract),2023-07-11 10:07:56,Nicodemo De Vito,"http://dx.doi.org/10.4204/EPTCS.379.15, http://arxiv.org/abs/2307.05630v1, http://arxiv.org/pdf/2307.05630v1",cs.GT
35303,th,"We study a game played between advertisers in an online ad platform. The
platform sells ad impressions by first-price auction and provides autobidding
algorithms that optimize bids on each advertiser's behalf, subject to
advertiser constraints such as budgets. Crucially, these constraints are
strategically chosen by the advertisers. The chosen constraints define an
""inner'' budget-pacing game for the autobidders. Advertiser payoffs in the
constraint-choosing ""metagame'' are determined by the equilibrium reached by
the autobidders.
  Advertiser preferences can be more general than what is implied by their
constraints: we assume only that they have weakly decreasing marginal value for
clicks and weakly increasing marginal disutility for spending money.
Nevertheless, we show that at any pure Nash equilibrium of the metagame, the
resulting allocation obtains at least half of the liquid welfare of any
allocation and this bound is tight. We also obtain a 4-approximation for any
mixed Nash equilibrium or Bayes-Nash equilibrium. These results rely on the
power to declare budgets: if advertisers can specify only a (linear) value per
click or an ROI target but not a budget constraint, the approximation factor at
equilibrium can be as bad as linear in the number of advertisers.",Strategic Budget Selection in a Competitive Autobidding World,2023-07-14 17:28:00,"Yiding Feng, Brendan Lucier, Aleksandrs Slivkins","http://arxiv.org/abs/2307.07374v2, http://arxiv.org/pdf/2307.07374v2",cs.GT
35304,th,"We consider a model of Bayesian persuasion with one informed sender and
several uninformed receivers. The sender can affect receivers' beliefs via
private signals, and the sender's objective depends on the combination of
induced beliefs.
  We reduce the persuasion problem to the Monge-Kantorovich problem of optimal
transportation. Using insights from optimal transportation theory, we identify
several classes of multi-receiver problems that admit explicit solutions, get
general structural results, derive a dual representation for the value, and
generalize the celebrated concavification formula for the value to
multi-receiver problems.",Persuasion as Transportation,2023-07-15 04:02:18,"Itai Arieli, Yakov Babichenko, Fedor Sandomirskiy","http://arxiv.org/abs/2307.07672v1, http://arxiv.org/pdf/2307.07672v1",econ.TH
35309,th,"We investigate the allocation of children to childcare facilities and propose
solutions to overcome limitations in the current allocation mechanism. We
introduce a natural preference domain and a priority structure that address
these limitations, aiming to enhance the allocation process. To achieve this, we
present an adaptation of the Deferred Acceptance mechanism to our problem,
which ensures strategy-proofness within our preference domain and yields the
student-optimal stable matching. Finally, we provide a maximal domain for the
existence of stable matchings using the properties that define our natural
preference domain. Our results have practical implications for allocating
indivisible bundles with complementarities.",Complementarities in childcare allocation under priorities,2023-08-28 19:26:45,"Ata Atay, Antonio Romero-Medina","http://arxiv.org/abs/2308.14689v1, http://arxiv.org/pdf/2308.14689v1",econ.TH
35311,th,"A number of citation indices have been proposed for measuring and ranking the
research publication records of scholars. Some of the best known indices, such
as those proposed by Hirsch and Woeginger, are designed to reward most highly
those records that strike some balance between productivity (number of papers
published), and impact (frequency with which those papers are cited). A large
number of rarely cited publications will not score well, nor will a very small
number of heavily cited papers. We discuss three new citation indices, one of
which was independently proposed in \cite{FHLB}. Each rests on the notion of
scale invariance, fundamental to John Nash's solution of the two-person
bargaining problem. Our main focus is on one of these -- a scale invariant
version of the Hirsch index. We argue that it has advantages over the original;
it produces fairer rankings within subdisciplines, is more decisive
(discriminates more finely, yielding fewer ties) and more dynamic (growing over
time via more frequent, smaller increments), and exhibits enhanced centrality
and tail balancedness. Simulations suggest that scale invariance improves
robustness under Poisson noise, with increased decisiveness having no cost in
terms of the number of ``accidental"" reversals, wherein random irregularities
cause researcher A to receive a lower index value than B, although A's
productivity and impact are both slightly higher than B's. Moreover, we provide
an axiomatic characterization of the scale invariant Hirsch index, via axioms
that bear a close relationship, in discrete analogue, to those used by Nash in
\cite{Nas50}. This argues for the mathematical naturality of the new index.
  An earlier version was presented at the 5th World Congress of the Game Theory
Society, Maastricht, Netherlands in 2016.",Nash's bargaining problem and the scale-invariant Hirsch citation index,2023-09-03 17:40:02,"Josep Freixas, Roger Hoerl, William S. Zwicker","http://arxiv.org/abs/2309.01192v1, http://arxiv.org/pdf/2309.01192v1",math.CO
35312,th,"We give a new proof of the Gibbard-Satterthwaite Theorem. We construct two
topological spaces: one for the space of preference profiles and another for
the space of outcomes. We show that social choice functions induce continuous
mappings between the two spaces. By studying the properties of this mapping, we
prove the theorem.",A Topological Proof of The Gibbard-Satterthwaite Theorem,2023-09-06 19:00:28,"Yuliy Baryshnikov, Joseph Root","http://arxiv.org/abs/2309.03123v1, http://arxiv.org/pdf/2309.03123v1",econ.TH
35313,th,"Experimental results on market behavior establish a lower stability and
efficiency of markets for durable re-tradable assets compared to markets for
non-durable, or perishable, goods. In this chapter, we revisit this known but
underappreciated dichotomy of goods in the light of our theory of competitive
market price formation, and we emphasize the fundamental nature of the concept
of asset re-tradability in neoclassical finance through a simple reformulation
of the famous no-trade and no-arbitrage theorems.",Perishable Goods versus Re-tradable Assets: A Theoretical Reappraisal of a Fundamental Dichotomy,2023-09-07 04:31:04,"Sabiou Inoua, Vernon Smith","http://arxiv.org/abs/2309.03432v1, http://arxiv.org/pdf/2309.03432v1",q-fin.GN
35314,th,"Rising inequality is a critical concern for societies worldwide, to the
extent that emerging high-growth economies such as China have identified common
prosperity as a central goal. However, the mechanisms by which digital
disruptions contribute to inequality, and the efficacy of existing remedies,
such as taxation, must be better understood. This is particularly true for the
implications of the complex process of technological adoption, which requires
extensive social validation beyond weak ties, and for how to trigger it in the
hyperconnected world of the 21st century.
  This study aims to shed light on the implications of the market evolutionary
mechanism through the lens of technological adoption as a social process. Our
findings underscore the pivotal importance of connectivity in this process
while also revealing the limited effectiveness of taxation as a counterbalance
for inequality. Our research reveals that widespread cultural change is not a
prerequisite for technological disruption. The injection of a small cohort of
entrepreneurs - a few misfits - can expedite technology adoption even in
conservative, moderately connected societies and change the world.",A few misfits can Change the World,2023-09-07 10:35:02,"Esteve Almirall, Steve Willmott, Ulises Cortés","http://arxiv.org/abs/2309.03532v3, http://arxiv.org/pdf/2309.03532v3",cs.SI
35315,th,"Global environmental change is pushing many socio-environmental systems
towards critical thresholds, where ecological systems' states are on the
precipice of tipping points and interventions are needed to navigate or avert
impending transitions. Flickering, where a system vacillates between
alternative stable states, is touted as a useful early warning signal of
irreversible transitions to undesirable ecological regimes. However, while
flickering may presage an ecological tipping point, these dynamics also pose
unique challenges for human adaptation. In this work, we link an ecological
model that can exhibit flickering to a model of human adaptation to a changing
environment. This allows us to explore the impact of flickering on the utility
of adaptive agents in a coupled socio-environmental system. We highlight the
conditions under which flickering causes wellbeing to decline
disproportionately, and explore how these dynamics impact the optimal timing of
a transformational change that partially decouples wellbeing from environmental
variability. The implications of flickering on nomadic communities in Mongolia,
artisanal fisheries, and wildfire systems are explored as possible case
studies. Flickering, driven in part by climate change and changes to governance
systems, may already be impacting communities. We argue that governance
interventions investing in adaptive capacity could blunt the negative impact of
flickering that can occur as socio-environmental systems pass through tipping
points, and therefore contribute to the sustainability of these systems.",Maintaining human wellbeing as socio-environmental systems undergo regime shifts,2023-09-08 23:14:13,"Andrew R. Tilman, Elisabeth H. Krueger, Lisa C. McManus, James R. Watson","http://arxiv.org/abs/2309.04578v1, http://arxiv.org/pdf/2309.04578v1",econ.TH
35328,th,"We provide a new foundation of risk aversion by showing that the propension
to exploit insurance opportunities fully describes this attitude. Our
foundation, which applies to any probabilistically sophisticated preference,
well accords with the commonly held prudential interpretation of risk aversion
that dates back to the seminal works of Arrow (1963) and Pratt (1964).
  In our main results, we first characterize the Arrow-Pratt risk aversion in
terms of propensity to full insurance and the stronger notion of risk aversion
of Rothschild and Stiglitz (1970) in terms of propensity to partial insurance.
We then extend the analysis to comparative risk aversion by showing that the
notion of Yaari (1969) corresponds to comparative propension to full insurance,
while the stronger notion of Ross (1981) corresponds to comparative propension
to partial insurance.",Risk Aversion and Insurance Propensity,2023-10-13 18:09:09,"Fabio Maccheroni, Massimo Marinacci, Ruodu Wang, Qinyu Wu","http://arxiv.org/abs/2310.09173v1, http://arxiv.org/pdf/2310.09173v1",econ.TH
35316,th,"Financial volatility obeys two fascinating empirical regularities that apply
to various assets, on various markets, and on various time scales: it is
fat-tailed (more precisely power-law distributed) and it tends to be clustered
in time. Many interesting models have been proposed to account for these
regularities, notably agent-based models, which mimic the two empirical laws
through a complex mix of nonlinear mechanisms such as traders' switching
between trading strategies in a highly nonlinear way. This paper explains the two
regularities simply in terms of traders' attitudes towards news, an explanation
that follows almost by definition of the traditional dichotomy of financial
market participants, investors versus speculators, whose behaviors are reduced
to their simplest forms. Long-run investors' valuations of an asset are assumed
to follow a news-driven random walk, thus capturing the investors' persistent,
long memory of fundamental news. Short-term speculators' anticipated returns,
on the other hand, are assumed to follow a news-driven autoregressive process,
capturing their shorter memory of fundamental news, and, by the same token, the
feedback intrinsic to the short-sighted, trend-following (or herding) mindset
of speculators. These simple, linear, models of traders' expectations, it is
shown, explain the two financial regularities in a generic and robust way.
Rational expectations, the dominant model of traders' expectations, is not
assumed here, owing to the famous no-speculation, no-trade results.",News-driven Expectations and Volatility Clustering,2023-09-10 00:05:07,Sabiou Inoua,"http://dx.doi.org/10.3390/jrfm13010017, http://arxiv.org/abs/2309.04876v1, http://arxiv.org/pdf/2309.04876v1",q-fin.GN
35317,th,"The standard vector autoregressive (VAR) models suffer from
overparameterization which is a serious issue for high-dimensional time series
data as it restricts the number of variables and lags that can be incorporated
into the model. Several statistical methods, such as the reduced-rank model for
multivariate (multiple) time series (Velu et al., 1986; Reinsel and Velu, 1998;
Reinsel et al., 2022) and the Envelope VAR model (Wang and Ding, 2018), provide
solutions for achieving dimension reduction of the parameter space of the VAR
model. However, these methods can be inefficient in extracting relevant
information from complex data, as they fail to distinguish between relevant and
irrelevant information, or they are inefficient in addressing the rank
deficiency problem. We incorporate the idea of envelope models into the
reduced-rank VAR model to simultaneously tackle these challenges, and propose a
new parsimonious version of the classical VAR model called the reduced-rank
envelope VAR (REVAR) model. Our proposed REVAR model incorporates the strengths
of both reduced-rank VAR and envelope VAR models and leads to significant gains
in efficiency and accuracy. The asymptotic properties of the proposed
estimators are established under different error assumptions. Simulation
studies and real data analysis are conducted to evaluate and illustrate the
proposed method.",Reduced-rank Envelope Vector Autoregressive Models,2023-09-22 17:38:52,"S. Yaser Samadi, Wiranthe B. Herath","http://dx.doi.org/10.1080/07350015.2023.2260862, http://arxiv.org/abs/2309.12902v1, http://arxiv.org/pdf/2309.12902v1",stat.ME
35318,th,"Reserve systems are used to accommodate multiple essential or
underrepresented groups in allocating indivisible scarce resources by creating
categories that prioritize their respective beneficiaries. Some applications
include the optimal allocation of vaccines, or assignment of minority students
to elite colleges in India. An allocation is called smart if it optimizes the
number of units distributed. Previous literature mostly assumes baseline
priorities, which impose significant interdependencies between the priority
orderings of different categories. It also assumes that either everybody is eligible
to receive a unit from any category, or only the beneficiaries are eligible.
The comprehensive Threshold Model we propose allows independent priority
orderings among categories and arbitrary beneficiary and eligibility
thresholds, enabling policymakers to avoid comparing incomparables in
affirmative action systems. We present a new smart reserve system that
optimizes two objectives simultaneously to allocate scarce resources. Our Smart
Pipeline Matching Mechanism achieves all desirable properties in the most
general domain possible. Our results apply to any resource allocation market,
but we focus our attention on the vaccine allocation problem.",Reserve Matching with Thresholds,2023-09-25 01:25:16,Suat Evren,"http://arxiv.org/abs/2309.13766v2, http://arxiv.org/pdf/2309.13766v2",econ.TH
35319,th,"Despite having the same basic prophet inequality setup and model of loss
aversion, conclusions in our multi-dimensional model differ considerably from
the one-dimensional model of Kleinberg et al. For example, Kleinberg et al.
give a tight closed-form expression for the competitive ratio that an online
decision-maker can achieve as a function of $\lambda$, for any $\lambda \geq
0$. In our multi-dimensional model, there is a sharp phase transition: if $k$
denotes the number of dimensions, then when $\lambda \cdot (k-1) \geq 1$, no
non-trivial competitive ratio is possible. On the other hand, when $\lambda
\cdot (k-1) < 1$, we give a tight bound on the achievable competitive ratio
(similar to Kleinberg et al.). As another example, Kleinberg et al. uncover an
exponential improvement in their competitive ratio for the random-order vs.
worst-case prophet inequality problem. In our model with $k\geq 2$ dimensions,
the gap is at most a constant-factor. We uncover several additional key
differences in the multi- and single-dimensional models.",Optimal Stopping with Multi-Dimensional Comparative Loss Aversion,2023-09-26 01:00:46,"Linda Cai, Joshua Gardner, S. Matthew Weinberg","http://arxiv.org/abs/2309.14555v2, http://arxiv.org/pdf/2309.14555v2",cs.GT
35320,th,"We study the economics of the Ethereum improvement proposal 4844 and its
effect on rollups' data posting strategies. Rollups' cost consists of two
parts: data posting and delay. In the new proposal, the data posting cost
corresponds to a blob posting cost and is fixed in each block, no matter how
much of the blob is utilized by the rollup. The tradeoff is clear: the rollup
prefers to post a full blob, but if its transaction arrival rate is low,
filling up a blob space causes too large a delay cost. The first result of the
paper shows that if a rollup transaction arrival rate is too low, it prefers to
use the regular blockspace market for data posting, as it offers a more
flexible cost structure. Second, we show that shared blob posting is not always
beneficial for participating rollups, and the change in the aggregate blob posting
cost in equilibrium depends on the types of participating rollups. In the
end, we discuss blob cost-sharing rules from an axiomatic angle.",EIP-4844 Economics and Rollup Strategies,2023-10-02 15:41:36,"Davide Crapis, Edward W. Felten, Akaki Mamageishvili","http://arxiv.org/abs/2310.01155v1, http://arxiv.org/pdf/2310.01155v1",cs.GT
35321,th,"Humans have been arguing about the benefits of dictatorial versus democratic
regimes for millennia. For example, Plato, in The Republic, favored
Aristocracy, the enlightened autocratic regime, over democracy. Modern dictators
typically come to power promising quick solutions to societal problems and
long-term stability. I present a model of a dictatorship with the country's
best interests in mind. The model is based on the following premises: a) the
dictator forces the country to follow the desired trajectory of development
based only on the information from the advisors; b) the deception from the advisors
cannot decrease in time; and c) the deception increases based on the
difficulties the country encounters. The model shows an improvement in the
short term (a few months to a year), followed by instability leading to the
country's gradual deterioration over many years. I derive some universal
parameters applicable to all dictators and show that advisors' deception
increases in parallel with the country's decline. In contrast, the dictator
thinks the government is doing a reasonable, but not perfect, job. Finally, I
present a match of the model to the historical data of grain production in the
Soviet Union in 1928-1940.",The Dictator Equation: The Distortion of Information Flow in Autocratic Regimes and Its Consequences,2023-10-03 00:56:59,Vakhtang Putkaradze,"http://arxiv.org/abs/2310.01666v1, http://arxiv.org/pdf/2310.01666v1",nlin.AO
35322,th,"We study the convergence of best-response dynamics in Tullock contests with
convex cost functions (these games always have a unique pure-strategy Nash
equilibrium). We show that best-response dynamics rapidly converges to the
equilibrium for homogeneous agents. For two homogeneous agents, we show
convergence to an $\epsilon$-approximate equilibrium in
$\Theta(\log\log(1/\epsilon))$ steps. For $n \ge 3$ agents, the dynamics is not
unique because at each step $n-1 \ge 2$ agents can make non-trivial moves. We
consider the model proposed by Ghosh and Goldberg (2023), where the agent
making the move is randomly selected at each time step. We show convergence to
an $\epsilon$-approximate equilibrium in $O(\beta \log(n/(\epsilon\delta)))$
steps with probability $1-\delta$, where $\beta$ is a parameter of the agent
selection process, e.g., $\beta = n^2 \log(n)$ if agents are selected uniformly
at random at each time step. We complement this result with a lower bound of
$\Omega(n + \log(1/\epsilon)/\log(n))$ applicable for any agent selection
process.",Best-Response Dynamics in Tullock Contests with Convex Costs,2023-10-05 16:28:44,Abheek Ghosh,"http://arxiv.org/abs/2310.03528v2, http://arxiv.org/pdf/2310.03528v2",cs.GT
35323,th,"Markets with multiple divisible goods have been studied widely from the
perspective of revenue and welfare. In general, it is well known that envy-free
revenue-maximal outcomes can result in lower welfare than competitive
equilibrium outcomes. We study a market in which buyers have quasilinear
utilities with linear substitutes valuations and budget constraints, and the
seller must find prices and an envy-free allocation that maximise revenue or
welfare. Our setup mirrors markets such as ad auctions and auctions for the
exchange of financial assets. We prove that the unique competitive equilibrium
prices are also envy-free revenue-maximal. This coincidence of maximal revenue
and welfare is surprising and breaks down even when buyers have
piecewise-linear valuations. We present a novel characterisation of the set of
""feasible"" prices at which demand does not exceed supply, show that this set
has an elementwise minimal price vector, and demonstrate that these prices
maximise revenue and welfare. The proof also implies an algorithm for finding
this unique price vector.",Substitutes markets with budget constraints: solving for competitive and optimal prices,2023-10-05 20:12:06,"Simon Finster, Paul Goldberg, Edwin Lock","http://arxiv.org/abs/2310.03692v1, http://arxiv.org/pdf/2310.03692v1",econ.TH
35324,th,"I modify the canonical statistical discrimination model of Coate and Loury
(1993) by assuming the firm's belief about an individual's unobserved class is
machine learning-generated and, therefore, contractible. This expands the
toolkit of a regulator beyond belief-free regulations like affirmative action.
Contractible beliefs make it feasible to require the firm to select a decision
policy that equalizes true positive rates across groups -- what the algorithmic
fairness literature calls equal opportunity. While affirmative action does not
necessarily end statistical discrimination, I show that imposing equal
opportunity does.",The Impact of Equal Opportunity on Statistical Discrimination,2023-10-06 23:57:34,John Y. Zhu,"http://arxiv.org/abs/2310.04585v1, http://arxiv.org/pdf/2310.04585v1",econ.TH
35325,th,"This paper shows the usefulness of the Perov contraction theorem, which is a
generalization of the classical Banach contraction theorem, for solving Markov
dynamic programming problems. When the reward function is unbounded, combining
an appropriate weighted supremum norm with the Perov contraction theorem yields
a unique fixed point of the Bellman operator under weaker conditions than
existing approaches. An application to the optimal savings problem shows that
the average growth rate condition derived from the spectral radius of a certain
nonnegative matrix is sufficient and almost necessary for obtaining a solution.",Unbounded Markov Dynamic Programming with Weighted Supremum Norm Perov Contractions,2023-10-07 00:16:03,Alexis Akira Toda,"http://arxiv.org/abs/2310.04593v1, http://arxiv.org/pdf/2310.04593v1",math.OC
35326,th,"We study the stochastic structure of cryptocurrency rates of returns as
compared to stock returns by focusing on the associated cross-sectional
distributions. We build two datasets. The first comprises forty-six major
cryptocurrencies, and the second includes all the companies listed in the S&P
500. We collect individual data from January 2017 until December 2022. We then
apply the Quantal Response Statistical Equilibrium (QRSE) model to recover the
cross-sectional frequency distribution of the daily returns of cryptocurrencies
and S&P 500 companies. We study the stochastic structure of these two markets
and the properties of investors' behavior over bear and bull trends. Finally,
we compare the degree of informational efficiency of these two markets.",An Information Theory Approach to the Stock and Cryptocurrency Market: A Statistical Equilibrium Perspective,2023-10-07 23:02:21,"Emanuele Citera, Francesco De Pretis","http://arxiv.org/abs/2310.04907v1, http://arxiv.org/pdf/2310.04907v1",econ.TH
35327,th,"We simulate behaviour of independent reinforcement learning algorithms
playing the Crawford and Sobel (1982) game of strategic information
transmission. We show that a sender and a receiver training together converge
to strategies approximating the ex-ante optimal equilibrium of the game.
Communication occurs to the largest extent predicted by Nash equilibrium. The
conclusion is robust to alternative specifications of the learning
hyperparameters and of the game. We discuss implications for theories of
equilibrium selection in information transmission games, for work on emerging
communication among algorithms in computer science, and for the economics of
collusion in markets populated by artificially intelligent agents.",Cheap Talking Algorithms,2023-10-11 23:16:38,"Daniele Condorelli, Massimiliano Furlan","http://arxiv.org/abs/2310.07867v3, http://arxiv.org/pdf/2310.07867v3",econ.TH
35329,th,"We study a sender-receiver model where the receiver can commit to a decision
rule before the sender determines the information policy. The decision rule can
depend on the signal structure and the signal realization that the sender
adopts. This framework captures applications where a decision-maker (the
receiver) solicit advice from an interested party (sender). In these
applications, the receiver faces uncertainty regarding the sender's preferences
and the set of feasible signal structures. Consequently, we adopt a unified
robust analysis framework that includes max-min utility, min-max regret, and
min-max approximation ratio as special cases. We show that it is optimal for
the receiver to sacrifice ex-post optimality to perfectly align the sender's
incentive. The optimal decision rule is a quota rule, i.e., the decision rule
maximizes the receiver's ex-ante payoff subject to the constraint that the
marginal distribution over actions adheres to a consistent quota, regardless of
the sender's chosen signal structure.",Managing Persuasion Robustly: The Optimality of Quota Rules,2023-10-16 05:52:39,"Dirk Bergemann, Tan Gan, Yingkai Li","http://arxiv.org/abs/2310.10024v1, http://arxiv.org/pdf/2310.10024v1",econ.TH
35330,th,"We investigate auction mechanisms to support the emerging format of
AI-generated content. In particular, we study how to aggregate several LLMs in
an incentive-compatible manner. In this problem, the preferences of each agent
over stochastically generated content are described/encoded as an LLM. A key
motivation is to design an auction format for AI-generated ad creatives to
combine inputs from different advertisers. We argue that this problem, while
generally falling under the umbrella of mechanism design, has several unique
features. We propose a general formalism -- the token auction model -- for
studying this problem. A key feature of this model is that it acts on a
token-by-token basis and lets LLM agents influence generated contents through
single dimensional bids.
  We first explore a robust auction design approach, in which all we assume is
that agent preferences entail partial orders over outcome distributions. We
formulate two natural incentive properties, and show that these are equivalent
to a monotonicity condition on distribution aggregation. We also show that for
such aggregation functions, it is possible to design a second-price auction,
despite the absence of bidder valuation functions. We then move to designing
concrete aggregation functions by focusing on specific valuation forms based on
KL-divergence, a commonly used loss function for LLMs. The welfare-maximizing
aggregation rules turn out to be the weighted (log-space) convex combination of
the target distributions from all participants. We conclude with experimental
results in support of the token auction formulation.",Mechanism Design for Large Language Models,2023-10-17 00:01:12,"Paul Duetting, Vahab Mirrokni, Renato Paes Leme, Haifeng Xu, Song Zuo","http://arxiv.org/abs/2310.10826v1, http://arxiv.org/pdf/2310.10826v1",cs.GT
35331,th,"Consider a finite set of trade orders and automated market makers (AMMs) at
some state. We propose a solution to the problem of finding an equilibrium
price vector to execute all the orders jointly with corresponding optimal AMMs
swaps. The solution is based on Brouwer's fixed-point theorem. We discuss
computational aspects relevant for realistic situations in public blockchain
activity.",Walraswap: a solution to uniform price batch auctions,2023-10-18 21:42:52,Sergio A. Yuhjtman,"http://arxiv.org/abs/2310.12255v2, http://arxiv.org/pdf/2310.12255v2",q-fin.MF
35332,th,"In the impartial selection problem, a subset of agents up to a fixed size $k$
among a group of $n$ is to be chosen based on votes cast by the agents
themselves. A selection mechanism is impartial if no agent can influence its
own chance of being selected by changing its vote. It is $\alpha$-optimal if,
for every instance, the votes received by the selected subset amount to at
least a fraction $\alpha$ of the votes received by the subset of size
$k$ with the highest number of votes. We study deterministic impartial
mechanisms in a more general setting with arbitrarily weighted votes and
provide the first approximation guarantee, roughly $1/\lceil 2n/k\rceil$. When
the number of agents to select is large enough compared to the total number of
agents, this yields an improvement on the previously best known approximation
ratio of $1/k$ for the unweighted setting. We further show that our mechanism
can be adapted to the impartial assignment problem, in which multiple sets of
up to $k$ agents are to be selected, with a loss in the approximation ratio of
$1/2$.",Deterministic Impartial Selection with Weights,2023-10-23 17:44:44,"Javier Cembrano, Svenja M. Griesbach, Maximilian J. Stahlberg","http://arxiv.org/abs/2310.14991v1, http://arxiv.org/pdf/2310.14991v1",cs.GT
35334,th,"We study a robust selling problem where a seller attempts to sell one item to
a buyer but is uncertain about the buyer's valuation distribution. Existing
literature indicates that robust mechanism design provides a stronger
theoretical guarantee than robust deterministic pricing. Meanwhile, the
superior performance of robust mechanism design comes at the expense of
implementation complexity given that the seller offers a menu with an infinite
number of options, each coupled with a lottery and a payment for the buyer's
selection. In view of this, the primary focus of our research is to find simple
selling mechanisms that can effectively hedge against market ambiguity. We show
that a selling mechanism with a small menu size (or limited randomization
across a finite number of prices) already captures a significant share of the
benefits achieved by the optimal robust mechanism with infinite options. In
particular, we develop a general framework to study the robust selling
mechanism problem where the seller only offers a finite number of options in
the menu. Then we propose a tractable reformulation that addresses a variety of
ambiguity sets of the buyer's valuation distribution. Our formulation further
enables us to characterize the optimal selling mechanisms and the corresponding
competitive ratio for different menu sizes and various ambiguity sets,
including support, mean, and quantile information. In light of the closed-form
competitive ratios associated with different menu sizes, we draw the managerial
implication that even a modest menu size already yields a competitive
ratio comparable to that of the optimal robust mechanism with infinite options, which
establishes a favorable trade-off between theoretical performance and
implementation simplicity. Remarkably, a menu size of merely two can
significantly enhance the competitive ratio, compared to the deterministic
pricing scheme.",The Power of Simple Menus in Robust Selling Mechanisms,2023-10-26 16:41:51,Shixin Wang,"http://arxiv.org/abs/2310.17392v1, http://arxiv.org/pdf/2310.17392v1",econ.TH
35335,th,"Stable marriage of a two-sided market with unit demand is a classic problem
that arises in many real-world scenarios. In addition, a unique stable marriage
in this market simplifies a host of downstream desiderata. In this paper, we
explore a new set of sufficient conditions for unique stable matching (USM)
under this setup. Unlike other approaches that also address this question using
the structure of preference profiles, we use an algorithmic viewpoint and
investigate if this question can be answered using the lens of the deferred
acceptance (DA) algorithm (Gale and Shapley, 1962). Our results yield a set of
sufficient conditions for USM (viz., MaxProp and MaxRou) and show that these
are disjoint from the previously known sufficiency conditions like sequential
preference and no crossing. We also provide a characterization of MaxProp that
makes it efficiently verifiable, and shows the gap between MaxProp and the
entire USM class. These results give a more detailed view of the sub-structures
of the USM class.",A Gale-Shapley View of Unique Stable Marriages,2023-10-28 18:36:03,"Kartik Gokhale, Amit Kumar Mallik, Ankit Kumar Misra, Swaprava Nath","http://arxiv.org/abs/2310.18736v1, http://arxiv.org/pdf/2310.18736v1",cs.GT
35336,th,"A principal seeks to learn about a binary state and can do so by enlisting an
agent to acquire information over time using a Poisson information arrival
technology. The agent learns about this state privately, and his effort choices
are unobserved by the principal. The principal can reward the agent with a
prize of fixed value as a function of the agent's sequence of reports and the
realized state. We identify conditions that each individually ensure that the
principal cannot do better than by eliciting a single report from the agent
after all information has been acquired. We also show that such a static
contract is suboptimal under sufficiently strong violations of these
conditions. We contrast our solution to the case where the agent acquires
information ""all at once;"" notably, the optimal contract in the dynamic
environment may provide strictly positive base rewards to the agent even if his
prediction about the state is incorrect.",Optimal Scoring for Dynamic Information Acquisition,2023-10-29 23:46:05,"Yingkai Li, Jonathan Libgober","http://arxiv.org/abs/2310.19147v1, http://arxiv.org/pdf/2310.19147v1",econ.TH
35337,th,"We study the role of regulatory inspections in a contract design problem in
which a principal interacts separately with multiple agents. Each agent's
hidden action includes a dimension that determines whether they undertake an
extra costly step to adhere to safety protocols. The principal's objective is
to use payments combined with a limited budget for random inspections to
incentivize agents towards safety-compliant actions that maximize the
principal's utility. We first focus on the single-agent setting with linear
contracts and present an efficient algorithm that characterizes the optimal
linear contract, which includes both payment and random inspection. We further
investigate how the optimal contract changes as the inspection cost or the cost
of adhering to safety protocols vary. Notably, we demonstrate that the agent's
compensation increases if either of these costs escalates. However, while the
probability of inspection decreases with rising inspection costs, it
exhibits nonmonotonic behavior as a function of the safety action costs.
Lastly, we explore the multi-agent setting, where the principal's challenge is
to determine the best distribution of inspection budgets among all agents. We
propose an efficient approach based on dynamic programming to find an
approximately optimal allocation of inspection budget across contracts. We also
design a random sequential scheme to determine the inspector's assignments,
ensuring each agent is inspected at most once and at the desired probability.
Finally, we present a case study illustrating that a mere difference in the
cost of inspection across various agents can drive the principal's decision to
forego inspecting a significant fraction of them, concentrating its entire
budget on those that are less costly to inspect.",Contract Design With Safety Inspections,2023-11-05 04:23:43,"Alireza Fallah, Michael I. Jordan","http://arxiv.org/abs/2311.02537v1, http://arxiv.org/pdf/2311.02537v1",cs.GT
35338,th,"In order to coordinate players in a game must first identify a target pattern
of behaviour. In this paper we investigate the difficulty of identifying
prominent outcomes in two kinds of binary action coordination problems in
social networks: pure coordination games and anti-coordination games. For both
environments, we determine the computational complexity of finding a strategy
profile that (i) maximises welfare, (ii) maximises welfare subject to being an
equilibrium, and (iii) maximises potential. We show that the complexity of
these objectives can vary with the type of coordination problem. Objectives (i)
and (iii) are tractable problems in pure coordination games, but for
anti-coordination games are NP-hard. Objective (ii), finding the best Nash
equilibrium, is NP-hard for both. Our results support the idea that
environments in which actions are strategic complements (e.g., technology
adoption) facilitate successful coordination more readily than those in which
actions are strategic substitutes (e.g., public good provision).",Some coordination problems are harder than others,2023-11-06 18:33:16,"Argyrios Deligkas, Eduard Eiben, Gregory Gutin, Philip R. Neary, Anders Yeo","http://arxiv.org/abs/2311.03195v2, http://arxiv.org/pdf/2311.03195v2",econ.TH
35339,th,"This paper studies a mathematical model of city formation by migration of
firms and workers. The Core-Periphery model in the new economic geography,
which considers migration of workers driven by real wage inequality among
regions, is extended to incorporate migration of firms driven by real profit
inequality among regions. A spatially homogeneous distributions of firms and
workers become destabilized and eventually forms several cities in which both
the firms and workers agglomerate, and the number of the cities decreases as
transport costs become lower.",City formation by dual migration of firms and workers,2023-11-09 14:42:42,Kensuke Ohtake,"http://arxiv.org/abs/2311.05292v2, http://arxiv.org/pdf/2311.05292v2",econ.TH
35340,th,"It is unclear how to restructure ownership when an asset is privately held,
and there is uncertainty about the owners' subjective valuations. When
ownership is divided equally between two owners, a commonly used mechanism is
called a BMBY mechanism. This mechanism works as follows: each owner can
initiate a BMBY by naming her price. Once an owner declares a price, the other
chooses to sell his holdings or buy the shares of the initiator at the given
price. This mechanism is simple and tractable; however, it does not elicit
actual owner valuations, does not guarantee an efficient allocation, and, most
importantly, is limited to an equal partnership of two owners. In this paper,
we extend this rationale to a multi-owner setting. Our proposed mechanism
elicits owner valuations truthfully. Additionally, our proposed mechanism
exhibits several desirable traits: it is easy to implement, budget balanced,
robust to collusion (weakly group strategyproof), individually rational, and
ex-post efficient.",A Strategyproof Mechanism for Ownership Restructuring in Privately Owned Assets,2023-11-12 11:56:47,"Gal Danino, Moran Koren, Omer Madmon","http://arxiv.org/abs/2311.06780v1, http://arxiv.org/pdf/2311.06780v1",econ.TH
35341,th,"A common economic process is crowdsearch, wherein a group of agents is
invited to search for a valuable physical or virtual object, e.g. creating and
patenting an invention, solving an open scientific problem, or identifying
vulnerabilities in software. We study a binary model of crowdsearch in which
agents have different abilities to find the object. We characterize the types
of equilibria and identify which type of crowd maximizes the likelihood of
finding the object. Sometimes, however, an unlimited crowd is not sufficient to
guarantee that the object is found. It can even happen that inviting more
agents lowers the probability of finding the object. We characterize the
optimal prize and show that offering only one prize (winner-takes-all)
maximizes the probability of finding the object but is not necessarily optimal
for the crowdsearch designer.",Crowdsearch,2023-11-14 23:57:22,"Hans Gersbach, Akaki Mamageishvili, Fikri Pitsuwan","http://arxiv.org/abs/2311.08532v1, http://arxiv.org/pdf/2311.08532v1",econ.TH
35342,th,"What will likely be the effect of the emergence of ChatGPT and other forms of
artificial intelligence (AI) on the skill premium? To address this question, we
develop a nested constant elasticity of substitution production function that
distinguishes between industrial robots and AI. Industrial robots predominantly
substitute for low-skill workers, whereas AI mainly helps to perform the tasks
of high-skill workers. We show that AI reduces the skill premium as long as it
is more substitutable for high-skill workers than low-skill workers are for
high-skill workers.",Artificial intelligence and the skill premium,2023-11-14 23:16:55,"David E. Bloom, Klaus Prettner, Jamel Saadaoui, Mario Veruete","http://arxiv.org/abs/2311.09255v1, http://arxiv.org/pdf/2311.09255v1",econ.TH
35343,th,"The global dynamics is investigated for a duopoly game where the perfect
foresight hypothesis is relaxed and firms are worst-case maximizers.
Overlooking the degree of product substitutability as well as the sensitivity
of price to quantity, the unique and globally stable Cournot-Nash equilibrium
of the complete-information duopoly game loses stability when firms are not
aware of whether they are playing a duopoly game, as it is, or an oligopoly game with
more than two competitors. This finding resembles Theocharis' condition for the
stability of the Cournot-Nash equilibrium in oligopolies without uncertainty.
As opposed to complete-information oligopoly games, coexisting attractors,
disconnected basins of attractions and chaotic dynamics emerge when the
Cournot-Nash equilibrium loses stability. This difference in the global
dynamics is due to the nonlinearities introduced by the worst-case approach to
uncertainty, which are mirrored in bimodal best-reply functions. Conducted with
techniques that require a symmetric setting of the game, the investigation of
the dynamics reveals that a chaotic regime prevents firms from being ambiguity
averse, that is, firms are worst-case maximizers only in the
quantity-expectation space. Therefore, chaotic dynamics are the result and at
the same time the source of profit uncertainty.",Ambiguity aversion as a route to randomness in a duopoly game,2023-11-19 19:24:57,"Davide Radi, Laura Gardini","http://arxiv.org/abs/2311.11366v1, http://arxiv.org/pdf/2311.11366v1",econ.TH
35344,th,"In their study of price discrimination for a monopolist selling heterogeneous
products to consumers having private information about their own
multidimensional types, Rochet and Chon\'e (1998) discovered a new form of
screening in which consumers with intermediate types are bunched together into
isochoice groups of various dimensions incentivized to purchase the same
product. They analyzed a particular example involving customer types
distributed uniformly over the unit square. For this example, we prove that
their proposed solution is not self-consistent, and we indicate how consistency
can be restored.","Comment on ""Ironing, sweeping, and multidimensional screening''",2023-11-22 00:41:52,"Robert J. McCann, Kelvin Shuangjian Zhang","http://arxiv.org/abs/2311.13012v2, http://arxiv.org/pdf/2311.13012v2",math.OC
35345,th,"We show that no efficient ascending auction can guarantee to find even a
minimal envy-free price vector if all valuations are submodular, under a
standard complexity-theoretic assumption.",No Ascending Auction can find Equilibrium for SubModular valuations,2023-12-01 14:57:45,"Oren Ben-Zwi, Ilan Newman","http://arxiv.org/abs/2312.00522v1, http://arxiv.org/pdf/2312.00522v1",cs.GT
35346,th,"We study equilibrium investment into bidding and latency reduction for
different sequencing policies. For a batch auction design, we observe that
bidders shade bids according to the likelihood that competing bidders land in
the current batch. Moreover, in equilibrium, in the ex-ante investment stage
before the auction, bidders invest into latency until they make zero profit in
expectation.
  We compare the batch auction design to continuous time bidding policies (time
boost) and observe that (depending on the choice of parameters) they obtain
similar revenue and welfare guarantees.",Transaction Ordering Auctions,2023-12-04 20:14:06,Jan Christoph Schlegel,"http://arxiv.org/abs/2312.02055v1, http://arxiv.org/pdf/2312.02055v1",cs.GT
35347,th,"Dimensionality reduction has always been one of the most significant and
challenging problems in the analysis of high-dimensional data. In the context
of time series analysis, our focus is on the estimation and inference of
conditional mean and variance functions. By using central mean and variance
dimension reduction subspaces that preserve sufficient information about the
response, one can effectively estimate the unknown mean and variance functions
of the time series. While the literature presents several approaches to
estimate the time series central mean and variance subspaces (TS-CMS and
TS-CVS), these methods tend to be computationally intensive and infeasible for
practical applications. By employing the Fourier transform, we derive explicit
estimators for TS-CMS and TS-CVS. These proposed estimators are demonstrated to
be consistent, asymptotically normal, and efficient. Simulation studies have
been conducted to evaluate the performance of the proposed method. The results
show that our method is significantly more accurate and computationally
efficient than existing methods. Furthermore, the method has been applied to
the Canadian Lynx dataset.",Fourier Methods for Sufficient Dimension Reduction in Time Series,2023-12-04 21:41:29,"S. Yaser Samadi, Tharindu P. De Alwis","http://arxiv.org/abs/2312.02110v1, http://arxiv.org/pdf/2312.02110v1",stat.ME
35348,th,"It is well known that Random Serial Dictatorship is strategy-proof and leads
to a Pareto-Efficient outcome. We show that this result breaks down when
individuals are allowed to make transfers, and adapt Random Serial Dictatorship
to encompass trades between individuals. Strategic analysis of play under the
new mechanisms we define is given, accompanied by simulations to quantify the
gains from trade.",Random Serial Dictatorship with Transfers,2023-12-13 12:14:48,"Sudharsan Sundar, Eric Gao, Trevor Chow, Matthew Ding","http://arxiv.org/abs/2312.07999v1, http://arxiv.org/pdf/2312.07999v1",cs.GT
35349,th,"In approval-based committee (ABC) voting, the goal is to choose a subset of
predefined size of the candidates based on the voters' approval preferences
over the candidates. While this problem has attracted significant attention in
recent years, the incentives for voters to participate in an election for a
given ABC voting rule have been neglected so far. This paper is thus the first
to explicitly study this property, typically called participation, for ABC
voting rules. In particular, we show that all ABC scoring rules even satisfy
group participation, whereas most sequential rules severely fail participation.
We furthermore explore several escape routes to the impossibility for
sequential ABC voting rules: we prove for many sequential rules that (i) they
satisfy participation on laminar profiles, (ii) voters who approve none of the
elected candidates cannot benefit by abstaining, and (iii) it is NP-hard for a
voter to decide whether she benefits from abstaining.",Participation Incentives in Approval-Based Committee Elections,2023-12-14 13:32:59,"Martin Bullinger, Chris Dong, Patrick Lederer, Clara Mehler","http://arxiv.org/abs/2312.08798v1, http://arxiv.org/pdf/2312.08798v1",cs.GT
35350,th,"In approval-based committee (ABC) elections, the goal is to select a
fixed-size subset of the candidates, a so-called committee, based on the
voters' approval ballots over the candidates. One of the most popular classes
of ABC voting rules is that of ABC scoring rules, which have recently been
characterized by Lackner and Skowron (2021). However, this characterization
relies on a model where the output is a ranking of committees instead of a set
of winning committees and no full characterization of ABC scoring rules exists
in the latter standard setting. We address this issue by characterizing two
important subclasses of ABC scoring rules in the standard ABC election model,
thereby both extending the result of Lackner and Skowron (2021) to the standard
setting and refining it to subclasses. In more detail, by relying on a
consistency axiom for variable electorates, we characterize (i) the prominent
class of Thiele rules and (ii) a new class of ABC voting rules called ballot
size weighted approval voting. Based on these theorems, we also infer
characterizations of three well-known ABC voting rules, namely multi-winner
approval voting, proportional approval voting, and satisfaction approval
voting.",Refined Characterizations of Approval-based Committee Scoring Rules,2023-12-14 13:34:07,"Chris Dang, Patrick Lederer","http://arxiv.org/abs/2312.08799v1, http://arxiv.org/pdf/2312.08799v1",cs.GT
35351,th,"We provide the first large-scale data collection of real-world approval-based
committee elections. These elections have been conducted on the Polkadot
blockchain as part of their Nominated Proof-of-Stake mechanism and contain
around one thousand candidates and tens of thousands of (weighted) voters each.
We conduct an in-depth study of application-relevant questions, including a
quantitative and qualitative analysis of the outcomes returned by different
voting rules. Besides considering proportionality measures that are standard in
the multiwinner voting literature, we pay particular attention to less-studied
measures of overrepresentation, as these are closely related to the security of
the Polkadot network. We also analyze how different design decisions such as
the committee size affect the examined measures.",Approval-Based Committee Voting in Practice: A Case Study of (Over-)Representation in the Polkadot Blockchain,2023-12-18 21:15:38,"Niclas Boehmer, Markus Brill, Alfonso Cevallos, Jonas Gehrlein, Luis Sánchez-Fernández, Ulrike Schmidt-Kraepelin","http://arxiv.org/abs/2312.11408v2, http://arxiv.org/pdf/2312.11408v2",cs.GT
35352,th,"In this note, I introduce Estimated Performance Rating (PR$^e$), a novel
system for evaluating player performance in sports and games. PR$^e$ addresses
a key limitation of the Tournament Performance Rating (TPR) system, which is
undefined for zero or perfect scores in a series of games. PR$^e$ is defined as
the rating that solves an optimization problem related to scoring probability,
making it applicable for any performance level. The main theorem establishes
that the PR$^e$ of a player is equivalent to the TPR whenever the latter is
defined. I then apply this system to historically significant win-streaks in
association football, tennis, and chess. Beyond sports, PR$^e$ has broad
applicability in domains where Elo ratings are used, from college rankings to
the evaluation of large language models.","Performance rating in chess, tennis, and other contexts",2023-12-20 04:47:55,Mehmet S. Ismail,"http://arxiv.org/abs/2312.12700v1, http://arxiv.org/pdf/2312.12700v1",econ.TH
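A minimal sketch of the classical Tournament Performance Rating that the note takes as its starting point, assuming the logistic Elo expected-score curve: TPR is the rating at which the expected score against the listed opponents equals the actual score, found here by bisection. As the abstract notes, this quantity is undefined for zero or perfect scores, which is the gap PR$^e$ is designed to close; the opponent ratings and score below are illustrative.

```python
def expected_score(r, opponents):
    """Expected total score of a player rated r under the logistic Elo curve."""
    return sum(1.0 / (1.0 + 10 ** ((o - r) / 400.0)) for o in opponents)

def tpr(opponents, score, lo=-4000.0, hi=8000.0, iters=100):
    """Rating at which expected score equals the actual score (bisection)."""
    assert 0 < score < len(opponents), "TPR is undefined for zero or perfect scores"
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if expected_score(mid, opponents) < score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(tpr(opponents=[2700, 2650, 2600], score=2.5)))  # 2.5/3 against ~2650 opposition
```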
35354,th,"We consider moral hazard problems where a principal has access to rich
monitoring data about an agent's action. Rather than focusing on optimal
contracts (which are known to be complicated in general), we characterize
optimal rate at which the principal's payoffs can converge to the first-best
payoff as the amount of data grows large. Our main result suggests a novel
rationale for the widely observed binary wage schemes, by showing that such
simple contracts achieve the optimal convergence rate. Notably, in order to
attain the optimal convergence rate, the principal must set a lenient cutoff
for when the agent receives a high vs. low wage. In contrast, we find that
other common contracts where wages vary more finely with observed data (e.g.,
linear contracts) approximate the first-best at a highly suboptimal rate.
Finally, we show that the optimal convergence rate depends only on a simple
summary statistic of the monitoring technology. This yields a detail-free
ranking over monitoring technologies that quantifies their value for incentive
provision in data-rich settings and applies regardless of the agent's specific
utility or cost functions.",Monitoring with Rich Data,2023-12-28 05:34:18,"Mira Frick, Ryota Iijima, Yuhta Ishii","http://arxiv.org/abs/2312.16789v1, http://arxiv.org/pdf/2312.16789v1",econ.TH
35355,th,"We study the problem of screening in decision-making processes under
uncertainty, focusing on the impact of adding an additional screening stage,
commonly known as a 'gatekeeper.' While our primary analysis is rooted in the
context of job market hiring, the principles and findings are broadly
applicable to areas such as educational admissions, healthcare patient
selection, and financial loan approvals. The gatekeeper's role is to assess
applicants' suitability before significant investments are made. Our study
reveals that while gatekeepers are designed to streamline the selection process
by filtering out less likely candidates, they can sometimes inadvertently
affect the candidates' own decision-making process. We explore the conditions
under which the introduction of a gatekeeper can enhance or impede the
efficiency of these processes. Additionally, we consider how adjusting
gatekeeping strategies might impact the accuracy of selection decisions. Our
research also extends to scenarios where gatekeeping is influenced by
historical biases, particularly in competitive settings like hiring. We
discover that candidates confronted with a statistically biased gatekeeping
process are more likely to withdraw from applying, thereby perpetuating the
previously mentioned historical biases. The study suggests that measures such
as affirmative action can be effective in addressing these biases. While
centered on hiring, the insights and methodologies from our study have
significant implications for a wide range of fields where screening and
gatekeeping are integral.","The Gatekeeper Effect: The Implications of Pre-Screening, Self-selection, and Bias for Hiring Processes",2023-12-28 20:54:39,Moran Koren,"http://arxiv.org/abs/2312.17167v1, http://arxiv.org/pdf/2312.17167v1",econ.TH
35356,th,"The concept of sequential choice functions is introduced and studied. This
concept applies to the reduction of the problem of stable matchings with
sequential workers to a situation where the workers are linear.",Sequential choice functions and stability problems,2024-01-01 16:08:56,Vladimir I. Danilov,"http://arxiv.org/abs/2401.00748v1, http://arxiv.org/pdf/2401.00748v1",math.CO
35357,th,"Correlated equilibria are sometimes more efficient than the Nash equilibria
of a game without signals. We investigate whether the availability of quantum
signals in the context of a classical strategic game may allow the players to
achieve even better efficiency than in any correlated equilibrium with
classical signals, and find the answer to be positive.",Correlated Equilibria of Classical Strategic Games with Quantum Signals,2003-09-03 02:34:49,Pierfrancesco La Mura,"http://dx.doi.org/10.1142/S0219749905000724, http://arxiv.org/abs/quant-ph/0309033v1, http://arxiv.org/pdf/quant-ph/0309033v1",quant-ph
35358,th,"Motivated by several classic decision-theoretic paradoxes, and by analogies
with the paradoxes which in physics motivated the development of quantum
mechanics, we introduce a projective generalization of expected utility along
the lines of the quantum-mechanical generalization of probability theory. The
resulting decision theory accommodates the dominant paradoxes, while retaining
significant simplicity and tractability. In particular, every finite game
within this larger class of preferences still has an equilibrium.",Projective Expected Utility,2008-02-22 14:00:20,Pierfrancesco La Mura,"http://dx.doi.org/10.1016/j.jmp.2009.02.001, http://arxiv.org/abs/0802.3300v1, http://arxiv.org/pdf/0802.3300v1",quant-ph
35359,th,"In most contemporary approaches to decision making, a decision problem is
described by a set of states and a set of outcomes, and a rich set of acts,
which are functions from states to outcomes over which the decision maker (DM)
has preferences. Most interesting decision problems, however, do not come with
a state space and an outcome space. Indeed, in complex problems it is often far
from clear what the state and outcome spaces would be. We present an
alternative foundation for decision making, in which the primitive objects of
choice are syntactic programs. A representation theorem is proved in the spirit
of standard representation theorems, showing that if the DM's preference
relation on objects of choice satisfies appropriate axioms, then there exist a
set S of states, a set O of outcomes, a way of interpreting the objects of
choice as functions from S to O, a probability on S, and a utility function on
O, such that the DM prefers choice a to choice b if and only if the expected
utility of a is higher than that of b. Thus, the state space and outcome space
are subjective, just like the probability and utility; they are not part of the
description of the problem. In principle, a modeler can test for SEU behavior
without having access to states or outcomes. We illustrate the power of our
approach by showing that it can capture decision makers who are subject to
framing effects.",Constructive Decision Theory,2009-06-23 21:22:45,"Lawrence Blume, David Easley, Joseph Y. Halpern","http://arxiv.org/abs/0906.4316v2, http://arxiv.org/pdf/0906.4316v2",cs.GT
35360,th,"We consider a large class of social learning models in which a group of
agents face uncertainty regarding a state of the world, share the same utility
function, observe private signals, and interact in a general dynamic setting.
We introduce Social Learning Equilibria, a static equilibrium concept that
abstracts away from the details of the given extensive form, but nevertheless
captures the corresponding asymptotic equilibrium behavior. We establish
general conditions for agreement, herding, and information aggregation in
equilibrium, highlighting a connection between agreement and information
aggregation.",Social learning equilibria,2012-07-25 08:58:32,"Elchanan Mossel, Manuel Mueller-Frank, Allan Sly, Omer Tamuz","http://dx.doi.org/10.3982/ECTA16465, http://arxiv.org/abs/1207.5895v4, http://arxiv.org/pdf/1207.5895v4",math.ST
35361,th,"We consider a group of strategic agents who must each repeatedly take one of
two possible actions. They learn which of the two actions is preferable from
initial private signals, and by observing the actions of their neighbors in a
social network.
  We show that the question of whether or not the agents learn efficiently
depends on the topology of the social network. In particular, we identify a
geometric ""egalitarianism"" condition on the social network that guarantees
learning in infinite networks, or learning with high probability in large
finite networks, in any equilibrium. We also give examples of non-egalitarian
networks with equilibria in which learning fails.",Strategic Learning and the Topology of Social Networks,2012-09-25 11:13:59,"Elchanan Mossel, Allan Sly, Omer Tamuz","http://dx.doi.org/10.3982/ECTA12058, http://arxiv.org/abs/1209.5527v2, http://arxiv.org/pdf/1209.5527v2",cs.GT
35362,th,"We consider the complexity of finding a correlated equilibrium of an
$n$-player game in a model that allows the algorithm to make queries on
players' payoffs at pure strategy profiles. Randomized regret-based dynamics
are known to yield an approximate correlated equilibrium efficiently, namely,
in time that is polynomial in the number of players $n$. Here we show that both
randomization and approximation are necessary: no efficient deterministic
algorithm can reach even an approximate correlated equilibrium, and no
efficient randomized algorithm can reach an exact correlated equilibrium. The
results are obtained by bounding from below the number of payoff queries that
are needed.",The Query Complexity of Correlated Equilibria,2013-05-21 19:33:32,"Sergiu Hart, Noam Nisan","http://dx.doi.org/10.1016/j.geb.2016.11.003, http://arxiv.org/abs/1305.4874v2, http://arxiv.org/pdf/1305.4874v2",cs.GT
35365,th,"We consider a model of matching in trading networks in which firms can enter
into bilateral contracts. In trading networks, stable outcomes, which are
immune to deviations of arbitrary sets of firms, may not exist. We define a new
solution concept called trail stability. Trail-stable outcomes are immune to
consecutive, pairwise deviations between linked firms. We show that any trading
network with bilateral contracts has a trail-stable outcome whenever firms'
choice functions satisfy the full substitutability condition. For trail-stable
outcomes, we prove results on the lattice structure, the rural hospitals
theorem, strategy-proofness, and comparative statics of firm entry and exit. We
also introduce weak trail stability which is implied by trail stability under
full substitutability. We describe relationships between the solution concepts.",Trading Networks with Bilateral Contracts,2015-10-01 18:03:55,"Tamás Fleiner, Zsuzsanna Jankó, Akihisa Tamura, Alexander Teytelboym","http://arxiv.org/abs/1510.01210v3, http://arxiv.org/pdf/1510.01210v3",cs.GT
35366,th,"The first ever human vs. computer no-limit Texas hold 'em competition took
place from April 24-May 8, 2015 at River's Casino in Pittsburgh, PA. In this
article I present my thoughts on the competition design, agent architecture,
and lessons learned.",My Reflections on the First Man vs. Machine No-Limit Texas Hold 'em Competition,2015-10-29 09:53:15,Sam Ganzfried,"http://arxiv.org/abs/1510.08578v2, http://arxiv.org/pdf/1510.08578v2",cs.GT
35367,th,"We study the problem of Bayesian learning in a dynamical system involving
strategic agents with asymmetric information. In a series of seminal papers in
the literature, this problem has been investigated under a simplifying model
where myopically selfish players appear sequentially and act once in the game,
based on private noisy observations of the system state and public observation
of past players' actions. It has been shown that there exist information
cascades where users discard their private information and mimic the action of
their predecessor. In this paper, we provide a framework for studying Bayesian
learning dynamics in a more general setting than the one described above. In
particular, our model incorporates cases where players are non-myopic and
strategically participate for the whole duration of the game, and cases where
an endogenous process selects which subset of players will act at each time
instance. The proposed framework hinges on a sequential decomposition
methodology for finding structured perfect Bayesian equilibria (PBE) of a
general class of dynamic games with asymmetric information, where user-specific
states evolve as conditionally independent Markov processes and users make
independent noisy observations of their states. Using this methodology, we
study a specific dynamic learning model where players make decisions about
public investment based on their estimates of everyone's types. We characterize
a set of informational cascades for this problem where learning stops for the
team as a whole. We show that in such cascades, all players' estimates of other
players' types freeze even though each individual player asymptotically learns
its own true type.",Decentralized Bayesian learning in dynamic games: A framework for studying informational cascades,2016-07-23 00:29:18,"Deepanshu Vasal, Achilleas Anastasopoulos","http://arxiv.org/abs/1607.06847v2, http://arxiv.org/pdf/1607.06847v2",cs.GT
35368,th,"We study a sequential-learning model featuring a network of naive agents with
Gaussian information structures. Agents apply a heuristic rule to aggregate
predecessors' actions. They weigh these actions according to the strengths of
their social connections to different predecessors. We show this rule arises
endogenously when agents wrongly believe others act solely on private
information and thus neglect redundancies among observations. We provide a
simple linear formula expressing agents' actions in terms of network paths and
use this formula to characterize the set of networks where naive agents
eventually learn correctly. This characterization implies that, on all networks
where later agents observe more than one neighbor, there exist
disproportionately influential early agents who can cause herding on incorrect
actions. Going beyond existing social-learning results, we compute the
probability of such mislearning exactly. This allows us to compare likelihoods
of incorrect herding, and hence expected welfare losses, across network
structures. The probability of mislearning increases when link densities are
higher and when networks are more integrated. In partially segregated networks,
divergent early signals can lead to persistent disagreement between groups.",Network Structure and Naive Sequential Learning,2017-02-25 09:49:04,"Krishna Dasaratha, Kevin He","http://dx.doi.org/10.3982/TE3388, http://arxiv.org/abs/1703.02105v7, http://arxiv.org/pdf/1703.02105v7",q-fin.EC
35370,th,"Agents learn about a changing state using private signals and their
neighbors' past estimates of the state. We present a model in which Bayesian
agents in equilibrium use neighbors' estimates simply by taking weighted sums
with time-invariant weights. The dynamics thus parallel those of the tractable
DeGroot model of learning in networks, but arise as an equilibrium outcome
rather than a behavioral assumption. We examine whether information aggregation
is nearly optimal as neighborhoods grow large. A key condition for this is
signal diversity: each individual's neighbors have private signals that not
only contain independent information, but also have sufficiently different
distributions. Without signal diversity $\unicode{x2013}$ e.g., if private
signals are i.i.d. $\unicode{x2013}$ learning is suboptimal in all networks and
highly inefficient in some. Turning to social influence, we find it is much
more sensitive to one's signal quality than to one's number of neighbors, in
contrast to standard models with exogenous updating rules.",Learning from Neighbors about a Changing State,2018-01-06 19:14:47,"Krishna Dasaratha, Benjamin Golub, Nir Hak","http://dx.doi.org/10.1093/restud/rdac077, http://arxiv.org/abs/1801.02042v8, http://arxiv.org/pdf/1801.02042v8",econ.TH
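A minimal simulation sketch of the DeGroot-style dynamics the abstract parallels: each agent's estimate is a time-invariant weighted sum of neighbors' past estimates and her own private signal. The network weights, the fixed state, and the half-and-half split between network and signal are illustrative assumptions, not the equilibrium weights derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 50
W = np.array([[0.5, 0.5, 0.0, 0.0],   # row i: weights agent i puts on neighbors' past estimates
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.5, 0.5]])
state = 1.0                            # held fixed here for illustration; the paper allows a changing state
estimates = np.zeros(n)
for t in range(T):
    signals = state + rng.normal(scale=1.0, size=n)          # noisy private signals
    estimates = 0.5 * (W @ estimates) + 0.5 * signals        # time-invariant linear update
print(estimates)
```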
35371,th,"We consider a model of oligopolistic competition in a market with search
frictions, in which competing firms with products of unknown quality advertise
how much information a consumer's visit will glean. In the unique symmetric
equilibrium of this game, the countervailing incentives of attraction and
persuasion yield a payoff function for each firm that is linear in the firm's
realized effective value. If the expected quality of the products is
sufficiently high (or competition is sufficiently fierce), this corresponds to
full information--firms provide the first-best level of information. If not,
this corresponds to information dispersion--firms randomize over signals.",Attraction versus Persuasion: Information Provision in Search Markets,2018-02-26 18:25:32,"Pak Hung Au, Mark Whitmeyer","http://arxiv.org/abs/1802.09396v7, http://arxiv.org/pdf/1802.09396v7",math.PR
35372,th,"Melamed, Harrell, and Simpson have recently reported on an experiment which
appears to show that cooperation can arise in a dynamic network without
reputational knowledge, i.e., purely via dynamics [1]. We believe that their
experimental design is actually not testing this, in so far as players do know
the last action of their current partners before making a choice on their own
next action and subsequently deciding which link to cut. Had the authors given
no information at all, the result would be a decline in cooperation as shown in
[2].",Reputation is required for cooperation to emerge in dynamic networks,2018-03-16 02:39:21,"Jose A. Cuesta, Carlos Gracia-Lázaro, Yamir Moreno, Angel Sánchez","http://arxiv.org/abs/1803.06035v1, http://arxiv.org/pdf/1803.06035v1",physics.soc-ph
35373,th,"I study endogenous learning dynamics for people who misperceive intertemporal
correlations in random sequences. Biased agents face an optimal-stopping
problem. They are uncertain about the underlying distribution and learn its
parameters from predecessors. Agents stop when early draws are ""good enough,""
so predecessors' experiences contain negative streaks but not positive streaks.
When agents wrongly expect systematic reversals (the ""gambler's fallacy""), they
understate the likelihood of consecutive below-average draws, converge to
over-pessimistic beliefs about the distribution's mean, and stop too early.
Agents uncertain about the distribution's variance overestimate it to an extent
that depends on predecessors' stopping thresholds. I also analyze how other
misperceptions of intertemporal correlation interact with endogenous data
censoring.",Mislearning from Censored Data: The Gambler's Fallacy and Other Correlational Mistakes in Optimal-Stopping Problems,2018-03-22 02:21:54,Kevin He,"http://dx.doi.org/10.3982/TE4657, http://arxiv.org/abs/1803.08170v6, http://arxiv.org/pdf/1803.08170v6",q-fin.EC
35374,th,"Currency trading (Forex) is the largest world market in terms of volume. We
analyze trading and tweeting about the EUR-USD currency pair over a period of
three years. First, a large number of tweets were manually labeled, and a
Twitter stance classification model is constructed. The model then classifies
all the tweets by the trading stance signal: buy, hold, or sell (EUR vs. USD).
The Twitter stance is compared to the actual currency rates by applying the
event study methodology, well-known in financial economics. It turns out that
there are large differences in Twitter stance distribution and potential
trading returns between the four groups of Twitter users: trading robots,
spammers, trading companies, and individual traders. Additionally, we observe
attempts of reputation manipulation by post festum removal of tweets with poor
predictions, and deleting/reposting of identical tweets to increase the
visibility without tainting one's Twitter timeline.","Forex trading and Twitter: Spam, bots, and reputation manipulation",2018-04-06 15:36:28,"Igor Mozetič, Peter Gabrovšek, Petra Kralj Novak","http://arxiv.org/abs/1804.02233v2, http://arxiv.org/pdf/1804.02233v2",cs.SI
35375,th,"Creating strong agents for games with more than two players is a major open
problem in AI. Common approaches are based on approximating game-theoretic
solution concepts such as Nash equilibrium, which have strong theoretical
guarantees in two-player zero-sum games, but no guarantees in non-zero-sum
games or in games with more than two players. We describe an agent that is able
to defeat a variety of realistic opponents using an exact Nash equilibrium
strategy in a 3-player imperfect-information game. This shows that, despite a
lack of theoretical guarantees, agents based on Nash equilibrium strategies can
be successful in multiplayer games after all.",Successful Nash Equilibrium Agent for a 3-Player Imperfect-Information Game,2018-04-13 08:15:28,"Sam Ganzfried, Austin Nowak, Joannier Pinales","http://arxiv.org/abs/1804.04789v1, http://arxiv.org/pdf/1804.04789v1",cs.GT
35377,th,"We propose a method to design a decentralized energy market which guarantees
individual rationality (IR) in expectation, in the presence of system-level
grid constraints. We formulate the market as a welfare maximization problem
subject to IR constraints, and we make use of Lagrangian duality to model the
problem as an n-person non-cooperative game with a unique generalized Nash
equilibrium (GNE). We provide a distributed algorithm which converges to the
GNE. The convergence and properties of the algorithm are investigated by means
of numerical simulations.",A rational decentralized generalized Nash equilibrium seeking for energy markets,2018-06-04 15:33:13,"Lorenzo Nespoli, Matteo Salani, Vasco Medici","http://arxiv.org/abs/1806.01072v1, http://arxiv.org/pdf/1806.01072v1",cs.CE
35379,th,"We study online pricing algorithms for the Bayesian selection problem with
production constraints and its generalization to the laminar matroid Bayesian
online selection problem. Consider a firm producing (or receiving) multiple
copies of different product types over time. The firm can offer the products to
arriving buyers, where each buyer is interested in one product type and has a
private valuation drawn independently from a possibly different but known
distribution.
  Our goal is to find an adaptive pricing for serving the buyers that maximizes
the expected social-welfare (or revenue) subject to two constraints. First, at
any time the total number of sold items of each type is no more than the number
of produced items. Second, the total number of sold items does not exceed the
total shipping capacity. This problem is a special case of the well-known
matroid Bayesian online selection problem studied in [Kleinberg and Weinberg,
2012], when the underlying matroid is laminar.
  We give the first Polynomial-Time Approximation Scheme (PTAS) for the above
problem as well as its generalization to the laminar matroid Bayesian online
selection problem when the depth of the laminar family is bounded by a
constant. Our approach is based on rounding the solution of a hierarchy of
linear programming relaxations that systematically strengthen the commonly used
ex-ante linear programming formulation of these problems and approximate the
optimum online solution with any degree of accuracy. Our rounding algorithm
respects the relaxed constraints of higher-levels of the laminar tree only in
expectation, and exploits the negative dependency of the selection rule of
lower-levels to achieve the required concentration that guarantees the
feasibility with high probability.",Nearly Optimal Pricing Algorithms for Production Constrained and Laminar Bayesian Selection,2018-07-15 05:19:33,"Nima Anari, Rad Niazadeh, Amin Saberi, Ali Shameli","http://arxiv.org/abs/1807.05477v1, http://arxiv.org/pdf/1807.05477v1",cs.GT
35380,th,"Direct democracy is a special case of an ensemble of classifiers, where every
person (classifier) votes on every issue. This fails when the average voter
competence (classifier accuracy) falls below 50%, which can happen in noisy
settings where voters have only limited information, or when there are multiple
topics and the average voter competence may not be high enough for some topics.
Representative democracy, where voters choose representatives to vote, can be
an elixir in both these situations. Representative democracy is a specific way
to improve the ensemble of classifiers. We introduce a mathematical model for
studying representative democracy, in particular understanding the parameters
of a representative democracy that gives maximum decision making capability.
Our main result states that under general and natural conditions,
  1. Representative democracy can make the correct decisions simultaneously for
multiple noisy issues.
  2. When the cost of voting is fixed, the optimal representative democracy
requires that representatives are elected from constant sized groups: the
number of representatives should be linear in the number of voters.
  3. When the cost and benefit of voting are both polynomial, the optimal group
size is close to linear in the number of voters. This work sets the
mathematical foundation for studying the quality-quantity tradeoff in a
representative democracy-type ensemble (fewer highly qualified representatives
versus more less qualified representatives).",A Mathematical Model for Optimal Decisions in a Representative Democracy,2018-07-17 03:09:53,"Malik Magdon-Ismail, Lirong Xia","http://arxiv.org/abs/1807.06157v1, http://arxiv.org/pdf/1807.06157v1",cs.GT
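A minimal sketch of the ensemble-of-classifiers reading of direct democracy in the first sentence above: with n independent voters of common competence p, the probability that a simple majority vote is correct (the Condorcet-jury quantity), illustrating the failure when competence falls below 50%. The voter count and competence levels are illustrative.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters with accuracy p is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for p in (0.45, 0.55):
    print(p, round(majority_correct(101, p), 3))   # below 50% competence the majority usually errs
```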
35381,th,"Cooperative behavior in real social dilemmas is often perceived as a
phenomenon emerging from norms and punishment. To overcome this paradigm, we
highlight the interplay between the influence of social networks on
individuals, and the activation of spontaneous self-regulating mechanisms,
which may lead them to behave cooperatively, while interacting with others and
taking conflicting decisions over time. By extending Evolutionary game theory
over networks, we prove that cooperation partially or fully emerges whenever
self-regulating mechanisms are sufficiently stronger than social pressure.
Interestingly, even few cooperative individuals act as catalyzing agents for
the cooperation of others, thus activating a recruiting mechanism, eventually
driving the whole population to cooperate.",Self-regulation promotes cooperation in social networks,2018-07-20 17:06:43,"Dario Madeo, Chiara Mocenni","http://arxiv.org/abs/1807.07848v1, http://arxiv.org/pdf/1807.07848v1",physics.soc-ph
35383,th,"This paper introduces a unified framework called cooperative extensive form
games, which (i) generalizes standard non-cooperative games, and (ii) allows
for more complex coalition formation dynamics than previous concepts like
coalition-proof Nash equilibrium. Central to this framework is a novel solution
concept called cooperative equilibrium system (CES). CES differs from Nash
equilibrium in two important respects. First, a CES is immune to both
unilateral and multilateral `credible' deviations. Second, unlike Nash
equilibrium, whose stability relies on the assumption that the strategies of
non-deviating players are held fixed, CES allows for the possibility that
players may regroup and adjust their strategies in response to a deviation. The
main result establishes that every cooperative extensive form game, possibly
with imperfect information, possesses a CES. For games with perfect
information, the proof is constructive. This framework is broadly applicable in
contexts such as oligopolistic markets and dynamic political bargaining.",The strategy of conflict and cooperation,2018-08-21 06:27:42,Mehmet S. Ismail,"http://arxiv.org/abs/1808.06750v7, http://arxiv.org/pdf/1808.06750v7",econ.TH
35384,th,"We study a two-stage model, in which students are 1) admitted to college on
the basis of an entrance exam which is a noisy signal about their
qualifications (type), and then 2) those students who were admitted to college
can be hired by an employer as a function of their college grades, which are an
independently drawn noisy signal of their type. Students are drawn from one of
two populations, which might have different type distributions. We assume that
the employer at the end of the pipeline is rational, in the sense that it
computes a posterior distribution on student type conditional on all
information that it has available (college admissions, grades, and group
membership), and makes a decision based on posterior expectation. We then study
what kinds of fairness goals can be achieved by the college by setting its
admissions rule and grading policy. For example, the college might have the
goal of guaranteeing equal opportunity across populations: that the probability
of passing through the pipeline and being hired by the employer should be
independent of group membership, conditioned on type. Alternately, the college
might have the goal of incentivizing the employer to have a group blind hiring
rule. We show that both goals can be achieved when the college does not report
grades. On the other hand, we show that under reasonable conditions, these
goals are impossible to achieve even in isolation when the college uses an
(even minimally) informative grading policy.",Downstream Effects of Affirmative Action,2018-08-27 22:15:15,"Sampath Kannan, Aaron Roth, Juba Ziani","http://arxiv.org/abs/1808.09004v1, http://arxiv.org/pdf/1808.09004v1",cs.GT
35385,th,"We map the recently proposed notions of algorithmic fairness to economic
models of Equality of opportunity (EOP)---an extensively studied ideal of
fairness in political philosophy. We formally show that through our conceptual
mapping, many existing definitions of algorithmic fairness, such as predictive
value parity and equality of odds, can be interpreted as special cases of EOP.
In this respect, our work serves as a unifying moral framework for
understanding existing notions of algorithmic fairness. Most importantly, this
framework allows us to explicitly spell out the moral assumptions underlying
each notion of fairness, and interpret recent fairness impossibility results in
a new light. Last but not least and inspired by luck egalitarian models of EOP,
we propose a new family of measures for algorithmic fairness. We illustrate our
proposal empirically and show that employing a measure of algorithmic
(un)fairness when its underlying moral assumptions are not satisfied, can have
devastating consequences for the disadvantaged group's welfare.",A Moral Framework for Understanding of Fair ML through Economic Models of Equality of Opportunity,2018-09-10 18:33:51,"Hoda Heidari, Michele Loi, Krishna P. Gummadi, Andreas Krause","http://arxiv.org/abs/1809.03400v2, http://arxiv.org/pdf/1809.03400v2",cs.LG
35386,th,"In nature and society problems arise when different interests are difficult
to reconcile, which are modeled in game theory. While most applications assume
uncorrelated games, a more detailed modeling is necessary to consider the
correlations that influence the decisions of the players. The current theory
for correlated games, however, enforces the players to obey the instructions
from a third party or ""correlation device"" to reach equilibrium, but this
cannot be achieved for all initial correlations. We extend here the existing
framework of correlated games and find that there are other interesting and
previously unknown Nash equilibria that make use of correlations to obtain the
best payoff. This is achieved by allowing the players the freedom to follow or
not to follow the suggestions of the correlation device. By assigning
independent probabilities to follow every possible suggestion, the players
engage in a response game that turns out to have a rich structure of Nash
equilibria that goes beyond the correlated equilibrium and mixed-strategy
solutions. We determine the Nash equilibria for all possible correlated
Snowdrift games, which we find to be describable by Ising Models in thermal
equilibrium. We believe that our approach paves the way to a study of
correlations in games that uncovers the existence of interesting underlying
interaction mechanisms, without compromising the independence of the players.",Nash Equilibria in the Response Strategy of Correlated Games,2018-09-07 15:52:12,"A. D. Correia, H. T. C. Stoof","http://dx.doi.org/10.1038/s41598-018-36562-2, http://arxiv.org/abs/1809.03860v1, http://arxiv.org/pdf/1809.03860v1",physics.soc-ph
35390,th,"Optimal mechanism design enjoys a beautiful and well-developed theory, and
also a number of killer applications. Rules of thumb produced by the field
influence everything from how governments sell wireless spectrum licenses to
how the major search engines auction off online advertising. There are,
however, some basic problems for which the traditional optimal mechanism design
approach is ill-suited---either because it makes overly strong assumptions, or
because it advocates overly complex designs. This survey reviews several common
issues with optimal mechanisms, including exorbitant communication,
computation, and informational requirements; and it presents several examples
demonstrating that passing to the relaxed goal of an approximately optimal
mechanism allows us to reason about fundamental questions that seem out of
reach of the traditional theory.",Approximately Optimal Mechanism Design,2018-12-31 19:54:47,"Tim Roughgarden, Inbal Talgam-Cohen","http://arxiv.org/abs/1812.11896v2, http://arxiv.org/pdf/1812.11896v2",econ.TH
35391,th,"This paper shows that the fuzzy temporal logic can model figures of thought
to describe decision-making behaviors. In order to exemplify, some economic
behaviors observed experimentally were modeled from problems of choice
containing time, uncertainty and fuzziness. Related to time preference, it is
noted that the subadditive discounting is mandatory in positive rewards
situations and, consequently, results in the magnitude effect and time effect,
where the latter has a stronger discounting for earlier delay periods (as in, one
hour, one day), but a weaker discounting for longer delay periods (for
instance, six months, one year, ten years). In addition, it is possible to
explain the preference reversal (change of preference when two rewards proposed
on different dates are shifted in the time). Related to the Prospect Theory, it
is shown that the risk seeking and the risk aversion are magnitude dependents,
where the risk seeking may disappear when the values to be lost are very high.",Decision-making and Fuzzy Temporal Logic,2019-01-07 21:56:24,José Cláudio do Nascimento,"http://arxiv.org/abs/1901.01970v2, http://arxiv.org/pdf/1901.01970v2",cs.AI
35394,th,"As financial instruments grow in complexity more and more information is
neglected by risk optimization practices. This brings down a curtain of opacity
on the origination of risk, that has been one of the main culprits in the
2007-2008 global financial crisis. We discuss how the loss of transparency may
be quantified in bits, using information theoretic concepts. We find that {\em
i)} financial transformations imply large information losses, {\em ii)}
portfolios are more information sensitive than individual stocks only if
fundamental analysis is sufficiently informative on the co-movement of assets,
that {\em iii)} securitisation, in the relevant range of parameters, yields
assets that are less information sensitive than the original stocks and that
{\em iv)} when diversification (or securitisation) is at its best (i.e. when
assets are uncorrelated) information losses are maximal. We also address the
issue of whether pricing schemes can be introduced to deal with information
losses. This is relevant for the transmission of incentives to gather
information on the risk origination side. Within a simple mean variance scheme,
we find that market incentives are not generally sufficient to make information
harvesting sustainable.",Lost in Diversification,2019-01-28 19:39:51,"Marco Bardoscia, Daniele d'Arienzo, Matteo Marsili, Valerio Volpati","http://dx.doi.org/10.1016/j.crhy.2019.05.015, http://arxiv.org/abs/1901.09795v1, http://arxiv.org/pdf/1901.09795v1",q-fin.GN
35395,th,"The standard solution concept for stochastic games is Markov perfect
equilibrium (MPE); however, its computation becomes intractable as the number
of players increases. Instead, we consider mean field equilibrium (MFE) that
has been popularized in the recent literature. MFE takes advantage of averaging
effects in models with a large number of players. We make three main
contributions. First, our main result provides conditions that ensure the
uniqueness of an MFE. We believe this uniqueness result is the first of its
nature in the class of models we study. Second, we generalize previous MFE
existence results. Third, we provide general comparative statics results. We
apply our results to dynamic oligopoly models and to heterogeneous agent
macroeconomic models commonly used in previous work in economics and
operations.","Mean Field Equilibrium: Uniqueness, Existence, and Comparative Statics",2019-03-06 12:56:58,"Bar Light, Gabriel Weintraub","http://arxiv.org/abs/1903.02273v3, http://arxiv.org/pdf/1903.02273v3",econ.TH
35397,th,"We consider a stochastic game of contribution to the common good in which the
players have continuous control over the degree of contribution, and we examine
the gradualism arising from the free rider effect. This game belongs to the
class of variable concession games which generalize wars of attrition.
Previously known examples of variable concession games in the literature yield
equilibria characterized by singular control strategies without any delay of
concession. However, these no-delay equilibria are in contrast to mixed
strategy equilibria of canonical wars of attrition in which each player delays
concession by a randomized time. We find that a variable contribution game with
a single state variable, which extends the Nerlove-Arrow model, possesses an
equilibrium characterized by regular control strategies that result in a
gradual concession. This equilibrium naturally generalizes the mixed strategy
equilibria from the canonical wars of attrition. Stochasticity of the problem
accentuates the qualitative difference between a singular control solution and
a regular control equilibrium solution. We also find that asymmetry between the
players can mitigate the inefficiency caused by the gradualism.",Game of Variable Contributions to the Common Good under Uncertainty,2019-04-01 01:41:25,H. Dharma Kwon,"http://dx.doi.org/10.1287/opre.2019.1879, http://arxiv.org/abs/1904.00500v1, http://arxiv.org/pdf/1904.00500v1",math.OC
35399,th,"Journal ranking is becoming more important in assessing the quality of
academic research. Several indices have been suggested for this purpose,
typically on the basis of a citation graph between the journals. We follow an
axiomatic approach and find an impossibility theorem: any self-consistent
ranking method, which satisfies a natural monotonicity property, should depend
on the level of aggregation. Our result presents a trade-off between two
axiomatic properties and reveals a dilemma of aggregation.",Journal ranking should depend on the level of aggregation,2019-04-08 16:27:34,László Csató,"http://dx.doi.org/10.1016/j.joi.2019.100975, http://arxiv.org/abs/1904.06300v4, http://arxiv.org/pdf/1904.06300v4",cs.DL
35402,th,"We develop original models to study interacting agents in financial markets
and in social networks. Within these models randomness is vital as a form of
shock or news that decays with time. Agents learn from their observations and
learning ability to interpret news or private information in time-varying
networks. Under general assumption on the noise, a limit theorem is developed
for the generalised DeGroot framework for certain type of conditions governing
the learning. In this context, the agents beliefs (properly scaled) converge in
distribution that is not necessarily normal. Fresh insights are gained not only
from proposing a new setting for social learning models but also from using
different techniques to study discrete time random linear dynamical systems.",Averaging plus Learning Models and Their Asymptotics,2019-04-17 11:34:14,"Ionel Popescu, Tushar Vaidya","http://dx.doi.org/10.1098/rspa.2022.0681, http://arxiv.org/abs/1904.08131v5, http://arxiv.org/pdf/1904.08131v5",q-fin.MF
35403,th,"We consider an agent who represents uncertainty about the environment via a
possibly misspecified model. Each period, the agent takes an action, observes a
consequence, and uses Bayes' rule to update her belief about the environment.
This framework has become increasingly popular in economics to study behavior
driven by incorrect or biased beliefs. Current literature has characterized
asymptotic behavior under fairly specific assumptions. By first showing that
the key element to predict the agent's behavior is the frequency of her past
actions, we are able to characterize asymptotic behavior in general settings in
terms of the solutions of a generalization of a differential equation that
describes the evolution of the frequency of actions. We then present a series
of implications that can be readily applied to economic applications, thus
providing off-the-shelf tools that can be used to characterize behavior under
misspecified learning.",Asymptotic Behavior of Bayesian Learners with Misspecified Models,2019-04-18 03:58:32,"Ignacio Esponda, Demian Pouzo, Yuichi Yamamoto","http://arxiv.org/abs/1904.08551v2, http://arxiv.org/pdf/1904.08551v2",econ.TH
35404,th,"The literature specifies extensive-form games in many styles, and eventually
I hope to formally translate games across those styles. Toward that end, this
paper defines $\mathbf{NCF}$, the category of node-and-choice forms. The
category's objects are extensive forms in essentially any style, and the
category's isomorphisms are made to accord with the literature's small handful
of ad hoc style equivalences.
  Further, this paper develops two full subcategories: $\mathbf{CsqF}$ for
forms whose nodes are choice-sequences, and $\mathbf{CsetF}$ for forms whose
nodes are choice-sets. I show that $\mathbf{NCF}$ is ""isomorphically enclosed""
in $\mathbf{CsqF}$ in the sense that each $\mathbf{NCF}$ form is isomorphic to
a $\mathbf{CsqF}$ form. Similarly, I show that $\mathbf{CsqF_{\tilde a}}$ is
isomorphically enclosed in $\mathbf{CsetF}$ in the sense that each
$\mathbf{CsqF}$ form with no-absentmindedness is isomorphic to a
$\mathbf{CsetF}$ form. The converses are found to be almost immediate, and the
resulting equivalences unify and simplify two ad hoc style equivalences in
Kline and Luckraz 2016 and Streufert 2019.
  Aside from the larger agenda, this paper already makes three practical
contributions. Style equivalences are made easier to derive by [1] a natural
concept of isomorphic invariance and [2] the composability of isomorphic
enclosures. In addition, [3] some new consequences of equivalence are
systematically deduced.","The Category of Node-and-Choice Forms, with Subcategories for Choice-Sequence Forms and Choice-Set Forms",2019-04-27 04:30:55,Peter A. Streufert,"http://dx.doi.org/10.1007/978-981-15-1342-8_2, http://arxiv.org/abs/1904.12085v1, http://arxiv.org/pdf/1904.12085v1",econ.TH
35405,th,"We study repeated independent Blackwell experiments; standard examples
include drawing multiple samples from a population, or performing a measurement
in different locations. In the baseline setting of a binary state of nature, we
compare experiments in terms of their informativeness in large samples.
Addressing a question due to Blackwell (1951), we show that generically an
experiment is more informative than another in large samples if and only if it
has higher Renyi divergences.
  We apply our analysis to the problem of measuring the degree of dissimilarity
between distributions by means of divergences. A useful property of Renyi
divergences is their additivity with respect to product distributions. Our
characterization of Blackwell dominance in large samples implies that every
additive divergence that satisfies the data processing inequality is an
integral of Renyi divergences.",From Blackwell Dominance in Large Samples to Renyi Divergences and Back Again,2019-06-07 02:15:02,"Xiaosheng Mu, Luciano Pomatto, Philipp Strack, Omer Tamuz","http://arxiv.org/abs/1906.02838v5, http://arxiv.org/pdf/1906.02838v5",math.ST
35407,th,"With vast databases at their disposal, private tech companies can compete
with public statistical agencies to provide population statistics. However,
private companies face different incentives to provide high-quality statistics
and to protect the privacy of the people whose data are used. When both privacy
protection and statistical accuracy are public goods, private providers tend to
produce at least one suboptimally, but it is not clear which. We model a firm
that publishes statistics under a guarantee of differential privacy. We prove
that provision by the private firm results in inefficiently low data quality in
this framework.",Suboptimal Provision of Privacy and Statistical Accuracy When They are Public Goods,2019-06-22 02:42:46,"John M. Abowd, Ian M. Schmutte, William Sexton, Lars Vilhuber","http://arxiv.org/abs/1906.09353v1, http://arxiv.org/pdf/1906.09353v1",econ.TH
35408,th,"Poker is a large complex game of imperfect information, which has been
singled out as a major AI challenge problem. Recently there has been a series
of breakthroughs culminating in agents that have successfully defeated the
strongest human players in two-player no-limit Texas hold 'em. The strongest
agents are based on algorithms for approximating Nash equilibrium strategies,
which are stored in massive binary files and unintelligible to humans. A recent
line of research has explored approaches for extrapolating knowledge from
strong game-theoretic strategies that can be understood by humans. This would
be useful when humans are the ultimate decision maker and allow humans to make
better decisions from massive algorithmically-generated strategies. Using
techniques from machine learning we have uncovered a new simple, fundamental
rule of poker strategy that leads to a significant improvement in performance
over the best prior rule and can also easily be applied by human players.",Most Important Fundamental Rule of Poker Strategy,2019-06-09 01:42:24,"Sam Ganzfried, Max Chiswick","http://arxiv.org/abs/1906.09895v3, http://arxiv.org/pdf/1906.09895v3",cs.AI
35409,th,"We propose new Degroot-type social learning models with feedback in a
continuous time, to investigate the effect of a noisy information source on
consensus formation in a social network. Unlike the standard Degroot framework,
noisy information models destroy consensus formation. On the other hand, the
noisy opinion dynamics converge to the equilibrium distribution that
encapsulates correlations among agents' opinions. Interestingly, such an
equilibrium distribution is also a non-equilibrium steady state (NESS) with a
non-zero probabilistic current loop. Thus, noisy information source leads to a
NESS at long times that encodes persistent correlated opinion dynamics of
learning agents. Our model provides a simple realization of NESS in the context
of social learning. Other phenomena such as synchronization of opinions when
agents are subject to a common noise are also studied.",Broken Detailed Balance and Non-Equilibrium Dynamics in Noisy Social Learning Models,2019-06-27 11:00:36,"Tushar Vaidya, Thiparat Chotibut, Georgios Piliouras","http://dx.doi.org/10.1016/j.physa.2021.125818, http://arxiv.org/abs/1906.11481v2, http://arxiv.org/pdf/1906.11481v2",physics.soc-ph
35411,th,"We study the problem of allocating divisible bads (chores) among multiple
agents with additive utilities when monetary transfers are not allowed. The
competitive rule is known for its remarkable fairness and efficiency properties
in the case of goods. This rule was extended to chores in prior work by
Bogomolnaia, Moulin, Sandomirskiy, and Yanovskaya (2017). The rule produces
Pareto optimal and envy-free allocations for both goods and chores. In the case
of goods, the outcome of the competitive rule can be easily computed.
Competitive allocations solve the Eisenberg-Gale convex program; hence the
outcome is unique and can be approximately found by standard gradient methods.
An exact algorithm that runs in polynomial time in the number of agents and
goods was given by Orlin (2010).
  In the case of chores, the competitive rule does not solve any convex
optimization problem; instead, competitive allocations correspond to local
minima, local maxima, and saddle points of the Nash social welfare on the
Pareto frontier of the set of feasible utilities. The Pareto frontier may
contain many such points; consequently, the competitive rule's outcome is no
longer unique.
  In this paper, we show that all the outcomes of the competitive rule for
chores can be computed in strongly polynomial time if either the number of
agents or the number of chores is fixed. The approach is based on a combination
of three ideas: all consumption graphs of Pareto optimal allocations can be
listed in polynomial time; for a given consumption graph, a candidate for a
competitive utility profile can be constructed via an explicit formula; each
candidate can be checked for competitiveness, and the allocation can be
reconstructed using a maximum flow computation.
  Our algorithm gives an approximately-fair allocation of indivisible chores by
the rounding technique of Barman and Krishnamurthy (2018).",Algorithms for Competitive Division of Chores,2019-07-03 09:55:04,"Simina Brânzei, Fedor Sandomirskiy","http://arxiv.org/abs/1907.01766v2, http://arxiv.org/pdf/1907.01766v2",cs.GT
35413,th,"We study a model of vendors competing to sell a homogeneous product to
customers spread evenly along a linear city. This model is based on Hotelling's
celebrated paper in 1929. Our aim in this paper is to present a necessary and
sufficient condition for the equilibrium. This yields a representation for the
equilibrium. To achieve this, we first formulate the model mathematically.
Next, we prove that the condition holds if and only if the vendors are in equilibrium.",Necessary and sufficient condition for equilibrium of the Hotelling model,2019-07-14 13:26:40,"Satoshi Hayashi, Naoki Tsuge","http://arxiv.org/abs/1907.06200v1, http://arxiv.org/pdf/1907.06200v1",econ.TH
35414,th,"We investigate a multi-household DSGE model in which past aggregate
consumption impacts the confidence, and therefore consumption propensity, of
individual households. We find that such a minimal setup is extremely rich, and
leads to a variety of realistic output dynamics: high output with no crises;
high output with increased volatility and deep, short lived recessions;
alternation of high and low output states where relatively mild drop in
economic conditions can lead to a temporary confidence collapse and steep
decline in economic activity. The crisis probability depends exponentially on
the parameters of the model, which means that markets cannot efficiently price
the associated risk premium. We conclude by stressing that within our
framework, {\it narratives} become an important monetary policy tool, that can
help steering the economy back on track.","Confidence Collapse in a Multi-Household, Self-Reflexive DSGE Model",2019-07-17 13:13:35,"Federico Guglielmo Morelli, Michael Benzaquen, Marco Tarzia, Jean-Philippe Bouchaud","http://arxiv.org/abs/1907.07425v1, http://arxiv.org/pdf/1907.07425v1",q-fin.GN
35416,th,"How does supply uncertainty affect the structure of supply chain networks? To
answer this question we consider a setting where retailers and suppliers must
establish a costly relationship with each other prior to engaging in trade.
Suppliers, with uncertain yield, announce wholesale prices, while retailers
must decide which suppliers to link to based on their wholesale prices.
Subsequently, retailers compete with each other in Cournot fashion to sell the
acquired supply to consumers. We find that in equilibrium retailers concentrate
their links among too few suppliers, i.e., there is insufficient
diversification of the supply base. We find that either a reduction of supply
variance or an increase of mean supply increases a supplier's profit. However,
these two ways of improving service have qualitatively different effects on
welfare: improvement of the expected supply by a supplier makes everyone better
off, whereas improvement of supply variance lowers consumer surplus.",Yield Uncertainty and Strategic Formation of Supply Chain Networks,2019-07-20 19:19:46,"Victor Amelkin, Rakesh Vohra","http://dx.doi.org/10.1002/net.22186, http://arxiv.org/abs/1907.09943v1, http://arxiv.org/pdf/1907.09943v1",cs.GT
35417,th,"Motivated by the problem of market power in electricity markets, we
introduced in previous works a mechanism for simplified markets of two agents
with linear cost. In standard procurement auctions, the market power resulting
from the quadratic transmission losses allows the producers to bid above their
true values, which are their production cost. The mechanism proposed in the
previous paper optimally reduces the producers' margin to the society's
benefit. In this paper, we extend those results to a more general market made
of a finite number of agents with piecewise linear cost functions, which makes
the problem more difficult, but simultaneously more realistic. We show that the
methodology works for a large class of externalities. We also provide an
algorithm to solve the principal allocation problem. Our contribution provides
a benchmark to assess the sub-optimality of the mechanisms used in practice.",Optimal auctions for networked markets with externalities,2019-07-23 21:02:28,"Benjamin Heymann, Alejandro Jofré","http://arxiv.org/abs/1907.10080v1, http://arxiv.org/pdf/1907.10080v1",econ.TH
35418,th,"Exchanges acquire excess processing capacity to accommodate trading activity
surges associated with zero-sum high-frequency trader (HFT) ""duels."" The idle
capacity's opportunity cost is an externality of low-latency trading. We build
a model of decentralized exchanges (DEX) with flexible capacity. On DEX, HFTs
acquire speed in real-time from peer-to-peer networks. The price of speed
surges during activity bursts, as HFTs simultaneously race to market. Relative
to centralized exchanges, HFTs acquire more speed on DEX, but for shorter
timespans. Low-latency ""sprints"" speed up price discovery without harming
liquidity. Overall, speed rents decrease and fewer resources are locked-in to
support zero-sum HFT trades.",Liquid Speed: On-Demand Fast Trading at Distributed Exchanges,2019-07-24 23:56:12,"Michael Brolley, Marius Zoican","http://arxiv.org/abs/1907.10720v1, http://arxiv.org/pdf/1907.10720v1",q-fin.TR
35419,th,"The stochastic multi-armed bandit model captures the tradeoff between
exploration and exploitation. We study the effects of competition and
cooperation on this tradeoff. Suppose there are $k$ arms and two players, Alice
and Bob. In every round, each player pulls an arm, receives the resulting
reward, and observes the choice of the other player but not their reward.
Alice's utility is $\Gamma_A + \lambda \Gamma_B$ (and similarly for Bob), where
$\Gamma_A$ is Alice's total reward and $\lambda \in [-1, 1]$ is a cooperation
parameter. At $\lambda = -1$ the players are competing in a zero-sum game, at
$\lambda = 1$, they are fully cooperating, and at $\lambda = 0$, they are
neutral: each player's utility is their own reward. The model is related to the
economics literature on strategic experimentation, where usually players
observe each other's rewards.
  With discount factor $\beta$, the Gittins index reduces the one-player
problem to the comparison between a risky arm, with a prior $\mu$, and a
predictable arm, with success probability $p$. The value of $p$ where the
player is indifferent between the arms is the Gittins index $g = g(\mu,\beta) >
m$, where $m$ is the mean of the risky arm.
  We show that competing players explore less than a single player: there is
$p^* \in (m, g)$ so that for all $p > p^*$, the players stay at the predictable
arm. However, the players are not myopic: they still explore for some $p > m$.
On the other hand, cooperating players explore more than a single player. We
also show that neutral players learn from each other, receiving strictly higher
total rewards than they would playing alone, for all $ p\in (p^*, g)$, where
$p^*$ is the threshold from the competing case.
  Finally, we show that competing and neutral players eventually settle on the
same arm in every Nash equilibrium, while this can fail for cooperating
players.","Multiplayer Bandit Learning, from Competition to Cooperation",2019-08-03 11:20:54,"Simina Brânzei, Yuval Peres","http://arxiv.org/abs/1908.01135v3, http://arxiv.org/pdf/1908.01135v3",cs.GT
35420,th,"We outline a token model for Truebit, a retrofitting, blockchain enhancement
which enables secure, community-based computation. The model addresses the
challenge of stable task pricing, as raised in the Truebit whitepaper, without
appealing to external oracles, exchanges, or hierarchical nodes. The system's
sustainable economics and fair market pricing derive from a mintable token
format which leverages existing tokens for liquidity. Finally, we introduce a
governance layer whose lifecycle culminates in permanent dissolution into
utility tokens, thereby tending the network towards autonomous
decentralization.",Bootstrapping a stable computation token,2019-08-08 09:41:13,"Jason Teutsch, Sami Mäkelä, Surya Bakshi","http://arxiv.org/abs/1908.02946v1, http://arxiv.org/pdf/1908.02946v1",cs.CR
35421,th,"In December 2015, a bounty emerged to establish both reliable communication
and secure transfer of value between the Dogecoin and Ethereum blockchains.
This prized ""Dogethereum bridge"" would allow parties to ""lock"" a DOGE coin on
Dogecoin and in exchange receive a newly minted WOW token in Ethereum. Any
subsequent owner of the WOW token could burn it and, in exchange, earn the
right to ""unlock"" a DOGE on Dogecoin.
  We describe an efficient, trustless, and retrofitting Dogethereum
construction which requires no fork but rather employs economic collateral to
achieve a ""lock"" operation in Dogecoin. The protocol relies on bulletproofs,
Truebit, and parametrized tokens to efficiently and trustlessly relay events
from the ""true"" Dogecoin blockchain into Ethereum. The present construction not
only enables cross-platform exchange but also allows Ethereum smart contracts
to trustlessly access Dogecoin. A similar technique adds Ethereum-based smart
contracts to Bitcoin and Bitcoin data to Ethereum smart contracts.",Retrofitting a two-way peg between blockchains,2019-08-12 07:41:13,"Jason Teutsch, Michael Straka, Dan Boneh","http://arxiv.org/abs/1908.03999v1, http://arxiv.org/pdf/1908.03999v1",cs.CR
35422,th,"Ethereum has emerged as a dynamic platform for exchanging cryptocurrency
tokens. While token crowdsales cannot simultaneously guarantee buyers both
certainty of valuation and certainty of participation, we show that if each
token buyer specifies a desired purchase quantity at each valuation then
everyone can successfully participate. Our implementation introduces smart
contract techniques which recruit outside participants in order to circumvent
computational complexity barriers.",Interactive coin offerings,2019-08-12 07:30:55,"Jason Teutsch, Vitalik Buterin, Christopher Brown","http://arxiv.org/abs/1908.04295v1, http://arxiv.org/pdf/1908.04295v1",econ.TH
35423,th,"We provide elementary proofs of several results concerning the possible
outcomes arising from a fixed profile within the class of positional voting
systems. Our arguments enable a simple and explicit construction of paradoxical
profiles, and we also demonstrate how to choose weights that realize desirable
results from a given profile. The analysis ultimately boils down to thinking
about positional voting systems in terms of doubly stochastic matrices.",Positional Voting and Doubly Stochastic Matrices,2019-08-18 22:48:33,"Jacqueline Anderson, Brian Camara, John Pike","http://arxiv.org/abs/1908.06506v2, http://arxiv.org/pdf/1908.06506v2",math.CO
35431,th,"For a discrete time Markov chain and in line with Strotz' consistent planning
we develop a framework for problems of optimal stopping that are
time-inconsistent due to the consideration of a non-linear function of an
expected reward. We consider pure and mixed stopping strategies and a (subgame
perfect Nash) equilibrium. We provide different necessary and sufficient
equilibrium conditions including a verification theorem. Using a fixed point
argument we provide equilibrium existence results. We adapt and study the
notion of the myopic adjustment process and introduce different kinds of
equilibrium stability. We show that neither existence nor uniqueness of
equilibria should generally be expected. The developed theory is applied to a
mean-variance problem and a variance problem.","Time-inconsistent stopping, myopic adjustment & equilibrium stability: with a mean-variance application",2019-09-26 09:17:03,"Sören Christensen, Kristoffer Lindensjö","http://arxiv.org/abs/1909.11921v2, http://arxiv.org/pdf/1909.11921v2",math.OC
35426,th,"The Shapley value has become a popular method to attribute the prediction of
a machine-learning model on an input to its base features. The use of the
Shapley value is justified by citing [16] showing that it is the \emph{unique}
method that satisfies certain good properties (\emph{axioms}).
  There are, however, a multiplicity of ways in which the Shapley value is
operationalized in the attribution problem. These differ in how they reference
the model, the training data, and the explanation context. These give very
different results, rendering the uniqueness result meaningless. Furthermore, we
find that previously proposed approaches can produce counterintuitive
attributions in theory and in practice---for instance, they can assign non-zero
attributions to features that are not even referenced by the model.
  In this paper, we use the axiomatic approach to study the differences between
some of the many operationalizations of the Shapley value for attribution, and
propose a technique called Baseline Shapley (BShap) that is backed by a proper
uniqueness result. We also contrast BShap with Integrated Gradients, another
extension of Shapley value to the continuous setting.",The many Shapley values for model explanation,2019-08-22 19:13:10,"Mukund Sundararajan, Amir Najmi","http://arxiv.org/abs/1908.08474v2, http://arxiv.org/pdf/1908.08474v2",cs.AI
35427,th,"In this article we solve the problem of maximizing the expected utility of
future consumption and terminal wealth to determine the optimal pension or
life-cycle fund strategy for a cohort of pension fund investors. The setup is
strongly related to a DC pension plan where additionally (individual)
consumption is taken into account. The consumption rate is subject to a
time-varying minimum level and terminal wealth is subject to a terminal floor.
Moreover, the preference between consumption and terminal wealth as well as the
intertemporal coefficient of risk aversion are time-varying and therefore
depend on the age of the considered pension cohort. The optimal consumption and
investment policies are calculated in the case of a Black-Scholes financial
market framework and hyperbolic absolute risk aversion (HARA) utility
functions. We generalize Ye (2008) (2008 American Control Conference, 356-362)
by adding an age-dependent coefficient of risk aversion and extend Steffensen
(2011) (Journal of Economic Dynamics and Control, 35(5), 659-667), Hentschel
(2016) (Doctoral dissertation, Ulm University) and Aase (2017) (Stochastics,
89(1), 115-141) by considering consumption in combination with terminal wealth
and allowing for consumption and terminal wealth floors via an application of
HARA utility functions. A case study on fitting several models to realistic,
time-dependent life-cycle consumption and relative investment profiles shows
that only our extended model with time-varying preference parameters provides
sufficient flexibility for an adequate fit. This is of particular interest to
life-cycle products for (private) pension investments or pension insurance in
general.",Optimal life-cycle consumption and investment decisions under age-dependent risk preferences,2019-08-27 04:17:52,"Andreas Lichtenstern, Pavel V. Shevchenko, Rudi Zagst","http://dx.doi.org/10.1007/s11579-020-00276-9, http://arxiv.org/abs/1908.09976v1, http://arxiv.org/pdf/1908.09976v1",q-fin.MF
35429,th,"Our research is concerned with studying behavioural changes within a dynamic
system, i.e. health care, and their effects on the decision-making process.
Evolutionary game theory is applied to investigate the most probable
strategy(ies) adopted by individuals in a finite population based on the
interactions among them with an eye to modelling behaviour using the following
metrics: cost of investment, cost of management, cost of treatment, reputation
benefit for the provider(s), and the gained health benefit for the patient.",Modelling Cooperation in a Dynamic Healthcare System,2019-09-06 20:23:39,"Zainab Alalawi, Yifeng Zeng, The Anh Han, Aiman Elragig","http://arxiv.org/abs/1909.03070v1, http://arxiv.org/pdf/1909.03070v1",cs.GT
35430,th,"We introduce and analyze new envy-based fairness concepts for agents with
weights that quantify their entitlements in the allocation of indivisible
items. We propose two variants of weighted envy-freeness up to one item (WEF1):
strong, where envy can be eliminated by removing an item from the envied
agent's bundle, and weak, where envy can be eliminated either by removing an
item (as in the strong version) or by replicating an item from the envied
agent's bundle in the envying agent's bundle. We show that for additive
valuations, an allocation that is both Pareto optimal and strongly WEF1 always
exists and can be computed in pseudo-polynomial time; moreover, an allocation
that maximizes the weighted Nash social welfare may not be strongly WEF1, but
always satisfies the weak version of the property. Moreover, we establish that
a generalization of the round-robin picking sequence algorithm produces in
polynomial time a strongly WEF1 allocation for an arbitrary number of agents;
for two agents, we can efficiently achieve both strong WEF1 and Pareto
optimality by adapting the adjusted winner procedure. Our work highlights
several aspects in which weighted fair division is richer and more challenging
than its unweighted counterpart.",Weighted Envy-Freeness in Indivisible Item Allocation,2019-09-23 20:44:59,"Mithun Chakraborty, Ayumi Igarashi, Warut Suksompong, Yair Zick","http://dx.doi.org/10.1145/3457166, http://arxiv.org/abs/1909.10502v7, http://arxiv.org/pdf/1909.10502v7",cs.AI
35432,th,"This paper studies dynamic mechanism design in a quasilinear Markovian
environment and analyzes a direct mechanism model of a principal-agent
framework in which the agent is allowed to exit at any period. We consider that
the agent's private information, referred to as state, evolves over time. The
agent makes decisions of whether to stop or continue and what to report at each
period. The principal, on the other hand, chooses decision rules consisting of
an allocation rule and a set of payment rules to maximize her ex-ante expected
payoff. In order to influence the agent's stopping decision, one of the
terminal payment rules is posted-price, i.e., it depends only on the realized
stopping time of the agent. We define the incentive compatibility in this
dynamic environment in terms of Bellman equations, which is then simplified by
establishing a one-shot deviation principle. Given the optimality of the
stopping rule, a sufficient condition for incentive compatibility is obtained
by constructing the state-dependent payment rules in terms of a set of
functions parameterized by the allocation rule. A necessary condition is
derived from the envelope theorem, which explicitly formulates the state-dependent
payment rules in terms of allocation rules. A class of monotone environment is
considered to characterize the optimal stopping by a threshold rule. The
posted-price payment rules are then pinned down in terms of the allocation rule
and the threshold function up to a constant. The incentive compatibility
constraints restrict the design of the posted-price payment rule through a regularity
condition.",On Incentive Compatibility in Dynamic Mechanism Design With Exit Option in a Markovian Environment,2019-09-30 17:05:50,"Tao Zhang, Quanyan Zhu","http://arxiv.org/abs/1909.13720v4, http://arxiv.org/pdf/1909.13720v4",math.DS
35435,th,"We use machine learning to provide a tractable measure of the amount of
predictable variation in the data that a theory captures, which we call its
""completeness."" We apply this measure to three problems: assigning certain
equivalents to lotteries, initial play in games, and human generation of random
sequences. We discover considerable variation in the completeness of existing
models, which sheds light on whether to focus on developing better models with
the same features or instead to look for new features that will improve
predictions. We also illustrate how and why completeness varies with the
experiments considered, which highlights the role played by the choice of which
experiments to run.",Measuring the Completeness of Theories,2019-10-15 22:46:54,"Drew Fudenberg, Jon Kleinberg, Annie Liang, Sendhil Mullainathan","http://arxiv.org/abs/1910.07022v1, http://arxiv.org/pdf/1910.07022v1",econ.TH
35437,th,"This survey explores the literature on game-theoretic models of network
formation under the hypothesis of mutual consent in link formation. The
introduction of consent in link formation imposes a coordination problem in the
network formation process. This survey explores the conclusions from this
theory and the various methodologies to avoid the main pitfalls. The main
insight originates from Myerson's work on mutual consent in link formation and
his main conclusion that the empty network (the network without any links)
always emerges as a strong Nash equilibrium in any game-theoretic model of
network formation under mutual consent and positive link formation costs.
Jackson and Wolinsky introduced a cooperative framework to avoid this main
pitfall. They devised the notion of a pairwise stable network to arrive at
equilibrium networks that are mainly non-trivial. Unfortunately, this notion of
pairwise stability requires coordinated action by pairs of decision makers in
link formation. I survey the possible solutions in a purely non-cooperative
framework of network formation under mutual consent by exploring potential
refinements of the standard Nash equilibrium concept to explain the emergence
of non-trivial networks. This includes the notions of unilateral and monadic
stability. The first one is founded on advanced rational reasoning of
individuals about how others would respond to one's efforts to modify the
network. The latter incorporates trusting, boundedly rational behaviour into
the network formation process. The survey is concluded with an initial
exploration of external correlation devices as an alternative framework to
address mutual consent in network formation.",Building social networks under consent: A survey,2019-10-25 16:04:55,Robert P. Gilles,"http://arxiv.org/abs/1910.11693v2, http://arxiv.org/pdf/1910.11693v2",econ.TH
35439,th,"Mean field game theory studies the behavior of a large number of interacting
individuals in a game theoretic setting and has received a lot of attention in
the past decade (Lasry and Lions, Japanese journal of mathematics, 2007). In
this work, we derive mean field game partial differential equation systems from
deterministic microscopic agent dynamics. The dynamics are given by a
particular class of ordinary differential equations, for which an optimal
strategy can be computed (Bressan, Milan Journal of Mathematics, 2011). We use
the concept of Nash equilibria and apply the dynamic programming principle to
derive the mean field limit equations and we study the scaling behavior of the
system as the number of agents tends to infinity and find several mean field
game limits. In particular, our derivation avoids the notion of measure
derivatives. Novel scales are motivated by an example of an agent-based
financial market model.",Microscopic Derivation of Mean Field Game Models,2019-10-30 00:12:16,"Martin Frank, Michael Herty, Torsten Trimborn","http://arxiv.org/abs/1910.13534v1, http://arxiv.org/pdf/1910.13534v1",math.OC
35441,th,"The Rock-Paper-Scissors (RPS) game is a classic non-cooperative game widely
studied in terms of its theoretical analysis as well as in its applications,
ranging from sociology and biology to economics. Many experimental results of
the RPS game indicate that this game is better modelled by the discretized
best-response dynamics rather than continuous time dynamics. In this work we
show that the attractor of the discrete time best-response dynamics of the RPS
game is finite and periodic. Moreover we also describe the bifurcations of the
attractor and determine the exact number, period and location of the periodic
strategies.",Periodic attractor in the discrete time best-response dynamics of the Rock-Paper-Scissors game,2019-12-14 15:20:12,"José Pedro Gaivão, Telmo Peixe","http://dx.doi.org/10.1007/s13235-020-00371-y, http://arxiv.org/abs/1912.06831v1, http://arxiv.org/pdf/1912.06831v1",math.DS
35442,th,"For object reallocation problems, if preferences are strict but otherwise
unrestricted, the Top Trading Cycles rule (TTC) is the leading rule: It is the
only rule satisfying efficiency, individual rationality, and
strategy-proofness. However, on the subdomain of single-peaked preferences,
Bade (2019) defines a new rule, the ""crawler"", which also satisfies these three
properties. (i) The crawler selects an allocation by ""visiting"" agents in a
specific order. A natural ""dual"" rule can be defined by proceeding in the
reverse order. Our first theorem states that the crawler and its dual are
actually the same. (ii) Single-peakedness of a preference profile may in fact
hold for more than one order and its reverse. Our second theorem states that
the crawler is invariant to the choice of the order. (iii) For object
allocation problems (as opposed to reallocation problems), we define a
probabilistic version of the crawler by choosing an endowment profile at random
according to a uniform distribution, and applying the original definition. Our
third theorem states that this rule is the same as the ""random priority rule"".",The Crawler: Three Equivalence Results for Object (Re)allocation Problems when Preferences Are Single-peaked,2019-12-14 22:27:12,"Yuki Tamura, Hadi Hosseini","http://arxiv.org/abs/1912.06909v2, http://arxiv.org/pdf/1912.06909v2",econ.TH
35445,th,"In the past few years, several new matching models have been proposed and
studied that take into account complex distributional constraints. Relevant
lines of work include (1) school choice with diversity constraints where
students have (possibly overlapping) types and (2) hospital-doctor matching
where various regional quotas are imposed. In this paper, we present a
polynomial-time reduction to transform an instance of (1) to an instance of (2)
and we show how the feasibility and stability of corresponding matchings are
preserved under the reduction. Our reduction provides a formal connection
between two important strands of work on matching with distributional
constraints. We then apply the reduction in two ways. Firstly, we show that it
is NP-complete to check whether a feasible and stable outcome for (1) exists.
Due to our reduction, these NP-completeness results carry over to setting (2).
In view of this, we help unify some of the results that have been presented in
the literature. Secondly, if we have positive results for (2), then we have
corresponding results for (1). One key conclusion of our results is that
further developments on axiomatic and algorithmic aspects of hospital-doctor
matching with regional quotas will result in corresponding results for school
choice with diversity constraints.",From Matching with Diversity Constraints to Matching with Regional Quotas,2020-02-17 05:51:46,"Haris Aziz, Serge Gaspers, Zhaohong Sun, Toby Walsh","http://arxiv.org/abs/2002.06748v1, http://arxiv.org/pdf/2002.06748v1",cs.GT
35446,th,"In a second-price auction with i.i.d. (independent identically distributed)
bidder valuations, adding bidders increases expected buyer surplus if the
distribution of valuations has a sufficiently heavy right tail. While this does
not imply that a bidder in an auction should prefer for more bidders to join
the auction, it does imply that a bidder should prefer it in exchange for the
bidder being allowed to participate in more auctions. Also, for a heavy-tailed
valuation distribution, marginal expected seller revenue per added bidder
remains strong even when there are already many bidders.",Heavy Tails Make Happy Buyers,2020-02-20 23:47:42,Eric Bax,"http://arxiv.org/abs/2002.09014v1, http://arxiv.org/pdf/2002.09014v1",cs.GT
35447,th,"A Euclidean path integral is used to find an optimal strategy for a firm
under a Walrasian system, Pareto optimality and a non-cooperative feedback Nash
Equilibrium. We define dynamic optimal strategies and develop a Feynman type
path integration method to capture all non-additive convex strategies. We also
show that the method can solve the non-linear case, for example
Merton-Garman-Hamiltonian system, which the traditional Pontryagin maximum
principle cannot solve in closed form. Furthermore, under a Walrasian system we
are able to solve for the optimal strategy under a linear constraint with a
linear objective function with respect to strategy.",Optimization of a Dynamic Profit Function using Euclidean Path Integral,2020-02-21 19:22:48,"P. Pramanik, A. M. Polansky","http://arxiv.org/abs/2002.09394v1, http://arxiv.org/pdf/2002.09394v1",econ.TH
35448,th,"A well-known axiom for proportional representation is Proportionality of
Solid Coalitions (PSC). We characterize committees satisfying PSC as possible
outcomes of the Minimal Demand rule, which generalizes an approach pioneered by
Michael Dummett.",A characterization of proportionally representative committees,2020-02-22 04:48:56,"Haris Aziz, Barton E. Lee","http://arxiv.org/abs/2002.09598v2, http://arxiv.org/pdf/2002.09598v2",cs.GT
35449,th,"We study the set of possible joint posterior belief distributions of a group
of agents who share a common prior regarding a binary state, and who observe
some information structure. For two agents we introduce a quantitative version
of Aumann's Agreement Theorem, and show that it is equivalent to a
characterization of feasible distributions due to Dawid et al. (1995). For any
number of agents, we characterize feasible distributions in terms of a
""no-trade"" condition. We use these characterizations to study information
structures with independent posteriors. We also study persuasion problems with
multiple receivers, exploring the extreme feasible distributions.",Feasible Joint Posterior Beliefs,2020-02-26 12:01:10,"Itai Arieli, Yakov Babichenko, Fedor Sandomirskiy, Omer Tamuz","http://arxiv.org/abs/2002.11362v4, http://arxiv.org/pdf/2002.11362v4",econ.TH
35450,th,"We develop a theory for continuous-time non-Markovian stochastic control
problems which are inherently time-inconsistent. Their distinguishing feature
is that the classical Bellman optimality principle no longer holds. Our
formulation is cast within the framework of a controlled non-Markovian forward
stochastic differential equation, and a general objective functional setting.
We adopt a game-theoretic approach to study such problems, meaning that we seek
sub-game perfect Nash equilibrium points. As a first novelty of this work,
we introduce and motivate a refinement of the definition of equilibrium that
allows us to establish a direct and rigorous proof of an extended dynamic
programming principle, in the same spirit as in the classical theory. This in
turn allows us to introduce a system consisting of an infinite family of
backward stochastic differential equations analogous to the classical HJB
equation. We prove that this system is fundamental, in the sense that its
well-posedness is both necessary and sufficient to characterise the value
function and equilibria. As a final step we provide an existence and uniqueness
result. Some examples and extensions of our results are also presented.","Me, myself and I: a general theory of non-Markovian time-inconsistent stochastic control for sophisticated agents",2020-02-28 09:54:54,"Camilo Hernández, Dylan Possamaï","http://arxiv.org/abs/2002.12572v2, http://arxiv.org/pdf/2002.12572v2",math.OC
35451,th,"In non-truthful auctions, agents' utility for a strategy depends on the
strategies of the opponents and also the prior distribution over their private
types; the set of Bayes Nash equilibria generally has an intricate dependence
on the prior. Using the First Price Auction as our main demonstrating example,
we show that $\tilde O(n / \epsilon^2)$ samples from the prior with $n$ agents
suffice for an algorithm to learn the interim utilities for all monotone
bidding strategies. As a consequence, this number of samples suffice for
learning all approximate equilibria. We give almost matching (up to polylog
factors) lower bound on the sample complexity for learning utilities. We also
consider a setting where agents must pay a search cost to discover their own
types. Drawing on a connection between this setting and the first price
auction, discovered recently by Kleinberg et al. (2016), we show that $\tilde
O(n / \epsilon^2)$ samples suffice for utilities and equilibria to be estimated
in a near welfare-optimal descending auction in this setting. En route, we
improve the sample complexity bound, recently obtained by Guo et al. (2021),
for the Pandora's Box problem, which is a classical model for sequential
consumer search.",Learning Utilities and Equilibria in Non-Truthful Auctions,2020-07-03 17:44:33,"Hu Fu, Tao Lin","http://arxiv.org/abs/2007.01722v3, http://arxiv.org/pdf/2007.01722v3",cs.GT
35452,th,"The prisoner's dilemma (PD) is a game-theoretic model studied in a wide array
of fields to understand the emergence of cooperation between rational
self-interested agents. In this work, we formulate a spatial iterated PD as a
discrete-event dynamical system where agents play the game in each time-step
and analyse it algebraically using Krohn-Rhodes algebraic automata theory using
a computational implementation of the holonomy decomposition of transformation
semigroups. In each iteration all players adopt the most profitable strategy in
their immediate neighbourhood. Perturbations resetting the strategy of a given
player provide additional generating events for the dynamics. Our initial study
shows how algebraic structure, including natural subsystems comprising
permutation groups acting on the spatial distributions of strategies, arises in
certain parameter regimes for the pay-off matrix and is absent in other
parameter regimes. Differences in the number of group levels in the holonomy
decomposition (an upper bound for Krohn-Rhodes complexity) are revealed as more
pools of reversibility appear when the temptation to defect is at an
intermediate level. Algebraic structure uncovered by this analysis can be
interpreted to shed light on the dynamics of the spatial iterated PD.",Spatial Iterated Prisoner's Dilemma as a Transformation Semigroup,2020-07-03 21:20:30,"Isaiah Farahbakhsh, Chrystopher L. Nehaniv","http://arxiv.org/abs/2007.01896v2, http://arxiv.org/pdf/2007.01896v2",math.DS
35453,th,"We investigate how to model the beliefs of an agent who becomes more aware.
We use the framework of Halpern and Rego (2013) by adding probability, and
define a notion of a model transition that describes constraints on how, if an
agent becomes aware of a new formula $\phi$ in state $s$ of a model $M$, she
transitions to state $s^*$ in a model $M^*$. We then discuss how such a model
can be applied to information disclosure.",Dynamic Awareness,2020-07-06 18:28:22,"Joseph Y. Halpern, Evan Piermont","http://arxiv.org/abs/2007.02823v1, http://arxiv.org/pdf/2007.02823v1",cs.AI
35454,th,"This paper provides a complete review of the continuous-time optimal
contracting problem introduced by Sannikov, in the extended context allowing
for possibly different discount rates for both parties. The agent's problem is
to seek for optimal effort, given the compensation scheme proposed by the
principal over a random horizon. Then, given the optimal agent's response, the
principal determines the best compensation scheme in terms of running payment,
retirement, and lump-sum payment at retirement. A Golden Parachute is a
situation where the agent ceases any effort at some positive stopping time, and
receives a payment afterwards, possibly under the form of a lump sum payment,
or of a continuous stream of payments. We show that a Golden Parachute only
exists in certain specific circumstances. This is in contrast with the results
claimed by Sannikov, where the only requirement is a positive agent's marginal
cost of effort at zero. Namely, we show that there is no Golden Parachute if
this parameter is too large. Similarly, in the context of a concave marginal
utility, there is no Golden Parachute if the agent's utility function has a too
negative curvature at zero. In the general case, we prove that an agent with
positive reservation utility is either never retired by the principal, or
retired above some given threshold (as in Sannikov's solution). We show that
different discount factors induce a face-lifted utility function, which allows us
to reduce the analysis to a setting similar to the equal discount rates one.
Finally, we also confirm that an agent with small reservation utility may have
an informational rent, meaning that the principal optimally offers him a
contract with strictly higher utility than his participation value.",Is there a Golden Parachute in Sannikov's principal-agent problem?,2020-07-10 15:50:49,"Dylan Possamaï, Nizar Touzi","http://arxiv.org/abs/2007.05529v2, http://arxiv.org/pdf/2007.05529v2",econ.TH
35457,th,"Convex analysis is fundamental to proving inequalities that have a wide
variety of applications in economics and mathematics. In this paper we provide
Jensen-type inequalities for functions that are, intuitively, ""very"" convex.
These inequalities are simple to apply and can be used to generalize and extend
previous results or to derive new results. We apply our inequalities to
quantify the notion ""more risk averse"" provided in \cite{pratt1978risk}. We
also apply our results in other applications from different fields, including
risk measures, Poisson approximation, moment generating functions,
log-likelihood functions, and Hermite-Hadamard type inequalities.",New Jensen-type inequalities and their applications,2020-07-18 01:08:02,Bar Light,"http://arxiv.org/abs/2007.09258v3, http://arxiv.org/pdf/2007.09258v3",math.OC
35458,th,"Most online platforms strive to learn from interactions with users, and many
engage in exploration: making potentially suboptimal choices for the sake of
acquiring new information. We study the interplay between exploration and
competition: how such platforms balance the exploration for learning and the
competition for users. Here users play three distinct roles: they are customers
that generate revenue, they are sources of data for learning, and they are
self-interested agents which choose among the competing platforms.
  We consider a stylized duopoly model in which two firms face the same
multi-armed bandit problem. Users arrive one by one and choose between the two
firms, so that each firm makes progress on its bandit problem only if it is
chosen. Through a mix of theoretical results and numerical simulations, we
study whether and to what extent competition incentivizes the adoption of
better bandit algorithms, and whether it leads to welfare increases for users.
We find that stark competition induces firms to commit to a ""greedy"" bandit
algorithm that leads to low welfare. However, weakening competition by
providing firms with some ""free"" users incentivizes better exploration
strategies and increases welfare. We investigate two channels for weakening the
competition: relaxing the rationality of users and giving one firm a
first-mover advantage. Our findings are closely related to the ""competition vs.
innovation"" relationship, and elucidate the first-mover advantage in the
digital economy.",Competing Bandits: The Perils of Exploration Under Competition,2020-07-20 17:19:08,"Guy Aridor, Yishay Mansour, Aleksandrs Slivkins, Zhiwei Steven Wu","http://arxiv.org/abs/2007.10144v7, http://arxiv.org/pdf/2007.10144v7",cs.GT
35459,th,"This paper studies continuous-time optimal contracting in a hierarchy problem
which generalises the model of Sung (2015). The hierarchy is modeled by a
series of interlinked principal-agent problems, leading to a sequence of
Stackelberg equilibria. More precisely, the principal can contract with the
managers to incentivise them to act in her best interest, despite only
observing the net benefits of the total hierarchy. Managers in turn subcontract
with the agents below them. Both agents and managers independently control in
continuous time a stochastic process representing their outcome. First, we show
through a continuous-time adaptation of Sung's model that, even if the agents
only control the drift of their outcome, their manager controls the volatility
of their continuation utility. This first simple example justifies the use of
recent results on optimal contracting for drift and volatility control, and
therefore the theory of second-order backward stochastic differential
equations, developed in the theoretical part of this paper, dedicated to a more
general model. The comprehensive approach we outline highlights the benefits of
considering a continuous-time model and opens the way to obtain comparative
statics. We also explain how the model can be extended to a large-scale
principal-agent hierarchy. Since the principal's problem can be reduced to only
an $m$-dimensional state space and a $2m$-dimensional control set, where $m$ is
the number of managers immediately below her, and is therefore independent of
the size of the hierarchy below these managers, the dimension of the problem
does not explode.",Continuous-time incentives in hierarchies,2020-07-21 15:47:29,Emma Hubert,"http://arxiv.org/abs/2007.10758v1, http://arxiv.org/pdf/2007.10758v1",math.OC
35460,th,"In multi-agent environments in which coordination is desirable, the history
of play often causes lock-in at sub-optimal outcomes. Notoriously, technologies
with a significant environmental footprint or high social cost persist despite
the successful development of more environmentally friendly and/or socially
efficient alternatives. The displacement of the status quo is hindered by
entrenched economic interests and network effects. To exacerbate matters, the
standard mechanism design approaches based on centralized authorities with the
capacity to use preferential subsidies to effectively dictate system outcomes
are not always applicable to modern decentralized economies. What other types
of mechanisms are feasible? In this paper, we develop and analyze a mechanism
that induces transitions from inefficient lock-ins to superior alternatives.
This mechanism does not exogenously favor one option over another -- instead,
the phase transition emerges endogenously via a standard evolutionary learning
model, Q-learning, where agents trade off exploration and exploitation.
Exerting the same transient influence on both the efficient and inefficient
technologies encourages exploration and results in irreversible phase
transitions and permanent stabilization of the efficient one. On a technical
level, our work is based on bifurcation and catastrophe theory, a branch of
mathematics that deals with changes in the number and stability properties of
equilibria. Critically, our analysis is shown to be structurally robust to
significant and even adversarially chosen perturbations to the parameters of
both our game and our behavioral model.",Catastrophe by Design in Population Games: Destabilizing Wasteful Locked-in Technologies,2020-07-25 10:55:15,"Stefanos Leonardos, Iosif Sakos, Costas Courcoubetis, Georgios Piliouras","http://arxiv.org/abs/2007.12877v1, http://arxiv.org/pdf/2007.12877v1",cs.GT
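The lock-in escape mechanism described above can be illustrated with a toy sketch: two stateless Q-learners repeatedly play a 2x2 coordination game, start at the inefficient convention, and receive a transient phase of symmetric softmax (Boltzmann) exploration before the temperature is lowered again. This is only an illustrative reduction, not the authors' population model or their catastrophe-theoretic analysis; the payoffs, learning rate, and temperature schedule are assumptions.

```python
import math
import random

PAYOFF = {("A", "A"): 2.0, ("B", "B"): 1.0, ("A", "B"): 0.0, ("B", "A"): 0.0}

def softmax_choice(q, temperature, rng):
    """Sample an action with probability proportional to exp(Q / temperature)."""
    actions = list(q)
    weights = [math.exp(q[a] / temperature) for a in actions]
    r = rng.random() * sum(weights)
    acc = 0.0
    for a, w in zip(actions, weights):
        acc += w
        if r <= acc:
            return a
    return actions[-1]

def simulate(high_temp, rounds=3000, alpha=0.1, seed=1):
    rng = random.Random(seed)
    q1 = {"A": 0.0, "B": 1.0}   # both agents start locked in on the old technology B
    q2 = {"A": 0.0, "B": 1.0}
    for t in range(rounds):
        # transient, symmetric "shaking" phase, then cool down
        temp = high_temp if t < rounds // 2 else 0.05
        a1 = softmax_choice(q1, temp, rng)
        a2 = softmax_choice(q2, temp, rng)
        q1[a1] += alpha * (PAYOFF[(a1, a2)] - q1[a1])
        q2[a2] += alpha * (PAYOFF[(a2, a1)] - q2[a2])
    return q1, q2

if __name__ == "__main__":
    for temp in (0.05, 1.0):
        print(f"exploration temperature {temp}: final Q-values", simulate(high_temp=temp))
```

With little exploration the agents typically stay at the inefficient convention B, while the transient high-temperature phase usually lets them cross over to A and remain there after cooling.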
35462,th,"We here study the Battle of the Sexes game, a textbook case of asymmetric
games, on small networks. Due to the conflicting preferences of the players,
analytical approaches are scarce and, most often, update strategies are employed
in numerical simulations of repeated games on networks until convergence is
reached. As a result, correlations between the choices of the players emerge.
Our approach is to study these correlations with a generalized Ising model.
Using the response strategy framework, we describe how the actions of the
players can bring the network into a steady configuration, starting from an
out-of-equilibrium one. We obtain these configurations using game-theoretical
tools, and describe the results using Ising parameters. We exhaust the
two-player case, giving a detailed account of all the equilibrium
possibilities. Going to three players, we generalize the Ising model and
compare the equilibrium solutions of three representative types of network. We
find that players that are not directly linked retain a degree of correlation
that is proportional to their initial correlation. We also find that the local
network structure is the most relevant for small values of the magnetic field
and the interaction strength of the Ising model. Finally, we conclude that
certain parameters of the equilibrium states are network independent, which
opens up the possibility of an analytical description of asymmetric games
played on networks.",Asymmetric games on networks: towards an Ising-model representation,2020-11-05 13:23:38,"A. D. Correia, L. L. Leestmaker, H. T. C. Stoof","http://arxiv.org/abs/2011.02739v3, http://arxiv.org/pdf/2011.02739v3",physics.soc-ph
35463,th,"We present a linear--quadratic Stackelberg game with a large number of
followers and we also derive the mean field limit of infinitely many followers.
The relation between optimization and mean-field limit is studied and
conditions for consistency are established. Finally, we propose a numerical
method based on the derived models and present numerical results.",Multiscale Control of Stackelberg Games,2020-11-06 18:05:43,"Michael Herty, Sonja Steffensen, Anna Thünen","http://arxiv.org/abs/2011.03405v1, http://arxiv.org/pdf/2011.03405v1",math.OC
35465,th,"Understanding the likelihood for an election to be tied is a classical topic
in many disciplines including social choice, game theory, political science,
and public choice. Despite a large body of literature and the common belief
that ties are rare, little is known about how rare ties are in large elections
except for a few simple positional scoring rules under the i.i.d. uniform
distribution over the votes, known as the Impartial Culture (IC) in social
choice. In particular, little progress has been made since Marchant explicitly posed
the likelihood of k-way ties under IC as an open question in 2001.
  We give an asymptotic answer to the open question for a wide range of
commonly studied voting rules under a model that is much more general and
realistic than i.i.d. models (especially IC) -- the smoothed social choice
framework by Xia that was inspired by the celebrated smoothed complexity
analysis by Spielman and Teng. We prove dichotomy theorems on the smoothed
likelihood of ties under positional scoring rules, edge-order-based rules, and
some multi-round score-based elimination rules, which include commonly studied
voting rules such as plurality, Borda, veto, maximin, Copeland, ranked pairs,
Schulze, STV, and Coombs as special cases. We also complement the theoretical
results by experiments on synthetic data and real-world rank data on Preflib.
Our main technical tool is an improved dichotomous characterization on the
smoothed likelihood for a Poisson multinomial variable to be in a polyhedron,
which is proved by exploring the interplay between the V-representation and the
matrix representation of polyhedra and might be of independent interest.",How Likely Are Large Elections Tied?,2020-11-07 18:16:04,Lirong Xia,"http://arxiv.org/abs/2011.03791v3, http://arxiv.org/pdf/2011.03791v3",cs.GT
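The smoothed analysis above is well beyond a snippet, but the baseline it generalizes, tie probabilities under Impartial Culture, is easy to estimate by Monte Carlo. A rough sketch for plurality with i.i.d. uniform top choices; the candidate and voter counts below are arbitrary.

```python
import random
from collections import Counter

def plurality_tie_probability(num_candidates, num_voters, trials=5000, seed=0):
    """Monte Carlo estimate of the probability that plurality produces a tied
    winner when every vote is an i.i.d. uniform top choice (Impartial Culture)."""
    rng = random.Random(seed)
    ties = 0
    for _ in range(trials):
        counts = Counter(rng.randrange(num_candidates) for _ in range(num_voters))
        top = max(counts.values())
        if sum(1 for v in counts.values() if v == top) >= 2:
            ties += 1
    return ties / trials

if __name__ == "__main__":
    for n in (101, 1001):
        print(f"{n} voters, 3 candidates: tie probability ~ {plurality_tie_probability(3, n):.4f}")
```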
35493,th,"This paper is an axiomatic study of consistent approval-based multi-winner
rules, i.e., voting rules that select a fixed-size group of candidates based on
approval ballots. We introduce the class of counting rules and provide an
axiomatic characterization of this class based on the consistency axiom.
Building upon this result, we axiomatically characterize three important
consistent multi-winner rules: Proportional Approval Voting, Multi-Winner
Approval Voting and the Approval Chamberlin--Courant rule. Our results
demonstrate the variety of multi-winner rules and illustrate three different,
orthogonal principles that multi-winner voting rules may represent: individual
excellence, diversity, and proportionality.",Consistent Approval-Based Multi-Winner Rules,2017-04-08 10:29:43,"Martin Lackner, Piotr Skowron","http://arxiv.org/abs/1704.02453v6, http://arxiv.org/pdf/1704.02453v6",cs.GT
35467,th,"I relate bipartite graph matchings to stable matchings. I prove a necessary
and sufficient condition for the existence of a saturating stable matching,
where every agent on one side is matched, for all possible preferences. I
extend my analysis to perfect stable matchings, where every agent on both sides
is matched.",Saturating stable matchings,2020-11-11 22:59:50,Muhammad Maaz,"http://dx.doi.org/10.1016/j.orl.2021.06.013, http://arxiv.org/abs/2011.06046v2, http://arxiv.org/pdf/2011.06046v2",cs.GT
35468,th,"We study a market of investments on networks, where each agent (vertex) can
invest in any enterprise linked to her, and at the same time, raise capital for
her firm's enterprise from other agents she is linked to. Failing to raise
sufficient capital results in the firm defaulting, being unable to invest in
others. Our main objective is to examine the role of collateral contracts in
handling the strategic risk that can propagate to a systemic risk throughout
the network in a cascade of defaults. We take a mechanism-design approach and
solve for the optimal scheme of collateral contracts that capital raisers offer
their investors. These contracts aim at sustaining the efficient level of
investment as a unique Nash equilibrium, while minimizing the total collateral.
  Our main results contrast the network environment with its non-network
counterpart (where the sets of investors and capital raisers are disjoint). We
show that for acyclic investment networks, the network environment does not
necessitate any additional collaterals, and systemic risk can be fully handled
by optimal bilateral collateral contracts between capital raisers and their
investors. This is, unfortunately, not the case for cyclic investment networks.
We show that bilateral contracting will not suffice to resolve systemic risk,
and the market will need an external entity to design a global collateral
scheme for all capital raisers. Furthermore, the minimum total collateral that
will sustain the efficient level of investment as a unique equilibrium may be
arbitrarily higher, even in simple cyclic investment networks, compared with
its corresponding non-network environment. Additionally, we prove
computational-complexity results, both for a single enterprise and for
networks.",Optimal Collaterals in Multi-Enterprise Investment Networks,2020-11-12 10:59:46,"Moshe Babaioff, Yoav Kolumbus, Eyal Winter","http://dx.doi.org/10.1145/3485447.3512053, http://arxiv.org/abs/2011.06247v3, http://arxiv.org/pdf/2011.06247v3",cs.GT
35470,th,"We provide an economically sound micro-foundation to linear price impact
models, by deriving them as the equilibrium of a suitable agent-based system.
Our setup generalizes the well-known Kyle model, by dropping the assumption of
a terminal time at which fundamental information is revealed, so as to describe a
stationary market, while retaining agents' rationality and asymmetric
information. We investigate the stationary equilibrium for arbitrary Gaussian
noise trades and fundamental information, and show that the setup is compatible
with universal price diffusion at small times, and non-universal mean-reversion
at time scales at which fluctuations in fundamentals decay. Our model provides
a testable relation between volatility of prices, magnitude of fluctuations in
fundamentals and level of volume traded in the market.",A Stationary Kyle Setup: Microfounding propagator models,2020-11-20 10:23:35,"Michele Vodret, Iacopo Mastromatteo, Bence Tóth, Michael Benzaquen","http://dx.doi.org/10.1088/1742-5468/abe702, http://arxiv.org/abs/2011.10242v3, http://arxiv.org/pdf/2011.10242v3",q-fin.TR
35472,th,"The expectation is an example of a descriptive statistic that is monotone
with respect to stochastic dominance, and additive for sums of independent
random variables. We provide a complete characterization of such statistics,
and explore a number of applications to models of individual and group
decision-making. These include a representation of stationary monotone time
preferences, extending the work of Fishburn and Rubinstein (1982) to time
lotteries. This extension offers a new perspective on risk attitudes toward
time, as well as on the aggregation of multiple discount factors.",Monotone additive statistics,2021-02-01 06:43:06,"Xiaosheng Mu, Luciano Pomatto, Philipp Strack, Omer Tamuz","http://arxiv.org/abs/2102.00618v4, http://arxiv.org/pdf/2102.00618v4",econ.TH
35494,th,"I study the measurement of scientists' influence using bibliographic data.
The main result is an axiomatic characterization of the family of
citation-counting indices, a broad class of influence measures which includes
the renowned h-index. The result highlights several limitations of these
indices: they are not suitable to compare scientists across different fields,
and they cannot account for indirect influence. I explore how these limitations
can be overcome by using richer bibliographic information.",The Limits of Citation Counts,2017-11-07 22:22:02,Antonin Macé,"http://arxiv.org/abs/1711.02695v3, http://arxiv.org/pdf/1711.02695v3",cs.DL
35473,th,"We study network games in which players choose both the partners with whom
they associate and an action level (e.g., effort) that creates spillovers for
those partners. We introduce a framework and two solution concepts, extending
standard approaches for analyzing each choice in isolation: Nash equilibrium in
actions and pairwise stability in links. Our main results show that, under
suitable order conditions on incentives, stable networks take simple forms. The
first condition concerns whether links create positive or negative payoff
spillovers. The second concerns whether actions are strategic complements to
links, or strategic substitutes. Together, these conditions yield a taxonomy of
the relationship between network structure and economic primitives organized
around two network architectures: ordered overlapping cliques and nested split
graphs. We apply our model to understand the consequences of competition for
status, to microfound matching models that assume clique formation, and to
interpret empirical findings that highlight unintended consequences of group
design.",Games on Endogenous Networks,2021-02-02 19:27:00,"Evan Sadler, Benjamin Golub","http://arxiv.org/abs/2102.01587v7, http://arxiv.org/pdf/2102.01587v7",econ.TH
35474,th,"We introduce a new two-sided stable matching problem that describes the
summer internship matching practice of an Australian university. The model is a
case between two models of Kamada and Kojima on matchings with distributional
constraints. We study three solution concepts, the strong and weak stability
concepts proposed by Kamada and Kojima, and a new one in between the two,
called cutoff stability. Kamada and Kojima showed that a strongly stable
matching may not exist in their most restricted model with disjoint regional
quotas. Our first result is that checking its existence is NP-hard. We then
show that a cutoff stable matching exists not just for the summer internship
problem but also for the general matching model with arbitrary heredity
constraints. We present an algorithm to compute a cutoff stable matching and
show that it runs in polynomial time in our special case of the summer internship
model. We also show that finding a maximum-size cutoff stable matching
is NP-hard, but we provide a Mixed Integer Linear Programming formulation for this
optimisation problem.",Cutoff stability under distributional constraints with an application to summer internship matching,2021-02-05 03:11:46,"Haris Aziz, Anton Baychkov, Peter Biro","http://arxiv.org/abs/2102.02931v2, http://arxiv.org/pdf/2102.02931v2",cs.GT
35477,th,"We study a game theoretic model of standardized testing for college
admissions. Students are of two types; High and Low. There is a college that
would like to admit the High type students. Students take a potentially costly
standardized exam which provides a noisy signal of their type.
  The students come from two populations, which are identical in talent (i.e.
the type distribution is the same), but differ in their access to resources:
the higher resourced population can at their option take the exam multiple
times, whereas the lower resourced population can only take the exam once. We
study two models of score reporting, which capture existing policies used by
colleges. The first policy (sometimes known as ""super-scoring"") allows students
to report the max of the scores they achieve. The other policy requires that
all scores be reported.
  We find in our model that requiring that all scores be reported results in
superior outcomes in equilibrium, both from the perspective of the college (the
admissions rule is more accurate), and from the perspective of equity across
populations: a student's probability of admission is independent of their
population, conditional on their type. In particular, the false positive rates
and false negative rates are identical in this setting, across the highly and
poorly resourced student populations. This is the case despite the fact that
the more highly resourced students can -- at their option -- either report a
more accurate signal of their type, or pool with the lower resourced population
under this policy.",Best vs. All: Equity and Accuracy of Standardized Test Score Reporting,2021-02-15 22:27:22,"Sampath Kannan, Mingzi Niu, Aaron Roth, Rakesh Vohra","http://arxiv.org/abs/2102.07809v1, http://arxiv.org/pdf/2102.07809v1",cs.GT
35488,th,"In this paper, we investigate two heterogeneous triopoly games where the
demand function of the market is isoelastic. The local stability and the
bifurcation of these games are systematically analyzed using the symbolic
approach proposed by the author. The novelty of the present work is twofold. On
the one hand, the results of this paper are analytical, in contrast to existing
results in the literature that are based on observations from numerical
simulations. In particular, we rigorously prove the existence of double routes
to chaos through the period-doubling bifurcation and through the Neimark-Sacker
bifurcation. On the other hand, for the special case of the involved firms
having identical marginal costs, we acquire the necessary and sufficient
conditions of the local stability for both models. By further analyzing these
conditions, it seems that the presence of the local monopolistic
approximation (LMA) mechanism might have a stabilizing effect for heterogeneous
triopoly games with the isoelastic demand.",Analysis of stability and bifurcation for two heterogeneous triopoly games with the isoelastic demand,2021-12-11 14:01:20,Xiaoliang Li,"http://arxiv.org/abs/2112.05950v1, http://arxiv.org/pdf/2112.05950v1",math.DS
35478,th,"Bilateral trade, a fundamental topic in economics, models the problem of
intermediating between two strategic agents, a seller and a buyer, willing to
trade a good for which they hold private valuations. Despite the simplicity of
this problem, a classical result by Myerson and Satterthwaite (1983) affirms
the impossibility of designing a mechanism which is simultaneously efficient,
incentive compatible, individually rational, and budget balanced. This
impossibility result fostered an intense investigation of meaningful trade-offs
between these desired properties. Much work has focused on approximately
efficient fixed-price mechanisms, e.g., Blumrosen and Dobzinski (2014; 2016),
Colini-Baldeschi et al. (2016), which have been shown to fully characterize
strong budget balanced and ex-post individually rational direct revelation
mechanisms. All these results, however, either assume some knowledge on the
priors of the seller/buyer valuations, or a black box access to some samples of
the distributions, as in Dütting et al. (2021). In this paper, we cast for
the first time the bilateral trade problem in a regret minimization framework
over rounds of seller/buyer interactions, with no prior knowledge on the
private seller/buyer valuations. Our main contribution is a complete
characterization of the regret regimes for fixed-price mechanisms with
different models of feedback and private valuations, using as benchmark the
best fixed price in hindsight. More precisely, we prove the following bounds on
the regret:
  $\bullet$ $\widetilde{\Theta}(\sqrt{T})$ for full-feedback (i.e., direct
revelation mechanisms);
  $\bullet$ $\widetilde{\Theta}(T^{2/3})$ for realistic feedback (i.e.,
posted-price mechanisms) and independent seller/buyer valuations with bounded
densities;
  $\bullet$ $\Theta(T)$ for realistic feedback and seller/buyer valuations with
bounded densities;
  $\bullet$ $\Theta(T)$ for realistic feedback and independent seller/buyer
valuations;
  $\bullet$ $\Theta(T)$ for the adversarial setting.",A Regret Analysis of Bilateral Trade,2021-02-16 11:53:17,"Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Federico Fusco, Stefano Leonardi","http://dx.doi.org/10.1145/3465456.3467645, http://arxiv.org/abs/2102.08754v1, http://arxiv.org/pdf/2102.08754v1",cs.LG
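As a point of reference for the full-feedback regime listed above, a naive follow-the-leader learner over a discretized price grid is easy to write down: with full feedback the learner sees both valuations every round and can score every candidate price in hindsight. This is a hedged sketch, not the authors' algorithm; the i.i.d. uniform valuations and the gain-from-trade objective are illustrative assumptions.

```python
import random

def gain_from_trade(price, seller, buyer):
    """Trade happens iff the posted price is acceptable to both sides."""
    return (buyer - seller) if seller <= price <= buyer else 0.0

def run_full_feedback(horizon=20000, grid_size=100, seed=0):
    rng = random.Random(seed)
    grid = [i / grid_size for i in range(grid_size + 1)]
    cumulative = [0.0] * len(grid)   # hindsight gain from trade of each candidate price
    alg_gain = 0.0
    for _ in range(horizon):
        best = max(range(len(grid)), key=lambda i: cumulative[i])  # follow the leader
        seller, buyer = rng.random(), rng.random()   # illustrative i.i.d. U[0,1] valuations
        alg_gain += gain_from_trade(grid[best], seller, buyer)
        for i, p in enumerate(grid):                 # full feedback: score every price
            cumulative[i] += gain_from_trade(p, seller, buyer)
    best_fixed = max(cumulative)
    print(f"algorithm GFT: {alg_gain:.1f}  best fixed price in hindsight: {best_fixed:.1f}  "
          f"regret: {best_fixed - alg_gain:.1f}")

if __name__ == "__main__":
    run_full_feedback()
```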
35479,th,"Critical decisions like loan approvals, medical interventions, and college
admissions are guided by predictions made in the presence of uncertainty. In
this paper, we prove that uncertainty has a disparate impact. While it imparts
errors across all demographic groups, the types of errors vary systematically:
Groups with higher average outcomes are typically assigned higher false
positive rates, while those with lower average outcomes are assigned higher
false negative rates. We show that additional data acquisition can eliminate
the disparity and broaden access to opportunity. The strategy, which we call
Affirmative Information, could stand as an alternative to Affirmative Action.",The Disparate Impact of Uncertainty: Affirmative Action vs. Affirmative Information,2021-02-19 19:40:47,Claire Lazar Reich,"http://arxiv.org/abs/2102.10019v4, http://arxiv.org/pdf/2102.10019v4",stat.ML
35481,th,"We study a repeated persuasion setting between a sender and a receiver, where
at each time $t$, the sender observes a payoff-relevant state drawn
independently and identically from an unknown prior distribution, and shares
state information with the receiver, who then myopically chooses an action. As
in the standard setting, the sender seeks to persuade the receiver into
choosing actions that are aligned with the sender's preference by selectively
sharing information about the state. However, in contrast to the standard
models, the sender does not know the prior, and has to persuade while gradually
learning the prior on the fly.
  We study the sender's learning problem of making persuasive action
recommendations to achieve low regret against the optimal persuasion mechanism
with the knowledge of the prior distribution. Our main positive result is an
algorithm that, with high probability, is persuasive across all rounds and
achieves $O(\sqrt{T\log T})$ regret, where $T$ is the horizon length. The core
philosophy behind the design of our algorithm is to leverage robustness against
the sender's ignorance of the prior. Intuitively, at each time our algorithm
maintains a set of candidate priors, and chooses a persuasion scheme that is
simultaneously persuasive for all of them. To demonstrate the effectiveness of
our algorithm, we further prove that no algorithm can achieve regret better
than $\Omega(\sqrt{T})$, even if the persuasiveness requirements were
significantly relaxed. Therefore, our algorithm achieves optimal regret for the
sender's learning problem up to terms logarithmic in $T$.",Learning to Persuade on the Fly: Robustness Against Ignorance,2021-02-20 00:02:15,"You Zu, Krishnamurthy Iyer, Haifeng Xu","http://arxiv.org/abs/2102.10156v1, http://arxiv.org/pdf/2102.10156v1",cs.GT
35482,th,"We introduce a new algorithm for finding stable matchings in multi-sided
matching markets. Our setting is motivated by a PhD market of students,
advisors, and co-advisors, and can be generalized to supply chain networks
viewed as $n$-sided markets. In the three-sided PhD market, students primarily
care about advisors and then about co-advisors (consistent preferences), while
advisors and co-advisors have preferences over students only (hence they are
cooperative). A student must be matched to one advisor and one co-advisor, or
not at all. In contrast to previous work, advisor-student and
student-co-advisor pairs may not be mutually acceptable (e.g., a student may
not want to work with an advisor or co-advisor and vice versa). We show that
three-sided stable matchings always exist, and present an algorithm that, in
time quadratic in the market size (up to log factors), finds a three-sided
stable matching using any two-sided stable matching algorithm as matching
engine. We illustrate the challenges that arise when not all advisor-co-advisor
pairs are compatible. We then generalize our algorithm to $n$-sided markets
with quotas and show how they can model supply chain networks. Finally, we show
how our algorithm outperforms the baseline given by [Danilov, 2003] in terms of
both producing a stable matching and a larger number of matches on a synthetic
dataset.",Finding Stable Matchings in PhD Markets with Consistent Preferences and Cooperative Partners,2021-02-23 21:10:46,"Maximilian Mordig, Riccardo Della Vecchia, Nicolò Cesa-Bianchi, Bernhard Schölkopf","http://arxiv.org/abs/2102.11834v2, http://arxiv.org/pdf/2102.11834v2",cs.GT
35489,th,"Spin models of markets inspired by physics models of magnetism, as the Ising
model, allow for the study of the collective dynamics of interacting agents in
a market. The number of possible states has been mostly limited to two (buy or
sell) or three options. However, herding effects of competing stocks and the
collective dynamics of a whole market may escape our reach in the simplest
models. Here I study a q-spin Potts model version of a simple Ising market
model to represent the dynamics of a stock market index in a spin model. As a
result, a self-organized gain-loss asymmetry in the time series of an index
variable composed of stocks in this market is observed.",A q-spin Potts model of markets: Gain-loss asymmetry in stock indices as an emergent phenomenon,2021-12-12 21:15:33,Stefan Bornholdt,"http://arxiv.org/abs/2112.06290v1, http://arxiv.org/pdf/2112.06290v1",physics.soc-ph
35483,th,"Consider a coordination game played on a network, where agents prefer taking
actions closer to those of their neighbors and to their own ideal points in
action space. We explore how the welfare outcomes of a coordination game depend
on network structure and the distribution of ideal points throughout the
network. To this end, we imagine a benevolent or adversarial planner who
intervenes, at a cost, to change ideal points in order to maximize or minimize
utilitarian welfare subject to a constraint. A complete characterization of
optimal interventions is obtained by decomposing interventions into principal
components of the network's adjacency matrix. Welfare is most sensitive to
interventions proportional to the last principal component, which focus on
local disagreement. A welfare-maximizing planner optimally works to reduce
local disagreement, bringing the ideal points of neighbors closer together,
whereas a malevolent adversary optimally drives neighbors' ideal points apart
to decrease welfare. Such welfare-maximizing/minimizing interventions are very
different from ones that would be done to change some traditional measures of
discord, such as the cross-sectional variation of equilibrium actions. In fact,
an adversary sowing disagreement to maximize her impact on welfare will
minimize her impact on global variation in equilibrium actions, underscoring a
tension between improving welfare and increasing global cohesion of equilibrium
behavior.",Discord and Harmony in Networks,2021-02-26 08:56:01,"Andrea Galeotti, Benjamin Golub, Sanjeev Goyal, Rithvik Rao","http://arxiv.org/abs/2102.13309v1, http://arxiv.org/pdf/2102.13309v1",econ.TH
35484,th,"Revision game is a very new model formulating the real-time situation where
players dynamically prepare and revise their actions in advance before a
deadline when payoffs are realized. It is at the cutting edge of dynamic game
theory and can be applied in many real-world scenarios, such as eBay auction,
stock market, election, online games, crowdsourcing, etc. In this work, we
identify a new class of strategies for revision games, called
Limited Retaliation strategies. A limited retaliation strategy stipulates
that, (1) players first follow a recommended cooperative plan; (2) if anyone
deviates from the plan, the limited retaliation player retaliates by using the
defection action for a limited duration; (3) after the retaliation, the limited
retaliation player returns to the cooperative plan. A limited retaliation
strategy has three key features. It is cooperative, sustaining a high level of
social welfare. It is vengeful, deterring the opponent from betrayal by
threatening with a future retaliation. It is yet forgiving, since it resumes
cooperation after a proper retaliation. The cooperativeness and vengefulness
make it constitute cooperative subgame perfect equilibrium, while the
forgiveness makes it tolerate occasional mistakes. Limited retaliation
strategies show significant advantages over Grim Trigger, which is currently
the only known strategy for revision games. Besides its contribution as a new
robust and welfare-optimizing equilibrium strategy, our results about limited
retaliation strategy can also be used to explain how easy cooperation can
happen, and why forgiveness emerges in real-world multi-agent interactions. In
addition, limited retaliation strategies are simple to derive and
computationally efficient, making it easy for algorithm design and
implementation in many multi-agent systems.","Cooperation, Retaliation and Forgiveness in Revision Games",2021-12-04 10:40:09,"Dong Hao, Qi Shi, Jinyan Su, Bo An","http://arxiv.org/abs/2112.02271v4, http://arxiv.org/pdf/2112.02271v4",cs.GT
35485,th,"In a crowdsourcing contest, a principal holding a task posts it to a crowd.
People in the crowd then compete with each other to win the rewards. Although
in real life, a crowd is usually networked and people influence each other via
social ties, existing crowdsourcing contest theories do not aim to answer how
interpersonal relationships influence people's incentives and behaviors and
thereby affect crowdsourcing performance. In this work, we take
people's social ties as a key factor in modeling and designing agents'
incentives in crowdsourcing contests. We establish two contest mechanisms by
which the principal can impel the agents to invite their neighbors to
contribute to the task. The first mechanism has a symmetric Bayesian Nash
equilibrium, and it is very simple for agents to play and easy for the
principal to predict the contest performance. The second mechanism has an
asymmetric Bayesian Nash equilibrium, and agents' behaviors in equilibrium show
a vast diversity which is strongly related to their social relations. The
Bayesian Nash equilibrium analysis of these new mechanisms reveals that,
besides agents' intrinsic abilities, the social relations among them also play
a central role in decision-making. Moreover, we design an effective algorithm
to automatically compute the Bayesian Nash equilibrium of the invitation
crowdsourcing contest and further adapt it to a large graph dataset. Both
theoretical and empirical results show that the new invitation crowdsourcing
contests can substantially enlarge the number of participants, whereby the
principal can obtain significantly better solutions without a large
advertisement expenditure.",Social Sourcing: Incorporating Social Networks Into Crowdsourcing Contest Design,2021-12-06 12:18:18,"Qi Shi, Dong Hao","http://dx.doi.org/10.1109/TNET.2022.3223367, http://arxiv.org/abs/2112.02884v2, http://arxiv.org/pdf/2112.02884v2",cs.AI
35486,th,"In statistical decision theory, a model is said to be Pareto optimal (or
admissible) if no other model carries less risk for at least one state of
nature while presenting no more risk for others. How can you rationally
aggregate/combine a finite set of Pareto optimal models while preserving Pareto
efficiency? This question is nontrivial because weighted model averaging does
not, in general, preserve Pareto efficiency. This paper presents an answer in
four logical steps: (1) A rational aggregation rule should preserve Pareto
efficiency. (2) Due to the complete class theorem, Pareto optimal models must be
Bayesian, i.e., they minimize a risk where the true state of nature is averaged
with respect to some prior. Therefore each Pareto optimal model can be
associated with a prior, and Pareto efficiency can be maintained by aggregating
Pareto optimal models through their priors. (3) A prior can be interpreted as a
preference ranking over models: prior $\pi$ prefers model A over model B if the
average risk of A is lower than the average risk of B. (4) A
rational/consistent aggregation rule should preserve this preference ranking:
If both priors $\pi$ and $\pi'$ prefer model A over model B, then the prior
obtained by aggregating $\pi$ and $\pi'$ must also prefer A over B. Under these
four steps, we show that all rational/consistent aggregation rules are as
follows: Give each individual Pareto optimal model a weight, introduce a weak
order/ranking over the set of Pareto optimal models, aggregate a finite set of
models S as the model associated with the prior obtained as the weighted
average of the priors of the highest-ranked models in S. This result shows that
all rational/consistent aggregation rules must follow a generalization of
hierarchical Bayesian modeling. Following our main result, we present
applications to Kernel smoothing, time-depreciating models, and voting
mechanisms.",Aggregation of Pareto optimal models,2021-12-08 11:21:15,"Hamed Hamze Bajgiran, Houman Owhadi","http://arxiv.org/abs/2112.04161v1, http://arxiv.org/pdf/2112.04161v1",econ.TH
35487,th,"In this discussion draft, we explore different duopoly games of players with
quadratic costs, where the market is assumed to have isoelastic demand. Different from
Different from the usual approaches based on numerical computations, the
methods used in the present work are built on symbolic computations, which can
produce analytical and rigorous results. Our investigations show that the
stability regions are enlarged for the games considered in this work compared
to their counterparts with linear costs, which generalizes the classical
results of ""F. M. Fisher. The stability of the Cournot oligopoly solution: The
effects of speeds of adjustment and increasing marginal costs. The Review of
Economic Studies, 28(2):125--135, 1961."".",Stability of Cournot duopoly games with isoelastic demands and quadratic costs,2021-12-11 13:52:07,"Xiaoliang Li, Li Su","http://arxiv.org/abs/2112.05948v2, http://arxiv.org/pdf/2112.05948v2",cs.SC
35496,th,"Static stability in economic models means negative incentives for deviation
from equilibrium strategies, which we expect to assure a return to equilibrium,
i.e., dynamic stability, as long as agents respond to incentives. There have
been many attempts to prove this link, especially in evolutionary game theory,
yielding both negative and positive results. This paper presents a universal
and intuitive approach to this link. We prove that static stability assures
dynamic stability if agents' choices of switching strategies are rationalizable
by introducing costs and constraints in those switching decisions. This idea
guides us to define \textit{net} gains from switches as the payoff improvement
after deducting the costs. Under rationalizable dynamics, an agent maximizes
the expected net gain subject to the constraints. We prove that the aggregate
maximized expected net gain works as a Lyapunov function. It also explains
reasons behind the known negative results. While our analysis here is confined
to myopic evolutionary dynamics in population games, our approach is applicable
to more complex situations.",Net gains in evolutionary dynamics: A unifying and intuitive approach to dynamic stability,2018-05-13 18:23:19,Dai Zusai,"http://arxiv.org/abs/1805.04898v9, http://arxiv.org/pdf/1805.04898v9",math.OC
35497,th,"Efficient computability is an important property of solution concepts in
matching markets. We consider the computational complexity of finding and
verifying various solution concepts in trading networks (multi-sided matching
markets with bilateral contracts) under the assumption of full substitutability
of agents' preferences. It is known that outcomes that satisfy trail stability
always exist and can be found in linear time. Here we consider a slightly
stronger solution concept in which agents can simultaneously offer an upstream
and a downstream contract. We show that deciding the existence of outcomes
satisfying this solution concept is an NP-complete problem even in a special
(flow network) case of our model. It follows that the existence of stable
outcomes (immune to deviations by arbitrary sets of agents) is also an NP-hard
problem in trading networks (and in flow networks). Finally, we show that even
verifying whether a given outcome is stable is NP-complete in trading networks.",Complexity of Stability in Trading Networks,2018-05-22 20:42:34,"Tamás Fleiner, Zsuzsanna Jankó, Ildikó Schlotter, Alexander Teytelboym","http://arxiv.org/abs/1805.08758v2, http://arxiv.org/pdf/1805.08758v2",cs.CC
35499,th,"In principal-agent models, a principal offers a contract to an agent to
perform a certain task. The agent exerts a level of effort that maximizes her
utility. The principal is oblivious to the agent's chosen level of effort, and
conditions her wage only on possible outcomes. In this work, we consider a
model in which the principal is unaware of the agent's utility and action
space: she sequentially offers contracts to identical agents, and observes the
resulting outcomes. We present an algorithm for learning the optimal contract
under mild assumptions. We bound the number of samples needed for the principal
to obtain a contract that is within $\epsilon$ of her optimal net profit for every
$\epsilon>0$. Our results are robust even when considering risk-averse agents.
Furthermore, we show that when there are only two possible outcomes or the
agent is risk-neutral, the algorithm's outcome approximates the optimal
contract described in the classical theory.",Learning Approximately Optimal Contracts,2018-11-16 13:05:42,"Alon Cohen, Moran Koren, Argyrios Deligkas","http://arxiv.org/abs/1811.06736v2, http://arxiv.org/pdf/1811.06736v2",cs.GT
35500,th,"We study the incentive properties of envy-free mechanisms for the allocation
of rooms and payments of rent among financially constrained roommates. Each
agent reports her values for rooms, her housing earmark (soft budget), and an
index that reflects the difficulty the agent experiences from having to pay
over this amount. Then an envy-free allocation for these reports is
recommended. The complete information non-cooperative outcomes of each of these
mechanisms are exactly the envy-free allocations with respect to true
preferences if and only if the admissible budget violation indices have a
bound.",Expressive mechanisms for equitable rent division on a budget,2019-02-08 07:35:33,Rodrigo A. Velez,"http://arxiv.org/abs/1902.02935v3, http://arxiv.org/pdf/1902.02935v3",econ.TH
35501,th,"Evaluation of systemic risk in networks of financial institutions in general
requires information on inter-institution financial exposures. In the framework
of the Debt Rank algorithm, we introduce an approximate method of systemic risk
evaluation which requires only node properties, such as total assets and
liabilities, as inputs. We demonstrate that this approximation captures a large
portion of systemic risk measured by Debt Rank. Furthermore, using Monte Carlo
simulations, we investigate network structures that can amplify systemic risk.
Indeed, while no topology is, in a general sense, {\em a priori} more stable if the
market is liquid [1], larger complexity is detrimental for overall
stability [2]. Here we find that the measure of scalar assortativity correlates
well with the level of systemic risk. In particular, network structures with high
systemic risk are scalar assortative, meaning that risky banks are mostly
exposed to other risky banks. Network structures with low systemic risk are
scalar disassortative, with interactions of risky banks with stable banks.",Controlling systemic risk - network structures that minimize it and node properties to calculate it,2019-02-22 16:28:26,"Sebastian M. Krause, Hrvoje Štefančić, Vinko Zlatić, Guido Caldarelli","http://dx.doi.org/10.1103/PhysRevE.103.042304, http://arxiv.org/abs/1902.08483v1, http://arxiv.org/pdf/1902.08483v1",q-fin.RM
35502,th,"We provide conditions for stable equilibrium in Cournot duopoly models with
tax evasion and time delay. We prove that our conditions actually imply
asymptotically stable equilibrium and delay independence. Conditions include
the same marginal cost and equal probability for evading taxes. We give
examples of cost and inverse demand functions satisfying the proposed
conditions. Some economic interpretations of our results are also included.",Conditions for stable equilibrium in Cournot duopoly models with tax evasion and time delay,2019-05-08 00:35:45,"Raul Villafuerte-Segura, Eduardo Alvarado-Santos, Benjamin A. Itza-Ortiz","http://dx.doi.org/10.1063/1.5131266, http://arxiv.org/abs/1905.02817v2, http://arxiv.org/pdf/1905.02817v2",math.OC
35503,th,"We consider the problem of a decision-maker searching for information on
multiple alternatives when information is learned on all alternatives
simultaneously. The decision-maker has a running cost of searching for
information, and has to decide when to stop searching for information and
choose one alternative. The expected payoff of each alternative evolves as a
diffusion process when information is being learned. We present necessary and
sufficient conditions for the solution, establishing existence and uniqueness.
We show that the optimal boundary where search is stopped (free boundary) is
star-shaped, and present an asymptotic characterization of the value function
and the free boundary. We show properties of how the distance between the free
boundary and the diagonal varies with the number of alternatives, and how the
free boundary under parallel search relates to the one under sequential search,
with and without economies of scale on the search costs.",Parallel Search for Information,2019-05-16 03:54:49,"T. Tony Ke, Wenpin Tang, J. Miguel Villas-Boas, Yuming Zhang","http://arxiv.org/abs/1905.06485v2, http://arxiv.org/pdf/1905.06485v2",econ.TH
35504,th,"In finite games mixed Nash equilibria always exist, but pure equilibria may
fail to exist. To assess the relevance of this nonexistence, we consider games
where the payoffs are drawn at random. In particular, we focus on games where a
large number of players can each choose one of two possible strategies, and the
payoffs are i.i.d. with the possibility of ties. We provide asymptotic results
about the random number of pure Nash equilibria, such as fast growth and a
central limit theorem, with bounds for the approximation error. Moreover, by
using a new link between percolation models and game theory, we describe in
detail the geometry of Nash equilibria and show that, when the probability of
ties is small, a best-response dynamics reaches a Nash equilibrium with a
probability that quickly approaches one as the number of players grows. We show
that a multitude of phase transitions depend only on a single parameter of the
model, that is, the probability of having ties.",Pure Nash Equilibria and Best-Response Dynamics in Random Games,2019-05-26 11:08:35,"Ben Amiet, Andrea Collevecchio, Marco Scarsini, Ziwen Zhong","http://arxiv.org/abs/1905.10758v4, http://arxiv.org/pdf/1905.10758v4",cs.GT
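A small simulation conveys the flavor of these results: draw random two-strategy games with payoffs on a coarse integer grid (fewer levels means more ties), count pure Nash equilibria, and run asynchronous best-response dynamics from a random profile. The grid-based tie model and game size below are assumptions made for the sketch, not the paper's exact model.

```python
import itertools
import random

def random_game(num_players, levels, rng):
    """i.i.d. integer payoffs on {0, ..., levels-1}; small `levels` creates many ties."""
    profiles = itertools.product((0, 1), repeat=num_players)
    return {p: tuple(rng.randrange(levels) for _ in range(num_players)) for p in profiles}

def is_pure_nash(game, profile, num_players):
    for i in range(num_players):
        dev = list(profile)
        dev[i] = 1 - dev[i]
        if game[tuple(dev)][i] > game[profile][i]:
            return False
    return True

def best_response_reaches_ne(game, num_players, rng, max_steps=10000):
    profile = tuple(rng.randrange(2) for _ in range(num_players))
    for _ in range(max_steps):
        improving = []
        for i in range(num_players):
            dev = list(profile)
            dev[i] = 1 - dev[i]
            if game[tuple(dev)][i] > game[profile][i]:
                improving.append(tuple(dev))
        if not improving:
            return True                     # reached a pure Nash equilibrium
        profile = rng.choice(improving)     # a randomly chosen improving player switches
    return False

if __name__ == "__main__":
    rng = random.Random(0)
    n, trials = 10, 200
    for levels in (2, 1000):                # levels=2: ties common; levels=1000: ties rare
        games = [random_game(n, levels, rng) for _ in range(trials)]
        avg_ne = sum(sum(is_pure_nash(g, p, n) for p in g) for g in games) / trials
        conv = sum(best_response_reaches_ne(g, n, rng) for g in games) / trials
        print(f"levels={levels}: avg #pure NE ~ {avg_ne:.1f}, BR dynamics reached a NE in {conv:.0%} of games")
```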
35507,th,"In the current book I suggest an off-road path to the subject of optimal
transport. I tried to avoid prior knowledge of analysis, PDE theory and
functional analysis, as much as possible. Thus I concentrate on discrete and
semi-discrete cases, and always assume compactness for the underlying spaces.
However, some fundamental knowledge of measure theory and convexity is
unavoidable. In order to make it as self-contained as possible I included an
appendix with some basic definitions and results. I believe that any graduate
student in mathematics, as well as advanced undergraduate students, can read
and understand this book. Some chapters (in particular in Parts II & III) can
also be interesting for experts. Starting with the most fundamental, fully
discrete problem, I attempted to place optimal transport as a particular case of
the celebrated stable marriage problem. From there we proceed to the partition
problem, which can be formulated as a transport from a continuous space to a
discrete one. Applications to information theory and game theory (cooperative
and non-cooperative) are introduced as well.
  Finally, the general case of transport between two compact measure spaces is
introduced as a coupling between two semi-discrete transports.",Semi-discrete optimal transport,2019-11-11 18:44:44,Gershon Wolansky,"http://arxiv.org/abs/1911.04348v4, http://arxiv.org/pdf/1911.04348v4",math.OC
35509,th,"Assortment optimization is an important problem that arises in many
industries such as retailing and online advertising where the goal is to find a
subset of products from a universe of substitutable products which maximize
seller's expected revenue. One of the key challenges in this problem is to
model the customer substitution behavior. Many parametric random utility
maximization (RUM) based choice models have been considered in the literature.
However, in all these models, the probability of purchase increases as we add
more products to an assortment. This is not true in general, and in many
settings more choices hurt sales. This is commonly referred to as choice
overload. In this paper we attempt to address this limitation in RUM through a
generalization of the Markov chain based choice model considered in Blanchet et
al. (2016). As a special case, we show that our model reduces to a
generalization of MNL with no-purchase attractions dependent on the assortment
S and strictly increasing with the size of assortment S. While we show that the
assortment optimization under this model is NP-hard, we present a fully
polynomial-time approximation scheme (FPTAS) under reasonable assumptions.",A Generalized Markov Chain Model to Capture Dynamic Preferences and Choice Overload,2019-11-15 19:02:16,"Kumar Goutam, Vineet Goyal, Agathe Soret","http://arxiv.org/abs/1911.06716v4, http://arxiv.org/pdf/1911.06716v4",econ.TH
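For contrast with the assortment-dependent no-purchase attraction described above, here is the plain MNL baseline together with a brute-force assortment search. The attraction values and prices are made up, and brute force is only viable for tiny instances; this is not the paper's Markov chain model or its FPTAS.

```python
import itertools

def mnl_choice_probs(assortment, attraction, no_purchase=1.0):
    """Standard MNL: P(buy i | S) = v_i / (v_0 + sum of v_j over j in S)."""
    denom = no_purchase + sum(attraction[i] for i in assortment)
    return {i: attraction[i] / denom for i in assortment}

def expected_revenue(assortment, attraction, prices, no_purchase=1.0):
    probs = mnl_choice_probs(assortment, attraction, no_purchase)
    return sum(prices[i] * probs[i] for i in assortment)

def best_assortment_bruteforce(products, attraction, prices, no_purchase=1.0):
    best, best_rev = (), 0.0
    for r in range(1, len(products) + 1):
        for s in itertools.combinations(products, r):
            rev = expected_revenue(s, attraction, prices, no_purchase)
            if rev > best_rev:
                best, best_rev = s, rev
    return best, best_rev

if __name__ == "__main__":
    products = ["p1", "p2", "p3", "p4"]
    attraction = {"p1": 1.0, "p2": 0.8, "p3": 0.5, "p4": 0.3}   # made-up attractions
    prices = {"p1": 4.0, "p2": 6.0, "p3": 9.0, "p4": 12.0}      # made-up prices
    print(best_assortment_bruteforce(products, attraction, prices))
```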
35510,th,"Distortion-based analysis has established itself as a fruitful framework for
comparing voting mechanisms. There are $m$ voters and $n$ candidates, jointly embedded in
an (unknown) metric space, and the voters submit rankings of candidates by
non-decreasing distance from themselves. Based on the submitted rankings, the
social choice rule chooses a winning candidate; the quality of the winner is
the sum of the (unknown) distances to the voters. The rule's choice will in
general be suboptimal, and the worst-case ratio between the cost of its chosen
candidate and the optimal candidate is called the rule's distortion. It was
shown in prior work that every deterministic rule has distortion at least 3,
while the Copeland rule and related rules guarantee worst-case distortion at
most 5; a very recent result gave a rule with distortion $2+\sqrt{5} \approx
4.236$.
  We provide a framework based on LP-duality and flow interpretations of the
dual which provides a simpler and more unified way for proving upper bounds on
the distortion of social choice rules. We illustrate the utility of this
approach with three examples. First, we give a fairly simple proof of a strong
generalization of the upper bound of 5 on the distortion of Copeland, to social
choice rules with short paths from the winning candidate to the optimal
candidate in generalized weak preference graphs. A special case of this result
recovers the recent $2+\sqrt{5}$ guarantee. Second, using this generalized
bound, we show that the Ranked Pairs and Schulze rules have distortion
$\Theta(\sqrt{n})$. Finally, our framework naturally suggests a combinatorial
rule that is a strong candidate for achieving distortion 3, which had also been
proposed in recent work. We prove that the distortion bound of 3 would follow
from any of three combinatorial conjectures we formulate.",An Analysis Framework for Metric Voting based on LP Duality,2019-11-17 09:34:11,David Kempe,"http://arxiv.org/abs/1911.07162v3, http://arxiv.org/pdf/1911.07162v3",cs.GT
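The LP-duality framework above is not reproduced here, but the quantity it bounds, the distortion of a rule such as Copeland, is straightforward to evaluate empirically. A sketch on random one-dimensional metric instances; the instance distribution is an assumption, and the observed worst case is only a lower bound on the true worst-case distortion.

```python
import itertools
import random

def copeland_winner(rankings, candidates):
    """Copeland score: one point per pairwise majority win, half a point per pairwise tie."""
    score = {c: 0.0 for c in candidates}
    for a, b in itertools.combinations(candidates, 2):
        a_wins = sum(1 for r in rankings if r.index(a) < r.index(b))
        b_wins = len(rankings) - a_wins
        if a_wins > b_wins:
            score[a] += 1.0
        elif b_wins > a_wins:
            score[b] += 1.0
        else:
            score[a] += 0.5
            score[b] += 0.5
    return max(candidates, key=lambda c: score[c])

def worst_observed_distortion(num_voters=50, num_candidates=5, trials=2000, seed=0):
    rng = random.Random(seed)
    worst = 1.0
    for _ in range(trials):
        voters = [rng.random() for _ in range(num_voters)]        # voter points on the line
        cands = [rng.random() for _ in range(num_candidates)]     # candidate points
        ids = list(range(num_candidates))
        rankings = [sorted(ids, key=lambda c: abs(v - cands[c])) for v in voters]
        winner = copeland_winner(rankings, ids)
        cost = lambda c: sum(abs(v - cands[c]) for v in voters)   # social cost of a candidate
        worst = max(worst, cost(winner) / min(cost(c) for c in ids))
    return worst

if __name__ == "__main__":
    print("worst observed Copeland distortion:", worst_observed_distortion())
```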
35512,th,"In distortion-based analysis of social choice rules over metric spaces, one
assumes that all voters and candidates are jointly embedded in a common metric
space. Voters rank candidates by non-decreasing distance. The mechanism,
receiving only this ordinal (comparison) information, should select a candidate
approximately minimizing the sum of distances from all voters. It is known that
while the Copeland rule and related rules guarantee distortion at most 5, many
other standard voting rules, such as Plurality, Veto, or $k$-approval, have
distortion growing unboundedly in the number $n$ of candidates.
  Plurality, Veto, or $k$-approval with small $k$ require less communication
from the voters than all deterministic social choice rules known to achieve
constant distortion. This motivates our study of the tradeoff between the
distortion and the amount of communication in deterministic social choice
rules.
  We show that any one-round deterministic voting mechanism in which each voter
communicates only the candidates she ranks in a given set of $k$ positions must
have distortion at least $\frac{2n-k}{k}$; we give a mechanism achieving an
upper bound of $O(n/k)$, which matches the lower bound up to a constant. For
more general communication-bounded voting mechanisms, in which each voter
communicates $b$ bits of information about her ranking, we show a slightly
weaker lower bound of $\Omega(n/b)$ on the distortion.
  For randomized mechanisms, it is known that Random Dictatorship achieves
expected distortion strictly smaller than 3, almost matching a lower bound of
$3-\frac{2}{n}$ for any randomized mechanism that only receives each voter's
top choice. We close this gap, by giving a simple randomized social choice rule
which only uses each voter's first choice, and achieves expected distortion
$3-\frac{2}{n}$.","Communication, Distortion, and Randomness in Metric Voting",2019-11-19 10:15:37,David Kempe,"http://arxiv.org/abs/1911.08129v2, http://arxiv.org/pdf/1911.08129v2",cs.GT
35514,th,"We consider the facility location problem in the one-dimensional setting
where each facility can serve a limited number of agents from the algorithmic
and mechanism design perspectives. From the algorithmic perspective, we prove
that the corresponding optimization problem, where the goal is to locate
facilities to minimize either the total cost to all agents or the maximum cost
of any agent is NP-hard. However, we show that the problem is fixed-parameter
tractable, and the optimal solution can be computed in polynomial time whenever
the number of facilities is bounded, or when all facilities have identical
capacities. We then consider the problem from a mechanism design perspective
where the agents are strategic and need not reveal their true locations. We
show that several natural mechanisms studied in the uncapacitated setting
either lose strategyproofness or a bound on the solution quality for the total
or maximum cost objective. We then propose new mechanisms that are
strategyproof and achieve approximation guarantees that almost match the lower
bounds.",Facility Location Problem with Capacity Constraints: Algorithmic and Mechanism Design Perspectives,2019-11-22 05:14:34,"Haris Aziz, Hau Chan, Barton E. Lee, Bo Li, Toby Walsh","http://arxiv.org/abs/1911.09813v1, http://arxiv.org/pdf/1911.09813v1",cs.GT
35516,th,"A fundamental result in cake cutting states that for any number of players
with arbitrary preferences over a cake, there exists a division of the cake
such that every player receives a single contiguous piece and no player is left
envious. We generalize this result by showing that it is possible to partition
the players into groups of any desired sizes and divide the cake among the
groups, so that each group receives a single contiguous piece and no player
finds the piece of another group better than that of the player's own group.",How to Cut a Cake Fairly: A Generalization to Groups,2020-01-10 10:19:18,"Erel Segal-Halevi, Warut Suksompong","http://dx.doi.org/10.1080/00029890.2021.1835338, http://arxiv.org/abs/2001.03327v3, http://arxiv.org/pdf/2001.03327v3",econ.TH
35517,th,"We model the production of complex goods in a large supply network. Each firm
sources several essential inputs through relationships with other firms.
Individual supply relationships are at risk of idiosyncratic failure, which
threatens to disrupt production. To protect against this, firms multisource
inputs and strategically invest to make relationships stronger, trading off the
cost of investment against the benefits of increased robustness. A supply
network is called fragile if aggregate output is very sensitive to small
aggregate shocks. We show that supply networks of intermediate productivity are
fragile in equilibrium, even though this is always inefficient. The endogenous
configuration of supply networks provides a new channel for the powerful
amplification of shocks.",Supply Network Formation and Fragility,2020-01-12 08:13:38,"Matthew Elliott, Benjamin Golub, Matthew V. Leduc","http://arxiv.org/abs/2001.03853v7, http://arxiv.org/pdf/2001.03853v7",econ.TH
35519,th,"How should one combine noisy information from diverse sources to make an
inference about an objective ground truth? This frequently recurring, normative
question lies at the core of statistics, machine learning, policy-making, and
everyday life. It has been called ""combining forecasts"", ""meta-analysis"",
""ensembling"", and the ""MLE approach to voting"", among other names. Past studies
typically assume that noisy votes are identically and independently distributed
(i.i.d.), but this assumption is often unrealistic. Instead, we assume that
votes are independent but not necessarily identically distributed and that our
ensembling algorithm has access to certain auxiliary information related to the
underlying model governing the noise in each vote. In our present work, we: (1)
define our problem and argue that it reflects common and socially relevant real
world scenarios, (2) propose a multi-arm bandit noise model and count-based
auxiliary information set, (3) derive maximum likelihood aggregation rules for
ranked and cardinal votes under our noise model, (4) propose, alternatively, to
learn an aggregation rule using an order-invariant neural network, and (5)
empirically compare our rules to common voting rules and naive
experience-weighted modifications. We find that our rules successfully use
auxiliary information to outperform the naive baselines.",Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes,2020-01-28 00:21:19,"Silviu Pitis, Michael R. Zhang","http://arxiv.org/abs/2001.10092v1, http://arxiv.org/pdf/2001.10092v1",cs.MA
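For the simplest instance of the "MLE approach to voting" mentioned above, a binary ground truth with independent voters whose accuracies are the auxiliary information, the maximum-likelihood rule is a log-odds-weighted majority. A minimal sketch of that textbook special case; it is not the paper's bandit noise model or its learned aggregator, and the votes and accuracies below are invented.

```python
import math

def mle_binary_aggregate(votes, accuracies):
    """Votes are +1/-1 and accuracies[i] = P(voter i is correct). The maximum-likelihood
    estimate of the ground truth is the sign of the log-odds-weighted vote total."""
    total = sum(v * math.log(p / (1.0 - p)) for v, p in zip(votes, accuracies))
    return 1 if total >= 0 else -1

if __name__ == "__main__":
    votes = [1, -1, 1, -1, -1]
    accuracies = [0.9, 0.6, 0.55, 0.65, 0.6]     # auxiliary information about each voter
    print("unweighted majority:", 1 if sum(votes) >= 0 else -1)              # -> -1
    print("MLE (weighted) vote:", mle_binary_aggregate(votes, accuracies))   # -> +1
```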
35520,th,"While fictitious play is guaranteed to converge to Nash equilibrium in
certain game classes, such as two-player zero-sum games, it is not guaranteed
to converge in non-zero-sum and multiplayer games. We show that fictitious play
in fact leads to better Nash equilibrium approximation, over a variety of game
classes and sizes, than (counterfactual) regret minimization, which has recently
produced superhuman play for multiplayer poker. We also show that when
fictitious play is run several times using random initializations it is able to
solve several known challenge problems in which the standard version is known
to not converge, including Shapley's classic counterexample. These provide some
of the first positive results for fictitious play in these settings, despite
the fact that worst-case theoretical results are negative.",Empirical Analysis of Fictitious Play for Nash Equilibrium Computation in Multiplayer Games,2020-01-30 06:47:09,Sam Ganzfried,"http://arxiv.org/abs/2001.11165v8, http://arxiv.org/pdf/2001.11165v8",cs.GT
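For reference, a bare-bones two-player fictitious play loop is enough to reproduce the classical convergence behavior the abstract builds on; matching pennies is used below as an illustrative zero-sum game, and this is not the authors' experimental setup.

```python
import random

def fictitious_play(payoff_a, payoff_b, rounds=5000, seed=0):
    """Each round, both players best-respond to the empirical frequencies of the
    opponent's past actions; returns the empirical mixed strategies."""
    rng = random.Random(seed)
    n_a, n_b = len(payoff_a), len(payoff_a[0])
    counts_a, counts_b = [0] * n_a, [0] * n_b
    a, b = rng.randrange(n_a), rng.randrange(n_b)
    for _ in range(rounds):
        counts_a[a] += 1
        counts_b[b] += 1
        a = max(range(n_a), key=lambda i: sum(payoff_a[i][j] * counts_b[j] for j in range(n_b)))
        b = max(range(n_b), key=lambda j: sum(payoff_b[i][j] * counts_a[i] for i in range(n_a)))
    return [c / rounds for c in counts_a], [c / rounds for c in counts_b]

if __name__ == "__main__":
    # Matching pennies: empirical frequencies approach the (1/2, 1/2) equilibrium.
    payoff_a = [[1, -1], [-1, 1]]
    payoff_b = [[-1, 1], [1, -1]]
    print(fictitious_play(payoff_a, payoff_b))
```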
35551,th,"We state and prove Kuhn's equivalence theorem for a new representation of
games, the intrinsic form. First, we introduce games in intrinsic form where
information is represented by $\sigma$-fields over a product set. For this
purpose, we adapt to games the intrinsic representation that Witsenhausen
introduced in control theory. Those intrinsic games do not require an explicit
description of the temporal ordering of play, in contrast to extensive form games on
trees. Second, we prove, for this new and more general representation of games,
that behavioral and mixed strategies are equivalent under perfect recall
(Kuhn's theorem). As the intrinsic form replaces the tree structure with a
product structure, the handling of information is easier. This makes the
intrinsic form a new valuable tool for the analysis of games with information.",Kuhn's Equivalence Theorem for Games in Intrinsic Form,2020-06-26 10:35:21,"Benjamin Heymann, Michel de Lara, Jean-Philippe Chancelier","http://arxiv.org/abs/2006.14838v1, http://arxiv.org/pdf/2006.14838v1",math.OC
35521,th,"We study the design of decision-making mechanism for resource allocations
over a multi-agent system in a dynamic environment. Agents' privately observed
preference over resources evolves over time and the population is dynamic due
to the adoption of stopping rules. The proposed model designs the rules of
encounter for agents participating in the dynamic mechanism by specifying an
allocation rule and three payment rules to elicit agents' coupled decision
makings of honest preference reporting and optimal stopping over multiple
periods. The mechanism provides a special posted-price payment rule that
depends only on each agent's realized stopping time to directly influence the
population dynamics. This letter focuses on the theoretical implementability of
the rules in perfect Bayesian Nash equilibrium and characterizes necessary and
sufficient conditions to guarantee agents' honest equilibrium behaviors over
periods. We provide the design principles to construct the payments in terms of
the allocation rules and identify the restrictions on the designer's ability to
influence the population dynamics. The established conditions reduce the
designer's problem of finding multiple rules to that of determining an optimal
allocation rule.",Implementability of Honest Multi-Agent Sequential Decision-Making with Dynamic Population,2020-03-06 16:06:47,"Tao Zhang, Quanyan Zhu","http://arxiv.org/abs/2003.03173v2, http://arxiv.org/pdf/2003.03173v2",eess.SY
35523,th,"We consider a robust version of the revenue maximization problem, where a
single seller wishes to sell $n$ items to a single unit-demand buyer. In this
robust version, the seller knows the buyer's marginal value distribution for
each item separately, but not the joint distribution, and prices the items to
maximize revenue in the worst case over all compatible correlation structures.
We devise a computationally efficient (polynomial in the support size of the
marginals) algorithm that computes the worst-case joint distribution for any
choice of item prices. And yet, in sharp contrast to the additive buyer case
(Carroll, 2017), we show that it is NP-hard to approximate the optimal choice
of prices to within any factor better than $n^{1/2-\epsilon}$. For the special
case of marginal distributions that satisfy the monotone hazard rate property,
we show how to guarantee a constant fraction of the optimal worst-case revenue
using item pricing; this pricing equates revenue across all possible
correlations and can be computed efficiently.",Escaping Cannibalization? Correlation-Robust Pricing for a Unit-Demand Buyer,2020-03-12 20:24:56,"Moshe Babaioff, Michal Feldman, Yannai A. Gonczarowski, Brendan Lucier, Inbal Talgam-Cohen","http://arxiv.org/abs/2003.05913v2, http://arxiv.org/pdf/2003.05913v2",cs.GT
35524,th,"Over the last few decades, electricity markets around the world have adopted
multi-settlement structures, allowing for balancing of supply and demand as
more accurate forecast information becomes available. Given increasing
uncertainty due to adoption of renewables, more recent market design work has
focused on optimization of expectation of some quantity, e.g. social welfare.
However, social planners and policy makers are often risk averse, so that such
risk neutral formulations do not adequately reflect prevailing attitudes
towards risk, nor explain the decisions that follow. Hence we incorporate the
commonly used risk measure conditional value at risk (CVaR) into the central
planning objective, and study how a two-stage market operates when the
individual generators are risk neutral. Our primary result is to show existence
(by construction) of a sequential competitive equilibrium (SCEq) in this
risk-aware two-stage market. Given equilibrium prices, we design a market
mechanism which achieves social cost minimization assuming that agents are
non-strategic.",A Risk Aware Two-Stage Market Mechanism for Electricity with Renewable Generation,2020-03-13 08:19:43,"Nathan Dahlin, Rahul Jain","http://arxiv.org/abs/2003.06119v1, http://arxiv.org/pdf/2003.06119v1",eess.SY
35525,th,"In this paper we consider a local service-requirement assignment problem
named exact capacitated domination from an algorithmic point of view. This
problem aims to find a solution (a Nash equilibrium) to a game-theoretic model
of public good provision. In the problem we are given a capacitated graph, a
graph with a parameter defined on each vertex that is interpreted as the
capacity of that vertex. The objective is to find a DP-Nash subgraph: a
spanning bipartite subgraph with partite sets D and P, called the D-set and
P-set respectively, such that no vertex in P is isolated and that each vertex
in D is adjacent to a number of vertices equal to its capacity. We show that
whether a capacitated graph has a unique DP-Nash subgraph can be decided in
polynomial time. However, we also show that the nearby problem of deciding
whether a capacitated graph has a unique D-set is co-NP-complete.",Exact capacitated domination: on the computational complexity of uniqueness,2020-03-16 13:47:10,"Gregory Gutin, Philip R Neary, Anders Yeo","http://arxiv.org/abs/2003.07106v3, http://arxiv.org/pdf/2003.07106v3",math.CO
35526,th,"Motivated by empirical evidence that individuals within group decision making
simultaneously aspire to maximize utility and avoid inequality, we propose a
criterion based on the entropy-norm pair for geometric selection of strict Nash
equilibria in n-person games. For this, we introduce a mapping of an n-person
set of Nash equilibrium utilities in an Entropy-Norm space. We suggest that the
most suitable group choice is the equilibrium closest to the largest
entropy-norm pair of a rescaled Entropy-Norm space. Successive application of
this criterion permits an ordering of the possible Nash equilibria in an
n-person game, accounting simultaneously for the equality and utility of players'
payoffs. Limitations of this approach for certain exceptional cases are
discussed. In addition, the criterion proposed is applied and compared with the
results of a group decision making experiment.",Entropy-Norm space for geometric selection of strict Nash equilibria in n-person games,2020-03-20 15:30:57,"A. B. Leoneti, G. A. Prataviera","http://dx.doi.org/10.1016/j.physa.2020.124407, http://arxiv.org/abs/2003.09225v1, http://arxiv.org/pdf/2003.09225v1",physics.soc-ph
35768,th,"A menu description presents a mechanism to player $i$ in two steps. Step (1)
uses the reports of other players to describe $i$'s menu: the set of $i$'s
potential outcomes. Step (2) uses $i$'s report to select $i$'s favorite outcome
from her menu. Can menu descriptions better expose strategyproofness, without
sacrificing simplicity? We propose a new, simple menu description of Deferred
Acceptance. We prove that -- in contrast with other common matching mechanisms
-- this menu description must differ substantially from the corresponding
traditional description. We demonstrate, with a lab experiment on two
elementary mechanisms, the promise and challenges of menu descriptions.",Strategyproofness-Exposing Mechanism Descriptions,2022-09-27 07:31:42,"Yannai A. Gonczarowski, Ori Heffetz, Clayton Thomas","http://arxiv.org/abs/2209.13148v2, http://arxiv.org/pdf/2209.13148v2",econ.TH
35527,th,"Reinforcement learning algorithms describe how an agent can learn an optimal
action policy in a sequential decision process, through repeated experience. In
a given environment, the agent's policy provides him with running and terminal
rewards. As in online learning, the agent learns sequentially. As in
multi-armed bandit problems, when an agent picks an action, he cannot infer
ex-post the rewards induced by other action choices. In reinforcement learning,
his actions have consequences: they influence not only rewards, but also future
states of the world. The goal of reinforcement learning is to find an optimal
policy -- a mapping from the states of the world to the set of actions, in
order to maximize cumulative reward, which is a long term strategy. Exploring
might be sub-optimal on a short-term horizon but could lead to optimal
long-term outcomes. Many problems of optimal control, popular in economics for more
than forty years, can be expressed in the reinforcement learning framework, and
recent advances in computational science, provided in particular by deep
learning algorithms, can be used by economists in order to solve complex
behavioral problems. In this article, we present a state-of-the-art overview of
reinforcement learning techniques, and present applications in economics, game
theory, operations research and finance.",Reinforcement Learning in Economics and Finance,2020-03-23 01:31:35,"Arthur Charpentier, Romuald Elie, Carl Remlinger","http://arxiv.org/abs/2003.10014v1, http://arxiv.org/pdf/2003.10014v1",econ.TH
35528,th,"In 1979, Hylland and Zeckhauser \cite{hylland} gave a simple and general
scheme for implementing a one-sided matching market using the power of a
pricing mechanism. Their method has nice properties -- it is incentive
compatible in the large and produces an allocation that is Pareto optimal --
and hence it provides an attractive, off-the-shelf method for running an
application involving such a market. With matching markets becoming ever more
prevalent and impactful, it is imperative to finally settle the computational
complexity of this scheme.
  We present the following partial resolution:
  1. A combinatorial, strongly polynomial time algorithm for the special case
of $0/1$ utilities.
  2. An example that has only irrational equilibria, hence proving that this
problem is not in PPAD. Furthermore, its equilibria are disconnected, hence
showing that the problem does not admit a convex programming formulation.
  3. A proof of membership of the problem in the class FIXP.
  We leave open the (difficult) question of determining if the problem is
FIXP-hard. Settling the status of the special case when utilities are in the
set $\{0, {\frac 1 2}, 1 \}$ appears to be even more difficult.",Computational Complexity of the Hylland-Zeckhauser Scheme for One-Sided Matching Markets,2020-04-03 05:53:09,"Vijay V. Vazirani, Mihalis Yannakakis","http://arxiv.org/abs/2004.01348v6, http://arxiv.org/pdf/2004.01348v6",cs.GT
35529,th,"We consider the sale of a single item to multiple buyers by a
revenue-maximizing seller. Recent work of Akbarpour and Li formalizes
\emph{credibility} as an auction desideratum, and prove that the only optimal,
credible, strategyproof auction is the ascending price auction with reserves
(Akbarpour and Li, 2019).
  In contrast, when buyers' valuations are MHR, we show that the mild
additional assumption of a cryptographically secure commitment scheme suffices
for a simple \emph{two-round} auction which is optimal, strategyproof, and
credible (even when the number of bidders is only known by the auctioneer).
  We extend our analysis to the case when buyer valuations are
$\alpha$-strongly regular for any $\alpha > 0$, up to arbitrary $\varepsilon$
in credibility. Interestingly, we also prove that this construction cannot be
extended to regular distributions, nor can the $\varepsilon$ be removed with
multiple bidders.","Credible, Truthful, and Two-Round (Optimal) Auctions via Cryptographic Commitments",2020-04-03 17:43:02,"Matheus V. X. Ferreira, S. Matthew Weinberg","http://dx.doi.org/10.1145/3391403.3399495, http://arxiv.org/abs/2004.01598v2, http://arxiv.org/pdf/2004.01598v2",cs.GT
35530,th,"A fundamental property of choice functions is stability, which, loosely
speaking, prescribes that choice sets are invariant under adding and removing
unchosen alternatives. We provide several structural insights that improve our
understanding of stable choice functions. In particular, (i) we show that every
stable choice function is generated by a unique simple choice function, which
never excludes more than one alternative, (ii) we completely characterize which
simple choice functions give rise to stable choice functions, and (iii) we
prove a strong relationship between stability and a new property of tournament
solutions called local reversal symmetry. Based on these findings, we provide
the first concrete tournament---consisting of 24 alternatives---in which the
tournament equilibrium set fails to be stable. Furthermore, we prove that there
is no more discriminating stable tournament solution than the bipartisan set
and that the bipartisan set is the unique most discriminating tournament
solution which satisfies standard properties proposed in the literature.",On the Structure of Stable Tournament Solutions,2020-04-03 19:16:00,"Felix Brandt, Markus Brill, Hans Georg Seedig, Warut Suksompong","http://dx.doi.org/10.1007/s00199-016-1024-x, http://arxiv.org/abs/2004.01651v1, http://arxiv.org/pdf/2004.01651v1",econ.TH
35531,th,"We propose a Condorcet consistent voting method that we call Split Cycle.
Split Cycle belongs to the small family of known voting methods satisfying the
anti-vote-splitting criterion of independence of clones. In this family, only
Split Cycle satisfies a new criterion we call immunity to spoilers, which
concerns adding candidates to elections, as well as the known criteria of
positive involvement and negative involvement, which concern adding voters to
elections. Thus, in contrast to other clone-independent methods, Split Cycle
mitigates both ""spoiler effects"" and ""strong no show paradoxes.""",Split Cycle: A New Condorcet Consistent Voting Method Independent of Clones and Immune to Spoilers,2020-04-06 02:20:17,"Wesley H. Holliday, Eric Pacuit","http://dx.doi.org/10.1007/s11127-023-01042-3, http://arxiv.org/abs/2004.02350v10, http://arxiv.org/pdf/2004.02350v10",cs.GT
35533,th,"We study a resource allocation setting where $m$ discrete items are to be
divided among $n$ agents with additive utilities, and the agents' utilities for
individual items are drawn at random from a probability distribution. Since
common fairness notions like envy-freeness and proportionality cannot always be
satisfied in this setting, an important question is when allocations satisfying
these notions exist. In this paper, we close several gaps in the line of work
on asymptotic fair division. First, we prove that the classical round-robin
algorithm is likely to produce an envy-free allocation provided that
$m=\Omega(n\log n/\log\log n)$, matching the lower bound from prior work. We
then show that a proportional allocation exists with high probability as long
as $m\geq n$, while an allocation satisfying envy-freeness up to any item (EFX)
is likely to be present for any relation between $m$ and $n$. Finally, we
consider a related setting where each agent is assigned exactly one item and
the remaining items are left unassigned, and show that the transition from
non-existence to existence with respect to envy-free assignments occurs at
$m=en$.",Closing Gaps in Asymptotic Fair Division,2020-04-12 11:21:09,"Pasin Manurangsi, Warut Suksompong","http://dx.doi.org/10.1137/20M1353381, http://arxiv.org/abs/2004.05563v1, http://arxiv.org/pdf/2004.05563v1",cs.GT
35534,th,"We present an analysis of the Proof-of-Work consensus algorithm, used on the
Bitcoin blockchain, using a Mean Field Game framework. Using a master equation,
we provide an equilibrium characterization of the total computational power
devoted to mining the blockchain (hashrate). From a simple setting we show how
the master equation approach allows us to enrich the model by relaxing most of
the simplifying assumptions. The essential structure of the game is preserved
across all the enrichments. In deterministic settings, the hashrate ultimately
reaches a steady state in which it increases at the rate of technological
progress. In stochastic settings, there exists a target for the hashrate for
every possible random state. As a consequence, we show that in equilibrium the
security of the underlying blockchain is either $i)$ constant, or $ii)$
increases with the demand for the underlying cryptocurrency.",Mean Field Game Approach to Bitcoin Mining,2020-04-17 13:57:33,"Charles Bertucci, Louis Bertucci, Jean-Michel Lasry, Pierre-Louis Lions","http://arxiv.org/abs/2004.08167v1, http://arxiv.org/pdf/2004.08167v1",econ.TH
35535,th,"This paper develops the category $\mathbf{NCG}$. Its objects are
node-and-choice games, which include essentially all extensive-form games. Its
morphisms allow arbitrary transformations of a game's nodes, choices, and
players, as well as monotonic transformations of the utility functions of the
game's players. Among the morphisms are subgame inclusions. Several
characterizations and numerous properties of the isomorphisms are derived. For
example, it is shown that isomorphisms preserve the game-theoretic concepts of
no-absentmindedness, perfect-information, and (pure-strategy) Nash-equilibrium.
Finally, full subcategories are defined for choice-sequence games and
choice-set games, and relationships among these two subcategories and
$\mathbf{NCG}$ itself are expressed and derived via isomorphic inclusions and
equivalences.",The Category of Node-and-Choice Extensive-Form Games,2020-04-23 17:41:59,Peter A. Streufert,"http://arxiv.org/abs/2004.11196v2, http://arxiv.org/pdf/2004.11196v2",econ.TH
35536,th,"We study search, evaluation, and selection of candidates of unknown quality
for a position. We examine the effects of ""soft"" affirmative action policies
increasing the relative percentage of minority candidates in the candidate
pool. We show that, while meant to encourage minority hiring, such policies may
backfire if the evaluation of minority candidates is noisier than that of
non-minorities. This may occur even if minorities are at least as qualified and
as valuable as non-minorities. The results provide a possible explanation for
why certain soft affirmative action policies have proved counterproductive,
even in the absence of (implicit) biases.",Soft Affirmative Action and Minority Recruitment,2020-04-30 20:01:35,"Daniel Fershtman, Alessandro Pavan","http://arxiv.org/abs/2004.14953v1, http://arxiv.org/pdf/2004.14953v1",econ.TH
35537,th,"We introduce an algorithmic decision process for multialternative choice that
combines binary comparisons and Markovian exploration. We show that a
preferential property, transitivity, makes it testable.",Multialternative Neural Decision Processes,2020-05-03 16:19:37,"Carlo Baldassi, Simone Cerreia-Vioglio, Fabio Maccheroni, Massimo Marinacci, Marco Pirazzini","http://arxiv.org/abs/2005.01081v5, http://arxiv.org/pdf/2005.01081v5",cs.AI
35538,th,"In this paper, we consider a discrete-time stochastic Stackelberg game with a
single leader and multiple followers. Both the followers and the leader
together have conditionally independent private types, conditioned on action
and previous state, that evolve as controlled Markov processes. The objective
is to compute the stochastic Stackelberg equilibrium of the game where the
leader commits to a dynamic strategy. Each follower's strategy is the best
response to the leader's and other followers' strategies, while the
leader's strategy is optimal given that the followers play their best responses.
In general, computing such equilibrium involves solving a fixed-point equation
for the whole game. In this paper, we present a backward recursive algorithm
that computes such strategies by solving smaller fixed-point equations for each
time $t$. Based on this algorithm, we compute stochastic Stackelberg
equilibrium of a security example and a dynamics information design example
used in~\cite{El17} (beeps).",Sequential decomposition of stochastic Stackelberg games,2020-05-05 11:24:28,Deepanshu Vasal,"http://arxiv.org/abs/2005.01997v2, http://arxiv.org/pdf/2005.01997v2",math.OC
35539,th,"In~[1],authors considered a general finite horizon model of dynamic game of
asymmetric information, where N players have types evolving as independent
Markovian process, where each player observes its own type perfectly and
actions of all players. The authors present a sequential decomposition
algorithm to find all structured perfect Bayesian equilibria of the game. The
algorithm consists of solving a class of fixed-point of equations for each time
$t,\pi_t$, whose existence was left as an open question. In this paper, we
prove existence of these fixed-point equations for compact metric spaces.",Existence of structured perfect Bayesian equilibrium in dynamic games of asymmetric information,2020-05-12 10:37:44,Deepanshu Vasal,"http://arxiv.org/abs/2005.05586v2, http://arxiv.org/pdf/2005.05586v2",cs.GT
35607,th,"The paper proposes a natural measure space of zero-sum perfect information
games with upper semicontinuous payoffs. Each game is specified by the game
tree, and by the assignment of the active player and of the capacity to each
node of the tree. The payoff in a game is defined as the infimum of the
capacity over the nodes that have been visited during the play. The active
player, the number of children, and the capacity are drawn from a given joint
distribution independently across the nodes. We characterize the cumulative
distribution function of the value $v$ using the fixed points of the so-called
value generating function. The characterization leads to a necessary and
sufficient condition for the event $v \geq k$ to occur with positive
probability. We also study probabilistic properties of the set of Player I's
$k$-optimal strategies and the corresponding plays.",Random perfect information games,2021-04-21 16:38:03,"János Flesch, Arkadi Predtetchinski, Ville Suomala","http://arxiv.org/abs/2104.10528v1, http://arxiv.org/pdf/2104.10528v1",cs.GT
35540,th,"In a two-player zero-sum graph game the players move a token throughout a
graph to produce an infinite path, which determines the winner or payoff of the
game. Traditionally, the players alternate turns in moving the token. In {\em
bidding games}, however, the players have budgets, and in each turn, we hold an
""auction"" (bidding) to determine which player moves the token: both players
simultaneously submit bids and the higher bidder moves the token. The bidding
mechanisms differ in their payment schemes. Bidding games were largely studied
with variants of {\em first-price} bidding in which only the higher bidder pays
his bid. We focus on {\em all-pay} bidding, where both players pay their bids.
Finite-duration all-pay bidding games were studied and shown to be technically
more challenging than their first-price counterparts. We study, for the first
time, infinite-duration all-pay bidding games. Our most interesting results are
for {\em mean-payoff} objectives: we portray a complete picture for games
played on strongly-connected graphs. We study both pure (deterministic) and
mixed (probabilistic) strategies and completely characterize the optimal sure
and almost-sure (with probability $1$) payoffs that the players can
respectively guarantee. We show that mean-payoff games under all-pay bidding
exhibit the intriguing mathematical properties of their first-price
counterparts; namely, an equivalence with {\em random-turn games} in which in
each turn, the player who moves is selected according to a (biased) coin toss.
The equivalences for all-pay bidding are more intricate and unexpected than for
first-price bidding.",Infinite-Duration All-Pay Bidding Games,2020-05-12 11:51:46,"Guy Avni, Ismaël Jecker, Đorđe Žikelić","http://arxiv.org/abs/2005.06636v2, http://arxiv.org/pdf/2005.06636v2",econ.TH
35541,th,"We consider the problem of dynamic information design with one sender and one
receiver, where the sender observes a private state of the system and takes an
action to send a signal based on its observation to a receiver. Based on this
signal, the receiver takes an action that determines rewards for both the
sender and the receiver and controls the state of the system. In this technical
note, we show that this problem can be considered as a problem of dynamic game
of asymmetric information and its perfect Bayesian equilibrium (PBE) and
Stackelberg equilibrium (SE) can be analyzed using the algorithms presented in
[1], [2] by the same author (among others). We then extend this model when
there is one sender and multiple receivers and provide algorithms to compute a
class of equilibria of this game.",Dynamic information design,2020-05-13 20:26:08,Deepanshu Vasal,"http://arxiv.org/abs/2005.07267v1, http://arxiv.org/pdf/2005.07267v1",econ.TH
35542,th,"Stable matching in a community consisting of $N$ men and $N$ women is a
classical combinatorial problem that has been the subject of intense
theoretical and empirical study since its introduction in 1962 in a seminal
paper by Gale and Shapley.
  When the input preference profile is generated from a distribution, we study
the output distribution of two stable matching procedures:
women-proposing-deferred-acceptance and men-proposing-deferred-acceptance. We
show that the two procedures are ex-ante equivalent: that is, under certain
conditions on the input distribution, their output distributions are identical.
  In terms of technical contributions, we generalize (to the non-uniform case)
an integral formula, due to Knuth and Pittel, which gives the probability that
a fixed matching is stable. Using an inclusion-exclusion principle on the set
of rotations, we give a new formula which gives the probability that a fixed
matching is the women/men-optimal stable matching. We show that those two
probabilities are equal with an integration by substitution.",Two-Sided Random Matching Markets: Ex-Ante Equivalence of the Deferred Acceptance Procedures,2020-05-18 13:49:39,Simon Mauras,"http://dx.doi.org/10.1145/3391403.3399448, http://arxiv.org/abs/2005.08584v1, http://arxiv.org/pdf/2005.08584v1",cs.GT
35543,th,"We consider two models of computation for Tarski's order preserving function
f related to fixed points in a complete lattice: the oracle function model and
the polynomial function model. In both models, we find the first polynomial
time algorithm for finding a Tarski's fixed point. In addition, we provide a
matching oracle bound for determining the uniqueness in the oracle function
model and prove it is Co-NP hard in the polynomial function model. The
existence of the pure Nash equilibrium in supermodular games is proved by
Tarski's fixed point theorem. Exploring the difference between supermodular
games and Tarski's fixed point, we also develop the computational results for
finding one pure Nash equilibrium and determining the uniqueness of the
equilibrium in supermodular games.",Computations and Complexities of Tarski's Fixed Points and Supermodular Games,2020-05-20 06:32:37,"Chuangyin Dang, Qi Qi, Yinyu Ye","http://arxiv.org/abs/2005.09836v1, http://arxiv.org/pdf/2005.09836v1",cs.GT
35544,th,"If agents cooperate only within small groups of some bounded sizes, is there
a way to partition the population into small groups such that no collection of
agents can do better by forming a new group? This paper revisits the f-core in a
transferable utility setting. By providing a new formulation of the problem, we
build a link between the f-core and transportation theory. Such a link helps
us to establish an exact existence result and a characterization result for the
f-core for a general class of agents, as well as some improvements in computing
the f-core in the finite-type case.",Cooperation in Small Groups -- an Optimal Transport Approach,2020-05-22 18:56:08,Xinyang Wang,"http://arxiv.org/abs/2005.11244v1, http://arxiv.org/pdf/2005.11244v1",econ.TH
35545,th,"In fair division problems, we are given a set $S$ of $m$ items and a set $N$
of $n$ agents with individual preferences, and the goal is to find an
allocation of items among agents so that each agent finds the allocation fair.
There are several established fairness concepts and envy-freeness is one of the
most extensively studied ones. However, envy-free allocations do not always
exist when items are indivisible, and this has motivated relaxations of
envy-freeness: envy-freeness up to one item (EF1) and envy-freeness up to any
item (EFX) are two well-studied relaxations. We consider the problem of finding
EF1 and EFX allocations for utility functions that are not necessarily
monotone, and propose four possible extensions of different strength to this
setting.
  In particular, we present a polynomial-time algorithm for finding an EF1
allocation for two agents with arbitrary utility functions. An example is given
showing that EFX allocations need not exist for two agents with non-monotone,
non-additive, identical utility functions. However, when all agents have
monotone (not necessarily additive) identical utility functions, we prove that
an EFX allocation of chores always exists. As a step toward understanding the
general case, we discuss two subclasses of utility functions: Boolean utilities
that are $\{0,+1\}$-valued functions, and negative Boolean utilities that are
$\{0,-1\}$-valued functions. For the latter, we give a polynomial time
algorithm that finds an EFX allocation when the utility functions are
identical.","Envy-free Relaxations for Goods, Chores, and Mixed Items",2020-06-08 12:25:31,"Kristóf Bérczi, Erika R. Bérczi-Kovács, Endre Boros, Fekadu Tolessa Gedefa, Naoyuki Kamiyama, Telikepalli Kavitha, Yusuke Kobayashi, Kazuhisa Makino","http://arxiv.org/abs/2006.04428v1, http://arxiv.org/pdf/2006.04428v1",econ.TH
35546,th,"We survey the design of elections that are resilient to attempted
interference by third parties. For example, suppose votes have been cast in an
election between two candidates, and then each vote is randomly changed with a
small probability, independently of the other votes. It is desirable to keep
the outcome of the election the same, regardless of the changes to the votes.
It is well known that the US electoral college system is about 5 times more
likely to have a changed outcome due to vote corruption, when compared to a
majority vote. In fact, Mossel, O'Donnell and Oleszkiewicz proved in 2005 that
the majority voting method is most stable to this random vote corruption, among
voting methods where each person has a small influence on the election. We
discuss some recent progress on the analogous result for elections between more
than two candidates. In this case, plurality should be most stable to
corruption in votes. We also survey results on adversarial election
manipulation (where an adversary can select particular votes to change, perhaps
in a non-random way), and we briefly discuss ranked choice voting methods
(where a vote is a ranked list of candidates).",Designing Stable Elections: A Survey,2020-06-09 21:59:48,Steven Heilman,"http://arxiv.org/abs/2006.05460v2, http://arxiv.org/pdf/2006.05460v2",math.PR
35547,th,"A population of voters must elect representatives among themselves to decide
on a sequence of possibly unforeseen binary issues. Voters care only about the
final decision, not the elected representatives. The disutility of a voter is
proportional to the fraction of issues, where his preferences disagree with the
decision.
  While an issue-by-issue vote by all voters would maximize social welfare, we
are interested in how well the preferences of the population can be
approximated by a small committee.
  We show that a k-sortition (a random committee of k voters with the majority
vote within the committee) leads to an outcome within the factor 1+O(1/k) of
the optimal social cost for any number of voters n, any number of issues $m$,
and any preference profile.
  For a small number of issues m, the social cost can be made even closer to
optimal by delegation procedures that weigh committee members according to
their number of followers. However, for large m, we demonstrate that the
k-sortition is the worst-case optimal rule within a broad family of
committee-based rules that take into account metric information about the
preference profile of the whole population.",Representative Committees of Peers,2020-06-14 11:20:47,"Reshef Meir, Fedor Sandomirskiy, Moshe Tennenholtz","http://arxiv.org/abs/2006.07837v1, http://arxiv.org/pdf/2006.07837v1",cs.GT
35548,th,"We study the problem of modeling purchase of multiple products and utilizing
it to display optimized recommendations for online retailers and e-commerce
platforms.
  We present a parsimonious multi-purchase family of choice models called the
Bundle-MVL-K family, and develop a binary search based iterative strategy that
efficiently computes optimized recommendations for this model. We establish the
hardness of computing optimal recommendation sets, and derive several
structural properties of the optimal solution that aid in speeding up
computation. This is one of the first attempts at operationalizing the
multi-purchase class of choice models. We show one of the first quantitative
links between modeling multiple purchase behavior and revenue gains. The
efficacy of our modeling and optimization techniques compared to competing
solutions is shown using several real world datasets on multiple metrics such
as model fitness, expected revenue gains and run-time reductions. For example,
the expected revenue benefit of taking multiple purchases into account is
observed to be $\sim5\%$ in relative terms for the Ta Feng and UCI shopping
datasets, when compared to the MNL model for instances with $\sim 1500$
products. Additionally, across $6$ real world datasets, the test log-likelihood
fits of our models are on average $17\%$ better in relative terms. Our work
contributes to the study of multi-purchase decisions, analyzing consumer demand
and the retailer's optimization problem. The simplicity of our models and the
iterative nature of our optimization technique allows practitioners to meet
stringent computational constraints while increasing their revenues in
practical recommendation applications at scale, especially in e-commerce
platforms and other marketplaces.","Multi-Purchase Behavior: Modeling, Estimation and Optimization",2020-06-15 02:47:14,"Theja Tulabandhula, Deeksha Sinha, Saketh Reddy Karra, Prasoon Patidar","http://arxiv.org/abs/2006.08055v2, http://arxiv.org/pdf/2006.08055v2",cs.IR
35549,th,"We consider an odd-sized ""jury"", which votes sequentially between two states
of Nature (say A and B, or Innocent and Guilty) with the majority opinion
determining the verdict. Jurors have private information in the form of a
signal in [-1,+1], with higher signals indicating A more likely. Each juror has
an ability in [0,1], which is proportional to the probability of A given a
positive signal, an analog of Condorcet's p for binary signals. We assume that
jurors vote honestly for the alternative they view more likely, given their
signal and prior voting, because they are experts who want to enhance their
reputation (after their vote and actual state of Nature is revealed). For a
fixed set of jury abilities, the reliability of the verdict depends on the
voting order. For a jury of size three, the optimal ordering is always as
follows: middle ability first, then highest ability, then lowest. For
sufficiently heterogeneous juries, sequential voting is more reliable than
simultaneous voting and is in fact optimal (allowing for non-honest voting).
When average ability is fixed, verdict reliability is increasing in
heterogeneity.
  For medium-sized juries, we find through simulation that the median ability
juror should still vote first and the remaining ones should have increasing and
then decreasing abilities.",Optimizing Voting Order on Sequential Juries: A Median Voter Theorem and Beyond,2020-06-24 23:58:23,"Steve Alpern, Bo Chen","http://arxiv.org/abs/2006.14045v2, http://arxiv.org/pdf/2006.14045v2",econ.TH
35550,th,"This paper studies competitions with rank-based reward among a large number
of teams. Within each sizable team, we consider a mean-field contribution game
in which each team member contributes to the jump intensity of a common Poisson
project process; across all teams, a mean field competition game is formulated
on the rank of the completion time, namely the jump time of Poisson project
process, and the reward to each team is paid based on its ranking. At the level
of the teamwise competition game, three optimization problems are introduced when
the team size is determined by: (i) the team manager; (ii) the central planner;
(iii) the team members' voting as a partnership. We propose a relative
performance criterion for each team member to share the team's reward and
formulate some special cases of mean field games of mean field games, which are
new to the literature. In all problems with homogeneous parameters, the
equilibrium control of each worker and the equilibrium or optimal team size can
be computed in an explicit manner, allowing us to analytically examine the
impacts of some model parameters and discuss their economic implications. Two
numerical examples are also presented to illustrate the parameter dependence
and comparison between different team size decision making.",Teamwise Mean Field Competitions,2020-06-24 17:13:43,"Xiang Yu, Yuchong Zhang, Zhou Zhou","http://arxiv.org/abs/2006.14472v2, http://arxiv.org/pdf/2006.14472v2",cs.GT
35552,th,"The study of network formation is pervasive in economics, sociology, and many
other fields. In this paper, we model network formation as a `choice' that is
made by nodes in a network to connect to other nodes. We study these `choices'
using discrete-choice models, in which an agent chooses between two or more
discrete alternatives. We employ the `repeated-choice' (RC) model to study
network formation. We argue that the RC model overcomes important limitations
of the multinomial logit (MNL) model, which gives one framework for studying
network formation, and that it is well-suited to study network formation. We
also illustrate how to use the RC model to accurately study network formation
using both synthetic and real-world networks. Using edge-independent synthetic
networks, we also compare the performance of the MNL model and the RC model. We
find that the RC model estimates the data-generation process of our synthetic
networks more accurately than the MNL model. In a patent citation network,
which forms sequentially, we present a case study of a qualitatively
interesting scenario -- the fact that new patents are more likely to cite
older, more cited, and similar patents -- for which employing the RC model
yields interesting insights.",Mixed Logit Models and Network Formation,2020-06-30 07:01:02,"Harsh Gupta, Mason A. Porter","http://arxiv.org/abs/2006.16516v5, http://arxiv.org/pdf/2006.16516v5",cs.SI
35553,th,"In a 1983 paper, Yannelis-Prabhakar rely on Michael's selection theorem to
guarantee a continuous selection in the context of the existence of maximal
elements and equilibria in abstract economies. In this tribute to Nicholas
Yannelis, we root this paper in Chapter II of Yannelis' 1983 Rochester Ph.D.
dissertation, and identify its pioneering application of the paracompactness
condition to current and ongoing work of Yannelis and his co-authors, and to
mathematical economics more generally. We move beyond the literature to provide
a necessary and sufficient condition for upper semi-continuous local and global
selections of correspondences, and to provide applications to five domains of
Yannelis' interests: Berge's maximum theorem, the Gale-Nikaido-Debreu lemma,
the Gale-McKenzie survival assumption, Shafer's non-transitive setting, and the
Anderson-Khan-Rashid approximate existence theorem. The last resonates with
Chapter VI of Yannelis' dissertation.",The Yannelis-Prabhakar Theorem on Upper Semi-Continuous Selections in Paracompact Spaces: Extensions and Applications,2020-06-30 13:56:20,"M. Ali Khan, Metin Uyanik","http://dx.doi.org/10.1007/s00199-021-01359-4, http://arxiv.org/abs/2006.16681v1, http://arxiv.org/pdf/2006.16681v1",econ.TH
35554,th,"Simulated Annealing is the crowning glory of Markov Chain Monte Carlo Methods
for the solution of NP-hard optimization problems in which the cost function is
known. Here, by replacing the Metropolis engine of Simulated Annealing with a
reinforcement learning variation -- that we call Macau Algorithm -- we show
that the Simulated Annealing heuristic can be very effective also when the cost
function is unknown and has to be learned by an artificial agent.",Ergodic Annealing,2020-08-01 13:17:11,"Carlo Baldassi, Fabio Maccheroni, Massimo Marinacci, Marco Pirazzini","http://arxiv.org/abs/2008.00234v1, http://arxiv.org/pdf/2008.00234v1",cs.AI
35555,th,"Motivated by an equilibrium problem, we establish the existence of a solution
for a family of Markovian backward stochastic differential equations with
quadratic nonlinearity and discontinuity in $Z$. Using unique continuation and
backward uniqueness, we show that the set of discontinuity has measure zero. In
a continuous-time stochastic model of an endowment economy, we prove the
existence of an incomplete Radner equilibrium with nondegenerate endogenous
volatility.",Radner equilibrium and systems of quadratic BSDEs with discontinuous generators,2020-08-08 14:55:17,"Luis Escauriaza, Daniel C. Schwarz, Hao Xing","http://arxiv.org/abs/2008.03500v3, http://arxiv.org/pdf/2008.03500v3",math.PR
35556,th,"We propose six axioms concerning when one candidate should defeat another in
a democratic election involving two or more candidates. Five of the axioms are
widely satisfied by known voting procedures. The sixth axiom is a weakening of
Kenneth Arrow's famous condition of the Independence of Irrelevant Alternatives
(IIA). We call this weakening Coherent IIA. We prove that the five axioms plus
Coherent IIA single out a method of determining defeats studied in our recent
work: Split Cycle. In particular, Split Cycle provides the most resolute
definition of defeat among any satisfying the six axioms for democratic defeat.
In addition, we analyze how Split Cycle escapes Arrow's Impossibility Theorem
and related impossibility results.",Axioms for Defeat in Democratic Elections,2020-08-16 00:43:51,"Wesley H. Holliday, Eric Pacuit","http://arxiv.org/abs/2008.08451v4, http://arxiv.org/pdf/2008.08451v4",econ.TH
35557,th,"We consider a discrete-time dynamic search game in which a number of players
compete to find an invisible object that is moving according to a time-varying
Markov chain. We examine the subgame perfect equilibria of these games. The
main result of the paper is that the set of subgame perfect equilibria is
exactly the set of greedy strategy profiles, i.e. those strategy profiles in
which the players always choose an action that maximizes their probability of
immediately finding the object. We discuss various variations and extensions of
the model.",Search for a moving target in a competitive environment,2020-08-21 22:08:16,"Benoit Duvocelle, János Flesch, Hui Min Shi, Dries Vermeulen","http://arxiv.org/abs/2008.09653v2, http://arxiv.org/pdf/2008.09653v2",math.OC
35558,th,"We introduce a discrete-time search game, in which two players compete to
find an object first. The object moves according to a time-varying Markov chain
on finitely many states. The players know the Markov chain and the initial
probability distribution of the object, but do not observe the current state of
the object. The players are active in turns. The active player chooses a state,
and this choice is observed by the other player. If the object is in the chosen
state, this player wins and the game ends. Otherwise, the object moves
according to the Markov chain and the game continues at the next period.
  We show that this game admits a value, and for any error-term $\varepsilon>0$, each
player has a pure (subgame-perfect) $\varepsilon$-optimal strategy. Interestingly, a
0-optimal strategy does not always exist. The $\varepsilon$-optimal strategies are
robust in the sense that they are $2\varepsilon$-optimal on all finite but
sufficiently long horizons, and also $2\varepsilon$-optimal in the discounted version
of the game provided that the discount factor is close to 1. We derive results
on the analytic and structural properties of the value and the $\varepsilon$-optimal
strategies. Moreover, we examine the performance of the finite truncation
strategies, which are easy to calculate and to implement. We devote special
attention to the important time-homogeneous case, where additional results
hold.",A competitive search game with a moving target,2020-08-27 13:12:17,"Benoit Duvocelle, János Flesch, Mathias Staudigl, Dries Vermeulen","http://arxiv.org/abs/2008.12032v1, http://arxiv.org/pdf/2008.12032v1",cs.GT
35559,th,"Agent-based modeling (ABM) is a powerful paradigm to gain insight into social
phenomena. One area that ABM has rarely been applied is coalition formation.
Traditionally, coalition formation is modeled using cooperative game theory. In
this paper, a heuristic algorithm is developed that can be embedded into an ABM
to allow the agents to find coalition. The resultant coalition structures are
comparable to those found by cooperative game theory solution approaches,
specifically, the core. A heuristic approach is required due to the
computational complexity of finding a cooperative game theory solution which
limits its application to about only a score of agents. The ABM paradigm
provides a platform in which simple rules and interactions between agents can
produce a macro-level effect without the large computational requirements. As
such, it can be an effective means for approximating cooperative game solutions
for large numbers of agents. Our heuristic algorithm combines agent-based
modeling and cooperative game theory to help find agent partitions that are
members of a games' core solution. The accuracy of our heuristic algorithm can
be determined by comparing its outcomes to the actual core solutions. This
comparison is achieved by developing an experiment that uses a specific example of
a cooperative game called the glove game. The glove game is a type of exchange
economy game. Finding the traditional cooperative game theory solutions is
computationally intensive for large numbers of players because each possible
partition must be compared to each possible coalition to determine the core
set; hence our experiment only considers games of up to nine players. The
results indicate that our heuristic approach achieves a core solution over 90%
of the time for the games considered in our experiment.",Finding Core Members of Cooperative Games using Agent-Based Modeling,2020-08-30 20:38:43,"Daniele Vernon-Bido, Andrew J. Collins","http://arxiv.org/abs/2009.00519v1, http://arxiv.org/pdf/2009.00519v1",cs.MA
35560,th,"Data are invaluable. How can we assess the value of data objectively,
systematically and quantitatively? Pricing data, or information goods in
general, has been studied and practiced in dispersed areas and principles, such
as economics, marketing, electronic commerce, data management, data mining and
machine learning. In this article, we present a unified, interdisciplinary and
comprehensive overview of this important direction. We examine various
motivations behind data pricing, understand the economics of data pricing and
review the development and evolution of pricing models according to a series of
fundamental principles. We discuss both digital products and data products. We
also consider a series of challenges and directions for future work.",A Survey on Data Pricing: from Economics to Data Science,2020-09-09 22:31:38,Jian Pei,"http://dx.doi.org/10.1109/TKDE.2020.3045927, http://arxiv.org/abs/2009.04462v2, http://arxiv.org/pdf/2009.04462v2",econ.TH
35561,th,"This work researches the impact of including a wider range of participants in
the strategy-making process on the performance of organizations which operate
in either moderately or highly complex environments. Agent-based simulation
demonstrates that the increased number of ideas generated from larger and
diverse crowds and subsequent preference aggregation lead to rapid discovery of
higher peaks in the organization's performance landscape. However, this is not
the case when the expansion in the number of participants is small. The results
confirm the most frequently mentioned benefit in the Open Strategy literature:
the discovery of better performing strategies.",On the Effectiveness of Minisum Approval Voting in an Open Strategy Setting: An Agent-Based Approach,2020-09-07 17:50:35,"Joop van de Heijning, Stephan Leitner, Alexandra Rausch","http://arxiv.org/abs/2009.04912v2, http://arxiv.org/pdf/2009.04912v2",cs.AI
35562,th,"Complexity and limited ability have profound effect on how we learn and make
decisions under uncertainty. Using the theory of finite automaton to model
belief formation, this paper studies the characteristics of optimal learning
behavior in small and big worlds, where the complexity of the environment is
low and high, respectively, relative to the cognitive ability of the decision
maker. Optimal behavior is well approximated by the Bayesian benchmark in very
small worlds but departs further from it as the world gets bigger. In addition, in big
worlds, the optimal learning behavior could exhibit a wide range of
well-documented non-Bayesian learning behavior, including the use of
heuristics, correlation neglect, persistent over-confidence, inattentive
learning, and other behaviors of model simplification or misspecification.
These results establish a clear and testable relationship among the prominence
of non-Bayesian learning behavior, complexity, and cognitive ability.",Learning in a Small/Big World,2020-09-24 22:25:02,Benson Tsz Kin Leung,"http://arxiv.org/abs/2009.11917v8, http://arxiv.org/pdf/2009.11917v8",econ.TH
35563,th,"Consider the set of probability measures with given marginal distributions on
the product of two complete, separable metric spaces, seen as a correspondence
when the marginal distributions vary. In problems of optimal transport,
continuity of this correspondence from marginal to joint distributions is often
desired, in light of Berge's Maximum Theorem, to establish continuity of the
value function in the marginal distributions, as well as stability of the set
of optimal transport plans. Bergin (1999) established the continuity of this
correspondence, and in this note, we present a novel and considerably shorter
proof of this important result. We then examine an application to an assignment
game (transferable utility matching problem) with unknown type distributions.",On the Continuity of the Feasible Set Mapping in Optimal Transport,2020-09-27 16:17:26,"Mario Ghossoub, David Saunders","http://arxiv.org/abs/2009.12838v1, http://arxiv.org/pdf/2009.12838v1",q-fin.RM
35564,th,"Evolutionary game theory has proven to be an elegant framework providing many
fruitful insights in population dynamics and human behaviour. Here, we focus on
the aspect of behavioural plasticity and its effect on the evolution of
populations. We consider games with only two strategies in both well-mixed
infinite and finite populations settings. We assume that individuals might
exhibit behavioural plasticity referred to as incompetence of players. We study
the effect of such heterogeneity on the outcome of local interactions and,
ultimately, on global competition. For instance, a strategy that was dominated
before can become desirable from the selection perspective when behavioural
plasticity is taken into account. Furthermore, it can ease conditions for a
successful fixation in infinite populations' invasions. We demonstrate our
findings on the examples of Prisoners' Dilemma and Snowdrift game, where we
define conditions under which cooperation can be promoted.",The role of behavioural plasticity in finite vs infinite populations,2020-09-28 12:14:58,"M. Kleshnina, K. Kaveh, K. Chatterjee","http://arxiv.org/abs/2009.13160v1, http://arxiv.org/pdf/2009.13160v1",q-bio.PE
35565,th,"We analyze statistical discrimination in hiring markets using a multi-armed
bandit model. Myopic firms face workers arriving with heterogeneous observable
characteristics. The association between the worker's skill and characteristics
is unknown ex ante; thus, firms need to learn it. Laissez-faire causes
perpetual underestimation: minority workers are rarely hired, and therefore,
the underestimation tends to persist. Even a marginal imbalance in the
population ratio frequently results in perpetual underestimation. We propose
two policy solutions: a novel subsidy rule (the hybrid mechanism) and the
Rooney Rule. Our results indicate that temporary affirmative actions
effectively alleviate discrimination stemming from insufficient data.",On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach,2020-10-02 19:20:14,"Junpei Komiyama, Shunya Noda","http://arxiv.org/abs/2010.01079v6, http://arxiv.org/pdf/2010.01079v6",econ.TH
35566,th,"The Glosten-Milgrom model describes a single asset market, where informed
traders interact with a market maker, in the presence of noise traders. We
derive an analogy between this financial model and a Szil\'ard information
engine by {\em i)} showing that the optimal work extraction protocol in the
latter coincides with the pricing strategy of the market maker in the former
and {\em ii)} defining a market analogue of the physical temperature from the
analysis of the distribution of market orders. Then we show that the expected
gain of informed traders is bounded above by the product of this market
temperature with the amount of information that informed traders have, in exact
analogy with the corresponding formula for the maximal expected amount of work
that can be extracted from a cycle of the information engine. This suggests
that recent ideas from information thermodynamics may shed light on financial
markets, and lead to generalised inequalities, in the spirit of the extended
second law of thermodynamics.",Information thermodynamics of financial markets: the Glosten-Milgrom model,2020-10-05 13:36:07,"Léo Touzo, Matteo Marsili, Don Zagier","http://dx.doi.org/10.1088/1742-5468/abe59b, http://arxiv.org/abs/2010.01905v2, http://arxiv.org/pdf/2010.01905v2",cond-mat.stat-mech
35567,th,"Linear Fisher markets are a fundamental economic model with applications in
fair division as well as large-scale Internet markets. In the
finite-dimensional case of $n$ buyers and $m$ items, a market equilibrium can
be computed using the Eisenberg-Gale convex program. Motivated by large-scale
Internet advertising and fair division applications, this paper considers a
generalization of a linear Fisher market where there is a finite set of buyers
and a continuum of items. We introduce generalizations of the Eisenberg-Gale
convex program and its dual to this infinite-dimensional setting, which leads
to Banach-space optimization problems. We establish existence of optimal
solutions, strong duality, as well as necessity and sufficiency of KKT-type
conditions. All these properties are established via non-standard arguments,
which circumvent the limitations of duality theory in optimization over
infinite-dimensional Banach spaces. Furthermore, we show that there exists a
pure equilibrium allocation, i.e., a division of the item space. When the item
space is a closed interval and buyers have piecewise linear valuations, we show
that the Eisenberg-Gale-type convex program over the infinite-dimensional
allocations can be reformulated as a finite-dimensional convex conic program,
which can be solved efficiently using off-the-shelf optimization software based
on primal-dual interior-point methods. Based on our convex conic reformulation,
we develop the first polynomial-time cake-cutting algorithm that achieves
Pareto optimality, envy-freeness, and proportionality. For general buyer
valuations or a very large number of buyers, we propose computing market
equilibrium using stochastic dual averaging, which finds approximate
equilibrium prices with high probability. Finally, we discuss how the above
results easily extend to the case of quasilinear utilities.",Infinite-Dimensional Fisher Markets and Tractable Fair Division,2020-10-07 00:05:49,"Yuan Gao, Christian Kroer","http://arxiv.org/abs/2010.03025v5, http://arxiv.org/pdf/2010.03025v5",cs.GT
35568,th,"I characterize the consumer-optimal market segmentation in competitive
markets where multiple firms sell differentiated products to consumers with
unit demand. This segmentation is public---in that each firm observes the same
market segments---and takes a simple form: in each market segment, there is a
dominant firm favored by all consumers in that segment. By segmenting the
market, all but the dominant firm maximally compete to poach the consumer's
business, setting price to equal marginal cost. Information, thus, is being
used to amplify competition. This segmentation simultaneously generates an
efficient allocation and delivers to each firm its minimax profit.",Using Information to Amplify Competition,2020-10-12 00:02:42,Wenhao Li,"http://arxiv.org/abs/2010.05342v2, http://arxiv.org/pdf/2010.05342v2",econ.GN
35569,th,"The Empirical Revenue Maximization (ERM) is one of the most important price
learning algorithms in auction design: as the literature shows it can learn
approximately optimal reserve prices for revenue-maximizing auctioneers in both
repeated auctions and uniform-price auctions. However, in these applications
the agents who provide inputs to ERM have incentives to manipulate the inputs
to lower the outputted price. We generalize the definition of an
incentive-awareness measure proposed by Lavi et al (2019), to quantify the
reduction of ERM's outputted price due to a change of $m\ge 1$ out of $N$ input
samples, and provide specific convergence rates of this measure to zero as $N$
goes to infinity for different types of input distributions. By adopting this
measure, we construct an efficient, approximately incentive-compatible, and
revenue-optimal learning algorithm using ERM in repeated auctions against
non-myopic bidders, and show approximate group incentive-compatibility in
uniform-price auctions.",A Game-Theoretic Analysis of the Empirical Revenue Maximization Algorithm with Endogenous Sampling,2020-10-12 11:20:35,"Xiaotie Deng, Ron Lavi, Tao Lin, Qi Qi, Wenwei Wang, Xiang Yan","http://arxiv.org/abs/2010.05519v1, http://arxiv.org/pdf/2010.05519v1",cs.GT
35570,th,"We examine a problem of demand for insurance indemnification, when the
insured is sensitive to ambiguity and behaves according to the Maxmin-Expected
Utility model of Gilboa and Schmeidler (1989), whereas the insurer is a
(risk-averse or risk-neutral) Expected-Utility maximizer. We characterize
optimal indemnity functions both with and without the customary ex ante
no-sabotage requirement on feasible indemnities, and for both concave and
linear utility functions for the two agents. This allows us to provide a
unifying framework in which we examine the effects of the no-sabotage
condition, marginal utility of wealth, belief heterogeneity, as well as
ambiguity (multiplicity of priors) on the structure of optimal indemnity
functions. In particular, we show how the singularity in beliefs leads to an
optimal indemnity function that involves full insurance on an event to which
the insurer assigns zero probability, while the decision maker assigns it
positive probability. We examine several illustrative examples, and we provide
numerical studies for the case of a Wasserstein and a Renyi ambiguity set.",Optimal Insurance under Maxmin Expected Utility,2020-10-14 23:06:04,"Corina Birghila, Tim J. Boonen, Mario Ghossoub","http://arxiv.org/abs/2010.07383v1, http://arxiv.org/pdf/2010.07383v1",q-fin.RM
35571,th,"This paper models the US-China trade conflict and attempts to analyze the
(optimal) strategic choices. In contrast to the existing literature on the
topic, we employ expected utility theory and examine the conflict
mathematically. In both perfect information and incomplete information games,
we show that expected net gains diminish as the utility of winning increases
because of the costs incurred during the struggle. We find that the best
response function exists for China but not for the US during the conflict. We
argue that the less the US coerces China to change its existing trade
practices, the higher the US expected net gains. China's best choice is to
maintain the status quo, and any further aggression in its policy and behavior
will aggravate the situation.",Modeling the US-China trade conflict: a utility theory approach,2020-10-23 15:31:23,"Yuhan Zhang, Cheng Chang","http://arxiv.org/abs/2010.12351v1, http://arxiv.org/pdf/2010.12351v1",econ.GN
35572,th,"EIP-1559 is a proposal to make several tightly coupled additions to
Ethereum's transaction fee mechanism, including variable-size blocks and a
burned base fee that rises and falls with demand. This report assesses the
game-theoretic strengths and weaknesses of the proposal and explores some
alternative designs.",Transaction Fee Mechanism Design for the Ethereum Blockchain: An Economic Analysis of EIP-1559,2020-12-02 00:48:57,Tim Roughgarden,"http://arxiv.org/abs/2012.00854v1, http://arxiv.org/pdf/2012.00854v1",cs.GT
35573,th,"In an election campaign, candidates must decide how to optimally allocate
their efforts/resources among the regions of a country. As a result,
the outcome of the election will depend on the players' strategies and the
voters' preferences. In this work, we present a zero-sum game where two
candidates decide how to invest a fixed resource in a set of regions, while
considering their sizes and biases. We explore the Majority System (MS) as well
as the Electoral College (EC) voting systems. We prove equilibrium existence
and uniqueness under MS in a deterministic model; in addition, closed-form
expressions for the equilibrium strategies are provided when fixing the subset
of regions and relaxing the non-negativity constraint on investments. For the
stochastic case, we use Monte Carlo simulations to compute the players' payoffs,
together with their gradients and Hessians. For the EC, given the lack of an
equilibrium in pure strategies, we propose an iterative algorithm to find an
equilibrium in mixed strategies in a
subset of the simplex lattice. We illustrate numerical instances under both
election systems, and contrast players' equilibrium strategies. Finally, we
show that polarization induces candidates to focus on larger regions with
negative biases under MS, whereas candidates concentrate on swing states under
EC.",On the Resource Allocation for Political Campaigns,2020-12-05 00:15:18,"Sebastián Morales, Charles Thraves","http://arxiv.org/abs/2012.02856v1, http://arxiv.org/pdf/2012.02856v1",cs.GT
35574,th,"An increasing number of politicians are relying on cheaper, easier to access
technologies such as online social media platforms to communicate with their
constituency. These platforms present a cheap and low-barrier channel of
communication to politicians, potentially intensifying political competition by
allowing many to enter political races. In this study, we demonstrate that
lowering costs of communication, which allows many entrants to come into a
competitive market, can strengthen an incumbent's position when the newcomers
compete by providing more information to the voters. We show an asymmetric
bad-news-good-news effect where early negative news hurts the challengers more
than positive news benefits them, such that in aggregate, an incumbent
politician's chances of winning are higher with more entrants in the market. Our
findings indicate that communication through social media and other platforms
can intensify competition; however, incumbency advantage may be strengthened
rather than weakened as an outcome of a higher number of entrants into a
political market.","Competition, Politics, & Social Media",2020-12-06 20:15:55,"Benson Tsz Kin Leung, Pinar Yildirim","http://arxiv.org/abs/2012.03327v1, http://arxiv.org/pdf/2012.03327v1",econ.GN
35575,th,"This is an expanded version of the lecture given at the AMS Short Course on
Mean Field Games, on January 13, 2020 in Denver CO. The assignment was to
discuss applications of Mean Field Games in finance and economics. I need to
admit upfront that several of the examples reviewed in this chapter were
already discussed in book form. Still, they are here accompanied by
discussions of, and references to, works which appeared over the last three
years. Moreover, several completely new sections are added to show how recent
developments in financial engineering and economics can benefit from being
viewed through the lens of the Mean Field Game paradigm. The new financial
engineering applications deal with bitcoin mining and the energy markets, while
the new economic applications concern models offering a smooth transition
between macro-economics and finance, and contract theory.",Applications of Mean Field Games in Financial Engineering and Economic Theory,2020-12-09 09:57:20,Rene Carmona,"http://arxiv.org/abs/2012.05237v1, http://arxiv.org/pdf/2012.05237v1",q-fin.GN
35576,th,"We examine the long-term behavior of a Bayesian agent who has a misspecified
belief about the time lag between actions and feedback, and learns about the
payoff consequences of his actions over time. Misspecified beliefs about time
lags result in attribution errors, which have no long-term effect when the
agent's action converges, but can lead to arbitrarily large long-term
inefficiencies when his action cycles. Our proof uses concentration
inequalities to bound the frequency of action switches, which are useful to
study learning problems with history dependence. We apply our methods to study
a policy choice game between a policy-maker who has a correctly specified
belief about the time lag and the public who has a misspecified belief.",Misspecified Beliefs about Time Lags,2020-12-14 06:45:43,"Yingkai Li, Harry Pei","http://arxiv.org/abs/2012.07238v1, http://arxiv.org/pdf/2012.07238v1",econ.TH
35577,th,"We analyze how interdependencies between organizations in financial networks
can lead to multiple possible equilibrium outcomes. A multiplicity arises if
and only if there exists a certain type of dependency cycle in the network that
allows for self-fulfilling chains of defaults. We provide necessary and
sufficient conditions for banks' solvency in any equilibrium. Building on these
conditions, we characterize the minimum bailout payments needed to ensure
systemic solvency, as well as how solvency can be ensured by guaranteeing a
specific set of debt payments. Bailout injections needed to eliminate
self-fulfilling cycles of defaults (credit freezes) are fully recoverable,
while those needed to prevent cascading defaults outside of cycles are not. We
show that the minimum bailout problem is computationally hard, but provide an
upper bound on optimal payments and show that the problem has intuitive
solutions in specific network structures such as those with disjoint cycles or
a core-periphery structure.","Credit Freezes, Equilibrium Multiplicity, and Optimal Bailouts in Financial Networks",2020-12-22 09:02:09,"Matthew O. Jackson, Agathe Pernoud","http://arxiv.org/abs/2012.12861v2, http://arxiv.org/pdf/2012.12861v2",cs.GT
35578,th,"The controversies around the 2020 US presidential elections certainly casts
serious concerns on the efficiency of the current voting system in representing
the people's will. Is the naive Plurality voting suitable in an extremely
polarized political environment? Alternate voting schemes are gradually gaining
public support, wherein the voters rank their choices instead of just voting
for their first preference. However they do not capture certain crucial aspects
of voter preferences like disapprovals and negativities against candidates. I
argue that these unexpressed negativities are the predominant source of
polarization in politics. I propose a voting scheme with an explicit expression
of these negative preferences, so that we can simultaneously decipher the
popularity as well as the polarity of each candidate. The winner is picked by
an optimal tradeoff between the most popular and the least polarizing
candidate. By penalizing the candidates for their polarization, we can
discourage divisive campaign rhetoric and pave the way for potential third-
party candidates.",Negative votes to depolarize politics,2020-12-26 04:05:24,Karthik H. Shankar,"http://dx.doi.org/10.1007/s10726-022-09799-6, http://arxiv.org/abs/2012.13657v1, http://arxiv.org/pdf/2012.13657v1",econ.TH
35579,th,"We consider games of chance played by someone with external capital that
cannot be applied to the game and determine how this affects risk-adjusted
optimal betting. Specifically, we focus on Kelly optimization as a metric,
optimizing the expected logarithm of total capital including both capital in
play and the external capital. For games with multiple rounds, we determine the
optimal strategy through dynamic programming and construct a close
approximation through the WKB method. The strategy can be described in terms of
short-term utility functions, with risk aversion depending on the ratio of the
amount in the game to the external money. Thus, a rational player's behavior
varies between conservative play that approaches Kelly strategy as they are
able to invest a larger fraction of total wealth and extremely aggressive play
that maximizes linear expectation when a larger portion of their capital is
locked away. Because one always has expected future productivity to count as an
external resource, this runs counter to the conventional wisdom that
super-Kelly betting is a ruinous proposition.",Calculated Boldness: Optimizing Financial Decisions with Illiquid Assets,2020-12-27 02:33:33,"Stanislav Shalunov, Alexei Kitaev, Yakov Shalunov, Arseniy Akopyan","http://arxiv.org/abs/2012.13830v1, http://arxiv.org/pdf/2012.13830v1",q-fin.PM
35580,th,"Toward explaining the persistence of biased inferences, we propose a
framework to evaluate competing (mis)specifications in strategic settings.
Agents with heterogeneous (mis)specifications coexist and draw Bayesian
inferences about their environment through repeated play. The relative
stability of (mis)specifications depends on their adherents' equilibrium
payoffs. A key mechanism is the learning channel: the endogeneity of perceived
best replies due to inference. We characterize when a rational society is only
vulnerable to invasion by some misspecification through the learning channel.
The learning channel leads to new stability phenomena, and can confer an
evolutionary advantage to otherwise detrimental biases in economically relevant
applications.",Evolutionarily Stable (Mis)specifications: Theory and Applications,2020-12-30 05:33:15,"Kevin He, Jonathan Libgober","http://arxiv.org/abs/2012.15007v4, http://arxiv.org/pdf/2012.15007v4",econ.TH
35581,th,"We analyze the performance of the best-response dynamic across all
normal-form games using a random games approach. The playing sequence -- the
order in which players update their actions -- is essentially irrelevant in
determining whether the dynamic converges to a Nash equilibrium in certain
classes of games (e.g. in potential games) but, when evaluated across all
possible games, convergence to equilibrium depends on the playing sequence in
an extreme way. Our main asymptotic result shows that the best-response dynamic
converges to a pure Nash equilibrium in a vanishingly small fraction of all
(large) games when players take turns according to a fixed cyclic order. By
contrast, when the playing sequence is random, the dynamic converges to a pure
Nash equilibrium if one exists in almost all (large) games.","Best-response dynamics, playing sequences, and convergence to equilibrium in random games",2021-01-12 01:32:48,"Torsten Heinrich, Yoojin Jang, Luca Mungo, Marco Pangallo, Alex Scott, Bassel Tarbush, Samuel Wiese","http://arxiv.org/abs/2101.04222v3, http://arxiv.org/pdf/2101.04222v3",econ.TH
35582,th,"The classic paper of Shapley and Shubik \cite{Shapley1971assignment}
characterized the core of the assignment game using ideas from matching theory
and LP-duality theory and their highly non-trivial interplay. Whereas the core
of this game is always non-empty, that of the general graph matching game can
be empty.
  This paper salvages the situation by giving an imputation in the
$2/3$-approximate core for the latter. This bound is best possible, since it is
the integrality gap of the natural underlying LP. Our profit allocation method
goes further: the multiplier on the profit of an agent is often better than ${2
\over 3}$ and lies in the interval $[{2 \over 3}, 1]$, depending on how
severely constrained the agent is.
  Next, we provide new insights showing how discerning core imputations of an
assignment game are by studying them via the lens of complementary slackness.
We present a relationship between the competitiveness of individuals and teams
of agents and the amount of profit they accrue in imputations that lie in the
core, where by {\em competitiveness} we mean whether an individual or a team is
matched in every/some/no maximum matching. This also sheds light on the
phenomenon of degeneracy in assignment games, i.e., when the maximum weight
matching is not unique.
  The core is a quintessential solution concept in cooperative game theory. It
contains all ways of distributing the total worth of a game among agents in
such a way that no sub-coalition has incentive to secede from the grand
coalition. Our imputation, in the $2/3$-approximate core, implies that a
sub-coalition will gain at most a $3/2$ factor by seceding, and less in typical
cases.",The General Graph Matching Game: Approximate Core,2021-01-19 03:53:22,Vijay V. Vazirani,"http://arxiv.org/abs/2101.07390v4, http://arxiv.org/pdf/2101.07390v4",cs.GT
35583,th,"In the era of a growing population, systemic changes to the world, and the
rising risk of crises, humanity has been facing an unprecedented challenge of
resource scarcity. Confronting and addressing the issues concerning a scarce
resource's conservation, competition, and stimulation by grasping its
characteristics and adopting viable policy instruments demands the
decision-maker's attention as a paramount priority. In this paper, we develop
the first general decentralized cross-sector supply chain network model that
captures the unique features of scarce resources under a unifying fiscal policy
scheme. We formulate the problem as a network equilibrium model with
finite-dimensional variational inequality theory. We then characterize the
network equilibrium with a set of classic theoretical properties, as well as
with a set of properties that are novel to the network games application
literature, namely, the lowest eigenvalue of the game Jacobian. Lastly, we
provide a series of illustrative examples, including a medical glove supply
network, to showcase how our model can be used to investigate the efficacy of
the imposed policies in relieving supply chain distress and stimulating
welfare. Our managerial insights inform and expand the political dialogues on
fiscal policy design, public resource legislation, social welfare
redistribution, and supply chain practice toward sustainability.",Relief and Stimulus in A Cross-sector Multi-product Scarce Resource Supply Chain Network,2021-01-23 01:48:41,"Xiaowei Hu, Peng Li","http://arxiv.org/abs/2101.09373v3, http://arxiv.org/pdf/2101.09373v3",econ.TH
35584,th,"Data sharing issues pervade online social and economic environments. To
foster social progress, it is important to develop models of the interaction
between data producers and consumers that can promote the rise of cooperation
between the involved parties. We formalize this interaction as a game, the data
sharing game, based on the Iterated Prisoner's Dilemma and deal with it through
multi-agent reinforcement learning techniques. We consider several strategies
for how the citizens may behave, depending on the degree of centralization
sought. Simulations suggest mechanisms for cooperation to take place and, thus,
achieve maximum social utility: data consumers should perform some kind of
opponent modeling, or a regulator should transfer utility between both players
and incentivise them.",Data sharing games,2021-01-26 14:29:01,"Víctor Gallego, Roi Naveiro, David Ríos Insua, Wolfram Rozas","http://arxiv.org/abs/2101.10721v1, http://arxiv.org/pdf/2101.10721v1",cs.GT
35585,th,"The ELLIS PhD program is a European initiative that supports excellent young
researchers by connecting them to leading researchers in AI. In particular, PhD
students are supervised by two advisors from different countries: an advisor
and a co-advisor. In this work we summarize the procedure that, in its final
step, matches students to advisors in the ELLIS 2020 PhD program. The steps of
the procedure are based on the extensive literature on two-sided matching
markets and the college admissions problem [Knuth and De Bruijn, 1997, Gale and
Shapley, 1962, Roth and Sotomayor, 1992]. We introduce PolyGS, an algorithm for
the case of two-sided markets with quotas on both sides (also known as
many-to-many markets) which we use throughout the selection procedure of
pre-screening, interview matching and final matching with advisors. The
algorithm returns a stable matching in the sense that no unmatched persons
prefer to be matched together rather than with their current partners (given
their indicated preferences). Roth [1984] gives evidence that only stable
matchings are likely to be adhered to over time. Additionally, the matching is
student-optimal. Preferences are constructed based on the rankings each side
gives to the other side and the overlaps of research fields. We present and
discuss the matchings that the algorithm produces in the ELLIS 2020 PhD
program.",Two-Sided Matching Markets in the ELLIS 2020 PhD Program,2021-01-28 18:50:15,"Maximilian Mordig, Riccardo Della Vecchia, Nicolò Cesa-Bianchi, Bernhard Schölkopf","http://arxiv.org/abs/2101.12080v3, http://arxiv.org/pdf/2101.12080v3",cs.GT
35586,th,"How cooperation evolves and manifests itself in the thermodynamic or infinite
player limit of social dilemma games is a matter of intense speculation.
Various analytical methods have been proposed to analyze the thermodynamic
limit of social dilemmas. In this work, we compare two analytical methods,
i.e., Darwinian evolution and Nash equilibrium mapping, with a numerical
agent-based approach. For completeness, we also give results for another
analytical method, Hamiltonian dynamics. In contrast to Hamiltonian dynamics,
which involves the maximization of payoffs of all individuals, in Darwinian
evolution, the payoff of a single player is maximized with respect to its
interaction with the nearest neighbor. While the Hamiltonian dynamics method
utterly fails as compared to Nash equilibrium mapping, the Darwinian evolution
method gives a false positive for game magnetization -- the net difference
between the fraction of cooperators and defectors -- when payoffs obey the
condition a + d = b + c, wherein a, d represent the diagonal elements and b, c
the off-diagonal elements in a symmetric social dilemma game payoff matrix.
When either a + d =/= b + c or when one looks at the average payoff per player,
the Darwinian evolution method fails, much like the Hamiltonian dynamics
approach. On the other hand, the Nash equilibrium mapping and numerical
agent-based method agree well for both game magnetization and average payoff
per player for the social dilemmas in question, i.e., the Hawk-Dove game and
the Public goods game. This paper thus brings to light the inconsistency of the
Darwinian evolution method vis-a-vis both Nash equilibrium mapping and a
numerical agent-based approach.",Nash equilibrium mapping vs Hamiltonian dynamics vs Darwinian evolution for some social dilemma games in the thermodynamic limit,2021-02-27 22:13:49,"Colin Benjamin, Arjun Krishnan U M","http://dx.doi.org/10.1140/epjb/s10051-023-00573-4, http://arxiv.org/abs/2103.00295v2, http://arxiv.org/pdf/2103.00295v2",cond-mat.stat-mech
35587,th,"We study equilibrium distancing during epidemics. Distancing reduces the
individual's probability of getting infected but comes at a cost. It creates a
single-peaked epidemic, flattens the curve and decreases the size of the
epidemic. We examine more closely the effects of distancing on the outset, the
peak and the final size of the epidemic. First, we define a behavioral basic
reproduction number and show that it is concave in the transmission rate. The
infection, therefore, spreads only if the transmission rate is in the
intermediate region. Second, the peak of the epidemic is non-monotonic in the
transmission rate. A reduction in the transmission rate can lead to an increase
of the peak. On the other hand, a decrease in the cost of distancing always
flattens the curve. Third, both an increase in the infection rate as well as an
increase in the cost of distancing increase the size of the epidemic. Our
results have important implications for the modeling of interventions. Imposing
restrictions on the infection rate has qualitatively different effects on the
trajectory of the epidemic than imposing assumptions on the cost of
distancing. The interventions that affect interactions rather than the
transmission rate should, therefore, be modeled as changes in the cost of
distancing.",Epidemics with Behavior,2021-02-28 22:11:31,"Satoshi Fukuda, Nenad Kos, Christoph Wolf","http://arxiv.org/abs/2103.00591v1, http://arxiv.org/pdf/2103.00591v1",econ.GN
35588,th,"The economic approach to determine optimal legal policies involves maximizing
a social welfare function. We propose an alternative: a consent-approach that
seeks to promote consensual interactions and deter non-consensual interactions.
The consent-approach does not rest upon inter-personal utility comparisons or
value judgments about preferences. It does not require any additional
information relative to the welfare-approach. We highlight the contrast between
the welfare-approach and the consent-approach using a stylized model inspired
by seminal cases of harassment and the #MeToo movement. The social welfare
maximizing penalty for harassment in our model can be zero under the
welfare-approach but not under the consent-approach.",Welfare v. Consent: On the Optimal Penalty for Harassment,2021-03-01 06:52:41,"Ratul Das Chaudhury, Birendra Rai, Liang Choon Wang, Dyuti Banerjee","http://arxiv.org/abs/2103.00734v2, http://arxiv.org/pdf/2103.00734v2",econ.GN
35589,th,"When does society eventually learn the truth, or take the correct action, via
observational learning? In a general model of sequential learning over social
networks, we identify a simple condition for learning dubbed excludability.
Excludability is a joint property of agents' preferences and their information.
When required to hold for all preferences, it is equivalent to information
having ""unbounded beliefs"", which demands that any agent can individually
identify the truth, even if only with small probability. But unbounded beliefs
may be untenable with more than two states: e.g., it is incompatible with the
monotone likelihood ratio property. Excludability reveals that what is crucial
for learning, instead, is that a single agent must be able to displace any
wrong action, even if she cannot take the correct action. We develop two
classes of preferences and information that jointly satisfy excludability: (i)
for a one-dimensional state, preferences with single-crossing differences and a
new informational condition, directionally unbounded beliefs; and (ii) for a
multi-dimensional state, Euclidean preferences and subexponential
location-shift information.",Beyond Unbounded Beliefs: How Preferences and Information Interplay in Social Learning,2021-03-04 02:31:19,"Navin Kartik, SangMok Lee, Tianhao Liu, Daniel Rappoport","http://arxiv.org/abs/2103.02754v4, http://arxiv.org/pdf/2103.02754v4",econ.TH
35641,th,"We determine winners and losers of immigration using a general equilibrium
search and matching model in which native and non-native employees, who are
heterogeneous with respect to their skill level, produce different types of
goods. Unemployment benefits and the provision of public goods are financed by
a progressive taxation on wages and profits. The estimation of the baseline
model for Italy shows that the presence of non-natives in 2017 led real wages
of low- and high-skilled employees to be 4% lower and 8% higher, respectively.
Profits of employers in the low-skilled market were 6% lower, while those of
employers in the high-skilled market were 10% higher. At aggregate level, total
GDP was 14% higher, GDP per worker and the per capita provision of public goods
4% higher, while government revenues and social security contributions rose
by 70 billion euros and 18 billion euros, respectively.",Winners and losers of immigration,2021-07-14 11:28:43,"Davide Fiaschi, Cristina Tealdi","http://arxiv.org/abs/2107.06544v2, http://arxiv.org/pdf/2107.06544v2",econ.GN
35590,th,"We study learning dynamics in distributed production economies such as
blockchain mining, peer-to-peer file sharing and crowdsourcing. These economies
can be modelled as multi-product Cournot competitions or all-pay auctions
(Tullock contests) when individual firms have market power, or as Fisher
markets with quasi-linear utilities when every firm has negligible influence on
market outcomes. In the former case, we provide a formal proof that Gradient
Ascent (GA) can be Li-Yorke chaotic for a step size as small as $\Theta(1/n)$,
where $n$ is the number of firms. In stark contrast, for the Fisher market
case, we derive a Proportional Response (PR) protocol that converges to market
equilibrium. The positive results on the convergence of the PR dynamics are
obtained in full generality, in the sense that they hold for Fisher markets
with \emph{any} quasi-linear utility functions. Conversely, the chaos results
for the GA dynamics are established even in the simplest possible setting of
two firms and one good, and they hold for a wide range of price functions with
different demand elasticities. Our findings suggest that by considering
multi-agent interactions from a market rather than a game-theoretic
perspective, we can formally derive natural learning protocols which are stable
and converge to effective outcomes rather than being chaotic.",Learning in Markets: Greed Leads to Chaos but Following the Price is Right,2021-03-15 19:48:30,"Yun Kuen Cheung, Stefanos Leonardos, Georgios Piliouras","http://arxiv.org/abs/2103.08529v2, http://arxiv.org/pdf/2103.08529v2",cs.GT
35591,th,"Previous research on two-dimensional extensions of Hotelling's location game
has argued that spatial competition leads to maximum differentiation in one
dimension and minimum differentiation in the other. We expand on
existing models to allow for endogenous entry into the market. We find that
competition may lead to the min/max finding of previous work but also may lead
to maximum differentiation in both dimensions. The critical issue in
determining the degree of differentiation is whether existing firms are seeking
to deter the entry of a new firm or to maximize profits within an existing, stable
market.",Differentiation in a Two-Dimensional Market with Endogenous Sequential Entry,2021-03-20 01:27:00,"Jeffrey D. Michler, Benjamin M. Gramig","http://arxiv.org/abs/2103.11051v1, http://arxiv.org/pdf/2103.11051v1",econ.GN
35592,th,"We introduce a one-parameter family of polymatrix replicators defined in a
three-dimensional cube and study its bifurcations. For a given interval of
parameters, this family exhibits suspended horseshoes and persistent strange
attractors. The proof relies on the existence of a homoclinic cycle to the
interior equilibrium. We also describe the phenomenological steps responsible
for the transition from regular to chaotic dynamics in our system (route to
chaos).",Persistent Strange attractors in 3D Polymatrix Replicators,2021-03-20 23:52:42,"Telmo Peixe, Alexandre A. Rodrigues","http://dx.doi.org/10.1016/j.physd.2022.133346, http://arxiv.org/abs/2103.11242v2, http://arxiv.org/pdf/2103.11242v2",math.DS
35593,th,"Product personalization opens the door to price discrimination. A rich
product line allows firms to better tailor products to consumers' tastes, but
the mere choice of a product carries valuable information about consumers that
can be leveraged for price discrimination. We study this trade-off in an
upstream-downstream model, where a consumer buys a good of variable quality
upstream, followed by an indivisible good downstream. The downstream firm's use
of the consumer's purchase history for price discrimination introduces a novel
distortion: The upstream firm offers a subset of the products that it would
offer if, instead, it could jointly design its product line and downstream
pricing. By controlling the degree of product personalization the upstream firm
curbs ratcheting forces that result from the consumer facing downstream price
discrimination.",Purchase history and product personalization,2021-03-22 01:17:35,"Laura Doval, Vasiliki Skreta","http://arxiv.org/abs/2103.11504v5, http://arxiv.org/pdf/2103.11504v5",econ.TH
35594,th,"We propose a new forward electricity market framework that admits
heterogeneous market participants with second-order cone strategy sets, who
accurately express the nonlinearities in their costs and constraints through
conic bids, and a network operator facing conic operational constraints. In
contrast to the prevalent linear-programming-based electricity markets, we
highlight how the inclusion of second-order cone constraints improves
uncertainty-, asset- and network-awareness of the market, which is key to the
successful transition towards an electricity system based on weather-dependent
renewable energy sources. We analyze our general market-clearing proposal using
conic duality theory to derive efficient spatially-differentiated prices for
the multiple commodities, comprising energy and flexibility services. Under
the assumption of perfect competition, we prove the equivalence of the
centrally-solved market-clearing optimization problem to a competitive spatial
price equilibrium involving a set of rational and self-interested participants
and a price setter. Finally, under common assumptions, we prove that moving
towards conic markets does not incur the loss of desirable economic properties
of markets, namely market efficiency, cost recovery and revenue adequacy. Our
numerical studies focus on the specific use case of uncertainty-aware market
design and demonstrate that the proposed conic market brings advantages over
existing alternatives within the linear programming market framework.",Moving from Linear to Conic Markets for Electricity,2021-03-22 21:26:33,"Anubhav Ratha, Pierre Pinson, Hélène Le Cadre, Ana Virag, Jalal Kazempour","http://arxiv.org/abs/2103.12122v3, http://arxiv.org/pdf/2103.12122v3",econ.TH
35595,th,"We introduce a model of the diffusion of an epidemic with demographically
heterogeneous agents interacting socially on a spatially structured network.
Contagion-risk averse agents respond behaviorally to the diffusion of the
infections by limiting their social interactions. Schools and workplaces also
respond by allowing students and employees to attend and work remotely. The
spatial structure induces local herd immunities along socio-demographic
dimensions, which significantly affect the dynamics of infections. We study
several non-pharmaceutical interventions; e.g., i) lockdown rules, which set
thresholds on the spread of the infection for the closing and reopening of
economic activities; ii) neighborhood lockdowns, leveraging granular
(neighborhood-level) information to improve the effectiveness of public health
policies; iii) selective lockdowns, which restrict social interactions by
location (in the network) and by the demographic characteristics of the agents.
Substantiating a ""Lucas critique"" argument, we assess the cost of naive
discretionary policies ignoring agents and firms' behavioral responses.",Spatial-SIR with Network Structure and Behavior: Lockdown Rules and the Lucas Critique,2021-03-25 15:30:00,"Alberto Bisin, Andrea Moro","http://arxiv.org/abs/2103.13789v3, http://arxiv.org/pdf/2103.13789v3",econ.GN
35596,th,"In recent years, prominent blockchain systems such as Bitcoin and Ethereum
have experienced explosive growth in transaction volume, leading to frequent
surges in demand for limited block space and causing transaction fees to
fluctuate by orders of magnitude. Existing systems sell space using first-price
auctions; however, users find it difficult to estimate how much they need to
bid in order to get their transactions accepted onto the chain. If they bid too
low, their transactions can have long confirmation times. If they bid too high,
they pay larger fees than necessary.
  In light of these issues, new transaction fee mechanisms have been proposed,
most notably EIP-1559, aiming to provide better usability. EIP-1559 is a
history-dependent mechanism that relies on block utilization to adjust a base
fee. We propose an alternative design -- a {\em dynamic posted-price mechanism}
-- which uses not only block utilization but also observable bids from past
blocks to compute a posted price for subsequent blocks. We show its potential
to reduce price volatility by providing examples for which the prices of
EIP-1559 are unstable while the prices of the proposed mechanism are stable.
More generally, whenever the demand for the blockchain stabilizes, we ask if
our mechanism is able to converge to a stable state. Our main result provides
sufficient conditions in a probabilistic setting for which the proposed
mechanism is approximately welfare optimal and the prices are stable. Our main
technical contribution towards establishing stability is an iterative algorithm
that, given oracle access to a Lipschitz continuous and strictly concave
function $f$, converges to a fixed point of $f$.",Dynamic Posted-Price Mechanisms for the Blockchain Transaction Fee Market,2021-03-26 00:41:13,"Matheus V. X. Ferreira, Daniel J. Moroz, David C. Parkes, Mitchell Stern","http://dx.doi.org/10.1145/3479722.3480991, http://arxiv.org/abs/2103.14144v2, http://arxiv.org/pdf/2103.14144v2",cs.GT
35597,th,"This paper shows the usefulness of Perov's contraction principle, which
generalizes Banach's contraction principle to a vector-valued metric, for
studying dynamic programming problems in which the discount factor can be
stochastic. The discounting condition $\beta<1$ is replaced by $\rho(B)<1$,
where $B$ is an appropriate nonnegative matrix and $\rho$ denotes the spectral
radius. Blackwell's sufficient condition is also generalized in this setting.
Applications to asset pricing and optimal savings are discussed.",Perov's Contraction Principle and Dynamic Programming with Stochastic Discounting,2021-03-26 02:14:58,Alexis Akira Toda,"http://dx.doi.org/10.1016/j.orl.2021.09.001, http://arxiv.org/abs/2103.14173v2, http://arxiv.org/pdf/2103.14173v2",econ.TH
35598,th,"We propose a simple rule of thumb for countries which have embarked on a
vaccination campaign while still facing the need to keep non-pharmaceutical
interventions (NPI) in place because of the ongoing spread of SARS-CoV-2. If
the aim is to keep the death rate from increasing, NPIs can be loosened when it
is possible to vaccinate more than twice the growth rate of new cases. If the
aim is to keep the pressure on hospitals under control, the vaccination rate
has to be about four times higher. These simple rules can be derived from the
observation that the risk of death or a severe course requiring hospitalization
from a COVID-19 infection increases exponentially with age and that the sizes
of age cohorts decrease linearly at the top of the population pyramid.
Protecting those over 60 years old, who constitute approximately one-quarter of
the population in Europe (and most OECD countries), reduces the potential loss
of life by 95 percent.",When to end a lock down? How fast must vaccination campaigns proceed in order to keep health costs in check?,2021-03-29 15:20:34,"Claudius Gros, Thomas Czypionka, Daniel Gros","http://dx.doi.org/10.1098/rsos.211055, http://arxiv.org/abs/2103.15544v4, http://arxiv.org/pdf/2103.15544v4",q-bio.PE
35599,th,"In this paper monetary risk measures that are positively superhomogeneous,
called star-shaped risk measures, are characterized and their properties
studied. The measures in this class, which arise when the controversial
subadditivity property of coherent risk measures is dispensed with and positive
homogeneity is weakened, include all practically used risk measures, in
particular, both convex risk measures and Value-at-Risk. From a financial
viewpoint, our relaxation of convexity is necessary to quantify the capital
requirements for risk exposure in the presence of liquidity risk, competitive
delegation, or robust aggregation mechanisms. From a decision theoretical
perspective, star-shaped risk measures emerge from variational preferences when
risk mitigation strategies can be adopted by a rational decision maker.",Star-shaped Risk Measures,2021-03-29 20:33:10,"Erio Castagnoli, Giacomo Cattelan, Fabio Maccheroni, Claudio Tebaldi, Ruodu Wang","http://arxiv.org/abs/2103.15790v3, http://arxiv.org/pdf/2103.15790v3",econ.TH
35600,th,"In this study, we consider the real-world problem of assigning students to
classes, where each student has a preference list, ranking a subset of classes
in order of preference. Though we use existing approaches to include the daily
class assignment of Gunma University, new concepts and adjustments are required
to find improved results depending on real instances in the field. Thus, we
propose minimax-rank constrained maximum-utility matchings and a compromise
between maximum-utility matchings and fair matchings, where a matching is said
to be fair if it lexicographically minimizes the number of students assigned to
classes not included in their choices, the number of students assigned to their
last choices, and so on. In addition, we also observe the potential
inefficiency of the student-proposing deferred acceptance mechanism with single
tie-breaking, which is a hot topic in the literature on the school choice problem.",Optimal class assignment problem: a case study at Gunma University,2021-03-31 11:02:38,"Akifumi Kira, Kiyohito Nagano, Manabu Sugiyama, Naoyuki Kamiyama","http://arxiv.org/abs/2103.16879v1, http://arxiv.org/pdf/2103.16879v1",cs.GT
35601,th,"We study the experimentation dynamics of a decision maker (DM) in a two-armed
bandit setup (Bolton and Harris (1999)), where the agent holds ambiguous
beliefs regarding the distribution of the return process of one arm and is
certain about the other one. The DM entertains Multiplier preferences a la
Hansen and Sargent (2001); thus, we frame the decision-making environment as a
two-player differential game against nature in continuous time. We characterize
the DM's value function and her optimal experimentation strategy, which turns out
to follow a cut-off rule with respect to her belief process. The belief
threshold for exploring the ambiguous arm is found in closed form and is shown
to be increasing with respect to the ambiguity aversion index. We then study
the effect of provision of an unambiguous information source about the
ambiguous arm. Interestingly, we show that the exploration threshold rises
unambiguously as a result of this new information source, thereby leading to
more conservatism. This analysis also sheds light on the efficient time to
reach for an expert opinion.",Robust Experimentation in the Continuous Time Bandit Problem,2021-03-31 23:42:39,Farzad Pourbabaee,"http://dx.doi.org/10.1007/s00199-020-01328-3, http://arxiv.org/abs/2104.00102v1, http://arxiv.org/pdf/2104.00102v1",econ.TH
35602,th,"We consider a general tractable model for default contagion and systemic risk
in a heterogeneous financial network, subject to an exogenous macroeconomic
shock. We show that, under some regularity assumptions, the default cascade
model could be transferred to a death process problem represented by
a balls-and-bins model. We also reduce the dimension of the problem by
classifying banks according to different types, in an appropriate type space.
These types may be calibrated to real-world data by using machine learning
techniques. We then state various limit theorems regarding the final size of
default cascade over different types. In particular, under suitable assumptions
on the degree and threshold distributions, we show that the final size of
default cascade has asymptotically Gaussian fluctuations. We next state limit
theorems for different system-wide wealth aggregation functions and show how
the systemic risk measure, in a given stress test scenario, could be related to
the structure and heterogeneity of financial networks. We finally show how
these results could be used by a social planner to optimally target
interventions during a financial crisis, with a budget constraint and under
partial information of the financial network.",Limit Theorems for Default Contagion and Systemic Risk,2021-04-01 07:18:44,"Hamed Amini, Zhongyuan Cao, Agnes Sulem","http://arxiv.org/abs/2104.00248v1, http://arxiv.org/pdf/2104.00248v1",q-fin.RM
35603,th,"In real life auctions, a widely observed phenomenon is the winner's curse --
the winner's high bid implies that the winner often over-estimates the value of
the good for sale, resulting in an incurred negative utility. The seminal work
of Eyster and Rabin [Econometrica'05] introduced a behavioral model aimed at
explaining this observed anomaly. We term agents who display this bias ""cursed
agents"". We adopt their model in the interdependent value setting, and aim to
devise mechanisms that prevent the cursed agents from obtaining negative
utility. We design mechanisms that are cursed ex-post IC, that is, they incentivize
agents to bid their true signal even though they are cursed, while ensuring
that the outcome is individually rational -- the price the agents pay is no
more than the agents' true value.
  Since the agents might over-estimate the good's value, such mechanisms might
require the seller to make positive transfers to the agents to prevent agents
from over-paying. For revenue maximization, we give the optimal deterministic
and anonymous mechanism. For welfare maximization, we require ex-post budget
balance (EPBB), as positive transfers might lead to negative revenue. We
propose a masking operation that takes any deterministic mechanism, and imposes
that the seller would not make positive transfers, enforcing EPBB. We show that
in typical settings, EPBB implies that the mechanism cannot make any positive
transfers, implying that applying the masking operation on the fully efficient
mechanism results in a socially optimal EPBB mechanism. This further implies
that if the valuation function is the maximum of agents' signals, the optimal
EPBB mechanism obtains zero welfare. In contrast, we show that for sum-concave
valuations, which include weighted-sum valuations and l_p-norms, the welfare
optimal EPBB mechanism obtains half of the optimal welfare as the number of
agents grows large.",Cursed yet Satisfied Agents,2021-04-02 04:15:53,"Yiling Chen, Alon Eden, Juntao Wang","http://arxiv.org/abs/2104.00835v4, http://arxiv.org/pdf/2104.00835v4",cs.GT
35604,th,"We use the topology of simplicial complexes to model political structures
following [1]. Simplicial complexes are a natural tool to encode interactions
in the structures since a simplex can be used to represent a subset of
compatible agents. We translate the wedge, cone, and suspension operations into
the language of political structures and show how these constructions
correspond to merging structures and introducing mediators. We introduce the
notions of the viability of an agent and the stability of a political system
and examine their interplay with the simplicial complex topology, casting their
interactions in category-theoretic language whenever possible. We introduce a
refinement of the model by assigning weights to simplices corresponding to the
number of issues the agents agree on. In addition, homology of simplicial
complexes is used to detect non-viabilities, certain cycles of incompatible
agents, and the (non)presence of mediators. Finally, we extend some results
from [1], bringing viability and stability into the language of friendly
delegations and using homology to examine the existence of R-compromises and
D-compromises.",Political structures and the topology of simplicial complexes,2021-04-05 23:07:35,"Andrea Mock, Ismar Volic","http://arxiv.org/abs/2104.02131v3, http://arxiv.org/pdf/2104.02131v3",physics.soc-ph
35605,th,"This is an introductory textbook of the history of economics of inequality
for undergraduates and genreral readers. It begins with Adam Smith's critique
of Rousseau. The first and second chapters focus on Smith and Karl Marx, in the
broad classical tradition of economics, where it is believed that there is an
inseparable relationship between production and distribution, economic growth
and inequality. Chapters 3 and 4 argue that despite the fact that the founders
of the neoclassical school had shown an active interest in social issues,
namely worker poverty, the issues of production and distribution came to be
discussed separately among neoclassicals. Toward the end of the 20th century,
however, there was a renewed awareness within economics of the problem of the
relationship between production and distribution. The young Piketty's
beginnings as an economist are set against this backdrop. Chapters 5 to 8
explain the circumstances of the restoration of classical concerns within the
neoclassical framework. Then, in chapters 9 and 10, I discuss the fact that
Thomas Piketty's seminal work is a new development in this ""inequality
renaissance,"" and try to gain a perspective on future trends in the debate.
Mathematical appendix presents simple models of growth and distribution.",The Struggle with Inequality,2021-04-15 14:18:21,Shin-Ichiro Inaba,"http://arxiv.org/abs/2104.07379v2, http://arxiv.org/pdf/2104.07379v2",econ.GN
35606,th,"A common practice in many auctions is to offer bidders an opportunity to
improve their bids, known as a Best and Final Offer (BAFO) stage. This final
bid can depend on new information provided about either the asset or the
competitors. This paper examines the effects of new information regarding
competitors, seeking to determine what information the auctioneer should
provide assuming the set of allowable bids is discrete. The rational strategy
profile that maximizes the revenue of the auctioneer is the one where each
bidder makes the highest possible bid that is lower than his valuation of the
item. This strategy profile is an equilibrium for a large enough number of
bidders, regardless of the information released. We compare the number of
bidders needed for this profile to be an equilibrium under different
information settings. We find that it becomes an equilibrium with fewer bidders
when less additional information is made available to the bidders regarding the
competition. It follows that when the number of bidders is a priori unknown,
there are some advantages for the auctioneer in not revealing information.",I Want to Tell You? Maximizing Revenue in First-Price Two-Stage Auctions,2021-04-20 16:06:59,"Galit Ashkenazi-Golan, Yevgeny Tsodikovich, Yannick Viossat","http://arxiv.org/abs/2104.09942v1, http://arxiv.org/pdf/2104.09942v1",cs.GT
35818,th,"Barseghyan and Molinari (2023) give sufficient conditions for
semi-nonparametric point identification of parameters of interest in a mixture
model of decision-making under risk, allowing for unobserved heterogeneity in
utility functions and limited consideration. A key assumption in the model is
that the heterogeneity of risk preferences is unobservable but
context-independent. In this comment, we build on their insights and present
identification results in a setting where the risk preferences are allowed to
be context-dependent.",Context-Dependent Heterogeneous Preferences: A Comment on Barseghyan and Molinari (2023),2023-05-18 15:50:58,"Matias D. Cattaneo, Xinwei Ma, Yusufcan Masatlioglu","http://arxiv.org/abs/2305.10934v1, http://arxiv.org/pdf/2305.10934v1",econ.TH
35608,th,"We develop a rational inattention theory of echo chamber, whereby players
gather information about an uncertain state by allocating limited attention
capacities across biased primary sources and other players. The resulting
Poisson attention network transmits information from the primary source to a
player either directly, or indirectly through the other players. Rational
inattention generates heterogeneous demands for information among players who
are biased toward different decisions. In an echo chamber equilibrium, each
player restricts attention to his own-biased source and like-minded friends, as
the latter attend to the same primary source as his, and so could serve as
secondary sources in case the information transmission from the primary source
to him is disrupted. We provide sufficient conditions that give rise to echo
chamber equilibria, characterize the attention networks inside echo chambers,
and use our results to inform the design and regulation of information
platforms.",A Rational Inattention Theory of Echo Chamber,2021-04-21 20:32:04,"Lin Hu, Anqi Li, Xu Tan","http://arxiv.org/abs/2104.10657v7, http://arxiv.org/pdf/2104.10657v7",econ.TH
35609,th,"Decades of research suggest that information exchange in groups and
organizations can reliably improve judgment accuracy in tasks such as financial
forecasting, market research, and medical decision-making. However, we show
that improving the accuracy of numeric estimates does not necessarily improve
the accuracy of decisions. For binary choice judgments, also known as
classification tasks--e.g. yes/no or build/buy decisions--social influence is
most likely to grow the majority vote share, regardless of the accuracy of that
opinion. As a result, initially inaccurate groups become increasingly
inaccurate after information exchange even as they signal stronger support. We
term this dynamic the ""crowd classification problem."" Using both a novel
dataset as well as a reanalysis of three previous datasets, we study this
process in two types of information exchange: (1) when people share votes only,
and (2) when people form and exchange numeric estimates prior to voting.
Surprisingly, when people exchange numeric estimates prior to voting, the
binary choice vote can become less accurate even as the average numeric
estimate becomes more accurate. Our findings recommend against voting as a form
of decision-making when groups are optimizing for accuracy. For those cases
where voting is required, we discuss strategies for managing communication to
avoid the crowd classification problem. We close with a discussion of how our
results contribute to a broader contingency theory of collective intelligence.",The Crowd Classification Problem: Social Dynamics of Binary Choice Accuracy,2021-04-22 23:00:52,"Joshua Becker, Douglas Guilbeault, Ned Smith","http://arxiv.org/abs/2104.11300v1, http://arxiv.org/pdf/2104.11300v1",econ.GN
35610,th,"We modify the standard model of price competition with horizontally
differentiated products, imperfect information, and search frictions by
allowing consumers to flexibly acquire information about a product's match
value during their visits. We characterize a consumer's optimal search and
information acquisition protocol and analyze the pricing game between firms.
Notably, we establish that in search markets there are fundamental differences
between search frictions and information frictions, which affect market prices,
profits, and consumer welfare in markedly different ways. Although higher
search costs beget higher prices (and profits for firms), higher information
acquisition costs lead to lower prices and may benefit consumers. We discuss
implications of our findings for policies concerning disclosure rules and
hidden fees.",Search and Competition with Flexible Investigations,2021-04-27 16:07:51,"Vasudha Jain, Mark Whitmeyer","http://arxiv.org/abs/2104.13159v1, http://arxiv.org/pdf/2104.13159v1",econ.TH
35611,th,"Many real-world games contain parameters which can affect payoffs, action
spaces, and information states. For fixed values of the parameters, the game
can be solved using standard algorithms. However, in many settings agents must
act without knowing the values of the parameters that will be encountered in
advance. Often the decisions must be made by a human under time and resource
constraints, and it is unrealistic to assume that a human can solve the game in
real time. We present a new framework that enables human decision makers to
make fast decisions without the aid of real-time solvers. We demonstrate
applicability to a variety of situations including settings with multiple
players and imperfect information.",Human strategic decision making in parametrized games,2021-04-30 06:40:27,Sam Ganzfried,"http://arxiv.org/abs/2104.14744v3, http://arxiv.org/pdf/2104.14744v3",cs.GT
35612,th,"We study two-sided reputational bargaining with opportunities to issue an
ultimatum -- threats to force dispute resolution. Each player is either a
justified type, who never concedes and issues an ultimatum whenever an
opportunity arrives, or an unjustified type, who can concede, wait, or bluff
with an ultimatum. In equilibrium, the presence of ultimatum opportunities can
harm or benefit a player by decelerating or accelerating reputation building.
When only one player can issue an ultimatum, equilibrium play is unique. The
hazard rate of dispute resolution is discontinuous and piecewise monotonic in
time. As the probabilities of being justified vanish, agreement is immediate
and efficient, and if the set of justifiable demands is rich, payoffs modify
Abreu and Gul (2000), with the discount rate replaced by the ultimatum
opportunity arrival rate if the former is smaller. When both players' ultimatum
opportunities arrive sufficiently fast, there may exist multiple equilibria in
which their reputations do not build up and negotiation lasts forever.",Reputational Bargaining with Ultimatum Opportunities,2021-05-04 18:44:48,"Mehmet Ekmekci, Hanzhe Zhang","http://arxiv.org/abs/2105.01581v1, http://arxiv.org/pdf/2105.01581v1",econ.TH
35613,th,"The present work generalizes the analytical results of Petrikaite (2016) to a
market where more than two firms interact. As a consequence, for a generic
number of firms in the oligopoly model described by Janssen et al (2005), the
relationship between the critical discount factor which sustains the monopoly
collusive allocation and the share of perfectly informed buyers is
non-monotonic, reaching a unique interior minimum. The first section
locates the work within the proper economic framework. The second section contains
the analytical computations and the mathematical reasoning needed to derive the
desired generalization, which mainly relies on the Leibniz rule for the
differentiation under the integral sign and the Bounded Convergence Theorem.",Sustainability of Collusion and Market Transparency in a Sequential Search Market: a Generalization,2021-05-05 17:49:10,"Jacopo De Tullio, Giuseppe Puleio","http://arxiv.org/abs/2105.02094v1, http://arxiv.org/pdf/2105.02094v1",econ.TH
35614,th,"This paper aims to develop new mathematical and computational tools for
modeling the distribution of portfolio returns across portfolios. We establish
relevant mathematical formulas and propose efficient algorithms, drawing upon
powerful techniques in computational geometry and the literature on splines, to
compute the probability density function, the cumulative distribution function,
and the k-th moment of the probability function. Our algorithmic tools and
implementations efficiently handle portfolios with 10000 assets, and compute
moments of order k up to 40 in a few seconds, thus handling real-life
scenarios. We focus on the long-only strategy which is the most common type of
investment, i.e. on portfolios whose weights are non-negative and sum up to 1;
our approach is readily generalizable. Thus, we leverage a geometric
representation of the stock market, where the investment set defines a simplex
polytope. The cumulative distribution function corresponds to a portfolio score
capturing the percentage of portfolios yielding a return not exceeding a given
value. We introduce closed-form analytic formulas for the first 4 moments of
the cross-sectional returns distribution, as well as a novel algorithm to
compute all higher moments. We show that the first 4 moments are a direct
mapping of the asset returns' moments. All of our algorithms and solutions are
fully general and include the special case of equal asset returns, which was
sometimes excluded in previous works. Finally, we apply our portfolio score in
the design of new performance measures and asset management. We find that our
score-based optimal portfolios are less concentrated than the mean-variance
portfolio and much less risky in terms of ranking.",The cross-sectional distribution of portfolio returns and applications,2021-05-14 01:29:12,"Ludovic Calès, Apostolos Chalkis, Ioannis Z. Emiris","http://arxiv.org/abs/2105.06573v1, http://arxiv.org/pdf/2105.06573v1",cs.CE
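A minimal Monte Carlo sketch of the portfolio score described above: the probability that a uniformly drawn long-only portfolio returns no more than a given value, plus a crude estimate of a k-th cross-sectional moment. This only approximates the quantities under assumed inputs (the return vector below is illustrative); the paper's exact geometric and spline-based algorithms are not reproduced here.

```python
import numpy as np

def portfolio_score(r, value, n_samples=100_000, seed=0):
    """Estimate P(w . r <= value) with w drawn uniformly from the simplex
    (flat Dirichlet), i.e. the cross-sectional CDF of long-only portfolio returns."""
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(len(r)), size=n_samples)
    return float(np.mean(w @ np.asarray(r) <= value))

def cross_sectional_moment(r, k, n_samples=100_000, seed=0):
    """Estimate the k-th raw moment of the cross-sectional return distribution."""
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(len(r)), size=n_samples)
    return float(np.mean((w @ np.asarray(r)) ** k))

# Illustrative returns for three assets: 2%, 5%, -1%
print(portfolio_score([0.02, 0.05, -0.01], value=0.02))
print(cross_sectional_moment([0.02, 0.05, -0.01], k=4))
```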
35615,th,"Motivated by a variety of online matching platforms, we consider demand and
supply units which are located i.i.d. in [0,1]^d, and each demand unit needs to
be matched with a supply unit. The goal is to minimize the expected average
distance between matched pairs (the ""cost""). We model dynamic arrivals of one
or both of demand and supply with uncertain locations of future arrivals, and
characterize the scaling behavior of the achievable cost in terms of system
size (number of supply units), as a function of the dimension d. Our
achievability results are backed by concrete matching algorithms. Across cases,
we find that the platform can achieve cost (nearly) as low as that achievable
if the locations of future arrivals had been known beforehand. Furthermore, in
all cases except one, cost nearly as low in terms of scaling as the expected
distance to the nearest neighboring supply unit is achievable, i.e., the
matching constraint does not cause an increase in cost either. The aberrant
case is where only demand arrivals are dynamic, and d=1; excess supply
significantly reduces cost in this case.",Dynamic Spatial Matching,2021-05-16 04:49:58,Yash Kanoria,"http://arxiv.org/abs/2105.07329v3, http://arxiv.org/pdf/2105.07329v3",math.PR
35616,th,"We consider a generalization of rational inattention problems by measuring
costs of information through the information radius (Sibson, 1969; Verdú,
2015) of statistical experiments. We introduce a notion of attention elasticity
measuring the sensitivity of attention strategies with respect to changes in
incentives. We show how the introduced class of cost functions controls
attention elasticities while the Shannon model restricts attention elasticity
to be unity. We explore further differences and similarities relative to the
Shannon model in relation to invariance, posterior separability, consideration
sets, and the ability to learn events with certainty. Lastly, we provide an
efficient alternating minimization method -- analogous to the Blahut-Arimoto
algorithm -- to obtain optimal attention strategies.",Attention elasticities and invariant information costs,2021-05-17 04:24:32,Dániel Csaba,"http://arxiv.org/abs/2105.07565v1, http://arxiv.org/pdf/2105.07565v1",econ.GN
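For orientation, the alternating-minimization idea referenced above can be sketched for the baseline Shannon model (where, per the abstract, attention elasticity is fixed at unity). The iteration below alternates between the marginal action distribution and the conditional choice probabilities; the payoff matrix u, prior mu, and unit information cost lam are assumed inputs, and the paper's generalization to information-radius costs is not reproduced.

```python
import numpy as np

def shannon_rational_inattention(u, mu, lam, n_iter=1000):
    """Blahut-Arimoto-style alternating minimization for the Shannon model.
    u:   (A, S) payoff matrix (actions x states)
    mu:  prior over the S states
    lam: unit cost of Shannon mutual information (lam > 0)
    Returns conditional choice probabilities p(a|s) and the marginal q(a)."""
    A, S = u.shape
    p = np.full((A, S), 1.0 / A)              # start from uniform choices
    for _ in range(n_iter):
        q = p @ mu                            # marginal over actions, q(a)
        w = q[:, None] * np.exp(u / lam)      # reweight by exponentiated payoffs
        p = w / w.sum(axis=0, keepdims=True)  # renormalize state by state
    return p, p @ mu

# Tiny example: two equally likely states, two risky actions and one safe action
u = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.6]])
p, q = shannon_rational_inattention(u, mu=np.array([0.5, 0.5]), lam=0.3)
```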
35617,th,"The maximin share (MMS) guarantee is a desirable fairness notion for
allocating indivisible goods. While MMS allocations do not always exist,
several approximation techniques have been developed to ensure that all agents
receive a fraction of their maximin share. We focus on an alternative
approximation notion, based on the population of agents, that seeks to
guarantee MMS for a fraction of agents. We show that no optimal approximation
algorithm can satisfy more than a constant number of agents, and discuss the
existence and computation of MMS for all but one agent and its relation to
approximate MMS guarantees. We then prove the existence of allocations that
guarantee MMS for $\frac{2}{3}$ of agents, and devise a polynomial time
algorithm that achieves this bound for up to nine agents. A key implication of
our result is the existence of allocations that guarantee
$\text{MMS}^{\lceil{3n/2}\rceil}$, i.e., the value that agents receive by
partitioning the goods into $\lceil{\frac{3}{2}n}\rceil$ bundles, improving the
best known guarantee of $\text{MMS}^{2n-2}$. Finally, we provide empirical
experiments using synthetic data.",Guaranteeing Maximin Shares: Some Agents Left Behind,2021-05-19 23:17:42,"Hadi Hosseini, Andrew Searns","http://arxiv.org/abs/2105.09383v1, http://arxiv.org/pdf/2105.09383v1",cs.GT
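As a point of reference for the quantities above, an agent's maximin share can be computed exactly by brute force on small instances: enumerate every split of the goods into n bundles and keep the best worst-bundle value. The MMS^{ceil(3n/2)} benchmark from the abstract is the same computation with ceil(3n/2) bundles. This is an illustrative exponential-time sketch, not one of the paper's approximation algorithms.

```python
import math
from itertools import product

def maximin_share(values, n_bundles):
    """Exact maximin share of one agent: the best, over all partitions of the
    goods into n_bundles bundles, of the value of the worst bundle.
    values: the agent's value for each good. O(n_bundles^m); tiny instances only."""
    best = float("-inf")
    for assignment in product(range(n_bundles), repeat=len(values)):
        bundles = [0.0] * n_bundles
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

values, n = [7, 5, 4, 3, 2, 1], 3
mms = maximin_share(values, n)                         # standard MMS with n bundles
relaxed = maximin_share(values, math.ceil(3 * n / 2))  # the MMS^{ceil(3n/2)} benchmark
```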
35618,th,"This exercise proposes a learning mechanism to model economic agent's
decision-making process using an actor-critic structure in the literature of
artificial intelligence. It is motivated by the psychology literature of
learning through reinforcing good or bad decisions. In a model of an
environment, to learn to make decisions, this AI agent needs to interact with
its environment and make explorative actions. Each action in a given state
brings a reward signal to the agent. This interactive experience is saved in
the agent's memory, which is then used to update its subjective belief of the
world. The agent's decision-making strategy is formed and adjusted based on
this evolving subjective belief. This agent does not only take an action that
it knows would bring a high reward, it also explores other possibilities. This
is the process of taking explorative actions, and it ensures that the agent
notices changes in its environment and adapts its subjective belief and
decisions accordingly. Through a model of stochastic optimal growth, I
illustrate that the economic agent under this proposed learning structure is
adaptive to changes in an underlying stochastic process of the economy. AI
agents can differ in their levels of exploration, which leads to different
experience in the same environment. This is reflected in their different
learning behaviours and welfare obtained. The chosen economic structure
possesses the fundamental decision-making problems of macroeconomic models,
i.e., how to make consumption-saving decisions in a lifetime, and it can be
generalised to other decision-making processes and economic models.",Learning from zero: how to make consumption-saving decisions in a stochastic environment with an AI algorithm,2021-05-21 05:39:12,"Rui, Shi","http://arxiv.org/abs/2105.10099v2, http://arxiv.org/pdf/2105.10099v2",econ.TH
35619,th,"This paper introduces Gm, which is a category for extensive-form games. It
also provides some applications.
  The category's objects are games, which are understood to be sets of nodes
which have been endowed with edges, information sets, actions, players, and
utility functions. Its arrows are functions from source nodes to target nodes
that preserve the additional structure. For instance, a game's information-set
collection is newly regarded as a topological basis for the game's
decision-node set, and thus a morphism's continuity serves to preserve
information sets. Given these definitions, a game monomorphism is characterized
by the property of not mapping two source runs (plays) to the same target run.
Further, a game isomorphism is characterized as a bijection whose restriction
to decision nodes is a homeomorphism, whose induced player transformation is
injective, and which strictly preserves the ordinal content of the utility
functions.
  The category is then applied to some game-theoretic concepts beyond the
definition of a game. A Selten subgame is characterized as a special kind of
categorical subgame, and game isomorphisms are shown to preserve strategy sets,
Nash equilibria, Selten subgames, subgame-perfect equilibria,
perfect-information, and no-absentmindedness. Further, it is shown that the
full subcategory for distinguished-action sequence games is essentially wide in
the category of all games, and that the full subcategory of action-set games is
essentially wide in the full subcategory for games with no-absentmindedness.",A Category for Extensive-Form Games,2021-05-24 19:40:42,Peter A. Streufert,"http://arxiv.org/abs/2105.11398v1, http://arxiv.org/pdf/2105.11398v1",econ.TH
35620,th,"A monopolist seller of multiple goods screens a buyer whose type is initially
unknown to both but drawn from a commonly known distribution. The buyer
privately learns about his type via a signal. We derive the seller's optimal
mechanism in two different information environments. We begin by deriving the
buyer-optimal outcome. Here, an information designer first selects a signal,
and then the seller chooses an optimal mechanism in response; the designer's
objective is to maximize consumer surplus. Then, we derive the optimal
informationally robust mechanism. In this case, the seller first chooses the
mechanism, and then nature picks the signal that minimizes the seller's
profits. We derive the relation between both problems and show that the optimal
mechanism in both cases takes the form of pure bundling.",Multi-Dimensional Screening: Buyer-Optimal Learning and Informational Robustness,2021-05-26 05:31:57,"Rahul Deb, Anne-Katrin Roesler","http://arxiv.org/abs/2105.12304v1, http://arxiv.org/pdf/2105.12304v1",econ.TH
35621,th,"This paper studies how violations of structural assumptions like expected
utility and exponential discounting can be connected to basic rationality
violations caused by reference-dependent preferences, even if these assumptions
are typically regarded as independent building blocks of decision theory. A
reference-dependent generalization of behavioral postulates captures preference
changes across various choice domains. It gives rise to a linear order that
endogenously determines reference alternatives, which in turn determines the
preference parameters for a choice problem. With canonical models as
foundation, preference changes are captured using known technologies like the
concavity of utility functions and the levels of discount factors. The
framework allows us to study risk, time, and social preferences collectively,
where seemingly independent anomalies are interconnected through the lens of
reference-dependent choice.",Ordered Reference Dependent Choice,2021-05-27 05:22:02,Xi Zhi Lim,"http://arxiv.org/abs/2105.12915v4, http://arxiv.org/pdf/2105.12915v4",econ.TH
35622,th,"We propose and develop an algebraic approach to revealed preference. Our
approach dispenses with non-algebraic structure, such as topological
assumptions. We provide algebraic axioms of revealed preference that subsume
previous, classical revealed preference axioms, as well as generate new axioms
for behavioral theories, and show that a data set is rationalizable if and only
if it is consistent with an algebraic axiom.",An algebraic approach to revealed preferences,2021-05-31 20:34:06,"Mikhail Freer, Cesar Martinelli","http://arxiv.org/abs/2105.15175v1, http://arxiv.org/pdf/2105.15175v1",econ.TH
35623,th,"We consider the algorithmic question of choosing a subset of candidates of a
given size $k$ from a set of $m$ candidates, with knowledge of voters' ordinal
rankings over all candidates. We consider the well-known and classic scoring
rule for achieving diverse representation: the Chamberlin-Courant (CC) or
$1$-Borda rule, where the score of a committee is the average over the voters,
of the rank of the best candidate in the committee for that voter; and its
generalization to the average of the top $s$ best candidates, called the
$s$-Borda rule.
  Our first result is an improved analysis of the natural and well-studied
greedy heuristic. We show that greedy achieves a $\left(1 -
\frac{2}{k+1}\right)$-approximation to the maximization (or satisfaction)
version of CC rule, and a $\left(1 - \frac{2s}{k+1}\right)$-approximation to
the $s$-Borda score. Our result improves on the best known approximation
algorithm for this problem. We show that these bounds are almost tight.
  For the dissatisfaction (or minimization) version of the problem, we show
that the score of $\frac{m+1}{k+1}$ can be viewed as an optimal benchmark for
the CC rule, as it is essentially the best achievable score of any
polynomial-time algorithm even when the optimal score is a polynomial factor
smaller (under standard computational complexity assumptions). We show that
another well-studied algorithm for this problem, called the Banzhaf rule,
attains this benchmark.
  We finally show that for the $s$-Borda rule, when the optimal value is small,
these algorithms can be improved by a factor of $\tilde \Omega(\sqrt{s})$ via
LP rounding. Our upper and lower bounds are a significant improvement over
previous results, and taken together, not only enable us to perform a finer
comparison of greedy algorithms for these problems, but also provide analytic
justification for using such algorithms in practice.",Optimal Algorithms for Multiwinner Elections and the Chamberlin-Courant Rule,2021-05-31 23:28:59,"Kamesh Munagala, Zeyu Shen, Kangning Wang","http://dx.doi.org/10.1145/3465456.3467624, http://arxiv.org/abs/2106.00091v1, http://arxiv.org/pdf/2106.00091v1",cs.GT
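A minimal sketch of the greedy heuristic analyzed above, for the satisfaction (maximization) version of the Chamberlin-Courant rule: repeatedly add the candidate with the largest marginal gain in total satisfaction, where a voter's satisfaction depends only on the rank of their best committee member. The score normalization and tie-breaking here are assumptions of the sketch, not taken from the paper.

```python
def greedy_chamberlin_courant(rankings, k):
    """Greedy committee selection for the CC (1-Borda) satisfaction score.
    rankings: one list per voter, candidates ordered from most to least preferred.
    A voter whose best committee member sits at 0-indexed rank r gets satisfaction m - 1 - r."""
    m = len(rankings[0])
    rank = [{c: i for i, c in enumerate(r)} for r in rankings]
    committee = []
    best_rank = [m] * len(rankings)       # rank of each voter's best member so far

    def gain(c):
        return sum(max(0, best_rank[v] - rank[v][c]) for v in range(len(rankings)))

    for _ in range(k):
        c_star = max((c for c in rankings[0] if c not in committee), key=gain)
        committee.append(c_star)
        for v in range(len(rankings)):
            best_rank[v] = min(best_rank[v], rank[v][c_star])
    return committee

# Example: 4 candidates, 3 voters, committee of size 2
print(greedy_chamberlin_courant([["a", "b", "c", "d"],
                                 ["c", "d", "a", "b"],
                                 ["b", "a", "d", "c"]], k=2))
```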
35624,th,"The analogies between economics and classical mechanics can be extended from
constrained optimization to constrained dynamics by formalizing economic
(constraint) forces and economic power in analogy to physical (constraint)
forces in Lagrangian mechanics. In the differential-algebraic equation
framework of General Constrained Dynamics (GCD), households, firms, banks, and
the government employ forces to change economic variables according to their
desire and their power to assert their interest. These ex-ante forces are
completed by constraint forces from unanticipated system constraints to yield
the ex-post dynamics. The flexible out-of-equilibrium model can combine
Keynesian concepts such as the balance sheet approach and slow adaptation of
prices and quantities with bounded rationality (gradient climbing) and
interacting agents discussed in behavioral economics and agent-based models.
The framework integrates some elements of different schools of thought and
overcomes some restrictions inherent to optimization approaches, such as the
assumption of markets operating in or close to equilibrium. Depending on the
parameter choice for power relations and adaptation speeds, the model
nevertheless can converge to a neoclassical equilibrium, and reacts to an
austerity shock in a neoclassical or post-Keynesian way.",Modeling the out-of-equilibrium dynamics of bounded rationality and economic constraints,2021-06-01 16:39:37,Oliver Richters,"http://dx.doi.org/10.1016/j.jebo.2021.06.005, http://arxiv.org/abs/2106.00483v2, http://arxiv.org/pdf/2106.00483v2",econ.TH
35625,th,"We initiate the work towards a comprehensive picture of the smoothed
satisfaction of voting axioms, to provide a finer and more realistic foundation
for comparing voting rules. We adopt the smoothed social choice framework,
where an adversary chooses arbitrarily correlated ""ground truth"" preferences
for the agents, on top of which random noises are added. We focus on
characterizing the smoothed satisfaction of two well-studied voting axioms:
Condorcet criterion and participation. We prove that for any fixed number of
alternatives, when the number of voters $n$ is sufficiently large, the smoothed
satisfaction of the Condorcet criterion under a wide range of voting rules is
$1$, $1-\exp(-\Theta(n))$, $\Theta(n^{-0.5})$, $ \exp(-\Theta(n))$, or being
$\Theta(1)$ and $1-\Theta(1)$ at the same time; and the smoothed satisfaction
of participation is $1-\Theta(n^{-0.5})$. Our results address open questions by
Berg and Lepelley in 1994 for these rules, and also confirm the following
high-level message: the Condorcet criterion is a bigger concern than
participation under realistic models.",The Smoothed Satisfaction of Voting Axioms,2021-06-03 18:55:11,Lirong Xia,"http://arxiv.org/abs/2106.01947v1, http://arxiv.org/pdf/2106.01947v1",econ.TH
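The kind of satisfaction frequency characterized above can be estimated by simulation for a single rule and noise model. The sketch below uses the Impartial Culture distribution (a degenerate special case of the semi-random framework, with no adversarial ground truth) and plurality with uniform tie-breaking as a stand-in voting rule; none of these choices, nor the sample sizes, come from the paper.

```python
import random

def condorcet_winner(profile, candidates):
    """Return the Condorcet winner of a profile of strict rankings, or None."""
    for a in candidates:
        if all(2 * sum(r.index(a) < r.index(b) for r in profile) > len(profile)
               for b in candidates if b != a):
            return a
    return None

def plurality_winner(profile, candidates):
    counts = {c: sum(r[0] == c for r in profile) for c in candidates}
    top = max(counts.values())
    return random.choice([c for c in candidates if counts[c] == top])

def condorcet_satisfaction_frequency(n_voters=101, n_cands=4, trials=5000, seed=1):
    """Fraction of Impartial Culture profiles that have a Condorcet winner
    in which plurality elects that winner."""
    random.seed(seed)
    cands = list(range(n_cands))
    hits = with_cw = 0
    for _ in range(trials):
        profile = [random.sample(cands, n_cands) for _ in range(n_voters)]
        cw = condorcet_winner(profile, cands)
        if cw is not None:
            with_cw += 1
            hits += plurality_winner(profile, cands) == cw
    return hits / with_cw if with_cw else float("nan")
```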
35626,th,"A patient seller aims to sell a good to an impatient buyer (i.e., one who
discounts utility over time). The buyer will remain in the market for a period
of time $T$, and her private value is drawn from a publicly known distribution.
What is the revenue-optimal pricing-curve (sequence of (price, time) pairs) for
the seller? Is randomization of help here? Is the revenue-optimal pricing curve
computable in polynomial time? We answer these questions in this paper. We give
an efficient algorithm for computing the revenue-optimal pricing curve. We show
that pricing curves, which post a price at each point in time and let the buyer
pick her utility-maximizing time to buy, are revenue-optimal among a much
broader class of sequential lottery mechanisms. That is, mechanisms that allow the
seller to post a menu of lotteries at each point in time cannot earn any higher
revenue than pricing curves. We also show that the even broader class of
mechanisms that allow the menu of lotteries to be set adaptively can earn
strictly higher revenue than that of pricing curves, and the revenue gap can be
as big as the support size of the buyer's value distribution.",Optimal Pricing Schemes for an Impatient Buyer,2021-06-04 00:53:37,"Yuan Deng, Jieming Mao, Balasubramanian Sivan, Kangning Wang","http://dx.doi.org/10.1137/1.9781611977554.ch16, http://arxiv.org/abs/2106.02149v2, http://arxiv.org/pdf/2106.02149v2",cs.GT
35627,th,"All economies require physical resource consumption to grow and maintain
their structure. The modern economy is additionally characterized by private
debt. The Human and Resources with MONEY (HARMONEY) economic growth model links
these features using a stock and flow consistent framework in physical and
monetary units. Via an updated version, we explore the interdependence of
growth and three major structural metrics of an economy. First, we show that
relative decoupling of gross domestic product (GDP) from resource consumption
is an expected pattern that occurs because of physical limits to growth, not a
response to avoid physical limits. While an increase in resource efficiency of
operating capital does increase the level of relative decoupling, so does a
change in pricing from one based on full costs to one based only on marginal
costs that neglects depreciation and interest payments leading to higher debt
ratios. Second, assuming full labor bargaining power for wages, when a
previously-growing economy reaches peak resource extraction and GDP, wages
remain high but profits and debt decline to zero. By removing bargaining power,
profits can remain positive at the expense of declining wages. Third, the
distribution of intermediate transactions within the input-output table of the
model follows the same temporal pattern as in the post-World War II U.S.
economy. These results indicate that the HARMONEY framework enables realistic
investigation of interdependent structural change and trade-offs between
economic distribution, size, and resource consumption.","Interdependence of Growth, Structure, Size and Resource Consumption During an Economic Growth Cycle",2021-06-04 17:29:50,Carey W. King,"http://arxiv.org/abs/2106.02512v1, http://arxiv.org/pdf/2106.02512v1",econ.TH
35628,th,"The optimal taxation of assets requires attention to two concerns: 1) the
elasticity of the supply of assets and 2) the impact of taxing assets on
distributional objectives. The most efficient way to attend to these two
concerns is to tax assets of different types separately, rather than having one
tax on all assets. When assets are created by specialized effort rather than by
saving, as with innovations, discoveries of mineral deposits and development of
unregulated natural monopolies, it is interesting to consider a regime in which
the government awards a prize for the creation of the asset and then collects
the remaining value of the asset in taxes. Analytically, the prize is like a
wage after taxes. In this perspective, prizes are awarded based on a variation
on optimal taxation theory, while assets of different types are taxed in
divergent ways, depending on their characteristics. Some categories of assets
are abolished.",Optimal Taxation of Assets,2021-06-05 13:27:19,"Nicolaus Tideman, Thomas Mecherikunnel","http://arxiv.org/abs/2106.02861v1, http://arxiv.org/pdf/2106.02861v1",econ.GN
35629,th,"This paper studies a multi-armed bandit problem where the decision-maker is
loss averse; in particular, she is risk averse in the domain of gains and risk
loving in the domain of losses. The focus is on large horizons. Consequences of
loss aversion for asymptotic (large horizon) properties are derived in a number
of analytical results. The analysis is based on a new central limit theorem for
a set of measures under which conditional variances can vary in a largely
unstructured history-dependent way subject only to the restriction that they
lie in a fixed interval.","A Central Limit Theorem, Loss Aversion and Multi-Armed Bandits",2021-06-10 06:15:11,"Zengjing Chen, Larry G. Epstein, Guodong Zhang","http://arxiv.org/abs/2106.05472v2, http://arxiv.org/pdf/2106.05472v2",math.PR
35630,th,"The spread of disinformation on social platforms is harmful to society. This
harm may manifest as a gradual degradation of public discourse; but it can also
take the form of sudden dramatic events such as the 2021 insurrection on
Capitol Hill. The platforms themselves are in the best position to prevent the
spread of disinformation, as they have the best access to relevant data and the
expertise to use it. However, mitigating disinformation is costly, not only for
implementing detection algorithms or employing manual effort, but also because
limiting such highly viral content impacts user engagement and potential
advertising revenue. Since the costs of harmful content are borne by other
entities, the platform will therefore have no incentive to exercise the
socially-optimal level of effort. This problem is similar to that of
environmental regulation, in which the costs of adverse events are not directly
borne by a firm, the mitigation effort of a firm is not observable, and the
causal link between a harmful consequence and a specific failure is difficult
to prove. For environmental regulation, one solution is to perform costly
monitoring to ensure that the firm takes adequate precautions according to a
specified rule. However, a fixed rule for classifying disinformation becomes
less effective over time, as bad actors can learn to sequentially and
strategically bypass it. Encoding our domain as a Markov decision process, we
demonstrate that no penalty based on a static rule, no matter how large, can
incentivize optimal effort. Penalties based on an adaptive rule can incentivize
optimal effort, but counter-intuitively, only if the regulator sufficiently
overreacts to harmful events by requiring a greater-than-optimal level of
effort. We offer novel insights for the effective regulation of social
platforms, highlight inherent challenges, and discuss promising avenues for
future work.","Disinformation, Stochastic Harm, and Costly Effort: A Principal-Agent Analysis of Regulating Social Media Platforms",2021-06-18 02:27:43,"Shehroze Khan, James R. Wright","http://arxiv.org/abs/2106.09847v5, http://arxiv.org/pdf/2106.09847v5",cs.GT
35631,th,"In the context of computational social choice, we study voting methods that
assign a set of winners to each profile of voter preferences. A voting method
satisfies the property of positive involvement (PI) if for any election in
which a candidate x would be among the winners, adding another voter to the
election who ranks x first does not cause x to lose. Surprisingly, a number of
standard voting methods violate this natural property. In this paper, we
investigate different ways of measuring the extent to which a voting method
violates PI, using computer simulations. We consider the probability (under
different probability models for preferences) of PI violations in randomly
drawn profiles vs. profile-coalition pairs (involving coalitions of different
sizes). We argue that in order to choose between a voting method that satisfies
PI and one that does not, we should consider the probability of PI violation
conditional on the voting methods choosing different winners. We should also
relativize the probability of PI violation to what we call voter potency, the
probability that a voter causes a candidate to lose. Although absolute
frequencies of PI violations may be low, after this conditioning and
relativization, we see that under certain voting methods that violate PI, much
of a voter's potency is turned against them - in particular, against their
desire to see their favorite candidate elected.",Measuring Violations of Positive Involvement in Voting,2021-06-22 05:46:37,"Wesley H. Holliday, Eric Pacuit","http://dx.doi.org/10.4204/EPTCS.335.17, http://arxiv.org/abs/2106.11502v1, http://arxiv.org/pdf/2106.11502v1",cs.GT
35632,th,"When reasoning about strategic behavior in a machine learning context it is
tempting to combine standard microfoundations of rational agents with the
statistical decision theory underlying classification. In this work, we argue
that a direct combination of these standard ingredients leads to brittle
solution concepts of limited descriptive and prescriptive value. First, we show
that rational agents with perfect information produce discontinuities in the
aggregate response to a decision rule that we often do not observe empirically.
Second, when any positive fraction of agents is not perfectly strategic,
desirable stable points -- where the classifier is optimal for the data it
entails -- cease to exist. Third, optimal decision rules under standard
microfoundations maximize a measure of negative externality known as social
burden within a broad class of possible assumptions about agent behavior.
  Recognizing these limitations we explore alternatives to standard
microfoundations for binary classification. We start by describing a set of
desiderata that help navigate the space of possible assumptions about how
agents respond to a decision rule. In particular, we analyze a natural
constraint on feature manipulations, and discuss properties that are sufficient
to guarantee the robust existence of stable points. Building on these insights,
we then propose the noisy response model. Inspired by smoothed analysis and
empirical observations, noisy response incorporates imperfection in the agent
responses, which we show mitigates the limitations of standard
microfoundations. Our model retains analytical tractability, leads to more
robust insights about stable points, and imposes a lower social burden at
optimality.",Alternative Microfoundations for Strategic Classification,2021-06-24 03:30:58,"Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt","http://arxiv.org/abs/2106.12705v1, http://arxiv.org/pdf/2106.12705v1",cs.LG
35633,th,"This paper unifies two key results from economic theory, namely, revealed
rational inattention and classical revealed preference. Revealed rational
inattention tests for rationality of information acquisition for Bayesian
decision makers. On the other hand, classical revealed preference tests for
utility maximization under known budget constraints. Our first result is an
equivalence result - we unify revealed rational inattention and revealed
preference through an equivalence map over decision parameters and a partial
order for payoff monotonicity over the decision space in both setups. Second,
we exploit the unification result computationally to extend robustness measures
for goodness-of-fit of revealed preference tests in the literature to revealed
rational inattention. This extension facilitates quantifying how well a
Bayesian decision maker's actions satisfy rational inattention. Finally, we
illustrate the significance of the unification result on a real-world YouTube
dataset comprising thumbnail, title and user engagement metadata from
approximately 140,000 videos. We compute the Bayesian analog of robustness
measures from revealed preference literature on YouTube metadata features
extracted from a deep auto-encoder, i.e., a deep neural network that learns
low-dimensional features of the metadata. The computed robustness values show
that YouTube user engagement fits the rational inattention model remarkably
well. All our numerical experiments are completely reproducible.",Unifying Revealed Preference and Revealed Rational Inattention,2021-06-28 12:01:09,"Kunal Pattanayak, Vikram Krishnamurthy","http://arxiv.org/abs/2106.14486v4, http://arxiv.org/pdf/2106.14486v4",econ.TH
35634,th,"The intermittent nature of renewable energy resources creates extra
challenges in the operation and control of the electricity grid. Demand
flexibility markets can help in dealing with these challenges by introducing
incentives for customers to modify their demand. Market-based demand-side
management (DSM) has garnered serious attention lately due to its promising
capability of maintaining the balance between supply and demand, while also
keeping customer satisfaction at its highest levels. Many researchers have
proposed using concepts from mechanism design theory in their approaches to
market-based DSM. In this work, we provide a review of the advances in
market-based DSM using mechanism design. We provide a categorisation of the
reviewed literature and evaluate the strengths and weaknesses of each design
criterion. We also study the utility function formulations used in the reviewed
literature and provide a critique of the proposed indirect mechanisms. We show
that despite the extensiveness of the literature on this subject, there remain
concerns and challenges that should be addressed for the realistic
implementation of such DSM approaches. We draw conclusions from our review and
discuss possible future research directions.",Applications of Mechanism Design in Market-Based Demand-Side Management,2021-06-24 14:24:41,"Khaled Abedrabboh, Luluwah Al-Fagih","http://arxiv.org/abs/2106.14659v1, http://arxiv.org/pdf/2106.14659v1",cs.GT
35642,th,"This paper attempts to analyse policymaking in the field of Intellectual
Property (IP) as an instrument of economic growth across the Global North and
South. It begins by studying the links between economic growth and IP, followed
by an understanding of Intellectual Property Rights (IPR) development in the
US, a leading proponent of robust IPR protection internationally. The next
section compares the IPR in the Global North and South and undertakes an
analysis of the diverse factors that result in these differences. The paper
uses the case study of the Indian Pharmaceutical Industry to understand how IPR
may differentially affect economies and conclude that there may not yet be a
one-size-fits-all policy for the adoption of Intellectual Property Rights.",Comparing Intellectual property policy in the Global North and South -- A one-size-fits-all policy for economic prosperity?,2021-07-14 20:20:43,"S Sidhartha Narayan, Malavika Ranjan, Madhumitha Raghuraman","http://arxiv.org/abs/2107.06855v2, http://arxiv.org/pdf/2107.06855v2",econ.TH
35635,th,"Lloyd S. Shapley \cite{Shapley1953a, Shapley1953} introduced a set of axioms
in 1953, now called the {\em Shapley axioms}, and showed that the axioms
characterize a natural allocation among the players who are in grand coalition
of a {\em cooperative game}. Recently, \citet{StTe2019} showed that a
cooperative game can be decomposed into a sum of {\em component games}, one for
each player, whose value at the grand coalition coincides with the {\em Shapley
value}. The component games are defined by the solutions to the naturally
defined system of least squares linear equations via the framework of the {\em
Hodge decomposition} on the hypercube graph.
  In this paper we propose a new set of axioms which characterizes the
component games. Furthermore, we realize them through an intriguing stochastic
path integral driven by a canonical Markov chain. The integrals are a natural
representation of the expected total contribution made by the players for each
coalition, and hence can be viewed as their fair share. This allows us to
interpret the component game values for each coalition also as a valid measure
of fair allocation among the players in the coalition. Our axioms may be viewed
as a completion of Shapley axioms in view of this characterization of the
Hodge-theoretic component games, and moreover, the stochastic path integral
representation of the component games may be viewed as an extension of the
Shapley formula.",A Hodge theoretic extension of Shapley axioms,2021-06-29 08:11:02,Tongseok Lim,"http://arxiv.org/abs/2106.15094v3, http://arxiv.org/pdf/2106.15094v3",math.OC
35636,th,"The matching literature often recommends market centralization under the
assumption that agents know their own preferences and that their preferences
are fixed. We find counterevidence to this assumption in a quasi-experiment. In
Germany's university admissions, a clearinghouse implements the early stages of
the Gale-Shapley algorithm in real time. We show that early offers made in this
decentralized phase, although not more desirable, are accepted more often than
later ones. These results, together with survey evidence and a theoretical
model, are consistent with students' costly learning about universities. We
propose a hybrid mechanism to combine the advantages of decentralization and
centralization.
  Published at The Journal of Political Economy under a new title, ``Preference
Discovery in University Admissions: The Case for Dynamic Multioffer
Mechanisms,'' available at https://doi.org/10.1086/718983 (Open Access).",Decentralizing Centralized Matching Markets: Implications from Early Offers in University Admissions,2021-07-04 06:37:19,"Julien Grenet, YingHua He, Dorothea Kübler","http://dx.doi.org/10.1086/718983, http://arxiv.org/abs/2107.01532v2, http://arxiv.org/pdf/2107.01532v2",econ.GN
35637,th,"We propose a model, which nests a susceptible-infected-recovered-deceased
(SIRD) epidemic model into a dynamic macroeconomic equilibrium framework with
agents' mobility. The latter affects both their income and their probability of
infecting and being infected. Strategic complementarities among individual
mobility choices drive the evolution of aggregate economic activity, while
infection externalities caused by individual mobility affect disease diffusion.
The continuum of rational forward-looking agents coordinates on the Nash
equilibrium of a discrete time, finite-state, infinite-horizon Mean Field Game.
We prove the existence of an equilibrium and provide a recursive construction
method for the search of an equilibrium (or equilibria), which also guides our numerical
investigations. We calibrate the model by using Italian experience on COVID-19
epidemic and we discuss policy implications.","Mobility decisions, economic dynamics and epidemic",2021-07-05 01:54:40,"Giorgio Fabbri, Salvatore Federico, Davide Fiaschi, Fausto Gozzi","http://dx.doi.org/10.1007/s00199-023-01485-1, http://arxiv.org/abs/2107.01746v2, http://arxiv.org/pdf/2107.01746v2",econ.GN
35638,th,"Proof-of-Stake blockchains based on a longest-chain consensus protocol are an
attractive energy-friendly alternative to the Proof-of-Work paradigm. However,
formal barriers to ""getting the incentives right"" were recently discovered,
driven by the desire to use the blockchain itself as a source of
pseudorandomness [brown2019formal].
  We consider instead a longest-chain Proof-of-Stake protocol with perfect,
trusted, external randomness (e.g. a randomness beacon). We produce two main
results.
  First, we show that a strategic miner can strictly outperform an honest miner
with just $32.5\%$ of the total stake. Note that a miner of this size cannot
outperform an honest miner in the Proof-of-Work model. This establishes
that even with access to a perfect randomness beacon, incentives in
Proof-of-Work and Proof-of-Stake longest-chain protocols are fundamentally
different.
  Second, we prove that a strategic miner cannot outperform an honest miner
with $30.8\%$ of the total stake. This means that, while not quite as secure as
the Proof-of-Work regime, desirable incentive properties of Proof-of-Work
longest-chain protocols can be approximately recovered via Proof-of-Stake with
a perfect randomness beacon.
The space of possible strategies in a Proof-of-Stake mining game is
significantly richer than in a Proof-of-Work game. Our main technical
contribution is a characterization of potentially optimal strategies for a
strategic miner, and in particular, a proof that the corresponding
infinite-state MDP admits an optimal strategy that is positive recurrent.",Proof-of-Stake Mining Games with Perfect Randomness,2021-07-08 22:01:58,"Matheus V. X. Ferreira, S. Matthew Weinberg","http://dx.doi.org/10.1145/3465456.3467636, http://arxiv.org/abs/2107.04069v2, http://arxiv.org/pdf/2107.04069v2",cs.GT
35639,th,"We study the impacts of incomplete information on centralized one-to-one
matching markets. We focus on the commonly used Deferred Acceptance mechanism
(Gale and Shapley, 1962). We show that many complete-information results are
fragile to a small infusion of uncertainty about others' preferences.",Centralized Matching with Incomplete Information,2021-07-08 23:32:22,"Marcelo Ariel Fernandez, Kirill Rudov, Leeat Yariv","http://arxiv.org/abs/2107.04098v1, http://arxiv.org/pdf/2107.04098v1",econ.TH
35640,th,"The Condorcet criterion (CC) is a classical and well-accepted criterion for
voting. Unfortunately, it is incompatible with many other desiderata including
participation (Par), half-way monotonicity (HM), Maskin monotonicity (MM), and
strategy-proofness (SP). Such incompatibilities are often known as
impossibility theorems, and are proved by worst-case analysis. Previous work
has investigated the likelihood for these impossibilities to occur under
certain models, which are often criticized as being unrealistic.
  We strengthen previous work by proving the first set of semi-random
impossibilities for voting rules to satisfy CC and the more general, group
versions of the four desiderata: for any sufficiently large number of voters
$n$, any size of the group $1\le B\le \sqrt n$, any voting rule $r$, and under
a large class of semi-random models that include Impartial Culture, the
likelihood for $r$ to satisfy CC and Par, CC and HM, CC and MM, or CC and SP is
$1-\Omega(\frac{B}{\sqrt n})$. This matches existing lower bounds for CC and
Par ($B=1$) and CC and SP ($B\le \sqrt n$), showing that many commonly-studied
voting rules are already asymptotically optimal in such cases.",Semi-Random Impossibilities of Condorcet Criterion,2021-07-14 03:39:33,Lirong Xia,"http://arxiv.org/abs/2107.06435v2, http://arxiv.org/pdf/2107.06435v2",econ.TH
35643,th,"With the growing use of distributed machine learning techniques, there is a
growing need for data markets that allow agents to share data with each other.
Nevertheless, data has unique features that separate it from other commodities,
including replicability, cost of sharing, and ability to distort. We study a
setup where each agent can be both buyer and seller of data. For this setup, we
consider two cases: bilateral data exchange (trading data with data) and
unilateral data exchange (trading data with money). We model bilateral sharing
as a network formation game and show the existence of a strongly stable outcome
under the top agents property by allowing limited complementarity. We propose
an ordered match algorithm which finds the stable outcome in O(N^2) time (N is
the number of agents). For the unilateral sharing, under the assumption of an additive
cost structure, we construct competitive prices that can implement any social
welfare maximizing outcome. Finally, for this setup, when agents have private
information, we propose the mixed-VCG mechanism, which uses zero-cost
distortion of the shared data, with its isolated impact, to achieve budget balance
while truthfully implementing socially optimal outcomes, covering exactly the
budget imbalance of standard VCG mechanisms. Mixed-VCG uses data distortions as
data money for this purpose. We further relax the zero-cost data distortion
assumption by proposing distorted-mixed-VCG. We also extend our model and
results to data sharing via incremental inquiries and differential privacy
costs.",Data Sharing Markets,2021-07-19 09:00:34,"Mohammad Rasouli, Michael I. Jordan","http://arxiv.org/abs/2107.08630v2, http://arxiv.org/pdf/2107.08630v2",econ.TH
35644,th,"In economy, viewed as a quantum system working as a circuit, each process at
the microscale is a quantum gate among agents. The global configuration of
economy is addressed by optimizing the sustainability of the whole circuit.
This is done in terms of geodesics, starting from some approximations. A
similar yet somehow different approach is applied for the closed system of the
whole and for economy as an open system. Computations may partly be explicit,
especially when the reality is represented in a simplified way. The circuit can
be also optimized by minimizing its complexity, with a partly similar
formalism, yet generally not along the same paths.",Sustainability of Global Economy as a Quantum Circuit,2021-07-13 11:40:47,Antonino Claudio Bonan,"http://arxiv.org/abs/2107.09032v1, http://arxiv.org/pdf/2107.09032v1",econ.TH
35645,th,"This paper generalizes L.S. Shapley's celebrated value allocation theory on
coalition games by discovering and applying a fundamental connection between
stochastic path integration driven by canonical time-reversible Markov chains
and Hodge-theoretic discrete Poisson's equations on general weighted graphs.
  More precisely, we begin by defining cooperative games on general graphs and
generalize Shapley's value allocation formula for those games in terms of
stochastic path integral driven by the associated canonical Markov chain. We
then show that the value allocation operator, one for each player, defined by the
path integral, turns out to be the solution to Poisson's equation defined
via the combinatorial Hodge decomposition on general weighted graphs.
  Several motivational examples and applications are presented, in particular,
a section is devoted to reinterpret and extend Nash's and Kohlberg and Neyman's
solution concept for cooperative games. This and other examples, e.g. on
revenue management, suggest that our general framework does not have to be
restricted to cooperative games setup, but may apply to broader range of
problems arising in economics, finance and other social and physical sciences.",Hodge theoretic reward allocation for generalized cooperative games on graphs,2021-07-22 11:13:11,Tongseok Lim,"http://arxiv.org/abs/2107.10510v3, http://arxiv.org/pdf/2107.10510v3",math.PR
35646,th,"This paper specifies an extensive form as a 5-ary relation (that is, as a set
of quintuples) which satisfies eight abstract axioms. Each quintuple is
understood to list a player, a situation (a concept which generalizes an
information set), a decision node, an action, and a successor node.
Accordingly, the axioms are understood to specify abstract relationships
between players, situations, nodes, and actions. Such an extensive form is
called a ""pentaform"". A ""pentaform game"" is then defined to be a pentaform
together with utility functions. The paper's main result is to construct an
intuitive bijection between pentaform games and $\mathbf{Gm}$ games (Streufert
2021, arXiv:2105.11398), which are centrally located in the literature, and
which encompass all finite-horizon or infinite-horizon discrete games. In this
sense, pentaform games equivalently formulate almost all extensive-form games.
Secondary results concern disaggregating pentaforms by subsets, constructing
pentaforms by unions, and initial applications to Selten subgames and
perfect-recall (an extensive application to dynamic programming is in Streufert
2023, arXiv:2302.03855).",Specifying a Game-Theoretic Extensive Form as an Abstract 5-ary Relation,2021-07-22 19:56:07,Peter A. Streufert,"http://arxiv.org/abs/2107.10801v4, http://arxiv.org/pdf/2107.10801v4",econ.TH
35647,th,"We study a model in which before a conflict between two parties escalates
into a war (in the form of an all-pay auction), a party can offer a
take-it-or-leave-it bribe to the other for a peaceful settlement. In contrast
to the received literature, we find that peace security is impossible in our
model. We characterize the necessary and sufficient conditions for peace
implementability. Furthermore, we find that separating equilibria do not exist
and the number of (on-path) bribes in any non-peaceful equilibria is at most
two. We also consider a requesting model and characterize the necessary and
sufficient conditions for the existence of robust peaceful equilibria, all of
which are sustained by the identical (on-path) request. Contrary to the bribing
model, peace security is possible in the requesting model.",Peace through bribing,2021-07-24 13:12:10,"Jingfeng Lu, Zongwei Lu, Christian Riis","http://arxiv.org/abs/2107.11575v3, http://arxiv.org/pdf/2107.11575v3",econ.TH
35648,th,"We propose a new single-winner voting system using ranked ballots: Stable
Voting. The motivating principle of Stable Voting is that if a candidate A
would win without another candidate B in the election, and A beats B in a
head-to-head majority comparison, then A should still win in the election with
B included (unless there is another candidate A' who has the same kind of claim
to winning, in which case a tiebreaker may choose between such candidates). We
call this principle Stability for Winners (with Tiebreaking). Stable Voting
satisfies this principle while also having a remarkable ability to avoid tied
outcomes in elections even with small numbers of voters.",Stable Voting,2021-08-02 00:06:56,"Wesley H. Holliday, Eric Pacuit","http://arxiv.org/abs/2108.00542v9, http://arxiv.org/pdf/2108.00542v9",econ.TH
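For very small elections, the motivating principle can be turned into a naive recursion, sketched below: consider pairs (A, B) such that A would win once B is removed, and elect the A's whose head-to-head margin over such a B is largest. This is only an illustration of Stability for Winners under the assumptions of this sketch; the published Stable Voting rule uses a more careful tiebreaking procedure and a more efficient formulation, so this should not be read as the authors' exact algorithm.

```python
def margin(profile, a, b):
    """Net number of voters ranking a above b (rankings: lists, best first)."""
    return sum((r.index(a) < r.index(b)) - (r.index(b) < r.index(a)) for r in profile)

def naive_stable_winners(profile, candidates):
    """Illustrative recursion in the spirit of Stability for Winners.
    Exponential time; intended only for elections with a handful of candidates."""
    if len(candidates) == 1:
        return list(candidates)
    viable = []  # (margin of A over B, A) for pairs where A wins once B is removed
    for a in candidates:
        for b in candidates:
            if a == b:
                continue
            rest = [c for c in candidates if c != b]
            if a in naive_stable_winners(profile, rest):
                viable.append((margin(profile, a, b), a))
    best = max(m for m, _ in viable)
    return sorted({a for m, a in viable if m == best})

# Example: 3 voters over candidates a, b, c (a is the Condorcet winner)
profile = [["a", "b", "c"], ["b", "c", "a"], ["a", "c", "b"]]
print(naive_stable_winners(profile, ["a", "b", "c"]))
```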
35649,th,"Unlike tic-tac-toe or checkers, in which optimal play leads to a draw, it is
not known whether optimal play in chess ends in a win for White, a win for
Black, or a draw. But after White moves first in chess, if Black has a double
move followed by a double move of White and then alternating play, play is more
balanced because White does not always tie or lead in moves. Symbolically,
Balanced Alternation gives the following move sequence: After White's (W)
initial move, first Black (B) and then White each have two moves in a row
(BBWW), followed by the alternating sequence, beginning with W, which
altogether can be written as WB/BW/WB/WB/WB... (the slashes separate
alternating pairs of moves). Except for reversal of the 3rd and 4th moves from
WB to BW, this is the standard chess sequence. Because Balanced Alternation
lies between the standard sequence, which favors White, and a comparable
sequence that favors Black, it is highly likely to produce a draw with optimal
play, rendering chess fairer. This conclusion is supported by a computer
analysis of chess openings and how they would play out under Balanced
Alternation.",Fairer Chess: A Reversal of Two Opening Moves in Chess Creates Balance Between White and Black,2021-08-05 15:14:36,"Steven J. Brams, Mehmet S. Ismail","http://arxiv.org/abs/2108.02547v2, http://arxiv.org/pdf/2108.02547v2",econ.TH
35650,th,"To protect his teaching evaluations, an economics professor uses the
following exam curve: if the class average falls below a known target, $m$,
then all students will receive an equal number of free points so as to bring
the mean up to $m$. If the average is above $m$ then there is no curve; curved
grades above $100\%$ will never be truncated to $100\%$ in the gradebook. The
$n$ students in the course all have Cobb-Douglas preferences over the
grade-leisure plane; effort corresponds exactly to earned (uncurved) grades in
a $1:1$ fashion. The elasticity of each student's utility with respect to his
grade is his ability parameter, or relative preference for a high score. I
find, classify, and give complete formulas for all the pure Nash equilibria of
my own game, which my students have been playing for some eight semesters. The
game is supermodular, featuring strategic complementarities, negative
spillovers, and nonsmooth payoffs that generate non-convexities in the reaction
correspondence. The $n+2$ types of equilibria are totally ordered with respect
to effort and Pareto preference, and the lowest $n+1$ of these types are
totally ordered in grade-leisure space. In addition to the no-curve
(""try-hard"") and curved interior equilibria, we have the ""$k$-don't care""
equilibria, whereby the $k$ lowest-ability students are no-shows. As the class
size becomes infinite in the curved interior equilibrium, all students increase
their leisure time by a fixed percentage, i.e., $14\%$, in response to the
disincentive, which amplifies any pre-existing ability differences. All
students' grades inflate by this same (endogenous) factor, say, $1.14$ times
what they would have been under the correct standard.",Grade Inflation and Stunted Effort in a Curved Economics Course,2021-08-08 21:48:31,Alex Garivaltis,"http://arxiv.org/abs/2108.03709v3, http://arxiv.org/pdf/2108.03709v3",econ.TH
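A stylized numerical companion to the game described above, under one concrete parametrization that is assumed here rather than taken from the paper: utility U_i = G_i^{a_i} (T - e_i)^{1 - a_i} with effort e_i equal to the uncurved grade, a time budget T, and the stated curve max(0, m - mean(e)). Grid-based best-response iteration lets one watch effort and the curve adjust; the paper's closed-form equilibria are not reproduced.

```python
import numpy as np

def curved_exam_best_response(abilities, m=70.0, T=100.0, iters=200, grid_pts=1001):
    """Best-response iteration for a stylized version of the curved-exam game.
    abilities: ability parameters a_i in (0, 1), the grade elasticity of utility.
    Returns the effort profile and the resulting curve (free points added)."""
    a = np.asarray(abilities, dtype=float)
    n = len(a)
    grid = np.linspace(0.0, T, grid_pts)      # candidate effort levels
    e = np.full(n, m)                         # start everyone at the target mean
    for _ in range(iters):
        for i in range(n):
            others = e.sum() - e[i]
            curve = np.maximum(0.0, m - (others + grid) / n)
            G = grid + curve                  # curved grade
            L = T - grid                      # leisure
            U = G ** a[i] * L ** (1.0 - a[i])
            e[i] = grid[np.argmax(U)]
    return e, max(0.0, m - e.mean())

effort, curve = curved_exam_best_response([0.3, 0.5, 0.7, 0.9])
```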
35651,th,"We propose and investigate a model for mate searching and marriage in large
societies based on a stochastic matching process and simple decision rules.
Agents have preferences among themselves given by some probability
distribution. They randomly search for better mates, forming new couples and
breaking apart in the process. Marriage is implemented in the model by adding
the decision of stopping searching for a better mate when the affinity between
a couple is higher than a certain fixed amount. We show that the average
utility in the system with marriage can be higher than in the system without
it. Part of our results can be summarized in what sounds like a piece of
advice: don't marry the first person you like and don't search for the love of
your life, but get married if you like your partner more than a sigma above
average. We also find that the average utility attained in our stochastic model
is smaller than the one associated with a stable matching achieved using the
Gale-Shapley algorithm. This can be taken as a formal argument in favor of a
central planner (perhaps an app) with the information to coordinate the
marriage market in order to set a stable matching. To roughly test the adequacy
of our model to describe existing societies, we compare the evolution of the
fraction of married couples in our model with real-world data and obtain good
agreement. In the last section, we formulate the model in the limit of an
infinite number of agents and find an analytical expression for the evolution
of the system.",Benefits of marriage as a search strategy,2021-08-10 22:24:38,Davi B. Costa,"http://arxiv.org/abs/2108.04885v2, http://arxiv.org/pdf/2108.04885v2",econ.TH
35652,th,"A network game assigns a level of collectively generated wealth to every
network that can form on a given set of players. A variable network game
combines a network game with a network formation probability distribution,
describing certain restrictions on network formation. Expected levels of
collectively generated wealth and expected individual payoffs can be formulated
in this setting.
  We investigate properties of the resulting expected wealth levels as well as
the expected variants of well-established network game values as allocation
rules that assign to every variable network game a payoff to the players in a
variable network game. We establish two axiomatizations of the Expected Myerson
Value, originally formulated and proven on the class of communication
situations, based on the well-established component balance, equal bargaining
power and balanced contributions properties. Furthermore, we extend an
established axiomatization of the Position Value based on the balanced link
contribution property to the Expected Position Value.",Expected Values for Variable Network Games,2021-08-16 15:35:40,"Subhadip Chakrabarti, Loyimee Gogoi, Robert P Gilles, Surajit Borkotokey, Rajnish Kumar","http://arxiv.org/abs/2108.07047v2, http://arxiv.org/pdf/2108.07047v2",cs.GT
35653,th,"This paper studies dynamic monopoly pricing for a class of settings that
includes multiple durable, multiple rental, or a mix of varieties. We show that
the driving force behind pricing dynamics is the seller's incentive to switch
consumers - buyers and non-buyers - to higher-valued consumption options by
lowering prices (""trading up""). If consumers cannot be traded up from the
static optimal allocation, pricing dynamics do not emerge in equilibrium. If
consumers can be traded up, pricing dynamics arise until all trading-up
opportunities are exhausted. We study the conditions under which pricing
dynamics end in finite time and characterize the final prices at which dynamics
end.",Dynamic Monopoly Pricing With Multiple Varieties: Trading Up,2021-08-16 18:22:25,"Stefan Buehler, Nicolas Eschenbaum","http://arxiv.org/abs/2108.07146v2, http://arxiv.org/pdf/2108.07146v2",econ.GN
35654,th,"An important but understudied question in economics is how people choose when
facing uncertainty in the timing of events. Here we study preferences over time
lotteries, in which the payment amount is certain but the payment time is
uncertain. Expected discounted utility theory (EDUT) predicts decision makers
to be risk-seeking over time lotteries. We explore a normative model of
growth-optimality, in which decision makers maximise the long-term growth rate
of their wealth. Revisiting experimental evidence on time lotteries, we find
that growth-optimality accords better with the evidence than EDUT. We outline
future experiments to scrutinise further the plausibility of growth-optimality.",Risk Preferences in Time Lotteries,2021-08-18 22:46:55,"Yonatan Berman, Mark Kirstein","http://arxiv.org/abs/2108.08366v1, http://arxiv.org/pdf/2108.08366v1",econ.TH
35655,th,"It is believed that interventions that change the media's costs of
misreporting can increase the information provided by media outlets. This paper
analyzes the validity of this claim and the welfare implications of those types
of interventions that affect misreporting costs. I study a model of
communication between an uninformed voter and a media outlet that knows the
quality of two competing candidates. The alternatives available to the voter
are endogenously championed by the two candidates. I show that higher costs may
lead to more misreporting and persuasion, whereas low costs result in full
revelation; interventions that increase misreporting costs never harm the
voter, but those that do so slightly may be wasteful of public resources. I
conclude that intuitions derived from the interaction between the media and
voters, without incorporating the candidates' strategic responses to the media
environment, do not capture properly the effects of these types of
interventions.",Influential News and Policy-making,2021-08-25 14:00:38,Federico Vaccari,"http://arxiv.org/abs/2108.11177v1, http://arxiv.org/pdf/2108.11177v1",econ.GN
35656,th,"This paper presents six theorems and ten propositions that can be read as
deconstructing and integrating the continuity postulate under the rubric of
pioneering work of Eilenberg, Wold, von Neumann-Morgenstern, Herstein-Milnor
and Debreu. Its point of departure is the fact that the adjective continuous
applied to a function or a binary relation does not acknowledge the many
meanings that can be given to the concept it names, and that under a variety of
technical mathematical structures, its many meanings can be whittled down to
novel and unexpected equivalences that have been missed in the theory of
choice. Specifically, it provides a systematic investigation of the two-way
relation between restricted and full continuity of a function and a binary
relation that, under convex, monotonic and differentiable structures, draws out
the behavioral implications of the postulate.",The Continuity Postulate in Economic Theory: A Deconstruction and an Integration,2021-08-26 15:29:46,"Metin Uyanik, M. Ali Khan","http://arxiv.org/abs/2108.11736v2, http://arxiv.org/pdf/2108.11736v2",econ.TH
35657,th,"This paper studies the design of mechanisms that are robust to
misspecification. We introduce a novel notion of robustness that connects a
variety of disparate approaches and study its implications in a wide class of
mechanism design problems. This notion is quantifiable, allowing us to
formalize and answer comparative statics questions relating the nature and
degree of misspecification to sharp predictions regarding features of feasible
mechanisms. This notion also has a behavioral foundation which reflects the
perception of ambiguity, thus allowing the degree of misspecification to emerge
endogenously. In a number of standard settings, robustness to arbitrarily small
amounts of misspecification generates a discontinuity in the set of feasible
mechanisms and uniquely selects simple, ex post incentive compatible mechanisms
such as second-price auctions. Robustness also sheds light on the value of
private information and the prevalence of full or virtual surplus extraction.",Uncertainty in Mechanism Design,2021-08-28 14:58:45,"Giuseppe Lopomo, Luca Rigotti, Chris Shannon","http://arxiv.org/abs/2108.12633v1, http://arxiv.org/pdf/2108.12633v1",econ.TH
35658,th,"This paper investigates stochastic continuous time contests with a twist: the
designer requires that contest participants incur some cost to submit their
entries. When the designer wishes to maximize the (expected) performance of the
top performer, a strictly positive submission fee is optimal. When the designer
wishes to maximize total (expected) performance, either the highest submission
fee or the lowest submission fee is optimal.",Submission Fees in Risk-Taking Contests,2021-08-30 23:07:59,Mark Whitmeyer,"http://arxiv.org/abs/2108.13506v1, http://arxiv.org/pdf/2108.13506v1",econ.TH
35659,th,"Information policies such as scores, ratings, and recommendations are
increasingly shaping society's choices in high-stakes domains. We provide a
framework to study the welfare implications of information policies on a
population of heterogeneous individuals. We define and characterize the Bayes
welfare set, consisting of the population's utility profiles that are feasible
under some information policy. The Pareto frontier of this set can be recovered
by a series of standard Bayesian persuasion problems, in which a utilitarian
planner takes the role of the information designer. We provide necessary and
sufficient conditions under which an information policy exists that Pareto
dominates the no-information policy. We illustrate our results with
applications to data leakage, price discrimination, and credit ratings.",Persuasion and Welfare,2021-09-07 15:51:58,"Laura Doval, Alex Smolin","http://arxiv.org/abs/2109.03061v4, http://arxiv.org/pdf/2109.03061v4",econ.TH
35660,th,"The inventories carried in a supply chain as a strategic tool to influence
the competing firms are considered to be strategic inventories (SI). We present
a two-period game-theoretic supply chain model, in which a single
manufacturer supplies products to a pair of identical Cournot duopolistic
retailers. We show that the SI carried by the retailers under dynamic contract
is Pareto-dominating for the manufacturer, retailers, consumers, the channel,
and the society as well. We also find that the retailer's SI, however, can be
eliminated when the manufacturer commits to a wholesale contract or the inventory
holding cost is too high. In comparing the cases with and without downstream
competition, we also show that the downstream Cournot duopoly reduces the
retailers' profits but benefits all others.",Strategic Inventories in a Supply Chain with Downstream Cournot Duopoly,2021-09-15 01:21:29,"Xiaowei Hu, Jaejin Jang, Nabeel Hamoud, Amirsaman Bajgiran","http://dx.doi.org/10.1504/IJOR.2021.119934, http://arxiv.org/abs/2109.06995v2, http://arxiv.org/pdf/2109.06995v2",econ.GN
35661,th,"This paper proposes a new approach to training recommender systems called
deviation-based learning. The recommender and rational users have different
knowledge. The recommender learns user knowledge by observing what action users
take upon receiving recommendations. Learning eventually stalls if the
recommender always suggests a choice: Before the recommender completes
learning, users start following the recommendations blindly, and their choices
do not reflect their knowledge. The learning rate and social welfare improve
substantially if the recommender abstains from recommending a particular choice
when she predicts that multiple alternatives will produce a similar payoff.",Deviation-Based Learning: Training Recommender Systems Using Informed User Choice,2021-09-20 22:51:37,"Junpei Komiyama, Shunya Noda","http://arxiv.org/abs/2109.09816v2, http://arxiv.org/pdf/2109.09816v2",econ.TH
35703,th,"We consider the problem of revenue-maximizing Bayesian auction design with
several bidders having independent private values over several items. We show
that it can be reduced to the problem of continuous optimal transportation
introduced by Beckmann (1952) where the optimal transportation flow generalizes
the concept of ironed virtual valuations to the multi-item setting. We
establish the strong duality between the two problems and the existence of
solutions. The results rely on insights from majorization and optimal
transportation theories and on the characterization of feasible interim
mechanisms by Hart and Reny (2015).",Beckmann's approach to multi-item multi-bidder auctions,2022-03-14 06:30:58,"Alexander V. Kolesnikov, Fedor Sandomirskiy, Aleh Tsyvinski, Alexander P. Zimin","http://arxiv.org/abs/2203.06837v2, http://arxiv.org/pdf/2203.06837v2",econ.TH
35662,th,"We introduce NEM X, an inclusive retail tariff model that captures features
of existing net energy metering (NEM) policies. It is shown that the optimal
prosumer decision has three modes: (a) the net-consuming mode where the
prosumer consumes more than its behind-the-meter distributed energy resource
(DER) production when the DER production is below a predetermined lower
threshold, (b) the net-producing mode where the prosumer consumes less than its
DER production when the DER production is above a predetermined upper
threshold, and (c) the net-zero energy mode where the prosumer's consumption
matches its DER generation when its DER production is between the lower and
upper thresholds. Both thresholds are obtained in closed-form. Next, we analyze
the regulator's rate-setting process that determines NEM X parameters such as
retail/sell rates, fixed charges, and price differentials in time-of-use
tariffs' on and off-peak periods. A stochastic Ramsey pricing program that
maximizes social welfare subject to the revenue break-even constraint for the
regulated utility is formulated. Performance of several NEM X policies is
evaluated using real and synthetic data to illuminate the impacts of NEM policy
designs on social welfare, cross-subsidies of prosumers by consumers, and
payback time of DER investments that affect long-run DER adoptions.","On Net Energy Metering X: Optimal Prosumer Decisions, Social Welfare, and Cross-Subsidies",2021-09-21 08:58:59,"Ahmed S. Alahmed, Lang Tong","http://arxiv.org/abs/2109.09977v4, http://arxiv.org/pdf/2109.09977v4",eess.SY
35663,th,"An individual can only experience regret if she learns about an unchosen
alternative. In many situations, learning about an unchosen alternative is
possible only if someone else chose it. We develop a model where the ex-post
information available to each regret averse individual depends both on their
own choice and on the choices of others, as others can reveal ex-post
information about what might have been. This implies that what appears to be a
series of isolated single-person decision problems is in fact a rich
multi-player behavioural game, the regret game, where the psychological payoffs
that depend on ex-post information are interconnected. For an open set of
parameters, the regret game is a coordination game with multiple equilibria,
despite the fact that all individuals possess a uniquely optimal choice in
isolation. We experimentally test this prediction and find support for it.",Ignorance is Bliss: A Game of Regret,2021-09-22 21:34:55,"Claudia Cerrone, Francesco Feri, Philip R. Neary","http://arxiv.org/abs/2109.10968v3, http://arxiv.org/pdf/2109.10968v3",econ.TH
35664,th,"I extend the concept of absorptive capacity, used in the analysis of firms,
to a framework applicable to the national level. First, employing confirmatory
factor analyses on 47 variables, I build 13 composite factors crucial to
measuring six national level capacities: technological capacity, financial
capacity, human capacity, infrastructural capacity, public policy capacity, and
social capacity. My data cover most low- and middle-income economies (LMICs),
eligible for the World Bank's International Development Association (IDA)
support between 2005 and 2019. Second, I analyze the relationship between the
estimated capacity factors and economic growth while controlling for some of
the incoming flows from abroad and other confounders that might influence the
relationship. Lastly, I conduct K-means cluster analysis and then analyze the
results alongside regression estimates to glean patterns and classifications
within the LMICs. Results indicate that enhancing infrastructure (ICT, energy,
trade, and transport), financial (apparatus and environment), and public policy
capacities is a prerequisite for attaining economic growth. Similarly, I find
improving human capital with specialized skills positively impacts economic
growth. Finally, by providing a ranking of which capacity is empirically more
important for economic growth, I offer suggestions to governments with limited
budgets to make wise investments. Likewise, my findings inform international
policy and monetary bodies on how they could better channel their funding in
LMICs to achieve sustainable development goals and boost shared prosperity.",Absorptive capacities and economic growth in low and middle income economies,2021-09-23 20:49:51,Muhammad Salar Khan,"http://arxiv.org/abs/2109.11550v1, http://arxiv.org/pdf/2109.11550v1",econ.GN
35665,th,"Fair division is a significant, long-standing problem and is closely related
to social and economic justice. The conventional division methods such as
cut-and-choose are hardly applicable to real-world problems because of their
complexity and unrealistic assumptions about human behaviors. Here we propose a
fair division method from a completely different perspective, using the
Boltzmann distribution. The Boltzmann distribution adopted from the physical
sciences gives the most probable and unbiased distribution derived from a
goods-centric, rather than a player-centric, division process. The mathematical
model of the Boltzmann fair division was developed for both homogeneous and
heterogeneous division problems, and the players' key factors (contributions,
needs, and preferences) could be successfully integrated. We show that the
Boltzmann fair division is a well-balanced division method maximizing the
players' total utility, and it can easily be fine-tuned and applied to
complex real-world problems such as income/wealth redistribution or
international negotiations on fighting climate change.",The Boltzmann fair division for distributive justice,2021-09-24 15:17:04,"Ji-Won Park, Jaeup U. Kim, Cheol-Min Ghim, Chae Un Kim","http://arxiv.org/abs/2109.11917v2, http://arxiv.org/pdf/2109.11917v2",econ.GN
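As an illustrative aside to the row above: the abstract does not spell out the functional form of the division, so the sketch below only conveys the general flavor of a goods-centric Boltzmann allocation, with each unit assigned to a player with probability proportional to exp(beta * weight); the weights standing in for contributions, needs, and preferences are an assumption of the example, not the paper's formula.

  import numpy as np

  rng = np.random.default_rng(0)

  def boltzmann_allocation(weights, n_units, beta=1.0):
      # Each unit of the good is assigned independently; player i is drawn with
      # probability proportional to exp(beta * weights[i]).  The weights are a
      # hypothetical stand-in for contributions, needs, and preferences.
      w = np.asarray(weights, dtype=float)
      probs = np.exp(beta * w)
      probs /= probs.sum()
      draws = rng.choice(len(w), size=n_units, p=probs)
      return np.bincount(draws, minlength=len(w))

  # Three players with unequal weights sharing 100 identical units.
  print(boltzmann_allocation([1.0, 2.0, 4.0], n_units=100, beta=0.5))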
35666,th,"Bilateral trade, a fundamental topic in economics, models the problem of
intermediating between two strategic agents, a seller and a buyer, willing to
trade a good for which they hold private valuations. In this paper, we cast the
bilateral trade problem in a regret minimization framework over $T$ rounds of
seller/buyer interactions, with no prior knowledge on their private valuations.
Our main contribution is a complete characterization of the regret regimes for
fixed-price mechanisms with different feedback models and private valuations,
using as a benchmark the best fixed-price in hindsight. More precisely, we
prove the following tight bounds on the regret:
  - $\Theta(\sqrt{T})$ for full-feedback (i.e., direct revelation mechanisms).
  - $\Theta(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms)
and independent seller/buyer valuations with bounded densities.
  - $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded
densities.
  - $\Theta(T)$ for realistic feedback and independent seller/buyer valuations.
  - $\Theta(T)$ for the adversarial setting.",Bilateral Trade: A Regret Minimization Perspective,2021-09-09 01:11:48,"Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Federico Fusco, Stefano Leonardi","http://arxiv.org/abs/2109.12974v1, http://arxiv.org/pdf/2109.12974v1",cs.GT
35667,th,"Matching markets are of particular interest in computer science and economics
literature as they are often used to model real-world phenomena where we aim to
equitably distribute a limited amount of resources to multiple agents and
determine these distributions efficiently. Although it has been shown that
finding market clearing prices for Fisher markets with indivisible goods is
NP-hard, there exist polynomial-time algorithms able to compute these prices
and allocations when the goods are divisible and the utility functions are
linear. We provide a promising research direction toward the development of a
market that simulates buyers' preferences that vary according to the bundles of
goods allocated to other buyers. Our research aims to elucidate unique ways in
which the theory of matching markets can be extended to account for more
complex and often counterintuitive microeconomic phenomena.",Matching Markets,2021-09-30 08:13:27,"Andrew Yang, Bruce Changlong Xu, Ivan Villa-Renteria","http://arxiv.org/abs/2109.14850v1, http://arxiv.org/pdf/2109.14850v1",cs.GT
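As an illustrative aside to the row above: the polynomial-time computability of market-clearing prices for divisible goods with linear utilities, mentioned in the abstract, is classically obtained from the Eisenberg-Gale convex program, whose supply-constraint duals are the prices. A minimal sketch, assuming the cvxpy library and made-up valuations and budgets:

  import cvxpy as cp
  import numpy as np

  # Hypothetical linear Fisher market: 3 buyers, 2 divisible goods with unit supply.
  V = np.array([[2.0, 1.0],
                [1.0, 3.0],
                [1.0, 1.0]])          # V[i, j]: buyer i's value per unit of good j
  budgets = np.array([1.0, 1.0, 2.0])

  x = cp.Variable(V.shape, nonneg=True)            # x[i, j]: amount of good j given to buyer i
  utilities = cp.sum(cp.multiply(V, x), axis=1)    # linear utilities
  supply = cp.sum(x, axis=0) <= 1                  # unit supply of each good

  # Eisenberg-Gale objective: budget-weighted sum of log utilities.
  cp.Problem(cp.Maximize(budgets @ cp.log(utilities)), [supply]).solve()

  print("allocation:\n", np.round(x.value, 3))
  print("prices    :", np.round(supply.dual_value, 3))   # market-clearing prices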
35668,th,"We study an information design problem for a non-atomic service scheduling
game. The service starts at a random time and there is a continuum of agent
population who have a prior belief about the service start time but do not
observe the actual realization of it. The agents want to make decisions of when
to join the queue in order to avoid long waits in the queue or not to arrive
earlier than the service has started. There is a planner who knows when the
service starts and makes suggestions to the agents about when to join the queue
through an obedient direct signaling strategy, in order to minimize the average
social cost. We characterize the full information and the no information
equilibria and we show in what conditions it is optimal for the planner to
reveal the full information to the agents. Further, by imposing appropriate
assumptions on the model, we formulate the information design problem as a
generalized problem of moments (GPM) and use computational tools developed for
such problems to solve the problem numerically.",Information Design for a Non-atomic Service Scheduling Game,2021-10-01 00:18:24,"Nasimeh Heydaribeni, Ketan Savla","http://arxiv.org/abs/2110.00090v1, http://arxiv.org/pdf/2110.00090v1",eess.SY
35669,th,"In the setting where we want to aggregate people's subjective evaluations,
plurality vote may be meaningless when a large number of low-effort people
always report ""good"" regardless of the true quality. The ""surprisingly popular""
method, which picks the most surprising answer relative to the prior, handles
this issue to some extent. However, it is still not fully robust to people's
strategies. Here in the setting where a large number of people are asked to
answer a small number of multi-choice questions (multi-task, large group), we
propose an information aggregation method that is robust to people's
strategies. Interestingly, this method can be seen as a rotated ""surprisingly
popular"". It is based on a new clustering method, Determinant MaxImization
(DMI)-clustering, and a key conceptual idea that information elicitation
without ground-truth can be seen as a clustering problem. Of independent
interest, DMI-clustering is a general clustering method that aims to maximize
the volume of the simplex formed by the cluster means, multiplied by the
product of the cluster sizes. We show that DMI-clustering is invariant to any
non-degenerate affine transformation for all data points. When the data point's
dimension is a constant, DMI-clustering can be solved in polynomial time. In
general, we present a simple heuristic for DMI-clustering which is very similar
to Lloyd's algorithm for k-means. Additionally, we also apply the clustering
idea in the single-task setting and use the spectral method to propose a new
aggregation method that utilizes the second-moment information elicited from
the crowds.",Information Elicitation Meets Clustering,2021-10-03 11:47:55,Yuqing Kong,"http://arxiv.org/abs/2110.00952v1, http://arxiv.org/pdf/2110.00952v1",cs.GT
35670,th,"Condorcet's jury theorem states that the correct outcome is reached in direct
majority voting systems with sufficiently large electorates as long as each
voter's independent probability of voting for that outcome is greater than 0.5.
Yet, in situations where direct voting systems are infeasible, such as due to
high implementation and infrastructure costs, hierarchical voting systems
provide a reasonable alternative. We study differences in outcome precision
between hierarchical and direct voting systems for varying group sizes,
abstention rates, and voter competencies. Using asymptotic expansions of the
derivative of the reliability function (or Banzhaf number), we first prove that
indirect systems differ most from their direct counterparts when group size and
number are equal to each other, and therefore to $\sqrt{N_{\rm d}}$, where
$N_{\rm d}$ is the total number of voters in the direct system. In multitier
systems, we prove that this difference is maximized when group size equals
$\sqrt[n]{N_{\rm d}}$, where $n$ is the number of hierarchical levels. Second,
we show that while direct majority rule always outperforms hierarchical voting
for homogeneous electorates that vote with certainty, as group numbers and size
increase, hierarchical majority voting gains in its ability to represent all
eligible voters. Furthermore, when voter abstention and competency are
correlated within groups, hierarchical systems often outperform direct voting,
which we show by using a generating function approach that is able to
analytically characterize heterogeneous voting systems.",Tradeoffs in Hierarchical Voting Systems,2021-10-05 22:01:52,"Lucas Böttcher, Georgia Kernell","http://dx.doi.org/10.1177/26339137221133401, http://arxiv.org/abs/2110.02298v1, http://arxiv.org/pdf/2110.02298v1",math.CO
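As an illustrative aside to the row above: the precision gap between direct and hierarchical majority voting is easy to probe by simulation. A minimal sketch, assuming i.i.d. voter competence and equally sized groups (both simplifications relative to the paper's heterogeneous setting):

  import numpy as np

  rng = np.random.default_rng(0)

  def direct_majority(p, n_voters, trials=20000):
      votes = rng.random((trials, n_voters)) < p            # True = vote for the correct outcome
      return np.mean(votes.sum(axis=1) > n_voters / 2)

  def two_tier_majority(p, n_groups, group_size, trials=20000):
      votes = rng.random((trials, n_groups, group_size)) < p
      group_outcomes = votes.sum(axis=2) > group_size / 2    # majority within each group
      return np.mean(group_outcomes.sum(axis=1) > n_groups / 2)

  p, N = 0.55, 225                                           # 225 voters split into 15 groups of 15
  print("direct   :", direct_majority(p, N))
  print("two-tier :", two_tier_majority(p, 15, 15))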
35671,th,"In constructing an econometric or statistical model, we pick relevant
features or variables from many candidates. A coalitional game is set up to
study the selection problem where the players are the candidates and the payoff
function is a performance measurement in all possible modeling scenarios. Thus,
in theory, an irrelevant feature is equivalent to a dummy player in the game,
which contributes nothing in any modeling situation. The hypothesis test of
zero mean contribution is the rule for deciding whether a feature is irrelevant. In
our mechanism design, the end goal perfectly matches the expected model
performance with the expected sum of individual marginal effects. Within a
class of noninformative likelihood among all modeling opportunities, the
matching equation results in a specific valuation for each feature. After
estimating the valuation and its standard deviation, we drop any candidate
feature if its valuation is not significantly different from zero. In the
simulation studies, our new approach significantly outperforms several popular
methods used in practice, and its accuracy is robust to the choice of the
payoff function.",Feature Selection by a Mechanism Design,2021-10-06 02:53:14,Xingwei Hu,"http://arxiv.org/abs/2110.02419v1, http://arxiv.org/pdf/2110.02419v1",stat.ML
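As an illustrative aside to the row above: the abstract does not give the payoff function or the exact valuation formula, but the general idea of treating features as players and testing whether their average marginal contribution is zero can be sketched with a coalition-sampling estimator on synthetic data; the in-sample R^2 score and the sampling scheme below are assumptions of the example.

  import numpy as np

  rng = np.random.default_rng(1)
  X = rng.normal(size=(200, 5))                      # 5 candidate features; the last three are irrelevant
  y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)

  def score(cols):
      # In-sample R^2 of OLS on the chosen feature subset (the empty set scores 0).
      if not cols:
          return 0.0
      Z = X[:, list(cols)]
      beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
      return 1.0 - (y - Z @ beta).var() / y.var()

  def feature_value(j, samples=500):
      # Average marginal effect of feature j over random coalitions of the other features.
      others = [k for k in range(X.shape[1]) if k != j]
      gains = []
      for _ in range(samples):
          coalition = [k for k in others if rng.random() < 0.5]
          gains.append(score(coalition + [j]) - score(coalition))
      return np.mean(gains), np.std(gains) / np.sqrt(samples)

  for j in range(X.shape[1]):
      mean, se = feature_value(j)
      print(f"feature {j}: estimated value {mean:.4f} (s.e. {se:.4f})")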
35672,th,"A modelling framework, based on the theory of signal processing, for
characterising the dynamics of systems driven by the unravelling of information
is outlined, and is applied to describe the process of decision making. The
model input of this approach is the specification of the flow of information.
This enables the representation of (i) reliable information, (ii) noise, and
(iii) disinformation, in a unified framework. Because the approach is designed
to characterise the dynamics of the behaviour of people, it is possible to
quantify the impact of information control, including those resulting from the
dissemination of disinformation. It is shown that if a decision maker assigns
an exceptionally high weight on one of the alternative realities, then under
the Bayesian logic their perception hardly changes in time even if the evidence
presented indicates that this alternative corresponds to a false reality. Thus
confirmation bias need not be incompatible with Bayesian updating. By observing
the role played by noise in other areas of natural sciences, where noise is
used to excite the system away from false attractors, a new approach to tackle
the dark forces of fake news is proposed.","Noise, fake news, and tenacious Bayesians",2021-10-05 19:11:08,Dorje C. Brody,"http://arxiv.org/abs/2110.03432v3, http://arxiv.org/pdf/2110.03432v3",econ.TH
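As an illustrative aside to the row above: the specific claim that an extreme prior weight on one alternative makes Bayesian beliefs nearly immovable is easy to verify numerically. A minimal sketch with two hypotheses and a stream of moderately informative evidence (the prior and likelihood ratio are made-up numbers):

  prior_true = 1e-6          # the decision maker puts almost all weight on the false reality
  likelihood_ratio = 2.0     # each observation is twice as likely under the true reality

  posterior = prior_true
  for t in range(1, 11):
      odds = posterior / (1 - posterior) * likelihood_ratio   # Bayes' rule in odds form
      posterior = odds / (1 + odds)
      print(f"after {t:2d} observations: P(true reality) = {posterior:.6f}")

  # Even after ten observations all favouring the truth, the posterior is still about 0.001.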
35673,th,"The Shapley value, one of the well-known allocation rules in game theory,
does not take into account information about the structure of the graph, so by
using the Shapley value for each hyperedge, we introduce a new allocation rule
by considering their first-order combination. We prove that some of the
properties that hold for Shapley and Myerson values also hold for our
allocation rule. In addition, we establish the relationship between our allocation
rule and the Forman curvature, which plays an important role in discrete
geometry.",New allocation rule of directed hypergraphs,2021-10-13 08:29:47,Taiki Yamada,"http://arxiv.org/abs/2110.06506v3, http://arxiv.org/pdf/2110.06506v3",cs.GT
35674,th,"In this paper, we study a matching market model on a bipartite network where
agents on each side arrive and depart stochastically by a Poisson process. For
such a dynamic model, we design a mechanism that decides not only which agents
to match, but also when to match them, to minimize the expected number of
unmatched agents. The main contribution of this paper is to achieve theoretical
bounds on the performance of local mechanisms with different timing properties.
We show that an algorithm that waits to thicken the market, called the
$\textit{Patient}$ algorithm, is exponentially better than the
$\textit{Greedy}$ algorithm, i.e., an algorithm that matches agents greedily.
This means that waiting has substantial benefits for maximizing a matching over
a bipartite network. We remark that the Patient algorithm requires the planner
to identify agents who are about to leave the market, and, under the
requirement, the Patient algorithm is shown to be an optimal algorithm. We also
show that, without the requirement, the Greedy algorithm is almost optimal. In
addition, we consider the $\textit{1-sided algorithms}$ where only an agent on
one side can attempt to match. This models a practical matching market such as
a freight exchange market and a labor market where only agents on one side can
make a decision. For this setting, we prove that the Greedy and Patient
algorithms admit the same performance, that is, waiting to thicken the market
is not valuable. This conclusion is in contrast to the case where agents on
both sides can make a decision and the non-bipartite case by [Akbarpour et
al.,$~\textit{Journal of Political Economy}$, 2020].",Dynamic Bipartite Matching Market with Arrivals and Departures,2021-10-21 02:44:04,"Naonori Kakimura, Donghao Zhu","http://arxiv.org/abs/2110.10824v1, http://arxiv.org/pdf/2110.10824v1",cs.DS
35675,th,"We consider a two-player game of war of attrition under complete information.
It is well-known that this class of games admits equilibria in pure, as well as
mixed strategies, and much of the literature has focused on the latter. We show
that if the players' payoffs whilst in ""war"" vary stochastically and their exit
payoffs are heterogeneous, then the game admits Markov Perfect equilibria in
pure strategies only. This is true irrespective of the degree of randomness and
heterogeneity, thus highlighting the fragility of mixed-strategy equilibria to
a natural perturbation of the canonical model. In contrast, when the players'
flow payoffs are deterministic or their exit payoffs are homogeneous, the game
admits equilibria in pure and mixed strategies.",The Absence of Attrition in a War of Attrition under Complete Information,2021-10-22 21:55:39,"George Georgiadis, Youngsoo Kim, H. Dharma Kwon","http://dx.doi.org/10.1016/j.geb.2021.11.004, http://arxiv.org/abs/2110.12013v2, http://arxiv.org/pdf/2110.12013v2",math.OC
35676,th,"Motivated by civic problems such as participatory budgeting and multiwinner
elections, we consider the problem of public good allocation: Given a set of
indivisible projects (or candidates) of different sizes, and voters with
different monotone utility functions over subsets of these candidates, the goal
is to choose a budget-constrained subset of these candidates (or a committee)
that provides fair utility to the voters. The notion of fairness we adopt is
that of core stability from cooperative game theory: No subset of voters should
be able to choose another blocking committee of proportionally smaller size
that provides strictly larger utility to all voters that deviate. The core
provides a strong notion of fairness, subsuming other notions that have been
widely studied in computational social choice.
  It is well-known that an exact core need not exist even when utility
functions of the voters are additive across candidates. We therefore relax the
problem to allow approximation: Voters can only deviate to the blocking
committee if after they choose any extra candidate (called an additament),
their utility still increases by an $\alpha$ factor. If no blocking committee
exists under this definition, we call this an $\alpha$-core.
  Our main result is that an $\alpha$-core, for $\alpha < 67.37$, always exists
when utilities of the voters are arbitrary monotone submodular functions, and
this can be computed in polynomial time. This result improves to $\alpha <
9.27$ for additive utilities, albeit without the polynomial time guarantee. Our
results are a significant improvement over prior work that only shows
logarithmic approximations for the case of additive utilities. We complement
our results with a lower bound of $\alpha > 1.015$ for submodular utilities,
and a lower bound of any function in the number of voters and candidates for
general monotone utilities.",Approximate Core for Committee Selection via Multilinear Extension and Market Clearing,2021-10-24 20:40:20,"Kamesh Munagala, Yiheng Shen, Kangning Wang, Zhiyi Wang","http://arxiv.org/abs/2110.12499v1, http://arxiv.org/pdf/2110.12499v1",cs.GT
35677,th,"In this paper I investigate a Bayesian inverse problem in the specific
setting of a price-setting monopolist facing a randomly growing demand in
multiple possibly interconnected markets. Investigating the Value of
Information of a signal to the monopolist in a fully dynamic discrete model
employing the Kalman-Bucy-Stratonovich filter, we find that it may be
non-monotonic in the variance of the signal. In the classical static settings
of the Value of Information literature this relationship may be convex or
concave, but is always monotonic. The existence of the non-monotonicity depends
critically on the exogenous growth rate of the system.",Expanding Multi-Market Monopoly and Nonconcavity in the Value of Information,2021-11-01 14:20:09,Stefan Behringer,"http://arxiv.org/abs/2111.00839v1, http://arxiv.org/pdf/2111.00839v1",econ.TH
35678,th,"Although expected utility theory has proven a fruitful and elegant theory in
the finite realm, attempts to generalize it to infinite values have resulted in
many paradoxes. In this paper, we argue that the use of John Conway's surreal
numbers shall provide a firm mathematical foundation for transfinite decision
theory. To that end, we prove a surreal representation theorem and show that
our surreal decision theory respects dominance reasoning even in the case of
infinite values. We then bring our theory to bear on one of the more venerable
decision problems in the literature: Pascal's Wager. Analyzing the wager
showcases our theory's virtues and advantages. To that end, we analyze two
objections against the wager: Mixed Strategies and Many Gods. After formulating
the two objections in the framework of surreal utilities and probabilities, our
theory correctly predicts that (1) the pure Pascalian strategy beats all mixed
strategies, and (2) what one should do in a Pascalian decision problem depends
on what one's credence function is like. Our analysis therefore suggests that
although Pascal's Wager is mathematically coherent, it does not deliver what it
purports to, a rationally compelling argument that people should lead a
religious life regardless of how confident they are in theism and its
alternatives.",Surreal Decisions,2021-10-23 21:37:20,"Eddy Keming Chen, Daniel Rubio","http://dx.doi.org/10.1111/phpr.12510, http://arxiv.org/abs/2111.00862v1, http://arxiv.org/pdf/2111.00862v1",cs.AI
35679,th,"We focus on a simple, one-dimensional collective decision problem (often
referred to as the facility location problem) and explore issues of
strategyproofness and proportionality-based fairness. We introduce and analyze
a hierarchy of proportionality-based fairness axioms of varying strength:
Individual Fair Share (IFS), Unanimous Fair Share (UFS), Proportionality (as in
Freeman et al, 2021), and Proportional Fairness (PF). For each axiom, we
characterize the family of mechanisms that satisfy the axiom and
strategyproofness. We show that imposing strategyproofness renders many of the
axioms to be equivalent: the family of mechanisms that satisfy proportionality,
unanimity, and strategyproofness is equivalent to the family of mechanisms that
satisfy UFS and strategyproofness, which, in turn, is equivalent to the family
of mechanisms that satisfy PF and strategyproofness. Furthermore, there is a
unique such mechanism: the Uniform Phantom mechanism, which is studied in
Freeman et al. (2021). We also characterize the outcomes of the Uniform Phantom
mechanism as the unique (pure) equilibrium outcome for any mechanism that
satisfies continuity, strict monotonicity, and UFS. Finally, we analyze the
approximation guarantees, in terms of optimal social welfare and minimum total
cost, obtained by mechanisms that are strategyproof and satisfy each
proportionality-based fairness axiom. We show that the Uniform Phantom
mechanism provides the best approximation of the optimal social welfare (and
also minimum total cost) among all mechanisms that satisfy UFS.",Strategyproof and Proportionally Fair Facility Location,2021-11-02 15:41:32,"Haris Aziz, Alexander Lam, Barton E. Lee, Toby Walsh","http://arxiv.org/abs/2111.01566v3, http://arxiv.org/pdf/2111.01566v3",cs.GT
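As an illustrative aside to the row above: the Uniform Phantom mechanism referenced in the abstract is commonly described as taking the median of the n reported peaks together with n-1 phantom points fixed at k/n on the unit interval; that description is taken here as an assumption rather than quoted from the paper.

  import statistics

  def uniform_phantom(peaks):
      # Facility location on [0, 1]: median of the n reports and the n-1 phantoms at k/n.
      n = len(peaks)
      phantoms = [k / n for k in range(1, n)]
      return statistics.median(sorted(list(peaks) + phantoms))

  # Three agents, two of them near 0: the outcome is pulled to the 1/3 phantom,
  # reflecting the proportionality properties discussed in the abstract.
  print(uniform_phantom([0.0, 0.1, 0.9]))   # -> 0.333...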
35680,th,"Govindan and Klumpp [7] provided a characterization of perfect equilibria
using Lexicographic Probability Systems (LPSs). Their characterization was
essentially finite in that they showed that there exists a finite bound on the
number of levels in the LPS, but they did not compute it explicitly. In this
note, we draw on two recent developments in Real Algebraic Geometry to obtain a
formula for this bound.",A Finite Characterization of Perfect Equilibria,2021-11-02 17:58:06,"Ivonne Callejas, Srihari Govindan, Lucas Pahl","http://arxiv.org/abs/2111.01638v1, http://arxiv.org/pdf/2111.01638v1",econ.TH
35681,th,"The Gibbard-Satterthwaite theorem states that no unanimous and
non-dictatorial voting rule is strategyproof. We revisit voting rules and
consider a weaker notion of strategyproofness called not obvious manipulability
that was proposed by Troyan and Morrill (2020). We identify several classes of
voting rules that satisfy this notion. We also show that several voting rules
including k-approval fail to satisfy this property. We characterize conditions
under which voting rules are obviously manipulable. One of our insights is that
certain rules are obviously manipulable when the number of alternatives is
relatively large compared to the number of voters. In contrast to the
Gibbard-Satterthwaite theorem, many of the rules we examined are not obviously
manipulable. This reflects the relatively easier satisfiability of the notion
and the zero information assumption of not obvious manipulability, as opposed
to the perfect information assumption of strategyproofness. We also present
algorithmic results for computing obvious manipulations and report on
experiments.",Obvious Manipulability of Voting Rules,2021-11-03 05:41:48,"Haris Aziz, Alexander Lam","http://dx.doi.org/10.1007/978-3-030-87756-9_12, http://arxiv.org/abs/2111.01983v3, http://arxiv.org/pdf/2111.01983v3",cs.GT
35682,th,"We study bilateral trade between two strategic agents. The celebrated result
of Myerson and Satterthwaite states that in general, no incentive-compatible,
individually rational and weakly budget balanced mechanism can be efficient.
I.e., no mechanism with these properties can guarantee a trade whenever buyer
value exceeds seller cost. Given this, a natural question is whether there
exists a mechanism with these properties that guarantees a constant fraction of
the first-best gains-from-trade, namely a constant fraction of the
gains-from-trade attainable whenever buyer's value weakly exceeds seller's
cost. In this work, we positively resolve this long-standing open question on
constant-factor approximation, mentioned in several previous works, using a
simple mechanism.",Approximately Efficient Bilateral Trade,2021-11-05 19:49:45,"Yuan Deng, Jieming Mao, Balasubramanian Sivan, Kangning Wang","http://arxiv.org/abs/2111.03611v1, http://arxiv.org/pdf/2111.03611v1",cs.GT
35683,th,"In modeling multivariate time series for either forecast or policy analysis,
it would be beneficial to have figured out the cause-effect relations within
the data. Regression analysis, however, is generally for correlation relation,
and very few researches have focused on variance analysis for causality
discovery. We first set up an equilibrium for the cause-effect relations using
a fictitious vector autoregressive model. In the equilibrium, long-run
relations are identified from noise, and spurious ones are negligibly close to
zero. The solution, called causality distribution, measures the relative
strength causing the movement of all series or specific affected ones. If a
group of exogenous data affects the others but not vice versa, then, in theory,
the causality distribution for other variables is necessarily zero. The
hypothesis test of zero causality is the rule to decide a variable is
endogenous or not. Our new approach has high accuracy in identifying the true
cause-effect relations among the data in the simulation studies. We also apply
the approach to estimating the causal factors' contribution to climate change.",Decoding Causality by Fictitious VAR Modeling,2021-11-15 01:43:02,Xingwei Hu,"http://arxiv.org/abs/2111.07465v2, http://arxiv.org/pdf/2111.07465v2",stat.ML
35684,th,"A natural notion of rationality/consistency for aggregating models is that,
for all (possibly aggregated) models $A$ and $B$, if the output of model $A$ is
$f(A)$ and if the output of model $B$ is $f(B)$, then the output of the model
obtained by aggregating $A$ and $B$ must be a weighted average of $f(A)$ and
$f(B)$. Similarly, a natural notion of rationality for aggregating preferences
of ensembles of experts is that, for all (possibly aggregated) experts $A$ and
$B$, and all possible choices $x$ and $y$, if both $A$ and $B$ prefer $x$ over
$y$, then the expert obtained by aggregating $A$ and $B$ must also prefer $x$
over $y$. Rational aggregation is an important element of uncertainty
quantification, and it lies behind many seemingly different results in economic
theory: spanning social choice, belief formation, and individual decision
making. Three examples of rational aggregation rules are as follows. (1) Give
each individual model (expert) a weight (a score) and use weighted averaging to
aggregate individual or finite ensembles of models (experts). (2) Order/rank
individual models (experts) and let the aggregation of a finite ensemble of
individual models (experts) be the highest-ranked individual model (expert) in
that ensemble. (3) Give each individual model (expert) a weight, introduce a
weak order/ranking over the set of models/experts, aggregate $A$ and $B$ as the
weighted average of the highest-ranked models (experts) in $A$ or $B$. Note
that (1) and (2) are particular cases of (3). In this paper, we show that all
rational aggregation rules are of the form (3). This result unifies aggregation
procedures across different economic environments. Following the main
representation, we show applications and extensions of our representation in
various separate economic topics such as belief formation, choice theory, and
social welfare economics.","Aggregation of Models, Choices, Beliefs, and Preferences",2021-11-23 06:26:42,"Hamed Hamze Bajgiran, Houman Owhadi","http://arxiv.org/abs/2111.11630v1, http://arxiv.org/pdf/2111.11630v1",econ.TH
35685,th,"Securities borrowing and lending are critical to proper functioning of
securities markets. To alleviate securities owners' exposure to borrower
default risk, overcollateralization and indemnification are provided by the
borrower and the lending agent respectively. Haircuts as the level of
overcollateralization and the cost of indemnification are naturally
interrelated: the higher haircut is, the lower cost shall become. This article
presents a method of quantifying their relationship. Borrower dependent
haircuts satisfying the lender's credit risk appetite are computed for US
Treasuries and main equities by applying a repo haircut model to bilateral
securities lending transactions. Indemnification is designed to fulfill a
triple-A risk appetite when the transaction haircut fails to deliver. The cost
of indemnification consists of a risk charge, a capital charge, and a funding
charge, each corresponding to the expected loss, the economic capital, and the
redundant fund needed to arrive at the triple-A haircut.",Securities Lending Haircuts and Indemnification Pricing,2021-11-25 22:15:37,Wujiang Lou,"http://dx.doi.org/10.2139/ssrn.3682930, http://arxiv.org/abs/2111.13228v1, http://arxiv.org/pdf/2111.13228v1",q-fin.MF
35686,th,"The St. Petersburg paradox is the oldest paradox in decision theory and has
played a pivotal role in the introduction of increasing concave utility
functions embodying risk aversion and decreasing marginal utility of gains. All
attempts to resolve it have considered some variants of the original set-up,
but the original paradox has remained unresolved, while the proposed variants
have introduced new complications and problems. Here a rigorous mathematical
resolution of the St. Petersburg paradox is suggested based on a probabilistic
approach to decision theory.",A Resolution of St. Petersburg Paradox,2021-11-24 16:59:32,V. I. Yukalov,"http://arxiv.org/abs/2111.14635v1, http://arxiv.org/pdf/2111.14635v1",math.OC
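For readers new to the paradox, the divergence at its heart is a one-line computation (standard textbook material, not specific to the paper above): with a fair coin paying $2^k$ if the first head occurs on toss $k$,

  $$\mathbb{E}[\text{payoff}] \;=\; \sum_{k=1}^{\infty} \frac{1}{2^{k}}\, 2^{k} \;=\; \sum_{k=1}^{\infty} 1 \;=\; \infty,$$

yet few people would pay more than a modest amount to play.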
35687,th,"We present an index theory of equilibria for extensive form games. This
requires developing an index theory for games where the strategy sets of
players are general polytopes and their payoff functions are multiaffine in the
product of these polytopes. Such polytopes arise from identifying
(topologically) equivalent mixed strategies of a normal form game.",Polytope-form games and Index/Degree Theories for Extensive-form games,2022-01-06 18:28:08,Lucas Pahl,"http://arxiv.org/abs/2201.02098v4, http://arxiv.org/pdf/2201.02098v4",econ.TH
35688,th,"A preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan
(resp. $d$-Euclidean) if both the alternatives and the voters can be placed
into the $d$-dimensional space such that between each pair of alternatives,
every voter prefers the one which has a shorter Manhattan (resp. Euclidean)
distance to the voter. Following Bogomolnaia and Laslier [Journal of
Mathematical Economics, 2007] and Chen and Grottke [Social Choice and Welfare,
2021] who look at $d$-Euclidean preference profiles, we study which preference
profiles are $d$-Manhattan depending on the values $m$ and $n$.
  First, we show that each preference profile with $m$ alternatives and $n$
voters is $d$-Manhattan whenever $d \geq \min(n, m-1)$. Second, for $d =
2$, we show that the smallest non $d$-Manhattan preference profile has either
three voters and six alternatives, or four voters and five alternatives, or
five voters and four alternatives. This is more complex than the case with
$d$-Euclidean preferences (see [Bogomolnaia and Laslier, 2007] and [Bulteau and
Chen, 2020]).",Multidimensional Manhattan Preferences,2022-01-24 16:52:38,"Jiehua Chen, Martin Nöllenburg, Sofia Simola, Anaïs Villedieu, Markus Wallinger","http://arxiv.org/abs/2201.09691v1, http://arxiv.org/pdf/2201.09691v1",cs.MA
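As an illustrative aside to the row above: given candidate placements of voters and alternatives in the plane, the preference profile induced by Manhattan distances takes only a few lines to compute (the coordinates below are made up, and ties are assumed away):

  import numpy as np

  def induced_profile(voters, alternatives):
      # Rank alternatives for each voter by increasing Manhattan (L1) distance.
      V, A = np.asarray(voters), np.asarray(alternatives)
      dist = np.abs(V[:, None, :] - A[None, :, :]).sum(axis=2)
      return [tuple(np.argsort(row)) for row in dist]

  voters = [(0.0, 0.0), (2.0, 0.0), (0.5, 2.0)]
  alternatives = [(0.0, 1.0), (1.5, 0.0), (2.0, 1.0)]
  print(induced_profile(voters, alternatives))   # one ranking (most preferred first) per voter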
35689,th,"We study the dynamics of simple congestion games with two resources where a
continuum of agents behaves according to a version of Experience-Weighted
Attraction (EWA) algorithm. The dynamics is characterized by two parameters:
the (population) intensity of choice $a>0$ capturing the economic rationality
of the total population of agents and a discount factor $\sigma\in [0,1]$
capturing a type of memory loss where past outcomes matter exponentially less
than the recent ones. Finally, our system adds a third parameter $b \in (0,1)$,
which captures the asymmetry of the cost functions of the two resources. It is
the proportion of the agents using the first resource at Nash equilibrium, with
$b=1/2$ capturing a symmetric network.
  Within this simple framework, we show a plethora of bifurcation phenomena
where behavioral dynamics destabilize from global convergence to equilibrium,
to limit cycles or even (formally proven) chaos as a function of the parameters
$a$, $b$ and $\sigma$. Specifically, we show that for any discount factor
$\sigma$ the system will be destabilized for a sufficiently large intensity of
choice $a$. Although for discount factor $\sigma=0$ almost always (i.e., $b
\neq 1/2$) the system will become chaotic, as $\sigma$ increases the chaotic
regime will give way to an attracting periodic orbit of period 2. Therefore,
memory loss can simplify game dynamics and make the system predictable. We
complement our theoretical analysis with simulations and several bifurcation
diagrams that showcase the unyielding complexity of the population dynamics
(e.g., attracting periodic orbits of different lengths) even in the simplest
possible potential games.",Unpredictable dynamics in congestion games: memory loss can prevent chaos,2022-01-26 18:07:03,"Jakub Bielawski, Thiparat Chotibut, Fryderyk Falniowski, Michal Misiurewicz, Georgios Piliouras","http://arxiv.org/abs/2201.10992v2, http://arxiv.org/pdf/2201.10992v2",cs.GT
35690,th,"We propose and solve a negotiation model of multiple players facing many
alternative solutions. The model can be generalized to many relevant
circumstances where stakeholders' interests partially overlap and partially
oppose. We also show that the model can be mapped into the well-known directed
percolation and directed polymers problems. Moreover, many statistical
mechanics tools, such as the Replica method, can be fruitfully employed.
Studying our negotiation model can enlighten the links between social-economic
phenomena and traditional statistical mechanics and help to develop new
perspectives and tools in the fertile interdisciplinary field.",Negotiation problem,2022-01-29 20:06:45,"Izat B. Baybusinov, Enrico Maria Fenoaltea, Yi-Cheng Zhang","http://dx.doi.org/10.1016/j.physa.2021.126806, http://arxiv.org/abs/2201.12619v1, http://arxiv.org/pdf/2201.12619v1",physics.soc-ph
35691,th,"We give new characterizations of core imputations for the following games:
  * The assignment game.
  * Concurrent games, i.e., general graph matching games having non-empty core.
  * The unconstrained bipartite $b$-matching game (edges can be matched
multiple times).
  * The constrained bipartite $b$-matching game (edges can be matched at most
once).
  The classic paper of Shapley and Shubik \cite{Shapley1971assignment} showed
that core imputations of the assignment game are precisely optimal solutions to
the dual of the LP-relaxation of the game. Building on this, Deng et al.
\cite{Deng1999algorithms} gave a general framework which yields analogous
characterizations for several fundamental combinatorial games. Interestingly
enough, their framework does not apply to the last two games stated above. In
turn, we show that some of the core imputations of these games correspond to
optimal dual solutions and others do not. This leads to the tantalizing
question of understanding the origins of the latter. We also present new
characterizations of the profits accrued by agents and teams in core
imputations of the first two games. Our characterization for the first game is
stronger than that for the second; the underlying reason is that the
characterization of vertices of the Birkhoff polytope is stronger than that of
the Balinski polytope.",New Characterizations of Core Imputations of Matching and $b$-Matching Games,2022-02-01 21:08:50,Vijay V. Vazirani,"http://arxiv.org/abs/2202.00619v12, http://arxiv.org/pdf/2202.00619v12",cs.GT
35692,th,"The Non-Fungible Token (NFT) is viewed as one of the important applications
of blockchain technology. Although NFT has a large market scale and multiple
practical standards, several limitations of the existing mechanism in NFT
markets exist. This work proposes a novel securitization and repurchase scheme
for NFT to overcome these limitations. We first provide an Asset-Backed
Securities (ABS) solution to settle the limitations of non-fungibility of NFT.
Our securitization design aims to enhance the liquidity of NFTs and enable
Oracles and Automatic Market Makers (AMMs) for NFTs. Then we propose a novel
repurchase protocol for a participant owning a portion of an NFT to repurchase
other shares to obtain complete ownership. As participants may
strategically bid during the acquisition process, our repurchase process is
formulated as a Stackelberg game to explore the equilibrium prices. We also
provide solutions to handle difficulties in the market such as budget constraints
and lazy bidders.",ABSNFT: Securitization and Repurchase Scheme for Non-Fungible Tokens Based on Game Theoretical Analysis,2022-02-04 18:46:03,"Hongyin Chen, Yukun Cheng, Xiaotie Deng, Wenhan Huang, Linxuan Rong","http://arxiv.org/abs/2202.02199v2, http://arxiv.org/pdf/2202.02199v2",cs.GT
35693,th,"We develop a tractable model for studying strategic interactions between
learning algorithms. We uncover a mechanism responsible for the emergence of
algorithmic collusion. We observe that algorithms periodically coordinate on
actions that are more profitable than static Nash equilibria. This novel
collusive channel relies on an endogenous statistical linkage in the
algorithms' estimates which we call spontaneous coupling. The model's
parameters predict whether the statistical linkage will appear, and what market
structures facilitate algorithmic collusion. We show that spontaneous coupling
can sustain collusion in prices and market shares, complementing experimental
findings in the literature. Finally, we apply our results to design algorithmic
markets.",Artificial Intelligence and Spontaneous Collusion,2022-02-12 03:50:15,"Martino Banchio, Giacomo Mantegazza","http://arxiv.org/abs/2202.05946v5, http://arxiv.org/pdf/2202.05946v5",econ.TH
35694,th,"Motivated by online advertising auctions, we study auction design in repeated
auctions played by simple Artificial Intelligence algorithms (Q-learning). We
find that first-price auctions with no additional feedback lead to
tacit-collusive outcomes (bids lower than values), while second-price auctions
do not. We show that the difference is driven by the incentive in first-price
auctions to outbid opponents by just one bid increment. This facilitates
re-coordination on low bids after a phase of experimentation. We also show that
providing information about the lowest bid to win, as introduced by Google at the
time of the switch to first-price auctions, increases the competitiveness of auctions.",Artificial Intelligence and Auction Design,2022-02-12 03:54:40,"Martino Banchio, Andrzej Skrzypacz","http://arxiv.org/abs/2202.05947v1, http://arxiv.org/pdf/2202.05947v1",econ.TH
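As an illustrative aside to the row above: the flavor of such experiments can be reproduced with two independent stateless Q-learners bidding on a coarse grid in a repeated first-price auction; the bid grid, learning rate, and exploration scheme below are generic assumptions, not the paper's exact parameterization.

  import numpy as np

  rng = np.random.default_rng(0)
  bids = np.linspace(0.0, 1.0, 11)      # discretized bid grid
  value = 1.0                           # both bidders value the item at 1
  alpha, eps, T = 0.05, 0.1, 200_000

  Q = np.zeros((2, len(bids)))          # one stateless Q-table per bidder

  def choose(q):
      # Epsilon-greedy action selection.
      return int(rng.integers(len(bids))) if rng.random() < eps else int(np.argmax(q))

  for _ in range(T):
      a = [choose(Q[0]), choose(Q[1])]
      b = bids[a]
      winner = int(rng.integers(2)) if b[0] == b[1] else int(np.argmax(b))
      rewards = [0.0, 0.0]
      rewards[winner] = value - b[winner]               # first price: the winner pays its own bid
      for i in range(2):
          Q[i, a[i]] += alpha * (rewards[i] - Q[i, a[i]])

  print("bids the learners settle on:", bids[Q.argmax(axis=1)])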
35695,th,"For centuries, it has been widely believed that the influence of a small
coalition of voters is negligible in a large election. Consequently, there is a
large body of literature on characterizing the likelihood for an election to be
influenced when the votes follow certain distributions, especially the
likelihood of being manipulable by a single voter under the i.i.d. uniform
distribution, known as the Impartial Culture (IC).
  In this paper, we extend previous studies in three aspects: (1) we propose a
more general semi-random model, where a distribution adversary chooses a
worst-case distribution and then a contamination adversary modifies up to
$\psi$ portion of the data, (2) we consider many coalitional influence
problems, including coalitional manipulation, margin of victory, and various
vote controls and bribery, and (3) we consider arbitrary and variable coalition
size $B$. Our main theorem provides asymptotically tight bounds on the
semi-random likelihood of the existence of a size-$B$ coalition that can
successfully influence the election under a wide range of voting rules.
Applications of the main theorem and its proof techniques resolve long-standing
open questions about the likelihood of coalitional manipulability under IC, by
showing that the likelihood is $\Theta\left(\min\left\{\frac{B}{\sqrt n},
1\right\}\right)$ for many commonly-studied voting rules.
  The main technical contribution is a characterization of the semi-random
likelihood for a Poisson multinomial variable (PMV) to be unstable, which we
believe to be a general and useful technique with independent interest.",The Impact of a Coalition: Assessing the Likelihood of Voter Influence in Large Elections,2022-02-14 00:27:22,Lirong Xia,"http://arxiv.org/abs/2202.06411v4, http://arxiv.org/pdf/2202.06411v4",econ.TH
35696,th,"In this paper, we analyze the problem of how to adapt the concept of
proportionality to situations where several perfectly divisible resources have
to be allocated among certain set of agents that have exactly one claim which
is used for all resources. In particular, we introduce the constrained
proportional awards rule, which extend the classical proportional rule to these
situations. Moreover, we provide an axiomatic characterization of this rule.",On proportionality in multi-issue problems with crossed claims,2022-02-20 20:57:40,"Rick K. Acosta-Vega, Encarnación Algaba, Joaquín Sánchez-Soriano","http://arxiv.org/abs/2202.09877v1, http://arxiv.org/pdf/2202.09877v1",math.OC
35697,th,"In today's economy, it becomes important for Internet platforms to consider
the sequential information design problem to align its long term interest with
incentives of the gig service providers. This paper proposes a novel model of
sequential information design, namely the Markov persuasion processes (MPPs),
where a sender, with informational advantage, seeks to persuade a stream of
myopic receivers to take actions that maximizes the sender's cumulative
utilities in a finite horizon Markovian environment with varying prior and
utility functions. Planning in MPPs thus faces the unique challenge in finding
a signaling policy that is simultaneously persuasive to the myopic receivers
and inducing the optimal long-term cumulative utilities of the sender.
Nevertheless, in the population level where the model is known, it turns out
that we can efficiently determine the optimal (resp. $\epsilon$-optimal) policy
with finite (resp. infinite) states and outcomes, through a modified
formulation of the Bellman equation.
  Our main technical contribution is to study the MPP under the online
reinforcement learning (RL) setting, where the goal is to learn the optimal
signaling policy by interacting with the underlying MPP, without the
knowledge of the sender's utility functions, prior distributions, and the
Markov transition kernels. We design a provably efficient no-regret learning
algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which
features a novel combination of both optimism and pessimism principles. Our
algorithm enjoys sample efficiency by achieving a sublinear $\sqrt{T}$-regret
upper bound. Furthermore, both our algorithm and theory can be applied to MPPs
with large space of outcomes and states via function approximation, and we
showcase such a success under the linear setting.",Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning,2022-02-22 08:41:43,"Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu","http://arxiv.org/abs/2202.10678v1, http://arxiv.org/pdf/2202.10678v1",cs.AI
35698,th,"Reputation-based cooperation on social networks offers a causal mechanism
between graph properties and social trust. Recent papers on the ""structural
microfoundations"" of society used this insight to show how demographic
processes, such as falling fertility, urbanisation, and migration, can alter
the logic of human societies. This paper demonstrates the underlying mechanism
in a way that is accessible to scientists not specialising in networks.
Additionally, the paper shows that, when the size and degree of the network are
fixed (i.e., all graphs have the same number of agents, who all have the same
number of connections), it is the clustering coefficient that drives
differences in how cooperative social networks are.","Clustering Drives Cooperation on Reputation Networks, All Else Fixed",2022-03-01 14:37:51,Tamas David-Barrett,"http://arxiv.org/abs/2203.00372v1, http://arxiv.org/pdf/2203.00372v1",cs.SI
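As an illustrative aside to the row above: the contrast the abstract turns on, graphs with identical size and degree but very different clustering coefficients, is easy to exhibit with networkx (a generic example, not the paper's simulation):

  import networkx as nx

  n, k = 1000, 10
  lattice = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)   # ring lattice: every node has degree k
  regular = nx.random_regular_graph(k, n, seed=1)          # random graph with the same degrees

  print("ring lattice clustering  :", round(nx.average_clustering(lattice), 3))
  print("random regular clustering:", round(nx.average_clustering(regular), 3))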
35699,th,"We consider descending price auctions for selling $m$ units of a good to unit
demand i.i.d. buyers where there is an exogenous bound of $k$ on the number of
price levels the auction clock can take. The auctioneer's problem is to choose
price levels $p_1 > p_2 > \cdots > p_{k}$ for the auction clock such that
the expected auction revenue is maximized. The price levels are announced prior to
the auction. We reduce this problem to a new variant of prophet inequality,
which we call \emph{batched prophet inequality}, where a decision-maker chooses
$k$ (decreasing) thresholds and then sequentially collects rewards (up to $m$)
that are above the thresholds with ties broken uniformly at random. For the
special case of $m=1$ (i.e., selling a single item), we show that the resulting
descending auction with $k$ price levels achieves $1- 1/e^k$ of the
unrestricted (without the bound of $k$) optimal revenue. That means a
descending auction with just 4 price levels can achieve more than 98\% of the
optimal revenue. We then extend our results for $m>1$ and provide a closed-form
bound on the competitive ratio of our auction as a function of the number of
units $m$ and the number of price levels $k$.",Descending Price Auctions with Bounded Number of Price Levels and Batched Prophet Inequality,2022-03-02 22:59:15,"Saeed Alaei, Ali Makhdoumi, Azarakhsh Malekian, Rad Niazadeh","http://arxiv.org/abs/2203.01384v1, http://arxiv.org/pdf/2203.01384v1",cs.GT
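A quick sanity check on the $1 - 1/e^k$ guarantee quoted in the abstract above: the short Python sketch below (illustrative only; the function name and the range of $k$ values are our own choices, not the authors') evaluates the single-item bound for small $k$ and confirms that four price levels already secure more than 98% of the unrestricted optimal revenue.

```python
import math

# Single-item lower bound on the fraction of the unrestricted optimal revenue
# achieved by a descending auction with k price levels, per the abstract above.
def revenue_fraction(k: int) -> float:
    return 1.0 - 1.0 / math.exp(k)

for k in range(1, 6):
    print(f"k = {k}: at least {revenue_fraction(k):.4f} of the optimal revenue")
# k = 4 yields roughly 0.9817, i.e. more than 98% of the optimal revenue.
```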
35700,th,"A group of players are supposed to follow a prescribed profile of strategies.
If they follow this profile, they will reach a given target. We show that if
the target is not reached because some player deviates, then an outside
observer can identify the deviator. We also construct identification methods in
two nontrivial cases.",Identifying the Deviator,2022-03-08 01:11:07,"Noga Alon, Benjamin Gunby, Xiaoyu He, Eran Shmaya, Eilon Solan","http://arxiv.org/abs/2203.03744v1, http://arxiv.org/pdf/2203.03744v1",math.PR
35701,th,"In the classical version of online bipartite matching, there is a given set
of offline vertices (aka agents) and another set of vertices (aka items) that
arrive online. When each item arrives, its incident edges -- the agents who
like the item -- are revealed and the algorithm must irrevocably match the item
to such agents. We initiate the study of class fairness in this setting, where
agents are partitioned into a set of classes and the matching is required to be
fair with respect to the classes. We adopt popular fairness notions from the
fair division literature such as envy-freeness (up to one item),
proportionality, and maximin share fairness to our setting. Our class versions
of these notions demand that all classes, regardless of their sizes, receive a
fair treatment. We study deterministic and randomized algorithms for matching
indivisible items (leading to integral matchings) and for matching divisible
items (leading to fractional matchings). We design and analyze three novel
algorithms. For matching indivisible items, we propose an
adaptive-priority-based algorithm, MATCH-AND-SHIFT, prove that it achieves
1/2-approximation of both class envy-freeness up to one item and class maximin
share fairness, and show that each guarantee is tight. For matching divisible
items, we design a water-filling-based algorithm, EQUAL-FILLING, that achieves
(1-1/e)-approximation of class envy-freeness and class proportionality; we
prove (1-1/e) to be tight for class proportionality and establish a 3/4 upper
bound on class envy-freeness. Finally, we build upon EQUAL-FILLING to design a
randomized algorithm for matching indivisible items, EQUAL-FILLING-OCS, which
achieves 0.593-approximation of class proportionality. The algorithm and its
analysis crucially leverage the recently introduced technique of online
correlated selection (OCS) [Fahrbach et al., 2020].",Class Fairness in Online Matching,2022-03-08 01:26:11,"Hadi Hosseini, Zhiyi Huang, Ayumi Igarashi, Nisarg Shah","http://arxiv.org/abs/2203.03751v1, http://arxiv.org/pdf/2203.03751v1",cs.GT
35702,th,"Operant keypress tasks, where each action has a consequence, have been
analogized to the construct of ""wanting"" and produce lawful relationships in
humans that quantify preferences for approach and avoidance behavior. It is
unknown if rating tasks without an operant framework, which can be analogized
to ""liking"", show similar lawful relationships. We studied three independent
cohorts of participants (N = 501, 506, and 4,019) collected by two
distinct organizations, using the same 7-point Likert scale to rate negative to
positive preferences for pictures from the International Affective Picture Set.
Picture ratings without an operant framework produced similar value functions,
limit functions, and trade-off functions to those reported in the literature
for operant keypress tasks, all with goodness of fits above 0.75. These value,
limit, and trade-off functions were discrete in their mathematical formulation,
recurrent across all three independent cohorts, and demonstrated scaling
between individual and group curves. In all three experiments, the computation
of loss aversion showed 95% confidence intervals below the value of 2, arguing
against a strong overweighting of losses relative to gains, as has previously
been reported for keypress tasks or games of chance with calibrated
uncertainty. Graphed features from the three cohorts were similar and argue
that preference assessments meet three of four criteria for lawfulness,
providing a simple, short, and low-cost method for the quantitative assessment
of preference without forced choice decisions, games of chance, or operant
keypressing. This approach can easily be implemented on any digital device with
a screen (e.g., cellphones).","Discrete, recurrent, and scalable patterns in human judgement underlie affective picture ratings",2022-03-12 17:40:11,"Emanuel A. Azcona, Byoung-Woo Kim, Nicole L. Vike, Sumra Bari, Shamal Lalvani, Leandros Stefanopoulos, Sean Woodward, Martin Block, Aggelos K. Katsaggelos, Hans C. Breiter","http://arxiv.org/abs/2203.06448v1, http://arxiv.org/pdf/2203.06448v1",cs.HC
35704,th,"In the classic scoring rule setting, a principal incentivizes an agent to
truthfully report their probabilistic belief about some future outcome. This
paper addresses the situation when this private belief, rather than a classical
probability distribution, is instead a quantum mixed state. In the resulting
quantum scoring rule setting, the principal chooses both a scoring function and
a measurement function, and the agent responds with their reported density
matrix. Several characterizations of quantum scoring rules are presented, which
reveal a familiar structure based on convex analysis. Spectral scores, where
the measurement function is given by the spectral decomposition of the reported
density matrix, have particularly elegant structure and connect to quantum
information theory. Turning to property elicitation, eigenvectors of the belief
are elicitable, whereas eigenvalues and entropy have maximal elicitation
complexity. The paper concludes with a discussion of other quantum information
elicitation settings and connections to the literature.",Quantum Information Elicitation,2022-03-14 23:07:47,Rafael Frongillo,"http://arxiv.org/abs/2203.07469v1, http://arxiv.org/pdf/2203.07469v1",cs.GT
35705,th,"Fictitious play has recently emerged as the most accurate scalable algorithm
for approximating Nash equilibrium strategies in multiplayer games. We show
that the degree of equilibrium approximation error of fictitious play can be
significantly reduced by carefully selecting the initial strategies. We present
several new procedures for strategy initialization and compare them to the
classic approach, which initializes all pure strategies to have equal
probability. The best-performing approach, called maximin, solves a nonconvex
quadratic program to compute initial strategies and results in a nearly 75%
reduction in approximation error compared to the classic approach when 5
initializations are used.",Fictitious Play with Maximin Initialization,2022-03-21 10:34:20,Sam Ganzfried,"http://arxiv.org/abs/2203.10774v5, http://arxiv.org/pdf/2203.10774v5",cs.GT
35706,th,"Rotating savings and credit associations (roscas) are informal financial
organizations common in settings where communities have reduced access to
formal financial institutions. In a rosca, a fixed group of participants
regularly contribute sums of money to a pot. This pot is then allocated
periodically using lottery, aftermarket, or auction mechanisms. Roscas are
empirically well-studied in economics. They are, however, challenging to study
theoretically due to their dynamic nature. Typical economic analyses of roscas
stop at coarse ordinal welfare comparisons to other credit allocation
mechanisms, leaving much of roscas' ubiquity unexplained. In this work, we take
an algorithmic perspective on the study of roscas. Building on techniques from
the price of anarchy literature, we present worst-case welfare approximation
guarantees. We further experimentally compare the welfare of outcomes as key
features of the environment vary. These cardinal welfare analyses further
rationalize the prevalence of roscas. We conclude by discussing several other
promising avenues.",An Algorithmic Introduction to Savings Circles,2022-03-23 18:27:30,"Rediet Abebe, Adam Eck, Christian Ikeokwu, Samuel Taggart","http://arxiv.org/abs/2203.12486v1, http://arxiv.org/pdf/2203.12486v1",cs.GT
35707,th,"The behavior of no-regret learning algorithms is well understood in
two-player min-max (i.e., zero-sum) games. In this paper, we investigate the
behavior of no-regret learning in min-max games with dependent strategy sets,
where the strategy of the first player constrains the behavior of the second.
Such games are best understood as sequential, i.e., min-max Stackelberg, games.
We consider two settings, one in which only the first player chooses their
actions using a no-regret algorithm while the second player best responds, and
one in which both players use no-regret algorithms. For the former case, we
show that no-regret dynamics converge to a Stackelberg equilibrium. For the
latter case, we introduce a new type of regret, which we call Lagrangian
regret, and show that if both players minimize their Lagrangian regrets, then
play converges to a Stackelberg equilibrium. We then observe that online mirror
descent (OMD) dynamics in these two settings correspond respectively to a known
nested (i.e., sequential) gradient descent-ascent (GDA) algorithm and a new
simultaneous GDA-like algorithm, thereby establishing convergence of these
algorithms to Stackelberg equilibrium. Finally, we analyze the robustness of
OMD dynamics to perturbations by investigating online min-max Stackelberg
games. We prove that OMD dynamics are robust for a large class of online
min-max games with independent strategy sets. In the dependent case, we
demonstrate the robustness of OMD dynamics experimentally by simulating them in
online Fisher markets, a canonical example of a min-max Stackelberg game with
dependent strategy sets.",Robust No-Regret Learning in Min-Max Stackelberg Games,2022-03-26 21:12:40,"Denizalp Goktas, Jiayi Zhao, Amy Greenwald","http://arxiv.org/abs/2203.14126v2, http://arxiv.org/pdf/2203.14126v2",cs.GT
35708,th,"Under what conditions do the behaviors of players, who play a game
repeatedly, converge to a Nash equilibrium? If one assumes that the players'
behavior is a discrete-time or continuous-time rule whereby the current mixed
strategy profile is mapped to the next, this becomes a problem in the theory of
dynamical systems. We apply this theory, and in particular the concepts of
chain recurrence, attractors, and Conley index, to prove a general
impossibility result: there exist games for which any dynamics is bound to have
starting points that do not end up at a Nash equilibrium. We also prove a
stronger result for $\epsilon$-approximate Nash equilibria: there are games
such that no game dynamics can converge (in an appropriate sense) to
$\epsilon$-Nash equilibria, and in fact the set of such games has positive
measure. Further numerical results demonstrate that this holds for any
$\epsilon$ between zero and $0.09$. Our results establish that, although the
notions of Nash equilibria (and its computation-inspired approximations) are
universally applicable in all games, they are also fundamentally incomplete as
predictors of long term behavior, regardless of the choice of dynamics.","Nash, Conley, and Computation: Impossibility and Incompleteness in Game Dynamics",2022-03-26 21:27:40,"Jason Milionis, Christos Papadimitriou, Georgios Piliouras, Kelly Spendlove","http://arxiv.org/abs/2203.14129v1, http://arxiv.org/pdf/2203.14129v1",cs.GT
35709,th,"The well-known notion of dimension for partial orders by Dushnik and Miller
allows one to quantify the degree of incomparability and, thus, is regarded as a
measure of complexity for partial orders. However, despite its usefulness, its
definition is somewhat disconnected from the geometrical idea of dimension,
where, essentially, the number of dimensions indicates how many real lines are
required to represent the underlying partially ordered set.
  Here, we introduce a variation of the Dushnik-Miller notion of dimension that
is closer to geometry, the Debreu dimension, and show the following main
results: (i) how to construct its building blocks under some countability
restrictions, (ii) its relation to other notions of dimension in the
literature, and (iii), as an application of the above, we improve on the
classification of preordered spaces through real-valued monotones.",On a geometrical notion of dimension for partially ordered sets,2022-03-30 16:05:10,"Pedro Hack, Daniel A. Braun, Sebastian Gottwald","http://arxiv.org/abs/2203.16272v3, http://arxiv.org/pdf/2203.16272v3",math.CO
35710,th,"We introduce the notion of performative power, which measures the ability of
a firm operating an algorithmic system, such as a digital content
recommendation platform, to cause change in a population of participants. We
relate performative power to the economic study of competition in digital
economies. Traditional economic concepts struggle with identifying
anti-competitive patterns in digital platforms not least due to the complexity
of market definition. In contrast, performative power is a causal notion that
is identifiable with minimal knowledge of the market, its internals,
participants, products, or prices.
  Low performative power implies that a firm can do no better than to optimize
its objective on current data. In contrast, firms of high performative power
stand to benefit from steering the population towards more profitable behavior.
We confirm in a simple theoretical model that monopolies maximize performative
power. A firm's ability to personalize increases performative power, while
competition and outside options decrease performative power. On the empirical
side, we propose an observational causal design to identify performative power
from discontinuities in how digital platforms display content. This allows us to
repurpose causal effects from various studies about digital platforms as lower
bounds on performative power. Finally, we speculate about the role that
performative power might play in competition policy and antitrust enforcement
in digital marketplaces.",Performative Power,2022-03-31 20:49:50,"Moritz Hardt, Meena Jagadeesan, Celestine Mendler-Dünner","http://arxiv.org/abs/2203.17232v2, http://arxiv.org/pdf/2203.17232v2",cs.LG
35711,th,"We propose a consumption-investment decision model where past consumption
peak $h$ plays a crucial role. There are two important consumption levels: the
lowest constrained level and a reference level, at which the risk aversion in
terms of consumption rate is changed. We solve this stochastic control problem
and derive the value function, optimal consumption plan, and optimal investment
strategy in semi-explicit forms. We find five important thresholds of wealth,
all as functions of $h$, and most of them are nonlinear functions. As can be
seen from numerical results and theoretical analysis, this intuitive and simple
model has significant economic implications, and there are at least three
important predictions: the marginal propensity to consume out of wealth is
generally decreasing but can be increasing for intermediate wealth levels, and
it jumps inversely proportional to the risk aversion at the reference point;
the implied relative risk aversion is roughly a smile in wealth; the welfare of
the poor is more vulnerable to wealth shocks than the wealthy. Moreover,
locally changing the risk aversion influences the optimal strategies globally,
revealing some risk allocation behaviors.",Consumption-investment decisions with endogenous reference point and drawdown constraint,2022-04-01 18:45:00,"Zongxia Liang, Xiaodong Luo, Fengyi Yuan","http://arxiv.org/abs/2204.00530v2, http://arxiv.org/pdf/2204.00530v2",q-fin.PM
35712,th,"This paper analyses the stability of cycles within a heteroclinic network
lying in a three-dimensional manifold formed by six cycles, for a one-parameter
model developed in the context of game theory. We show the asymptotic stability
of the network for a range of parameter values compatible with the existence of
an interior equilibrium and we describe an asymptotic technique to decide which
cycle (within the network) is visible in numerics. The technique consists of
reducing the relevant dynamics to a suitable one-dimensional map, the so called
\emph{projective map}. Stability of the fixed points of the projective map
determines the stability of the associated cycles. The description of this new
asymptotic approach is applicable to more general types of networks and is
potentially useful in computational dynamics.",Stability of heteroclinic cycles: a new approach,2022-04-02 15:18:36,"Telmo Peixe, Alexandre A. Rodrigues","http://arxiv.org/abs/2204.00848v1, http://arxiv.org/pdf/2204.00848v1",math.DS
35713,th,"In social choice theory, Sen's value restriction condition is a sufficiency
condition restricted to individuals' ordinal preferences so as to obtain a
transitive social preference under the majority decision rule. In this article,
Sen's transitivity condition is described by use of inequality and equation.
First, for a triple of alternatives, an individual's preference is represented
by a preference map, whose entries are sets containing the ranking position or
positions derived from the individual's preference over that triple of those
alternatives. Second, by using the union operation of sets and the cardinality
concept, Sen's transitivity condition is described by inequalities. Finally, by
using the membership function of sets, Sen's transitivity condition is further
described by equations.",Describing Sen's Transitivity Condition in Inequalities and Equations,2022-04-08 05:00:26,Fujun Hou,"http://arxiv.org/abs/2204.05105v1, http://arxiv.org/pdf/2204.05105v1",econ.TH
35714,th,"Network effects are the added value derived solely from the popularity of a
product in an economic market. Using agent-based models inspired by statistical
physics, we propose a minimal theory of a competitive market for (nearly)
indistinguishable goods with demand-side network effects, sold by statistically
identical sellers. With weak network effects, the model reproduces conventional
microeconomics: there is a statistical steady state of (nearly) perfect
competition. Increasing network effects, we find a phase transition to a robust
non-equilibrium phase driven by the spontaneous formation and collapse of fads
in the market. When sellers update prices sufficiently quickly, an emergent
monopolist can capture the market and undercut competition, leading to a
symmetry- and ergodicity-breaking transition. The non-equilibrium phase
simultaneously exhibits three empirically established phenomena not contained
in the standard theory of competitive markets: spontaneous price fluctuations,
persistent seller profits, and broad distributions of firm market shares.",Non-equilibrium phase transitions in competitive markets caused by network effects,2022-04-11 21:00:00,Andrew Lucas,"http://dx.doi.org/10.1073/pnas.2206702119, http://arxiv.org/abs/2204.05314v2, http://arxiv.org/pdf/2204.05314v2",cond-mat.stat-mech
35715,th,"Interacting agents receive public information at no cost and flexibly acquire
private information at a cost proportional to entropy reduction. When a
policymaker provides more public information, agents acquire less private
information, thus lowering information costs. Does more public information
raise or reduce uncertainty faced by agents? Is it beneficial or detrimental to
welfare? To address these questions, we examine the impacts of public
information on flexible information acquisition in a linear-quadratic-Gaussian
game with arbitrary quadratic material welfare. More public information raises
uncertainty if and only if the game exhibits strategic complementarity, which
can be harmful to welfare. However, when agents acquire a large amount of
information, more provision of public information increases welfare through a
substantial reduction in the cost of information. We give a necessary and
sufficient condition for welfare to increase with public information and
identify optimal public information disclosure, which is either full or partial
disclosure depending upon the welfare function and the slope of the best
response.",Impacts of Public Information on Flexible Information Acquisition,2022-04-20 09:29:37,Takashi Ui,"http://arxiv.org/abs/2204.09250v2, http://arxiv.org/pdf/2204.09250v2",econ.TH
35716,th,"We study the two-agent single-item bilateral trade. Ideally, the trade should
happen whenever the buyer's value for the item exceeds the seller's cost.
However, the classical result of Myerson and Satterthwaite showed that no
mechanism can achieve this without violating one of the Bayesian incentive
compatibility, individual rationality and weakly balanced budget conditions.
This motivates the study of approximating the
trade-whenever-socially-beneficial mechanism, in terms of the expected
gains-from-trade. Recently, Deng, Mao, Sivan, and Wang showed that the
random-offerer mechanism achieves at least a 1/8.23 approximation. We improve
this lower bound to 1/3.15 in this paper. We also determine the exact
worst-case approximation ratio of the seller-pricing mechanism assuming the
distribution of the buyer's value satisfies the monotone hazard rate property.",Improved Approximation to First-Best Gains-from-Trade,2022-04-30 06:19:41,Yumou Fei,"http://arxiv.org/abs/2205.00140v1, http://arxiv.org/pdf/2205.00140v1",cs.GT
35717,th,"We all have preferences when multiple choices are available. If we insist on
satisfying our preferences only, we may suffer a loss due to conflicts with
other people's identical selections. Such a case applies when the choice cannot
be divided into multiple pieces due to the intrinsic nature of the resources.
Former studies, such as the top trading cycle, examined how to conduct fair
joint decision-making while avoiding decision conflicts from the perspective of
game theory when multiple players have their own deterministic preference
profiles. However, in reality, probabilistic preferences can naturally appear
in relation to the stochastic decision-making of humans. Here, we theoretically
derive conflict-free joint decision-making that can satisfy the probabilistic
preferences of all individual players. More specifically, we mathematically
prove the conditions wherein the deviation of the resultant chance of obtaining
each choice from the individual preference profile, which we call the loss,
becomes zero, meaning that all players' satisfaction is perfectly appreciated
while avoiding decision conflicts. Furthermore, even in situations where
zero-loss conflict-free joint decision-making is unachievable, we show how to
derive joint decision-making that accomplishes the theoretical minimum loss
while ensuring conflict-free choices. Numerical demonstrations are also shown
with several benchmarks.",Optimal preference satisfaction for conflict-free joint decisions,2022-05-02 13:31:32,"Hiroaki Shinkawa, Nicolas Chauvet, Guillaume Bachelier, André Röhm, Ryoichi Horisaki, Makoto Naruse","http://dx.doi.org/10.1155/2023/2794839, http://arxiv.org/abs/2205.00799v1, http://arxiv.org/pdf/2205.00799v1",econ.TH
35718,th,"In a Fisher market, agents (users) spend a budget of (artificial) currency to
buy goods that maximize their utilities while a central planner sets prices on
capacity-constrained goods such that the market clears. However, the efficacy
of pricing schemes in achieving an equilibrium outcome in Fisher markets
typically relies on complete knowledge of users' budgets and utilities and
requires that transactions happen in a static market wherein all users are
present simultaneously.
  As a result, we study an online variant of Fisher markets, wherein
budget-constrained users with privately known utility and budget parameters,
drawn i.i.d. from a distribution $\mathcal{D}$, enter the market sequentially.
In this setting, we develop an algorithm that adjusts prices solely based on
observations of user consumption, i.e., revealed preference feedback, and
achieves a regret and capacity violation of $O(\sqrt{n})$, where $n$ is the
number of users and the good capacities scale as $O(n)$. Here, our regret
measure is the optimality gap in the objective of the Eisenberg-Gale program
between an online algorithm and an offline oracle with complete information on
users' budgets and utilities. To establish the efficacy of our approach, we
show that any uniform (static) pricing algorithm, including one that sets
expected equilibrium prices with complete knowledge of the distribution
$\mathcal{D}$, cannot achieve both a regret and constraint violation of less
than $\Omega(\sqrt{n})$. While our revealed preference algorithm requires no
knowledge of the distribution $\mathcal{D}$, we show that if $\mathcal{D}$ is
known, then an adaptive variant of expected equilibrium pricing achieves
$O(\log(n))$ regret and constant capacity violation for discrete distributions.
Finally, we present numerical experiments to demonstrate the performance of our
revealed preference algorithm relative to several benchmarks.",Stochastic Online Fisher Markets: Static Pricing Limits and Adaptive Enhancements,2022-04-27 08:03:45,"Devansh Jalota, Yinyu Ye","http://arxiv.org/abs/2205.00825v3, http://arxiv.org/pdf/2205.00825v3",cs.GT
35719,th,"In many areas of industry and society, e.g., energy, healthcare, logistics,
agents collect vast amounts of data that they deem proprietary. These data
owners extract predictive information of varying quality and relevance from
data depending on quantity, inherent information content, and their own
technical expertise. Aggregating these data and heterogeneous predictive
skills, which are distributed in terms of ownership, can result in a higher
collective value for a prediction task. In this paper, we envision a platform
for improving predictions via implicit pooling of private information in return
for possible remuneration. Specifically, we design a wagering-based forecast
elicitation market platform, where a buyer intending to improve their forecasts
posts a prediction task, and sellers respond to it with their forecast reports
and wagers. This market delivers an aggregated forecast to the buyer
(pre-event) and allocates a payoff to the sellers (post-event) for their
contribution. We propose a payoff mechanism and prove that it satisfies several
desirable economic properties, including those specific to electronic
platforms. Furthermore, we discuss the properties of the forecast aggregation
operator and scoring rules to emphasize their effect on the sellers' payoff.
Finally, we provide numerical examples to illustrate the structure and
properties of the proposed market platform.",A Market for Trading Forecasts: A Wagering Mechanism,2022-05-05 17:19:08,"Aitazaz Ali Raja, Pierre Pinson, Jalal Kazempour, Sergio Grammatico","http://arxiv.org/abs/2205.02668v2, http://arxiv.org/pdf/2205.02668v2",econ.TH
35720,th,"Agents' learning from feedback shapes economic outcomes, and many economic
decision-makers today employ learning algorithms to make consequential choices.
This note shows that a widely used learning algorithm, $\varepsilon$-Greedy,
exhibits emergent risk aversion: it prefers actions with lower variance. When
presented with actions of the same expectation, under a wide range of
conditions, $\varepsilon$-Greedy chooses the lower-variance action with
probability approaching one. This emergent preference can have wide-ranging
consequences, ranging from concerns about fairness to homogenization, and holds
transiently even when the riskier action has a strictly higher expected payoff.
We discuss two methods to correct this bias. The first method requires the
algorithm to reweight data as a function of how likely the actions were to be
chosen. The second requires the algorithm to have optimistic estimates of
actions for which it has not collected much data. We show that risk-neutrality
is restored with these corrections.",Risk Preferences of Learning Algorithms,2022-05-10 04:30:24,"Andreas Haupt, Aroon Narayanan","http://arxiv.org/abs/2205.04619v3, http://arxiv.org/pdf/2205.04619v3",cs.LG
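A minimal simulation sketch of the emergent risk aversion described above, under our own illustrative assumptions rather than the authors' exact setup: two arms share the same mean reward of 0.5, one deterministic and one Gaussian, and an $\varepsilon$-Greedy learner with fixed $\varepsilon = 0.1$ tracks empirical means. Unlucky draws depress the noisy arm's estimate, greedy play then stops correcting it, and the lower-variance arm typically receives well over half of the pulls.

```python
import random

def eps_greedy_run(T=5000, eps=0.1, sigma=1.0, seed=0):
    """Two arms with equal mean 0.5: arm 0 is deterministic, arm 1 is Gaussian."""
    rng = random.Random(seed)
    draw = lambda a: 0.5 if a == 0 else rng.gauss(0.5, sigma)
    est, pulls = [draw(0), draw(1)], [1, 1]        # pull each arm once to initialise
    for _ in range(T):
        if rng.random() < eps:
            a = rng.randrange(2)                   # explore uniformly at random
        else:
            a = max((0, 1), key=lambda i: est[i])  # exploit the current estimates
        r = draw(a)
        pulls[a] += 1
        est[a] += (r - est[a]) / pulls[a]          # incremental empirical mean
    return pulls[0] / (T + 2)                      # share of pulls on the safe arm

shares = [eps_greedy_run(seed=s) for s in range(200)]
print("average share of pulls on the lower-variance arm:",
      round(sum(shares) / len(shares), 3))
```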
35721,th,"I study a game of strategic exploration with private payoffs and public
actions in a Bayesian bandit setting. In particular, I look at cascade
equilibria, in which agents switch over time from the risky action to the
riskless action only when they become sufficiently pessimistic. I show that
these equilibria exist under some conditions and establish their salient
properties. Individual exploration in these equilibria can be more or less than
the single-agent level depending on whether the agents start out with a common
prior or not, but the most optimistic agent always underexplores. I also show
that allowing the agents to write enforceable ex-ante contracts will lead to
the most ex-ante optimistic agent to buy all payoff streams, providing an
explanation to the buying out of smaller start-ups by more established firms.",Social learning via actions in bandit environments,2022-05-12 17:15:17,Aroon Narayanan,"http://arxiv.org/abs/2205.06107v1, http://arxiv.org/pdf/2205.06107v1",econ.TH
35722,th,"Bayesian models of group learning are studied in Economics since the 1970s.
and more recently in computational linguistics. The models from Economics
postulate that agents maximize utility in their communication and actions. The
Economics models do not explain the ""probability matching"" phenomena that are
observed in many experimental studies. To address these observations, Bayesian
models that do not formally fit into the economic utility maximization
framework were introduced. In these models individuals sample from their
posteriors in communication. In this work we study the asymptotic behavior of
such models on connected networks with repeated communication. Perhaps
surprisingly, despite the fact that individual agents are not utility
maximizers in the classical sense, we establish that the individuals ultimately
agree and furthermore show that the limiting posterior is Bayes optimal.
  We explore the interpretation of our results in terms of Large Language
Models (LLMs). In the positive direction our results can be interpreted as
stating that interaction between different LLMs can lead to optimal learning.
However, we provide an example showing how misspecification may lead LLM agents
to be overconfident in their estimates.",Agreement and Statistical Efficiency in Bayesian Perception Models,2022-05-23 21:21:07,"Yash Deshpande, Elchanan Mossel, Youngtak Sohn","http://arxiv.org/abs/2205.11561v3, http://arxiv.org/pdf/2205.11561v3",math.ST
35723,th,"Demand response involves system operators using incentives to modulate
electricity consumption during peak hours or when faced with an incidental
supply shortage. However, system operators typically have imperfect information
about their customers' baselines, that is, their consumption had the incentive
been absent. The standard approach to estimate the reduction in a customer's
electricity consumption then is to estimate their counterfactual baseline.
However, this approach is not robust to estimation errors or strategic
exploitation by the customers and can potentially lead to overpayments to
customers who do not reduce their consumption and underpayments to those who
do. Moreover, optimal power consumption reductions of the customers depend on
the costs that they incur for curtailing consumption, which in general are
private knowledge of the customers, and which they could strategically
misreport in an effort to improve their own utilities even if it deteriorates
the overall system cost. The two-stage mechanism proposed in this paper
circumvents the aforementioned issues. In the day-ahead market, the
participating loads are required to submit only a probabilistic description of
their next-day consumption and costs to the system operator for day-ahead
planning. It is only in real-time, if and when called upon for demand response,
that the loads are required to report their baselines and costs. They receive
credits for reductions below their reported baselines. The mechanism for
calculating the credits guarantees incentive compatibility of truthful
reporting of the probability distribution in the day-ahead market and truthful
reporting of the baseline and cost in real-time. The mechanism can be viewed as
an extension of the celebrated Vickrey-Clarke-Groves mechanism augmented with a
carefully crafted second-stage penalty for deviations from the day-ahead bids.",A Two-Stage Mechanism for Demand Response Markets,2022-05-24 20:44:47,"Bharadwaj Satchidanandan, Mardavij Roozbehani, Munther A. Dahleh","http://arxiv.org/abs/2205.12236v2, http://arxiv.org/pdf/2205.12236v2",eess.SY
35724,th,"Stereotypes are generalized beliefs about groups of people, which are used to
make decisions and judgments about them. Although such heuristics can be useful
when decisions must be made quickly, or when information is lacking, they can
also serve as the basis for prejudice and discrimination. In this paper we
study the evolution of stereotypes through group reciprocity. We characterize
the warmth of a stereotype as the willingness to cooperate with an individual
based solely on the identity of the group they belong to. We show that when
stereotypes are coarse, such group reciprocity is less likely to evolve, and
stereotypes tend to be negative. We also show that, even when stereotypes are
broadly positive, individuals are often overly pessimistic about the
willingness of those they stereotype to cooperate. We then show that the
tendency for stereotyping itself to evolve is driven by the costs of cognition,
so that more people are stereotyped with greater coarseness as costs increase.
Finally we show that extrinsic ""shocks"", in which the benefits of cooperation
are suddenly reduced, can cause stereotype warmth and judgement bias to turn
sharply negative, consistent with the view that economic and other crises are
drivers of out-group animosity.",Group reciprocity and the evolution of stereotyping,2022-05-25 13:50:25,"Alexander J. Stewart, Nichola Raihani","http://arxiv.org/abs/2205.12652v1, http://arxiv.org/pdf/2205.12652v1",physics.soc-ph
35725,th,"Proportionality is an attractive fairness concept that has been applied to a
range of problems including the facility location problem, a classic problem in
social choice. In our work, we propose a concept called Strong Proportionality,
which ensures that when there are two groups of agents at different locations,
both groups incur the same total cost. We show that although Strong
Proportionality is a well-motivated and basic axiom, there is no deterministic
strategyproof mechanism satisfying the property. We then identify a randomized
mechanism called Random Rank (which uniformly selects a number $k$ between $1$
and $n$ and locates the facility at the $k$'th highest agent location), which
satisfies Strong Proportionality in expectation. Our main theorem characterizes
Random Rank as the unique mechanism that achieves universal truthfulness,
universal anonymity, and Strong Proportionality in expectation among all
randomized mechanisms. Finally, we show via the AverageOrRandomRank mechanism
that even stronger ex-post fairness guarantees can be achieved by weakening
universal truthfulness to strategyproofness in expectation.",Random Rank: The One and Only Strategyproof and Proportionally Fair Randomized Facility Location Mechanism,2022-05-30 03:51:57,"Haris Aziz, Alexander Lam, Mashbat Suzuki, Toby Walsh","http://arxiv.org/abs/2205.14798v2, http://arxiv.org/pdf/2205.14798v2",cs.GT
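The Random Rank mechanism described in the abstract above is simple enough to state in a few lines. The sketch below is our own illustrative rendering (the function name and the descending-sort convention are ours): draw $k$ uniformly from $\{1, \dots, n\}$ and place the facility at the $k$'th highest reported location.

```python
import random

def random_rank(locations, rng=random):
    """Locate the facility at the k'th highest agent location, k ~ Uniform{1,...,n}."""
    n = len(locations)
    k = rng.randint(1, n)                        # k is uniform over 1..n
    return sorted(locations, reverse=True)[k - 1]

# Example: two groups of agents, one at location 0 and one at location 10.
print(random_rank([0, 0, 10, 10, 10]))
```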
35758,th,"In order to identify expertise, forecasters should not be tested by their
calibration score, which can always be made arbitrarily small, but rather by
their Brier score. The Brier score is the sum of the calibration score and the
refinement score; the latter measures how good the sorting into bins with the
same forecast is, and thus attests to ""expertise."" This raises the question of
whether one can gain calibration without losing expertise, which we refer to as
""calibeating."" We provide an easy way to calibeat any forecast, by a
deterministic online procedure. We moreover show that calibeating can be
achieved by a stochastic procedure that is itself calibrated, and then extend
the results to simultaneously calibeating multiple procedures, and to
deterministic procedures that are continuously calibrated.","""Calibeating"": Beating Forecasters at Their Own Game",2022-09-11 18:14:17,"Dean P. Foster, Sergiu Hart","http://arxiv.org/abs/2209.04892v2, http://arxiv.org/pdf/2209.04892v2",econ.TH
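The decomposition invoked above, Brier score = calibration score + refinement score, can be verified numerically when forecasts take finitely many values by grouping predictions into bins with identical forecasts. The sketch below is an illustrative check of that standard identity for binary outcomes; the function name and the toy data are our own.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Return (brier, calibration, refinement) for binary outcomes,
    binning observations by their (discrete-valued) forecast."""
    n = len(forecasts)
    brier = sum((f - y) ** 2 for f, y in zip(forecasts, outcomes)) / n
    bins = defaultdict(list)
    for f, y in zip(forecasts, outcomes):
        bins[f].append(y)
    calibration = sum(len(ys) / n * (f - sum(ys) / len(ys)) ** 2
                      for f, ys in bins.items())
    refinement = sum(len(ys) / n * (sum(ys) / len(ys)) * (1 - sum(ys) / len(ys))
                     for ys in bins.values())
    return brier, calibration, refinement

b, c, r = brier_decomposition([0.8, 0.8, 0.3, 0.3, 0.3], [1, 0, 0, 1, 0])
print(b, c, r, abs(b - (c + r)) < 1e-12)   # the identity holds exactly
```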
35726,th,"In social choice theory, anonymity (all agents being treated equally) and
neutrality (all alternatives being treated equally) are widely regarded as
``minimal demands'' and ``uncontroversial'' axioms of equity and fairness.
However, the ANR impossibility -- there is no voting rule that satisfies
anonymity, neutrality, and resolvability (always choosing one winner) -- holds
even in the simple setting of two alternatives and two agents. How to design
voting rules that optimally satisfy anonymity, neutrality, and resolvability
remains an open question.
  We address the optimal design question for a wide range of preferences and
decisions that include ranked lists and committees. Our conceptual contribution
is a novel and strong notion of most equitable refinements that optimally
preserves anonymity and neutrality for any irresolute rule that satisfies the
two axioms. Our technical contributions are twofold. First, we characterize the
conditions for the ANR impossibility to hold under general settings, especially
when the number of agents is large. Second, we propose the
most-favorable-permutation (MFP) tie-breaking to compute a most equitable
refinement and design a polynomial-time algorithm to compute MFP when agents'
preferences are full rankings.",Most Equitable Voting Rules,2022-05-30 06:56:54,Lirong Xia,"http://arxiv.org/abs/2205.14838v3, http://arxiv.org/pdf/2205.14838v3",cs.GT
35727,th,"We study the problem of online learning in competitive settings in the
context of two-sided matching markets. In particular, one side of the market,
the agents, must learn about their preferences over the other side, the firms,
through repeated interaction while competing with other agents for successful
matches. We propose a class of decentralized, communication- and
coordination-free algorithms that agents can use to reach to their stable match
in structured matching markets. In contrast to prior works, the proposed
algorithms make decisions based solely on an agent's own history of play and
requires no foreknowledge of the firms' preferences. Our algorithms are
constructed by splitting up the statistical problem of learning one's
preferences, from noisy observations, from the problem of competing for firms.
We show that under realistic structural assumptions on the underlying
preferences of the agents and firms, the proposed algorithms incur a regret
which grows at most logarithmically in the time horizon. Our results show that,
in the case of matching markets, competition need not drastically affect the
performance of decentralized, communication and coordination free online
learning algorithms.","Decentralized, Communication- and Coordination-free Learning in Structured Matching Markets",2022-06-06 07:08:04,"Chinmay Maheshwari, Eric Mazumdar, Shankar Sastry","http://arxiv.org/abs/2206.02344v1, http://arxiv.org/pdf/2206.02344v1",cs.AI
35728,th,"Aggregating signals from a collection of noisy sources is a fundamental
problem in many domains including crowd-sourcing, multi-agent planning, sensor
networks, signal processing, voting, ensemble learning, and federated learning.
The core question is how to aggregate signals from multiple sources (e.g.
experts) in order to reveal an underlying ground truth. While a full answer
depends on the type of signal, correlation of signals, and desired output, a
problem common to all of these applications is that of differentiating sources
based on their quality and weighting them accordingly. It is often assumed that
this differentiation and aggregation is done by a single, accurate central
mechanism or agent (e.g. judge). We complicate this model in two ways. First,
we investigate the setting with both a single judge, and one with multiple
judges. Second, given this multi-agent interaction of judges, we investigate
various constraints on the judges' reporting space. We build on known results
for the optimal weighting of experts and prove that an ensemble of sub-optimal
mechanisms can perform optimally under certain conditions. We then show
empirically that the ensemble approximates the performance of the optimal
mechanism under a broader range of conditions.",Towards Group Learning: Distributed Weighting of Experts,2022-06-03 03:29:31,"Ben Abramowitz, Nicholas Mattei","http://arxiv.org/abs/2206.02566v1, http://arxiv.org/pdf/2206.02566v1",cs.LG
35729,th,"Democratization of AI involves training and deploying machine learning models
across heterogeneous and potentially massive environments. Diversity of data
opens up a number of possibilities to advance AI systems, but also introduces
pressing concerns such as privacy, security, and equity that require special
attention. This work shows that it is theoretically impossible to design a
rational learning algorithm that has the ability to successfully learn across
heterogeneous environments, which we decoratively call collective intelligence
(CI). By representing learning algorithms as choice correspondences over a
hypothesis space, we are able to axiomatize them with essential properties.
Unfortunately, the only feasible algorithm compatible with all of the axioms is
the standard empirical risk minimization (ERM) which learns arbitrarily from a
single environment. Our impossibility result reveals informational
incomparability between environments as one of the foremost obstacles for
researchers who design novel algorithms that learn from multiple environments,
which sheds light on prerequisites for success in critical areas of machine
learning such as out-of-distribution generalization, federated learning,
algorithmic fairness, and multi-modal learning.",Impossibility of Collective Intelligence,2022-06-05 10:58:39,Krikamol Muandet,"http://arxiv.org/abs/2206.02786v1, http://arxiv.org/pdf/2206.02786v1",cs.LG
35730,th,"In physics, the wavefunctions of bosonic particles collapse when the system
undergoes a Bose-Einstein condensation. In game theory, the strategy of an
agent describes the probability to engage in a certain course of action.
Strategies are expected to differ in competitive situations, namely when there
is a penalty to do the same as somebody else. We study what happens when agents
are interested how they fare not only in absolute terms, but also relative to
others. This preference, denoted envy, is shown to induce the emergence of
distinct social classes via a collective strategy condensation transition.
Members of the lower class pursue identical strategies, in analogy to the
Bose-Einstein condensation, with the upper class remaining individualistic.",Collective strategy condensation towards class-separated societies,2022-06-07 19:14:16,Claudius Gros,"http://dx.doi.org/10.1140/epjb/s10051-022-00362-5, http://arxiv.org/abs/2206.03421v1, http://arxiv.org/pdf/2206.03421v1",econ.TH
35757,th,"In this paper, we consider a discrete-time Stackelberg mean field game with a
finite number of leaders, a finite number of major followers and an infinite
number of minor followers. The leaders and the followers each observe types
privately that evolve as conditionally independent controlled Markov processes.
The leaders are of the ""Stackelberg"" kind, which means they commit to a dynamic
policy. We consider two types of followers: major and minor, each with a
private type. All the followers best respond to the policies of the Stackelberg
leaders and each other. Knowing that the followers would play a mean field game
(with major players) based on their policy, each (Stackelberg) leader chooses a
policy that maximizes her reward. We refer to the resulting outcome as a
Stackelberg mean field equilibrium with multiple leaders (SMFE-ML). In this
paper, we provide a master equation of this game that allows one to compute all
SMFE-ML. We further extend this notion to the case when there is an infinite
number of leaders.",Master equation of discrete-time Stackelberg mean field games with multiple leaders,2022-09-07 17:30:45,Deepanshu Vasal,"http://arxiv.org/abs/2209.03186v1, http://arxiv.org/pdf/2209.03186v1",eess.SY
35731,th,"The Slutsky equation, central in consumer choice theory, is derived from the
usual hypotheses underlying most standard models in Economics, such as full
rationality, homogeneity, and absence of interactions. We present a statistical
physics framework that allows us to relax such assumptions. We first derive a
general fluctuation-response formula that relates the Slutsky matrix to
spontaneous fluctuations of consumption rather than to response to changing
prices and budget. We then show that, within our hypotheses, the symmetry of
the Slutsky matrix remains valid even when agents are only boundedly rational
but non-interacting. We then propose a model where agents are influenced by the
choice of others, leading to a phase transition beyond which consumption is
dominated by herding (or ""fashion"") effects. In this case, the individual
Slutsky matrix is no longer symmetric, even for fully rational agents. The
vicinity of the transition features a peak in asymmetry.",Bounded Rationality and Animal Spirits: A Fluctuation-Response Approach to Slutsky Matrices,2022-06-09 15:51:12,"Jerome Garnier-Brun, Jean-Philippe Bouchaud, Michael Benzaquen","http://dx.doi.org/10.1088/2632-072X/acb0a7, http://arxiv.org/abs/2206.04468v1, http://arxiv.org/pdf/2206.04468v1",econ.TH
35732,th,"Islamic and capitalist economies have several differences, the most
fundamental being that the Islamic economy is characterized by the prohibition
of interest (riba) and speculation (gharar) and the enforcement of
Shariah-compliant profit-loss sharing (mudaraba, murabaha, salam, etc.) and
wealth redistribution (waqf, sadaqah, and zakat). In this study, I apply new
econophysics models of wealth exchange and redistribution to quantitatively
compare these characteristics to those of capitalism and evaluate wealth
distribution and disparity using a simulation. Specifically, regarding
exchange, I propose a loan interest model representing finance capitalism and
riba and a joint venture model representing shareholder capitalism and
mudaraba; regarding redistribution, I create a transfer model representing
inheritance tax and waqf. As exchanges are repeated from an initial uniform
distribution of wealth, wealth distribution approaches a power-law distribution
more quickly for the loan interest than the joint venture model; and the Gini
index, representing disparity, rapidly increases. The joint venture model's
Gini index increases more slowly, but eventually, the wealth distribution in
both models becomes a delta distribution, and the Gini index gradually
approaches 1. Next, when both models are combined with the transfer model to
redistribute wealth in every given period, the loan interest model has a larger
Gini index than the joint venture model, but both converge to a Gini index of
less than 1. These results quantitatively reveal that in the Islamic economy,
disparity is restrained by prohibiting riba and promoting reciprocal exchange
in mudaraba and redistribution through waqf. Comparing Islamic and capitalist
economies provides insights into the benefits of economically embracing the
ethical practice of mutual aid and suggests guidelines for an alternative to
capitalism.",Islamic and capitalist economies: Comparison using econophysics models of wealth exchange and redistribution,2022-06-11 09:47:40,Takeshi Kato,"http://dx.doi.org/10.1371/journal.pone.0275113, http://arxiv.org/abs/2206.05443v2, http://arxiv.org/pdf/2206.05443v2",econ.TH
35733,th,"We formalize a framework for coordinating funding and selecting projects, the
costs of which are shared among agents with quasi-linear utility functions and
individual budgets. Our model contains the classical discrete participatory
budgeting model as a special case, while capturing other useful scenarios. We
propose several important axioms and objectives and study how well they can be
simultaneously satisfied. We show that whereas welfare maximization admits an
FPTAS, welfare maximization subject to a natural and very weak participation
requirement leads to a strong inapproximability. This result is bypassed if we
consider some natural restricted valuations, namely laminar single-minded
valuations and symmetric valuations. Our analysis for the former restriction
leads to the discovery of a new class of tractable instances for the Set Union
Knapsack problem, a classical problem in combinatorial optimization.",Coordinating Monetary Contributions in Participatory Budgeting,2022-06-13 11:27:16,"Haris Aziz, Sujit Gujar, Manisha Padala, Mashbat Suzuki, Jeremy Vollen","http://arxiv.org/abs/2206.05966v3, http://arxiv.org/pdf/2206.05966v3",cs.GT
35734,th,"We consider the diffusion of two alternatives in social networks using a
game-theoretic approach. Each individual plays a coordination game with its
neighbors repeatedly and decides which to adopt. As products are used in
conjunction with others and through repeated interactions, individuals are more
interested in their long-term benefits and tend to show trust to others to
maximize their long-term utility by choosing a suboptimal option with respect
to instantaneous payoff. To capture such trust behavior, we deploy
limited-trust equilibrium (LTE) in diffusion process. We analyze the
convergence of emerging dynamics to equilibrium points using mean-field
approximation and study the equilibrium state and the convergence rate of
diffusion using absorption probability and expected absorption time of a
reduced-size absorbing Markov chain. We also show that the diffusion model on
LTE under the best-response strategy can be converted to the well-known linear
threshold model. Simulation results show that when agents behave trustworthy,
their long-term utility will increase significantly compared to the case when
they are solely self-interested. Moreover, the Markov chain analysis provides a
good estimate of convergence properties over random networks.",Limited-Trust in Diffusion of Competing Alternatives over Social Networks,2022-06-13 20:05:31,"Vincent Leon, S. Rasoul Etesami, Rakesh Nagi","http://arxiv.org/abs/2206.06318v3, http://arxiv.org/pdf/2206.06318v3",cs.SI
35735,th,"We relax the strong rationality assumption for the agents in the paradigmatic
Kyle model of price formation, thereby reconciling the framework of
asymmetrically informed traders with the Adaptive Market Hypothesis, where
agents use inductive rather than deductive reasoning. Building on these ideas,
we propose a stylised model able to account parsimoniously for a rich
phenomenology, ranging from excess volatility to volatility clustering. While
characterising the excess-volatility dynamics, we provide a microfoundation for
GARCH models. Volatility clustering is shown to be related to the self-excited
dynamics induced by traders' behaviour, and does not rely on clustered
fundamental innovations. Finally, we propose an extension to account for the
fragile dynamics exhibited by real markets during flash crashes.",Microfounding GARCH Models and Beyond: A Kyle-inspired Model with Adaptive Agents,2022-06-14 14:26:26,"Michele Vodret, Iacopo Mastromatteo, Bence Toth, Michael Benzaquen","http://arxiv.org/abs/2206.06764v1, http://arxiv.org/pdf/2206.06764v1",q-fin.TR
35736,th,"In tug-of-war, two players compete by moving a counter along edges of a
graph, each winning the right to move at a given turn according to the flip of
a possibly biased coin. The game ends when the counter reaches the boundary, a
fixed subset of the vertices, at which point one player pays the other an
amount determined by the boundary vertex. Economists and mathematicians have
independently studied tug-of-war for many years, focussing respectively on
resource-allocation forms of the game, in which players iteratively spend
precious budgets in an effort to influence the bias of the coins that determine
the turn victors; and on PDE arising in fine mesh limits of the constant-bias
game in a Euclidean setting.
  In this article, we offer a mathematical treatment of a class of tug-of-war
games with allocated budgets: each player is initially given a fixed budget
which she draws on throughout the game to offer a stake at the start of each
turn, and her probability of winning the turn is the ratio of her stake and the
sum of the two stakes. We consider the game played on a tree, with boundary
being the set of leaves, and the payment function being the indicator of a
single distinguished leaf. We find the game value and the essentially unique
Nash equilibrium of a leisurely version of the game, in which the move at any
given turn is cancelled with constant probability after stakes have been
placed. We show that the ratio of the players' remaining budgets is maintained
at its initial value $\lambda$; game value is a biased infinity harmonic
function; and the proportion of remaining budget that players stake at a given
turn is given in terms of the spatial gradient and the $\lambda$-derivative of
game value. We also indicate examples in which the solution takes a different
form in the non-leisurely game.",Stake-governed tug-of-war and the biased infinity Laplacian,2022-06-16 20:00:49,"Alan Hammond, Gábor Pete","http://arxiv.org/abs/2206.08300v2, http://arxiv.org/pdf/2206.08300v2",math.PR
35737,th,"To take advantage of strategy commitment, a useful tactic of playing games, a
leader must learn enough information about the follower's payoff function.
However, this leaves the follower a chance to provide fake information and
influence the final game outcome. Through a carefully contrived payoff function
misreported to the learning leader, the follower may induce an outcome that
benefits him more, compared to the ones when he truthfully behaves.
  We study the follower's optimal manipulation via such strategic behaviors in
extensive-form games. Followers' different attitudes are taken into account. An
optimistic follower maximizes his true utility among all game outcomes that can
be induced by some payoff function. A pessimistic follower only considers
misreporting payoff functions that induce a unique game outcome. For all the
settings considered in this paper, we characterize all the possible game
outcomes that can be induced successfully. We show that it is polynomial-time
tractable for the follower to find the optimal way of misreporting his private
payoff information. Our work completely resolves this follower's optimal
manipulation problem on an extensive-form game tree.",Optimal Private Payoff Manipulation against Commitment in Extensive-form Games,2022-06-27 11:50:28,"Yurong Chen, Xiaotie Deng, Yuhao Li","http://arxiv.org/abs/2206.13119v2, http://arxiv.org/pdf/2206.13119v2",cs.GT
35738,th,"We study the diffusion of a true and a false message when agents are (i)
biased towards one of the messages and (ii) agents are able to inspect messages
for veracity. Inspection of messages implies that a higher rumor prevalence may
increase the prevalence of the truth. We employ this result to discuss how a
planner may optimally choose information inspection rates of the population. We
find that a planner who aims to maximize the prevalence of the truth may find
it optimal to allow rumors to circulate.",Optimal Inspection of Rumors in Networks,2022-07-05 09:29:46,"Luca Paolo Merlino, Nicole Tabasso","http://arxiv.org/abs/2207.01830v1, http://arxiv.org/pdf/2207.01830v1",econ.TH
35739,th,"In today's online advertising markets, a crucial requirement for an
advertiser is to control her total expenditure within a time horizon under some
budget. Among various budget control methods, throttling has emerged as a
popular choice, managing an advertiser's total expenditure by selecting only a
subset of auctions to participate in. This paper provides a theoretical
panorama of a single advertiser's dynamic budget throttling process in repeated
second-price auctions. We first establish a lower bound on the regret and an
upper bound on the asymptotic competitive ratio for any throttling algorithm,
respectively, when the advertiser's values are stochastic and adversarial.
Regarding the algorithmic side, we propose the OGD-CB algorithm, which
guarantees a near-optimal expected regret with stochastic values. On the other
hand, when values are adversarial, we prove that this algorithm also reaches
the upper bound on the asymptotic competitive ratio. We further compare
throttling with pacing, another widely adopted budget control method, in
repeated second-price auctions. In the stochastic case, we demonstrate that
pacing is generally superior to throttling for the advertiser, supporting the
well-known result that pacing is asymptotically optimal in this scenario.
However, in the adversarial case, we give an exciting result indicating that
throttling is also an asymptotically optimal dynamic bidding strategy. Our
results bridge the gaps in theoretical research of throttling in repeated
auctions and comprehensively reveal the ability of this popular
budget-smoothing strategy.",Dynamic Budget Throttling in Repeated Second-Price Auctions,2022-07-11 11:12:02,"Zhaohua Chen, Chang Wang, Qian Wang, Yuqi Pan, Zhuming Shi, Zheng Cai, Yukun Ren, Zhihua Zhu, Xiaotie Deng","http://arxiv.org/abs/2207.04690v7, http://arxiv.org/pdf/2207.04690v7",cs.GT
35740,th,"This paper aims to formulate and study the inverse problem of non-cooperative
linear quadratic games: Given a profile of control strategies, find cost
parameters for which this profile of control strategies is Nash. We formulate
the problem as a leader-followers problem, where a leader aims to implant a
desired profile of control strategies among selfish players. In this paper, we
leverage frequency-domain techniques to develop a necessary and sufficient
condition on the existence of cost parameters for a given profile of
stabilizing control strategies to be Nash under a given linear system. The
necessary and sufficient condition includes the circle criterion for each
player and a rank condition related to the transfer function of each player.
The condition provides an analytical method to check the existence of such cost
parameters, while previous studies need to solve a convex feasibility problem
numerically to answer the same question. We develop an identity in
frequency-domain representation to characterize the cost parameters, which we
refer to as the Kalman equation. The Kalman equation reduces redundancy in the
time-domain analysis that involves solving a convex feasibility problem. Using
the Kalman equation, we also show the leader can enforce the same Nash profile
by applying penalties on the shared state instead of penalizing the player for
other players' actions to avoid the impression of unfairness.",The Inverse Problem of Linear-Quadratic Differential Games: When is a Control Strategies Profile Nash?,2022-07-12 07:26:55,"Yunhan Huang, Tao Zhang, Quanyan Zhu","http://arxiv.org/abs/2207.05303v2, http://arxiv.org/pdf/2207.05303v2",math.OC
35741,th,"The emerging technology of Vehicle-to-Vehicle (V2V) communication over
vehicular ad hoc networks promises to improve road safety by allowing vehicles
to autonomously warn each other of road hazards. However, research on other
transportation information systems has shown that informing only a subset of
drivers of road conditions may have a perverse effect of increasing congestion.
In the context of a simple (yet novel) model of V2V hazard information sharing,
we ask whether partial adoption of this technology can similarly lead to
undesirable outcomes. In our model, drivers individually choose how recklessly
to behave as a function of information received from other V2V-enabled cars,
and the resulting aggregate behavior influences the likelihood of accidents
(and thus the information propagated by the vehicular network). We fully
characterize the game-theoretic equilibria of this model using our new
equilibrium concept. Our model indicates that for a wide range of the parameter
space, V2V information sharing surprisingly increases the equilibrium frequency
of accidents relative to no V2V information sharing, and that it may increase
equilibrium social cost as well.",Information Design for Vehicle-to-Vehicle Communication,2022-07-13 00:33:06,"Brendan T. Gould, Philip N. Brown","http://arxiv.org/abs/2207.06411v1, http://arxiv.org/pdf/2207.06411v1",cs.GT
35766,th,"Balancing pandemic control and economics is challenging, as the numerical
analysis assuming specific economic conditions complicates obtaining
predictable general findings. In this study, we analytically demonstrate how
adopting timely moderate measures helps reconcile medical effectiveness and
economic impact, and explain it as a consequence of the general finding of
``economic irreversibility"" by comparing it with thermodynamics. A general
inequality provides the guiding principles on how such measures should be
implemented. The methodology leading to the exact solution is a novel
theoretical contribution to the econophysics literature.",Timely pandemic countermeasures reduce both health damage and economic loss: Generality of the exact solution,2022-09-16 10:49:56,Tsuyoshi Hondou,"http://dx.doi.org/10.7566/JPSJ.92.043801, http://arxiv.org/abs/2209.12805v2, http://arxiv.org/pdf/2209.12805v2",physics.soc-ph
35742,th,"We study fairness in social choice settings under single-peaked preferences.
Construction and characterization of social choice rules in the single-peaked
domain has been extensively studied in prior works. In fact, in the
single-peaked domain, it is known that unanimous and strategy-proof
deterministic rules have to be min-max rules and those that also satisfy
anonymity have to be median rules. Further, random social choice rules
satisfying these properties have been shown to be convex combinations of
respective deterministic rules. We non-trivially add to this body of results by
including fairness considerations in social choice. Our study directly
addresses fairness for groups of agents. To study group-fairness, we consider
an existing partition of the agents into logical groups, based on natural
attributes such as gender, race, and location. To capture fairness within each
group, we introduce the notion of group-wise anonymity. To capture fairness
across the groups, we propose a weak notion as well as a strong notion of
fairness. The proposed fairness notions turn out to be natural generalizations
of existing individual-fairness notions and moreover provide non-trivial
outcomes for strict ordinal preferences, unlike the existing group-fairness
notions. We provide two separate characterizations of random social choice
rules that satisfy group-fairness: (i) direct characterization (ii) extreme
point characterization (as convex combinations of fair deterministic social
choice rules). We also explore the special case where there are no groups and
provide sharper characterizations of rules that achieve individual-fairness.",Characterization of Group-Fair Social Choice Rules under Single-Peaked Preferences,2022-07-16 20:12:54,"Gogulapati Sreedurga, Soumyarup Sadhukhan, Souvik Roy, Yadati Narahari","http://arxiv.org/abs/2207.07984v1, http://arxiv.org/pdf/2207.07984v1",cs.GT
35743,th,"Cryptographic Self-Selection is a subroutine used to select a leader for
modern proof-of-stake consensus protocols, such as Algorand. In cryptographic
self-selection, each round $r$ has a seed $Q_r$. In round $r$, each account
owner is asked to digitally sign $Q_r$, hash their digital signature to produce
a credential, and then broadcast this credential to the entire network. A
publicly-known function scores each credential in a manner so that the
distribution of the lowest scoring credential is identical to the distribution
of stake owned by each account. The user who broadcasts the lowest-scoring
credential is the leader for round $r$, and their credential becomes the seed
$Q_{r+1}$. Such protocols leave open the possibility of a selfish-mining style
attack: a user who owns multiple accounts that each produce low-scoring
credentials in round $r$ can selectively choose which ones to broadcast in
order to influence the seed for round $r+1$. Indeed, the user can pre-compute
their credentials for round $r+1$ for each potential seed, and broadcast only
the credential (among those with a low enough score to be the leader) that
produces the most favorable seed.
  We consider an adversary who wishes to maximize the expected fraction of
rounds in which an account they own is the leader. We show such an adversary
always benefits from deviating from the intended protocol, regardless of the
fraction of the stake controlled. We characterize the optimal strategy; first
by proving the existence of optimal positive recurrent strategies whenever the
adversary owns less than $38\%$ of the stake. Then, we provide a Markov
Decision Process formulation to compute the optimal strategy.",Optimal Strategic Mining Against Cryptographic Self-Selection in Proof-of-Stake,2022-07-16 21:28:07,"Matheus V. X. Ferreira, Ye Lin Sally Hahn, S. Matthew Weinberg, Catherine Yu","http://dx.doi.org/10.1145/3490486.3538337, http://arxiv.org/abs/2207.07996v1, http://arxiv.org/pdf/2207.07996v1",cs.CR
35744,th,"We study a general scenario of simultaneous contests that allocate prizes
based on equal sharing: each contest awards its prize to all players who
satisfy some contest-specific criterion, and the value of this prize to a
winner decreases as the number of winners increases. The players produce
outputs for a set of activities, and the winning criteria of the contests are
based on these outputs. We consider two variations of the model: (i) players
have costs for producing outputs; (ii) players do not have costs but have
generalized budget constraints. We observe that these games are exact potential
games and hence always have a pure-strategy Nash equilibrium. The price of
anarchy is $2$ for the budget model, but can be unbounded for the cost model.
Our main results are for the computational complexity of these games. We prove
that for general versions of the model exactly or approximately computing a
best response is NP-hard. For natural restricted versions where best response
is easy to compute, we show that finding a pure-strategy Nash equilibrium is
PLS-complete, and finding a mixed-strategy Nash equilibrium is
(PPAD$\cap$PLS)-complete. On the other hand, an approximate pure-strategy Nash
equilibrium can be found in pseudo-polynomial time. These games are a strict
but natural subclass of explicit congestion games, but they still have the same
equilibrium hardness results.",Simultaneous Contests with Equal Sharing Allocation of Prizes: Computational Complexity and Price of Anarchy,2022-07-17 15:18:11,"Edith Elkind, Abheek Ghosh, Paul W. Goldberg","http://arxiv.org/abs/2207.08151v1, http://arxiv.org/pdf/2207.08151v1",cs.GT
35745,th,"This paper examines whether one can learn to play an optimal action while
only knowing part of true specification of the environment. We choose the
optimal pricing problem as our laboratory, where the monopolist is endowed with
an underspecified model of the market demand, but can observe market outcomes.
In contrast to conventional learning models where the model specification is
complete and exogenously fixed, the monopolist has to learn the specification
and the parameters of the demand curve from the data. We formulate the learning
dynamics as an algorithm that forecasts the optimal price based on the data,
following the machine learning literature (Shalev-Shwartz and Ben-David
(2014)). Inspired by PAC learnability, we develop a new notion of learnability
by requiring that the algorithm must produce an accurate forecast with a
reasonable amount of data uniformly over the class of models consistent with
the part of the true specification. In addition, we assume that the monopolist
has a lexicographic preference over the payoff and the complexity cost of the
algorithm, seeking an algorithm with a minimum number of parameters subject to
PAC-guaranteeing the optimal solution (Rubinstein (1986)). We show that for the
set of demand curves with strictly decreasing uniformly Lipschitz continuous
marginal revenue curve, the optimal algorithm recursively estimates the slope
and the intercept of the linear demand curve, even if the actual demand curve
is not linear. The monopolist chooses a misspecified model to save
computational cost, while learning the true optimal decision uniformly over the
set of underspecified demand curves.",Learning Underspecified Models,2022-07-20 21:42:29,"In-Koo Cho, Jonathan Libgober","http://arxiv.org/abs/2207.10140v1, http://arxiv.org/pdf/2207.10140v1",econ.TH
35746,th,"In this study, I present a theoretical social learning model to investigate
how confirmation bias affects opinions when agents exchange information over a
social network. Hence, besides exchanging opinions with friends, agents observe
a public sequence of potentially ambiguous signals and interpret it according
to a rule that includes confirmation bias. First, this study shows that
regardless of the level of ambiguity, both for individuals and for the networked society, only two
types of opinions can be formed, and both are biased. However, one opinion type
is less biased than the other depending on the state of the world. The size of
both biases depends on the ambiguity level and relative magnitude of the state
and confirmation biases. Hence, long-run learning is not attained even when
people impartially interpret ambiguity. Finally, analytically confirming the
probability of emergence of the less-biased consensus when people are connected
and have different priors is difficult. Hence, I used simulations to analyze
its determinants and found three main results: i) some network topologies are
more conducive to consensus efficiency, ii) some degree of partisanship
enhances consensus efficiency even under confirmation bias and iii)
open-mindedness (i.e. when partisans agree to exchange opinions with opposing
partisans) might inhibit efficiency in some cases.",Confirmation Bias in Social Networks,2022-07-26 04:11:01,Marcos R. Fernandes,"http://arxiv.org/abs/2207.12594v3, http://arxiv.org/pdf/2207.12594v3",econ.TH
35747,th,"We consider a Bayesian forecast aggregation model where $n$ experts, after
observing private signals about an unknown binary event, report their posterior
beliefs about the event to a principal, who then aggregates the reports into a
single prediction for the event. The signals of the experts and the outcome of
the event follow a joint distribution that is unknown to the principal, but the
principal has access to i.i.d. ""samples"" from the distribution, where each
sample is a tuple of the experts' reports (not signals) and the realization of
the event. Using these samples, the principal aims to find an
$\varepsilon$-approximately optimal aggregator, where optimality is measured in
terms of the expected squared distance between the aggregated prediction and
the realization of the event. We show that the sample complexity of this
problem is at least $\tilde \Omega(m^{n-2} / \varepsilon)$ for arbitrary
discrete distributions, where $m$ is the size of each expert's signal space.
This sample complexity grows exponentially in the number of experts $n$. But,
if the experts' signals are independent conditioned on the realization of the
event, then the sample complexity is significantly reduced, to $\tilde O(1 /
\varepsilon^2)$, which does not depend on $n$. Our results can be generalized
to non-binary events. The proof of our results uses a reduction from the
distribution learning problem and reveals the fact that forecast aggregation is
almost as difficult as distribution learning.",Sample Complexity of Forecast Aggregation,2022-07-26 21:12:53,"Tao Lin, Yiling Chen","http://arxiv.org/abs/2207.13126v4, http://arxiv.org/pdf/2207.13126v4",cs.LG
35748,th,"We consider transferable utility cooperative games with infinitely many
players and the core understood in the space of bounded additive set functions.
We show that, if a game is bounded below, then its core is non-empty if and
only if the game is balanced.
  This finding is a generalization of Schmeidler's (1967) original result ``On
Balanced Games with Infinitely Many Players'', where the game is assumed to be
non-negative. We furthermore demonstrate that, if a game is not bounded below,
then its core might be empty even though the game is balanced; that is, our
result is tight.
  We also generalize Schmeidler's (1967) result to the case of restricted
cooperation too.",On Balanced Games with Infinitely Many Players: Revisiting Schmeidler's Result,2022-07-29 16:36:47,"David Bartl, Miklós Pintér","http://arxiv.org/abs/2207.14672v1, http://arxiv.org/pdf/2207.14672v1",math.OC
35749,th,"We devise a theoretical model for the optimal dynamical control of an
infectious disease whose diffusion is described by the SVIR compartmental
model. The control is realized through implementing social rules to reduce the
disease's spread, which often implies substantial economic and social costs. We
model this trade-off by introducing a functional depending on three terms: a
social cost function, the cost supported by the healthcare system for the
infected population, and the cost of the vaccination campaign. Using
Pontryagin's Maximum Principle, we give conditions for the existence of the
optimal policy, which we characterize explicitly in three instances of the
social cost function, the linear, quadratic, and exponential models,
respectively. Finally, we present a set of results on the numerical solution of
the optimally controlled system by using Italian data from the recent Covid--19
pandemic for the model calibration.",The economic cost of social distancing during a pandemic: an optimal control approach in the SVIR model,2022-08-09 20:10:17,"Alessandro Ramponi, Maria Elisabetta Tessitore","http://arxiv.org/abs/2208.04908v1, http://arxiv.org/pdf/2208.04908v1",math.OC
35750,th,"We study a game between $N$ job applicants who incur a cost $c$ (relative to
the job value) to reveal their type during interviews and an administrator who
seeks to maximize the probability of hiring the best. We define a full learning
equilibrium and prove its existence, uniqueness, and optimality. In
equilibrium, the administrator accepts the current best applicant $n$ with
probability $c$ if $n<n^*$ and with probability 1 if $n\ge n^*$ for a threshold
$n^*$ independent of $c$. In contrast to the case without cost, where the
success probability converges to $1/\mathrm{e}\approx 0.37$ as $N$ tends to
infinity, with cost the success probability decays like $N^{-c}$.",Incentivizing Hidden Types in Secretary Problem,2022-08-11 18:56:08,"Longjian Li, Alexis Akira Toda","http://arxiv.org/abs/2208.05897v1, http://arxiv.org/pdf/2208.05897v1",econ.TH
35751,th,"The productivity of a common pool of resources may degrade when overly
exploited by a number of selfish investors, a situation known as the tragedy of
the commons (TOC). Without regulations, agents optimize the size of their
individual investments into the commons by balancing incurring costs with the
returns received. The resulting Nash equilibrium involves a self-consistency
loop between individual investment decisions and the state of the commons. As a
consequence, several non-trivial properties emerge. For $N$ investing actors we
prove rigorously that typical payoffs do not scale as $1/N$, the expected
result for cooperating agents, but as $(1/N)^2$. Payoffs are hence reduced with
regard to the functional dependence on $N$, a situation denoted catastrophic
poverty. We show that catastrophic poverty results from a fine-tuned balance
between returns and costs. Additionally, a finite number of oligarchs may be
present. Oligarchs are characterized by payoffs that are finite and not
decreasing when $N$ increases. Our results hold for generic classes of models,
including convex and moderately concave cost functions. For strongly concave
cost functions the Nash equilibrium undergoes a collective reorganization,
being characterized instead by entry barriers and sudden death forced market
exits.",Generic catastrophic poverty when selfish investors exploit a degradable common resource,2022-08-17 12:19:14,Claudius Gros,"http://arxiv.org/abs/2208.08171v2, http://arxiv.org/pdf/2208.08171v2",econ.TH
35752,th,"We show the perhaps surprising inequality that the weighted average of
negatively dependent super-Pareto random variables, possibly caused by
triggering events, is larger than one such random variable in the sense of
first-order stochastic dominance. The class of super-Pareto distributions is
extremely heavy-tailed and it includes the class of infinite-mean Pareto
distributions. We discuss several implications of this result via an
equilibrium analysis in a risk exchange market. First, diversification of
super-Pareto losses increases portfolio risk, and thus a diversification
penalty exists. Second, agents with super-Pareto losses will not share risks in
a market equilibrium. Third, transferring losses from agents bearing
super-Pareto losses to external parties without any losses may arrive at an
equilibrium which benefits every party involved. The empirical studies show
that our new inequality can be observed empirically for real datasets that fit
well with extremely heavy tails.","An unexpected stochastic dominance: Pareto distributions, catastrophes, and risk exchange",2022-08-17 21:17:01,"Yuyu Chen, Paul Embrechts, Ruodu Wang","http://arxiv.org/abs/2208.08471v3, http://arxiv.org/pdf/2208.08471v3",q-fin.RM
35753,th,"We investigate Gately's solution concept for cooperative games with
transferable utilities. Gately's conception introduced a bargaining solution
that minimises the maximal quantified ``propensity to disrupt'' the negotiation
process of the players over the allocation of the generated collective payoffs.
Gately's solution concept is well-defined for a broad class of games. We also
consider a generalisation based on a parameter-based quantification of the
propensity to disrupt. Furthermore, we investigate the relationship of these
generalised Gately values with the Core and the Nucleolus and show that
Gately's solution is in the Core for all regular 3-player games. We identify
exact conditions under which generally these Gately values are Core imputations
for arbitrary regular cooperative games. Finally, we investigate the
relationship of the Gately value with the Shapley value.",Gately Values of Cooperative Games,2022-08-22 13:19:40,"Robert P. Gilles, Lina Mallozzi","http://arxiv.org/abs/2208.10189v2, http://arxiv.org/pdf/2208.10189v2",econ.TH
35754,th,"Multi-agent reinforcement learning (MARL) is a powerful tool for training
automated systems acting independently in a common environment. However, it can
lead to sub-optimal behavior when individual incentives and group incentives
diverge. Humans are remarkably capable at solving these social dilemmas. It is
an open problem in MARL to replicate such cooperative behaviors in selfish
agents. In this work, we draw upon the idea of formal contracting from
economics to overcome diverging incentives between agents in MARL. We propose
an augmentation to a Markov game where agents voluntarily agree to binding
state-dependent transfers of reward, under pre-specified conditions. Our
contributions are theoretical and empirical. First, we show that this
augmentation makes all subgame-perfect equilibria of all fully observed Markov
games exhibit socially optimal behavior, given a sufficiently rich space of
contracts. Next, we complement our game-theoretic analysis by showing that
state-of-the-art RL algorithms learn socially optimal policies given our
augmentation. Our experiments include classic static dilemmas like Stag Hunt,
Prisoner's Dilemma and a public goods game, as well as dynamic interactions
that simulate traffic, pollution management and common pool resource
management.",Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL,2022-08-22 20:42:03,"Phillip J. K. Christoffersen, Andreas A. Haupt, Dylan Hadfield-Menell","http://dx.doi.org/10.5555/3545946.3598670, http://arxiv.org/abs/2208.10469v3, http://arxiv.org/pdf/2208.10469v3",cs.AI
35755,th,"Spot electricity markets are considered under a Game-Theoretic framework,
where risk averse players submit orders to the market clearing mechanism to
maximise their own utility. Consistent with the current practice in Europe, the
market clearing mechanism is modelled as a Social Welfare Maximisation problem,
with zonal pricing, and we consider inflexible demand, physical constraints of
the electricity grid, and capacity-constrained producers. A novel type of
non-parametric risk aversion based on a defined worst case scenario is
introduced, and this reduces the dimensionality of the strategy variables and
ensures boundedness of prices. By leveraging these properties we devise Jacobi
and Gauss-Seidel iterative schemes for computation of approximate global Nash
Equilibria, which are in contrast to derivative based local equilibria. Our
methodology is applied to the real world data of Central Western European (CWE)
Spot Market during the 2019-2020 period, and offers a good representation of
the historical time series of prices. By also solving for the assumption of
truthful bidding, we devise a simple method based on hypothesis testing to
infer if and when producers are bidding strategically (instead of truthfully),
and we find evidence suggesting that strategic bidding may be fairly pronounced
in the CWE region.",A fundamental Game Theoretic model and approximate global Nash Equilibria computation for European Spot Power Markets,2022-08-30 14:32:34,"Ioan Alexandru Puiu, Raphael Andreas Hauser","http://arxiv.org/abs/2208.14164v1, http://arxiv.org/pdf/2208.14164v1",cs.GT
35756,th,"Competition between traditional platforms is known to improve user utility by
aligning the platform's actions with user preferences. But to what extent is
alignment exhibited in data-driven marketplaces? To study this question from a
theoretical perspective, we introduce a duopoly market where platform actions
are bandit algorithms and the two platforms compete for user participation. A
salient feature of this market is that the quality of recommendations depends
on both the bandit algorithm and the amount of data provided by interactions
from users. This interdependency between the algorithm performance and the
actions of users complicates the structure of market equilibria and their
quality in terms of user utility. Our main finding is that competition in this
market does not perfectly align market outcomes with user utility.
Interestingly, market outcomes exhibit misalignment not only when the platforms
have separate data repositories, but also when the platforms have a shared data
repository. Nonetheless, the data sharing assumptions impact what mechanism
drives misalignment and also affect the specific form of misalignment (e.g. the
quality of the best-case and worst-case market outcomes). More broadly, our
work illustrates that competition in digital marketplaces has subtle
consequences for user utility that merit further investigation.","Competition, Alignment, and Equilibria in Digital Marketplaces",2022-08-30 20:43:58,"Meena Jagadeesan, Michael I. Jordan, Nika Haghtalab","http://arxiv.org/abs/2208.14423v2, http://arxiv.org/pdf/2208.14423v2",cs.GT
35767,th,"We consider finite two-player normal form games with random payoffs. Player
A's payoffs are i.i.d. from a uniform distribution. Given p in [0, 1], for any
action profile, player B's payoff coincides with player A's payoff with
probability p and is i.i.d. from the same uniform distribution with probability
1-p. This model interpolates the model of i.i.d. random payoff used in most of
the literature and the model of random potential games. First we study the
number of pure Nash equilibria in the above class of games. Then we show that,
for any positive p, asymptotically in the number of available actions, best
response dynamics reaches a pure Nash equilibrium with high probability.",Best-Response dynamics in two-person random games with correlated payoffs,2022-09-26 22:04:06,"Hlafo Alfie Mimun, Matteo Quattropani, Marco Scarsini","http://arxiv.org/abs/2209.12967v2, http://arxiv.org/pdf/2209.12967v2",cs.GT
35759,th,"LP-duality theory has played a central role in the study of cores of games,
right from the early days of this notion to the present time. The classic paper
of Shapley and Shubik \cite{Shapley1971assignment} introduced the ""right"" way
of exploiting the power of this theory, namely picking problems whose
LP-relaxations support polyhedra having integral vertices. So far, the latter
fact was established by showing that the constraint matrix of the underlying
linear system is {\em totally unimodular}.
  We attempt to take this methodology to its logical next step -- {\em using
total dual integrality} -- thereby addressing new classes of games which have
their origins in two major theories within combinatorial optimization, namely
perfect graphs and polymatroids. In the former, we address the stable set and
clique games and in the latter, we address the matroid independent set game.
For each of these games, we prove that the set of core imputations is precisely
the set of optimal solutions to the dual LPs.
  Another novelty is the way the worth of the game is allocated among
sub-coalitions. Previous works follow the {\em bottom-up process} of allocating
to individual agents; the allocation to a sub-coalition is simply the sum of
the allocations to its agents. The {\em natural process for our games is
top-down}. The optimal dual allocates to ""objects"" in the grand coalition; a
sub-coalition inherits the allocation of each object with which it has
non-empty intersection.","Cores of Games via Total Dual Integrality, with Applications to Perfect Graphs and Polymatroids",2022-09-11 19:49:35,Vijay V. Vazirani,"http://arxiv.org/abs/2209.04903v2, http://arxiv.org/pdf/2209.04903v2",cs.GT
35760,th,"A formal write-up of the simple proof (1995) of the existence of calibrated
forecasts by the minimax theorem, which moreover shows that $N^3$ periods
suffice to guarantee a calibration error of at most $1/N$.",Calibrated Forecasts: The Minimax Proof,2022-09-13 13:24:54,Sergiu Hart,"http://arxiv.org/abs/2209.05863v2, http://arxiv.org/pdf/2209.05863v2",econ.TH
35761,th,"In adversarial interactions, one is often required to make strategic
decisions over multiple periods of time, wherein decisions made earlier impact
a player's competitive standing as well as how choices are made in later
stages. In this paper, we study such scenarios in the context of General Lotto
games, which models the competitive allocation of resources over multiple
battlefields between two players. We propose a two-stage formulation where one
of the players has reserved resources that can be strategically pre-allocated
across the battlefields in the first stage. The pre-allocation then becomes
binding and is revealed to the other player. In the second stage, the players
engage by simultaneously allocating their real-time resources against each
other. The main contribution in this paper provides complete characterizations
of equilibrium payoffs in the two-stage game, revealing the interplay between
performance and the amount of resources expended in each stage of the game. We
find that real-time resources are at least twice as effective as pre-allocated
resources. We then determine the player's optimal investment when there are
linear costs associated with purchasing each type of resource before play
begins, and there is a limited monetary budget.",Strategic investments in multi-stage General Lotto games,2022-09-13 18:39:46,"Rahul Chandan, Keith Paarporn, Mahnoosh Alizadeh, Jason R. Marden","http://arxiv.org/abs/2209.06090v1, http://arxiv.org/pdf/2209.06090v1",cs.GT
35762,th,"Let $C$ be a cone in a locally convex Hausdorff topological vector space $X$
containing $0$. We show that there exists a (essentially unique) nonempty
family $\mathscr{K}$ of nonempty subsets of the topological dual $X^\prime$
such that $$ C=\{x \in X: \forall K \in \mathscr{K}, \exists f \in K, \,\, f(x)
\ge 0\}. $$ Then, we identify the additional properties on the family
$\mathscr{K}$ which characterize, among others, closed convex cones, open
convex cones, closed cones, and convex cones. For instance, if $X$ is a Banach
space, then $C$ is a closed cone if and only if the family $\mathscr{K}$ can be
chosen to consist of nonempty convex compact sets.
  These representations provide abstract versions of several recent results in
decision theory and give us the proper framework to obtain new ones. This
allows us to characterize preorders which satisfy the independence axiom over
certain probability measures, answering an open question in
[Econometrica~\textbf{87} (2019), no. 3, 933--980].",Representations of cones and applications to decision theory,2022-09-14 00:36:19,"Paolo Leonetti, Giulio Principi","http://arxiv.org/abs/2209.06310v2, http://arxiv.org/pdf/2209.06310v2",math.CA
35763,th,"We study random-turn resource-allocation games. In the Trail of Lost Pennies,
a counter moves on $\mathbb{Z}$. At each turn, Maxine stakes $a \in [0,\infty)$
and Mina $b \in [0,\infty)$. The counter $X$ then moves adjacently, to the
right with probability $\tfrac{a}{a+b}$. If $X_i \to -\infty$ in this
infinite-turn game, Mina receives one unit, and Maxine zero; if $X_i \to
\infty$, then these receipts are zero and $x$. Thus the net receipt to a given
player is $-A+B$, where $A$ is the sum of her stakes, and $B$ is her terminal
receipt. The game was inspired by unbiased tug-of-war in~[PSSW] from 2009 but
in fact closely resembles the original version of tug-of-war, introduced
[HarrisVickers87] in the economics literature in 1987. We show that the game
has surprising features. For a natural class of strategies, Nash equilibria
exist precisely when $x$ lies in $[\lambda,\lambda^{-1}]$, for a certain
$\lambda \in (0,1)$. We indicate that $\lambda$ is remarkably close to one,
proving that $\lambda \leq 0.999904$ and presenting clear numerical evidence
that $\lambda \geq 1 - 10^{-4}$. For each $x \in [\lambda,\lambda^{-1}]$, we
find countably many Nash equilibria. Each is roughly characterized by an
integral {\em battlefield} index: when the counter is nearby, both players
stake intensely, with rapid but asymmetric decay in stakes as it moves away.
Our results advance premises [HarrisVickers87,Konrad12] for fund management and
the incentive-outcome relation that plausibly hold for many player-funded
stake-governed games. Alongside a companion treatment [HP22] of games with
allocated budgets, we thus offer a detailed mathematical treatment of an
illustrative class of tug-of-war games. We also review the separate
developments of tug-of-war in economics and mathematics in the hope that
mathematicians direct further attention to tug-of-war in its original
resource-allocation guise.",On the Trail of Lost Pennies: player-funded tug-of-war on the integers,2022-09-15 19:54:31,Alan Hammond,"http://arxiv.org/abs/2209.07451v3, http://arxiv.org/pdf/2209.07451v3",math.PR
35764,th,"The Bayesian posterior probability of the true state is stochastically
dominated by that same posterior under the probability law of the true state.
This generalizes to notions of ""optimism"" about posterior probabilities.",Posterior Probabilities: Dominance and Optimism,2022-09-23 17:11:20,"Sergiu Hart, Yosef Rinott","http://dx.doi.org/10.1016/j.econlet.2020.109352, http://arxiv.org/abs/2209.11601v1, http://arxiv.org/pdf/2209.11601v1",econ.TH
35765,th,"In the standard Bayesian framework data are assumed to be generated by a
distribution parametrized by $\theta$ in a parameter space $\Theta$, over which
a prior distribution $\pi$ is given. A Bayesian statistician quantifies the
belief that the true parameter is $\theta_{0}$ in $\Theta$ by its posterior
probability given the observed data. We investigate the behavior of the
posterior belief in $\theta_{0}$ when the data are generated under some
parameter $\theta_{1},$ which may or may not be the same as $\theta_{0}.$
Starting from stochastic orders, specifically, likelihood ratio dominance, that
obtain for resulting distributions of posteriors, we consider monotonicity
properties of the posterior probabilities as a function of the sample size when
data arrive sequentially. While the $\theta_{0}$-posterior is monotonically
increasing (i.e., it is a submartingale) when the data are generated under that
same $\theta_{0}$, it need not be monotonically decreasing in general, not even
in terms of its overall expectation, when the data are generated under a
different $\theta_{1}.$ In fact, it may keep going up and down many times, even
in simple cases such as iid coin tosses. We obtain precise asymptotic rates
when the data come from the wide class of exponential families of
distributions; these rates imply in particular that the expectation of the
$\theta_{0}$-posterior under $\theta_{1}\neq\theta_{0}$ is eventually strictly
decreasing. Finally, we show that in a number of interesting cases this
expectation is a log-concave function of the sample size, and thus unimodal. In
the Bernoulli case we obtain this by developing an inequality that is related
to Tur\'{a}n's inequality for Legendre polynomials.","Posterior Probabilities: Nonmonotonicity, Asymptotic Rates, Log-Concavity, and Turán's Inequality",2022-09-23 20:12:35,"Sergiu Hart, Yosef Rinott","http://dx.doi.org/10.3150/21-BEJ1398, http://arxiv.org/abs/2209.11728v1, http://arxiv.org/pdf/2209.11728v1",math.ST
35769,th,"Trading on decentralized exchanges has been one of the primary use cases for
permissionless blockchains with daily trading volume exceeding billions of
U.S.~dollars. In the status quo, users broadcast transactions and miners are
responsible for composing a block of transactions and picking an execution
ordering -- the order in which transactions execute in the exchange. Due to the
lack of a regulatory framework, it is common to observe miners exploiting their
privileged position by front-running transactions and obtaining risk-free
profits. In this work, we propose to modify the interaction between miners and
users and initiate the study of {\em verifiable sequencing rules}. As in the
status quo, miners can determine the content of a block; however, they commit
to respecting a sequencing rule that constrains the execution ordering and is
verifiable (there is a polynomial time algorithm that can verify if the
execution ordering satisfies such constraints). Thus in the event a miner
deviates from the sequencing rule, anyone can generate a proof of
non-compliance. We ask if there are sequencing rules that limit price
manipulation from miners in a two-token liquidity pool exchange. Our first
result is an impossibility theorem: for any sequencing rule, there is an
instance of user transactions where the miner can obtain non-zero risk-free
profits. In light of this impossibility result, our main result is a verifiable
sequencing rule that provides execution price guarantees for users. In
particular, for any user transaction A, it ensures that either (1) the
execution price of A is at least as good as if A was the only transaction in
the block, or (2) the execution price of A is worse than this ``standalone''
price and the miner does not gain (or lose) when including A in the block.",Credible Decentralized Exchange Design via Verifiable Sequencing Rules,2022-09-30 19:28:32,"Matheus V. X. Ferreira, David C. Parkes","http://dx.doi.org/10.1145/3564246.3585233, http://arxiv.org/abs/2209.15569v2, http://arxiv.org/pdf/2209.15569v2",cs.GT
35770,th,"In electronic commerce (e-commerce)markets, a decision-maker faces a
sequential choice problem. Third-party intervention plays an important role in
making purchase decisions in this choice process. For instance, while
purchasing products/services online, a buyer's choice or behavior is often
affected by the overall reviewers' ratings, feedback, etc. Moreover, the
reviewer is also a decision-maker. After purchase, the decision-maker would put
forth their reviews for the product, online. Such reviews would affect the
purchase decision of another potential buyer, who would read the reviews before
confirming his/her final purchase. The question that arises is \textit{how
trustworthy are these review reports and ratings?} The trustworthiness of these
review reports and ratings is based on whether the reviewer is a rational or an
irrational person. Indexing the reviewer's rationality could be a way to
quantify it, but such an index does not communicate the history of
his/her behavior. In this article, the researcher aims at formally deriving a
rationality pattern function and thereby, the degree of rationality of the
decision-maker or the reviewer in the sequential choice problem in the
e-commerce markets. Applying such a rationality pattern function could make it
easier to quantify the rational behavior of an agent who participates in the
digital markets. This, in turn, is expected to minimize the information
asymmetry within the decision-making process and identify the paid reviewers or
manipulative reviews.",Measurement of Trustworthiness of the Online Reviews,2022-10-03 13:55:47,Dipankar Das,"http://arxiv.org/abs/2210.00815v2, http://arxiv.org/pdf/2210.00815v2",econ.TH
35771,th,"Let $\succsim$ be a binary relation on the set of simple lotteries over a
countable outcome set $Z$. We provide necessary and sufficient conditions on
$\succsim$ to guarantee the existence of a set $U$ of von Neumann--Morgenstern
utility functions $u: Z\to \mathbf{R}$ such that $$ p\succsim q
\,\,\,\Longleftrightarrow\,\,\, \mathbf{E}_p[u] \ge \mathbf{E}_q[u] \,\text{
for all }u \in U $$ for all simple lotteries $p,q$. In such case, the set $U$
is essentially unique. Then, we show that the analogue characterization does
not hold if $Z$ is uncountable. This provides an answer to an open question
posed by Dubra, Maccheroni, and Ok in [J. Econom. Theory~\textbf{115} (2004),
no.~1, 118--133]. Lastly, we show that different continuity requirements on
$\succsim$ allow for certain restrictions on the possible choices of the set
$U$ of utility functions (e.g., all utility functions are bounded), providing a
wide family of expected multi-utility representations.",Expected multi-utility representations of preferences over lotteries,2022-10-10 17:49:59,Paolo Leonetti,"http://arxiv.org/abs/2210.04739v1, http://arxiv.org/pdf/2210.04739v1",math.CA
35772,th,"A number of rules for resolving majority cycles in elections have been
proposed in the literature. Recently, Holliday and Pacuit (Journal of
Theoretical Politics 33 (2021) 475-524) axiomatically characterized the class
of rules refined by one such cycle-resolving rule, dubbed Split Cycle: in each
majority cycle, discard the majority preferences with the smallest majority
margin. They showed that any rule satisfying five standard axioms, plus a
weakening of Arrow's Independence of Irrelevant Alternatives (IIA) called
Coherent IIA, is refined by Split Cycle. In this paper, we go further and show
that Split Cycle is the only rule satisfying the axioms of Holliday and Pacuit
together with two additional axioms: Coherent Defeat and Positive Involvement
in Defeat. Coherent Defeat states that any majority preference not occurring in
a cycle is retained, while Positive Involvement in Defeat is closely related to
the well-known axiom of Positive Involvement (as in J. P\'{e}rez, Social Choice
and Welfare 18 (2001) 601-616). We characterize Split Cycle not only as a
collective choice rule but also as a social choice correspondence, over both
profiles of linear ballots and profiles of ballots allowing ties.",An Axiomatic Characterization of Split Cycle,2022-10-22 20:21:15,"Yifeng Ding, Wesley H. Holliday, Eric Pacuit","http://arxiv.org/abs/2210.12503v2, http://arxiv.org/pdf/2210.12503v2",econ.TH
35773,th,"While Nash equilibrium has emerged as the central game-theoretic solution
concept, many important games contain several Nash equilibria and we must
determine how to select between them in order to create real strategic agents.
Several Nash equilibrium refinement concepts have been proposed and studied for
sequential imperfect-information games, the most prominent being trembling-hand
perfect equilibrium, quasi-perfect equilibrium, and recently one-sided
quasi-perfect equilibrium. These concepts are robust to certain arbitrarily
small mistakes, and are guaranteed to always exist; however, we argue that
none of these is the correct concept for developing strong agents in
sequential games of imperfect information. We define a new equilibrium
refinement concept for extensive-form games called observable perfect
equilibrium in which the solution is robust over trembles in
publicly-observable action probabilities (not necessarily over all action
probabilities that may not be observable by opposing players). Observable
perfect equilibrium correctly captures the assumption that the opponent is
playing as rationally as possible given mistakes that have been observed (while
previous solution concepts do not). We prove that observable perfect
equilibrium is always guaranteed to exist, and demonstrate that it leads to a
different solution than the prior extensive-form refinements in no-limit poker.
We expect observable perfect equilibrium to be a useful equilibrium refinement
concept for modeling many important imperfect-information games of interest in
artificial intelligence.",Observable Perfect Equilibrium,2022-10-29 09:07:29,Sam Ganzfried,"http://arxiv.org/abs/2210.16506v8, http://arxiv.org/pdf/2210.16506v8",cs.GT
35774,th,"We study the hidden-action principal-agent problem in an online setting. In
each round, the principal posts a contract that specifies the payment to the
agent based on each outcome. The agent then makes a strategic choice of action
that maximizes her own utility, but the action is not directly observable by
the principal. The principal observes the outcome and receives utility from the
agent's choice of action. Based on past observations, the principal dynamically
adjusts the contracts with the goal of maximizing her utility.
  We introduce an online learning algorithm and provide an upper bound on its
Stackelberg regret. We show that when the contract space is $[0,1]^m$, the
Stackelberg regret is upper bounded by $\widetilde O(\sqrt{m} \cdot
T^{1-1/(2m+1)})$, and lower bounded by $\Omega(T^{1-1/(m+2)})$, where
$\widetilde O$ omits logarithmic factors. This result shows that
exponential-in-$m$ samples are sufficient and necessary to learn a near-optimal
contract, resolving an open problem on the hardness of online contract design.
Moreover, when contracts are restricted to some subset $\mathcal{F} \subset
[0,1]^m$, we define an intrinsic dimension of $\mathcal{F}$ that depends on the
covering number of the spherical code in the space and bound the regret in
terms of this intrinsic dimension. When $\mathcal{F}$ is the family of linear
contracts, we show that the Stackelberg regret grows exactly as
$\Theta(T^{2/3})$.
  The contract design problem is challenging because the utility function is
discontinuous. Bounding the discretization error in this setting has been an
open problem. In this paper, we identify a limited set of directions in which
the utility function is continuous, allowing us to design a new discretization
method and bound its error. This approach enables the first upper bound with no
restrictions on the contract and action space.",The Sample Complexity of Online Contract Design,2022-11-10 20:59:42,"Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, Michael I. Jordan","http://arxiv.org/abs/2211.05732v3, http://arxiv.org/pdf/2211.05732v3",cs.GT
35775,th,"We consider transferable utility cooperative games with infinitely many
players. In particular, we generalize the notions of core and balancedness, and
also the Bondareva-Shapley Theorem for infinite TU-games with and without
restricted cooperation, to the cases where the core consists of
$\kappa$-additive set functions.
  Our generalized Bondareva-Shapley Theorem extends previous results by
Bondareva (1963), Shapley (1967), Schmeidler (1967), Faigle (1989), Kannai
(1969), Kannai (1992), Pint\'er (2011) and Bartl and Pint\'er (2022).",The $κ$-core and the $κ$-balancedness of TU games,2022-11-10 22:58:52,"David Bartl, Miklós Pintér","http://arxiv.org/abs/2211.05843v1, http://arxiv.org/pdf/2211.05843v1",math.OC
35776,th,"We propose a dynamical model of price formation on a spatial market where
sellers and buyers are placed on the nodes of a graph, and the distribution of
the buyers depends on the positions and prices of the sellers. We find that,
depending on the positions of the sellers and on the level of information
available, the price dynamics of our model can either converge to fixed prices,
or produce cycles of different amplitudes and periods. We show how to measure
the strength of competition in a spatial network by extracting the exponent of
the scaling of the prices with the size of the system. As an application, we
characterize the different levels of competition in street networks of real
cities across the globe. Finally, using the model dynamics we can define a
novel measure of node centrality, which quantifies the relevance of a node in a
competitive market.",Model of spatial competition on discrete markets,2022-11-14 17:40:15,"Andrea Civilini, Vito Latora","http://arxiv.org/abs/2211.07412v1, http://arxiv.org/pdf/2211.07412v1",physics.soc-ph
35777,th,"We propose a social welfare maximizing mechanism for an energy community that
aggregates individual and shared community resources under a general net energy
metering (NEM) policy. Referred to as Dynamic NEM, the proposed mechanism
adopts the standard NEM tariff model and sets NEM prices dynamically based on
the total shared renewables within the community. We show that Dynamic NEM
guarantees a higher benefit to each community member than possible outside the
community. We further show that Dynamic NEM aligns the individual member's
incentive with that of the overall community; each member optimizing individual
surplus under Dynamic NEM results in maximum community's social welfare.
Dynamic NEM is also shown to satisfy the cost-causation principle. Empirical
studies using real data on a hypothetical energy community demonstrate the
benefits to community members and grid operators.",Achieving Social Optimality for Energy Communities via Dynamic NEM Pricing,2022-11-17 09:20:52,"Ahmed S. Alahmed, Lang Tong","http://arxiv.org/abs/2211.09360v1, http://arxiv.org/pdf/2211.09360v1",eess.SY
35778,th,"Game theory largely rests on the availability of cardinal utility functions.
In contrast, only ordinal preferences are elicited in fields such as matching
under preferences. The literature focuses on mechanisms with simple dominant
strategies. However, many real-world applications do not have dominant
strategies, so intensities between preferences matter when participants
determine their strategies. Even though precise information about cardinal
utilities is unavailable, some data about the likelihood of utility functions
is typically accessible. We propose to use Bayesian games to formalize
uncertainty about decision-makers' utilities by viewing them as a collection of
normal-form games where uncertainty about types persists in all game stages.
Instead of searching for the Bayes-Nash equilibrium, we consider the question
of how uncertainty in utilities is reflected in uncertainty of strategic play.
We introduce $\alpha$-Rank-collections as a solution concept that extends
$\alpha$-Rank, a new solution concept for normal-form games, to Bayesian games.
This allows us to analyze the strategic play in, for example,
(non-strategyproof) matching markets, for which we do not have appropriate
solution concepts so far. $\alpha$-Rank-collections characterize a range of
strategy-profiles emerging from replicator dynamics of the game rather than
an equilibrium point. We prove that $\alpha$-Rank-collections are invariant to
positive affine transformations, and that they are efficient to approximate. An
instance of the Boston mechanism is used to illustrate the new solution
concept.",$α$-Rank-Collections: Analyzing Expected Strategic Behavior with Uncertain Utilities,2022-11-18 19:17:27,"Fabian R. Pieroth, Martin Bichler","http://arxiv.org/abs/2211.10317v1, http://arxiv.org/pdf/2211.10317v1",cs.GT
35779,th,"Using simulations between pairs of $\epsilon$-greedy q-learners with
one-period memory, this article demonstrates that the potential function of the
stochastic replicator dynamics (Foster and Young, 1990) allows it to predict
the emergence of error-proof cooperative strategies from the underlying
parameters of the repeated prisoner's dilemma. The observed cooperation rates
between q-learners are related to the ratio between the kinetic energy exerted
by the polar attractors of the replicator dynamics under the grim trigger
strategy. The frontier separating the parameter space conducive to cooperation
from the parameter space dominated by defection can be found by setting the
kinetic energy ratio equal to a critical value, which is a function of the
discount factor, $f(\delta) = \delta/(1-\delta)$, multiplied by a correction
term to account for the effect of the algorithms' exploration probability. The
gradient at the frontier increases with the distance between the game
parameters and the hyperplane that characterizes the incentive compatibility
constraint for cooperation under grim trigger.
  Building on literature from the neurosciences, which suggests that
reinforcement learning is useful to understanding human behavior in risky
environments, the article further explores the extent to which the frontier
derived for q-learners also explains the emergence of cooperation between
humans. Using metadata from laboratory experiments that analyze human choices
in the infinitely repeated prisoner's dilemma, the cooperation rates between
humans are compared to those observed between q-learners under similar
conditions. The correlation coefficients between the cooperation rates observed
for humans and those observed for q-learners are consistently above $0.8$. The
frontier derived from the simulations between q-learners is also found to
predict the emergence of cooperation between humans.",On the Emergence of Cooperation in the Repeated Prisoner's Dilemma,2022-11-24 20:27:29,Maximilian Schaefer,"http://arxiv.org/abs/2211.15331v2, http://arxiv.org/pdf/2211.15331v2",econ.TH
35780,th,"In many real-world settings agents engage in strategic interactions with
multiple opposing agents who can employ a wide variety of strategies. The
standard approach for designing agents for such settings is to compute or
approximate a relevant game-theoretic solution concept such as Nash equilibrium
and then follow the prescribed strategy. However, such a strategy ignores any
observations of opponents' play, which may indicate shortcomings that can be
exploited. We present an approach for opponent modeling in multiplayer
imperfect-information games where we collect observations of opponents' play
through repeated interactions. We run experiments against a wide variety of
real opponents and exact Nash equilibrium strategies in three-player Kuhn poker
and show that our algorithm significantly outperforms all of the agents,
including the exact Nash equilibrium strategies.",Bayesian Opponent Modeling in Multiplayer Imperfect-Information Games,2022-12-12 19:48:53,"Sam Ganzfried, Kevin A. Wang, Max Chiswick","http://arxiv.org/abs/2212.06027v3, http://arxiv.org/pdf/2212.06027v3",cs.GT
35781,th,"The quality of consequences in a decision making problem under (severe)
uncertainty must often be compared among different targets (goals, objectives)
simultaneously. In addition, the evaluations of a consequence's performance
under the various targets often differ in their scale of measurement,
classically being either purely ordinal or perfectly cardinal. In this paper,
we transfer recent developments from abstract decision theory with incomplete
preferential and probabilistic information to this multi-target setting and
show how -- by exploiting the (potentially) partial cardinal and partial
probabilistic information -- more informative orders for comparing decisions
can be given than the Pareto order. We discuss some interesting properties of
the proposed orders between decision options and show how they can be
concretely computed by linear optimization. We conclude the paper by
demonstrating our framework in an artificial (but quite real-world) example in
the context of comparing algorithms under different performance measures.",Multi-Target Decision Making under Conditions of Severe Uncertainty,2022-12-13 14:47:02,"Christoph Jansen, Georg Schollmeyer, Thomas Augustin","http://arxiv.org/abs/2212.06832v1, http://arxiv.org/pdf/2212.06832v1",cs.AI
35782,th,"This paper is intended to investigate the dynamics of heterogeneous Cournot
duopoly games, where the first players adopt identical gradient adjustment
mechanisms but the second players are endowed with distinct rationality levels.
Based on tools of symbolic computation, we introduce a new approach and use it
to establish rigorous conditions of the local stability for these models. We
analytically investigate the bifurcations and prove that the period-doubling
bifurcation is the only possible bifurcation that may occur for all the
considered models. The most important finding of our study concerns the
influence of players' rationality levels on the stability of heterogeneous
duopolistic competition. We derive that the stability region of the model
where the second firm is rational is the smallest, while that of the one where
the second firm is boundedly rational is the largest. This fact is
counterintuitive and contrasts with relative conclusions in the existing
literature. Furthermore, we also provide numerical simulations to demonstrate
the emergence of complex dynamics such as periodic solutions with different
orders and strange attractors.",Influence of rationality levels on dynamics of heterogeneous Cournot duopolists with quadratic costs,2022-12-14 12:27:06,"Xiaoliang Li, Yihuo Jiang","http://arxiv.org/abs/2212.07128v1, http://arxiv.org/pdf/2212.07128v1",econ.TH
35783,th,"We study various novel complexity measures for two-sided matching mechanisms,
applied to the two canonical strategyproof matching mechanisms, Deferred
Acceptance (DA) and Top Trading Cycles (TTC). Our metrics are designed to
capture the complexity of various structural (rather than computational)
concerns, in particular ones of recent interest from economics. We consider a
canonical, flexible approach to formalizing our questions: define a protocol or
data structure performing some task, and bound the number of bits that it
requires. Our results apply this approach to four questions of general
interest; for matching applicants to institutions, we ask:
  (1) How can one applicant affect the outcome matching?
  (2) How can one applicant affect another applicant's set of options?
  (3) How can the outcome matching be represented / communicated?
  (4) How can the outcome matching be verified?
  We prove that DA and TTC are comparable in complexity under questions (1) and
(4), giving new tight lower-bound constructions and new verification protocols.
Under questions (2) and (3), we prove that TTC is more complex than DA. For
question (2), we prove this by giving a new characterization of which
institutions are removed from each applicant's set of options when a new
applicant is added in DA; this characterization may be of independent interest.
For question (3), our result gives lower bounds proving the tightness of
existing constructions for TTC. This shows that the relationship between the
matching and the priorities is more complex in TTC than in DA, formalizing
previous intuitions from the economics literature. Together, our results
complement recent work that models the complexity of observing
strategyproofness and shows that DA is more complex than TTC. This emphasizes
that diverse considerations must factor into gauging the complexity of matching
mechanisms.",Structural Complexities of Matching Mechanisms,2022-12-16 23:53:30,"Yannai A. Gonczarowski, Clayton Thomas","http://arxiv.org/abs/2212.08709v2, http://arxiv.org/pdf/2212.08709v2",cs.GT
35784,th,"Given the wealth inequality worldwide, there is an urgent need to identify
the mode of wealth exchange through which it arises. To address the research
gap regarding models that combine equivalent exchange and redistribution, this
study compares an equivalent market exchange with redistribution based on power
centers and a nonequivalent exchange with mutual aid using the Polanyi,
Graeber, and Karatani modes of exchange. Two new exchange models based on
multi-agent interactions are reconstructed following an econophysics approach
for evaluating the Gini index (inequality) and total exchange (economic flow).
Exchange simulations indicate that the evaluation parameter, defined as the
total exchange divided by the Gini index, can be expressed by the same
saturated curvilinear approximate equation in terms of the wealth transfer
rate and the time period of redistribution, and of the surplus contribution
rate of the wealthy and the saving rate. However, considering the coercion of
taxes and its associated costs, and the independence based on the morality of
mutual aid, a nonequivalent exchange without return obligation is preferred.
This is oriented toward
Graeber's baseline communism and Karatani's mode of exchange D, with
implications for alternatives to the capitalist economy.",Wealth Redistribution and Mutual Aid: Comparison using Equivalent/Nonequivalent Exchange Models of Econophysics,2022-12-31 04:37:26,Takeshi Kato,"http://dx.doi.org/10.3390/e25020224, http://arxiv.org/abs/2301.00091v1, http://arxiv.org/pdf/2301.00091v1",econ.TH
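
A minimal sketch accompanying the preceding record (arXiv:2301.00091): the Gini-index computation on which the abstract's evaluation parameter (total exchange divided by the Gini index) relies. The exchange models themselves are not reproduced; the exponential wealth sample is only an illustrative input.

import numpy as np

def gini(wealth):
    """Gini index of a non-negative wealth vector (0 = perfect equality)."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n, total = w.size, w.sum()
    if n == 0 or total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Rank-based formula for the Gini coefficient of a sorted sample.
    return (2.0 * np.sum(ranks * w)) / (n * total) - (n + 1.0) / n

# Illustrative usage: exponentially distributed wealth has a Gini index of about 0.5.
wealth = np.random.default_rng(0).exponential(scale=1.0, size=10_000)
total_exchange = wealth.sum()  # stand-in for the simulated economic flow
print(round(gini(wealth), 3), round(total_exchange / gini(wealth), 1))
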
35830,th,"Cryptocurrencies come with a variety of tokenomic policies as well as
aspirations of desirable monetary characteristics that have been described by
proponents as 'sound money' or even 'ultra sound money.' These propositions are
typically devoid of economic analysis, so it is a pertinent question how such
aspirations fit in the wider context of monetary economic theory. In this work,
we develop a framework that determines the optimal token supply policy of a
cryptocurrency, as well as investigate how such policy may be algorithmically
implemented. Our findings suggest that the optimal policy complies with the
Friedman rule and depends on the risk-free rate, as well as on the growth
of the cryptocurrency platform. Furthermore, we demonstrate a wide set of
conditions under which such policy can be implemented via contractions and
expansions of token supply that can be realized algorithmically with block
rewards, taxation of consumption and burning the proceeds, and blockchain
oracles.",Would Friedman Burn your Tokens?,2023-06-29 18:19:13,"Aggelos Kiayias, Philip Lazos, Jan Christoph Schlegel","http://arxiv.org/abs/2306.17025v1, http://arxiv.org/pdf/2306.17025v1",econ.TH
35785,th,"We initiate the study of statistical inference and A/B testing for
first-price pacing equilibria (FPPE). The FPPE model captures the dynamics
resulting from large-scale first-price auction markets where buyers use
pacing-based budget management. Such markets arise in the context of internet
advertising, where budgets are prevalent.
  We propose a statistical framework for the FPPE model, in which a limit FPPE
with a continuum of items models the long-run steady-state behavior of the
auction platform, and an observable FPPE consisting of a finite number of items
provides the data to estimate primitives of the limit FPPE, such as revenue,
Nash social welfare (a fair metric of efficiency), and other parameters of
interest. We develop central limit theorems and asymptotically valid confidence
intervals. Furthermore, we establish the asymptotic local minimax optimality of
our estimators. We then show that the theory can be used for conducting
statistically valid A/B testing on auction platforms. Numerical simulations
verify our central limit theorems, and empirical coverage rates for our
confidence intervals agree with our theory.",Statistical Inference and A/B Testing for First-Price Pacing Equilibria,2023-01-05 22:37:49,"Luofeng Liao, Christian Kroer","http://arxiv.org/abs/2301.02276v3, http://arxiv.org/pdf/2301.02276v3",math.ST
35786,th,"We study a sufficiently general regret criterion for choosing between two
probabilistic lotteries. For independent lotteries, the criterion is consistent
with stochastic dominance and can be made transitive by a unique choice of the
regret function. Together with an additional (and intuitively meaningful)
super-additivity property, the regret criterion resolves Allais' paradox,
including the cases where the paradox disappears and the choices agree with the
expected utility. This super-additivity property is also employed for
establishing consistency between regret and stochastic dominance for dependent
lotteries. Furthermore, we demonstrate how the regret criterion can be used in
Savage's omelet, a classical decision problem in which the lottery outcomes are
not fully resolved. The expected utility cannot be used in such situations, as
it discards important aspects of lotteries.","Regret theory, Allais' Paradox, and Savage's omelet",2023-01-06 13:10:14,"Vardan G. Bardakhchyan, Armen E. Allahverdyan","http://dx.doi.org/10.1016/j.jmp.2023.102807, http://arxiv.org/abs/2301.02447v1, http://arxiv.org/pdf/2301.02447v1",econ.TH
35787,th,"The main ambition of this thesis is to contribute to the development of
cooperative game theory towards combinatorics, algorithmics and discrete
geometry. Therefore, the first chapter of this manuscript is devoted to
highlighting the geometric nature of the coalition functions of transferable
utility games and spotlights the existing connections with the theory of
submodular set functions and polyhedral geometry.
  To deepen the links with polyhedral geometry, we define a new family of
polyhedra, called the basic polyhedra, on which we can apply a generalized
version of the Bondareva-Shapley Theorem to check their nonemptiness. To allow
a practical use of these computational tools, we present an algorithmic
procedure generating the minimal balanced collections, based on Peleg's method.
Subsequently, we apply the generalization of the Bondareva-Shapley Theorem to
design a collection of algorithmic procedures able to check properties or
generate specific sets of coalitions.
  In the next chapter, the connections with combinatorics are investigated.
First, we prove that the balanced collections form a combinatorial species, and
we construct the species of k-uniform hypergraphs of size p, as an intermediary
step towards constructing the species of balanced collections. Afterwards, a few
results concerning resonance arrangements distorted by games are introduced,
which gives new information about the space of preimputations and the facial
configuration of the core.
  Finally, we address the question of core stability using the results from the
previous chapters. Firstly, we present an algorithm based on Grabisch and
Sudh\""olter's nested balancedness characterization of games with a stable core,
which extensively uses the generalization of the Bondareva-Shapley Theorem
introduced in the second chapter. Secondly, a new necessary condition for core
stability is described, based on the application ...",Geometry of Set Functions in Game Theory: Combinatorial and Computational Aspects,2023-01-08 03:19:51,Dylan Laplace Mermoud,"http://arxiv.org/abs/2301.02950v2, http://arxiv.org/pdf/2301.02950v2",cs.GT
35788,th,"We consider the obnoxious facility location problem (in which agents prefer
the facility location to be far from them) and propose a hierarchy of
distance-based proportional fairness concepts for the problem. These fairness
axioms ensure that groups of agents at the same location are guaranteed to be
at a distance from the facility proportional to their group size. We consider
deterministic and randomized mechanisms, and compute tight bounds on the price
of proportional fairness. In the deterministic setting, not only are our
proportional fairness axioms incompatible with strategyproofness, the Nash
equilibria may not guarantee welfare within a constant factor of the optimal
welfare. On the other hand, in the randomized setting, we identify
proportionally fair and strategyproof mechanisms that give an expected welfare
within a constant factor of the optimal welfare.",Proportional Fairness in Obnoxious Facility Location,2023-01-11 10:30:35,"Haris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani, Toby Walsh","http://arxiv.org/abs/2301.04340v1, http://arxiv.org/pdf/2301.04340v1",cs.GT
35789,th,"Cake cutting is a classic fair division problem, with the cake serving as a
metaphor for a heterogeneous divisible resource. Recently, it was shown that
for any number of players with arbitrary preferences over a cake, it is
possible to partition the players into groups of any desired size and divide
the cake among the groups so that each group receives a single contiguous piece
and every player is envy-free. For two groups, we characterize the group sizes
for which such an assignment can be computed by a finite algorithm, showing
that the task is possible exactly when one of the groups is a singleton. We
also establish an analogous existence result for chore division, and show that
the result does not hold for a mixed cake.",Cutting a Cake Fairly for Groups Revisited,2023-01-22 08:29:42,"Erel Segal-Halevi, Warut Suksompong","http://dx.doi.org/10.1080/00029890.2022.2153566, http://arxiv.org/abs/2301.09061v1, http://arxiv.org/pdf/2301.09061v1",econ.TH
35790,th,"Pen testing is the problem of selecting high-capacity resources when the only
way to measure the capacity of a resource expends its capacity. We have a set
of $n$ pens with unknown amounts of ink and our goal is to select a feasible
subset of pens maximizing the total ink in them. We are allowed to gather more
information by writing with them, but this uses up ink that was previously in
the pens. Algorithms are evaluated against the standard benchmark, i.e., the
optimal pen testing algorithm, and the omniscient benchmark, i.e., the optimal
selection if the quantities of ink in the pens were known.
  We identify optimal and near-optimal pen testing algorithms by drawing
analogies to auction-theoretic frameworks of deferred-acceptance auctions and
virtual values. Our framework allows the conversion of any near-optimal
deferred-acceptance mechanism into a near-optimal pen testing algorithm.
Moreover, these algorithms guarantee an additional overhead of at most
$(1+o(1)) \ln n$ in the approximation factor of the omniscient benchmark. We
use this framework to give pen testing algorithms for various combinatorial
constraints like matroid, knapsack, and general downward-closed constraints and
also for online environments.",Combinatorial Pen Testing (or Consumer Surplus of Deferred-Acceptance Auctions),2023-01-29 18:19:37,"Aadityan Ganesh, Jason Hartline","http://arxiv.org/abs/2301.12462v2, http://arxiv.org/pdf/2301.12462v2",cs.GT
35791,th,"The core is a dominant solution concept in economics and cooperative game
theory; it is predominantly used for sharing profit (equivalently, cost or
utility). This paper demonstrates the versatility of this notion by proposing a
completely different use: in a so-called investment management game, which is a
game against nature rather than a cooperative game. This game has only one
agent whose strategy set is all possible ways of distributing her money among
investment firms. The agent wants to pick a strategy such that in each of
exponentially many future scenarios, sufficient money is available in the right
firms so she can buy an optimal investment for that scenario. Such a strategy
constitutes a core imputation under a broad interpretation, though within the
traditional formal framework, of the core. Our game is defined on perfect
graphs, since the
maximum stable set problem can be solved in polynomial time for such graphs. We
completely characterize the core of this game, analogous to Shapley and Shubik's
characterization of the core of the assignment game. A key difference is the
following technical novelty: whereas their characterization follows from total
unimodularity, ours follows from total dual integrality.",The Investment Management Game: Extending the Scope of the Notion of Core,2023-02-01 20:17:16,Vijay V. Vazirani,"http://arxiv.org/abs/2302.00608v5, http://arxiv.org/pdf/2302.00608v5",econ.TH
35792,th,"The classic Bayesian persuasion model assumes a Bayesian and best-responding
receiver. We study a relaxation of the Bayesian persuasion model where the
receiver can approximately best respond to the sender's signaling scheme. We
show that, under natural assumptions, (1) the sender can find a signaling
scheme that guarantees itself an expected utility almost as good as its optimal
utility in the classic model, no matter what approximately best-responding
strategy the receiver uses; (2) on the other hand, there is no signaling scheme
that gives the sender much more utility than its optimal utility in the classic
model, even if the receiver uses the approximately best-responding strategy
that is best for the sender. Together, (1) and (2) imply that the approximately
best-responding behavior of the receiver does not affect the sender's maximal
achievable utility a lot in the Bayesian persuasion problem. The proofs of both
results rely on the idea of robustification of a Bayesian persuasion scheme:
given a pair of the sender's signaling scheme and the receiver's strategy, we
can construct another signaling scheme such that the receiver prefers to use
that strategy in the new scheme more than in the original scheme, and the two
schemes give the sender similar utilities. As an application of our main result
(1), we show that, in a repeated Bayesian persuasion model where the receiver
learns to respond to the sender by some algorithms, the sender can do almost as
well as in the classic model. Interestingly, unlike (2), with a learning
receiver the sender can sometimes do much better than in the classic model.",Persuading a Behavioral Agent: Approximately Best Responding and Learning,2023-02-07 22:12:46,"Yiling Chen, Tao Lin","http://arxiv.org/abs/2302.03719v1, http://arxiv.org/pdf/2302.03719v1",cs.GT
35793,th,"This paper uses value functions to characterize the pure-strategy
subgame-perfect equilibria of an arbitrary, possibly infinite-horizon game. It
specifies the game's extensive form as a pentaform (Streufert 2023p,
arXiv:2107.10801v4), which is a set of quintuples formalizing the abstract
relationships between nodes, actions, players, and situations (situations
generalize information sets). Because a pentaform is a set, this paper can
explicitly partition the game form into piece forms, each of which starts at a
(Selten) subroot and contains all subsequent nodes except those that follow a
subsequent subroot. Then the set of subroots becomes the domain of a value
function, and the piece-form partition becomes the framework for a value
recursion which generalizes the Bellman equation from dynamic programming. The
main results connect the value recursion with the subgame-perfect equilibria of
the original game, under the assumptions of upper- and lower-convergence.
Finally, a corollary characterizes subgame perfection as the absence of an
improving one-piece deviation.",Dynamic Programming for Pure-Strategy Subgame Perfection in an Arbitrary Game,2023-02-08 06:16:24,Peter A. Streufert,"http://arxiv.org/abs/2302.03855v3, http://arxiv.org/pdf/2302.03855v3",econ.TH
35794,th,"A powerful feature in mechanism design is the ability to irrevocably commit
to the rules of a mechanism. Commitment is achieved by public declaration,
which enables players to verify incentive properties in advance and the outcome
in retrospect. However, public declaration can reveal superfluous information
that the mechanism designer might prefer not to disclose, such as her target
function or private costs. Avoiding this may be possible via a trusted
mediator; however, the availability of a trusted mediator, especially if
mechanism secrecy must be maintained for years, might be unrealistic. We
propose a new approach to commitment, and show how to commit to, and run, any
given mechanism without disclosing it, while enabling the verification of
incentive properties and the outcome -- all without the need for any mediators.
Our framework is based on zero-knowledge proofs -- a cornerstone of modern
cryptographic theory. Applications include non-mediated bargaining with hidden
yet binding offers.",Zero-Knowledge Mechanisms,2023-02-11 06:43:43,"Ran Canetti, Amos Fiat, Yannai A. Gonczarowski","http://arxiv.org/abs/2302.05590v1, http://arxiv.org/pdf/2302.05590v1",econ.TH
35795,th,"Task allocation is a crucial process in modern systems, but it is often
challenged by incomplete information about the utilities of participating
agents. In this paper, we propose a new profit maximization mechanism for the
task allocation problem, where the task publisher seeks an optimal incentive
function to maximize its own profit and simultaneously ensure the truthful
announcing of the agent's private information (type) and its participation in
the task, while an autonomous agent aims at maximizing its own utility function
by deciding on its participation level and announced type. Our mechanism stands
out from the classical contract theory-based truthful mechanisms as it empowers
agents to make their own decisions about their level of involvement, making it
more practical for many real-world task allocation scenarios. We prove that
the mechanism's goals are met by considering a linear form of incentive
function consisting of two decision functions for the task publisher.
The proposed truthful mechanism is initially modeled as a non-convex functional
optimization with a double continuum of constraints; nevertheless, we
demonstrate that by deriving an equivalent form of the incentive constraints,
it can be reformulated as a tractable convex optimal control problem. Further,
we propose a numerical algorithm to obtain the solution.",A Tractable Truthful Profit Maximization Mechanism Design with Autonomous Agents,2023-02-11 15:22:57,"Mina Montazeri, Hamed Kebriaei, Babak N. Araabi","http://arxiv.org/abs/2302.05677v1, http://arxiv.org/pdf/2302.05677v1",econ.TH
35796,th,"In domains where agents interact strategically, game theory is applied widely
to predict how agents would behave. However, game-theoretic predictions are
based on the assumption that agents are fully rational and believe in
equilibrium plays, which unfortunately are mostly not true when human decision
makers are involved. To address this limitation, a number of behavioral
game-theoretic models are defined to account for the limited rationality of
human decision makers. The ""quantal cognitive hierarchy"" (QCH) model, which is
one of the more recent models, is demonstrated to be the state-of-art model for
predicting human behaviors in normal-form games. The QCH model assumes that
agents in games can be both non-strategic (level-0) and strategic (level-$k$).
For level-0 agents, they choose their strategies irrespective of other agents.
For level-$k$ agents, they assume that other agents would be behaving at levels
less than $k$ and best respond against them. However, an important assumption
of the QCH model is that the distribution of agents' levels follows a Poisson
distribution. In this paper, we relax this assumption and design a
learning-based method at the population level to iteratively estimate the
empirical distribution of agents' reasoning levels. By using a real-world
dataset from the Swedish lowest unique positive integer game, we demonstrate
how our refined QCH model and the iterative solution-seeking process can be
used in providing a more accurate behavioral model for agents. This leads to
better performance in fitting the real data and allows us to track an agent's
progress in learning to play strategically over multiple rounds.",Improving Quantal Cognitive Hierarchy Model Through Iterative Population Learning,2023-02-13 03:23:26,"Yuhong Xu, Shih-Fen Cheng, Xinyu Chen","http://arxiv.org/abs/2302.06033v2, http://arxiv.org/pdf/2302.06033v2",cs.GT
35797,th,"Recommendation systems are pervasive in the digital economy. An important
assumption in many deployed systems is that user consumption reflects user
preferences in a static sense: users consume the content they like with no
other considerations in mind. However, as we document in a large-scale online
survey, users do choose content strategically to influence the types of content
they get recommended in the future.
  We model this user behavior as a two-stage noisy signalling game between the
recommendation system and users: the recommendation system initially commits to
a recommendation policy and presents content to the users during a cold-start
phase, which the users choose to strategically consume in order to affect the
types of content they will be recommended in the subsequent recommendation phase. We show
that in equilibrium, users engage in behaviors that accentuate their
differences from users of different preference profiles. In addition,
(statistical) minorities, out of fear of losing exposure to their minority
content, may not consume content that is liked by mainstream users. We next
propose three interventions that may improve recommendation quality (both on
average and for minorities) when taking into account strategic consumption: (1)
Adopting a recommendation system policy that uses preferences from a prior, (2)
Communicating to users that universally liked (""mainstream"") content will not
be used as the basis of recommendation, and (3) Serving content that is
personalized-enough yet expected to be liked in the beginning. Finally, we
describe a methodology to inform applied theory modeling with survey results.",Recommending to Strategic Users,2023-02-13 20:57:30,"Andreas Haupt, Dylan Hadfield-Menell, Chara Podimata","http://arxiv.org/abs/2302.06559v1, http://arxiv.org/pdf/2302.06559v1",cs.CY
35798,th,"LP-duality theory has played a central role in the study of the core, right
from its early days to the present time. However, despite the extensive nature
of this work, basic gaps still remain. We address these gaps using the
following building blocks from LP-duality theory: 1. Total unimodularity (TUM).
2. Complementary slackness conditions and strict complementarity. Our
exploration of TUM leads to defining new games, characterizing their cores and
giving novel ways of using core imputations to enforce constraints that arise
naturally in applications of these games. The latter include: 1. Efficient
algorithms for finding min-max fair, max-min fair and equitable core
imputations. 2. Encouraging diversity and avoiding over-representation in a
generalization of the assignment game. Complementarity enables us to prove new
properties of core imputations of the assignment game and its generalizations.",LP-Duality Theory and the Cores of Games,2023-02-15 15:46:50,Vijay V. Vazirani,"http://arxiv.org/abs/2302.07627v5, http://arxiv.org/pdf/2302.07627v5",cs.GT
35799,th,"Utilities and transmission system operators (TSO) around the world implement
demand response programs for reducing electricity consumption by sending
information on the state of balance between supply and demand to end-use consumers.
We construct a Bayesian persuasion model to analyse such demand response
programs. Using a simple model consisting of two time steps for contract
signing and invoking, we analyse the relation between the pricing of
electricity and the incentives of the TSO to garble information about the true
state of generation. We show that if electricity is priced at its
marginal cost of production, the TSO has no incentive to lie and always tells
the truth. On the other hand, we provide conditions where overpricing of
electricity leads the TSO to provide no information to the consumer.",Signalling for Electricity Demand Response: When is Truth Telling Optimal?,2023-02-24 20:36:42,"Rene Aid, Anupama Kowli, Ankur A. Kulkarni","http://arxiv.org/abs/2302.12770v3, http://arxiv.org/pdf/2302.12770v3",eess.SY
35800,th,"We propose a novel model for refugee housing respecting the preferences of
accepting community and refugees themselves. In particular, we are given a
topology representing the local community, a set of inhabitants occupying some
vertices of the topology, and a set of refugees that should be housed on the
empty vertices of the graph. Both the inhabitants and the refugees have preferences
over the structure of their neighbourhood.
  We are specifically interested in the problem of finding housings such that
the preferences of every individual are met; in game-theoretic terms, we
are looking for housings that are stable with respect to some well-defined
notion of stability. We investigate conditions under which the existence of
equilibria is guaranteed and study the computational complexity of finding such
a stable outcome. As the problem is NP-hard even in very simple settings, we
employ the parameterised complexity framework to give a finer-grained view on
the problem's complexity with respect to natural parameters and structural
restrictions of the given topology.",Host Community Respecting Refugee Housing,2023-02-27 20:42:03,"Dušan Knop, Šimon Schierreich","http://arxiv.org/abs/2302.13997v2, http://arxiv.org/pdf/2302.13997v2",cs.GT
35885,th,"There is increasing regulatory interest in whether machine learning
algorithms deployed in consequential domains (e.g. in criminal justice) treat
different demographic groups ""fairly."" However, there are several proposed
notions of fairness, typically mutually incompatible. Using criminal justice as
an example, we study a model in which society chooses an incarceration rule.
Agents of different demographic groups differ in their outside options (e.g.
opportunity for legal employment) and decide whether to commit crimes. We show
that equalizing type I and type II errors across groups is consistent with the
goal of minimizing the overall crime rate; other popular notions of fairness
are not.",Fair Prediction with Endogenous Behavior,2020-02-18 19:07:25,"Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra","http://arxiv.org/abs/2002.07147v1, http://arxiv.org/pdf/2002.07147v1",econ.TH
35801,th,"In decentralized finance (""DeFi""), automated market makers (AMMs) enable
traders to programmatically exchange one asset for another. Such trades are
enabled by the assets deposited by liquidity providers (LPs). The goal of this
paper is to characterize and interpret the optimal (i.e., profit-maximizing)
strategy of a monopolist liquidity provider, as a function of that LP's beliefs
about asset prices and trader behavior. We introduce a general framework for
reasoning about AMMs based on a Bayesian-like belief inference framework, where
LPs maintain an asset price estimate. In this model, the market maker (i.e.,
LP) chooses a demand curve that specifies the quantity of a risky asset to be
held at each dollar price. Traders arrive sequentially and submit a price bid
that can be interpreted as their estimate of the risky asset price; the AMM
responds to this submitted bid with an allocation of the risky asset to the
trader, a payment that the trader must pay, and a revised internal estimate for
the true asset price. We define an incentive-compatible (IC) AMM as one in
which a trader's optimal strategy is to submit its true estimate of the asset
price, and characterize the IC AMMs as those with downward-sloping demand
curves and payments defined by a formula familiar from Myerson's optimal
auction theory. We generalize Myerson's virtual values, and characterize the
profit-maximizing IC AMM. The optimal demand curve generally has a jump that
can be interpreted as a ""bid-ask spread,"" which we show is caused by a
combination of adverse selection risk (dominant when the degree of information
asymmetry is large) and monopoly pricing (dominant when asymmetry is small).
This work opens up new research directions into the study of automated exchange
mechanisms from the lens of optimal auction theory and iterative belief
inference, using tools of theoretical computer science in a novel way.",A Myersonian Framework for Optimal Liquidity Provision in Automated Market Makers,2023-03-01 06:21:29,"Jason Milionis, Ciamac C. Moallemi, Tim Roughgarden","http://dx.doi.org/10.4230/LIPIcs.ITCS.2024.80, http://arxiv.org/abs/2303.00208v2, http://arxiv.org/pdf/2303.00208v2",cs.GT
35802,th,"This paper investigates the moral hazard problem in finite horizon with both
continuous and lump-sum payments, involving a time-inconsistent sophisticated
agent and a standard utility maximiser principal. Building upon the so-called
dynamic programming approach in Cvitani\'c, Possama\""i, and Touzi [18] and the
recently available results in Hern\'andez and Possama\""i [43], we present a
methodology that covers the previous contracting problem. Our main contribution
consists in a characterisation of the moral hazard problem faced by the
principal. In particular, it shows that under relatively mild technical
conditions on the data of the problem, the supremum of the principal's expected
utility over a smaller restricted family of contracts is equal to the supremum
over all feasible contracts. Nevertheless, this characterisation yields, as far
as we know, a novel class of control problems that involve the control of a
forward Volterra equation via Volterra-type controls, and infinite-dimensional
stochastic target constraints. Despite the inherent challenges associated with
such a problem, we study the solution under three different specifications of
utility functions for both the agent and the principal, and draw qualitative
implications from the form of the optimal contract. The general case remains
the subject of future research.",Time-inconsistent contract theory,2023-03-03 00:52:39,"Camilo Hernández, Dylan Possamaï","http://arxiv.org/abs/2303.01601v1, http://arxiv.org/pdf/2303.01601v1",econ.TH
35803,th,"Information design in an incomplete information game includes a designer with
the goal of influencing players' actions through signals generated from a
designed probability distribution so that its objective function is optimized.
We consider a setting in which the designer has partial knowledge on agents'
utilities. We address the uncertainty about players' preferences by formulating
a robust information design problem against worst case payoffs. If the players
have quadratic payoffs that depend on the players' actions and an unknown
payoff-relevant state, and signals on the state that follow a Gaussian
distribution conditional on the state realization, then the information design
problem under quadratic design objectives is a semidefinite program (SDP).
Specifically, we consider ellipsoid perturbations over payoff coefficients in
linear-quadratic-Gaussian (LQG) games. We show that this leads to a tractable
robust SDP formulation. Numerical studies are carried out to identify the
relation between the perturbation levels and the optimal information
structures.",Robust Social Welfare Maximization via Information Design in Linear-Quadratic-Gaussian Games,2023-03-09 21:40:39,"Furkan Sezer, Ceyhun Eksin","http://arxiv.org/abs/2303.05489v2, http://arxiv.org/pdf/2303.05489v2",math.OC
35804,th,"Consider a normal location model $X \mid \theta \sim N(\theta, \sigma^2)$
with known $\sigma^2$. Suppose $\theta \sim G_0$, where the prior $G_0$ has
zero mean and unit variance. Let $G_1$ be a possibly misspecified prior with
zero mean and unit variance. We show that the squared error Bayes risk of the
posterior mean under $G_1$ is bounded, uniformly over $G_0, G_1, \sigma^2 > 0$.",Mean-variance constrained priors have finite maximum Bayes risk in the normal location model,2023-03-15 17:36:08,Jiafeng Chen,"http://arxiv.org/abs/2303.08653v1, http://arxiv.org/pdf/2303.08653v1",math.ST
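
A small worked special case related to the preceding record (arXiv:2303.08653), assuming (beyond the abstract) that the prior is exactly standard normal and correctly specified; it only illustrates the flavour of the uniform bound the paper proves for general mean-variance constrained, possibly misspecified priors.

\[
\theta \sim N(0,1),\quad X \mid \theta \sim N(\theta,\sigma^2)
\;\Longrightarrow\;
\mathbb{E}[\theta \mid X] = \frac{X}{1+\sigma^2},
\qquad
\mathbb{E}\bigl[(\mathbb{E}[\theta \mid X]-\theta)^2\bigr] = \frac{\sigma^2}{1+\sigma^2} \le 1 .
\]
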
35805,th,"Instant runoff voting (IRV) has recently gained popularity as an alternative
to plurality voting for political elections, with advocates claiming a range of
advantages, including that it produces more moderate winners than plurality and
could thus help address polarization. However, there is little theoretical
backing for this claim, with existing evidence focused on case studies and
simulations. In this work, we prove that IRV has a moderating effect relative
to plurality voting in a precise sense, developed in a 1-dimensional Euclidean
model of voter preferences. We develop a theory of exclusion zones, derived
from properties of the voter distribution, which serve to show how moderate and
extreme candidates interact during IRV vote tabulation. The theory allows us to
prove that if voters are symmetrically distributed and not too concentrated at
the extremes, IRV cannot elect an extreme candidate over a moderate. In
contrast, we show plurality can and validate our results computationally. Our
methods provide new frameworks for the analysis of voting systems, deriving
exact winner distributions geometrically and establishing a connection between
plurality voting and stick-breaking processes.",The Moderating Effect of Instant Runoff Voting,2023-03-17 05:37:27,"Kiran Tomlinson, Johan Ugander, Jon Kleinberg","http://arxiv.org/abs/2303.09734v5, http://arxiv.org/pdf/2303.09734v5",cs.MA
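
A minimal tabulation sketch accompanying the preceding record (arXiv:2303.09734): plurality and instant runoff counting on complete ranked ballots. The three-candidate profile at the end is a stylized illustration, not data from the paper, in which plurality elects an extreme candidate while IRV elects the moderate.

from collections import Counter

def plurality_winner(ballots):
    """Winner by first-choice votes; each ballot is a complete ranking of candidates."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def irv_winner(ballots):
    """Instant runoff: repeatedly eliminate the candidate with the fewest first-choice
    votes among remaining candidates; ties for elimination are broken by label."""
    remaining = {c for b in ballots for c in b}
    while len(remaining) > 1:
        tallies = Counter(next(c for c in b if c in remaining) for b in ballots)
        loser = min(remaining, key=lambda c: (tallies.get(c, 0), c))
        remaining.discard(loser)
    return remaining.pop()

# Stylized 1-D profile: L (left extreme), M (moderate), R (right extreme).
ballots = [("L", "M", "R")] * 40 + [("M", "R", "L")] * 35 + [("R", "M", "L")] * 25
print(plurality_winner(ballots))  # L: the extreme wins on first choices alone
print(irv_winner(ballots))        # M: R is eliminated and its votes transfer to the moderate
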
35806,th,"We study the complexity of finding an approximate (pure) Bayesian Nash
equilibrium in a first-price auction with common priors when the tie-breaking
rule is part of the input. We show that the problem is PPAD-complete even when
the tie-breaking rule is trilateral (i.e., it specifies item allocations when
no more than three bidders are in a tie, and adopts the uniform tie-breaking rule
otherwise). This is the first hardness result for equilibrium computation in
first-price auctions with common priors. On the positive side, we give a PTAS
for the problem under the uniform tie-breaking rule.",Complexity of Equilibria in First-Price Auctions under General Tie-Breaking Rules,2023-03-29 04:57:34,"Xi Chen, Binghui Peng","http://arxiv.org/abs/2303.16388v1, http://arxiv.org/pdf/2303.16388v1",cs.GT
35807,th,"Since Kopel's duopoly model was proposed about three decades ago, there are
almost no analytical results on the equilibria and their stability in the
asymmetric case. The first objective of our study is to fill this gap. This
paper analyzes the asymmetric duopoly model of Kopel analytically by using
several tools based on symbolic computations. We discuss the possibility of the
existence of multiple positive equilibria and establish necessary and
sufficient conditions for a given number of positive equilibria to exist. The
possible positions of the equilibria in Kopel's model are also explored.
Furthermore, in the asymmetric model of Kopel, if the duopolists adopt the best
response reactions or homogeneous adaptive expectations, we establish rigorous
conditions for the local stability of equilibria for the first time. The
occurrence of chaos in Kopel's model seems to be supported by observations
through numerical simulations, which, however, is challenging to prove
rigorously. The second objective is to prove the existence of snapback
repellers in Kopel's map, which implies the existence of chaos in the sense of
Li-Yorke according to Marotto's theorem.",Stability and chaos of the duopoly model of Kopel: A study based on symbolic computations,2023-04-05 00:35:38,"Xiaoliang Li, Kongyan Chen, Wei Niu, Bo Huang","http://arxiv.org/abs/2304.02136v2, http://arxiv.org/pdf/2304.02136v2",math.DS
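
To illustrate the symbolic-computation route described in the preceding record (arXiv:2304.02136), the sketch below applies the Jury (Schur-Cohn) stability conditions to the Jacobian of a symmetric Kopel-type adjustment map. The map form, parameters, and fixed point are simplifying assumptions for illustration; they are not the asymmetric model analyzed in the paper.

import sympy as sp

x, y, mu, rho = sp.symbols("x y mu rho", real=True)

# Symmetric Kopel-type map (illustrative only).
f = (1 - rho) * x + rho * mu * y * (1 - y)
g = (1 - rho) * y + rho * mu * x * (1 - x)

J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)],
               [sp.diff(g, x), sp.diff(g, y)]])
tr, det = J.trace(), J.det()

# Jury conditions: a fixed point of a 2-D map is locally asymptotically stable
# when all three expressions are positive after substituting its coordinates.
jury = [sp.simplify(1 - tr + det), sp.simplify(1 + tr + det), sp.simplify(1 - det)]

xstar = 1 - 1 / mu  # nontrivial symmetric fixed point of the illustrative map
print([sp.simplify(c.subs({x: xstar, y: xstar})) for c in jury])
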
35808,th,"This article investigates mechanism-based explanations for a well-known
empirical pattern in sociology of education, namely, that Black-White unequal
access to school resources -- defined as advanced coursework -- is the highest
in racially diverse and majority-White schools. Through an empirically
calibrated and validated agent-based model, this study explores the dynamics of
two qualitatively informed mechanisms, showing (1) that we have reason to
believe that the presence of White students in school can influence the
emergence of Black-White advanced enrollment disparities and (2) that such
influence can represent another possible explanation for the macro-level
pattern of interest. Results contribute to current scholarly accounts of
within-school inequalities, shedding light on policy strategies to improve
the educational experiences of Black students in racially integrated settings.",The presence of White students and the emergence of Black-White within-school inequalities: two interaction-based mechanisms,2023-04-10 23:09:47,João M. Souto-Maior,"http://arxiv.org/abs/2304.04849v4, http://arxiv.org/pdf/2304.04849v4",physics.soc-ph
35809,th,"The study of systemic risk is often presented through the analysis of several
measures referring to quantities used by practitioners and policy makers.
Almost invariably, those measures evaluate the size of the impact that
exogenous events can have on a financial system without analysing the nature
of the initial shock. Here we present a symmetric approach and propose a set of
measures that are based on the amount of exogenous shock that can be absorbed
by the system before it starts to deteriorate. For this purpose, we use a
linearized version of DebtRank that allows us to clearly show the onset of
financial distress and thus obtain a correct systemic risk estimation. We show how we
can explicitly compute localized and uniform exogenous shocks and explain
their behavior through spectral graph theory. We also extend the analysis to
heterogeneous shocks that have to be computed by means of Monte Carlo
simulations. We believe that our approach is more general and natural and
allows us to express the failure risk in financial systems in a standard way.",Systemic risk measured by systems resiliency to initial shocks,2023-04-12 15:13:46,"Luka Klinčić, Vinko Zlatić, Guido Caldarelli, Hrvoje Štefančić","http://arxiv.org/abs/2304.05794v1, http://arxiv.org/pdf/2304.05794v1",physics.soc-ph
35810,th,"Equilibrium solution concepts of normal-form games, such as Nash equilibria,
correlated equilibria, and coarse correlated equilibria, describe the joint
strategy profiles from which no player has incentive to unilaterally deviate.
They are widely studied in game theory, economics, and multiagent systems.
Equilibrium concepts are invariant under certain transforms of the payoffs. We
define an equilibrium-inspired distance metric for the space of all normal-form
games and uncover a distance-preserving equilibrium-invariant embedding.
Furthermore, we propose an additional transform which defines a
better-response-invariant distance metric and embedding. To demonstrate these
metric spaces we study $2\times2$ games. The equilibrium-invariant embedding of
$2\times2$ games has an efficient two variable parameterization (a reduction
from eight), where each variable geometrically describes an angle on a unit
circle. Interesting properties can be spatially inferred from the embedding,
including: equilibrium support, cycles, competition, coordination, distances,
best-responses, and symmetries. The best-response-invariant embedding of
$2\times2$ games, after considering symmetries, rediscovers a set of 15 games,
and their respective equivalence classes. We propose that this set of game
classes is fundamental and captures all possible interesting strategic
interactions in $2\times2$ games. We introduce a directed graph representation
and name for each class. Finally, we leverage the tools developed for
$2\times2$ games to develop game theoretic visualizations of large normal-form
and extensive-form games that aim to fingerprint the strategic interactions
that occur within.","Equilibrium-Invariant Embedding, Metric Space, and Fundamental Set of $2\times2$ Normal-Form Games",2023-04-20 00:31:28,"Luke Marris, Ian Gemp, Georgios Piliouras","http://arxiv.org/abs/2304.09978v1, http://arxiv.org/pdf/2304.09978v1",cs.GT
35811,th,"In dynamic environments, Q-learning is an automaton that (i) provides
estimates (Q-values) of the continuation values associated with each available
action; and (ii) follows the naive policy of almost always choosing the action
with the highest Q-value. We consider a family of automata that are based on
Q-values but whose policy may systematically favor some actions over others,
for example through a bias that favors cooperation. In the spirit of Compte and
Postlewaite [2018], we look for equilibrium biases within this family of
Q-based automata. We examine classic games under various monitoring
technologies and find that equilibrium biases may strongly foster collusion.",Q-learning with biased policy rules,2023-04-25 11:25:10,Olivier Compte,"http://arxiv.org/abs/2304.12647v2, http://arxiv.org/pdf/2304.12647v2",econ.TH
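
A minimal sketch accompanying the preceding record (arXiv:2304.12647): a Q-value automaton whose action choice adds a fixed per-action bias (for example toward a cooperative action) before taking the argmax. The additive bias form, exploration rate, and learning parameters are illustrative assumptions, not the paper's specification.

import random

def biased_action(q_values, bias, epsilon=0.05):
    """Choose an action by argmax of Q-value plus per-action bias; explore with prob. epsilon."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    scores = [q + b for q, b in zip(q_values, bias)]
    return max(range(len(scores)), key=scores.__getitem__)

def q_update(q_values, action, reward, next_q_values, alpha=0.1, gamma=0.95):
    """Standard one-step Q-learning update for the chosen action."""
    q_values[action] += alpha * (reward + gamma * max(next_q_values) - q_values[action])

# Illustrative usage in a prisoner's dilemma with actions [cooperate, defect]:
q = [0.0, 0.0]
bias = [0.2, 0.0]  # a bias favoring cooperation, in the spirit of the family described above
a = biased_action(q, bias)
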
35812,th,"The logistics of urban areas are becoming more sophisticated due to the fast
city population growth. The stakeholders are faced with the challenges of the
dynamic complexity of city logistics(CL) systems characterized by the
uncertainty effect together with the freight vehicle emissions causing
pollution. In this conceptual paper, we present a research methodology for the
environmental sustainability of CL systems that can be attained by effective
stakeholders' collaboration under non-chaotic situations and the presumption of
the human levity tendency. We propose the mathematical axioms of the
uncertainty effect, put forward the notion of condition effectors, and show
how to assign hypothetical values to them. Finally, we employ a spider network
and causal loop diagram to investigate the system's elements and their behavior
over time.",Modeling the Complexity of City Logistics Systems for Sustainability,2023-04-27 10:21:56,"Taiwo Adetiloye, Anjali Awasthi","http://arxiv.org/abs/2304.13987v1, http://arxiv.org/pdf/2304.13987v1",math.OC
35813,th,"This paper provides a systematic study of the robust Stackelberg equilibrium
(RSE), which naturally generalizes the widely adopted solution concept of the
strong Stackelberg equilibrium (SSE). The RSE accounts for any possible
up-to-$\delta$ suboptimal follower responses in Stackelberg games and is
adopted to improve the robustness of the leader's strategy. While a few
variants of robust Stackelberg equilibrium have been considered in previous
literature, the RSE solution concept we consider is importantly different -- in
some sense, it relaxes previously studied robust Stackelberg strategies and is
applicable to much broader sources of uncertainties.
  We provide a thorough investigation of several fundamental properties of RSE,
including its utility guarantees, algorithmics, and learnability. We first show
that the RSE we defined always exists and thus is well-defined. Then we
characterize how the leader's utility in RSE changes with the robustness level
considered. On the algorithmic side, we show that, in sharp contrast to the
tractability of computing an SSE, it is NP-hard to obtain a fully polynomial-time
approximation scheme (FPTAS) for any constant robustness level. Nevertheless,
we develop a quasi-polynomial approximation scheme (QPTAS) for RSE. Finally, we
examine the learnability of the RSE in a natural learning scenario, where both
players' utilities are not known in advance, and provide almost tight sample
complexity results on learning the RSE. As a corollary of this result, we also
obtain an algorithm for learning SSE, which strictly improves a key result of
Bai et al. in terms of both utility guarantee and computational efficiency.",Robust Stackelberg Equilibria,2023-04-28 20:19:21,"Jiarui Gan, Minbiao Han, Jibang Wu, Haifeng Xu","http://arxiv.org/abs/2304.14990v2, http://arxiv.org/pdf/2304.14990v2",cs.GT
35814,th,"In the past several decades, the world's economy has become increasingly
globalized. On the other hand, there are also ideas advocating the practice of
``buy local'', by which people buy locally produced goods and services rather
than those produced farther away. In this paper, we establish a mathematical
theory of real price that determines the optimal global versus local spending
of an agent, achieving the agent's optimal tradeoff between spending and
obtained utility. Our theory of real price depends on the asymptotic analysis
of a Markov chain transition probability matrix related to the network of
producers and consumers. We show that the real price of a product or service
can be determined from the involved Markov chain matrix, and can be
dramatically different from the product's label price. In particular, we show
that the label prices of products and services are often not ``real'' or
directly ``useful'': given two products offering the same myopic utility, the
one with the lower label price may not necessarily offer better asymptotic utility.
This theory shows that the globality or locality of the products and services
does have different impacts on the spending-utility tradeoff of a customer. The
established mathematical theory of real price can be used to determine whether
to adopt or not to adopt certain artificial intelligence (AI) technologies from
an economic perspective.","To AI or not to AI, to Buy Local or not to Buy Local: A Mathematical Theory of Real Price",2023-05-09 05:43:47,"Huan Cai, Catherine Xu, Weiyu Xu","http://arxiv.org/abs/2305.05134v1, http://arxiv.org/pdf/2305.05134v1",econ.TH
35815,th,"A seller is pricing identical copies of a good to a stream of unit-demand
buyers. Each buyer has a value on the good as his private information. The
seller only knows the empirical value distribution of the buyer population and
chooses the revenue-optimal price. We consider a widely studied third-degree
price discrimination model where an information intermediary with perfect
knowledge of the arriving buyer's value sends a signal to the seller, hence
changing the seller's posterior and inducing the seller to set a personalized
posted price. Prior work of Bergemann, Brooks, and Morris (American Economic
Review, 2015) has shown the existence of a signaling scheme that preserves
seller revenue, while always selling the item, hence maximizing consumer
surplus. In a departure from prior work, we ask whether the consumer surplus
generated is fairly distributed among buyers with different values. To this
end, we aim to maximize welfare functions that reward more balanced surplus
allocations.
  Our main result is the surprising existence of a novel signaling scheme that
simultaneously $8$-approximates all welfare functions that are non-negative,
monotonically increasing, symmetric, and concave, compared with any other
signaling scheme. Classical examples of such welfare functions include the
utilitarian social welfare, the Nash welfare, and the max-min welfare. Such a
guarantee cannot be given by any consumer-surplus-maximizing scheme -- which
are the ones typically studied in the literature. In addition, our scheme is
socially efficient, and has the fairness property that buyers with higher
values enjoy higher expected surplus, which is not always the case for existing
schemes.",Fair Price Discrimination,2023-05-11 20:45:06,"Siddhartha Banerjee, Kamesh Munagala, Yiheng Shen, Kangning Wang","http://arxiv.org/abs/2305.07006v1, http://arxiv.org/pdf/2305.07006v1",cs.GT
35816,th,"We show that computing the optimal social surplus requires $\Omega(mn)$ bits
of communication between the website and the bidders in a sponsored search
auction with $n$ slots on the website and with tick size of $2^{-m}$ in the
discrete model, even when bidders are allowed to freely communicate with each
other.",Social Surplus Maximization in Sponsored Search Auctions Requires Communication,2023-05-12 21:49:03,Suat Evren,"http://arxiv.org/abs/2305.07729v1, http://arxiv.org/pdf/2305.07729v1",cs.GT
35817,th,"A seller wants to sell an item to n buyers. Buyer valuations are drawn i.i.d.
from a distribution unknown to the seller; the seller only knows that the
support is included in [a, b]. To be robust, the seller chooses a DSIC
mechanism that optimizes the worst-case performance relative to the first-best
benchmark. Our analysis unifies the regret and the ratio objectives.
  For these objectives, we derive an optimal mechanism and the corresponding
performance in quasi-closed form, as a function of the support information and
the number of buyers n. Our analysis reveals three regimes of support
information and a new class of robust mechanisms. i.) With ""low"" support
information, the optimal mechanism is a second-price auction (SPA) with random
reserve, a focal class in earlier literature. ii.) With ""high"" support
information, SPAs are strictly suboptimal, and an optimal mechanism belongs to
a class of mechanisms we introduce, which we call pooling auctions (POOL);
whenever the highest value is above a threshold, the mechanism still allocates
to the highest bidder, but otherwise the mechanism allocates to a uniformly
random buyer, i.e., pools low types. iii.) With ""moderate"" support information,
a randomization between SPA and POOL is optimal.
  We also characterize optimal mechanisms within nested central subclasses of
mechanisms: standard mechanisms that only allocate to the highest bidder, SPA
with random reserve, and SPA with no reserve. We show strict separations in
terms of performance across classes, implying that deviating from standard
mechanisms is necessary for robustness.",Robust Auction Design with Support Information,2023-05-16 02:23:22,"Jerry Anunrojwong, Santiago R. Balseiro, Omar Besbes","http://arxiv.org/abs/2305.09065v2, http://arxiv.org/pdf/2305.09065v2",econ.TH
35819,th,"In this paper, we navigate the intricate domain of reviewer rewards in
open-access academic publishing, leveraging the precision of mathematics and
the strategic acumen of game theory. We conceptualize the prevailing
voucher-based reviewer reward system as a two-player game, subsequently
identifying potential shortcomings that may incline reviewers towards binary
decisions. To address this issue, we propose and mathematically formalize an
alternative reward system with the objective of mitigating this bias and
promoting more comprehensive reviews. We engage in a detailed investigation of
the properties and outcomes of both systems, employing rigorous
game-theoretical analysis and deep reinforcement learning simulations. Our
results underscore a noteworthy divergence between the two systems, with our
proposed system demonstrating a more balanced decision distribution and
enhanced stability. This research not only augments the mathematical
understanding of reviewer reward systems, but it also provides valuable
insights for the formulation of policies within journal review systems. Our
contribution to the mathematical community lies in providing a game-theoretical
perspective to a real-world problem and in the application of deep
reinforcement learning to simulate and understand this complex system.",Game-Theoretical Analysis of Reviewer Rewards in Peer-Review Journal Systems: Analysis and Experimental Evaluation using Deep Reinforcement Learning,2023-05-20 07:13:35,Minhyeok Lee,"http://arxiv.org/abs/2305.12088v1, http://arxiv.org/pdf/2305.12088v1",cs.AI
35820,th,"This paper evaluates market equilibrium under different pricing mechanisms in
a two-settlement 100%-renewables electricity market. Given general probability
distributions of renewable energy, we establish game-theoretical models to
analyze equilibrium bidding strategies, market prices, and profits under
uniform pricing (UP) and pay-as-bid pricing (PAB). We prove that UP can
incentivize suppliers to withhold bidding quantities and lead to price spikes.
PAB can reduce the market price, but it may lead to a mixed-strategy price
equilibrium. Then, we present a regulated uniform pricing scheme (RUP) based on
suppliers' marginal costs that include penalty costs for real-time deviations.
We show that RUP can achieve lower yet positive prices and profits compared
with PAB in a duopoly market, which approximates the least-cost system outcome.
Simulations with synthetic and real data find that under PAB and RUP, higher
uncertainty of renewables and higher real-time shortage penalty prices can increase
the market price by encouraging lower bidding quantities, thereby increasing
suppliers' profits.",Uniform Pricing vs Pay as Bid in 100%-Renewables Electricity Markets: A Game-theoretical Analysis,2023-05-21 03:50:39,"Dongwei Zhao, Audun Botterud, Marija Ilic","http://arxiv.org/abs/2305.12309v1, http://arxiv.org/pdf/2305.12309v1",eess.SY
35821,th,"During epidemics people reduce their social and economic activity to lower
their risk of infection. Such social distancing strategies will depend on
information about the course of the epidemic but also on when they expect the
epidemic to end, for instance due to vaccination. Typically it is difficult to
make optimal decisions, because the available information is incomplete and
uncertain. Here, we show how optimal decision making depends on knowledge about
vaccination timing in a differential game in which individual decision making
gives rise to Nash equilibria, and the arrival of the vaccine is described by a
probability distribution. We show that the earlier the vaccination is expected
to happen and the more precisely the timing of the vaccination is known, the
stronger is the incentive to socially distance. In particular, equilibrium
social distancing only meaningfully deviates from the no-vaccination
equilibrium course if the vaccine is expected to arrive before the epidemic
would have run its course. We demonstrate how the probability distribution of
the vaccination time acts as a generalised form of discounting, with the
special case of an exponential vaccination time distribution directly
corresponding to regular exponential discounting.",Rational social distancing in epidemics with uncertain vaccination timing,2023-05-23 05:28:14,"Simon K. Schnyder, John J. Molina, Ryoichi Yamamoto, Matthew S. Turner","http://arxiv.org/abs/2305.13618v1, http://arxiv.org/pdf/2305.13618v1",econ.TH
35822,th,"Sociologists of education increasingly highlight the role of opportunity
hoarding in the formation of Black-White educational inequalities. Informed by
this literature, this article unpacks the necessary and sufficient conditions
under which the hoarding of educational resources emerges within schools. It
develops a qualitatively informed agent-based model which captures Black and
White students' competition for a valuable school resource: advanced
coursework. In contrast to traditional accounts -- which explain the emergence
of hoarding through the actions of Whites that keep valuable resources within
White communities -- simulations, perhaps surprisingly, show hoarding to arise
even when Whites do not play the role of hoarders of resources. Behind this
result is the fact that a structural inequality (i.e., racial differences in
social class) -- and not action-driven hoarding -- is the necessary condition
for hoarding to emerge. Findings, therefore, illustrate that common
action-driven understandings of opportunity hoarding can overlook the
structural foundations behind this important phenomenon. Policy implications
are discussed.",Hoarding without hoarders: unpacking the emergence of opportunity hoarding within schools,2023-05-24 05:49:38,João M. Souto-Maior,"http://arxiv.org/abs/2305.14653v2, http://arxiv.org/pdf/2305.14653v2",physics.soc-ph
35823,th,"We design TimeBoost: a practical transaction ordering policy for rollup
sequencers that takes into account both transaction timestamps and bids; it
works by creating a score from timestamps and bids, and orders transactions
based on this score.
  TimeBoost is transaction-data-independent (i.e., can work with encrypted
transactions) and supports low transaction finalization times similar to a
first-come first-serve (FCFS or pure-latency) ordering policy. At the same
time, it avoids the inefficient latency competition created by an FCFS policy.
It further satisfies useful economic properties of first-price auctions that
come with a pure-bidding policy. We show through rigorous economic analyses how
TimeBoost allows players to compete on arbitrage opportunities in a way that
results in better guarantees compared to both pure-latency and pure-bidding
approaches.",Buying Time: Latency Racing vs. Bidding in Transaction Ordering,2023-06-03 22:20:39,"Akaki Mamageishvili, Mahimna Kelkar, Jan Christoph Schlegel, Edward W. Felten","http://arxiv.org/abs/2306.02179v2, http://arxiv.org/pdf/2306.02179v2",cs.GT
35886,th,"Models of social learning feature either binary signals or abstract signal
structures often deprived of micro-foundations. Both models are limited when
analyzing interim results or performing empirical analysis. We present a method
of generating signal structures which are richer than the binary model, yet are
tractable enough to perform simulations and empirical analysis. We demonstrate
the method's usability by revisiting two classical papers: (1) we discuss the
economic significance of unbounded signals in Smith and Sorensen (2000); (2) we
use experimental data from Anderson and Holt (1997) to perform econometric
analysis. Additionally, we provide a necessary and sufficient condition for the
occurrence of action cascades.",A Practical Approach to Social Learning,2020-02-25 19:41:23,"Amir Ban, Moran Koren","http://arxiv.org/abs/2002.11017v1, http://arxiv.org/pdf/2002.11017v1",econ.TH
35824,th,"In online ad markets, a rising number of advertisers are employing bidding
agencies to participate in ad auctions. These agencies are specialized in
designing online algorithms and bidding on behalf of their clients. Typically,
an agency has information on multiple advertisers, so she can
potentially coordinate bids to help her clients achieve higher utilities than
those under independent bidding.
  In this paper, we study coordinated online bidding algorithms in repeated
second-price auctions with budgets. We propose algorithms that guarantee every
client a higher utility than the best she can get under independent bidding. We
show that these algorithms achieve maximal coalition welfare and discuss
bidders' incentives to misreport their budgets, in symmetric cases. Our proofs
combine the techniques of online learning and equilibrium analysis, overcoming
the difficulty of competing with a multi-dimensional benchmark. The performance
of our algorithms is further evaluated by experiments on both synthetic and
real data. To the best of our knowledge, we are the first to consider bidder
coordination in online repeated auctions with constraints.",Coordinated Dynamic Bidding in Repeated Second-Price Auctions with Budgets,2023-06-13 14:55:04,"Yurong Chen, Qian Wang, Zhijian Duan, Haoran Sun, Zhaohua Chen, Xiang Yan, Xiaotie Deng","http://arxiv.org/abs/2306.07709v1, http://arxiv.org/pdf/2306.07709v1",cs.GT
35825,th,"We study the classic house-swapping problem of Shapley and Scarf (1974) in a
setting where agents may have ""objective"" indifferences, i.e., indifferences
that are shared by all agents. In other words, if any one agent is indifferent
between two houses, then all agents are indifferent between those two houses.
The most direct interpretation is the presence of multiple copies of the same
object. Our setting is a special case of the house-swapping problem with
general indifferences. We derive a simple, easily interpretable algorithm that
produces the unique strict core allocation of the house-swapping market, if it
exists. Our algorithm runs in quadratic time, a substantial improvement
over the cubic-time methods for the more general problem.
35826,th,"This paper extends the Isotonic Mechanism from the single-owner to
multi-owner settings, in an effort to make it applicable to peer review where a
paper often has multiple authors. Our approach starts by partitioning all
submissions of a machine learning conference into disjoint blocks, each of
which shares a common set of co-authors. We then employ the Isotonic Mechanism
to elicit a ranking of the submissions from each author and to produce adjusted
review scores that align with both the reported ranking and the original review
scores. The generalized mechanism uses a weighted average of the adjusted
scores on each block. We show that, under certain conditions, truth-telling by
all authors is a Nash equilibrium for any valid partition of the overlapping
ownership sets. However, we demonstrate that while the mechanism's performance
in terms of estimation accuracy depends on the partition structure, optimizing
this structure is computationally intractable in general. We develop a nearly
linear-time greedy algorithm that provably finds a performant partition with
appealing robust approximation guarantees. Extensive experiments on both
synthetic data and real-world conference review data demonstrate the
effectiveness of this generalized Isotonic Mechanism.",An Isotonic Mechanism for Overlapping Ownership,2023-06-19 23:33:25,"Jibang Wu, Haifeng Xu, Yifan Guo, Weijie Su","http://arxiv.org/abs/2306.11154v1, http://arxiv.org/pdf/2306.11154v1",cs.GT
35827,th,"We study the power of menus of contracts in principal-agent problems with
adverse selection (agents can be one of several types) and moral hazard (we
cannot observe agent actions directly). For principal-agent problems with $T$
types and $n$ actions, we show that the best menu of contracts can obtain a
factor $\Omega(\max(n, \log T))$ more utility for the principal than the best
individual contract, partially resolving an open question of Guruganesh et al.
(2021). We then turn our attention to randomized menus of linear contracts,
where we likewise show that randomized linear menus can be $\Omega(T)$ better
than the best single linear contract. As a corollary, we show this implies an
analogous gap between deterministic menus of (general) contracts and randomized
menus of contracts (as introduced by Castiglioni et al. (2022)).",The Power of Menus in Contract Design,2023-06-22 07:28:44,"Guru Guruganesh, Jon Schneider, Joshua Wang, Junyao Zhao","http://arxiv.org/abs/2306.12667v1, http://arxiv.org/pdf/2306.12667v1",cs.GT
35828,th,"One cannot make truly fair decisions using integer linear programs unless one
controls the selection probabilities of the (possibly many) optimal solutions.
For this purpose, we propose a unified framework when binary decision variables
represent agents with dichotomous preferences, who only care about whether they
are selected in the final solution. We develop several general-purpose
algorithms to fairly select optimal solutions, for example, by maximizing the
Nash product or the minimum selection probability, or by using a random
ordering of the agents as a selection criterion (Random Serial Dictatorship).
As such, we embed the black-box procedure of solving an integer linear program
into a framework that is explainable from start to finish. Moreover, we study
the axiomatic properties of the proposed methods by embedding our framework
into the rich literature of cooperative bargaining and probabilistic social
choice. Lastly, we evaluate the proposed methods on a specific application,
namely kidney exchange. We find that while the methods maximizing the Nash
product or the minimum selection probability outperform the other methods on
the evaluated welfare criteria, methods such as Random Serial Dictatorship
perform reasonably well in computation times that are similar to those of
finding a single optimal solution.",Fair integer programming under dichotomous preferences,2023-06-23 12:06:13,"Tom Demeulemeester, Dries Goossens, Ben Hermans, Roel Leus","http://arxiv.org/abs/2306.13383v1, http://arxiv.org/pdf/2306.13383v1",cs.GT
35829,th,"In this paper we give the first explicit enumeration of all maximal Condorcet
domains on $n\leq 7$ alternatives. This has been accomplished by developing a
new algorithm for constructing Condorcet domains, and an implementation of that
algorithm which has been run on a supercomputer.
  We follow this up by the first survey of the properties of all maximal
Condorcet domains up to degree 7, with respect to many properties studied in
the social sciences and mathematical literature. We resolve several open
questions posed by other authors, both by examples from our data and by theorems.
We give a new set of results on the symmetry properties of Condorcet domains
which unify earlier works.
  Finally we discuss connections to other domain types such as non-dictatorial
domains and generalisations of single-peaked domains. All our data is made
freely available to other researchers via a new website.",Condorcet Domains of Degree at most Seven,2023-06-28 11:05:06,"Dolica Akello-Egwell, Charles Leedham-Green, Alastair Litterick, Klas Markström, Søren Riis","http://arxiv.org/abs/2306.15993v5, http://arxiv.org/pdf/2306.15993v5",cs.DM
35831,th,"The Black-Scholes-Merton model is a mathematical model for the dynamics of a
financial market that includes derivative investment instruments, and its
formula provides a theoretical price estimate of European-style options. The
model's fundamental idea is to eliminate risk by hedging the option by
purchasing and selling the underlying asset in a specific way, that is, to
replicate the payoff of the option with a portfolio (which continuously trades
the underlying) whose value at each time can be verified. One of the most
crucial, yet restrictive, assumptions for this task is that the market follows
a geometric Brownian motion, which has been relaxed and generalized in various
ways.
  The concept of robust finance revolves around developing models that account
for uncertainties and variations in financial markets. Martingale Optimal
Transport, which is an adaptation of the Optimal Transport theory to the robust
financial framework, is one of the most prominent directions. In this paper, we
consider market models with arbitrarily many underlying assets whose values are
observed over arbitrarily many time periods, and demonstrate the existence of
a portfolio sub- or super-hedging a general path-dependent derivative security
in terms of trading European options and underlyings, as well as the portfolio
replicating the derivative payoff when the market model yields the extremal
price of the derivative given marginal distributions of the underlyings. In
mathematical terms, this paper resolves the question of dual attainment for the
multi-period vectorial martingale optimal transport problem.",Replication of financial derivatives under extreme market models given marginals,2023-07-03 10:44:59,Tongseok Lim,"http://arxiv.org/abs/2307.00807v1, http://arxiv.org/pdf/2307.00807v1",q-fin.MF
35832,th,"Classification algorithms are increasingly used in areas such as housing,
credit, and law enforcement in order to make decisions affecting people's
lives. These algorithms can change individual behavior deliberately (a fraud
prediction algorithm deterring fraud) or inadvertently (content sorting
algorithms spreading misinformation), and they are increasingly facing public
scrutiny and regulation. Some of these regulations, like the elimination of
cash bail in some states, have focused on \textit{lowering the stakes of
certain classifications}. In this paper we characterize how optimal
classification by an algorithm designer can affect the distribution of behavior
in a population -- sometimes in surprising ways. We then look at the effect of
democratizing the rewards and punishments, or stakes, to algorithmic
classification to consider how a society can potentially stem (or facilitate!)
predatory classification. Our results speak to questions of algorithmic
fairness in settings where behavior and algorithms are interdependent, and
where typical measures of fairness focusing on statistical accuracy across
groups may not be appropriate.","Algorithms, Incentives, and Democracy",2023-07-05 17:22:01,"Elizabeth Maggie Penn, John W. Patty","http://arxiv.org/abs/2307.02319v1, http://arxiv.org/pdf/2307.02319v1",econ.TH
35833,th,"We study Proportional Response Dynamics (PRD) in linear Fisher markets where
participants act asynchronously. We model this scenario as a sequential process
in which in every step, an adversary selects a subset of the players that will
update their bids, subject to liveness constraints. We show that if every
bidder individually uses the PRD update rule whenever they are included in the
group of bidders selected by the adversary, then (in the generic case) the
entire dynamic converges to a competitive equilibrium of the market. Our proof
technique uncovers further properties of linear Fisher markets, such as the
uniqueness of the equilibrium for generic parameters and the convergence of
associated best-response dynamics and no-swap regret dynamics under certain
conditions.",Asynchronous Proportional Response Dynamics in Markets with Adversarial Scheduling,2023-07-09 09:31:20,"Yoav Kolumbus, Menahem Levy, Noam Nisan","http://arxiv.org/abs/2307.04108v1, http://arxiv.org/pdf/2307.04108v1",cs.GT
35834,th,"In an information aggregation game, a set of senders interact with a receiver
through a mediator. Each sender observes the state of the world and
communicates a message to the mediator, who recommends an action to the
receiver based on the messages received. The payoff of the senders and of the
receiver depend on both the state of the world and the action selected by the
receiver. This setting extends the celebrated cheap talk model in two aspects:
there are many senders (as opposed to just one) and there is a mediator. From a
practical perspective, this setting captures platforms in which strategic
experts' advice is aggregated in service of action recommendations to the user.
We aim at finding an optimal mediator/platform that maximizes the users'
welfare given highly resilient incentive compatibility requirements on the
equilibrium selected: we want the platform to be incentive compatible for the
receiver/user when selecting the recommended action, and we want it to be
resilient against group deviations by the senders/experts. We provide highly
positive answers to this challenge, manifested through efficient algorithms.",Resilient Information Aggregation,2023-07-11 10:06:13,"Itai Arieli, Ivan Geffner, Moshe Tennenholtz","http://dx.doi.org/10.4204/EPTCS.379.6, http://arxiv.org/abs/2307.05054v1, http://arxiv.org/pdf/2307.05054v1",econ.TH
35835,th,"With a novel search algorithm or assortment planning or assortment
optimization algorithm that takes into account a Bayesian approach to
information updating and two-stage assortment optimization techniques, the
current research provides a novel concept of competitiveness in the digital
marketplace. Via the search algorithm, there is competition between the
platform, vendors, and private brands of the platform. The current paper
suggests a model and discusses how competition and collusion arise in the
digital marketplace through assortment planning or assortment optimization
algorithm. Furthermore, it suggests a model of an assortment algorithm free
from collusion between the platform and the large vendors. The paper's major
conclusions are that collusive assortment may raise a product's purchase
likelihood but fail to maximize expected revenue. The proposed assortment
planning, on the other hand, maintains competitiveness while maximizing
expected revenue.",A Model of Competitive Assortment Planning Algorithm,2023-07-16 17:15:18,Dipankar Das,"http://arxiv.org/abs/2307.09479v1, http://arxiv.org/pdf/2307.09479v1",econ.TH
35853,th,"We consider manipulations in the context of coalitional games, where a
coalition aims to increase the total payoff of its members. An allocation rule
is immune to coalitional manipulation if no coalition can benefit from internal
reallocation of worth on the level of its subcoalitions
(reallocation-proofness), and if no coalition benefits from a lower worth while
all else remains the same (weak coalitional monotonicity). Replacing additivity
in Shapley's original characterization by these requirements yields a new
foundation of the Shapley value, i.e., it is the unique efficient and symmetric
allocation rule that awards nothing to a null player and is immune to
coalitional manipulations. We further find that for efficient allocation rules,
reallocation-proofness is equivalent to constrained marginality, a weaker
variant of Young's marginality axiom. Our second characterization improves upon
Young's characterization by weakening the independence requirement intrinsic to
marginality.",Coalitional Manipulations and Immunity of the Shapley Value,2023-10-31 15:43:31,"Christian Basteck, Frank Huettner","http://arxiv.org/abs/2310.20415v1, http://arxiv.org/pdf/2310.20415v1",econ.TH
35836,th,"A growing number of central authorities use assignment mechanisms to allocate
students to schools in a way that reflects student preferences and school
priorities. However, most real-world mechanisms give students an incentive to
be strategic and misreport their preferences. In this paper, we provide an
identification approach for causal effects of school assignment on future
outcomes that accounts for strategic misreporting. Misreporting may invalidate
existing point-identification approaches, and we derive sharp bounds for causal
effects that are robust to strategic behavior. Our approach applies to any
mechanism as long as there exist placement scores and cutoffs that characterize
that mechanism's allocation rule. We use data from a deferred acceptance
mechanism that assigns students to more than 1,000 university-major
combinations in Chile. Students behave strategically because the mechanism in
Chile constrains the number of majors that students submit in their preferences
to eight options. Our methodology takes that into account and partially
identifies the effect of changes in school assignment on various graduation
outcomes.",Causal Effects in Matching Mechanisms with Strategically Reported Preferences,2023-07-26 19:35:42,"Marinho Bertanha, Margaux Luflade, Ismael Mourifié","http://arxiv.org/abs/2307.14282v1, http://arxiv.org/pdf/2307.14282v1",econ.EM
35837,th,"We present a new optimization-based method for aggregating preferences in
settings where each decision maker, or voter, expresses preferences over pairs
of alternatives. The challenge is to come up with a ranking that agrees as much
as possible with the votes cast in cases when some of the votes conflict. Only
a collection of votes that contains no cycles is non-conflicting and can induce
a partial order over alternatives. Our approach is motivated by the observation
that a collection of votes that form a cycle can be treated as ties. The method
is then to remove unions of cycles of votes, or circulations, from the vote
graph and determine aggregate preferences from the remainder.
  We introduce the strong maximum circulation which is formed by a union of
cycles, the removal of which guarantees a unique outcome in terms of the
induced partial order. Furthermore, it contains all the aggregate preferences
remaining following the elimination of any maximum circulation. In contrast,
the well-known, optimization-based, Kemeny method has non-unique output and can
return multiple, conflicting rankings for the same input. In addition, Kemeny's
method requires solving an NP-hard problem, whereas our algorithm is efficient,
based on network flow techniques, and runs in strongly polynomial time,
independent of the number of votes.
  We address the construction of a ranking from the partial order and show that
rankings based on a convex relaxation of Kemeny's model are consistent with our
partial order. We then study the properties of removing a maximal circulation
versus a maximum circulation and establish that, while maximal circulations
will in general identify a larger number of aggregate preferences, the partial
orders induced by the removal of different maximal circulations are not unique
and may be conflicting. Moreover, finding a minimum maximal circulation is an
NP-hard problem.",The Strong Maximum Circulation Algorithm: A New Method for Aggregating Preference Rankings,2023-07-28 20:51:05,"Nathan Atkinson, Scott C. Ganz, Dorit S. Hochbaum, James B. Orlin","http://arxiv.org/abs/2307.15702v1, http://arxiv.org/pdf/2307.15702v1",cs.SI
35838,th,"We study the impact of data sharing policies on cyber insurance markets.
These policies have been proposed to address the scarcity of data about cyber
threats, which is essential to manage cyber risks. We propose a Cournot duopoly
competition model in which two insurers choose the number of policies they
offer (i.e., their production level) and also the resources they invest to
ensure the quality of data regarding the cost of claims (i.e., the data quality
of their production cost). We find that enacting mandatory data sharing
sometimes creates situations in which at most one of the two insurers invests
in data quality, whereas both insurers would invest when information sharing is
not mandatory. This raises concerns about the merits of making data sharing
mandatory.",Duopoly insurers' incentives for data quality under a mandatory cyber data sharing regime,2023-05-29 23:19:14,"Carlos Barreto, Olof Reinert, Tobias Wiesinger, Ulrik Franke","http://dx.doi.org/10.1016/j.cose.2023.103292, http://arxiv.org/abs/2308.00795v1, http://arxiv.org/pdf/2308.00795v1",econ.TH
35839,th,"We introduce a new network centrality measure founded on the Gately value for
cooperative games with transferable utilities. A directed network is
interpreted as representing control or authority relations between
players--constituting a hierarchical network. The power distribution of a
hierarchical network can be represented through a TU-game. We investigate the
properties of this TU-representation and study its Gately value, resulting in
the Gately power measure. We establish when the Gately measure is a Core power
gauge, investigate the relationship of the Gately measure with the
$\beta$-measure, and construct an axiomatisation of the Gately
measure.",Game theoretic foundations of the Gately power measure for directed networks,2023-08-04 15:00:28,"Robert P. Gilles, Lina Mallozzi","http://arxiv.org/abs/2308.02274v1, http://arxiv.org/pdf/2308.02274v1",cs.GT
35840,th,"Major advances in Machine Learning (ML) and Artificial Intelligence (AI)
increasingly take the form of developing and releasing general-purpose models.
These models are designed to be adapted by other businesses and agencies to
perform a particular, domain-specific function. This process has become known
as adaptation or fine-tuning. This paper offers a model of the fine-tuning
process where a Generalist brings the technological product (here an ML model)
to a certain level of performance, and one or more Domain-specialist(s) adapts
it for use in a particular domain. Both entities are profit-seeking and incur
costs when they invest in the technology, and they must reach a bargaining
agreement on how to share the revenue for the technology to reach the market.
For a relatively general class of cost and revenue functions, we characterize
the conditions under which the fine-tuning game yields a profit-sharing
solution. We observe that any potential Domain-specialist will either
contribute, free-ride, or abstain in their uptake of the technology, and we
provide conditions yielding these different strategies. We show how methods
based on bargaining solutions and sub-game perfect equilibria provide insights
into the strategic behavior of firms in these types of interactions, and we
find that profit-sharing can still arise even when one firm has significantly
higher costs than another. We also provide methods for identifying
Pareto-optimal bargaining arrangements for a general set of utility functions.",Fine-Tuning Games: Bargaining and Adaptation for General-Purpose Models,2023-08-08 20:01:42,"Benjamin Laufer, Jon Kleinberg, Hoda Heidari","http://arxiv.org/abs/2308.04399v2, http://arxiv.org/pdf/2308.04399v2",cs.GT
35871,th,"Fairness is one of the most desirable societal principles in collective
decision-making. It has been extensively studied in the past decades for its
axiomatic properties and has received substantial attention from the multiagent
systems community in recent years for its theoretical and computational aspects
in algorithmic decision-making. However, these studies are often not
sufficiently rich to capture the intricacies of human perception of fairness
given the ambivalent nature of real-world problems. We argue that not only
should fair solutions be deemed desirable by social planners (designers), but
they should also be governed by human and societal cognition, consider perceived outcomes
based on human judgement, and be verifiable. We discuss how achieving this goal
requires a broad transdisciplinary approach ranging from computing and AI to
behavioral economics and human-AI interaction. In doing so, we identify
shortcomings and long-term challenges of the current literature of fair
division, describe recent efforts in addressing them, and more importantly,
highlight a series of open research directions.",The Fairness Fair: Bringing Human Perception into Collective Decision-Making,2023-12-22 06:06:24,Hadi Hosseini,"http://arxiv.org/abs/2312.14402v1, http://arxiv.org/pdf/2312.14402v1",cs.AI
35841,th,"In the committee selection problem, the goal is to choose a subset of size
$k$ from a set of candidates $C$ that collectively gives the best
representation to a set of voters. We consider this problem in Euclidean
$d$-space where each voter/candidate is a point and voters' preferences are
implicitly represented by Euclidean distances to candidates. We explore
fault-tolerance in committee selection and study the following three variants:
(1) given a committee and a set of $f$ failing candidates, find their optimal
replacement; (2) compute the worst-case replacement score for a given committee
under failure of $f$ candidates; and (3) design a committee with the best
replacement score under worst-case failures. The score of a committee is
determined using the well-known (min-max) Chamberlin-Courant rule: minimize the
maximum distance between any voter and its closest candidate in the committee.
Our main results include the following: (1) in one dimension, all three
problems can be solved in polynomial time; (2) in dimension $d \geq 2$, all
three problems are NP-hard; and (3) all three problems admit a constant-factor
approximation in any fixed dimension, and the optimal committee problem has an
FPT bicriterion approximation.",Fault Tolerance in Euclidean Committee Selection,2023-08-14 19:50:48,"Chinmay Sonar, Subhash Suri, Jie Xue","http://arxiv.org/abs/2308.07268v1, http://arxiv.org/pdf/2308.07268v1",cs.GT
35842,th,"Monetary conditions are frequently cited as a significant factor influencing
fluctuations in commodity prices. However, the precise channels of transmission
are less well identified. In this paper, we develop a unified theory to study
the impact of interest rates on commodity prices and the underlying mechanisms.
To that end, we extend the competitive storage model to accommodate
stochastically evolving interest rates, and establish general conditions under
which (i) a unique rational expectations equilibrium exists and can be
efficiently computed, and (ii) interest rates are negatively correlated with
commodity prices. As an application, we quantify the impact of interest rates
on commodity prices through the speculative channel, namely, the role of
speculators in the physical market whose incentive to hold inventories is
influenced by interest rate movements. Our findings demonstrate that real
interest rates have a nontrivial and persistent negative effect on commodity
prices, and the magnitude of the impact varies substantially under different
market supply and interest rate regimes.",Interest Rate Dynamics and Commodity Prices,2023-08-15 08:10:35,"Christophe Gouel, Qingyin Ma, John Stachurski","http://arxiv.org/abs/2308.07577v1, http://arxiv.org/pdf/2308.07577v1",econ.TH
35843,th,"Classic optimal transport theory is built on minimizing the expected cost
between two given distributions. We propose the framework of distorted optimal
transport by minimizing a distorted expected cost. This new formulation is
motivated by concrete problems in decision theory, robust optimization, and
risk management, and it has many distinct features compared to the classic
theory. We choose simple cost functions and study different distortion
functions and their implications on the optimal transport plan. We show that on
the real line, the comonotonic coupling is optimal for the distorted optimal
transport problem when the distortion function is convex and the cost function
is submodular and monotone. Some forms of duality and uniqueness results are
provided. For inverse-S-shaped distortion functions and linear cost, we obtain
the unique form of optimal coupling for all marginal distributions, which turns
out to have an interesting ``first comonotonic, then counter-monotonic""
dependence structure; for S-shaped distortion functions a similar structure is
obtained. Our results highlight several challenges and features in distorted
optimal transport, offering a new mathematical bridge between the fields of
probability, decision theory, and risk management.",Distorted optimal transport,2023-08-22 10:25:51,"Haiyan Liu, Bin Wang, Ruodu Wang, Sheng Chao Zhuang","http://arxiv.org/abs/2308.11238v1, http://arxiv.org/pdf/2308.11238v1",math.OC
35844,th,"In 1979, Weitzman introduced Pandora's box problem as a framework for
sequential search with costly inspections. Recently, there has been a surge of
interest in Pandora's box problem, particularly among researchers working at
the intersection of economics and computation. This survey provides an overview
of the recent literature on Pandora's box problem, including its latest
extensions and applications in areas such as market design, decision theory,
and machine learning.",Recent Developments in Pandora's Box Problem: Variants and Applications,2023-08-23 19:39:14,"Hedyeh Beyhaghi, Linda Cai","http://arxiv.org/abs/2308.12242v1, http://arxiv.org/pdf/2308.12242v1",cs.GT
35845,th,"We analyse the typical structure of games in terms of the connectivity
properties of their best-response graphs. Our central result shows that almost
every game that is 'generic' (without indifferences) and has a pure Nash
equilibrium and a 'large' number of players is connected, meaning that every
action profile that is not a pure Nash equilibrium can reach every pure Nash
equilibrium via best-response paths. This has important implications for
dynamics in games. In particular, we show that there are simple, uncoupled,
adaptive dynamics for which period-by-period play converges almost surely to a
pure Nash equilibrium in almost every large generic game that has one (which
contrasts with the known fact that there is no such dynamic that leads almost
surely to a pure Nash equilibrium in every generic game that has one). We build
on recent results in probabilistic combinatorics for our characterisation of
game connectivity.",Game Connectivity and Adaptive Dynamics,2023-09-19 16:32:34,"Tom Johnston, Michael Savery, Alex Scott, Bassel Tarbush","http://arxiv.org/abs/2309.10609v3, http://arxiv.org/pdf/2309.10609v3",econ.TH
35846,th,"Is more information always better? Or are there some situations in which more
information can make us worse off? Good (1967) argues that expected utility
maximizers should always accept more information if the information is
cost-free and relevant. But Good's argument presupposes that you are certain
you will update by conditionalization. If we relax this assumption and allow
agents to be uncertain about updating, these agents can be rationally required
to reject free and relevant information. Since there are good reasons to be
uncertain about updating, rationality can require you to prefer ignorance.",Rational Aversion to Information,2023-09-21 06:51:52,Sven Neth,"http://dx.doi.org/10.1086/727772, http://arxiv.org/abs/2309.12374v3, http://arxiv.org/pdf/2309.12374v3",stat.OT
35847,th,"Community rating is a policy that mandates uniform premium regardless of the
risk factors. In this paper, our focus narrows to the single contract
interpretation wherein we establish a theoretical framework for community
rating using Stiglitz's (1977) monopoly model in which there is a continuum of
agents. We exhibit profitability conditions and show that, under mild
regularity conditions, the optimal premium is unique and satisfies the inverse
elasticity rule. Our numerical analysis, using realistic parameter values,
reveals that under regulation, a 10% increase in indemnity is possible with
minimal impact on other variables.","Theoretical Foundations of Community Rating by a Private Monopolist Insurer: Framework, Regulation, and Numerical Analysis",2023-09-27 00:02:00,"Yann Braouezec, John Cagnol","http://arxiv.org/abs/2309.15269v2, http://arxiv.org/pdf/2309.15269v2",econ.TH
35848,th,"This paper fundamentally reformulates economic and financial theory to
include electronic currencies. The valuation of the electronic currencies will
be based on macroeconomic theory and the fundamental equation of monetary
policy, not the microeconomic theory of discounted cash flows. The view of
electronic currency as a transactional equity associated with tangible assets
of a sub-economy will be developed, in contrast to the view of stock as an
equity associated mostly with intangible assets of a sub-economy. We will
develop the view of the electronic currency management firm as an entity
responsible for coordinated monetary (electronic currency supply and value
stabilization) and fiscal (investment and operational) policies of a
substantial (for liquidity of the electronic currency) sub-economy. The risk
model used in the valuations and the decision-making will not be the
ubiquitous, yet inappropriate, exponential risk model that leads to discount
rates, but will be multi time scale models that capture the true risk. The
decision-making will be approached from the perspective of true systems control
based on a system response function given by the multi scale risk model and
system controllers that utilize the Deep Reinforcement Learning, Generative
Pretrained Transformers, and other methods of Artificial Intelligence
(DRL/GPT/AI). Finally, the sub-economy will be viewed as a nonlinear complex
physical system with both stable equilibriums that are associated with
short-term exploitation, and unstable equilibriums that need to be stabilized
with active nonlinear control based on the multi scale system response
functions and DRL/GPT/AI.",A new economic and financial theory of money,2023-10-08 06:16:06,"Michael E. Glinsky, Sharon Sievert","http://arxiv.org/abs/2310.04986v4, http://arxiv.org/pdf/2310.04986v4",econ.TH
35849,th,"We propose an operating-envelope-aware, prosumer-centric, and efficient
energy community that aggregates individual and shared community distributed
energy resources and transacts with a regulated distribution system operator
(DSO) under a generalized net energy metering tariff design. To ensure safe
network operation, the DSO imposes dynamic export and import limits, known as
dynamic operating envelopes, on end-users' revenue meters. Given the operating
envelopes, we propose an incentive-aligned community pricing mechanism under
which the decentralized optimization of community members' benefit implies the
optimization of overall community welfare. The proposed pricing mechanism
satisfies the cost-causation principle and ensures the stability of the energy
community in a coalition game setting. Numerical examples provide insights into
the characteristics of the proposed pricing mechanism and quantitative measures
of its performance.",Operating-Envelopes-Aware Decentralized Welfare Maximization for Energy Communities,2023-10-11 06:04:34,"Ahmed S. Alahmed, Guido Cavraro, Andrey Bernstein, Lang Tong","http://arxiv.org/abs/2310.07157v1, http://arxiv.org/pdf/2310.07157v1",eess.SY
35850,th,"Sustainability of common-pool resources hinges on the interplay between human
and environmental systems. However, there is still a lack of a comprehensive
framework for modelling the extraction of common-pool resources and the
cooperation of human agents that can account for different factors that shape
the system behavior and outcomes. In particular, we still lack a critical value
for ensuring resource sustainability under different scenarios. In this paper,
we present a novel framework for studying resource extraction and cooperation
in human-environmental systems for common-pool resources. We explore how
different factors, such as resource availability and conformity effect,
influence the players' decisions and the resource outcomes. We identify
critical values for ensuring resource sustainability under various scenarios.
We demonstrate that the observed phenomena are robust to the complexity and
assumptions of the models, and discuss the implications of our study for policy and
practice, as well as the limitations and directions for future research.",Impact of resource availability and conformity effect on sustainability of common-pool resources,2023-10-11 18:18:13,"Chengyi Tu, Renfei Chen, Ying Fan, Xuwei Pan","http://arxiv.org/abs/2310.07577v2, http://arxiv.org/pdf/2310.07577v2",econ.TH
35851,th,"Commuters looking for the shortest path to their destinations, the security
of networked computers, hedge funds trading on the same stocks, governments and
populations acting to mitigate an epidemic, or employers and employees agreeing
on a contract, are all examples of (dynamic) stochastic differential games. In
essence, game theory deals with the analysis of strategic interactions among
multiple decision-makers. The theory has had enormous impact in a wide variety
of fields, but its rigorous mathematical analysis is rather recent. It started
with the pioneering work of von Neumann and Morgenstern published in 1944.
Since then, game theory has taken centre stage in applied mathematics and
related areas. Game theory has also played an important role in unsuspected
areas: for instance in military applications, when the analysis of guided
interceptor missiles in the 1950s motivated the study of games evolving
dynamically in time. Such games (when possibly subject to randomness) are
called stochastic differential games. Their study started with the work of
Isaacs, who crucially recognised the importance of (stochastic) control theory
in the area. Over the past few decades since Isaacs's work, a rich theory of
stochastic differential game has emerged and branched into several directions.
This paper will review recent advances in the study of solvability of
stochastic differential games, with a focus on a purely probabilistic technique
to approach the problem. Unsurprisingly, the number of players involved in the
game is a major factor of the analysis. We will explain how the size of the
population impacts the analyses and solvability of the problem, and discuss
mean field games as well as the convergence of finite player games to mean
field games.",On the population size in stochastic differential games,2023-10-15 22:06:56,"Dylan Possamaï, Ludovic Tangpi","http://arxiv.org/abs/2310.09919v1, http://arxiv.org/pdf/2310.09919v1",math.PR
35852,th,"We explore a version of the minimax theorem for two-person win-lose games
with infinitely many pure strategies. In the countable case, we give a
combinatorial condition on the game which implies the minimax property. In the
general case, we prove that a game satisfies the minimax property along with
all its subgames if and only if none of its subgames is isomorphic to the
""larger number game."" This generalizes a recent theorem of Hanneke, Livni and
Moran. We also propose several applications of our results outside of game
theory.",The minimax property in infinite two-person win-lose games,2023-10-30 10:21:52,Ron Holzman,"http://arxiv.org/abs/2310.19314v1, http://arxiv.org/pdf/2310.19314v1",cs.GT
35872,th,"A cake allocation is called *strongly-proportional* if it allocates each
agent a piece worth to them strictly more than their fair share of 1/n of the
total cake value. It is called *connected* if it allocates each agent a
connected piece. We present a necessary and sufficient condition for the
existence of a strongly-proportional connected cake-allocation among agents
with strictly positive valuations.",On Connected Strongly-Proportional Cake-Cutting,2023-12-23 22:08:46,"Zsuzsanna Jankó, Attila Joó, Erel Segal-Halevi","http://arxiv.org/abs/2312.15326v1, http://arxiv.org/pdf/2312.15326v1",math.CO
35854,th,"In the ultimatum game, the challenge is to explain why responders reject
non-zero offers, thereby defying classical rationality. Fairness and related
notions have been the main explanations so far. We explain this rejection
behavior via the following principle: if the responder regrets less about
losing the offer than the proposer regrets not offering the best option, the
offer is rejected. This principle qualifies as a rational punishing behavior
and it replaces the experimentally falsified classical rationality (the subgame
perfect Nash equilibrium) that leads to accepting any non-zero offer. The
principle is implemented via the transitive regret theory for probabilistic
lotteries. The expected utility implementation is a limiting case of this. We
show that several experimental results normally prescribed to fairness and
intent-recognition can be given an alternative explanation via rational
punishment; e.g. the comparison between ""fair"" and ""superfair"", the behavior
under raising the stakes etc. Hence we also propose experiments that can
distinguish these two scenarios (fairness versus regret-based punishment). They
assume different utilities for the proposer and responder. We focus on the
mini-ultimatum version of the game and also show how it can emerge from a more
general setup of the game.",Ultimatum game: regret or fairness?,2023-11-07 11:54:02,"Lida H. Aleksanyan, Armen E. Allahverdyan, Vardan G. Bardakhchyan","http://arxiv.org/abs/2311.03814v1, http://arxiv.org/pdf/2311.03814v1",econ.TH
35855,th,"We introduce and mathematically study a conceptual model for the dynamics of
the buyer population in markets of perishable goods where prices are not
posted. Buyers' behaviours are driven partly by loyalty to previously visited
merchants and partly by sensitivity to merchants' intrinsic attractiveness.
Moreover, attractiveness evolves in time depending on the relative volumes of
buyers, assuming profit/competitiveness optimisation when
favourable/unfavourable. While this negative feedback mechanism is a source of
instability that promotes oscillatory behaviour, our analysis identifies those
critical features that are responsible for the asymptotic stability of
stationary states, both in their immediate neighbourhood and globally in phase
space. In particular, we show that while full loss of clientele occurs
(depending on the initial state) in case of a bounded reactivity rate, it
cannot happen when this rate is unbounded and merchants' resilience always
prevails in this case. Altogether, our analysis provides mathematical insights
into the consequences of introducing feedback into buyer-seller interactions
and their diversified impacts on the long term levels of clientele in the
markets.",Population dynamics in fresh product markets with no posted prices,2023-11-07 16:38:17,"Ali Ellouze, Bastien Fernandez","http://arxiv.org/abs/2311.03987v1, http://arxiv.org/pdf/2311.03987v1",econ.TH
35856,th,"Formally, for common knowledge to arise in a dynamic setting, knowledge that
it has arisen must be simultaneously attained by all players. As a result, new
common knowledge is unattainable in many realistic settings, due to timing
frictions. This unintuitive phenomenon, observed by Halpern and Moses (1990),
was discussed by Arrow et al. (1987) and by Aumann (1989), was called a paradox
by Morris (2014), and has evaded satisfactory resolution for four decades. We
resolve this paradox by proposing a new definition for common knowledge, which
coincides with the traditional one in static settings but generalizes it in
dynamic settings. Under our definition, common knowledge can arise without
simultaneity, particularly in canonical examples of the Halpern-Moses paradox.
We demonstrate its usefulness by deriving for it an agreement theorem \`a la
Aumann (1976), and showing that it arises in the setting of Geanakoplos and
Polemarchakis (1982) with timing frictions added.","Common Knowledge, Regained",2023-11-08 01:38:16,"Yannai A. Gonczarowski, Yoram Moses","http://arxiv.org/abs/2311.04374v1, http://arxiv.org/pdf/2311.04374v1",econ.TH
35857,th,"We investigate the problem of approximating an incomplete preference relation
$\succsim$ on a finite set by a complete preference relation. We aim to obtain
this approximation in such a way that the choices on the basis of two
preferences, one incomplete, the other complete, have the smallest possible
discrepancy in the aggregate. To this end, we use the top-difference metric on
preferences, and define a best complete approximation of $\succsim$ as a
complete preference relation nearest to $\succsim$ relative to this metric. We
prove that such an approximation must be a maximal completion of $\succsim$,
and that it is, in fact, any one completion of $\succsim$ with the largest
index. Finally, we use these results to provide a sufficient condition for the
best complete approximation of a preference to be its canonical completion.
This leads to closed-form solutions to the best approximation problem in the
case of several incomplete preference relations of interest.",Best Complete Approximations of Preference Relations,2023-11-11 21:45:59,"Hiroki Nishimura, Efe A. Ok","http://arxiv.org/abs/2311.06641v1, http://arxiv.org/pdf/2311.06641v1",econ.TH
35858,th,"We study a repeated Principal Agent problem between a long lived Principal
and Agent pair in a prior free setting. In our setting, the sequence of
realized states of nature may be adversarially chosen, the Agent is non-myopic,
and the Principal aims for a strong form of policy regret. Following Camara,
Hartline, and Johnson, we model the Agent's long-run behavior with behavioral
assumptions that relax the common prior assumption (for example, that the Agent
has no swap regret). Within this framework, we revisit the mechanism proposed
by Camara et al., which informally uses calibrated forecasts of the unknown
states of nature in place of a common prior. We give two main improvements.
First, we give a mechanism that has an exponentially improved dependence (in
terms of both running time and regret bounds) on the number of distinct states
of nature. To do this, we show that our mechanism does not require truly
calibrated forecasts, but rather forecasts that are unbiased subject to only a
polynomially sized collection of events -- which can be produced with
polynomial overhead. Second, in several important special cases -- including
the focal linear contracting setting -- we show how to remove strong
``Alignment'' assumptions (which informally require that near-ties are always
broken in favor of the Principal) by specifically deploying ``stable'' policies
that do not have any near ties that are payoff relevant to the Principal. Taken
together, our new mechanism makes the compelling framework proposed by Camara
et al. much more powerful, now able to be realized over polynomially sized
state spaces, and while requiring only mild assumptions on Agent behavior.",Efficient Prior-Free Mechanisms for No-Regret Agents,2023-11-14 00:13:42,"Natalie Collina, Aaron Roth, Han Shao","http://arxiv.org/abs/2311.07754v1, http://arxiv.org/pdf/2311.07754v1",cs.GT
35859,th,"How can an informed sender persuade a receiver, having only limited
information about the receiver's beliefs? Motivated by research showing
generative AI can simulate economic agents, we initiate the study of
information design with an oracle. We assume the sender can learn more about
the receiver by querying this oracle, e.g., by simulating the receiver's
behavior. Aside from AI motivations such as general-purpose Large Language
Models (LLMs) and problem-specific machine learning models, alternate
motivations include customer surveys and querying a small pool of live users.
  Specifically, we study Bayesian Persuasion where the sender has a
second-order prior over the receiver's beliefs. After a fixed number of queries
to an oracle to refine this prior, the sender commits to an information
structure. Upon receiving the message, the receiver takes a payoff-relevant
action maximizing her expected utility given her posterior beliefs. We design
polynomial-time querying algorithms that optimize the sender's expected utility
in this Bayesian Persuasion game. As a technical contribution, we show that
queries form partitions of the space of receiver beliefs that can be used to
quantify the sender's knowledge.",Algorithmic Persuasion Through Simulation: Information Design in the Age of Generative AI,2023-11-30 02:01:33,"Keegan Harris, Nicole Immorlica, Brendan Lucier, Aleksandrs Slivkins","http://arxiv.org/abs/2311.18138v1, http://arxiv.org/pdf/2311.18138v1",cs.GT
35860,th,"We analyze the overall benefits of an energy community cooperative game under
which distributed energy resources (DER) are shared behind a regulated
distribution utility meter under a general net energy metering (NEM) tariff.
Two community DER scheduling algorithms are examined. The first is a community
with centrally controlled DER, whereas the second is decentralized letting its
members schedule their own DER locally. For both communities, we prove that the
cooperative game's value function is superadditive, hence the grand coalition
achieves the highest welfare. We also prove the balancedness of the cooperative
game under the two DER scheduling algorithms, which means that there is a
welfare re-distribution scheme that de-incentivizes players from leaving the
grand coalition to form smaller ones. Lastly, we present five ex-post and an
ex-ante welfare re-distribution mechanisms and evaluate them in simulation, in
addition to investigating the performance of various community sizes under the
two DER scheduling algorithms.",Resource Sharing in Energy Communities: A Cooperative Game Approach,2023-11-30 21:41:16,"Ahmed S. Alahmed, Lang Tong","http://arxiv.org/abs/2311.18792v1, http://arxiv.org/pdf/2311.18792v1",cs.GT
35861,th,"The field of algorithmic fairness has rapidly emerged over the past 15 years
as algorithms have become ubiquitous in everyday life. Algorithmic fairness
traditionally considers statistical notions of fairness that algorithms might
satisfy in decisions based on noisy data. We first show that these are
theoretically disconnected from welfare-based notions of fairness. We then
discuss two individual welfare-based notions of fairness, envy freeness and
prejudice freeness, and establish conditions under which they are equivalent to
error rate balance and predictive parity, respectively. We discuss the
implications of these findings in light of the recently discovered
impossibility theorem in algorithmic fairness (Kleinberg, Mullainathan, &
Raghavan (2016), Chouldechova (2017)).",Algorithmic Fairness with Feedback,2023-12-06 00:42:14,"John W. Patty, Elizabeth Maggie Penn","http://arxiv.org/abs/2312.03155v1, http://arxiv.org/pdf/2312.03155v1",econ.TH
35862,th,"Multiwinner voting captures a wide variety of settings, from parliamentary
elections in democratic systems to product placement in online shopping
platforms. There is a large body of work dealing with axiomatic
characterizations, computational complexity, and algorithmic analysis of
multiwinner voting rules. Although many challenges remain, significant progress
has been made in showing existence of fair and representative outcomes as well
as efficient algorithmic solutions for many commonly studied settings. However,
much of this work focuses on single-shot elections, even though in numerous
real-world settings elections are held periodically and repeatedly. Hence, it
is imperative to extend the study of multiwinner voting to temporal settings.
Recently, there have been several efforts to address this challenge. However,
these works are difficult to compare, as they model multi-period voting in very
different ways. We propose a unified framework for studying temporal fairness
in this domain, drawing connections with various existing bodies of work, and
consolidating them within a general framework. We also identify gaps in
existing literature, outline multiple opportunities for future work, and put
forward a vision for the future of multiwinner voting in temporal settings.",Temporal Fairness in Multiwinner Voting,2023-12-07 19:38:32,"Edith Elkind, Svetlana Obraztsova, Nicholas Teh","http://arxiv.org/abs/2312.04417v2, http://arxiv.org/pdf/2312.04417v2",cs.GT
35863,th,"Reducing wealth inequality and disparity is a global challenge. The economic
system is mainly divided into (1) gift and reciprocity, (2) power and
redistribution, (3) market exchange, and (4) mutual aid without reciprocal
obligations. The current inequality stems from a capitalist economy consisting
of (2) and (3). To sublimate (1), which is the human economy, to (4), the
concept of a ""mixbiotic society"" has been proposed in the philosophical realm.
This is a society in which free and diverse individuals, ""I,"" mix with each
other, recognize their respective ""fundamental incapability"" and sublimate them
into ""WE"" solidarity. The economy in this society must have moral
responsibility as a coadventurer and consideration for vulnerability to risk.
Therefore, I focus on two factors of mind perception: moral responsibility and
risk vulnerability, and propose a novel model of wealth distribution following
an econophysical approach. Specifically, I developed a joint-venture model, a
redistribution model in the joint-venture model, and a ""WE economy"" model. A
simulation comparison of a combination of the joint ventures and redistribution
with the WE economies reveals that WE economies are effective in reducing
inequality and resilient in normalizing wealth distribution as advantages, and
susceptible to free riders as disadvantages. However, this disadvantage can be
compensated for by fostering consensus and fellowship, and by complementing it
with joint ventures. This study essentially presents the effectiveness of moral
responsibility, the complementarity between the WE economy and the joint
economy, and the direction of the economy toward reducing inequality. Future
challenges are to develop the WE economy model based on real economic analysis
and psychology, as well as to promote WE economy fieldwork for worker coops and
platform cooperatives to realize a desirable mixbiotic society.",WE economy: Potential of mutual aid distribution based on moral responsibility and risk vulnerability,2023-12-12 04:52:45,Takeshi Kato,"http://arxiv.org/abs/2312.06927v1, http://arxiv.org/pdf/2312.06927v1",econ.TH
35873,th,"The Council of the European Union (EU) is one of the main decision-making
bodies of the EU. Many decisions require a qualified majority: the support of
55% of the member states (currently 15) that represent at least 65% of the
total population. We investigate how the power distribution, based on the
Shapley-Shubik index, and the proportion of winning coalitions change if these
criteria are modified within reasonable bounds. The influence of the two
countries with about 4% of the total population each is found to be almost
flat. The level of decisiveness decreases if the population criterion is above
68% or the states criterion is at least 17. The proportion of winning
coalitions can be increased from 13.2% to 20.8% (30.1%) such that the maximal
relative change in the Shapley-Shubik indices remains below 3.5% (5.5%). Our
results are indispensable for evaluating any proposal for reforming the qualified
majority voting system.",Voting power in the Council of the European Union: A comprehensive sensitivity analysis,2023-12-28 11:07:33,"Dóra Gréta Petróczy, László Csató","http://arxiv.org/abs/2312.16878v1, http://arxiv.org/pdf/2312.16878v1",physics.soc-ph
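The Shapley-Shubik analysis above counts how often each member state is pivotal under the two-criterion qualified-majority rule (at least 55% of the states and 65% of the population). A rough Monte Carlo sketch of that computation, using placeholder population shares rather than actual EU data; the function name and parameters are mine:

```python
import math
import random

# Monte Carlo estimate of Shapley-Shubik indices under a two-criterion
# qualified-majority rule: a coalition wins if it contains at least 55% of the
# member states AND at least 65% of the total population.

def shapley_shubik(pop, q_states=0.55, q_pop=0.65, samples=20_000, seed=0):
    rng = random.Random(seed)
    members = list(pop)
    states_needed = math.ceil(q_states * len(members))
    pop_needed = q_pop * sum(pop.values())
    pivots = dict.fromkeys(members, 0)
    for _ in range(samples):
        rng.shuffle(members)
        n_states, cum_pop = 0, 0.0
        for m in members:                      # grow the coalition in random order
            n_states += 1
            cum_pop += pop[m]
            if n_states >= states_needed and cum_pop >= pop_needed:
                pivots[m] += 1                 # m is the pivotal member of this ordering
                break
    return {m: c / samples for m, c in pivots.items()}

# Placeholder population shares for a hypothetical 27-member union (not real data).
populations = {f"state_{i:02d}": share for i, share in enumerate(
    [19, 15, 13, 11, 10, 8, 5, 4, 4, 3, 2.5, 2.3, 2.2, 2.1, 2, 1.9,
     1.3, 1.2, 1.2, 0.9, 0.6, 0.5, 0.4, 0.3, 0.2, 0.13, 0.11])}
indices = shapley_shubik(populations)
print(sorted(indices.items(), key=lambda kv: -kv[1])[:5])
```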
35864,th,"Algorithmic predictions are increasingly used to inform the allocations of
goods and interventions in the public sphere. In these domains, predictions
serve as a means to an end. They provide stakeholders with insights into the
likelihood of future events as a means to improve decision-making quality and
enhance social welfare. However, if maximizing welfare is the ultimate goal,
prediction is only a small piece of the puzzle. There are various other policy
levers a social planner might pursue in order to improve bottom-line outcomes,
such as expanding access to available goods, or increasing the effect sizes of
interventions.
  Given this broad range of design decisions, a basic question to ask is: What
is the relative value of prediction in algorithmic decision making? How do the
improvements in welfare arising from better predictions compare to those of
other policy levers? The goal of our work is to initiate the formal study of
these questions. Our main results are theoretical in nature. We identify
simple, sharp conditions determining the relative value of prediction
vis-\`a-vis expanding access, within several statistical models that are
popular amongst quantitative social scientists. Furthermore, we illustrate how
these theoretical insights may be used to guide the design of algorithmic
decision making systems in practice.",The Relative Value of Prediction in Algorithmic Decision Making,2023-12-13 23:52:45,Juan Carlos Perdomo,"http://arxiv.org/abs/2312.08511v1, http://arxiv.org/pdf/2312.08511v1",cs.CY
35865,th,"A linear-quadratic-Gaussian (LQG) game is an incomplete information game with
quadratic payoff functions and Gaussian payoff states. This study addresses an
information design problem to identify an information structure that maximizes
a quadratic objective function. Gaussian information structures are found to be
optimal among all information structures. Furthermore, the optimal Gaussian
information structure can be determined by semidefinite programming, which is a
natural extension of linear programming. This paper provides sufficient
conditions for the optimality and suboptimality of both no and full information
disclosure. In addition, we characterize optimal information structures in
symmetric LQG games and optimal public information structures in asymmetric LQG
games, with each structure presented in a closed-form expression.",LQG Information Design,2023-12-15 04:36:36,"Masaki Miyashita, Takashi Ui","http://arxiv.org/abs/2312.09479v1, http://arxiv.org/pdf/2312.09479v1",econ.TH
35866,th,"How do we ascribe subjective probability? In decision theory, this question
is often addressed by representation theorems, going back to Ramsey (1926),
which tell us how to define or measure subjective probability by observable
preferences. However, standard representation theorems make strong rationality
assumptions, in particular expected utility maximization. How do we ascribe
subjective probability to agents which do not satisfy these strong rationality
assumptions? I present a representation theorem with weak rationality
assumptions which can be used to define or measure subjective probability for
partly irrational agents.",Better Foundations for Subjective Probability,2023-12-15 16:52:17,Sven Neth,"http://arxiv.org/abs/2312.09796v1, http://arxiv.org/pdf/2312.09796v1",stat.OT
35867,th,"Algorithmic monoculture arises when many decision-makers rely on the same
algorithm to evaluate applicants. An emerging body of work investigates
possible harms of this kind of homogeneity, but has been limited by the
challenge of incorporating market effects in which the preferences and behavior
of many applicants and decision-makers jointly interact to determine outcomes.
  Addressing this challenge, we introduce a tractable theoretical model of
algorithmic monoculture in a two-sided matching market with many participants.
We use the model to analyze outcomes under monoculture (when decision-makers
all evaluate applicants using a common algorithm) and under polyculture (when
decision-makers evaluate applicants independently). All else equal, monoculture
(1) selects less-preferred applicants when noise is well-behaved, (2) matches
more applicants to their top choice, though individual applicants may be worse
off depending on their value to decision-makers and risk tolerance, and (3) is
more robust to disparities in the number of applications submitted.",Monoculture in Matching Markets,2023-12-15 17:46:54,"Kenny Peng, Nikhil Garg","http://arxiv.org/abs/2312.09841v1, http://arxiv.org/pdf/2312.09841v1",cs.GT
35868,th,"While there is universal agreement that agents ought to act ethically, there
is no agreement as to what constitutes ethical behaviour. To address this
problem, recent philosophical approaches to `moral uncertainty' propose
aggregation of multiple ethical theories to guide agent behaviour. However, one
of the foundational proposals for aggregation - Maximising Expected
Choiceworthiness (MEC) - has been criticised as being vulnerable to fanaticism;
the problem of an ethical theory dominating agent behaviour despite low
credence (confidence) in said theory. Fanaticism thus undermines the
`democratic' motivation for accommodating multiple ethical perspectives. The
problem of fanaticism has not yet been mathematically defined. Representing
moral uncertainty as an instance of social welfare aggregation, this paper
contributes to the field of moral uncertainty by 1) formalising the problem of
fanaticism as a property of social welfare functionals and 2) providing
non-fanatical alternatives to MEC, i.e. Highest k-trimmed Mean and Highest
Median.",Moral Uncertainty and the Problem of Fanaticism,2023-12-18 19:09:09,"Jazon Szabo, Jose Such, Natalia Criado, Sanjay Modgil","http://arxiv.org/abs/2312.11589v1, http://arxiv.org/pdf/2312.11589v1",cs.AI
35869,th,"Control barrier functions (CBFs) and safety-critical control have seen a
rapid increase in popularity in recent years, predominantly applied to systems
in aerospace, robotics and neural network controllers. Control barrier
functions can provide a computationally efficient method to monitor arbitrary
primary controllers and enforce state constraints to ensure overall system
safety. One area that has yet to take advantage of the benefits offered by CBFs
is the field of finance and economics. This manuscript re-introduces three
applications of traditional control to economics, and develops and implements
CBFs for such problems. We consider the problem of optimal advertising for the
deterministic and stochastic case and Merton's portfolio optimization problem.
Numerical simulations are used to demonstrate the effectiveness of using
traditional control solutions in tandem with CBFs and stochastic CBFs to solve
such problems in the presence of state constraints.",Stochastic Control Barrier Functions for Economics,2023-12-20 00:34:54,David van Wijk,"http://arxiv.org/abs/2312.12612v1, http://arxiv.org/pdf/2312.12612v1",econ.TH
35870,th,"May's Theorem [K. O. May, Econometrica 20 (1952) 680-684] characterizes
majority voting on two alternatives as the unique preferential voting method
satisfying several simple axioms. Here we show that by adding some desirable
axioms to May's axioms, we can uniquely determine how to vote on three
alternatives. In particular, we add two axioms stating that the voting method
should mitigate spoiler effects and avoid the so-called strong no show paradox.
We prove a theorem stating that any preferential voting method satisfying our
enlarged set of axioms, which includes some weak homogeneity and preservation
axioms, agrees with Minimax voting in all three-alternative elections, except
perhaps in some improbable knife-edged elections in which ties may arise and be
broken in different ways.",An extension of May's Theorem to three alternatives: axiomatizing Minimax voting,2023-12-21 22:18:28,"Wesley H. Holliday, Eric Pacuit","http://arxiv.org/abs/2312.14256v1, http://arxiv.org/pdf/2312.14256v1",econ.TH
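For reference, the Minimax (Simpson-Kramer) rule mentioned above elects the alternative whose worst pairwise defeat is smallest. A small sketch on a made-up three-alternative profile (a Condorcet cycle, so every candidate suffers some defeat):

```python
# Minimax winner on three alternatives: minimize the worst pairwise defeat margin.
candidates = ["a", "b", "c"]
# ranking (most preferred first) -> number of voters with that ranking (made-up profile)
profile = {("a", "b", "c"): 8, ("b", "c", "a"): 7, ("c", "a", "b"): 6}

def pairwise_margin(x, y):
    """Net number of voters preferring x over y."""
    margin = 0
    for ranking, count in profile.items():
        margin += count if ranking.index(x) < ranking.index(y) else -count
    return margin

def minimax_winner():
    worst_defeat = {}
    for x in candidates:
        worst_defeat[x] = max(max(0, pairwise_margin(y, x)) for y in candidates if y != x)
    return min(candidates, key=lambda x: worst_defeat[x]), worst_defeat

winner, defeats = minimax_winner()
print("worst pairwise defeats:", defeats)
print("Minimax winner:", winner)
```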
35874,th,"The ongoing rapid development of the e-commercial and interest-base websites
make it more pressing to evaluate objects' accurate quality before
recommendation by employing an effective reputation system. The objects'
quality are often calculated based on their historical information, such as
selected records or rating scores, to help visitors to make decisions before
watching, reading or buying. Usually high quality products obtain a higher
average ratings than low quality products regardless of rating biases or
errors. However many empirical cases demonstrate that consumers may be misled
by rating scores added by unreliable users or deliberate tampering. In this
case, users' reputation, i.e., the ability to rating trustily and precisely,
make a big difference during the evaluating process. Thus, one of the main
challenges in designing reputation systems is eliminating the effects of users'
rating bias on the evaluation results. To give an objective evaluation of each
user's reputation and uncover an object's intrinsic quality, we propose an
iterative balance (IB) method to correct users' rating biases. Experiments on
datasets from two online movie-rating websites, namely MovieLens and Netflix,
show that the IB method is a highly self-consistent and robust algorithm and it
can accurately quantify movies' actual quality and users' stability of rating.
Compared with existing methods, the IB method has a higher ability to find the
""dark horses"", i.e., good but not-so-popular movies, in the Academy Awards.",Eliminating the effect of rating bias on reputation systems,2018-01-17 19:24:03,"Leilei Wu, Zhuoming Ren, Xiao-Long Ren, Jianlin Zhang, Linyuan Lü","http://arxiv.org/abs/1801.05734v1, http://arxiv.org/pdf/1801.05734v1",physics.soc-ph
35875,th,"Evolutionarily stable strategy (ESS) is an important solution concept in game
theory which has been applied frequently to biological models. Informally, an
ESS is a strategy that, if followed by the population, cannot be taken over by a
mutation strategy that is initially rare. Finding such a strategy has been
shown to be difficult from a theoretical complexity perspective. We present an
algorithm for the case where mutations are restricted to pure strategies, and
present experiments on several game classes including random and a
recently-proposed cancer model. Our algorithm is based on a mixed-integer
non-convex feasibility program formulation, which constitutes the first general
optimization formulation for this problem. It turns out that the vast majority
of the games included in the experiments contain ESS with small support, and
our algorithm is outperformed by a support-enumeration based approach. However
we suspect our algorithm may be useful in the future as games are studied that
have ESS with potentially larger and unknown support size.",Optimization-Based Algorithm for Evolutionarily Stable Strategies against Pure Mutations,2018-03-01 23:08:21,Sam Ganzfried,"http://arxiv.org/abs/1803.00607v2, http://arxiv.org/pdf/1803.00607v2",cs.GT
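Restricting mutations to pure strategies, as above, makes the ESS conditions easy to verify directly for a candidate mixed strategy. The sketch below is only the textbook stability test, not the paper's mixed-integer feasibility formulation; the Hawk-Dove payoffs are a standard example, not the cancer model:

```python
import numpy as np

# Check whether a mixed strategy s is evolutionarily stable against *pure* mutations
# in a symmetric two-player game with payoff matrix A (A[i, j] = payoff of i vs j).
# Standard condition, restricted to pure mutants m:
#   E(s, s) > E(m, s),  or  E(s, s) == E(m, s) and E(s, m) > E(m, m).

def is_ess_against_pure_mutations(A, s, tol=1e-9):
    A, s = np.asarray(A, dtype=float), np.asarray(s, dtype=float)
    u_ss = s @ A @ s
    for m in range(A.shape[0]):
        if s[m] == 1.0:
            continue                       # a pure "mutant" identical to s is not a mutation
        u_ms = A[m] @ s                    # E(m, s)
        if u_ms > u_ss + tol:
            return False
        if abs(u_ms - u_ss) <= tol:
            u_sm = s @ A[:, m]             # E(s, m)
            if not u_sm > A[m, m] + tol:   # must beat the mutant in the post-entry population
                return False
    return True

# Hawk-Dove example with V=2, C=4: the mixed strategy (1/2, 1/2) is an ESS.
hawk_dove = [[-1.0, 2.0], [0.0, 1.0]]
print(is_ess_against_pure_mutations(hawk_dove, [0.5, 0.5]))   # expected: True
```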
35876,th,"In an economic market, sellers, infomediaries and customers constitute an
economic network. Each seller has her own customer group and the seller's
private customers are unobservable to other sellers. Therefore, a seller can
only sell commodities among her own customers unless other sellers or
infomediaries share her sale information with their customer groups. However, a
seller is not incentivized to share others' sale information by default, which
leads to inefficient resource allocation and limited revenue for the sale. To
tackle this problem, we develop a novel mechanism called customer sharing
mechanism (CSM) which incentivizes all sellers to share each other's sale
information with their private customer groups. Furthermore, CSM also
incentivizes all customers to truthfully participate in the sale. In the end,
CSM not only allocates the commodities efficiently but also optimizes the
seller's revenue.",Customer Sharing in Economic Networks with Costs,2018-07-18 11:55:27,"Bin Li, Dong Hao, Dengji Zhao, Tao Zhou","http://arxiv.org/abs/1807.06822v1, http://arxiv.org/pdf/1807.06822v1",cs.GT
35877,th,"We consider a network of agents. Associated with each agent are her covariate
and outcome. Agents influence each other's outcomes according to a certain
connection/influence structure. A subset of the agents participate on a
platform, and hence, are observable to it. The rest are not observable to the
platform and are called the latent agents. The platform does not know the
influence structure of the observable or the latent parts of the network. It
only observes the data on past covariates and decisions of the observable
agents. Observable agents influence each other both directly and indirectly
through the influence they exert on the latent agents.
  We investigate how the platform can estimate the dependence of the observable
agents' outcomes on their covariates, taking the latent agents into account.
First, we show that this relationship can be succinctly captured by a matrix
and provide an algorithm for estimating it under a suitable approximate
sparsity condition using historical data of covariates and outcomes for the
observable agents. We also obtain convergence rates for the proposed estimator
despite the high dimensionality that allows more agents than observations.
Second, we show that the approximate sparsity condition holds under the
standard conditions used in the literature. Hence, our results apply to a large
class of networks. Finally, we apply our results to two practical settings:
targeted advertising and promotional pricing. We show that by using the
available historical data with our estimator, it is possible to obtain
asymptotically optimal advertising/pricing decisions, despite the presence of
latent agents.",Latent Agents in Networks: Estimation and Targeting,2018-08-14 22:57:55,"Baris Ata, Alexandre Belloni, Ozan Candogan","http://arxiv.org/abs/1808.04878v3, http://arxiv.org/pdf/1808.04878v3",cs.SI
35878,th,"In a pathbreaking paper, Cover and Ordentlich (1998) solved a max-min
portfolio game between a trader (who picks an entire trading algorithm,
$\theta(\cdot)$) and ""nature,"" who picks the matrix $X$ of gross-returns of all
stocks in all periods. Their (zero-sum) game has the payoff kernel
$W_\theta(X)/D(X)$, where $W_\theta(X)$ is the trader's final wealth and $D(X)$
is the final wealth that would have accrued to a $\$1$ deposit into the best
constant-rebalanced portfolio (or fixed-fraction betting scheme) determined in
hindsight. The resulting ""universal portfolio"" compounds its money at the same
asymptotic rate as the best rebalancing rule in hindsight, thereby beating the
market asymptotically under extremely general conditions. Smitten with this
(1998) result, the present paper solves the most general tractable version of
Cover and Ordentlich's (1998) max-min game. This obtains for performance
benchmarks (read: derivatives) that are separately convex and homogeneous in
each period's gross-return vector. For completely arbitrary (even
non-measurable) performance benchmarks, we show how the axiom of choice can be
used to ""find"" an exact maximin strategy for the trader.",Multilinear Superhedging of Lookback Options,2018-10-05 01:50:42,Alex Garivaltis,"http://arxiv.org/abs/1810.02447v2, http://arxiv.org/pdf/1810.02447v2",q-fin.PR
35879,th,"We show that, in a resource allocation problem, the ex ante aggregate utility
of players with cumulative-prospect-theoretic preferences can be increased over
deterministic allocations by implementing lotteries. We formulate an
optimization problem, called the system problem, to find the optimal lottery
allocation. The system problem exhibits a two-layer structure comprised of a
permutation profile and optimal allocations given the permutation profile. For
any fixed permutation profile, we provide a market-based mechanism to find the
optimal allocations and prove the existence of equilibrium prices. We show that
the system problem has a duality gap, in general, and that the primal problem
is NP-hard. We then consider a relaxation of the system problem and derive some
qualitative features of the optimal lottery structure.",Optimal Resource Allocation over Networks via Lottery-Based Mechanisms,2018-12-03 04:04:36,"Soham R. Phade, Venkat Anantharam","http://dx.doi.org/10.1007/978-3-030-16989-3_4, http://arxiv.org/abs/1812.00501v1, http://arxiv.org/pdf/1812.00501v1",econ.TH
35880,th,"Spending by the UK's National Health Service (NHS) on independent healthcare
treatment has increased in recent years and is predicted to sustain its
upward trend as the population grows. Some have viewed this
increase as an attempt not to expand patients' choices but to privatize
public healthcare. This debate poses a social dilemma: should the NHS
stop cooperating with private providers? This paper contributes to healthcare
economic modelling by investigating the evolution of cooperation among three
proposed populations: Public Healthcare Providers, Private Healthcare Providers
and Patients. The Patient population is included as a main player in the
decision-making process by expanding patient's choices of treatment. We develop
a generic basic model that measures the cost of healthcare provision based on
given parameters, such as NHS and private healthcare providers' cost of
investments in both sectors, cost of treatments and gained benefits. A
patient's costly punishment is introduced as a mechanism to enhance cooperation
among the three populations. Our findings show that cooperation can be improved
with the introduction of punishment (patient's punishment) against defecting
providers. Although punishment increases cooperation, it is very costly
considering the small improvement in cooperation in comparison to the basic
model.",Pathways to Good Healthcare Services and Patient Satisfaction: An Evolutionary Game Theoretical Approach,2019-07-06 18:38:33,"Zainab Alalawi, The Anh Han, Yifeng Zeng, Aiman Elragig","http://dx.doi.org/10.13140/RG.2.2.30657.10086, http://arxiv.org/abs/1907.07132v1, http://arxiv.org/pdf/1907.07132v1",physics.soc-ph
35881,th,"This note provides a neat and enjoyable expansion and application of the
magnificent Ordentlich-Cover theory of ""universal portfolios."" I generalize
Cover's benchmark of the best constant-rebalanced portfolio (or 1-linear
trading strategy) in hindsight by considering the best bilinear trading
strategy determined in hindsight for the realized sequence of asset prices. A
bilinear trading strategy is a mini two-period active strategy whose final
capital growth factor is linear separately in each period's gross return vector
for the asset market. I apply Cover's ingenious (1991) performance-weighted
averaging technique to construct a universal bilinear portfolio that is
guaranteed (uniformly for all possible market behavior) to compound its money
at the same asymptotic rate as the best bilinear trading strategy in hindsight.
Thus, the universal bilinear portfolio asymptotically dominates the original
(1-linear) universal portfolio in the same technical sense that Cover's
universal portfolios asymptotically dominate all constant-rebalanced portfolios
and all buy-and-hold strategies. In fact, like so many Russian dolls, one can
get carried away and use these ideas to construct an endless hierarchy of ever
more dominant $H$-linear universal portfolios.",A Note on Universal Bilinear Portfolios,2019-07-23 08:55:28,Alex Garivaltis,"http://arxiv.org/abs/1907.09704v2, http://arxiv.org/pdf/1907.09704v2",q-fin.MF
35882,th,"We define discounted differential privacy, as an alternative to
(conventional) differential privacy, to investigate privacy of evolving
datasets, containing time series over an unbounded horizon. We use privacy loss
as a measure of the amount of information leaked by the reports at a certain
fixed time. We observe that privacy losses are weighted equally across time in
the definition of differential privacy, and therefore the magnitude of
privacy-preserving additive noise must grow without bound to ensure
differential privacy over an infinite horizon. Motivated by the discounted
utility theory within the economics literature, we use exponential and
hyperbolic discounting of privacy losses across time to relax the definition of
differential privacy under continual observations. This implies that privacy
losses in distant past are less important than the current ones to an
individual. We use discounted differential privacy to investigate privacy of
evolving datasets using additive Laplace noise and show that the magnitude of
the additive noise can remain bounded under discounted differential privacy. We
illustrate the quality of privacy-preserving mechanisms satisfying discounted
differential privacy on smart-meter measurement time-series of real households,
made publicly available by Ausgrid (an Australian electricity distribution
company).",Temporally Discounted Differential Privacy for Evolving Datasets on an Infinite Horizon,2019-08-12 07:25:54,Farhad Farokhi,"http://arxiv.org/abs/1908.03995v2, http://arxiv.org/pdf/1908.03995v2",cs.CR
35883,th,"Many real-world domains contain multiple agents behaving strategically with
probabilistic transitions and uncertain (potentially infinite) duration. Such
settings can be modeled as stochastic games. While algorithms have been
developed for solving (i.e., computing a game-theoretic solution concept such
as Nash equilibrium) two-player zero-sum stochastic games, research on
algorithms for non-zero-sum and multiplayer stochastic games is limited. We
present a new algorithm for these settings, which constitutes the first
parallel algorithm for multiplayer stochastic games. We present experimental
results on a 4-player stochastic game motivated by a naval strategic planning
scenario, showing that our algorithm is able to quickly compute strategies
constituting Nash equilibrium up to a very small degree of approximation error.",Parallel Algorithm for Approximating Nash Equilibrium in Multiplayer Stochastic Games with Application to Naval Strategic Planning,2019-10-01 07:08:14,"Sam Ganzfried, Conner Laughlin, Charles Morefield","http://arxiv.org/abs/1910.00193v4, http://arxiv.org/pdf/1910.00193v4",cs.GT
35884,th,"We describe a new complete algorithm for computing Nash equilibrium in
multiplayer general-sum games, based on a quadratically-constrained feasibility
program formulation. We demonstrate that the algorithm runs significantly
faster than the prior fastest complete algorithm on several game classes
previously studied and that its runtimes even outperform the best incomplete
algorithms.",Fast Complete Algorithm for Multiplayer Nash Equilibrium,2020-02-12 02:42:14,Sam Ganzfried,"http://arxiv.org/abs/2002.04734v10, http://arxiv.org/pdf/2002.04734v10",cs.GT
35887,th,"This paper presents an inverse reinforcement learning~(IRL) framework for
Bayesian stopping time problems. By observing the actions of a Bayesian
decision maker, we provide a necessary and sufficient condition to identify if
these actions are consistent with optimizing a cost function. In a Bayesian
(partially observed) setting, the inverse learner can at best identify
optimality with respect to the observed strategies. Our IRL algorithm identifies optimality
and then constructs set-valued estimates of the cost function. To achieve this
IRL objective, we use novel ideas from Bayesian revealed preferences stemming
from microeconomics. We illustrate the proposed IRL scheme using two important
examples of stopping time problems, namely, sequential hypothesis testing and
Bayesian search. As a real-world example, we illustrate using a YouTube dataset
comprising metadata from 190000 videos how the proposed IRL method predicts
user engagement in online multimedia platforms with high accuracy. Finally, for
finite datasets, we propose an IRL detection algorithm and give finite sample
bounds on its error probabilities.",Necessary and Sufficient Conditions for Inverse Reinforcement Learning of Bayesian Stopping Time Problems,2020-07-07 17:14:12,"Kunal Pattanayak, Vikram Krishnamurthy","http://arxiv.org/abs/2007.03481v6, http://arxiv.org/pdf/2007.03481v6",cs.LG
35888,th,"We consider a model of urban spatial structure proposed by Harris and Wilson
(Environment and Planning A, 1978). The model consists of fast dynamics, which
represent spatial interactions between locations by the entropy-maximizing
principle, and slow dynamics, which represent the evolution of the spatial
distribution of local factors that facilitate such spatial interactions. One
known limitation of the Harris and Wilson model is that it can have multiple
locally stable equilibria, leading to a dependence of predictions on the
initial state. To overcome this, we employ equilibrium refinement by stochastic
stability. We build on the fact that the model is a large-population potential
game and that stochastically stable states in a potential game correspond to
global potential maximizers. Unlike local stability under deterministic
dynamics, the stochastic stability approach allows a unique and unambiguous
prediction for urban spatial configurations. We show that, in the most likely
spatial configuration, the number of retail agglomerations decreases either
when shopping costs for consumers decrease or when the strength of
agglomerative effects increases.",Stochastic stability of agglomeration patterns in an urban retail model,2020-11-13 09:31:07,"Minoru Osawa, Takashi Akamatsu, Yosuke Kogure","http://arxiv.org/abs/2011.06778v1, http://arxiv.org/pdf/2011.06778v1",econ.TH
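A sketch of the fast/slow dynamics described above, in the standard Harris-Wilson form (gravity-type, entropy-maximizing flows for the fast dynamics; floor-space adjustment for the slow dynamics). The network, travel costs, and parameter values are made up, and the stochastic-stability refinement itself is not implemented here:

```python
import numpy as np

# Fast dynamics: flows from origins i to retail locations j proportional to
# W_j**alpha * exp(-beta * c_ij), normalized per origin. Slow dynamics: floor space
# W_j grows where revenue D_j exceeds cost kappa * W_j.
rng = np.random.default_rng(0)
n_origins, n_locations = 40, 12
O = rng.uniform(5, 15, n_origins)                           # spending power at each origin
cost = rng.uniform(0.5, 3.0, (n_origins, n_locations))      # travel costs c_ij (synthetic)
alpha, beta, kappa, step = 1.2, 1.0, 1.0, 0.05              # illustrative parameter values
W = np.full(n_locations, O.sum() / (kappa * n_locations))   # equal initial floor space

for _ in range(2000):
    attract = W[None, :] ** alpha * np.exp(-beta * cost)
    flows = O[:, None] * attract / attract.sum(axis=1, keepdims=True)
    D = flows.sum(axis=0)                                   # revenue at each location
    W = np.maximum(W + step * (D - kappa * W), 1e-6)        # slow adjustment of floor space

print("surviving retail agglomerations:", int((W > 1e-3).sum()), "of", n_locations)
```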
35889,th,"In this paper, we consider a network of consumers who are under the combined
influence of their neighbors and external influencing entities (the marketers).
The consumers' opinions follow hybrid dynamics whose opinion jumps are due to
the marketing campaigns. By using the relevant static game model proposed
recently in [1], we prove that although the marketers are in competition and
therefore create tension in the network, the network reaches a consensus.
Exploiting this key result, we propose a coopetition marketing strategy which
combines the one-shot Nash equilibrium actions and a policy of no advertising.
Under reasonable sufficient conditions, it is proved that the proposed
coopetition strategy profile Pareto-dominates the one-shot Nash equilibrium
strategy. This is a very encouraging result to tackle the much more challenging
problem of designing Pareto-optimal and equilibrium strategies for the
considered dynamical marketing game.",Allocating marketing resources over social networks: A long-term analysis,2020-11-17 12:39:52,"Vineeth S. Varma, Samson Lasaulce, Julien Mounthanyvong, Irinel-Constantin Morarescu","http://arxiv.org/abs/2011.09268v1, http://arxiv.org/pdf/2011.09268v1",econ.TH
35890,th,"We study competitive location problems in a continuous setting, in which
facilities have to be placed in a rectangular domain $R$ of normalized
dimensions of $1$ and $\rho\geq 1$, and distances are measured according to the
Manhattan metric. We show that the family of 'balanced' facility configurations
(in which the Voronoi cells of individual facilities are equalized with respect
to a number of geometric properties) is considerably richer in this metric than
for Euclidean distances. Our main result considers the 'One-Round Voronoi Game'
with Manhattan distances, in which first player White and then player Black
each place $n$ points in $R$; each player scores the area for which one of its
facilities is closer than the facilities of the opponent. We give a tight
characterization: White has a winning strategy if and only if $\rho\geq n$; for
all other cases, we present a winning strategy for Black.",Competitive Location Problems: Balanced Facility Location and the One-Round Manhattan Voronoi Game,2020-11-26 16:20:21,"Thomas Byrne, Sándor P. Fekete, Jörg Kalcsics, Linda Kleist","http://arxiv.org/abs/2011.13275v2, http://arxiv.org/pdf/2011.13275v2",cs.CG
35891,th,"We study a spatial, one-shot prisoner's dilemma (PD) model in which selection
operates on both an organism's behavioral strategy (cooperate or defect) and
its choice of when to implement that strategy across a set of discrete time
slots. Cooperators evolve to fixation regularly in the model when we add time
slots to lattices and small-world networks, and their portion of the population
grows, albeit slowly, when organisms interact in a scale-free network. This
selection for cooperators occurs across a wide variety of time slots and it
does so even when a crucial condition for the evolution of cooperation on
graphs is violated--namely, when the ratio of benefits to costs in the PD does
not exceed the number of spatially-adjacent organisms.",Temporal assortment of cooperators in the spatial prisoner's dilemma,2020-11-29 23:27:19,"Tim Johnson, Oleg Smirnov","http://arxiv.org/abs/2011.14440v1, http://arxiv.org/pdf/2011.14440v1",q-bio.PE
35892,th,"Two long-lived senders play a dynamic game of competitive persuasion. Each
period, each provides information to a single short-lived receiver. When the
senders also set prices, we unearth a folk theorem: if they are sufficiently
patient, virtually any vector of feasible and individually rational payoffs can
be sustained in a subgame perfect equilibrium. Without price-setting, there is
a unique subgame perfect equilibrium. In it, patient senders provide less
information--maximally patient ones none.",Dynamic Competitive Persuasion,2018-11-28 19:44:01,Mark Whitmeyer,"http://arxiv.org/abs/1811.11664v6, http://arxiv.org/pdf/1811.11664v6",math.PR
35913,th,"The standard game-theoretic solution concept, Nash equilibrium, assumes that
all players behave rationally. If we follow a Nash equilibrium and opponents
are irrational (or follow strategies from a different Nash equilibrium), then
we may obtain an extremely low payoff. On the other hand, a maximin strategy
assumes that all opposing agents are playing to minimize our payoff (even if it
is not in their best interest), and ensures the maximal possible worst-case
payoff, but results in exceedingly conservative play. We propose a new solution
concept called safe equilibrium that models opponents as behaving rationally
with a specified probability and behaving potentially arbitrarily with the
remaining probability. We prove that a safe equilibrium exists in all
strategic-form games (for all possible values of the rationality parameters),
and prove that its computation is PPAD-hard. We present exact algorithms for
computing a safe equilibrium in both 2 and $n$-player games, as well as
scalable approximation algorithms.",Safe Equilibrium,2022-01-12 04:45:51,Sam Ganzfried,"http://arxiv.org/abs/2201.04266v10, http://arxiv.org/pdf/2201.04266v10",cs.GT
35893,th,"The design of mechanisms that encourage pro-social behaviours in populations
of self-regarding agents is recognised as a major theoretical challenge within
several areas of social, life and engineering sciences. When interference from
external parties is considered, several heuristics have been identified as
capable of engineering a desired collective behaviour at a minimal cost.
However, these studies neglect the diverse nature of contexts and social
structures that characterise real-world populations. Here we analyse the impact
of diversity by means of scale-free interaction networks with high and low
levels of clustering, and test various interference mechanisms using
simulations of agents facing a cooperative dilemma. Our results show that
interference on scale-free networks is not trivial and that distinct levels of
clustering react differently to each interference mechanism. As such, we argue
that no tailored response fits all scale-free networks and present which
mechanisms are more efficient at fostering cooperation in both types of
networks. Finally, we discuss the pitfalls of considering reckless interference
mechanisms.",Exogenous Rewards for Promoting Cooperation in Scale-Free Networks,2019-05-13 13:57:38,"Theodor Cimpeanu, The Anh Han, Francisco C. Santos","http://dx.doi.org/10.1162/isal_a_00181, http://arxiv.org/abs/1905.04964v2, http://arxiv.org/pdf/1905.04964v2",cs.GT
35894,th,"The observed architecture of ecological and socio-economic networks differs
significantly from that of random networks. From a network science standpoint,
non-random structural patterns observed in real networks call for an
explanation of their emergence and an understanding of their potential systemic
consequences. This article focuses on one of these patterns: nestedness. Given
a network of interacting nodes, nestedness can be described as the tendency for
nodes to interact with subsets of the interaction partners of better-connected
nodes. Known for more than $80$ years in biogeography, nestedness has been
found in systems as diverse as ecological mutualistic organizations, world
trade, inter-organizational relations, among many others. This review article
focuses on three main pillars: the existing methodologies to observe nestedness
in networks; the main theoretical mechanisms conceived to explain the emergence
of nestedness in ecological and socio-economic networks; the implications of a
nested topology of interactions for the stability and feasibility of a given
interacting system. We survey results from variegated disciplines, including
statistical physics, graph theory, ecology, and theoretical economics.
Nestedness was found to emerge both in bipartite networks and, more recently,
in unipartite ones; this review is the first comprehensive attempt to unify
both streams of studies, usually disconnected from each other. We believe that
the truly interdisciplinary endeavour -- while rooted in a complex systems
perspective -- may inspire new models and algorithms whose realm of application
will undoubtedly transcend disciplinary boundaries.","Nestedness in complex networks: Observation, emergence, and implications",2019-05-18 17:12:52,"Manuel Sebastian Mariani, Zhuo-Ming Ren, Jordi Bascompte, Claudio Juan Tessone","http://dx.doi.org/10.1016/j.physrep.2019.04.001, http://arxiv.org/abs/1905.07593v1, http://arxiv.org/pdf/1905.07593v1",physics.soc-ph
35895,th,"The Sharing Economy (which includes Airbnb, Apple, Alibaba, Uber, WeWork,
Ebay, Didi Chuxing, Amazon) blossomed across the world, triggered structural
changes in industries and significantly affected international capital flows
primarily by disobeying a wide variety of statutes and laws in many countries.
They also illegally reduced and changing the nature of competition in many
industries often to the detriment of social welfare. This article develops new
dynamic pricing models for the SEOs and derives some stability properties of
mixed games and dynamic algorithms which eliminate antitrust liability and also
reduce deadweight losses, greed, Regret and GPS manipulation. The new dynamic
pricing models contravene the Myerson Satterthwaite Impossibility Theorem.","Complexity, Stability Properties of Mixed Games and Dynamic Algorithms, and Learning in the Sharing Economy",2020-01-18 04:09:36,Michael C. Nwogugu,"http://arxiv.org/abs/2001.08192v1, http://arxiv.org/pdf/2001.08192v1",cs.GT
35896,th,"Successful algorithms have been developed for computing Nash equilibrium in a
variety of finite game classes. However, solving continuous games -- in which
the pure strategy space is (potentially uncountably) infinite -- is far more
challenging. Nonetheless, many real-world domains have continuous action
spaces, e.g., where actions refer to an amount of time, money, or other
resource that is naturally modeled as being real-valued as opposed to integral.
We present a new algorithm for {approximating} Nash equilibrium strategies in
continuous games. In addition to two-player zero-sum games, our algorithm also
applies to multiplayer games and games with imperfect information. We
experiment with our algorithm on a continuous imperfect-information Blotto
game, in which two players distribute resources over multiple battlefields.
Blotto games have frequently been used to model national security scenarios and
have also been applied to electoral competition and auction theory. Experiments
show that our algorithm is able to quickly compute close approximations of Nash
equilibrium strategies for this game.",Algorithm for Computing Approximate Nash Equilibrium in Continuous Games with Application to Continuous Blotto,2020-06-12 22:53:18,Sam Ganzfried,"http://arxiv.org/abs/2006.07443v5, http://arxiv.org/pdf/2006.07443v5",cs.GT
35897,th,"In this work, we provide a general mathematical formalism to study the
optimal control of an epidemic, such as the COVID-19 pandemic, via incentives
to lockdown and testing. In particular, we model the interplay between the
government and the population as a principal-agent problem with moral hazard,
\`a la Cvitani\'c, Possama\""i, and Touzi [27], while an epidemic is spreading
according to dynamics given by compartmental stochastic SIS or SIR models, as
proposed respectively by Gray, Greenhalgh, Hu, Mao, and Pan [45] and Tornatore,
Buccellato, and Vetro [88]. More precisely, to limit the spread of a virus, the
population can decrease the transmission rate of the disease by reducing
interactions between individuals. However, this effort, which cannot be
perfectly monitored by the government, comes at social and monetary cost for
the population. To mitigate this cost, and thus encourage the lockdown of the
population, the government can put in place an incentive policy, in the form of
a tax or subsidy. In addition, the government may also implement a testing
policy in order to know more precisely the spread of the epidemic within the
country, and to isolate infected individuals. In terms of technical results, we
demonstrate the optimal form of the tax, indexed on the proportion of infected
individuals, as well as the optimal effort of the population, namely the
transmission rate chosen in response to this tax. The government's optimisation
problem then boils down to solving an Hamilton-Jacobi-Bellman equation.
Numerical results confirm that if a tax policy is implemented, the population
is encouraged to significantly reduce its interactions. If the government also
adjusts its testing policy, less effort is required on the population side,
individuals can interact almost as usual, and the epidemic is largely contained
by the targeted isolation of positively-tested individuals.","Incentives, lockdown, and testing: from Thucydides's analysis to the COVID-19 pandemic",2020-09-01 17:36:28,"Emma Hubert, Thibaut Mastrolia, Dylan Possamaï, Xavier Warin","http://dx.doi.org/10.1007/s00285-022-01736-0, http://arxiv.org/abs/2009.00484v2, http://arxiv.org/pdf/2009.00484v2",q-bio.PE
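The sketch below only simulates the underlying compartmental dynamics (an SIS-type epidemic) with a lockdown effort that lowers the transmission rate and a tax indexed on the infected proportion, using made-up parameters and a made-up linear tax; the paper's actual contribution, solving for the optimal tax and effort via a Hamilton-Jacobi-Bellman equation, is not reproduced here:

```python
import numpy as np

# Discretized stochastic SIS dynamics with a lockdown effort and an infection-indexed tax.
rng = np.random.default_rng(0)
dt, horizon = 0.5, 365.0                        # time step and horizon, in days
beta0, recovery, sigma = 0.25, 0.10, 0.03       # baseline transmission, recovery, noise (illustrative)
tax_slope = 5.0                                 # hypothetical tax per unit of infected share per day

def simulate(effort):
    """effort in [0, 1]: fraction by which interactions (hence beta) are reduced."""
    infected, tax_paid = 0.01, 0.0
    for _ in range(int(horizon / dt)):
        beta = beta0 * (1.0 - effort)
        drift = beta * infected * (1.0 - infected) - recovery * infected
        shock = sigma * infected * np.sqrt(dt) * rng.standard_normal()
        infected = float(np.clip(infected + drift * dt + shock, 0.0, 1.0))
        tax_paid += tax_slope * infected * dt
    return infected, tax_paid

for effort in (0.0, 0.3, 0.6):
    final_i, tax = simulate(effort)
    print(f"effort={effort:.1f}: infected share after one year = {final_i:.3f}, tax paid = {tax:.1f}")
```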
35898,th,"On-line firms deploy suites of software platforms, where each platform is
designed to interact with users during a certain activity, such as browsing,
chatting, socializing, emailing, driving, etc. The economic and incentive
structure of this exchange, as well as its algorithmic nature, have not been
explored to our knowledge. We model this interaction as a Stackelberg game
between a Designer and one or more Agents. We model an Agent as a Markov chain
whose states are activities; we assume that the Agent's utility is a linear
function of the steady-state distribution of this chain. The Designer may
design a platform for each of these activities/states; if a platform is adopted
by the Agent, the transition probabilities of the Markov chain are affected,
and so is the objective of the Agent. The Designer's utility is a linear
function of the steady state probabilities of the accessible states minus the
development cost of the platforms. The underlying optimization problem of the
Agent -- how to choose the states for which to adopt the platform -- is an MDP.
If this MDP has a simple yet plausible structure (the transition probabilities
from one state to another only depend on the target state and the recurrent
probability of the current state) the Agent's problem can be solved by a greedy
algorithm. The Designer's optimization problem (designing a custom suite for
the Agent so as to optimize, through the Agent's optimum reaction, the
Designer's revenue), is NP-hard to approximate within any finite ratio;
however, the special case, while still NP-hard, has an FPTAS. These results
generalize from a single Agent to a distribution of Agents with finite support,
as well as to the setting where the Designer must find the best response to the
existing strategies of other Designers. We discuss other implications of our
results and directions of future research.",The Platform Design Problem,2020-09-14 02:53:19,"Christos Papadimitriou, Kiran Vodrahalli, Mihalis Yannakakis","http://arxiv.org/abs/2009.06117v2, http://arxiv.org/pdf/2009.06117v2",cs.GT
35899,th,"After the first lockdown in response to the COVID-19 outbreak, many countries
faced difficulties in balancing infection control with economics. Due to
limited prior knowledge, economists began researching this issue using
cost-benefit analysis and found that infection control processes significantly
affect economic efficiency. A UK study used economic parameters to numerically
demonstrate an optimal balance in the process, including keeping the infected
population stationary. However, universally applicable knowledge, which is
indispensable for the guiding principles of infection control, has not yet been
clearly developed because of the methodological limitations of simulation
studies. Here, we propose a simple model and theoretically prove the universal
result of economic irreversibility by applying the idea of thermodynamics to
pandemic control. This means that delaying infection control measures is more
expensive than implementing infection control measures early while keeping
infected populations stationary. This implies that once the infected population
increases, society cannot return to its previous state without extra
expenditures. This universal result is analytically obtained by focusing on the
infection-spreading phase of pandemics, and is applicable not just to COVID-19,
regardless of ""herd immunity."" It also confirms the numerical observation of
stationary infected populations in its optimally efficient process. Our
findings suggest that economic irreversibility is a guiding principle for
balancing infection control with economic effects.",Economic irreversibility in pandemic control processes: Rigorous modeling of delayed countermeasures and consequential cost increases,2020-10-01 14:28:45,Tsuyoshi Hondou,"http://dx.doi.org/10.7566/JPSJ.90.114007, http://arxiv.org/abs/2010.00305v9, http://arxiv.org/pdf/2010.00305v9",q-bio.PE
35900,th,"Heifetz, Meier and Schipper (HMS) present a lattice model of awareness. The
HMS model is syntax-free, which precludes the simple option to rely on formal
language to induce lattices, and represents uncertainty and unawareness with
one entangled construct, making it difficult to assess the properties of
either. Here, we present a model based on a lattice of Kripke models, induced
by atom subset inclusion, in which uncertainty and unawareness are separate. We
show the models to be equivalent by defining transformations between them which
preserve formula satisfaction, and obtain completeness through our and HMS'
results.",Awareness Logic: A Kripke-based Rendition of the Heifetz-Meier-Schipper Model,2020-12-24 00:24:06,"Gaia Belardinelli, Rasmus K. Rendsvig","http://dx.doi.org/10.1007/978-3-030-65840-3_3, http://arxiv.org/abs/2012.12982v1, http://arxiv.org/pdf/2012.12982v1",cs.AI
35901,th,"I introduce PRZI (Parameterised-Response Zero Intelligence), a new form of
zero-intelligence trader intended for use in simulation studies of the dynamics
of continuous double auction markets. Like Gode & Sunder's classic ZIC trader,
PRZI generates quote-prices from a random distribution over some specified
domain of allowable quote-prices. Unlike ZIC, which uses a uniform distribution
to generate prices, the probability distribution in a PRZI trader is
parameterised in such a way that its probability mass function (PMF) is
determined by a real-valued control variable s in the range [-1.0, +1.0] that
determines the _strategy_ for that trader. When s=0, a PRZI trader is identical
to ZIC, with a uniform PMF; but when |s| is close to 1 the PRZI trader's PMF becomes
maximally skewed to one extreme or the other of the price-range, thereby making
its quote-prices more or less urgent, biasing the quote-price distribution
toward or away from the trader's limit-price. To explore the co-evolutionary
dynamics of populations of PRZI traders that dynamically adapt their
strategies, I show results from long-term market experiments in which each
trader uses a simple stochastic hill-climber algorithm to repeatedly evaluate
alternative s-values and choose the most profitable at any given time. In these
experiments the profitability of any particular s-value may be non-stationary
because the profitability of one trader's strategy at any one time can depend
on the mix of strategies being played by the other traders at that time, which
are each themselves continuously adapting. Results from these market
experiments demonstrate that the population of traders' strategies can exhibit
rich dynamics, with periods of stability lasting over hundreds of thousands of
trader interactions interspersed by occasional periods of change. Python
source-code for the work reported here has been made publicly available on
GitHub.",Parameterised-Response Zero-Intelligence Traders,2021-03-21 11:43:39,Dave Cliff,"http://arxiv.org/abs/2103.11341v7, http://arxiv.org/pdf/2103.11341v7",q-fin.TR
35902,th,"Can egalitarian norms or conventions survive the presence of dominant
individuals who are ensured of victory in conflicts? We investigate the
interaction of power asymmetry and partner choice in games of conflict over a
contested resource. We introduce three models to study the emergence and
resilience of cooperation among unequals when interaction is random, when
individuals can choose their partners, and where power asymmetries dynamically
depend on accumulated payoffs. We find that the ability to avoid bullies with
higher competitive ability afforded by partner choice mostly restores
cooperative conventions and that the competitive hierarchy never forms. Partner
choice counteracts the hyper dominance of bullies who are isolated in the
network and eliminates the need for others to coordinate in a coalition. When
competitive ability dynamically depends on cumulative payoffs, complex cycles
of coupled network-strategy-rank changes emerge. Effective collaborators gain
popularity (and thus power), adopt aggressive behavior, get isolated, and
ultimately lose power. Neither the network nor behavior converge to a stable
equilibrium. Despite the instability of power dynamics, the cooperative
convention in the population remains stable overall and long-term inequality is
completely eliminated. The interaction between partner choice and dynamic power
asymmetry is crucial for these results: without partner choice, bullies cannot
be isolated, and without dynamic power asymmetry, bullies do not lose their
power even when isolated. We analytically identify a single critical point that
marks a phase transition in all three iterations of our models. This critical
point is where the first individual breaks from the convention and cycles start
to emerge.",Avoiding the bullies: The resilience of cooperation among unequals,2021-04-17 22:55:26,"Michael Foley, Rory Smead, Patrick Forber, Christoph Riedl","http://dx.doi.org/10.1371/journal.pcbi.1008847, http://arxiv.org/abs/2104.08636v1, http://arxiv.org/pdf/2104.08636v1",physics.soc-ph
35903,th,"We test the performance of deep deterministic policy gradient (DDPG), a deep
reinforcement learning algorithm, able to handle continuous state and action
spaces, to learn Nash equilibria in a setting where firms compete in prices.
These algorithms are typically considered model-free because they do not
require transition probability functions (as in e.g., Markov games) or
predefined functional forms. Despite being model-free, a large set of
parameters are utilized in various steps of the algorithm. These are e.g.,
learning rates, memory buffers, state-space dimensioning, normalizations, or
noise decay rates and the purpose of this work is to systematically test the
effect of these parameter configurations on convergence to the analytically
derived Bertrand equilibrium. We find parameter choices that can reach
convergence rates of up to 99%. The reliable convergence may make the method a
useful tool to study strategic behavior of firms even in more complex settings.
Keywords: Bertrand Equilibrium, Competition in Uniform Price Auctions, Deep
Deterministic Policy Gradient Algorithm, Parameter Sensitivity Analysis",Computational Performance of Deep Reinforcement Learning to find Nash Equilibria,2021-04-27 01:14:17,"Christoph Graf, Viktor Zobernig, Johannes Schmidt, Claude Klöckl","http://arxiv.org/abs/2104.12895v1, http://arxiv.org/pdf/2104.12895v1",cs.GT
35904,th,"In this paper, we define a new class of dynamic games played in large
populations of anonymous agents. The behavior of agents in these games depends
on a time-homogeneous type and a time-varying state, which are private to each
agent and characterize their available actions and motifs. We consider finite
type, state, and action spaces. On the individual agent level, the state
evolves in discrete-time as the agent participates in interactions, in which
the state transitions are affected by the agent's individual action and the
distribution of other agents' states and actions. On the societal level, we
consider that the agents form a continuum of mass and that interactions occur
either synchronously or asynchronously, and derive models for the evolution of
the agents' state distribution. We characterize the stationary equilibrium as
the solution concept in our games, which is a condition where all agents are
playing their best response and the state distribution is stationary. At least
one stationary equilibrium is guaranteed to exist in every dynamic population
game. Our approach intersects with previous works on anonymous sequential
games, mean-field games, and Markov decision evolutionary games, but it is
novel in how we relate the dynamic setting to a classical, static population
game setting. In particular, stationary equilibria can be reduced to standard
Nash equilibria in classical population games. This simplifies the analysis of
these games and inspires the formulation of an evolutionary model for the
coupled dynamics of both the agents' actions and states.",Dynamic population games,2021-04-30 00:13:10,"Ezzat Elokda, Andrea Censi, Saverio Bolognani","http://arxiv.org/abs/2104.14662v1, http://arxiv.org/pdf/2104.14662v1",math.OC
35905,th,"Where information grows abundant, attention becomes a scarce resource. As a
result, agents must plan wisely how to allocate their attention in order to
achieve epistemic efficiency. Here, we present a framework for multi-agent
epistemic planning with attention, based on Dynamic Epistemic Logic (DEL, a
powerful formalism for epistemic planning). We identify the framework as a
fragment of standard DEL, and consider its plan existence problem. While in the
general case undecidable, we show that when attention is required for learning,
all instances of the problem are decidable.",Epistemic Planning with Attention as a Bounded Resource,2021-05-20 21:14:41,"Gaia Belardinelli, Rasmus K. Rendsvig","http://arxiv.org/abs/2105.09976v1, http://arxiv.org/pdf/2105.09976v1",cs.AI
35906,th,"Demand for blockchains such as Bitcoin and Ethereum is far larger than
supply, necessitating a mechanism that selects a subset of transactions to
include ""on-chain"" from the pool of all pending transactions. This paper
investigates the problem of designing a blockchain transaction fee mechanism
through the lens of mechanism design. We introduce two new forms of
incentive-compatibility that capture some of the idiosyncrasies of the
blockchain setting, one (MMIC) that protects against deviations by
profit-maximizing miners and one (OCA-proofness) that protects against
off-chain collusion between miners and users.
  This study is immediately applicable to a recent (August 5, 2021) and major
change to Ethereum's transaction fee mechanism, based on a proposal called
""EIP-1559."" Historically, Ethereum's transaction fee mechanism was a
first-price (pay-as-bid) auction. EIP-1559 suggested making several tightly
coupled changes, including the introduction of variable-size blocks, a
history-dependent reserve price, and the burning of a significant portion of
the transaction fees. We prove that this new mechanism earns an impressive
report card: it satisfies the MMIC and OCA-proofness conditions, and is also
dominant-strategy incentive compatible (DSIC) except when there is a sudden
demand spike. We also introduce an alternative design, the ""tipless mechanism,""
which offers an incomparable slate of incentive-compatibility guarantees -- it
is MMIC and DSIC, and OCA-proof unless in the midst of a demand spike.",Transaction Fee Mechanism Design,2021-06-02 20:48:32,Tim Roughgarden,"http://arxiv.org/abs/2106.01340v3, http://arxiv.org/pdf/2106.01340v3",cs.CR
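For context on the history-dependent reserve price mentioned above, here is a simplified version of the EIP-1559 base-fee update, ignoring the protocol's integer arithmetic and edge cases: the base fee moves by at most 1/8 per block, in proportion to how far block usage is from its target, and is burned rather than paid to the miner. The starting fee and gas numbers are illustrative:

```python
# Simplified EIP-1559 base-fee update: rises when blocks are fuller than the target,
# falls when they are emptier, changing by at most 12.5% per block.
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee(base_fee, gas_used, gas_target):
    delta = base_fee * (gas_used - gas_target) / gas_target / BASE_FEE_MAX_CHANGE_DENOMINATOR
    return max(base_fee + delta, 0.0)

base_fee = 100.0                          # in gwei, hypothetical starting point
for gas_used in [30_000_000, 30_000_000, 15_000_000, 5_000_000]:   # a demand spike, then a lull
    base_fee = next_base_fee(base_fee, gas_used, gas_target=15_000_000)
    print(f"gas used {gas_used:>11,}: next base fee = {base_fee:6.2f} gwei")
```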
35907,th,"I juxtapose Cover's vaunted universal portfolio selection algorithm (Cover
1991) with the modern representation (Qian 2016; Roncalli 2013) of a portfolio
as a certain allocation of risk among the available assets, rather than a mere
allocation of capital. Thus, I define a Universal Risk Budgeting scheme that
weights each risk budget (instead of each capital budget) by its historical
performance record (a la Cover). I prove that my scheme is mathematically
equivalent to a novel type of Cover and Ordentlich 1996 universal portfolio
that uses a new family of prior densities that have hitherto not appeared in
the literature on universal portfolio theory. I argue that my universal risk
budget, so-defined, is a potentially more perspicuous and flexible type of
universal portfolio; it allows the algorithmic trader to incorporate, with
advantage, his prior knowledge (or beliefs) about the particular covariance
structure of instantaneous asset returns. Say, if there is some dispersion in
the volatilities of the available assets, then the uniform (or Dirichlet)
priors that are standard in the literature will generate a dangerously lopsided
prior distribution over the possible risk budgets. In the author's opinion, the
proposed ""Garivaltis prior"" makes for a nice improvement on Cover's timeless
expert system (Cover 1991), that is properly agnostic and open (from the very
get-go) to different risk budgets. Inspired by Jamshidian 1992, the universal
risk budget is formulated as a new kind of exotic option in the continuous time
Black and Scholes 1973 market, with all the pleasure, elegance, and convenience
that that entails.",Universal Risk Budgeting,2021-06-18 13:06:02,Alex Garivaltis,"http://arxiv.org/abs/2106.10030v2, http://arxiv.org/pdf/2106.10030v2",q-fin.PM
35908,th,"We study a game-theoretic model of blockchain mining economies and show that
griefing, a practice according to which participants harm other participants at
some lesser cost to themselves, is a prevalent threat at its Nash equilibria.
The proof relies on a generalization of evolutionary stability to
non-homogeneous populations via griefing factors (ratios that measure network
losses relative to the deviator's own losses), which leads to a formal theoretical
argument for the dissipation of resources, consolidation of power and high
entry barriers that are currently observed in practice.
  A critical assumption in this type of analysis is that miners' decisions have
significant influence in aggregate network outcomes (such as network hashrate).
However, as networks grow larger, the miners' interaction more closely
resembles a distributed production economy or Fisher market and its stability
properties change. In this case, we derive a proportional response (PR) update
protocol which converges to market equilibria at which griefing is irrelevant.
Convergence holds for a wide range of miners' risk profiles and various degrees
of resource mobility between blockchains with different mining technologies.
Our empirical findings in a case study with four mineable cryptocurrencies
suggest that risk diversification, restricted mobility of resources (as
enforced by different mining technologies) and network growth, all are
contributing factors to the stability of the inherently volatile blockchain
ecosystem.",From Griefing to Stability in Blockchain Mining Economies,2021-06-23 14:54:26,"Yun Kuen Cheung, Stefanos Leonardos, Georgios Piliouras, Shyam Sridhar","http://arxiv.org/abs/2106.12332v1, http://arxiv.org/pdf/2106.12332v1",cs.GT
35909,th,"The literature on awareness modeling includes both syntax-free and
syntax-based frameworks. Heifetz, Meier \& Schipper (HMS) propose a lattice
model of awareness that is syntax-free. While their lattice approach is elegant
and intuitive, it precludes the simple option of relying on formal language to
induce lattices, and does not explicitly distinguish uncertainty from
unawareness. Contra this, the most prominent syntax-based solution, the
Fagin-Halpern (FH) model, accounts for this distinction and offers a simple
representation of awareness, but lacks the intuitiveness of the lattice
structure. Here, we combine these two approaches by providing a lattice of
Kripke models, induced by atom subset inclusion, in which uncertainty and
unawareness are separate. We show that our model is equivalent to both the HMS
and FH models by defining transformations between them that preserve satisfaction of
formulas of a language for explicit knowledge, and obtain completeness through
our and HMS' results. Lastly, we prove that the Kripke lattice model can be
shown equivalent to the FH model (when awareness is propositionally determined)
also with respect to the language of the Logic of General Awareness, for which
the FH model was originally proposed.",Awareness Logic: Kripke Lattices as a Middle Ground between Syntactic and Semantic Models,2021-06-24 13:04:44,"Gaia Belardinelli, Rasmus K. Rendsvig","http://arxiv.org/abs/2106.12868v1, http://arxiv.org/pdf/2106.12868v1",cs.AI
35910,th,"The interplay between exploration and exploitation in competitive multi-agent
learning is still far from being well understood. Motivated by this, we study
smooth Q-learning, a prototypical learning model that explicitly captures the
balance between game rewards and exploration costs. We show that Q-learning
always converges to the unique quantal-response equilibrium (QRE), the standard
solution concept for games under bounded rationality, in weighted zero-sum
polymatrix games with heterogeneous learning agents using positive exploration
rates. Complementing recent results about convergence in weighted potential
games, we show that fast convergence of Q-learning in competitive settings is
obtained regardless of the number of agents and without any need for parameter
fine-tuning. As showcased by our experiments in network zero-sum games, these
theoretical results provide the necessary guarantees for an algorithmic
approach to the currently open problem of equilibrium selection in competitive
multi-agent settings.",Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality,2021-06-24 14:43:38,"Stefanos Leonardos, Georgios Piliouras, Kelly Spendlove","http://arxiv.org/abs/2106.12928v1, http://arxiv.org/pdf/2106.12928v1",cs.GT
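The following Python sketch illustrates the kind of smooth (Boltzmann) Q-learning dynamics the abstract above studies, in the simplest possible setting: a deterministic, expected-payoff variant for two players in matching pennies. The payoff matrix, step size, and exploration rate are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

# Deterministic (expected-payoff) variant of smooth Q-learning in matching
# pennies. Each player keeps Q-values over its two actions, plays the softmax
# (Boltzmann) mix with exploration rate tau, and relaxes Q towards the expected
# payoff of each action against the opponent's current mix.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # row player's payoffs; column player gets -A

def softmax(q, tau):
    z = q / tau
    z -= z.max()             # numerical stability
    w = np.exp(z)
    return w / w.sum()

tau, alpha = 0.1, 0.05       # exploration rate and learning rate (assumed)
q1, q2 = np.zeros(2), np.zeros(2)

for _ in range(20_000):
    x, y = softmax(q1, tau), softmax(q2, tau)
    q1 += alpha * (A @ y - q1)        # row player's expected action payoffs
    q2 += alpha * (-(A.T @ x) - q2)   # zero-sum: column player's payoffs

print("row mix:", softmax(q1, tau))   # both mixes approach (0.5, 0.5),
print("col mix:", softmax(q2, tau))   # the unique QRE of matching pennies
```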
35911,th,"Understanding the convergence properties of learning dynamics in repeated
auctions is a timely and important question in the area of learning in
auctions, with numerous applications in, e.g., online advertising markets. This
work focuses on repeated first price auctions where bidders with fixed values
for the item learn to bid using mean-based algorithms -- a large class of
online learning algorithms that include popular no-regret algorithms such as
Multiplicative Weights Update and Follow the Perturbed Leader. We completely
characterize the learning dynamics of mean-based algorithms, in terms of
convergence to a Nash equilibrium of the auction, in two senses: (1)
time-average: the fraction of rounds where bidders play a Nash equilibrium
approaches 1 in the limit; (2) last-iterate: the mixed strategy profile of
bidders approaches a Nash equilibrium in the limit. Specifically, the results
depend on the number of bidders with the highest value:
  - If the number is at least three, the bidding dynamics almost surely converge
to a Nash equilibrium of the auction, both in time-average and in last-iterate.
  - If the number is two, the bidding dynamics almost surely converge to a Nash
equilibrium in time-average but not necessarily in last-iterate.
  - If the number is one, the bidding dynamics may fail to converge to a Nash
equilibrium in either time-average or last-iterate.
  Our discovery opens up new possibilities in the study of
convergence dynamics of learning algorithms.",Nash Convergence of Mean-Based Learning Algorithms in First Price Auctions,2021-10-08 09:01:27,"Xiaotie Deng, Xinyan Hu, Tao Lin, Weiqiang Zheng","http://dx.doi.org/10.1145/3485447.3512059, http://arxiv.org/abs/2110.03906v4, http://arxiv.org/pdf/2110.03906v4",cs.GT
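As a toy illustration of the mean-based dynamics studied above, the sketch below runs Multiplicative Weights Update on a discretized bid grid for two bidders sharing the same value; the bid grid, step size, and tie-breaking rule are assumptions for illustration rather than the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two bidders with the same value v repeatedly play a first-price auction on a
# discrete bid grid. Each runs Multiplicative Weights Update -- a mean-based
# algorithm -- crediting every bid with the payoff it would have earned against
# the opponent's realized bid.
v = 1.0
bids = np.linspace(0.0, v, 21)
eta = 0.1
cum = [np.zeros_like(bids), np.zeros_like(bids)]   # cumulative payoff per bid

def payoff(my_bid, other_bid):
    if my_bid > other_bid:
        return v - my_bid
    if my_bid < other_bid:
        return 0.0
    return 0.5 * (v - my_bid)                      # split ties

for _ in range(5000):
    mixes = []
    for c in cum:
        w = np.exp(eta * c - (eta * c).max())
        mixes.append(w / w.sum())
    chosen = [rng.choice(bids, p=m) for m in mixes]
    for i in range(2):
        cum[i] += np.array([payoff(b, chosen[1 - i]) for b in bids])

for i in range(2):
    w = np.exp(eta * cum[i] - (eta * cum[i]).max())
    print(f"bidder {i} modal bid:", bids[(w / w.sum()).argmax()])
```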
35912,th,"In this paper, we provide a novel and simple algorithm, Clairvoyant
Multiplicative Weights Updates (CMWU) for regret minimization in general games.
CMWU effectively corresponds to the standard MWU algorithm but where all
agents, when updating their mixed strategies, use the payoff profiles based on
tomorrow's behavior, i.e. the agents are clairvoyant. CMWU achieves constant
regret of $\ln(m)/\eta$ in all normal-form games with m actions and fixed
step-sizes $\eta$. Although CMWU encodes in its definition a fixed point
computation, which in principle could result in dynamics that are neither
computationally efficient nor uncoupled, we show that both of these issues can
be largely circumvented. Specifically, as long as the step-size $\eta$ is upper
bounded by $\frac{1}{(n-1)V}$, where $n$ is the number of agents and $[0,V]$ is
the payoff range, then the CMWU updates can be computed linearly fast via a
contraction map. This implementation results in an uncoupled online learning
dynamic that admits an $O(\log T)$-sparse sub-sequence where each agent
experiences at most $O(nV\log m)$ regret. This implies that the CMWU dynamics
converge with rate $O(nV \log m \log T / T)$ to a \textit{Coarse Correlated
Equilibrium}. The latter improves on the current state-of-the-art convergence
rate of \textit{uncoupled online learning dynamics}
\cite{daskalakis2021near,anagnostides2021near}.",Beyond Time-Average Convergence: Near-Optimal Uncoupled Online Learning via Clairvoyant Multiplicative Weights Update,2021-11-29 20:42:24,"Georgios Piliouras, Ryann Sim, Stratis Skoulakis","http://arxiv.org/abs/2111.14737v4, http://arxiv.org/pdf/2111.14737v4",cs.GT
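Here is a minimal sketch of the clairvoyant update described above, for a symmetric two-player game: each step solves the coupled MWU fixed point by straightforward iteration, which contracts when the step size respects the $1/((n-1)V)$ bound quoted in the abstract. The specific game and iteration counts are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Clairvoyant MWU in a symmetric two-player game (a prisoner's dilemma with
# payoffs in [0, V]). Each update uses tomorrow's strategy profile, obtained by
# iterating the coupled MWU map to its fixed point; the iteration contracts
# when eta <= 1 / ((n - 1) V).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])       # symmetric payoff matrix (own action = row)
n, V = 2, 5.0
eta = 1.0 / ((n - 1) * V)

def mwu(strategy, payoffs, eta):
    w = strategy * np.exp(eta * payoffs)
    return w / w.sum()

x = np.ones(2) / 2               # row player's mixed strategy
y = np.ones(2) / 2               # column player's mixed strategy
for _ in range(200):
    xp, yp = x.copy(), y.copy()  # guess for tomorrow's profile
    for _ in range(100):         # fixed-point (contraction) iteration
        xp_new = mwu(x, A @ yp, eta)
        yp_new = mwu(y, A @ xp, eta)
        gap = max(np.abs(xp_new - xp).max(), np.abs(yp_new - yp).max())
        xp, yp = xp_new, yp_new
        if gap < 1e-12:
            break
    x, y = xp, yp

print("row strategy:", x)        # both converge towards the dominant action
print("column strategy:", y)
```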
35914,th,"In this paper, we consider a discrete-time Stackelberg mean field game with a
leader and an infinite number of followers. The leader and the followers each
observe types privately that evolve as conditionally independent controlled
Markov processes. The leader commits to a dynamic policy and the followers best
respond to that policy and each other. Knowing that the followers would play a
mean field game based on her policy, the leader chooses a policy that maximizes
her reward. We refer to the resulting outcome as a Stackelberg mean field
equilibrium (SMFE). In this paper, we provide a master equation of this game
that allows one to compute all SMFE. Based on our framework, we consider two
numerical examples. First, we consider an epidemic model where the followers
get infected based on the mean field population. The leader chooses subsidies
for a vaccine to maximize social welfare and minimize vaccination costs. In the
second example, we consider a technology adoption game where the followers
decide to adopt a technology or a product and the leader decides the cost of
one product that maximizes her returns, which are proportional to the number of
people adopting that technology.",Master Equation for Discrete-Time Stackelberg Mean Field Games with single leader,2022-01-16 06:43:48,"Deepanshu Vasal, Randall Berry","http://arxiv.org/abs/2201.05959v1, http://arxiv.org/pdf/2201.05959v1",eess.SY
35915,th,"In selection processes such as hiring, promotion, and college admissions,
implicit bias toward socially-salient attributes such as race, gender, or
sexual orientation of candidates is known to produce persistent inequality and
reduce aggregate utility for the decision maker. Interventions such as the
Rooney Rule and its generalizations, which require the decision maker to select
at least a specified number of individuals from each affected group, have been
proposed to mitigate the adverse effects of implicit bias in selection. Recent
works have established that such lower-bound constraints can be very effective
in improving aggregate utility in the case when each individual belongs to at
most one affected group. However, in several settings, individuals may belong
to multiple affected groups and, consequently, face more extreme implicit bias
due to this intersectionality. We consider independently drawn utilities and
show that, in the intersectional case, the aforementioned non-intersectional
constraints can only recover part of the total utility achievable in the
absence of implicit bias. On the other hand, we show that if one includes
appropriate lower-bound constraints on the intersections, almost all the
utility achievable in the absence of implicit bias can be recovered. Thus,
intersectional constraints can offer a significant advantage over a
reductionist dimension-by-dimension non-intersectional approach to reducing
inequality.",Selection in the Presence of Implicit Bias: The Advantage of Intersectional Constraints,2022-02-03 19:21:50,"Anay Mehrotra, Bary S. R. Pradelski, Nisheeth K. Vishnoi","http://arxiv.org/abs/2202.01661v2, http://arxiv.org/pdf/2202.01661v2",cs.CY
35916,th,"Understanding the evolutionary stability of cooperation is a central problem
in biology, sociology, and economics. There exist only a few known mechanisms
that guarantee the existence of cooperation and its robustness to cheating.
Here, we introduce a new mechanism for the emergence of cooperation in the
presence of fluctuations. We consider agents whose wealth changes stochastically
in a multiplicative fashion. Each agent can share part of her wealth as public
good, which is equally distributed among all the agents. We show that, when
agents operate with long time-horizons, cooperation produces an advantage at the
individual level, as it effectively screens agents from the deleterious effect
of environmental fluctuations.",Stable cooperation emerges in stochastic multiplicative growth,2022-02-06 17:51:58,"Lorenzo Fant, Onofrio Mazzarisi, Emanuele Panizon, Jacopo Grilli","http://dx.doi.org/10.1103/PhysRevE.108.L012401, http://arxiv.org/abs/2202.02787v1, http://arxiv.org/pdf/2202.02787v1",q-bio.PE
35917,th,"The study of complexity and optimization in decision theory involves both
partial and complete characterizations of preferences over decision spaces in
terms of real-valued monotones. With this motivation, and following the recent
introduction of new classes of monotones, like injective monotones or strict
monotone multi-utilities, we present the classification of preordered spaces in
terms of both the existence and cardinality of real-valued monotones and the
cardinality of the quotient space. In particular, we take advantage of a
characterization of real-valued monotones in terms of separating families of
increasing sets in order to obtain a more complete classification consisting of
classes that are strictly different from each other. As a result, we gain new
insight into both complexity and optimization, and clarify their interplay in
preordered spaces.",The classification of preordered spaces in terms of monotones: complexity and optimization,2022-02-24 17:00:10,"Pedro Hack, Daniel A. Braun, Sebastian Gottwald","http://dx.doi.org/10.1007/s11238-022-09904-w, http://arxiv.org/abs/2202.12106v3, http://arxiv.org/pdf/2202.12106v3",math.CO
35918,th,"Lloyd Shapley's cooperative value allocation theory is a central concept in
game theory that is widely used in various fields to allocate resources, assess
individual contributions, and determine fairness. The Shapley value formula and
his four axioms that characterize it form the foundation of the theory.
  The Shapley value can be assigned only when all cooperative game players are
assumed to eventually form the grand coalition. The purpose of this paper is to
extend Shapley's theory to cover value allocation at every partial coalition
state.
  To achieve this, we first extend Shapley's axioms into a new set of five axioms
that can characterize value allocation at every partial coalition state, where
the allocation at the grand coalition coincides with the Shapley value. Second,
we present a stochastic path integral formula, where each path now represents a
general coalition process. This can be viewed as an extension of the Shapley
formula. We apply these concepts to provide a dynamic interpretation and
extension of the value allocation schemes of Shapley, Nash, Kohlberg and
Neyman.
  This generalization is made possible by taking into account Hodge calculus,
stochastic processes, and path integration of edge flows on graphs. We
recognize that such generalization is not limited to the coalition game graph.
As a result, we define Hodge allocation, a general allocation scheme that can
be applied to any cooperative multigraph and yield allocation values at any
cooperative stage.",Hodge allocation for cooperative rewards: a generalization of Shapley's cooperative value allocation theory via Hodge theory on graphs,2022-03-14 08:10:07,Tongseok Lim,"http://arxiv.org/abs/2203.06860v5, http://arxiv.org/pdf/2203.06860v5",math.PR
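For background on the classical object this paper generalizes, the sketch below computes the standard Shapley value by the permutation formula on a small glove game; it does not implement the paper's Hodge allocation or stochastic path integral, and the example game is an assumption for illustration.

```python
from itertools import permutations

# Classical Shapley value via the random-order (permutation) formula.
# v maps a frozenset of players to the worth of that coalition.
def shapley_value(players, v):
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    return {p: totals[p] / len(orders) for p in players}

# Example: three-player glove game. Players 1 and 2 hold left gloves, player 3
# holds a right glove; each matched (left, right) pair is worth 1.
def glove(coalition):
    return float(min(len(coalition & {1, 2}), len(coalition & {3})))

print(shapley_value([1, 2, 3], glove))   # {1: 1/6, 2: 1/6, 3: 2/3}
```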
35919,th,"This paper studies third-degree price discrimination (3PD) based on a random
sample of valuation and covariate data, where the covariate is continuous, and
the distribution of the data is unknown to the seller. The main results of this
paper are twofold. The first set of results is pricing strategy independent and
reveals the fundamental information-theoretic limitation of any data-based
pricing strategy in revenue generation for two cases: 3PD and uniform pricing.
The second set of results proposes the $K$-markets empirical revenue
maximization (ERM) strategy and shows that the $K$-markets ERM and the uniform
ERM strategies achieve the optimal rate of convergence in revenue to that
generated by their respective true-distribution 3PD and uniform pricing optima.
Our theoretical and numerical results suggest that the uniform (i.e.,
$1$-market) ERM strategy generates a larger revenue than the $K$-markets ERM
strategy when the sample size is small enough, and vice versa.",Information-theoretic limitations of data-based price discrimination,2022-04-27 09:33:37,"Haitian Xie, Ying Zhu, Denis Shishkin","http://arxiv.org/abs/2204.12723v4, http://arxiv.org/pdf/2204.12723v4",cs.GT
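To make the two strategies concrete, here is a hedged sketch of uniform ($1$-market) ERM and a $K$-markets ERM that bins a one-dimensional covariate into equal-width segments and prices each segment by empirical revenue maximization; the binning rule, price grid, and simulated data are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniform (1-market) ERM vs. K-markets ERM on simulated data where valuations
# increase with a continuous covariate.
def erm_price(values):
    """Grid price maximizing empirical revenue: price * P(valuation >= price)."""
    grid = np.unique(values)
    revenue = [p * np.mean(values >= p) for p in grid]
    return grid[int(np.argmax(revenue))]

def k_markets_erm(values, covariates, K):
    """Split the covariate range into K equal-width markets; run ERM in each."""
    edges = np.linspace(covariates.min(), covariates.max(), K + 1)
    prices = {}
    for k in range(K):
        mask = (covariates >= edges[k]) & (covariates <= edges[k + 1])
        if mask.any():
            prices[k] = erm_price(values[mask])
    return prices

x = rng.uniform(0.0, 1.0, size=2000)              # covariate
vals = rng.uniform(0.0, 1.0, size=2000) + x       # valuations increase in x

print("uniform ERM price :", erm_price(vals))
print("3-markets ERM     :", k_markets_erm(vals, x, K=3))
```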
35920,th,"Epidemics of infectious diseases posing a serious risk to human health have
occurred throughout history. During the ongoing SARS-CoV-2 epidemic there has
been much debate about policy, including how and when to impose restrictions on
behavior. Under such circumstances policymakers must balance a complex spectrum
of objectives, suggesting a need for quantitative tools. Whether health
services might be 'overwhelmed' has emerged as a key consideration yet formal
modelling of optimal policy has so far largely ignored this. Here we show how
costly interventions, such as taxes or subsidies on behaviour, can be used to
exactly align individuals' decision making with government preferences even
when these are not aligned. We assume that choices made by individuals give
rise to Nash equilibrium behavior. We focus on a situation in which the
capacity of the healthcare system to treat patients is limited and identify
conditions under which the disease dynamics respect the capacity limit. In
particular we find an extremely sharp drop in peak infections as the maximum
infection cost in the government's objective function is increased. This is in
marked contrast to the gradual reduction without government intervention. The
infection costs at which this switch occurs depend on how costly the
intervention is to the government. We find optimal interventions that are quite
different to the case when interventions are cost-free. Finally, we identify a
novel analytic solution for the Nash equilibrium behavior for constant
infection cost.",Rational social distancing policy during epidemics with limited healthcare capacity,2022-05-02 10:08:23,"Simon K. Schnyder, John J. Molina, Ryoichi Yamamoto, Matthew S. Turner","http://arxiv.org/abs/2205.00684v1, http://arxiv.org/pdf/2205.00684v1",econ.TH
35921,th,"We consider a general class of multi-agent games in networks, namely the
generalized vertex coloring games (G-VCGs), inspired by real-life applications
of the venue selection problem in events planning. Under some particular
mechanism, each agent receives a utility that depends on the current coloring
assignment; striving to maximize this utility, each agent is restricted to
local information and therefore self-organizes when choosing another color.
Our focus is on maximizing a utilitarian-looking welfare objective
function concerning the cumulative utilities across the network in a
decentralized fashion. Firstly, we investigate on a special class of the
G-VCGs, namely Identical Preference VCGs (IP-VCGs) which recovers the
rudimentary work by \cite{chaudhuri2008network}. We reveal its convergence even
under a completely greedy policy and completely synchronous settings, with a
stochastic bound on the converging rate provided. Secondly, regarding the
general G-VCGs, a greediness-preserved Metropolis-Hasting based policy is
proposed for each agent to initiate with the limited information and its
optimality under asynchronous settings is proved using theories from the
regular perturbed Markov processes. The policy was also empirically observed
to be robust under independently synchronous settings. Thirdly, in the spirit
of ``robust coloring'', we include an expected loss term in our objective
function to balance between the utilities and robustness. An optimal coloring
for this robust welfare optimization would be derived through a second-stage
MH-policy driven algorithm. Simulation experiments are given to showcase the
efficiency of our proposed strategy.",Utilitarian Welfare Optimization in the Generalized Vertex Coloring Games: An Implication to Venue Selection in Events Planning,2022-06-18 12:21:19,Zeyi Chen,"http://arxiv.org/abs/2206.09153v4, http://arxiv.org/pdf/2206.09153v4",cs.DM
35922,th,"This paper presents karma mechanisms, a novel approach to the repeated
allocation of a scarce resource among competing agents over an infinite time.
Examples include deciding which ride hailing trip requests to serve during peak
demand, granting the right of way in intersections or lane mergers, or
admitting internet content to a regulated fast channel. We study a simplified
yet insightful formulation of these problems where at every instant two agents
from a large population get randomly matched to compete over the resource. The
intuitive interpretation of a karma mechanism is ""If I give in now, I will be
rewarded in the future."" Agents compete in an auction-like setting where they
bid units of karma, which circulates directly among them and is self-contained
in the system. We demonstrate that this allows a society of self-interested
agents to achieve high levels of efficiency without resorting to a (possibly
problematic) monetary pricing of the resource. We model karma mechanisms as
dynamic population games and guarantee the existence of a stationary Nash
equilibrium. We then analyze the performance at the stationary Nash equilibrium
numerically. For the case of homogeneous agents, we compare different mechanism
design choices, showing that it is possible to achieve an efficient and ex-post
fair allocation when the agents are future aware. Finally, we test the
robustness against agent heterogeneity and propose remedies to some of the
observed phenomena via karma redistribution.",A self-contained karma economy for the dynamic allocation of common resources,2022-07-01 18:32:46,"Ezzat Elokda, Saverio Bolognani, Andrea Censi, Florian Dörfler, Emilio Frazzoli","http://dx.doi.org/10.1007/s13235-023-00503-0, http://arxiv.org/abs/2207.00495v3, http://arxiv.org/pdf/2207.00495v3",econ.TH
35938,th,"We propose a social welfare maximizing market mechanism for an energy
community that aggregates individual and community-shared energy resources
under a general net energy metering (NEM) policy. Referred to as Dynamic NEM
(D-NEM), the proposed mechanism dynamically sets the community NEM prices based
on aggregated community resources, including flexible consumption, storage, and
renewable generation. D-NEM guarantees a higher benefit to each community
member than possible outside the community, and no sub-community would be
better off departing from its parent community. D-NEM aligns each member's
incentive with that of the community such that each member maximizing
individual surplus under D-NEM results in maximum community social welfare.
Empirical studies compare the proposed mechanism with existing benchmarks,
demonstrating its welfare benefits, operational characteristics, and
responsiveness to NEM rates.",Dynamic Net Metering for Energy Communities,2023-06-21 09:03:07,"Ahmed S. Alahmed, Lang Tong","http://arxiv.org/abs/2306.13677v2, http://arxiv.org/pdf/2306.13677v2",eess.SY
35923,th,"We consider the problem of reforming an envy-free matching when each agent is
assigned a single item. Given an envy-free matching, we consider an operation
to exchange the item of an agent with an unassigned item preferred by the agent
that results in another envy-free matching. We repeat this operation as long as
we can. We prove that the resulting envy-free matching is uniquely determined
up to the choice of an initial envy-free matching, and can be found in
polynomial time. We call the resulting matching a reformist envy-free matching,
and then we study a shortest sequence to obtain the reformist envy-free
matching from an initial envy-free matching. We prove that a shortest sequence
is computationally hard to obtain even when each agent accepts at most four
items and each item is accepted by at most three agents. On the other hand, we
give polynomial-time algorithms when each agent accepts at most three items or
each item is accepted by at most two agents. Inapproximability and
fixed-parameter (in)tractability are also discussed.",Reforming an Envy-Free Matching,2022-07-06 16:03:49,"Takehiro Ito, Yuni Iwamasa, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Yuta Nozaki, Yoshio Okamoto, Kenta Ozeki","http://arxiv.org/abs/2207.02641v1, http://arxiv.org/pdf/2207.02641v1",cs.GT
35924,th,"Federated learning is typically considered a beneficial technology which
allows multiple agents to collaborate with each other, improve the accuracy of
their models, and solve problems which are otherwise too data-intensive /
expensive to be solved individually. However, under the expectation that other
agents will share their data, rational agents may be tempted to engage in
detrimental behavior such as free-riding where they contribute no data but
still enjoy an improved model. In this work, we propose a framework to analyze
the behavior of such rational data generators. We first show how a naive scheme
leads to catastrophic levels of free-riding where the benefits of data sharing
are completely eroded. Then, using ideas from contract theory, we introduce
accuracy shaping based mechanisms to maximize the amount of data generated by
each agent. These provably prevent free-riding without needing any payment
mechanism.",Mechanisms that Incentivize Data Sharing in Federated Learning,2022-07-11 01:36:52,"Sai Praneeth Karimireddy, Wenshuo Guo, Michael I. Jordan","http://arxiv.org/abs/2207.04557v1, http://arxiv.org/pdf/2207.04557v1",cs.GT
35925,th,"During an epidemic, the information available to individuals in the society
deeply influences their belief of the epidemic spread, and consequently the
preventive measures they take to stay safe from the infection. In this paper,
we develop a scalable framework for ascertaining the optimal information
disclosure a government must make to individuals in a networked society for the
purpose of epidemic containment. This information design problem is
complicated by the heterogeneous nature of the society, the positive
externalities faced by individuals, and the variety in the public response to
such disclosures. We use a networked public goods model to capture the
underlying societal structure. Our first main result is a structural
decomposition of the government's objectives into two independent components --
a component dependent on the utility function of individuals, and another
dependent on properties of the underlying network. Since the network dependent
term in this decomposition is unaffected by the signals sent by the government,
this characterization simplifies the problem of finding the optimal information
disclosure policies. We find explicit conditions, in terms of the risk aversion
and prudence, under which no disclosure, full disclosure, exaggeration and
downplay are the optimal policies. The structural decomposition results are
also helpful in studying other forms of interventions like incentive design and
network design.",A Scalable Bayesian Persuasion Framework for Epidemic Containment on Heterogeneous Networks,2022-07-23 21:57:39,"Shraddha Pathak, Ankur A. Kulkarni","http://arxiv.org/abs/2207.11578v1, http://arxiv.org/pdf/2207.11578v1",eess.SY
35926,th,"In many multi-agent settings, participants can form teams to achieve
collective outcomes that may far surpass their individual capabilities.
Measuring the relative contributions of agents and allocating them shares of
the reward that promote long-lasting cooperation are difficult tasks.
Cooperative game theory offers solution concepts identifying distribution
schemes, such as the Shapley value, that fairly reflect the contribution of
individuals to the performance of the team or the Core, which reduces the
incentive of agents to abandon their team. Applications of such methods include
identifying influential features and sharing the costs of joint ventures or
team formation. Unfortunately, using these solutions requires tackling a
computational barrier as they are hard to compute, even in restricted settings.
In this work, we show how cooperative game-theoretic solutions can be distilled
into a learned model by training neural networks to propose fair and stable
payoff allocations. We show that our approach creates models that can
generalize to games far from the training distribution and can predict
solutions for more players than observed during training. An important
application of our framework is Explainable AI: our approach can be used to
speed-up Shapley value computations on many instances.",Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members,2022-08-18 15:33:09,"Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman","http://arxiv.org/abs/2208.08798v1, http://arxiv.org/pdf/2208.08798v1",cs.LG
35927,th,"What is the purpose of pre-analysis plans, and how should they be designed?
We propose a principal-agent model where a decision-maker relies on selective
but truthful reports by an analyst. The analyst has data access and
non-aligned objectives. In this model, the implementation of statistical
decision rules (tests, estimators) requires an incentive-compatible mechanism.
We first characterize which decision rules can be implemented. We then
characterize optimal statistical decision rules subject to implementability. We
show that implementation requires pre-analysis plans. Focussing specifically on
hypothesis tests, we show that optimal rejection rules pre-register a valid
test for the case when all data is reported, and make worst-case assumptions
about unreported data. Optimal tests can be found as a solution to a
linear-programming problem.",Optimal Pre-Analysis Plans: Statistical Decisions Subject to Implementability,2022-08-20 11:54:39,"Maximilian Kasy, Jann Spiess","http://arxiv.org/abs/2208.09638v2, http://arxiv.org/pdf/2208.09638v2",econ.EM
35928,th,"Sending and receiving signals is ubiquitous in the living world. It includes
everything from individual molecules triggering complex metabolic cascades, to
animals using signals to alert their group to the presence of predators. When
communication involves common interest, simple sender-receiver games show how
reliable signaling can emerge and evolve to transmit information effectively.
These games have been analyzed extensively, with some work investigating the
role of static network structure on information transfer. However, no existing
work has examined the coevolution of strategy and network structure in
sender-receiver games. Here we show that coevolution is sufficient to generate
the endogenous formation of distinct groups from an initially homogeneous
population. It also allows for the emergence of novel ``hybrid'' signaling
groups that have not previously been considered or demonstrated in theory or
nature. Hybrid groups are composed of different complementary signaling
behaviors that rely on evolved network structure to achieve effective
communication. Without this structure, such groups would normally fail to
effectively communicate. Our findings pertain to all common interest signaling
games, are robust across many parameters, and mitigate known problems of
inefficient communication. Our work generates new insights for the theory of
adaptive behavior, signaling, and group formation in natural and social systems
across a wide range of environments in which changing network structure is
common. We discuss implications for research on metabolic networks, among
neurons, proteins, and social organisms.",Spontaneous emergence of groups and signaling diversity in dynamic networks,2022-10-22 17:35:54,"Zachary Fulker, Patrick Forber, Rory Smead, Christoph Riedl","http://arxiv.org/abs/2210.17309v1, http://arxiv.org/pdf/2210.17309v1",cs.SI
35929,th,"The rise of big data analytics has automated the decision-making of companies
and increased supply chain agility. In this paper, we study the supply chain
contract design problem faced by a data-driven supplier who needs to respond to
the inventory decisions of the downstream retailer. Both the supplier and the
retailer are uncertain about the market demand and need to learn about it
sequentially. The goal for the supplier is to develop data-driven pricing
policies with sublinear regret bounds under a wide range of possible retailer
inventory policies for a fixed time horizon.
  To capture the dynamics induced by the retailer's learning policy, we first
make a connection to non-stationary online learning by following the notion of
variation budget. The variation budget quantifies the impact of the retailer's
learning strategy on the supplier's decision-making. We then propose dynamic
pricing policies for the supplier for both discrete and continuous demand. We
also note that our proposed pricing policy only requires access to the support
of the demand distribution, but critically, does not require the supplier to
have any prior knowledge about the retailer's learning policy or the demand
realizations. We examine several well-known data-driven policies for the
retailer, including sample average approximation, distributionally robust
optimization, and parametric approaches, and show that our pricing policies
lead to sublinear regret bounds in all these cases.
  At the managerial level, we answer affirmatively that there is a pricing
policy with a sublinear regret bound under a wide range of retailer's learning
policies, even though she faces a learning retailer and an unknown demand
distribution. Our work also provides a novel perspective in data-driven
operations management where the principal has to learn to react to the learning
policies employed by other agents in the system.",Learning to Price Supply Chain Contracts against a Learning Retailer,2022-11-02 07:00:47,"Xuejun Zhao, Ruihao Zhu, William B. Haskell","http://arxiv.org/abs/2211.04586v1, http://arxiv.org/pdf/2211.04586v1",cs.LG
35930,th,"Following the solution to the One-Round Voronoi Game in arXiv:2011.13275, we
naturally may want to consider similar games based upon the competitive
locating of points and subsequent dividing of territories. In order to appease
the tears of White (the first player) after they have potentially been tricked
into going first in a game of point-placement, an alternative game (or rather,
an extension of the Voronoi game) is the Stackelberg game where all is not lost
if Black (the second player) gains over half of the contested area. It turns
out that plenty of results can be transferred from the One-Round Voronoi Game and
what remains to be explored for the Stackelberg game is how best White can
mitigate the damage of Black's placements. Since significant weaknesses in
certain arrangements were outlined in arXiv:2011.13275, we shall first consider
arrangements that still satisfy these results (namely, White plays a certain
grid arrangement) and then explore how Black can best exploit these positions.",The Stackelberg Game: responses to regular strategies,2022-11-11 23:17:26,Thomas Byrne,"http://arxiv.org/abs/2211.06472v1, http://arxiv.org/pdf/2211.06472v1",cs.CG
35931,th,"We consider the problem of determining a binary ground truth using advice
from a group of independent reviewers (experts) who express their guess about a
ground truth correctly with some independent probability (competence). In this
setting, when all reviewers are competent (competence greater than one-half),
the Condorcet Jury Theorem tells us that adding more reviewers increases the
overall accuracy, and if all competences are known, then there exists an
optimal weighting of the reviewers. However, in practical settings, reviewers
may be noisy or incompetent, i.e., competence below half, and the number of
experts may be small, so the asymptotic Condorcet Jury Theorem is not
practically relevant. In such cases we explore appointing one or more chairs
(judges) who determine the weight of each reviewer for aggregation, creating
multiple levels. However, these chairs may be unable to correctly identify the
competence of the reviewers they oversee, and therefore unable to compute the
optimal weighting. We give conditions when a set of chairs is able to weight
the reviewers optimally, and depending on the competence distribution of the
agents, give results about when it is better to have more chairs or more
reviewers. Through numerical simulations we show that in some cases it is
better to have more chairs, but in many cases it is better to have more
reviewers.",Who Reviews The Reviewers? A Multi-Level Jury Problem,2022-11-15 23:47:14,"Ben Abramowitz, Omer Lev, Nicholas Mattei","http://arxiv.org/abs/2211.08494v2, http://arxiv.org/pdf/2211.08494v2",cs.LG
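As background for the "optimal weighting" mentioned above, the sketch below compares uniform majority voting with the classical log-odds weighting $w_i = \log(p_i/(1-p_i))$ for independent reviewers of known competence; the competence vector is an illustrative assumption, and the paper's chair/judge structure is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Weighted majority voting by independent reviewers with known competences p_i.
# The classical optimal weights are the log-odds w_i = log(p_i / (1 - p_i)),
# which down-weight (and, for p_i < 1/2, flip) unreliable reviewers.
def weighted_majority_accuracy(p, w, trials=200_000):
    correct = rng.random((trials, len(p))) < p      # True where a reviewer is right
    signed = np.where(correct, 1.0, -1.0)
    return float(np.mean(signed @ w > 0))

p = np.array([0.9, 0.6, 0.6, 0.55, 0.4])            # one reviewer below one-half
w_uniform = np.ones_like(p)
w_logodds = np.log(p / (1.0 - p))

print("uniform majority accuracy :", weighted_majority_accuracy(p, w_uniform))
print("log-odds weighted accuracy:", weighted_majority_accuracy(p, w_logodds))
```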
35932,th,"It is shown in recent studies that in a Stackelberg game the follower can
manipulate the leader by deviating from their true best-response behavior. Such
manipulations are computationally tractable and can be highly beneficial for
the follower. Meanwhile, they may result in significant payoff losses for the
leader, sometimes completely defeating their first-mover advantage. While these
findings are a warning to commitment optimizers, the risk they indicate appears
to be alleviated to some extent by a strict information advantage that the
manipulations rely on. That is, the follower knows the full information about
both players'
payoffs whereas the leader only knows their own payoffs. In this paper, we
study the manipulation problem with this information advantage relaxed. We
consider the scenario where the follower is not given any information about the
leader's payoffs to begin with but has to learn to manipulate by interacting
with the leader. The follower can gather necessary information by querying the
leader's optimal commitments against contrived best-response behaviors. Our
results indicate that the information advantage is not entirely indispensable
to the follower's manipulations: the follower can learn the optimal way to
manipulate in polynomial time with polynomially many queries of the leader's
optimal commitment.",Learning to Manipulate a Commitment Optimizer,2023-02-23 10:39:37,"Yurong Chen, Xiaotie Deng, Jiarui Gan, Yuhao Li","http://arxiv.org/abs/2302.11829v2, http://arxiv.org/pdf/2302.11829v2",cs.GT
35933,th,"We study the incentivized information acquisition problem, where a principal
hires an agent to gather information on her behalf. Such a problem is modeled
as a Stackelberg game between the principal and the agent, where the principal
announces a scoring rule that specifies the payment, and the agent then
chooses an effort level that maximizes her own profit and reports the
information. We study the online setting of such a problem from the principal's
perspective, i.e., designing the optimal scoring rule by repeatedly interacting
with the strategic agent. We design a provably sample efficient algorithm that
tailors the UCB algorithm (Auer et al., 2002) to our model, which achieves a
sublinear $T^{2/3}$-regret after $T$ iterations. Our algorithm features a
delicate estimation procedure for the optimal profit of the principal, and a
conservative correction scheme that ensures the desired agent's actions are
incentivized. Furthermore, a key feature of our regret bound is that it is
independent of the number of states of the environment.",Learning to Incentivize Information Acquisition: Proper Scoring Rules Meet Principal-Agent Model,2023-03-15 16:40:16,"Siyu Chen, Jibang Wu, Yifan Wu, Zhuoran Yang","http://arxiv.org/abs/2303.08613v2, http://arxiv.org/pdf/2303.08613v2",cs.LG
35934,th,"We introduce a decentralized mechanism for pricing and exchanging
alternatives constrained by transaction costs. We characterize the
time-invariant solutions of a heat equation involving a (weighted) Tarski
Laplacian operator, defined for max-plus matrix-weighted graphs, as approximate
equilibria of the trading system. We study algebraic properties of the solution
sets as well as convergence behavior of the dynamical system. We apply these
tools to the ""economic problem"" of allocating scarce resources among competing
uses. Our theory suggests differences in competitive equilibrium, bargaining,
or cost-benefit analysis, depending on the context, are largely due to
differences in the way that transaction costs are incorporated into the
decision-making process. We present numerical simulations of the
synchronization algorithm (RRAggU), demonstrating our theoretical findings.",Max-Plus Synchronization in Decentralized Trading Systems,2023-04-01 06:07:49,"Hans Riess, Michael Munger, Michael M. Zavlanos","http://arxiv.org/abs/2304.00210v2, http://arxiv.org/pdf/2304.00210v2",cs.GT
35935,th,"The creator economy has revolutionized the way individuals can profit through
online platforms. In this paper, we initiate the study of online learning in
the creator economy by modeling the creator economy as a three-party game
between the users, platform, and content creators, with the platform
interacting with the content creator under a principal-agent model through
contracts to encourage better content. Additionally, the platform interacts
with the users to recommend new content, receive an evaluation, and ultimately
profit from the content, which can be modeled as a recommender system.
  Our study aims to explore how the platform can jointly optimize the contract
and recommender system to maximize the utility in an online learning fashion.
We primarily analyze and compare two families of contracts: return-based
contracts and feature-based contracts. Return-based contracts pay the content
creator a fraction of the reward the platform gains. In contrast, feature-based
contracts pay the content creator based on the quality or features of the
content, regardless of the reward the platform receives. We show that under
smoothness assumptions, the joint optimization of return-based contracts and
recommendation policy achieves regret $\Theta(T^{2/3})$. For the
feature-based contract, we introduce a definition of intrinsic dimension $d$ to
characterize the hardness of learning the contract and provide an upper bound
on the regret $\mathcal{O}(T^{(d+1)/(d+2)})$. The upper bound is tight for the
linear family.",Online Learning in a Creator Economy,2023-05-19 04:58:13,"Banghua Zhu, Sai Praneeth Karimireddy, Jiantao Jiao, Michael I. Jordan","http://arxiv.org/abs/2305.11381v1, http://arxiv.org/pdf/2305.11381v1",cs.GT
35936,th,"For a federated learning model to perform well, it is crucial to have a
diverse and representative dataset. However, the data contributors may only be
concerned with the performance on a specific subset of the population, which
may not reflect the diversity of the wider population. This creates a tension
between the principal (the FL platform designer) who cares about global
performance and the agents (the data collectors) who care about local
performance. In this work, we formulate this tension as a game between the
principal and multiple agents, and focus on the linear experiment design
problem to formally study their interaction. We show that the statistical
criterion used to quantify the diversity of the data, as well as the choice of
the federated learning algorithm used, has a significant effect on the
resulting equilibrium. We leverage this to design simple optimal federated
learning mechanisms that encourage data collectors to contribute data
representative of the global population, thereby maximizing global performance.",Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning,2023-06-09 02:38:25,"Baihe Huang, Sai Praneeth Karimireddy, Michael I. Jordan","http://arxiv.org/abs/2306.05592v1, http://arxiv.org/pdf/2306.05592v1",cs.GT
35937,th,"Cooperative dynamics are central to our understanding of many phenomena in
living and complex systems, including the transition to multicellularity, the
emergence of eusociality in insect colonies, and the development of
full-fledged human societies. However, we lack a universal mechanism to explain
the emergence of cooperation across length scales and across species that also
scales to large populations of individuals. We present a novel framework for modelling
cooperation games with an arbitrary number of players by combining reaction
networks, methods from quantum mechanics applied to stochastic complex systems,
game theory and stochastic simulations of molecular reactions. Using this
framework, we propose a novel and robust mechanism based on risk aversion that
leads to cooperative behaviour in population games. Rather than individuals
seeking to maximise payouts in the long run, individuals seek to obtain a
minimum set of resources with a given level of confidence and in a limited time
span. We explicitly show that this mechanism leads to the emergence of new Nash
equilibria in a wide range of cooperation games. Our results suggest that risk
aversion is a viable mechanism to explain the emergence of cooperation in a
variety of contexts and with an arbitrary number of individuals greater than
three.",Risk aversion promotes cooperation,2023-06-09 18:36:07,"Jay Armas, Wout Merbis, Janusz Meylahn, Soroush Rafiee Rad, Mauricio J. del Razo","http://arxiv.org/abs/2306.05971v1, http://arxiv.org/pdf/2306.05971v1",physics.soc-ph
35939,th,"The incentive-compatibility properties of blockchain transaction fee
mechanisms have been investigated with *passive* block producers that are
motivated purely by the net rewards earned at the consensus layer. This paper
introduces a model of *active* block producers that have their own private
valuations for blocks (representing, for example, additional value derived from
the application layer). The block producer surplus in our model can be
interpreted as one of the more common colloquial meanings of the term ``MEV.''
  The main results of this paper show that transaction fee mechanism design is
fundamentally more difficult with active block producers than with passive
ones: with active block producers, no non-trivial or approximately
welfare-maximizing transaction fee mechanism can be incentive-compatible for
both users and block producers. These results can be interpreted as a
mathematical justification for the current interest in augmenting transaction
fee mechanisms with additional components such as order flow auctions, block
producer competition, trusted hardware, or cryptographic techniques.",Transaction Fee Mechanism Design with Active Block Producers,2023-07-04 15:35:42,"Maryam Bahrani, Pranav Garimidi, Tim Roughgarden","http://arxiv.org/abs/2307.01686v2, http://arxiv.org/pdf/2307.01686v2",cs.GT
35940,th,"This paper introduces a simulation algorithm for evaluating the
log-likelihood function of a large supermodular binary-action game. Covered
examples include (certain types of) peer effect, technology adoption, strategic
network formation, and multi-market entry games. More generally, the algorithm
facilitates simulated maximum likelihood (SML) estimation of games with large
numbers of players, $T$, and/or many binary actions per player, $M$ (e.g.,
games with tens of thousands of strategic actions, $TM=O(10^4)$). In such cases
the likelihood of the observed pure strategy combination is typically (i) very
small and (ii) a $TM$-fold integral whose region of integration has a complicated
geometry. Direct numerical integration, as well as accept-reject Monte Carlo
integration, are computationally impractical in such settings. In contrast, we
introduce a novel importance sampling algorithm which allows for accurate
likelihood simulation with modest numbers of simulation draws.",Scenario Sampling for Large Supermodular Games,2023-07-21 21:51:32,"Bryan S. Graham, Andrin Pelican","http://arxiv.org/abs/2307.11857v1, http://arxiv.org/pdf/2307.11857v1",econ.EM
35941,th,"We introduce generative interpretation, a new approach to estimating
contractual meaning using large language models. As AI triumphalism is the
order of the day, we proceed by way of grounded case studies, each illustrating
the capabilities of these novel tools in distinct ways. Taking well-known
contracts opinions, and sourcing the actual agreements that they adjudicated,
we show that AI models can help factfinders ascertain ordinary meaning in
context, quantify ambiguity, and fill gaps in parties' agreements. We also
illustrate how models can calculate the probative value of individual pieces of
extrinsic evidence. After offering best practices for the use of these models
given their limitations, we consider their implications for judicial practice
and contract theory. Using LLMs permits courts to estimate what the parties
intended cheaply and accurately, and as such generative interpretation
unsettles the current interpretative stalemate. Their use responds to
efficiency-minded textualists and justice-oriented contextualists, who argue
about whether parties will prefer cost and certainty or accuracy and fairness.
Parties--and courts--would prefer a middle path, in which adjudicators strive
to predict what the contract really meant, admitting just enough context to
approximate reality while avoiding unguided and biased assimilation of
evidence. As generative interpretation offers this possibility, we argue it can
become the new workhorse of contractual interpretation.",Generative Interpretation,2023-08-14 05:59:27,"Yonathan A. Arbel, David Hoffman","http://arxiv.org/abs/2308.06907v1, http://arxiv.org/pdf/2308.06907v1",cs.CL
35942,th,"This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning.",Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing,2023-09-12 03:54:15,"Nunzio Lorè, Babak Heydari","http://arxiv.org/abs/2309.05898v1, http://arxiv.org/pdf/2309.05898v1",cs.GT
35943,th,"The practice of marriage is an understudied phenomenon in behavioural
sciences despite being ubiquitous across human cultures. This modelling paper
shows that replacing distant direct kin with in-laws increases the
interconnectedness of the family social network graph, which allows more
cooperative and larger groups. In this framing, marriage can be seen as a
social technology that reduces free-riding within collaborative groups. This
approach offers a solution to the puzzle of why our species has this particular
form of regulating mating behaviour, uniquely among pair-bonded animals.",Network Ecology of Marriage,2023-08-06 10:07:51,Tamas David-Barrett,"http://arxiv.org/abs/2310.05928v1, http://arxiv.org/pdf/2310.05928v1",physics.soc-ph
35944,th,"Edge device participation in federated learning (FL) has typically been
studied under the lens of device-server communication (e.g., device dropout)
and assumes an undying desire from edge devices to participate in FL. As a
result, current FL frameworks are flawed when implemented in real-world
settings, with many encountering the free-rider problem. In a step to push FL
towards realistic settings, we propose RealFM: the first truly federated
mechanism which (1) realistically models device utility, (2) incentivizes data
contribution and device participation, and (3) provably removes the free-rider
phenomena. RealFM does not require data sharing and allows for a non-linear
relationship between model accuracy and utility, which improves the utility
gained by the server and participating devices compared to non-participating
devices as well as devices participating in other FL mechanisms. On real-world
data, RealFM improves device and server utility, as well as data contribution,
by up to 3 orders of magnitude and 7x, respectively, compared to baseline mechanisms.
35945,th,"We present a nonparametric statistical test for determining whether an agent
is following a given mixed strategy in a repeated strategic-form game given
samples of the agent's play. This involves two components: determining whether
the agent's frequencies of pure strategies are sufficiently close to the target
frequencies, and determining whether the pure strategies selected are
independent between different game iterations. Our integrated test involves
applying a chi-squared goodness of fit test for the first component and a
generalized Wald-Wolfowitz runs test for the second component. The results from
both tests are combined using Bonferroni correction to produce a complete test
for a given significance level $\alpha$. We applied the test to publicly
available data of human rock-paper-scissors play. The data consists of 50
iterations of play for 500 human players. We test with a null hypothesis that
the players are following a uniform random strategy independently at each game
iteration. Using a significance level of $\alpha = 0.05$, we conclude that 305
(61%) of the subjects are following the target strategy.",Nonparametric Strategy Test,2023-12-17 15:09:42,Sam Ganzfried,"http://arxiv.org/abs/2312.10695v2, http://arxiv.org/pdf/2312.10695v2",stat.ME
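A rough sketch of the two-component test described above, with one simplification: the independence component uses a Monte Carlo permutation null for the number of runs instead of the generalized Wald-Wolfowitz statistic. The function names and simulated data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(3)

def num_runs(seq):
    """Number of runs (maximal blocks of identical symbols) in a sequence."""
    return 1 + int(np.sum(seq[1:] != seq[:-1]))

def strategy_test(actions, target_probs, alpha=0.05, mc=10_000):
    actions = np.asarray(actions)
    k = len(target_probs)
    counts = np.bincount(actions, minlength=k)
    # Component 1: chi-squared goodness of fit of frequencies vs. the target mix.
    p_freq = chisquare(counts, f_exp=len(actions) * np.asarray(target_probs)).pvalue
    # Component 2: independence across iterations, via a permutation null for
    # the runs statistic (a stand-in for the generalized runs test).
    observed = num_runs(actions)
    null_runs = np.array([num_runs(rng.permutation(actions)) for _ in range(mc)])
    p_runs = float(np.mean(np.abs(null_runs - null_runs.mean())
                           >= abs(observed - null_runs.mean())))
    # Bonferroni correction across the two components.
    reject = (p_freq < alpha / 2) or (p_runs < alpha / 2)
    return p_freq, p_runs, reject

# 50 iterations of rock-paper-scissors tested against the uniform i.i.d. null.
play = rng.integers(0, 3, size=50)
print(strategy_test(play, [1 / 3, 1 / 3, 1 / 3]))
```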
35946,th,"Nash equilibrium is one of the most influential solution concepts in game
theory. With the development of computer science and artificial intelligence,
there is an increasing demand on Nash equilibrium computation, especially for
Internet economics and multi-agent learning. This paper reviews various
algorithms computing the Nash equilibrium and its approximation solutions in
finite normal-form games from both theoretical and empirical perspectives. For
the theoretical part, we classify algorithms in the literature and present
basic ideas on algorithm design and analysis. For the empirical part, we
present a comprehensive comparison on the algorithms in the literature over
different kinds of games. Based on these results, we provide practical
suggestions on implementations and uses of these algorithms. Finally, we
present a series of open problems from both theoretical and practical
considerations.",A survey on algorithms for Nash equilibria in finite normal-form games,2023-12-18 13:00:47,"Hanyu Li, Wenhan Huang, Zhijian Duan, David Henry Mguni, Kun Shao, Jun Wang, Xiaotie Deng","http://arxiv.org/abs/2312.11063v1, http://arxiv.org/pdf/2312.11063v1",cs.GT
35947,th,"In this paper, a mathematically rigorous solution overturns existing wisdom
regarding New Keynesian Dynamic Stochastic General Equilibrium. I develop a
formal concept of stochastic equilibrium. I prove uniqueness and necessity,
when agents are patient, across a wide class of dynamic stochastic models.
Existence depends on appropriately specified eigenvalue conditions. Otherwise,
no solution of any kind exists. I construct the equilibrium for the benchmark
Calvo New Keynesian. I provide novel comparative statics with the
non-stochastic model of independent mathematical interest. I uncover a
bifurcation between neighbouring stochastic systems and approximations taken
from the Zero Inflation Non-Stochastic Steady State (ZINSS). The correct
Phillips curve agrees with the zero limit from the trend inflation framework.
It contains a large lagged inflation coefficient and a small response to
expected inflation. The response to the output gap is always muted and is zero
at standard parameters. A neutrality result is presented to explain why and to
align Calvo with Taylor pricing. Present and lagged demand shocks enter the
Phillips curve so there is no Divine Coincidence and the system is identified
from structural shocks alone. The lagged inflation slope is increasing in the
inflation response, embodying substantive policy trade-offs. The Taylor
principle is reversed, inactive settings are necessary for existence, pointing
towards inertial policy. The observational equivalence idea of the Lucas
critique is disproven. The bifurcation results from the breakdown of the
constraints implied by lagged nominal rigidity, associated with cross-equation
cancellation possible only at ZINSS. There is a dual relationship between
restrictions on the econometrician and constraints on repricing firms. Thus if
the model is correct, goodness of fit will jump.",Stochastic Equilibrium the Lucas Critique and Keynesian Economics,2023-12-24 01:59:33,David Staines,"http://arxiv.org/abs/2312.16214v1, http://arxiv.org/pdf/2312.16214v1",econ.TH
35948,th,"We study team decision problems where communication is not possible, but
coordination among team members can be realized via signals in a shared
environment. We consider a variety of decision problems that differ in what
team members know about one another's actions and knowledge. For each type of
decision problem, we investigate how different assumptions on the available
signals affect team performance. Specifically, we consider the cases of
perfectly correlated, i.i.d., and exchangeable classical signals, as well as
the case of quantum signals. We find that, whereas in perfect-recall trees
(Kuhn [1950], [1953]) no type of signal improves performance, in
imperfect-recall trees quantum signals may bring an improvement. Isbell [1957]
proved that in non-Kuhn trees, classical i.i.d. signals may improve
performance. We show that further improvement may be possible by use of
classical exchangeable or quantum signals. We include an example of the effect
of quantum signals in the context of high-frequency trading.",Team Decision Problems with Classical and Quantum Signals,2011-07-01 17:32:15,"Adam Brandenburger, Pierfrancesco La Mura","http://dx.doi.org/10.1098/rsta.2015.0096, http://arxiv.org/abs/1107.0237v3, http://arxiv.org/pdf/1107.0237v3",quant-ph
35949,th,"This paper derives a robust on-line equity trading algorithm that achieves
the greatest possible percentage of the final wealth of the best pairs
rebalancing rule in hindsight. A pairs rebalancing rule chooses some pair of
stocks in the market and then perpetually executes rebalancing trades so as to
maintain a target fraction of wealth in each of the two. After each discrete
market fluctuation, a pairs rebalancing rule will sell a precise amount of the
outperforming stock and put the proceeds into the underperforming stock. Under
typical conditions, in hindsight one can find pairs rebalancing rules that
would have spectacularly beaten the market. Our trading strategy, which extends
Ordentlich and Cover's (1998) ""max-min universal portfolio,"" guarantees to
achieve an acceptable percentage of the hindsight-optimized wealth, a
percentage which tends to zero at a slow (polynomial) rate. This means that on
a long enough investment horizon, the trader can enforce a compound-annual
growth rate that is arbitrarily close to that of the best pairs rebalancing
rule in hindsight. The strategy will ""beat the market asymptotically"" if there
turns out to exist a pairs rebalancing rule that grows capital at a higher
asymptotic rate than the market index. The advantages of our algorithm over the
Ordentlich and Cover (1998) strategy are twofold. First, their strategy is
impossible to compute in practice. Second, in considering the more modest
benchmark (instead of the best all-stock rebalancing rule in hindsight), we
reduce the ""cost of universality"" and achieve a higher learning rate.",Super-Replication of the Best Pairs Trade in Hindsight,2018-10-05 01:30:01,Alex Garivaltis,"http://arxiv.org/abs/1810.02444v4, http://arxiv.org/pdf/1810.02444v4",q-fin.PM
35950,th,"This paper prices and replicates the financial derivative whose payoff at $T$
is the wealth that would have accrued to a $\$1$ deposit into the best
continuously-rebalanced portfolio (or fixed-fraction betting scheme) determined
in hindsight. For the single-stock Black-Scholes market, Ordentlich and Cover
(1998) only priced this derivative at time-0, giving
$C_0=1+\sigma\sqrt{T/(2\pi)}$. Of course, the general time-$t$ price is not
equal to $1+\sigma\sqrt{(T-t)/(2\pi)}$. I complete the Ordentlich-Cover (1998)
analysis by deriving the price at any time $t$. By contrast, I also study the
more natural case of the best levered rebalancing rule in hindsight. This
yields $C(S,t)=\sqrt{T/t}\cdot\,\exp\{rt+\sigma^2b(S,t)^2\cdot t/2\}$, where
$b(S,t)$ is the best rebalancing rule in hindsight over the observed history
$[0,t]$. I show that the replicating strategy amounts to betting the fraction
$b(S,t)$ of wealth on the stock over the interval $[t,t+dt].$ This fact holds
for the general market with $n$ correlated stocks in geometric Brownian motion:
we get $C(S,t)=(T/t)^{n/2}\exp(rt+b'\Sigma b\cdot t/2)$, where $\Sigma$ is the
covariance of instantaneous returns per unit time. This result matches the
$\mathcal{O}(T^{n/2})$ ""cost of universality"" derived by Cover in his
""universal portfolio theory"" (1986, 1991, 1996, 1998), which super-replicates
the same derivative in discrete-time. The replicating strategy compounds its
money at the same asymptotic rate as the best levered rebalancing rule in
hindsight, thereby beating the market asymptotically. Naturally enough, we find
that the American-style version of Cover's Derivative is never exercised early
in equilibrium.",Exact Replication of the Best Rebalancing Rule in Hindsight,2018-10-05 04:36:19,Alex Garivaltis,"http://arxiv.org/abs/1810.02485v2, http://arxiv.org/pdf/1810.02485v2",q-fin.PR
35951,th,"This paper studies a two-person trading game in continuous time that
generalizes Garivaltis (2018) to allow for stock prices that both jump and
diffuse. Analogous to Bell and Cover (1988) in discrete time, the players start
by choosing fair randomizations of the initial dollar, by exchanging it for a
random wealth whose mean is at most 1. Each player then deposits the resulting
capital into some continuously-rebalanced portfolio that must be adhered to
over $[0,t]$. We solve the corresponding `investment $\phi$-game,' namely the
zero-sum game with payoff kernel
$\mathbb{E}[\phi\{\textbf{W}_1V_t(b)/(\textbf{W}_2V_t(c))\}]$, where
$\textbf{W}_i$ is player $i$'s fair randomization, $V_t(b)$ is the final wealth
that accrues to a one dollar deposit into the rebalancing rule $b$, and
$\phi(\bullet)$ is any increasing function meant to measure relative
performance. We show that the unique saddle point is for both players to use
the (leveraged) Kelly rule for jump diffusions, which is ordinarily defined by
maximizing the asymptotic almost-sure continuously-compounded capital growth
rate. Thus, the Kelly rule for jump diffusions is the correct behavior for
practically anybody who wants to outperform other traders (on any time frame)
with respect to practically any measure of relative performance.",Game-Theoretic Optimal Portfolios for Jump Diffusions,2018-12-11 21:43:09,Alex Garivaltis,"http://arxiv.org/abs/1812.04603v2, http://arxiv.org/pdf/1812.04603v2",econ.GN
35952,th,"We study T. Cover's rebalancing option (Ordentlich and Cover 1998) under
discrete hindsight optimization in continuous time. The payoff in question is
equal to the final wealth that would have accrued to a $\$1$ deposit into the
best of some finite set of (perhaps levered) rebalancing rules determined in
hindsight. A rebalancing rule (or fixed-fraction betting scheme) amounts to
fixing an asset allocation (e.g. $200\%$ stocks and $-100\%$ bonds) and then
continuously executing rebalancing trades to counteract allocation drift.
Restricting the hindsight optimization to a small number of rebalancing rules
(e.g. 2) has some advantages over the pioneering approach taken by Cover $\&$
Company in their brilliant theory of universal portfolios (1986, 1991, 1996,
1998), where one's on-line trading performance is benchmarked relative to the
final wealth of the best unlevered rebalancing rule of any kind in hindsight.
Our approach lets practitioners express an a priori view that one of the
favored asset allocations (""bets"") $b\in\{b_1,...,b_n\}$ will turn out to have
performed spectacularly well in hindsight. In limiting our robustness to some
discrete set of asset allocations (rather than all possible asset allocations)
we reduce the price of the rebalancing option and guarantee to achieve a
correspondingly higher percentage of the hindsight-optimized wealth at the end
of the planning period. A practitioner who lives to delta-hedge this variant of
Cover's rebalancing option through several decades is guaranteed to see the day
that his realized compound-annual capital growth rate is very close to that of
the best $b_i$ in hindsight. Hence the point of the rock-bottom option price.",Cover's Rebalancing Option With Discrete Hindsight Optimization,2019-03-03 07:36:48,Alex Garivaltis,"http://arxiv.org/abs/1903.00829v2, http://arxiv.org/pdf/1903.00829v2",q-fin.PM
35953,th,"I derive practical formulas for optimal arrangements between sophisticated
stock market investors (namely, continuous-time Kelly gamblers or, more
generally, CRRA investors) and the brokers who lend them cash for leveraged
bets on a high Sharpe asset (i.e. the market portfolio). Rather than, say, the
broker posting a monopoly price for margin loans, the gambler agrees to use a
greater quantity of margin debt than he otherwise would in exchange for an
interest rate that is lower than the broker would otherwise post. The gambler
thereby attains a higher asymptotic capital growth rate and the broker enjoys a
greater rate of intermediation profit than would obtain under non-cooperation.
If the threat point represents a vicious breakdown of negotiations (resulting
in zero margin loans), then we get an elegant rule of thumb:
$r_L^*=(3/4)r+(1/4)(\nu-\sigma^2/2)$, where $r$ is the broker's cost of funds,
$\nu$ is the compound-annual growth rate of the market index, and $\sigma$ is
the annual volatility. We show that, regardless of the particular threat point,
the gambler will negotiate to size his bets as if he himself could borrow at
the broker's call rate.",Nash Bargaining Over Margin Loans to Kelly Gamblers,2019-04-14 08:13:50,Alex Garivaltis,"http://dx.doi.org/10.13140/RG.2.2.29080.65286, http://arxiv.org/abs/1904.06628v2, http://arxiv.org/pdf/1904.06628v2",econ.GN
35954,th,"This paper supplies two possible resolutions of Fortune's (2000) margin-loan
pricing puzzle. Fortune (2000) noted that the margin loan interest rates
charged by stock brokers are very high in relation to the actual (low) credit
risk and the cost of funds. If we live in the Black-Scholes world, the brokers
are presumably making arbitrage profits by shorting dynamically precise amounts
of their clients' portfolios. First, we extend Fortune's (2000) application of
Merton's (1974) no-arbitrage approach to allow for brokers that can only revise
their hedges finitely many times during the term of the loan. We show that
extremely small differences in the revision frequency can easily explain the
observed variation in margin loan pricing. In fact, four additional revisions
per three-day period serve to explain all of the currently observed
heterogeneity. Second, we study monopolistic (or oligopolistic) margin loan
pricing by brokers whose clients are continuous-time Kelly gamblers. The broker
solves a general stochastic control problem that yields simple and pleasant
formulas for the optimal interest rate and the net interest margin. If the
author owned a brokerage, he would charge an interest rate of
$(r+\nu)/2-\sigma^2/4$, where $r$ is the cost of funds, $\nu$ is the
compound-annual growth rate of the S&P 500 index, and $\sigma$ is the
volatility.",Two Resolutions of the Margin Loan Pricing Puzzle,2019-06-03 21:57:56,Alex Garivaltis,"http://arxiv.org/abs/1906.01025v2, http://arxiv.org/pdf/1906.01025v2",econ.GN
35955,th,"We consider a two-person trading game in continuous time whereby each player
chooses a constant rebalancing rule $b$ that he must adhere to over $[0,t]$. If
$V_t(b)$ denotes the final wealth of the rebalancing rule $b$, then Player 1
(the `numerator player') picks $b$ so as to maximize
$\mathbb{E}[V_t(b)/V_t(c)]$, while Player 2 (the `denominator player') picks
$c$ so as to minimize it. In the unique Nash equilibrium, both players use the
continuous-time Kelly rule $b^*=c^*=\Sigma^{-1}(\mu-r\textbf{1})$, where
$\Sigma$ is the covariance of instantaneous returns per unit time, $\mu$ is the
drift vector of the stock market, and $\textbf{1}$ is a vector of ones. Thus,
even over very short intervals of time $[0,t]$, the desire to perform well
relative to other traders leads one to adopt the Kelly rule, which is
ordinarily derived by maximizing the asymptotic exponential growth rate of
wealth. Hence, we find agreement with Bell and Cover's (1988) result in
discrete time.",Game-Theoretic Optimal Portfolios in Continuous Time,2019-06-05 21:01:31,Alex Garivaltis,"http://arxiv.org/abs/1906.02216v2, http://arxiv.org/pdf/1906.02216v2",q-fin.PM
35956,th,"I unravel the basic long run dynamics of the broker call money market, which
is the pile of cash that funds margin loans to retail clients (read: continuous
time Kelly gamblers). Call money is assumed to supply itself perfectly
inelastically, and to continuously reinvest all principal and interest. I show
that the relative size of the money market (that is, relative to the Kelly
bankroll) is a martingale that nonetheless converges in probability to zero.
The margin loan interest rate is a submartingale that converges in mean square
to the choke price $r_\infty:=\nu-\sigma^2/2$, where $\nu$ is the asymptotic
compound growth rate of the stock market and $\sigma$ is its annual volatility.
In this environment, the gambler no longer beats the market asymptotically a.s.
by an exponential factor (as he would under perfectly elastic supply). Rather,
he beats the market asymptotically with very high probability (think 98%) by a
factor (say 1.87, or 87% more final wealth) whose mean cannot exceed what the
leverage ratio was at the start of the model (say, $2:1$). Although the ratio
of the gambler's wealth to that of an equivalent buy-and-hold investor is a
submartingale (always expected to increase), his realized compound growth rate
converges in mean square to $\nu$. This happens because the equilibrium
leverage ratio converges to $1:1$ in lockstep with the gradual rise of margin
loan interest rates.",Long Run Feedback in the Broker Call Money Market,2019-06-24 20:01:51,Alex Garivaltis,"http://arxiv.org/abs/1906.10084v2, http://arxiv.org/pdf/1906.10084v2",econ.GN
35957,th,"Supply chains are the backbone of the global economy. Disruptions to them can
be costly. Centrally managed supply chains invest in ensuring their resilience.
Decentralized supply chains, however, must rely upon the self-interest of their
individual components to maintain the resilience of the entire chain.
  We examine the incentives that independent self-interested agents have in
forming a resilient supply chain network in the face of production disruptions
and competition. In our model, competing suppliers are subject to yield
uncertainty (they deliver less than ordered) and congestion (lead time
uncertainty or, ""soft"" supply caps). Competing retailers must decide which
suppliers to link to based on both price and reliability. In the presence of
yield uncertainty only, the resulting supply chain networks are sparse.
Retailers concentrate their links on a single supplier, counter to the idea
that they should mitigate yield uncertainty by diversifying their supply base.
This happens because retailers benefit from supply variance. It suggests that
competition will amplify output uncertainty. When congestion is included as
well, the resulting networks are denser and resemble the bipartite expander
graphs that have been proposed in the supply chain literature, thereby
providing the first example of endogenous formation of resilient supply chain
networks, without resilience being explicitly encoded in payoffs. Finally, we
show that a supplier's investments in improved yield can make it worse off.
This happens because high production output saturates the market, which, in
turn, lowers prices and profits for participants.",Strategic Formation and Reliability of Supply Chain Networks,2019-09-17 21:46:03,"Victor Amelkin, Rakesh Vohra","http://arxiv.org/abs/1909.08021v2, http://arxiv.org/pdf/1909.08021v2",cs.GT
35958,th,"In this paper, we consider the problem of resource congestion control for
competing online learning agents. Taking a non-cooperative game as the model
for the interaction between the agents, and noisy online mirror ascent as the
model for the agents' rational behavior, we propose a novel
pricing mechanism which gives the agents incentives for sustainable use of the
resources. Our mechanism is distributed and resource-centric, in the sense that
it is done by the resources themselves and not by a centralized instance, and
that it is based on the congestion state of the resources rather than on the
preferences of the agents. When the noise is persistent, and for several
choices of the agents' intrinsic parameters (such as their learning rate) and
of the mechanism parameters (such as the price-setters' learning rate and
progressivity, and the agents' extrinsic price sensitivity), we show that the
cumulative violation of the resource constraints by the resulting iterates is
sub-linear in the time horizon. Moreover, we
provide numerical simulations to support our theoretical findings.",Pricing Mechanism for Resource Sustainability in Competitive Online Learning Multi-Agent Systems,2019-10-21 15:49:00,"Ezra Tampubolon, Holger Boche","http://arxiv.org/abs/1910.09314v1, http://arxiv.org/pdf/1910.09314v1",cs.LG
35959,th,"Bounded rationality is an important consideration stemming from the fact that
agents often have limits on their processing abilities, making the assumption
of perfect rationality inapplicable to many real tasks. We propose an
information-theoretic approach to the inference of agent decisions under
Smithian competition. The model explicitly captures the boundedness of agents
(limited in their information-processing capacity) as the cost of information
acquisition for expanding their prior beliefs. The expansion is measured as the
Kullback-Leibler divergence between posterior decisions and prior beliefs.
When information acquisition is free, the homo economicus agent is recovered,
while in cases when information acquisition becomes costly, agents instead
revert to their prior beliefs. The maximum entropy principle is used to infer
least-biased decisions based upon the notion of Smithian competition formalised
within the Quantal Response Statistical Equilibrium framework. The
incorporation of prior beliefs into such a framework allowed us to
systematically explore the effects of prior beliefs on decision-making in the
presence of market feedback and, importantly, to add a temporal
interpretation to the framework. We verified the proposed model using
Australian housing market data, showing how the incorporation of prior
knowledge alters the resulting agent decisions. Specifically, it allows the
agent's past beliefs to be separated from its utility-maximisation behaviour,
and supports an analysis of the evolution of agent beliefs.",A maximum entropy model of bounded rational decision-making with prior beliefs and market feedback,2021-02-18 09:41:59,"Benjamin Patrick Evans, Mikhail Prokopenko","http://dx.doi.org/10.3390/e23060669, http://arxiv.org/abs/2102.09180v3, http://arxiv.org/pdf/2102.09180v3",cs.IT
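The decision rule sketched below is a hedged illustration of the kind of bounded-rational choice the abstract above describes: minimizing the KL divergence from prior beliefs subject to an expected-payoff constraint yields a prior-weighted softmax, with the inverse temperature playing the role of the information-acquisition cost. Parameter names and values are mine, not the paper's.

```python
# Hedged sketch: KL-regularized (bounded-rational) choice over actions.
# p(a) is proportional to prior(a) * exp(beta * payoff(a)); beta -> infinity
# recovers the payoff-maximizing homo economicus, beta -> 0 reverts to the prior.
import numpy as np

def bounded_rational_choice(payoffs, prior, beta):
    logits = np.log(prior) + beta * payoffs
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()

payoffs = np.array([1.0, 0.6, 0.2])           # illustrative action payoffs
prior = np.array([0.2, 0.5, 0.3])             # agent's prior beliefs
for beta in (0.0, 1.0, 10.0):                 # cheap vs. costly information
    print(beta, bounded_rational_choice(payoffs, prior, beta))
```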
35960,th,"Solar Renewable Energy Certificate (SREC) markets are a market-based system
that incentivizes solar energy generation. A regulatory body imposes a lower
bound on the amount of energy each regulated firm must generate via solar
means, providing them with a tradeable certificate for each MWh generated.
Firms seek to navigate the market optimally by modulating their SREC generation
and trading rates. As such, the SREC market can be viewed as a stochastic game,
where agents interact through the SREC price. We study this stochastic game by
solving the mean-field game (MFG) limit with sub-populations of heterogeneous
agents. Market participants optimize costs accounting for trading frictions,
cost of generation, non-linear non-compliance costs, and generation
uncertainty. Moreover, we endogenize SREC price through market clearing. We
characterize firms' optimal controls as the solution of McKean-Vlasov (MV)
FBSDEs and determine the equilibrium SREC price. We establish the existence and
uniqueness of a solution to this MV-FBSDE, and prove that the MFG strategies
form an $\epsilon$-Nash equilibrium for the finite player game. Finally, we
develop a numerical scheme for solving the MV-FBSDEs and conduct a simulation
study.",A Mean-Field Game Approach to Equilibrium Pricing in Solar Renewable Energy Certificate Markets,2020-03-10 22:23:22,"Arvind Shrivats, Dena Firoozi, Sebastian Jaimungal","http://arxiv.org/abs/2003.04938v5, http://arxiv.org/pdf/2003.04938v5",q-fin.MF
35961,th,"Forward invariance of a basin of attraction is often overlooked when using a
Lyapunov stability theorem to prove local stability; even if the Lyapunov
function decreases monotonically in a neighborhood of an equilibrium, the
dynamic may escape from this neighborhood. In this note, we fix this gap by
finding a smaller neighborhood that is forward invariant. This helps us to
prove local stability more naturally without tracking each solution path.
Similarly, we prove a transitivity theorem about basins of attractions without
requiring forward invariance.
  Keywords: Lyapunov function, local stability, forward invariance,
evolutionary dynamics.",On forward invariance in Lyapunov stability theorem for local stability,2020-06-08 01:08:33,Dai Zusai,"http://arxiv.org/abs/2006.04280v1, http://arxiv.org/pdf/2006.04280v1",math.OC
35962,th,"In this work, we study the system of interacting non-cooperative two
Q-learning agents, where one agent has the privilege of observing the other's
actions. We show that this information asymmetry can lead to a stable outcome
of population learning, which generally does not occur in an environment of
general independent learners. The resulting post-learning policies are almost
optimal in the underlying game sense, i.e., they form a Nash equilibrium.
Furthermore, we propose a Q-learning algorithm that requires predictive
observation of the opponent's two subsequent actions and yields an optimal
strategy provided the opponent plays a stationary strategy, and
discuss the existence of the Nash equilibrium in the underlying information
asymmetrical game.",On Information Asymmetry in Competitive Multi-Agent Reinforcement Learning: Convergence and Optimality,2020-10-21 14:19:53,"Ezra Tampubolon, Haris Ceribasic, Holger Boche","http://arxiv.org/abs/2010.10901v2, http://arxiv.org/pdf/2010.10901v2",cs.LG
35963,th,"A common goal in the areas of secure information flow and privacy is to build
effective defenses against unwanted leakage of information. To this end, one
must be able to reason about potential attacks and their interplay with
possible defenses. In this paper, we propose a game-theoretic framework to
formalize strategies of attacker and defender in the context of information
leakage, and provide a basis for developing optimal defense methods. A novelty
of our games is that their utility is given by information leakage, which in
some cases may behave in a non-linear way. This causes a significant deviation
from classic game theory, in which utility functions are linear with respect to
players' strategies. Hence, a key contribution of this paper is the
establishment of the foundations of information leakage games. We consider two
kinds of games, depending on the notion of leakage considered. The first kind,
the QIF-games, is tailored for the theory of quantitative information flow
(QIF). The second one, the DP-games, corresponds to differential privacy (DP).",Information Leakage Games: Exploring Information as a Utility Function,2020-12-22 17:51:30,"Mário S. Alvim, Konstantinos Chatzikokolakis, Yusuke Kawamoto, Catuscia Palamidessi","http://dx.doi.org/10.1145/3517330, http://arxiv.org/abs/2012.12060v3, http://arxiv.org/pdf/2012.12060v3",cs.CR
35964,th,"This paper studies the general relationship between the gearing ratio of a
Leveraged ETF and its corresponding expense ratio, viz., the investment
management fees that are charged for the provision of this levered financial
service. It must not be possible for an investor to combine two or more LETFs
in such a way that his (continuously-rebalanced) LETF portfolio can match the
gearing ratio of a given, professionally managed product and, at the same time,
enjoy lower weighted-average expenses than the existing LETF. Given a finite
set of LETFs that exist in the marketplace, I give necessary and sufficient
conditions for these products to be undominated in the price-gearing plane. In
a beautiful application of the duality theorem of linear programming, I prove a
kind of two-fund theorem for LETFs: given a target gearing ratio for the
investor, the cheapest way to achieve it is to combine (uniquely) the two
nearest undominated LETF products that bracket it on the leverage axis. This
also happens to be the implementation that has the lowest annual turnover. For
the writer's enjoyment, we supply a second proof of the Main Theorem on LETFs
that is based on Carath\'eodory's theorem in convex geometry. Thus, say, a
triple-leveraged (""UltraPro"") exchange-traded product should never be mixed
with cash, if the investor is able to trade in the underlying index. In terms
of financial innovation, our two-fund theorem for LETFs implies that the
introduction of new, undominated 2.5x products would increase the welfare of
all investors whose preferred gearing ratios lie between 2x (""Ultra"") and 3x
(""UltraPro""). Similarly for a 1.5x product.",Rational Pricing of Leveraged ETF Expense Ratios,2021-06-28 18:56:05,Alex Garivaltis,"http://arxiv.org/abs/2106.14820v2, http://arxiv.org/pdf/2106.14820v2",econ.TH
35965,th,"Consider multiple experts with overlapping expertise working on a
classification problem under uncertain input. What constitutes a consistent set
of opinions? How can we predict the opinions of experts on missing sub-domains?
In this paper, we define a framework to analyze this problem, termed ""expert
graphs."" In an expert graph, vertices represent classes and edges represent
binary opinions on the topics of their vertices. We derive necessary conditions
for expert graph validity and use them to create ""synthetic experts"" which
describe opinions consistent with the observed opinions of other experts. We
show this framework to be equivalent to the well-studied linear ordering
polytope. We show our conditions are not sufficient for describing all expert
graphs on cliques, but are sufficient for cycles.",Expert Graphs: Synthesizing New Expertise via Collaboration,2021-07-15 03:27:16,"Bijan Mazaheri, Siddharth Jain, Jehoshua Bruck","http://arxiv.org/abs/2107.07054v1, http://arxiv.org/pdf/2107.07054v1",cs.LG
35966,th,"How can a social planner adaptively incentivize selfish agents who are
learning in a strategic environment to induce a socially optimal outcome in the
long run? We propose a two-timescale learning dynamics to answer this question
in both atomic and non-atomic games. In our learning dynamics, players adopt a
class of learning rules to update their strategies at a faster timescale, while
a social planner updates the incentive mechanism at a slower timescale. In
particular, the update of the incentive mechanism is based on each player's
externality, which is evaluated as the difference between the player's marginal
cost and the society's marginal cost in each time step. We show that any fixed
point of our learning dynamics corresponds to the optimal incentive mechanism
such that the corresponding Nash equilibrium also achieves social optimality.
We also provide sufficient conditions for the learning dynamics to converge to
a fixed point so that the adaptive incentive mechanism eventually induces a
socially optimal outcome. Finally, we demonstrate that the sufficient
conditions for convergence are satisfied in a variety of games, including (i)
atomic networked quadratic aggregative games, (ii) atomic Cournot competition,
and (iii) non-atomic network routing games.",Inducing Social Optimality in Games via Adaptive Incentive Design,2022-04-12 06:36:42,"Chinmay Maheshwari, Kshitij Kulkarni, Manxi Wu, Shankar Sastry","http://arxiv.org/abs/2204.05507v1, http://arxiv.org/pdf/2204.05507v1",cs.GT
35967,th,"Alice (owner) has knowledge of the underlying quality of her items measured
in grades. Given the noisy grades provided by an independent party, can Bob
(appraiser) obtain accurate estimates of the ground-truth grades of the items
by asking Alice a question about the grades? We address this when the payoff to
Alice is an additive convex utility over all her items. We establish that if Alice
has to truthfully answer the question so that her payoff is maximized, the
question must be formulated as pairwise comparisons between her items. Next, we
prove that if Alice is required to provide a ranking of her items, which is the
most fine-grained question via pairwise comparisons, she would be truthful. By
incorporating the ground-truth ranking, we show that Bob can obtain an
estimator with the optimal squared error in certain regimes based on any
possible way of truthful information elicitation. Moreover, the estimated
grades are substantially more accurate than the raw grades when the number of
items is large and the raw grades are very noisy. Finally, we conclude the
paper with several extensions and some refinements for practical
considerations.",A Truthful Owner-Assisted Scoring Mechanism,2022-06-14 17:35:53,Weijie J. Su,"http://arxiv.org/abs/2206.08149v1, http://arxiv.org/pdf/2206.08149v1",cs.LG
35968,th,"We propose to smooth out the calibration score, which measures how good a
forecaster is, by combining nearby forecasts. While regular calibration can be
guaranteed only by randomized forecasting procedures, we show that smooth
calibration can be guaranteed by deterministic procedures. As a consequence, it
does not matter if the forecasts are leaked, i.e., made known in advance:
smooth calibration can nevertheless be guaranteed (while regular calibration
cannot). Moreover, our procedure has finite recall, is stationary, and all
forecasts lie on a finite grid. To construct the procedure, we deal also with
the related setups of online linear regression and weak calibration. Finally,
we show that smooth calibration yields uncoupled finite-memory dynamics in
n-person games ""smooth calibrated learning"" in which the players play
approximate Nash equilibria in almost all periods (by contrast, calibrated
learning, which uses regular calibration, yields only that the time-averages of
play are approximate correlated equilibria).","Smooth Calibration, Leaky Forecasts, Finite Recall, and Nash Dynamics",2022-10-13 19:34:55,"Dean P. Foster, Sergiu Hart","http://dx.doi.org/10.1016/j.geb.2017.12.022, http://arxiv.org/abs/2210.07152v1, http://arxiv.org/pdf/2210.07152v1",econ.TH
35969,th,"Calibration means that forecasts and average realized frequencies are close.
We develop the concept of forecast hedging, which consists of choosing the
forecasts so as to guarantee that the expected track record can only improve.
This yields all the calibration results by the same simple basic argument while
differentiating between them by the forecast-hedging tools used: deterministic
and fixed point based versus stochastic and minimax based. Additional
contributions are an improved definition of continuous calibration, ensuing
game dynamics that yield Nash equilibria in the long run, and a new calibrated
forecasting procedure for binary events that is simpler than all known such
procedures.",Forecast Hedging and Calibration,2022-10-13 19:48:25,"Dean P. Foster, Sergiu Hart","http://dx.doi.org/10.1086/716559, http://arxiv.org/abs/2210.07169v1, http://arxiv.org/pdf/2210.07169v1",econ.TH
35970,th,"In 2023, the International Conference on Machine Learning (ICML) required
authors with multiple submissions to rank their submissions based on perceived
quality. In this paper, we aim to employ these author-specified rankings to
enhance peer review in machine learning and artificial intelligence conferences
by extending the Isotonic Mechanism to exponential family distributions. This
mechanism generates adjusted scores that closely align with the original scores
while adhering to author-specified rankings. Despite its applicability to a
broad spectrum of exponential family distributions, implementing this mechanism
does not require knowledge of the specific distribution form. We demonstrate
that an author is incentivized to provide accurate rankings when her utility
takes the form of a convex additive function of the adjusted review scores. For
a certain subclass of exponential family distributions, we prove that the
author reports truthfully only if the question involves only pairwise
comparisons between her submissions, thus indicating the optimality of ranking
in truthful information elicitation. Moreover, we show that the adjusted scores
dramatically improve the estimation accuracy compared to the original scores
and achieve nearly minimax optimality when the ground-truth scores have bounded
total variation. We conclude the paper by presenting experiments conducted on
the ICML 2023 ranking data, which show significant estimation gain using the
Isotonic Mechanism.",The Isotonic Mechanism for Exponential Family Estimation,2023-04-21 20:59:08,"Yuling Yan, Weijie J. Su, Jianqing Fan","http://arxiv.org/abs/2304.11160v3, http://arxiv.org/pdf/2304.11160v3",math.ST