The NBER Reporter Summer 2004: Conferences
Nineteenth Annual Conference on Macroeconomics
Innovation Policy and the Economy
Economics of the Information Economy
Conference on Fiscal Federalism
Based on a sample of 105 countries, Kaminsky, Reinhart, and Vegh document some key cyclical properties of capital flows, fiscal policy, and monetary policy. First, capital flows are procyclical (that is, external borrowing increases in good times and falls in bad times) for developing countries and, most notably, for middle-high income countries (emerging markets). Second, fiscal policy is procyclical (that is, government spending increases in good times and falls in bad times) for all developing countries. Third, this feast and famine cycle of fiscal spending is positively linked to the capital flows cycle (with spending rising markedly when capital is plentiful). Fourth, there is some evidence to suggest that, in emerging markets, monetary policy is also procyclical. In sum, the evidence suggests that for the middle-high income countries the business, capital flows, monetary policy and fiscal policy cycles all reinforce one another. For such countries, when it rains, it does indeed pour.
Using a standard set of data and a simple neoclassical analytical framework, Engen and Hubbard reconsider and add to the empirical evidence on the effect of federal government debt on interest rates. They begin by deriving the effect of government debt on the real interest rate and conclude that it is modest: an increase in government debt equivalent to one percent of GDP would raise the real interest rate by about two to three basis points. While some existing studies estimate effects in this range, others find larger effects. In many cases, these larger estimates come from specifications relating federal deficits (as opposed to debt) to interest rates, or from specifications that do not control adequately for macroeconomic influences on interest rates that may be correlated with deficits. The bulk of their empirical results suggest that an increase in federal government debt equivalent to one percent of GDP, all else equal, would be expected to increase the long-term real rate of interest by three to five basis points, although some specifications suggest a larger impact and some estimates are not statistically significantly different from zero. By presenting a range of results with the same data, they illustrate how much the estimates depend on differences in specification and definition.
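The size of this neoclassical crowding-out effect can be illustrated with a back-of-the-envelope calculation (a hypothetical sketch with assumed parameter values, not the authors' exact calibration): with Cobb-Douglas production, the real interest rate equals the marginal product of capital, and debt that crowds out capital one-for-one lowers the capital stock and so raises that marginal product.

```python
# Sketch of the crowding-out arithmetic (assumed parameters).
# With Cobb-Douglas production Y = K^a * L^(1-a), the real rate is
# r = a*Y/K, so dr/dK = -a*(1-a)*Y/K^2.  If extra debt crowds out
# capital one-for-one, dK = -0.01*Y when debt rises by 1% of GDP,
# giving dr = 0.01 * a*(1-a) * (Y/K)^2.
alpha = 0.32   # capital share of income (assumption)
k_y = 3.2      # capital-output ratio (assumption)

dr = alpha * (1 - alpha) * (1 / k_y) ** 2 * 0.01   # change in real rate
print(f"{dr * 1e4:.1f} basis points")              # prints "2.1 basis points"
```

With these illustrative values the implied rise in the real rate is about two basis points, in line with the modest effect the authors derive; larger capital shares or smaller capital-output ratios push the figure toward the upper end of their range.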
Giannone, Reichlin, and Sala analyze the panel of Greenbook forecasts and a large panel of monthly variables for the United States since 1970. They show that the dimension of the U.S. economy is two: a model that exploits, in real time, information on many time series to extract a two-dimensional signal forecasts the federal funds rate about as accurately as the markets, and output and inflation about as accurately as the Greenbook. They also show that the two dimensions are generated by a real and a nominal shock, and that the Phillips curve tradeoff is weak, which implies that the dimension of the policy problem is one.
Gali and Rabanal review recent research efforts that seek to identify and estimate the role of technology as a source of economic fluctuations, in a more direct way than the early RBC literature. The bulk of the evidence suggests a very limited role for aggregate technology shocks, instead pointing to demand factors as the main force behind the strong positive comovement between output and labor input measures that is the hallmark of the business cycle.
Backus, Routledge, and Zin provide a users' guide to non-additive "exotic" preferences: nonlinear time aggregators; departures from expected utility; preferences over time with known and unknown probabilities; risk-sensitive and robust control; "hyperbolic" discounting; and preferences over sets ("temptations"). They apply each to a number of classic issues in macroeconomics and finance, including consumption and saving, portfolio choice, equilibrium asset pricing, and optimal allocation.
Gomme, Rogerson, Rupert, and Wright document the differences in the variability of hours worked over the business cycle across several demographic groups and show that these differences are large. They argue that understanding these differences should be useful in understanding the forces that account for aggregate fluctuations in hours worked. In particular, it is well known that standard models of the business cycle driven by technology shocks do not account for all of the variability in hours of work. This raises the question of the extent to which the forces in this model can account for the differences across demographic groups. The authors explore this in the context of hours fluctuations by age group, using a stochastic overlapping-generations model. Their analysis shows that the model does a good job of accounting for the hours fluctuations of prime-age workers, but not for those of young or old workers. They conclude that a key issue is understanding why fluctuations for young and old workers are so much larger.
These papers and discussions will be published by the MIT Press. The volume's availability will be announced in a future issue of the Reporter.
The spectacular growth of the software industry in some non-G7 economies has aroused both interest and concern. Arora addresses two sets of interrelated issues. First, he explores the determinants of the success stories. Then he touches upon the broader question of what lessons, if any, can be drawn for economic development more generally. From the U.S. perspective, the interesting debate is not the current one on the impact of outsourcing on jobs, but rather whether offshoring of software is a long-term threat to American technological leadership. Arora concludes that policymakers in the United States should not fear the growth of new software-producing regions. Instead, the U.S. economy will broadly benefit from their growth. U.S. technological leadership rests in part on the continued position of the United States as the primary destination for highly trained and skilled scientists and engineers from the world over. Although this is likely to persist for some time, the increasing attractiveness of emerging-economy destinations abroad is a long-term concern for continued U.S. technological leadership.
Baumol explores several hypotheses on the appropriate education for innovating entrepreneurship: 1) breakthrough inventions are contributed disproportionately by independent inventors and entrepreneurs, while large firms focus on cumulative, incremental (and often invaluable) improvements; 2) education for mastery of scientific knowledge and methods is enormously valuable for innovation and growth, but can impede heterodox thinking and imagination; 3) large-firm R&D requires personnel who are highly educated in extant information and analytic methods, while successful independent entrepreneurs and inventors often lack such preparation; and 4) while procedures for teaching current knowledge and methods in science and engineering are effective, we know little about training for the critical task of breakthrough innovation.
Feldman defines jurisdictional advantage: the recognition that location is critical to firms' innovative success and that every location has unique assets that are not easily replicated. Drawing on the well-developed literature on corporate strategy, she considers analogies to cities in their search for competitive advantage. She argues that jurisdictions may benefit from a strategic orientation that considers the unique, not easily replicated assets, resources, and skills contained in a jurisdiction, along with the jurisdiction's position in the hierarchy of cities in the national and world economy, and then maximizes wages and property values within the jurisdiction. She also reviews recent advances in our understanding of patterns of urban specialization and the composition of activities within cities, which suggest strategies that may generate economic growth as well as some strategies to avoid. Finally, she considers the role of firms and their responsibility to jurisdictions in light of the net benefits they receive from place-specific externalities, and concludes by considering the challenges of implementing jurisdictional advantage.
Gentry and Hubbard find that, while the level of the marginal tax rate has a negative effect on entrepreneurial entry, the progressivity of the tax also discourages entrepreneurship, and significantly so for some groups of households. Prospective entrants from a priori innovative industries and occupations are no less affected by the considerations examined here than other prospective entrants. In terms of destination-based industry and occupation measures of innovative entrepreneurs, the authors find mixed evidence on whether innovative entrepreneurs differ from the general population; the results for entrepreneurs moving to innovative entrepreneurship suggest that they may be unaffected by tax convexity, but the possible endogeneity of this measure of innovative entrepreneurs confounds interpreting this specification. Using education as a measure of potential for innovation, Gentry and Hubbard find that tax convexity discourages entry into self-employment for people of all educational backgrounds. Overall, they find little evidence that the tax effects are focused simply on the employment changes of less skilled or less promising potential entrants.
Merger policy is the most active area in U.S. antitrust policy. It is now widely recognized that merger policy must move beyond its traditional focus on static efficiency to account for innovation and to address dynamic efficiency. Innovation can fundamentally affect merger analysis in two ways. First, it can dramatically affect the relationship between the pre-merger marketplace and what is likely to happen if a proposed merger is consummated. Thus, innovation can fundamentally influence the appropriate analysis for addressing traditional, static efficiency concerns. Second, innovation itself can be an important dimension of market performance that is potentially affected by a merger. Katz and Shelanski explore how merger policy is meeting the challenges posed by innovation.
These papers will appear in an annual volume published by the MIT Press. Its availability will be announced in a future issue of the Reporter. They can also be found at "Books in Progress" on the NBER's website.
Using the phone numbers registered with the Federal Trade Commission's national do-not-call (DNC) list, Varian, Wallenberg, and Woroch identify key demographic and economic determinants of household decisions to block unsolicited telemarketing calls. With a model of households' decisions to register phone numbers and telemarketers' decisions to attempt calls, the authors uncover the factors affecting signup frequencies. They map the more than 60 million registered phone numbers into counties and then match them with household demographic information from the 2000 Census, plus several behavioral variables from national panel datasets. Regressions of county-level signup frequencies on individual demographic variables reveal that participation in the DNC registry is related directly to household income, educational attainment, home mortgage, and linguistic integration. Irregular patterns emerge for household size and for the ages of the children and the head of household. The authors, after further estimation, find that a parsimonious specification including just income, teenaged kids, low education, and whether the state maintains and merges its list explains nearly the same fraction of variance as the full set of demographic variables. States that maintained a DNC list that is subsequently merged with the national list have significantly higher signup rates, while those that declined to merge their lists have significantly lower rates. This suggests that a state list is a close substitute for the national one.
Loder, Van Alstyne, and Wash explore a novel approach to spam based on economic rather than technological or regulatory screening mechanisms. Their first point is that mechanisms designed to promote valuable communication often can outperform those merely designed to block wasteful communication. Their second point shifts the focus from the information in the message to the information known to the sender. Then they can use principles of information asymmetry to "cause" people who knowingly misuse communication to incur higher costs than those who do not. In certain cases, the authors show that this approach leaves recipients better off than with even an idealized or "perfect" filter that costs nothing and makes no mistakes. Their mechanism also accounts for individual differences in opportunity costs, and allows for bi-directional wealth transfers while facilitating both sender signaling and recipient screening.
How frequently do firms advertise prices in online markets? Scholten examines price information at one of the leading Internet price comparison sites, Shopper.com. His results suggest that firms advertise price information about 69 percent of the time. In addition, firms are 13 percent less likely to advertise price information in markets with few consumers. Firms' propensity to advertise prices does not appear to vary inversely with market structure. This suggests that the Baye-Morgan model provides a very good starting point, but that additional theoretical models are needed to see whether relaxing the assumption that firms' propensity to advertise is symmetric leads to equilibrium outcomes more consistent with the data.
Small firms produce information goods, which have properties of both private and nonrival goods, under conditions of constant returns to scale and free entry. The firms embed messages in their goods, selling access to the good to small consumers and message content to small advertisers. Information goods are excludable if positive access fees are feasible and includable if negative access fees are feasible; Stegeman studies several cases. In equilibrium, firms generally could increase total surplus by increasing the quality of the good, supplying less advertising, and reducing access fees. They could similarly increase surplus by supplying less advertising and making a profit-compensating adjustment in access fees. Firms may over- or under-produce information goods, and Stegeman identifies circumstances that produce each outcome. The welfare results are mostly robust to the presence of small to moderate negative externalities from advertising.
Economides, Seim, and Viard evaluate the consumer welfare effects of entry into residential local phone service in New York state using household-level data. Since residential local phone service is sold under a menu of two-part tariffs, the authors develop a method for estimating a mixed discrete/continuous demand model. The econometric model incorporates the simultaneity of consumers' discrete plan and continuous consumption choices and allows for flat-rate plans, bundling of services, and unobservable firm quality. Since utility maximization underlies the model, the authors can estimate welfare effects from the introduction of additional choices or changes in product features. They use the model to evaluate the effect of entry by the two largest competitive local exchange carriers in the New York market from the third quarter of 1999 to the first quarter of 2003. Residential local phone service competition is an important goal of the 1996 Telecommunications Act, and the authors provide one of the most detailed evaluations of its effect on consumer welfare. Their preliminary results indicate that, relative to what it would have paid to Verizon, the average household switching to AT&T or MCI saved 4.4 percent and 0.7 percent, respectively, ignoring quantity effects and observed and unobserved quality effects from switching.
Miravete and Roller present a framework for estimating a model of horizontal product differentiation in which firms compete in nonlinear tariffs. They explicitly incorporate the information contained in the shape of the tariffs offered by competing duopolists. The model identifies the determinants of the non-uniform equilibrium markups charged to consumers who make different use of cellular telephone services. The authors then use the model to study the early U.S. cellular telephone industry and evaluate, among others, the welfare effects of competition, the benefits of a reduction of the delay in awarding the second cellular license, and alternative linear and nonlinear pricing strategies. They find that a single two-part tariff achieves 63 percent of the potential welfare gains and 94 percent of the profits of a more complex, fully nonlinear tariff.
Borzekowski focuses on the relationship between credit unions' outsourcing of their information systems and their adoption of Internet technologies. Using a dataset that contains semi-annual technology information for 10,390 credit unions from June 1998 through June 2003, he estimates a model that includes both the adoption and outsourcing decisions. The model also explicitly accounts for heterogeneity in a firm's ability to use IT. The estimation results indicate that "IT type" does matter in the outsourcing decision, but not in the decision to adopt Internet technology. Outsourcing does not appear to lower the cost of Internet adoption, a result that runs counter to the raw data, which indicate that Internet technology was adopted faster by credit unions that outsourced their IT.
Patients with a chronic illness (such as diabetes or congestive heart failure) are one of the costliest and fastest growing segments of the U.S. health care system. Disease management (DM) programs use clinical standards and information technology to identify high-risk patients among the chronically ill and intervene before expensive treatments become necessary. Despite DM's growing popularity, few studies have shown that these programs actually change patient behaviors, improve health outcomes, or reduce costs. In this paper, Gertler and Simcoe describe the recent rise of DM within the health care industry and estimate its impact on medical care productivity. Using data from a DM program for diabetics at a central Massachusetts HMO, the authors find that the program led to increased compliance with Clinical Practice Guidelines (CPGs), improvements in patient health, and reductions in the total cost of care.
Anecdotal evidence suggests that producers of information products (TV programs, movies, computer software) may respond to potentially cost-saving technological change by increasing, not reducing, their total production investments in the "first copy" of each product, possibly at the expense of product variety. Waterman shows that under reasonable assumptions about consumer demand and production technology, a monopolist in fact is induced to increase first-copy investments as a result of either what he defines as "quality-enhancing" or "cost-reducing" types of technological advance. In a competitive industry, first-copy investments also rise for both types of technological change, while variety falls or stays the same. Contrary to often-held expectations, potentially cost-saving technological advances in information industries may result in higher barriers to entry and greater concentration.
A longstanding economic question is the appropriate level of protection for intellectual property. The Internet has drastically lowered the cost of copying information goods and provides a natural crucible for assessing the implications of reduced protection. Oberholzer and Strumpf consider the specific case of file sharing and its effect on legal sales of music. They match a dataset containing 0.01 percent of the world's downloads to U.S. sales data for a large number of albums. To establish causality, downloads are instrumented using technical features related to file sharing, such as network congestion or song length, as well as international school holidays. Downloads have an effect on sales that is statistically indistinguishable from zero, despite rather precise estimates. Moreover, these estimates are of moderate economic significance and are inconsistent with claims that file sharing is the primary reason for the recent decline in music sales.
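The identification strategy here is standard instrumental variables: an instrument that shifts downloads but is plausibly unrelated to the unobserved demand for an album. A minimal two-stage least squares sketch on simulated data (all variable names and numbers are hypothetical, not the authors' dataset) shows how instrumenting removes the bias that contaminates a naive regression of sales on downloads:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
holiday = rng.normal(size=n)      # instrument: school-holiday intensity (hypothetical)
popularity = rng.normal(size=n)   # unobserved album popularity, drives both series

downloads = holiday + popularity + rng.normal(size=n)
sales = 2.0 * popularity + rng.normal(size=n)   # true causal effect of downloads: zero

# Naive OLS is biased upward: popularity raises downloads and sales together.
b_ols = np.polyfit(downloads, sales, 1)[0]

# 2SLS: first stage projects downloads on the instrument; the second stage
# regresses sales on the fitted (exogenous) part of downloads.
slope, intercept = np.polyfit(holiday, downloads, 1)
d_hat = intercept + slope * holiday
b_iv = np.polyfit(d_hat, sales, 1)[0]

print(round(b_ols, 2), round(b_iv, 2))  # OLS biased away from zero; IV near zero
```

In this simulation the OLS slope is positive even though downloads have no causal effect, while the IV estimate recovers a coefficient statistically indistinguishable from zero, mirroring the logic of the authors' design.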
Media outlets sometimes incorporate ideological content into their programming. Such content may simply be a form of product variety, but it also may be attributable to media outlet owners who are willing to sacrifice some profit in order to engage in ideological persuasion. Balan, DeGraba, and Wickelgren assume the existence of such owners and compare the amount and type of persuasion that will occur under two regimes: one in which mergers are prohibited and one in which they are permitted. The results for the "mergers-prohibited" regime are: there will be diversity of persuasion (that is, more than one variety of persuasion will exist in equilibrium) if and only if the ideological preferences of the different types of potential owners are not too different; and total persuasion is higher when these ideological preferences are less similar. The main results for the mergers-permitted regime are: mergers between firms with identical ideologies cause total persuasion to increase; and mergers between firms with different ideologies cause total persuasion to increase as long as the persuasion utility function is not too concave. Interestingly, permitting mergers sometimes can lead to ideological diversity when there was none under the mergers-prohibited regime.
Using affiliate-level data, Desai, Foley, and Hines analyze the impact of tax haven operations on the non-haven activities of American multinational firms. The evidence implies that American firms use tax haven affiliates both to reallocate taxable income away from high-tax jurisdictions and to facilitate deferral of repatriation taxes, particularly from low-tax jurisdictions. Ownership of tax haven affiliates reduces tax payments by nearby non-haven affiliates to the same degree as would a 21 percent reduction in the local tax rate. While havens facilitate profit reallocation and deferral of repatriation taxes, they may also reduce the cost of capital and thereby increase the attractiveness of foreign investment in non-havens. The evidence indicates that firms with non-haven operations in countries whose economies grow rapidly are the most likely to establish new tax haven affiliates, implying a complementary relationship between haven and non-haven operations, and the potential for tax haven jurisdictions to contribute to regional economic growth.
Schmidheiny investigates spatial segregation of the population in fiscally decentralized urban areas. The theoretical part of his paper proposes the progressivity of local income taxes as a new explanation for income segregation. The empirical part studies how income tax differentials across communities affect households' location decisions. The data from the Swiss metropolitan area of Basel contain tax information from all households that moved, either within the city center of Basel or from the city center to the outskirts. The empirical results show that rich households are significantly and substantially more likely to move to low-tax communities than poor households.
A theoretical analysis gives rise to the presumption that, in the presence of tax competition, a system of redistributive "fiscal equalization" transfers tends to raise the taxing effort of local jurisdictions. More specifically, Buettner shows that the marginal contribution rate, that is the rate at which an increase in the tax base reduces those transfers, might be positively associated with the local tax rate. This is partly confirmed in an empirical investigation based on a large panel of German municipalities. In particular, changes in the marginal contribution rate attributable to changes in the rules of the system exert a significant positive impact on the local tax rate.
Revelli investigates whether national evaluation of decentralized government performance, by lessening local information spill-overs, tends to reduce the scope for local performance comparisons and consequently to lower the extent of spatial auto-correlation among local government expenditures. He analyzes U.K. local government expenditures on personal social services before and after the introduction of a national performance assessment system that attributes a rating to each local authority. The empirical evidence suggests that the introduction of the social services performance assessment system has substantially reduced policy mimicking among neighboring jurisdictions.
There have been few empirical strategies developed to investigate public provision under majority rule while explicitly accounting for the constraints implied by households' mobility. Most previous empirical work has focused on necessary conditions that the observed expenditures, housing prices, and tax rates had to satisfy in a myopic voting equilibrium. The existing empirical evidence suggests that myopic voting behavior is not consistent with the data. This is puzzling, especially given the prominence that myopic voting plays in the theoretical literature. Calabrese, Epple, Romer, and Sieg develop a new empirical approach that allows them to impose all restrictions that arise from these equilibrium models simultaneously on the data generating process. They can then analyze how close myopic models come in replicating the main regularities about expenditures, taxes, sorting by income, and housing observed in the data. The main results suggest that myopic models can replicate the observed expenditure patterns as well as the observed sorting of households by income. However, these models cannot fit the observed tax rates.
While the Tiebout hypothesis has come under increasing empirical fire, studies have not convincingly ascertained whether weak Tiebout sorting is truly evidence against the hypothesis or simply evidence that the prevalence of centralized state policies removes the conditions necessary for fiscal sorting. Farnham and Sevak explore the extent to which state fiscal policy pertaining to the school finance system affects the incentive or ability to sort on local fiscal characteristics. Using panel data on older households from the Health and Retirement Study, the authors find smaller adjustments of the local fiscal bundle by within-state empty-nest movers in the presence of school finance equalization policies. In addition, they use household data from the 1970-2000 decennial census to analyze differences in within-state and cross-state mobility rates and location choice under different school finance regimes. They find evidence of decreased within-state mobility at critical points in the Tiebout lifecycle when school finance equalization is present. They also find evidence that older households may escape centralization by moving across state lines.
In the European Union and in many federal and non-federal countries, the central government pays subsidies to poor regions. These subsidies often are seen as a redistributive measure that comes at the cost of an efficiency loss. Fuest and Huber develop an economic rationale for regional policy based on economic efficiency. They consider a model of a federation consisting of a rich and a poor region. The economy is characterized by increasing returns to scale in production and imperfect competition in goods markets. Firms initially produce only in the rich region and may set up additional production facilities in the poor region or serve that region via exports, which gives rise to transport costs. The authors show that the laissez-faire allocation is characterized by too little mobility; that is, the number of firms investing in the poor region and the number of households migrating to the rich region are inefficiently low. The optimal regional policy subsidizes investment and supports mobility of households in the poor region. These results also hold if there are autonomous regional governments.
Many states are under court-order to reduce local disparities in education spending. When states spend more on education, that changes both state and local budget constraints, and thus may affect many different spending and revenue decisions. Baicker and Gordon examine how changes in state education spending affect the level and distribution of the total resources available to localities and spending on public goods - both through changes in state spending patterns and through changes in the revenue and spending decisions of local jurisdictions themselves. The authors find that mandated school finance equalizations do increase both the level and progressivity of state spending on education, but that states finance the required increase in education spending in part by reducing their aid to localities for other programs. Local governments, in turn, respond to the increases in state taxation and spending by reducing both their own revenue-raising and their own spending on education and other programs. Thus, while state education aid does increase total spending on education, it does so at the expense of drawing resources away from spending on programs like public welfare, highways, and hospitals.
Darby, Muscatelli, and Roy investigate the use of grants and shared tax revenues and their impact on fiscal outcomes, including decentralized service provision. They use a panel dataset covering 15 OECD countries to investigate how central and sub-central expenditures, taxation, and intergovernmental grants change in response to central governments' attempts to correct their fiscal positions. Their key results can be summarized as follows. First, successful fiscal consolidations are generally driven by similar, and sustained, falls in expenditure at both central and sub-central tiers. Moreover, the evidence runs counter to Gramlich's (1987) findings for the United States: when central governments cut intergovernmental grants, sub-central tiers do not respond with offsetting increases in other forms of revenue. Second, unsuccessful consolidations tend to be characterized by increased central government taxation, with no fall in grants and no tendency for sub-central taxation to change. There does appear to be a strong correlation between success in consolidating central fiscal deficits and similar actions by lower tiers of government. Third, where consolidations are successful, sub-central tiers of government are typically forced to cut back on capital expenditure. This suggests that the burden of adjustment falls onto lower tiers of government, and that central governments worry less about the long-term (that is, public investment) consequences of consolidation if these decisions are taken at the local level. Also, when faced with cuts in intergovernmental grants, sub-central governments tend to maintain expenditures on wages at the expense of capital expenditure, reflecting a definite compositional switch towards public consumption. Finally, these results shed some light, at least indirectly, on the "fly-paper effect," by showing that it operates in reverse: successful consolidations are characterized by cuts in grants that are more than offset by cuts in sub-central expenditures, while periods of unsuccessful consolidation are characterized by increases in central taxation, no change in grants, and small, temporary reductions in sub-central expenditure.
Burbidge, Cuff, and Leach extend the capital tax competition literature by incorporating heterogeneous capital and agglomeration. Their model nests the standard tax competition model as well as the special case in which there is agglomeration but no firm/capital heterogeneity and the opposite case, firm heterogeneity with no agglomeration. The authors build on the existing tax competition literature and establish a link between it and the more recent work on agglomeration using the new economic geography model.
Why would voters resort to a statewide tax limitation to force change in their own local government? Vigdor develops and tests the hypothesis that property tax limitations succeeded because they allowed voters to lower tax rates in other communities. Statewide limitations effectively extend the voting franchise to individuals who have no standing in local elections. Voters may have preferences for tax and expenditure levels in other jurisdictions because they receive rents from employment in those jurisdictions, directly own taxable assets in those jurisdictions, or because changes in other jurisdictions might influence their own residential location choice. Empirical tests of this hypothesis focus on the Massachusetts experience with Proposition 2½, which passed in 1980. Voting patterns, household mobility patterns, and post-Proposition growth in property values all support the nonresident hypothesis.
Dreze, Figuieres, and Hindriks investigate the possibility of achieving by means of voluntary matching grants both the optimal allocation of factors and the optimal level of redistribution in the presence of factor mobility. They use a fiscal competition model in which states differ in their technologies and preferences for redistribution. They derive the optimal differentiation of matching rates across states according to the asymmetries in the technology and in the redistribution motive. Then they derive the willingness of each state to match the contribution of other states, and decompose the aggregate willingness to pay as the sum of two terms. The first term is related to redistribution; it is positive only if matching the contribution of one state brings overall redistribution closer to its optimal level. The second term is related to production; it is positive if the matching to one state leads to a more efficient allocation of factors. Willingness to pay for matching rates converges to zero when both the optimal level of redistribution and the optimal allocation of factors are achieved. The authors then describe the adjustment process for the matching rates that will lead agents to the efficient outcome and guarantee that everyone will gain.