Volume II, No. 2 April—June 1979
John N. Gray
Literature of Liberty, published quarterly by the Cato Institute of San Francisco, is an interdisciplinary periodical intended to be a resource to the scholarly community. Each issue contains a bibliographical essay and summaries of articles which clarify liberty in the fields of Philosophy, Political Science, Law, Economics, History, Psychology, Sociology, Anthropology, Education, and the Humanities. The summaries are based on articles drawn from approximately four hundred journals published in the United States and abroad. These journals are monitored for Literature of Liberty by the associate editors.
Subscriptions and correspondence should be mailed to Literature of Liberty, 1177 University Drive, Menlo Park, California 94025. The annual subscription rate is $16 (4 issues). Single issues are available for $4 per copy. Overseas rates are $20 for surface mail; $34 for airmail. An annual cumulative index is published in the fourth number of each volume. Second-class postage paid at San Francisco, California, and at additional mailing offices.
© Cato Institute 1979
ISSN 0161-7303
Cover: John Stuart Mill in late maturity, reproduced courtesy of Radio Times Hulton Picture Library, London.
Leonard P. Liggio
Editor
John V. Cody
Managing Editor
Ronald Hamowy
Senior Editor
Virginia Brown
Assistant Editor
Anne Stausboll
Production Manager
Edward H. Crane
President, Cato Institute
John E. Bailey, III
Rome, Georgia
Randy Barnett
Chicago, Illinois
William Beach
University of Missouri
Donald Bogie
Georgetown University
Samuel Bostaph
Western Maryland College
M. E. Bradford
University of Dallas
Alfred Cuzan
New Mexico State University
Douglas Den Uyl
Marquette University
Edward C. Facey
Hillsdale College
John N. Gray
Jesus College, Oxford University
Malcolm Greenhill
Oxford University
M. E. Grenander
SUNY at Albany
Walter Grinder
University College, Cork, Ireland
John Hagel
Cambridge, Massachusetts
Jack High
University of California, Los Angeles
Tibor Machan
Reason Foundation, Santa Barbara
William Marina
Florida Atlantic University
Gerald O'Driscoll
New York University
Lyla O'Driscoll
New York Council for the Humanities
David O'Mahony
University College, Cork, Ireland
Ellen Paul
Miami University
Jeffrey Paul
University of Northern Kentucky
Joseph R. Peden
Baruch College, City University of New York
Tommy Rogers
Jackson, Mississippi
Timothy Rogus
Chicago, Illinois
John T. Sanders
Rochester Institute of Technology
Danny Shapiro
University of Minnesota
Sudha Shenoy
University of Newcastle, New South Wales
Bruce Shortt
Harvard Law School
Joseph Stromberg
University of Florida
David Suits
Rochester Institute of Technology
Karen Vaughn
George Mason University
Alan Waterman
Trenton State College
Marty Zupan
Santa Barbara, California
F.A. Hayek, in The Constitution of Liberty (1960) notes that the constitutions of the individual states between 1776 and 1787 “show more clearly than the final Constitution of the Union how much the limitation of all governmental power was the object of constitutionalism. This appears, above all, from the prominent position that was everywhere given to inviolable individual rights, which were listed either as part of these constitutional documents or as separate Bills of Rights.... The most famous of these Bills of Rights, that of Virginia, which was drafted and adopted before the Declaration of Independence and modeled on English and colonial precedents, largely served as the prototype not only for those of the other states but also for the French Declaration of the Rights of Man and the Citizen of 1789 and, through that, for all similar European documents.” One source for Europeans was the Jefferson-inspired Researches on the United States (1788; 1976) by Filippo Mazzei, which discussed the Virginia Declaration of Rights.
What impressed Hayek was the Founding Fathers' oft-repeated insistence that “...a frequent recurrence to fundamental principles is absolutely necessary to preserve the blessing of liberty.” Here Hayek was quoting from the draft of the Virginia Declaration of Rights (May 1776), by George Mason.
George Mason (1725–1792) was featured as the cover portrait of our first issue. Mason epitomized the highest ideals of the Founding Fathers as expressed in the Bill of Rights. As a member of the Virginia Committee of Safety and Convention in 1775 and 1776, Mason drew up Virginia's Constitution and Bill of Rights, which exerted a radical influence on American constitutional institutions.
Mason was serving in the Virginia House of Delegates (1776–1788) when he was appointed a member of the Constitutional Convention in Philadelphia in 1787. A strong advocate of gradual emancipation, he opposed legitimizing slavery in the Constitution as a bargaining point used to obtain consensus at the convention. His radical republicanism condemned central government as a danger to individual freedom and valued local government as a bulwark of freedom against centralized authority. He distrusted the strong powers granted the national government by the new Constitution and headed the opposition to its ratification in the Virginia convention. After ratification, Mason insisted on amendments, which led to the Bill of Rights.
Hayek seconds Mason's caution by emphasizing how necessary the Bill of Rights was as a control on the powers of the government. “The danger so clearly seen at the time was guarded against by the careful proviso (in the Ninth Amendment) that 'the enumeration of certain rights in this Constitution shall not be construed to deny or disparage others retained by the people.'” A recent discussion of Mason's significance appears in Bernard Schwartz, The Great Rights of Mankind (New York: Oxford University Press, 1977). This study supplements Roscoe Pound's The Development of Constitutional Guarantees of Liberty (1957) and Bennett B. Patterson's Forgotten Ninth Amendment (1955).
Daniel Morgan (1736–1802), the 'Revolutionary Rifleman,' dramatically confirms von Clausewitz's judgment that, when an armed citizenry conducts it, “warfare introduces a means of defense peculiar to itself.” Morgan embodies the frontier spirit which is the foundation of American culture. His attitude to authority was typical of the American frontiersman.
Serving with the Virginia Rangers in the French and Indian War, Morgan learned the advantages of guerrilla tactics against regular troops. Rangers dressed in buckskin and moccasins, and armed with the Kentucky long-rifle, were able to achieve mobility and accuracy of shooting unknown to regular armies. The long-rifle was used by rangers to hit the regular troops whose muskets had a much shorter range; its deadly accuracy felled enemy officers, whose loss then created confusion in the ranks.
In 1775, Morgan headed Virginia's first light infantry company raised for the American Revolution. Two years later he took command of a special corps of light infantry or rangers. This corps, known later as “Morgan's Rangers,” played a crucial role in the American victory at the battles of Saratoga (September–October 1777). Saratoga was the military turning point in the American Revolution, and it showed that politically committed military forces could be gathered from the countryside to fight successfully.
Later, during the southern campaign (1780–1781) Morgan raised the political awareness of patriots in the Carolinas and mobilized the militias there. This discouraged the Loyalists from joining the British. Once the militias were assembled, Morgan exhorted them to aid the cause of liberty by repelling the invading British. Morgan worked out and implemented important approaches to guerrilla tactics which led to the defeat of Lord Cornwallis at Yorktown. His greatness as a guerrilla tactician combined his political and military leadership of the American militias.
Literature of Liberty's scholarly goals—as a forum for stimulating ideas and a research guide—will be aided by reader cooperation. The journal welcomes letters from readers calling attention to significant articles that merit summarizing. Readers can present research information and comment on topics presented in Literature of Liberty through the Readers' Forum.
History's great tradition is to help us understand ourselves and our world so that each of us, individually and in conjunction with our fellow men, can formulate relevant and reasoned alternatives and become meaningful actors in making history.
Just as “no man is an island,” no historical event is isolated from its context of space and time. The American Revolution drew upon diverse ideas stretching back to the ancient world, was influenced by numerous social conditions each with its own past development, and involved the actions of millions of individuals over a span of years within a transatlantic area.
In examining a “symbolic” event such as the Revolution, however, we often overlook how our whole conceptualization of the boundaries of that “extended” event is largely based upon a sense of comparison.2 In this regard, the key word is not “American,” but “Revolution.” Thus our perception of when the Revolution began and ended follows from our beliefs about the class of events we designate “revolutions.”
Perez Zagorin defines three distinct lines of inquiry for studying revolution. The first is a detailed or general account of one specific revolution. The second presents a formal comparison of two or more revolutions to uncover any significant relationships between them. And, “finally, the third kind of inquiry is theoretical; its purpose is to establish a theory of revolution capable of explaining causes, processes, and effects as a type of change.”3 But, as Zagorin observes, it is the third, theoretical study of revolutions which is most impoverished:
[N]othing has appeared that qualifies as a general theory of revolution. Furthermore, among theorists there has been little progressive accumulation of ideas. The general theory of revolution remains subject to confusion, doubt, and disagreement. Even elementary questions of definition, terminology, and delimitation of the field to be explained are not settled.4
Recent historiography of the American Revolution (with a few notable exceptions) has been preoccupied with the particular. But the most striking feature of the writings celebrating the Bicentennial has been the absence of any new, fresh interpretation explaining the broader meaning of that historic occurrence.
In addition, too much of historical scholarship is fragmented and overspecialized, and adrift without theoretical moorings or a unifying vision.5
Our essay seeks to set the mass of recent scholarship of the American Revolution within the unifying paradigm of the sociology of revolution—of revolution as a people's war. This paradigm will permit a better understanding of the nature and meaning of the American Revolution. It will invoke as a leitmotif the tensions among inequality, equality, and egalitarianism which both inspired and divided the human actors of the Revolution.
This unifying paradigm and these issues concerned with equality will emerge as we answer four difficult questions about the era of the American Revolution:
Before answering these four questions at length in the major sections of our essay, we will first briefly define some preliminary issues relating both to a paradigm of revolutionary social change and to the role of equality in such change.
Robert Nisbet in Social Change and History traces the effort to understand and explain social change back to the pre-Socratic Greeks (in the West at least). Heraclitus saw all of life as involving change and he emphasized war as the ultimate activity stimulating social upheaval.6 In developing a cosmology, Adam Smith, as a typical Enlightenment thinker, drew heavily upon concepts first articulated by the Greek Sophists.7 Since the classical world view profoundly influenced the Renaissance and Enlightenment, it is not surprising that patterns of cyclical thought appear continuously from Machiavelli to John Adams.
Machiavelli, as J.G.A. Pocock shows in The Machiavellian Moment: Florentine Political Thought and the Atlantic Republican Tradition, had an enormous impact upon English revolutionaries such as James Harrington, and hence on the later Whigs, and finally on the Americans who shared that outlook. A cyclical metaphor was at the core of the Americans' paradigm or framework for analyzing social change and revolution.8
The emphasis on “modernization” in the sociology of revolution has stimulated the study of social change and has called into question the “inertia” or “tradition” paradigm for revolution. Perhaps the most influential recent contribution has been Barrington Moore, Jr.'s Social Origins of Dictatorship and Democracy.9 One of Moore's most important contributions to an analysis of change was questioning the “inertia” paradigm, one of the unexamined assumptions about change. Borrowing from physics, the inertia paradigm assumed the existence of a traditional, natural order of things in society; only change away from this “norm” need be explained. Quite apart from its conservative bias, the inertia paradigm overlooks the enormous educational effort required if that “tradition” is to be passed on from one generation to another. This does not happen automatically. The lack of social change in a society is as important to explain as any significant change.10
Those who have lived through the last decade of change in America can appreciate the situation facing British officials after 1763. What sort of “tradition” could be emphasized in an Empire which (1) was still feeling the effects of a revolution less than a century before, (2) was already entering a series of changes collectively labeled “the Industrial Revolution,” and (3) recently had acquired a vast overseas empire? Assuming it could be articulated, what meaning would that tradition have for colonists whose average age was roughly sixteen? Complicating the unity of a tradition was the soaring colonial population. A high birth rate and an influx of immigrants (many not from England) would virtually double that population during the years of the “revolutionary generation,” over a third of whom would leave the seaboard areas for land in the interior.
From this viewpoint it is evident that we must consider revolution and social change on both a theoretical level and a global basis. Immanuel Wallerstein's The Modern World-System11 attempts to utilize such an approach, covering roughly the two hundred years after 1450. Wallerstein's approach reminds us how important an analytical framework covering a vast historical landscape is if we are to fashion more coherent theories of social change and revolution.
Strangely enough, in stressing this broad panorama, modern scholarship has just recently caught up with the popular social unrest which was perceived by many at the time.12 This will serve as a theme of our essay: the nature of popular social unrest in the epoch of the American Revolution.
Our best perspective for examining the American Revolution is to sketch briefly the general agreement about the revolutionary process: the Why, Who, How, and What of revolution.
In reading through all the jargon of modern social science dealing with revolution and change (e.g., “J curves,” “relative deprivation,” and “rising expectations”) we are forcefully impressed that these concepts, if not the terminology, were understood by the ancients, as well as many of the revolutionary generation in America.
As might be expected, much ink and paper have been expended simply on trying to define revolution.13 We need not get bogged down in attempting to offer an all-inclusive definition. For our purposes, a useful, straightforward definition is that of Lyford P. Edwards in The Natural History of Revolution: “A change brought about not necessarily by force and violence, whereby one system of legality is terminated and another originated.”14
Assessing the necessary preconditions for revolution leads us to examine the composition of the potential revolutionary group. The important role of ideology is evident in Crane Brinton's The Anatomy of Revolution, where he emphasizes “the desertion of the intellectuals” as a key phase in the prerevolutionary developments.15 This involves more than desertion, however, for the intellectuals do not simply withdraw support from the “Old Regime,” as Brinton termed those in power. Beyond merely deserting, a growing number of intellectuals mount an increasingly vigorous attack upon the very philosophical underpinnings of the Old Regime; even more importantly, they advance an alternative paradigm, or world view, about how the society ought to be organized.16
The sociology of revolution demands much greater exploration of the whole question of legitimacy and how a new legitimacy comes to supplant the old.17 In this regard, a very useful idea is the “paradigm” derived from the historian of science, Thomas S. Kuhn.18 Our tendency to conceptualize reality in terms of a model, or paradigm, is closely related to the older tradition in the study of the sociology of knowledge which used the term Weltanschauung, or world view, to describe that idea.19 If we see paradigms as subsets within a world view, an individual might hold a number of separate or overlapping paradigms. The totality of these paradigms constitutes his world view, and they seldom conflict with one another.20
Kuhn's normal science—the dominant, accepted, legitimate paradigm—bears a similarity to the “Old Regime” in the study of the sociology of revolution.21 A current belief in America holds that the authorities need to use force to restore law and order. That outlook seems to be a misreading of the dynamics of social change; real authority always rests upon legitimacy, not force.22 Legitimacy is, in fact, the very antithesis of force. Large protests within a society usually decry some objective inequities, which fuel dissent.
Revolutions, whether in science or society as a whole, are preceded by what could be called “a crisis in legitimacy.” Authority must ultimately rest on a belief, held by virtually the entire society, that the social order is legitimate, that it corresponds with the way things “ought” to be in a just and equitable society. Operationally, men seek solutions to social problems within this legitimate world view. Until a competing revolutionary world view arrives, no one suspects that a solution might be framed outside of this dominant world view.
The concept of legitimacy leads us into another important aspect of the revolutionary process: that is the societal dynamics in revolution, involving the relationship of the leadership to the larger population and the internal workings of the revolutionary coalition. The idea persists that the American Revolution was a minority affair. Walter Lippmann once observed: “Revolutions are always the work of a conscious minority.”23 Since revolutions always have leaders, it tells us little to observe that, say, the American Revolution was led by a small minority. This elite concept fosters the innuendo that such a minority simply manipulates the majority to do its bidding.
Against the view that a minority manipulates revolutions, a general postulate holds that at the level of legitimacy the great social revolutions have always involved the bulk of the population. If the dialogue between leaders and their supporters ceases, or if the leadership exceeds the limits of its legitimacy, then the revolutionary movement hesitates, loses momentum, and may fail altogether. The minority may then resort to force, a treacherous course, for the leadership then begins to lose the legitimacy which animated it, and is no longer very revolutionary.
In “Ideology and an Economic Interpretation of the Revolution” Joseph Ernst has distinguished mentality, ideology, and world view.24 Briefly defined, a “mentality” is a vague but usually broadly held attitude; the dynamic concept of equality that was increasingly held by Americans of the revolutionary generation is an example of such a mentality. Next, a more formal “ideology” characterizes the leadership in any sort of movement: an effort to explain and more fully understand the relationship “between ideas and social circumstances.” At its most general level, the American ideology came to encompass republicanism. Finally, a “world view” is an even more detailed theoretical analysis developed only by a few, usually among the wider leadership. In the American Revolution, those who sought to comprehend the larger role of the British mercantile system, or Empire, were thereby propounding a world view that integrated social, economic, and political events.
Revolutions are shifting coalitions over time—among both the leadership and the larger population. Revolutionary coalitions embody all three of these levels of awareness and so contain overlapping areas of consensus and disagreement. Consequently, there will be basic “fault lines” that create internal divisions within those groups comprising the coalition. Over time, the dynamics of any revolution are shaped by the workings within the specific interest groups comprising the coalition, as well as by the interactions among those groups.
As an example, one of the basic fault lines in the American Revolution divided those who wanted only independence from England from those who wished to seize the opportunity to work more extensive changes in the structure of American society. Was the American Revolution merely a colonial rebellion or was it a true social revolution? The answer is, of course, both.25 Any future interpretation of the nature of the American Revolution must begin by making clear the internal divisions among the revolutionaries, and the ways in which the evolving factions and coalitions shaped the direction of change. (This same debate has occupied historians of the Revolution since at least the time of J. Franklin Jameson's The American Revolution Considered as a Social Movement and Carl Becker's The History of Political Parties in the Province of New York.)
Explicating the relationship between the leadership and their supporters leads to another aspect of revolution: how the military means employed affect the whole post-revolutionary society. Whether in an internal civil war or in a colonial war for independence, if one side is able to wage a “people's war,” such a world view and organizational structure will have repercussions throughout the society. One of the major divisions in the American revolutionary coalition—between advocates of a traditional war as opposed to a people's war—reflected a fundamental difference in paradigms, if not world views, among different revolutionary factions.
Revolutionary coalitions cannot be maintained indefinitely. As a revolutionary era reaches its final stages, its radical actions are replaced by an effort to conserve the essentials of the revolutionary program. In the American case this is exemplified in the Constitution replacing the Articles of Confederation. Despite the heated debate over the Constitution, what is significant is that the opposition, with the inclusion of the Bill of Rights, did not conclude that the Constitution was a violation of what they conceived as a legitimate social order.
Our discussion of the sociology of revolution has highlighted the conditions and groups which make revolution a possibility and then a reality. Such an analysis may ignore the fact that individuals (rather than classes or coalitions) feel, think, and act. In short, there is a psychology as well as a sociology of revolution. (It is impossible to miss the Founding Fathers' constant references to ambition, fame, envy, power, or greed as significant factors.) Often lacking in contemporary theories of revolution and social change is an understanding that one must begin with a view of human action or nature which links the individual to the social groups of which he may become a part.26
The drive for equality, broadly understood, can be viewed as the central motivating factor in all revolutionary action. Equality serves as the organizing principle for constructing a social interpretation of the revolutionary era.27 The issue of equality follows from the fact that human beings as social animals demonstrate a tendency toward hierarchical attitudes.
There is a constant tension among three concepts: inequality, equality, and egalitarianism. First, inequalitarians tend to be those at the top of a given social order; with their privileges usually based upon birth or wealth, they conceive of a rather rigid hierarchy with little mobility. A number of inequalitarians do feel some paternalistic concern for those beneath them, which may well be reciprocated by a few below.
By contrast, the egalitarian agitates for the destruction of this status system by redistributing property, wealth, and income. The egalitarian program necessitates the creation of an elite group of guardians whose task it will be to administer the new order. In reality, therefore, a fully egalitarian society is a logical impossibility: the small elite is always necessary. Finally, the equalitarian society is characterized by the idea of equality before the law. For the equalitarian the chance to compete does not imply the equal chance to win. In such a circumstance of individual differences, hierarchy—or ideally a plurality of hierarchies, offering each person an opportunity to find some field in which he can excel—continues to exist, permitting enormous mobility. The equalitarian society is a contract society, rather than a status society, and is based essentially upon achievement. J.R. Pole's The Pursuit of Equality in America is a reminder of how formative equality has been to the American experience, especially to the revolutionary era.28
What I call virtue in the republic is the love of the patrie, that is to say, the love of equality.
The question of why the American Revolution occurred requires us to distinguish between long- and short-range factors. Further, insofar as these factors pertain to the changing structure of American society, were they such as to have created a loss of legitimacy for the government of the Mother Country, apart from actions initiated by the British authorities themselves?
The study of the last decade most closely resembling an interpretation of the coming of the Revolution is Bernard Bailyn's The Ideological Origins of the American Revolution. Bailyn wrote that as he studied the pamphlets and other writings of the revolutionary generation, he was “surprised” as he “discovered” that (even more than by the work of John Locke) the Americans had been influenced by the freedom-oriented writings of Whig pamphleteers such as John Trenchard and Thomas Gordon, the authors of Cato's Letters and The Independent Whig.2
But Bailyn's discovery of these “Old Whig” pamphleteers was anticipated by others. As early as 1789 David Ramsay's History of the American Revolution mentioned “those fashionable authors, who have defended the cause of liberty. Cato's Letters, The Independent Whig, and such were common...” Reminiscing in 1816 about the era of the 1770s, John Adams observed, “Cato's Letters and The Independent Whig, and all the writings of Trenchard and Gordon,...all the writings relative to the revolutions in England became fashionable reading.”3
Bailyn's approach to ideas and historical causation fits comfortably with the dominant outlook which tends to downplay social and economic conflict—that is, the struggle over power—in the American past, present, and, indirectly, the future. But is it possible to separate ideology (as a cluster of ideas about reality and what ought to be) and political and constitutional issues from a social and economic context? Ideas cannot exist independent of some subject, content, and context.
In stressing the importance of the writings of Whigs such as Trenchard and Gordon, Bailyn has rendered an important twofold service. First, it becomes apparent how far back beyond 1776 we must go to understand the ideas that were influencing Americans. Second, reading through the works of Trenchard and Gordon reveals the extent to which equality was the fundamental issue interwoven into the various specific issues with which they dealt.4
With respect to both of Bailyn's points, J.G.A. Pocock's Machiavellian Moment takes us back to the efforts of Florentine thinkers to sustain a republican form of government. These thinkers (of whom Machiavelli was the most profound) were deeply influenced by Aristotle's works and by their reading of the degeneration of the Roman Republic into Empire. One clue to Machiavelli's republicanism is his work as a militia organizer during the period of the Republic in Florence.
Two of the dominating concepts for these republican theorists were virtue and corruption, both essential to understanding the republican paradigm which culminated in the American Revolution. Montesquieu fully understood the republican bearing of virtue in his remark, quoted above, that virtue fundamentally depended upon the existence of equality.5 Conversely, the corruption and decay which undermined republics were closely related to inequality.6 Interwoven through Machiavelli's analysis is his deep concern with the whole question of legitimacy.7
In Pocock's analysis, seventeenth-century England underwent many of the changes the Italian city states had experienced a century before, complicated by the Protestant Reformation. The English debate drew upon Machiavelli and the republican historians of the ancient world. Both Pocock's Machiavellian Moment and Christopher Hill's The World Turned Upside Down: Radical Ideas during the English Revolution8 offer an abundance of evidence to link the debate to inequality/equality/egalitarian divisions.
Drawing upon the ancients, Machiavelli, and Harrington, the “Opposition,” such as Trenchard and Gordon, stretched across a wide political spectrum. Caroline Robbins's The Eighteenth-Century Commonwealthman indicates that many Whigs thought of themselves as standing in the tradition of the Levellers of the English Revolution, and those views, stressing equality and liberty, were transmitted across the ocean to the New World.9 In assessing the “Opposition,” Kramnick's Bolingbroke and His Circle has focused some attention on the importance of Lord Bolingbroke.10 Forrest McDonald in The Phaeton Ride has dealt with Bolingbroke's influence on later American leaders such as Thomas Jefferson.11 Roger Durrell Parker has explored “The Gospel of Opposition” both in England and America.12
There were enormous changes occurring in the areas of commerce, banking, and even in manufacturing. Even though the State had often been involved in the process, there was certainly no reason to believe that this had to be the case.13 Indeed, a major issue separated the Court view (those who sought to use government in this economic development, and incidentally to help themselves in the process) from the Country Party view (those who felt government intervention was not only unnecessary but detrimental). The term Financial Revolution has been used by historians to suggest that this State interventionism was the only natural and necessary way to realize this process. Such an analysis tends to cast opponents of the State's intervention in the economy as opponents of market development, which simply was not true.14
The Country Party included men so wedded to a world view of agrarian independence that they wanted nothing to do with a financial, commercial, market revolution, with or without State interventionism. In its most rigid form, theirs was an egalitarian program modeled on ancient Sparta.15
Many of the Country Party, on the other hand, were committed to equality of opportunity before the law. They believed they could best achieve such equality by limiting the State to a very negative role. This view united them in their opposition to the statism of the Court Party and its evident inequalitarianism. They fully accepted the implications of the emerging urban-market revolution. They were in no way philosophically wedded to agrarian life. Farmlands were simply another area where market and technological techniques would yield important improvements. State interventionism was the enemy.16
A final group was, perhaps, the most important and representative of all. Their rhetoric was usually agrarian. They understood the virtue of the agrarian life: the apparent political stability of a nation of independent yeomen. But they realized the potential benefits from an urban-market sector within society. They were also disenchanted with the long-range corruption of a state financial system based upon great extremes of wealth and the creation of an urban proletariat without property.17 Whatever their ambivalences, they opposed the Court's alliance of State and private interests.
The ideology flowing from the English Revolution needs to be linked to the social change in the American colonies during the eighteenth century. In this reassessment the most important contribution is Rowland Berthoff and John M. Murrin's “Feudalism, Communalism, and the Yeoman Freeholder: The American Revolution Considered as a Social Accident.”18 Berthoff and Murrin point out that “Until very recently few historians argued that the causes of the Revolution lay in the structure of colonial society.” And “[n]either J. Franklin Jameson, when in 1925 he broached the question of the Revolution as a social movement, nor Frederick B. Tolles, in reassessing the matter in 1954, paid any attention to the possibility that social causes impelled the political events of the years 1763 to 1775.”19
One recent example is Gordon S. Wood's observation in “Rhetoric and Reality in the American Revolution,” that “Something profoundly unsettling was going on in (their) society.”20 In going back to the half century before the Revolution, however, Berthoff and Murrin suggest that “[i]n certain ways economic growth and greater social maturity were making the New World resemble the Old more closely.” In such a society “becoming both more like and more unlike that of Europe, more and more unsettled, more complex and less homogeneous, a revolutionary war—even one conducted for the most narrowly political ends—could hardly fail to stimulate certain kinds of change and inhibit others.”21
Berthoff and Murrin suggest that in American society a
recurrent tension between this conservative, even reactionary, ideal and the practical liberty and individuality that their new circumstances stimulated is a familiar theme of colonial history—Puritanism against secularism, communalism eroded by economic progress, hierarchic authority challenged by antinomianism.22
Berthoff and Murrin disagree with those historians who believe “that feudalism was too anachronistic to survive in the free air of a new world.” On the contrary:
The opposite explanation is more compelling. Feudal projects collapsed in the seventeenth century, not because America was too progressive to endure them, but because it was too primitive to sustain them. A feudal order necessarily implies a differentiation of function far beyond the capacity of new societies to create. In every colony the demographic base was much too narrow.... By 1730 the older colonies had become populous enough to make the old feudal claims incredibly lucrative.23
On the shifting social pattern imposed by the State, Berthoff and Murrin are worth quoting at length:
exploitation of legal privilege became the single greatest source of personal wealth in the colonies in the generation before Independence. By the 1760s the largest proprietors—and no one else in all of English America—were receiving colonial revenues comparable to the incomes of the greatest English noblemen and larger than those of the richest London merchants. Indeed the Penn claim was rapidly becoming the most valuable single holding in the Western world.24
A number of historians such as Richard Maxwell Brown in “Violence and the American Revolution” have commented upon the rising level of internal social disorder and violence that preceded the American Revolution, and which mounted with growing intensity.25 This protest needs to be linked to the pseudofeudal revival, for as Berthoff and Murrin observe, it “was as divisive as it was profitable, provoking more social violence after 1745 than perhaps any other problem.”
Even prior to the Revolution the most violent protests against the pseudofeudal revival, as Berthoff and Murrin note, came from areas where the settlers were transplanted from New England. New England “resisted the feudal revival because in several important respects it was rather less modern than the rest of English America.” The early New England town conducting its affairs through a general meeting of the freeholders, a large majority of the inhabitants, may seem modern, but “it embodied an archaic English tradition.”26 Kenneth Lockridge has called it a “Utopian Closed Corporate Community.”27 “Because it distilled the communal side of the medieval peasant experience—with lordship quite deliberately excluded—it could resist feudal claims with furious energy during the middle third of the eighteenth century.”28 But as Berthoff and Murrin point out, this communalism had been breaking down from other causes: “the population grew denser, less homogenous, more individualistic, and more European.”
In the face of an attempted pseudofeudal revival, on the one hand, and the breakdown of the vestiges of communalism on the other, “the new democratic individualism harked back to yet a third English model that had survived more successfully in eighteenth-century America than in England itself—the yeoman freeholder.” Here we are brought in contact again with the appeal of the “Country” ideology. In touching on the growing inequalities in prerevolutionary American society, Berthoff and Murrin observe that “the image of a golden age of republican equality, of a society of yeoman freeholders (abstracted from their place among the various interrelated classes of English social tradition and colonial reality), had its greatest appeal at a time when there was solid reason to feel things were going too far the other way.”29
The growth of cities and the development of a market economy are blamed for these growing disparities, while the continued inequalities engendered by the statism of the political system itself are ignored.30 To what extent did such disparities arise within the overall development of a rapidly expanding economy in which many were moving upward, though some more rapidly than others?
In addressing these long-run social trends, Jack P. Greene points out that one has to be careful not to ascribe too great a role to social tensions in causing the Revolution.31 However, the role of the British government's statist interventionism, which precipitated the social turmoil of the feudal revival, is inseparable from the extension of imperial policymaking, which led directly to the Revolution.
The leading men of America, we may believe, wish to continue to be the principal people in their own country.
Revolutions, of course, are not begotten by abstract social changes extending over a century, but by living individuals who come to feel social repercussions over relatively short periods of time. To survey this accelerating human drama of the American Revolution, we need to describe the shifting composition of the protest coalition as the issues moved toward self-defense and later independence.
Two distinct and dissatisfied groups launched protests against the elites who dominated a colonial society marked by inequalities. Both breathed inspiration from the Country-Whig tradition and its stress on equality. The first group, representing the mechanics and artisans of the burgeoning colonial urban centers, resented being cut off from full participation in the political system and its expanding social differentiation. As in Europe, where such unequal disfranchisement was even more extensive, organized rioting became a carefully orchestrated instrument of politics.2
The second group comprised the townspeople and farmers in the western segments of several colonies, who chafed at the inequities of their underrepresentation in the assemblies. Serious protests erupted in New York, Pennsylvania, and the Carolinas during the same period as the developing quarrel with British imperial authorities.3
The early social protests during the years 1759 to 1765 are well documented in Bernard Knollenberg's Origins of the American Revolution: 1759–1765. Knollenberg observed, “in reading some accounts of the American Revolution, one gets the impression that until the very eve of the outbreak of war, active colonial opposition was limited to a relatively few propagandists and hotheads, which is far from true.”
But the most unifying action of all was the Stamp Act of 1765.4 Nothing better demonstrates the British notions of inequality and subordination. Thomas Whately, the official who drafted the Act, commented upon the higher tax on university and law degrees in America by saying that these were raised, “in order to keep mean persons out of those situations in life which they disgrace.”5 Clearly American equalitarian ideas of mobility, especially through education, were out of step with imperial thinking!
In The Founding of a Nation: A History of the American Revolution, 1763–1776, Merrill Jensen has observed that the Stamp Act “transformed” the nature of “American opposition to British policies.” The real engine of protest was the riots which disturbed the more conservative of the American leaders.6 But the most lasting result of the Stamp Act protest was institutional: a communication network among the Americans grew out of the numerous protest organizations ranging from the Stamp Act Congress to the Sons of Liberty.
What provoked the final crisis, of course, was the Tea Act. Designed to aid that government-chartered monopoly, the East India Company, the Act culminated in the famous Boston Tea Party (December 16, 1773). This defiance was a brilliant stroke to polarize the issue and undermine British legitimacy. The British, as is well known, retaliated by passing the “Coercive,” or “Intolerable Acts.”
In the context of the crisis of legitimacy, the Intolerable Acts form a sort of watershed of revolution. David Ammerman's In the Common Cause: American Response to the Coercive Acts of 1774 indicates the new direction of revolutionary protest. The Americans responded by calling a Continental Congress.
It is noteworthy that the internal dynamics of the protest coalition were also changing, especially in Massachusetts, the heart of protest. Urban firebrands such as Samuel Adams now found themselves outflanked, and even “out-radicaled,” by the western agrarians.7 These militiamen were prepared to fight, if necessary, to protect their rights. As J.R. Pole observes in The Decision for American Independence, “The progressive breakdown of the formal structure of power threw unprecedented opportunities into the hands of the local militants.” From early 1775 onward into the War itself, it was not unusual for local Committees of Safety to exert enormous pressure—a procedure known as Recantation—upon those suspected of Loyalist sympathies. Here was a People's War in action! The first fighting, of course, occurred when the British sought to march to Lexington and Concord, literally into the teeth of this armed countryside of agrarian militia.
Time, itself, is something of a legitimizer. Each day that American institutions ruled the country solidified the notion of their legitimacy. What Adam Smith realized in his memorandum (quoted earlier) to the British government was that local American leaders, having come to rule themselves and their communities for some period of time, would not easily surrender that role.8 More than a military effort by the British would be needed to undo the organic development and growing legitimacy of such a revolutionary society.
In this interim, American thinking increasingly recognized that independence was the only solution to the problem. The catalyst of that final shift was Thomas Paine's little pamphlet “Common Sense.”
Equality, as noted, had been a conspicuous thrust of the Whig tradition. In 1721, for example, in Cato's Letters number 45, “Of the Equality and Inequality of Men,” Trenchard and Gordon had noted, “It is evident to common Sense, that there ought to be no Inequality in Society,...” Paine raised the same equalitarian concern in the quotation he chose for the cover of his own pamphlet: “Man knows no Master save creating Heaven, Or those whom choice and common good ordain.”
Paine opened “Common Sense” by distinguishing between “society,” which “in every state is a blessing,” and “government,” which, “even in its best state, is but a necessary evil.” Because of “the inability of moral virtue to govern the world,” government, whose purpose was “security,” was necessary. The best form of government was one which insured security “with the least expense and the greatest benefit.”
Paine denied that independence would inaugurate a civil war among the colonies. “Where there are no distinctions there can be no superiority; perfect equality affords no temptation,” Paine argued. “If there is any true cause of fear respecting independence, it is because no plan is yet laid down.”
As John M. Head notes in A Time to Rend: An Essay on the Decision for American Independence, “As late as the fourth week of June, what the members of Congress would do about...independence was not irrevocably established.” Certainly, the advocates of independence were concerned not only to vote it through, but that it win more than a slight majority. Popular pressures, rising up through the state governments especially after mid-May, changed the picture.
The Declaration of Independence was not, of course, in any sense a blueprint for a revolutionary society. At the same time, its emphasis on equality voiced something more than just a declaration of freedom from British rule. In recent years it has become fashionable to talk about the American Revolution as simply a conservative, colonial rebellion. These tensions swirling around the issue of equality would seem to belie that image.9 We need to define precisely what criteria are being employed in making such an assessment. Many years ago R.R. Palmer noted the large percentage of Loyalists who left America, never to return.10 Since this percentage of disenchanted emigrés was larger than that of other so-called more radical revolutions, it appears an unlikely yardstick to measure the radicalness of any revolution. And in a recent study, Men in Rebellion: Higher Governmental Leaders and the Coming of the American Revolution, James Kirby Martin has estimated that elite turnover averaged 77 percent, but ranged as high as 100 percent in several colonies. Compared with the 50 percent in Russia after 1917, this seems very radical indeed! As we shall see, it was this vast turnover and appearance of “new” men which sociologically explains the movement culminating in the adoption of the Constitution.11
Finally, a word is in order about the Tories, or Loyalists. Despite some errors, William H. Nelson's little volume, The American Tory, remains the best. The occupations and social classes of the Loyalists cut across American society even if they were more highly represented among the old oligarchy. Thus, of the 300 people banished from Massachusetts in 1778, about a third were merchants and professional men, another third were farmers, and a final third were artisans, shopkeepers, and laborers. Nelson identifies two areas where Loyalists concentrated: the extreme western frontier from Georgia up into New York, and the maritime regions of the Middle Colonies. Religion also played a part, especially among minorities:
Almost all the Loyalists were, in one way or another, more afraid of America than they were of Britain. Almost all of them had interests that they felt needed protection from an American majority.... Not many Edition: current; Page: [21] Loyalists were as explicit in their distrust of individualism as, say, Jonathan Boucher, but most of them shared his suspicion of a political order based on the 'common good' if the common good was to be defined by a numerical majority.
There existed a conflict of fundamental world views. Loyalists and Patriots “differ not only about the Revolution itself, and revolutions in general: even more deeply, they differ about the essential functions of government, about the proper role of the State, and about the nature of society itself.” It was in essence a confrontation between a corporatist and an individualist world view.12
“War is ten percent fighting, ten percent waiting, and eighty percent self-improvement.”1
The question of how the Americans won the Revolution has for the most part been treated essentially as a military problem, usually in terms of conventional armies confronting each other in a series of set battles and campaigns. Some theorists of guerrilla warfare, such as Lewis H. Gann in Guerrillas in History, have seen the American Revolution as of little relevance to understanding that mode of warfare:
Regarding revolutions in general, nothing can be more dangerous to insurrectionary planners than the romantic notion that virtuous peoples—rightly struggling to be free—must necessarily win in their struggles against tyrants. This interpretation is based on a misconceived idea of revolutionary wars that many textbooks help to perpetuate. According to the old version, the Americans won the War of Independence because the British Redcoats were no match against liberty-loving farmers sniping from behind cover against over-disciplined regulars.... But the American War of Independence was not mainly won by guerrillas but by regular soldiers and sailors. British soldiers were perfectly capable of becoming as skilled in skirmishing as their American opponents.2
Gann's observations are indicative of the misunderstanding of some writers on guerrilla or counterinsurgency warfare. While guerrilla warfare is a part, a tactic, of revolutionary warfare, the two are not the same. Certainly, neither virtue nor mass support of a population can guarantee victory—a superior foe willing to employ a pacification program involving mass genocide may win—but the support and involvement of the people is a necessary prerequisite to victory in revolutionary warfare, and it is significant that this aspect is now in the process of rediscovery. However, it is peripheral to the essence of revolutionary warfare whether the regular soldiers of an occupying force can develop counterinsurgency techniques. For revolutionary warfare is essentially a political activity, as the quote from Mao above clearly implies. “Self-improvement” means improvement not only as a fighting force, but also in raising the level of consciousness both of the soldiers and of the people as a whole, from a “mentality” toward an “ideology” (in Joseph Ernst's terms).
As James W. Pohl has observed, perhaps the most astute American analyst of people's revolutionary war was Thomas Paine. His Crisis papers, written between 1776 and 1783, are literally filled with observations such as the following: “It is distressing to see an enemy advancing into a country, but it is the only place in which we can beat them” for such a campaign placed the enemy “where he is cut off from all supplies, and must sooner or later inevitably fall into our hands.”3
Since the Americans controlled the country, except where there were British troops—and several times during the war when British armies were in transport at sea none of their forces were on American soil—the British had to devise a strategy to regain North America. For most of the war the British imagined this as an essentially military problem. But from the standpoint of revolutionary warfare and legitimacy, much more was involved.4
George Washington had to devise a strategy to counter that of the British. In his recent study The Way of the Fox: American Strategy in the War for America, 1775–1783, Dave Richard Palmer has traced this through several phases. A great deal has been made of the idea that several times, after American defeats, the British were near victory. A corollary is that American victory was possible only through an alliance with France. In the light of what we know about revolutionary warfare and the tactics of counterinsurgency, both of these assumptions appear wide of the mark.
The tactics of counterinsurgency may be summarized briefly (without mentioning the ideological dimension): first the enemy's regular army is broken up, then the irregular units, and, finally, as the remaining guerrillas are isolated from the population, the insurgency begins to dry up. It is also necessary to deny the enemy the use of any sanctuary into which he can retreat or from which he can secure supplies.
Viewed in this light, it is evident that the British never took the first step toward victory. The Americans understood fully the principles of “protracted” conflict.5 British commanders acknowledged they controlled nothing except where their armies encamped. Without that first step, pacification was impossible.
New England, staunchly Patriot—94 percent in Connecticut, for example—was the sanctuary of American forces. From this source supplies and troops flowed, on an irregular basis to be sure, to the American army. In a fine account Page Smith has explained why Washington's army varied so greatly in size, sometimes from one week to the next, as men went back to farm.6 Every fall these farmers went back to plant, but in the spring, year after year, they returned to fight again.
The above suggests that a sociological analysis of the American army would be of value. Here again, the inequalitarian-equalitarian-egalitarian tension played an important part.
From a sociological perspective, the courageous army that struggled through that memorable winter at Valley Forge was hardly representative of either the army or the population supporting it. It was noted above that the backbone of the fighting army of the spring and summer—whether militia or Continentals—often returned to their farms during the fall and especially the winter. Apart from the officers, a high percentage of the winter soldiers were what might otherwise be called displaced men. With few roots in the society, they had nowhere else to go. Years ago Allen Bowman in The Morale of the American Revolutionary Army explored the number of foreigners, convicts, 'former' Loyalists, and British deserters who formed the ranks of the army.
The ambitions of much of the officer corps, and the sense of inequality in some of them, must also be related to the function of the regular army as a military instrument.7 It also reveals one of the major fault lines within the revolutionary coalition. A tenet of radical Whiggism, detailed in Lois Schwoerer's “No Standing Armies!” The Antiarmy Ideology in Seventeenth-Century England, grew out of the “Standing Army” controversy in England.8 Men such as John Trenchard fully understood, from the English Revolution and after, that the King's power rested on his control of a regular, standing army. Bernard Knollenberg's Origins of the American Revolution and Growth of the American Revolution suggest that radical success was a factor in the decision by British policymakers to garrison a force in North America, which might be used there or brought back home to quell domestic dissent.
Radical Whiggism leaned, therefore, toward the idea of a people's militia, as was to be reflected later in the Second Amendment to the American Constitution. Such a force tends to be essentially defensive, as we shall see. It fights best when the enemy invades its community. It has neither the organization, training, weaponry, nor motivation for an offensive action, let alone a sustained one. Its very decentralization militates against effective hierarchical command from above.
On the other hand, Richard Kohn in “The Murder of the Militia System” and Eagle and Sword describes how the less radical members of the American revolutionary coalition tended to think along more conventional military lines.9 Unlike the militia, an organized army is capable of a sustained, offensive campaign. It can initiate an assault, capture, and hold extensive territory.
Beginning with a mentality of equality, a few Americans did not stop with an ideology of republicanism, but carried the analysis a step further, toward a world view of empire. Even young John Adams, who was less drawn toward empire than some other leaders and could write about its contradictions in the 1775 Novanglus letters, was capable of such an imperial vision.10 The most immediate focus of this kind of world view was Canada. Can it be accidental that in 1775, with the British army bottled up in Boston, the American leadership took the opportunity to launch a nearly successful, and then ultimately disastrous, attack on Canada? Assuming the Americans thought the Canadians wanted liberation, which soon appeared an illusion, how can we explain the continued appeal of a Canadian expedition except in terms of empire? As the war drew to a close, Washington and others were still envisioning such a campaign, despite their scant resources. The dreams of empire died hard.
The question of Canada, however, leads to another facet of the war, the French Alliance. Richard B. Morris, in The American Revolution Reconsidered, is one of the few historians who suggest, with plausibility, that victory would have been possible without the Alliance, and that the Alliance probably created as many problems as it solved. The opportunity to acquire Canada was also a factor in the alliance with the French. The continued American desire for Canada and the French coolness toward this imperial thrust are described in William C. Stinchcombe, The American Revolution and the French Alliance. Some Americans wanted not only independence, but independence and empire. To understand better that goal and its relationship with the Alliance, the situation in late 1777 and early 1778 must be recalled.
Late in 1777 the British had not only suffered a significant defeat at Germantown, but had also lost their first army at Saratoga. The losses to militia forces, such as John Stark's Green Mountain Boys, which Burgoyne suffered en route, weakened the British army. At the first battle of Saratoga (September 19, 1777), Burgoyne took heavy casualties from Daniel Morgan's sharpshooters, on which see Don Higginbotham, Daniel Morgan: Revolutionary Rifleman, and North Callahan, Daniel Morgan: Ranger of the Revolution. Horatio Gates effectively used the American militia and applied guerrilla strategy in forcing Burgoyne's surrender at Saratoga (October 17, 1777).
The peace feelers that resulted in the Carlisle Commission were superseded by the news of the French Alliance. What is most interesting is the shrill tone with which the American leadership greeted these efforts at negotiation. Surely at that date, this was not a question of undercutting the legitimacy of the American leadership. The more hawkish British leaders correctly indicated that the very negotiations with the Congress added to its legitimacy. What the Congress seemed most intent on doing was cutting off any dialogue between the members of the Carlisle Commission and the larger American population.11 It does not seem unfair to suggest that the great fear might have been that negotiations, once under way, might culminate in independence without empire. The alternative of independence without empire might satisfy the great majority of the people; it was certainly less acceptable to a segment of the leadership concerned with empire. The most complete study is Weldon A. Brown, Empire or Independence: A Study in the Failure of Reconciliation, 1774–1783.12 Franklin, in demanding Florida and Canada, plus an indemnity, was not offering conditions upon which to open negotiations but rather conditions designed to abort them, and that is the way the British interpreted his actions. The failure of these negotiations protracted the war for over three more years, with great suffering on both sides. In a peace two years after that, the Americans finally settled for independence without empire.
What, then, did the Americans gain from the Alliance? Little more than might have been negotiated in 1778. It is true that a French army and naval force made possible Cornwallis's surrender at Yorktown, but that event cannot be dealt with in isolation. The exhaustion of his army in its weaving campaign through the South had been very much the work of regular, partisan, and guerrilla American units.
Nathanael Greene's strategy of dispersal of forces created the basis for the partisan warfare campaign in the South. John Shy's “The American Revolution: The Military Conflict Considered as a Revolutionary War,” Don Higginbotham's The War of American Independence: Military Attitudes, Policies, and Practices, 1763–1789, and Russell F. Weigley's The Partisan War: The South Carolina Campaign of 1780–1782 provide important new analyses of the role of militia and guerrilla warfare. Hugh F. Rankin's Francis Marion: The Swamp Fox discusses the guerrilla volunteer marksmen who formed “Marion's Brigade,” which played a crucial part at battles such as Georgetown, Eutaw Springs, and Parker's Ferry. Don Higginbotham's “Daniel Morgan: Guerrilla Fighter” analyzes Daniel Morgan's guerrilla tactics (e.g., against Cornwallis and Tarleton at the battle of Cowpens, South Carolina, and in North Carolina), for which Morgan has been considered the greatest guerrilla commander of the Revolution.
The British called the area around Charlotte, North Carolina, the “Hornets' Nest,” and later they were forced to abandon much of their equipment in evading engagements with American units. That every successful insurgency culminates in regular army forces accepting the surrender of their counterparts should never obscure the role of the irregulars. By that time, many of the irregulars had stayed on in the countryside to maintain order, or had returned to their work.
After 1778, British strategy moved toward the possibility of developing a pacification program. As Shy's A People Numerous and Armed makes clear, the fundamental problem was always the American militia:
The British and their allies were fascinated by the rebel militia. Poorly trained and badly led, often without bayonets, seldom comprised of the deadly marksmen dear to American legend, the Revolutionary militia was much more than a military joke, and perhaps the British came to understand that better than did many Americans themselves. The militia enforced law and maintained order wherever the British army did not, and its presence made the movement of smaller British formations dangerous. Washington never ceased complaining about his militia—about their undependability, their indiscipline, their cowardice under fire—but from the British viewpoint, rebel militia was one of the most troublesome and predictable elements in a confusing war. The militia nullified every British attempt to impose royal authority short of using massive armed force. The militia regularly made British light infantry, German Jager, and Tory raiders pay a price, whatever the cost to the militia itself, for their constant probing, foraging, and marauding. The militia never failed in a real emergency to provide reinforcements and even reluctant draftees for the State and Continental regular forces. From the British viewpoint, the militia was the virtually inexhaustible reservoir of rebel military manpower, and it was also the sand in the gears of the pacification machine.13
We have only one intensive case study of the American militia operating in a given locale, Adrian Leiby's insightful The American Revolutionary War in the Hackensack Valley.14 What is significant is that here we are dealing not with an area where the British penetrated only once or twice during the course of the Revolution. On the contrary, this area—Bergen County, across the Hudson from New York City—was under the guns of the British, and thereby contested, for virtually the entire course of the War. It was thus almost a classic laboratory case for examining the development of an American guerrilla unit. Under the direction of Major John M. Goetschius, the Dutch farmers built a guerrilla unit that, from hesitant beginnings, had by the end of the War matured into a more effective fighting force than the regular army. His correspondence with Washington makes plain that the Dutchman commanded a better understanding of the essentials of revolutionary guerrilla warfare than did his Commander-in-Chief.15
What relevance, if any, does the military history of the American Revolution have for an age when liberty seems threatened from within and without? In their study of history the radical Whigs had concluded that the internal threat of a standing, professional, volunteer army far outweighed its potential utility against a foreign threat. Today we know that the irregular, people's army functioned far more effectively than was formerly imagined. There are those, of course, who say that times have changed: that even the “lesson” of Vietnam, of what a guerrilla force can do (provided the larger power does not resort to genocide or nuclear weapons), is irrelevant to a confrontation between the superpowers. While other Communist leaders in the Russian Revolution often criticized the effectiveness of the peasant militia, Leon Trotsky appreciated how effective their fighting capacity was against the regular army. He understood that the Party must later smash their “individualism” and their virtually “anarchic” desire to hold their own “individual plots” of land: “Today, free, he for the first time feels himself to be someone, and he starts to think that he is the centre of the universe.”16
It has ever been my hobby-horse to see rising in America an empire of liberty, and a prospect of two or three hundred millions of freemen, without one noble or one king among them. You say it is impossible. If I should agree with you in this, I would still say, let us try the experiment, and preserve our equality as long as we can. A better system of education for the common people might preserve them long from such artificial inequalities as are prejudicial to society, by confounding the natural distinction of right and wrong, virtue and vice.
A major question for historians is: What changes occurred in American society as a result of the War and the drive for equality? These developments provide a framework for understanding the equalitarian forces that pushed for replacing the Articles of Confederation and ratifying the Constitution.
Recent assessments of the motivations supporting the Constitution go back to Charles Beard's famous economic interpretation. Without entering into a discussion of Beard's interpretation, some of his economic data may be incorporated into a valid social interpretation of the Constitution.
The American revolutionary leadership studied the past, in part, to build ideologies and world views for shaping the future. “Given the social and cultural structure of the United States during the 1780s, we can deduce that men differed radically over what constitutes the Good Society.”2
Lee Benson, together with other writers, “assume[s] that the characteristics that predisposed men to agrarianism tended also to predispose them to distrust the State.” And, “it follows, therefore, that the new nation should be a decentralized, loose confederation of the several independent states.” On the other hand, “within a liberal republic, the logical corollary of 'commercialism' was a system derived from the proposition that the State could function as a creative, powerful instrument for realizing the Good Society...[T]hey believed the State must be strong and centralized.”3
While Benson acknowledged that not “all agrarians were federalists” or “all Commercialists nationalists,” nonetheless, “a marked tendency existed for agrarians to be federalists and commercialists to be nationalists.” Caution is demanded in doing justice to the relationships between agrarianism or commercialism and distrust of the State, as well as between the decentralized and the centralized State.4 The critical factor, therefore, was that the perceived political crisis had caused some agrarians—who would otherwise have preferred small government, focused at the state level—to accept a nationalist solution. But that strange union of agrarianism and nationalism is difficult to sustain without the ultimate use of force to retain what are conceived of as the agrarian virtues.5
The most thorough recent study of the period during and after the Revolution, culminating in the adoption of the Constitution, is Gordon S. Wood's The Creation of the American Republic, 1776–1787. The first part, “Ideology of the Revolution,” discusses the Whig world view. Wood underlines the important concepts of Virtue and Equality in the Whig Republican paradigm. Thus, the Revolution, they believed, would be “ultimately sustained by a basic transformation of their social structure.” Obviously, a Revolution guided by that ideal could hardly be considered conservative. While there were “sporadic suggestions for leveling legislation,”...“Equality was...not directly conceived of by most Americans in 1776, including such a devout republican like Samuel Adams, as a social leveling.”6 Thus while the Americans recognized all sorts of natural distinctions in society, it was believed these would never become extreme:
It was widely believed that equality of opportunity would necessarily result in a rough equality of station, that as long as the social channels of ascent and descent were kept open, it would be impossible for any artificial aristocrats or overgrown rich men to maintain themselves for long. With social movement founded only on merit, no distinctions could have time to harden.7
However, Wood notes the paradox in the Americans' belief that the ideal of equality would banish envy.
In an earlier article Wood had discussed the rising social tensions along much the same lines as Berthoff and Murrin.8 “Politics, within the British imperial system, was highly personal and factionalized, involving bitter rivalry among small elite groups for the rewards of State authority, wealth, power, and prestige.”
On the other hand, American Whigs had come to feel that removing the imperial system would cure the ills and disorders within the society. If extreme, their perceptions were not without some foundation. The grievance which “particularly rankled” the Americans “was the abuse of royal authority in creating political and hence social distinctions,” and “the manipulation of official appointments.”9 Any effort to close off a possibility of advancement and greater equality would, and did, lead to confrontation.
Studies more sympathetic than Wood's to the Articles of Confederation are Elisha P. Douglass, Rebels and Democrats: The Struggle for Equal Political Rights and Majority Rule During the American Revolution, and Merrill Jensen, The American Revolution Within America,10 which covers more succinctly many of the points made by Wood. A useful interpretative survey of the issues and the literature culminating in the Constitution is Robert E. Shalhope, “Toward a Republican Synthesis: The Emergence of an Understanding of Republicanism in American Historiography.”11 One cannot overlook the militia as a political institution (whatever one's view of the effectiveness of these essentially defense-minded warriors) as described in David Curtis Skaggs's “Flaming Patriots and Inflaming Demagogues: The Role of the Maryland Militia in Revolutionary Society and Politics.”12
The fact that government was decentralized under the Articles did not mean that its role at the state level would necessarily be small.13 In most states the “new” men moved to implement a rather extensive program of state interventionism. This included extensive taxation and a monetary inflation which certainly must be regarded as egalitarian in its consequences.14
In limiting the powers of both the executive and the courts, the general thrust of the American Revolution had been toward “popular sovereignty,” placing major political power, with few, if any, restraints, in the hands of the legislatures. This opened the door for extensive government interventionism, at the local and state levels to be sure, but with few protections for the individual outside the majority.15
Something had happened after 1776 to convince many that the Republican experiment was not working as it should. The solution was to check the arbitrary powers of the populist, state legislatures, and the overly rapid rise of less than well educated “new” men, by raising the central focus of government to the national level. In a sense, it was a gamble to check egalitarianism, at least for a time, by institutionally moving toward the centralization that might hasten empire. Both empire and egalitarianism, of course, were the twin nemeses of republicanism; but there seemed no easy way to halt both.16
Autonomy—the individual's capacity to be psychologically, morally, and socially self-governing—excites controversy and the extremes of partisanship or vilification. No one, however, disputes the enormous popular currency of this notion under such varied synonyms as self-actualization, self-esteem, self-efficacy, independence, individualism, or even the colloquialisms, “doing your own thing” and “being at cause rather than at effect.”
As pointed out by psychologist Nathaniel Branden, autonomy or self-esteem covers such personality traits as self-awareness, self-acceptance, self-responsibility, and self-assertion. The complex and often contradictory range of psychological, ethical, social, and political meanings attached to autonomy has recently been explored in Abraham Maslow's The Farther Reaches of Human Nature (New York: Viking Press, 1971) and David L. Norton's Personal Destinies: A Philosophy of Ethical Individualism (Princeton: Princeton University Press, 1976).
The following summaries investigate how the controversial value of autonomy is related to human liberty in such areas as psychology, ethics, and politics. Do government welfare programs endanger autonomy? Psychologically, should the autonomous personality be judged a cultural ideal or a deviation? Do autonomous personalities impede or promote prosocial behavior, generosity, learning, and cognitive-emotional maturation? Politically, do autonomous men and women tend to create an authoritarian or a democratic and open society?
“Welfare vs. Liberty: Prisoners of Benevolence.” The Nation 226 (April 1, 1978): 370–372.
Do government service and charitable agencies assault individual autonomy and self-determination?
Millions of Americans are dependent on government social services and “institutions of caring” such as public schools, mental hospitals, public housing, welfare agencies for the poor, and nursing homes for the old. These citizens are vulnerable to the state's arbitrary authority, which, behind the mask of benevolence, infantilizes them and ignores their legal rights and personal dignity.
For example, patients of government nursing homes are often treated as children. They are denied the control of their money, their freedom to come and go, and their right to have visitors or privacy as they would determine. Exercising a stultifying parental role, officials care more for administrative convenience than for the self-esteem of the patient.
Although legally competent, subjects of service institutions lose many of their Bill of Rights guarantees because of such benevolent paternalism. Various authoritarian restrictions tend to eclipse their individuality and independence. Control over an individual's life includes humiliating deference to authority, denial of sex, and penalties for self-expression.
The same self-denying controls practiced in nursing homes and mental hospitals also demean the clients of public schools, public housing, and public welfare. Officials tend to assume the legal power of surrogate parents and dictate what is in the best interest of their clients. Often eligibility standards for social care depend on bureaucratic discretion and judgment of clients' morality.
We need to be more skeptical of government charity and service professionals who determine the lives of clients, many of whom are entrapped against their will in caring institutions. “Power is the natural antagonist of liberty, even if those who exercise power are filled with good intentions.”
“Psychology and the American Ideal.” Journal of Personality and Social Psychology 35 (1977): 767–782.
The predominant theme that describes the American cultural ethos is an extreme form of autonomy: “self-contained individualism.” The problem, it is argued, is that a person living according to the ideal of individual self-sufficiency will suffer isolation and alienation. The self-contained person may be viewed as a narcissist who neither desires nor requires others for his or her completion and life; the self-contained person is or hopes to be sufficient unto himself. With the aim to need or want no one, self-containment is the extreme expression of independence. It is the polar opposite of the concept of interdependence. Self-contained individuals, it is claimed, require strong externally imposed limits “to control their appetites.” It seems that the contemporary psychological ideal of autonomy entails fighting against all forms of cooperative group activity.
The political implications of such individualism may be sketched. Authoritarian systems of control and government would seem most likely if we realize the ideal of self-contained individualism. What other alternative is there for governing a group of “excessive individualists”? How can a democratic government survive if rugged individualists feel that collective interests and the recognition of vital interdependence are overly constraining?
Contemporary psychology appears to play an important role in perpetuating an individualistic, self-contained perspective and in downplaying the role of interdependent values. Some psychologists have argued that there is no need to oppose egoism (i.e., individualism) and human interdependence. But others set egoism against altruism to make it seem that altruistic values and culture are the enemies of the individual. J.H. Bryan in an article, “Why Children Help: A Review” [Journal of Social Issues 28 (1972): 87–104] goes so far as to lambast the excessive costs of altruism or helping behavior and to set prosocial acts against individual freedom:
A helpful person may well be intrusive (e.g., invade our privacy), moralistic (e.g., prevent us from “doing our thing”), or simply conforming to the status quo of proprieties...a helpful person with all his “good” intentions may well violate a variety of personal freedoms that we cherish. (pp. 101–102)
But egoism and altruism are opposite concepts only within the context of self-contained individualism. The ideal of an interdependent system is not the isolated individual who achieves completion and synthesis within himself; the ideal is rather to achieve one's personal function in harmony with others.
The contrast between individualism and interdependence is illustrated in the current personality theory of androgyny. Androgyny would resolve the polarities of masculinity and femininity in an ideal individual synthesis: each individual would integrate masculine and feminine qualities. This synthesis is an ideal only for a self-contained culture. A collectivist social system of high cooperation would prefer a sexually differentiated individual; thus androgyny would be unsuitable.
The individual is not necessarily the possessor of all of a culture's valued qualities; collective cooperation among persons lacking in some qualities does not have to thwart individual self-realization.
Although Kohlberg and Piaget have underlined the importance of autonomous and independent thinking as well as the ideal of a person transcending his collectivity and culture, such an independent view of moral development might harm social cohesion.
“Generosity.” American Philosophical Quarterly 12 (1975): 235–244.
The generous person, though frequently confused with the just, charitable, dutiful, or altruistic and selfless person, may really be the fully autonomous and superabundantly selfish man. The moral confusion over generosity arises from misidentifying its gift-giving with sacrifice and the ethics of duty.
Generosity involves an act of giving because of the value of the act itself rather than because of some other good it brings as a consequence (such as purchasing goods, repaying debts, fulfilling one's duties or station, or expiating guilt). One giving generously intends to do the recipient a real good, but not as a duty in the manner of paternal obligation rescuing a prodigal son. The generous person's intention is gratuitous; he has no “because” in giving beyond the benevolent gift. To answer “Why give someone anything?” the generous person doesn't invoke obligation but rather a “Luciferian freedom,” that freedom needing no reason for what we do.
As a free gift, generosity is distinct from justice, charity, and altruism. The virtue of justice does not monopolize concern for the welfare of others. Generosity also tends to others' welfare, but the generous agent is motivated by self-centeredness rather than pity or fairness to others.
The Good Samaritan, as the image of the charitable man who saves the unfortunate wayfarer, bestows a different sort of benefit than does the generous man. Charity involves rescuing someone from something bad; generous action intends to do someone a more positive good. Generosity, a characteristically Greek ethical notion, is a self-centered liberality in gift-giving which does not notice the beneficiary's need; Christian charity, in contrast, focuses on the other, on his soul with its private needs and pain. This other-centeredness is morally insignificant to the Greek ethics of happiness (eudaimonia), with its concern for self-actualization.
The generous man is not an altruist in his gift. Because he does not reckon his own interests or the interests of others in his gift-giving, the generous man cannot be said to subordinate his interests to another in altruistic or sacrificial fashion. Unlike the altruist, the generous man disdains to attend to moral rules or requirements.
As Aristotle, Descartes, Emerson, and especially Nietzsche portray him, the generous person not only practices liberal acts, but lives a life of liberality and demonstrates a noble superabundant autonomy and self-sufficiency. Disdaining niggardliness or the need for safety and affection, the generous man perceives himself as “one who has overmuch of the good” and “a squanderer with a thousand hands” (Nietzsche, Thus Spake Zarathustra). Generosity is beyond all need, either the benefactor's or the beneficiary's.
In respect to the virtue of generosity, those persons who are morally best are those for whom being good is natural and easy. Generosity does not require struggle with the self; it is a spontaneous overflowing of goodness. The generous person is not at the effect of his or another's need but is at the cause of his self-chosen benevolence.
“Motivational Maturity and Helping Behavior.” Journal of Youth and Adolescence 6 (1977): 375–395.
Do approaches to personality development that stress self-worth inhibit generosity and helping behavior? Some psychologists contend that the self-esteem emphasis in the writings of Carl Rogers, Ayn Rand, and Abraham Maslow would probably produce motivations which militate against helping, altruistic, or social concerns [see L. Berkowitz, “The Self, Selfishness, and Altruism.” In J. Macauley and L. Berkowitz eds., Altruism and Helping Behavior. New York: Academic Press, 1970]. However, recent psychological studies indicate that higher levels of self-worth and autonomy characterize the more helpful person.
These test studies, establishing a positive correlation between self-esteem motivation and prosocial behavior, were conducted with college age students. Two groups of subjects were selected on the basis of Aronoff's measure of Maslow's hierarchy of motivational needs [see A. Maslow, Toward a Psychology of Being. Princeton: Van Nostrand, 1968]. In ascending order, Maslow had postulated the following ranking and hierarchy of needs: basic physiological needs, safety needs, love and belonging, esteem needs, need for cognitive understanding, and later developmental needs of self-actualization. Thus, according to Maslow, self-esteem needs rank higher in motivational maturity than the need for safety. Persons of high self-esteem, in Maslow's scheme, would be more socially responsible and caring for the fortunes of their fellow beings than would “safety-dominant” persons. Accordingly, in the studies, group 1 consisted of students who were significantly above the mean in their responses reflecting safety needs; group 2 was composed of students whose test scores manifested significantly high esteem needs. The students in group 2 also exhibited high self-worth as determined by Rosenberg's (1965) self-esteem measure.
The key study using these contrasted sets of students staged a situation in which a confederate of the experimenters pretended to have lost a contact lens in the presence of one of the student test subjects from the two groups. The individual subjects were scored on the basis of both their time-delay in volunteering help to search for the lens and the duration of their help. As predicted, those persons of high self-esteem were most likely to offer assistance and to help for a longer time.
Such studies show how misplaced the suspicion is that individualistic personality traits make for unhelpful and antisocial behavior. These studies also confirm earlier studies which reported that one's characteristic needs also affect one's “gaming strategy.” Individuals' personalities may be primarily concerned with either belongingness, power, or achievement. People in these three categories behaved in predictably different ways in the famous prisoner's dilemma, which weighs competitive versus cooperative behavior and gaming strategy.
“Self-efficacy: Toward a Unifying Theory of Behavioral Change.” Psychological Review 84 (1977): 191–215.
Government subsidies and welfare are frequently defended in the belief that such state support is needed to provide services for citizens who cannot provide for themselves. If individuals felt more self-esteem or more capable of providing for their own needs, they would seek external help less urgently. “Self-efficacy” is the term used to describe one's awareness of his or her capacity to “cope effectively.” The various factors influencing a person's beliefs concerning self-efficacy require detailed analysis. Although this discussion narrowly focuses on how different psychological treatment procedures affect self-efficacy and corresponding behavior changes, broader applications are possible.
Beliefs and expectations of one's own personal efficacy determine whether an individual will initiate coping behavior, how much effort he will expend, and how long he will cope in the face of trials and negative experiences. Beliefs about personal efficacy are distinguishable from beliefs about “response-outcome contingencies.” That is, to know that a particular personal response will produce a desired outcome does not necessarily induce a person to perform the appropriate behavior if that person harbors serious doubts about his ability to accomplish it effectively. Independent and autonomous behavior is more likely to occur if one is confident in his ability to carry it out and thereby achieve the desired outcome.
Four sources of information influence the level of one's perceived self-efficacy: (1) personal performance accomplishments, (2) vicarious experiences of others' accomplishments, (3) verbal persuasion, and (4) emotional or physiological arousal. Of these sources, the most important are personal performance accomplishments. In effect, personal successes enhance self-efficacy expectations; personal failures diminish them. Failures occurring during initial attempts to perform tend to hurt our self-image more grievously than the same failures occurring after a number of successes.
We learn self-efficacy not only from our own experiences but also from vicarious experiences, that is, from our observations of how others cope successfully with similar situations. But since vicarious experiences derive from social comparisons instead of personal experience, they are less powerful in establishing belief in our own mastery. Similarly, expectations of self-efficacy based solely on verbal persuasion rather than personal experience are less likely to produce significant behavioral change.
For persons possessed of a limited sense of personal efficacy, successful experiences will not necessarily create an intensified feeling of mastery. When expectations and beliefs have served as self-protective devices over a long period of time, we cannot readily modify them.
Such findings on self-efficacy have important implications for efforts to persuade others about the value of liberty. These efforts stand a better chance of success if the individual listener recognizes that his personal efficacy is adequate to meet his needs without feeling dependency toward others.
“A Taxonomy of Democratic Development.” Human Development 19 (1976): 197–210.
How does a free and democratic society preserve itself and people itself with individuals who respect freedom and human rights? To grapple with this question we must identify the psychological stages through which individuals develop the intellectual and moral ideas of freedom and personal autonomy that equip them to function harmoniously in a free society.
From this developmental perspective it can prove useful to advance a classification or taxonomy of democratic character formation. There are suggestive parallels between the individual's social, moral, cognitive development and his democratic socialization or his internalization of those values of freedom and rights that are vital to democracy. Among the attitudes and liberties that democratic citizens must learn to internalize are appreciation of freedom of expression, concern for justice and human rights, avoidance of exploiting others, and trust in the efficacy of persons to make decisions regarding their own welfare and that of society.
A four-level classification of how the individual develops his concepts of free and democratic behavior includes: (1) isolate—the state of a person insufficiently socialized or reflective about the norms of a democratic society (e.g., human rights are not analyzed as abstract universal principles); (2) conformist—the stage where one uncritically accepts and approves the existing system (e.g., one intellectually conforms to and respects the concerns and rights of one's peer group without extending his respect to others); (3) assertive dogmatist—the stage of simplistic, often authoritarian, support of the system in terms of black and white alternatives (e.g., one supports oversimplified solutions to social arrangements such as imposing government constraints on human behavior); and (4) rational humanist—the most mature stage characterizing the autonomous, critical, and independent individual—one concerned with the logical and universal application of rights and freedom as well as sensitive to the dilemmas encountered in moving toward a freer society (e.g., one's primary concern is for universal protection of human life and human potential; one believes that no individual or group should dominate or be dominated by another).
The highest and most mature stage of this taxonomy—the rational humanist—resembles Kohlberg's sixth and highest stage in the development of moral judgments concerning justice: intelligent personal autonomy which decrees that unjust laws may be broken because morality does not consist of special rules and taboos but rather of abstract principles of justice and respect for every individual. Following Piaget, we may hypothesize that a necessary precondition to constructing and abstracting political principles on the mature rational humanist level is the maturation of a person's formal cognitive operations. The actual adoption of democratic principles also depends upon a person's social learning history, for example, his exposure to democratic models and experience in democratic roles.
This four-stage taxonomy is useful for understanding the framework from which individuals advocate the value of liberty. The highest moral stage where one exercises independent conscience is valuable to society because society relies upon the individual's capacity to act rationally upon independent beliefs. But a potential conflict exists when the autonomous and sovereign individual challenges the sovereignty of the society.
Aside from this revolutionary implication in the concept of autonomy, the rational humanist citizen would elevate and foster democratic values by casting an informed vote based on critical investigation of issues and individual responsibility. It thus seems profitable to identify the psychological antecedents of the concept of free democracy on both cognitive and social developmental levels. This has important implications for creating educational curricula which can foster the adoption of free, democratic values.
“Why We Consent to Oppression.” Reason 10 (September 1977): 28–33.
Does the authoritarian state arise from the cradle of the authoritarian family that suppresses self-determination?
Today the question posed by the sixteenth-century humanist Étienne de la Boétie in Voluntary Servitude is still crucial: “Why do people voluntarily consent to their enslavement to political tyranny?” Why do people fear their own independence and reject a “live and let live” philosophy that asserts individual freedom? America's drift toward political totalitarianism appears to be rooted in familial totalitarianism. The family serves as the nursery that teaches voluntary servitude; it socializes children in the psychological and ethical will to surrender autonomy and individualism for dependence and selflessness. Children who, out of fear of parental authority, renounce their self-ownership are molded into citizens who consent to arbitrary political authority.
From birth children naturally express self-ownership and self-determination. Self-ownership entails that each person assumes responsibilities for the major aspects of his self: free will, reason as a guide to decision making, the demand for personal freedom, and the pursuit of self-interest. Gradually, however, most children disown their self and helplessly deny their moral autonomy. Why do children become the most oppressed class of persons? Parental authority. Parents threaten: “Don't be selfish!” and “Obey, or suffer!” and thereby stifle the child's wish to express his self. To survive peacefully, to secure food and acceptance, and to escape parental punishment, the child conforms and obeys.
This original compromise of his self-ownership enables the child to survive in the family. Soon he extends this denial of his own self-interest into the context of the school, church, and state. Having subverted his self-interest through choice, he finds it more and more difficult, as the years pass, to revoke his habit of self-oppression. Next, the child lies not only to others, but more importantly, to himself about his own desires and interests. Thus he reaches adulthood a stranger to his inner self.
Voluntary servitude becomes the child's habit because other options seem too threatening. Reason functions not as a means to pursue personal freedom and goals, but as a veil for selflessness. The escape from the shackles of childhood dependency only means new forms of self-oppression. Now the “grown-up” conformist is impotent to rebel against school, church, or state, all of which continue the Edition: current; Page: [47] original parental autocracy and demand a similar obedience and stifling of the self.
In the psychology of self-determination, persons achieve liberation by relearning how to value their own selves. This self-liberation follows two processes: building self-esteem and recovering self-love. Self-esteem is a conditional attribute; it is the self-efficacy we have to earn by living up to our own judgmental standards. Self-love, on the other hand, is an unconditional placing of a high value on our personal selves as living beings. By extending this love to other humans we recognize the basis for granting rights. We thereby respect the sacredness of others' lives and selves.
To achieve a good and free society, we need to leaven self-interest with this extended love for mankind and a love of human liberty. On a family scale, we can anticipate in miniature this ideal society of love and liberty. The first step is fostering our own children's self-determination and liberating our family relationships from fear and force.
This question of self-ownership is vital for the stability of a free society. A free, antiauthoritarian society would inevitably perish unless it were peopled by enough autonomous individuals who value risky freedom over the apparent comforts of tyranny.
“Bridging Science and Values: A Unifying View of Mind and Brain.” American Psychologist 33 (1977): 237–245.
One modern area where the controversy between free will and determinism is being fought is the mind-brain question and the concept of consciousness. Reacting against the older deterministic position, science may be able to clarify value questions by advancing a non-mechanistic theory of brain functioning. This theory would negate many mechanistic, deterministic, and reductionistic features of the earlier materialist-behaviorist doctrine and allow for a conscious causality in processing value statements.
In this nondeterministic theory, any given brain will respond differently to the same input and will tend to process the same information into very different behavioral paths depending on the brain's specific system of value-priorities. Our current concept of the mind-brain relationship thus attributes an active and causal role in brain processing to the phenomena of consciousness.
Some of the earlier but defective theories of consciousness viewed subjective experiences variously as epiphenomena, as passive parallels of brain activity, as identical to neural events, or, finally, as an artifact of our semantic system. A more adequate interpretation of consciousness is the current one which focuses on conscious thoughts as emergent properties of brain activity which do not require identical correspondence between subjective states and the neural events. Such thoughts are active, causal determinants essential to control normal brain functions. This theory asserts that the subjective properties of brain processing conform to general natural laws: that holistic or system properties can exercise causality.
This complex issue requires some qualification. We possess no empirical proof of the stated theory; however, the traditional behaviorist theory also lacks similar proof. The question comes down to a balance of credibility. During the past decade, the modified causal concept of conscious mind has become more credible than the behaviorist view.
Using the present theory, we may approach questions of value scientifically without reducing man to a neurochemical automaton. Man in this scientific image regains much of his freedom and dignity, both of which were challenged by behaviorists. Current theory in mind-brain relations allows a measure of autonomy: a person can determine his own actions based on his personal judgment, cognitive purposes, or subjective wants. Thus, freedom of choice is introduced into the causality of decision making for man. This clearly enthrones the human brain at the apex of other processing systems in its capacity to choose and select events. This neurophysiological theory appears fully consistent with rationalist views that honor man's cognitive capabilities.
Important consequences flow from whichever model of the human mind we select in reference to the mind-brain controversy. The issues of free will and the nature of man are intimately connected with such technical and scientific research.
“The New 'Brain' Concept of Learning.” Phi Delta Kappan 59 (February 1978): 393–396.
Education that aims at preparing autonomous persons to live in a free society needs to heed recent discoveries about the brain's natural methods of functioning. Current educational practices, however, are largely antagonistic to the brain's nature.
Just as a finger bends forward but not backward, the brain has certain ways in which it works and other ways in which it will not work. Among the brain's characteristics highlighted in the recent studies are:
These findings have important implications for educational practice and for socialization in autonomy and freedom.
Educational methods grew up and became entrenched long before this information was available. In large part pedagogy goes against the grain of the brain's nature. For instance, the brain learns by successfully executing an action. What is required is that materials and teachers be available to provide guidance in learning actions. What is not required is a system consisting mainly of talking at, testing, failing, and moving along. Because the brain is a pattern-detecting device of incredible capability and subtlety, it needs vast amounts of input to provide the raw material from which it can discover the relevant patterns. The typical school offers little in the way of input.
Finally, brain research clarifies how to foster independent individuals. People cannot learn the skills or attitudes they need to survive and prosper in a free society exclusively through a conceptual approach. They must have a base of relevant experiences from which they can derive a conceptual understanding. They must have the experience of living and functioning in a free society.
“Evolution and Collective Intolerance.” The Journal of Politics 39 (1977): 667–684.
Applying evolutionary analysis to politics creates disturbing thoughts. If the very process of natural selection reinforces human aggression and competition, what hope does mankind have for peace?
Collective intolerance endangers liberty (meaning Mill's freedom of dissent). This becomes clear by exploring the connection between man's biological nature and his attitudes toward diverse, threatening, and novel ideas. “Collective intolerance” here means the tendency in members of a group not only to insist upon behavioral and intellectual conformity but also to repress unconventional behavior or expression.
Will evolution solve the problem of collective intolerance? To answer this, we must search back into our evolutionary roots, examine recent efforts by biologists to discover the attitudes of the lower primate groups to challenges from contesting groups, as well as investigate modern scientific conceptions of natural selection. Conclusions may be drawn from these primate examples and from what is known about man's behavior toward competing groups when he was still in the herding phase. In effect, man possesses a genetic disposition to identify with a group in order to secure his own survival. Consequently, group survival—and the corresponding necessity to homogenize the group through shared ideas, customs, religious beliefs—dictates intolerance for dissent within the group and opposition to competing societies with different belief systems. Thus, the proliferating warfare of the preceding four centuries is directly linked to the increase in communication between these divergent societies.
What, then, is the likelihood of promoting toleration and liberty within society if man is biologically predisposed to be intolerant? The primary solution is rationality. Once men in competing societies recognize that nuclear war may annihilate all groups, they might discern that increased cooperation is the only means of securing survival. And finally, constitutional guarantees within states ought to be institutionalized in order to protect the liberty to dissent.
To what legal and civil rights are individuals entitled? How free and immune are citizens in the pursuit of their independent choices and actions, especially when such choices and actions are unpopular?
Our next group of summaries explores the often controversial claims of individuals to live freely in civil society, protected in their persons and nonviolent activities.
The historical panorama of America's fitful protection of various civil liberties opens this sequence. We then survey narrower issues, including the right to die, the parents' right to choose education, the right to bear arms, and the debated right to read or view pornography. The concluding topic comes full circle and raises sobering doubts about how consistently the legal system extends civil liberties and impartial justice during emotionally charged times.
“American Liberty: A Post-Bicentennial Look at Our Unfinished Agenda.” The Civil Liberties Review 4 (May–June 1977): 38–51.
America's experiment with liberty has been a love-hate drama concerning civil liberties.
This drama divides itself into three historic acts: colonial, rural America from its beginnings to Jacksonian democracy; nineteenth-century industrial America ending in the Great Depression of the 1930s; and welfare-warfare America from the New Deal down to today. The leitmotif of this entire drama has been the tension between the rhetoric of freedom or natural rights and their repression.
Freedom emerged in early, republican America because no single group could capture the federal government and impose conformity. Pluralism and the mobility of a spacious America allowed freedom despite narrow and intolerant local communities which curbed individual dissent.
The early federal government repressed popular protest for rights and civil disobedience (e.g., the Whiskey and Fries rebellions). Freedom of religion, however, was nurtured by disestablishing state churches, and free speech progressed. Still, local communities were illiberal centers ruled by authoritarian elites. And despite the Declaration's and Constitution's words, freedom overlooked aliens, blacks, and women.
The second act of the drama of liberty, staged during the century from Jackson to Roosevelt, marks the low point of American civil liberties. Four areas of freedom dominated this turbulent epoch: the treatment of racial minorities; the treatment of workers in an emerging industrial society; the treatment of immigrants by native Anglo-Saxons; and the treatment and legal status of women.
Black slavery ended after the Civil War, but the new-won “freedom” was accompanied by low socioeconomic status and segregation. Racism, government-imposed reservations, and “blaming the victim” poisoned Indian relations.
Meanwhile workers, white and black, struggled to form unions against business elites which controlled government, regulatory commissions, the courts, and police. The federal government also bolstered xenophobia against aliens by branding some as “radicals” in order to deport them. Finally, women only gradually won freedom from legal disabilities involving income, property, divorce, and the vote.
Throughout the second period, government suppressed freedom of dissent in time of war (e.g., Lincoln's and Wilson's administrations). Censorship and sexual suppression, enforced by government edicts, operated on both local and national levels.
Franklin D. Roosevelt's New Deal ushered in the last, 45-year-long act of America's conflict between freedom and its repression. In this period, government has not been the consistent friend of freedom because of self-interest and pragmatism. Modern America also has been burdened by the repressive hand of bureaucracy, expanding government, and the growth of laws together with their discretionary enforcement. Progress, however, was evident in civil rights for workers and blacks, in the waning of censorship, and in legal safeguards.
Repressive trends also continue. Racial progress has been plagued by discrimination. The government erected detention camps for Japanese-Americans and restricted free speech through loyalty programs and the 1939 Hatch Act. Federal agencies such as the FBI and CIA invaded citizens' privacy through wiretaps and computerized dossiers, while simultaneously protecting their own political secrets.
“Governments, courts, other power centers, and individuals have always been ready to balance freedom against competing social values and circumstances.”
“The Separation of School and State: Pierce Reconsidered.” Harvard Educational Review 46 (February 1976); reprinted in Studies in Education No. 3. Menlo Park, California: Institute for Humane Studies, Inc., 1977.
A first amendment interpretation of the Supreme Court's 1925 decision in Pierce v. Society of Sisters suggests that the present state system of compulsory attendance and financing of public schools fails to satisfy the principle of government neutrality toward family choice in education and values.
The fifty-year-old Pierce decision declared unconstitutional a 1922 Oregon statute which required that each child of school age attend a public school. The basis of the Court's ruling was ambiguous. Did the Court intend to affirm due process and the “property” right of nonpublic schools to exist, or did it guarantee a distinct parental right to direct their children's education apart from the majoritarian state system of schooling? The Court's opinion mentioned the private schools' request for due process “protection against...destruction of their business and property”; simultaneously, it raised a potential First Amendment consideration in holding:
“The child is not the mere creature of the state.”
Despite the Pierce decision, Americans have invaded an individual's civil liberty by using the public school system to democratically impose values and beliefs on dissenters who cannot afford private education. Issues of sexual morality, secularism, authoritarianism, and race have become politicized, and values are given state sanction and force when imbedded in public school curricula. It is impossible to eliminate value inculcation in education or expect value-neutral education in secular public schools. The only means to achieve such neutrality would be to apply the guarantees of the First Amendment (separation of state and religion or values) to a reading of the Pierce decision and to have the state allow families the maximum practicable choice in selecting their children's education.
The Pierce ruling, from the First Amendment perspective, preserves the right to reject democratically imposed educational values in child rearing. It is reasonable to apply the First Amendment to Pierce. An implication of this amendment is the protected right of an individual's conscience and convictions to be free of state coercion. Parents should not be artificially constrained, through taxation, to surrender their children to government school systems espousing beliefs contrary to their own.
Government benefits of schooling ought not to be purchased by sacrificing an individual's First Amendment rights. Tax-financed systems of government education which stipulate that parents may take advantage of “free” education only if they surrender their Pierce guarantees of freedom of conscience and values are not lawful. Other less restrictive systems which respect the right of free choice in education are both practical and more in harmony with the spirit of the First Amendment interpretation of Pierce. The “equal protection clause” of the Fourteenth Amendment is another constitutional barrier. It would remedy the plight of poorer citizens who must now reluctantly send their children to public schools because, after paying taxes, they cannot afford private schooling.
The present form of compulsory schooling, backed by the state's taxing and police powers, has deeply disturbing and often unconstitutional effects.
“Why a Civil Libertarian Opposes Gun Control.” The Civil Liberties Review 3 (1976): 24–32.
Following the political assassinations of the 1960s, gun control moved to the forefront of the liberal legislative agenda. However, it may be argued that those of liberal or civil libertarian convictions should oppose gun control. Gun control would lead to greater governmental power and more frequent invasions of privacy by law enforcement agencies. It would court these intrusions without providing greater security against violent crime. Nor would it be particularly advantageous for minorities and women.
The immediate consequence of strict gun control legislation would be to give the military and police a monopoly on arms and the power to determine which civilians may possess them. This would harm the interests of political and racial minorities, as well as women, for two reasons. First, such groups are subject to unusually high rates of violence in spite of the law enforcement efforts of the police. Although studies such as the Eisenhower Commission Firearms Task Force Report have claimed that armed civilian self-defense is ineffective against criminals, contrary evidence exists for believing that arming women and shopkeepers, for example, can dramatically reduce the incidence of rape and armed robbery. Second, the military and police sometimes willfully fail to provide protection to unpopular groups against politically or racially motivated violence. The salient illustration here is the behavior of southern state and local law enforcement officials during the height of the civil rights movement. Had blacks and civil rights workers not been armed, there might have been far greater bloodshed. In fact, it seems that it was the intended victims' ability to defend themselves against the Ku Klux Klan and others that oftentimes provoked the police into doing their job.
Advocates of gun control assume as self-evident that restrictions on, or prohibition of, guns (especially hand guns) will reduce violent crime. There appears to be no evidence to support this belief. On the contrary, a 1975 study done at the University of Wisconsin concluded that gun control laws have no individual or collective effect in reducing the rate of violent crime. But in addition to being ineffective against crime, effective enforcement of gun control laws would require giving police far more sweeping powers to search and otherwise invade the privacy of the citizenry. Worse, this would doubtless result in the arrest and imprisonment of many otherwise law-abiding people.
“Restoring the Balance: the Second Amendment Revisited.” Fordham Urban Law Journal 5 (1976–1977): 30–52.
Current efforts to limit possession of firearms to the organized militia, and the theories advanced to support such a constraint, do not stand the test of constitutional theory. We can establish this by reviewing and explaining the background of the Second Amendment of the U.S. Constitution from its legislative history as well as from the common law and colonial development of the right to bear arms.
The Second Amendment reads as follows: “A well-regulated Militia being necessary to the security of a free State, the right of the people to keep and bear arms shall not be infringed.” This amendment guarantees the twin goals of individual and collective defense from violence and aggression. The intent of the framers of the Second Amendment was never to deprive private citizens of defensive arms, which alone might allow them to rebel against a tyrannous government.
State disarmament of citizens frequently served to enable one social or economic class to suppress another, as witness Charles II's disarming of Protestant subjects in England. The common law tradition, as Blackstone's Commentaries on the Laws of England articulates it, favored the citizen's right to possess and carry arms for both collective defense and individual self-defense. The Founding Fathers had learned a painful lesson in how entrenched states may assault the liberty of disarmed citizens during the Revolutionary War. The British Governor of Massachusetts Bay Colony, General Gage, sought to hamstring armed protest and the formation of the rebels' citizen militia by his attempts to disarm the colonists and confiscate their magazines of arms. Chief Justice Earl Warren has noted how much the Revolutionary War was a protest against government standing armies and was largely fought by a civilian army, the militia.
The legislative history of the Second Amendment reinforces how anxious the constitutional framers were to preserve a civilian, “unorganized militia” in contrast to the federally controlled “organized militia.” The decentralized people's militia served as a check against government, an effort to prevent any usurping federal military power independent of and superior to the civil power and rights of the people. Furthermore, The Federalist No. 28 argued that an armed people would be a deterrent should either federal or state government invade private rights. Private individuals were entitled to bear arms even apart from membership in the militia.
In this light, the Supreme Court infringed on Second Amendment rights in United States v. Miller (1939). It ruled that citizens were not constitutionally guaranteed the right to possess or transport a sawed-off shotgun or other arms prohibited by the National Firearms Act of 1934. The Court failed to discern that the right to bear arms is a civil right, a private individual right of citizens, and not primarily of soldiers.
Any type of gun control legislation appears to violate the individual's rights under the Second as well as the Ninth Amendment, which allows the people to retain all rights not explicitly enumerated in the Constitution.
“Compulsory Lifesaving Treatment for the Competent Adult.” Fordham Law Review 44 (October 1975): 1–36.
Can a competent but unwilling adult be required to undergo lifesaving medical treatment by court or other legal rulings? In the face of claims to autonomy, bodily self-determination, privacy, or free religious exercise, does the law recognize a patient's right to forego medical intervention?
This medical, legal, and ethical problem is complex. Various court decisions have judged this issue differently. In some cases courts, deferring to rights implicit in the American concept of personal liberty, have championed the patient's choice. In other cases, courts have ruled that various governmental and private interests are sufficiently compelling to overbalance the patient's choice.
The relevant and fundamental patient's rights, all concomitants of personal liberty, include the right to determine what shall be done with one's body, the right to acquiesce in imminent and inevitable death, and the right of free exercise of religion. A patient's autonomy and choice have been subordinated on the basis of state interests in preventing suicide, in protecting incompetents, in protecting the medical profession, in protecting minor children, and in protecting public health.
Without dealing with the moral dilemma of whether the patient's choice to forego treatment is ethically defensible, one can discover what the law is and elucidate its trends. Several conclusions are evident but difficult to reconcile.
“Is Pornography Good for You?” Southwestern Journal of Philosophy 7 (1976): 95–118.
Can censorship harm the individual by infringing on autonomy? The thesis proposed is that there should be no statutory restriction on pornographic materials. Pornography may be good for you, but censorship never is.
Censorship may be defined as “any action which seeks to control or exclude from consciousness those ideas and/or feelings considered to be intolerable to the censor, or which the censor judges intolerable for the censee.” Censorship may be conscious or unconscious, its controls administered autonomously (by the person himself) or heteronomously (by others). Only heteronomously imposed censorship can be a matter of concern in formulating public policy.
Leading legal opinions and current public debates confuse obscenity and pornography. Pornography may or may not be obscene; what is essential to pornography is that it be exclusively or primarily sexual in content and effect. Obscenity, by contrast, may or may not be pornographic; what is essential is that it be filthy, grotesque, repulsive to ideals or principles, or to generally accepted notions of what is appropriate. Pornography, obscene or not, ought not be subject to censorship as a matter of public policy.
A variety of considerations support the view that censorship is not 'good for you.' The present law is ambiguous; sometimes it is unenforceable, or sometimes enforced inconsistently and selectively. The enforcement power itself is liable to abuse and corruption.
The distinction between illegality and immorality can support the view that the immorality of any conduct is not an adequate reason to have legal or criminal sanctions against it. Censorship threatens and harms individual rights and minority interests. Further, the belief that pornography is socially harmful is not well-founded. Modern society need not uncritically accept dogmas of the past.
Moreover, censorship is at least an impediment to morality if it is not itself immoral: it infringes on free choice and autonomy, the preconditions for morality.
In arguing for the positive value of pornography, one can adduce its potential to be cathartic, instructive, and informative. Moreover, it can be an art form and a way of knowing. Its explicitness can reveal “the tragic, demonic element in human sexuality.” Although a preoccupation with pornography—the censor's as well as the reader's—can indicate that sexuality is not well-integrated into the total personality, that evil does not belong to pornography per se.
“Cold War Justice: The Supreme Court and the Rosenbergs.” American Historical Review 82 (1977): 805–842.
In the hot summer of 1953, Julius and Ethel Rosenberg were executed for conspiring to steal American atomic bomb secrets and to commit espionage for the Soviet Union. Regardless of their guilt or innocence, whether they were “archtraitors” or “martyred saints,” did they receive the full measure of American justice? How did the American legal institutions, especially the Supreme Court, respond to “the most politically sensitive litigation of the Cold War era”?
Felix Frankfurter observed, in a 1956 letter to Justice John M. Harlan: “The merits aside, the manner in which the Court disposed of that [the Rosenberg case], is one of the least edifying episodes of its modern history.” The evidence for and against the Rosenbergs may be variously interpreted, but a key concern should be to analyze how the Court dealt with the case, and how the events of the Cold War and “McCarthyism” might have influenced the Court's decisions. Seven times the case was brought before the Supreme Court, and seven times it failed to get a thorough hearing.
The Rosenberg case raises the issue: to what extent might Cold War partisanship have affected the case's outcome or strained due process and civil liberties? The intertwining of domestic and international events around the case and the actual execution of the Rosenbergs make for somber and fascinating human drama and legal questions.
Many of the questions raised about the Rosenberg case are based on the new information coming from the papers of Circuit Judge Jerome Frank, Justices Frankfurter and Burton, and the material the FBI released under the Freedom of Information Act. One might ironically conjecture that had the Rosenbergs received a stay of execution, the Court of Earl Warren—the court famed for its Brown decision and civil liberties cases—might have overturned the death sentences. By the time of the Warren Court, the Cold War had toned down somewhat, and resolutions censuring McCarthy had begun circulating in the Senate. A matter deserving further exploration is that the proponent of civil liberties, William O. Douglas, seemed hard-shelled about the case, except on one occasion when his bluff was called.
What is clear is that to the disinterested observer of the 1970s, the Rosenberg case was not a cut-and-dried vindication of American equal justice.
In scope and subject matter political economy conceives of its discipline far more ambitiously than does the narrow and fragmented field of modern economics. Conceived of as a broad science of human action, political economy comprises the narrower economic issues, but it also investigates and integrates the ethical, social, and political dimensions of economic activity. It is fitting, therefore, that this group of summaries opens with three reflections on the founder of political economy, Adam Smith.
The coincidence of the recent Bicentennial commemorating both the Declaration of Independence and the publication of Smith's Wealth of Nations suggestively links liberty and its defense in political economy. Smith's concern was to demonstrate how a system of natural liberty was harmonious with justice, moral order, and social harmony. His Wealth of Nations parallels a Newtonian physics of human liberty and unveils how free and voluntary human action as well as self-interest might create a spontaneous economic order.
This theme keynotes the following seven summaries. These summaries, in a vital sense, are the progeny of Smith's concern for finding order in the natural workings of the market. These summaries report how the laws of economic freedom are displayed in the efficient allocation mechanisms of the market, international trade, competitive supply and demand, information, banking, and income distribution.
“The Wealth of Nations.” Economic Inquiry 15 (July 1977): 309–325.
A bicentennial reevaluation of Adam Smith's Wealth of Nations (1776) impresses us with this classic's keen analysis and the broad range of economic questions it so admirably discusses. Smith's insight into economics unveiled the importance of the market economy's pricing mechanism as a means of coordinating a complex society which benevolence alone could not coordinate. Reliance on self-interest creates, through the “invisible hand,” an intricate division of labor. In turn, this division achieves “the cooperation and assistance of great multitudes.”
Through self-interest, this remarkable self-regulating market produces the laws of supply and demand, competition, abundance, and prosperity. Indicting the mercantilism of his own age, Smith exposed governmental efforts to improve and regulate the economy as generally perverse:
Every man, as long as he does not violate the laws of justice [should be] perfectly free to pursue his own interest his own way, and to bring both his industry and capital into competition with those of any other man.... The sovereign is completely discharged from a duty, in the attempting to perform which he must always be exposed to innumerable delusions, and for the proper performance of which no human wisdom or knowledge could ever be sufficient; the duty of superintending the industry of private people, and of directing it towards the employments most suitable to the interest of society.
(Wealth of Nations, Modern Library Edition: 1937, p. 651.)
Government tampering with the complexities of the market fails because it lacks both the knowledge and motivation to do a good job in regulating and coordinating the economic system. In addition, governments display a corrupt propensity to be influenced by those whose self-interest stands to gain from advantageous regulation. Smith would limit government to only three duties: to protect society from domestic and foreign aggressors; to establish a legal system of justice defining everyone's rights; and to provide a minimal number of public works and public institutions (e.g., roads, bridges, and canals). Smith might have challenged the need for government construction of such public works in the light of the modern capital market. Yet even within his presuppositions, he argued that such public works should be financed by payments from consumers rather than by subsidies or grants from public revenue.
Smith's view of America and of the contemporaneous American Revolution runs as a minor theme through his Wealth of Nations, accompanying the major theme of the self-regulating pricing system. Smith was both a liberal and a clearsighted realist when he discerned the probable success of the American colonies in breaking away from Great Britain. He viewed the motivation behind the American leaders not so much as a thirst for liberty or democracy, as a quest for position and status. Accordingly, he devised a conciliatory plan that would appeal to the revolutionaries' political ambition: he offered them representation in the British Parliament in proportion to colonial contributions to the revenues of the British Empire. Eventually the Americans could expect two results from their economic power: that the capital of the British Empire would cross the ocean to America, and that an American would be elected Prime Minister. Had his sanguine plan been adopted, America would now rule England and Smith would be celebrated as an American founding father.
“An Adam Smith Renaissance anno 1976? The Bicentenary Output—A Reappraisal of His Scholarship.” Journal of Economic Literature 16 (March 1978): 56–83.
What accounts for the continuing vitality and relevance of Adam Smith's works and ideas? The bicentennial of Smith's Wealth of Nations (1776) is an opportune time for taking stock of the recent vast literature on Smith through a bibliographic essay citing some 175 items of Smithian scholarship.
The new era of Smithian scholarship displays several characteristic demands: (1) We need to interpret Smith's work as an integrated whole, doing justice to its ethics, economics, history, politics, and methodology. Smith, never a narrow economist, can best be appreciated as a far-ranging philosopher in the eighteenth-century sense. The unified approach to Smith's writings sees no fundamental contradiction between his Wealth of Nations and his Theory of Moral Sentiments. (2) We need to study Smith's social and historical theory as a background for his economics. Both the Theory of Moral Sentiments and the Lectures are required reading to properly interpret The Wealth of Nations. (3) We need to appreciate the logical consistency and realism of Smith's economic theory (such as the circular flow and dynamics of the market). (4) Finally, we need to study again Smith's view of the role of the state and other institutions.
A key area for reassessment is political economy, which Smith described as the “natural system of perfect liberty and justice” or the “liberal plan of equality, liberty, and justice.” Though not a radical advocate of laissez-faire, Smith urged a restricted scope for state functions to achieve his goals of justice and liberty. The state should so restrain itself as to enable the people “to provide such a revenue or subsistence for themselves.” From his free market and antimercantilist perspective, Smith appreciated the mechanisms of incentives and disincentives to promote the efficiency of all institutions. The state's economic mismanagement, inefficiency, and injustice flow from its inherent lack of such built-in mechanisms.
Linked with such motivations as incentives and disincentives is Smith's concept of self-interest. Unlike Mandeville, he judged self-interest as ethically positive and the engine of economic and social progress. In his own metaphor, self-interest prodded like an “invisible hand” to create a prosperous and amicable society since fellow-feeling and benevolence were weak motives for human action beyond dealings with friends and family.
Probably the best memorial to Adam Smith is the number, range, and variety of scholarly topics that Smith's genius has inspired during the bicentennial. Most noteworthy is the ambitious new series of Glasgow editions of Adam Smith's Works and Correspondence. A brilliant part of the Glasgow series is the new two-volume edition (1976) of The Wealth of Nations by R.H. Campbell, Andrew S. Skinner, and W.B. Todd. It now replaces Edwin Cannan's 1904 edition as the standard English version.
The overall conclusion of reappraising Adam Smith, the man and author, is that he “was an educated and cultured man, creative and original as a thinker, and unique as an architect of thoughts. The indestructible vitality of his natural system of liberty and justice (i.e., his political economy), rests on his realistic observations and cool assessments of man's nature—the individual's self-interested economic and political activity in society.”
“The Just Economy: The Moral Basis of The Wealth of Nations.” Review of Social Economy 34 (1976): 295–315.
Adam Smith's The Wealth of Nations is intimately concerned with justice and injustice, with the conflict between private and public interests, and with the antinomy of liberty vs. coercion. Smith's central concern, this problem of a just economy, has been neglected. Political economy for Smith was a subdivision of jurisprudence because he believed the proper administration of justice was a prerequisite for a functioning economy and the accumulation of wealth.
Smith followed the Greek tradition of moral philosophy and was not a Hobbesian. He judged that moral norms make community possible; therefore all human societies are essentially moral communities and are committed to notions of right and wrong. Justice is uniquely important because its norms undergird the social order.
Accenting the central role of justice in his Wealth of Nations, Smith at the end of his Theory of Moral Sentiments interpreted positive law as an “imperfect attempt towards a system of natural jurisprudence, or towards an enumeration of the particular rules of justice.” (This resembles Friedrich Hayek's view of law in Law, Legislation, and Liberty.)
According to Smith, political economy aims at a just economy in a just society. Within this moral society, the well-being of all would advance in the fairest, even though imperfect, manner. Not opulence, but economic advancement for the masses in a just society was Smith's goal. No conflict exists between this economic advancement, motivated by self-love in The Wealth of Nations, and the moral advancement through prudence expounded in Moral Sentiments.
Within Smith's moral framework, justice both preserves and makes society possible. Enforcement of justice alone makes force acceptable. And only the enforcement of justice is the prerogative of the state. Otherwise, individuals must be left free to develop higher virtues, such as beneficence. These virtues are intimately connected with personal choice and freedom, and hence cannot be enforced or commanded.
Government, although necessarily connected with force, is not force. Smith defined the institution of government by its purpose, justice; and not by its means, force. Thus for Smith, government is justice institutionalized: “The liberty, reason, and happiness of mankind ... can only flourish where civil government is able to protect them.” [Wealth, p. 754.]
Adam Smith saw liberty (political, economic, religious) in a just society as the ideal, and portrayed it as the central theme of The Wealth of Nations. In a free society, and under a system of justice, people would have their self-expression protected to develop other virtues and efficiently produce goods. Moreover, liberty presupposes limiting government, which though it exists to insure justice, is itself a source of injustice. Through its sheer size, government is dangerous, since it can then perpetrate grave injustices far worse than the minor misdeeds of individual citizens which may be easily rectified in the social order.
In all this, Adam Smith was dealing not merely with the unique problems of one historical era. “Mercantilism” was the name he gave to manifestations of abuses not unknown today in the twentieth century. Mercantilism, and its zero-sum approach to economic relationships under a variety of guises, seems to be a persistent characteristic of modern states.
“An Old Reactionary Free Trader on the New International Economic Order.” Nebraska Journal of Economics and Business 16 (1977): 5–18.
In recent years, Third World nations have demanded that a New International Economic Order (NIEO) replace the current system of international trade. These nations picture themselves as subservient members of a victimized “periphery,” doomed to raw material provision by the economic whims of “core” developed nations.
In order to eliminate the disparity between wealthy and poor countries, NIEO advocates call for radical change in what they brand as a world of unfair “free” trade. First of all, NIEO would dramatically accelerate the provision of aid, without strings attached, from developed to less developed countries. Secondly, this new order would guarantee preferential treatment for the products of less developed nations in the more developed nations. In addition, it would give the Third World access to patented western technology; the right to expropriate a foreign-owned business operating in their lands; and the privilege to contract new long-term debts at bargain interest rates.
Free trade economists, disputing the presuppositions of NIEO, retort that international trade is not free because it is encumbered by quotas, tariffs, exchange controls, and domestic preferences in government purchases. This series of interferences, not free trade, is the culprit causing the widening gap between nations.
Small impoverished countries, in fact, would have more to gain from the introduction of international free trade than would any large country. This is so because free trade would inevitably lead to economies of scale for small participating nations. Also, exports from smaller nations would be less likely to lower world prices, and their import demand would be insufficient to raise prices. Free trade would work to equalize the prices of productive inputs between free trade nations; this would eventually substitute for either the migration of labor or the movement of capital.
The overwhelming numbers of unskilled workers in less developed nations may hinder economic progress in these areas. Theoretical and historical evidence confirms, however, that a New International Economic Order would thwart the long-run interest of any participant in world trade. This policy would merely continue the protectionist mentality that has plagued economic growth. The NIEO policy would rest upon the dubious assertion that nonexistent free international trade is a culprit rather than a needed remedy.
“International Trade, Domestic Coalitions, and Liberty: Comparative Responses to the Crisis of 1873–1896.” Journal of Interdisciplinary History 8 (1977): 281–314.
Public economic policies reveal a variety of motivations as shown in the responses of Germany, France, Great Britain, and the United States to the “Crisis of 1873–1896,” when prices declined and output continued to rise. A major goal of research would be to explain why these countries pursued the tariff policies they did. We need to examine economic explanations, political explanations, general international implications for each country, and the economic ideology involved.
In and of themselves, economic and political explanations do not explain all. The British free-trade Anti-Corn Law Lobby remained in power during this period. In Germany, by contrast, protectionist philosophy opposing free-trade had developed early on; and the Junkers swung quite dramatically from free trade to protection. In America, the Republicans dominated politics after the Civil War; they opposed free trade by favoring high tariffs. It is enlightening that the Free Soil Republicans embraced the slogan of “Free Soil, Free Labor, and Free Men,” but failed to champion free trade.
In each country, the dominant coalitions remained intact. In Germany, for example, the coalition Bismarck welded together endured and, in fact, grew stronger. All four nations pursued some variation of imperialism. In America, the industrialists emerged triumphant, delegated little of their power, and were virtually free of criticism (after 1896) until the 1930s.
In the struggle between the coalition that favored a high tariff and that favoring a low one, the winners fell into three groups: groups whose vested interests for their policy were powerful enough to mobilize for action; groups occupying strategic power positions; and groups occupying strategic economic positions. The word “group” is preferable to “class” both because class is often meaningless (as when representatives of heavy industry and manufacturing square off) and because class analysis is too complex.
“Say's (at least) Eight Laws, or what Say and James Mill May Really Have Meant.” Economica (U.K.) 44 (May 1977): 145–161.
Say's Law—defending the market's self-regulating mechanisms of supply and demand as well as the superiority of productive investments over idle consumption—went unchallenged in pre-Keynesian analysis. Say's Law has recently been rehabilitated by several authors:
Thomas Sowell, Say's Law: An Historical Analysis (1972); and Classical Economics Reconsidered (1974);
Robert Clower and Axel Leijonhufvud, “Say's Principle, What It Means and What It Does Not Mean.” International Economic Review 4 (Fall 1973); and
William Hutt, A Rehabilitation of Say's Law (1974).
Part of Lord Keynes's misreading of Say's Law came from its unsatisfactory presentation in J.S. Mill's Principles of Political Economy. More accurately formulated, this economic law states that “demands in general” are “supplies in general”; or, the supply of one kind of goods creates the demand for whatever goods the supplier will acquire in exchange for the supplier's goods or their money price.
“Say's Identity” asserts that no one wants to hold money long, so that every offer (supply) of a quantity of goods automatically constitutes a demand for some other goods of equal market value. A general glut (overproduction of goods and services) is logically impossible.
“Say's Equality” holds that periods of disequilibrium where demand falls short of supply are only temporary and soon disappear with reliable equilibrating forces.
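To make the distinction concrete (the notation here is an editorial illustration, not the article's own), write $p_i$ for the money price and $S_i$, $D_i$ for the quantities supplied and demanded of each of the $n$ goods. Say's Identity is then the claim that, because no one wishes to hold money,

\[ \sum_{i=1}^{n} p_i S_i \;\equiv\; \sum_{i=1}^{n} p_i D_i , \]

so that an excess supply of some goods is necessarily matched by an excess demand for others, and a general glut is ruled out by definition. Say's Equality makes only the weaker claim that any aggregate excess supply of goods—equivalently, an excess demand for money—is a temporary disequilibrium that price adjustments tend to eliminate.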
Say's Law had conceptual roots in the Physiocrats' writings, as in Mercier de la Rivière's L'Ordre Naturel (1767). [Cf. J.J. Spengler, “The Physiocrats and Say's Law of Markets,” Journal of Political Economy 53 (1945)]. The belief that Say's Law is incomplete in the first edition (1803) of Say's Traité d'économie politique and that James Mill's Commerce Defended (1807) contains a more explicit Say's Law results from a superficial reading of Say's first edition. In that edition, much of Say's exposition appears further on, in the Traité, Vol. II, Book 4, and not in his chapter on débouchés, Vol. I, Book 1. Also, Donald Winch's James Mill: Selected Economic Writings (1966), p. 34, shows that Mill both explicitly credits Say with the idea and cites him.
Say's chapter “Des Débouchés” should be translated “on outlets for goods” and denotes the availability of effective demand. Say stated that it is “the abundance of other products in general that facilitates sales. This is one of the most important truths of political economy.... When the exchanges have been completed, it will be found that one has paid for products with products.”
Say emphasized that a given investment expenditure stimulated the wealth of an economy far more than an equal amount of consumption. Say held that “the public interest is consequently not served by consumption, but it is served and served prodigiously by saving, ... the labouring class is served by it more than anyone else. [Savings] are consumed; they furnish markets for many producers; but they are consumed reproductively and furnish markets for the useful goods that are capable of engendering still others, instead of being evaporated in frivolous consumption.”
What needed encouragement and incentives, Say stressed, was the habit of savings; however, arbitrary acts against property as well as freely voted tax increases introduced powerful disincentives against savings.
Say in the Traité, Vol. II, Book 4, Chapter 5 (1803), held that the “demand for products in general is therefore always equal to the sum of the products available.... No glut occurs except when too large a quantity of factors of production is devoted to one type of production and not enough to another.... Means of production are consequently lacking for the former to the extent they are superabundant for the latter.... Inability to sell, therefore, arises not from overabundance but from the misallocation of the factors of production.” Say adds that the notes that Germain Garnier included in his translation of Smith (1802) indicated that “over-abundance of the annual product would ‘obstruct trade, if it were not absorbed by proportionate amounts of consumption’.” Say continued: “I realize that trade can be obstructed by the overabundance of particular products. It is an evil that can never be anything but temporary, for participation in the production of goods ... will instead be devoted to the production of goods that are sought after. But I cannot conceive that the products of the labour of an entire nation can ever be overabundant since one good provides the means to purchase the other.”
This statement of Say's Law seems to lack only a rationale. This is supplied first in Say's expanded chapter on débouchés in the second edition (1814): “every product is created only to be consumed ... as quickly as possible, since every value whose realization is delayed causes a loss to the individual who is currently its possessor of the interest earning corresponding to that delay.”
Keynesians have viewed Malthus's opposition to Say's Law as ‘progressive’. But Malthus defended the feudal landholders against the emerging capitalists (Marx saw such writings as “apologetics ..., partly for ‘strong governments’ whose expenditure is heavy, for the increase of State debts, for holders of sinecures, etc.”).
Say and Mill as proponents of saving and investment (productive consumption) opposed government expenditure such as military spending (unproductive consumption). Mill, following Say, insisted (1807): “it is the maintenance of great fleets and armies, which is always the most formidable weight in the scale of consumption.”
“The Political Economy of the ‘Dispersive Revolution.’” Scottish Journal of Political Economy (UK), 23 (1976): 205–219.
Demands for greater political participation in government are often greeted with sneering words recalling Hume's:
Were you to preach in most parts of the world, that political connexions are founded on voluntary consent or a mutual promise, the magistrate would soon imprison you, as seditious, for loosening the ties of obedience, if your friends did not shut you up as delirious, for advancing such absurdities.
More soberly, we can offer a review of how we might use economic analysis to examine the demand by the individual for more participation in industrial and political decision making.
The conventional ways of formulating the process of choice conjure up the image of human beings reacting like Pavlovian dogs to external stimuli. These are unsatisfactory ways, and it is dangerous to base judgments about society's welfare on them. Liam Hudson (Human Beings: The Psychology of Human Experience, 1975) stays more to the point when he says that if we “see the individual as passive—either as the victim of events that lie outside himself, or as a mere knot of sensations ... we strip the individual of his special status as an agent: someone who makes sense of himself and the world around him, and then acts in the light of the sense he makes.” We will find a more useful paradigm of choice, as J. Buchanan argues, in the principle of gains-from-trade as exemplified by Austrian economist Eugen von Böhm-Bawerk's horse traders rather than in the housewife shopping for groceries in the supermarket who exemplifies the passive maximizer.
Increasingly, persons demand greater control over the political environment and the work situation. Such evidence supports this “break-out” theory of individual economic behavior which suggests that the individual has a strong incentive to seek information on alternative political and economic systems.
But in analysing the demand for political decentralization in countries with centralized government we must consider the actual distribution of political power. Predictions based on reality are likely to surpass those based on some principle of the legitimacy of the exercise of political power. The demand for decentralization may emanate not so much from individual citizens as from interest groups. Thus the Report of the Commission on the Constitution (UK) provides evidence that bargains designed to alter the function of government are not between private individuals and groups whose representatives carry out their decisions. Such bargains actually arise between entrenched political parties, on the one hand, and a wide range of dissident groups of varying size and efficiency on the other. Such groups differ sharply about how the world looks to them and what it is feasible for government to do.
However, the growth of central government may eventually promote a reaction from individual citizens against the government's control over their daily lives. Citizens reacting to this growing impersonality and remoteness of government may also demand political decentralization. Such a demand will arise when individuals express their frustration over the government's inefficient provision of goods and services. The degree of this frustration will depend on the disproportion between their tax obligations and the amount and form of service which they really want.
We can draw a stark contrast between the hierarchical order of the workplace and the democratic order to which at least some pay lip-service. But if alienation is a function of hierarchical organization, we cannot explain it away by property relations, because collective ownership of the means of production is not synonymous with democratization at the shop floor level. As Ota Sik argues (The 1973 Ernest Bader Common-Ownership Lecture), a centrally planned system perpetuates hierarchies in firms and creates another source of alienation—the gulf between the structure of production and the structure of needs. In any case, we cannot even be certain that workers would prefer nonhierarchical, i.e., self-managed firms.
“The American Express Case: Public Good or Monopoly?” The Journal of Law and Economics 19 (1976): 163–175.
In 1974 the Consumers Union battled the American Express Company and the U.S. Shoe Retail Corporation, appealing to the Sherman Act. “Restraint of trade” was charged along with “restrictive contract” because American Express obliged retail stores not to give discounts to cash-paying customers in preference to American Express card purchasers. Consumers Union intimated that credit cards provided no service justifying their cost and that they unfairly raised prices. Eventually American Express settled out of court and waived its “restrictive” contract stipulations.
An important theoretical issue raised by the suit is the effect of credit card usage on pricing. One important motive for using credit cards has generally been overlooked. This factor mitigates the effects of credit cards on prices. Credit card companies provide advertising and “brand name” services, which generally reduce search and information costs for both customer and retailer. Other explanations for the use of credit cards focus primarily on the motivation of the customer in using a credit card; they fail to explain the motivation of the retailer in accepting credit cards. In addition, credit cards provide certain benefits not provided by other instruments (e.g., travelers checks). Advertising services provided by credit cards are generally neglected in any study.
In the long run, economic reasoning suggests that the granting of cash discounts for money purchases (in contrast to credit card purchases) will not be widespread.
“Thomas Jefferson on Money and Banking: Disciple of David Hume and Forerunner of Some Modern Monetary Views.” History of Political Economy 7 (1975): 156–173.
While Jefferson's monetary views have been criticized on the basis of inconsistency and of his presumed failure to understand banking, they were generally consistent with the views of David Hume. And after allowance for the general substitution of demand deposits for bank notes, they are not greatly different from the views of some leading economists today.
Jefferson's proposals for monetary reform were grounded in the libertarian views of Spinoza, Locke, Montesquieu, Hume, Smith and other seventeenth and eighteenth century writers who espoused the natural rights of the individual within a stable framework of rules for competition and enterprise.
Jefferson studied those writers who had already described a system in which most day-to-day restrictions, such as wage and price fixing, trade barriers, occupational restrictions, and other economic controls handed down from the Middle Ages, could be dispensed with. He shared their view that a community is most thriving when left free to individual enterprise. His opposition to chartering the First Bank of the United States reflected his view that the power of the federal government should be limited.
Jefferson's experience with excessive paper money issues encompassed three periods: (1) the colonial period, (2) the Revolutionary War, and (3) the state-bank emission from 1811 to 1816. The emissions in each period were followed by widely fluctuating prices and sharp changes in debtor-creditor relationships. In consequence, he proposed a banking system that would eliminate the economic instability caused by such issues.
Hume outlined a 100 percent commodity reserve banking system that would rigidly limit the quantity of money to the quantity of specie. Like Hume, Jefferson held that an increase in circulation of paper money did not induce an increase in commerce, manufactures, or capital. Adam Smith missed a point shared by Hume and Jefferson: “that paper money has an impact on the total quantity of money and on prices.” Jefferson, on the basis of his experience with American banking, criticized Smith: “The only advantage which Smith proposes by substituting paper in the room of gold and silver ... is to replace an expensive instrument with one less costly.... But this makes no addition to the stock of capital of the nation.”
Jeffersonian economists, such as Charles Holt Carroll, held that no gains were added to a nation's wealth by an increase in paper money, and continued Jefferson's criticism of Smith.
Among U.S. writers whose monetary proposals are similar to Hume's and Jefferson's are Irving Fisher, Henry Simons, Lloyd Mints, and Milton Friedman.
Although separated by more than a century, Jefferson and the typical recent proponent of more rigid monetary control share many basic political and economic views. Each supports an institutional framework that would provide for compatibility of individual and social interest. Both believe the function of the state should be limited to the production of public goods and services—the maintenance of law and property rights to prevent coercion of one individual by another, common defense, fire protection, roads, a stable monetary system—and that control of resources and production in the private sector should be determined exclusively by enterprise and competition.
In sum, Jefferson proposed a money and banking system that was consistent with his strong libertarian views. His experience with monetary instability and his studies of leading economists convinced him that only a purely specie currency would meet his criteria for a stable monetary unit.
He saw unstable money producing major price changes, altering debtor-creditor relationships, causing windfall gains and losses in private wealth, disrupting foreign trade, and reducing the efficiency of domestic resource use. He believed that no real gain in wealth or production would result from a rising volume of paper money. Consequently, he proposed that banks should be prohibited from issuing monetary liabilities and should operate in much the same way that savings and loan associations and mutual savings banks operate today.
“Theories of Personal Income Distribution: A Survey.” Journal of Economic Literature 16 (1978): 1–55.
Why does one individual earn a larger income than another? More sophisticated versions of this question raise moral issues of fairness and distributive justice as well as economic problems.
Economists, through empirical and theoretical studies, are waging a “great debate” to explain general inequalities and equalities of income earned by various income classes or individuals. They advance many separate reasons for such income differences: inherited abilities (I.Q.), opportunity, family environment, educational training, voluntary individual choices and efforts, or investments which individuals make in their own “human capital” to maximize their opportunities.
Some of these proposed reasons are used to support government redistributions (inheritance and other taxes) or social engineering (public education, etc.) in the hope of increasing equality. Others of these reasons favor the voluntary, spontaneous order of the marketplace and a tolerance for whatever income distributions or inequalities the market produces without government intervention. The unanswered ethical questions are whether or not economic inequalities are unfair in themselves and, if so, why. One of the unanswered economic questions is how one can measure psychic and subjective “income” (beauty, love, admiration, etc.) or hidden sources of material income.
Ability as the cause of personal income distribution has been among the oldest theories. Vilfredo Pareto showed that incomes were distributed not normally but lognormally (skewed towards inequality). The Cambridge School (England), and more recently American Cambridge (Harvard-MIT), have sought explanations in inheritance and institutional organization. With roots in Ricardo and Marx through A. C. Pigou and J. M. Keynes, the Cambridge Theory (expounded by Lord Kaldor, and by Luigi Pasinetti) distinguishes between different savings propensities among social classes and income sources. Kaldor's model of substantial differences in long-run propensities to save by different income classes was refuted in Milton Friedman's A Theory of the Consumption Function (1957).
Public Income Distribution theories seek to find what are the effects of taxation or coercive distribution of incomes. Empirical studies suggest redistribution comes either from upper to lower income classes, or from lower to upper classes. However, Director's Law as stated by George Stigler [Journal of Law and Economics 13 (April 1970)], is based on the fact that the state is used to redistribute income to those who control the state. Stigler concludes that in democracies the middle classes control the state and are therefore the beneficiaries of coercive redistribution.
Based on Pareto's lognormal or skewed income distribution conclusion, some economists have explained that additional talents or abilities tend to multiply a person's productivity (a lognormal rather than additive effect). Some have found relationships between ability and education, ability and responsibility, as well as ability and the future-oriented aptitude for saving or capital accumulation. Harold Lydall has emphasized “the D-factor”—drive, doggedness, determination—as having a multiplicative effect on income.
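A brief illustration of why multiplicative factors yield the skewed, lognormal shape (the notation is an editorial sketch, not the survey's): if a person's income is the product of many roughly independent positive factors—ability, drive, opportunity, and the like—then

\[ Y = X_1 X_2 \cdots X_k , \qquad \log Y = \sum_{j=1}^{k} \log X_j , \]

and by the central limit theorem the sum of logarithms is approximately normal, so $Y$ itself is approximately lognormal, with a long upper tail. Purely additive factors would instead produce a symmetric, normal distribution.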
One interesting fact (A.R. Thatcher's study) is that income distribution among homogeneous manual workers is as unequal as that of the whole population—and has remained so since 1886. In the end, therefore, the ability theory—which sees a person's abilities as the cause of income differences—remains a strong competitor with modern, sophisticated theories, such as the human capital theory.
Milton Friedman's individual choice theory is rooted in the differences among people in their attitudes toward risk—risk preferences. In Friedman's analysis, dynamic societies are characterized by very few high risk-takers and large majorities of the risk-indifferent or risk-averters. Mounting poverty occurs in societies that increasingly tend to prefer the risks of less income (or savings) and the higher utility of nonmonetary advantages. Such choices are influenced by the costs or rewards introduced by coercion, taxation, subsidies, and public transfers of income.
Taking off from Friedman, the Chicago School has developed a more refined Human Capital Theory of income distribution. Human Capital Theory emphasizes that investment in oneself is the result of rational, optimizing decisions (by individuals or their parents). Such decisions are made on the basis of estimates of the probable present value of alternative life style income streams, discounted at some appropriate rate. People with higher ability invest more in themselves, do so at younger ages, and earn higher rates of return on their human capital. The Human Capital Theory has been attacked by the “screening theories” of the Cambridge School: schooling does not teach but merely “screens” those with desirable traits, making schooling an elitist rather than an equalizing device.
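A minimal sketch of the present-value comparison just described (the symbols are illustrative assumptions, not the article's): an individual weighing alternative life paths discounts each expected earnings stream, net of the costs of investing in himself, back to the present,

\[ PV = \sum_{t=1}^{T} \frac{Y_t - C_t}{(1+r)^{t}} , \]

where $Y_t$ is expected earnings in year $t$, $C_t$ the cost of schooling or training in that year, and $r$ the discount rate. The path with the higher present value is chosen; on the theory's account, abler individuals face higher returns for given costs, which is why they invest more in themselves and do so at younger ages.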
Ideological values lie behind the whole issue of income distribution: should society bypass the market's voluntary patterns of production and distribution and allow government to coercively determine the “laws” of distribution at will?
Liberty and values are intimately joined together. To justify or rationally demonstrate values is crucial if liberty and its kindred concepts are to win respect from individuals and society.
Are such values as liberty, autonomy, rights, and property arbitrary and conventional, without objective foundations? If so, nihilism and other serious personal or social consequences would seem to follow.
This group of summaries questions values in general. The first two raise the spectre of relativism and determinism. They question whether our values are determined socially and culturally in mechanistic fashion or are chosen autonomously.
But exposing the contradictions in relativism and subjectivism is a far easier task than establishing a natural, rational, and objective base for particular values. The next nine summaries exhibit the continuing debates to ground various values in nature: health, promises, scientific research, liberty, equality, life, and the social sciences. Controversy, it is evident, still reigns.
“Godwin, Oakeshott, and Mrs. Bloomer.” Journal of the History of Ideas 34 (1974): 611–624.
The Victorian era's rational dress movement illustrates how the social reformer can sanely evaluate or criticize society's institutions and also escape being passively molded by society.
Michael Oakeshott attacks in his writings such “institution-haters” as William Godwin (1756–1836, author of An Enquiry Concerning Political Justice) and prefers to regard the individual as deriving his reality primarily from social institutions. Curiously, Oakeshott's defense of institutions closely resembles Godwin's attack on them. Both authors worry about conceptual Procrustes' Beds—the blinkering and distorting effects of preconceptions that are floating abstractions not concretely anchored in personal experience. Both authors also resemble cultural determinists: men are molded by society, especially by political institutions. On the one hand, Godwin rebels against political institutions' authority which infantilizes the individual's autonomy and independent moral conscience. On the other hand, Oakeshott piously venerates the way that traditions and social institutions fashion the individual's personality and beliefs.
As an “institution-lover” Oakeshott, in Rationalism in Politics (1962), seeks to invalidate individualist social reformers who distrust institutions, by citing the alleged blindness of the rational dress movement and its eponymous heroine, Mrs. Amelia Jenks Bloomer. Oakeshott maintains that rationalist reformers (such as Godwin or the creators of bloomers or knickerbockers) must fail because their abstract ideology blinds them to the nature of the social institution they seek to reform. Such reformers, purportedly, also are unconscious of how social institutions subtly influence their own thoughts and desires. Oakeshott would have us believe that the rational dress reformers cavalierly disregarded the complex folklore and social purposes of feminine dress in an exclusive preoccupation with making a costume suitable for women riding bicycles.
However, the history of the Rational Dress Society and the actual motives in designing such women's clothes as bloomers reveal not simplemindedness but complex considerations (including modesty, fashion, functionalism, warmth, hygiene, comfort, economy, and esthetics).
No one disputes that society and institutions can subtly influence us. The real issue is what individuals should do when society places contradictory demands upon them. Women were expected by society to perform household chores, but the same society imposed a cumbersome costume to impede such chores. If society imposes conflicting demands on individuals, how is a person to determine society's real direction and desires? More fundamentally, even if individuals know the true and stable direction of society, why should they obey it if such direction does not satisfy their individual desires and happiness?
Even if society molds an individual's purposes and desires, why should it be irrational to judge society and its institutions by their ability to make individuals happy? Why should not a creature judge his creator? It seems a pointless design for society to create individual aspirations only to thwart them. Individual happiness can serve as a sensible standard for judging social institutions. In reforming such institutions (dress or the state) we can still consult complex purposes rather than remain simplemindedly fixated on one aspect.
In sum, the rational social reformer can choose an independent and objective standpoint for evaluating society. The rational dress example does not demonstrate that the only course is to drift with the deterministic flow of society. Individuals can objectively examine and choose social values.
“Is Cultural Relativism Self-Refuting?” British Journal of Sociology 27 (March 1977): 75–88.
Relativism, in general, claims that all truth is relative, that is, it completely depends on and varies with time, place, age, person, or environment. One truth could hold for John while, at the same time, its opposite holds for Ken. Man thus becomes the subjective measure of all things. A priori cultural relativism is one variety of relativism and asserts that all evaluations and statements about human behavior must be culturally internal and relative. Cultural relativism, so described, requires denying the very possibility of explanation.
The statement, “All explanation must be understood as internal to or relative to a particular culture,” is not necessarily self-refuting. It need not be meant as a universally valid statement (true for all cultures), but rather as one having validity only within the isolated culture in which it is stated. To render cultural relativism self-contradictory, a statement “X is Y” would have to be accompanied by the further claim: “‘X is Y’ is true.”
Thus, a modest, nondogmatic form of cultural relativism is not self-refuting, but it doesn't really explain much or advance our understanding. We need another approach. From analyzing a number of concepts of rationality we can demonstrate that any explanation requires a “universal principle of rationality.” To make behavior intelligible two crucial presuppositions are essential: a procedural norm for determining what is to count as intelligible, and a firm belief that sharing procedural norms is a precondition of both meaningful statements and the explanation of behavior.
Herein we see the fatal flaw of cultural relativism. It attempts to make the social world intelligible by using explanations which depend on cultural consensus or personal perceptions. However, this presupposes that there is a criterion for judging what is to count as a “cultural consensus” about the meaning of any act or the validity of any particular perception.
More crucially, in attempting to determine “cultural consensus” or personal perceptions, the investigator is forced to look back to “some previously defined concept of social reality.” Here's the rub! The relativist, by the very logic of his own argument, is precluded from using any external judgment to determine what constitutes a “cultural consensus” or a valid perception.
An unsolvable dilemma confronts the cultural relativist. He “does not even allow for the possibility that one can negotiate meaning with other actors, for what basis is there for negotiation of common conceptions if the very notion is epistemologically suspect?”
“Health as a Theoretical Concept.” Philosophy of Science 44 (1977): 542–573.
Is health a value-free concept? The medical view of health as the absence of disease is such a value-free theoretical notion because it is based on nonsubjective and empirical elements of biological function and statistical normality. Health involves freedom from disease and thus means statistical normality of biological function: the ability to perform all typical physiological functions with at least the typical efficiency of the species.
The concept of health thus depends on an adequate understanding of disease. A value-free approach to health starts out with a functional account of its negation, disease. Disease is a matter of fact and not of evaluative decision. Health, as the absence of disease, also becomes a matter of fact.
The opposite view, that health is a value-laden concept, arises from faulty assumptions. Our health judgments, in this view, must be “practical” judgments about the treatment of patients; this view also recommends commitment to “positive” health beyond the simple absence of disease (theoretical health). The first assumption of “practical health” holds that “choosing to call a set of phenomena a disease involves a commitment to medical intervention”; the second assumption of “positive health” leads to unnecessary ethical dilemmas that no medical procedures can unravel.
In clinical and philosophical literature, the scope of the term disease includes injury, as distinguished from illness (a particular occurrence of the universal “disease”), and should not be confused with what tends to produce disease.
This functional account of health follows the classical tradition of regarding the normal as the natural and of stressing the biological notions of goal-directedness and function. It differs from that tradition by identifying ideal functioning with the empirically typical (i.e., the ideal is non-normative). Diseases may be viewed as internal states that reduce an organism's functional ability below typical levels for its species. Health, then, is normal functioning ability. Here, normal and typical are defined statistically in terms of the species.
This understanding of theoretical health differs from the currently popular notion of positive health: something more than the absence of disease. It is important to distinguish between health and various kinds of excellence.
This discussion of physical health and disease may also be extended to resolve controversies within the field of mental health. Mental health experts often debate how much values influence health judgments and who gets committed. With a valid distinction between theoretical (mental) health and practical health, we could consistently assert the objective status of mental disease in individuals but still object to subjecting them to involuntary “practical” treatment to render them more “excellent.” We need not believe, with Thomas Szasz, that mental illness is a myth to protest compulsory treatment of the “sick.”
“Institutions, Practices, and Moral Rules.” Mind 86 (1977): 479–496.
Defenses of liberty, to be credible, must rest on demonstrable and objective moral rules. In this light, it is important to keep abreast of attempts to derive an objective “ought” from a factual “is.”
In a now classic article on that subject (Philosophical Review 73 (1964): 43–58) and a later book, Speech Acts (Cambridge: Cambridge University Press, 1969), John Searle argued that the very language of the “institution” of promising commits us to the language of obligation. In effect, if you say “I promise,” you imply that you “ought” to keep your promise. Our language “game” rules require that such words as “promise” entail the subsequent language of “ought” and “obligation.”
Searle's claim to have derived an “ought” from an “is” is defective. Searle based his claim on the grounds that factual descriptions of institutional acts (e.g., promising) generate evaluative “ought” statements. The defect appears if we distinguish between constitutive and regulative rules. Moral rules (and obligations) are necessarily nonconstitutive rules; hence they are “regulative rules.” As regulative rules (R-rules), moral rules and “oughts” cannot be derived from “constitutive rules” (C-rules).
To define these terms: C-rules are those that “constitute,” define, or create (as well as regulate) an activity or “game” which would not exist logically apart from these rules (e.g., the rules of chess do more than regulate the game; they create chess and the very possibility of playing). R-rules merely regulate forms of behavior or games that exist independently (e.g., the rules of etiquette regulate relationships that exist even without the rules).
There is a sense in which institutional acts such as promising can be termed “right” or “wrong,” which goes beyond their mere conformity to C-rules. However, this value or normative sense does not arise from the C-rules themselves, but from the fact that society consciously endows C-rules with prescriptive force because such rules promote ends that the society values.
Thus, the C-rules are neither intrinsic, objective norms nor values, except insofar as society stipulates them to be so. Furthermore, this restricted normative aspect of C-rules differs from the normative aspect of objective moral rules, which evaluates something as right or wrong apart from its being relative to a particular institution or practice.
The same distinction applies to those rules that evaluate an institution as being either good or bad. Some rules evaluate whether an institution fulfills the goals intended for it by society; other rules evaluate the institution itself in terms of its actual effects, whether intended or not. In either case the moral worth of the C-rules as a whole is evaluated “instrumentally” in terms of some goals it helps to achieve.
Contrary to Searle, C-rules do not entail moral rules. Take, for example, Searle's use of the “institution” of promising. The C-rules defining the conditions for an act to fulfill the terms of a promise are merely a factual description of the promise's meaning. But the rule defining the institution of promising does not entail that keeping the promise is desirable or obligatory. The institutional rule merely describes factually what would be a “correct” move in the linguistic game of promising. In order to be obligated to keep a promise one also needs the moral or regulative rules: (1) that it is good to satisfy another's promised needs and interests; and (2) that the act of making and keeping a promise is a good thing.
We commit no logical inconsistency in the following case: we could dutifully observe the linguistic game rules of always applying the phrase “I hereby promise” to the appropriate situations, but still refuse to actually keep the promise on each occasion. Linguistic propriety does not establish an objective ethics. Another base is required to demonstrate objective values.
“Moral Autonomy and the Rationality of Science.” Philosophy of Science 44 (1977): 513–541.
Should ethical judgments play a central role in rational scientific behavior or merely a nonessential role? Ought the scientist qua scientist make ethical value judgments or ought he remain “morally autonomous?” That is, ought he remain wertfrei, accepting or rejecting theories only with a rational eye to attaining the goals of science? It is argued that in their decisions to accept theories, scientists ought to take account of the ethical consequences of acceptance as well as the consequences in attaining “purely scientific” or “epistemic” objectives.
We begin with the assumption that scientific research is publicly subsidized because of its value in advancing understanding and in promoting social utility. Accepting these as the objectives of science, what are the implications for what constitutes scientific rationality?
First, consider the issue of rationality in accepting or rejecting theories and research topics. Here, the “standard view” errs when it claims that such decisions should be made solely on epistemic grounds; that is, science should be morally autonomous.
The decision-theory view of rationality in science, however, advocates deciding among theories according to the probability and value of possible outcomes. On this basis, we may defend the “weak value thesis”: that in accepting theories, scientists in fact make value judgments since they must evaluate the strength of the evidence.
If the weak value thesis is true, what goals should we take into account in deciding to accept theories? Here we can advance to a “strong value thesis.” Not only should we follow the standard view that epistemic goals should be heeded, but we should also consider the practical and moral consequences of accepting theories. This is so because policymakers, in contrast to scientists, need more information. Policymakers stand to suffer greater costs in erroneously accepting a theory. This view may increase scientists' responsibilities, for example, by requiring that they do environmental or political impact studies in connection with their research.
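The decision-theoretic reasoning behind both theses can be put schematically (the notation below is ours, offered only as an illustrative sketch, not the article's own formulation): the expected value of accepting a theory weighs each possible outcome's value by its probability, and the strong value thesis simply widens what counts toward an outcome's value.

\[
EV(\text{accept } T) \;=\; \sum_{i} p_i \, v_i
\]
% Illustrative sketch only. The p_i are the probabilities of the possible outcomes
% of accepting theory T. On the standard view, each v_i measures only epistemic
% value (truth, explanatory power); on the strong value thesis, each v_i also
% includes the practical and moral consequences of acceptance.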
The entire discussion rests on the assumption that both the goals of science and the amount of information provided by scientists are determined in a nonmarket setting. Such a setting lacks the market signals that could clarify how much information about research is needed.
“Dispensing with Moral Rights.” Political Theory 6 (February 1977): 63–74.
Robert Nozick, among others, asserts that men have moral rights which others may not infringe. But such appeals to inviolable rights are often idle and suffer from indeterminateness. Of course, claiming moral rights has obvious tactical advantages despite these grave shortcomings. But can such claims to rights be made legitimately? It may be contended that we really sacrifice nothing by jettisoning the concept of moral rights and simply appealing to judgments based on correct moral principles. Thus, there is no real moral advantage in employing rights language. We will sacrifice nothing important by making claims in the language of “what (objectively) it would be right morally to do.”
A conventionalist rendering of moral rights may be offered. This would, for example, be sufficient to condemn slavery throughout history even if people in slaveholding societies had not arrived at that moral conclusion. This is possible simply by pointing out the errors in moral calculation or fact committed by those societies. Thus, a conventionalist position could accommodate the advantages of a moral rights argument (i.e., that it could condemn violations of rights even if such rights were unrecognized in a given society).
The typical arguments against moral rights are: that lists of these rights tend to proliferate; that there is no single agreed list; and that the definition of these rights is fuzzy. Perhaps the strongest argument is the contention that the proponents of human rights have not yet given an adequate justification of those human rights. The usual properties used to define an equality of rights (e.g., the equality of all persons in some “natural capacity,” or in their liability to pain, or, finally, in their possession of some transcendental properties) seem either to be too weak to justify the whole structure of human rights or to introduce new mysteries that only compound the problem.
The justification of moral rights has been left on rather flimsy grounds, an observation going back over 150 years to Jeremy Bentham.
“Regulation, Liberty, and Equality.” Regulation (November/December 1977): 11–15.
Many contemporary political and economic academicians are intrigued by “zero-sum” games. Such theorists insist that individuals and policy makers must make trade-offs between opposing values. Supposedly, to advocate any one doctrine necessarily diminishes the viability of other alternatives.
Liberty and equality, taken in their absolute varieties, represent two such zero-sum values. When government utilizes its coercive power in the name of equality (e.g., minimum wage legislation or Affirmative Action), it inhibits the capacity of individuals to make choices and act upon them (by reducing their liberty to employ workers freely). Similarly, untethered liberty (Hobbesian anarchy: the victimization of the weak by the strong) endangers basic rights such as freedom from aggression, to which we all have an equal claim at birth.
The critical point that such theorists often fail to make, however, is that the absolute pursuit of either liberty or equality ultimately endangers not only the alternative value (in zero-sum fashion) but the very value advocated itself. Overzealous egalitarian crusades, through which government attempts to level the effects of natural inequalities, generally produce tyrants who enjoy a very unequal control over power and luxuries. And pure liberty, as Camus argued, gives every member of society the “freedom to kill,” ultimately replacing the liberty of each individual with the fear of insecurity.
The relationship of liberty to equality, therefore, cannot ultimately be explained by any uncomplicated zero-sum model. Efforts to pursue either value in the extreme endanger both values. Society thus confronts the choice of blending liberty and equality in the most satisfying proportion. Individuals must periodically weigh, though, the cumulative effects of government policies in order to be certain that absolute power is not leading society towards absolute equality.
Advocates of liberty should realize that in order to maximize their cause they must continually stress that nonviolence is a key element of real liberty. By advocating this single restraint on individual interaction, they can largely defuse criticisms that their proindividual stance is either extreme or internally inconsistent. Simultaneously, they will promote the greatest possible role for liberty in society.
“Antenatal Injury and the Rights of the Foetus.” Philosophical Quarterly (Scotland) 28 (January 1978): 17–30.
Is it inconsistent to claim, on the one hand, that a child deserves compensation if he is born malformed through injury and negligence done to the fetus, and on the other hand, that the fetus has no rights?
The first claim apparently admits the rights of a fetus. To counter this implication we need a theory connecting rights with interests. Rights function to protect the interests of the bearer of the rights. To say Virginia has a duty she owes to Robin is to say Virginia is obliged to advance and protect Robin's interests. But possession of a right cannot be deduced simply from the fact that Robin has an interest in Virginia's acting in a particular way. In addition we must establish that Robin's interest exhibits the type of moral significance that obliges Virginia or others to protect it by their action or inaction.
We may distinguish two kinds of interests. The first type of interest pertains to conditions needed for a healthy specimen: what can harm it or improve it. Fetuses have this kind of interest (as do plants). However, the second type of interest, moral interest, involves an individual's being interested in something: having desires, preferences, likes, and dislikes. Only the second type of interest gives rise to moral obligation. A fetus lacks this sort of interest since it lacks desires and purposes. Only when a being takes an interest in something does it have rights. Only when a being displays concerns does its harm or benefit have independent moral significance, so that, consequently, rights and duties are owed to it.
There are, however, duties owed “to the child” who does have interests of the proper moral sort. The fetus stands in a causal relationship to the health of the subsequent child. This means that though there are no duties owed to the fetus, there are duties concerning the fetus. (Just as the fact that we owe duties to everyone not to unjustly kill them implies that a builder has duties concerning a building, namely, not to build a faulty structure; the duties are owed to the people but only concern the building.)
That one owes duties only to the child follows from this observation: if we were to assume that no child would be born, the consensus that we have duties concerning the fetus disappears. Thus no inconsistency arises in asserting that we have a duty to see that the fetus is not harmed but no duty to see that the fetus survives. Our duties concerning the fetus are hypothetical, contingent upon assuming that there is a possibility or intention that a child will be born.
This last point, however, raises a problem. If future interests create present duties, why can't the interests of the child-to-be justify a duty not to abort the fetus, on the grounds that the child-to-be will be interested in being born? One way to extricate ourselves from this difficulty is to concede that the possibility that there may be certain interests, or that there may be a person, has a bearing on determining our present obligations. But in deciding whether to abort we are deciding whether that possibility exists in a particular case. We cannot anticipate the results of our decision in order to make it.
“Should the Numbers Count?” Philosophy and Public Affairs 6 (1977): 293–316.
Consider the following quasi-lifeboat moral dilemma: You have a limited supply of some lifesaving drug. Six people will all inevitably die unless they receive the drug. However, one of the six needs all of the rare drug if he is to live. Each one of the other five needs only one-fifth of the drug. What ought you to do morally?
The general issue is: Should the number of individuals affected by such a “trade-off” action morally determine the ethical decision to do or not do the action? The specific “scarce drug” example is thought-provoking and calls into question an ethical intuition almost universally shared. Most people would tend to answer that the death of five innocent persons is a worse evil and greater loss than the death of one innocent person, “other things being equal.” The example poses an either/or choice. Your situation is to prevent the loss of either one person or five persons. You cannot prevent both losses. You are morally required to prevent the worse evil.
One problem is that “other things” are rarely “equal.” The one person who needs all the drug might be a brilliant scientist on the verge of a medical discovery to make the drug plentiful or cure some other serious illness affecting millions. Again, the five persons in the example might be five “idiot infants” unloved by anyone. Such special considerations are usually not entertained by those who pose the dilemma.
Now suppose that the special consideration has nothing to do with social benefits but with your own personal preference. The individual whose life you choose to save over the other five may be a partner, parent, or close friend. Here the reason for your choice might not be due to any overriding moral obligation to the individual, but simply because you know and like him whereas the other five individuals are strangers.
Or further suppose that you try to argue the one individual into giving up his dose of the drug (which, by stipulation, he owns) because it would be worse for the five others to die. He might possibly demur and counter: “Worse for whom?” His retort effectively undercuts utilitarian arguments that would attempt to focus on the alleged greater happiness of a greater number of people. The individual simply values his own life more than he values any of the other five. It would seem to be a confusion for any one of the five to try to convince his individual rival by entreating: “None of us is thinking of himself here! But contemplate, if you will, what we the group will suffer. Think of the awful sum of pain that is in the balance here!”
Many more complications might be introduced to the dilemma. But it does not seem that the mere consideration of the relative numbers of people involved in such trade-off situations carries moral weight. Questions raised by this test case include moral equality, policy choices, and the role of property titles in allocating scarce resources such as lifeboats and rare drugs. The owner of the boat and drug might be the proper one to decide their allocation. Numbers should not dictate choices.
“Values and Political Theory: A Modest Defense of a Qualified Cognitivism.” The Journal of Politics 39 (1977): 877–903.
Are value judgments merely subjective expressions of the attitudes of particular speakers rather than reflections of the intrinsic goodness or badness of a thing or action?
This moral position, known as value noncognitivism, does not represent an adequate account of the value judgments we ordinarily make about political life. The consequences of accepting value noncognitivism would bring into conflict and render impossible the twofold task of political theory: (1) to offer an objective account of politics, and (2) to address the moral issues involved in politics.
A qualified value cognitivism seems preferable. It would ground moral judgments upon a conception of what it means to be a person, or a “model of man”; would provide a link between normative and empirical theory; and would reconcile the two distinct tasks of politics.
Value noncognitivism fails because it focuses only on the performative use of normative terms such as “promise” or “good” while ignoring their primary meaning.
By concentrating on what a speaker is doing (the prescriptive aspect) this approach is deaf to what he is saying. Noncognitivism fails to distinguish between how a moral judgment functions (to persuade, to commend, etc.) and what those terms mean: a distinction between use and meaning.
A cognitive approach to values can remedy the defects of value noncognitivism. An analysis of the term “good” demonstrates that its ascription to anything is neither subjective nor arbitrary. The criteria of what constitutes the human good, or what contributes to our well-being, are likewise nonarbitrary, because they emanate from a conception of human nature rather than from the subjective preferences of any group of people. Human good is not a statistical compilation of what people actually desire, nor does it express the attitudes of a particular speaker to a certain course of action. Good is an evaluation of what actually contributes to the “functioning well” or “flourishing” of a person (of an agent capable of intentional action). “To make a value judgment is, then, to make an assertion whose truth value can be determined only in relation to the model of man within which the statement is made....”
Nor does this merely push emotivism and subjectivism one step back to the question of what the proper model of man is. Research programs in the social sciences, when examined, are “models of man” which serve as “bridges” between normative and empirical theory. Hence, they can be falsified, or at least discarded in favor of explanations that include more of the relevant data.
To arrive at objective values, it is necessary to refute value noncognitivism and to attempt to ground values, or the human good, on something other than subjective whims.
“A Call for Conceptual Clarification in Value Theory: A Response to Professor Moon.” The Journal of Politics 39 (1977): 904–912.
Has Donald Moon actually made a case for an objective foundation of value terms found in the social sciences?
Against Moon's views on “qualified cognitivism,” one can distinguish two positions concerning value cognitivism/noncognitivism: the one may be called ontological and the other epistemological. Professor Moon tends to confuse the two. Moon, while ostensibly refuting the positivistic-modernist position on value theory, ends up embracing precisely that framework by arguing for a “value cognitivism that is essentially epistemological.”
Moon's argument is undermined by a failure to provide an ontological or naturalistic base for his epistemological cognitivism. Furthermore, Moon's enterprise fails to be normative because his “research programs” for deriving conceptions of human nature depend on empirical observations of what men, or societies, regard as “good.” Thus, we are still left with value relativism.
A more coherent alternative is Michael Polanyi's, which combines ontological cognitivism with epistemological noncognitivism.
Justice, in the judgment of social thinkers from Plato to Harvard's John Rawls and Robert Nozick, has meant fairness and rightness of human actions in a social context. This harmony dissolves, however, when each thinker seeks to explain coherently the traditional formulation of justice: “giving to each his due.” What is each person's due? How should society determine and assure the just allocation of economic resources, education, social standing, and legal justice?
These fundamental questions lead to the rival options of choosing either the state or the market as the mechanism of achieving social justice. Should the state be the voice of justice, and essay to achieve social welfare, equality, distributive justice, and a fair balance of competing claims and rights through its coercive authority? Or should the market—the network of voluntary interactions among humans—be the mechanism to guarantee a spontaneous order of both distributive and commutative justice?
This alternative raises another issue: how may we define the relationship between justice and individual rights or liberty?
So overarching a concept is justice that it overflows the confines of this set of summaries and reappears in several other sections, most notably in the following section on “Property.”
“Liberty and Justice,” Justice and Economic Distribution. Edited by William Shaw and John Arthur. New York: Prentice-Hall, 1978: 183–193.
Is a free society consistent with justice? A Lockean or libertarian theory of rights can produce both a free and a just society. Such a Lockean social system recommends itself by reason of its decentralization, personal participation, rights, liberty, and justice. It would allow a maximum of human differentiation with a minimum of imposed conformity.
Lockean individual rights (respect for the liberty of each person) foster the growth of free markets and justice. Under this system the voluntaristic mechanisms of the market would replace political, coercive decision making. This society would intimately link liberty and justice by means of the concepts of just holdings (entitlements) and the wrongness of coercing persons.
Lockean rights stipulate that each human possesses a natural right to his life, liberty, and honestly acquired property. This leads to a “negative” conception of liberty: freedom from coercion against one's person and legitimate property. A society built on Lockean principles would be a complex web of voluntary relationships, a contractual society. Each person's uncoerced and free agreement to trade or to associate would give rise to a market society for the exchange of goods and services.
This market society would be characterized by decentralized decision making: no centralized political authority would compel unwilling participation by bureaucratic edict. All individual parties would have to voluntarily cooperate, participate, and coordinate their plans in reaching any joint decision. Political, involuntary planning, by contrast, breeds interest-group struggles and a Hobbesian war of all against all.
With its supreme social principle of respect for the freedom or noncoercion of each person, a free society bans any act violating personal liberty and encourages only noncoercive acts.
How relevant is this emphasis on noncoercion, property, and liberty to justice? To require noncoercion means that each person's just holdings (or entitlements) must be respected. One perpetrates not only coercion but also injustice when one deprives another nonconsensually of what that person justly acquired. The call for liberty is the call also for justice because justice is the condition of respecting the freedom of individuals to possess all that they are entitled to possess.
Accordingly, in a Lockean society, distributive justice is a procedural strategy of leaving each person free to engage in any rights-respecting (or noncoercive) economic activity. Any political intervention to redistribute goods or services contrary to the voluntary market decisions of individuals would violate justice.
“Justice in Smith: The Right and the Good.” Review of Social Economy 34 (December 1976): 275–294.
Adam Smith exposits a complex view of justice (in The Wealth of Nations and The Theory of Moral Sentiments), which supports liberalism on nonutilitarian grounds. This view corrects John Rawls's characterization of Smith.
Smith provides an alternative to the kind of interest group liberalism that lacks a conception of the common good. His moral system allows for the development of a concept of the common good and of justice. Indeed justice plays a key role in Smith's arguments. Smith's “conception of justice views social interaction as more than the sum total either of purely self-interested individual actions or even of purely benevolent ones.” As a result, modern critics of interest group liberalism show an affinity with Smith's position.
John Rawls's A Theory of Justice (Cambridge, Massachusetts: Harvard University Press, 1971) provides a framework for discussing justice in the Smithian moral system. Rawls's agents decide on principles of justice in a disinterested “original position.” These agents, hidden by a “veil of ignorance” from knowing their respective social positions, determine the principles of justice without vested interests. These principles make the right prior to the good, the reverse being true for utilitarians. Smith probably would have agreed with Rawls; this pits both against interest group liberalism and puts both in favor of justice as fairness.
Smith believes that actions are motivated both by self-interest and sympathy; this permits him to rely on cooperation and synergy in human affairs without calling in government. Morality begins as a simple desire for approbation (which is self-interested), but it evolves into internalized standards emanating from conscience (Smith's “inhabitant of the breast”).
Justice is a prerequisite and primary, for society cannot operate without it. The other virtues need not be similarly compelled by the state, but will develop spontaneously in a just society characterized by mutuality based on sympathy. In this Smith is neither advocating utilitarianism, nor presupposing disinterested benevolence (Rawls misinterprets Smith on this point). Moral rules are not adopted for purely utilitarian reasons by Smith, unless one insists on converting all moral theories into utilitarian ones. Smith and Rawls are closer than Rawls perceives.
“The Use of the Basic Proposition of Justice.” Mind 84 (January 1975): 63–78.
John Rawls in A Theory of Justice (Oxford, 1972) advances a make-believe drama of social contract, entitled the “basic proposition”: that people, hypothetically choosing the nature of a society from a specified “original position,” would in fact choose Rawls's social principles. Advancing a variant on social contract theory, Rawls imagines the framers of his ideal society, placed in this “original position,” as rational, self-interested, free from envy, and choosing behind a “veil of ignorance.” Each, ignorant of and unbiased by any vested interests that he might possess in the contemplated society, can make a fair judgment of a good society. Such a fairminded, reasonable person, it is argued in Rawls's basic proposition, would choose two principles for the future society: (1) that each member of society have a right to the most extensive liberty compatible with a like liberty for all (equal liberty), and (2) that no inequality be allowable which does not improve the lot of the worst-off in society (difference principle).
The various methodological uses which Rawls claims for his “basic proposition” are superfluous and muddled. Imagining an “original position” which hypothetically illustrates a social contract is philosophically pointless; it is preferable to dispense with social dramas which prove nothing and to engage in ordinary logic and reason. Rawls's basic proposition is not superior in its justificatory, expository, or explanatory uses.
To illustrate the philosophical emptiness of using Rawls's basic proposition we can analyze “the Justificatory Use.” This refers to how we can justify or evaluate actual societies by measuring how closely they conform to Rawls's imaginary social contract and its two principles of justice (equal liberty and the difference principle). Rawls reasons as follows: (1) We have certain assumed convictions about the values of liberty, the need for incentives, and the rightness of egalitarianism. This leads us to accept the next step. (2) The circumstances of choosing a social structure in the “original position” seem fair because of the impartial “veil of ignorance.” Therefore we infer the next step. (3) People would choose Rawls's two principles of justice. This supposedly leads us to the final conclusion. (4) We can logically use Rawls's principles as recommendations for justifying, evaluating, or changing real societies in conformity with the demands of justice handed down in our imagined social contract drama.
This entire line of reasoning is termed the “Contract Argument.” But this dramatic use of the contract argument is otiose and no better than the simpler “Ordinary Argument.” The ordinary method of argument dispenses with the imaginative trappings of a hypothetical scenario which bolster the contract argument. In the ordinary argument, we reason from Rawls's premise (1) straight to premise (4). What need is there for imaginative flights that have dubious logical validity? Thus, the basic proposition is irrelevant.
If Rawls were to counterclaim that we would agree to his principles if we were in the dramatized social contract situation, the simple response is that we are not there. But whether we are there or not, it is the philosophical truth and validity of Rawls's arguments that must be established. His fictional drama of imaginary persons agreeing with him does not prove his case.
“Discussion Review: Justice, Theory, and a Theory of Justice.” Philosophy of Science 44 (1977): 594–618.
This critical review of John Rawls's Theory of Justice concentrates on the methodology of the book's arguments and conclusions rather than on their substance. One strong objection is Rawls's non sequitur of deriving the validity of social principles of justice from the act of choosing them. Rawls implies that what makes certain sorts of social acts right is that rational persons in an “original position” would choose for them to be considered as such. It is more plausible to contend that rational persons, in the original position or in more realistic positions, would choose or reject such principles because the principles are philosophically right or wrong.
We might also attack the notion that unanimity is a reasonable or necessary condition for social systems based on an adequate theory of justice. This has implications for those who think that the requirements of justice can, in many cases, be met merely by having the affected parties in an interaction agree. Similarly this attack would affect those theories of justice that allow different principles for different groups of people.
Other difficulties in Rawls's book are: the undefended assertion that justice and truth are “the first virtues” of social institutions and theories, and the deductive status of Rawls's arguments.
“Maximin Justice and an Alternative Principle of General Advantage.” American Political Science Review 69 (1975): 630–647.
John Rawls's A Theory of Justice (1972) attempts to determine what would constitute a fair allocation of property and goods. Under what conditions can some persons in society initiate legitimate coercion over others to assure such a fair distribution? Rawls's solution is his principle of “maximin justice.” It decrees that those inequalities are tolerable and just which work “to the greatest benefit of the least advantaged....” Rawls argues that this principle would be chosen by social contract among rational men in an impartial “original position.”
Rawls's maximin principle is in fact unjust and unfair. Rawls's imaginary social contract would disenfranchise all but the lowest socioeconomic class. What reason have others for complying with it? This principle would have us judge allocations of goods by ignoring everyone but the lowest class. This procedure would increase inequality and also decrease the total goods available to society.
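A small numerical sketch may make the objection concrete (the figures below are ours, chosen purely for illustration and not drawn from the article). Maximin ranks alternative distributions solely by the share of the worst-off position:

\[
A \text{ is preferred to } B \text{ under maximin} \iff \min(A) > \min(B)
\]
% Illustrative figures only, not from the article under review.
% Increased inequality: maximin prefers (11, 20, 100) to (10, 11, 12),
% since 11 > 10, even though the former distribution is far more unequal.
% Decreased total goods: maximin prefers (11, 12, 13), totaling 36,
% to (10, 20, 40), totaling 70, again simply because 11 > 10.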
“Affirmative Action Reconsidered.” The Public Interest 42 (Winter 1976): 47–65.
Affirmative action is a vague legal concept which in the name of justice purports to remedy previous discrimination by actively promoting and encouraging the hiring of minority individuals. As we examine the intention, concepts, and actual effects of affirmative action policies, we find they have done more harm than good. Administration of the Civil Rights Act has led to what sponsors of the legislation did not intend; in fact, they said it would not happen. The burden of proof of discrimination has been placed on employers whose proportional representation of employees by race or sex does not measure up to federal agency standards.
Bureaucratic nightmares have been created by affirmative action considerations in academic hiring. Academic administrators, desiring to preserve federal subsidies, increasingly overturn long-standing practices of academic hiring. Academic departments, which are in the best position to judge a professor's qualifications, no longer have a say—or they are pressured to act in a way that will not turn off the federal spigot.
While hardly advancing the position of minorities and females, affirmative action policies create the impression that the hard-won achievements of these groups are conferred benefits. Here and there, affirmative action has caused some individuals to be hired who would otherwise not have been hired, but this is a doubtful gain in the larger context of attaining self-respect and the respect of others.
“The Relativity of Injury.” Philosophy and Public Affairs 7 (1977): 60–73.
Robert Nozick's minimal state cannot, in fact, be limited to the functions that he prescribes for it.
This is so because the minimal state emerges before any substantive law, while at the same time it is restricted in its actions to pronouncing and enforcing judicial decisions. Without any preexisting definitions of crimes and torts provided in substantive law, Nozick's minimal state will have no definite criteria upon which to base its decisions.
Natural right—the right not to be injured, according to one definition—is too empty and relative a notion to guide judicial decisions. Hence, the state will be forced to make law through interpretation without any restraint upon its powers. But only popular sovereignty can provide such a restraint. Accordingly, the state must be “controlled” democratically by those whom it governs. This is the source of its legitimacy. A priori limitations upon state activities (e.g., First Amendment rights) are justified only as instruments to protect popular sovereignty.
If this argument against Nozick is to be countered, there is a clear need for more exposition of the historical role of the common law and its significance as both an antecedent and an alternative to statutory law.
“The Role of Sanctions and Coercion in Understanding Law and Legal Systems.” American Journal of Jurisprudence 21 (1976): 71–94.
Philosophers of law have traditionally regarded coercive sanctions as an essential feature of legal systems. While coercive sanctions may be, to use H.L.A. Hart's phrase, “pragmatic necessities,” conceptually they seem an unnecessary feature of legal systems.
Necessary features of a legal system are those which must hold true if legal systems are “to have a point.” Although it is a complex task to spell out what it means for a legal system “to have a point,” one essential point or purpose of legal systems is to provide an authoritative way of resolving or regulating disputes. In this view, the traditional philosophers of law err when they conceive of laws as requiring enforcement by coercive sanctions. A legal system is basically a framework to regulate human conduct by means of settling disputes. As such, all that is really required to have a functioning legal system is that it be supported. Law enforcement by coercive sanction is merely one way to support a legal system.
Alternative means of legal support are inducements, popular feeling, and nonlegal institutions such as churches or clans. It may be true that present legal systems rely heavily on coercive sanctions and, given human nature, may always rely on them to some extent. But the presence of such sanctions is not an essential and defining characteristic of law.
The “sanctionist” view of law is somewhat linked to the social theories and “hard social realities” of eighteenth and nineteenth century industrial societies. But, hopefully, future theories of law will place less stress on coercion. This change in emphasis may encourage men to think of legal systems as structures which they can use to refine, develop, and augment their capacities. In short, such a change of perspective would lead men to see law as a liberating force that promotes freedom rather than as something which restricts it.
Coercive sanctions do not appear to be a necessary part of the concept of law. We can envision a society that maintains social order less by coercive sanctions than is now the case. Although there is no conceptual reason why legal systems should rely on coercion to the extent they do now, it remains for legal theorists to show how the conceivable can in fact work.
“Punishment and Crime: A Critique of Current Findings Concerning the Preventative Effects of Punishment.” Law and Contemporary Problems 41 (1977): 164–204.
This critique aims to be a fairly comprehensive survey of the literature concerning the effects of the criminal justice system on crime. First, it discusses studies of the “special effects” of punishment, that is, the effects of punishment upon individual felons. Next, it analyzes recent efforts to study the general deterrent effects of criminal sanctions.
Prison sentences have traditionally been held to have two purposes regarding the convicted felon: rehabilitation and incapacitation. Since World War II, substantial effort and experimentation have been directed at reducing recidivism through rehabilitation programs. With few exceptions, however, these programs have failed. These failures have significantly disillusioned the criminal justice system with the rehabilitative model and indeterminate sentencing.
Interestingly, little support exists for the following arguments that seek to prove how imprisonment increases the crime rate: (1) it stigmatizes inmates and thus makes it harder for them to support themselves legitimately when released; (2) it improves the inmates' crime skills; and (3) it causes them to accept criminal norms of behavior. Further, Ernest van den Haag's studies indicate that even if a felon ceases his crimes against the public through the incapacitation and rehabilitation of imprisonment, there may be no corresponding change in the overall crime rate. Some kinds of crime may be limited more by the number of opportunities available to commit a crime than by the number of individuals willing to commit it. Thus, on van den Haag's analysis, the incapacitation or rehabilitation of one offender may only disrupt the supply of, say, drugs and create an opening for someone else to enter the “business.” As a result, even if incarceration reduces crime by psychopaths, for example, the rates for other types of crime designed to enrich the criminal may not be strongly affected in the long run by incapacitation alone.
Economists have recently studied the general preventative effects of criminal sanctions. They approach crimes as a kind of entrepreneurial activity by felons. Likewise, they view criminal sanctions as a kind of tax on criminal activity. Given this economic model of criminal activity, economists generally expect that crime rates will decline if the law increases the severity of the “tax.” Crime rates should likewise decline if the effectiveness of arrest improves, if the “payoff” to a given crime is reduced, or if legitimate economic opportunities increase. Within the economic model, then, the criminal functions as a rational decision maker.
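This rational-choice reasoning can be stated schematically (the notation below is our own Becker-style sketch, not the survey's): a prospective offender compares the expected net return of the crime with his best legitimate alternative.

\[
\text{commit the crime only if}\quad (1 - p)\,G \;-\; p\,S \;>\; W
\]
% Illustrative sketch only; the symbols are ours. Here p is the perceived
% probability of arrest and conviction, G the payoff from the crime, S the
% severity of the sanction (the "tax"), and W the return from the best
% legitimate alternative. Raising p or S, lowering G, or raising W reduces
% the expected net return to crime and, on this model, the crime rate.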
Another, perhaps complementary, model for studying crime deterrence focuses on the socialization of society's members through the criminal justice system. This approach generally regards law-abiding behavior as habitual. An effective criminal justice system (i.e., one that effectively enforces the laws) cultivates this “habit,” whereas a mild or ineffectively applied system of criminal sanctions fails to provide people with incentives for developing the habit of being law-abiding. Most studies rely on the foregoing models for purposes of analysis.
The first involves statistical correlations of criminal sanction “threat” levels with crime rates across jurisdictions or over time. These often have supported the claim that a high probability of punishment inhibits crime rates. To a lesser extent they have also correlated the severity of punishment with general levels of deterrence. But questionable methodology weakens these studies' conclusions as to the strength, or even the existence, of any deterrence mechanism. Among the methodological flaws are inadequate or inaccurate crime statistics; failure to control for other criminogenic factors which may distort the deterrence effect; and failure to distinguish adequately the deterrence process from other processes. This last flaw means that the threat levels of criminal sanctions may be negatively related to crime for reasons other than deterrence. All these flaws vitiate the usefulness of these studies.
The second kind of empirical research involves “quasi experiments.” These are sudden changes in the law or enforcement policy which may alter the public's perception of the certainty or severity of criminal sanction for some criminal offenses. Despite their lacking the generality of the correlations, such studies offer some evidence that changes in the crime rate arise from changes in the threat level of the criminal justice system.
“Individualism and Productive Justice.” Ethics 87 (1977): 113–125.
A “eudaimonistic” conception of the individual more solidly supports political individualism than does classical liberalism and its modern spokesman, Robert Nozick, in Anarchy, State, and Utopia.
The Greek ethical norm of eudaimonia denotes the condition of living in harmony with one's unique daimon or innate potentiality; as a moral ideal it stresses the irreplaceable, potential worth of each human person. The eudaimonistic view of man entails a larger role for government than does Nozick's “minimal state.” For eudaimonia, the logically prior problem consists of positively developing individuals (by state assistance if necessary); protecting individuals, the narrow role of classical liberalism's nightwatchman state, takes second place.
Several contrasts emerge from comparing eudaimonistic individualism with Nozick's Lockean individualism (along with its social and political consequences). For Nozick, individuality is a quantitative, unalterable, and static fait accompli, embodied in the “fact of our separate existences” or our brute numerical individuality. On the other hand, eudaimonistic individuality is qualitative and seeks the development of human potentiality. To become an individual in the eudaimonistic sense is a moral responsibility.
This last idea of responsibility logically precedes rights. Rights follow from responsibility, just as “ought” implies “can.” Rights are, thus, the entitlements to the necessary conditions of individuality. Such conditions of individuality come into play when we understand individuality as a development. A basic criticism against classical liberalism's fait accompli or static individuality is that it hides this developmental understanding of personal growth.
One necessary tenet of individuality requires that each person be responsible for providing for himself whatever he can. But a developmental conception of individuality acknowledges that the individual may not, or cannot, provide certain necessary conditions for himself; it views self-sufficiency as an end-condition rather than a beginning-condition. The justification of the state is that it provides opportunities and conditions of individuation which individuals cannot provide for themselves.
Nozick's numerical individuality and “minimal state” concept invite a historical re-run of classical liberalism, with its subjectivism of values and its excesses of amoral egoism. To be viable today, political individualism needs to be inspired by a new and more profound conception of the individual that recognizes ethical and psychological development in persons. A fuller defense of such an alternative may be found in the author's recent book, Personal Destinies: A Philosophy of Ethical Individualism (Princeton: Princeton University Press, 1976).
The last section's questioning of just allocations and entitlements naturally leads to various concepts about property. A major theme in this set of summaries is the validity of social welfare rights against an absolute concept of property.
Welfare rights seek to achieve the social common good by “balancing rights.” Individual rights—a person's right to property or liberty—are acknowledged but considered only “prima facie,” that is, tentative, provisional, and not absolute. From the controversial viewpoint of welfare rights all claims to property and liberty must be set in the scales of the common good and weighed against other competing claims and rights. Against the collective emphasis of welfare rights, neo-Lockean theories of property develop Lockean rights to “life, liberty, and property” in a more individualist direction. The neo-Lockean tendency is to defend the absolute inviolability of each person's title to his or her own life, liberty, and legitimately acquired property.
Accordingly, this sequence opens with two opposed points of view on the validity of prima facie rights. Then follow several analyses of the validity of Lockean and neo-Lockean theories of property rights. Indian land claims and Kant's theory of property precede the concluding study of how “balancing rights” and social welfare crop up again in the venerable theory of the “just price.”
“Liberty and the Redistribution of Property.” Philosophy and Public Affairs 6 (Spring 1977): 226–239.
Does liberty require socialism and the redistribution of property?
It is claimed that liberty is less infringed when government coercively redistributes property from the wealthier producers to poorer citizens than when government coercively protects affluent producers from the acquisitive desires of the poor. This argument relies on “prima facie rights” and the “importance-to-the-agent factor.”
The “redistributive alternative” (RA) is argued to be fairer than the “property rights view” (PRV). Redistribution (RA) would seem to raise the overall level of social welfare and the satisfaction of wants. The poorer recipients appear to have a greater desire to use or consume the goods than do the producers in PRV.
Conceiving of liberty as the right to do whatever one wishes, we grant that RA involves curtailing the liberty of wealthy producers to do as they wish. But PRV seems to violate liberty to a greater degree since poor nonproprietors are prevented by legal penalties from doing what they wish, namely consuming the goods in question.
The PRV objection—that property laws do not curtail the liberty of the poor since the poor have no “right” to the property—ignores the prima facie rights possessed by everyone. All rights appear to be of this “weak,” tentative, and conditional sort: immunities from coercion conditionally valid so long as other factors do not override them and justify restraining one's liberty of action. Property laws, in this view, infringe a prima facie right of the poor by curtailing their liberty and action.
But do we have a standoff or dilemma since both positions, PRV and RA, appear to curtail liberty? No, because the relevant question is which alternative curtails liberty to a greater degree.
What cuts this Gordian Knot is the importance-to-the-agent factor. To formulate this criterion, which measures the degree to which liberty is curtailed: “the more important the blocked course of action is to the person, the more the person's liberty is curtailed (other things being equal).” Arbitrating the rival claims of PRV and RA with this measuring rod, it is asserted that “the recipients have a greater desire to use or consume the goods than do the producers. Thus it would be more important to the recipients to use or consume the goods than it would be to the producers.”
Is it possible to arrive at a noncontradictory definition of liberty that avoids the embarrassing and compromised claim of some doctrines to curtail liberty less than other doctrines? Also, how can we establish a scientific and objective measure of the relative “importance-to-the-agent factor”? How could one disprove a rich man's claim that he valued the marginal unit of his fortune as of far more importance than would a poor man?
“Prima Facie Versus Natural (Human) Rights.” Journal of Value Inquiry 10 (Summer 1976): 119–131.
Princeton philosopher Gregory Vlastos has plausibly argued that Lockean rights are not absolutely binding in a legal system that relies on them as “fundamental to a scheme of justice” (“Justice and Equality,” in R.B. Brandt, ed., Social Justice, New York: Prentice-Hall, 1962). Instead, Professor Vlastos says, these rights are “prima facie,” that is, provisional or tentative rights which are capable of being overridden in the face of other competing and stronger moral claims.
This notion of prima facie rights suffers serious flaws. For example, it is claimed that as a prima facie right, someone's right to liberty may be overridden by another's right to welfare. But if this were true, rights could no longer be fundamental to a scheme of justice (as Vlastos agrees they are). The only respect in which rights are capable of being overridden is that they do not apply where politics itself is impossible. They may then be disregarded. But if freedom rights could be overridden by welfare rights, we would have a confusion between political and moral virtues or values, a confusion that would invalidate Vlastos's argument.
Attention to the meaning of prima facie rights is indispensable for anyone concerned with recent “mixed systems” attacks on Lockean natural rights and the free society.
“Locke's Theory of Property.” Interpretation 5 (1975): 226–244.
Locke's theory of property does not yield a society dedicated to laissez-faire capitalism but rather a modest form of social welfare socialism. This thesis is an interpretation of Locke's Two Treatises of Government, particularly Chapter 5 of the second Treatise, and sections 41–43 and 86–90 of the first Treatise. Locke believed that the rights to life, liberty, and property were “natural,” existing in the state of nature before civil or political society. But this does not mean that such rights “can never be overridden by the competing rights of some other person or group.” They are rather provisional or prima facie rights.
Property originates, Locke argued, when man mixes his honest labor with nature and thereby owns the product of his labor and is free to transfer this legitimate possession to others. Locke, however, does not endorse the labor theory of value in the sense that labor alone determines the economic value of what it produces.
But what are the limits of property acquisition for Locke? Two passages from the second Treatise are crucial. (1) “As much as any one can make use of to any advantage of life before it spoils; so much he may by his labour fix a Property in. Whatever is beyond this, is more than his share, and belongs to others.” And (2) a man has a right to acquire as much property as he can, provided that “there is enough, and as good left in common for others.” The concept of spoilage is not essential. After interpretation, we can restate the Lockean Proviso of these two texts as: “This limit ... is that no one has a right to possess something he does not use, regardless of whether or not it spoils in his possession, if his possession of it prevents others who could and would use it from doing so.”
A further refinement of Locke's limit to property would forbid anyone from acquiring so much wealth in any society that he prevents others from acquiring those possessions necessary to live at a “decent” standard of living, given the total resources of society. A decent standard of living would be those possessions and opportunities that would enable each person to live a happy life in that society, and to develop whatever talents and potentialities are compatible with other members of the society. This would justify social welfare legislation such as minimum wages, a redistributive income tax, and unemployment compensation. Furthermore, Locke's theory implies that an employer's profit is just only if it is not so large as to deprive his employees of a decent living wage.
Property is rightful possession in Locke's analysis. From this it might be inferred that we must balance the claims to any man's possessions against the competing claims of fairness and right in social welfare. Two central assumptions here are: the belief that the Lockean right to property means a right to have property (not merely a right to attempt to have property); and a social utility interpretation of what qualifies as legitimate “use” of property.
“Do Entitlements Imply that Taxation is Theft?” Philosophy and Public Affairs 7 (1977): 74–81.
Robert Nozick's argument (in Anarchy, State, and Utopia) that taxation is theft seems erroneous. Contra Nozick, entitlement theory does not imply that it is wrong to forcibly tax wealth beyond the sum necessary to fund the minimal state's enforcement agencies. Marginal productivity theory weighs heavily against Nozick's view.
The argument runs as follows. An efficient allocation of resources under a market price system requires private rather than common property rights. Common property encourages waste because the costs of using a resource are not individually allocated (they are borne in common, by no one in particular). Therefore, the creation of private property generates additional productivity by increasing the efficient use of scarce resources.
Next, without protection associations no agencies would exist to define such private rights, and property would remain held in common. Hence, organized protection agencies generate a scarce resource, the privatization of property, which in turn increases production. These protection agencies, then, are entitled to the surplus produced by the scarce resource that they create. This surplus may legitimately be transferred forcibly by them from some individuals to others needing help.
So, entitlement theory seems to allow for the kind of coercive redistribution that Nozick attempts to argue against. This argument, if valid, justifies far more extensive activity by a judicial and enforcement apparatus than Nozick wishes to concede.
“Women and John Locke; or, Who Owns the Apples in the Garden of Eden?” Canadian Journal of Philosophy 7 (1977): 699–724.
An instructive link unites John Locke's “sexism” with the inconsistencies in his theory of rights. Locke's political theory is sexist in assuming the “natural” superiority of male over female. Without certain assumptions about the relations between the sexes, much of his political theory would be different.
Women, says Locke, are naturally “subjugated” to man's rule, though some gifted women can escape this condition. This subjugation rests apparently on the fact that the male is stronger and that women cannot raise children on their own. Thus women are dependent on men and on marriage.
Noteworthy is Locke's view that the palpable natural differences between men do not entail one man's subjugation to another's rule; only in the case of the husband-wife relationship is superior strength between persons a sign of a right to rule. Thus Locke employs a Hobbesian element in his philosophy to justify male domination of woman.
This inconsistency poses a problem for Locke. His design is to distinguish political authority, characterized by consent, from paternal authority, which defenders of monarchy and patriarchal government justified on the grounds of obedience rather than consent. To work out this distinction, Locke had to modify his position on paternal power in the family. In the Second Treatise he claims that such power is really parental power: the authority of parents over children is shared jointly by both husband and wife. Locke further claims that, contrary to monarchists, the father does not have absolute authority over his children. Authority over children is not entailed by mere fatherhood but rather by accepting such responsibility. This is also the case in government.
The heart of the issue is Locke's focus on justifying the father and mother's equal authority over their children. This focus, however, evades justifying the unequal power a husband has over his wife. Despite Locke's sharp distinction between parental power and the husband's domination of his wife, his awareness of this inconsistency sometimes moves him to insist that the husband-wife relationship is also a voluntary one: marriage is a voluntary contract; the power of a husband over his wife is not unlimited (because of natural right and contract); and both parents have an obligation to care for their offspring.
Locke allowed that marriage could be contractual and that there could be mutuality between husband and wife. But his conviction that women cannot care for their offspring seems inconsistent with this. A male's threat that if women do not sign the marriage contract, they will not have anyone to care for their offspring, might nullify such a contract on Locke's own grounds.
However, all this is secondary to Locke's concern to justify the absolute right of the male to pass on property to his heirs alone. Woman's equal right to dispose of familial property he neither considered nor advocated. This is no minor matter. If men are entitled to the fruits of their labor, then how can women be totally excluded from passing on familial property to any of their heirs? Locke agreed that a woman was entitled after the dissolution of a marriage contract to what she brought into it, but only if she happened to include this in the original contract; women's rights to products of their labors are apparently watered down in a Lockean family. For if they were as entitled to the fruits of their labor as men were, they would not need any contract to insure such fruits. One needs no contract on Lockean grounds for recognition of property rights.
“The New Indian Claims and Original Rights to Land.” Social Theory and Practice 4 (1977): 249–272.
Current Indian tribal claims to their ancestral lands should not be based upon historical land entitlement principles, but rather upon what Robert Nozick has called end-state principles. The justification for this conclusion is to be found in Nozick's version of what he calls the Lockean proviso. By conceding with Locke that property rights ought to be limited in order to recognize the moral priority of human need, Nozick has introduced a competing principle of social justice. If this principle is consistently applied, it undercuts the principles of justice both in acquisition and in transfer and thereby invalidates the whole entitlement basis of rights claims.
Nozick does not allow unlimited liberty in either the initial appropriation or the subsequent transfer of property, but qualifies both by specifying that initial appropriation must not worsen the situation of others. This limitation upon initial acquisition has implications for subsequent transfers. For, if a later acquisition worsens the conditions of some, it does so because of the previous acquisitions of others. Therefore, even current entitlements based upon past just appropriations must bend before Nozick's Lockean proviso. If present holdings are subject to involuntary transfers because of violations of Nozick's Lockean proviso, then inheritance (a type of transgenerational voluntary transfer) should be equally subject to regulation by that proviso. Hence, not entitlement but “need” ought to serve as the basis for current property claims flowing out of past injustices.
One may conclude that past injustices against Native Americans constitute the historical causes of, but not the moral sanction for, present Indian claims. These claims ought to be founded on the morally more significant principle embodied in the Lockean proviso. Finally, all property claims should be systematically regulated by a body of positive law whose foundation is that proviso instead of some set of Lockean natural rights.
In public policy terms, Indians deserve monetary compensation for past violations of the federal government's Indian Nonintercourse Act (1790), which promised security to Native Americans against fraudulent seizure of Indian land. Current Indian land claims should not invoke an original and inheritable right to the land. Rather Indians should claim to be rectifying current inequalities and lack of their fair share of American resources together with social and economic opportunities. Society at large owes the modern Indian tribes a collective debt but not necessarily in the form of land or restored “rights” to property. Property rights are not sacrosanct when they are invoked to defend unjust holdings. They must yield to the moral claims of the needs of humans in the spirit of Locke's proviso.
The article demonstrates the incompatibility of entitlement principles with the so-called Lockean proviso, which is itself an end-state principle; hence the internal inconsistency of Nozickian libertarianism.
“Kant's Theory of Property.” Political Theory 6 (February 1978): 75–90.
Kant's concern with the question of property and its appropriation, as well as his theoretical philosophy, can be understood only if we appreciate his politics. Two forms of appropriation are distinguishable: one form is the theoretical and epistemological, which concerns objects of knowledge; the other is practical and political, concerning objects of the will. Kant's thought, in this perspective, is an attempt to overcome the problem of alienation. The central theme uniting Kant's speculative philosophy and his politics is his perception of man as a stranger who must appropriate and transform a world which is “other” than him and not made for his purposes.
Man unifies the world through what Kant calls a “transcendental unity of apperception,” which, in turn, constructs an a priori act of synthesis. Thus, the world of flux is transformed into a rational order informed by the categories fabricated from our own minds. This theoretical property entails a right to use, but not to possess, objects which elude the grasp of our synthesizing power.
The next issue concerns the practical (or juridical) property—that over which one claims a right of exclusive use. Kant asks the question: how is this juridical possession possible? That is, what practical connection can exist between the human will and an object? He answers that juridical possession, like epistemological possession, requires a transcendental synthesis, but now it entails a unity of wills rather than of apperception. Therefore, this united, or general, will confers on men the right to appropriate. Individual appropriation arises from an a priori transcendental appropriation of the earth by all men as members of the general will.
Private possession of property presupposes an “innate common possession of the earth's soil corresponding to it.” Kant explicitly denies the Lockean situation, in which individual possession of private property precedes the coming together of men to form a contract. Kant justifies private property not on grounds of utility, but of logical necessity.
This interpretation of Kant's theory of property integrates it with the rest of his philosophy and views Kant through a Hegelian perspective.
“Justum pretium: one more round in an ‘endless series.’” History of Political Economy 9 (1977): 504–521.
St. Thomas Aquinas's subtle doctrine of the just price mirrors the tensions between a society of status and a society of contract and exchange. By reconciling its divergent interpretations, we can explicate how Aquinas's just price theory both reflects and perpetuates the inequalities of a hierarchical and status society. Medieval “social welfare” dictated a “fair” allocation of property by respecting each person's unequal social function.
The just price insured that goods and services would be exchanged at prices to guarantee each member of society an income proportionate to his “worth,” that is, with an income that would enable him to fulfill his “naturally” ordained social function. As a reflection of the sociology of knowledge, Aquinas's formulation of the just price was intended to forestall a breakdown of the traditional social structure.
Aquinas achieved a remarkable synthesis of the Christian tradition and Aristotelian teleology in articulating the just price doctrine. Aristotle's perception of the universe as structured and purposeful led Aquinas to explain the value of economic goods in terms of their utility to man. But if goods are valued or priced by their human utility (i.e., by the want-satisfying quality of things and not by the relative social “worth” of the producer), how can exchange at such market prices be reconciled with the income distribution demanded by the social estimate of different individuals' worth and hierarchical status?
A recent but inaccurate neoclassical interpretation would read social or “common estimate” for determining the just price as a reference to the competitive market price or society's valuation of the marginal productivity of the goods in question. A sounder interpretation of Aquinas's just price sees social “worth” or status as determined independently of economic value. Aquinas believed that the just price must be set so as to maintain one's natural social status. Civil society is more than a business venture whose purpose is acquiring wealth; the worth of a person and his share of goods, therefore, should not depend on his contribution to the production of wealth. In a society organized around the purpose of the morally good life of all its members, the worth of each person would be judged not in terms of his contribution to production, but in terms of his social contribution to the life of virtue.
If this is so, what is the procedure that guarantees that while “goods exchange at their just prices, income will be distributed in proportion to the relative dignitas [worth] of society's members?” The answer requires us to understand Aquinas's distinction between commutative and distributive justice. Commutative justice refers to justice in market exchanges and requires that the two parties in an exchange receive equal value. Here the just price depends on utility, labor, costs, and supply and demand, not on the social standing of the exchangers. Distributive justice, however, does require that each member of society receive an income commensurate with his social status. We achieve this not through manipulating the just price of the products which each person produces but through an earlier property distribution. An anterior distribution of resources based on hierarchical social rank provides Aquinas with a means of guaranteeing each person an income according to his status and dignitas and is the first step in devising a just economy and allocation of property.
William Appleman Williams, The Contours of American History, 19.
The idea of a conceptualized, or “symbolic,” event such as the Industrial Revolution, as compared to an actual, or “existential,” event such as the death of Charles I, is taken from Page Smith, Historians and History, 202.
Perez Zagorin, “Theories of Revolution in Contemporary Historiography,” Political Science Quarterly 88 (1973): 28–29.
Zagorin, “Theories of Revolution in Contemporary Historiography.”
Merrill Jensen, The American Revolution Within America. Also see Melvin Richter, “The Uses of Theory: Tocqueville's Adaptation of Montesquieu,” in Richter, ed., Essays in Theory and History, 75; Gene Wise, American Historical Explanations: A Strategy for Grounded Inquiry, 76. A good discussion of the rise of imperial authoritarianism, the decline of historical objectivity, and the intellectuals' scramble for financial support as described by Lucian of Samosata is Chester G. Starr, Civilization and the Caesars: The Intellectual Revolution in the Roman Empire, 259–261.
See Robert A. Nisbet, Social Change and History.
See Vernard Foley, The Social Physics of Adam Smith.
This is discussed in J.G.A. Pocock, The Machiavellian Moment: Florentine Political Thought and the Atlantic Republican Tradition, which also lists other of his important writings on the intellectual currents that influenced the American Revolution.
A good critique of this is Theda Skocpol, “A Critical Review of Barrington Moore's Social Origins of Dictatorship and Democracy,” Politics and Society 4 (Fall 1973): 1–34.
See Thomas S. Kuhn, The Structure of Scientific Revolutions. Cf. John A. Moorhouse, “The Mechanistic Foundations of Economic Analysis,” Reason Papers 4 (Winter 1978): 49–67.
Immanuel Wallerstein, The Modern World-System: Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century, the first of a projected four-volume series. Despite the socialist bias, and the propensity to reify the concept of capitalism, there is enough of value in his work to justify the ferment it created in sociology. Wallerstein devotes great attention to the institution of the State. But in the end his Marxian outlook prevents him from acknowledging the State as the most significant variable.
Jensen, Within America, 2.
See, for example, Dale Yoder, “Current Definitions of Revolution,” American Journal of Sociology 32 (November 1926): 433–441.
A number of writers agree that certain preliminary circumstances are preconditions for any revolution. Revolutions have tended to occur not in impoverished and retrogressive societies, but rather in those societies where significant advances had been under way. Though the terminology differs, the concepts are similar. Edwards refers to the “balked disposition”; Crane Brinton describes those who felt their situation “cramped”; James C. Davies posits a “J-curve,” a growing gap between expectations and results; and Ted Gurr speaks of relative deprivation. All derive from social psychology concepts of frustration-aggression. J.C. Davies, “Toward a Theory of Revolution,” American Sociological Review 27 (February 1962): 5–19; Ted Robert Gurr, Why Men Rebel. The ancients were also aware that rapid change caused instability. In this regard, Aristotle made clear that a widely based middle class was the greatest impediment to revolution. Despite all the “modern” theorizing, Aristotle's Politics, Book V, wherein he discusses revolution, is still well worth reading. Yet, however insightful the frustration-aggression thesis seems, by itself this concept is too broad and general to be useful in understanding revolution.
Brinton, The Anatomy of Revolution, 44–52.
Edwards' understanding of the revolutionary process appears more subtle than that of the more famous work by Brinton. Brinton lost an important idea when he changed one of Edwards' key points, the “transfer of the allegiance of the intellectuals,” to the “desertion of the intellectuals.” “Transfer of allegiance,” however, implies a loss of legitimacy or legality which far transcends the notion of mere support as a kind of cooperation.
Karl Deutsch indicated some years ago that he had a study of legitimacy in progress. See also Claus Mueller, The Politics of Communication: A Study in the Political Sociology of Language, Socialization, and Legitimation; and Ronald Rogowski, Rational Legitimacy: A Theory of Political Support.
Kuhn, Structure.
See, especially, Karl Mannheim, Ideology and Utopia: An Introduction to the Sociology of Knowledge.
William Marina, Egalitarianism and Empire, suggests three sources of values: supernaturalism, natural law, and statist, positive law.
Kuhn, Structure, 10. Kuhn began with a discussion of “normal science,” which he defined as “research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice.” This “body of accepted theory...served for a time implicitly to define the legitimate [emphasis added] problems and methods of a research field for succeeding generations of practitioners.” He concluded that “Men whose research is based on shared paradigms are committed to the same rules and standards for scientific practice. That commitment and the apparent consensus it produces are prerequisites for normal science, i.e., the genesis and continuation of a particular research tradition.” Cf. Murray Rothbard, “Ludwig von Mises and the Paradigm for our Age,” Modern Age (Fall 1971).
One is reminded of the marvelous symbol of authority, the conch shell, in William Golding's forceful study, Lord of the Flies.
Washington Post, April 12, 1966.
“'Ideology' and an Economic Interpretation of the American Revolution,” in The American Revolution: Explorations in the History of American Radicalism, 159–185. See also T.F. Carney, The Shape of the Past: Models and Antiquity.
Thomas C. Barrow, “The American Revolution as a Colonial War for Independence,” William and Mary Quarterly, 3d series, 25 (1968): 452–464.
This outlook which permeates so many of the writings and correspondence of the revolutionary generation is captured in the title of John A. Schutz and Douglass Adair, eds., The Spur of Fame: Dialogues of John Adams and Benjamin Rush, 1805–1813.
J.R. Pole's “Loyalists, Whigs, and the Idea of Equality,” in Esmond Wright, ed., A Tug of Loyalties: Anglo-American Relations, 1765–1785, 66–92; and Pole's B.K. Smith Lecture, Social Radicalism and the Idea of Equality in the American Revolution. Of the recent writings on the idea of equality, perhaps the most important, certainly the one with the most complete bibliography, is Herbert J. Gans, More Equality, though my own model and the direction of my thought are quite different from Gans's.
My essay can profitably be read in conjunction with the bibliographical essay of Professor Murray Rothbard published in the first issue of the Literature of Liberty. I hope soon to publish an expanded version of these observations on revolution and change in relation to the American Revolution, to be entitled, The American Revolution as a People's War: A Refutation of the Widely-Held Minority Myth, and Some Reflections on the Revolution from the Perspective of the Sociology of Revolution and a Theory of Social Change in an Age of Continuing Upheaval.
Quoted in Alfred Cobban, New Cambridge Modern History, Vol. 7, 102.
Bernard Bailyn, The Ideological Origins of the American Revolution.
David Jacobson, The English Libertarian Heritage, Introduction; Clinton Rossiter, Seedtime of the Republic; Carl Degler, Out of Our Past: The Forces that Shaped Modern America; Eric Foner, Tom Paine and Revolutionary America.
See, for example, Trenchard and Gordon's essay, “Of the Equality and Inequality of Men,” written in 1721, and reprinted in Jacobson, Heritage, 101–106.
Discussed in Robert G. Wesson, State Systems: International Pluralism in History, forthcoming.
Pocock, Machiavellian, 156, 191.
Pocock, 194, 208.
See especially his discussions of social tensions in Chapter 2, “The Parchment and the Fire”; of mobility and freedom in Chapter 3, “Masterless Men,” as well as that of the relationship between the Levellers and the Army; the distinction between “Levellers and True Levellers” in Chapter 7; the reaction in Chapter 17, “The World Restored”; the conclusion, Chapter 18; and Appendices 1 and 2: “Hobbes and Winstanley: Reason and Politics” and “Milton and Bunyan: Dialogue with the Radicals.” Given these parallels with the English Revolution, it was perceptive and appropriate that the English military band at the Yorktown surrender in 1781 should play “The World Turned Upside Down.” Also see Perez Zagorin, The Court and the Country.
Caroline Robbins, The Eighteenth-Century Commonwealthman: Studies in the Transmission, Development and Circumstance of English Liberal Thought from the Restoration of Charles II until the War with the Thirteen Colonies. Also see J.P. Kenyon, Revolution Principles, 1689–1720, especially 102–127.
Cf. Pocock, Machiavellian, 424, 426.
Forrest McDonald, The Phaeton Ride: The Crisis of American Success, especially the first part of Chapter 2, “The Populists and the Predators.”
Rodger Durrell Parker, “The Gospel of Opposition: A Study in Eighteenth Century Anglo-American Ideology,” doctoral dissertation, Wayne State University, 1975, University Microfilm publication 76–10, 990.
See, for example, Carroll Quigley, The Evolution of Civilizations; and, on China, Mark Elvin, The Pattern of the Chinese Past: A Social and Economic Interpretation.
The most obvious example is, of course, Adam Smith. Another is Tom Paine. Both favored the Financial Revolution but not State interference.
Pocock, Machiavellian, 210–211, 391–399, 468–469. The less extreme version of this idea in Harrington and in Trenchard and Gordon “had in mind not so much a leveling of property as 'an agrarian law, or something like it' to ensure that no individual or group became so rich as to reduce the others to dependence.” Pocock, Machiavellian, 468, and quoting from Cato's Letters. The “something” indicates how far the Commonwealthmen were from any worked-out plan or agreement about how to deal with extremes of wealth in their republican conceptualization, whether agrarian or commercial.
As Pocock observes:
“We have already seen that neither [Andrew] Fletcher nor [Daniel] Defoe operated in terms of a simple opposition between land and trade—which should warn us against expecting Augustan politics to look like a simple confrontation between gentleman and merchant—but that each indicates in opposite ways the difficulties of constructing a fully legitimized history out of the movement from one principle to the other.”
Unlike McDonald or Parker, who place Charles Davenant in the Country camp, Pocock appreciates the subtlety of shifting positions and the relationship of all of this to statism and war: “Davenant, more than Fletcher, [John] Toland, or (at this time) Trenchard, was engrossed in the problem of war's ability to generate corrupting forms of finance; and while a major significance of his thought to us is that he looked beyond the problem of trade to that of credit, he did so in the context provided by war.” Pocock, Machiavellian, 436–437.
See, again, Pocock, Machiavellian, especially Chapter 12, “The Anglicization of the Republic: B) Court, Country and Standing Army”; Chapter 13, “Neo-Machiavellian Political Economy: The Augustan Debate over Land, Trade and Credit”; and Chapter 14, “The Eighteenth Century Debate: Virtue, Passion and Commerce.” One is reminded of W.A. Williams's comment that Charles A. Beard was “almost” a socialist—a very wide gap indeed.
In Kurtz and Hutson, Essays on the American Revolution, 256–288.
Berthoff and Murrin, “Feudalism,” 257. The reference is to Jameson, Social Movement, and Frederick B. Tolles, “The American Revolution Considered as a Social Movement: A Reevaluation,” American Historical Review 60 (1954–1955): 1–12. Also see Thomas C. Barrow, “The American Revolution as a Colonial War for Independence,” William and Mary Quarterly 3d ser. 25 (1968): 464, quoted in Berthoff and Murrin, “Feudalism,” 259.
Berthoff and Murrin, 258, quoting Gordon S. Wood, “Rhetoric and Reality in the American Revolution,” William and Mary Quarterly 3d ser. 23 (1966): 31.
Berthoff and Murrin, 261.
Berthoff and Murrin, 262–263.
Berthoff and Murrin, 264–265. Another who takes this view of the importance of feudalism in the coming of the Revolution is Robert A. Nisbet, The Social Impact of the Revolution. If I had to recommend a single selection about the meaning of the American Revolution, I believe I would choose Nisbet's perceptive little twenty-three page pamphlet. He advocates a comparative approach, and in arguing it was a real social revolution against feudalism, makes the following points:
“More than any other type of social organization, feudalism seems not only to invite but to succumb to revolution.... because it virtually consecrates inequality—the prime cause of revolution everywhere, as Tocqueville pointed out—and...succumbs rather easily because of its seeming inability to command wide loyalties.... [A]ll the revolutions of modern history have been those launched against systems more nearly feudal than capitalist.” (p. 3).
Nisbet suggests there might have been no social revolution “without a precipitating war in which ideological values were strong.” War has accompanied each of the great revolutions, and “[t]he link between war and revolution is both existentially and historically close” (p. 9). Among the revolutionary changes he sees are: the relation between land and the family (primogeniture and entail) across thirteen separate colonies, confiscation of estates, religious freedom, and some change in attitudes toward slavery (pp. 10–16).
In proclaiming the American Revolution in every way a true social revolution, Nisbet thinks we err in making terror the “touchstone of revolution”: for “[t]o deny the status of revolution because of the absence of these qualities is like denying the status of war because of the absence of atrocities.” It was hardly a local affair, and again we err if we “ignore the libertarian currents that the event set off throughout the world” (p. 23).
Berthoff and Murrin, 266–267. Some years ago, Herbert Aptheker's The American Revolution mentioned the rapidly growing sums of quit-rents in the years just prior to the Revolution. Tocqueville was the first to point to this relationship in what might really be called a pseudofeudalism. This kind of reactionary statism has almost nothing to do with market capitalism, and as Berthoff and Murrin note, “After 50 years of attempts to interpret the French Revolution in terms of a clash between a feudal and capitalistic order, many historians are now moving quite decisively back toward Tocqueville.”
“Violence and the American Revolution” in Kurtz and Hutson, Essays, 81–120, and Alfred F. Young, The Democratic Republicans of New York: The Origins, 1763–1797.
Berthoff and Murrin, 274.
Berthoff and Murrin, 274.
Berthoff and Murrin, 274–275. A recent, excellent study on the period after 1775 is Robert A. Gross, The Minutemen and Their World, a careful analysis of Concord during the War.
Berthoff and Murrin, 281.
The Social Structure of Revolutionary America, 286, cited in Berthoff and Murrin, 280.
“The Social Origins of the American Revolution: An Evaluation and an Interpretation,” Political Science Quarterly 87 (1973): 1–22; Kenneth A. Lockridge, “Social Change and the Meaning of the American Revolution,” Journal of Social History 6 (1973): 403–439, which outlines a number of points similar to Berthoff and Murrin.
In G.H. Guttridge, “Adam Smith on the American Revolution: an Unpublished Memorial,” American Historical Review 38 (1933): 714–720.
See Richard Maxwell Brown, “Violence and the American Revolution,” in Kurtz and Hutson, Essays, 81–120, and the numerous bibliographical items noted therein. Also awaited is publication of Alfred Young's study of the radical political uses of traditional Boston carnivals and parades.
See Gary B. Nash, “Social Change and the Growth of Prerevolutionary Urban Radicalism”; Edward Countryman, “'Out of the Bounds of the Law': Northern Land Rioters in the Eighteenth Century”; Marvin L. Michael Kay, “The North Carolina Regulation, 1766–1776: A Class Conflict”; Dirk Hoerder, “Boston Leaders and Boston Crowds, 1765–1776”; and Ronald Hoffman, “The 'Disaffected' in the Revolutionary South,” all in Alfred F. Young, ed., The American Revolution: Explorations in the History of American Radicalism.
Edmund and Helen Morgan, The Stamp Act Crisis: Prologue to Revolution.
In Murray N. Rothbard, Advance to Revolution, 1760–1775, Vol. III of Conceived in Liberty, 90.
See, for example, Lawrence H. Gipson, Jared Ingersoll; Alan Rogers, Empire and Liberty: American Resistance to British Authority, 1755–1763; Schlesinger, The Colonial Merchants and the American Revolution, 1763–1776; and Jensen, Founding.
Jensen, Founding; Richard D. Brown, Revolutionary Politics in Massachusetts: The Boston Committee of Correspondence and the Towns, 1772–1774; and J.R. Pole, Political Representation in England and the Origins of the American Republic.
Guttridge, “Smith.”
Pole, Equality, Chapter 2.
The Age of the Democratic Revolution, Vol. I, 185–190.
The best interpretation of this process over the whole revolutionary era is Merrill Jensen, The American Revolution Within America. See also Library of Congress, Leadership in the American Revolution, papers presented at a Symposium.
There has been of late a considerable literature on the Loyalists; perhaps the best (with a very complete bibliographical essay) is Robert McClure Calhoon, The Loyalists in Revolutionary America, 1760–1781.
Quoted in Ferdinand E. Banks, Scarcity, Energy, and Economic Progress, xvii.
Lewis H. Gann, Guerrillas in History, 92.
James W. Pohl, “The American Revolution and the Vietnamese War: Pertinent Military Analogies,” The History Teacher 7 (February 1974): 259.
See, for example, David V.I. Bell and Allan E. Goodman, “Vietnam and the American Revolution,” Yale Review 61 (Fall 1971): 26–34; Roy K. Flint, “The Web of Victory: Revolutionary Warfare in Eighteenth Century America” (West Point: mimeograph, 1976); and the following by John Shy: “The American Revolution: The Military Conflict Considered as a Revolutionary War,” in Kurtz and Hutson, Essays, 121–156, also reprinted in Shy's A People Numerous and Armed: Reflections on the Military Struggle for American Independence, in which several essays reflect the influence of Vietnam; Shy, “The American Revolution Today,” in Stanley J. Unterdal, ed., Military History of the American Revolution, 18–32, especially 21; and Shy, “Charles Lee: The Soldier as Radical,” in George Athan Billias, ed., George Washington's Generals, 22–53.
Washington, himself, used the term “protract,” and Hamilton understood the same tactic of keeping an army in the field, avoiding a direct confrontation except on one's own terms, and harassing the enemy piecemeal. This is discussed in William Marina, “The American Revolution and the Minority Myth,” Modern Age 20 (Summer 1976): 298–309; and William Marina, “The American Revolution as a People's War,” Reason 8 (July 1976): 28–38.
Smith, New Age, passim.
Jonathan Gregory Rossie, The Politics of Command in the American Revolution. Rossie mentions that his interest in the subject was inspired by Bernard Knollenberg's Washington and the Revolution: A Reappraisal, published some 35 years earlier. Bernhard A. Uhlendorf, translator and annotator, Revolution in America: Confidential Letters and Journals, 1776–1784, of Adjutant General Major Baurmeister of the Hessian Forces, especially 146. Marion Balderston and David Syrett, The Lost War: Letters from British Officers During the American Revolution. John Shy, “Hearts and Minds in the American Revolution: The Case of 'Long Bill' Scott and Peterborough, New Hampshire,” in Shy, People, 168. On this motive in Vietnam, going back to the French period and the breakup of the integrity of village life, see the works of the French sociologist Paul Mus, Frances Fitzgerald, and also John T. McAlister, Jr., Vietnam: Origins of the Revolution. See also Larry G. Bowman, Captive Americans: Prisoners during the American Revolution.
See also Lois F. Schwoerer, “The Literature of the Standing Army Controversy,” Huntington Library Quarterly 28 (1964–1965): 187–212.
Richard H. Kohn, “The Murder of the Militia System in the Aftermath of the American Revolution,” in Unterdal, Military History, 110–126; and Kohn, Eagle and Sword: The Federalists and the Creation of the Military Establishment in America, 1783–1802. As noted earlier, that fear of standing armies as inherently opposed to republicanism went back through Harrington and Machiavelli (himself a militia organizer) to Roman historians such as Tacitus. See Pocock, Machiavellian Moment, passim.
Smith, New Age, I, 131–132.
Solomon Lutnick, The American Revolution and the British Press, 1775–1783, 124–125; and Smith, New Age, II, 1068–1074.
The episode of the Carlisle Peace Commission might, in some ways, be considered the first “credibility gap” in American history. Up to that point, one cannot but be struck by the extent to which action and dialogue in the American revolutionary coalition (despite the fact that it is, after all, the function of leaders to lead) had an enormously grass-roots quality. As writers such as Knollenberg and Jensen note, the radicalism of the populace sometimes outran the leadership. In a sense, 1778 was a turning point, for, having established the legitimacy of the Revolutionary consensus around independence, the leadership now demonstrated less willingness to discuss specific alternatives which would require sacrifice for goals beyond this basic consensus.
Shy, “Military Conflict,” in People, 216–217.
Despite a rather cool assessment by Shy, I find the Leiby volume a gold mine of information about the dynamics of revolutionary war in a contested area. A twenty-page case-study summary is in William Marina, The American Revolution as a People's War, forthcoming.
Goetschius understood that such irregular forces fought best in defending their home area.
John Ellis, Armies in Revolution, 170; and Carroll Quigley notes:
The hope of the future does not rest, as commonly believed, in winning the peoples of the “buffer fringe” to one superpower or the other, but rather in the invention of new weapons and new tactics that will be so cheap to obtain and so easy to use that they will increase the effectiveness of guerrilla warfare so greatly that the employment of our present weapons of mass destruction will become futile and, on this basis, there can be a revival of democracy and of political decentralization in all three parts of our present world.
The Evolution of Civilizations, 259.
Quoted in Herbert Aptheker, Early Years of the Republic: From the End of the Revolution to the First Administration of Washington, (1783–1793), Vol. III of A History of the American People, 14.
Lee Benson, Turner & Beard: American Historical Writing Reconsidered, 215; Charles A. Beard, An Economic Interpretation of the Constitution of the United States.
Benson, 219–220, 221, 217.
Benson, 227.
See Moore, Social Origins, for a good discussion of Catonism.
Wood, Creation, 70.
Wood, Creation, 70–71.
Wood, Creation, 72. Wood comments further: “By the middle of the eighteenth century the peculiarities of social development in the New World had created an extraordinary society, remarkably equal yet simultaneously unequal, a society so contradictory in its nature that it left contemporaries puzzled and later historians divided. [Wood cites, for example, Jackson Turner Main, The Social Structure of Revolutionary America; and Robert E. and B. Katherine Brown, Virginia, 1705–1786: Democracy or Aristocracy?] It was, as many observers noted, a society strangely in conflict with itself. On one hand, social distinctions and symbols of status were highly respected and intensely coveted, indeed, said one witness, even more greedily than by the English themselves. Americans, it seemed, were in ‘one continued Race: in which everyone is endeavoring to distance all behind him; and to overtake or pass by, all before him.’ Yet, on the other hand, Americans found all these displays of superiority of status particularly detestable, in fact ‘more odious than in any other country.’” Had Wood studied comparative civilizations, he would not have found this such an “extraordinary” phenomenon. It is characteristic of the expansionistic phase of any civilization, especially with respect to frontier areas.
Gordon S. Wood, “Rhetoric and Reality in the American Revolution,” William and Mary Quarterly, 3d ser. 23 (1966), which discusses especially Virginia; and Berthoff and Murrin, “Feudalism,” examined at length above.
Wood, Creation, 79. The efforts of several “neo-conservatives” to eliminate the social tensions and ambiguities of equality/egalitarianism, and to create a consensus view of the American past, are implausible. Irving Kristol and Martin Diamond give the impression that egalitarianism was not present in the era of the Founding Fathers, who are portrayed as having reached a virtual agreement around a conservative Lockean view of political equality. See, for example, Martin Diamond, “The Idea of Equality: The View from the Founding,” in Walter Nicgorski and Ronald Weber, eds., An Almost Chosen People: The Moral Aspirations of Americans, 19–37.
For a critique of some of Jensen's earlier views, see Richard Morris, The American Revolution Reconsidered, especially the chapter on “Confederation and Constitution.”
William and Mary Quarterly 3d ser. 29 (January 1972): 49–80.
In Burton J. Williams, ed., Essays in Honor of James C. Malin, 192–220.
I hope to deal with this interpretation in much greater detail in The American Revolution as a People's War, forthcoming.
Jensen, Within America, 193.
Pole, Equality, 112–113, points out that under the Articles the retention of “local preferences” meant that there was not equality for all citizens of the United States. Only a Constitution would guarantee the search for national institutions and identity. It is interesting that the areas of the coast and frontier that went heavily for the Constitution, as described in Jackson Turner Main, The Anti-Federalists: Critics of the Constitution, 1781–1788, were the same areas that Nelson, Tory, notes as the bastions of Loyalist strength. One suspects a large number of votes for the Constitution came from those formerly of Tory sympathy.
A model, useful for developing further the distinction between Locals and Cosmopolitans, is Jackson Turner Main, Political Parties Before the Constitution. Just one piece of evidence can be cited to show that Locals were not necessarily for small government: they tended to favor increasing the salaries of officials. This fits in with the notion of “new” men who saw expanding local and state government as a means for advancement.